September 17, 2014

A follow-up to yesterday's post about what's new in Videos for 3.14
The more astute (or Wayland testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

September 16, 2014

ACPI, kernels and contracts with firmware
ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.
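As an aside, you can experiment with this on a given machine via the kernel's acpi_osi= command line parameter, which controls which strings the kernel answers "yes" to when the firmware calls _OSI. A hedged example (the specific strings below are only illustrative):

acpi_osi=! acpi_osi="Windows 2009"    # drop the built-in list and claim only an older Windows vintage
acpi_osi="Windows 2013"               # additionally claim support for a newer Windows release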

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump it whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.

A Wayland status update

It has been a while since I last wrote about the GNOME Wayland port; time for another status update on GNOME 3.14 on Wayland.


So, what have we achieved this cycle?

  • Keyboard layouts are supported
  • Drag-and-Drop works (with limitations)
  • Touch is supported

The list of applications that work ‘natively’ (i.e. with the GTK+ Wayland backend) is looking pretty good, too. The main straggler here is totem, where we are debugging some issues with the use of subsurfaces in clutter-gtk.

We are homing in on ‘day-to-day usable’. I would love to say the Wayland session is “rock-solid”, but I just spent an hour trying to track down an ugly memory leak that ended my session rather quickly. So, we are not quite there yet, and more work is needed.

If you are interested in helping us complete the port and take advantage of Wayland going forward, there is an opening in the desktop team at Red Hat for a Wayland developer.

Update: After a bit of collective head-scratching, Jasper fixed the memory leak here.

Videos 3.14 features
We've added a few, but nonetheless interesting, features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnail generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without having to browse them in Files. This makes the first experience of videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on Grilo plugins to show information about the films and TV series on your computer, such as grouping episodes of a series together, and showing genres, covers and synopses for films.

With a bit of luck, we should also be able to provide you with more video content as well, through partners.
Master Document Templates
Writer has long had Master Documents. A master document lets you manage large documents, such as a book with many chapters: one odt per chapter, bundled into a single document via a master odm.

LibreOffice 4.4 introduces Master Document Templates. These can be added to the Template Manager, and from there you can create a new Master Document (odm) based on a Master Document Template (otm). The new odm of course has the same initial content as the Template it is based upon.

Thanks to Máirín Duffy (of Red Hat, Inc.) for prompting this feature. Any failures in execution are mine however.

September 13, 2014

More Font Support
Playing around with some Mac OS X fonts under Linux, I noticed that LibreOffice wasn't listing a lot of them despite fontconfig announcing their existence. A little digging and some very small tweaks mean that we now have support for Mac TTF font-name encodings, along with support for version 2 TTC fonts. This is more a case of fixing some oversights (version 2 of TTC came into existence after the TTC support was added, so there was an "only if version is 1" condition) than implementing anything particularly new, but LibreOffice under Linux now works with a lot more ttf/ttc/otf fonts than it did before.
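If you want to double-check what fontconfig itself reports for a given font file, the standard fontconfig tools are enough; the path and family name below are just placeholders:

$ fc-scan --format '%{family}: %{fontformat}\n' ~/.fonts/SomeMacFont.ttc
$ fc-list | grep -i "some mac font"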

September 09, 2014

change image option in writer context menu

Change Image in Writer

Thanks to Jennifer Liebel, Writer now has a "change image" option in its graphic context menu, like Impress/Draw.

September 03, 2014

aarch64 libreoffice

Thanks to Stephan Bergmann of Red Hat, Inc., LibreOffice is now ported to aarch64. No new ports for years, and then two 64-bit ports, aarch64 and ppc64le, landed within a week of each other.

August 31, 2014

Revisiting How We Put Together Linux Systems

In a previous blog story I discussed Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems. I now want to take the opportunity to explain a bit where we want to take this with systemd in the longer run, and what we want to build out of it. This is going to be a longer story, so better grab a cold bottle of Club Mate before you start reading.

Traditional Linux distributions are built around packaging systems like RPM or dpkg, and an organization model where upstream developers and downstream packagers are relatively clearly separated: an upstream developer writes code, and puts it somewhere online, in a tarball. A packager then grabs it and turns it into RPMs/DEBs. The user then grabs these RPMs/DEBs and installs them locally on the system. For a variety of uses this is a fantastic scheme: users have a large selection of readily packaged software available, in mostly uniform packaging, from a single source they can trust. In this scheme the distribution vets all software it packages, and as long as the user trusts the distribution all should be good. The distribution takes on the responsibility of ensuring the software is not malicious, of fixing security problems in a timely manner, and of helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn't fit many use-cases of our software particularly well. Let's have a look at the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream distributions to package their stuff. It's the downstream distribution that decides on schedules, packaging details, and how to handle support. Often upstream vendors want much faster release cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to impossible. Since the end-user can run a variety of different package versions together, and expects the software he runs to just work on any combination, the test matrix explodes. If upstream tests its version on distribution X release Y, then there's no guarantee that that's the precise combination of packages that the end user will eventually run. In fact, it is very unlikely that the end user will, since most distributions probably updated a number of libraries the package relies on by the time the package ends up being made available to the user. The fact that each package can be individually updated by the user, and each user can combine library versions, plug-ins and executables relatively freely, results in a high risk of something going wrong.

  • Since there are so many different distributions in so many different versions around, if upstream tries to build and test software for them it needs to do so for a large number of distributions, which is a massive effort.

  • The distributions are actually quite different in many ways. In fact, they are different in a lot of the most basic functionality. For example, the path where x86-64 libraries are installed differs between Fedora and Debian derived systems.

  • Developing software for a number of distributions and versions is hard: if you want to do it, you need to actually install them, each one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and trademark requirements (and rightly so), any kind of closed source software (or otherwise non-free) does not fit into this scheme at all.

This all together makes it really hard for many upstreams to work nicely with the current way Linux works. Often they try to improve the situation for themselves, for example by bundling libraries, to make their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for people who want to put together their individual system, nicely adjusted to exactly what they need. However, this is not really how many of today's Linux systems are built, installed or updated. If you build any kind of embedded device, a server system, or even user systems, you frequently do your work based on complete system images that are linearly versioned. You build these images somewhere, and then you replicate them atomically to a larger number of systems. On these systems, you don't install or remove packages; you get a defined set of files, and besides installing or updating the system there is no way to change the set of tools you get.

The current Linux distributions are not particularly good at providing for this major use-case of Linux. Their strict focus on individual packages as well as package managers as end-user install and update tool is incompatible with what many system vendors want.

Users

The classic Linux distribution scheme is frequently not what end users want, either. Many users are used to app markets like the ones Android, Windows or iOS/Mac have. Markets are a platform that doesn't package, build or maintain software like distributions do, but simply allows users to quickly find and download the software they need, with the app vendor responsible for keeping the app updated, secured, and all that on the vendor's release cycle. Users tend to be impatient. They want their software quickly, and the fine distinction between trusting a single distribution or a myriad of app developers individually is usually not important for them. The companies behind the marketplaces usually try to improve this trust problem by providing sand-boxing technologies: as a replacement for the distribution that audits, vets, builds and packages the software and thus allows users to trust it to a certain level, these vendors try to find technical solutions to ensure that the software they offer for download can't be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are sometimes quite successful attempts to do something about it. Ubuntu Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of this problem set, usually with a strict focus on one facet of Linux systems. For example, Ubuntu Apps focus strictly on end user (desktop) applications, and don't care about how we build/update/install the OS itself, or containers. Docker OTOH focuses on containers only, and doesn't care about end-user apps. Software Collections tries to focus on the development environments. ChromeOS focuses on the OS itself, but only for end-user devices. CoreOS also focuses on the OS, but only for server systems.

The approaches they take are usually good at specific things, and use a variety of different technologies, on different layers. However, none of these projects tried to fix these problems in a generic way, for all uses, right in the core components of the OS itself.

Linux has achieved tremendous success because its kernel is so generic: you can build supercomputers and tiny embedded devices out of it. It's time we came up with a basic, reusable scheme for solving the problem set described above that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom Gundersen, David Herrmann, and yours truly) recently met in Berlin to discuss all these things, and tried to come up with a scheme that is somewhat simple, but tries to solve the issues generically, for all use-cases, as part of the systemd project. All that in a way that is somewhat compatible with the current scheme of distributions, to allow a slow, gradual adoption. Also, and that's something one cannot stress enough: the toolbox scheme of classic Linux distributions is actually a good one, and for many cases the right one. However, we need to make sure we make distributions relevant again for all use-cases, not just those of highly individualized systems.

Anyway, so let's summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their software (regardless of whether it is just an app or the whole OS) directly for the end user, and know the precise combination of libraries and packages it will operate with.

  • We want to allow end users and administrators to install these packages on their systems, regardless which distribution they have installed on it.

  • We want a unified solution that ultimately can cover updates for full systems, OS containers, end user apps, programming ABIs, and more. These updates shall be double-buffered, (at least). This is an absolute necessity if we want to prepare the ground for operating systems that manage themselves, that can update safely without administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a fully trustable OS, with images that can be verified by a full trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the kernel, and initrd. Cryptographically secure verification of the code we execute is relevant on the desktop (like ChromeOS does), but also for apps, for embedded devices and even on servers (in a post-Snowden world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So, now, let's discuss the technical bits we came up with:

The scheme we propose is built around a variety of concepts from btrfs and Linux file system name-spacing. btrfs at this point already has a large number of features that fit neatly into our concept, and the maintainers are busy working on a couple of others we want to eventually make use of.

As the first part of our proposal we make heavy use of btrfs sub-volumes and introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> -- This refers to a full vendor operating system tree. It's basically a /usr tree (and no other directories), in a specific version, with everything you need to boot it up inside it. The <vendorid> field is replaced by some vendor identifier, maybe a scheme like org.fedoraproject.FedoraWorkstation. The <architecture> field specifies a CPU architecture the OS is designed for, for example x86-64. The <version> field specifies a specific OS version, for example 23.4. An example sub-volume name could hence look like this: usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4

  • root:<name>:<vendorid>:<architecture> -- This refers to an instance of an operating system. It's basically a root directory, containing primarily /etc and /var (but possibly more). Sub-volumes of this type do not contain a populated /usr tree though. The <name> field refers to some instance name (maybe the host name of the instance). The other fields are defined as above. An example sub-volume name is root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> -- This refers to a vendor runtime. A runtime here is supposed to be a set of libraries and other resources that are needed to run apps (for the concept of apps see below), all in a /usr tree. In this regard this is very similar to the usr sub-volumes explained above, however, while a usr sub-volume is a full OS and contains everything necessary to boot, a runtime is really only a set of libraries. You cannot boot it, but you can run apps with it. An example sub-volume name is: runtime:org.gnome.GNOME3_20:x86_64:3.20.1

  • framework:<vendorid>:<architecture>:<version> -- This is very similar to a vendor runtime, as described above, it contains just a /usr tree, but goes one step further: it additionally contains all development headers, compilers and build tools, that allow developing against a specific runtime. For each runtime there should be a framework. When you develop against a specific framework in a specific architecture, then the resulting app will be compatible with the runtime of the same vendor ID and architecture. Example: framework:org.gnome.GNOME3_20:x86_64:3.20.1

  • app:<vendorid>:<runtime>:<architecture>:<version> -- This encapsulates an application bundle. It contains a tree that at runtime is mounted to /opt/<vendorid>, and contains all the application's resources. The <vendorid> could be a string like org.libreoffice.LibreOffice, the <runtime> field refers to the vendor id of one specific runtime the application is built for, for example org.gnome.GNOME3_20:3.20.1. The <architecture> and <version> fields refer to the architecture the application is built for, and of course its version. Example: app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133

  • home:<user>:<uid>:<gid> -- This sub-volume shall refer to the home directory of the specific user. The <user> field contains the user name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs of the user. The idea here is that in the long run the list of sub-volumes is sufficient as a user database (but see below). Example: home:lennart:1000:1000.

btrfs partitions that adhere to this naming scheme should be clearly identifiable. It is our intention to introduce a new GPT partition type ID for this.
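To make the naming scheme a bit more tangible, here is a minimal sketch of creating a few such sub-volumes by hand, assuming the btrfs volume is mounted at /mnt/sysimage (the device, mount point and version numbers are made up for illustration):

$ mount /dev/sda2 /mnt/sysimage
$ btrfs subvolume create "/mnt/sysimage/usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"
$ btrfs subvolume create "/mnt/sysimage/root:revolution:org.fedoraproject.FedoraWorkstation:x86_64"
$ btrfs subvolume create "/mnt/sysimage/home:lennart:1000:1000"
$ btrfs subvolume list /mnt/sysimage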

How To Use It

Now that we have introduced this naming scheme, let's see what we can build out of it:

  • When booting up a system we mount the root directory from one of the root sub-volumes, and then mount /usr from a matching usr sub-volume. Matching here means it carries the same <vendorid> and <architecture>. Of course, by default we should pick the matching usr sub-volume with the newest version.

  • When we boot up an OS container, we do exactly the same as when we boot up a regular system: we simply combine a usr sub-volume with a root sub-volume.

  • When we enumerate the system's users we simply go through the list of home snapshots.

  • When a user authenticates and logs in we mount his home directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the app sub-volume to /opt/<vendorid>/, and the appropriate runtime sub-volume the app picked to /usr, as well as the user's /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he installs the right framework, and then temporarily transitions into a name space where /usr is mounted from the framework sub-volume, and /home/$USER from his own home directory. In this name space he then runs his build commands. He can build in multiple name spaces at the same time, if he intends to build software for multiple runtimes or architectures at the same time.

Instantiating a new system or OS container (which is exactly the same in this scheme) just consists of creating a new appropriately named root sub-volume. Completely naturally you can share one vendor OS copy in one specific version with a multitude of container instances.
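As a rough illustration of the first two points above, booting a system or container under this scheme boils down to two mounts, and instantiating a new container is a single sub-volume creation. Device names, mount points and versions here are assumptions on my part, and /mnt/sysimage is the volume's top level as in the earlier sketch:

$ mount -o subvol="root:revolution:org.fedoraproject.FedoraWorkstation:x86_64" /dev/sda2 /sysroot
$ mount -o subvol="usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4" /dev/sda2 /sysroot/usr
# instantiating another system or container is just one more root sub-volume
$ btrfs subvolume create "/mnt/sysimage/root:testmachine:org.fedoraproject.FedoraWorkstation:x86_64"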

Everything is double-buffered (or actually, n-fold-buffered), because usr, runtime, framework, app sub-volumes can exist in multiple versions. Of course, by default the execution logic should always pick the newest release of each sub-volume, but it is up to the user to keep multiple versions around, and possibly execute older versions, if he desires to do so. In fact, like on ChromeOS this could even be handled automatically: if a system fails to boot with a newer snapshot, the boot loader can automatically revert back to an older version of the OS.

An Example

Note that as a result this allows installing not only multiple end-user applications into the same btrfs volume, but also multiple operating systems, multiple system instances, multiple runtimes, multiple frameworks. Or to spell this out in an example:

Let's say Fedora, Mageia and ArchLinux all implement this scheme, and provide ready-made end-user images. Also, the GNOME, KDE, SDL projects all define a runtime+framework to develop against. Finally, both LibreOffice and Firefox provide their stuff according to this scheme. You can now trivially install all of these into the same btrfs volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:org.mageia.Client:i386:39.3
  • usr:org.mageia.Client:i386:39.4
  • usr:org.mageia.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:org.mageia.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.1
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.4
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
  • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
  • runtime:org.kde.KDE5_6:x86_64:5.6.0
  • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • framework:org.kde.KDE5_6:x86_64:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems installed, each of them in three versions, and one even in a beta version. We have four system instances around. Two of them are Fedora instances: one we usually boot from, the other we run for very specific purposes in an OS container. We also have the runtimes for two GNOME releases in multiple versions, plus one for KDE. Then, we have the development trees for one version of KDE and GNOME around, as well as two apps that make use of two releases of the GNOME runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can actually relatively freely mix and match apps and OSes, or develop against specific frameworks in specific versions on any operating system. It doesn't matter if you booted your ArchLinux instance, or your Fedora one, you can execute both LibreOffice and Firefox just fine, because at execution time they get matched up with the right runtime, and all of them are available from all the operating systems you installed. You get the precise runtime that the upstream vendor of Firefox/LibreOffice did their testing with. It doesn't matter anymore which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the sub-volume list, it doesn't matter which system you boot, the distribution should be able to find your local users automatically, without any configuration in /etc/passwd.

Building Blocks

With this naming scheme plus the way how we can combine them on execution we already came quite far, but how do we actually get these sub-volumes onto the final machines, and how do we update them? Well, btrfs has a feature they call "send-and-receive". It basically allows you to "diff" two file system versions, and generate a binary delta. You can generate these deltas on a developer's machine and then push them into the user's system, and he'll get the exact same sub-volume too. This is how we envision installation and updating of operating systems, applications, runtimes, frameworks. At installation time, we simply deserialize an initial send-and-receive delta into our btrfs volume, and later, when a new version is released we just add in the few bits that are new, by dropping in another send-and-receive delta under a new sub-volume name. And we do it exactly the same for the OS itself, for a runtime, a framework or an app. There's no technical distinction anymore. The underlying operation for installing apps, runtime, frameworks, vendor OSes, as well as the operation for updating them is done the exact same way for all.
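A hedged sketch of what this looks like with today's btrfs tools (paths and sub-volume names are placeholders, and btrfs send expects the source sub-volumes to be read-only snapshots):

$ btrfs send "/srv/images/usr:org.fedoraproject.FedoraWorkstation:x86_64:24.7" | btrfs receive /mnt/sysimage
$ btrfs send -p "/srv/images/usr:org.fedoraproject.FedoraWorkstation:x86_64:24.7" "/srv/images/usr:org.fedoraproject.FedoraWorkstation:x86_64:24.8" | btrfs receive /mnt/sysimage

The first command deserializes the full 24.7 tree; the second ships only the delta between 24.7 and 24.8 and results in a new 24.8 sub-volume on the target.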

Of course, keeping multiple full /usr trees around sounds like an awful lot of waste, after all they will contain a lot of very similar data, since a lot of resources are shared between distributions, frameworks and runtimes. However, thankfully btrfs actually is able to de-duplicate this for us. If we add in a new app snapshot, this simply adds in the new files that changed. Moreover different runtimes and operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user, desktop side of things, the concept is also extremely powerful in server scenarios. For example, it is easy to build your own usr trees and deliver them to your hosts using this scheme. The usr sub-volumes are supposed to be something that administrators can put together. After deserializing them into a couple of hosts, you can trivially instantiate them as OS containers there, simply by adding a new root sub-volume for each instance, referencing the usr tree you just put together. Instantiating OS containers hence becomes as easy as creating a new btrfs sub-volume. And you can still update the images nicely, get fully double-buffered updates and everything.

And of course, this scheme also applies nicely to embedded use-cases. Regardless of whether you build a TV, an IVI system or a phone: you can put together your OS versions as usr trees, and then use btrfs-send-and-receive facilities to deliver them to the systems, and update them there.

Many people when they hear the word "btrfs" instantly reply with "is it ready yet?". Thankfully, most of the functionality we really need here is strictly read-only. With the exception of the home sub-volumes (see below) all snapshots are strictly read-only, and are delivered as immutable vendor trees onto the devices. They never are changed. Even if btrfs might still be immature, for this kind of read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example, an installer image could include a Fedora version compiled for x86-64, one for i386, one for ARM, all in the same btrfs volume. Due to btrfs' de-duplication they will share as much as possible, and when the image is booted up the right sub-volume is automatically picked. Something similar of course applies to the apps too!

This also allows us to implement something that we like to call Operating-System-As-A-Virus. Installing a new system is little more than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader into the EFI System Partition
  • Rebooting

Now, since the only real vendor data you need is the usr sub-volume, you can trivially duplicate this onto any block device you want. Let's say you are a happy Fedora user, and you want to provide a friend with his own installation of this awesome system, all on a USB stick. All you have to do is follow the steps above, using your installed usr tree as the source to copy. And there you go! And you don't have to be afraid that any of your personal data is copied too, as the usr sub-volume is the exact version your vendor provided you with. Or in other words: there's no distinction anymore between installer images and installed systems. It's all the same. Installation becomes replication, nothing more. Live-CDs and installed systems can be fully identical.
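A minimal sketch of those steps with standard tools; the target device /dev/sdb is hypothetical (and gets wiped), and the boot loader step is left as a comment since the exact tool varies:

$ sgdisk --zap-all /dev/sdb
$ sgdisk --new=1:0:+512M --typecode=1:ef00 /dev/sdb    # EFI System Partition
$ sgdisk --new=2:0:0 /dev/sdb                          # btrfs volume
$ mkfs.vfat /dev/sdb1
$ mkfs.btrfs /dev/sdb2
$ mount /dev/sdb2 /mnt/target
$ btrfs send "/srv/images/usr:org.fedoraproject.FedoraWorkstation:x86_64:24.7" | btrfs receive /mnt/target
# install a boot loader (e.g. gummiboot) into the EFI System Partition, then reboot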

Note that in this design apps are actually developed against a single, very specific runtime, that contains all libraries it can link against (including a specific glibc version!). Any library that is not included in the runtime the developer picked must be included in the app itself. This is similar to how apps on Android declare one very specific Android version they are developed against. This greatly simplifies application installation, as there's no dependency hell: each app pulls in one runtime, and the app is actually free to pick which one, as you can have multiple installed, though only one is used by each app.

Also note that operating systems built this way will never see "half-updated" systems, as is common when a system is updated using RPM/dpkg. When updating the system the code will either run the old or the new version, but it will never see part of the old files and part of the new files. This is the same for apps, runtimes, and frameworks, too.

Where We Are Now

We are currently working on a lot of the groundwork necessary for this. This scheme relies on the ability to monopolize the vendor OS resources in /usr, which is the key part of what I described in Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems a few weeks back. Then, of course, for the full desktop app concept we need a strong sandbox, that does more than just hiding files from the file system view. After all with an app concept like the above the primary interfacing between the executed desktop apps and the rest of the system is via IPC (which is why we work on kdbus and teach it all kinds of sand-boxing features), and the kernel itself. Harald Hoyer has started working on generating the btrfs send-and-receive images based on Fedora.

Getting to the full scheme will take a while. Currently we have many of the building blocks ready, but some major items are missing. For example, we push quite a few problems into btrfs, that other solutions try to solve in user space. One of them is actually signing/verification of images. The btrfs maintainers are working on adding this to the code base, but currently nothing exists. This functionality is essential though to come to a fully verified system where a trust chain exists all the way from the firmware to the apps. Also, to make the home sub-volume scheme fully workable we actually need encrypted sub-volumes, so that the sub-volume's pass-phrase can be used for authenticating users in PAM. This doesn't exist either.

Working towards this scheme is a gradual process. Many of the steps we require for this are useful outside of the grand scheme though, which means we can slowly work towards the goal, and our users can already take benefit of what we are working on as we go.

Also, and most importantly, this is not really a departure from traditional operating systems:

Each app and each OS sees a traditional Unix hierarchy with /usr, /home, /opt, /var, /etc. It executes in an environment that is pretty much identical to how it would be run on traditional systems.

There's no need to fully move to a system that uses only btrfs and follows strictly this sub-volume scheme. For example, we intend to provide implicit support for systems that are installed on ext4 or xfs, or that are put together with traditional packaging tools such as RPM or dpkg: if the user tries to install a runtime/app/framework/OS image on a system that doesn't use btrfs so far, it can just create a loop-back btrfs image in /var, and push the data into that. Even we developers will run our stuff like this for a while, after all this new scheme is not particularly useful for highly individualized systems, and we developers usually tend to run systems like that.

Also note that this is in no way a departure from packaging systems like RPM or DEB. Even if the new scheme we propose is used for installing and updating a specific system, it is RPM/DEB that is used to put together the vendor OS tree initially. Hence, even in this scheme RPM/DEB are highly relevant, though not strictly as an end-user tool anymore, but as a build tool.

So Let's Summarize Again What We Propose

  • We want a unified scheme for how we can install and update OS images, user apps, runtimes and frameworks.

  • We want a unified scheme for how you can relatively freely mix OS images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of all executed code can be done, all the way to the firmware, as standard feature of the system.

  • We want to allow app vendors to write their programs against very specific frameworks, knowing that they will end up being executed with the exact same set of libraries they chose.

  • We want to allow parallel installation of multiple OSes and versions of them, multiple runtimes in multiple versions, as well as multiple frameworks in multiple versions. And of course, multiple apps in multiple versions.

  • We want everything double buffered (or actually n-fold buffered), to ensure we can reliably update/rollback versions, in particular to safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS container is as simple as adding in a new snapshot and restarting the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS instances from a single vendor tree, with zero difference whether they are booted on bare metal, in a VM or as a container.

  • We want to enable Linux to have an open scheme that people can use to build app markets and similar schemes, not restricted to a specific vendor.

Final Words

I'll be talking about this at LinuxCon Europe in October. I originally intended to discuss this at the Linux Plumbers Conference (which I assumed was the right forum for this kind of major plumbing level improvement), and at linux.conf.au, but there was no interest in my session submissions there...

Of course this is all work in progress. These are our current ideas we are working towards. As we progress we will likely change a number of things. For example, the precise naming of the sub-volumes might look very different in the end.

Of course, we are developers of the systemd project, but implementing this scheme is not just a job for the systemd developers. This is a reinvention of how distributions work, and hence needs great support from the distributions. We really hope we can trigger some interest by publishing this proposal now, to get the distributions on board. This after all is explicitly not supposed to be a solution for one specific project and one specific vendor product; we care about making this open, and solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us (IRC, mail, G+, ...).

The future is going to be awesome!

August 29, 2014

Putting PackageKit metadata on the Fedora LiveCD

While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: when you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could take a few minutes of looking at a spinner, and was a really bad first experience. We tried really hard to mitigate this, in that when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there wasn’t any metadata at all.

So, what are we doing for F21? We can’t run packagekitd when constructing the live image as it’s a D-Bus daemon and will be looking at the system root, not the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to be run when you want to run the PackageKit backend without getting D-Bus involved.

For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.
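For illustration, the kickstart hook could look roughly like this; the %post --nochroot framing and the $INSTALL_ROOT variable are how livecd-creator exposes the image root to scriptlets, but treat the exact form here as an assumption rather than the final Fedora 21 change:

%post --nochroot
# build the distro metadata and libsolv cache into the image being composed
DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh
%end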

August 22, 2014

ppc64le libreoffice
LibreOffice is now ported to ppc64le. make passes, testtools passes and the resulting application is capable of headlessly converting documents to pdf. There's no reason to think it's any less capable than any other port, but I don't actually have a ppc64le machine to hand, and transatlantic ssh tunnels aren't conducive to extensive UI testing.

The tricky bit of the port, as always, is the uno bridge, especially because the ABI was changed for little endian.

https://bugs.openjdk.java.net/browse/JDK-8035647 is handy to get the links to the original ELF v2 ABI change commits to gcc/libffi.

https://ghc.haskell.org/trac/ghc/ticket/8965 is handy for a friendlier explanation of the change: if gcc can see that the arguments to the function to be called will fit in registers, then no argument save area is created, which stumped me for a while.

August 15, 2014

dialog conversion status, 4 to go
http://magazine.uc.edu/issues/0509/rabbi.html

Converting LibreOffice dialogs to .ui format, 4 left

I should go on vacation more often. On my return I find that Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, have now converted all but 4 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format.

Here's the list of the last four: one (a monster) whose conversion is in progress, one that should ideally be removed in favour of a duplicate dialog, and two that have no known route to display them. Hacking the code temporarily to force those two to appear is probably no biggy.


Current conversion stats are:
820 .ui files currently exist
There are 3 unconverted dialogs
There is 1 unconverted tabpage
An estimated additional 4 .ui are required
We are 99% of the way through.

What's next? Well, *cough*, the above are all the dialogs and tabpages in the classic .src format. There are actually a host of ErrorBox, InfoBox and QueryBox elements that exist in the .src format as well.

These take just two pieces of information, a string to display and some bits that set what buttons to show, e.g. cancel, close, ok + cancel, etc. We want to remove them in favour of the Gtk-alike MessageDialog, but we don't want to actually convert them to .ui format, because they are so simple it makes more sense to just reduce them to strings like this sample commit demonstrates. This might even be possible to at least somewhat automate.

I've now updated count-todo-dialogs to display the count of those *Box elements that exist in src file format, but I'll elide the count of them until the last 4 true dialogs+tabpages are gone.

August 10, 2014

Birthplace
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.


August 07, 2014

Post-GUADEC

  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bädert's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

August 04, 2014

Notes on Fedora on an Android device
A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones, to explore this new OS.

Those notes are probably not very useful on their own, but they might give a few hints to stuck Android developers.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC, which uses the Adreno 203 and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with hardware floating-point (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (it's mentioned in the rules file itself).

Then, on the phone, turn on developer mode. Plug it in and run "adb devices"; you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more, are plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card in ext4 and unpacking a Fedora system image in it, I popped it inside the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available around, but I couldn't pinpoint them while writing this article.

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it though) and run against a glibc setup rather than Android's bionic libc.
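For reference, the fragile chroot launcher mentioned above boils down to something like the following; the device node, mount point and reliance on the busybox binary pushed earlier are assumptions on my part:

#!/system/bin/sh
# mount the Fedora image from the MicroSD card and enter it
mkdir -p /data/fedora
busybox mount -t ext4 /dev/block/mmcblk1p1 /data/fedora
for fs in proc sys dev; do
    busybox mount -o bind /$fs /data/fedora/$fs
done
busybox chroot /data/fedora /bin/bash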

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page, as it seems to be the norm for makers of Android(-ish) devices to deny any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as:

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
$ make

We now have a bzImage of the kernel. Launching "fastboot boot zimage /path/to/bzImage" didn't seem to work (it would have used the kernel only for the next boot), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.


$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/p8.img'
$ adb pull /mnt/sdcard/p8.img
$ bootunpack p8.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.

Nice-to-haves

In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.

Conclusion

Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

July 30, 2014

you have a long road to walk, but first you have to leave the house
or why publishing code is STEP ZERO.

If you've been developing code internally for a kernel contribution, you've probably got a lot of reasons not to default to working in the open from the start: you probably don't work for Red Hat or other companies with default-to-open policies, or perhaps you are scared of the scary kernel community and want to present a polished gem.

If your company is a pain with legal reviews etc., you have probably spent/wasted months of engineering time on internal reviews and stuff, so you think all of this matters, because why wouldn't it? You just spent (wasted) a lot of time on it, so it must matter.

So you have your polished codebase; why wouldn't those kernel maintainers love to merge it?

Then you publish the source code.

Oh look, you just left your house. The merging of your code is many many miles distant and you just started walking that road, just now, not when you started writing it, not when you started legal review, not when you rewrote it internally the 4th time. You just did it this moment.

You might have to rewrite it externally 6 times, you might never get it merged, it might be something your competitors are also working on, and the kernel maintainers would rather you cooperated with people your management would lose their minds over. That is the kernel development process.

step zero: publish the code. leave the house.

(Lately I've been seeing this problem more and more, so I decided to write it up. It really isn't directed at anyone in particular; I think a lot of vendors are guilty of this.)

July 28, 2014

A talk in 9 images

My talk at GUADEC this year was about GTK+ dialogs. The first half of the talk consisted of a comparison of dialogs in GTK+ 2, in GTK+ 3 under gnome-shell and in GTK+ 3 under xfwm4 (as an example of an environment that does not favor client-side decorations).

The main take-away here should be that in 3.14, all GTK+ dialogs will again have traditional decorations if that is what works best in the environment they are used in.

(Screenshots in the talk: About dialogs, preference dialogs, file choosers, message dialogs, error dialogs, print dialogs, font dialogs, color dialogs, action dialogs.)

The second part of my talk was discussing best practices for dealing with various issues that can come up with custom GTK+ dialogs; I've summarized the main points in this HowDoI page.

July 25, 2014

Dialogs and Coverity, current numbers

Army massing

Converting LibreOffice dialogs to .ui format, 54 conversions remaining

We've now converted all but 54 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format. This is due to the much appreciated efforts of Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, who are tackling the last bunch of hard to find or hard to convert ones.

Current conversion stats are:
778 .ui files currently exist
There are 20 unconverted dialogs
There are 34 unconverted tabpages
An estimated additional 54 .ui are required
We are 93% of the way through.

Coverity Defect Density: LibreOffice vs Average

According to Coverity's overview dashboard our current status is:

LibreOffice: 9,425,526 lines of code and 0.09 defect density

Open Source Defect Density By Project Size

Lines of Code (LOC)      Defect Density
Less than 100,000        0.35
100,000 to 499,999       0.5
500,000 to 1 million     0.7
More than 1 million      0.65
Note: Defect density is measured by the number of defects per 1,000 lines of code, identified by the Coverity platform. The numbers shown above are from our 2013 Coverity Scan Report, which analyzed 250 million lines of open source code.

July 24, 2014

Continuous testing and Wayland

The GNOME-Continuous continuous integration and delivery system has been helping us to keep the quality of the GNOME code base up for a while now.

It is doing a number of things:

  • Builds changed modules
  • Creates vm images that can be downloaded for local testing
  • Smoke-tests the installed image by verifying that it boots up to the login screen
  • Runs more than 400 tests against the installed image
  • Launches all the applications that are part of the moduleset and takes screenshots of them

All of this happens after every commit, or at least very close to that, and the results are available at the build.gnome.org website.

You can learn more about GNOME-Continuous here.

As a member of the GNOME release team I am really thankful for this service; it has made our job a lot easier – at release time everything just builds and works most of the time nowadays.

Earlier this year, we’ve made the smoke-testing aspect of gnome-continuous more useful by verifying not just GNOME, but also GNOME Classic and GNOME on Wayland.

Today, we’ve had a minor breakthrough: for the first time, the GNOME/Wayland smoke test succeeded all the way to taking a screenshot of the session.

GNOME/Wayland

(Of course, GNOME on Wayland looks just like GNOME under X11, so the screenshot is not very exciting by itself.)

To get this far, we had to switch to the egl-drm branch of mesa, which adds support for running KMS+DRM GL applications with QXL.

July 23, 2014

Watch out for DRI3 regressions
DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

July 22, 2014

GTK+ at GUADEC 2014

GUADEC is almost here. GTK+ will be presented with 3 talks:

  • GTK+, dialogs, the HIG and you  (by me)
  • GTK+ and CSS (by Benjamin)
  • The GTK+ scene graph toolkit (by Emmanuele)

All of these will be on Monday, the 28th. We will also have a GTK+ team meeting on the 31st.

I’ve made a small collage to summarize my talk:

Dialogs

See you all there!

July 20, 2014

Newcomers Workshop

The GNOME Newcomers Workshop might be coming to a conference near you this summer! Owen Taylor and I will be hosting one at GUADEC in Strasbourg, France on Saturday, July 26 and one at Flock in Prague, Czech Republic on Saturday, August 9. If you have never contributed a patch to GNOME before or even have never used GNOME, this workshop is for you! You’ll get a quick overview of the GNOME project, have a chance to install GNOME in a virtual machine, and learn the tools and the process of contributing a patch. If you plan to come to either workshop, please add yourself to the list of attendees.

gnome-balloon

If you can’t attend one of the workshops, you can install the available image in a virtual machine and go through the newcomers tutorial used at the workshop on your own. You can ask any questions you have on IRC.

We would like to have one helper per three newcomers at the workshops. If you are an established contributor and will be available, please sign up now to help with the workshop at GUADEC or Flock, and then you can check the room in the beginning of the session to see if we need your help based on the attendance. I’d love to see more people running the Newcomers Workshop locally or at other events they attend. Helping out with one of these sessions would give you a chance to learn how it works.

In addition to hosting the Newcomers Workshop, I will be speaking about how to be an ally to women in tech at GUADEC and about the Outreach Program for Women at Flock.

guadec-2014-badge-small

And then I’m going to

flock-logo14

July 14, 2014

pointer acceleration in libinput - an analysis

Following Christian's Wayland in Fedora Update post, and after Hans fixed the touchpad acceleration, I've been playing with pointer acceleration in libinput a bit. The main focus was not yet on changing it but rather on figuring out what we actually do and where the room for improvement is. There's a tool in my (rather messy) github wip/ptraccel-work branch to re-generate the graphs below.

This was triggered by a simple plan: I want a configuration interface in libinput that provides a sliding scale from -1 to 1 to adjust a device's virtual speed from slowest to fastest, with 0 being the default for that device. A user should not have to worry about the accel mechanism itself, which may be different for any given device, all they need to know is that the setting -0.5 means "halfway between default and 'holy cow this moves like molasses!'". The utopia is of course that for any given acceleration setting, every device feels equally fast (or slow). In order to do that, I needed the right knobs to tweak.

The code we currently have in libinput is pretty much 1:1 what's used in the X server. The X server sports a lot more configuration options, but what we have in libinput 0.4.0 is essentially what the default acceleration settings are in X. Armed with the knowledge that any #define is a potential knob for configuration, I went to investigate. There are two defines that are labelled as adjustable parameters:

  • DEFAULT_THRESHOLD, set to 0.4
  • DEFAULT_ACCELERATION, set to 2.0
But what do they mean, exactly? And what exactly does a value of 0.4 represent?
[side-note: threshold was 4 until I took the constant multiplier out, it's now 0.4 upstream and all the graphs represent that.]

Pointer acceleration is nothing more than mapping some input data to some potentially faster output data. How much faster depends on how fast the device moves, and to get there one usually needs a couple of steps. The trick of course is to make it predictable, so that despite the acceleration, your brain thinks that the visible cursor is an extension of your hand at all speeds.

Let's look at a high-level outline of our pointer acceleration code:

  • calculate the velocity of the current movement
  • use that velocity to calculate the acceleration factor
  • apply accel to dx/dy
  • smoothen out the dx/dy to avoid abrupt changes between two events

Calculating pointer speed

We don't just use dx/dy as values, rather, we use the pointer velocity. There's a simple reason for that: dx/dy depends on the device's poll rate (or interrupt frequency). A device that polls twice as often sends half the dx/dy values in each event for the same physical speed.

Calculating the velocity is easy: divide dx/dy by the delta time. We use a set of "trackers" that store previous dx/dy values with their timestamp. As long as we get movement in the same cardinal direction, we take those into account. So if we have 5 events in direction NE, the speed is averaged over those 5 events, smoothing out abrupt speed changes.
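
A rough sketch of that calculation (illustrative Python, not libinput's actual C code; the tracker representation is simplified to tuples):

    def velocity(trackers, now):
        # trackers: recent (dx, dy, time_ms) tuples, all moving in the same cardinal direction
        dx = sum(t[0] for t in trackers)
        dy = sum(t[1] for t in trackers)
        dt = now - trackers[0][2]                  # ms since the oldest tracked event
        if dt <= 0:
            return 0.0
        return (dx * dx + dy * dy) ** 0.5 / dt     # device units per millisecond

    print(velocity([(2, 0, 100), (3, 0, 108), (2, 0, 116)], 124))   # ~0.29 units/ms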

The acceleration function

The speed we just calculated is passed to the acceleration function to calculate an acceleration factor.

Figure 1: Mapping of velocity in unit/ms to acceleration factor (unitless). X axes here are labelled in units/ms and mm/s.
This function is the only place where DEFAULT_THRESHOLD/DEFAULT_ACCELERATION are used, but they mostly just stretch the graph. The shape stays the same.

The output of this function is a unit-less acceleration factor that is applied to dx/dy. A factor of 1 means leaving dx/dy untouched, 0.5 is half-speed, 2 is double-speed.

Let's look at the graph for the accel factor output (red): for very slow speeds we have an acceleration factor < 1.0, i.e. we're slowing things down. There is a distinct plateau up to the threshold of 0.4; after that it shoots up to roughly a factor of 1.6, where it flattens out a bit until we hit the max acceleration factor.

Now we can also put units to the two defaults: Threshold is clearly in units/ms, and the acceleration factor is simply a maximum. Whether those are mentally easy to map is a different question.
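
To make the shape of Figure 1 a bit more tangible, here is a toy profile with the same general behaviour: a plateau below 1.0 for slow movement, then a quick ramp towards the maximum once the threshold is crossed. The two defines come from the text above; the 0.8 plateau and the slope are invented for illustration and are not libinput's exact formula:

    DEFAULT_THRESHOLD = 0.4       # units/ms
    DEFAULT_ACCELERATION = 2.0    # maximum unit-less factor

    def accel_factor(speed, threshold=DEFAULT_THRESHOLD, max_accel=DEFAULT_ACCELERATION):
        if speed <= threshold:
            return 0.8                                  # slow movement: factor below 1.0, i.e. decelerate
        # past the threshold the factor rises quickly, then levels off at the maximum
        return min(0.8 + (speed - threshold) * 2.0, max_accel)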

We don't use the output of the function as-is; rather, we smooth it out using Simpson's rule. The second (green) curve shows the accel factor after the smoothing took effect. This is a contrived example: the tool to generate this data simply increased the velocity, hence this particular line. For more random data, see Figure 2.

Figure 2: Mapping of velocity in unit/ms to acceleration factor (unitless) for a random data set. X axes here are labelled in units/ms and mm/s.
For the data set, I recorded the velocity from libinput while using Firefox a bit.

The smoothing takes history into account, so the data points we get depend on the usage. In this data set (and others I tested) we see that the majority of the points still lie on or close to the pure function, apparently the delta doesn't matter that much. Nonetheless, there are a few points that suggest that the smoothing does take effect in some cases.

It's important to note that this is already the second smoothing to take effect - remember that the velocity (may) average over multiple events and thus smoothens the input data. However, the two smoothing effects somewhat complement each other: velocity smoothing only happens when the pointer moves consistently without much change, while the Simpson's rule smoothing is most pronounced when the pointer moves erratically.
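
For the curious, the classic three-point Simpson's rule weights the two endpoints and the midpoint 1:4:1. Used as a smoother it amounts to averaging the acceleration profile between the previous and the current velocity, roughly like this (an illustration of the idea only, not the exact code; the toy accel_factor() from the previous sketch can be passed in as the profile):

    def smoothed_factor(profile, v_prev, v_cur):
        # Average the profile over [v_prev, v_cur] using Simpson's rule (weights 1:4:1).
        v_mid = (v_prev + v_cur) / 2.0
        return (profile(v_prev) + 4.0 * profile(v_mid) + profile(v_cur)) / 6.0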

Ok, now we have the basic function, let's look at the effect.

Pointer speed mappings

Figure 3: Mapping raw unaccelerated dx to accelerated dx, in mm/s assuming a constant physical device resolution of 400 dpi that sends events at 125Hz. dx range mapped is 0..127
The graph was produced by sending 30 events with the same constant speed, then dividing by the number of events to reduce any effect that feeding the trackers has on the initial couple of events.

The two lines show the actual output speed in mm/s and the gain in mm/s, i.e. (output speed - input speed). We can see the little nook where the threshold kicks in, and after that the acceleration is linear. Look at Figure 1 again: the linear acceleration is caused by the acceleration factor maxing out quickly.

Most of this graph is theoretical only though. On your average mouse you don't usually get a delta greater than 10 or 15 and this graph covers the theoretical range to 127. So you'd only ever be seeing the effect of up to ~120 mm/s. So a more realistic view of the graph is:

Figure 4: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 3, but zoomed to the realistic range. We go from a linear speed increase (no acceleration) to a quick bump once the threshold is hit and from then on to a linear speed increase once the maximum acceleration is hit.

And to verify, the ratio of output speed : input speed:

Figure 5: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated.

Looks pretty much exactly like the pure acceleration function, which is to be expected. What's important here though is that this is the effective speed, not some mathematical abstraction. And it shows one limitation: we go from 0 to full acceleration within a really small window.

Again, this is the full theoretical range, the more realistic range is:

Figure 6: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 5, just zoomed in to a maximum of 120 mm/s. If we assume that 15 dx/event is roughly the maximum you can reach with a mouse you'll see that we've reached maximum acceleration at a third of the maximum speed and the window where we have adaptive acceleration is tiny.

Tweaking threshold/accel doesn't do that much. Below are the two graphs representing the default (threshold=0.4, accel=2), a doubled threshold (threshold=0.8, accel=2) and a doubled acceleration (threshold=0.4, accel=4).

Figure 7: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Figure 8: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, see Figure 5 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Doubling either setting just moves the adaptive window around, it doesn't change that much in the grand scheme of things.

Now, of course these were all fairly simple examples with constant speed, etc. Let's look at a diagram of what is essentially random movement, me clicking around in Firefox for a bit:

Figure 8: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set.
And the zoomed-in version of this:
Figure 9: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set, zoomed in to events 450-550 of that set.
This is more-or-less random movement reflecting some real-world usage. What I find interesting is that it's very hard to see any areas where smoothing takes visible effect. The accelerated curve largely looks like a stretched input curve. To be honest, I'm not sure what I should've expected here or how to read it; pointer acceleration data from real-world usage is notoriously hard to visualize.

Summary

So in summary: I think there is room for improvement. We have no acceleration up to the threshold, and then we accelerate within too small a window. Acceleration stops adjusting to the speed soon after. This makes us lose precision, and small speed changes are punished quickly.

Increasing the threshold or the acceleration factor doesn't do that much. Any increase in acceleration makes the mouse faster but the adaptive window stays small. Any increase in threshold makes the acceleration kick in later, but the adaptive window stays small.

We've already merged a number of fixes into libinput, but some more work is needed. I think that to get a good pointer acceleration we need to get a larger adaptive window [Citation needed]. We're currently working on that (and figuring out how to evaluate whatever changes we come up with).

A word on units

The biggest issue I was struggling with when trying to understand the code was that of units. The code didn't document the units used anywhere, but it turns out that everything was either in device units ("mickeys"), device units/ms or (in the case of the acceleration factors) unitless.

Device units are unfortunately a pretty useless base entity, only slightly more precise than using the length of a piece of string. A device unit depends on the device resolution and of course that differs between devices. An average USB mouse tends to have 400 dpi (15.75 units/mm) but it's common to have 800 dpi or 1000 dpi, and gaming mice go up to 8200 dpi. A touchpad can have resolutions of 1092 dpi (43 u/mm), 3277 dpi (129 u/mm), etc. and may even have different resolutions for x and y.

This explains why until commit e874d09b4 the touchpad felt slower than a "normal" mouse. We scaled to a magic constant of 10 units/mm, before hitting the pointer acceleration code. Now, as said above the mouse would likely have a resolution of 15.75 units/mm, making it roughly 50% faster. The acceleration would kick in earlier on the mouse, giving the touchpad and the mouse not only different speeds but a different feel altogether.

Unfortunately, there is not much we can do about mice feeling different depending on the resolution. To my knowledge there is no way to query the resolution on a device. But for absolute devices that need pointer acceleration (i.e. touchpads) we can normalize to a fake resolution of 400 dpi and base the acceleration code on that. This provides the same feel on the mouse and the touchpad, as much as that is possible anyway.
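
A sketch of what that normalization amounts to (the 400 dpi target comes from the text; the helper itself is made up for illustration):

    TARGET_DPI = 400.0

    def normalize(dx, dy, device_dpi):
        # Scale deltas from an absolute device so they look like a 400 dpi mouse.
        scale = TARGET_DPI / device_dpi
        return dx * scale, dy * scale

    print(normalize(30, 0, 1092))    # a 1092 dpi touchpad delta becomes (~11.0, 0.0)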

July 11, 2014

Another GtkInspector update

I last wrote about GtkInspector in early June. Since then, a few things have fallen into place, so it is time for another status update.

Show all the things

An early focus of my work on the inspector was to make as much information visible as possible (objects, styles, settings,…).

Menus

This has continued; the most recent additions in the object tree are submenus and combobox popups, which are now shown below the widgets they are attached to.

Style

For widgets, we now have a Style Properties tab, which shows the values of known style properties, and importantly, where they originate in the theme css. This tab is still a bit preliminary, but it is a start towards answering the question: why does GTK+ not pick up this selector in my css?

Property Binding
Settings Binding

The property editor popover in the Properties tab is now showing information about GObject and GSettings property bindings, with an easy way to jump to the other object.

The environment section in the General tab includes GSETTINGS_SCHEMA_DIR when set.

Test all the things

Next to making things visible, the inspector is supposed to make it easy to test an application in various situations, such as different themes or right-to-left locales.

General

A recent addition in this regard is a switch in the General tab to turn on touchscreen simulation. This used to be available only via the GTK_TEST_TOUCHSCREEN environment variable.

Hi-dpi

You can also turn on hi-dpi scaling in the General tab to test how your app behaves in this scenario, and look out for blurry icons and the like. Careful, this will make your windows giant.

Better picking

I found that I often use the widget picker right after opening the inspector window, so I figured that I should try to combine these two actions. As a result, Ctrl-Shift-I will now open the inspector and select the widget that was under the pointer before. Ctrl-Shift-D still toggles the inspector on and off.

Another small improvement to picking is that the Escape key can now be used to cancel a picking operation.

Less confusion

As I’ve mentioned above, some of the things that the inspector lets you change at runtime can also be set from environment variables (e.g. GTK_THEME). And when they are set, the environment variables often override any runtime changes.

Disabled control

GtkInspector now reflects this by disabling the corresponding controls when they won’t take effect.

Help welcome

I’ll continue to improve the inspector, but there are more good ideas for possible features than one person can handle. If you want to help out, feel free to come by on IRC, or just put your contribution in bugzilla, where the inspector has its own component now.

July 08, 2014

Important AppData milestone

Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

  • Applications with descriptions: 262/1037 (25.3%)
  • Applications with keywords: 112/1037 (10.8%)
  • Applications with screenshots: 235/1037 (22.7%)
  • Applications in GNOME with AppData: 91/134 (67.9%)
  • Applications in KDE with AppData: 5/67 (7.5%)
  • Applications in XFCE with AppData: 2/20 (10.0%)
  • Application addons with MetaInfo: 30

We’ve gone up a couple of percentage points in the last few weeks, mostly from the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on the developer tools for the last week or so, as this is one of the key groups of people we’re targeting for Fedora 21.

One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find similar software to the apps you already have installed, and also suggests those. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.
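
One naive way to do that kind of suggestion, purely as an illustration (this is not the actual gnome-software algorithm, and the app/category data below is made up):

    # Suggest apps that share categories with what is already installed.
    APPS = {
        "anjuta":   {"IDE", "Development"},
        "devhelp":  {"Documentation", "Development"},
        "d-feet":   {"Debugger", "Development"},
        "glade":    {"GUI Designer", "Development"},
        "shotwell": {"Photography", "Graphics"},
    }

    def picks(installed):
        have = set().union(*(APPS[a] for a in installed))
        scores = {a: len(APPS[a] & have) for a in APPS if a not in installed}
        return sorted((a for a, s in scores.items() if s), key=lambda a: -scores[a])

    print(picks({"anjuta", "devhelp"}))   # ['d-feet', 'glade'] -- shotwell shares nothing, so it is not suggested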

July 04, 2014

Self-signing custom Android ROMs
The security model on the Google Nexus devices is pretty straightforward. The OS is (nominally) secure and prevents anything from accessing the raw MTD devices. The bootloader will only allow the user to write to partitions if it's unlocked. The recovery image will only permit you to install images that are signed with a trusted key. In combination, these facts mean that it's impossible for an attacker to modify the OS image without unlocking the bootloader[1], and unlocking the bootloader wipes all your data. You'll probably notice that.

The problem comes when you want to run something other than the stock Google images. Step number one for basically all of these is "Unlock your bootloader", which is fair enough. Step number two is "Install a new recovery image", which is also reasonable in that the key database is stored in the recovery image and so there's no way to update it without doing so. Except, unfortunately, basically every third party Android image is either unsigned or is signed with the (publicly available) Android test keys, so this new recovery image will flash anything. Feel free to relock your bootloader - the recovery image will still happily overwrite your OS.

This is unfortunate. Even if you've encrypted your phone, anyone with physical access can simply reboot into recovery and reflash /system with something that'll stash your encryption key and mail your data to the NSA. Surely there's a better way of doing this?

Thankfully, there is. Kind of. It's annoying and involves a bunch of manual processes and you'll need to re-sign every update yourself. But it is possible to configure Nexus devices in such a way that you retain the same level of security you had when you were using the Google keys without losing the freedom to run whatever you want. Here's how.

Note: This is not straightforward. If you're not an experienced developer, you shouldn't attempt this. I'm documenting this so people can create more user-friendly approaches.

First: Unlock your bootloader. /data will be wiped.
Second: Get a copy of the stock recovery.img for your device. You can get it from the factory images available here
Third: Grab mkbootimg from here and build it. Run unpackbootimg against recovery.img.
Fourth: Generate some keys. Get this script and run it.
Fifth: zcat recovery.img-ramdisk.gz | cpio -id to extract your recovery image ramdisk. Do this in an otherwise empty directory.
Sixth: Get DumpPublicKey.java from here and run it against the .x509.pem file generated in step 4. Replace /res/keys from the recovery image ramdisk with the output. Include the "v2" bit at the beginning.
Seventh: Repack the ramdisk image (find . | cpio -o -H newc | gzip > ../recovery.img-ramdisk.gz) and rebuild recovery.img with mkbootimg.
Eighth: Write the new recovery image to your device
Ninth: Get signapk from here and build it. Run it against the ROM you want to sign, using the keys you generated earlier. Make sure you use the -w option to sign the whole zip rather than signing individual files.
Tenth: Relock your bootloader
Eleventh: Boot into recovery mode and sideload your newly signed image.

At this point you'll want to set a reasonable security policy on the image (eg, if it grants root access, ensure that it requires a PIN or something), but otherwise you're set - the recovery image can't be overwritten without unlocking the bootloader and wiping all your data, and the recovery image will only write images that are signed with your key. For obvious reasons, keep the key safe.

This, well. It's obviously an excessively convoluted workflow. A *lot* of it could be avoided by providing a standardised mechanism for key management. One approach would be to add a new fastboot command for modifying the key database, and only permit this to be run when the bootloader is unlocked. The workflow would then be something like
  • Unlock bootloader
  • Generate keys
  • Install new key
  • Lock bootloader
  • Sign image
  • Install image
which seems more straightforward. Long term, individual projects could do the signing themselves and distribute their public keys, resulting in the install process becoming as easy as
  • Unlock bootloader
  • Install ROM key
  • Lock bootloader
  • Install ROM
which is actually easier than the current requirement to install an entirely new recovery image.

I'd actually previously criticised Google on the grounds that using custom keys wasn't possible on Android devices. I was wrong. It is, it's just that (as far as I can tell) nobody's actually documented it before. It's important that users not be forced into treating security and freedom as mutually exclusive, and it's great that Google have made that possible.

[1] This model fails if it's possible to gain root on the device. Thankfully this would never hold on what's that over there is that a distraction?

FUDCON + GNOME.Asia Beijing 2014

Thanks to the funding from FUDCON I had the chance to attend and keynote at the combined FUDCON Beijing 2014 and GNOME.Asia 2014 conference in Beijing, China.

My talk was about systemd's present and future, what we achieved and where we are going. I tried to explain a bit where we are coming from, and how we changed focus from being purely an init system to being a set of basic building blocks to build an OS from. Most of the talk was about where we still intend to take systemd, which areas we believe should be covered by systemd, and, of course, the always difficult question of where to draw the line and what is clearly outside the focus of systemd. The slides of my talk are available online. (No video recording I am aware of, sorry.)

The combined conferences were a lot of fun, and as usual, the best discussions were the ones I had in the hallway track, about Linux and systemd.

A number of pictures of the conference are now online. Enjoy!

After the conference I stayed for a few more days in Beijing, doing a bit of sightseeing. What a fantastic city! The food was amazing, we tried all kinds of fantastic stuff, from Peking duck to bullfrog Sichuan style. Yummy. And one of these days I am sure I will find the time to actually sort my photos and put them online, too.

I am really looking forward to the next FUDCON/GNOME.Asia!

July 03, 2014

LibreOffice Coverity Defect Density

Coverity Defect Density: LibreOffice vs Average

We run LibreOffice through Coverity approximately once a week. According to Coverity's overview dashboard our current status is:

LibreOffice: 9,500,825 lines of code and a defect density of 0.13

Open Source Defect Density By Project Size

Lines of Code (LOC)      Defect Density
Less than 100,000        0.35
100,000 to 499,999       0.5
500,000 to 1 million     0.7
More than 1 million      0.65
Note: Defect density is measured by the number of defects per 1,000 lines of code, identified by the Coverity platform. The numbers shown above are from our 2013 Coverity Scan Report, which analyzed 250 million lines of open source code.

So any crashes you might experience in 4.3 are either a figment of your imagination or a sad commentary on the limitations of static code analysis.

July 02, 2014

Blurry Screenshots in GNOME Software?

Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot use a size of 624×351.

If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.
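
If you want to sanity-check a screenshot before shipping it, something along these lines will do (a throwaway helper sketched here for illustration, not part of GNOME Software or the AppData tooling):

    RECOMMENDED = {(752, 423), (624, 351)}   # one screenshot / more than one screenshot

    def check(width, height):
        if (width, height) in RECOMMENDED:
            return "pixel perfect, no scaling needed"
        if abs(width / float(height) - 16.0 / 9.0) < 0.01:
            return "16:9 but non-native size: it will be scaled (slightly blurry)"
        return "non-16:9: it will be padded and possibly scaled"

    print(check(1366, 768))   # 16:9 but non-native size: it will be scaled (slightly blurry)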

June 27, 2014

scrolling the sidebar with the scroll-wheel
The sidebar comes with a vertical scrollbar for when content doesn't fit in the available space. But the mouse pointer has to be right over the scrollbar to use your scroll wheel; it doesn't work to hover over the content of the sidebar and move the wheel there.

Which is annoying, but on trying to fix that I realized the snag with allowing wheel-scroll over the sidebar: if you scroll the sidebar down and a widget contained in it ends up under the mouse pointer, and that widget itself accepts wheel-scroll, it's very easy to accidentally change the widget that has just scrolled under the pointer. Spin buttons, for example.

As an aside, this is the exact same problem that I have in glade where I scroll down the property pane with the scroll-wheel and accidentally end up over the "Ellipsize" listbox and inadvertently change it from None to Middle. So if you find labels in LibreOffice with "..." in the middle of them for no good reason, this is why.

Anyway, I still want to scroll the sidebar, but I don't want to end up with this target-location/wheel-scroll conflict, so my solution is to keep sending the wheel-scroll events to the previous target so long as the mouse pointer is at the same position as the last wheel event and the time between events is <= the default timeout for raising help tips, i.e. half a second.

Seems to work well for me: scrolling the sidebars "just works" on master (LibreOffice 4.4) with the scroll wheel, without random changes to any scroll-wheel-sensitive contents, but you can still use the scroll wheel to modify those widgets after moving to them or once the little timeout completes.
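
In pseudo-code the rule is tiny (illustrative Python rather than the actual LibreOffice C++; the names are invented, and the half second comes from the help-tip timeout mentioned above):

    HELP_TIP_TIMEOUT_MS = 500

    def wheel_target(event, last_event, last_target, widget_under_pointer):
        # Keep scrolling the previous target while the pointer stays put and events arrive quickly.
        if (last_event is not None
                and event.pos == last_event.pos
                and event.time - last_event.time <= HELP_TIP_TIMEOUT_MS):
            return last_target                  # keep scrolling the sidebar, not the spin button underneath
        return widget_under_pointer             # pointer moved or timeout elapsed: normal behaviour
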
GNOME 3.13.3

I’ve done the release team duty for the GNOME 3.13.3 release this week. As I often do, I took some screenshots of new things that I’ve noticed while smoketesting.

There is quite a bit of good new stuff in this release, starting with a rewritten and improved Adwaita theme that is now part of GTK+:

Adwaita dark

Next, I’ve noticed that our sharing infrastructure has become network aware, and lets you change what is shared depending on what network you’re on:

Sharing

This works not just for media sharing, but for file sharing and screen sharing as well.

At the other side of media sharing, gnome-online-accounts has learned to set up access to the media servers in your local network:

Media servers

Among the applications, yelp and evince stand out with new, modern looks that fit in very well with the rest of GNOME:

Yelp
Evince

There are more things to discover that I could not capture in a screenshot. For example, I noticed that GNOME shell now remembers which workspace windows were on when you disconnect and reconnect external monitors.

If you want to learn more about GNOME 3.13.3, you can study the release notes (here and here) or you can try it out in Fedora rawhide. Richard has also set up a copr repository for F20.

This is also a good occasion to announce publicly that we are aiming to have GNOME 3.14 in Fedora 21 – it may be a little tight, schedule-wise (the GNOME 3.13.91 beta release happens on Sep 1, a few days after the beta freeze for F21), but we’ve been able to squeeze things in, in the past.

June 25, 2014

Firewalls and per-network sharing
Firewalls

Fedora has had problems for a long while with the default firewall rules. They would make a lot of things not work (media and file sharing of various sorts, usually, whether as a client or a server) and users would usually disable the firewall altogether, or work around it through micro-management of opened ports.

We went through multiple discussions over the years trying to break the security folks' resolve on what should be allowed to be exposed on the local network (sometimes trying to get rid of the firewall). Or rather we tried to agree on a setup that would be implementable for desktop developers and usable for users, while still providing the amount of security and dependability that the security folks wanted.

The last round of discussions was more productive, and I posted the end plan on the Fedora Desktop mailing-list.

By Fedora 21, Fedora will have a firewall that's completely open for the user's applications (with better tracking of what applications do what once we have application sandboxing). This reflects how the firewall was used on the systems that the Fedora Workstation version targets. System services will still be blocked by default, except a select few such as ssh or mDNS, which might need some tightening.

But this change means that you'd be sharing your music through DLNA on the café's Wi-Fi, right? Well, this is what the next change is here to avoid.

Per-network Sharing

To avoid showing your music in the café, or exposing your holiday photographs at work, we needed a way to restrict sharing to wireless networks where you'd already shared this data, and provide a way to avoid sharing in the future, should you change your mind.

Allan Day mocked up such controls in our Sharing panel, which I diligently implemented. Personal File Sharing (through gnome-user-share and WebDAV), Media Sharing (through rygel and DLNA) and Screen Sharing (through vino and VNC) implement the same per-network sharing mechanism.

Make sure that your versions of gnome-settings-daemon (which implements the starting/stopping of services based on the network) and gnome-control-center match for this all to work. You'll also need the latest version of all 3 of the aforementioned sharing utilities.

(and it also works with wired network profiles :)



What's that icon?

Keeping up with the theme of making life easier for application developers, I decided to take an afternoon off from releasing GNOME 3.13.3, and wrote a little application that shows icons from the icon theme, their symbolic variants, and some extra information from the icon naming spec, such as the icon description, and the context for the icon.

Icon browser

This should make it a little easier to find the right icon to use in your application. gtk3-icon-browser will be available in GTK+ 3.13.4.

June 21, 2014

We’ll Build A Dream House Of Net

(Note: the final 0.9.10 will be out later this week…  read on for the awesome that it will contain)

Hey wouldn’t it be great if NetworkManager did X and made my life so awesome I could retire to a private island surrounded by things I love?  Like kittens and teddy bears and bright copper kettles and Domaine Leflaive Montrachet Grand Cru?

If only your dream was reality…  Oh wait!  NetworkManager 0.9.10 will be your genie from Aladdin, granting every wish you dream of, except this time you can wish for more wishes.  But you still can’t bring awful networking back from the dead because it’s just not pretty.  Don’t do it.

 

7385321668_0ec3667c73_b

Tons of new features, yet somehow smaller and nimbler! (via cuxclipper, CC BY 2.0)

What is pretty is NetworkManager 0.9.10; it’s like the lightning-quick racing yacht that Larry Ellison doesn’t have and really, really wants, but which somehow also adds a Triple-E-Class-worth of new features just for you.  Somebody (maybe you!) wished for every single thing you’re about to see.  And then a magic genie showed up, snapped its fingers, and gave it to them.

nmtui

We found a usability gap between full-fledged CLI tools like nmcli and GUI-based ones, and thus nmtui was born.  Sometimes you don’t want to remember esoteric commands and options, but you also don’t want to run X. Boom, first wish granted: a curses-based tool for configuring and managing your network, no X involved:

nmtui
nmtui2

nmcli

The command-line still rules with divine mandate, and we’re here to please so nmcli was a huge focus for this release.  We’ve added interactive editing support, single-command editing, detailed help, tab completion, and enhanced bash completion.  You really need to check this out; almost anything you can do with GUI tools can now be done with nmcli, and there’s even some stuff nmcli can do that the GUI tools can’t.  If you’re comfortable with terminals, NetworkManager 0.9.10 is right up your alley.

nmcli

Size Does Matter

Continuing on the quest to be more nimble and streamlined, we’ve split Wi-Fi, WWAN, Bluetooth, ADSL, and WiMAX device support into plugins which you don’t need to install if you like a minimal system.  Distributions should package these separately so they can be added/removed independently of NetworkManager itself, which reduces disk usage, runtime memory usage, and packaging dependency chains.  We’ve also spent time slimming down and optimizing the code.  The core NetworkManager daemon is now just over 1MB in size!

dbus-daemon is also no longer required for root-only or early-boot operation, with communication using a private root-only Unix socket. Similarly, PolicyKit is no longer used for root operation, though it could always be disabled at build-time anyway.

To facilitate remote and SSH-based management, the “at_console” D-Bus permission has been removed, which also helpfully harmonizes authorization settings between Fedora and Debian-based distributions.  All permissions authorization now happens through PolicyKit instead.

4870003098_26ba44a08a_b

NetworkManager works here (via scobleizer, CC BY 2.0)

The Enterprise

When you Absolutely Positively MUST have your ethernet frames delivered on-time and without loss you turn to Data Center Bridging.  DCB provides the reliability and robustness that iSCSI and FibreChannel over Ethernet (FCoE) need so you don’t have to keep shovelling money into a proprietary SAN.  Since users requested it, we snapped our fingers and added support to NetworkManager 0.9.10 for configuring DCB on your ethernet interfaces.

We’ve also upped our game with IP-level configuration support for many more software interfaces like GRE, macvlan, macvtap, tun, tap, veth, and vxlan.  And when you have services that aren’t yet network-aware, the NetworkManager-wait-online systemd service is more reliable to ensure your legacy services start up with the resources they require.

Customization Galore

You dreamed, we listened.  Creepy, no?  Yeah, we know what you want.  And top of the list was more flexible configuration:

  • Connection configuration files are no longer watched for changes by default, which used to cause problems with backups, filesystem copies, half-configured connections, etc.  If you want that behavior you can turn it back on (monitor-connection-files=true), but instead, edit them as much as you want and when you’re done, “nmcli con reload“.
  • Connections can now be locked to interface names instead of just MAC addresses
  • A new “ignore-carrier” option is available to ensure your critical app doesn’t fail just because you got drunk on Captain Morgan + Coke, and tripped over a cable
  • Want to manage /etc/resolv.conf yourself?  You can!  “dns=none” is your new best friend.
  • Configuration file snippets can be dropped into /etc/NetworkManager/conf.d to change smaller sets of configuration options

The NetworkManager dispatcher got some enhancements too.  It now has a “pre-up” event that allows scripts to execute before NetworkManager announces connectivity to applications.  We also added a “pre-down” event that lets network filesystems flush data before the interface is actually disconnected from the network.

Seamless Cooperation

Do you love /sbin/ip?  ifconfig?  brctl?  vconfig?  Keep using them!  Changes you make outside of NetworkManager get picked up, respected, and reflected in the D-Bus API.  NetworkManager 0.9.10 also goes to great lengths to read the existing configuration of interfaces and not touch them.  Most network interfaces known to the kernel are now exposed in the D-Bus API, and you can even change their IP configuration right from NetworkManager.  There’s more work to do here but we hope you’ll appreciate the new situational awareness as much as we do.

Get Your VPN On

We’ve improved support for routing-only VPNs like Openswan/Libreswan/Strongswan.  We’ve added full details of the VPN’s IP configuration to the D-Bus API.  And best yet, VPN plugins can now request additional passwords during the connection process if the ones you previously gave them are wrong or changed.

All the Rest

For clients, more properties are exposed in the D-Bus API.  We’ve added support for custom IP ranges to the Internet Connection Sharing functionality.  We’ve added WWAN autoconnect support and more reliable airplane mode behavior.  Fatal connection errors now more reliably block reconnect, which means better handling of wrong Wi-Fi passwords and access point failures.  Captive portal/hotspot support is moving forward, as are DNSSEC enhancements.

Geez, are we done yet?

Not even close!  Seriously, there’s more but I’m kinda tired of typing.  Try it out (the final release will be out later this week) and tell us what you think.  Then tell us what you want.  Don’t be afraid to dream a little bigger, darling!

(via Alexandra Guerson, CC BY-NC-ND 2.0)

June 19, 2014

PSA: Fedora 21, NetworkManager, and DNF

Recently we posted a Fedora 21 update delivering the huge new awesome that is NetworkManager 0.9.10-beta1.  Among a literal Triple E Class boatload of enhancements and fixes, this update continues our fine tradition of making the core of NetworkManager smaller and more flexible by splitting out Wi-Fi support into the NetworkManager-wifi package.  If you don’t have or don’t use Wi-Fi on your system, you don’t need to install stuff for it and you can save some disk space and RAM.

The Problem

To ensure upgrades work correctly and people didn't unexpectedly lose Wi-Fi support, we set RPM Obsoletes so that, when upgrading, the new package would be installed even though it didn't exist before.  Unfortunately this didn't work for those using DNF instead of yum for package management.

It turns out that DNF treats RPM obsoletes differently than yum, and it’s unclear right now how package splitting is actually supposed to work with DNF.  You can track the issue here.

The Workaround

If you suddenly find yourself without Wi-Fi, find a wired network connection and:

dnf install NetworkManager-wifi
systemctl restart NetworkManager

and harmony will return to the Universe.  We understand the pain, will continue to monitor the situation, and will update the Fedora NetworkManager packages when DNF has a solution.

June 18, 2014

dialog conversion status, 99 to go

 Converting LibreOffice dialogs to .ui format, 99 conversions remaining

We've now converted all but 99 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format. This is due to the much appreciated efforts of Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, who are tackling the last bunch of hard to find or hard to convert ones.


Current conversion stats are:
741 .ui files currently exist
There are 46 unconverted dialogs
There are 53 unconverted tabpages
An estimated additional 99 .ui are required
We are 88% of the way through.

June 17, 2014

Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems

(Just a small heads-up: I don't blog as much as I used to; I nowadays update my Google+ page a lot more frequently. You might want to subscribe to that if you are interested in more frequent technical updates on what we are working on.)

In the past weeks we have been working on a couple of features for systemd that enable a number of new usecases I'd like to shed some light on. Taking advantage of the /usr merge that a number of distributions have completed, we want to bring runtime behaviour of Linux systems to the next level. With the /usr merge completed, most static vendor-supplied OS data is found exclusively in /usr; only a few additional bits in /var and /etc are necessary to make a system boot. On this we can build to enable a couple of new features:

  1. A mechanism we call Factory Reset shall flush out /etc and /var, but keep the vendor-supplied /usr, bringing the system back into a well-defined, pristine vendor state with no local state or configuration. This functionality is useful across the board from servers, to desktops, to embedded devices.
  2. A Stateless System goes one step further: a system like this never stores /etc or /var on persistent storage, but always comes up with pristine vendor state. On systems like this every reboot acts as a factory reset. This functionality is particularly useful for simple containers or systems that boot off the network or read-only media, and receive all configuration they need during runtime from vendor packages or protocols like DHCP or are capable of discovering their parameters automatically from the available hardware or periphery.
  3. Reproducible Systems multiply a vendor image into many containers or systems. Only local configuration or state is stored per-system, while the vendor operating system is pulled in from the same, immutable, shared snapshot. Each system hence has its private /etc and /var for receiving local configuration, however the OS tree in /usr is pulled in via bind mounts (in case of containers) or technologies like NFS (in case of physical systems), or btrfs snapshots from a golden master image. This is particularly interesting for containers where the goal is to run thousands of container images from the same OS tree. However, it also has a number of other usecases, for example thin client systems, which can boot the same NFS share a number of times. Furthermore this mechanism is useful to implement very simple OS installers that simply unserialize a /usr snapshot into a file system, install a boot loader, and reboot.
  4. Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

Concepts

A number of Linux-based operating systems have tried to implement some of the schemes described above in one way or another. Particularly interesting are GNOME's OSTree, CoreOS and Google's Android and ChromeOS. They generally found different solutions for the specific problems you have when implementing schemes like this, sometimes taking shortcuts that keep only the specific case in mind, and cannot cover the general purpose. With systemd now being at the core of so many distributions and deeply involved in bringing up and maintaining the system we came to the conclusion that we should attempt to add generic support for setups like this to systemd itself, to open this up for the general purpose distributions to build on. We decided to focus on three kinds of systems:

  1. The stateful system, the traditional system as we know it with machine-specific /etc, /usr and /var, all properly populated.
  2. Startup without a populated /var, but with configured /etc. (We will call these volatile systems.)
  3. Startup without either /etc or /var. (We will call these stateless systems.)

A factory reset is just a special case of the latter two modes, where the system boots up without /var and /etc but the next boot is a normal stateful boot like the first described mode. Note that a mode where /etc is flushed, but /var is not, is not something we intend to cover (why? well, the user ID question becomes much harder, see below, and we simply saw no usecase for it worth the trouble).

Problems

Booting up a system without a populated /var is relatively straight-forward. With a few lines of tmpfiles configuration it is possible to populate /var with its basic structure in a way that is sufficient to make a system boot cleanly. systemd version 214 and newer ship with support for this. Of course, support for this scheme in systemd is only a small part of the solution. While a lot of software reconstructs the directory hierarchy it needs in /var automatically, a lot of it does not. In cases like this it is necessary to ship a couple of additional tmpfiles lines that set up at boot time the necessary files or directories in /var to make the software operate, similar to what RPM or DEB packages would set up at installation time.

Booting up a system without a populated /etc is a more difficult task. In /etc we have a lot of configuration bits that are essential for the system to operate, for example and most importantly system user and group information in /etc/passwd and /etc/group. If the system boots up without /etc there must be a way to replicate the minimal information necessary in it, so that the system manages to boot up fully.

To make this even more complex, in order to support "offline" updates of /usr that are replicated into a number of systems possessing private /etc and /var, there needs to be a way for these directories to be upgraded transparently when necessary, for example by recreating caches like /etc/ld.so.cache or adding missing system users to /etc/passwd on next reboot.
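
Conceptually that check is simple. Here is a sketch of the idea in Python (the marker path and the function are made up for illustration; systemd's actual mechanism, described below, is based on the mtime of /usr):

    import os

    UPDATE_MARKER = "/etc/.updated"    # hypothetical marker file touched after the last update run

    def usr_needs_update():
        usr_mtime = os.stat("/usr").st_mtime
        try:
            marker_mtime = os.stat(UPDATE_MARKER).st_mtime
        except FileNotFoundError:
            return True                      # no marker yet (or /etc is empty): run the update services
        return usr_mtime > marker_mtime      # /usr changed since /etc and /var were last regenerated

    # If this returns True, rebuild caches like /etc/ld.so.cache, add missing system users, etc.,
    # then touch the marker so the next boot can skip the work.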

Starting with systemd 215 (yet unreleased, as I type this) we will ship with a number of features in systemd that make /etc-less boots functional:

  • A new tool systemd-sysusers has been added. It introduces a new drop-in directory /usr/lib/sysusers.d/. Minimal descriptions of necessary system users and groups can be placed there. Whenever the tool is invoked it will create these users in /etc/passwd and /etc/group should they be missing. It is only suitable for creating system users and groups, not for normal users. It will write to the files directly via the appropriate glibc APIs, which is the right thing to do for system users. (For normal users no such APIs exist, as the users might be stored centrally on LDAP or suchlike, and they are out of focus for our usecase.) The major benefit of this tool is that system user definition can happen offline: a package simply has to drop in a new file to register a user. This makes system user registration declarative instead of imperative -- which is the way system users are traditionally created from RPM or DEB installation scripts. By being declarative it is easy to replicate the users on next boot to a number of system instances.

    To make this new tool interesting for packaging scripts we make it easy to alternatively invoke it during package installation time, thus being a good alternative to invocations of useradd -r and groupadd -r.

    Some OS designs use a static, fixed user/group list stored in /usr as primary database for users/groups, with fixed UID/GID mappings. While this works for specific systems, this cannot cover the general purpose. As the UID/GID range for system users/groups is very small (only containing 998 users and groups on most systems), the best has to be made from this space and only UIDs/GIDs necessary on the specific system should be allocated. This means allocation has to be dynamic and adjust to what is necessary.

    Also note that this tool has one very nice feature: in addition to fully dynamic, and fully static UID/GID assignment for the users to create, it supports reading UID/GID numbers off existing files in /usr, so that vendors can make use of setuid/setgid binaries owned by specific users.

  • We also added a default user definition list which creates the most basic users the system and systemd need. Of course, very likely downstream distributions might need to alter this default list, add new entries and possibly map specific users to particular numeric UIDs.
  • A new condition ConditionNeedsUpdate= has been added. With this mechanism it is possible to conditionalize execution of services depending on whether /usr is newer than /etc or /var. The idea is that various services that need to be added into the boot process on upgrades make use of this to not delay boot-ups on normal boots, but run as necessary should /usr have been updated since the last boot. This is implemented based on the mtime timestamp of /usr: if the OS has been updated the packaging software should touch the directory, thus informing all instances that an upgrade of /etc and /var might be necessary.
  • We added a number of service files that make use of the new ConditionNeedsUpdate= switch, and run a couple of services after each update. Among them are the aforementioned systemd-sysusers tool, as well as services that rebuild the udev hardware database, the journal catalog database and the library cache in /etc/ld.so.cache.
  • If systemd detects an empty /etc at early boot it will now use the unit preset information to enable all services by default that the vendor or packager declared. It will then proceed booting.
  • We added a new tmpfiles snippet that is able to reconstruct the most basic structure of /etc if it is missing.
  • tmpfiles also gained the ability to copy entire directory trees into place should they be missing. This is particularly useful for copying certain essential files or directories into /etc without which the system refuses to boot. Currently the most prominent candidates for this are /etc/pam.d and /etc/dbus-1. In the long run we hope that packages can be fixed so that they always work correctly without configuration in /etc. Depending on the software this means that they should come with compiled-in defaults that just work should their configuration file be missing, or that they should fall back to static vendor-supplied configuration in /usr that is used whenever /etc doesn't have any configuration. Both the PAM and the D-Bus case are probably candidates for the latter. Given that there are probably many cases like this we are working with a number of folks to introduce a new directory called /usr/share/etc (name is not settled yet) to major distributions, that always contain the full, original, vendor-supplied configuration of all packages. This is very useful here, so that there's an obvious place to copy the original configuration from, but it is also useful completely independently as this provides administrators with an easy place to diff their own configuration in /etc against to see what local changes are in place.
  • We added a new --tmpfs= switch to systemd-nspawn to make testing of systems with unpopulated /etc and /var easy. For example, to run a fully state-less container, use a command line like this:

    # systemd-nspawn -D /srv/mycontainer --read-only --tmpfs=/var --tmpfs=/etc -b

    This command line will invoke the container tree stored in /srv/mycontainer in a read-only way, but with a (writable) tmpfs mounted to /var and /etc. With a very recent git snapshot of systemd invoking a Fedora rawhide system should mostly work OK, modulo the D-Bus and PAM problems mentioned above. A later version of systemd-nspawn is likely to gain a high-level switch --mode={stateful|volatile|stateless} that combines this into simple switches reusing the vocabulary introduced earlier.

What's Next

Pulling this all together we are very close to making boots with empty /etc and /var on general purpose Linux operating systems a reality. Of course, while doing the groundwork in systemd gets us some distance, there's a lot of work left. Most importantly: the majority of Linux packages are simply incompatible with this scheme the way they are currently set up. They do not work without configuration in /etc or state directories in /var; they do not drop system user information in /usr/lib/sysusers.d. However, we believe it's our job to do the groundwork, and to start somewhere.

So what does this mean for the next steps? Of course, currently very little of this is available in any distribution (simply already because 215 isn't even released yet). However, this will hopefully change quickly. As soon as that is accomplished we can start working on making the other components of the OS work nicely in this scheme. If you are an upstream developer, please consider making your software work correctly if /etc and/or /var are not populated. This means:

  • When you need a state directory in /var and it is missing, create it first. If you cannot do that, because you dropped privileges or suchlike, please consider dropping in a tmpfiles snippet that creates the directory with the right permissions early at boot, should it be missing.
  • When you need configuration files in /etc to work properly, consider changing your application to work nicely when these files are missing, and automatically fall back to either built-in defaults, or to static vendor-supplied configuration files shipped in /usr, so that administrators can override configuration in /etc but if they don't the default configuration counts.
  • When you need a system user or group, consider dropping in a file into /usr/lib/sysusers.d describing the users. (Currently documentation on this is minimal, we will provide more docs on this shortly.)

If you are a packager, you can also help on making this all work:

  • Ask upstream to implement what we describe above, possibly even preparing a patch for this.
  • If upstream will not make these changes, then consider dropping in tmpfiles snippets that copy the bare minimum of configuration files to make your software work from somewhere in /usr into /etc.
  • Consider moving from imperative useradd commands in packaging scripts, to declarative sysusers files. Ideally, this is shipped upstream too, but if that's not possible then simply adding this to packages should be good enough.

Of course, before moving to declarative system user definitions you should consult with your distribution whether their packaging policy even allows that. Currently, most distributions will not, so we have to work to get this changed first.

Anyway, so much about what we have been working on and where we want to take this.

Conclusion

Before we finish, let me stress again why we are doing all this:

  1. For end-user machines like desktops, tablets or mobile phones, we want a generic way to implement factory reset, which the user can make use of when the system is broken (which saves you support costs), or when they want to sell the device and get rid of their private data, and renew that "fresh car smell".
  2. For embedded machines we want a generic way to reset devices. We also want a way for every single boot to be identical to a factory reset, in a stateless system design.
  3. For all kinds of systems we want to centralize vendor data in /usr so that it can be strictly read-only, and fully cryptographically verified as one unit.
  4. We want to enable new kinds of OS installers that simply deserialize a vendor OS /usr snapshot into a new file system, install a boot loader and reboot, leaving all first-time configuration to the next boot.
  5. We want to enable new kinds of OS updaters that build on this, and manage a number of vendor OS /usr snapshots in verified states, and which can then update /etc and /var simply by rebooting into a newer version.
  6. We want to scale container setups naturally, by sharing a single golden master /usr tree with a large number of instances that simply maintain their own private /etc and /var for their private configuration and state, while still allowing clean updates of /usr.
  7. We want to make thin clients that share /usr across the network work by allowing stateless bootups. During all the discussions on how /usr was to be organized this was frequently mentioned. A setup like this has so far only worked in very specific cases; with this scheme we want to make it work in the general case.

Of course, we have no illusions: just doing the groundwork for all of this in systemd doesn't make it a real-life solution yet. Also, it's very unlikely that all of Fedora (or any other general purpose distribution) will support this scheme for all its packages soon. However, we are quite confident that the idea is convincing, that we need to start somewhere, and that getting the most core packages adapted to this shouldn't be out of reach.

Oh, and of course, the concepts behind this are really not new, we know that. However, what's new here is that we are trying to make them available in a general purpose OS core, instead of in special purpose systems.

Anyway, let's get the ball rolling! Let's make stateless systems a reality!

And that's all I have for now. I am sure this leaves a lot of questions open. If you have any, join us on IRC on #systemd on freenode or comment on Google+.

DNF v.s. Yum

A lot has been said on fedora-devel in the last few weeks about DNF and Yum. I thought it might be useful to contribute my own views, considering I’ve spent the last half-decade consuming the internal Yum API and the last couple of years helping to design the replacement with about half a dozen members of the packaging team here at Red Hat. I’m also a person who unsuccessfully tried to replace Yum completely with Zif in Fedora a few years ago, so I know quite a bit about packaging systems and metadata parsing.

From my point of view, the hawkey depsolving library that DNF is designed upon is well designed, optimised and itself built on a successful low-level SAT library that SUSE has been using for years on production level workloads. The downloading and metadata parsing component used by DNF, librepo, is also well designed and complements the hawkey API nicely.

Rather than use the DNF framework directly, PackageKit uses librepo and hawkey to share 80% of the mechanism between PK and DNF. From what I’ve seen of the DNF codebase it’s nice, with unit tests and lots of the older compatibility cruft removed; the only reason it’s not used in PK is that the daemon is written in C and we didn’t want to marshal everything via Python for latency reasons.

So, from my point of view, DNF is a new command line tool built on 3 new libraries. Its history may be that of a fork from yum, but it resembles more a 2014 rebuilt American hot-rod with all-new motor-sport parts, apart from the 1965 modified and strengthened chassis. Renaming DNF to Yum2 would send entirely the wrong message; it’s a new project with a new team and new goals.

June 16, 2014

datarootdir v.s. datadir

Public Service Announcement: Debian helpfully defines datadir to be /usr/share/games for some packages, which means that the AppData and MetaInfo files get installed into /usr/share/games/appdata which isn’t picked up by the metadata parsers.

It’s probably safer to install the AppData files into $datarootdir/appdata as this will work even if a distro has redefined datadir to be something slightly odd. I’ve changed the examples on the AppData page, but if you maintain a game on Debian with AppData then this might affect you when Debian starts extracting AppStream metadata in the next few weeks. Anyone affected will be getting email in the next few days, although it only looks to affect very few people.
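
For autotools-based projects, the corresponding install rule in Makefile.am could look like this sketch (the file name is a placeholder for your own AppData file):

appdatadir = $(datarootdir)/appdata
appdata_DATA = myapp.appdata.xml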

June 13, 2014

A new default theme for GTK+

This has been a long time coming. We’ve wanted to replace the default GTK+ theme for a very long time.

-#define DEFAULT_THEME_NAME "Raleigh"
+#define DEFAULT_THEME_NAME "Adwaita"

The Raleigh theme that we’ve used as the default until now has some advantages:

  • It is very simple
  • No dependency on a theme engine (external or internal)
  • It does not use a lot of resources

But there is no nice way of putting it: it is very ugly.

Raleigh

This may not be such a big deal on Linux, where distributions generally have ‘their’ theme, not to mention the many packaged and readily available themes. So, basically no Linux user ever sees the default GTK+ theme. The situation is very different on other platforms, where GTK+ is often bundled with applications, and it may not be easy to install themes, or to get the bundled GTK+ to use them.

For a very long time, we’ve held onto the belief that the theming system is a way to make applications blend smoothly into the platform, and that there should be a native theme for each major platform that GTK+ can run on.

This is a great idea in theory. In practice, it has not worked out so well. The one platform where we have a native theme is Windows, and even though the ms-windows theme has received much-appreciated attention and updates from Руслан Ижбулатов this cycle, it is still incomplete and has problems with some recent GTK+ features.

(No need to panic though. Even if it is no longer the default, the ms-windows theme will still be available.)

Adwaita, on the other hand, is a very complete theme that has received a lot of attention over the last three years. Not only does it support all recent GTK+ features; many of the CSS improvements that the GTK+ theming machinery has received in the last years were direct responses to the needs of the Adwaita designers.

Adwaita

With Adwaita, GTK+ applications can rely on having a 100% complete theme that will look and feel the same on all supported platforms. A theme that is constantly receiving a lot of love and attention and keeps up with new GTK+ features.

Another big plus: Adwaita has a high-quality dark variant, which will now also be available everywhere.

Adwaita dark

So, why not do this switch earlier? After all, Adwaita has been around for a while.

The main reason is that we did not want to lose the ‘no theme engine’ characteristic of the default theme.  Theme engines, and loadable modules in general, are something we’ve been moving away from for a while now. They are

  • questionable from a security perspective (executable code that’s shipped separately and inserted into all your applications)
  • associated with search path problems
  • dependent on stable APIs for many things that are more or less internal

(No reason to panic, though. Theme engines will not stop working overnight in GTK+ 3.)

The alternative to engines that we want themes to use is CSS. Our CSS implementation has only recently become powerful enough to replace the last features from the Adwaita theme engine  (focus rectangles and menu shadows). That is why we are doing the switch now.

One of the consequences of moving Adwaita into GTK+ is that we will no longer ship application-specific theming as part of the theme (many core GNOME applications currently have small amounts of CSS glue inside gnome-themes-standard). Where this is still relevant, applications should just install it themselves. Here is an example of how to do this.
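
To give a flavour of what that looks like, here is a minimal C sketch of an application installing its own CSS glue at startup; the selector and colours are made up for illustration, and real glue will obviously be application-specific:

#include <gtk/gtk.h>

static void
install_app_css (void)
{
  GtkCssProvider *provider = gtk_css_provider_new ();
  const gchar *css = ".my-app-sidebar { background-color: shade(@theme_bg_color, 0.9); }";

  /* Load the application-specific CSS and register it for the whole screen,
   * with application priority so it layers on top of the theme. */
  gtk_css_provider_load_from_data (provider, css, -1, NULL);
  gtk_style_context_add_provider_for_screen (gdk_screen_get_default (),
                                             GTK_STYLE_PROVIDER (provider),
                                             GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);
  g_object_unref (provider);
}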

Thanks for the huge amount of recent work on Adwaita go to Jakub Steiner, Lapo Calamandrei, Jon McCann and Cosimo Cecchi.

June 11, 2014

Application Addons in GNOME Software

Ever since we rolled out the GNOME Software Center, people have wanted to extend it to do other things. One thing that was very important to the Eclipse developers was a way of adding addons to the main application, which seems a sensible request. We wanted to make this generic enough so that it could be used in gedit and similar modular GNOME and KDE applications. We’ve deliberately not targeted Chrome or Firefox, as these applications will do a much better job compared to the package-centric operation of GNOME Software.

So. Do you maintain a plugin or extension that should be shown as an addon to an existing desktop application in the software center? If the answer is “no” you can probably stop reading, but otherwise, please create a file something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name Here <your@email.com> -->
<component type="addon">
<id>gedit-code-assistance</id>
<extends>gedit.desktop</extends>
<name>Code Assistance</name>
<summary>Code assistance for C, C++ and Objective-C</summary>
<url type="homepage">http://projects.gnome.org/gedit</url>
<metadata_license>CC0-1.0</metadata_license>
<project_license>GPL-3.0+</project_license>
<updatecontact>richard_at_hughsie.com</updatecontact>
</component>

This should be installed as /usr/share/appdata/gedit-code-assistance.metainfo.xml — this isn’t just another file format, it is the main component schema used internally by AppStream. Some notes when creating the file:

  • You can use anything as the <id> but it needs to be unique and sensible and also match the .metainfo.xml filename prefix
  • You can use appstream-util validate gedit-code-assistance.metainfo.xml if you install appstream-glib from git.
  • Don’t put the application name you’re extending in the <name> or <summary> tags — so you’d use “Code Assistance” rather than “GEdit Code Assistance”
  • You can omit the <url> if it’s the same as the upstream project
  • You don’t need to create the metainfo.xml if the plugin is typically shipped in the same package as the application you’re extending
  • Please use <_name> and <_summary> if you’re using intltool to translate either your desktop file or the existing appdata file, and remember to add the file to POTFILES.in if you use one (see the sketch below)
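
As a sketch of what that looks like in practice (file names are illustrative), the translatable source file would contain:

<_name>Code Assistance</_name>
<_summary>Code assistance for C, C++ and Objective-C</_summary>

and POTFILES.in would gain a line along these lines:

[type: gettext/xml] data/gedit-code-assistance.metainfo.xml.in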

Please grab me on IRC if you have any questions or concerns, or leave a comment here. Kalev is currently working on the GNOME Software UI side, and I only finished the metadata extractor for Fedora today, so don’t expect the feature to be visible until GNOME 3.14 and Fedora 21.

June 06, 2014

A GtkInspector update

I first introduced GtkInspector a few weeks ago. Since then, it has made it into the GTK+ 3.13.2 development release and is now available in Fedora rawhide, which should hopefully make debugging of GTK+ applications in Fedora easier.

I’ve continued to work on the inspector, and it is time to give an update on what it can do now. So far, my focus has been mostly on covering more of GTK+’s features at a basic level, and not so much on adding sophisticated debugging support. That will probably change over time.

In unsorted order, here are some of the recent additions:
Inspector warning
We show a warning dialog now when the inspector window is opened with a keyboard shortcut. This is meant as a safety net for users who may end up here by accident when they mistype an application shortcut. We don’t want them to get scared by the inspector, so we offer them a quick way out.

The warning dialog can be turned off permanently with a setting, so frequent users of the inspector can avoid it.

Video: better widget picking (http://blogs.gnome.org/mclasen/files/2014/06/better-picking.webm)

When using the mouse to pick a widget, we now lower the inspector window, so it doesn’t get in the way.

Resources

This tab shows all the embedded resources in the application (as well as in used libraries). This is perhaps not such a great debugging feature, but still useful information. We show the type and size of the resources, and display e.g. images as such.

Property Editing: Fonts

Editing of properties has been changed from a cell renderer to popovers. Popovers let us use much more room so that we can e.g. embed a font chooser to edit font properties.

Property Editing: Attribute

Some other properties have gotten their own editors. Here, we are editing a cell renderer property that is mapped to a tree model column. The editor allows changing the attribute mapping, and the Properties button lets us jump to the tree model in question, where it will open the Data tab:

Tree Model
It can be useful to see the data in the tree model, to verify that we have used the right column in the attribute mapping.

Property Editing: Actions

Another custom property editor has been added for action names. The Properties button lets us jump to the Actions tab of the widget where the action is defined:

Actions

Actions can be activated from here. For stateful actions, we also allow setting their state:

Action Editing

One feature I am proud of is that when gesture support was merged in GTK+ a few weeks ago, the branch already came with GtkInspector support for gestures.

Gestures

I hope that this can become a model for the future, and that we can make ‘GtkInspector support’ an expected deliverable for every new GTK+ feature, just like accessibility or internationalization are now.

Last, but not least, the inspector is very useful in sorting out theme questions – the GNOME designers have used it quite a bit while doing a major refactoring of the Adwaita theme that will land soon.

Of course, one can also use it for more silly things, like figuring out how to do translucent headers:

Translucent titlebars

I’m very interested in hearing both success stories of GtkInspector helping to solve problems and gaps where GtkInspector is lacking functionality.

May 29, 2014

Why We Need the Outreach Program for Women and More Outreach

Thank you to everyone who spoke up in the last few days in support of the Outreach Program for Women. The reason we need to have this program is that there are many challenges women encounter on their path to technology, and by working to address this disadvantage, we not only do the right thing and help women access the rewards of participating in free software, we also get awesome contributors whom we would otherwise have missed. Girls and women are systematically discouraged from exploring technology and from participating in the more hobbyist areas of technology, which are especially male dominated. They don’t typically have the kind of social support or encouragement men do to contribute to free software. By the time they work on their computer science degrees, they often feel behind their male peers in their experience, underestimate their abilities, and are less likely to apply for prestigious programs like Google Summer of Code.

Women often encounter sexist behavior when participating in the free software community. Recent examples from OPW interns include men being surprised about one of them attending a conference and commenting about the appearance of another in a professional context. Here is a quote from the first intern at a recent IRC meeting for interns: “I attended FOSDEM and a couple of guys came to tell me they were surprised to see a girl there… You feel like you’re not supposed to be there, like you’re doing something wrong… So it was nice to have this space here, and feel like I was in the right place.” When an organization posted an article on Facebook about another intern’s impressive work, among many congratulatory messages there were comments about the developer’s appearance. In addition to people speaking up about the inappropriateness of such behavior, support groups for women joining our community can help address the immediate negative feelings after such incidents.

The brilliance and drive of the women we accept for OPW have always left me in awe. To think that, judging by the historical data, they would not have been likely to get involved otherwise and we would not have had their contributions is sad. The program has had 170 interns so far, with 40 of them participating in the current round with 16 free software organizations. We had 122 applicants for this round who worked with a mentor and completed the required contribution. Because of the many outreach efforts, including OPW, the percent of women among GSoC participants increased from 7.1% in 2011 to 9.8% in 2014. OPW encourages women who are students and coders to apply for GSoC as well, and of the ones who applied for both programs at the same time, 26 were accepted to participate in GSoC. An additional 11 OPW participants went on to participate in GSoC in a later round. 13 found employment with sponsoring organizations. 15 gave full session talks at conferences. This is a sizable change, but we are in the beginning of a long road.

The group, including 3 OPW alums and 4 mentors, at the Feminist Hacker Lounge at PyCon 2014.

There was only one girl among 40 teenage Google Code-in grand prize winners in the last two years. About 10% of the participants were girls. While OPW is effective in involving college and post-college women in free software, we are set to run the program forever, unless we also start involving high school girls in free software. For the next Google Code-in, I’d like to engage our network of OPW alums, mentors, and supporters in mentoring high school girls in Google Code-in participation. I’d like to invite people to step up to start and lead this effort. The first step would be to form a local group of people to teach workshops similar to OpenHatch’s “Open Source Comes to Campus” at local programming for girls groups, programming camps, or high school CS classes. You would then need to report on your experience and set up the resources for people to replicate it in their locations. The next step after the workshops would be to organize meetups at local libraries for mentoring in ongoing free software contributions and Google Code-in participation. This will be the Outreach Program for Girls.

There are many other groups of people underrepresented in free software – people from the Middle East, Africa, and East Asia; people of color in North America and Europe; people from disadvantaged socio-economic backgrounds; and people with disabilities. Empowerment and access are the main tenets of free software, and we need to do so much more to involve all these people. As Karen Sandler recently said to me, the current program is incomplete. Once we have a solid financial situation for OPW, we’d like to evolve it into OP-UP – the Outreach Program for Underrepresented People. Lukas Blakk is starting the Ascend Project for Mozilla, which will offer a 6-week course in contributing to free software, complete with financial support, to people from disadvantaged backgrounds. The first course will end in October, and I hope we can open the next round of OPW to graduates of this course as a pilot for OP-UP.

Being the community which fostered OPW and brought many free software organizations together to involve more of humanity in developing free software is earning the GNOME Foundation a lot of credit. It helps more people find out about GNOME and learn about the amazing product and community we have to offer. People notice how polished GNOME is. We went from 985 people contributing for the 3.10 release to 1,140 people contributing for 3.12! We shouldn’t forget how much we have achieved, when we consider what our software is missing or who our community is missing, and continue on to make things better.

May 28, 2014

configure fails with "No package 'foo' found" - and how to fix it

A common error when building from source is something like the error below:


configure: error: Package requirements (foo) were not met:

No package 'foo' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Seeing that can be quite discouraging but luckily, in many cases, it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean?

pkg-config is a tool that provides compiler flags, library dependencies and a couple of other things to correctly link to external libraries. For more details on it see Dan Nicholson's guide. If a build system requires a package foo, pkg-config searches for a file foo.pc in the following directories: /usr/lib/pkgconfig, /usr/lib64/pkgconfig, /usr/share/pkgconfig, /usr/local/lib/pkgconfig, /usr/local/share/pkgconfig. The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.
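
To see for yourself what pkg-config provides, you can query any installed package; the package name and output below are made up purely for illustration:

$> pkg-config --cflags --libs foo
-I/usr/include/foo -lfoo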

What package provides the foo.pc file?

In many cases the package you need is the development version of the package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum provides a great shortcut to install any pkg-config dependency:


$> yum install "pkgconfig(foo)"

This will automatically search for and install the right package, including its dependencies.

apt-get requires a bit more effort:

$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
foo-dev
$> apt-get install foo-dev

For those running Arch and pacman, the sequence is:

$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
extra/foo
$> pacman -S extra/foo

zypper is the same as yum:

$> zypper in 'pkgconfig(foo)'

Once that's done you can re-run configure and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post.

Where does the dependency come from?

In most projects using autotools the dependency is specified in the file configure.ac and looks roughly like one of these:


PKG_CHECK_MODULES(FOO, [foo])
PKG_CHECK_MODULES(DEPENDENCIES, foo [bar >= 1.4] banana)

The first argument is simply a name that is used in the build system; you can ignore it. After the comma is the list of space-separated dependencies. In this case this means we need foo.pc, bar.pc and banana.pc, and more specifically we need a bar.pc that is version 1.4 or newer. To install all three, follow the above steps and you're good.
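
For completeness: the name in the first argument becomes the prefix of the variables the build system uses internally, so PKG_CHECK_MODULES(FOO, [foo]) provides FOO_CFLAGS and FOO_LIBS, which the Makefiles consume roughly like this sketch:

# Makefile.am (illustrative)
myprog_CFLAGS = $(FOO_CFLAGS)
myprog_LDADD  = $(FOO_LIBS)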

My version is wrong!

It's not uncommon to see the following error after installing the right package:


configure: error: Package requirements (foo >= 1.9) were not met:

Requested 'foo >= 1.9' but version of foo is 1.8

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated - with more potential errors. Unless you are willing to go into the deep end, I recommend moving on and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependencies from source, and that may then require building their dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise - sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll now use cairo as an example instead of foo so you see actual data. On rpm-based distributions like Fedora, run:


$> yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Skipping unreadable repository '///etc/yum.repos.d/SpiderOak-stable.repo'
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
URL : http://cairographics.org
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
:
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.

The important field here is the URL line - go to that and you'll find the source tarballs. That should be true for most projects, but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:

$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett <dajobe>
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Homepage: http://cairographics.org/
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27

In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:

$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
URL : http://cairographics.org/
Licenses : LGPL MPL
....

zypper (Tizen, SailfishOS, Meego and others) doesn't have an interface for this, but you can run rpm on the package that you installed.

$> rpm -qi cairo-devel
Name : cairo-devel
[...]
URL : http://cairographics.org/

This command would obviously work on other rpm-based distributions too (Fedora, RHEL, ...). Unlike yum, it does require the package to be installed, but by the time you get here you've already installed it anyway :)

Now to the complicated bit: In most cases, you shouldn't install the new version over the system version because you may break other things. You're better off installing the dependency into a custom folder ("prefix") and pointing pkg-config to it. So let's say you downloaded the cairo tarball, now you need to run:


$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run configure again

So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config that you want it to look there as well - so you set PKG_CONFIG_PATH. If you re-run configure in the original project, pkg-config will find the new version and configure should succeed. If you have multiple packages that all require a newer version, install them into the same path and you only need to set PKG_CONFIG_PATH once. Remember you need to set PKG_CONFIG_PATH in the same shell as you are running configure from.

If you keep seeing the version error the most common problem is that PKG_CONFIG_PATH isn't set in your shell, or doesn't point to the new cairo.pc file. A simple way to check is:


$> pkg-config --modversion cairo
1.13.1

Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH, just re-set it. If it still doesn't work do this:

$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc
prefix=/usr
exec_prefix=/usr
libdir=/usr/lib64
includedir=/usr/include

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo

If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is: because you built against a newer library than the one on your system, you need to point it to use the new libraries.


$> export LD_LIBRARY_PATH=$HOME/dependencies/lib
and now you can, in the same shell, run your project.

Good luck!

AppData progress and the email deluge

In the last few days, I’ve been asking people to create and ship AppData files upstream. I’ve:

  • Sent 245 emails to upstream maintainers
  • Opened 38 launchpad bugs
  • Created 5 gnome.org bugs
  • Opened 72 sourceforge feature requests
  • Opened 138 github issues
  • Created 8 bugs on Fedora trac
  • Opened ~20 accounts on random issue trackers
  • Used 17 “contact” forms

In doing this, I’ve visited over 600 upstream websites, helpfully identifying 28 that are stated as abandoned by their maintainer (and thus removed from the metadata). I’ve also blacklisted quite a few things that are not actually applications and not suitable for the software center.

I’ve deliberately not included GNOME in this sweep, as a lot of the core GNOME applications already have AppData and most of the gnomies already know what to do. I also didn’t include XFCE applications, as XFCE has agreed on the mailing list to adopt AppData and is in the process of doing this already. KDE is just working out how to merge the various files created by Matthias, and I’ve not heard anything from LXDE or MATE. So, I only looked at projects not affiliated with any particular desktop.

So far, the response has been very positive, with at least 10% of the requests having been actioned and some projects even doing new releases that I’ve been slowly uploading into Fedora. Another ~10% of requests are acknowledgements from maintainers that they will do this sometime before the next release. I have found a lot of genuinely interesting applications in my travels, and a lot of junk. The junk is mostly unmaintained, and so my policy of not including applications that have not had an upstream release in the last 5 years (unless they have AppData manually added by the distro packager) seems to be valid.

At least 5 of the replies have been very negative, e.g. “how dare you ask me to do something — do it yourself” and things like “Please do not contact me again – I don’t want any new users”. The vast majority of people have not responded yet — so I’m preparing myself for a deluge over the next few weeks from the people that care.

My long term aim is to only show applications in Fedora 22 with AppData, so it seemed only fair to contact the various upstream projects about an initiative they’re probably not familiar with. If we don’t get > 50% of applications in Fedora with the extra data we’ll have to reconsider such a strong stance. So far we’ve reached over 20%, which is pretty impressive for a standard I’ve been pushing for such a short amount of time.

So, if you’ve got an email from me, please read it and reply — thanks.

May 23, 2014

DisplayPort MST dock support for Fedora 20 copr repo.
So I've created a copr

http://copr.fedoraproject.org/coprs/airlied/mst/

With a kernel + intel driver that should provide support for DisplayPort MST on Intel Haswell hardware. It doesn't do any of the fancy Dell monitor stuff yet; it's primarily for people who have Lenovo or Dell docks and laptops that currently can't do multihead.

The kernel source is from this branch which backports a chunk of stuff to v3.14 to support this.

http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-i915-mst-v3.14

It might still have some bugs and crashes, but the basics should in theory work.

May 19, 2014

Introducing tellme, a text-to-speech notifier

I've been hacking on a little tool the last couple of days and I think it's ready for others to look at it and provide suggestions to improve it. Or possibly even tell me that it already exists, in which case I'll save a lot of time. "tellme" is a simple tool that uses text-to-speech to let me know when a command has finished. This is useful for commands that run for a couple of minutes - you can go off and read something and the computer tells you when it's done, instead of you polling every couple of seconds to check. A simple example:


tellme sudo yum update

This runs yum update, and eventually says in a beautiful totally-not-computer-sounding voice "finished yum update successfully".
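
Conceptually, that first shell-script version boiled down to something like the sketch below; it uses espeak as the speech engine purely for illustration, and the real tool is more careful about quoting and error reporting:

tellme() {
    if "$@"; then
        espeak "finished $* successfully"
    else
        espeak "finished $* with errors"
    fi
}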

That was the first incarnation, which was a shell script. I've since started putting a few more features in (now in Python) and it now supports per-command configuration and a couple of other semi-smart things. For example:


whot@yabbi:~/xorg/xserver/Xi> tellme make

This eventually says "finished xserver make successfully". With the default make configuration, it runs up the tree to search for a .git directory and then uses that as the basename for the voice output. That is useful when you rebuild all drivers simultaneously and the box tells you which ones finished and whether there was an error.

I put it up on github: https://github.com/whot/tellme. It's still quite rough, but workable. Have a play with it and feel free to send me suggestions.

The desktop and the developer
I was at the OpenStack Summit this week. The overwhelming majority of OpenStack deployments are Linux-based, yet the most popular laptop vendor (by a long way) at the conference was Apple. People are writing code with the intention of deploying it on Linux, but they're doing so under an entirely different OS.

But what's really interesting is the tools they're using to do so. When I looked over people's shoulders, I saw terminals and a web browser. They're not using Macs because their development tools require them, they're using Macs because of what else they get - an aesthetically pleasing OS, iTunes and what's easily the best trackpad hardware/driver combination on the market. These are people who work on the same laptop that they use at home. They'll use it when they're commuting, either for playing videos or for getting a head start so they can leave early. They use an Apple because they don't want to use different hardware for work and pleasure.

The developers I was surrounded by aren't the same developers you'd find at a technical conference 10 years ago. They grew up in an era that's become increasingly focused on user experience, and the idea of migrating to Linux because it's more tweakable is no longer appealing. People who spend their working day making use of free software (and in many cases even contributing or maintaining free software) won't run a free software OS because doing so would require them to compromise on things that they care about. Linux would give them the same terminals and web browser, but Linux's poorer multitouch handling is enough on its own to disrupt their workflow. Moving to Linux would slow them down.

But even if we fixed all those things, why would somebody migrate? The best we'd be offering is a comparable experience with the added freedom to modify more of their software. We can probably assume that this isn't a hugely compelling advantage, because otherwise it'd probably be enough to overcome some of the functional disparity. Perhaps we need to be looking at this differently.

When we've been talking about developer experience we've tended to talk about the experience of people who are writing software targeted at our desktops, not people who are incidentally using Linux to do their development. These people don't need better API documentation. They don't need a nicer IDE. They need a desktop environment that gives them access to the services that they use on a daily basis. Right now if someone opens an issue against one of their bugs, they'll get an email. They'll have to click through that in order to get to a webpage that lets them indicate that they've accepted the bug. If they know that the bug's already fixed in another branch, they'll probably need to switch to github in order to find the commit that contains the bug number that fixed it, switch back to their issue tracker and then paste that in and mark it as a duplicate. It's tedious. It's annoying. It's distracting.

If the desktop had built-in awareness of the issue tracker then they could be presented with relevant information and options without having to click through two separate applications. If git commits were locally indexed, the developer could find the relevant commit without having to move back to a web browser or open a new terminal to find the local checkout. A simple task that currently involves multiple context switches could be made significantly faster.

That's a simple example. The problem goes deeper. The use of web services for managing various parts of the development process removes the need for companies to maintain their own infrastructure, but in the process it tends to force developers to bounce between multiple websites that have different UIs and no straightforward means of sharing information. Time is lost to this. It makes developers unhappy.

A combination of improved desktop polish and spending effort on optimising developer workflows would stand a real chance of luring these developers away from OS X with the promise that they'd spend less time fighting web browsers, leaving them more time to get on with development. It would also help differentiate Linux from proprietary alternatives - Apple and Microsoft may spend significant amounts of effort on improving developer tooling, but they're mostly doing so for developers who are targeting their platforms. A desktop environment that made it easier to perform generic development would be a unique selling point.

I spoke to various people about this during the Summit, and it was heartening to hear that there are people who are already thinking about this and hoping to improve things. I'm looking forward to that, but I also hope that there'll be wider interest in figuring out how we can make things easier for developers without compromising other users. It seems like an interesting challenge.


May 15, 2014

Introducing GtkInspector

If you need to solve a tricky GTK+ problem in your application, gtkparasite is a very useful tool to have around. It lets you explore the widget hierarchy, change properties, tweak theme settings, and so on.

Unfortunately, gtkparasite is a tool for people ‘in the know’ – it is not part of GTK+, not advertised on our website, and not available out of the box on your average GTK+ installation.

At the Developer Experience hackfest in Berlin a few weeks ago, the assembled GTK+ developers discussed fixing this situation by making an interactive debugger like gtkparasite part of GTK+ itself. This way, it will be available whenever you run a GTK+ application, and we can develop and improve the debugging tools alongside the toolkit.

So, I’ve spent some of my spare time on this since the hackfest. The results are now in GTK+ master. I started from the gtkparasite code, but things have changed quite a bit, and some new things have appeared.

GtkInspector

If you want to give it a try yourself, you can just use the Control-Shift-I or Control-Shift-D shortcuts to open an inspector window in any application that is using GTK+ master, or you can just set the GTK_DEBUG=interactive environment variable. Note that the keybinding will only work if GTK+ can find the org.gtk.Debug settings schema, so you probably want to run the application under jhbuild.
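
For example, from a jhbuild shell you could launch one of the demos with the inspector enabled from the start like this:

$> GTK_DEBUG=interactive gtk3-demo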

Here is a video of GtkInspector in action.

Among the things you can see in the video are interactive picking of widgets for inspection, visual debugging of graphic updates and baseline alignment, changing of widget properties, theme tweaks and general version and environment information.

I’ve also put together a page with ideas for future improvements to this debugging tool. If you are looking for a fun project to work on, this might just be it!

Thanks to Christian Hammond for creating gtkparasite and maintaining (over many years) such a useful debugging tool.  The GTK+ inspector would not exist without it.

May 12, 2014

selected and unselected slides with mouse over
It will be possible in LibreOffice Impress 4.3 to distinguish between selected and unselected slides when the mouse-over highlight activates.

I mean, in previous versions, slide 2 here is drawn the same when the mouse is over it in the slide pane, regardless of whether it is selected or unselected.

 While in 4.3 the two modes are drawn respectively as

There's also a little subtle gradient added in for good measure.