August 22, 2014

ppc64le libreoffice
LibreOffice is now ported to ppc64le. make passes, testtools passes, and the resulting application is capable of headlessly converting documents to PDF. There's no reason to think it's any less capable than any other port, but I don't actually have a ppc64le machine, and transatlantic ssh tunnels aren't conducive to extensive UI testing.

The tricky bit of the port, as always, is the UNO bridge, especially because the ABI was changed for little endian.

https://bugs.openjdk.java.net/browse/JDK-8035647 is handy for getting links to the original ELFv2 ABI change commits to gcc/libffi.

https://ghc.haskell.org/trac/ghc/ticket/8965 is handy for a friendlier explanation of the change: if gcc can see that the arguments to the function being called will fit in registers, then no argument save area is created. That one stumped me for a while.
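If you have a ppc64le toolchain handy, that behaviour is easy to see for yourself (a minimal sketch; the file and function names are made up):

$ cat > caller.c <<'EOF'
int callee(int a, int b);                 /* both arguments fit in GPRs */
int caller(void) { return callee(1, 2); }
EOF
$ gcc -O2 -S caller.c -o -
# Under ELFv1 the caller's frame always reserves a 64-byte parameter save
# area; under ELFv2 gcc omits it here, so bridge code that blindly reads
# or writes arguments through that area ends up looking at garbage.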

August 15, 2014

dialog conversion status, 4 to go

Converting LibreOffice dialogs to .ui format, 4 left

I should go on vacation more often. On my return I find that Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, have now converted all but 4 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format.

Here's the list of the last four: one (monster) whose conversion is in progress, one that should ideally be removed in favour of a duplicate dialog, and two that have no known route to display them. Hacking the code temporarily to force those two to appear is probably no biggy.


Current conversion stats are:
820 .ui files currently exist
There are 3 unconverted dialogs
There is 1 unconverted tabpage
An estimated additional 4 .ui are required
We are 99% of the way through.

What's next? Well, *cough*, the above are all the dialogs and tabpages in the classic .src format. There are actually a host of ErrorBox, InfoBox and QueryBox elements that exist in the .src format as well.

These take just two pieces of information: a string to display, and some bits that set which buttons to show, e.g. cancel, close, ok + cancel, etc. We want to remove them in favour of the Gtk-alike MessageDialog, but we don't want to actually convert them to .ui format, because they are so simple it makes more sense to just reduce them to strings, as this sample commit demonstrates. This might even be possible to at least somewhat automate.

I've now updated count-todo-dialogs to display the count of those *Box elements that exist in .src file format, but I'll elide their count until the last 4 true dialogs+tabpages are gone.

August 10, 2014

Birthplace
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.


August 07, 2014

Post-GUADEC

  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bädert's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

August 04, 2014

Notes on Fedora on an Android device
A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones, to explore this new OS.

Those notes are probably not very useful on their own, but they might give a few hints to stuck Android developers.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC, which uses the Adreno 203 GPU, and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with hardware floating-point (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (the location is mentioned in the rules file itself).

Then, on the phone, turn on developer mode. Plug it in and run "adb devices"; you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more, is plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".
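For example (the binary name here is an assumption, use whichever build you grabbed):

$ adb push busybox-armv7l /data/local/tmp/busybox
$ adb shell chmod 755 /data/local/tmp/busybox
$ adb shell /data/local/tmp/busybox sh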

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card in ext4 and unpacking a Fedora system image in it, I popped it inside the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available around, but I couldn't pinpoint them while writing this article.
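For the curious, the core of such a launcher is roughly the following (a sketch only: the device node, mount point and shell path are assumptions, and it cannot work until the kernel grows the features discussed below):

# run as root on the device, e.g. from adb shell
mkdir -p /data/fedora
mount -t ext4 /dev/block/mmcblk1p1 /data/fedora
mount -o bind /dev /data/fedora/dev
mount -t proc proc /data/fedora/proc
mount -t sysfs sysfs /data/fedora/sys
chroot /data/fedora /bin/sh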

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it though) and run against a glibc setup rather than Android's bionic libc.

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page since, as seems to be the norm for Android(-ish) device makers, the vendor denies any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
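# One non-interactive way to flip them, using the scripts/config helper
# that ships in the kernel source tree (assuming this vendor tree has it):
$ ./scripts/config --enable DEVTMPFS --enable EXT4_FS_SECURITY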
$ make

We now have a zImage of the kernel. Launching "fastboot boot /path/to/zImage" didn't seem to work (it would have used the kernel only for the next boot), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.


$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/p8.img'
$ adb pull /mnt/sdcard/p8.img
$ bootunpack p8.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt mdraid dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.

Nice-to-haves

In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.

Conclusion

Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

July 30, 2014

you have a long road to walk, but first you have to leave the house
or why publishing code is STEP ZERO.

If you've been developing code internally for a kernel contribution, you've probably got a lot of reasons not to default to working in the open from the start: you probably don't work for Red Hat or another company with default-to-open policies, or perhaps you are scared of the scary kernel community and want to present a polished gem.

If your company is a pain with legal reviews etc., you have probably spent (wasted) months of engineering time on internal reviews and stuff, so you think all of this must matter, because why wouldn't it? You just spent (wasted) a lot of time on it, so it must matter.

So you have your polished codebase; why wouldn't those kernel maintainers love to merge it?

Then you publish the source code.

Oh look, you just left your house. The merging of your code is many, many miles distant, and you just started walking that road, just now: not when you started writing it, not when you started legal review, not when you rewrote it internally the 4th time. You just did it this moment.

You might have to rewrite it externally 6 times, you might never get it merged, it might be something your competitors are also working on, and the kernel maintainers would rather you cooperated with people your management would lose their minds over. That is the kernel development process.

step zero: publish the code. leave the house.

(lately I've been seeing this problem more and more, so I decided to write it up, and it really isn't directed at anyone in particular, I think a lot of vendors are guilty of this).

July 28, 2014

A talk in 9 images

My talk at GUADEC this year was about GTK+ dialogs. The first half of the talk consisted of a comparison of dialogs in GTK+ 2, in GTK+ 3 under gnome-shell and in GTK+ 3 under xfwm4 (as an example of an environment that does not favor client-side decorations).

The main take-away here should be that in 3.14, all GTK+ dialogs will again have traditional decorations if that is what works best in the environment they are used in.

[Screenshots: About dialogs, Preference dialogs, File choosers, Message dialogs, Error dialogs, Print dialogs, Font dialogs, Color dialogs, Action dialogs]

The second part of my talk discussed best practices for dealing with various issues that can come up with custom GTK+ dialogs; I’ve summarized the main points in this HowDoI page.

July 25, 2014

Dialogs and Coverity, current numbers

[Image: army massing]

Converting LibreOffice dialogs to .ui format, 54 conversions remaining

We've now converted all but 54 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format. This is due to the much appreciated efforts of Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, who are tackling the last bunch of hard to find or hard to convert ones.

Current conversion stats are:
778 .ui files currently exist
There are 20 unconverted dialogs
There are 34 unconverted tabpages
An estimated additional 54 .ui are required
We are 93% of the way through.

Coverity Defect Density: LibreOffice vs Average

According to Coverity's overview dashboard our current status is:

LibreOffice: 9,425,526 lines of code and 0.09 defect density

Open Source Defect Density By Project Size

Lines of Code (LOC)      Defect Density
Less than 100,000        0.35
100,000 to 499,999       0.5
500,000 to 1 million     0.7
More than 1 million      0.65
Note: Defect density is measured by the number of defects per 1,000 lines of code, identified by the Coverity platform. The numbers shown above are from our 2013 Coverity Scan Report, which analyzed 250 million lines of open source code.

July 24, 2014

Continuous testing and Wayland

The GNOME-Continuous continuous integration and delivery system has been helping us to keep the quality of the GNOME code base up for a while now.

It is doing a number of things:

  • Builds changed modules
  • Creates VM images that can be downloaded for local testing
  • Smoke-tests the installed image by verifying that it boots up to the login screen
  • Runs more than 400 tests against the installed image
  • Launches all the applications that are part of the moduleset and takes screenshots of them

All of this happens after every commit, or at least very close to that, and the results are available at the build.gnome.org website.

You can learn more about GNOME-Continuous here.

As a member of the GNOME release team I am really thankful for this service; it has made our job a lot easier – at release time everything just builds and works most of the time nowadays.

Earlier this year, we’ve made the smoke-testing aspect of gnome-continuous more useful by verifying not just GNOME, but also GNOME Classic and GNOME on Wayland.

Today, we’ve had a minor breakthrough: for the first time, the GNOME/Wayland smoke test succeeded all the way to taking a screenshot of the session.

[Screenshot: GNOME/Wayland]

(Of course, GNOME on Wayland looks just like GNOME under X11, so the screenshot is not very exciting by itself.)

To get this far, we had to switch to the egl-drm branch of mesa, which adds support for running KMS+DRM GL applications with QXL.

July 23, 2014

Watch out for DRI3 regressions
DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

July 22, 2014

GTK+ at GUADEC 2014

GUADEC is almost here. GTK+ will be presented with 3 talks:

  • GTK+, dialogs, the HIG and you  (by me)
  • GTK+ and CSS (by Benjamin)
  • The GTK+ scene graph toolkit (by Emmanuele)

All of these will be on Monday, the 28th. We will also have a GTK+ team meeting on the 31st.

I’ve made a small collage to summarize my talk:

[Collage: Dialogs]

See you all there!

July 20, 2014

Newcomers Workshop

The GNOME Newcomers Workshop might be coming to a conference near you this summer! Owen Taylor and I will be hosting one at GUADEC in Strasbourg, France on Saturday, July 26 and one at Flock in Prague, Czech Republic on Saturday, August 9. If you have never contributed a patch to GNOME before or even have never used GNOME, this workshop is for you! You’ll get a quick overview of the GNOME project, have a chance to install GNOME in a virtual machine, and learn the tools and the process of contributing a patch. If you plan to come to either workshop, please add yourself to the list of attendees.

If you can’t attend one of the workshops, you can install the available image in a virtual machine and go through the newcomers tutorial used at the workshop on your own. You can ask any questions you have on IRC.

We would like to have one helper per three newcomers at the workshops. If you are an established contributor and will be available, please sign up now to help with the workshop at GUADEC or Flock, and then you can check the room in the beginning of the session to see if we need your help based on the attendance. I’d love to see more people running the Newcomers Workshop locally or at other events they attend. Helping out with one of these sessions would give you a chance to learn how it works.

In addition to hosting the Newcomers Workshop, I will be speaking about how to be an ally to women in tech at GUADEC and about the Outreach Program for Women at Flock.

[Badge: GUADEC 2014]

And then I’m going to

[Logo: Flock 2014]

July 14, 2014

pointer acceleration in libinput - an analysis

Following Christian's Wayland in Fedora Update post, and after Hans fixed the touchpad acceleration, I've been playing with pointer acceleration in libinput a bit. The main focus was not yet on changing it, but rather on figuring out what we actually do and where the room for improvement is. There's a tool in my (rather messy) github wip/ptraccel-work branch to re-generate the graphs below.

This was triggered by a simple plan: I want a configuration interface in libinput that provides a sliding scale from -1 to 1 to adjust a device's virtual speed from slowest to fastest, with 0 being the default for that device. A user should not have to worry about the accel mechanism itself, which may be different for any given device, all they need to know is that the setting -0.5 means "halfway between default and 'holy cow this moves like molasses!'". The utopia is of course that for any given acceleration setting, every device feels equally fast (or slow). In order to do that, I needed the right knobs to tweak.

The code we currently have in libinput is pretty much 1:1 what's used in the X server. The X server sports a lot more configuration options, but what we have in libinput 0.4.0 is essentially what the default acceleration settings are in X. Armed with the knowledge that any #define is a potential knob for configuration, I went to investigate. There are two defines that are labelled as adjustable parameters:

  • DEFAULT_THRESHOLD, set to 0.4
  • DEFAULT_ACCELERATION, set to 2.0
But what do they mean, exactly? And what exactly does a value of 0.4 represent?
[side-note: threshold was 4 until I took the constant multiplier out, it's now 0.4 upstream and all the graphs represent that.]

Pointer acceleration is nothing more than mapping some input data to some potentially faster output data. How much faster depends on how fast the device moves, and to get there one usually needs a couple of steps. The trick of course is to make it predictable, so that despite the acceleration, your brain thinks that the visible cursor is an extension of your hand at all speeds.

Let's look at a high-level outline of our pointer acceleration code:

  • calculate the velocity of the current movement
  • use that velocity to calculate the acceleration factor
  • apply accel to dx/dy
  • smoothen out the dx/dy to avoid abrupt changes between two events

Calculating pointer speed

We don't just use dx/dy as values, rather, we use the pointer velocity. There's a simple reason for that: dx/dy depends on the device's poll rate (or interrupt frequency). A device that polls twice as often sends half the dx/dy values in each event for the same physical speed.

Calculating the velocity is easy: divide dx/dy by the delta time. We use a set of "trackers" that store previous dx/dy values with their timestamp. As long as we get movement in the same cardinal direction, we take those into account. So if we have 5 events in direction NE, the speed is averaged over those 5 events, smoothing out abrupt speed changes.
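To put illustrative numbers on that (made up, not measured): if the trackers hold three events with dx = 2, 4 and 6 device units spanning 24ms, the velocity works out to 12/24 = 0.5 units/ms. An otherwise identical device polling twice as often would have delivered six events of half the delta over the same 24ms, arriving at the same 0.5 units/ms.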

The acceleration function

The speed we just calculated is passed to the acceleration function to calculate an acceleration factor.

Figure 1: Mapping of velocity in unit/ms to acceleration factor (unitless). X axes here are labelled in units/ms and mm/s.
This function is the only place where DEFAULT_THRESHOLD/DEFAULT_ACCELERATION are used, but they mostly just stretch the graph. The shape stays the same.

The output of this function is a unit-less acceleration factor that is applied to dx/dy. A factor of 1 means leaving dx/dy untouched, 0.5 is half-speed, 2 is double-speed.

Let's look at the graph for the accel factor output (red): for very slow speeds we have an acceleration factor < 1.0, i.e. we're slowing things down. There is a distinct plateau up to the threshold of 0.4; after that it shoots up to roughly a factor of 1.6, where it flattens out a bit until we hit the max acceleration factor.

Now we can also put units to the two defaults: Threshold is clearly in units/ms, and the acceleration factor is simply a maximum. Whether those are mentally easy to map is a different question.

We don't use the output of the function as-is; rather, we smooth it out using Simpson's rule. The second (green) curve shows the accel factor after the smoothing has taken effect. This is a contrived example: the tool that generates this data simply increased the velocity, hence this particular line. For more random data, see Figure 2.

Figure 2: Mapping of velocity in unit/ms to acceleration factor (unitless) for a random data set. X axes here are labelled in units/ms and mm/s.
For the data set, I recorded the velocity from libinput while using Firefox a bit.

The smoothing takes history into account, so the data points we get depend on the usage. In this data set (and others I tested) we see that the majority of the points still lie on or close to the pure function, apparently the delta doesn't matter that much. Nonetheless, there are a few points that suggest that the smoothing does take effect in some cases.

It's important to note that this is already the second smoothing to take effect - remember that the velocity (may) average over multiple events and thus smoothens the input data. However, the two smoothing effects somewhat complement each other: velocity smoothing only happens when the pointer moves consistently without much change, the Simpson's smoothing effect is most pronounced when the pointer moves erratically.

Ok, now we have the basic function, let's look at the effect.

Pointer speed mappings

Figure 3: Mapping raw unaccelerated dx to accelerated dx, in mm/s, assuming a constant physical device resolution of 400 dpi that sends events at 125Hz. dx range mapped is 0..127.
The graph was produced by sending 30 events with the same constant speed, then dividing by the number of events to reduce any effects tracker feeding has at the initial couple of events.

The two lines show the actual output speed in mm/s and the gain in mm/s, i.e. (output speed - input speed). We can see the little nook where the threshold kicks in, and after that the acceleration is linear. Look at Figure 1 again: the linear acceleration is caused by the acceleration factor maxing out quickly.

Most of this graph is theoretical only though. On your average mouse you don't usually get a delta greater than 10 or 15 and this graph covers the theoretical range to 127. So you'd only ever be seeing the effect of up to ~120 mm/s. So a more realistic view of the graph is:

Figure 4: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 3, but zoomed to the realistic range. We go from a linear speed increase (no acceleration) to a quick bump once the threshold is hit and from then on to a linear speed increase once the maximum acceleration is hit.

And to verify, the ratio of output speed : input speed:

Figure 5: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated.

Looks pretty much exactly like the pure acceleration function, which is to be expected. What's important here though is that this is the effective speed, not some mathematical abstraction. And it shows one limitation: we go from 0 to full acceleration within a really small window.

Again, this is the full theoretical range, the more realistic range is:

Figure 6: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 5, just zoomed in to a maximum of 120 mm/s. If we assume that 15 dx/event is roughly the maximum you can reach with a mouse you'll see that we've reached maximum acceleration at a third of the maximum speed and the window where we have adaptive acceleration is tiny.

Tweaking threshold/accel doesn't do that much. Below are the two graphs representing the default (threshold=0.4, accel=2), a doubled threshold (threshold=0.8, accel=2) and a doubled acceleration (threshold=0.4, accel=4).

Figure 7: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Figure 8: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, see Figure 5 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Doubling either setting just moves the adaptive window around, it doesn't change that much in the grand scheme of things.

Now, of course these were all fairly simple examples with constant speed, etc. Let's look at a diagram of what is essentially random movement, me clicking around in Firefox for a bit:

Figure 9: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set.
And the zoomed-in version of this:
Figure 10: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set, zoomed in to events 450-550 of that set.
This is more-or-less random movement reflecting some real-world usage. What I find interesting is that it's very hard to see any areas where smoothing takes visible effect. The accelerated curve largely looks like a stretched input curve. To be honest, I'm not sure what I should've expected here and how to read it; pointer acceleration data in real-world usage is notoriously hard to visualize.

Summary

So in summary: I think there is room for improvement. We have no acceleration up to the threshold, then we accelerate within too small a window, and acceleration stops adjusting to the speed too soon. This makes us lose precision, and small speed changes are punished quickly.

Increasing the threshold or the acceleration factor doesn't do that much. Any increase in acceleration makes the mouse faster but the adaptive window stays small. Any increase in threshold makes the acceleration kick in later, but the adaptive window stays small.

We've already merged a number of fixes into libinput, but some more work is needed. I think that to get a good pointer acceleration we need to get a larger adaptive window [Citation needed]. We're currently working on that (and figuring out how to evaluate whatever changes we come up with).

A word on units

The biggest issue I was struggling with when trying to understand the code was that of units. The code didn't document the units used anywhere, but it turns out that everything was either in device units ("mickeys"), device units/ms or (in the case of the acceleration factors) unitless.

Device units are unfortunately a pretty useless base entity, only slightly more precise than using the length of a piece of string. A device unit depends on the device resolution, and of course that differs between devices. An average USB mouse tends to have 400 dpi (15.75 units/mm), but it's common to have 800 dpi or 1000 dpi, and gaming mice go up to 8200 dpi. A touchpad can have resolutions of 1092 dpi (43 u/mm), 3277 dpi (129 u/mm), etc., and may even have different resolutions for x and y.

This explains why, until commit e874d09b4, the touchpad felt slower than a "normal" mouse. We scaled to a magic constant of 10 units/mm before hitting the pointer acceleration code. Now, as said above, the mouse would likely have a resolution of 15.75 units/mm, making it roughly 50% faster. The acceleration would kick in earlier on the mouse, giving the touchpad and the mouse not only different speeds but a different feel altogether.

Unfortunately, there is not much we can do about mice feeling different depending on the resolution. To my knowledge there is no way to query the resolution on a device. But for absolute devices that need pointer acceleration (i.e. touchpads) we can normalize to a fake resolution of 400 dpi and base the acceleration code on that. This provides the same feel on the mouse and the touchpad, as much as that is possible anyway.
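As a worked example (illustrative numbers): a 1092 dpi touchpad sees 1092/25.4 ≈ 43 device units per mm of finger travel, so normalizing means scaling its deltas by 400/1092 ≈ 0.37. A physical movement of 1mm then yields roughly 16 units, the same as a 400 dpi (15.75 units/mm) mouse would produce, and the acceleration function sees comparable velocities from both devices.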

July 11, 2014

Another GtkInspector update

I’ve last written about GtkInspector in early June. Since then, a few things have fallen into place, so it is time for another status update.

Show all the things

An early focus of my work on the inspector was to make as much information visible as possible (objects, styles, settings,…).

[Screenshot: Menus]

This has continued; the most recent additions in the object tree are submenus and combobox popups, which are now shown below the widgets they are attached to.

[Screenshot: Style]

For widgets, we now have a Style Properties tab, which shows the values of known style properties and, importantly, where they originate in the theme CSS. This tab is still a bit preliminary, but it is a start towards answering the question: why does GTK+ not pick up this selector in my CSS?

[Screenshots: Property Binding, Settings Binding]

The property editor popover in the Properties tab is now showing information about GObject and GSettings property bindings, with an easy way to jump to the other object.

The environment section in the General tab includes GSETTINGS_SCHEMA_DIR when set.

Test all the things

Next to making things visible, the inspector is supposed to make it easy to test an application in various situations, such as different themes or right-to-left locales.

[Screenshot: General]

A recent addition in this regard is a switch in the General tab to turn on touchscreen simulation. This used to be available only via the GTK_TEST_TOUCHSCREEN environment variable.

[Screenshot: Hi-dpi]

You can also turn on hi-dpi scaling in the General tab to test how your app behaves in this scenario, and look out for blurry icons and the like. Careful, this will make your windows giant.

Better picking

I found that I often use the widget picker right after opening the inspector window, so I figured that I should try to combine these two actions. As a result, Ctrl-Shift-I will now open the inspector and select the widget that was under the pointer before. Ctrl-Shift-D still toggles the inspector on and off.

Another small improvement to picking is that the Escape key can now be used to cancel a picking operation.

Less confusion

As I’ve mentioned above, some of the things that the inspector lets you change at runtime can also be set from environment variables (e.g. GTK_THEME). And when they are set, the environment variables often override any runtime changes.

[Screenshot: disabled control]

GtkInspector now reflects this by disabling the corresponding controls when they won’t take effect.

Help welcome

I’ll continue to improve the inspector, but there’s more good ideas for possible features than one person can handle. If you want to help out, feel free to come by on irc, or just put your contribution in bugzilla, where the inspector has its own component now.

 

July 08, 2014

Important AppData milestone

Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

  • Applications with descriptions: 262/1037 (25.3%)
  • Applications with keywords: 112/1037 (10.8%)
  • Applications with screenshots: 235/1037 (22.7%)
  • Applications in GNOME with AppData: 91/134 (67.9%)
  • Applications in KDE with AppData: 5/67 (7.5%)
  • Applications in XFCE with AppData: 2/20 (10.0%)
  • Application addons with MetaInfo: 30

We’ve gone up a couple of percentage points in the last few weeks, mostely from the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on the developer tools for the last week or so, as this is one of the key groups of people we’re targetting for Fedora 21.

One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find software similar to the apps you already have installed, and suggests those. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.

July 04, 2014

Self-signing custom Android ROMs
The security model on the Google Nexus devices is pretty straightforward. The OS is (nominally) secure and prevents anything from accessing the raw MTD devices. The bootloader will only allow the user to write to partitions if it's unlocked. The recovery image will only permit you to install images that are signed with a trusted key. In combination, these facts mean that it's impossible for an attacker to modify the OS image without unlocking the bootloader[1], and unlocking the bootloader wipes all your data. You'll probably notice that.

The problem comes when you want to run something other than the stock Google images. Step number one for basically all of these is "Unlock your bootloader", which is fair enough. Step number two is "Install a new recovery image", which is also reasonable in that the key database is stored in the recovery image and so there's no way to update it without doing so. Except, unfortunately, basically every third party Android image is either unsigned or is signed with the (publicly available) Android test keys, so this new recovery image will flash anything. Feel free to relock your bootloader - the recovery image will still happily overwrite your OS.

This is unfortunate. Even if you've encrypted your phone, anyone with physical access can simply reboot into recovery and reflash /system with something that'll stash your encryption key and mail your data to the NSA. Surely there's a better way of doing this?

Thankfully, there is. Kind of. It's annoying and involves a bunch of manual processes and you'll need to re-sign every update yourself. But it is possible to configure Nexus devices in such a way that you retain the same level of security you had when you were using the Google keys without losing the freedom to run whatever you want. Here's how.

Note: This is not straightforward. If you're not an experienced developer, you shouldn't attempt this. I'm documenting this so people can create more user-friendly approaches.

First: Unlock your bootloader. /data will be wiped.
Second: Get a copy of the stock recovery.img for your device. You can get it from the factory images available here
Third: Grab mkbootimg from here and build it. Run unpackbootimg against recovery.img.
Fourth: Generate some keys. Get this script and run it.
Fifth: zcat recovery.img-ramdisk.gz | cpio -id to extract your recovery image ramdisk. Do this in an otherwise empty directory.
Sixth: Get DumpPublicKey.java from here and run it against the .x509.pem file generated in step 4. Replace /res/keys from the recovery image ramdisk with the output. Include the "v2" bit at the beginning.
Seventh: Repack the ramdisk image (find . | cpio -o -H newc | gzip > ../recovery.img-ramdisk.gz) and rebuild recovery.img with mkbootimg.
Eighth: Write the new recovery image to your device
Ninth: Get signapk from here and build it. Run it against the ROM you want to sign, using the keys you generated earlier. Make sure you use the -w option to sign the whole zip rather than signing individual files.
Tenth: Relock your bootloader
Eleventh: Boot into recovery mode and sideload your newly signed image.

At this point you'll want to set a reasonable security policy on the image (eg, if it grants root access, ensure that it requires a PIN or something), but otherwise you're set - the recovery image can't be overwritten without unlocking the bootloader and wiping all your data, and the recovery image will only write images that are signed with your key. For obvious reasons, keep the key safe.

This, well. It's obviously an excessively convoluted workflow. A *lot* of it could be avoided by providing a standardised mechanism for key management. One approach would be to add a new fastboot command for modifying the key database, and only permit this to be run when the bootloader is unlocked. The workflow would then be something like
  • Unlock bootloader
  • Generate keys
  • Install new key
  • Lock bootloader
  • Sign image
  • Install image
which seems more straightforward. Long term, individual projects could do the signing themselves and distribute their public keys, resulting in the install process becoming as easy as
  • Unlock bootloader
  • Install ROM key
  • Lock bootloader
  • Install ROM
which is actually easier than the current requirement to install an entirely new recovery image.
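In concrete terms, that first workflow might look something like this (purely hypothetical: the "flash:key" command does not exist today and the file names are made up; only "oem unlock"/"oem lock" and sideloading are real):

$ fastboot oem unlock
$ fastboot flash:key mykey.x509.pem    # hypothetical key-management command
$ fastboot oem lock
$ adb sideload rom-signed-with-mykey.zip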

I'd actually previously criticised Google on the grounds that using custom keys wasn't possible on Android devices. I was wrong. It is, it's just that (as far as I can tell) nobody's actually documented it before. It's important that users not be forced into treating security and freedom as mutually exclusive, and it's great that Google have made that possible.

[1] This model fails if it's possible to gain root on the device. Thankfully this would never hold on what's that over there is that a distraction?

FUDCON + GNOME.Asia Beijing 2014

Thanks to the funding from FUDCON I had the chance to attend and keynote at the combined FUDCON Beijing 2014 and GNOME.Asia 2014 conference in Beijing, China.

My talk was about systemd's present and future, what we achieved and where we are going. In my talk I tried to explain a bit where we are coming from, and how we changed focus from being purely an init system to being a set of basic building blocks to build an OS from. Most of the talk was about where we still intend to take systemd, which areas we believe should be covered by systemd, and of course the always difficult question of where to draw the line and what is clearly outside the focus of systemd. The slides of my talk can be found online. (No video recording I am aware of, sorry.)

The combined conferences were a lot of fun, and as usual the best discussions were the ones I had in the hallway track, talking about Linux and systemd.

A number of pictures of the conference are now online. Enjoy!

After the conference I stayed for a few more days in Beijing, doing a bit of sightseeing. What a fantastic city! The food was amazing; we tried all kinds of fantastic stuff, from Peking duck to bullfrog Sichuan style. Yummy. One of these days I am sure I will find the time to actually sort my photos and put them online, too.

I am really looking forward to the next FUDCON/GNOME.Asia!

July 03, 2014

LibreOffice Coverity Defect Density

Coverity Defect Density: LibreOffice vs Average

We run LibreOffice through Coverity approximately once a week. According to Coverity's overview dashboard our current status is:

LibreOffice: 9,500,825 lines of code and 0.13 defect density

Open Source Defect Density By Project Size

Lines of Code (LOC)      Defect Density
Less than 100,000        0.35
100,000 to 499,999       0.5
500,000 to 1 million     0.7
More than 1 million      0.65
Note: Defect density is measured by the number of defects per 1,000 lines of code, identified by the Coverity platform. The numbers shown above are from our 2013 Coverity Scan Report, which analyzed 250 million lines of open source code.

So any crashes you might experience in 4.3 are either a figment of your imagination or a sad commentary on the limitations of static code analysis.

July 02, 2014

Blurry Screenshots in GNOME Software?

Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot use a size of 624×351.

If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.
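If your capture came out at some other 16:9 size, one way to rescale it yourself, assuming you have ImageMagick installed, rather than leaving it to the client:

$ convert screenshot.png -resize 752x423 screenshot-752x423.png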

June 27, 2014

scrolling the sidebar with the scroll-wheel
The sidebar comes with a vertical scrollbar for when content doesn't fit in the available space. But the mouse pointer has to be right over the scrollbar to use your scroll wheel; it doesn't work to hover over the content of the sidebar and move the wheel there.

Which is annoying, but on trying to fix it I realized the snag with allowing wheel-scroll over the sidebar: if you scroll the sidebar down and a widget that itself accepts the wheel-scroll, a spin button for example, ends up under the mouse pointer, it's very easy to accidentally change that widget as it scrolls under the pointer.

As an aside, this is the exact same problem that I have in glade where I scroll down the property pane with the scroll-wheel and accidentally end up over the "Ellipsize" listbox and inadvertently change it from None to Middle. So if you find labels in LibreOffice with "..." in the middle of them for no good reason, this is why.

Anyway, I still want to scroll the sidebar, but I don't want this conflict between the scroll target and whatever widget lands under the pointer. So my solution is to continue to send the wheel-scroll events to the previous target as long as the position of the mouse pointer is the same as at the last wheel event and the time between events is <= the default timeout for showing help tips, i.e. half a second.

Seems to work well for me: scrolling the sidebars "just works" on master (LibreOffice 4.4) with the scroll wheel, without random changes to any scroll-wheel-sensitive content, while you can still use the scroll wheel to modify those widgets by moving to them or after the little timeout completes.
GNOME 3.13.3

I’ve done the release team duty for the GNOME 3.13.3 release this week. As I often do, I took some screenshots of new things that I’ve noticed while smoketesting.

There is quite a bit of good new stuff in this release, starting with a rewritten and improved Adwaita theme that is now part of GTK+:

[Screenshot: Adwaita dark]

Next, I’ve noticed that our sharing infrastructure has become network aware, and lets you change what is shared depending on what network you’re on:

[Screenshot: Sharing]

This works not just for media sharing, but for file sharing and screen sharing as well.

At the other side of media sharing, gnome-online-accounts has learned to set up access to the media servers in your local network:

[Screenshot: Media servers]

Among the applications, yelp and evince stand out with new, modern looks that fit in very well with the rest of GNOME:

[Screenshots: Yelp, Evince]

There are more things to discover that I could not capture in a screenshot. For example, I noticed that GNOME shell now remembers which workspace windows were on when you disconnect and reconnect external monitors.

If you want to learn more about GNOME 3.13.3, you can study the release notes (here and here) or you can try it out in Fedora rawhide. Richard has also set up a copr repository for F20.

This is also a good occasion to announce publicly that we are aiming to have GNOME 3.14 in Fedora 21 – it may be a little tight, schedule-wise (the GNOME 3.13.91 beta release happens on Sep 1, a few days after the beta freeze for F21), but we’ve been able to squeeze things in, in the past.

June 25, 2014

Firewalls and per-network sharing
Firewalls

Fedora has had problems for a long while with the default firewall rules. They would make a lot of things not work (media and file sharing of various sorts, usually, whether as a client or a server) and users would usually disable the firewall altogether, or work around it through micro-management of opened ports.

We went through multiple discussions over the years trying to break the security folks' resolve on what should be allowed to be exposed on the local network (sometimes trying to get rid of the firewall). Or rather we tried to agree on a setup that would be implementable for desktop developers and usable for users, while still providing the amount of security and dependability that the security folks wanted.

The last round of discussions was more productive, and I posted the end plan on the Fedora Desktop mailing-list.

By Fedora 21, Fedora will have a firewall that's completely open for the user's applications (with better tracking of what applications do what once we have application sandboxing). This reflects how the firewall was used on the systems that the Fedora Workstation version targets. System services will still be blocked by default, except a select few such as ssh or mDNS, which might need some tightening.

But this change means that you'd be sharing your music through DLNA on the café's Wi-Fi, right? Well, this is what this next change is here to avoid.

Per-network Sharing

To avoid showing your music in the café, or exposing your holiday photographs at work, we needed a way to restrict sharing to wireless networks where you'd already shared this data, and provide a way to avoid sharing in the future, should you change your mind.

Allan Day mocked up such controls in our Sharing panel, which I diligently implemented. Personal File Sharing (through gnome-user-share and WebDAV), Media Sharing (through rygel and DLNA) and Screen Sharing (through vino and VNC) implement the same per-network sharing mechanism.

Make sure that your versions of gnome-settings-daemon (which implements the starting/stopping of services based on the network) and gnome-control-center match for this all to work. You'll also need the latest version of all 3 of the aforementioned sharing utilities.
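On Fedora, for example, a quick way to see which versions you have (package names assumed to match the upstream module names):

$ rpm -q gnome-settings-daemon gnome-control-center gnome-user-share rygel vino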

(and it also works with wired network profiles :)



What's that icon?

Keeping up with the theme of making life easier for application developers, I decided to take an afternoon off from releasing GNOME 3.13.3, and wrote a little application that shows icons from the icon theme, their symbolic variants, and some extra information from the icon naming spec, such as the icon description, and the context for the icon.

Icon browser

This should make it a little easier to find the right icon to use in your application. gtk3-icon-browser will be available in GTK+ 3.13.4.

June 21, 2014

We’ll Build A Dream House Of Net

(Note: the final 0.9.10 will be out later this week…  read on for the awesome that it will contain)

Hey wouldn’t it be great if NetworkManager did X and made my life so awesome I could retire to a private island surrounded by things I love?  Like kittens and teddy bears and bright copper kettles and Domaine Leflaive Montrachet Grand Cru?

If only your dream was reality…  Oh wait!  NetworkManager 0.9.10 will be your genie from Aladdin, granting every wish you dream of, except this time you can wish for more wishes.  But you still can’t bring awful networking back from the dead because it’s just not pretty.  Don’t do it.

 


Tons of new features, yet somehow smaller and nimbler! (via cuxclipper, CC BY 2.0)

What is pretty is NetworkManager 0.9.10; it’s like the lightning-quick racing yacht that Larry Ellison doesn’t have and really, really wants, but which somehow also adds a Triple-E-Class-worth of new features just for you.  Somebody (maybe you!) wished for every single thing you’re about to see.  And then a magic genie showed up, snapped its fingers, and gave it to them.

nmtui

We found a usability gap between full-fledged CLI tools like nmcli and GUI-based ones, and thus nmtui was born.  Sometimes you don’t want to remember esoteric commands and options, but you also don’t want to run X. Boom, first wish granted: a curses-based tool for configuring and managing your network, no X involved:

[Screenshots: nmtui]

nmcli

The command-line still rules with divine mandate, and we’re here to please so nmcli was a huge focus for this release.  We’ve added interactive editing support, single-command editing, detailed help, tab completion, and enhanced bash completion.  You really need to check this out; almost anything you can do with GUI tools can now be done with nmcli, and there’s even some stuff nmcli can do that the GUI tools can’t.  If you’re comfortable with terminals, NetworkManager 0.9.10 is right up your alley.
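A taste of the single-command editing, to whet your appetite (illustrative values; exact option names per the nmcli(1) of this release):

$ nmcli con add type ethernet ifname em1 con-name office ip4 192.168.1.50/24 gw4 192.168.1.1
$ nmcli con mod office ipv4.dns "8.8.8.8 8.8.4.4"
$ nmcli con up office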

[Screenshot: nmcli]

Size Does Matter

Continuing on the quest to be more nimble and streamlined, we’ve split Wi-Fi, WWAN, Bluetooth, ADSL, and WiMAX device support into plugins which you don’t need to install if you like a minimal system.  Distributions should package these separately so they can be added/removed independently of NetworkManager itself, which reduces disk usage, runtime memory usage, and packaging dependency chains.  We’ve also spent time slimming down and optimizing the code.  The core NetworkManager daemon is now just over 1MB in size!

dbus-daemon is also no longer required for root-only or early-boot operation, with communication using a private root-only Unix socket. Similarly, PolicyKit is no longer used for root operation, though it could always be disabled at build-time anyway.

To facilitate remote and SSH-based management, the “at_console” D-Bus permission has been removed, which also helpfully harmonizes authorization settings between Fedora and Debian-based distributions.  All permissions authorization now happens through PolicyKit instead.


NetworkManager works here (via scobleizer, CC BY 2.0)

The Enterprise

When you Absolutely Positively MUST have your ethernet frames delivered on-time and without loss you turn to Data Center Bridging.  DCB provides the reliability and robustness that iSCSI and FibreChannel over Ethernet (FCoE) need so you don’t have to keep shovelling money into a proprietary SAN.  Since users requested it, we snapped our fingers and added support to NetworkManager 0.9.10 for configuring DCB on your ethernet interfaces.

We’ve also upped our game with IP-level configuration support for many more software interfaces like GRE, macvlan, macvtap, tun, tap, veth, and vxlan.  And when you have services that aren’t yet network-aware, the NetworkManager-wait-online systemd service is more reliable to ensure your legacy services start up with the resources they require.

Customization Galore

You dreamed, we listened.  Creepy, no?  Yeah, we know what you want.  And top of the list was more flexible configuration:

  • Connection configuration files are no longer watched for changes by default, which used to cause problems with backups, filesystem copies, half-configured connections, etc.  If you want that behavior you can turn it back on (monitor-connection-files=true), but instead, edit them as much as you want and when you’re done, “nmcli con reload“.
  • Connections can now be locked to interface names instead of just MAC addresses
  • A new “ignore-carrier” option is available to ensure your critical app doesn’t fail just because you got drunk on Captain Morgan + Coke, and tripped over a cable
  • Want to manage /etc/resolv.conf yourself?  You can!  “dns=none” is your new best friend.
  • Configuration file snippets can be dropped into /etc/NetworkManager/conf.d to change smaller sets of configuration options
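Putting a few of those together, a drop-in snippet might look like this (illustrative key placement; check NetworkManager.conf(5) for the authoritative section and key names):

# /etc/NetworkManager/conf.d/90-local.conf
[main]
monitor-connection-files=false
dns=none
ignore-carrier=interface-name:eth0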

The NetworkManager dispatcher got some enhancements too.  It now has a “pre-up” event that allows scripts to execute before NetworkManager announces connectivity to applications.  We also added a “pre-down” event that lets network filesystems flush data before the interface is actually disconnected from the network.
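A pre-down hook could be as simple as this sketch (the directory layout and the unmount strategy are assumptions; the dispatcher passes the interface and the action as arguments):

#!/bin/sh
# e.g. /etc/NetworkManager/dispatcher.d/pre-down.d/10-flush-nfs
IFACE="$1" ACTION="$2"
if [ "$ACTION" = "pre-down" ]; then
    sync                      # flush dirty pages while the link is still up
    umount -a -t nfs,nfs4 -l  # lazily unmount network filesystems
fi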

Seamless Cooperation

Do you love /sbin/ip?  ifconfig?  brctl?  vconfig?  Keep using them!  Changes you make outside of NetworkManager get picked up, respected, and reflected in the D-Bus API.  NetworkManager 0.9.10 also goes to great lengths to read the existing configuration of interfaces and not touch them.  Most network interfaces known to the kernel are now exposed in the D-Bus API, and you can even change their IP configuration right from NetworkManager.  There’s more work to do here but we hope you’ll appreciate the new situational awareness as much as we do.

Get Your VPN On

We’ve improved support for routing-only VPNs like Openswan/Libreswan/Strongswan.  We’ve added full details of the VPN’s IP configuration to the D-Bus API.  And best yet, VPN plugins can now request additional passwords during the connection process if the ones you previously gave them are wrong or changed.

All the Rest

For clients, more properties are exposed in the D-Bus API.  We’ve added support for custom IP ranges to the Internet Connection Sharing functionality.  We’ve added WWAN autoconnect support and more reliable airplane mode behavior.  Fatal connection errors now more reliably block reconnect, which means better handling of wrong Wi-Fi passwords and access point failures.  Captive portal/hotspot support is moving forward, as are DNSSEC enhancements.

Geez, are we done yet?

Not even close!  Seriously, there’s more but I’m kinda tired of typing.  Try it out (the final release will be out later this week) and tell us what you think.  Then tell us what you want.  Don’t be afraid to dream a little bigger, darling!

(via Alexandra Guerson, CC BY-NC-ND 2.0)

June 19, 2014

PSA: Fedora 21, NetworkManager, and DNF

Recently we posted a Fedora 21 update delivering the huge new awesome that is NetworkManager 0.9.10-beta1.  Among a literal Triple E Class boatload of enhancements and fixes, this update continues our fine tradition of making the core of NetworkManager smaller and more flexible by splitting out Wi-Fi support into the NetworkManager-wifi package.  If you don’t have or don’t use Wi-Fi on your system, you don’t need to install stuff for it and you can save some disk space and RAM.

The Problem

To ensure upgrades work correctly and people didn’t unexpectedly lose Wi-Fi support, we set RPM Obsoletes so that, when upgrading, the new package would be installed even though it didn’t exist before.  Unfortunately this didn’t work for those using DNF instead of yum for package management.

It turns out that DNF treats RPM obsoletes differently than yum, and it’s unclear right now how package splitting is actually supposed to work with DNF.  You can track the issue here.

The Workaround

If you suddenly find yourself without Wi-Fi, find a wired network connection and:

dnf install NetworkManager-wifi
systemctl restart NetworkManager

and harmony will return to the Universe.  We understand the pain, will continue to monitor the situation, and will update the Fedora NetworkManager packages when DNF has a solution.

June 18, 2014

dialog conversion status, 99 to go

Converting LibreOffice dialogs to .ui format, 99 conversions remaining

We've now converted all but 99 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format. This is due to the much appreciated efforts of Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, who are tackling the last bunch of hard to find or hard to convert ones.


Current conversion stats are:
741 .ui files currently exist
There are 46 unconverted dialogs
There are 53 unconverted tabpages
An estimated additional 99 .ui are required
We are 88% of the way through.

June 17, 2014

Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems

(Just a small heads-up: I don't blog as much as I used to; these days I update my Google+ page a lot more frequently. You might want to subscribe to that if you are interested in more frequent technical updates on what we are working on.)

In the past weeks we have been working on a couple of features for systemd that enable a number of new usecases I'd like to shed some light on. Taking advantage of the /usr merge that a number of distributions have completed, we want to bring the runtime behaviour of Linux systems to the next level. With the /usr merge completed, most static vendor-supplied OS data is found exclusively in /usr; only a few additional bits in /var and /etc are necessary to make a system boot. On this we can build to enable a couple of new features:

  1. A mechanism we call Factory Reset shall flush out /etc and /var, but keep the vendor-supplied /usr, bringing the system back into a well-defined, pristine vendor state with no local state or configuration. This functionality is useful across the board from servers, to desktops, to embedded devices.
  2. A Stateless System goes one step further: a system like this never stores /etc or /var on persistent storage, but always comes up with pristine vendor state. On systems like this every reboot acts as a factory reset. This functionality is particularly useful for simple containers or systems that boot off the network or read-only media, and receive all configuration they need during runtime from vendor packages or protocols like DHCP, or are capable of discovering their parameters automatically from the available hardware or periphery.
  3. Reproducible Systems multiply a vendor image into many containers or systems. Only local configuration or state is stored per-system, while the vendor operating system is pulled in from the same, immutable, shared snapshot. Each system hence has its private /etc and /var for receiving local configuration, however the OS tree in /usr is pulled in via bind mounts (in the case of containers), technologies like NFS (in the case of physical systems), or btrfs snapshots from a golden master image. This is particularly interesting for containers where the goal is to run thousands of container images from the same OS tree. However, it also has a number of other usecases, for example thin client systems, which can boot the same NFS share a number of times. Furthermore this mechanism is useful to implement very simple OS installers, that simply unserialize a /usr snapshot into a file system, install a boot loader, and reboot.
  4. Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

Concepts

A number of Linux-based operating systems have tried to implement some of the schemes described above in one way or another. Particularly interesting are GNOME's OSTree, CoreOS and Google's Android and ChromeOS. They generally found different solutions for the specific problems you have when implementing schemes like this, sometimes taking shortcuts that keep only the specific case in mind and cannot cover the general purpose. With systemd now being at the core of so many distributions and deeply involved in bringing up and maintaining the system, we came to the conclusion that we should attempt to add generic support for setups like this to systemd itself, to open this up for the general purpose distributions to build on. We decided to focus on three kinds of systems:

  1. The stateful system, the traditional system as we know it with machine-specific /etc, /usr and /var, all properly populated.
  2. Startup without a populated /var, but with configured /etc. (We will call these volatile systems.)
  3. Startup without either /etc or /var. (We will call these stateless systems.)

A factory reset is just a special case of the latter two modes, where the system boots up without /var and /etc but the next boot is a normal stateful boot like the first described mode. Note that a mode where /etc is flushed but /var is not is not something we intend to cover (why? well, the user ID question becomes much harder, see below, and we simply saw no usecase for it worth the trouble).

Problems

Booting up a system without a populated /var is relatively straightforward. With a few lines of tmpfiles configuration it is possible to populate /var with its basic structure in a way that is sufficient to make a system boot cleanly. systemd version 214 and newer ship with support for this. Of course, support for this scheme in systemd is only a small part of the solution. While a lot of software reconstructs the directory hierarchy it needs in /var automatically, much software does not. In cases like this it is necessary to ship a couple of additional tmpfiles lines that set up the necessary files or directories in /var at boot time to make the software operate, similar to what RPM or DEB packages would set up at installation time.
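
For example, a minimal tmpfiles sketch (the service name is hypothetical) that recreates such a state directory at boot:

# /usr/lib/tmpfiles.d/myservice.conf
d /var/lib/myservice 0755 root root -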

Booting up a system without a populated /etc is a more difficult task. In /etc we have a lot of configuration bits that are essential for the system to operate, for example and most importantly system user and group information in /etc/passwd and /etc/group. If the system boots up without /etc there must be a way to replicate the minimal information necessary in it, so that the system manages to boot up fully.

To make this even more complex, in order to support "offline" updates of /usr that are replicated into a number of systems possessing private /etc and /var, there needs to be a way for these directories to be upgraded transparently when necessary, for example by recreating caches like /etc/ld.so.cache or adding missing system users to /etc/passwd on the next reboot.

Starting with systemd 215 (yet unreleased, as I type this) we will ship with a number of features in systemd that make /etc-less boots functional:

  • A new tool systemd-sysusers has been added. It introduces a new drop-in directory /usr/lib/sysusers.d/. Minimal descriptions of necessary system users and groups can be placed there (see the sketch after this list). Whenever the tool is invoked it will create these users in /etc/passwd and /etc/group should they be missing. It is only suitable for creating system users and groups, not for normal users. It will write to the files directly via the appropriate glibc APIs, which is the right thing to do for system users. (For normal users no such APIs exist, as the users might be stored centrally on LDAP or suchlike, and they are out of focus for our usecase.) The major benefit of this tool is that system user definition can happen offline: a package simply has to drop in a new file to register a user. This makes system user registration declarative instead of imperative -- which is how system users are traditionally created from RPM or DEB installation scripts. By being declarative it is easy to replicate the users on next boot to a number of system instances.

    To make this new tool interesting for packaging scripts we make it easy to alternatively invoke it during package installation time, thus being a good alternative to invocations of useradd -r and groupadd -r.

    Some OS designs use a static, fixed user/group list stored in /usr as the primary database for users/groups, with fixed UID/GID mappings. While this works for specific systems, it cannot cover the general purpose. As the UID/GID range for system users/groups is very small (only containing 998 users and groups on most systems), the best has to be made of this space and only UIDs/GIDs necessary on the specific system should be allocated. This means allocation has to be dynamic and adjust to what is necessary.

    Also note that this tool has one very nice feature: in addition to fully dynamic, and fully static UID/GID assignment for the users to create, it supports reading UID/GID numbers off existing files in /usr, so that vendors can make use of setuid/setgid binaries owned by specific users.

  • We also added a default user definition list which creates the most basic users the system and systemd need. Of course, downstream distributions will very likely need to alter this default list, add new entries and possibly map specific users to particular numeric UIDs.
  • A new condition ConditionNeedsUpdate= has been added. With this mechanism it is possible to conditionalize execution of services depending on whether /usr is newer than /etc or /var. The idea is that various services that need to be added into the boot process on upgrades make use of this to not delay boot-ups on normal boots, but run as necessary should /usr have been updated since the last boot (a unit sketch follows after this list). This is implemented based on the mtime timestamp of /usr: if the OS has been updated, the packaging software should touch the directory, thus informing all instances that an upgrade of /etc and /var might be necessary.
  • We added a number of service files that make use of the new ConditionNeedsUpdate= switch, and run a couple of services after each update. Among them are the aforementioned systemd-sysusers tool, as well as services that rebuild the udev hardware database, the journal catalog database and the library cache in /etc/ld.so.cache.
  • If systemd detects an empty /etc at early boot it will now use the unit preset information to enable all services by default that the vendor or packager declared. It will then proceed booting.
  • We added a new tmpfiles snippet that is able to reconstruct the most basic structure of /etc if it is missing.
  • tmpfiles also gained the ability to copy entire directory trees into place should they be missing. This is particularly useful for copying certain essential files or directories into /etc without which the system refuses to boot. Currently the most prominent candidates for this are /etc/pam.d and /etc/dbus-1. In the long run we hope that packages can be fixed so that they always work correctly without configuration in /etc. Depending on the software this means that they should come with compiled-in defaults that just work should their configuration file be missing, or that they should fall back to static vendor-supplied configuration in /usr that is used whenever /etc doesn't have any configuration. Both the PAM and the D-Bus case are probably candidates for the latter. Given that there are probably many cases like this we are working with a number of folks to introduce a new directory called /usr/share/etc (the name is not settled yet) to major distributions, which would always contain the full, original, vendor-supplied configuration of all packages. This is very useful here, so that there's an obvious place to copy the original configuration from, but it is also useful completely independently, as it provides administrators with an easy place to diff their own configuration in /etc against, to see what local changes are in place.
  • We added a new --tmpfs= switch to systemd-nspawn to make testing of systems with unpopulated /etc and /var easy. For example, to run a fully state-less container, use a command line like this:

    # systemd-nspawn -D /srv/mycontainer --read-only --tmpfs=/var --tmpfs=/etc -b

    This command line will boot the container tree stored in /srv/mycontainer in a read-only way, but with a (writable) tmpfs mounted on /var and /etc. With a very recent git snapshot of systemd, invoking a Fedora rawhide system should mostly work OK, modulo the D-Bus and PAM problems mentioned above. A later version of systemd-nspawn is likely to gain a high-level switch --mode={stateful|volatile|stateless} that combines this into a single simple switch, reusing the vocabulary introduced earlier.
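
To make the pieces above concrete, here is a minimal sysusers.d sketch (the service name is hypothetical); each line declares a group (g) or user (u), with "-" requesting dynamic GID/UID allocation:

# /usr/lib/sysusers.d/myservice.conf
g myservice -
u myservice - "My Service Daemon"

And a sketch of a service unit using the new condition (the unit and command are invented), which only runs when /usr has been updated more recently than /etc:

# myservice-update.service
[Unit]
Description=Rebuild myservice caches after an offline /usr update
ConditionNeedsUpdate=/etc

[Service]
Type=oneshot
ExecStart=/usr/bin/myservice-rebuild-cache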

What's Next

Pulling this all together we are very close to making boots with empty /etc and /var on general purpose Linux operating systems a reality. Of course, while doing the groundwork in systemd gets us some distance, there's a lot of work left. Most importantly: the majority of Linux packages are simply incompatible with this scheme the way they are currently set up. They do not work without configuration in /etc or state directories in /var; they do not drop system user information in /usr/lib/sysusers.d. However, we believe it's our job to do the groundwork, and to start somewhere.

So what does this mean for the next steps? Of course, currently very little of this is available in any distribution (if only because 215 isn't even released yet). However, this will hopefully change quickly. As soon as that is accomplished we can start working on making the other components of the OS work nicely in this scheme. If you are an upstream developer, please consider making your software work correctly if /etc and/or /var are not populated. This means:

  • When you need a state directory in /var and it is missing, create it first. If you cannot do that, because you dropped privileges or suchlike, please consider dropping in a tmpfiles snippet that creates the directory with the right permissions early at boot, should it be missing.
  • When you need configuration files in /etc to work properly, consider changing your application to work nicely when these files are missing, and automatically fall back to either built-in defaults, or to static vendor-supplied configuration files shipped in /usr, so that administrators can override configuration in /etc but if they don't the default configuration counts.
  • When you need a system user or group, consider dropping a file into /usr/lib/sysusers.d describing the users. (Currently documentation on this is minimal, we will provide more docs on this shortly.)

If you are a packager, you can also help on making this all work:

  • Ask upstream to implement what we describe above, possibly even preparing a patch for this.
  • If upstream will not make these changes, then consider dropping in tmpfiles snippets that copy the bare minimum of configuration files from somewhere in /usr into /etc to make your software work (a sketch follows below).
  • Consider moving from imperative useradd commands in packaging scripts, to declarative sysusers files. Ideally, this is shipped upstream too, but if that's not possible then simply adding this to packages should be good enough.

Of course, before moving to declarative system user definitions you should consult with your distribution whether their packaging policy even allows that. Currently, most distributions will not, so we have to work to get this changed first.
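
As a sketch of the copy approach mentioned above (all paths hypothetical), a single tmpfiles line of type C copies a vendor default into /etc only if the destination does not exist yet:

# /usr/lib/tmpfiles.d/myapp.conf
C /etc/myapp.conf - - - - /usr/share/myapp/myapp.conf.default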

Anyway, so much about what we have been working on and where we want to take this.

Conclusion

Before we finish, let me stress again why we are doing all this:

  1. For end-user machines like desktops, tablets or mobile phones, we want a generic way to implement factory reset, which the user can make use of when the system is broken (saves you support costs), or when they want to sell it and get rid of their private data, and renew that "fresh car smell".
  2. For embedded machines we want a generic way to reset devices. We also want a way for every single boot to be identical to a factory reset, in a stateless system design.
  3. For all kinds of systems we want to centralize vendor data in /usr so that it can be strictly read-only, and fully cryptographically verified as one unit.
  4. We want to enable new kinds of OS installers that simply deserialize a vendor OS /usr snapshot into a new file system, install a boot loader and reboot, leaving all first-time configuration to the next boot.
  5. We want to enable new kinds of OS updaters that build on this, and manage a number of vendor OS /usr snapshots in verified states, and which can then update /etc and /var simply by rebooting into a newer version.
  6. We want to scale container setups naturally, by sharing a single golden master /usr tree with a large number of instances that simply maintain their own private /etc and /var for their private configuration and state, while still allowing clean updates of /usr.
  7. We want to make thin clients that share /usr across the network work by allowing stateless bootups. During all the discussions on how /usr was to be organized this was frequently mentioned. A setup like this has so far only worked in very specific cases; with this scheme we want to make it work in the general case.

Of course, we have no illusions, just doing the groundwork for all of this in systemd doesn't make this all a real-life solution yet. Also, it's very unlikely that all of Fedora (or any other general purpose distribution) will support this scheme for all its packages soon; however, we are quite confident that the idea is convincing, that we need to start somewhere, and that getting the most core packages adapted to this shouldn't be out of reach.

Oh, and of course, the concepts behind this are really not new, we know that. However, what's new here is that we try to make them available in a general purpose OS core, instead of special purpose systems.

Anyway, let's get the ball rolling! Let's make stateless systems a reality!

And that's all I have for now. I am sure this leaves a lot of questions open. If you have any, join us on IRC on #systemd on freenode or comment on Google+.

DNF vs. Yum

A lot has been said on fedora-devel in the last few weeks about DNF and Yum. I thought it might be useful to contribute my own views, considering I’ve spent the last half-decade consuming the internal Yum API and the last couple of years helping to design the replacement alongside about half a dozen members of the packaging team here at Red Hat. I’m also a person who unsuccessfully tried to replace Yum completely with Zif in Fedora a few years ago, so I know quite a bit about packaging systems and metadata parsing.

From my point of view, the hawkey depsolving library that DNF is designed upon is well designed, optimised and itself built on a successful low-level SAT library that SUSE has been using for years on production level workloads. The downloading and metadata parsing component used by DNF, librepo, is also well designed and complements the hawkey API nicely.

Rather than use the DNF framework directly, PackageKit uses librepo and hawkey to share 80% of the mechanism between PK and DNF. From what I’ve seen of the DNF codebase it’s nice, with unit tests and lots of the older compatibility cruft removed. The only reason it’s not used in PK is that the daemon is written in C and we didn’t want to marshal everything via Python for latency reasons.

So, from my point of view, DNF is a new command line tool built on 3 new libraries. Its history may be that of a fork from yum, but it resembles more a 2014 rebuilt American hot-rod with all-new motor-sport parts, apart from the 1965 modified and strengthened chassis. Renaming DNF to Yum2 would be entirely the wrong message; it’s a new project with a new team and new goals.

June 16, 2014

datarootdir vs. datadir

Public Service Announcement: Debian helpfully defines datadir to be /usr/share/games for some packages, which means that the AppData and MetaInfo files get installed into /usr/share/games/appdata which isn’t picked up by the metadata parsers.

It’s probably safer to install the AppData files into $datarootdir/appdata as this will work even if a distro has redefined datadir to be something slightly odd. I’ve changed the examples on the AppData page, but if you maintain a game on Debian with AppData then this might affect you when Debian starts extracting AppStream metadata in the next few weeks. Anyone affected will be getting email in the next few days, although it looks to affect only very few people.
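
For autotools-based projects, a minimal Makefile.am sketch of the corresponding install rule (the file name is an example) might look like:

appdatadir = $(datarootdir)/appdata
appdata_DATA = myapp.appdata.xml
EXTRA_DIST = $(appdata_DATA)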

June 13, 2014

A new default theme for GTK+

This has been a long time coming. We’ve wanted to replace the default GTK+ theme for a very long time.

-#define DEFAULT_THEME_NAME "Raleigh"
+#define DEFAULT_THEME_NAME "Adwaita"

The Raleigh theme that we’ve used as the default until now has some advantages:

  • It is very simple
  • No dependency on a theme engine (external or internal)
  • It does not use a lot of resources

But there is no nice way of putting it: it is very ugly.

Raleigh

This may not be such a big deal on Linux, where distributions generally have ‘their’ theme, not to mention the many packaged and readily available themes. So, basically no Linux user ever sees the default GTK+ theme. The situation is very different on other platforms, where GTK+ is often bundled with applications, and it may not be easy to install themes, or get the bundled GTK+ to use them.

For a very long time, we’ve held onto the belief that the theming system is a way to make applications blend smoothly into the platform, and that there should be a native theme for each major platform that GTK+ can run on.

This is a great idea in theory. In practice, it has not worked out so well.  The one platform where we have a native theme is Windows, and even though the ms-windows theme has received much appreciated attention and updates by Руслан Ижбулатов  this cycle, it is still incomplete and has problems with some recent GTK+ features.

(No need to panic though. Even if it is no longer the default, the ms-windows theme will still be available.)

Adwaita, on the other hand, is a very complete theme that has received a lot of attention over the last three years. Not only does it support all recent GTK+ features, many of the CSS improvements that the GTK+ theming machinery has received in the last years were direct responses to the needs of the Adwaita designers.

Adwaita
With Adwaita, GTK+ applications can rely on having a 100% complete theme that will look and feel the same on all supported platforms.  A theme that is constantly receiving a lot of love and attention and keeps up with new GTK+ features.

Another big plus: Adwaita has a high-quality dark variant, which will now also be available everywhere.

Adwaita dark

So, why not do this switch earlier? After all, Adwaita has been around for a while.

The main reason is that we did not want to lose the ‘no theme engine’ characteristic of the default theme.  Theme engines, and loadable modules in general, are something we’ve been moving away from for a while now. They are

  • questionable from a security perspective (executable code that’s shipped separately, inserted into all your applications)
  • associated with search path problems
  • require stable APIs for many things that are more or less internal

(No reason to panic, though. Theme engines will not stop working overnight in GTK+ 3.)

The alternative to engines that we want themes to use is CSS. Our CSS implementation has only recently become powerful enough to replace the last features from the Adwaita theme engine  (focus rectangles and menu shadows). That is why we are doing the switch now.

One of the consequences of moving Adwaita into GTK+ is that we will no longer ship application-specific theming as part of the theme (many core GNOME applications currently have small amounts of CSS glue inside gnome-themes-standard). Where this is still relevant, applications should just install it themselves. Here is an example of how to do this.
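
The rough shape of that approach, assuming GTK+ 3 and CSS glue of your own (the class name and style below are placeholders, not shipped code):

#include <gtk/gtk.h>

static void
load_app_css (void)
{
  GtkCssProvider *provider = gtk_css_provider_new ();

  /* hypothetical application-specific styling */
  gtk_css_provider_load_from_data (provider,
      ".myapp-titlebar { background-color: shade(@theme_bg_color, 0.9); }",
      -1, NULL);

  /* register it for the whole screen, above the theme's own CSS */
  gtk_style_context_add_provider_for_screen (gdk_screen_get_default (),
      GTK_STYLE_PROVIDER (provider),
      GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);
  g_object_unref (provider);
}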

Thanks for the huge amount of recent work on Adwaita go to Jakub Steiner, Lapo Calamandrei, Jon McCann and Cosimo Cecchi.

June 11, 2014

Application Addons in GNOME Software

Ever since we rolled out the GNOME Software Center, people have wanted to extend it to do other things. One thing that was very important to the Eclipse developers was a way of adding addons to the main application, which seems a sensible request. We wanted to make this generic enough so that it could be used in gedit and similar modular GNOME and KDE applications. We’ve deliberately not targeted Chrome or Firefox, as these applications will do a much better job compared to the package-centric operation of GNOME Software.

So. Do you maintain a plugin or extension that should be shown as an addon to an existing desktop application in the software center? If the answer is “no” you can probably stop reading, but otherwise, please create a file something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name Here <your@email.com> -->
<component type="addon">
<id>gedit-code-assistance</id>
<extends>gedit.desktop</extends>
<name>Code Assistance</name>
<summary>Code assistance for C, C++ and Objective-C</summary>
<url type="homepage">http://projects.gnome.org/gedit</url>
<metadata_license>CC0-1.0</metadata_license>
<project_license>GPL-3.0+</project_license>
<updatecontact>richard_at_hughsie.com</updatecontact>
</component>

This wants to be installed into /usr/share/appdata/gedit-code-assistance.metainfo.xml — this isn’t just another file format, this is the main component schema used internally by AppStream. Some notes when creating the file:

  • You can use anything as the <id> but it needs to be unique and sensible and also match the .metainfo.xml filename prefix
  • You can use appstream-util validate gedit-code-assistance.metainfo.xml if you install appstream-glib from git.
  • Don’t put the application name you’re extending in the <name> or <summary> tags — so you’d use “Code Assistance” rather than “GEdit Code Assistance”
  • You can omit the <url> if it’s the same as the upstream project
  • You don’t need to create the metainfo.xml if the plugin is typically shipped in the same package as the application you’re extending
  • Please use <_name> and <_summary> if you’re using intltool to translate either your desktop file or the existing appdata file and remember to add the file to POTFILES.in if you use one

Please grab me on IRC if you have any questions or concerns, or leave a comment here. Kalev is currently working on the GNOME Software UI side, and I only finished the metadata extractor for Fedora today, so don’t expect the feature to be visible until GNOME 3.14 and Fedora 21.

June 06, 2014

A GtkInspector update

I first introduced GtkInspector a few weeks ago. Since then, it has made it into the GTK+ 3.13.2 development release and is now available in Fedora rawhide, which should hopefully make debugging of GTK+ applications in Fedora easier.

I’ve continued to work on the inspector, and it is time to give an update on what it can do now. So far, my focus has been mostly on covering more of GTK+’s features at a basic level, and not so much on adding sophisticated debugging support. That will probably change over time.

In unsorted order, here are some of the recent additions:
Inspector warning
We show a warning dialog now when the inspector window is opened with a keyboard shortcut. This is meant as a safety net for users who may end up here by accident when they mistype an application shortcut. We don’t want them to get scared by the inspector, so we offer them a quick way out.

The warning dialog can be turned off permanently with a setting, so frequent users of the inspector can avoid it.

(Video: http://blogs.gnome.org/mclasen/files/2014/06/better-picking.webm)

When using the mouse to pick a widget, we now lower the inspector window, so it doesn’t get in the way.

Resources
This tab shows all the embedded resources in the application (as well as in used libraries). This is perhaps not such a great debugging feature, but still useful information. We show the type and size of the resources, and display e.g. images as such.

Property Editing: Fonts
Editing of properties has been changed from a cell renderer to popovers. Popovers let us use much more room so that we can e.g. embed a font chooser to edit font properties.

Property Editing: Attributes
Some other properties have gotten their own editors. Here, we are editing a cell renderer property that is mapped to a tree model column. The editor allows changing the attribute mapping, and the Properties button lets us jump to the tree model in question, where it will open the Data tab:

Tree Model
It can be useful to see the data in the tree model, to verify that we have used the right column in the attribute mapping.

Property Editing: Actions
Another custom property editor has been added for action names. The Properties button lets us jump to the Actions tab of the widget where the action is defined:

Actions
Actions can be activated from here. For stateful actions, we also allow setting their state:

Action Editing
One feature I am proud of is that when gesture support was merged in GTK+ a few weeks ago, the branch already came with GtkInspector support for gestures.

Gestures
I hope that this can become a model for the future, and we can make ‘GtkInspector support’ an expected deliverable for every new GTK+ feature, just like accessibility or internationalization are now.

Last but not least, the inspector is very useful in sorting out theme questions – the GNOME designers have used it quite a bit while doing a major refactoring of the Adwaita theme that will land soon.

Of course, one can also use it for more silly things, like figuring out how to do translucent headers:

Translucent titlebars

I’m very interested in hearing both success stories of GtkInspector helping to solve problems and gaps where GtkInspector is lacking functionality.

May 29, 2014

Why We Need the Outreach Program for Women and More Outreach

Thank you to everyone who spoke up in the last few days in support of the Outreach Program for Women. The reason we need to have this program is that there are many challenges women encounter on their path to technology, and by working to address this disadvantage, we not only do the right thing and help women access the rewards of participating in free software, we also get awesome contributors whom we would otherwise have missed. Girls and women are systematically discouraged from exploring technology and from participating in the more hobbyist areas of technology, which are especially male dominated. They don’t typically have the kind of social support or encouragement men do to contribute to free software. By the time they work on their computer science degrees, they often feel behind their male peers in their experience, underestimate their abilities, and are less likely to apply for prestigious programs like Google Summer of Code.

Women often encounter sexist behavior when participating in the free software community. Recent examples from OPW interns include men being surprised about one of them attending a conference and commenting about the appearance of another in a professional context. Here is a quote from the first intern at a recent IRC meeting for interns: “I attended FOSDEM and a couple of guys came to tell me they were surprised to see a girl there… You feel like you’re not supposed to be there, like you’re doing something wrong… So it was nice to have this space here, and feel like I was in the right place.” When an organization posted an article on Facebook about another intern’s impressive work, among many congratulatory messages there were comments about the developer’s appearance. In addition to people speaking up about the inappropriateness of such behavior, support groups for women joining our community can help address the immediate negative feelings after such incidents.

The brilliance and drive of the women we accept for OPW have always left me in awe. To think that, judging by the historical data, they would not have been likely to get involved otherwise and we would not have had their contributions is sad. The program has had 170 interns so far, with 40 of them participating in the current round with 16 free software organizations. We had 122 applicants for this round who worked with a mentor and completed the required contribution. Because of the many outreach efforts, including OPW, the percent of women among GSoC participants increased from 7.1% in 2011 to 9.8% in 2014. OPW encourages women who are students and coders to apply for GSoC as well, and of the ones who applied for both programs at the same time, 26 were accepted to participate in GSoC. An additional 11 OPW participants went on to participate in GSoC in a later round. 13 found employment with sponsoring organizations. 15 gave full session talks at conferences. This is a sizable change, but we are in the beginning of a long road.

The group, including 3 OPW alums and 4 mentors, at the Feminist Hacker Lounge at PyCon 2014.

There was only one girl among 40 teenage Google Code-in grand prize winners in the last two years. About 10% of the participants were girls. While OPW is effective in involving college and post-college women in free software, we are set to run the program forever, unless we also start involving high school girls in free software. For the next Google Code-in, I’d like to engage our network of OPW alums, mentors, and supporters in mentoring high school girls in Google Code-in participation. I’d like to invite people to step up to start and lead this effort. The first step would be to form a local group of people to teach workshops similar to OpenHatch’s “Open Source Comes to Campus” at local programming for girls groups, programming camps, or high school CS classes. You would then need to report on your experience and set up the resources for people to replicate it in their locations. The next step after the workshops would be to organize meetups at local libraries for mentoring in ongoing free software contributions and Google Code-in participation. This will be the Outreach Program for Girls.

There are many other groups of people underrepresented in free software – people from the Middle East, Africa, and East Asia; people of color in North America and Europe; people from disadvantaged socio-economic backgrounds; people with disabilities. Empowerment and access are the main tenets of free software, and we need to do so much more to involve all these people. As Karen Sandler recently said to me, the current program is incomplete. Once we have a solid financial situation for OPW, we’d like to evolve it into OP-UP – the Outreach Program for Underrepresented People. Lukas Blakk is starting the Ascend Project for Mozilla, which will offer a 6 week course in contributing to free software, complete with financial support, to people from disadvantaged backgrounds. The first course will end in October, and I hope we can open the next round of OPW to graduates of this course as a pilot for OP-UP.

Being the community which fostered OPW and brought many free software organizations together to involve more of humanity in developing free software is earning the GNOME Foundation a lot of credit. It helps more people find out about GNOME and learn about the amazing product and community we have to offer. People notice how polished GNOME is. We went from 985 people contributing for the 3.10 release to 1,140 people contributing for 3.12! We shouldn’t forget how much we have achieved, when we consider what our software is missing or who our community is missing, and continue on to make things better.

May 28, 2014

configure fails with "No package 'foo' found" - and how to fix it

A common error when building from source is something like the error below:


configure: error: Package requirements (foo) were not met:

No package 'foo' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Seeing that can be quite discouraging, but luckily, in many cases it's not too difficult to fix. As usual, there are many ways to get to a successful result; I'll describe what I consider the simplest.

What does it mean?

pkg-config is a tool that provides compiler flags, library dependencies and a couple of other things to correctly link to external libraries. For more details on it see Dan Nicholson's guide. If a build system requires a package foo, pkg-config searches for a file foo.pc in the following directories: /usr/lib/pkgconfig, /usr/lib64/pkgconfig, /usr/share/pkgconfig, /usr/local/lib/pkgconfig, /usr/local/share/pkgconfig. The error message simply means pkg-config couldn't find the file and you need to install the matching package from your distribution or from source.
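
For example, once foo.pc is in place you can ask pkg-config for the flags yourself (using cairo here, since foo is just a placeholder):

$> pkg-config --cflags --libs cairo

which prints the include and linker flags a build system would pass to the compiler.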

What package provides the foo.pc file?

In many cases the package you need is the development version of the package name. Try foo-devel (Fedora, RHEL, SuSE, ...) or foo-dev (Debian, Ubuntu, ...). yum provides a great shortcut to install any pkg-config dependency:


$> yum install "pkgconfig(foo)"
will automatically search and install the right package, including its dependencies.
apt-get requires a bit more effort:

$> apt-get install apt-file
$> apt-file update
$> apt-file search --package-only foo.pc
foo-dev
$> apt-get install foo-dev
For those running Arch and pacman, the sequence is:

$> pacman -S pkgfile
$> pkgfile -u
$> pkgfile foo.pc
extra/foo
$> pacman -S extra/foo
zypper is the same as yum:

$> zypper in 'pkgconfig(foo)'
Once that's done you can re-run configure and see if all dependencies have been met. If more packages are missing, follow the same process for the next file.

Any users of other distributions - let me know how to do this on yours and I'll update the post.

Where does the dependency come from?

In most projects using autotools the dependency is specified in the file configure.ac and looks roughly like one of these:


PKG_CHECK_MODULES(FOO, [foo])
PKG_CHECK_MODULES(DEPENDENCIES, foo [bar >= 1.4] banana)
The first argument is simply a name that is used in the build system; you can ignore it. After the comma is the list of space-separated dependencies. In this case this means we need foo.pc, bar.pc and banana.pc, and more specifically we need a bar.pc that is equal to or newer than version 1.4 of the package. To install all three, follow the above steps and you're good.

My version is wrong!

It's not uncommon to see the following error after installing the right package:


configure: error: Package requirements (foo >= 1.9) were not met:

Requested 'foo >= 1.9' but version of foo is 1.8

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Now you're stuck and you have a problem. What this means is that the package version your distribution provides is not new enough to build your software. This is where the simple solutions end and it all gets a bit more complicated - with more potential errors. Unless you are willing to go into the deep end, I recommend moving on and accepting that you can't have the newest bits on an older distribution. Because now you have to build the dependencies from source, and that may then require building their dependencies from source, and before you know it you've built 30 packages. If you're willing, read on; otherwise - sorry, you won't be able to run your software today.

Manually installing dependencies

Now you're in the deep end, so be aware that you may see more complicated errors in the process. First of all you need to figure out where to get the source from. I'll now use cairo as an example instead of foo so you see actual data. On rpm-based distributions like Fedora run:


$> yum info cairo-devel
Loaded plugins: auto-update-debuginfo, langpacks
Skipping unreadable repository '///etc/yum.repos.d/SpiderOak-stable.repo'
Installed Packages
Name : cairo-devel
Arch : x86_64
Version : 1.13.1
Release : 0.1.git337ab1f.fc20
Size : 2.4 M
Repo : installed
From repo : fedora
Summary : Development files for cairo
URL : http://cairographics.org
License : LGPLv2 or MPLv1.1
Description : Cairo is a 2D graphics library designed to provide high-quality
: display and print output.
:
: This package contains libraries, header files and developer
: documentation needed for developing software which uses the cairo
: graphics library.
The important field here is the URL line - go to that and you'll find the source tarballs. That should be true for most projects, but you may need to google for the package name and hope. Search for the tarball with the right version number and download it. On Debian and related distributions, cairo is provided by the libcairo2-dev package. Run apt-cache show on that package:

$> apt-cache show libcairo2-dev
Package: libcairo2-dev
Source: cairo
Version: 1.12.2-3
Installed-Size: 2766
Maintainer: Dave Beckett <dajobe>
Architecture: amd64
Provides: libcairo-dev
Depends: libcairo2 (= 1.12.2-3), libcairo-gobject2 (= 1.12.2-3),[...]
Suggests: libcairo2-doc
Description-en: Development files for the Cairo 2D graphics library
Cairo is a multi-platform library providing anti-aliased
vector-based rendering for multiple target backends.
.
This package contains the development libraries, header files needed by
programs that want to compile with Cairo.
Homepage: http://cairographics.org/
Description-md5: 07fe86d11452aa2efc887db335b46f58
Tag: devel::library, role::devel-lib, uitoolkit::gtk
Section: libdevel
Priority: optional
Filename: pool/main/c/cairo/libcairo2-dev_1.12.2-3_amd64.deb
Size: 1160286
MD5sum: e29852ae8e8e5510b00b13dbc201ce66
SHA1: 2ed3534d02c01b8d10b13748c3a02820d10962cf
SHA256: a6099cfbcc6bd891e347dd9abc57b7f137e0fd619deaff39606fd58f0cc60d27
In this case it's the Homepage line that matters, but the process of downloading tarballs is the same as above. For Arch users, the interesting line is URL as well:

$> pacman -Si cairo | grep URL
Repository : extra
Name : cairo
Version : 1.12.16-1
Description : Cairo vector graphics library
Architecture : x86_64
URL : http://cairographics.org/
Licenses : LGPL MPL
....
zypper (Tizen, SailfishOS, Meego and others) doesn't have an interface for this, but you can run rpm on the package that you installed.

$> rpm -qi cairo-devel
Name : cairo-devel
[...]
URL : http://cairographics.org/
This command would obviously work on other rpm-based distributions too (Fedora, RHEL, ...). Unlike yum, it does require the package to be installed but by the time you get here you've already installed it anyway :)

Now to the complicated bit: in most cases, you shouldn't install the new version over the system version because you may break other things. You're better off installing the dependency into a custom folder ("prefix") and pointing pkg-config to it. So let's say you downloaded the cairo tarball, now you need to run:


$> mkdir $HOME/dependencies/
$> tar xf cairo-someversion.tar.xz
$> cd cairo-someversion
$> autoreconf -ivf
$> ./configure --prefix=$HOME/dependencies
$> make && make install
$> export PKG_CONFIG_PATH=$HOME/dependencies/lib/pkgconfig:$HOME/dependencies/share/pkgconfig
# now go back to original project and run configure again
So you create a directory called dependencies and install cairo there. This will install cairo.pc as $HOME/dependencies/lib/pkgconfig/cairo.pc. Now all you need to do is tell pkg-config that you want it to look there as well - so you set PKG_CONFIG_PATH. If you re-run configure in the original project, pkg-config will find the new version and configure should succeed. If you have multiple packages that all require a newer version, install them into the same path and you only need to set PKG_CONFIG_PATH once. Remember you need to set PKG_CONFIG_PATH in the same shell as you are running configure from.

If you keep seeing the version error the most common problem is that PKG_CONFIG_PATH isn't set in your shell, or doesn't point to the new cairo.pc file. A simple way to check is:


$> pkg-config --modversion cairo
1.13.1
Is the version number the one you installed or the system one? If it is the system one, you have a typo in PKG_CONFIG_PATH, just re-set it. If it still doesn't work do this:

$> cat $HOME/dependencies/lib/pkgconfig/cairo.pc
prefix=/home/you/dependencies
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: cairo
Description: Multi-platform 2D graphics library
Version: 1.13.1

Requires.private: gobject-2.0 glib-2.0 >= 2.14 [...]
Libs: -L${libdir} -lcairo
Libs.private: -lz -lz -lGL
Cflags: -I${includedir}/cairo
If the Version field matches what pkg-config returns, then you're set. If not, keep adjusting PKG_CONFIG_PATH until it works. There is a rare case where the Version field in the installed library doesn't match what the tarball said. That's a defective tarball and you should report this to the project, but don't worry, this hardly ever happens. In almost all cases, the cause is simply PKG_CONFIG_PATH not being set correctly. Keep trying :)

Let's assume you've managed to build the dependencies and want to run the newly built project. The only problem is: because you built against a newer library than the one on your system, you need to point it to use the new libraries.


$> export LD_LIBRARY_PATH=$HOME/dependencies/lib
and now you can, in the same shell, run your project.

Good luck!

AppData progress and the email deluge

In the last few days, I’ve been asking people to create and ship AppData files upstream. I’ve:

  • Sent 245 emails to upstream maintainers
  • Opened 38 launchpad bugs
  • Created 5 gnome.org bugs
  • Opened 72 sourceforge feature requests
  • Opened 138 github issues
  • Created 8 bugs on Fedora trac
  • Opened ~20 accounts on random issue trackers
  • Used 17 “contact” forms

In doing this, I’ve visited over 600 upstream websites, helpfully identifying 28 that are stated as abandoned by their maintainer (and thus removed from the metadata). I’ve also blacklisted quite a few things that are not actually applications and not suitable for the software center.

I’ve deliberately not included GNOME in this sweep, as a lot of the core GNOME applications already have AppData and most of the gnomies already know what to do. I also didn’t include XFCE applications, as XFCE has agreed on the mailing list to adopt AppData and is in the process of doing so already. KDE is just working out how to merge the various files created by Matthias, and I’ve not heard anything from LXDE or MATE. So, I only looked at projects not affiliated with any particular desktop.

So far, the response has been very positive, with at least 10% of the requests having been actioned, and some projects even doing new releases that I’ve been slowly uploading into Fedora. Another ~10% of requests have drawn acknowledgments from maintainers that they would do this sometime before the next release. I have found a lot of genuinely interesting applications in my travels, and a lot of junk. The junk is mostly unmaintained, and so my policy of not including applications that have not had an upstream release in the last 5 years (unless they have AppData manually added by the distro packager) seems to be valid.

At least 5 of the replies have been very negative, e.g. “how dare you ask me to do something — do it yourself” and things like “Please do not contact me again – I don’t want any new users“. The vast majority of people have not responded yet — so I’m preparing myself for a deluge over the next few weeks from the people that care.

My long term aim is to only show applications in Fedora 22 with AppData, so it seemed only fair to contact the various upstream projects about an initiative they’re probably not familiar with. If we don’t get > 50% of applications in Fedora with the extra data we’ll have to reconsider such a strong stance. So far we’ve reached over 20%, which is pretty impressive for a standard I’ve been pushing for such a short amount of time.

So, if you’ve got an email from me, please read it and reply — thanks.

May 23, 2014

DisplayPort MST dock support for Fedora 20 copr repo.
So I've created a copr

http://copr.fedoraproject.org/coprs/airlied/mst/

With a kernel + intel driver that should provide support for DisplayPort MST on Intel Haswell hardware. It doesn't do any of the fancy Dell monitor stuff yet; it's primarily for people who have Lenovo or Dell docks and laptops that can't currently do multihead.

The kernel source is from this branch which backports a chunk of stuff to v3.14 to support this.

http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-i915-mst-v3.14

It might still have some bugs and crashes, but the basics should in theory work.

May 19, 2014

Introducing tellme, a text-to-speech notifier

I've been hacking on a little tool for the last couple of days and I think it's ready for others to look at it and provide suggestions to improve it. Or possibly even tell me that it already exists, in which case I'll save a lot of time. "tellme" is a simple tool that uses text-to-speech to let me know when a command has finished. This is useful for commands that run for a couple of minutes - you can go off and read something and the computer tells you when it's done, instead of you polling every couple of seconds to check. A simple example:


tellme sudo yum update
runs yum update, and eventually says in a beautiful totally-not-computer-sounding voice "finished yum update successfully".

That was the first incarnation which was a shell script, I've started putting a few more features in (now in Python) and it now supports per-command configuration and a couple of other semi-smart things. For example:


whot@yabbi:~/xorg/xserver/Xi> tellme make
eventually says "finished xserver make successfully". With the default make configuration, it runs up the tree to search for a .git directory and then uses that as basename for the voice output. Which is useful when you rebuild all drivers simultaneously and the box tells you which ones finished and whether there was an error.

I put it up on github: https://github.com/whot/tellme. It's still quite rough, but workable. Have a play with it and feel free to send me suggestions.

The desktop and the developer
I was at the OpenStack Summit this week. The overwhelming majority of OpenStack deployments are Linux-based, yet the most popular laptop vendor (by a long way) at the conference was Apple. People are writing code with the intention of deploying it on Linux, but they're doing so under an entirely different OS.

But what's really interesting is the tools they're using to do so. When I looked over people's shoulders, I saw terminals and a web browser. They're not using Macs because their development tools require them, they're using Macs because of what else they get - an aesthetically pleasing OS, iTunes and what's easily the best trackpad hardware/driver combination on the market. These are people who work on the same laptop that they use at home. They'll use it when they're commuting, either for playing videos or for getting a head start so they can leave early. They use an Apple because they don't want to use different hardware for work and pleasure.

The developers I was surrounded by aren't the same developers you'd find at a technical conference 10 years ago. They grew up in an era that's become increasingly focused on user experience, and the idea of migrating to Linux because it's more tweakable is no longer appealing. People who spend their working day making use of free software (and in many cases even contributing or maintaining free software) won't run a free software OS because doing so would require them to compromise on things that they care about. Linux would give them the same terminals and web browser, but Linux's poorer multitouch handling is enough on its own to disrupt their workflow. Moving to Linux would slow them down.

But even if we fixed all those things, why would somebody migrate? The best we'd be offering is a comparable experience with the added freedom to modify more of their software. We can probably assume that this isn't a hugely compelling advantage, because otherwise it'd probably be enough to overcome some of the functional disparity. Perhaps we need to be looking at this differently.

When we've been talking about developer experience we've tended to talk about the experience of people who are writing software targeted at our desktops, not people who are incidentally using Linux to do their development. These people don't need better API documentation. They don't need a nicer IDE. They need a desktop environment that gives them access to the services that they use on a daily basis. Right now if someone opens an issue against one of their bugs, they'll get an email. They'll have to click through that in order to get to a webpage that lets them indicate that they've accepted the bug. If they know that the bug's already fixed in another branch, they'll probably need to switch to github in order to find the commit that contains the bug number that fixed it, switch back to their issue tracker and then paste that in and mark it as a duplicate. It's tedious. It's annoying. It's distracting.

If the desktop had built-in awareness of the issue tracker then they could be presented with relevant information and options without having to click through two separate applications. If git commits were locally indexed, the developer could find the relevant commit without having to move back to a web browser or open a new terminal to find the local checkout. A simple task that currently involves multiple context switches could be made significantly faster.

That's a simple example. The problem goes deeper. The use of web services for managing various parts of the development process removes the need for companies to maintain their own infrastructure, but in the process it tends to force developers to bounce between multiple websites that have different UIs and no straightforward means of sharing information. Time is lost to this. It makes developers unhappy.

A combination of improved desktop polish and spending effort on optimising developer workflows would stand a real chance of luring these developers away from OS X with the promise that they'd spend less time fighting web browsers, leaving them more time to get on with development. It would also help differentiate Linux from proprietary alternatives - Apple and Microsoft may spend significant amounts of effort on improving developer tooling, but they're mostly doing so for developers who are targeting their platforms. A desktop environment that made it easier to perform generic development would be a unique selling point.

I spoke to various people about this during the Summit, and it was heartening to hear that there are people who are already thinking about this and hoping to improve things. I'm looking forward to that, but I also hope that there'll be wider interest in figuring out how we can make things easier for developers without compromising other users. It seems like an interesting challenge.


May 15, 2014

Introducing GtkInspector

If you need to solve a tricky GTK+ problem in your application, gtkparasite is a very useful tool to have around. It lets you explore the widget hierarchy, change properties, tweak theme settings, and so on.

Unfortunately, gtkparasite is a tool for people ‘in the know’ - it is not part of GTK+, not advertised on our website, and not available out of the box on your average GTK+ installation.

At the Developer Experience hackfest in Berlin a few weeks ago, the assembled GTK+ developers discussed fixing this situation by making an interactive debugger like gtkparasite part of GTK+ itself. This way, it will be available whenever you run a GTK+ application, and we can develop and improve the debugging tools alongside the toolkit.

So, I’ve spent some of my spare time on this since the hackfest. The results are now in GTK+ master.  I’ve started from the gtkparasite code, but things have changed quite a bit, and some new things have appeared.

GtkInspector
If you want to give it a try yourself, you can use the Control-Shift-I or Control-Shift-D shortcuts to open an inspector window in any application that is using GTK+ master, or you can just set the GTK_DEBUG=interactive environment variable. Note that the keybinding will only work if GTK+ can find the org.gtk.Debug settings schema, so you probably want to run the application under jhbuild.
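
For example (the application name is a placeholder):

$> GTK_DEBUG=interactive ./myapp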

Here is a video of GtkInspector in action.

Among the things you can see in the video are interactive picking of widgets for inspection, visual debugging of graphic updates and baseline alignment, changing of widget properties, theme tweaks and general version and environment information.

I’ve also put together a page with ideas for future improvements to this debugging tool. If you are looking for a fun project to work on, this might just be it!

Thanks to Christian Hammond for creating gtkparasite and maintaining (over many years) such a useful debugging tool.  The GTK+ inspector would not exist without it.

May 12, 2014

selected and unselected slides with mouse over
It will be possible in LibreOffice Impress 4.3 to distinguish between selected and unselected slides when the mouse over highlight activates.

In previous versions, slide 2 here is drawn the same when the mouse is over it in the slide pane, regardless of whether it is selected or unselected:

While in 4.3 the two modes are drawn respectively as:

There's also a little subtle gradient added in for good measure.

May 11, 2014

Oracle continue to circumvent EXPORT_SYMBOL_GPL()
Oracle won their appeal regarding whether APIs are copyrightable. There'll be ongoing argument about whether Google's use of those APIs is fair use or not, and perhaps an appeal to the Supreme Court, but that's the new status quo. This will doubtless result in arguments over whether Oracle's implementation of Linux APIs in Solaris 10 was a violation of copyright or not (and presumably Google are currently checking whether they own any code that Oracle reimplemented), but that's not what I'm going to talk about today.

Oracle own some code called DTrace (Wikipedia has a good overview here - what it actually does isn't especially relevant) that was originally written as part of Solaris. When Solaris was released under the CDDL, so was DTrace. The CDDL is a file-level copyleft license with some restrictions not present in the GPL - as a result, combining GPLed code with CDDLed code will (in the absence of additional permission grants) result in a work that is under an inconsistent license and cannot legally be distributed.

Oracle wanted to make DTrace available for Linux as part of their Unbreakable Linux product. Integrating it directly into the kernel would obviously cause legal issues, so instead they implemented it as a kernel module. The copyright status of kernel modules is somewhat unclear. The GPL covers derivative works, but the definition of derivative works is a function of copyright law and judges. Making use of explicitly exported API may not be sufficient to constitute a derivative work - on the other hand, it might. This is largely untested in court. Oracle appear to believe that they're legitimate, and so have added just enough GPLed in-kernel code to support DTrace, while keeping the CDDLed core of DTrace separate.

The kernel actually has two levels of exposed (non-userspace) API - those exported via EXPORT_SYMBOL() and those exported via EXPORT_SYMBOL_GPL(). Symbols exported via EXPORT_SYMBOL_GPL() may only be used by modules that claim to be GPLed, with the kernel refusing to load them otherwise. There is no technical limitation on the use of symbols exported via EXPORT_SYMBOL().

(Aside: this should not be interpreted as meaning that modules that only use symbols exported via EXPORT_SYMBOL() will not be considered derivative works. Anything exported via EXPORT_SYMBOL_GPL() is considered by the author to be so fundamental to the kernel that using it would be impossible without creating a derivative work. Using something exported via EXPORT_SYMBOL() may result in the creation of a derivative work. Consult lawyers before attempting to release a non-GPLed Linux kernel module.)

DTrace integrates very tightly with the host kernel, and one of the things it needs access to is a high-resolution timer that is guaranteed to monotonically increase. Linux provides one in the form of ktime_get(). Unfortunately for Oracle, ktime_get() is only exported via EXPORT_SYMBOL_GPL(). Attempting to call it directly from the DTrace module would fail.

Oracle work around this in their (GPLed) kernel abstraction code. A function called dtrace_gethrtimer() simply returns the value of ktime_get(). dtrace_gethrtimer() is exported via EXPORT_SYMBOL() and therefore can be called from the DTrace module.
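
In code, the pattern described above boils down to something like the following minimal sketch (not Oracle's actual source; only the dtrace_gethrtimer() and ktime_get() names come from the description, everything else is illustrative):

#include <linux/module.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>

/*
 * In the kernel proper, ktime_get() is exported with
 * EXPORT_SYMBOL_GPL(ktime_get), so only modules claiming to be GPLed
 * can link against it. This GPLed glue module wraps it and re-exports
 * the wrapper without the _GPL restriction, making the same
 * functionality callable from a non-GPL module.
 */
ktime_t dtrace_gethrtimer(void)
{
	return ktime_get();
}
EXPORT_SYMBOL(dtrace_gethrtimer);

/* The glue module must itself claim to be GPL, or the kernel would
 * refuse to resolve ktime_get() for it in the first place. */
MODULE_LICENSE("GPL");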

So, in the face of a technical mechanism designed to enforce the author's beliefs about the copyright status of callers of this function, Oracle deliberately circumvent that technical mechanism by simply re-exporting the same function under a new name. It should be emphasised that calling an EXPORT_SYMBOL_GPL() function does not inherently cause the caller to become a derivative work of the kernel - it only represents the original author's opinion of whether it would. You'd still need a court case to find out for sure. But if it turns out that the use of ktime_get() does cause a work to become derivative, Oracle would find it fairly difficult to argue that their infringement was accidental.

Of course, as copyright holders of DTrace, Oracle could solve the problem by dual-licensing DTrace under the GPL as well as the CDDL. The fact that they haven't implies that they think there's enough value in keeping it under an incompatible license to risk losing a copyright infringement suit. This might be just the kind of recklessness that Oracle accused Google of back in their last case.

May 08, 2014

fit slide to window statusbar icon
LibreOffice 4.3 Impress now has a "Fit slide to window" icon in the statusbar alongside the zoom slider.
Additionally when you change zoom the slide now automatically centers itself. Hopefully the addition of the one-click to fit slide to window and the automatic centering on zoom-change will successfully address some of the complaining I've heard about the suffering that scrolling a slide causes.

May 06, 2014

Tweaking the GTK+ theme, using CSS

I got asked today:

How do I make the sidebar in evolution as narrow as the one in thunderbird, so I can see all my mail folders ? Doesn’t GTK+’s awesome CSS theming make that sort of thing very easy ?

Well, I was hopeful that it would, but I figured that I better try before saying yes. Here is what I came up with:

First, create a local directory to hold our theme modification:

mkdir ~/.local/share/themes/Adwaita/gtk-3.0

Next, copy the gtk.css file from the system theme:

cp /usr/share/themes/Adwaita/gtk-3.0/gtk.css ~/.local/share/themes/Adwaita/gtk-3.0

Well, that didn’t go as expected:

Oops

What went wrong ? It turns out that the Adwaita css file is pretty minimal:

@import url("resource:///org/gnome/adwaita/gtk-main.css");

All the secret sauce is wrapped up in a resource bundle in the same directory. GTK+ automatically looks for such a bundle in the same directory as the css file when loading themes. With this knowledge, making my local copy of Adwaita work is as simple as linking the resources into the right place:

ln -s /usr/share/themes/Adwaita/gtk-3.0/gtk.gresource ~/.local/share/themes/Adwaita/gtk-3.0

The last step is to add a little bit of css to gtk.css:

EMailSidebar.view {
  font-size: 5px;
}

@import url("resource:///org/gnome/adwaita/gtk-main.css");

And we reach the desired end result:

Small print

I know what you are going to ask:

How did I come up with the selector EMailSidebar.view that matches the evolution sidebar ?

The answer is, I used gtkparasite, which is a pretty useful debugging tool for this sort of problem.

A parasite

And because it is so useful, we are planning to include something very similar to it in GTK+ itself soon. You can follow this bug to track the progress.

May 02, 2014

AppData, meet SPDX. SPDX, meet AppData

A few long months ago I asked everyone shipping a desktop application to also write an AppData file for the software installer. So far over 300 projects have written these files and there are over 500 upstream screenshots that have been taken. The number has been growing steadily, and most active projects now ship a file upstream. So, what do I want you to do now? :)

The original AppData specification had something like this:

<?xml version="1.0" encoding="UTF-8"?>
<application>
<id type="desktop">gnome-power-statistics.desktop</id>
<licence>CC0</licence>
...

This had a couple of problems. The first was the spelling of license. I’m from Blighty, and forgot that I was supposed to be coding in en_US. The second was that people frequently got confused that they were supposed to be specifying the license of that specific metadata file, rather than the license of the project as a whole. A few months ago we fixed this, and added the requirement of a copyright statement to please the Debian overlords:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2013 Richard Hughes <richard@hughsie.com> -->
<application>
<id type="desktop">gnome-power-statistics.desktop</id>
<metadata_license>CC0-1.0</metadata_license>
<project_license>GPL-2.0+ and GFDL-1.3</project_license>
...

The project licenses just have to be valid SPDX strings. You can use “ and ” and “ or ” or even brackets if required, just like in a spec file. The reason for standardising on SPDX is that it’s being used on lots of distros now, and we can also make the licence substrings clickable in gnome-software very easily.

So, if you’ve already written an AppData file please do three things:

  • Make sure Copyright exists at the top of the file after the <?xml header
  • Convert license into metadata_license and change it to an SPDX ID
  • Add project_license listing all the licenses used in your project.

In Fedora 21 I’m currently doing a mapping from License: in the spec file to the SPDX format, although it’s not a 1:1 mapping, hence the need to have this data upstream. KDE is already shipping project_license in their AppData files, but I’m not going to steal that thunder. Stay tuned.

April 30, 2014

Good bye Totem browser plugin
10 years ago, I committed the first version of a browser plugin in Totem's source code tree. Today, it's going away.

The landscape of video on the Web changed, then changed back again, and web technologies have moved on. We've witnessed:

  • The fall of RealPlayer
  • The rise of Flash video players, as a way to turn videos into black boxes with minimal "copy protection" (cf. "YouTube downloader" in your favourite search engine)
  • The rise and precipitous fall of Silverlight (with only a handful of websites, ever, or still, using it)
  • And most importantly, the advent of HTML5's <video> tag
Totem's browser plugin did as good a job as it could of mimicking legacy web browser plugins from other platforms, such as QuickTime or Windows Media Player (we eventually stopped caring about mimicking RealPlayer).

It wasn't helped by the ill-defined Netscape Plugin API (NPAPI), which meant that we never knew whether we'd receive a stream for the video we were about to play, or no stream at all; whether, when we requested one, we'd get both an automatic stream and the one we requested; or whether the browser would simply download empty files. Nor could we tell the browser to open a file in another application when the user clicked directly on it. All in all, pretty dire.

We made attempts at replacing the Flash plugin for playing back videos, but the NPAPI meant that we needed to handle everything or nothing. Ideally, we'd have been able to tell the browser to use our browser plugin for websites that we could support through libquvi, and either fall back to a placeholder or the real Flash plugin for other cases. NPAPI didn't allow us to do that.

Given the current state of media playback in browsers on Linux, and the facts that Totem's browser plugin will not work on Wayland (it uses XEmbed to slot into the browser UI), that its UI has been pretty broken since the redesign of the main player (not unfixable, but time consuming), and that it does not work properly in GNOME's own web browser (due to bad interactions between Clutter and GL acceleration in WebKit), I think it's time to call it a day.

Good bye Totem browser plugin.

I'll miss the clever puns of your compatibility plugins (Real Player/Complex and QuickTime/NarrowSpace being the best ones). I won't miss interacting with ill-defined APIs and buggy implementations.
T440 touchpads, the story continues

I've just updated the post about X.Org synaptics support for the Lenovo T440, T540, X240, Helix, Yoga, X1 Carbon. For those following my blog, here is a rough diff of the updates:

  • All touchpads in this series need a kernel quirk to fix the min/max ranges. It's like a happy meal toy: make sure you collect them all.
  • A new kernel evdev input prop INPUT_PROP_TOPBUTTONPAD is available in 3.15. It marks the devices that require top software buttons. It will be backported to stable.
  • A new option, HasSecondarySoftButtons, was added to the synaptics driver. It is set automatically if INPUT_PROP_TOPBUTTONPAD is set; when enabled, the driver parses the SecondarySoftButtonAreas option and honours the values in it.
  • If you have the kernel min/max fixes and the new property, don't bother with DMI matching. Provide an xorg.conf.d snippet that unconditionally merges the SecondarySoftButtonAreas option and rely on the driver to parse it when appropriate.

X.Org synaptics support for the Lenovo T440, T540, X240, Helix, Yoga, X1 Carbon

Updates: 30 April 2014, for the new INPUT_PROP_TOPBUTTONPAD

This is a follow-up to my post from December Lenovo T440 touchpad button configuration. Except this time the support is real, or at least close to being finished. Since I am now seeing more and more hacks to get around all this I figured it's time for some info from the horse's mouth.

[update] I forgot to mention: synaptics 1.8 will have all these; the first snapshot is available here

Lenovo's newest series of laptops have a rather unusual touchpad. The trackstick does not have a set of physical buttons anymore. Instead, the top part of the touchpad serves as software-emulated buttons. In addition, the usual ClickPad-style software buttons are to be emulated on the bottom edge of the touchpad. An ASCII-art of that would look like this:


+----------------------------+
| LLLLLLLLLL MMMMM RRRRRRRRR |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|  LLLLLLLL        RRRRRRRR  |
+----------------------------+
Getting this to work required a fair bit of effort: patches to synaptics, the X server and the kernel, and plenty of trial-and-error. Kudos for getting all this sorted goes to Hans de Goede, Benjamin Tissoires, Chandler Paul and Matthew Garrett. And in the process of fixing this we also fixed a bunch of other issues that have been plaguing clickpads for a while.

The first piece in the puzzle was to add a second software button area to the synaptics driver. Option "SecondarySoftButtonAreas" now allows a configuration in the same manner as the existing one (i.e. right and middle button). Any click in that software button area won't move the cursor, so the buttons will behave just like physical buttons. Option "HasSecondarySoftButtons" defines if that button area is to be used. Of course, we expect that button area to work out of the box, so we now ship configuration files that detect the touchpad and apply that automatically. Update 30 Apr: Originally we tried to get this done based on PNPID or DMI matching, but a better solution is the new INPUT_PROP_TOPBUTTONPAD evdev property bit. This is now applied to all these touchpads, and the synaptics driver uses it to enable the secondary software button area. This bit will be available in kernel 3.15, with stable backports happening after that.

The second piece in the puzzle was to work around the touchpad firmware. The touchpads speak two protocols, RMI4 over SMBus and PS/2. Windows uses RMI4, Linux still uses PS/2. Apparently the firmware never got tested for PS/2 so the touchpad gives us bogus data for its axis ranges. A kernel fix for this is in the pipe. Update 30 Apr: every single touchpad of this generation needs a fix. They have been or are being merged.

Finally, the touchpad needed to be actually usable. So a bunch of patches that tweak the clickpad behaviours were merged in. If a finger is set down inside a software button area, finger movement no longer affects the cursor. This stops the ever-so-slight but annoying movements when you execute a physical click on the touchpad. Also, there is a short timeout after a click to avoid cursor movement when the user just presses and releases the button. The timeout is short enough that if you do a click-and-hold for drag-and-drop, the cursor will move as expected. If a touch started outside a software button area, we can now use the whole touchpad for movement. And finally, a few fixes to avoid erroneous click events - we'd sometimes get the software button wrong if the event sequence was off.

Another change altered the behaviour of the touchpad when it is disabled through the "Synaptics Off" property. If you use syndaemon to disable the touchpad while typing, the buttons now work even when the touchpad is disabled. If you don't like touchpads at all and prefer to use the trackstick only, use Option "TouchpadOff" "1". This will disable everything but physical clicks on the touchpad.

On that note I'd also like to mention another touchpad bug that was fixed in recent weeks: plenty of users reported synaptics having a finger stuck after suspend/resume, or sometimes even after logging in. This was an elusive bug, finally tracked down to a mishandling of SYN_DROPPED events in synaptics 1.7 and libevdev. I won't provide a fix for synaptics 1.7, but we've fixed libevdev - please use synaptics 1.8 RC1 or later and libevdev 1.1 RC1 or later.

Update 30 Apr: If the INPUT_PROP_TOPBUTTONPAD is not available on your kernel, you can use DMI matching through udev rules. PNPID matching requires a new kernel patch as well, at which point you might as well rely on the INPUT_PROP_TOPBUTTONPAD property. An example for udev rules that we used in Fedora is below:


ATTR{[dmi/id]product_version}=="*T540*", ENV{ID_INPUT.tags}="top_softwarebutton_area"
and with the matching xorg.conf snippet:

Section "InputClass"
Identifier "Lenovo T540 trackstick software button buttons"
MatchTag "top_softwarebutton_area"
Option "HasSecondarySoftButtons" "on"
# If you dont have the kernel patches for your touchpad
# to fix the min/max ranges, you need to use absolute coordinates
# Option "SecondarySoftButtonAreas" "3363 0 0 2280 2717 3362 0 2280"
Option "SecondarySoftButtonAreas" "58% 0 0 8% 42% 58% 0 8%"
EndSection
Update 30 Apr: For those touchpads that already have the kernel fix to adjust the min/max range, simply specifying the buttons in % of the touchpad dimensions is sufficient. For all other touchpads, you'll need to use absolute coordinates.

Fedora users: everything is being built in rawhide (Update 30 Apr: and now in F20 and F19 too). The COPR listed in an earlier version of this post is not available anymore.

April 23, 2014

format all comments
As part of our series on solving in-house needs, LibreOffice 4.3 will have a "format all comments" feature to change the character properties of all comments in a document.

April 21, 2014

Home entertainment implementations are pretty appalling
I picked up a Panasonic BDT-230 a couple of months ago. Then I discovered that even though it appeared fairly straightforward to make it DVD region free (I have a large pile of PAL region 2 DVDs), the US models refuse to play back PAL content. We live in an era of software-defined functionality. While Panasonic could have designed a separate hardware SKU with a hard block on PAL output, that would seem like unnecessary expense. So, playing with the firmware seemed like a reasonable start.

Panasonic provide a nice download site for firmware updates, so I grabbed the most recent and set to work. Binwalk found a squashfs filesystem, which was a good sign. Less good was the block at the end of the firmware with "RSA" written around it in large letters. The simple approach of hacking the firmware, building a new image and flashing it to the device didn't appear likely to work.

Which left dealing with the installed software. The BDT-230 is based on a Mediatek chipset, and like most (all?) Mediatek systems runs a large binary called "bdpprog" that spawns about eleventy billion threads and does pretty much everything. Running strings over that showed, well, rather a lot, but most promisingly included a reference to "/mnt/sda1/vudu/vudu.sh". Other references to /mnt/sda1 made it pretty clear that it was the mount point for USB mass storage. There were a couple of other constraints that had to be satisfied, but soon attempting to run Vudu was actually setting a blank root password and launching telnetd.

/acfg/config_file_global.txt was the next stop. This is a set of tokens and values with useful looking names like "IDX_GB_PTT_COUNTRYCODE". I tried changing the values, but unfortunately made a poor guess - on next reboot, the player had reset itself to DVD region 5, Blu Ray region C and was talking to me in Russian. More inconveniently, the Vudu icon had vanished and I couldn't launch a shell any more.

But where there's one obvious mechanism for running arbitrary code, there's probably another. /usr/local/bin/browser.sh contained the wonderful line:
export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so
, so it was just a matter of building a library that hooked open() and launched inetd, dropping that into the right place, and then opening the browser.
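
For the curious, such a hook library is only a few lines of C. This is an illustrative sketch rather than the code I actually used (in particular, the inetd path is an assumption):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Build with: gcc -shared -fPIC -o libSegFault.so hook.c -ldl
 * and drop it where the LD_PRELOAD line expects it. The first open()
 * call spawns inetd; every call falls through to the real open(). */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    static int spawned;
    mode_t mode = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    if (!spawned) {
        spawned = 1;
        system("/usr/sbin/inetd");  /* assumed path, purely illustrative */
    }

    if (flags & O_CREAT) {          /* O_CREAT means a mode was passed */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t) va_arg(ap, int);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}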

This time I set the country code correctly, rebooted and now I can actually watch Monkey Dust again. Hurrah! But, at the same time, concerning. This software has been written without any concern for security, and it listens on the network by default. If it took me this little time to find two entirely independent ways to run arbitrary code on the device, it doesn't seem like a stretch to believe that there are probably other vulnerabilities that can be exploited with less need for physical access.

The depressing part of this is that there's no reason to believe that Panasonic are especially bad here - especially since a large number of vendors are shipping much the same Mediatek code, and so probably have similar (if not identical) issues. The future is made up of network-connected appliances that are using your electricity to mine somebody else's Dogecoin. Our nightmarish dystopia may be stranger than expected.

April 17, 2014

What is GOM¹
Under that name is a simple idea: making it easier to save, load, update and query objects in an object store.

I'm not the main developer for this piece of code, but contributed a large number of fixes to it, while porting a piece of code to it as a test of the API. Much of the credit for the design of this very useful library goes to Christian Hergert.

The problem

It's possible that you've already implemented a data store inside your application, hiding your complicated SQL queries in a separate file because they contain injection security issues. Or you've used the filesystem as the store and threw away the ability to search particular fields without loading everything in memory first.

Given that SQLite pretty much matches our use case - it offers good search performance, it's a popular and thus well-documented project, and its files can be manipulated through a number of first-party and third-party tools - wrapping its API to make it easier to use is probably the right solution.

The GOM solution

GOM is a GObject based wrapper around SQLite. It will hide SQL from you, but still allow you to call to it if you have a specific query you want to run. It will also make sure that SQLite queries don't block your main thread, which is pretty useful indeed for UI applications.

For each table, you would have a GObject, a subclass of GomResource, representing a row in that table. Each column is a property on the object. To add a new item to the table, you would simply do:

item = g_object_new (ITEM_TYPE_RESOURCE,
                     "column1", value1,
                     "column2", value2,
                     NULL);
gom_resource_save_sync (item, NULL);
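
For completeness, here is roughly what such a subclass looks like. This is a condensed sketch: the ItemResource name, the "items" table and its columns are invented for illustration, with gom_resource_class_set_table() and gom_resource_class_set_primary_key() doing the actual table mapping:

#include <gom/gom.h>

#define ITEM_TYPE_RESOURCE (item_resource_get_type ())

typedef struct {
  GomResource  parent;
  gint64       id;        /* primary key */
  gchar       *column1;
} ItemResource;

typedef struct {
  GomResourceClass parent_class;
} ItemResourceClass;

G_DEFINE_TYPE (ItemResource, item_resource, GOM_TYPE_RESOURCE)

enum { PROP_0, PROP_ID, PROP_COLUMN1 };

static void
item_resource_get_property (GObject *object, guint prop_id,
                            GValue *value, GParamSpec *pspec)
{
  ItemResource *self = (ItemResource *) object;

  switch (prop_id) {
  case PROP_ID:      g_value_set_int64 (value, self->id); break;
  case PROP_COLUMN1: g_value_set_string (value, self->column1); break;
  default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
  }
}

static void
item_resource_set_property (GObject *object, guint prop_id,
                            const GValue *value, GParamSpec *pspec)
{
  ItemResource *self = (ItemResource *) object;

  switch (prop_id) {
  case PROP_ID:      self->id = g_value_get_int64 (value); break;
  case PROP_COLUMN1: g_free (self->column1);
                     self->column1 = g_value_dup_string (value); break;
  default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
  }
}

static void
item_resource_finalize (GObject *object)
{
  ItemResource *self = (ItemResource *) object;

  g_free (self->column1);
  G_OBJECT_CLASS (item_resource_parent_class)->finalize (object);
}

static void
item_resource_class_init (ItemResourceClass *klass)
{
  GObjectClass *object_class = G_OBJECT_CLASS (klass);
  GomResourceClass *resource_class = GOM_RESOURCE_CLASS (klass);

  object_class->get_property = item_resource_get_property;
  object_class->set_property = item_resource_set_property;
  object_class->finalize = item_resource_finalize;

  /* Map this resource to a table and name its primary key */
  gom_resource_class_set_table (resource_class, "items");
  gom_resource_class_set_primary_key (resource_class, "id");

  g_object_class_install_property (object_class, PROP_ID,
      g_param_spec_int64 ("id", "ID", "Primary key",
                          0, G_MAXINT64, 0, G_PARAM_READWRITE));
  g_object_class_install_property (object_class, PROP_COLUMN1,
      g_param_spec_string ("column1", "Column1", "A string column",
                          NULL, G_PARAM_READWRITE));
}

static void
item_resource_init (ItemResource *self)
{
}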

We have a number of features which try to make it as easy as possible for application developers to use gom, such as:
  • Automatic table creation for string, string arrays, and number types as well as GDateTime, and transformation support for complex types (say, colours or images).
  • Automatic database version migration, using annotations on the properties ("new in version")
  • Programmatic API for queries, including deferred fetches for results (see the sketch below)
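
As an illustration of that query API, fetching all the items matching a column value might look roughly like this (a sketch reusing the hypothetical ItemResource type from above, assuming a GomRepository called repository is already set up, and with error handling omitted):

GValue value = G_VALUE_INIT;
GomFilter *filter;
GomResourceGroup *group;
guint count;

/* Equivalent to WHERE column1 = 'value1', without writing any SQL */
g_value_init (&value, G_TYPE_STRING);
g_value_set_string (&value, "value1");
filter = gom_filter_new_eq (ITEM_TYPE_RESOURCE, "column1", &value);
g_value_unset (&value);

group = gom_repository_find_sync (repository, ITEM_TYPE_RESOURCE,
                                  filter, NULL);
g_object_unref (filter);
count = gom_resource_group_get_count (group);

/* Deferred fetch: rows are only read from the database here */
gom_resource_group_fetch_sync (group, 0, count, NULL);
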
Currently, the main increase in lines of code when porting from raw SQLite comes from the verbosity of declaring properties with GObject. That will hopefully be fixed by the GProperty work planned for the next GLib release.

The future

I'm currently working on some missing features to support a port of the grilo bookmarks plugin (support for column REFERENCES).

I will also be making (small) changes to the API to allow changing the backend from SQLite to another one, such as XML, or a binary format. Obviously the SQL "escape hatches" wouldn't be available with those backends.

Don't hesitate to file bugs if there are any problems with the API, or its documentation, especially with respect to porting from applications already using SQLite directly. Or if there are bugs (surely, no).

Note that JavaScript support isn't ready yet, due to limitations in gjs.

¹: « SQLite don't hurt me, don't hurt me, no more »