October 30, 2014

Hacker News metrics (first rough approach)
I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative of meritocracy, technology will fix everything, regulation is the cancer killing agile startups, and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:
  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:
Keyword     Penalised   Unpenalised
Women       13          4
Harass      2           0
Female      5           1
Intel       2           3
x86         3           4
ARM         3           4
Airplane    1           2
Startup     46          26

A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters.
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?

appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)

On joining the FSF board
I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to ask to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.


October 28, 2014

Weekend hack, updated

After a bit more debugging, and with some help from Jasper, here is the inspector debugging an X11 application while displaying under Wayland:

Wayland inspection!

It also turns out that the GTK_INSPECTOR_DISPLAY environment variable is sufficient to make this work. I started this demo with

GTK_DEBUG=interactive GDK_BACKEND=x11,wayland \
GTK_INSPECTOR_DISPLAY=wayland-0 ./gtk3-demo
Update

The reverse (application displaying under Wayland, inspector  on X) also works:

Reverse inspection

October 25, 2014

A weekend hack

I’ve been working on making GtkInspector use a different display connection. This helps isolate it from some of the changes you can trigger from inside the inspector UI. Then I thought, why not use a different backend?!

We did enough work on GDK backend separation that it could almost work. But since we didn’t add API to actually connect to specific backends (users and applications get some control with GDK_BACKEND and gdk_set_allowed_backends()),  nobody has ever used multiple backends in the same process. And things that don’t get used don’t work.  So some fixes were necessary.
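For context, the only backend control that applications have today is choosing which backends they are willing to use. Here is a minimal PyGObject sketch (GTK+ 3.x via introspection); it restricts the allowed backends but does not give you two backends in one process:

import gi
gi.require_version("Gdk", "3.0")
from gi.repository import Gdk

# Must be called before GTK+ opens a display, i.e. before Gtk is
# initialized. GDK_BACKEND in the environment works the same way.
Gdk.set_allowed_backends("wayland,x11")

gi.require_version("Gtk", "3.0")
from gi.repository import Gtk  # importing Gtk initializes it

win = Gtk.Window(title="backend demo")
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()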

But now I have gtk3-demo running under X while the inspector is using the Broadway backend and shows up in the web browser:

Video: http://blogs.gnome.org/mclasen/files/2014/10/multidisplay.webm

To my knowledge this is the first time a GTK+ application is using multiple GDK backends at the same time.

Anticipating some questions:

  • What is this good for?

Not that much, really. I consider it mostly a curiosity. It does prove that the GDK backend separation is (almost) working.

For most cases where this could be used with the inspector, it is probably preferable to move the inspector out of process and communicate via D-Bus. There are some people who are interested in this, so it may happen.

  • Can I use this in my application?

No. I haven’t committed the hack that makes the inspector use a different backend. If we wanted to support this, we would need to add a real API to connect to a specific backend.

What I did commit though is the code that makes the inspector always use a separate display connection (with the same backend), so you get the better isolation that can be seen in the video.

  • Does this also work with Wayland?

No. I first tried to make it work with Wayland+X, but the Wayland backend is more complicated than Broadway. I’m still getting crashes fairly soon. Broadway pretty much just worked, after a  round of fixes.

October 23, 2014

perf.gnome.org – introduction

My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another thing, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.
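The regression-spotting part is conceptually simple. Here is a hypothetical sketch (not the actual perf.gnome.org code) of flagging a run that is noticeably slower than the recent baseline for the same test on the same machine:

import statistics

def find_regressions(runs, window=20, threshold=1.10):
    """runs: list of (commit_id, seconds) pairs in commit order."""
    regressions = []
    for i in range(window, len(runs)):
        commit, value = runs[i]
        # baseline is the median of the previous runs in the window
        baseline = statistics.median(v for _, v in runs[i - window:i])
        if value > baseline * threshold:
            regressions.append((commit, value, baseline))
    return regressions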

We already have a continuous build server for GNOME, GNOME Continuous, which is hosted on build.gnome.org. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it, is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways that the build can break, and we’ll never get away from having to keep an eye on them. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

But actually putting performance tests into the set of tests that are run by build.gnome.org doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For a second thing, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

The overall architecture is shown in the following diagram:

HWTest Architecture

The most interesting thing to note in the diagram is that the test machines don’t directly connect to build.gnome.org to download builds or to perf.gnome.org to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first, it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller, so that after power cycling the controller machine can determine what to boot – a special image to do an update or the software being tested. The systemd journal from the test machine is exported over the network to the controller machine so that the controller machine can see when the update is done, and collect test results for publishing to perf.gnome.org.

perf.gnome.org is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once. Here’s perf.gnome.org catching a regression (fix):

perf.gnome.org regression

I’ll cover more about the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at https://wiki.gnome.org/Projects/HardwareTesting.


Linux Container Security
First, read these slides. Done? Good.

(Edit: Just to clarify - these are not my slides. They're from a presentation Jerome Petazzoni gave at Linuxcon NA earlier this year)

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].
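To make the "fewer kernel entry points" idea concrete, here is a tiny illustration using the libseccomp Python bindings. This is only a sketch of the general mechanism; real container runtimes ship much larger, carefully audited filters:

import errno
from seccomp import SyscallFilter, ALLOW, ERRNO

# Anything not explicitly allowed fails with EPERM instead of reaching
# the kernel's syscall handlers.
f = SyscallFilter(defaction=ERRNO(errno.EPERM))
for name in ("read", "write", "fstat", "brk", "mmap", "munmap",
             "rt_sigreturn", "exit", "exit_group"):
    f.add_rule(ALLOW, name)
f.load()

print("this still works; most other syscalls now return EPERM")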

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:

  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.

An early view of GTK+ 3.16

A number of new features have landed in GTK+ recently. These are now available in the 3.15.0 release. Here is a quick look at some of them.

Overlay scrolling

We’ve had long-standing feature requests to turn scrollbars into overlayed indicators, for touch systems. An implementation of this idea has been merged now. We show traditional scrollbars when a mouse is detected, otherwise we fade in narrow, translucent indicators. The indicators are rendered on top of the content and don’t take up extra space. When you move the pointer over the indicator, it turns into a full-width scrollbar that can be used as such.

Video: http://blogs.gnome.org/mclasen/files/2014/10/overlay-scroll2.webm

Other new scrolling-related features are support for synchronized scrolling of multiple scrolled windows with a shared scrollbar (like in the meld side-by-side view), and an ::edge-overshot signal that is generated when the user ‘overshoots’ the scrolling at either end.
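A quick PyGObject sketch of the new signal, written against the 3.16 API as described above (the exact signature may still change before the release):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def on_edge_overshot(scrolled_window, pos):
    # Called when the user scrolls past an edge; handy for pull-to-refresh.
    if pos == Gtk.PositionType.TOP:
        print("overshot the top edge")

win = Gtk.Window(title="overshoot demo")
sw = Gtk.ScrolledWindow()
sw.connect("edge-overshot", on_edge_overshot)

listbox = Gtk.ListBox()
for i in range(100):
    listbox.add(Gtk.Label(label="Row %d" % i))
sw.add(listbox)

win.add(sw)
win.set_default_size(300, 400)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()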

OpenGL support

This is another very old request – GtkGLExt and GtkGLArea have been around for more than a decade.  In 3.16, GTK+ will come with a GtkGLArea widget.

Video: http://blogs.gnome.org/mclasen/files/2014/10/opengl.webm

Alex’ commit message explains all the details, but the high-level summary is that we now render with OpenGL when we have to, and we can  fully control the stacking of pieces that are rendered with OpenGL or with cairo: You can have a translucent popup over a 3D scene, or mix buttons into your 3D views.
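A minimal PyGObject sketch of what using the widget should look like in 3.16; the actual GL drawing (via PyOpenGL or similar) is left out, since the point here is just where it hooks in:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def on_render(area, gl_context):
    # The GL context is current here; issue your GL calls, then return
    # True to indicate the frame is complete.
    return True

win = Gtk.Window(title="GLArea demo")
area = Gtk.GLArea()
area.connect("render", on_render)
win.add(area)
win.set_default_size(400, 300)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()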

While it is nice to have a GLArea widget, the real purpose is to prepare GDK for Emmanuele’s scene graph work, GSK.

A Sidebar widget

Ikey Doherty contributed the GtkSidebar widget. It is a nice and clean widget to turn the pages of a GtkStack.

sidebar
IPP Printing

The GTK+ print dialog can now handle IPP printers which don’t provide a PPD to describe their capabilities.

Pure CSS theming

For the last few years, we’ve implemented more and more CSS functionality in the GTK+ style code. In 3.14, we were able to turn Adwaita into a pure CSS theme. Since CSS has clear semantics that don’t include ‘call out to arbitrary drawing code’, we are not loading and using theme engines anymore.

We’ve landed this change early in 3.15 to give theme authors enough time to convert their themes to CSS.

More to come

With these features, we are about halfway through our plans for 3.16. You can look at the GTK+ roadmap to see what else we hope to achieve in the next few months.

October 21, 2014

A GNOME Kernel wishlist
GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

October 15, 2014

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set, for instance, “SourceCode” would consist of “SourceCodePro”, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight”. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in the application designing something. Font components need a one line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <updatecontact>richard_at_hughsie_dot_com</updatecontact>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end-users who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <metadata_license>CC0-1.0</metadata_license>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

GNOME Summit wrap-up
Day 3

The last day of the Summit was a hacking-focused day.

In one corner, Rob Taylor was bringing up Linux and GNOME on an Arm tablet. Next to him, Javier Jardon was working on a local gnome-continuous instance. Michael Catanzaro and Robert Schroll were working on D-Bus communication to let geary use Webkit2. Christian Hergert was adding editor preferences to gnome-builder and reviewed design ideas with Ryan Lerch. I spent most of the day teaching glade about new widgets in GTK+. I also merged OpenGL support for GTK+.

Overall, a quite productive day.

Thanks again to the sponsors


and to Walter Bender and the MIT for hosting us.


October 12, 2014

GNOME Summit update

The Boston summit is well underway, time for an update on what is happening.

Day 1

Breakfast

Around 20 people got together on Saturday morning and, after bagels and coffee, gathered in a room and collected topics that everybody was interested in working on. With Christian in attendance, gnome-builder was high on the list, but OSTree and gnome-continuous also showed up several times on the list.

Our first session was discussing issues application writers have with GTK+. Examples that were mentioned are the headerbar child order changing between 3.10 and 3.12, and icon sizes changing for some icons between 3.12 and 3.14. We haven’t come to any great conclusions as to what we can do here, other than being careful about changes. GTK+ has a README file that gets updated with release notes for each major release, but it is not very visible. I will make an effort to draw attention to such potentially problematic changes earlier in the cycle.

Another kind of problem that was pointed out is that  the old way of doing action-based menus with GtkAction and GtkUIBuilder has been deprecated, but the ‘new way’ with GAction, GMenuModel and GtkApplication has not really been fully developed yet; at least, there is not great documentation for migrating from the old to the new, or for just starting from scratch with the new way. The HowDoI‘s for this are a start, but not sufficient.
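For reference, the ‘new way’ in its smallest form looks roughly like this (a PyGObject sketch; the application id and menu contents are placeholders):

import sys
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gio, Gtk

class Demo(Gtk.Application):
    def __init__(self):
        super().__init__(application_id="org.example.Demo")

    def do_startup(self):
        Gtk.Application.do_startup(self)

        # GAction replaces GtkAction; the menu is plain data (GMenuModel).
        quit_action = Gio.SimpleAction.new("quit", None)
        quit_action.connect("activate", lambda action, param: self.quit())
        self.add_action(quit_action)

        menu = Gio.Menu()
        menu.append("Quit", "app.quit")
        self.set_app_menu(menu)

    def do_activate(self):
        Gtk.ApplicationWindow(application=self, title="Demo").present()

sys.exit(Demo().run(sys.argv))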

RMS

After the morning session, Richard Stallman came by and spent an hour with us, discussing the history of free software, current challenges, and how GNOME fits into this larger picture from his perspective.

In the afternoon, we had a long session discussing ‘future apps’, which is hard to summarize. It revolved around runtimes, sdks, bundling, pros and cons of btrfs vs ostree, portals, how to get apps ported to it, what to do about qt apps. This thread covers a lot of the same ground.

I managed to land GType statistics support in GObject and GtkInspector.

Day 2

Over breakfast, we talked about the best way to communicate with a webkit webview process, which somehow led us to kdbus. Takeaways from this discussion:

  • We (GNOME release team) should push for getting all users of webkit ported to webkit2 this cycle
  • We should create a document that explains what application and service authors and distributors need to do when porting to kdbus

After breakfast, we started out with a live demo of gnome-builder, which looks really promising. Christian is reusing a lot of existing things, from gtksourceview over devhelp to gitg and glade. But having a polished, unified application bringing it all together makes a real difference!

As this session was winding down, I started testing a GNOME 3.14.0 Wayland session with the goal of recording as many of the smaller (and larger) issues as possible in bugs.  The result so far can be found here. While I reported a long list of Wayland issues today,  all is not bad. It was no problem at all to write this blog post in a Wayland session!

Thanks to Red Hat and Codethink, who are sponsoring this event.


October 11, 2014

And now for some hardware (Onda v975w)
Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7 or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

October 02, 2014

Actions have consequences (or: why I'm not fixing Intel's bugs any more)
A lot of the kernel work I've ended up doing has involved dealing with bugs on Intel-based systems - figuring out interactions between their hardware and firmware, reverse engineering features that they refuse to document, improving their power management support, handling platform integration stuff for their GPUs and so on. Some of this I've been paid for, but a bunch has been unpaid work in my spare time[1].

Recently, as part of the anti-women #GamerGate campaign[2], a set of awful humans convinced Intel to terminate an advertising campaign because the site hosting the campaign had dared to suggest that the sexism present throughout the gaming industry might be a problem. Despite being awful humans, it is absolutely their right to request that a company choose to spend its money in a different way. And despite it being a dreadful decision, Intel is obviously entitled to spend their money as they wish. But I'm also free to spend my unpaid spare time as I wish, and I no longer wish to spend it doing unpaid work to enable an abhorrently-behaving company to sell more hardware. I won't be working on any Intel-specific bugs. I won't be reverse engineering any Intel-based features[3]. If the backlight on your laptop with an Intel GPU doesn't work, the number of fucks I'll be giving will fail to register on even the most sensitive measuring device.

On the plus side, this is probably going to significantly reduce my gin consumption.

[1] In the spirit of full disclosure: in some cases this has resulted in me being sent laptops in order to figure stuff out, and I was not always asked to return those laptops. My current laptop was purchased by me.

[2] I appreciate that there are some people involved in this campaign who earnestly believe that they are working to improve the state of professional ethics in games media. That is a worthy goal! But you're allying yourself to a cause that disproportionately attacks women while ignoring almost every other conflict of interest in the industry. If this is what you care about, find a new way to do it - and perhaps deal with the rather more obvious cases involving giant corporations, rather than obsessing over indie developers.

For avoidance of doubt, any comments arguing this point will be replaced with the phrase "Fart fart fart".

[3] Except for the purposes of finding entertaining security bugs


September 30, 2014

GTK+ widget templates now in Javascript
Let's get the features in early!

If you're working on a Javascript application for GNOME, you'll be interested to know that you can now write GTK+ widget templates in gjs.

Many thanks to Giovanni for writing the original patches. And now to a small example:

const MyComplexGtkSubclass = new Lang.Class({
    Name: 'MyComplexGtkSubclass',
    Extends: Gtk.Grid,
    Template: 'resource:///org/gnome/myapp/widget.xml',
    Children: ['label-child'],

    _init: function(params) {
        this.parent(params);

        this._internalLabel = this.get_template_child(MyComplexGtkSubclass,
                                                      'label-child');
    }
});

And now you just need to create your widget:

let content = new MyComplexGtkSubclass();
content._internalLabel.set_label("My updated label");

You'll need gjs from git master to use this feature. And if you see anything that breaks, don't hesitate to file a bug against gjs in the GNOME Bugzilla.

September 29, 2014

GNOME Boston Summit 2014

The Boston Summit is a 3-day hackfest for GNOME developers and contributors, that we traditionally hold over the Columbus day weekend.

In recent years, we’ve met in Montreal,  but for 2014,  we are back in Boston, at the MIT.

Early Turning

Details

When:    Saturday, October 11 — Monday, October 13
Where:  MIT building E51, Cambridge, MA

Topics

There’s plenty of topics we can discuss. Here are some I would be interested in working on:

  • Application development (bundling, SDKs, sandboxing, kdbus, …)
  • Wayland (porting leftovers, possibilities for the future, …)
  • Privacy

If you are interested, please let us know you’re coming.

Hope to see many old and new faces there!

Shipping larger application icons in Fedora 22

In GNOME 3.14 we show any valid application in the software center with an application icon of 32×32 or larger. Currently a 32×32 icon has to be padded with 16 pixels of whitespace on all 4 edges, and also has to be scaled x2 to match other UI elements on HiDPI screens. This looks very fuzzy and out of place and lowers the quality of an otherwise beautiful installing experience.

For GNOME 3.16 (Fedora 22) we are planning to increase the minimum icon size to 48×48, with recommended installed sizes of 16×16, 24×24, 32×32, 48×48 and 256×256 (or SVG in some cases). Modern desktop applications typically ship multiple sizes of icons in known locations, and it’s very much the minority of applications that only ship one small icon.

Soon I’m going to start nagging upstream maintainers to install larger icons than 32×32. If you’re re-doing the icon, please generate a 256×256 or 64×64 icon with alpha channel, as the latter will probably be the minimum size for F23 and beyond.
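If you already have a large master image, producing the recommended sizes is mostly mechanical. Here is a sketch using Pillow (file names and paths are only examples; SVG sources need a different tool such as rsvg-convert):

import os
from PIL import Image

SIZES = (16, 24, 32, 48, 256)
master = Image.open("myapp-512.png").convert("RGBA")  # keep the alpha channel

for size in SIZES:
    # standard hicolor icon theme layout
    out_dir = "icons/hicolor/%dx%d/apps" % (size, size)
    os.makedirs(out_dir, exist_ok=True)
    master.resize((size, size), Image.LANCZOS).save(
        os.path.join(out_dir, "myapp.png"))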

At the end of November I’ll change the minimum icon size in the AppStream generator used for Fedora so that applications not fixed will be dropped from the metadata. You can of course install the applications manually on the command line, but they won’t be visible in the software center until they are installed.

If you’re unclear on what needs to be done in order to be listed in the AppStream metadata, refer to the guidelines or send me email.

September 25, 2014

AppStream Progress in September

Last time I blogged about AppStream I announced that over 25% of applications in Fedora 21 were shipping the AppData files we needed. I’m pleased to say in the last two months we’ve gone up to 45% of applications in Fedora 22. This is thanks to a lot of work from Ryan and his friends, writing descriptions, taking screenshots and then including them in the fedora-appstream staging repo.

So fedora-appstream doesn’t sound very upstream or awesome. This week I’ve sent another 39 emails, and opened another 42 bugs (requiring 17 new bugzilla/trac/random-forum accounts to be opened). Every single file in the fedora-appstream staging repo has been sent upstream in one form or another, and I’ve been adding an XML comment to each one for a rough audit log of what happened where.

Some have already been accepted upstream and we’re waiting for a new tarball release; when that happens we’ll delete the file from fedora-appstream. Some upstreams are really dead, and have no upstream maintainer, so they’ll probably languish in fedora-appstream until for some reason the package FTBFS and gets removed from the distribution. If the package gets removed, the AppData file will also be deleted from fedora-appstream.

Also, in the process I’ve found lots of applications which are shipping AppData files upstream, but for one reason or another are not being installed in the binary rpm file. If you had to tell me I was talking nonsense in an email this week, I apologize. For my sins I’ve updated over a dozen packages to the latest versions so the AppData file is included, and fixed quite a few more.

Fedora 22 is on track to be the first release that mandates AppData files for applications. If upstream doesn’t ship one, we can either add it in the Fedora package, or in fedora-appstream.

September 24, 2014

GNOME 3.14 impressions

As part of the release preparation, I’ve done some smoketesting of GNOME 3.14 – looking good! As I usually do, I’ve taken some screenshots while testing things, to share some impressions.

Now back to final release preparations.

My free software will respect users or it will be bullshit
I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.

libinput - a common input stack for Wayland compositors and X.Org drivers

Last November, Jonas Ådahl sent an RFC to the wayland-devel list about a common library to handle input devices in Wayland compositors called libinput. Fast-forward and we are now at libinput 0.6, with broad support for devices and features. In this post I'll give an overview of libinput and why it is necessary in the first place. Unsurprisingly I'll be focusing on Linux; for other systems please mentally add the required asterisks, footnotes and handwaving.

The input stack in X.org

The input stack as it currently works in X.org is a bit of a mess. I'm not even talking about the different protocol versions that we need to support and that are partially incompatible with each other (core, XI 1.x, XI2, XKB, etc.), I'm talking about the backend infrastructure. Let's have a look:

The graph above is a simplification of the input stack, focusing on the various high-level functionalities. The X server uses some device discovery mechanism (udev now, previously hal) and matches each device with an input driver (evdev, synaptics, wacom, ...) based on the configuration snippets (see your local /usr/share/X11/xorg.conf.d/ directory).

The idea of having multiple drivers for different hardware was great when hardware still mattered, but these days we only speak evdev and the drivers that are hardware-specific are racing the dodos to the finishing line.

The drivers can communicate with the server through the very limited xf86 DDX API, but there is no good mechanism to communicate between the drivers. It's possible, just not doable in a sane manner. Most drivers support multiple X server releases so any communication between drivers would need to take version number mixes into account. Only the server knows the drivers that are loaded, through a couple of structs. Knowledge of things like "which device is running synaptics" is obtainable, but not sensibly. Likewise, drivers can get to the drivers of other devices, but not reasonably so (the API is non-opaque, so you can get to anything if you find the matching header).

Some features are provided by the X server: pointer acceleration, disabling a device, mapping a device to monitor, etc. Most other features such as tapping, two-finger scrolling, etc. are provided by the driver. That leads to an interesting feature support matrix: synaptics and wacom both provide two-finger scrolling, but only synaptics has edge-scrolling. evdev and wacom have calibration support but they're incompatible configuration options, etc. The server in general has no idea what feature is being used, all it sees is button, motion and key events.

The general result of this separation is that of a big family gathering. It looks like a big happy family at first, but then you see that synaptics won't talk to evdev because of the tapping incident a couple of years back, mouse and keyboard have no idea what forks and knives are for, wacom is the hippy GPL cousin that doesn't even live in the same state and no-one quite knows why elographics keeps getting invited. The X server tries to keep the peace by just generally getting in the way of everyone so no-one can argue for too long. You step back, shrug apologetically and say "well, that's just how these things are, right?"

To give you one example, and I really wish this was a joke: The server is responsible for button mappings, tapping is implemented in synaptics. In order to support left-handed touchpads, gnome-settings-daemon sets the button mappings on the device in the server. Then it has to swap the tapping actions from left/right/middle for 1/2/3-finger tap to right/left/middle. That way synaptics submits right button clicks for a one-finger tap, which is then swapped back by the server to a left click.

The X.org input drivers are almost impossible to test. synaptics has (quick guesstimate done with grep and wc) around 70 user-configurable options. Testing all combinations would be something on the order of 10^101 combinations, not accounting for HW differences. Testing the driver on its own is not doable, you need to fire up an X server and then run various tests against that (see XIT). But now you're not testing the driver, you're testing the whole stack. And you can only get to the driver through the X protocol, and that is 2 APIs away from the driver core. Plus, test results get hard to evaluate as different modules update separately.

So in summary, in the current stack features are distributed across modules that don't communicate with each other. The stack is impossible to test, partially thanks to the vast array of user-exposed options. These are largely technical issues, we control the xf86 DDX API and can break it when needed to, but at this point you're looking at something that resembles a rewrite anyway. And of course, don't you dare change my workflow!

The input stack in Wayland

From the input stack's POV, Wayland simply merges the X server and the input modules into one item. See the architecture diagram from the wayland docs:

evdev gets fed into the compositor and wayland comes out the other end. If life were so simple... Due to the X.org input modules being inseparable from the X server, Weston and other compositors started implementing their own input stack, separately. Let me introduce a game called feature bingo: guess which feature is currently not working in $COMPOSITOR. If you collect all five in a row, you get to shout "FFS!" and win the prize of staying up all night fixing your touchpad. As much fun as that can be, maybe let's not do that.

libinput

libinput provides a full input stack to compositors. It does device discovery over udev and event processing and simply provides the compositor with the pre-processed events. If one of the devices is a touchpad, libinput will handle tapping, two-finger scrolling, etc. All the compositor needs to worry about is moving the visible cursor, selecting the client and converting the events into wayland protocol. The stack thus looks something like this:

Almost everything has moved into libinput, including device discovery and pointer acceleration. libinput has internal backends for pointers, touchpads, tablets, etc. but they are not exposed to the compositor. More importantly, libinput knows about all devices (within a seat), so cross-device communication is possible but invisible to the compositor. The compositor still does configuration parsing, but only for user-specific options such as whether to enable tapping or not. And it doesn't handle the actual feature, it simply tells libinput to enable or disable it.

The graph above also shows another important thing: libinput provides an API to the compositor. libinput is not "wayland-y", it doesn't care about the Wayland protocol, it's simply an input stack. Which means it can be used as base for an X.org input driver or even Canonical's MIR.

libinput is very much a black box, at least compared to X input drivers (remember those 70 options in synaptics?). The X mantra of "mechanism, not policy" allows for interesting use-cases, but it makes the default 90% use-case extremely painful from a maintainer's and integrator's point of view. libinput, much like wayland itself, is a lot more restrictive in what it allows, specifically in the requirements it places on the compositor. At the same time it aims for a better out-of-the-box experience.

To give you an example, the X.org synaptics driver lets you arrange the software buttons more-or-less freely on the touchpad. The default placement is simply a config snippet. In libinput, the software buttons are always at the bottom of the touchpad and also at the top of the touchpad on some models (Lenovo *40 series, mainly). The buttons are of a fixed size (which we decided on after analysing usage data), and you only get a left and right button. The top software buttons have left/middle/right matching the markings on the touchpad. The whole configuration is decided based on the hardware. The compositor/user don't have to enable them, they are simply there when needed.

That may sound restrictive, but we have a number of features on top that we can enable where required. Pressing both left and right software buttons simultaneously gives you a middle button click; the middle button in the top software button row provides wheel emulation for the trackstick. When the touchpad is disabled, the top buttons continue to work and they even grow larger to make them easier to hit. The touchpad can be set to auto-disable whenever an external mouse is plugged in.

And the advantage of having libinput as a generic stack also results in us having tests. So we know what the interactions are between software buttons and tapping, we don't have to wait for a user to trip over a bug to tell us what's broken.

Summary

We need a new input stack for Wayland, simply because the design of compositors in a Wayland world is different. We can't use the current modules of X.org for a number of technical reasons, and the amount of work it would require to get those up to scratch and usable is equivalent to a rewrite.

libinput now provides that input stack. It's the default in Weston at the time of this writing, and is used by other compositors or in the process of being adopted by them. It abstracts most of input away and, more importantly, makes input consistent across all compositors.

September 22, 2014

We'll celebrate the GNOME 3.14 release on Tuesday evening in Lyon
In French, for a change :)

On Tuesday evening, September 23rd, a few of us will meet around 6:30pm at the Smoking Dog for a few drinks, and will follow up with an Indian dinner near the St-Jean metro station.

Feel free to sign up on the Wiki, whether you are GNOME users, developers or simply friends of free software.

See you on Tuesday!

September 21, 2014

Stylus behaviour on Microsoft Surface 3 tablets

Note: The purpose of this post is basically just so we have a link when this comes up in future bugreports.

Some stylus devices have two buttons on the stylus, plus the tip itself which acts as a button. In the kernel, these two are forwarded to userspace as BTN_STYLUS and BTN_STYLUS2. Userspace then usually maps those two into right and middle click, depending on your configuration. The pen itself uses BTN_TOOL_PEN when it goes into proximity.

The default stylus that comes with the Wacom Intuos Pro [1] has an eraser on the other side of the pen. If you turn the pen around it goes out of proximity and comes back in as BTN_TOOL_RUBBER. [2] In the wacom X driver we handle this accordingly, through two different devices available via the X Input Extension. For example, GIMP assigns different tools to each device. [3]
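If you want to watch this proximity dance yourself, the python-evdev bindings make it easy (the event node is an example; use evtest to find your stylus device):

from evdev import InputDevice, ecodes

dev = InputDevice("/dev/input/event5")  # example path
interesting = {
    ecodes.BTN_TOOL_PEN: "pen in/out of proximity",
    ecodes.BTN_TOOL_RUBBER: "eraser in/out of proximity",
    ecodes.BTN_STYLUS: "lower stylus button",
    ecodes.BTN_STYLUS2: "upper stylus button",
    ecodes.BTN_TOUCH: "tip touching the surface",
}

for event in dev.read_loop():
    if event.type == ecodes.EV_KEY and event.code in interesting:
        print("%s: %s" % (interesting[event.code],
                          "down" if event.value else "up"))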

In the HID spec there are a couple of different fields (In Range, Tip Switch, Barrel Switch, Eraser and Invert) that matter here. Barrel Switch is the stylus button, In Range and Tip Switch are proximity and "touching the surface". Invert signals which side of the pen points down, and Eraser is triggered when the eraser touches the surface. In Wacom tablets, Invert is always on when the eraser touches because that's how the pens are designed.

Microsoft, in its {in}finite wisdom, has decided to make the lower button an "eraser" button on its Surface 3 Pen. So what happens now is that once you press the button, In Range goes to zero, then to one in the next event, together with Invert. Eraser comes on once you touch the surface, but curiously that also causes Invert to go off. Anyway, that's a low-level detail that will get handled. What matters to users is that on the press of that button, the pen goes virtually out of proximity, comes back in as eraser and then, hooray, you can now use it as an eraser tool without having actually moved it. Of course, since the button controls the mode, it doesn't actually work as a button; you're left with only the second button on the stylus.

Now, the important thing here is: that's the behaviour you get if you have one of these devices. We could work around this in software by detecting the mode button, flipping bits here and there and trying to emulate a stylus button based on the mode switches. But we won't. The overlords have decreed and it's too much effort to hack around the intended behaviour for little gain.

[1] If marketing decides to rename products so that you need a statement "Bamboo pen tablets are now Intuos. Intuos5 is now Intuos Pro." then you've probably screwed up.
[2] Isn't it nice to see some proper Queen's English for a change? For those of you on the other side of some ocean: eraser.
[3] GIMP, rubber tool, I'm not making this up, seriously.

Fresh software from the 3.14 menu
Here is a small recap of the GNOME 3.14 features I worked on. Some are already well publicised, through blogs:
And obviously loads of bug fixes, and patch reviews. And I do mean loads :)

To look forward to

If all goes according to plan, I'll be able to merge the aforementioned automatic rotation support into systemd/udev. The kernel API is pretty bad, which makes the user-space code look bad...

The first parts of ebooks support in gnome-documents have already been written, scheduled for 3.16.

And my favourites

Note: With links that will open up like a Christmas present when GNOME 3.14 is released.

There are a lot of big, new features in GNOME 3.14. The Adwaita rewrite made it possible to polish the theme greatly. The captive portals support is very useful; those of you who travel will enjoy it (I certainly have!).

But my favourite new feature has to be the gestures support in gnome-shell. I'll make good use of that :)

September 19, 2014

What the GNOME release team is doing


At the release team BoF at this year's GUADEC, I said I would write a blog post about the whats and hows and ifs of release team work. I'm a little late with this, but here it is: a glimpse into the life of a GNOME release team member.

We are in the end phase of the development cycle, when the release team work is really kicking into high gear.

Blocker bugs

Since the .90 release, we've been tracking blocker bugs. The way this works is that we set the GNOME Target field to the next release. During the course of the development cycle, we've had 50 bugs that were marked with

GNOME Target=3.14

at some point. Today, we're down to a single one, which will hopefully be gone before the weekend is over.

In order to draw our developers' attention to these bugs, we're sending out regular reports to desktop-devel-list (see them here, here and here). Andre has taken over this task this cycle.

We don't have formal criteria for what bugs to mark as blockers; we mostly go by the instinct and experience of bug-zapping veterans like Andre Klapper. My own criteria for setting this flag mostly come down to these questions:

  • Is it a crash in a core component?
  • Is it very visible or annoying?
  • Will it affect many users?

For finding bugs that should be blockers, I am regularly scanning all incoming bugs in bugzilla. On an average day this query finds between 100 and 200 bugs – with some practice, one can get through that list in 15 minutes.

We also get ‘nominations’ for blockers from maintainers and developers, which is very helpful.

Development Releases

The duty of ‘doing’ releases is distributed among all of the current, active release team members.

Our development schedule has a pretty well-established cadence of development releases: we do a number of early development snapshots which are roughly a month apart (3.13.1, 3.13.2, 3.13.3 and 3.13.4 this cycle), followed by the beta releases which are 2 weeks apart (3.13.90, 3.13.91 and 3.13.92 this cycle).

The end product of each development release is a set of jhbuild modulesets and a forest of symlinks to the tarballs for individual modules that are included in the release.

Most of the mechanics of creating the modulesets by rewriting our regular modulesets (which point at git repositories, not tarballs), and creating those symlinks on the server are handled by scripts that have been passed down through the generations.

The time-consuming aspect of creating a release is that it usually takes several attempts to create candidate modulesets, hunt down missing releases (or do them ourselves – which is sadly necessary for a number of 'weakly maintained' modules), and do a full jhbuild using the final modulesets. As a consequence, while our official release day is always Monday, the release typically happens on Wednesday or Thursday of the same week.

Planning

Towards the end of a development cycle, the release team also starts to plan for the next cycle.

This includes creating the schedule, taking a look at annoying bugs that we should tackle in the next cycle, figuring out if there are project-wide goals that we should push, and collecting input on new modules and features that people want to work on for the next release.

Since we are at this point right now, let me end with this request:

Please let us know what features are on your module's roadmap for the next cycle!

pointer acceleration in libinput - a userstudy

This post is a summary, the full writeup with data is here (PDF). The texts are quite similar, so if you plan to read the paper you can skip this post.

Following pointer acceleration in libinput - an analysis, we decided to run an actual userstudy to gather some data on how our acceleration behaves, and - more importantly - test if a modified acceleration method is better.

We developed two new pointer acceleration methods (plus the one already in libinput). As explained previously, a pointer acceleration method is a function mapping the input speed of a device to cursor speed in pixels. The faster one moves the mouse, the further the cursor moves per "mickey" (a 1-device-unit movement). In the simplest example, input deltas of 1 may result in a 1 pixel movement, while input deltas of 10 may result in a 30 pixel movement.
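
To make the mapping concrete, here is a small sketch of such a function. The thresholds, maximum factor and linear ramp are invented for illustration and do not correspond to libinput's actual profiles.

```python
# Illustrative pointer acceleration: map input speed (units/ms) to a factor,
# then scale the device delta by that factor. The thresholds and maximum
# are invented for this sketch and do not match libinput's real profiles.

def accel_factor(speed, threshold=0.4, max_speed=2.5, max_factor=3.0):
    """Return an acceleration factor for a given input speed in units/ms."""
    if speed <= threshold:
        return 1.0                      # slow movements are not accelerated
    if speed >= max_speed:
        return max_factor               # fast movements get the maximum factor
    # linear ramp between the two thresholds
    t = (speed - threshold) / (max_speed - threshold)
    return 1.0 + t * (max_factor - 1.0)

def accelerate(dx, dy, dt_ms):
    speed = (dx ** 2 + dy ** 2) ** 0.5 / dt_ms
    factor = accel_factor(speed)
    return dx * factor, dy * factor

print(accelerate(1, 0, 8))    # slow: roughly unchanged
print(accelerate(10, 0, 8))   # fast: moves considerably further
```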

The three pointer acceleration methods used in this study were nicknamed:

  • smooth: a shortening of "smooth and simple", this is the method used in libinput 0.5 as well as in the X.Org stack since ~2008.
  • stretched: a modification of 'smooth' with roughly the same profile, but the maximum acceleration is applied at a higher speed. This method was developed by Hans de Goede and was very promising in personal testing.
  • linear: a linear acceleration method with a roughly similar speed-to-acceleration profile as the first two. This method was developed to test whether a simple function could achieve results similar to the more complex "smooth" and "stretched" methods.
The input data expected by all three methods is in units/ms. Touchpad devices are normalised to 400 dpi, other devices are left as-is. It is impossible to detect in software what resolution a generic mouse supports, so any acceleration method differs between devices. This is intended by the manufacturer: high-resolution devices are sold as "faster" for this reason.

The three pointer acceleration methods
As the graph shows, the base profile is roughly identical and the main difference is how quickly the maximum acceleration factor is reached.

Study description

The central component was a tool built on libinput that displays a full-screen white window with a round green target. Participants were prompted by GTK dialog boxes on the steps to take next. Otherwise the study was unsupervised and self-guided.

The task required participants to click on round targets with a radius of 15, 30 or 45 pixels. Targets were grouped; each "set" consisted of 15 targets of the same size. On a successful click within the target, a new target appeared at one of 12 possible locations, arranged in a 4x3 grid with grid points 300 pixels apart. The location of the target was randomly selected, but it was never in the same location twice in a row.


Screenshot of the study tool with the first target (size 45) visible.

Each participant was tested for two acceleration methods, each acceleration method had 6 sets of 15 targets (2 sets per target size, order randomised). The two acceleration methods were randomly selected on startup, throughout the study they were simply referred to as "first" and "second" acceleration method with no further detail provided. Acceleration changed after 6 sets (participants were informed about it), and on completion of all 12 sets participants had to fill out a questionnaire and upload the data.

Statistical concepts

A short foray into statistics to help explain the numbers below. This isn't a full statistics course, I'm just aiming to explain the various definitions used below.

The mean of a dataset is what many people call the average: all values added up, divided by the number of values. As a statistical tool, the mean is easy to calculate but is greatly affected by outliers. For skewed datasets the median is more helpful: the middle value of the data array (array[len/2]). The closer the mean and the median are together, the more symmetrical the distribution is.

The standard deviation (SD) describes how far the data points spread around the mean. The smaller the SD, the closer together the data points are. The SD is also used to distinguish real effects from random sampling errors. Generally, if the difference between two items is more than 2 standard deviations, there's a 95% confidence that this is a true effect, not just randomness (95% certainty is a widely accepted standard in this domain). That 95% directly maps to the p-value you may have seen in other studies. A p-value of less than 0.05 equals a less than 5% chance of random factors causing the data differences. That translates into "statistically significant".

The ANOVA method is a standard statistical tool for studies like ours (we're using one-way ANOVA only here; Wikipedia has an example here). If multiple sets of samples differ in only a single factor (e.g. pointer acceleration method), we start with the so-called Null Hypothesis of "the factor has no influence, all results are the same on average". Our goal is to reject that hypothesis so we can say that the factor did actually change things. If we cannot reject the Null Hypothesis, either our factor didn't change anything or the results are caused by random influences. The tools for ANOVA compare the mean value within each set to the mean value differences across the sets and spit out a p-value. As above, a p-value less than 0.05 means greater than 95% confidence that the Null Hypothesis can be rejected, i.e. we can say our factor did cause those differences.
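
As a concrete illustration, this is roughly what a one-way ANOVA looks like with scipy (assuming scipy is available). The numbers are made up and only stand in for, say, time-to-click samples from two acceleration methods.

```python
# Minimal one-way ANOVA example with scipy (assumes scipy is installed).
# The numbers are invented; they stand in for measurements from two methods.
from scipy.stats import f_oneway

method_a = [1.21, 1.35, 1.18, 1.40, 1.29]
method_b = [1.05, 1.12, 1.22, 1.08, 1.15]

f_stat, p_value = f_oneway(method_a, method_b)
if p_value < 0.05:
    print("reject the null hypothesis: the factor likely had an effect")
else:
    print("cannot reject the null hypothesis")
```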

One peculiarity of ANOVA is that the sample sets have to be the same size. This affects our samples; more on that below.

Study participants

An email was sent to three Red Hat-internal lists with a link to the study description. One list was a specific developer list, the other two were generic lists. As Red Hat employees, participants are expected to be familiar with Linux-based operating systems, and the majority are more technical than the average user. The data collected does not make it possible to identify who took part in the study beyond the information provided in the questionnaire.

44 participants submitted results, 7 left-handed, 37 right-handed (no ambidextrous option was provided in the questionnaire). Gender distribution was 38 male, 6 female. Mean age was 33.3 years (SD 6.7); participants had a mean of 21.2 years of experience with mouse-like input devices (SD 4.9) and used those devices an average of 58.1 hours per week (SD 20.0).

As all participants are familiar with Linux systems and thus exposed to the smooth acceleration method on their workstations, we expect a bias towards the smooth acceleration method.

Study data

Data was manually checked and verified, three result files were discarded for bugs or as extreme outliers, leaving us with 41 data files. The distribution of methods in these sets was: 27 for smooth, 25 for stretched and 30 for linear.

The base measurement was the so-called "Index of Difficulty" (ID), the number obtained by distance-to-target/width-of-target. This index gives an indication of how difficult it is to hit the target; a large target very close is easier to hit than a small target that is some distance away.


Illustration of the Index of Difficulty for a target.

In hindsight, the study was not ideally suited for evaluation based on ID. The targets were aligned on a grid and the ID based on the pointer position was very variable. As is visible in the graph below, there are few clear dividing lines to categorise the targets based on their ID. For the evaluation the targets were grouped into specific ID groups: ID < 4.2, ID < 8.4, ID < 12.9, ID < 16.9, 16.9 < ID < 25 and ID > 25. The numbers were selected simply because there are clear gaps between the ID clusters. This division results in uneven group sizes. (I ran the same calculations with different group numbers; it does not have any real impact on the results.)
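
For illustration, here is a small sketch of the ID computation and grouping. The group boundaries are the ones listed above, and the distance/width definition follows the earlier description; the example coordinates are invented.

```python
# Sketch: compute the Index of Difficulty and sort a target into the ID
# groups used in this study. Boundaries are the ones listed in the text.
import bisect
import math

ID_BOUNDARIES = [4.2, 8.4, 12.9, 16.9, 25.0]   # upper bounds of the groups

def index_of_difficulty(pointer, target_centre, target_width):
    distance = math.hypot(target_centre[0] - pointer[0],
                          target_centre[1] - pointer[1])
    return distance / target_width

def id_group(id_value):
    # returns 0 for ID < 4.2, 1 for ID < 8.4, ..., 5 for ID > 25
    return bisect.bisect_left(ID_BOUNDARIES, id_value)

i = index_of_difficulty((100, 100), (700, 400), 30)   # invented positions
print(round(i, 2), "-> group", id_group(i))
```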


ID for each target with the divider lines shown
The top ID was 36.44, corresponding to a 15px radius target 1093 pixels away, the lowest ID was 1.45, corresponding to a 45px radius target 130 pixels away.

Number of targets per ID group
As said above, ANOVA requires equal-sized sample sets. ANOVA was performed separately between the methods (i.e. smooth vs stretched, then smooth vs linear, then stretched vs linear). Before each analysis, the two data arrays were cut to be of equal length. For example, comparing smooth and stretched in the ID max group shortened the smooth dataset to 150 elements. The order of targets was randomised.
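
A sketch of that cut-then-compare step might look like this (again assuming scipy; the toy data is invented and stands in for the per-target measurements of one ID group for two acceleration methods).

```python
# Sketch: shuffle both samples, truncate to the shorter length, then run a
# pairwise one-way ANOVA. Assumes scipy is installed.
import random
from scipy.stats import f_oneway

def compare(samples_a, samples_b):
    a, b = list(samples_a), list(samples_b)
    random.shuffle(a)
    random.shuffle(b)
    n = min(len(a), len(b))          # ANOVA here needs equal-sized sets
    return f_oneway(a[:n], b[:n])

smooth = [random.gauss(1.3, 0.2) for _ in range(200)]      # invented data
stretched = [random.gauss(1.2, 0.2) for _ in range(150)]   # invented data
f_stat, p = compare(smooth, stretched)
print(f"F={f_stat:.2f}, p={p:.3f}")
```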

Study Results

The following factors were analysed:

  • Time to click on target
  • Movement efficiency
  • Overshoot

Time to click on target

Time to click on a target was measured as the time between displaying the target and clicking on it. This does not take reaction time into account, but there is no reliable way of measuring reaction time in this setup.


Mean time to click on target

As is visible, an increasing ID increases the time-to-click. At a quick glance, we can see that the smooth method is slower than the other two in most ID groups, with linear and stretched being fairly close together. However, the differences are only statistically significant in the following cases:

  • ID 8.4: linear is faster than smooth and stretched
  • ID 12.9: linear and stretched are faster than smooth
  • ID 25: linear and stretched are faster than smooth
In all other combinations, there is no statistically significant difference between the three methods, but overall there is a slight advantage for the stretched and linear methods.

Efficiency of movement

The most efficient path from the cursor position to the target is a straight line. However, most movements do not follow that straight line for a number of reasons. One of these reasons is basic anatomy - it is really hard to move a mouse in a straight line due to the rotary action of our wrists. Other reasons may be deficiencies in the pointer acceleration method. To measure the efficiency, we calculated the distance to the target (i.e. the straight line) and compared that to all the deltas added up to the total movement. Note that the distance is to the center of the target, whereas the actual movement may be to any point in the target. So for short distances and large targets, there is a chance that a movement may be less than the distance to the target.


Straight distance to target vs. movement path shows the efficiency of movement.

The efficiency was calculated as movement-path/distance, then normalised to a percent value. A value of 10 thus means the movement path was 10% longer than the straight line to the target centre.
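
A small sketch of that calculation, with invented pointer deltas:

```python
# Sketch: extra distance covered, as a percentage over the straight line to
# the target centre. The path length is the sum of all per-event delta
# magnitudes; the deltas below are invented.
import math

def extra_distance_percent(start, target_centre, deltas):
    straight = math.hypot(target_centre[0] - start[0],
                          target_centre[1] - start[1])
    path = sum(math.hypot(dx, dy) for dx, dy in deltas)
    return (path / straight - 1.0) * 100.0

deltas = [(30, 5), (40, -3), (35, 8), (25, -4)]       # hypothetical deltas
print(round(extra_distance_percent((0, 0), (125, 5), deltas), 1))  # ~5.3
```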


Extra distance covered
Stretched seems to perform better than smooth and linear in all but one ID group, and smooth performs worse than linear in all but ID group 4.2. Looking at the actual values, however, shows that the large standard deviation prevents statistical significance. The differences are only statistically significant in the following cases:
  • ID 4.2: stretched is more efficient than smooth and linear
In all other combinations, there is no statistically significant difference between the three methods.

Overshoot

Somewhat similar to the efficiency of movement, the overshoot is the distance the pointer has moved past the target. It was calculated by drawing a line through the target's far side, perpendicular to the direct path from the starting pointer position. If the pointer moves past this line, the user has overshot the target. The maximum distance between the line and the pointer shows how much the user has overshot the target.


Illustration of pointer overshooting the target.
The red line shows the amount the pointer has overshot the target.
Overshoot was calculated in pixels, as % of the distance and as % of the actual path taken. Unsurprisingly, the graphs look rather the same so I'll only put one up here.
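
A geometric sketch of the pixel measure described above: project each recorded pointer position onto the direct start-to-target line and check how far past the target's far side it went. Positions and sizes are invented; the percentage variants simply divide the result by the distance or by the path length.

```python
# Sketch of the overshoot measure: how far past the far side of the target
# the pointer travelled along the direct path. All values are invented.
import math

def overshoot_pixels(start, target_centre, target_radius, positions):
    dx = target_centre[0] - start[0]
    dy = target_centre[1] - start[1]
    distance = math.hypot(dx, dy)
    ux, uy = dx / distance, dy / distance          # unit vector along the path
    far_side = distance + target_radius            # the perpendicular line
    worst = 0.0
    for (px, py) in positions:
        along = (px - start[0]) * ux + (py - start[1]) * uy
        worst = max(worst, along - far_side)       # how far past the line
    return worst                                   # 0.0 means no overshoot

positions = [(100, 2), (220, 5), (340, -4), (310, 1)]   # hypothetical samples
print(round(overshoot_pixels((0, 0), (300, 0), 15, positions), 1))  # 25.0
```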

Overshoot in pixels by ID group
As the ID increases, the amount of overshooting increases too. Again the three pointer acceleration methods are largely the same, though linear seems to be slightly less affected by overshoot than smooth and stretched. The differences are only statistically significant in the following cases:
  • ID 4.2: if measured as percentage of distance, stretched has less overshoot than linear.
  • ID 8.4: if measured as percentage of movement path, linear has less overshoot than smooth.
  • ID 16.8: if measured as percentage of distance, stretched and linear have less overshoot than smooth.
  • ID 16.8: if measured as percentage of movement path, linear has less overshoot than smooth.
  • ID 16.8: if measured in pixels, linear has less overshoot than smooth.
In all other combinations, there is no statistically significant difference between the three methods.

Summary

In summary, there is not a lot of difference between the three methods, though smooth has no significant advantage in any of the measurements. The race between stretched and linear is mostly undecided.

Questionnaire results

The above data was objectively measured. Equally important is the subjective feel of each acceleration method. At the end of the study, the following 14 questions were asked of each participant, with answers on a 5-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree".

  1. The first acceleration method felt natural
  2. The first acceleration method allowed for precise pointer control
  3. The first acceleration method allowed for fast pointer movement
  4. The first acceleration method made it easy to hit the targets
  5. I would prefer the first acceleration method to be faster
  6. I would prefer the first acceleration method to be slower
  7. The second acceleration method felt natural
  8. The second acceleration method allowed for precise pointer control
  9. The second acceleration method allowed for fast pointer movement
  10. The second acceleration method made it easy to hit the targets
  11. I would prefer the second acceleration method to be faster
  12. I would prefer the second acceleration method to be slower
  13. The two acceleration methods felt different
  14. The first acceleration method was preferable over the second
The figure below shows that comparatively few "strongly agree" and "strongly disagree" answers were given, hinting that the differences between the methods were small.

Distribution of answers in the questionnaire
Looking at statistical significance, the questionnaire didn't really provide anything of value. Not even the question "The two acceleration methods felt different" provided any answers, and the question "The first acceleration method was preferable over the second" was likewise inconclusive. So the summary of the questionnaire is pretty much: on the whole none of the methods stood out as better or worse.

Likert frequencies for the question of which method is preferable

Summary

Subjective data was inconclusive, but the objective data goes slightly in favour of linear and stretched over the current smooth method. We didn't have enough sample sets to analyse separately for each device type, so from a maintainer's point of view the vote goes to linear. It allows replacing a rather complicated pointer acceleration method with 3 lines of code.

September 17, 2014

A follow up to yesterday's Videos new for 3.14
The more astute (or Wayland testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

September 16, 2014

ACPI, kernels and contracts with firmware
ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump that whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.

A Wayland status update

It has been a while since I last wrote about the GNOME Wayland port; time for another status update on GNOME 3.14 on Wayland.


So, what have we achieved this cycle ?

  • Keyboard layouts are supported
  • Drag-and-Drop works (with limitations)
  • Touch is supported

The list of applications that work 'natively' (i.e. with the GTK+ Wayland backend) is looking pretty good, too. The main straggler here is totem, where we are debugging some issues with the use of subsurfaces in clutter-gtk.

We are homing in on 'day-to-day usable'. I would love to say the Wayland session is "rock-solid", but I just spent an hour trying to track down an ugly memory leak that ended my session rather quickly. So, we are not quite there yet, and more work is needed.

If you are interested in helping us complete the port and take advantage of Wayland going forward, there is an opening in the desktop team at Red Hat for a Wayland developer.

Update: After a bit of collective head-scratching, Jasper fixed the memory leak here.

Videos 3.14 features
We've added a few, but nonetheless interesting, features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnails generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without having to browse them in Files. This makes the first experience of videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on grilo plugins, to show information about the films or TV series on your computer, such as grouping episodes of a series together, and showing genres, covers and synopses for films.

With a bit of luck, we should also be able to provide you with more video content as well, through partners.

Master Document Templates
Writer has long had Master Documents. A master document lets you manage large documents, such as a book with many chapters: one odt per chapter, bundled into a single document via a master odm.

LibreOffice 4.4 introduces Master Document Templates. What that means is that Master Document Templates can be added to the Template Manager, and from there you can create a new Master Document (odm) based on a Master Document Template (otm). The new odm of course has the same initial content as the template it is based upon.

Thanks to Máirín Duffy (of Red Hat, Inc.) for prompting this feature. Any failures in execution are mine however.

September 13, 2014

More Font Support
Playing around with some Mac OS X fonts under Linux I noticed that LibreOffice wasn't listing a lot of them despite fontconfig announcing their existence. A little digging and some very small tweaks mean that we now have Mac ttf font-name encoding support along with support for version 2 ttc fonts. This is more fixing some oversights (version 2 of ttc came into existence after the ttc support was added, so there was an "only if version is 1" condition) than implementing anything particularly new, but now LibreOffice under Linux works with a lot more ttf/ttc/otf fonts than it did before.
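
For reference, here is a small sketch of the kind of check involved: reading the TrueType Collection header and accepting both version 1 and version 2 files. This is not LibreOffice's code, just an illustration of the relaxed condition.

```python
# Sketch: read a TrueType Collection (ttc) header and accept both version 1
# and version 2 files, instead of only version 1. Illustration only.
import struct
import sys

def ttc_fonts(path):
    with open(path, "rb") as f:
        tag, major, minor, num_fonts = struct.unpack(">4sHHI", f.read(12))
    if tag != b"ttcf":
        raise ValueError("not a TrueType Collection")
    if major not in (1, 2):                 # previously: only major == 1
        raise ValueError(f"unsupported ttc version {major}.{minor}")
    return num_fonts

if __name__ == "__main__":
    print(ttc_fonts(sys.argv[1]), "fonts in collection")
```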

September 09, 2014

change image option in writer context menu

Change Image in Writer

Thanks to Jennifer Liebel, Writer now has a change image option in its graphic context menu, like Impress/Draw.

September 03, 2014

aarch64 libreoffice

Thanks to Stephen Bergmann of Red Hat, Inc., LibreOffice is now ported to aarch64. No new ports for years, and then two 64-bit ports, aarch64 and ppc64le, landed within a week of each other.

August 31, 2014

Revisiting How We Put Together Linux Systems

In a previous blog story I discussed Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems. I now want to take the opportunity to explain a bit where we want to take this with systemd in the longer run, and what we want to build out of it. This is going to be a longer story, so better grab a cold bottle of Club Mate before you start reading.

Traditional Linux distributions are built around packaging systems like RPM or dpkg, and an organization model where upstream developers and downstream packagers are relatively clearly separated: an upstream developer writes code, and puts it somewhere online, in a tarball. A packager then grabs it and turns it into RPMs/DEBs. The user then grabs these RPMs/DEBs and installs them locally on the system. For a variety of uses this is a fantastic scheme: users have a large selection of readily packaged software available, in mostly uniform packaging, from a single source they can trust. In this scheme the distribution vets all software it packages, and as long as the user trusts the distribution all should be good. The distribution takes the responsibility of ensuring the software is not malicious, of timely fixing security problems and helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn't fit many use-cases of our software particularly well. Let's have a look at the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream distributions to package their stuff. It's the downstream distribution that decides on schedules, packaging details, and how to handle support. Often upstream vendors want much faster release cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to impossible. Since the end-user can run a variety of different package versions together, and expects the software he runs to just work on any combination, the test matrix explodes. If upstream tests its version on distribution X release Y, then there's no guarantee that that's the precise combination of packages that the end user will eventually run. In fact, it is very unlikely that the end user will, since most distributions probably updated a number of libraries the package relies on by the time the package ends up being made available to the user. The fact that each package can be individually updated by the user, and each user can combine library versions, plug-ins and executables relatively freely, results in a high risk of something going wrong.

  • Since there are so many different distributions in so many different versions around, if upstream tries to build and test software for them it needs to do so for a large number of distributions, which is a massive effort.

  • The distributions are actually quite different in many ways. In fact, they are different in a lot of the most basic functionality. For example, the path where x86-64 libraries are put differs between Fedora and Debian derived systems.

  • Developing software for a number of distributions and versions is hard: if you want to do it, you need to actually install them, each one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and trademark requirements (and rightly so), any kind of closed source software (or otherwise non-free) does not fit into this scheme at all.

This all together makes it really hard for many upstreams to work nicely with the current way how Linux works. Often they try to improve the situation for them, for example by bundling libraries, to make their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for people who want to put together their individual system, nicely adjusted to exactly what they need. However, this is not really how many of today's Linux systems are built, installed or updated. If you build any kind of embedded device, a server system, or even user systems, you frequently do your work based on complete system images, that are linearly versioned. You build these images somewhere, and then you replicate them atomically to a larger number of systems. On these systems, you don't install or remove packages, you get a defined set of files, and besides installing or updating the system there is no way to change the set of tools you get.

The current Linux distributions are not particularly good at providing for this major use-case of Linux. Their strict focus on individual packages as well as package managers as end-user install and update tool is incompatible with what many system vendors want.

Users

The classic Linux distribution scheme is frequently not what end users want, either. Many users are used to app markets like the ones Android, Windows or iOS/Mac have. Markets are a platform that doesn't package, build or maintain software like distributions do, but simply allows users to quickly find and download the software they need, with the app vendor responsible for keeping the app updated, secured, and all that on the vendor's release cycle. Users tend to be impatient. They want their software quickly, and the fine distinction between trusting a single distribution or a myriad of app developers individually is usually not important for them. The companies behind the marketplaces usually try to improve this trust problem by providing sand-boxing technologies: as a replacement for the distribution that audits, vets, builds and packages the software and thus allows users to trust it to a certain level, these vendors try to find technical solutions to ensure that the software they offer for download can't be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are sometimes quite successful attempts to do something about it. Ubuntu Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of this problem set, usually with a strict focus on one facet of Linux systems. For example, Ubuntu Apps focus strictly on end user (desktop) applications, and don't care about how we built/update/install the OS itself, or containers. Docker OTOH focuses on containers only, and doesn't care about end-user apps. Software Collections tries to focus on the development environments. ChromeOS focuses on the OS itself, but only for end-user devices. CoreOS also focuses on the OS, but only for server systems.

The approaches they take are usually good at specific things, and use a variety of different technologies, on different layers. However, none of these projects tried to fix these problems in a generic way, for all uses, right in the core components of the OS itself.

Linux has achieved tremendous success because its kernel is so generic: you can build supercomputers and tiny embedded devices out of it. It's time we come up with a basic, reusable scheme for solving the problem set described above that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom Gundersen, David Herrmann, and yours truly) recently met in Berlin about all these things, and tried to come up with a scheme that is somewhat simple, but tries to solve the issues generically, for all use-cases, as part of the systemd project. All that in a way that is somewhat compatible with the current scheme of distributions, to allow a slow, gradual adoption. Also, and that's something one cannot stress enough: the toolbox scheme of classic Linux distributions is actually a good one, and for many cases the right one. However, we need to make sure we make distributions relevant again for all use-cases, not just those of highly individualized systems.

Anyway, so let's summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their software (regardless if just an app, or the whole OS) directly for the end user, and know the precise combination of libraries and packages it will operate with.

  • We want to allow end users and administrators to install these packages on their systems, regardless which distribution they have installed on it.

  • We want a unified solution that ultimately can cover updates for full systems, OS containers, end user apps, programming ABIs, and more. These updates shall be double-buffered (at least). This is an absolute necessity if we want to prepare the ground for operating systems that manage themselves, that can update safely without administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a fully trustable OS, with images that can be verified by a full trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the kernel, and initrd. Cryptographically secure verification of the code we execute is relevant on the desktop (like ChromeOS does), but also for apps, for embedded devices and even on servers (in a post-Snowden world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So, now, let's discuss the technical bits we came up with:

The scheme we propose is built around the variety of concepts of btrfs and Linux file system name-spacing. btrfs at this point already has a large number of features that fit neatly in our concept, and the maintainers are busy working on a couple of others we want to eventually make use of.

As the first part of our proposal we make heavy use of btrfs sub-volumes and introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> -- This refers to a full vendor operating system tree. It's basically a /usr tree (and no other directories), in a specific version, with everything you need to boot it up inside it. The <vendorid> field is replaced by some vendor identifier, maybe a scheme like org.fedoraproject.FedoraWorkstation. The <architecture> field specifies a CPU architecture the OS is designed for, for example x86-64. The <version> field specifies a specific OS version, for example 23.4. An example sub-volume name could hence look like this: usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4

  • root:<name>:<vendorid>:<architecture> -- This refers to an instance of an operating system. It's basically a root directory, containing primarily /etc and /var (but possibly more). Sub-volumes of this type do not contain a populated /usr tree though. The <name> field refers to some instance name (maybe the host name of the instance). The other fields are defined as above. An example sub-volume name is root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> -- This refers to a vendor runtime. A runtime here is supposed to be a set of libraries and other resources that are needed to run apps (for the concept of apps see below), all in a /usr tree. In this regard this is very similar to the usr sub-volumes explained above, however, while a usr sub-volume is a full OS and contains everything necessary to boot, a runtime is really only a set of libraries. You cannot boot it, but you can run apps with it. An example sub-volume name is: runtime:org.gnome.GNOME3_20:x86_64:3.20.1

  • framework:<vendorid>:<architecture>:<version> -- This is very similar to a vendor runtime, as described above, it contains just a /usr tree, but goes one step further: it additionally contains all development headers, compilers and build tools, that allow developing against a specific runtime. For each runtime there should be a framework. When you develop against a specific framework in a specific architecture, then the resulting app will be compatible with the runtime of the same vendor ID and architecture. Example: framework:org.gnome.GNOME3_20:x86_64:3.20.1

  • app:<vendorid>:<runtime>:<architecture>:<version> -- This encapsulates an application bundle. It contains a tree that at runtime is mounted to /opt/<vendorid>, and contains all the application's resources. The <vendorid> could be a string like org.libreoffice.LibreOffice, the <runtime> field refers to the vendor id of the specific runtime the application is built for, for example org.gnome.GNOME3_20:3.20.1. The <architecture> and <version> refer to the architecture the application is built for, and of course its version. Example: app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133

  • home:<user>:<uid>:<gid> -- This sub-volume shall refer to the home directory of the specific user. The <user> field contains the user name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs of the user. The idea here is that in the long run the list of sub-volumes is sufficient as a user database (but see below). Example: home:lennart:1000:1000.

btrfs partitions that adhere to this naming scheme should be clearly identifiable. It is our intention to introduce a new GPT partition type ID for this.
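
To make the scheme a bit more tangible, here is a small sketch that parses the proposed sub-volume names into their components. It is only an illustration of the naming convention, not a real tool.

```python
# Sketch: parsing the proposed sub-volume names into their components.
# This only illustrates the naming scheme, it is not any real tooling.
from collections import namedtuple

FIELDS = {
    "usr":       ["vendorid", "architecture", "version"],
    "root":      ["name", "vendorid", "architecture"],
    "runtime":   ["vendorid", "architecture", "version"],
    "framework": ["vendorid", "architecture", "version"],
    "app":       ["vendorid", "runtime", "architecture", "version"],
    "home":      ["user", "uid", "gid"],
}

def parse_subvolume(name):
    kind, *rest = name.split(":")
    fields = FIELDS[kind]
    if len(rest) != len(fields):
        raise ValueError(f"unexpected number of fields for {kind!r}: {name}")
    Record = namedtuple(kind.capitalize(), fields)
    return kind, Record(*rest)

print(parse_subvolume("usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"))
print(parse_subvolume("home:lennart:1000:1000"))
```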

How To Use It

After we have introduced this naming scheme, let's see what we can build out of it:

  • When booting up a system we mount the root directory from one of the root sub-volumes, and then mount /usr from a matching usr sub-volume. Matching here means it carries the same <vendorid> and <architecture>. By default we should pick the matching usr sub-volume with the newest version.

  • When we boot up an OS container, we do exactly the same as when we boot up a regular system: we simply combine a usr sub-volume with a root sub-volume.

  • When we enumerate the system's users we simply go through the list of home snapshots.

  • When a user authenticates and logs in we mount his home directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the app sub-volume to /opt/<vendorid>/, and the appropriate runtime sub-volume the app picked to /usr, as well as the user's /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he installs the right framework, and then temporarily transitions into a name space where /usr is mounted from the framework sub-volume, and /home/$USER from his own home directory. In this name space he then runs his build commands. He can build in multiple name spaces at the same time, if he intends to build software for multiple runtimes or architectures at the same time.

Instantiating a new system or OS container (which is exactly the same in this scheme) just consists of creating a new appropriately named root sub-volume. Completely naturally you can share one vendor OS copy in one specific version with a multitude of container instances.

Everything is double-buffered (or actually, n-fold-buffered), because usr, runtime, framework and app sub-volumes can exist in multiple versions. Of course, by default the execution logic should always pick the newest release of each sub-volume, but it is up to the user to keep multiple versions around, and possibly execute older versions, if he desires to do so. In fact, like on ChromeOS this could even be handled automatically: if a system fails to boot with a newer snapshot, the boot loader can automatically revert back to an older version of the OS.
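
A sketch of that selection logic: given a root sub-volume, list the matching usr sub-volumes newest first, so boot logic could fall back to an older one if the newest fails. The version comparison is deliberately naive; a real implementation would have to handle strings like "25beta" properly.

```python
# Sketch: pick the usr sub-volumes matching a root sub-volume's vendorid and
# architecture, newest version first. Naive version handling, for illustration.

def version_key(version):
    parts = []
    for p in version.split("."):
        parts.append(int(p) if p.isdigit() else -1)   # crude pre-release handling
    return parts

def matching_usr(root_name, subvolumes):
    _, instance, vendorid, arch = root_name.split(":")
    candidates = []
    for name in subvolumes:
        fields = name.split(":")
        if fields[0] == "usr" and fields[1] == vendorid and fields[2] == arch:
            candidates.append(name)
    # newest first, so a fallback can try candidates[1], [2], ...
    return sorted(candidates, key=lambda n: version_key(n.split(":")[3]),
                  reverse=True)

subvols = [
    "usr:org.fedoraproject.WorkStation:x86_64:24.7",
    "usr:org.fedoraproject.WorkStation:x86_64:24.9",
    "usr:org.mageia.Client:i386:39.3",
]
print(matching_usr("root:revolution:org.fedoraproject.WorkStation:x86_64",
                   subvols))
```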

An Example

Note that as a result this allows installing not only multiple end-user applications into the same btrfs volume, but also multiple operating systems, multiple system instances, multiple runtimes, multiple frameworks. Or to spell this out in an example:

Let's say Fedora, Mageia and ArchLinux all implement this scheme, and provide ready-made end-user images. Also, the GNOME, KDE and SDL projects all define a runtime+framework to develop against. Finally, both LibreOffice and Firefox provide their stuff according to this scheme. You can now trivially install all of these into the same btrfs volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:org.mageia.Client:i386:39.3
  • usr:org.mageia.Client:i386:39.4
  • usr:org.mageia.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:org.mageia.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.1
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.4
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
  • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
  • runtime:org.kde.KDE5_6:x86_64:5.6.0
  • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • framework:org.kde.KDE5_6:x86_64:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems installed, all of them in three versions, and one even in a beta version. We have four system instances around. Two of them are Fedora: maybe one we usually boot from, the other we run for very specific purposes in an OS container. We also have the runtimes for two GNOME releases in multiple versions, plus one for KDE. Then, we have the development trees for one version of KDE and GNOME around, as well as two apps that make use of two releases of the GNOME runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can actually relatively freely mix and match apps and OSes, or develop against specific frameworks in specific versions on any operating system. It doesn't matter if you booted your ArchLinux instance, or your Fedora one, you can execute both LibreOffice and Firefox just fine, because at execution time they get matched up with the right runtime, and all of them are available from all the operating systems you installed. You get the precise runtime that the upstream vendor of Firefox/LibreOffice did their testing with. It doesn't matter anymore which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the sub-volume list, it doesn't matter which system you boot, the distribution should be able to find your local users automatically, without any configuration in /etc/passwd.

Building Blocks

With this naming scheme plus the way how we can combine them on execution we already came quite far, but how do we actually get these sub-volumes onto the final machines, and how do we update them? Well, btrfs has a feature they call "send-and-receive". It basically allows you to "diff" two file system versions, and generate a binary delta. You can generate these deltas on a developer's machine and then push them into the user's system, and he'll get the exact same sub-volume too. This is how we envision installation and updating of operating systems, applications, runtimes, frameworks. At installation time, we simply deserialize an initial send-and-receive delta into our btrfs volume, and later, when a new version is released we just add in the few bits that are new, by dropping in another send-and-receive delta under a new sub-volume name. And we do it exactly the same for the OS itself, for a runtime, a framework or an app. There's no technical distinction anymore. The underlying operation for installing apps, runtime, frameworks, vendor OSes, as well as the operation for updating them is done the exact same way for all.
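
As an illustration of the workflow (not of any actual systemd tooling), producing and applying such a delta with the standard btrfs-progs commands could look roughly like this; the paths are invented and the commands need root.

```python
# Sketch: producing and applying a btrfs send/receive delta with the standard
# btrfs-progs commands. Paths are invented; illustration of the workflow only.
import subprocess

def make_delta(subvolume, delta_file, parent=None):
    cmd = ["btrfs", "send"]
    if parent:
        cmd += ["-p", parent]         # incremental delta against an older snapshot
    cmd.append(subvolume)
    with open(delta_file, "wb") as out:
        subprocess.run(cmd, stdout=out, check=True)

def apply_delta(delta_file, target_dir):
    with open(delta_file, "rb") as inp:
        subprocess.run(["btrfs", "receive", target_dir], stdin=inp, check=True)

# On the vendor side (hypothetical paths):
# make_delta("/snapshots/usr:org.example.OS:x86_64:23.5", "/tmp/os-23.5.delta",
#            parent="/snapshots/usr:org.example.OS:x86_64:23.4")
# On the user's machine:
# apply_delta("/tmp/os-23.5.delta", "/var/lib/vendor-os")
```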

Of course, keeping multiple full /usr trees around sounds like an awful lot of waste, after all they will contain a lot of very similar data, since a lot of resources are shared between distributions, frameworks and runtimes. However, thankfully btrfs actually is able to de-duplicate this for us. If we add in a new app snapshot, this simply adds in the new files that changed. Moreover different runtimes and operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user, desktop side of things, the concept is also extremely powerful in server scenarios. For example, it is easy to build your own usr trees and deliver them to your hosts using this scheme. The usr sub-volumes are supposed to be something that administrators can put together. After deserializing them into a couple of hosts, you can trivially instantiate them as OS containers there, simply by adding a new root sub-volume for each instance, referencing the usr tree you just put together. Instantiating OS containers hence becomes as easy as creating a new btrfs sub-volume. And you can still update the images nicely, get fully double-buffered updates and everything.

And of course, this scheme also applies nicely to embedded use-cases. Regardless if you build a TV, an IVI system or a phone: you can put together your OS versions as usr trees, and then use btrfs-send-and-receive facilities to deliver them to the systems, and update them there.

Many people when they hear the word "btrfs" instantly reply with "is it ready yet?". Thankfully, most of the functionality we really need here is strictly read-only. With the exception of the home sub-volumes (see below) all snapshots are strictly read-only, and are delivered as immutable vendor trees onto the devices. They never are changed. Even if btrfs might still be immature, for this kind of read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example, an installer image could include a Fedora version compiled for x86-64, one for i386, one for ARM, all in the same btrfs volume. Due to btrfs' de-duplication they will share as much as possible, and when the image is booted up the right sub-volume is automatically picked. Something similar of course applies to the apps too!

This also allows us to implement something that we like to call Operating-System-As-A-Virus. Installing a new system is little more than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader into the EFI System Partition
  • Rebooting

Now, since the only real vendor data you need is the usr sub-volume, you can trivially duplicate this onto any block device you want. Let's say you are a happy Fedora user, and you want to provide a friend with his own installation of this awesome system, all on a USB stick. All you have to do for this is do the steps above, using your installed usr tree as the source to copy. And there you go! And you don't have to be afraid that any of your personal data is copied too, as the usr sub-volume is the exact version your vendor provided you with. Or in other words: there's no distinction anymore between installer images and installed systems. It's all the same. Installation becomes replication, nothing more. Live-CDs and installed systems can be fully identical.

Note that in this design apps are actually developed against a single, very specific runtime, that contains all libraries it can link against (including a specific glibc version!). Any library that is not included in the runtime the developer picked must be included in the app itself. This is similar to how apps on Android declare one very specific Android version they are developed against. This greatly simplifies application installation, as there's no dependency hell: each app pulls in one runtime, and the app is actually free to pick which one, as you can have multiple installed, though only one is used by each app.

Also note that operating systems built this way will never see "half-updated" systems, as it is common when a system is updated using RPM/dpkg. When updating the system the code will either run the old or the new version, but it will never see part of the old files and part of the new files. This is the same for apps, runtimes, and frameworks, too.

Where We Are Now

We are currently working on a lot of the groundwork necessary for this. This scheme relies on the ability to monopolize the vendor OS resources in /usr, which is the key of what I described in Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems a few weeks back. Then, of course, for the full desktop app concept we need a strong sandbox, that does more than just hiding files from the file system view. After all with an app concept like the above the primary interfacing between the executed desktop apps and the rest of the system is via IPC (which is why we work on kdbus and teach it all kinds of sand-boxing features), and the kernel itself. Harald Hoyer has started working on generating the btrfs send-and-receive images based on Fedora.

Getting to the full scheme will take a while. Currently we have many of the building blocks ready, but some major items are missing. For example, we push quite a few problems into btrfs that other solutions try to solve in user space. One of them is actually signing/verification of images. The btrfs maintainers are working on adding this to the code base, but currently nothing exists. This functionality is essential though to come to a fully verified system where a trust chain exists all the way from the firmware to the apps. Also, to make the home sub-volume scheme fully workable we actually need encrypted sub-volumes, so that the sub-volume's pass-phrase can be used for authenticating users in PAM. This doesn't exist either.

Working towards this scheme is a gradual process. Many of the steps we require for it are useful outside of the grand scheme, though, which means we can slowly work towards the goal, and our users can already benefit from what we are working on as we go.

Also, and most importantly, this is not really a departure from traditional operating systems:

Each app and each OS sees a traditional Unix hierarchy with /usr, /home, /opt, /var and /etc, and executes in an environment that is pretty much identical to how it would run on a traditional system.

There's no need to fully move to a system that uses only btrfs and strictly follows this sub-volume scheme. For example, we intend to provide implicit support for systems that are installed on ext4 or xfs, or that are put together with traditional packaging tools such as RPM or dpkg: if the user tries to install a runtime/app/framework/OS image on a system that doesn't use btrfs so far, it can just create a loop-back btrfs image in /var and push the data into that. Even we developers will run our stuff like this for a while; after all, this new scheme is not particularly useful for highly individualized systems, and we developers usually tend to run systems like that.
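
For illustration, such a loop-back setup could look roughly like this (the file name and size are made up; this is a sketch, not a tool invocation from the proposal):

$ truncate -s 20G /var/lib/vendor-pool.btrfs      # sparse backing file
$ mkfs.btrfs /var/lib/vendor-pool.btrfs
$ mkdir -p /var/lib/vendor-pool
$ mount -o loop /var/lib/vendor-pool.btrfs /var/lib/vendor-pool
# runtime/app/framework/OS sub-volumes can then be received into /var/lib/vendor-pool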

Also note that this is in no way a departure from packaging systems like RPM or DEB. Even if the new scheme we propose is used for installing and updating a specific system, it is RPM/DEB that is used to put together the vendor OS tree in the first place. Hence, even in this scheme RPM/DEB remain highly relevant, though no longer strictly as an end-user tool, but as a build tool.

So Let's Summarize Again What We Propose

  • We want a unified scheme for installing and updating OS images, user apps, runtimes and frameworks.

  • We want a unified scheme for relatively freely mixing OS images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of all executed code can be done, all the way to the firmware, as a standard feature of the system.

  • We want to allow app vendors to write their programs against very specific frameworks, knowing that they will end up being executed with exactly the set of libraries they chose.

  • We want to allow parallel installation of multiple OSes and versions of them, multiple runtimes in multiple versions, as well as multiple frameworks in multiple versions. And of course, multiple apps in multiple versions.

  • We want everything double buffered (or actually n-fold buffered), to ensure we can reliably update/rollback versions, in particular to safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS container is as simple as adding in a new snapshot and restarting the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS instances from a single vendor tree, with zero difference whether they are booted on bare metal, in a VM or as a container.

  • We want to enable Linux to have an open scheme that people can use to build app markets and similar schemes, not restricted to a specific vendor.

Final Words

I'll be talking about this at LinuxCon Europe in October. I originally intended to discuss this at the Linux Plumbers Conference (which I assumed was the right forum for this kind of major plumbing level improvement), and at linux.conf.au, but there was no interest in my session submissions there...

Of course this is all work in progress. These are our current ideas we are working towards. As we progress we will likely change a number of things. For example, the precise naming of the sub-volumes might look very different in the end.

Of course, we are developers of the systemd project, but implementing this scheme is not just a job for the systemd developers. This is a reinvention of how distributions work, and hence needs strong support from the distributions. We really hope we can trigger some interest by publishing this proposal now and get the distributions on board. This is explicitly not supposed to be a solution for one specific project or one specific vendor product; we care about making this open and solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us (IRC, mail, G+, ...).

The future is going to be awesome!

August 29, 2014

Putting PackageKit metadata on the Fedora LiveCD

While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: when you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could mean a few minutes of looking at a spinner, which was a really bad first experience. We tried hard to mitigate this: when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there wasn’t any metadata at all.

So, what are we doing for F21? We can’t run packagekitd when constructing the live image, as it’s a D-Bus daemon and would be looking at the system root rather than the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to run the PackageKit backend directly, without getting D-Bus involved.

For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.
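
For illustration, the kickstart snippet might look roughly like this (a sketch based on the command above; the exact section and variable handling depend on how livecd-tools composes the image):

%post --nochroot
# pre-seed the PackageKit/libsolv metadata into the live image root
DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh
%end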

August 22, 2014

ppc64le libreoffice
LibreOffice is now ported to ppc64le. make passes, testtools passes, and the resulting application is capable of headlessly converting documents to pdf. There's no reason to think it's any less capable than any other port, but I don't actually have a ppc64le machine, and transatlantic ssh tunnels aren't conducive to extensive UI testing.

The tricky bit of the port, as always, is the uno bridge, especially because the ABI was changed for little endian.

https://bugs.openjdk.java.net/browse/JDK-8035647 is handy for links to the original ELFv2 ABI change commits to gcc/libffi.

https://ghc.haskell.org/trac/ghc/ticket/8965 is handy for a friendlier explanation of the change: if gcc can see that the arguments to the function being called will fit in registers, then no argument save area is created, which stumped me for a while.

August 15, 2014

dialog conversion status, 4 to go

Converting LibreOffice dialogs to .ui format, 4 left

I should go on vacation more often. On my return I find that Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, have now converted all but 4 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format.

Here's the list of the last four: one (a monster) whose conversion is in progress, one that should ideally be removed in favour of a duplicate dialog, and two that have no known route to display them. Hacking the code temporarily to force those two to appear is probably no biggy.


Current conversion stats are:
820 .ui files currently exist
There are 3 unconverted dialogs
There is 1 unconverted tabpage
An estimated additional 4 .ui are required
We are 99% of the way through.

What's next? Well, *cough*, the above are all the dialogs and tabpages in the classic .src format. There is actually also a host of ErrorBox, InfoBox and QueryBox elements that exist in the .src format.

These take just two pieces of information: a string to display and some bits that set which buttons to show, e.g. cancel, close, ok + cancel, etc. We want to remove them in favour of the Gtk-alike MessageDialog, but we don't want to actually convert them to .ui format; because they are so simple, it makes more sense to just reduce them to strings, as this sample commit demonstrates. This might even be possible to at least partly automate.

I've now updated count-todo-dialogs to display the count of those *Box elements that exist in src file format, but I'll elide the count of them until the last 4 true dialogs+tabpages are gone.

August 10, 2014

Birthplace
For tedious reasons, I will at this stage point out that I was born in Galway, Ireland.


August 07, 2014

Post-GUADEC

  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bädert's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

August 04, 2014

Notes on Fedora on an Android device
A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones, to explore this new OS.

These notes are probably not very useful on their own, but they might give a few hints to stuck Android developers.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC with an Adreno 203 GPU, and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with a hardware floating-point unit (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (the location is mentioned in the rules file itself).

Then, on the phone, turn on developer mode. Plug it in and run "adb devices"; you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more, is plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card as ext4 and unpacking a Fedora system image onto it, I popped it into the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available, but I couldn't pinpoint them while writing this article.

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it though) and run against a glibc setup rather than Android's bionic libc.
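
For reference, the core of such a chroot launcher is little more than a few bind mounts plus chroot. This is a minimal sketch with a made-up mount point, not the fragile script mentioned above:

$ FEDORA=/data/fedora                 # wherever the ext4 MicroSD ends up mounted
$ mount -o bind /dev $FEDORA/dev
$ mount -t proc proc $FEDORA/proc
$ mount -t sysfs sys $FEDORA/sys
$ chroot $FEDORA /bin/bash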

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page, as it seems to be the norm for Android(-ish) device makers to deny any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
$ make
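
One way to flip those two options without a trip through menuconfig (my suggestion, not part of the original workflow) is the kernel's scripts/config helper:

$ ./scripts/config --enable DEVTMPFS --enable EXT4_FS_SECURITY
$ make oldconfig      # accept the updated configuration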

We now have a zImage of the kernel. Launching "fastboot boot /path/to/zImage" didn't seem to work (it would have used the kernel only for the next boot anyway), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.


$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/disk.img'
$ adb pull /mnt/sdcard/disk.img
$ bootunpack boot.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt mdraid dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.

Nice-to-haves

In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.

Conclusion

Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

July 30, 2014

you have a long road to walk, but first you have to leave the house
or why publishing code is STEP ZERO.

If you've been developing code internally for a kernel contribution, you've probably got a lot of reasons not to default to working in the open from the start: you probably don't work for Red Hat or another company with a default-to-open policy, or perhaps you are scared of the scary kernel community and want to present a polished gem.

If your company is a pain about legal reviews and the like, you have probably spent (wasted) months of engineering time on internal reviews, so you assume all of that effort must matter later. Because why wouldn't it? You just spent (wasted) a lot of time on it, so it must matter.

So you have your polished codebase; why wouldn't those kernel maintainers love to merge it?

Then you publish the source code.

Oh, look, you just left your house. The merging of your code is many, many miles distant, and you only just started walking that road: just now, not when you started writing the code, not when you started legal review, not when you rewrote it internally for the fourth time. You just did it this moment.

You might have to rewrite it externally six times, you might never get it merged, it might be something your competitors are also working on, and the kernel maintainers would rather you cooperated with people your management would lose their minds over. That is the kernel development process.

step zero: publish the code. leave the house.

(Lately I've been seeing this problem more and more, so I decided to write it up. It really isn't directed at anyone in particular; I think a lot of vendors are guilty of this.)

July 28, 2014

A talk in 9 images

My talk at GUADEC this year was about GTK+ dialogs. The first half of the talk consisted of a comparison of dialogs in GTK+ 2, in GTK+ 3 under gnome-shell and in GTK+ 3 under xfwm4 (as an example of an environment that does not favor client-side decorations).

The main take-away here should be that in 3.14, all GTK+ dialogs will again have traditional decorations if that is what works best in the environment they are used in.

(Screenshots from the talk: About dialogs, preference dialogs, file choosers, message dialogs, error dialogs, print dialogs, font dialogs, color dialogs and action dialogs.)

The second part of my talk discussed best practices for dealing with various issues that can come up with custom GTK+ dialogs; I’ve summarized the main points in this HowDoI page.

July 25, 2014

Dialogs and Coverity, current numbers

(Image: army massing)

Converting LibreOffice dialogs to .ui format, 54 conversions remaining

We've now converted all but 54 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format. This is due to the much appreciated efforts of Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, who are tackling the last bunch of hard to find or hard to convert ones.

Current conversion stats are:
778 .ui files currently exist
There are 20 unconverted dialogs
There are 34 unconverted tabpages
An estimated additional 54 .ui are required
We are 93% of the way through.

Coverity Defect Density: LibreOffice vs Average

According to Coverity's overview dashboard our current status is:

LibreOffice: 9,425,526 lines of code and a defect density of 0.09

Open Source Defect Density By Project Size

Lines of Code (LOC)      Defect Density
Less than 100,000        0.35
100,000 to 499,999       0.5
500,000 to 1 million     0.7
More than 1 million      0.65
Note: Defect density is measured by the number of defects per 1,000 lines of code, identified by the Coverity platform. The numbers shown above are from our 2013 Coverity Scan Report, which analyzed 250 million lines of open source code.
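
As a quick back-of-the-envelope check of what that density means in absolute terms (my own arithmetic from the figures above, not a number published by Coverity):

$ awk 'BEGIN { printf "%.0f\n", 9425526 / 1000 * 0.09 }'
848

So roughly 850 defects identified by the scan across the whole code base, going by the published density.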

July 24, 2014

Continuous testing and Wayland

The GNOME-Continuous continuous integration and delivery system has been helping us to keep the quality of the GNOME code base up for a while now.

It is doing a number of things:

  • Builds changed modules
  • Creates vm images that can be downloaded for local testing
  • Smoke-tests the installed image by verifying that it boots up to the login screen
  • Runs more than 400 tests against the installed image
  • Launches all the applications that are part of the moduleset and takes screenshots of them

All of this happens after every commit, or at least very close to that, and the results are available at the build.gnome.org website.

You can learn more about GNOME-Continuous here.

As a member of the GNOME release team I am really thankful for this service; it has made our job a lot easier – at release time everything just builds and works most of the time nowadays.

Earlier this year, we’ve made the smoke-testing aspect of gnome-continuous more useful by verifying not just GNOME, but also GNOME Classic and GNOME on Wayland.

Today, we’ve had a minor breakthrough: for the first time, the GNOME/Wayland smoke test succeeded all the way to taking a screenshot of the session.

(Screenshot: GNOME/Wayland. Of course, GNOME on Wayland looks just like GNOME under X11, so the screenshot is not very exciting by itself.)

To get this far, we had to switch to the egl-drm branch of mesa, which adds support for running KMS+DRM GL applications with QXL.

July 23, 2014

Watch out for DRI3 regressions
DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

July 22, 2014

GTK+ at GUADEC 2014

GUADEC is almost here. GTK+ will be represented by 3 talks:

  • GTK+, dialogs, the HIG and you  (by me)
  • GTK+ and CSS (by Benjamin)
  • The GTK+ scene graph toolkit (by Emmanuele)

All of these will be on Monday, the 28th. We will also have a GTK+ team meeting on the 31st.

I’ve made a small collage to summarize my talk:

(Image: dialogs collage)

See you all there!

July 20, 2014

Newcomers Workshop

The GNOME Newcomers Workshop might be coming to a conference near you this summer! Owen Taylor and I will be hosting one at GUADEC in Strasbourg, France on Saturday, July 26 and one at Flock in Prague, Czech Republic on Saturday, August 9. If you have never contributed a patch to GNOME before or have never even used GNOME, this workshop is for you! You’ll get a quick overview of the GNOME project, have a chance to install GNOME in a virtual machine, and learn the tools and the process of contributing a patch. If you plan to come to either workshop, please add yourself to the list of attendees.

If you can’t attend one of the workshops, you can install the available image in a virtual machine and go through the newcomers tutorial used at the workshop on your own. You can ask any questions you have on IRC.

We would like to have one helper per three newcomers at the workshops. If you are an established contributor and will be available, please sign up now to help with the workshop at GUADEC or Flock, and then you can check the room in the beginning of the session to see if we need your help based on the attendance. I’d love to see more people running the Newcomers Workshop locally or at other events they attend. Helping out with one of these sessions would give you a chance to learn how it works.

In addition to hosting the Newcomers Workshop, I will be speaking about how to be an ally to women in tech at GUADEC and about the Outreach Program for Women at Flock.

(Image: GUADEC 2014 badge)

And then I’m going to

(Image: Flock logo)

July 14, 2014

pointer acceleration in libinput - an analysis

Following Christian's Wayland in Fedora Update post, and after Hans fixed the touchpad acceleration, I've been playing with pointer acceleration in libinput a bit. The main focus was not yet on changing it, but rather on figuring out what we actually do and where the room for improvement is. There's a tool in my (rather messy) github wip/ptraccel-work branch to re-generate the graphs below.

This was triggered by a simple plan: I want a configuration interface in libinput that provides a sliding scale from -1 to 1 to adjust a device's virtual speed from slowest to fastest, with 0 being the default for that device. A user should not have to worry about the accel mechanism itself, which may be different for any given device, all they need to know is that the setting -0.5 means "halfway between default and 'holy cow this moves like molasses!'". The utopia is of course that for any given acceleration setting, every device feels equally fast (or slow). In order to do that, I needed the right knobs to tweak.

The code we currently have in libinput is pretty much 1:1 what's used in the X server. The X server sports a lot more configuration options, but what we have in libinput 0.4.0 is essentially what the default acceleration settings are in X. Armed with the knowledge that any #define is a potential knob for configuration, I went to investigate. There are two defines that are labelled as adjustable parameters:

  • DEFAULT_THRESHOLD, set to 0.4
  • DEFAULT_ACCELERATION, set to 2.0
But what do they mean, exactly? And what exactly does a value of 0.4 represent?
[side-note: threshold was 4 until I took the constant multiplier out, it's now 0.4 upstream and all the graphs represent that.]

Pointer acceleration is nothing more than mapping some input data to some potentially faster output data. How much faster depends on how fast the device moves, and to get there one usually needs a couple of steps. The trick of course is to make it predictable, so that despite the acceleration, your brain thinks that the visible cursor is an extension of your hand at all speeds.

Let's look at a high-level outline of our pointer acceleration code:

  • calculate the velocity of the current movement
  • use that velocity to calculate the acceleration factor
  • apply accel to dx/dy
  • smoothen out the dx/dy to avoid abrupt changes between two events

Calculating pointer speed

We don't just use dx/dy as values, rather, we use the pointer velocity. There's a simple reason for that: dx/dy depends on the device's poll rate (or interrupt frequency). A device that polls twice as often sends half the dx/dy values in each event for the same physical speed.

Calculating the velocity is easy: divide dx/dy by the delta time. We use a set of "trackers" that store previous dx/dy values with their timestamp. As long as we get movement in the same cardinal direction, we take those into account. So if we have 5 events in direction NE, the speed is averaged over those 5 events, smoothing out abrupt speed changes.
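
As a rough, self-contained illustration (the event values below are made up, and this is not libinput's actual tracker code), averaging a velocity over a few events in the same direction boils down to something like this:

$ awk 'BEGIN {
    # three hypothetical events heading NE: dx in device units, t in ms
    dx[0] = 3; t[0] = 100
    dx[1] = 4; t[1] = 108
    dx[2] = 5; t[2] = 116
    printf "velocity: %.2f units/ms\n", (dx[0] + dx[1] + dx[2]) / (t[2] - t[0])
}'
velocity: 0.75 units/ms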

The acceleration function

The speed we just calculated is passed to the acceleration function to calculate an acceleration factor.

Figure 1: Mapping of velocity in unit/ms to acceleration factor (unitless). X axes here are labelled in units/ms and mm/s.
This function is the only place where DEFAULT_THRESHOLD/DEFAULT_ACCELERATION are used, but they mostly just stretch the graph. The shape stays the same.

The output of this function is a unit-less acceleration factor that is applied to dx/dy. A factor of 1 means leaving dx/dy untouched, 0.5 is half-speed, 2 is double-speed.

Let's look at the graph for the accel factor output (red): for very slow speeds we have an acceleration factor < 1.0, i.e. we're slowing things down. There is a distinct plateau up to the threshold of 0.4; after that it shoots up to roughly a factor of 1.6, where it flattens out a bit until we hit the max acceleration factor.

Now we can also put units to the two defaults: Threshold is clearly in units/ms, and the acceleration factor is simply a maximum. Whether those are mentally easy to map is a different question.
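
To make that shape a bit more concrete, here is a deliberately crude, made-up piecewise model in shell/awk. This is not the function libinput or the X server actually uses (the real profile is smoother); it only mimics the sub-1.0 plateau below the threshold, the steep ramp above it and the cap at the maximum factor, before any smoothing is applied:

accel_factor() {
    # toy model only -- NOT the real libinput/X acceleration profile
    awk -v v="$1" -v threshold=0.4 -v max_accel=2.0 'BEGIN {
        if (v <= threshold)
            f = 0.8                                              # slight slow-down at low speeds
        else
            f = 0.8 + (max_accel - 0.8) * (v - threshold) / threshold
        if (f > max_accel)
            f = max_accel                                        # cap at the maximum factor
        printf "%.2f\n", f
    }'
}

$ accel_factor 0.2    # 0.80
$ accel_factor 0.6    # 1.40
$ accel_factor 1.0    # 2.00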

We don't use the output of the function as-is; rather, we smooth it out using Simpson's rule. The second (green) curve shows the accel factor after the smoothing took effect. This is a contrived example: the tool to generate this data simply increased the velocity, hence this particular line. For more random data, see Figure 2.

Figure 2: Mapping of velocity in unit/ms to acceleration factor (unitless) for a random data set. X axes here are labelled in units/ms and mm/s.
For the data set, I recorded the velocity from libinput while using Firefox a bit.

The smoothing takes history into account, so the data points we get depend on the usage. In this data set (and others I tested) we see that the majority of the points still lie on or close to the pure function, apparently the delta doesn't matter that much. Nonetheless, there are a few points that suggest that the smoothing does take effect in some cases.

It's important to note that this is already the second smoothing to take effect - remember that the velocity (may) average over multiple events and thus smoothens the input data. However, the two smoothing effects somewhat complement each other: velocity smoothing only happens when the pointer moves consistently without much change, the Simpson's smoothing effect is most pronounced when the pointer moves erratically.

Ok, now we have the basic function, let's look at the effect.

Pointer speed mappings

Figure 3: Mapping raw unaccelerated dx to accelerated dx, in mm/s, assuming a constant physical device resolution of 400 dpi that sends events at 125Hz. dx range mapped is 0..127
The graph was produced by sending 30 events with the same constant speed, then dividing by the number of events to reduce any effects tracker feeding has at the initial couple of events.

The two lines show the actual output speed in mm/s and the gain in mm/s, i.e. (output speed - input speed). We can see the little nook where the threshold kicks in, and after that the acceleration is linear. Look at Figure 1 again: the linear acceleration is caused by the acceleration factor maxing out quickly.

Most of this graph is theoretical only, though. On your average mouse you don't usually get a delta greater than 10 or 15, and this graph covers the theoretical range up to 127, so you'd only ever see the effect of up to ~120 mm/s. A more realistic view of the graph is:

Figure 4: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 3, but zoomed to the realistic range. We go from a linear speed increase (no acceleration) to a quick bump once the threshold is hit and from then on to a linear speed increase once the maximum acceleration is hit.

And to verify, the ratio of output speed : input speed:

Figure 5: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated.

Looks pretty much exactly like the pure acceleration function, which is to be expected. What's important here, though, is that this is the effective speed, not some mathematical abstraction. And it shows one limitation: we go from 0 to full acceleration within a really small window.

Again, this is the full theoretical range, the more realistic range is:

Figure 6: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, i.e. the ratio of accelerated:unaccelerated. Zoomed in to a max of 120 mm/s (15 dx/event).
Same data as Figure 5, just zoomed in to a maximum of 120 mm/s. If we assume that 15 dx/event is roughly the maximum you can reach with a mouse you'll see that we've reached maximum acceleration at a third of the maximum speed and the window where we have adaptive acceleration is tiny.

Tweaking threshold/accel doesn't do that much. Below are the two graphs representing the default (threshold=0.4, accel=2), a doubled threshold (threshold=0.8, accel=2) and a doubled acceleration (threshold=0.4, accel=4).

Figure 7: Mapping raw unaccelerated dx to accelerated dx, see Figure 3 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Figure 8: Mapping of the unit-less gain of raw unaccelerated dx to accelerated dx, see Figure 5 for details. Zoomed in to a max of 120 mm/s (15 dx/event). Graphs represent threshold:accel settings of 0.4:2, 0.8:2, 0.4:4.
Doubling either setting just moves the adaptive window around, it doesn't change that much in the grand scheme of things.

Now, of course these were all fairly simple examples with constant speed, etc. Let's look at a diagram of what is essentially random movement, me clicking around in Firefox for a bit:

Figure 9: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set.
And the zoomed-in version of this:
Figure 10: Mapping raw unaccelerated dx to accelerated dx on a fixed random data set, zoomed in to events 450-550 of that set.
This is more-or-less random movement reflecting some real-world usage. What I find interesting is that it's very hard to see any areas where smoothing takes visible effect. The accelerated curve largely looks like a stretched input curve. To be honest, I'm not sure what I should've expected here or how to read it; pointer acceleration data from real-world usage is notoriously hard to visualize.

Summary

So, in summary: I think there is room for improvement. We have no acceleration up to the threshold, and then we accelerate within too small a window. Acceleration stops adjusting to the speed too soon. This makes us lose precision, and small speed changes are punished quickly.

Increasing the threshold or the acceleration factor doesn't do that much. Any increase in acceleration makes the mouse faster but the adaptive window stays small. Any increase in threshold makes the acceleration kick in later, but the adaptive window stays small.

We've already merged a number of fixes into libinput, but some more work is needed. I think that to get a good pointer acceleration we need to get a larger adaptive window [Citation needed]. We're currently working on that (and figuring out how to evaluate whatever changes we come up with).

A word on units

The biggest issue I was struggling with when trying to understand the code was that of units. The code didn't document the units used anywhere, but it turns out that everything is either in device units ("mickeys"), device units/ms or (in the case of the acceleration factors) unitless.

Device units are unfortunately a pretty useless base entity, only slightly more precise than using the length of a piece of string. A device unit depends on the device resolution, which of course differs between devices. An average USB mouse tends to have 400 dpi (15.75 units/mm), but 800 dpi and 1000 dpi are common, and gaming mice go up to 8200 dpi. A touchpad can have resolutions of 1092 dpi (43 u/mm), 3277 dpi (129 u/mm), etc., and may even have different resolutions for x and y.
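
(For reference, the units/mm figures above are just the dpi values divided by 25.4 mm per inch, e.g.:)

$ awk 'BEGIN { printf "%.2f units/mm\n", 400 / 25.4 }'
15.75 units/mm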

This explains why, until commit e874d09b4, the touchpad felt slower than a "normal" mouse. We scaled to a magic constant of 10 units/mm before hitting the pointer acceleration code. Now, as said above, a mouse is likely to have a resolution of 15.75 units/mm, making it roughly 50% faster. The acceleration would kick in earlier on the mouse, giving the touchpad and the mouse not only different speeds but a different feel altogether.

Unfortunately, there is not much we can do about mice feeling different depending on the resolution. To my knowledge there is no way to query the resolution on a device. But for absolute devices that need pointer acceleration (i.e. touchpads) we can normalize to a fake resolution of 400 dpi and base the acceleration code on that. This provides the same feel on the mouse and the touchpad, as much as that is possible anyway.

July 11, 2014

Another GtkInspector update

I last wrote about GtkInspector in early June. Since then, a few things have fallen into place, so it is time for another status update.

Show all the things

An early focus of my work on the inspector was to make as much information visible as possible (objects, styles, settings,…).

(Screenshot: menus)

This has continued; the most recent additions in the object tree are submenus and combobox popups, which are now shown below the widgets they are attached to.

(Screenshot: style properties)

For widgets, we now have a Style Properties tab, which shows the values of known style properties and, importantly, where they originate in the theme CSS. This tab is still a bit preliminary, but it is a start towards answering the question: why does GTK+ not pick up this selector in my CSS?

(Screenshots: property binding, settings binding)

The property editor popover in the Properties tab is now showing information about GObject and GSettings property bindings, with an easy way to jump to the other object.

The environment section in the General tab includes GSETTINGS_SCHEMA_DIR when set.

Test all the things

Next to making things visible, the inspector is supposed to make it easy to test an application in various situations, such as different themes or right-to-left locales.

(Screenshot: the General tab)

A recent addition in this regard is a switch in the General tab to turn on touchscreen simulation. This used to be available only via the GTK_TEST_TOUCHSCREEN environment variable.
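
(For comparison, the environment-variable route, here using gtk3-demo as an arbitrary test application:)

$ GTK_TEST_TOUCHSCREEN=1 gtk3-demo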

(Screenshot: hi-dpi scaling)

You can also turn on hi-dpi scaling in the General tab to test how your app behaves in that scenario and look out for blurry icons and the like. Careful: this will make your windows giant.

Better picking

I found that I often use the widget picker right after opening the inspector window, so I figured that I should try to combine these two actions. As a result, Ctrl-Shift-I will now open the inspector and select the widget that was under the pointer before. Ctrl-Shift-D still toggles the inspector on and off.

Another small improvement to picking is that the Escape key can now be used to cancel a picking operation.

Less confusion

As I’ve mentioned above, some of the things that the inspector lets you change at runtime can also be set from environment variables (e.g. GTK_THEME). And when they are set, the environment variables often override any runtime changes.

(Screenshot: a disabled control)

GtkInspector now reflects this by disabling the corresponding controls when they won’t take effect.

Help welcome

I’ll continue to improve the inspector, but there are more good ideas for possible features than one person can handle. If you want to help out, feel free to come by on IRC, or just put your contribution in Bugzilla, where the inspector now has its own component.