May 26, 2015

All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
Major service disruption
Service 'COPR Build System' now has status: major: Network issues in PHX - Copr is queuing jobs but not processing the queue. We are working on this issue.
Major service disruption
Service 'COPR Build System' now has status: major: Network issues
Answer page and Sign in page mockups for Askbot
Answer Page Desktop View Mockups

Answer Page Mobile View Mockups

Sign in page Desktop view mockups

Sign in Page Mobile view mockups

xfce4-power-manager updated to 1.5.0

Xfce4-power-manager version 1.5.0 was released today and I have updated it for Rawhide and F22. Apart from bug fixes, there are one or two nice UI changes (shown in the screenshots).

As always, if you encounter any bugs with this update, please submit a bug report on the bugzilla.


May 25, 2015

impress, right click, insert image

Added "insert image" to the right-click context menu in Impress.
A proud Linux history – 15 years ago and the Brazilian ATM

Some time ago I passed by one of the branches of a southern Brazilian bank called Banrisul and noticed a change: the ATMs are evolving. The ATMs are moving to modern code, and I don't know what they are using now, but it is the past that is the history itself.

Most old-school Linux folks remember that one of the first bank ATMs in the world running Linux (or at least the first openly shown) was made here, at this bank. Here's a picture from the wonderful Linux Journal article by John MadDog Hall (I hope he won't mind that I'm citing him here).



The Banrisul “Tux” ATM, picture from John MadDog Hall


The story I want to share with you is how that "marble Tux" happened. Yes, the machine you see in the picture was a production machine, and it ran all over Brazil for at least 10 years.

So, a 25-year-old boy, in this case me, the guy typing now, who was working at an ILOG graphical toolkit partner, suddenly decided to look for Linux jobs. I had been out of university for one year, but had already been infected by open source and Linux for more than three years, and thought it could be done.

Lucky for me, there was a company locally in Curitiba hiring Linux people for a short prototype project in C, and it was the chance I foresaw to enter the Linux job world for good. That company was Conectiva, and the prototype ended up being my first job there. Mostly, at that time, it was a confluence of forces, since all the players involved believed it could be done: the bank, through its manager Carlos Eduardo Wagner; Conectiva's corporate development manager, João Luis Barbosa; and the ATM maker PERTO, moving to Linux.

And then they needed the suicide guys, meaning me and Ruben Trancoso, who wrote the mainframe comm network stack.

To sum up: three months, four different ATMs with their original, machine-specific DOS code, and one brand-new ATM designed by PERTO to be used for the first time in this project. That's it.

We didn't have many requirements at the time: mostly keep the same original interface and make it work. Against all odds, we got the base code ported quickly, but it was still 2000, and the Linux graphics stack and its licensing were not yet clearly settled. Qt was out of the question, and GTK was not suitable for the older environments. Setting other toolkits aside, I decided to go with pure X11 code, which at least removed one layer of code to bug-test on our side, despite the inherent difficulty, and coming from a guy already used to C++ toolkits (ILOG Views, nowadays owned by IBM).

But it worked, and it paid off the effort. Then one day comes the day when the manager sits down at your side and says: "We have a big meeting with the bank directors to show the prototype. Is it ready?" The interface was already exactly the same as the old DOS interfaces, and that was our initial target.

The answer from me was a sound yes, from Ruben as well, but I asked if I could "pimp up" the interface a little. It was a demo anyway, and it didn't need to be the final result. I just didn't tell them what I would be doing, since I had some idea, but not THE FINAL idea.

So I opened GIMP, picked up the Conectiva logo, and put it on the top right, as a proud developer of his company, and to show that it was done by us, here in Brazil. I knew this was just for testing and would never go to production.

And, being as aesthetic as a developer can be, the lower left corner looked visibly empty and unbalanced. It could hold something else, but nothing too "loud" graphically, so I decided an embossed figure would be OK-ish. I started drumming my fingers, and I heard someone around the office saying something ...Linux..., and again the word ...Linux..., so I thought it obviously had to be something Linux-related. But there was no Linux text logo, no official one at least; the only thing was Tux. So I placed that embossed Tux, proud that at least I, my colleagues and some people at Banrisul would see what we had achieved. Again, I knew that was for demo day, and that in production the clean interface would be back.

Then the day of the demo and approval came. My manager from Banrisul came back and said everyone was happy with the results; everything worked as expected, with only one single remark (which I was already expecting): the logo had to go.


Not one single remark about that embossed, shadowed Tux.

And then the machine went to a bank office for a real public test; again, no remark about the Tux logo, and some people outside even noticed the penguin.

The rest is history. I left Banrisul after the work and went back to Conectiva engineering and KDE, while several other Conectiva staff, who knew the code better than me, went there to finish it and to polish away or remove old DOS tidbits. Fifteen years later, you can still see some Tux happily providing money and services to customers.

I remember the day John MadDog took that picture at one FISL, and I remember a crazy Miguel de Icaza jumping over the machine taking pictures as well at FISL. Banrisul was smart to place a machine right beside the stairs at the entrance of FISL, where thousands of geeks passed daily during every conference.

Never intended, well executed 😀

May 24, 2015

Fedora 21 chrooted on an aarch64 Nexus 9


A while back I bought a Nexus 9, mainly because it has a weird processor that emulates a 64 bit ARM (aarch64). Google seem to have abandoned this platform entirely, just 6 months after I got it, so fuck you too Google. Anyway here’s how I installed a Fedora 21 aarch64 chroot on the device, using virt-builder and virt-tar-out and a bunch of unnecessary hassle.

First I ran virt-builder, which takes under a minute to produce a Fedora 21 aarch64 disk image. I then used virt-tar-out to convert all the files in that disk image into a tar file:

$ virt-builder --arch aarch64 fedora-21
$ virt-tar-out -a fedora-21.img / chroot.tar

Copy this file over to the N9, and unpack it. I have rooted my N9, so I can do this as root to preserve all the permissions etc:

# mkdir root
# cd root
# tar -xf /sdcard/Download/chroot.tar
# cd ..

And how can there not be a tar utility in Android?? I had to build a static ‘tar’ for aarch64 using my existing aarch64 server just to run the above command. And how can there be no chroot utility either!? I ended up compiling that myself too, yada yada.

After all that you can do:

# mount -o bind /dev root/dev
# mount -o bind /proc root/proc
# mount -o bind /sys root/sys
# PATH=/usr/bin:/bin LD_PRELOAD= chroot root /bin/bash

which gives me at least a Fedora 21 shell on Android.

Edit: A few further notes:

  1. When setting up a non-root user account inside the chroot, give it the same UID, GID and groups as the ordinary non-privileged Android user account. In particular it must be in the inet group, else network access is blocked.
  2. You may need to set up /etc/resolv.conf by hand in the chroot.
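If you do end up writing /etc/resolv.conf by hand, a single nameserver line is enough. The address below is Google's public resolver, used here purely as a placeholder; substitute whatever DNS server your network actually provides:

```
# /etc/resolv.conf inside the chroot -- minimal example
# (8.8.8.8 is a placeholder; use your own network's resolver)
nameserver 8.8.8.8
```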

Attention Fedora 22 prerelease users

Just a note for everyone/anyone who installed Fedora 22 from anything before the Release Candidate (RC) composes (if you install from the final release on Tuesday, you are of course not affected):

Before that point the updates-testing repository was enabled, and you very likely installed some things from it if you did any installs or updates after installation. A fedora-release update has since come along and disabled this repo, so you may have packages from it installed even though the repo is no longer enabled. This can show up as weird issues around mismatched -devel packages or other strange-looking dependency problems.

Please do one of the following if you see any such issues:

1. You can re-enable updates-testing and help us test updates. See:


2. You can run ‘dnf distro-sync’ to downgrade your packages to the correct versions available in the updates and base repos.
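For option 1, the repo can be re-enabled either with dnf config-manager (from dnf-plugins-core) or by editing its repo file directly. A sketch of the relevant stanza; the exact file name and remaining lines may differ slightly on your system:

```
# /etc/yum.repos.d/fedora-updates-testing.repo (excerpt, assumed layout)
[updates-testing]
name=Fedora $releasever - $basearch - Test Updates
# set back to 1 to re-enable the repository
enabled=1
```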

New domain:

When I opened this web site in 2005, I simply used the domain I had owned since 2000.

With the repository's growing success, I thought it was time for it to use its own domain:

So, now:

Of course, the old addresses are still reachable, with no time limit planned.

Inkscape Mockups for Askbot
Askbot User Profile page desktop view mockup


Askbot User Profile page mobile view mockups

Askbot Ask Question page desktop view mockups

Askbot Ask Question page mobile view mockups

How to quickly migrate mail from Evolution to Thunderbird with Dovecot

Fedora 22 is just around the corner and while upgrading my machine, I decided to completely ditch Gnome’s Evolution in favor of Mozilla Thunderbird. I had already switched a while back, but still had tons of mail in an old local Evolution account I wanted to migrate.

Unfortunately all the HowTos I found on the web assume Evolution stores mail in the mbox format, while it switched to maildir in version 3.2.0. MozillaZine suggests first converting maildir to mbox and then importing the resulting files with the ImportExportTools extension. Why so cumbersome when there is the excellent Dovecot IMAP server, which can read both maildir and mbox?

Migrating mail with Dovecot is straightforward. Quit Evolution and install Dovecot:

yum install dovecot

Then set it to use Evolution’s local storage as mail location:

echo "mail_location = maildir:~/.local/share/evolution/mail/local/" \
 >> /etc/dovecot/conf.d/10-mail.conf
service dovecot start

Fire up Thunderbird, configure a new account for your user on localhost and copy over all mail from this account to the “Local folders”. There you go!

May 23, 2015

Nvidia driver modeset kernel module

As part of the latest Nvidia driver update, version 352.09, there is now code supporting a new nvidia-modeset kernel module that should run on the system and interface with the usual nvidia kernel module.

Evidence of this is in the kernel module sources and in the nvidia-modprobe command code that is hosted on GitHub.

From the nv-modeset-interface.h header in the kernel module “sources”:

 * This file defines the interface between the nvidia.ko and
 * nvidia-modeset.ko Linux kernel modules.
 * A minor device file from nvidia.ko's pool is dedicated to
 * nvidia-modeset.ko.  nvidia-modeset.ko registers with nvidia.ko by
 * calling nvidia_register_module() and providing its file operation
 * callback functions.
 * Later, nvidia-modeset.ko calls nvidia.ko's nvidia_get_rm_ops()
 * function to get the RMAPI function pointers which it will need.

Let’s hope that modesetting support in the driver is near and that we will not have to wait additional years for it. Also, let’s hope that the firmware images required by Nouveau for the latest hardware will be released soon, without further delays.

As soon as it is delivered, I will implement it in the packages according to the driver table on the repository page.

Steam for CentOS / RHEL 7

The Steam repository now contains the Steam client package plus the S3 texture compression library for Open Source drivers for CentOS and Red Hat Enterprise Linux 7.

The CentOS/RHEL repository also contains all the SteamOS session files and binaries for running a Steam-only system, like the Fedora ones. As with the Fedora packages, the main client is 32-bit only, so when running on a 64-bit system make sure to also load your 32-bit libraries if you are running on proprietary drivers, or the 32-bit S3 texture compression library if you are on open source drivers. Work on Valve’s X-Box kernel module for CentOS/RHEL is ongoing, as in its current form there are unresolved symbols.

As part of the update, the Fedora X-Box kernel module also gains additional fixes on top of Valve’s code.

I will also add the packages to RPM Fusion when a CentOS/RHEL 7 branch eventually becomes available.

For full details see the repository page.



AskFedora Pages/Flow

Video: LXD containers vs. KVM

Since I'm such a big container fan (been using them on Linux since 2005) and I recently blogged about Docker, LXC, and OpenVZ... how could I pass up posting this? Some Canonical guys gave a presentation at the recent OpenStack Summit on "LXD vs. KVM". What is LXD? It is basically a management service for LXC that supposedly adds a lot of the features LXC was missing... and is much easier to use. For a couple of years now Canonical has shown an interest in LXC and has supposedly been doing a lot of development work around it. I wonder what specifically? They almost seem like the only company interested in LXC... or at least the only one putting forth a publicly noticeable effort around it.

Why Should You Care?
If Canonical can actually deliver on their LXD roadmap it is possible that it will be a suitable substitute for OpenVZ. The main "problem" with OpenVZ is that it is not in the mainline kernel, whereas LXC is. In practice you have to purposefully build an OpenVZ host (currently recommended on RHEL 6 or a clone), but with LXC/LXD any contemporary Linux system should be able to do full-distro containers... aka containers everywhere for everyone.

How About a Roadmap
Where is LXD now? Well, so far it seems to be mostly a technology preview available in Ubuntu 15.04, with the target "usable and production ready" release slated for the next Ubuntu LTS release (16.04)... which, if you aren't familiar with their numbering scheme, is April 2016.

That's about a year away, right... so what do they still have left to do? If you go to about 23:30 in the video you'll get to the "Roadmap" section. They have work to do on storage, networking, resource management and usage reporting, and live migration. A bit of that falls within the OpenStack context... integrating with various OpenStack components so containers will be more in parity with VMs for OpenStack users... but still, that's quite a bit of work.

The main things I care about absolutely being there are isolation and resource management, which are really the killer features of OpenVZ. So far as I can tell, LXD does not offer read-only base images and layering like Docker... so that would be an area for improvement I would suggest. BTW, they are using CRIU for checkpointing and live migration... thanks Parallels/OpenVZ!

Certainly LXD won't really make it no matter how good it is until it is available in more Linux distributions than just Ubuntu. In a video interview a while back (which I don't have the link handy for at the moment) Mark Shuttleworth stated that he hopes and expects to see LXD in other distributions. One of the first distros I hope to see with LXD is Fedora and that's the reason I tagged this post appropriately.

Broadening the Ecosystem
Historically I've been a bit of an anti-Canonical person but thinking more about it recently and taking the emotion out of it... I do wish Ubuntu success because we definitely need more FLOSS companies doing well financially in the market... and I think Red Hat (and OpenVZ) will have an incentive to do better. Competition is good, right? Anyway, enjoy the video. BTW, everything they tout as a benefit of LXD over KVM (density, speed of startup, scalability, etc) is also true of OpenVZ for almost a decade now.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="480" src="" width="853"></iframe>

For those with iFrame issues, here's the YouTube link: LXD vs. KVM

Containers Should Contain
Let's face it, Docker (in its current form) sucks. Why? Well, ok... Docker doesn't totally suck... because it is for applications and not a full system... but if a container doesn't contain, it isn't a container. That's just how language works. If you have an airplane that doesn't fly, it isn't an airplane, right? Docker should really say it is an "Uncontainer" or "Uncontained containers"... or better yet, just use a different word. What word? I'm not sure. Do you have any suggestions? (Email me:

What is containment? For me it is really isolation and resource control. If a container doesn't do that well, call it something else. OpenVZ is a container. No, really. It contains. OpenVZ didn't start life using the word container. On day one they were calling them "Virtual Environments" (VEs). A year or two later they decided "Virtual Private Server" (VPS) was the preferred term. Some time after that, "VPS" became quite ambiguous, used by hosting companies with hardware virtualization backends like Xen and VMware (KVM wasn't born yet, or was still a baby). Then OpenVZ finally settled on the word "container".

If you want a fairly good history of the birth and growth of OpenVZ over the years, see Kir's recent presentation.

Hopefully LXD will live up to "container" but we'll have to wait and see.


GSoC Update #3
I had a meeting yesterday (22 May 2015) in the #fedora-apps IRC channel with my mentors Sarup Banskota and Suchakra Sharma. The purpose of the meeting was to revise my timeline and refine what I had included in it into more specific subtasks.

My actions items in this meeting were:

  • Analyse the flow of pages in Askbot
  • Set up a staging instance to share work in progress
  • Create mockups for the Askbot pages
You can see the minutes of the meeting at:
San Francisco Python Meetup Group ...
 ... Python Meetup is putting out a call for speakers:

Meet other local Python Programming Language enthusiasts! Please join us on the second Wednesday of each month for a Presentation Night of intermediate and advanced Python talks. Please join us on the third Wednesday of each month for a Project Night of Python tutorials, mentors helping new and intermediate Python developers, sprints on Python projects, and developers working on their own projects.

More about this at sfpython.

May 22, 2015

Fedora 22 is “Go” for May 26!

That’s right — the bits are heading out the door (and onto our mirror network)! Expect the official announcement around 10am US Eastern time Tuesday morning.

Quick tip: reading man pages with yelp
Please also note the remarks on the HowTos!

Unknown to many, GNOME's help viewer yelp is able to display man pages.

If yelp is already open, you can open a man page quite simply by pressing CTRL-L and then entering the following in the input field



Alternatively, you can also launch yelp directly with a man page. To do so, simply press e.g. ALT+F2 and then enter

yelp man:man-page


In both cases, man-page must of course be replaced by the man page to be opened.
iio-sensor-proxy 1.0 is out!
Modern (and some less modern) laptops and tablets have a lot of built-in sensors: an accelerometer for screen orientation, an ambient light sensor to adjust the screen brightness, a compass for navigation, a proximity sensor to turn off the screen when it's next to your ear, etc.


We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.


Accelerometers will work in GNOME 3.18 as well as they used to, once a few bugs have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.

Orientation lock in action

As we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors) and the user felt that they were fighting the computer for control of the backlight.

Richard fixed that, though, adapting work he did on the ColorHug ALS sensor; the brightness is now completely in the user's control, and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

Power saving in action

This will be available in the upcoming GNOME 3.17.2 development release.

Looking ahead

For future versions, we'll want to export the raw accelerometer readings, so that applications, including games, can make use of them, which might bring up security issues. SDL, Firefox, WebKit could all do with being adapted, in the near future.

We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

Richard and Benjamin Tissoires, of fixing input devices fame, are currently working on making the ColorHug-ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.


We're currently using GitHub for bug and code tracking. Releases are mirrored on, as GitHub is known to mangle filenames. API documentation is available on

[1]: gnome-settings-daemon, gnome-shell, and systemd will need patches
What is Compass? 

Compass is an open source CSS authoring framework. It adds useful mixins to Sass. Compass is very popular because of how easy and useful it is for creating styles for web-based applications.


Installing Compass on your machine is really simple. You just need to type the following commands on the command line (on Windows) or in the terminal (on Linux).

To install it you need RubyGems on your machine. Since we covered installing Ruby and Sass in the previous blog post, let's now just install Compass.

sudo gem install compass

After that, all you need to do is create a Compass project in your theme directory with the following command.

compass create nameofyourtheme

As we did with Sass, we need to start watching the Compass project, so type:

nameofyourtheme$ compass watch

Then you will be able to view something like the following.

>>> Compass is watching for changes. Press Ctrl-C to Stop.
    write stylesheets/ie.css
    write stylesheets/print.css
    write stylesheets/screen.css

Now you are all ready to start coding.:)

Coding with Compass

Inside the theme directory of the Compass project we created, we can see:
  • sass directory
  • stylesheets directory
  • .sass-cache directory
  • config.rb file

Yeah, you guessed right! :) It's inside the sass directory that we should place our scss files, and the corresponding css files will be generated inside the stylesheets directory.

Type something in the screen.scss file inside the sass directory and hit save. Then you will see the corresponding changes in the screen.css file in the stylesheets directory.

Inside the screen.scss file you will have something like

/* Welcome to Compass.
 * In this file you should write your main styles. (or centralize your imports)
 * Import this file using the following HTML or equivalent:
 * <link href="/stylesheets/screen.css" media="screen, projection" rel="stylesheet" type="text/css" /> */
@import "compass/reset";

Also add the following line to it.

@import "compass";

By adding this we will be able to access many of the awesome mixins provided by Compass.

For example: 

@include float-right;

is a cross browser compatible float-right command.


$test_color: #ccc;
color: shade($test_color, 30%);

is a Compass color function.


@include text-shadow(0 2px 2px rgba(#000,0.3));

is a mixin for applying text shadows, and rgba() is a Sass function.

All of the Compass mixins that you might want are documented here. Learning them all might take some time, yet once learnt they will make your work much easier than native CSS does.

I’m going to FUDCon APAC 2015!

Last year, I was really impressed by the level of organization and the atmosphere at FUDCon APAC, which took place in Beijing, China. That is why I decided to submit a talk for FUDCon APAC 2015, taking place in Pune, India. And guess what: my talk was accepted!

I named the talk “Present and Future of Fedora Workstation”. I’m now part of the Red Hat desktop team and we have a lot of interesting stuff that has made it to F22 and even more interesting stuff that is planned for F23. So I’ll talk about all the goodness that is changing Fedora Workstation into the best desktop system for active and creative users (developers, writers, designers,…).

I’m arriving in Mumbai at 8:35am on June 25th. I’ve seen that some people arrive around that time, too; it’d be great to organize transportation to Pune together. After FUDCon, I’m taking a week of holidays and would like to check out interesting places around, e.g. I hope to see Goa before the proper rainy season starts. India will be my 50th visited country and I’m looking forward to it.

See you in Pune!

Sass which will make my life easy
Hi there, this is my first blog update on my GSoC project, the AskFedora redesign. For the project I will need to write all the styles from scratch for all the interfaces of AskFedora, and I will be using Sass, Compass and Susy in the redesign process.

I am totally new to Sass and Susy and to the Compass framework, so I'm learning them from scratch. Here are some notes on using Sass that might be useful for anyone interested in styling. So here goes. :)

Before going into details, you might need an idea of what CSS preprocessing is.

CSS Preprocessing

CSS, as we all know, is used for styling web pages. Many beautiful styles can be created with CSS, yet as style sheets grow larger they become more difficult to maintain. CSS preprocessing can help us here.

CSS preprocessors like Sass have many features similar to those offered by programming languages like Java or C. These features include variables, nesting, mixins (discussed later), inheritance, etc., which we cannot have in plain CSS. They make maintaining our styles much easier, which saves programmers a lot of work.

What is Sass?

Sass is a CSS preprocessor, and one of the most widely used and most powerful professional CSS extension tools in the world.
When coding, it is a filename.scss file that we will be editing, and Sass converts it to a css file.


Installing Sass just requires typing the following commands in the terminal. To install Sass you need RubyGems installed on your computer; the set of commands below takes care of that as well.

sudo apt-get install rubygems
sudo gem update
sudo gem install sass

If Sass is properly installed, we can view its version by typing "sass -v". It should display something like the following.

Sass 3.4.13 (Selective Steve)

Coding with Sass

To experiment with Sass I created a simple style.scss file; in the terminal you then need to type:

sass --watch style.scss:style.css

This instructs Sass to watch for edits to the style.scss file: every time style.scss changes, Sass updates the style.css file accordingly.

Sample code with Sass

This is the sample code that I typed while following the Sass tutorial [1] on their site. It includes many features of Sass:
  • Sass variables
  • Nesting
  • Partials and require statement
  • Mixins
  • Inheritance
  • Operators
----------------- style.scss -----------------

@import 'reset'; //importing sass partials

/* Use of variables */
$font-stack: Helvetica, sans-serif;
$primary-color: #333;

body {
    font: 100% $font-stack;
    color: $primary-color;
}

/* Nesting */
nav {
    ul {
        margin: 0;
        padding: 0;
        list-style: none;
    }

    li { display: inline-block; }

    a {
        display: block;
        padding: 6px 12px;
        text-decoration: none;
    }
}

@mixin border-radius($radius) {  //mixins
  -webkit-border-radius: $radius;
     -moz-border-radius: $radius;
      -ms-border-radius: $radius;
          border-radius: $radius;
}

.box { @include border-radius(10px); }

/* Inheritance */
.message {
    border: 1px solid #ccc;
    padding: 10px;
    color: #333;
}

.success {
    @extend .message;
    border-color: red;
}

.error {
    border-color: red;
}

.warning {
    @extend .message;
    border-color: yellow;
}

/* Creating a simple grid with sass operators */
.container {
    width: 100%;
}

article[role="main"] {
    float: left;
    width: 600px / 960px * 100%;
}

article[role="complimentary"] {
    float: right;
    width: 300px / 960px * 100%;
}

----------------- _reset.scss -----------------

ol {
    margin: 0;
    padding: 0;
}

The corresponding converted css file:

----------------- style.css -----------------

ol {
  margin: 0;
  padding: 0; }

/* Use of variables */
body {
  font: 100% Helvetica, sans-serif;
  color: #333; }

/* Nesting */
nav ul {
  margin: 0;
  padding: 0;
  list-style: none; }
nav li {
  display: inline-block; }
nav a {
  display: block;
  padding: 6px 12px;
  text-decoration: none; }

.box {
  -webkit-border-radius: 10px;
  -moz-border-radius: 10px;
  -ms-border-radius: 10px;
  border-radius: 10px; }

/* Inheritance */
.message, .success, .warning {
  border: 1px solid #ccc;
  padding: 10px;
  color: #333; }

.success {
  border-color: red; }

.error {
  border-color: red; }

.warning {
  border-color: yellow; }

/* Creating a simple grid with sass operators */
.container {
  width: 100%; }

article[role="main"] {
  float: left;
  width: 62.5%; }

article[role="complimentary"] {
  float: right;
  width: 31.25%; }

/*# */

Let me give you a brief overview of the code above.

Sass Variables

First, under the variables section, we can declare Sass variables with a '$' mark in front of the variable name. For example, if we want to use the same color in a number of different places, we can assign it to a variable and use that, so that we only need to change the value of the variable if we want to change the color.

The variables are replaced by their corresponding values in the translated css file.


Nesting

Nesting is done to avoid complications by keeping a clearly visible hierarchy. For instance, instead of writing "nav li" we can nest "li" inside "nav" to make the structure more visible and manageable.

Partials and Import Statement

Sass partials are small Sass snippets, kept in files whose names start with an underscore, that can be included inside other scss files. The underscore lets Sass identify them as partials, so they are only included in the scss files that import them.

Partials such as "_reset.scss", which I showed above, can be included in your main scss files by importing them with a statement like:

@import 'reset';

Note that the underscore in front of the file name and the .scss extension are omitted in the import statement.
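Since this naming rule trips people up, here is the mapping spelled out as a tiny, purely illustrative shell check (the file names are the ones from the example above):

```shell
# Sass resolves @import 'reset' to a partial file named _reset.scss:
# an underscore prefix is added and the .scss extension is appended.
import_name="reset"
partial_file="_${import_name}.scss"
echo "$partial_file"
```

which prints `_reset.scss`.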


Mixins

If you want to use a group of CSS declarations together in many places, you can use the mixins provided by Sass. In the above example we have a mixin named "border-radius" which accepts a variable named "radius"; the same set of CSS declarations is repeated each time we include:

@include border-radius(10px);


Inheritance

The concept of inheritance that we come across in object-oriented programming languages applies to CSS styles as well. In the above example, all the styles applied to the message class are inherited by the error, success and warning classes too.


Operators

Sass has the following arithmetic operators which we can work with:

+, -, *, / and %

As you can see, only the result of these operations appears in the compiled .css file.
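To see where the grid percentages in the compiled CSS above come from, here is the arithmetic spelled out in Python. The 600px/300px column widths on a 960px grid are an assumption borrowed from the common Sass guide example, since the source .scss is not shown here:

```python
# Reproduce the division the Sass compiler performs for the grid above,
# assuming the usual 960px-grid example:
#   width: 600px / 960px * 100%;   ->  62.5%
#   width: 300px / 960px * 100%;   ->  31.25%

def sass_percentage(part_px: float, total_px: float) -> float:
    """Mimic Sass's `part / total * 100%` operation."""
    return part_px / total_px * 100

print(sass_percentage(600, 960))  # 62.5, matching the main column
print(sass_percentage(300, 960))  # 31.25, matching the complementary column
```

Only these final numbers, never the expressions, end up in the generated .css file.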


[1] Sass language guide

DEVit Conf 2015 Impressions

It's been a busy week after DEVit conf took place in Thessaloniki. Here are my impressions.

Crack, Train, Fix, Release


I've started the day with the session called "Crack, Train, Fix, Release" by Chris Heilmann. While it was very interesting, for some unknown reason I was expecting a talk more closely related to software testing. Unfortunately at the same time in the other room was a talk called "Integration Testing from the Trenches" by Nicolas Frankel which I missed.

At the end Chris answered the question "What to do about old versions of IE?" And the answer pretty much was "Don't try to support everything; leave them with basic functionality so that users can achieve what they came for on your website. Don't put in nice buttons b/c IE 6 users are not used to nice things and they get confused."

If you remember I had a similar question to Jeremy Keith at Bulgaria Web Summit last month and the answer was similar:

Q: Which one is Jeremy's favorite device/browser to develop for.
A: Your approach is wrong and instead we should be thinking in terms of what features are
essential or non-essential for our websites and develop around features
(if supported, if not supported) not around browsers!

Btw I did ask Chris if he knows Jeremy and he does.

After the coffee break there was "JavaScript ♥ Unicode" by Mathias Bynens which I saw last year at How Camp in Veliko Tarnovo so I just stopped by to say hi and went to listen to "The future of responsive web design: web component queries" by Nikos Zinas. As far as I understood Nikos is a local rock-star developer. I'm not much into web development but the opportunity to create your own HTML components (tags) looks very promising. I guess there will be more business coming for Telerik :).

I wanted to listen to "Live Productive Coder" by Heinz Kabutz but that one started in Greek so I switched the room for "iOS real time content modifications using websockets" by Benny Weingarten-Gabbay.

After lunch I went straight for "Introduction to Docker: What is it and why should I care?" by Ian Miell which IMO was the most interesting talk of the day. It wasn't very technical but managed to clear some of the mysticism around Docker and what it actually is. I tried to grab a few minutes of Ian's time and we found topics of common interest to talk about (Project Atomic anyone?) but later failed to find him and continue the talk. I guess I'll have to follow online.

Tim Perry with "Your Web Stack Would Betray You In An Instant" made a great show. The room was packed, I myself was actually standing the whole time. He described a series of failures across the entire web development stack which gave developers hard times patching and upgrading their services. The lesson: everything fails, be prepared!

The last talk I visited was "GitHub Automation" by Forbes Lindesay. It was more of an inspirational talk, rather than technical one. GitHub provides cool API so why not use it?


DEVit team

From what I know this is the first year of DEVit. For a first timer the team did great! I particularly liked the two coffee breaks before lunch and in the early afternoon and the sponsors pitches in between the main talks.

All talks were recorded but I have no idea what's happening with the videos!

I will definitely make a point of visiting Thessaloniki more often and follow the local IT and start-up scenes there. And tonight is Silicon Drinkabout which will be the official after party of DigitalK in Sofia.

FUDCon Pune Planning Meeting - 19 May
Again a bit late from my side, apologies for that. So we had our weekly planning meeting on 19 May at the Red Hat Pune office. The event is approaching quite fast and we are making progress in terms of planning. In this meeting we discussed travel, schedule, outreach and swag.

As usual we kept notes and discussed different topics/blockers.

- List of speakers along with preferences.
- In touch with Global Mobility for visa invitation letters.
- First draft of the schedule.
- T-shirt design.
- June first/second week for the F22 release party + FUDCon advertisement.
- Discuss Fedora 22 DVD printing in the Ambassadors meeting.
- Site redesign complete and now in production.
- Keep looking for different vendors to get swag.

Note: If you are a speaker, please blog about your presence. We also have nice buttons which you can add to your site.

May 21, 2015

libblockdev reaches the 1.0 milestone!

A year ago, I started working on a new storage library for low-level operations with various types of block devices — libblockdev. Today, I’m happy to announce that the library reached the 1.0 milestone which means that it covers all the functionality that has been stated in the initial goals and it’s going to keep the API stable.

A little bit of a background

Are you asking the question: "Why yet another piece of code implementing what’s already been implemented in many other places?" That’s, of course, a very good and probably crucial question. The answer is that I, and the people who were at the birth of the idea, think that this is the first time such a thing has been implemented in a way that is usable for a wide range of tools, applications, libraries, etc. Let’s start with the requirements every widely usable implementation should meet:

  1. it should be written in C so that it is usable for code written in low-level languages
  2. it should be a library, as DBus is not usable together with chroot() and things like that, and running subprocesses is suboptimal (slow, eating a lot of random data entropy, the need to parse the output, etc.)
  3. it should provide bindings for as many languages as possible, in particular the widely used high-level languages like Python, Ruby, etc.
  4. it shouldn’t be a single monolithic piece required by every user code no matter how much of the library it actually needs
  5. it should have a stable API
  6. it should support all major storage technologies (LVM, MD RAID, BTRFS, LUKS,…)

If we take the candidates potentially covering the low-level operations with block devices — Blivet, ssm and udisks2 (now being replaced by storaged) — we can easily come to a conclusion that none of them meets the requirements above. Blivet 1 covers the functionality in a great way, but it’s written in Python and thus hardly usable from code written in other languages. ssm 2 is also written in Python; moreover, it’s an application and it doesn’t cover all the technologies (it doesn’t try to). udisks2 3 and now storaged 4 provide a DBus API and don’t provide for example functions related to BTRFS (and even LVM in the case of udisks2).

The libblockdev library is:
  • written in C,
  • using GLib and providing bindings for all languages supporting GObject introspection (Python, Perl, Ruby, Haskell*,…),
  • modular — using separate plugins for all technologies (LVM, Btrfs,…),
  • covering all technologies Blivet supports 5 plus some more,

by which it fulfills all the requirements mentioned above. It’s only a wish, but a strong one, that every new piece of code written for low-level manipulation with block devices 6 should be written as part of the libblockdev library, tested, and reused in as many places as possible instead of being written again and again in many, many places with new, old, weird, surprising and custom bugs.


As mentioned above, the library loads plugins that provide the functionality, each related to one storage technology. Right now, there are lvm, btrfs, swap, loop, crypto, mpath, dm, mdraid, kbd and s390 plugins. 7 The library itself basically only provides a thin wrapper around its plugins so that it can all be easily used via GObject introspection and so that it is easy to set up logging (and probably more in the future). However, each of the plugins can be used as a standalone shared library in case that’s desired. The plugins are loaded when the bd_init() function is called 8 and changes (loading more/less plugins) can later be done with the bd_reinit() function. It is also possible to reload a plugin in a long-running process if it gets updated, for example. If a function provided by a plugin that was not loaded is called, the call fails with an error, but doesn’t crash, and thus it is up to the caller code to deal with such a situation.
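The fail-instead-of-crash dispatch described above can be sketched in plain Python. This is only a conceptual illustration of the plugin pattern, not the real libblockdev API; the plugin contents and function names here are made up:

```python
# Toy illustration of libblockdev-style plugin dispatch: calling a function
# from a plugin that was not loaded fails with an error instead of crashing.
# This mimics the described behaviour only; it is not the real API.

class PluginNotLoadedError(Exception):
    pass

# Hypothetical plugin registry (the real plugins are shared libraries).
AVAILABLE_PLUGINS = {
    "lvm":  {"lvcreate": lambda name: "created LV %s" % name},
    "swap": {"mkswap":   lambda dev:  "formatted %s as swap" % dev},
}

class Library:
    def __init__(self):
        self._loaded = {}

    def init(self, plugins):
        """Load only the requested plugins (cf. bd_init())."""
        for p in plugins:
            self._loaded[p] = AVAILABLE_PLUGINS[p]

    def call(self, plugin, func, *args):
        """Dispatch to a plugin function; raise if the plugin is missing."""
        if plugin not in self._loaded:
            raise PluginNotLoadedError("plugin '%s' is not loaded" % plugin)
        return self._loaded[plugin][func](*args)

lib = Library()
lib.init(["lvm"])                           # load only the LVM plugin
print(lib.call("lvm", "lvcreate", "data"))  # works: created LV data
# lib.call("swap", "mkswap", "/dev/sda2")   # would raise PluginNotLoadedError
```

The point of the pattern is that the error surfaces as an ordinary failure the caller can handle, so a missing plugin never takes the whole process down.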

The libblockdev library is stateless from the perspective of block device manipulations. I.e., it has some internal state (like tracking whether the library has been initialized or not), but it doesn’t hold any state information about the block devices. So if you e.g. use it to create some LVM volume groups and then try to create a logical volume in a different, non-existing VG, it just fails at the point where LVM realizes that such a volume group doesn’t exist. That makes the library a lot simpler and "almost thread-safe", with the word "almost" being there just because some of the technologies don’t provide any other API than running various utilities as subprocesses, which cannot generally be considered thread-safe. 9

Scope (provided functionality)

The first goal for the library was to replace the Blivet’s devicelibs subpackage that provided all the low-level functions for manipulations with block devices. That fact also defined the original scope of the library. Later, we realized that we would like to add the LVM cache and bcache support to Blivet and the scope of the library got extended to the current state. The supported technologies are defined by the list of plugins the library uses (see above) and the full list of the functions can be seen either in the project’s features.rst file or by browsing the documentation.

Tests and reliability

Right now, there are 135 tests run manually and by a Jenkins instance hooked up to the project’s Git repository. The tests use loop devices to test the vast majority of the functions the library provides 10. They must be run as root, but that’s unavoidable if they are to really test the functionality and not just some mocked-up stubs that we would believe behave like a real system.

The library is used by Fedora 22’s installation process, as F22’s Blivet was ported to use libblockdev before the Beta release. There have been a few bugs reported against the library (the majority of them related to FW RAID setups), with all bugs being fixed and covered by tests for those particular use cases (based on data gathered from the logs in bug reports).

Future plans

Although the initial goals are all covered by version 1.0 of the library, there are already many suggestions for additional functionality and also extensions for some of the functions that are already implemented (extra arguments, etc.). The most important goal for the near future is to fix reported bugs in the current version and promote the library as much as possible so that the wish mentioned above gets fulfilled. The plan for a bit further in the future (let’s say 6-8 months) is to work on additional functionality targeting version 2.0 that will break the API for the purpose of extending and improving it.

To be more concrete, for example one of the planned new plugins is the fs plugin that will provide various functions related to file systems. One of such functions will definitely be the mkfs() function that will take a list (or dictionary) of extra options passed to the particular mkfs utility on top of the options constructed by the implementation of the function. The reason for that is the fact that some file systems support many configuration options during their creation and it would be cumbersome to cover them all with function parameters. In relation to that, at least some (if not all) of the LVM functions will also get such extra argument so that they are useful even in very specific use cases that require fine-tuning of the parameters not covered by functions’ arguments.

Another potential feature is to add some clever and nice way of progress reporting to functions that are expected to take a lot of time to finish, like lvresize(), pvmove(), resizefs() and others. It’s not always possible to track the progress because even the underlying tools/libraries don’t report it, but where possible, libblockdev should be able to pass that information to its callers, ideally in some unified way.

So a lot of work behind, much more ahead. It’s a challenging world, but I like taking challenges.

  1. a python package used by the Anaconda installer as a storage backend

  2. System Storage Manager

  3. daemon used by e.g. gnome-disks and the whole GNOME "storage stack"

  4. a fork of udisks2 adding an LVM API and being actively developed

  5. the first goal for the library was to replace Blivet’s devicelibs subpackage

  6. at higher than the most low-level layers, of course

  7. I hope that with the exception of kbd which stands for Kernel Block Devices the related technologies are clear, but don’t hesitate to ask in the comments if not.

  8. or e.g. BlockDev.init(plugins) in Python over the GObject introspection

  9. use Google and "fork shared library" for further reading

  10. 119 out of 132 to be more precise

Use Docker to Try Pulp

Using Docker, you can now easily and quickly stand up a demo deployment of Pulp. Excluding the time it takes Docker to download the images, it takes only a few seconds to have a brand new deployment running.

$ wget
$ sudo source /path/to/lots/of/storage/

The README explains a variety of security and other concerns that prevent this containerization of Pulp from being production-worthy. But it is a great way to try Pulp for the first time, try a new release, or experiment with new features in a throw-away environment.

I’ve been using these Docker images for a few months to do demos, reproduce bugs, and verify that each Pulp process can be run in isolation. They’ve been very stable and easy to work with, but let me know what your experience is.

The images themselves are built on CentOS 7. You have the best chance at a good experience if you run them on a distribution in the Red Hat, CentOS and Fedora “family”, but I have also had success running them on Ubuntu. There were a couple of hurdles to overcome (for example, for reasons I don’t understand, qpid’s persistent message store would not work), and I am interested to know if you encounter any problems running these images on a different distribution.

Just run the script mentioned in the README, and it will get everything configured and running. Then you can log in and start using it. The following shows an RPM sync, which under the hood makes use of a container running Apache and at least two containers running worker processes:

$ sudo docker run -it --rm --link pulpapi:pulpapi pulp/admin-client bash
[root@6610af58a341 /]# pulp-admin login -u admin
Enter password:
Successfully logged in. Session certificate will expire at May 19 02:50:16 2015 GMT.

[root@6610af58a341 /]# pulp-admin rpm repo create --repo-id=zoo --feed=
Successfully created repository [zoo]

[root@6610af58a341 /]# pulp-admin rpm repo sync run --repo-id=zoo
Synchronizing Repository [zoo]

This command may be exited via ctrl+c without affecting the request.

Downloading metadata...
... completed

Downloading repository content...
[==================================================] 100%
RPMs: 32/32 items
Delta RPMs: 0/0 items

... completed


Bokeh plots from Spark

This post will show you an extremely simple way to make quick-and-dirty Bokeh plots from data you’ve generated in Spark, but the basic technique is generally applicable to any data that you’re generating in some application that doesn’t necessarily link in the Bokeh libraries.

Getting started

We’ll need to have a recent version of Bokeh installed and some place to put our data. If you don’t already have Bokeh installed, use virtualenv to get it set up:

virtualenv ~/.bokeh-venv
source ~/.bokeh-venv/bin/activate
pip install bokeh numpy pandas

This will download and build a nontrivial percentage of the Internet. Consider getting some coffee or catching up with a colleague while it does its thing.

The next thing we’ll need is some place to stash data we’ve generated. I’ll use a very simple service that I developed just for this kind of application, but you can use whatever you want as long as it will let you store and retrieve JSON data from a given URL.
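For illustration, here is what such a minimal JSON-over-HTTP store might look like using only the Python standard library. This is a hypothetical stand-in for the service I mention, not its actual implementation, and it binds an ephemeral port rather than the 4091 used in the examples:

```python
# A throw-away in-memory JSON store: POST a JSON body to /tag/<name>,
# GET it back from the same URL. Any service with these two properties
# works for the plotting technique in this post; this one is just a sketch.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STORE = {}  # request path -> parsed JSON object

class JSONStoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Store the posted JSON body under the request path.
        length = int(self.headers.get("Content-Length", 0))
        STORE[self.path] = json.loads(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Serve back whatever JSON was stored under this path.
        if self.path in STORE:
            body = json.dumps(STORE[self.path]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet

def start_store(port=0):
    # port=0 binds an ephemeral port; return the one we actually got.
    server = HTTPServer(("localhost", port), JSONStoreHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

port = start_store()
data = {"x": [1, 2, 3], "y": [2, 4, 6]}
req = urllib.request.Request("http://localhost:%d/tag/foo" % port,
                             data=json.dumps(data).encode(), method="POST")
urllib.request.urlopen(req)
with urllib.request.urlopen("http://localhost:%d/tag/foo" % port) as resp:
    print(json.load(resp))  # {'x': [1, 2, 3], 'y': [2, 4, 6]}
```

Anything that can round-trip a JSON document at a stable URL like this will do; only the data_url you hand to the plot needs to change.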

With all that in place, we’re ready to make a simple plot.

A basic static-HTML plot

We’re going to use Bokeh to construct a basic line graph in a static HTML file. The catch is that instead of specifying the data statically, we’re going to specify it as a URL, which the page will poll so it can update the plot when it changes. Here’s what a script to construct that kind of plot looks like:

#!/usr/bin/env python

from bokeh.plotting import figure, output_file, show
from bokeh.models.sources import AjaxDataSource
output_file("json.html", title="data polling example")
source = AjaxDataSource(data_url="http://localhost:4091/tag/foo", polling_interval=1000)
p = figure()
p.line('x', 'y', source=source)
show(p)

Note that the above example assumes you’re using the Firkin service as your data store. If you’re using something else, you will probably need to change the data_url parameter to the AjaxDataSource constructor.

Publishing and updating data

Now we’ll fire up the Firkin server and post some data. From a checkout of the Firkin code, run sbt server/console and then create a connection to the server using the client library:

val client = new com.freevariable.firkin.Client("localhost", 4091)

You can now supply some values for x and y. We’ll start simple:

/* publish an object to http://localhost:4091/tag/foo */
client.publish("foo", """{"x": [1,2,3,4,5,6,7,8,9,10], "y": [2,4,6,8,10,12,14,16,18,20]}""")

If you don’t already have json.html from the previous step loaded in your browser, fire it up. You should see a plot that looks like this:

plot of y=2x for 1..10

If we update the data stored at this URL, the plot will update automatically. Try it out; publish a new data set:

client.publish("foo", """{"x": [1,2,3,4,5,6,7,8,9,10], "y": [2,4,6,8,10,12,14,16,18,30]}""")

The plot will refresh, reflecting the updated y value:

plot of y=2x for 1..9 and y=3x for x=10

(Again, if you’re using some other JSON object cache to store your data sets, the principles will be the same but the syntax won’t be.)

Connecting Bokeh to Spark

Once we have a sink for JSON data, it’s straightforward to publish data to it from Spark code. Here’s a simple example, using json4s to serialize the RDDs:

import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

// assume that spark is a SparkContext object, declared elsewhere
val x = spark.parallelize(1 to 100)
// x: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>

val y = x.map(_ / 4)
// y: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>

val json = ("x" -> x.collect.toList) ~ ("y" -> y.collect.toList)

val client = new com.freevariable.firkin.Client("localhost", 4091)

client.publish("data", compact(render(json)))

After we do this, the data will be available at http://localhost:4091/tag/data.

Alternate approaches

The approach described so far in this post was designed to provide the simplest possible way to make a basic plot backed by a relatively small amount of dynamic data. (If we have data to plot that we can’t store in memory on a single node, we’ll need something more sophisticated.) It is suitable for prototyping or internal use. Other approaches, like the two below, might make more sense for other situations:

  1. Bokeh offers several alternate language bindings, including one for Scala. This means that you can define plots and supply data for them from within applications that link these libraries in. This is the approach that spark-notebook takes, for example.
  2. Another possibility is setting up the Bokeh server and publishing plots and data to it. The Bokeh server offers several additional capabilities, including support for scalable visualizations (e.g., via sampling large datasets) and support for multiple authenticated users.

Since I'm kinda new here, I still spend much of my mental energy thinking about how to FCL, and wrapping my head around this wonderful bazaar that is Fedora. One of the best parts is getting to enjoy "Fedora-Firsts" on a daily-basis.

Today's firsts were my first non-decause specific wiki page, and first time chiming in on the Fedora-Join Trac.

I spend a lot of mental energy thinking about "HowFOSS", but even moreso "WhyFOSS?" So when I got pinged in #fedora-join today (an up-and-coming new contributor onboarding group within Fedora), I was *ECSTATIC* when I found Join-SIG Trac Ticket #10, proposing a "FLOSSophy" contest.

I added my thoughts, which were well received so far, and have created a wikipage here for others who want to chime in too:

This is still a very early-on idea, but it won't be for long, as the Join-SIG would like to propose it to the council in the very near future.

I'm likely on the hook for providing a version of my "WhyWeFOSS" as an example, so stay tuned for that post in the near-ish future.

How to debug weird build issues
When working on a secondary arch Fedora like s390x, we sometimes witness interesting build issues. Like a sudden test failure in e2fsprogs in Rawhide: no issue with the previous build, no issue with the same sources in F-22. So we started to look at what had changed, and one thing in Rawhide was enabling hardened builds globally for all builds. With the hardening disabled the test case passed. That can mean two possible causes: first, the code is somehow bad; second, there is a bug in the compiler. And when a new major gcc version is released we usually find a couple of bugs, sometimes even general ones, not specific to our architecture.

When the issue is in gcc, it often depends on the optimization level, so I tried to switch from the Fedora default -O2 to -O1. And voila, the test passed again. But that is a global option, and we needed to find the piece of code that might be mis-compiled. We call the procedure that follows "bisecting", inspired by bisecting in git as a method to find an offending commit. Here it means limiting the lower optimization level to a specific directory, then to one source file, and then to a single function. It is a time-consuming process and requires modifying compiler flags in the buildsystem, using #pragma GCC optimize("O1") in files, or adding __attribute__((optimize("O1"))) to functions.

In the case of the test in e2fsprogs we were quite sure it would be either the resize2fs binary or the e2fsck binary. In the end we identified 3 functions in the rehash.c source file of e2fsprogs that had to be built with -O1 for the test case to pass. That looked a bit strange to me; usually it is one function that gcc mis-compiles. But from the past I knew another possible cause of interesting failures could be aliasing in combination with wrong code, like here. A quick test build with -fno-strict-aliasing also made the problem go away.
The gcc maintainer then identified some pieces of the code that are clearly not aliasing-safe, and after a short discussion with the e2fsprogs developer we decided to disable strict aliasing for this package as an interim solution, as the code is complex and it will take time to fix it properly. And what's the conclusion? Using non-mainstream architectures helps in discovering bugs in applications. And also in the toolchain, but that will be another story :-)
Mailman 3 is out

….aaaaaaand it happened. The Mailman team just released version 3.0. Three weeks ago, OK I’m late, but that’s nothing on a Mailman-scale ;-)

Here’s the official announcement. You’ll learn there that the first alpha release for 3.0 was published a little over 7 years ago. That’s a “seven”, not a “one”. I won’t paraphrase the announcement, but I’d like to thank all the people who made this release possible over the years. Mailman 3 is a complete rewrite of the popular mailing-list server that is used all over the world to create communities. The new software architecture is really sound and clean, and makes it possible to create great addons. Such as HyperKitty. HyperKitty 1.0 was released at the same time as Mailman 3.0, and is part of the Mailman suite.

People are already starting to install this new version and think about migrating their old list servers. There’s a piece of software called Mailman-bundler to help you set up the whole Mailman suite quickly and test it out, but for a real production server you may want to rely on individual packages and custom configuration.

Come talk to us on IRC (#mailman on freenode) if you have any issue setting it up!

Thanks again to the wonderful Mailman team and all those who contributed over the years. Mailman 3 is going to be legendary :-)

Containers Reloaded

I've been busy lately trying to learn more about Docker. I'm not much of a fan of "application containers" and still prefer a full-blown "distro container" like that provided by LXC (good) or OpenVZ (better)... but I have to admit that the disk image / layering provided by Docker is really the feature everyone loves... which provides almost instantaneous container creation and start-up. If OpenVZ had that, it would be even more awesome.

OpenVZ certainly has done a lot of development over the past couple of years. They realized that simfs just wasn't cutting it and introduced ploop storage... and then made that the default. ploop is great. It provides for instant snapshots, which is really handy for doing zero-downtime backups. I wonder how ploop differs these days from qcow2? I wonder how hard it would be to add disk layering features like Docker's to OpenVZ with ploop snapshots?

Applications Containers In the Beginning

Ok, so Docker has taken off but I really can't figure out why. I mean Red Hat introduced OpenShift some time ago. First it was a service, then a product, and lastly an open source product that you can deploy yourself if you don't need support. A couple of years ago I attended an OpenShift presentation and at that time it provided "Gears" which were basically chrooted processes with a custom SELinux policy applied... and cgroup resource management? Something like that. While (non-OpenVZ) containers don't contain, with the SELinux added, OpenShift gears seemed to be secure enough.

OpenShift offered easy deployment via some git-based scheme (if I remember correctly) and a bunch of pre-packaged stacks, frameworks, and applications called "cartridges", which I see as functionally equivalent to the Docker registry. It didn't have the disk image layering and instant startup of Docker, so I guess that was a minus.

These days I guess OpenShift has shifted, or is shifting, to using Docker.

Docker Crawls Before It Can Walk

Docker started off using aufs but that was an out-of-tree filesystem that isn't going to make it into mainline. Luckily Red Hat helped by adapting Docker to use device-mapper-based container storage... and then btrfs-based container storage was added. What you get as default seems to depend on what distro you install Docker on. Which of the three is performant and which one(s) suck... again that depends on who you talk to and what the host distro is.

Docker started off using LXC. I'm not sure what that means exactly. We all know that LXC stands for "LinuX Containers" but LXC seems to vary greatly depending on what kernel you are running and what distro you are using... and the state of the LXC userland packages. Docker wised up there and decided to take more control (and provide more consistency) and created their own libcontainer.

The default networking of Docker containers seems a bit sloppy. A container gets a private network address (either via DHCP or manually assigned, you pick) and then if you want to expose a service to the outside world you have to map that to a port on the host. That means if you want to run a lot of the same service... you'll be doing so mostly on non-standard ports... or end up setting up a more advanced solution like a load balancer and/or a reverse proxy.

Want to run more than one application / service inside of your Docker container? Good luck. Docker was really designed for a single application and as a result a Docker container doesn't have an init system of its own. Yeah, there are various solutions to this. Write some shell scripts that start up everything you want... which is basically creating your own ghetto init system. That seems so backwards considering the gains that have been made in recent years with the switch to systemd... but people are doing it. There is something called supervisor which I think is a slight step up from a shell script but I don't know much about it. I guess there are also a few other solutions from third-parties.

Due to the complexity of the networking and the single-app design... and given the fact that most web-services these days are really a combination of services that are interconnected, a single Docker container won't get you much. You need to make two or three or more and then link them together. Links can be private between the containers but don't forget to expose to the host the port(s) you need to get your data to the outside world.

While there are ways (hacks?) that make Docker do persistent data (like mapping one or more directories as "volumes" into the container or doing a "commit"), Docker really seems more geared toward non-persistent or stateless use.

Docker Spaghetti

Because of all of these complexities, which I really see as the result of an over-simplified Docker design, there are a ton of third-party solutions. Docker has been trying to solve some of these things themselves too. Some of Docker's newer stuff has been seen by some (for example CoreOS) as a hijacking of the original platform and as a result... additional, currently incompatible container formats and tools have been created. There seems to be a new third-party Docker problem solver start-up appearing weekly. I mean there are a ton of add-ons... and not many of them are designed to work together. It's kind of like Christianity denominations... they mostly believe the same stuff but there are some important things they disagree on. :)

Application Containers Are Real

Ok, so I've vented a little about Docker but I will admit that application containers are useful to certain people... those into "livestock" virtualization rather than "pet" virtualization, aka "fleet computing". Those are the folks running big web-services that need dozens, hundreds or thousands of instances of the same thing serving a large number of clients. I'm just not one of those folks, so I prefer the more traditional full-distro style of containers provided by OpenVZ.

Working On Fedora 22

I've already blogged about working on my own Fedora 22 remix but I've also made a Fedora 22 OpenVZ OS Template that I've submitted to contrib. Yeah, it is pre-release but I'll update it over time... and Fedora 22 is slated for release next week unless there are additional delays.

Like so many OpenVZ OS Templates my contributed Fedora 22 OS Template doesn't have a lot of software installed and is mainly for use as a server. For my own use though I've added to that with the MATE desktop, x2goserver, Firefox, LibreOffice, GIMP, Dia, Inkscape, Scribus, etc. It makes for a pretty handy yet light desktop environment. It was a little tricky to build because adding any desktop environment will drag in NetworkManager which will overpower ye 'ole network service and break networking in the container upon next container start. So while building it "vzctl enter" access from the OpenVZ host node was required. With a handful of systemctl disable / mask commands it was in working order again. Don't forget to change the default target back to multi-user from graphical... and yeah, you can turn off the display manager because you don't need that since x2go is the access method of choice.

BTW, there was a libssh update that broke x2go but they should have that fixed RSN.

Multi-purpose OS Templates

I also decided to play with LXC some on my Fedora 22 physical desktop. I found a libvirt-related recipe for LXC on Fedora. Even though it was a little dated it was very helpful.

The yum-install-in-chroot method of building a container filesystem really didn't work for me. I guess I just didn't have a complete enough package list or maybe a few things have changed since Fedora 20. I decided to re-purpose my Fedora 22 OpenVZ OS Template. I extracted it to a directory and then edited a few network related files (/etc/sysconfig/network, removed /etc/sysconfig/network-scripts/ifcfg-venet*, and added an ifcfg-eth0 file). I also chroot'ed into the directory and set a root password and created a user account that I added to the wheel group for sudo access.

After a minute or so for the minor modifications (and having left the chroot'ed environment) I did the virt-install command to create a libvirt-managed LXC container using the new Fedora 22 directory / filesystem... and bingo bango, that worked. I also added some GUI stuff, and just like with OpenVZ I had to disable NetworkManager or it broke networking in the container. Anyway... running an LXC container is like OpenVZ on a mainline kernel... just without all of the resource management and working security. Baby steps.

Containers Taken Too Far?

While hunting down some videos on Docker I ran into RancherVM. What is that? To quote from their description:

RancherVM is a new open source project from Rancher Labs that makes it simple to run KVM inside a Docker container.

What the heck? Run KVM VMs inside of Docker containers? Why would anyone want to do that? Well, so you can embed KVM VM disk images inside of Docker images... and easily deploy a KVM VM (almost) as easily as a Docker container. That kind of makes my head hurt just thinking about running a Windows 7 Desktop inside of a Docker container... but someone out there is doing that. Yikes!

read more

May 20, 2015

systemtap band-aid for qemu bug "venom" cve-2015-3456

The so-called venom CVE-2015-3456 bug has been public for a little while, so go patch your copy of qemu etc.

What's that? You can't, but are interested in a systemtap band-aid? You've come to the right place. Behold:

global noted%

probe process("/usr/bin/qemu-system-*").function("fdctrl_write") {
  $reg = (($reg & ~7) | 6) # replace register address with 0x__6
  if (!noted[pid()]) {
    noted[pid()] = 1
    printf("defanging fdc writes, pid=%d\n", pid())
  }
}

Running it will affect all current and future qemu-system-* processes (while the systemtap script is alive). You will also need the qemu-debuginfo package installed, so systemtap can resolve the fdctrl_write function and its reg parameter.

# stap -g antivenom.stp
defanging fdc writes, pid=21911

As usual, this band-aid works by altering data of the vulnerable program during its execution, not modifying control flow or program text. It is a blunt instrument - completely disabling the floppy-controller emulation, by redirecting I/O to a port number that the emulator harmlessly rejects.

A more surgical correction, permitting I/O but manually wrapping-around the FIFO pointers (as the upstream qemu fix does) may be possible. One option could be to insert probes to modify the fdctrl->data_pos variable used to calculate indexes into the fdctrl->fifo[] array (and then restore it). This seemed a little more tricky, but if someone needs a functional FDC emulation, and a systemtap-based band-aid, such an approach could probably be made to work. Prototypes welcome!
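As a sketch of that more surgical approach, a probe along these lines might work. Note that the qemu function name fdctrl_write_data and the data_pos/data_len fields are assumptions taken from the direction of the upstream fix, and this fragment is untested:

```systemtap
# Hypothetical, untested sketch: wrap the FIFO position in place
# rather than disabling the controller outright.
probe process("/usr/bin/qemu-system-*").function("fdctrl_write_data") {
  if ($fdctrl->data_pos >= $fdctrl->data_len)
    $fdctrl->data_pos = $fdctrl->data_pos % $fdctrl->data_len
}
```

As with the band-aid above, this would need to run under stap -g (guru mode), since it modifies target variables.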

FUDCon APAC 2015 - 19th May planning meeting minutes
We had 14 members in the meeting. The key points follow:
  • Started with scheduling. Most of the scheduling work has been done on paper. By 22nd May we will have the first draft of the schedule.
  • Travel: the remaining tickets will mostly be booked on Wed, 20th May. Work on the invitation letters is ongoing.
  • Outreach:
    • We voted on the two drafts and decided to choose the dark-background poster. The standee needs further work; we decided to use a more generic FUDCon standee.
    • Social media posts are going well. We need to keep this up. If you have any tweets in mind, feel free to edit the wiki.
    • We will soon contact speakers and ask them to fill in missing details in the portal. We are also asking speakers to blog about their topics, so we can use those posts to spread news about FUDCon.
    • Ticket filed for the T-shirt design.
    • In the first and second weeks of June we will plan F22 release parties in a few colleges and use that opportunity to promote FUDCon. This point is also a reminder regarding the F22 DVDs.
    • Video series: a few clips are ready now; work is going on to compile them into a short video.
  • Website:
    • The redesign is done and now in production. For more info, read the post.
    • We need volunteers to add topics to the "News & Updates" section.
  • SWAG
    • We reviewed a few swag items; we will mostly finalize them in the next meeting. Most of the quotes and ideas are available in
  • Video recording of talks: trying for self-recording with
For more details visit the piratepad page
[OpenMS] 2.0.0 RPMs released [F21 and F22]

OpenMS (follow this link for a quick start guide) is an open-source C++ software library for LC/MS data management and analyses. It offers an infrastructure for the rapid development of mass spectrometry related software. OpenMS is free software available under the three-clause BSD license and runs under Windows, Mac OS X and Linux.

The release 2.0 is now ready to be installed from official Fedora repositories for testing.

Please, if you’re interested, install by YUM or DNF

# yum install openms-tools openms-tutorials openms-data openms openms-doc --enablerepo=updates-testing

and leave positive (or negative) karma; otherwise, write a comment on this post about your experience with it.


For a full list of resolved issues and changed tool parameters please refer to the CHANGELOG.

OpenMS 2.0 is the first release after the switch to git and a complete overhaul of the build system. It introduces a considerable number of new features and bug fixes.
Furthermore, we removed the dependency to GSL and replaced the functionality using Eigen3 and Wildmagic. Thus, the OpenMS core and the full build are now under a more permissive non-GPL (e.g., Apache or BSD) license.

File formats:
– mzQuantML support (experimental)
– mzIdentML support (experimental)
– mzTab support (experimental)
– Indexed mzML support
– Support for numpress encoding in mzML
– Major speed improvement in mzML / mzXML parsing (up to 4x for some setups)

– Support for visualizing mass fingerprinting hits from featureXML along with their raw spectra in MS1
– Improved “Tools” -> “Goto” dialog
– Improved display of m/z, RT, and intensity values in 1D and 2D view

New tools:
– FeatureFinderIdentification — Detects features in MS1 data based on peptide identifications (TOPP)
– FeatureFinderMultiplex — Determination of peak ratios in LC-MS data (TOPP) for e.g. SILAC or Dimethyl labeling
– FidoAdapter — Runs the protein inference engine Fido (TOPP)
– LowMemPeakPickerHiRes — Finds mass spectrometric peaks in profile mass spectra (UTIL)
– LowMemPeakPickerHiRes_RandomAccess — Finds mass spectrometric peaks in profile mass spectra (UTIL)
– MRMTransitionGroupPicker (UTIL)
– MSGFPlusAdapter — MS/MS database search using MS-GF+ (TOPP)
– MetaboliteSpectralMatcher — Find potential HMDB ids within the given mass error window (UTIL)
– OpenSwathWorkflow — Complete workflow to run OpenSWATH (UTIL)
– PeakPickerIterative — Finds mass spectrometric peaks in profile mass spectra (UTIL)
– RTAnnotator — Annotates identification files that are missing the RT field (UTIL)
– SimpleSearchEngine — Annotates MS/MS spectra using SimpleSearchEngine (UTIL)
– TopPerc — Facilitate input to Percolator and reintegrate (UTIL)

Deprecated tools:
– DBExporter — Exports data from an OpenMS database to a file (TOPP)
– DBImporter — Imports data to an OpenMS database (TOPP)
– FeatureFinderRaw — Determination of peak ratios in LC-MS data (TOPP)
– SILACAnalyzer — Determination of peak ratios in LC-MS data (TOPP)

Status changes:
– PhosphoScoring (UTIL -> TOPP)
Tools with major changes:
– OpenSWATH now supports MS1 extraction and labelled workflows
– OpenSWATHWorkflow single binary (high performance integrated workflow)
– IsobaricAnalyzer now supports TMT 10-plex

– Removed GSL dependencies
– Introduced low memory versions of various algorithms
– OpenMS now offers a single interface for different implementations to access mass spectrometric data
– in memory
– on disk with index
– cached on disc for fast access
as well as a chainable, low memory sequential processor of MS data (using a separate interface)
– pyOpenMS now supports python 3.x
– Refactored AASequence, major speed improvement (~40x) for construction of unmodified sequences

Third party software:
– Added Fido support
– Added MS-GF+ support

Changes to the Build System / Package System:
– Restructured repository layout and build system
– Added support for Travis CI
– Simplified pyOpenMS build system
– Support for Visual Studio 2013

How to create a new initial policy using sepolicy-generate tool?

I have a service running without own SELinux domain and I would like to create a new initial policy for it.

How can I create a new initial policy? Is there a tool for it?

We get these questions very often. And my answer is pretty easy. Yes, there is a tool which can help you with this task.

Let’s use a real example to demonstrate how to create your own initial policy for the lttng-sessiond service running on my system.

I see

$ ps -efZ |grep lttng-sessiond
system_u:system_r:unconfined_service_t:s0 root 29186 1 0 12:31 ? 00:00:00 /usr/bin/lttng-sessiond -d

unconfined_service_t tells us the lttng-sessiond service runs without SELinux confinement.

Basically there is no problem with a service running as unconfined_service_t if this service does “everything” or is third-party software. A problem occurs if there are other services with their own SELinux domains that want to access objects created by your service.

Then you can see AVCs like

type=AVC msg=audit(1431724248.950:1003): avc: denied { getattr } for pid=768 comm="systemd-logind" path="/dev/shm/lttng-ust-wait-5" dev="tmpfs" ino=25832 scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file permissive=0

In that case, you want to create an SELinux policy from scratch so that objects created by your service get specific SELinux labeling, and then see if you can get proper SELinux confinement.

Let’s start.

1. You need to identify an executable file which is used to start a service. From

system_u:system_r:unconfined_service_t:s0 root 29186 1 0 12:31 ? 00:00:00 /usr/bin/lttng-sessiond -d

you can see /usr/bin/lttng-sessiond is used. Also

$ grep ExecStart /usr/lib/systemd/system/lttng-sessiond.service
ExecStart=/usr/bin/lttng-sessiond -d

is useful.

2. Run sepolicy generate to create the initial policy files.

sepolicy generate --init -n lttng /usr/bin/lttng-sessiond
Created the following files:
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.te # Type Enforcement file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.if # Interface file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.fc # File Contexts file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng_selinux.spec # Spec file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.sh # Setup Script

3. Run

# sh lttng.sh


# ls -Z /usr/bin/lttng-sessiond
system_u:object_r:lttng_exec_t:s0 /usr/bin/lttng-sessiond
# systemctl restart lttng-sessiond
# ps -eZ |grep lttng-sessiond
system_u:system_r:lttng_t:s0 root 29850 1 0 12:50 ? 00:00:00 /usr/bin/lttng-sessiond -d
# ausearch -m avc -ts recent
... probably you see a lot of AVCs ...

Now you have created and loaded your own initial policy for your service. At this point, you can work on the remaining AVCs, and you can ask us for help with them.
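To give a flavor of that follow-up work: a denial like the tmpfs one shown earlier is typically resolved by giving the service's objects their own types and adding rules to lttng.te. The type name and permission set below are hypothetical, just to illustrate the shape of the rules that audit2allow would suggest:

```
# Hypothetical lttng.te fragment; lttng_tmpfs_t and the exact
# permissions are illustrative, not generated from a real system.
type lttng_tmpfs_t;
files_type(lttng_tmpfs_t)

allow lttng_t lttng_tmpfs_t:file { create open read write getattr unlink };
```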

JSON, Homoiconicity, and Database Access

During a recent review of an internal web application based on the Node.js platform, we discovered that combining JavaScript Object Notation (JSON) and database access (database query generators or object-relational mappers, ORMs) creates interesting security challenges, particularly for JavaScript programming environments.

To see why, we first have to examine traditional SQL injection.

Traditional SQL injection

Most programming languages do not track where strings and numbers come from. Looking at a string object, it is not possible to tell if the object corresponds to a string literal in the source code, or input data which was read from a network socket. Combined with certain programming practices, this lack of discrimination leads to security vulnerabilities. Early web applications relied on string concatenation to construct SQL queries before sending them to the database, using Perl constructs like this to load a row from the users table:

  # WRONG: SQL injection vulnerability
  $rows = $dbh->selectall_arrayref(
      "SELECT * FROM users WHERE users.user = '$user'");

But if the externally supplied value for $user is "'; DROP TABLE users; --", instead of loading the user, the database may end up deleting the users table, due to SQL injection. Here’s the effective SQL statement after expansion of such a value:

  SELECT * FROM users WHERE users.user = ''; DROP TABLE users; --'

Because the provenance of strings is not tracked by the programming environment (as explained above), the SQL database driver only sees the entire query string and cannot easily reject such crafted queries.

Experience showed again and again that simply trying to avoid pasting untrusted data into query strings did not work. Too much data which looks trustworthy at first glance turns out to be under external control. This is why current guidelines recommend employing parametrized queries (sometimes also called prepared statements), where the SQL query string is (usually) a string literal, and the variable parameters are kept separate, combined only in the database driver itself (which has the necessary database-specific knowledge to perform any required quoting of the variables).

Homoiconicity and Query-By-Example

Query-By-Example is a way of constructing database queries based on example values. Consider a web application as an example. It might have a users table, containing columns such as user_id (a serial primary key), name, password (we assume the password is stored in the clear, although this practice is questionable), a flag that indicates if the user is an administrator, a last_login column, and several more.

We could describe a concrete row in the users table like this, using JavaScript Object Notation (JSON):

{
  "user_id": 1,
  "name": "admin",
  "password": "secret",
  "is_admin": true,
  "last_login": 1431519292
}

The query-by-example style of writing database queries takes such a row descriptor, omits some unknown parts, and treats the rest as the column values to match. We could check user name and password during a login operation like this:

{
  "name": "admin",
  "password": "secret"
}

If the database returns a row, we know that the user exists, and that the login attempt has been successful.
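The matching semantics of such a row descriptor can be sketched in a few lines of plain JavaScript (an illustration of the idea only, not Sequelize's actual implementation):

```javascript
// A row matches an example if every key present in the example
// has an equal value in the row; omitted columns are ignored.
function matchesExample(row, example) {
  return Object.keys(example).every(function (key) {
    return row[key] === example[key];
  });
}

var users = [
  { user_id: 1, name: "admin", password: "secret", is_admin: true }
];

var found = users.filter(function (row) {
  return matchesExample(row, { name: "admin", password: "secret" });
});
// found contains the admin row; a wrong password would yield an empty array
```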

But we can do better. With some additional syntax, we can even express query operators. We could select the regular users who have logged in today (“1431475200” refers to midnight UTC, and "$gte" stands for “greater or equal”) with this query:

{
  "last_login": {"$gte": 1431475200},
  "is_admin": false
}

This is in fact the query syntax used by Sequelize, an object-relational mapping (ORM) tool for Node.js.

This achieves homoiconicity: a property of a programming environment where code (here: database queries) and data look very much alike, roughly speaking, and can be manipulated with similar programming language constructs. It is often hailed as a primary design achievement of the programming language Lisp. Homoiconicity makes query construction with the Sequelize toolkit particularly convenient. But it also means that there are no clear boundaries between code and data, similar to the old way of constructing SQL query strings using string concatenation, as explained above.

Getting JSON To The Database

Some server-side programming frameworks, notably Node.js, automatically decode the bodies of POST requests with content type application/json into JavaScript objects. In the case of Node.js, these JSON objects are indistinguishable from other such objects created by the application code. In other words, there is no marker class or other attribute that allows telling objects which come from input apart from objects which were created by (for example) object literals in the source.
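This indistinguishability is easy to demonstrate outside of any web framework (a minimal illustration):

```javascript
// An object decoded from attacker-controlled JSON is structurally
// identical to one written as a literal in the application source.
var fromNetwork = JSON.parse('{"name": {"$gte": ""}}');
var fromSource = { name: { "$gte": "" } };

// Nothing marks fromNetwork as untrusted input.
var indistinguishable =
  JSON.stringify(fromNetwork) === JSON.stringify(fromSource);
```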

Here is a simple example of a hypothetical login request. When Node.js processes the POST request shown below, it assigns a JavaScript object to the req.body field in exactly the same way the application code that follows it does.

POST request:

POST /user/auth HTTP/1.0
Content-Type: application/json

{
  "name": "admin",
  "password": "secret"
}

Application code:

req.body = {
  name: "admin",
  password: "secret"
};
In a Node.js application using Sequelize, the application would first define a model User, and then use it as part of the authentication procedure, in code similar to this (for the sake of this example, we still assume the password is stored in plain text; the reason for that will become clear immediately):

User.findOne({
  where: {
    name: req.body.name,
    password: req.body.password
  }
}).then(function (user) {
  if (user) {
    // We got a user object, which means that login was successful.
  } else {
    // No user object, login failure.
  }
});
The where clause is the query-by-example part.

However, this construction has a security issue which is very difficult to fix. Suppose that the POST request looks like this instead:

POST /user/auth HTTP/1.0
Content-Type: application/json

{
  "name": {"$gte": ""},
  "password": {"$gte": ""}
}

This means that Sequelize will be invoked with this query (the operator objects came straight from the POST request; to the Sequelize code they are indistinguishable from ordinary query syntax):

User.findOne({
  where: {
    name: {"$gte": ""},
    password: {"$gte": ""}
  }
});

Sequelize will translate this into a query similar to this one:

SELECT * FROM users where name >= ''  AND password >= '';

Any string is greater than or equal to the empty string, so this query will find any user in the system, regardless of the user name or password. Unless there are other constraints imposed by the application, this allows an attacker to bypass authentication.

What can be done about this? Unfortunately, not much. Validating POST request contents and checking that all the values passed to database queries are of the expected type (string, number or Boolean) works to mitigate individual injection issues, but the experience with SQL injection issues mentioned at the beginning of this post suggests that this is not likely to work out in practice, particularly in Node.js, where so much data is exposed as JSON objects. Another option would be to break homoiconicity, and mark in the query syntax where the query begins and data ends. Getting this right is a bit tricky. Other Node.js database frameworks do not describe query structure in terms of JSON objects at all; Knex.js and Bookshelf.js are in this category.
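As a sketch of the per-value type check mentioned above (a minimal illustration; a real application would more likely use a schema validator), scalar-only filtering could look like this:

```javascript
// Allow only scalar values (string, number, boolean) into a
// query-by-example clause; reject objects, arrays and undefined.
function assertScalar(value, field) {
  var t = typeof value;
  if (t !== "string" && t !== "number" && t !== "boolean") {
    throw new Error("invalid type for field " + field);
  }
  return value;
}

function safeWhere(body, fields) {
  var where = {};
  fields.forEach(function (f) {
    where[f] = assertScalar(body[f], f);
  });
  return where;
}
// safeWhere({name: {"$gte": ""}}, ["name"]) throws instead of
// letting the operator object reach the query builder.
```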

Due to the prevalence of JSON, such issues are most likely to occur within Node.js applications and frameworks. However, already in July 2014, Kazuho Oku described a JSON injection issue in the SQL::Maker Perl package, discovered by his colleague Toshiharu Sugiyama.

Other fixable issues in Sequelize

Sequelize overloads the findOne method with a convenience feature for primary-key based lookup. This encourages programmers to write code like this:

User.findOne(req.body.user_id).then(function (user) {
  … // Process results.
});

This allows attackers to ship a complete query object (with the “{where: …}” wrapper) in a POST request. Even with strict query-by-example queries, this can be abused to probe the values of normally inaccessible table columns. This can be done efficiently using comparison operators (with one bit leaking per query) and binary search.
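To see why one comparison bit per query suffices, here is the binary-search idea against a simulated oracle; the oracle stands in for a crafted query such as {where: {secret: {"$gte": guess}}} that either returns a row or not (an illustration only, not code run against Sequelize):

```javascript
// oracle(g) answers one yes/no question per "query":
// is the hidden value >= g?
function recoverValue(oracle, lo, hi) {
  var queries = 0;
  while (lo < hi) {
    var mid = Math.floor((lo + hi + 1) / 2);
    queries++;
    if (oracle(mid)) {
      lo = mid;        // hidden >= mid
    } else {
      hi = mid - 1;    // hidden < mid
    }
  }
  return { value: lo, queries: queries };
}

// Example: recover an inaccessible last_login-style column value.
var hidden = 1431519292;
var result = recoverValue(function (g) { return hidden >= g; },
                          0, 2000000000);
// result.value equals hidden after roughly 31 queries
```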

But there is another issue. This construct

User.findOne({
  where: "user_id IN (SELECT user_id " +
    "FROM blocked_users WHERE unblock_time IS NULL)"
}).then(function (user) {
  … // Process results.
});

pastes the string directly into the generated SQL query (here it is used to express something that would be difficult to do directly in Sequelize, say, because the blocked_users table is not modeled). With the “findOne(req.body.user_id)” example above, a POST request such as

POST /user/auth HTTP/1.0
Content-Type: application/json

{"user_id":{"where":"0=1; DROP TABLE users;--"}}

would result in a generated query with the injected parts coming straight from the request:

SELECT * FROM users WHERE 0=1; DROP TABLE users;--;

(This will not work with some databases and database drivers which reject multi-statement queries. In such cases, fairly efficient information leaks can be created with sub-queries and a binary search approach.)

This is not a defect in Sequelize, it is a deliberate feature. Perhaps it would be better if this functionality were not reachable with plain JSON objects. Sequelize already supports marker objects for including literals, and a similar marker object could be used for verbatim SQL.

The Sequelize upstream developers have mitigated the first issue in version 3.0.0. A new method, findById (with an alias, findByPrimary), has been added which queries exclusively by primary keys (“{where: …}” queries are not supported). At the same time, the search-by-primary-key automation has been removed from findOne, forcing applications to choose explicitly between primary key lookup and full JSON-based query expression. This explicit choice means that the second issue (although not completely removed from version 3.0.0) is no longer directly exposed. But as expected, altering the structure of a query by introducing JSON constructs (as with the "$gte" example) is still possible, and to prevent that, applications have to check the JSON values that they put into Sequelize queries.


JSON-based query-by-example expressions can be an intuitive way to write database queries. However, this approach, when taken further and enhanced with operators, can lead to a reemergence of injection issues which are reminiscent of SQL injection, something these tools try to avoid by operating at a higher abstraction level. If you, as an application developer, decide to use such a tool, then you will have to make sure that data passed into queries has been properly sanitized.

Why mock does not work on EL 6 and EL7 and how to fix it

I am receiving lots of complaints that people are unable to do Fedora rawhide builds on EL6 and EL7. This is intentional, and I will show you how to work around it.

At the outset, a brief history lesson: as you know, DNF became the default in Fedora and Yum is obsolete. A few releases from now, Yum will likely be removed from the distribution, certainly within the lifetime of EL6 and EL7. So while we have Yum in all distributions today, we are approaching the moment when we will not have either DNF or Yum available in every distribution that Mock supports. Various stakeholders decided to change Mock's settings to use DNF for building rawhide builds (soon to be F23 builds). However, this introduced a big dilemma: EL6 and EL7 do not have DNF available (well, there is DNF in EPEL7, but not dnf-plugins-core, which is crucial for Mock). There were two obvious options for handling this situation:

  1. Do not allow EL6 and EL7 to build packages for Fedora rawhide (and F23+).
  2. Change Mock's settings so that EL6 and EL7 users continue to use Yum for building Fedora rawhide (and F23+). While this option seems like a good trade-off, there is a big risk of ending up with a different binary compared to the same build run on Fedora. For example, there was an issue where Rubygems required ruby(release), which Yum resolved to the normal ruby but DNF resolved to jruby. And that is a big difference.

I decided to go with the first option (i.e. to not allow builds for Fedora rawhide). However, you have two ways to easily work around it, if you know what you are doing. The important part is that you must make the change manually. This is basically your acknowledgment that you are doing something non-standard.


There are two ways to force Mock to use Yum:

  1. The first option is more suitable for one-shot use: simply pass the --yum option on the command line.
  2. For a persistent change, put this line in /etc/mock/site-defaults.cfg or in any /etc/mock/*.cfg file:

    config_opts['package_manager'] = 'yum'

This is something you may want to do when you are building packages for personal use. However, you should avoid it if you are operating a build system; in that case, migration to a recent Fedora is your only option.

Once again: this is a consequence of the 'DNF as default' feature. We could probably have postponed it by a few months, but then it would have hit us even harder.

Free Software Testing Books

There's a huge list of free books on the topic of software testing. This will definitely be my summer reading list. I hope you find it helpful.

[Event-Report] rootconf-2015

This time I got a chance to attend rootconf (a conference on infrastructure automation and DevOps), which was held in Bangalore. Around 400 professionals registered and attended. In the expo hall there were a lot of different organizations promoting their cloud-infra solutions and tools for system administration and DevOps workflows.

Day 1 started with the conference introduction, followed by 'Rewriting for scale' by Abhishek. I was quite surprised that his team rewrote their complete code base from Ruby to Go to achieve performance and make it monotonous.

Mike talked about, or rather demoed, SaltStack, which is another configuration management tool; he presented the basic concepts of using SaltStack to manage a large number of servers/workstations and how effectively it works. After the tea break Anand talked about '10 reasons why you should prefer PostgreSQL over MySQL'. He pointed out where MySQL fails and PostgreSQL outperforms it.

I went to the expo hall and checked out the solutions offered by different organizations to manage infrastructure. Most folks were discussing Docker and how they are using it right now. After lunch I put some Fedora Workstation DVDs (thanks to Ratandeep) at the BrowserStack booth, which was managed by Aditya. We also put up a notice about the upcoming FUDCon. Some folks asked what FUDCon is and how to register for it.
There were crisp talks, including 'Inframer - Know your infra' by Saurabh, who described the issues with distributed infrastructure and how Inframer can help manage it to a certain extent. Aditya talked about Project Atomic and why it is needed, its various components like OSTree and Cockpit, and briefly explained Kubernetes.

During the infrastructure BoF we discussed Docker and how it can be used, and also why we should always inspect Docker images instead of blindly pulling and running them in production/staging. We also discussed database rollbacks and how we should persist our data stores. We talked about how Docker's usability is at the application level; it is not meant for storing data in containers, because containers are not persistent.

Day 2 started with a summary of day 1, followed by 'Why favor Icinga over Nagios' by Bernd. He talked about what Icinga provides in terms of scaling, configuration and integration that is not easy with Nagios. He also talked about the new features of Icinga Web 2 and gave a brief demo of it. Shanker talked about using Docker for microservices and how RancherOS makes sense for Docker; he gave a quick demo of building out microservices using Rancher. After the tea break I attended 'Managing distributed file systems' by Atin, who talked about the challenges we face when managing different nodes in a distributed system and how to handle them using distributed consistent storage.

After lunch Aditya talked about 'BrowserStack Security Breach, Lessons Learnt'; his talk covered best practices for managing a large infra and the do's and don'ts. During the lightning session I talked about Jenkins Job Builder and how we can manage job configurations with YAML files and store them in SCM like we do with source code (notes). I also announced the upcoming FUDCon APAC event during the lightning talks. Razy talked about SELinux and why it's important.

It was a great experience, and I really liked the BoF session, where participants discussed the real challenges they face and whether someone has better solutions for them. Kudos to the organizing team for keeping every session on time.

May 19, 2015

Is SELinux good anti-venom?
SELinux to the Rescue 

If you have been following the news lately you might have heard of the "Venom" vulnerability.

Researchers found a bug in the QEMU process, which is used to run virtual machines on top of KVM-based Linux machines. Red Hat, CentOS and Fedora systems were potentially vulnerable. Updated packages have been released for all platforms to fix the problem.

But we use SELinux to prevent virtual machines from attacking other virtual machines or the host. SELinux protection for VMs is often called sVirt. We run all virtual machines with the svirt_t type. We also use MCS separation to isolate each VM from other VMs and their images on the system.

While to the best of my knowledge no one has developed an actual exploit to break out of the virtualization layer, I do wonder whether such a breakout would even be allowed by SELinux. SELinux has protections against executable memory, which is usually needed for buffer overflow attacks. These are the execmem, execheap and execstack access controls. There is a decent chance that these would have blocked the attack.

# sesearch -A -s svirt_t -t svirt_t -c process -C
Found 2 semantic av rules:
   allow svirt_t svirt_t : process { fork sigchld sigkill sigstop signull signal getsched setsched getsession getcap getattr setrlimit } ; 
DT allow svirt_t svirt_t : process { execmem execstack } ; [ virt_use_execmem ]

Examining the policy on my Fedora 22 machine, we can look at the types that a svirt_t process would be allowed to write to, provided the files had matching MCS labels, or s0.

# sesearch -A -s svirt_t -c file -p write -C | grep open 
   allow virt_domain qemu_var_run_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_home_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_tmp_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_image_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_tmpfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain virt_cache_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
DT allow virt_domain fusefs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_fusefs ]
DT allow virt_domain cifs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_samba ]
ET allow virt_domain dosfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_usb ]
DT allow virt_domain nfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_nfs ]
ET allow virt_domain usbfs_t : file { ioctl read write getattr lock append open } ; [ virt_use_usb ]

Lines beginning with D are disabled by default, and only enabled by toggling the boolean listed at the end of the rule (lines beginning with E are enabled by default).  I did a video showing the access available to an OpenShift process running as root on your system using the same technology.  Click here to view.
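
For reference, those conditional rules can be inspected and toggled with the standard SELinux boolean tools. This is only a sketch; the boolean names are taken from the sesearch output above, and the commands need root:

```shell
# Show the current state of the conditional virt booleans
getsebool virt_use_nfs virt_use_samba virt_use_execmem

# Persistently (-P) allow VM images on NFS, enabling the
# corresponding conditional rule from the policy
setsebool -P virt_use_nfs on
```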

SELinux also restricts capabilities, so the qemu process, even if running as root, would only have the net_bind_service capability, which allows it to bind to ports below 1024.

# sesearch -A -s svirt_t -c capability -C
Found 1 semantic av rules:
   allow svirt_t svirt_t : capability net_bind_service ; 

Dan Berrange, creator of libvirt, sums it up nicely on the Fedora Devel list:

"While you might be able to crash the QEMU process associated with your own guest, you should not be able to escalate from there to take over the host, nor be able to compromise other guests on the same host. The attacker would need to find a second independent security flaw to let them escape SELinux in some manner, or some way to trick libvirt via its QEMU monitor connection. Nothing is guaranteed 100% foolproof, but in absence of other known bugs, sVirt provides good anti-venom for this flaw IMHO."

Did you setenforce 1?

Getting to know the ip command

With Fedora 22 "almost" released, I must say it feels very polished in every respect. Quite apart from the spins that exist (Workstation, Server, Cloud), I was reminded that, besides the well-worn ifconfig command, we can also use ip addr for the same tasks: setting IPs manually, adding static routes, and more. Here is a bit more detail on what we can do:

ip addr list: list all network interfaces


ip addr list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
   valid_lft forever preferred_lft forever

ip link show: list all network interfaces at layer 2 (the data link layer)


ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff

ip link set eth0 down: bring the network interface down


ip link set eth0 down

eth0:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff

ip link set eth0 up: bring the network interface up


eth0:  mtu 1500 qdisc pfifo_fast state DORMANT qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
    valid_lft forever preferred_lft forever

ip link set dev eth0 promisc on: enable promiscuous mode


ip link set dev eth0 promisc on && ip addr list eth0

eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
    valid_lft forever preferred_lft forever

Note: to turn promiscuous mode off, end the command with off instead; in the same position you can also use multicast, arp, dynamic or allmulti.

ip addr add broadcast dev eth0: set an IP address manually

ip addr del dev eth0: delete an IP address.

ip link set dev eth0 mtu 9000: change the MTU

ip route show: display the routing table


default via dev eth0  proto static
dev eth0  scope link  metric 1000
dev eth0  proto kernel  scope link  src  metric 1

ip route add default via: add a default gateway manually

ip route add via dev eth0: add a new route

ip route del via dev eth0: delete a route


  • You can also create additional routing tables (besides the default one):

echo 7 special >> /etc/iproute2/rt_tables
ip route show table special
ip route add table special default via

  • Prohibit a route (replying with destination unreachable):
    ip route add prohibit
  • Prohibit a route for a specific source:
    ip route add prohibit from
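
Tying the pieces together, a complete manual configuration might look like this. The addresses here are made-up example values (RFC 5737 documentation ranges), so substitute your own; all of these commands need root:

```shell
# Assign an address and bring the interface up (example addresses)
ip addr add 192.0.2.10/24 broadcast 192.0.2.255 dev eth0
ip link set eth0 up

# Add a default gateway and a static route to a second subnet
ip route add default via 192.0.2.1
ip route add 198.51.100.0/24 via 192.0.2.1 dev eth0

# Verify the result
ip addr show eth0
ip route show
```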

Remember that if you want the full manual, you can use “man ip”

I hope you find it helpful…


All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
CentOS Cloud SIG update

For the last few months we have been working on the Cloud Special Interest Group in the CentOS project. The goal of this SIG is to provide the basic guidelines and infrastructure required by FOSS cloud infrastructure projects so that we can build and maintain the packages inside the official CentOS repositories.

We have regular meetings at 1500 UTC every Thursday in the #centos-devel IRC channel. You can find last week’s meeting log here. RDO (OpenStack), OpenNebula and Eucalyptus were the first few projects to come forward and participate in forming the SIG. We also have a good amount of overlap with the Fedora Cloud SIG.

RDO is almost ready to make a formal release of Kilo on CentOS 7; the packages are in the testing phase. The OpenNebula team has started the process of getting the required packages built on CBS.

If you want to help, feel free to join the #centos-devel channel and give us a shout. We need more helping hands to package and maintain the various FOSS cloud platforms.

There are also two GSoC projects under CentOS which are related to the Cloud SIG. The first one is “Cloud in a box”, and the second one is “Lightweight Cloud Instance Contextualization Tool”. Rich Bowen and Haikel Guemar are the respective mentors for these projects.

Minor service disruption
Service 'COPR Build System' now has status: minor: Migrations underway. Services will be restored later today
The new Why and How

We had a big change earlier this week, with the new website going live. This was a major task I was involved in over the last couple of weeks, and also one of the main reasons why we did not have a lot of visible activity on the website. Hopefully you’ll see more action in the coming weeks as we come closer to the big day, now just over a month away.

Why did we do it?

The old website was based on Drupal 6.x with the COD module. Technically this is a supported version of Drupal, but that is a pointless detail because every security or bug-fix update was painful. The primary culprit, it seemed to us, was COD: the 6.x version seemed more or less dead. We stuck with it, however, since the 7.x upgrade looked far more painful than doing these updates and hacking at settings to get things working again.

That was until we decided to add the Speaker bio field to our sessions.

The COD module is very versatile and lets you ask for arbitrary information about a session. However, when you add a field, you can capture data from users but cannot actually show it. The problem seemed to be in the way COD stored its additional data: Drupal seemed unable to query it when displaying the session node and hence would refuse to show all of the additional fields, like FAS username, Twitter handle and speaker bio. Praveen and I hacked at the settings for days and couldn’t get it to work. We went live with the missing speaker bio, which apparently nobody else noticed.

However, when we put out the talk list, the absence of speaker bios was evident, so I decided to take a crack at fixing it in code. I gave up because I was quickly overwhelmed by the Drupal maze of dependencies - I have spent way too long away from the web app world - and decided that I might have an easier time upgrading all of Drupal and COD to 7.x than peering at the Drupal/COD code and then maintaining a patch for it. I also felt that the upgrade would serve us better in the longer run, when we have to use the website to host a future FUDCon - upgrading from 7.x ought to be easier than upgrading from 6.x.

How we did it

I sat back one weekend to upgrade the Drupal instance. The instructions make it sound so easy - retain the sites directory and your modules and change the rest of the code, call the Drupal update.php script and wait for it to do the magic. It is that easy, if your website does not use anything more than the popular modules. With COD, it is basically impossible to go from 6.x to 7.x, especially if you have added custom fields like we did.

Data definitions for COD seemed to have changed completely between 6.x and 7.x, making it near impossible to write a sensible migration script, especially when the migrator (yours truly) has no idea what the schema is. So I went about it the neanderthal way - remove all content, retain all users and then upgrade to Drupal 7.x from COD 6.x. That thankfully worked like a charm. This was a useful first step because it meant that at least we did not have to ask users to sign up again or add hundreds of accounts manually.

Once our user schema was on 7.x, the next task was to get to COD 7.x. This again worked out quite easily since COD did not complain at all. Why would it - there was no conference content to migrate! Creating a new event and basic pages for the event was pretty straightforward and in fact nicer, since the new COD puts conference content in its own namespace. This would mean previously shared links being broken, but I didn’t want to bother with trying to fix that because there were only a few links that had been shared out there. If this is too big a problem, we could write a .htaccess rule to do a redirect.
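
If it ever became necessary, such a redirect could be a short .htaccess rule. This is only a sketch with purely hypothetical old and new paths - the real node IDs and event namespace would differ:

```apache
# Map old Drupal 6 session URLs into the new event namespace
# (hypothetical paths; adjust to the real old/new structure)
RedirectMatch 301 ^/node/(\d+)$ /myevent/node/$1
```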

Adding sessions back was a challenge. It took me a while to figure out all of the data that gets added for each session, and in the end I gave up due to exhaustion. Since there were just about 140 session entries to make, Praveen and I split that work and entered them ourselves. Amita and Suprith then compared the content to verify that it was all the same, and finally Praveen pushed the button to upgrade.

Like everything else, this upgrade taught me a few things. Web apps in general don’t think a lot about backward compatibility, which is probably justified since keeping backward compatibility often results in future designs being constrained - not something a lot of developers are comfortable with. I also had to refresh a lot of my database foo - it’s been more than 6 years since the last time I wrote any serious SQL queries.

The biggest lesson I got though was the realization that I am no longer young enough to pull an all-nighter to do a job and then come back fresh the next day.

GDB Preattach

In Firefox development, it’s normal to do most development tasks via the mach command. Build? Use mach. Update UUIDs? Use mach. Run tests? Use mach. Debug tests? Yes: mach mochitest --debugger gdb.

Now, normally I run gdb inside emacs, of course. But this is hard to do when I’m also using mach to set up the environment and invoke gdb.

This is really an Emacs bug. GUD, the Emacs interface to all kinds of debuggers, is written as its own mode, but there’s no really great reason for this. It would be way cooler to have an adaptive shell mode, where running the debugger in the shell would magically change the shell-ish buffer into a gud-ish buffer. And somebody — probably you! — should work on this.

But anyway this is hard and I am lazy. Well, sort of lazy and when I’m not lazy, also unfocused, since I came up with three other approaches to the basic problem. Trying stuff out and all. And these are even the principled ways, not crazy stuff like screenify.

Oh right, the basic problem.  The basic problem with running gdb from mach is that then you’re just stuck in the terminal. And unless you dig the TUI, which I don’t, terminal gdb is not that great to use.

One of the ideas, in fact the one this post is about, since this post isn’t about the one that I couldn’t get to work, or the one that is also pretty cool but that I’m not ready to talk about, was: hey, can’t I just attach gdb to the test firefox? Well, no, of course not, the test program runs too fast (sometimes) and racing to attach is no fun. What would be great is to be able to pre-attach — tell gdb to attach to the next instance of a given program.

This requires kernel support. Once upon a time there were some gdb and kernel patches (search for “global breakpoints”) to do this, but they were never merged. Though hmm! I can do some fun kernel stuff with SystemTap…

Specifically what I did was write a small SystemTap script to look for a specific exec, then deliver a SIGSTOP to the process. Then the script prints the PID of the process. On the gdb side, there’s a new command written in Python that invokes the SystemTap script, reads the PID, and invokes attach. It’s a bit hacky and a bit weird to use (the SIGSTOP appears in gdb to have been delivered multiple times or something like that). But it works!
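
A minimal sketch of the SystemTap half of that idea follows. This is illustrative only, not the exact script from the repository: the probe point and the signal-delivery helper are assumptions, and guru mode (stap -g) is required for the embedded C:

```systemtap
#!/usr/bin/stap
# preattach.stp -- watch for an exec of the program named in the first
# command-line argument, stop it, and print its PID for gdb to attach to.

function stop_current() %{
    /* deliver SIGSTOP to the task that just exec'd (guru-mode embedded C) */
    send_sig(SIGSTOP, current, 1);
%}

probe kprocess.exec_complete {
    if (execname() == @1) {
        stop_current()
        printf("%d\n", pid())
        exit()
    }
}
```

Run as root with something like stap -g preattach.stp firefox, then issue attach with the printed PID from gdb; the gdb-side Python command described above just automates that handshake.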

It would be better to have this functionality directly in the kernel. Somebody — probably you! — should write this. But meanwhile my hack is available, along with a few other gdb scripts, in my gdb helpers github repository.

Firefox 38 available now in the Fedora repositories

Mozilla released version 38 of the Firefox web browser last week, and the updated version is available now in the Fedora repositories for Fedora 21 and for users running Fedora 22 pre-release versions. As has been the case since Firefox started rapidly releasing new versions every six weeks or so, there are a handful of shiny new features and many, many bugfixes.

Two notable new features in the Firefox 38 release are tabbed preferences and better high-DPI support.

Tabbed Preferences

In previous versions, the Firefox preferences were contained in a pop-up dialog box. In version 38, the preferences are now moved into a dedicated tab, much like the add-ons tab that has been in Firefox for many releases:


High DPI support

Previously, users running Firefox on high-DPI screens on Fedora had to tweak a setting in about:config to get Firefox to play nicely with their screens. Now, if the layout.css.devPixelsPerPx value in about:config is set to the default of -1.0, Firefox renders pages and its chrome at the DPI setting of the desktop. Note that if you previously followed our instructions for manually setting this value, you may want to reset it to the default by finding it again in about:config, right-clicking on it, and choosing Reset.
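
For those who keep their Firefox settings in a profile file rather than clicking through about:config, the same preference can be pinned from user.js in the profile directory. This fragment simply restores the default value; the profile path varies per user:

```javascript
// user.js -- -1.0 tells Firefox 38 to follow the desktop's DPI setting
user_pref("layout.css.devPixelsPerPx", "-1.0");
```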



May 18, 2015

OpenCL - A small glance at where we stand and what we achieved. (And ocl-icd)


More than a year ago there was a change - a change for Fedora 22: the introduction of OpenCL. The idea was to get OpenCL somewhat usable out of the box in Fedora, to enable people to use it, to get more people testing it, and ultimately to find more bugs and raise demand. With a lot of help from others, especially Björn, the change made it into Fedora 21.

About a year later, a couple of bugs were found, and some OpenCL-related packages got co-maintainers and more or less regular updates.

One bug, though, in ocl-icd (an OpenCL ICD loader) gained some attention lately, because using ocl-icd led to an infinite loop.

After some discussion on the bug, two of us reached out to upstream. Within hours we got feedback, and even better, the bug got (likely) fixed. That is nice, and that is the power of open source: the power of sharing, caring, and working together. This is especially remarkable because the ocl-icd maintainers are not active in the Fedora community, yet were responsive and had open ears to fix the bug. That is not always the case, so it is nice that it worked out so well here.

Let’s move on to something else after this outburst: OpenCL is used much more in Fedora than it was a year ago. While trying to identify the packages that might need a rebuild because of this ocl-icd change, I saw that we have quite a few packages which now require ocl-icd (currently the only ICD loader in Fedora).

$ repoquery --whatrequires ocl-icd --qf "%{name}" | sort -u

Nice to see that interest is higher than before. But we also see that OpenCL still needs much love to make it really usable. Work is being done everywhere, though, and the implementations in particular are progressing slowly but steadily.