July 23, 2014

Assigning IP addresses to Docker containers via DHCP

In my last blog post I explained how to run a Docker installation across multiple hosts. In the comments I was asked if it would be possible to use a DHCP server to assign IPs to the containers. I thought immediately — why not? In fact, assigning IPs using DHCP is a nice way to overcome the IP assignment issue I talked about before.

Set up

The set up is pretty similar to my earlier work.

The difference is that I want to run a DHCP server on each of the hosts. It will be responsible for assigning the IPs to all of the containers connected to the docker0 bridge on that host.

You may ask why I want to run so many DHCP servers. Of course one server would do the job of assigning the IPs (in fact, this was my initial set-up). Then I realized that it would be easier to manage routing of the containers' traffic if we have a DHCP server on every host.

Since we want the containers to run on the same network, there will be many DHCP servers working on that network. The trick is to reserve a bunch of IPs for each host so as not to interfere with other hosts (in my case it's 10 addresses per host) and to drop the DHCP requests on each br0 Open vSwitch bridge.
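As a concrete sketch of that partitioning (my own illustration, not part of the scripts): assuming a /24 network where the first few addresses are reserved for the hosts, each host's 10-address block can be computed from its index:

```shell
# Illustration only: compute the block of last octets handed to host N,
# assuming 10 addresses per host and the first 9 reserved for the hosts.
host_range() {
  local n=$1
  local start=$(( 10 + (n - 1) * 10 ))   # host1 starts at .10
  local end=$(( start + 9 ))
  echo "${start}-${end}"
}

host_range 1   # prints 10-19
host_range 2   # prints 20-29
```

With non-overlapping blocks like these, the DHCP servers can coexist on one network without ever handing out the same address.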

New scripts

Additionally I rewrote my earlier script that prepares the virtual machines for me. I decided to drop the use of cloud-init entirely in favor of libguestfs — a great Swiss Army knife written by Rich. If you’re not familiar with it — it’s a tool for offline manipulation of disk images. And it does the job damn well.

You can install libguestfs tools on your system by running this command:

yum -y install libguestfs-tools

Install libguestfs-tools before you use my scripts.

The first run of guestfish can take a bit, since it creates the supermin appliance. Also, if you run it in a virtualized environment it will perform a bit worse compared to bare metal.
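If you haven't used libguestfs before, here is a minimal guestfish session sketch (the image name and file content are just examples, not something my scripts do verbatim) showing the kind of offline manipulation involved:

```shell
# Hypothetical example: open a disk image offline and modify a file in it.
# -a attaches the image, -i inspects and auto-mounts the guest filesystems.
guestfish -a image.qcow2 -i <<'EOF'
write /etc/hostname "host1\n"
EOF
```

No running VM is needed; guestfish works directly on the disk image.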

The new scripts are available in the docker-dhcp repository on GitHub.

The logic of preparing the image and registering it as a libvirt domain was split into two scripts: prepare-image.sh and register-domain.sh.

The split makes it possible to create the image once and then create as many domains as you want in seconds.
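A typical session therefore looks like this:

```shell
# Prepare the base image once (takes a few minutes)...
./prepare-image.sh Fedora-x86_64-20-20131211.1-sda.qcow2
# ...then stamp out domains from it in seconds.
./register-domain.sh image.qcow2 host1
./register-domain.sh image.qcow2 host2
```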

I placed a lot of comments in these scripts, so I hope everything is self-explanatory. Let me know if that's not the case!

Base image

To create a VM with everything preinstalled, the only thing you need to do is download the QCOW2 image from cloud.fedoraproject.org and run the script with the cloud image as the parameter, like this:

$ ./prepare-image.sh Fedora-x86_64-20-20131211.1-sda.qcow2
      Wed, 29 Jan 2014 12:20:57 +0100 Cleaniung up...
      Wed, 29 Jan 2014 12:20:58 +0100 Modifying the image...
      Wed, 29 Jan 2014 12:25:15 +0100 Resizing the disk...
      Wed, 29 Jan 2014 12:26:06 +0100 Image 'image.qcow2' created!
Executing the above script can take some time; it took about 5 minutes for me to prepare the image. Please keep in mind that it performs a full system update and installs some additional software as well.

This script injects the network.sh script which will be used later to configure the network inside the hosts.

Host domains

Now we have the base image prepared: image.qcow2. Time to make use of it and register two domains based on it. For this purpose I use the register-domain.sh script:

$ ./register-domain.sh image.qcow2 host1
      Wed, 29 Jan 2014 13:52:51 +0100 Installing the domain and adjusting the configuration...
      Wed, 29 Jan 2014 13:52:51 +0100 Domain host1 registered!
      Wed, 29 Jan 2014 13:52:51 +0100 Launching the host1 domain...
      Wed, 29 Jan 2014 13:52:52 +0100 Domain started, waiting for the IP...
      Wed, 29 Jan 2014 13:53:30 +0100 You can ssh to the host using 'fedora' username and 'fedora' password or use the 'virsh console host1' command to directly attach to the console

You can log in using the fedora / fedora credentials.

If you don’t want to run the domain immediately after creation — use the RUN_AFTER environment variable and set it to false.

Run the register-domain.sh script twice with host1 and host2 as the arguments.

DHCP configuration explained

The DHCP server runs on every host and listens for requests only on the docker0 interface, since it's configured to serve that network only. The first 9 IP addresses of the network are reserved (they can be assigned manually to additional hosts); the rest are available to the DHCP clients.

Every DHCP server is responsible for only a part of the network: the server on host1 assigns addresses from its own 10-address block, whereas host2 assigns addresses from the next one.

This is just an example — you can expand the default values to run more than 10 containers on one host.

The host1 docker0 network interface will have its own address assigned, host2 the next one, and so on.
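The repository's scripts set up the DHCP server for you; purely as an illustration (I'm using dnsmasq syntax here, which may differ from what the scripts actually install), a per-host server confined to docker0 and to that host's block could look like:

```shell
# Hypothetical sketch, dnsmasq flavour: serve DHCP only on docker0 and
# only hand out this host's 10-address block (values in brackets are
# placeholders for your network's addresses).
dnsmasq --interface=docker0 --bind-interfaces \
        --dhcp-range=<first address of this host's block>,<last address>,12h
```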

Make it work!

I assume that we have already started host1 and host2 as explained above.


Now it's time to prepare the networking on both hosts. Log in to both hosts and note the IP addresses of the eth0 network interfaces. Then run the network.sh script (it's located in the /home/fedora directory) on both hosts.

I assume you have the eth0 IP addresses of host1 and host2 at hand.

On host1 run the script with the IP of host2:

$ sudo ./network.sh 1

And do the opposite on host2:

$ sudo ./network.sh 2
When you run the network.sh script for the first time, you may see messages similar to bridge docker0 doesn't exist — don’t worry, this is normal.

The GRE tunnel should now be established and a DHCP server should be running on host1. You can confirm this by pinging the docker0 bridge addresses on each host.


There is one requirement for the container image: it needs to have a DHCP client installed. Sadly the default fedora image does not have the dhclient package installed. To make things easy I prepared the goldmann/fedora-dhcp image. The only difference from the fedora image is the addition of dhclient.

Download this image on both hosts:

docker pull goldmann/fedora-dhcp

If you run the goldmann/fedora-dhcp image you'll see that there are no network interfaces besides the loopback. This is because Docker is run with the -b=none flag and does not know about any network interface to bind to, so it does not create the ethernet adapter in the container.

But we still want to have networking. The only option at the moment is to use the -lxc-conf flag when running the image, like this:

docker run -i -t \
      -lxc-conf="lxc.network.type = veth" \
      -lxc-conf="lxc.network.link = docker0" \
      -lxc-conf="lxc.network.flags = up" \
      goldmann/fedora-dhcp /bin/bash

This will start a new container with a virtual ethernet adapter which is attached to the docker0 bridge. Sweet!

Obtaining the IP address

Since the Docker container does not run anything besides the command you specify (in our case /bin/bash), it does not run the scripts that configure the network either. We need to do it by hand.

I hope this will change in the near future. One option is to make systemd run well in the Docker containers.

After you get the prompt from the container, you can simply run the dhclient command. This will obtain the address from the DHCP server, exit and leave a shell just for you.

bash-4.2# ip a s dev eth0
      17: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 1e:d4:13:c7:9d:fd brd ff:ff:ff:ff:ff:ff
          inet6 fe80::1cd4:13ff:fec7:9dfd/64 scope link
             valid_lft forever preferred_lft forever
      bash-4.2# dhclient
      bash-4.2# ip a s dev eth0
      17: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 1e:d4:13:c7:9d:fd brd ff:ff:ff:ff:ff:ff
          inet brd scope global dynamic eth0
             valid_lft 43197sec preferred_lft 43197sec
          inet6 fe80::1cd4:13ff:fec7:9dfd/64 scope link
             valid_lft forever preferred_lft forever
      bash-4.2# ping -c 1 google.com
      PING google.com ( 56(84) bytes of data.
      64 bytes from ee-in-f139.1e100.net ( icmp_seq=1 ttl=39 time=55.6 ms
      --- google.com ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 55.672/55.672/55.672/0.000 ms
You can also use the /etc/init.d/network restart command to obtain the IP.

You can (should!) try it on host1 and host2. You should get the same result with no IP conflict and be able to access the Internet as well as other containers on the network.


Things to improve

There are of course some things that could be improved to make this setup easier.

  1. Make systemd available in the container — this would boot the networking and get the IP address automatically for us.

  2. Stop Docker from (blindly) assigning IP addresses when we specify the -b=BRIDGE flag. Docker currently assumes that it manages the container network and nothing else is allowed to do so. I hope this will change in the future.

Connecting Docker containers on multiple hosts

In my previous blog post I talked about running the Fedora Cloud images on a local KVM with libvirt. This was not a standalone task, but rather the preparation for this blog post: running Docker containers on multiple hosts attached to the same network.

I was asked in the comments on my WildFly cluster based on Docker blog post if it would be possible to run a cluster on multiple hosts. I found a very nice tutorial written by Franck Besnard. I’ve decided to set up a similar environment on my own to see how/if it works.

I made a few changes to Franck’s set up:

  1. I’m not using the pipework script to minimize the dependencies.

  2. I wanted to make the launching of the containers as simple as possible, so I dropped the use of the ovswork.sh script Franck crafted and I only use the docker run command.

  3. I’m not creating a virtual ethernet device for each container — instead I’m attaching all containers to a bridge.

  4. I'm not using VLANs (yet).

Set up


I used two VMs (host1 and host2), each with Fedora 20 as the operating system. You can use the script I described in my previous post to create them. Later you can use the virsh start host1 command to run them.


On both hosts I’ve installed Docker:

yum -y install docker-io

The Docker configuration requires some changes.

By default Docker chooses a (more or less) random network to run the containers on. It then creates a bridge and assigns an address to it. This is not really what we want, because we need static address assignment, so we prepare our own bridge and disable the one managed by Docker.

Copy the /usr/lib/systemd/system/docker.service file to /etc/systemd/system/docker.service and change it as follows to disable the default docker0 bridge creation on Docker startup. The empty ExecStart= line resets the value inherited through .include; without it systemd refuses to start a unit with two ExecStart entries.

.include /usr/lib/systemd/system/docker.service
      ExecStart=
      ExecStart=/usr/bin/docker -d -b=none

You can start Docker with systemctl start docker.

Every time you modify a systemd service file do not forget to run systemctl daemon-reload to apply your changes.


This is the interesting part :)

Open vSwitch

To make networking easy I used Open vSwitch. I'm very new to it, but its flexibility and ease of use are just impressive. I haven't done any performance testing, though. Maybe some day.

You can install Open vSwitch on Fedora by running this command:

yum -y install openvswitch

Network configuration

The script below prepares the networking for you. You can execute it on both hosts by adjusting the REMOTE_IP and BRIDGE_ADDRESS variables. The BRIDGE_NAME can be the same on both hosts.

# The 'other' host
      REMOTE_IP=<IP of the other host>
      # Name of the bridge
      BRIDGE_NAME=docker0
      # Bridge address
      BRIDGE_ADDRESS=<address for this host's docker0 bridge, with network prefix>
      # Deactivate the docker0 bridge
      ip link set $BRIDGE_NAME down
      # Remove the docker0 bridge
      brctl delbr $BRIDGE_NAME
      # Delete the Open vSwitch bridge
      ovs-vsctl del-br br0
      # Add the docker0 bridge
      brctl addbr $BRIDGE_NAME
      # Set up the IP for the docker0 bridge
      ip a add $BRIDGE_ADDRESS dev $BRIDGE_NAME
      # Activate the bridge
      ip link set $BRIDGE_NAME up
      # Add the br0 Open vSwitch bridge
      ovs-vsctl add-br br0
      # Create the tunnel to the other host and attach it to the
      # br0 bridge
      ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=$REMOTE_IP
      # Attach the br0 bridge to the docker0 bridge
      brctl addif $BRIDGE_NAME br0
      # Some useful commands to confirm the settings:
      # ip a s
      # ip r s
      # ovs-vsctl show
      # brctl show

After executing these commands on both hosts you should be able to ping the docker0 bridge addresses from both hosts.

Here is an example from host2:

$ ping
      PING ( 56(84) bytes of data.
      64 bytes from icmp_seq=1 ttl=64 time=2.16 ms
      64 bytes from icmp_seq=2 ttl=64 time=0.628 ms
      --- ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1001ms
      rtt min/avg/max/mdev = 0.628/1.396/2.165/0.769 ms

Networking explained

The above script has some useful comments that help to understand what it’s doing, but here’s a high level view on the networking part.

  1. Every container run with Docker is attached to docker0 bridge. This is a regular bridge you can create on every Linux system, without the need for Open vSwitch.

  2. The docker0 bridge is attached to another bridge: br0. This time it’s an Open vSwitch bridge. This means that all traffic between containers is routed through br0 too. You can think about two switches connected to each other.

  3. Additionally we need to connect together the networks from both hosts in which the containers are running. A GRE tunnel is used for this purpose. This tunnel is attached to the br0 Open vSwitch bridge and as a result to docker0 too.

The issue: IP assignment

While creating this environment I found a problem.

Docker assumes that it's managing the network where the containers run. It does not expect any hosts on that network besides the ones it starts. This works well in a typical environment (and definitely makes the code easier), but if we're going to spread across multiple hosts, it can cause some headaches.

Docker address assignment method

The way Docker assigns IP addresses to containers is very simple: it tries to assign the first unused address. Sounds valid, right? But it depends on how you define unused. When Docker starts a container, the assigned IP is added to a list of used IPs maintained by the Docker daemon. An unused IP, in Docker's case, is simply one not found on that list.

This can be problematic, though. If you run something manually on that network and assign an IP to it, Docker will not be able to detect that, and it may blindly assign the same IP again, causing a conflict.
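The bookkeeping described above can be sketched in a few lines of shell (my own illustration, not Docker's actual code): pick the first candidate absent from the daemon's in-memory list, with no probe of the real network, which is exactly why conflicts can arise.

```shell
# Illustration only (not Docker's actual code): return the first last-octet
# that is absent from the daemon's list of already-assigned ones.
next_free() {
  local used=" $* "            # the daemon's in-memory "used IPs" list
  local i
  for i in $(seq 2 254); do
    case "$used" in
      *" $i "*) ;;             # already handed out, keep looking
      *) echo "$i"; return 0 ;;
    esac
  done
  return 1                     # network exhausted
}

next_free 2 3 5   # prints 4: the first octet missing from the list
```

An address assigned by hand outside the daemon never enters this list, so the sketch (like Docker) would happily hand it out again.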


Over the weekend I was thinking about some solutions, and I ended up with two:

  1. Obvious one: change the Docker code to find out if the address is really free.

  2. Manually assign IPs to the containers when running them.

Both have pros and cons. There may be other solutions too. Feel free to drop a comment if you find one.

Option 1: Modifying Docker

The first idea involves patching Docker. We need to make it aware of the hosts running on the network. From the beginning I was focused on using the ARP protocol.

I was trying to use the host ARP cache table for the interface bound to Docker (by default it’s docker0), but I found that:

  1. Containers do not advertise themselves on startup, and

  2. Even if we advertise manually (using a gratuitous ARP message), the ARP table is not reliable enough, since entries are removed after some time if there is no communication between the two hosts.

Fedora drops broadcast ARP messages by default. You can change this by running echo 1 > /proc/sys/net/ipv4/conf/<device>/arp_accept. Read more in the Linux kernel documentation (search for arp_accept).

But the good news is that we can still find out if a selected IP is in use by using the arping utility, and this is what I used.
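A sketch of that check (my own, not the actual patch; it needs root, and CANDIDATE_IP is a placeholder): iputils arping in duplicate-address-detection mode exits with 0 when no reply was received, i.e. the address looks free.

```shell
# Probe whether CANDIDATE_IP already answers ARP on docker0 before using it.
# -D: duplicate address detection, -c 2: two probes; exit 0 = no reply seen.
if arping -D -c 2 -I docker0 "$CANDIDATE_IP" > /dev/null; then
  echo "free"
else
  echo "in use"
fi
```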

I prepared a very ugly patch for Docker 0.7.6 which adds an additional check if the IP we’re trying to use is actually free.

In my testing I found that using arping is pretty reliable — the hosts were discovered properly and it didn’t take too long to find a free IP.

I built an RPM with this patch for Fedora 20, you can download it from here, if you want to give it a try.

After installing the patched Docker you should be able to run containers just like you’re used to:

docker run -i -t centos:latest /bin/bash

Option 2: Manual address assignment

Sometimes patching Docker is not an option.

This is where assigning IP addresses manually makes sense. Since Docker does not expose the ability to assign a selected IP directly to the docker run command — we need to do this in two steps:

  1. Disable the automatic network configuration in Docker by specifying -n=false,

  2. Configure the networking through the LXC configuration, using the -lxc-conf flag.


This is how it could be done:

docker run \
      -n=false \
      -lxc-conf="lxc.network.type = veth" \
      -lxc-conf="lxc.network.ipv4 =" \
      -lxc-conf="lxc.network.ipv4.gateway =" \
      -lxc-conf="lxc.network.link = docker0" \
      -lxc-conf="lxc.network.name = eth0" \
      -lxc-conf="lxc.network.flags = up" \
      -i -t centos:latest /bin/bash

This will run a CentOS container with networking set up as follows:

  • Create a virtual ethernet interface

  • Attach this interface to the docker0 bridge

  • Expose it in the container as eth0

  • Assign the specified IP address to the interface

  • Set the default gateway for the container

If you want to run multiple containers on one host, the only thing you’ll change is the IP address — everything else can be left as-is.
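For example, launching a few containers this way could be scripted as follows (ADDR_PREFIX and GATEWAY are placeholders for your network's values, not something from the scripts above):

```shell
# Hypothetical helper: start several containers with consecutive manual IPs.
ADDR_PREFIX=<first three octets of your container network>
GATEWAY=<address of the docker0 bridge>

for i in 10 11 12; do
  docker run -d -n=false \
    -lxc-conf="lxc.network.type = veth" \
    -lxc-conf="lxc.network.ipv4 = ${ADDR_PREFIX}.${i}/24" \
    -lxc-conf="lxc.network.ipv4.gateway = ${GATEWAY}" \
    -lxc-conf="lxc.network.link = docker0" \
    -lxc-conf="lxc.network.name = eth0" \
    -lxc-conf="lxc.network.flags = up" \
    centos:latest /bin/sleep 3600
done
```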

Expected result

If you followed the tutorial (no matter which option you chose), you should be able to run containers on both hosts. The containers should be attached to the same network and able to ping each other. Additionally, no IP address conflicts should happen.



If you encounter problems, check the configuration.

  • Make sure the brctl show command outputs similar content:

bridge name bridge id   STP enabled interfaces
      docker0   8000.7a7c5f332842 no    br0
  • Make sure the ovs-vsctl show command outputs similar content:

          Bridge "br0"
              Port "br0"
                  Interface "br0"
                      type: internal
              Port "gre0"
                  Interface "gre0"
                      type: gre
                      options: {remote_ip=""}
          ovs_version: "2.0.0"
  • Make sure you can ping host1 from host2 and vice-versa.

  • Make sure you can ping the docker0 interface running on host1 from host2 and vice-versa.


It’s possible to run Docker containers on different hosts that share the same network.

It’s even pretty simple. But like always — it could be better: Docker should make it possible without any workarounds.

One idea would be to implement the ARP requests directly in Go and drop the use of arping.

The other idea is to expose the network settings for the containers to the docker run call. I’m thinking here about the -i (IP with network prefix) and -g (gateway) options forwarded to dockerinit when launching a container.

Whoah, you’re still reading this? Not bad.


Running Fedora Cloud images on KVM

Inspired by Matt's blog post I've decided to make something similar that better fits my use case. I plan to do some clustering and routing testing, and therefore I need similar VMs running side-by-side. These images will be short-lived, and I would like to automate as many things as I can.

Additionally, I found out that the images shipped by Fedora require some changes to work well in my case, for example the default 2GB disk is too small.


You can see my script here.

My script creates a new domain with the desired name using the cloud disk image. I use a QCOW2 image so as not to eat all the free space on my SSD. The disk can optionally be grown to the desired size. The whole install is done without VNC, using the text method. The domain is optionally run after the setup completes.

I use the cloud-init script only for the initial setup (setting the password); afterwards it is removed from the image. The same happens to the ISO used for the cloud-init configuration: it's ejected and then deleted from disk.

Sample run

$ virt-install-fedora host1
      Thu, 16 Jan 2014 14:28:58 +0100 Destroying the host1 domain...
      Thu, 16 Jan 2014 14:28:59 +0100 Generating ISO for cloud-init...
      Thu, 16 Jan 2014 14:28:59 +0100 Installing the domain and adjusting the configuration...
      Thu, 16 Jan 2014 14:29:23 +0100 Cleaning up cloud-init...
      Thu, 16 Jan 2014 14:29:23 +0100 Resizing the disk...
      Thu, 16 Jan 2014 14:30:15 +0100 Launching the host1 domain...
      Thu, 16 Jan 2014 14:30:15 +0100 DONE, ssh to the host using 'fedora' username and 'fedora' password

After about a minute I have a registered domain with a Fedora 20 disk image on a 20GB QCOW2 disk. Pretty good. And I can run the command as many times as I want to deploy new VMs.

Along with the disk image — the script creates a log file (in my case host1/host1.log) in which you can see the messages from the commands that were run during the installation.

Wrong battery estimate in Fedora on Lenovo ThinkPad 430s

I had a power management issue when running Fedora 19 (and later 20) on my Lenovo ThinkPad 430s. When the battery was low, the laptop just powered itself off, without a clean shutdown or even a warning. As you may guess, this wasn't something I was happy about.


I discovered that it happens when the battery is still at 15%. At that point the system was still reporting about 40 minutes of remaining time.

By default Fedora uses a time-based policy for battery power management. In almost all cases this is a good choice, but this time it won't fly.


Instead of fixing the issue properly (finding the source of the wrong estimate or the wrong battery reading), I found a workaround which works very well and does not involve any magic: I disabled the time-based policy and switched to the percentage-based one.

gsettings set org.gnome.settings-daemon.plugins.power use-time-for-policy false

The other thing we need to do is adjust when the warning will appear and what action should be executed when the battery level is critical.

gsettings set org.gnome.settings-daemon.plugins.power percentage-critical 25
      gsettings set org.gnome.settings-daemon.plugins.power percentage-low 30
      gsettings set org.gnome.settings-daemon.plugins.power percentage-action 20
      gsettings set org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend'

The system will warn me when the battery level is at 30%, another warning will appear at 25%, and the action will be executed at 20%. What action? I chose suspend, because in almost all cases I have a power plug somewhere nearby, so I just need to plug in and I can work again almost immediately.

You can check the values for all power management settings by using this command:

gsettings list-recursively org.gnome.settings-daemon.plugins.power

If you prefer graphical tools, you can use dconf-editor to edit these values. The GUI additionally shows a description for each key and the values available to set.

Even more Docker — Fedora news

It has been a while since my last blog post about recent Docker changes in Fedora. We’ve done some significant work over the last three weeks. Now it’s time to wrap it up.

Docker is available in the repositories

Now you can enjoy Docker on your Fedora system by executing just one command:

yum install docker-io

If you're on RHEL/CentOS you still need to enable the epel-testing repository. This shouldn't be necessary once the lxc package is available in stable.

yum install docker-io --enablerepo epel-testing

We also created systemd (or init.d — for EPEL) scripts for you so you can easily manage the Docker service.

To make it all happen, in the last 3 weeks we have closed 19 bugs.


Please report all issues regarding the docker-io package in Red Hat Bugzilla in the correct place:

Docker Registry is packaged

This is new: some time ago I opened a review request for the docker-registry package. It has been finished and I was able to submit an update yesterday. Today I’ve added support for EPEL 6 and submitted a new update.

Over the next few hours the docker-registry package will be pushed to updates-testing (or epel-testing for EPEL) so you’ll be able to play with it.

In Fedora:

yum install docker-registry --enablerepo updates-testing

In EPEL:
yum install docker-registry --enablerepo epel-testing


And again, please report all issues in Red Hat Bugzilla in the correct place:

What’s next

The major packaging work is done. Now it is your turn — please install these packages, use them, and report all issues in the places mentioned above. We’ll make sure they are fixed.

Recent Docker changes in Fedora

Lokesh, the docker-io package maintainer at Fedora, does a damn good job at keeping it up to date. I think it's a good time to see what changed over the last weeks in the Docker-Fedora world.

Why is Docker still not available in Fedora?

Simple answer

There are some technical issues to overcome :)

Longer answer

The most important blocker is the lack of AUFS support in the kernel (upstream, as well as Fedora's and RHEL's). Alex Larsson is working on a replacement for AUFS based on devicemapper. You can read more about this work in a nice blog post where Alex covers this area.

If you're interested in the actual code, please look at the dm branch in the official Docker GitHub repo.

Not yet in the official repos

At least not for Fedora 19 and 20. In Rawhide we do have a docker-io package available, but Rawhide is intended for testing, and that is what we're doing now with the docker-io package. Additionally, we agreed with upstream not to push Docker to the Fedora repos before the 0.7.0 release. This is closely related to Alex's work on devicemapper. We do not want to ship half-baked solutions.

Please be patient.

Updated repo
Today I've updated my repository. It contains the latest 0.7-0.13.dm version of the docker-io package.

But if you really want to get your hands dirty with the latest version, use my repository. You can of course help by testing and reporting any bugs you see; we appreciate it.

Only 64 bit

This is not Fedora's choice; it's forced by upstream. Docker currently works only on 64-bit architectures, so there won't be any builds for other archs (including i386 and ARM), at least for now. I'm not sure where the root of this problem lies, but I'll do some research over the next few days.

This is quite a big issue, since it technically prevents us from adding Docker to Fedora: all packages should be available on all supported architectures.

I hope that should is the key word here.


Some other important changes made from 0.6.3-2.devicemapper to 0.7-0.13.dm:

  1. Networking issues when accessing servers from an instance are now resolved. Appropriate iptables rules are now created when starting the docker service.

  2. The docker -v command now prints correct version information.

  3. If you're using zsh, you now have completion enabled!

WildFly cluster using Docker on Fedora

In my previous blog post I introduced Docker and its Fedora integration. Now it's time to do some serious (read: useful) stuff.

A few years ago I started a project called CirrAS. It's dead now, but the main idea behind it was to form a cluster of JBoss AS servers in the cloud, without any unnecessary steps. You just launched the instances, they found and connected to each other and the result was a working cluster. Additionally every cluster node registered itself in a front-end instance which worked as a load balancer and monitoring/management (we used RHQ) node.

You can still watch the screencast I created (over 3 years ago) to show how it works, but prepare for my Polish accent. You've been warned.

Since that was a few years ago, and we now have both WildFly (the JBoss AS successor) and Docker in Fedora, it's time to use these new technologies to do something similar.


Because we're IT hipsters we need to use the latest technologies like Fedora 20 (pre-release), WildFly 8 (pre-release) and Docker (soon-to-be-in-Fedora). As you can imagine, bad things may happen.

I assume you have Docker installed. If not, please refer to my previous blog post on how to do it on Fedora.

Docker 0.6.3
I've upgraded the Docker version available in my repo to 0.6.3.

I've done some of the hard stuff for you already; I've prepared a very basic Fedora 20 image for Docker. Grab it with:

docker pull goldmann/f20

Now that you have my image locally, you can try to run it, like this:

$ docker run -i -t goldmann/f20 /bin/bash

Building the basic WildFly image

Now it's time to extend the goldmann/f20 image and install the wildfly package on it. This can be easily done by using this Dockerfile:

# Base on the Fedora image created by me
      FROM goldmann/f20
      # Install WildFly
      RUN yum install -y wildfly

Let's build the image:

$ docker build .
      Uploading context 10240 bytes
      Step 1 : FROM goldmann/f20
       ---> 5c47c0892695
      Step 2 : RUN yum install -y wildfly
       ---> Running in 984358fb5472
      Resolving Dependencies
      --> Running transaction check
      ---> Package wildfly.noarch 0:8.0.0-0.9.Alpha4.fc20 will be installed
      --> Processing Dependency: java-devel >= 1:1.7 for package: wildfly-8.0.0-0.9.Alpha4.fc20.noarch
        xstream.noarch 0:1.3.1-8.fc20
        xz-java.noarch 0:1.3-2.fc20
        zip.x86_64 0:3.0-9.fc20
       ---> a70a03698e7e
      Successfully built a70a03698e7e

Time to test our image, let's run the container and start WildFly:

$ docker run -i -t a70a03698e7e /bin/bash
      bash-4.2# /usr/share/wildfly/bin/standalone.sh 
      09:25:55,305 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.0.0.Alpha4 "WildFly" started in 2789ms - Started 161 of 196 services (57 services are lazy, passive or on-demand)

Cool, it works!

Extending the WildFly image

Now that we have a working basic WildFly image, it's time to make sure it works in a cluster too.

We're going to create a standalone cluster. We won't use the domain mode built into WildFly AS.

The Dockerfile

Take a look at our Dockerfile. I'll describe the important stuff later.

It is a good idea to create a custom launch script for WildFly. This will greatly simplify the Dockerfile for us. Our launch.sh file could look like this:

#!/bin/bash
      # Grab the container's non-loopback IPv4 address
      IPADDR=$(ip a s | sed -ne '/127.0.0.1/!{s/^[ \t]*inet[ \t]*\([0-9.]\+\)\/.*$/\1/p}')
      /usr/share/wildfly/bin/standalone.sh -c standalone-ha.xml -Djboss.bind.address=$IPADDR -Djboss.bind.address.management=$IPADDR -Djboss.node.name=server-$IPADDR

And here is the Dockerfile itself:

# Base on the Fedora image created by me
      FROM goldmann/f20
      # Install WildFly
      RUN yum install -y wildfly
      # Create the management user
      RUN /usr/share/wildfly/bin/add-user.sh admin Admin#70365 --silent
      ADD launch.sh /
      RUN chmod +x /launch.sh
      # Run WildFly after the container boots
      ENTRYPOINT /launch.sh

The first line tells docker that we want to use the goldmann/f20 image as our base. The second line installs the wildfly package with all the required dependencies (there are quite a few). Next, we create the admin user which will be used for node management. We also inject the launch.sh file and make it executable. This will be our entry point, meaning that this script will be executed after the container boots.

Binding to the right address

When you boot WildFly as we did previously, it will bind to 127.0.0.1. This is not very useful since we're launching an application server that should be reachable from outside. We need to bind it to the current IP address assigned to the NIC of the container, which we can do with the jboss.bind.address property. To get the IP we can use some shell scripting - please take a look at the launch.sh script above.

We do the same for the jboss.bind.address.management property which will be used later.


Our WildFly image uses the standalone.xml configuration file, which is great, but not for clustering purposes. Let's switch to standalone-ha.xml, which enables the clustering features.

The container network by default is multicast enabled. This is a great thing, since it allows WildFly's auto discovery feature to work. Each node on the network will find and join the cluster automatically. Good stuff.

Please note that a node will search for a cluster only when there is something deployed on it. While the application server is empty, it registers only with the front-end, without joining the cluster or setting up session replication. This may be a bit misleading at first, since you expect some messages in the logs right after starting a new node. Nope - you need to deploy an app first.

Application deployment

We need to think about deploying apps to the cluster. There are various ways to do it; I prefer to use the jboss-cli.sh script. To make it work we need to expose the WildFly management interface, which we've done already (remember the jboss.bind.address.management property?).

The last thing that prevents us from connecting to a running WildFly instance is the lack of a management user. Authentication is not required when you try to connect from localhost, but to connect to remote servers (our case) - we need to create a user. We can use the add-user.sh shell script, like this:

/usr/share/wildfly/bin/add-user.sh admin Admin#70365 --silent

Nope, this is not a very secure password, but it will do for now.


You can now build the image with docker build . and you're done!

Building load balancer image

OK, we have the back-end image providing WildFly, but to have a proper cluster we need a load balancer. Let's create one with Apache HTTPD as the proxy. We chose HTTPD because of a very nice project called mod_cluster. The mod_cluster project consists of two parts:

  1. An Apache HTTPD module,
  2. An application server component (shipped with WildFly, but available for other application servers too)

This is different from the mod_proxy setup, since the back-end registers itself in the proxy, not the other way around. This is very valuable since we're going to start and shut down nodes depending on the load, but the load balancer will stay online forever (hopefully).

Another nice thing is that if you have multicast enabled (which we do!) we can use the mod_advertise module. This makes load balancer discovery very easy: the load balancer notifies back-ends of its existence, and when a back-end receives this information, it automatically registers itself with the front-end, knowing its location.

Cluster out-of-the-box? Yep, this is it.

Enough talking, let's create the load-balancer image.

      # Base on the Fedora image created by me
      FROM goldmann/f20
      # Install Apache and mod_cluster
      RUN yum install -y httpd mod_cluster
      # Disable mod_proxy_balancer module to allow mod_cluster to work
      RUN sed -i 's|LoadModule proxy_balancer_module|# LoadModule proxy_balancer_module|' /etc/httpd/conf.modules.d/00-proxy.conf
      ADD launch.sh /
      ADD mod_cluster.conf /etc/httpd/conf.d/mod_cluster.conf
      RUN chmod +x /launch.sh
      # Do the required modifications and launch Apache after boot
      ENTRYPOINT /launch.sh

The Dockerfile is simple, so I won't describe it in detail. Instead I'll focus on the mod_cluster.conf and launch.sh files injected into the image.

The mod_cluster.conf will overwrite the default config file installed with the mod_cluster package. It will enable the advertise and mod_cluster manager features, the latter of which exposes a simple web interface allowing us to see all nodes connected to the cluster.

      LoadModule slotmem_module       modules/mod_slotmem.so
      LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
      LoadModule advertise_module     modules/mod_advertise.so
      LoadModule manager_module       modules/mod_manager.so
      MemManagerFile /var/cache/httpd
      ServerName *:80
      <VirtualHost *:80>
        EnableMCPMReceive true
        ServerAdvertise On
        ServerName loadbalancer
        <Location />
          Require all granted
        </Location>
        <Location /mod_cluster_manager>
          SetHandler mod_cluster-manager
          Require all granted
        </Location>
      </VirtualHost>
Just like with the back-end, we inject a launch.sh script:

      # Get the IP address
      IPADDR=$(ip a s | sed -ne '/127\.0\.0\.1/!{s/^[ \t]*inet[ \t]*\([0-9.]\+\)\/.*$/\1/p}')
      # Adjust the IP addresses in the mod_cluster.conf file
      sed -i "s|[0-9\.\*]*:80|$IPADDR:80|g" /etc/httpd/conf.d/mod_cluster.conf
      # Run Apache
      httpd -D FOREGROUND

The only thing we do here is adjust the IP addresses in the mod_cluster.conf file. This will ensure we send the correct IP address to the back-end nodes using the advertise feature.
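To see the substitution in action, here is the same sed expression run against one sample config line (the IP address is a hypothetical example):

```shell
# The sed from launch.sh applied to a single line of mod_cluster.conf.
IPADDR=172.17.0.5
line='<VirtualHost *:80>'
# The bracket expression matches digits, dots and the literal '*' before ':80',
# so both '*:80' and an already-substituted address get rewritten.
result=$(echo "$line" | sed "s|[0-9\.\*]*:80|$IPADDR:80|g")
echo "$result"
```

The same rewrite is applied to every `:80` occurrence in the file, so the ServerName and VirtualHost directives end up advertising the container's real address.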

You can now build this image.

Prebuilt Images

If you don't want to take the time to build the images yourself, you can use the images I've pushed to the Docker repository. To grab them, just pull the goldmann/wildfly-cluster repo:

docker pull goldmann/wildfly-cluster

This will take some time, since these images are quite big. In the end, you'll have three images with the following tags: front-end, back-end and back-end-base.


Once you've built (or pulled) the images, we can begin to test them. Let's start with the front-end image:

docker run -d -p 80:80 goldmann/wildfly-cluster:front-end 

This will start a front-end container in detached mode. As a bonus, we're redirecting port 80 from the host directly to this container, making the Apache instance running inside available directly via the host's IP.

If you now go to the host IP address in your browser, you should see the Apache HTTPD test page. If you point your browser at /mod_cluster_manager, you should see a mod_cluster manager page without any nodes.

[Screenshots: Apache Fedora test page; empty mod_cluster manager]

Let's add some back-end nodes. Run this twice:

docker run -d goldmann/wildfly-cluster:back-end

Wait a few seconds, and refresh the browser. You should now see two nodes.

[Screenshot: mod_cluster manager with two nodes]

Your cluster is working, congrats!

Deploying applications

We prepared the back-end nodes for management by creating the management user earlier. Now it's time to use this user to deploy an application. You'll need the jboss-cli.sh script shipped with WildFly. You can get it by downloading WildFly or installing it (for example with yum install wildfly, if you're on Fedora 20+).

We need the IP address of the node we want to connect to. You can find it with the docker inspect command - look for the IPAddress field.
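If you want to script this, the field can be pulled out with standard tools. A minimal sketch; the JSON below is a trimmed, made-up sample of what docker inspect prints:

```shell
# Trimmed, hypothetical docker inspect output; in practice pipe the real
# `docker inspect <container-id>` output into the same sed.
json='{"NetworkSettings": {"IPAddress": "172.17.0.2"}}'
node_ip=$(echo "$json" | sed -n 's/.*"IPAddress": "\([0-9.]*\)".*/\1/p')
echo "$node_ip"
```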

Next we need to connect to the node's IP address (port 9990) and authenticate with the admin/Admin#70365 credentials:

$ $WILDFLY_HOME/bin/jboss-cli.sh
      WARN: can't find jboss-cli.xml. Using default configuration values.
      You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
      [disconnected /] connect
      Authenticating against security realm: ManagementRealm
      Username: admin
      [standalone@ /] deploy your-app.war

The CLI provides many useful (and powerful) features, from deploying applications to managing the whole server. You can learn more about it in the documentation.

Once you deploy your web app, you'll see the context available in the mod_cluster manager.

Deploying to all nodes
To deploy the application on every node in the cluster (in standalone mode) you need to repeat the above step for each node. Of course there are other options, but they're not part of this tutorial.
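A small loop could automate that repetition. This is a dry-run sketch: the node IPs and WAR name are placeholders (collect the real IPs with docker inspect), and the leading echo must be dropped to actually execute jboss-cli.sh:

```shell
# Hypothetical back-end IPs; substitute the real ones from docker inspect.
NODE_IPS="172.17.0.2 172.17.0.3"
for ip in $NODE_IPS; do
  # Dry run - prints the CLI invocation; remove 'echo' to really deploy.
  echo jboss-cli.sh --connect --controller="$ip:9990" \
    --user=admin --password='Admin#70365' --command="deploy your-app.war"
done
```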

The last thing left is to point your browser at the front-end IP and the context of your app. It should be up and running. If you deploy your app on multiple nodes, requests will be routed to all back-ends, as you would expect. Try it out!


It's really easy to create a cluster for your Java EE applications using Docker and Fedora. Because of the nice Docker/LXC features, we're now able to grow the cluster in literally seconds.

Once again: everything shown here is based on pre-releases. The Fedora/WildFly/Docker integration will be improved over time, but give it a shot today and let me know how you like it. If you find a bug, please report it directly in Bugzilla or ping me in the #fedora-cloud or #fedora-java IRC channels.

Docker and Fedora


For the last couple of days I have been playing with Docker. Docker is a project that helps you create images and run containers. This sounds like virtualization, but it isn't. It uses Linux Containers (LXC) under the hood to do all the magic. What kind of magic? Read on.

Linux Containers

The first question you might ask is: how is Docker/LXC different from virtualization?

LXC is something between chroot and full virtualization. You don't run your applications in a virtual machine (inside a process controlled by the virtualization engine). Instead, your applications run in environments isolated by the kernel itself. Processes running in one container have no access to processes in other containers. From a container's POV it looks like virtualization, but when you look from the host side you can see all of the processes (applications) running on the host directly. I'm pretty new to the LXC technology, but it's very promising, especially with regards to speed.

The most visible difference between virtualization and LXC for the end user is boot time. LXC overhead is practically zero, compared to the seconds or even minutes it takes to boot an operating system in a virtualized environment. When you run a container, it's ready to work immediately - you don't need to wait at all.

This single feature is a great reason to take a look at LXC. But there is also Docker, which is a great extension of what we already have in LXC.

Hello Docker

Docker builds upon LXC, using it to create images and later run and manage them. What makes Docker especially nice is that it's lightweight and easy to use. The good folks at Docker made Ubuntu their distribution of choice, but Fedora isn't sleeping. Just recently Lokesh Mandvekar created a docker-io package and it was reviewed and accepted for Fedora. It'll take a week or two for it to become available in the Fedora 19+ repos. If you're eager (and brave), I prepared my own Docker repository with RPMs for Fedora 19, 20 and Rawhide. This repo will become unavailable after the official Docker RPMs hit the Fedora repos.

Fedora 20 host
Please note that I used a Fedora 20 host, but it should work on Fedora 19 too.
Superuser privileges
All commands below should be executed with root privileges.
      curl http://goldmann.fedorapeople.org/repos/docker.repo > /etc/yum.repos.d/docker-goldmann.repo
      yum install docker-io

When the install finishes - start the docker systemd service:

systemctl start docker.service

And if you want to enable it on boot:

systemctl enable docker.service

Docker should be running now.

Run your first container

Docker offers a central repository with images. This makes it easy to download (and publish) images. Matthew Miller (Fedora Cloud Architect) prepared a Fedora 19 image. This image will be updated to Fedora 20 once it is released.

Let's grab the fedora image:

$ docker pull mattdm/fedora
      Pulling repository mattdm/fedora
      22a514a5aa4c: Download complete
      50f374c05c2c: Download complete
      97fc5bf7f8d4: Download complete

Done! Now you have the image locally (in /var/lib/docker) and you can immediately start a container based on it:

docker run -i -t mattdm/fedora /bin/bash

Let's look at the parameters:

  • run - runs a container,
  • -i - keeps the stdin open, even if there is nothing attached,
  • -t - allocates a pseudo terminal, so we can interact with the container directly,
  • mattdm/fedora - ID of the image, it can be a tag or a hash (22a514a5aa4c in this case),
  • /bin/bash - the command to run after the container boots.

After you run the command, you'll be greeted by the bash prompt from inside the container, where you can do whatever you want. There are different types of images, some of them have an entry point, some not. I hope to discuss this further in a different blog post.

If you see an error similar to the one below when running the command, just try again:

2013/09/25 13:22:02 Error: Error starting container 4b9cdcc43f43: fork/exec /usr/bin/unshare: operation not permitted

This is a known bug and I hope it will be fixed soon.

Basic container management

To stop the container, just press CTRL+D. Please note that the container is now stopped, but that does not mean it no longer exists. A stopped container can be started again or removed.

To remove a stopped container you need to know the container ID. You can see it by using the docker ps command. By default the docker ps command will show only running containers. To see all containers (including stopped) run docker ps -a:

$ docker ps -a
      ID                  IMAGE                  COMMAND                CREATED             STATUS              PORTS
      15bd697c7174        mattdm/fedora:latest   /bin/bash              22 minutes ago      Exit 0                                  
      5ab7c7a95885        mattdm/fedora:latest   /bin/bash              23 minutes ago      Exit 0                                  
      4b9cdcc43f43        mattdm/fedora:latest   /bin/bash              23 minutes ago      Exit 0                                  
      0fdab01e4eaa        mattdm/fedora:latest   /bin/bash              24 minutes ago      Exit 0                                  

Now you can remove the container by executing the docker rm command and specifying the ID, for example docker rm 15bd697c7174. The 15bd697c7174 container is now gone.
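Since `docker ps -a -q` prints bare container IDs, you can combine it with xargs to remove all stopped containers in one go. A dry-run sketch with hypothetical IDs; in practice you would pipe `docker ps -a -q` straight into `xargs docker rm`:

```shell
# Hypothetical IDs standing in for the output of `docker ps -a -q`.
ids="15bd697c7174 5ab7c7a95885"
# Dry run: build the docker rm invocation instead of executing it.
cmd=$(echo $ids | xargs echo docker rm)
echo "$cmd"
```

Note that `docker rm` refuses to remove a running container, so this only cleans up stopped ones.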

Network connectivity

By default Fedora disables IP forwarding, which will prevent you from accessing the Internet from inside the container. In most (all?) cases this is not what you want. To enable IP forwarding you can run this command:

sysctl -w net.ipv4.ip_forward=1 

After a reboot, forwarding will be disabled again. To make the setting persistent, create a /etc/sysctl.d/80-docker.conf file and put the following line in it:

net.ipv4.ip_forward = 1

There is an open bug to fix this in Fedora.

Build your first image

What we have done so far is run an image made by someone else. Let's create our own image now.

Docker uses a plain text file, the Dockerfile, to describe an image; it can contain various commands. To build an image, create an empty directory and place a file called Dockerfile in it with the following content:

      # Base on the Fedora image created by Matthew
      FROM mattdm/fedora
      # Install the JBoss Application Server 7
      RUN yum install -y jboss-as
      # Run the JBoss AS after the container boots
      ENTRYPOINT /usr/share/jboss-as/bin/launch.sh standalone standalone.xml

The FROM command is required and tells Docker which image should be used as a base for our new image.

The RUN command is used to modify the image by running a command inside the container at the time of building it.

The ENTRYPOINT command specifies which command should be executed after the container fully boots.

The next step is to build the image itself. In the directory execute the docker build . command:

$ docker build .
      Uploading context 10240 bytes
      Step 1 : FROM mattdm/fedora
       ---> 22a514a5aa4c
      Step 2 : RUN yum install -y jboss-as
       ---> Running in 4e4d90823207
      Resolving Dependencies
      --> Running transaction check
      ---> Package jboss-as.noarch 0:7.1.1-21.fc19 will be installed
      --> Processing Dependency: wss4j >= 1.6.7 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: wsdl4j >= 1.6.2-5 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: resteasy >= 2.3.2-7 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: mod_cluster-java >= 1.2.1-2 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: jython >= 2.2.1-9 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: jbossws-spi >= 2.1.0 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: jbossws-native >= 4.1.0 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: jbossws-cxf >= 4.1.0 for package: jboss-as-7.1.1-21.fc19.noarch
      --> Processing Dependency: jbossws-common >= 2.0.4-3 for package: jboss-as-7.1.1-21.fc19.noarch
      ...
      xpp3.noarch 0:
      xpp3-minimal.noarch 0:
      xsom.noarch 0:0-9.20110809svn.fc19
      xstream.noarch 0:1.3.1-5.fc19
      zip.x86_64 0:3.0-7.fc19
       ---> fafccbe2bffc
      Step 3 : ENTRYPOINT /usr/share/jboss-as/bin/launch.sh standalone standalone.xml
       ---> Running in 055d264ab953
       ---> 366ff524eea0
      Successfully built 366ff524eea0

Please note that after every command Docker commits the changes (in a manner similar to Git). Future executions of the same command will use the cached result.

Now if we run docker run -i -t 366ff524eea0 (please note that we don't specify the /bin/bash command, since our image has an entry point and it will be executed for us) we'll see JBoss AS booting:

$ docker run -i -t 366ff524eea0
        JBoss Bootstrap Environment
        JBOSS_HOME: /usr/share/jboss-as
        JAVA: java
      13:28:15,433 WARN  [org.jboss.as.domain.http.api] (MSC service thread 1-4) JBAS015102: Unable to load console module for slot main, disabling console
      13:28:15,442 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-1) JBAS015012: Started FileSystemDeploymentService for directory /usr/share/jboss-as/standalone/deployments
      13:28:15,520 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
      13:28:15,552 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on
      13:28:15,553 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 2328ms - Started 133 of 208 services (74 services are passive or on-demand)

That's it, JBoss AS is running.

What's next

I highly recommend the try-and-fail method. Read the Docker docs, try to build your own images. In future blog posts I'll get a bit more into Docker details (and we'll build a cluster).

Hope you enjoyed this quick ride with Docker and Fedora!

WildFly is approaching Fedora

In April 2013 the renaming of JBoss Application Server to WildFly was announced at Devoxx. Since then the WildFly team has made a couple of Alpha releases. They all carry version 8.0.0 to highlight the fact that WildFly is the successor of JBoss AS.


Immediately after the first release of WildFly, I started to think about getting it into Fedora.

A few days ago I finished upgrading all the required components and finally packaged the Alpha3 version of WildFly. It's already available in Rawhide and in the Fedora 20 updates-testing repository.


As you can imagine, the name change triggered some changes to the Fedora jboss-as package. The most visible one is the package rename: in Fedora 20+ the jboss-as package is replaced with wildfly. Other than dependency upgrades and some script name changes, nothing changed dramatically - you can still expect things to work as you're used to.

If you hit any issues - please let me know!

Of course WildFly is a brand new application server with some new great features like Java EE 7 support, so do not forget to test your new apps on it!

Today I submitted a new update for Fedora 20 that includes the Alpha4 version. It also fixes a few bugs reported to me (thanks!). I personally feel that this update is pretty stable. Give it a shot and don't forget to bump the karma!

The future

The plan is simple: package the most recent version of WildFly and make it available as soon as possible after each upstream release. With the Fedora 20 Final release approaching (planned for 2013.11.19), I'm going to do everything possible to package 8.0.0.Final before then, so that Fedora 20 can ship a stable version of WildFly from the beginning. This will be a very tough task since the release dates are very close to each other. If it doesn't work out, 8.0.0.Final will be a 0-day update.

Domain mode in Fedora's JBoss AS

The JBoss Application Server shipped in Fedora is easy to run as a system service. So far you could launch only the standalone mode; there was no easy way to run it in domain mode. This is going to change.

If you're not familiar with the operating modes, I highly recommend reading the introduction to them. In short, in domain mode you can easily launch more than one server on one host. But that's not everything - you get a single management entry point for all these instances. This means that you can deploy applications on all instances by executing just one command!

Available with the next jboss-as package update
Domain mode will be available with the next jboss-as package update 7.1.1-19 on Fedora 19+.


With the jboss-as-7.1.1-19 update you'll be able to select which mode should be used when running the systemd service for JBoss AS. To do this you need to edit the /etc/jboss-as/jboss-as.conf file.

You can choose the mode by setting the JBOSS_MODE environment variable. Do not forget to select a valid configuration file by setting the JBOSS_CONFIG variable.

      # The configuration you want to run
      # JBOSS_CONFIG=standalone.xml
      # The mode you want to run
      # JBOSS_MODE=standalone
      # The address to bind to
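For example, to switch to domain mode you would uncomment and set both variables. A sketch; domain.xml is the stock configuration file for this mode:

```shell
# /etc/jboss-as/jboss-as.conf - run the systemd service in domain mode
JBOSS_MODE=domain
JBOSS_CONFIG=domain.xml
```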

Afterwards you can restart the server by simply using the systemctl command:

$ systemctl restart jboss-as.service

And voilà - domain mode is running with two JBoss AS instances (the default case):

$ systemctl status jboss-as.service
      jboss-as.service - The JBoss Application Server
         Loaded: loaded (/usr/lib/systemd/system/jboss-as.service; disabled)
         Active: active (running) since pią 2013-05-24 10:56:30 CEST; 7s ago
       Main PID: 1325 (launch.sh)
         CGroup: name=systemd:/system/jboss-as.service
                 ├─1325 /bin/sh /usr/share/jboss-as/bin/launch.sh domain domain.xml
                 ├─1326 /bin/sh /usr/share/jboss-as/bin/domain.sh -c domain.xml -b
                 ├─1368 java -D[Process Controller] -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byt...
                 ├─1382 java -D[Host Controller] -Dorg.jboss.boot.log.file=/usr/share/jboss-as/domain/log/host-controller.log -Dlogging.configuration=file:/usr/share/jboss-as/domain/configuration/logging.properties -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Sta...
                 ├─1433 /usr/lib/jvm/java-1.7.0-openjdk- -D[Server:server-one] -XX:PermSize=256m -XX:MaxPermSize=256m -Xms64m -Xmx512m -server -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.bind.address= -Dsun...
                 └─1451 /usr/lib/jvm/java-1.7.0-openjdk- -D[Server:server-two] -XX:PermSize=256m -XX:MaxPermSize=256m -Xms64m -Xmx512m -server -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.bind.address= -Dsun...

Domain management examples

The JBoss AS domain is so powerful that you can launch another server instance by using just the CLI:

[domain@localhost:9999 /] /host=master/server-config=server-three:start
      {
          "outcome" => "success",
          "result" => "STARTING"
      }
      [domain@localhost:9999 /] /host=master/server-config=server-three:read-resource(include-runtime=true)
      {
          "outcome" => "success",
          "result" => {
              "auto-start" => false,
              "group" => "other-server-group",
              "interface" => undefined,
              "jvm" => undefined,
              "name" => "server-three",
              "path" => undefined,
              "socket-binding-group" => undefined,
              "socket-binding-port-offset" => 250,
              "status" => "STARTED",
              "system-property" => undefined
          }
      }
You can check the status of the service to confirm that the server is actually a new instance:

$ systemctl status jboss-as.service
      jboss-as.service - The JBoss Application Server
         Loaded: loaded (/usr/lib/systemd/system/jboss-as.service; disabled)
         Active: active (running) since pią 2013-05-24 10:56:30 CEST; 8min ago
       Main PID: 1325 (launch.sh)
         CGroup: name=systemd:/system/jboss-as.service
                 ├─1325 /bin/sh /usr/share/jboss-as/bin/launch.sh domain domain.xml
                 ├─1326 /bin/sh /usr/share/jboss-as/bin/domain.sh -c domain.xml -b
                 ├─1368 java -D[Process Controller] -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byt...
                 ├─1382 java -D[Host Controller] -Dorg.jboss.boot.log.file=/usr/share/jboss-as/domain/log/host-controller.log -Dlogging.configuration=file:/usr/share/jboss-as/domain/configuration/logging.properties -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Sta...
                 ├─1433 /usr/lib/jvm/java-1.7.0-openjdk- -D[Server:server-one] -XX:PermSize=256m -XX:MaxPermSize=256m -Xms64m -Xmx512m -server -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.bind.address= -Dsun...
                 ├─1954 /usr/lib/jvm/java-1.7.0-openjdk- -D[Server:server-two] -XX:PermSize=256m -XX:MaxPermSize=256m -Xms64m -Xmx512m -server -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.bind.address= -Dsun...
                 └─2076 /usr/lib/jvm/java-1.7.0-openjdk- -D[Server:server-three] -XX:PermSize=256m -XX:MaxPermSize=256m -Xms64m -Xmx512m -server -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.bind.address= -Ds...

Now you can use the JBoss AS CLI to deploy an application to all running instances in one step:

[domain@localhost:9999 /] deploy node-info.war --all-server-groups

Nice, isn't it?

Of course there is more you can do with it - please read the documentation.

The update was submitted to Fedora 19 and is already available in Rawhide. Please give it a shot and add some karma!

Update 28.05.2013

Please make sure you install the jacorb-2.3.1-5 bugfix update available now in updates-testing repository. This fixes some issues when running JBoss AS in high-availability and in domain mode.

Configuring polkit in Fedora 18 to access virt-manager

virt-manager authentication

In Fedora when you run virt-manager you'll be asked for your password. Since I use this tool a lot I would like to have a password-less virt-manager.

Thank Jebus we have polkit, where we can define authentication rules. There was a handy rule available written by Rich, but it stopped working with the release of Fedora 18 because polkit completely changed the language used in rules files. Since polkit-0.106 the rules files are written in JavaScript. Yes, JavaScript. You can find more info about this choice in David's blog post.

To access virt-manager without entering a password, just create a file named /etc/polkit-1/rules.d/80-libvirt-manage.rules (or similar) with the following content:

      polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" && subject.local && subject.active && subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
      });
Remember to add your user to the wheel group:

usermod -a -G wheel goldmann

That's all, enjoy!

Webservices support in JBoss AS in Fedora

Version info

New features mentioned in this post are available in jboss-as-7.1.1-11 or newer.

Until now, webservices support was not available in the Fedora-packaged JBoss AS. The main issue was the lack of the CXF stack in Fedora. It took some time to make it available in an RPMified version, since CXF is a pretty big project with many submodules and a pretty extensive dependency tree.

Currently Fedora ships JBoss AS 7.1.1.Final, which requires CXF 2.4.6 - a pretty old release. I decided to upgrade the CXF stack to the latest available release from the 2.6.x series. This triggered updating the jbossws-* stack to versions newer than those shipped with JBoss AS 7.1.1.Final. I did some tests and the components seem to integrate with JBoss AS seamlessly. In either case, please test your application with the new stack and report any bugs.

Sample application

To ensure the webservices integration works as expected I created a small application.

      package pl.goldmann.as7.ws;
      import javax.jws.WebMethod;
      import javax.jws.WebService;
      @WebService
      public interface Calculator {
        @WebMethod
        public float add(float a, float b);
        @WebMethod
        public float sub(float a, float b);
        @WebMethod
        public float multiply(float a, float b);
        @WebMethod
        public float divide(float a, float b);
      }
As you can see the webservice is a very simple calculator with four basic operations. You can build the application by executing mvn package. I used the jboss-cli command to deploy the application to JBoss AS:

      [standalone@localhost:9999 /] deploy /home/goldmann/tmp/webservices.war

And here is the JBoss AS log:

      13:26:07,705 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "webservices.war"
      13:26:08,749 INFO  [org.jboss.ws.cxf.metadata] (MSC service thread 1-2) JBWS024061: Adding service endpoint metadata: id=CalculatorWS
      13:26:09,597 INFO  [org.apache.cxf.service.factory.ReflectionServiceFactoryBean] (MSC service thread 1-2) Creating Service {http://ws.as7.goldmann.pl/}CalculatorWS from class pl.goldmann.as7.ws.Calculator
      13:26:11,458 INFO  [org.apache.cxf.endpoint.ServerImpl] (MSC service thread 1-2) Setting the server's publish address to be http://jboss-as:8080/webservices
      13:26:11,781 INFO  [org.jboss.ws.cxf.deployment] (MSC service thread 1-2) JBWS024074: WSDL published to: file:/var/lib/jboss-as/standalone/data/wsdl/webservices.war/CalculatorWS.wsdl
      13:26:11,793 INFO  [org.jboss.as.webservices] (MSC service thread 1-4) JBAS015539: Starting service jboss.ws.port-component-link
      13:26:11,824 INFO  [org.jboss.as.webservices] (MSC service thread 1-6) JBAS015539: Starting service jboss.ws.endpoint."webservices.war".CalculatorWS
      13:26:11,899 INFO  [org.jboss.ws.common.management] (MSC service thread 1-6) JBWS022050: Endpoint registered: jboss.ws:context=webservices,endpoint=CalculatorWS
      13:26:12,335 INFO  [org.jboss.web] (MSC service thread 1-3) JBAS018210: Registering web context: /webservices
      13:26:12,548 INFO  [org.jboss.as.server] (management-handler-thread - 1) JBAS018559: Deployed "webservices.war"

You can see the WSDL by pointing your browser to http://jboss-as:8080/webservices?wsdl.


The example applications use jboss-as as the hostname. You may want to edit the /etc/hosts file and add an entry mapping this hostname to a valid IP address.
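The mapping itself is a single line. A sketch, staged in a scratch file here since editing /etc/hosts requires root; the IP address is a made-up example:

```shell
# Stage the entry in a temporary file; as root you would append the same
# line to /etc/hosts itself. 192.168.122.10 is a hypothetical address.
hosts=$(mktemp)
echo '192.168.122.10  jboss-as' >> "$hosts"
grep jboss-as "$hosts"
```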

Testing the webservice

[Screenshot: sample calls to the webservice using SoapUI]

To test the service I prepared a simple standalone client. You can build it by running mvn package. To start the client just execute:

java -jar target/webservices-client-1.0.jar

and observe the output. It should be similar to what I got.

Additionally I ran some basic tests with SoapUI. I was able to create a webservice from WSDL and run some sample requests. You can see the result on the screenshot.


As you can see, the webservice stack in JBoss AS in Fedora works! Of course, all you saw above are basic tests. If you have something more fancy, go for it and let me know how it went.

<script type="text/javascript"> $('.picture').colorbox(); </script>
Generate a database schema with OpenJPA and Hibernate on JBoss AS 7

When you develop an application, sometimes you want to run it quickly and test it manually. Sometimes you want to execute some integration tests that require database access. In all of these cases you need a working database. Thanks to JPA providers we can generate the database schema based on the entity definitions. Let's quickly look at two of them: Hibernate and OpenJPA.

Hibernate configuration

I'm sure you have used Hibernate before. Did you know that it has a nice feature that generates the schema in the database at application startup? Additionally, you can place any SQL statements you want executed after the creation of the schema into a file called import.sql.

To enable the schema generation in Hibernate add the following property to persistence.xml:

<property name="hibernate.hbm2ddl.auto" value="create-drop"/>

It works perfectly in JBoss AS 7.

The value create-drop means that the database schema will be created at application deploy time and removed when you undeploy it. There are other possible values like validate, update and create.

By default, Hibernate will search for the import.sql file in the root of the classpath of the produced archive. For a WAR file this is WEB-INF/classes/import.sql; if you build a regular JAR, just place import.sql in the root of the archive.
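For instance, a minimal import.sql could contain plain INSERT statements (the table and column names below are made up for illustration):

```sql
INSERT INTO Task (id, name) VALUES (1, 'Write the blog post');
INSERT INTO Task (id, name) VALUES (2, 'Test schema generation');
```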

If you want to change the location of the file use hibernate.hbm2ddl.import_files. As the property name suggests, you can specify more than one file.

Read more about the Hibernate properties you can use in the Miscellaneous Properties table located in the Hibernate documentation.

OpenJPA configuration

OpenJPA includes a similar feature: you can generate the schema using the provided MappingTool, which can even be run from the command line. In our case the more interesting feature is running it at application deploy time, so that we automatically get a working schema in our database.

To make it work at runtime we need to add a openjpa.jdbc.SynchronizeMappings property to persistence.xml:

<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/>

Additionally we need to list all the classes for which we want to generate the schema in our persistence unit, so that OpenJPA knows about them at startup. Just use the <class/> element, for example:
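A persistence unit listing its entity classes explicitly might look like this (the unit and class names are made up for illustration):

```xml
<persistence-unit name="myPU">
    <class>com.example.model.Task</class>
    <class>com.example.model.Project</class>
    <properties>
        <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/>
    </properties>
</persistence-unit>
```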


In JBoss AS 7 we need to force eager initialization of the OpenJPA persistence unit, so that the schema generation is actually triggered at deployment time. It's very simple, just add another property:

<property name="openjpa.InitializeEagerly" value="true"/>

I'm not sure if OpenJPA has a built-in feature to execute SQL statements after schema generation like Hibernate. If you know how to do it, please speak up.

And that's it. It's simple to implement, but a handy feature.

OpenJPA and Hibernate 3 on JBoss AS in Fedora

With the upcoming new release of the JBoss AS package in Fedora you'll be able to use both the Hibernate 3 and OpenJPA JPA providers. The reason why I'm enabling this for you is that we still don't have Hibernate 4 packaged, which is a pity since Hibernate 4 is the default JPA provider in JBoss AS 7. If you want to help us with it, please consider reviewing the Gradle package.

Sample application

I crafted a small application that shows how to use the two new providers. The full source code is available from my GitHub account. This application uses JSF and CDI in addition to JPA. And yes, I use both JPA providers at the same time.

Configuration files

Let's take a look at the persistence.xml file, since this is the most important part of the application.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
             xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <persistence-unit name="hibernate3PU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
            <property name="hibernate.show_sql" value="true"/>
            <property name="jboss.as.jpa.providerModule" value="org.hibernate:3"/>
        </properties>
    </persistence-unit>
    <persistence-unit name="openjpaPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="jboss.as.jpa.providerModule" value="org.apache.openjpa"/>
            <property name="jboss.as.jpa.adapterModule" value="org.jboss.as.jpa.openjpa"/>
            <property name="jboss.as.jpa.adapterClass"
                      value="org.jboss.as.jpa.openjpa.OpenJPAPersistenceProviderAdaptor"/>
            <property name="openjpa.Log" value="DefaultLevel=WARN, Runtime=INFO, Tool=INFO, SQL=TRACE"/>
        </properties>
    </persistence-unit>
</persistence>

As you can see I create two persistence units. Both use the ExampleDS datasource shipped with JBoss AS (it's an in-memory H2 database).

Please examine the jboss.as.jpa.* properties carefully. They tell JBoss AS which provider you want to use and how it should be initialized.

Hibernate 3 Persistence Unit

Since support for Hibernate 3 is already configured in JBoss AS 7, the only thing you need to provide is the jboss.as.jpa.providerModule property. Simple.

OpenJPA Persistence Unit

With OpenJPA it is a bit different. Besides the providerModule, we additionally need to configure the adapterModule and adapterClass properties. This will not be necessary after the next stable JBoss AS 7 release, but we'll need to do it until then.

Other stuff

Additionally I enabled logging of the SQL statement execution, just for clarity.

The import.sql file contains some sample data to populate the database on application startup. It's a Hibernate feature and will be used in our case by the Hibernate 3 provider only.


The only entity class in this application is the Chair class. It's simple, and not worth discussing here.

Entity enhancement in OpenJPA

OpenJPA requires entity enhancement. There are several ways to do this. For this application I use the openjpa-maven-plugin.
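A build-time enhancement setup could look roughly like this pom.xml fragment (a sketch; the plugin version is an assumption, so check the plugin documentation for the current coordinates and goal names):

```xml
<plugin>
    <groupId>org.apache.openjpa</groupId>
    <artifactId>openjpa-maven-plugin</artifactId>
    <version>2.2.0</version>
    <executions>
        <execution>
            <!-- enhance the compiled entity classes before packaging -->
            <phase>process-classes</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```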

Hibernate 3 in action

CDI beans

There are two CDI beans for interacting with the view and two for getting data from the database using different providers. This application uses only one database, so data entered with one provider will be accessible from the other one. Here you can see the power of JPA, where there is only one entity configured and it works across different providers. Nice!

The Hibernate3Bean.java and OpenJPABean.java files are almost the same - the only difference is in the injection of specific Database interface implementations.


The view is written in JSF. It's very simple to understand, so go straight to the code.


It's easy to use Hibernate 3 and OpenJPA with JBoss AS 7, and even easier with the jboss-as package provided with Fedora. The upstream tarball requires some manual work to get running; in Fedora you get it for free.

Available in next jboss-as package update

Please note that both Hibernate 3 and OpenJPA providers will be available with the next jboss-as package update, version 7.1.1-7.

If you have any issues ask in the comments or report a bug directly.

Update 24.08.2012

The jboss-as-7.1.1-7 package was submitted as an update for Fedora 17 and Fedora 18, please test it and bump the karma. This kind of feedback is very important.

JBoss AS 7.1.1-4 update pushed to Fedora

I'm very happy to let you know that I pushed today a new update for jboss-as package to Fedora.

New stuff

The 7.1.1-4 version includes a lot of new modules as well as some design changes; let's go over them briefly.


The most important change from an RPM/build-stability point of view is the move from building the minimalistic profile to the default profile. This let me drop about 60 (sic!) patches from jboss-as.spec.

Since the change wasn't trivial (it required rebasing and manually merging previous patches), I hereby ask for your help with testing. I did my best to eliminate bugs, but... Please install the new update and check that everything works for you. It is very important that you add karma on that page (remember to log in first!). This package will go to stable only if it hits the threshold of 4 positive karma; I'm not going to force-push it like I did previously. Keep that in mind and encourage others.

New subsystems available

New update, new modules added. Mostly OSGi stuff, but also some Arquillian goodness.

  • org.jboss.as.modcluster module
  • org.jboss.as.jsr77 module
  • org.jboss.as.arquillian
  • org.jboss.as.osgi
  • org.jboss.as.configadmin
  • org.jboss.as.spec-api

If you're interested in some of them, please test the integration.

A few bugs fixed

There were a few bugs reported; one had been hanging in my queue for some time.

  • RHBZ#827571 - jboss-as-cp script is missing argument placeholder for c optarg
  • RHBZ#827588 - Create a startup script when creating a new user instance (jboss-as-cp)
  • RHBZ#827589 - The user instance create script (jboss-as-cp) should allow a port offset to be specified
  • RHBZ#812522 - Add ExampleDS based on H2 database

How to get it

The simplest way is to wait for the package to hit Fedora's updates-testing repository, which should happen by tomorrow. Remember that you'll need to have the updates-testing repository enabled on your system. Do not install jboss-as-7.1.1-4 without updates-testing enabled and without up-to-date packages on your system. You have been warned.

One more thing...

"Don't mess with coders"

In the early days of packaging JBoss AS for Fedora I had a conversation with Alexander Kurtakov (aka The Fedora Eclipse Guy). He told me that he'd take a picture of himself in a JBoss AS t-shirt once the jboss-as package hit the Fedora repositories. Well, this is now reality, so here is the pic :)

Alexander in JBoss AS t-shirt

Need help?

Feel free to report any bugs in #fedora-java IRC channel or directly in Bugzilla.

Impressions after Fedora Test Day for JBoss Application Server

Yesterday we had the JBoss AS Test Day. From the early morning hours we received help with testing the RPM-packaged JBoss AS.

And what's the general impression?

It works!

Almost all the test cases we prepared were successfully completed by our community members. We found one issue where file permissions were not preserved when adding new users to the JBoss AS management interface. I've created a fix for that and sent a pull request. Expect it to be fixed in the jboss-as-7.1.0-4 package.

Other than that, we haven't found any issues with running AS7 on Fedora, which is obviously good. But if you find some, please report them immediately, as we want to have a great platform for developers.

Please note that the shipped AS is not the full AS7 you can download from the website. It is a web profile, currently lacking the JPA2 implementation. We're working really hard on extending it.

Thanks to all testers for a good job!

JBoss AS Fedora Test Day - today!

Today we're going to test the (very) fresh JBoss Application Server that was packaged for Fedora. I hope you have some time to join us!

Where can I read more about the test day?

You can find the most recent information on the JBoss AS Fedora Test Day wiki page.

How should I prepare?

Make sure you have a Fedora 17 environment somewhere (a VM is preferred) with the latest updates installed. This is very important.

What will we test?

We'll have some test cases, but you're not limited to them. Feel free to test the server on your own too!

Where will we meet?

I am available on IRC in #fedora-test-day channel on freenode.

How can I participate?

You can help us by testing and by proposing test cases! After you execute the test cases, please add your results to the table.

See you online!

The ARM Arc

ARM.  When used in a sentence it may refer to the company (ARM Holdings), one of its numerous CPU versions, or even a way of life.  But we just call it ARM.  ARM (the company) creates low power processor designs …

Watch out for DRI3 regressions
DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to play, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.
Sandboxed applications for GNOME, part 2

This is the second of two posts on application sandboxing for GNOME. In my previous post, I wrote about why application sandboxing is important, and the many positive impacts it could have. In this post, I’m going to concentrate on the user experience design of application sandboxing.

The problem

As I previously argued, application sandboxing has the potential to greatly improve user experience. Privacy features can make users feel more secure and in control, and performance can be improved, for example. At the same time, there is a risk involved with adding security layers: if a security framework is bothersome, or makes software more difficult to use, then it can degrade the user experience.

Sandboxing could be truly revolutionary, but it will fall flat on its face if no one wants to use it. If application sandboxing is going to be a success, it therefore needs to be designed in such a way that it doesn’t get in the way, and doesn’t add too many hurdles for users. This is one reason why effective design is an essential part of the application sandboxing initiative.

Some principles

Discussion about sandboxed applications has been happening in the GNOME project for some time. As these discussions have progressed, we’ve identified some principles that we want to follow in ensuring that sandboxed applications provide a positive user experience. Before I get into the designs themselves, it’s useful to quickly go over these principles.

Avoid contracts

Sandboxed applications have limited access to system APIs and user data; if they want more access, they have to ask the user for it. One way to allow apps to ask for these permissions is to present a set of application requirements that the user must agree to at install time. Anyone who uses Android will be familiar with this model.

Asking for blanket user permissions at install time is not something we are in favour of, for a number of reasons:

  • People have a tendency to agree to contracts without reading them, or without understanding their implications.
  • It often isn’t clear why the application wants access to the things it wants access to, nor is it clear when it will use them. There is little feedback about application behaviour.
  • There’s no opportunity to try the app before you decide what it gets access to. This is an issue because, prior to using the application, the user is not in a good position to evaluate what it should get access to.
  • Asking for permissions up front makes software updates difficult, since applications might change what they want access to.

It doesn’t have to look like security

All too often, security requests are scary and intimidating things, and they can feel far removed from what a user is actually trying to do. It doesn’t have to be this way, though: security questions don’t have to be expressed in scary security language. They don’t even have to look like security. The primary purpose of posing a security question is to ascertain that a piece of software is doing what the user wants it to do, and often you can verify this without the user even realising that they are being asked a question for security purposes.

We can take this principle even further. The moment when you ask a security question can be an opportunity to present useful information or controls; these moments can become a valuable, useful, and even enjoyable part of the experience.

Privacy is the interesting part

People tend to be interested in privacy far more than security. Generally speaking, they are concerned with who can see them and how they appear, rather than with abstract security threats. Thinking in terms of privacy rather than security therefore helps us to shift the user experience in a more human-orientated direction. It prompts us to think about social and presentational issues. It makes us think about people rather than technology.

Real-time feedback

A key part of any security framework is getting real-time feedback about what kinds of access are occurring at any one time. Providing real-time feedback makes the system much more transparent and, as a result, builds trust and understanding, as well as the opportunity to quickly respond when undesirable access occurs. We want to build this into the design, so that you get immediate feedback about which devices and services are being used by applications.

Audit and revocation

This is another key part of security, and follows on from real-time feedback. A vital area that needs to be addressed is the ability to see which services and data have been accessed in the past and which application accessed them. It should be possible to revoke access to individual services or devices based on your changing needs as a user, so that you have genuine control over what your applications get to see and do.

Key design elements

User-facing mechanisms for applications to request access to services and data are one obvious thing that needs to be designed for sandboxed applications. We also need to design how feedback will be given when services are being accessed, and so on.

At the same time, application sandboxing also requires that we design new, enabling features. By definition, sandboxed applications are isolated and can have limited permissions. This represents a challenge, since an isolated application must still be able to function and, as a part of this, it needs (mediated, secure) mechanisms for basic functions, like importing content, or passing content items to other apps. This is the positive aspect of the sandboxing user experience.


Sharing is the kind of functionality that is commonly found on mobile platforms: a framework which allows a user to pass content items (images, documents, contacts, etc.) from one app to another (or to a system service). This is one of the positive, enabling pieces of functionality that we want to implement around application sandboxing.

Sharing is important for sandboxing because it provides a secure way to pass content between applications. It means that persistent access to the user’s files will be less important for many applications.


The sharing system is envisaged to work like many others – each application can provide a share action, which passes a content item to the sharing service. The system determines which applications and services are capable of receiving the content item, and presents these as a set of choices to the user.

In the example shown above, an image that is being viewed in Photos is being shared. Other applications that can use this image are then listed in a system-provided dialog window. Notice that online accounts are also able to act as share points in this system, as are system services, like Bluetooth, Device Send (using DLNA), or setting the wallpaper.

Content selection

Content selection plays a similar role to sharing, but in reverse: where sharing allows a user to pass content items from a sandboxed application, content selection is a mechanism that allows them to pull them in.

Content selection has traditionally occurred through the file chooser dialog. There are a number of obvious disadvantages with this approach, of course. First, content items have to be files: you can’t select a contact or a note or an appointment through the file chooser. Second, content items have to be local: content from the cloud cannot be selected.

The traditional file chooser isn’t well-suited to sandboxed applications. Sandboxing implies that applications might not be able to save content to a common location on disk: this means that we need a much more flexible content selection framework.


Content selection should enable content from a range of applications to be selected. Content can be filtered by source application, and it can include items that aren’t files and aren’t even stored locally.

System authorisation

Sharing and content selection are intended to provide (system mediated) mechanisms for opening or sending individual content items from sandboxed applications. When access is required to hardware devices (like cameras or microphones), or permanent access is required to the user’s files or data (such as contacts or the calendar), the system needs to check that access is authorised.

For cases like this, there is little option but to present the user with a direct check – the system needs to present a dialog which asks the user whether the application should have access to the things it wants to have access to. The advantage of posing a direct question at the time of access is that it provides real-time feedback about what an application is attempting to do.



In line with the principles I outlined above, we’re pushing to take the sting out of these dialogs, by phrasing them as relatively friendly/useful questions, rather than as scary security warnings. We’re also exploring ways to make them into useful parts of the user experience, as you can see with the camera example: in this case, the security dialog is also an opportunity to check which microphone the user wants to use, as well as to indicate the input level.

A key requirement for the design is that these access request dialogs feel like they are part of your natural workflow – they shouldn’t be too much of an interruption, and they should feel helpful. One technique we’ll need to use here is to restrict when system authorisation dialogs can be shown, since we don’t want them popping up uninvited. It certainly shouldn’t be possible for an application to pop up an access request while you are using a different application.

Real-time feedback, audit and revocation


As I said above, providing real-time feedback about when certain services are being used is one of the goals of the design. Here, we plan to extend the system status area to indicate when cameras, microphones, location and other services and devices are in use by applications.

We also have designs to extend GNOME’s privacy settings, so that you can see which services and content have been accessed by applications. You will be able to restrict access to these different services, and block individual applications from accessing them.

Pulling it all together

One of the things that I’ve tried to demonstrate in this post is that implementing application sandboxing isn’t just about adding security layers to desktop infrastructure. It also requires that we carefully think about what it will be like for people to use these applications, and that security frameworks be designed with user experience in mind.

We need to think beyond security to actually making sandboxing into a positive thing that users and developers want to use. For me, one of the most exciting things about sandboxing is that it provides the opportunity to add new, powerful features to the GNOME application platform. It can be enabling rather than a purely restrictive technology.

These designs also show that application sandboxing isn’t just about low-level infrastructure. Work needs to be done across the whole stack in order to make sandboxing a reality. This will require a combined effort that we can all participate in and contribute to. It’s the next step for the Free Software desktop, after all.

The designs that I’ve presented here are in the early stages of development. They will evolve as these initiatives progress, and everyone working in this area will have the opportunity to help develop them with us.

EPEL 7 now contains syslog-ng
RHEL 7 was released over a month ago and CentOS 7 not much later, but one piece of software was still missing: syslog-ng. Not any more. EPEL, which stands for Extra Packages for Enterprise Linux, is a software collection containing additional packages for Enterprise Linux and derivatives. Now its latest version, EPEL 7 also contains […]
New package, new branch, new workflow?

If you are a Fedora packager, you are probably aware of the new pkgdb.

One question which has been raised by this new version is: should we change the process for requesting new branches or integrating new packages into the distribution?

The discussion has occurred on the rel-eng mailing list but I'm gonna try to summarize here what the process is today and what it might become in the coming weeks.

Current new-package procedure:
  1. packager opens a review-request on bugzilla
  2. reviewer sets the fedora-review flag to ?
  3. reviewer does the review
  4. reviewer sets the fedora-review flag to +
  5. packager creates the scm-request and set fedora-cvs flag to ?
  6. cvsadmin checks the review (check reviewer is a packager)
  7. cvsadmin processes the scm-request (create git repo, create package in pkgdb)
  8. cvsadmin sets fedora-cvs flag to +
New procedure
  1. packager opens a review-request on bugzilla
  2. reviewer sets the fedora-review flag to ?
  3. reviewer does the review
  4. reviewer sets the fedora-review flag to +
  5. packager goes to pkgdb2 to request new package (specifying: package name, package summary, package branches, bugzilla ticket)
  6. requests added to the scm admin queue
  7. cvsadmin checks the review (check reviewer is a packager¹)
  8. cvsadmin approves the creation of the package in pkgdb
  9. package creation is broadcasted on fedmsg
  10. fedora-cvs flag set to + on bugzilla
  11. git adjusted automatically

Keeping the fedora-cvs flag in bugzilla allows us to perform a regular (daily?) check that there are no fedora-review flags set to + that have been approved in pkgdb but whose fedmsg message hasn't been processed.
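Assuming the two ticket lists have already been exported to plain text files (the file names and ticket numbers below are hypothetical), the comparison itself is a one-liner:

```shell
# Sample data: tickets approved in pkgdb vs. tickets whose fedmsg
# message was already processed.
printf '1001\n1003\n' > pkgdb-approved.txt
printf '1001\n' > fedmsg-processed.txt

# Print tickets approved in pkgdb whose fedmsg message was never
# processed, i.e. whose fedora-cvs flag may still need fixing.
grep -vxF -f fedmsg-processed.txt pkgdb-approved.txt
# prints: 1003
```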

Looking at the numbers, it may seem that the new procedure has more steps, but eventually most of them can be automated.

New branch process

For new branches, the process would be very similar:

  1. packager goes to pkgdb2 to request new branch
  2. requests added to the scm admin queue
  3. cvsadmin checks the request (requester is a packager...)
  4. cvsadmin approves the creation of the branch in pkgdb
  5. branch creation is broadcasted on fedmsg
  6. git adjusted automatically
Firewire + MythTV + XBMC PVR test setup

So I got a bit impatient after that last post and threw together a temporary test system to see how well MythTV+firewire input works these days, and how well XBMC works as a front end to it.

With cables trailing all over the apartment I temporarily hooked up our ancient DCT-6200 to my desktop, installed mythtv-backend and mariadb, and successfully set up my desktop as a temporary MythTV backend with only a couple of hiccups here and there. Connecting the XBMC system to it as a frontend was pretty easy, and XBMC certainly seems like a viable client experience with a bit of button behaviour tweaking and stuff.

Even using my Rawhide desktop as the backend with the storage on my NAS the performance wasn’t bad – just a bit of jerkiness on sports channels – so I think it might be worthwhile throwing together a dedicated setup. I can pick up the newer DCX-3200 boxes capable of tuning h.264 channels pretty cheap on Craigslist, and I specced out a dedicated backend box for a couple hundred bucks, so it shouldn’t break the bank.

July 22, 2014

All systems go
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.
Major service disruption
Service 'The Koji Buildsystem' now has status: major: koji down
USB IR MCE remote wake from suspend with Harmony: the missing piece?

tl;dr: if you’ve read all the references and still can’t get your Harmony or other universal remote to wake up a computer, make sure you use the Media Center Extender or Media Center Keyboard device profile in the Harmony software.

One of the feelings that almost makes up for all the hassle that comes with building your own infrastructure is the one you get when that last little bit of a jigsaw puzzle finally fits into place.

So that Harmony remote I talked about recently is hooked up to (among other things) my HTPC box. The HTPC box has been one of my more successful hardware purchases: I’ve had it running ever since that blog post, and it’s just great, really. XBMC and OpenElec are great projects.

Ever since I set it up, I’ve used a janky USB infrared receiver I got off eBay to control it. It worked fine for a long time, but one thing that never quite worked was that I couldn’t manage to suspend and resume the system from the remote. I can’t recall whether it was on or off that didn’t work with the old one, but one didn’t. I just had to control the power manually, which really isn’t that big of a deal but ate away at me inside, leaving me a hollow, hollow man.

So that receiver started packing up recently; it’d frequently get stuck repeating keys, or just not register when it was plugged in, throwing USB errors in the kernel logs. So I chucked it and replaced it with a ‘genuine’ MCE remote transceiver, a Philips OVU4120 (much like this newer model). I say ‘genuine’ because I bought it off eBay, so who knows, but hey. I set it up with the Philips profile for the Harmony remote, and everything worked fine (as long as I stick the transceiver on a shelf and point it at the wall…yeah, IR is weird), then I thought “hey, maybe on/off from the remote will finally work now!”

Then I tried it, and was a sad bunny when it didn’t. I could suspend the system from the remote, but not wake it up.

Now this is one of those topics where if you DuckDuckGo it, you’ll find some possibly relevant information, and an awful lot of woo-woo. A fairly typical page is this one. I don’t think there’s a lot of woo-woo there, but input from kernel folks who know what the stuff that’s being cargo culted there actually does would be welcome. It does seem like poking /proc/acpi/wakeup and/or /sys/blahblahblah/power/wakeup is sometimes necessary for some folks, to enable wake-from-USB at the kernel level for the relevant USB host interface. I suspect the usbcore.autosuspend reference is ancient now, but I couldn’t say for sure.

None of that applied to me, though. All the entries in /proc/acpi/wakeup and /sys that could possibly be the port which my transceiver was plugged into were definitely set to enabled. I could wake up just fine with a USB keyboard plugged into the same port. I had all the even-possibly-relevant firmware settings I could find set to ‘yes please let me wake up thank you very much’. I had XBMC configured appropriately: wake from actual power-off is rarely going to work, so you want to configure XBMC to suspend when it’s told to shut off; that setting is in System / Settings / System / Power saving / Shutdown function, set it to Suspend. Everything seemed to be pointing to Go, yet my remote obstinately would not wake up the system.
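For reference, the kernel-level checks boil down to something like this (device names such as EHC1 and the USB device path are examples only and vary per machine, so the write commands are left commented out):

```shell
# List the ACPI devices that are allowed to wake the machine.
# The file may be absent in VMs and containers, hence the guard.
if [ -r /proc/acpi/wakeup ]; then
    cat /proc/acpi/wakeup
fi

# Writing a device name back toggles its enabled/disabled state:
#   echo EHC1 | sudo tee /proc/acpi/wakeup

# Individual USB devices also have a per-device switch in sysfs:
#   echo enabled | sudo tee /sys/bus/usb/devices/1-4/power/wakeup
```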


Obviously I couldn’t sleep with things this way, so I decided to try just one other thing: I changed the profile I was using for the transceiver in the Harmony configuration. The MCE remote protocol is a standard of sorts, so there are actually a whole bunch of ‘devices’ in Logitech’s database which are listed as being for various HTPCs or remote controls which are really just sending the standard MCE commands, and you can pick any of them if what you’re actually talking to is an MCE transceiver. As I mentioned above, I’d picked the one that most closely matched the transceiver I actually bought, the Philips OVU4120. But I’d found a note somewhere that only one specific MCE IR code can actually trigger a wake from suspend, and I wondered if somehow the power command in that Philips profile was the wrong one.

Apparently it was! I switched to the “Microsoft Media Center Extender” profile in the Harmony software, sent a power toggle command, and watched in joy as the damn thing finally actually woke up from suspend.

So yup: if you want to both suspend and wake a computer with a USB MCE IR transceiver using a Harmony remote, do all the other stuff you can read about, but also make sure you use the Microsoft Media Center Extender profile and use the power toggle command. I couldn’t find this explicitly noted anywhere else, but it was the bit of the puzzle I was missing.

Happily, in the interim when I’d given up on this working, OpenELEC/XBMC seem to have fixed a bug where the Zotac box I’m using didn’t come back entirely reliably from suspend, and it all seems to be pretty bulletproof now. Whee!

I’m now considering a second attempt at a MythTV-based PVR. I had one more or less up and running for a while, but we got annoyed at only having a single tuner, and there were a few other kinks. In the end I bought a new box from the cable co which has PVR functionality if you plug in an external hard disk, but that’s proved to be even more of a nightmare so I don’t use it any more. It now appears to be the case that I can pick up a Motorola DCX-3200, DCX-3400 or DCX-3510 box reasonably cheap from Shaw or Craigslist, and by all indications those boxes work well with MythTV and firewire control, and Shaw still transmits most channels without the flag that blocks firewire output. I still have an old DCT6200 box in the bedroom, so with that plus two of the 3200s or one of the PVR boxes I’d have three tuners. I can put together a dedicated MythTV backend box (just a couple of big hard disks, some RAM, and firewire inputs are really all it’d need, as there’s no need to transcode firewire-captured video) for $300 or so, and XBMC apparently works well as a MythTV front end these days, so I could use the OpenELEC box as the main front end. Maybe I’ll give the project a go this weekend. If I pick up the DCX-3510, even if the MythTV plan doesn’t work out, I’d have a better ‘official’ PVR box…

Flock: Behind the Scenes 3

I’ve got another set of updates from the Flock organization for you:

Flock apps for BB10 and SailfishOS – Jaroslav Řezník has created a mobile app for those who are using the BlackBerry 10 system (is there anyone out there?). The Jolla phone and its SailfishOS have been quite popular among open source geeks. If you have one, check out an app created by Jozef Mlích; it’s available in OpenRepos. So together with the Android app I wrote about in the first article, we already have three apps. I’m also working on an offline guide for Guidebook.com.

Social events – we finally made a decision about the social events (what, where, when). There will be one on Wednesday and the main one will be on Thursday. We’re also thinking about organizing an unofficial kind of gathering in some pub on Tuesday, where you can come to meet others after you arrive in Prague and get settled.

Printouts – Sirko Kemter is working on the conference booklets. The last thing he was missing was information about the social events, which is now resolved. Ryan Lerch has prepared badges. They will be from the same vendor as last year, produced in the U.S. and brought to Prague. We’re looking for a volunteer who would help us with navigation signs and especially the schedules we will post on the doors of lecture rooms.

And some tips for the promised section “Getting ready for the trip to Flock”:

  • Money – I’ve already been asked by several people what currency they should bring to the Czech Republic. Believe it or not, even though the Czech Republic is a member of the EU, we don’t have the euro. Our currency is the Czech crown (CZK). Would you like to get more familiar with the Czech coins and bills? Download a mobile app released by the Czech National Bank. It will show you all the details and security features. You won’t make a mistake if you bring euros or US dollars, because these are the most widely accepted foreign currencies in exchange offices. Euros are even accepted in some stores, restaurants, and gas stations. GBP or CHF are also fine, though not as common as € or $. You’ll be able to exchange other currencies, too, but you will most likely get worse exchange rates. Payment cards (Mastercard, VISA) are quite widely accepted, and if you need cash you can get it from ATMs, which are on every corner. So I recommend you bring just a little cash with you from home. And prices? The Czech Republic is a fairly cheap country. You can check a list of price samples by expact.cz or prices for tourists in Prague by PriceOfTravel.com.
  • Language – believe it or not, the language of the Czech Republic is not English (I met several people in Asia who were surprised that English is not the (only) native language in Europe), it’s… surprise, surprise… Czech. Czech is a West Slavic language which is very similar to Slovak, fairly similar to Polish and Slovenian, and only remotely similar to Russian and other East Slavic languages. I’ve heard that some Flock attendees have started learning Czech as a nice touch when communicating with locals. Czech is said to be difficult, but read the tips by an Irish polyglot who learned Czech in just 2 months and says it’s not difficult at all! The most common foreign language is English. Almost all people under 30 have learned it at primary and secondary school, but only 10% of the population rate their English proficiency as good. The second most common language is German. It used to compete with English for the status of the first foreign language, but it has been completely overtaken by English in recent years; it is still the second foreign language at most schools, though. Other common foreign languages are French, Spanish, and Italian, but they have far fewer speakers here than English and German. Russian was a mandatory language at schools before 1989, but unfortunately it won’t help you much in the Czech Republic nowadays. Most people who learned it don’t remember it any more, because they learned it because they had to, not because they wanted to, and they never really practiced it.

GTK+ at GUADEC 2014

GUADEC is almost here. GTK+ will be represented with three talks:

  • GTK+, dialogs, the HIG and you  (by me)
  • GTK+ and CSS (by Benjamin)
  • The GTK+ scene graph toolkit (by Emmanuele)

All of these will be on Monday, the 28th. We will also have a GTK+ team meeting on the 31st.

I’ve made a small collage to summarize my talk:

See you all there!

July 21, 2014

So it’s that time of the year! GUADEC is always loads of fun, and meeting all those awesome GNOME contributors in person and listening to their exciting stories and ideas gives me a renewed sense of motivation.

I have two regular talks this year:
  • Boxes: All packed & ready to go?
  • Geo-aware OS: Are we there yet?
Apart from that, I also intend to present a lightning talk titled "Examples to follow". This talk will present stories of a few of our awesome GNOME contributors and what we all can learn from them.

Fedlet update

Since I forgot to actually include any Fedlet news in my last post, here’s some instead!

So I’ve done a few 3.16rc kernel builds in the repo. Modesetting still doesn’t work on my Venue 8 Pro, but various other folks have reported it does work on their hardware.

I did a new image build last week, but I can’t really test it, because of the modesetting fail. However, I did at least boot it in a VM. Or rather, I tried, and it failed miserably.

Fedora 21 is a bit fragile right now, so I think the image is broken due to bugs in Fedora itself. Given that it doesn’t work in a VM and I can’t test it on metal, I’m not willing to put it out, I’m afraid. If you have an installed Fedlet, though, you can grab the latest kernel from the repo, and hopefully you should have accelerated graphics. You’ll want to drop the custom X config package and the kernel parameters that force the video mode.

Intel still hasn’t put out the firmware necessary for the sound to work in the official linux-firmware repository, unfortunately, or released it anywhere under a license that lets me redistribute it, so far as I can tell. I’ve just contacted them to ask about that again.

Groups, permissions and bugspad.

This week was not very productive in terms of the amount of code written, as I was traveling by the grace of Indian railways. I am finally in my hostel room. However, I used this time to plan out things and also test the bugspad instance on the server. I made a script to do so using the mechanize and requests libraries in Python, which I’ll be adding to the scripts section of the repo. I am also working on the permissions stuff on a new branch. Instead of having groups, I am planning to have usertypes, keeping it product centric. This would require a minor change in the schema, as I would be using charfields to denote the user types: for example, “c1″ for users assigned to the group with component id 1, and similarly “p1″ for users with product id 1. I will discuss the missing features and how to go about them with upstream, who is my mentor.

Threats: William the Manager

William is concerned with his group getting their job done. He is under budget pressure, time pressure, and requirements to deliver. William is a good manager – he is concerned for his people and dedicated to removing obstacles that get in their way.

To a large degree William is measured on current performance and expectations for the next quarter. This means that he has little sympathy for other departments getting in the way of his people making the business successful! A lot of his job involves working with other groups to make sure that they meet his needs. And when they don’t, he gets them over-ruled or works around them.

When William does planning – and he does! – he is focused on generating business value and getting results that benefit him and his team. He is not especially concerned about global architecture or systems design or “that long list of hypothetical security issues”. Get the job done, generate value for the company, and move on to the next opportunity.

William sees IT departments as an obstacle to overcome – they are slow, non-responsive, and keep doing things that get in the way of his team. He sees the security team in particular as being an unreasonable group of people who have no idea what things are like in the real world, and who seem to be dedicated to coming up with all sorts of ridiculous requirements that are apparently designed to keep the business from succeeding.

William, with the best of intentions, is likely to compromise and work around security controls – and often gets the support of top management in doing this. To be more blunt, if security gets in the way, it is gone! If a security feature interferes with getting work done, he will issue orders to turn that feature off. If you look at some of my other posts on the value of IT and computer systems, such as Creating Business Value, you will see that, at least in some cases, William may be right.

And this is assuming that William is a good corporate citizen, looking out for the best interests of the company. If he is just looking out for himself, the situation can be much worse.

It is not enough to try to educate William on security issues – for one thing (depending on the security feature), William may be right! The only chance for security is to find ways to implement security controls that don’t excessively impact the business units. And to keep the nuclear option for the severe cases where it is needed, such as saving credit card numbers in plain text on an Internet-facing system. (Yes, this can easily happen – for example, William might set up a “quick and dirty” ecommerce system on AWS if the IT group isn’t able to meet his needs.)

Fedora Rawhide installation with 320 MB RAM


The Anaconda OS installer used by Fedora, RHEL and their derivatives has been criticized many times for its memory requirements being bigger than the memory requirements of the installed OS. That may be a big surprise for users who don’t see too deep into the issue, but it is no surprise for people who really understand what goes on during an OS installation and how it all works. The basic truth (some call it an issue) about OS installation is that the installer cannot write to the physical storage of the machine before the user tells it to do so. However, since OS installation is quite a complex process, and since it has to be based on components from the OS itself, there are many things that have to be stored somewhere for the installer to work. The only space that is guaranteed not to contain data the installer shouldn’t overwrite or leave garbage in is RAM.

Thus, for example, when installing from PXE or with a minimal ISO file (netinst/boot.iso), vmlinuz (the kernel) is loaded into RAM, initrd.img is loaded and extracted into RAM, and squashfs.img, which contains the installation environment (of which the Anaconda installer itself is a part), is loaded into RAM as well, with on-the-fly extraction of the required data. That’s quite a lot of RAM taken, with Anaconda consuming 0 MB as it hasn’t even started yet. On top of that, Anaconda starts, loading a lot of libraries covering all areas of an OS, from storage and packaging over language, keyboard and localization configuration to firewall, authentication, etc.

Tricks to lower RAM consumption

Although all the pieces listed above are crucial for the OS installation to work and to provide users with a high-level UI, there are some tricks that can be done to lower RAM consumption.

kernel + initrd.img

Obviously, the kernel has to stay as it is; in order to support a wide range of HW and SW technologies it cannot be made smaller. The situation of the initrd.img, on the other hand, is quite different. It is loaded and extracted into RAM and used when the system boots into the installation environment and when it reboots into the newly installed OS, but the rest of the time it just lies around taking space and doing nothing. Thus it can be compressed when the boot is finished and then decompressed back when the machine reboots, saving RAM at the time when Anaconda needs it most. It means users have to wait a few seconds (or tens of seconds) longer when (re)booting the machine, but that doesn’t make much difference in a whole OS installation taking tens of minutes.


The squashfs.img containing the installation environment, and thus the Anaconda installer itself, is either loaded into RAM in the case of PXE and minimal/netinst installations, or mounted from a storage device in the case of DVD or USB flash drive installations. See the trick to need less RAM for the installation? Place the squashfs.img on some mountable media and it doesn’t have to be loaded into RAM, saving over 250 MB of RAM 1. In the case of a virtual machine, the easiest way is to put the squashfs.img into some directory, create an ISO file from it by running mkisofs -V SQUASH on it, and then attach the ISO file to the VM. By using -V SQUASH we give the ISO file a label/ID/name which we can then use for identification by passing inst.stage2=hd:LABEL=SQUASH:/squashfs.img to the Anaconda installer as a boot option. For a physical machine the easiest solution is probably a USB drive containing the squashfs.img, with a file system that has a label. A universal solution is an NFS server exporting a directory containing the squashfs.img.
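As a concrete sketch of the VM variant (all paths are examples; a zeroed file stands in for the real squashfs.img, and on current Fedora the tool may be installed as genisoimage):

```shell
# Stage the image in its own directory; copy the real squashfs.img
# from your install tree in place of this stand-in file.
STAGE2=/tmp/stage2
mkdir -p "$STAGE2"
dd if=/dev/zero of="$STAGE2/squashfs.img" bs=1M count=1 2>/dev/null

# Wrap the directory in an ISO labeled SQUASH:
if command -v mkisofs >/dev/null 2>&1; then
    mkisofs -V SQUASH -o /tmp/stage2.iso "$STAGE2"
elif command -v genisoimage >/dev/null 2>&1; then
    genisoimage -V SQUASH -o /tmp/stage2.iso "$STAGE2"
fi

# Attach /tmp/stage2.iso to the VM, then boot the installer with:
#   inst.stage2=hd:LABEL=SQUASH:/squashfs.img
```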

RSS (Resident Set Size)

"The resident set size is the portion of a process’s memory that is held in RAM." (taken from Wikipedia) The RSS of the anaconda process is something around 160/235 MB in text/graphical. Quite a lot you may think, but this 160/235 MB of RAM contains information about all the locales, keyboards, timezones and package groups available plus partitioning layout information and a lot more. You may imagine it as running yum, parted, system-config-keyboard/language/timezone and many other tools together in a single process. Then there is also the anaconda-yum process running the Yum transaction installing packages which takes 120 MB RAM or more depending on the size of the package set.

However, even in this area some tricks can be done to lower memory consumption. One nice example is switching from python-babel to langtable for getting information about available languages and locales and searching for best matches 2, which led to a ~20 MB decrease of Anaconda’s RSS. Another potential trick would be dropping the languages, locales, keyboard layouts and timezones objects from memory once the installation begins and the user can make no further changes, or in text mode, where there is no interactive language/locale and keyboard configuration at all.

It’s also worth running the top utility from a console when the installation environment is fully established and seeing which processes consume the most RAM. Obviously, number one is anaconda, but for example recently dhclient started taking more than 10 MB of RAM, which is quite a lot for a process that basically just sits there and does nothing complicated. A bug report has been filed on dhclient‘s RAM consumption and hopefully it will be fixed soon. Another useful command is du -sh /tmp, because the tmpfs file system mounted on /tmp is used as storage for all data that is stored in the form of files (logs, configuration files, etc.) and also e.g. as a cache for Yum.


A traditional solution for not having enough RAM while not requiring super-high performance is using (more) swap. However, that’s not applicable to an OS installer, which cannot touch disks before the user selects them for the OS installation and clearly confirms that selection. A swap device can be detected by the installer, but it could contain a hibernated system or any other data that shouldn’t be overwritten. And even if the swap was freshly formatted and contained nothing, using it would cause many problems when doing partitioning, because it would be equivalent to a mounted file system that cannot be unmounted (where to put the data from it if RAM is full?).

The only exception is a swap device created as part of the new OS. That one is freshly created or clearly marked to be used by the new OS, so it can be used. But that happens only after its partition or logical volume is created and formatted as swap. It’s useful anyway, because the package installation that happens afterwards requires quite a lot of RAM and space in /tmp (used as a cache), so Anaconda activates such swap devices ASAP. Activating them marks a critical point for the minimum RAM needed for the OS installation: once big swaps located on HDDs/SSDs are activated, there is enough space for basically whatever could be needed by the installer and its components. The critical point is thus right after users confirm they really want to install the OS with the chosen configuration and storage layout, when partitions are created and formatted.

Anaconda zRAM swap

Recently (only a week ago, actually), one more nice trick has been applied to the Anaconda sources in the area of memory requirements: adding a zRAM swap. zRAM is a block device similar to tmpfs, introduced in kernel 3.14. Compared with tmpfs it has two very neat features: it is a block device, and its content is compressed (with the LZO 3 or LZ4 algorithm). It can thus be used in combination with an arbitrary file system as a replacement for tmpfs, potentially making the capacity of such a RAM-located file system bigger thanks to the compression.

But a very clever and neat usage is using it as a swap device as big as the amount of available RAM. Take a few seconds to think about it. See? Compressed RAM! When the kernel runs out of physical memory, which happens basically immediately when something starts allocating the memory nominally covered by the zRAM devices, the kernel starts using swap devices. By giving zRAM swap devices high priority among swaps, it is assured that the kernel uses zRAM swaps first even if there are other swap devices. Since zRAM blocks are located in RAM, the only difference is that memory pages are compressed. That of course requires some CPU cycles, but the LZO compression is really fast, and by dividing the amount of available RAM by the number of available CPUs and creating one zRAM device for each CPU, the compression can easily be made parallel. The result is that, for example, on a machine we use for VMs running Fedora and RHEL installations over and over again, the average compression ratio is between 50 and 60 % without any noticeable CPU performance drop! Nice, isn’t it?

The compression ratio in the Anaconda installer environment has not been investigated yet 4. But what has been tested is that when having the squashfs.img on mountable media, 320 MB of RAM is enough to install Fedora Rawhide in a text mode installation. 400 MB of RAM is enough for a graphical installation if the inst.dnf=1 boot option is used to tell Anaconda it should use DNF instead of Yum (more about that in the next section). The zRAM swap is activated only when less than 1 GB of RAM is available, because with more RAM there is no need for it. Everything would work even without zRAM swap with 1 GB of RAM, but things are much faster with it, because instead of heavy use of swap on HDD/SSD, zRAM is used primarily.
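Outside the installer, the same per-CPU sizing can be sketched by hand; the commented part needs root and the zram module, and the priority value of 100 is just an example of "higher than disk swap":

```shell
# One zram swap device per CPU, splitting available RAM between them.
NCPUS=$(nproc)
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
PER_DEV_BYTES=$(( TOTAL_KB * 1024 / NCPUS ))
echo "plan: $NCPUS zram devices, $PER_DEV_BYTES bytes each"

# The privileged part (as root):
#   modprobe zram num_devices=$NCPUS
#   for i in $(seq 0 $((NCPUS - 1))); do
#       echo "$PER_DEV_BYTES" > "/sys/block/zram$i/disksize"
#       mkswap "/dev/zram$i"
#       swapon -p 100 "/dev/zram$i"   # high priority: used before disk swap
#   done
```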

DNF vs. Yum

With Yum being written completely in Python and DNF using many C libraries for the underlying operations, DNF is expected to require less RAM. The current situation is that it depends on one’s answer to the following question: "What does it mean to require less RAM?" Without the inst.dnf=1 boot option, the amount of memory required by the installation environment when the actual installation begins (the critical point) is ~232 MB, whereas with the inst.dnf=1 option it is 190 MB. Seems like 40 MB less, right? But… (there always is some) shortly before this point there is a peak of memory consumption when DNF does metadata processing and dependency solving, which is over 320 MB and causes Anaconda to be killed by oom_kill with less than 350 MB of RAM (compressed by zRAM). But that reveals the fact that although DNF’s peak RAM usage is 120 MB higher than Yum’s, with zRAM the difference is only 30 MB (320 MB enough with Yum, 350 MB enough with DNF), and thus the extra data can be heavily compressed.

With a graphical installation, 400 MB of RAM is enough with DNF but too little with Yum. That probably looks weird given the paragraph above, but it has a simple explanation: DNF’s peak happens before the peak of the other components taking part in the graphical installation, and 400 MB is enough for all of them, whereas Yum’s peak meets one of the other peaks, grows over 400 MB, and requires at least 410 MB.


Although the memory requirements of a full-featured OS installer have to be quite high, this post tries to show that there are many things that can improve the situation quite a lot. It’s not easy and it needs clever ideas and a wide overview of the system and available technologies, but hey, this is open source; anybody having a great idea or knocking their head on my lack of knowledge: patches and suggestions welcome! There are probably many other things that can be improved, but that usually takes quite a lot of testing and experimenting, which is time consuming, and to be honest, these things don’t have the priority of show-stopper bugs like release blockers or crashes/freezes on traditional systems with standard HW equipment. Maybe with various ARM machines, cloud images and other restricted systems the focus on lowering RAM consumption will increase. Hard to tell; RAM is quite cheap these days. But 400 MB of RAM for a full-featured OS installation is quite nice, isn’t it? Do you know about any other OS installer that requires less RAM and provides an at least somewhat comparable feature set?

  1. the image contains many packages required for the installation, but many unneeded files are pruned (documentation in general, many binaries, libraries, images, etc.) and squashfs provides good compression

  2. more about that can be found in one of my previous posts

  3. which is the default

  4. I’ll add a comment or blog post with such numbers once I have time to determine them

I’m going to GUADEC 2014!

I’m going to GUADEC 2014 in Strasbourg! Last year, I was one of the organizers and I planned to enjoy this year’s GUADEC just as an ordinary attendee. But I’ve assigned myself at least one job: I’ll be covering the conference for Fedora Magazine.

I’m looking forward to meeting other Fedora guys there!

Flock: Behind the Scenes 2

Last week I decided to blog about things from the organization of Flock 2014, so that you can see what’s going on “behind the scenes”. Today, I’ve got another set of things that may be interesting or useful for Flock attendees:

Registration – some of you have noticed that the pre-registration on the conference website is closed and you cannot register for Flock any more. I wasn’t directly involved in closing the registration and setting the deadline, but I suppose it’s because of planning. The whole pre-registration is mainly for planning purposes. We needed to know how many people would attend, so that we could plan social events, lunches, and t-shirt production. Of course, we have to leave ourselves some time and can’t wait till the beginning of the conference. All the mentioned things are currently planned for 250 people, and the number of registered attendees has pretty much reached that. Of course, you can attend Flock even if you’re not registered, it’s free and open to attend, but we can’t promise that you’ll get a t-shirt, lunches, and tickets to the party.

Problems with visas – some sponsored attendees reported problems with getting a Schengen visa. That resulted in rebooking flights, which put even more pressure on our limited budget for this year’s Flock. Ruth Suehle started a discussion on the flock-planning mailing list about how to avoid such situations in the future. There have been several suggestions, and examples of how other projects have solved it. Some people even suggested that we should not sponsor people fully next time. Frankly, the Fedora Project has been quite generous in covering both travel costs and accommodation, even at the cost of additional rebooking. When we organized the Flock Sponsorship Program for EMEA, we set the limit to $200 and told people that they could apply if they thought the amount would help them. Travel itinerary and lodging options are their business.

Cool Guide to Prague – one of my colleagues pointed me to a very interesting guide to Prague (and not only to Prague; it’s a Europe-wide project). It’s called USE-IT and it offers a map of the city with recommended points of interest, Czech phrases, and advice on how to act “like a local”, all put in a fun way. We’d like to give away printed versions at the conference, but you can check it out now and learn something in advance ;)

In one of the future posts, I will share other tips and sources of useful information with you, so that you can get ready for your trip to Flock.

OpenSource Tools for Video Editing

Video editing has become a daily activity both for those who do it professionally and for those who just consider it a hobby. How many of us haven’t wanted to improve a video’s lighting, or simply cut it and merge it again? There are many things we can do when we talk about video editing. The good thing about working with open source is that variety will always be there. Here are some of the most used apps for video editing:

Kdenlive: Short for KDE Non-Linear Video Editor, this is a quite comfortable non-linear editor, and probably the first choice for many people. Despite its simplicity, it allows complete manipulation of your video without lag or crashes. This is the one I always use for my podcast (at least till another app takes the spot).

OpenShot: One of the things I like about this app is its effects. These are shown in an easy graphical window, so adding an effect is as easy as drag and drop. If you’re looking for an app with Instagram-styled effects, this should be your choice.

Pitivi: This is an app that might bring awesome things. PiTiVi allows you to render, import and export videos with basic editing properties; however, they have hired a GSoC student to add effects. Let’s hope everything works out for them and we have an awesome app soon.

Avidemux: This app is the equivalent of raw editing for videos. It is probably the only app that literally runs on anything. It allows basic editing and filters. However, even though it’s quite powerful, its GUI is one of the least attractive on this list.

There are many other apps, such as LiVES, Cinelerra, Lombard, Lightworks, Kino and even Blender; what matters is knowing that there is a huge variety and being able to choose. So far, my favorite remains Kdenlive. This app has proved to be useful and stable over time. What do you think? Which is your favorite? Am I missing one?


What is preloading?

by Jakub Hrozek and Andreas Schneider

The LD_PRELOAD trick!

Preloading is a feature of the dynamic linker (ld). It is available on most Unix systems and allows loading a user-specified shared library before all other shared libraries that are linked to an executable.

Library preloading is most commonly used when you need a custom version of a library function to be called. You might want to implement your own malloc(3) and free(3) functions that perform rudimentary leak checking or memory access control, for example, or you might want to extend the I/O calls to dump data when reverse engineering a binary blob. In this case, the library to be preloaded would implement the functions you want to override with preloading. Only functions of dynamically loaded libraries can be overridden. You’re not able to override a function the application implements by itself or links statically with.

The library to preload is defined by the environment variable LD_PRELOAD, such as LD_PRELOAD=libwurst.so. The symbols of the preloaded library are bound first, before other linked shared libraries.
Let’s look into symbol binding in more detail. If your application calls a function, the linker first looks to see whether it is available in the application itself. If the symbol is not found, the linker checks all preloaded libraries and only then all the libraries which have been linked to your application. The shared libraries are searched in the order given during compilation and linking. You can find out the linking order by calling 'ldd /path/to/my/application'. If you’re interested in how the linker searches for the symbols it needs, or if you want to debug whether the symbol of your preloaded library is used correctly, you can do that by enabling tracing in the linker.

A simple example would be 'LD_DEBUG=symbols ls'. You can find more details about debugging with the linker in the manpage: 'man ld.so'.


Your application uses the function open(2).

  • Your application doesn’t implement it.
  • LD_PRELOAD=libcwrap.so provides open(2).
  • The linked libc.so provides open(2).

=> The open(2) symbol from libcwrap.so gets bound!

The wrappers of the cwrap project, used for creating complex testing environments, use preloading to supply their own variants of several system or library calls, suitable for unit testing of networked software or privilege separation. For example, one wrapper includes its own version of most of the standard API used to communicate over sockets, routing the communication over local sockets.


Add sudo rules to Active Directory and access them with SSSD

Centralizing sudo rules in an identity store such as FreeIPA is usually a good choice for your environment, as opposed to copying sudoers files around: the administrator has one place to edit the sudo rules and the rule set is always up to date. Replication mitigates most of the single-point-of-failure woes, and by using modern clients like the SSSD, the rules can also be cached on the client side, making the client resilient against network outages.

What if your identity store is Active Directory though? In this post, I'll show you how to load sudo rules to an AD server and how to configure SSSD to retrieve and cache the rules. A prerequisite is a running AD instance and a Linux client enrolled to the AD instance using tools like realmd or adcli. In this post, I'll use dc=DOMAINNAME,dc=LOCAL as the Windows domain name.

The first step is to load the sudo schema into the AD server. The schema describes the objects sudo uses and their attributes and is not part of a standard AD installation. In Fedora, the file describing the schema is part of the sudo RPM and is located at /usr/share/doc/sudo/schema.ActiveDirectory. You can copy the file to your AD server or download it from the Internet directly.

Next, launch the Windows command line and load the schema into AD's LDAP server using the ldifde utility:

ldifde -i -f schema.ActiveDirectory -c dc=X dc=DOMAINNAME,dc=LOCAL

Before creating the rule, let's also create an LDAP container to store the rules. It's not a good idea to mix sudo rules into an OU that already stores other objects, like users - a separate OU makes management easier and allows you to set more fine-grained permissions. You can create the sudoers OU in "ADSI Edit" quite easily by right-clicking the top-level container (dc=DOMAINNAME,dc=LOCAL) and selecting "New->Object". In the dialog that opens, select "organizationalUnit", click "Next" and finally name the new OU "sudoers". If you select a different name or a different OU altogether, you'll have to set a custom ldap_sudo_search_base in sssd.conf; the default is ou=sudoers,$BASE_DN.

Now, let's add the rule itself. For illustration purposes, we'll allow the user called 'jdoe' to execute less on all Linux clients in the enterprise.

In my test, I used "ADSI Edit" again. Just right-click the sudoers container, select "New->Object" and you should see sudoRole in the list of objectClasses. Create the rule based on the syntax described in the sudoers.ldap man page; as an example, I created a rule that allows the user called "jdoe" to run less, for instance to be able to inspect system log files.

dn: CN=lessrule,OU=sudoers,DC=DOMAINNAME,DC=LOCAL
objectClass: top
objectClass: sudoRole
cn: lessrule
distinguishedName: CN=lessrule,OU=sudoers,DC=DOMAINNAME,DC=LOCAL
name: lessrule
sudoHost: ALL
sudoCommand: /usr/bin/less
sudoUser: jdoe

The username of the user who is allowed to execute the rule is stored in the sudoUser attribute. Please note that the username must be stored non-qualified, which is different from the usual username@DOMAIN (or DOM\username) syntax used in Windows. For a more detailed description of how sudo rules in LDAP work, refer to the sudoers.ldap manual page.

The client configuration involves minor modifications to two configuration files. First, edit /etc/nsswitch.conf and append 'sss' to the 'sudoers:' database configuration:

sudoers: files sss

If the sudoers database was not present in nsswitch.conf at all, just add the line as above. This modification allows sudo to talk to the SSSD through the libsss_sudo library.
Finally, open the /etc/sssd/sssd.conf file and edit the [sssd] section to include the sudo service:

services = nss, pam, sudo
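
Putting the client pieces together, the relevant parts of sssd.conf might look like the sketch below. The domain section name and the provider options are placeholders - yours come from how the client was enrolled with realmd or adcli, and ldap_sudo_search_base is only needed if you deviated from the default OU:

[sssd]
services = nss, pam, sudo
domains = DOMAINNAME.LOCAL

[domain/DOMAINNAME.LOCAL]
id_provider = ad
sudo_provider = ad
# only needed for a non-default OU name or location:
ldap_sudo_search_base = ou=sudoers,dc=DOMAINNAME,dc=LOCAL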

Then just restart sssd and the setup is done! For testing, log in as the user in question ("jdoe" here) and run:

sudo -l

You should be able to see something like this in the output:
User jdoe may run the following commands on adclient:
(lcl) /usr/bin/less

That's it! Now you can use your AD server as a centralized sudo rule store, with the rules cached and available offline through the SSSD.

July 20, 2014

All systems go
New status good: Everything seems to be working. for services: Ask Fedora, Fedora Packages App
Major service disruption
New status major: We are investigating an outage affecting ask.fp.o and apps.fp.o/packages for services: Ask Fedora, Fedora Packages App
Writing two books with AsciiDoc: Apache Mahout and Scala

For the past few weeks I have been gathering my notes on two topics: Machine Learning with Apache Mahout and Programming in Scala. I have now created two repositories for these books, with the sources in AsciiDoc format. They are available on GitHub.

Read on!

FUDCon Latam 2014 Definitive dates

We have reached an agreement with Universidad de Ciencias Comerciales to set dates, so we can start making announcements and contacting possible sponsors.

FUDCon 2014 Managua will be October 23rd to 25th, 2014.

We also agreed that we will have the following infrastructure at our disposal:

  • Main auditorium (300 people)
  • 2 master classrooms (50 people each)
  • 2 computer labs (30 and 20 seats respectively)
  • 2 classrooms (40 people each)
Preliminary review of the FirefoxOS flame

Last week my FirefoxOS reference phone arrived: the “Flame”. It’s not a super high-end phone, but it is higher-end than any of the previous FirefoxOS phones out there. It’s got all the things they want to test as part of their OS, i.e., dual SIM cards, front and rear cameras, adjustable memory, etc.

First a bit of background: why would I want one of these when I have an Android phone? A number of reasons:

  • It was only $170 and I like tinkering with new hardware.
  • Android has been worrying me more and more over time: core parts of the OS going closed source, things tied more and more to Google’s fiefdom above all else, and less and less done the open source way.
  • I’m going to be traveling to Europe soon for Flock, and my Sprint Android phone won’t work over there, but I can (and did, see below) get a SIM from another provider that does provide text/data roaming there.
  • I like rooting for the underdog, and Android sure isn’t that anymore.

Let’s go over the hardware first. You can see a full list (and also buy one if you like) at everbuying.com ( http://www.everbuying.com/product549652.html ). The phone is nicely solid, if a bit thicker than my Galaxy S3. The screen is pretty bright and nice. Presses are somewhat different than I am used to, but seem to work fine after adjusting a little bit where I press. The battery life so far has been awesome! Leaving it unplugged overnight after some playing around with it leaves me at around 85%. Running a wifi hotspot for about an hour also leaves me at around 85% (so roughly 6 hours of wifi tethering off power). There is only one button on the phone, which definitely takes some getting used to if you come from Android devices, but is doable. The SIM slots and micro-SD work fine and are pretty easy to get to. Wifi works fine as well; it connected right up. Also, once you enable debugging in the developer options, connecting via USB to a Fedora machine works fine and (re)uses the adb and fastboot tools.

On the software end, there are a lot of rough edges. A bit confusing to me is that you can use the ‘I’m thinking of’ search bar to search for something… say “facebook”, and it basically finds web pages related to that. If you press and hold the ‘facebook’ icon, it adds it to your desktop; it’s basically just a launcher for the Firefox browser to load the mobile Facebook site. If you go to the marketplace and search for ‘facebook’ you get a Facebook ‘app’ that’s an html5/js/whatever application, which you can also add to your desktop. They have slightly different icons, and behave slightly differently when run. I guess if I had to make a suggestion there, I would say they should always prefer the ‘app’, since most mobile site views are… not ideal. This is apparent in the G+ mobile site in the browser: you cannot get to communities. There’s just no link to do so. It takes a bit of getting used to to look for a back arrow in applications (since there’s only one button), and if you are in a web session, you have to swipe very gently up from the bottom to get a small toolbar with a back arrow (that was not easy to discover).

I was very happy to see ‘do not track’ options available in settings, and the browser seems to do pretty well on most any site I hit. The camera pictures I have taken haven’t been great; it doesn’t seem to focus very well. I couldn’t get the email client hooked up to my server because I use a CAcert SSL cert, it’s untrusted, and the app has no ‘allow untrusted certs’ option. HERE Maps is the replacement for Google Maps, and it’s actually not too bad. I guess it’s a Nokia thing? Contacts imported from Google just fine. You can also import from Facebook and others. There aren’t many applications in the marketplace yet.
In particular I’d like an OpenVPN client, FreeOTP (although there are some shady-looking OTP clients already), FBReader (there is a passable epub reader, but it’s pretty primitive next to the current generation of Android ebook apps), and a few others.

I picked up a cheap T-Mobile SIM for it (T-Mobile apparently has free roaming in Europe), and it activated fine and works great so far. The SMS app is pretty reasonable, the wifi hotspot app works fine, calls work OK, etc.

The phone is running a prerelease of FirefoxOS 1.3. It sounds like 2.0 (due out later this year) is going to have a major UI rewrite and facelift. It’s not very easy to find one place with all the information on the OS: there are a bunch of different places in the Mozilla world mentioning it, and you can drill down to blocker bugs in their bugzilla, but there’s not much in the way of an accurate schedule for when something is going to land, what’s in it, and who or what is doing the work on those things.

I’m going to try to use it as my primary phone next month when I go to Flock, and should have a good deal more to report then. I think I could use it as my primary phone with some pain, but we will see. I really like the idea of a 100% free Android competitor.

Some Implications of Supporting the Scala drop Method for Spark RDDs

In Scala, sequence data types support the drop method for skipping (aka "dropping") the first elements of the sequence:

// drop the first element of a list
scala> List(1, 2, 3).drop(1)
res1: List[Int] = List(2, 3)

Spark RDDs also support various standard sequence methods, for example filter, as they are logically a sequence of row objects. One might suppose that drop could be a useful sequence method for RDDs, as it would support useful idioms like:

// Use drop (hypothetically) to skip the header of a text file:
val data = sparkContext.textFile("data.txt").drop(1)

Implementing drop for RDDs is possible, and in fact can be done with a small amount of code; however, it comes at the price of an impact on the RDD lazy computing model.

To see why, recall that RDDs are composed of partitions, and so in order to drop the first (n) rows of an RDD, one must first identify the partition that contains the (n-1),(n) row boundary. In the resulting RDD, this partition will be the first one to contain any data. Identifying this "boundary" partition cannot have a closed-form solution, because partition sizes are not in general equal; the partition interface does not even support the concept of a count method. In order to obtain the size of a partition, one is forced to actually compute its contents. The diagram below illustrates one example of why this is so -- the contents of the partitions in the filtered RDD on the right cannot be known without actually running the filter on the parent RDD:


Given all this, the structure of a drop implementation is to compute the first partition, find its length, and see if it contains the requested (n-1),(n) boundary. If not, compute the next partition, and so on, until the boundary partition is identified. All prior partitions are ignored in the result. All subsequent partitions are passed on with no change. The boundary partition is passed through its own drop to eliminate rows up to (n).
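As a rough illustration, the scan for the boundary partition can be modeled in a few lines of plain Python (this is a sketch of the idea only, not the actual Spark code from the pull request; the names here are made up, and partitions are represented as callables to make their computation explicit):

```python
# Model an RDD as a list of partitions, each a callable returning its rows,
# mirroring the fact that Spark partitions are lazily computed.

def drop_rows(partitions, n):
    """Return the partitions with the first n rows removed."""
    remaining = n
    result = []
    boundary_found = False
    for compute in partitions:
        if boundary_found:
            # Partitions after the boundary pass through unchanged
            # (in Spark they would stay lazy; here we just compute them).
            result.append(compute())
            continue
        rows = compute()  # forces computation of this partition
        if remaining < len(rows):
            # Boundary partition: drop the leading rows, keep the rest.
            result.append(rows[remaining:])
            boundary_found = True
        else:
            # Entire partition is dropped; skip it in the result.
            remaining -= len(rows)
    return [p for p in result if p]

parts = [lambda: [1, 2], lambda: [3, 4, 5], lambda: [6]]
print(drop_rows(parts, 3))  # -> [[4, 5], [6]]
```

Each compute() call made before the boundary is found corresponds to eagerly materializing a partition - exactly the departure from the lazy model discussed here.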

The code implementing the concept described above can be viewed here: https://github.com/apache/spark/pull/1254/files

The following diagram illustrates the relation between input and output partitions in a call to drop:


Arguably, this represents a potential subversion of the RDD lazy compute model, as it forces the computation of at least one (and possibly more) partitions. It behaves like a "partial action", instead of a transform, but an action that returns another RDD.

In many cases, the impact of this might be relatively small. For example, dropping the first few rows in a text file is likely to only force computation of a single partition, and it is a partition that will eventually be computed anyway. Furthermore, such a use case is generally not inside a tight loop.

However, it is not hard to construct cases where computing even the first partition of one RDD recursively forces the computation of all the partitions in its parents, as in this example:


Whether the benefits of supporting drop for RDDs outweigh the costs is an open question. It is likely to depend on whether or not the Spark community yields any compelling use cases for drop, and whether a transform that behaves like a "partial action" is considered an acceptable addition to the RDD formalism.

RDD support for drop has been proposed as issue SPARK-2315, with corresponding pull request 1254.

Change the default search engine in Epiphany, the GNOME Web application

When I'm enjoying the sun/wind/rain on the balcony, I tend to use my XO-1.75 for duties where most people would use a tablet. Reading/writing emails, browsing the internet, bug triaging or writing small fixes, release notes and all can be done fine on a small screen. My preference definitely goes towards physical keyboards, and less to their onscreen variants. Even when the keyboard is small, I like typing on it much more than using a touchscreen. Of course, the space saving of not needing to display a keyboard helps too. But well, that aside...

My XO is installed with the stock OLPC distribution, based on Fedora. Sometimes I use the Sugar desktop environment; on other days I'll switch to GNOME (Classic). With GNOME comes the Epiphany browser (recently renamed to Web). Unfortunately Epiphany uses Google as the default search engine, and there is no option in the settings menu to change that. After a little DuckDuckGo'ing, I found a hint that the keyword-search-url can be set with gsettings:

$ gsettings set org.gnome.Epiphany keyword-search-url 'https://duckduckgo.com/?q=%s'

Using the gsettings command works fine, but does not apply the option for all users on the system. I could not find a command to change the system-wide settings, which would help with automatically setting the option after a reinstall. More searching (now directly from the address bar) suggested that I could use a special .gschema.override file. Indeed, the installation of the XO already has some of these .gschema.override files under /usr/share/glib-2.0/schemas/. Drop the following file in that directory:

# filename: /usr/share/glib-2.0/schemas/50_use-duckduckgo.gschema.override
# use https://duckduckgo.com instead of Google for searches from the addressbar


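A minimal override body for this key (assuming DuckDuckGo's standard query URL - adjust to taste) would look like:

[org.gnome.Epiphany]
keyword-search-url='https://duckduckgo.com/?q=%s'
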
After creating the file, the gschemas need to be 'compiled':

# glib-compile-schemas /usr/share/glib-2.0/schemas

Happy searching!

Video of bike data analysis talk

I gave a talk at Spark Summit earlier this month about my work using Apache Spark to analyze my bike power meter data, and the conference videos are now online. You can watch my talk here:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="http://www.youtube.com/embed/I775I0WzeqY" width="560"></iframe>

There were a lot of great talks at Spark Summit; check out the other videos as well!


The first free software event, called FEDORA CLASSROOM EVENT and organized by the Fedora Peru community, was held in the classrooms of Universidad Privada Telesup at its Ate campus in Lima. The event took place after the end of the 2014-I academic term, so that people could attend without the pressure of attendance records or grades; as expected, between 25 and 30 participants showed up.
We decided to run the event barcamp-style, which determined the order of the talks.
The first talk, "Importancia de Fedora en el Aula", was given by me; then Anthony Mogrovejo presented "El camino del SysAdmin"; next Raul Hugo Noriega gave a talk on "Introducción a MongoDB usando Fedora"; afterwards Anthony Mogrovejo continued with his talk "Linux en las empresas", and finally I gave my talk "Cómo usar Fedora y no morir en el intento". Once the talks were over, we moved to an open discussion - comments about work experiences, questions and more questions - with giveaways such as DVDs, hangers and stickers.
We concluded that there is a lot of interest in learning about these technologies, which are barely (if at all) integrated into the students' course contents, and we were asked to hold more talks, including workshops.
In addition, 4 or 5 students were interested in joining the Fedora community, so they were given the relevant information to become part of the team.

July 19, 2014

Ask Fedora – Getting Started and Helping Out

Most of you will know of Ask Fedora - the Askbot instance that we, the Fedora community, set up as another channel for helping out our users. The other pre-existing channels are the many mailing lists, especially user@lists.fp.o, and our various IRC channels, especially #fedora.

Since we deployed Ask Fedora, we’ve seen a healthy rise in its usage. Unfortunately, I don’t have statistics to show for this; I still need to figure out how I can get some. In this post, I’ll introduce Ask Fedora for the benefit of those still unaware of it, and then write a little about how you can help us help yourself and our users via this Q&A forum.

What exactly is Ask Fedora?

Ask Fedora is an instance of the Askbot Q&A website, similar to Stack Exchange if you’ve used that. The idea is for people to ask questions and for anyone who can help to answer them. While asking and answering questions, you earn karma and badges, and you vote on questions and answers; as more and more people use the site and learn in the process, it becomes a self-sustaining knowledge base - new folks can look through existing questions for information. As you continue participating, you earn more karma, and with it more rights on the forum. For example, when you begin, you will not be able to post links - this is an anti-spam measure. However, after you earn a little karma, you’ll gain the right to post them. Similarly, when you’ve earned enough karma, you’ll get moderator privileges, meaning you can close and delete questions and answers.

Getting started with Ask Fedora

We have a nifty video that you can watch to quickly learn the basics:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="480" src="https://archive.org/embed/ask_fedoraproject" width="640"></iframe>

Getting started with Ask Fedora is easy. You use one of the many login methods to create an account as shown in the image below:

Login to Ask Fedora

Once you’ve logged in, you can go ahead and ask your question, or answer other questions. Read on to learn how you can make the most of Ask Fedora.

Best practices at Ask Fedora

Asking and answering questions is incredibly easy when you do it verbally, in person. When you do it on a forum, however, a few simple practices help maintain a standard - people should easily understand both questions and answers, for example. Most of this is documented on our wiki page.

  • Go through the “sticky” posts: We’ve marked a set of good questions as “sticky” questions. Please read through them - you’ll pick up quite a few tips on the way.
  • Search before you post: Search the interweb and then Ask Fedora itself before posting your question. More often than not, someone else may have had the same query and already received an answer.
  • If you have more than one question, post them separately: Do not write a post that says “many Fedora questions” and ask them all together. You’re to ask each question separately.
  • Frame a question: It’s Ask Fedora after all.
  • It is perfectly OK to answer a question that you asked: There’s nothing stopping you from answering your own questions. When you do, please post your answer separately, and not in the question description.
  • Post in the appropriate language specific group: We have support for different languages in Ask Fedora. Make sure you select the appropriate language in the left top of the page. The default is English. Please try and use complete words and correct language constructs – it ensures that everyone can read your statements.
  • File bugs in bugzilla: If it is a package related issue or you are requesting an update for some package, do so directly via bugzilla. If you haven’t done that before, refer to How_to_file_a_bug_report. Remember that Ask Fedora is for troubleshooting, it is not a bug tracker. Quite a few times, someone will point out that your question has uncovered a bug, and that you should report this bug.
  • Ask questions in a useful manner: Here’s a great post that tells you how to ask good questions. Here are some more tips on getting good answers.
  • Provide information to help people help you better: People can’t help you if you do not provide specific information on the issue you face. Here are some commands and logs you should look at for information to provide with your question. Most of these logs are now managed by journalctl:
    • fpaste --sysinfo --printonly: fpaste is a nifty tool that collects a lot of system information for you and pastes it on our paste.fedoraproject.org server. The --printonly option will spew out the output on your terminal so you can add this info to your question
    • lsusb: Information on USB devices
    • lspci: Information on PCI devices
    • dmesg or journalctl -k: Kernel messages
    • uname -a: Current running kernel
    • /var/log/Xorg.0.log: X server errors, for example when you don’t get a display on boot
    • /var/log/messages or journalctl -b: Kernel and more common errors
    • ~/.xsession-errors or journalctl _UID=…: User session errors (~/.xsession_errors isn’t used in Fedora 19 GNOME or later; use journalctl with your user ID, as reported by the id command – and yes, that is an underscore!)
    • /var/log/pm-suspend.log: Suspend/resume logs
    • Google: All of the above logs and commands can be learned from the internet.
  • Use the tools: When you want to post information along with your question, please use the tools, such as quoting, code snippet etc. If folks can’t read your questions quickly, they are not likely to answer them.
  • Use correct tags: Make use of existing tags instead of adding new ones. This increases the visibility of your question. Remember that tags are without spaces, so "fedora16" is not the same as "fedora 16". The latter actually breaks up into two separate tags, "fedora" and "16", which is illogical. Don’t use generic tags like "fedora" or "problem". Every tag has to individually make sense. Tags in Ask Fedora do not need to have a # prefix.
  • Is that an answer?: Only post answers when you’re providing a solution to the asked question. For everything else, comment on the appropriate question/answer. Having a thread of conversation as answers just confuses people looking for information. Use comments. Only post an answer when you have one.
  • Please do not append “[Solved]” to your question summaries: Please do not add the [Solved] keyword to your question summaries. If you received an answer that solved the issue, please mark the answer as correct and reward the helper with karma. There is no need to modify the question summary.
  • Subscribe to questions you ask/answer/comment: A lot of answered questions are not marked so because the original asker hasn’t cared to follow up!
  • Reward your helpers: This is most important. Mark answers as correct, vote up a good question, vote down a bad one, comment for queries. This will make it a better, more knowledgeable forum. Use your votes! You get 30 votes everyday.
  • If you see someone not using the forum correctly, point it out: Comment telling them what they’re doing wrong politely. Hint: Use “Please mark an answer as correct.” instead of “Mark an answer as correct.”
  • Do not take offense at being down voted: Down voting implies that someone doesn’t think your answer is good enough, or complete. This isn’t personal at all. What it actually means is that you have the opportunity to learn something new. Please do not take offense if your answer is down voted. It’s how the forum is designed to work.
  • Last but not least be polite and refrain from ranting: Ranting makes people not want to help you. There isn’t anything else to it. If you’re polite, folks will want to help you. Remember to “be excellent to each other” at all times.

I request you to take out about 15 minutes a day to glance at Ask Fedora. If there’s a question you can answer, please do. The standard of questions and answers will be much better if advanced users from the community answer questions regularly.

Trouble with Ask Fedora

If you run into issues with Ask Fedora, you can inform us by:

Helping Fedora Infrastructure maintain Ask Fedora

The infrastructure team is always looking for new people to help them out. If you’re interested, please take a look at this wiki page and jump right in! If you’re a developer that would like to contribute to Askbot, please take a look at their website upstream.

Cheers! I’ll see you at the forum!

July 18, 2014

SNAKE is no Longer Needed to Run Installation Tests in Beaker

This is a quick status update for one of the pieces of Fedora QA infrastructure and mostly a self-note.

Previously, to control the kickstart configuration used during installation in Beaker, one had to either modify the job XML in Beaker or use SNAKE (bkr workflow-snake) to render a kickstart configuration from a Python template.

SNAKE presented challenges when deploying and using beaker.fedoraproject.org and is virtually unmaintained.

I present the new bkr workflow-installer-test, which uses Jinja2 templates to generate a kickstart configuration when provisioning the system. It is already available in beaker-client-0.17.1.

The templates can make use of all Jinja2 features (as far as I can tell), so you can create very complex ones. You can even include snippets from one template in another if required. The standard context passed to the template is:

  • DISTRO - if specified, the distro name
  • FAMILY - as returned by Beaker server, e.g. RedHatEnterpriseLinux6
  • OS_MAJOR and OS_MINOR - also taken from Beaker server. e.g. OS_MAJOR=6 and OS_MINOR=5 for RHEL 6.5
  • VARIANT - if specified
  • ARCH - CPU architecture like x86_64
  • any parameters passed to the test job with --taskparam. They are processed last and can override previous values.

Installation-related tests at fedora-beaker-tests have been updated with ks.cfg.tmpl templates for use with this new workflow.

This workflow also has the ability to return boot arguments for the installer if needed. If any, they should be defined in a {% block kernel_options %}{% endblock %} block inside the template. A simpler variant is to define a comment line that starts with ## kernel_options:
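
For illustration, a minimal ks.cfg.tmpl could look like the sketch below (the package set and kernel options are invented for the example; only the context variables come from the workflow):

# ks.cfg.tmpl -- install {{ DISTRO }} on {{ ARCH }}
{% if FAMILY == 'RedHatEnterpriseLinux6' %}
# RHEL 6 specific configuration could go here
{% endif %}
%packages
@core
%end
{% block kernel_options %}console=ttyS0{% endblock %}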

There are still a few issues which need to be fixed before beaker.fedoraproject.org can be used by the general public though. I will be writing another post about that so stay tuned.

GSoC - Mock improvements - week 8
Good news: we're merging my changes upstream. It's been a lot of changes and the code wasn't always in the best shape, so I didn't want to submit it before all major features were implemented. Mirek Suchy agreed to do the code review and merge the changes. Big thanks to him for that :)
I've set up a new branch, rebased it on top of the current upstream, and revisited all my code to get rid of changes that were reverted or superseded, or are not yet appropriate for merging. I squashed fixup commits into their original counterparts to reduce the number of commits and changed lines.
The changes that weren't submitted are:
  • C nofsync library, because Mikolaj made a more robust nosync library that is packaged separately and therefore supersedes the bundled one.
    Link: https://github.com/kjn/nosync/
    I did the review: https://bugzilla.redhat.com/show_bug.cgi?id=1118850
    With nosync packaged separately, mock can stay noarch, which gets rid of lots of packaging issues and also saves me a lot of problems with autoconf/automake. Mock doesn't use it yet, because I need to figure out how to make it work correctly in a multilib environment.
  • nofsync DNF plugin - it's an ugly DNF hack and I consider it superseded by the aforementioned nosync library
  • noverify plugin - it's also a DNF hack; I will file an RFE for optional verification in DNF upstream instead
Everything else was submitted, including the LVM plugin. The merging branch is not pushed to GitHub because I frequently need to make changes by interactive rebasing, and force-pushing the branch each time kind of defeats the purpose of SCM.
Other than that I was mostly fixing bugs. The only new features are the possibility of specifying additional command line options to rpmbuild, such as --rpmfcdebug, with the --rpmbuild-opts option, and the ability to override the executable paths for rpm, rpmbuild, yum, yum-builddep and dnf, in order to be able to use different versions of the tools than the system-wide ones.