August 01, 2014

Network Speeds – MTU

I’ve been having an issue with copying data to my NAS for a while and I’ve finally had a chance to take a quick look at it. It’s an old Netgear NAS Duo v1. I was only getting about 0.5 MB/s transfer to it, so I figured something was wrong there. I read a website from someone who had the same issue; their solution was to update the MTU on both the desktop and the NAS, so I thought I would give it a try. OK, so how do I determine the optimal MTU setting for my desktop/NAS?

Well, that’s simple enough: you keep pinging a site with smaller and smaller packets until they stop being fragmented, add 28 to that size, and bam, there you go.

Step 1
Open Command Prompt
Step 2
In the Command Prompt, type ping www.<anywebsite>.com -f -l 1472 and hit Enter. If you get the message 'Packet needs to be fragmented but DF set.', it means the packet is too large to be sent unfragmented.
Step 3
Drop the test packet size down (by 10 or 12 bytes) and test again until you reach a packet size that does not fragment.
Step 4
Once you have a test packet that is not fragmented, increase your packet size in small increments and retest until you find the largest possible packet that doesn’t fragment.
Step 5
Take the maximum packet size from the ping test and add 28. You add 28 bytes because 20 bytes are reserved for the IP header and 8 bytes must be allocated for the ICMP Echo Request header. Remember: You must add 28 to your results from the ping test!

An example:
1440 Max packet size from Ping Test
+ 28 IP and ICMP headers
1468 Your optimum MTU Setting

In this case, if the maximum packet size is 1452, the optimal MTU (Maximum Transmission Unit) is 1452 + 28, i.e. 1480. Here 1452 is the size of the data payload sent, but the MTU is 1480.
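The step-down search in steps 2–4 can be scripted. Here is a minimal sketch for Linux, where ping’s equivalents of the Windows -f -l flags are -M do (set Don’t Fragment) and -s (payload size); the host name is just a placeholder:

```shell
# Find the largest payload that pings without fragmenting, stepping
# down by 10 bytes, then add 28 (20-byte IP header + 8-byte ICMP header).
find_mtu() {
  local payload=1472
  # step the payload down until a ping goes through unfragmented
  while ! ping -c 1 -W 1 -M do -s "$payload" "$1" >/dev/null 2>&1; do
    payload=$((payload - 10))
    [ "$payload" -gt 0 ] || return 1
  done
  # add 20 bytes of IP header and 8 bytes of ICMP header
  echo $((payload + 28))
}

# Usage: find_mtu www.example.com
```

Steps of 10 get you close; you can then narrow down by single bytes, as in step 4, before applying the +28 correction.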

To update your MTU on a Windows machine, you can do it from the command line (I did it from a command prompt with administrative rights):

netsh interface ipv4 set subinterface "Local Area Connection" mtu=1458 store=persistent

There you go, sorted. I updated the MTU on my NAS and now I’m getting 11–12 MB/s, which isn’t that much, but it’s an old NAS so I’ll forgive it.

Answering questions regarding the Fedora Security Team

Wow, I had no idea that people would care about the start of this project.  There seem to be a few questions out there that I’d like to address here to clarify what we are doing and why.

OMG!  Fedora is just getting a security team?  Does this mean Fedora has been insecure this entire time?!?

Umm, no, it doesn’t mean that Fedora has been insecure this entire time.  In all actuality Fedora is in pretty good shape overall.  There is always room for improvement and so we’re organizing a team to help facilitate that improvement.

What exactly is the security team responsible for?

We’re here to help packagers get the patches or new releases that fix vulnerabilities into the Fedora repositories faster.  Most of our packagers are very good at shipping fixes for bugs when upstream rolls a new version of their software.  Bug fixes can usually wait a few days, though, as most aren’t critical.  Security vulnerabilities are a bit different and fixes should be made available as soon as possible.  A little helping hand is never a bad thing and that’s what we’re here to do… help.

Can the security team audit package x?

No.  This may become a service a different team (also falling under the Security SIG) can provide but I/we haven’t gotten there yet.

I read where Fedora has 566 vulnerabilities!  How can you say that Fedora isn’t insecure?

Well, it’s actually 573 right this second.  That’s down from 577 last week.  566 was Monday’s number.  It’s important to not get caught up in the numbers because they are, well, just numbers.  The numbers only deal specifically with the number of tickets open.  Many of the tickets are duplicates, in that the same vulnerability might have several tickets opened for it if the finding affects only certain Fedora and EPEL versions.  Since the same packager is likely responsible for all versions and the same fix can be applied, we can likely close several bugs at a time with minimal work.

I should also point out that the majority of these bugs fall well below the “world is on fire” level of Critical and the “this isn’t good” level of Important.  This doesn’t mean we should just ignore these lower vulnerabilities but rather we should understand that they aren’t something that is likely to be exploited without many other bad things happening.  Should they be fixed?  Yes, but we should probably be more concerned with the Critical and Important vulnerabilities first.  If you’d like to know more about the process for coming up with the severity rating my friend Vincent wrote an excellent article that you should read.

“6. Close bug when vulnerability is shipped in Fedora repos.”

Yeah, that isn’t correct.  This is what happens when I try to multi-task.  Glad I don’t get paid to write….  err… never mind.  Luckily it’s a wiki and someone fixed it for me.  Whew!

(We try to not deliberately release a package with a vulnerability.  It seems people don’t appreciate vulnerabilities in the same way they like other features.  Who’d a thought?)

I’d like to help!  How can I join up?

Go to the Security Team wiki page and look for the link to the mailing list and IRC channels, sign up, join up, and use the work flow to start digging in.  Questions?  Feel free to ask in the IRC channel or on the mailing list.  You can also contact me directly if you can’t otherwise find the answer to your question.

July 31, 2014

Flock (You’re Not Too Late!), Unsigned Packages in F21, Security, Marketing, and Notifications (5tFTW 2014-07-29)

Fedora is a big project, and it’s hard to follow it all. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for July 29th, 2014:

Flock: Missed Registration? No problem!

Flock, Fedora’s big contributor conference, is just a week away, from August 6th to 9th in Prague. Registration closed a while ago, but we’ve gotten several inquiries about the possibility of late sign-up. The answer is: You are still very welcome. Just show up! You won’t get a badge, lunch, or a t-shirt, but those are really the main reasons we ask for preregistration. If you missed your chance for that but want to come anyway, please do!

Unsigned Packages in Fedora 21?

An important notice from Rawhide Watch — a blog devoted to those of us who run Fedora’s bleeding-edge development branch — but about Fedora 21. Earlier this month, F21 split from Rawhide, in order to go through the stabilization process that leads to the alpha, beta, and (coming this fall) official final release. Even though the alpha isn’t ready yet, many testers are trying out the new branch already, and have discovered that some packages are not yet cryptographically signed.

As Adam Williamson explains in the blog post, this is perfectly normal. Until the alpha “freeze” (currently slated for August 12th, after Flock), whenever a packager builds a package, it goes right into the tree, rather than going through the normal updates process. Once we get to that freeze, though, the normal updates system will be activated and package updates will need to be pushed just as they are for stable releases.

Fedora Security Team

Fedora contributor Eric “Sparks” Christensen announced the new Fedora Security Team. Fedora has had a Security Special Interest Group for a long time, and of course emergency security response and everything else you’d expect, but in general, the security update process put the burden on the maintainers of each individual software package in Fedora. The new community team will serve as a new resource for those packagers, working with each package’s upstream project to find the right fix — either a backported patch or identifying an updated version — and with the packager to help get these fixes into place and out to users.

Fedora Marketing

Meanwhile, the Fedora Marketing SIG held a meeting to discuss group organization and plans for the Fedora 21 release. Since Fedora 21 is going to be a little different from business as usual, with the Cloud/Server/Workstation products, we’ve got a lot to think about and plan in order to best promote what we’re doing to the world. Check out the meeting minutes for details and join us on the mailing list if you’re interested in helping out.

Fedora Contributor Notifications

We have a system called fedmsg, the Fedora Infrastructure Message Bus. Many of the tools we use in Fedora send messages to this bus, including the Koji and Copr package build systems, the Fedora Wiki, question and answer site Ask Fedora, Fedora Badges, and more.

From the command line, you can yum install fedmsg and follow all activity with fedmsg-tail --really-pretty (or, of course, less pretty if you prefer). Or, you can use datagrepper to search historical data. Now, the Fedora Infrastructure team is working on Fedora Notifications, which can trigger e-mail or IRC alerts for messages from various apps which match your username or other criteria. Mobile phone app alerts are in the works too.


There’ll be some downtime today or tomorrow; I’m doing exciting moving-servers-around-the-apartment stuff. Try to survive without me!

Threat: Sam the Disgruntled Employee


I’m going to assert that Sam is the second greatest security threat you face. (We will encounter the greatest threat in a few more posts.) Depending on who you talk to, between 60% and 90% of corporate losses due to theft and fraud are from employees, not external threats.

This may be overstated in some areas; a lot of credit card theft and identity theft is external. See, for example, the theft of over 50M credit card numbers at Target. Still, much of real-world theft is internal.

Sam is unhappy with your company. He wants to take from it or cause hurt. Sam may be committing fraud, copying internal documents to take to a competitor, posting damaging information on the Internet, or walking out the door in the evening with a bag full of your products or supplies.

You need both to watch for disgruntled employees and to minimize the damage they can do. Good management and good internal controls are your first line of defense. Constant awareness and vigilance are called for.

Above all, watch the people side. In some cases Sam is simply unethical – you need to find him and remove him. In other cases he is angry – this is often a management issue. In many cases he simply sees an opportunity that he can’t resist; solid internal controls will minimize this risk.

In any case, be aware that your greatest threats are usually inside your company, not outside of it!

Copr update – new features in July 2014

Copr is an easy build service for Fedora which lets you create your own repo very easily. I have been contributing to this project since March 2014, and this month has been the best in terms of new features: you can track your builds much better, it is a bit easier to use, and it has also become much faster in some cases.

Build detail – packages and versions

This is a fairly new view showing all the information about your build’s progress and results. This month’s new feature is the Build packages section, which shows you the names and versions of the resulting packages that have been built.


API extension – build details

The same information as above is also available via API. Have a look at some more details about Copr API.

Overview page – modified buildroots and generated dnf command

Two new things have been added to the project’s overview page, one of which could help you when building Software Collections.

Have you found a project with packages added to the minimal buildroot and want to see a list of them? There is a new button for it: [modified]

And if you decide that you want this repo enabled in your system, just copy the dnf command to your terminal.


Bit of speeding up

There are mass rebuilds of loads of packages in which only some have changed since last time. Copr is now able to skip the unchanged packages almost immediately, as I moved this check out of Mockremote (the build script) to the beginning of the worker process. This means a massive speed-up in some cases.

New build states - skipped and starting

If you build a package successfully and some time later you submit the very same one again, it gets skipped. This saves your time and Copr’s resources, because there is no need to do the same thing again. And from now on it will be marked as Skipped instead of Succeeded, to make it more transparent.

The other state is an intermediate step while spawning a builder. It happens between picking up from the queue and the building process itself.


Optimize your GitHub Issues and 4 tricks and facts you should know about GitHub

I wrote an article, which can be found here, about the new GitHub Issues, web development processes, using visual feedback, and some facts about GitHub (big data, GitHub stats, the xkcd comic ban, and more).

Actually, I am using one old bug from a project (a.k.a. Fjord). It’s a good 4-minute read full of useful stuff and fun.

If you are interested in the tool I am using to optimize GitHub Issues processes, it is available for free for F(L)OSS projects from here, but you should read the article first.

Any thoughts?

HyperKitty at Flock

As most Fedora contributors know, Flock 2014 is just around the corner. I’ll have the pleasure to meet you in the beautiful city of Prague, and I’m sure we’ll have an excellent time together at the talks, the hackfests and the evening events :-)

I’ll be giving a talk about HyperKitty on Wednesday 6th at 17:00 (5PM) in room B286, so make sure you attend if you’re interested in how the Fedora folks are going to talk to each other in the future (and then, the whole Free Software world! Mwahahahaha!).

If you want to be a part of that, if you crave it and want it done faster, or if you have a pet feature you’d like to have, come to the workshop that will take place on Friday, 8 August at 17:00 (5PM) in room T9:302. I’ll show you the basics so you can be productive quickly.

See you there next week!

Webmaps using OpenStreetMap

So, in my little steps towards becoming fully free (as in beer), I decided to change the map in my Places section from Google Maps to OpenStreetMap. There is a whole table in the OpenStreetMap wiki with many libraries you can use to embed OpenStreetMap into a webpage and add a lot of stuff to it.

The one I picked was the one at the top of the list, called Leaflet, which the authors describe as a library that focuses on performance, usability, a simple API, small size, and mobile support. In my experience, it was really easy to implement on my site, and I just needed something simple: a map with a bunch of markers. There is an easy quick-start guide on Leaflet's site, where they use Mapbox for the maps, but if you are like me and want to use OSM, you only need to change the tileLayer to use it; see my example.
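For reference, a minimal sketch of what that tileLayer swap looks like (assuming Leaflet's script and stylesheet are already loaded on the page; the coordinates and popup text are placeholders):

```html
<div id="map" style="height: 300px"></div>
<script>
  // centre the map on a placeholder location
  var map = L.map('map').setView([19.43, -99.13], 12);
  // point the tile layer at OpenStreetMap's tile server instead of Mapbox
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
  }).addTo(map);
  // one of the markers
  L.marker([19.43, -99.13]).addTo(map).bindPopup('A place I visited');
</script>
```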

It didn't take me more than an hour to have the map with all the markers on it. If you want an alternative solution for embedding maps into your site, you can use this opensource one (:


How to Build Your Own Fedora Live Media

Fedora 21, which brings major changes, has been delayed until November, which means the Fedora user base will have to stick with Fedora 20, released at the end of last year, for a whole year. But what if you run into new hardware that the old kernel on the live media doesn't support yet?

Do it yourself! Building Fedora live media that includes the latest updates is actually quite simple.

This article is based on the official wiki (which also has a Chinese version) and briefly walks through the key points.


pkcon install livecd-tools spin-kickstarts

livecd-tools contains the tools for creating a live CD and writing it to a USB stick, while spin-kickstarts contains a large collection of kickstart (KS) templates used for building.

Unlike install media, live media is built by installing the specified packages into a particular directory and then turning that directory into a bootable root filesystem. The process is therefore customized with the same KS files used for unattended Fedora installs.


Building a live CD uses livecd-creator. Because of the peculiar way this tool works, SELinux needs to be temporarily disabled before starting:

su -c 'setenforce 0'

If you're curious, take a close look at the KS templates: /usr/share/spin-kickstarts/ contains templates for all kinds of live media, and the names reveal the dependency relationships. The most important base templates are fedora-live-desktop.ks, fedora-live-base.ks, and fedora-repo.ks.


If you want to get straight to the point and build a live CD with the latest updates, change into the directory where you want the generated ISO and run:

su -c 'livecd-creator --verbose --config=/usr/share/spin-kickstarts/fedora-live-desktop.ks --fslabel=F20x8664-Latest --cache=/var/cache/live'

The options' purposes are clear from their names and need no further explanation. If you change the cache directory, the command can also be run as a regular user.

From the output you can see that it first creates and mounts several loopback ext filesystems under /tmp, then uses yum to fetch RPM packages from a mirror according to the KS file, installs them into those filesystems, and runs some scripted housekeeping such as cleaning the man database. The F20 Desktop image is around 1 GB, so the time this takes depends on your network speed.

The next step, converting those filesystems to squashfs, takes a while because of the compression involved; on my A10-5800K machine it ran at full load for five minutes before finishing. A truly excellent space heater for the height of summer…

Writing to USB

After a patient wait, a brand-new live CD ISO is done. If you built it straight from the official KS files, there's nothing to worry about: it can be turned into a live USB directly.

Plug in a FAT32-formatted USB stick, umount the automatically mounted partition, and run:

su -c 'livecd-iso-to-disk --reset-mbr F20x8664-Latest.iso /dev/sdb1'

The command above assumes the USB stick shows up as sdb1; substitute according to your actual setup.


In fact, after Linus complained that Fedora doesn't release updated install images, Fedora started providing Live-Respins. Respins don't have a very fixed release cycle, but there is roughly one per month. So if you can't wait, just build your own using the method in this article.

Starting from here, you can also try merging in the rpmfusion KS files for further customization; I'll leave that for readers to explore.
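A starting point for that could be a tiny kickstart that pulls in the official template and adds a repo line (a sketch only; the %include path matches the spin-kickstarts package, while the rpmfusion mirrorlist URL is an assumption you should verify):

```
# custom-live.ks -- extend the official desktop template
%include /usr/share/spin-kickstarts/fedora-live-desktop.ks

# extra repo for the build (URL is an assumption -- check rpmfusion's docs)
repo --name=rpmfusion-free --mirrorlist=http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-20&arch=x86_64
```

Pass it to livecd-creator with --config=custom-live.ks instead of the stock template.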


Taskwarrior, Taskwarrior-server and Mirakel – syncing and carrying your taskslist with you

I’d documented using Taskwarrior to manage your tasks recently. I’d only briefly mentioned task server in that post; I hadn’t had the time to set it up. The effect of this was that all my tasks were only on my laptop and I couldn’t access them when I wasn’t at it. The simple solution was to use an Android client that would sync with Taskwarrior — like Mirakel. In this post, I document setting up the task server and then Mirakel. The previous post already documents how one needs to set up and use task. You can use vit, which is an excellent front end to task. It’s available in the Fedora repositories for Fedora 20+:

sudo yum install vit


Taskwarrior server

There’s already a review request up for this. Threebean’s working on it. It should be available in the Fedora repositories shortly. There’s a copr repo here, but it doesn’t look in sync with the review at the moment. I’ve run a scratch build and put up the rpms here. You can grab them and install them locally for the time being. Once you’ve installed the package, the instructions to set up the server are quite straightforward:

Generate your keys

$ sudo -i #or su -
$ cd /etc/pki/taskd #this is where the scripts are placed
$ ./generate #this will generate a couple of new files
$ ls -l /etc/pki/taskd/
total 60
-rw-------. 1 root root 1476 Jul 31 12:00 ankur_sinha.cert.pem
-rw-------. 1 root root 6799 Jul 31 12:00 ankur_sinha.key.pem
-rw-------. 1 root root 1489 Jul 31 11:57 ca.cert.pem
-rw-------. 1 root root 6789 Jul 31 11:57 ca.key.pem
-rwxr-xr-x. 1 root root 666 Jan 16 2014 generate
-rwxr-xr-x. 1 root root 647 Jan 16 2014
-rwxr-xr-x. 1 root root 787 Jan 16 2014 generate.client
-rwxr-xr-x. 1 root root 878 Jan 16 2014 generate.crl
-rwxr-xr-x. 1 root root 792 Jan 16 2014 generate.server
-rw-------. 1 root root 1521 Jul 31 11:57 server.cert.pem
-rw-------. 1 root root 808 Jul 31 11:57 server.crl.pem
-rw-------. 1 root root 6796 Jul 31 11:57 server.key.pem

When you run the generate script, it’ll generate a client.cert.pem and a client.key.pem. I’ve renamed them to match the user that I’ll create in the next section:

$ mv client.cert.pem ankur_sinha.cert.pem
$ mv client.key.pem ankur_sinha.key.pem

Set up a user

Choose your username and organization. For example, I picked “Ankur Sinha” as my username and “Personal” as the organization.

taskd add org ORGNAME --data /var/lib/taskd
taskd add user ORGNAME USERNAME --data /var/lib/taskd

This will generate a unique key for your user. Please note it down; it is required when you set up your client to sync with the task server. You can have multiple users set up; each will be given a unique key.

Start taskd

It should be as simple as:
sudo systemctl start taskd.service

If this doesn’t work for some reason, try this:
sudo taskd server --data /var/lib/taskd --daemon

Set up your client

You need to copy the client keys to your client’s configuration directory. For example, if you’re using the client and server on the same machine, you need to copy the client certs to ~/.task. In my case, to set up the task client I did:

$ sudo -i
$ cd /etc/pki/taskd
$ cp ankur_sinha*pem ~asinha/.task #client keys
$ cp ca.cert.pem ~asinha/.task #signing certificate
$ chown asinha:asinha ~asinha/.task/*.pem #make sure the files belong to your user only

Configuring task

You need to configure your client to use the credentials that you created, and to point it to your server. You can either modify ~/.taskrc by hand or use the task config command; they both do the same thing. To edit it by hand, I added:

taskd.credentials=Personal/Ankur Sinha/my-long-key


If I'd used the task config command, it'd be this:

$ task config taskd.certificate ~/.task/ankur_sinha.cert.pem
$ task config taskd.key ~/.task/ankur_sinha.key.pem
$ task config taskd.ca ~/.task/ca.cert.pem
$ task config taskd.server localhost:6544 #on Fedora, we use 6544 for taskd
$ task config taskd.credentials 'Personal/Ankur Sinha/my-long-key'
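Whichever route you take, ~/.taskrc should end up with entries like these (the filenames and key below are from my setup; substitute your own):

```
taskd.certificate=~/.task/ankur_sinha.cert.pem
taskd.key=~/.task/ankur_sinha.key.pem
taskd.ca=~/.task/ca.cert.pem
taskd.server=localhost:6544
taskd.credentials=Personal/Ankur Sinha/my-long-key
```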

Sync up!

That's all the setup you need. Now, you run your first sync:

$ task sync init

In the future, you just need to run:

$ task sync

All of this is well documented on the Taskwarrior website.

Setting up Mirakel

Mirakel is quite easy to set up too. You can use the same credentials for the user you created to get Mirakel to sync with your task server. There's one main difference: instead of placing your certificate files in a folder, you need to quote the keys in the configuration file itself. For example, my Mirakel configuration file looks like this:

username: Ankur Sinha
org: Personal
user key: my-long-key
server : your-servers-hostname:6544

# PLACE contents of ~/.task/ankur_sinha.cert.pem here

# PLACE KEY FROM ~/.task/ankur_sinha.key.pem here

# PLACE CONTENTS OF ~/.task/ca.cert.pem here

Once your configuration file is ready, place it on your android device and add a new Mirakel user using this file: Menu > Settings > Sync > Add (button on top right) > Taskwarrior > Select config file.

Select your configuration file

It'll add a new user. You can then play around with the settings and set up your sync frequency, etc. These steps are quite clearly documented upstream. However, they're not tailored to the Fedora rpms, which is why I thought it'd be good to write up fresh instructions.

Now, you have Mirakel up and running:

A couple of things to keep in mind

  • Your credentials need to be correct
  • Your server should be reachable. This implies that the network should be functional, and the port should be open in the firewall. Please note that you may have to specify the zone if you're using firewalld.
  • Check /var/lib/taskd/config to see if Mirakel has permissions to sync. It isn't in the access list by default.
  • The sync is two way. You can add tasks on your phone and they'll be listed in task on your laptop after you sync them all up.

If you run into trouble, check /var/log/taskd.log to start with. It logs accesses, syncs and errors too.

“You’re not allowed to join this video call.”

“You’re not allowed to join this video call.” was the greeting I found while trying to log into my astronomy class tonight.  Thanks to Google and their Hangouts app I’ve missed my last night of classes.  Fantastic.

I blame Google for this, honestly, but I wonder if they are really the problem.  They provide a service that has complex relationships with their other “products”, and they provide it all for “free” to anyone who is willing to sign up (and let them track your every move).  I’m sure they never promised any particular availability (how could they, when they’re using the Internet as a transport layer), so I have no expectation of this thing working… ever.  And this is what happens when, as a society, we continue to embrace proprietary services that are completely out of our control.  Even if there were some sort of agreement that this stuff would work all the time, I would still be sitting here unable to join my class.  Even from my FOSS-running computer, I am at the mercy of our proprietary overlords.  It’s sad.

Banished from GNOME Shell... My sad story u.u

I like my main computer. It's nothing special, an HP Pavilion Slimline s5120la with a few modifications and (of course) Fedora Linux installed. Over its life I've modified it as needed: mostly at the software level, though over the years I've gradually had to get into the hardware too. I started with a new PSU and a new graphics card. I also bought (it hasn't arrived yet, but it's on its way) a new compatible processor, the "top of the line" the motherboard accepts. An SSD for my root partition is also a priority, and I plan to buy it from the same store as the processor if all goes well with the first shipment.

The problem? The RAM.

The computer comes with 4GB of RAM by default, of which (after replacing the factory integrated graphics with the dedicated card I installed) the system has access to a total of 3.9 (before the change it was about 3.4). Obviously, as technology advances, desktop environments and programs need more and more resources, RAM being the most in demand. My plan was (after reviewing my current RAM usage) to upgrade to 8GB, when suddenly I realized something horrifying: the motherboard doesn't support more than 4GB of RAM even with a 64-bit system installed. This is a problem because for my day-to-day use of GNOME Shell, 4GB is getting tight, and besides, I bought the new processor because I intend to virtualize heavy 64-bit systems on the same machine...

If I want to do that, using GNOME Shell is completely off the table, since on 64-bit Fedora 20 (the latest version as I write this) the desktop itself consumes 600 to 700 MB at baseline, and over time, due to processes like tracker (which lets you find files and apps in a snap from the dash), it usually ends up hovering close to 2GB under medium workloads, taking virtualization entirely out of the equation.

Beyond all this (since, looking at my problem objectively, nothing short of a simple window manager will let me virtualize in peace), this computer has fallen victim to planned obsolescence, and to "rescue" it without turning it into a server and/or replacing the motherboard entirely, the only option for now is to migrate to a lightweight desktop environment and stay there, which is exactly what I did by migrating to XFCE:

I could have migrated to a lighter distro (Antergos, for example, has a gnome-shell that claims to be in the 200MB baseline RAM range) instead of staying on Fedora, but right now I have neither the time nor the desire for that maneuver; in that case I might as well buy another machine, or upgrade this one's motherboard, and install Fedora with GNOME Shell either way. So I'd rather trust that XFCE will stay lightweight long enough to let me keep using this computer (which I really do like) without fighting over resources. In my tests so far I have noticed a remarkable change in resource consumption. Over the past days XFCE has behaved quite well under heavy workloads, and its resource management is quite good. I was even able to virtualize an Ubuntu guest with 2GB of RAM assigned and switch between host and guest without much trouble, with the virtual machine consuming resources gradually rather than all at once.

I do have to point out a few things that bother me about XFCE on Fedora: for starters, even though I went through the whole process of setting up my iOS device properly, Thunar won't mount it and neither will any other file manager; to date I don't know why (though I do know it has something to do with gvfs). Simple things like changing the user photo for the login manager become complicated (there's no GUI for it, unlike other desktops or other XFCE distros like Xubuntu 14.04), and sometimes the machine simply won't shut down. Also, even though I've disabled "remember session", for some reason XFCE sometimes remembers it anyway (contrary to what I asked for), opening the previously open programs when starting a fresh session. Sound was another problem I had to solve: sometimes the speakers wouldn't work and sometimes the headphones wouldn't, but disabling HDMI sound in favor of standard analog was all that was needed in the end... The default apps in the applications menu all work except the web browser entry, which won't let you permanently set the browser you're currently using unless it's Midori.

Apart from these glitches, XFCE is a good desktop environment (even if Fedora's build is too "vanilla") and with a bit of customization and patience you can achieve a very efficient workflow, worthy of any other desktop (I have no doubt about that). What does sadden me a little is that Fedora's XFCE 4.10 isn't on par with, say, Xubuntu's in terms of functionality (leaving aside looks, which matter least) when it's the same desktop. Aaanyway, we'll see what fate has in store from here on using XFCE, and whether it (at some point) becomes a relevant alternative for the Fedora community, so that more effort goes into improving the user experience in future versions.
Seminar – The Professional: Computer Technician
The Fedora Project was present at another event, held on July 30, 2014 in Uruçuca, a city in southern Bahia. The Fedora Project took part in the opening seminar of the Computer Technician course at the IFBaiano Uruçuca campus, whose theme was "The Professional: Computer Technician."

The event aimed to bring together students from the 1st to 3rd years of the Computer Technician course and to debate topics related to the technical profession. It began with a round table on the theme "Being a computer technician and the job market." The panel was made up of professor Romeu Menezes, coordinator of the technical course; Getílio Dias, businessman from GD Created Web in Ilhéus; and professor and Fedora Project Ambassador Ramilton Costa Gomes Júnior.

Photo 1 – Round table with Getílio, Ramilton and Romeu

Next came the talk by Fedora Project Ambassador Ramilton Costa Gomes Júnior on "Linux for the computer technician." The talk covered topics such as: what is free software? What is Linux? The job market, and finally, certifications.

Photo 2 – Talk on Linux for the Computer Technician

Photo 3 – Talk on Linux for the Computer Technician

Photo 4 – Students of the Computer Technician course

In the morning, the event closed with another round table on the theme "BEING ETHICAL in the computer technician profession." The panel consisted of professor Ricardo Rosa of the IFBaiano Uruçuca campus and professor Manoel Lopes from the city government of Canavieiras, BA.

In the afternoon, three workshops took place:

  1. Introduction to Linux Fedora – professor and Fedora Project Ambassador (Ramilton Costa Gomes Júnior);
  2. Computer Graphics with Inkscape – professor at IFBaiano Campus Uruçuca (Romeu Menezes);
  3. Graphic Animation – GD Created Web (Getílio Dias).
I would like to highlight that the Fedora Project formalized a partnership with the institution. Under it, IFBaiano Campus Uruçuca makes its infrastructure available so we can set up study groups working on several Fedora subprojects, such as packaging and translation. I thank professor Romeu Menezes for trusting our project, which will have the pleasure of sharing knowledge and thereby contributing to the training of qualified professionals for the job market.
Photo 5 – Lab with 28 machines running Fedora 20.

The Fedora Project also handed out DVDs, buttons and stickers. At the end of the event I can say we planted many seeds that will surely bear fruit. We met our goals and added value to the Fedora Project. I thank the Fedora Project for its support and for believing in our work in the community.

More photos

Tunnel into your libvirt NAT network with SOCKS


My development setup is based on two Fedora boxes: a laptop and a workstation in the Red Hat office. Both machines are powerful enough to run a few VMs, and to make the most of the hardware I wanted to use libvirt virtual networks on both. The Xeon server is better suited to running nested KVM virtual appliances like oVirt or RHEV, while the laptop is good for local testing.

I have two virtual (NAT) libvirt networks called:

  • zzz.lan (server)
  • local.lan (laptop)

My goal was to use them seamlessly during my development. Let’s focus on the local.lan first.

Some time ago I was using libvirt bridges, but since I need to work with DHCP/TFTP/DNS services (I work on The Foreman project), NAT was the only viable option here. The configuration is pretty standard: I created a new NAT network and disabled its DHCP service because I wanted to run my own DHCP server. You can use the "default" network for this purpose.

The key thing is to set up a local caching DNS server, dnsmasq. In my case, I use Google's public DNS servers as the main forwarders:

laptop# cat /etc/dnsmasq.d/caching

laptop# cat /etc/resolv.dnsmasq

laptop# cat /etc/resolv.conf

laptop# chattr +i /etc/resolv.conf
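A caching setup along these lines would match the description above; the file contents below are a hypothetical sketch (the forwarders are Google's well-known public resolvers, as mentioned, and the cache size is an arbitrary choice):

```
# /etc/dnsmasq.d/caching — hypothetical sketch
# don't read /etc/resolv.conf (it points back at dnsmasq itself);
# use a dedicated forwarder list instead
no-resolv
resolv-file=/etc/resolv.dnsmasq
cache-size=1000
listen-address=127.0.0.1

# /etc/resolv.dnsmasq — forwarders used by dnsmasq
nameserver 8.8.8.8
nameserver 8.8.4.4

# /etc/resolv.conf — point the system at the local dnsmasq;
# the "chattr +i" above keeps other tools from overwriting it
nameserver 127.0.0.1
```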

With dnsmasq configured correctly, we can now add a special file that lists all hosts created via libvirt. This file is maintained automatically by libvirt.

laptop# cat /etc/dnsmasq.d/local.lan
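For reference, the local.lan file might look like this (a hypothetical sketch; the addnhosts path is the one libvirt writes for its "default" network, referenced later in this post):

```
# /etc/dnsmasq.d/local.lan — hypothetical sketch
# treat local.lan as a local zone and load libvirt's host list
local=/local.lan/
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
```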

Restart (or start and enable) the service.

laptop# systemctl enable dnsmasq
laptop# systemctl start dnsmasq

Tip: When using a VPN you can add an extra configuration file so that all queries for internal domains go to the internal DNS servers (these IP addresses are not real). We also want to use a public DNS server for the address of the VPN hub itself, otherwise you would need to use an IP address for that service.

laptop# cat /etc/dnsmasq.d/
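A sketch of such a file, using dnsmasq's per-domain server= syntax (the domain names and addresses here are placeholders, not real):

```
# /etc/dnsmasq.d/vpn — hypothetical sketch
# send queries for the internal domain to internal DNS servers
server=/example.corp/10.10.10.1
server=/example.corp/10.10.10.2
# resolve the VPN hub itself via a public DNS server
server=/vpn-hub.example.com/8.8.8.8
```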

Now, every time you create a new VM locally, you only need to tell dnsmasq to re-read the default.addnhosts file.

laptop# virt-install --os-variant rhel6 --vcpus 1 --ram 900 \
    --name myhost.local.lan --boot network --nodisk --noautoconsole

Refreshing is as easy as sending a HUP signal (or doing a service reload):

laptop# pkill -SIGHUP dnsmasq

That’s it. You can connect to your server easily:

laptop# ssh root@myhost.local.lan

I go one step further. When you shut down or restart a VM, libvirt (which also uses dnsmasq for DHCP) can assign it a different IP, so the addnhosts entry is no longer valid. I therefore created a simple script that preallocates a static DHCP entry in libvirt, so restarts don't hurt anymore. The snippet looks something like this:

echo "Removing existing DHCP/DNS configuration from libvirt"
netdump="sudo virsh net-dumpxml default"
virsh_dhcp=$($netdump | xmllint --xpath "/network/ip/dhcp/host[@mac='$MAC']" - 2>/dev/null)
virsh_dns=$($netdump | xmllint --xpath "/network/dns/host/hostname[text()='$FQDN']/parent::host" - 2>/dev/null)
sudo virsh net-update default delete ip-dhcp-host --xml "$virsh_dhcp" --live --config 2>/dev/null
sudo virsh net-update default delete dns-host --xml "$virsh_dns" --live --config 2>/dev/null

while true; do
  AIP="192.168.100.$(( ( RANDOM % 250 ) + 2 ))"
  echo "Checking if random IP $AIP is not in use"
  $netdump | xmllint --xpath "/network/ip/dhcp/host[@ip='$AIP']" - &>/dev/null || break
done

echo "Deploying DHCP/DNS configuration via libvirt for $AIP"
sudo virsh net-update default add-last ip-dhcp-host --xml "<host mac='$MAC' name='$FQDN' ip='$AIP'/>" --live --config
sudo virsh net-update default add-last dns-host --xml "<host ip='$AIP'><hostname>$FQDN</hostname></host>" --live --config

This removes any existing entries, generates a random IP address, checks that it is not already in use, and creates the DHCP (and, in addition, DNS) entries.

Now, this was the easy part. I use the same setup on my server, but I also want to access the zzz.lan VMs from my laptop. For web access this turns out to be quite an easy task. For example, in Chrome you only need to create this file:

laptop# cat ~/proxies.pac
function FindProxyForURL(url, host) {
  if (shExpMatch(host, "*.zzz.lan*")) {
    return "SOCKS5 localhost:8890";
  } else {
    return "DIRECT";
  }
}
Start a dynamic tunnel to the libvirt hypervisor (the server):

laptop# cat .ssh/config
Host server
    DynamicForward 8890

laptop# ssh -N server &>/dev/null &

And start Chrome with this configuration:

laptop# google-chrome --proxy-pac-url=file:///home/lzap/proxies.pac

When working on Foreman development, I also want the application itself to be able to connect to various services (like oVirt or RHEV compute resources). This turns out to be possible too. The key utility is the tsocks wrapper, which is available in Fedora. We can use the very same ssh dynamic tunnel for that.

laptop# cat /etc/tsocks.conf
local =
server =
server_port = 8890

The only difference in the dnsmasq configuration on the server is that I open it to incoming DNS requests, because my laptop needs to resolve hosts from zzz.lan:

server# cat /etc/dnsmasq.d/caching

And of course we need to open firewall port:

server# firewall-cmd --add-service=dns --permanent

Now one more change is needed on the laptop. I need to direct my local dnsmasq daemon to forward zzz.lan queries to the server's dnsmasq:

laptop# cat /etc/dnsmasq.d/zzz.lan
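That file only needs a single forwarding rule (a hypothetical sketch; substitute your server's actual address):

```
# /etc/dnsmasq.d/zzz.lan — hypothetical sketch
# forward all zzz.lan queries to the dnsmasq running on the server
server=/zzz.lan/192.168.1.2
```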

After reloading dnsmasq, I can finally use tsocks:

laptop# tsocks wget -O - http://ovirt34.zzz.lan

So I can start my application allowing it to connect to zzz.lan NAT network:

laptop# tsocks foreman_application

That’s it. Hope you were inspired.

July 30, 2014

Free Linux course from the Linux Foundation
This is the kind of thing I can't help but share: when I heard that the Linux Foundation was going to open a Linux MOOC, I ran to the site to sign up:

The course finally starts next Friday, August 1, so you still have time to enroll:

HowTo: Two different public IPs on a single server

Ok, today, I discovered I am still an idiot.

Yep, I tried to add 2 public networks to one of my CloudSigma servers and one of them didn’t work.

I thought everything but my configuration was to blame (as always). Well, I managed to discover what the problem was and how to fix it.

The problem is that, since there is only one default route, packets going out through eth1 didn't know how to get back to where they came from. This is solved by adding a rule telling the kernel where to look up routing info for those packets:


# first my NIC configuration
## cat /etc/sysconfig/network-scripts/ifcfg-eth0 

## cat /etc/sysconfig/network-scripts/ifcfg-eth1

# my routing table
## ip route dev eth0  proto kernel  scope link  src dev eth1  proto kernel  scope link  src dev eth0  scope link  metric 1002 dev eth1  scope link  metric 1003 
default via dev eth0 

# look up routing info for packets coming from the eth1 network on table 1
ip rule add from tab 1 priority 500

# append to default gateway telling it to look for info on table 1
ip route add default via dev eth1 tab 1

# flush cache
ip route flush cache


So, eth0 is the default route. It is declared in ifcfg-eth0. If I do not declare DEFROUTE=no on eth1, then the last NIC to come up becomes the default route. So I specify which one is the default so I can add rules later.

Then there is eth1, which is on a completely different network. We add the rules needed so that its traffic is looked up in its own table, and we add a default route to that table.

This works ipso facto. I don’t know if it will survive a reboot, but, hey, I know my readers will tell me if it does or not.
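On the reboot question: RHEL-style initscripts read per-interface rule- and route- files when bringing an interface up, so the same policy routing can be persisted roughly like this (a sketch; 203.0.113.0/24 and 203.0.113.1 are placeholder addresses standing in for eth1's public network and gateway):

```
# /etc/sysconfig/network-scripts/rule-eth1 — hypothetical sketch
from 203.0.113.0/24 tab 1 priority 500

# /etc/sysconfig/network-scripts/route-eth1 — hypothetical sketch
default via 203.0.113.1 dev eth1 tab 1
```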

Getting Started with Kubernetes / Docker on Fedora
These are my notes on how to get started evaluating a Fedora / Docker / kubernetes environment.  I'm going to start with two hosts.  Both will run Fedora rawhide.  The goal is to stand up both hosts with kubernetes / Docker and use kubernetes to orchestrate the deployment of a couple of simple applications.  Derek Carr has already put together a great tutorial on getting a kubernetes environment up using vagrant.  However, that process is quite automated and I need to set it all up from scratch.

Install Fedora rawhide using the instructions from here.  I just downloaded the boot.iso file and used KVM to deploy the Fedora rawhide hosts.  My host names are: fed{1,2}.

The kubernetes package provides four services: apiserver, controller, kubelet, proxy.  These services are managed by systemd unit files. We will split the services between the hosts.  The first host, fed1, will be the kubernetes master.  This host will run the apiserver and controller.  The remaining host, fed2, will be a minion and run kubelet, proxy and docker.

This is all changing rapidly, so if you walk through this and see any errors or something that needs to be updated, please let me know via comments below.

So let's get started.

fed1 = 10.x.x.241
fed2 = 10.x.x.240

Versions (Check the kubernetes / etcd version after installing the packages):

# cat /etc/redhat-release
Fedora release 22 (Rawhide)

# rpm -q etcd kubernetes

1. Enable the copr repos on all hosts.  Colin Walters has already built the appropriate etcd / kubernetes packages for rawhide.  You can see the copr repo here.

# yum -y install dnf dnf-plugins-core
# dnf copr enable walters/atomic-next
# yum repolist walters-atomic-next/x86_64
Loaded plugins: langpacks
repo id repo name status
walters-atomic-next/x86_64 Copr repo for atomic-next owned by walters 37
repolist: 37

2.  Install kubernetes on all hosts - fed{1,2}.  This will also pull in etcd.
# yum -y install kubernetes

3.  Pick a host and explore the packages.
# rpm -qi kubernetes
# rpm -qc kubernetes
# rpm -ql kubernetes
# rpm -ql etcd
# rpm -qi etcd

4.  Configure fed1.

Export the etcd and kube master variables so the services know where to go.
# export KUBE_ETCD_SERVERS=10.x.x.241
# export KUBE_MASTER=10.x.x.241

These are my service files for apiserver, etcd and controller.  They have been changed from what was distributed with the package.

Copy these to /etc/systemd/system/, using -Z to maintain the proper SELinux context on them. We will change the files in /etc/systemd/system, leaving the ones in /usr the same.
# cp -Z /usr/lib/systemd/system/kubernetes-apiserver.service /etc/systemd/system/.

# cp -Z /usr/lib/systemd/system/kubernetes-controller-manager.service /etc/systemd/system/.

# cp -Z /usr/lib/systemd/system/etcd.service /etc/systemd/system/.

# cat /etc/systemd/system/kubernetes-apiserver.service
Description=Kubernetes API Server

ExecStart=/usr/bin/kubernetes-apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address= -port=8080 -machines=10.x.x.240


# cat /etc/systemd/system/kubernetes-controller-manager.service
Description=Kubernetes Controller Manager

ExecStart=/usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=$KUBE_ETCD_SERVERS --master=$KUBE_MASTER


# cat /etc/systemd/system/etcd.service
Description=Etcd Server

# etcd logs to the journal directly; suppress double logging


Start the appropriate services on fed1.
# systemctl daemon-reload

# systemctl restart etcd
# systemctl status etcd
# systemctl enable etcd

# systemctl restart kubernetes-apiserver.service
# systemctl status kubernetes-apiserver.service
# systemctl enable kubernetes-apiserver.service

# systemctl restart kubernetes-controller-manager
# systemctl status kubernetes-controller-manager
# systemctl enable kubernetes-controller-manager

Test etcd on the master (fed1) and make sure it's working.
curl -L -XPUT -d value="this is awesome"
curl -L
curl -L

I got those examples from the CoreOS github page.

Open up the ports for etcd and the kubernetes API server on the master (fed1).
# firewall-cmd --permanent --zone=public --add-port=4001/tcp
# firewall-cmd --zone=public --add-port=4001/tcp
# firewall-cmd --permanent --zone=public --add-port=8080/tcp
# firewall-cmd --zone=public --add-port=8080/tcp

Take a look at what ports the services are running on.
# netstat -tulnp

5. Configure fed2

These are my service files.  They have been changed from what was distributed with the package.

Copy the unit files to /etc/systemd/system/. and make edits there. Don't modify the unit files in /usr/lib/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-kubelet.service /etc/systemd/system/.

# cp -Z /usr/lib/systemd/system/kubernetes-proxy.service /etc/systemd/system/.


# cat /etc/systemd/system/kubernetes-kubelet.service
Description=Kubernetes Kubelet

ExecStart=/usr/bin/kubernetes-kubelet --logtostderr=true -etcd_servers=http://10.x.x.241:4001 -address=10.x.x.240 -hostname_override=10.x.x.240


# cat /etc/systemd/system/kubernetes-proxy.service
Description=Kubernetes Proxy

ExecStart=/usr/bin/kubernetes-proxy --logtostderr=true -etcd_servers=http://10.x.x.241:4001


Start the appropriate services on fed2.
# systemctl daemon-reload

# systemctl enable kubernetes-proxy.service
# systemctl restart kubernetes-proxy.service
# systemctl status kubernetes-proxy.service

# systemctl enable kubernetes-kubelet.service
# systemctl restart kubernetes-kubelet.service
# systemctl status kubernetes-kubelet.service

# systemctl restart docker
# systemctl status docker
# systemctl enable docker

Take a look at what ports the services are running on.
# netstat -tulnp

Open up the port for the kubernetes kubelet server on the minion (fed2).
# firewall-cmd --permanent --zone=public --add-port=10250/tcp
# firewall-cmd --zone=public --add-port=10250/tcp

Now the two servers are set up, so let's kick off a sample application.  In this case, we'll deploy a web server to fed2.  Start off by creating a file in root's home directory on fed1 called apache.json that looks like this:
# cat apache.json
{
  "id": "apache",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "apache-1",
      "containers": [{
        "name": "master",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "apache"
  }
}

This json file describes the attributes of the application environment.  For example, it gives it an "id", "name", "ports", and "image".  Since the fedora/apache image doesn't exist in our environment yet, it will be pulled down automatically as part of the deployment process.  I have seen errors, though, where kubernetes was looking for a cached image.  In that case, a manual "docker pull fedora/apache" seemed to resolve it.
For more information about which options can go in the schema, check out the docs on the kubernetes github page.

Now, deploy the fedora/apache image via the apache.json file.
# /usr/bin/kubernetes-kubecfg -c apache.json create pods

You can monitor progress of the operations with these commands:
On the master (fed1) -
# journalctl -f -xn -u kubernetes-apiserver -u etcd -u kubernetes-kubelet -u docker

On the minion (fed2) -
# journalctl -f -xn -u kubernetes-kubelet.service -u kubernetes-proxy -u docker

This is what a successful expected result should look like:
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
I0730 15:13:48.535653 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:08.538052 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:28.539936 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:48.542192 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:08.543649 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:28.545475 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:48.547008 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:16:08.548512 27880 request.go:220] Waiting for completion of /operations/8
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache / name=apache

After the pod is deployed, you can also list the pod.
# /usr/bin/kubernetes-kubecfg list pods
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache 10.x.x.240/ name=apache
redis-master-2 dockerfile/redis 10.x.x.240/ name=redis-master

You can get even more information about the pod like this.
# /usr/bin/kubernetes-kubecfg -json get pods/apache

Finally, on the minion (fed2), check that the service is available, running, and functioning.
# docker images | grep fedora
fedora/apache latest 6927a389deb6 10 weeks ago 450.6 MB

# docker ps -l
d5871fc9af31 fedora/apache:latest / 9 minutes ago Up 9 minutes k8s--master--apache--8d060183

# curl http://localhost

To delete the container.
/usr/bin/kubernetes-kubecfg -h delete /pods/apache

That's it.

Of course this just scratches the surface. I recommend you head off to the kubernetes github page and follow the guestbook example.  It's a bit more complicated but should expose you to more functionality.

You can play around with other Fedora images by building from Fedora Dockerfiles. Check here at Github.
Fedora 21 virt test day rescheduled to September 10th
Due to Fedora 21 slipping 3 weeks, the virt test day has been rescheduled to September 10th. Landing page is now here:
Fedora now has a Security Team

Eric H. Christensen today officially introduced the newly founded Fedora Security Team on the devel-announce list.

The team's main task is to help package maintainers fix security problems. For example, the team can help a package's upstream develop a patch for an issue, or provide a new release that fixes it. The team then works with the package maintainer to get updated packages out as quickly as possible.

According to Christensen, the Security Team already has over 20 members. With today's announcement, the door is now open to anyone interested in contributing to Fedora's Security Team.

Logitech M570 on Fedora.

The post Logitech M570 on Fedora. appeared first on The Grand Fallacy.

I just bought a new Logitech M570 wireless trackball for use with my Fedora workstation. I favor a trackball over a moving mouse, because it’s easier on the joints, not to mention more practical on a crowded desk. My previous trackball device was a wired Logitech, and it developed a few problems recently. I’ve had it eight years, so I decided I got my money’s worth and could spring for a new one.

The Logitech M570 uses the Logitech Unifying Receiver USB wireless dongle, common to many Logitech devices. You can pair up to six devices to the Unifying receiver that ships with the M570. Most Fedora users will want this device set up with the correct permissions for people who log in on the console. It’s also helpful to be able to query or display battery status.

So here are the steps I recommend to install the Logitech M570 on Fedora. Do these steps before you plug in the receiver or turn on the trackball device. I’m using GNOME 3.12 on Fedora 20, so your mileage may vary:

  1. You may want to remove your existing pointing device first. Otherwise the new one may not work, at least until you do.
  2. Install solaar (upstream link), a monitoring and control gizmo for your Logitech Unifying Receiver and connected devices. Thank you to Eric Smith for packaging and maintaining this tool for Fedora!
  3. Plug in the receiver to an open USB slot. I recommend a rear slot since you likely won’t move this very often. (If you do, there’s a handy slot inside the trackball’s battery compartment where you can store the receiver without losing it!)
  4. Turn on the Logitech M570, and it should Just Work.
  5. You can launch solaar from the GNOME Shell, and a notification icon appears in the message tray. You can use this tool to see status and pair or unpair devices.
  6. (optional) If you want solaar to start every time you login, open the Terminal and enter these commands:
    $ cd ~/.config/autostart
    $ ln -s /usr/share/applications/solaar.desktop .


A logo & icon for DevAssistant


This is a simple story about a logo design process for an open source project in case it might be informative or entertaining to you. :)

A little over a month ago, Tomas Radej contacted me to request a logo for DevAssistant. DevAssistant is a UI aimed at making developers’ lives easier by automating a lot of the menial tasks required to start up a software project – setting up the environment, starting services, installing dependencies, etc. His team was gearing up for a new release and really wanted a logo to help publicize it. They came to me for help as colleagues familiar with some of the logo work I’ve done.


When I first received Tomas’ request, I reviewed DevAssistant’s website and had some questions:

  • Are there any parent or sibling projects to this one that have logos we’d need this to match up with?
  • Is an icon needed that coordinates with the logo as well?
  • There is existing artwork on the website (shown above) – should the logo coordinate with that? Is that design something you’re committed to?
  • Are there any competing projects / products (even on other platforms) that do something similar? (Just as a ‘competitive’ evaluation of their branding.)

He had some answers :) :

  • There aren’t currently any parent or sibling projects with logos, so from that perspective we had a blank slate.
  • They definitely needed an icon, preferably in all the required sizes for the desktop GUI.
  • Tomas impressively had made the pre-existing artwork himself, but considered it a placeholder.
  • The related projects/products he suggested are: Software Collections, JBoss Forge, and Enide.

From the competition I saw a lot of clean lines, sharp angles, blues and greens, some bold splashes here and there. Software Collections has a logotype without a mark; JBoss Forge has a mark with an anvil (a construction tool of sorts); Enide doesn’t have a logo per se but is part of Node.js which has a very stylized logotype where letters are made out of hexagons.

I liked how Tomas’ placeholder artwork used shades of blue, and thought about how the triangles could be shaped to make up the ‘D’ of ‘Dev’ and the ‘A’ of ‘Assistant’ (similarly to how ‘node’ is spelled out with hexagons for each letter in the node.js logotype). I played around a little bit with the notion of ‘d’ and ‘a’ triangles and sketched some ideas out:


I grabbed an icon sheet template from the GNOME design icon repo and drew this out in Inkscape. This, actually, was pretty foolish of me since I hadn’t sent Tomas my sketches at this point and I didn’t even have a solid concept in terms of the mark’s meaning beyond being stylized ‘d’ and ‘a’ – it could have been a waste of time – but thankfully his team liked the design so it didn’t end up being a waste at all. :)


Then I thought a little about meaning here. (Maybe this is backwards. Sometimes I start with meaning / concept, sometimes I start with a visual and try to build meaning into it. I did the latter this time; sue me!) I was thinking about how JBoss Forge used a construction tool in its logo (Logo copyright JBoss & Red Hat):


And I thought about how Glade uses a carpenter’s square (another construction tool!) in its icon… hmmm… carpenter’s squares are essentially triangles… ! :) (Glade logo from the GNOME icon theme, LGPLv3+):


I could think of a few other developer-centric tools that used other artifacts of construction – rulers, hard hats, hammers, wrenches, etc. – for their logo/icon design. It seemed to be the right family of metaphor anyway, so I started thinking the ‘D’ and ‘A’ triangles could be carpenter’s squares.

What I started out with didn’t yet have the ruler markings, or the transparency, and was a little hacky in the SVG… but it could have those markings. With Tomas’ go-ahead, I made the triangles into carpenter’s squares and created all of the various sizes needed for the icon:


So we had a set of icons that could work! I exported them out to PNGs and tarred them up for Tomas and went to work on the logo.

Now why didn’t I start with the logo? Well, I decided to start with the icon because the icon had the most constraints on it – there are certain requirements in terms of the sizes a desktop icon has to read at, and I wanted it to fit in with the style of other GNOME icons… so I figured, start where the most constraints are, and it’s easier to adapt what you come up with there in the arena where you have fewer constraints. This may have been a different story if the logo had more constraints – e.g., if there was a family of app brands it had to fit into.

So logos are a bit different than icons in that people like to print them on things in many different sizes, and when you pay for printed objects (especially screen-printed T-shirts) you pay for color, and it can be difficult to do effects like drop shadows and gradients. (Not impossible, but certainly more of a pain. :) ) The approach I took with the logo, then, was to simplify the design and flatten the colors down compared to the icon.

Anyhow, here’s the first set of ideas I sent to Tomas for the logomark & logotype:


From my email to him explaining the mockups:

Okay! Attached is a comp of two logo variations. I have it plain and flat in A & B (A is vertical, and B is a horizontal version of the same thing.) C & D are the same except I added a little faint mirror image frame to the blue D and A triangles – I was just playing around and it made me think of scaffolding which might be a nice analogy. The square scaffolding shape the logomark makes could also be used to create a texture/pattern for the website and associated graphics.

The font is an OFL font called Spinnaker – I’ve attached it and the OFL that it came with. The reason I really liked this font in particular compared to some of the others I evaluated is that the ‘A’ is very pointed and sharp like the triangles in the logo mark, and the ratio of space between the overall size of some of the lowercase letters (e.g., ‘a’ and ‘e’) to their enclosed spaces seemed similar to the ratio of the size of the triangles in the logomark and the enclosed space in the center of the logomark. I think it’s also a friendly-looking font – I would think an assistant to somebody would have a friendly personality to them.

Anyway, feel free to be brutal and let me know what you think, and we can go with this or take another direction if you’d prefer.

Tomas’ team unanimously favored the scaffolding versions (C&D), but were hoping the mirror image could be a bit darker for more contrast. So I did some versions with the mirror image at different darknesses:


I believe they picked B or C, and…. we have a logo.

Overall, this was a very smooth, painless logo design process for a very easy-going and cordial “customer.” :)

Announce: gerrymander 1.3 “Any history of sanity in the family?” – a client API and command line tool for gerrit

I’m pleased to announce the availability of a new release of gerrymander, version 1.3. Gerrymander provides a python command line tool and APIs for querying information from the gerrit review system, as used in OpenStack and many other projects. You can get it from pypi

# pip install gerrymander

Or straight from GitHub

# git clone git://

If you’re the impatient type, then go to the README file which provides a quick start guide to using the tool.

This release contains a mixture of bug fixes and two new features. When displaying a list of changes, one of the fields that can be shown per change is the approvals. This is rendered as a list of all the -2/-1/+1/+2 votes made against the current patch set, and the text is coloured to make it easier to tell at a glance what the overall state of the change is. There were two problems with this: first, when there were a lot of votes on a change, the list got rather too wide. The bigger problem, though, has been the high level of false failures in the OpenStack CI testing system. This results in many patches receiving -1s from testing, which caused gerrymander to colour them red:

| URL                                 | Subject                                               | Created  | Approvals             |
|  | Power off commands should give guests a chance to ... | 186 days | w= v=1,1,1,1 c=-2,-1  |
|  | Support image property for config drive               | 152 days | w= v=1,-1,-1,-1 c=-1  |
|  | Fixes a Hyper-V list_instances localization issue     | 128 days | w= v=1,-1 c=-1        |
|  | Allow deleting instances while uuid lock is held      | 104 days | w= v=1,1,1,1 c=2      |
| | Fixes Hyper-V agent force_hyperv_utils_v1 flag iss... | 12 days  | w= v=1,1,1,-1 c=1,1,1 |

My workflow is to focus on changes that do not have negative feedback, so I found this was discouraging me from reviewing things that were only marked negative due to bogus CI failures. So in this new release, the display uses separate columns to report test votes, code review votes and workflow votes, each column coloured separately. Also, instead of showing each individual vote, we only show the so-called “casting vote” – i.e. the one that’s most important (the order is -2, +2, -1, +1).

| URL                                 | Subject                                               | Created  | Tests | Reviews | Workflow |
|  | Power off commands should give guests a chance to ... | 186 days | 1     | -2      |          |
|  | Support image property for config drive               | 152 days | -1    | -1      |          |
|  | Fixes a Hyper-V list_instances localization issue     | 128 days | -1    | -1      |          |
|  | Allow deleting instances while uuid lock is held      | 104 days | 1     | 2       |          |
| | Fixes Hyper-V agent force_hyperv_utils_v1 flag iss... | 12 days  | -1    | 1       |          |
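The casting-vote selection described above can be sketched in Python (a hypothetical helper for illustration, not gerrymander's actual code):

```python
# Pick the "casting vote" from a change's votes: the most important
# value present wins, in the order -2, +2, -1, +1 (a veto beats an
# approval, and strong votes beat weak ones).
PRECEDENCE = [-2, 2, -1, 1]

def casting_vote(votes):
    """Return the single vote that determines the column's colour,
    or None if there are no votes at all."""
    for value in PRECEDENCE:
        if value in votes:
            return value
    return None
```

For example, a change with votes [1, 1, -1, 1] would display as -1, while [1, 2, -1] would display as 2.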

The second new feature is the ‘patchreviewrates’ command which is reports on the review comment activity of people over time. We already have ‘patchreviewstats’ command which gives information about review activity over a fixed window, but this doesn’t let us see long term trends. With the new command we’re reporting on the daily number of review comments per person, averaging over a week, and reported for the last 52 weeks. This lets us see how review activity from contributors goes up and down over the course of a year (or 2 dev cycles). I used this to produced a report which I then imported to LibreOffice to create a graph showing the nova-core team activity over the past two cycles (click image to enlarge)

Nova core team review rates


In summary, the changes in version 1.3 of gerrymander are:

  • Exclude own changes in the todo lists
  • Add CSV as an output format for some reports
  • Add patchreviewrate report for seeing historical approvals per day
  • Replace ‘Approvals’ column with ‘Test’, ‘Review’ and ‘Workflow’ columns in change reports
  • Allow todo lists to be filtered per branch
  • Reorder sorting of votes to prioritize +2/-2s over +1/-1s
  • Avoid exception from unexpected approval vote types
  • Avoid creating empty cache file when Ctrl-C’ing ssh client
  • Run ssh in batch mode to avoid hang when host key is unknown

Thanks to everyone who contributed patches that went into this new release

Controlling access with smart cards

Smart cards are increasingly used in workstations as an authentication method. They are mainly used to provide public key operations (e.g., digital signatures) using keys that cannot be exported from the card. They also serve as data storage, e.g., for the certificate corresponding to the key. In RHEL and Fedora systems, low-level access to smart cards is provided by the pcsc-lite daemon, an implementation of the PC/SC protocol defined by the PC/SC industry consortium. In brief, the PC/SC protocol allows the system to execute certain pre-defined commands on the card and obtain the result. The pcsc-lite implementation uses a privileged process that handles direct communication with the card (e.g., using the CCID USB protocol), while applications communicate with the daemon using the SCard API. That API hides the underlying communication between the application and the pcsc-lite daemon, which is based on Unix domain sockets.

However, there is a catch. As you may have noticed, there is no mention of access control in the communication between applications and the pcsc-lite daemon. That is because it is assumed that the access control included in smart cards, such as PINs, pinpads, and biometrics, is sufficient to counter most threats. That isn’t always the case. As smart cards typically contain embedded software in the form of firmware, there will be bugs that can be exploited by a malicious application, and even when known, these bugs are neither easy nor practical to fix. Furthermore, there are often public files (e.g., without the protection of a PIN) present on a smart card that, while intended for the smart card user, should not necessarily be accessible to all system users. Even worse, certain smart cards allow any user of a system to erase all smart card data by re-initializing the card. All of this led us to introduce additional access control to smart cards, on par with the access control used for external hard disks. The main idea is to provide fine-grained access control on the system, and to specify policies such as “the user on the console should be able to fully access the smart card, but no other user should”. For that we used polkit, a framework used by applications to grant access to privileged operations. The reason for this decision is mainly that polkit has already been successfully used to grant access to external hard disks, and unsurprisingly the access control requirements for smart cards share many similarities with removable devices such as hard disks.

The pcsc-lite access control framework is now part of pcsc-lite 1.8.11 and will be enabled by default in Fedora 21. The advantages it offers are that it can prevent unauthorized users from issuing commands to smart cards, and prevent unauthorized users from reading, writing or (in some cases) erasing any public data from a smart card. The access control is imposed during session initialization, thus keeping any potential overhead to a minimum. The default policy in Fedora 21 will treat any user on the console as authorized, since physical access to the console implies physical access to the card, but remote users, e.g., via ssh, or system daemons will be treated as unauthorized unless they have administrative rights.

Let’s now see how the smart card access control can be administered. The system-wide policy for pcsc-lite daemon is available at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. That file is a polkit XML file that contains the default rules needed to access the daemon. The default policy that will be shipped in Fedora 21 consists of the following.

  <action id="org.debian.pcsc-lite.access_pcsc">
    <description>Access to the PC/SC daemon</description>
    <message>Authentication is required to access the PC/SC daemon</message>
    <defaults>
      <!-- console (active) users allowed; others require admin authentication -->
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

  <action id="org.debian.pcsc-lite.access_card">
    <description>Access to the smart card</description>
    <message>Authentication is required to access the smart card</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

The syntax format is explained in more detail in the polkit manual page. The pcsc-lite relevant parts are the action IDs. The action with ID “org.debian.pcsc-lite.access_pcsc” contains the policy for accessing the pcsc-lite daemon and issuing commands to it, i.e., accessing the Unix domain socket. The second action, with ID “org.debian.pcsc-lite.access_card”, contains the policy for issuing commands to smart cards available to the pcsc-lite daemon. That distinction allows, for example, programs to query the number of readers and cards present, but not issue any commands to them. Under both policies only active (console) processes are allowed to access the pcsc-lite daemon and smart cards, unless they are privileged processes.

Polkit is quite a bit more flexible, though. With it we can provide even more fine-grained access control, e.g., per card reader. For example, if we have a web server that utilizes a smart card, we can restrict it to use only the smart cards under a given reader. These rules are expressed in JavaScript and can be added in a separate file in /usr/share/polkit-1/rules.d/. Let’s now see what the rules for our example would look like.

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_pcsc" &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_card" &&
        action.lookup("reader") == 'name_of_reader' &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

Here we add two rules. The first one allows the user “apache”, which is the user the web server runs as, to access the pcsc-lite daemon. That rule explicitly allows access to the daemon because under our default policy only the administrator and the console user can access it. The second rule allows the same user to access the smart card reader identified by “name_of_reader”. The name of the reader can be obtained using the commands pcsc_scan or opensc-tool -l.

With these changes to pcsc-lite we manage to provide reasonable default settings for smart card users that apply to most, if not all, typical uses. These defaults increase the overall security of the system by denying non-authorized users access to the smart card firmware, as well as to its data and operations.

Crop an image with a defined aspect ratio: Darktable Vs. Gimp

Sometimes the easiest things are the ones we care about the least, and those little tricks are the ones that make the difference when you need to edit an image quickly.

Today’s podcast includes a short explanation of what an aspect ratio is, the most common sizes for photography, and how to use them. The podcast also covers how to crop an image with Darktable and with GIMP since, as I always say, the good thing about Linux is its variety.

Recently, every time we are asked to make a wallpaper for a distribution, one of the most important requirements, beyond its format, is a 16:9 aspect ratio. That’s why, if you’re thinking of contributing to a project by sending in your wallpaper, this short tutorial will help you.

I hope this tutorial was useful, and if you also liked the image used, don’t forget to click the post where you can find the wasp wallpapers.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"></iframe>


Open a URL Highlighted from Anywhere on Your Desktop with This Quick Tip for Fedora

Sometimes when I am using certain applications (especially text editors), the applications themselves do not make written-out URLs clickable and openable in my default browser. Usually, this would mean having to highlight the link, copy it to the clipboard, switch to my web browser, open a new tab, paste the link and go.

However, the quick hack for Fedora described in this tutorial will set up a keyboard shortcut so you can simply highlight the link text, do a keyboard shortcut, and the link will open up in your default browser.

Step 1 — Download xsel

First up, we need to download a little command-line utility called xsel that will let us get at the selection from a bash script. Install xsel with the command:

sudo yum install xsel

Step 2 — create the script in your user’s bin directory

If you don’t already have it, create a directory called bin in the .local directory in your home directory:

mkdir ~/.local/bin

Then edit a new file in ~/.local/bin (I called mine openindefaultbrowser):

gedit ~/.local/bin/openindefaultbrowser

Now, add the following code to the openindefaultbrowser file, and save it.


#!/bin/bash
xdg-open "$(xsel -p)"

Now, finally, make the script executable with the command:

chmod +x ~/.local/bin/openindefaultbrowser

Step 3 — bind your new script to a keyboard shortcut

Open up the keyboard preferences, and switch to the “shortcuts” tab. Click “Custom Shortcuts” from the list on the left, and then click the + button to add a new shortcut. Name the shortcut whatever you want and put the name of your script in the command field and press add.


Now, click on the shortcut, and assign a keyboard combo (I chose Super+F).

Ready to go

Your new shortcut should be ready to go. To use it, highlight (select) the text of a URL, press your keyboard shortcut, and your web browser should open up with the link you highlighted. Note that this script is not smart: if you highlight something that is not a URL and press the shortcut, it will try to open it as if it were a URL :)
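If you wanted to guard against that, the script could validate the selection before opening it. Here is one possible approach (a sketch in Python; the regex and function names are my own, not part of the tip above):

```python
import re
import subprocess

def looks_like_url(text):
    """Very rough check that the selected text is a single http(s) URL."""
    return bool(re.match(r"^https?://\S+$", text.strip()))

def open_selection():
    # Read the X primary selection (whatever you last highlighted) via xsel.
    selection = subprocess.run(["xsel", "-p"],
                               capture_output=True, text=True).stdout
    # Only hand it to xdg-open if it plausibly is a URL.
    if looks_like_url(selection):
        subprocess.run(["xdg-open", selection.strip()])
```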


How to set up syslog-ng quickly for performance monitoring using Graphite inside Docker?
For most of its history, syslog-ng could only be used for collecting, processing and storing log messages. Not any more. The Redis and Riemann destinations are already a step in the direction of metrics-based monitoring, and the monitoring source combined with Graphite template support is the next. Installing Graphite can be a huge task, if […]
Fewer auth dialogs for Print Settings

The latest version of system-config-printer adds a new button to the main screen: Unlock. This is a GtkLockButton attached to the “all-edit” polkit permission for cups-pk-helper.

The idea is to make it work a bit more like the Printing screen in the GNOME Settings application. To make any changes you need to click Unlock first, and this fetches all the permissions you need in order to make any changes.

Screenshot from 2014-07-30 10:20:43

This is a change from the previous way of working. Before this, all user interface elements that made changes were available immediately, and permission was sought for each change required (adding a printer, fetching a list of devices, etc).

Screenshot from 2014-07-30 10:20:55

Hopefully it will now be a little easier to use, as only one authentication dialog will generally be needed rather than several as before.

Exceptions are:

  • the dialog for reading/changing the firewall (when there are no network devices discovered but the user is trying to add a network queue), and
  • the dialog for adjusting server settings such as whether job data is preserved to allow for reprinting

These are covered by separate polkit permissions and I think a LockButton can only be responsible for one permission.

GUADEC 2014, Day Four: Hardware, New IDE for GNOME
This is part four of a four part series covering the GUADEC conference. Check out part one, part two and part three for the full coverage on all days of talks from GUADEC.

The fourth day of GUADEC was mostly devoted to hardware. Attendees learned what it takes to integrate hardware with the desktop, how GNOME does continuous performance testing, how sandboxed apps may access hardware. Builder, a new IDE for GNOME, was introduced and the host city of GUADEC 2015 announced!

Hardware Integration

The fourth day of GUADEC was devoted to hardware and its interaction with the desktop. The first talk was “Hardware Integration, The GNOME Way” by Bastien Nocera, who has been a contributor to GNOME and Fedora for many years. At the beginning he talked about his experience with hardware integration, and it’s really a looong list. He’s worked on nautilus-cd-burner, LIRC, Bluetooth, iOS filesystem access support, fingerprint readers, Wacom support, orientation support, and I’m probably missing some.
The whole talk was more or less about what it takes to properly integrate hardware with the desktop. He mentioned a case back from 2008 which is related to Fedora and showed it as an example of a wrong approach to hardware integration. According to Bastien, GNOME currently handles screens, touchscreens, storage devices, screen rotation, and showing/hiding the cursor well, but there are types of hardware which may be useful on the desktop and are starting to appear on more and more laptops: accelerometers, light sensors, compasses. He says that the hardware situation has improved significantly and a lot of Windows 8 machines have a standard, well documented hardware interface; it just needs to be implemented in Linux.

Performance Testing on Actual Hardware

The next talk was also very much hardware related. Owen Taylor talked about continuous integration performance testing on actual hardware. According to Owen, continuous performance testing is very important. It helps find performance regressions more easily because the delta between the code tested last time and the code tested now is much smaller, so there are far fewer commits to investigate.
He noted that desktop performance testing in VMs is not very useful, which is why he has several physical machines connected to a controller which downloads new builds of GNOME Continuous and installs them on the connected machines. The testing can be controlled by the GNOME Hardware Testing app Owen has created. And what is tested? Here are the currently used metrics: time from boot to desktop, time to redraw the entire empty desktop, time to show the overview, time to redraw the overview with 5 windows, time to show the application picker, time to draw a frame from a test application, time to start gedit. Tests are scripted right in the shell (in JavaScript) and events are logged with a timestamp. The results are then uploaded. In the future, he’d like to have results in the graph linked to particular commits (tests are triggered after every commit), have more metrics (covering also features in apps), and assemble more machines of various kinds (laptops, ARM devices, …).

Owen Taylor


Sandboxing and hardware

The first talk after lunch was also related to hardware. David King talked about the new architecture of Cheese that he’d been working on for some time. Cheese is a photo-booth kind of application and was created by Daniel Siegel in 2007. David took over maintenance of Cheese around the 3.2 release. The UI of the app is still subject to change and there are design changes planned.
But David talked mostly about the underlying technologies. Cheese is a typical multimedia desktop app that accesses physical devices directly (in this case a camera). David is trying to change the architecture in a way that would allow sandboxing: Cheese should not access hardware directly, but via a D-Bus API, so that the access can be controlled. David even has a working prototype, but the problem is that the current implementation of D-Bus is too slow to transfer a video stream. So, like many other initiatives related to sandboxing, this one is also waiting for kdbus/memfds. When that’s in place there are other challenges left, e.g. designing a D-Bus API which will expose all properties and still be generic.

David King


People don’t need a better desktop, they need a different desktop

The last keynote of the conference was delivered by Matthew Garrett, who is a long-time contributor to Fedora and currently a member of the Fedora board. Matthew took a short excursion through the history of desktop systems from the early 80s to the latest trends. According to Matthew, people don’t need a better desktop, they need a different desktop: a desktop where security is a priority concern in OS design, which respects privacy and is open. In his opinion, that desktop is GNOME, because it’s free of corporate control, developed transparently, and developed for the needs of the user.

Matthew Garrett


Builder: a new IDE for GNOME

The last talk of the day was “Builder, a new IDE for GNOME” by Christian Hergert. Christian started the talk by clearly stating what Builder is not intended to be: a generic IDE (use Eclipse, Anjuta, MonoDevelop, … instead). And it most likely won’t support plugins. Builder should be an IDE specializing in GNOME development. Here are some of Builder’s characteristics: components are broken into services and services are contained in sub-processes; it uses basic autotools management; the source editor uses GtkSourceView; it has code highlighting, auto-completion, cross-references, change tracking, snippets, auto-formatting, and a distraction-free mode. Vim/Emacs integration may be possible. The UI designer will use Glade and integrate GTK+ Inspector. Builder will also contain a resource manager, a simulator (something similar to Boxes, using OSTree), a debugger, a profiler, and source control.
After naming all of Builder’s characteristics, Christian demoed a prototype. It’s still in very early stages, and looking at the plans I wondered who was going to pull it off. And then Christian came up with a really bold announcement: he is going to quit his job at MongoDB and work on Builder full-time for a year, living off his savings and hoping that he will get some funds in a fundraiser he is planning to try in the fall. Wow, that makes achieving the planned goals much more believable. Good luck, Christian!

Christian Hergert


The End

The core days of GUADEC 2014 ended with 5-minute lightning talks that covered a lot of topics, some completely unrelated to GNOME. BoFs start tomorrow and will be taking place until Friday. But most people leave the conference after the core days, and the second part is only for the focused groups.
And what city will host GUADEC 2015? Gothenburg! Yeah, the Swedish conspiracy has finally been revealed :)


Group Photo (Author: Ekaterina Gerasimova, CC-BY-SA 2.0)

Google Summer of Code 2014: Week 8-10 update


Hello Folks, 

My project is almost complete. In the past weeks we worked on several things and I would like to give an update on the status of my project. This time I have also included a couple of screenshots in the blog post.

To be precise, in the previous weeks I have been working on the following things:
  1. Improve the GUI for the home page. The CSS was inspired by Pinterest. You can see a demo here.
  2. Work on the parser class for the Fedora messaging bus, so that messages sent from Fedora College can be easily parsed.
  3. Add the ability to rate and mark tutorials as favorites. Some screenshots are presented below. This is not currently reflected in the demo, but it is present in the code published in my repository.
  4. There is a list of to-dos present here: Once I am done with these, they can be added to the Red Hat Bugzilla. This can be created as a package and added to Fedora.

The project has now been formally added to fedora-infra.

The demo for the project is at: or (visible only to members of the Fedora project).

Also, the project doesn't support its own user registrations, so you need to register for a Fedora account and then authenticate using Fedora Project OpenID.

Thanks for reading through the post.

you have a long road to walk, but first you have to leave the house
or why publishing code is STEP ZERO.

If you've been developing code internally for a kernel contribution, you've probably got a lot of reasons not to default to working in the open from the start. You probably don't work for Red Hat or other companies with default-to-open policies, or perhaps you are scared of the scary kernel community and want to present a polished gem.

If your company is a pain with legal reviews etc., you have probably spent (wasted) months of engineering time on internal reviews and such, so you think all of this matters, because why wouldn't it? You just spent (wasted) a lot of time on it, so it must matter.

So you have your polished codebase; why wouldn't those kernel maintainers love to merge it?

Then you publish the source code.

Oh look, you just left your house. The merging of your code is many, many miles distant and you just started walking that road, just now; not when you started writing it, not when you started legal review, not when you rewrote it internally for the 4th time. You just did it this moment.

You might have to rewrite it externally 6 times, you might never get it merged, it might be something your competitors are also working on, and the kernel maintainers would rather you cooperated with people your management would lose their minds over. That is the kernel development process.

step zero: publish the code. leave the house.

(lately I've been seeing this problem more and more, so I decided to write it up, and it really isn't directed at anyone in particular, I think a lot of vendors are guilty of this).

July 29, 2014

Document Your Family Tree and Track Your Genealogy Research with GRAMPS on Fedora

A few weeks back, LWN published an awesome article on using GRAMPS for documenting your family tree and tracking your genealogy research. GRAMPS is a pretty comprehensive piece of software for genealogy research, allowing you to specify people and add details about them and their relationships with others in your family tree. The good news? GRAMPS is available in Fedora, and is frequently updated whenever a new version of GRAMPS is released. Jump over to the LWN article for the full lowdown on GRAMPS and what it can do.


IsItFedoraRuby new design

The past week I tried to do something about the looks of isitfedoraruby. It was fun using Bootstrap (my first time) and I think the outcome is cool. I tried to use Fedora-like colors, and the font is Liberation Sans, the same as Fedora pkgdb.

You can check the overall changes:


The tables are now borderless, with highlighted headings. They are also responsive, which means that if a table is bigger than the page it gets its own scrollbar without breaking the rest of the site.


index page

The index page shows all packaged rubygems along with some interesting info. You can see that a package is out of date when it is highlighted in red. On the other hand, green means it is up to date with the latest upstream.

The code that does this is pretty simple. Bootstrap provides some CSS classes for coloring, so I used danger for outdated and success for up-to-date packages. I highlighted the whole table row, so I used:

%tr{class: rpm.up_to_date? ? 'success' : 'danger'}

In particular check line 19.

show page

Previously there was a ton of information all on one page. Now the info is still there, but I have divided it into tab sections.

Currently there are 5 tabs.

The main tab has a gem's basic info:

  • Up to date badge (green yes or red no)
  • Gitweb repository url
  • SPEC file url
  • Upstream url
  • Maintainer FAS name
  • Number of git commits
  • Last packager (in case a package is co-maintained)
  • Last commit message
  • Last commit date
  • Description

Basic Info

Then there is a tab about version information:

  • Table with gem versions across supported Fedora versions (rawhide, 21, 20)


Another important tab is a list with a packages's dependencies:

  • One table with dependencies with column whether they are runtime/development deps
  • One table with dependent packages


The bugs tab shows all of a package's open bugs for Fedora in a table.


And lastly koji builds for only the supported Fedora versions.


rubygems show page

The description is now at the top of the page. Instead of one column, the new look has two columns: one for basic info and one for the dependencies table.

Compare rake:

owner page

I added some info at the top of the page about the number of packages a user owns:

  • Total
  • Up to date
  • Outdated

The table that has an owner's packages is also highlighted to depict outdated and up to date packages.

Here's an embarrassing screenshot which reminds me I have to update my packages...

Owner page

The navigation bar was a PITA to configure and make as responsive as possible. There were a lot of bits and pieces that needed to fit together; here are some of them.

I used a helper method which I found in this Stack Overflow answer.

I used the same colors as Fedora pkgdb. With the help of a Firefox extension named ColorPicker, I gave the navbar the color it has now. twbscolor is a cool site that outputs your chosen color even as SCSS, which I used along with some minor tweaks.

In responsive mode there is a dropdown menu. That requires some javascript and the steps are:

1. Add *= require bootstrap in app/assets/stylesheets/application.css

2. Add //= require bootstrap in app/assets/javascripts/application.js

3. Add in app/assets/javascripts/application.js:

  toggle: false

4. Add bootstrap classes to the header view:

      %button.navbar-toggle{ type: 'button', data: {toggle: 'collapse', target: '#header-collapse'}} 'Toggle navigation'
      = link_to 'FedoraRuby', root_path, class: 'navbar-brand'

    %nav.collapse.navbar-collapse#header-collapse{role: 'navigation'}
        %li{class: is_active?(root_path)}
          = link_to _('Home'), root_path
        %li{class: is_active?(rubygems_path)}
          = link_to _('Ruby Gems'), rubygems_path
        %li{class: is_active?(fedorarpms_path)}
          = link_to _('Fedora Rpms'), fedorarpms_path
        %li{class: is_active?(about_path)}
          = link_to _('About'), about_path

Search field

I wanted the search field to be together with the search button. In bootstrap this is accomplished with input-group-buttons. The final code was:

    = form_tag( { :controller => 'searches', :action => 'redirect' },
    :class => 'navbar-form', :method => 'post') do
        = text_field_tag :search, params[:search] ||= '',
            class: 'search-query form-control',
            placeholder: 'Search'
          = button_tag raw('<span class="glyphicon glyphicon-search"></span>'), name: nil, class: 'btn btn-default'

Instead of a search button with text, I used an icon.

There was also another problem regarding responsiveness. In different page sizes the header looked ugly and the search bar was getting under the menu.

I fixed it by adding a media query in custom.css.scss that hides the logo at certain widths.

@media (min-width: 768px) and (max-width: 993px) {
  .navbar-brand {
    display: none;
  }
}

Here are before/after screenshots to better understand it.



Responsive design

Bootstrap comes with responsiveness by default. In order to activate it you have to add a viewport meta tag in the head of your html, so in app/views/layouts/application.html.haml add:

%meta{ :content => "width=device-width, initial-scale=1, maximum-scale=1", :name => "viewport" }

See full application.html.haml

It sure was fun and I learned a lot during the process of searching and fixing stuff :)

Deferring Spark Actions to Lazy Transforms With the Promise RDD

In a previous post I described a method for implementing the Scala drop transform for Spark RDDs. That implementation came at a cost of subverting the RDD lazy transform model; it forced the computation of one or more input RDD partitions at call time instead of deferring partition computation, and so behaved more like a Spark action than a transform.

In this followup post I will describe how to implement drop as a true lazy RDD transform, using a new RDD subclass: the Promise RDD. A Promise RDD can be used to embed computations in the lazy transform formalism that otherwise would require non-lazy actions.

The Promise RDD (aka PromiseRDD subclass) is designed to encapsulate a single expression value in an RDD having exactly one row, to be evaluated only if and when its single partition is computed. It behaves somewhat analogously to a Scala promise structure, as it abstracts the expression such that any requests for its value (and hence its actual computation) may be deferred.
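The analogy can be illustrated outside Spark with a tiny deferred-value class (a plain-Python sketch of the concept only, not the PromiseRDD code itself):

```python
class Promise:
    """A value computed only when first requested, then cached."""

    def __init__(self, expr):
        self._expr = expr          # zero-argument callable, evaluated lazily
        self._evaluated = False
        self._value = None

    def get(self):
        # Evaluate the expression at most once, on first request.
        if not self._evaluated:
            self._value = self._expr()
            self._evaluated = True
        return self._value
```

Constructing the Promise is cheap; the (possibly expensive) work inside expr runs only on the first get(), mirroring how a PromiseRDD's single partition is computed only if and when the scheduler asks for it.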

The definition of PromiseRDD is compact:

class PromisePartition extends Partition {
  // A PromiseRDD has exactly one partition, by construction:
  override def index = 0
}

/**
 * A way to represent the concept of a promised expression as an RDD, so that it
 * can operate naturally inside the lazy-transform formalism
 */
class PromiseRDD[V: ClassTag](expr: => (TaskContext => V),
                              context: SparkContext, deps: Seq[Dependency[_]])
  extends RDD[V](context, deps) {

  // This RDD has exactly one partition by definition, since it will contain
  // a single row holding the 'promised' result of evaluating 'expr'
  override def getPartitions = Array(new PromisePartition)

  // compute evaluates 'expr', yielding an iterator over a sequence of length 1:
  override def compute(p: Partition, ctx: TaskContext) = List(expr(ctx)).iterator
}
A PromiseRDD is constructed with the expression of choice, embodied as a function from a TaskContext to the implied expression type. Note that only the task context is a parameter; any other inputs needed to evaluate the expression must be present in the closure of expr. This allows the expression to take a very general form: its value may depend on a single input RDD, on multiple RDDs, or on no RDDs at all. The PromiseRDD receives an arbitrary sequence of partition dependencies, which it is the responsibility of the calling code to assemble. Again, this allows substantial generality in the form of the expression: the PromiseRDD dependencies can correspond to any arbitrary input dependencies assumed by the expression. The dependencies can be tuned to exactly the input partitions that are required.

As a motivating example, consider how a PromiseRDD can be used to promote drop to a true lazy transform. The aspect of computing drop that threatens laziness is the necessity of determining the location of the boundary partition (see previous discussion). However, this portion of the computation can in fact be encapsulated in a PromiseRDD. The details of constructing such a PromiseRDD can be viewed here. The following illustration summarizes the topology of the dependency DAG that is constructed:


As the dependency diagram shows, the PromiseRDD responsible for locating the boundary partition depends on each partition of the original input RDD. The actual computation is likely to only request the first input partition, but all partitions might be required to handle all possible arguments to drop. In turn, the location information given by the PromiseRDD is depended upon by each output partition. Input partitions are either passed to the output, or used to compute the boundary, and so none of the partition computation is wasted.
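The boundary-partition logic can be sketched in plain Python over ordinary lists standing in for partitions (an illustration of the idea only; the real implementation operates on RDD partitions via the PromiseRDD):

```python
def drop(partitions, n):
    """Drop the first n rows from a sequence of partitions (lists of rows)."""
    result = []
    remaining = n
    for part in partitions:
        if remaining >= len(part):
            # This whole partition falls before the boundary: skip it entirely.
            remaining -= len(part)
        else:
            # Boundary partition (or later): drop only the remaining prefix.
            result.extend(part[remaining:])
            remaining = 0
    return result
```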

Observe that the scheduler remains in charge of when partitions are computed. An advantage to using a PromiseRDD is that it works within Spark's computational model, instead of forcing it.

The following brief example demonstrates that drop implemented using a PromiseRDD satisfies the lazy transform model:

// create data rdd with values 0 thru 9
scala> val data = sc.parallelize(0 until 10)
data: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12

// drop the first 3 rows
// note that no action is performed -- this transform is lazy
scala> val rdd = data.drop(3)
rdd: org.apache.spark.rdd.RDD[Int] = $anon$1[2] at drop at <console>:14

// collect the values.  This action kicks off job scheduling and execution
scala> rdd.collect
14/07/28 12:16:13 INFO SparkContext: Starting job: collect at <console>:17
... job scheduling and execution output ...

res0: Array[Int] = Array(3, 4, 5, 6, 7, 8, 9)


In this post, I have described the Promise RDD, an RDD subclass that can be used to encapsulate computations in the lazy transform formalism that would otherwise require non-lazy actions. As an example, I have outlined a lazy transform implementation of drop that uses PromiseRDD.

The saga of the server replacement

A few weeks ago my main server lost a disk out of its RAID array. No big deal; I had a (larger than needed) spare around to replace it with. However, I got to thinking it might be about time to get some new disks and do a fresh install. The old disks were 4x1.5TB in an encrypted RAID5 that I had installed Fedora on back in December 2008 (Fedora 10) and upgraded since then. Moving to a new fresh install would let me move to bigger/newer disks, add LVM (for some reason I didn't use LVM on the F10 install), move to newer/better/bigger disk encryption, and also just get rid of a bunch of cruft that had piled up over the years.

Side note: Dear wordpress: It’s NOT AT ALL NICE when I type out a bunch of text for a post and hit “save draft” and you DELETE a bunch of stuff. (Redoing the rest of this post for the second time, thanks wordpress).

Looking at drive prices, it seemed 3TB drives were the good price point, so I picked up 4 of those. I didn’t have much chance to mess with them last week as I was out at Fedora’s main datacenter doing a bunch of work, but this weekend seemed a good time to get things done.

I have a server chassis that's (mostly) identical to my main server box that I use for test machines, so it was easy to pull its drives, put the new ones in, and boot the Fedora 20 netinstall ISO from USB. I then ran into two anaconda issues. First, I hit what anaconda said was a duplicate of bug 1008137. Poking around, I think this was because all four drives are GPT, and because I did /boot as RAID1, it wanted to install grub2 to all MBRs, but there was only a 1MB BIOS boot partition on sda. I couldn't figure out any way to get anaconda to make more, so I went and manually made one on each drive. That seemed to get me past that. Then I hit a duplicate of bug 1040691. This may have been my fault, as I forgot that it's really important which "encrypt this" checkbox you check: there's one when you go into custom partitioning, one on each mount point, and one in the LVM/RAID popup. I wanted only the last one of those checked (as I want the entire PV encrypted).

With the machine installed, it was time to rsync data and configuration over. Most of my services that run on the bare server were easy to move over: squid, unbound, nsd, mediatomb, dhcpd, radvd. One was a sticking point: I long ago got some slimserver devices, which use a Perl-based free media server. They in turn got bought by Logitech. Logitech isn't doing much with the server, but there was an open project still developing it until about last month, when they removed all their RPMs and went away. ;( So, moving to this new server, I think I am going to set up a BeagleBone Black with mpd and call it good.

After a few days of rsync, my backups and media and other data were copied over. I decided to try using libvirt's live migration on my main virtual machine to cut down on outage time. It took a bit of tweaking to get the new server set up in a way that libvirt was happy to migrate my main guest from the old server: I had to set up a bridged network named the same as the one on the old server, make the hostname NOT the same as the old server's, and make a link from the storage path on the old server to the new one. Then the migration started, but for some reason it didn't report progress. Many hours later it did finish. I also took the chance to resize the guest somewhat (larger).

On the new server I also moved from the network service to NetworkManager, since NM now handles bridges nicely. I was happy to see that only some minor tweaking of ifcfg files was needed and NM brought up everything just as I wanted. I did run into a small snag when I forgot to enable forwarding on the new server, but that was easily fixed.

The swap of the new drives/install into the old server hardware was pretty simple (hot swap bays for the win!). Then, after only a few tweaks, everything was up and running on the new install. It was a bit of effort, but it's nice having the fresh new install and setup up and running.

Pruning Syslog entries from MongoDB

I previously announced the availability of rsyslog+MongoDB+LogAnalyzer in Debian wheezy-backports. This latest rsyslog with MongoDB storage support is also available for Ubuntu and Fedora users in one way or another.

Just one thing was missing: a flexible way to prune the database. LogAnalyzer provides a very basic pruning script that simply purges all records over a certain age. The script hasn't been adapted to work within the package layout. It is written in PHP, which may not be ideal for people who don't actually want LogAnalyzer on their Syslog/MongoDB host.

Now there is a convenient solution: I've just contributed a very trivial Python script for selectively pruning the records.

Thanks to Python syntax and the PyMongo client, it is extremely concise: in fact, here is the full script:


import syslog
import datetime
from pymongo import Connection

# It assumes we use the default database name 'logs' and collection 'syslog'
# in the rsyslog configuration.

with Connection() as client:
    db = client.logs
    table = db.syslog
    #print "Initial count: %d" % table.count()
    today = datetime.datetime.now()

    # remove ANY record older than 5 weeks, except mail facility records
    t = today - datetime.timedelta(weeks=5)
    table.remove({"time":{ "$lt": t }, "syslog_fac": { "$ne" : syslog.LOG_MAIL }})

    # remove any debug record older than 7 days
    t = today - datetime.timedelta(days=7)
    table.remove({"time":{ "$lt": t }, "syslog_sever": syslog.LOG_DEBUG})

    #print "Final count: %d" % table.count()

Just put it in /usr/local/bin and run it daily from cron.


Just adapt the table.remove statements as required. See the PyMongo tutorial for a very basic introduction to the query syntax and full details in the MongoDB query operator reference for creating more elaborate pruning rules.
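For instance, a more elaborate rule can be expressed as a plain query document before passing it to remove. This is a hypothetical sketch (the rule itself is my own example, with field names following the script above): drop kernel-facility records at informational severity or below that are older than three days.

```python
import datetime
import syslog

# Hypothetical rule: kernel-facility records at info severity or below,
# older than 3 days. LOG_INFO is 6 and LOG_DEBUG is 7, so "$gte": LOG_INFO
# matches both informational and debug records.
cutoff = datetime.datetime.now() - datetime.timedelta(days=3)
query = {
    "time": {"$lt": cutoff},
    "syslog_fac": syslog.LOG_KERN,
    "syslog_sever": {"$gte": syslog.LOG_INFO},
}
# table.remove(query)  # passed to remove() exactly as in the script above
```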

Potential improvements

  • Indexing the columns used in the queries
  • Logging progress and stats to Syslog

LogAnalyzer using a database backend such as MongoDB is very easy to set up and much faster than working with text-based log files.

Fedora 21 will Feature “Solarized” Color Schemes in Both the Terminal and Gedit

Recently, I have been using what will become Fedora 21 as my day-to-day machine (side note: I have found it to be pretty stable for pre-release software). One really nice improvement that I am enjoying on Fedora 21 is the addition of the Solarized color schemes in both the default terminal (gnome-terminal) and the default graphical text editor (gedit). Solarized comes in both light and dark variants, really makes these applications look fantastic, and works well on a wide range of displays and screen brightness levels. From the Solarized website:

Solarized is a sixteen color palette (eight monotones, eight accent colors) designed for use with terminal and gui applications. It has several unique properties. I designed this colorscheme with both precise CIELAB lightness relationships and a refined set of hues based on fixed color wheel relationships.

These color schemes are not enabled by default, but are easily switched to in the preferences for both gnome terminal and gedit.


Threat: Tom the Programmer


No discussion of system integrity and security would be complete without Tom.

Without the applications, tools, and utilities that Tom writes, computers would be nothing but expensive space heaters. Software, especially applications software, is the reason computers exist.

Tom is a risk because of the mistakes that he might make – mistakes that can crash an application or even an entire system, mistakes that can corrupt or lose data, and logic errors that can produce erroneous results.

Today, most large applications are actually groups of specialized applications working together. The classic example is three tier applications which include a database tier, a business logic tier, and a presentation tier. Each tier is commonly run on a different machine. The presentation and business logic tiers are commonly replicated for performance, and the database tier is often configured with fail-over for high availability. Thus, you add complex communications  between these application components as well as the challenge of developing and upgrading each component. It isn’t surprising that problems can arise! Building and maintaining these applications is much more challenging than a single application on a single system.

Tom is also a risk because of the things he can do deliberately – add money to his bank account, upload credit card data to a foreign system, steal passwords and user identity, and a wide range of other “interesting” things.

If Tom works for you, look for integrity as well as technical skills.

Be aware that behind every software package is a programmer or a team of programmers. They are like fire – they can do great good or great damage. And, like fire, it is easy to overlook them until something bad happens.

Adventures in live booting Linux distributions

We’re all familiar with live booting Linux distributions. Almost every Linux distribution under the sun has a method for making live CDs, writing live USB sticks, or booting live images over the network. For some distributions, a live medium is the primary use case (like KNOPPIX).

However, I embarked on an adventure to look at live booting Linux for a different use case. Sure, many live environments are used for demonstrations or installations — temporary activities for a desktop or a laptop. My goal was to find a way to boot a large fleet of servers with live images. These would need to be long-running, stable, feature-rich, and highly configurable live environments.

Finding off-the-shelf solutions wasn’t easy. Finding cross-platform off-the-shelf solutions for live booting servers was even harder. I worked on a solution with a coworker to create a cross-platform live image builder that we hope to open source soon. (I’d do it sooner but the code is horrific.) ;)

Debian jessie (testing)

First off, we took a look at Debian’s Live Systems project. It consists of two main parts: something to build live environments, and something to help live environments boot well off the network. At the time of this writing, the live build process leaves a lot to be desired. There’s a peculiar tree of directories that are required to get started and the documentation isn’t terribly straightforward. Although there’s a bunch of documentation available, it’s difficult to follow and it seems to skip some critical details. (In all fairness, I’m an experienced Debian user but I haven’t gotten into the innards of Debian package/system development yet. My shortcomings there could be the cause of my problems.)

The second half of the Live Systems project consists of multiple packages that help with the initial boot and configuration of a live instance. These tools work extremely well. Version 4 (currently in alpha) has tools for doing all kinds of system preparation very early in the boot process, and it’s compatible with SysVinit or systemd. The live images boot up with a simple SquashFS (mounted read only) and they use AUFS to add on a writeable filesystem that stays in RAM. Reads and writes to the RAM-backed filesystem are extremely quick and you don’t run into a brick wall when the filesystem fills up (more on that later with Fedora).

Ubuntu 14.04

Ubuntu uses casper, which seems to predate Debian’s Live Systems project, or it could be a fork (please correct me if I’m wrong). Either way, it seemed a bit less mature than Debian’s project and left a lot to be desired.

Fedora and CentOS

Fedora 20 and CentOS 7 are very close in software versions and they use the same mechanisms to boot live images. They use dracut to create the initramfs and there are a set of dmsquash modules that handle the setup of the live image. The livenet module allows the live images to be pulled over the network during the early part of the boot process.

Building the live images is a little tricky. You’ll find good documentation and tools for standard live bootable CDs and USB sticks, but booting a server isn’t as straightforward. Dracut expects to find a squashfs which contains a filesystem image. When the live image boots, that filesystem image is connected to a loopback device and mounted read-only. A snapshot is made via device mapper that gives you a small overlay for adding data to the live image.

This overlay comes with some caveats. Keeping tabs on how quickly the overlay is filling up can be tricky. Using tools like df is insufficient since device mapper snapshots are concerned with blocks. As you write 4k blocks in the overlay, you’ll begin to fill the snapshot, just as you would with an LVM snapshot. When the snapshot fills up and there are no blocks left, the filesystem in RAM becomes corrupt and unusable. There are some tricks to force it back online but I didn’t have much luck when I tried to recover. The only solution I could find was to hard reboot.
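Since df only sees the filesystem and not block allocation, one way to keep tabs on the overlay is to parse the snapshot line that `dmsetup status` reports for the device. The following is a sketch, not something from the post itself: the device name is hypothetical, and the status field layout should be verified against your kernel's device-mapper snapshot documentation.

```python
def snapshot_usage(status_line: str) -> float:
    """Fraction of a dm-snapshot overlay already allocated.

    dm-snapshot status lines have the form:
        "<start> <length> snapshot <allocated>/<total> <metadata>"
    """
    fields = status_line.split()
    allocated, total = fields[3].split("/")
    return int(allocated) / int(total)

# e.g. fed the output of `dmsetup status live-rw` (hypothetical device name)
print(snapshot_usage("0 8388608 snapshot 16384/65536 32"))  # → 0.25
```

Polling this from cron and alerting above some threshold would give warning before the snapshot fills and the filesystem goes corrupt.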


Arch Linux

The ArchLinux live boot environments seem very similar to the ones I saw in Fedora and CentOS. All of them use dracut and systemd, so this makes sense. Arch once used a project called Larch to create live environments but it’s fallen out of support due to AUFS2 being removed (according to the wiki page).

Although I didn’t build a live environment with Arch, I booted one of their live ISO’s and found their live environment to be much like Fedora and CentOS. There was a device mapper snapshot available as an overlay and once it’s full, you’re in trouble.


OpenSUSE

The path to live booting an OpenSUSE image seems quite different. The live squashfs is mounted read only onto /read-only. An ext3 filesystem is created in RAM and is mounted on /read-write. From there, overlayfs is used to lay the writeable filesystem on top of the read-only squashfs. You can still fill up the overlay filesystem and cause some temporary problems, but you can back out those errant files and still have a useable live environment.

Here’s the problem: overlayfs was given the green light for consideration in the Linux kernel by Linus in 2013. It’s been proposed for several kernel releases and it didn’t make it into 3.16 (which will be released soon). OpenSUSE has wedged overlayfs into their kernel tree just as Debian and Ubuntu have wedged AUFS into theirs.


Building highly customized live images isn’t easy and running them in production makes it more challenging. Once the upstream kernel has a stable, solid, stackable filesystem, it should be much easier to operate a live environment for extended periods. There has been a parade of stackable filesystems over the years (remember funion-fs?) but I’ve been told that overlayfs seems to be a solid contender. I’ll keep an eye out for those kernel patches to land upstream but I’m not going to hold my breath quite yet.


Fedora 21 package signing: no, it’s not just you

We interrupt this Rawhide blog to bring you an important message!

No, it’s not just you: there are unsigned packages in Fedora 21, and yum (and dnf and mock) are complaining about it.

Here’s what’s going on.

When we branch Branched (in this case, F21) from Rawhide, we don’t immediately enable Bodhi – that is, the process where builds have to be sent to Bodhi which sends them to updates-testing and then to stable after review. For the first few days, Branched package submission works just like Rawhide – the packager submits the build, it gets built by Koji, and it’s automatically pulled into the ‘stable’ repo at the next nightly compose. No updates-testing, no karma.

When the package submission process is working like that, package signing still won’t be 100%. We can only get package signing to 100% when the Bodhi workflow is active.

Bodhi gets turned on when we do the Alpha release freeze. Usually, that’s very soon after branching – like, a week. So there’s a slightly confused week where we do the branch and then run around updating fedora-release and mock configs and so on and the freeze hits and Bodhi gets turned on somewhere in there and after a few days it all shakes out and everything’s running smoothly.

What with one thing and another, the period between Fedora 21 branching and Bodhi being turned on is getting much longer this time. We’re not really at a point where it makes any sense to freeze for Alpha, so Bodhi isn’t getting turned on, and that means not all packages are getting signed.

So yes, relax, it’s not just you. For right now, please use the --nogpgcheck parameter for yum, dnf, mock, and so forth on Fedora 21. We’ll try to send out updates to fedora-repos and mock that will turn off gpgcheck for the next little while, until we freeze and enable Bodhi. We’re sorry for the inconvenience.

My playbook

To quickly configure or reconfigure a Fedora machine, there are a few programs in the configuration management category. Among them are Cfengine, Puppet, Chef, and Ansible. Each has its own specifics; I settled on Ansible, which offered the simplest way of working. It lets me deploy my customized configuration to the local machine, or to the whole fleet, without having to install any software on each host; the savings in time and energy are far from negligible. Unlike its competitors, Ansible relies on outbound SSH connections from the master host, so there is no extra listening port on the managed machines besides the SSH server's. Likewise, it is not started at boot as a service on the master; it is launched manually with a simple command. As long as the machines in the fleet are not compromised, there is no reason to leave it running in the background as a daemon: a host's configuration does not change every ten minutes. When the command runs, Ansible reads a whole tree of files written by you, called a playbook (in French: livre des rôles). This set of YAML scripts lets Ansible carry out all kinds of complex operations automatically, customizing every machine in the fleet in one go. The applications of this strength are varied: you can use it right after reinstalling Fedora, for a new machine added to the fleet, or to deploy one particular application.

I am very happy to share my playbook with you under the GPL license; I have been developing it for a few months now and use it on three physical machines and four virtual machines. A playbook is never finished; my roles that currently work but are still being improved are:

  • common
  • clients
  • ntpserver
  • yum-updatesd

My currently unfinished roles are:

  • cozycloud
  • mailserver
  • rpmbuilder

There are a few tips to keep in mind when writing your first playbook. Start with a single YAML script; once it lists all your global customizations (those applied to every machine), you can split it into the directory tree of the "common" role. For each important task or service, you can then develop a new role. Finally, start right away in a git repository (git init --bare) to get version control even if you don't want to share it online. (Sharing it is cooler, though.)
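As a hedged sketch of what such an entry point might look like (the file name site.yml and the "ntpservers" inventory group are my own assumptions, not part of my published playbook), a minimal top-level playbook applying the roles above could be:

```yaml
# site.yml -- hypothetical minimal entry point:
# apply the "common" role everywhere, and the "ntpserver" role
# only to an assumed "ntpservers" inventory group.
- hosts: all
  become: yes
  roles:
    - common

- hosts: ntpservers
  become: yes
  roles:
    - ntpserver
```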

GUADEC 2014, Day Three: GTK+ and Wayland
This is part three of a four part series covering the GUADEC conference. Check out part one, part two and part four for the full coverage on all days of talks from GUADEC.

The third day of GUADEC was mostly devoted to lower level parts of the GNOME stack. There were talks on GTK+, CSS, Wayland, and WebKitGTK+, but also an annual general meeting of the GNOME Foundation.

The day started with Matthias Clasen’s talk on improvements in GTK+, especially in dialogs. Matthias demoed the changes throughout the talk, switching between the dialogs and the code behind them. He also showed how dialogs adapt to the environment they’re running in. GTK+ developers have been accused of caring only about GNOME, but they actually care about how GTK+ 3 apps look in other environments, and the good news for users of other desktop environments is that a lot has recently been done in this direction.

Benjamin Otte, who is the lead developer of GTK+, spent his talk showing CSS behavior and tricks in GTK+. He is the best person to talk on this topic because he wrote most of the 50,000 lines (his estimate) of the CSS renderer in GTK+. The talk was especially interesting to those who build graphical interfaces with GTK+ 3. Benjamin also showcased GTK+ Inspector, which helps you inspect a GTK+ application and solve problems in its graphical interface. Previously called GTK+ Parasite, it was an independent project, but the GTK+ developers found it so useful that they decided to include it in the latest version of GTK+ and call it GTK+ Inspector.

In the afternoon I attended “GNOME and Wayland” by Jasper St.Pierre. It was a difficult decision because there was a talk by Patrick Uiterwijk, a member of Fedora Infra, that I really wanted to attend, too. But I don’t regret the decision because Jasper’s talk was probably the most interesting talk of the day. He is also the best person to talk on this topic because he’s been working full-time on Wayland support in GNOME at Red Hat for the last year. Jasper is apparently not a fan of X11, because he spent a significant part of his talk complaining about the limitations and issues of X11. To demonstrate X11’s security issues he even wrote a keylogger, which he amusingly called “RealPlayer 10.4 Special Deluxe Freemium Edition,” and showed the audience how easy it is to set up such a thing on X11.

He continued by saying that any sandboxing of desktop apps on the file system level is not really effective if the sandboxed app can access resources of someone else via X11. There is an implementation of access control in X11, but according to Jasper it is so complicated that you can’t really work with it.

Another problem he mentioned is that X11 can’t individually scale up windows of legacy apps on HiDPI screens. Window resizing has never been smooth on X11. Wayland solves that and Jasper demoed the difference. However, Jasper mentioned there are no performance benefits right now. Wayland has a lot of potential, but no one has worked on performance yet. But not everything is perfect with Wayland. The app isolation which significantly improves security also prevents apps from using features users are used to. For example you won’t be able to pick a color anywhere on the screen in color-dropper-picker in applications such as GIMP or Inkscape.

There are also problems with window decorations. According to Jasper, there have been heated discussions in the community about whether client-side or server-side decorations are the better solution. The founder of Wayland, Kristian Høgsberg, leans toward client-side decorations, and that should be the way to go on Wayland. On the other hand, the lead developer of KWin, Martin Gräßlin, is a big proponent of server-side decorations, so KDE will probably stick with them. Jasper says the solution will be some hand-shaking protocol to ensure that GNOME apps don’t get double decorations in KDE and KDE apps don’t go undecorated in GNOME. Server-side decorations will also stay in XWayland.

Jasper St.Pierre


A nice break from technical talks was “Lessons learned as GNOME’s Executive Director” by Karen Sandler, who left the position a few months ago. She started her career as an attorney in stock and bond trading, but then she got an opportunity to work for Software Freedom Law Center. She decided to take the job in The GNOME Foundation because she was impressed by GNOME 3 and she thought it was the way to go.

She was the executive director for three years, which may have been the bumpiest in the history of the project. The most important lessons she learned during that time were: formal community representation works, newcomers and enthusiasts matter a lot, messaging is important, and keep your eyes on the ideological prize. She also proposed having a technical evangelist who would engage with potential users and find out what is stopping them from switching to GNOME.

In the Q&A part, someone asked what to do about the fact that negative news spreads much faster than positive news. As an example, he mentioned the story about Linus Torvalds ditching GNOME, which reached pretty much everyone, while almost no one knows that he returned to GNOME. To fight this, you need to focus on the positive messages.

Karen Sandler


On the third day, the second part of the annual general meeting (AGM) of The GNOME Foundation took place. First, all the newly elected directors were introduced: Jeff Fortin, Marina Zhurakhinskaya, Andrea Veri, Ekaterina Gerasimova, Sri Ramkrishna, and Tobias Muller. The pants, the traditional prize, were awarded to Alexandre Franke, the organizer of this year’s GUADEC.

The rest of the AGM was devoted to the financial situation of The GNOME Foundation. The situation was not very good early this year, but then the Foundation focused on the income side and worked on collecting money from all the sponsors who had promised sponsorship, and the situation has improved a lot since then, although there is still a long way to go to ideal financial health. Anyway, the situation wasn’t and isn’t as critical as some media described it. We were also shown a breakdown of income and expenses: most of the Foundation’s expenses go to employees, and most of the income comes from advisory board fees.

See also GUADEC 2014, Day One and GUADEC 2014, Day Two.

OTP authentication in FreeIPA

As of release 4.0.0, FreeIPA supports OTP authentication. HOTP and TOTP tokens are supported natively, and there is also support for proxying requests to a separately administered RADIUS server.

To become more familiar with FreeIPA and its capabilities, I have been spending a little time each week setting up scenarios and testing different features. Last week, I began playing with a YubiKey for HOTP authentication. A separate blog about using YubiKey with FreeIPA will follow, but first I wanted to post about how FreeIPA’s native OTP support is implemented. This deep dive was unfortunately the result of some issues I encountered, but I learned a lot in a short time and I can now share this information, so maybe it wasn’t unfortunate after all. Even though I still have a problem to solve!

User view of OTP

A user has received or enrolled an OTP token. This may be a hardware token, such as YubiKey, or a software token like FreeOTP for mobile devices, which can capture the token simply by pointing the camera at the QR code FreeIPA generates.

When logging in to an IPA-backed service, the FreeIPA web UI, or when running kinit, the user uses their token to generate a single-use value, which is appended to their usual password. To authenticate the user, this single-use value is validated in addition to the usual password validation, providing an additional factor of security.

HOTP algorithm

The HMAC-based One-Time Password (HOTP) algorithm uses a secret key that is known to the validation server and the token device or software. The key is used to generate an HMAC of a monotonically increasing counter that is incremented each time a new token is generated. The output of the HMAC function is then truncated to a short numeric code – often 6 or 8 digits. This is the single-use OTP value that is transmitted to the server. Because the server knows the secret key and the current value of the counter, it can validate the value sent by the client.

HOTP is specified in RFC 4226. TOTP (Time-based One-Time Password), specified in RFC 6238, is a variation of HOTP that MACs the number of time steps since the UNIX epoch, instead of a counter.
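The two-paragraph description above can be sketched in a few lines of Python. This is a reference sketch of the RFC 4226/6238 algorithms, not FreeIPA's actual implementation (which, as described later, lives in a 389 Directory Server plugin):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low 4 bits pick an offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the number of time steps since the epoch."""
    return hotp(key, int(time.time()) // step, digits)

# RFC 4226 appendix D test vectors for the key "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # → 755224
print(hotp(b"12345678901234567890", 1))  # → 287082
```

The validation server runs the same computation (typically over a small window of counter or time-step values, to tolerate drift) and compares the result with the value the client submitted.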

Authentication flow

The problem I encountered was that HOTP authentication (to the FreeIPA web UI) was failing about half the time (there was no discernible pattern to the failures). The FreeIPA web UI seemed like a logical place to start investigating the problem, but for a password (and OTP value) it is just the first port of call in a journey through a remarkable number of services and libraries.

Web UI and kinit

The ipaserver.rpcserver.login_password class is responsible for handling the password login process. It reads request parameters and calls kinit(1) with the user’s credentials. Its (heavily abridged) implementation follows:

class login_password(Backend, KerberosSession, HTTP_Status):
    def __call__(self, environ, start_response):
        # Get the user and password parameters from the request
        query_dict = urlparse.parse_qs(query_string)
        user = query_dict.get('user', None)
        password = query_dict.get('password', None)

        # Get the ccache we'll use and attempt to get
        # credentials in it with user,password
        ipa_ccache_name = get_ipa_ccache_name()
        self.kinit(user, self.api.env.realm, password, ipa_ccache_name)
        return self.finalize_kerberos_acquisition(
            'login_password', ipa_ccache_name, environ, start_response)

    def kinit(self, user, realm, password, ccache_name):
        # get http service ccache as an armor for FAST to enable
        # OTP authentication
        armor_principal = krb5_format_service_principal_name(
            'HTTP', self.api.env.host, realm)
        keytab = paths.IPA_KEYTAB
        armor_name = "%sA_%s" % (krbccache_prefix, user)
        armor_path = os.path.join(krbccache_dir, armor_name)

        (stdout, stderr, returncode) = ipautil.run(
            [paths.KINIT, '-kt', keytab, armor_principal],
            env={'KRB5CCNAME': armor_path}, raiseonerr=False)

        # Format the user as a kerberos principal
        principal = krb5_format_principal_name(user, realm)

        (stdout, stderr, returncode) = ipautil.run(
            [paths.KINIT, principal, '-T', armor_path],
            env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'},
            stdin=password, raiseonerr=False)

We see that the login_password object reads credentials out of the request and invokes kinit using those credentials, over an encrypted FAST (flexible authentication secure tunneling) channel. At this point, the authentication flow is the same as if a user had invoked kinit from the command line in a similar manner.


KDC

Recent versions of the MIT Kerberos key distribution centre (KDC) have support for OTP preauthentication. This preauthentication mechanism is specified in RFC 6560.

The freeipa-server package ships the KDC database plugin that talks to the database over LDAP to look up principals and their configuration. In this manner the KDC can find out that a principal is configured for OTP authentication, but this is not where OTP validation takes place. Instead, an OTP-enabled principal’s configuration tells the KDC to forward the credentials elsewhere for validation, over RADIUS.


FreeIPA ships a daemon called ipa-otpd. The KDC communicates with it using the RADIUS protocol, over a UNIX domain socket. When ipa-otpd receives a RADIUS authentication packet, it queries the database over LDAP to see if the principal is configured for RADIUS or native OTP authentication. For RADIUS authentication, it forwards the request on to the configured RADIUS server, otherwise it attempts an LDAP BIND operation using the passed credentials.

As a side note, ipa-otpd is controlled by a systemd socket unit. This is an interesting feature of systemd, but I won’t delve into it here. See man 5 systemd.socket for details.
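The packets travelling over that socket are ordinary RADIUS packets as defined in RFC 2865, just carried over a UNIX domain socket instead of UDP. As a rough illustration (this is not FreeIPA code, and the username is made up), a minimal Access-Request can be built by hand:

```python
# Hedged sketch of a minimal RADIUS Access-Request packet (RFC 2865),
# the kind of packet the KDC sends to ipa-otpd. Values are illustrative.
import os
import struct

ACCESS_REQUEST = 1  # RADIUS code 1 = Access-Request
USER_NAME = 1       # RADIUS attribute type 1 = User-Name

def attribute(attr_type: int, value: bytes) -> bytes:
    # Each attribute is: type (1 byte), length (1 byte, includes this
    # two-byte header), then the value itself.
    return struct.pack("BB", attr_type, len(value) + 2) + value

def access_request(identifier: int, username: bytes) -> bytes:
    authenticator = os.urandom(16)   # 16-byte request authenticator
    attrs = attribute(USER_NAME, username)
    length = 20 + len(attrs)         # 20-byte header plus attributes
    header = struct.pack(">BBH", ACCESS_REQUEST, identifier, length)
    return header + authenticator + attrs

pkt = access_request(0, b"ftweedal")
```

A real request would also carry a User-Password attribute (obfuscated with the shared secret) and other attributes, but the framing is exactly this.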

Directory server

Finally, the principal’s credentials – her distinguished name and password with OTP value appended – reach the database in the form of a BIND request. But we’re still not at the bottom of this rabbit hole, because 389 Directory Server does not know how to validate an OTP value or indeed anything about OTP!

Yet another plugin to the rescue. freeipa-server ships the directory server plugin, which handles concepts such as password expiry and – finally – OTP validation. By way of this plugin, the directory server attempts to validate the OTP value and authenticate the user, and the whole process that led to this point unwinds back through ipa-otpd and the KDC to the Kerberos client (and through the web UI to the browser, if this was how the whole process started).


My drawing skills leave a lot to be desired, but I’ve tried to summarise the preceding information in the following diagram. Arrows show the communication protocols involved; red arrows carry user credentials including the OTP value. The dotted line and box show the alternative configuration where ipa-otpd proxies the token on to an external RADIUS server.


Debugging the authentication problem

At time of writing, I still haven’t figured out the cause of my issue. Binding directly to LDAP using an OTP token works every time, so it is definitely not an issue with the HOTP implementation. Executing kinit directly fails about half the time, so the problem is likely to be with the KDC or with ipa-otpd.

When the failure occurs, the dirsrv access log shows two BIND operations for the principal (in the success case, there is only one BIND, as would be expected):

[30/Jul/2014:02:58:54 -0400] conn=23 op=4 BIND dn="uid=ftweedal,cn=users,cn=accounts,dc=ipa,dc=local" method=128 version=3
[30/Jul/2014:02:58:54 -0400] conn=23 op=4 RESULT err=0 tag=97 nentries=0 etime=0 dn="uid=bresc,cn=users,cn=accounts,dc=ipa,dc=local"
[30/Jul/2014:02:58:55 -0400] conn=37 op=4 BIND dn="uid=ftweedal,cn=users,cn=accounts,dc=ipa,dc=local" method=128 version=3
[30/Jul/2014:02:58:55 -0400] conn=37 op=4 RESULT err=49 tag=97 nentries=0 etime=0

The first BIND operation succeeds, but for some reason, one second later, the KDC or ipa-otpd attempts to authenticate again. It would make sense that the same credentials are used, and in that case the second BIND operation would fail (error code 49 means invalid credentials) due to the HOTP counter having been incremented in the database.
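That behaviour is consistent with how HOTP (RFC 4226) works: each successful validation advances the server-side counter, so a replayed OTP value no longer matches. A minimal sketch (using the RFC 4226 test key, not FreeIPA’s actual implementation):

```python
# Minimal HOTP sketch (RFC 4226) showing why a second BIND with the same
# OTP value fails once the counter has been incremented in the database.
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

key = b"12345678901234567890"   # RFC 4226 appendix D test vector key
otp = hotp(key, 0)              # -> "755224"
assert otp == hotp(key, 0)      # first BIND: counter still 0, OTP matches
assert otp != hotp(key, 1)      # replay after counter increment: no match
```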

By reconfiguring ipa-otpd to listen on a network socket as well as its standard UNIX domain socket, I will be able to use radclient(1) (provided by the freeradius-utils package on Fedora) or some other RADIUS client to send access requests directly to ipa-otpd and observe the success rate. This should implicate either ipa-otpd or the KDC, and from there it will hopefully be straightforward to identify the exact cause and home in on a solution.

Concluding thoughts

OTP authentication in FreeIPA involves a lot of different servers, plugins and libraries. To provide the OTP functionality and make all the services work together, freeipa-server ships a KDC plugin, a directory server plugin, and the ipa-otpd daemon! Was it necessary to have this many moving parts?

The original design proposal explains many of the design decisions. In particular, ipa-otpd is necessary for a couple of reasons. The first is that the MIT KDC supports only RADIUS servers for OTP validation, so for native OTP support some component must act as a RADIUS server. Second, the KDC’s RADIUS configuration is static, so configuration is simplified by having the KDC talk only to ipa-otpd for OTP validation. It is also nice that ipa-otpd is the sole arbiter of whether to proxy a request to an external RADIUS server or to attempt an LDAP BIND.

What if the KDC could dynamically work out where to direct RADIUS packets for OTP validation? It is not hard to conceive of this, since the KDC already dynamically learns whether a principal is configured for OTP by way of the plugin. But even if this were possible, the current design is arguably preferable since, unlike the KDC, we have full control over the implementation of ipa-otpd and are therefore better placed to respond to performance or security concerns in this aspect of the OTP authentication flow.

Testing, testing and more testing

The previous week was very eventful. My mentor and I discussed plans for implementing the groups and permissions feature I had planned earlier. However, we concluded that it would be better to clean up the current code and perform more rigorous testing, so that the currently implemented features are robust and performant. So I have placed the permissions work on the shelf for the time being. So far I have been testing via the API: I filed 1.7 million bugs or so, only to realise that I won’t be able to access them, as I had missed the product versions in each, which I have made compulsory as part of a design decision. So I fixed that part, and am refiling more bugs. The testing I have done so far gives the following results, when only one user makes a request at a time:

  • Fetching 10,000+ bugs takes around 1-2 seconds.
  • Filing a bug via the API takes around 2-3 seconds (on average).
  • Filing bugs via the UI (mechanize) takes 4-5 seconds (on average).
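For single-request measurements like the ones above, a simple wall-clock timer around each call is enough. A hypothetical sketch (fetch_bugs is a stand-in, not the project’s actual API):

```python
# Hypothetical timing sketch of the kind of measurement behind the numbers
# above. fetch_bugs is a made-up stand-in for the real API call.
import time

def timed(fn, *args):
    # Return the function result together with elapsed wall-clock seconds.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def fetch_bugs(n):
    # Stand-in for a real "fetch n bugs" API request.
    return list(range(n))

bugs, elapsed = timed(fetch_bugs, 10_000)
```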

I know the above numbers are not impressive; the reason is that I used MySQL in places where I should have used Redis. So I am onto that now, along with more testing, which will be followed by the initial RPM packaging of the application. :D

Testing Java Cryptography Extension (JCE) is installed

If JCE is already installed, you should see that the jar files ‘local_policy.jar’ and ‘US_export_policy.jar’ are in $JAVA_HOME/jre/lib/security/

But we can also test it programmatically:

import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import javax.crypto.KeyGenerator;

class TestJCE {
    public static void main(String[] args) {
        boolean JCESupported = false;
        try {
            // If the SunJCE provider can supply an AES key generator,
            // JCE is installed and usable
            KeyGenerator kgen = KeyGenerator.getInstance("AES", "SunJCE");
            JCESupported = true;
        } catch (NoSuchAlgorithmException e) {
            JCESupported = false;
        } catch (NoSuchProviderException e) {
            JCESupported = false;
        }
        System.out.println("JCE Supported=" + JCESupported);
    }
}

To compile (assuming the file name is TestJCE.java):

$ javac TestJCE.java

Previous command will create TestJCE.class output file.

To run the program:

$ java TestJCE


July 28, 2014

A talk in 9 images

My talk at GUADEC this year was about GTK+ dialogs. The first half of the talk consisted of a comparison of dialogs in GTK+ 2, in GTK+ 3 under gnome-shell and in GTK+ 3 under xfwm4 (as an example of an environment that does not favor client-side decorations).

The main take-away here should be that in 3.14, all GTK+ dialogs will again have traditional decorations if that is what works best in the environment it is used in.

The nine images covered: About dialogs, Preference dialogs, File choosers, Message dialogs, Error dialogs, Print dialogs, Font dialogs, Color dialogs and Action dialogs.

The second part of my talk discussed best practices for dealing with various issues that can come up with custom GTK+ dialogs; I’ve summarized the main points in this HowDoI page.

15 Alternatives to Your Default Image Viewer on Fedora

Is the default image viewer in your desktop environment just not working the way you want? Need more features (or maybe something simpler) from an image viewer? Well, you are in luck, as there is no shortage of choices when looking at alternative image viewers in Fedora. This article covers 17 image viewers available in Fedora.

UPDATE: we have added a few more options (sushi and qiv) for image viewers

Typically, an image viewer does one thing — shows you the images in a directory (sometimes in a thumbnail view), and lets you quickly flip through them. Some image viewers also allow you do simple edits of an image, and will also show you some added details of your pictures (like metadata, and color histograms).

Eye of Gnome (eog)

The Eye of GNOME is the official image viewer for the GNOME desktop. It integrates with the GTK+ look and feel of GNOME, and supports many image formats for viewing single images or images in a collection.

The Eye of GNOME also allows you to view images in a fullscreen slideshow mode, or set an image as the desktop wallpaper. It reads camera tags to automatically rotate your images into the correct portrait or landscape orientation.

Eye of MATE

Eye of MATE is a simple viewer for browsing images on your computer. Once an image is loaded, you can zoom and rotate the image, and also view subsequent images in the directory the image was loaded from.


eyesight

eyesight is a simple image viewer that allows the user to view either a single image, or a whole directory of images. eyesight also allows the user to rotate the image and save the result.


feh

feh is a super-minimal image viewer that basically just shows the images in the directory it is launched from. It has a few options that are shown when you right-click on the main window.



Geeqie

Geeqie is an application for browsing images: you open an image or directory and view all the images in that directory. It also displays all the relevant data and metadata for your images in various configurable sidebar widgets. It can display many forms of data, including EXIF metadata, color histograms, and the GPS location of the photo.


gliv

gliv is an image viewer that just views images (no editing capabilities here). It also has extensive keyboard shortcuts, all of which are documented in the menus.



gpicview

gpicview is another image viewer that just shows the image, and lets you move back and forward through the images in the directory. It also has controls at the bottom of the window for zooming and for opening another directory.



gThumb

gThumb is an image viewer, editor, browser and organizer. As an image viewer, gThumb allows you to view common image file formats such as BMP, JPEG, GIF (including animations), PNG, TIFF, TGA and RAW images. It is also possible to view various types of metadata embedded inside an image, such as EXIF, IPTC and XMP.

As an image editor, gThumb allows you to scale, rotate and crop images; and to change the saturation, lightness and contrast, as well as apply other color transformations. As an image organizer, gThumb allows you to add comments and other metadata to images; organize images in catalogs and catalogs in libraries; and search for images and save the results as a catalog.


gwenview

gwenview is a basic image viewer that allows you to view images, and also perform basic photo manipulations like rotation and cropping.



nomacs

nomacs is a fast image viewer with additional image editing functionality. When viewing images, it features fast thumbnail previews, a zoomable grid display of thumbnails, a chromeless view, and display of metadata and image details with a histogram. It also has the ability to edit images, including cropping, resizing, rotating and color correction.


qiv

When it comes to minimal interfaces for viewing images, qiv is the king. It doesn't have a desktop file, so you can only launch it from the terminal, and it will only work if you give it an image (or directory) to view as a parameter on the command line. With such a minimal interface, you control most of its features with the keyboard.



Ristretto

Ristretto is a simple application for viewing, zooming and rotating images. Ristretto has a simple two-paned layout with a list of thumbnails on the left, and the main image on the right. It also has the ability to set an image as your desktop background.


Shotwell

Shotwell is an easy-to-use, fast photo organizer designed for the GNOME desktop. It allows you to import photos from your camera or disk, and organize them by date, subject matter, and even ratings. It also offers basic photo editing, like crop, red-eye correction, color adjustments, and straighten. Shotwell’s non-destructive photo editor does not alter your master photos, making it easy to experiment and correct errors.

When ready, Shotwell can upload your photos to various web sites, such as Facebook, Flickr, Picasa (Google Plus), and more.

Shotwell supports JPEG, PNG, TIFF, and a variety of RAW file formats.


sushi

sushi is not a stand-alone image viewer, but an add-on for the default GNOME file browser. To use sushi, just browse to a directory in Files (aka nautilus), select a file, then press space to bring up a quick, larger preview of the image.



Viewnior

Viewnior is a simple and elegant image viewer with a minimal interface that provides as much screen real estate as possible for viewing your images. It has a wide range of features, including fullscreen and slideshow modes, the ability to rotate, flip and crop images, support for GIF animations, support for reading image metadata, and configurable mouse actions.


xfi

xfi is an older image viewer that allows you to open a directory and simply browse the images. To install xfi, use sudo yum install xfe-xfi



xzgv

xzgv is a simple image viewer, with a focus on controlling all actions using keyboard input. It has a simple two-pane layout, with thumbnails of the current directory listed in the left pane, and the image viewed in the main pane. The menu in xzgv is a context menu shown by right-clicking on the main image pane. This context menu also lists all the keyboard shortcuts, for when you are new to xzgv and need to learn them.



DNF 0.5.5 and Core DNF Plugins 0.1.2 Released

The two releases bring several bugfixes and API extensions.

One of the key improvements is better support for proxy servers, including configuration options and accessibility from the API.

Related to this release, the situation around package splitting support has been resolved on the level of DNF’s underlying libraries.


ColorZilla, pick color samples in your browser

ColorZilla has created a couple of plugins, for both Chrome and Firefox, that give you an integrated eyedropper in your browser, so you can easily collect color samples of anything. It’s like having a kcolorchooser just a click away.


ColorZilla includes several tools:

  • Eyedropper - get the color of any pixel on the page
  • An advanced Color Picker similar to ones that can be found in Photoshop and Paint Shop Pro
  • Webpage Color Analyzer - analyze DOM element colors on any Web page, locate corresponding elements
  • Ultimate CSS Gradient Generator
  • Palette Viewer with 7 pre-installed palettes
  • Color History of recently picked colors
  • Displays element information like tag name, class, id, size etc.
  • Outline elements under the cursor
  • Manipulate colors by their Red/Green/Blue or Hue/Saturation/Value components.
  • Auto-copy the generated or sampled colors to the clipboard in CSS RGB, Hex and other formats.
  • Keyboard shortcuts for quickly sampling page colors.
  • Get the color of dynamic elements (hovered links etc.) by resampling the last sampled pixel
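The RGB/HSV manipulation in that list is the standard color-model conversion. As a quick illustration (using Python's standard colorsys module, not anything from ColorZilla itself):

```python
# Illustration of the RGB <-> HSV manipulation a color picker like
# ColorZilla's exposes, using Python's standard colorsys module.
# All channel values are floats in the range 0..1.
import colorsys

r, g, b = 0.8, 0.2, 0.2                    # a reddish sample
h, s, v = colorsys.rgb_to_hsv(r, g, b)     # hue 0.0, saturation 0.75, value 0.8
s *= 0.5                                   # halve the saturation
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)  # a paler red: (0.8, 0.5, 0.5)
```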

If you’re a designer who is always checking colors with the element inspector, this will turn out to be an awesome and fast addition to your daily workflow.
