March 01, 2015

SSH known hosts verification failure one liner

Those who regularly build and rebuild machines or virtual machines on a DHCP network will probably be faced with this quite often. It happens because the known fingerprint recorded for a previous host differs from that of a new host which has acquired the same IP address.

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:66
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

There is an option to have SSH ignore these when connecting, but I find that cleaning out the old line before connecting is far quicker, and I do it with a sed one-liner.

The line number of the entry in the known_hosts file that we are interested in appears at the end of the error message:

Offending ECDSA key in /root/.ssh/known_hosts:66

66 in this case, so we can get sed to simply delete that line using:

sed -i '66d' ~/.ssh/known_hosts

An SSH session can now be opened without Host key verification failure.
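If you hit this often, the line number can be pulled out of the error message automatically. Here is a small sketch of that idea, demonstrated against a throwaway file rather than a real known_hosts (the error message text is the example from above):

```shell
# Extract the line number from the "Offending ... known_hosts:NN" message
# and have sed delete that line. Demonstrated on a temporary file here.
known_hosts=$(mktemp)
printf 'line1\nline2\nline3\n' > "$known_hosts"
msg='Offending ECDSA key in /root/.ssh/known_hosts:2'
line=$(printf '%s\n' "$msg" | sed -n 's/.*known_hosts:\([0-9]*\).*/\1/p')
sed -i "${line}d" "$known_hosts"
cat "$known_hosts"
```

As an aside, ssh-keygen -R <hostname> achieves much the same thing by removing all keys for a given host, if you would rather match on the hostname than on the line number.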

Hope this helps someone.


Moved my blog to Pelican

For a while now, I've been having trouble with my shared hosting and the WordPress instance I was using. I hadn't realised how limited the resources of a shared hosting setup are. It used to work just fine, and then my blog started going down regularly. After a lengthy conversation with the support team, and no resolution, I decided to move to a static-page-based blog. Of the many options, I decided on Pelican. It's really easy to set up, and of course the site is now super quick to load.

I've just imported all my WordPress data, and a lot of work still needs to be done to correct my previous posts - broken links and missing images, for example. I'll do it over time. I've also combined my research and normal blogs, using Pelican's category feature.

Guess this calls for yet another "Hello World!"

Blender 3D - a great addon for game developers.
I found a great addon for game developers, and I have also made a simple tutorial about it.
It can be read here.
Automatically subscribe RHEL systems for receiving updates and installing more packages

While fixing bugs and testing patches, I often use virtual machines running RHEL. These systems are short-lived, and normally do not survive a day or two. For most tests and development work, I have little need to install additional packages or updates. An installation from the DVD contains all that is needed. Mostly...

To install additional packages or updates, the system needs to be registered with the Red Hat Customer Portal. The subscription-manager tool that is installed on all current RHEL systems can be used for that. For simple usage of the utility, a username and password is sufficient. Automating the subscription process would require saving those credentials in a kickstart or ansible configuration, and that's not what I want. Manually subscribing the VM whenever I needed to was the annoying workaround.

A few weeks ago, I finally took the time to set up my automated RHEL installations to use subscription-manager for registering with the Red Hat Customer Portal. The Customer Portal offers the possibility to configure Activation Keys. The subscription-manager tool can use such a key with a command like this:

# subscription-manager register \
--org 123456 \
--activationkey my-rhel-example-key
# subscription-manager attach --auto

The --org option seems to be required for me; I am not sure everyone needs it. The number (or name) can be found on an installed and registered system by executing:

# subscription-manager identity

After subscribing like the above, it may well be that many repositories/channels get enabled. If you know which repositories you need, you can disable all repositories, and enable a select few:

# subscription-manager repos \
--disable '*'
# subscription-manager repos \
--enable rhel-7-server-rpms \
--enable rhel-7-server-optional-rpms

At the moment, I have this done in the %post section of my kickstart configuration. I would prefer to set this up with the redhat_subscription ansible module, but the --org option is not available there (yet?).
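For reference, the %post snippet amounts to the commands from above strung together (the org ID, key name and repository names are the example values used earlier in this post):

```
%post
subscription-manager register \
    --org 123456 \
    --activationkey my-rhel-example-key
subscription-manager attach --auto
subscription-manager repos --disable '*'
subscription-manager repos \
    --enable rhel-7-server-rpms \
    --enable rhel-7-server-optional-rpms
%end
```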

February 28, 2015

Scratching an itch


Last year I started teaching programming to my grade 10 classes. I started with Python, which is easy to understand, forces good programming practices, and is one of my favorite languages. It was a complete disaster. I had four or five in each class who understood what I was doing, and the rest were completely lost, which says a whole lot about my teaching. In 2014, I chatted with Matthew Miller about my Python problem, and he suggested teaching my students Scratch.

For those (like me) that don’t know about it, Scratch is a graphical programming language that’s designed to be easy to use while still allowing the full power of a proper programming language. The benefit of teaching programming using Scratch is that the students get quick graphical feedback on what works and what doesn’t, and syntax errors are pretty much impossible. Once they understand the basic concepts of programming, it’s then easier to switch to something like Python.

I switched to Scratch, and the students loved it. (Or, at the very least, liked it better than Python.) I ended the school year with a group assignment that was partially graded based on votes by the rest of the classes. I had great ideas for making the group assignments available online, but never went anywhere with it. Fast-forward to this year where we’ve started with Scratch and are now almost done with it and ready to move on to Python. And, since I now have a deadline, I’ve put together a simple site so they can vote on each others’ group projects.

At the moment, it has last year’s projects and is open for anyone to rate, so if you want to try out their projects, go to, give them a shot, and rate them. This was a first attempt for both students and myself, so please be gentle on the ratings.

Sometime in the next few weeks I’ll post this year’s projects. They will be available to play, but initially only students or teachers in the school will be able to rate them. Once I’ve scored them, I’ll open up the ratings to everybody.

If you have any comments or suggestions for the site itself, please leave them below.

Points on the Fedora 21 release party at MIT COE, Pune, on 21st Feb 2015
Celebrating Fedora 21 on 21st Feb 2015; the last one we did was on 21st Dec 2014 :)

Fedora 21 is in itself a special release due to the Fedora.Next initiative; after the merge of Fedora Core with Extras, this one is the next major step into the future. Fedora 21 is also a special release for India. I do not remember whether we had any release party for Fedora 20 in India; please correct me if I missed one. But for Fedora 21 they keep going on and on!!

This was the 3rd Fedora 21 release party in which I took part. Thanks to Praveen Kumar for initiating this release party. Unfortunately he was away for some personal work, but Rupali came up to help with the organization.

We planned the agenda for this meet quickly with the team. We were expecting a major chunk of the audience to come from MIT COE itself. We later took permission from MIT COE, decided to open the event up to the community, and started spreading the message around.

Shatadru designed nice flyers for this release party from the artwork available from the Fedora artwork team.

MIT COE is a bit far for most of the people staying around Magarpatta city, but we still managed to be there in time. My talk was first, at 10:30, and I reached there at 10:25 sharp :)

One more reason behind this release party was to prepare for FUDCon, happening 26-28 Jun 2015: see the facility, talk with the professors, and discuss with the students, educating them about Fedora and clearing up their doubts.

I was impressed a lot by the MIT COE campus; it is a really lively campus. On the day of the F21 release party a couple of other events were already going on, and students were really busy managing and participating in them. We even saw a stall specifically for a Linux quiz.

Due to these other events we initially got a very small audience for the release party, so we started a bit late, around 11 am. A few more students joined later. Overall I am happy with the audience: they were few, but all were truly interested in the event.

Three talks were planned: mine (introduction), Anish's (Workstation) and Parag's (How to contribute to Fedora), followed by a panel discussion.

During my talk I asked everyone to introduce themselves and say what they were expecting. Some interesting topics came up:
  1. What is the difference between Fedora and Ubuntu?
  2. Why should I go for Fedora?
  3. What is FUDCon?
  4. How can we participate in FUDCon?
  5. What about embedded systems and Fedora?
Since we were planning a panel discussion, we thought that would be a good time to answer most of the questions.

I did my talk well in time; slides of the talk are available at SlideShare [1]

Then Anish explained the Workstation product's features and its target audience. Anish also helped the audience better understand the Free Software and open source software philosophies. That was good, since at the F21 release party at Mumbai we had found that a few people were not clear about the Free Software philosophy.

Then Parag talked on how to contribute to Fedora, explaining lots of areas where one can get involved.
At the end of Parag's talk we made the transition from talks to the panel, with Amit and Siddhesh taking the lead; later everyone contributed, including me, Niranjan, Parag, Rupali and Anish. The panel discussion was awesome, and Amit and Siddhesh kept talking for a long time and discussed a number of topics. We realized then that we could keep talking about Fedora for a whole day without a break ;)
We decided to conclude the panel discussion and start the actual celebration with cake cutting. The cake arrived on time, along with samosa snacks and tea :)

We decided to hold a few more events, like a workshop at MIT, before FUDCon 2015. We also had a meeting with the MIT COE principal for FUDCon planning, which only some of us attended.

This is already a big post and I might have missed a few things and a few names, so feel free to add them in the comments :)

cubietruck setup, though my card reader is flaky
Finally set up my cubietruck that I purchased late last year. Here it is under my desk running Fedora 21:

Riveting, I know.

I mostly just followed bits from Rich's and Kashyap's blog posts, and the Fedora ARM install instructions. I used this serial adapter, though note you need to make sure the TX pin on the board is wired to the RX pin on the USB end, and vice versa. Probably obvious to some people, but I would have been stumped if I hadn't seen it mentioned in an Amazon review.

Everything is working now, but my hardware has a bit of a malfunction that I mentioned in this fedora-arm thread. Basically the device can boot off the SD card, but Linux doesn't detect it. If I wiggle the card around a lot while inserting it I can get Linux to detect it about 1/5 of the time, but after rebooting the device is back to not being detected. In the thread, Hans guessed that the card-detect pin is flaky or not connecting well, but it doesn't affect the cubieboard firmware, which just ignores that pin and assumes the device is present.

Since I was planning on using a SATA drive anyway, this isn't that big of a deal: just delete everything on the SD card except u-boot, and the SATA drive will be used for /boot and /. But if I ever want to update u-boot on the SD card, I'll have to go through the whole wiggle process again and manually 'dd' it into place using the steps on the Fedora install page.
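That manual 'dd' step is essentially the following (shown here against scratch files so nothing real gets overwritten; on the actual hardware the target would be the SD card device, something like /dev/sdX, and the binary would be the real u-boot image from the Fedora instructions):

```shell
# u-boot for sunxi boards lives at an 8 KiB offset on the SD card,
# hence bs=1024 seek=8. Stand-in files are used for this sketch.
sdcard=$(mktemp)                               # stand-in for /dev/sdX
uboot=$(mktemp)                                # stand-in for u-boot-sunxi-with-spl.bin
printf 'UBOOT' > "$uboot"
dd if="$uboot" of="$sdcard" bs=1024 seek=8 conv=notrunc 2>/dev/null
```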
Moving to Ghost

Today, I migrated my blog: to Ghost. I was previously writing blogs and maintaining my static blog website using Nikola. I like Nikola for being a very featureful and powerful static blog generator. I find the ReStructuredText format handy when generating complex HTML, and especially, I fell in love with the custom RST shortcuts provided by Nikola. But, flexibility breeds complexity.

Why migrate?
  • A blog post should be simple and easily renderable in blog aggregators. When you do too much custom HTML/CSS stuff in your post, e.g., playing grid layout of Twitter bootstrap, etc. things break.
  • When you start using a lot of custom things provided by a static blog generator like Nikola, migration becomes a pain. Luckily, I had just started to use Nikola specific RST shortcuts.
  • You want to write blogs, and not code. It was a pain to migrate to newer versions of Nikola. Usually, it involved a lot of fiddling with the conf files, sometimes the source code as well.
  • I kinda like WYSIWYG, and especially web based editors accessible from anywhere over the internet.
  • The final blow was my Nikola-based blog's RSS feed not being parsed by the planet aggregator used by the Fedora planet, the DGPLUG planet, etc. For a second, I thought I could write a patch to fix wherever the bug was. But then, enough is enough. I want to write blog posts, and not code each time to do it.

How I migrated:
  • Installed the Ghost plugin in my WordPress blog and exported all content in a format Ghost understands
  • Installed ghost on local machine
  • Signup and enter the admin dashboard
  • Import blog posts from my wordpress blog
  • Migrate RST blog posts from Nikola to Ghost
  • Once everything looked fine, I installed ghost on my server
  • Exported all local ghost content and imported that in my ghost instance for
  • Rsync'd all the local images from content/images/ to the remote server
  • Setup supervisord to run ghost on my server as a daemon
  • Added nginx rules to allow access to all public pages of over HTTP and blocked access to admin pages at I've configured another nginx virtual host to allow access to the admin pages over HTTPS.
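The supervisord piece is just a standard program section; mine looks roughly like this (the paths and user name here are assumptions for illustration, not my actual values):

```
[program:ghost]
command = node index.js
directory = /var/www/ghost
user = ghost
autostart = true
autorestart = true
environment = NODE_ENV="production"
```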

The migration was not as painful as I was expecting it to be. It took me a few hours to do the entire thing. I am liking Ghost's simplicity, the admin dashboard, and the web editor for writing blog posts.

Now I can focus on writing blog posts when I write one, and not on debugging through some code :)

February 27, 2015

News: NVIDIA for Linux Released with GL_NV_command_list.
Nvidia has come out with new drivers for Linux, with support for the following GPUs:
Quadro K620M, Quadro K2200M, GeForce GTX 960, GeForce GTX 965M.

NVIDIA has published a long-lived branch release for Linux:
It also adds one new extension, GL_NV_command_list, and exposes 368 OpenGL extensions for a GeForce GTX 970.
Hello, World!

Test post to validate feed aggregators are working properly. Stay tuned for more interesting content soon.

Revisiting Datagrepper Performance

In Fedora Infrastructure, we run a service somewhat-hilariously called datagrepper which lets you make queries over HTTP about the history of our message bus. (The service that feeds the database is called datanommer.) We recently crossed the mark of 20 million messages in the store, and the thing still works but it has become noticeably slower over time. This affects other dependent services:

  • The releng dashboard and others make HTTP queries to datagrepper.
  • The fedora-packages app waits on datagrepper results to present brief histories of packages.
  • The Fedora Badges backend queries the db directly to figure out if it should award badges or not.
  • The notifications frontend queries the db to try and display what messages in the past would have matched a hypothetical set of rules.

I've written about this chokepoint before, but haven't had time to really do anything about it... until this week!

Measuring how bad it is

First, some stats -- I wrote this benchmarking script to try a handful of different queries on the service and report some average response times:

#!/usr/bin/env python
import requests
import itertools
import time
import sys

url = ''

attempts = 8

possible_arguments = [
    ('delta', 86400),
    ('user', 'ralph'),
    ('category', 'buildsys'),
    ('topic', ''),
    ('not_package', 'bugwarrior'),
]

result_map = {}
for left, right in itertools.product(possible_arguments, possible_arguments):
    if left is right:
        continue
    key = hash(str(list(sorted(set(left + right)))))
    if key in result_map:
        continue

    results = []
    params = dict([left, right])
    for attempt in range(attempts):
        start = time.time()
        r = requests.get(url, params=params)
        assert(r.status_code == 200)
        results.append(time.time() - start)

    # Throw away the max and the min (outliers)
    results.remove(min(results))
    results.remove(max(results))

    average = sum(results) / len(results)
    result_map[key] = average

    print "%0.4f    %r" % (average, str(params))

The results get printed out in two columns.

  • The leftmost column is the average number of seconds it takes to make a query (we try 8 times, throw away the shortest and the longest and take the average of the remaining).
  • The rightmost column is a description of the query arguments passed to datagrepper. Different kinds of queries take different times.

This first set of results are from our production instance as-is:

7.7467    "{'user': 'ralph', 'delta': 86400}"
0.6984    "{'category': 'buildsys', 'delta': 86400}"
0.7801    "{'topic': '', 'delta': 86400}"
6.0842    "{'not_package': 'bugwarrior', 'delta': 86400}"
7.9572    "{'category': 'buildsys', 'user': 'ralph'}"
7.2941    "{'topic': '', 'user': 'ralph'}"
11.751    "{'user': 'ralph', 'not_package': 'bugwarrior'}"
34.402    "{'category': 'buildsys', 'topic': ''}"
36.377    "{'category': 'buildsys', 'not_package': 'bugwarrior'}"
44.536    "{'topic': '', 'not_package': 'bugwarrior'}"

Notice that a handful of queries are under one second but some are unbearably long. A seven second response time is too long, and a 44-second response time is way too long.

Setting up a dev instance

I grabbed the dump of our production database and imported it into a fresh postgres instance in our private cloud to mess around. Before making any further modifications, I ran the benchmarking script again on this new guy and got some different results:

5.4305    "{'user': 'ralph', 'delta': 86400}"
0.5391    "{'category': 'buildsys', 'delta': 86400}"
0.4992    "{'topic': '', 'delta': 86400}"
4.5578    "{'not_package': 'bugwarrior', 'delta': 86400}"
6.4852    "{'category': 'buildsys', 'user': 'ralph'}"
6.3851    "{'topic': '', 'user': 'ralph'}"
10.932    "{'user': 'ralph', 'not_package': 'bugwarrior'}"
9.1895    "{'category': 'buildsys', 'topic': ''}"
14.950    "{'category': 'buildsys', 'not_package': 'bugwarrior'}"
12.044    "{'topic': '', 'not_package': 'bugwarrior'}"

A couple things are faster here:

  • No ssl on the HTTP requests (almost irrelevant)
  • No other load on the db from other live requests (likely irrelevant)
  • The db was freshly imported (the last time we moved the db server things got magically faster. I think there's something about the way that postgres stores stuff internally that when you freshly import the data, it is organized more effectively. I have no data or real know-how to support this claim though).

Experimenting with indexes

I first tried adding indexes on the category and topic columns of the messages table (which are common columns used for filter operations). We already have an index on the timestamp column, without which the whole service is just unusable.
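In SQL terms, those two indexes amount to something like the following (the index names are my own; the table and column names are as described above):

```sql
CREATE INDEX index_messages_category ON messages (category);
CREATE INDEX index_messages_topic ON messages (topic);
```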

Some results after adding those:

0.1957    "{'user': 'ralph', 'delta': 86400}"
0.1966    "{'category': 'buildsys', 'delta': 86400}"
0.1936    "{'topic': '', 'delta': 86400}"
0.1986    "{'not_package': 'bugwarrior', 'delta': 86400}"
6.6809    "{'category': 'buildsys', 'user': 'ralph'}"
6.4602    "{'topic': '', 'user': 'ralph'}"
10.982    "{'user': 'ralph', 'not_package': 'bugwarrior'}"
3.7270    "{'category': 'buildsys', 'topic': ''}"
14.906    "{'category': 'buildsys', 'not_package': 'bugwarrior'}"
7.6618    "{'topic': '', 'not_package': 'bugwarrior'}"

Response times are faster in the cases you would expect.

Those columns are relatively simple one-to-many relationships. A message has one topic, and one category. Topics and categories are each associated with many messages. There is no JOIN required.

Handling the many-to-many cases

Speeding up the queries that require filtering on users and packages is more tricky. They are many-to-many relations -- each user is associated with multiple messages and a message may be associated with many users (or many packages).

I did some research, and through trial-and-error found that adding a composite primary key on the bridge tables gave a nice performance boost. See the results here:

0.2074    "{'user': 'ralph', 'delta': 86400}"
0.2091    "{'category': 'buildsys', 'delta': 86400}"
0.2099    "{'topic': '', 'delta': 86400}"
0.2056    "{'not_package': 'bugwarrior', 'delta': 86400}"
1.4863    "{'category': 'buildsys', 'user': 'ralph'}"
1.4553    "{'topic': '', 'user': 'ralph'}"
1.8186    "{'user': 'ralph', 'not_package': 'bugwarrior'}"
3.5525    "{'category': 'buildsys', 'topic': ''}"
10.9242    "{'category': 'buildsys', 'not_package': 'bugwarrior'}"
3.5214    "{'topic': '', 'not_package': 'bugwarrior'}"

The best so far! That one 10.9 second query is undesirable, but it also makes sense: we're asking it to first filter for all buildsys messages (the spammiest category) and then to prune those down to only the builds (a proper subset of that category). If you query just for the builds by topic and omit the category part (which is what you want anyways) the query takes 3.5s.

All around, I see a 3.5x speed increase.
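For the curious, the composite primary keys on the bridge tables amount to DDL along these lines (the table and column names here are hypothetical stand-ins for illustration, not necessarily datanommer's actual schema):

```sql
ALTER TABLE user_messages ADD PRIMARY KEY (username, msg_id);
ALTER TABLE package_messages ADD PRIMARY KEY (package, msg_id);
```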

Rolling it out

The code is set to be merged into datanommer and I wrote an ansible playbook to orchestrate pushing the change out. I'd push it out now, but we just entered the infrastructure freeze for the Fedora 22 Alpha release. Once we're through that and all thawed, we should be good to go.

How to boot a Fedora 21 aarch64 UEFI guest on x86_64

You can use virt-builder to make Fedora 21 aarch64 guests easily:

$ virt-builder --arch aarch64 fedora-21

but unless you have real aarch64 hardware, how do you boot them?

Well the latest qemu supports working system emulation for 64 bit ARM. So assuming you (a) have compiled a very new qemu-system-aarch64 (I recommend qemu from git), and (b) you have the AAVMF (UEFI for aarch64, based on TianoCore EDK2) firmware, then:

$ qemu-system-aarch64 \
    -nodefconfig -nodefaults -display none \
    -M virt -cpu cortex-a57 -machine accel=tcg -m 2048 \
    -drive if=pflash,format=raw,file=AAVMF_CODE.fd,readonly \
    -drive if=pflash,format=raw,file=vars.fd \
    -drive file=fedora-21.img,format=raw,if=none,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -serial stdio

And that will boot the aarch64 guest.

CzP @ Scale 2015: lemmings, makers and lots of syslog-ng users
This year, for the first time ever, I was presenting and representing syslog-ng in the USA. I was at the Southern California Linux Expo, or as it is better known: Scale. It’s not just an expo, but also a very nice conference loaded with interesting presentations in many parallel tracks. BalaBit had a booth dedicated […]

February 26, 2015

PostBooks accounting and ERP suite coming to Fedora

PostBooks has been successful on Debian and Ubuntu for a while now and for all those who asked, it is finally coming to Fedora.

The review request has just been submitted and the spec files have also been submitted to xTuple as pull requests so future upstream releases can be used with rpmbuild to create packages.

Can you help?

A few small things outstanding:

  • Putting a launcher icon in the GNOME menus
  • Packaging the schemas - they are in separate packages on Debian/Ubuntu. Download them here and load the one you want into your PostgreSQL instance using the instructions from the Debian package.

Community support

The xTuple forum is a great place to ask any questions and get to know the community.


Here is a quick look at the login screen on a Fedora 19 host:


This post is just me outlining what my current projects are and the order in which I plan on tackling them.

Besides the regular tasks I have on the Design Team Ticket system ( currently just ticket number 279 by pingou, which should be very fun! ) I plan on contributing to some repos within Fedora's infra organization on GitHub, as well as some new things I'd like to bring to the table.

Project(s) I am currently working on:

asknot-ng - github

Project(s) I would like to start contributing to:

datagrepper - github
glittergallery - github

Project(s) I have planned:

Fedora Hubs - starting this once I've played with the datagrepper/fedmsg APIs more.

Three Types of Tokens

One of the most annoying administrative issues in Keystone is the MySQL backend to the token database filling up. While we have a flush script, it needs to be scheduled via cron. Here is a short overview of the types of tokens, why the backend is necessary, and what is being done to mitigate the problem.
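For reference, scheduling that flush is typically a crontab entry along these lines (the hourly schedule and paths are examples; `keystone-manage token_flush` is the command that does the work):

```
# Flush expired keystone tokens once an hour
0 * * * * keystone /usr/bin/keystone-manage token_flush
```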


Amanda: The company's OpenStack system admin

Manny: The IT manager.

ACT 1, SCENE 1: A small conference room. Manny has called a meeting with Amanda.

Manny: Hey Amanda, What are these keystone tokens and why are they causing so many problems?

Amanda: Keystone tokens are an opaque blob used to allow caching of an authentication event and some subset of the authorization data associated with the user.

Manny: OK…backup. what does that mean?

Amanda: Authentication means that you prove that you are who you claim to be. For most of OpenStack’s history, this has meant handing over a symmetric secret.

Manny: And a symmetric secret is …?

Amanda: A password.

Manny: OK, got it. I hand in my password to prove that I am me. What is the authorization data?

Amanda: In OpenStack, it is the username and the user’s roles.

Manny: All their roles?

Amanda: No, only those for the scope of the token. A token can be scoped to a project, or to a domain, but in our setup only I ever need a domain-scoped token.

Manny: The domain is how I select between the customer list and our employees out of our LDAP server, right?

Amanda: Yep. There is another domain just for admin tasks, too. It has the service users for Nova and so on.

Manny: OK, so I get a token, and I can see all this stuff?

Amanda: Sort of. For most of the operation we do, you use the “openstack” command. That is the common command line, and it hides the fact that it is getting a token for most operations. But you can actually use a web tool called curl to go direct to the keystone server and request a token. I do that for debugging sometimes. If you do that, you see the body of the token data in the response. But that is different from being able to read the token itself. The token is actually only 32 characters long. It is what is known as a UUID.

Manny (slowly): UUID? Universally Unique Identifier. Right?

Amanda: Right. It’s based on a long random number generated by the operating system. UUIDs are how most of OpenStack generates remote identifiers for VMs, images, volumes and so on.

Manny: Then the token doesn’t really hold all that data?

Amanda: It doesn’t. The token is just a…well, a token.

Manny: Like we used to have for the toll machines on route 93. Till we all got E-ZPass!

Amanda: Yeah. Those tokens showed that you had paid for the trip. For OpenStack, a token is a remote reference to a subset of your user data. If you pass a token to Nova, it still has to go back to Keystone to validate the token. When it validates the token, it gets the data. However, our OpenStack deployment is so small, Nova and Keystone are on the same machine. Going back to Keystone does not require a “real” network round trip.

Manny: So now that we are planning on going to the multi-host setup, validating a token will require a network round trip?

Amanda: Actually, when we move to the multi-site, we are going to switch over to a different form of token that does not require a network round trip. And that is where the pain starts.

Manny: These are the PKI tokens you were talking about in the meeting?

Amanda: Yeah.

Manny: OK, I remember the term PKI was Public Key…something.

Amanda: The I is for infrastructure, but you remembered the important part.

Manny: Two keys, Public versus private: you encode with one and decode with the other.

Amanda: Yes. In this case, it is the token data that is encoded with the private key and decoded with the public key.

Manny: I thought that made it huge. Do you really encode all the data?

Amanda: No, just a signature of the data. A Hash. This is called message signing, and it is used in a lot of places, basically to validate that the message is both unchanged and that it comes from the person you think it comes from.

Manny: OK, so… what is the pain?

Amanda: Two things. One, the tokens are bigger, much bigger, than a UUID. They have all of the validation data in them, including the service catalog. And our service catalog is growing in the multi-site deployment, so we’ve been warned that the tokens might get so big that they cause problems.

Manny: Let’s come back to that. What is the other problem?

Amanda: OK… since a token is remotely validated, there is the possibility that something has changed in Keystone and the token is no longer valid. With our current system, Keystone knows this immediately, and just dumps the token. So when Nova comes to validate it, it’s no longer valid and the user has to get another token. With remote validation, Nova has to periodically request a list of revoked tokens.

Manny: So either way Keystone needs to store data. What is the problem?

Amanda: Well, today we store our tokens in Memcached. It’s a simple key-value store, it’s local to the Keystone instance, and it just dumps old data that hasn’t been used in a while. With revocations, if you dump old data, you might lose the fact that a token was revoked.

Manny: Effectively un-revoking that token?

Amanda: Yep.

Manny: OK…so how do we deal with this?

Amanda: We have to move from storing tokens in Memcached to MySQL. According to the docs and upstream discussions, this can work, but you have to be careful to schedule a job to clean up the old tokens, or you can fill up the token database. Some of the larger sites have to run this job very frequently.

Manny: It’s a major source of pain?

Amanda: It can be. We don’t think we’ll be at that scale at the multisite launch, but it might happen as we grow.

Manny: OK, back to the token size thing, then. How do we deal with that?

Amanda: OK, when we go multi-site, we are going to have one of everything at each site: Keystone, Nova, Neutron, Glance. We have some jobs to synchronize the most essential things like the glance images and the customer database, but the rest is going to kept fairly separate. Each will be tagged as a region.

Manny: So the service catalog is going to be galactic, but will be sharded out by Keystone server?

Amanda: Sort of. We are going to actually make it possible to have the complete service catalog in each keystone server, but there is an option in Keystone to specify a subset of the catalog for a given project. So when you get a token, the service catalog will be scoped down to the project in question. We’ve done some estimates of size and we’ll be able to squeak by.

Manny: So, what about the multi-site contracts? Where a company can send their VMs to either a local or a remote Nova?

Amanda: For now they will be separate projects. But for the future plans, where we are going to need to be able to put them in the same project, we are stuck.

Manny: Ugh. We can’t be the only people with this problem.

Amanda: Some people are moving back to UUID tokens, but there are issues both with replication of the token database and also with cross site network traffic. But there is some upstream work that sounds promising to mitigate that.

Manny: The lightweight thing?

Amanda: Yeah, lightweight tokens. It's backing off the remotely validated aspect of Keystone tokens, but doesn't need to store the tokens themselves. They use a scheme called authenticated encryption, which puts a minimal amount of info into the token, enough to recreate the whole authorization data. But only the Keystone server can expand that data. Then, all that needs to be persisted is the revocations.

Manny: Still?

Amanda: Yeah, and there are all the same issues there with flushing of data, but the scale of the data is much smaller. Password changes and removing roles from users are the ones we expect to see the most. We still need a cron job to flush those.

Manny: No silver bullet, eh? Still how will that work for multisite?

Amanda: Since the token is validated by cryptography, the different sites will need to synchronize the keys. There was a project called Kite that was part of Keystone, and then it wasn’t, and then it was again, but it is actually designed to solve this problem. So all of the Keystone servers will share their keys to validate tokens locally.

Manny: We’ll still need to synchronize the revocation data?

Amanda: No silver bullet.

Manny: Do we really need the revocation data? What if we just … didn’t revoke. Made the tokens short lived.

Amanda: It's been proposed. The problem is that a lot of the workflows were built around the idea of long-lived tokens. Tokens went from being valid for 24 hours to 1 hour by default, and that broke some things. Some people have had to crank the time back up again. We think we might be able to get away with shorter tokens, but we need to test and see what it breaks.

Manny: Yeah, I could see HA having a problem with that… wait, 24 hours… how does Heat do what it needs to? It can restart a machine a month afterwards. Do we just hand over the passwords to Heat?

Amanda: Heh… it used to. But Heat uses a delegation mechanism called trusts. A user creates a trust, and that effectively says that Heat can do something on the user's behalf, but Heat has to get its own token first. It first proves that it is Heat, and then it uses the trust to get a token on the user's behalf.

Manny: So…trusts should be used everywhere?

Amanda: Something like trusts, but more lightweight. Trusts are deliberate delegation mechanisms, and are set up on a per-user basis. To really scale, it would have to be something where the admin sets up the delegation agreement as a template. If that were the case, then these long-lived workflows would not need to use the same token.

Manny: And we could get rid of the revocation events. OK, that is time, and I have a customer meeting. Thanks.

Amanda: No problem.
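The "lightweight token" scheme Amanda describes can be sketched in miniature: a key shared among the Keystone servers turns a small authorization payload into a self-validating token, so only revocations need to be persisted. This toy version is not Keystone's actual token format, and it only signs the payload rather than performing full authenticated encryption, but it shows the shape of the idea:

```python
import base64
import hashlib
import hmac
import json
import time

# Key shared among all the Keystone servers (what Kite would distribute).
SECRET = b"shared-server-side-key"

def issue_token(user_id, project_id, ttl=3600):
    """Pack minimal authorization data and sign it. Only holders of
    SECRET can mint or validate tokens, so the server does not need to
    store the token itself."""
    payload = base64.urlsafe_b64encode(json.dumps(
        {"user": user_id, "project": project_id,
         "exp": int(time.time()) + ttl}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def validate_token(token):
    """Return the authorization data, or None if tampered with or expired."""
    payload, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None
    data = json.loads(base64.urlsafe_b64decode(payload))
    if data["exp"] < time.time():
        return None  # short lifetimes stand in for most revocation cases
    return data

token = issue_token("amanda", "multisite-demo")
print(validate_token(token)["user"])     # amanda
print(validate_token(token[:-1] + "X"))  # None (tampered)
```

Note what this buys and what it costs: any server holding the key can validate locally with no token database, but password changes and role removals still need a separately synchronized revocation list, which is exactly the trade-off discussed above.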


FUDCon Pune: Now Accepting Subsidy Requests

If you’re planning on attending FUDCon Pune, and are going to need a subsidy for travel and accommodation, you should head to this link to fill out the form requesting one.

You may have some questions about this, and we already have some answers.  Feel free to hop on to the fedora-india list or the #fedora-india IRC channel on Freenode if you have other questions.

What is a hackathon or hackfest? Few more tips for proposals

According to Wikipedia, a hackathon (also known as a hack day, hackfest or codefest) is an event in which computer programmers and others involved in software development, including graphic designers, interface designers and project managers, collaborate intensively on software projects. Let us go through a few points from this definition.

  • it is an event about collaboration.
  • it involves not only programmers, but designers, docs and other people.
  • it is about software projects.

We can also see that people work intensively on the projects. It can be one project, or people can work as teams on different projects. In Fedora land, the most common example of a hackathon is the “Fedora Activity Day” or FAD, where a group of contributors sit together in one place and work intensively on a project. The latest example is the Design FAD which we had around a month back, where the design team worked on fixing their goals, workflows and other related things.

One should keep these things in mind while submitting a proposal for FUDCon, or indeed any other conference. If you want to teach about a particular technology or tool, you should submit that as a workshop proposal rather than as a hackfest or hackathon.

So what makes a good topic for a hackfest during FUDCon? Say you want to work on speeding up the boot time of Fedora. You may want to design 5 great icons for the projects you love. If you love photography, maybe you want to build a camera using a Raspberry Pi and some nice Python code. Another good option is to ask for a list of bugs from the applications under the Fedora apps/infrastructure/releng teams and then work on fixing them during the conference.

For both hackfest and workshop proposals, there are a few points which must be present in your proposal. Things like:

  • Who is the target audience for the workshop?
  • What version of Fedora must they have on their laptops?
  • Which packages should they pre-install on their computers before coming to the conference?
  • Do they need to know any particular technology, programming language or tool to take part in the workshop or hackfest?
  • Make sure that you submit proposals about projects where you contribute upstream.

The CFP is open until 9th March, so go ahead and submit awesome proposals.

RTLSDR – Up and running in Mac OSX Yosemite with GQRX & GNURadio


A while back I bought an RTL2832U device from eBay for a very small amount and was blown away by how well this piece of kit performed.


Under Linux and Windows it worked beautifully. I then purchased a new MacBook Pro and didn’t really know what to use, as I had no experience with Mac OS X.


So a little research came up with GQRX and I can tell you it works brilliantly and has pretty much everything you need.

Whilst there are Mac binary (DMG) files for GQRX, I have not been able to find any current versions.

But I did find that MacPorts has an up to date version which is good news for us!


If you do not know what MacPorts is, it’s an easy-to-use system for compiling, managing and installing open source software. Similar to Homebrew, if you use that.

So let’s crack on and get MacPorts installed so we can get playing!



Firstly we need to install Xcode from the Apple App Store

This link will take you directly to the Xcode app in the app store.


Once you have Xcode installed, it’s time to install MacPorts

Download the Yosemite package here; for any other version of Mac OS X, please see this directory.


Install the package by double-clicking it and proceeding through the next steps. You will most likely get to a stage in the installation where it says “Running package scripts”. This might take a while, so please be patient: it hasn’t stopped working, it is downloading the latest base set behind the scenes.


Installing GQRX using MacPorts “Port” Command.

After the MacPorts installation has finished we then need to go ahead and issue some commands to MacPorts to install GQRX.

Fire up your Terminal of choice (I am currently using iTerm2) and issue the following command.

sudo port install gqrx


MacPorts will then compute all dependencies, download, compile and install them.

This part may take a while depending on your system’s hardware, as it has to compile the whole of GNU Radio and some other dependencies.

Once it has finished compiling and installing, there will be a new menu item in your dash (F4) called MacPorts, and within that folder will be your shiny new GQRX icon.




Launch GQRX and you will be presented with the following screen.

Screen Shot 2015-02-25 at 20.17.47

No Sound Huh?

Hit the top left to activate the application and you will be surfing the airwaves in no time. I did get a little annoyed after setting this up because there was no sound at all. Luckily, though, it was just me being lazy and expecting the application to work straight away, as every other application I have used has done.



What I didn’t do was set the demodulation type. On the top right-hand side of the program are the receiver options, with the multi-choice “Mode” box. You can have a look at other options to suit yourself, but I chose WFM (Stereo) and all was working well.


So… there you have it: you are now up and running and ready to surf the airwaves. I hope this worked for you, as it is pretty straightforward. If I missed anything or you’d like to point anything out, please contact me!

In case you didn’t notice…

This happened.

Time and Date Menu

It was a big job, and Florian deserves major congratulations for getting it ready (just!) in time.

Credit also needs to go to Jon McCann. If you’ve read my previous posts on the notifications redesign, you’ll know that we’ve gone through a number of different design concepts. In the end, it was Jon who was instrumental in helping us to identify the best approach. He has an amazing clarity of vision, which proved crucial in helping us to give the design a solid conceptual foundation.

The various concepts we have explored as part of the notifications redesign make me confident that the approach we have settled on is the right one. It was only with the final design that everything clicked into place, and we were left with a design which effectively avoided the problems found in each of the others.

In the rest of this post, I’ll outline the design’s key features.

“What’s happening?”

One of the key goals for the notifications redesign was to provide an effective way to review previous notifications. In this way, notifications can be used to find out about what has happened in the past, as well as what is happening right now.

With this ability, notifications can be used in an exploratory manner, to get an overview of recent events. This is particularly useful as a way to inform decisions about what to do next. As such, notifications can be a tool for directing our own personal activity.

This recognition informs the new calendar and notifications design. It indicates that notifications have an affinity with other activity-related information – at the moment we ask “what’s happening?”, we are also often implicitly asking, “what should I be doing?”

The new calendar design aims to answer that question in the most effective way possible. It provides a single place where you can find out what has happened, what is happening, and what is about to happen. In concrete terms, this means showing notifications, event and birthday reminders, world clocks and weather information. Notifications take centre stage, while the other information is available for you to serendipitously stumble upon.

All this information is contained in a single view, which means that it is immediately accessible and gives an effective overview.

It’s a time machine

Since they tell us about what has happened in the past, and help us decide what to do in the future, notifications are closely related to time. In accordance with this framing, the new design makes notifications accessible through the time and day indicator in the top bar [1]. All the other information found in the new date and time menu also relates to time – event reminders, birthdays, world times, and the calendar.

The world times section of the date and time menu works through integration with the GNOME Clocks application.

Taking action

Notification Banner

Just like the current GNOME 3 notifications design, the new design incorporates pop up banners that include notification actions. These enable you to quickly respond to a notification in a convenient manner. Built in chat replies have been retained.

The position and appearance of banners has changed, though: they now appear towards the top of the screen. This gives them a close relationship with the date and time menu. It also helps to ensure that pop up banners don’t get in the way, which was an issue when banners were at the bottom of the screen.

Keeping it simple

A major goal for the notifications redesign has been to simplify as much as possible. There’s quite a bit of complexity to notifications right now, including different notification types (such as resident and transient notifications), special cases (like music and hot plug notifications), notification properties (including images, action buttons), as well as stacking of notifications within different sources.

This complexity comes at a cost, increasing both the number of bugs and the maintenance burden. With the new design, we are clearing the decks and trying to stick to a single, simple API for virtually all notifications. This will lead to a much lower bug count, as well as easier maintenance.

Plans for the future

The main thing that has still to land for 3.16 is status icon support. We’re still finalising our plans for this, but it will definitely happen, and I’m sure that it will be better than what was in 3.14.

Looking forward to 3.18, there are a number of elements to the design that didn’t make it for 3.16, including music controls (probably based on MPRIS), Weather integration and birthday reminders. These additional elements will make the new design increasingly useful and cohesive, so there is a lot to look forward to. If you want to work on any of these features, just get in touch.

A mockup of the date and time menu, featuring birthdays and weather information.

How you can help

The new notifications and calendar design landed late in the development cycle, which makes testing vital. If you want to help, the best thing you can do is use GNOME Shell master, and report any bugs that you find.

This is also a good moment to pay attention to the actual notifications themselves. There’s a lot of poor notification usage out there right now [2], as well as obvious examples of things that should have notifications and don’t (completed downloads, anyone?).

If you are responsible for a module or application, take a moment to read over the notifications guidelines, and take a second to check if there are any untapped opportunities for effective notification usage in your software.

The new notifications design will be released next month, as a part of GNOME 3.16. If you want more details about the design or future plans, check out the design page.

[1] This also has some practical advantages, such as communicating that notifications are provided by the system, clearly delineating each section of the top bar, and providing an effective click-target.

[2] Things to look out for: notifications not being dismissed when they are replaced by a new one, application notifications that don’t use the application’s icon, default actions that don’t raise the sender application, notification actions that duplicate the default action (“View” or “Open” buttons are a warning sign of this), notifications being shown by applications while you are using them.

Another fake flash story
I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good value.

Hmm, no mention of the SD card size?

The packaging looked rather bare, with no mention of the card's size. I opened up the packaging and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well known in swindler realms.

Buyer beware: do not buy from the "carte sd" seller, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.
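F3 and h2testw both work the same way: fill the device with known data, read it back, and compare. A toy Python version of that check (a hypothetical helper, pointed at the card's mount point; the real tools also defeat OS caching, which this sketch does not):

```python
import hashlib
import os

def check_capacity(mount_point, total_mb=64, chunk_mb=1):
    """Fill the target with deterministic data, then read it back and
    compare. On a fake-capacity card, writes past the real size wrap
    around or vanish, so some chunks come back corrupted.
    (Toy version: the real tools also write more data than fits in RAM
    so the page cache can't mask the problem.)"""
    chunk_size = chunk_mb * 1024 * 1024
    digests = {}
    for i in range(total_mb // chunk_mb):
        # A 32-byte seed repeated until it fills one chunk.
        data = hashlib.sha256(str(i).encode()).digest() * (chunk_size // 32)
        path = os.path.join(mount_point, "fill_%04d.bin" % i)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        digests[path] = hashlib.sha256(data).hexdigest()
    bad = []
    for path, digest in digests.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                bad.append(path)
    return bad  # any entries here mean the storage is lying about capacity
```

On the 8GB card above masquerading as 64GB, everything written past the real capacity would come back corrupted.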

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account
The Sax Doctor
Distinctive ring that holds the bell of a Selmer Paris Saxophone to the main body of the horn.

Ring of a Selmer Paris Saxophone

Dropped my Sax off at Emilio Lyon’s house and workshop. My folks bought it for me from him at Rayburn Music in Boston back when I was a High School Freshman. I still remember him pointing to the sticker on it that indicated “This is my work.”

As someone who loves both the saxophone and working with my hands, I have to admit I was looking forward to meeting him. I was even a little nervous. He has a great reputation. Was he going to chastise me for the state of my horn? It hadn’t been serviced in… way too long. I was a little worried that not changing the oil on the rods would have worn down some of the metal connections.

I spent the best forty minutes in his workshop as he explained what the horn needed; an overhaul, which meant pulling all the pads off and replacing them. I expected this.

I showed him how the bottom thumb rest didn’t fit my hand right…it was the stock Selmer piece, and it had always cut into my thumb a little. He had another from a different, older horn, that was brass. He shaped it with a hammer and .. it felt good. Very good. He gave me that piece and kept mine, in case it would work for someone else.

There was another minor issue with the left thumb catching a rough spot near the thumb rest, and he covered it with some epoxy. Not magic, but a magic touch nonetheless.

To say he told me it needed an overhaul doesn’t do it justice. He explained how he would do it, step by step, especially the task of setting the pads. I know from elsewhere that this is real artistry, and takes years of experience to get right. He talked about taking the springs off and leaving the pads resting on the cups. I asked why he took the springs off.

It’s about touch. You shouldn’t work hard to cover the holes of the saxophone; it should be light. Emilio understands how the top saxophonists in the world interact with their horns. He talked about advising Sonny Rollins to use a light touch, and how he would like playing the horn better. Sonny tried for a week and then called back to apologize: he just couldn’t play that way. We talked about how other people played… Joe Lovano and George Garzone, heavy. David Sanborn, very light.

The corks and felts all need to be replaced. He has a new technique I had heard about, using little black rubber nubs. He showed me how quiet they were. “I never liked the base.” I think he meant the base set of padding that came with the Selmers.

He assured me that the metal was fine. This is a good horn. A good number.

He quoted me the price for the overhaul. It was less than other people had quoted me.

He didn’t take any contact info; he told me to contact him. I get my horn back in two weeks. I’ll make do with playing my beat-up student alto and EWI.

I’m really looking forward to getting my Selmer Mark VI back with the overhaul from Emilio. Will it play like new? I don’t know; the horn was at least 6 years old by the time I was born, and 20 when I first played it.

But I suspect it will play better than new.

February 25, 2015

Better way of importing css stylesheets in Flask

Flask is pretty cool. But I ran into a little snag today: when I used dynamic URLs, my stylesheets broke. The solution is url_for in the head section of your base template (or wherever you link your stylesheets). You can get it to work by using the following for inserting your css:

<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
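The underlying problem is relative URL resolution: a relative href is resolved against the current page's path, so it breaks as soon as routes gain path segments, while url_for emits a stable absolute path. A quick stdlib illustration (not Flask-specific):

```python
from urllib.parse import urljoin

# A relative href resolves against the current page's URL,
# so it points somewhere different on every route:
print(urljoin("http://example.com/", "style.css"))
# http://example.com/style.css
print(urljoin("http://example.com/post/123/", "style.css"))
# http://example.com/post/123/style.css  <- 404 on a dynamic URL

# url_for('static', filename='style.css') renders an absolute path,
# which resolves the same way from any page:
print(urljoin("http://example.com/post/123/", "/static/style.css"))
# http://example.com/static/style.css
```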

FUDCON Pune 2015 CFP is open

FUDCon, the Fedora Users and Developers Conference, is going to happen next in Pune, India, from 26th June to 28th June at the Maharashtra Institute of Technology College of Engineering (MIT COE). The call for proposals (CFP) is already out, and 9th March is the last date to submit a talk/workshop. If you are working on any upstream project, you may want to talk about your work to the technical crowd at the conference. If you are a student and you want to talk about the latest patches you have submitted to an upstream project, this is the right place to do so. Maybe you have never talked in front of a crowd like this before, but you can start by submitting a talk to FUDCon.

A few tips for your talk/workshop proposal

  • Write about your intended audience. Is this something useful for students? Or for system administrators? Be clear about who your target audience is.
  • Provide a talk outline with the points you want to cover; it is better to include a time estimate in the outline.
  • What will the attendees get out of your talk?
  • Provide links to the projects, source code, blogs, and presentations you have given before. These will add more value to the proposal.
  • Submit your proposal early, that way more people from the talk selection committee will be able to go through your talk proposal.
  • Make sure you have a recorded copy of any demo you want to do on stage, because it is generally a bad idea to do a demo during a live talk. Things can go wrong very fast.
  • Write your speaker biography properly. Do not assume that everyone knows you. Give links to all the other talks you have given before; links to recorded videos are also a very nice thing to have in the biography.
  • Make sure that you write your proposal for the attendees of your talk. They will be the measurement of success for the talk/workshop. (I am not talking about numbers, but the quality of knowledge sharing).
  • Try to avoid giving a generic talk like introduction to Open Source/Linux.
  • In case you are talking about an upstream project, choose a project where you have enough contributions. That way the selection committee will know that you are a good person to give that talk. We know it is very tempting to talk about the latest fancy and shiny project, but please do so only if you are an upstream contributor.
  • Please do not submit talks on your products. This is a community event, not a company meet.
  • Write more in your talk proposal. It is never bad to explain or communicate more in a talk/workshop proposal.

In case you need help with your proposal, you can show it to the other community members before submitting it. You can always find a few of us in #fedora-india IRC channel.

So do not waste time, go ahead and submit a talk or workshop proposal.

Single UEFI executable for kernel+initrd+cmdline

Lately, Kay Sievers and David Herrmann created a UEFI loader stub which starts a Linux kernel with an initrd and a kernel command line that are COFF sections of the executable. This enables us to create a single UEFI executable with a standard distribution kernel, a custom initrd and our own kernel command line attached.

Of course, booting a Linux kernel directly from UEFI has been possible before with the kernel's EFI stub, but the initrd and kernel command line had to be specified at kernel compile time.

To demonstrate this feature and have a useful product, I created a shell script which creates a “rescue” image on Fedora with the rescue kernel and rescue initrd. The kernel command line instructs dracut to assemble all devices, wait 20 seconds for devices to appear (“rd.retry=20”) and drop to a shell, because the specified “root=/dev/failme” does not exist, of course. In this shell you can fsck your devices, mount them and repair your system.

The shell script can be downloaded and viewed on github.

To run the script, you will need to install gummiboot >= 46 and binutils.

# yum install gummiboot binutils

Run the script:

# bash BOOTX64.EFI

Copy BOOTX64.EFI to e.g. a USB stick to EFI/BOOT/BOOTX64.EFI in the FAT boot partition and point your BIOS to boot from the USB stick.

Voilà! A rescue USB stick with just one file! :-)

Common Criteria

What is Common Criteria?

Common Criteria (CC) is an international standard (ISO/IEC 15408) for certifying computer security software. Using Protection Profiles, computer systems can be secured to certain levels that meet requirements laid out by the Common Criteria. Established by governments, the Common Criteria treaty has been signed by 17 countries, and each country recognizes the other’s certifications.

In the U.S., Common Criteria is handled by the National Information Assurance Partnership (NIAP). Other countries have their own CC authorities. Each authority certifies CC labs, which do the actual work of evaluating products. Once certified by the authority, based on the evidence from the lab and the vendor, that certification is recognized globally.

Your certification is given a particular assurance level which, roughly speaking, represents the strength of the certification. Confidence is higher in an EAL4 certification than in an EAL2 one. Attention is usually paid to the assurance level rather than to what, specifically, you are being assured of: the Protection Profiles.
CC certification represents a very specific set of software and hardware configurations. Software versions and hardware models and versions are important, as differences will break the certification.

How does the Common Criteria work?

The Common Criteria authority in each country creates a set of expectations for particular kinds of software: operating systems, firewalls, and so on. Those expectations are called Protection Profiles. Vendors, like Red Hat, then work with a third-party lab to document how we meet the Protection Profile. A Target of Evaluation (TOE) is created, which is all the specific hardware and software that’s being evaluated. Months are then spent in the lab getting the package ready for submission. This state is known as “in evaluation”.
Once the package is complete, it is submitted to the relevant authority. Once the authority reviews and approves the package, the product becomes “Common Criteria certified” for that target.

Why does Red Hat need or want Common Criteria?

Common Criteria is mandatory for software used within the US Government and other countries’ government systems. Other industries in the United States may also require Common Criteria. Because it is important to our customers, Red Hat spends the time and energy to meet these standards.

What Red Hat products are Common Criteria certified?

Currently, Red Hat Enterprise Linux (RHEL) 5.x and 6.x meet Common Criteria in various versions. Also, Red Hat’s JBoss Enterprise Application Platform 5 is certified in various versions. It should be noted that while Fedora and CentOS operating systems are related to RHEL, they are not CC certified. The Common Criteria Portal provides information on what specific versions of a product are certified and to what level. Red Hat also provides a listing of all certifications and accreditation of our products.

Are minor releases of RHEL certified?

When a minor release, or a bug fix, or a security issue arises, most customers will want to patch their systems to remain secure against the latest threats. Technically, this means falling out of compliance. For most systems, the agency’s Certifying Authority (CA) requires these updates as a matter of basic security measures. It is already understood that this breaks CC.

Connecting Common Criteria evaluation to a specific minor versions is difficult, at best, for a couple of reasons:

First, the certifications will never line up with a particular minor version exactly. A RHEL minor version is, after all, just a convenient waypoint for what is actually a constant stream of updates. The CC target, for example, began with RHEL 6.2, but the evaluated configuration will inevitably end up having packages updated from their 6.2 versions. In the case of FIPS, the certifications aren’t tied to a RHEL version at all, but to the version of the certified package. So OpenSSH server version 5.3p1-70.el6 is certified, no matter which minor RHEL version you happen to be using.

This leads to the second reason. Customers have, in the past, forced programs to stay on hopelessly outdated and unpatched systems only because they want to see /etc/redhat-release match the CC documentation exactly. Policies like this ignore the possibility that a certified package could exist in RHEL 6.2, 6.3, 6.4, etc., and the likelihood that subsequent security patches may have been made to the certified package. So we’re discouraging customers from saying “you must use version X.” After all, that’s not how CC was designed to work. We think CC should be treated as a starting point or baseline on which a program can conduct a sensible patching and errata strategy.

Can I use a product if it’s “in evaluation”?

Under NSTISSP #11, government customers must prefer products that have been certified using a US-approved protection profile. Failing that, you can use something certified under another profile. Failing that, you must ensure that the product is in evaluation.

Red Hat has successfully completed many Common Criteria processes so “in evaluation” is less uncertain than it might sound. When a product is “in evaluation”, the confidence level is high that certification will be awarded. We work with our customers and their CAs and DAAs to help provide information they need to make a decision on C&A packages that are up for review.

I’m worried about the timing of the certification. I need to deploy today!

Red Hat makes it as easy as possible for customers to use the version of Red Hat Enterprise Linux that they’re comfortable with. A subscription lets you use any version of the product as long as you have a current subscription. So you can buy a subscription today, deploy a currently certified version, and move to a more recent version once it’s certified, at no additional cost.

Why can’t I find your certification on the NIAP website?

Red Hat Enterprise Linux 6 was certified by BSI under OS Protection Profile at EAL4+. This is equivalent to certifying under NIAP under the Common Criteria mutual recognition treaties. More information on mutual recognition can be found on the CCRA web site. That site includes a list of the member countries that recognize one another’s evaluations.

How can I keep my CC-configured system patched?

A security plugin for the yum update tool allows customers to only install patches that are security fixes. This allows a system to be updated for security issues while not allowing bug fixes or enhancements to be installed. This makes for a more stable system that also meets security update requirements.

To install the security plugin, from a root-authenticated prompt:

# yum install yum-plugin-security
# yum updateinfo
# yum update --security

Once security updates have been added to the system, the CC-evaluated configuration has changed and the system is no longer certified.  This is the recommended way of building a system: starting with CC and then patching in accordance with DISA regulations. Consulting the CA and DAA during the system’s C&A process will help establish guidelines and expectations.

You didn’t answer all my questions. Where do I go for more help?

Red Hat Support is available anytime a customer, or potential customer, has a question about a Red Hat product.

Additional Reading

Check your packages in pkgdb and anitya

The question was asked on the devel list earlier whether there was a way to check all one's packages for their status in pkgdb and whether they are in anitya.

So I quickly cooked up a small script to do just that: it retrieves all the packages in pkgdb for which you are the point of contact or a co-maintainer, and tells you whether the monitoring flag is on or off in pkgdb and whether the project could be found in anitya.

For example for me (partial output):

$ python pingou
   * point of contact
     R-ALL                                Monitor=False   Anitya=False
     R-AnnotationDbi                      Monitor=False   Anitya=False
     guake                                Monitor=True    Anitya=True
     igraph                               Monitor=False   Anitya=False
     jdependency                          Monitor=True    Anitya=True
     libdivecomputer                      Monitor=True    Anitya=True
     metamorphose2                        Monitor=False   Anitya=False
     packagedb-cli                        Monitor=False   Anitya=False
   * co-maintained
     R-qtl                                Monitor=False   Anitya=False
     fedora-review                        Monitor=True    Anitya=True
     geany                                Monitor=True    Anitya=True
     geany-plugins                        Monitor=True    Anitya=True
     homebank                             Monitor=True    Anitya=True
     libfprint                            Monitor=True    Anitya=True

If you are interested, feel free to use the script.
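For the curious, the core of such a script is just a couple of JSON API calls plus some formatting. Here is a minimal sketch in Python, with the network access isolated so the formatting logic is easy to follow offline; the endpoint URLs and paths are assumptions for illustration, not taken from the original script:

```python
import json
import urllib.request

# Assumed endpoints; the real script may use different URLs/paths.
PKGDB_API = "https://admin.fedoraproject.org/pkgdb/api"
ANITYA_API = "https://release-monitoring.org/api"


def fetch_json(url):
    """GET a URL and decode its JSON payload."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def format_report(roles):
    """Render the pkgdb/anitya status report.

    roles maps a role name ("point of contact", "co-maintained") to a
    list of (package, monitor_flag, found_in_anitya) tuples.
    """
    lines = []
    for role, packages in roles.items():
        lines.append("   * %s" % role)
        for name, monitor, in_anitya in sorted(packages):
            lines.append("     %-36s Monitor=%-7s Anitya=%s"
                         % (name, monitor, in_anitya))
    return "\n".join(lines)


if __name__ == "__main__":
    # Real usage would fetch the packager's package list from PKGDB_API
    # and look each package up against ANITYA_API before formatting.
    print(format_report({"point of contact": [("guake", True, True)]}))
```

The same two-column `Monitor=`/`Anitya=` layout as the output above falls out of a single format string.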

hostapd can not find the wlan interface but interface is ready
Have you ever gotten an error from hostapd complaining that a network interface cannot be found, even though it is actually there and ready? You probably have a space at the end of the “interface” line. hostapd does not work when there is a trailing space on that line (and probably on other lines as well) in /etc/hostapd/hostapd.conf. ap:/etc/hostapd# […]
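Since the stray space is invisible in most editors, a tiny linter can save some head-scratching. A small sketch in Python; the default path is the usual hostapd location, adjust as needed:

```python
import sys


def find_trailing_whitespace(text):
    """Return (line_number, line) pairs for lines ending in spaces or tabs."""
    return [(n, line)
            for n, line in enumerate(text.splitlines(), start=1)
            if line != line.rstrip(" \t")]


def strip_trailing_whitespace(text):
    """Return the config with trailing spaces/tabs removed from every line."""
    return "\n".join(l.rstrip(" \t") for l in text.splitlines()) + "\n"


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/etc/hostapd/hostapd.conf"
    with open(path) as f:
        text = f.read()
    for n, line in find_trailing_whitespace(text):
        print("%s:%d: trailing whitespace: %r" % (path, n, line))
```

Running it before starting hostapd points straight at the offending line instead of leaving you staring at a correct-looking config.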
FUDCon Pune 2015 – Invite for sponsorship requests
We are now accepting funding requests for FUDCon Pune 2015. So, all Fedora contributors who want to attend FUDCon Pune during 26-28 June, 2015 and need sponsorship for travel and/or accommodation, please open a Fedora trac ticket for funding request here.

Here are some more details ::

What does sponsorship mean?
It means the Fedora Project will subsidize or reimburse your costs for travel and/or accommodation.

How to apply?
Read this guide and file a ticket here. You need a Fedora account to apply.

Am I eligible?
Typically, if you are a Fedora contributor who is planning to present a talk or organize a hackfest or a workshop, you will be considered eligible. However, if you are unsure or feel that you have a good justification, go ahead and apply anyway. There is a much better chance of getting sponsored if you ask!

Will everyone who applies get sponsorship?
Probably not. We will do our best to sponsor contributors who are within APAC to attend the event. We can potentially subsidize the costs of some non-APAC participants but our budget is tight. We would love to bring a global community to the FUDCon and welcome everyone to participate but we have to do it within the budget we have been granted.

How will I know I am getting sponsored?
We will update your ticket when we make a decision.

How will you decide who gets sponsored?
All the organizers and other Fedora community members who participate in the meetings get a vote. All discussions happen transparently on #fudcon-planning and meeting minutes are posted to the fedora-india mailing list. FUDCon meetings are open for everyone to participate in. You are welcome to join us. There are no side entrances. This is the same process for everyone in Fedora. Please see the sponsoring event attendees wiki page for more information.

When is the deadline?
April 30th, 2015. We will keep it open for a while longer if we can, but don’t wait to apply. You have a much higher chance of being sponsored if you apply early while funds are available.

About SourceForge and anitya

There are a couple of reports (1 and 2) about anitya not doing its job properly for projects hosted on SourceForge.

So here is a summary of the situation:

A project X on SourceForge, for example, with a homepage of its own, releases multiple tarballs named X-1.2.tar.gz, libX-0.3.tar.gz and libY-2.0.tar.gz.

So how do we model this?

The original approach taken was: the project is named X, so in anitya we name it X, and the sourceforge backend in anitya lets you specify a SourceForge project, making it possible to search for X, libX or libY in the RSS feed of the X project on SourceForge. Problem: when adding libX or libY to anitya, the project name and homepage are both those of X, and this pair is exactly what makes projects unique in anitya (in other words, adding both libX and libY won't be allowed).

So this is the current situation and as you can see, it has problems (which explains the two issues reported).

What are the potential solutions?

1/ Extend the unique constraint

We could include the tarball name to search for in the unique constraint, which would then change from: name+homepage to name+homepage+tarball
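The effect of widening the constraint can be sketched with plain SQL. Anitya's real schema differs; this miniature (with a hypothetical homepage value) just shows how moving from name+homepage to name+homepage+tarball changes which inserts are allowed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE projects (
        name     TEXT NOT NULL,
        homepage TEXT NOT NULL,
        tarball  TEXT NOT NULL,
        UNIQUE (name, homepage, tarball)  -- was: UNIQUE (name, homepage)
    )
""")

# Under the old name+homepage constraint the second insert would fail,
# since libX and libY both live under project X's name and homepage.
conn.execute("INSERT INTO projects VALUES ('X', 'http://example.net/X', 'libX')")
conn.execute("INSERT INTO projects VALUES ('X', 'http://example.net/X', 'libY')")

# An exact duplicate is still rejected:
try:
    conn.execute("INSERT INTO projects VALUES ('X', 'http://example.net/X', 'libX')")
except sqlite3.IntegrityError:
    print("exact duplicate rejected")
```

With the tarball in the key, libX and libY can coexist under project X while true duplicates are still refused.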

2/ Invert the use of name and tarball

Instead of having the project name be X with a tarball name libX, we could make the project be libX and the tarball be X.

This sounds quite nice and easy, but looking at the projects currently in anitya's database, I found entries like:

        name         | homepage | tarball
---------------------+----------+--------------------
 linuxwacom          |          | xf86-input-wacom
 brutalchess (alpha) |          | brutalchess-alpha
 chemical-mime       |          | chemical-mime-data

So for these, the tarball name would become the project name and they would be pretty ugly.

I am not quite sure which is the best approach here.

What do you think?

My talk in MSF, India

Last week I gave a talk on Free and Open Source Software at the Metal and Steel Factory, Indian Ordnance Factories, Ishapore, India. I had met Mr. Amartya Talukdar, a well known activist and blogger from Kolkata, at the bloggers’ meet. He currently manages the I.T. team in the above mentioned place, and he arranged the talk to spread more awareness about FOSS.

I reached the main gate an hour before the talk. The security guards came around to ask me why I was standing there on the road. I was sure this was going to happen again. As I went into the factory along with Mr. Talukdar, the guards stopped me at least three times, guns at the ready. They also took my mobile phone; I had left my camera at home for the same reason.

I met the I.T. department and a few developers who work there before the talk. Around 9:40am we moved to the big conference room, where the talk started with Mr. Talukdar giving a small introduction. I was not sure how many technical people would attend, so the talk was less technical and more demo-oriented. The room was almost full within a few minutes, and I hope that my introductions to FOSS, Fedora, and Python went well. I was carrying a few Python docs and some Fedora stickers with me. I spent most of the time demoing various tools that can increase the productivity of the management by using the right tools: we saw reStructuredText, rst2pdf and Sphinx for managing documents, looked into version control systems and how to use them, and talked a bit about Owncloud, though without network access I could not demo it. I also demoed various small Python scripts I use to keep my life simple. I learned about various FOSS tools they are already using. They run Linux on their servers; my biggest suggestion was to use Linux on the desktops too, since viruses are a common problem that can easily be eliminated with Linux on the desktops.

My talk ended around 12pm. After lunch, while walking back to the factory, Mr. Talukdar showed me various historical places and items from the Dutch and British colonial days. Of course, there were security checks again on the way out and back in.

We spent the next few hours discussing various technology and workflow related queries with the Jt. General Manager, Mr. Neeraj Agrawal. It was very nice to see that he keeps up with all the latest news and information from the FOSS and technology world. We really need more people like him who are open to new ideas and capable of managing both worlds. In the future we will be doing a few workshops targeting the needs of the factory's developers.


February 24, 2015

New GIMP 2.9.1 master builds available for Fedora

After a 6-month hiatus, I have finally gotten the GIMP 2.9.1 development builds working again for Fedora 21. These development builds are built from the upstream git master branch of what will be GIMP 2.10 when it is released.

Check out the COPR page for these builds for further details on enabling and installing from this repo, but also note that this is an experimental repo of unstable software, so tread cautiously.

The one major feature of most interest is the ability to do image manipulation with 32-bit float precision, which is possible in version 2.9.1 of GIMP via the power of GEGL.
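To see why high-bit-depth editing matters, consider a darken-then-brighten round trip. This is not GEGL code, just an illustrative simulation of what quantizing to 256 levels after every step does to pixel values:

```python
def edit_int8(value):
    """Halve then double a 0-255 pixel value, rounding to whole levels
    after each step, as an 8-bit integer pipeline effectively must."""
    half = round(value * 0.5)
    return round(half * 2.0)


def edit_float(value):
    """The same edit carried out in floating point: nothing is lost."""
    return (value * 0.5) * 2.0


# Every odd level lands on an even one after the 8-bit round trip,
# so exactly half of the tonal range is damaged.
damaged = sum(1 for v in range(256) if edit_int8(v) != v)
print(damaged, "of 256 levels damaged by the 8-bit round trip")

# The float pipeline returns every level unchanged.
assert all(edit_float(v) == v for v in range(256))
```

Chain a few more such operations and the 8-bit losses compound into visible banding, which is exactly what the float precision in 2.9.1 avoids.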


Fedora Server SIG Weekly Meeting (2015-02-24)

#fedora-meeting-1: Server SIG Weekly Meeting (2015-02-24)

Meeting started by sgallagh at 16:01:22 UTC (full logs).

Meeting summary

  1. roll call (sgallagh, 16:01:22)
  2. Agenda (sgallagh, 16:05:00)
    1. Agenda Item: Fedora Server 22 Alpha Checkpoint (sgallagh, 16:05:16)
    2. Agenda Item: Fedora 22 QA Needs (sgallagh, 16:05:41)

  3. Fedora Server 22 Alpha Checkpoint (sgallagh, 16:07:04)
    1. The offending bug is (sgallagh, 16:11:17)
    2. ACTION: simo and sgallagh to address Dogtag/FreeIPA dependency issue. (sgallagh, 16:13:26)
    3. The first pass at a Database Server Role has landed in rolekit and will be included in F22 Alpha TC5 later today. (sgallagh, 16:14:38)
    4. simo notes that making pki work with tomcat 8 may be difficult. (sgallagh, 16:15:36)
    5. (mitr, 16:20:02)
    6. AGREED: If the Tomcat and Dogtag teams cannot settle this between themselves quickly, the Fedora Server SIG and WG will file a FESCo ticket asking them to vote on reverting Tomcat to version 7 for Fedora 22 so it can spend a stabilization cycle in Rawhide. (sgallagh, 16:31:58)

  4. Open Floor (sgallagh, 16:43:12)
    1. (sgallagh, 16:44:06)
    2. We need to generate AppStream metadata for pgadmin3 to support the DB Role (sgallagh, 16:52:16)

Meeting ended at 16:54:13 UTC (full logs).

Action items

  1. simo and sgallagh to address Dogtag/FreeIPA dependency issue.

Action items, by person

  1. sgallagh
    1. simo and sgallagh to address Dogtag/FreeIPA dependency issue.
  2. simo
    1. simo and sgallagh to address Dogtag/FreeIPA dependency issue.

People present (lines said)

  1. sgallagh (84)
  2. simo (29)
  3. danofsatx (18)
  4. stefw (12)
  5. mitr (10)
  6. zodbot (9)
  7. nirik (5)
  8. tuanta (5)
  9. mizmo (2)
  10. adamw (0)

Generated by MeetBot 0.1.4.

What Can We Do About Superfish?

Perhaps the greatest question about Superfish is what we can do about it. The first response is to throw technology at it.

The challenge here is that the technology used by Superfish has legitimate uses:

  • The core Superfish application is interesting – using image analysis to deconstruct a product image and search for similar products is actually quite ingenious! I have no reservations about this if it is an application a user consciously selects and installs and deliberately uses.
  • Changing the HTML data returned by a web site has many uses – for example, ad blocking and script blocking tools change the web site. Even deleting tracking cookies can be considered changing the web site! Having said that, changing the contents of a web site is a very slippery slope. And I have real problems with inserting ads into a web site or changing its content without making it extremely clear this is occurring.
  • Reading the data being exchanged with other sites is needed for firewalls and other security products.
  • Creating your own certificates is a part of many applications. However, I can’t think of many cases where it is appropriate to install a root certificate – this is powerful and dangerous.
  • Even decrypting and re-encrypting web traffic has its place in proxies, especially in corporate environments.

The real problem with Superfish is how the combination of these things comes together and is used, and the quality of the implementation: many reports indicate poor implementation practices, such as a single insecure password for the entire root certificate infrastructure. It doesn’t matter what encryption algorithm you are using if your master password is the name of your company!

Attempting a straight technology fix will lead to “throwing the baby out with the bath water” for several valuable technologies. And a technical fix for this specific case won’t stop the next one.

The underlying issue is how these technologies are implemented and used. Attempting to fix this through technology is doomed to failure and will likely make things worse.

Yes, there is a place for technology improvements. We should be using DNSSEC to make sure DNS information is valid. Stronger ways of validating certificate authenticity would be valuable – someone suggested DANE in one of the comments. DANE involves including the SSL certificate in the DNS records for a domain. In combination with DNSSEC it gives you higher confidence that you are talking to the site you think you are, using the right SSL certificate. The issue here is that it requires companies to include this information in their DNS records.

The underlying questions involve trust and law as well as technology. To function, you need to be able to trust people – in this case Lenovo – to do the right thing. It is clear that many people feel that Lenovo has violated their trust. It is appropriate to hold Lenovo responsible for this.

The other avenue is legal. We have laws to regulate behavior and to hold people and companies responsible for their actions. Violating these regulations, regardless of the technology used, can and should be addressed through the legal system.

At the end of the day, the key issues are trust, transparency, choice, and following the law. When someone violates these they should expect to be held accountable and to pay a price in the market.

Minglish - New input method for Marathi Language
Why new input method?

For the Marathi language, we have the phonetic, inscript and itrans input methods, but each of them was designed for a particular kind of user. Minglish is nothing but a combination of Marathi + English.
Users who are already familiar with English often want the same key sequences in their own language, and Minglish solves this problem.

One of the problems with the Marathi language is typing complex or joint words, and half characters which turn into full characters when combined with other characters.
To solve this problem, the user needs to hold AltGr and press the required key to get half characters and complex characters.

More information on the key bindings can be found on

Minglish has been accepted as a feature for Fedora 22; if you have any suggestions, please feel free to reach out.
If you want a similar input method for your own language, we would be happy to help you out.

FUDCon Pune 2015 Planning Meeting Minutes – 24 Feb
We had another productive FUDCon planning meeting today at #fedora-india, as well as on the phone and in person at the Red Hat Pune office.

We used the Etherpad at for keeping notes.

Key Points are:

The entire minutes are appended below.

Agenda + Minutes

  • Code of Conduct / Anti-harassment policy
  • amit: +1
  • Outreach
  • this is for industry + mailing lists (communities)
  • we need help here with more lists + more volunteers to do the outreach.
  • this is for colleges / educational institutes
  • separate cfp needed because we need to mention open source in education here -- we could have a track for professors / teachers here to get them together and discuss problems specific to their area.
  • MIT
  • MIT has access to 35000 colleges and they can email them to come participate
  • MIT have also proposed to invite papers from open source friendly people across India. They say we have access to 7 seminar halls at the campus, and we can accommodate as many tracks as we like.
  • We could have 1-2 tracks for paper presentations?
  • siddhesh: not very keen on this since it probably will dilute our content further, especially if we have little control over the content of those papers. We might be better off if they just propose papers through our system and we decide what goes through
  • they suggested we will have control + we'll be involved in review - hope that's what they meant.
  • Please think of more companies which deal with RH / CentOS / Fedora; reach out for CFP.
  • Rupali emailed some RH contacts
  • Siddhesh contacted Symantec
  • Video series
  • Videos from FPL (Matthew Miller), jsmith, Kushal, Parag, Rahul, Joerg, etc. -- extolling the virtues of FUDCon + Pune
  • Kushal and Shreyank to work on this
  • also reach out to ambassadors in apac for confirmation / planning
  • Siddhesh (I'll announce during the APAC meeting this week and then take it from there)
  • send email to fudcon-planning about CFP
  • Siddhesh (sent now)
  • open trac for sponsorship requests
    • this might also encourage people to apply for speaker sessions

  • Website
  • just need to figure out the OpenID thing now
  • Praveen + Siddhesh.
    • Favicon for fudcon site
      • Needs an update once we finalize the logo
    • Graphics status update?
  • Two draft logos ready - Suchakra + jurankdankkal (Fedora-Indonesia)
  • This logo looks good to several people; minor tweaks are needed.
  • Looks good with small + big sizes - fit for website logo + banners.
  • Soni suggests adding flag. (+1 amit)
  • Currently it's a choice between this one and the older logo from FUDCon Pune 2011.
  • Logo candidates:
      • 2
      • 4
      • 8 WIN!
  • Clear to open tickets for other graphics elements
  • Also pls update current ticket so that designers know not to keep working on this design anymore
  • SSL support?  Asked Saleem about it
    • We're still figuring this out.

  • Travel updates? (after CFP closes + speaker selection)

  • Marketing
  • Fedora magazine
      • CFP article out
  • One more CFP article at -1 week (1 March).
      • Amita
    • Video update?
      • kpoint: Not an option. Rates too high (20k per day just for recording)
      • hasgeek
        • They're allowing us use of their equipment + train a few volunteers who can do the recording.   Equipment needs to be brought from BLR to Pune.  Nice gesture by them; but sounds complicated given the expensive equipment + need to get volunteers to be trained.
      • Look for cheaper quotes from other professionals (Bipin)
      • Buy our own cameras? (Rupali)
      • Check if local RH marketing has cameras they can spare for the event (Rupali)
      • Open source solutions for streaming (amit)
    • Last option will be to have a tiny webcam doing live Hangout -- advantage is it has auto-archival on youtube.
      • amit: +1 for this option (or using the open source one for streaming)
      • will need some trac ticket?
      • Siddhesh (looks like I need a Pune 2015 component to file a ticket against)
  • Twitter
  • Facebook
  • Google Plus
  • LinkedIn group

  • Budget
  • Make and maintain a publicly visible sheet to track expenses?
  • Send link to budget to MIT contacts
  • Sent a reminder to Ruth

  • FUDPub
  • Rupali reached out to Venue1
  • Venue1
  • Space for 100 people
  • Reasonable (approx 1800 per person)
  • RH has relationship; payments are easier
  • Close to cocoon
  • No limitation on sound limits - a nice party can be had.
  • Rupali continuing to reach out to others
  • In any case, we can close this by next week.

  • Swag
  • Niranjan suggests some programmable arduino boards manufactured locally with our logos
  • About ₹750
  • Swag for Volunteers
  • tshirts
  • Swag for Organisers?
  • Swag for Speakers
  • Umbrellas (for sweet Pune rains)
  • Fedora badge for attendees?(added to the FAS account)

  • Venue
  • WiFi
  • They don't have this on-campus; they had said they would set it up by FUDCon.  Follow up.
  • Power connector extensions
  • MIT were going to set this up.  Follow up
  • Note to speakers (include in prep email): In seminar hall: projectors are 4:3, screen quite small (don't include small text)

  • MIT meetups
  • First one proposed on 28th Feb
  • Feasible?
  • What to do?
  • Siddhesh to reach out to MCUG
  • Set up a Fedora mirror (Chandan)
  • Best way to seed will be to get an archive on 

  • Volunteers
  • College reopens on Jun 15
  • Many students will be on leave till Jun 15
  • We should identify students who will be available in the break - e.g. students from Pune who don't plan to travel elsewhere; we don't need too much of their time anyway

  • Mobile Application
  • Siddharth + Rohan had volunteered
  • Rohan to adapt an app used by the OpenStack folks.

  • Creating FAQ for FUDCon India 2015
  • Kushal to draft it
  • Kushal has a list of Q
  • Amit will get creative with answers.
Building RHEL Vagrant Boxes with Vagrant-Builder

Vagrant is a great tool for development, but Red Hat Enterprise Linux (RHEL) customers have typically been left out, because it has been impossible to get RHEL boxes! It would be extremely elegant if hackers could quickly test and prototype their code on the same OS as they’re running in production.

Secondly, when hacking on projects that have a long initial setup phase (eg: a long rpm install) it would be excellent if hackers could roll their own modified base boxes, so that certain common operations could be re-factored out into the base image.

This all changes today.

Please continue reading if you’d like to know how :)


In order to use RHEL, you first need a subscription. If you don’t already have one, go sign up… I’ll wait. You do have to pay money, but in return, you’re funding my salary (and many others) so that we can build you lots of great hacks.


I’ll be working through this whole process on a Fedora 21 laptop. It should probably work on different OS versions and flavours, but I haven’t tested it. Please test, and let me know your results! You’ll also need virt-install and virt-builder installed:

$ sudo yum install -y /usr/bin/virt-{install,builder}

Step one:

Login to and check that you have a valid subscription available. This should look like this:

A view of my available subscriptions.


If everything looks good, you’ll need to download an ISO image of RHEL. First head to the downloads section and find the RHEL product:

A view of my available product downloads.


In the RHEL download section, you’ll find a number of variants. You want the RHEL 7.0 Binary DVD:

A view of the available RHEL downloads.


After it has finished downloading, verify the SHA-256 hash is correct, and continue to step two!

$ sha256sum rhel-server-7.0-x86_64-dvd.iso
85a9fedc2bf0fc825cc7817056aa00b3ea87d7e111e0cf8de77d3ba643f8646c  rhel-server-7.0-x86_64-dvd.iso

Step two:

Grab a copy of vagrant-builder:

$ git clone
Cloning into 'vagrant-builder'...
Checking connectivity... done.

I’m pleased to announce that it now has some documentation! (Patches are welcome to improve it!)

Since we’re going to use it to build RHEL images, you’ll need to put your subscription manager credentials in ~/.vagrant-builder/

$ cat ~/.vagrant-builder/
# these values are used by vagrant-builder
USERNAME='' # replace with your username
PASSWORD='hunter2'               # replace with your password

This is a simple shell script that gets sourced, so you could instead replace the static values with a script that calls out to the GNOME Keyring. This is left as an exercise to the reader.

To build the image, we’ll be working in the v7/ directory. This directory supports common OS families and versions that have high commonality, and this includes Fedora 20, Fedora 21, CentOS 7.0, and RHEL 7.0.

Put the downloaded RHEL ISO in the iso/ directory. To allow qemu to see this file, you’ll need to add some ACLs:

$ sudo -s # do this as root
$ cd /home/
$ getfacl james # james is my home directory
# file: james
# owner: james
# group: james
$ setfacl -m u:qemu:r-x james # this is the important line
$ getfacl james
# file: james
# owner: james
# group: james

If you have an unusually small /tmp directory, it might also be an issue. You’ll need at least 6GiB free, but a bit extra is a good idea. Check your free space first:

$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.9G 1.3M 1.9G 1% /tmp

Let’s increase this a bit:

$ sudo mount -o remount,size=8G /tmp
$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
tmpfs 8.0G 1.3M 8.0G 1% /tmp

You’re now ready to build an image…

Step three:

In the versions/ directory, you’ll see that I have provided a script. You’ll need to run it from its parent directory. This will take a while, and will cause two sudo prompts, which are required for virt-install. One downside to this process is that your password will be briefly shown in the virt-builder output. Patches to fix this are welcome!

$ pwd
$ time ./versions/
real    38m49.777s
user    13m20.910s
sys     1m13.832s
$ echo $?

With any luck, this should eventually complete successfully. This uses your CPU’s virtualization instructions, so if they’re not enabled, this will be a lot slower. It also uses the network, which, in North America, means you’re in for a wait. Lastly, the xz compression utility will use a bunch of CPU building the virt-builder base image. On my laptop, this whole process took about 30 minutes. The above run was done without an SSD and took a bit longer.

The good news is that most of the hard work is now done and won’t need to be repeated! If you want to see the fruits of your CPU labour, have a look in: ~/tmp/builder/rhel-7.0-iso/.

$ ls -lAhGs
total 4.1G
1.7G -rw-r--r--. 1 james 1.7G Feb 23 18:48 box.img
1.7G -rw-r--r--. 1 james  41G Feb 23 18:48 builder.img
 12K -rw-rw-r--. 1 james  10K Feb 23 18:11 docker.tar
4.0K -rw-rw-r--. 1 james  388 Feb 23 18:39 index
4.0K -rw-rw-r--. 1 james   64 Feb 23 18:11 metadata.json
652M -rw-rw-r--. 1 james 652M Feb 23 18:50
200M -rw-r--r--. 1 james 200M Feb 23 18:28 rhel-7.0.xz

As you can see, we’ve produced a bunch of files. The resulting .box file is your RHEL 7.0 vagrant base box! Congratulations!

Step four:

If you haven’t ever installed vagrant, you’ll be pleased to know that as of last week, vagrant and vagrant-libvirt RPMs have hit Fedora 21! I started trying to convince the RPM wizards about a year ago, and we finally have something that is quite usable! Hopefully we’ll iterate on any packaging bugs and keep this great work going! There are now only three things you need to do to get a working vagrant-libvirt setup on Fedora 21:

  1. $ yum install -y vagrant-libvirt
  2. Source this .bashrc add-on from:
  3. Add a vagrant.pkla file as mentioned here

We’re now in well-known vagrant territory, so adding the box into vagrant is as simple as:

$ vagrant box add --name rhel-7.0

Using the box effectively:

Having a base box is great, but having to manage subscription-manager manually isn’t fun in a DevOps environment. Enter Oh-My-Vagrant (omv). You can use omv to automatically register and unregister boxes! Edit the omv.yaml file so that the image variable refers to the base box you just built, enter your username and password, and vagrant up away!

$ cat omv.yaml 
:image: rhel-7.0
:boxurlprefix: ''
:sync: rsync
:folder: ''
:extern: []
:puppet: false
:classes: []
:docker: false
:cachier: false
:vms: []
:namespace: omv
:count: 2
:username: ''
:password: 'hunter2'
:poolid: true
:repos: []
$ vs
Current machine states:

omv1                      not created (libvirt)
omv2                      not created (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

You might want to set repos to be:

['rhel-7-server-rpms', 'rhel-7-server-extras-rpms']

but it depends on what subscriptions you want or have available. If you’d like to store your credentials in an external file, you can do so like this:

$ cat ~/.config/oh-my-vagrant/auth.yaml
:password: hunter2

Here’s an actual run to see the subscription-manager action:

$ vup omv1
==> omv1: The system has been registered with ID: 00112233-4455-6677-8899-aabbccddeeff
==> omv1: Installed Product Current Status:
==> omv1: Product Name: Red Hat Enterprise Linux Server
==> omv1: Status:       Subscribed
$ # the above lines show that your machine has been registered
$ vscreen root@omv1
[root@omv1 ~]# echo thanks purpleidea!
thanks purpleidea!
[root@omv1 ~]# exit

Make sure to unregister when you are permanently done with a machine, otherwise your subscriptions will be left idle. This happens automatically on vagrant destroy when using Oh-My-Vagrant:

$ vdestroy omv1 # make sure to unregister when you are done
Unlocking shell provisioning for: omv1...
Running 'subscription-manager unregister' on: omv1...
Connection to closed.
System has been unregistered.
==> omv1: Removing domain...


One interesting aspect of this build process is that it’s mostly idempotent. It’s able to do this because it uses GNU Make to ensure that only out-of-date steps or missing targets are run. As a result, if the build process fails part way through, you’ll only have to repeat the failed steps! This speeds up debugging and iterative development enormously!

To prove this to you, here is what a second run looks like (after the first successful run):

$ time ./versions/ 

real    0m0.029s
user    0m0.013s
sys    0m0.017s

As you can see it completes almost instantly.
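That instant second run is just GNU Make’s timestamp checking at work. As a toy illustration (the target and file names here are made up, not the builder’s actual Makefile):

```shell
# Toy demo: make only re-runs recipes whose targets are out of date.
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
disk.img: base.iso ; cp base.iso disk.img
box.img: disk.img ; cp disk.img box.img
EOF
touch base.iso
make box.img   # first run: both recipes execute
make box.img   # second run: nothing to rebuild, make reports up to date
```

If a step failed mid-run, only the missing targets would be rebuilt on the next invocation, which is exactly the property described above.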


To build a variant of the base image that we just built, create a versions/*.sh file, and modify the variables to add your changes in. If you start with a copy of the ~/tmp/builder/${VERSION}-${POSTFIX} folder, then you shouldn’t need to repeat the initial steps. Hint: btrfs is excellent at reflinking data, so you don’t unnecessarily store multiple copies!
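A reflink-aware copy of the working folder might look like the sketch below (the paths are placeholders; on non-btrfs filesystems cp silently falls back to a regular copy):

```shell
# Clone a builder working directory; on btrfs the data extents are
# shared (reflinked), so the copy is nearly free in time and space.
SRC="$(mktemp -d)"                      # stands in for ~/tmp/builder/${VERSION}-${POSTFIX}
echo "pretend disk image" > "$SRC/disk.img"
cp -r --reflink=auto "$SRC" "$SRC-variant"
```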

Plumbing Pipeline:

What actually happens behind the scenes? Most of the magic happens in the Makefile. The relevant series of transforms is as follows:

  1. virt-install: install from iso
  2. virt-sysprep: remove unneeded junk
  3. virt-sparsify: make sparse
  4. xz --best: compress into builder format
  5. virt-builder: use builder to bake vagrant box
  6. qemu-img convert: convert to correct format
  7. tar -cvz: tar up into vagrant box format

There are some intermediate dependency steps that I didn’t mention, so feel free to explore the source.

Future work:

  • Some of the above steps in the pipeline are actually bundled under the same target. It’s not a huge issue, but it could be changed if someone feels strongly about it.
  • Virt-builder can’t run docker commands during build. This would be very useful for pre-populating images with docker containers.
  • Oh-My-Vagrant, needs to have its DNS management switched to use vagrant-hostmanager instead of puppet resource commands.


While I expect you’ll love using these RHEL base boxes with Vagrant, the above builder methodology is currently not officially supported, and I can’t guarantee that the RHEL vagrant dev environments will be either. I’m putting this out there for the early (DevOps) adopters who want to hack on this and who didn’t want to invent their own build tool chain. If you do have issues, please leave a comment here, or submit a vagrant-builder issue.


Special thanks to Richard WM Jones and Pino Toscano for their great work on virt-builder that this is based on. Additional thanks to Randy Barlow for encouraging me to work on this. Thanks to Red Hat for continuing to pay my salary :)


If I’ve convinced you that you want some RHEL subscriptions, please go have a look, and please let Red Hat know that you appreciated this post and my work.

Happy Hacking!


February 23, 2015

GNOME 3.16 sightings

As is my habit, I’ve taken some screenshots of new things that showed up in my smoketesting of the GNOME 3.15.90 release. Since we are entering feature freeze with the .90 release, these pictures give some impression of what’s in store for GNOME 3.16.

The GNOME shell theme has seen the first major refresh in a while. As part of this refresh, the theme has been rewritten in sass, and is now sharing much more code with the Adwaita GTK+ theme. Window decorations are now also sharing code between client-side and server-side.

New Shell Theme

A long-anticipated redesign of notifications has landed just in time for 3.15.90. This is a major change in the user interaction. Notifications now appear at the top of the screen. The message tray is gone; old notifications can now be found in the calendar popup.

New Notifications

System integration has been improved, e.g. in the area of privacy. We now have a privacy page in gnome-initial-setup, which lets you opt out of geolocation and automatic bug reporting:

Privacy

Outside of the initial setup, the same settings are also available in the control-center privacy panel.

The nautilus UI has received a lot of love. The ‘gear’ menu has been replaced by a popover, the list appearance is improved,
and file deletion can now be undone from a notification.

Nautilus Improvements

Other applications have received a fresh look as well, for example evince and eog:

Evince and Eye of GNOME

There will also be a number of new applications, here are a few:

New Applications

You can try GNOME 3.15.90, for example in Fedora 22 today. Or you can wait for GNOME 3.16, which will arrive on March 25.

100 core ARM 64 bit chip

They claim “linear scaling of application performance” which I rather doubt unless your application is web serving.

Terminal Calendaring Application

As an avid mutt-kz user, I always found it quite annoying to have to use a web browser or my phone to check out my work calendar or upcoming birthdays. I have slowly started to use khal which is shaping up to be a very nice calendaring application for use within terminals:


For Fedora users I have created a COPR repo. As root, simply run:

dnf copr enable mbaldessari/khal

and then launch:

dnf install khal

This will install the following packages: python-atomicwrites, vdirsyncer, and khal. Once installed, we need to tell vdirsyncer where to fetch the caldav entries from. My ~/.vdirsyncer/config is as follows and contains my birthday list from Facebook and my work calendar:

[general]
status_path = ~/.vdirsyncer/status/
[pair my_calendar]
a = my_calendar_local
b = my_calendar_remote
[storage my_calendar_local]
type = filesystem
path = ~/.calendar/work/
fileext = .ics
[storage my_calendar_remote]
type = caldav
url =
username = michele
password = 123

[pair my_fb_birthdays]
a = my_fb_birthdays_local
b = my_fb_birthdays_remote

[storage my_fb_birthdays_local]
type = filesystem
path = ~/.calendar/fb_birthdays/
fileext = .ics

[storage my_fb_birthdays_remote]
type = http
url =

At this point you can run vdirsyncer sync and the entries will be fetched and stored locally. Note that if you get SSL validation errors you most likely need to import your company Root CA:

# cp MY-CA-root-cert.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust extract

Once vdirsyncer has completed fetching all the entries, it is time to display them in khal or in its interactive cousin ikhal. Here is my ~/.config/khal/khal.conf file:

path = ~/.calendar/fb_birthdays/
color = yellow

path = ~/.calendar/work/
color = dark red
local_timezone= Europe/Berlin
default_timezone= Europe/Berlin
timeformat= %H:%M
dateformat= %d.%m.
longdateformat= %d.%m.%Y
datetimeformat= %d.%m. %H:%M
longdatetimeformat= %d.%m.%Y %H:%M

That’s it. There is still some work to do before I can ditch other calendaring applications completely. Namely:

Let me know if you have any issues with the copr repo.

Eric Mesa: How do you Fedora?

We recently interviewed Fedora user Eric Mesa on how he uses Fedora.  This is part of a series here on the Fedora Magazine where we will profile Fedora users and how they use Fedora to get things done. If you are interested in being interviewed for a further installment of this series you can contact us on the feedback form.


Who are you, and what do you do?

My name’s Eric Mesa and I love Fedora. I manage programmers, but we’re a Windows shop, so that’s not important to this discussion. I am also a blogger and I write comic criticism. I am currently going through the approval process to join the Fedora Ambassadors team.

The reason I listed my involvement in the blogs has to do with how I came to be a Fedora user. Back in 2003 my parents wanted to start a new business and have an accompanying website. I did some research and found out that if I installed something called “Linux” onto a computer, I could run my own server. This was quite fascinating to me, I had no idea such a thing existed. So I figured I’d experiment by running my own website and blog before I created the server for their business. I went to the local Borders bookstore near my university and went to the computer section to figure out how to do this. Under the Linux section there was a Debian box with a book inside and a Fedora Core 1 book with DVDs/CDs at the back. Because I was able to look through the book for Fedora, I saw that it would show me how to set up a server, so I went with Fedora.

I’ve been involved with Fedora since then and for most of the time my blog has existed it’s either run on Fedora or CentOS. (I did eventually go to a VPS since most residential internet providers in the USA consider it a violation of terms of service to run a server.)

As I learned about FLOSS I ended up slowly working towards getting another computer to use as a main computer. As both Fedora (and Linux in general) made strides towards better desktop use AND most of what we need to do on a daily basis moved to the web I’ve been able to make my Linux computer my main computer. I’d say that for about the last 5 or so years, my Linux computer has been my main computer. Some days I don’t even turn on my Windows computer – at this point it’s only used for gaming and, thanks to Valve’s Steam, maybe not for much longer.

What hardware do you use to get the job done?

I have lots of hardware, including lots of old laptops. For myself I use Fedora almost exclusively. My desktop is a custom build with an AMD processor, 8 GB RAM, an nVidia graphics card, and a CD burner. I also have a 4 disk external enclosure attached via USB3. I have a dual monitor setup with a Samsung and a Viewsonic monitor.

My mouse is a $10 Logitech with a click-scrollwheel and my keyboard is a simple PS/2 multimedia keyboard with play/skip/volume buttons. The keyboard is on a KVM so I can share it between my Windows and Linux computers. All my hardware works perfectly with Fedora. Because of gaming and because I used to be big into 3D animation, I use nVidia’s proprietary driver. This only ever causes problems for me during upgrades due to the way it wants to hang on to the older kernel. Usually it only adds a teensy bit of extra work.

I have an Acer Aspire One netbook that I bought for travel that runs Fedora. Everything works as of the last time I remember using any particular bit of tech (e.g. the webcam).

What software do you use?

Software is the biggest reason I love FLOSS and love Fedora. I love that the software is libre and it’s nice that it’s very often gratis. On both my desktop and netbook I’m running the latest Fedora (21 at this time). On my desktop I LOVE using KDE. Its use of Activities along with Virtual Desktops helps me to organize my work so perfectly.

The fact that I have a multi-monitor setup means that in many contexts I have one monitor in full use and the other one only gets used when I need to reference two things at once. So I’m able to make full use of KDE’s widgets, which have evolved from SuperKaramba’s Conky-like functionality to very useful things like folder views. For example, on my web desktop I have a folder view of my /home/user/Download folder so I can quickly access files I’ve just downloaded.

KDE also tends to have some pretty amazing software under its umbrella. I used to be a diehard Emacs user, but now I can’t see myself going away from using Kate. It has everything you could ever want – code folding, code highlighting, code completion, a folder view, a commandline interface. And if you’re working on a huge project, KDevelop uses the same Kate tech (something very Unix-like that KDE does well – reusing code) and then gives you all you need and expect from a high-class IDE. I used Kate to develop my btrfs snapshot program. Speaking of commandlines, I truly love that with a simple press of F4 you can bring up a commandline in the Dolphin file manager. This lets me have the best of both worlds. I can do with a mouse the things that are easiest with a mouse (selecting multiple files that don’t have a regex pattern) and do the things that are easiest with text and scripts.

There’s no way I enjoy listening to my music more than to use Amarok 2. I use KDEnlive to create videos – it’s one of the best medium-level non-linear video editors I’ve come across in the FLOSS world. Digikam is more or less just as good as Adobe Lightroom for image management. If I wasn’t already so entrenched in Lightroom, I’d definitely be using it. (My wife uses digikam for her photos).

What else do I use on a regular basis? Well, I use Calibre to organize my ebooks and convert everything to the open EPUB standard. For my comic criticism website I use qComicBook and Okular to read the advance review copies I get from the companies.

Finally, I want to mention two stellar KDE web technologies that are invaluable to me. First is ktorrent. Despite what the media companies would have you believe, there are so many legitimate reasons to use torrents. I always get my Linux ISOs that way to help reduce strain on the official servers. I also use it whenever I buy ebooks or comics from Humble Bundle. What makes ktorrent awesome is that I was able to create categories that have destination folders associated with them.

When I get a Linux ISO, it automatically goes to my ISO folder. Comics go to the comics folder, etc. It saves me a lot of headache.

Similarly useful is the second bit of KDE net technology – KGet. You may think that download managers are unimportant when Firefox and Chrome have them built in. Gone are the days of being unable to pause and resume downloads unless you used a download manager. But KGet works exactly like ktorrent in that I can set up groups to dictate where files download to. It’s very convenient. Even more so is the ability to feed it a list of URLs and have it automatically grab them. The only thing that seems to keep KGet from being the most awesome thing ever is that certain websites do weird tricks (I guess to control access to downloads?) that make it feed the wrong URL to KGet at times. In those circumstances I have no choice but to use Firefox or Chrome’s download managers.

I would love to use KDE’s netbook interface on my netbook, but unfortunately the hardware within is too ancient to run KDE. So I use my second favorite desktop environment: Xfce. I don’t have any special customizations because the screen resolution is so low, there’s not much I can do on there that wouldn’t take up valuable space. I already use fullscreen mode in Firefox or any other applications I run on there (to get back taskbar real estate). Still, it’s great to have Fedora technology with me when I am on vacation or a business trip. I find it useful enough to carry in addition to a tablet or ereader rather than being replaced by them.

I Love Fedora Meetup @ Open Labs Albania – Event Report

Some people (geeks, geeks everywhere!) decided to spend their St. Valentine’s Day in a different way. And that was to show their appreciation towards Fedora and Free and Open Source Software in general.

On Saturday 14 February 2015, the Open Labs community in Albania organised a local Fedora-related event called I <3 Fedora Meetup. Think of it as a Fedora Coffee event (really, remember those events a few years ago?) aiming to gather people interested in our project, to discuss all the latest news around it, and to strengthen our local community presence in the region.

Jona while presenting at the event.

Jona, who ran the event, is a young student at the University of Tirana and a member of Open Labs. She’s interested in spreading the love for Fedora in her region and building up a local community.

I was in Albania as a Fedora Ambassador last May, when I spoke at OSCAL – Open Source Conference Albania – about the Fedora Project in general and also ran a workshop. It was so nice back then to see a lot of people interested in our work.

This time, I couldn’t attend this event in person, but I was apparently a virtual speaker. I recorded a very short video for the event, explaining the core values of our project and our community in general. I also provided some info on how to get Fedora and seek support, and made a quick reference to our popular remixes. You can find my video (available under CC BY-SA-ND) here.

Fedora, Chocolates, and Love. Also time to get them some F21s.

You can also find more photos from the event here.

Thanks Jona, thanks Redon, and thanks Open Labs. Good luck with your future activities!

2015

This year, the Developer Conference was different in many ways for me. It had a different venue. The conference was bigger than ever before. Most importantly, I gave my very first talk at DevConf. More precisely, it was just half of a talk.

I was a dogsbody of Honza Horák with the topic “SQL is not dead alias news in relational databases”, and I was responsible for the PostgreSQL part.

Even among all the NoSQL databases and hype technologies, SQL has its strong place and even evolves quite rapidly. Let’s look at what news the latest versions of PostgreSQL and MySQL/MariaDB have brought us. You can expect a quick overview of the hottest features and some practical examples.

I am not used to speaking in English, but I survived it. The audience also survived. (-; After all, you can watch it on YouTube.

I have been trying to help with open source conferences for a long time. I have prepared a social wall web page and a mobile application for Sailfish OS.

At the beginning it wasn’t very useful, but now it has quite a lot of interesting features. The most important feature is offline mode. Conference attendees usually have a poor internet connection (wifi only), so everything important must be available offline. The conference is big, so attendees should be able to see just their favorite talks. Additionally, there is a pre-loaded map of the city with highlighted points (Venue, Hotels, Party, etc.). Next year we can look forward to its Android port (I already have an apk).

This mobile application has already shown the power of open source software. A few days before DevConf, netvandal used my source code to create sailme (an offline schedule application for FOSDEM).

The second application was the social wall. It collects data from social networks, searching for messages and photos with the hashtag #devconfcz. The social buzz was focused mainly on Twitter; the application also works with Google Plus and Instagram. It was designed to run on plasma screens in corridors next to the lecture rooms and also on the screens in the rooms during the breaks. It turned out that the screens in the corridors were not working; we managed to put up just one screen next to the biggest lecture room.

There was a lot of buzz on social networks. You can see some of it in the following picture.

I was also collecting all the photos posted to social networks. Sometimes it is very difficult to find them after the conference; they are buried deep in the history of their authors’ personal accounts. I believe all of the authors would permit sharing them in one big gallery.

I got some positive feedback on my talk and those apps, and no negative feedback. I hope you liked it.

Reliable BIOS updates in Fedora

Some years ago I bought myself a new laptop, deleted the Windows partition and installed Fedora on the system, only to later realize that the system had a bug that required a BIOS update to fix, and that the only tool for doing such updates was available for Windows only. And while some tools and methods have been available from a subset of vendors, BIOS updates on Linux have always been somewhat of a hit-and-miss situation. Well, luckily it seems that we will finally get a proper solution to this problem.
Peter Jones, who is Red Hat’s representative to the UEFI working group and who is working on making sure we have everything needed to support this on Linux, approached me some time ago to let me know of the latest incoming update to the UEFI standard, which provides a mechanism for doing BIOS updates. This means that any system that supports UEFI 2.5 will, in theory, be one where we can initiate the BIOS update from Linux. Systems supporting this version of the UEFI specs are expected to become available through the course of this year, and if you are lucky your hardware vendor might even provide a BIOS update bringing UEFI 2.5 support to your existing hardware, although you would of course need to do that one BIOS update the old way.

So with Peter’s help we got hold of some prototype hardware from our friends at Intel which already got UEFI 2.5 support. This hardware is currently in the hands of Richard Hughes. Richard will be working on incorporating the use of this functionality into GNOME Software, so that you can do any needed BIOS updates through GNOME Software along with all your other software update needs.

Peter and Richard will, as part of this, be working to define a specification/guideline for hardware vendors for how they can make their BIOS updates available in a manner we can consume and automatically check for updates. We will try to align ourselves with the requirements from Microsoft in this area to allow the vendors to either use the exact same package for both Windows and Linux or at least only need small changes to them. We can hopefully get this specification up on for wider consumption once it’s done.

I am also already speaking with a couple of hardware vendors to see if we can pilot this functionality with them, to both encourage them to support UEFI 2.5 as quickly as possible and also work with them to figure out the finer details of how to make the updates available in an easily consumable fashion.

Our hope here is that you can eventually get almost any hardware and know that if you ever need a BIOS update you can just fire up Software and it will tell you what, if any, BIOS updates are available for your hardware, and then let you download and install them. For people running Fedora servers we have had some initial discussions about doing BIOS updates through Cockpit, in addition of course to the command line tools that Peter is writing for this.

I mentioned in an earlier blog post that one of our goals with the Fedora Workstation is to drain the swamp in terms of fixing the real issues that make using a Linux desktop challenging. Well, this is another piece of that puzzle, and I am really glad we had Peter working with the UEFI standards group to ensure the final specification was useful for Linux users too.

Anyway, as soon as I have some data on concrete hardware that will support this, I will make sure to let you know.

Open Data Day Panama
It is the first time Panama celebrates Open Data Day, this time organized by the recently formed Panama Mozilla Community. As part of the local FOSS community, Fedora Panama and Floss-pa were invited to this event.

Local members shared talks about big data, open data, and open hardware. It was an interesting experience with many people from outside the tech world. It was a fun day.

Many thanks to Mozillians Omar Vasquez Lima and Lia Hernandez from IPANDETEC, who organized it, and to Centro Cultural de España, who provided the venue.

February Python Pune Meetup: 21.02.2015

On 21st Feb 2015, we organized the February Python Pune Meetup at Webonise Lab, Bavdhan (Pune, India). Here is the event report. We had selected 2 workshops, 1 talk-cum-workshop, 1 talk, and 4 lightning talks for this meetup. More than 150 people registered for the meetup but only 70 made it to the venue.

This time we started on time at 10:00 AM. I gave a small talk on the aims and objectives of the Python Pune Meetup, where I covered the PSF, PSSI, the Python Pune Meetup, how one can contribute to the Python language and Python projects, and how it adds value to your career.

At 10:15 AM, Anurag presented a talk on Writing flexible filesystems with FUSE-Python. He started with UNIX-based file systems, an introduction to fuse-python, and how to use it for directory operations and reading files. In the end he created toyfs and demoed it in the lightning talk by reading files.

Then after a short break, we continued with the Django workshop by Mukesh Shukla, carrying on from the previous meetup. He explained Django models, creating migrations, and applying migrations to an existing project by performing CRUD operations on Django models, using Django-south for Django<1.7.

After another short break, at 12:30 PM Mayuresh started a talk-cum-workshop on Integrating Python with Firebase. He began with an introduction to Firebase and how it differs from a traditional database. He created a demo chat application using Firebase in Python which performs basic read and write functionality. Here is the hosted UI for the chat client.

All the workshops were quite interesting.

And at 01:20 PM, Rishabh presented the Automation using Ansible workshop. He started with the basics of Ansible, modules, and variables, and how to create an Ansible playbook. To demonstrate these things in a better way, he created an OpenStack instance and wrote an Ansible playbook to deploy a local GitLab instance on Fedora 21.
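A minimal playbook of that general shape might look like the sketch below; the inventory group, package name, and tasks are my own placeholders for illustration, not the workshop's actual playbook:

```yaml
# Hypothetical minimal playbook: install and start a service on Fedora
- hosts: gitlab_servers        # assumed inventory group
  remote_user: root
  tasks:
    - name: install the gitlab package   # placeholder package name
      yum: name=gitlab state=present
    - name: make sure the service is running
      service: name=gitlab state=started enabled=yes
```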

And finally came the lightning talks. Aditya gave a nice demo of heatmaps using pandas and IPython Notebook. Harsha demoed Easyengine, a Python CLI tool to deploy WordPress sites easily. Hardik, a college student, had used the Selenium driver and written a Python script, PyAutoLogOn, to log into his college WiFi automatically every 5 minutes (as the WiFi gets disconnected each time and asks for log-in), and demoed it. That was real fun in Python. Lastly Anurag presented TOYFS, an implementation of FUSE-python.

And so an awesome meetup came to an end, with awesome feedback and a group photo.

We are soon coming up with a developer sprint for python related projects in March.

Thanks to Mukesh, Nishant, and Vijay for helping me host the meetup, and to Webonise Lab for providing the venue. Thanks to the volunteers, attendees, and speakers for making the event successful.

Libinput now enabled as default xorg driver for F-22 workstation installs
Hi All,

As described here we've been working on making xorg-x11-drv-libinput the default input driver for the Xorg xserver under Fedora 22. All the necessary changes for this are in place for the GNOME and KDE desktops. So starting with the next Fedora 22 compose new Fedora 22 Workstation installations will be using xorg-x11-drv-libinput instead of the -evdev and -synaptics drivers.

For existing installations the move to libinput will not happen automatically, as we have not added a hard dependency on xorg-x11-drv-libinput, so the XFCE, LXDE, etc. spins can keep using the old drivers until they have adapted their mouse/touchpad configuration settings tools to also work with xorg-x11-drv-libinput.

If you're running F-22 with GNOME or KDE, please do the following to switch to the new driver:

"sudo dnf install xorg-x11-drv-libinput"

And let us know if you experience any issues while using the new driver.


Make Fedora 22 Beautiful !

Time is flying: Fedora 21 has been out for just 2 months and Fedora 22 alpha is at the door. So it is time to open the Supplemental Wallpaper Contest. We will again use Nuancier, Fedora’s application for the submissions and the voting. There are also some changes this time: there is now a team of mentors who look over the submissions and make suggestions for how they can be improved. The submission phase is much shorter this time; you have only until the 19th of March to make your submission.

There are some rules of technical nature:

  • Submitted wallpapers must use a format that can be read by software available in Fedora Package Collection. Preferred image formats include SVG and PNG.
  • Master files, which may be further edited, should be maintained in non-lossy formats. Preserving vector graphics, raster layers, and channels is important for such materials.
  • Originals for landscape formats must be a minimum of 1600 pixels wide and 1200 pixels high. The larger the better. Photographic submissions should be made at the highest resolution the camera is capable of.
  • Submitted wallpapers should be provided in a 16 x 9 aspect ratio when possible.
  • No watermarks, signatures, photographer/creator names, or messages may be included in any part of the work. However if the license allows, and the photo is compelling enough, we could remove watermarks.

and some of organizational nature:

  • Submissions must not contain material that violates or infringes another’s rights, including but not limited to privacy, publicity or intellectual property rights, or that constitutes copyright infringement. If your submissions include or derive from artwork created by other people, please make sure the license of the original work you incorporate is compatible with Fedora and that you are not violating any of the provisions of its license. Just because a work is licensed with a Creative Commons license does not mean it is free to use (make sure you provide attribution to artists that license their work with a CC Attribution clause.)
  • Submission should have the consent and approval of the author or creator
  • Submissions are thereby licensed to the public for reuse under CC-BY-SA unless specifically identified as being licensed by another approved liberal open source license. See a list of approved licenses for Fedora. Note that we can not accept NC or ND submissions.

To get inspiration, you can always look at former submissions; here are my personal favorites from the last contest:

The deadline for submitting your artwork is March 19, 2015 at 23:59 UTC. Voting will open automatically after this deadline, and as many have said the voting period should be longer, it will this time remain open until the 25th of March.

For the badge hunters: yes, there will be badges. There will be a badge for submissions, awarded once the picture has been examined and found to fit the rules above. Another one will be awarded if your submission is chosen, and there will also be one for the voting process. So a successful submission makes 2 badges ;) For further questions or assistance, don’t hesitate to write me a mail.