December 22, 2014

All systems go
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.
There are scheduled downtimes in progress
Service 'The Koji Buildsystem' now has status: scheduled: scheduled outage
It is done!

About four and a half years ago, I decided to apply to graduate school. Though I had sworn I was done with school when I completed my bachelor’s degree in 2006, the idea of a master’s degree began to seem reasonable. With Purdue’s staff discount and my department’s forgivable loan scholarship, I could pay roughly a quarter of the “retail” price. In January of 2011, I began my coursework.

Since then, my wife and I have welcomed two children into the world. I have changed jobs twice, the second time leaving the university for a much more stressful and time-consuming role at a small company. For four years, I tried to balance my family, my academics, my employment, open source contributions, and (on rare occasion) my own mental and physical health.

It is the hardest thing I have done in my life, and although there is evidence to suggest that I performed well, I never felt like the balance was right. At least one area always got less than it deserved. Sadly, my family and I were most often on the short end.

Nonetheless, my family remained unwaveringly supportive, even when I was basically non-existent for weeks at a time. My colleagues never complained (to me, at least) about my absences during odd hours of the work day. My professors praised my work, particularly the thesis which I defended in November (more on that in a later post). My Fedora contributions became effectively non-existent, but nobody seems to begrudge me for that, and I look forward to being able to contribute again.

I did not, and could not have, accomplished this on my own. I took great pleasure in having my family in attendance yesterday as I participated in a long-awaited commencement ceremony. For everyone else who provided support and encouragement along the way, no matter to what degree, I offer my most sincere thanks. We did it!


Fedora 21 Release Party
On Sunday, we had a Fedora 21 release party at TIFR in Mumbai, one of India's renowned research institutes. It's really nice that they support Free Software.



Pravin Satpute gave a brief introduction to Fedora and Fedora.next, and afterwards I gave a talk on Fedora Workstation features. Then we took a small break and had an awesome Fedora cake.



Praveen Kumar talked about Fedora Cloud features, particularly Docker, and then showed a demo of how to use Fedora Cloud images with Docker.
Finally, Rahul Bhalerao talked about how one can contribute to Fedora, and we concluded the party by giving Fedora mugs to Prof. Nagarjuna and Rahul in gratitude for their assistance.
Prof. Nagarjuna, who is chairperson of the Free Software Foundation of India, talked about why and how one should use Free Software. He mentioned that big organizations now use Free Software to make non-free software, which is a serious issue for our society and one we need to think about.
All the sessions were really interactive, and we got various questions, most of them about how to start contributing and about doubts people had while using Fedora. We were able to answer their questions, and as a result we could see their happy faces; I am hoping to see their contributions in the near future.
In conclusion, I will say the Fedora release party was a success.



OpenHardware : Ambient Light Sensor

My OpenHardware post about an entropy source got loads of high quality comments, huge thanks to all who chimed in. There look to be a few existing projects producing OpenHardware, and the various comments have links to the better ones. I’ll put this idea back on the shelf for another holiday-hacking session. I’ve still not given up on the SD card interface, although it looks like emulating a storage device might be the easiest and quickest route for any future project.

So, on to the next idea. An OpenHardware USB ambient light sensor. A lot of hardware doesn’t have a way of testing the ambient light level. Webcams don’t count, they use up waaaay too much power and the auto-white balance is normally hardcoded in hardware. So I was thinking of producing a very low cost mini-dongle to measure the ambient light level so that lower-spec laptops could be saving tons of power. With smartphones, people are now acutely aware that up to 60% of their battery power goes into just making the screen light up, and I’m sure we could be smarter about what we do in GNOME. The problem, traditionally, has been the lack of hardware with this support.

Anyone interested?

What is slowing down my ssh login process

On one box, I had this strange problem. Every login could take 40-60 seconds, but once in, everything worked like a charm. As I use ssh for login, I checked the obvious: that reverse DNS lookup did not time out (sshd_config: UseDNS no), and that unnecessary GSSAPI was not used, but to no avail. So I fetched old uncle strace from the drawer and was about to run sshd in debug mode on the console. Then I realized that login on the console took at least as long as via ssh.

So, the problem had to be somewhere else, probably some PAM module. strace to the rescue:

# strace -e file -ff /usr/sbin/sshd -D -e -ddd -p 2122

and logged in via ssh on port 2122

There it was. An old /var/log/btmp had grown and grown and grown, and login, via pam_lastlog.so (in Fedora called in from the session stack), scans it to check for previous logins, using a lot of CPU, I/O and time in the process.

But why had the file grown so large? Because the btmp log saves failed logins, and this box (by design) had ssh open to the world and was often hit by scanners. But also because of missing log rotation. /etc/logrotate.conf on Fedora actually rotates /var/log/btmp once a month, but to save space, someone had gzipped the last rotation (again, because of size). And by some strange reasoning (bug?), logrotate on Fedora won’t rotate /var/log/btmp at all if there exists some /var/log/btmp-20140606.gz (unless compress is switched on, which by default it is not).
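
For the record, the relevant piece of logrotate configuration is short. The stanza below is only an approximation of the Fedora default (don't treat it as a verbatim copy); the point is that adding compress, or simply deleting the stray /var/log/btmp-*.gz file, lets rotation resume:

/var/log/btmp {
    missingok
    monthly
    # adding 'compress' (or removing the old .gz rotation) un-wedges logrotate here
    compress
    create 0600 root utmp
    rotate 1
}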

Patchutils now on GitHub

Patchutils now lives here. That is all.

Fedora-21 Release Party

After a few days of planning, we finally had the Fedora release party yesterday at the Tata Institute of Fundamental Research (TIFR) Mumbai. We started around 1000 IST from Pune, had lunch on the way and reached the venue around 1415. The release party was scheduled for 1430 and around 20 members joined us.

We had an introduction session, and after that Pravin Satpute talked about Fedora history and Fedora.next; he also talked about why this initiative took place and how the community and users will benefit from it.


After the tea break, Anish talked about what features Fedora Workstation provides to the end user and how it is different from older Fedora releases. He also talked about general myths newcomers have regarding Linux (like it being tough to install, or hard to get anything working, which is surely not the case these days), what's new in GNOME, and which applications have landed in it. He demoed GNOME Photos, the software centre, DevAssistant and many other new features that GNOME provides.

I talked about Fedora Cloud features and gave a bit of an overview of Docker, Project Atomic, Kubernetes and OpenStack. I demoed how to run and manage a container inside an Atomic instance, and we discussed the difference between a Docker container and a virtual machine and how Docker is useful at each project stage.


Then it was cake cutting time, and the order arrived on time. We gathered in the cafeteria to cut the cake. We had a discussion about free software and open source software with Prof. Nagarjuna G., which we decided to continue in detail during the open house session.

After the break, Rahul talked about how an individual can contribute to Fedora. He presented Fedora's core values and a step-by-step process for getting involved with the community.

During the open house we resumed our discussion about the free and open source software philosophy. Prof. Nagarjuna G. explained the four core values the Free Software Foundation cares about, how free software is different from open source software, and what the different types of licenses are. He also talked about the Apache license terms and conditions and why a developer needs to know about licensing policies. We also had a discussion about forming a community in Mumbai so that Fedora-related events can happen regularly.

Before ending the event we distributed Fedora 21 DVDs and stickers.


Overall it was really a nice event and I met some really enthusiastic people in person who want to use Fedora on a daily basis. Thanks to TIFR for providing the venue for the event and to Red Hat for the travel.

December 21, 2014

Things to do after installing Fedora 21
After installing Fedora 21, we need to do a few things to adapt the system to our needs. These are basically the same things we would do after installing earlier Fedora versions, but they cannot be applied 100% to the latest release.

1. Update the system

Although Fedora 21 was released only recently, we still need to run an update, because most likely some system updates have already been released.

su -c 'yum update'

2. Add repositories

Not all packages/software are available in Fedora's own repositories, so we need to add third-party repos in order to install the packages/software we want. There are many third-party repos we could add, but this time I recommend adding the repos from RPM Fusion.

su -c 'yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'

3. Install yum plugins

There are several yum plugins that are useful, but they are not installed by default. Yum Fastest Mirror is a plugin that always uses the fastest mirror server when installing or updating the system.

su -c 'yum install yum-plugin-fastestmirror'

4. Install browser plugins

The Firefox browser needs a few plugins to be fully compatible and able to browse the web.

Flash Player repo (64-bit)

su -c 'yum install http://linuxdownload.adobe.com/adobe-release/adobe-release-x86_64-1.0-1.noarch.rpm'

Flash Player repo (32-bit)

su -c 'yum install http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm'

Flash Player

su -c 'yum install flash-plugin'

5. Install Media Codecs

Fedora cannot include media codecs in its releases because of copyright rules and the like, since Fedora follows a 100% free software ideology. But we can add those media codecs ourselves so that we can play mp3, mp4, avi, mkv and other files.

su -c 'yum install gstreamer1-plugins-ugly gstreamer1-plugins-bad-freeworld gstreamer1-libav gstreamer-plugin-crystalhd gstreamer1-vaapi gstreamer1-plugins-bad-free gstreamer1-plugins-good'

6. Install additional packages

Here, every user has different needs. There are a few things that newer users might miss: how do you open a rar file? What torrent client is there for Fedora? Where is the image editor in Fedora?

su -c 'yum install keepassx p7zip transmission audacity unrar xchat gimp inkscape gnome-tweak-tool eog vlc'

Those are a few things we need to do after installing Fedora 21. To install other additional software, we can also use the GUI via the Software application; you can find it among your Fedora applications. Did I miss anything? Feel free to leave a comment :)
My Ansible presentation at the Python Indonesia Meetup

The latest bimonthly Jakarta meet-up of Python Indonesia - or, in the local parlance, kopdar (kopi darat) - was held yesterday (December 20th), kindly hosted by the really cool folks at the UN Pulse Lab.

why so serious

Following a fascinating presentation of UN Pulse Lab's activities by their lead data scientist, Jong Gun Lee, I presented Ansible, a really useful tool for orchestrating and automating the configuration of computing infrastructure (especially if you have a heterogeneous combination of different distributions and versions), followed by Pulse Lab's Theresia Tanzil on PyMongo and Gilang Chandrasa's recap of his PyCon Asia trip.

group photo

Happy to reconnect with fellow Pythonistas and to meet new ones! I'm relatively new to the local scene as well, so this was the first time I'd met some of the veterans. And if you're a budding data scientist interested in making it your career, check out their vacancies page!

Fedora Desktop Recording / Tutorials

I’ve just been playing with gtk-recordmydesktop, and it seems to work fine. I’ve done a test recording [just doing a simple yum install] and it worked quite well.  Hopefully it’ll allow me to do a few more tutorials around Fedora 21 Workstation.  Keep ’em peeled.

Video: https://www.youtube.com/watch?v=IQCsclp93VM

The post Fedora Desktop Recording / Tutorials appeared first on Paul Mellors [DOT] NET.


December 20, 2014

taming the thinkpad’s terrible touchpad

After many years of working with X-series thinkpads, I have come to love these devices. Great keyboard, powerful while very portable, durable, and so on. But I am especially an addict of the trackpoint. It allows me to use the mouse from time to time without having to move my fingers away from the typing position. The x230 was the first model I used that additionally features a touchpad. Well, I hate these touchpads! I keep moving the mouse pointer with the balls of my thumbs while typing, which is particularly irritating since I have my system configured to “focus-follows-mouse”. Now with the x230 that was not a big problem, because I could simply disable the touchpad in the BIOS and keep using the trackpoint and the three mouse buttons that are positioned between keyboard and touchpad. So far so good.

For three weeks now, since my start at Red Hat, I have been using an x240. It is again really nicely done. Great display, very powerful, awesome battery life, … But Lenovo has imho committed an unspeakable sin with the change to the touchpad: the separate mouse buttons are gone, and instead there are soft keys integrated into regions of the touchpad. Not only are the buttons much harder to hit, since the areas are much harder to feel with the fingertips than the comparatively thick buttons of the earlier models, but the really insane consequence for me is that I can’t disable the touchpad in the BIOS, since that also disables the buttons! This rendered the laptop almost unusable unless docked, with external mouse and keyboard. It was a real pain. :-(

But two days ago GLADIAC THE SAVIOR gave me the decisive hint: set the TouchpadOff option of synaptics to the value 1. Synaptics is, as I learned, the Xorg X11 touchpad driver, and this option disables the touchpad except for the button functionality. Exactly what I need. With a little bit of research I found out that my brand new Fedora 21 supports this out of the box. Because I am still finding my way around Fedora, I only needed to find the proper place to add the option. As it turns out,

/usr/share/X11/xorg.conf.d/50-synaptics.conf

is the appropriate file, and I added the option to the section of “Lenovo TrackPoint top software buttons”.
Here is the complete patch that saved me:

--- /usr/share/X11/xorg.conf.d/50-synaptics.conf.ORIG 2014-12-18 22:53:18.454197721 +0100
+++ /usr/share/X11/xorg.conf.d/50-synaptics.conf 2014-12-19 09:03:44.143825508 +0100
@@ -57,13 +57,14 @@

 # Some devices have the buttons on the top of the touchpad. For those, set
 # the secondary button area to exactly that.
 # Affected: All Haswell Lenovos and *431* models
 #
 # Note the touchpad_softbutton_top tag is a temporary solution, we're working
 # on a more permanent solution upstream (likely adding INPUT_PROP_TOPBUTTONPAD)
 Section "InputClass"
         Identifier "Lenovo TrackPoint top software buttons"
         MatchDriver "synaptics"
         MatchTag "touchpad_softbutton_top"
         Option "HasSecondarySoftButtons" "on"
+        Option "TouchpadOff" "1"
 EndSection
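
As a side note: with the synaptics tools installed, the same option can also be toggled at runtime for a quick test (the xorg.conf.d snippet above is what makes it permanent):

synclient TouchpadOff=1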

Now I can enjoy working with the undocked thinkpad again!

Thanks gladiac! :-)

And thanks of course to the developers of the synaptics driver…

Hacky, hacky playbook to upgrade several Fedora machines

Normally I like to polish my code a bit before publishing but seeing as Fedora 21 is relatively new and a vacation is coming up which people might use as an opportunity to upgrade their home networks, I thought I’d throw this extremely *unpolished and kludgey* ansible playbook out there for others to experiment with:

https://bitbucket.org/toshio/local-playbooks/src/3d3ae76a56034784ab60fcfa1129221c59a40f3b/provisioning/fedora-upgrade.yml?at=master

When I recently looked at updating all of my systems to Fedora 21 I decided to try to be a little lighter on network bandwidth than I usually am (I’m on slow DSL and I have three kids all trying to stream video at the same time as I’m downloading packages). So I decided that I’d use a squid caching proxy to cache the packages that I was going to be installing since many of the packages would end up on all of my machines. I found a page on caching packages for use with mock and saw that there were a bunch of steps that I probably wouldn’t remember the next time I wanted to do this. So I opened up vim, translated the steps into an ansible playbook, and tried to run it.

The first several times, it failed because there were unresolved dependencies in my package set (packages I’d built locally with outdated dependencies, packages that were no longer available in Fedora 21, etc.). Eventually I set the fedup steps to ignore errors so that the playbook would clean up all the configuration and I could fix the package dependency problems and then re-run the playbook immediately afterwards. I’ve now got it to the point where it will successfully run fedup in my environment and will cache many packages (something’s still using mirrors instead of the baseurl I specified sometimes but I haven’t tracked that down yet; those packages are getting cached more than once).
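
For readers who just want the flavour of it without clicking through, a heavily stripped-down sketch of the idea looks roughly like the following. The host group and proxy name are made up, and the real playbook linked above additionally sets up squid itself and cleans up afterwards:

- hosts: fedora_hosts            # hypothetical inventory group
  sudo: yes
  tasks:
    - name: point yum at the local squid cache (assumes a proxy at proxyhost:3128)
      lineinfile: dest=/etc/yum.conf line="proxy=http://proxyhost:3128"
    - name: make sure fedup is installed
      yum: name=fedup state=present
    - name: download packages and prepare the upgrade to Fedora 21
      command: fedup --network 21
      ignore_errors: yes         # as described above, keep going so cleanup can run and deps can be fixed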

Anyhow, feel free to take a look, modify it to suit your needs, and let me know of any “bugs” that you find :-)

Things I’ll probably do to it for when I update to F22:


Quick question for Fedora Planet

Hey Fedora Planet, quick survey: I’ve been doing some blogging on general Python programming and on learning to use Ansible. Are either of these topics you’d be interested in seeing on the planet? Right now the feed I’m sending to planet only includes Fedora-specific, Linux-specific, or “free software political” posts, but I could easily change the feed to include the Ansible or Python programming posts if either of those are interesting to people.

Leave me a comment or touch base with me on IRC: abadger1999 on freenode.


Pattern or Antipattern? Splitting up initialization with asyncio

“O brave new world, That has such people in’t!” – William Shakespeare, The Tempest

Edit: Jean-Paul Calderone (exarkun) has a very good response to this detailing why it should be considered an antipattern. He has some great thoughts on the implicit contract that a programmer is signing when they write an __init__() method and the maintenance cost that is incurred if a programmer breaks those expectations. Definitely worth reading!

Instead of spending the Thanksgiving weekend fighting crowds of shoppers I indulged my inner geek by staying at home on my computer. And not to shop online either — I was taking a look at Python 3.4’s asyncio library to see whether it would be useful in general, run-of-the-mill code. After quite a bit of experimenting I do think every programmer will have a legitimate use for it from time to time. It’s also quite sexy. I think I’ll be a bit prone to overusing it for a little while ;-)

Something I discovered, though — there’s a great deal of good documentation and blog posts about the underlying theory of asyncio and how to implement some broader concepts using asyncio’s API. There’s quite a few tutorials that skim the surface of what you can theoretically do with the library that don’t go into much depth. And there’s a definite lack of examples showing how people are taking asyncio’s API and applying them to real-world problems.

That lack is both exciting and hazardous. Exciting because it means there’s plenty of neat new ways to use the API that no one’s made into a wide-spread and oft-repeated pattern yet. Hazardous because there’s plenty of neat new ways to abuse the API that no one’s thought to write a post explaining why not to do things that way before. My joke about overusing it earlier has a large kernel of truth in it… there’s not a lot of information saying whether a particular means of using asyncio is good or bad.

So let me mention one way of using it that I thought about this weekend — maybe some more experienced tulip or twisted programmers will pop up and tell me whether this is a good use or bad use of the APIs.

Let’s say you’re writing some code that talks to a microblogging service. You have one class that handles both posting to the service and reading from it. As you write the code you realize that there’s some time consuming tasks (for instance, setting up an on-disk cache for posts) that you have to do in order to read from the service that you do not have to wait for if your first actions are going to be making new posts. After a bit of thought, you realize you can split up your initialization into two steps. Initialization needed for posting will be done immediately in the class’s constructor and initialization needed for reading will be setup in a future so that reading code will know when it can begin to process. Here’s a rough sketch of what an implementation might look like:

import os
import sys
import sqlite3
import asyncio

import aiohttp

class Microblog:
    def __init__(self, url, username, token, cachedir):
        self.token = token
        self.username = username
        self.url = url
        loop = asyncio.get_event_loop()
        self.init_future = loop.run_in_executor(None, self._reading_init, cachedir)

    def _reading_init(self, cachedir):
        # Mainly setup our cache
        self.cachedir = cachedir
        os.makedirs(cachedir, mode=0o755, exist_ok=True)
        self.db = sqlite3.connect('{0}/cache.sqlite'.format(cachedir))
        # Create tables, fill in some initial data, you get the picture [....]

    @asyncio.coroutine
    def post(self, msg):
        data = dict(payload=msg)
        headers = dict(Authorization=self.token)
        reply = yield from aiohttp.request('post', self.url, data=data, headers=headers)
        # Manipulate reply a bit [...]
        return reply

    @asyncio.coroutine
    def sync_latest(self):
        # Synchronize with the initialization we need before we can read
        yield from self.init_future
        data = dict(per_page=100, page=1)
        headers = dict(Authorization=self.token)
        reply = yield from aiohttp.request('get', self.url, data=data, headers=headers)
        # Stuff the reply in our cache

if __name__ == '__main__':
    chirpchirp = Microblog('http://chirpchirp.com', 'a.badger', TOKEN, '/home/badger/cache/')
    loop = asyncio.get_event_loop()
    # Contrived -- real code would probably have a coroutine to take user input
    # and then submit that while interleaving with displaying new posts
    asyncio.async(chirpchirp.post(' '.join(sys.argv[1:])))
    loop.run_until_complete(chirpchirp.sync_latest())
    

Some of this code is just there to give an idea of how this could be used. The real questions revolve around splitting up initialization into two steps:

  • Is yield from the proper way for sync_latest() to signal that it needs self.init_future to finish before it can continue?
  • Is it good form to potentially start using the object for one task before __init__ has finished all tasks?
  • Would it be better style to setup posting and reading separately? Maybe a reading class and a posting class or the old standby of invoking _reading_init() the first time sync_latest() is called?

Fedora 21 available for other architectures
A few days ago the Fedora Project, a project devoted to the development and distribution of the Fedora operating system with thousands of volunteers around the world, released the new version of its system, Fedora 21.

The news was announced mainly on one of its mailing lists, mentioning availability for the x86 and x86-64 platforms, two of the most widely marketed for many years.

However, in recent days its availability on other architectures such as PowerPC, AArch64 and IBM Z (s390x) has also been announced, extending its reach to the various kinds of platforms on the market.

To see the official announcements for each platform, follow these links: PowerPC, AArch64, IBM Z. There you will find more detailed information, including known issues, among other things.
Why attend the Fedora 21 release party at Mumbai?
The Fedora 21 release party is happening tomorrow at TIFR, Mumbai; more information is on the Eventbrite page [5]


Well-qualified speakers and topics
      Anish Patil:
       3+ years with Fedora. Actively working in GNOME upstream. He is one of the main developers of ibus-typing-booster and maintains 15 packages in Fedora. He is going to talk on Fedora Workstation product features.

      Praveen Kumar:
     Actively working for 4+ years with the Fedora community. He owns 34 packages and is a member of the provenpackager and ambassador groups of Fedora. He has already spoken at dozens of open source conferences. For more information click [2]
   
    Pravin Satpute:
    7+ years in Fedora. Owns 43 packages and is a proven packager in Fedora. Team member of the Internationalization, Marketing and Ambassadors groups.

    Rahul Bhalerao:
    He has been working for 8+ years with the Fedora community. He is a member of the Fedora internationalization team and has participated as a speaker in a number of open source conferences. Being a long-time Fedora contributor, he is going to talk on how to contribute to the Fedora project. For more information about him click [4]

Happening at TIFR, HBCSE Mumbai
        The TIFR centre is well known for a number of open source initiatives. Gnowledge.org is one of those major activities. Learning Studio is a widely used Fedora 17 remix for educational activities, and we are trying to upgrade it to the Fedora 21 release. In 2004 there were very few open source Indian-language fonts; TIFR took the initiative and started a project to develop the open source Samyak fonts. These fonts are available in most open source distributions.

Fedora 21 DVDs:
        We will provide DVDs of Workstation and Server to participants. Lots of stickers and goodies for a few lucky ones :)

Fedora activity at Mumbai after a long time:
    Fedora activity is happening in Mumbai again after a long time. Let's come, meet and plan some future activities around Fedora in Mumbai.

1. https://fedoraproject.org/wiki/User:Anishpatil
2. https://fedoraproject.org/wiki/User:Kumarpraveen
3. https://fedoraproject.org/wiki/User:Pravins
4. https://fedoraproject.org/wiki/User:RahulBhalerao
5. https://www.eventbrite.com/e/fedora-21-release-party-tickets-14972818102
News: The jeans that block wireless signals.
This will help users with Linux-based smartphones.
According to this article, users will be protected by a pair of jeans containing material that blocks wireless signals:


A pair of jeans containing material that blocks wireless signals is being developed in conjunction with anti-virus firm Norton.
SSL Version Control
Firefox 35.0 displayed this error while accessing Google today:

Secure Connection Failed. An error occurred during a connection to www.google.com. The server rejected the handshake because the client downgraded to a lower TLS version than the server supports. (Error code: ssl_error_inappropriate_fallback_alert)

SSL Version Control to the rescue: https://addons.mozilla.org/en-US/firefox/addon/ssl-version-control/

SSLv3 is now insecure, and is soon going to be disabled by default: https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/

In the meantime, you can use this extension to turn off SSLv3 in your copy of Firefox. When you install the add-on, it will set the minimum TLS version to TLS 1.0 (disabling SSLv3). If you want to change that setting later, like if you really need to access an SSLv3 site, just go to Tools / Add-ons and click the "Preferences" button next to the add-on. That will give you a drop-down menu to select the minimum TLS version you want to allow.

December 19, 2014

Heroes of Fedora QA: Fedora 21 – Part 3

The last installment of the Heroes of Fedora QA went over the statistics from Bodhi. This third and final installment for the Fedora 21 release will take a look at Bugzilla. The stats below are from new bug reports filed throughout the entire release cycle. Due to some technical limitations, these numbers aren’t 100% of all bug activity throughout the release. However, they do provide some nice insight into the reporting that went into Fedora 21. Now, onto the statistics!

Bugzilla stats

Alpha

Test period: 2014-07-08 – 2014-09-23
Reporters: 305
New reports: 1186

Name    Reports submitted [1]    Excess reports [2]    Accepted blockers [3]
Moez Roy 87 3 (3%) 0
Mikhail 48 5 (10%) 0
Chris Murphy 46 6 (13%) 0
Adam Williamson (Red Hat) 45 8 (17%) 4
Tim Waugh 26 1 (3%) 0
Vadim Rutkovsky 26 1 (3%) 0
Elad Alfassa 25 9 (36%) 0
Zbigniew Jędrzejewski-Szmek 23 0 (0%) 0
Ankur Sinha (FranciscoD) 21 1 (4%) 0
Stephen Gallagher 20 1 (5%) 2
Stef Walter 20 2 (10%) 1
Carlos Morel-Riquelme 20 3 (15%) 0
Vladimir Benes 20 2 (10%) 0
Jared Smith 16 1 (6%) 0
Leslie Satenstein 14 2 (14%) 1
Fabio Valentini 13 0 (0%) 0
Matteo Settenvini 13 0 (0%) 0
Joachim Frieben 12 1 (8%) 0
Menanteau Guy 12 0 (0%) 0
Parag 12 3 (25%) 0
Vít Ondruch 12 0 (0%) 0
Igor Gnatenko 11 2 (18%) 0
birger 10 1 (10%) 0
Mark Hamzy 10 2 (20%) 0
poma 10 1 (10%) 0
Benedikt Morbach 9 1 (11%) 0
Diogo Campos 9 1 (11%) 0
satellitgo at gmail.com 9 1 (11%) 0
Michael Kuhn 8 0 (0%) 0
Orion Poplawski 8 1 (12%) 0
Petr Schindler 8 1 (12%) 0
cornel panceac 7 1 (14%) 0
Jan Sedlák 7 0 (0%) 0
Mikolaj Izdebski 7 1 (14%) 0
Nikos Mavrogiannopoulos 7 0 (0%) 0
Marius Vollmer 6 0 (0%) 1
D. Charles Pyle 6 0 (0%) 0
Dan Horák 6 0 (0%) 0
Jeremy Rimpo 6 1 (16%) 0
Marko Myllynen 6 0 (0%) 0
Michael Schwendt 6 1 (16%) 0
Ritesh Khadgaray 6 1 (16%) 0
汪明衡 6 1 (16%) 0
Paul Whalen 5 1 (20%) 2
Christopher Tubbs 5 1 (20%) 0
David Spurek 5 0 (0%) 0
Dennis Gilmore 5 0 (0%) 0
Gene Czarcinski 5 0 (0%) 0
Jakub Prokes 5 1 (20%) 0
Jan Safranek 5 0 (0%) 0
Karsten Hopp 5 0 (0%) 0
Lukas Slebodnik 5 0 (0%) 0
Michael Catanzaro 5 0 (0%) 0
Mike FABIAN 5 1 (20%) 0
Peter H. Jones 5 0 (0%) 0
rvcsaba at freemail.hu 5 0 (0%) 0
Xiao-Long Chen 5 1 (20%) 0
bcl at redhat.com 4 0 (0%) 0
Dan Callaghan 4 0 (0%) 0
Fabian Deutsch 4 1 (25%) 0
Flóki Pálsson 4 0 (0%) 0
Ganapathi Kamath 4 1 (25%) 0
Guillaume Poirier-Morency 4 0 (0%) 0
Jakub Filak 4 1 (25%) 0
Jakub Čajka 4 0 (0%) 0
Jesse 4 0 (0%) 0
Kamil Páral 4 2 (50%) 0
Mathieu Bridon 4 1 (25%) 0
Michal Kovarik 4 1 (25%) 0
Michal Toman 4 0 (0%) 0
Mikko Tiihonen 4 0 (0%) 0
Nils Philippsen 4 0 (0%) 0
Oded Arbel 4 0 (0%) 0
Petr Viktorin 4 0 (0%) 0
Ryan Lerch 4 0 (0%) 0
sangu 4 0 (0%) 0
Satish Balay 4 0 (0%) 0
Siddhesh Poyarekar 4 0 (0%) 0
Zdenek Chmelar 4 0 (0%) 0
…and also 226 other reporters who created less than 4 reports each, but 329 reports combined!

Moez Roy easily came in first for Alpha with 87 submitted reports. Mikhail, Chris Murphy and Adam Williamson were all neck and neck with 48, 46 and 45 reports (respectively). Out of all of Alpha, the top 4 people filed 5.24% of all the bugs filed, a grand total of 226. Great work everyone!

Beta

Test period: 2014-09-23 – 2014-11-04
Reporters: 448
New reports: 1210

Name    Reports submitted [1]    Excess reports [2]    Accepted blockers [3]
Christian Stadelmann 59 2 (3%) 0
Mikhail 40 5 (12%) 0
Adam Goode 21 0 (0%) 0
Adam Williamson (Red Hat) 20 0 (0%) 6
Kamil Páral 19 5 (26%) 1
M. Edward (Ed) Borasky 19 3 (15%) 1
lonelywoolf at live.ru 17 2 (11%) 0
Rolle 17 1 (5%) 0
Stephen Gallagher 15 2 (13%) 2
Miroslav Suchý 15 0 (0%) 0
Rodd Clarkson 15 1 (6%) 0
poma 14 6 (42%) 0
Michael Catanzaro 13 1 (7%) 0
Orion Poplawski 12 1 (8%) 0
bodhi.zazen 11 0 (0%) 0
Lubomir Rintel 11 0 (0%) 0
Ankur Sinha (FranciscoD) 10 0 (0%) 0
Ilya Gradina 10 0 (0%) 0
Hapoofesgeli 9 1 (11%) 0
Mikolaj Izdebski 9 0 (0%) 0
汪明衡 9 0 (0%) 0
Chris Gibbs 8 0 (0%) 0
Gustavo Luiz Duarte 8 0 (0%) 0
James 8 0 (0%) 0
Tim Waugh 8 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 8 0 (0%) 0
Kevin Kofler 7 0 (0%) 0
Kristjan Stefansson 7 0 (0%) 0
Leslie Satenstein 7 1 (14%) 0
Stef Walter 7 0 (0%) 0
Cole Robinson 6 0 (0%) 0
Fabian Deutsch 6 1 (16%) 0
Fabrice Bellet 6 0 (0%) 0
Guillaume Poirier-Morency 6 0 (0%) 0
Jakub Čajka 6 1 (16%) 0
Mark Hamzy 6 0 (0%) 0
Matěj Cepl 6 0 (0%) 0
Peter H. Jones 6 2 (33%) 0
Łukasz 6 0 (0%) 0
abyss.7 at gmail.com 5 1 (20%) 0
Aidan Collier 5 2 (40%) 0
al morris 5 0 (0%) 0
anish 5 0 (0%) 0
bob 5 3 (60%) 0
Christophe Fergeau 5 0 (0%) 0
Dan Horák 5 0 (0%) 0
Daniel Berrange 5 0 (0%) 0
Giovanni Campagna 5 0 (0%) 0
Göran Uddeborg 5 1 (20%) 0
Jaromír Cápík 5 1 (20%) 0
Jeremy Rimpo 5 2 (40%) 0
Joachim Backes 5 0 (0%) 0
joerg.lechner at aol.de 5 1 (20%) 0
Jonas Thiem 5 0 (0%) 0
Jozef Mlich 5 1 (20%) 0
Juan Orti 5 1 (20%) 0
Kamil J. Dudek 5 0 (0%) 0
Lubomir Rintel 5 0 (0%) 0
Luya Tshimbalanga 5 0 (0%) 0
Marcel Wysocki 5 1 (20%) 0
Matthias Kluge 5 0 (0%) 0
Mike FABIAN 5 0 (0%) 0
Nikos Mavrogiannopoulos 5 0 (0%) 0
satellitgo at gmail.com 5 0 (0%) 0
Mike Ruckman 4 0 (0%) 2
Alexander Bokovoy 4 0 (0%) 0
Bill Gianopoulos 4 1 (25%) 0
Carlos Morel-Riquelme 4 0 (0%) 0
Cosimo Cecchi 4 0 (0%) 0
Ed Greshko 4 1 (25%) 0
Karsten Hopp 4 0 (0%) 0
Krzysztof Klimonda 4 0 (0%) 0
Mark van Rossum 4 0 (0%) 0
Marko Myllynen 4 0 (0%) 0
Michal Srb 4 0 (0%) 0
Michel Normand 4 1 (25%) 0
Petr Schindler 4 1 (25%) 0
Pravin Satpute 4 0 (0%) 0
rvcsaba at freemail.hu 4 1 (25%) 0
tobias.weise at web.de 4 0 (0%) 0
Tomas Radej 4 0 (0%) 0
Tomasz Torcz 4 0 (0%) 0
…and also 366 other reporters who created less than 4 reports each, but 526 reports combined!

Mikhail managed to hold the number two position throughout Beta, edged out by 19 reports by Christian Stadelmann who clocked in at 59 reports. Adam Goode came in third place with 21 reports. The overall reports through Beta increased a bit from Alpha, but decreased from the last release which had 1,309 reports. Finally, on to Final stats…

Final

Test period: 2014-11-04 – 2014-12-09
Reporters: 510
New reports: 1074

Name    Reports submitted [1]    Excess reports [2]    Accepted blockers [3]
Mikhail 25 4 (16%) 0
Kamil Páral 24 4 (16%) 3
Adam Williamson (Red Hat) 12 0 (0%) 2
poma 12 0 (0%) 0
Tomas Mlcoch 12 2 (16%) 0
autarch princeps 11 0 (0%) 0
Lubomir Rintel 11 0 (0%) 0
Mustafa 11 1 (9%) 0
Ralf Corsepius 11 0 (0%) 0
Chris Murphy 9 0 (0%) 0
czerny.jakub at gmail.com 9 2 (22%) 0
Leslie Satenstein 9 1 (11%) 0
Youichi Kawachi 9 0 (0%) 0
Maël Lavault 8 0 (0%) 0
Štefan Gurský 8 1 (12%) 0
Gene Czarcinski 7 2 (28%) 0
lennart_reuther at web.de 7 0 (0%) 0
Peter Robinson 7 0 (0%) 0
Tim Waugh 7 1 (14%) 0
Bill Gray 6 1 (16%) 0
bob 6 0 (0%) 0
Ilya Gradina 6 0 (0%) 0
Jared Smith 6 0 (0%) 0
M. Edward (Ed) Borasky 6 1 (16%) 0
Michal Nowak 6 3 (50%) 0
Miroslav Suchý 6 0 (0%) 0
Swâmi Petaramesh 6 0 (0%) 0
Alexander Kurtakov 5 0 (0%) 0
cornel panceac 5 2 (40%) 0
cz-mail at posteo.net 5 0 (0%) 0
Dan Horák 5 0 (0%) 0
Edgar Hoch 5 0 (0%) 0
icywind90 at gmail.com 5 0 (0%) 0
Josh Stone 5 0 (0%) 0
Juan Orti 5 0 (0%) 0
kartochka22 at yandex.ru 5 0 (0%) 0
lnie 5 1 (20%) 0
lonelywoolf at live.ru 5 0 (0%) 0
Mario Blättermann 5 0 (0%) 0
Matthew Miller 5 0 (0%) 0
Milan Bouchet-Valat 5 0 (0%) 0
Milan Kerslager 5 0 (0%) 0
misko.herko at gmail.com 5 0 (0%) 0
Nathanel Titane 5 0 (0%) 0
Orion Poplawski 5 0 (0%) 0
Parag 5 1 (20%) 0
Pavel Mlčoch 5 0 (0%) 0
rh at treblig.org 5 0 (0%) 0
Rolle 5 0 (0%) 0
Scott Garrett 5 0 (0%) 0
vnc 5 0 (0%) 0
Wolfgang Rupprecht 5 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 5 0 (0%) 0
Cole Robinson 4 0 (0%) 1
Ankur Sinha (FranciscoD) 4 0 (0%) 0
bcl at redhat.com 4 0 (0%) 0
Christian Stadelmann 4 0 (0%) 0
Fedor Butikov 4 0 (0%) 0
Felipe van Schaik Willig 4 1 (25%) 0
Felix Zweig 4 0 (0%) 0
George R. Goffe 4 0 (0%) 0
Guillaume Poirier-Morency 4 0 (0%) 0
Menanteau Guy 4 0 (0%) 0
Michael Schwendt 4 0 (0%) 0
Minus Zero 4 0 (0%) 0
Pavel Alexeev (aka Pahan-Hubbitus) 4 0 (0%) 0
Tong Hui 4 2 (50%) 0
Victor Toso de Carvalho 4 0 (0%) 0
Xose Vazquez Perez 4 1 (25%) 0
…and also 441 other reporters who created less than 4 reports each, but 623 reports combined!

Mikhail took the top slot for final with 25 bugs – staying in the top three throughout the entire release with 113 total bug reports. Thanks for all the work Mikhail! Kamil came in second for Final, with 1 less report than Mikhail. Adam, Poma and Tomas all had 12 reports while Autarch, Lubomir, Mustafa and Ralf all had 11 reports. Third place through ninth only had a single report difference between them! Thanks to everyone for putting Fedora 21 through the gauntlet and submitting so many bug reports!

Well, that’s it for the Fedora 21 version of Heroes of Fedora QA. Thanks so much to everyone working on and testing Fedora throughout the release! Fedora 22 is set to branch soon, so take some time, relax and get ready to get started again here in the coming weeks.

If you’re at all interested in helping improve the quality of Fedora, please come help us out! Read the wiki page and join us on freenode in the #fedora-qa channel. We’d love to have you and answer any questions you might have!

Footnotes -

[1] The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
[2] Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
[3] This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.

Holiday break 2014.

The post Holiday break 2014. appeared first on The Grand Fallacy.

Like many people who celebrate holidays around this time of year, I’m taking some vacation time to spend with family and friends. This time helps me relax and recharge for the next year, which promises to be full of energy and new challenges. That’s especially important in a fast paced environment like working at Red Hat.

Quite a few of the Fedora Engineering team members, like me, are taking vacation time. They’ll be at varying levels of connection, so don’t be surprised if it takes longer to reach someone than usual. For example, I’ll be mostly away from the keyboard, visiting family or picking up some musical pursuits. I’ve encouraged our team to use the Fedora vacation calendar, so you know who might not be around. I’m starting my time off after today, and will return to duty Monday, January 5, 2015. (Wow, 2015 still sounds weird to me.)

I hope everyone in the Fedora community has a peaceful and joyous holiday season, and a happy and successful New Year!

OpenHardware Random Number Generator

Before I spend another night reading datasheets: would anyone be interested in an OpenHardware random number generator in a full-size SD card format? The idea being you insert the RNG into the SD slot of your laptop, leave it there, and the kernel module just slurps trusted entropy when required.

Why SD? It’s a port that a lot of laptops already have empty, and on server class hardware you can just install a PCIe addon card. I think I can build such a thing for less than $50, but at the moment I’m just waiting for parts for a prototype so that’s really just a finger-in-the-air estimate. Are there enough free software people who care about entropy-related stuff?

Moving on from Red Hat.

After eleven and a half years, today is my final day at Red Hat.
I’ll write more about what comes next in the new year.

In the meantime, here’s a slightly edited version of a mail I sent internally yesterday.


In 2003, I got an email from Michael Johnson, about a secretive new thing Red Hat was working on called "Fedora". No-one was quite sure what it was going to be (some may argue we're still figuring it out), but he was pretty sure I'd want to be a part of it. "How'd you feel about taking care of _any_ kernel problems that come in for this thing?" he asked. I was terrified, but excited at the opportunities to learn a lot of stuff outside my usual areas of expertise.

With barely any real detail as to what I was signing up for, I jumped at the opportunity. Within my first few months, I had some concerns over whether or not I had made a good decision. Then Michael left for rPath, and I seriously started to have my doubts.

While everyone was figuring out what Fedora was going to be, I was thrown in at the deep end. "Here's Red Hat Linux 7, 8 and 9, you maintain the kernel for those now. Go". I remember looking at bugzilla, scrolling through page after page of bugs, thinking "This is going to be a nightmare." At the same time, RHEL 3 was really starting to take shape. I looked at what the guys working on RHEL were doing and thought "Well, this sucks, but those guys.. they _really_ have work to do". As much as I was buried alive in work, I relished every moment of it, learning as much as I could in what little spare time I had.

Then Fedora finally happened. For those not around back then, Fedora Core 1 was pretty much what Red Hat Linux 10 would have been from a kernel pov. A nasty hairball of patches that weren't going upstream (execshield! 4g4g! Tux! CIPE!) that even their authors had stopped maintaining, and a bunch of features backported from 2.5 to 2.4. I get the shakes when I think back to the horrors of maintaining that mess, but like the horrors of RHL before it, it was an amazing learning experience (mostly "what not to do").

But for all its warts, Fedora gained traction, and after Fedora 2 moved to a 2.6 kernel, things really started to take shape. As Fedora's community started to grow, things got even busier in bugzilla than RHL had ever been.

Then somehow I got talked into also being RHEL4 kernel maintainer for a while.
It turned out that juggling Fedora 3, Fedora 4, Rawhide, RHEL4 GA, and RHEL4 U1 means you don't get a lot of time to sleep. So after finding another sucker to deal with the RHEL work, I moved back to just doing Fedora work, and in another big turning point, we started to slowly grow out the Fedora kernel team.

Over the years that followed, the only thing that remained constant was the inflow of bugs. At any given time we had a thousand or so bugs open, with at best 3 people, at worst 1 person working on them. I'm incredibly proud of what we've managed to achieve with the Fedora kernel. More than just the base for RHEL, it changed the whole landscape of upstream kernel development.

  • Our insistence on shipping the latest code, with as few 'special sauce' patches as possible, won over a lot of upstream developers that wouldn't have given us the time of day for similar bugs back in the RHL days. Sometimes painful for our users, but Linux as a whole got better because of our stance here.
  • Decisions like Fedora enabling debug options by default in betas shook out an unbelievable number of bugs almost as soon as they get introduced. Again, painful for users, but from a quality standpoint, we found a ton of bugs in code others were racing to ship first and call "enterprise ready".
  • Fedora enabling features sometimes before they were fully baked got us a lot of love from their respective upstream maintainers.

Despite this progress though, I always felt we were on a treadmill making no real forward progress. That constant 1000 or so bugs kept nagging at me. As fast as we closed them out, a new batch would arrive.

In more recent years, we tried to split the workload within the team so we could do more proactive bug-finding before users even find them. My own 'trinity' project has found so many serious bugs (filesystem corruptors, root holes, vm corner cases, the list goes on) that it got to be almost a full time job just tracking everything.

I used to feel that leaving Red Hat wasn't something I could do. On a few occasions I actually turned down offers from potential employers, because "What about the Fedora kernel?". For the first time since the project has begun I feel like I've left things in more than capable hands, and I'm sure things will continue to move in the right direction.

3 RHL's. 5 and a half RHEL's. 21 Fedoras. You don't even want to know how much hardware I've destroyed in the line of duty in this time. It's been uh, an experience.

So, after all this time, one thing I have learned, is that all this was definitely one of my better decisions. I hope that my next decision turns out to be an equally good one.

Moving on from Red Hat. is a post from: codemonkey.org.uk

Third Wave and Telecommuting

I have been reading Tim Bray’s blog post on how he started to work at Amazon, and I was struck by the comment by len, particularly by this (he starts by quoting Tim):

“First, I am to­tally sick of working remotely. I want to go and work in rooms with other people working on the same things that I am.”

And that says a lot. Whatever the web has enabled in terms of access, it has proven to be isolating where human emotions matter and exposing where business affairs matter. I can’t articulate that succinctly yet, but there are lessons to be learned worthy of articulation. A virtual glass of wine doesn’t afford the pleasure of wine. Lessons learned.

Although I generally agree with your sentiment (these are really not your Friends, except if they already are), I believe the situation with telecommuting is more complex. I have been telecommuting for the past eight years (or so; yikes, time flies!) and I do like it most of the time. However, it really requires a special type of personality, a special type of environment, a special type of family, and a special type of work to be able to do it well. I know plenty of people who do well working from home (with an occasional stay in the coworking office) and some who just don’t. It has nothing to do with IQ or anything like that; for some people it just works, and I have some colleagues who left Red Hat just because they could not work from home and the nearest Red Hat office was too far from them.

However, this trivial statement makes me think again about stuff which is much more profound in my opinion. I am a firm believer in the coming of what Alvin and Heidi Toffler called “The Third Wave”. That after the mainly agricultural and mainly industrial societies the world is changing, so that “much that once was is lost” and we don’t know exactly what is coming. One part of this change is substantial change in the way we organize our work. It really sounds weird but there were times when there were no factories, no offices, and most people were working from their homes. I am not saying that the future will be like the distant past, it never is, but the difference makes it clear to me that what is now is not the only possible world we could live in.

I believe that the norm of people leaving their home in the morning to go to work will be very much diminished in the future. Probably some parts of the industrial world will remain around us (after all, there are still big parts of the agricultural world around us), but I think it might have the same impact (or as little impact) as the agricultural world has on the current world. If the trend of offices dissolving continues (and I don’t see a reason why it wouldn’t; in the end, all those office buildings and all that commuting are a terrible waste of money) we can expect a really massive change in almost everything: the way we build homes (suddenly your home is not just the bedroom where you survive the night between two work shifts), transportation, the way we organize our communities (suddenly it does matter who your neighbor is), and of course a lot of social rules will have to change. I think we are absolutely unprepared for this, and we are also not talking about it enough. But we should.

tmux with screen-like key-bindings

I recently started switching from screen to tmux for my daily workflow, partly triggered by the increasing use of tmate for pair-programming sessions.

For that purpose I wanted the key-bindings to be as similar as possible to the ones I am used to from screen, which mostly involves changing the prefix (hotkey) from Ctrl-b to Ctrl-a. This is achieved in the awesome tmux configuration file ~/.tmux.conf by

set-option -g prefix C-a

Now in screen, you can send the hotkey through to the application by typing Ctrl-a followed by a plain a. I use this frequently, e.g. for sending Ctrl-a to the shell prompt instead of pos1. Tmux offers the send-prefix command specifically for this purpose, which can be bound to a key. My ~/.tmux.conf file already contained

bind-key a send-prefix

According to the tmux manual page, this complete snippet should make it work:


set-option -g prefix C-a
unbind-key C-b
bind-key C-a send-prefix

but it was not working for me! :-(

Entering the command bind-key C-a send-prefix in tmux interactively (command prompt triggered with Ctrl-a :) worked though. After experimenting a bit, I found out that the order of options seems to matter here: Once I put the bind-key a send-prefix line before the set -g prefix C-a one, it started working. So here is the complete snippet that works for me:


bind-key C-a send-prefix
unbind-key C-b
set-option -g prefix C-a

I am not sure whether this is some speciality of my new Fedora. On a Debian system, the problem does not seem to exist...

In order to complete the screen-like mapping of Ctrl-a, let me mention that bind-key C-a last-window lets double Ctrl-a toggle between the two most recent tmux session windows. So here is the complete part of my config with respect to setting Ctrl-a as the hot-key:


bind-key C-a send-prefix
unbind-key C-b
set-option -g prefix C-a
bind-key C-a last-window

Note that the placement of the last-window setting does not make a difference.

OpenStack on aarch64

OpenStack can now be installed using Fedora 21 or Rawhide, on aarch64 hardware.

You have to use the packstack --allinone install method. Ceilometer doesn’t work because we don’t have mongodb on aarch64 yet, and there is a selection of bugs which you need to work around until they are fixed[1].
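
In case it saves anyone a search, the --allinone route boils down to the following (package name as provided by the Fedora/RDO OpenStack repos):

yum install -y openstack-packstack
packstack --allinone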

The big problem is I don’t have a convenient set of aarch64 cloud images to run on it yet :-(

Happy holidays everyone :-)

[1] 1170646 1174795 1174805 1175419 1175428 1175450 1175460 1175472


PHP version 5.4.36, 5.5.20 and 5.6.4

RPMs of PHP version 5.6.4 are available in the remi repository for Fedora 21 and in the remi-php56 repository for Fedora ≤ 20 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.5.20 are available in the remi repository for Fedora ≤ 20 and in the remi-php55 repository for Enterprise Linux.

RPMs of PHP version 5.4.36 are available in the remi repository for Enterprise Linux.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

The 5.4.33 release was the last planned release containing regular bugfixes. All subsequent releases contain only security-relevant fixes, for the term of one year.

Installation: read the Repository configuration and choose your version and installation mode.

Replacement of default PHP by version 5.6 installation (simplest):

yum --enablerepo=remi-php56,remi update php\*

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum --enablerepo=remi install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum --enablerepo=remi-php55,remi update php\*

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

Replacement of default PHP by version 5.4 installation (enterprise only):

yum --enablerepo=remi update php\*

Parallel installation of version 5.4 as Software Collection (x86_64 only):

yum --enablerepo=remi install php54

And soon in the official updates:

To be noted:

  • EL7 rpms are built using RHEL-7.0
  • EL6 rpms are built using RHEL-6.6
  • for php 5.5, the zip extension is now provided by the php-pecl-zip package.
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php54 - php55)

December 18, 2014

Discussion of the firewall defaults in Fedora Workstation

Shortly before the release of Fedora 21, a discussion broke out on the developer list about what the right default settings are for the firewall used in Fedora Workstation.

Specifically, the discussion revolved around whether all ports from 1025 upwards should be open or closed by default. In the course of the discussion it became clear that the actual problem is rather that the user currently gets no feedback at all when a program wants to communicate on a port > 1024. If the port is closed, the user at best gets an error message from the application; with an open port, on the other hand, the user does not even notice which ports a program is communicating on.

Against the background of this proverbial choice between plague and cholera, the Fedora developers decided to open all ports > 1024 by default. Anyone who is not happy with this default can simply use a different zone than “Fedora Workstation” in the firewall settings.
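
For example, switching the default zone away from the Workstation one is a single firewalld command (zone names as shipped in Fedora 21):

firewall-cmd --set-default-zone=FedoraServer
firewall-cmd --get-default-zone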

Before you initiate a “docker pull”

In addition to the general challenges that are inherent to isolating containers, Docker brings with it an entirely new attack surface in the form of its automated fetching and installation mechanism, “docker pull”. It may be counter-intuitive, but “docker pull” both fetches and unpacks a container image in one step. There is no verification step and, surprisingly, malformed packages can compromise a system even if the container itself is never run. Many of the CVEs issued against Docker have been related to packaging that can lead to install-time compromise and/or issues with the Docker registry.

One way, now resolved, that such malicious issues could compromise a system was by a simple path traversal during the unpack step. By simply using a tarball’s capacity to unpack to paths such as “../../../”, malicious images were able to overwrite any part of a host file system they desired.

Thus, one of the most important ways you can protect yourself when using Docker images is to make sure you only use content from a source you trust and to separate the download and unpack/install steps. The easiest way to do this is simply to not use the “docker pull” command. Instead, download your Docker images over a secure channel from a trusted source and then use the “docker load” command. Most image providers also serve images directly over a secure, or at least verifiable, connection. For example, Red Hat provides SSL-accessible “Container Images”.  Fedora also provides Docker images with each release as well.

While Fedora does not provide SSL with all mirrors, it does provide a signed checksum of the Docker image that can be used to verify it before you use “docker load”.
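
As an illustration, the download-verify-load flow looks roughly like this. The URLs and file names below are made up; use whatever your trusted image provider actually publishes:

curl -O https://example.com/images/Fedora-Docker-Base-21.tar.xz
curl -O https://example.com/images/Fedora-Docker-Base-21-CHECKSUM
sha256sum -c Fedora-Docker-Base-21-CHECKSUM   # verify the image before unpacking anything
xz -d Fedora-Docker-Base-21.tar.xz
docker load < Fedora-Docker-Base-21.tar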

Since “docker pull” automatically unpacks images and this unpacking process itself is often compromised, it is possible that typos can lead to system compromises (e.g. a malicious “rel” image downloaded and unpacked when you intended “rhel”). This typo problem can also occur in Dockerfiles. One way to protect yourself is to prevent accidental access to index.docker.io at the firewall-level or by adding the following /etc/hosts entry:

127.0.0.1 index.docker.io

This will cause such mistakes to timeout instead of potentially downloading unwanted images. You can still use “docker pull” for private repositories by explicitly providing the registry:

docker pull registry.somewhere.com/image

And you can use a similar syntax in Dockerfiles:

from registry.somewhere.com/image

Providing a wider ecosystem of trusted images is exactly why Red Hat began its certification program for container applications. Docker is an amazing technology, but it is neither a security nor interoperability panacea. Images still need to come from sources that certify their security, level-of-support, and compatibility.

Red Hat 2015 Customer Priorities Survey


Customers reporting interest in cloud, containers, Linux, and OpenStack for 2015

More information at http://www.redhat.com/en/about/blog/red-hat-2015-customer-priorities-survey

Kind Regards

Frederic


All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar

December 17, 2014

Slow scrolling – Firefox – Fedora 21

When I first installed Fedora 21, I knew I was going to like it, but one thing that got on my tits was scrolling within Firefox: it was slow, jerky, and pretty much unusable. So off to Google I went, looking for a bug report or a solution. Well, the solution was easier than I thought.


Go into the Firefox settings and, in the Advanced section, untick the “Use hardware acceleration when available” box. Restart Firefox and your scrolling should be a lot smoother.

The post Slow scrolling – Firefox – Fedora 21 appeared first on Paul Mellors [DOT] NET.


There are scheduled downtimes in progress
New status scheduled: Scheduled outage for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Actually shipping AppStream metadata in the repodata

For the last couple of releases Fedora has been shipping the appstream metadata in a package. First it was the gnome-software package, but this wasn’t an awesome dep for KDE applications like Apper and was a pain to keep updated. We then moved the data to an appstream-data package, but this was just as much of a hack that was slightly more palatable for KDE. What I’ve wanted for a long time is to actually ship the metadata as metadata, i.e. next to the other files like primary.xml.gz on the mirrors.

I’ve just pushed the final patches to libhif, PackageKit and appstream-glib, which means that if you ship metadata of type appstream and appstream-icons in repomd.xml then they get downloaded automatically and decompressed into the right place so that gnome-software and Apper can use the data magically.
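
To check whether a given repository already carries the new metadata, you can just look for those data types in its repomd.xml; a rough check (the repository URL is a placeholder) might be:

# Placeholder URL -- point this at a real repository's repodata/repomd.xml.
curl -s https://example.org/myrepo/x86_64/repodata/repomd.xml | grep -E 'type="appstream(-icons)?"'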

I had not worked on this much before, as appstream-builder (which actually produces the two AppStream files) wasn’t suitable for the Fedora builders for two reasons:

  • Even just processing the changed packages, it took a lot of CPU, memory, and thus time.
  • Downloading screenshots from random websites all over the internet wasn’t something that a build server can do.

So, createrepo_c and modifyrepo_c to the rescue. This is what I’m currently doing for the Utopia repo.

# Rebuild the package metadata (skip the sqlite database files)
createrepo_c --no-database x86_64/
createrepo_c --no-database SRPMS/
# Add the pre-built AppStream XML and icon tarball to the repodata,
# which also registers them in repomd.xml
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream.xml.gz		\
	x86_64/repodata/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/

If you actually do want to create the metadata on the build server, this is what I use for Utopia:

appstream-builder			\
	--api-version=0.8		\
	--origin=utopia			\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=4			\
	--min-icon-size=48		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons	\
	--screenshot-uri=http://people.freedesktop.org/~hughsient/fedora/21/screenshots/

For Fedora, I’m going to suggest getting the data files from alt.fedoraproject.org during compose. It’s not ideal as it still needs a separate server to build them on (currently sitting in the corner of my office) but gets us a step closer to what we want. Comments, as always, welcome.

Fedora 21 – Gnome Terminal and IRSSI

Well, it seems that the default install of GNOME Terminal in version 3.14.2 doesn’t play nicely with Irssi. If you have multiple channels open, you’re unable to press Alt+1 or Alt+2 to get to each channel.

It’s an easy fix: open a terminal, and from the top of the screen right-click Terminal and click Preferences.

Click Shortcuts

Scroll down until you see the TABS section

Double-click where it says Alt+1.

It’ll ask for a new shortcut; press the Backspace key and it’ll change to Disabled.

Do this for all the shortcut keys you normally use in Irssi, then click the Close button; you should now have all your Alt-key functionality back.
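
If you prefer the command line, the same shortcuts can probably be disabled with gsettings; the schema and key names below are assumptions based on GNOME Terminal 3.14, so list the keys first to confirm they exist on your install:

# Untested sketch -- schema and key names are assumptions; confirm them first.
gsettings list-keys org.gnome.Terminal.Legacy.Keybindings
# Disable Alt+1 (repeat for switch-to-tab-2, switch-to-tab-3, and so on).
gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ switch-to-tab-1 'disabled'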

The post Fedora 21 – Gnome Terminal and IRSSI appeared first on Paul Mellors [DOT] NET.


Fedora 21, 22, and 19, firewall discussion, and holiday break

Fedora is a big project, and it’s hard to keep up with everything that goes on. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for December 17th, 2014:

Fedora 21 Retrospective: What was awesome? What wasn’t?

While Fedora 22 is already rolling into the target zone, we do want to make sure we look back at this previous cycle and identify things we can improve — ideally, specific and actionable changes. In the end, we came out with (another!) great release, but there is always something to learn. In particular, we ended yet again in a last-minute scramble to get out a release we could feel good about signing off on before the holidays, and next time around it would be nice to put less stress on all of our contributors (including the quality assurance team and the developers needed to make those late fixes).

There will be more to it than this, but to get started, we have an F21 Retrospective wiki page to help collect comments and ideas.

Fedora 22: Coming up fast!

FESCo (the Fedora Engineering Steering Committee, the elected organization which oversees technical decisions in the project) has indicated that we’re back to aiming for the traditional May/October Fedora release cycle, and although the F22 schedule isn’t finalized yet, we have a tentative plan calling for a release about 6 months from now. When you work back from that, it means that there’s really not much time to think about change proposals for F22, especially if we subtract out holiday time. So, if you’re thinking of working on something big, please start getting your proposal formalized — the tentative deadline is January 20th, 2015.

Fedora 19: End of Life

And on the other end of the cycle: it’s time to say farewell to Fedora 19. If you’re running this release, please plan to update before January 6th, 2015, when the last updates will go out. After that, there will be no further security fixes. The good news is that Fedora 20 was a great release, and Fedora 21 is even better, and I think you’ll be happy with the upgrade.

Fedora Workstation firewall discussion

This week’s big devel-list thread concerned the default firewall settings in Fedora Workstation. The Fedora Workstation Working Group was not happy with the user experience offered by blocking incoming “high ports” by default. Out of the box, nothing is listening on these, but if one installs software that expects to, it won’t work, and because we don’t have a good way yet to tie attempts to access ports to listening applications and communicate that to the user, the resulting failure is invisible.

On the other hand, if you install something and it starts listening and you didn’t know that, that’s also invisible. So, pretty much everyone recognizes this as a not ideal situation. Everyone involved in the discussion also is concerned with enhancing user security in practice — the question is just how to best get there from an imperfect state. Originally, the Workstation WG asked to disable the firewall entirely. FESCo asked instead that it be left available, possibly with a less-restrictive out-of-the-box configuration — the path taken for F21.

If you’re not running Workstation, this doesn’t affect you. If you are, and would like a different configuration, run the firewall configuration tool and either edit the Fedora Workstation zone or change the default zone. (There’s a long list of options, but “public” is a generally-restrictive choice.)
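
For reference, the same change can be made from a terminal with firewall-cmd; treat this as a sketch, since the available zone names can differ between installs:

# Run as root. List the zones and the current default, then switch to the
# generally-restrictive "public" zone.
firewall-cmd --get-zones
firewall-cmd --get-default-zone
firewall-cmd --set-default-zone=public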

You can also change the per-network zone. Unfortunately, wired networks are currently all treated as one zone per interface, but wireless networks are distinguished individually. This can be done in a number of ways, but the easiest is to run the network configuration tool (in GNOME control center — press the overview key and start typing “network”), select the wifi network in question, press the little gear icon next to it, go down to Identity (?!), and choose the appropriate firewall zone. (Again, there’s a long list — go back to the firewall config tool to see exactly what they all do.)

This is clearly not the most friendly approach; it’s my understanding that the desktop designers, network tools team, and security team are going to work together to develop a better overall solution for Fedora 22 and beyond.

Overall, the mailing list thread stayed relatively positive and constructive and avoided personal attacks, although there were some accusations of bad faith actions which do not seem warranted based on the actual history. It is, however, a case where more transparent discussion and communication could have helped; that’s something we’re continually working at making better and might make for a good component of the F21 retrospective mentioned above.

Christmas break

Of course things in Fedora never really stop, but it’s vacation time for many of us. Before I was a beach-bum Red Hat employee, I was used to seeing extended days off as ideal for getting in some serious work on Fedora. Now, things are strangely inverted, and I’m going to use the time to unplug a bit. I’ll be back in January all recharged, and will catch up with everything that’s happened in the meantime — FtFTW will resume the week of January 15th — or possibly the week before, but let’s save the hard-to-keep resolutions for New Year’s Day. :)

Check out the Fedora vacation calendar to see who else will be away, and make sure to add yourself if you will be too. (There’s even a Fedora badge for doing so!)


 


Improving Eclipse Platform Stability On Rawhide

The Eclipse platform on Fedora Rawhide can be pretty unstable at times. Every update to one of its dependencies requires a rebuild. As a result, it has been on our TODO list for a while to work out some way of making Eclipse more resilient to these kinds of dependency updates (at least in cases where a rebuild shouldn’t be required). Looking upstream, there are quite a few bugs relating to this topic (410710, 410785, 408138).

For simple rebuilds, where project metadata likely hasn’t changed, there’s a fix for symbolic links in place.

Another common breakage happens when dependencies that are listed as plugins within a feature get updated.

Unlike regular Eclipse plugins that might contribute certain capabilities, a feature represents a set of plugins. One can think of them as RPM meta-packages, and some features are used especially by end-users to install a larger set of plugins (e.g. org.eclipse.cdt.feature.group is “C/C++ Development Tools”). A feature lists the set of plugins it provides with the <plugin> tag and may also specify dependencies with the <requires> tag. The difference is that the <plugin> tag locks onto the exact version discovered at build time, and may only resolve against that exact version. The <requires> tag, on the other hand, allows some flexibility in terms of dependencies, with version ranges and compatibility levels.

Sometimes one might see features listing dependencies as content. Does the JDT provide, or own, org.junit and org.hamcrest? Do we really mean to say that changing the versions of those plugins implies a completely different feature? Clearly this isn’t the case, and it would make much more sense to use <requires>, but the former practice seems quite common.

I don’t think this is done out of a lack of understanding, but because projects also want to include their dependencies in the repositories they deploy. The <plugin> definition accomplishes this. Using <requires>, one would also need to list the plugins to be included in the repository definition (site.xml/category.xml), and some platform projects have to jump through additional hoops to change this file, so it’s not a huge surprise that many go for the simpler option. Sadly, this causes some problems for us in Fedora land.

Luckily there’s now a fix for platform features that should help reduce the number of rebuilds.


Container Security: Isolation Heaven or Dependency Hell

Docker is the public face of Linux containers and two of Linux’s unsung heroes: control groups (cgroups) and namespaces. Like virtualization, containers are appealing because they help solve two of the oldest problems to plague developers: “dependency hell” and “environmental hell.”

Closely related, dependency and environmental hell can best be thought of as the chief causes of “works for me” situations. Dependency hell simply describes the complexity inherent in a modern application’s tangled graph of external libraries and programs it needs to function. Environmental hell is the name for the operating system portion of that same problem (i.e. the wrinkles, in particular which bash implementation, on which that quick script you wrote unknowingly relies).

Namespaces provide the solution in much the same way as virtual memory simplified writing code on a multi-tenant machine: by providing the illusion that an application suite has the computer all to itself. In other words, “via isolation”. When a process or process group is isolated via these new namespace features, we say they are “contained.” In this way, virtualization and containers are conceptually related, but containers isolate in a completely different way, and conflating the two is just the first of a series of misconceptions that must be cleared up in order to understand how to use containers as securely as possible. Virtualization involves fully isolating programs to the point that one can use Linux, for example, while another uses BSD. Containers are not so isolated. Here are a few of the ways that “containers do not contain”:

  1. Containers all share the same kernel. If a contained application is hijacked with a privilege escalation vulnerability, all running containers *and* the host are compromised. Similarly, it isn’t possible for two containers to use different versions of the same kernel module.
  2. Several resources are *not* namespaced. For example, normal ulimit systems are still needed to control resources such as file handles. The kernel keyring is another example of a resource that is not namespaced. Many beginning users of containers find it counter-intuitive that socket handles can be exhausted or that Kerberos credentials are shared between containers when they believe they have exclusive system access. A badly behaving process in one container could use up all the file handles on a system and starve the other containers. Diagnosing the shared resource usage is not feasible from within the container.
  3. By default, containers inherit many system-level kernel capabilities. While Docker has many useful options for restricting kernel capabilities (a rough example follows this list), you need a deeper understanding of an application’s needs to run it inside containers than you would if running it in a VM. The containers and the applications within them will be dependent on the capabilities of the kernel on which they reside.
  4. Containers are not “write once, run anywhere”. Since they use the host kernel, applications must be compatible with said kernel. Just because many applications don’t depend on particular kernel features doesn’t mean that no applications do.
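
As a rough illustration of point 3 above (the image name and the capability choices are placeholders, not a recommendation for any particular workload):

# Drop all kernel capabilities, then add back only what the application needs.
# "fedora:21" and NET_BIND_SERVICE are placeholders for your own image and requirements.
docker run --rm -it --cap-drop=ALL --cap-add=NET_BIND_SERVICE fedora:21 /bin/bash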

For these and other reasons, Docker images should be designed and used with consideration for the host system on which they are running. By only consuming images from trusted sources, you reduce the risk of deploying containerized applications that exhaust system resources or otherwise create a denial of service attack on shared resources. Docker images should be considered as powerful as RPMs and should only be installed from sources you trust. You wouldn’t expect your system to remain secured if you were to randomly install untrusted RPMs nor should you if you “docker pull” random Docker images.

In the future we will discuss the topic of untrusted images.

Try out LXC with an Ansible playbook

The world of containers has been evolving rapidly lately. The latest turn of events came when the CoreOS developers announced Rocket as an alternative to Docker. However, LXC still lingers as a very simple path to begin using containers.

When I talk to people about LXC, I often hear people talk about how difficult it is to get started with LXC. After all, Docker provides an easy-to-use image downloading function that allows you to spin up multiple different operating systems in Docker containers within a few minutes. It also comes with a daemon to help you manage your images and your containers.

Managing LXC containers using the basic LXC tools isn’t terribly easy — I’ll give you that. However, managing LXC through libvirt makes the process much easier. I wrote a little about this earlier in the year.

I decided to turn the LXC container deployment process into an Ansible playbook that you can use to automatically spawn an LXC container on any server or virtual machine. At the moment, only Fedora 20 and 21 are supported. I plan to add CentOS 7 and Debian support soon.

Clone the repository to get started:

git clone https://github.com/major/ansible-lxc.git
cd ansible-lxc

If you’re running the playbook on the actual server or virtual machine where you want to run LXC, there’s no need to alter the hosts file. You will need to adjust it if you’re running your playbook from a remote machine.
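
Running it is then a single ansible-playbook invocation; the playbook file name here is a guess, so use whatever the repository actually ships:

# The playbook name is an assumption -- check the cloned repository for the real file.
ansible-playbook -i hosts playbook.yml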

As the playbook runs, it will install all of the necessary packages and begin assembling a Fedora 21 chroot. It will register the container with libvirt and do some basic configuration of the chroot so that it will work as a container. You’ll end up with a running Fedora 21 LXC container that is using the built-in default NAT network created by libvirt. The playbook will print out the IP address of the container at the end. The default password for root is fedora. I wouldn’t recommend leaving that for a production use container. ;)

All of the normal virsh commands should work on the container. For example:

# Stop the container gracefully
virsh shutdown fedora21
# Start the container
virsh start fedora21

Feel free to install the virt-manager tool and manage everything via a GUI locally or via X forwarding:

yum -y install virt-manager dejavu* xorg-x11-xauth
# OPTIONAL: For a better looking virt-manager interface, install these, too
yum -y install gnome-icon-theme gnome-themes-standard

The post Try out LXC with an Ansible playbook appeared first on major.io.

GLPI version 0.85.1

GLPI (Free IT and asset management software) version 0.85.1 is available. RPMs are available in the remi-test repository for Fedora ≥ 18 and EL ≥ 5.

Version announcements:

Documentation is available (PDF format, in French):

As not all plugin projects have released a stable version yet, this is not ready for production, so version 0.84.x stays available in the remi repository.

Available in the repository:

  • glpi-0.85.1-1
  • glpi-behavior-0.85-1
  • glpi-fusioninventory-0.85+1.0-BETA1-1

Warning: for security reasons, the installation wizard is only allowed from the server where GLPI is installed. See the configuration file (/etc/httpd/conf.d/glpi.conf) to temporarily allow more clients.

You are welcome to try this version in a dedicated test environment, give your feedback, and post your questions and bugs on:

The RPMs and this page will be updated regularly until this version moves to stable (follow this entry).

Tolkien Coffee mug project

A few months ago I set out to get a copy of one of the versions of The Hobbit that have Tolkien’s original illustrations. After a bit of searching, I found an available copy of the 1962 Swedish translation, Bilbo en hobbits äventyr. Some editions of that translation, at least the one I got, the 10th reprint from 1979, have the illustrations.

So now, I can enjoy my own private printed copy of these nice illustrations.

Another thought came to me: what about having these pictures on coffee mugs? I could have a complete set! (Or at least, nice nerdy Christmas presents.) So I looked at some of the available photo-mug printing services, and replaced the default kitten and children with scans from the book. The results were quite all right. I fitted one picture on each side of the cups.

One problem is that these illustrations are made to fill a whole book page. Fitting them onto a fairly small cup makes the details tiny, and the feeling of majesty of the mountains, for instance, is somewhat lessened. For the landscape pictures, using a cup print template that covers the whole cup would make them more visible. A better approach would perhaps be to crop a part of the image to enlarge some detail, like only Smaug and Bilbo, or only The Hill with Bag End from the Hobbiton picture. This would of course lessen some of the impact that the pictures give. The central element of the Hobbiton picture is the river and the mill, and removing them makes the picture unbalanced.

Then there are the cups themselves. I dreamed of some of the feeling captured by the Arabia series of Moomin cups. They have a perfect size for my taste in coffee mugs. They are not too large (some 8 cm high), are slightly wider at the brim than at the foot, and have quite small handles, making them a treat to the eye. The industrial standard cup for photo printing is much larger than this, giving you more picture space but a lower wife acceptance factor. I tried another vendor with a more slender cup that fitted the portrait pictures better, but it was even taller than the ones I had.

Perhaps I’ll try this again next year, if I can find a better cup, and some nice well thought-out image cuts.

Disclaimer: These cups are for my own personal use. They are not for sale. I do own my own paper copy of the pictures. The pictures are copyrighted material, and sharing scans or reproducing them for commercial use is probably illegal in most countries. I will not give any pointers to where to get the illustrations, nor where to produce cups. Getting a high-quality version of The Hobbit with these prints for your bookcase is a great reward in itself. Photo cup producers on the Internet are a dime a dozen.

syslog-ng incubator 0.4.1 released
Version 0.4.1 of the syslog-ng incubator was released. You can use it together with syslog-ng Open Source Edition 3.6. Functionality which was migrated to syslog-ng 3.6, like graphite and riemann support, has been removed from the incubator. Version 0.4.1 of the syslog-ng incubator includes many new features: Java destination Kafka destination ZeroMQ source and destination […]
On contribution

Contribution: a word I first heard in 2004. I was a student back then. My first contribution was to the KDE l10n project, with help from the Ankur Bangla project. More than anything else it was fun. All the people I first met were already doing many things; they used to do more work than talking, and they still do a lot more work than many people I know.

The last 10 years

Over the last 10 years the scenario of the FOSS movement in India has changed. Contributors used to be the rock stars. The people just starting out always wanted to become contributors. But a new word has taken that place: evangelist. Now everyone wants to become an evangelist. Most of the students I meet during conferences come to me and introduce themselves as evangelists of either Open Source or some other FOSS project; they only do the talking, and all of them want to change the world. But sadly, none of them seem to want to contribute back.

How to contribute?

I can understand that contributing is difficult in many cases. One needs some amount of preparation and some commitment to contribute to any project. That takes time and cannot be done overnight.

To begin with, you have to spend more time reading than anything else. Read more documentation, read more source code, read more meeting minutes of the project you want to contribute to. Remember one thing: one always reads more source code than one writes. But if you are just starting, you can spend more time writing code too.

Try to get involved in the discussions of the project. Join the IRC channel and stay there. In the beginning you may not understand all the conversations in the channel, but keep a note of the things people are discussing. You can read about them later, using a tiny and shiny site called google.com :)

I know new students have a tendency to try to solve non-programming bugs. But as most of you come from an engineering background, you should focus on programming more than anything else.

At home, try to find the things you do on the computer in steps and repeatedly or regularly. Try to write small programs which can do those tasks for you. One of my first proper projects was a small GUI application which I used to upload photos to flickr.com via email.

When working on some other big project, try to solve easy bugs at first. These days most projects somehow mark easy bugs in their bug tracker. In case you cannot find one, ask in the IRC channel for help. Remember that IRC is asynchronous; you may not get the answer right away. If someone is helping you, you may want to ask their timezone.

I am not saying doing work in other parts of the project is less meaningful. I personally want you to write more code than anything else. That way we will get more developers from India.

What about translation and documentation?

If you look at the people who contribute translations or documentation, you will find a few common things, like that they all love their language and they love writing. As I said before, even my first contributions were translations. But neither I nor anyone else at that time used to do this for some goodies or a ticket to a conference. We love our mother tongue and we love to use the computer in our language, period. If you are doing translations, then do it for the love of the language and for fun. Please do not do it for some stickers or release parties.

What about becoming an evangelist?

Before you start calling yourself an evangelist, you should learn about that project. You will have to spend a lot of time learning about the technology behind it; you will have to learn why some decisions were taken. An evangelist is a person who cares about and believes in the project and, most importantly, knows the project intimately. [S]he knows the developers behind the project and constantly talks, blogs, and spreads the news about the project. If you look at the established evangelists, you will find mostly veterans who spent a lot of time contributing to the project first. It is not about the age of the person, but more about the time [s]he spent in the project. By the way, if you want to call yourself a developer evangelist, first become a developer of that project. That means some real code, not some examples.

The doyen of Open Access in India
Met with Subbiah Arunachalam, the doyen of Open Access in science. He must be in his 70s, but his passion and enthusiasm for Open Access always amaze me. I asked him how he got interested in this area, and he said that when he was at the Indian Institute of Science (IISc), he wanted access to a journal, Surface Science, and asked a friend of his in the US to send him a copy. His friend quietly subscribed him to the journal, and Arun started getting the copies. When Arun looked at the cost of the journal, he was shocked, and realized that even IISc could not afford to subscribe to it. That led him to stumble upon the Open Access movement, which aims to make scientific, and other kinds of, literature freely accessible. Arun then started writing to politicians, bureaucrats and academics about Open Access, and got many people interested in the subject. I think he is a great example of how one person with passion and drive can make a great difference. A big salute to you, sir!
Calling people people. What’s in a name?

My IT service management professor once told the class, “there are only two professions who have users: IT and drug dealers.” It’s interesting how the term “user” has become so prevalent in technology, but nowhere else. Certainly the term “customer” is better for a services organization (be it an internal IT group or a company providing technology services). “Customer” sounds better, and it emphasizes whose needs are to be met.

For a free Internet service, though, it’s not necessarily an apt term, if for no other reason than the rule of “if you’re not paying for it, you’re the product.” That’s why I find Facebook’s recent decision to call their users “people” interesting.

Sure, it’s easy to dismiss this as a PR move calculated to make people feel more comfortable with a company that makes a living off of the personal information of others. I don’t doubt that there is a marketing component to this, but that doesn’t make the decision meritless. Words mean things, and choosing the right word can help frame employees’ mindsets, both consciously and subconsciously.

In Fedora, contributors have been actively discussing names, both of the collected software (“products” versus alternatives) and of the people involved (“contributors”, “developers”, “users”). Understanding the general perception of these terms is a critical part of selecting the right one (particularly when the chosen term has to be translated into many other languages). A clear definition of the people terms is a necessary foundation for trying to understand the needs and expectations of that group.

“People” may be too broad of a term, but it’s nice to see a major company forego the word “user”. Perhaps others will follow suit. Of course, “user” is just such a handy term that it’s hard to find a suitably generic replacement. Maybe that’s why it sticks around?

December 16, 2014

All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
There are scheduled downtimes in progress
New status scheduled: Database migrations in progress for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Major service disruption
New status major: Database migrations in progress for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Major service disruption
New status major: Database migrations in progress for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
There are scheduled downtimes in progress
Service 'Account System' now has status: scheduled: fas db migration in progress