October 20, 2014

Introduction to Fedora on Software Freedom Day Phnom Penh

For the first time, Software Freedom Day will be organized in Phnom Penh, on November 1st, 2014. The organizers plan for 100 participants, and there will be a few booths as well. I will take this opportunity to give a talk on “Introduction to Fedora”; we do not have enough resources to staff a booth there, so I decided to give a talk only.

Below is the schedule of the whole event:

Opening

  • 1:45pm – 1:55pm: Opening speech by Dr. Sopheap Seng, President of NIPTICT
  • 1:55pm – 2:15pm: Why Freedom Matters by Frederic Muller, President of Digital Freedom Foundation

Tracks

Technical track

  • 2:15pm – 2:45pm: Google (TBC)
  • 2:45pm – 3:15pm: Mozilla OS for mobile by Arky (TBC)
  • 3:15pm – 3:45pm: Kickstart a JavaScript project with Yeoman, Grunt & Bower (Darren Jensen, Founder of DevBootstrap.com)

  • 3:45pm – 4:45pm: Rebuild servers with GNU/Linux (Leap Sok, Collaborator of OS Cambodia)
  • 4:45pm – 5:15pm: TBC

Users track

  • 2:15pm – 2:45pm: Moodle by NIPTICT
  • 2:45pm – 3:15pm: Fedora by Somvannda Kong
  • 3:15pm – 3:45pm: eCommerce Free software by Jeff Laflamme (TBC)
  • 3:45pm – 4:15pm: Jack from hackerspace (TBC)
  • 4:15pm – 4:45pm: CentOS by NIPTICT
  • 4:45pm – 5:15pm: Migrating to Free Software by Tom Wilkins, OS Cambodia

Closing

  • 5:15pm – 5:30pm: Closing speech by Rapid from NIPTICT / Fred from Digital Freedom Foundation
  • 5:30pm: Group pictures (all volunteers, speakers, exhibitors and audiences)

October 19, 2014

Throughout the day of October 17, the last day of the eleventh edition of LatinoWare, I gave a morning talk about Fedora QA to a full room.
In the afternoon we finished distributing the remaining Fedora gifts and talked with some people interested in joining the Fedora Project. #fedora #latinoware #linux

LatinoWare 2014


October 18, 2014

Hacking out an Openshift app

I had an itch to scratch, and I wanted to get a bit more familiar with Openshift. I had used it in the past, but it was time to have another go. The app and the code are now available. Feel free to check out:

https://pdfdoc-purpleidea.rhcloud.com/

This is a simple app that takes the URL of a markdown file on GitHub, and outputs a pandoc-converted PDF. I wanted to use pandoc specifically, because it produces PDFs that are beautifully typeset with LaTeX. To embed a link in your upstream documentation that points to a PDF, just append the file’s URL to this app’s URL, under a /pdf/ path. For example:

https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md

will send you to a PDF of the puppet-gluster documentation. This will make it easier to accept questions as FAQ patches, without needing to keep an embedded binary PDF in git constantly updated.
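
Under the hood, the conversion boils down to fetching the raw markdown and running it through pandoc. As a rough sketch of the equivalent shell pipeline (the raw.githubusercontent.com rewrite and the pandoc flags are my assumptions, not necessarily what the app does internally):

$ curl -sL https://raw.githubusercontent.com/purpleidea/puppet-gluster/master/DOCUMENTATION.md \
    | pandoc -f markdown -o DOCUMENTATION.pdf

Since pandoc defaults to LaTeX for PDF output, this is also an easy way to preview locally what the app will serve.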

If you want to hear more about what I did, read on…

The setup:

Start by getting a free Openshift account. You’ll also want to install the client tools. Nothing is worse than having to interact with your app via a web interface. Hackers use terminals. Luckily, the Openshift team knows this, and they’ve created a great command line tool called rhc to make it all possible.

I started by following their instructions:

$ sudo yum install rubygem-rhc
$ sudo gem update rhc

Unfortunately, this left me with a problem:

$ rhc
/usr/share/rubygems/rubygems/dependency.rb:298:in `to_specs': Could not find 'rhc' (>= 0) among 37 total gem(s) (Gem::LoadError)
    from /usr/share/rubygems/rubygems/dependency.rb:309:in `to_spec'
    from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
    from /usr/local/bin/rhc:22:in `<main>'

I solved this by running:

$ gem install rhc

This makes my user-installed rhc take precedence over the system one. Then run:

$ rhc setup

and the rhc client will take you through some setup steps, such as uploading your public ssh key to the Openshift infrastructure. The beauty of this tool is that it will work with the Red Hat hosted infrastructure, or you can use it with your own infrastructure if you want to host your own Openshift servers. This alone means you’ll never get locked in to a third-party provider’s terms or pricing.

Create a new app:

To get a fresh python 3.3 app going, you can run:

$ rhc create-app <appname> python-3.3

From this point on, it’s fairly straightforward, and you can now hack your way through the app in python. To push a new version of your app into production, it’s just a git commit away:

$ git add -p && git commit -m 'Awesome new commit...' && git push && rhc tail

Creating a new app from existing code:

If you want to push a new app from an existing code base, it’s as easy as:

$ rhc create-app awesomesauce python-3.3 --from-code https://github.com/purpleidea/pdfdoc
Application Options
-------------------
Domain:      purpleidea
Cartridges:  python-3.3
Source Code: https://github.com/purpleidea/pdfdoc
Gear Size:   default
Scaling:     no

Creating application 'awesomesauce' ... done


Waiting for your DNS name to be available ... done

Cloning into 'awesomesauce'...
The authenticity of host 'awesomesauce-purpleidea.rhcloud.com (203.0.113.13)' can't be established.
RSA key fingerprint is 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'awesomesauce-purpleidea.rhcloud.com,203.0.113.13' (RSA) to the list of known hosts.

Your application 'awesomesauce' is now available.

  URL:        http://awesomesauce-purpleidea.rhcloud.com/
  SSH to:     00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com
  Git remote: ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
  Cloned to:  /home/james/code/awesomesauce

Run 'rhc show-app awesomesauce' for more details about your app.

In my case, my app also needs some binaries installed. I haven’t yet automated this process, but I think it can be done by creating a custom cartridge. Help to do this would be appreciated!

Updating your app:

In the case of an app that I already deployed with this method, updating it from the upstream source is quite easy. You just pull down the relevant commits, and then push them up to your app’s git repo:

$ git pull upstream master 
From https://github.com/purpleidea/pdfdoc
 * branch            master     -> FETCH_HEAD
Updating 5ac5577..bdf9601
Fast-forward
 wsgi.py | 2 --
 1 file changed, 2 deletions(-)
$ git push origin master 
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 312 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping Python 3.3 cartridge
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit bdf9601
remote: Activating virtenv
remote: Checking for pip dependency listed in requirements.txt file..
remote: You must give at least one requirement to install (see "pip help install")
remote: Running setup.py script..
remote: running develop
remote: running egg_info
remote: creating pdfdoc.egg-info
remote: writing pdfdoc.egg-info/PKG-INFO
remote: writing dependency_links to pdfdoc.egg-info/dependency_links.txt
remote: writing top-level names to pdfdoc.egg-info/top_level.txt
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: reading manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: running build_ext
remote: Creating /var/lib/openshift/00112233445566778899aabb/app-root/runtime/dependencies/python/virtenv/venv/lib/python3.3/site-packages/pdfdoc.egg-link (link to .)
remote: pdfdoc 0.0.1 is already the active version in easy-install.pth
remote: 
remote: Installed /var/lib/openshift/00112233445566778899aabb/app-root/runtime/repo
remote: Processing dependencies for pdfdoc==0.0.1
remote: Finished processing dependencies for pdfdoc==0.0.1
remote: Preparing build for deployment
remote: Deployment id is 9c2ee03c
remote: Activating deployment
remote: Starting Python 3.3 cartridge (Apache+mod_wsgi)
remote: Application directory "/" selected as DocumentRoot
remote: Application "wsgi.py" selected as default WSGI entry point
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
   5ac5577..bdf9601  master -> master
$

Final thoughts:

I hope this helped you getting going with Openshift. Feel free to send me patches!

Happy hacking!

James


Trinity updates

Over a month ago, I posted about some pthreads work I was experimenting with in Trinity, and how that wasn’t really working out. After taking a short vacation, I came back with no real epiphanies, and decided to back-burner that work for now, and instead refocus on fixing up some other annoying problems that I’d stumbled across while doing that experimenting. Some of these problems were actually long-standing bugs in trinity. So that’s pretty much all I’ve been working on for the last month, and I’m now pretty happy with how long it runs for (providing you don’t hit a kernel bug first).

The primary motivation was to fix a problem where trinity’s internal data structures would get corrupted. After a series of debugging patches, I found a number of places where a child process would overrun a buffer it had allocated.

First up: the code that takes syscall arguments and renders them into a human-readable string. In some cases this would write huge strings past the end of the buffer. One example of this was the instance where trinity would generate a random pathname. It would sometimes generate complete garbage, which was fine until it came to printing it out. Fixed by deleting lots of code in the pathname generator. Stressing the negative dentry case was never that interesting anyway. After fixing up a few other cases in the argument generator, I looked at the code that performs rendering to buffers. None of this code took length parameters, or took into account the remaining space in the buffers. A fairly quick rewrite took care of that.

After these bugs were fixed trinity would (on a good kernel) run for a really long time without incident. With longer runtimes, a few more obscure corner cases turned up.

There were 2-3 cases where the watchdog process would hang waiting for a condition that would never be met (due to losing track of how many running child processes there were). I’m still not happy that this can even occur, but it is at least a little less likely to hang when it happens now. I’ll investigate the actual cause of this later.

Another fun watchdog bug: we keep track of the time stamp a child performed its last syscall at, and check to make sure 1 second later that it has increased by some small amount. To make sure we haven’t corrupted our own state, there’s also a sanity check that we haven’t jumped into the future. But we also have to compensate for the possibility that adjtimex was the random syscall we did. That takes a maximum offset of 2145. The code checked for that but forgot to also add the one second since the last time we checked.

There’s been a bunch of small 1-2 fixes like this lately, but I’m sitting on a larger set of changes that I’ll start to trickle into git next week, which moves towards cleaning up the “create a random page to pass to syscalls” code, which has been another fun source of corruption bugs.

In kernel news: The only interesting bugs this week that Trinity has shown up, have been two ext4 bugs. Diagnosing those has pointed out some more enhancements that are needed to the post-mortem code in trinity. Once I’ve cleared the current backlog of patches, I’ll work on adding better tracking of fd’s in the logging code. In other news, the btrfs bug trinity hit in August is now fixed in 3.17+ git.

Trinity updates is a post from: codemonkey.org.uk

libguestfs 1.28 released

The new stable version of libguestfs — a C library and tools for accessing and modifying virtual machine disk images — has been released.

There is one brand new tool, virt-log. And I rewrote the virt-v2v and virt-p2v tools. These tools convert VMware and Xen guests, as well as physical machines, to run on KVM. They are now much faster and better than before.

As well as that there are hundreds of other improvements and bug fixes. For a full list, see the release notes.

Libguestfs 1.28 will be available shortly in Fedora 21, Debian/experimental, RHEL and CentOS 7, and elsewhere.


Fedora Activity Day – 1 Nov 2014 – theme Security

Hello,

    See -> https://fedoraproject.org/wiki/FAD_Pune_Security_1

On November 1st, 2014, we plan to host a Fedora Activity Day (FAD) focused on assessing the state of security in the Fedora distribution. The day will start with a brief introduction to Fedora security and progress towards collective security bug triage and other activities. If you are in Pune (India) or plan to be here on November 1st, please feel free to drop in and join the action. Note: we have limited capacity (~25 participants), so please do register on the wiki page above.

Not too long ago, the Fedora Security Team was formed with the sole intention of improving the state of security in the Fedora distribution. Its primary goals were to help triage security bugs and to spread awareness.

    See -> https://lists.fedoraproject.org/pipermail/security/2014-July/001948.html

But in light of the recent upheaval caused by the Heartbleed, Shellshock, and POODLE[1] flaws, it is only logical to brace ourselves and work harder to make Fedora _secure_ by default. Many distributions have made focused efforts towards this end for decades now,

    Ex -> http://www.openbsd.org/security.html

The idea is to increase the number of eyeballs looking at Fedora security, so that the flaws become shallow. And your poodle’s heart is saved from bleeding caused by the shocks that are still hidden in the future.

Hope to see you there. :)

[1] http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html


Latinoware 11th Edition Foz do Iguaçu | PR | Brasil – Day 3

On this third day, we can already say it was a complete success; a bit more tired, but with even more enthusiasm than on the previous days, we gave everything away!!

Day 2

Monitoring Servers with Nagios – Daniel Lara

ARM on Fedora – Marcelo Barbosa

Digital Inclusion with Fedora – Eduardo Lucas Sena

Bacula Server with Fedora + a success case – Daniel Lara

Special Mini Course – Free Forensics Academy (Academia Forense Livre) – João Eriberto Mota Filho – Ramilton Costa Gomes Junior – Gilberto Sudre – Sandro Melo

Day 3

Fedora QA – Wolnei Júnior

Fedora Beyond the Project: the Fedora Electronic Lab (FEL) Spin – Davi Souza

Spontaneous Talk –> Packaging – Rino Rondan

Spontaneous Talk –> Bacula – Daniel Lara

Spontaneous Talk –> ARM – Marcelo Barbosa

All day –> BOOTH!!!

And since everything was smooth sailing… Relaxing on the bus and at the event breakfast..

Afterwards, at the booth and all around it, everything was a party :)

A spontaneous mini talk.. for the whole community and the people at the event..

And now, heading for the gate, thanks to everyone for EVERYTHING!! Fedora shows once again that we are more than a community: we are brothers, we are a family!!

Also a huge hug to all the Red Hat people, who helped us with many things.. (markers, a monitor, logistics, etc., etc., and more etc.)

The best of the day, or rather of the week.. was Wolnei’s super hack; rumor has it that it is a package he had to test… Coming back to your room after an average of 45 degrees a day, only to find the room at 55 degrees.. was not very comforting..!!


Latinoware 11th Edition Foz do Iguaçu | PR | Brasil – Day 2

As at every event, the first day carries the adrenaline of the wait, of all that preparation you have been building up; suddenly the day arrives and ends, and the body thinks it is all over, but then night falls and the Fedoran marathon begins: many talks, many stories, anecdotes, meals, pleasant moments together… Like one big FAMILY..

Once again people arrived with voracity in full bloom, wanting and asking about everything; some timidly, others fearlessly, and some walked by glancing sideways, as if afraid of being infected by so much beauty… surely they were tempted by some dark software…

Once again the event proved to be very well organized; at least from my view on the inside as a Fedora representative, I did not see anyone complain either..

This time there are two photos of the day!! The attack of the Fedora girls!!!

In this one, the girls who wait.. waiting in the trenches to go to the front line..

And in this other one, they decide to go on the prowl!!


October 17, 2014

This week in rawhide, the October 17th edition

Hey look, another week, another this week in rawhide. Almost like clockwork. ;)

I mentioned last week that the rawhide kernels had moved on to 3.18 git snapshots and that I wasn’t seeing any problems. Well, I did run into some after that: suspend is completely broken. It’s unclear yet whether the kernel or systemd or both are to blame. I’m going to try and debug it some this weekend and get a bug report filed. Not a big deal, but kind of annoying if you want to suspend and resume.

There’s also a anoying grubby bug that dropped into f21/rawhide the other day: https://bugzilla.redhat.com/show_bug.cgi?id=1153410 This will result in a kernel entry that won’t boot right. It’s easy enough to work around, and expect a fixed rawhide grubby in tomorrow’s compose.

Gnome 3.14.1 landed in rawhide at roughly the same time it was built for f21 updates-testing. Hopefully it will get pushed through the freeze and be in the beta release. It’s been pretty smooth so far here in rawhide. Xfce also got a few fixes; in particular, the weather plugin’s upstream changed API, so weather updates stopped working. That’s fixed up and pushed out to rawhide and all stable releases.

Happy rawhiding!

requestAnimationFrame – testing it, plus another tutorial.
I tested this great function. It works well, and for readers I made a tutorial about how to use it.
See here if you want to use it.
Fedora Council, L10N Zanata, FUDCon LATAM, Taskotron, and Retrace improvements

Fedora is a big project, and it’s hard to keep up with everything that goes on. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for October 17th, 2014:

Introducing the Fedora Council

Last week, the Fedora Project Board unanimously approved its replacement, a new top-level leadership and governance body we’re calling the Fedora Council. Read more about it in John Rose’s announcement message, and our previous Fedora Magazine article about upcoming elections.

This didn’t happen overnight — Christoph Wickert, Toshio Kuratomi, Josh Boyer, and others have been talking about this and working on related proposals for the last couple of years, and Toshio and Haïkel Guémar led a great session at Flock — Fedora’s big annual planning conference — this August. We’ve been thinking about and discussing what to do ever since, and now it’s time to put the result into action!

Translation team switches to Zanata

Fedora’s L10N team — the L-10-N is short for localization, because there are 10 missing letters there — does an amazing job of translating our software to dozens of different languages. (If you’re a Fedora user who speaks a language other than English, this is a great and fun way to get involved, by the way — see the steps to join in the Fedora Localization Guide.)

All of this work is accomplished using some specialized tools. For a long time, Fedora has used Transifex, a project by Dimitris Glezos which actually grew out of Fedora. Unfortunately, recent versions of Transifex are not open source. As a project, we always prefer to work with open source tools whenever possible, and the L10N team started a project to migrate to a different and completely free and open source tool, Zanata.

Last week, all the translation teams for the different languages discussed and voted on whether to move ahead with this, and the result was 19 “Go” votes and none against. With the active contributor community overwhelmingly in favor, it’s an easy decision to go forward, and according to the plan, the new “stage 1” service should be live any day now.

FUDCon Managua 2014

This year’s FUDCon — that’s Fedora User and Developer Conference — in Latin America will in in Managua, Nicaragua next week. Organizer Neville Cross tipped off 5tFTW with a few particularly interesting notes:

New QA Automation framework goes live

As I’m sure everyone knows by now, the Fedora 21 cycle has been one of the longest ever. We did this on purpose, and one of the primary reasons was to give our Quality Assurance team time to work on tooling and infrastructure rather than just cycling through tests over and over. This has borne fruit, and our new QA automation framework Taskotron has gone live, replacing AutoQA for checks on package updates.

Right now, the effect on end users and developers is very small, but the change will enable many more important features in the near future, including user-submitted tests that run automatically. This will increasingly offload repetitive testing tasks so that human time can be focused where it’s most valuable, resulting in an even better Fedora going forward.

Upgraded Retrace Server includes CentOS collaboration

This is another infrastructure thing which sounds like it might be boring, but which also will pay off in a better, more bug-free Fedora. The Retrace/ABRT server is a debugging tool which generates useful information from automated crash reports. It has been upgraded with newer hardware, enabling a few changes which directly benefit Fedora developers and users.

First, if a package is updated and the same crash doesn’t occur for two weeks, those issues are automatically closed, reducing bug noise and overload. Second, these reports are now cross-referenced with those from CentOS 7, allowing us to collaborate on debugging and fixing problems. And third, it is, of course, much, much faster.



using ssh keys with screen

It always annoyed me that I couldn’t use my ssh key in a screen session. Every now and again I would try to work it out with Google and some trial and error. Eventually, with the help of a couple of good bits off the net, I worked out what I think is the easiest way to achieve it consistently.

Firstly the ssh config bits:

Add the following to your ~/.ssh/config file, creating it if you don’t already have one:

host *
  ControlMaster auto
  ControlPath ~/.ssh/master-%r@%h:%p

And create the ~/.ssh/rc file:

#!/bin/bash
if test "$SSH_AUTH_SOCK" ; then
    ln -sfv $SSH_AUTH_SOCK ~/.ssh/ssh_auth_sock
fi

And make sure they have the correct permissions for ssh:

chmod 600 ~/.ssh/config ~/.ssh/rc

Finally add the following to your ~/.screenrc file:

setenv SSH_AUTH_SOCK $HOME/.ssh/ssh_auth_sock
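
To sanity-check the setup, ssh in with agent forwarding enabled (ssh -A), start or reattach a screen session, and confirm the agent is reachable through the stable symlink. Assuming your agent has keys loaded, something like this should list them:

$ ls -l ~/.ssh/ssh_auth_sock
$ ssh-add -l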

I’m not sure it’s the best and most effective way but it’s nice and simple and to date it’s been working well for me, I’ve not had issues with it. Any suggestions for improvement feel free to comment.

Fedora conference coming next week for Latin America

The organizing team of Managua FUDCon 2014, led by the event organizer Neville Cross, is pleased to announce that the Fedora Users and Developers Conference Latin America (FUDCon LATAM) will start on Thursday, October 23.

This has been a great year of events for the Fedora Community with the second year of Flock and the Conference in Asia (FUDCon Beijing). We expect FUDCon Managua to be a productive event for the local and global community.

For the event, there is a varied and interesting list of proposed talks, and a large number of contributors will be present, not only from Latin America.

About the City

Managua is the capital of the Republic of Nicaragua. It is the only capital in the world that is located by a lake — in fact, one of the biggest freshwater lakes in the world. In 2012, Managua was the location of DebConf12, and the local community of free software users has been organizing many events every year.

The Venue

The University of Commercial Sciences (UCC) is one of the leading universities in Nicaragua and supports several activities of the local community:

  • Linux Tour – July 24, 2007
  • Software Freedom Day – September 19, 2007 (awarded by the UNESCO as the best event of Software Freedom that year)
  • Festival of Latin American Free Software Installation – April 26, 2008
  • Linux Tour – February 22, 2008
  • Festival of Latin American Free Software Installation – April 9, 2011

Fudcon Managua

Latinoware 11th Edition Foz do Iguaçu | PR | Brasil – Day 1

Freedom, Friends, Features, First: these were the words repeated constantly as people passed by; they echo and come back, people pick up their Fedora DVD and share a while with questions that come and go, other people’s experiences, dreams and hopes; a bit of everything is seen during the day. People calmly stroll through the hallways, classrooms, inner streets and halls, where the suffocating heat does not stop them, and hundreds of buses keep pulling up, coming and going; the nerd masses, the fledglings of tomorrow, people honing their skills, innate curiosity, a combative spirit. Organized free software, a unique passion, cooperativism, a lot of organization…

The number of people who attended was very large and varied: many students, people from out of town, and a great eagerness to learn.

The Fedora community, present as always: we set up the booth like never before, the best of them all, with lots of colors, variety, and smiling people ready to promote Fedora.

We are part of a great community: people from every region heading in a single direction –> FEDORA.

There was also a bit of free video gaming, plenty of shooting and no fooling around.. may the best player win..

And yes, PIDORA is also part of all this, so we had to run all over the place to get an HDMI cable; thanks to the Espiritu Libre people and Leo, who got us the charger and the monitor so we could show it off.

And yes, the mascots take the photo of the post.. behind all those masks there are human beings.. and inside that suit they carry their own air-conditioning set..


Next week Diwali festival in India
It’s the Diwali festival next week in India.

If you feel you are not getting replies to emails from someone in India, or no updates on Bugzilla from a package maintainer, just be patient. It’s Diwali time :)

If you don’t know about the festival, read about it at http://en.wikipedia.org/wiki/Diwali

Most offices have back-to-back holidays during this period.

I will be working till Monday 20th and then will be away for a week.
PHP version 5.4.34, 5.5.18 and 5.6.2

RPMs of PHP version 5.6.2 are available in the remi-php56 repository for Fedora and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.5.18 are available in the remi repository for Fedora and in the remi-php55 repository for Enterprise Linux.

RPMs of PHP version 5.4.34 are available in the remi repository for Enterprise Linux.

These versions are also available as Software Collections.

These versions fix various security bugs; updating is strongly recommended.

Version announcements:

The 5.4.33 release was the last planned release containing regular bugfixes. All subsequent releases will contain only security-relevant fixes, for the term of one year.

Installation: read the Repository configuration and choose your version and installation mode.

Replacement of default PHP by version 5.6 installation (simplest):

yum --enablerepo=remi-php56,remi update php\*

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum --enablerepo=remi install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum --enablerepo=remi-php55,remi update php\*

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

Replacement of default PHP by version 5.4 installation (enterprise only):

yum --enablerepo=remi update php\*

Parallel installation of version 5.4 as Software Collection (x86_64 only):

yum --enablerepo=remi install php54
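
Whichever mode you choose, you can verify which version is active afterwards; for the Software Collections, the interpreter is invoked through the collection (the scl invocation below assumes the collection registers with scl-utils, as these packages normally do):

php -v                        # base package replacement
scl enable php56 'php -v'     # parallel Software Collection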

And soon in the official updates:

To be noticed:

  • EL7 RPMs are built using RHEL-7.0
  • EL6 RPMs are built using RHEL-6.5 (the next build will use 6.6)
  • for PHP 5.5, the zip extension is now provided by the php-pecl-zip package.
  • a lot of new extensions are also available; see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php54 - php55)

October 16, 2014

Yesterday my first participation at LatinoWare started; it is located in Foz do Iguacu, near Itaipu. As always, the first day was for setting up camp and learning where you can go.
After we set up our stand, we started distributing Fedora media and gifts to the winners of a quiz and to people who asked questions in our talks.
Today we helped many people understand the Fedora Project, or did installations on their netbooks and notebooks. #fedora #latinoware #linux

LatinoWare 2014


Advanced FOSS: Second Community Contribution

Contributing to the Python Fabulous library

For my second community contribution for Advanced FOSS, I forked the super neat Fabulous library. I edited its setup.py in order to make it installable again: the version of fabulous currently available on PyPI has a bug which makes it impossible to install. The change is minor; I added ‘.rst’ to the end of a string. I submitted a pull request with the patch; however, as of this blog post it has yet to be merged in. If you want to install fabulous with pip, you can use this command:

pip install git+https://github.com/rossdylan/fabulous.git@patch-1/

Hopefully my change will be merged and pushed to pypi soon.

Advanced FOSS: Googler Visit

Career Fair Visit from Google

During the fall RIT career fair, my friend Russ, who works at Google, stopped by the Advanced FOSS class. He touched on his experience at Google, and how open source development factors into his job. The fact that he is working on open source at Google is awesome. I’m hoping that whoever I end up working for will allow me to continue contributing to open source projects. Russ also touched on how to apply to Google and a little of what to expect from the process.

FUDCon and Fedora on TV

On Thursday the 16th, there were two interviews about FUDCon on TV, on two different channels with morning variety talk shows. They were one hour apart from each other, so we had to run from one TV station to the other. Almost out of breath, we covered a block and a half.

The talks were about the University as a co-organizer of the convention, and its role in technology activities and those related to free software. Then came the turn of Fedora and FUDCon: people coming from different places, FUDCon as a traveling event but one of the most important in LATAM, the topics, web registration, and the fact that it is all free.

Valentin Basel is now in the spotlight, as his project was mentioned as free hardware, educational, and built from scratch 100% with Fedora.

Sadly, there is no web archive of the shows. Those channels only have archive for news.

On Monday we will have another interview on TV. We hope that one of the people who arrived early for FUDCon will step forward to face the camera. There is another TV interview pending confirmation.

Print media has been harder; there will be one piece before the event and one covering the event. The University’s Public Relations office has been a valuable help in making all these press contacts.

Disable SSLv3 in Dovecot

Disabling SSLv3 in Dovecot is nice and straightforward.

In the /etc/dovecot/conf.d/10-ssl.conf file edit the ssl_cipher_list line to look as below (or adjust to suit your specific requirements):

ssl_cipher_list = ALL:!ADH:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL:+HIGH:+MEDIUM

To test the cipher list and ensure it will work, you can run the following command before you restart Dovecot; the output should look something like below:

$ openssl ciphers -v 'ALL:!ADH:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL:+HIGH:+MEDIUM'
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA384
ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA384
DHE-DSS-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=DSS  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA256
DHE-DSS-AES256-SHA256   TLSv1.2 Kx=DH       Au=DSS  Enc=AES(256)  Mac=SHA256
ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDH-RSA-AES256-SHA384  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(256)  Mac=SHA384
ECDH-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256)  Mac=SHA384
AES256-GCM-SHA384       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(256) Mac=AEAD
AES256-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA256
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA256
DHE-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=DSS  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA256
DHE-DSS-AES128-SHA256   TLSv1.2 Kx=DH       Au=DSS  Enc=AES(128)  Mac=SHA256
ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDH-RSA-AES128-SHA256  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(128)  Mac=SHA256
ECDH-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128)  Mac=SHA256
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD
AES128-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA256

Finally, to test it from the client side, you can run the following command to ensure SSLv3 is not enabled. If it’s configured as expected, the negotiation should fail and you’ll be returned straight to the command prompt:

openssl s_client -connect mail.example.com:993 -ssl3
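
Once the SSLv3 handshake correctly fails, restart Dovecot and confirm that a modern TLS connection still succeeds; the commands below assume a systemd-based host, so adjust for your init system:

systemctl restart dovecot
openssl s_client -connect mail.example.com:993 -tls1
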
Observing X11 Protocol Differences

I was trying to understand some oddities going with an X11 legacy application showing bad artifacts in one environment and working flawlessly in another environment. Since wireshark does not have any support for diffing two pcaps, I came up with the following steps:

  • Dump both working.pcap and notworking.pcap into text files with full headers:

~/Devel/wireshark/tshark -r working.pcap -T text -V -Y "x11" > working.txt
~/Devel/wireshark/tshark -r notworking.pcap -T text -V -Y "x11" > notworking.txt
  • Prune the two text files with a script like the following:

def clean(source, dest):
    # Keep only the X11 "Property" lines and their indented detail lines.
    f = open(source, "r")
    x11_state = False
    output = []
    for line in f.readlines():
        if x11_state:
            # Indented lines belong to the X11 block matched below.
            if line.startswith(' '):
                output.append(line)
            else:
                x11_state = False
        else:
            if line.startswith('X11') and \
                    "Property" in line:
                output.append(line)
                x11_state = True
            else:
                continue
    o = open(dest, "w")
    for i in output:
        o.write(i)

if __name__ == '__main__':
    clean("working.txt", "working-trimmed.txt")
    clean("notworking.txt", "notworking-trimmed.txt")
  • At this point we can easily diff the outputs obtained above via vimdiff working-trimmed.txt notworking-trimmed.txt.
SSL issues again
Here is a blog post on the issue called POODLE, which was discovered by Google a few days ago.
It is not Heartbleed, but it could have a real impact again.

How to install Fedora 21 on APM Mustang with just HDD?

After my previous posts I got an email asking “how to install Fedora 21 on Mustang without DVD and network?”, so I decided to describe a hard-drive-based method.

Requirements:

  • APM Mustang with UEFI
  • hard drive
  • another computer to prepare hdd
  • Fedora 21 installation ISO
  • serial cable connected to Mustang and other computer
  • Ethernet cable to get network on Mustang
  • VNC viewer to control installation (not needed but recommended)

The first step is preparing the hard drive. Create a 100MB GPT partition of type “ef00” (EFI System), format it as FAT, and copy the “/EFI” and “/images” directories from the DVD. Edit the “/EFI/BOOT/grub.cfg” file and change “Fedora-S-21_A-aarch64” to “Fedora-S-21_A”, because ext4 labels are shorter than ISO9660 ones.

Then create another partition, 2GB in size. I used ext4 (other filesystems may work too). Format it and label it ‘Fedora-S-21_A’ using e2label or another tool. Copy the contents of the DVD image onto it.
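
For reference, the two partitions could be prepared on the helper computer roughly like this; the /dev/sdX device name and the use of sgdisk are my assumptions, so adapt them to your tooling:

sgdisk -n 1:0:+100M -t 1:ef00 /dev/sdX    # EFI System partition
sgdisk -n 2:0:+2G -t 2:8300 /dev/sdX      # partition for the DVD contents
mkfs.vfat /dev/sdX1
mkfs.ext4 -L Fedora-S-21_A /dev/sdX2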

Now put the hard drive into the Mustang and power it on. From the UEFI menu select the shell and run “FS0:\EFI\BOOT\BOOTAA64.efi” to get into GRUB. Press Enter to begin installation.

As before, I went the VNC way, selected the hard drive and automatic partitioning. After a few minutes the system was installed and ready to reboot.

This time there were no issues with adding the bootloader to the UEFI boot order. Maybe that is because I left the previous entries there?

And the layout of the partitions was not changed:

Fedora release 21 (Twenty One)
Kernel 3.17.0-301.fc21.aarch64 on an aarch64 (ttyS0)

localhost login: hrw
[hrw@localhost ~]$ lsblk 
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                       8:0    0 298.1G  0 disk 
├─sda1                    8:1    0   127M  0 part /boot/efi
├─sda2                    8:2    0     2G  0 part 
├─sda3                    8:3    0   500M  0 part /boot
└─sda4                    8:4    0 294.1G  0 part 
  ├─fedora--server-swap 253:0    0     8G  0 lvm  [SWAP]
  ├─fedora--server-root 253:1    0    50G  0 lvm  /
  └─fedora--server-home 253:2    0   236G  0 lvm  /home

So if you have an APM Mustang with one hdd and no external network connection on site, then this looks like the easiest way. Sure, it requires some work, but it allows you to do an installation without any extra hardware connected to the Mustang.


All rights reserved © Marcin Juszkiewicz
How to install Fedora 21 on APM Mustang with just HDD? was originally posted on Marcin Juszkiewicz website

getting started with sinatra and postgres web service
I’m in the process of resurrecting some of my past projects. One of them is a web service that provided details for a given zip code. For example, you could lookup the zip code ‘15213’ and the web service would spit out the geographical center for the zip code (i.e., the latitude and longitude coordinates), as...
Bird Brained Idea Number 2
Condor’s Bird Brained Ideas
Number 2: Command Line Utilities with Persistent State Information
by Joseph Pesco

I’ll be at a Meetup early this November with MongoDB on the agenda. NoSQL has been on my radar for over a year now. If SQL is the past, then NoSQL must be the future. Computer Science shares with mathematics an ageless and timeless characteristic though: once discovered, a technology is recyclable if found to fit the bill despite poor public relations. I’ve been naming SQL utilities “rockets” since learning of IBM’s hand in the American space program that brought mankind manned missions to the moon. I have given up on seeing manned exploration of the planet Mars during my lifetime.

Description:
This shell assistant comes in the form of a command line utility called `nav', short for navigator. It keeps a list of directories for the user that persists between instances of the shell. You can add directories to the list, remove them, and change directory to a directory in the list. If you change to a directory in the list and exit the shell, or instantiate another shell elsewhere, you will be taken to that directory by a feature called “Working Path”, which saves the last directory changed to in a column in a separate table. Persistence is provided by Sqlite3. The list is kept in the environment by each shell and is synchronized by a signal whenever the database is manipulated by any instantiation of the shell.

For this demonstration each feature has a function all to itself: nav_add(), nav_rmdir(), nav_cd(). I don’t know if this was wise, but I wanted to make the algorithm for each activity transparent to the casual reader, in the hope that he or she would actually attempt to use and modify my offering here.

Instructions: Part I
Save the SQL below to a file called shell_assistant.sql and use the sql_rocket.sh utility posted in the previous book report, or the sqlite3 utility, to create a database file. Name the database file rocket.db!

--- File    : shell_assistant.sql
--- Purpose : SQL commands to create tables.
--- Date    : 10/15/14
--- Author  : Joseph Pesco
--- License : GPLv3+
--- Gufi    : a17d45af-592e-4bd5-bd26-2a662f5ebcca

CREATE TABLE assistant (

	process_id 	INTEGER PRIMARY KEY NOT NULL,
	process		INTEGER 
);

CREATE TABLE persistant_enviroment (

	enviroment_id	INTEGER PRIMARY KEY NOT NULL,
	working_path	TEXT
);

CREATE TABLE target (

	target_id INTEGER PRIMARY KEY NOT NULL,
	active_target TEXT
);

Instructions: Part II
Copy the shell script below to a file named bash_sandbox and change the value of the variable P in the script to the full path of the database file created earlier. I’m very sorry to have left a hard-coded second definition of P in the script; please make sure to point that P to the same place as the first variable P. In the future I’ll attempt to get this on GitHub before my deadline has come and gone.

This is very important: please source bash_sandbox from near the bottom of your .bashrc shell configuration file!

#!/bin/bash

# File    : bash_sandbox
# Purpose : This script demonstrates a command line utility with 
#           a persistent state provided by Sqlite3
# Date    : 10/15/14 1:48 am UTC 
# Author  : Joseph Pesco
# License : GPLv3+
# Gufi    : 96e9a6a7-08a2-450e-8c2b-3d9b9d706cf6

# This does not pretend to be production script code! The purpose is to demonstrate 
# and to be minimally legible.     

# Instructions: 1. Use either the sql_rocket.sh script presented in an earlier 
#               post or the sqlite3 utility to generate the database
#               file.
#               2. Change the variable P to point to the database file.    
#               3. Source this file at the bottom of your .bashrc file.
 
echo "shell assistant: $PWD" "bash_sandbox"

trap shell_update SIGUSR1
trap shell_exit  EXIT

# Change this to the directory where your Sqlite3 database will be.
P="/mnt/lexar/Laboratory/Bird Brain/BB3"

echo $$
echo "INSERT INTO assistant  ( process ) VALUES ( $$ ); " |  sqlite3 "$P/rocket.db" 

shell_exit () {
       
        # We want to remove our PID from the list of PIDs signaled when the database changes
        # before the shell terminates.  This function is called by the trap on EXIT.
        # I'm sure more needs to be done to make this robust.
 
        # I am very sorry about this hard coded path.  I did not notice it until adding 
        # comments only minutes before posting this and don't want to remove it and possibly 
        # break the script.  
	P="/mnt/lexar/Laboratory/Bird Brain/BB3"
	echo "DELETE FROM assistant WHERE  process = $$;"   | sqlite3 "$P/rocket.db"

	# read -p"EXIT Signal recieved"

}

shell_update () {

        # This function is called when a shell is instantiated, or when the state of the database changes and 
        # data in memory needs to be updated to reflect the changes.  
	# echo "update target array"
	# unset target
	IFS=$'\n'
	id=( ` echo "SELECT target_id FROM target;" | sqlite3 "$P/rocket.db" ` )
	target=( ` echo "SELECT active_target FROM target;"  | sqlite3 "$P/rocket.db" ` ) 
	IFS=$' \t\n'
	echo "INSERT OR IGNORE INTO persistant_enviroment (enviroment_id, working_path ) VALUES ( 0, '${PWD}');" | sqlite3 "$P/rocket.db" 
	WP=`echo "SELECT working_path FROM persistant_enviroment WHERE enviroment_id = 0 ;"  | sqlite3 "$P/rocket.db"`
	# echo $WP
	# read -ppause
}

nav_cd () {
        # We will display a list of directories and request the user pick one.  We will cd to this directory
        # and also make it the working path.  If we open a new shell, this script will cd to that working path.

	local x=0
	for dir in "${target[@]}"; do 

		echo $x $dir
		let x++
	done
	read -p"Select Destination: " index
	echo "UPDATE persistant_enviroment SET working_path = '${target[${index}]}' WHERE enviroment_id =0 ;"  | sqlite3 "$P/rocket.db" 
	cd "${target[${index}]}" 
}

nav_rmdir () {

	local x=0
	for dir in "${target[@]}"; do 

		echo $x $dir
		let x++
	done
	read -p"Select Target to Delete: " index
	delete_id=${id[${index}]} 
	echo "DELETE FROM  target WHERE target_id = $delete_id;" | sqlite3 "$P/rocket.db"
	IFS=$' '
	# If we want to exclude ourselves from the signal, add `grep -v "$$"' 
	PROCESSES=`echo "SELECT process FROM assistant;" | sqlite3 "$P/rocket.db"  `
	echo $PROCESSES | xargs -n 1  kill -s SIGUSR1 
	# Alternate implementation: 
	# kill -s SIGUSR1 $PROCESSES
	IFS=$' \t\n'  
}


nav_add () {

	echo "INSERT INTO target ( active_target ) VALUES ( '$PWD' );" | sqlite3 "$P/rocket.db" 
	IFS=$' '
	# grep -v "$$"
	PROCESSES=`echo "SELECT process FROM assistant;" | sqlite3 "$P/rocket.db"  `
	echo $PROCESSES | xargs -n 1  kill -s SIGUSR1 
	# kill -s SIGUSR1 $PROCESSES
	IFS=$' \t\n'  
} 

shell_update

if [ -d "$WP" ]; then
	cd "$WP" 
else
	echo "ERROR $WP does not exist"
fi
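
Once sourced, a quick usage sketch (the directory names here are made up for illustration):

cd ~/projects/demo
nav_add      # add the current directory to the persistent list
cd /tmp
nav_cd       # prints the numbered list and prompts: Select Destination:
nav_rmdir    # prints the list and prompts for an entry to delete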

There are several book reports on Planet Fedora relevant to our community originated by Condor.
In the future these books should also appear:

  • If A Then B, By Michael Shenefelt & Heidi White, Columbia University Press.
    Note: The ink was still wet on the copy received shortly after placing the title on hold at the library. The author’s talk was a fantastic presentation.
  • Information and the Modern Corporation, By James W. Cortada, MIT Press, Copyright 2011, This short readable text is very contemporary.
  • Voice Communication with Computers: Conversational Systems, by C. Schmandt, Van Nostrand Reinhold, 1993. There is no ink for this book. This is listed as an online text for MAS.632 Conversational Computer Systems, a graduate class on MIT OpenCourseware
Sunset in Central Park.

I don’t think I’m very old, but I seem to be planning a sunset career! Sunset in Central Park

All content is copyright 2014 by Joseph Pesco. All rights are reserved. For more information on the GNU General Public License visit: GNU General Public License

Project: HelliJudge

About two months ago, Hamed Saleh and I started a project to write a judge system named HelliJudge:

The system can compile and execute codes, and test them with pre-constructed data. Submitted code may be run with restrictions, including time limit, memory limit, security restriction and so on. The output of the code will be captured by the system, and compared with the standard output. The system will then return the result. (from Wikipedia)

In this project I faced many problems, and learned many things, like co-processes, PAM, some NAT solutions, jailing (and some jailbreak techniques), many bash techniques, git, and a few other things.

Our system is based on the principle of least privilege. In simple words, we compile the code in a jailed environment with the minimum libraries needed by the compiler, then run it in the same environment with hard limits on memory and the number of threads, and with no write access — except for stdout and stderr. FYI, monitoring the total time, memory, and return code of the user binary is what time does (note that GNU time is different from the bash builtin time; the GNU one can only be accessed via its absolute path, /usr/bin/time).

The jailed area, which is available on its git repository, contains only gcc, gcc-c++, cpp, some needed libraries, bash and their requirements. Compile scripts (like this) are also included in the jail (actually this is the reason we need bash in the jailed zone).

The jail system used to be based on a plain chroot, but due to some issues we switched to pam_chroot (which is in the pam package in Fedora). This module makes it easy to set a root directory for users and groups by editing /etc/security/chroot.conf. This is an example of the file:

# /etc/security/chroot.conf
# format:
# username_regex    chroot_dir
#matthew        /home

judge        /mnt/jail

PAM is also used as the limit system! We used the pam_limits module to limit the maximum thread count (in order to prevent fork bombs and zombie process attacks) and to limit the maximum memory, by editing /etc/security/limits.conf. Here is an example of the file:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

judge            hard    core            0
judge            hard    nproc           1
judge            hard    as              524288

# End of file

We limited total time using timeout command.
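
Putting the pieces together, a single run of a submitted binary ends up looking roughly like the command below (run as root); the 2-second limit and the I/O redirections are illustrative assumptions. timeout exits with code 124 when the limit fires, while pam_limits and pam_chroot are applied through su’s PAM stack:

# timeout 2s su judge --session-command /a.out < input.txt > output.txt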

In order to apply the limitations described above, we made a “judge” user, added it to chroot.conf and limits.conf (as you can see in the examples above; yes, they are really being used on our server), and then added these lines to /etc/pam.d/su:

session     required    pam_limits.so
session     required    pam_chroot.so

Note: the order of lines in PAM configuration files is important. pam_chroot.so should come at the end.

Now if you use “su” to run a command as the user “judge”, these limits will be applied to the session. For example, the following command will run a.out with these limits (note that a.out is in the root folder of the jailed area):

# su judge --session-command /a.out

Currently just HelliCode (see the update) uses our system; it is actually another project of ours together with Mohammad Reza Maleki :-) We would be glad if you tested our system’s security at HelliCode.

P.S: As there isn’t much documentation available for co-processes, I’ll mention it asap.

UPDATE: Yay! A new server is going to use our judge! Algorithms.ir is a computer algorithm education and contest website by Mr. Andjedani in the Algorithms and Problem Solving Laboratory of the Department of Computer Science at Sharif University of Technology.

October 15, 2014

Great idea with Virtual Machines.
The team (see the osboxes website) provides two .vdi files for VirtualBox to help any user test it. The Fedora team has here a way to promote Fedora, so this can be a great idea.
I tested it with my VirtualBox and it’s working well. I’m not a Fedora Ambassador,
so if somebody wants to do it, that would be great.
This great idea can help users to test Linux. I saw it on LinuxLite.
 
Small computers will be big at FUDCon

There is no way to get experimental devices in Nicaragua. Just for FUDCon, the local team pitched in to get 5 Raspberry Pi B+ boards and 5 Arduino UNO R3 boards. That will not be all: there are ultrasonic distance sensors, temperature sensors, infrared movement sensors, and a light sensor, among other cool stuff.

We hope that those get through customs before Fedora collaborators start arriving in Nicaragua. Most likely there will be some customs duties to pay, but that will be a small thing next to the success this will bring to the event. Other parts and tools have been coordinated with other collaborators coming to Nicaragua.

Combined with breadboards, buzzers, and RGB LEDs, the GPIO of the Raspberry Pi will have plenty to do using Pidora.

We also expect Arduino to be a success. There has been a lot of talk about Arduinos in Nicaragua, even some demonstrations, but never a hands-on hacking session. This will all be running Fedora. The link between Fedora and experimental electronics will be everlasting.

Best of all, all the components and sensors can be shared among Icaro, Arduino, and Raspberry Pi. Small things will be the greatest.

OpenStack Instance HA Proposal

In a perfect world, every workload that runs on OpenStack would be a cloud native application that is horizontally scalable and fault tolerant to anything that may cause a VM to go down.  However, the reality is quite different.  We continue to see a high demand for support of traditional workloads running on top of OpenStack and the HA expectations that come with them.

Traditional applications run on top of OpenStack just fine for the most part.  Some applications come up with availability requirements that a typical OpenStack deployment will not provide automatically.  If a hypervisor goes down, there is nothing in place that tries to rescue VMs that were running there.  There are some features in place that allow manual rescue, but it requires manual intervention from a cloud operator or an external orchestration tool.

This proposal discusses what it would take to provide automated detection of a failed hypervisor and the recovery of the VMs that were running there.  There are some differences to the solution based on what hypervisor you’re using.  I’m primarily concerned with libvirt/KVM, so I assume that for the rest of this post.  Except where libvirt is specifically mentioned, I think everything applies just as well to the use of the xenserver driver.

This topic is raised on a regular basis in the OpenStack community.  There has been pushback against putting this functionality directly in OpenStack.  Regardless of what components are used, I think we need to provide an answer to the question of how this problem should be approached.  I think this is quite achievable today using existing software.

Scope

This proposal is specific to recovery from infrastructure failures.  There are other types of failures that can affect application availability.  The guest operating system or the application itself could fail.  Recovery from these types of failures is primarily left up to the application developer and/or deployer.

It’s worth noting that the libvirt/KVM driver in OpenStack does contain one feature related to guest operating system failure.  The libvirt-watchdog blueprint was implemented in the Icehouse release of Nova.  This feature allows you to set the hw_watchdog_action property on either the image or flavor.  Valid values include poweroff, reset, pause, and none.  When this is enabled, libvirt will enable the i6300esb watchdog device for the guest and will perform the requested action if the watchdog is triggered.  This may be a helpful component of your strategy for recovery from guest failures.

Architecture

A solution to this problem requires a few key components:

  1. Monitoring – A system to detect that a hypervisor has failed.
  2. Fencing – A system to fence failed compute nodes.
  3. Recovery – A system to orchestrate the rescue of VMs from the failed hypervisor.

Monitoring

There are two main requirements for the monitoring component of this solution.

  1. Detect that a host has failed.
  2. Trigger an automatic response to the failure (Fencing and Recovery).

It’s often suggested that the solution for this problem should be a part of OpenStack.  Many people have suggested that all of this functionality should be built into Nova.  The problem with putting it in Nova is that it assumes that Nova has proper visibility into the health of the infrastructure that Nova itself is running on.  There is a servicegroup API that does very basic group membership.  In particular, it keeps track of active compute nodes.  However, at best this can only tell you that the nova-compute service is not currently checking in.  There are several potential causes for this that would still leave the guest VMs running just fine.  Getting proper infrastructure visibility into Nova is really a layering violation.  Regardless, it would be a significant scope increase for Nova, and I really don’t expect the Nova team to agree to it.

It has also been proposed that this functionality be added to Heat.  The most fundamental problem with that is that a cloud user should not be required to use Heat to get their VM restarted if something fails.  There have been other proposals to use other (potentially new) OpenStack components for this.  I don’t like that for many of the same reasons I don’t think it should be in Nova.  I think it’s a job for the infrastructure supporting the OpenStack deployment, not OpenStack itself.

Instead of trying to figure out which OpenStack component to put it in, I think we should consider this a feature provided by the infrastructure supporting an OpenStack deployment.  Many OpenStack deployments already use Pacemaker to provide HA for portions of the deployment.  Historically, there have been scaling limits in the cluster stack that made Pacemaker not an option for use with compute nodes since there’s far too many of them.  This limitation is actually in Corosync and not Pacemaker itself.  More recently, Pacemaker has added a new feature called pacemaker_remote, which allows a host to be a part of a Pacemaker cluster, without having to be a part of a Corosync cluster.  It seems like this may be a suitable solution for OpenStack compute nodes.
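
As a rough sketch of what that looks like on a compute node (the package and service names here are my assumption of the Fedora/RHEL packaging, not something prescribed by this proposal):

# Run the lightweight pacemaker_remoted daemon on the compute node
# instead of the full corosync/pacemaker stack
sudo yum install -y pacemaker-remote resource-agents
# The cluster's shared key must be copied to /etc/pacemaker/authkey first
sudo systemctl enable pacemaker_remote
sudo systemctl start pacemaker_remote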

Many OpenStack deployments may already be using a monitoring solution like Nagios for their compute nodes.  That seems reasonable, as well.

Fencing

To recap, fencing is an operation that completely isolates a failed node.  It could be IPMI based where it ensures that the failed node is powered off, for example.  Fencing is important for several reasons.  There are many ways a node can fail, and we must be sure that the node is completely gone before starting the same VM somewhere else.  We don’t want the same VM running twice.  That is certainly not what a user expects.  Worse, since an OpenStack deployment doing automatic evacuation is probably using shared storage, running the same VM twice can result in data corruption, as two VMs will be trying to use the same disks.  Another problem would be having the same IPs on the network twice.

A huge benefit of using Pacemaker for this is that it has built-in integration with fencing, since it’s a key component of any proper HA solution.  If you went with Nagios, fencing integration may be left up to you to figure out.

Recovery

Once a failure has been detected and the compute node has been fenced, the evacuation needs to be triggered.  To recap, evacuation is restarting an instance that was running on a failed host by moving it to another host.  Nova provides an API call to evacuate a single instance.  For this to work properly, instance disks should be on shared storage.  Alternatively, they could all be booted from Cinder volumes.  Interestingly, the evacuate API will still run even without either of these things.  The result is just a new VM from the same base image but without any data from the old one.  The only benefit then is that you get a VM back up and running under the same instance UUID.

A common use case with evacuation is “evacuate all instances from a given host”.  Since this is common enough, it was scripted as a feature in the novaclient library.  So, the monitoring tool can trigger this feature provided by novaclient.
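
For reference, the novaclient commands look roughly like this (a sketch with placeholder host names; check nova help for the exact flags in your version):

# Rebuild a single instance on a new host, reusing its disks on shared storage
nova evacuate --on-shared-storage INSTANCE_UUID new-compute-host
# Evacuate every instance that was running on the failed host
nova host-evacuate --on-shared-storage failed-compute-host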

If you want this functionality for all VMs in your OpenStack deployment, then we’re in good shape.  Many people have made the additional request that users should be able to request this behavior on a per-instance basis.  This does indeed seem reasonable, but poses an additional question.  How should we let a user indicate to the OpenStack deployment that it would like its instance automatically recovered?

The typical knobs used are image properties and flavor extra-specs.  That would certainly work, but it doesn’t seem quite flexible enough to me.  I don’t think a user should have to create a new image to mark it as “keep this running”.  Flavor extra-specs are fine if you want this for all VMs of a particular flavor or class of flavors.  In either case, the novaclient “evacuate a host” feature would have to be updated to optionally support it.

Another potential solution to this is by using a special tag that would be specified by the user.  There is a proposal up for review right now to provide a simple tagging API for instances in Nova.  For this discussion, let’s say the tag would be automatic-recovery.  We could also update the novaclient feature we’re using with support for “evacuate all instances on this host that have a given tag”.  The monitoring tool would trigger this feature and ask novaclient to evacuate a host of all VMs that were tagged with automatic-recovery.

Conclusions and Next Steps

Instance HA is clearly something that many deployments would like to provide.  I believe that this could be put together for a deployment today using existing software, Pacemaker in particular.  A next step here is to provide detailed information on how to set this up and also do some testing.

I expect that some people might say, “but I’m already using system Foo (Nagios or whatever) for monitoring my compute nodes”.  You could go this route, as well.  I’m not sure about fencing integration with something like Nagios.  If you skip the use of fencing in this solution, you get to keep the pieces when it breaks.  Aside from that, your monitoring system could trigger the evacuation functionality of novaclient just like Pacemaker would.

Some really nice future development around this would be integration into an OpenStack management UI.  I’d like to have a dashboard of my deployment that shows me any failures that have occurred and what responses have been triggered.  This should be possible since pcsd offers a REST API (WIP) that could export this information.

Lastly, it’s worth thinking about this problem specifically in the context of TripleO.  If you’re deploying OpenStack with OpenStack, should the solution be different?  In that world, all of your baremetal nodes are OpenStack resources via Ironic.  Ceilometer could be used to monitor the status of those resources.  At that point, OpenStack itself does have enough information about the supporting infrastructure to perform this functionality.  Then again, instead of trying to reinvent all of this in OpenStack, we could just use the more general Pacemaker based solution there, as well.


Fedora Council elections coming soon!

Hello everyone! Very shortly, the Fedora Council will replace the Fedora Project Board as Fedora’s top-level leadership and governance body, with the particular aim of having more engaged and effective whole-project coordination and planning.

A friend recently reminded me that the word “governance” is a great way to make everyone’s eyes glaze over, and that “leadership” can be kind of vague and empty-sounding. I can’t promise that there won’t be some boring stuff (“With great power, comes… administration!”), but the governance part means an opportunity to help decide where Fedora’s project-level resources are used, and the leadership part means that we’ll work with the community to identify specific important objectives — and then empower community leaders to achieve each one.

The council structure includes meritocratic representative positions appointed by long-standing Fedora bodies. The Fedora Engineering Steering Committee (FESCo) and Fedora Ambassadors Steering Committee (FAmSCo) are working on selecting those right now.

Additionally, after those appointments are made, we will hold elections for two “at-large” representative seats. Any Fedora contributor is eligible to run, and all are eligible to vote. Because the new Council will be consensus-based rather than each seat being one vote out of ten, each of these positions carries a lot of weight. Fedora is an amazing collaborative community, and I’d like to ask all active members to think about what you might be able to bring to such a role, and about who you want to represent you.

(As we get closer, we’ll post a full election schedule and more details. No action needs to be taken yet, although if you’re involved in either the engineering or outreach aspects of the project, do please help contribute to FESCo and FAmSCo’s selection of those representatives.)


 


Registration for FUDCon Latam now open

We are a little over a week away from FUDCon Managua; you can now register and vote for your favorite talk at http://fudconlatam.org/

More CVE-2014-3566 information on Red Hat’s Security Blog

I mentioned earlier that I’d update on this, and here we go. Our friends over in the Red Hat security team have posted POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566), an article explaining the vulnerability in (not terribly technical) depth. If you’re curious (or worried!), check it out.

If you’re less curious, you may still want to skim Fedora Magazine’s earlier article. For desktop users, web browsing when you’re on an untrusted network is going to be the main risk, and adding this Mozilla-official no-restart-required extension to Firefox is a quick fix.

LinuxCon/KVMForum/CloudOpen Eu 2014

While the Linux Foundation’s colocated events (LinuxCon/KVMForum/CloudOpen, Plumbers and a bunch of others) are still in progress (Düsseldorf, Germany), I thought I’d quickly write a note here.

Some slides and demo notes on managing snapshots/disk image chains with libvirt/QEMU. And, some additional examples with a bit of commentary. (Thanks to Eric Blake, of libvirt project, for reviewing some of the details there.)


Fedora Weekend Greece – 18-19/Oct/2014

This weekend the Greek Fedora community is organizing a two-day FAD called Fedora Weekend, which aims to bring Fedora contributors and users together to learn all about the latest developments in the project and to collaborate on contributing to certain areas.

The event will be hosted at the Athens Hackerspace (hsgr).

I’ll be running two workshops; the first one being about Fedora Remixes and the second one about Fedora l10n. Throughout the weekend we’ll also be hosting a packaging and a localization hackfest to actively contribute to the project.

See you there! :)

POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566)

Red Hat Product Security has been made aware of a vulnerability in the SSL 3.0 protocol, which has been assigned CVE-2014-3566. All implementations of SSL 3.0 are affected. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack.

To mitigate this vulnerability, it is recommended that you explicitly disable SSL 3.0 in favor of TLS 1.1 or later in all affected packages.

A brief history

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over networks. The SSL protocol was originally developed by Netscape. Version 1.0 was never publicly released; version 2.0 was released in February 1995 but contained a number of security flaws which ultimately led to the design of SSL 3.0. Over the years, several flaws were found in the design of SSL 3.0 as well. This ultimately led to the development and widespread use of the TLS protocol.

Most TLS implementations remain backward compatible with SSL 3.0 to incorporate legacy systems and provide a smoother user experience. Many SSL clients implement a protocol downgrade “dance” to work around the server side interoperability issues. Once the connection is downgraded to SSL 3.0, RC4 or a block cipher with CBC mode is used; this is where the problem starts!

What is POODLE?

The POODLE vulnerability has two aspects. The first aspect is a weakness in the SSL 3.0 protocol, a padding oracle. An attacker can exploit this vulnerability to recover small amounts of plaintext from an encrypted SSL 3.0 connection, by issuing crafted HTTPS requests created by client-side Javascript code, for example. Multiple HTTPS requests are required for each recovered plaintext byte, and the vulnerability allows attackers to confirm if a particular byte was guessed correctly. This vulnerability is inherent to SSL 3.0 and unavoidable in this protocol version. The fix is to upgrade to newer versions, up to TLS 1.2 if possible.

Normally, a client and a server automatically negotiate the most recent supported protocol version of SSL/TLS. The second aspect of the POODLE vulnerability concerns this negotiation mechanism. For the protocol negotiation mechanism to work, servers must gracefully deal with a more recent protocol version offered by clients. (The connection would just use the older, server-supported version in such a scenario, not benefiting from future protocol enhancements.) However, when newer TLS versions were deployed, it was discovered that some servers just terminated the connection at the TCP layer or responded with a fatal handshake error, preventing a secure connection from being established. Clearly, this server behavior is a violation of the TLS protocol, but there were concerns that this behavior would make it impossible to deploy upgraded clients and widespread interoperability failures were feared. Consequently, browsers first try a recent TLS version, and if that fails, they attempt again with older protocol versions, until they end up at SSL 3.0, which suffers from the padding-related vulnerability described above. This behavior is sometimes called the compatibility dance. It is not part of TLS implementations such as OpenSSL, NSS, or GNUTLS; it is implemented by application code in client applications such as Firefox and Thunderbird.

Both aspects of POODLE require a man-in-the-middle attack at the network layer. The first aspect of this flaw, the SSL 3.0 vulnerability, requires that an attacker can observe the network traffic between a client and a server and somehow trigger crafted network traffic from the client. This does not strictly require active manipulation of the network transmission; passive eavesdropping is sufficient. However, the second aspect, the forced protocol downgrade, requires active manipulation of network traffic. As described in the POODLE paper, both aspects require the attacker to be able to observe and manipulate network traffic while it is in transit.


How are modern browsers affected by the POODLE security flaw?

Browsers are particularly vulnerable because session cookies are short and an ideal target for plain text recovery, and the way HTTPS works allows an attacker to generate many guesses quickly (either through Javascript or by downloading images). Browsers are also most likely to implement the compatibility fallback.
By default, Firefox supports SSL 3.0, and performs the compatibility fallback as described above. SSL 3.0 support can be switched off, but the compatibility fallback cannot be configured separately.

Is this issue fixed?

The first aspect of POODLE, the SSL 3.0 protocol vulnerability, has already been fixed through iterative protocol improvements, leading to the current TLS version, 1.2. It is simply not possible to address this in the context of the SSL 3.0 protocol; a protocol upgrade to one of the successors is needed. Note that TLS versions before 1.1 had similar padding-related vulnerabilities, which is why we recommend switching to TLS 1.1, at least. (SSL and TLS are still quite similar as protocols; the name change has non-technical reasons.)

The second aspect, caused by browsers which implement the compatibility fallback in an insecure way, has yet to be addressed. Strictly speaking, this is a security vulnerability in browsers due to the way they misuse the TLS protocol. One way to fix this issue would be to remove the compatibility dance, focusing instead on making servers compatible with clients implementing the most recent TLS implementation (as explained, the protocol supports a version negotiation mechanism, but some servers refuse to implement it).

However, there is an industry-wide effort under way to enable browsers to downgrade in a secure fashion, using a new Signaling Cipher Suite Value (SCSV). This will require updates in browsers (such as Firefox) and TLS libraries (such as OpenSSL, NSS and GNUTLS). However, we do not envision changes in TLS client applications which currently do not implement the fallback logic, and neither in TLS server applications as long as they use one of the system TLS libraries. TLS-aware packet filters, firewalls, load balancers, and other infrastructure may need upgrades as well.

Is there going to be another SSL 3.0 issue in the near future? Is there a long term solution?

Disabling SSL 3.0 will obviously prevent exposure to future SSL 3.0-specific issues. The new SCSV-based downgrade mechanism should reliably prevent the use of SSL 3.0 if both parties support a newer protocol version. Once these software updates are widely deployed, the need to disable SSL 3.0 to address this and future vulnerabilities will hopefully be greatly reduced.

SSL 3.0 is typically used in conjunction with the RC4 stream cipher. (The only other secure option in a strict, SSL 3.0-only implementation is Triple DES, which is quite slow even on modern CPUs.) RC4 is already considered very weak, and SSL 3.0 does not even apply some of the recommended countermeasures which prolonged the lifetime of RC4 in other contexts. This is another reason to deploy support for more recent TLS versions.

I have patched my SSL implementation against BEAST and LUCKY-13, am I still vulnerable?

This depends on the type of mitigation you have implemented. If you disabled protocol versions earlier than TLS 1.1 (which includes SSL 3.0), then the POODLE issue does not affect your installation. If you forced clients to use RC4, the first aspect of POODLE does not apply, but you and your users are vulnerable to all of the weaknesses in RC4. If you implemented the n/n-1 split through a software update, or if you deployed TLS 1.1 support without enforcing it, but made no other configuration changes, you are still vulnerable to the POODLE issue.

Is it possible to monitor for exploit attempts?

The protocol downgrade is visible on the server side. Usually, servers can log TLS protocol versions. This information can then be compared with user agents or other information from the profile of a logged-in user, and mismatches could indicate attack attempts.
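
On Apache, for example, mod_ssl can record the negotiated protocol for each request; the stock Fedora ssl.conf ships a request log along these lines (shown here for reference, adapt the path to your setup):

# Log the negotiated protocol and cipher for every HTTPS request
CustomLog logs/ssl_request_log \
          "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"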

Attempts to abuse the SSL 3.0 padding oracle part of POODLE, as described in the paper, are visible to the server as well. They result in a fair number of HTTPS requests which follow a pattern not expected during the normal course of execution of a web application. However, it cannot be ruled out that a more sophisticated adaptive chosen plain text attack avoids confirmation of guesses from the server, and this more advanced attack would not be visible to the server, only to the client and the network up to the point at which the attacker performs their traffic manipulation.

What happens when I disable SSL 3.0 on my web server?

Some old browsers may not be able to support a secure connection to your site. Estimates of the number of such browsers in active use vary and depend on the target audience of a web site. SSL protocol version logging (see above) can be used to estimate the impact of disabling SSL 3.0 because it will be used only if no TLS version is available in the client.

Major browser vendors, including Mozilla and Google, have announced that they will deactivate SSL 3.0 in their upcoming versions.

How do I secure my Red Hat-supported software?

Red Hat has put together several articles regarding the removal of SSL 3.0 from its products.  Customers should review the recommendations and test changes before making them live in production systems.  As always, Red Hat Support is available to answer any questions you may have.

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set, for instance, “SourceCode” would consist of “SourceCodePro”, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight”. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in the application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <updatecontact>richard_at_hughsie_dot_com</updatecontact>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end-users who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <metadata_license>CC0-1.0</metadata_license>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

What you need to know about the SSLv3 “POODLE” flaw (CVE-2014-3566)

Good morning everyone! Another security vulnerability is hitting the tech (and mainstream!) press, and we want to make sure Fedora users get straight, simple information. This one is CVE-2014-3566, and the cute nickname of the day is “POODLE”.

Here’s the basics: SSL and TLS are standards for secure connections to Internet services. You know that little lock icon (or is it a handbag?) that means your web session is supposed to be secure? That means that some level of secure connection protocol is in use. These protocols have been improved several times over the years for better security, and some of the older versions have problems and really shouldn’t be used anymore.

For compatibility reasons, though, when a client (like your web browser) connects to a server (like https://fedoraproject.org/), they both negotiate the newest version that both sides can understand. If it happens to be something old, that’s what gets used, flaws and all. One particular old version, SSLv3, has some terrible flaws which make it easy for attackers to decrypt your supposedly-secure traffic. Normally, this is not a problem if you’re using a web browser newer than, say, ten years old — the updated, more secure protocol versions will be used. But the “POODLE” attack uses a “man in the middle” attack to confuse the negotiation, tricking the systems into using the insecure old version.

This can be mitigated by limiting the age of the protocol that servers and clients will fall back to. This may break the ability to connect to some very old services or using very old web browsers, but, arguably, those ancient systems were broken already and just plain need to be updated.

So, the bottom line is: on servers and clients, disable SSLv3 (and, of course, older). Updates to Fedora packages which make this the default will be forthcoming, but in the meantime, you can do it manually. Red Hat is working on a security blog article explaining the steps to take for different software; we’ll link to that when it becomes available.

Update: Red Hat’s security blog now has a detailed article, POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566). That includes a lot more explanation, plus links to knowledge base articles explaining how to mitigate the problem in various different applications.

If you are using Firefox, you can set the hidden configuration option security.tls.version.min to 1 — or you can install the SSL Version Control addon, which sets this immediately (and gives you a user interface option to set even higher levels in the future). This is highly recommended!
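
If you prefer editing configuration directly, the same setting can go into a user.js file in your Firefox profile directory:

// Require at least TLS 1.0; never fall back to SSLv3
user_pref("security.tls.version.min", 1);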

If you’re running an Apache web server, add -SSLv3 to your SSLProtocol in the configuration and reload. By default, that’s in the file /etc/httpd/conf.d/ssl.conf, but of course may be elsewhere depending on your local configuration.
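
The resulting directive would look something like this (your existing protocol list may differ):

# Allow all protocols except the broken SSLv2 and SSLv3
SSLProtocol all -SSLv2 -SSLv3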

Of course, all security flaws like this should be taken seriously, but on the “sky is falling” scale, this seems lower than the other recent big-news vulnerabilities, as it does require an active man-in-the-middle attack — an attacker can’t just probe the web for it or automatically attack a client connecting to a malicious server. (The main risk is when you’re on an untrusted network, like wifi at a coffee shop or at a conference.)

If you want more details, you can read the short (and moderately technical) paper published to announce this issue: This POODLE Bites: Exploiting the SSL 3.0 Fallback. (Yes, that’s the real title. I think the security researchers are getting a bit giddy from all of the recent heartbleed and shellshock excitement.)

More on this coming later today… stay tuned!

How to check for SSL POODLE / SSLv3 bug? How to fix Nginx?
Google has just disclosed the SSL POODLE vulnerability, which is a design flaw in SSLv3. Since it is a design flaw in the protocol itself and not an implementation bug, there will be no patches. The only way to mitigate this is to disable SSLv3 in your web server or application using SSL.

How to test for SSL POODLE vulnerability?
$ openssl s_client -connect google.com:443 -ssl3
If there is a handshake failure, the server does not support SSLv3 and is safe from this vulnerability. Otherwise, you need to disable SSLv3 support.

How to disable the SSLv3 support on Nginx?
In the nginx configuration, just after the "ssl on;" line, add the following to allow only TLS protocols:
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
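
Then check the configuration and reload nginx; on a systemd-based distribution such as Fedora:

# Validate the configuration, then apply it without dropping connections
sudo nginx -t
sudo systemctl reload nginx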

Fedora-Infra: Did you know? The package information is now updated weekly in pkgdb2!

The package database pkgdb2 is the place where permissions on the git repositories are managed.

In simple words, it is the place that manages "who is allowed to do what on which package".

For each package, when it is created, the summary, the description and the upstream URL from the spec file are added to the database, which allows us to display this information on the package's page. However, until two weeks ago, this information was never updated. That means that if you had an old package whose description had changed over time, pkgdb would still present the one from the time the package was added to the database.

We now have a script running on a weekly basis that updates the database. Currently, this script relies on the information provided by yum's metadata for the rawhide repo. This means that packages that are only present in EPEL, or that are retired in rawhide but present in F21, will not have their information updated. This is likely something we will fix in the future, though.

In the meantime, you can now enjoy a pkgdb with summary and description information for almost all packages!

As an example, check out the fedocal page: you can now see a link to the upstream website, a short summary and a slightly longer description of the project.

Also, to give you a little hint on the amount of updates we did:

The first time we ran the script:

 16638 packages checked
 15723 packages updated

Last week's run:

 16690 packages checked
 50 packages updated

Set up Vagrant with the VirtualBox backend on Fedora 20
Usually, Vagrant with VirtualBox is one of the easiest things in the world. You go to the website of each project, download the corresponding package for your system, install it, run a vagrant up command on your project, and you're done.

I recommend you stick with that process if you're new to Fedora and are not sure how to fix issues on your Fedora box. The following article is for advanced users who know what they are doing and how to fix (small) issues on their Fedora installation.

My Fedora installations are always a bit… let's say unusual. From my personal point of view they run very stable and I can work with them, but I am a guy who is always doing things which sometimes are not recommended, just to get the latest stuff up and running.

So these were the requirements I had:

* I don't want to manually download an RPM package from a third-party site
* I rather want to install the packages using my preferred package manager (which is currently yum)
* I have enabled the updates-testing repositories for Fedora and RPMFusion
* I have installed the latest kernel from the RawhideKernelNodebug repository
* I'm surely not going to downgrade my kernel just to get VirtualBox or Vagrant working
* This has to work even when I upgrade my kernel

That is the setup I'm running here, and I must say it is stable enough to work for me. If you're interested in how I did this, here is a small checklist of what you need:

* VirtualBox thankfully is available in RPMFusion, so this is nothing difficult so far; I even received an update to VirtualBox 4.3.18 via rpmfusion-free-updates-testing this morning
* Vagrant is available in Fedora Copr here: http://copr.fedoraproject.org/coprs/jstribny/vagrant-f20/

With the repositories placed in /etc/yum.repos.d/ and enabled, I always run the following steps:

yum clean all
yum makecache

And then install the packages:

yum install VirtualBox VirtualBox-kmodsrc akmod-VirtualBox vagrant

At the first vagrant up, I was facing a small issue: a complaint that the installed bundler version doesn't match the required version, which needed to be greater than or equal to 1.5.2 and smaller than 1.7.0. The version currently installed in Fedora was 1.7.3. To fix this, I tracked down the file where this requirement is specified:

/usr/share/gems/specifications/vagrant-1.6.5.gemspec

What I did next is kind of a "dirty hack" to work around this dependency problem: I replaced the "smaller than 1.7.0" bundler requirement with a higher upper bound (in my case 1.8.0). The next vagrant up command worked without any issues and my Vagrant box was up and running.
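
For reference, the same edit can be made with a one-liner (a sketch only; the gemspec path and version strings are the ones from my system and may differ on yours):

# Dirty hack: relax the bundler upper bound in the installed Vagrant gemspec
sudo sed -i 's/< 1.7.0/< 1.8.0/' /usr/share/gems/specifications/vagrant-1.6.5.gemspec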

I of course had an item on my todo list to report the issue to jstribny as the Copr maintainer for Vagrant. :-) jstribny has already fixed this issue in an update to the Vagrant package. Thank you! :-)

Respectful Software

To what extent should Free Software respect its users?

The question, strange as it may sound, is not only valid but also becoming more and more important these days. If you think that the four freedoms are enough to guarantee that Free Software will respect the user, you are probably being overly simplistic. The four freedoms are essential, but they are not sufficient. You need more. I need more. And this is why I think the Free Software movement should have been called the Respectful Software movement.

I know I will probably hear that I am too radical. And I know I will hear it even from those who defend Free Software the way I do. But I need to express this feeling I have, even though I may be wrong about it.

It all began as an innocent comment. I make lots of presentations and talks about Free Software, and, knowing that the word “Free” is ambiguous in English, I started joking that Richard Stallman should have named the movement “Respectful Software”, instead of “Free Software”. If you think about it just a little, you will see that “respect” is a word that brings different interpretations to different people, just as “free” does. It is a subjective word. However, at least it does not have the problem of referring to completely unrelated things such as “price” and “freedom”. Respect is respect, and everybody knows it. What can change (and often does) is what a person considers respectful or not.

(I am obviously not considering the possible ambiguity that may exist in another language with the word “respect”.)

So, back to the software world. I want you to imagine a Free Software. For example, let's consider one that is used to connect to so-called “social networks” like GNU Social or pump.io. I do not want to use a specific example here; I am more interested in the consequences of a certain decision. Which decision? Keep reading :-).

Now, let's imagine that this Free Software is just beginning its life, probably in some code repository under the control of its developer(s), but most likely using some proprietary service like GitHub (which is an issue by itself). And probably the developer is thinking: “Which social network should my software support first?”. This is an extremely valid and important question, but sometimes the developer comes up with an answer that may not be satisfactory to its users. This is where the “respect” comes into play.

In our case, this bad answer would be "Facebook", "Twitter", "LinkedIn", or any other unethical social network. However, those are exactly the easiest answers for many, many Free Software developers, either because those "vampiric" services are popular among users, or because the developers themselves use them!

By now, you should be able to see what I am getting at. My point, in a simple question, is: "How far should we, Free Software developers, allow users to go in harming themselves *and* the community?". Yes, this is not just a matter of self-inflicted restrictions, as when the user chooses to use non-free software to edit a text file, for example. It is, in most cases, a matter of harming the community too. (I have written a post related to this issue a while ago, called "Privacy as a Collective Good".)

It should be easy to see that it does not matter if I am using Facebook through my shiny Free Software application on my computer or cellphone. What really matters is that, when doing so, you are basically supporting the use of those unethical social networks, to the point that perhaps some of your friends are also using them because of you. What does it matter if they are using Free Software to access them or not? Is the benefit offered by the Free Software big enough to eliminate (or even soften) the problems that exist when the user uses an unethical service like Linkedin?

I wonder, though, what is the limit that we should obey. Where should we draw the line and say “I will not pass beyond this point”? Should we just “abandon” the users of those unethical services and social networks, while we lock ourselves in our not-very-safe world? After all, we need to communicate with them in order to bring them to our cause, but it is hard doing so without getting our hands dirty. But that is a discussion to another post, I believe.

Meanwhile, I could give plenty of examples of existing Free Software projects that are doing a disservice to the community by allowing (and even promoting) unethical services or solutions for their users. They are disrespecting their users, sometimes exploiting the fact that many users are not fully aware of the privacy issues that come as a "gift" when they use those services, without spending any kind of effort to teach the users. However, I do not want this post to become a flamewar, so I will not mention any software explicitly. I think it should be quite easy for the reader to find examples out there.

Perhaps this post does not have a conclusion. I myself have not made up my mind completely about the subject, though I am obviously leaning towards what most people would call the "radical" solution. But it is definitely not an easy topic to discuss, or to argue about. Nonetheless, we are closing our eyes to it, and we should not do so. The future of Free Software depends also on what kinds of services we promote, and what kinds of services we actually warn the users against. This is my definition of respect, and this is why I think we should develop Free and Respectful Software.

Robotics will rock FUDCon

The electronics star of FUDCon will be Icaro, an educational robotics project from Argentina. Valentin Basel has set up a hardware-from-scratch session, and the local team has equipped him as well as we could.


Getting drill bits as thin as 1/32″, copper boards and etching solution will bring electronic circuits to life, together with a household iron and a laser printer (the classic toner-transfer way of etching boards). The project itself aims to enable robotics as an educational tool to teach programming at a low cost.

Other items were also gathered for the Icaro boards, like terminals, type-B USB ports, LEDs and capacitors.


Icaro is developed on Fedora, with electronic design software, chip programming software and a beautiful block interface for children.

As parts are difficult to acquire in Nicaragua, Valentin will bring more. We expect that people who have never been close to building hardware will link this experience with Fedora forever.

Custom Kernel on Fedora 20

I recently started experimenting with the eBPF patches by Alexei Starovoitov, which extend and restructure the Berkeley Packet Filter infrastructure in the kernel. BPF is used for a lot of tasks like syscall filtering, packet filtering etc., and obviously deserves a separate and grand blog entry. Well, we’ll see what can be done about that later, but for now I’ll just explain the trivial task of installing a custom kernel (possibly patched/upstream) on your Fedora machine. It’s nothing grand, just those familiar textbook tasks. I am using the master branch from Alexei’s repo.

Build the kernel

git clone https://kernel.googlesource.com/pub/scm/linux/kernel/git/ast/bpf
cd bpf

I changed the EXTRAVERSION string in the Makefile just to differentiate it as I may have to use multiple versions while testing.

make menuconfig
make -j4

Set it up

# Install the modules for the new kernel
sudo make modules_install
# Copy the kernel image and symbol map into /boot, named after the release string
sudo cp arch/x86_64/boot/bzImage "/boot/vmlinuz-"`make kernelrelease`
sudo cp System.map "/boot/System.map-"`make kernelrelease`
# Build an initramfs for the new kernel and regenerate the GRUB config
sudo dracut "" `make kernelrelease`
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Restart your machine and enjoy the new kernel. Once you are done with the experiments, boot into the old kernel and remove the initrd, the kernel image and the custom system map, then update the grub config once more. That’s it! I’ll post more about my experiments with eBPF soon.
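
The cleanup amounts to something like this (the release string is a made-up example; substitute the output of make kernelrelease):

KVER=3.17.0-ebpf   # hypothetical release string from `make kernelrelease`
# Remove the kernel image, symbol map, initramfs and modules of the custom kernel
sudo rm -f /boot/vmlinuz-$KVER /boot/System.map-$KVER /boot/initramfs-$KVER.img
sudo rm -rf /lib/modules/$KVER
# Regenerate the GRUB config so the stale entry disappears
sudo grub2-mkconfig -o /boot/grub2/grub.cfg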

References

[1] http://fedoraproject.org/wiki/BuildingUpstreamKernel
[2] http://broken.build/2012/02/12/custom-kernel-on-fedora


October 14, 2014

GHC14 - Technical Talks
A few notes (as much for myself) on a couple of the technical talks that I attended.

There were three technical tracks (beside the Open Source Day) that I had some interest in: Security/Privacy, Data Science, and IoT/Wearables. Between conflicts in scheduling, making the most of "hallway sessions", seeking [allergy free] food, and pure exhaustion leading to afternoon naps, I only made it to a few of these sessions. Here are reviews of two:

Bio-metrics - Cool or Creepy?

This panel was all industry professionals. There appeared to be a higher priority on customer experience than security. They did acknowledge that there is a difference between personalization identity and authentication identity. It bugged me that they referred to these as "identity" and "authentication" instead of two types of identity. They also made a distinction between local implementation (on a phone or tablet) and cloud or remote identity and pointed out (though maybe not forcefully enough to be heard) that authentication needs multiple factors and a higher level of confidence match.

Fingers, eyes, ear shape, etc. were mentioned briefly, but without as much technical information on where the industry is as I would have liked. There was some discussion of active vs passive enrollment for bio-metric devices. Most of that discussion focused on voice and how it does take into account twins, or having a cold, or other day-to-day changes. Basically (and not new to me), any bio-metric technology has a level of confidence in a match. A low-confidence match is sufficient for recognizing an identity on a device used by multiple persons, such as providing a custom screen on the family iPad. One way they want to extend this is to customer service call centers: instead of having to go through a proof of identity each time you call in - especially with followup calls - have the computer system recognize the voice. This then extends into the Matrix version of in-store identity and personalized experience. I still vote creepy. I prefer being an anonymous shopper.

Designing secure and privacy-aware IoT and wearable technologies for healthcare.

I was disappointed with this talk based on the title and description. The content was interesting but not what I expected. There were four panelists - two from industry and two from academic research. 

The first two presenters - from Fitbit and UC Berkeley - talked mostly about the research they are doing to enhance the user experience. All the ideas are about collecting more data and automating the sharing of that data. Nothing was mentioned about securing the data. I have a Fitbit and I already knew that the data is transferred in clear text and stored in the cloud, and that even with my privacy settings set to "me only" for a particular field, that data is still transferred to other organizations who are allowed to sync my data. I think the "allow this company to sync" permission should be seen as a "friend" level of privacy, not a "me" level of see-everything. Nothing was mentioned about privacy settings, authentication to get data, or securing the transport of the data. The UC Berkeley professor discussed the challenges of developing wearable technologies for the elderly. Again, interesting research, but nothing about privacy or security was mentioned.

The third and fourth presentations got a bit more on topic.
The professor from the University of Illinois at Urbana-Champaign commented on a review of Android apps: 63.6% of mHealth apps send data over the internet in plaintext, and 81.8% use 3rd-party storage and hosting such as AWS. I almost tuned the whole thing out when she started with "BYOD is coming". News flash: it's been an issue for years now! As solutions, she did offer some examples:
"consider auditing" - mAuditor
"consider secure storage" - datalocker
"consider authentication" - lighttouch
"consider secure data collection and transmission" - selinda

The representative from Epic discussed a few issues with using EHRs to store and access data:
Problem: Transfer of data. Each EHR is a separate database, and there is a need for more standards for sharing the data. Some are HL7, HealthIT.gov, and FHIR; FHIR is new and uses a REST API.
Problem: Proving identity. Some possible starting points are perapp, OpenID, OAuth, and HealthKit.
Problem: The firehose of data. How do we make it pretty, with quick access, for the 7-minute doctor visit?
Problem: Who cares? A doctor needs data from the less healthy, not the healthy tech runner with a Fitbit. How do we get information from the right person(s)?
Problem: Legal issues. Current laws require holding all data that is collected. This can be expensive on the storage side unless there is enough value from the data and/or a change in rules that allows disposing of certain data sets - such as historic Fitbit raw data on a healthy individual.

-SML
GHC14 - Day 3 keynotes and wrapup
Day three opened with references to handling mistakes, some direct, some more subtle - making sure we are all listening, learning, improving, and moving forward. It's been a while since I heard the term "active listening".
The conference was marked by controversy over men at a women's conference telling women the same old things that really have not worked, from how the Wednesday evening panel was handled (by both the organizers and the participants) to the comments made by the CEO of Microsoft. Both situations resulted in the men LISTENing to the criticism, apologizing for not doing better, and publicly trying to lead the change toward doing better. It remains to be seen how the words and efforts will play out in the long run.

Here are a few (paraphrased) sentences I heard:
  • We need to have the tough conversation.
  • We all make mistakes.
  • We cannot push an ally away with persecution for a single mistake.
  • We all need to listen to the criticism and learn from the mistakes.
  • We need to thank those that enter the minefield and show up to have the hard conversations.
Outside of the conference, I loved the quote by Diane Sawyer after talking about The Nobel Peace Prize given to Malala and straight out of Malala's goals: "It is amazing what can happen when we educate girls".

The keynote speaker for the final day was Dr. Arati Prabhaker from DARPA. She was scheduled for last year but the government shutdown forced a last minute substitution. A few of the technical research efforts that her team is working on are:
  • Space - launching small satellites from military planes from any runway. These are 100-pound satellites placed into low orbit, with the cost down to 1 million dollars and 24 hours of notice instead of tens of millions and 24 months of planning.
  • Biology - Research into moving injured military personnel from rehab to recovery with better prosthetics and direct brain control. Human trials have already started, and a 60 Minutes video shows a woman (in attendance at the conference) using thought to manipulate a robot arm.
  • Biology - Infectious diseases are in the news. Right now a flu shot needs weeks to get the body to create antibodies. DARPA research is looking for quicker diagnosis to reduce the spread as well as create targeted cures and prevention. They are looking for a flu shot that triggers antibodies within hours instead of weeks.
  • Information Technology - Today we really do have only the option of "patch and pray". What can we do in the future to reset this? DARPA research is working on a Capture the Flag for AI machines only; they plan to place it next to the Capture the Flag at the next DefCon. They are also working on accuracy and speed in pattern matching and other data analysis technologies. Recently, a pattern matching exercise resulted in a list of phone numbers which were matched against a local law enforcement database, narrowing the list to known criminals and then to about 30 numbers from (or near) North Korea, which eventually led to reducing human trafficking.
The last day is also a day to get the final set of swag from the career fair and attend the final party. Do not forget a bag for the swag at the party and be early to get in on the raffle from the sponsors. 

On the swag: I love the puzzles. I can use the portable chargers and cables (though I now have many more cables than I need). I have once again replenished my office supply of pens (a few more PostIt notes would have been helpful) and my work from home wardrobe of T-shirts (more colors than at most tech conferences but still a lot of my least favorite black and grey). Some of the tote bags are useful, specifically the ones that can handle groceries (flat bottoms with strong handles).

-SML

GHC14 Day 2 - keynotes
The day 2 morning session started out with a special surprise guest: Megan Smith, the CTO of the USA. I have to admit I do not follow a lot of politics (I have a low BS threshold) and I had failed to make the connection. I saw Megan Smith talk last year at GHC and was impressed, even though she worked on projects outside my direct interests. I did know that the new CTO was a woman; I just failed to connect the name and face and experience. It was exciting to hear her talk again, but I avoided the long lines at the meet & greet, so no photo op for me.

The morning session also included the numbers for the conference: 8000 people in attendance (about 540 men) from 67 countries. That is double last year, and like last year, it was sold out weeks before the event. Sponsors were thanked, and the top universities and companies sending people to the conference were listed. It was no surprise to see the local schools and large sponsors topping those lists. Companies participating in the Top Company for Women in Computing ABIE Award were recognized with a banner at their booth in the career fair. Unfortunately, neither of my largest clients was recognized: Red Hat was present at the conference but Cloudera was not. Companies related to my field where I know employees, such as Teradata and Rackspace, were present and were also participants in the initiative.

The keynote itself was a discussion between the President of Harvey Mudd College, Maria Klawe, and the new CEO of Microsoft, Satya Nadella. I was actually impressed with much of what he had to say, as it reminded me of things that have worked for me.

  • He believes that everyone has a "superpower". Use your superpowers. Frequently for women the superpower is a sense of empathy. Often it is job related talent. Be passionate about your work, find something worth doing, do it well, drive the technology (and the company) forward. 
  • Women have a low threshold for BS.
  • He is proud of the work done by the women in his company - at all levels - and would not be surprised to see a woman follow him as CEO. Several statements related to a belief that it is about skill, not gender or anything else. All teams need a diverse background, passionate workers, and the right talent. Hiring and promotions are about the right *person* for the job - including his.
  • He believes the industry can benefit from a re-entry program for returning workers. The term bootcamp was used, and attacked as a threatening term, but the idea was welcomed: create a re-entry program that trains returning workers, then helps place them in the right job. It reminds me of many new-grad entry programs I have seen. Obviously this is currently aimed at women who choose to take a few years off for family, but if done correctly, it could even encourage more men to be the ones to take a few years off for family. Right now it is easier to enter the workforce as a "trainable new grad" with no skills than to find the right job for outdated experience. It is up to the individual to find the training to update their skills BEFORE applying for a job.

Of course there is the now-international controversy over asking for pay raises, but he showed up. He showed up a day early to listen and to see and experience the conference - not just for the hour on stage. After the keynote, he LISTENed to the criticism, he appears to have LEARNed from the mistake, and he is LEADing by example with his apology and by moving forward to make change happen. We all have to be willing to have the hard conversations, with an honest two-way dialog and not just defensive anger. Also, as a side effect, it has brought the pay equity issue back to the front page, however briefly.

-SML

News: NVIDIA PhysX SDK 3.3.2 under Linux.
New in this version is GPU PhysX support, which brings GPU acceleration under Linux. NVIDIA has updated the PhysX SDK to version 3.3.2. NVIDIA PhysX is available for Windows, Linux, OS X, Sony PS3/PS4/PS Vita, Nintendo Wii U, Xbox (360, One), Android and iOS. The main changes in v3.3.2:

  • added set/getRunProfiled() for PxDefaultCpuDispatcher to control profiling at task level.
  • Android: support for x86-based devices was added.
  • PxProfileEventHandler::durationToNanoseconds() added; translates event durations in timestamps (cycles) into nanoseconds.
  • added SnippetProfileZone to show how to retrieve profiling information.
  • added SnippetCustomJoint to better illustrate custom joint implementation, and removed SnippetExtension.
  • added SnippetStepper to demonstrate kinematic updates while substepping with tasks.
  • fixed some issues.

News: Firefox will be updated to version 33.
According to benoitgirard, all versions of the Firefox web browser will be updated to version 33. The updates will be available on October 14, 2014. Firefox 33 introduces several changes: some are visible on the frontend, while others improve the browser in the backend. One such backend feature moves compositing to a second thread to make the main thread loop more responsive. Firefox Beta, Aurora and Nightly will also be updated, to versions 34, 35 and 36 respectively. You can download Firefox 33 directly from Mozilla.

I wrote a thing: relval / wikitcms 1.1, Fedora QA wiki test management tool

I think I might finally have to hand in my ‘I don’t code’ card.

As I mentioned, buried in another post last week, I wrote a thing – relval (and wikitcms). relval is a tool for interacting with the Fedora wiki as it is used by the Fedora QA team for managing test results. It’s an interface to our test case management system…which is a wiki. :)

relval was originally written to make it easy to create the release validation testing result pages – you know, Current Installation Test and all the rest – which it still does (the Fedora 21 Beta TC2 and TC3 validation pages were created with relval). With 1.1, though, it grew two new capabilities, user-stats and testcase-stats.

These are re-implementations of (respectively) stats-wiki.py and testcase-stats, written by Kamil Paral and Josef Skladanka. They generate statistics about Fedora validation testing. user-stats generates the statistics about which users contributed results to a given set of pages, which are used in the “Heroes of Fedora testing” posts, like the Fedora 20 one on Mike’s blog. testcase-stats generates statistics about test coverage – usually across an entire release series, like 20 or 21. See its current output for the last few Fedora releases. From the wiki pages, it’s hard to know when a test that needs to be run for a release has not yet been done for any TC / RC, because you can only look at one at a time. testcase-stats makes it easier to see at a glance and in detail which tests have been run against which releases, so you can pick out tests that have been neglected and need to be run ASAP.

wikitcms is a Python library which does the actual work of interfacing with the wiki, using the mwclient Mediawiki API library. relval is backed by wikitcms.

If you want to play with relval, or use wikitcms for your own (nefarious?) purposes, you can easily get it from my repository. Grab wikitcms.repo and place it in /etc/yum.repos.d, then run yum install relval, and you should get relval and future updates for it. Please use --test if playing with compose – it operates against the staging wiki instead of the production one, for test purposes. The stats sub-commands only consume data from the wiki, they don’t modify it, so they should be safe to play with (they do implement --test, just in case you want to generate test data for some tricky corner case or something).
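
In other words, something like this (a sketch; the repo file comes from the repository linked above, and the exact sub-command arguments may differ – check relval’s help output):

# Enable the repository, then install relval (which pulls in wikitcms)
sudo cp wikitcms.repo /etc/yum.repos.d/
sudo yum install relval
# The stats sub-commands only read from the wiki, so they are safe to play with
relval user-stats --test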