October 25, 2014

Conky Manager comes to Fedora

Conky Manager is a GUI for managing Conky scripts. It provides options to start/stop, browse, and edit the Conky themes installed on the system. With it you can:

  • Start/stop, browse and edit Conky themes
  • Run Conky on system startup
  • Change the location, transparency and size of the Conky widget window
  • Change the time and network interface settings


Conky Manager is now available in the Fedora repositories. It was first included in the Fedora Rawhide repo (http://koji.fedoraproject.org/koji/taskinfo?taskID=7940067) and will soon reach the others.


With this program's arrival in Fedora, users' desktops will be more active and customizable; it brings life to solid backgrounds with a nice touch.

You can test Conky Manager now via Fedora updates (https://admin.fedoraproject.org/updates/search/conky-manager) for Fedora 19/20/21/EPEL7.
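
If the update has not reached the stable repositories yet, a minimal sketch of pulling it from updates-testing (assuming the package name conky-manager shown in the update link above):

$ sudo yum install --enablerepo=updates-testing conky-manager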


October 24, 2014

Positive results from Outreach Program for Women

In 2013, Debian participated in both rounds of the GNOME Outreach Program for Women (OPW). The first round was run in conjunction with GSoC and the second round was a standalone program.

The publicity around these programs and the strength of the Google and Debian brands attracted a range of female candidates, many of whom were shortlisted by mentors after passing their coding tests and satisfying us that they had the capability to complete a project successfully. As there are only a limited number of places for GSoC and limited funding for OPW, only a subset of these capable candidates were actually selected. The second round of OPW, for example, was only able to select two women.

Google to the rescue

Many of the women applying for the second round of OPW in 2013 were also students eligible for GSoC 2014. Debian was lucky to have over twenty places funded for GSoC 2014 and those women who had started preparing project plans for OPW and getting to know the Debian community were in a strong position to be considered for GSoC.

Chandrika Parimoo, who applied to Debian for the first round of OPW in 2013, was selected by the Ganglia project for one of five GSoC slots. Chandrika made contributions to PyNag and the ganglia-nagios-bridge.

Juliana Louback, who applied to Debian during the second round of OPW in 2013, was selected for one of Debian's GSoC 2014 slots working on the Debian WebRTC portal. The portal is built using JSCommunicator, a generic HTML5 softphone designed to be integrated in other web sites, portal frameworks and CMS systems.

Juliana has been particularly enthusiastic with her work and after completing the core requirements of her project, I suggested she explore just what is involved in embedding JSCommunicator into another open source application. By coincidence, the xTuple development team had decided to dedicate the month of August to open source engagement, running a program called haxTuple. Juliana had originally applied to OPW with an interest in financial software and so this appeared to be a great opportunity for her to broaden her experience and engagement with the open source community.

Despite having no prior experience with ERP/CRM software, Juliana set about developing a plugin/extension for the new xTuple web frontend. She has published the extension in Github and written a detailed blog about her experience with the xTuple extension API.

Participation in DebConf14

Juliana attended DebConf14 in Portland and gave a presentation of her work on the Debian RTC portal. Many more people were able to try the portal for the first time thanks to her participation in DebConf. The video of the GSoC students at DebConf14 is available here.

Continuing with open source beyond GSoC

Although GSoC finished in August, xTuple invited Juliana and me to attend their annual xTupleCon in Norfolk, Virginia. Google went the extra mile and helped Juliana to get there and she gave a live demonstration of the xTuple extension she had created. This effort has simultaneously raised the profile of Debian, open source and open standards (SIP and WebRTC) in front of a wider audience of professional developers and business users.

Juliana describes her work at xTupleCon, Norfolk, 15 October 2014

It started with OPW

The key point to emphasize is that Juliana's work in GSoC was actually made possible by Debian's decision to participate in and promote Outreach Program for Women in 2013.

I've previously attended DebConf myself to help more developers become familiar with free and open RTC technology. I wasn't able to get there this year but thanks to the way GSoC and OPW are expanding our community, Juliana was there to help out.

How to know what generation you were born in.
So in 2011, I wrote an article about 'the generations'. Not much has changed since then, except that the current buzzword in marketing is the 'millennial'. The millennials are doing this, the millennials are doing that...

Before anyone starts wondering if this is going to be a 'get off my lawn' article, I have no problem with this. The Generation X thing was so far past its expiration date, what with even the youngest Generation X person now in their mid-30s. [And thus too old to be marketed to by most magazines and online journalists.]

My main problem is with where people define the cut-off for being a 'millennial'. Depending on the article, it ranges anywhere from being born in 1980 to 2009. [I wonder if the discrepancy is to make sure the author of the piece is still in the 'young and hip' demographic versus being an old flake of the last generation.]

OK, for any person wanting to know what generation they or some ancestor 'belonged to'... here is a handy reference guide. Like my original article, it is full of poop, because it is EuroAmerican and doesn't count the 10,000s of generations that various American Indian tribes had before someone tried to find a shortcut to China by going the long way around the world. [It also doesn't count the Norse colonies from the 900s or so.] Instead it uses the one 'defined' generation, the Baby Boomers, as the anchor point to count backwards and forwards from, and then uses X as the generation after that. It is cool that Generation A would have been some of the first 'Americans'.


Generation:  A 1550 -> 1567 (New Mexico 'colonies')
Generation:  B 1568 -> 1585 (St Augustine Fl and first known birth)
Generation:  C 1586 -> 1603 (Roanoke Island)
Generation:  D 1604 -> 1621 (Jamestown)
Generation:  E 1622 -> 1639
Generation:  F 1640 -> 1657
Generation:  G 1658 -> 1675
Generation:  H 1676 -> 1693
Generation:  I 1694 -> 1711
Generation:  J 1712 -> 1729
Generation:  K 1730 -> 1747 (The Founding Parents)
Generation:  L 1748 -> 1765 (The Revolution Fighters)
Generation:  M 1766 -> 1783 (The Last Colonials)
Generation:  N 1784 -> 1801 (The War of 1812 Generation)
Generation:  O 1802 -> 1819
Generation:  P 1820 -> 1837
Generation:  Q 1838 -> 1855 (The Civil War Generation)
Generation:  R 1856 -> 1873
Generation:  S 1874 -> 1891 (Greater Collapse of 1892 gen)
Generation:  T 1892 -> 1909 (Lost Generation of WWI)
Generation:  U 1910 -> 1927 (Greatest Generation of WWII)
Generation:  V 1928 -> 1945 (Silent Generation)
Generation:  W 1946 -> 1963 (Baby Boomers)
Generation:  X 1964 -> 1981 (Generation X)
Generation:  Y 1982 -> 1999 (Millennials)
Generation:  Z 2000 -> 2017 (the "not the last generation")
Generation: AA 2018 -> 2035 (rebuilders from the Unix Apocalypse)
Generation: AB 2036 -> 2063

So I hope this is useful. [I probably need an app for this.. just to make sure the millennials get it... crap some kids are on my lawn again.. HEY YOU!!!!]
oVirt Node: hosted-engine
oVirt Node 3.5 contains ovirt-node-plugin-hosted-engine, which makes it possible to set up oVirt Node to run oVirt Engine as a virtual machine with HA (more than one node required).

To start the setup you must indicate in the Text User Interface (TUI) how the installation of the oVirt Engine virtual machine should proceed:

- ISO
An .ISO file, booted as a CD-ROM, so you proceed with the installation as with a normal operating system; later you set up oVirt Engine, and hosted-engine will connect to it and make all the changes needed to configure your environment. The ISO will be downloaded via HTTP.

- OVA
A pre-configured image with oVirt already set up; after installation you are required to run: engine-setup --offline --config-append=ovirt-engine-answers

- PXE
Install the virtual machine by booting via PXE.

The example below is based on an ISO (CentOS 6.5).

Step 1: Providing the .ISO


Step 2: The engine-hosted-engine-setup will start in screen...


Step 3: The engine-hosted-engine-setup will start in screen...


Step 4: Initial setup
For this step, please keep in mind:
- The storage types supported at the moment for the hosted-engine virtual machine are:
    iscsi, nfs3, nfs4
- The admin portal password and FQDN are required so that hosted-engine can later connect to oVirt Engine and apply the changes.
- The default path to which the ISO is uploaded on oVirt Node is:
    /data/ovirt-hosted-engine-setup/my-iso-provided.iso




Step 5: Time to install the operating system
The virtual machine is started so the OS can be installed. Connect to the VM via remote-viewer with the temporary password provided in the console and complete the installation. AFTER the OS installation succeeds, press 1 and continue.


$ remote-viewer vnc://IP_ADDRESS:5900






Step 6: Set up oVirt Engine
Connect to the VM again, update the system and set up oVirt Engine. AFTER engine-setup completes, press 1 and hosted-engine will finish the setup.

Example:
# yum update
# yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
# yum install ovirt-engine
# engine-setup
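
Once hosted-engine reports the deployment as complete, a quick way to confirm that the engine VM is up and managed for HA is to query its status from the node (a sketch; the exact output varies between releases):

# hosted-engine --vm-status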

Five Conferences in Fedora This Week! FUDCon Managua, LinuxCon EU, SeaGL, and upcoming FOSDEM and DevConf.cz

Fedora is a big project, and it’s hard to keep up with everything that goes on. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for October 24th, 2014:

This is a week with a lot of conference activity — today we have an all-event 5tFTW.

FUDCon LATAM in Managua, Nicaragua

First up: our Latin American FUDCon is going on right now (yesterday, today, and tomorrow). We already have some summary posts from Dennis Gilmore (in English; and he’s planning to post more soon) and Luis Bazan (in Spanish). More on this event next week!

LinuxCon EU 2014

Jiří Eischmann reports on Fedora’s presence at the Linux Foundation’s LinuxCon Europe conference in Düsseldorf, Germany.

A quick quote:

People were more interested in Fedora Server which is different from most events where people are mostly interested in Workstation, but it’s not surprising considering the audience. It really helps to advertise a specialized product because you can clearly say: if you’re interested in server OSes, this is what we have for you and it has these interesting features. That’s why I’m glad we have Fedora Server. From the marketing point of view, it’s much more appealing to have a solution (server product) than just a lego to build it. Quite a few people were interested in Fedora as a future of enterprise Linux because what they work with and care about is Red Hat Enterprise Linux.

… but I think the whole thing is interesting, especially if you’re interested in how we promote Fedora and interact with the community at this type of conference.

Seattle GNU/Linux Conference

Another one going on right now — SeaGL in Seattle, Washington. (From the logo, looks like that’s “seagull” — cute!). Fedora hacker David Gay (a.k.a. “oddshocks”, and one of the people behind Fedora Badges and other projects) is speaking on Free Infrastructure later this afternoon, sharing his experiences and answering questions. Attendance is free, by the way, so if you’re in Seattle, it’s the obvious thing to do with your weekend!

FOSDEM 2015 Call for Papers

FOSDEM (Free and Open Source Software Developers’ European Meeting) is a gigantic community-organized and oriented conference which takes place in Brussels every year at the end of January / beginning of February. Right now, 2015’s conference is in its planning phase, with a “call for papers” (that is, open submission for talks) open now for both developer rooms and lightning talks and booths.

Of particular interest to Fedora is the Distribution Devroom:

The purpose of the distributions devroom is to offer a forum for all people interested in distribution issues to meet and collaborate on improving the distribution ecosystem. What are the upcoming challenges facing the distribution space? How can distribution maintainers collaborate better to solve cross-distribution issues? What are interesting developments helping distribution developers to excel in the distribution space?

If you have a Fedora-related idea, let’s talk about it and get to planning! (The Fedora Ambassadors mailing list is a good place to start.)

DevConf.cz 2015 Call for Papers

Red Hat sponsors a conference in Brno, Czech Republic the week after FOSDEM, and that too has an open Call for Papers. Continuing on the success of last February’s event, the next DevConf.cz will feature an entire Fedora Day — Jiří has details on his blog. Last year, there were over 1000 attendees, and this year, the venue has been moved to accommodate even more!


 


Virtual Machine Plugin — Avocado 0.14.0 documentation
avocado-framework/avocado
3rd Science, Environment and Technology Week at Polivalente

The 3rd Science, Environment and Technology Fair, "Sharpening creativity!", is an event coordinated by Professor José Augusto. By tradition, the event recruits and trains a group of students to offer talks, a mini installation course, distribution of bootable media, prize raffles, and much more in the Espaço Fedora (Fedora Space).

The 3rd Science, Environment and Technology Fair gathers 3,000 to 5,000 people annually, and the Espaço Fedora has the opportunity to share with visitors the secure and efficient solution that is the free software of the Fedora Project.

During the two days of the event, the Espaço Fedora saw a very high flow of people interested in getting to know and understand more about this operating system, through talks about Fedora and about digital inclusion by means of this powerful operating system. We also offered mini-courses on installing Fedora and using it day to day.


Kerberos over HTTP: getting a TGT on a firewalled network

One of the benefits I originally wanted to bring with the FreeIPA move to GNOME contributors was the introduction of an additional authentication system to connect to the services hosted on the GNOME Infrastructure. The authentication system that comes with the FreeIPA bundle that I had in mind was Kerberos. Users willing to use Kerberos as their preferred authentication system would just be required to get a TGT (Ticket-Granting Ticket) from the KDC (Key Distribution Center) through the kinit command. Once that is done, authenticating to the services currently supporting Kerberos will be as easy as pointing a configured browser (Google for how to configure your browser to use Krb logins) to account.gnome.org without being prompted for the usual username / password combination, or pushing to git without using the public-private key mechanism. That theoretically means you won't be required to use an SSH key for logging in to any of the GNOME services at all, as entering your password at the kinit prompt will be enough (for at least 24 hours, as that's the lifetime of the TGT itself on our setup) to do everything you were used to doing before the Kerberos support introduction.

kerberos-over-http

A successful SSH login using the most recent Kerberos package on Fedora 21

The issue we faced at first was the underlying networking infrastructure firewalling all Kerberos ports, blocking the use of kinit itself, which kept timing out trying to reach port 88. A few days later I was contacted by RH's developer Nathaniel McCallum, who worked out a way to bypass this restriction by creating a KDC proxy that accepts requests on port 443 and proxies them to the internal KDC running on port 88. With the recent Kerberos release (released on October 15th, 2014 and following the MS-KKDCP protocol), a patched kinit allows users to retrieve their TGTs directly from the HTTPS proxy, completely bypassing the need for port 88 to stay open on the firewall. The GNOME Infrastructure now runs the KDC Proxy and we're glad to announce Kerberos authentication is working as expected on the hosted services.

If you are facing the same problem and you are curious to know more about the setup, here are all the details:

On the KDC:

  1. No changes are needed on the KDC itself; just make sure to install the python-kdcproxy package, which is available for RHEL 7.
  2. Tweak your vhost accordingly by following the provided documentation.

On the client:

  1. Install the krb5-workstation package, make sure it’s at least version 1.12.2-9 as that’s the release which had the additional features we are talking about backported. Right now it’s only available for Fedora 21.
  2. Adjust /etc/krb5.conf accordingly and finally get a TGT through kinit $userid@GNOME.ORG:
[realms]
 GNOME.ORG = {
  kdc = https://account.gnome.org/kdc
  kpasswd_server = https://account.gnome.org/kdc
}
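
With that in place, a quick sanity check from the client looks like this (a sketch using standard MIT Kerberos tools; substitute your own GNOME account name for $userid):

$ kinit $userid@GNOME.ORG
$ klist    # should show a krbtgt/GNOME.ORG@GNOME.ORG ticket valid for about 24 hours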

That should be all for today!

Introducing Gthree

I’ve recently been working on OpenGL support in Gtk+, and last week it landed in master. However, the demos we have are pretty lame and are not very good for showing off or even testing the OpenGL support. I’ve looked around for some open source demos that used modern GL that we could use, but I didn’t find anything that we could easily use.

What I did find though, was a lot of WebGL demos that used three.js. This looked like a very nice open source library for high-level 3D rendering. At first I had some plans to bind OpenGL to gjs so that we could run three.js, but this turned out to be hard.

Instead I started converting three.js into C + GObject, using the Gtk+ OpenGL support and the vector/matrix library graphene that Emmanuele has been working on recently.

After about a week of frantic hacking it is now at a stage where it may be interesting for others. So, without further ado I introduce:

https://github.com/alexlarsson/gthree

It does not yet support everything that three.js can do, but it does support meshes with most mesh material types and lighting, including a loader for the JSON model format of three.js, which means that it is minimally useful.

Here are some screenshots of the examples that ship with the code:


Various types of materials


Some sample models from three.js examples


Some random cubes

This has been a lot of fun to work on as I’ve seen a lot of progress very fast. Mad props to mrdoob and the other three.js developers for creating three.js and making it free software. Gthree is a huge rip-off of their work and would never be possible without it. Thanks also to Emmanuele for his graphene library.

What are you sitting here for, go ahead and play with it! Make some demos, port some more three.js features, marvel at the fancy graphics!

Winners of Wiki Loves Romania 2014

Organizing Wiki Loves Monuments in Romania this year was the hardest so far. Why so? We had a bigger budget, which allowed us to be more ambitious, so on top of the free photography contest for Wikipedia we had to manage a photo exhibition, a 2-day field trip, an additional contest for juniors, a team of volunteering interns and more. But it was rewarding, and the results are notable: over 8200 pictures from 216 contributors.

I will tease with the top 3 photos from the contest; you can see all of them on our website.

1st place: Bogdan Croitoru with Monumentul triumfal Tropaeum Traiani
2nd place: Dragoș Pîrvulescu with Fortificație medievală (Râșnov Citadel, seen from the Cristian–Râșnov road)
3rd place: Zsolt Deak with Ansamblul bisericii evanghelice fortificat – vedere aeriana (the fortified Evangelical church ensemble, aerial view)

You can also see the winning pictures, along with highlights from the previous editions and winners of the section dedicated to younger contributors, in a photo exhibition open for 3 weeks at the National Library in Bucharest. After that, the expo will move for a couple more weeks to Universitatea de Vest in Timișoara.

Being a Sporadic Overview Of Linux Distribution Release Validation Processes

Yup, that’s what this is. It’s kind of in-progress, I’ll probably add to it later, haven’t looked into what Arch or Debian or a few other likely suspects do.

Fedora

Manual testing

Our glorious Fedora uses Mediawiki to manage both test cases and test results for manual release validation. This is clearly ludicrous, but works much better than it has any right to.

‘Dress rehearsal’ composes of the entire release media set are built and denoted as Test Composes or Release Candidates, which can be treated interchangeably as ‘composes’ for our purposes here. Each compose represents a test event. In the ‘TCMS’ a test event is represented as a set of wiki pages; each wiki page can be referred to as a test type. Each wiki page must contain at least one wiki table with the rows representing a concept I refer to as a unique test or a test instance. There may be multiple tables on a page; usually they will be in separate wiki page sections.

The unique, identifying attributes of a unique test are:

  1. The wiki page and page section it is in
  2. The test case
  3. The user-visible text of the link to the test case, which I refer to as the ‘test name’

unique tests may share up to two of those attributes – two tests may use the same test case and have the same test name but be in different page sections or pages, or they may be in the same page section and use the same test case but have a different test name, for instance.

The other attributes and properties of a unique test are:

  1. A milestone – Alpha, Beta or Final – indicating the test must be run for that release and later releases
  2. The environments for the test, which are the column titles appearing after the test case / test name in the table in which it appears; the environments for a given test can be reduced from the set for the table in which it appears by greying out table cells, but not extended beyond the columns that appear in the table
  3. The results that appear in the environment cells

Basically, Fedora uses mediawiki concepts – sections and tables – to structure storage of test results.

The Summary page displays an overview of results for a given compose, by transcluding the individual result pages for that compose.

Results themselves are represented by a template, with the general format {{result|status|username|bugs}}.
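
For illustration, a single row of one of these result tables might look roughly like this in wiki text (a sketch; the test case, usernames, bug number and number of environment columns are invented):

|-
| [[QA:Testcase_Base_Startup|Base startup]] || {{result|pass|exampleuser}} || {{result|fail|otheruser|1234567}}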

Fedora also stores test cases in Mediawiki, for which it works rather well. The category system provides a fairly good capability to organize test cases, and templating allows various useful capabilities: it’s trivial to keep boilerplate text that appears in many test cases unified and updated by using templates, and they can also be used for things like {{FedoraVersion}} to keep text and links that refer to version numbers up to date.

Obvious limitations of the system include:

  • The result entry is awkward, involving entering a somewhat opaque syntax (the result template) into another complex syntax (a mediawiki table). The opportunity for user error here is high.

  • Result storage and representation are strongly combined: the display format is the storage format, more or less. Alternative views of the data require complex parsing of the wiki text.

  • The nature of mediawiki is such that there is little enforcement of the data structures; it’s easy for someone to invent a complex table or enter data ‘wrongly’ such that any attempt to parse the data may break or require complex logic to cope.

  • A mediawiki instance is certainly not a very efficient form of data storage.

Programmatic access

My own wikitcms/relval provides a Python library for accessing this ‘TCMS’. It treats the conventions/assumptions about how pages are named and laid out, the format of the result template etc. as an ‘API’ (and uses the Mediawiki API to actually interact with the wiki instance itself, via mwclient). This allows relval to handle the creation of result pages (which sort of ‘enforces’ the API, as it obviously obeys its own rules/assumptions about page naming and so forth) and also to provide a TUI for reporting results. As with the overall system itself this is prima facie ridiculous, but actually seems to work fairly well.

relval can produce a longitudinal view of results for a given set of composes with its testcase-stats sub-command. I provide this view here for most Fedora releases, with the results for the current pre-release updated hourly or daily. This view provides information for each test type on when each of its unique tests was last run, and a detailed page for each unique test detailing its results throughout the current compose.

Automated testing

Fedora does not currently perform any significant automated release validation testing. Taskotron currently only runs a couple of tests that catch packaging errors.

Examples

  • Fedora result page
  • Fedora test case: the page source demonstrates the use of templates for boilerplate text

Ubuntu

Manual testing

The puppy killers over at Ubuntu use a system called QATracker for manual testing. Here is the front end for manual release validation.

QATracker stores test cases and products (like Kubuntu Desktop amd64, Ubuntu Core i386). These are kind of ‘static’ data. Test events are grouped as builds of products for milestones, which form part of series. A series is something like an Ubuntu release – say, Utopic. A milestone roughly corresponds to a Fedora milestone – say, Utopic Final – though there are also nightly milestones which seem to fuzz the concept a bit. Within each milestone is a bunch of builds, of any number of products. There may be (and often is) more than one build for any given product within a single milestone.

So, for instance, in the Utopic Final milestone we can click See removed and superseded builds too and see that there were many builds of each product for that milestone.

Products and test cases are defined for each series. That is, for the whole Utopic series, the set of products and the set of test cases for each product is a property of the series, and cannot be varied between milestones or between builds. Every build of a given product within a given series will have the same test cases.

Test cases don’t seem to have any capability to be instantiated (as in moztrap) – it’s more like Fedora, a single test case is a single test case. I have not seen any capacity for ‘templating’, but may just have missed it.

Results are stored per build (as we’ve seen, a build is a member of a milestone, which is a member of a series). There is no concept of environments (which is why Ubuntu encodes the environments into the products) – all the results for a single test case within a single build are pooled together.

The web UI provides a fairly nice interface for result reporting, much nicer than Fedora’s ‘edit some wikitext and hope you got it right’. Results have a status of pass or fail – there does not appear to be any warn analog. Bug reports can be associated with results, as in Fedora, as can free text notes, and hardware information if desired.

QATracker provides some basic reporting capabilities, but doesn’t have much in the way of flexible data representation – it presumably stores the data fairly sensibly and separately from its representation, but doesn’t really provide different ways to view the data beyond the default web UI and the limited reporting capabilities.

The web UI works by drilling down through the layers. The front page shows a list of the most recent series with the milestones for each series within them, you can click directly into a milestone. The milestone page lists only active builds by default (but can be made to show superseded ones, as seen above). You can click into a build, and from the build page you see a table-ish representation of the test cases for that build, with the results (including bug links) listed alongside the test cases. You have to click on a test case to report a result for it. The current results for that test case are shown by default; the test case text is hidden behind an expander.

Limitations of the system seem to include:

  • There’s no alternative/subsidiary/superior grouping of tests besides grouping by product, and no concept of environments. This seems to have resulted in the creation of a lot of products – each real Ubuntu product has multiple QATracker products, one per arch, for instance. It also seems to lead to duplication of test cases to cover things like UEFI vs. BIOS, which in Fedora’s system or Moztrap can simply be environments.

  • Test case representation seems inferior to Mediawiki – as noted, template functionality seems to be lacking.

  • There seems to be a lack of options in terms of data representation – particularly the system is lacking in overviews, forcing you to drill all the way down to a specific build to see its results. There appears to be no ‘overview’ of results for a group of associated builds, or longitudinal view across a series of builds for a given product.

Examples

Programmatic access

QATracker provides an XML-RPC API for which python-qatracker is a Python library. It provides access to milestone series, milestones, products, builds, results and various properties of each. I was able to re-implement relval’s testcase-stats for QATracker in a few hours.

Automated testing

Ubuntu has what appears to be a Jenkins instance for automated testing. This runs an apparently fairly small set of release validation tests.

OpenSUSE

Manual testing

Well…they’ve got a spreadsheet.

Automated testing

This is where OpenSUSE really shines – clearly most of their work goes into the OpenQA system.

The main front end to OpenQA provides a straightforward, fairly dense flat view of its results. It seems that test suites can be run against builds of distributions on machines (more or less), and the standard view can filter based on any of these.

The test suites cover a fairly extensive range of installation scenarios and basic functionality checks, comparable to the extent of Fedora’s and Ubuntu’s manual validation processes (though perhaps not quite so comprehensive).

An obvious potential drawback of automated QA is that the tests may go ‘stale’ as the software changes its expected behaviour, but at a superficial evaluation SUSE folks seem to be staying on top of this – there are no obvious absurd ‘failure’ results from cases where a test has gone stale for years, and the test suites seem to be actively maintained and added to regularly.

The process by which OpenQA ‘failures’ are turned into bug reports with sufficient useful detail for developers to fix seems to be difficult to trace at least from a quick scan of the documentation on the SUSE wiki.

October 23, 2014

Fake DNS replies in unit tests using resolv_wrapper
If your unit tests require custom DNS queries, there are some approaches you might want to take, like adding records to the local /etc/hosts file. But that might not be possible for tests where you don't have root access (for instance, in build systems), and moreover you can't set any records other than A or AAAA. You can also run a full DNS server and set it in your resolv.conf file, but that normally requires root privileges too, and tampers with the usual setup of the test host. What would be ideal is a way to force the test into a mock DNS environment without affecting the live environment on the host system.

As Andreas Schneider pointed out earlier, it is time for another wrapper - so together with Andreas, we wrote resolv_wrapper! This post will show you how resolv_wrapper can help your testing.

Similar to the other wrappers, the resolv_wrapper provides a preloadable version of library calls. In this case it's res_init, res_query, res_search and res_close. These libresolv (or libc, depending on platform) library calls form the basis of DNS resolution routines like gethostbyname and can also be used to resolve less common DNS queries, such as SRV or SOA. In general, a unit test leveraging resolv_wrapper needs to set up its environment (more on that later), preload the libresolv_wrapper.so library using LD_PRELOAD and that's it.

If your test environment has its own DNS server (such as Samba or FreeIPA have), resolv_wrapper allows you to redirect DNS traffic to that server by pointing the test to a resolv.conf file that contains the IP address of your DNS server:
echo "search test.example.com" > /tmp/testresolv.conf
echo "nameserver 127.0.0.1" >> /tmp/testresolv.conf
LD_PRELOAD=libresolv_wrapper.so RESOLV_WRAPPER_CONF=/tmp/testresolv.conf ./dns_unit_test

That would make your dns_unit_test perform all DNS queries through your DNS server running at 127.0.0.1, while your system would still be intact and using the original resolv.conf entries. In some other cases, you might want to test DNS resolution, but maybe you don't want to set up a full DNS server just for the test. For this use case, resolv_wrapper provides the ability to fake DNS replies using a hosts-like text file. Consider a unit test where you want to make sure that kinit can discover a Kerberos KDC with SRV records. Start by defining the hosts-like file:
echo "SRV _kerberos._tcp.example.com kdc.example.com 88" > /tmp/fakehosts
echo "A   kdc.example.com 127.0.0.10" >> /tmp/fakehosts

Then export this hosts file using the RESOLV_WRAPPER_HOSTS environment variable and preload the resolv_wrapper as illustrated before:
LD_PRELOAD=libresolv_wrapper.so RESOLV_WRAPPER_HOSTS=/tmp/fakehosts ./kinit_unit_test

If something is going wrong, resolv_wrapper allows the user to enable debugging by setting RESOLV_WRAPPER_DEBUGLEVEL to a numerical value. The highest allowed value, which enables low-level tracing, is 4.

Let's show a complete example with a simple C program that tries to resolve an A record of kdc.example.com. We'll start with this C source file:
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

#include <netinet/in.h>
#include <arpa/nameser.h>
#include <arpa/inet.h>
#include <resolv.h>

int main(void)
{
        int rv;
        struct __res_state dnsstate;
        unsigned char answer[256];
        char addr[64] = { 0 } ;
        ns_msg handle;
        ns_rr rr;

        memset(&dnsstate, 0, sizeof(struct __res_state));
        res_ninit(&dnsstate);
        res_nquery(&dnsstate, "kdc.example.com", ns_c_in, ns_t_a, answer, sizeof(answer));

        ns_initparse(answer, sizeof(answer), &handle);
        ns_parserr(&handle, ns_s_an, 0, &rr);
        inet_ntop(AF_INET, ns_rr_rdata(rr), addr, sizeof(addr));
        puts(addr);

        return 0;
}

Please note I omitted all error checking to keep the code short.

Compile the file and link it with libresolv:
gcc rwrap_example.c -lresolv -o rwrap_example

And now you can just run the example binary along with resolv_wrapper, using the RESOLV_WRAPPER_DEBUGLEVEL to see the progress:
LD_PRELOAD=libresolv_wrapper.so RESOLV_WRAPPER_HOSTS=/tmp/fakehosts RESOLV_WRAPPER_DEBUGLEVEL=4 ./rwrap_example
RWRAP_TRACE(1970) - _rwrap_load_lib_function: Loaded __res_ninit from libc
RWRAP_TRACE(1970) - rwrap_res_nquery: Resolve the domain name [kdc.example.com] - class=1, type=1
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver: 10.11.5.19
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver: 10.5.30.160
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver: 192.168.1.254
RWRAP_TRACE(1970) - rwrap_res_fake_hosts: Searching in fake hosts file /tmp/fakehosts
RWRAP_TRACE(1970) - rwrap_res_fake_hosts: Successfully faked answer for [kdc.example.com]
RWRAP_TRACE(1970) - rwrap_res_nquery: The returned response length is: 0
127.0.0.10

And that's pretty much it!

The resolv_wrapper lives at the cwrap.org site along with the other wrappers and has its own dedicated page. You can grab the source code from git.samba.org. The git tree includes an RST-formatted documentation file, with even more details and examples. We're also working on making resolv_wrapper usable on platforms other than Linux, although there are still some bugs here and there.
FUDCon Managua: Openshift lleva tu Desarrollo a Otro Nivel (Slides)
For the interested, here are my slides for the topic (in Spanish): OpenShift: Lleva tu Desarrollo a Otro Nivel.
Download it as PDF from this URL: http://goo.gl/TlenXZ
perf.gnome.org – introduction

My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another thing, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.

We already have a continuous build server for GNOME, GNOME Continuous, which is hosted on build.gnome.org. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it, is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways that the build can break, and we’ll never get away from having to keep an eye on them. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

But actually putting performance tests into the set of tests that are run by build.gnome.org doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For a second thing, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

The overall architecture is shown in the following diagram:

HWTest Architecture

The most interesting thing to note in the diagram is that the test machines don’t directly connect to build.gnome.org to download builds or perf.gnome.org to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller machine, so that after power cycling the controller machine can determine what to boot – a special image to do an update or the software being tested. The systemd journal from the test machine is exported over the network to the controller machine so that the controller machine can see when the update is done, and collect test results for publishing to perf.gnome.org.

perf.gnome.org is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once to deal with a problem. Here’s perf.gnome.org catching a regression (fix):

perf.gnome.org regression

I’ll cover more of the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at https://wiki.gnome.org/Projects/HardwareTesting.


GNOME Tweak Tool, still with us
With the rise and release of GNOME 3, many of the features found in the previous version, GNOME 2, were heavily cut back. That included GNOME Shell theme management, making it harder for users to change the default theme to their liking; the mouse options for selecting the desktop background by right-clicking on the desktop; and no longer being able to maximize or minimize windows with the typical window buttons...

Image taken from WorldOfGNOME

The change was so big that many users and friends I knew (myself included) moved to other environments such as KDE, Xfce, LXDE, or plain window managers. Fedora even produced a first 'fork' of GNOME 2 called BlueBubble, which made it possible to keep using GNOME 2 on Fedora 15, the release that brought the first steps of GNOME 3; it is now unmaintained.
On the other hand, the development and packaging team of Linux Mint decided to create a fork of GNOME Shell called Cinnamon, which is still active; Ubuntu launched its own shell called Unity; and the GNOME team developed an alternative to the GNOME 3 look and its classic shell, called GNOME Classic...

However, for all those users who kept (or keep) using GNOME 3 with its default Shell, a tool was created to manage extensions, keyboard shortcuts, GNOME Shell themes, applications started by default, and many other features.

To this day it is still needed, since the extensions tool we discussed in the post on managing GNOME Shell extensions does not let us properly manage Shell themes, the fonts we want to use on our desktop, keyboard shortcuts, or the programs that run after login...

In short, to install it and gain better control over our GNOME 3 environment, we can install this tool under the package name gnome-tweak-tool in Fedora and start tweaking the appearance and certain functional aspects such as keyboard and mouse properties, keyboard shortcuts, and many other things we cannot otherwise configure, at least graphically.
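
A minimal sketch of the installation on Fedora (as root, using the package name mentioned above):

# yum install gnome-tweak-tool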



References

  • Wikipedia
  • Google
FUDCon Managua 2014 begins!
In the most formal way, singing the Nicaraguan national anthem and with words from the rector of the Universidad de Ciencias Comerciales (UCC), FUDCon Managua 2014 (Fedora Users & Developers Conference) gets under way.

Talks such as Fedora.Next with Dennis Gilmore, the Fedora website and its future in Fedora.next with Robert Mayr, open hardware with Valentin Basel :-), and OpenShift with Abdel Martinez break the ice at this great event that brings us together as a community; sharing all this knowledge raises the spirits and the interest of users in contributing to this great community.

Cheers!
Clipboard in GNOME 3
A curious fact about the default GNOME 3 installation in Fedora is that it does not ship with a clipboard manager. For those who don't know what a 'clipboard manager' is, it's a program that manages what we copy, cut, and paste with the famous CTRL+X, C, or V keys, or with the mouse by right-clicking and selecting 'Copy, Paste...'.

There is a clipboard manager that integrates quite well into GNOME 3 called GPaste. We can install it with the gpaste package, plus the gnome-shell-extension-gpaste extension for GNOME Shell.
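
A minimal sketch of the installation (as root, using the package names mentioned above):

# yum install gpaste gnome-shell-extension-gpaste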

Viewing the clipboard history
GPaste provides a multitude of features, ranging from choosing how many lines to keep in the history, through storing images that were previously copied or cut, to configuring keyboard shortcuts or backing up what we have copied, among many other features.

Managing some GPaste settings



Managing GNOME Shell extensions
As we saw in the previous article on installing GNOME Shell extensions, there is a small section of that page that lets us manage the extensions previously installed in our user directory. Here we will talk about how to 'invoke' the utility that modifies the parameters and settings of each extension.

GNOME Shell provides two tools with which we can configure our extensions. They are included in the gnome-shell package, so any user running GNOME 3 already has them on the system.

The tool can be used graphically or from the terminal.

Graphically, the executable is called gnome-shell-extension-prefs.



From the drop-down menu we can choose the options for each GNOME Shell extension.

Editing the options of the 'Alternate Tab' extension
On the other hand, we can manage them non-graphically with the gnome-shell-extension-tool command.

The syntax of the command is as follows:
$ gnome-shell-extension-tool --help
Usage: gnome-shell-extension-tool [options]
Options:
  -h, --help            show this help message and exit
  -d DISABLE, --disable-extension=DISABLE
                        Disable a GNOME Shell extension
  -e ENABLE, --enable-extension=ENABLE
                        Enable a GNOME Shell extension
  -c, --create-extension
                        Create a new GNOME Shell extension
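
For example, enabling or disabling an extension by its UUID looks like this (the UUID below is just an illustration; use the one from your installed extension's metadata.json):

$ gnome-shell-extension-tool -e alternate-tab@gnome-shell-extensions.gcampax.github.com
$ gnome-shell-extension-tool -d alternate-tab@gnome-shell-extensions.gcampax.github.com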

References

GNOME 3 ~ Extension preferences
yum whatprovides  gnome-shell-extension-tool
Installing GNOME Shell extensions
Many of us who use GNOME 3, much as happens with KDE and its 'add-ons', are drawn to enrich it further and extend, boost, or improve the environment for our own purposes. For that there is an extensions page available, from which we can install the 'add-ons' we want directly from our browser.


That little error message will pop up. Just click "Allow..." ("Habilitar..." in Spanish) and GNOME Shell will be detected, and that's it. We can install everything we want, add a new extension we have created (after registering on the website, of course), or view the extensions installed for our user.

Viewing the extensions we have installed



As we can see, we have the obvious options for managing the extensions installed for our user. We can enable them, disable them, configure them, or simply remove them.

Installing extensions



It is as simple as choosing one of them and flipping the switch. The following confirmation message will pop up, and then we will have it installed.



FUDCon LATAM 2014

FUDCon LATAM started this morning in Managua, Nicaragua. The turnout is great and there is a high level of anticipation of what is to come in the next few days. The organisers have done a really good job in planning. I for one am looking forward to seeing what is new in the Fedora world in LATAM.

The LATAM Fedora community is very passionate about Fedora and growing every day. I will be writing more about the happenings over the next few days.

Latinoware 2014

The Latin American Free Software Conference (Latinoware) was held on October 15, 16, and 17, 2014 at the ITAIPU Technology Park (PTI) in the city of Foz do Iguaçu, PR. The initiative has proven increasingly important in fostering the debate of good ideas and in presenting open-source tools that make our day-to-day lives ever easier. A reference in free software discussions, Latinoware aims to promote actions geared toward economic, scientific, and technological development.

Photo 1 – Opening talk.

The Fedora Project was present at yet another big event, aiming to show participants the value of free software in society, in business, and in public and private institutions, and also seeking to bring new contributors to the Fedora Project. The Fedora Project counted on the participation of Ambassadors from several parts of Brazil and from several countries: Bahia (Murilo Mattos Maia), São Paulo (Davi Doyal), Espírito Santo (Ramilton Costa), Daniel Lara (Rio Grande do Sul), Marcelo Barbosa (Rio Grande do Sul), Wolnei Junior (Jaraguá do Sul), Rino Rondan (Argentina), Juan (Paraguay), among other ambassadors.

Photo – Fedora booth.

At this edition of Latinoware the Fedora Project gave several talks, among which we highlight:

  • ARM on Fedora
  • Bacula server with Fedora
  • Fedora QA
  • Fedora Beyond the Project
  • Digital inclusion with Fedora
Davi Doyal – Fedora Beyond the Project

Marcelo Barbosa – ARM on Fedora

Daniel Lara – Bacula on Fedora

At our booth we also had an introduction to packaging with our friend Rino Rondan, who covered the subject with great wisdom and knowledge, showing that packaging is not as hard as it seems.


Rino Rondan and Wolnei Junior – Introduction to packaging

The Fedora Project also handed out media, pens, and stickers. At the end of the event I can say we planted many seeds that will surely bear a lot of fruit. We achieved our goals and added value to the Fedora Project. I thank the Fedora Project for its support and for believing in our work within the community.


The UEFI Security Databases

A little background

When we talk about UEFI Secure Boot, a lot of times we talk about what we call the whitelist and the blacklist. These databases go by many names—formally EFI_IMAGE_SECURITY_DATABASE and EFI_IMAGE_SECURITY_DATABASE1, sometimes d719b2cb-3d3a-4596-a3bc-dad00e67656f:db and d719b2cb-3d3a-4596-a3bc-dad00e67656f:dbx, but most often just db and dbx. These two[1] databases, stored in UEFI Authenticated Variables[2], constitute lists of which binaries can and cannot be executed within a UEFI environment when Secure Boot is enabled.

When a UEFI binary is loaded, the system first checks to see if any revocations in dbx are applicable to that binary. A dbx entry may apply in three ways—it may contain the hash of a specific binary, an X.509 certificate, or the hash of a certificate[3]. If any of these matches the binary in question, UEFI raises the error EFI_SECURITY_VIOLATION, and your binary will not go to space today.

If a binary successfully passes that hurdle, then the same verification method is processed with db, the whitelist, but with the opposite policy: if any entry describes the binary in question, verification has succeeded. If no entry does, EFI_SECURITY_VIOLATION is raised.
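
To make the check order concrete, here is a minimal sketch in C of the decision logic described above (this is illustrative only, not firmware source; the entry type and the comparison are hypothetical stand-ins for the real hash and certificate matching):

#include <stdbool.h>
#include <stddef.h>

/* Placeholder for a db/dbx entry: a hash or a certificate. */
struct sig_entry { int dummy; };

/* Stands in for "this entry contains the binary's hash, its signing
 * certificate, or the hash of that certificate". */
static bool entry_matches(const struct sig_entry *e,
                          const void *image, size_t len)
{
        (void)e; (void)image; (void)len;
        return false;   /* real firmware compares hashes/certificates here */
}

static bool matches_any(const struct sig_entry *list, size_t count,
                        const void *image, size_t len)
{
        for (size_t i = 0; i < count; i++)
                if (entry_matches(&list[i], image, len))
                        return true;
        return false;
}

/* Returns 0 for "run it", nonzero for EFI_SECURITY_VIOLATION. */
int verify_image(const struct sig_entry *db, size_t db_count,
                 const struct sig_entry *dbx, size_t dbx_count,
                 const void *image, size_t len)
{
        if (matches_any(dbx, dbx_count, image, len))
                return 1;       /* revoked: refuse to run */
        if (matches_any(db, db_count, image, len))
                return 0;       /* explicitly whitelisted */
        return 1;               /* listed nowhere: also refused */
}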

The need for updates

When a UEFI binary is discovered to have a serious security flaw which would allow a malicious user to circumvent Secure Boot, it becomes necessary to prevent it from running on machines. When this happens, the UEFI CA issues a dbx update. The mechanism for the update is a file that’s structured as a UEFI Authenticated Variable append. This file gets distributed as part of an OS update, and when the update is applied, the UEFI variable is updated.

One mechanism for doing this in Linux is via dbxtool. In Fedora, the dbxtool package includes the current dbx updates in /usr/share/dbxtool/DBXUpdate-2014-04-13-22-14-00.bin, as well as a systemd service to apply them during boot[4]. In the special case of Fedora’s shim being added to a blacklist, we would include a dependency on the fixed version of shim in the dbxtool package so that systems would remain bootable.

The structure of a `dbx` update

The dbx variable is composed of an array of EFI_SIGNATURE_LIST structures, each themselves containing an array of EFI_SIGNATURE_DATA entries. A dbx update is that same structure, but wrapped in an EFI_VARIABLE_AUTHENTICATION_2 structure to authenticate it. The definitions look like this[5]:

wincert.h:
typedef struct {
        efi_guid_t      SignatureOwner;         // who owns this entry
        uint8_t         SignatureData[0];       // the data we want to
                                                // fish out of this thing
} EFI_SIGNATURE_DATA;

typedef struct {
        efi_guid_t      SignatureType;       // type of structure in
                                             // EFI_SIGNATURE_DATA.SignatureData
        uint32_t        SignatureListSize;   // Total size of the signature
                                             // list, including this header.
        uint32_t        SignatureHeaderSize; // Size of type-specific header
        uint32_t        SignatureSize;       // The size of each individual
                                             // EFI_SIGNATURE_DATA.SignatureData
                                             // in this list.
        // uint8_t      SignatureHeader[SignatureHeaderSize]
                                             // this is a header defined by
                                             // and for each specific
                                             // signature type.  Of course
                                             // none of them actually define
                                             // a header.
        // EFI_SIGNATURE_DATA[...][SignatureSize] // actual signature data
} EFI_SIGNATURE_LIST;

typedef struct {
        efi_guid_t        HashType;
        uint8_t           PublicKey[256];
        uint8_t           Signature[256];
} EFI_CERT_BLOCK_RSA_2048_SHA256;

typedef struct {
        uint32_t        dwLength;         // Length of this structure
        uint16_t        wRevision;        // Revision of this structure (2)
        uint16_t        wCertificateType; // The kind of signature this is
        //uint16_t      bCertificate[0];  // The signature data itself. This
                                          // is actually, and not the least
                                          // bit confusingly, the rest of
                                          // the WIN_CERTIFICATE_EFI_GUID
                                          // structure wrapping this one.
} WIN_CERTIFICATE;

#define WIN_CERT_TYPE_PKCS_SIGNED_DATA  0x0002
#define WIN_CERT_TYPE_EFI_PKCS115       0x0ef0
#define WIN_CERT_TYPE_EFI_GUID          0x0ef1

typedef struct {
        WIN_CERTIFICATE   Hdr;         // Info about which structure this is
        efi_guid_t        CertType;    // Type of certificate in CertData
        uint8_t           CertData[0]; // A certificate of some kind
} WIN_CERTIFICATE_EFI_GUID;

typedef struct {
        EFI_TIME                 TimeStamp; // monotonically increasing
                                            // timestamp to prevent replay
                                            // attacks.
        WIN_CERTIFICATE_EFI_GUID AuthInfo;  // Information about how to
                                            // authenticate this variable
                                            // against some KEK entry
} EFI_VARIABLE_AUTHENTICATION_2;
</figure>

Conceptually, this means the structure we’ve got is:

<figure class="code">
[ dbx update file:
  [ Authentication structure:
    [ monotonic number | timestamp ]
    [ WIN_CERTIFICATE Header ]
    [ Cert Type ]
    [ Certificate Data ] ]
  [ EFI_SIGNATURE_LIST:
    [ EFI_SIGNATURE_DATA ]
    [ EFI_SIGNATURE_DATA ]
    [ EFI_SIGNATURE_DATA ] ]
  [ EFI_SIGNATURE_LIST:
    [ EFI_SIGNATURE_DATA ]
    ...
    [ EFI_SIGNATURE_DATA ] ]
  ... ]
</figure>

So a full update looks something like this:

<figure class="code"><figcaption>auth2.TimeStamp</figcaption>
00000000  da 07 03 06 13 11 15 00  00 00 00 00 00 00 00 00  |................|
</figure>

2010-03-06 19:17:21 GMT+0000

<figure class="code"><figcaption>auth2.AuthInfo.Hdr.DwLength</figcaption>
1
00000010  bd 0c 00 00                                       |....            |
</figure>

It is 0x00000cbd bytes long6.

<figure class="code"><figcaption>auth2.AuthInfo.Hdr.wRevision</figcaption>
00000010              00 02                                 |    ..          |
</figure>

Its revision is 2. It is always revision 2.

<figure class="code"><figcaption>auth2.AuthInfo.Hdr.wCertificateType</figcaption>
00000010                    f1 0e                           |      ..        |
</figure>

0x0ef1, which as we see above is WIN_CERT_TYPE_EFI_GUID. The interesting bit here is that .bCertificate isn’t quite a real thing. This is actually describing that auth2.AuthInfo is a WIN_CERTIFICATE_EFI_GUID, and .bCertificate is actually the fields other than auth2.AuthInfo.Hdr: auth2.AuthInfo.CertType and auth2.AuthInfo.CertData. Again, this shouldn’t confuse you at all. As a result, next, in the place of .bCertificate, we have:

<figure class="code"><figcaption>auth2.AuthInfo.CertType</figcaption>
00000010                           9d d2 af 4a df 68 ee 49  |        ...J.h.I|
00000020  8a a9 34 7d 37 56 65 a7                           |..4}7Ve.        |
</figure>

4aafd29d-68df-49ee-8aa9-347d375665a7, aka EFI_GUID_PKCS7_CERT, which means that .CertData is verified against EFI_SIGNATURE_DATA entries in the Key Exchange Keys database (KEK), but only those entries which are contained in EFI_SIGNATURE_LIST structures with .SignatureType of EFI_CERT_X509_GUID. Whew.

<figure class="code"><figcaption>auth2.AuthInfo.CertData</figcaption>
00000020                           30 82 0c a1 02 01 01 31  |        0......1|
00000030  0f 30 0d 06 09 60 86 48  01 65 03 04 02 01 05 00  |.0...`.H.e......|
[ what an X.509 certificate looks like is left as an exercise for the reader ]
00000cb0  b6 2b 89 02 73 c4 86 57  83 6f 28 57 e0 12 cb 05  |.+..s..W.o(W....|
00000cc0  6d d0 3e 60 8f 85 9f dd  fc 46 ac 54 44           |m.>`.....F.TD   |
</figure>

Yep, that’s an ASN.1 DER encoding of a PKCS-7 Certificate.
That’s the end of the EFI_VARIABLE_AUTHENTICATION_2 structure, and so on to the actual data:

<figure class="code"><figcaption>EFI_SIGNATURE_LIST.SignatureType</figcaption>
00000cc0                                          26 16 c4  |             &..|
00000cd0  c1 4c 50 92 40 ac a9 41  f9 36 93 43 28           |.LP.@..A.6.C(   |
</figure>

That’s c1c41626-504c-4092-aca9-41f936934328, aka EFI_GUID_SHA256.

<figure class="code"><figcaption>EFI_SIGNATURE_LIST.SignatureListSize</figcaption>
00000cd0                                          cc 01 00  |             ...|
00000ce0  00                                                |.               |
</figure>

The list size is 0x000001cc bytes.

<figure class="code"><figcaption>EFI_SIGNATURE_LIST.SignatureHeaderSize:</figcaption>
00000ce0     00 00 00 00                                    | ....           |
</figure>

This is actually always 0.

<figure class="code"><figcaption>EFI_SIGNATURE_LIST.SignatureSize</figcaption>
00000ce0                 30 00 00  00                       |     0...       |
</figure>

0x00000030 bytes. That’s 16 bytes for the efi_guid_t, and since .SignatureType was EFI_GUID_SHA256, 32 bytes of SHA-256 data.

<figure class="code"><figcaption>EFI_SIGNATURE_DATA[0].SignatureOwner</figcaption>
00000ce0                              bd 9a fa 77 59 03 32  |.....0......wY.2|
00000cf0  4d bd 60 28 f4 e7 8f 78  4b                       |M.`(...xK       |
</figure>

77fa9abd-0359-4d32-bd60-28f4e78f784b, which is the GUID Microsoft uses to identify themselves7

<figure class="code"><figcaption>EFI_SIGNATURE_DATA[0].SignatureData</figcaption>
00000cf0                              80 b4 d9 69 31 bf 0d  |         ...i1..|
00000d00  02 fd 91 a6 1e 19 d1 4f  1d a4 52 e6 6d b2 40 8c  |.......O..R.m.@.|
00000d10  a8 60 4d 41 1f 92 65 9f  0a                       |.`MA..e..       |
</figure>

This is the actual SHA-256 data. It goes on like this:

<figure class="code"><figcaption>EFI_SIGNATURE_DATA[1..8]</figcaption>
00000d10                              bd 9a fa 77 59 03 32  |         ...wY.2|
00000d20  4d bd 60 28 f4 e7 8f 78  4b f5 2f 83 a3 fa 9c fb  |M.`(...xK./.....|
00000d30  d6 92 0f 72 28 24 db e4  03 45 34 d2 5b 85 07 24  |...r($...E4.[..$|
00000d40  6b 3b 95 7d ac 6e 1b ce  7a bd 9a fa 77 59 03 32  |k;.}.n..z...wY.2|
00000d50  4d bd 60 28 f4 e7 8f 78  4b c5 d9 d8 a1 86 e2 c8  |M.`(...xK.......|
00000d60  2d 09 af aa 2a 6f 7f 2e  73 87 0d 3e 64 f7 2c 4e  |-...*o..s..>d.,N|
00000d70  08 ef 67 79 6a 84 0f 0f  bd                       |..gyj....       |
...
00000e60                              bd 9a fa 77 59 03 32  |         ...wY.2|
00000e70  4d bd 60 28 f4 e7 8f 78  4b 53 91 c3 a2 fb 11 21  |M.`(...xKS.....!|
00000e80  02 a6 aa 1e dc 25 ae 77  e1 9f 5d 6f 09 cd 09 ee  |.....%.w..]o....|
00000e90  b2 50 99 22 bf cd 59 92  ea                       |.P."..Y..|
</figure>

How the structure is used

When you apply a database update, you’re basically doing a SetVariable() call to UEFI with a couple of flags set:

<figure class="code"><figcaption></figcaption>
  rc = efi_set_variable(EFI_IMAGE_SECURITY_DATABASE_GUID, "dbx", data, len,
                        EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS |
                        EFI_VARIABLE_APPEND_WRITE |
                        EFI_VARIABLE_BOOTSERVICE_ACCESS |
                        EFI_VARIABLE_RUNTIME_ACCESS |
                        EFI_VARIABLE_NON_VOLATILE);
</figure>

These flags tell the firmware some crucial things – that this variable is authenticated with the EFI_VARIABLE_AUTHENTICATION_2 structure, that this is an append, that both Boot Services (i.e. the firmware) and Runtime Services (i.e. the OS) should have access to it, and that it should persist across a reboot. As a special case in the spec, an append has a special meaning for the UEFI security databases:

For variables with the GUID EFI_IMAGE_SECURITY_DATABASE_GUID (i.e. where the data buffer is formatted as EFI_SIGNATURE_LIST), the driver shall not perform an append of EFI_SIGNATURE_DATA values that are already part of the existing variable value.
Note: This situation is not considered an error, and shall in itself not cause a status code other than EFI_SUCCESS to be returned or the timestamp associated with the variable not to be updated.

<footer>UEFI Specification section 7.2.1 Revision 2.4</footer>

As a result, what happens here is that the first time you write to dbx, any EFI_SIGNATURE_LIST structures and the EFI_SIGNATURE_DATA entries they contain get added to the variable, but not the EFI_VARIABLE_AUTHENTICATION_2 structure. Then when later dbx updates are issued, they contain a superset of the previous ones. When you apply them, the firmware only appends the difference to the variable.
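If you want to inspect the resulting variable on a running system, efivarfs exposes it as a file. A minimal sketch, assuming efivarfs is mounted in the usual place; note that efivarfs prepends a 4-byte attributes field, so the EFI_SIGNATURE_LIST data starts at offset 4:

<figure class="code">
# dbx lives under EFI_IMAGE_SECURITY_DATABASE_GUID
hexdump -C /sys/firmware/efi/efivars/dbx-d719b2cb-3d3a-4596-a3bc-dad00e67656f | head
</figure>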

Tools

Obviously a system this complex needs some tools. To manage these databases on linux, I’ve written a tool called dbxtool, which may be available in your linux distribution of choice. It can be used to apply dbx changes8, to list the contents of the UEFI Security Databases, and to list the contents of update files:

<figure class="code">
fenchurch:~$ dbxtool -l
   1: {microsoft} {sha256} 80b4d96931bf0d02fd91a61e19d14f1da452e66db2408ca8604d411f92659f0a
   2: {microsoft} {sha256} f52f83a3fa9cfbd6920f722824dbe4034534d25b8507246b3b957dac6e1bce7a
   3: {microsoft} {sha256} c5d9d8a186e2c82d09afaa2a6f7f2e73870d3e64f72c4e08ef67796a840f0fbd
   4: {microsoft} {sha256} 363384d14d1f2e0b7815626484c459ad57a318ef4396266048d058c5a19bbf76
   5: {microsoft} {sha256} 1aec84b84b6c65a51220a9be7181965230210d62d6d33c48999c6b295a2b0a06
   6: {microsoft} {sha256} e6ca68e94146629af03f69c2f86e6bef62f930b37c6fbcc878b78df98c0334e5
   7: {microsoft} {sha256} c3a99a460da464a057c3586d83cef5f4ae08b7103979ed8932742df0ed530c66
   8: {microsoft} {sha256} 58fb941aef95a25943b3fb5f510a0df3fe44c58c95e0ab80487297568ab9771
   9: {microsoft} {sha256} 5391c3a2fb112102a6aa1edc25ae77e19f5d6f09cd09eeb2509922bfcd5992ea
  10: {microsoft} {sha256} d626157e1d6a718bc124ab8da27cbb65072ca03a7b6b257dbdcbbd60f65ef3d1
  11: {microsoft} {sha256} d063ec28f67eba53f1642dbf7dff33c6a32add869f6013fe162e2c32f1cbe56d
  12: {microsoft} {sha256} 29c6eb52b43c3aa18b2cd8ed6ea8607cef3cfae1bafe1165755cf2e614844a44
  13: {microsoft} {sha256} 90fbe70e69d633408d3e170c6832dbb2d209e0272527dfb63d49d29572a6f44c
</figure>

As usual, there are a couple of places where vendors have not gotten everything quite right, and sometimes things fail to work correctly—but that’s for another post.


  1. There is actually a third in this set, dbt, which can be used in revocation processing.

  2. Strictly speaking these are only an analog to Authenticated Variables, which can only be appended to, replaced, or deleted by updates signed with the key that created them. Key database updates are instead controlled by a list of keys stored in another variable called KEK – the Key Exchange Keys. Otherwise the mechanism is the same.

  3. That is, the digest of the certificate’s TBSCertificate as defined in RFC 5280 section 4.1.1.1, using the digest specified in the database entry itself.

  4. This is currently disabled by default in Fedora. I’m looking at enabling this as an F22 feature. Getting these things right is important, and it takes time.

  5. I have left out the definitions of EFI_TIME and efi_guid_t; they are quite boring.

  6. Here dwLength includes the size of Hdr itself (that is, the size of dwLength, wRevision, and wCertificateType) as well as the data following it (bCertificate). Because we live in the best of all possible worlds, Authenticode signatures—the signatures on binaries themselves—use the same structure but only include the size of Hdr.bCertificate. This hasn’t ever confused anybody.

  7. Big congratulations to Acer, who used 55555555-5555-5555-5555-555555555555 for this in one of their db entries. Not only did they win the random number generator lottery to get that, but also experienced a minimum of 3 single bit errors, since the closest valid GUID to that is 55555555-5555-4555-9555-555555555555.

  8. Also theoretically db updates, dbt updates, and KEK updates, but those are much more rare.

Fedora Day @ DevConf.cz 2015

DevConf.cz is the largest developer conference devoted to Red Hat related technologies (Linux, JBoss, OpenShift, OpenStack,…). This year, there were around 1000 attendees which is a sizable number for a deeply technical conference. Because we were hitting the capacity limits of the venue, we decided to move the event to a different university campus which offers more rooms – FIT BUT. For those who attended GUADEC 2013: it’s the same venue.

The next DevConf.cz will span three days again – February 6-8th. And like this year, we’d like to make the last day a Fedora Day. I think the Fedora Day was a success this year. Matthew Miller delivered his FPL keynote on Fedora.Next, representatives of working groups spoke about their progress, and there was overall an interesting discussion about the direction Fedora was taking. Not counting Flock, DevConf.cz is the conference with the largest number of Fedora contributors, so why not use it for discussions, planning, and hacking?

I’m also in talks with the CentOS guys whether they want to join us for the Fedora Day and make it a Fedora & CentOS Day. I think there are quite a few topics the two projects can discuss.

DevConf.cz’s CfP has been open for some time and will remain open till Dec 1st. If you have an interesting topic for a talk, workshop, or hackfest, submit it. And even if you don’t, consider attending. I assure you that you will enjoy the conference. You will have a chance to attend a lot of Fedora-related talks and meet many interesting people from the project.

devconf-logo


How to select and set default applications in Fedora

Do you have a type of document you want to open with a specific default application in Fedora? For example, do you want to always open JPG or PNG files in The GIMP, WAV files in Audacity, or SVG files in Inkscape? You can do this and more in the interface of the Files file browser (also known as nautilus) in Fedora via the Properties window.

Right-click on any file of the type you’re interested in. Then select Properties to bring up the file properties. Select the Open With pane, and select the desired app. You can then either open the file this time only with that app, or you can select Set as default to always open with the new app from now on.

Take a look at the demonstration graphic below:

set_as_default

Many apps automatically provide a default assignment like this when installed. However, often file types can be opened by more than one app. Now you can choose the one you prefer.
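If you prefer the command line, the same association can be set with xdg-mime. A hedged sketch, since the exact .desktop file name depends on how the application is packaged on your system:

xdg-mime default gimp.desktop image/png image/jpeg
xdg-mime query default image/png

The first command sets GIMP as your per-user default handler for PNG and JPEG files; the second prints the current default so you can verify the change.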

More Fedora in life

I have been using Fedora since the very first release and started contributing to the project around 2005. I worked on Fedora in my free time before I joined Red Hat in 2008, during my time at Red Hat, and after I left Red Hat last year.

But for the last two weeks I have been working on Fedora not only in my free time but also as my day job. I am the Fedora Cloud Engineer, part of the Fedora Engineering team and part of the amazing community of long-time Fedora Friends.

Using docker in Fedora for your development work

Last week I worked on DNF for the first time. In this post I am going to explain how I used Docker and a Fedora cloud instance for that work.

I was using a CentOS VM as my primary work system for the last two weeks, and I had access to a cloud, so I created a Fedora 20 instance there.

The first step was to install docker in it and update the system. I also had to upgrade the selinux-policy package and reboot the instance.

# yum upgrade selinux-policy -y; yum update -y
# reboot
# yum install docker-io
# systemctl start docker
# systemctl enable docker

Then pull in the Fedora 21 Docker image.

# docker pull fedora:21

The above command will take time as it will download the image. After this we will start a Fedora 21 container.

# docker run -t -i fedora:21 /bin/bash

We will install all the required dependencies in the image, using yum as you normally would, and then get out by pressing Ctrl+d.

[root@3e5de622ac00 /]# yum install dnf python-nose python-mock cmake -y

Now we can commit this as a new image so that we can reuse it in the future. We do this with the docker commit command.

#  docker commit -m "with dnf" -a "Kushal Das" 3e5de622ac00 kushaldas/dnfimage

After this, the only thing left is to start a container with this newly created image and a mounted directory from the host machine.

# docker run -t -i -v /opt/dnf:/opt/dnf kushaldas/dnfimage /bin/bash

This command assumes the code is already in the /opt/dnf of the host system. Even if I managed to do something bad in that container, my actual host is safe. I just have to get out of the container and start a new one.
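As a variation, the same image can run the test suite non-interactively instead of dropping into a shell. A rough sketch; the exact build and test invocation depends on DNF’s source tree (here I assume nosetests drives the tests, since python-nose was installed above):

# docker run -t -i -v /opt/dnf:/opt/dnf kushaldas/dnfimage /bin/bash -c 'cd /opt/dnf && nosetests -s tests/'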

Linux Container Security
First, read these slides. Done? Good.

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].
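To make the capability point concrete, here is a hedged sketch using Docker's flags (the image name is a placeholder, the capabilities a workload actually needs vary, and this does nothing about /proc, /sys or /dev on its own):

# drop every capability, then add back only what the workload needs
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE -t -i some/image /bin/bash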

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:

  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.

Contributing to the Fedora Project

One of the many things I do for the Fedora Project is tagging; it’s something anyone can do, and it’s a quick and easy way to give back to Fedora.

First go here

https://apps.fedoraproject.org/tagger/

blog1

 

You’ll need to login with your FAS credentials.

blog2

You’ll then be taken to this page

blog3

 

From this page, you have two choices: you can like an existing tag, or you can suggest additional ones. Just click the RIGHT triangle to move to the next page if you don’t want to do anything on the current package. Now, like most of us, I don’t really know what all packages do, so how am I going to be able to suggest new tags?

Well I personally go here

blog4

http://rpms.famillecollet.com/rpmphp/zoom.php?rpm=libcomps

You can search for the app you’re looking for, read its description, etc., and then add a tag. In the example above, you could have added library or XML, which have already been suggested. Well, I was going to add those, so I’ll like them instead.

Each tag or like you add increases your score, and you can see your current score at the top right of the page:

blog5

One of the cool things about doing this is you get badges.

blog6

See the bottom three? They are package tagging badges, so now people know I helped out. The more tags you add and tags you like, the more badges you get.

A nice easy way of giving back to the project you so dearly love.

 

 

The post Contributing to the Fedora Project appeared first on Paul Mellors [DOT] NET.


Real money with a browser game.
So I tested the MarketGlory game in my browsers (Chrome and Firefox).
MarketGlory is an economic, political, social and military strategy game.
It works with all currencies and lets you use the virtual money to your benefit.
Every user has one referral in the game; this gives you some goals set by the game.
You can work once per day, and this gives you money and experience points.
Also, as you grow you can build your own companies, hire workers, and sell your products on the local or global market. It's very simple and you can try it.
I think this will help me buy my Fedora DVD :), or maybe set up a Fedora organization to help others.

What do you think about this game compared to other games?
An early view of GTK+ 3.16

A number of new features have landed in GTK+ recently. These are now available in the 3.15.0 release. Here is a quick look at some of them.

Overlay scrolling

We’ve had long-standing feature requests to turn scrollbars into overlayed indicators, for touch systems. An implementation of this idea has been merged now. We show traditional scrollbars when a mouse is detected, otherwise we fade in narrow, translucent indicators. The indicators are rendered on top of the content and don’t take up extra space. When you move the pointer over the indicator, it turns into a full-width scrollbar that can be used as such.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1295-1" preload="metadata" width="474"><source src="http://blogs.gnome.org/mclasen/files/2014/10/overlay-scroll2.webm?_=1" type="video/webm">http://blogs.gnome.org/mclasen/files/2014/10/overlay-scroll2.webm</video>

Other new scrolling-related features are support for synchronized scrolling of multiple scrolled windows with a shared scrollbar (like in the meld side-by-side view), and an ::edge-overshot signal that is generated when the user ‘overshoots’ the scrolling at either end.

OpenGL support

This is another very old request – GtkGLExt and GtkGLArea have been around for more than a decade.  In 3.16, GTK+ will come with a GtkGLArea widget.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1295-2" preload="metadata" width="474"><source src="http://blogs.gnome.org/mclasen/files/2014/10/opengl.webm?_=2" type="video/webm">http://blogs.gnome.org/mclasen/files/2014/10/opengl.webm</video>

Alex’ commit message explains all the details, but the high-level summary is that we now render with OpenGL when we have to, and we can  fully control the stacking of pieces that are rendered with OpenGL or with cairo: You can have a translucent popup over a 3D scene, or mix buttons into your 3D views.

While it is nice to have a GLArea widget, the real purpose is to prepare GDK for Emmanuele’s scene graph work, GSK.

A Sidebar widget

Ikey Doherty contributed the GtkSidebar widget. It is a nice and clean widget to turn the pages of a GtkStack.

sidebar

IPP Printing

The GTK+ print dialog can now handle IPP printers which don’t provide a PPD to describe their capabilities.

Pure CSS theming

For the last few years, we’ve implemented more and more CSS functionality in the GTK+ style code. In 3.14, we were able to turn Adwaita into a pure CSS theme. Since CSS has clear semantics that don’t include ‘call out to arbitrary drawing code’, we are not loading and using theme engines anymore.

We’ve landed this change early in 3.15 to give theme authors enough time to convert their themes to CSS.

More to come

With these features, we are about halfway through our plans for 3.16. You can look at the GTK+ roadmap to see what else we hope to achieve in the next few months.

Trinity and pages of random data.

Something trinity uses a lot of is pages of random data. They get passed around to syscalls, ioctls, whatever. 5 years ago, before I’d even added multiple children to trinity, this was done using ‘page_rand’. A single page allocated on startup, that was passed around, and scribbled over by anyone who needed something to scribble over.

After the VM work I did earlier this year, where we recycle successful calls to mmap, and inherit them across children, quite a few places started passing around map structs instead. This was good, because it started shaking out the many many kernel bugs that we had lingering in huge page support.

It kind of sucked that we had two sets of routines for doing things like “get a page”, “dirty a page” etc which were fundamentally the same operations, except one set worked on a pointer, and one on a struct. It also sucked that the page_rand code was actually buggy in a number of ways, which showed up as overruns.

Over time, I’ve been trying to move all the code that used page_rand to using mappings instead. Today I finished that work, and ripped out the last vestiges of page_rand support. The only real remnant of the supporting code was some of the dirtying code. We used to have separate ‘dirty page_rand’ and ‘dirty an mmap’ routines. After today’s work, there’s now a single set of functions for mappings. There’s still a bunch more consolidation and cleanup to do, which I’ll get fixed up and merged over the next week.

The only feature that’s now missing is periodic dirtying of mappings. We did this every 100 syscalls for page_rand. Right now we only dirty mmap’s after a mmap() call succeeds, or on an mremap(). I plan on getting this done tomorrow.

The motivation for ripping out all this code, and unifying a lot of the support code is that a lot of code paths get simpler, and more importantly, the code in place now takes ‘len’ arguments, so we’re in a better position to make sure we’re not passing buffers that are too small when we do random syscalls.

In other news: while I was happy to report a few days ago that 3.18rc1 fixed up the btrfs bug that had been bothering me for a while, I’ve now managed to discover two new btrfs bugs [1]. [2]. Grumble.

Trinity and pages of random data. is a post from: codemonkey.org.uk

October 22, 2014

Testing Evolution’s git master and GNOME continuous

I’ve wanted a feature in Evolution for a while. It was formally requested in 2002, and it just recently got fixed in git master. I only started publicly groaning about this missing feature in 2013, and mcrha finally patched it. I tested the feature and found a small bug, mcrha patched that too, and I finally re-tested it. Now I’m blogging about this process so that you can get involved too!

Why Evolution?

  • Evolution supports GPG (Geary doesn’t, Gmail doesn’t)
  • Evolution has a beautiful composer (Gmail’s sucks, just try to reply inline)
  • Evolution is Open Source and Free Software (Gmail is proprietary)
  • Evolution integrates with GNOME (Gmail doesn’t)
  • Evolution has lots of fancy, mature features (Geary doesn’t)
  • Evolution cares about your privacy (Gmail doesn’t)

The feature:

I’d like to be able to select a bunch of messages and click an archive action to move them to a specific folder. Gmail popularized this idea in 2004, two years after it was proposed for Evolution. It has finally landed.

In your account editor, you can select the “Archive Folder” that you want messages move to:

evolution-account-archive-folder

This will let you have a different folder set per account.

Archive like Gmail does:

If you use Evolution with a Gmail account, and you want the same functionality as the Gmail archive button, you can accomplish this by setting the Evolution archive folder to point to the Gmail “All Mail” folder, which will cause the Evolution archive action to behave as Gmail’s does.

To use this functionality (with or without Gmail), simply select the messages you want to move, and click the “Archive…” button:

evolution-context-menu-archive

This is also available via the “Message” menu. You can also activate with the Control-Alt-a shortcut. For more information, please read the description from mcrha.

GNOME Continuous:

Once the feature was patched in git master, I wanted to try it out right away! The easiest way for me to do this, was to use the GNOME Continuous project that walters started. This GNOME project automatically kicks off integration builds of multiple git master trees for the most common GNOME applications.

If you follow the Gnome Continuous instructions, it is fairly easy to download an image, and then import it with virt-manager or boxes. Once it had booted up, I logged into the machine, and was able to test Evolution’s git master.

Digging deep into the app:

If you want to tweak the app for debugging purposes, it is quite easy to do this with GTKInspector. Launch it with Control-Shift-i or Control-Shift-d, and you’ll soon be poking around the app’s internals. You can change the properties you want in real-time, and then you’ll know which corresponding changes in the upstream source are necessary.

Finding a bug and re-testing:

I did find one small bug with the Evolution patches. I’m glad I found it now, instead of having to wait six months for a new Fedora version. The maintainer fixed it quickly, and all that was left to do was to re-test the new git master. To do this, I updated my GNOME Continuous image.

  1. Click on Control-Alt-F2 from the virt-manager “Send Key” menu.
  2. Log in as root (no password)
  3. Set the password to something by running the passwd command.
  4. Click on Control-Alt-F1 to return to your GNOME session.
  5. Open a terminal and run: pkexec bash.
  6. Enter your root password.
  7. Run ostree admin upgrade.
  8. Once it has finished downloading the updates, reboot the vm.

You’ll now be able to test the newest git master. Please note that it takes a bit of time for it to build, so it is not instant, but it’s pretty quick.

Taking screenshots:

I took a few screenshots from inside the VM to show to you in this blog post. Extracting them was a bit trickier because I couldn’t get SSHD running. To do so, I installed the guestfs browser on my host OS. It was very straight forward to use it to read the VM image, browse to the ~/Pictures/ directory, and then download the images to my host. Thanks rwmjones!

Conclusion:

Hopefully this will motivate you to contribute to GNOME early and often! There are lots of great tools available, and lots of applications that need some love.

Happy Hacking,

James


Creating a jigsaw puzzle with Inkscape and GIMP

Here is a neat tutorial that uses both Inkscape and GIMP to create a bunch of puzzle pieces from a single image. The tutorial also uses an extension that is not included in Inkscape by default, so to do this tutorial, you will also learn how to install extensions for Inkscape.

title step_3

Using FreeIPA as a backend for DHCP
 

Yeah, this…

Disclaimer: This is not an official guide and in no way represents best practices for FreeIPA. It is ugly and involves the digital equivalent of bashing on screws with a hammer. Having said that, when nobody has invented the right screwdriver yet, sometimes you just have to hammer away.

First, some history. We’ve been running separate DHCP, DNS and LDAP servers since we switched from static IP addresses and a Windows NT domain somewhere around ten years ago. The DHCP server was loosely connected with the DNS server, and I had written this beautifully complex (read: messily unreadable) script that would allow you to quickly add a system to both DHCP and DNS. A few months ago, we migrated all of our users over to FreeIPA, and I started the process of migrating our DNS database over. Unfortunately, this meant that our DHCP fixed addresses were being configured separately from our DNS entries.

Last week I investigated what it would take to integrate our DHCP leases into FreeIPA. First I checked on the web to see if something like this had already been written, but the closest thing I could find was a link to a design page for a feature that’s due to appear in FreeIPA 4.x.

So here’s my (admittedly hacky) contribution:

  1. sync_dhcp – A bash script (put in /srv, chmod +x) that constantly checks whether the DNS zone’s serial number has changed, and, if it has, runs…
  2. generate_dhcp.py – A python script (put in /srv, chmod +x) that regenerates a list of fixed-addresses in /etc/dhcp/hosts.conf
  3. dhcpd.conf – A sample dhcpd.conf (put in /etc/dhcp) that uses the list generated by generate_dhcp.py
  4. sync-dhcp.service – A systemd service (put in /etc/systemd/system) to run sync_dhcp on bootup
  5. make_dns – A script (chmod +x) that allows the sysadmin to easily add new dns entries with a mac address

sync_dhcp does need to know your domain so it knows which DNS zone serial to check, but other than that, the first four files should work with little or no modification. You will need to create a dnsserver user in FreeIPA, give the user read access to DNS entries, and put its password in /etc/dhcp/dnspasswd (readable only by root).
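For the curious, the serial-checking loop at the heart of sync_dhcp can be quite small. This is not the actual script, just a minimal sketch; the zone name, polling interval, helper path and dhcpd service name are all assumptions you would adjust:

#!/bin/bash
# Hypothetical sketch: poll the zone's SOA serial and regenerate the DHCP
# fixed-address list whenever it changes.
ZONE="example.com"
LAST=""
while true; do
    SERIAL=$(dig +short SOA "$ZONE" | awk '{print $3}')
    if [ -n "$SERIAL" ] && [ "$SERIAL" != "$LAST" ]; then
        /srv/generate_dhcp.py && systemctl restart dhcpd.service
        LAST="$SERIAL"
    fi
    sleep 60
done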

make_dns makes a number of assumptions that are true of our network, but may not be true of yours. It first assumes that you’re using a 10.10.0.0/16 network (yes, I know that’s not right; it’s a long story) and that 10.10.9.x and 10.10.10.x IPs are for unrecognized systems. It also requires that you’ve installed freeipa-admintools and run kinit for a user with permissions to change DNS entries, as it’s just basically a fancy wrapper around the IPA cli tools.

Bent Screw Hole Backyard Metal Macros by Steven Depolo used under a CC BY 2.0 license


GStreamer Conference 2014 talks online

For those of you who, like me, missed this year's GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high quality talk recordings online shortly after the conference ends. So be sure to check out this year's batch of great GStreamer talks.

Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few fewer deprecated widgets :)

LISA14 – Simplified Remote Management of Linux Servers

I am giving a talk on Simplified Remote Management of Linux Servers at the upcoming LISA14 conference in Seattle, which runs from November 9-14. My talk is 9:45-10:30am on Friday, November 14. LISA is Large Installation System Administration SIG of Usenix.

If you are attending LISA I would enjoy meeting you and discussing anything around system administration, security, and open source in general! Drop me a line and let’s see about scheduling some time.

Abstract:

How do you manage a hundred or a thousand Linux servers? With practice! Managing Linux systems is typically done by an experienced system administrator using a patchwork of standalone tools and custom scripts running on each system. There is a better way to work – to manage more systems in less time with less work – and without learning an entirely new way of working.

OpenLMI (the Linux Management Infrastructure program) delivers remote management of production servers – ranging from high end enterprise servers with complex network and storage configurations to virtual guests. Designed to support bare metal servers and to directly manipulate storage, network and system hardware, it is equally capable of managing and monitoring virtual machine guests.

In this session we will show how a system administrator can use the new tools to function more effectively, focusing on how they extend and improve existing management workflows and expertise.


Fedora @ LinuxCon Europe 2014

The 4th edition of LinuxCon Europe took place in Düsseldorf, Germany, last week and Fedora was there again like at the first three editions. It was the first time the Linux Foundation asked us to pay a fee. In the past, we got a booth and 4 passes for free. This time, we had to pay $750 for the booth and 3 tickets (we could get 4, but only 3 people signed up for the booth duty) which I think is still a good deal because the standard ticket to get to the event is $600. And I also think it’s an amount that is worth paying to have Fedora at the event.

LinuxCon Europe differs from other Linux and open source conferences. The audience is very different. It’s mostly (upstream) developers, devops, and consultants. So you’re not “selling” Fedora to average users who have little or zero experience with Linux. At LinuxCon, you’re selling it to very experienced users. One would say you don’t have to introduce Fedora to such users. But the opposite is true. Not many people can keep their fingers on the pulse of the industry and know about everything that is going on in the world of Linux. And if we want more corporate users of Fedora, and perhaps corporate contributors eventually, we need to promote Fedora to them.

People were more interested in Fedora Server which is different from most events where people are mostly interested in Workstation, but it’s not surprising considering the audience. It really helps to advertise a specialized product because you can clearly say: if you’re interested in server OSes, this is what we have for you and it has these interesting features. That’s why I’m glad we have Fedora Server. From the marketing point of view, it’s much more appealing to have a solution (server product) than just a lego to build it. Quite a few people were interested in Fedora as a future of enterprise Linux because what they work with and care about is Red Hat Enterprise Linux.

We had two demo computers at the booth. One was showcasing Fedora Workstation with GNOME on Wayland and the other one had Fedora Server running with Cockpit, so that people could check out one of the main features of Fedora 21 Server. We also had plenty of swag (stickers, case badges, badges, DVDs, fliers,…). A lot of Fedora users stopped by to grab a sticker for their computer. Some of them use Fedora on servers or cloud in production, some use it on developer machines.

LinuxCon is also great for networking. You can meet people from all kinds of open source projects and from companies that use Linux heavily; you can learn how they use it, what their needs and expectations are, etc. We were lucky that our booth was in a very visible place and Fedora was the only community distribution which had a booth there. So we were getting quite a lot of people at the booth and I brought home a handful of business cards of interesting contacts.

I would like to thank the Fedora Project for paying the booth fee and covering lodging for me. I’d also like to thank Christoph Wickert for doing the booth duty with me and Felix Kaechele for not only doing the booth duty, but also for being a local organizer (accommodation, driving, contact for shipping, evening program,…).

Hope to see you at LinuxCon Europe 2015. Where? It hasn’t been announced yet AFAIK.

Our booth (©Linux Foundation)


UEFI and Stella Linux 6.5
Stella Linux, official logo
Stella Linux is a 'remix' of the famous CentOS operating system that gives us a perfectly functional fusion of server and desktop working environments, without having to deal with the typical repository-level problems when installing packages such as VLC, OpenShot, Audacious, Skype and so on.
In addition, the maintainer of Stella Linux (whose pseudonym, I assume, is nux) has his own repositories called SL base, 'Nux-desktop' and 'Nux-misc', which, alongside the CentOS and EPEL repositories themselves, provide software not included in the others.

Additional note: if you use CentOS, you can add the Nux repositories through the following page.

For those who don't know what CentOS is: it is one of the best Linux distributions aimed at the server world, as well as a community project, and it is widely recognised internationally in this sector. CentOS is basically a clone of Red Hat's operating system RHEL (Red Hat Enterprise Linux), which is used by many businesses and, above all, stock exchanges such as the NYSE, thanks to its great quality, robustness, stability and service.

CentOS logo

The next problem came when I tried to use it with UEFI enabled: the result was basically that it would not boot. In normal (legacy) mode, however, it boots without problems.


I found the solution by looking at the directory structure under EFI. It turns out that in the boot option, Stella appears as 'boot' instead of 'BOOT'. To fix it, we need to edit the boot loader ('GRUB') entry right after starting the LiveCD/USB, by pressing the Tab key before the system starts and then the 'e' key on the two lines.
The temporary change is saved with Enter.

Then change boot to BOOT as in the following image:


Once we have edited the 'kernel' and 'initrd' lines, we press 'b'.


And now we can install it, or just use it!

References

Configuring FreeBSD as a FreeIPA client

A recent thread on the freeipa-users mailing list highlighted one user’s experience with setting up FreeBSD as a FreeIPA client, complete with SSSD and Sudo integration. GNU+Linux systems have ipa-client-install, but the lack of an equivalent on FreeBSD means that much of the configuration must be done manually. There is a lot of room for error, and this user encountered several "gotchas" and caveats.

Services that require manual configuration include PAM, NSS, Kerberos and SSSD. Certain features may require even more services to be configured, such as sshd, for known_hosts management. Most of the steps have been outlined in a post on the FreeBSD forums.

But before one can even begin configuring all these services, SSSD, Sudo and related software and dependencies must be installed. Unfortunately, as also outlined in the forum post, non-default port options and a certain make.conf variable must be set in order to build the software such that the system can be used as a FreeIPA client. Similarly, the official binary package repositories do not provide the packages in a suitable configuration.

This post details how I built a custom binary package repository for FreeBSD and how administrators can use it to install exactly the right packages needed to operate as a FreeIPA client. Not all FreeBSD administrators will want to take this path, but those who do will not have to worry about getting the ports built correctly, and will save some time since the packages come pre-built.

Custom package repository

poudriere is a tool for creating binary package repositories compatible with FreeBSD’s next-generation pkg(8) package manager (also known as "pkgng".) The official package repositories are built using poudriere, but anyone can use it to build their own package repositories. Repositories are built in isolated jails (an OS-level virtualisation technology similar to LXC or Docker) and can build packages from a list of ports (or the entire ports tree) with customised options. A customised make.conf file can also be supplied for each jail.

Providing a custom repository with FreeIPA-compatible packages is a practical way to help people wanting to use FreeBSD with FreeIPA. It means fewer steps in preparing a system as a FreeIPA client (fewer opportunities to make mistakes), and also saves a substantial amount of time since the administrator doesn’t need to build any ports. The BSD Now podcast has a detailed poudriere tutorial; all the detail on how to use poudriere is included there, so I will just list the FreeIPA-specific configuration for the FreeIPA repository:

  • security/sudo is built with the SSSD option set
  • WANT_OPENLDAP_SASL=yes appears in the jail’s make.conf

The repository is currently only being built for FreeBSD 10.0/amd64. 10.1 is not far away; once it is released, I will build it for 10.1/amd64 instead. If anyone out there would like it built for 9.3 and/or i386 I can do that too – just let me know!
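For anyone who would rather reproduce the repository than trust mine, the poudriere invocations are roughly the following. A hedged sketch: the jail name and package list file are my own choices, the port options still have to be set interactively, and the per-jail make.conf (carrying WANT_OPENLDAP_SASL=yes) lives under poudriere's configuration directory:

# create a 10.0/amd64 build jail and a default ports tree
poudriere jail -c -j freeipa-10-0-amd64 -v 10.0-RELEASE -a amd64
poudriere ports -c
# set non-default port options, e.g. enable SSSD for security/sudo
poudriere options security/sudo
# build everything named in the list file (one port origin per line)
poudriere bulk -j freeipa-10-0-amd64 -f freeipa-pkglist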

Assuming the custom repository is available for the release and architecture of the FreeBSD system, the following script will enable the repository and install the required packages.

#!/bin/sh
pkg install -y ca_root_nss
ln -s /usr/local/share/certs/ca-root-nss.crt /etc/ssl/cert.pem
mkdir -p /usr/local/etc/pkg/repos
cat >/usr/local/etc/pkg/repos/FreeIPA.conf <<"EOF"
FreeIPA: {
  url: "https://frase.id.au/pkg/${ABI}_FreeIPA",
  signature_type: "pubkey",
  pubkey: "/usr/share/keys/pkg/FreeIPA.pem",
  enabled: yes
}
EOF
cat >/usr/share/keys/pkg/FreeIPA.pem <<EOF
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAopt0Lubb0ur+L+VzsP9k
i4QrvQb/4gVlmr/d59lUsTr9cz5B5OtNLi+WMVcNh4EmmNIiWoVuQY4Wqjm2d1IA
VCXw+OqeAuj9nUW4jSvI/lDLyErFBXezNM5yggeesiV2ii+uO41zOjUxnSkupFzh
zOWr+Oj4kJI/iNU++3RpzyrBSmSGK9TN9k3afhyDMNlJi5SqK/wOrSjqAMfaufHE
MkJqBibDL/+xx48SbtInhtD4LIneHoOGxVtkLIcTSS5EpnIsDWZgXX6jBatv9LJe
u2UeQsKLKcCgrhT3VX+pc/aDsUFS4ZqOonLRt9mcFVxC4NDNMKsfXTCd760HQXYU
enVLydNavvGtGYQpbUWx5IT3IphaNxWANACpWrcvTawgPyGkGTPd347Nqhm5YV2c
YRf4rVX/S7U0QOzMPxHKN4siZVCspiedY+O4P6qe2R2cTyxntjLVGZcTBlXAdQJ8
UfQuuX97FX47xghxR6wyWfkXGCes2kVdVo0fF0vkYe1652SGJsfWjc5ojR9KFKkD
DN3x3Wu6kW0koZMF3Tf0rtSLDmbZEBddIPFrXo8QHiyqFtU3DLrYWGmbLRkYKnYR
KvG3XCJ6EmvMlfr8GjDIaEiGo7E7IyLusZXXzbIW2EKQdwa6p4N8wrW/30Ov53jp
rO+Bwn10+9DZTupQ3c04lsUCAwEAAQ==
-----END PUBLIC KEY-----
EOF
pkg update
pkg install -r FreeIPA -y cyrus-sasl-gssapi sssd sudo

Once the packages are installed from the custom repository, configuration can continue as indicated in the forum post.

Future efforts

This post was concerned with package installation. This is an important but relatively small part of setting up a FreeBSD client. There is more that can be done to make it easier to integrate FreeBSD (and other non-GNU+Linux systems) with FreeIPA. I will conclude this post with some ideas along this trajectory.

Recent versions of FreeIPA include the ipa-advise tool, which explains how various legacy systems can be configured to some extent as FreeIPA clients. ipa-advise config-freebsd-nss-pam-ldapd shows advice on how to configure a FreeBSD system, but the information is out of date in many respects – it references the old binary package tools (which have now been completely removed) and has no information about SSSD. This information should be updated. I have had this task on a sticky-note for a little while now, but if someone else beats me to it, that would be no bad thing.

The latest major version of SSSD is 1.12, but the FreeBSD port is back at 1.9. The 1.9 release is a long-term maintenance (LTM) release, but any efforts to bring 1.12 to FreeBSD alongside 1.9 would undoubtedly be appreciated by the port maintainer and users.

A longer term goal should be a port of (or an equivalent to) ipa-client-install for FreeBSD. Most of the software needed for FreeIPA integration on FreeBSD is similar or identical to that used on GNU+Linux, but there are some differences. It would be a time consuming task – lots of trial runs and testing – but probably not particularly difficult.

In regards to the package repository, work is underway to add support for package flavours to the FreeBSD packaging infrastructure. When this feature is ready, a small effort should be undertaken to add a FreeIPA flavour to the ports tree, and ensure that the resultant packages are made available in the official package repository. Once this is achieved, neither manual port builds nor the custom package repository will be required –
everything needed to configure FreeBSD as a FreeIPA client will be available to all FreeBSD users by default.

Cassandra Keyspace case-sensitiveness WTF

cqlsh> DESCRIBE KEYSPACES;
foo   bar  OpsCenter

cqlsh> use opscenter;
Bad Request: Keyspace 'opscenter' does not exist

cqlsh> use OpsCenter;
Bad Request: Keyspace 'opscenter' does not exist

cqlsh> USE "OpsCenter";
cqlsh:OpsCenter>

Seriously, is this the way Cassandra handles case sensitivity??? (CQL folds unquoted identifiers to lower case, so a mixed-case keyspace like OpsCenter can only be referenced by double-quoting its name.)

FUDCon Managua - No seas tonto: Fortifica tu servidor con SELinux (Slides)
For the interested, here are my slides for the topic (in Spanish): No seas tonto: Fortifica tu servidor con SELinux (“Don’t be silly: harden your server with SELinux”).
 
Download it as PDF from this URL: http://goo.gl/n9ybN9

October 21, 2014

New badge: Let's have a party (Fedora 21) !
You organized a party for the release of Fedora 21
[GNU IceCat] 31.1.1 released

GNU IceCat is now available in the Fedora repositories.

We’ve packaged the latest release, 31.1.1, based on Firefox 31 ESR. On October 8th, it was announced by IceCat’s new maintainer, Rubén Rodríguez:

After many small changes and improvements I managed to produce a new
release for IceCat, available (by now) here:
http://gnuzilla.gnu.org/releases/31.1.1/

I'd like to get some testing and feedback before doing the official
release, also to get time to update the documentation.

Some notes:

- It is based on Firefox 31 ESR. I decided to stick to the ESR upstream
releases (https://www.mozilla.org/en-US/firefox/organizations/faq/)
because they provide security updates over a stable base. This way we
won't have to fight with changes in the APIs we base our features on.
That will also eventually allow to port privacy features from
TorBrowser, which is being upgraded to follow v31 ESR too.

- To filter privacy trackers I modified Adblock Plus to allow filter
subscriptions to be optionally enabled during Private Browsing mode. I
did some other small changes, along with removing the "acceptable ads"
pseudofeature. Because of all this I decided to rebrand the extension to
"Spyblock", to avoid confusion with the upstream project.
I also set custom lists at http://gnuzilla.gnu.org/filters/ and I made a
point of preserving self-served advertisement, as the goal is not to
block ads but to preserve privacy. That's another reason for rebranding.

- I compiled binary packages for GNU/Linux using Trisquel 6, both for 32
and 64 bit. Those binaries should work in most recent distros. These are
the ones I'm more certain that should work: Trisquel 6 and 7, Ubuntu
Precise or newer, Debian Wheezy, testing and sid. Please test in other
distros and send reports of success and any bugs you find.

- Video in h264 format (youtube, vimeo...) only shows a black screen in
my machines, but so do the precompiled Firefox bundles, so I guess they
need to be compiled in a less "portable" way for that feature to work.
It seems to work when packaged for Trisquel.

- Packagers are welcome! We want to get the package in other distros and
also compiled for MacOS and Windows.

Happy testing!

 

 

icecat

Originally, it’s released with some free add-ons included:

* LibreJS (6.0.1)
GNU LibreJS aims to address the JavaScript problem described in Richard
Stallman’s article The JavaScript Trap.

* SpyBlock (2.6.3.0)
Blocks privacy trackers while in normal browsing mode, and all third party
requests when in private browsing mode. Based on Adblock Plus.

* AboutIceCat
Adds a custom “about:icecat” homepage with links to information about the
free software and privacy features in IceCat, and check-boxes to enable
and disable the ones more prone to break websites.

HTTPS-Everywhere is already packaged in Fedora. Request Policy is NOT included in IceCat but is packaged separately.
I don’t rule out packaging additional free add-ons in the future.

Installation

If you previously enabled the IceCat Copr project, disable it before installing IceCat from the Fedora repositories:

# dnf copr disable sagitter/Icecat
# yum install icecat --enablerepo=updates-testing

Testing

If you’re interested, please install IceCat with yum or dnf from the Fedora updates-testing repository and leave positive/negative karma, or open a bug report if something is wrong.


HEADS-UP: mod_qos Update

I've pushed mod_qos-11.5 into testing; I didn't want to keep 10.x because it does not support IPv6 properly.

If you happen to use mod_qos, I'd really appreciate your feedback in Bugzilla, Bodhi, email, or IRC.

EDIT: EPEL7 package

Notes from Strata + Hadoop World 2014

I went to Strata + Hadoop World last week. This event targets a pretty broad audience and is an interesting mix of trade show, data science conference, and software conference. However, I’ve been impressed by the quality and relevance of the technical program both of the years that I’ve gone. The key to finding good talks at this kind of event is to target talks focusing on applications, visualization, and fundamental techniques, rather than ostensibly technical talks.1 In the rest of this post, I’ll share some of my notes from the talks and tutorials I enjoyed the most.

D3.js

The first event I attended was Sebastian Gutierrez’s D3 tutorial. (Gutierrez’s slides and sample code are available online.) A great deal of this session was focused on introducing JavaScript and manipulating the DOM with D3, which was great for me; I’ve used D3 before but my knowledge of web frontend development practice is spotty and ad-hoc at best.

This was a great tutorial: the pace, scope, content, and presentation were all really excellent. It provided enough context to make sense of D3’s substantial capabilities and left me feeling like I could dive in to using D3 for some interesting projects. Finally, Gutierrez provided lots of links to other resources to learn more and experiment, both things I’d seen, like bl.ocks.org, and things that were new to me, like tributary.io and JSFiddle.

The standalone D3 tutorial on DashingD3JS.com seems like an excellent way to play along at home.

Engineering Pipelines for Learning at Scale

I attended the “Hardcore Data Science” track in the afternoon of the tutorial day; there were several interesting talks but I wanted to call out in particular Ben Recht’s talk, which mentioned BDAS in the title but was really about generic problems in developing large-scale systems that use machine learning.

Recht’s talk began with the well-known but oft-ignored maxim that machine learning is easy once you’ve turned your analysis problem into an optimization problem. I say “oft-ignored” here not because I believe practitioners are getting hung up on the easy part of the problem, but because it seems like raw modeling performance is often the focus of marketing for open-source and commercial projects alike.2 The interesting engineering challenges are typically in the processing pipeline from raw data to an optimizable model: acquiring data, normalizing and pruning values, and selecting features all must take place before modeling and classification. Learning problems often have this basic structure; Recht provided examples of these pipeline stages in object recognition and text processing domains.

The main motivating example was Recht’s attempt to replicate the Oregon State Digital Scout project, which analyzed video footage of American football and answered questions such as which plays were likely from a given offensive formation. So Recht’s team set out to stitch video footage together into a panorama, translate from video coordinates to inferred field coordinates, and track player positions and movements. This preprocessing code, which they had implemented with standard C++ and the OpenCV library, took ten hours to analyze video of a four-second NFL play. Expert programmers could surely have improved the runtime of this code substantially, but it seems like we should be able to support software engineering of machine-learning applications without having teams of experts working on each stage of the learning pipeline.

The talk concluded by presenting some ideas for work that would support software engineering for learning pipelines. These were more speculative but generally focused on using programming language technology to make it easier to develop, compose, and reason about learning pipelines.

Keynotes

Since the talks from the plenary sessions are all fairly brief (and freely available as videos), I’ll simply call out a couple that were especially enjoyable.

Domain-Specific Languages for Data Transformation

Joe Hellerstein and Sean Kandel of Trifacta gave a talk on domain-specific languages for data transformation in general and on their Wrangle system in particular. (Trifacta is also responsible for the extremely cool Vega project.) The talk led with three rules for successful data wrangling, which I’ll paraphrase here:

  • your processes, not your data, are most important;
  • reusable scripts to process data are more valuable than postprocessed and cleaned data; and
  • “agile” models in which preprocessing is refined iteratively in conjunction with downstream analysis of the processed data are preferable to having preprocessing take place as an isolated phase.3

The first act of the talk also referenced this quotation from Alfred North Whitehead, which set the tone for a survey of DSLs for data transformation and query construction:

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems…

In the 1990s, several data processing DSLs emerged that were based on second-order logic, proximity joins, and composable transformations.4 Hellerstein argued that, since these underlying formalisms weren’t any simpler to understand than the relational model, the DSLs weren’t particularly successful at simplifying data transformation for end-users.

Raman and Hellerstein’s Potter’s Wheel system presented a visual interface to SQL for data cleaning; it could automatically detect bad data and infer structure domains but wasn’t particularly easier to use than SQL: putting a visual, menu-driven interface over a language doesn’t generally lower the cognitive load of using that language.5

The Wrangler system (which forms the basis for the Wrangle DSL in Trifacta’s product) attacks the problem from a different angle: data cleaning operations are suggested based on explicit user guidance, semantic information about types and domains, and inferences of user intent based on prior actions. The entire problem of suggesting queries and transformations is thus expressible as an optimization problem over the space of DSL clauses. With the interface to the Wrangle DSL, the user highlights features in individual rows of the data to appear in the transformed data or in a visualization, and the system presents several suggested queries that it had inferred. The Wrangle system also infers regular expressions for highlighted text transformations, but since regular expressions — and especially machine-generated ones — are often painful to read, it translates these to a more readable (but less expressive) representation before presenting them to the user.

Kandel gave a very impressive live demo of Wrangle on a couple of data sets, including techniques for sensibly treating dirty or apparently-nonsensical rows (in this example, the confounding data included negative contribution amounts from election finance data, which represented refunds or revised accounting). This is excellent work at the intersection of databases, programming languages, machine learning, and human-computer interaction — which is a space that I suspect a lot of future contributions to data processing will occupy.

Clustering for Anomaly Detection

Sean Owen’s talk on using Spark and k-means clustering for anomaly detection was absolutely packed. Whether this was more due to the overwhelming popularity of talks with “Spark” in the title or because Sean is widely known as a sharp guy and great presenter, I’m not sure, but it was an excellent talk and was about more than just introducing clustering in Spark. It was really a general discussion of how to go from raw data to a clustering problem in a way that would give you the most useful results — Spark was essential to make the presentation of ETL and clustering code simple enough to fit on a slide, but the techniques were generally applicable.

The running example in this talk involved system log data from a supervised learning competition. Some of the log data was generated by normal activity and some of it was generated by activity associated with various security exploits. The original data were labeled (whether as “normal” or by the name of the associated exploit), since they were intended to evaluate supervised learning techniques. However, Owen pointed out that we could identify anomalies by clustering points and looking at ones that were far away from any cluster center. Since k-means clustering is unsupervised, one of Owen’s first steps was to remove the labels.

With code to generate an RDD of unlabeled records, Owen then walked through the steps he might use if he were using clustering to characterize these data:

  • The first question involves choosing a suitable k. Since Owen knew that there were 23 different kinds of records (those generated by normal traffic as well as by 22 attack types), it seems likely that there would be at least 23 clusters. A naïve approach to choosing a cluster count (viz., choosing a number that minimizes the distance from each point to its nearest centroid) falls short in a couple of ways. First, since cluster centers are originally picked randomly, the results of finding k clusters may not be deterministic. (This is easy to solve by ensuring that the algorithm runs for a large number of iterations and has a small distance threshold for convergence.) More importantly, though, finding n clusters, where n is the size of the population, would be optimal under this metric, but it would tell us nothing.
  • The vectors in the raw data describe 42 features but the distance between them is dominated by two values: bytes read and bytes written, which are relatively large integer values (as opposed to many of the features, which are booleans encoded as 0 or 1). Normalizing each feature by its z-score made each feature contribute equally to distance (see the sketch after this list).
  • Another useful trick is encoding features whose space is a finite domain of n elements as n boolean features (where only one of them will be true). So if your feature could be either apple, banana, cantaloupe, or durian in the raw data, you could encode it as four boolean-valued features, one of which is true if and only if the feature is apple, one of which is true if and only if the feature is banana, and so on.
  • Using entropy (or normalized mutual information) to evaluate the clustering can be a better bet, since entropy-based fitness, unlike Euclidean-distance fitness, doesn’t keep improving all the way up to k = n. Another option is restoring the labels from the original data and verifying that clusters typically have homogeneous labels.
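
Here is that sketch: a hedged continuation of the snippet above in PySpark. The field indices, the choice of categorical column, and the candidate values of k are illustrative assumptions rather than Owen's code.

import numpy as np
from pyspark.mllib.clustering import KMeans

# One-hot encode a categorical field (assumed to be at index 1): map each
# distinct value to an index, then expand it into one 0/1 feature per value.
categories = unlabeled.map(lambda f: f[1]).distinct().collect()
cat_index = {c: i for i, c in enumerate(categories)}

def to_vector(fields):
    one_hot = [0.0] * len(categories)
    one_hot[cat_index[fields[1]]] = 1.0
    # Keep the numeric fields, skipping the categorical ones (assumed at 1-3).
    numeric = [float(x) for i, x in enumerate(fields) if i not in (1, 2, 3)]
    return np.array(numeric + one_hot)

vectors = unlabeled.map(to_vector).cache()

# Normalize each feature by its z-score so large-valued features
# (bytes read and written) don't dominate the Euclidean distance.
n = vectors.count()
means = vectors.reduce(lambda a, b: a + b) / n
variances = vectors.map(lambda v: (v - means) ** 2).reduce(lambda a, b: a + b) / n
stdevs = np.sqrt(variances) + 1e-9            # avoid dividing by zero
normalized = vectors.map(lambda v: (v - means) / stdevs).cache()

# Score a choice of k by the mean distance to the nearest centroid; points
# unusually far from every centroid are the anomaly candidates.
def mean_distance(k):
    model = KMeans.train(normalized, k, maxIterations=50)
    centers = [np.array(c) for c in model.clusterCenters]
    return normalized.map(
        lambda v: min(float(np.linalg.norm(v - c)) for c in centers)
    ).mean()

for k in (20, 40, 60, 80, 100):
    print(k, mean_distance(k))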

Summary

Many of the talks I found interesting featured at least some of the following common themes: using programming-language technology, improving programmer productivity, or focusing on the pipeline from raw data to clean feature vectors. I especially appreciated Sean Owen’s talk for showing how to refine clustering over real data and demonstrating substantial improvements by a series of simple changes. As Recht reminds us, the optimization problems are easy to solve; it’s great to see more explicit focus on making the process that turns analysis problems into optimization problems as easy as possible.


  1. Alas, talks with technical-sounding abstracts often wind up having a nontrivial marketing component!

  2. This is related to the reason why “data lakes” have turned out to be not quite as useful as we might have hoped: a place to store vast amounts of sparse, noisy, and irregular data and an efficient way to train models from these data are necessary but not sufficient conditions for actually answering questions!

  3. Hellerstein pointed out that the alternative to his recommendation is really the waterfall model (seriously, click that link), which makes it all the more incredible that anyone follows it for data processing. Who would willingly admit to using a practice initially defined as a strawman?

  4. SchemaSQL and AJAX were provided as examples.

  5. I was first confronted with the false promise of visual languages in the 1980s. I had saved my meager kid earnings for weeks and gone to HT Electronics in Sunnyvale to purchase the AiRT system for the Amiga. AiRT was a visual programming language that turned out to be neither more expressive nor easier to use than any of the text-based languages I was already familiar with. With the exception of this transcript of a scathing review in Amazing Computing magazine (“…most of [its] potential is not realized in the current implementation of AiRT, and I cannot recommend it for any serious work.”), it is now apparently lost to history.

Server WG Weekly Meeting Minutes (2014-10-21)


#fedora-meeting-1: Server Working Group Weekly Meeting (2014-10-21)

Meeting started by sgallagh at 15:00:33 UTC (full logs).

Meeting summary

  1. roll call (sgallagh, 15:00:33)
  2. Agenda (sgallagh, 15:06:12)
    1. Agenda Item: Fedora 21 Install Media (sgallagh, 15:06:33)
    2. Agenda Item: Fedora 21 Beta Status (sgallagh, 15:06:33)

  3. Fedora 21 Install Media (sgallagh, 15:08:37)
    1. AGREED: Server WG finds it acceptable that all netinstalls be universal and select Server as the default installation environment in interactive Anaconda. (+8, 0, -0) (sgallagh, 15:21:43)

  4. Fedora 21 Beta Status (sgallagh, 15:22:26)
    1. https://www.happyassassin.net/testcase_stats/21/Server.html (adamw, 15:25:26)
    2. danofsatx has been running tests against the Domain Controller Role. Is encountering an issue with named. (sgallagh, 15:26:50)
    3. ACTION: junland to jump right in with TC testing (sgallagh, 15:27:40)
    4. ACTION: danofsatx to file a bug against FreeIPA for the named start failure (sgallagh, 15:28:43)
    5. we really need to run those tests against Beta TC4/RC1 (sgallagh, 15:29:23)
    6. https://fedoraproject.org/wiki/Test_Results:Fedora_21_Beta_TC4_Server (sgallagh, 15:29:35)
    7. https://fedoraproject.org/wiki/Fedora_21_Beta_Release_Criteria#Server_Product_requirements (sgallagh, 15:30:19)
    8. for anyone who doesn’t know, you can nominate blocker bugs at https://qa.fedoraproject.org/blockerbugs/propose_bug , or just mark them as blocking the bug ‘BetaBlocker’ and explain why in a comment. (sgallagh, 15:32:01)
    9. Go/No-Go Meeting is Thursday, which means we hopefully don’t have any blockers but if there are any we need to know *today* to have any chance of avoiding slippage. (sgallagh, 15:35:25)
    10. It would be appreciated if anyone with spare cycles spends some time testing Beta TC4 today. (sgallagh, 15:36:03)

  5. Open Floor (sgallagh, 15:40:57)
    1. Product GUI install media still doesn’t have the Product Logo (sgallagh, 15:46:34)
    2. No risk to Beta release due to branding/logo (sgallagh, 15:48:28)

  6. Server WG Test Day (sgallagh, 15:50:06)
    1. ACTION: junland to look into scheduling a Fedora Server Test Day (sgallagh, 15:57:26)

Meeting ended at 16:02:18 UTC (full logs).

Action items

  1. junland to jump right in with TC testing
  2. danofsatx to file a bug against FreeIPA for the named start failure
  3. junland to look into scheduling a Fedora Server Test Day

Action items, by person

  1. danofsatx
    1. danofsatx to file a bug against FreeIPA for the named start failure
  2. junland
    1. junland to jump right in with TC testing
    2. junland to look into scheduling a Fedora Server Test Day

People present (lines said)

  1. sgallagh (108)
  2. adamw (37)
  3. simo (35)
  4. junland (26)
  5. nirik (13)
  6. danofsatx (13)
  7. zodbot (9)
  8. tuanta (5)
  9. mitr (3)
  10. davidstrauss (2)
  11. stefw (0)
  12. mizmo (0)

Generated by MeetBot 0.1.4.

Here, I'm sharing my theme with you :)

Today I'm sharing my theme with you :) Clearlook custom + Flattr custom

About the colors (Android Holo theme): they're on my GitHub. As for the icons, they're the famous Flattr icons, but I'm modifying them since some of them look bad (especially when you hit Alt + Tab), so if you like how they look, go ahead, I encourage you to download them :)

Download the theme from GitHub

ClearLook

Window Background: #DCD9DD
Window Text: #1A1A1A
Input Boxes Background: #D0D0D0
Input Boxes Text: #1A1A1A
Selected Items Background: #56B8D8
Selected Items Text: #F0F0F0
Tooltips Background: #F0F0F0
Tooltips Text: #1A1A1A


It ends up looking like this:

Screenshot-2


Screenshot-4


n0oir.

 


Malayalam opentype specification – part 1

This post is a promised follow-up from last November documenting intricacies of the OpenType specification for Indic languages, specifically for Malayalam. There is an initiative to document similar details in the IndicFontbook; this series might make its way into it. A Malayalam Unicode font supporting traditional orthography is required to correctly display most of the examples described in this article; some can be obtained from here.

Malayalam has a complex script, which in general means the shape and position of glyphs are determined in relation to the surrounding glyphs; for example, a single glyph can be formed out of a combination of independent glyphs in a specific sequence, forming a conjunct. Take an example: ക + ്‌ ‌+ ത + ്‌ + ര => ക്ത്ര in traditional orthography. Note that in almost all cases a glyph shaping and positioning change such as this one is due to the involvement of the Virama diacritic ” ്‌ “. The important rules of glyph forming are:

  1. When Virama is used to combine two Consonants, it usually forms a Conjunct, such as ക + ്‌ ‌+ ത => ക്ത. This is known as C₁ conjoining, as a half form of the first consonant is joined with the second consonant.
  2. The notable exceptions to point 1 occur when the following Consonant is one of യ, ര, ല, വ. In those cases, they form the ‘Mark’ shapes of യ, ര, ല, വ =>  ്യ, ്ര,  ്ല,  ്വ. This is known as C₂ conjoining, as a modified form of the second consonant is attached to the first consonant.
  3. When Virama is used to combine a Consonant with a Vowel, the Vowel forms a Vowel Mark => such as ാ, ി, ീ.

OpenType organizes this glyph forming and shaping logic as a sequence of ‘lookup tables’ (or rules) defined in the font. This first part gives an overview of the relevant lookup rules used for glyph processing by a shaping engine such as HarfBuzz or Uniscribe.

Only those OpenType features applicable to Malayalam are discussed. The features (or lookups) are applied in the following order:

  1. akhn (Akhand – used for conjuncts like ക്ക, ക്ഷ, ല്ക്ക, യ്യ, വ്വ, ല്ല etc)
  2. pref (Pre-base form – used for pre base form of Ra –  ്‌ + ര =   ്ര)
  3. blwf (Below base form – used for below base form of La – virama+La – ്‌ + ല =  ്ല)
  4. half (Half form – Not used in mlm2 spec by Rachana and Meera, but used in mlym spec and might be useful later. For now, ignore)
  5. pstf (Post base form – used for post base forms of Ya and Va – ്‌ +യ =  ്യ, ്‌ + വ = ്വ. Note that  യ്യ & വ്വ are under akhn rule)
  6. pres (Pre-base substitution – mostly used for ligatures involving pref Ra – like ക്ര, പ്ര, ക്ത്ര, ഗ്ദ്ധ്ര  etc)
  7. blws (Below base substitution – used for ligatures involving blwf La – like ക്ല, പ്ല, ത്സ്ല etc. Note that  ല്ല is under akhn rule)
  8. psts (Post base substitution – used for ligatures involving post base Matras – like കു, ക്കൂ, മൃ etc)
  9. abvm (Above base Mark  positioning – used for dot Reph – ൎ)

The last three (pres, blws, psts) are presentation forms; they have lower priority in glyph formation and usually account for the large number of secondary glyphs. The final one (abvm) is not a GSUB (glyph substitution) lookup but a GPOS (glyph positioning) lookup – it is used to position the dot reph correctly above the glyphs. (A small shaping check appears after the list below.)

  • akhn: Use this for conjuncts (കൂട്ടക്ഷരങ്ങള്‍) like ക്ക, ട്ട, ണ്ണ, ക്ഷ, യ്യ, വ്വ, ല്ല, മ്പ. This rule has the highest priority, so akhn glyphs won’t be broken by the shaping engine.
  • pref: Used only for pre-base form of Ra ര –  ്ര
  • blwf: Used only for below base form of La ല –  ്ല
  • pstf: Used for the post base forms of Ya, Va യ, വ – ്യ, ്വ
  • pres: One of the presentation forms, mostly used for ligatures/glyphs with pref Ra ര – like ക്ര, പ്ര, ക്ത്ര, ഗ്ദ്ധ്ര etc. This could also be used together with the ‘half’ forms in certain situations, but that is for later.
  • blws: Used for ligatures/glyphs with blwf La ല – like ക്ല, പ്ല, ത്സ്ല etc.
  • psts: Used by a large number of ligatures/glyphs due to the post base Matras (ു,ൂ,ൃ etc) – like  കു, ക്കൂ, മൃ etc. Other Matras (ാ,ി,ീ,േ,ൈ,ൈ,ൊ,ോ,ൌ,ൗ) are implicitly handled by the shaping engine based on their Unicode properties (pre-base, post-base etc) as they don’t form a different glyph together with a consonant – there is no need to define lookup rules for those matras in the font.
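
As the small shaping check mentioned above, here is a hedged sketch of my own (not from the post) using the uharfbuzz Python bindings for HarfBuzz; the font path and the test string are illustrative assumptions, and any traditional-orthography Malayalam font should do:

import uharfbuzz as hb

# Load a font whose akhn/pref/pres lookups we want to exercise.
# "Meera.ttf" is an assumed placeholder path.
blob = hb.Blob.from_file_path("Meera.ttf")
font = hb.Font(hb.Face(blob))

buf = hb.Buffer()
buf.add_str("ക്ത്ര")          # KA + VIRAMA + TA + VIRAMA + RA
buf.guess_segment_properties()
hb.shape(font, buf)

# If the akhn (ക്ത) and pres (pre-base Ra) lookups fire, the five input
# codepoints collapse into far fewer output glyphs.
for info in buf.glyph_infos:
    print(info.codepoint, info.cluster)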

I will discuss these lookup rules and how they fit into the glyph shaping sequence with detailed examples in the next episodes.

(P.S: WordPress tells me I started this blog 7 years ago on this day. How time flies.)


A GNOME Kernel wishlist
GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.