April 18, 2015

Notes on test code with Appium + the iOS Simulator

Some notes on Appium + the iOS Simulator. Setting up a real device is a hassle, so I'm using the Simulator instead.

For installation and setup, reading the official manual or searching Qiita and the like should cover it.

Here's what we'll use:

  • appium
  • node.js

On the node.js side, install these via npm. In fact, everything we need is installed via npm:

  • appium
  • wd
  • chai
  • chai-as-promised

wd is the node.js binding for Selenium WebDriver/Appium. It's handy that this one package covers both.

For the test code, I used the sample code in Appium's GitHub repository as a reference. The js files loaded from helpers/ and so on have been inlined so everything fits in a single file. The tests use mocha as the test framework.

"use strict";

var wd = require("wd");

require('colors');
var chai = require("chai");
var chaiAsPromised = require("chai-as-promised");
chai.use(chaiAsPromised);
var should = chai.should();
chaiAsPromised.transferPromiseness = wd.transferPromiseness;

describe("ios safari", function () {
  this.timeout(300000);
  var driver;

  before(function () {
    var serverConfig = {
      host: 'localhost',
      port: 4723
    };

    driver = wd.promiseChainRemote(serverConfig);

    var desired = {
      browserName: 'safari',
      'appium-version': '1.3',
      platformName: 'iOS',
      platformVersion: '8.3',
      deviceName: 'iPhone 6',
      app: undefined
    };
    return driver.init(desired);
  });

  after(function () {
    return driver
      .quit();
  });

  it("should get the url", function () {
    return driver
      .get('https://www.google.com')
      .sleep(1000)
      .waitForElementByName('q', 5000)
        .type('masami256')
      .waitForElementByName('btnG')
        .click()
      .waitForElementByLinkText('なんつってつっちゃった (@masami256) | Twitter')
        .click()
      .sleep(5000)
      .saveScreenshot('test.png');
  });

});

The code looks like this. Walking through a few parts:

This part configures where the Appium server is running. The appium command installed via npm binds to 0.0.0.0:4723 when run without arguments, so writing it like this works fine.

    var serverConfig = {
      host: 'localhost',
      port: 4723
    };

Next, desired specifies the target OS and browser. When driving mobile Safari, you don't need to specify app. When using a real device you have to specify its udid; since we're running on the Simulator here, it's left unset.

    var desired = {
      browserName: 'safari',
      'appium-version': '1.3',
      platformName: 'iOS',
      platformVersion: '8.3',
      deviceName: 'iPhone 6',
      // udid: 'device udid',
      app: undefined
    };

The quickest way to find the right deviceName is Xcode's Window -> Devices.

f:id:masami256:20150418121133p:plain

That's about it for the Appium-side configuration. From there it's a matter of writing tests following mocha conventions and looking up operations in the wd manual as needed.

One caveat: unlike Selenium IDE there is no Click And Wait, so you need to be careful about how you wait for things to appear.

When you run the test above, test.png is created in the directory you ran it from.

f:id:masami256:20150418121724p:plain

Addendum: a nice thing about wd is that if you build a mechanism to switch just the serverConfig and desired parts (using environment variables, say), one test suite can cover both the PC version and the smartphone version of a website. Of course, the CSS selectors and so on do need to match between the two…
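As a minimal sketch of that idea, the following switches configurations based on a target name. Note this is my own illustration, not code from the Appium samples: the TARGET/APPIUM_HOST environment variables and the desktop capabilities are made-up assumptions; adjust them to your environment.

```javascript
"use strict";

// Hypothetical sketch: return { serverConfig, desired } for either the
// smartphone site (iOS Simulator via Appium) or the PC site (a desktop
// browser via a plain Selenium server). The env var names and desktop
// values are assumptions, not part of the original setup.
function getConfig(target) {
  var serverConfig = {
    host: process.env.APPIUM_HOST || 'localhost',
    port: 4723
  };

  var desired;
  if (target === 'sp') {
    // Smartphone site: mobile Safari on the iOS Simulator (as above)
    desired = {
      browserName: 'safari',
      'appium-version': '1.3',
      platformName: 'iOS',
      platformVersion: '8.3',
      deviceName: 'iPhone 6',
      app: undefined
    };
  } else {
    // PC site: desktop browser through a standard Selenium server
    desired = { browserName: 'firefox' };
    serverConfig.port = 4444;
  }
  return { serverConfig: serverConfig, desired: desired };
}

module.exports = getConfig;
```

In the test file you would then do something like `var cfg = getConfig(process.env.TARGET);` and pass `cfg.serverConfig` to `wd.promiseChainRemote()` and `cfg.desired` to `driver.init()`, leaving the test bodies themselves unchanged.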


April 17, 2015

What happened in Toulouse?

… a KDE PIM sprint happened in Toulouse! And what happened during that sprint? Well, read this wholly incomplete report!

Let’s start with the most important part: we decided what to do next! At the last PIM sprint in Munich in November, when Christian and Aaron introduced their new concept for the next version of Akonadi, we decided to refocus all our efforts on that, which meant switching KDE PIM to maintenance mode for a very long time and then coming back with a big boom. In Toulouse we discussed this plan again and decided that it would be much better for the project, and for the users as well, if we continued active development of KDE PIM instead of focusing exclusively on the “next big thing”, taking a one-step-at-a-time approach. So what does that mean?

We will aim towards releasing KF5-based KDE PIM in August as part of KDE Applications 15.08. After that we will be working on fixing bugs, improving the current code and adding new features like normally, while at the same time preparing the code base for migration to Akonadi 2 (currently we call it Akonadi Next but I think eventually it will become “2”). I will probably write a separate technical blog post on what those “preparations” mean. In the meantime Christian will be working from the other side on Akonadi 2 and eventually both projects should meet “in the middle”, where we simply swap the Akonadi 1 backend with the Akonadi 2 backend and ship next version. So instead of one “big boom” release where we would switch to Qt 5 and Akonadi 2 at the same time we do it step-by-step, causing as little disruption to user experience as possible and allowing for active development of the project. In other words WIN-WIN-WIN situation for users, devs and the KDE PIM project.

I’m currently running the entire KDE PIM from git master (so KF5-based) and I must say that everything works very well so far. There are some regressions against the KDE 4 version but nothing we couldn’t handle. If you like to use bleeding-edge versions of PIM, feel free to update and help us find (and fix) regressions (just be careful not to bleed to death ;-)).

Another discussion we had is closely related to the 15.08 release. KDE PIM is a very huge code base, but the active development team is very small. Even with the incredible Laurent Montel on our side it’s still not enough to keep actively maintaining all of the KDE PIM (yes, it’s THAT huge ;-)). So we had to make a tough decision: some parts of KDE PIM have to die, at least until a new maintainer steps up, and some will move to extragear and will live their own lives there. What we release as part of KDE Applications 15.08 I call KDE PIM Core and it consists of the core PIM applications: KMail, KOrganizer, KAddressbook, Kleopatra, KNotes and Kontact. If your favorite PIM app is not in the list you can volunteer as a maintainer and help us make it part of the core again. We believe that in this case quality is more important than quantity and this is the trade-off that will allow us to make the next release of PIM the best one to date ;-).

Still related to the release is also reorganization of our repos, as we have some more splitting and indeed some merging ahead of us but we’ll post an announcement once everything is discussed and agreed upon.

Thanks to Christian’s hard work, most of the changes that Kolab made in their fork of KDE PIM have been upstreamed during the sprint. There are some very nice optimizations and performance improvements for Akonadi included (among other things), so the next release will indeed be a really shiny one and there’s a lot to look forward to.

Vishesh brought up the topic of our bug count situation. We all realize the sad state of our PIM bugs and we talked a bit about re-organizing and cleaning up our bug tracker. The clean up part has already begun as Laurent with Vishesh have mass-closed over 850 old KMail 1 bugs during the sprint to make it at least a little easier to get through the rest. Regarding the re-organization I still have to send a mail about it but a short summary would be that we want to remove bugzilla components and close bugs for the apps we decided to discontinue and maybe do a few more clean up rounds for the existing bugs.

I’m sure I’ve forgotten something because much more happened during the sprint but let’s just say I’m leaving some topics for others to blog about ;-).

Huge thank you to Franck Arrecot and Kevin Ottens for taking care of us and securing the venue for the sprint! All in all it was a great sprint and I’m happy to say that we are back on track to dominate the world of PIM.

The only disappointment of the entire sprint was my failure to acquire a French beer. I managed to try Belgian, Spanish, Mexican and Argentinian beer but they did not serve any French beer anywhere. Either there’s no such thing or it must be really bad…:-)

KDE PIM Sprint in Toulouse

We had a great dinner with the local KDE people on Saturday. Also proof that Laurent is a real person :-D

 

COPRs: A Double-Edged Sword

When, not so long ago, COPRs (Cool Other People's Repositories) found their way into the Fedora world, there was great joy that we finally had a counterpart to the PPAs from the Ubuntu universe.

Lately, however, one increasingly gets the impression that COPRs are being misused as a kind of intermediate layer between Rawhide and stable. The most recent example is the live patching introduced with kernel 4.0. On the one hand it is announced that live patching is disabled by default in the Fedora kernel; in the same breath, however, users are pointed to a use-at-your-own-risk COPR where kernels with live patching enabled are available.

COPRs are certainly useful and helpful for providing third parties with packages that are not (yet) included in the official Fedora repositories, or for offering newer versions of software already in Fedora for testing. I consider it wrong, however, to use COPRs for developing and testing new features. That has always been Rawhide's job, and so it should remain. Otherwise Fedora, long the distribution you installed if you wanted to be as close as possible to the pulse of Linux development, will turn more and more into a Debian on an RPM basis.

Judgement Day

GDB will be the weapon we fight with if we accidentally build Skynet.

Red Hat joins Khronos

So Red Hat is now formally a member of the Khronos Group, which many of you probably know as the shepherd of the OpenGL standard. We haven’t gotten all the little bits sorted yet, like getting our logo on the Khronos website, but our engineers are signing up for the various Khronos working groups etc. as we speak.

So the reason we are joining is all the important changes happening in graphics and GPU compute these days, and our wish to have more direct input on the direction of some of these technologies. Our efforts are likely to focus on improving the OpenGL specification by proposing some new extensions, and of course on providing input and help with moving the new Vulkan standard forward.

So well known Red Hat engineers such as Dave Airlie, Adam Jackson, Rob Clark and others will from now on play a much more direct role in helping shape the future of 3D Graphics standards. And we really look forward to working with our friends and colleagues at Nvidia, AMD, Intel, Valve and more inside Khronos.

F22 Beta, Flock, Linux 4.0, Fedora 23 (!), and Diversity — it’s 5tFTW for April 17th, 2015

Fedora is a big project, and it’s hard to keep up with everything. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for April 17th, 2015:

Fedora 22 Beta is Go!

The Thursday before a planned Fedora release (whether alpha, beta, or final), we hold a meeting with key groups including engineering, quality assurance, and release engineering to decide if the release is “Go” or “No Go” — last week, it was the latter, but with the remaining blocker bugs ironed out, Fedora 22 beta is now Go. Expect to see it Tuesday morning!

Flock Conference Update

Flock (our big annual planning and development conference for contributors) will be held in Rochester, New York this August 12-15. Registration is open now, so if you haven’t already, please sign up. The conference is free and open to all, thanks to funding from our sponsors. Also, if you have something to share about Fedora development and ideas for our future, please submit a talk proposal. (We do have a limited budget available for travel for speakers in need of assistance.)

Fedora and Linux 4.0

The latest Linux kernel release has been getting a lot of hype in the press the last few days, primarily because of the big version number switch. (I know a lot of you are excited about it from the traffic and comments on our Fedora Magazine article on the topic earlier this week.) On his blog, Fedora Kernel team member Josh Boyer breaks down what this means for Fedora. In short, don’t get too excited over the version, as it’s just a number not really meant to signify any big change. That includes the “live patching” functionality, which, as Josh explains, is not a very useful fit for Fedora, since we update to the newest kernel release frequently.

Fedora 23 Already

I know Fedora 22 isn’t even out the door yet, but the calendar marches on. Jan Kurik (working with Fedora Program Manager Jaroslav Resnik) has put together a preliminary schedule proposal for Fedora 23 for FESCo (the Fedora Engineering Steering Committee) to review. This targets an October release, in keeping with our general plan to keep on a “Mother’s Day / Halloween” release cadence. Of particular note to Fedora developers, that means that the submission process for Fedora Changes opens this coming Tuesday, right after the F22 beta ships, with a deadline in about two months. So, if you have an idea that didn’t quite make F22, or a new one brewing, the window is open again.

Diversity Advisor Search Team

Last month, I announced our plan to form a search committee to help us find the right person for the Diversity Advisor role on the Fedora Council. I’m happy to say that we got substantial interest in participation and to announce the formation of the search team. See that message for details, and stay tuned for more updates as the group begins its search.

GNOME 3.16.1 is out.

GNOME 3.16 is the latest version of GNOME 3, and is the result of six months' work by the GNOME project. It includes major new features, as well as a large number of smaller improvements and enhancements. The release incorporates 33525 changes, made by approximately 1043 contributors.

The lists of updated modules and changes are available here:
  core   -  https://download.gnome.org/core/3.16/3.16.1/NEWS
  apps   -  https://download.gnome.org/apps/3.16/3.16.1/NEWS

The source packages are available here:
  core   -  https://download.gnome.org/core/3.16/3.16.1/sources/
  apps   -  https://download.gnome.org/apps/3.16/3.16.1/sources/
Looking back on a day in the mud – 2015 Rasputitsa

Back in mid-January, the weather in New England had been unseasonably nice and it was looking like we were going to have a mild winter. I had completed the Rapha Festive 500 at the end of the year and felt like it would be a good winter of riding, although it was starting to get cold in January. Someone mentioned the Rasputitsa gravel race (probably Chip) and I thought it looked like it could be fun. There was one little blizzard as we neared the end of January (and the registration increase!) but things still seemed okay. So I signed up, thinking it would help keep me riding even through the cold. Little did I know that we were about to get hit with a record amount of snow, basically keeping me off the bike for six weeks. So March rolls around, I've barely ridden, and Rasputitsa is a month away. Game. On.

I stepped up my riding and by a week ago, I started to feel I'd at least be able to suffer through things. But everyone I'd been talking with about driving up was bailing, so I started thinking along the same lines. Then on Friday afternoon, my friend Kate reminded me: “What would Jens do?” And that settled it, I was going.

I drove up and spent the night in Lincoln, NH on Friday night to avoid having to do a 3 hour drive on Saturday morning before the race. I woke up Saturday morning, had some hotel breakfast and drove the last hour to Burke. As I stepped out of the car, I was hit by a blast of cold wind and snow flurries were starting to fall. And I realized that my vest and my jacket hadn’t made the trip with me, instead being cozy in my basement. Oops.

I finished getting dressed, spun down to pick up my number and then waited around for the start. It was cold but I tried to at least keep walking around, chatting with folks I knew and considering buying another layer from one of the vendors, although I decided against it.

It’s overcast and chilly as we line up at the start

But then we lined up and, in what was in retrospect not my wisest choice of the day, I decided to line up with some friends of mine who were near the back. But when we started, I couldn’t just hang out at the back and enjoy a nice ride. Instead, I started picking my way forward through the crowd. My heart rate started to go up, though my Garmin wasn’t picking up the HR strap, just as the road did. The nice thing was that this also had the effect of warming me up so I didn’t feel cold. The roads started out smooth but quickly got to washed-out dirt, potholes and peanut-butter-thick mud. But it was fun… I hadn’t spent time on roads like this before but it was good. I got into a rhythm where on the flats and climbs I would push hard, and then on some of the downhills I would be a little sketched out and take it slower. So I’d pass people going up, they’d pass me going down. But I was making slow progress forward.

Until Cyberia. I was feeling strong. I was 29.3 miles into the 40. And I thought that I was going to end up with a pretty good time. After a section of dirt that was all uphill, we took a turn onto a snow-covered hill. I was able to ride about 100 feet before hopping off and starting to walk the bike uphill. And that is when the pain began. My calves pulled and hurt. I couldn’t go that quickly. The ruts were hard to push the bike through. And it kept going. At the bottom of the hill, they had said 1.7 miles to the feed zone… I thought I’d ride some of it. But no, I walked it all. Slowly. Painfully. And bonking while I did it, since I needed to eat by the time I got there and I couldn’t walk, push my bike, and eat at the same time. I made it to the top and thought that maybe I could ride down. But no, more painful walking. It was an hour of suffering. It wasn’t pretty. But I did it. And I was passed by oh so many people. It was three of the hardest miles I’ve ever had.

The slow and painful slog through the snow. Photo courtesy of @jarlathond

I reached the bottom, where the road began again, and got back on my bike. They said we had 7.5 miles to go but I was delirious. I tried to eat and drink and get back into pedaling. I couldn’t find my rhythm. I was cold. But I kept going, because suffering is something I can do. So I managed to basically hold on to my position, although I certainly didn’t make up any ground. I took the turn for 1K to go, rode 200 meters and saw the icy, snowy chute down to the finish… I laughed, carefully worked my way down it, and crossed the finish line. 4:12:54 on the clock… a little above the 4 hours I’d hoped for, but the hour and 8 minutes I spent on Cyberia didn’t help me.

Yep, ended up with some mud there.

I went back to the car, changed and took advantage of the plentiful and wonderful food on offer before getting back in the car and starting the three hour drive back home.

Mmm, all the food

So how was it? AWESOME. One of the most fun days I’ve had on the bike. Incredibly well-organized and run. Great food both on the course (Untappd maple syrup hand up, home made cookie handup, home made doughnuts at the top of Cyberia, Skratch Labs bottle feeds) and after. The people who didn’t come missed out on a great day on a great course put on by great people. I’m already thinking that I probably will have to do the Dirty 40 in September. As for next year? Well, with almost a week behind me, I’m thinking that I’ll probably tackle Rasputitsa again… although I might go for more walkable shoes than the winter boots I wore this year and try to be a bit smarter about Cyberia. But what a great start event for the season!

Fire. Chainsaws. Alf. Basically, all of Vermont’s finest on offer.

The iPhone’s New IT

From the PC to the datacenter: how the iPhone changed everything we did in IT

The formula was ambitious for 2007: a phone with an innovative large multitouch screen, a virtual keyboard that finally worked, SMS rethought and presented as a conversation, an e-mail application with an extremely effective and clear interface, countless sensors interacting with the physical world. And, above all, a complete, advanced browser that worked as well as the one we had on the desktop. On top of that, Apple smartly packed the device with open standards, integration points, and synchronization mechanisms such as HTML5, MPEG-4, DAV, LDAP, IPsec, IMAP, SSL, etc., which other smartphone makers neglected or shipped as flawed, limited implementations. The result was a beautiful, incomparable product that left the factory with advanced features and pleased everyone from grandmothers to demanding nerds. Its fluid interface simulated the laws of gravity and inertia, understood human movements (gestures), and eliminated the learning curve. No wonder it was a market success and every other manufacturer started following its formula.

teen-girls-with-mobile-phones

The iPhone marks the moment we started carrying a reliable, complete, always-on and always-connected computer in our pockets. Since then, statistics show that the number of visits to websites from mobile devices that followed its formula keeps growing and, in many cases, already surpasses visits from conventional personal computers. Developers began stressing the importance of Mobile First, which preaches that sites should be designed first for mobile devices (no Flash, small screens) and only then for the desktop. iPhones and iPads (tablets derived from the iPhone’s model) were the biggest influences in consolidating HTML5 as a universal standard, which in turn topples the hegemony of MS Windows on the desktop, because HTML5 works anywhere there is a modern browser.

Mobile apps also emerged, giving developers more power and control and taking the user experience beyond the browser’s limitations. The popularity of these smartphones became the golden opportunity that corporate marketing departments had long been waiting for: to be with their customers at all times, delivering information contextualized by geolocation and capturing data about the world around them. Being always with the customer via their phone became a competitive factor among companies, and IT finally made it onto the agenda of departments like HR, marketing, and sales, which used to see it as a kind of necessary evil.

For the sake of agility, those departments started controlling the whole cycle, from conception to app development to the user’s phone, but their companies’ traditional IT processes lacked that agility. It was in this context that the Cloud grew, precisely because its providers deliver elastic infrastructure instantly. Advertising agencies turned into app-development shops overnight and, together with their clients, became avid consumers of Cloud services. These agencies became popularly known as “digital agencies”.

The growing number of “computers” circulating around the world also increased the need to analyze and extract meaning from the swelling mass of data they generate. In this context, whoever makes good use of analytics technologies (BI, BAO, Big Data) gains a competitive edge. In fact, the apps users love are those that give access to a colossal amount of data delivered in a precise, contextualized, useful, and beautiful way. This is the intrinsic relationship between analytics (BAO) and mobility done well. This relationship and phenomenon have been called Socialitics: instant analytics drives engagement and heavier use of an application, which in turn generates more data, closing the loop of high data circulation.

Beautiful, well-designed applications and sites with a learning curve of zero became the standard the market aspires to. Encapsulation of complexity, usability, and user interface (commonly called UX, or user experience) gained enormous importance because the computer was no longer the exclusive tool of the information worker; it was now in everyone’s hands. Applications that had decent functionality but were not intuitive and pleasant to use would no longer be tolerated.

The explosive growth of social networks also happened exclusively thanks to the popularization of smartphones, which involved and engaged users through notifications on the phone screen, wherever they were. The experience on these networks became more intense and genuinely online. They won people over because people realized this was a form of communication as ubiquitous, dynamic, and agile as spoken language, thanks to the ever-present smartphone.

After winning over everyone from children to the elderly and changing the daily lives of every generation in between, mobility’s next frontier is the workplace. Being, at last, the “computers” that truck drivers and executives alike learn to use instantly, tablets and smartphones become their single work tool. Not just for e-mail and calendar, but now to interact directly with corporate systems, ERPs, CRMs, SCMs, eliminating intermediate paperwork and adding geolocation, cameras, contextualized notifications, accurate data, multimedia, interactivity, and instant connection with other employees. This last aspect is a new trend in work relationships that has been called Social Business. Everything we knew as Productivity moved to a whole new level.

But here a paradox begins. While access to technology enters an exuberant process of popularization and simplification, building and maintaining the technological arsenal to support that process is considerably more complex than before: in the computing capacity required, in the agility to ship new features, in the complexity of integrating disparate products to meet advanced needs, and in the new multidisciplinary knowledge demanded of IT teams at every level. This is the IT industry’s new challenge, where highly complex yet easy-to-manage, easy-to-use integrated solutions tend to be valued more. And, in turn, technology vendors that operate in this integrated fashion tend to gain prominence.

Seven years have passed since the birth of the iPhone and we have seen profound changes in the IT market. But make no mistake: we are just getting started.

Running VMs on Fedora/AArch64

There are moments when more than one machine would be handy. But since AArch64 computers are not yet available in the shop around the corner, we have to go for other options. So this time let’s check how to get virtual machines working.

Requirements

For this I will use Fedora 22 on an APM Mustang (other systems will be fine too). What else is needed:

  • libvirtd running
  • virt-manager 1.1.0-7 (or higher) installed on AArch64 machine
  • UEFI for AArch64 from Gerd Hoffmann’s firmware repository
  • Fedora or Debian installation iso (Ubuntu does not provide one)
  • computer with X11 working (to control virt-manager)

Is KVM working?

First we need to get KVM working: run “dmesg|grep -i kvm” after system boot. The output should look like this:

hrw@pinkiepie-f22:~$ dmesg|grep -i kvm
[    0.261904] kvm [1]: interrupt-controller@780c0000 IRQ5
[    0.262011] kvm [1]: timer IRQ3
[    0.262026] kvm [1]: Hyp mode initialized successfully

But you can also get this:

[    0.343796] kvm [1]: GICV size 0x2000 not a multiple of page size 0x10000
[    0.343802] kvm [1]: error: no compatible GIC info found
[    0.343909] kvm [1]: error initializing Hyp mode: -6

In that case, the fixed DeviceTree blob from bug #1165290 is needed. Fetch the attached DTB, store it as “/boot/mustang.dtb”, and then edit the “/etc/grub2-efi.cfg” file so the kernel entry looks like this:

menuentry 'Fedora (4.0.0-0.rc5.git4.1.fc22.aarch64) 22 (Twenty Two)' --class fedora --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-4.0.0-0.rc5.git2.4.1.fc22.aarch64-advanced-13e42c65-e2eb-4986-abf9-262e287842e4' {
        load_video
        insmod gzio
        insmod part_gpt
        insmod ext2
        set root='hd1,gpt32'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt32 --hint-efi=hd1,gpt32 --hint-baremetal=ahci1,gpt32  13e42c65-e2eb-4986-abf9-262e287842e4
        else
          search --no-floppy --fs-uuid --set=root 13e42c65-e2eb-4986-abf9-262e287842e4
        fi
        linux /boot/vmlinuz-4.0.0-0.rc5.git4.1.fc22.aarch64 root=UUID=13e42c65-e2eb-4986-abf9-262e287842e4 ro  LANG=en_GB.UTF-8
        initrd /boot/initramfs-4.0.0-0.rc5.git4.1.fc22.aarch64.img
        devicetree /boot/mustang.dtb
}

After reboot KVM should work.

Software installation

The next step is installing the VM software: “dnf install libvirt-daemon* virt-manager” will handle that. But to run virt-manager we also need a way to see it. X11 forwarding over ssh to the rescue ;D After connecting over ssh I usually cheat with “sudo ln -sf ~hrw/.Xauthority /root/.Xauthority” to be able to run UI apps as the root user.

UEFI firmware

The next phase is UEFI, which allows us to boot a virtual machine from ISO installation images (as opposed to the kernel/initrd combo, the only option when no firmware/bootloader is available). We will install it from the repository provided by Gerd Hoffmann:

hrw@pinkiepie-f22:~$ sudo -s
root@pinkiepie-f22:hrw$ cd /etc/yum.repos.d/
root@pinkiepie-f22:yum.repos.d$ wget https://www.kraxel.org/repos/firmware.repo
root@pinkiepie-f22:yum.repos.d$ dnf install edk2.git-aarch64

Then change the libvirtd configuration to point at the just-installed firmware. Edit the “/etc/libvirt/qemu.conf” file and add this at the end:

nvram = [
   "/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw"
]

Restart libvirtd via “systemctl restart libvirtd“.

Running Virtual Machine Manager

Now we can connect via “ssh -X” and run “sudo virt-manager“:

vm1

The next step is connecting to libvirtd:

vm2vm3

Now we are ready to create VMs. After pressing the “Create a new VM” button we should see this:

vm4

From there, VM creation goes nearly the same as on x86 machines, except there is no graphical console, only a serial one.

But if you forgot to set up the UEFI firmware, then you will get this:

vm5

In that case, go back to the UEFI firmware step.
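As a side note, roughly the same VM can be sketched from the command line with virt-install instead of the virt-manager UI. This is only a sketch under assumptions: the VM name, memory/disk sizes, and ISO path below are made-up placeholders, not values from this setup.

```shell
# Hypothetical example: create an AArch64 guest booting a Fedora 22
# install ISO via the edk2 UEFI firmware configured in qemu.conf above.
# Name, sizes and ISO path are placeholders; adjust to taste.
virt-install \
  --connect qemu:///system \
  --name fedora22-test \
  --arch aarch64 \
  --memory 2048 \
  --vcpus 3 \
  --boot uefi \
  --disk size=10 \
  --cdrom /var/lib/libvirt/images/Fedora-Server-aarch64-22_Beta.iso \
  --graphics vnc
```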

Installing Fedora 22 in VM

So let’s test how it works. Fedora 22 is in its Beta phase now, so why not test it?

vm6

vm10

vm11

2GB of RAM and 3 CPU cores should be more than enough ;D

vm12

And 10GB for a minimal system:

vm13

vm14

But when it switched to the serial console, it did not look good :(

vm15

I realized that I had forgotten to install fonts, but a quick “dnf install dejavu*fonts” sorted that out:

vm16

So go for a VNC-controlled installation.

After the installation finished, the system runs just fine:

vm17

Summary

As you can see, Fedora 22 has everything in place to get VMs running on AArch64. The UEFI firmware is the only thing outside the distribution, reportedly due to licensing issues around the vfat implementation or something like that. I ran virtual machines with Debian ‘jessie’ and Fedora 22. I wanted to check Ubuntu too, but all I found was a kernel/initrd combo (which is one of the ways to boot in virt-manager) and it did not boot in a VM.


All rights reserved © Marcin Juszkiewicz
Running VMs on Fedora/AArch64 was originally posted on Marcin Juszkiewicz website

All systems go
Service 'Package Updates Manager' now has status: good: Everything seems to be working.
Net Neutrality and the Death of Distance


Many years ago, I had interviewed Kevin Kelly, the celebrated former editor of Wired magazine. From that interview, it was clear that the telcos and the Internet players would end up clashing one day. The net is a global platform where distance does not matter, while the telco world is metered on distance through different rates for STD and ISD calls. The advent of smartphones and the mobile Internet has led to a collision of both these worlds. In a world where bandwidth is abundant and cheap, the concept of metering based on distance will fade away. This is the reason that telcos are mortally scared of services like Skype, Whatsapp and others that take away their voice and SMS revenues. The death of distance is a consumer-friendly evolution that the telcos will keep resisting till their last breath.

Telcos have also been terrible at fostering innovation, as the failure of Value Added Services proves. In sharp contrast, the combination of smartphones and mobile internet has led to a thriving app ecosystem. The telcos have only themselves to blame for the fact that the app ecosystem has completely bypassed them. The VAS ecosystem they controlled was extremely unfriendly to entrepreneurs and customers. If the telcos are allowed to decide which app to promote, it could lead to another fiasco. The Internet is a level playing field where innovation and consumer friendliness wins. Private arrangements like Airtel Zero could distort this market through sheer money power, because those who pay to be featured on such platforms would get an advantage over others. However, I have mixed feelings about Internet.org, which provides some Internet services for free, since those who could not afford Internet would at least get a taste of it.

Tampering with the level playing field of the Internet is an extremely bad idea which will destroy the innovative nature of the Internet. If we go down this path, it could take years, if not decades, to repair the damage. TRAI has done a great disservice by putting out a discussion paper that articulated only the telcos' point of view. I hope the government nips this proposal in the bud and defends net neutrality.

I am quoted in this article in Economic Times. My quotes have been heavily edited, and hence this lengthy preamble.
Fedora 22 will contain Django-1.8

One of the new features in the upcoming Fedora 22 will be Django-1.8. The Django project released its most recent version earlier this month, and it is going to be a long term supported version, now that Django-1.4 has become a bit ancient. Fedora had shipped 1.6, which is now deprecated by Django upstream.

A few packages required patches to support Django-1.8 though:

  • python-django-openstack-auth
  • python-django-compressor
  • python-django-nose
  • python-django-horizon

Of course, they were fixed and included already in Fedora 22.

Major service disruption
Service 'Package Updates Manager' now has status: major: Load issues, being looked into
All systems go
Service 'Package Updates Manager' now has status: good: Everything seems to be working.
Minor service disruption
Service 'Package Updates Manager' now has status: minor: Load issues, being looked into
LOADays 2015: Logging, ElasticSearch, home automation and other sysadmin topics
This was a busy weekend again, but I do not mind, as I was at my favorite conference: Linux Open Administration Days in Antwerp. It is a relatively small conference, with less than 200 visitors. On the other hand, all of them are sysadmins or develop software to ease the work of sysadmins, so everyone […]
PHP version 5.4.40, 5.5.24 and 5.6.8

RPM of PHP version 5.6.8 are available in remi repository for Fedora  21 and  remi-php56 repository for Fedora ≤ 20 and Enterprise Linux (RHEL, CentOS).

RPM of PHP version 5.5.24 are available in remi repository for Fedora ≤ 20 and in remi-php55 repository for  Enterprise Linux.

RPM of PHP version 5.4.40 are available in remi repository for Enterprise Linux.

These versions are also available as Software Collections.

These versions fix various security bugs, so update is strongly recommended.

Version announcements:

5.4.33 was the last planned release that contains regular bugfixes. All subsequent releases contain only security-relevant fixes, for the term of one year.

Installation: read the Repository configuration and choose your version and installation mode.

Replacement of default PHP by version 5.6 installation (simplest):

yum --enablerepo=remi-php56,remi update php\*

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum --enablerepo=remi install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum --enablerepo=remi-php55,remi update php\*

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

Replacement of default PHP by version 5.4 installation (enterprise only):

yum --enablerepo=remi update php\*

Parallel installation of version 5.4 as Software Collection (x86_64 only):

yum --enablerepo=remi install php54
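A note on the `php\*` argument used in the update commands above: the backslash stops the shell from glob-expanding the pattern against files in the current directory, so yum receives the literal `php*` and matches package names itself. A quick sketch of the escaping:

```shell
# Escaped: the shell passes the literal pattern through to the command
echo php\*        # prints: php*

# Quoting achieves the same thing
echo 'php*'       # prints: php*
```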

And soon in the official updates:

To be noticed:

  • EL7 rpm are built using RHEL-7.1
  • EL6 rpm are built using RHEL-6.6
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php54 - php55)

April 16, 2015

WordPress Community is in Pain

I don’t know about you senior bloggers but I’m starting to hate the way the WordPress community has evolved and what it became.

From a warm and advanced blogging software and ecosystem it is now an aberration for poor site makers. Themes are now mostly commercial, focused on institutional/marketing sites and not blogs anymore. WordPress is simply a very poor tool for this purpose. You can see this when several themes are getting much more complex than WordPress per se.

It is still a very powerful software for bloggers, but apparently individuals and theme makers are not focusing on this domain anymore. What a pity!

Drupal would be a much better solution for general non-blog sites, but it has a steeper learning curve. Still on Drupal: I’ve tried to migrate my WordPress blog to Drupal several times in the past, but WordPress is still more straightforward for a personal blog.

I’m desperately seeking a new blogging platform for https://Avi.Alkalay.net. The best information architecture I’ve seen for a blog is the front page of ProBlogger.net. It has a featured post on top, the less important stream of recent posts on the left, and a central block with 3 tabs with links to important posts. I’ve tried in the past to write a similar theme and succeeded, but maintaining it was a pain for an occasional blogger like me. I posted it on GitHub as Plasma and here is a screenshot of how my blog looked a few years ago:

Screenshot of Plasma WordPress theme and how it leverages your best content, not just your last posts.

kdbus discussion

I am following the discussion caused by Greg Kroah-Hartman requesting that kdbus be pulled into the next kernel release. First of all, my hat off to Greg for his persistence and for staying civil. There have already been quite a few posts in the thread coming close to attempts at character assassination, and a lot of emails just adding more noise, but no signal.

One point I feel is missing from the discussion, though, is the question of not making the perfect the enemy of the good. A lot of the posters are saying ‘hey, you should write something perfect here instead of what you have currently’. Which sounds reasonable on some level, but when you think about it, it is more a deflection strategy than a real suggestion. First of all, there is no such thing as perfect. And secondly, if there was, how long would it take to provide the theoretically perfect thing? 2 more years, 4 years, 10 years?

Speaking as someone involved in making an operating system, I can say that we would have liked to have kdbus 5 years ago, and would much prefer to get kdbus in the next few months than to get something ‘perfect’ 5 years from now.

So you can say ‘hey, you are creating a strawman here, nobody used the word ‘perfect’ in the discussion’. Sure, nobody used that word, but a lot of messages were written about how ‘something better’ should be created here instead. Yet, based on where these people seemed to be coming from, the question I ask is: better for who? Better for the developers who are already using dbus in their applications or desktops? Better for a kernel developer who is never going to use it? Better for someone doing code review? Better for someone who doesn’t need the features of dbus, but who would want something else?

And what is ‘better’ anyway? Greg keeps calling for concrete technical feedback, but at the end of the day there are a lot of cases where the ‘best’ technical solution, to the degree you can measure that objectively, isn’t actually ‘the best’. I mean, if I came up with a new format for storing multimedia on an optical disc, one which from a technical perspective is ‘better’ than the current Blu-Ray spec, that doesn’t mean it is actually better for the general public. Getting a non-standard optical disc that will not play in your home Blu-Ray player isn’t better for 99.999% of people, regardless of the technical merit of the on-disc data format.

Something can actually be ‘better’ just because it is based on something that already exists, something which a lot of people are already using, which lets people quickly extend what they are already doing with the new functionality without needing a rewrite, and which is available ‘now’ as opposed to at some undefined time in the future. And that is where I feel a lot of the debaters on the lkml are dropping the ball in this discussion; they just keep asking for a ‘better solution’ to the challenges of a space they often don’t have any personal experience developing in, because kdbus doesn’t conform to how they would implement a kernel IPC mechanism for the kind of problems they are used to trying to solve.

Also there have been a lot of arguments about the ‘design’ of kdbus and dbus, largely lacking concreteness and mostly coming from people very far removed from working on the desktop and other relevant parts of userspace. Which at the end of the day boils down to trying to make the litmus test ‘you have to prove to me that making a better design is impossible’, and I think anyone should be able to agree that if that were the test for adding anything to the Linux kernel or elsewhere, then very little software would ever get added anywhere. In fact, if we were to hold to that kind of argumentation, we might as well leave software development behind and move to an online religion discussion forum, tossing the good ol’ ‘prove to me that God doesn’t exist’ into the ring.

So in the end I hope Linus, who is the final arbiter here, doesn’t end up making the ‘perfect’ the enemy of the good.

Flockchester 2015

Flock to Fedora is our yearly conference where Fedora contributors from all around the world gather to discuss the past year, talk about where we’re headed, hack on various projects, see old friends, and meet new faces. This year, we’ll converge on Rochester, NY, where I went to college. I’m excited to take a trip back east, and hopefully see some friends who are still living in the area. The event will take place from August 12-15, and is sure to be a good time.

All are welcome to attend Flock. If you’re planning to come, make sure to pre-register and book a hotel if you need one. I expect to see a lot of RIT students there, especially you FOSSBoxers! ;)

If we’ll see you in Rochester this summer, consider proposing a talk! It’s a great way to network with the community and share something cool. More proposals mean a better schedule for everyone. Proposals are due on May 2, so don’t hesitate for too long! You can propose a talk via that same registration page.

Flock is always a blast. Conference days are fun, loaded with new ideas, fresh code, and good times. They’re followed by the infamous evening parties (and afterparties) that have become a staple of this incredible week that is not to be missed. Hope to see you there!

Debian Jessie release, 100 year ANZAC anniversary

The date scheduled for the jessie release, 25 April 2015, is also ANZAC day and the 100th anniversary of the Gallipoli landings. ANZAC day is a public holiday in Australia, New Zealand and a few other places, with ceremonies remembering the sacrifices made by the armed services in all the wars.

Gallipoli itself was a great tragedy. Australian forces were not victorious. Nonetheless, it is probably the most well remembered battle from all the wars. There is even a movie, Gallipoli, starring Mel Gibson.

It is also the 97th anniversary of the liberation of Villers-Bretonneux in France. The previous day had seen the world's first tank vs tank battle between three British tanks and three German tanks. The Germans won and captured the town. At that stage, Britain didn't have the advantage of nuclear weapons, so they sent in Australians, and the town was recovered for the French. The town has a rue de Melbourne and rue Victoria and is also the site of the Australian National Memorial for the Western Front.

It’s great to see that projects like Debian are able to span political and geographic boundaries and allow everybody to collaborate for the greater good. ANZAC day might be an interesting opportunity to reflect on the fact that the world hasn't always enjoyed such community.

Creating a new Network for a dual NIC VM

I need a second network for testing a packstack deployment. Here is what I did to create it, and then to boot a new VM connected to both networks.

Once again the tables are too big for the stylesheet I am using, but I don’t want to modify the output. The view source icon gives a more readable view.

The Common client supports creating networks.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack network create ayoung-private
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| id          | 9f2948fa-77dd-483d-8841-f9461ee50aee |
| name        | ayoung-private                       |
| project_id  | fefb11ea894f43c0ae5c9686d2f49a9d     |
| router_type | Internal                             |
| shared      | False                                |
| state       | UP                                   |
| status      | ACTIVE                               |
| subnets     |                                      |
+-------------+--------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron  subnet create ayoung-private 192.168.52.0/24 --name ayoung-subnet1
Invalid command u'subnet create ayoung-private 192.168.52.0/24 --name'

But not any of the other neutron operations… at least not at first glance. We’ll see later if that is the case, but for now, use the neutron client, which seems to support the V3 Keystone API for auth. Create a subnet:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron  subnet-create ayoung-private 192.168.52.0/24 --name ayoung-subnet1
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.52.2", "end": "192.168.52.254"} |
| cidr              | 192.168.52.0/24                                    |
| dns_nameservers   |                                                    |
| enable_dhcp       | True                                               |
| gateway_ip        | 192.168.52.1                                       |
| host_routes       |                                                    |
| id                | da738ad8-8469-4aa8-ab91-448bd3878ae6               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | ayoung-subnet1                                     |
| network_id        | 9f2948fa-77dd-483d-8841-f9461ee50aee               |
| tenant_id         | fefb11ea894f43c0ae5c9686d2f49a9d                   |
+-------------------+----------------------------------------------------+
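The allocation pool and gateway in the output above follow directly from the CIDR: by default Neutron takes the first usable host address as the gateway and pools the rest. A quick stdlib-only Python sketch of that arithmetic:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.52.0/24")
hosts = list(subnet.hosts())             # .1 through .254 for a /24

gateway = hosts[0]                       # Neutron's default gateway: first host
pool_start, pool_end = hosts[1], hosts[-1]

print(gateway)      # 192.168.52.1
print(pool_start)   # 192.168.52.2
print(pool_end)     # 192.168.52.254
```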

Create router for the subnet

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron router-create ayoung-private-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 |
| name                  | ayoung-private-router                |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | fefb11ea894f43c0ae5c9686d2f49a9d     |
+-----------------------+--------------------------------------+

Now I need to find the external network and create a router that points to it:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron net-list
+--------------------------------------+------------------------------+-------------------------------------------------------+
| id                                   | name                         | subnets                                               |
+--------------------------------------+------------------------------+-------------------------------------------------------+
| 63258623-1fd5-497c-b62d-e0651e03bdca | idm-v4-default               | 3227f3ea-5230-411c-89eb-b1e51298b4f9 192.168.1.0/24   |
| 9f2948fa-77dd-483d-8841-f9461ee50aee | ayoung-private               | da738ad8-8469-4aa8-ab91-448bd3878ae6 192.168.52.0/24  |
| eb94d7e2-94be-45ee-bea0-22b9b362f04f | external                     | 3a72b7bc-623e-4887-9499-de8ba280cb2f                  |
+--------------------------------------+------------------------------+-------------------------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron router-gateway-set 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 eb94d7e2-94be-45ee-bea0-22b9b362f04f
Set gateway for router 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78

The router needs an interface on the subnet.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$  neutron router-interface-add 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 da738ad8-8469-4aa8-ab91-448bd3878ae6
Added interface 782fdf26-e7c1-4ca7-9ec9-393df62eb11e to router 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78.

Not sure if I need to create a port, but worth testing out:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron port-create ayoung-private --fixed-ip ip_address=192.168.52.20
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:vnic_type     | normal                                                                               |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| fixed_ips             | {"subnet_id": "da738ad8-8469-4aa8-ab91-448bd3878ae6", "ip_address": "192.168.52.20"} |
| id                    | 80f302db-6c27-42a0-a1a3-45fcfe0b23fe                                                 |
| mac_address           | fa:16:3e:bf:e3:7d                                                                    |
| name                  |                                                                                      |
| network_id            | 9f2948fa-77dd-483d-8841-f9461ee50aee                                                 |
| security_groups       | 6c13abed-81cd-4a50-82fb-4dc98b4f29fd                                                 |
| status                | DOWN                                                                                 |
| tenant_id             | fefb11ea894f43c0ae5c9686d2f49a9d                                                     |
+-----------------------+--------------------------------------------------------------------------------------+

Now to create the VM. I specify the --nic param twice.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack server create   --flavor m1.medium   --image "CentOS-7-x86_64" --key-name ayoung-pubkey  --security-group default  --nic net-id=63258623-1fd5-497c-b62d-e0651e03bdca  --nic net-id=9f2948fa-77dd-483d-8841-f9461ee50aee     test2nic.cloudlab.freeipa.org
+--------------------------------------+--------------------------------------------------------+
| Field                                | Value                                                  |
+--------------------------------------+--------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                 |
| OS-EXT-AZ:availability_zone          | nova                                                   |
| OS-EXT-STS:power_state               | 0                                                      |
| OS-EXT-STS:task_state                | scheduling                                             |
| OS-EXT-STS:vm_state                  | building                                               |
| OS-SRV-USG:launched_at               | None                                                   |
| OS-SRV-USG:terminated_at             | None                                                   |
| accessIPv4                           |                                                        |
| accessIPv6                           |                                                        |
| addresses                            |                                                        |
| adminPass                            | Exb7Qw3syfDg                                           |
| config_drive                         |                                                        |
| created                              | 2015-04-16T03:35:27Z                                   |
| flavor                               | m1.medium (3)                                          |
| hostId                               |                                                        |
| id                                   | fffef6e0-fcce-4313-af7a-81f9306ef196                   |
| image                                | CentOS-7-x86_64 (38534e64-5d7b-43fa-b59c-aed7a262720d) |
| key_name                             | ayoung-pubkey                                          |
| name                                 | test2nic.cloudlab.freeipa.org                          |
| os-extended-volumes:volumes_attached | []                                                     |
| progress                             | 0                                                      |
| project_id                           | fefb11ea894f43c0ae5c9686d2f49a9d                       |
| properties                           |                                                        |
| security_groups                      | [{u'name': u'default'}]                                |
| status                               | BUILD                                                  |
| updated                              | 2015-04-16T03:35:27Z                                   |
| user_id                              | 64951f595aa444b8a3e3f92091be364d                       |
+--------------------------------------+--------------------------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack server list
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| ID                                   | Name                                | Status  | Networks                                                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| 820f8563-28ae-43fb-a0ff-d4635bd6dd38 | ecp.cloudlab.freeipa.org            | SHUTOFF | idm-v4-default=192.168.1.77, 10.16.19.28                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+

Set a Floating IP and ssh in:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack  ip floating list | grep None | sort -R | head -1
| a5abf332-68dc-46c5-a4f1-188b91f8dbf8 | external | 10.16.18.225 | None           | None                                 |
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack ip floating add  10.16.18.225 test2nic.cloudlab.freeipa.org

echo 10.16.18.225 test2nic.cloudlab.freeipa.org | sudo tee -a /etc/hosts
10.16.18.225 test2nic.cloudlab.freeipa.org
$ ssh centos@test2nic.cloudlab.freeipa.org
The authenticity of host 'test2nic.cloudlab.freeipa.org (10.16.18.225)' can't be established.
ECDSA key fingerprint is e3:dd:1b:d6:30:f1:f5:2f:14:d7:6f:98:d6:c9:08:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'test2nic.cloudlab.freeipa.org,10.16.18.225' (ECDSA) to the list of known hosts.
[centos@test2nic ~]$ ifconfig eth1
eth1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether fa:16:3e:ab:14:2e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
Fedora 22 and Kernel 4.0
As Ryan noted yesterday, Fedora 22 is on track to ship with the 4.0 kernel release. So what does that mean in the grand scheme of things? In short, not much.

The major version change wasn't done because of any major feature or change in process, or really anything exciting at all. Linus Torvalds changed it because he felt the minor version number was getting a bit large and he liked 4.0 better. It was really more of a whim than anything contained within the kernel itself. The initial merge window builds of this kernel in Fedora were even called 3.20.0-rc0.gitX until the 4.0-rc1 release came out.

In fact, this kernel release is one of the more "boring" releases in a while. It has all the normal fixes and improvements and new hardware support one would normally expect, but overall it was a fairly quiet release. So version number aside, this is really more of the same from our beloved kernel development community.

However, there is one feature that people (and various media sites) seem to have keyed in on and that is the live patching functionality. This holds the promise of being able to patch your running kernel without rebooting. Indeed, this could be very useful, but it isn't quite there yet. And it also doesn't make a whole lot of sense for Fedora at this time. The Fedora kernels have this functionality disabled, both in Fedora 22 and Rawhide.

What was merged for 4.0 is the core functionality that is shared between two live patching projects, kPatch and kGraft. kPatch is being led by a development team in Red Hat whereas kGraft is being developed by a team from SuSE. They both accomplish the same end result, but they do so via a different approach internally. The two teams met at the Linux Plumbers conference last year and worked on some common ground to make it easier to merge into mainline rather than compete with each other. This is absolutely awesome and an example of how new features should be developed upstream. Kudos to all those involved on that front.

The in-kernel code can accept patches from both methods, but the process and tools to create those patches are still being worked on in their upstream communities. Neither set is in Fedora itself, and likely won't be for some time, as it is still fairly early in the life of these projects. After discussing this a bit with the live patching maintainer, we decided to keep this disabled in the Fedora kernels for now. The kernel-playground COPR does have it enabled for those that want to dig in and generate their own patches and are willing to break things and support themselves.

In reality, we might not ever really leverage the live patching functionality in Fedora itself. It is understandable that people want to patch their kernel without rebooting, but the mechanism is mostly targeted at small bugfixes and security patches. You cannot, for example, live patch from version 4.0 to 4.1. Given that the Fedora kernel rebases both from stable kernel (e.g. 3.19.2 to 3.19.3) and major release kernels over the lifetime of a Fedora release, we don't have much opportunity to build the live patches. Our update shipping infrastructure also isn't really geared towards quick, small fixes. Really, the only viable target for this functionality in Fedora is likely on the oldest Fedora release towards the end of its lifecycle and even then it's questionable whether it would be worth the effort. So I won't say that Fedora will never have a live patch enabled kernel, but there is a lot of work and process stuff to be figured out before that ever really becomes an option.

So that's the story around the 4.0 kernel. On the one hand, it sounds pretty boring and is likely to disappoint those hoping for some amazing new thing. On the other hand, it's a great example of the upstream kernel process just chugging along and delivering pretty stable quality kernels. As a kernel maintainer, I like this quite a bit. If you have any questions about the 4.0 kernel, or really any Fedora kernel topics, feel free to email us at the Fedora kernel list. We're always happy to discuss things there.
Linux Kernel 4.0 available in Fedora 22 Alpha

Early this week, Linus released version 4.0 of the Linux Kernel. Now, this updated version of the Linux Kernel is available in the official Fedora repositories for users running the alpha release of Fedora 22.

To get the updated version of the kernel on your Fedora 22 machine, either update the system via the Software application (in Fedora Workstation), or using dnf update on the command line.

Using the openstack command line interface to create a new server.

I have to create a new virtual machine. I want to use the V3 API when authenticating to Keystone, which means I need to use the common client, as the keystone client is deprecated and only supports the V2.0 Identity API.

To do anything with the client, we need to set some authorization data variables. Create keystonerc with the following and source it:

export OS_AUTH_URL=http://oslab.exmapletree.com:5000/v3
export OS_PROJECT_NAME=Syrup
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_USERNAME=ayoung
export OS_PASSWORD=yeahright

The formatting of the command output is horribly rendered here, but if you click the little white sheet-of-paper icon that pops up when you hover your mouse cursor over the black text, you get a readable table.
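Before running any commands, it can save confusion to verify that everything from keystonerc is actually set. This little guard is my addition, not from the original post; the variable names are the ones listed above, and the values here are just the sample ones so the check can be exercised on its own:

```shell
# Sample values from the keystonerc above; in real use, `source keystonerc`
# instead of exporting them inline.
export OS_AUTH_URL=http://oslab.exmapletree.com:5000/v3
export OS_PROJECT_NAME=Syrup
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_USERNAME=ayoung
export OS_PASSWORD=yeahright

# Report any expected variable that is unset or empty.
missing=0
for v in OS_AUTH_URL OS_PROJECT_NAME OS_PROJECT_DOMAIN_NAME \
         OS_USER_DOMAIN_NAME OS_USERNAME OS_PASSWORD; do
    [ -n "$(printenv "$v")" ] || { echo "missing: $v"; missing=$((missing + 1)); }
done
echo "$missing variables missing"
```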

Sanity Check: list servers

[ayoung@ayoung530 oslab]$ openstack server list
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| ID                                   | Name                                | Status  | Networks                                                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| a5f70f90-7d97-4b79-b0f0-044d8d9b4c77 | centos7.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.72, 10.16.19.63                                    |
| 35b116e4-fdd2-4580-bb1a-18f1f6428dd5 | mysql.cloudlab.freeipa.org          | ACTIVE  | idm-v4-default=192.168.1.70, 10.16.19.92                                    |
| a5ca7644-d703-44d7-aa95-fd107e18aefd | horizon.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.67, 10.16.19.24                                    |
| f7aca565-4439-4a2f-9c31-911349ce8943 | ldapqa.cloudlab.freeipa.org         | ACTIVE  | idm-v4-default=192.168.1.66, 10.16.19.100                                   |
| 2b7b5cc1-83c4-45c3-8ca3-cd4ba4b589d3 | federate.cloudlab.freeipa.org       | ACTIVE  | idm-v4-default=192.168.1.61, 10.16.18.6                                     |
| a8649175-fd18-483c-acb7-2933226fd3a6 | horizon.kerb-demo.org               | ACTIVE  | kerb-demo.org=192.168.0.5, 10.16.19.183                                     |
| 38d24fb3-0dd3-4cf0-98d6-12ea22a1d718 | openstack.kerb-demo.org             | ACTIVE  | kerb-demo.org=192.168.0.3, 10.16.19.101                                     |
| ca9a8249-1f09-4b1a-b8d4-850019b7c4e5 | ipa.kerb-demo.org                   | ACTIVE  | kerb-demo.org=192.168.0.2, 10.16.18.218                                     |
| 29d00b3b-5961-424e-b95c-9d90b3ecf9e3 | ipsilon.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.60, 10.16.18.207                                   |
| 028df8d8-7ce9-4f61-b36f-a080dd7c4fb8 | ipa.cloudlab.freeipa.org            | ACTIVE  | idm-v4-default=192.168.1.59, 10.16.18.31                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+

I made pretty significant use of the help output. To show the basic help text:

openstack --help

Gives you a list of the options. To see help on a specific command, such as the server create command we are going to work towards executing, run:

openstack help server create

A Server is a resource golem composed by stitching together resources from other services. To create this golem I am going to stitch together:

  1. A flavor
  2. An image
  3. A Security Group
  4. A Private Key
  5. A network
First, to find the flavor:

[ayoung@ayoung530 oslab]$ openstack flavor list
+----+---------------------+------+------+-----------+-------+-----------+
| ID | Name                |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------------------+------+------+-----------+-------+-----------+
| 1  | m1.tiny             |  512 |    1 |         0 |     1 | True      |
| 2  | m1.small            | 2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium           | 4096 |   40 |         0 |     2 | True      |
| 4  | m1.large            | 8192 |   80 |         0 |     4 | True      |
| 6  | m1.xsmall           | 1024 |   10 |         0 |     1 | True      |
| 7  | m1.small.4gb        | 4096 |   20 |         0 |     1 | True      |
| 8  | m1.small.8gb        | 8192 |   20 |         0 |     1 | True      |
| 9  | oslab.4cpu.20hd.8gb | 8192 |   20 |         0 |     4 | True      |
+----+---------------------+------+------+-----------+-------+-----------+

I think this one should taste like cherry. But, since we don’t have a cherry flavor, I guess I’ll pick m1.small.4gb as that has the 4 GB RAM I need.

To find an image:

[ayoung@ayoung530 oslab]$ openstack image list
+--------------------------------------+----------------------------------------------+
| ID                                   | Name                                         |
+--------------------------------------+----------------------------------------------+
| 415162df-4bec-474f-9f3b-0a79c2ed3848 | Fedora-Cloud-Base-22_Alpha-20150305          |
| b89dc25b-6f62-4001-b979-05ac14f60e9b | rhel-guest-image-7.1-20150224.0              |
| 38534e64-5d7b-43fa-b59c-aed7a262720d | CentOS-7-x86_64                              |
| bc3c35a2-cf96-4589-ad51-a8d499708128 | Fedora-Cloud-Base-20141203-21.x86_64         |
| 9ea16df1-f178-4589-b32b-0e2e32305c61 | FreeBSD 10.1                                 |
| 6ec77e6e-7ad4-4994-937d-91003fa2d6ac | rhel-6.6-latest                              |
| e61ec961-248b-4ee6-8dfa-5d5198690cab | ubuntu-12.04-precise-cloudimg                |
| 54ba6aa9-7d20-4606-baa6-f8e45a80510c | rhel-guest-image-6.6-20141222.0              |
| bee6e762-102f-467e-95a8-4a798cb5ec75 | heat-functional-tests-image                  |
| 812e129c-6bfd-41f5-afba-6817ac6a23e5 | RHEL 6.5 20140829                            |
| f2dfff20-c403-4e53-ae30-947677a223ce | Fedora 21 20141203                           |
| 473e6f30-a3f0-485b-a5e5-3c5a1f7909a5 | RHEL 6.6 20140926                            |
| b12fe824-c98a-4af5-88a6-b1e11a511724 | centos-7-cloud                               |
| 601e162f-87b4-4fc1-a0d3-1c352f3c2988 | fedora-21-atomic                             |
| 12616509-4c4f-47a5-96b1-317a99ef6bf8 | Fedora 21 Beta                               |
| 77dcb29b-3258-4955-8ca4-a5952c157a2b | RHEL6.6                                      |
| 8550a6db-517b-47ea-82f3-ec4fd48e8c09 | centos-7-x86_64                              |
+--------------------------------------+----------------------------------------------+

Although I really want a Fedora Cloud image…I guess I’ll pick fedora-21-atomic. Close enough for Government work.

[ayoung@ayoung530 oslab]$ openstack keypair list
+---------------+-------------------------------------------------+
| Name          | Fingerprint                                     |
+---------------+-------------------------------------------------+
| ayoung-pubkey | 37:81:08:b2:0e:39:78:0e:62:fb:0b:a5:f1:d7:41:fc |
+---------------+-------------------------------------------------+

That decision is tough.

[ayoung@ayoung530 oslab]$ openstack network list
+--------------------------------------+------------------------------+--------------------------------------+
| ID                                   | Name                         | Subnets                              |
+--------------------------------------+------------------------------+--------------------------------------+
| 3b799c78-ca9d-49d0-9838-b2599cc6b8d0 | kerb-demo.org                | c889bb6b-98cd-47b8-8ba0-5f2de4fe74ee |
| 63258623-1fd5-497c-b62d-e0651e03bdca | idm-v4-default               | 3227f3ea-5230-411c-89eb-b1e51298b4f9 |
| 650fc936-cc03-472d-bc32-d56f56116761 | tester1                      |                                      |
| de4300cc-8f71-46d7-bec5-c0a4ad54954d | BROKEN                       | 6c390add-108c-40d5-88af-cb5e784a9d31 |
| eb94d7e2-94be-45ee-bea0-22b9b362f04f | external                     | 3a72b7bc-623e-4887-9499-de8ba280cb2f |
+--------------------------------------+------------------------------+--------------------------------------+

Tempted to use BROKEN, but I shall refrain. I set up idm-v4-default so I know that is good.

[ayoung@ayoung530 oslab]$ openstack security group list
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 6c13abed-81cd-4a50-82fb-4dc98b4f29fd | default | default     |
+--------------------------------------+---------+-------------+

Another tough call. OK, with all that, we have enough information to create the server:

One note: the --nic param is where you specify which network to use. That param takes a series of key/value pairs connected by equal signs. I figured that out from the old nova command line parameters, and would have been hopelessly lost if I hadn’t stumbled across the old how-to guide.

openstack server create   --flavor m1.medium   --image "fedora-21-atomic" --key-name ayoung-pubkey    --security-group default  --nic net-id=63258623-1fd5-497c-b62d-e0651e03bdca ayoung-test
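After firing off the create, one hedged way to confirm the build finished is to poll `openstack server show` for an ACTIVE status. This is my addition, not from the original post; the server name and the two-column table layout are taken from the examples above, and a captured sample row is used so the matching can be checked without a live cloud:

```shell
# Check a captured "openstack server show" row for an ACTIVE status.
# The sample row is illustrative, not real output.
row='| status     | ACTIVE     |'
status_ok=$(echo "$row" | grep -c '| *status *| *ACTIVE')
echo "$status_ok"    # 1

# Against a live cloud, poll until the build finishes, e.g.:
#   until openstack server show ayoung-test | grep -q '| *status *| *ACTIVE'; do
#       sleep 5
#   done
```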

In order to ssh to the machine, we need to assign it a floating IP address. To find one that is unassigned:

[ayoung@ayoung530 oslab]$ openstack  ip floating list | grep None
| 943b57ea-4e52-4d05-b665-f808a5fbd887 | external | 10.16.18.61  | None           | None                                 |
| a1f5bb26-4e47-4fe7-875e-d967678364a0 | external | 10.16.18.223 | None           | None                                 |
| a419c144-dbfd-4a42-9f5e-880526683ea0 | external | 10.16.18.235 | None           | None                                 |
| a5abf332-68dc-46c5-a4f1-188b91f8dbf8 | external | 10.16.18.225 | None           | None                                 |
| b5c21c4a-3f12-4744-a426-8d073b3be3c8 | external | 10.16.18.70  | None           | None                                 |
| b67edf85-2e54-4ad1-a014-20b7370e38ba | external | 10.16.18.170 | None           | None                                 |
| c43eb490-1910-4adf-91b6-80375904e937 | external | 10.16.18.196 | None           | None                                 |
| c44a4a56-1534-4200-a227-90de85a218eb | external | 10.16.19.28  | None           | None                                 |
| e98774f9-fe6e-4608-a85d-92a5f39ef2c8 | external | 10.16.19.182 | None           | None                                 |
| f2705313-b03f-4537-a2d8-c01ff1baaee1 | external | 10.16.18.203 | None           | None                                 |

I’ll choose one at random:

[ayoung@ayoung530 oslab]$ openstack  ip floating list | grep None | sort -R | head -1
| a419c144-dbfd-4a42-9f5e-880526683ea0 | external | 10.16.18.235 | None           | None                                 |

And add it to the server:

openstack ip floating add  10.16.18.235  ayoung-test
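The pick-at-random and add steps above can be combined into one small script. This sketch is my addition, not from the post: it assumes the table layout shown above (the IP address in the fourth pipe-delimited field) and uses a captured sample line so the parsing can be checked without a live cloud:

```shell
# Parse the IP out of one line of "openstack ip floating list" output.
# The sample line below is copied from the listing above.
line='| a419c144-dbfd-4a42-9f5e-880526683ea0 | external | 10.16.18.235 | None           | None                                 |'
ip=$(echo "$line" | awk -F'|' '{gsub(/ /, "", $4); print $4}')
echo "$ip"    # 10.16.18.235

# Against a live cloud, the same parsing chains onto the earlier pipeline:
#   ip=$(openstack ip floating list | grep None | sort -R | head -1 \
#        | awk -F'|' '{gsub(/ /, "", $4); print $4}')
#   openstack ip floating add "$ip" ayoung-test
```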

Test it out:

[ayoung@ayoung530 oslab]$ ssh root@10.16.18.235
The authenticity of host '10.16.18.235 (10.16.18.235)' can't be established.
ECDSA key fingerprint is 2a:cd:5f:37:63:ef:7f:2d:9d:83:fd:85:76:4d:03:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.16.18.235' (ECDSA) to the list of known hosts.
Please login as the user "fedora" rather than the user "root".

And log in:

[ayoung@ayoung530 oslab]$ ssh fedora@10.16.18.235
[fedora@ayoung-test ~]$ 

April 15, 2015

JWCrypto a python module to do crypto using JSON

Lately I had the need to use some crypto in a web-like scenario, a.k.a. over HTTP(S), so I set out to look at what could be used.

Pretty quickly it became clear that the JSON Web Encryption standard proposed in the IETF JOSE Working Group would be a good fit, and actually the JSON Web Signature would come in useful too.

Once I was convinced this was the standard to use I tried to find out a python module that implemented it as the project I am going to use this stuff in (FreeIPA ultimately) is python based.

The only implementation I found initially (since then I've found other projects scattered over the web) was this Jose project on GitHub.

After a quick look I was not satisfied by three things:

  • It is not a complete implementation of the specs
  • It uses obsolete python crypto-libraries wrappers
  • It is not Python3 compatible
While the first was not a big problem, as I could simply contribute the missing parts, the second is, and the third is a big minus too. I wanted to use the new Python Cryptography library as it has proper interfaces and support for modern crypto, and neatly abstracts away the underlying crypto-library bindings.

So after looking over the specs in detail to see how much work it would entail, I decided to build a python module implementing all the relevant specs myself.

The JWCrypto project is the result of a few weeks of work, complete with documentation hosted on ReadTheDocs.

It is an almost complete implementation of the JWK, JWE, JWS and JWT specs and implements most of the algorithms defined in the JWA spec. It has been reviewed internally by a member of the Red Hat Security Team and has an extensive test suite based on the specs and the test vectors included in the JOSE WG Cookbook. It is also both Python2.7 and Python3.3 compatible!

I had a lot of fun implementing it, so if you find it useful feel free to drop me a note.

[ANN] Release of Bugzilla 5.0rc3, 4.4.9, 4.2.14, and 4.0.18
Flisol David, Chiriqui 2015
April is FLISOL (the Free Software Installation Festival in Latin America) month, and it is normally held on the third Saturday of April. We planned and held ours on April 11 in David, Chiriquí; the main reason is that we don't live in David, and it is about a 7-hour drive for us to get there.

Prof. Ender Gonzalez from Universidad del Istmo helped us organize the event at his university in David. We took a bus at midnight and arrived there early in the morning; everything was ready for us.

The event started at 9 with a full house. We gave talks about free software, Fedora, Firefox OS, Mozilla, Docker and many other topics, and we talked with students and teachers who were really into learning about Fedora and Free Software.

It was our first visit to David, and its people made a big impression. We added more Fedora members, helped install some machines, and gave away stickers and Fedora ISOs. We hope to have some of those new Fedora members start collaborating on translation and other areas within Fedora.


Thanks to U. del Istmo.
virt-builder: Fedora 21 ppc64 and ppc64le images

virt-builder now has Fedora 21 ppc64 and ppc64le images available, and you can run these under emulation on an x86-64 host. Here’s how to do it:

$ virt-builder --arch ppc64 fedora-21 \
    -o fedora-21-ppc64.img

or:

$ virt-builder --arch ppc64le fedora-21 \
    -o fedora-21-ppc64le.img

To boot them:

$ qemu-system-ppc64 -M pseries -cpu POWER8 -m 4096 \
    -drive file=fedora-21-ppc64[le].img \
    -serial stdio

Oddly the boot messages will appear on the GUI, but the login prompt will only appear on the serial console.

Libvirt also has support, so with a sufficiently new version of the toolchain you can also use:

$ virt-install --import --name=guestname \
    --ram=4096 --vcpus=1 \
    --os-type=linux --os-variant=fedora21 \
    --arch=ppc64[le] --machine pseries \
    --disk=fedora-21-ppc64[le].img,format=raw
$ virsh start guestname

It’s quite fun to play with Big Iron, even in an emulator that runs at about 1/1000th the speed of the real thing. I know a lot about this, because we have POWER8 machines at Red Hat, and they really are the fastest computers alive, by a significant multiple. Of course, they also cost a fortune and use huge amounts of power.

Some random observations:

  1. The virt-builder --size parameter cannot resize the ppc64 guest filesystem correctly, because Anaconda uses an extended partition. Workaround is to either add a second disk or to create another extended partition in the extra space.
  2. The disks are ibmvscsi model (not virtio or ide). This is the default, but something to think about if you edit or create the libvirt XML manually.
  3. Somehow the same CPU/machine model works for both Big Endian and Little Endian guests. It must somehow auto-detect the guest type, but I couldn’t work out how that works. Anyway, it just works by magic. Edit: it’s done by the kernel
  4. libguestfs inspection is broken for ppc64le
  5. Because TCG (qemu software emulation) is single threaded, only use a single vCPU. If you use more, it’ll actually slow the thing down.
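For point 1, the second-disk workaround can be sketched like this. The size and file names are my assumptions, not from the post:

```shell
# Create an extra raw disk as a sparse file (takes no real space yet).
truncate -s 20G extra-disk.img
ls -l extra-disk.img

# Then attach it with an additional -drive flag at boot, e.g.:
#   qemu-system-ppc64 -M pseries -cpu POWER8 -m 4096 \
#       -drive file=fedora-21-ppc64.img \
#       -drive file=extra-disk.img,format=raw \
#       -serial stdio
```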

Thanks: Maros Zatko for working out the virt-install command line and implementing the virt-builder script to build the images.


How OpenStack handles the RootDisk of VMs

The RootDisk of a VM is persistent, and I always thought that it is stored together with all the other volumes. In Cinder. I was so wrong. Curious too?

Let's start with terminology. OpenStack knows these kinds of storage:

  • Images - these are stored in the Glance service. When a VM starts, the RootDisk is created from the content of some image.
  • Volumes - these are stored in the Cinder service. You can attach a Volume to a VM and its life is independent of the VM.
  • Ephemeral - this is handled by the Nova service and is created in /var/lib/nova/instances on the Compute node. If you shut down, reset or move the VM, this device is cleared and you have to partition/format it again. On the other hand, this device is local to the Compute node and therefore usually faster.
  • Swap - this is just Ephemeral storage, but cloud-init will mkswap and mount it as swap for you.
  • RootDisk - this is ... what? It is persistent across reboots. But it is not stored in the Cinder service. Who handles it? And how?

The RootDisk is basically Ephemeral storage with special handling. It is created by the Nova service in /var/lib/nova/instances and is stored only on the Compute node where the VM was started. If you shut down an instance and then start it up again, the VM will always stay on the same Compute node, because the RootDisk is only there. You must initiate a Migration to move it to another Compute node. Nova then copies the RootDisk (via SSH) to the other node (other Ephemeral storage and swap are not migrated).

If your Compute nodes use some cheap storage, then you should know that when a Compute node dies, it will take some RootDisks to the grave.

One solution can be to locate /var/lib/nova/instances on some better storage and then, for example, NFS-export it to the Compute nodes (this way you get shared storage and Live Migration will start working). On the other hand, Ephemeral devices then do not make too much sense, as they will likely consume the same storage as normal Volumes.
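The NFS variant described above might look like the fragment below. The hostnames and export path are assumptions for illustration, and a real deployment also needs matching ownership and SELinux settings:

```
# /etc/exports on the storage server (hostnames/paths are examples):
/srv/nova-instances  compute01(rw,sync,no_root_squash) compute02(rw,sync,no_root_squash)

# /etc/fstab entry on each compute node, mounting over the Nova path:
storage:/srv/nova-instances  /var/lib/nova/instances  nfs  defaults  0 0
```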

Talk selection for FUDCon Pune 2015

We received over 140 submissions for FUDCon Pune and Amit, Kushal, Neependra and I had our task cut out. We had room to select about 40 talks and some workshops given that we did not want to go over 3 main talk tracks for the conference - workshops would be treated separately since their requirements are usually completely different from the talks.

We decided to do this in multiple passes, reducing the number of talks till we had a list that we were satisfied with. We started by individually scoring all of the talks and then comparing notes. This gave us an indication of talks we had full agreement over and with that out of the way, we just had to fight it out over the remaining talks. We quickly found out that this was time consuming, but there seemed to be no other way, so we stuck to it and asked for our original self-imposed deadline to be pushed over from 3rd April to 15th April.

Docker, Docker Docker, Gluster, Gluster, Gluster...

One thing that stood out during our selection process was the sheer number of submissions for Container technologies (Atomic, Docker), Software defined storage (Gluster, Ceph) and OpenStack. We did not want to reject a lot of these talks but at the same time we did not want to turn a Fedora conference into a Cloud conference, so after discussing in the FUDCon planning meeting, we decided to have 3 separate tracks for these talks. Each track would run for a day, with an introductory talk in the main track followed by sessions and workshops in a separate dedicated track. On the last day, a representative from each track would give a 10 minute summary of what happened at their track.

This format obviously meant that these talks were not suitable for inclusion in the tracks as is - speakers would have to get together and work on the kind of talks they want to see at their track so that they tell a coherent story around that technology. We also identified leaders for each track to coordinate this effort, but the leaders do not decide what goes in their track. It is the job of the entire group to come to a consensus about their talks and what their track looks like. We have begun communicating this to the speakers now and are looking forward to their active participation.

And the speakers are...

This has been the slowest bit. The mass mailer module on the Drupal COD is not working for us for some reason and we’re now trying to figure out how to not do the busy work of sending individual emails and using a script to do this at least for accepted proposals. The rejections are a bit trickier because we haven’t declined a lot of talks outright. There are a lot of cases where we want to request speakers to run a BoF for their topics or merge their workshop with another workshop proposal if possible, to provide more complete coverage on the topic. Even for those that we have declined outright, we would like to make sure that they still come since we would like to hear from them at the barcamp or at the lightning talks. We would like to try as much as we can to get space for everybody to share their experience and knowledge at FUDCon.

All of this means that we have to send out a lot of personal emails and that is taking time. We would like to hold off publishing the list until we have sent out these emails, so we hope to come up with a finished list by the end of the week, or latest by the next meeting.

Travel and Stay

As we have mentioned before, if you are an active Fedora contributor or a speaker and want to come to FUDCon but don’t have the resources to travel or the means to stay in the city for the duration of the conference, then let us know. We have a limited budget to support travel and stay, which we can use to help some of you. Being selected as a speaker does not necessarily entitle you to travel and stay, you need to make this request regardless of the result if you need assistance. The deadline for submitting these requests is 30th April, but we’re processing them every week, so don’t wait till the last day to make your request.

Every week brings FUDCon Pune closer and we’re very excited about sharing ideas with some very interesting people. See you all at FUDCon!

All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Hanthana Linux 21 (Sinharaja) released

The Hanthana Linux Project was founded in 2009; we promote Free and Open Source Software among the community and also help people contribute back. We are happy to announce our latest release, Hanthana Linux 21, today.

This new release, Hanthana Linux 21, ships with several Desktop Environments such as GNOME, KDE, Xfce, Sugar and LXDE. There are several editions of Hanthana 21: for general usage there is Hanthana 21 LiveDVD, for educational purposes you can use Hanthana 21 Edu, and Hanthana 21 Dev can be used for software development. Those who just use office packages can download either Hanthana 21 Light or Hanthana 21 Light2. Each of these editions comes in both i686 (32bit) and x86_64 (64bit) architectures, with 10 ISO images available for download.

Download Hanthana 21 now! http://hanthana.org/download.php
The Hanthana 21 release is named 'Sinharaja' after the famous tropical rainforest in Sri Lanka. UNESCO identifies Sinharaja as a World Heritage Site. It spans over 88.64 sq km and harbours high biological diversity. The Hanthana team thinks it is important to raise awareness of the wealth of Sri Lankan wilderness, so the current release is dedicated to Sinharaja. The picture in the background is a plant species from Sinharaja Forest, shot by Gauwrika Wijeratne.

Hats off to those who helped us release Hanthana 21 (Sinharaja)!

We highly appreciate your feedback on the Hanthana 21 release, to help us make our next release better suited to your requirements. Feel free to spread the word among your relatives and friends. Moreover, you can conduct events in schools, universities, and government and private organizations.

Cheers!
FLISOL David 2015 Report
The Panama Fedora team participated in the Festival Latinoamericano de Instalación de Software Libre (FLISOL), held on Saturday, April 11th at the Universidad del Istmo Chiriquí campus.

We had the opportunity to present lectures and workshops, which were well received by the event participants. Among the topics presented were:

Alongside us was Rafael Agostini, a Panamanian web and mobile developer, who gave a presentation on best practices when developing a mobile application.

About 10 Fedora installations were performed. This represents 20% of the event participants.

Another important detail is that the Fedora Atomic + Docker presentation gave us the opportunity to get approval from Docker to organise the first Docker Meetup in Panama.

Finally, we are preparing a new member, Kenneth Guerra, whom we hope will become an active contributor to the project and help us organise Chiriquí Fedora-related events.

I would like to thank Universidad del Istmo for the space and all the participants for their interest and time.
Major service disruption
New status major: Network issues for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, FedOAuth, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar

April 14, 2015

FOSS Emoji

Just wanted to make a quick note here.

Today looking for emoji for pagure I ran into http://emojione.com/. This project provides Free and Open Source emoji icons that can thus be re-used on other projects.

Just heads up to those looking for a FOSS emoji database/project and big thanks to the developers and artists behind this awesome project!

Woot! Eight years of my blog

The spring of 2015 marks eight years of this blog! I’ve learned plenty of tough lessons along the way and I’ve made some changes recently that might be handy for other people. After watching Sasha Laundy’s video from her awesome talk at Pycon 2015, I’m even more energized to share what I’ve learned with other people. (Seriously: Go watch that video or review the slides whether you work in IT or not. It’s worth your time.)

Let’s start from the beginning.

History Lesson

When I started at Rackspace in late 2006, I came from a fairly senior role at a very small company. I felt like I knew a lot and then discovered I knew almost nothing compared to my new coworkers at Rackspace. Sure, some of that was impostor syndrome kicking in, but much of it was due to being in the right place at the right time. I took a lot of notes in various places: notebooks, Tomboy notes, and plain text files. It wasn’t manageable and I knew I needed something else.

Many of my customers were struggling to configure various applications on LAMP stacks and a frequent flier on my screen of tickets was WordPress. I installed it on a shared hosting account and began tossing my notes into it instead of the various other places. It was a bit easier to manage the content and it came with another handy feature: I could share links with coworkers when I knew how to fix something that they didn’t. In the long run, this was the best thing that came out of using WordPress.

Fast forward to today and the blog has more than 640 posts, 3,500 comments, and 100,000 sessions per month. I get plenty of compliments via email along with plenty of criticism. Winston Churchill said it best:

Criticism may not be agreeable, but it is necessary. It fulfils the same function as pain in the human body. It calls attention to an unhealthy state of things.

I love all the comments and emails I get — happy or unhappy. That’s what keeps me going.

Now Required: TLS (with optional Perfect Forward Secrecy)

I’ve offered encrypted connections on the blog for quite some time but it’s now a hard requirement. TLS 1.0, 1.1 and 1.2 are supported and the ciphers supporting Perfect Forward Secrecy (PFS) are preferred over those that don’t. For the super technical details, feel free to review a scan from Qualys’ SSL Labs.

You might be asking: “Why does a blog need encryption if I’m just coming by to read posts?” My response is “Why not?”. The cost for SSL certificates in today’s market is extremely inexpensive. For example, you can get three years on a COMODO certificate at CheapSSL for $5 USD per year. (I’m a promoter of CheapSSL — they’re great.)

Requiring encryption doesn’t add much overhead or load time but it may prevent someone from reading your network traffic or slipping in malicious code along with the reply from my server. Google also bumps up search engine rankings for sites with encryption available.

Moved to nginx

Apache has served up this blog exclusively since 2007. It’s always been my go-to web server of choice but I’ve taken some deep dives into nginx configuration lately. I’ve moved the blog over to a Fedora 21 virtual machine (on a Fedora 21 KVM hypervisor) running nginx with PHP running under php-fpm. It’s also using nginx’s fastcgi_cache which has really surprised me with its performance. Once a page is cached, I’m able to drag out about 800-900 Mbit/sec using ab.
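For reference, a minimal fastcgi_cache setup along these lines might look like the fragment below. This is a sketch, not the author's actual configuration: the cache zone name, path, sizes, and php-fpm socket path are all assumptions.

```nginx
# Cache storage: 16 MB of keys; entries expire after 60 min of no hits.
fastcgi_cache_path /var/cache/nginx levels=1:2
                   keys_zone=wpcache:16m max_size=256m inactive=60m;

server {
    listen 443 ssl;
    # ... certificates, root, etc. ...

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/www.sock;  # php-fpm socket (assumed path)
        include fastcgi_params;
        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 60m;              # cache successful pages for an hour
    }
}
```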

Another added benefit from the change is that I’m now able to dump my caching-related plugins from WordPress. That means I have less to maintain and less to diagnose when something goes wrong.

Thanks!

Thanks for all of the emails, comments, and criticism over the years. I love getting those emails that say “Hey, you helped me fix something” or “Wow, I understand that now”. That’s what keeps me going. ;)

The post Woot! Eight years of my blog appeared first on major.io.

[Fedora 21 Docker Base Image] Download


Fedora 21 Official Docker Base Images can be found at the following URL :

http://spins.fedoraproject.org/docker/

You can easily load this Docker image into your running Docker daemon using the command:
docker load -i Fedora-Docker-Base-20141203-21.x86_64.tar.gz

Kind Regards

Frederic


the more things change.. 4.0


$ ping gelk
PING gelk.kernelslacker.org (192.168.42.30) 56(84) bytes of data.
WARNING: kernel is not very fresh, upgrade is recommended.
...
$ uname -r
4.0.0

Remember that one time the kernel versioning changed and nothing in userspace broke ? Me either.

Why people insist on trying to think they can get this stuff right is beyond me.

YOU’RE PING. WHY DO YOU EVEN CARE WHAT KERNEL VERSION IS RUNNING.

update: this was already fixed, almost exactly a year ago in the ping git tree. The (now removed) commentary kind of explains why they cared. Sigh.

the more things change.. 4.0 is a post from: codemonkey.org.uk

ABRT and virtualization Test Days this week!

This week in Fedora QA we have two Test Days! Today (yes, right now!) is ABRT Test Day. There are lots of tests to be run, but don’t let it overwhelm you – no-one has to do all of them! If you can help us run just one or two it’ll be great. A virtual machine running Fedora 22 is the ideal test environment – you can help us with Fedora 22 Beta RC2 validation testing too. All the information is on the Test Day page, and the abrt crew is available in #fedora-test-day on Freenode IRC (no, you darn kids, that’s not a hashtag) right now to help with any questions or feedback you have. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

Thursday 2015-04-16 will be Virtualization Test Day. As with the ABRT event there’s lots of testing you can do, but just doing a little helps us out! Once again, we’ll be meeting in #fedora-test-day, and all the test instructions and other necessary information are on the wiki page.

Many thanks to everyone who can find some time to help with either or both Test Days!

persistent namespaces
Today I merged support for persistent namespaces to unshare(1). A persistent namespace does not require any running process within the namespace, and it's possible to enter the namespace later with nsenter(1).

For example let's create a new UTS namespace and set a different hostname within the namespace:

# hostname
ws
# touch /root/ns-uts
# unshare --uts=/root/ns-uts
# hostname FooBar
# exit
Now there is no process in the namespace; enter it again via the --uts=/root/ns-uts reference:

# nsenter --uts=/root/ns-uts
# hostname
FooBar
# exit
The reference to the namespace is a bind mount of /proc/[pid]/ns/[type], so umount(8) is enough to remove the reference:

# umount /root/ns-uts
If there is no other reference and no running process within the namespace, the namespace is destroyed.

It's also possible to create other types of persistent namespaces (--net, --ipc, ...). Don't forget that if you want to create a persistent mount namespace, then the file (--mount=file) has to be on a "private" filesystem; for example on Fedora, where everything is "shared", you have to use:

# mount --bind /mnt/test /mnt/test
# mount --make-rprivate /mnt/test
# touch /mnt/test/my-ns
# unshare --mount=/mnt/test/my-ns
...
Note that a PID namespace cannot be persistent without a running process (or more precisely, the PID namespace is a dead thing after the init process (PID 1) terminates).
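
Following the UTS example above, a persistent network namespace works the same way (a sketch; requires root, and /root/ns-net is just an arbitrary file used as the bind-mount target):

```shell
# Create the reference file and a persistent network namespace
touch /root/ns-net
unshare --net=/root/ns-net true   # "true" exits; the namespace persists

# Enter the namespace later; only a down loopback device exists inside
nsenter --net=/root/ns-net ip link show

# Remove the reference (destroys the namespace if nothing else uses it)
umount /root/ns-net
```
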
How to Find if LVM Volume is Thinly Provisioned

The latest versions of Red Hat Enterprise Linux, CentOS and Fedora all support LVM thin provisioning. Here's how to tell if a logical volume has been thinly provisioned or not.

Using lvs to display volume information look under the Attr column. Attribute values have the following meaning:

The lv_attr bits are:

1 Volume type: (C)ache, (m)irrored, (M)irrored without initial sync, (o)rigin,
(O)rigin with merging snapshot, (r)aid, (R)aid without initial sync,
(s)napshot, merging (S)napshot, (p)vmove, (v)irtual, mirror or raid (i)mage,
mirror or raid (I)mage out-of-sync, mirror (l)og device, under (c)onversion,
thin (V)olume, (t)hin pool, (T)hin pool data, raid or pool m(e)tadata or
pool metadata spare.

This is what lvs looks like when you have a regular LVM setup:

# lvs
  LV   VG              Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_dhcp70-183 -wi-ao---- 17,47g
  swap rhel_dhcp70-183 -wi-ao----  2,00g

When using LVM thin provisioning you're looking for the left-most attribute bit to be V, t or T. Here's an example:

# lvs
  LV     VG              Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 rhel_dhcp71-101 twi-aotz-- 14,55g               7,52   3,86
  root   rhel_dhcp71-101 Vwi-aotz-- 14,54g pool00        7,53
  swap   rhel_dhcp71-101 -wi-ao----  2,00g
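
Since only the first attribute bit matters here, the check is easy to script. A minimal sketch that filters for thin volumes, run against the sample output above (on a live system you would pipe in lvs --noheadings -o lv_name,lv_attr instead):

```shell
# Sample output of `lvs --noheadings -o lv_name,lv_attr` from the example above
sample='pool00 twi-aotz--
root   Vwi-aotz--
swap   -wi-ao----'

# Keep volumes whose first attribute bit is V (thin volume),
# t (thin pool) or T (thin pool data)
printf '%s\n' "$sample" | awk '$2 ~ /^[VtT]/ { print $1 }'
# prints:
# pool00
# root
```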

FUDCon Pune Planning meeting minutes: 2015-04-14

We had our regular weekly FUDCon planning meeting today and most of the volunteers were present. We went through all the discussion topics and agendas. As the conference is approaching fast, we spent a fair amount of time on the travel section, and it is high time for people who need sponsorship for travel and/or accommodation to open a Fedora trac ticket for a funding request here.

The main highlights of the meeting were:

1. We have updated our site with the volunteers list – http://fudcon.in/conference/about . If you are helping and your name has been missed, then please accept our apologies and let us know so that we can add it.

2. Talk selection done: roughly 50 talks, 15 workshops and 1 hackfest selected. So, if you have proposed one, you can expect an email soon.

3. Marketing Planning is in its full swing  – http://piratepad.net/marketing-plan-fudcon

Here are the full logs ::
====================
 
Travel updates
  • Start meeting regarding this in #fudcon-planning
  • Accepted several tickets
  • Deferred some; we’ll keep going over the list.
  • All tickets https://fedorahosted.org/fudcon-planning/report/3
  • Prepare an invitation letter for them (for visa).
  • Huzaifa got in touch with Global Mobility and now has the required information to process visa letter requests
  • Ask Ruth about us booking tickets instead of getting speakers/requestors to book tickets and reimburse after the conference – Huzaifa has emailed Ruth.  He will follow up with an action plan
  • For ausil, jrusnack, reicatnor: Huzaifa has emailed Kushal

Budget

  • Make and maintain a publicly visible sheet to track expenses?
  • Need to keep tab on the number of accommodation requests
  • Sent a reminder to Ruth
  • Ruth replied; she’s OK with using RH expense system

Scheduling

  • Talk selection done: Roughly 50 talks, 15 workshops and 1 hackfest selected
  • Slow progress on marking and communicating results

Outreach

  • this is for industry + mailing lists (communities)
  • we need help here with more lists + more volunteers to do the outreach.
  • Reach out to MIT to let them know about the event (mid/late May)
  • Do some sessions in colleges after exams over (exams get over mid-April)
  • Let’s do in:
  • MIT-COE
  • Cummins
  • COEP
  • PCCOE
  • Sinhagad
  • 12 weekends before our event to do these sessions
  • Video series
  • Shreyank spoke with some video editors to get an idea of things to keep in mind; and also give them an idea of what we are looking for.
  • Videos from FPL (Matthew Miller), jsmith, Kushal, Parag, Rahul, Joerg, etc. — extolling the virtues of FUDCon + Pune
  • Kushal and Shreyank to work on this
  • Two videos:
  • One in April
  • One in May
  • Reach out to design/marketing team for editing help.

Marketing

  • no updates this week
  • Fedora magazine
  • Twitter
  • Facebook
  • Google Plus
  • LinkedIn group (Reach out to Soni)

FUDCon.next planning

  • We should start a tradition to announce the next fudcon at the current one
  • We should start the bid process beforehand and get a bid selected before the current one starts
  • The FUDCon pages on the Fedora wiki already mention 1 yr of lead time is needed for starting the fudcon planning process.
  • Tuanta has taken ownership of driving this at the last APAC ambassadors meeting

Website

  • Add names of MIT organizers (PraveenSiddhesh)
  • Done
  • Graphics status update?
  • Clear to open tickets for other graphics elements
  • Also pls update current ticket so that designers know not to keep working on this design anymore
  • new tshirt design needed
  • Going to fudcon
  • 2011 designs

FUDPub

  • Rupali reached out to Venue1
  • Potential Venue 1
  • Space for 100 people
  • Reasonable (approx 1800 per person)
  • RH has relationship; payments are easier
  • Close to cocoon
  • No limitation on sound limits – a nice party can be had.
  • Amit suggests a place where there is a bowling option.
  • Reached out to one place
  • Option 1: $5600 for unlimited games, food and drinks for unlimited people (max 250)
  • Option 2: $4800 for 150 people, limited to 12 lanes and unlimited food, drinks, games.
  • On paud road there is go-carting place, not sure if they have bowling too.
  • Rupali continuing to reach out to others
  • Another venue visit next week

Swag

  • Let’s start thinking about this now; approach vendors.
  • Swag for Volunteers
  • tshirts (200)
  • Swag for Organisers?
  • Swag for Speakers
  • Mugs (200)?
  • siddhesh’s quote: 
  • shreyank’s quote: 
  • Umbrellas (200) (for sweet Pune rains)
  • Swag in general <- Shreyank
  • buttons (3000)
  • tattoo pasties
  • stickers (5000) <- siddhesh
  • pens (3000)
  • caps (300)
  • cloth bags (300)
  • diaries (200)  <- siddhesh
  • bottles (200)
  • pen drives (200)
  • magic mint dispenser (200) <- Amita
  • Fedora badge for attendees?(added to the FAS account)

Venue

  • WiFi
  • Siddhesh, Huzaifa, Rupali had a call with MIT sysadmin
  • MIT COE are keen on doing it; they need input from us.
  • Also talk about the Fedora mirror with them.
  • Power connector extensions
  • MIT are going to set this up.  Follow up
  • Note to speakers (include in prep email): In seminar hall: projectors are 4:3, screen quite small (don’t include small text)
  • Refreshments for speakers lounge / otherwise

MIT meetups

  • What to do?
  • Packaging?
  • Bugzapping
  • Siddhesh to reach out to MCUG (this week, I promise!)

Volunteers

  • College reopens on Jun 15
  • Many students will be on leave till Jun 15
  • We should identify students who will be available in the break – e.g. students from Pune who don’t plan to travel elsewhere; we don’t need too much of their time anyway
  • Rupali to get a list of volunteers from MITCOE.

Mobile Application

  • No updates
  • Siddharth + Rohan had volunteered

Videographing

  • No updates
  • kpoint: Not an option. Rates too high (20k per day just for recording)
  • asked for clarification on rates; they might have subsidised options for us
  • hasgeek
  • They’re allowing us use of their equipment + train a few volunteers who can do the recording. Equipment needs to be brought from BLR to Pune. Nice gesture by them; but sounds complicated given the expensive equipment + need to get volunteers to be trained.
  • Look for cheaper quotes from other professionals (Bipin)
  • Buy our own cameras? (Rupali)
  • Open source solutions for streaming (amit)
  • Last option will be to have a tiny webcam doing live Hangout — advantage is it has auto-archival on youtube.
  • amit: +1 for this option (or using the open source one for streaming)

[Devoxx France 2015] Optaplanner Session

Devoxx_Optaplanner_2015

 

Here is the presentation “OptaPlanner ou comment optimiser les itinéraires, les plannings et bien plus encore…” (“OptaPlanner, or how to optimize routes, schedules and much more…”) that Geoffrey and I gave at Devoxx France on Friday, April 10 2015.

http://slides-fhornain.rhcloud.com/#/

BTW, slides are in French

 

Optaplanner presentation @ Devoxx

N.B.: Only Chrome, Safari, Firefox, Opera and IE10-11 are supported

What is OptaPlanner?

OptaPlanner is a constraint satisfaction solver. It optimizes business resource planning. Every organization faces scheduling puzzles: assign a limited set of constrained resources (employees, assets, time and money) to provide products or services to customers. OptaPlanner optimizes such planning problems to do more business with less resources. Use cases include Vehicle Routing, Employee Rostering, Job Scheduling, Bin Packing and many more.

More information at http://www.optaplanner.org/

Optaplanner and CDI guys

 http://www.devoxx.fr/wp-content/uploads/2015/04/DSC_2596.jpg

Ref:

http://www.devoxx.fr/

http://cfp.devoxx.fr/2015/talk/ZUC-9786/OptaPlanner_ou_comment_optimiser_les_itineraires,_les_plannings_et_bien_plus_encore…

Kind Regards

Frederic

 


New fuzzer in Fedora

Do you fuzz? If you do, Fedora now has a fuzzer called radamsa. More information about radamsa can be found here. Radamsa is now available in F20, F21 and F22.
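
After installing the package, the basic use is to pipe sample input through radamsa and get a mutated variant back (seed.txt below is a hypothetical sample file, not something from the package):

```shell
# Mutate a sample input; output differs from run to run
echo "1 + (2 + (3 + 4))" | radamsa

# Generate 5 fuzzed variants of a hypothetical seed file into out-1 ... out-5
radamsa -n 5 -o out-%n seed.txt
```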

Happy fuzzing!


Fedora 22 Virt Test Day is Thu Apr 16!
A reminder that the Fedora 22 Virt Test Day is this coming Thu Apr 16. Check out the test day landing page:

https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization

It's a great time to make sure your virt workflow is still working correctly with the latest packages in Fedora 22. No requirement to run through test cases on the wiki, just show up and let us know what works (or breaks).

Updating to a development release of Fedora scares some people, but it's NOT required to help out with the test day: you can test the latest virt bits on the latest Fedora release courtesy of the virt-preview repo. For more details, as well as easy instructions on updating to Fedora 22, see:

https://fedoraproject.org/wiki/Test_Day:2015-04-16_Virtualization#What.27s_needed_to_test

Though running latest Fedora 22 on a physical machine is still preferred :)

If you want to help out, pop into #fedora-test-day on Thursday and give us a shout!

April 13, 2015

[Short Tip] Use host names for Docker links


Whenever you link Docker containers together, the question comes up how to access services provided by the linked container: the actual IP address of the container is not static and cannot be guessed beforehand. Sure, the IP address can be looked up via the environment variables ($ env), but not all programs can be modified to understand these variables. This is even more true for containers which you receive from the Docker registry.

Thus the quickest way is to define a host name as part of the docker run command. The container can be reached afterwards via that exact name.

$ docker run --name db --hostname=db-container -d postgres
...
$ docker run -it --link db:dbtestlink centos /bin/bash
# ping db-container
PING dbtestlink (172.17.0.13) 56(84) bytes of data.
64 bytes from dbtestlink (172.17.0.13): icmp_seq=1 ttl=64 time=0.178 ms

Filed under: Debian, Fedora, Linux, Shell, Short Tip, Technology, Virtualization
Preserving container properties via volume mounts

In the Kolla project, we were heavily using host bind mounts to share filesystem data with different containers.  A host bind mount is an operation where a host directory, such as /var/lib/mysql is mounted directly into the container at some specific location.

The docker syntax for this operation is:

sudo docker run -d -v /var/lib/mysql:/var/lib/mysql -e MARIADB_ROOT_PASSWORD=password kollaglue/centos-rdo-mariadb-app

This pulls and starts the kollaglue/centos-rdo-mariadb-app container and bind mounts /var/lib/mysql from the host into the container at the same location.  This allows all containers started with this bind mount to share the host’s /var/lib/mysql.

Through months of trial and error, we found bind mounting host directories to be highly suboptimal.

Containers exhibit three magic properties.

  • Containers are declarative in nature. A container either starts or fails to start, and should do so consistently. Even though containers typically run imperative code, the imperative nature is abstracted behind a declarative model. So it is possible that an imperative change in how the container starts could remove this spectacular property. If the service relies on a database, or data stored on the filesystem, the system becomes non-deterministic. Determinism is a major advantage of declarative programming.
  • Containers are immutable. The contents, once created can not be modified except by the container software itself. It is almost like composing an entire distribution including compilers and library runtimes as one binary to be run.
  • Containers should be idempotent. A container should be able to be re-run consistently without failing if it started correctly the first time.

Using a host bind mount weakens or destroys the three magic properties of containers.  Docker, Inc. was intuitively aware this was a problem, so they implemented docker data volume containers.  A docker data container is a container that is started once and creates a docker volume.  A docker volume is permanent persistent storage created by the VOLUME operation in a Dockerfile or the --volume command.  Once the data container is created, its docker volume is always available to other docker containers using the --volumes-from operation.

The following operation starts a data container based upon the centos image, creates a data volume called /var/lib/mysql, and finally runs /bin/true which exits quickly:

docker run -d --name=mariadb_data --volume=/var/lib/mysql centos true

Next the container ID must be retrieved to start the application container:

sudo docker ps -a
CONTAINER ID   IMAGE           COMMAND  CREATED         STATUS                    PORTS  NAMES
56361937ac79    centos:latest  "true"   10 minutes ago  Exited (0) 10 minutes ago        mariadb_data

Next we run the mariadb-app container using the --volumes-from feature. Note docker allows short-hand specification of the container ID, so in this example 56 refers to the centos data container 56361937ac79:

sudo docker run -d --volumes-from=56 -e MARIADB_ROOT_PASSWORD=password kollaglue/centos-rdo-mariadb-app

When using data volume containers, all the correct permissions are sorted out by docker automatically. Data is shared between containers. Most importantly it is more difficult to modify the container’s volume contents from outside the container. All of these benefits help preserve the declarative, immutable, and idempotent properties of containers.
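
Since --volumes-from also accepts container names, the ID-lookup step can be skipped when the data container was created with --name, as mariadb_data was above; a sketch:

```shell
# Same effect as --volumes-from=56, but referencing the data container
# by the name it was created with instead of its ID
sudo docker run -d --volumes-from=mariadb_data \
  -e MARIADB_ROOT_PASSWORD=password kollaglue/centos-rdo-mariadb-app
```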

We also use data containers for nova-compute in Kolla.  We still continue to use bind mounts in some circumstances.  For example, nova-api needs to run modprobe to load kernel modules.  To support that we allow bind mounting of /lib/modules:/lib/modules with the :ro (read only) flag.

We also continue to have some container-writeable bind mounts.  The nova-libvirt container requires /sys/fs/cgroup:/sys/fs/cgroup to be bind mounted.  Some types of super privileged containers cannot get away from bind mounts, but most of the Kolla system now runs without them.


Dockerizing OpenQA

If you checked the Installation Matrices in the last two months or so, or read any of the HappyAssassin’s blogposts, you know that we started using OpenQA to automate some of the Installation Testcases.

The OpenQA project is heavily developed, which means both quick fixes and responses to feature requests, but also a bit of a pain for “production use” – when the stuff is changing rapidly, only AdamW can keep the pace :) Some of this is addressed by the OpenQA’s stable branch, but even that changes a lot from update to update, and to assess the tool properly, we really wanted to be able to just use a certain version, figure stuff out, and set it up in production.
The other “issue” is that you need an openSUSE box to use OpenQA, and having openSUSE in Fedora Infrastructure would most certainly be frowned upon. Thus Garretraziel (mostly) and I (mostly cheerleading) decided to “freeze” the software in a specific state – and what better to use than Docker!

In the end, it took us two days of what I like to call “extreme Dockerizing” – extensive swearing, getting elbows deep into the code, and some tears of joy and hate – to get it done.

The weirdest error we had to solve was not caused by OpenQA, though, but by the content of the openSUSE base Docker image. What rendered as a “No OpenID provider found at $URL” was in fact caused by the missing ca-certificates package. So if you ever need to use the Docker image for accessing secure HTTP pages, do not forget to install the package in your Dockerfile.

Some other issues were tied to the fact that the OpenQA WebUI and Worker are not really prepared to be on separate machines – it works in the end, but sharing of some directories needed to be solved. This is something we need to solve better in time – for now we just have a shared VOLUME which contains the shared data and some configuration. That means we need to set a+rwx on the shared dir, and figure out either UID/GID squashing or ACLs later.

But, in the end, IT WORKS! If you want to run your own OpenQA instance, you can do it from the comfort of your home^W distribution of choice. It is quite simple – you just:

Get the docker images:

docker pull fedoraqa/openqa_webui 
docker pull fedoraqa/openqa_worker

Pull our repo containing some bits and pieces (thanks Bernhard for pointing me to the .gitignore files for adding empty dirs):

git clone -b feature/docker https://bitbucket.org/rajcze/openqa_fedora_tools openqa_fedora_tools
cd openqa_fedora_tools/docker
git checkout -b docker be42c791474b02131e0f65413e90605998c420d0
git submodule init
git submodule update
cp -a data.template data
chmod -R 777 data

Next up, run the WebUI:

docker run -d -h openqa_webui -v `pwd`/data:/data --name openqa_webui -p 8080:443 fedoraqa/openqa_webui

It is important that you are in the root of the cloned git repository, since the -v `pwd`/data:/data parameter mounts the directory into the Docker VOLUME.

Check whether the WebUI runs by accessing https://localhost:8080 and logging in. Go to https://localhost:8080/api_keys and generate a key and secret. Uncomment the [openqa_webui] and [localhost] sections in data/conf/client.conf, and fill in the key and secret.

Populate OpenQA’s database:

docker exec openqa_webui /var/lib/openqa/tests/fedora/templates

And run the worker:

docker run -h openqa_worker_1 --name openqa_worker_1 -d --link openqa_webui:openqa_webui -v `pwd`/data:/data --privileged fedoraqa/openqa_worker 1

Once again, make sure that you are in the root of the cloned git repository, we are once again using that data dir.
Running multiple workers is also easy – just substitute all the 1‘s with 2/3/4…
If nothing went wrong, then you should see the worker as connected in the WebUI.
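
The substitution above can be sketched as a loop (assuming the same git checkout and data directory as before):

```shell
# Start workers 2 and 3 alongside worker 1, each with its own
# hostname, container name and worker instance number
for i in 2 3; do
  docker run -h openqa_worker_$i --name openqa_worker_$i -d \
    --link openqa_webui:openqa_webui -v `pwd`/data:/data \
    --privileged fedoraqa/openqa_worker $i
done
```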

To run a job, you need to download a Fedora installation image (Server DVD or Server netinst) into the data/factory/iso directory. For Fedora-Server-netinst-x86_64-22_Beta_TC6.iso the command running a default-install check would be:

docker exec openqa_webui /var/lib/openqa/script/client isos post ISO=Fedora-Server-netinst-x86_64-22_Beta_TC6.iso DISTRI=fedora VERSION=rawhide FLAVOR=generic_boot ARCH=x86_64 BUILD=22_Beta_TC6

And that’s it. You can see, that there still are some quirks – the most painful being access control to the shared data – but hey, it’s just two days of work, and we’re not done yet ;)