July 04, 2015

Stratum-1 NTP server, part 1

I’m following the instructions here and here to try and make a Stratum-1 NTP server using GPS and a Raspberry Pi 2. If this works, I should have a very accurate NTP server for a very reasonable amount of money.


I ordered the Raspberry Pi 2 from Amazon, and the other parts from HAB Supplies in the UK. The total cost (prices include tax) was:

Raspberry Pi 2 £29.99
GPS expansion board £35.99
Antenna and cable £41.99
Postage £5.52
Total £113.49
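While the parts are in transit, here's a preview of the first smoke test planned once everything is assembled – a minimal sketch, assuming the Pi runs Raspbian and the GPS board's pulse-per-second signal shows up as /dev/pps0:

 # install the PPS test tools, then watch for one pulse per second from the GPS
 sudo apt-get install pps-tools
 sudo ppstest /dev/pps0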

July 03, 2015

Configuring Puppet 4

I had this post pending: with the now “official” release of #puppet4 for CentOS 7, what I’m going to describe is how to build a Puppet 4 setup, with its new web stack acting as the backend and CA authority, plus Puppet clients (CentOS and Ubuntu).

Here’s a small diagram:

[Diagram: Puppet 4 master/client architecture]

Let’s get started:

1.- We need the following names registered in DNS:

puppetmaster.example.com – 192.168.122.34

puppet-cliente.example.com – 192.168.122.35

puppet-ubuntu.example.com – 192.168.122.36

 

Configuring puppetmaster (CentOS 7.1):

1.- We install using these repositories: Puppet and EPEL
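For reference, one typical way to enable both repositories on CentOS 7 – the exact release-RPM name is an assumption on my part, based on the Puppet Collection 1 (PC1) repositories published at the time, so double-check it against the links above:

 rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
 yum -y install epel-release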

1.2.- Now we proceed to install the agent and the server:

 yum -y install puppet-agent puppetserver httpd mod_ssl gcc-c++ httpd-devel apr-devel ruby-devel ruby-rdoc openssl* openssl-*

1.3.- We proceed to configure puppetmaster. The directory structure has changed in Puppet 4: we need to go to /etc/puppetlabs/puppet and edit puppet.conf.

1.4.- Using vim, we add these lines to puppet.conf:

[master]
 vardir = /opt/puppetlabs/server/data/puppetserver
 logdir = /var/log/puppetlabs/puppetserver
 rundir = /var/run/puppetlabs/puppetserver
 pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
 codedir = /etc/puppetlabs/code
 ssldir = /etc/puppetlabs/puppet/ssl
certificate_revocation = false
dns_alt_names = puppetmaster.example.com, puppetmaster
server = puppetmaster.example.com

[agent]
 server = puppetmaster.example.com
 report = true

*Note: the lines listed above are the ones we added.

1.5.- If we now take a look at the files located in /etc/sysconfig, we can see both puppet and puppetserver there. Let’s look at puppetserver first:

# Location of your Java binary (version 7 or higher)
JAVA_BIN="/usr/bin/java"

# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xms2g -Xmx2g -XX:MaxPermSize=256m"

# These normally shouldn't need to be edited if using OS packages
USER="puppet"
GROUP="puppet"
INSTALL_DIR="/opt/puppetlabs/server/apps/puppetserver"
CONFIG="/etc/puppetlabs/puppetserver/conf.d"
BOOTSTRAP_CONFIG="/etc/puppetlabs/puppetserver/bootstrap.cfg"
SERVICE_STOP_RETRIES=60

# START_TIMEOUT can be set here to alter the default startup timeout in
# seconds.  This is used in System-V style init scripts only, and will have no
# effect in systemd.
# START_TIMEOUT=120

Unlike version 3.x, Puppet 4 uses Java for part of the web stack. (If you want to see more about this, have a look at /etc/puppetlabs/puppetserver/conf.d.)

1.6.- We edit /etc/sysconfig/puppet and add:

PUPPET_SERVER=puppetmaster.example.com

1.7.- We proceed to start puppetserver in the foreground, like this:

/opt/puppetlabs/bin/puppetserver foreground

We will get output similar to this:

[screenshot: puppetserver foreground output]
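Running in the foreground is handy for a first test; for normal operation you will probably want the systemd unit that ships with the puppetserver package instead:

 systemctl enable puppetserver
 systemctl start puppetserver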

Configuring puppet-agent (Ubuntu 14.04)

1.- We use the Puppet repository: puppet
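One typical way to enable it on Ubuntu 14.04 – the release-package name is an assumption on my part, based on the PC1 repository published for trusty at the time, so double-check it against the link above:

 wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb
 dpkg -i puppetlabs-release-pc1-trusty.deb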

1.1.- We run:

apt-get update && apt-get install puppet-agent

1.2.- We go to /etc/puppetlabs/puppet and edit puppet.conf to add the following:

[agent]

server = puppetmaster.example.com

1.3.- We run the agent like this:

/opt/puppetlabs/bin/puppet agent -t

The output should look similar to this:

[screenshot: first agent run, waiting for its certificate to be signed]

1.4.- Now we move over to puppetmaster, in order to sign the certificate for the client:

[screenshot: pending certificate request on puppetmaster]

1.5.- On puppetmaster, we proceed to sign the certificate using:

puppet cert --sign puppet-ubuntu.example.com

The output should look similar to this:

[screenshot: certificate signing output]

1.6.- On the client we run:

/opt/puppetlabs/bin/puppet agent -t

And we get the following result:

[screenshot: successful agent run]

Note: if we don’t want to keep typing the full /opt/puppetlabs/bin/puppet path, we can symlink puppet into /usr/bin like this:

ln -s /opt/puppetlabs/bin/puppet  /usr/bin/
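Alternatively – an equivalent approach, not part of the original walkthrough – you can put the whole AIO bin directory on the PATH for every login shell:

 echo 'export PATH=$PATH:/opt/puppetlabs/bin' > /etc/profile.d/puppetlabs.sh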

Configuring puppet-cliente (CentOS 7.1)

1.- We use the Puppet repository: puppet

1.2.- We install puppet:

yum -y install puppet-agent

1.3.- We go to /etc/puppetlabs/puppet and add these lines to puppet.conf:

[agent]

server = puppetmaster.example.com

1.4.- Now we edit /etc/sysconfig/puppet and add these lines:

PUPPET_SERVER=puppetmaster.example.com
PUPPET_EXTRA_OPTS=--waitforcert=500

1.5.- We run the agent like this:

/opt/puppetlabs/bin/puppet agent -t

The output should look similar to this:

[screenshot: first agent run, waiting for its certificate to be signed]

1.6.- On puppetmaster, we list the new client’s certificate request:

puppet cert --list

The output should look similar to this:

[screenshot: listing of pending certificate requests]

1.7.- Now we proceed to sign the certificate by running:

puppet cert --sign puppet-cliente.example.com

The output should look similar to this:

[screenshot: certificate signing output]

1.8.- When we run this command on puppet-cliente, we will see the following output:

puppet agent -t

[screenshot: successful agent run]

Note: I copied /opt/puppetlabs/bin/puppet to /usr/bin.

With this we now have our Puppet architecture – master and clients, with their certificates signed correctly. In the next post we will look at manifests, create some modules, and define the new environments. I hope you find this post helpful :)

Bootstrapping a DevOps Movement in Red Hat IT – Video

Back in late 2013 I joined what was jokingly referred to as the Red Hat IT “DevOps” team. We didn’t like that name, so we changed it, and thereafter became officially known as Team Inception. From the time the team was formed, we all accepted that the team was to retire in 18-24 months. We were totally cool with that too! To us, having a pure “DevOps” team in perpetuity just didn’t make sense.

Over the course of the team’s lifespan I feel like I experienced incredible growth, both personally and professionally. I’m proud to look back at all the cool things we accomplished as a team, and I’m even more thankful to have had the opportunity to be a member of that team. Here’s a taste of some things we did as a team publicly:

  • Blogged a bunch of posts for Red Hat Developer Blog
  • Created a functioning Continuous Deployment system, Release Engine [Docs]
  • Created jsonstats, a tool for exporting system information over a REST interface
  • And another tool, Talook, which provides a view into jsonstats running on a collection of servers
  • Cacophony, a simple REST API for automatic SSL certificate generation
  • The Ansible XML module. This grew in popularity so much that we realized the best way to ensure it lives on was by transferring complete ownership over to Chris Prescott, a former contributor (Thanks Chris)!
  • git-branch-blacklist – A git-hook based system for blacklisting pushes to specific branches

Two weeks ago (2015-06-20 → 2015-06-24) the DevNation and Red Hat Summit 2015 conferences were held in Boston, MA. Of the many excellent speakers and panel groups [S|D] that held sessions during the conferences, there’s one group I am especially fond of: my old team, Inception.

On Wednesday, June 24th we held a panel session called Bootstrapping a DevOps Movement in Red Hat IT. This was our final activity together as Team Inception. During this panel-style session Jen Krieger, our Product Owner/Scrum Master, facilitated a look back at some of our experiences during our 18 months as the Red Hat IT “DevOps” team.

We began the initial round of questions with what our individual perceptions of “DevOps” were before the team had formed. We followed that with what ended up being a great Q&A with the audience (thank you everyone who participated!). We ended the panel with our closing thoughts on what “DevOps” means to each of us now.

Here’s a snippet from the official session description:

Topics will include

  • What we accomplished. We’ll take you step-by-step through how we deliver our work using a combination of open source tools, including Docker.
  • How we rate both our cultural and tooling success.
  • Roadblocks, disruptions, and surprises we encountered and how we handled them.
  • How this project has changed the way we view our jobs and our work relationships.
  • What’s next for the team.

Panel

Panel facilitator

Between ourselves, we casually referred to this as our final team retrospective, an honest (and very public) look-back at lessons learned over the last year and a half.

Click play below to watch the full video now, or go directly to it on YouTube.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="450" src="https://www.youtube.com/embed/ijDKvNyAJfs?feature=oembed" width="800"></iframe>

And before I forget: so much thanks to Andrew “Hoss” Butcher [GH] for recording, editing, and posting the recording for us. I (we) owe you many Tasty Beverages for that.

FUDCon Pune 2015

This year’s FUDCon, held in Pune last week, was my first ever FUDCon and my first steps into the awesome Fedora community. It was also the first conference where I delivered a full-fledged talk, about ‘Automating UI testing’, presenting some of the work I did automating the UI tests for gnome-photos. The talk was mostly about how attendees can get their own UI tests automated.

I also talked about ‘Integrating LibreOffice with your applications’ in a barcamp talk, sharing and discussing ideas with a few people, presenting what I am up to in this project in LibreOffice, and how they can take advantage of it by either directly using the new, evolving LibreOfficeKit API, or by using the new Gtk3 widget in their applications. I talked about how I am achieving this using tiled rendering, and how I (with Michael and Miklos) am planning to enhance it in the future by incorporating support for OpenGL, efficient tile management, and multi-threading.

Besides that, it was a wonderful opportunity for me to meet new people contributing to the Fedora project and share ideas with them. I now have a better idea of how I can contribute more to Fedora, and feel motivated enough to continue my contributions. I have made quite a few friends who, I think, would be happy to help me if I plan to get started with any of the Fedora teams, and I do plan to involve myself in a few more interesting teams in the future, sparing time from my regular work.

Last but not least, I would like to thank all the organizers for making this event possible. They have been working hard for months, and have had many sleepless nights just to make sure everything stayed on track. I would also like to thank them for sponsoring my stay and travel, without which I would not have been able to attend the event.

July 02, 2015

We need 64bit, everywhere!

Just bought a new 8TB disk drive. Following my standard procedure, I ran badblocks -w against the disk as a burn-in test. Running Fedora 20 on x86_64, I was surprised to see this:

badblocks -v -v -w /dev/sdf
 badblocks: Value too large for defined data type invalid end block (7516192768): must be 32-bit value

Reproducer:

lvcreate -L 1G --type thin-pool --thinpool thin_pool $VG
lvcreate -T $VG/thin_pool -V 4T -n thinvol
badblocks -v -v -w /dev/$VG/thinvol

Solution

badblocks -v -v -w -b 4096 /dev/$VG/thinvol
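Why this works: badblocks defaults to 1024-byte blocks and has to fit the block count into a 32-bit value. A quick back-of-the-envelope check (mine, not from the original error report) shows that quadrupling the block size to 4096 brings the count back under the limit:

 # 7516192768 one-KiB blocks overflow 32 bits (2^32 - 1 = 4294967295)
 $ echo $(( 7516192768 / 4 ))
 1879048192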
my T-Shirt

root is god
selinux is blasphemy

How I Created Hardcore Freestyle Emacs
I've been asked more and more lately how I've gone about creating Hardcore Freestyle Emacs, my literate computing configuration. After all, it's over 100 pages printed and covers damn near everything I use computers for, from chat to email to un-fucking servers to how I remember all of the useless crap that my coworkers love me for knowing. The code itself is mostly the work of others, with my own customizations and improvements and, increasingly, my own custom ELisp, but the way it all fits together is uniquely me.

July 01, 2015

Hello Red Hat

As I mentioned in my last post, I left my previous employer after quite some years – as of July 1st, I work for Red Hat.

In my new position I will be a Solutions Architect – so basically a sales engineer, the one talking to customers on a more technical level, providing details or proofs of concept where they need them.

Since it’s my first day I don’t really know how it will be – but I’m very much looking forward to it, it’s an amazing opportunity! =)


Converting VMWare Disk Images For Use With QEMU
At my current job we have a VM, created by someone using VMware, which hosts a scaled-down version of our runtime environment. Being the new guy, and someone who's way more comfortable with Fedora Linux than Windows, I wanted to take the VM and run it on my favorite OS.

But I couldn't just copy the image over, since the 60G disk was split up into 31 separate VMDK files, which cried out to be converted to a format usable by Linux.

So here's what I did:

 $ for vmdk in *.vmdk; do qemu-img convert -f vmdk -O raw "$vmdk" "$(basename -s .vmdk "$vmdk").img"; done
 $ cat *.img >> runtime_environment_vm.img

When it was all done I had a bootable disk. I created a new VM using that disk and was off to work in no time!
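As an aside (a note added with hindsight, not what I did at the time): if the split set includes the VMDK descriptor file, newer versions of qemu-img can usually open the whole multi-extent image through the descriptor, collapsing the convert-and-concatenate dance into a single step. Assuming the descriptor were named runtime_environment_vm.vmdk:

 $ qemu-img convert -f vmdk -O raw runtime_environment_vm.vmdk runtime_environment_vm.img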

FUDCon APAC 2015 – Summary

I was in Pune last week for FUDCon, so it's time to write about the experience.

Day -2

I had to start my travel two days before the event, flying from the Leipzig airport to Munich. The trip to Leipzig, the check-in and the flight went by without anything worth mentioning: just a one-hour ride to Leipzig and a 40-minute flight to Munich. I had a long layover in Munich, so I had checked the WIFI and electricity situation beforehand. Munich has free wifi, and they even proudly announce that they have equipped the gate areas with power outlets – the outlets were there, but simply not working. With a little searching I found one that worked, but there was nowhere to sit, so I spent the EMEA Ambassadors meeting standing :D

Day -1

Nothing much to report for that day either: I boarded the plane in the morning and left it in Mumbai at night. The immigration process with the TVOA visa worked well, except that finding the right counter was difficult. So if you travel to India and are eligible for it, that's the easiest way.
Outside the airport, Prima Yogi Loviniltra and a volunteer were already waiting for me; only Ryan Lerch was missing, but he joined us a few minutes later. So the airport pickup was well organized. The cab ride took some time, but interesting conversations made it pass unnoticed. We arrived in Pune in the morning, with just enough time for 3 hours of sleep.

Day 0

I didn't really have time for a good breakfast, but I needed one, as I hadn't had dinner either – and Kushal was already standing there telling me I was too late. I still managed to grab something to eat, even though the waiter kept removing my cutlery whenever I grabbed something. Here too the transport was well organized, and the bus bringing us to MIT was already waiting.
On the first day I joined a few sessions, but I will only write about the most important one for me: the BoF session about the next APAC FUDCon. What worries me is the style that is emerging there – just finding reasons why something might not be good, instead of trying to figure out how problems can be solved!

Day 1

I started the day with Jiri Eischmann's presentation about Fedora Workstation and its present and future. I joined several other talks too; I just want to mention the 'Achieving Community Goals With Fedora' one from Tenzin Chokden, and the awesome job of Joerg Simon and Fabian Affolter with their l10n hackfest for the Tibetan language – good work. Besides that, I had many organizational conversations about organizing things in the APAC region.

Day 2

This was actually the busiest day for me, as I had my own workshop on using Inkscape in the morning. There were some negatives: the room was not the best choice, since it's hard to work on a small table attached to a chair. Worse, the picture from the projector was looped through a camera, but the camera was missing – we had to figure that out first, so I lost some time for the workshop. But we still managed to draw with the participants what I wanted to achieve, and they now have a good foundation to use Inkscape in their work and to learn more. I had already prepared a screencast, which you can find here.

Ryan went for more of a presentation style, which I only do at events to impress people with what can be done with our free software tools and how easy it can be. But as I see it, people at a FOSS event like FUDCon are already open to using our tools and just need some help getting started – and the best way to learn is by doing.

After lunch we had another Ambassadors BoF for APAC – time to talk more Ambassador business. We made some progress in APAC during this year, and at FUDCon a lot of Asians were visible in Fedora Ambassador shirts, but we still haven't equipped everyone with them.

We also made progress on producing F21 media centrally, but there are still some problems to solve, as the process didn't work well and we had no central production in APAC for F22. We talked about how we should do it in the future and, I think, found the best solution. We also had time to discuss whether it would be possible to replace the DVD media, at least partly, with USB sticks; we agreed to research prices before the next APAC FAD at the end of the year and decide then. And yes, we found a place where we want to hold the next FAD – Singapore – so we will work on organizing it during the next months. So we moved forward again.

Day 3

Some went out early in the morning to spend some time in Mumbai, but I am too old for touristic rushing, even though I would have liked to see more of that city – I was in India for FUDCon and not for a tourist tour. So I left the hotel at noon, and the cab brought me to the airport, where I enjoyed the sun outside and later spent some time with Danishka Naven and Izhar Firdaus in the airport.

Day 4

That day I boarded a plane in Mumbai shortly after the day had begun and woke up landing in Zurich. I easily found an AC outlet in this airport, and free wifi was also available, so I could do some work during the time I had to wait for the next flight, which this time was just 5 hours later. After this flight I just had a short trip on the train, and I arrived right in time for the Design Team meeting.

Summary

It was an interesting trip to India: seeing how people live there, making new friends, meeting people I had already worked with, and meeting old friends. I think we moved things forward for the Ambassadors in the APAC region, and that means Fedora will make progress in Asia in the future.

Photos: future plans


This is the third in my series of blog posts about the latest generation of GNOME application designs. In this post, I’m going to talk about Photos. Out of the applications I’ve covered, this is the one that has the most new design work.

One of the unique things about Photos is that it has been built from the ground up with cloud integration in mind. This means that you can use it to view photos from Facebook, Google or Flickr, as well as the images on your local machine. This is incredibly important for the future of the app, and is something we’d like to build on.

Until recently, we’ve focused on getting Photos into shape as a storage-agnostic Photo viewer and organiser. Now that it has matured in these areas, we’ve begun the process of filling out its feature set.

Editing

I’m starting with editing because work in this area is already happening. Photos uses GEGL, the GIMP’s next generation image processing framework, which means that there’s a lot of power under the hood. Debarshi has been (understandably) keen to make use of this for image editing.

For the design of Photos, we’re following one of the design principles drawn up by Jon for GNOME Shell: “design a self-teaching interface for beginners, and an efficient interface for advanced users, but optimize for intermediates”. This can be seen in the designs for photo editing: they are simple and straightforward if you are new to photo editing, but there’s also enough in there to satisfy those who know a few tricks.

We’re also following the other principles of GNOME 3 design: reducing the amount of work the user has to do, preventing mistakes where possible (and allowing them to be easily reversed when they do happen), prioritising the display of content, and using transitions effectively.

The designs organise the different editing tools according to a logical workflow: crop, fix colours (brightness, contrast, saturation, etc.), check sharpness, and finally apply effects.

Editing: Crop

Editing: Filters

We also want editing to feel smooth and seamless, and are focusing on getting the transitions right. To that end, Jakub has been working on motion mockups.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="495" src="https://www.youtube.com/embed/F3_e24QHzfI?feature=oembed" width="660"></iframe>

There’s already a branch with Debarshi’s editing work on it, for those who want to give it a try. The usual caveat applies: this is work in progress.

Import

As I mentioned, Photos already has cloud support. This solves the problem of getting photos into the Photos app for many people – if you are shooting with an Android phone, images can be synced to the cloud and will magically appear in GNOME.

However, if you are one of those old-fashioned types who shoots with an actual camera, you have to manually copy images over from the device, and this is labour intensive and error prone. We want a much more convenient and seamless experience for getting images from these devices onto your computer.

The initial designs for importing photos from a device are deliberately simple: generally speaking, all it should take is a single click to import new shots.

One important aspect of this design is that we want it to be error-proof. There’s nothing worse than realising that you forgot to copy over images and then blanked the SD card they were stored on – we want to prevent this from happening by shifting responsibility for maintaining the photo collection to the app.

Sharing

Sharing is critical for a photos app, since sharing is often the primary reason we take a picture in the first place. It can take place through various means, including showing slideshows, but social media is obviously critical.

There are plans to equip GNOME with a system-wide sharing framework, which will allow posting to social media, and we’d like Photos to take advantage of that. However, sharing is so important for Photos that we don’t want to wait around for system-wide sharing to become available.

Sharing

There are also wireframes and a bug report.

General improvements

Aside from big new features like editing, import and sharing, we have other changes planned. The first is an improved photos timeline view:

Photos Timeline

The main changes here are the addition of date headings, switching to square thumbnails, and hiding image titles. This is all intended to give you a less cluttered view of your photos, as well as clearer markers for navigation.

Another change that we have planned is a details sidebar. This is intended to provide a kind of organising mode, in which you can go through a series of photos and give them titles and descriptions, or assign them to different albums.

Details Sidebar

How to help

There’s already some cool work happening in Photos, and I’m pretty excited about the plans we have. If you want to help or get involved, there’s plenty to be done, but everything is clearly organised.

Bugs have been filed for all the new features and changes that I’ve mentioned in this post, and each one links to the relevant designs and mockups – you can find them listed on the Photos roadmap page.

Also, Debarshi (the Photos maintainer) is happy to review patches, and Jakub and I are available for design discussion and review.

Announce: libvirt-sandbox “Dashti Margo” 0.6.0 release – an application sandbox toolkit

I am pleased to announce that a new public release of libvirt-sandbox, version 0.6.0, is now available from:

http://sandbox.libvirt.org/download/

The packages are GPG signed with:

  Key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF (4096R)

The libvirt-sandbox package provides an API layer on top of libvirt-gobject which facilitates the creation of application sandboxes using virtualization technology. An application sandbox is a virtual machine or container that runs a single application binary, directly from the host OS filesystem. In other words, there is no separate guest operating system install to build or manage.

At this point in time libvirt-sandbox can create sandboxes using either LXC or KVM, and should in theory be extendable to any libvirt driver.

This release contains a mixture of new features and bugfixes.

The first major feature is the ability to provide block devices to sandboxes. Most of the time sandboxes only want/need filesystems, but there are some use cases where block devices are useful. For example, some applications (like databases) can directly use raw block devices for storage. Another one is where a tool actually wishes to be able to format filesystems and have this done inside the container. The complexity with exposing block devices is giving the sandbox tools a predictable path for accessing the device which does not change across hypervisors. To solve this, instead of allowing users of virt-sandbox to specify a block device name, they provide an opaque tag name. The block device is then made available at a path /dev/disk/by-tag/TAGNAME, which symlinks back to whatever hypervisor specific disk name was used.

The second major feature is the ability to provide a custom root filesystem for the sandbox. The original intent of the sandbox tool was that it provide an easy way to confine and execute applications that are installed on the host filesystem, so by default the host / filesystem is mapped to the sandbox / filesystem read-only. There are some use cases, however, where the user may wish to have a completely different root filesystem. For example, they may wish to execute applications from some separate disk image. So virt-sandbox now allows the user to map in a different root filesystem for the sandbox.

Both of these features were developed as part of a Google Summer of Code 2015 project which is aiming to enhance libvirt-sandbox so that it is capable of executing images distributed by the Docker container image repository service. The motivation for this goes back to the original reason for creating the libvirt-sandbox project in the first place, which was to provide a hypervisor-agnostic framework for sandboxing applications, as a higher level above the libvirt API. Once this work is complete, it’ll be possible to launch Docker images via libvirt QEMU, KVM or LXC, with no need for the Docker toolchain itself.

The detailed list of changes in this release is:

  • API/ABI incompatible change, soname increased
  • Prevent use of virt-sandbox-service as non-root upfront
  • Fix misc memory leaks
  • Block SIGHUP from the dhclient binary to prevent accidental death if the controlling terminal is closed & reopened
  • Add support for re-creating libvirt XML from sandbox config to facilitate upgrades
  • Switch to standard gobject introspection autoconf macros
  • Add ability to set filters on network interfaces
  • Search /usr/lib instead of /lib for systemd unit files, as the former is the canonical location even when / and /usr are merged
  • Only set SELinux labels on hosts that support SELinux
  • Explicitly link to selinux, instead of relying on indirect linkage
  • Update compiler warning flags
  • Fix misc docs comments
  • Don’t assume use of SELinux in virt-sandbox-service
  • Fix path checks for SUSE in virt-sandbox-service
  • Add support for AppArmor profiles
  • Mount /var after other FS to ensure host image is available
  • Ensure state/config dirs can be accessed when QEMU is running non-root for qemu:///system
  • Fix mounting of host images in QEMU sandboxes
  • Mount images as ext4 instead of ext3
  • Allow use of non-raw disk images as filesystem mounts
  • Check if required static libs are available at configure time to prevent silent fallback to shared linking
  • Require libvirt-glib >= 0.2.1
  • Add support for loading lzma and gzip compressed kmods
  • Check for supported libvirt URIs when starting guests, to ensure a clear error message upfront
  • Add LIBVIRT_SANDBOX_INIT_DEBUG env variable to allow debugging of kernel boot messages and sandbox init process setup
  • Add support for exposing block devices to sandboxes with a predictable name under /dev/disk/by-tag/TAGNAME
  • Use devtmpfs instead of tmpfs for auto-populating /dev in QEMU sandboxes
  • Allow setup of sandbox with custom root filesystem instead of inheriting from host’s root.
  • Allow execution of apps from non-matched ld-linux.so / libc.so, eg executing F19 binaries on F22 host
  • Use passthrough mode for all QEMU filesystems
WLE 2015 in... Brazil
Lajedo de Pai Mateus - Pedra do Capacete
By Ruy Carvalho (Own work) CC BY-SA, via Wikimedia Commons

This year I was invited once more to be a jury member for the Brazilian Wiki Loves Earth photo competition (thanks, Rodrigo!) and it was a pleasure to witness so many wonderful images (yes, I am a bit jealous, given my recent inactivity in travel/landscape photography).

Taking a look at their top 10 winners, anyone would probably agree this is quality stuff, which will rightfully enrich Wikipedia. Myself, after seeing the larger selection for the jury (around 600 images), I dare to conclude there was a significant increase in quality over the previous year. And I understand the increase was also in quantity, so it looks like a win-win.

Congratulations to the organizers and all the participants!

PS: take a few more moments to admire the winners from the other countries; they are added to the page gradually as each local jury gets its work done. I still think the Brazilian pictures are among the best so far :)

Cacimba do Padre - Fernando de Noronha
By Dante Laurini Jr (Own work) CC BY-SA, via Wikimedia Commons
Fedora Hubs Update!!!

fedora-hubs_logo

The dream is real – we are cranking away, actively building this very cool, open source, socially-oriented collaboration platform for Fedora.

Meghan Richardson, the Fedora Engineering Team’s UX intern for this summer, and I have been cranking out UI mockups over the past month or so (Meghan way more than me at this point. :) )

[Screenshot: Fedora Hubs UI mockups]

We also had another brainstorming session: we ran the Fedora Hubs Hackfest, a prequel to the Fedora Release Engineering FAD, a couple of weeks ago.

After a lot of issues with the video, the full video of the hackfest is now finally available (which is the reason for the delay in my posting this :) ).

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/p-KYhPlUUBU" width="560"></iframe>

Let’s talk about what went down during this hackfest and where we are today with Fedora Hubs:

What is Fedora Hubs, Exactly?

(Skip directly to this part of the video)

We talked about two elevator pitches for explaining it:

  • It’s an ‘intranet’ page for the Fedora Project. You work on all these different projects in Fedora, and it’s a single place you can get information on all of them as a contributor.
  • It’s a social network for Fedora contributors. One place to go to keep up with everything across the project in ways that aren’t currently possible. We have a lot of places where teams do things differently, and it’s a way to provide a consistent contributor experience across projects / teams.

Who are we building it for?

(Skip directly to this part of the video)

  • New Fedora Contributors – A big goal of this project is to enable more contributors and make bootstrapping yourself as a Fedora contributor less of a daunting task.
  • Existing Fedora Contributors – They already have a workflow, and already know what they’re doing. We need to accommodate them and not break their workflows.

The main philosophy here is to provide a compelling user experience for new users that can potentially enhance the experience for existing contributors but at the very least will never disrupt the current workflow of those existing contributors. Let’s look at this through the example of IRC, which Meghan has mocked up in the form of a web client built into Fedora Hubs aimed at new contributor use:

If you’re an experienced contributor, you’ve probably got an IRC client, you’re probably used to using IRC, and you wouldn’t want to use a web client. For new contributors, though, IRC is a barrier. It’s more technical than the types of chat systems they’re accustomed to. It becomes another hurdle on top of the 20 or so other hurdles they have to clear in the process of joining as a contributor – completely unrelated to the actual work they want to do (whatever it is – design, marketing, docs, ambassadors, etc.)

New contributors should be able to interact with the hubs IRC client without having to install anything else or really learn a whole lot about IRC. Existing contributors can opt into using it if they want, or they can simply disable the functionality in the hubs web interface and continue using their IRC clients as they have been.

Hackfest Attendee Introductions

(Skip directly to this part of the video)

Next, Paul suggested we go around the room and introduce ourselves for anybody interested in the project (and watching the video.)

  • Máirín Duffy (mizmo) – Fedora Engineering UX designer working on the UX design for the hubs project
  • Meghan Richardson (mrichard) – Fedora Engineering UX intern from MSU also working on the UX design for the hubs project
  • Remy Decausemaker (decause) – Fedora Community lead, Fedora Council member
  • Luke Macken (lmacken) – Works on Fedora Infrastructure, release engineering, tools, QA
  • Adam Miller (maxamillion) – Works on Release engineering for Fedora, working on build tooling and automation for composes and other things
  • Ralph Bean (threebean) – Software engineer on Fedora Engineering team, will be spending a lot of time working on hubs in the next year
  • Stephen Gallagher (sgallagh) – Architect at Red Hat working on the Server platform, on Fedora’s Server working group, interested in helping onboard as many people as possible
  • Aurélien Bompard (abompard) – Software developer, lead developer of Hyperkitty
  • David Gay (oddshocks) – Works on Fedora infrastructure team and cloud teams, hoping to work on Fedora Hubs in the next year
  • Paul Frields (stickster) – Fedora Engineering team manager
  • Pierre-Yves Chibon (pingou) – Fedora Infrastructure team member working mostly on web development
  • Patrick Uiterwijk (puiterwijk) – Member of Fedora’s system administration team
  • Xavier Lamien (SmootherFrOgZ) – Fedora Infrastructure team member working on Fedora cloud SIG
  • Atanas Beloborodov (nask0) – A very new contributor to Fedora, he is a web developer based in Bulgaria.
  • (Matthew Miller and Langdon White joined us after the intros)

Game to Explore Fedora Hub’s Target Users

(Skip directly to this part of the video)

We played a game called ‘Pain Gain’ to explore both of the types of users we are targeting: new contributors and experienced Fedora contributors. We started by talking about experienced contributors. I opened up a shared Inkscape window and made two columns, “pain” and “gain”:

  • For the pain column, we came up with things that are a pain for experienced contributors the way our systems / processes currently work.
  • For the gain column, we listed out ways that Fedora Hubs could provide benefits for experienced contributors.

Then we rinsed and repeated for new contributors:

[Screenshot: the pain and gain columns]

While we discussed the pains/gains, we also came up with a lot of sidebar ideas that we documented in an “Idea Bucket” area in the file:

[Screenshot: the Idea Bucket area]

I was worried that this wouldn’t work well in a video chat context, but I screen-shared my Inkscape window and wrote down suggestions as they were brought up and I think we came out with a useful list of ideas. I was actually surprised at the number of pains and gains on the experienced contributor side: I had assumed new contributors would have way more pains and gains and that the experienced contributors wouldn’t have that many.

Prototype Demo

(Skip directly to this part of the video)

[Screenshot: the Fedora Hubs prototype]

Ralph gave us a demo of his Fedora Hubs prototype – first he walked us through how it’s built, then gave the demo.

[Diagram: prototype architecture]

In the README there is a full explanation of how the prototype works, so I won’t reiterate everything here. Some points that came up during this part of the meeting:

  • Would we support hubs running without Javascript? The current prototype completely relies on JS. Without JS, it would be hard to do widgets like the IRC widget. Some of the JS frameworks come with built-in fail modes. There are some accessibility issues with ways of doing things with JS, but a good design can ensure that won’t happen. For the most part, we are going to try to support what a default Fedora workstation install could support.
  • vi hotkeys for Hubs would be awesome. :) Fedora Tagger does this!
  • The way the widgets work now, each widget has to define a data function that gets called with a session object, and it has to return a JSON-ifiable Python object. That gets stored in memcached and is how the wsgi app and backend communicate. If you can write a data function that returns JSON and a template the data gets plugged into – that’s mainly what’s needed. Take a look at the stats widget – it’s pretty simple!
  • All widgets also need a ‘should_invalidate()’ function that lets the system know what kinds of information apply to which widgets. Every fedmsg has to go through every widget to see if it invalidates a given widget’s data – we were worried that this would result in a terrible performance issue, but by the end of the hackfest we had that figured out.
  • Right now the templates are Jinja2, but Ralph thinks we should move to client-side (JavaScript) templates. The reason is that when updated data gets pushed over websockets from the bus, it can involve garbage communication any time new changes in data come across – it’s simpler if the widget doesn’t have to request the templates because they are already there in the client.
  • Angular could be a nice client-side way of doing the templates, but Ralph had heard some rumors that AngularJS 2 was going to support only Chrome, and AngularJS 1.3 and 2 aren’t compatible. nask0 has a lot of experience with Angular though and does not think v2 is going to be Chrome-only.
  • TODO: Smoother transitions for when widgets pop into view as they load on an initial load.
  • Langdon wondered if there would be a way for individual widgets to function as stand-alone pieces on desktops or mobile. The raw zeromq pipes could be hooked up to do this, but the current design uses EventSource, which is web-specific and wouldn’t translate to, say, a desktop widget. Fedora Hubs will emit its own fedmsgs too, so you could build a desktop widget using those as well.
  • Cache invalidation issues were the main driver of the slowness in Fedora Packages, but now we have a cache that updates very quickly, so we get constant-time access when delivering those pages.

Mockup Review

[Screenshot: mockup review]

Next, Meghan walked us through the latest (at the time :) we have more now!) mockups for Fedora Hubs, many based on suggestions and ideas from our May meetup (the 2nd hubs video chat.)

Creating / Editing Hubs

(Skip directly to this part of the video)

First, she walked us through her mockups for creating/editing hubs – how a hub admin would be able to modify / set up their hub. (Mockup (download from ‘Raw’ and view in Inkscape to see all screens.)) Things you can modify are the welcome message, colors, what widgets get displayed, the configuration for widgets (e.g. what IRC channel is associated with the hub?), and how to add widgets, among many other things.

Meghan also put together a blog post detailing these mockups.

One point that came up here: a difference is that when users edit their own hubs, they don’t associate an IRC channel with the hub, but rather a nick and a network, to enable their profile viewers to PM them.

We talked about hub admins vs. FAS group admins. Should they be different or exactly the same? We could make a new role in FAS – “hub admin” – and store it there if it’s a separate one. Ralph recommended keeping it simple by making FAS group admins and hub admins one and the same. Some groups are more strict about group admins in FAS, some are not. Would there be scenarios where we’d want people to be able to admin the FAS group for a team but not be able to modify the hub layout (or vice versa)? Maybe nesting the roles – if you’re a FAS admin you can be FAS admin + hub admin; if you’re a hub admin you can just admin the hub but not the FAS group.

Another thing we talked about is theming hubs. Luke mentioned that Reddit allows admins free rein in modifying the CSS. Matthew mentioned having a set of backgrounds to choose from, like former Fedora wallpapers. David cautioned that we want to maintain some uniformity across the hubs to help enable new contributors – he gave the example of Facebook, where key navigational elements are not configurable. I suggested maybe they could only tweak certain CSS classes. Any customizations could be stored in the database.

Another point: members vs. subscribers on a hub. Subscribers ‘subscribe’ to a hub, members ‘join’ a hub. Subscribing to a hub adds it to your bookmarks in the main horizontal nav bar and enables certain notifications for that hub to appear in your feed. We talked about different vocabulary for ‘subscribe’ vs ‘join’ – instead of ‘subscribe’ we talked about ‘following’ or ‘starring’ (as in GitHub) vs joining. (Breaking news :) since then Meghan has mocked up the different modes for these buttons and added the “star” concept! See below.)

[Mockup: subscribe/join/star button states]

We had a bit of an extended discussion about the many different ways someone could be affiliated with a team/project that has a hub. Is following/subscribing too non-committal? Should we have a rank system so you could move your way up the ranks, or is that redundant gamification given the badge system we have in place? (Maybe we can assign ranks based on badges earned?) Part of the issue here is helping people identify the authority of the other people they’re interacting with, but another part is helping people feel more a part of the community and feel like valued members. Subscribing is more like following a news feed; being a member is being part of the team.

Joining Hubs

(Skip directly to this part of the video)

The next set of mockups Meghan went through showed us the workflow of how a user requests membership in a given hub and how the admin receives the membership request and handles it.

We also tangented^Wtalked about the welcome message on hubs and how to dismiss or minimize it. I think we concluded that we would let people collapse welcome messages and remove them, and if they remove one we’ll give them a notification that they can view it at any time by clicking on “Community Rules and Guidelines.”

Similarly, there is the notification letting the admin know that a user has requested access to something: if the admin dismisses it and wants to tend to it later, it will appear in the admin’s personal stream as well for later retrieval.

We talked about how to make action items in a user’s notification feed appear differently than informational notifications; some kind of different visual design for them. One idea that came up was having tabs at the top to filter between types of notifications (action, informational, etc.) I explained how we were thinking about having a contextual filter system in the top right of each ‘card’ or notification to let users show or hide content too. Meghan is working on mockups for this currently.

David had the idea of having action items assigned to people appear as actions within their personal stream… since then I have mocked this up:

[Mockup: an action item in the personal stream]

Personal Profiles

(Skip directly to this part of the video)

Next Meghan walked us through the mockups she worked on for personal profiles / personal streams. One widget she mocked up is a personal library widget. Other widgets included a display of badges earned, hubs you’re a member of, IRC private messages, and a personal profile.

Meghan also talked about privacy with respect to profiles and we had a bit of a discussion about that. Maybe, for example, by default your library could be private, maybe your stream only shows your five most recent notifications and if someone is approved (using a handshake) as a follower of yours they can see the whole stream. Part of this is sort of a bike lock thing…. everything in a user’s profile is broadcast on fedmsg, but having it easily accessible in one place in a nice interface makes it a lot easier (like not having a lock on your bike.) One thing Langdon brought up is that we don’t want to give people a false sense of privacy. So we have to be careful about the messaging we do around it. We thought about whether or not we wanted to offer this intermediate ‘preview’ state for people’s profiles for those viewing them without the handshake. An alternative would be to let the user know who is following them when they first start following them and to maintain a roster of followers so it is clear who is reading their information.

Here’s the blog post Meghan wrote up on the joining hubs and personal profile mockups with each of the mockups and more details.

Bookmarks / main nav

(Skip directly to this part of the video)

The main horizontal navbar in Fedora Hubs is basically a bookmarks bar of the hubs you’re most interested in. Meghan walked us through the bookmarks mockups – she also covered these mockups in detail on her bookmarks blog post.

ZOMG THIS IS SO AWESOME!

Yes. Yes, it is.

So you may be wondering when this is going to be available. Well, we’re working on it. We could always use more help….


Where’s stuff happening?

How does one help? Well, let me walk you through where things are taking place, so you can follow along more closely than my lazy blog posts if you so desire:

  • Chat with us: #fedora-hubs on irc.freenode.net is where most of the folks working on Fedora Hubs hang out, day in and day out. threebean’s hooked up a bot in there too that pushes notifications when folks check in code or mockup updates.
  • Mockups repo: Meghan and I have our mockups repo at https://github.com/fedoradesign/fedora-hubs, which we both have hooked up via Sparkleshare. (You are free to check it out without Sparkleshare and poke around as you like, of course.)
  • Code repo: The code is kept in a Pagure repo at https://pagure.io/fedora-hubs. You’ll want to check out the ‘develop’ branch and follow the README instructions to get all setup. (If I can do it, you can. :) )
  • Feature planning / Bug reporting: We are using Pagure’s issue tracker at https://pagure.io/fedora-hubs/issues to plan out features and track bugs. One way we are using this which I think is kind of interesting – it’s the first time I’ve used a ticketing system in exactly this way – is that for every widget in the mockups, we’ve opened up a ticket that serves as the design spec with mockups from our mockup repo embedded in the ticket.
  • Project tracking: This one is a bit experimental. But the Fedora infra and webdev guys set up http://taiga.fedoraproject.org – an open source kanban board – that Meghan and I started using to keep track of our todo list since we had been passing post-it notes back and forth and that gets a bit unwieldy. It’s just us designers using it so far, but you are more than welcome to join if you’d like. Log in with your Fedora staging password (you can reset it if it’s not working and it’ll only affect stg) and ping us in #fedora-hubs to have your account added to the kanban board.
  • Notification Inventory: This is an inventory that Meghan started of the notifications we’ve come up with for hubs in the mockups.
  • Nomenclature Diagram for Fedora Hubs: We’ve got a lot of neat little features and widgets and bits and bobs in Fedora Hubs, but it can be confusing talking about them without a consistent naming scheme. Meghan created this diagram to help sort out what things are called.

How can I help?

Well, I’m sure glad you asked. :) There’s a few ways you can easily dive in and help right now, from development to design to coming up with cool ideas for features / notifications:

  1. Come up with ideas for notifications you would find useful in Fedora Hubs! Add your ideas to our notification inventory and hit us up in #fedora-hubs to discuss!
  2. Look through our mockups and come up with ideas for new widgets and/or features in Fedora Hubs! The easiest way to do this is probably to peruse the mini specs we have in the pagure issue tracker for the project. But you’re free to look around our mockups repo as well! You can file your widget ideas in Pagure (start the issue name with “Idea:” and we’ll review them and discuss!
  3. Help us develop the widgets we’ve planned! We’ve got little mini design specs for the widgets in the Fedora Hubs pagure issue tracker. If a widget ticket is unassigned (and most are!), it’s open and free for you to start hacking on! Ask Meghan and me any questions in IRC about the spec / design as needed. Take a look at the stats widget that Ralph reviewed in explaining the architecture during the hackfest, and watch Ralph’s demo and explanation of how Hubs is built to see how the widgets are put together.
  4. There are many other ways to help (ask around in #fedora-hubs to learn more,) but I think these have a pretty low barrier for starting up depending on your skillset and I think they are pretty clearly documented so you can be confident you’re working on tasks that need to get done and aren’t duplicating efforts!

    Hope to see you in #fedora-hubs! :)

micro-webapps: webconf-spec, nginx support, tests and new github repositories
It has been some time since I wrote a post about micro-webapps development progress. Just to remind you, the goal of the micro-webapps project is to allow simple deployment of web applications in a cloud (multi-container) environment. For more information, read the micro-webapps GitHub page before reading further. In this post, I'm going to sum up what has happened during the last month.

GitHub repositories


The first big change is that micro-webapps has been split into various sub-projects:

  • micro-webapps - Contains Docker images and Nulecules used to deploy the web applications.
  • webconf-spec - Specification of webserver-independent configuration of web applications. This will be described later in this post.
  • webconf-spec-tests - Simple testing framework to test webconf-spec implementations. This is also described further in this post.
  • haproxy-cfg - Implementation of webconf-spec for HAProxy.
  • nginx-cfg - Implementation of webconf-spec for nginx.
  • httpd-cfg - Implementation of webconf-spec for Apache httpd.
  • kubernetes-confd - Support tool to get the web application's configuration from Kubernetes/Openshift API server.

You can also see the list of micro-webapps sub-projects on micro-webapps GitHub organization page.

Introducing the webconf-spec


Another decision made during the last month is the introduction of webconf-spec.

Webconf-spec is a specification of webserver configuration for web applications. The goal of webconf-spec is to provide a way to configure widely used webservers and proxies like Apache httpd, Nginx or HAProxy using a single configuration file.

The idea behind webconf-spec is to allow a web application's developers or packagers to define a single webserver configuration in JSON format, which is then handled by the particular webserver when the web application is installed or deployed.
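To make this concrete, here is a purely illustrative sketch of what such a file might look like – the field names below are invented for illustration and are not taken from the actual spec, so consult the webconf-spec repository for the real schema:

 {
     "_note": "field names here are illustrative only, not the real webconf-spec schema",
     "version": "1.0",
     "virtualhost": "blog.example.com",
     "document_root": "/var/www/blog",
     "proxy": {
         "backends": ["http://127.0.0.1:8080"]
     }
 }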

This brings a lot of benefits. The biggest one is that the developer can distribute a self-contained application which also includes the configuration used to deploy it, without having to care about the particular webserver used for deployment.

Another big benefit is that the deployer can switch the webserver (or cloud provider) without the need to change any configuration file.

Using Nulecule, it is possible to use micro-webapps together with webconf-spec to create a full ecosystem of web applications which can be deployed in a simple way on a running cloud. See the Deploying the micro-webapps application – Wordpress example.

Introducing the nginx-cfg


There have been HAProxy and Apache httpd implementations of webconf-spec for some time. These implementations take webconf-spec formatted files as input and generate HAProxy or Apache httpd configuration as output.

Last month, I added another webconf-spec implementation – nginx-cfg – so it is now possible to deploy web applications using the nginx webserver too.

Webconf-spec implementation tests


Another improvement of micro-webapps is a testing framework for the webconf-spec implementations. There is a dedicated GitHub repository, webconf-spec-tests, containing the tests, which are shared between all the implementations (HAProxy, nginx and Apache httpd).

These tests check for differences between the webconf-spec implementations' outputs, and they also run the webserver and verify that it really responds to requests in the configured way.

Tests are executed per commit using the Travis CI. You can see the tests output here:


Using these tests, we can be quite sure that all the implementations are generating equivalent webserver configuration (if you don't count various bugs which still need to be fixed :) )
FUDCon APAC 2015 in Pune

I had the pleasure of attending my second FUDCon APAC, this time in Pune, India. I arrived the day before the conference at the airport in Bombay, where I met Tuan. After four tiring hours, we arrived in Pune and met Kushal.

My contribution to the conference was a keynote on Fedora Workstation. I found out only a couple of days before the conference that my talk had been selected as a keynote. That is why I changed my presentation at the last minute: I removed slides with technical details so that it would be understandable for a general audience. I also didn’t speak about Fedora Workstation specifically, but about (Linux) desktop problems in general and how we’re trying to solve them in Fedora Workstation. I think the talk went pretty well; it received a lot of questions in the Q&A at the end of the keynote and later during hallway conversations. The most frequent complaint from users was the lack of multimedia support, so I added it to my presentation and explained that it’s not really a technical issue, that we’re working hard to make it better, and that we might see a significant improvement in Fedora 23.

Me giving the keynote, photo is courtesy of Kushal Das.


I also really enjoyed other keynotes, especially the one by Tenzin Chokden who has worked on adding Tibetan translations to Fedora.

I also achieved other things:

  • participated in the discussion about the location for the next FUDCon APAC.
  • shared with APAC ambassadors what is our system for swag production and distribution in EMEA.
  • attended a key-signing party and got my GPG key signed by Harris Pillay, Dennis Gilmore, Jared Smith and others.
  • met a lot of Fedora contributors from India and people from Pune office of Red Hat.
  • agreed with Ryan Lerch that we would create a repository of artworks for Fedora swag production (yay!)

I’m staying in India for a few more days. Dennis Gilmore and I went to the historical center of Pune on Monday and to the highlands near Pune yesterday.

I’d like to thank the Fedora Project for providing me with accommodation during the conference and for taking care of me (it was my first conference where a pick-up at the airport was arranged for me!). My big thanks go to the whole organizing team and especially to Kushal, who has been a great host to us.


Datum und Uhrzeit mit timesyncd aktuell halten
Please also note the remarks about the HowTos!

Some time ago, we described here how to switch your Fedora over to the systemd components networkd and timesyncd.

To use timesyncd for synchronizing the date and time, timesyncd first has to be told which time servers to use. To do this, the following lines must be added to /etc/systemd/timesyncd.conf:

NTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org 2.fedora.pool.ntp.org 3.fedora.pool.ntp.org
FallbackNTP=0.pool.ntp.org 1.pool.ntp.org 0.fr.pool.ntp.org

In the next step, synchronization via ntp has to be enabled:

sudo timedatectl set-ntp true

To avoid problems when setting the system time, you should additionally tell timesyncd to keep the system clock in UTC:

sudo timedatectl set-local-rtc 0

Once the above steps have been carried out, systemd will from now on also take care of keeping the system's date and time correct.
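To verify that synchronization is actually active, a quick check with timedatectl (the exact field names vary between systemd versions):

$ timedatectl status
...
NTP enabled: yes
NTP synchronized: yes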

Stumbling into the world of 4K displays [UPDATED]

Samsung U28D590D 4K display

Woot suckered me into buying a 4K display at a fairly decent price and now I have a Samsung U28D590D sitting on my desk at home. I ordered a mini-DisplayPort to DisplayPort cable from Amazon and it arrived just before the monitor hit my doorstep. It’s time to enter the world of 4K displays.

The unboxing of the monitor was fairly uneventful and it powered up after a small amount of assembly. I plugged my mini-DP to DP cable into the monitor and then into my X1 Carbon 3rd gen. After a bunch of flickering, the display sprang to life, but the image looked fuzzy. After some hunting, I found that the resolution wasn’t at the monitor’s maximum:

$ xrandr -q
DP1 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 607mm x 345mm
   2560x1440     59.95*+
   1920x1080     60.00    59.94  
   1680x1050     59.95  
   1600x900      59.98

I bought this thing because it does 3840×2160. How confusing. After searching through the monitor settings, I found an option for “DisplayPort version”. It was set to version 1.1 but version 1.2 was available. I selected version 1.2 (which appears to come with something called HBR2) and then the display flickered for 5-10 seconds. There was no image on the display.

I adjusted GNOME’s Display settings back down to 2560×1440. The display sprang back to life, but it was fuzzy again. I pushed the settings back up to 3840×2160. The flickering came back and the monitor went to sleep.

My laptop has an HDMI port and I gave that a try. I had a 3840×2160 display up immediately! Hooray! But wait — that resolution runs at 30Hz over HDMI 1.4. HDMI 2.0 promises faster refresh rates, but neither my laptop nor the display supports it. After trying to use the display at max resolution with a 30Hz refresh rate, I realized that it wasn’t going to work.

The adventure went on and I joined #intel-gfx on Freenode. This is apparently a common problem with many onboard graphics chips as many of them cannot support a 4K display at 60Hz. It turns out that the i5-5300U (that’s a Broadwell) can do it.

One of the knowledgeable folks in the channel suggested a new modeline. That had no effect. The monitor flickered and went back to sleep as it did before.
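For anyone who wants to experiment along the same lines: the exact modeline suggested in the channel wasn't recorded, but the usual recipe is to generate one with cvt and add it with xrandr, roughly like this:

$ cvt 3840 2160 60
# 3840x2160 59.97 Hz (CVT) hsync: 134.18 kHz; pclk: 712.75 MHz
Modeline "3840x2160_60.00"  712.75  3840 4160 4576 5312  2160 2163 2168 2237 -hsync +vsync
$ xrandr --newmode "3840x2160_60.00" 712.75 3840 4160 4576 5312 2160 2163 2168 2237 -hsync +vsync
$ xrandr --addmode DP1 "3840x2160_60.00"
$ xrandr --output DP1 --mode "3840x2160_60.00"

In my case it made no difference, but it's a useful diagnostic step before blaming the hardware.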

I picked up some education on the difference between SST and MST displays. MST displays essentially have two chips within the monitor, each handling half of the display; both of them work together to drive the entire display. SST monitors (the newer variety, like the one I bought) take a single stream, and one single chip in the monitor figures out how to display the content.

At this point, I’m stuck with a non-working display at 4K resolution over DisplayPort. I can get lower resolutions working via DisplayPort, but that’s not ideal. 4K works over HDMI, but only at 30Hz. Again, not ideal. I’ll do my best to update this post as I come up with some other ideas.

UPDATE 2015-07-01: Thanks to Sandro Mathys for spotting a potential fix:


I found BIOS 1.08 waiting for me on Lenovo’s site. One of the last items fixed in the release notes was:

(New) Supported the 60Hz refresh rate of 4K (3840 x 2160) resolution monitor.

After a quick flash of a USB stick and a reboot to update the BIOS, the monitor sprang to life after logging into GNOME. It looks amazing! The graphics performance still isn’t stellar (but hey, this is Broadwell graphics we’re talking about) but it does 3840×2160 at 60Hz without a hiccup. I tried unplugging and replugging the DisplayPort cable several times and it never flickered.

The post Stumbling into the world of 4K displays [UPDATED] appeared first on major.io.

FUDCon, where friends meet

The madness is over. FUDCon Pune 2015 happened between 26-28 June 2015, and we successfully hosted a large number of people at MIT College of Engineering. This was not without challenges though and we met yesterday to understand what went well for us (i.e. the FUDCon volunteer team) and what could have been better. This post however is not just a summary of that discussion, since it is heavily coloured by my own impression of how we planned and executed the event.

The bid

Our bid was pretty easy to get together because we had a pretty strong organizer group at the outset and we more or less knew exactly what we wanted to do. We wanted to do a developer-focussed conference that users could attend and hopefully become contributors to the Fedora project. The definition of developer is a bit liberal here, meaning any contributor who can pitch in to the Fedora project in any capacity. The only competing bid was from Phnom Penh and it wasn’t serious competition by any stretch of the imagination, since its only opposition to our bid was “India has had many FUDCons before”. That, combined with some serious problems with their bid (primarily cash management related), meant that Pune was the obvious choice. We had trouble getting an official verdict on the bid due to Christmas vacations in the West, but we finally had a positive verdict in January.

The CfP

The call for participants went out almost immediately after the bid verdict was announced. We gave about a month for people to submit their proposals, and once we did that, a lot of us set out pinging individuals and organizations within the Open Source community. This worked, because we got 142 proposals, many more than we had imagined.

We had set out with the idea of doing just 3 parallel tracks because some of us were of the opinion that more tracks would simply reduce what an individual could take away from the conference. This also meant that we had at most 40 slots with workshops taking up 2 slots instead of 1.

fudcon.in

The website took up most of my time and, in hindsight, it was time that I could have put elsewhere. We struggled with Drupal as none of us knew how to wrangle it. I took on the brave (foolhardy?) task of upgrading the Drupal instance and migrating all of the content, only to find out that the schedule view was terrible and incredibly non-intuitive. I don’t blame Drupal or COD for it though; I am pretty sure I missed something obvious. SaniSoft came to the rescue though and we were able to host our schedule at shdlr.com.

The content

After the amazing response in the CfP, we were tempted to increase the number of tracks since a lot of submissions looked very promising. However, we held on tight and went about making a short list. After a lot of discussions, we finally gave in to the idea of making a separate workshop track and after even more discussions, we separated out a Container track, a Distributed Storage track and an OpenStack track. So all of a sudden, we now had 5 tracks in a day instead of 3!

Sankarshan continually reminded me to reach out to speakers at the event to make sure that their talks fit in with our goals. I could not do that, mainly because we did not have the bandwidth, but also because, in hindsight, our goal wasn’t refined beyond the fact that we wanted a more technical event. The result was that we made a couple of poor choices, the most notable being the opening keynote of the conference. The talk about Delivering Fedora for everyone was an excellent submission, but all of us misunderstood the content of the talk. The talk was a lot more focussed than we had thought it would be and it ended up being the wrong beginning for the conference, since it seemed to scare away a lot of students.

The content profile overall however was pretty strong and most individual talks had almost full rooms. The auditorium looked empty for a lot of talks, but that was because each row of the massive auditorium could house 26 people, so even a hundred people in the auditorium filled in only the first few rows. The kernel talks had full houses and the Container, OpenStack and Storage tracks were packed. It was heartening to see some talks where many in the audience followed the speaker out to discuss the topic further with them.

One clear failure on the content front was the Barcamp idea. We did a poor job of planning it and an even poorer job of executing it.

Travel, Accommodation and Commute

We did a great job on travel and accommodation planning and execution. Travel subsidy arrangements were well planned and announced, and we had regular meetings to decide on them. Accommodation was negotiated and booked well in advance and we had few issues on that front, except an occasionally overloaded network at the hotel. We had excellent support for visa applications as well as for making sure that speakers were picked up and dropped off at the airport on time. The venue was far from the hotel, so we had buses to ferry everyone across. Although that was tiring, it was done with perfect precision and we had no unpleasant surprises in the end.

Materials, Goodies and SWAG

We had over 2 months from the close of the CfP to conference day, and we wasted a lot of that time when we should have been ordering and readying swag. This is probably the biggest mistake we made in planning and it bit us quite hard in the closing weeks. We had a vendor bail on us near the end, leading to a scramble to Raviwar Peth to try and get people to make us stuff in just over a week. We were lucky to find such vendors, but we ended up making some compromises on quality. Not on t-shirts though, since that was an old reliable vendor that we had forgotten about during the original quote-collection. He worked night and day and delivered the t-shirts and socks despite the heavy Mumbai rains.

The design team was amazing with their quick responses to our requests and made sure we had the artwork we needed. They worked with some unreasonable deadlines and demands and came out on top on all of them. The best part was getting the opportunity to host all of them together on the final day of the conference and doing a Design track where they did sessions on Inkscape, Blender and GIMP.

We struggled with some basic things with the print vendor like sizes and colours, but we were able to fix most of those problems in time.

Venue

We settled on MIT College of Engineering as the venue after considering 2 other colleges. We did not want to do the event at COEP again since they hosted the event in 2011. They had done really well, but we wanted to give another college the opportunity to host the event. I had been to MIT weeks earlier as a speaker at their technical event called Teknothon and found their students to be pretty involved in Open Source and technology in general, so it seemed natural to suggest them as potential hosts. MITCOE were very positive and were willing to become hosts. With a large auditorium and acceptably good facilities, we finalized MITCOE as our venue of choice.

One of the major issues with the venue though was the layout of the session rooms. We had an auditorium, classrooms on the second floor of another building and classrooms on the 4th floor of the same building. The biggest trouble was getting from the auditorium to that other building and back. The passages were confusing and a lot of people struggled to get from one section to the other. We had put up signs, but they clearly weren’t good enough and some people just gave up and sat wherever they were. I don’t know if people left out of frustration; I hope they didn’t.

The facilities were pretty basic, but the volunteers and staff did their best to work around that. WiFi did not work on the first two days, but the internet connection for streaming talks from the main tracks worked and there were a number of people following the conference remotely.

HasGeek pitched in with videography for the main tracks and they were amazing throughout the 3 days. There were some issues on the first day in the auditorium, but they were fixed and the remainder of the conference went pretty smoothly. We also had a couple of laptops to record (but not stream) talks in other tracks. We haven’t reviewed their quality yet, so the jury is still out on how useful they were.

Volunteers and Outreach

While our CfP outreach was active and got good results, our outreach in general left a lot to be desired. Our efforts to engage student volunteers and the college were more or less non-existent until the last days before the conference. We spoke to our volunteers for the first time only a couple of days before the conference and, as expected, many of the volunteers did not even know what to expect from us or the conference. This meant that there was barely any connection between us.

Likewise, our media efforts were very weak. Our presence in social media was not worth talking about and we only reached out to other colleges and organizations in the last weeks of the conference. Again, we did not invest any efforts in engaging organizations to try and form a community around us. We did have a twitter outreach campaign in the last weeks, but the content of the tweets actually ended up annoying more people than making a positive difference. We failed to engage speakers to talk about their content or share teasers to build interest for their sessions.

FUDPub

Best. FUDPub. Ever.

After looking at some conventional venues (i.e. typical dinner and drinks places) for dinner and FUDPub, we finally settled on the idea of having the social event at a bowling arcade. Our hosts were Blu’O at the Phoenix Market City mall. The venue had everything from bowling to pool tables, from karaoke rooms to a dance floor. It had something for everyone and everyone seemed to enjoy it immensely. I know I did, despite my arm almost falling off the next day :)

Budget

We had approval for up to $15,000 from the Fedora budget and we got support from a couple of other Red Hat departments for $5,000 each, giving us total room of $25,000. The final picture of the budget consumption is still a work in progress as we sort out all of the bills and make reimbursements in the coming weeks. I will write another blog post describing that in detail, and also how we managed and monitored the budget over the course of the execution.

Overall Impressions

We did a pretty decent event this time and it seemed like a lot of attendees enjoyed the content a lot. We could have done a lot better on the venue front, but the efforts from the staff and volunteers were commendable. Would I do this again? Maybe not, but that has more to do with wanting to get back to programming than with the event organization itself. Setting up such a major conference is a lot of work and things only get better with practice. Occasional organizers like yours truly cannot do justice to a conference of this size if they do it just once every five years. This probably calls for a dedicated team that does such events.

There were also questions of whether such large conferences were relevant anymore. Some stated their preference for micro-conferences that focussed on a specific subset of the technology landscape, but others argued that having 10 conferences for 10 different technologies was taxing for budgets since it is not uncommon for an individual to be interested in more than 1 technology. In any case, this will shape the future of FUDCon and maybe even Flock, since with such a concentration of focus, Flock could end up becoming a meetup where contributors talk only about governance issues and matters specific to the Fedora project and not the broader technology spectrum that makes Fedora products.

In the end though, FUDCon is where I made friends in 2011 and again, it was the same in 2015. The conference brought people from different projects together and I got to know a lot of very interesting people. But most of all, the friends I made within our volunteer team were the biggest takeaway from the event. We did everything together, we fought and we supported each other when it mattered. There may be things I would have done differently if I did this again, but I would not have asked for a different set of people to work with.

June 30, 2015

Running Kubernetes in Offline Mode
Here I'll talk about how to run kubernetes on a flight that doesn't have wifi... or at a Red Hat Summit hands-on lab that is completely disconnected.  In either case, to set some context, this is useful for me while I'm running a single-host kubernetes configuration for a lab or development where network access is limited or non-existent.

The issue is that K8s tries to pull the pause container whenever it launches a pod.  As such, it tries to connect to gcr.io to download the pause image; gcr.io is the Google Container Registry.  When you are in a disconnected environment, this will cause the pod to stay in a Pending state until it can pull down the pause container.

Here's what you can do to bypass that - at least the only thing I know you can do: pull the pause container ahead of time.  It helps if you know in advance that you'll be in an environment with limited access.

       
# docker pull gcr.io/google_containers/pause
Trying to pull repository gcr.io/google_containers/pause ...
6c4579af347b: Download complete
511136ea3c5a: Download complete
e244e638e26e: Download complete
Status: Downloaded newer image for gcr.io/google_containers/pause:latest




# docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache                    latest              1eff270e703a        7 days ago          649.7 MB
gcr.io/google_containers/pause   1.0                 6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   go                  6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   latest              6c4579af347b        11 months ago       239.8 kB




Now try to launch a pod:

       
# kubectl create -f apache.json


# kubectl get pods
POD                 IP                  CONTAINER(S)        IMAGE(S)            HOST                LABELS              STATUS
apache                                  my-fedora-apache    fedora/apache       127.0.0.1/          name=apache         Pending

The pod is in the Pending state.  You will see the following error if you check the log files:

       
# journalctl -fl -u kube-apiserver.service -u kube-controller-manager.service -u kube-proxy.service -u kube-scheduler.service -u kubelet.service -u etcd -u docker


<snip>
Jun 30 17:29:11 localhost.localdomain docker[978]: time="2015-06-30T17:29:11Z" level="info" msg="-job pull(docker.io/kubernetes/pause, latest) = ERR (1)"
Jun 30 17:29:11 localhost.localdomain kubelet[1544]: E0630 17:29:11.946950    1544 kubelet.go:1002] Failed to introspect network container: Get https://index.docker.io/v1/repositories/kubernetes/pause/images: dial tcp: lookup index.docker.io: no such host; Skipping pod "apache.default.etcd"
<snip>


You'll now need to tag it such that kubernetes realizes that it's local and is able to pull it.

       
# docker tag gcr.io/google_containers/pause docker.io/kubernetes/pause



# docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache                    latest              1eff270e703a        7 days ago          649.7 MB
gcr.io/google_containers/pause   1.0                 6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   go                  6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   latest              6c4579af347b        11 months ago       239.8 kB
kubernetes/pause                 latest              6c4579af347b        11 months ago       239.8 kB



At this point, you should be functional.

       
# kubectl get pods
POD                 IP                  CONTAINER(S)        IMAGE(S)            HOST                LABELS              STATUS
apache              172.17.0.2          my-fedora-apache    fedora/apache       127.0.0.1/          name=apache         Running



You don't need to re-deploy the pod.  K8s will pick up on the available pause image and launch the container correctly.
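As an aside: depending on your kubernetes version, you may be able to avoid the re-tagging step entirely by pointing the kubelet at the locally available image. A sketch, assuming the kubelet supports the --pod-infra-container-image flag and assuming the Fedora sysconfig layout (the file path and variable name may differ in your packaging):

# /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause:latest"

# systemctl restart kubelet
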
On vacation
Irish sunshine


So we’ve managed to make it to Ireland to visit my wife’s family for the summer. It took only a few hours of flying and the kids were great, and now we get to enjoy the beautiful Irish weather.

There are a lot of family things and a few technical things I’d like to get done this summer. Based on last summer’s record, I’ll probably accomplish most of the family stuff, but I’m not too hopeful that I’ll actually get to any of the tech stuff. And that’s probably for the best.


Call for applications for Fedora Diversity Advisor (A Volunteer Position)

Fedora is a big community that includes contributors and users from many different countries, each with their own experiences and historical backgrounds that contribute to a diverse mix of cultural, educational, and behavioral norms. To continuously create and foster an inclusive environment in the Fedora community, it’s important to respond to the needs of existing contributors and users, and welcome new contributors and users from diverse backgrounds.

The Fedora Diversity Advisor Search team for the Fedora Council is looking for a Fedora Diversity Advisor (a volunteer position) to act as a source of support and information for all contributors and users, especially those from underrepresented populations, so that issues of inclusion and equity can be discussed and addressed with multiculturalism and comfort in mind.

Primary responsibilities

  • Implement diversity efforts that will have a big effect on the Fedora community
  • Promote inclusion in the Fedora community
  • Act as a source of support and a mediator should there be any
    concerns or incidents
  • Serve on the Fedora Council, our top-level governance and leadership body

To achieve the above you will work with Fedora community members to identify which of the following and other strategies would be most effective and feasible, and help implement them:

  • Increasing visibility of minority contributors and users through
    talks, feature articles, and awards
  • Making explicit invitations and providing financial support to
    attend conferences and give talks
  • Helping develop skills by offering targeted workshops and internships
  • Creating a welcoming environment by adopting and enforcing codes of
    conduct
  • Fostering communication, networking, and support forums with mailing
    lists and in-person events

Required skills

  • Knowledge of and experience in working with historically
    underrepresented groups
  • Excellent written communications skills, demonstrated through blog
    posts or other written work
  • Understanding of and experience in open source communities
  • Ability to communicate at a moderate technical level about open
    source projects
  • Experience in similar roles in your past is a significant advantage
  • Experience writing grants is a plus.

To apply for the position, please answer the following questions and send your responses and a CV (or link to your online profile) to diversity-app@lists.fedoraproject.org

  • Why do you believe diversity and inclusion are important for Fedora?
  • Why do you want to serve as Fedora’s Diversity & Inclusion Advisor?
  • What specific minority group(s) or issues can you offer insight about?
  • What perspectives, experiences, or knowledge about diversity and inclusion
    could you share with the Fedora community?
  • Do you have experience working across various cultures? (Cross cultural
    refers to various geographies, cultural groups, etc.)
  • To give us further insight, feel free to provide names and contact
    information for up to three people who can speak to your passion, interest
    or experience with diversity and inclusion.

NOTE: Last date of submission of application is July 31, 2015. Also, this is a volunteer position and NOT a paid position.

We are looking forward to great participation.

Fedora Workstation next steps : Introducing Pinos

So this will be the first in a series of blogs talking about some major initiatives we are doing for Fedora Workstation. Today I want to present and talk about a thing we call Pinos.

So what is Pinos? One of the original goals of Pinos was to provide the same level of advanced hardware handling for video that PulseAudio provides for audio. Those of you who have been around for a while might remember how, once upon a time, you could only have one application using the sound card at a time, until PulseAudio properly fixed that. Well, Pinos will allow you to share your video camera between multiple applications, and it also provides an easy-to-use API to do so.

Video providers and consumers are implemented as separate processes communicating over D-Bus and exchanging video frames using fd passing.

Some features of Pinos

  • Easier switching of cameras in your applications
  • It will also allow applications to more easily switch between multiple cameras or mix content from multiple sources.

  • Multiple types of video inputs
  • It supports more than cameras: Pinos also supports other types of video sources; for instance, it can use your desktop as a video source.

  • GStreamer integration
  • Pinos is built using GStreamer and also has GStreamer elements supporting it, making it simple and straightforward to integrate into GStreamer applications (see the sketch after this list).

  • Pinos has some audio support
  • It tries to solve for video some of the same issues that PulseAudio solves for audio, namely letting multiple applications share the same camera hardware. Pinos also includes audio support in order to let you handle both.
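To give a feel for the GStreamer integration mentioned above, a minimal sketch of grabbing video through Pinos from the command line, assuming the source element is named pinossrc (element names may differ as the code evolves):

$ gst-launch-1.0 pinossrc ! videoconvert ! autovideosink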

What do we want to do with this in Fedora Workstation?

  • One thing we know is of great use and importance to many of our users, including many developers who want to make videos demonstrating their software, is better screen capture support. One of the test cases we are using for Pinos is improving the built-in screen casting capabilities of GNOME 3, the goal being to reduce overhead and to allow for easy setup of picture-in-picture capturing. So you can easily set it up so there will be a camera capturing your face and voice and mixing that into your screen recording.
  • Video support for Desktop Sandboxes. We have been working for a while on providing technology for sandboxing your desktop applications and while we with a little work can use PulseAudio for giving the sandboxed applications audio access we needed something similar for video. Pinos provides us with such a solution.

Who is working on this?
Pinos is being designed and written by Wim Taymans, who is the co-creator of the GStreamer multimedia framework and also a regular contributor to the PulseAudio project. Wim also works for Red Hat as a Principal Engineer, in charge of a lot of our multimedia support in both Red Hat Enterprise Linux and Fedora. It is also worth noting that Pinos draws many of its ideas from an early prototype by William Manley called PulseVideo, and builds upon some of the code that was merged into GStreamer as a result of that effort.

Where can I get the code?
The code is currently hosted in Wim’s private repository on freedesktop. You can get it at cgit.freedesktop.org/~wtay/pinos.

How can I get involved or talk to the author
You can find Wim on Freenode IRC, he uses the name wtay and hangs out in both the #gstreamer and #pulseaudio IRC channels.
Once the project is a bit further along we will get some basic web presence set up and a mailing list created.

FAQ

If Pinos contains Audio support will it eventually replace PulseAudio too?
Probably not; the use cases and goals for the two systems are somewhat different, and it is not clear that trying to make Pinos accommodate all the PulseAudio use cases would be worth the effort, or even possible without feature loss. So while there is always a temptation to think ‘hey, wouldn’t it be nice to have one system that can handle everything’, we are at this point unconvinced that the gain outweighs the pain.

Will Pinos offer re-directing kernel APIs for video devices like PulseAudio does for Audio? In order to handle legacy applications?
No, that was possible due to the way ALSA worked, but V4L2 doesn’t have such capabilities and thus we cannot take advantage of them.

Why the name Pinos?
The code name for the project was PulseVideo, but to avoid confusion with the PulseAudio project, and to avoid people making too many assumptions based on the name, we decided to follow in the tradition of Wayland and Weston and take inspiration from local place names related to the creator. Since Wim lives in Pinos de Alhaurin, close to Malaga in Spain, we decided to call the project Pinos. Pinos is the word for pines in Spanish :)

Parsing Option ROM Firmware

A few weeks ago an issue was opened on fwupd by pippin. He was basically asking for a command to return all the hashes of the firmware installed on his hardware, which I initially didn’t really see the point of. However, after doing a few hours of research about all the malware that can hide in the VBIOS of graphics cards, the option ROM of network cards, and keyboard matrix EC processors, I was suitably worried too. I figured fixing the issue was a good idea. Of course, malware could perhaps hide itself (i.e. hiding in an unused padding segment and masking itself out on read) but this at least raises the bar from a security audit point of view, and is somewhat easier than opening the case and attaching a SPI programmer to the chip itself.

Fast forward a few nights. We can now verify ATI, NVIDIA, INTEL and ColorHug firmware. I’ve not got any other hardware with ROM that I can read from userspace, so this is where I need your help. I need willing volunteers to compile fwupd from git master (or rebuild my srpm) and then run:

cd fwupd/src
find /sys/devices -name rom -exec sudo ./fwupdmgr dump-rom {} \;

All being well you should see something like this:

/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom -> f21e1d2c969dedbefcf5acfdab4fa0c5ff111a57 [Version: 013.012.000.019.000000]

If you see something just like that, you’re not super helpful to me. If you see Error reading from file: Input/output error then you’re also not so helpful, as the kernel module for your hardware is exporting a rom file but not hooking up the read vfuncs. If you get an error like Failed to detect firmware header [8950] or Firmware version extractor not known then you’ve just become interesting. If that’s you, can you send the rom file to richard_at_hughsie.com as an attachment, along with any details you know about the hardware. Thanks!

Richard.

Post Filtering

In order to prevent users from being overwhelmed by a fire hose of notifications from the hubs they’re subscribed to and from all the other apps connected to Fedora Hubs, we decided to design a filtering system.

If a user sees a type of notification that they don’t find valuable, they can use a dropdown located in the upper right of the notification card to decide how they want to filter out these sorts of posts.

Selection_014

As you can see, there are three different options for how users can block future posts. First, they can decide not to see any notifications that are exactly like the active card – here, the example is design team meeting reminders. The user can also choose to hide all updates from the design-team hub, if that is the kind of content they find unhelpful, or hide all meeting reminders. We hope that these three options will allow users to gain full control over their notification streams.

Also in this dropdown are the options to ‘save link’ or ‘pin link’. The idea here is that users will be able to bookmark specific notifications they find important. Saving the link will put it in a private area of the main notification stream, while pinning the link will put it in their library, which is a kind of public reference area. Libraries were initially conceptualized as a place where team or project hubs can put links to content important for new contributors, but we realized that individual users may also want to collect these references to show to their peers or to new users they may be mentoring.

Selection_015

To complement the ability to filter one’s stream, we needed to design the filtered stream itself. Each tab here reveals a different stream of notifications: ‘my stream’ has the contextual filters from above applied. ‘My actions’ holds notifications that require action – generally things like approving subscribers, new hub members, answering private messages on IRC, and looking at tickets that have been assigned to you. The ‘saved notifications’ tab was mentioned above, and it’s where the notifications people explicitly wanted to save will appear. The ‘all’ tab is the firehose of all notifications.

If people want to look at their filters and remove any, they can do that by clicking ‘view filters’, which will open a modal.

Selection_016

I wasn’t sure whether these interactions would be clear to potential users, so I threw together a quick sketch and tested it on one of my fellow Red Hat interns.


Her feedback informed me that while the filtering dropdown was in the usual place, the wording of ‘hide this notification’ implied that it would only hide that specific card, not all cards with that exact overlap of traits. She also informed me that filter management seemed like something that would happen by going to the main settings panel in the upper right corner of the whole page. This issue may come from the fact that the ‘view filters’ link isn’t as visually prominent on my sketch as it is on the more polished design, but her feedback did encourage me to try to make the link as noticeable as possible. Fedora Hubs is still moving forward!


PHP SIG - Autoloader

The Fedora PHP SIG (Special Interest Group) is back and working.

Here is a quick presentation on how to handle PHP autoloaders in packaging.

 

See : PHP SIG / Packaging Tips / Autoloader

Common design: consumer autoloader

This is one of the most commonly used solutions for implementing an autoloader, both in applications and in packaging. The application provides an autoloader which takes care of all its dependencies. So if the application needs A and B, and B needs C, you need to manage A + B + C in the autoloader.

Problem: if B changes its dependency from C to D, your autoloader needs to be fixed, and your application is probably broken.

This is the solution implemented by Composer, but it only works because every library is bundled, and the autoloader is generated according to the list of installed components at installation time.

In a perfect world, everything would be PSR-0 / PSR-4 compliant, with a clean namespace, and a very simple autoloader would be able to manage everything installed in the system tree (/usr/share/php). But, no luck, perfection doesn't exist.

New design: provider autoloader

The idea is to provide an autoloader for each library (which will consume the autoloaders of its dependencies).

So if the application needs A and B, you only need to include the A + B autoloaders in the application's one (and B's autoloader will require C or D). Exactly as in the RPM world, you don't have to take care of the dependency tree.

Improvement: shared autoloader

When a lot of dependencies are used by an application, having a huge stack of autoloaders can be a performance bottleneck.

So the idea is to use a single autoloader, shared and configured by each library.

Here is an example:

<?php
if (!isset($fedoraClassLoader) || !($fedoraClassLoader instanceof \Symfony\Component\ClassLoader\ClassLoader)) {
    if (!class_exists('Symfony\\Component\\ClassLoader\\ClassLoader', false)) {
        require_once '%{_datadir}/php/Symfony/Component/ClassLoader/ClassLoader.php';
    }
    $fedoraClassLoader = new \Symfony\Component\ClassLoader\ClassLoader();
    $fedoraClassLoader->register();
}
// This library
$fedoraClassLoader->addPrefix('Foo\\Bar\\', dirname(dirname(__DIR__)));
// Another library (dependency)
require_once '/usr/share/php/Foo/Baz/autoload.php';

If the Foo/Baz autoloader uses the same implementation, the single instance ($fedoraClassLoader) will be shared.
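For illustration, the dependency's own autoload.php could follow the exact same pattern; the Foo\Baz namespace and path here are hypothetical, carried over from the example above:

<?php
// /usr/share/php/Foo/Baz/autoload.php (hypothetical path)
if (!isset($fedoraClassLoader) || !($fedoraClassLoader instanceof \Symfony\Component\ClassLoader\ClassLoader)) {
    if (!class_exists('Symfony\\Component\\ClassLoader\\ClassLoader', false)) {
        require_once '/usr/share/php/Symfony/Component/ClassLoader/ClassLoader.php';
    }
    $fedoraClassLoader = new \Symfony\Component\ClassLoader\ClassLoader();
    $fedoraClassLoader->register();
}
// Register this library's prefix; the guard above reuses the caller's loader when present
$fedoraClassLoader->addPrefix('Foo\\Baz\\', dirname(dirname(__DIR__)));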

Initially we started using an instance of Symfony's UniversalClassLoader, but as it is deprecated in Symfony 2.7, we switched to the simpler ClassLoader.

I think UniversalClassLoader was a better choice; the behaviors are really different:

  • With ClassLoader, the first path added will have priority; with UniversalClassLoader the last path will have priority (and the order can be very important in some stacks, with circular dependencies)
  • With ClassLoader, there is no check to avoid duplicated paths for a given prefix (I've tried to fix this, see PR #7, waiting for upstream feedback)
  • With ClassLoader, you can only add a "prefix" (no distinction between prefix, namespace, PSR-0 or PSR-4).

Notice: it probably only makes sense to use this Symfony component when some other Symfony components are already in the dependency tree. If you don't want this dependency on Symfony, you can write your own autoloader, use the Zend Framework autoloader, or use a simple classmap generated by phpab (theseer/autoload).

For more examples, see the phpunit, phpcompatinfo or phpspec packages, which use and share some Symfony, Doctrine and other components, and mostly implement this new approach.

Feedback: if you want to discuss PHP packaging or give feedback about this, join the PHP SIG mailing list!

Great thanks to Shawn Iwinski for his work on this feature, his tests, and lots of valuable discussions.

Video: Demystifying systemd (RHS 2015)

I haven't watched this yet... but I'm sure it is a new classic... with a title like Demystifying systemd. There are a number of awesome videos from Red Hat Summit 2015 so check them out.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="480" src="https://www.youtube-nocookie.com/embed/S9YmaNuvw5U?rel=0" width="853"></iframe>

For those with iframe issues, here's the direct link:
https://www.youtube.com/watch?v=S9YmaNuvw5U


Westcoast summit 2015

I am in San Francisco this week for the second Westcoast Summit – this event was born last year as a counterweight to the traditional GNOME summits that happen every fall on the other coast.

Sunday was a really awesome day to arrive in San Francisco, my hotel is right around Market St, where this was going on:

Gay pride parade

Maybe this inspired my choice of texture when I wrote this quick demo in the evening:

Pango power

Like last year, we are being hosted by the awesome people at Endless Mobile. And like last year, we are lucky to have the elementary team join us to work together and have a good time.

We started the day with a topic collection session:

Topic collection

Since everyone was interested in sandboxing and xdg-app, we started with Alex giving an overview of the current status and ideas around xdg-app. Endless has done an experimental build of their own runtime, and (almost) got it working in a day. After the general sandboxing discussion had run its course, Alex and I decided to patch evince to use the document portal.

GTK topics

In the afternoon, we did a session on GTK+ topics, which produced a number of quick patches for issues that the elementary team has in their use of GTK+.

Later on, Kenton Varda and Andy Lutomirski of sandstorm.io came by for a very useful exchange about our approaches to sandboxing.

kdbus vs cap'n proto

I have put some notes of today's discussions here.

Fedora 20 updates metrics, compared to F19.

Fedora 20 recently reached End Of Life. Below are metrics generated from Bodhi, along with the percent change compared to Fedora 19.

Closing bugs in RPMFusion's Bugzilla

A quick note to people who have RPMFusion repos enabled: RPMFusion will no longer be publishing updates for Fedora 20.

In a few days, I'll be closing RPMFusion's Bugzilla bugs against Fedora 20 with the EXPIRED resolution. If you're involved in one of these bugs, please assign it to another version of Fedora so that it will stay open.

JFTR, I'll be concentrating on making Bugzilla 4.4 available to the RPMFusion guys so we can finally upgrade our installation and bring it in sync with the version of Bugzilla used by Fedora.

KDE ActivityManager in Emacs
Today I whipped up a small Emacs minor-mode to interface with KDE's ActivityManager system. It's my first minor-mode and it's janky as fuck right now, but I'm going to expand on it to eventually be able to filter, for example, to just buffers that are linked to your current activity, pushing me towards a long-standing goal of mine: to create a system which *flows* with what I'm doing, rather than forcing me into its workflow.

June 29, 2015

What to do after installing Fedora 22? (Workstation/Server)

Post-Installation Guide
(Workstation & Server)

$ = normal user
# = root user

Workstation


First, we'll open a terminal and switch to root mode with the following command in the console:

$ su -

Now then, let's start with the guide...




Optimize DNF

# dnf -y install yumex dnf-plugins-core

Update your system

# dnf -y update

Extra repositories (necessary)

By adding these repositories to your system, you'll be able to find practically any software package (program) without problems. Just remember that some of these repositories contain packages that are not considered 100% free software. However, in many cases you'll need one or two packages from these repositories to get certain applications working.

RPM Fusion (Free & Non-Free)

# dnf install --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

# dnf -y update

KDE RedHat (KDE users only)

# dnf -y install wget && wget http://apt.kde-redhat.org/apt/kde-redhat/fedora/kde.repo -O /etc/yum.repos.d/kde.repo


Free graphics drivers (NVIDIA, ATI, INTEL)

These are the graphics drivers you should use on Fedora Linux regardless of your GPU. In the post linked below, we explain how to get the most out of your hardware using these drivers without resorting to the manufacturers' proprietary drivers.

Improving the performance of the free (MESA) graphics drivers on Fedora Linux
Proprietary graphics drivers

Use them only if you do NOT get good graphics performance with the free drivers even after following the instructions in the step above (and rebooting), or if you're going to do heavy gaming on your machine.

NOTE: If you have AMD/ATI graphics, it's honestly not worth using the proprietary drivers (unless you have a very good reason, such as cryptocurrency mining or the like), since the free ones work much better in all common use cases, from video playback to gaming and even rendering (among other things).


 

Codecs and Applications


Codecs are essential for playing various audio/video files on your system regardless of format. Install a good codec pack with the following command in the console:

# dnf -y install gstreamer1-libav gstreamer1-plugins-bad-free-extras gstreamer1-plugins-bad-freeworld gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer-ffmpeg xine-lib-extras xine-lib-extras-freeworld k3b-extras-freeworld gstreamer-plugins-bad gstreamer-plugins-bad-free-extras gstreamer-plugins-bad-nonfree gstreamer-plugins-ugly gstreamer-ffmpeg mencoder

As with any operating system, on Fedora you need applications to work on your computer with different files and under different circumstances/situations. Here is a detailed list of apps, by category, that shouldn't be missing from your Fedora Linux system:

Design/Editing

# dnf -y install gimp scribus inkscape kdenlive openshot blender audacity-freeworld calligra-krita shutter pencil

In this category, other important apps are a photo manager and a quick editor. GNOME has Shotwell while KDE has DigiKam/ShowFoto.

Multimedia (Audio/Video)

# dnf -y install vlc clementine transmageddon

Rip & Burn

# dnf -y install k3b sound-juicer kid3

System administration

# dnf -y install gparted nano wget curl smartmontools htop inxi bleachbit firewall-config beesu pysdm

Messaging and communication

# dnf -y install pidgin xchat

E-mail

# dnf -y install thunderbird

Compression/Decompression

# dnf -y install unrar unzip zip p7zip p7zip-plugins

Printers/Scanners

# dnf -y install python-qt4 hplip hplip-gui libsane-hpaio simple-scan

Internet

# dnf -y install firefox epiphany uget gigolo

Games

# dnf -y install steam

Wine

# dnf -y install wine.i686 cabextract

Java

# dnf -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel icedtea-web

Others

# dnf -y install screenfetch rfkill freetype-freeworld lsb

GNOME

# dnf -y install cheese gnome-shell-extension-common dconf-editor gnome-tweak-tool gtk-murrine-engine* libreoffice-langpack-es

KDE

# dnf -y install kamoso digikam kde-i18n-Spanish kde-l10n-es

Basic build software

You'll need to install these packages if you plan to compile anything on your Fedora Linux (it never hurts; you can always run into software that is only available as a tarball, for example); installing them is definitely recommended.

# dnf -y install kernel-headers kernel-devel dkms
# dnf -y install kernel-PAE-devel (Only if you have a PAE kernel)
# dnf groupinstall "Development Tools" && dnf groupinstall "Development Libraries"

And that wraps up the codecs and applications part... You can also take this opportunity to personalize the app selection on your system a bit; I personally remove everything related to evolution and rhythmbox (being very careful not to break the system through dependencies), since I replace those 2 apps with thunderbird and clementine respectively.

Security


If your Linux machine is on a network alongside Windows/Mac machines, it's always good to have a good anti-rootkit and antivirus at hand (for the protection of others), since although Linux is practically invulnerable to common malware threats, the other systems are not:
Anti-malware protection on Linux (5 basic tips)
Other Apps


NOTE: Apps downloaded as rpm packages in this section are installed with the command

dnf -y install path/to/package.rpm

This will install the dependencies for us and then the downloaded package, as expected.

LibreOffice (if you use KDE)

# dnf -y install libreoffice libreoffice-kde libreoffice-langpack-es && dnf -y remove calligra*

Google Chrome

Very important, since it is the only official way to enjoy Flash Player and Netflix on Linux systems like Fedora:

Download RPM - http://chrome.google.com/

Skype

Download RPM - http://www.skype.com/intl/es/get-skype/on-your-computer/linux/

Spotify
Install Spotify Linux on Fedora 21 and 22
Atom Editor

Fedora is an operating system largely aimed at programmers and engineers, so I figured including this application here wouldn't hurt:
Install Atom Editor on Fedora 21 and 22
Electrum Bitcoin Wallet

In today's world, it never hurts to have a bitcoin wallet at hand:
#QuickTip: Install Electrum Bitcoin Wallet on Fedora Linux
PlayOnLinux
Install PlayOnLinux on Fedora 21 and 22
VirtualBox
Install VirtualBox 4.3.x on Fedora 21 and 22
Extras



And well, here ends the post-install part for the Workstation edition. As with every release, I hope this guide is your reference of choice after installing Fedora Linux, and if anything comes up don't hesitate to leave us a comment below.

If this guide was helpful to you, support us with a tweet


Server


DNF and utilities

# dnf -y install nano wget curl dnf-plugins-core smartmontools htop inxi mc

Update your system

# dnf -y update

Configure the firewall

NOTE: If for some reason you'd rather use iptables instead of firewalld, you can follow these instructions instead of the steps described below.

First we need to get the active zone(s):

# firewall-cmd --get-active-zones

This command will return the name of the active zone(s) (e.g. public) and the network interfaces bound to them, such as eth0, p16p1, wlan0, etc.

It's also advisable to list the ports and services allowed in the active zone(s) in order to make customizations; this is done with:

# firewall-cmd --zone=myzone --list-all

Obviously, use the name of the indicated zone instead of myzone.

Then, to open and close ports we can use:

# firewall-cmd --zone=public --add-port=xxxx/tcp
# firewall-cmd --zone=public --remove-port=xxxx/udp

Respectively, replacing xxxx with the desired port number and specifying the protocol (tcp/udp) as appropriate.
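One caveat worth noting: as far as I know, firewall-cmd changes made without --permanent only last until the next reload or reboot. To make a rule persistent, something like:

# firewall-cmd --zone=public --add-port=xxxx/tcp --permanent
# firewall-cmd --reload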

Enable rc.local
Enable /etc/rc.local on Fedora Linux
Enable tuned

# dnf -y install tuned
# setenforce 0
# tuned-adm list
# tuned-adm profile desired-profile
# setenforce 1

Then we add the following to our rc.local:

# Tuned
setenforce 0
service tuned start
setenforce 1

This is done with the command:

# nano /etc/rc.d/rc.local

Assuming you have already enabled rc.local beforehand.
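A hedged alternative: on a systemd-based Fedora you can usually skip the rc.local workaround entirely and just enable the tuned service, assuming it starts cleanly under your SELinux policy:

# systemctl enable tuned
# systemctl start tuned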

Kernel VM Tunings
Kernel VM tunings to improve performance on Linux
Enable Zswap
Make your Linux faster with Zswap
Install a graphical environment

NOTE: Do this only if you have a very good reason to.
How to install a graphical environment on Fedora Server
Anti-malware protection

On a server this is a must. Remember that in the case of ClamAV (covered in the article below), it's advisable to set up a cron job as root to scan and clean infected files in relevant directories (for example, the public uploads directory on a web server):
Anti-malware protection on Linux (5+ basic tips)
Configure Google DNS

If you didn't do it when installing your system and configuring your network, it's worth doing:
Google Public DNS
Permissive SELinux

NOTE: Do this only if you have a very good reason to.

# nano /etc/selinux/config

And in the file that opens, change enforcing to permissive as shown below:


Extras
And well, here ends the post-install part for the Server edition. I hope that, just like with the Workstation edition, this guide becomes your reference of choice after installing Fedora, and if anything comes up don't hesitate to leave us a comment below.

If this guide was helpful to you, support us with a tweet

All systems go
Service 'Koschei Continuous Integration' now has status: good: Everything seems to be working.
Major service disruption
Service 'Koschei Continuous Integration' now has status: major: There are problems with the web frontend. We are currently working towards fixing them.
I get a SYS_PTRACE AVC when my utility runs ps, how come?
We often get random SYS_PTRACE AVCs, usually when an application is running the ps command or reading content in /proc.

https://bugzilla.redhat.com/show_bug.cgi?id=1202043

type=AVC msg=audit(1426354432.990:29008): avc:  denied  { sys_ptrace } for  pid=14391 comm="ps" capability=19  scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tclass=capability permissive=0

sys_ptrace usually indicates that one process is trying to look at the memory of another process with a different UID.

man capabilities
...

       CAP_SYS_PTRACE
              *  Trace arbitrary processes using ptrace(2);
              *  apply get_robust_list(2) to arbitrary processes;
              *  transfer data to or from the memory  of  arbitrary  processes
                 using process_vm_readv(2) and process_vm_writev(2).
              *  inspect processes using kcmp(2).


These types of access should probably be dontaudited. 

Running the ps command as a privileged process can cause sys_ptrace to happen.

There is special data under /proc that a privileged process would access by running the ps command. This data is almost never actually needed by the process running ps; it is used by debugging tools to see where some of the randomized memory of a process is set up.

The easiest thing for policy writers to do is to dontaudit the access.
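For the AVC above, such a rule would look roughly like this; a sketch matching the mozilla_plugin_t context from the example (the real policy may scope it differently):

dontaudit mozilla_plugin_t self:capability sys_ptrace;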


How do files get mislabeled?
Sometimes we close bugs as CLOSED_NOT_A_BUG because of a file being mislabeled, and we then tell the user to just run restorecon on the object.

But this leaves the user with the question,

How did the file get mislabeled?

They did not run the machine in permissive mode or disable SELinux, but stuff still became mislabeled. How come?

The most frequent cause of this, in my experience, is the mv command: when users mv files around their system, the mv command maintains the security context of the source object.

sudo mv ~/mydir/index.html /var/www/html

This ends up with a file labeled user_home_t in /var/www/html rather than httpd_sys_content_t, and the apache process is not allowed to read it.  If you use mv -Z on newer SELinux systems, it will change the context to the default for the target directory.
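A quick illustration of both fixes, reusing the paths from the example above:

# preserve the target directory's default label while moving
sudo mv -Z ~/mydir/index.html /var/www/html/

# or repair a file that is already mislabeled
sudo restorecon -v /var/www/html/index.html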

Another common cause is debugging a service or running a service by hand.

This bug report is a potential example.

https://bugzilla.redhat.com/show_bug.cgi?id=1175934

Sometimes we see content under /run (/var/run) which is labeled var_run_t when it should have been labeled something specific to the domain that created it, like apmd_var_run_t.
The most likely cause of this is that the object was created by an unconfined domain like unconfined_t. Basically, an unconfined domain creates the object based on the parent directory, which would label it as var_run_t.

I would guess that the user/admin ran the daemon directly rather than through the init script.

# /usr/bin/acpid
or
# gdb /usr/bin/acpid


When acpid then creates /run/acpid.socket, the object is mislabeled. Later, when the user runs the service through the init system, it gets run with the correct type (apmd_t) and is denied from deleting the file.

type=AVC msg=audit(1418942223.880:4617): avc:  denied  { unlink } for  pid=24444 comm="acpid" name="acpid.socket" dev="tmpfs" ino=2550865 scontext=system_u:system_r:apmd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file permissive=0


Sadly, there is not much we can do to prevent this type of mislabeled file from being created, and we end up having to tell the user to run restorecon.
FESCo vote history

A while back I gathered some numbers about the number of participants in some of the elections held in Fedora.

With the results of the new FESCo election being announced I wanted to go back and see the new trend:

            FESCo (voters)
   2008-07    150
   2008-12    169
   2009-06    308
   2009-12    216
   2010-05    180
   2010-11    240
   2011-06    200
   2011-12    225
   2012-06    236
   2012-12    206
   2013-06    166
   2014-02    265
   2014-07    195
   2015-01    283
   2015-06     90

Graphically: 20150629_fesco_voters.png

As you can see, this last election was the one with the lowest number of participants since at least July 2008.

FUDCon Pune 2015 – toward the future
FUDCon Pune 2015 has just closed. My flight is one of the earliest ones leaving India after FUDCon (I am at the Mumbai airport now). While the closing keynote showed a lot of impressive numbers, I personally want to share some thoughts from the point of view of one FUDCon participant. I haven't attended any […]

June 28, 2015

Activities from Mon, 22 Jun 2015 to Sun, 28 Jun 2015

Activities

Activities Amount Diff to previous week
Badges awarded 715 -35.24%
Builds 32755 -51.17%
Copr build completed 17917 +250.69%
Copr build started 18006 +243.17%
Edit on the wiki 586 -20.60%
FAS user created 132 -03.65%
Meeting completed 14 -50.00%
Meeting started 14 -50.00%
New packages 106 -60.89%
Posts on the planet 104 -87.79%
Retired packages 0 NA
Updates to stable 361 +13.88%
Updates to testing 513 -19.59%

Top contributors of the week

Activities Contributors
Badges awarded leinfeva (26), infevvia (18), jonkni (13)
Builds sharkcz (14981), karsten (11354), pbrobinson (2621)
Copr build completed robert (13430), yselkowitz (1083), avsej (488)
Copr build started robert (13437), yselkowitz (1106), avsej (488)
Edit on the wiki adamwill (200), gbcox (32), fedoradummy (28)
Meeting completed Corey84 (4), adamw (2), danofsatx (2)
Meeting started alick (1), atinm (1), dgilmore (1)
New packages  
Posts on the planet admin (26), rrix (13), jmlevick (9)
Retired packages  
Updates to stable remi (34), siwinski (19), denisarnaud (15)
Updates to testing siwinski (53), remi (25), raphgro (12)
Extending Storage on a Fedora Atomic Host
I had to spend some time understanding how to use docker-storage-setup on an Atomic host. The docker-storage-setup tool comes by default and makes configuring storage on your Atomic host easier. I didn't read any of the provided documentation (although that probably would have helped) other than the script itself, so pardon me if this is a duplicate of other info out there; it was a great way to learn more about it.

The goal here is to add more disk space to an Atomic host.  By default, the cloud image that you download has one device (vda) that is 6GB in size.  When I'm testing many, many docker builds and iterating through the Fedora-Dockerfiles repo, that's just not enough space.  So, I need to know how to expand it.

To provide some context about my environment, I'm using a local KVM environment to hack around in.  The first thing I'll do is go ahead and add a few extra disks to my environment so I can do some testing of docker-storage-setup.  Here is what we will be modifying on our running Atomic VM:

My VM is called: atomic1
New disk 1: vdb (logical name presented to VM)
New disk 2: vdc (logical name presented to VM)
New disk 3: vdd (logical name presented to VM)

As with anything you do regarding storage, make sure you have a backup.

Here is what it looks like on the Atomic VM before I add my disks:


# atomic host status
TIMESTAMP (UTC) VERSION ID OSNAME REFSPEC
* 2015-06-27 20:22:47 22.50 0eca6e0777 fedora-atomic fedora-atomic:fedora-atomic/f22/x86_64/docker-host
2015-05-21 19:01:46 22.17 06a63ecfcf fedora-atomic fedora-atomic:fedora-atomic/f22/x86_64/docker-host


# fdisk -l | grep vd
Disk /dev/vda: 6 GiB, 6442450944 bytes, 12582912 sectors
/dev/vda1 * 2048 616447 614400 300M 83 Linux
/dev/vda2 616448 12582911 11966464 5.7G 8e Linux LVM



As you can see, I only have a vda disk.  I need to create 3 additional disks.  I do this on my KVM hypervisor that I am running Atomic on.  In my case it's a Fedora 21 host.

# for i in $(seq 1 3); do qemu-img create -f qcow2 -o preallocation=metadata disk$i.qcow2 4G &> /dev/null; chown qemu.qemu disk$i.qcow2 && chmod 744 disk$i.qcow2; done && ls -ltr disk*
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk1.qcow2
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk2.qcow2
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk3.qcow2



Now that I have 3 new disks, I want to attach them to my running Atomic VM.  Note that I am starting with vdb because the VM already has a vda.

# virsh attach-disk atomic1 /extra/libvirt/images/disk1.qcow2 vdb --targetbus virtio  --live
Disk attached successfully

# virsh attach-disk atomic1 /extra/libvirt/images/disk2.qcow2 vdc --targetbus virtio  --live
Disk attached successfully

# virsh attach-disk atomic1 /extra/libvirt/images/disk3.qcow2 vdd --targetbus virtio  --live
Disk attached successfully


Now, back on the Atomic VM you can see the new disks. You don't need to partition them, or pvcreate them.  The docker-storage-setup script will handle all that.

# fdisk -l | grep vd
Disk /dev/vda: 6 GiB, 6442450944 bytes, 12582912 sectors
/dev/vda1 * 2048 616447 614400 300M 83 Linux
/dev/vda2 616448 12582911 11966464 5.7G 8e Linux LVM
Disk /dev/vdb: 4 GiB, 4295884800 bytes, 8390400 sectors
Disk /dev/vdc: 4 GiB, 4295884800 bytes, 8390400 sectors
Disk /dev/vdd: 4 GiB, 4295884800 bytes, 8390400 sectors



Now, I can start playing around with docker-storage-setup.  There are at least two scenarios that I want to evaluate.  The first is adding a new disk to my host as a new physical volume, VG and LV.  I want that to be what Docker uses for storage. After that, I want to extend that volume with the other two disks.  So, when I am finished, I will have a total of ~ 12GB of space for my Docker images.  I can get instructions on how to do this by looking at the /bin/docker-storage-setup script.  It says:

# This section reads the config file (/etc/sysconfig/docker-storage-setup)
# Currently supported options:
# DEVS: A quoted, space-separated list of devices to be used.  This currently
#       expects the devices to be unpartitioned drives.  If "VG" is not
#       specified, then use of the root disk's extra space is implied.
#
# VG:   The volume group to use for docker storage.  Defaults to the volume
#       group where the root filesystem resides.  If VG is specified and the
#       volume group does not exist, it will be created (which requires that
#       "DEVS" be nonempty, since we don't currently support putting a second
#       partition on the root disk).


Let's take a look at the current configuration.

# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: atomicos-docker--pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 11.8 MB
Data Space Total: 2.961 GB
Data Space Available: 2.949 GB
Metadata Space Used: 49.15 kB
Metadata Space Total: 8.389 MB
Metadata Space Available: 8.339 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3


You can see that the Pool is atomicos-docker--pool.  We want to change that.

Scenario 1


For the first scenario, I want to go ahead and add the initial disk.  It's really, really easy.

1. Check the configuration before making the changes.

# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g

# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0



2. Create the file /etc/sysconfig/docker-storage-setup with the following entries.

# cat /etc/sysconfig/docker-storage-setup

DEVS="vdb"
VG="test-disk"


3. Run the command docker-storage-setup.

# docker-storage-setup
Volume group "test-disk" not found
Cannot process volume group test-disk
0
Checking that no-one is using this disk right now ... OK

Disk /dev/vdb: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0xe737456c.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdb2:
New situation:

Device Boot Start End Sectors Size Id Type
/dev/vdb1 2048 8390399 8388352 4G 8e Linux LVM

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdb1" successfully created
Volume group "test-disk" successfully created
NOCHANGE: partition 2 is size 11966464. it cannot be grown
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Rounding up size to full physical extent 8.00 MiB
Logical volume "docker-meta" created.
Logical volume "docker-data" created.


4. Restart Docker to consume the new configuration.

# systemctl restart docker


5. Check the new configuration.

# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0
/dev/vdb1 test-disk lvm2 a-- 4.00g 84.00m

# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0
test-disk 1 2 0 wz--n- 4.00g 84.00m

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g
docker-data test-disk -wi-ao---- 3.91g
docker-meta test-disk -wi-ao---- 8.00m

# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-253:0-8473021-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/test-disk/docker-data
Metadata file: /dev/test-disk/docker-meta
Data Space Used: 11.8 MB
Data Space Total: 4.194 GB
Data Space Available: 4.183 GB
Metadata Space Used: 53.25 kB
Metadata Space Total: 8.389 MB
Metadata Space Available: 8.335 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3


Now you can see that we are using the new disk and we have a "Data Space Total" of 4GB.

Before using this new storage, you will need to clean out /var/lib/docker and restart Docker.  The reason for this is that we are going from one thin pool volume to another.
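A minimal sketch of that cleanup (destructive: it deletes all existing images and containers, so only do this on a host you can rebuild):

# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl start docker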

Scenario 2


For the second scenario, I want to extend that so we have more space for the data file.  Again, really easy.

1. Modify the /etc/sysconfig/docker-storage-setup file to add the two new disks and run docker-storage-setup.

# cat /etc/sysconfig/docker-storage-setup

DEVS="vdc vdd"
VG="test-disk"

# docker-storage-setup
0
Checking that no-one is using this disk right now ... OK

Disk /dev/vdc: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x2bd6f997.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdc2:
New situation:

Device Boot Start End Sectors Size Id Type
/dev/vdc1 2048 8390399 8388352 4G 8e Linux LVM

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdc1" successfully created
0
Checking that no-one is using this disk right now ... OK

Disk /dev/vdd: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x4d2c0a6f.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdd2:
New situation:

Device Boot Start End Sectors Size Id Type
/dev/vdd1 2048 8390399 8388352 4G 8e Linux LVM

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdd1" successfully created
Volume group "test-disk" successfully extended
NOCHANGE: partition 2 is size 11966464. it cannot be grown
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Rounding size to boundary between physical extents: 16.00 MiB
Size of logical volume test-disk/docker-meta changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents).
Logical volume docker-meta successfully resized
Size of logical volume test-disk/docker-data changed from 3.91 GiB (1000 extents) to 11.81 GiB (3024 extents).
Logical volume docker-data successfully resized



2. Now restart Docker and check the new configuration.

# systemctl restart docker


# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-253:0-8473021-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/test-disk/docker-data
Metadata file: /dev/test-disk/docker-meta
Data Space Used: 11.8 MB
Data Space Total: 12.68 GB
Data Space Available: 12.67 GB
Metadata Space Used: 90.11 kB
Metadata Space Total: 16.78 MB
Metadata Space Available: 16.69 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3

# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0
/dev/vdb1 test-disk lvm2 a-- 4.00g 0
/dev/vdc1 test-disk lvm2 a-- 4.00g 0
/dev/vdd1 test-disk lvm2 a-- 4.00g 164.00m

# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g
docker-data test-disk -wi-ao---- 11.81g
docker-meta test-disk -wi-ao---- 16.00m

# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0
test-disk 3 2 0 wz--n- 11.99g 164.00m


That's it.  Enjoy your new disk space!

The Beginnings of a Systems Integration Hydra
Over on Hardcore Freestyle Emacs I'm beginning to take a look at what sort of systems-level integration I want to build into my Emacs workflow; the end goal, naturally, is to run nearly everything inside of Emacs. Having to open up a terminal throughout the day to do things like this is a major pain in the butt, and it would be nice not to have to do that, even though XMonad does make it fairly simple.
Capture All EWW Buffers
On FSEM, [[I built a system to capture and kill my EWW buffers]] into my refile.org. Throughout the day, I end up with 20 or 30 EWW buffers for things I want to read at the end of the day, be they from Gnus, Twitter or IRC. I try not to get sucked into reading them while I'm working, and so these things just pile up into a big mess of buffers. By pushing them into refile.org and then killing them, I can force myself to properly clock the time spent reading them, as well as make sure they eventually end up in my read list.

June 27, 2015

Introduction to Git

After a long time :-), this time around I will be writing posts to document the path I will follow to join the Gnome Infrastructure team, starting with Git and then Puppet (I could have sworn it was Ansible, good grief 😐)... Well, off we go :-)

What is a repository?

If we speak of a repository in the literal sense, we are talking about a place or object where things are put or stored, a cupboard or a shelf for example. But if we talk about repositories in the computing context, we mean a place (a server or a folder) where digital information is stored; for example, the sites where we consult books or scientific articles, such as http://ieeexplore.ieee.org/Xplore/home.jsp, are repositories of articles :-)

What is version control?

Version control refers to managing the changes made to a file or a group of files. This management can be done manually (creating folders, renaming files, and other creative methods :-) ) or using a system or application that helps us with it, and that is where Git, Mercurial, and many others come in.

What is Git?

Git is a piece of software that helps us with precisely that: controlling the versions of our files. Among its main advantages we have:

  • It is free software.
  • It is efficient and reliable.
  • It is simple to learn.
  • It can be managed in a distributed fashion.
  • It supports http, https, ssh, and rsync.

We can also mention that its creator was Linus Torvalds and that it is currently used to manage the source code of the Linux kernel.

For these reasons git has become a good option for managing files and the changes we make to them.

Installation

The git command is usually already present on every Linux system, but in case it is missing for some reason, for example because we are using a minimal installation, we can install it with the following command:

sudo dnf install git

First steps

Creating the repository

First of all, to create a repository we need a working area, that is, a working directory, although we can also start in a previously existing directory that already contains files.

If, for example, we were going to create a new directory, we would do this:

mkdir mi_directorio
cd mi_directorio
git init

But if our directory already existed and had files inside it, we would only do this:

cd mi_directorio
git init

In both cases a hidden directory called ".git" would be created, which is where all the repository data is stored (change history, logs, the source code itself, etc.).

To verify its existence it is enough to run the command "ls -A", which would give us a result similar to this:

[alexove@wayra proyecto]$ ls -A
.git
[alexove@wayra proyecto]$

Configuring the basics

At some point after the repository has been created, git will ask us to configure the user's name and email address. To set these up it is enough to run the following commands:

[alexove@wayra proyecto]$ git config --global user.name "Alex Oviedo"
[alexove@wayra proyecto]$ git config --global user.email alexove@fedoraproject.org

After which we can run "git config --list" to verify that everything is correct. The result should be similar to:

[alexove@wayra proyecto]$ git config --list
user.name=Alex Oviedo
user.email=alexove@fedoraproject.org
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
[alexove@wayra proyecto]$

Wrapping up

That is it for the introductory part of this series of posts on git. Over the next week I will be publishing more articles about git and my path to Gnome-Infrastructure. Greetings, and leave comments to improve this post :-)

Ksplice: rebootless security updates for Linux

Ksplice Uptrack is a manager for security updates (such as kernel updates and the like) for GNU/Linux, and its advantage over the usual tools for the job is that it can apply those updates live (no reboot required), which is incredibly good for those of us who manage servers. Using Ksplice is simple: just download and install the right package from its official website (it is free for Ubuntu and Fedora) and you will have both a GUI and a CLI at hand (one, the other, or both get enabled as needed) to drive this new update manager. Note that Ksplice does not conflict with your current package manager; it is rather a reinforcement that performs the specific task of installing security updates live, instead of being a replacement for your package manager and the work it does.


If you are on an Ubuntu 14.04 server (for example), you can install it as follows:

# wget https://www.ksplice.com/uptrack/dist/trusty/ksplice-uptrack.deb -O ksplice-uptrack.deb
# dpkg -i ksplice-uptrack.deb
# apt-get install -f

And if we were on Fedora Server we would run:

# wget https://ksplice.oracle.com/uptrack/dist/fedora/XX/ksplice-uptrack.rpm -O ksplice-uptrack.rpm
# dnf -y install ksplice-uptrack.rpm

(Obviously, replace XX with the Fedora version you are using, e.g. 21 and/or 22.) Once the program is installed, we update our system as usual:

# apt-get update && apt-get upgrade (Ubuntu)
# dnf -y update (Fedora)

And we run:

# uptrack-upgrade -y

This will present a license if it is the first time we run the command. We accept it, and the program should try to install the relevant updates (in this case there would be none). We then verify the effective kernel version with:

# uptrack-uname -r

And that's it.

Remember that you can control ksplice-uptrack with the CLI commands listed over here.
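As an aside, Uptrack can also apply updates automatically. If I remember the Uptrack documentation correctly (verify the path and option name against the docs for your version), this is enabled in /etc/uptrack/uptrack.conf:

autoinstall = yes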
Anti-malware protection on Linux (5+ basic tips)

We all know that no system is invulnerable, but GNU/Linux is the closest thing to it we will find as far as operating systems go. Still, there are ways to make your distribution even more secure than it is by default. Let's see how:

1) Buffer overflow prevention (servers)

To counter buffer overflow attacks, it is enough to apply the kernel tunings we have already talked about around here before... In /etc/sysctl.conf:

Small VPS and machines with limited memory (=< 512MB RAM)

# Swappiness & Memory Tuning
vm.swappiness=100
vm.overcommit_memory=2
vm.overcommit_ratio=50
vm.oom_kill_allocating_task=1
vm.dirty_ratio=10
vm.dirty_background_ratio=5

Physical server (>= 1GB RAM)

# Swappiness & Memory Tuning
vm.swappiness=60
vm.panic_on_oom=0
vm.oom_kill_allocating_task=1
vm.dirty_ratio=10
vm.dirty_background_ratio=5
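To load these settings without rebooting, you can have sysctl re-read the file (a quick note; sysctl -p reads /etc/sysctl.conf by default):

# sysctl -p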


2) Antivirus (installing and configuring ClamAV)


Having an antivirus on a GNU/Linux machine lets you (among other things) scan suspicious files, folders, and devices on your machine without endangering other computers you may have on the same network, such as Windows/OS X machines, since Linux is (as I said at the start) practically invulnerable to common malware. On a mail server (to cite another practical example), an antivirus lets you scan attachments before processing them. The most famous antivirus for any Linux distro out there is ClamAV, which offers on-demand scanners, a headless interface, a graphical interface, and even a daemon that integrates perfectly with mail servers.

Installation

To install it on Fedora Linux, for example, we would do the following:

# dnf -y install clamav clamav-update clamtk (GNOME/GTK)
# dnf -y install clamav clamav-update klamav (KDE/QT)
# dnf -y install clamav clamav-update (headless)

Configuration

Afterwards (regardless of the distro) we will configure it as follows:

# nano /etc/freshclam.conf

In the file that opens, comment out the Example line and uncomment the DNSDatabaseInfo line. Make sure the two lines referencing DatabaseMirror are uncommented, and in the one containing XY replace that code with your country's official code, which you can get from this list (for Mexico, for example, the correct code is mx). Finally, go back to the top and uncomment the DatabaseDirectory and UpdateLogFile lines.
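As a reference, the relevant lines should end up looking roughly like this (a sketch based on a Fedora freshclam.conf; the paths are Fedora defaults, the Example line stays commented out, and mx stands in for your country code):

#Example
DatabaseDirectory /var/lib/clamav
UpdateLogFile /var/log/freshclam.log
DNSDatabaseInfo current.cvd.clamav.net
DatabaseMirror db.mx.clamav.net
DatabaseMirror database.clamav.net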
Save the configuration file and run in a console:

# freshclam

Console scanning

NOTE: Before running a scan with ClamAV, it is advisable to install all available updates and clean your system with programs like bleachbit/sweeper. Remember that bleachbit has a mode for root and one for your normal user, so you will have to run it twice.

Once the virus database has been updated, if you are on a headless server (or do not want to use the GUI), you can run:

$ man clamscan

to learn more about using clamav from the console. For a standard scan of the whole system, this command is recommended, for example:

# freshclam && echo "Time: $(date)" >> /var/log/clamscan.log && clamscan -r --bell -io / --log=/var/log/clamscan.log --enable-stats

This command will update the virus database, then run a scan that recursively checks the whole system for viruses/malware and rings a "bell" every time it detects something. It will log everything to a file called clamscan.log inside /var/log, with the date and time of the scan attached. It will also send usage statistics to the ClamAV engine and its developers to improve the antivirus's effectiveness. Note that you can replace / with /home, for example, if you only want to scan your personal folders (or any other path as the case requires).

GUI scanning

NOTE: Before running a scan with ClamAV, it is advisable to install all available updates and clean your system with programs like bleachbit/sweeper. Remember that bleachbit has a mode for root and one for your normal user, so you will have to run it twice.

If you installed a GUI, open that interface from your applications menu and, in its Settings, check all the available options.

Then open the update assistant and enable automatic updates.

False positives

Before doing anything with the infected files, remember that many of them may be false positives. For example, the chainstate files of the Bitcoin blockchain can contain virus signatures that people put there to test the blockchain's ability to store anything, but that does not mean those files are malware... In this specific example the files are not executable and only contain "the description" of the virus, not the virus itself. Another interesting case is files that contain malware but are not actually infecting anything (like some keygen you keep in an old Windows backup or whatever); these can be treated as potential threats, or left alone as long as you do not execute them, not even with wine.

It is therefore very important that you review the infected files one by one (and the threat they supposedly pose) to make sure whether they are a real threat or you are looking at a false positive. At this step your favorite search engine is the best tool you could have at hand.

Threat handling

To deal with the genuinely infected files you have two options: delete them or move them to quarantine. If you are using the GUI, it should give you options for both, but since I personally did not like it, I will explain the console procedure:

Deleting them is simple:

# rm -rf ruta/al/archivo

To move them "into quarantine", the ideal is to create a special user to own that folder, then move the files into it and apply the following permissions (let's assume your user is called antivirus and your folder is called cuarentena):

# cd /home/antivirus/cuarentena
# find -type d -exec chmod 000 {} \;
# find -type f -exec chmod 000 {} \;

This leaves the folder and the files sealed against reading/writing and execution, which is what we want when we suspect something contains malware.

3) Anti-rootkits

These are the programs chkrootkit and rkhunter. To install them on Fedora Linux, for example, run in a terminal:

# dnf -y install chkrootkit rkhunter

Each of these is run as root as follows:

# chkrootkit
# rkhunter

4) Adblock Plus and HTTPS Everywhere

Most malware nowadays arrives through the web browser; this is no surprise to anyone. Most web browsers include phishing (identity theft) protection and warn you about suspicious sites, but a little extra security never hurts... By installing these two extensions (available for Chrome/Firefox and others) you can protect yourself from plenty of the threats out there; ABP includes anti-malware and DoNotTrack protection, for example.



Meanwhile, HTTPS Everywhere forces encryption in your requests to the websites you visit, preventing phishing/hijacking attacks and the like through its SSL Observatory program.





5) Firewall


Make sure your system's firewall is active and properly configured. Block all ports and services that have no reason to be open. If you use IPTables, make sure you have rules set for both IPv4 and IPv6.
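For example, on a distro that ships firewalld, a quick sketch (the service name here is purely illustrative; remove whatever you do not actually run):

# firewall-cmd --list-all
# firewall-cmd --permanent --remove-service=samba
# firewall-cmd --reload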

Wrapping up...

Those were 5 basic tips for anti-malware protection on GNU/Linux... Obviously, depending on the specific case, other important security measures such as SSH key (RSA) authentication or not allowing sudo become relevant, but as far as malware is concerned these fronts are the most important ones. Whatever the platform, do not forget routine updates, staying away from suspicious sites, and not opening emails that plainly look like a trap. A culture of prevention is always the most important step in computer security.

Anything else you think we missed in this article?
GSoC updates on week before the mid evaluations
Well, this week was a very tiresome one for me, because I had to spend a lot of time trying to grasp the correct process for integrating the styles I have coded into the testing instance of AskFedora. So, here goes my update on the project.

Basically I have set up the testing repository for AskFedora in Openshift. The source for testing can be cloned from the git repository here

Basically, the first step is to have the Openshift client (rhc) installed on your PC. We can install rhc by typing the following command in the terminal:

$ sudo gem install rhc

Then the following command runs the setup, which will create a config file and an ssh key pair:

$ rhc setup

And then you need to start the ssh agent and add your ssh key:

$ ssh-agent
$ ssh-add ~/.ssh/id_rsa

When setting up the testing instance on Openshift you need to go to www.openshift.com and create a new Django-based application using the source from the above-mentioned git repository. And then you need to ssh into your application on Openshift by typing the command:

rhc ssh your_application_name

on your terminal if you are using Linux, or on the command line if you are using Windows. Then you will be connected to Openshift, and from there it is like working on another remote machine from your own computer. I have explored the file structure and directories on Openshift, and it really is as if you own another computer to work on (only through a terminal, of course, with no graphical user interface :P).
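As a side note, the same application can also be created from the command line instead of the web console; a sketch (the app name is illustrative, and the repository URL is the one mentioned above):

$ rhc app create askfedora python-2.7 --from-code <git repository URL>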

And then you need to go to the app-root/repo/wsgi/askbot_devel directory from the terminal, and there you need to type in the following commands:

python manage.py syncdb
python manage.py migrate
python manage.py collectstatic

It is quite a hassle to have to type in the above commands every time we deploy a new instance, and there is a way to avoid doing so. This is how I managed that:

Inside the .openshift -> action_hooks directory I created a deploy script with the following entries:

#!/bin/bash

echo "Executing 'python ${OPENSHIFT_REPO_DIR}wsgi/askbot_devel/manage.py syncdb --noinput'"
python "$OPENSHIFT_REPO_DIR"wsgi/askbot_devel/manage.py syncdb --noinput

echo "Executing 'python ${OPENSHIFT_REPO_DIR}wsgi/askbot_devel/manage.py migrate --noinput'"
python "$OPENSHIFT_REPO_DIR"wsgi/askbot_devel/manage.py migrate --noinput

echo "Executing 'python ${OPENSHIFT_REPO_DIR}wsgi/askbot_devel/manage.py collectstatic --noinput'"
python "$OPENSHIFT_REPO_DIR"wsgi/askbot_devel/manage.py collectstatic --noinput

so that on every deploy it executes this deploy script and gets the work done without me having to ssh in and type the commands.
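One detail worth checking (an assumption based on how OpenShift action hooks work, not something covered above): the deploy script has to be executable before it is pushed, or it will be skipped:

$ chmod +x .openshift/action_hooks/deploy
$ git add .openshift/action_hooks/deploy && git commit -m "make deploy hook executable" && git push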

Now the application will run on Openshift just fine.

My first naive approach towards integration :P

Okay, the first way I tried to integrate my styles with the test instance was quite naive, I should say. I sshed into the Openshift repository, explored the directory structure, and found out that when the dependencies are installed, the latest askbot version is fetched and installed into the directory app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages. Then I studied the structure of askbot as well and located where the template HTML files and the styles live. The HTML templates of askbot are inside the directory called "templates", and all the css, less, javascript, and image files are inside the "media" directory of askbot.

All the page templates in askbot extend the "base.html" file and then the "two_column_body.html" file. These pages include many other HTML files such as header.html, secondary_header.html, etc. Because of this structure it is very easy to identify the places we need to touch in order to make a change, so maintaining such an instance is quite easy thanks to the modularity of the code. I also found that most of askbot's styles are in the "style.css" file inside the media/style directory, which has an associated "style.less" file as well.

At first I tried to integrate my styles directly into the askbot copy located at app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages, which became quite a hectic process, as I had to work on Openshift using the nano editor through the terminal. A bit later I learned that I was following the wrong method of integrating my styles with askbot: whatever changes I make should be done in my local repo, then committed and pushed to Openshift to get the expected outcome. Well, that was a good lesson. Basically, that is why frameworks are there, right? Frameworks help us do things the easier way. So, what I need to do is find out how askbot's templates and styles can be overridden in my testing instance, so that I can simply work on my local repository and commit those changes to Openshift. :)
Fedora 22 - Gnome - do not disturb - is now an extension

Just a quick note.

I'd written earlier about setting the Gnome Shell session status to "busy". That post documented how it could be done from the terminal. We'd been discussing this on the Fedora Workstation mailing list, and one of the subscribers there, Norman, went ahead and wrote an extension! I tried it out and it works really well. You can get it here from the Gnome Shell extensions page.

It adds a button to the top panel that toggles the session status between "Active" and "Busy".

Gnome shell extension DND button screenshot

Cheers!

That heisenbug in touchpad KCM is fixed

The KDE touchpad configuration module supports both the libinput touchpad driver and the synaptics driver. Newer distro releases like Fedora 22 come with both the libinput and synaptics drivers installed, with the libinput driver chosen by default for touchpads. Some users wanted to use the synaptics driver and tweak all the options it exports using the touchpad KDE control module. To do so, simply uninstall the libinput driver (xorg-x11-drv-libinput), and the touchpad KCM will use the synaptics driver, making all the KCM options tweakable. Some of those users reported that after uninstalling the libinput driver but keeping the synaptics driver (xorg-x11-drv-synaptics), the touchpad KCM displayed the error message "No touchpad found" and no options were editable, as reported in this bug.

This wasn't easily reproducible on my system, though I have seen it once or twice. On a fresh Fedora 22 KDE spin installation, which comes with both the libinput and synaptics drivers, I was able to reproduce the issue by simply uninstalling the libinput driver, which helped to debug it. The XlibBackend class first checked for the presence of the X atom "libinput Tapping Enabled" to determine whether the libinput driver is active. In that case, XlibLibinputBackend was instantiated to handle the configuration; otherwise, it fell back to the synaptics driver and instantiated XlibSynapticsBackend.

The issue, it turns out, is that the X atom "libinput Tapping Enabled" is present even after the libinput driver is uninstalled! This was verified by checking the list of initialized atoms with a nimble tool, "xlsatoms", from the xorg-x11-utils package. With and without the libinput driver installed, the output of this command was something like:

$ xlsatoms | grep -i tap
316 libinput Tapping Enabled
$ dnf remove xorg-x11-drv-libinput
(logout/restart and login again for X to use synaptics driver)
$ xlsatoms | grep -i tap
313 libinput Tapping Enabled
342 synaptics Tap Action

This clearly shows that the libinput atom is present even when the driver is not installed. That caused the KCM code to try to instantiate the XlibLibinputBackend for a non-existent driver, which fails with the error message "No touchpad found". This seems to be a bug in Clutter, Mutter, and Gtk+, as found in this Fedora bug, 'touchpad not found'. Those toolkits inadvertently created the atom when the intention was only to check for its existence; I don't know whether the kcm_touchpad code was also creating this atom.

With that finding, the kcm_touchpad code was revised to first instantiate XlibLibinputBackend and check for failure; if it fails, we try to instantiate XlibSynapticsBackend. It is a small fix, yet it solves an issue that affected many users. The fix has been confirmed by some testers and is now pushed to plasma-desktop. Since the change adds a couple of new error messages, it is not available in the 5.3.2 release but will be available in 5.4.0.


What am I Reading?
I spend a fair amount of time just reading things that interest me. As outlined in [[http://doc.rix.si/org/fsem.html#sec-8][FSEM]], I use Gnus as my primary method to consume newsletters, blogs, and media. I thought it would be interesting to dig in to the things that I read about the most, since Gnus gives me really good tools to introspect that, through its [[http://whatthefuck.computer/blog/2014/12/03/gnus-adaptive-scoring/][adaptive scoring system]].

June 26, 2015

Improving free graphics driver (Mesa) performance on Fedora Linux

Fedora is a distro that chooses freedom above everything else. It supports everything that is 100% free and is not so friendly toward what is not. This tip is for those who (as in Fedora's philosophy) enjoy software freedom to the fullest and want to step away from the options that deprive them of their liberties. This time we will see how to get the best performance from your NVIDIA, ATI, or Intel GPU (or whichever you have) on Fedora GNU/Linux without having to install the manufacturers' proprietary drivers:

1.- Enable the RPM Fusion repositories

# dnf install --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

2.- Update

# dnf -y update

3.- Install utilities

# dnf -y install mesa-dri-drivers mesa-libGLU
# su -c 'dnf install libtxc_dxtn --enablerepo=rpmfusion-free-updates-testing'

3.1.- And on 64-bit, one more step...

# dnf -y install mesa-dri-drivers.i686 mesa-libGLU.i686
# dnf install libtxc_dxtn.i686 --enablerepo=rpmfusion-free-updates-testing

4.- Update again

# dnf -y update

Now reboot, and done! Full 3D support on your graphics card under Linux (using only the free drivers). Now, what changes will we notice? We will see better performance in desktop effects, along with other interesting things, like the fact that we can even run several games and other things that require 2D/3D acceleration without much trouble.
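One quick way to confirm the result after rebooting (assuming the glx-utils package, which provides glxinfo, is installed):

$ glxinfo | grep -E "direct rendering|OpenGL renderer"

If direct rendering says "Yes" and the renderer line names your GPU through a Mesa/Gallium driver, the free 3D stack is active.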