April 23, 2014

How Groups Are Handled in Modern-day DNF

Probably ever since comps.xml was invented, and putting aside its undeniable usefulness for installer users who could pick and choose groups of packages with a few clicks, the exact semantics of operations like yum group install "some group", yum group remove "some group", or even yum group upgrade "some group" have been either not useful (the pre-groups-as-objects era) or not clear (the post-groups-as-objects era). Our hope is that the upcoming implementation of groups-as-objects in DNF 0.5.0, with the accompanying documentation in the DNF manual as well as this blog post, will bring an end to that. I am not going to go into details of the different package membership types or of categories and environments. These concepts are flawed and their chances of surviving into the next few years' worth of Fedora releases are mediocre at best.

First and foremost: group operations exist to simplify the admin's life by letting them operate on arbitrarily large sets of packages with simple commands. While they do not interfere with other DNF commands, to get maximum value out of them, one should install the system using Anaconda with the DNF backend and then manage software using the group command as much as possible. The groups-as-objects DNF remembers which groups were installed and which packages were installed as their parts, so installing some packages manually with the install command hinders DNF's ability to deduce what is really meant to be installed as part of a group.

To show some examples:


dnf group install "some group"

looks at what packages “some group” contains. If they are A, B and C, and C is already installed, DNF installs A and B and remembers that “some group” is installed now. If the next day the admin decides they no longer need “some group”, then:


dnf group remove "some group"

removes A and B again but leaves C intact. In this case, DNF knows that C does not come from any one group but rather was installed at the user's explicit demand, or perhaps is a dependency of some other package from earlier. Either way, DNF won't remove C in this case. (Let's disclose right here that if an intervening transaction added X depending on B, then B and X both get removed now. The solver should be smarter than that, but a) we're not quite there yet and b) this is consistent with the rest of DNF's operation, where “last in wins”.) If an intervening operation added another B-containing group, then group remove "some group" wouldn't remove B either, keeping it installed for the other group.

But removal of groups is perhaps not so interesting. What is interesting is keeping the software installed. If, after some time, the distribution's release engineering decides that D is now part of “some group”, then


dnf group upgrade "some group"

adds D and upgrades the other packages of the group. Similarly, it can be decided that A is no longer in “some group”, then the upgrade would remove it. Keep calm and trust the comps.

Since DNF remembers what groups are installed and what packages were considered group members at a particular time, it actually does what one would expect when the admin manually removes B from a system where “some group” is installed and then runs upgrade on “some group”: DNF won't install it back. And that's one of the main goals we had when designing the new group-handling system: listen to the admin's wishes and maintain them through upgrades as much as other constraints will let us.
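The bookkeeping described above can be sketched as a toy model. This is a hypothetical illustration with invented names, not DNF's actual implementation:

```python
# Toy model of the groups-as-objects bookkeeping described above.
# Hypothetical sketch -- not DNF's real code or API.

class GroupTracker:
    def __init__(self):
        self.installed = {}   # package name -> reason: "user" or "group"
        self.groups = {}      # installed group -> set of packages it owns
        self.excluded = {}    # group -> members the admin removed by hand

    def install(self, pkg):
        # Manual install: recorded as the user's explicit wish.
        self.installed[pkg] = "user"

    def group_install(self, group, members):
        # The group only "owns" packages it actually had to install.
        owned = {p for p in members if p not in self.installed}
        for p in owned:
            self.installed[p] = "group"
        self.groups[group] = owned
        self.excluded[group] = set()

    def group_remove(self, group):
        owned = self.groups.pop(group, set())
        self.excluded.pop(group, None)
        # Packages still owned by another installed group are kept.
        still_owned = set().union(*self.groups.values()) if self.groups else set()
        for p in owned - still_owned:
            del self.installed[p]

    def manual_remove(self, group, pkg):
        # Admin removes a member by hand: remembered as an exclusion.
        if pkg in self.groups.get(group, set()):
            self.groups[group].discard(pkg)
            self.excluded[group].add(pkg)
            del self.installed[pkg]

    def group_upgrade(self, group, new_members):
        owned = self.groups[group]
        # New comps members are added, unless the admin excluded them.
        for p in set(new_members) - owned - self.excluded[group]:
            if p not in self.installed:
                self.installed[p] = "group"
                owned.add(p)
        # Members dropped from comps are removed.
        for p in owned - set(new_members):
            owned.discard(p)
            del self.installed[p]
```

Replaying the blog's scenario: with C installed by hand, `group_install("some group", ["A", "B", "C"])` installs only A and B, and a later `group_remove` takes A and B away while leaving C intact.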

Happy grouping.

Red Hat Summit

I’m just back from Red Hat Summit in San Francisco. The event was well organized and had a great turnout.

We showed up early to check out the room where we were giving our OpenLMI presentation and to do a last dry run. (Nervous? Me? Don’t be ridiculous!) The room was huge, holding well over 200 people. Fine, plenty of extra capacity.

Then things started getting strange. A half hour before the presentation was due to start, we had over a dozen people in the room. OK, let’s talk to them and see if they are in the right place… Yes, they were. And they had questions and gave us feedback on what they wanted to hear. Well, there goes the dry run. But this was better!

By the time the presentation started the room was packed – in fact, standing room only! The presentation went well. Stephen Gallagher and I played the roles of an experienced SysAdmin and a junior SysAdmin exploring Linux system management and how OpenLMI could make life better. Much of the presentation involved showing actual LMIShell commands to perform a wide range of system management tasks – concluding with “OK, if OpenLMI is so great, can it make me a sandwich??”. The answer may be a bit surprising; ask me about it sometime.

We finished the presentation early to have time for questions – and we got great questions and ran out of time before running out of questions. We then talked to people until we were chased out of the room for the next presentation.

We had to run to the demo pod for our OpenLMI demo. We had put together a demo showing LMIShell configuring storage and networks – by an amazing coincidence, the same tasks we had shown in the presentation… We had a steady stream of people stopping by, many with solid questions.

If anyone reading this was at Red Hat Summit, thank you! We welcome any questions, comments, or suggestions you might have. And please try OpenLMI!


April 22, 2014

Pimp my dnf: enabling the DeltaRPM and fastestmirror features
Please also note the remarks on the HowTos!

For some time now, Fedora's future default package manager dnf has supported deltarpms and also natively provides the fastestmirror feature, which in yum still had to be retrofitted via a plugin.

To enable these two features, simply edit dnf.conf with

su -c'nano /etc/dnf/dnf.conf'

and add the following two lines:

deltarpm=1
fastestmirror=1

With these two settings, updates and package installations should from now on proceed even more swiftly than they already do.
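The same edit can also be scripted. Here is a minimal sketch using Python's configparser, assuming the stock INI layout of /etc/dnf/dnf.conf with a [main] section; point `path` at the real file (as root) to use it in earnest:

```python
# Enable deltarpm and fastestmirror in a dnf.conf-style INI file.
# Sketch only -- assumes a [main] section as in the stock /etc/dnf/dnf.conf.
import configparser

def enable_dnf_features(path):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    # Idempotent: rerunning simply rewrites the same two values.
    cfg.set("main", "deltarpm", "1")
    cfg.set("main", "fastestmirror", "1")
    with open(path, "w") as f:
        cfg.write(f)
```

Existing settings in the file are preserved; only the two feature keys are added or updated.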

Inkscape FAQ: How do I crop in Inkscape?

One of the most frequently asked questions from Inkscape users is “How do I crop an image or object?”. Inkscape is primarily a vector graphics editor, so when someone asks this question, they could mean something slightly different from a traditional image crop. This FAQ explains a few of the techniques that people actually mean when they say they want to crop in Inkscape.

What do you mean when you say “crop”?

  • If you have a single path or object (like a star or a rectangle), and want to trim or crop that object down, then Boolean Operations are probably what you need.
  • If you are exporting your Inkscape document (SVG) to a bitmap (a PNG) with the “File > Export Bitmap” command, and want to export only a portion of your document, then changing the document size and just exporting the document is probably the solution for your needs.


Clipping

The Clipping feature is an easy and versatile way to crop vector or bitmap/raster objects in Inkscape. Let's start with our little monster friend that I downloaded from the Open Clip Art Library:


Our monster is actually a group of 21 objects (a mixture of Ellipses and Paths). When clipping, it is always easier to group the objects being clipped. Grouping objects is as simple as selecting 2 or more objects and choosing Object > Group.


Choose the Rectangle Tool from the Toolbar, and draw a Rectangle over our poor little monster’s face.


Select both the monster (the group) and the grey Rectangle (a rectangle object). After selecting both, choose Object > Clip > Set from the menu.


…and our monster is now cropped in a nice neat rectangle.


But what has happened to the rest of the monster? Well, one of the awesome things about the Clipping feature in Inkscape is that it is non-destructive. We can remove the clip at any time by selecting the clipped object, and then choosing Object > Clip > Release from the menu.


…and now our monster is back to normal! Well, the rectangle that was clipping him before is still there, but trust me, so is the monster.


But can you crop your image with something other than a rectangle? Yes! Clipping in Inkscape can be done with a wide range of clipping objects, including Text Objects…


Circle and Ellipse objects…


and Stars and Polygons.


Even a path can be used as a clipping object.


In fact, if you use a path as the clipping object, you can actually edit the clip path without having to release it. First select the clipped object, then choose the Node Editing Tool. Your clip path will be outlined in green, with the normal path editing nodes visible.


Now, you can edit this path, and change the area that is clipped / cropped.

Clipping is one feature in Inkscape that you will use time and time again. When working with imported bitmap / raster images, clipping is an easy way to crop without having to open up the GIMP. Additionally, when combined with blur, you can achieve some awesome effects like simple bubbles.

Boolean Operations

If you have a single path or object (like a star or a rectangle), and want to trim or crop that object down, then Boolean Operations are probably what you need. In Inkscape, you can use Boolean Operations to “crop” vector objects. This method works best if you have a single vector object that you want to trim. Note also that, unlike Clipping, this operation is destructive: you are deleting data from your SVG. This section covers just one boolean operation (intersection) to achieve a basic “crop”. There are many other boolean ops in Inkscape too.

Take the following landscape lineart that was vectorised with Inkscape:

It is a single filled-in path with no stroke:

To “crop” this object, simply draw a rectangle over it, then select both the rectangle and the landscape beneath:

And choose Path > Intersection from the menu. Your landscape should now be cropped:

Additionally, you can “crop” vectors into shapes other than rectangles. For example, draw a shape:

Then choose Path > Intersection.

Changing the Document size

If you are exporting your inkscape document (SVG) to a bitmap (a PNG) with the “File > Export Bitmap” command, and want to only export a portion of your document, then changing the document size, and just exporting the document is probably the solution for your needs.

Consider we have the following landscape drawn in Inkscape. Note that the black box around the landscape is the document boundary.

If we were to go File > Export Bitmap (changed to File > Export PNG in newer versions of Inkscape), and set the export area to Page, we would get something like this:


To change the Document Boundary to a better size, and “Crop” our output, first draw a rectangle over where you want to “crop” the document to.

Then, select the black box, go to File > Document Properties, and choose “Resize Page to Drawing or Selection”. The page boundary should resize to the size of the box. Note that you may need to check the box “Border on top of drawing” to see the page boundary. Also delete the black box.
Now, when you use File > Export Bitmap (changed to File > Export PNG in newer versions of Inkscape), and set the export area to Page, your output should be a “cropped” version of the entire document:


Five Things in Fedora This Week (2014-04-22)

The Join team spins up, Fedora Docs “beats”, Fedora Workstation (and an alternate view), Fedora Atomic, and other (automated) Fedora weekly data….

5tFTW
Fedora is a big project, and it’s hard to follow it all. This series highlights interesting happenings in five different areas every week. It isn’t comprehensive news coverage — just quick summaries with links to each. Here are the five things for April 22nd, 2014:

Making it easier to join Fedora

The Fedora Join SIG is our special interest group dedicated to improving the new-contributor experience. It’s been dormant for a while, but it’s back with a bang thanks to Ankur Sinha, Amita Sharma, Sarup Banskota, and others. A recent IRC meeting came up with a couple of immediate ideas, including a Fedora site inspired by What can I do for Mozilla?

If making Fedora more welcoming to contributors is interesting to you, join the mailing list and help keep up the momentum.

Fedora Docs “Beats”

Speaking of things you can do for Fedora… how about contributing expertise in your area for the Fedora 21 release notes? I know F21's October release target seems a long way off, but there's a lot to do and the summer is going to fly by. Docs team leader Pete Travis recently announced that F21 Beats are Open, noting:

If you’re new to Docs, Beat writing is a good way to get started. Simply choose a package, service, or functionality that interests you and do a little research to see how it will change in F21. You can check rawhide package changelogs, read the software changelogs in /usr/share/doc/$pkgname, scrape upstream mailing lists and commit logs, and reach out to package maintainers or developers.

As always, Pete also provides great, non-intimidating guidance for new docs contributors.

Fedora Workstation, and an alternate view — both part of Fedora!

Fedora Workstation developer Christian Schaller wrote a long blog post explaining some of the mindset and background behind the upcoming Fedora Workstation. If you care about Linux on the desktop, this is an interesting read, whether you’re a GNOME fan or not (and whether or not you agree). And if you do disagree, remember that that’s absolutely okay too. Longtime Fedora contributor Stephen Smoogen (a self-described “411 year old Linux administrator”) has another great blog post responding to one particular somewhat-contentious Fedora Workstation decision and why he’s not worried.

Our “Fedora.next” efforts are additive rather than restrictive, and are centered around our Friends Foundation; we may disagree on details, but we can all work together to advance free software as a project.

Fedora Atomic?

At last week’s Red Hat Summit, Red Hat announced a new initiative called Project Atomic. This isn’t really new software, or a new operating system or distribution — it’s best described as a pattern for putting together some pieces we already have. Not surprisingly, a lot of these pieces were (and are) worked on by Fedora contributors, including rpm-ostree, Docker, systemd, and a new orchestration building-block called GearD (you can read more about GearD on the OpenShift blog, and of course you can `yum install geard` to check it out).

So… (you may be thinking…) how does this fit into Fedora? Well, most of these are parts that we have already been talking about in the Fedora Cloud Working Group, and it may be that GearD provides one of the key missing pieces. We’ve filed a Fedora 21 change proposal for a specially-tailored Docker Host Image, and although we haven’t made any decisions yet, I think the Atomic patterns are very well aligned with what we want to do (and overlap with what we are already doing anyway), so that may end up being a sort of “Fedora Atomic” cloud spin.

One of the pieces of Project Atomic that the Cloud WG hadn’t looked at is Cockpit, a web-based server management GUI. The interesting thing is that this is one of the key features proposed for Fedora Server, and if we decide to include that part in our Docker cloud image, that will be a point of coherence across the products. (See the Project Atomic docs on what that might provide.)

Automatic Weekly Data!

If I had been paying attention and knew about http://thisweekinfedora.org/ before I started doing these posts, I might have named this something a bit less similar. Oh well! Names aside, though, these articles and that site are actually complementary. I pick out things to feature as they strike my attention, while This Week in Fedora presents automatically-collected statistics every Monday. You won’t really learn any news there, but you will find data on all sorts of contributor activities, from package builds to user creation to meetings logged. If you’re data-oriented, it’s an interesting way to get a feel for the technical pulse of the community.

Inkscape 0.91 Feature — Greyscale Colour Mode
The Inkscape developers are hard at work developing the new version of Inkscape (0.91). This post is part of a series that will outline some of the awesome new features that will be available when Inkscape 0.91 is released.

The upcoming release of Inkscape has a new feature that allows an artist to easily view their entire image in greyscale. This feature is useful for those times when you want to focus more on drawing layout and space weighting than on colour. This mode is separate from the previous Display Modes of Normal, Outline and No Filters, so you can also view your unfiltered drawing in greyscale.

To enable this mode in Inkscape 0.91, simply choose View > Colour Display Mode > Greyscale.

inkscapemodes


If you want to try out this new feature already, you will need to download a “nightly” or “development” version of Inkscape. Links to various builds of development versions of Inkscape are listed on the Inkscape downloads page.
Moving back to Chattanooga means more time for Fedora, Family, Art, Lure Coursing and Friends
Today marks the last day before we drive into the state of Tennessee. I continue to be a little concerned over the time commitments I will have to make in assisting my sister in her battle against breast cancer, but I am hopeful that our journey will begin there and that time for each of my favorite pastimes will grow with her continued success in this fight. Deanna is a fighter. I can only provide support to this battle-tough woman who faces the suffering and pain of others every day in her daily work, then deals with her own in her off time. I am often humbled by her fortitude.

I am looking at the work related to Docker more and more. After helping Luke to become a package maintainer for etcd and watching it be (rightly so!) swallowed into Red Hat production by the cloud infrastructure team, I am newly invigorated as a mentor. When my time opens up, I look forward to spending a good amount of my free time on the fourth floor of the Chattanooga library working on my own projects and assisting with the projects of others. I hope to see you there. I will try to spend my time prompting young minds to create using the Fedora Project's features as much as possible.

For leadership, Chattanooga could not be better served than it is right now by Mayor Andy Berke. It was my pleasure to meet with him during the beginning of Code for America's work in the Chattanooga area. I found him engaging and a true listener, his pride swelling with the initiation of great works for the community, not his own personal accomplishments. His active participation in community education and city growth will keep Chattanooga moving in the right direction. My only request to Andy: "Don't forget the arts," which are already working to raise the consciousness of the citizenry.

So we'll be there soon, less than 24 hours and we'll see what is next. Clean up, healing, community, and love. The greatest of these is love. I think . . . love in the form of a Linuxfest!
The Horizontal and Vertical Bezier technique

Here is a tutorial / article that outlines the “Horizontal and Vertical” Bezier curve technique. Basically, with a little practice, editing beziers becomes a lot easier when you align all your handles horizontally or vertically. While this tutorial talks specifically about Illustrator, the concept also works with Inkscape beziers.

In Inkscape, holding down the alt key is the simplest way to constrain your bezier handles to the horizontal or the vertical.


virt-convert command line has been reworked

One of the changes we made with virt-manager 1.0 was a large reworking of the virt-convert command line interface.

virt-convert started life as a tool for converting back and forth between different VM configuration formats. Originally it was just between vmx and virt-image(5), but it eventually grew ovf input support. However, for the common usage of trying to convert a vmx/ovf appliance into a libvirt guest, this involved an inconvenient two step process:

* Convert to virt-image(5) with virt-convert
* Convert to a running libvirt guest with virt-image(1)

Well, since virt-image didn't really have any users, it's planned for removal. So we took the opportunity to improve virt-convert in the process. Running virt-convert is now as simple as:

 virt-convert fedora18.ova  

or

 virt-convert centos6.tar.gz  

And we convert directly to libvirt XML and launch the guest. Standard libvirt options are allowed, like --connect for specifying the libvirt driver.

The tool hasn't been heavily used and there's definitely still a lot of details we are missing, so if you hit any issues please file a bug report.

(Long term, it sounds like gnome-boxes may grow a similar feature, as mentioned over here, so maybe virt-convert isn't long for this world, since there will likely be a command line interface for that as well.)
Pseudo-HDR editing

Usually I don't edit my landscape photos much, not because I don't know how but because I prefer them this way. Still, recently I felt the need for some more advanced processing for a picture; it enjoyed some success, so I decided to share the process. The tools used were UFRaw (in the form of the GIMP plugin), Luminance HDR and, of course, GIMP.

I passed by this scene in the nearby park at the "golden hour" and it looked photogenic, but I wanted to make it more dramatic. One can increase the drama in a landscape photo by using an HDR treatment, but not having the tripod with me (for a proper HDR image you need at least 3 shots of exactly the same scene at different exposures), I decided to go for pseudo-HDR. For this, I set the camera recording mode to RAW.


Note: the real purpose of an HDR image is to have details both in the shadows and in the highlights, beyond what the camera sensor can record; the improved drama is a side effect.

The RAW image was imported into GIMP via the UFRaw plugin 3 times: with normal, -1 and +1 exposure. If you really want, you can try doing the same starting from a single JPEG and simulate the exposure bracketing with color levels/curves, but I wouldn't advise it: if from a RAW you can recover some lost image details, in a JPEG they are gone forever.


The result is 3 JPEG images, one under-exposed, one exposed properly and one over-exposed, which are to be combined into an HDR. For more drama, you can bracket with more than one step.


I imported the JPEGs into Luminance HDR and set their exposures manually to -1, 0 and +1 (or whatever values you used for RAW development). Then just press "Next" a few times; there is no need to adjust parameters or align the images (they were obtained from the same source).


Now we have a High Dynamic Range image, which can't be used or viewed as-is on a normal computer display; it has to be converted back to Low Dynamic Range, optimized for what we want from it (details in shadows and/or highlights, drama, whatever).


Time to pick one of the presets in the right column, the one you think is best for your case.


Then I adjusted the color levels a bit (if you prefer, the levels can be adjusted later with GIMP or any other image editing app).


Now the image can be exported as a JPEG benefiting from the HDR/pseudo-HDR treatment. You can leave it as-is if you like.


However, I opened it again with GIMP for more refinement: sharpening and color curves adjustment, to make the colors warmer. This is my end result.


New-DJBDNS 1.06

Hello..!!!

I’m extremely happy to announce that release 1.06 of New-DJBDNS is now available for general usage. New updates are pushed to the Fedora repositories and shall soon be available via the stable channels. It’s a landmark 10th release of the project. :)

        See -> http://pjp.dgplug.org/ndjbdns/

A major highlight of this release is a couple of security fixes for potential Denial of Service (DoS) flaws. One is triggered by a subtle hash collision attack, while the other is a result of excessive read(2) calls. It is highly recommended to upgrade your set-up to the 1.06 release. The CVE requests for these vulnerabilities were, however, rejected on non-technical grounds.
See:
        -> http://www.openwall.com/lists/oss-security/2014/02/10/4
        -> http://www.openwall.com/lists/oss-security/2014/02/17/3

Apart from these issues, the 1.06 release fixes an important time zone bug (discussed -> here) to account for Daylight Saving Time (DST) and introduces new command line options to read from a non-default configuration file. This will help to run multiple instances of the servers with different configuration parameters. Thanks to Francisco M Biete for writing a patch to introduce the new options and to Don Ky for reporting the time zone issue.

Another major highlight of this release is the excellent documentation contributed by Satya K. It was the last pending packaging goal set in the early days, now met! :)

        See -> http://pjp.dgplug.org/ndjbdns/document.html

If you find a bug or spot anything amiss, please let me know and I’ll fix it. With the last of the initial goals met, this release truly marks an important milestone in the evolution of New-DJBDNS. Many people have contributed, in various ways, to this progress and the growth of New-DJBDNS. I sincerely thank them all for the constant support and encouragement they have offered me. It’s bliss!

Now, it’s time to set new achievable goals. It’s time to define the new possibilities. One of the long-standing inherent drawbacks of New-DJBDNS is its inability to communicate over IPv6. In the second spell of its development, I plan to rid New-DJBDNS of this very inability. Apart from this super goal, if you have suggestions, feature requests or patches that you’d like to see merged in New-DJBDNS, I’m all ears; please feel free to write to me.

Thank you! :)


Why consensus-based decision making is better for free and open source software projects
Many of us use consensus-based decision making in free and open source software projects, such as Apache’s lazy consensus model, but most of us carry out, or even ultimately codify, the final step as a majority-based voting process. In […]
Launch secure LXC containers on Fedora 20 using SELinux and sVirt

Getting started with LXC is a bit awkward, and I’ve assembled this guide for anyone who wants to begin experimenting with LXC containers in Fedora 20. As an added benefit, you can follow almost every step shown here when creating LXC containers on Red Hat Enterprise Linux 7 Beta (which is based on Fedora 19).

You’ll need a physical machine or a VM running Fedora 20 to get started. (You could put a container in a container, but things get a little dicey with that setup. Let’s just avoid talking about nested containers for now. No, really, I shouldn’t have even brought it up. Sorry about that.)

Prep Work

Start by updating all packages to the latest versions available:

yum -y upgrade

Verify that SELinux is in enforcing mode by running getenforce. If you see Disabled or Permissive, get SELinux into enforcing mode with a quick configuration change:

sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config

I recommend installing setroubleshoot-server to make it easier to find the root cause of AVC denials:

yum -y install setroubleshoot-server

Reboot now. This will ensure that SELinux comes up in enforcing mode (verify that with getenforce after reboot) and it ensures that auditd starts up sedispatch (for setroubleshoot).

Install management libraries and utilities

Let’s grab libvirt along with LXC support and a basic NAT networking configuration.

yum -y install libvirt-daemon-lxc libvirt-daemon-config-network

Launch libvirtd via systemd and ensure that it always comes up on boot. This step will also adjust firewalld for your containers and ensure that dnsmasq is serving up IP addresses via DHCP on your default NAT network.

systemctl start libvirtd.service
systemctl enable libvirtd.service

Bootstrap our container

Installing packages into the container’s filesystem will take some time.

yum -y --installroot=/var/lib/libvirt/filesystems/fedora20 --releasever=20 --nogpg install systemd passwd yum fedora-release vim-minimal openssh-server procps-ng

This step fills in the filesystem with the necessary packages to run a Fedora 20 container. We now need to tell libvirt about the container we’ve just created.

virt-install --connect lxc:// --name fedora20 --ram 512 --filesystem /var/lib/libvirt/filesystems/fedora20/,/

At this point, libvirt will know enough about the container to start it and you’ll be connected to the console of the container! We need to adjust some configuration files within the container to use it properly. Detach from the console with CTRL-].

Let’s stop the container so we can make some adjustments.

virsh -c lxc:// shutdown fedora20

Get the container ready for production

Hop into your container and set a root password.

chroot /var/lib/libvirt/filesystems/fedora20 /bin/passwd root

We will be logging in as root via the console occasionally and we need to allow that access.

echo "pts/0" >> /var/lib/libvirt/filesystems/fedora20/etc/securetty

Since we will be using our NAT network with our auto-configured dnsmasq server (thanks to libvirt), we can configure a simple DHCP setup for eth0:

cat << EOF > /var/lib/libvirt/filesystems/fedora20/etc/sysconfig/network
NETWORKING=yes
EOF
cat << EOF > /var/lib/libvirt/filesystems/fedora20/etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth0
EOF

Using ssh makes the container a lot easier to manage, so let’s ensure that it starts when the container boots. (You could do this via systemctl after logging in at the console, but I’m lazy.)

chroot /var/lib/libvirt/filesystems/fedora20/
ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/
exit

Launch!

Cross your fingers and launch the container.

virsh -c lxc:// start --console fedora20

You’ll be attached to the console during boot, but don’t worry: hold down CTRL-] to get back to your host prompt. Check the dnsmasq leases to find your container’s IP address, and you can log in as root over ssh.

cat /var/lib/libvirt/dnsmasq/default.leases
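If you'd rather not eyeball the leases file, its lines are whitespace-separated: expiry epoch, MAC address, IP address, hostname, client-id. A small sketch (a hypothetical helper, not part of libvirt) to map hostnames to addresses:

```python
# Parse a dnsmasq leases file. Each lease line looks like:
#   <expiry-epoch> <mac> <ip> <hostname> <client-id>
def parse_leases(text):
    leases = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4:
            _expiry, _mac, ip, hostname = fields[:4]
            leases[hostname] = ip
    return leases
```

For example, `parse_leases(open("/var/lib/libvirt/dnsmasq/default.leases").read()).get("fedora20")` would return the container's address, assuming the container's hostname shows up in the lease.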

Security

After logging into your container via ssh, check the process labels within the container:

# ps aufxZ
LABEL                           USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 1 0.0  1.3 47444 3444 ?      Ss   03:18   0:00 /sbin/init
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 18 0.0  2.0 43016 5368 ?     Ss   03:18   0:00 /usr/lib/systemd/systemd-journald
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 38 0.4  7.8 223456 20680 ?   Ssl  03:18   0:00 /usr/bin/python -Es /usr/sbin/firewalld -
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 40 0.0  0.7 26504 2084 ?     Ss   03:18   0:00 /usr/sbin/smartd -n -q never
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 41 0.0  0.4 19268 1252 ?     Ss   03:18   0:00 /usr/sbin/irqbalance --foreground
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 44 0.0  0.6 34696 1636 ?     Ss   03:18   0:00 /usr/lib/systemd/systemd-logind
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 46 0.0  1.8 267500 4832 ?    Ssl  03:18   0:00 /sbin/rsyslogd -n
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 dbus 47 0.0  0.6 26708 1680 ?     Ss   03:18   0:00 /bin/dbus-daemon --system --address=syste
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 rpc 54 0.0  0.5 41992 1344 ?      Ss   03:18   0:00 /sbin/rpcbind -w
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 55 0.0  0.3 25936 924 ?      Ss   03:18   0:00 /usr/sbin/atd -f
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 56 0.0  0.5 22728 1488 ?     Ss   03:18   0:00 /usr/sbin/crond -n
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 60 0.0  0.2 6412 784 pts/0   Ss+  03:18   0:00 /sbin/agetty --noclear -s console 115200 
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 74 0.0  3.2 339808 8456 ?    Ssl  03:18   0:00 /usr/sbin/NetworkManager --no-daemon
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 394 0.0  5.9 102356 15708 ?  S    03:18   0:00  \_ /sbin/dhclient -d -sf /usr/libexec/nm
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 polkitd 83 0.0  4.4 514792 11548 ? Ssl 03:18   0:00 /usr/lib/polkit-1/polkitd --no-debug
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 rpcuser 110 0.0  0.6 46564 1824 ? Ss   03:18   0:00 /sbin/rpc.statd
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 111 0.0  1.3 82980 3620 ?    Ss   03:18   0:00 /usr/sbin/sshd -D
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 409 0.0  1.9 131576 5084 ?   Ss   03:18   0:00  \_ sshd: root@pts/1
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 413 0.0  0.9 115872 2592 pts/1 Ss 03:18   0:00      \_ -bash
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 438 0.0  0.5 123352 1344 pts/1 R+ 03:19   0:00          \_ ps aufxZ
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 411 0.0  0.8 44376 2252 ?    Ss   03:18   0:00 /usr/lib/systemd/systemd --user
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 412 0.0  0.5 66828 1328 ?    S    03:18   0:00  \_ (sd-pam)
system_u:system_r:virtd_lxc_t:s0-s0:c0.c1023 root 436 0.0  0.4 21980 1144 ?    Ss   03:19   0:00 /usr/lib/systemd/systemd-hostnamed

You’ll notice something interesting if you run getenforce now within the container — SELinux is disabled. Actually, it’s not really disabled. The processing of SELinux policy is done on the host. The container isn’t able to see what’s going on outside of its own files and processes. The libvirt documentation for LXC hints at the importance of this isolation:

A suitably configured UID/GID mapping is a pre-requisite to making containers secure, in the absence of sVirt confinement.

In the absence of the “user” namespace being used, containers cannot be considered secure against exploits of the host OS. The sVirt SELinux driver provides a way to secure containers even when the “user” namespace is not used. The cost is that writing a policy to allow execution of arbitrary OS is not practical. The SELinux sVirt policy is typically tailored to work with a simpler application confinement use case, as provided by the “libvirt-sandbox” project.

This leads to something really critical to understand:

Containers don’t contain

Dan Walsh has a great post that goes into the need for sVirt and the protections it can provide when you need to be insulated from potentially dangerous virtual machines or containers. If a user is root inside a container, they’re root on the host as well. (There’s an exception: UID namespaces. But let’s not talk about that now. Oh great, first it was nested containers and now I brought up UID namespaces. Sorry again.)

Dan’s talk about securing containers hasn’t popped up on the Red Hat Summit presentations page quite yet but here are some notes that I took and then highlighted:

  • Containers don’t contain. The kernel doesn’t know about containers. Containers simply use kernel subsystems to carve up namespaces for applications.
  • Containers on Linux aren’t complete. Don’t compare directly to Solaris zones yet.
  • Running containers without Mandatory Access Control (MAC) systems like SELinux or AppArmor opens the door for full system compromise via untrusted applications and users within containers.

Using MAC gives you one extra barrier to keep a malicious container from getting higher levels of access to the underlying host. There’s always a chance that a kernel exploit could bypass MAC but it certainly raises the level of difficulty for an attacker and allows server operators extra time to react to alerts.

Launch secure LXC containers on Fedora 20 using SELinux and sVirt is a post from: Major Hayden's blog.


April 21, 2014

MySQL, Fedora 20, and Devstack

Once again, the fast-moving efforts of OpenStack and Fedora have diverged enough that devstack did not run for me on Fedora 20. While this is something to file a bug about, I like to understand an issue and have a fix in place before I report it. Here are my notes.

Running on an F20 cloud image. I did a yum groupinstall "Development Tools" before git cloning devstack, and then used a simple localrc with

The failure looks like this:

+ recreate_database_mysql keystone utf8
+ local db=keystone
+ local charset=utf8
+ mysql -ufedora -pkeystone -h127.0.0.1 -e 'DROP DATABASE IF EXISTS keystone;'
ERROR 1045 (28000): Access denied for user 'fedora'@'localhost' (using password: YES)

Now, I run with the following value in localrc:

MYSQL_USER=fedora

So let me wipe the DB and try with the defaults.

The database is stored in:

/var/lib/mysql/

Configuration is stored in /etc/my.cnf, which is really just a pointer to /etc/my.cnf.d/

Inside there:
client.cnf mysql-clients.cnf server.cnf

The only interesting things in server.cnf are:

log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

So the configuration looks mainly to be a skeleton.

Looking in /var/lib/mysql/

aria_log.00000001 ibdata1 ib_logfile1 performance_schema
aria_log_control ib_logfile0 mysql test

sudo ls /var/lib/mysql/mysql/

Doesn’t show anything that looks like configuration options.

OK…wipe:

sudo yum -y erase `rpmquery -a | grep maria`
sudo rm /etc/my.cnf.rpmsave
sudo rm -rf /var/lib/mysql/

now running ./stack gets further:

+ /opt/stack/keystone/bin/keystone-manage db_sync

OK, that looks like a problem with keystone not pip-installing the SQLAlchemy driver for MySQL:

sudo pip install MySQL-python

_mysql.c:44:23: fatal error: my_config.h: No such file or directory

#include "my_config.h"

We need the MySQL development files; this is possibly an issue due to the switchover to MariaDB:

sudo yum -y install mysql-devel

Installs
mariadb-devel.x86_64 1:5.5.36-1.fc20

Trying the RPM version we get:

sudo yum -y install MySQL-python

Seems to have done it…

So it looks like the problem was an artifact of how I ran devstack: running the mysql module in devstack is not enough to install the Python and development packages for MySQL.

Why consensus-decision making is better for open source projects

Many of us use consensus-style decision making in our free/open source projects, such as Apache’s lazy consensus model, but we often have a practice, or even a governance rule, of letting things end up in a majority-wins voting process.

In a majority-wins voting model, the dynamic is one where the dissenters are marginalized — the majority has to put the dissenting minority in the position of being a “loser” in a vote.

In a consensus-decision model with blocking, you have a situation where it becomes the duty of the entire group to take care of the dissenters’ concerns.

In general, consensus decisions force the group to focus on a compromise around the best-possible solution. When people are in the position of being a winner or a loser, the effect is to make people solidify around one of two extremes that may not represent the best possible solution.

Often achieving consensus only requires clarification of a misunderstanding or minor adjustments to the original proposal. This occurs even where no one has blocked, but the appearance of -0 (or a stand aside) will also make it clear that the original proposal might need more thought — getting a -0 from a leading thinker in a group spurs others to wonder if maybe there is more that can be done to make the proposal fully supported.

There are a lot more details to how things work in practice in a consensus-decision model, which I covered fairly well in the Appendix to the CentOS Project Board governance, quoted here:

In the CentOS Project a discussion toward a decision follows this process:

  1. A proposal is put forth and a check for consensus is made.
    1. Consensus is signified through a +1 vote.
  2. A check is made for any dissent on the proposal.
    1. Reservations? State reservation, sometimes with a ‘-1’ signifier
      1. Reservations about the proposal are worked through, seeking consensus to resolve the reservations.
      2. A reservation is not a vote against the proposal, but may turn into a vote against if unresolved. It is often expressed with an initial -1 vote to indicate reservations and concerns. This indicates there is still discussion to be had.
    2. Stand aside? No comment, or state concerns without a -1 reservation; sometimes the ‘-0’ signifier is used.
      1. This option allows a member to have issues with the proposal without choosing to block the proposal, by instead standing aside with a +/-0 vote.
      2. The stated concerns may influence other people to have or release reservations.
    3. Block? Vote ‘-1’ with reasons for the block.
      1. This is a complete block on a proposal, refusing to let it pass. A block is a -1 vote and must be accompanied with substantive arguments that are rooted in the merit criteria of the Project – protecting the community, the upstream, technical reasons, and so forth.

Block (-1) votes used as a veto are typically used only when consensus cannot otherwise be met, and are effectively a veto that any sitting Board member can utilize with sufficient substantiation.
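As a toy illustration (not project tooling, just the mechanics described above), the decision rule reduces to: any substantiated -1 blocks the proposal, while +1 and -0 stand-asides let it pass:

```shell
# Toy sketch of the decision rule: a -1 (block) stops the proposal;
# +1 (consensus) and -0 (stand aside) do not.
tally() {
    for vote in "$@"; do
        if [ "$vote" = "-1" ]; then
            echo "blocked"
            return
        fi
    done
    echo "passed"
}

tally +1 +1 -0    # prints "passed"
tally +1 -1       # prints "blocked"
```

In practice, of course, a block comes with substantive reasons and triggers further discussion rather than a mechanical outcome.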

In writing the original section of The Open Source Way, I didn’t go so far as to recommend abandoning the majority-wins voting method; instead I said, “Seek consensus — use voting as a last resort.” That (unfinished) section is now going to get a rewrite where I’ll definitely come down against majority-wins and write out more of the why.

I partially owe my improved understanding to using the consensus model in a business collective where I’m a partner, Santa Cruz Pedicab. Working with the model in the physical world made me intensely aware of the human impact of majority-wins by comparison, and convinced me that consensus is really the backbone of a welcoming community.

Installing Fedora 20 for the Start of Academic Activities

As is now well known, for the fourth consecutive year the Instituto Manuel Seoane Corrales in the San Juan de Lurigancho district of Lima, Peru, installed the Fedora 20 operating system as part of its preparation for the 2014-I academic term. This system will let us run various applications for teaching courses on programming fundamentals, web development, web management and administration, and database modeling and administration, among others. The installation process began with burning DVDs and creating live USBs; each student had the opportunity to install the system on a machine, covering more than 25 PCs.
It is worth mentioning that this laboratory will also host several workshops this Saturday the 26th at FLISOL 2014, which we will host for the third time.
There was no shortage of questions, which I pass along to you in case you can answer them:
Why use Fedora and not another distro?
What advantage is there in getting to know Fedora?
What is Fedora’s relationship with Red Hat?
What can I not do with Fedora?
Is virtualization possible with Fedora?
Can I use Fedora as a web server and for other services?

I answered the questions myself at the time, but I would also like you to be able to offer perhaps more accurate answers, since we are constantly meeting new students and the questions are always very similar.

[Photos from the installation sessions, taken 21 April 2014]


Where to go for real-time Inkscape help!

There are plenty of places around where you can get your Inkscape questions answered, including the inkscape forum, inkscape answers on launchpad, and the inkscape section on the graphic design stackexchange.

But if you need an answer to a question in real time, the official #inkscape user channel on irc.freenode.net is the best place to go.

Never used IRC before? No problem: the new Inkscape website now has a web app that lets you connect directly through your web browser to all the knowledgeable folks in the #inkscape chat!

Inkscape 0.91 Feature — Measurement tool
The Inkscape developers are hard at work getting ready for the release of the new version of Inkscape (0.91). This post is part of a series that will outline some of the awesome new features that will be available when Inkscape 0.91 is released.

The Measurement tool is a new feature that lets artists measure the elements in their drawings. To use it, simply choose the tool, click anywhere on the drawing, and drag the ruler out. The tool live-updates with measurements of lengths and angles as you pass over objects in your drawing.

ruler

If you want to try out this new feature already, you will need to download a “nightly” or “development” version of Inkscape. Links to various development builds are listed on the Inkscape downloads page.
The Forgotten “F”: A Tale of Fedora’s Foundations

Lately, I’ve been thinking a lot about Fedora’s Foundations: “Freedom, Friends, Features, First”, particularly in relation to some very sticky questions about where certain things fit (such as third-party repositories, free and non-free web services, etc.)

Many of these discussions get hung up on wildly different interpretations of what the “Freedom” Foundation means. First, I’ll reproduce the exact text of the “Freedom” Foundation:

“Freedom represents dedication to free software and content. We believe that advancing software and content freedom is a central goal for the Fedora Project, and that we should accomplish that goal through the use of the software and content we promote. By including free alternatives to proprietary code and content, we can improve the overall state of free and open source software and content, and limit the effects of proprietary or patent encumbered code on the Project. Sometimes this goal prevents us from taking the easy way out by including proprietary or patent encumbered software in Fedora, or using those kinds of products in our other project work. But by concentrating on the free software and content we provide and promote, the end result is that we are able to provide: releases that are predictable and 100% legally redistributable for everyone; innovation in free and open source software that can equal or exceed closed source or proprietary solutions; and, a completely free project that anyone can emulate or copy in whole or in part for their own purposes.”

The language in this Foundation is sometimes dangerously unclear. For example, it pretty much explicitly forbids the use of non-free components in the creation of Fedora (sorry, folks: you can’t use Photoshop to create your package icon!). At the same time, we regularly allow the packaging of software that can interoperate with non-free software; we allow Pidgin and other IM clients to talk to Google and AOL, we allow email clients to connect to Microsoft Exchange, and so on. The real problem is that every time a question comes up against the Freedom Foundation, Fedora contributors diverge into two armed camps: the hard-liners who believe that Fedora should never, under any circumstances, work (interoperate) with proprietary services, and the folks who believe that such a hard-line approach is a path to irrelevance.

To make things clear: I’m personally closer to the second camp than the first. In fact, in keeping with the subject of this post, I’d like to suggest a fifth Foundation, one to ultimately supersede all the rest: “Functional”. Here’s a straw-man phrasing of this proposal:

Functional means that the Fedora community recognizes this to be the ultimate truth: the purpose of an operating system is to enable its users to accomplish the set of tasks they need to perform.

With this in place, the Freedom Foundation would admittedly be watered down slightly. “Freedom” would essentially be reduced to: the tools to reproduce the Fedora build environment, and all packages (source and binary) shipped from that build system, must use a compatible open-source license and not be patent-encumbered. Fedora would strive to always provide and promote open-source alternatives to existing (or emerging) proprietary technologies, but would accept that attracting users means not telling them that they must change all of their tools.

The “Functional” Foundation should be placed above the other four and be the goal-post that we measure decisions against: “If we make this change, are we reducing our users’ ability to work with the software they want/need to?”. Any time the answer to that question would be “yes”, we have to recognize that this translates into lost users (or at the very least, users that are working around our intentions).

Now, let me be further clear on this: I am not in any way advocating the use of closed-source software or services. I am not suggesting that we start carrying patent-encumbered software. I think it is absolutely the mission of Fedora to show people that FOSS is the better long-term solution. However, in my experience a person who is exposed to open source and allowed to migrate in their own time is one who is more likely to become a lifelong supporter. A person who is told “if you switch to Fedora, you must stop using Application X” is a person who is not running Fedora.


Sharing of Ownership to encourage collaboration in the Community
Mozilla Hindi Community Meet 2014 was a meet-up with a difference, in the sense that it brought together the young and the old, the active and the not-so-active, those with experience and skills, and those with great optimism, enthusiasm, and know-how. The chemistry between the two 'generations' and the diverse teams was quite magical, and communication near-perfect. That is why it was a meet-up that promised little but delivered quite a bit; every participant went back with a sense of enthusiasm, energy, and expectation. We did some solid, concrete hands-on work and promised to continue the conversation. The Mozilla Hindi Community Meet was organized on March 22-23, 2014 in Pune, India. The two-day event was organized by Mozilla and hosted by Red Hat at its Pune office. Contributors to different Mozilla projects from different parts of India, all working on the Hindi language, participated in the meet: Ravikant, Vibhas Chandra Verma, Ashish Namdev, Umesh Agarwal, Shahid Farooqui, Guntupalli Karunakar, Meghraj Suthar, Suraj Kawade, Sangeeta Kumari, Chandan Kumar, Rajesh Ranjan, Aniket Deshpande, Himanshu Anand, and Ankit Gadgil.

Agenda of Awesome

The event started on the morning of 22 March 2014. As a coordinator of the Mozilla Hindi team, I welcomed all the participants. I gave a brief summary of all Mozilla projects and explained the need for, and agenda of, the meetup. I also discussed the translation of Mozilla products, the various tools, and the related linguistic resources necessary for localization work in Hindi, and explained why I moved translation from the hg VCS to Pootle and how that hugely helped grow the contributor base.

Hindi, lingua franca and its structure

Ravikant, a fellow of the Centre for the Study of Developing Societies and a great open source enthusiast and theorist, talked in detail about the basics of translation, its importance, and its socio-cultural history. He said that in India people generally take translation to be non-creative, boring, second-rate work; we must respect our translators and enjoy the creativity involved in the act of translation. He added that historically, dominant languages have been used to create a gap between peoples. He then gave elaborate examples to show how and why translation is an important exercise, and how volunteer open source communities are engaged in a major activity in an age of rapid transitions. Vibhas Chandra Verma, a senior lecturer at the University of Delhi, presented a talk on 'Good Hindi'. He quoted the great saint-poet Kabir to say that language is like free-flowing water, which gathers no mud. Answering a question about the difference between Hindi and Urdu, he said that in common parlance there is almost no difference between them apart from their different scripts; it is difficult to tell Hindi and Urdu sentences and grammars apart. He also cited several examples from Sanskrit, Hindi, and Urdu to demonstrate the similarities and differences between Sanskrit on the one hand and Hindi and Urdu on the other.
 

Style is the Soul - Hindi Style Guide Reviewed

One major agenda item of the meet was to review the Computer Translation Style and Convention Guide for Hindi, prepared by the larger Hindi community under the FUEL Project. The participants discussed the positives and negatives of the guide in detail. After discussing the broad parameters of a good guide, Ravikant and Vibhas Chandra Verma took responsibility for editing the document. On the basis of the guidelines decided upon, we will soon have it ready for a final review by the community, and then we will be ready to put it in the public domain. The style guide is written in English so that any quality engineer can also access it and work on quality assessment of the translation.

And quite flows the Fennec - Feel of Fennec is revitalized

The whole team reviewed the major GUI strings of Firefox for Android in Hindi. Fennec is going to be released in all Indian languages, including Hindi. Years before, the Firefox browser in Hindi was reviewed at a workshop organized by Sarai-CSDS in Delhi. A review meet of this kind is essential before any major release, and the Hindi Mozilla review team felt that the review process for Fennec was satisfactory.

Typing made easy - type क, ख, ग in FirefoxOS

Major progress happened in the area of the Firefox OS Hindi Devanagari keyboard. There are no Indic keyboards (except Bengali) for Firefox OS. On the second day of the meet, Karunakar, Secretary of the IndLinux Group, prepared a Hindi InScript keyboard for Firefox OS and added it to GitHub; this is the pull request for the same. Building on the Hindi work, Aniket added the Marathi InScript keyboard in no time, and it is here. We are thankful to Karunakar and Aniket, and hope this will be of great use for Firefox OS and create momentum for Firefox OS keyboards for Indian languages. The Firefox OS Keon is actually very attractive, and it was shown around at the meetup. Many attendees had not seen or heard of the Keon! So the Mozilla Hindi Community requests that Mozilla send Keons to its contributors, and hopes they will listen to our request.

Share Ownership - Help your language grow

The most important agenda item of the meet was to share ownership and choose one person responsible for each major work area. As coordinator of the Mozilla Hindi Community, I proposed the idea of a division of labour and sharing of ownership. I started a thread on the community list and emphasized the need to share not just work and contribution but also ownership. At the meet it was realized that sharing ownership gives new people a sense of responsibility, helps enlarge the contributor base, and enhances collaboration across a bigger network. Karunakar took responsibility for managing the Hindi keyboard for Firefox OS. Ravikant and Vibhas happily agreed to mentor the community and help translate problematic strings whenever needed. And here is the result of the division of labour.

Localization Training and SUMO/WebMaker

On the second day, Chandan Kumar worked with all the attendees and gave them the training necessary to use all the available resources efficiently. He taught by example and worked with the volunteers to solve their problems and questions related to localization. Ankit Gadgil spoke about Webmaker and its localization. Ravikant, who encountered Webmaker for the first time, felt that it was going to change the web as we knew it. Ashish Namdev discussed issues in SUMO translation, and the Hindi community agreed to translate the 100 most frequently used SUMO articles into Hindi. The SUMO team should be merged with Pootle to facilitate convenience and consistency of translation. The community also stressed the need to hold sprints and agreed to use them in future to overcome any lags. The etherpad collects several little-known links to dictionaries available in the public domain.

Aam aadmi ka browser, aam aadmi se door kyon! (Why is the common people's browser, in their languages, not so popular among the common people?)

It is sad that although we work hard to create a better product in Hindi, the market and download numbers are not convincing compared to the vast population of Hindi speakers. A brainstorming session was organized on the second day of the meet. Several suggestions came up, and we, the Mozilla Hindi community, decided we will try our best to act on them. Ravikant and Vibhas Chandra Verma said that they will also support the cause in their own ways.

Khaana-Peena

On the first day we had dinner at TJ's Brew Works, Pune, very near the venue of the meet. The khana-peena was awesome, as was the ambience of the restaurant. The awesome working lunch, pizza, and snacks for both days were arranged by the host of the event, Red Hat, whose whole facilities team was very helpful. We are thankful to Red Hat and Mozilla for the great food that ignited our thought. I am thankful to all the active volunteers who attended the meetup, to Mozilla for sponsoring the event, and to Red Hat for hosting it. The whole active community is thankful to Ravikant and Vibhas Chandra Verma, who have readily helped the community whenever called upon. Chandan's help was very important in training the localizers. Last but not least, Shahid Farooqui took responsibility for filing bugs for the budget and related affairs, and was key in coordinating the event. Thanks a lot to Shahid for shouldering this additional responsibility. We are Hindi, we are India, we are Mozilla, we are awesome! Ref links: Event Page, Event Pix, Team Wiki Page, Etherpad, Slides
The Long Gray Line

“The Long Gray Line” is a film about a man, fresh off the boat from Ireland in 1898, who becomes a long-term fixture at West Point. I had heard of the movie for years but never watched it before. My main impetus in watching it was to see what the Academy looked like before they built Eisenhower and MacArthur Barracks, Washington Hall, and the rest of the “new” buildings that made up so much of my experience there.

Funny how many scenes were shot with active Cadets playing extras. They didn’t even need period costumes; they just showed up in their issued uniforms. The officer and NCO uniforms changed visibly over the years, but not the Cadet uniforms.

When the Lusitania sank and the trumpet sounded, the question was not “are we going to lose anyone” but “who are we going to lose.”

The train doesn’t stop at West Point anymore: there is an iron fence between the train station building (now used for social events) and the still-active tracks that periodically send trains to chase the climbing team from their perches near “Crew” wall. Twenty percent of the Corps of Cadets are women. Cadets have majors, cars, and cell phones now. Much of the Plain has been converted to sports fields. Graduation is held in Michie Stadium, not at Battle Monument. The Central Divisions are gone, with the exception of the First Division, kept as a bank and museum. Intercollegiate athletics have taken on a huge role, displacing military training as the primary form of physical exercise.

Cadets still take Boxing and Swimming. Cadets in trouble still walk their post in a military manner at the quicktime, 120 steps per minute, for several hours each weekend, until their hours are all worked off. Chapel, no longer mandatory, still fills a huge role in the lives of Cadets and Officers alike. West Point graduates still fill the upper officer ranks at disproportionate numbers to their commissioning ratio.

I mentally compared it to the movie “The Butler.” Both told the story of an institution from the point of view of someone fairly far down the chain. Both are historical, driven by real people and events. Both have their share of schmaltz, of makeup and aging, of historical costumes often becoming the real star of a scene. Both deal with pieces of American government. Most important, both offer peepholes into exclusive institutions otherwise reserved for people who have committed themselves far beyond the average. Both have Eisenhower.

But whereas “The Butler” shows the evolution of America, it is the static aspect of West Point that strikes home hardest. Even the new buildings don’t radically alter the image of West Point; they just sharpen it. The waiters in the Mess Hall are still drawn from the most recent immigrants. The words to songs like “The Corps” and “The Alma Mater” may have been slightly adjusted to reflect the greater mixing of genders, but the songs still instill the thrill of the ghostly assemblage of The Long Gray Line.

There is always something a little silly in watching actors play roles when you know the real people involved. I was a Cadet, and watching a trained actor play one with all of the earnestness and fresh-faced appeal that is the hallmark of the 1950s feels almost like I am being aped. Of course, that must be true of any role copied from real events, and I take no real offence at it. It just further reinforces how strange West Point must seem to those who have never attended it. How can you really understand that place until you have had a dream where you are in the wrong place, in the wrong uniform, desperately sprinting to get to formation? West Point may be America’s Camelot, but for me it truly is my Alma Mater.

Home entertainment implementations are pretty appalling
I picked up a Panasonic BDT-230 a couple of months ago. Then I discovered that even though it appeared fairly straightforward to make it DVD region free (I have a large pile of PAL region 2 DVDs), the US models refuse to play back PAL content. We live in an era of software-defined functionality. While Panasonic could have designed a separate hardware SKU with a hard block on PAL output, that would seem like unnecessary expense. So, playing with the firmware seemed like a reasonable start.

Panasonic provide a nice download site for firmware updates, so I grabbed the most recent and set to work. Binwalk found a squashfs filesystem, which was a good sign. Less good was the block at the end of the firmware with "RSA" written around it in large letters. The simple approach of hacking the firmware, building a new image and flashing it to the device didn't appear likely to work.

Which left dealing with the installed software. The BDT-230 is based on a Mediatek chipset, and like most (all?) Mediatek systems runs a large binary called "bdpprog" that spawns about eleventy billion threads and does pretty much everything. Running strings over that showed, well, rather a lot, but most promisingly included a reference to "/mnt/sda1/vudu/vudu.sh". Other references to /mnt/sda1 made it pretty clear that it was the mount point for USB mass storage. There were a couple of other constraints that had to be satisfied, but soon attempting to run Vudu was actually setting a blank root password and launching telnetd.

/acfg/config_file_global.txt was the next stop. This is a set of tokens and values with useful looking names like "IDX_GB_PTT_COUNTRYCODE". I tried changing the values, but unfortunately made a poor guess - on next reboot, the player had reset itself to DVD region 5, Blu Ray region C and was talking to me in Russian. More inconveniently, the Vudu icon had vanished and I couldn't launch a shell any more.

But where there's one obvious mechanism for running arbitrary code, there's probably another. /usr/local/bin/browser.sh contained the wonderful line:
export LD_PRELOAD=/mnt/sda1/bbb/libSegFault.so
, so it was just a matter of building a library that hooked open() and launched inetd, dropping it into the right place, and then opening the browser.

This time I set the country code correctly, rebooted and now I can actually watch Monkey Dust again. Hurrah! But, at the same time, concerning. This software has been written without any concern for security, and it listens on the network by default. If it took me this little time to find two entirely independent ways to run arbitrary code on the device, it doesn't seem like a stretch to believe that there are probably other vulnerabilities that can be exploited with less need for physical access.

The depressing part of this is that there's no reason to believe that Panasonic are especially bad here - a large number of vendors are shipping much the same Mediatek code, and so probably have similar (if not identical) issues. The future is made up of network-connected appliances that are using your electricity to mine somebody else's Dogecoin. Our nightmarish dystopia may be stranger than expected.


April 20, 2014

Activities from Mon, 14 Apr 2014 to Sun, 20 Apr 2014

Activities

Activities Amount Diff to previous week
Badges awarded 625 -17.11%
Builds 6472 +08.68%
Copr build completed 1372 +141.12%
Copr build started 1374 +131.70%
Edit on the wiki 463 -25.20%
FAS user created 113 -11.02%
Meeting completed 23 +15.00%
Meeting started 23 +15.00%
New packages 34 +13.33%
Posts on the planet 50 -58.33%
Retired packages 14 +40.00%
Updates to stable 163 -47.25%
Updates to testing 372 +16.98%

Top contributors of the week

Activities Contributors
Badges awarded gawbul (7), kislik (7), jbernard (6)
Builds karsten (1323), dwa (1064), sharkcz (678)
Copr build completed msuchy (350), domcleal (153), rhughes (92)
Copr build started msuchy (352), domcleal (153), rhughes (92)
Edit on the wiki jreznik (48), petersen (30), fedoradummy (28)
Meeting completed jreznik (5), nirik (5), adamw (4)
Meeting started FranciscoD (2), adamw (2), heidie (2)
New packages ddick (4), jamielinux (2), mimccune (2)
Posts on the planet davej (3), duffy (3), netsys (3)
Retired packages petersen (6), tibbs (5), dbhole (1)
Updates to stable jamielinux (12), ralph (12), cicku (6)
Updates to testing jamielinux (55), remi (20), dtardon (11)
Quick things to do with Tile Clones

Tile Clones is a powerful feature of Inkscape: it allows you to create tiled copies of an object while tweaking variables controlling how they are placed and styled. The dialog, however, can be daunting for an artist who is not familiar with it.

In this instalment of the “Inkscape Quick Tips” series on Tuts+, Aaron Neize provides a brief intro to the Tile Clones dialog and shows you a few quick yet awesome things you can achieve with it.


Red Hat Summit 2014 – Session Slides

This is an update to my previous post about my (then) upcoming conference presentation.

The slides from my session at Red Hat Summit 2014, JBoss in the Trenches, are now posted online. Most of the slides have generous amounts of extra material in their notes.

And as a bonus, here’s a select few pictures from the trip to San Francisco:

Photos: room schedule, audience, "Start Something Different".
Packet Tracer 5.3.3 on Fedora 20
Dynamic routing with OSPF

Packet Tracer is one of the most useful network simulators around. It lets us create all kinds of network topologies with technologies such as Ethernet, Wireless and Fibre Channel; different types of networks (PAN, LAN, MAN, WAN...); and configure the various elements of the network as well as their geographic area.
The catch is that access to the software is limited, because it is not distributed free of charge. To obtain it officially, you must be a teacher or student, or belong to an entity authorised by the Networking Academy, Cisco Systems' academic network.

However, a bit of googling turns up many Packet Tracer versions available for Arch Linux, Gentoo, Slackware, Debian, openSUSE and, of course, Fedora, ready to download and install on our PC.
Note: if you run a different distro, you can get the version for yours from the Packet Tracer download source page used in this post.

To install it we need to have the following packages installed:
  • libXrandr.i686
  • libpng12.i686
Next we go to the page in question and download the installer script. We then need to give it execute permissions:
$ cd /home/$USER/Descargas
$ chmod +x Packet[press Tab]
$ beesu ./Packet[press Tab]
An installation dialog like the one shown below will open.



We accept the licence and proceed with the installation.

Once it finishes, a Packet Tracer launcher is created in our desktop environment's menu. If you use a WM such as wmii or dwm, you may want to know that the command is /usr/local/PacketTracer5/packettracer.

The following welcome screens appear, the last one being the main working window.


References

Shuttertux ~ Install Cisco Packet Tracer under Linux
HeiseR ~ Packet Tracer Version 5.3.3 Software Downloads
Google
Fedora Activity Day China 2014 Report

Fedora Activity Day (FAD) China 2014 was successfully held at Park Plaza of Beijing Science Park, Beijing on Mar 30 (Sunday). It was organized by Fedora Zhongwen User Group with the help of CSDN. It was under the umbrella of Open Source Technology Conference (OSTC) 2014 initiated by CSDN, being one of three parallel sessions in the afternoon. There were around 600 attendees in total, and about 200 in the FAD session.

I arrived at the venue in the morning with more than 100 Live DVDs and 500 stickers. We distributed them at the Red Hat booth as well as during the afternoon session. In the morning there were several keynote speeches and one panel discussion. More details can be found in the CSDN news report.

The afternoon session began at 1:30 pm. Thomas Yao and I served as the hosts of FAD. The first talk was “About Those Python Asynchronous Concurrency Frameworks” by Fantix King, CTO at FlowForge Games, Archlinux x32 committer, and Python programmer. He introduced the concept of concurrency, compared Tornado, Twisted, and Gevent, and then introduced asyncio, the newly available framework in Python 3.4.0.

Python Concurrency Frameworks by Fantix King

Python Concurrency Frameworks by Fantix King

The second talk was “Use Linux Command Line as a Hacker” by Xiaodong Xu (Toy), the webmaster of LinuxToy. He shared a lot of command line tips to fix typos, manipulate shell history, and speed up operations. In the QA session he talked about his opinion on text editor choice and Linux distro choice.

Toy

Toy

The next talk was “Reform the Toolbox: From Open Source Software to Open Source Service” by Daobing Li, the chief architect of Qiniu, who is also a Debian developer. He talked about the achievements of open source cloud service, and shared his vision of future cloud service – cloud in computer room. He also gave suggestions on how developers treat cloud services.

Daobing Li

Daobing Li

Following the talk was “Introduction to HackRF & GNURadio” by Scateu Wang, the creator of hackrf.net and the former leader of TUNA. He demonstrated the ease of using GNU Radio to develop software defined radio applications for DTMF decoding, FM modulation and demodulation, digital audio broadcasting, etc. He also introduced HackRF, the newly created inexpensive hardware peripheral used with GNU Radio.

Scateu Wang

Scateu Wang

The next talk was “Fedora Ambassadors & FUDCon” by myself. I introduced the four foundations of Fedora Project, gave an overview of Fedora Ambassadors project, showed what ambassadors do and how reimbursement works, and then shared the recent progress of organizing FUDCon APAC 2014 to be held in May and welcomed everyone to join.

Alick

Alick

Next, Emily Chen, GNOME.Asia founder and senior software engineer at Oracle, gave the talk “Bringing More Women to Free and Open Source Software”. She introduced the Outreach Program for Women initiated by GNOME and talked about how it increases women's participation in open source projects. The annual program provides a stipend to women participants in a similar way to GSoC, but it does not require applicants to be students, and applicants do not need to write code during the program.

Emily Chen

Emily Chen

The following talk was “Operation and Management of SHLUG” by Thomas Yao, leader of the Shanghai Linux User Group (SHLUG) and founder of GitCafe.com. Thomas gave an impressive speech without any slides. He talked about the history of SHLUG and shared his experience, pointing out the pioneering effort of building Geekbone, the first open source mirror site in China, and the importance of keeping the community focused on technology rather than commercial activities. He also shared interesting stories from Hacking Thursday and Rails Girls.

Thomas Yao

Thomas Yao

After that was the panel discussion on “History and Future of Open Source OS in China”. It was moderated by Thomas Yao, and the panelists were Weijia He from Redflag College of Education, Jianzhong Huang from Redflag R&D, Jack Yu from UbuntuKylin, and Yong Wang from Linux Deepin. They shared their opinions and experiences on the Linux desktop, collaboration between distros, cultivating open source talent, and open source in education.

Panel Discussion

Panel Discussion

At last, Martin, leader of the Beijing Linux User Group (BLUG), gave a lightning talk introducing BLUG and its activities. Everyone is welcome and should not worry about their English, since there are actually many Chinese attendees. It is also quite easy to join an event by registering on the BLUG website or joining the discussion on the mailing list.

At night there was the Open Source Night, a social event for free face-to-face discussions. Unfortunately I didn’t attend it; instead I had dinner with the FUDCon and GNOME.Asia organizers and discussed current progress and upcoming tasks.

Overall it was a very good event in my mind. If I had to point out some issues, I’d say there might have been too many talks and no time for tea or coffee in between! Besides, my own talk was prepared in a bit of a hurry and not practised well beforehand.

The slide links are available on CSDN news, which also provides a Chinese report on the FAD. There are great minutes of the meeting by Bojie Li from USTC LUG. The videos and various other materials can be found at this link.

Archive Mounter, a handy utility
When handling certain compressed archives, we tend to extract them, whether with programs such as Ark in KDE, File Roller in GNOME, Engrampa in MATE, Xarchiver, or straight from Thunar in XFCE..., or with the tar and zip commands.

With Archive Mounter, however, we can mount archives such as .tar.gz, .tar, .zip or .iso as if they were virtual disc drives. This gives us quick access to the archive and lets us pick out whatever we want from whichever file manager we use, be it Thunar, Nautilus, Caja, Dolphin...

In the following screenshot we can see how an .iso image is mounted with Archive Mounter in MATE.


Once we mount it, something like this appears:



The only thing to clarify is that this "Archive Mounter" is not actually called that (and we can even change the label if we want). It is a program located in the /usr/libexec directory under the name gvfsd-archive.

Viewing the contents of the gvfs-archive package, we can see the following:
[netSys@keys0 ~]$ rpm -ql gvfs-archive
/usr/libexec/gvfsd-archive
/usr/share/applications/mount-archive.desktop
/usr/share/gvfs/mounts/archive.mount
Precisely why "Archive Mounter" appears among our choices for opening an archive is specified in the file:
/usr/share/applications/mount-archive.desktop
.desktop files are a standard set by freedesktop.org to determine the behaviour of a program or executable in the X.org graphical environment. Looking at its contents, we can indeed see that the name matches and that it calls that executable:
[netSys@keys0 ~]$ cat /usr/share/applications/mount-archive.desktop
[Desktop Entry]
Name=Archive Mounter
Exec=/usr/libexec/gvfsd-archive file=%u
X-Gnome-Vfs-System=gio
MimeType=application/x-cd-image;application/x-bzip-compressed-tar;application/x-compressed-tar;application/x-tar;application/x-cpio;application/x-zip;application/zip;application/x-lzma-compressed-tar;application/x-xz-compressed-tar;
Terminal=false
StartupNotify=false
Type=Application
NoDisplay=true
X-GNOME-Bugzilla-Bugzilla=GNOME
X-GNOME-Bugzilla-Product=gvfs
X-GNOME-Bugzilla-Component=archive-backend
X-GNOME-Bugzilla-Version=@VERSION@
MimeType lists the supported file types, such as .bzip, .tar and .xz. It can be modified to declare more compressed formats, but there is no guarantee they will actually mount; that depends rather on how the binary was compiled and with what support.
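As an experiment, one could copy the desktop file to a per-user override and append a type to MimeType. This is a hypothetical example (the 7z type is an assumption); whether gvfsd-archive can actually mount the format depends on the libarchive support it was compiled with:

```
# ~/.local/share/applications/mount-archive.desktop — user copy of the system
# file, with one extra type appended at the end of the MimeType line:
MimeType=application/x-cd-image;application/x-compressed-tar;application/x-tar;application/zip;application/x-7z-compressed;
```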

Ten years!

Ten years ago today I joined Red Hat’s Satellite team. It’s been amazing to see the company grow from 850 people to well over 6000. Lately I’ve been working on the Candlepin project.


April 19, 2014

Installing Tryton (a free ERP) on Fedora
I previously wrote about installing OpenERP on Fedora. Tryton, a fork of OpenERP, is also available for installation from the Fedora repos.

The installation consists of two parts: first install the server, then the client.

Installing the Tryton server on Fedora (trytond)

1- Install the packages
yum install -y trytond trytond* postgresql-server
where trytond* pulls in every Tryton module available in the repository.
2- Initialise the Postgres database
postgresql-setup initdb
3- Start the services
systemctl start postgresql.service
systemctl start trytond.service
4- Enable the services at boot
systemctl enable postgresql.service
systemctl enable trytond.service
5- Stop the firewall
systemctl stop firewalld
Really one should add a firewall exception, but for now it is easier and quicker to stop the firewall temporarily; afterwards this step is no longer needed.
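Rather than stopping firewalld altogether, you could open just the port the server listens on. This is a sketch: 8000 is trytond's usual default port, but that is an assumption here, so check your trytond configuration first.

```shell
# Open trytond's port (8000 is an assumption) instead of stopping firewalld
firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --reload
```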
6- Switch to the postgres user
su - postgres
7- Create the user and database
createuser --createdb openerp

Installing the Tryton client on Fedora

yum install -y tryton
Done!
Speaking of virtualisation: VMware Player 6.0.2 on Fedora 20
Just last night I got curious about trying this virtualisation software again after all this time, and it rather surprised me. It had been many years since I last tried the Player edition, which is the one we will talk about here. Back then it did not allow creating virtual machines, as Wikipedia notes; mind you, my experience dates back to version 3.0 and it is now on 6.0.2.

In this post we will explain how to install this proprietary virtualisation software on our Fedora 20, for both the 32-bit and 64-bit architectures, after first clarifying a bit what virtualisation is.

What is virtualisation?

Without copying Wikipedia's definition, we would venture to say it is a process in which the operating system we have installed and our PC's hardware interact in a way that allows another operating system to run, by means of software that makes this possible.
That is, with a program such as VMware Player, VirtualBox or KVM we can use Windows XP, or any other operating system, without having to delete, resize or repartition our hard drive.
The benefit for us is being able to try any system or Linux distro without having to reboot the computer to pick another previously installed system. Or, if money is no object, perhaps you have more than one PC with many operating systems installed. Imagine all the money wasted!

In short, companies, which are the main drivers of the software business, are extremely interested in the cost savings this brings. Running a multitude of virtual machines on a single box is not the same as running 100 boxes with one system each, of which perhaps only 10% of the capacity is used; the latter means serious financial losses on top of the cost of all that infrastructure.

What is a virtual machine?

A virtual machine is one that "borrows" your hardware to run whatever operating system we want to install inside the virtualisation software, be it VirtualBox, VMware... It will have a small BIOS and a set of devices such as hard disks and USB connectors for printers and the like, making it the closest thing to having another PC in our house, but inside our computer.

What is the host-guest relationship?
In virtualisation, as far as I know, we have a set of terms for the system that is running on the hardware and the one running on top of it. The host refers to the system installed on the computer's hard drive, such as Fedora, openSUSE, Gentoo or Arch Linux; it is the one we use to work every day.

The guest, however, is the system that will be virtualised on our computer. Depending on the virtualisation software used, it will have configuration files describing the virtual machine's characteristics, such as its size and where its hard disk is stored.

How does a hard disk work here?

Generally this varies with the virtualisation environment you use. For example, VirtualBox by default generates a .vdi file, which can occupy 40 GB or whatever size you assign when creating the virtual machine; alternatively, the file can grow as you add data to that disk.

With KVM we can do much the same, except that the disk is stored in another format such as QCOW2 or raw. We can also boot from a partition of another hard disk, a pendrive, or a whole disk, as we could in VMware, among many other things.

Most importantly, our virtual machine's hard disk is stored as a file, or a set of files, on our partition. No partition is created on our physical disk.

Installation on Fedora 20

Well, having explained a little of what virtualisation and virtual machines are and how they work, let's proceed to install VMware Player 6.0.2 on our system.

First steps: keeping our system updated

First we should have the whole system fully updated. Those familiar with graphical front ends to yum and rpm, such as Apper or Yumex, will know how to keep it up to date, as will those using yum, dnf and the like from the console.

Installing dependencies

We need to have the following packages installed:
  • dkms
  • kernel-devel
  • kernel-headers
  • gcc
  • binutils
  • make
  • beesu
Downloading the software for your architecture

From the official VMware page you can download both the 32-bit and the 64-bit version.
A note: I have not tried using the 32-bit VMware Player on a 64-bit system. I assume the compatibility libraries would need to be installed, if they do not already come in with the gtkmm libraries and so on.

Installing VMware

There are two ways to install it. One is the console option, which installs in text mode; it can be used from a tty, or simply because you prefer it.
The other is the graphical installer, which uses GTK, and is the one we will use.

Once downloaded, open a terminal (Konsole in KDE, mate-terminal in MATE, gnome-terminal in GNOME, xterm...) and change to the directory where the VMware bundle is. In my case that is /home/netSys/Descargas.
$ cd /home/netSys/Descargas
$ chmod +x VMware-Player-6.0.2-1744117.*.bundle
$ beesu ./VMware-Player-6.0.2-1744117.*.bundle --gtk
The VMware Player installation interface then opens:


We accept the licence terms.


This one is up to you: whether the program checks for updates every time it starts, or you check manually.


As with any other program that asks for "anonymous" statistics to improve and gauge usage, you can allow it or not.


We will leave this blank since we do not need it.


Just press the "Install" button to copy the files and install the program.


And voilà, it is ready. Depending on the desktop environment you use, it will appear under the corresponding menu entry. To run it from a console, execute the command vmplayer.

A first look

With VMware Player installed, let's poke around with it. In the examples that follow I will explain a bit about creating our guest, as well as how to install the VMware Tools in a virtual machine running Fedora 20 MATE.

The program's main window



On the right we can see the various options available. On the left, the virtual machines we create over time will be listed.

One thing I miss is being able to export machines as with VirtualBox. Here all we can do is import them through the "Open a Virtual Machine" menu.

Downloading and installing the VMware Tools for all available systems



Clicking "File" -> "Player Preferences" in the main window opens a dialog. Click the "Download All Components Now" button to download the VMware Tools for all the listed systems, including Linux.

This will let us install the VMware Tools later, once the virtual machine is created and the operating system installed. They give us smoother graphics, shared folder support (not SAMBA, mind you)...

Creating a VM
Selecting the installation media



This is the first dialog we see when creating a virtual machine. As shown, there are up to three options for the installation source. It can be a physical device such as our CD/DVD drive or a pendrive; we can load a disc image (.iso) containing the operating system, as in the example; or, finally, we can choose it later.

Identifying the operating system



Here we select the system we are going to virtualise.

Choosing a name and storage directory



We give our VM a name and the directory where it will be stored. It normally uses a default path and appends the assigned name.

Selecting how the hard disk is stored



This dialog asks us to select how the hard disk will be stored. The first option generates a single file, while the second splits the file into many chunks; the latter is advisable if the VM is going to be copied to another computer, since copying takes less effort.

Besides the storage method, a size is assigned to the hard disk.

Virtual machine summary



At this point we can either confirm our new machine or cancel the process, as well as go back to change anything else.

We can also configure the hardware-related parameters by clicking "Customize Hardware..."; a window like the following opens:



As we can see, certain parameters can be altered before starting the machine. And this is not all we can modify: once the machine has been created there is another menu where many more things can be changed, as long as the machine is powered off.

A few tips after creating the VM



This is the last window we see in this process. It gives a brief explanation of how to install the VMware Tools and recaps the steps we took earlier.

Viewing the new machine



The options we talked about earlier are under "Edit virtual machine settings".



Launching the machine



Here we see Fedora 20 KDE Spin booting from the .iso I downloaded.

Installing the VMware Tools

First, the guest operating system should be installed with all its updates, in the case of a GNU/Linux distribution. In our case we will do it with the Fedora 20 MATE Spin. On Fedora it is important to remove the following packages:
  • open-vm-tools
  • open-vm-tools-desktop
because they will conflict with the VMware Tools. It is also important to install the following packages in the virtual machine so that the corresponding modules can be built:
  • kernel
  • kernel-devel
  • kernel-headers
  • gcc
  • make
  • binutils
  • beesu
Once this is done, click the menu "Virtual Machine" -> "Reinstall VMware tools..."



Caja, the MATE file manager, opens with the .iso mounted. Copy the file shown in the image above straight to our home directory and extract it.

Note: since the migration to systemd, mount points are located under /var/run/media/$USER/

An example in the terminal:
$ cd /var/run/media/maquina_virtual/VMware\ Tools/
$ cp VMware-Tools[press Tab] /home/maquina_virtual
$ cd /home/maquina_virtual
$ tar xfv VM[press Tab]
Now we install them: change into the new folder created by the tar command above and give execute permission to the script vmware-install.pl
$ cd vmware-tools-distrib
$ chmod +x vmware-install.pl
$ beesu ./vmware-install.pl
A dialog-style installer will run in the terminal, where you can accept all the defaults or change them as you see fit. Once installed, reboot the machine and the tools will be ready to use.

Mounting shared folders


Let's create shared folders by going to "Virtual Machine" -> "Virtual Machine Settings" and adding them there. In my case I chose to share the /tmp directory of my hard drive under the name "temporal".


Then we go to the virtual machine and run the following example in a terminal.
$ su -c "mount -t vmhgfs .host:/temporal /home/$USER/Plantillas"

The example in the image above shows it completed successfully.


I took how to mount the folders from the online VMware Player manual.
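To make a shared-folder mount survive reboots, an /etc/fstab entry in the guest along these lines should work. This is a sketch reusing the same mount parameters as above; the target directory is just an example:

```
# /etc/fstab — mount the VMware shared folder "temporal" at boot
.host:/temporal  /home/maquina_virtual/Plantillas  vmhgfs  defaults  0 0
```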

Importing machines


Under the "Open Virtual Machine" option we can import machines from previous use. Supposedly machines created with VirtualBox could be imported, but it throws an error when trying to read the .ova. I believe this is because it uses .vdi instead of .vmdk as the disk format, something which can of course be changed when creating the disk in VirtualBox.

Picking a hard disk



If you have a "loose" VMware hard disk, or want to select a physical disk or create a new one, you can create a virtual machine and then add it through the VM's options.


We select the virtual hard disk type, or leave the default.


There are three options: create a new hard disk, choosing its size and whether to split it or keep a single file; browse for an existing disk on our system; or, finally, use a physical disk we have attached, a pendrive, and so on.

References
  • ArchLinux Wiki - VMware
  • Google
  • VMware official page

April 18, 2014

HEADS-UP: EPEL5 mod_security-2.6.8-5 security update is broken

A while ago, I pushed a mod_security security update (a one-line patch for CVE-2013-5705) without testing it thoroughly on EL5, and it turns out to be broken (httpd does not start) [1].

I usually test all packages before pushing updates, but at that time I didn't have access to my build box (which has all my test VMs).

If you're going to update mod_security on an EL5 box, you should get the one from epel5-testing:
https://admin.fedoraproject.org/updates/mod_security-2.6.8-6.el5

Sorry for any inconvenience caused.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1089343

Let me introduce myself. Ambassador from Uruguay.

Greetings to all from Uruguay. :)

Let me introduce myself (a couple of weeks late due to some technical issues, but here I am). I’m very glad to become a new Fedora Ambassador from Uruguay.

Some years ago I joined the Project’s community in our country, thanks to Xigatec and Ein, while we organised FLISoL 2010 in Montevideo.

Since that time I have been organising and participating in several events with the rest of the local FOSS communities.

I am currently living in a small city in our country (Fray Bentos, Río Negro), working in IT and studying for the RHCE.

Lately I have been organising FLISoL, which will take place in one of the three public high schools in the city. One of our goals is to build a link between teachers, students and the community, taking advantage of the OLPC project in our country (the Ceibal Project).

In a few weeks I will be participating in FISL15, with the sponsorship of Fedora. I’m sure it will be a nice opportunity to meet other Fedora contributors and talk a bit about our plans for Fedora in Uruguay.

Keep in touch. ;)

Regards.

Diego Daguerre,
FAS: lunaticc0.


Weekly Fedora kernel bug statistics – April 18th 2014
                          19   20  rawhide  (total)
Open:                    103  204      149    (456)
Opened since 2014-04-11    3   14        8     (25)
Closed since 2014-04-11    6    9        6     (21)
Changed since 2014-04-11   6   29       14     (49)

Weekly Fedora kernel bug statistics – April 18th 2014 is a post from: codemonkey.org.uk

F1LT banned by FOM

F1LT Formula 1 Live Time application seems to have been recently banned by FOM (Formula One Management).

The latest release packaged in Fedora is 3.0.0; its development is frozen (probably permanently).


Filed under: English, FedoraPlanet, fedoraproject, pacchettizzazione, Software
libsolv and hawkey break dnf

Anyone using dnf as their package manager is probably currently confronted with the error message

AttributeError: No such checksum.

as soon as they try to keep their system up to date.

The cause appears to be an incompatibility between libsolv 0.6 and hawkey 0.4.12.

The only way to fix the problem is to fall back to yum for a moment and update hawkey using yum:

su -c'yum update hawkey'

Afterwards dnf should work as usual again and install the remaining updates without problems.

Back to Fedora Packaging

I’m back doing some rpm packaging for Fedora after a little break.  Got 2 on the go, one of which is currently being reviewed.  Thing is, the reviewer brought up some good points which I’m addressing, but the documentation is all over the place.  Go here for systemd scriptlet info, there for the latest standard, and let’s not forget the contradictions.

It’s annoying for me, and I’ve been doing rpm packaging for quite a while, so I dread to think how a new packager copes.  And this must waste the reviewers’ time as well, because of the things that get overlooked.

It would be very helpful to have something like a flowchart that outlines the steps needed and links to information on those steps.  Personally I find the current wiki packaging details way too verbose; nice simple steps ensure no mistakes.  If more detail is required, it can be linked off somewhere else.

Just my 2c worth, but it is something I think needs to be addressed.

2014 Red Hat Summit: Open Playground

<iframe class="youtube-player" frameborder="0" height="289" src="http://www.youtube.com/embed/7ukA6KiM8SQ?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="460"></iframe>

;)

/f


Installing OpenERP on Fedora
OpenERP is a free enterprise management application. It was a pleasant surprise for me to discover that this application is packaged for Fedora, so installing it is very easy, and OpenERP can even be managed as a service through systemd (systemctl).

The installation has two parts: first install the server, then the client.

Installing the OpenERP Server on Fedora

1- Install OpenERP and Postgres

yum install -y openerp postgresql-server

2- Initialize the Postgres database

postgresql-setup initdb

3- Start the services

systemctl start postgresql.service
systemctl start openerp.service

4- Enable the services at boot

systemctl enable postgresql.service
systemctl enable openerp.service

5- Stop the firewall

systemctl stop firewalld

Really you should add an exception to the firewall, but for now it is easier and quicker to temporarily stop it; afterwards this is no longer necessary.
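
For reference, a rough sketch of the proper fix with firewalld instead of stopping it entirely. The port number is an assumption on my part (8069 is OpenERP's usual default); adjust it to your configuration:

```shell
# Open OpenERP's web port (8069 assumed) instead of stopping firewalld.
firewall-cmd --permanent --add-port=8069/tcp
firewall-cmd --reload
```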

6- Switch to the postgres user

su - postgres

7- Create the user and database

createuser --createdb openerp

Installing the OpenERP Client on Fedora

1- Install the client

yum install -y openerp-client

Done!

With that there is now an OpenERP service running and a client to access the system. Just start the OpenERP client and connect to the local server.

Later on I hope to write more about using and configuring OpenERP.

April 17, 2014

Get involved in the Fedora.next web efforts!

Lately I’ve been blogging about the proposal for new Fedora websites to account for the Fedora.next effort. So far, the proposal has been met with warm reception and excitement! (Yay!)

We Really Would Love Your Help

Two very important things that I’d like to make clear at this point:

  • This plan is not set in stone. It’s very sketchy, and needs more refinement and ideas. There is most certainly room to join in and contribute to the plan! Things are still quite flexible; we’re still in the early stages!
  • We would love your help! I know this usually goes without saying in FLOSS, but I still think it is worth saying. We would love more folks – with any skillset – to help us figure this out and make this new web presence for Fedora happen!

Are you interested in helping out? Or perhaps you’d just like to play around with our assets – no strings attached – for fun, or follow along on the progress at a lower level than just reading these blog posts? Let’s talk about where the action is happening so you can get in on it! :)

How To Get Involved

Up until this point, the Fedora.next web ideas and mockups have been scattered across various blogs, Fedora people pages, and git repos. We talked a bit last week in #fedora-design about centralizing all of our assets in one place to make it easier to collaborate and for new folks to come on board and help us out. Here’s what we’ve set up so far:

  • A Fedora Design GitHub group – I’ve already added many of our friends from the Fedora Design team. If you’d like to be included, let me know your GitHub username!
  • nextweb-assets git repo – This repo has the Inkscape SVG source for the mockups and diagrams I’ve been blogging here. Please feel free to check them out, remix them, or contribute your own! I tried to set up a sensible directory structure. I recommend hooking this repo up to SparkleShare for a nice workflow with Inkscape.
  • mockups-getfedora git repo – This repo holds the prototype Ryan has been working on for the new getfedora.org ‘Brochure Site’ in the proposal.

We also, of course, have #fedora-design in freenode IRC for discussing the design, as well as the design-team mailing list for discussion.

The Fedora Websites team will be setting up a branch for the new websites work sometime by the end of next week. For now, you can take a look at the mockups-getfedora repo. You also might want to set up a local copy of the Fedora websites repo by following these instructions to get familiar with the Fedora websites workflow.

Okay, I hope this makes it abundantly clear that we’d love your help and gives you some clear steps towards starting to get involved should you be interested. Please don’t hesitate to get in touch with me or really anyone on the design team or websites team if you’d like to get started!

How to download MP3s from YouTube
It's fairly easy; just run `youtube-dl --extract-audio --audio-format mp3 -l URL` and it's all yours!
256 Bits of Security

This is an incomplete discussion of SSL/TLS authentication and encryption.  This post only goes into RSA and does not discuss DHE, PFS, elliptical, or other mechanisms.

In a previous post I created a 15,360-bit RSA key and timed how long it took to create the key.  Some may have thought that was some sort of stunt to check processor speed.  I mean, who needs an RSA key of such strength?  Well, it turns out that if you actually need 256 bits of security then you’ll need an RSA key of exactly this size.

According to NIST (SP 800-57, Part 1, Rev 3), to achieve 256 bits of security you need an RSA key of at least 15,360 bits to protect the symmetric 256-bit cipher that’s being used to secure the communications (SSL/TLS).  So what does the new industry-standard RSA key size of 2048 bits buy you?  According to the same document that 2048-bit key buys you 112 bits of security.  Increasing the bit strength to 3072 will bring you up to the 128 bits that most people expect to be the minimum protection.  And this is assuming that the certificate and the certificate chain are all signed using a SHA-2 algorithm (SHA-1 only gets you 80 bits of security when used for digital signatures and hashes).
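
The comparable strengths the post refers to can be sketched as a small lookup (values from NIST SP 800-57 Part 1 Rev 3, Table 2; the helper function is just illustrative):

```shell
# NIST SP 800-57 Part 1 Rev 3, Table 2:
# RSA modulus size (bits) -> equivalent symmetric security (bits).
rsa_security_bits() {
    case "$1" in
        1024)  echo 80  ;;
        2048)  echo 112 ;;
        3072)  echo 128 ;;
        7680)  echo 192 ;;
        15360) echo 256 ;;
        *)     echo "not in table" ;;
    esac
}

rsa_security_bits 2048    # prints 112
rsa_security_bits 15360   # prints 256
```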

So what does this mean for those websites running AES-256 or CAMELLIA-256 ciphers?  They are likely wasting processor cycles and not adding to the overall security of the connection.  I’ll give two examples of TLS implementations in the wild.

First, we’ll look at wordpress.com.  This website is protected using a 2048-bit RSA certificate, signed using SHA-256, and using the AES-128 cipher.  This represents 112 bits of security because of the limitation of the 2048-bit key.  The certificate is properly chained back to the GoDaddy CA, which has root and intermediate certificates that are all 2048 bits and signed using SHA-256.  Even though there is reduced security when using the 2048-bit key, it’s likely more efficient to use the AES-128 cipher than any other due to the chip acceleration typically found in computers nowadays.

Next we’ll look at one of my domains: christensenplace.us.  This website is protected using a 2048-bit RSA certificate, signed using SHA-1, and using the CAMELLIA-256 cipher.  This represents 80 bits of security due to the limitation of the SHA-1 signature used on the certificate and on the CA and intermediate certificates from AddTrust and COMODO CA.  My hosting company uses both the RC4 cipher and the CAMELLIA-256 cipher.  In this case the CAMELLIA-256 cipher is a waste of processor cycles, since the certificates used aren’t nearly strong enough to support such encryption.  I block RC4 in my browser, as RC4 is no longer recommended to protect anything.  I’m not really sure exactly how much security you’ll get from using RC4, but I suspect it’s less than SHA-1.

So what to do?  Well, if system administrators are concerned with performance then using a 128-bit cipher (like AES-128) is a good idea.  For those that are concerned with security, using a 3072-bit RSA key (at a minimum) will give you 128 bits of security.  If you feel you need more bits of security than 128 then generating a solid, large RSA key is the first step.  Deciding how many bits of security you need all depends on how long you want the information to be secure.  But that’s a post for another day.


Configuring mod_nss for Horizon

Horizon is the Web Dashboard for OpenStack. Since it manages some very sensitive information, it should be accessed via SSL. I’ve written up in the past how to do this for a generic web server. Here is how to apply that approach to Horizon.

These instructions are based on a Fedora 20 and packstack install.

As a sanity check, point a browser at your Horizon server before making any changes. If the hostname was not set before you installed packstack, you might get an exception about a bad request header suggesting you need to set ALLOWED_HOSTS. If so, edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['192.168.187.13','ayoungf20packstack.cloudlab.freeipa.org', 'localhost', ]

Once Horizon has been shown to work on port 80, proceed to install the Apache HTTPD module for NSS:

sudo yum install mod_nss

While this normally works for HTTPD, something is different with packstack; all of the HTTPD module loading is done with files in /etc/httpd/conf.d/ whereas the mod_nss RPM assumes the Fedora approach of putting them in /etc/httpd/conf.modules.d/. I suspect it has to do with the use of Puppet. To adapt mod_nss to the packstack format, after installing mod_nss, you need to mv the file:

sudo mv /etc/httpd/conf.modules.d/10-nss.conf   /etc/httpd/conf.d/nss.load

Note that mv keeps SELinux happy, but cp does not; use ls -Z to confirm:

$ ls -Z /etc/httpd/conf.d/nss.load 
-rw-r--r--. root root system_u:object_r:httpd_config_t:s0 /etc/httpd/conf.d/nss.load

If you get a bad context there, the cheating fix is to yum erase mod_nss, rerun yum install mod_nss, and then do the mv. That is what I did.
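
A lighter-weight alternative to reinstalling the package (assuming the standard SELinux policy is loaded, and not what the post actually did) is to relabel the file from policy:

```shell
# Reset the file's SELinux context to what the loaded policy prescribes.
sudo restorecon -v /etc/httpd/conf.d/nss.load
```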

edit /etc/httpd/conf.d/nss.conf:

#Listen 8443
Listen 443

and in the virtual host entry change 8443 to 443

Add the following to /etc/httpd/conf.d/openstack-dashboard.conf

<VirtualHost>
   ServerName ayoungf20packstack.cloudlab.freeipa.org
   Redirect permanent / https://ayoungf20packstack.cloudlab.freeipa.org/dashboard/
</VirtualHost>

replacing ayoungf20packstack.cloudlab.freeipa.org with your hostname.

Lower in the same file, in the section

<Directory>

add

  NSSRequireSSL

To enable SSL.

SSL certificates really should not be self signed. To have a real security strategy, your X509 certificates should be managed via a Certificate Authority. Dogtag PKI provides one, and is deployed with FreeIPA. So, for this setup, the Horizon server is registered as an IPA client.

There will be a self-signed certificate in the NSS database from the install. We need to remove that:

sudo certutil -d /etc/httpd/alias/ -D -n Server-Cert

In order to fetch the certificates for this server, we use the IPA command that tells certmonger to fetch and track the certificate.

ipa service-add HTTP/`hostname`
sudo ipa-getcert request -d /etc/httpd/alias -n Server-Cert -K HTTP/`hostname` -N CN=`hostname`,O=cloudlab.freeipa.org

If you forgot to add the service before requesting the cert, as I did on my first iteration, the request is put on hold: it will be serviced in 12 hours (I think) when certmonger resubmits it, but you can speed up the process:

sudo getcert resubmit -n Server-Cert  -d /etc/httpd/alias

You can now see the certificate with:

 sudo certutil -d /etc/httpd/alias/ -L -n Server-Cert

Now, if you restart the HTTPD server,

sudo systemctl restart httpd.service

and point a browser at http://hostname, you should get redirected to https://hostname/dashboard and a functioning Horizon application.

Note that for devstack, the steps are comparable, but different:

  • No need to mv the 10-nss.conf file from modules
  • The Horizon application is put into /etc/httpd/conf.d/horizon.conf
  • The horizon app is in a virtual host of <VirtualHost *:80>; you can’t just change this to 443, or you lose all of the config from nss.conf. The two VirtualHost sections should probably be merged.
Caseless virtualization cluster, part 4

AMD supports nested virtualization a bit more reliably than Intel, which was one of the reasons to go for AMD processors in my virtualization cluster. (The other reason is that they are much cheaper.)

But how well does it perform? Not too badly as it happens.

I tested this by creating a Fedora 20 guest (the L1 guest). I could create a nested (L2) guest inside that, but a simpler way is to use guestfish to carry out some baseline performance measurements. Since libguestfs is creating a short-lived KVM appliance, it benefits from hardware virt acceleration when available. And since libguestfs ≥ 1.26, there is a new option that lets you force software emulation so you can easily test the effect with & without hardware acceleration.

L1 performance

Let’s start on the host (L0), measuring L1 performance. Note that you have to run the commands shown at least twice, both because supermin will build and cache the appliance first time and because it’s a fairer test of hardware acceleration if everything is cached in memory.

This AMD hardware turns out to be pretty good:

$ time guestfish -a /dev/null run
real	0m2.585s

(2.6 seconds is the time taken to launch a virtual machine, all its userspace and a daemon, then shut it down. I’m using libvirt to manage the appliance).

Forcing software emulation (disabling hardware acceleration):

$ time LIBGUESTFS_BACKEND_SETTINGS=force_tcg guestfish -a /dev/null run
real	0m9.995s

L2 performance

Inside the L1 Fedora guest, we run the same tests. Note this is testing L2 performance (the libguestfs appliance running on top of an L1 guest), i.e. nested virt:

$ time guestfish -a /dev/null run
real	0m5.750s

Forcing software emulation:

$ time LIBGUESTFS_BACKEND_SETTINGS=force_tcg guestfish -a /dev/null run
real	0m9.949s

Conclusions

These are just some simple tests. I’ll be doing something more comprehensive later. However:

  1. First level hardware virtualization performance on these AMD chips is excellent.
  2. Nested virt runs at about 45% of non-nested speed (2.585 s vs 5.750 s).
  3. TCG performance is slower as expected, but shows that hardware virt is being used and is beneficial even in the nested case.
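
The rough figures in the conclusions can be recomputed from the measured appliance-launch times above (a quick awk sketch; the labels are mine):

```shell
# Times from the post (seconds):
#   L1 (first-level) hw virt: 2.585   L1 TCG: 9.995
#   L2 (nested)      hw virt: 5.750   L2 TCG: 9.949
awk 'BEGIN {
    printf "nested hw virt runs at %.0f%% of first-level speed\n", 2.585 / 5.750 * 100
    printf "hw virt is %.1fx faster than TCG in the nested guest\n", 9.949 / 5.750
}'
```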

Other data

The host has 8 cores and 16 GB of RAM. /proc/cpuinfo for one of the host cores is:

processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 21
model		: 2
model name	: AMD FX(tm)-8320 Eight-Core Processor
stepping	: 0
microcode	: 0x6000822
cpu MHz		: 1400.000
cache size	: 2048 KB
physical id	: 0
siblings	: 8
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1
bogomips	: 7031.39
TLB size	: 1536 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro

The L1 guest has 1 vCPU and 4 GB of RAM. /proc/cpuinfo in the guest:

processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 21
model		: 2
model name	: AMD Opteron 63xx class CPU
stepping	: 0
microcode	: 0x1000065
cpu MHz		: 3515.548
cache size	: 512 KB
physical id	: 0
siblings	: 1
core id		: 0
cpu cores	: 1
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx pdpe1gb lm rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c hypervisor lahf_lm svm abm sse4a misalignsse 3dnowprefetch xop fma4 tbm arat
bogomips	: 7031.09
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

Update

As part of the discussion in the comments about whether this has 4 or 8 physical cores, here is the lstopo output:

lstopo


3.14 Fedora ARM kernel status

With 3.14.1 out, Josh and Justin are preparing to land the 3.14 kernel in Fedora 20. So what does it mean for ARM on Fedora? Well, it’s an evolution. There’s of course the usual raft of new devices and some new SoCs, and best of all lots of improvements in support for existing devices. Even the return of some old favourites! Generally, judging from the stash of devices that Paul, I and others have and regularly test, things are looking pretty good with 3.14.

New SoCs and device support:

  • Tegra 4/K1 support has been enabled. For 3.14 this doesn’t mean much, as there aren’t many devices out there that we ship device tree blobs for, but it’s good preparation for 3.15, when we should have pretty reasonable support for the nice new Tegra K1 dev board!
  • TI devices: 3.14 finally brings working support for the OMAP5 EVM board; this will improve further in 3.15, but it boots and generally works. It also adds support for the original BeagleBone, and USB is now finally all back for the Beagle-XM devices, so they go back into the fully working list! Also, the DT bits for the Overo devices have started to land, so I’d love feedback from people with those devices.
  • Freescale i.MX: To this lovely growing list we add initial support for the various Cubox-i devices and the HummingBoard. Still no HDMI support, but here’s hoping for 3.15.
  • Xilinx Zynq 7000: The SD controller support has finally landed, which means these devices should be bootable to a login, but beyond that I’m not sure of the status of the rest of the support. We ship DT for the zynq-zc702, zynq-zc706 and zynq-zed devices, so if you’ve got one of those and have time to test, feedback would be welcome.

Interesting bugs fixed:
Nothing particularly exciting comes to mind here. The fix for the Beagle-XM USB hub power-up is nice, as was the final config option for the BeagleBone White. There’s obviously a deluge of other ARM driver fixes and improvements (PTP high-precision time support for the modular CPSW ethernet on the BeagleBones, anyone?).

Outstanding bugs and issues:
So this is really the list of items I have outstanding for 3.14 so that I can spin some new images. Feedback on any of these and anything I might not be aware of would be very useful.

  • OMAP DRM display. In 3.11 a new display framework landed for the OMAP DRM driver and in 3.12 the old one was dropped. This broke X on devices like the Pandaboard for 3.12+ kernels. I know roughly the problematic area but I just need to get the cycles to debug this. Any help is welcome.
  • Serial over USB. While this isn’t a kernel bug, but rather just needs me to hack together some scripts, it’s a blocker for easy OOTB support for devices like the BeagleBone Black.
  • Tegra DRM display. After a re-write it’s back and modular again in 3.14. Just need to ensure it’s working and ready to go
  • Testing… testing… testing :-)

That’s mostly all I have on my list. If there’s anything I’m not aware of please do let me know and I’ll endeavour to help out where possible. In particular I’m very interested in boot issues for devices that would be supportable with new ARM images based on 3.14. There’s been a lot of change and improvement since the 3.11 kernel that F-20 GA shipped with, and while non-boot enhancements are easy to get with a “yum update”, issues with boot aren’t quite so easy to deal with!

Semi irregular Fedora ARM kernel status reports

I thought I’d start doing semi irregular ARM kernel status reports. I’ll do them as often as I think they might be useful and those who know me know I travel a lot and randomly and that ARM isn’t my $dayjob.

Few are bored or stupid enough to follow the 20 or so ARM kernel trees, or have the regular insight that I do into what’s happening, what’s landed, what new devices might work, and what bugs come and go. So I thought I’d try to dispense some of the more interesting bits of that information, and how it relates to Fedora ARM, to a wider audience via both the fedora-arm mailing list and my blog, so that people who don’t sit on the IRC channel, and those who like to lurk, might have a better idea what’s going on.

The general format I plan to use is basically:

  • What’s new including SoCs, boards and new devices
  • Interesting bugs fixed
  • Outstanding bugs and issues
  • Random other insights

I don’t intend them to be long, but rather short, sweet and to the point. They’ll probably come out when new major releases hit either rawhide or stable, or when something of particular interest lands. Feedback on the format and most other things is welcome, as are questions and reports of devices people might have had success (or less so) with.

I plan to have the first for 3.14 out later today and one for 3.15 RSN.

What is GOM¹
Under that name is a simple idea: making it easier to save, load, update and query objects in an object store.

I'm not the main developer for this piece of code, but contributed a large number of fixes to it, while porting a piece of code to it as a test of the API. Much of the credit for the design of this very useful library goes to Christian Hergert.

The problem

It's possible that you've already implemented a data store inside your application, hiding your complicated SQL queries in a separate file because they contain injection security issues. Or you've used the filesystem as the store and threw away the ability to search particular fields without loading everything in memory first.

Given that SQLite pretty much matches our use case - it offers good search performance, it's a popular and thus well-documented project, and its files can be manipulated through a number of first-party and third-party tools - wrapping its API to make it easier to use is probably the right solution.

The GOM solution

GOM is a GObject based wrapper around SQLite. It will hide SQL from you, but still allow you to call to it if you have a specific query you want to run. It will also make sure that SQLite queries don't block your main thread, which is pretty useful indeed for UI applications.

For each table, you would have a GObject, a subclass of GomResource, representing a row in that table. Each column is a property on the object. To add a new item to the table, you would simply do:

item = g_object_new (ITEM_TYPE_RESOURCE,
                     "column1", value1,
                     "column2", value2,
                     NULL);
gom_resource_save_sync (item, NULL);

We have a number of features which try to make it as easy as possible for application developers to use gom, such as:
  • Automatic table creation for strings, string arrays and number types, as well as GDateTime, and transformation support for complex types (say, colours or images).
  • Automatic database version migration, using annotations on the properties ("new in version")
  • Programmatic API for queries, including deferred fetches for results
Currently, the main cost in terms of lines of code, when porting from SQLite, is the verbosity of declaring properties with GObject. That will hopefully be fixed by the GProperty work planned for the next GLib release.

The future

I'm currently working on some missing features to support a port of the grilo bookmarks plugin (support for column REFERENCES).

I will also be making (small) changes to the API to allow changing the backend from SQLite to another one, such as XML, or a binary format. Obviously the SQL "escape hatches" wouldn't be available with those backends.

Don't hesitate to file bugs if there are any problems with the API, or its documentation, especially with respect to porting from applications already using SQLite directly. Or if there are bugs (surely, no).

Note that JavaScript support isn't ready yet, due to limitations in gjs.

¹: « SQLite don't hurt me, don't hurt me, no more »
Daily log April 16th 2014

Added some code to trinity to use random open flags on the fd’s it opens on startup.

Spent most of the day hitting the same VM bugs as yesterday, or others that Sasha had already reported.
Later in the day, I started seeing this bug after applying a not-yet-merged patch to fix a leak that Coverity had picked up on recently. Spent some time looking into that, without making much progress.
Rounded out the day by trying out latest builds on my freshly reinstalled laptop, and walked into this.

Daily log April 16th 2014 is a post from: codemonkey.org.uk

April 16, 2014

java remote console on DRAC7 and Fedora…

Just a short blog post about accessing the Java remote console on DRAC7 servers from a Fedora client (mostly so I can find this again in a few years and remember what I had to do):

If you just try and launch it, everything goes great until the end, and you get “Connection Refused”. This is of course a completely wrong error message. Really the problem is that it can’t find cacerts. You must:

mkdir -p ~/.java/deployment/security

ln -s /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.5.0.16.pre02.fc21.x86_64/jre/lib/security/cacerts ~/.java/deployment/security/trusted.certs

Of course you need to fill in your openjdk version that you have installed. Once you do that, launch the console and it should connect and work fine. 
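
One way to avoid hardcoding the version at all is a glob (a sketch; it assumes exactly one OpenJDK install under /usr/lib/jvm):

```shell
# Locate the installed JRE's cacerts without pinning the exact openjdk version.
cacerts=(/usr/lib/jvm/java-*-openjdk-*/jre/lib/security/cacerts)
mkdir -p ~/.java/deployment/security
ln -sf "${cacerts[0]}" ~/.java/deployment/security/trusted.certs
```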

Many thanks to Patrick Uiterwijk for figuring this out. :)