October 13, 2014

New features in mock-1.2
You may have noticed there's been a new release of mock in Rawhide (only). It incorporates all the new features I've been working on during my Google Summer of Code project, so I'd like to summarize them here for people who haven't been reading my blog. Note that there were some other new features that weren't implemented by me, so I don't mention them here. You can read more about the release at http://miroslav.suchy.cz/blog/archives/2014/10/12/big_changes_in_mock/index.html.

LVM plugin
The usual way to cache an already initialized buildroot is using tarballs. Mock can now also use LVM as a backend for caching buildroots, which is a bit faster and enables efficient snapshotting (copy-on-write). This feature is intended for people who maintain a lot of packages and find themselves waiting for mock to install the same set of BuildRequires over and over again.
Mock uses LVM thin provisioning, which means that one logical volume (called a thinpool) can hold all the thin logical volumes and snapshots used by all buildroots (you have to set it up like that in the config) without each of them having a fixed size. The thinpool is created by mock when it starts initializing, and after the buildroot is initialized, mock creates a postinit snapshot which will be used as the default. Default snapshot means that when you execute clean or start a new build without the --no-clean option, mock will roll back to the state in the default snapshot. As you install more packages you can create your own snapshots (usually for dependency chains that are common to many of your packages). I'm a Java packager and most of my packages BuildRequire maven-local, which pulls in 100 MB worth of packages. Therefore I can install maven-local just once and then make a snapshot with
mock --snapshot maven
and it will then be used as the default snapshot to which --clean will roll back whenever I build another package. When I want to rebuild a package that doesn't use maven-local, I can use
mock --rollback-to postinit
and the initial snapshot will be used for the following builds. My maven snapshot will still exist, so I can get back to it later using --rollback-to maven. To get rid of it completely, I can use
mock --remove-snapshot maven
So how do you enable it?
The plugin is distributed as a separate subpackage, mock-lvm, because it pulls in additional dependencies which are not available on RHEL 6. So you first need to install it.
You need to specify a volume group which mock will use to create its thinpool. That means you need some unoccupied space in your volume group, so you'll probably need to shrink some partition a bit. Mock won't touch anything else in the VG, so don't be afraid to use the VG you have for your system. It won't eat your data, I promise. The config for enabling it will look like this:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',
    'size': '8G',
    'pool_name': 'mock',
}

To explain it: you need to disable the root_cache plugin, because having two caches with the same contents would just slow you down. You need to specify a size for the thinpool. It can be shared across all mock buildroots, so make sure it's big enough; ideally there will be just one thinpool. Then specify a name for the thinpool: all configs which have the same pool_name will share the thinpool, thus being more space-efficient. Just make sure the name doesn't clash with existing volumes on your system (you can list existing volumes with the lvs command). For more configuration options of the LVM plugin, see the config documentation in /etc/mock/site-defaults.cfg.
Additional notes:
Mock leaves the volume mounted by default so you can easily access the data. To conveniently unmount it, there's the --umount command. To remove all volumes, use --scrub lvm; this will also remove the thinpool, but only if no other configuration has its volumes there.
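For example, with a hypothetical config named fedora-rawhide-x86_64, the invocations might look like this:
mock -r fedora-rawhide-x86_64 --umount
mock -r fedora-rawhide-x86_64 --scrub lvm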
Make sure there's always enough space; a thinpool that overflows will stop working.

Nosync - better IO performance

One of the reasons why mock has always been quite slow is that installing a lot of packages generates heavy IO load. But the main IO bottleneck is not unpacking files from packages to disk but writing Yum DB entries. Yum DB access (used by both yum and dnf) generates a lot of fsync(2) calls. Those don't really make sense in mock, because people generally don't try to recover mock buildroots after hardware failure. We discovered that getting rid of fsync improves package installation speed by almost a factor of 4. Mikolaj Izdebski developed a small C library called 'nosync' that is LD_PRELOADed and replaces the fsync family of calls with (almost) empty implementations. I added support for it in mock.
How to activate it?
You need to install the nosync package (available in Rawhide), and on multilib systems (x86_64) you need the version for both architectures. Then it can be enabled in mock by setting
config_opts['nosync'] = True
It requires a few extra steps to set up, but it really pays off quickly.
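For example, on an x86_64 machine the install step might look like this (the exact multilib package spelling is my assumption; adjust to what your repos provide):
yum install nosync.x86_64 nosync.i686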

DNF support
Mock now has support for using DNF as package manager instead of Yum. To enable it, set
config_opts['package_manager'] = 'dnf'
You need to have dnf and dnf-plugins-core installed. There are also command-line switches --yum and --dnf which you can use to choose the package manager without altering the config. The reason for this is that DNF is not yet 100% mature and there may be situations where you need to revert back to Yum to install something.
You can specify a separate config for DNF with the dnf.conf config option. If you omit it, mock will use the configuration you have for Yum (config_opts['yum.conf']). To reuse the Yum cache with DNF you have to explicitly set
cachedir=/var/cache/yum
in the dnf.conf or yum.conf config option.
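A minimal sketch of how that might look in a mock config (only the cachedir line is the point here; in a real config the repo sections would follow, as in the yum.conf examples shipped with mock):
config_opts['dnf.conf'] = """
[main]
cachedir=/var/cache/yum
debuglevel=2
"""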
Otherwise, it should behave the same in most situations and also be a bit faster.

Printing more useful output on terminal
Mock will now print the output of Yum/DNF and rpmbuild. It also uses a pseudoterminal to trick them into believing they're attached directly to a terminal, which also captures the package download output, including progress bars. That way you know whether mock is downloading something or cannot connect. You need to have debuglevel=2 in your yum.conf for this to work.

Concurrent shell access to the buildroot
Non-destructive operations take just a shared lock instead of an exclusive one. That means you can get a shell even though there's a build running. Please use this with caution, so you don't alter the environment of the running build. Destructive operations like clean still need an exclusive lock.

Executing package management commands
Mock now has a --pm-cmd switch which you can use to execute an arbitrary Yum/DNF command. Example:
mock --pm-cmd clean metadata-expire
There are also --yum-cmd and --dnf-cmd aliases which force the use of a particular package manager.
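For instance, to refresh the metadata cache specifically with DNF (the command choice is just an illustration):
mock --dnf-cmd makecache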

--enablerepo and --disablerepo options
Mock now passes --enablerepo/--disablerepo to the package manager whenever it invokes it. This means you can keep a list of disabled repos in your mock config and enable them only when you need them, as in the sketch below.
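A hypothetical invocation, assuming a repo with id 'local' is defined but disabled in the config (the srpm name is made up):
mock --enablerepo=local --rebuild foo-1.0-1.fc22.src.rpm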

short-circuit
rpmbuild has a --short-circuit option that can skip certain stages of the build. It can be very useful for debugging builds which fail in later stages. Mock now also has a --short-circuit option which leverages it. It accepts the name of the stage that will be the first one to be executed. Available stages are: build, install and binary. (The prep stage is also possible, but I'm not the one who added that and I have no idea what it's supposed to do :D). Example:
mock --short-circuit install foo.1.2-fc22.src.rpm
rpmbuild arguments
You can specify arbitrary options that will be passed to rpmbuild with --rpmbuild-opts, mainly for build-debugging purposes.
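For example, passing rpmbuild's --noclean so the build tree is kept for inspection (the option choice and srpm name are illustrative):
mock --rebuild --rpmbuild-opts="--noclean" foo-1.0-1.fc22.src.rpm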

Configurable executable paths
Mock now also supports specifying the paths to the rpm, rpmbuild, yum, yum-builddep and dnf executables, so you can use versions other than the system-wide ones. This may be useful for Software Collections in the future.
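I won't vouch for the exact option names here; judging from site-defaults.cfg they should look something like this (treat the keys as assumptions and check the shipped config):
config_opts['yum_command'] = '/usr/bin/yum'
config_opts['dnf_command'] = '/usr/bin/dnf'
config_opts['rpmbuild_command'] = '/usr/bin/rpmbuild'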

Automatic initialization
You don't need to call --init; you can just do --rebuild and it will run the init for you. It will also correctly detect when the initialization didn't finish successfully and start over.

More thorough cleanup logic
There should be no more mounted volumes left behind after you interrupt a build with ^C. And if there are (e.g. because mock was killed), it should handle them without crashing.

Python 3 support
The main part of mock should be fully Python 3 compatible. Python 2 is still used by default. The unported parts are the LVM plugin and mockchain.

--unique-ext
This feature has already been present for a few releases, but it seems only a few people know about it, so I'd like to mention it even though it's not new. I quite often find myself in a situation where I want to build a package with a given config, but there's some other build already running with it, so I cannot. A lot of people just copy the config and change the name of the chroot, but that means additional work, and most importantly the copy cannot use the same caches as the original config, because mock sees them as something different. Unique-ext provides a better way. It's a command-line switch that adds a suffix to the chroot name, so mock creates a different chroot, but it uses the same config and in turn also the same caches. The caching mechanisms provide locking to make this work. Using unique-ext with the LVM plugin means that the new chroot is based on the postinit snapshot. There's a lock that prevents the postinit snapshot from being unnecessarily initialized twice.
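A sketch of running a second, parallel build from the same config (I recall the CLI spelling as --uniqueext, but check mock --help; the suffix and srpm name are arbitrary):
mock -r fedora-rawhide-x86_64 --uniqueext=build2 --rebuild foo-1.0-1.fc22.src.rpm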

If you have any questions, ping me on #fedora-devel (nick msimacek)

October 08, 2014

Software Freedom Day Hanoi: Fedora Report

I was in Hanoi last month to participate in the APAC ambassadors meeting, as well as the Software Freedom Day event. This post summarizes notes from the trip.

APAC Ambassadors Meeting

On the first day, we had a meeting set up, to go through the current year’s budget, and discuss concerns with our respective countries. Tuan, Thang, Alick, Somvannda and yours truly were physically present. Gnokii, Kushal and Ankur participated for significant portions of time, remotely over IRC.

Fedora Folks posing at the SFD banner Photo: Unknown

We started with a general discussion about the APAC situation. Some (not sic) moments:

Tuan:

We have a lack of physical meetups among APAC folks.

Tuan was my roommate in Prague (at Flock), and we had a brief discussion about this. For most APAC meetings, at least until a few weeks ago, there would be very few representatives from Asian countries. When the budget was to be made for the current FY, Tuan announced it over the mailing lists, but nobody showed up.

We discussed how this situation is improving. In November this year, a FAD is planned where folks have been invited to help with the budget planning. The recent meetings have run over an hour and we regularly have irregular meetings these days ;) While that is indeed trouble, it indicates interest, which is a good thing.

Kushal:

We should stop people from treating Fedora as a travel agency.

For context, here’s a blog post around the same concern. Many Indians just want to become ambassadors because they think that warrants them funds to travel. It’s of course great if people have been contributing in volume and want funds to travel and speak about it - in fact, that’s encouraged. But in recent times, Kushal says he receives mentorship requests where the person doesn’t want to go through the mentoring process and wants to gain ambassador status directly. Kushal quoted examples and described how the mentors team in India dealt with them.

Us hard at work Photo: Somvannda

Next, we worked on the most important bit: the budget. Alick volunteered to review and update Q2 and I helped with Q1. Alick got lucky since most events planned in Q2 were cancelled and there wasn’t much to review. Thang helped me cross-check events, swag requests and travel tickets from Q1.

After a lunch break, we turned to discuss Ambassador Polos and FAD Phnom Penh. I had been working on cleaning up the entire APAC trac for two weeks, but was unable to complete it because people hardly respond. Finally, at the meeting, with help from everyone present, the APAC trac is now Sparkly!

Software Freedom Day at the Uni

This was my second SFD, first being the one I helped organize in school. The way this one was organized was definitely more colorful - it started off with a Tux dance!

Alick had some swag flown over from China, so we used it up at our booth - it disappeared quickly, even before we had a chance to grab some of the folks and do some Fedora preaching. Nonetheless, it was super fun. I think we managed to direct some of the students to our Fedora room for the afternoon sessions. With the swag all gone and not much agenda for the rest of the morning, we headed to the main hall.

Sponsors being felicitated Photo: Alick

I’m going to have to quote the following line from Alick’s report:

Alick:

Sarup, Somvannda, and I are honored to be introduced as special international guests to the event (in English).

It was funny (although exciting) to attend the first few talks in the regional language. Well, we even attended the “How to contribute to Fedora without programming skills” keynote by Tuan in Vietnamese ;)

Come afternoon, we moved to our Fedora room. While Trang and I went around gathering folks to attend our sessions, Thang introduced the attendees to the Fedora Project - who we are, what we do, our goals, and why bother. He did his session in Vietnamese, and the attendees were visibly glued.

Next was Alick’s session on FOSS Software Defined Radio. I think he did a great job introducing the topic - it was a topic unfamiliar to me, but now I get the basics. I liked his idea of motivating through examples.

Finally, I did my mini workshop on FOSS 101. Prior to the event, we had a little debate around what I should talk about - GSoC? Git? Rails? From my understanding of the audience, I decided to do a diluted version of my FOSSASIA workshop. I introduced attendees to the idea of FOSS, put up quotes sent to me by Sumanah and Tatica (who I’ve always felt are great examples of our awesome lady FOSS activists) and showed them around IRC & the idea of mailing lists. I wrapped up with a basic introduction to Git (for which I should thank Alick for his help with the demos and Trang for the translation).

Arrangements

Day 0 was 18 September 2014. I was put up at the Hanoi Legacy Hotel near Hoan Kiem lake. My roommate was Alick, who arrived later in the evening. Somvannda was at the hotel a day prior. Tuan and Thang, being the locals, were our awesome hosts. For dinner on all but the last day, we had street food near the hotel. On the last day, we had dinner with the VFOSSA folks, other organizers and volunteers.

The meeting was held on the first day, 19 September, at the VAIP office. The SFD event was held on the second day, 20 September at Hanoi University of Engineering and Technology.

Fun Memories

As you would guess, we had fun along the way! On Day 0, Somvannda and I went around Hanoi’s streets hunting for Egg Coffee.

For dinner, Tuan and Thang took the rest of us to a nearby food joint, where we tried out some rather interesting Vietnamese food. I (kinda) picked up how to use chopsticks too.

Newly acquired chopstick skills Photo: Me

On Day 1, after the meeting was over, we headed to the Water Puppet Theater - a unique concept. For dinner, we roamed the street for local food, followed by a brief trip to the Night Market in the Hanoi Old Quarter. I wish we could have revisited the place on the final day as well, but we couldn’t as the events ended late.

On the final day, we were joined by the awesome (hopefully significant future contributors) Trang and Phuong. Trang made us try “Corn in Fish Sauce” and we wrapped up with the usual beer :-)

It was definitely a weekend well spent and I’d like to thank everyone for the fun and productive time!

September 16, 2014

Don't stop at the summer project!

Note: this is work in progress. I’d like to improve this post to include universal opinions, so I’d appreciate any feedback!

Ouch, two summers with Fedora are over! As far as GlitterGallery news goes: Emily’s working on setting up a demo for the design team, fantastic Paul is scrubbing up some final pre-release bugs, and more potential contributors are now showing up. As far as GSoC itself is concerned: Google’s sent over the money, the tshirt should be here soon enough, and it doesn’t look like any more formalities are pending. Time to pack, find a job, and say goodbye to friends at the Fedora Project, right?

Wrong.

The other day, Kushal called me up and mentioned his concerns with students disappearing once their GSoC projects are over and they have their money. The experienced folks in most communities share the same disappointment. I couldn’t agree more, and promised to write about it, hence this post. I’m not sure who the target readers should be, but my best guess would be anyone aspiring to start contributing to a FLOSS project, especially students hoping to do GSoC next year :-)

Why bother contributing to a FLOSS project?

Let’s be done with the incentives first. Sure, there’s the geek badge associated with it, and you’re helping make the world a better place. What other incentives do FLOSS communities offer? Here are the ones that attract me:

  • Something meaningful to work on: If you’re a student stuck in a place where they make you do things you aren’t motivated about (I hear jobs aren’t too different), then being involved in a community can make the time you spend meaningful. It doesn’t really have to be a FLOSS community, but in my case it seems to have worked out well. I would rather feel awesome about having built a small piece of software that does something for me than mug up an outdated book on “Net Centric Programming”.

  • Jobs, money, opportunities: Depending on your case, you may not necessarily get paid, but typical FLOSS communities have participants from all over the world, so you get exposed to a lot of opportunities you wouldn’t hear of otherwise. Many of my professors think the idea of writing FLOSS is stupid; as a result, their understanding of opportunities is limited to campus placements. It doesn’t have to be! I have come to learn that there’s an entire industry of people who land jobs just based on links to stuff they’ve worked on.

  • Friends around the world: It’s embarrassing that I didn’t know of a country called the Czech Republic until about last December. Now I not only have friends from the Czech Republic who I speak to quite often, I was actually in Prague a month ago and even did a talk! My summer mentor is from the USA. My closest friend is German. On a daily basis, I probably end up interacting with someone from each continent. It’s a lot of fun learning how things work in other places. If you’re from India like me, the idea of trains departing at times like 15:29 should impress you.

Why not contribute?

However much FLOSSy geeks will brag about their flawlessness, FLOSS communities aren’t for everyone. Some hints:

  • You need a certificate for showing up: I wish I could wrap two bold tags around that. Please contribute only if you want to do it for the fun of it. Most people in any community are there because they want to improve or utilize a skill, not because they can stack up a bunch of certificates on their resume.

  • You need to be spoonfed: Unfortunately, as much as everyone would like to help new contributors to a project, showing people around takes time. Sure, we’re willing to put in an hour or two every week finding links and emailing you. But if you aren’t going to read them, and learn to find more links, then you’re making things difficult.

  • You need to be made an ambassador first thing, just so you have a tshirt: Here’s the thing about ambassador programs - they were created to provide structure for contributors to show off the awesome stuff they’re building. If you aren’t contributing, you need to do that first. Ideally, if there are incentives coming up (swag, flight tickets, whatever), they go to the active folks first. Of course there are exceptions once in a while, when promising new people are encouraged with incentives - that’s different. (I have had a junior ask me what organization offers the best perks so he could contribute there, and another one wanting to fly to a different continent at a community’s expense, because she wanted to attend a Django workshop.)

In my case, I got involved with the Fedora community through a design team project I ended up co-authoring. But I’d say that was just a starting point! I don’t have unlimited time thanks to university classes, but with what I have, I contribute where I can. It really doesn’t have to be limited to my project (although that’s where I focus my efforts) - it could be a random broken wiki page. These days I’m cleaning up expired requests on our request-tracking system. A while ago, I started with Inkscape and attempted Fedora.next logos. On other days I hang out on IRC channels geared at helping newbies. Even though Fedora infra doesn’t do Ruby-oriented projects, I sometimes hang out in their meetings to see what they’re up to. I don’t understand how Marketing works, so I’m planning to give that a shot next. Ultimately, the goal is to quickly pick up a skill while improving Fedora as a community in whatever small way I can.

That’s something I’d request everyone to do. Being involved in GSoC or a similar summer engagement is fun - you get to work on something large enough to be accountable for, while being small enough to pick up quickly. But try to look around - find projects that your project depends on. Fix them. Find projects that could use yours. Fix them. If they don’t exist, make them! I bet Kushal wants to convey the same message: just don’t stop with your project. A successful summer is a good thing - but if you’re simply going to disappear, then its purpose is defeated. You have to justify the time your mentor spent on you! :-)

On an ending note, how would you look for more areas to contribute to? It’s simple - ask your mentor. Or just try to remember the inconvenience you had with library X compiling too slowly. It was a good thing you overlooked it then, because you had to keep track of the bigger picture. Now’s the time to return to it and fix it. Also, try to attend events relevant to what you’re working on. I’m really lucky Gnokii invited me to LGM in his country - I ended up finding another project to use within GlitterGallery, for a start.

There are almost always events happening around where you live. I’m in Coimbatore, which is relatively sleepy, but I travel to Bangalore about every month to participate in an event. If you find an event that could benefit from you, try asking the organizers if you could be funded. Just don’t stop!

September 13, 2014

ReFS: All Your Resilience Are Belong To Us

(grammar intentional) The last few months I've been looking into the Resilient File System (ReFS), which has only undergone limited analysis so far. Let's fix that, shall we!

Before we begin: I've found these to be the best existing public resources concerning the FS so far; they've helped streamline the investigation greatly.

[1] blogs.msdn.com - Straight from the source, a msdn blog post on various concepts around the FS internals.

[2] williballenthin.com - An extended analysis of the high-level layout and data structures in ReFS. I've verified a lot of these findings using my image locally and expanded upon various points below. Aspects of the described memory structures can be seen in the data below.

[3] forensicadventures.blogspot.com - Another good analysis, of particular interest is the ReFS / NTFS comparison graphic (here).

Note that in general it's good to be familiar w/ generic FS concepts, such as B+ trees and journaling.

Familiarity w/ the NTFS filesystem also helps.

Also note that I'm not guaranteeing the accuracy of any of this; there could be mistakes in the data and/or algorithm analysis.

Volume / Partition Layout

The size of the image I analyzed was 92733440 bytes with the ReFS formatted partition starting at 0x2010000.

The first sector of this partition looks like:

byte 0x00: 00 00 00 52   65 46 53 00   00 00 00 00   00 00 00 00
byte 0x10: 46 53 52 53   00 02 12 E8   00 00 3E 01   00 00 00 00
byte 0x20: 00 02 00 00   80 00 00 00   01 02 00 00   0A 00 00 00
byte 0x30: 00 00 00 00   00 00 00 00   17 85 0A 9A   C4 0A 9A 32

Since presumably some size info needs to be here, it is possible that:

vbr bytes 0x20-0x23 : bytes per sector (0x0200)
vbr bytes 0x24-0x27 : sectors per cluster (0x0080)

Thus:

1 sector = 0x200 bytes = 512 bytes
0x80 sectors/cluster * 0x200 bytes/sector = 0x10000 bytes/cluster = 65536 = 64KB/cluster

Clusters are broken down into pages which are 0x4000 bytes in size (see [2] for page id analysis).

In this case:

0x10000 (bytes / cluster) / 0x4000 (bytes/page) = 4 pages / cluster

Also:

0x4000 (bytes/page) / 0x200 (bytes/sector) = 0x20 = 32 sectors per page

VBR bytes 0-0x16 are the same for all the ReFS volumes I've seen.

This block is followed by 0's until the first page.

Pages

According to [1]:

"The roots of these allocators as well as that of the object table are reachable from a well-known location on the disk"

On the images I've seen, the first page id is always 0x1e, starting 0x78000 bytes after the start of the partition.

Metadata pages all have a standard header, which is 0x30 (48) bytes in length:

byte 0x00: XX XX 00 00   00 00 00 00   YY 00 00 00   00 00 00 00
byte 0x10: 00 00 00 00   00 00 00 00   ZZ ZZ 00 00   00 00 00 00
byte 0x20: 01 00 00 00   00 00 00 00   00 00 00 00   00 00 00 00

bytes 0-1 (XX XX) are the page id, which is sequential and corresponds to the 0x4000 offset of the page
byte 8 (YY) is the sequence number (per the layout above)
bytes 0x18-0x19 (ZZ ZZ) are the virtual page number

The page id is unique for every page in the FS. The virtual page number will be the same between journal / shadow pages, though the sequence is incremented between them.
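To make the header concrete, here is a minimal Python sketch of pulling these fields out of a raw page (the field widths beyond the offsets shown above are my guesses):

import struct

def parse_page_header(page):
    # page: the raw bytes of one 0x4000-byte metadata page
    page_id  = struct.unpack_from('<H', page, 0x00)[0]  # XX XX
    sequence = struct.unpack_from('<I', page, 0x08)[0]  # YY
    virtual  = struct.unpack_from('<H', page, 0x18)[0]  # ZZ ZZ
    return {'page_id': page_id, 'sequence': sequence, 'virtual_page': virtual}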

From there, the root page has a structure which is still unknown (likely a tree root, as described in [1] and indicated by the memory structures page of [2]).

The 0x1f page is skipped before pages resume at 0x20 and follow a consistent format.

Page Layout / Tables

After the page header, metadata pages consist of entries prefixed with their length. The meaning of these entries varies and is largely unknown, but various fixed and relational byte values do show consistency and/or exhibit certain patterns.

To parse the entries (which might be referred to as records or attributes), one could:

  • parse the first 4 bytes following the page header to extract the first entry length
  • parse the remaining bytes of the entry (note that the total length includes the first four bytes containing the length specification).
  • parse the next 4 bytes for the next entry length
  • repeat until the length is zero
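In Python, that walk might look like the following sketch (based on the assumptions above, not on the actual resilience.rb code):

import struct

def walk_entries(page, header_len=0x30):
    # yield each length-prefixed entry that follows the 0x30-byte page header
    offset = header_len
    while offset + 4 <= len(page):
        length = struct.unpack_from('<I', page, offset)[0]
        if length == 0:
            break  # a zero length terminates the entry list
        yield page[offset:offset + length]  # the total length includes these 4 bytes
        offset += length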

The four bytes following the length often take on one of two formats, depending on the type of entity:

  • the first two bytes contain the entity type, with the other two containing flags (this hasn't been fully confirmed)
  • if the entity is a record in a table, these first two bytes will be the offset to the record key and the other two will be the key length.

If the entry is a table record:

  • the next two bytes are the record flags,
  • the next two bytes are the value offset
  • the next two bytes are the value length
  • the next two bytes are padding (0's)

These values can be seen in the memory structures described in [2]. An example record looks like this:

bytes 0-3: 50 00 00 00 # attribute length
bytes 4-7: 10 00 10 00 # key offset / key length
bytes 8-B: 00 00 20 00 # flags / value offset
bytes C-F: 30 00 00 00 # value length / padding

bytes 10-1F: 00 00 00 00   00 00 00 00   20 05 00 00   00 00 00 00 # key (@ offset 0x10 and of length 0x10)
bytes 20-2F: E0 02 00 00   00 00 00 00   00 00 02 08   08 00 00 00 # -|
bytes 30-3F: 1F 42 82 34   7C 9B 41 52   00 00 00 00   00 00 00 00 #  |-value (@ offset 0x20 and length 0x30)
bytes 40-4F: 08 00 00 00   08 00 00 00   00 05 00 00   00 00 00 00 # -|
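Combining the record layout with the entry walk, here's a sketch of extracting the key and value from a single table-record entry (offsets per the example above; treat all of this as assumption-level):

import struct

def parse_record(entry):
    # entry: one length-prefixed entry, as yielded by walk_entries
    key_offset, key_length = struct.unpack_from('<HH', entry, 0x04)
    flags, value_offset = struct.unpack_from('<HH', entry, 0x08)
    value_length, _padding = struct.unpack_from('<HH', entry, 0x0C)
    key = entry[key_offset:key_offset + key_length]
    value = entry[value_offset:value_offset + value_length]
    return flags, key, value

Applied to the example record, this yields the 0x10-byte key at offset 0x10 and the 0x30-byte value at offset 0x20.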

Entries

Various attributes, and the values within them, take on particular meanings.

  • the first attribute (type 0x28) has information about the page contents,
  • Bytes 1C-1F of the first attribute seem to be a unique object-id / type which can identify the intent of the page (it is consistent between similar pages on different images). It is also repeated in bytes 0x20-0x23
  • Byte 0x20 of the first attribute contains the number of records in the table. This value is repeated in the record collection attribute (see next bullet)
  • Before the table collection begins there is a 0x20-length attribute containing the number of entries at byte 0x14. If the table gets too long, this value will be 0x01 instead and there will be an additional entry before the collection of records (this entry doesn't seem to follow the conventional rules, as there are an extra 40 bytes after the entry end indicated by its length)
  • The collection of table records is simply a series of attributes, all beginning w/ the same header containing key and value offset and length (see previous section)

Special Pages

Particular pages seem to take on specific roles:

  • 0x1e is always the first / root page and contains a special format. 0x1f is skipped before pages start at 0x20
  • On the image I analyzed 0x20, 0x21, and 0x22 were individual pages containing various attributes and tables w/ records.
  • 0x28-0x38 were shadow pages of 0x20, 0x21, 0x22
  • 0x2c0-0x2c3 seemed to represent a single table, with various pages being the table, continuation, and shadow pages. The records in this table have keys w/ a unique id of some sort, as well as cluster ids and checksums, so this could be the object table described in [1]
  • 0x2c4-0x2c7 represented another table w/ shadow pages. The records in this table consisted of two 16-byte values, both of which refer to keys in the 0x2c0 tables. If those are the object ids, this could potentially be the object tree.
  • 0x2c8 represents yet another table, possibly a system table due to its low virtual page number (01)
  • 0x2cc-0x2cf consisted of a metadata table and its shadow pages; the 'ReFs Volume' volume name could be seen in the UTF data there.

The rest of the pages were either filled with 0's or were non-metadata pages containing content. Of particular note are pages 0x2d0-0x2d7, containing the upcase table (as seen in NTFS).

Parser

I've thrown together a simple ReFS parser using the above assumptions and put it up on github via a gist.

To use it, download it and run it using ruby:

ruby resilience.rb -i foo.image --offset 123456789 --table --tree

You should get output similar to the following:

Of course if it doesn't work it could be because there are differences between our images that are unaccounted for, in which case if you drop me a line we can tackle the issue together!

Next Steps

The next steps on the analysis roadmap are to continue diving into the page allocation and addressing mechanisms; there are most likely additional mechanisms to navigate to the critical data structures immediately from the first sector or page 0x1e (since the address of that is known / fixed). Also, continuing to investigate each page and analyzing its contents, especially in the scope of various file and system changes, should go a long way towards revealing semantics.

August 31, 2014

Google Summer of Code final update

This has been one of the most interesting summers I have had: I worked in the Google Summer of Code program. Being my first time, it was really interesting to work alongside the wonderful people in the open source community.


This summer I was working with the fedora-infra team on the Fedora College project. The project was to develop a virtual classroom environment for new contributors to the community. Though it may look a bit mundane, it helps to solve a major problem: introducing new contributors to the Fedora Project.


The main part of the project (i.e. the coding part) was completed well within the proposed timeline, but another part, deploying and running the project, still remains; we are planning to package it in the coming days in order to deploy it on Fedora infrastructure. We want to launch the project as soon as possible, but a large responsibility also lies with the team that manages and creates the content for it. The project lays great emphasis on video content and multimedia lectures, so we need a dedicated team for managing these things.

Also, the web application is currently not too tightly coupled with the Fedora infrastructure, though the API classes and the Fedora message parser are well written and can help significantly with further integration into the Fedora messaging system.

Once the project was completed, I moved to Hong Kong for my master's studies. From the curriculum it appears I won't have much time to continue the community effort, but I can surely help when I am needed.

Anyone who would like to contribute and test things can do so here: http://github.com/hammadhaleem/fedora-college


Thanks for reading this post.

August 20, 2014

GSOC : Final Update
After months of scrounging through the xlators to learn how they work, we've finally come to the end of my glusterfsiostat project. You can check out the latest source code from https://forge.gluster.org/glusterfsiostat.

I've worked at a fast pace over the past couple of days and completed all the tasks that were left. This includes finishing up the python web server script, which now supports displaying live I/O graphs for multiple Gluster volumes mounted at the same time. The server and the primary stat scripts also now report an error if profiling is not turned on for a volume. Following were some screenshots of the server live in action.

The approach here is quite similar to Justin Clift's tool (https://github.com/justinclift/glusterflow), but I've tried to build this as a bare-bones package, since Justin's tool requires you to first set up an extensive stack (an Elastic Search server, logstash, etc.). My aim is that this tool should be self-sufficient: anyone can download it once and use it, without completing dependencies first. The response from my mentor about the work done has been pretty supportive. I look forward to improving this project and working on some more exciting ones with GlusterFS in the future.

August 18, 2014

Bugspad and Future Plans

Code cleaning, rigorous testing and bug fixing went on this week. I tested the instance with bigger datasets and measured the response times. The current code is also a bit untidy and needs some refactoring, so I have started working on a design document with explicit details of the application's workflow. I am currently hand-drafting it and will digitise it later (for the time being I am uploading my workflow explanation charts, which are not of great quality :P). I have divided the whole workflow up by URL, and then subdivided it into the components which directly affect performance (especially speed), i.e. all the SQL/cache queries being made. This gives a clear idea of the purpose and role of each component. It should also invite more contributors and make the current code easier to understand. Finally, it will make it possible to focus on the time bottlenecks in the workflow and to experiment with different available tools and methods. That is what I will be doing next week. Cheerio!

August 15, 2014

Notification Emails in GlitterGallery

I just pushed changes for notification emails in GlitterGallery. Now, when someone comments on a project, an email is sent to all the people who follow the project.

Here’s the Pull Request

If you’d like to see it in action, login to my demo and try it out! :)

Google Summer of Code 2014
Summer of Code week 11 Report

So here we are, in the last few weeks of Google Summer of Code. It has been a really exciting journey for me, and I consider this my first real set of contributions to an open source project.

As always, the last few weeks of any project go towards documentation and testing, and so did mine. During this period we worked on a set of targets and pushed them to my mentor's repository.

To list them briefly:

1. Added categories and sort-by-category features. These endpoints, though quite effective in helping users find content, had been largely left out earlier in the project.

2. Worked on improving the project documentation, including the API docs, project docs, sample content and the code docs.

3. We created a request-for-resources ticket with the Fedora admins for the initial deployment of the project.
Ticket: https://fedorahosted.org/fedora-infrastructure/ticket/4480#comment:3

4. But before we can actually get the project hosted, it's important to package the product and get it included in the Red Hat Bugzilla.

I guess most of the time left will be spent on packaging and testing. The project has some external dependencies and may require us to revamp the code.

The demo for the project is:
http://demo.engineerinme.com or http://demo.engineerinme.com/admin (visible only to members of Fedora Project groups, i.e. the Summer Coding group and the Proven Packagers group)


Thanks
Hammad Haleem

Flock 2014: Report

Flock 2014 took place in Prague this year. Here’s a photo to start with:

Tshirt ;)

Although Flock wasn’t the first FOSS/dev/tech conference I have been to, it was my first Fedora-specific event. Definitely special in its own way - most of the speakers and attendees are from within the Fedora community, so basically every third person is someone you have chatted with over IRC, seen on Planet, or someone whose wiki page you have stumbled upon at some point. Which means you walk to the end of the corridor, look at a person’s badge and say “Oh, so YOU are that guy!”

I wouldn’t say I attended a lot of the talks, since many of them would go over my head; but I did spend a good amount of time interacting with the people present, offering to help with their projects and finding potential contributors for mine. After all, the Flock organizers were awesome enough to live stream and upload all the sessions to YouTube (I’m still watching some): Flock channel on YouTube.

The first day started with the opening by Matt, followed by a keynote by Gijs Hillenius on how FOSS has been adopted in the EU. It was inspiring; I was left wondering how I could make an impact, at least at the university level for a start. The next one I attended (online though) was on the State of Fedora Fonts, by Pravin Satpute.

I spent most of the remainder of the first day in the hackroom, reviewing slides for my talk, scheduled for later during the day. Mine followed Marina’s session on Gnome OPW which I attended in part - she did a great job of outlining how the community had succeeded in increasing participation of women within FOSS communities, events and projects. Her talk also reminded me that Marie (riecatnor) was here at Flock! I’ve known Marie for a while through the Fedora design IRC, but we hadn’t met in person.

Soon after, I did a talk called the Curious Case of Fedora Freshmen. It covered how freshmen find it difficult to cope with more experienced folk speaking complex things to them or simply not paying enough attention. I brought up various programs that would help freshmen, should they be worked on. The talk was followed by a pretty extensive discussion. I’ll start wiki pages on some of the stuff I brought up during the session within a couple of weeks’ time. Slides for my talk are here and the video is here.

One interesting session I attended was by Chris Roberts and Marie Catherine Nordin - on Fedora badges. Chris did a quick run through about how fedmsg awards badges, and Marie followed up with her Fedora badges internship (which I have been impressed with like forever). Post-session, I introduced myself to Marie and we’ve been super friends since! :D

In fact, at this point you should read Marie’s blog post about Flock. It covers a bit of what we did over our own impromptu hackfest - as she calls it ;) Marie spent a great deal of time explaining to me how to play with nodes, and we worked a bit on Waartaa’s logo. She also did a glyph from scratch - it was quite amazing!

Snake man!

Among the other sessions I attended were one on the state of the Ambassadors Union by Jiri, Advocating Fedora.next by Christoph, Improving the Ambassadors Mentor program by Tuan (online) and Meet your FESCo (filled with Josh Boyer’s humor).

Fun stuff: on the first evening was FudPub, where we competed over who could take in more beer ;) Thanks to the organizers, we were on a boat another evening. We did a tour of Prague during the night - pretty mesmerizing! On other days, Gnokii took us to Budvarka, where some of us had lots of local food and beer ;)

Budvarka

On the final day, Marie, Ralph, Toshio, Pingou, Arun and I, among others, went on a quick city trip before we headed home :) I’ll have to say, I’m now richer in terms of memories and a few badges ;)

Me in Prague

I’d really like to thank the Fedora community for having supported me on this trip! You guys deserve a badge ;)

August 11, 2014

Xtreme Programming into Bugspad

I am now in the last phase of my project. Due to delays and reduced communication after the midterm, caused by my poor internet connectivity (which was my sole responsibility, I agree :( ), I could not discuss much with my mentor. However, I am planning to use my backup plan to discuss what remains, and to make the implemented features robust and error-free. It will be my toughest week of the project, testing things out and ironing out the last bits of the code. Here I go, off to have a taste of XP (Extreme Programming).
PS. Was short on words!


August 09, 2014

GSoC - week 10+11
Last week I've been working on improving the LVM plugin's thinpool sharing capabilities. I didn't explain LVM thin provisioning before, so I'll do it now to give the rationale for what I'm doing.
With classical volumes, you have a volume with a given size, and it always occupies the whole space that was given to it at the time of creation. That means that when you have more volumes, you usually can't use the space very efficiently, because some of the volumes aren't used up to capacity while others are full. Resizing them is a costly and risky operation. With LVM thin provisioning, there's one volume called a thinpool, which provides space to the thin volumes created within it. The thin volumes take only as much space as they need, and they don't have a physical size limit (unless you set a virtual size limit). That means that space not used by one volume can be used by another.
Previously there was one thinpool per configuration, which corresponded to one buildroot. It could have snapshots, but there was still only one buildroot that could be used at a time. Now you can use mock's unique-ext mechanism to spawn more buildroots from a single config. Unique-ext was there even before I started making changes, but now I've implemented proper support for it in the LVM plugin. It's a feature that was designed mostly for buildsystems, but I think it can also be very useful for regular users who want to have more builds running at the same time. With the LVM plugin, the thinpool and all the snapshots are shared between the unique-exts, which means you can have multiple builds sharing the same cache, with each one based on a different snapshot. The naming scheme had to be changed to allow multiple working volumes where the builds are executed. Mock implements locking of the initial volume creation, so if you launch two builds from the same config and there wasn't any snapshot before, only one of the processes will create the initial snapshot with the base packages. The other process will block for that time, because otherwise you would end up with two identical snapshots, and that would be a waste of resources.
The other sharing mechanism that is now implemented is sharing the thinpool among independent configs. In that case the snapshots aren't shared, because the configs can be entirely different (for example, different versions of Fedora), but you can have just one big logical volume (the thinpool) for all mock-related stuff, which can save a lot of space for people who often use many different configs. You can set it with config_opts['pool_name'] = 'my-pool', and then all the configs with pool_name set to the same name will share the same thinpool.
Other than that I was mostly fixing bugs and communicating with upstream.

This week I've been at Flock, and it has been amazing. There were some talks relevant to mock, most notably State of Copr build service, which will probably use some of the new features of mock in the future, and Env&Stacks WG plans, which also mentioned mock improvements as one of the WG's areas of interest.

August 06, 2014

Up from slumber into the pre alpha release mode.

A really long gap of a week, the slumber period, was caused by the havoc-wreaking TPSW department of our college, due to which I could not work much. Fortunately, that period is now over. Since I could not devote much time to my work, I am going to make up for the lost time in the coming days. The remaining planned work:

  • Redisifying the mysql queries.
  • Fixing any outstanding bugs
  • Working with my mentor on rpm packaging of the code.

After this, we will work on the extra features for the admin interface, such as permissions and groups, in the spirit of the pre-alpha release.


How we handle Issue IDs in GlitterGallery

I came across an interesting problem recently when working on GlitterGallery.

We have a Project model and an Issue model. A Project has many issues, and every Issue belongs to a project. Now when we show issues on a project, we want to assign IDs to them just like GitHub does, so the first issue created on a project would be labeled #1. The difficulty here: Rails doesn’t have a simple built-in method to achieve this. If I just tried to output issue.id, I’d get a number greater than 1, unless this is the first issue ever created throughout the GlitterGallery installation.

Here’s how I went about it.

I call this #1 the sub_id of the issue. I find this sub_id by finding the position of the issue’s id in the array of ids of all the issues in the project. That’s a complicated sentence, so let me explain with an example.

We have a Project A, that has 3 issues.

  • Issue A - id#234
  • Issue B - id#289
  • Issue C - id#324

When I try to access projecta.issue_ids, I get the following array:

[234,289,324]

To find the sub_id of Issue B, I just have to find the position of its ID in this array. 289 is the second element, so the sub_id is 2.

Here’s how that looks in code:

sub_id = (project.issue_ids.index(id) + 1).to_i

And to get the issue from the sub_id - reverse the logic.

issue = Issue.find(project.issue_ids[sub_id.to_i - 1])

There is one downside to this, though: you shouldn’t be doing this if you expect issues to be deleted (fortunately, we don’t). The reason being, when you delete one issue, the sub_ids of the other issues can change. And that’s not good in the long term, because people might get confused by older references to an issue’s sub_id.

The solution in that case could be to add two more columns to your database: one for the sub_id on the Issue model, and an increment counter on the Project model.

If you know a simpler way around this, do let me know! Or even better, let us know at GlitterGallery

EDIT: Emily gave me some feedback about this approach. Here’s what she said:

It still might be better to keep them in the db - I don’t have any benchmarks or what have you, but I’m thinking a db lookup for an issue will be faster than getting an array of all ids, sorting, and searching the array. Just a thought. :)

There is an issue open on our GitHub repo to convert this to storing the sub_ids in the database.

August 05, 2014

Google Summer of Code 2014: Weekly update

Google Summer of Code 2014: Week 8-10 update

We have reached the eleventh week of the Summer of Code program, one of its penultimate weeks, and it has been one of the most exciting summers so far. This week was quite fruitful: we worked on various small aspects of Fedora College, tweaking things and generally concentrating on making things better.

To summarize, this week I worked on the following things:
  1. API documentation: I wrote the documentation for the API and designed web pages to display it. When you install the project, you can view the API documentation at /api/docs/
  2. Yohan pointed out some interesting issues that needed to be taken care of, like the decorators for auth, making paths dynamic, and other minor changes.
  3. Made the GUI admin portal usable and created a template for the admin page.
  4. There were some errors in the delete-media endpoint, and they were corrected for good.
  5. A couple of other issues were also addressed this week: https://github.com/echevemaster/fedora-college/pull/23
Now the project has been formally added to the fedora-infra organization: https://github.com/fedora-infra/fedora-college/

The demo for the project is:
http://demo.engineerinme.com or http://demo.engineerinme.com/admin (visible only to members of a Fedora Project group)


Also, the project doesn't support its own user registration, so you need to register for a Fedora account and then authenticate using Fedora Project OpenID.

Also, this week we students planned a Google Summer of Code meetup at my university. The meetup emphasizes how to contribute to the open source community; more details can be found here: http://gsoc.jmilug.org/. It should be quite interesting, and GSoC student participants from various organizations will be coming down to speak about what they did this summer.


Thanks for reading through the post.
   
Hammad Haleem

July 30, 2014

Google Summer of Code 2014: Week 8-10 update


Hello Folks, 

My project is almost complete. In the past weeks we worked on a few things, and I would like to give an update on the status of my project. This time I have also included a couple of screenshots in the blog post.

To be precise, in the previous weeks I have been working on the following things:
  1. Improved the GUI for the home page. The CSS is inspired by Pinterest. You can see a demo here: http://demo.engineerinme.com/
  2. We also worked on the parser class for the Fedora messaging bus, so that messages sent from Fedora College can be easily parsed.
  3. I have also added the ability to rate tutorials and mark them as favorites. Some screenshots of this are shown below. This is not currently reflected in the demo, but it is present in the code published in my repository.
  4. There is a list of to-dos here: https://titanpad.com/fedoracollege. Once I am done with these, they can be added to the Red Hat Bugzilla. The project can then be created as a package and added to Fedora.

The project has now been formally added to fedora-infra: https://github.com/fedora-infra/fedora-college/


A demo of the project is available at:
http://demo.engineerinme.com or http://demo.engineerinme.com/admin (visible only to members of a Fedora Project group).

Also, the project doesn't support user registration, so you need to register for a Fedora account and then authenticate using Fedora Project OpenID.

Thanks for reading through the post.


July 29, 2014

IsItFedoraRuby new design

The past week I tried to do something about the looks of isitfedoraruby. It was fun using Bootstrap (my first time) and I think the outcome is cool. I tried to use Fedora-like colors, and the font is Liberation Sans, the same as Fedora pkgdb.

You can check the overall changes:

Tables

They are now borderless, with highlighted headings. They are also responsive, which means that if a table is wider than the page it gets its own scrollbar without breaking the rest of the site.

fedorarpms

index page

The index page shows all packaged rubygems along with some interesting info. You can see if a package is out of date: it is highlighted in red. Green, on the other hand, means it is up to date with the latest upstream.

The code that does that is pretty simple. Bootstrap provides some CSS classes for coloring, so I used success for up-to-date packages and danger for outdated ones. I highlighted the whole table row, so I used:

%tr{class: rpm.up_to_date? ? 'success' : 'danger'}

In particular check line 19.

show page

Previously there was a ton of information all in one page. Now the info is still there, but I have divided it into tab sections.

Currently there are 5 tabs.

The main tab has a gem's basic info:

  • Up to date badge (green yes or red no)
  • Gitweb repository url
  • SPEC file url
  • Upstream url
  • Maintainer FAS name
  • Number of git commits
  • Last packager (in case a package is co-maintained)
  • Last commit message
  • Last commit date
  • Description

Basic Info

Then there is a tab about version information:

  • Table with gem versions across supported Fedora versions (rawhide, 21, 20)

Versions

Another important tab is a list of a package's dependencies:

  • One table with dependencies, with a column showing whether each is a runtime or development dep
  • One table with dependent packages

Dependencies

The bugs tab shows all of a package's open Fedora bugs in a table.

Bugs

And lastly, koji builds for the supported Fedora versions only.

Builds

rubygems show page

The description is now on top of the page. Instead of one column, the new look has two columns: one for basic info and one for the dependencies table.

Compare rake:

owner page

I added some info on top of the page about the number of the packages a user owns:

  • Total
  • Up to date
  • Outdated

The table that has an owner's packages is also highlighted to depict outdated and up to date packages.

Here's an embarrassing screenshot which reminds me I have to update my packages...

Owner page

The navigation bar was a PITA to configure and make as responsive as possible. There were a lot of bits and pieces that needed to fit together; here are some of them.

I used a helper method which I found in this SO answer.

I used the same colors as Fedora pkgdb. With the help of a Firefox extension named ColorPicker and http://twbscolor.smarchal.com/, I gave the navbar the color it has now. twbscolor is a cool site that exports your chosen color even as SCSS, which I used along with some minor tweaks.

In responsive mode there is a dropdown menu. That requires some javascript and the steps are:

1. Add *= require bootstrap in app/assets/stylesheets/application.css

2. Add //= require bootstrap in app/assets/javascripts/application.js

3. Add in app/assets/javascripts/application.js:

$('#header-collapse').collapse({
  toggle: false
})

4. Add Bootstrap classes to the header view:

%header.navbar.navbar-default.navbar-fixed-top
  .container
    .navbar-header
      %button.navbar-toggle{ type: 'button', data: {toggle: 'collapse', target: '#header-collapse'}}
        %span.sr-only 'Toggle navigation'
        %span.icon-bar
        %span.icon-bar
        %span.icon-bar
        %span.icon-bar
      = link_to 'FedoraRuby', root_path, class: 'navbar-brand'

    %nav.collapse.navbar-collapse#header-collapse{role: 'navigation'}
      %ul.nav.navbar-nav
        %li{class: is_active?(root_path)}
          = link_to _('Home'), root_path
        %li{class: is_active?(rubygems_path)}
          = link_to _('Ruby Gems'), rubygems_path
        %li{class: is_active?(fedorarpms_path)}
          = link_to _('Fedora Rpms'), fedorarpms_path
        %li{class: is_active?(about_path)}
          = link_to _('About'), about_path

Search field

I wanted the search field to be together with the search button. In bootstrap this is accomplished with input-group-buttons. The final code was:

%ul.nav.navbar-nav.navbar-right
  %li
    = form_tag( { :controller => 'searches', :action => 'redirect' },
    :class => 'navbar-form', :method => 'post') do
      .input-group
        = text_field_tag :search, params[:search] ||= '',
            class: 'search-query form-control',
            placeholder: 'Search'
        %span.input-group-btn
          = button_tag raw('<span class="glyphicon glyphicon-search"></span>'), name: nil, class: 'btn btn-default'

Instead of a search button with text, I used an icon.

There was also another problem regarding responsiveness. At different page sizes the header looked ugly and the search bar ended up under the menu.

I fixed it by adding a media query in custom.css.scss that hides the logo at certain widths.

@media (min-width: 768px) and (max-width: 993px) {
  .navbar-brand {
    display: none
  }
}

Here are before/after screenshots to better understand it.

Before

After

Responsive design

Bootstrap comes with responsiveness by default. In order to activate it you have to add a viewport meta tag in the head of your html, so in app/views/layouts/application.html.haml add:

%meta{ :content => "width=device-width, initial-scale=1, maximum-scale=1", :name => "viewport" }

See full application.html.haml


It sure was fun and I learned a lot during the process of searching and fixing stuff :)

Testing, testing and more testing

The previous week was very eventful. My mentor and I discussed plans for implementing the groups and permissions feature which I had planned earlier. However, we concluded that it would be better to clean up the current code and perform more rigorous testing, so that the currently implemented features are robust and performance-centric. So I have placed the permissions work on the shelf for the time being. So far I have been testing via the API; I filed 1.7 million bugs or so, only to realise that I wouldn't be able to access them, as I had missed the product versions in each, which I had made compulsory as a design decision. I fixed that part and am refiling more bugs. The testing I have done so far gives the following results when only one user makes a request at a time:

  • Fetching 10,000+ bugs takes around 1-2 seconds.
  • Filing a bug via the API takes around 2-3 seconds (on average).
  • Filing bugs via the UI (mechanize) takes 4-5 seconds (on average).

I know the above numbers are not impressive; the reason is that I used MySQL in places where I should have used Redis. I am on that now, along with more testing, which will be followed by the initial RPM packaging of the application. :D


July 27, 2014

Updates with GlitterGallery

Personally, I’ve been troubled with illness for a while now. College has started and time has gotten scarce. However, work on GG is going great as usual. I’d recommend running through the demo hosted at http://glittery-banas.rhcloud.com, since design work is best experienced firsthand.

For Fedora folks following the project, I’d like to mention some highlights:

For starters, we now have a remember-me option, because a couple of people said it’s a hassle having to enter login credentials every time they’re in a new instance of their browser. Of course, it’s optional; it’s only meant to aid you. We’re facing trouble with the 3rd-party login, which suddenly seems to have broken. Paul is investigating it at the moment.

Login page

I’ve improved the toolbars on the Project and User project pages. There are slight changes to the transitions, and the active element is now highlighted properly.

Toolbars

Paul recently rolled out the server-side stuff for GlitterGallery Issues. I’ve also given it some front-end polish. Here are some screenies:

Issues New

Issues List

Other areas I’m currently working on:

  1. Slideshow display for project images (80% complete)
  2. Multiple uploads for project components (50% complete)
  3. OpenShift QuickStart (stuck)

July 24, 2014

GlitterGallery's Issue Tracker

These 2 weeks Sarup and I will be working on an issue tracker for GlitterGallery. We’re aiming at something along the lines of what GitHub has. Just enough functionality, and as usual - a clean and simple interface. Here’s the Pull Request.

There is one feature that we haven’t seen implemented anywhere else. We’re allowing people to comment on projects, and mark a comment as an issue right there itself. We think that this is a very interesting feature to have, and it’ll be nice to see how the implementation turns out. Displaying the issue status inline alongside the comment seems like a good idea to me, let’s see what else we come up with! :)

GSoC - Mock improvements - week 9
This week I've been mostly focusing on minor improvements and documentation (manpages). Almost all of my changes have already been submitted upstream, and if everything goes well, you can expect a new release of mock to be available in rawhide in the near future. I merged changes from the ready branch to master, so now they should differ only in minor things. (Sorry for the duplicates in git history, I didn't realize that beforehand.)

Support for Mikolaj's nosync external library was added and the old implementations that existed as a part of mock were dropped. You can enable it by setting
config_opts['nosync'] = True
and you have to install the nosync library (mock doesn't require it in the specfile, because it's not available everywhere). If the target is multilib, you need both architectures of the library to be installed in order to have a preload library for both types of executables. If you don't, mock will print a warning and nosync won't be activated. If you can't install both versions and still want to use it, set
config_opts['nosync_force'] = True
but expect a lot of (harmless) warnings from ld.so. The library is available in rawhide (your mirrors might not have picked it up yet).

The LVM plugin was moved to a separate subpackage and conditionally disabled on RHEL 6, since it requires lvm2-python-libs and a newer kernel and glibc (for setns). One of the things I had to sacrifice when making the LVM plugin was the IPC namespace unsharing that mock has used for a long time. The problem was that lvcreate and other commands deadlocked on an uninitialized semaphore in the new namespace, so I temporarily disabled it and hoped I'd find a solution later. And I did: I wrapped all functions that manipulate LVM in a function that calls setns to get back to the global IPC namespace and, after the command is done, calls setns again to return to mock's IPC namespace.
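As a rough illustration of that trick, here's a minimal Python sketch of such a wrapper (the names and simplifications are mine, not mock's actual code, and it needs privileges to open the namespace files):

import ctypes
import os

libc = ctypes.CDLL('libc.so.6', use_errno=True)
CLONE_NEWIPC = 0x08000000  # from <sched.h>

def with_global_ipc(fn):
    # Run fn (e.g. a function that invokes lvcreate) in the global IPC
    # namespace, then switch back to our own unshared namespace.
    def wrapper(*args, **kwargs):
        own_ns = os.open('/proc/self/ns/ipc', os.O_RDONLY)
        global_ns = os.open('/proc/1/ns/ipc', os.O_RDONLY)  # PID 1 = global
        try:
            if libc.setns(global_ns, CLONE_NEWIPC) != 0:
                raise OSError(ctypes.get_errno(), 'setns failed')
            return fn(*args, **kwargs)
        finally:
            libc.setns(own_ns, CLONE_NEWIPC)
            os.close(own_ns)
            os.close(global_ns)
    return wrapper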

One of the other problems I encountered is Python 2.7's handling of the SIGPIPE signal. It sets it to ignored and doesn't set it back to the default when it executes a new process, so a shell launched from Python 2 (by Popen, or execve) doesn't always behave the same as a regular shell.
Example in shell:
$ cat /dev/zero | head -c5
# cat got SIGPIPE and exited without error

$ python -c 'import subprocess as s; s.call(["bash"])'
$ cat /dev/zero | head -c5
cat: write error: Broken pipe
# SIGPIPE was ignored and cat got EPIPE from write()

It can be fixed by calling
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
in Popen's preexec_fn argument.
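For example, a minimal sketch (the restore_sigpipe name is mine, not mock's):

import signal
import subprocess

def restore_sigpipe():
    # undo Python 2's SIG_IGN so the child gets default SIGPIPE handling
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# the pipeline now exits after 5 bytes, as it would in a regular shell
subprocess.call(['bash', '-c', 'cat /dev/zero | head -c5'],
                preexec_fn=restore_sigpipe)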

For cat it's just an example and it doesn't make much difference. But if you put tee between the cat and head, it will loop indefinitely instead of exiting after the first 5 bytes. And there are lots of scripts out there relying on the standard behavior. It actually bit me in one of my other programs, so I thought it was worth sharing; I also fixed it in mock.

July 23, 2014

GSOC Week 8+9 : To be or not to be
These past two weeks I've been busy, since my college re-opened and I spent my past weekend coding away at an overnight hackathon. As instructed by my mentor, I spent this week testing whether my recent patch, which always enables I/O profiling in io-stats, really degrades I/O performance or not.

For this, I performed two write tests, one with a 20 MB file and the other with a 730 MB file. Each file was written 20 times to the mounted volume, clearing the buffers on every iteration, and the time taken was measured with the time command. Since the values for writing the same file vary quite a bit between runs, I plotted a graph using the obtained values (the Y-axis represents seconds). As you can see in these images, there is no clear pattern in the variation of the values obtained while writing.




So as far as I can tell, the values under both conditions are quite close to each other and equally likely to go well above or below the mean value; hence, no negative effect is seen due to the proposed change. You can follow this discussion on the ML at http://supercolony.gluster.org/pipermail/gluster-devel/2014-July/041756.html

July 21, 2014

Groups, permissions and bugspad.

This week was not very productive in terms of the amount of code written, as I was traveling by the grace of Indian Railways. I am finally in my hostel room. However, I used this time to plan things out and also test the bugspad instance on the server. I made a script for this using the mechanize and requests libraries of Python, which I’ll be adding to the scripts section of the repo. I am also working on the permissions stuff in a new branch. Instead of having groups, I am planning to have user types, keeping it product-centric. This would require a minor change in the schema, as I would be using charfields to denote the user types: for example, “c1″ for users assigned to the group with component id 1, and similarly “p1″ for users with product id 1. I will discuss the missing features and how to go about them further with upstream, i.e. my mentor.


July 18, 2014

GSoC - Mock improvements - week 8
Good news: we're merging my changes upstream. It's been a lot of changes and the code wasn't always in the best shape, so I didn't want to submit it before all the major features were implemented. Mirek Suchy agreed to do the code review and merge the changes. Big thanks to him for that :)
I've set up a new branch, rebased it on top of current upstream, and tried to revisit all my code and get rid of changes that were reverted/superseded or are not appropriate for merging yet. I squashed fixup commits into their original counterparts to reduce the number of commits and changed lines.
The changes that weren't submitted are:
  • C nofsync library, because Mikolaj made a more robust nosync library that is packaged separately, and therefore supersedes the bundled one.
    Link: https://github.com/kjn/nosync/
    I did the review: https://bugzilla.redhat.com/show_bug.cgi?id=1118850
    That way mock can stay noarch, which gets rid of lots of packaging issues. It also saves me a lot of problems with autoconf/automake. There is no support for it yet, because I need to figure out how to make it work correctly in a multilib environment.
  • nofsync DNF plugin - it's an ugly DNF hack and I consider it superseded by the aforementioned nosync library
  • noverify plugin - it's also a DNF hack; I will file an RFE for optional verification in DNF upstream instead
Everything else was submitted, including the LVM plugin. The merging branch is not pushed to GitHub, because I frequently need to make changes by interactive rebasing, and force-pushing the branch each time kind of defeats the purpose of SCM.
Other than that I was mostly fixing bugs. The only new features are the possibility of specifying additional command-line options to rpmbuild, such as --rpmfcdebug, with the --rpmbuild-opts option, and the ability to override the command executable paths for rpm, rpmbuild, yum, yum-builddep and dnf, in order to be able to use different versions of the tools than the system-wide ones.

July 16, 2014

HOPE X Lightning Track

Planning to be in the city this weekend? Want to give a short presentation at one of the world's biggest hacker/maker conferences? I'm helping organize the Lightning Talks Track at this year's HOPE X. All are welcome to present, topics can be on anything relevant / interesting.

If you're interested, simply email me or add yourself to the wiki page and we'll be in touch with more information. Even if you don't want to give a talk, I encourage you to check out the conf schedule; it's looking to be a great lineup!

Happy hacking!


***update 07-26***: The conference was great and the talks were awesome. Aside from a few logistical glitches, all went well. The final speaker/presentation lineup can be seen on the HOPE Wiki. Many thanks to all who participated!


July 15, 2014

GlitterGallery has Notifications now!

We’ve just rolled out a notifications feature in GlitterGallery. Here’s a working demo if you’d like to try it out.

Here’s the discussion that Sarup and I had on GitHub about how to implement it. I’m outlining the outcome and how we implemented it below.

What we’re trying to do

We’d like to log all public activity that a user might want to know about, and keep it stored so that he can view it the next time he’s online. So if a user follows a few projects, and there is activity on those projects, we want to send him notifications.

The information we need

When we have an Actor who performs an activity on an object, the basic information we’ll need to create a notification is:

  • The Object Type (If the activity was a comment, the object could be the comment)
  • The Object ID (The comment’s ID in the database)
  • The Actor (This is the ID of the user who commented)
  • A list of victims (This is the list of people who have to be notified), and a seen attribute for each (This will help in showing people only unread notifications).

How we use the information

When an activity takes place, we create a notification with the details given above. Then we add a few custom methods to make the information useful. The most important is the linkable URL. When we display the notification to a user and she clicks on it, we’ve got to know what URL to redirect to. We could store this in the database, but that’d lead to redundancy, because we already have all the information we need to create the URL. For example, if the activity was a comment, we could create a URL to the comment by using the comment’s ID. Similarly, if the activity was a follow on a project, we could create a URL to the project’s followers page from the project ID we have.
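As a rough sketch of that mapping (illustrative only - GlitterGallery is a Rails app, and these routes and names are hypothetical, not its actual code):

def notification_url(object_type, object_id):
    # derive the redirect target from the stored type and id
    if object_type == 'comment':
        return '/comments/%d' % object_id            # link to the comment
    if object_type == 'follow':
        return '/projects/%d/followers' % object_id  # project's followers page
    raise ValueError('unknown object type: %r' % object_type)

print(notification_url('comment', 42))  # /comments/42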

How we implement it

We show the user a notifications page where she can see her unread notifications. When she clicks on one, we redirect to the URL and mark the notification as seen in our database. Pretty simple eh :)

Future improvements

  • Multiple notifications/Notification digests

Sarup had mentioned this in the discussion link that I provided at the start.

So, in addition to the notification button, we will want to do a few varieties of counts. Some examples: Emily, Paul, Mo and 6 others upvoted your project. Ryan and Emily reported two issues on GlitterGallery.

This is something that isn’t implemented yet. If 5 people follow your project, you’re currently going to receive 5 notifications. That’s not good, and this will be changed soon.

  • Email notifications

Not every user will want to log in to view their notifications. We should let users opt-in for notifications to be delivered through email.

  • Disable notifications automatically

This one looks a bit tricky to implement. Here’s how Sarup explained it -

When I look at my fb or GitHub notifications every two days, I realize I have notifications. However, I end up scrolling through my fb and GitHub activity feed instead, and would have typically replied to comments or visited any issue pages directly from there - when I open up my notifications, I’ve already seen most of them.

What we’d want to do is track whether a person has seen the activity, even if she didn’t do so by clicking on the notification.

July 14, 2014

bugspad missing features

This week consisted of reading about missing features of bugspad and planning how to incorporate them. I went through the design docs of Bugzilla, whatever was available :P.

  • Group permissions and what to choose for the alpha state of bugspad.
  • Flags to be used for both bugs and attachments.
  • Mail server setup and handling of mails for the cc list. My mentor and I have discussed this, and he is going to help get it going.
  • Testing on bigger data sets on the infra system.

I missed the infra meeting last Thursday due to my stupid Internet woes, which are finally going to end as I return to my college. :D


Google Summer of Code, week 8 update.
Google Summer of Code 2014: Week 8 update

It's the 8th week of Google Summer of Code. According to the submitted schedule, I was supposed to complete my work on the backend module and start my work on the GUI in the 9th week. This week was more about polishing things up.

I would like to again refer to https://titanpad.com/fedoracollege, where we usually have discussions about the things left to do.

With a target of completing the backend module and starting documentation and GUI work next week, we did some last-minute polishing and almost finalized the workings of the product.

So, broadly speaking, I can list what I was up to in the previous week:
  1. Added support for fedmsg:
    1. Added the ability for the application to publish fedmsg messages for the following actions:
      1. Upload of any media content
      2. Creation/revision of content
  2. Worked on the email system, enabling emails to be sent for user registrations.
  3. Worked on the admin panel. I was using flask-admin for it, but it was not showing the foreign key relations properly; this was due to an error in the database models.
  4. Other smaller changes include GUI improvements, minor bug fixes, and pagination.
Also, we have sent a request to Ralph Bean to help us set up a staging environment for fedora-college. So, with the initial demo of the product ready, you can actually see it on the staging environment.

The project has now been formally added to fedora-infra: https://github.com/fedora-infra/fedora-college

A demo of the project is available at http://demo.engineerinme.com / http://demo.engineerinme.com/admin

Also, the project doesn't support user registration, so you need to register for a Fedora account and then authenticate using Fedora Project OpenID.

Thanks for reading through the post.

July 12, 2014

isitfedoraruby gsoc midterm sum up

This sums up my past month's involvement with the project. A lot of reading in between...

Changelog

I added a changelog so that the changes are easily seen, so here it is (this week is v 0.9.1):

v 0.9.1

- Refactor rake tasks
- Source Code uri in fedorarpms, points to pkgs.fp.o gitweb
- Add first integration tests
- Retrieve commit data via Pkgwat
- Show name of last packager in fedorarpms#show
- Show last commit message in fedorarpms#show
- Show last commit date in fedorarpms#show
- Use api to fetch rawhide version instead of scraping page
- Retrieve homepage via Pkgwat
- Fix duplication of dependencies in fedorarpms#show
- Do not show source url in rubygems#show if it is the same as the homepage
- Do not show source url in fedorarpms#show if it is the same as the homepage
- Split methods: versions, dependencies in fedorarpm model
- New rake tasks to import versions, dependencies and commits

v 0.9.0

- Remove unused code
  - Remove HistoricalGems model
  - Remove Build controller/view
  - Remove methods related to local spec/gem downloading
  - Remove empty helpers
  - Cleaned routes, removed unused ones
- Conform to ruby/rails style guide
- Maintainer field for packages now uses the fas_name
- Automatically fetch versions of Fedora by querying the pkgdb api
- Added rake task to fetch rawhide version and store it in a file locally
- Show koji builds from supported Fedora versions only
- Bugs
  - Query bugs via api using pkgwat
  - Drop is_open from bugs table
  - Show only open Fedora bugs, exclude EPEL
- Hover over links to see full titles when truncated
- Rename builds table to koji_builds
- Added tests
  - Unit tests for models
- Added Github services
  - travis-ci
  - hound-ci
  - coveralls
  - gemnasium
- Development tools
  - shoulda-matchers
  - rspec
  - capybara
  - rack-mini-profiler
  - rubocop
  - factory_girl
  - annotate
  - railsroady

You should notice some version numbers. That's also a new addition: every week I will deploy a new version, so eventually, at some point at the end of the summer, version 1.0.0 will be released.

Here are some nice stats from git log.

Git stats: 91 commits / 4,662 ++ / 2,874 --

Rails/Ruby style guide

Fixed around 500 warnings that rubocop yielded.

Tests

Added: unit tests for models.

Missing: A bunch of code still needs testing; rspec alone is not enough to properly test API calls. I will use vcr and webmock in the future to cover these tests. Integration tests are also not complete yet.

Bugs fixed

wrong owners

Previously it parsed the spec file and checked the first email in the changelog. Co-maintainers also have the ability to build a package, in which case it showed the wrong info. Another case is where a user changes their email: they are then counted twice, so when hitting /by_owner not all packages are shown. I was hit by this bug.

It now fetches the owner's FAS name using pkgwat, which I use to sort by owner.

dependencies shown twice

The current implementation scrapes the SPEC file of a rubygem via gitweb and then stores the dependencies. The problem is that when one uses gem2rpm, ~> is expanded to >= and <=, which leads to some dependencies being listed twice.

Double dependencies

The fix was quite easy. Here is the controller that is in charge for the show action:

  def show
    @name = params[:id]
    @rpm = FedoraRpm.find_by_name! @name
    @page_title = @rpm.name
    @dependencies = @rpm.dependency_packages.uniq
    @dependents = @rpm.dependent_packages.uniq
  rescue ActiveRecord::RecordNotFound
    redirect_to action: 'not_found'
  end

All I did was add uniq.

duplicate homepage and source uri

In a gem page you could see this:

Double homepage

The information is taken from the https://rubygems.org API. Some gems have the same page for both the gem's homepage and the source uri. The secret was lying in the view.

%div.info
  %h3 Gem Information
  %p
    Homepage:
    =link_to @gem.homepage, @gem.homepage
  - unless @gem.source_uri.blank?
    %p
      Source Code:
      =link_to @gem.source_uri, @gem.source_uri

All I did was change this:

- unless @gem.source_uri.blank?

to this:

- unless @gem.source_uri.blank? || @gem.source_uri == @gem.homepage

So now it skips showing the source uri if it is the same as the homepage.

Enhancements

Show more info in fedorarpm show page

I added some more information to the fedorarpm page. Now it shows the last packager, the last commit message, and the last commit date. Useful if something is broken with the latest release and you want to blame someone :p

And since a package often has many co-maintainers, you get to see the real last packager.

Here's a shot of the page as it is now:

More info

Rake tasks

As I have done some major refactoring in the fedorarpms model, I split many methods into their own namespaces. For example, previously there was a single method for importing the versions and dependencies; now they are two separate methods.

As a consequence, I added rake tasks that can be invoked for a single package. The namespace is also more descriptive now.

The tasks are for now the following:

rake fedora:gem:import:all_names               # FEDORA | Import a list of names of ALL gems from rubygems.org
rake fedora:gem:import:metadata[number,delay]  # FEDORA | Import gems metadata from rubygems.org
rake fedora:gem:update:gems[age]               # FEDORA | Update gems metadata from rubygems.org
rake fedora:rawhide:create                     # FEDORA | Create file containing Fedora rawhide(development) version
rake fedora:rawhide:version                    # FEDORA | Get Fedora rawhide(development) version
rake fedora:rpm:import:all[number,delay]       # FEDORA | Import ALL rpm metadata (time consuming)
rake fedora:rpm:import:bugs[rpm_name]          # FEDORA | Import bugs of a given rubygem package
rake fedora:rpm:import:commits[rpm_name]       # FEDORA | Import commits of a given rubygem package
rake fedora:rpm:import:deps[rpm_name]          # FEDORA | Import dependencies of a given rubygem package
rake fedora:rpm:import:gem[rpm_name]           # FEDORA | Import respective gem of a given rubygem package
rake fedora:rpm:import:koji_builds[rpm_name]   # FEDORA | Import koji builds of a given rubygem package
rake fedora:rpm:import:names                   # FEDORA | Import a list of names of all rubygems from apps.fedoraproject.org
rake fedora:rpm:import:versions[rpm_name]      # FEDORA | Import versions of a given rubygem package
rake fedora:rpm:update:oldest_rpms[number]     # FEDORA | Update oldest <n> rpms
rake fedora:rpm:update:rpms[age]               # FEDORA | Update rpms metadata

That was it for now. For any changes be sure to check out the changelog regularly!

GSoC 2014 - week 7
Hi again. I'm sorry I didn't post last week; I was on vacation.
Here's what I've done this week:

Passing additional options to underlying tools
rpmbuild has an option --short-circuit that skips the stages of a build preceding the specified one. It doesn't build a complete RPM package, but it's very handy for debugging builds that fail, especially in the install section. This option was not accessible from within mock, and I already mentioned in my proposal that I wanted to make it available. The mock option is also called --short-circuit and it accepts an argument - either build, install, or binary - representing the build phase that will be executed first, while the preceding phases are skipped.
Example invocation:
$ mock rnv-1.7.11-6.fc21.src.rpm --short-circuit install

For Yum or DNF, some of the options that are often used when the user invokes the package manager directly also weren't available in mock. --enablerepo and --disablerepo are very common ones, and now they are also supported by mock - they're passed directly to the underlying package manager.
Example invocation:
$ mock --install xmvn maven-local --enablerepo jenkins-xmvn --enablerepo jenkins-javapackages
The repos of course have to be present in the yum.conf in the mock config.

Python 3 support
I started working on porting mock to Python 3. This doesn't mean that mock will run on Python 3 only; I'm trying to preserve compatibility with Python 2.6, without the need to have two versions of mock. I changed the trace_decorator to use regular Python decorators instead of peak.utils.decorate, and dropped the dependency on the decoratortools package. There are slight changes in traceLog's output that I don't consider important, but if someone does, it could be solved by using the python-decorator package, which is available for both versions. There are some features that are still untested, but the regularly used functionality is already working: rebuilding RPMs and SRPMs, working in the shell, and manipulating packages are all tested. The plugins that are enabled by default (yum-cache, root-cache, ccache, selinux) also work. What doesn't work is the LVM plugin, because it uses lvm2-python-libs, which doesn't have a Python 3 version yet. The same applies to mockchain, which uses urlgrabber. To try mock with Python 3, either change your system's default Python implementation or manually hardcode python3 as the interpreter in the shebang of /usr/sbin/xmock.
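For the curious, a plain-Python tracing decorator of the kind that works unchanged on Python 2.6+ and Python 3 can look like this (an illustrative sketch only, not mock's actual traceLog implementation):

import functools

def traced(fn):
    @functools.wraps(fn)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print('ENTER %s' % fn.__name__)
        try:
            return fn(*args, **kwargs)
        finally:
            print('LEAVE %s' % fn.__name__)
    return wrapper

@traced
def build():
    pass

build()  # prints ENTER build / LEAVE build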

July 09, 2014

Bookmarking chat logs in waartaa – GSoC post-midterm

The post-midterm phase of GSoC has already begun and there is still a lot of work to be done (mostly UI improvement and deploying all my previous work on the server).

Lately I haven't had much time, but I have managed to add a bookmarking feature to waartaa. I have added support for both single and multiple bookmarks on both the live chat page and the search page.

Single Bookmark

Beside every chat message, a bookmark icon appears on hover. When the user clicks on it, the message gets bookmarked (in the front-end only) and a popup appears on top of the chat window, which has a ‘Label’ field with a default value equal to the chat message’s date-time, a ‘Done’ button to save the data in the db, and a ‘Cancel’ button for the obvious reason.

Multiple Bookmarks

It often happens that a user wants to bookmark multiple chat messages under one label, for instance to save a conversation that happened in some random IRC channel. It’s easy to bookmark multiple messages in waartaa: you just have to choose the two endpoints of a conversation, long-click (at least one second) one of them, and normal-click the other one. This will bookmark all messages in between, along with the endpoints.

Bookmarks model

/*
Bookmarks {
  label: String,      // bookmark label
  roomType: String,   // 'channel'/'pm'/'server'
  logIds: List,       // ids of the bookmarked chat messages
  user: String,       // username of the user for whom the bookmark is created
  userId: String,     // user id
  created: Datetime,
  lastUpdated: Datetime,
  creator: String,    // username of the user who created the bookmark
  creatorId: String
}
*/

Screenshots

I know there isn’t much you can infer from the screenshots below, but this is all I have right now to share with you.

single-bookmark

Single bookmarking

multiple-bookmark

Multiple bookmarking

Conclusion

With this, the bookmarking feature is complete, and here is PR 129.

<script>JS I love you.</script>


July 08, 2014

Google Summer of Code seventh Week update.
Google Summer of Code 2014: Week 7 update


It's the seventh week of Google Summer of Code. The week has been quite hectic: we decided to release a version for all the Fedora Infra team members by the end of this week. So most of the time went into polishing stuff, making existing code more efficient, and writing demos.

Here is where we had discussions about targets and other things: https://titanpad.com/fedoracollege

Formally we worked to solve the following issues:

  1. Implementation of a blog for Fedora College, including a blog RSS feed.
  2. Uploads are now more efficient: rather than writing the whole file at once, we now upload in chunks.
  3. Added pagination to various modules and made the GUI more elaborate.
  4. Configured a welcome e-mail for the web application.
  5. Support for tags on content.
  6. Added support for a code highlighter, among other things.
  7. Wrote some demos to make the web application presentable.


The project has now been formally added to fedora-infra (https://github.com/fedora-infra/fedora-college); the code will now be reviewed and viewed by the whole community. We have worked really hard on this and are expecting good reviews. With the polishing done this week, I would say the project is almost complete.


Thanks for reading through the post.

July 07, 2014

GSOC Week 7 : Back on track
It's time to get back on track. Passing the midterms with flying colors was really great. I apologize for being unable to post any updates on my progress during the last two weeks, owing to not feeling very well during this time.

The progress till now includes re-thinking the previous patch and the methodology io-stats will use to dump the private info. As suggested by my mentor, I'm moving the job of speed calculation and other major work to the glusterfsiostat script rather than coding it all in the glusterfs codebase. You can look at the new patch here: http://review.gluster.org/#/c/8244/.

Also, my project was accepted to be hosted on Gluster Forge at https://forge.gluster.org/glusterfsiostat where you can track the progress for the python script and rest of the code base related to my project.

Recently, my mentor and I have started to track our progress with a Scrum model, using Trello. This helps us break bigger jobs into smaller tasks and set a deadline on each of them, to better estimate their expected date of completion.

July 06, 2014

Bugspad bootstrapped!

I had a refreshing retreat with my family this week. As planned, I went through Bootstrap CSS to give a decent and responsive look to the UI of bugspad. I have also been planning how to use flags in bugspad, along with the user group permissions. The following are some snapshots of the current UI, revamped after integrating Bootstrap. You can otherwise check it out here. I have also been planning to have a mascot for bugspad, as Tux is for Linux and Buggie is for Bugzilla.

bug_desc

filebug

home

login

searchbug


July 01, 2014

Google Summer of Code sixth Week update.
Google Summer of Code 2014: Week 6 update


It's the sixth week of Google Summer of Code, officially the middle week of the program. I have passed the midterm evaluations by the Google team. It has been a really great summer so far, with lots of opportunities to learn.

This week we worked mostly on preparing a demo of what we have done. A lot of time went into configuring the web server to run Flask with mod_wsgi, but with little success. I have created a discussion about the problem on Stack Overflow. Anyone who would like to come forward with a helping hand is most welcome.

I also worked on the comments module, enabling comments on the web pages. I was able to successfully implement the comment module, but it still doesn't support threading; I expect to add threading logic by the end of this week. We moved the project's GUI from the old self-written CSS to Foundation CSS, making it more user-friendly and compatible with all devices.



Thanks for Reading through the Post.

June 30, 2014

Rails development tools

During the past two months I have been reading constantly about Rails and how I could get more productive when writing code and testing my apps. There is a ton of information about these matters on the web, and I'll try to include as many articles as I found useful in building my knowledge.

Disclaimer: This article is heavily inspired by Thoughtbot's Vim for Rails Developers which I stumbled upon during browsing the screencasts of codeschool.

Editor of choice (vim)

When you work from the command line and you use linux, your editor preference comes down to two choices: vim and emacs. I started with vim some time ago so I'll stick with it.

If you are new to vim, read this cheatsheet to learn the basic commands.

vim plugins

Start by installing pathogen.vim, a vim plugin manager:

mkdir -p ~/.vim/autoload ~/.vim/bundle && \
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim

Then add this to your vimrc:

execute pathogen#infect()

From now on, every plugin that is compatible with pathogen can be simply installed by cloning its repo at ~/.vim/bundle.

An alternative to pathogen is vundle. I haven't used it, but it behaves similarly.

rails.vim

Probably the single most useful plugin when dealing with Rails projects.

Install it with:

git clone git://github.com/tpope/vim-rails.git ~/.vim/bundle/vim-rails

Browsing through the app

You can use :RController foos and it will take you straight to app/controllers/foos_controller.rb. As you might guess, the same happens with :RModel foo, etc. There is also tab completion, so you can toggle between all models/controllers, etc.

Another useful command is :find. Invoking it with a name foo, it first searches for a model named foo. Tab completion is also your friend.

One other really cool feature is go to file. Suppose we have the following model:

class Blog < ActiveRecord::Base

  has_many :articles

end

Placing the cursor on the word articles and pressing gf, vim opens the article model. After saving your changes you can go back to the blog model by pressing Ctrl-o.

Run your tests through vim

Running tests is also a matter of a command. Say you are editing a specific spec/test file. All you have to do is run :Rake and the tests for that particular file will be run, without leaving your favorite editor :)

There are a lot of supported commands, and your best bet is to invoke :help rails in vim and learn about them.

Be sure to also check vim-rails on github.

vim-snipmate

SnipMate implements snippet features in Vim. A snippet is like a template, reducing repetitive insertion of pieces of text. Snippets can contain placeholders for modifying the text if necessary or interpolated code for evaluation.

Install it:

cd ~/.vim/bundle
git clone https://github.com/tomtom/tlib_vim.git
git clone https://github.com/MarcWeber/vim-addon-mw-utils.git
git clone https://github.com/garbas/vim-snipmate.git
git clone https://github.com/honza/vim-snippets.git

Writing a method

Reading the source code of the snippets above, let's see how we can create a method. The snippet reads:

snippet def
        def ${1:method_name}
                ${0}
        end

So, the snippet is named def and in order to invoke it we must write def and hit Tab. It then expands, placing the cursor on the highlighted method_name. This is what it looks like:

def method_name

end

Once you start typing, method_name gets replaced with what you type. When you finish, hit Tab again to go to the method body.

Now all you have to do is read the ruby.snippet and find out what snippets are supported.

fugitive.vim

vim-fugitive brings the power of git commands inside vim.

Install it with:

git clone git://github.com/tpope/vim-fugitive.git ~/.vim/bundle/vim-fugitive

Check out the github page for a list of commands and some interesting screencasts.

Terminal multiplexer (tmux)

Again, here you have two options: screen or tmux. My first contact was with screen, but recently I decided to try tmux.

I won't go into any details, but I highly recommend watching Chris Hunt's presentation Impressive Ruby Productivity with Vim and Tmux. It's an awesome talk.

Development stack

There is a great article I stumbled upon yesterday about some must-have gems for development, some of which I haven't tested. Here is what I've got so far.

jazz_hands

jazz_hands is basically a collection of gems that you get for free with just one gem. It focuses on enhancing the rails console. It provides:

- Pry for a powerful shell alternative to IRB.
- Awesome Print for stylish pretty print.
- Hirb for tabular collection output.
- Pry Rails for additional commands (show-routes, show-models, show-middleware) in the Rails console.
- Pry Doc to browse Ruby source, including C, directly from the console.
- Pry Git to teach the console about git. Diffs, blames, and commits on methods and classes, not just files.
- Pry Remote to connect remotely to a Pry console.
- Pry Debugger to turn the console into a simple debugger.
- Pry Stack Explorer to navigate the call stack and frames.
- Coolline and Coderay for syntax highlighting as you type. Optional. MRI 1.9.3/2.0.0 only

Again, by visiting the github page you will get all the info you want. There is an open issue, and installation on ruby 2.1.2 is failing for now. For the time being you can put the following in your Gemfile:

gem 'jazz_hands', github: 'nixme/jazz_hands', branch: 'bring-your-own-debugger'
gem 'pry-byebug'

rubocop

rubocop is a tool which checks if your code conforms to the ruby/rails community guidelines.

You can check the article I wrote where I explain how to get it up and running.

railroady

railroady is a tool that lets you visualize how the models and the controllers of your app are structured. Instructions on how to install it are on the github page. You can see how it looks on the fedoraruby project I'm currently working on.

annotate

annotate generates a schema of the model and places it on top of the model. It can also place it on top of your rspec files and the factories. It looks like this:

# == Schema Information
#
# Table name: bugs
#
#  id            :integer          not null, primary key
#  name          :string(255)
#  bz_id         :string(255)
#  fedora_rpm_id :integer
#  is_review     :boolean
#  created_at    :datetime
#  updated_at    :datetime
#  last_updated  :string(255)
#  is_open       :boolean
#

Testing stack

There is a ton of useful tools out there, and if you are new to rails development you can easily get lost. Rails has Two Default Stacks is a nice read that sums it up. I will try to update this post as I find more useful tools along the way.

rspec

I am mostly in favor of rspec because of its descriptive language and the great support by other complement testing tools.

capybara

So, why capybara and not cucumber? I'm not an expert on either of these tools, but from my understanding capybara is more focused on developers, whereas cucumber's human-readable language mostly targets applications where one talks to a non-technical customer.

guard

Guard watches files and runs a command after a file is modified. This allows you to automatically run tests in the background, restart your development server, reload the browser, and more.

It has nearly 200 plugins which provide different options as guard is not only used for testing. The particular plugin for rspec is guard-rspec.

When you make the smallest change to a test and you hit save, guard will run that particular test group again to see if it still passes.

I tend to invoke guard with guard -c which runs the tests in a clear console every time.

Read the guard wiki page, which is comprehensive, and also watch the guard railscast to better understand it.

Other super useful tools

ctags

Quoting from What is ctags?:

Ctags generates an index (or tag) file of language objects found in source files that allows these items to be quickly and easily located by a text editor or other utility.

There are a bunch of different tools to create a tags file, but the most common implementation is exuberant ctags which we will use.

It supports 41 programming languages and a handful of editors.

Installation

Install ctags via your package manager. It should be available in all major distributions.

Configuration

For a rails project, in your application root directory you can run:

ctags -R --exclude=.git --exclude=log *

This recursively searches all files in the current directory, excluding the .git and log directories, and creates a tags file under the current dir. You may want to add it to .gitignore, by the way.

Next, adding the following line to ~/.vimrc:

set tags=./tags;

sets the location of the tags file, which is relative to the current directory.

You can move the above options into ~/.ctags, so in our case this will be:

--recurse=yes
--tag-relative=yes
--exclude=.git
--exclude=log

So in future runs of ctags all you need to do is ctags *.

ctags doesn't autogenerate the index, so each time you write code that is taggable, you have to run the command again. If you are working in a git repository, be sure to check out Tim Pope's Effortless Ctags with Git. What this does is:

Any new repositories you create or clone will be immediately indexed with Ctags and set up to re-index every time you check out, commit, merge, or rebase. Basically, you’ll never have to manually run Ctags on a Git repository again.

Usage

Say we have a file containing hundreds of lines. Inside a method you see the definition below:

def contain_multiple_methods
  method_one
  method_two
  method_three
end

While you could search for these methods, you can save a few keystrokes by simply placing the cursor on the line of the method to search and, in vim normal mode, pressing Ctrl + ] (Control and right square bracket). This should get you where the method is defined. Go back to where you were by pressing Ctrl + t.

Note: The usage of ctags isn't restricted to the current file. If a method in your file is inherited from another class, then searching for it will jump to that particular file.

Secret power

Wouldn't it be cool if we could search for methods in the Rails source code? Here is where the power of ctags really excels. All you have to do is tell ctags to also tag the rails source code.

First I cloned the rails repository into vendor/rails:

git clone https://github.com/rails/rails.git vendor/rails

It should take less than a minute to download. You wouldn't want the rails source code to be included in your git tree, so you simply exclude vendor/rails by adding it to .gitignore.

Lastly, create the tags again with ctags *.

Now navigate with vim to one of your models that has for example the association has_many, place the cursor on it (or just on the same line) and hit Ctrl + ]. Pretty cool huh? In case you forgot, go back to where you were with Ctrl + t.

ack

ack is like grep but on steroids.

Designed for programmers with large heterogeneous trees of source code, ack is written purely in portable Perl 5 and takes advantage of the power of Perl's regular expressions.

It supports multiple types, which you can see by typing ack --help-types.

Of course there is a vim plugin!

alternative (ag)

While reading the more-tools page of ack I found out about ag, also called the_silver_searcher. It is said to search code about 3–5× faster than ack, is written in C, and has some more enhancements over ack. You may want to give it a try. And as you might have guessed, there is also an ag vim plugin.

Conclusion

The editor of choice and the tools you use in web development play a great role in one's productivity, so you have to choose wisely and spend some time getting to know them. Personally, I learned a lot in the days I spent crafting this post, and I hope you got something out of it too :)

June 28, 2014

Post Midterms: Bugspad now into dev-testing stage 2

I'm cheered that I passed the midterms :). Hats off to my mentor Kushal Das, who has been patient with me throughout, literally helping me with tiny bits and bearing my silly mistakes; I have learnt a lot under him. I got a new instance at http://209.132.184.128/; thanks a lot to the fedora-infra team. This will be used for my dev-testing with larger data sets and real-time performance assessments. I have added an error logging system to log the various printf statements throughout my code. I have also refactored the code to make it cleaner and more understandable. My next target is to fix the current crappy UI of bugspad with Bootstrap, and to use the bugspad API to do testing on a larger data set. I'm excited about it! :D


June 27, 2014

Ticket Handling System in Freemedia
The ticket handling system also plays a major role in the Freemedia service, because it is how volunteers get to know about media requests.
Basically, a ticket is created after the user (requester) fills in the request form. It is then added to the ticket list, where volunteers can see it. Below is how it looks.


Here the ticket ID is generated according to the time of creation. The volunteer can then decide which ticket to choose to fulfill. When clicking on the ticket ID, the volunteer is directed to a page where the entire ticket is shown.



In the ticket view we can see a few information attributes.


  • Summary - A brief description summarizing the request. It has the format: <first name> from <country> needs <requested media>
  • Description - This contains the residential address of the requester.
  • Reporter - Requester's email ID.
  • Status -  What is the current status? One of new, assigned, closed, reopened.
  • Keywords - Keywords that a ticket is marked with. Useful for searching and report generation.
  • Media Version - Version of the media that this ticket pertains to.
  • Cc - A comma-separated list of other users or E-Mail addresses to notify.
  • Assigned  - Principal person (volunteer) responsible for handling the ticket. 
  • Resolution - Reason why a ticket was closed. One of fixed, invalid, wontfix, duplicate, worksforme.

Below is an illustration of the ticket life cycle.



Integrating bootstrap

To create the ticket view I used Bootstrap, which is a nice framework for UI design.
We can easily integrate Bootstrap into CakePHP.
After downloading and extracting Bootstrap from this link, we can simply add the files to the app/webroot folder:
bootstrap.min.css - app/webroot/css
bootstrap.min.js - app/webroot/js

To work with Bootstrap we need jQuery, so simply add jquery-x.x.x.min.js to app/webroot/js.

Then we have to import Bootstrap before use. Instead of importing it in each view file, we can add the imports to
app\View\Layouts\default.ctp, which is the layout shared by every view in CakePHP.
We can import them like below.

<?php echo  $this->Html->script('jquery-1.9.1.min');?>
<?php echo  $this->Html->script('bootstrap.min');?>

<?php echo $this->Html->css('bootstrap.min'); ?>

So we can simply update the class attribute like below, so that all the CSS is applied like magic :)

<?php echo $this->Form->input('Reporter',
       array(
         'class'=>"form-control",
          'label'=>'Reporter:',
          'required'=>false,
         'value'=>  $Email_ID)); ?>

Here the class attribute is set to form-control, so the CSS is applied to the input.

More details about Bootstrap CSS:
http://getbootstrap.com/css/





June 25, 2014

GSoC week 5
Nofsync
I created another implementation of the nofsync plugin (which disables fsync(), making things much faster), this time in Python, as a DNF plugin that disables fsyncing in the YumDB. It is a little slower than the C library using LD_PRELOAD, because it doesn't eliminate fsyncs made from scriptlets (by gtk-update-icon-cache and such). But it's much simpler from a packaging perspective (mock can stay noarch) and could actually be upstreamable (in dnf), because there are other use cases where you don't try to recover from hardware failure anyway - for example anaconda: if the power goes down, you probably don't try to resume an existing installation. And this could make it faster (nofsync makes package installation approximately 3 times faster).
To compare the two implementations, set either
config_opts['nofsync'] = 'python'
or
config_opts['nofsync'] = 'ld_preload'
The default is python; to disable it, set the option to something else (e.g. an empty string).

LVM support
Last week I implemented the base of the LVM plugin for mock using regular snapshots. This week I rewrote the plugin to use LVM thin snapshots, which offer better performance and flexibility, and share space with the original volume and other snapshots, so they don't waste much space. I created basic commands that can be used to manipulate the snapshots.
Example workflow:
I'll try to demonstrate how building different packages can be faster with LVM plugin. Let's repeat the configuration options necessary to set it up:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',

}
You can now also specify 'mount_options', which will be passed to the -o option of mount. To set the size to something larger than the default 2GB, use for example 'size': '4G' (it is passed to lvcreate's -L option, so it can be any string lvcreate will understand).
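For illustration, a config combining these options might look like this (the 'noatime' value here is just an example mount option, not something the plugin requires):

config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',
    'size': '4G',                  # passed to lvcreate's -L option
    'mount_options': 'noatime',    # passed to mount's -o option
}

Now let's initialize it: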
$ mock --init
Mock will now create a thin pool with the given size, create a logical volume in it, mount it and install the base packages into it. After the initialization is done, it creates a new snapshot named 'postinit', which will then be used to roll back changes during --clean (which is by default also executed as part of --rebuild). Now try to install some packages you often use for building your own packages. I'm a Java packager and almost every Java package in Fedora requires maven-local to build.
$ mock --install maven-local
But now since I want to rebuild more Java packages, I'd like to make snapshot of the buildroot.
$ mock --snapshot mvn
This creates a new snapshot of the current state and sets it as the default. We can list the snapshots of the current buildroot with the --list-snapshots command (the default snapshot is prefixed with an asterisk):
$ mock --list-snapshots
Snapshots for mock-devel:
  postinit
* mvn


So let's rebuild something:
$ mock --rebuild jetty-9.2.1-1.fc21.src.rpm
$ mock --rebuild jetty-schemas-3.1-3.fc21.src.rpm
Because the 'mvn' snapshot was set as the default, each clean executed as part of the rebuild command didn't return to the state in 'postinit', but to the state in the 'mvn' snapshot. And that was the reason we wanted LVM support in the first place - mock didn't have to install 300+MB of maven-local's dependencies again (with original mock, this would probably take more than 3 minutes), but the buildroot was still cleaned of the packages pulled in by the previous build. We could then install some additional packages, for example eclipse, and make a snapshot that can be used to build Eclipse plugins.
Now let's pretend there has been an update to my 'rnv' package, which is in C and doesn't use maven-local.
$ mock --rollback-to postinit
$ mock --list-snapshots
  mvn
* postinit
Now the 'postinit' snapshot is set as the default and the buildroot has been restored to the state it was in when the 'postinit' snapshot was taken (right after initialization, with no maven-local there). The 'mvn' snapshot is retained and we can switch back again using --rollback-to mvn.
So now I can rebuild my hypothetical rnv update. If I decide that I don't need the 'mvn' snapshot anymore, I can remove it with
$ mock --remove-snapshot mvn
You cannot remove the 'postinit' snapshot. To remove all logical volumes belonging to the buildroot, use mock --scrub lvm.
So that's it. You can create as many snapshots as you want (and snapshots of snapshots) and keep a hierarchy of them to build packages that have different sets of BuildRequires.
Few more details:
  • The real snapshot names passed to LVM commands have the root name prefixed to avoid clashes with other buildroots or with volumes that don't belong to mock at all. Mock also checks whether the snapshots belong to its thinpool.
  • The volume group needs to be provided by the user; mock won't create one. It won't touch anything else besides the thinpool, so it should be quite safe even if it uses the same volume group as your system (I have it set up like that).
  • The command names suck. I know. I'll try to provide short options for them.
  • If you try the version in my jenkins repository, everything is renamed to xmock including the command - to allow it to exist alongside original mock.

June 24, 2014

How GlitterGallery handles git repositories using Rugged

All projects are git-backed in GlitterGallery. Grit started throwing errors when we moved to Ruby 2.0, so we decided to change to Rugged - here’s a short blog post on how we handle the repositories.

I never really understood the concept (and the importance) of bare repos in git. The basic difference is - Bare repos don’t have working trees, non-bare repos do. That’s not very helpful. I got to know more by reading this.

In Git (from version 1.7.0 and above) the repository has to be “bare” (no working files) in order to accept a push.

Shared Repositories Should Be Bare Repositories

Sometime in the future, we’d like to allow users to git push their repos directly to GlitterGallery and share repositories. Keeping that in mind, all our main work takes place in bare repos. We do have non-bare repos too (called satellite repos), but they’re just used for intermediary work. For example - when a user uploads an image, we copy it to the satellite repo directory, add it to the satellite repo’s index, and then push to the bare repository (a rough sketch of this flow is below). This is how GitLab handles their repos too.
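GlitterGallery itself does this with Rugged (the Ruby bindings for libgit2), but the flow is easy to show in a short sketch. Here is roughly the same satellite-to-bare flow using pygit2, libgit2's Python binding, with hypothetical paths and file names:

import shutil
import pygit2

BARE_PATH = '/repos/project.git'        # hypothetical locations
SATELLITE_PATH = '/satellites/project'
UPLOADED_FILE = '/tmp/upload/image.png'

# the main repo is bare; the satellite is a normal (non-bare) clone of it
bare = pygit2.init_repository(BARE_PATH, bare=True)
satellite = pygit2.clone_repository(BARE_PATH, SATELLITE_PATH)

# copy the uploaded image into the satellite's working tree and stage it
shutil.copy(UPLOADED_FILE, SATELLITE_PATH + '/image.png')
satellite.index.add('image.png')
satellite.index.write()

# commit the staged change in the satellite
sig = pygit2.Signature('user', 'user@example.com')
tree = satellite.index.write_tree()
parents = [] if satellite.head_is_unborn else [satellite.head.target]
satellite.create_commit('refs/heads/master', sig, sig, 'Add image', tree, parents)

# push the result to the bare repository
satellite.remotes['origin'].push(['refs/heads/master'])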

Google Summer of Code: Fifth week update

Google Summer of Code 2014: Week 5 update, and updates about my work before the start of the mid-term evaluations.


Hello folks, 

I would like to share details about what I have done this week. Also, as the fifth-week evaluations have approached, I would like to sum up all the tasks completed to date.

This week, I was able to implement a basic admin module for the web application, allowing the admin easy control over the content present on the application. Some time was also spent designing and implementing media search. We designed a method for creating content, i.e. a way to embed images, videos and other media content into the text content.

The last five weeks have been quite hectic for me; I was working on the fedora-college project with the help of the Fedora Infrastructure team. I was allotted Mr. Eduardo Echeverria, Mr. Luis Bazan, Mr. Yohan Graterol and bckurera as mentors for my project. Most of my interactions have been with Eduardo Echeverria and Yohan Graterol. They have helped me quite often and frequently come to my rescue on problems during the development phase. I would also like to thank the fedora-infra and fedora-design team members for always being available to help.

During the last few weeks I have worked on the project "Fedora College". It aims to create a virtual classroom for new Fedora contributors, where we use the available video and other multimedia resources to help new contributors, as well as existing ones, learn and engage with the community. It acts as a platform for new contributors to engage with the community and learn how they can best contribute. Mostly this service will be used to run online courses on contributing at various levels, be it documentation, bug fixing or packaging.


The work for the project was divided into two parts, namely the product API and the web-based GUI. The API is mostly read-only and offers write permissions only for managing media content. The web GUI offers functionality like railcast.com and edx.com, helping deliver multimedia content to people enrolled (registered) with us. We also aim to create a method for community interaction. In the previous five weeks I was able to implement the search, the product API and multiple modules for the project; modules like Authentication, Content writing, Admin, Home, Blog and Search were developed over the course of time. Once you clone the GitHub repository and run the project, you can find a detailed list of available API endpoints at "/api/docs/". During the last week we also published the initial release of the product. The code for the same is here and the project proposal for GSoC is here.


Thanks for reading through the post.



GG halfway through with redesign

We’ve reached about the halfway point with the GG redesign, so I’ve decided to share updates in this blog post. It’s basically design work, so the best way is to just try it out. Head over to http://glittery-banas.rhcloud.com, sign up for an account and give it a roll :) Some features may be hidden/removed for now, because we wanted to make sure that what is visible performs well.

Managed to capture the login page right in time to include the red nprogress loader :D

Login page

I wanted to make the onboarding experience smooth, so there are guides that show up everywhere to help you around the system.

Dashboard

As you’d notice, there’s plenty of whitespace and big buttons to help you navigate without confusion. As you create new projects, widgets show up on the right to act as quick links to your recent projects. Notifications will be introduced next to projects soon :)

New Project

Here’s yet another example of a guide, helping a user understand what to do on a new project page.

Another example of a guide prompting the next action

Project files show up neatly in different columns - when I get to the JS work, I’ll try to make it Pinterest-style.

Freshly created project page

Also, I’ve tinkered a little with the User & project settings, introducing more UI elements.

Settings page

So go ahead and give it a try - http://glittery-banas.rhcloud.com. As usual, remember to report issues into the issue tracker :)

June 23, 2014

Bug fixing on bugspad

This week was spent mostly on fixing the tiny bits of bugspad. 

Did the following:

  • Removed unnecessary binary files.
  • Added missing changes for using Redis bug tags.
  • Bugs can no longer be closed unless all dependent bugs are closed.
  • Emails are no longer visible if the user is not logged in.
  • Added the missing status tag.
  • Added auto change of status from new to open upon commenting.

I am currently working on the search interface, which is planned to be built completely
on Redis, the main ingredient in the planned lightning-fast nature of Bugspad.


June 19, 2014

QA Testing Video Chat App – GSoC Week 4

I would recommend reading my last post before reading this one. It should help you understand the content of this post better :)

Last week, I did both dev testing and load testing of the video chat app. In this post, I am gonna show you the results I got.

Bugs & Fixes

Bugs I found during testing:-

  • Re-rendering video object: When a user moves from the video chat app to an IRC channel/server and then back to video chat again, the video stream (both local and remote) doesn’t get re-rendered. This was actually a browser-specific issue and I found its fix by chance. Have a look at the fix.
  • Click event on ‘Accept/Reject’ button gets triggered more than once: This one was simple to fix. I just had to use jQuery’s off method to remove the click event attached to the element before attaching a new click event. Fixed.
  • Option to enable/disable video chat: I know this isn’t a bug. My mentor told me to add this feature.

Load Testing

Framework

In my last post, I mentioned I used the EasyRTC framework to build p2p (peer-to-peer) video chat. Although it’s a p2p app, it still requires a server for signaling (read more). For signaling, EasyRTC uses socket.io on top of node.js. So, basically, I load tested the EasyRTC server’s socket.io implementation.

I used the socket.io-client module to interact/connect with the EasyRTC server by writing against their server-side API, as documented here. Below is a snippet of the simple client I created.

var io = require('socket.io-client'); // the socket.io-client module mentioned above

var Client = function () {
  /* Some code omitted */
  this.connectToServer = function (callback) {
    var client = io.connect(
        'http://'+SERVER_HOST+':'+SERVER_PORT ,
        {'force new connection': true}
    );
    if (!client) {
      throw "Couldn't connect to socket server";
    }

    var msg = {
      msgType: 'authenticate',
      msgData: {
        apiVersion: "1.0.11",
        applicationName: "waartaa"
      }
    };

    var easyrtcAuthCB = function (msg) {
      if (msg.msgType == 'error') {
        callback('easyrtc authentication failed');
      } else {
        // code omitted
      }
    };

    client.json.emit('easyrtcAuth', msg, easyrtcAuthCB);
  };
}

Script & Parameters

I made a command line tool in nodejs to run the tests.

node run.js --no-of-connection=700 --no-of-concurrent-connections=[1,5,10]

In the above command you should notice two things, namely the no. of connections and the no. of concurrent connections. My system (Intel(R) Core(TM) i5 CPU M 460 @2.53GHz, 4 GB RAM) couldn’t stand more than 700 connections; the server process started consuming 1 GB RAM beyond 700 connections. I will tell you the reason for this later. For now, 700 sufficed for initial testing. My mentor told me to keep concurrent connections very low because we won’t be expecting very high concurrency in production initially.

Data collected

After the script was run several times, I collected the following data each time for different concurrency levels against the no. of connections:-

  • Roundtrip time (ms): Time taken for the client to get connected to the server. This includes latency (which is negligible because client and server are on the same machine) + client-to-server message time + server-to-client message time.
  • Avg. CPU load (%) & Memory (MB): I used this node module to collect these data.

Below you can see the data plotted as graphs, along with their analysis:-

Roundtrip time analysis

  • As the no. of connections increases, round-trip time increases. This is because the default behavior of EasyRTC is to send every client in a room a separate update whenever a client leaves the room, enters the room, or changes its status, and this leads to an increase in round-trip time.
  • An increase in concurrency also increases round-trip time. I guess this is because nodejs, being a single-threaded server, processes one request at a time. Please correct me if I am wrong.
  • Roundtrip time < 500 ms is a good number. For concurrency = 1 and any no. of connections in the above graph, round-trip time stays < 500 ms.

CPU analysis

  • Nothing too unnatural about CPU load increasing with the no. of connections and concurrency.
  • Good thing is the rise is not too steep for lower concurrency.

Memory analysis

  • As I mentioned earlier, 700 connections consumed 1 GB memory, and you can clearly see that in the above graph. This is because, as I said earlier, the default behavior of EasyRTC is to send every client in a room a separate update whenever a client leaves the room, enters the room, or changes its status. That means for 700 connections, (700*701)/2 = 245350 messages were sent by the server (see the snippet after this list). You can do the math now :)
  • For any concurrency, memory rises at the same rate with an increase in the no. of connections. Thus, memory consumption is independent of concurrency.
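As a quick sanity check of that figure, the per-join broadcast behavior means the total message count grows quadratically with the number of connections:

def total_messages(n):
    # the k-th client to join a room triggers an update to all k members,
    # so the total is 1 + 2 + ... + n = n*(n+1)/2
    return n * (n + 1) // 2

print(total_messages(700))  # 245350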

Conclusion

  • More bugs will pop up when my mentors and other people start using the app; until then, a developer can only believe his/her app works perfectly ;)
  • The default behavior of the EasyRTC server can cause scaling issues. We might have to change it in the future by adding custom event listeners on the server side.

PS: If you have made it this far and found anything wrong in this post, please do comment. :)

<script>JS I love you</script>


June 18, 2014

User Form Handling in Freemedia Tool
The very first thing in the Freemedia process is the user filling out the form to create a ticket (i.e., make a request). This form should be handled correctly, otherwise the whole process becomes a mess: if a user enters invalid details, volunteers will not be able to fulfill the request. So the form handling mechanism should provide:
  • Proper validation for each field.
  • Prevention of duplicates.
  • User friendliness.
To achieve these things we first need a proper database structure, so the following ER diagram illustrates it.

After the database design I created models for each entity in CakePHP, using the CLI tool CakePHP provides. It makes life easy :). By just typing "cake bake model" we can create a new model. The CLI tool suggests appropriate models from the database so that we can select what we want, and then suggests things that can be included in a model, such as validations and associations (linking models).


<script src="https://gist.github.com/coma90sri/b2510f7c3f979ddc094e.js"></script>
Then I created a controller to handle the model data. So far the controller has an add() function which inserts user data into the database.

<script src="https://gist.github.com/coma90sri/decf5728d2555f5f9df6.js"></script>
Then we need a view to display the user form.

<script src="https://gist.github.com/coma90sri/1d8f0422457edd7579fb.js"></script> In next form, there is a field for select country. In order to add country list I used formHelp in view.So I created file in view/Helper and use it in the controller(new helper function name should be added to $helper array). ex- if file is LangHelper.php  in controller $helpers = array('Html', 'Form','Lang', 'Session')  should be added.
Here is the code for helper I added to create country drop down list

<script src="https://gist.github.com/coma90sri/481b7d6eaeb6de14f0ec.js"></script> Then in view file which controller has added the helper can use helper we can create country list like this
             $this->lang->countrySelect('Country');
GSoC - week 4
Last week I had exams at the university, which left me with less time for work. But I made some progress anyway.

Mock performance
Mock builds usually take a considerable amount of time and there is not much that can be done about the speed of the actual building, but the package installation can be improved. Last time I created the noverify plugin, which provided a considerable speedup, and my mentor recommended trying to remove fsync calls during package installation. I do that by making a small library in C containing only empty fsync() and fdatasync() functions and copying it into the buildroot, then using LD_PRELOAD to make it replace the actual libc implementation of these calls. My mentor measured the performance differences on his KVM virtual machine and the results are amazing - times installing @buildsys-build and maven-local (look at the wall clock time):

Standard yum:
User time (seconds): 55.66
System time (seconds): 5.78
Percent of CPU this job got: 25%
Elapsed (wall clock) time (h:mm:ss or m:ss): 4:02.03

Standard dnf:
User time (seconds): 49.61
System time (seconds): 5.68
Percent of CPU this job got: 23%
Elapsed (wall clock) time (h:mm:ss or m:ss): 3:50.94

With noverify plugin:
User time (seconds): 47.85
System time (seconds): 5.32
Percent of CPU this job got: 36%
Elapsed (wall clock) time (h:mm:ss or m:ss): 2:25.25
Maximum resident set size (kbytes): 150248

With noverify plugin, fsync() and fdatasync() disabled:
User time (seconds): 46.38
System time (seconds): 4.97
Percent of CPU this job got: 87%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:58.56
Maximum resident set size (kbytes): 150260


That's more than 4x faster and could be a valuable improvement for both packagers and koji builders.

LVM plugin
I started implementing the basis for LVM support. It can already use an LVM snapshot instead of the root cache. To enable it, put the following in your config:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'mock-vg'
}

where mock-vg is the name (not path) of the volume group you want mock to use for creating new volumes. Other configuration options are possible - filesystem, size, snapshot_size, mkfs_args. The root cache is disabled because it would be redundant. When started, the plugin creates a logical volume, mounts it, and after the buildroot is initialized, it makes a snapshot and mounts the snapshot instead of the original. All following builds then alter only the snapshot, and when the clean command is executed (usually at the beginning of a new build) the snapshot is deleted and replaced with a new one (a rough sketch of this step is below). I originally tried to implement it the other way around - making a snapshot, still working with the original volume, and then merging the snapshot when cleaning. But it was very slow - the merging took more than 10s. The current approach is fast enough - cleaning is just deleting a snapshot and creating a new one, which happens almost instantly (compared to deleting the buildroot and unpacking the root cache).
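To make the clean step concrete, here is a rough sketch of the LVM calls involved (my illustration with made-up volume names and mount point, not mock's actual code):

import subprocess

def clean_buildroot(vg='mock-vg', origin='mock-root',
                    snap='mock-root-snap', snapshot_size='2G',
                    mountpoint='/var/lib/mock/root'):
    # throw away the current snapshot together with all its changes...
    subprocess.check_call(['umount', mountpoint])
    subprocess.check_call(['lvremove', '-f', '%s/%s' % (vg, snap)])
    # ...and take a fresh one from the untouched original volume
    subprocess.check_call(['lvcreate', '-s', '-n', snap,
                           '-L', snapshot_size, '%s/%s' % (vg, origin)])
    subprocess.check_call(['mount', '/dev/%s/%s' % (vg, snap), mountpoint])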

Next week I'll try to implement more advanced features of the LVM plugin - snapshot management, which would allow having a hierarchy of snapshots with different preinstalled packages, facilitating a faster workflow for packagers working with a more diverse set of packages.


June 17, 2014

GSOC Week 4: "This is not a coding contest"
Using my first patch to Gluster as a stepping stone, I've written a small utility, glusterfsiostat, in Python, which can be found at https://github.com/vipulnayyar/gsoc2014_gluster/blob/master/stat.py. Currently, the modifications done by my patch to io-stats, which is under review as of now, dump private information from the xlator object to the proper file for private info in the meta directory. This includes total bytes read/written along with read/write speed over the previous 10 seconds. The speed at every 1 second is identified by its respective Unix timestamp and hence given out in bytes/second. These values at discrete points of time can be used to generate a graph.

The Python tool first identifies all Gluster mounts in the system (see the sketch below), determines each mount path and parses the meta xlator output in order to generate output similar to the iostat tool. Passing the '-j' option gives you extra information in a consumable JSON format. By default, the tool pretty-prints the basic stats in human-readable form. This tool is supposed to be a framework on which other applications can be built. I've currently put it out on the gluster-devel ML for community feedback so as to improve it further.
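As an example of that first step, finding the Gluster mounts boils down to scanning /proc/mounts for filesystems of type fuse.glusterfs (a simplified sketch; the actual stat.py additionally parses the per-xlator files under each mount's meta directory):

def gluster_mounts():
    """Return the mount points of all mounted GlusterFS volumes."""
    mounts = []
    with open('/proc/mounts') as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype == 'fuse.glusterfs':
                mounts.append(mountpoint)
    return mounts

print(gluster_mounts())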

Note: In order to test this, you need to apply my patch (http://review.gluster.org/#/c/8030/) in your repo first, build, and then mount a volume. Preferably perform a big read/write operation on a file on your Gluster mount before executing the Python script. Then run it as 'python stat.py' or 'python stat.py -j'.

Quoting my mentor Krishnan from our latest weekly hangout: "This is not a coding contest." What he meant was that just writing code and pushing it is not the essence of open source development. I still need to interact with and gain more feedback from the community regarding the work I've done till now, since our aim is not to complete the project just for the sake of doing it, but to build something that people actually use.

June 16, 2014

Adding OpenID Auth system to cakePHP
Fedora infrastructure currently supports OpenID and Persona (FedAuth). So I have to add an OpenID auth system to the login system.

What is OpenID?
    OpenID allows you to use an existing account to sign in to multiple websites, without needing to create new passwords. Your password is given only to your identity provider, and that provider then confirms your identity to the websites you visit. So there is no need to worry about an unscrupulous or insecure website compromising your identity.

How to handle OpenID in CakePHP?

   In this process we need an OpenID library. I found an OpenID library for PHP by JanRain, licensed under the MIT license.
First of all, the OpenID library (the Auth folder) should be added to the app\vendor folder, then the OpenID component (OpenidComponent.php) to app\Controller\Component. Then we need a login form.

<script src="https://gist.github.com/coma90sri/f5b6b0b4c2ba1ddb2081.js"></script> Next we have to write controller to handle this form. This controller handle following tasks

  • Show the login form.
  • Redirect the user to the OpenID provider (when the user hits submit).
  • Handle the response from the OpenID provider.


    The code below just checks whether OpenID authentication succeeded or not.
<script src="https://gist.github.com/coma90sri/33ea3646e1743c3dfa25.js"></script> The code above is a modified version of the previous userController.php. Here I added the Simple Registration Extension (SReg), which retrieves nine commonly requested pieces of information:
nickname, email, fullname, dob (date of birth), gender, postcode, country, language, and timezone

So we can retrieve at least a few of them and use them to identify the user. All the requested info arrives as an array via the POST method.
Check the implementation of this on OpenShift:
http://freemedia-dulanja.rhcloud.com/users/login

Bugspad.org – Into the dev testing phase.

I am greatly excited to tell you that Bugspad has entered the dev testing phase of development, wherein we will be testing it on my mentor's server. By the way, it is not feature-complete yet, but you can have a first look at bugspad.org. As for this week, I added the basic admin interface and filled in missing features.

  • Added bug assignee and version tables, modifying the corresponding tables and templates.
  • Fixed null bugs and the components and products editing pages.
  • Added a feature for adding attachments to bugs.
  • Added dependencies and fixed the corresponding table schema.
  • Added support for filling in the assignee, dependencies and docs maintainer on the bug filing page.
  • Added auto-comment on adding "blocks" and "depends on".

Overall, many features were added. Now that we are in the production environment, I would love to hear feedback before feature-complete testing can begin.