July 09, 2014

Bookmarking chat logs in waartaa – GSoC post-midterm

The post-midterm phase of GSoC has already begun and there is still a lot of work to be done (mostly UI improvements and deploying all my previous work on the server).

Lately, I haven't had much time, but I have somehow managed to add a bookmarking feature to waartaa. It supports both single and multiple bookmarking, on both the live chat page and the search page.

Single Bookmark

Beside every chat message, a bookmark icon appears on hover. When the user clicks on it, the message gets bookmarked (in the front-end only) and a popup appears on top of the chat window. The popup has a 'Label' field whose default value is the chat message's date-time, a 'Done' button to save the data in the db, and a 'Cancel' button for the obvious reason.

Multiple Bookmarks

It often happens that a user wants to bookmark multiple chat messages under one label, for instance to save a conversation that happened in some random IRC channel. It's easy to bookmark multiple messages in waartaa: choose the two endpoints of a conversation, long-click (at least one second) one of them and normal-click the other. This bookmarks all messages in between, along with the endpoints.

Bookmarks model

/*
Bookmarks {
  label: String,        // bookmark label
  roomType: String,     // 'channel'/'pm'/'server'
  logIds: List,         // ids of the bookmarked chat messages
  user: String,         // username of the user for whom the bookmark is created
  userId: String,       // user id
  created: Datetime,
  lastUpdated: Datetime,
  creator: String,      // username of the user who created the bookmark
  creatorId: String
}
*/
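For illustration, here is what a saved bookmark document could look like, written as a Python-style dict (all values below are made up):

# Purely illustrative values, not taken from a real waartaa database.
bookmark = {
    'label': '2014-07-09 18:32',
    'roomType': 'channel',
    'logIds': ['logid1', 'logid2', 'logid3'],  # ids of the bookmarked messages
    'user': 'alice',
    'userId': 'user42',
    'created': '2014-07-09T18:32:00Z',
    'lastUpdated': '2014-07-09T18:32:00Z',
    'creator': 'alice',
    'creatorId': 'user42',
}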

Screenshots

I know there isn't much you can infer from the screenshots below, but this is all I have to share right now.

single-bookmark

Single bookmarking

multiple-bookmark

Multiple bookmarking

Conclusion

With this, the bookmarking feature is complete, and here is PR 129.

<script>JS I love you.</script>


July 07, 2014

GSOC Week 7: Back on track
It's time to get back on track. Passing the midterms, supposedly with flying colors, was really great. I apologize for my tardiness over the last two weeks and for being unable to post any update regarding my progress, owing to the fact that I was not feeling very well during this time.

The progress till now includes re-thinking the previous patch and the methodology io-stats will use to dump the private info. As suggested by my mentor, I'm moving the job of speed calculation and other major work into the glusterfsiostat script rather than coding it all in the glusterfs codebase. You can look at the new patch here: http://review.gluster.org/#/c/8244/.
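To make that division of labor concrete, here is a rough sketch of how a script like glusterfsiostat can derive speeds from cumulative counters dumped by io-stats (the dump format and counter names below are assumptions for illustration, not the actual format defined by the patch):

import time

def sample_counters(dump_path):
    # Hypothetical 'key=value' dump with cumulative byte counters.
    counters = {}
    with open(dump_path) as f:
        for line in f:
            key, sep, value = line.partition('=')
            if sep and key.strip() in ('bytes_read', 'bytes_written'):
                counters[key.strip()] = int(value)
    return counters

def speeds(dump_path, interval=1.0):
    # Two samples of cumulative counters give bytes/second over the interval.
    before = sample_counters(dump_path)
    time.sleep(interval)
    after = sample_counters(dump_path)
    return {key: (after[key] - before[key]) / interval for key in before}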

Also, my project was accepted to be hosted on Gluster Forge at https://forge.gluster.org/glusterfsiostat, where you can track the progress of the python script and the rest of the code base related to my project.

Recently, my mentor and I have started to track our progress with the help of the Scrum model, using Trello. This helps us break the bigger jobs into smaller tasks and set a deadline on each of them to better estimate their expected dates of completion.

July 06, 2014

Bugspad bootstrapped!

Had a refreshing retreat with my family this week. As planned, I went through Bootstrap CSS to give the bugspad UI a decent and responsive look. I have also been planning how to use flags in bugspad, along with the user group permissions. The following are some snapshots of the current UI, revamped after integrating Bootstrap. You can otherwise check it out here. I have also been thinking about a mascot for bugspad, as Tux is for Linux and Buggie is for Bugzilla.

bug_desc

filebug

home

login

searchbug


July 01, 2014

Google Summer of Code sixth Week update.
Google Summer of Code 2014: Week 6 update


It's the sixth week of Google Summer of Code and officially the middle week of the program. I have passed the midterm evaluations by the Google team. It has been a really great summer so far, with a lot of opportunities to learn.

This week we worked mostly towards preparing a demo of what we have done. A lot of time went into configuring the web server to run Flask with mod_wsgi, but with little success. I have created a discussion about the problem on Stack Overflow. Anyone who would like to come forward with a helping hand is most welcome.
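For reference, a minimal mod_wsgi entry point for a Flask app usually boils down to a small .wsgi file along these lines (the path and package name below are placeholders, not the project's actual layout):

import sys

# Make the application package importable from the deployment checkout.
sys.path.insert(0, '/var/www/myproject')

# mod_wsgi looks for a module-level callable named "application".
from myproject import app as application

On the Apache side, a WSGIScriptAlias directive then points at this file.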

Also, I worked on the comments module, enabling comments on the webpages. I was able to successfully implement the comments module, but it still doesn't support threading; I expect to add logic for threading by the end of this week (one common approach is sketched below). We also moved the project's GUI from old self-written CSS to Foundation CSS, making it more user friendly and compatible with all devices.
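One common way to model threaded comments, sketched here with SQLAlchemy (this is the general adjacency-list technique, not the project's actual model), is a self-referential parent id so that replies hang off their parent comment:

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Comment(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    content = db.Column(db.Text, nullable=False)
    # A NULL parent_id marks a top-level comment; otherwise it's a reply.
    parent_id = db.Column(db.Integer, db.ForeignKey('comment.id'))
    replies = db.relationship(
        'Comment', backref=db.backref('parent', remote_side=[id]))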



Thanks for reading through the post.

June 30, 2014

Rails development tools

During the past two months I have been reading constantly about Rails and how I could get more productive when writing code and testing my apps. There is a ton of information about these matters on the web, and I'll try to include as many of the articles as I have found useful in building my knowledge.

Disclaimer: This article is heavily inspired by Thoughtbot's Vim for Rails Developers, which I stumbled upon while browsing the screencasts of codeschool.

Editor of choice (vim)

When you work from the command line and you use Linux, your editor preference comes down to two choices: vim and emacs. I started with vim some time ago, so I'll stick with it.

If you are new to vim, read this cheatsheet to learn the basic commands.

vim plugins

Start by installing pathogen.vim, a vim plugin manager:

mkdir -p ~/.vim/autoload ~/.vim/bundle && \
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim

Then add this to your vimrc:

execute pathogen#infect()

From now on, every plugin that is compatible with pathogen can be installed simply by cloning its repo into ~/.vim/bundle.

An alternative to pathogen is vundle. I haven't used it, but it behaves similarly.

rails.vim

Probably the single most useful plugin when dealing with Rails projects.

Install it with:

git clone git://github.com/tpope/vim-rails.git ~/.vim/bundle/vim-rails

Browsing through the app

You can use :RController foos and it will take you straight to app/controllers/foos_controller.rb. As you might guess, the same happens with :RModel foo, etc. There is also tab completion, so you can toggle between all models/controllers, etc.

Another useful command is :find. Invoking it with a name foo, it first searches for a model named foo. Tab completion is also your friend.

One other really cool feature is go to file. Suppose we have the following model:

class Blog < ActiveRecord::Base

  has_many :articles

end

Place the cursor on the word articles and press gf: vim opens the article model. After saving your changes, you can go back to the blog model by pressing Ctrl-o.

Run your tests through vim

Running tests is also a matter of a single command. Say you are editing a specific spec/test file. All you have to do is run :Rake and the tests for that particular file will be run, without leaving your favorite editor :)

There are a lot of supported commands, and your best bet is to invoke :help rails in vim and learn about them.

Be sure to also check vim-rails on github.

vim-snipmate

SnipMate implements snippet features in Vim. A snippet is like a template, reducing repetitive insertion of pieces of text. Snippets can contain placeholders for modifying the text if necessary or interpolated code for evaluation.

Install it:

cd ~/.vim/bundle
git clone https://github.com/tomtom/tlib_vim.git
git clone https://github.com/MarcWeber/vim-addon-mw-utils.git
git clone https://github.com/garbas/vim-snipmate.git
git clone https://github.com/honza/vim-snippets.git

Writing a method

Reading the source code of the snippets above, let's see how we can create a method. The snippet reads:

snippet def
        def ${1:method_name}
                ${0}
        end

So, the snippet is named def, and in order to invoke it we must write def and hit Tab. It then expands, placing the cursor on the highlighted method_name. This is what it looks like:

def method_name

end

Once you start typing, method_name gets replaced with what you type. When you finish, hit Tab again to go to the method body.

Now all you have to do is read the ruby.snippet and find out what snippets are supported.

fugitive.vim

vim-fugitive brings the power of git commands inside vim.

Install it with:

git clone git://github.com/tpope/vim-fugitive.git ~/.vim/bundle/vim-fugitive

Check out the github page for a list of commands and some interesting screencasts.

Terminal multiplexer (tmux)

Again, here you have two options: screen or tmux. My first contact was with screen, but recently I decided to try tmux.

I won't go into any details, but I highly recommend watching Chris Hunt's presentation Impressive Ruby Productivity with Vim and Tmux. It's an awesome talk.

Development stack

There is a great article I stumbled upon yesterday about some must-have gems for development, some of which I haven't tested. Here is what I have so far.

jazz_hands

jazz_hands is basically a collection of gems that you get for free with just one gem. It focuses on enhancing the rails console. It provides:

- Pry for a powerful shell alternative to IRB.
- Awesome Print for stylish pretty print.
- Hirb for tabular collection output.
- Pry Rails for additional commands (show-routes, show-models, show-middleware) in the Rails console.
- Pry Doc to browse Ruby source, including C, directly from the console.
- Pry Git to teach the console about git. Diffs, blames, and commits on methods and classes, not just files.
- Pry Remote to connect remotely to a Pry console.
- Pry Debugger to turn the console into a simple debugger.
- Pry Stack Explorer to navigate the call stack and frames.
- Coolline and Coderay for syntax highlighting as you type. Optional. MRI 1.9.3/2.0.0 only

Again, by visiting the github page you will get all the info you want. There is an open issue: installation on ruby 2.1.2 is failing for now. For the time being you can put the following in your Gemfile:

gem 'jazz_hands', github: 'nixme/jazz_hands', branch: 'bring-your-own-debugger'
gem 'pry-byebug'

rubocop

rubocop is a tool which checks if your code conforms to the ruby/rails community guidelines.

You can check the article I wrote, where I explain how to get it up and running.

railroady

railroady is a tool that lets you visualize how the models and the controllers of your app are structured. Instructions on how to install it are on the github page. You can see what it looks like on the fedoraruby project I'm currently working on.

annotate

annotate generates a schema of the model and places it on top of the model. It can also place it on top of your rspec files and the factories. It looks like this:

# == Schema Information
#
# Table name: bugs
#
#  id            :integer          not null, primary key
#  name          :string(255)
#  bz_id         :string(255)
#  fedora_rpm_id :integer
#  is_review     :boolean
#  created_at    :datetime
#  updated_at    :datetime
#  last_updated  :string(255)
#  is_open       :boolean
#

Testing stack

There are a ton of useful tools out there, and if you are new to Rails development you can easily get lost. Rails has Two Default Stacks is a nice read that sums it up. I will try to update this post as I find more useful tools along the way.

rspec

I am mostly in favor of rspec because of its descriptive language and the great support from other complementary testing tools.

capybara

So, why capybara and not cucumber? I'm not an expert on either of these tools, but from my understanding capybara is more focused on developers, whereas cucumber's human-readable language mostly targets applications where one talks to a non-technical customer.

guard

Guard watches files and runs a command after a file is modified. This allows you to automatically run tests in the background, restart your development server, reload the browser, and more.

It has nearly 200 plugins which provide different options as guard is not only used for testing. The particular plugin for rspec is guard-rspec.

When you make the smallest change to a test and you hit save, guard will run that particular test group again to see if it still passes.

I tend to invoke guard with guard -c which runs the tests in a clear console every time.

Read the guard wiki page, which is comprehensive, and also watch the guard railscast to better understand it.

Other super useful tools

ctags

Quoting from What is ctags?:

Ctags generates an index (or tag) file of language objects found in source files that allows these items to be quickly and easily located by a text editor or other utility.

There are a bunch of different tools to create a tags file, but the most common implementation is Exuberant Ctags, which we will use.

It supports 41 programming languages and a handful of editors.

Installation

Install ctags via your package manager. It should be supported in all major distributions.

Configuration

For a rails project, in your application root directory you can run:

ctags -R --exclude=.git --exclude=log *

This recursively searches all files in the current directory, excludes the .git and log directories, and creates a tags file under the current dir. By the way, you may want to add that file to .gitignore.

Next, adding the following line to ~/.vimrc:

set tags=./tags;

sets the location of the tags file, which is relative to the current directory.

You can move the above options into ~/.ctags, so in our case this will be:

--recurse=yes
--tag-relative=yes
--exclude=.git
--exclude=log

So in future runs of ctags all you need to do is ctags *.

ctags doesn't regenerate tags automatically, so each time you write code that is taggable, you have to run the command again. If you are working in a git repository, be sure to check out Tim Pope's Effortless Ctags with Git. What this does is:

Any new repositories you create or clone will be immediately indexed with Ctags and set up to re-index every time you check out, commit, merge, or rebase. Basically, you’ll never have to manually run Ctags on a Git repository again.

Usage

Say we have a file containing hundreds of lines. Inside a method you see the definition below:

def contain_multiple_methods
  method_one
  method_two
  method_three
end

While you could search for these methods, you can save a few keystrokes by simply placing the cursor on the line of the method you want to look up and, in vim normal mode, pressing Ctrl + ] (control and right square bracket). This should take you to where the method is defined. Go back to where you were by pressing Ctrl + t.

Note: The usage of ctags isn't restricted to the current file. If a method in your file is inherited from another class, then searching for it will jump to that class's file.

Secret power

Wouldn't it be cool if we could search for methods in the Rails source code? This is where the power of ctags really shines. All you have to do is tell ctags to also tag the rails source code.

First I cloned the rails repository into vendor/rails:

git clone https://github.com/rails/rails.git vendor/rails

It should take less than a minute to download. You wouldn't want the rails source code to be included in your git tree, so you simply exclude vendor/rails by adding it to .gitignore.

Lastly, regenerate the tags with ctags *.

Now navigate with vim to one of your models that has for example the association has_many, place the cursor on it (or just on the same line) and hit Ctrl + ]. Pretty cool huh? In case you forgot, go back to where you were with Ctrl + t.

ack

ack is like grep but on steroids.

Designed for programmers with large heterogeneous trees of source code, ack is written purely in portable Perl 5 and takes advantage of the power of Perl's regular expressions.

It supports multiple types, which you can see by typing ack --help-types.

Of course there is a vim plugin!

alternative (ag)

While reading the more-tools page of ack I found out about ag, also called the_silver_searcher. It is said to search code about 3–5× faster than ack, is written in C and has some more enhancements over ack. You may want to give this a try too. And as you might have guessed, there is also an ag vim plugin.

Conclusion

The editor of choice and the tools you use in web development play a great role in your productivity, so you have to choose wisely and spend some time getting to know them. Personally, I learned a lot during the days I spent crafting this post, and I hope you got something out of it too :)

June 28, 2014

Post Midterms: Bugspad now into dev-testing stage 2

Cheered that I passed the midterms :). Hats off to my mentor Kushal Das, who has been patient with me throughout, literally helping me with tiny bits and bearing my silly mistakes. I have learnt a lot under him. I got a new instance at http://209.132.184.128/; thanks a lot to the fedora-infra team. This will be used for my dev-testing with larger data sets and real-time performance assessments. I have added an error logging system to log the output of the various printf statements throughout my code. I have also refactored the code to make it cleaner and more understandable. My next target is to fix the current crappy UI of bugspad with Bootstrap, and to use the bugspad API to do testing on a larger data set. Am excited about it! :D


June 27, 2014

Ticket Handling System in Freemedia
The ticket handling system plays a major role in the Freemedia service, because it is how volunteers get to know about media requests.
A ticket is created after the user (requester) fills in the request form. It is then added to the ticket list, where volunteers can see it. Below is how it looks.


Here the ticket ID is generated according to the time the ticket was created. The volunteer can then decide which ticket to choose and fulfill. When a volunteer clicks on the ticket ID, they are directed to a page where the entire ticket is shown.



In the ticket view we can see a few information attributes.


  • Summary - A brief description summarizing the request, in the format: <first name> from <country> needs <requested media>
  • Description - This contains the residential address of the requester.
  • Reporter - Requester's email ID.
  • Status - The current status of the ticket. One of new, assigned, closed, reopened.
  • Keywords - Keywords that a ticket is marked with. Useful for searching and report generation.
  • Media Version - Version of the media that this ticket pertains to.
  • Cc - A comma-separated list of other users or e-mail addresses to notify.
  • Assigned - Principal person (volunteer) responsible for handling the ticket.
  • Resolution - Reason why a ticket was closed. One of fixed, invalid, wontfix, duplicate, worksforme.

The ticket life cycle is illustrated below.



Integrating bootstrap

To create the ticket view I use Bootstrap, which is a nice framework for UI design. We can easily integrate Bootstrap into CakePHP. After downloading and extracting Bootstrap from this link, we simply add its files to the app/webroot folder:
bootstrap.min.css - app/webroot/css
bootstrap.min.js - app/webroot/js

To work with Bootstrap we need jQuery, so simply add jquery-x.x.x.min.js to app/webroot/js.

Then we have to import Bootstrap before use. Instead of importing it in each view file, we can add the imports to app\View\Layouts\default.ctp, which is the layout shared by every view in CakePHP. We can import like below:

<?php echo  $this->Html->script('jquery-1.9.1.min');?>
<?php echo  $this->Html->script('bootstrap.min');?>

<?php echo $this->Html->css('bootstrap.min'); ?>

Now we can simply update the class attribute like below, so that all the CSS styling gets applied like magic :)

<?php echo $this->Form->input('Reporter',
       array(
         'class'=>"form-control",
          'label'=>'Reporter:',
          'required'=>false,
         'value'=>  $Email_ID)); ?>

Here the class attribute is set to form-control, so the CSS will be applied to the input.

More details about Bootstrap CSS:
http://getbootstrap.com/css/





June 25, 2014

GSoC week 5
Nofsync
I created another implementation of the nofsync plugin (which disables fsync(), making package installation much faster), this time in Python as a DNF plugin that disables fsyncing in the YumDB. It is a little slower than the C library using LD_PRELOAD, because it doesn't eliminate fsyncs made from scriptlets (by gtk-update-icon-cache and such). But it's much simpler from a packaging perspective (mock can stay noarch) and could actually be upstreamable (in dnf), because there are other use cases where you don't try to recover from hardware failure anyway - for example anaconda: if the power goes down, you probably don't resume an existing installation. And this could make it faster (nofsync makes package installation approximately 3 times faster).
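Reduced to its core, the trick looks something like the sketch below (the idea only, not the actual plugin code, which has to hook into DNF's plugin machinery):

import os

def disable_fsync():
    """Turn fsync/fdatasync into no-ops for this process.

    This is only sane when you don't care about recovering the data
    after a crash or power loss - e.g. a throwaway buildroot.
    """
    os.fsync = lambda fd: None
    os.fdatasync = lambda fd: None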
To compare the two implementations, set either
config_opts['nofsync'] = 'python'
or
config_opts['nofsync'] = 'ld_preload'
The default is 'python'; to disable the plugin, set the option to something else (e.g. an empty string).

LVM support
Last week I implemented the base of an LVM plugin for mock using regular snapshots. This week I rewrote the plugin to use LVM thin snapshots, which offer better performance and flexibility, and which share space with the original volume and other snapshots, so they don't waste much space. I created basic commands that can be used to manipulate the snapshots.
Example workflow:
I'll try to demonstrate how building different packages can be faster with the LVM plugin. Let's repeat the configuration options necessary to set it up:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',
}
You can now also specify 'mount_options', which will be passed to mount's -o option. To set the size to something larger than the default 2GB, use for example 'size': '4G' (it is passed to lvcreate's -L option, so it can be any string lvcreate will understand).
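Putting it together, the options block could look like this (the values here are illustrative):

config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',
    'size': '4G',                  # passed to lvcreate -L
    'mount_options': 'noatime',    # passed to mount -o
}

Now let's initialize it: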
$ mock --init
Mock will now create a thin pool with the given size, create a logical volume in it, mount it and install the base packages into it. After the initialization is done, it creates a new snapshot named 'postinit', which will then be used to roll back changes during --clean (which is by default also executed as part of --rebuild). Now try to install some packages you often use for building your own packages. I'm a Java packager and almost every Java package in Fedora requires maven-local to build.
$ mock --install maven-local
Now, since I want to rebuild more Java packages, I'd like to make a snapshot of the buildroot.
$ mock --snapshot mvn
This creates a new snapshot of the current state and sets it as the default. We can list snapshots of the current buildroot with the --list-snapshots command (the default snapshot is prefixed with an asterisk):
$ mock --list-snapshots
Snapshots for mock-devel:
  postinit
* mvn


So let's rebuild something
$ mock --rebuild jetty-9.2.1-1.fc21.src.rpm
$ mock --rebuild jetty-schemas-3.1-3.fc21.src.rpm
Because the 'mvn' snapshot was set as the default, each clean executed as part of the rebuild command didn't return to the state in 'postinit', but to the state in the 'mvn' snapshot. And that was the reason we wanted LVM support in the first place - mock didn't have to install 300+ MB of maven-local's dependencies again (with original mock, this would probably take more than 3 minutes), but the buildroot was still cleaned of the packages pulled in by the previous build. We could then install some additional packages, for example eclipse, and make a snapshot that can be used to build eclipse plugins.
Now let's pretend there has been an update to my 'rnv' package, which is in C and doesn't use maven-local.
$ mock --rollback-to postinit
$ mock --list-snapshots
  mvn
* postinit
Now the 'postinit' snapshot is set as the default and the buildroot has been restored to the state it was in when the 'postinit' snapshot was taken (right after initialization, with no maven-local there). The 'mvn' snapshot is retained and we can switch back again using --rollback-to mvn.
So now I can rebuild my hypothetical rnv update. If I decide that I don't need the 'mvn' snapshot anymore, I can remove it with
$ mock --remove-snapshot mvn
You cannot remove the 'postinit' snapshot. To remove all logical volumes belonging to the buildroot, use mock --scrub lvm.
 
So that's it. You can create as many snapshots as you want (and snapshots of snapshots) and keep a hierarchy of them to build packages that have different sets of BuildRequires.
A few more details:
  • The real snapshot names passed to LVM commands are prefixed with the root name, to avoid clashes with other buildroots or with volumes that don't belong to mock at all. Mock also checks whether the snapshots belong to its thin pool.
  • The volume group needs to be provided by the user; mock won't create one. It won't touch anything besides the thin pool, so it should be quite safe even if it uses the same volume group as your system (I have it set up like that).
  • The command names suck. I know. I'll try to provide short options for them.
  • If you try the version in my jenkins repository, everything is renamed to xmock, including the command, to allow it to exist alongside the original mock.

June 24, 2014

Shumgrepper – Summary of work

In this blog post, I am going to summarize my work till now. As the mid-term evaluation is going on this week, it is a good time to look at the work done so far, the tasks that are still left, and what I have planned for the coming days.

As per my proposal, I divided the whole project into 5 major tasks.

1. Query building for database

It involves the following tasks:

  • Set up the summershum database and enabled shumgrepper to query it.
  • Designed the basic layout and defined the directory structure of the app.
  • Created end-points to display file information by a particular sha1sum, sha256sum, md5sum and tarsum.

2. Web – API Wrapper of the app

It involves:

  • JSON API: returns JSON content if the request asks for JSON, i.e. the request's Accept header is “application/json”.
  • Compare packages: returns the filenames that differ between the packages being compared.
  • Files of a package: returns the filenames of a package.

To do:

  • GPL license: find the GPL licenses present in packages.

3. Web front-end

It involves improving the GUI of the app:

  • Created an index bar that appears on top of every page.
  • Added a function to summershum to list the names of all the packages. The /packages endpoint lists all the package names; clicking a name shows that package's information.
  • Added docs for the API.

To do:

  • Design the front page of the app. A text box can be added to make /sha1sum, /sha256sum, /tarsum and /md5sum queries simpler.
  • Improvements in the API docs.
  • Separation of the API and the UI.
  • Improvements in the GUI that displays filenames for /compare and /packages/{packages}/filenames.
  • Further UI improvements beyond these.

4. Deployment

I am currently working on its deployment. I hope it will be completed within the next 2 days.

5. Integration and Testing

To do:

  • Create unit tests for the app.
  • Integration of the app. (if time permits)

 


Shumgrepper – Week 5

Last week I worked on improving the GUI of the app, the JSON API for all the end-points, documentation for the API and a few other bug fixes.

1. GUI

Here's the top index bar that will appear on all the pages.

Screenshot from 2014-06-24 14:26:43

 

The Packages button shows the list of packages; clicking a particular package shows that package's information.

Screenshot from 2014-06-24 10:02:43

 

 

2. JSON API for all end-points

A query to display the files of a package:

 $ http get http://localhost:5000/package/tito/filenames

It will return all the file names present in the package.

[
    "/tito-0.5.4/wercker.yml",
    "/tito-0.5.4/titorc.5.asciidoc",
    "/tito-0.5.4/AUTHORS",
    "/tito-0.5.4/.gitignore",
    "/tito-0.5.4/.gitattributes",
    "/tito-0.5.4/test/functional/__init__.py",
    "/tito-0.5.4/test/functional/specs/extsrc.spec",
    "/tito-0.5.4/src/tito/exception.py",
    "/tito-0.5.4/src/tito/distributionbuilder.py",
    "/tito-0.5.3/test/functional/builder_tests.py",
    "/tito-0.5.3/test/functional/build_gitannex_tests.py",
    "/tito-0.5.3/src/tito/common.py",
    "/tito-0.5.3/src/tito/cli.py",
    "/tito-0.5.3/src/tito/buildparser.py",
    "/tito-0.5.3/src/tito/release/main.py",
    "/tito-0.5.3/src/tito/release/copr.py",
    "/tito-0.5.3/src/tito/release/__init__.py",
    "/tito-0.5.3/hacking/titotest-centos-6.4/Dockerfile",
    "/tito-0.5.3/hacking/titotest-centos-5.9/Dockerfile",
    "/tito-0.5.5/src/tito/tagger/zstreamtagger.py",
    "/tito-0.5.5/src/tito/tagger/rheltagger.py",
    "/tito-0.5.5/src/tito/tagger/main.py",
    "/tito-0.5.5/src/tito/tagger/__init__.py",
    "/tito-0.5.5/src/tito/release/obs.py",
    "/tito-0.5.5/src/tito/release/main.py",
    "/tito-0.5.5/src/tito/release/copr.py",
    "/tito-0.5.5/src/tito/release/__init__.py",
    "/tito-0.5.5/rel-eng/custom/custom.py",
    "/tito-0.5.5/hacking/runtests.sh",
    "/tito-0.5.5/bin/tar-fixup-stamp-comment.pl",
    "/tito-0.5.5/bin/generate-patches.pl"
]

Similarly, queries can be made to compare packages and to display information by sha1sum, sha256sum, tarsum and md5sum.

 

3. Added documentation on how to query results via the API.

 

Screenshot from 2014-06-24 10:10:40

 


Google Summer of Code fifth Week update.

Google Summer of Code 2014: Week 5 update
and updates about my work before the start of the midterm evaluations.


Hello folks, 

I would like to share details about what I have done this week. Also, as the fifth-week evaluations have approached, I would like to sum up all the tasks completed to date.

This week, I was able to implement a basic admin module for the web application, allowing the admin easy control over the content present on the application. Also, some time was spent designing and implementing media search. We designed a method for creating content, i.e. a way to embed images, videos and other media content into the text content.

The last five weeks have been quite hectic for me; I was working on the fedora-college project with the help of the fedora-infra team. I was allotted Mr. Eduardo Echeverria, Mr. Luis Bazan, Mr. Yohan Graterol and bckurera as mentors for my project. Most of my interactions have been with Eduardo Echeverria and Yohan Graterol. They have helped me quite often and have frequently come to my rescue on problems during the development phase. I would also like to thank the fedora-infra and fedora-design team members for always being available to help.

During the last few weeks I have worked on the project "Fedora College". It aims to create a virtual classroom for new Fedora contributors, where we will use the available video and other multimedia resources to help new contributors as well as existing ones learn and engage with the community. It acts as a platform for new contributors to engage with the community and learn how they can best contribute. Mostly, this service will be used to run online courses on contributing at various levels, be it documentation, bug-fixing or packaging.


The work for the project was divided into two parts, namely the product API and the web-based GUI. The API is mostly read-only and offers write permissions only for managing media content. The web GUI offers functionality like railcast.com and edx.com, helping to deliver multimedia content to people enrolled (registered) with us. We also aim to create a method for community interactions. In the previous five weeks I was able to implement the search, the product API and multiple modules for the project; modules like Authentication, Content Writing, Admin, Home, Blog and Search were developed over the course of time. Once you clone the GitHub repository and run the project, you can find a detailed list of available API endpoints at "/api/docs/". During the last week we also published the initial release of the product. The code is here and the project proposal for GSoC is here.


Thanks for reading through the post.



June 23, 2014

GG halfway through with redesign

We’ve reached about halfway with the GG redesign, so I’ve decided to share updates in this blog post. It’s basically design work, so the best way is to just try it out. Head over to http://glittery-banas.rhcloud.com, sign up for an account and give it a roll :) Some features may be hidden/removed for now, because we wanted to make sure what is visible is performing well.

Managed to capture the login page right in time to include the red nprogress loader :D

Login page

I wanted to make the onboarding experience smooth, so there are guides that show up everywhere to help you around the system.

Dashboard

As you’d notice, there’s plenty of whitespace and big buttons to help you navigate without confusion. As you create new projects, widgets show up on the right to act as quick links to your recent projects. Notifications will be introduced next to projects soon :)

New Project

Here’s yet another example of a guide, helping a user understand what to do on a new project page.

Another example of a guide prompting the next action

Project files show up neatly in different columns - when I’m doing JS activity, I’ll try to make it pinterest style.

Freshly created project page

Also, I’ve tinkered a little with the User & project settings, introducing more UI elements.

Settings page

So go ahead and give it a try - http://glittery-banas.rhcloud.com. As usual, remember to report issues into the issue tracker :)

Bug fixing on bugspad

This week was spent mostly on fixing the tiny bits of bugspad. 

Did the following:

  • Removed unnecessary binary files.
  • Added missing changes for using redis bug tags.
  • Bugs cannot be closed unless all dependent bugs are closed.
  • Emails are not visible if the user is not logged in.
  • Added a missing status tag.
  • Automatic change of status from new to open upon commenting.

I am currently working on the search interface, which is planned to be built completely on redis, the main ingredient in the lightning-fast nature of the planned bugspad.


June 19, 2014

QA Testing Video Chat App – GSoC Week 4

I would recommend reading my last post before this one; it should help you understand the content of this post better :)

Last week, I did both dev testing and load testing of the video chat app. In this post, I am gonna show you the results I got.

Bugs & Fixes

Bugs I found during testing:

  • Re-rendering the video object: When a user moves from the video chat app to an IRC channel/server and then back to video chat again, the video stream (both local and remote) doesn't get re-rendered. This was actually a browser-specific issue and I found its fix by chance. Have a look at the fix.
  • Click event on 'Accept/Reject' button gets triggered more than once: This one was simple to fix. I just had to use jQuery's off method to remove the click event attached to the element before attaching a new one. Fixed.
  • Option to enable/disable video chat: I know this isn't a bug. My mentor told me to add this feature.

Load Testing

Framework

In my last post, I mentioned that I used the EasyRTC framework to build p2p (peer-to-peer) video chat. Although it's a p2p app, it still requires a server for signaling (read more). For signaling, EasyRTC uses socket.io built on node.js. So basically, I load tested the EasyRTC server's socket.io implementation.

I used the socket.io-client module to interact/connect with the EasyRTC server by writing against their server-side API, as documented here. Below is a snippet of the simple client I created.

var Client = function () {
  /* Some code omitted */
  this.connectToServer = function (callback) {
    var client = io.connect(
        'http://'+SERVER_HOST+':'+SERVER_PORT ,
        {'force new connection': true}
    );
    if (!client) {
      throw "Couldn't connect to socket server";
    }

    var msg = {
      msgType: 'authenticate',
      msgData: {
        apiVersion: "1.0.11",
        applicationName: "waartaa"
      }
    };

    var easyrtcAuthCB = function (msg) {
      if (msg.msgType == 'error') {
        callback('easyrtc authentication failed');
      } else {
        // code omitted
      }
    };

    client.json.emit('easyrtcAuth', msg, easyrtcAuthCB);
  };
}

Script & Parameters

I made a command-line tool in nodejs to run the tests.

node run.js --no-of-connection=700 --no-of-concurrent-connections=[1,5,10]

In the above command you should notice two things, namely the no. of connections and the no. of concurrent connections. My system (Intel(R) Core(TM) i5 CPU M 460 @ 2.53GHz, 4 GB RAM) couldn't stand more than 700 connections; the server process started consuming 1 GB of RAM beyond that. I will tell you the reason for this later. For now, 700 sufficed for initial testing. My mentor told me to keep concurrent connections very low, because we won't be expecting very high concurrency in production initially.

Data collected

After running the script several times, I collected the following data each time, for different concurrency levels against the no. of connections:

  • Roundtrip time (ms): Time taken for a client to get connected to the server. This includes latency (negligible here because client and server are on the same machine) + client-to-server message time + server-to-client message time.
  • Avg. CPU load (%) & memory (MB): I used this node module to collect this data.

Below you can see the data plotted on graphs, along with their analysis:

Roundtrip time analysis roundtrip_time

  • As the no. of connections increases, round-trip time increases. This is because the default behavior of EasyRTC is to send every client in a room a separate update whenever a client leaves the room, enters the room, or changes its status. And this leads to an increase in round-trip time.
  • An increase in concurrency also increases round-trip time. I guess this is because nodejs, being single threaded, processes one request at a time. Please correct me if I am wrong.
  • A roundtrip time < 500 ms is a good number. For concurrency = 1 and any no. of connections in the above graph, round-trip time < 500 ms.

CPU analysis cpu

  • Nothing too unnatural about CPU load increasing with the no. of connections and concurrency.
  • The good thing is that the rise is not too steep for lower concurrency.

Memory analysis memory

  • As I mentioned earlier, 700 connections consumed 1 GB of memory, and you can clearly see that in the above graph. This is because, as I said earlier, the default behavior of EasyRTC is to send every client in a room a separate update whenever a client leaves the room, enters the room, or changes its status. That means for 700 connections, (700*701)/2 = 245350 messages were sent by the server. You can do the math now :)
  • For any concurrency, memory rises at the same rate with an increase in the no. of connections. Thus, memory consumption is independent of concurrency.

Conclusion

  • More bugs will pop up when my mentors and other people start using the app; until then, a developer can only believe his/her app works perfectly ;)
  • The default behavior of the EasyRTC server can cause scaling issues. We might have to change it in the future by adding custom event listeners on the server side.

PS: If you have made it this far and found anything wrong in this post, please do comment. :)

<script>JS I love you</script>


June 18, 2014

User Form Handling in Freemedia Tool
The very first thing in the Freemedia process is the user filling in the form to create a ticket (i.e. making a request). This form should be handled correctly, otherwise the whole process will become a mess: if a user enters invalid details, volunteers will not be able to fulfill the request. So the form handling mechanism should:
  • Properly validate each field.
  • Prevent duplicates.
  • Be user friendly.
To achieve these things we first need a proper database structure, so the ER diagram is illustrated below.

After the database design, I created models for each entity in CakePHP. For that I used the CLI tool provided by CakePHP. It makes life easy :). By just typing "cake bake model" we can create a new model. The CLI tool suggests appropriate models from the database, so we can select what we want. The tool then suggests the things that can be included in a model, such as validations, associations (linking models), etc.


<script src="https://gist.github.com/coma90sri/b2510f7c3f979ddc094e.js"></script>
Then I created a controller to handle the model data. Up to now the controller has an add() function, which inserts user data into the database.

<script src="https://gist.github.com/coma90sri/decf5728d2555f5f9df6.js"></script>
Then we need a view to display the user form.

<script src="https://gist.github.com/coma90sri/1d8f0422457edd7579fb.js"></script> In next form, there is a field for select country. In order to add country list I used formHelp in view.So I created file in view/Helper and use it in the controller(new helper function name should be added to $helper array). ex- if file is LangHelper.php  in controller $helpers = array('Html', 'Form','Lang', 'Session')  should be added.
Here is the code for helper I added to create country drop down list

<script src="https://gist.github.com/coma90sri/481b7d6eaeb6de14f0ec.js"></script> Then in view file which controller has added the helper can use helper we can create country list like this
             $this->lang->countrySelect('Country');
GSoC - week 4
Last week I had exams at the university and that left me with less time for work. But I made some progress anyway.

Mock performance
Mock builds usually take a considerable amount of time. There is not much that can be done about the speed of the actual building, but the package installation can be improved. Last time I created the noverify plugin, which provided a considerable speed-up, and my mentor recommended trying to remove fsync calls during package installation. I do that by making a small library in C containing only empty fsync() and fdatasync() functions and copying it into the buildroot, then using LD_PRELOAD to make it replace the actual libc implementation of these calls. My mentor measured the performance differences on his kvm virtual machine and the results are amazing - times for installing @buildsys-build and maven-local (look at the wall clock time):

Standard yum:
User time (seconds): 55.66
System time (seconds): 5.78
Percent of CPU this job got: 25%
Elapsed (wall clock) time (h:mm:ss or m:ss): 4:02.03

Standard dnf:
User time (seconds): 49.61
System time (seconds): 5.68
Percent of CPU this job got: 23%
Elapsed (wall clock) time (h:mm:ss or m:ss): 3:50.94

With noverify plugin:
User time (seconds): 47.85
System time (seconds): 5.32
Percent of CPU this job got: 36%
Elapsed (wall clock) time (h:mm:ss or m:ss): 2:25.25
Maximum resident set size (kbytes): 150248

With noverify plugin, fsync() and fdatasync() disabled:
User time (seconds): 46.38
System time (seconds): 4.97
Percent of CPU this job got: 87%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:58.56
Maximum resident set size (kbytes): 150260


That's more than 4x faster and could be a valuable improvement for both packagers and koji builders.

LVM plugin
I started implementing the basis for LVM support. Mock can now use an LVM snapshot instead of the root cache. To enable it, put the following in your config:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'mock-vg'
}

where mock-vg is the name (not the path) of the volume group you want mock to use for creating new volumes. Other configuration options are possible - filesystem, size, snapshot_size, mkfs_args. The root cache is disabled because it would be redundant. When started, mock creates a logical volume and mounts it; after the volume is initialized, it makes a snapshot and mounts the snapshot instead of the original. All following builds then alter only the snapshot, and when the clean command is executed (usually at the beginning of a new build) the snapshot is deleted and replaced with a new one. I originally tried to implement it the other way around - making a snapshot but still working with the original volume, and then merging the snapshot back when cleaning. But that was very slow - the merging took more than 10s. The current approach is fast enough - cleaning is just deleting a snapshot and creating a new one, which happens almost instantly (compared to deleting the buildroot and unpacking the root cache).

Next week I'll try to implement more advanced features of the LVM plugin - snapshot management, which would allow a hierarchy of snapshots with different preinstalled packages, facilitating a faster workflow for packagers working with a more diverse set of packages.


June 17, 2014

GSOC Week 4: "This is not a coding contest"
Using my first patch to Gluster as a stepping stone, I've written a small utility, glusterfsiostat, in Python, which can be found at https://github.com/vipulnayyar/gsoc2014_gluster/blob/master/stat.py. Currently, the modifications done by my patch to io-stats, which is under review as of now, dump private information from the xlator object to the proper file for private info in the meta directory. This includes total bytes read/written, along with the read/write speed over the previous 10 seconds. The speed at each 1-second interval is identified by its respective unix timestamp and given out in bytes/second. These values at discrete points of time can be used to generate a graph.

The python tool first identifies all gluster mounts in the system, identifies the mount path and parses the meta xlator output in order to generate output similar to the iostat tool. Passing the '-j' option gives you extra information in a consumable JSON format. By default, the tool pretty-prints the basic stats in a human-readable form. This tool is supposed to be a framework on which other applications can be built. I've currently put this out on the gluster devel ML for community feedback so as to improve it further.
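As a rough illustration of the mount-discovery step (my reading of the approach, not the actual stat.py code): GlusterFS FUSE mounts can be picked out of /proc/mounts by their 'fuse.glusterfs' filesystem type, and the meta xlator exposes a hidden .meta directory at each mount's root:

import os

def find_gluster_mounts():
    mounts = []
    with open('/proc/mounts') as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[2] == 'fuse.glusterfs':
                mounts.append(fields[1])
    # Keep only mounts where the meta xlator's .meta directory is visible.
    return [m for m in mounts if os.path.isdir(os.path.join(m, '.meta'))]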

Note: In order to test this, you need to apply my patch (http://review.gluster.org/#/c/8030/) in your repo first, build, and then mount a volume. Preferably perform a big read/write operation on a file on your Gluster mount before executing the python script. Then run it as 'python stat.py' or 'python stat.py -j'.

Quoting my mentor Krishnan from our latest weekly hangout, "This is not a coding contest". What he meant was that just writing the code and pushing it is not the essence of open source development. I still need to interact and gain more feedback from the community regarding the work I've done till now, since our aim is not to complete the project for just the sake of doing it, but to build something that people actually use.
Shumgrepper – Week 4 update

Work done this week can be summarized as below.

1. JSON output

Earlier, the app returned only HTML content. What if a user requests data in JSON?

A user can request JSON content via the API through an HTTP GET request. The Accept header for such a request is '*/*', which can be treated as 'application/json'. Otherwise, if the user requests through the user interface, he/she will get the HTML rendering of the data. For this, we first need to find the mimetype of the request made:

mimetype = flask.request.headers.get('Accept')

I had already done this before in the datagrepper project, where I made a function request_wants_html which returns true if the mimetype is “text/html”.

def request_wants_html():
    best = flask.request.accept_mimetypes \
        .best_match(['application/json', 'text/html', 'text/plain'])
    return best == 'text/html'

Then I had to convert the data, which consisted of file objects, into JSON. I tried using built-in functions, but they could not convert it directly. It took a lot of time to get through this problem. Finally, I serialized the data by converting each object into a dict:

def JSONEncoder(messages):
    message_list = []

    for message in messages:
        message_dict = dict(
            tar_file = message.tar_file,
            md5sum = message.md5sum,
            sha256sum = message.sha256sum,
            pkg_name = message.pkg_name,
            filename = message.filename,
            tar_sum = message.tar_sum,
            sha1sum = message.sha1sum
        )
        message_list.append(message_dict)

    return message_list

 

2. Compare packages

This can be used to compare packages and return the filenames that are different (or the same) in two or more packages. I first approached this by creating an endpoint like /compare/packages/{package1}/{package2}. That way, I could only compare two packages. I discussed this with pingou, and he suggested taking all the package names as input from the user and then comparing them. I read a few tutorials and found this one of great help. I created a web form using the Flask-WTF extension.
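The form itself can be quite small; here is a minimal sketch with Flask-WTF (field and class names are illustrative, not shumgrepper's actual code):

from flask_wtf import Form
from wtforms import StringField
from wtforms.validators import DataRequired

class ComparePackagesForm(Form):
    # e.g. "tito, datagrepper" - split on commas in the view function
    packages = StringField('Packages to compare', validators=[DataRequired()])

The view function then splits the submitted string into package names and runs the comparison across all of them.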

Screenshot from 2014-06-17 17:13:57

It returns the filenames common to all the packages. But today pingou suggested that a user would be more interested in knowing the files that have changed when comparing different versions of a package. Besides this, I have to create an API for it.


June 16, 2014

Adding OpenID Auth system to cakePHP
Fedora infrastructure currently supports OpenID and Persona (FedAuth). So I have to add an OpenID auth system to the login system.

What is openID?
    OpenID allows you to use an existing account to sign in to multiple websites, without needing to create new passwords. The password is only given to the identity provider, and that provider then confirms the person's identity to the websites he/she visits. So there is no need to worry about an unscrupulous or insecure website compromising a visitor's identity.

How to handle OpenID in cakephp?

   For this process we need an OpenID library. I found the OpenID library for PHP by janrain, which is licensed under the MIT license.
First of all, the OpenID library (the Auth folder) should be added to the app\vendor folder, and the OpenID component (OpenidComponent.php) to app\Controller\Component. Then we need a login form.

<script src="https://gist.github.com/coma90sri/f5b6b0b4c2ba1ddb2081.js"></script> Next we have to write controller to handle this form. This controller handle following tasks

  • Show the login form.
  • Redirect the user to the OpenID provider (when submit is hit).
  • Handle the response from the OpenID provider.


    The code below just checks whether OpenID authentication succeeded or not.
<script src="https://gist.github.com/coma90sri/33ea3646e1743c3dfa25.js"></script> Above code is modified version of previous userController.php. Here I added Simple Registration Extension(SReg) which retrieve nine commonly requested information
nickname, email, fullname, dob (date of birth), gender, postcode, country, language, and timezone

So that we can retrieve at least few of them and use to identify the user.All request info arrives as an array by post method.
Check out the implementation of this on OpenShift:
http://freemedia-dulanja.rhcloud.com/users/login

Bugspad.org – Into the dev testing phase.

I am greatly excited to tell you that Bugspad has entered the dev-testing phase of development, wherein we will be testing it on my mentor's server. Btw, it is not feature complete yet, but you can have a first look at bugspad.org. As for this week, I added the basic admin interface and filled in missing features.

  • Added bug assignee and version tables, modifying the corresponding tables and templates.
  • Fixed null bugs in the components and products editing pages.
  • Added a feature for adding attachments to bugs.
  • Added dependencies and fixed the corresponding table schema.
  • Added support for filling in the assignee, dependencies and docs maintainer on the bug filing page.
  • Added auto-comments when adding "blocks" and "depends on" relations.

Overall, many features were added. Now that we are in a production environment, I would love to hear feedback before feature-complete testing can begin.


Google Summer of Code fourth Week update.
Google Summer of Code 2014: Week 4 update


Hello folks, 

Here I am, back with the updates for the fourth week of my Google Summer of Code project. This week I was able to give more time to the project (compared to the previous week), hence a larger set of targets was achieved.

In brief, this week I was able to complete the following tasks:

  1. Polished the product API: removed minor bugs, added some features and added the search endpoint.
  2. Added full-text search to the web application: I was able to add full-text search to the product using WhooshAlchemy. It offers great text search along with a query language for interacting with the plugin.
  3. Moved the back-end from SQLite to PostgreSQL: as SQLite allows only one write connection at a time, we moved the database backend to Postgres even for testing.
  4. Improved user interactions by reducing form fields, added support for CKEditor and polished the GUI. I also asked the fedora design team for some help; they are very helpful and I think they will be helping me with the GUI for the project.

Also, we automated the mechanism for generating video and image thumbnails. The current dependency for this task is the oggThumb tool available from the oggVideo package.

For the next week, I will be working on a text parser for reading media. I am also working on the admin portal; I think the next release will come with a fully featured admin panel.

We are now able to present an initial release of the product. The code is here and the project proposal for GSoC is here.



Thanks for reading through the post.

June 15, 2014

Rubocop to the rescue!

I decided to drop the GSoC-related titles and focus on the things I work on during the week. That means I'll probably blog more often :p

This week I mostly focused on cleaning the code of fedoraruby and conforming to the ruby/rails community guidelines.

The gem that helps you do that is rubocop, which is kind of the standard in the ruby world.

RuboCop

Rubocop refers to each check as a cop. There are a bunch, and you can see the supported ones by reading these files.

After installing rubocop, call it with the rubocop command and it will check all Ruby source files in the current directory.

If working on a Rails project, you have to invoke it with the -R flag.

The first time I ran rubocop I was presented with no more, no less than 666 violations. That meant if I wanted to clean up the code I'd have to manually edit all 666 of them. Luckily, as you may have imagined, rubocop provides the -a/--auto-correct flag, which does what it says. In the documentation there is a note: Experimental - use with caution. What the heck, I had nothing to lose; I am under version control so I could go back any time. It worked like a charm and brought the violations down to about 150. Not bad at all.

So what about the rest? Well, you have to do it manually, and so I began. If you run rubocop without any flags, it uses the default config that ships with the gem. If you want to use your own config, you can define it with the -c flag.

Now, there is another cool feature rubocop provides. It can create a config file for you containing all the violations found so far. Run rubocop with the --auto-gen-config flag and it will create .rubocop_todo.yml in the current dir. Then you can check against that file with rubocop -R -c .rubocop_todo.yml.

All cops in this yaml file are set to false, which means they won't be taken into account unless you explicitly set them to true. That way you can work your way up to fixing all violations by enabling one cop at a time. Basically, what is included in this file overrides the default values.

If you want to avoid pointing at .rubocop_todo.yml every single time, place this in .rubocop.yml:

inherit_from: .rubocop_todo.yml

From now on you can just call it with rubocop -R.

To sum up: run rubocop -R and see that there is no violation, edit .rubocop_todo.yml, set one of the cops to true, run rubocop again, fix the errors, and work your way up until there are no violations.

Of course, all of these are optional steps. Ruby's interpreter doesn't care about indentation; it won't complain if you write a method 20 lines long, and it won't throw an error if you chain 16 methods spanning 300 characters. All these are conventions of the Ruby community and you are not compelled to follow them. BUT following them produces much cleaner code, and when you find yourself contributing to a project, all of this will probably matter.

In my case, you can see through this commit what changed, and in this gist you can see the difference before/after running rubocop. The violations dropped from 666 to 73.

I've skipped some of them where I didn't see fit, like adding a comment above every class describing what it does. I'm not saying this isn't good to have; it's just that the check also includes migrations, and I'd like to avoid that. Also, some code will be deprecated/rewritten soon, so it doesn't make sense to fix violations in code I'm about to remove.

HoundCI

Rubocop is good to test locally, but what about the code you host remotely? Enter Houndci.

Houndci is a web app written in Rails by Thoughtbot that integrates with your github account. It checks for violations every time a Pull Request is submitted against your repository. It relies on the rubocop gem, but it may follow different conventions than rubocop's defaults.

I spent almost a day finding this out. I'll tell you what I mean, since there was a particular error that kept me searching for many hours.

Let's start by saying that it is common practice to not have lines spanning more than 80 characters. Python has it pinned to 79.

In rubocop, there is a cop that checks for method chaining. When a line is too long you should break it up, so this cop checks whether the dot (.) that chains two methods is placed at the end of a line or at the beginning of the next. Here's an example to better visualize it:

def method_with_arguments(argument_one, argument_two)
  a_really_long_line_that_is_broken_up_over_multiple_lines_and.
  subsequent_lines_are_indented_and.
  each_method_lives_on_its_own_line
end

When I ran rubocop locally it complained with Place the . on the next line, together with the method name. Ok, I did that and pushed. Then why was houndci telling me otherwise?

Digging in rubocop's default config file I found that this particular cop takes an additional parameter: EnforcedStyle: leading. Interesting, so why was houndci telling me the opposite? Digging some more, this time in rubocop's source code, I found the responsible method. It seems rubocop gives you the option to decide which style fits you better, and from what I've seen so far, houndci preferred the trailing dot. Ok, let's fix that.

Reading the configuration guide, and since houndci uses rubocop, I copied .rubocop_todo.yml to .hound.yml. Then, following the config file format, I appended

Style/DotPosition:
  EnforcedStyle: leading
  Enabled: true

in .hound.yml, pushed the change to my repo and created a test pull request to check if it worked. No... but whyyyy??

After some more digging, this time in the issue tracker of houndci, I finally found the culprit. The latest version of rubocop changed the way cops are namespaced, and that broke compatibility with houndci. Back in .hound.yml I removed Style/ and pushed to github. Finally, this time it was fixed.

Not much of a story, probably you already got bored or didn't make it this far, but anyway. Onto more interesting stuff, until next time it is.

June 13, 2014

Overall Design of the Freemedia Tool
Freemedia Tool consists of the following main features.
  • User profiles and user forms.
  • Ticket System.
  • Report generating System.
  • Email System
All these features along with the system database connect to the back end.

Architecture
Let's get a simple idea about the above features.

User profiles and user forms.

This is the most important feature, because all interaction with users happens here. For this project I am using CakePHP, which uses the MVC (Model View Controller) software architecture, so this feature directly connects the user, the view component and the controller component. In future blog posts I'll explain them in more detail.
In this system there are three different types of users: admin, volunteer and requester.
Each type has a different role in the system, so each user type has a different user interface to interact with.
For example, only admins and volunteers have access to the user profile page, while all other users (requesters) only see the request form.

Some expected access details of users.
 

Ticket system.

The ticket system is fully automated: a ticket is generated using the data submitted through the freemedia request page. Not all the data collected by the request page is used in ticket generation, so it needs a separate database table to store ticket details.
A ticket will be rejected in case of:
  • Duplicates
  • Invalid data
  • Expiry due to no volunteer involvement.
The following illustrates the ticket handling done in the system.



Report generating system.

This feature presents statistics about the data in the freemedia database.
ex- How many requests come from each country.
       The number of volunteers in each region.
       How many tickets are fulfilled.

It will use charts, graphs and other ways of representing statistical data. It will also contain a leaderboard of volunteers (a top list of the volunteers who have fulfilled the most requests).


Email System

The mailing system is used to send notifications about each important action done in the system. The following figure describes when mails will be sent to each type of user.

In the ticket handling process, mails are auto-generated and sent to users. In other situations, like asking the requester for further clarification about a ticket, volunteers can use the provided templates and send them.
Volunteers can also change their notification mail settings so that they receive only the relevant mails.



June 11, 2014

p2p Video Chat – GSoC Week 3

After finishing the implementation of chat log search last week, I moved on to my next major task – implementing p2p video chat in waartaa.

Introduction

Plugin-free p2p (peer-to-peer) video/audio Real Time Communication (RTC for short) had always been a nightmare for developers to implement. This changed a few years back with the introduction of WebRTC - an open source, plugin-free technology built into browsers. Today, there are many web services out there using WebRTC, e.g. Bistri.

Implementation

A WebRTC application needs to do several things:

  • Get streaming audio, video or other data.
  • Get network information such as IP addresses and ports, and exchange this with other WebRTC clients (known as peers) to enable connection, even through NATs and firewalls.
  • Coordinate signaling communication to report errors and initiate or close sessions.
  • Exchange information about media and client capability, such as resolution and codecs.
  • Communicate streaming audio, video or data.

There are many libraries/frameworks available which do that for you. I am using EasyRTC because it is open source, maintains an active repo on GitHub and uses a node server for handling signaling.

Our initial goal was to use IRC nicks for communication between peers, but we had to settle for waartaa usernames for now because of trust/authentication issues with IRC nicks. Imagine a user having to register their IRC nick before starting a video chat - that would have cost them a lot of time.

Screenshots

I have tried to tightly integrate the video chat UI with the current UI, so that users can do both video chat and normal chat (in an IRC channel) in the same browser window simultaneously.

Client A
Client A1
Client A2
Client B1
Client B2

Conclusion

I am not sure if this is production ready yet, because I haven't done much research on how much load the EasyRTC server can handle. Also, by default EasyRTC uses Google's public STUN server to traverse NATs and firewalls between peers, and that doesn't sound good to me. It is quite possible that we will have to set up our own STUN server.

PR: https://github.com/waartaa/waartaa/pull/119

<script>JS I love you</script>


June 10, 2014

GSOC Week 3: Patches Ahoy!!
This week, I focused on modifying the io-stats xlator so that it can store the speeds of recent reads and writes. The other information needed by my tool, i.e. the amount of data read and written, is stored in the private section of the xlator object (this->private). The meta xlator has a feature to custom-dump the private info of an xlator, but for that it requires an initialized dumpops structure, just like the fops one.

So for io-stats, I initialized dumpops and added the definition of the custom dump function (.priv), which is called by meta when doing a `cat private` in the .meta folder. In order to store the read/write speeds, two separate type-agnostic doubly linked lists are used, each storing a maximum of 10 elements at a time. Each element represents a 1-second interval, stores the number of bytes read/written during that interval, and is uniquely identified by its unix timestamp (in seconds). The read_speed and write_speed fields in the file below represent speed in bytes/sec for a 1-second interval, identified by the unix timestamp in parentheses.
/mnt/.meta/graphs/active/newvol 
[root@myfedora newvol]# cat private
write_speed(1402602084) = 12552555
write_speed(1402602085) = 18558756
write_speed(1402602086) = 23685425
write_speed(1402602087) = 9786084
write_speed(1402602088) = 9543367
write_speed(1402602089) = 796957833
write_speed(1402602090) = 8530576722
write_speed(1402602091) = 10028056272
write_speed(1402602092) = 10719120525
write_speed(1402602093) = 10528354767
read_speed(1402602084) = 8961522
read_speed(1402602085) = 8082654
read_speed(1402602086) = 7617477
read_speed(1402602087) = 9810846
read_speed(1402602088) = 10258556
read_speed(1402602089) = 193668615
read_speed(1402602090) = 261608047
read_speed(1402602091) = 29639965
read_speed(1402602092) = 47595000
read_speed(1402602093) = 39282929
data_read_cumulative = 729737216
data_read_incremental = 729737216
data_written_cumulative = 729737216
data_written_incremental = 729737216
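
To illustrate that bookkeeping in Python (the real implementation is C inside the xlator; the class and names below are mine):

# Illustrative Python stand-in for the rolling window described above:
# per-second byte counts keyed by unix timestamp, capped at 10 entries.
import collections
import time

class SpeedWindow(object):
    MAX_INTERVALS = 10  # keep only the 10 most recent 1-second intervals

    def __init__(self):
        self.intervals = collections.OrderedDict()

    def record(self, nbytes):
        now = int(time.time())
        # Accumulate bytes transferred during the current 1-second interval.
        self.intervals[now] = self.intervals.get(now, 0) + nbytes
        while len(self.intervals) > self.MAX_INTERVALS:
            self.intervals.popitem(last=False)  # evict the oldest interval

    def dump(self, name):
        # Matches the format above, e.g. write_speed(1402602084) = 12552555
        for ts, nbytes in self.intervals.items():
            print('%s(%d) = %d' % (name, ts, nbytes))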

This patch is under review as of now. It currently produces a successful build with RPMs and passes the smoke tests on Jenkins. The new Rackspace regression test gives a successful build too.
Google Summer of Code Third Week update.
Google Summer of Code 2014: Week Three.



Hello All, 
This week has been quite a hectic week for me. I had to travel a lot, to many parts of India, and did much of my work without a stable and fast internet connection. But by the end of the week I was able to complete some work, and below I present my progress.


In the previous weeks I had started with the GUI design and the API. This week I was able to complete another important part of the API. Adding to last week's progress, this week I did the following things:
  1. Created the authorization service for the API, i.e. methods for generating access tokens and accessing the API when a user is logged in.
  2. Created the Upload API, which offers facilities to upload, delete and upload revisions of media content. I merged the Upload APIs for images, videos and documents.
  3. Designed and furnished the home page for the project. Being new to CSS and web frontend design, developing the GUI is taking much of my time. I was able to create a grid for the GUI, but styling and other smaller things need a lot of improvement. Being pretty bad with color choices and images, I am looking for all possible help with the GUI :P. Anyone willing to help with the GUI can either create a pull request at the project or leave a comment here.
We also had discussions over the important issue of licensing of video and image formats. Most of the available video formats are encumbered, and software that processes or manipulates them is not allowed on the Fedora platform. We are developing a web application that will finally be deployed on Fedora Infrastructure servers, hence I am required to follow the Fedora guidelines, so we have made the use of free formats compulsory.
Also, for generating thumbnails from the videos, we are exploring various options, and I am open to any tools available from the Fedora main repository. We also had discussions on what endpoints should be added to the API.

The code for the project can be found here and the project proposal for GSoC is here.


Thanks for reading through the post.




June 09, 2014

Bugspad finally deploy ready (dev-testing)

After nitty-gritty fixes and additions, Bugspad is finally deploy ready. I have added many missing features and bugfixes, which are listed below. Next I will be deploying it on my mentor's server with nginx and testing it in production.

  • Added feature for editing QA, Docs.
  • Made a separate page for CC list editing.
  • Integrated auto comment upon changing of bug fields.
  • Gave a uniform grey feel to the bugspad UI.
  • Resolved errors due to renaming in the code.
  • Tested the features implemented.

bug_desc

filebug

home

login

prod_desc


June 08, 2014

GSoC-2014 isitfedoraruby - Week 3

Testing, testing, testing. Diving into BDD for the first time can be a little tedious, but you sure learn a lot. In the ruby/rails world there are a ton of excellent tools to help you test your app, some more popular than others. I'm no exception, so I picked what the majority of the community dictated.

Testing tools

Rspec

The RSpec test suite is well established among ruby developers and has a big community to support it. You can also find many good books about it. One that I highly recommend is Everyday Rails Testing with RSpec. It basically covers all the tools I'll be using; I'm a little biased, I admit, but it is really worth it.

Here are the specs that will be populated with tests over time.

models
├── bug_spec.rb
├── build_spec.rb
├── dependency_spec.rb
├── fedora_rpm_spec.rb
├── rpm_version_spec.rb
└── ruby_gem_spec.rb

Currently, I have worked only on bug_spec.rb which is finished for the time being.

# app/spec/models/bug_spec.rb
#
# == Schema Information
#
# Table name: bugs
#
#  id            :integer          not null, primary key
#  name          :string(255)
#  bz_id         :string(255)
#  fedora_rpm_id :integer
#  is_review     :boolean
#  created_at    :datetime
#  updated_at    :datetime
#  last_updated  :string(255)
#  is_open       :boolean
#
require 'rails_helper'

describe Bug do
  it 'has valid factory' do
    expect(create(:bug)).to be_valid
  end

  before(:all) do
    @bug = create(:bug)
    @bugzilla_url = 'https://bugzilla.redhat.com/show_bug.cgi?id='
  end

  it 'has valid bugzilla url' do
    expect(@bug.url).to match(/#{Regexp.quote(@bugzilla_url)}\d+/)
  end

  it 'is a Review Request' do
    expect(@bug.is_review).to eq true
  end

  it 'is open' do
    expect(@bug.is_open).to eq true
  end

  it 'is closed' do
    @bug.is_open = false
    expect(@bug.is_open).to eq false
  end

end

Here I'm using the new rspec method expect(object).to instead of the old one object.should.

In the validation of the bugzilla url I wanted to test against a regular expression that would return the bug url and bug number. At first I used /#{@bugzilla_url}\d+/ but that was interpolated into /https:\/\/bugzilla.redhat.com\/show_bug.cgi?id=\d+/, so the unescaped characters (the dots and the ?) were treated as regexp wildcards. The trick I learned is to enclose the string in Regexp.quote(str). This method escapes any characters that would otherwise have special meaning [1].

FactoryGirl

FactoryGirl is a replacement for fixtures, Rails' default way of creating test data. In my first attempt I used it to create a Bug object.

# app/spec/factories/bugs.rb

FactoryGirl.define do
  factory :bug do |b|
    b.bz_id '12345'
    b.is_review true
    b.is_open true
  end 
end

So, when I call create(:bug) in my bug_spec.rb, it automatically creates a new Bug object in the database with the predefined attributes I gave it in the factory file. I could probably use build(:bug) instead of create, which would build the object but not save it in the database. This could get a lot better, since it takes 2.2 seconds to run just 5 tests. Refactoring will come later; I'll primarily focus on writing enough tests to cover as many edge cases as I can find.

Cucumber/capybara

So far I talked about unit testing. When it comes to integration testing, that is, how the application behaves as a whole, there are cucumber and capybara. I haven't actually used either of these two yet. Cucumber is known for its descriptive language and is better used when one works with a non-programmer product owner who doesn't want to look at a lot of code [2]. I'll probably just go with capybara.

Setting a Rails development environment

I spent quite a lot of time finding the proper gems and configuration for a nice setup. This deserves an article of its own, so I won't go into details.

TODOs

Besides preparing the test suite, I'm also cleaning up the code where possible and necessary. There are some functions that need removing, but I have to do it carefully; I don't want to break anything, and without tests I cannot be 100% sure. So far I have used the rubocop gem, with some interesting findings (exactly 666 warnings/errors). I will talk about it next week. Now go and watch the Number of the beast.


  1. http://ruby-doc.org/core-2.1.2/Regexp.html#method-c-quote 

  2. Quote taken from Everyday Rails Testing with RSpec 

First shot at Inkscape - Moar Fedora.next logos!

I had always wanted to learn Inkscape, and the whole buzz around the Fedora.next logos gave me exactly the opportunity. Having just finished tinkering around with Inkscape for the first time, I must say it was worthwhile!

As promised, I've come up with a second iteration of the Fedora.next logos, this time as an SVG ;) If you haven't been following already, you should check out attempts by Máirín Duffy and Ryan Lerch on the logos. I was without a computer, but did a few sketches, and my attempts are available here. Based on the feedback I received, I've now reduced them to three improved concepts and would totally appreciate your feedback on them!

Alright, time for you to enjoy logos and for me to face the difficulty of communicating what exactly I was trying to do! :D

Strip A

I was poking around with Inkscape’s various buttons and accidentally ended up curving one side of the monitor. The end result appeared cool, so I decided to stick to it. I arrived at the keyboard by vertically inverting the monitor and skewing it a little bit.

Strip A

Strip B

These are my personal favorites (the workstation especially). I arrived at this by first creating a bunch of tiny squares, then recursively stacking them together to produce a grid effect. Finally, I colored it up in a way that automagically suggests the context of the logo. I have a few neat hacks in mind for this concept; if enough people agree on it, I could give them a try - the cloud & server can definitely improve! :)

Strip B

Strip C

The server on this strip was an attempt to 'stack' server disks that run Fedora - I'm not sure if I succeeded in conveying that idea. From the feedback on my sketches (and a late realization too), it appears that I couldn't capture the idea of a cloud by stacking differently oriented Fedora bubbles together, so this time I've tried another shape. What do you think? The mouse is a different look at the workstation - most of the ones we've seen previously are oriented more towards the entire system, kinda rendering the logo 'busy'.

Strip C

So…which would you rather pick? How could we possibly improve?

You'll find the source for these images on my fedorapeople page or on my GitHub repo for this blog. They're just $30 for a one-time use. J/K ;)

Note - if you actually end up looking at the source, you're probably somebody who cares about how designers want to collaborate. I've been improving the front end for GlitterGallery and recently wrote a huge post (with lots of pictures), so be sure to check that out too!

Thanks!

GSOC Shumgrepper: Progress till now

Firstly, I want to apologize for being so late in updating my progress, but it's better late than never. I am working on Shumgrepper, a web app for summershum, which collects the md5sum, sha1sum and sha256sum of every file present in every package. I am about to complete the 3rd week of my internship, but I am going a little slow and need to gear up in the coming days.

My work can be summarized as below:

Developed the basic framework of the app. Defined the directory structure of the app.
shumgrepper\
    shumgrepper\
        templates\                     //store web app templates
        app.py                         //contains definition of various end-points
        default_config.py              
    apache\                            // for installing locations
    summershum\                        //files containing methods used for querying
    fedmsg.d\                          // display fedmsg messages 
                                          in human readable format
    createddb.py
    development.cfg                     
    requirement.txt                    
    runserver.py                      //to run server
    setup.cfg
Set up the database and made the shumgrepper app query the summershum database.

After having a discussion about database models, I finally decided to use the sqlite database in which the original summershum database model already lives. I cloned the summershum repository and ran it to populate the database and enable shumgrepper to query it.

Creation of endpoints

The question was whether to create separate endpoints or one endpoint for everything. After discussion with my mentor, we decided to have separate endpoints to make querying easier and cleaner.

Till now, I have created 4 endpoints (a minimal sketch of one follows the list below):

  • /sha1/<sha1sum>             // to query by sha1sum
  • /md5/<md5sum>              // to query by md5sum
  • /tar/<tarsum>              // to query by tarsum
  • /sha256/<sha256sum>            // to query by sha256sum
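Since shumgrepper is a Flask app, such an endpoint boils down to something like the following (the route handler and the query helper are hypothetical names, not the actual shumgrepper code):

# Hypothetical sketch of one endpoint; query_by_checksum stands in for the
# summershum query methods mentioned in the directory tree above.
import flask

app = flask.Flask(__name__)

@app.route('/sha1/<sha1sum>')
def files_by_sha1(sha1sum):
    files = query_by_checksum('sha1', sha1sum)  # assumed helper
    return flask.render_template('files.html', files=files)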
Added CSS to make the data (file information) appear in the form of a table.

Screenshot from 2014-06-07 00:41:04

What’s next?

  • So far, it returns data in HTML form. If a user requests data in JSON format, it should return JSON output.
  • Adding more endpoints to return the specific information requested by the user,
    e.g. if a user wants to see the file names with a specific sha1sum.
  • Create front page of the app.

June 06, 2014

Init: Redesigning GlitterGallery responsively

When we started out this summer, I was telling Paul how my work on redesigning GlitterGallery was going to be awesome for my blog - pretty pictures!

GG Banner

I took to a lot of sketching last week since I was without a computer (I’ve posted Fedora.next logo concepts here). Like everything else, initial thoughts on GlitterGallery’s redesign started with sketches.

First, I hopelessly pored over all of my previous sketches to.. you know, get the ball rolling within my head.

First mockup ever!

First mockup on user page

First mockup on Glitterpost

Idea of comment -> track logic

Old plans of doing the pages

Pull request ideas

GitHub inspired mockups

Then it struck me - a lot has actually changed since those initial sketches, and we need to think in terms of how to deliver the new stuff better. Before starting to doodle right away, I went over a talk I gave recently at the Libre Graphics Meeting in Leipzig. I couldn't really explain all my ideas at the event, thanks to the last-minute time allocation changes, but I knew it would remind me of the key ideas we had to keep in mind when working on the new frontend. There were also a few mockups I had made as part of my proposal for the summer; I went over them as well.

User page on proposal

Glitterpost on proposal

Issue page on proposal

Infograph on proposal

With that, I started to hopelessly doodle, taking inspiration from various articles Emily shared with me about responsive web design practices.

Initial bad layout

Ideas for the mobile flow

Blank slate ideas

And things started to look better.

Better layout

New user page overview

Aaaand, surprise! Two days ago, I found a charger and finally had access to the computer! Time to do the real pages!:D

Instead of polluting the original GlitterGallery repository, I started a new one for the WIP design stuff. There are two pages at the time of writing, if you'd like to take a look: the login page and the user page. It might appear as just two simple pages, but these also serve as a framework on which to build the others, so the remaining redesign should be much easier and faster.

There were two important choices to make - the right layout and the right color scheme.

Ryan’s words at LGM come to the mind - “Mockups should show the essence of what you want to convey, without really going into too much detail about things like color. If the mockups are too precise about such things, people will complain when they don’t get what was originally promised - and it can be frustrating to give an answer!”

Pattern and color

He was right. I picked up a neat background texture from subtlepatterns, played around with various color combinations and finally settled on one when it began to feel right.

The challenge wasn't to make just a few pages, it was to make them accessible on various devices - we had to redesign responsively ;) In the process, I picked up jeet, a cool library I must say, and experimented with media queries in my Sass - choosing the right breakpoints, sensible sizing, standard patterns, all of that stuff.

Here’s how the pages currently display on various devices - feel free to try them out in whatever cool gadgets you own! You can also try resizing your browser to watch the magic happen :)

User page on desktop

Login page on a notebook

User page on phone

I was super thrilled when I loaded the login page on my own phone! In fact, I had optimized it so the keypad doesn't cover the login button when typing the email - small things count, don't they? ;)

Login on my MotoG

The most important part - lessons learned:

  1. Sketch. Easier to make changes to a paper drawing than having to go through all of that programming/layouting mess again.
  2. Design mobile first, and then go upwards on screen size. Practising this will result in more manageable stylesheets and you won’t overwrite styles for elements that behave the same way across smaller width devices.
  3. For responsiveness, choose breakpoints not based on standards, but based on page content.

My TODOs for the next week or so (Fedora stuff):

  1. Improve this framework: when there’s more screen available, things can be dealt with better.
  2. Extend this framework to some more pages.
  3. Work on digitizing the Fedora.next logo concepts.
  4. Go forward and finish the stuff planned for GSoC iteration one.

Great, now I’ll go drink some coffee - it’s been painful taking all these screenshots ;) Until the next post, here’s some family photo for ya:

Login page family pack

GSoC - week 3
This week I've been mostly continuing the parts I started the week before. I performed more drastic refactoring - splitting up the main monolithic class (Root) that did basically everything except a few specific features delegated to other modules. I moved the rest of the buildroot-building code to the Buildroot class I created before for this purpose. The state logging and plugin loading/calling were decoupled into separate classes. The package management code resides in the PackageManager class and its Yum/Dnf subclasses. I renamed the former Root class to Commands, which now does just what the name suggests - executes commands, such as build or buildsrpm, in the buildroot. I've adapted the plugins to the new model and customized the initialization to prepare the field for the LVM backend, which will of course be implemented as a plugin (but doing that without hardcoding some parts into the core will require enhancing the plugin API).

Jenkins
I've requested a Jenkins project for my improved version of mock, which is now available at http://jenkins.cloud.fedoraproject.org/job/mock/ and provides built packages (and a repository) at http://jenkins.cloud.fedoraproject.org/job/mock/ws/RPMS. I build it using the spec file in the mock source tree, but before building I inject a script into %prep which replaces all occurrences of 'mock' with 'xmock' (including in filenames). Thus, the package can be installed alongside upstream mock without any conflicts. Everything is simply renamed to xmock - the binary, the config directory, also the group. So feel free to test it :-)
(But be careful: mock runs as root. If I made a mistake, it may have consequences for your system.)

DNF
I've already got feedback from my mentor (Mikolaj Izdebski). The bad news is that dnf installroot support doesn't work for him, although it works perfectly fine for me. The problem is that for him, dnf doesn't load the config from the installroot and uses the system-wide one. I've read the corresponding part of dnf's source code and I don't see any reason why it might fail to find the configuration. If installroot is defined and the config file within is readable (tested with access(2)), it will be used. I've yet to find out why it doesn't work for him.

Interactive output
I've modified mock to print the output of building and package management commands. One thing my mentor suggested was to get more output from dnf/yum. Currently, neither dnf nor yum outputs anything when synchronizing repos or downloading packages, only when installing. The reason is that they check whether the output is a terminal (with os.isatty()), which it is not - in the case of mock, it's a pipe. I had to trick them into thinking the output is a tty. I solved it by using a pseudoterminal instead of a pipe, and now os.isatty() happily returns True in the child processes and dnf prints everything, including the progress bars when downloading. But since it uses carriage returns and backspaces to erase parts of already printed output, it also introduced a lot of mess into the logs, because in text files those characters aren't interpreted the same way as on a terminal. So I also had to modify the output logging to get rid of these.
Note: to get reasonable output from yum/dnf, its debuglevel has to be set to at least 2 in the config (the current default is 1).
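A minimal sketch of that pseudoterminal trick (the function name is mine, not mock's actual code):

# Minimal sketch: run a command with stdout attached to a pty so that
# os.isatty() returns True inside it and yum/dnf prints progress bars.
import os
import pty
import subprocess

def run_on_pty(cmd):
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdout=slave, stderr=slave)
    os.close(slave)  # only the child keeps the slave end open now
    chunks = []
    while True:
        try:
            chunk = os.read(master, 1024)
        except OSError:  # EIO on Linux once the child closes its end
            break
        if not chunk:
            break
        chunks.append(chunk)
    proc.wait()
    os.close(master)
    # Progress bars use carriage returns/backspaces, which would litter a
    # plain log file, so they need stripping before logging (as described).
    return b''.join(chunks)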

Skipping package verification
Another feature he requested was speeding up mock by skipping verification of packages when installing. Neither yum nor dnf has an option to disable it, but since they're written in Python, there's always a hackish way to accomplish what you need. There are two solutions to this problem: 1. create a plugin that modifies yum/dnf to not verify packages, or 2. create a wrapper module that will modify them. I chose number 1 and implemented it for dnf for now. I created a simple plugin that, once loaded, rebinds dnf's verify_transaction method to a no-op lambda. Then I just copy it into the buildroot and inject the plugin path into dnf.conf. It can be toggled with the config option 'noverify' (enabled by default).
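A sketch of what such a plugin boils down to (a guess at its shape under the dnf plugin API of the time, not the actual mock plugin):

# Sketch of the idea: a dnf plugin that rebinds verify_transaction to a
# no-op so package verification is skipped. Names are illustrative.
import dnf

class NoVerifyPlugin(dnf.Plugin):
    name = 'noverify'

    def __init__(self, base, cli):
        super(NoVerifyPlugin, self).__init__(base, cli)
        # Once loaded, replace the verification step with a no-op lambda.
        base.verify_transaction = lambda *args, **kwargs: None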

Other
Also, I fixed some corner-case behavior. Now mock doesn't fail when you delete the /var/lib/mock directory. Previously it recreated it with wrong permissions, making it hard to detect why it failed and how to make it work again (the user had to manually set the setgid bit on the directory). It also prints a warning when incorrectly executed as a regular user without the setuid wrapper (previously it printed OSError: Success).

Future
I've been exploring the possibility of mock executing commands in a contained environment and possibly not doing most of its work as root. I've discovered some interesting things about Linux namespaces that might quite change the way mock works. I will try to make a follow-up post about this soon.

June 04, 2014

Implementation of chat log browser – GSoC Week 2

Last week's task was to implement the chat log browser. It basically required working on three things, namely the search API, the UI and their integration.

Search API

The first thing I had to do was discuss with my mentor and finalize the API endpoints. We ended up with only one endpoint.

 /api/search/{server-name}/{channel-name-without-#}/?{get-params}

  • {server-name} is e.g. Freenode, and it is case-sensitive in the search query.
  • {channel-name-without-#} is e.g. fedora-admin, and it is case-sensitive too.
  • {get-params} All of the GET parameters below are optional.
    • message: search term, e.g. 'trouble installing Fedora'
    • from: message sent by, e.g. dne0
    • to: message sent to, e.g. rtnpro
    • dateFrom: message was sent after this date, e.g. 1 May 2014
    • dateTo: message was sent before this date, e.g. 10 May 2014
    • page: for pagination
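
For example, a query for messages containing 'meeting' sent by rtnpro in #fedora-admin on Freenode would look something like this (illustrative values, not a real logged query):

 /api/search/Freenode/fedora-admin/?message=meeting&from=rtnpro&page=1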

Obviously, all these parameters are first validated before querying elasticsearch, and then the results are returned in JSON format as shown below:

{
  status: <bool>,
  errors: <array>,
  results: {
    totalCount: <int>,
    perPage: <int>,
    took: <int>, // time taken in milliseconds
    data: <array of objects>
  }
}

For routing we are using Iron Router, but there are some issues with its server side router. I hope they get resolved quickly so that I can refactor the API code and make it look cleaner :)

Search UI

Being a novice in design and HTML/CSS, I tried not to waste much time on fonts/color scheme and instead focused more on adding the search fields users might want to use to narrow down search results. Currently, the UI contains an option to select the channel, search by message, and some advanced search options like date, from, to, etc.

UI & API Integration

Meteor provides an http package for sending asynchronous requests; I have used this package in my code to call the API. As written earlier, the results we get are in JSON format, and this JSON is rendered on the client side using Meteor's UI.toHTML API.

callAPI: function (serverName, channelName, getParams) {
  var API_URL = this.getAPIEnpoint(serverName, channelName, getParams);
  // Show the loading indicator while the request is in flight.
  $('.chatlogs-loader-msg').show();
  Meteor.http.get(API_URL, function (err, body) {
    if (!err) {
      // Render the JSON response into the search results view.
      waartaa.search.helpers.renderResponse(body);
    } else {
      alert('OOPS! An error occurred while fetching data.');
    }
    // Hide the loading indicator whether the call succeeded or failed.
    $('.chatlogs-loader-msg').fadeOut(500);
  });
},

Conclusion

With this, the first version of chat log browser is ready. Here is the PR:  https://github.com/waartaa/waartaa/pull/113

Screenshot from 2014-06-04 15:09:15

Screenshot from 2014-06-04 12:45:09

Please note that search terms are highlighted, thanks to elasticsearch's search-term highlighting feature, which made this easy to implement.

PS: Any suggestions for improving the search interface or anything else are welcome :)

What’s left?

  • Refactoring API code.
  • Authentication mechanism in API.
  • Improving UI.
  • Implementing chat log permalink and bookmarking feature.

<script>JS I love you</script>


Google Summer of Code Second Week update.
Google Summer of Code 2014: Week 2 update

Hello All,

As mentioned in my previous post, I have been working with the Fedora Project this year in the GSoC program. It's just my second week and I am writing to report my progress. In brief, during the last few days I have been working extensively on the project. We completed some major tasks and started on others.
  1. I had discussions about the database models and finalized a database model, which was then implemented. The model may still receive multiple corrections according to the requirements.
  2. I started with the project API and have completed a small part of it. I have successfully completed access token generation and the Uploads API.
  3. User authentication: I have completed user authentication and updates. We have decided on the access groups and integrated the web application with FAS, the Fedora Account System.

In the previous few days I started on some important parts of the project and expect to complete them in the coming week, namely:


  1. Started with the GUI. Due to the Fedora Project's policy of not using bundled libraries, I have to create the web GUI all by myself. I discussed the possibility of using CSS frameworks, but was given many reasons not to use bundled libraries. Work on the web GUI is progressing fast and I have completed a large part of it.
  2. I have started with the user profile and completed a primitive version of it. The profile page is currently equipped with basic features and the ability to update/edit the profile.
  3. I started with other parts of the project as well. I have written views and forms to add and view data. Work has started on subparts like the home page, the tutorials app, the media app and the blog app.
Currently the work is going in multiple directions in parallel. I have been committing and updating code quite frequently.

The code for the project can be found here and the project proposal for GSoC is here.

Thanks for Reading through the post.

June 03, 2014

Dreaming up Fedora.next logos

Last week was a nightmare - my laptop charger simply blew up! Despite searching the best accessible stores in my city, I had a hard time finding a replacement. I spent the week reading on responsive web design practices and sketching mockups for GlitterGallery & other web projects I’m involved with. Hopefully, I should be able to blog about progress with GlitterGallery’s front end by the end of this week, when I have something solid shipped :)

Inspired by Mo and Ryan’s fantastic work on the Fedora.next branding so far, I decided to tinker around with some logo ideas for Fedora.next server, cloud and workstation. A lot of these are improvements over some of the logo concepts Mo put up on her blog, some others are ideas of my own. I’m beginning to pick up Inkscape and with some help from the design team, I’ll try to digitize them soon!

Ok, no more talking, here’s some sketches for ya (apologies in advance for the low-res pictures)! Enjoy!

Strip A:

Strip A

The idea is straightforward - keep the “Fedora” in the Fedora.next the first thing and then supplement the individual components with their own graphics. Perhaps the keys in the workstation could do with a spacebar to make it more obvious?

Strip B:

Strip B

Here I experiment with the server as a stack of disks, and the workstation to be one of those new flexible laptop computers.

Strip Red Bull:

lol

Not really sure what I was trying to do, but it appears like I was trying a fancy cloud and ended up giving Fedora some wings! :D

Strip D:

Strip D

Really, the cloud on this one is my personal favorite; it mimics a real cloud - with the Fedora logo put up front. I tried to introduce a desktop computer for the workstation, with the Fedora logo as the monitor, a mouse and keys.

Strip E:

Strip E

The server stacks seem prettier to me here. I was hoping to make use of the patterns from Mo’s blog post for the bottom two layers. For the workstation, I introduced a half-shown monitor, with a Fedora mouse.

Strip F:

Strip F

Plain server disks, and a usb stick for the workstation instead - Red Hat sent me one from the Brno office (it doubles up as a can-opener); I thought it would make a cool concept for a workstation that can be carried around ;)

This whole thing was originally supposed to be an email to Mo, but she encouraged me to blog about it so here we are! Writing this also made me realize how difficult it is to communicate design stuff to an audience effectively. Please let me know if you particularly like/hate any of the logo ideas, it’d be useful to think about it before I get to digitizing them!

Once again, I’m posting the entire series here for you:

Fedora.next logo ideas

GSOC Week 2 : Bootstrapping
Based on recent discussions with my mentor KP, we've decided that instead of looking at the bigger picture of modifying multiple xlators, it's a better alternative to stick to the primary objectives of my proposal and start working on a 0.1 version of the application, so as to present it soon to the Gluster community. Feedback is very important for our application, since its intended users will be developers and Gluster users.

I've decided to go the python way for building glusterfsiostat. The tasks needed to build this kind of thing are:
  1. Get the location of every gluster mount on the system. This will be done by applying a bit of regex parsing to the mount command output (see the sketch after this list). For now, I'll only be focusing on Unix-based mount paths.
  2. Every glusterfs mount path contains a virtual directory called .meta which stores the meta information (name, type, graph structure, latency) for every xlator. Reading the files in .meta will be my primary source of information. As of now, I could only find latency information for each of the 49 fops defined in Gluster, but in order to display the total read/write amount and speed like the tool nfsiostat does, I may need to modify the meta xlator itself.
  3. Pretty-print this output in the application, or emit it in a consumable form like JSON.
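As a minimal sketch of step 1 (assuming typical fuse.glusterfs lines in the mount output; the function name is mine):

# Minimal sketch of step 1: find glusterfs mounts by parsing `mount` output.
# A typical line: "host:/volname on /mnt/point type fuse.glusterfs (rw,...)"
import re
import subprocess

def gluster_mounts():
    output = subprocess.check_output(['mount']).decode()
    pattern = re.compile(r'^(\S+) on (\S+) type fuse\.glusterfs')
    mounts = []
    for line in output.splitlines():
        match = pattern.match(line)
        if match:
            mounts.append((match.group(1), match.group(2)))  # (volume, path)
    return mounts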
This initial version of the application might undergo many changes based on feedback from the community, and I'm ready for that because we're in it for the long haul!!
End of week 2

Week 2 comes to an end. There were plenty of things which were new to me. After discussing with my mentor, we decided not to pursue OpenID for now, which was something I had mentioned in my proposal. It was decided that I would start on the admin interface instead.
During this week, I did-

  • The basic CUD (Create, Update, Display) for bugs was ready. Then I studied bugzilla.redhat.com as a reference for how the workflow should be. Next I had to turn it into a presentable UI. My CSS/HTML is not state of the art, so the initial attempts weren't so good. However, I managed to give it a good layout in the end, and am currently working on the color scheme and the fonts.
  • I studied the admin interface of bugzilla by installing it locally on my system, and started working on editing products, editing components and editing users for the admin panel. I integrated them, and it is working smoothly.
  • I also looked at the bugzilla table schema to see how it can be improved, and added version tables for both components and products.

Currently my prime task is to make bugspad deploy ready ASAP, by giving it a good aesthetic look and fixing logic issues, if any. Once deployed, it will be easy to receive feedback from the fedora-infra team and the community in general.


June 02, 2014

GSoC-2014 isitfedoraruby - Week 2

Here's what I've been doing last week.

Previous week

Architecture analysis

Getting to know an app from the ground up takes some time, especially if it's built on a framework you are not too familiar with. Luckily, I found the railroady gem, which helped me visualize how the app is structured; you can find the results here (click on one of them and view it raw).

Deploy to a testing server

Heroku might be a nice option, but their plans limit the number of database rows. Instead, I spawned a Fedora 20 droplet on DigitalOcean and deployed it there. I used postgres as the database backend, unicorn to serve the app and nginx as a reverse proxy.

First of all, I had to enable SELinux (I had to - it was off by default). Then I changed a boolean to make nginx work:

setsebool -P httpd_can_network_connect on

Started fixing bugs

I keep finding small bugs, which I currently track on my trello board. There is currently a discussion about moving isitfedoraruby to a more general namespace, so until then I'm working on my fork and the trello board.

This week

Setting up rspec

As this is the primary testing tool, I have started writing the first scenarios and along the way I'll be adding the actual tests.

May 28, 2014

GSoC 2014 - week 2
It's been a week (and one day) since my last post, so I'd like to share my progress. I've been continuing with refactoring the mock initialization and splitting the functionality into smaller pieces.

DNF support
I've implemented support for using DNF as a package manager instead of Yum. Both package managers are used through a common abstract class, and which one is used is decided based on the configuration (the 'package_manager' option, which defaults to yum when unspecified). This behavior can be overridden by explicitly passing the --yum or --dnf commandline options. There have been some concerns whether the installroot support in DNF is working correctly, but I've been able to do a rebuild of a package using DNF exclusively and it worked fine without any changes to the config.

Direct interaction with package manager
One of the features I mentioned in my proposal was providing direct interaction with the package manager. Current mock allows invoking only a small subset of the commands that Yum/DNF provide. I added a '--pm-cmd' command that allows invoking arbitrary package management commands. There are also '--yum-cmd' and '--dnf-cmd', which are just shortcuts for specifying '--yum' or '--dnf' together with the '--pm-cmd' option.
Example:
mock --yum-cmd reinstall expat-devel

Mock interactivity
The week before, I implemented shared locking that allows multiple mock processes to operate on the same buildroot, and I was able to get more instances of the mock shell. But that was only part of what needed to be done, because other mock commands had their own cleanup logic, usually killing still-running processes and clashing while unmounting shared devices. I've been working on integrating the new initialization/cleanup scheme into all remaining commands, and now there shouldn't be such clashes anymore. I've also made the cleanup more reliable (it runs even if an exception is raised) and the initialization more resilient - dealing with possible remaining mounts from a previous run without needing the user to unmount them manually. I'm also trying to make the startup faster by marking which parts of initialization were already done and therefore don't need to be repeated.

Next steps
I'd like to start laying the foundation for the main feature I want to implement - support for using LVM for storing buildroot images, and therefore being able to leverage its snapshotting features.
GSoC 2014: mock improvements - week 1
My proposal for Google Summer of Code 2014 - improving the mock tool has been accepted. In the next few months I'm going to focus on improving this tool used in packaging to facilitate faster and more effective workflow for packagers.

What do I want to improve?
(This section is just recapitulation of the proposal, if you already read it, skip it)
  • Better caching - improve performance and provide easier mechanisms for cloning and manipulating buildroots. Currently, mock uses tarballs for keeping the buildroot cache and extracts them when needed. This is slow and not very flexible. Using snapshots would provide a way to conveniently save, restore or clone the current state of the buildroot. This could speed up the workflow of packagers who have to work with many packages at once. The current options are either cleaning the buildroot with each package, which takes a long time since dependencies have to be installed again; or manually copying the buildroot, which is also slow and inconvenient; or not cleaning the mock at all, which can let some packaging mistakes (missing BuildRequires) go unnoticed. My proposed modification could present another workflow that could be faster than cleaning with each rebuild, but not as error-prone as not cleaning at all. For example, Java packages built with Maven could save a buildroot snapshot with maven-local and its long chain of dependencies and reuse it for multiple packages.
  • Provide a way to revert recent changes without having to clean the buildroot and install all dependencies over again. A similar idea to the previous one. Mock could automatically make a snapshot after installing a package's build dependencies, but before actually building it. This could provide a way to make manual modifications to the buildroot and, once you've turned your immediate changes into patches, rebuild to see whether they work - without having to reinstall dependencies.
  • Ability to work interactively - add realtime logging to the terminal. The packager should immediately see the output of the package management system and the build system on their terminal, without having to look for logs stored deep in the filesystem.
  • Allow using the mock shell during a running build - mock uses one global lock per buildroot, which prevents you from using the mock shell when there is already a build in progress, and vice versa. More finely-grained locking would be more appropriate, because the packager usually doesn't want to interfere with the running build, but just query some information or copy files.
  • Allow using rpmbuild's short-circuit mechanism to reinitiate a failed build without having to start over - rpmbuild provides a short-circuit option that starts the execution of the build at a given phase, skipping previous phases. If the package built fine but the file verification failed, don't force the packager to repeat the whole process from the very beginning.
  • Use DNF instead of YUM to install packages - DNF is a modular package manager that is meant to replace YUM in the future. Since it is already usable, mock should be able to use it instead.
  • Provide more ways to interact with the package management system within mock - using DNF straight from the mock shell could be more straightforward than interacting with the package manager via mock's command line interface.
  • Handle user interrupts without corrupting the buildroot or leaving behind still-running processes - if you run a mock build, realize you made a mistake and stop it with <C-c>, there is a high probability that your buildroot will not be usable for the next build, or that there will be some remaining processes that weren't terminated when you interrupted. It should also provide a way to pause the whole build, in case you need more computational power or your battery is running low due to the increased resource usage.
The repository
I've set up a git fork of the upstream git repository based on the msuchy-work branch. You can see my changes here: https://github.com/msimacek/mock

What have I already done?
Started refactoring the mock backend, which currently lacks the abstraction needed for implementing the aforementioned features. Mock started as a simple script and evolved as such. The functionality is scattered across the modules, and lots of the commands are big monolithic sequences of function calls performing different things that could and should be split into smaller parts. My mentor already suggested making a set of small, atomic commands forming a low-level interface on which the high-level and more complex commands would be built. I began with refactoring the locking scheme. The current implementation allows only one running instance of mock per buildroot by using an exclusive file-based lock. The functions that operate on the buildroot are responsible for locking it, performing the initialization and the cleanup - and sometimes those actions are done directly from main. I'm trying to factor out the common parts, so that buildroot preparation and finalization are the responsibility of a single function and the commands can assume that the buildroot is already prepared and will be cleaned up after they finish, without their direct intervention.
I changed the locking scheme to use the exclusive lock only when the operations are destructive (buildroot cleaning/deleting). For non-destructive commands the buildroot is locked exclusively only for its initialization, and only if it wasn't initialized already. Then it is locked with a shared lock, allowing multiple instances to coexist at the same time. If the buildroot is locked with a shared lock, we know that there is still some mock process working. Thus, the initialization is done only by the first process to enter the buildroot, and the cleanup actions are performed only by the process that happens to be the last. This allows, for example, having a running build and still logging into the mock shell and examining the build environment. Further redesigning is needed to enable the possibility of having LVM-based storage with snapshot abilities, which will require slightly different initialization and finalization to work.
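A minimal sketch of that shared/exclusive scheme using file-based locks (class and method names are mine, not mock's actual code):

# Minimal sketch of the locking scheme described above, using fcntl.flock;
# names are illustrative, not mock's actual implementation.
import fcntl

class BuildrootLock(object):
    def __init__(self, path):
        self.lockfile = open(path, 'a+')

    def lock_destructive(self):
        # Destructive operations (clean/delete) block until they own the
        # buildroot exclusively.
        fcntl.flock(self.lockfile, fcntl.LOCK_EX)

    def lock_shared(self):
        # Non-destructive commands share the buildroot with other instances.
        fcntl.flock(self.lockfile, fcntl.LOCK_SH)

    def try_exclusive(self):
        # Non-blocking attempt: succeeds only if no other mock process holds
        # the lock, i.e. we are first in (initialize) or last out (clean up).
        try:
            fcntl.flock(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except IOError:
            return False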

Another part where additional abstraction will be needed is the package management system. Currently, only yum is supported and the command is hardcoded, therefore adding support for dnf needs some refactoring. I created an abstract class providing a common interface for package management, which will then have Yum and Dnf subclasses that handle the specifics of those two systems. The current implementation I have is just a skeleton showing how it may look, but it needs to be integrated into the rest of mock and replace the hardcoded invocations of yum.

What I plan to do next?
Continue with refactoring. Appropriate abstraction is essential to be able to add new features and fix existing issues. The whole initialization part is quite messy and mixes input parsing, config file parsing, initialization and performing the actual commands into one big part. The backend should be divided into low-level atomic commands and high-level ones, similarly to git, which has a plumbing layer and a porcelain layer built on top of it. It should also provide direct access to the underlying package manager (currently yum), in order to not restrict users to a particular set of commands and to enable them to utilize its full power.
Basic Waartaa-ElasticSearch Integration – GSoC Week 1

ElasticSearch is a flexible and powerful open source, distributed, real-time search and analytics engine. Waartaa-ElasticSearch integration is the first sub-task in the implementation of browsing/searching channel chat logs. After the GSoC bonding period was over, I started working on it. Here is what I did last week.

Creating mapping for Channel Logs index

Last week I spent most of my time reading ElasticSearch documentation and tutorials, and I now have a good understanding of what ElasticSearch is capable of. A mapping tells ElasticSearch which fields in a document are searchable, defines their types, and a few more things. Here is the first version of the mapping that I created. It will change over time, as new fields may get added to the document, and we may also change the analyzer used for the 'message' field to account for misspelled search terms, like Google does.
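I won't reproduce the whole mapping here, but as a guess at its shape (the field names are assumptions, and the elasticsearch-py client below merely stands in for however waartaa actually talks to the node):

# Assumed shape of a channel-logs mapping, shown via elasticsearch-py;
# field names are guesses - the real mapping is linked from the PR.
from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200
es.indices.create(index='channel_logs', body={
    'mappings': {
        'channel_log': {
            'properties': {
                'message': {'type': 'string'},  # analyzed, full-text searchable
                'nick': {'type': 'string', 'index': 'not_analyzed'},
                'channel': {'type': 'string', 'index': 'not_analyzed'},
                'server': {'type': 'string', 'index': 'not_analyzed'},
                'created': {'type': 'date'},
            }
        }
    }
})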

Indexing Channel Logs

If you have read my last blog post, you will know I already completed this task during the GSoC community bonding period. Last week I just refactored it a bit, made it configurable in the settings file and tested the code on my system. My mentor told me he is facing some issues with a secure ElasticSearch installation; he will merge my PR as soon as the installation is done.

Oh wait, I forgot to tell you how I will be transferring older chat logs from MongoDB to ElasticSearch. One way is to write a script, but there is always a better way to do such things: there is an open source river plugin available which does exactly what I want.

Generating permalinks for incoming chat messages

Each chat log/message will have its own unique permalink that users can send to each other to refer to some old conversation. There are many other use cases for it, which I will not be discussing here now. There wasn't much to do for this task, because these permalinks are generated automatically in the form of the '_id' field of the document indexed in ElasticSearch.

Conclusion

In terms of code, the basic Waartaa-ElasticSearch integration is complete. I hope the problem with the ElasticSearch installation gets resolved asap so that I can see my code in action in production :)

What’s next?

Next, I will be working on building an API to search/browse channel logs. Hopefully I will complete it by end of this week and will write a blog post about it too :)

<script>JS I love you</script>


May 26, 2014

GSoC-2014 isitfedoraruby - Week 1

In case you haven't heard, I have been accepted again this year for Google Summer of Code :) This time I will be working on enhancing a Rails app that provides information about the state of rubygem packaging in Fedora.

isitfedoraruby is a project that was crafted in GSoC 2012 by Zuhao Wan. I will pick it up where he left off and add some new features that will hopefully make packagers' lives easier.

You can read my proposal here.

Why I got involved in this project

Last year was the first time I took part in GSoC, the first time I got involved in Fedora, and the first time I started contributing to GitLab. I am now a packager, still trying to package GitLab for Fedora/RHEL (help needed!), and a member of GitLab's community core-team. I even became an apprentice to Fedora's infra team (although, admittedly, I haven't dedicated much time to it).

So, having packaged a bunch of rubygems already, I have stumbled upon many cases where my packager's life would be easier if I had some tools to work with. And this is where isitfedoraruby comes in. As a rails app, it looked like the perfect opportunity to learn about the framework and make something that the Fedora Ruby community could use on a daily basis when packaging.

Project details

As I wrote earlier, isitfedoraruby is a rails app used mostly internally in Fedora by ruby packagers. The source code is hosted on github, the app is deployed on Openshift, and all the info is imported by a rake task run from an hourly cron job.

What you see in production uses Rails 3 and ruby 1.9.3. Zuhao has already worked on porting it to Rails 4, so I'll be working on that branch.

The same project was also picked by some high school students during Google's Code In program in 2012. You can read more in Mo Morsi's blog post.

What we have so far

Here is a brief list of the pages there are so far.

General info pages

  • Home page
  • All rubygems
  • All fedorarpms
  • Contribute
  • About
  • Gemfile Tool

Single rubygem page

  • Link to fedorarpm
  • Homepage url
  • Source code url
  • Version
  • Total downloads
  • Description
  • Dependencies

Single fedorarpm page

  • Header links
    • Dependency Tree (using D3 js library)
    • Timeline (chronological visualization of the bugs & version releases)
  • Link to rubygem
  • Link to source code url on http://pkgs.fedoraproject.org/cgit
  • Number of git commits
  • Up to date (yes/no)
  • Link to maintainer
    • link_to bubble chart page showing the packages a user owns
    • Table of rpms owned by user
      • link_to rpm
      • Upstream version
      • Rawhide version
      • Number of git commits
  • Description
  • Versions table
    • rawhide
    • fedora 20
    • fedora 19
    • gem version
  • Dependencies table
    • package name (link_to fedorarpm)
    • rawhide version
    • f20 version
    • f19 version
    • upstream version
  • Dependents table
    • package name (link_to fedorarpm)
    • rawhide version
    • f20 version
    • f19 version
    • upstream version
  • Bugs table
    • ID (link_to bugzilla, struck through if resolved)
    • Bug title
    • Review status
  • Builds table
    • Build ID (link_to koji url)
    • Title (package name)

Features to be added and bugs fixed

Test suite

The current test suite is practically non-existent. Some of the tools I plan to use are:

  • rspec for testing models and controllers
  • factory_girl for feeding tests
  • capybara to simulate user interaction with the app

For views I will use feature specs. Existing minitest tests will be replaced with rspec ones.

Enhance the gemfile tool

The Gemfile tool checks a Gemfile or Gemfile.lock and shows whether the gems are packaged for Fedora. In its current implementation, it basically dumps all the information on the screen, which is not very handy if you need to extract it.

The functionality I want to add is:

  • provide a better view of the output (prettier table view)
  • ability to provide a Gemfile url and have the output on a static page like: http://isitfedoraruby/stats/gemfile?url=www.example.com/Gemfile.lock

With this we could calculate how many of the gems are in the Fedora repos, which brings me to another cool feature.
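To give a flavor of the idea, here is a minimal sketch of the core check: pulling gem names out of a Gemfile.lock and reporting which are packaged. The 'packaged' set is a hypothetical lookup; in the real app that information comes from the imported fedorarpm data:

import re

# Top-level gems in a Gemfile.lock appear as 4-space-indented
# "name (version)" lines under the specs: section.
GEM_LINE = re.compile(r"^    (\S+) \(([\d.]+)\)")

def gems_from_lockfile(path):
    gems = []
    with open(path) as f:
        for line in f:
            match = GEM_LINE.match(line)
            if match:
                gems.append((match.group(1), match.group(2)))
    return gems

# 'packaged' is a hypothetical set of gem names known to be in Fedora.
def report(path, packaged):
    for name, version in gems_from_lockfile(path):
        status = "packaged" if name in packaged else "missing"
        print("%-30s %-10s %s" % (name, version, status))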

Show the packaging progress of a gem

There are times when a gem depends on other gems not yet packaged for Fedora. A cool feature would be to be able to see the packaging progress of a gem.

The plan is to either implement one of the two options below (or both):

  1. Query https://rubygems.org and extract information on the dependencies.
  2. Provide a yaml file with info on the gem's review request bugzilla issue, then query koji for rawhide builds and bugzilla for any Blocks issues. If we have those two values we can calculate how many dependent gems are already packaged and how many are not yet.

I prefer the first option, as with it one could see what dependencies are needed prior to submitting a package for a review request.
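As a sketch of option 1, the rubygems.org JSON API exposes a gem's dependencies directly; the endpoint shape is as I understand the v1 API and is worth double-checking against its docs. The percentage calculation previews the progress bar idea below:

import requests

# Fetch a gem's direct runtime dependencies from rubygems.org.
def runtime_deps(gem):
    url = "https://rubygems.org/api/v1/gems/%s.json" % gem
    info = requests.get(url).json()
    return [dep["name"] for dep in info["dependencies"]["runtime"]]

# Packaging progress: share of direct runtime deps already in Fedora.
# 'packaged' is a hypothetical set of gem names known to be packaged.
def progress(gem, packaged):
    deps = runtime_deps(gem)
    if not deps:
        return 100.0
    done = sum(1 for dep in deps if dep in packaged)
    return 100.0 * done / len(deps)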

A nice progress bar with the percentage of gem packaging completion will be provided as well. For each gem that gets into rawhide, the progress bar moves a little further. There could also be other cool info on the page, like links to bugzilla requests so that someone could lend a hand, etc.

At first I will focus on implementing this for a single gem but later it can be extended to track the progress of a rails app.

That functionality will mainly come from the next feature in the list. Enter dependency checker.

Enhance dependency checker

The current representation of a gem's dependencies is in a tree format, which is not very handy if you want to extract information.

Influenced by https://www.gemlou.pe, I plan to implement a similar representation and show the dependencies in a more elaborate way.

UI/UX enhancements

The app uses bootstrap as a frontend framework, as you could probably tell. The idea is to make it more user friendly and prettier. If it were up to me I would have used foundation in the first place; upstream even provides a gem for immediate use in rails apps. In my opinion it targets developers that don't want to get too messy with css. For the time being I am not considering changing frameworks, but it's food for thought.

Add pages providing more general info

Pages that would help the user experience are:

  • list of gems pending review
  • list of gems already assigned but stagnant for too long
  • list of gems pending review and marked as NEEDSPONSOR

All these can be queried from bugzilla through the ruby-bugzilla gem.
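The actual queries will go through ruby-bugzilla; purely to illustrate the kind of search involved, here is a language-agnostic sketch against Bugzilla's HTTP API. The endpoint and parameter names are assumptions to verify against the server's API documentation:

import requests

# Illustrative only: the real implementation will use the ruby-bugzilla
# gem. This sketches the kind of query needed -- open rubygem review
# requests -- against Bugzilla's REST API; the endpoint and parameter
# names are assumptions to verify against the server's API docs.
def pending_reviews():
    params = {
        "product": "Fedora",
        "component": "Package Review",
        "status": "NEW",
        "summary": "rubygem-",  # restrict to rubygem review requests
    }
    resp = requests.get("https://bugzilla.redhat.com/rest/bug", params=params)
    return [(b["id"], b["summary"]) for b in resp.json().get("bugs", [])]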

Documentation on contributing

Last but certainly not least, I will document the development contribution process should anyone want to provide any fixes/features.

Apart from developing the app, I will provide a page with comprehensive steps on packaging a rubygem. A draft article can be found here.

Progress of previous week

I have been reading the codebase to get familiar with it and have done a few minor things.

Deploy on staging server

Currently I run it on heroku, but the database exceeded the free plan they provide, so I'll have to move it elsewhere. I haven't imported the gems yet, as the rake tasks pull in all gems from rubygems.org and the database gets huge; fixing this in the rake task is on the to-do list. Also, what you see is the next version of the app using Rails 4 :)

Note bugs that need fixing

Like last year, I will be documenting my progress on a trello board. In the code clean up list you can see what is marked to be fixed. If you have any comments please do leave them on the cards.

What's on this week

We are already in the middle of the second week, so here's a preview of what I'm working on.

Architecture analysis

The code needs a clean up, and the only way to do this is to analyze the app's architecture. That will give me an overview of the changes that need to be made, like merging the rubygem page view into the fedorarpm one.

Setup a proper dev/test environment

Right now, if anyone wanted to play with the app, apart from the db migration you would have to run the rake tasks that import all the info needed to have a functional app. Rather than importing each gem (which can be done in batches), I would like to create some fake data to begin with.

This would help me afterwards with the tests.

Fix bugs found in first week

I'll try to fix as many of the bugs I found last week as possible.

The end (for now)

Soo, that's it for now! Hope you made it this far :) Cheers to an interesting summer!

GSOC Week 1 : Getting hands dirty
The primary job in the coming days, according to the timeline in the accepted proposal, is to identify the counters present in the different xlators in Gluster's inverted graph and standardize the stats so that they are available at a single end point in the application.

In this regard, Anand Avati, a longtime contributor to Gluster, has advised me to look into the meta xlator, which already stores some stats like latency measurements and fop counts. The meta xlator lies at the top of the graph, hence it's the ideal location to gather the overall latency of a request given to gluster. Based on my experience testing meta, I've found that the meta xlator, when realised on a running mounted volume, consists purely of virtual files. To enable latency measurement, you have to do "echo 1 > $MNT/.meta/measure_latency". Don't forget to put a '-n' after echo, otherwise the file receives an input of "1\n", which leads to an incorrect write. I realised this after spending quite some time evaluating the variable contents by debugging glusterfs with gdb. A redacted output of this can be found at http://fpaste.org/104770/
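A script can sidestep the trailing-newline trap entirely by writing the flag directly; a small sketch, with an example mount point:

# Enable latency measurement through the meta xlator's virtual file.
# Writing "1" with no trailing newline avoids the "1\n" pitfall that
# plain 'echo' (without -n) runs into.
MNT = "/mnt/glusterfs"  # example mount point

with open(MNT + "/.meta/measure_latency", "w") as f:
    f.write("1")  # exactly one byte, no newline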

So the basic idea right now is to read through all the xlators, document which stats each of them stores, and then provide access to this information in a standardized manner.

Google Summer of Code 2014 : Week 1 update

It has been quite a hectic week for me, being the first week of the GSoC period. I would like to present what I will be doing during this summer break. I am working with the Fedora Project to develop a web application that will offer video lecture series and tutorials. It works to create a virtual classroom for new fedora contributors, acting as a platform for them to engage with the community and learn how they can best contribute.

Mostly this service will be used to run online courses on contributing at various levels, be it documentation, bug-fixing or packaging. The project would certainly increase activity in the community and make it easier for newer members to craft their way around the fedora community, using a virtual classroom environment to train new contributors with known educational resources: a combination of written, image and video content.

The code for the project can be found here and the project proposal for the GSOC is here.


May 25, 2014

Hacking into Bugspad: First Week

So the first week of the coding period comes to an end, and there was a lot to learn. Before getting my hands dirty, my mentor, Kushal Das, suggested I keep track of the features I wished to have in the bug tracker. I made a doc to keep track of those, where he highlighted any errors I made. Then I implemented the basic bug UI for bugspad, i.e. pages for creating, updating and displaying bug details, along with a commenting feature. Using the golang http libraries has been a treat, both in terms of the amount of code written and the fast, efficient output produced. I also attended my first IRC meeting at #fedora-infra, and it was great meeting and interacting with the infra team. I will now check whether the features I have implemented work as intended and get feedback from my mentor before proceeding. I would also love to hear feedback from the rest of the fedora project members.


May 21, 2014

GlitterGallery: 2X GSoC love this year!

Last year was a significant one for GlitterGallery: we went from idea to working (if buggy) prototype. It was a significant year for me too - I fulfilled most of my promises and learned a great deal in the process. This year the significance is going to be 2X!

  • We have a new ninja in the house. My friend Rohit’s going to help us upgrade to rugged, polish the authentication, write better tests & improve pull requests!
  • We’re just two months away from our beta usage by Fedora’s design team.

Before starting on this summer’s work, I thought I’d summarize some long-pending updates:

  • We now meet every Wednesday at #glittergallery on freenode. If you don’t find one of us there, just say hi to glitterfox! ;)
  • I’m working on a meetbot so we can publish logs for future meetings. Here are our first and second meetings’ (shabby) logs.
  • My work this summer revolves around redesigning the entire thing + the usual stuff I do. The proposal’s available here.
  • Not much community bonding for me this time since I was stuck with exams. However, I’d like to think Rohit’s now comfortable with the way we roll and will do us proud!

That’s all for now! Stay tuned for updates!

Hacking on Waartaa – GSoC Community Bonding Period

A bit late, but yes, I have been selected for Google Summer of Code (GSoC) with The Fedora Project to hack on Waartaa. Ratnadeep Debnath (rtnpro) and Sayan Chowdhury are my mentors.

Waartaa is an open source, modern IRC client for the web. It supports centralized logging, 24×7 idling, notifications, a unique identity for a user on IRC across multiple clients/devices, and a rich UI for an awesome user experience. During my GSoC tenure, I plan to implement:

  • A central hub for searching/reading channel logs.
  • Bookmarking channel logs.
  • Video/audio conferencing facility.
  • Admin console panel in Waartaa.

Check out my detailed GSoC proposal for more info.

The last three weeks have been pretty exciting. I simply love Javascript, and the fact that Waartaa is a Meteor based project makes me want to spend even more time on it. Below are the things I have done/coded so far.

Setting up Waartaa

Before I was selected for GSoC, I had installed Waartaa on Ubuntu, but later I was told to set up Waartaa on Fedora for future development work. And so I did. Unfortunately, it took more time than I anticipated because of some partition issues on my laptop. I am glad that it is set up properly now.

Indexing Channel Logs in ElasticSearch

To implement a central hub for searching/reading channel logs, it is obviously necessary to store the logs somewhere first. Waartaa’s current implementation saves channel logs in MongoDB, which is highly unoptimized for full text search. My mentors decided to use ElasticSearch for its full text search capability. Here is the basic code I have written to save channel logs in ElasticSearch: https://github.com/waartaa/waartaa/pull/89/files
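The PR has the Meteor-side details; conceptually, indexing a single chat message boils down to a document POST like the sketch below. Index, type, and field names are illustrative:

import json
import requests

# Conceptual sketch of indexing one chat message; the actual code in the
# PR does this from Meteor/Node. Index, type, and field names are
# illustrative, not the final schema.
log = {
    "channel": "#fedora",
    "nick": "rtnpro",
    "message": "waartaa 0.1 is out!",
    "created": "2014-05-21T18:30:00Z",
}
resp = requests.post(
    "http://localhost:9200/channel_logs/channel_log",
    data=json.dumps(log),
)
print(resp.json()["_id"])  # ElasticSearch assigns the document id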

Fixing few minor issues

I fixed a few minor issues as well, both of them on the client side. Below are the PRs I sent:

Unit testing

I was told to get familiar with the code base by unit testing it. There were no previous unit tests in Waartaa’s code base, so I had to start everything from scratch. There aren’t many unit testing frameworks available for Meteor; the RTD test runner was our best option. RTD uses Jasmine as its default unit testing framework and has good documentation available, so writing unit tests was not as hard as I thought it would be. Till now I have written tests for client side template methods and events only. Here is the PR which I sent recently: https://github.com/waartaa/waartaa/pull/102

Please subscribe to get notifications of future posts on my GSoC work. Till then happy summer and happy hacking! :)

<script>JS I love you</script>


May 20, 2014

Coding Period Begins – Bugspad

Now that I am into the coding period, the plan for the first week is to build a bug creation page, along with a page for displaying bugs. I have decided on the required fields; let's see what changes need to be made to the basic code layout that I planned during the community bonding period! #excited


May 08, 2014

Here we go again !!
I'm just gonna put this right here...

https://fedoraproject.org/wiki/GSOC_2014/Student_Application_Vipulnayyar

https://fedoraproject.org/wiki/GSOC_2014/Student_Application_Vipulnayyar/glusterfsiostat

http://www.google-melange.com/gsoc/project/details/google/gsoc2014/vipulnayyar/5686888887222272

Playing the game again is gonna be interesting this year. Why?

  • A cool project related to distributed storage.
  • An awesome Open source company.
  • An amazing mentor.

April 23, 2014

Bugspad – GSoC 2014

Feels great having been given the opportunity to work as part of The Fedora Project, my favorite linux distro. I will be implementing a UI from scratch for the current bugspad application created by Kushal Das, who will be mentoring me. I will be using golang itself, without a web framework, to implement this. After discussing with my mentor, we agreed that using a golang framework like revel or gorilla would only make it slower, defeating the very purpose of bugspad: to be a fast and efficient bug tracking system. So, basically, I will be redoing bugzilla using modern technologies and keeping performance in mind :D . Expecting a great summer ahead! #fedora, #summer