Just Fedora Planet

Searching my email attachments with Ponymail and Tika

Posted by Mark Cox on April 01, 2023 06:09 PM

Moving my email archives to Ponymail went well

One feature I forgot that Zoe had was how it indexed some attachments.  If you have an email with a PDF attachment and that PDF has plain text in it (it isn't just a scan), then you can search on words in the PDF.  That's super handy.  Ponymail doesn't do that; in fact you don't get to search on any text in attachments, even if they are plain text (or things like patches).  Let's fix that!

Remember how I said whenever I need some code I first look to see if there is an Apache community with a project that does something similar?  Well, Apache Tika is an awesome project that will return the plain text of pretty much whatever you throw at it.  PDF? Sure.  Patches? Definitely.  Word docs? Yup.  Images? Yes.  Wait, images?  So Tika will go and use Tesseract and do an OCR of an image.

Okay, so let's add a field to the mbox index, attachmenttext, populate it with Tika, and search on it.  For now, if some text in an attachment matches your search query you'll see the result, but you won't know exactly where the text appears (perhaps later it could highlight which attachment it appears in).

I wrote a quick Python script that runs through all emails in Ponymail (or some search query subset) and, if they have attachments, runs all the attachments through Apache Tika, storing the plain text in the attachmenttext field.  We ignore anything that's already got something in that field, so we can just run this periodically rather than on import.  Then a one-line patch to Ponymail makes it also search the attachmenttext field.  40,000 attachments and two hours later, it was all done and working.
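For the curious, the backfill script is roughly the shape of the sketch below.  This isn't the actual code: it assumes the tika and elasticsearch Python packages, a running Tika server, and made-up names for the indices and attachment storage (ponymail-mbox, ponymail-attachment, a base64 "source" field keyed by hash), so treat it as an outline to adapt to your own Ponymail/Elasticsearch setup.

    import base64

    from elasticsearch import Elasticsearch, helpers
    from tika import parser   # client for a running Apache Tika server

    es = Elasticsearch("http://localhost:9200")
    MBOX_INDEX = "ponymail-mbox"            # hypothetical index names --
    ATTACH_INDEX = "ponymail-attachment"    # check your own installation

    def tika_text(blob):
        """Ask Tika for the plain text of an arbitrary attachment."""
        result = parser.from_buffer(blob)
        return (result.get("content") or "").strip()

    # Only touch emails that have attachments but no attachmenttext yet,
    # so the script can be re-run periodically instead of hooking import.
    query = {"query": {"bool": {
        "must": [{"exists": {"field": "attachments"}}],
        "must_not": [{"exists": {"field": "attachmenttext"}}],
    }}}

    for hit in helpers.scan(es, index=MBOX_INDEX, query=query):
        texts = []
        for att in hit["_source"].get("attachments", []):
            stored = es.get(index=ATTACH_INDEX, id=att["hash"])
            blob = base64.b64decode(stored["_source"]["source"])
            texts.append(tika_text(blob))
        es.update(index=MBOX_INDEX, id=hit["_id"],
                  body={"doc": {"attachmenttext": "\n".join(t for t in texts if t)}})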

It's not ready for a PR yet; for Ponymail upstream we'd probably want the option of doing this at import, although I chose not to, so we can be deliberately careful, as parsing untrusted attachments is risky.

So there we have it; a way to search your emails including inside most attachments, outside the cloud, using Open Source projects and a few little patches.

How Ponymail helped me keep my email archive searchable but out of the cloud.

Posted by Mark Cox on March 28, 2023 11:51 AM
I have a lot of historical personal email, going back as far as 1991, and from time to time there's a need to find some old message.  Although services like GMail would like you to keep all your mail in their cloud and pride themselves on search, I'd rather keep my email archive offline and encrypted at rest unless there's some need to do a search.  Indeed, I use Google Takeout every month before removing all historic GMail messages. Until this year I used a tool called Zoe to keep my email archive searchable.  You can import your emails, it uses Apache Lucene as a back end, and it gives you a nice web-based interface to find your mails.  But Zoe has been unmaintained for over a decade and has mostly vanished from the net. It was time to replace it.

Whenever I need some open source project, my first place to look is whether there is an Apache Software Foundation community with a project along the same lines.  And the ASF is all about communities communicating over email, so not only is there an ASF project with a solution, but that project is also used to provide the web interface for all the archived ASF mailing lists.   "Ponymail Foal" is the project, and lists.apache.org is where you can see it running.  (Note that the Ponymail website refers to the old version of Pony Mail, before "Foal".)

Internally the project is mostly Python, HTML and JavaScript, using Python scripts to import emails into Elasticsearch, so it's really straightforward to get up and running following the project instructions.

So can I just import the several hundred thousand email messages I have in random text mbox format files and be done?  Well, nearly.  It almost worked, but it needed a few tweaks:

  • Ponymail wasn't able to parse a fair number of email messages.  Analysing the mails showed only three root causes for messages failing to import:
    • Bad "Content-Type" headers.  Even my bank gets this wrong with the header Content-Type: text/html; charset="utf-8 charset=\"iso-8859-1\"".  I just made the code ignore similar bad headers and try the fallbacks.   Patch here
    • Messages with no text or HTML body and no attachments.  These are fairly common; for example, a calendar entry might be sent as "Content-Type: text/calendar".  I made it so that if there is no recognised body, it just uses whatever the last section it found was, regardless of content type.  Patch here
    • Google Chat messages from many years ago.  These have nothing useful: no body, no To:, no message id, no return address. Rather than noting them as failures, I made the code ignore them completely.  Since this only produces a warning, no upstream patch was prepared.
  • Handling List-Ids.  Ponymail likes to sort mail by List-Id, which makes a lot of sense when you have thousands of Apache lists.  But with personal email, and certainly when you subscribe to various newsletters, get bills, or have spam that got into the archives, you end up with lots of list ids that are only used once or twice or are not useful.  Working on open source projects there are lots of lists I'm on whose email I want archived, but it would be nice if it were separated out in the Ponymail UI.  So really I needed an 'allow list' of list ids that keep their own archive, with everything else defaulting to a generic list id (my email address, where all those mails arrived); a sketch of that mapping also appears after this list.  Patch here
  • HTML email.  Where an email contains only HTML and no text version, Ponymail will make and store a text conversion of the HTML, but sometimes, especially with those pesky bank emails, it's useful to be able to see the HTML with all the embedded images.  Displaying HTML email in HTML isn't really a goal for the project, especially since you have to be really careful you don't end up parsing untrusted JavaScript, for example.  And you might not want all those tracking images to suddenly start getting pinged.  But I'd really like a button you could use on selected emails to display them in HTML.  Fortunately Ponymail stores a complete raw copy of the email, and my proof-of-concept worked, so this can be easy to add in the future.
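To make the Content-Type fix above a little more concrete, the fallback logic boils down to something like this.  It's a sketch of the idea rather than the actual patch; the function name and the list of fallback charsets are mine:

    import codecs

    def decode_body(payload, declared_charset):
        """Decode a MIME part body, tolerating bogus charset values in the
        Content-Type header (like the bank example above)."""
        for charset in (declared_charset, "utf-8", "iso-8859-1"):
            if not charset:
                continue
            try:
                codecs.lookup(charset)           # is this even a known charset?
            except LookupError:
                continue                         # bad header value, try the next fallback
            try:
                return payload.decode(charset)
            except UnicodeDecodeError:
                continue                         # declared charset doesn't match the bytes
        return payload.decode("utf-8", errors="replace")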
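And the List-Id handling reduces to a small mapping like the one below.  Again, a sketch with made-up values: the allow-list entries and the fallback address are placeholders, not my real configuration:

    # Only List-Ids on the allow list keep their own archive in the UI;
    # everything else is folded into a generic list named after the
    # address the mail was delivered to.
    ALLOWED_LIST_IDS = {
        "devel.lists.fedoraproject.org",
        "announce.apache.org",
    }
    FALLBACK_LIST_ID = "me@example.com"   # placeholder for my own address

    def effective_list_id(raw_header):
        """Map a raw List-Id header to the list id the mail is filed under."""
        if raw_header:
            # Headers usually look like: Some List <devel.lists.fedoraproject.org>
            list_id = raw_header.strip().rstrip(">").rsplit("<", 1)[-1].lower()
            if list_id in ALLOWED_LIST_IDS:
                return list_id
        return FALLBACK_LIST_ID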
Managing a personal email archive can be a daunting task, especially with the volume of email correspondence. However, with Ponymail it's possible to take control of your email archive, keep it local and secure, and search through it quickly and efficiently using the power of Elasticsearch.

    Why upstream Ansible stopped shipping rpms

    Posted by Toshio Kuratomi on August 25, 2020 03:41 PM

    The upstream Ansible Project used to ship rpms and tarballs on their server, releases.ansible.com. For Ansible-2.10+, they’ve (I’m part of the project although not the originator of this decision) decided not to ship rpms anymore and to push people to use pypi.python.org as the official source of the tarballs. This came up in a recent thread on twitter with a question of whether this meant that Ansible was forgetting who it was meant to serve (sysadmins) as sysadmins want to get their software in their platforms’ native packaging format rather than the language packaging format.

    I don’t think this decision, in and of itself, means that.

    For tarballs, I’m not sure of the rationale but it doesn’t seem like a problem to me. Most python software packaged in Fedora has pypi as the canonical source of the tarballs.  Pypi serves as a hosting service for the source code rather than the point from which most Fedora users are going to install the software.

    The lack of upstream rpms seems to be what triggered a nerve for some people. I was present for those discussions and I think the reasons make a lot of sense for end users.  A few of those reasons:

    • Fedora and EPEL have been building superior rpms since forever. There are a few reasons that the Fedora and EPEL rpms are better in my mind:
      • They know the distro and the distro users that they’re targeting so they can do more specific things. For instance, they include a dependency on python-jmespath to make a popular filter plugin work out of the box. The upstream rpm did not as python-jmespath is available in EPEL but not in RHEL.
      • The Fedora rpms were able to switch over to Python3 when the distro did while the upstream rpm had to wait while other rpm-based distros caught up (the spec file was shared) and, with no specific roadmap for dropping python2 support, hesitated over whether switching the rpms would inconvenience its users.
      • The upstream version was an automated build. Not to say that all automated builds are always bad, but our build was really geared towards putting our software into an rpm rather than creating a high quality rpm in and of itself. Although we weren’t quite as bad as simply dropping a binary blob onto the filesystem, we did shortchange things like the rpm changelog because that wasn’t necessary to get the build to work.
    • Our rpms weren’t all that popular. We attributed that to most people using the rpms that came with Fedora or EPEL which we thought were much better overall, anyway.
    • It felt like our rpms were confusing the situation for end users. If you were a RHEL or CentOS user, you could get ansible rpms from three places: a Red Hat channel, EPEL, or releases.ansible.com. All three of these sources shipped different rpms. Dependencies (as noted above) were different, the directories that end-users and other packages could install things into were present in some of the rpms and not others, and docs might be built or not, or not even shipped in some cases. An end user who installed from releases.ansible.com (the least popular of these sources) could be confused as to why certain things didn’t seem to work the way a blog post or article (written by someone with the EPEL rpm) implied they should.

    Getting back to the fear that removing rpms from releases.ansible.com was an indication that ansible is forgetting that it is a tool for sysadmins and needs to be shipped in ways that sysadmins will find palatable…. I don’t think that the removal of rpms and tarballs is such an indication, as the above rationale seems like it will make things better for sysadmins in the end. However, ansible-2.10 is a big realignment of how ansible is developed and shipped, and I think those changes are going to have costs for sysadmins [2], [3]. nirik (Kevin Fenzi, the Fedora/EPEL ansible package maintainer) and I have been talking on and off about how the Fedora/EPEL ansible rpm should be adapted to minimize those costs, but it is a large change, and changes are often both hard in the transition and, after the transition is over, may be better in many areas but worse in some others. Ideas about how we can smooth out the things that are worse while taking advantage of the things that are better are appreciated!

    Footnote [2]:

    The problems driving upstream to make the major changes that are present in 2.10: https://www.ansible.com/blog/thoughts-on-restructuring-the-ansible-project

    Footnote [3]:

    A newer document, focused on the implementation of the changes proposed above and how they affect end users: https://github.com/ansible-collections/overview/blob/master/README.rst

    PulseCaster 0.9 released!

    Posted by Paul Frields on September 14, 2019 02:57 AM

    The post PulseCaster 0.9 released! appeared first on The Grand Fallacy.

    It says… It says, uh… “Virgil Brigman back on the air”.

    The Abyss, 1989 (J. Cameron)

    OK, I feel slightly guilty using a cheesy quote from James Cameron for this post. But not as guilty as I feel for leaving development of this tool to hang out so long.

    That’s right, there’s a brand new release of PulseCaster available out there — 0.9 to be exact. There are multiple fixes and enhancements in this version.

    (By the way… I don’t have experience packaging for Debian or Ubuntu. If you’re maintaining PulseCaster there and have questions, don’t hesitate to get in touch. And thank you for helping make PulseCaster available for users!)

    For starters, PulseCaster is now ported to Python 3. I used Python 3.6 and Python 3.7 to do the porting. Nothing in the code should be particular to either version, though. But you’ll need to have Python 3 installed to use it, as most Linux distros do these days.

    Another enhancement is that PulseCaster now relies on the excellent pulsectl library for Python, by George Filipkin and Mike Kazantsev. Hats off to them for doing a great job, which allowed me to remove many, many lines of code from this release.
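    To give a sense of why pulsectl let so much code disappear, enumerating the PulseAudio devices a recorder cares about takes only a handful of lines. This is a generic example of the library, not PulseCaster’s actual code:

        import pulsectl

        # List the capture devices (microphones, monitors) and playback sinks
        # that PulseAudio currently knows about.
        with pulsectl.Pulse("pulsecaster-demo") as pulse:
            for source in pulse.source_list():
                print("source:", source.name, "--", source.description)
            for sink in pulse.sink_list():
                print("sink:  ", sink.name, "--", sink.description)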

    Also, due to the use of PyGObject3 in this release, there are numerous improvements that make it easier for me to hack on. Silly issues with the GLib mainloop and other entrance/exit stupidity are hopefully a bit better now.

    Also, the code for dealing with temporary files is now a bit less ugly. I still want to do more work on the overall design and interface, and have ideas. I’ve gotten way better at time management since the last series of releases and hope to do some of this over the USA holiday season this late fall and winter (but no promises).

    A new release should be available in Fedora’s Rawhide release by the time you read this, and within a few days in Fedora 31. Sorry, although I could bring back a Fedora 30 package, I’m hoping this will entice one or two folks to get on Fedora 31 sooner. So grab that when the Beta comes out and I’ll see you there!

    If you run into problems with the release, please file an issue in GitHub. I have fixed my mail filters so that I’ll be more responsive to them in the future.


    Photo by Neil Godding on Unsplash.

    Flock 2019 in Budapest, Hungary.

    Posted by Paul Frields on August 16, 2019 07:28 PM

    The post Flock 2019 in Budapest, Hungary. appeared first on The Grand Fallacy.

    Last week I attended the Flock 2019 conference in Budapest, like many Fedora community members. There was a good mix of paid and volunteer community members at the event. That was nice to see, because I often worry about the overall aging of the community.

    Many people I know in Fedora have been with the project a long time. Over time, people’s lives change. Their jobs, family, or other circumstances move them in different directions. Sometimes this means they have less time for volunteer work, and they might not be active in a community like Fedora. So being able to refresh my view of who’s around and interested in an event like Flock was helpful.

    Also, at last year’s Flock in Dresden, after the first night of the conference, something I ate got the better of me — or I might have picked up a norovirus. I was out of commission for most of the remaining time, confined to my room to ride out whatever was ailing my gut. (It wasn’t pretty.) So I was glad this year also to be perfectly well, and able to attend the whole event. That was despite trying this terrible, terrible libation called ArchieMite, provided by my buddy Dennis Gilmore:

    [Photo caption: One of the things in this picture is mostly harmless, in the words of Douglas Adams. The other is ArchieMite.]

    I’m not going to exhaustively list all the sessions I attended, although I made it to virtually every hour of the event. Here are some of the highlights.

    Plenary sessions at Flock

    It’s always good to hear our project leader Matthew Miller talk about Fedora. I was happy to see the steadily rising popularity of Fedora especially as Linux distros overall seem to be flattening. A lot has been going well with both quality and reliability of the operating system, and it’s reflected in the graphs Matthew showed.

    Speaking as a (very!) part time contributor, I also would have liked to hear a rallying cry or two, something aspirational and specific. But I also understand that Fedora’s resources out of Red Hat haven’t increased a lot over the last couple of years. And it seems like the evolution of tech that Fedora is incubating is doing well overall. So even without a “moonshot” type goal I’m still satisfied that the project is on the right track for the future.

    The talk by Cate Houston from Automattic — makers of WordPress, which powers fine blogs like the one you’re reading, as well as Fedora Magazine — on building stronger teams was good as well. I did feel it lacked a bit on tactical approaches for community projects like Fedora. But there were plenty of lessons to chew on. One of my favorites I RT’d here.

    I also thought about her talk in the context of Fedora’s institutional knowledge. We don’t always do a good job of capturing lessons learned. Sometimes we have retrospectives about things we do. As long as we pull out some improvement from each of these, and put it into play, in the long run we should end up with better processes and practices. It’s up to each of us to drive that treadmill of constant improvement.

    My Flock appearance: RHEL 8 panel

    I spoke on a panel with Brendan Conoboy (development coordinator for RHEL 8), Denise Dumas (Red Hat’s VP of Platform Engineering), and Aleksandra Fedorova (team lead for continuous integration efforts in Fedora and RHEL). We talked about the road to RHEL 8, and what we learned about differences in Fedora and RHEL.

    One major point I made was about the huge weight a “secret schedule” puts on the Fedora community and project as a whole. I experienced this first hand as FPL, and I know Matthew Miller has too. It’s not always clear why certain features are important when they land in Fedora. Now that Red Hat has publicly committed to a new major release of RHEL every three years, we know that RHEL 9 will hit the streets in spring or summer 2022. That, plus knowing that Fedora 27/28 were vital to RHEL 8, gives us an idea about critical upcoming Fedora releases for RHEL 9. Think three years ahead, and it’s clear to me we’ll see efforts on technical change accelerating up through Fedora 33/34 (my estimate, not gospel).

    Modularity

    I also attended several sessions on Modularity. One of them was Merlin Mathesius’ presentation on tools for building modules. Merlin is on my team at Red Hat and I happened to know he hadn’t done a lot of public speaking. But you wouldn’t have guessed from his talk! It was well organized and logically presented. He gave a nice overview of how maintainers can use the available tools to build modules for community use.

    The Modularity group also held a discussion to hear about friction points with modularity. Much of the feedback lined up well with other inputs the group has received. We could solve some with better documentation and awareness. In some cases the tools could benefit from ease of use enhancements. In others, people were unaware of the difficult design decisions or choices that had to be made to produce a workable system. Fortunately there are some fixes on the way for tooling like the replacement for the so-called “Ursa Major” in Fedora. It allows normal packages to build against capabilities provided by modules.

    Zuul + composes

    There was an excellent presentation on Zuul, the Python-based CI system used by OpenStack. It seems quite mature at this point, incredibly flexible, and possibly of interest to Fedora. However, there are a number of CI systems in use. I’m not sure how many CI system maintainers were around for this talk. But I hope they will take note of the video when it’s issued and see if it’s worth investigating.

    Lubomír Sedlář presented on how to make faster composes. In large part, we stand to gain the most by not producing as much as we have been. One does wonder why we make so many separate expressions of the OS intended for servers, desktop computers, or other footprints.

    Lubomír and others are looking at ways that we could make some composing work available to the community. Currently you can only compose with a tight connection to a very large (and understandably sensitive) NFS storage area where we keep package builds. Being able to make that more available to the community would be helpful. I’ve been wondering for a while why we can’t publish our builds somewhere besides a storage location that others can’t freely access read-only (like an Amazon EBS or S3 store, or something similar not at Amazon). Lubomír also talked about work in progress on the composer code, and it’s good to see that no one considers the compose “good enough, let’s not try to improve it.”

    CI

    Another highlight was Dominik Perpeet’s talk on the CI Objective in Fedora. There are a number of people working on bringing the benefits of a CI/CD approach to Fedora. Dominik’s group plays a central part in that. We want to have two orders of magnitude more people run Rawhide daily than right now. A good CI process, along with properly iterating on that process, gets us closer to that goal. Education, awareness, and a shared sense of responsibility across package maintainers will, too! We don’t have to give up daily usability to be able to land new things in Rawhide. But we will need to work better together as a project. I think we’re on the right track but more is yet to come.

    In closing

    I attended a lot more talks, but I’m running out of time to get this blog out and feel like I added to the knowledge pool about Flock proceedings. Many, many other sessions happened at Flock — too many for any one blog to cover! I hope to see the videos soon on the Fedora YouTube channel, and then you too can see some of the fantastic work being done by contributors around the project.

    In closing, I want to sincerely and deeply thank Jen Madriaga, Veronica Cooley, Matthew Miller, Brian Exelbierd, Ben Cotton, and all the folks who helped organize and run such a great conference. This was one of the best Flock events in recent memory and it happens through the hard work of organizers. Thanks to all of you, and hope to see you next year at Flock on this side of the pond!

    Fedora BoF report from Summit 2018

    Posted by Paul Frields on May 15, 2018 12:00 AM

    The post Fedora BoF report from Summit 2018 appeared first on The Grand Fallacy.

    Last week I attended the Red Hat Summit 2018. There I interacted with customers, partners, community leaders, and some friends from around Red Hat. I enjoy going every year and look forward to it, despite the exhaustion factor. This year included a fun event for Fedora — a birds of a feather (or BoF) session. Read on for my report.

    FPL Matthew Miller arranged the BoF, and several people assisted there, including Brian Exelbierd and me. Several core contributors attended as well. But out of the 30-35 people who attended, the vast majority were not community members, but rather people interested in or using Fedora.

    We split into groups and ran each in a Lean Coffee style. My group included about 12 or so people. I don’t recall every topic in our prioritized cards, but notable ones included:

    • Use of legally encumbered codecs
    • NVidia drivers
    • How to get started contributing
    • Whence the minimal install

    Our group members voted up the codecs + NVidia topics by a factor of more than 2 over the next highest rated topic! That’s why it made me very happy to report to them that Fedora 28 includes the GNOME Software function that lets users decide whether to enable selected repositories outside Fedora.

    I know a few people grumble about this function. I used to rail about the topic myself. As I’ve grown older and met more people in different walks of life, I’ve realized I no longer want to interfere with their agency to make their own choices. The BoF attendees responded well to this new feature and were overjoyed to hear about the way this is now available. I’m pretty sure we made a few instant Fedora converts on the spot.

    I enjoyed the BoF overall, but this particular topic resonated with me strongly. I know how hard several of the Workstation working group members worked to make this new feature a reality in Fedora 28. This experience showed me it has an impact on people’s interest in using Fedora. If it also makes them more interested in contributing, that’s great for Fedora in the long run.


    Also, I realize I haven’t written on this blog in a very long time. I’ll try not to take so long next time, but no promises. Life is very busy!

    Better Pagure email filters for Gmail

    Posted by Paul Frields on April 05, 2017 03:03 PM

    The post Better Pagure email filters for Gmail appeared first on The Grand Fallacy.

    I get quite a bit of email notification from Pagure, the free (as in freedom!), Python-based git forge. I have and participate in several repos there. Pagure sends me email when there’s a new or updated issue, pull request, or commit to a repo I monitor.

    Previously, all that email for me has gone into a “Pagure” folder. But Pagure offers a special header, X-pagure-project, which indicates the specific project that triggered the email. Unfortunately, Gmail’s filters notoriously lack the ability to parse arbitrary headers. This is actually a limitation of the Gmail API. Why there’s no getHeaders() call, I have no idea — but maybe it has something to do with worries about arbitrary content there.

    Now, I could probably set up filters to look for mail From: pagure@pagure.io and then look in the subject line for [Project-name]. But this means I have to make a new filter for every project manually. YUCK!

    Enter Google Apps Script

    So I wrote my very first Google Apps Script (this script is MIT licensed).

    Here’s the link, which will stay up to date.

    How to use the script

    To use it, you first need to connect Google Apps Scripts to your Google Drive. (Use New –> More in Google Drive, if this isn’t already done.) Then save this as a new script in your Drive.

    Adjust the parentlabel value if needed. Then set a timer trigger to run the processInbox function every 5 or 10 minutes.

    Make sure to remove any existing filter for Pagure notification email, so that your script has a chance to run against notifications in your Inbox.

    If you’ve never used Google Apps Scripts before, it’s easy to get started. I’m living proof, since I’m not much of a developer. Here’s a quick start from Google that will help.

    The next big things, 2017 edition.

    Posted by Paul Frields on February 09, 2017 03:37 PM

    The post The next big things, 2017 edition. appeared first on The Grand Fallacy.

    Along with several people on the Fedora Engineering team, I recently attended the DevConf.cz 2017 event. The conference has grown into an amazingly successful gathering of open source developers. Most attendees live in Europe but there were some from every continent. The coverage spanned all the big open source buzz-generating technologies. Session topics included containers, PaaS, orchestration and automation, and DevOps.

    One of the most frequent topics I heard at and around the conference was CI — continuous integration. The idea of CI isn’t new in and of itself. Many open source projects put it to use already. If you do any work on Github, you’ve already seen it at work. (Also notably, OpenStack features one of the biggest and arguably most successful at-scale efforts.) However, it hasn’t taken strong root in Fedora, yet.

    Based on the discussions I had and heard at DevConf.cz, that’s likely to change soon. Why? Well, for one thing, Fedora still breaks too often. This is especially (but not exclusively) true about Rawhide. But in many cases, we can easily understand these breakages. That means we can detect and prevent them before they occur. While update testing in Bodhi helps in the case of stable releases, that’s not a cure-all. That process still depends heavily on manual efforts that aren’t guaranteed. Moreover, those efforts form a gate of several days at a minimum, between building, karma, and tagging. This process no longer scales with new, fast-moving deliverables like Fedora’s edition of the Atomic Host. One size no longer fits all. We need a more flexible, automated approach.

    What does that mean? For Fedora, the CI concept means gating on automated tests that guarantee some level of validity to a change. These tests are run prior to introducing changes into a tree, ostree, module, container, etc. That way, we don’t pass breakage on to the consumer.  But we can message that breakage back to the maintainer directly. The maintainer then quickly acts to fix the issues. And they see more immediate results from their fix.

    We must do some work in the Fedora infrastructure to make this process work for Fedora contributors (maintainers especially). Some app work is also required to support our packaging and deliverables. We must make sure it’s easy for people to contribute to tests. Maintainers need to easily understand the status of their build requests. And of course the community must be able to continue contributing to higher levels of testing that make for a better Fedora. So CI isn’t the end of the story. It’s just another strong link in a chain of improvements.

    Fortunately, we’re well situated to do great work here. For instance, the Fedora Infrastructure team already plans to front our dist-git package control with Pagure. (You probably know Pagure is a full-featured collaborative git forge.) This ensures the Fedora dist-git acts as a source for CI, tested with each new proposed change. But we believe this is unlikely to cause more complexity. Packagers can expect to use tools they already know. Furthermore, they could use the popular, effective pull request model to maintain packages and develop testing. This lowers the bar to contribution while maintaining high quality.

    Another point in our favor is the entire Fedora application infrastructure has been running on the fedmsg messaging bus for years. We already can orchestrate and coordinate a huge number of automated activities around events involving our apps. Therefore a higher level of continuous integration and testing is well within our grasp.

    Of course, this blog post is light on details. (What else would you expect from a manager?) Those details are obviously important. As the least technical person in the Fedora Engineering team, though, I’m the last person you’d want to describe them in detail. But the folks in the Infrastructure community team will be launching a series of discussions on the mailing list to explore those details.

    We’re all passionate about making Fedora better. So hearing feedback from our contributors and colleagues at DevConf.cz has us very excited and enthusiastic about this next level. I encourage you to get involved and contribute constructively to the discussions!

    Holiday Break 2016.

    Posted by Paul Frields on December 08, 2016 06:31 PM

    The post Holiday Break 2016. appeared first on The Grand Fallacy.

    It’s sad I don’t get more time to post here these days. Being a manager is a pretty busy job, although I have no complaints! It’s enjoyable, and fortunately I have one of the best teams imaginable to work with, the Fedora Engineering team.

    Since we’re coming to the close of another calendar year, I wanted to take a moment to remind people about what the holidays mean to availability. I’m going to crib from an earlier post of mine:

    My good friend John Poelstra is fond of saying, “It’s OK to disappoint people, but it’s not OK to surprise them.” That wisdom is a big reason why I like to set expectations over the holidays.

    Working at Red Hat is a fast paced and demanding job. Working full time in Fedora is itself demanding on top of that. These demands can make downtime at the holiday important for our team. At Red Hat, there’s a general company shutdown between Christmas and the New Year. This lets the whole organization relax and step away from the keyboard without guilt or fear.

    Of course, vital functions are always staffed. Red Hat’s customers will always find us there to support them. Similarly, our Fedora infrastructure team will monitor over the holidays to ensure our services are working nominally, and jump in to fix them if not.

    Some people like to spend time over the holidays hacking on projects. Others prefer to spend the time with family and friends. I’ve encouraged our team to use the Fedora vacation calendar to mark their expected “down time.” I encourage other community members to use the calendar, too, especially if they carry some expectations or regular responsibilities around the project.

    So all this to say, don’t be surprised if it’s harder to reach some people over the holidays. I’m personally taking several weeks around this holiday shutdown as time off, to relax with my family and recharge for what’s sure to be another busy year. Whatever your plans, I hope the holiday season is full of joy and peace for you and yours.

    New layout for PEAR packages

    Posted by Remi Collet on December 05, 2016 01:03 PM

    The /usr/share/pear directory, used for installation of PEAR extensions on Fedora and Enterprise Linux, contains a lot of files which should not be stored there, especially because this folder is part of the default include_path.

    Starting with Fedora 18, this gets better.

    1. Documentation

    Extensions store their documentation in /usr/share/pear/doc.

    This directory is moved (since Fedora 15) to /usr/share/doc/pear (value of %{pear_docdir} macro).

    No impact on existing packages as, by definition, documentation is not necessary for use.

    2. RPM metadata

    RPM packages store their metadata, a copy of package.xml from the project's archive used to register the extension, in /usr/share/pear/.pkgxml.

    This directory is moved to /var/lib/pear/pkgxml (value of %{pear_xmldir} macro).

    No impact on old packages, as the path is hardcoded in the RPM (evaluated during its build).

    3. Unit tests

    Unit tests provided by each extension are installed in /usr/share/pear/test.

    This directory is moved to /usr/share/tests/pear (value of %{pear_testdir} macro).

    No impact on old packages, as these files are not required for use. Nevertheless, a rebuild is preferred.

    4. Data

    Extension data are stored in /usr/share/pear/data.

    This directory is moved to /usr/share/pear-data (value of %{pear_datadir} macro).

    No impact on old packages or manually installed extensions; in fact, this value is evaluated during the RPM build (or during manual installation of an extension) and the result is hardcoded in the scripts. Of course, existing content is preserved. A rebuild is also preferred.

    5. PEAR metadata

    The PEAR installer stores its metadata in various folders: /usr/share/pear/.registry, .channels, .depdb, and .filemap.

    As this is live data, it clearly should not be stored there. See the FHS.

    Its move to /var/lib/pear is planned, and is the subject of an RFC for the PEAR project, mainly because it requires significant changes to the code.

    So, for now, it is not moved.

    6. Conclusion

    These changes are done in php-pear-1.9.4-11.

    A huge amount of work has been done in Fedora 18 (and will soon be backported to the remi repository). More than one hundred packaged extensions have been rebuilt.

    More work is needed for the metadata, especially to ensure a smooth upgrade from the current layout. I will see when this becomes possible (Fedora 18 or 19), depending on the upstream answer, the time needed for the change, and the distribution schedule.

    To be continued...

    PHP and Arch specific Requires/Provides

    Posted by Remi Collet on December 05, 2016 01:03 PM

    This entry explains the recent changes in the PHP packages available in Rawhide (and soon in Fedora 15) regarding arch-specific Provides and Requires.

    1. Package dependencies

    All sub-packages automatically have two provides (and have for a while).

    For example: php-mysql and php-mysql(x86-64)

    All virtual provides from php have been fixed to also provide both forms.

    For example: php-mysql provides php-mysqli and php-mysqli(x86-64)

    So a web application or a PEAR package (which are noarch) will still use

    Requires: php-mysql

    But a C extension could use

    Requires: php-mysql%{?_isa}

    Of course, all dependencies between sub-packages of php already use arch-specific requires:

    For example: php-mysql requires php-common(x86-64)

    2. ABI compatibility check

    The old provides are still present

    php(api) = 20090626
    php(zend-abi) = 20090626
    php-pdo-abi = 20080721

    But new arch-specific provides are available and are used by default when a package is built

    php(api) = 20090626-x86-64
    php(zend-abi) = 20090626-x86-64
    php-pdo-abi = 20080721-x86-64

    So there is no need to change the PHP Guidelines; packages which use

    Requires: php(zend-abi) = %{php_zend_api}
    Requires: php(api) = %{php_core_api}

    will work with the old provides (so there is no need to rebuild any package now).

    When a package is rebuilt, it will use the new arch-specific ones.

    The old provides will be removed when a new ABI hits Rawhide (PHP 5.4.0?), that is, when a rebuild is mandatory anyway.

    Note: in the same way, php requires httpd-mmn = 20051115-x86-64, which is provided by httpd.

    3. Target version

    These changes are only planned for Fedora >= 15 (php-5.3.5-3).

    So if you need to add a %{_isa} to your package, it must be for Fedora >= 15 only. In all cases, you will have to maintain two versions of your spec, or a conditional requires, because of EPEL (and git is your friend).

    This has been discussed on the fedora-php-devel mailing list.

    Comments are welcome.

    PHP and Apache Security, SetHandler vs AddHandler

    Posted by Remi Collet on December 05, 2016 01:03 PM

    In official PHP packages in Enterprise Linux and Fedora <= 17, the engine was activated by the AddHandler directive. With Fedora 18, or for users of my repository, it is now activated by the SetHandler directive.

    Some explanations.

    Old version (in the /etc/httpd/conf.d/php.conf file):

    AddHandler php5-script .php

    As written in the Apache documentation, the presence of the suffix anywhere in the file name will activate the engine. This can raise a security problem in a public upload space, when a lack of control allows a user to upload an image.php.png file and have it executed.

    New version, recommended (§8) by the PHP project documentation:

    <FilesMatch \.php$>
        SetHandler application/x-httpd-php
    </FilesMatch>

    Now only the final suffix will activate the engine, so security is improved (even if I really think that giving the user control over uploaded file names is a huge design error). I haven't noticed any performance change.

    Warning: this change may break some configurations.

    Consider the case where you want to allow users to upload .php files to a public space, but deactivate the PHP engine (as on this blog).

    With the old configuration, you just had to remove the handler (and probably change the MIME type):

    <Directory /path/to/blog/public>
        RemoveHandler .php
        <Files ~ "\.php$">
            ForceType text/plain
        </Files>
    </Directory>

    This configuration will not work anymore, and must be changed.

    For example, I use the following (which also enables colorized output of sources for this space):

    <Directory /path/to/blog/public>
        <FilesMatch \.php$>
            SetHandler None
            ForceType text/plain
        </FilesMatch>
        <FilesMatch \.phps$>
            SetHandler application/x-httpd-php-source
        </FilesMatch>
    </Directory>

    Example: twit.php or twit.phps

    So, if you upgrade from Fedora 17 to Fedora 18, or if you update from PHP 5.3 to PHP 5.4 using my repository, don't forget to check and fix all your httpd configuration files.

    Wheee, another addition.

    Posted by Paul Frields on September 19, 2016 07:18 PM

    The post Wheee, another addition. appeared first on The Grand Fallacy.

    I’m thrilled to announce that Jeremy Cline has joined the Fedora Engineering team, effective today. Like our other recent immigrant, Randy Barlow, Jeremy was previously a member of Red Hat’s Pulp team. (This is mostly coincidental — the Pulp team’s a great place to work, and people there don’t just move to Fedora automatically.) Jeremy is passionate about open source and has a long history of contribution. We had many excellent applicants for our job opening, and weren’t even able to interview every qualified candidate before we had to make a decision. I’m very pleased with the choice, and I hope the Fedora community joins me in welcoming Jeremy!

    Another addition.

    Posted by Paul Frields on June 06, 2016 07:43 PM

    The post Another addition. appeared first on The Grand Fallacy.

    I’m extremely happy to announce that Randy Barlow has joined the Fedora Engineering team, effective today. Randy was previously a member of Red Hat’s team working on technologies like Pulp. You can find his fingerprints in many upstream repositories as a frequent contributor. This is fortunate for our team, since Randy will be contributing both to our applications infrastructure and to our release team. His experience with Pulp may come in handy too, since it may play a part in making next-generation Fedora content available to the public. I hope the Fedora community will join me in welcoming Randy!

    Fedora Engineering team opening, April 2016.

    Posted by Paul Frields on April 08, 2016 09:45 PM

    The post Fedora Engineering team opening, April 2016. appeared first on The Grand Fallacy.

    My day job, as you probably know, is at Red Hat, where I manage the Fedora Engineering team. Our team provides engineering and design services for the Fedora Project, a collaborative community project which is the upstream source for a number of influential products. Not the least of these is Red Hat Enterprise Linux, of course.

    We now have a job opening for a senior engineer in the USA to be part of our team.

    About the job

    Collaborating with the rest of the team, the community at large, and your colleagues in Red Hat, you’ll:

    • Build tools and web services that will in turn build a more modular next-generation Fedora
    • Do full-stack development, with a hand in everything from working with designers, to architecting and writing code, to deploying in test/stage/production, to maintenance and enhancement
    • Automate, automate, automate all the things
    • Co-create and develop our next community engagement system, Fedora Hubs
    • Stay tuned into exciting work going on throughout Red Hat related to platform and other technologies
    • As with all Fedora Engineering jobs, communicate openly and continually with the whole community, and build community around everything you do using open source best practices

    About the Fedora Engineering team

    Our team uses a lot of Python (flask and sqlalchemy currently rank highly), and we’ll be expanding outward in the future where it makes sense. We create code upstream that is widely consumable beyond just Fedora, and we deploy our work on both Red Hat Enterprise Linux and Fedora.

    We do that work openly: collaboration via git repositories, rapid and constant communication via IRC, frequent discussion through our mailing lists, and gathering and building community around our work. Simply put, we love open.

    Who we’re looking for

    The right candidate is a team player, positive and constructive, fully engaged and passionately committed to delivering results with their colleagues, wherever they might be. This is a senior position, so we’re looking for someone who’s a proactive, detail-oriented self-starter, and has the ability to lead from the side on a few projects. We want you to be really good at Python and being Pythonic, and hopefully have some similar high-level language experience like Ruby or Java where you can help us grow, too.

    Not interested in working in a Red Hat office? No sweat. We’re a distributed team, and this job is perfect for an experienced remote candidate. Remote doesn’t mean aloof, though. You need to be somewhat of a people person to crush this job, because good open source means collaboration. We meet up at least once a year as a team for Flock, but we work together constantly online.

    I would be remiss in not pointing out this position back-fills some mighty big shoes. That’s why I’ve outlined some ideas for who we’re looking for in this post. That being said, you’ll also be working for a manager who’s a super guy. Just check the sidebar of this blog for proof. In all seriousness though, we all care about our teammates’ success, so you’re never on your own when you need a hand.

    And don’t forget, Red Hat is consistently rated one of the best and most innovative places to work.

    Enough already, where do I apply?

    Does this job sound interesting to you? Make sure you read the full description of the job, and then apply online.

    Contributing to Pagure made easy.

    Posted by Paul Frields on April 05, 2016 12:24 AM

    The post Contributing to Pagure made easy. appeared first on The Grand Fallacy.

    I don’t get as much of a chance these days to do things like patches or other technical contribution. But I had some time free the other day and used it to stick my hands directly into a cool project, Pagure (pronounced roughly “pag-yOOR,” or listen here). You may have read about Pagure in this Fedora Magazine article a few months back.

    It was tremendously easy to get Pagure, fix a bug, test the fix, and contribute it back (see my pull request here). I specifically looked for “easyfix” bugs, since I knew I didn’t have a lot of time to spend to help. So I decided to work on this issue, a button showing up when it shouldn’t.

    First I forked the repo in Pagure. Then I cloned my new fork, and set it up as documented in the README. In the clone, I checked out a new branch using the issue number as a name, issue839.

    To track down the fix, I ran the app locally and duplicated the error. I looked at the CSS style containers for the button and some of the surrounding elements. Using that information, I did a text search on the code to find the file that was generating the button. Then I simply applied some logic to find the fix for the problem, even though I wasn’t really familiar with the code involved.

    Thankfully Pagure has a test suite, so I could also check that my fix didn’t break any of the tests. Then I committed and pushed my changes, and made a pull request using the button in Pagure’s web page.

    I also learned something useful. Since I had forked the repo to do my fix and make a pull request, when I force-pushed changes to the branch from which I made the pull request, the pull request was automatically updated with the changes! This is probably expected by people who do this all the time, but since I’m new at it, I was excited.

    Bugs like this are something that just about anyone with a small amount of beginning programming skill could fix. Pagure even has other bugs like this waiting for people to handle. Maybe one of them is waiting for you!

    Protected: Freewrite 2015-05-13

    Posted by Toshio Kuratomi on May 13, 2015 01:33 PM

    This post is password protected. You must visit the website and enter the password to continue reading.


    The most important Flock Talk

    Posted by Toshio Kuratomi on May 05, 2015 04:09 PM

    I was just voting on FLOCK talks and happened upon this talk proposal:

    What does Red Hat Want?

    It’s no secret that many Fedora participants work for Red Hat. Or that Red Hat provides funding for the Fedora Infrastructure. There have been many conspiracy theories over the years centering on what, exactly, does Red Hat want out of Fedora in return. This talk, by the Red Hat VP who runs the RHEL engineering team, proposes to address that eternal question. What does Red Hat want? Join Denise Dumas to learn what Red Hat is working on next and how we would like to work with the Fedora community

    In my time working for Red Hat on Fedora I often found that the Fedora Community was operating in a vacuum.  We wanted to run a Linux Distribution that we had a stake in and a chance to modify to our needs.  We also knew that Red Hat invested a considerable amount of money into Fedora to support our being able to do that.  But what we were in the dark about was what Red Hat expected to get out of this partnership and what they wanted us to do to justify their continued investment.  Although over time we did get our hands dirty maintaining more of the packages that made up the distribution, in a lot of ways we never graduated beyond mricon’s 2004 tongue-in-cheek posting about Red Hat’s relationship to its community (and its own internal divisions at the time).

    In the last few years, Red Hat’s portfolio of products and future directions have greatly expanded.  No longer just a producer of a Linux distribution, Red Hat is pursuing revenue sources in application middleware, both IaaS and PaaS pieces of the cloud, and containers.  They also have engineers working on a multitude of open source solutions that enhance these basic products, adding flesh to the framework they set up.  But where does the Fedora Community fit into this expanded roster of technologies?  The Fedora Product has been very focused on “A Linux Distro” for a number of years but the Fedora Community is very broad and multi-talented.  I’m hoping that Denise’s talk will provide an entrypoint for Fedora Contributors to start talking about what new directions they can take the Project in that would align with Red Hat’s needs.  There’s a number of difficulties to work out (for instance, how does Fedora keep its identity while at the same time doing more work on these things that have traditionally been “Upstream Projects”) but we can’t even begin to solve those problems until we understand where our partner in this endeavour wants to go.

    Flock Ansible Talk

    Posted by Toshio Kuratomi on March 30, 2015 04:41 PM

    Hey Fedorans, I’m trying to come up with an Ansible Talk Proposal for FLOCK in Rochester. What would you like to hear about?

    Some ideas:

    * An intro to using ansible
    * An intro to ansible-playbook
    * Managing docker containers via ansible
    * Using Ansible to do X (help me choose a value for X 😉)
    * How to write your own ansible module
    * How does ansible transform a playbook task into a python script on the remote system

    Let me know what you’re interested in 🙂

    Hacky, hacky playbook to upgrade several Fedora machines

    Posted by Toshio Kuratomi on December 20, 2014 09:13 PM

    Normally I like to polish my code a bit before publishing but seeing as Fedora 21 is relatively new and a vacation is coming up which people might use as an opportunity to upgrade their home networks, I thought I’d throw this extremely *unpolished and kludgey* ansible playbook out there for others to experiment with:

    https://bitbucket.org/toshio/local-playbooks/src/3d3ae76a56034784ab60fcfa1129221c59a40f3b/provisioning/fedora-upgrade.yml?at=master

    When I recently looked at updating all of my systems to Fedora 21 I decided to try to be a little lighter on network bandwidth than I usually am (I’m on slow DSL and I have three kids all trying to stream video at the same time as I’m downloading packages). So I decided that I’d use a squid caching proxy to cache the packages that I was going to be installing since many of the packages would end up on all of my machines. I found a page on caching packages for use with mock and saw that there were a bunch of steps that I probably wouldn’t remember the next time I wanted to do this. So I opened up vim, translated the steps into an ansible playbook, and tried to run it.

    The first several times, it failed because there were unresolved dependencies in my package set (packages I’d built locally with outdated dependencies, packages that were no longer available in Fedora 21, etc.). Eventually I set the fedup steps to ignore errors so that the playbook would clean up all the configuration and I could fix the package dependency problems and then re-run the playbook immediately afterwards. I’ve now got it to the point where it will successfully run fedup in my environment and will cache many packages (something’s still using mirrors instead of the baseurl I specified sometimes, but I haven’t tracked that down yet; those packages are getting cached more than once).

    Anyhow, feel free to take a look, modify it to suit your needs, and let me know of any “bugs” that you find 🙂

    Things I’ll probably do to it for when I update to F22:

    Quick question for Fedora Planet

    Posted by Toshio Kuratomi on December 20, 2014 06:38 PM

    Hey Fedora Planet, quick survey: I’ve been doing some blogging on general python programming and on learning to use Ansible. Are either of these topics that you’d be interested in showing up on planet? Right now the feed I’m sending to planet only includes Fedora-specific posts, linux-specific, or “free software political” but I could easily change the feed to include the ansible or python programming posts if either of those are interesting to people.

    Leave me a comment or touch base with me on IRC: abadger1999 on freenode.

    A going away present

    Posted by Toshio Kuratomi on August 15, 2014 06:17 PM

    For those who haven’t heard through Flock or the rumor mill, today is my last day at Red Hat and also the beginning of a hiatus from working on Fedora. Since I’ve been asked this many times in the past few weeks, this is because I’ve become a bit burnt out having worked on Fedora as both my day job and my hobby for the past seven years. It’s time for me to pull back, let fresh faces fill the roles I held, and do something else for a while to add some spice and variety. I may come back to Fedora or to Red Hat in the future but at the moment I’m only looking far enough ahead to see that I need to go forth and have some new experiences.

    I do want to say thank you to all the wonderful people who have worked not just to make the Fedora distribution a solid piece of software but also filled Fedora with friendly faces and kind words. Truly, although I’m physically far removed from the rest of you, you are my neighbors, my community, and my friends. Even though I’m stepping away from working on Fedora, I hope to keep in touch with you via IRC for many many years.

    I’d also like to announce that I woke up this morning to find that I’d been made the gatekeeper for a new Fedora Badge. As the badge submitter describes it:

    Dancing with Toshio

    I dream of a future where Toshio could fully express his techniques with the complicity and trust of many dance partners, responding to his moves and being pushed forward by him in the arts of dancing; exchanging, learning, growing as a vibrant community.

    -Aurelien Bompard

    Taking away the specifics of dancing and myself, this is my hope for everyone who participates in Fedora: to be able to grow in sympathy with a larger community.

    With that in mind, if we’ve danced together and you would like this badge, please contact me (abadger1999 on IRC, toshio@fedoraproject.org via email). I can’t remember everyone’s FAS usernames but I’m extremely happy to award you the badge if you remind me what it is 🙂

    Trip report from Flock 2014

    Posted by Jared Smith on August 12, 2014 04:43 PM

    I recently returned home from Prague, where I attended the Flock conference.  In its second year, the Flock conference is a gathering of free software developers, most of whom work in the Fedora community.  Rather than give a blow-by-blow account of every talk I attended and every conversation I had (which would be exhausting), I’ll instead focus on the highlights of the conference.

    Location, Venue, and Accommodations

    I was very impressed with the location of the conference.  The university was within a five minute walk of the hotel, and close to several convenient tram and metro stops.  The classrooms were well furnished with power connections and comfortable seats, and the larger auditoriums were big enough to handle a big crowd.  The hotel was very nice as well — the lobby was spacious, which made for lots of impromptu meetings and hanging out.  Getting from the airport to the hotel was super-easy as well, as was the return trip.  Also, the cafeteria where we had lunch was exceptional — the food was delicious, and the location couldn’t have been more perfect.

    Themes

There were several themes that resonated with me as I attended the conference.  The first was around the changes to the Fedora release products (collectively referred to as Fedora.Next) in Fedora 21 and future releases.  Whereas at last year’s Flock conference there was a lot of apprehension and negativity about some of the proposed changes, this year I noticed a remarkably more upbeat attitude toward them.  There was a lot of great discussion around how to get the technical work done that’s needed to make Fedora 21 (and 22, and so on) a success.

    The next theme that resonated with me was documentation.  Maybe it’s because I was giving a talk on documentation, but I felt there was a lot more interest and cohesion around doing a better job of documenting Fedora than I saw at last year’s conference.  Both my talk (on Docbook and Publican) and Jaromir’s talk on Mallard were packed, and the two documentation workshops were very well attended as well.  At one point during Friday’s workshops, I counted 22 people (besides myself) in the room working on Docs.  We also had several new people dive right in and start working on writing documentation, so that was great to see as well.

    The third theme that I focused on was ARM processors.  The support in Fedora for ARM has grown tremendously over the past couple of years.  Peter Robinson’s “ARM State of the Union” talk showed just how far support for ARM has come — both in 32-bit ARM as a primary architecture and with 64-bit ARM as a secondary arch.  The ARM workshop on Saturday was great too — I was able to confirm that as of the 3.16 kernel, we now support the Plat’home OpenBlocks AX3 and Mirabox as two more Marvell Armada-based devices that will work great in Fedora 21.  (They both require appending the .dtb to the kernel, but other than that, they seem to be working great.)

    Last but not least, it was great to have a lot of hallway discussions with friends and colleagues.  I had too many discussions to be able to remember them all, let alone discuss them here on my blog, but I thoroughly enjoyed catching up with many old friends and making some new ones as well.  I always look forward to opportunities to rub shoulders with so many of the fantastic people that make the Fedora community great.

    Thanks

    Thanks to Ruth and Spot and Josh and Miro and all the other folks who worked hard to organize the conference.  Thanks to Red Hat for sponsoring my flight, and thanks to my employer, Bluehost, for sponsoring the conference and allowing me the opportunity to be in Prague for the conference.  Also, thanks to each one of the presenters for making Flock 2014 a great conference.

    Hacking a Wifi Kettle

    Posted by Mark Cox on February 23, 2014 07:20 PM

    Here is a quick writeup of the protocol for the iKettle taken from my Google+ post earlier this month. This protocol allows you to write your own software to control your iKettle or get notifications from it, so you can integrate it into your desktop or existing home automation system.

The iKettle is advertised as the first wifi kettle, available in the UK since February 2014. I bought mine on pre-order back in October 2013. When you first turn on the kettle it acts as a wifi hotspot, and they supply an app for Android and iPhone that reconfigures the kettle to connect to your local wifi hotspot instead. The app then communicates with the kettle on your local network, enabling you to turn it on, set some temperature options, and get a notification when it has boiled.

Once connected to your local network the device responds to ping requests and listens on two TCP ports, 23 and 2000. The wifi connectivity is provided by a third-party serial-to-wifi interface board that responds similarly to an HLK-WIFI-M03. Port 23 is used to configure the wifi board itself (to tell it what network to connect to and so on). Port 2000 is passed through to the processor in the iKettle to handle the main interface to the kettle.

    Port 2000, main kettle interface

The iKettle wifi interface listens on TCP port 2000; all devices that connect to port 2000 share the same interface and therefore receive the same messages. The specification for the wifi serial board states that the device can only handle a few connections to this port at a time. The iKettle app also uses this port to do the initial discovery of the kettle on your network.

    Discovery

Sending the string "HELLOKETTLE\n" to port 2000 will return "HELLOAPP\n". You can use this to check you are talking to a kettle (and if the kettle has moved address due to DHCP you could scan the entire local network looking for devices that respond in this way). You might receive other HELLOAPP commands at later points as other apps on the network connect to the kettle.
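
As a rough illustration (mine, not from the original post), here is a minimal Python sketch of that handshake; the kettle's address below is a placeholder for whatever DHCP handed your unit:

import socket

KETTLE_ADDR = ("192.168.1.50", 2000)  # placeholder IP; port 2000 as described above

def is_ikettle(addr, timeout=2.0):
    # Send the discovery string and check for the expected reply.
    try:
        with socket.create_connection(addr, timeout=timeout) as sock:
            sock.sendall(b"HELLOKETTLE\n")
            return sock.recv(64).startswith(b"HELLOAPP")
    except OSError:
        return False

print("iKettle found" if is_ikettle(KETTLE_ADDR) else "no kettle at that address")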

    Initial Status

Once connected you need to figure out if the kettle is currently doing anything, as you will have missed any previous status messages. To do this you send the string "get sys status\n". The kettle will respond with the string "sys status key=\n" or "sys status key=X\n" where X is a single character. Bitfields in character X tell you what buttons are currently active:

Bit 6    Bit 5    Bit 4    Bit 3    Bit 2    Bit 1
100C     95C      80C      65C      Warm     On

    So, for example if you receive "sys status key=!" then buttons "100C" and "On" are currently active (and the kettle is therefore turned on and heating up to 100C).
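
A small sketch (not from the post) of decoding that key character, using the bit assignments from the table above; bit 1 is the least significant bit of the character's value:

BUTTONS = {6: "100C", 5: "95C", 4: "80C", 3: "65C", 2: "Warm", 1: "On"}

def decode_status_key(key_char):
    # Empty key ("sys status key=") means no buttons are active.
    if not key_char:
        return []
    value = ord(key_char)
    return [name for bit, name in BUTTONS.items() if value & (1 << (bit - 1))]

# The example above: '!' (0x21) decodes to the 100C and On buttons.
assert sorted(decode_status_key("!")) == ["100C", "On"]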

    Status messages

As the state of the kettle changes, either by someone pushing the physical button on the unit, using an app, or sending the command directly, you will get asynchronous status messages. Note that although the status messages start with "0x" they are not really hex. Here are all the messages you could see:

sys status 0x100     100C selected
sys status 0x95      95C selected
sys status 0x80      80C selected
sys status 0x100     65C selected
sys status 0x11      Warm selected
sys status 0x10      Warm has ended
sys status 0x5       Turned on
sys status 0x0       Turned off
sys status 0x8005    Warm length is 5 minutes
sys status 0x8010    Warm length is 10 minutes
sys status 0x8020    Warm length is 20 minutes
sys status 0x3       Reached temperature
sys status 0x2       Problem (boiled dry?)
sys status 0x1       Kettle was removed (whilst on)

    You can receive multiple status messages given one action, for example if you turn the kettle on you should get a "sys status 0x5" and a "sys status 0x100" showing the "on" and "100C" buttons are selected. When the kettle boils and turns off you'd get a "sys status 0x3" to notify you it boiled, followed by a "sys status 0x0" to indicate all the buttons are now off.

    Sending an action

To send an action to the kettle you send one or more action messages corresponding to the physical keys on the unit. After sending an action you'll get status messages to confirm them (a short example sketch follows the table below).

set sys output 0x80      Select 100C button
set sys output 0x2       Select 95C button
set sys output 0x4000    Select 80C button
set sys output 0x200     Select 65C button
set sys output 0x8       Select Warm button
set sys output 0x8005    Warm option is 5 mins
set sys output 0x8010    Warm option is 10 mins
set sys output 0x8020    Warm option is 20 mins
set sys output 0x4       Select On button
set sys output 0x0       Turn off
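
Putting the two tables together, here is a rough sketch (again mine, not the app's code) that selects 100C, presses On, and then prints whatever status messages come back; the address is a placeholder:

import socket

KETTLE_ADDR = ("192.168.1.50", 2000)  # placeholder IP

def boil(addr):
    with socket.create_connection(addr, timeout=5.0) as sock:
        sock.sendall(b"set sys output 0x80\n")  # select the 100C button
        sock.sendall(b"set sys output 0x4\n")   # select the On button
        sock.settimeout(2.0)
        try:
            while True:
                data = sock.recv(256)
                if not data:
                    break
                for line in data.decode(errors="replace").splitlines():
                    print(line)  # expect e.g. "sys status 0x100" and "sys status 0x5"
        except socket.timeout:
            pass  # no more status messages for now

boil(KETTLE_ADDR)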

    Port 23, wifi interface

The user manual for this wifi board is available online, so there's no need to repeat its contents here. The iKettle ships with the board's default password of "000000" and with the web interface disabled.

If you're interested in looking at the web interface you can enable it by connecting to port 23 using telnet or nc, entering the password, then issuing the commands "AT+WEBS=1\n" then "AT+PMTF\n" then "AT+Z\n"; after that you can open up the web server on port 80 of the kettle and change or review the settings. I would not recommend messing around with this interface: you could easily break the iKettle in a way that you can't easily fix. The interface gives you the option of uploading new firmware, but if you do this you could get into a state where the kettle processor can't correctly configure the interface and you're left with a broken kettle. Also, the firmware is just for the wifi serial interface, not for the kettle control (the port 2000 interface above), so there probably isn't much point.

    Missing functions

    The kettle processor knows the temperature but it doesn't expose that in any status message. I did try brute forcing the port 2000 interface using combinations of words in the dictionary, but I found no hidden features (and the folks behind the kettle confirmed there is no temperature read out). This is a shame since you could combine the temperature reading with time and figure out how full the kettle is whilst it is heating up. Hopefully they'll address this in a future revision.

    Security Implications

The iKettle is designed to be contacted only through the local network; you don't want to be port forwarding to it through your firewall, for example, because the wifi serial interface is easily crashed by too many connections or bad packets. If you have access to a local network with an iKettle on it you can certainly cause mischief by boiling the kettle, resetting it to factory settings, and probably even bricking it forever. However, the cleverly designed segmentation between the kettle control and the wifi interface means it's pretty unlikely you could do something more serious like overriding the safety features (i.e. keeping the kettle element on until something physically breaks).

    Rest, my friend, the next five years are ours to pass along your wisdom

    Posted by Toshio Kuratomi on January 10, 2014 08:34 AM

    Just installed a new system and was having ssh connections timeout. Then I remembered talking about this same issue last year on IRC. The anecdote is amusing so I figured I would post the logs:

    [Mon April 22 2013] * abadger1999 wishes he knew why his ssh connections to infra keep on hanging.
    [Mon April 22 2013] <abadger1999> it’s a timeout of some sort… I just don’t know what.
    [Mon April 22 2013] <skvidal> abadger1999: did you reinstall recently?
    [Mon April 22 2013] <abadger1999> skvidal: nope
    [Mon April 22 2013] <abadger1999> skvidal: would that help?
    [Mon April 22 2013] * abadger1999 still on f17
    [Mon April 22 2013] <skvidal> I have found I often need to set
    [Mon April 22 2013] <skvidal> net.ipv4.tcp_keepalive_time = 300
    [Mon April 22 2013] <skvidal> in /etc/sysctl.conf
    [Mon April 22 2013] <skvidal> to not get timeouts
    [Mon April 22 2013] <abadger1999> Thanks. I’ll try that .

    […]

    [Wed April 24 2013] <abadger1999> skvidal: btw, your sysctl recipe seems to have fixd my ssh timeout issues. Thanks!
    [Wed April 24 2013] <skvidal> abadger1999: 🙂
    [Wed April 24 2013] <skvidal> abadger1999: last time it happened to me I had to google for the solution
    [Wed April 24 2013] <skvidal> abadger1999: and I found a post from myself from 5yrs earlier
    [Wed April 24 2013] <skvidal> abadger1999: _that_ is kinda freaky
    [Wed April 24 2013] <pingou> isn’t that what blog are for? 🙂
    [Wed April 24 2013] <dwa> nice
    [Wed April 24 2013] <abadger1999> Cool 🙂
    [Wed April 24 2013] <skvidal> “wow, this dude knew what was going on…. but he sure writes like he’s an ass”
    [Wed April 24 2013] <skvidal> “oh….. wait”

    https://lists.dulug.duke.edu/pipermail/dulug/2007-July/010956.html

    https://lists.dulug.duke.edu/pipermail/dulug/2003-August/007359.html


    Picture of Seth from 2005 looking into the distance

    Seth, you were more of a teddy bear than an ass.

Supporting a valuable cultural effort in the Iranian Linux community

    Posted by Mostafa Daneshvar on December 12, 2013 10:22 AM

Those who are part of Iran's GNU/Linux world know Behnam Tavakoli well. He runs the Seetoo (سی‌تو) GNU/Linux center, which has a hand in preparing and distributing all kinds of GNU/Linux distributions. For some time now he has been running a campaign to produce a magazine called LinuxMag, a big step toward spreading GNU culture.

For my part, I thank him for the work he has done so far. I ask all the friends reading this post to help this magazine get off to a good start by contributing to it.

LinuxMag (image)

You can support this effort by contributing at least 24,000 tomans. If you run a company or have deeper pockets, don't hesitate.

To support LinuxMag, please visit this link.

If you have any questions about it, you can check the related list of frequently asked questions.

Thanks,

Mostafa Daneshvar - Fedora

    git commit doesn’t commit? (GitPython bug)

    Posted by Toshio Kuratomi on November 04, 2013 08:30 PM

Mostly posting this to remind myself of the fix the next time I run into this, but this might help some other people as well.

    Every once in a while I’ll be working on a git repo in the fedora packages repository and when I git commit -a it, I’ll end up with an empty commit and the files with changes aren’t actually committed. Other intuitive variations of this like git add FILE && git commit have the same buggy behaviour.

    The reason this is occurring has something to do with the GitPython library which is used by fedpkg to add some changes to your clone of the git repo when you add new source files. It’s somehow changing the index in a way that causes this behaviour. To get out of this there’s a few simple but non-intuitive things you can try:

    git reset FILE && git add FILE
    
    git stash && git stash pop

    After running one of those pairs of commands you should once more be able to git commit -a.

    Details in this GitPython bug report

    rsync unbundles zlib!

    Posted by Toshio Kuratomi on August 07, 2013 11:17 PM

    Over a year ago I mentioned that the code that rsync needed in order to start using vanilla zlib was finally on its way to being merged.  And today, we’ve finally built an rsync package that completes that saga.

Sad news for the Fedora community

    Posted by Mostafa Daneshvar on July 09, 2013 08:26 PM

Some very sad news reached me today. Robyn, the Fedora team leader, announced on the mailing list the death of one of the key members of the Fedora and open source community. Seth Vidal, the main developer of yum, the package installer used by systems such as Fedora and Red Hat, died in an accident. I checked the news with one of my Iranian friends at Red Hat; unfortunately, it was true.

(photo of Seth)

Seth Vidal was riding his bicycle when he was hit from behind by a car and badly injured. The driver fled the scene after the collision. According to the news so far, the driver has not yet been caught.

Seth was one of the most active people in the user community and a developer of free and open source software. When, for various reasons, we had problems with Red Hat, he supported us and the Iranian Fedora friends with all his strength. That was the beginning of our friendship of several years. Throughout that time I kept up with news of him, from a distance, through our mutual Iranian friend at Red Hat.

I offer my condolences to the Fedora community on the loss of this valuable person.

    Why Fedora Project Contributor Agreement (FPCA) does no harm

    Posted by Marcel Ribeiro Dantas on June 27, 2013 03:00 PM
The main reason for this post is to prepare Fedora Ambassadors and Contributors for attempts by others to claim that the FPCA is as bad as Canonical's CLA and that our legal document can do as much harm as theirs. Recently, many posts around the web have spoken of the possible trap Canonical may be setting up. They released Mir, a computer display server for GNU/Linux, under the terms of a copyleft license, the GNU GPLv3, which was expected to be well received by the community. Choosing the GNU GPLv3 as the license of your software makes it clear that you really care about freedom, since GNU GPLv3 is a very effective license against DRM, software patents and attempts to close the source code of the software...

    GNU GPLvX or any later version

    Posted by Marcel Ribeiro Dantas on June 27, 2013 03:00 PM
When it comes to releasing your software's source code, it's extremely important to choose the best available license very carefully, in order to make sure your source code will be taken care of, used, distributed and modified in the way you expect it to be. Most free software is released under a GNU license, and a large number of those projects under the GNU General Public License (about 68% of the projects listed on SourceForge.net, as of January 2006)...

    Free Software is about USERS

    Posted by Marcel Ribeiro Dantas on June 27, 2013 03:00 PM
    Since the Free Software Movement was created by the hands of software developers, people nowadays tend to think it's a movement from developers to developers, or from technical people to technical people. Unfortunately, it's a common misconception that has spread all over the community and leads people to believe the four essential freedoms are aimed only at software developers...

    MirrorManager 1.4 now in production in Fedora Infrastructure

    Posted by Matt Domsch on June 20, 2013 01:39 AM

After nearly 3 years of on-again/off-again development, MirrorManager 1.4 is now live in the Fedora Infrastructure, happily serving mirrorlists to yum and directing Fedora users to their favorite ISOs, just in time for the Fedora 19 freeze.

Kudos go out to Kevin Fenzi, Seth Vidal, Stephen Smoogen, Toshio Kuratomi, Pierre-Yves Chibon, Patrick Uiterwijk, Adrian Reber, and Johan Cwiklinski for their assistance in making this happen.  Special thanks to Seth for moving the mirrorlist-serving processes to their own servers where they can’t harm other FI applications, and to Smooge, Kevin and Patrick, who gave up a lot of their Father’s Day weekend (both days and nights) to help find and fix latent bugs uncovered in production.

What does this bring the average Fedora user?  Not a lot: more stability (fewer failures with yum retrieving the mirror lists; not that there were many, but the number was nonzero) and a list of public mirrors where the versions are sorted in numerical order.

    What does this bring to a Fedora mirror administrator?  A few new tricks:

    • Mirror admins have been able to specify their own Autonomous System Number for several years.  Clients on the same AS get directed to that mirror.  MM 1.4 adds the ability for mirror admins to request additional “peer ASNs” – particularly helpful for mirrors located at a peering point (say, Hawaii), where listing lots of netblocks instead is unwieldy.  As this has the potential to be slightly dangerous (no, you can’t request ALL ASNs be sent your way), ask a Fedora sysadmin if you want to use this new feature – we can help you.
• Multiple mirrors claiming the same netblock, or overlapping netblocks, used to be returned to clients in random order.  Now they are returned in ascending netblock size order (a small sketch of this ordering follows the list below).  This lets an organization that has a private mirror, and their upstream ISP, both have a mirror, with most requests sent to the private mirror first, falling back to the ISP’s mirror.  This should save some bandwidth for the organization.
• If you provide rsync URLs, you’ll see reduced load from the MM crawler, as it will now use rsync to retrieve your content listing rather than a ton of HTTP or FTP requests.
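
A rough sketch of that netblock ordering (my own illustration, not MirrorManager code), where a mirror on a smaller netblock sorts ahead of one on the wider netblock that contains it:

import ipaddress

# Hypothetical data: an organization's private mirror on its own /24 and the
# upstream ISP's mirror on the surrounding /16.
mirrors = [
    ("isp-mirror", ipaddress.ip_network("10.0.0.0/16")),
    ("org-mirror", ipaddress.ip_network("10.0.4.0/24")),
]

def order_for_client(client_ip, mirrors):
    # Keep only mirrors whose netblock contains the client, smallest block first.
    client = ipaddress.ip_address(client_ip)
    matching = [(name, net) for name, net in mirrors if client in net]
    return sorted(matching, key=lambda item: item[1].num_addresses)

# A client inside the /24 is offered the private mirror first, then the ISP's.
print(order_for_client("10.0.4.17", mirrors))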

    What does this bring Fedora Infrastructure (or anyone else running MirrorManager)?

• Reduced memory usage in the mirrorlist servers.  Especially given how bad Python is at memory management on x86_64 (e.g. reading in a 12MB pickle file blows memory usage up from 4MB to 120MB), this is critical.  This directly impacts the number of simultaneous users that can be served, the response latency, and the CPU overhead too; it’s a win-win-win-win.
• An improved admin interface, replacing hand-coded pages that looked like they could have been served by BBS software on my Commodore 64 with something modern, more usable, and less error prone.
    • Code specifically intended for use by Debian/Ubuntu and CentOS communities, should they decide to use MM in the future.
    • A new method to upgrade database schemas – saner than SQLObject’s method.  This should make me less scared to make schema changes in the future to support new features.  (yes, we’re still using SQLObject – if it’s not completely broken, don’t fix it…)
    • Map generation moved to a separate subpackage, to avoid the dependency on 165MB of  python-basemap and python-basemap-data packages on all servers.

    MM 1.4 is a good step forward, and hopefully I’ve laid the groundwork to make it easier to improve in the future.  I’m excited that more of the Fedora Infrastructure team has learned (the hard way) the internals of MM, so I’ll have additional help going forward too.

    Enterprise Linux 6.3 to 6.4 risk report

    Posted by Mark Cox on February 27, 2013 01:03 PM
    You can read my Enterprise Linux 6.3 to 6.4 risk report on the Red Hat Security Blog.

    "for all packages, from release of 6.3 up to and including 6.4, we shipped 108 advisories to address 311 vulnerabilities. 18 advisories were rated critical, 28 were important, and the remaining 62 were moderate and low."

    "Updates to correct 77 of the 78 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public. The other one was in OpenJDK 1.60 where the update took 4 calendar days (over a weekend)."

    And if you are interested in how the figures were calculated, here is the working out:

Note that we can't just use a date range because we pushed some RHSAs in the weeks before 6.4 that were not included in the 6.4 spin. These issues will get included when we do the 6.4 to 6.5 report (as anyone installing 6.4 will have got them when they first updated).

    So just after 6.4 before anything else was pushed that day:

    ** Product: Red Hat Enterprise Linux 6 server (all packages)
    ** Dates: 20101110 - 20130221 (835 days)
    ** 397 advisories (C=55 I=109 L=47 M=186 )
    ** 1151 vulnerabilities (C=198 I=185 L=279 M=489 )
    
    ** Product: Red Hat Enterprise Linux 6 Server (default installation packages)
    ** Dates: 20101110 - 20130221 (835 days)
    ** 177 advisories (C=11 I=71 L=19 M=76 )
    ** 579 vulnerabilities (C=35 I=133 L=159 M=252 )

    And we need to exclude errata released before 2013-02-21 but not in 6.4:

    RHSA-2013:0273 [critical, default]
    RHSA-2013:0275 [important, not default]
    RHSA-2013:0272 [critical, not default]
    RHSA-2013:0271 [critical, not default]
    RHSA-2013:0270 [moderate, not default]
    RHSA-2013:0269 [moderate, not default]
    RHSA-2013:0250 [moderate, default]
    RHSA-2013:0247 [important, not default]
    RHSA-2013:0245 [critical, default]
    RHSA-2013:0219 [moderate, default]
    RHSA-2013:0216 [important, default]
    
    Default vulns from above: critical:12 important:2 moderate:16 low:3
    Non-Default vulns from above: critical:4 important:2 moderate:5 low:0

    This gives us "Fixed between GA and 6.4 iso":

    ** Product: Red Hat Enterprise Linux 6 server (all packages)
    ** Dates: 20101110 - 20130221 (835 days)
    ** 386 advisories (C=51 I=106 L=47 M=182 )
    ** 1107 vulnerabilities (C=182 I=181 L=276 M=468 )
    
    ** Product: Red Hat Enterprise Linux 6 Server (default installation packages)
    ** Dates: 20101110 - 20130221 (835 days)
    ** 172 advisories (C=9 I=70 L=19 M=74 )
    ** 546 vulnerabilities (C=23 I=131 L=156 M=236 )

    And taken from the last report "Fixed between GA and 6.3 iso":

    ** Product: Red Hat Enterprise Linux 6 server (all packages)
    ** Dates: 20101110 - 20120620 (589 days)
    ** 278 advisories (C=33 I=78 L=31 M=136 )
    ** 796 vulnerabilities (C=104 I=140 L=196 M=356 )
    
    ** Product: Red Hat Enterprise Linux 6 Server (default installation packages)
    ** Dates: 20101110 - 20120620 (589 days)
    ** 134 advisories (C=6 I=56 L=15 M=57 )
    ** 438 vulnerabilities (C=16 I=110 L=126 M=186 )

    Therefore between 6.3 iso and 6.4 iso:

    ** Product: Red Hat Enterprise Linux 6 server (all packages)
    ** Dates: 20120621 - 20130221 (246 days)
    ** 108 advisories (C=18 I=28 L=16 M=46 )
    ** 311 vulnerabilities (C=78 I=41 L=80 M=112 )
    
    ** Product: Red Hat Enterprise Linux 6 Server (default installation packages)
    ** Dates: 20120621 - 20130221 (246 days)
    ** 38 advisories (C=3 I=14 L=4 M=17 )
    ** 108 vulnerabilities (C=7 I=21 L=30 M=50 )
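
As a quick sanity check (my own, not part of the original report), the "between 6.3 iso and 6.4 iso" figures for all packages are simply the "GA to 6.4" totals minus the "GA to 6.3" totals:

ga_to_64 = {"advisories": {"C": 51, "I": 106, "L": 47, "M": 182},
            "vulns":      {"C": 182, "I": 181, "L": 276, "M": 468}}
ga_to_63 = {"advisories": {"C": 33, "I": 78, "L": 31, "M": 136},
            "vulns":      {"C": 104, "I": 140, "L": 196, "M": 356}}

between = {kind: {sev: ga_to_64[kind][sev] - ga_to_63[kind][sev] for sev in counts}
           for kind, counts in ga_to_64.items()}

print(between)  # advisories C=18 I=28 L=16 M=46; vulnerabilities C=78 I=41 L=80 M=112
assert sum(between["advisories"].values()) == 108
assert sum(between["vulns"].values()) == 311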

Note: although we have 3 default criticals, they are in openjdk-1.6.0, but we only call Java issues critical if they can be exploited via a browser, and in RHEL6 the Java browser plugin is in the icedtea-web package, which isn't a default package. So on a default install you don't get Java plugins running in your browser, which means these really aren't criticals on a default RHEL6 install at all.

    Security FAD

    Posted by Toshio Kuratomi on November 26, 2012 07:19 PM

All packed up and waiting for my plane to Raleigh. Going there to work on enabling two-factor authentication for the hosts that give shell access inside of Fedora’s Infrastructure. For the first round, I think we’re planning on going for something simple and minimal to show what we can do. Briefly, the simplest, most minimal setup is:

    * Server to verify a one time password (we already have one for yubikeys)
* CGI to take a username, password, and otp and verify them against fas and the otp server (roughly sketched in code after this list)
    * pam module for sudo that verifies the user via the cgi
    * database to store the secret keys for the otp generation and associate them with the fas username
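
A very rough sketch of the verification flow described in the list above (not the actual FAD code; every URL and parameter name here is a hypothetical placeholder):

import json
import urllib.parse
import urllib.request

ACCOUNT_CHECK_URL = "https://accounts.example.org/verify"  # hypothetical endpoint
OTP_CHECK_URL = "https://otp.example.org/verify"           # hypothetical endpoint

def _post_ok(url, fields):
    # POST form fields and expect a JSON body like {"ok": true} back.
    data = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return bool(json.load(resp).get("ok"))

def verify(username, password, otp):
    # Both factors must pass: the password against the account system and the
    # one time password against the OTP server.
    password_ok = _post_ok(ACCOUNT_CHECK_URL, {"user": username, "password": password})
    otp_ok = _post_ok(OTP_CHECK_URL, {"user": username, "otp": otp})
    return password_ok and otp_ok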

    We’re hoping to go a little beyond the minimal at the FAD:

    * Have a web frontend to configure the secret keys that are stored for an account.
    * Presently we’re thinking that this is a FAS frontend but we may end up re-evaluating this depending on what we decide to do for web apps and what to require for changing an auth source.
    * Allow both yubikey and google-authenticator as otp

    I’m also hoping that since we’ll have most of the sysadmin side of infrastructure present that we’ll get a chance to discuss and write down a few OTP policies for the future:

    * Do we want to make two-factor optional for some people and required for others?
    * How many auth sources do we require in order to change a separate auth source (email address, password, secret for otp generation, phone number, gpg key, etc)?

    If we manage to get through all of that work, there’s a few other things we could work on as well:

    * Design and implement OTP for our web apps

    When is an SRPM not Architecture-neutral?

    Posted by Chris Tyler on November 24, 2012 03:13 AM

Source RPM packages -- SRPMs -- have an architecture of "src". In other words, a source RPM is a source RPM, with no architecture associated with it. There's an assumption that the package is architecture-neutral in source form, and only becomes architecture-specific when built into a binary RPM (unless it builds into a "noarch" RPM, which is the case with scripts, fonts, graphics, and data files).

    An SRPM contains source code (typically a tarball, and sometimes patch files) and a spec file which serves as manifest and build-recipe, plus metadata generated from the spec file when the SRPM is built -- including dependencies (which, unlike binary RPMs, are actually the build dependencies).

However, the build dependencies may vary by platform. If package foo is built against bar and baz, and baz exists on some architectures but not others, then the spec file may be written to build without baz (and the accompanying features that baz enables) on those architectures. The corresponding BuildRequires lines will also be made conditional on the architecture -- and this makes total sense. However, querying an SRPM on a given platform may give incorrect build dependency information for that platform if the SRPM was built on another platform -- and only rebuilding the SRPM on the target arch will correct the rpm metadata (and possibly render it incorrect for other platforms). Thus, I've come to realize, SRPMs are not truly architecture-neutral -- and I'm not sure if all our tools take this into consideration.
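
As a concrete way to see this (my own sketch, not from the post), query the dependencies recorded in an SRPM; whatever conditional BuildRequires were evaluated when the SRPM was built on its original arch are what you get back, regardless of the arch you query on. The filename is a placeholder.

import subprocess
import sys

def srpm_build_requires(srpm_path):
    # "rpm -qp --requires" on a .src.rpm prints its build dependencies, as
    # recorded at the time the SRPM itself was built.
    out = subprocess.run(["rpm", "-qp", "--requires", srpm_path],
                         check=True, capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for dep in srpm_build_requires(sys.argv[1]):  # e.g. foo-1.0-1.src.rpm (placeholder)
        print(dep)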

    Edit: I know that not all of our tools take this into consideration.




    Enterprise Linux 6.2 to 6.3 risk report

    Posted by Mark Cox on October 03, 2012 03:16 PM
    You can read my Enterprise Linux 6.2 to 6.3 risk report on the Red Hat Security Blog.
    "for all packages, from release of 6.2 up to and including 6.3, we shipped 88 advisories to address 233 vulnerabilities. 15 advisories were rated critical, 23 were important, and the remaining 50 were moderate and low."

    "Updates to correct 34 of the 36 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public. The Kerberos telnet flaw was fixed in 2 calendar days as the issue was published on Christmas day. The second PHP flaw took 4 calendar days (over a weekend) as the initial fix released upstream was incomplete."

    And if you are interested in how the figures were calculated, as always view the source of this blog entry.

    Fedora or Android?

    Posted by Mark Cox on September 28, 2012 02:48 PM
For a two-day trip I decided to test using my Android tablet instead of also taking a laptop, and it worked out okay for the most part.

I was booked to go to Red Hat HQ in Raleigh, NC at the start of August for a two-day business trip, or more accurately two days in the office and another two days of travelling. I'd usually take my trusty ThinkPad x201 on the trip with me; it's small and light, but its battery life isn't so great anymore. Earlier this year I'd bought an Android tablet, an ASUS Transformer Prime, which with its long battery life would be perfect for movies, but could it replace my ThinkPad completely and save me travelling with two devices? I worked through my requirements and it seemed plausible in theory, so here is how it stacked up in practice:

• Connectivity. In the UK you can only buy the Prime with the keyboard dock, and the keyboard dock is great. The built-in wifi was okay for the airport, hotel, and office. I carry a USB network adapter anyway just in case the hotel has a physical connection. The wifi signal on the Prime is terrible compared to other things (like a phone) though, so be prepared to walk around a bit to find the best signal. Partial Win.

• In-flight entertainment. I wanted something to watch movies on (as US Airways transatlantic don't yet have seat-back video, really!). The large internal memory meant I could store a few films in decent quality, and battery life wasn't a problem: I'd used the tablet continuously (without wifi) with the keyboard connected for 6 hours and wasn't even down to 50% battery. Hardware decoding of videos was a bit hit-and-miss though, and after trying a dozen apps only "BS Player" seemed to do a reasonable job. A couple of the movies I'd brought had low audio and I couldn't figure out a way to boost it enough to hear over the noise of the plane, even with decent in-ear noise-blocking headphones. Having the keyboard dock helped considerably, as with the tablet on the tray-table I could set a decent angle to watch a movie. Win.

    • Reading material. I had a few papers and magazines to read which I'd preloaded onto the tablet in PDF format. The Adobe PDF viewer is acceptable, but it seems a little sluggish for something running on a quad-core processor, and the screen resolution isn't really good enough for magazines. The new Transformer Infinity would help here. Partial Win.

    • Keeping in touch with home. The standard Android GMail app and Facebook app are okay, and I was able to use GMail talk to have video chats with my family from both the hotel and office. Win.

• Working. With just a couple of days away I figured all that was needed was the ability to read and send email and browse internal intranet pages. The standard VPN client on the Prime worked perfectly, and along with the Firefox beta app gave me perfect access to internal sites. For email I prefer command-line text-window clients anyway, so I just needed to be able to connect to a work machine. "Connectbot" on Android works well enough for ssh, and there are a few forked versions you can get that work with the Prime keyboard. The AndChat app works for irc. Win.

• Presentations. I was giving a presentation at a meeting, but fortunately they had a laptop set up with the projector, so I didn't need to worry about taking an HDMI lead and hoping it was a recent projector. Unexpectedly I needed to edit an existing OpenOffice presentation to remove a couple of slides and then convert it to PDF to send to another company. I had to ask a colleague to do it for me. There are apps that can view OpenOffice files, but no native OpenOffice suite for Android. I'd probably make sure I had access to a VNC server in the future and use a VNC client for anything like this. Fail.

• Privacy. My ThinkPad has full-disk encryption, but I didn't bother for Android as I wasn't going to be storing anything sensitive on the machine. My ThinkPad has a 3M privacy filter, which is great for airplanes and airports to stop people either side and behind you looking at your screen. The same filters do exist for Android, but are not as straightforward (it of course only works in one orientation and attaches like a screen protector, so it isn't the easiest thing to continuously take on and off, and it forces you to use your screen in portrait mode for everything). Fail.

    • Printing a boarding card. When it was time to return home I was able to use Firefox to check in online, and printing my boarding passes gave me a PDF file. I didn't have any printer apps set up, but it was easy enough to email a PDF to a colleague to print for me. Partial Win.

So in summary I think I got away with it; having just the tablet didn't stop me doing anything that had to be done on the trip, and I'll definitely do the same thing again in the future for very short trips. For anything more than a couple of days, or where connectivity might be an issue, I'd miss having a full-featured OS.

    Red Hat Security Blog

    Posted by Mark Cox on September 20, 2012 12:54 PM
    We now have an official Red Hat Security Blog, and you'll find all my future reports and discussions about security metrics there. In the meantime here are a few already published articles:

Fedora 18 "Spherical Cow" Alpha released

    Posted by Fedora Italia on September 19, 2012 07:56 AM

The alpha release of Fedora 18 "Spherical Cow" is ready to be tested! This release offers a preview of some of the best open source technologies currently under development. This new release has many really interesting new features, aimed at a very broad audience.

Below is a small taste of what awaits us in the future of Fedora 18.

For the Desktop

• NetworkManager hotspots improve the ability to use your wi-fi card and turn your computer into a hotspot.
• The redesigned installer adds flexibility to the installation process while simplifying the user interface.
• Desktop updates in abundance: Gnome 3.6, KDE Plasma Workspace 4.9, Xfce 4.10, Sugar 0.98, and the introduction of the MATE Desktop in Fedora.

For Sysadmins

• The Riak NoSQL database, a fault-tolerant and scalable database system, is included in Fedora 18 for the first time.
• Samba 4 adds support for SMB3 and for FreeIPA trusted domains.
• Offline system updates add support for installing operating system package updates at boot, giving system administrators a way to perform upgrades in a controlled manner.

For Developers

• The Python 3 stack is updated to version 3.3
• Rails is updated from version 3.0 to 3.2
• Perl 5.16 adds support for Unicode 6.1

Known issues and bugs

For anyone interested in testing the new Fedora 18, there are already a number of known problems and bugs. We list some of them below, but you can find the complete list at this address: http://fedoraproject.org/wiki/Common_F18_bugs

• Using automatic partitioning during installation will format all disks selected for installation without any further warning; ALL EXISTING DATA ON THE DISKS WILL BE LOST. At the moment there is no option to use the free space on the disks or to resize existing partitions. A workaround for this problem does exist, however.
• Some NVIDIA graphics cards have problems booting or displaying the login manager or the desktop. This prevents you from getting a usable desktop when booting the live image or an installed system. In these cases the login manager or the desktop may not appear at all, or it may appear but with the cursor missing or with display problems.

As mentioned above, this release introduces a new user interface for anaconda, which will significantly improve the end user's installation experience. Known issues with the new installation user interface include:

• For non-graphical installations, the root password must be set to enable login; for graphical installations, the first user must be set as an administrator. This is currently the default setup during installation.
• There are no anaconda-based or preupgrade upgrades to Fedora 18 Alpha; if you need to upgrade an installed system, you must use yum.

For more information, including information on the most common bugs, tips on how to report bugs, and the official release schedule, see the release notes on the Fedora 18 wiki page.

The address for downloading the ISOs is: http://fedoraproject.org/get-prerelease

Source: Announcing the release of Fedora 18 Alpha!!

    Fedora Test Day: 2012-09-18 OpenStack

    Posted by Fedora Italia on September 18, 2012 10:02 AM

Today's test day focuses on OpenStack IaaS (openstack.org) in Fedora 18.

OpenStack is one of the most successful cloud projects, and it can also be found in Fedora.

You can find all the information about this test day, the test prerequisites, and instructions on how to run the tests on the dedicated wiki page: Test Day:2012-09-18 OpenStack

Everyone is invited to contribute to the testing! You can confirm your attendance in the event created by the official Fedora account on Google+: Fedora 18 Test Day - OpenStack

    Fedora video contest

    Posted by Fedora Italia on September 13, 2012 07:15 AM

A few days ago a video contest was launched to find a template for the intro/outro of the Fedora Videos Project's videos. The contest started on September 5, 2012 and will end on October 5, 2012.

To submit your video, upload it to a publicly accessible location and send an e-mail to videos@fedoraproject.org with the subject "Fedora Videos Contest".

The video should ideally be between five and thirty seconds long. Obviously, the content must be appropriate for public use.

Since 'freedom' is one of the foundations of the Fedora Project, using open source tools to create the video is recommended. However, it is not mandatory.

The email must contain:

• The link to the video
• The license under which the video was released; you can find more information here.
• Your FAS (Fedora Account System) ID; this is mandatory, and registering only takes a minute :)
• If you have a user page on the wiki, include that link as well.

The technical specifications for the video are:

1. Video quality (suggested minimum): SD
2. Intro dimensions (suggested minimum): 1600x1200
3. Outro dimensions (suggested minimum): 1600x1200
4. Frame rate (suggested minimum): 24fps
5. Video format: ogg
6. Audio format: ogv
7. Video length: between 10 and 30 seconds

You can find the rules, the judging criteria, and all the other details of the contest at this address: https://fedoraproject.org/wiki/Videos/FedoraVideos_Contest_Intro/Outro

The prize is a Fedora t-shirt.

Fedora 17 for ARM released

    Posted by Fedora Italia on June 20, 2012 10:53 AM

After a little more than six months of development on a brand-new architecture, Fedora 17 - Beefy Miracle for ARM is released today. Development was accelerated after the Fedora ARM team chose to skip the release of version 16 and concentrate on the newborn F17. Fedora 17 is released for most of the known ARM development platforms close to open source, many of which required enormous effort to be supported by the chosen kernel, the brand-new 3.4.
Below is the team's message announcing the publication of the General Availability images:

The Fedora ARM team is pleased to announce the release of Fedora 17 GA for ARM, available for download:

    http://download.fedoraproject.org/pub/fedora-secondary/releases/17/Images/

This release includes prebuilt GA images for Versatile Express (QEMU), Trimslice, Beagleboard XM, Pandaboard, Kirkwood Plugs, Highbank, and IMX-based hardware platforms.

Please visit the announcement page for more information and links to the specific images:

    http://fedoraproject.org/wiki/Architectures/ARM/Fedora_17_GA

We invite you to download the Fedora 17 GA release and provide your valuable input to the Fedora ARM team. Join us on IRC in #fedora-arm on Freenode, and send feedback and comments to the ARM mailing list.

On behalf of the Fedora ARM team,
Paul

    Fedora Board run-off election

    Posted by Jared Smith on June 18, 2012 07:04 PM

    If you’ve followed my blog for long, you probably know that I tend to blog a lot about my favorite distribution (and community), Fedora.  And, as you probably well know, in Fedora we have elections for many things such as seats on the leadership committees and release names. In the most recent round of Fedora elections, we had a tie vote in the elections for a seat on the Fedora Board, so we’re now in the middle of a run-off election.  If you have a Fedora account and haven’t yet voted, please do your civic duty and vote in the run-off election.  The voting ends Tuesday at the end of the day UTC time, so you have roughly twenty-four hours to get your votes in.  As always, I encourage you to vote for the candidate that you think will best represent Fedora and its values.

    More details on the run-off election can be found at https://lists.fedoraproject.org/pipermail/announce/2012-June/003085.html.  To vote, login to the Fedora Accounts System and place your votes at https://admin.fedoraproject.org/voting.

    Let me also add a quick thank you to everyone who has already voted or who has stood up and run for public office.  Leadership in Fedora takes time and effort, and I’m always grateful to those who are willing to put their time and energy and passion into doing a fantastic job.

Fedora 15 reaches the end of its life cycle

    Posted by Fedora Italia on May 31, 2012 06:43 PM

Fedora 15 will reach the end of its life cycle on June 26, 2012; therefore, no further updates will be provided after that date. In addition, with the recent release of Fedora 17, no new packages will be added to the Fedora 15 collection.

Please upgrade your distribution by following the upgrade notes at this address.

    Red Hat and CVRF compatibility

    Posted by Mark Cox on May 18, 2012 12:41 PM
    The Common Vulnerability Reporting Framework (CVRF) is a way to share information about security updates in an XML machine-readable format. CVRF 1.1 got released this week and over at Red Hat we've started publishing our security advisories in CVRF format.

    Find out more from our FAQ and formatting guide.

    rsync unforking zlib soon!

    Posted by Toshio Kuratomi on May 07, 2012 04:16 PM

    Congratulations and many thanks to everyone who was involved in the effort to unbundle zlib from rsync! Looks like this long standing bug that’s been a sore spot for many distributions is finally being addressed. It almost makes me want to create a Fedora 18 Feature page for it 🙂

    The role of critics in FOSS development

    Posted by Jared Smith on April 23, 2012 09:01 PM
mini cowboy, CC-BY-SA by rdenubila on Flickr

I’ve been thinking a lot over the past couple of years about the role that critics play in the course of free/open source development.  Obviously one of the advantages that FOSS software has over its proprietary counterparts is that it almost always has a richer feedback mechanism, so that it can incorporate the feedback (and patches!) from a wide variety of interested parties.  (The mantra of “No matter how many smart people you hire, there are always smarter people outside your organization.” rings loudly in my ears!)  This feedback loop is important, perhaps even vital, to long term development.  At the same time, the open nature of FOSS development gives critics a large forum in which to voice their opinions.  How best to make sure that there’s a fair balance between constructive feedback and criticism?  I was reminded of this balance by a couple of quotes from former US President Theodore Roosevelt.  In 1894, he said:

    Criticism is necessary and useful; it is often indispensable; but it can never take the place of action, or be even a poor substitute for it. The function of the mere critic is of very subordinate usefulness. It is the doer of deeds who actually counts in the battle for life, and not the man who looks on and says how the fight ought to be fought, without himself sharing the stress and the danger.

     

    On April 23rd 1910, he put it a little more eloquently:

    It is not the critic who counts: not the man who points out how the strong man stumbles or where the doer of deeds could have done better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood, who strives valiantly, who errs and comes up short again and again, because there is no effort without error or shortcoming, but who knows the great enthusiasms, the great devotions, who spends himself for a worthy cause; who, at the best, knows, in the end, the triumph of high achievement, and who, at the worst, if he fails, at least he fails while daring greatly, so that his place shall never be with those cold and timid souls who knew neither victory nor defeat.

     

    To each of you who are in the arena — who are daring greatly, who are fighting the good fight, I tip my hat to you.  It’s not always easy work, or sexy work, or work that gets a lot of praise and glory. Thank you for your tireless efforts to make the world a better place one piece, one parameter, one package, or one project at a time.  And to those critics who aren’t in the fight, we still hear you — but at the end of the day, I’m more willing to pay attention to those who are engaged in the battle.

Fedora: by name or by release number?

    Posted by Mostafa Daneshvar on April 23, 2012 07:21 PM

For some time now there has been a discussion going on among the Fedora Board and Fedora users about the naming of Fedora releases. The debate has two parts.

First, should a name be chosen for each periodic Fedora release at all? Second, there is the question of how the names relate to each other and to Fedora as a whole, and of the process for deciding on a name.

Beefy (image)

On the first point, I should say that Fedora releases, unlike Ubuntu's, are rarely known by their release name. Most people prefer to call a Fedora release by its number rather than by the title chosen for it. Throughout Fedora, whether in links, in some programs, or in addressing, the release number has always taken clear precedence over the nickname. So many people believe that instead of going through this long administrative process, we should simply use the number as the name of each Fedora release. Supporters of naming say the name is used for producing themes and artwork, and that without a single unifying idea such work becomes somewhat harder.

On the second point: the problem is the connection between the chosen name and Fedora, Linux, and related topics. Much of the debate is that the relationship between the chosen names and Fedora is very weak. Some even consider the names inappropriate and laughable for Fedora. Another problem here is the long process of choosing a name for each release: from user suggestions, to final approval by Red Hat, and then putting the names to a vote, it takes a lot of time and energy from the Fedora team. Considering these points, some argue that it would be better to free ourselves from these cumbersome processes and put more of our effort into improving Fedora.

If you are a Fedora user and have a Fedora account, you can take part in this vote until the end of this week. After the vote, the Board will give its opinion on this system and its problems.

    Crafting the best compromise possible

    Posted by Toshio Kuratomi on April 19, 2012 05:30 PM

    This is something I’ve been noticing for a while and am finally getting around to blogging.

    In the first days of FESCo, Thorsten Leemhuis was the chairman. One of the quirks of his time was that we’d encounter a topic where we voted on a solution and found that a majority agreed with one sentiment but it wasn’t unanimous. When that happened, Thorsten would be sure to ask if there was anything we could do to make the solution more acceptable to the dissenters even if they still wouldn’t vote for the proposal.

This sometimes led to discussions of a proposal that had been approved with margins like 7 to 2, and after the discussion and changes the vote was still 7 to 2. So from an external standpoint this might be seen as unproductive. Why don’t we just get a decision made and move on?

    But over the years I’ve watched a lot of other split decisions be made on several committees from both the inside and the outside and it’s struck me that, perhaps, we don’t do nearly enough of this sort of examination. Making changes after it was clear that a majority agreed with the basic proposal had several beneficial effects:

    1. It made the proposals more palatable to more people by getting rid of at least some issues that had made their way in to the final drafts.
    2. It forced dissenters to figure out what specific things they wanted to be changed in the proposal rather than simply being able to say “I hate this whole thing”.
3. It made more people a part of the decision: whether or not they voted for it, if some of their ideas were in it, they felt some ownership for having helped craft it.
4. And perhaps most importantly, it let everyone know that the lines of communication were still open. People found that their ideas were still valued by the other members even if they didn’t agree with each other on the overall picture.

    So what can we do with this? Maybe it’s too much to ask that we look over every little decision we make where there’s disagreement and attempt to find every last bit of common ground that we can (There were certainly times when it seemed to take forever to make a decision) but what about decisions that are close votes? What about decisions that have days-long threads as part of their backstory? In these cases, consider the proposal that the majority agrees on to be a strawman. A starting point from which to start chipping away to see what changes can be made that are still acceptable to the majority while addressing many of the issues that the minority has. Remember that the goal is to craft a compromise that addresses as many concerns as possible.