
March 31, 2015

Call for evaluation: parliamentary inquiry on the Federal Foreign Office


The Greens in the Bundestag have submitted a parliamentary inquiry (kleine Anfrage) on the topic "Free Software in the Federal Foreign Office" (PDF). The Federal Government has now answered this inquiry.

The FSFE has long taken a critical view of the Federal Foreign Office's migration back to non-free software. We now call on everyone interested to join us in publicly evaluating the Federal Government's answer.

"The alleged advantages of non-free software, with which the Federal Government justified the Foreign Office's departure from its Free Software course, have evidently failed to materialise," says Karsten Gerloff, President of the Free Software Foundation Europe. "The Federal Government still owes us a coherent justification for the return to a proprietary operating system at the Foreign Office."

More background on the inquiry is available in an article by Member of the Bundestag Konstantin von Notz.


A cautious welcome to the EC's new Free Software strategy


The European Commission has published a new version of its strategy for the internal use of Free Software. The strategy now covers the 2014-2017 timeframe. FSFE has provided extensive input to the Commission during the update process.

While the strategy is broadly similar to the previous version, there are a number of marked improvements:

A more determined attitude to Free Software. This is a minimum requirement for the strategy to have at least some impact in an environment where proprietary software is deeply entrenched. The new strategy talks about creating "a level playing field" for Free Software, and giving it "active and fair consideration".

An approach to Open Standards that goes beyond the watered-down revision of the European Interoperability Framework: "the Commission shall promote the use of products that support recognised, well-documented and preferably open technical specifications that can be freely adopted, implemented and extended".

A commitment to make it easier for Commission developers to participate in external Free Software communities.

"This document is essentially a statement of intent by the Commission," says FSFE's president Karsten Gerloff. "There are many actions the Commission could take to make use of the advantages offered by Free Software and Open Standards - procurement practices come to mind. That said, the new strategy represents a change for the better, and we are happy to see the Commission moving in the right direction."

Crucially, the strategy is accompanied by an action plan aimed at putting it into practice, unlike previous versions. However, the action plan is not public, so it is not possible to assess the Commission's progress towards its own goals. FSFE hopes that the Commission will eventually publish the action plan.


OpenSource.com – Open source and DevOps aren’t mandatory, but neither is survival

I recently wrote an article for OpenSource.com: "Open source and DevOps aren't mandatory, but neither is survival." The article is part of the Easy DevOps column coordinated by Greg Dekoenigsberg, VP of Community at Ansible. Share your stories and advice for making DevOps practical, along with the tools, processes, culture, and the glorious and inglorious failures from your experience, by getting in touch with the column.


March 30, 2015

/etc/apt/sources.list, GNOME 3.16 release and Jessie RC2

This will be a longish post covering several topics, starting with documenting my /etc/apt/sources.list and moving on to various goings-on around the GNOME 3.16 release and the Debian Jessie RC2 release. This is my /etc/apt/sources.list. It is a bit long, as I primarily use Jessie but do take some software from unstable/sid and experimental. I stay away […]

How to format Python code without really trying

Years of writing and maintaining Python code have taught us the value of automated tools for code formatting, but the existing ones didn’t quite do what we wanted. In the best traditions of the open source community, it was time to write yet another Python formatter.

YAPF takes a different approach to formatting Python code: it reformats the entire program, not just individual lines or constructs that violate a style guide rule. The ultimate goal is to let engineers focus on the bigger picture and not worry about the formatting. The end result should look the same as if an engineer had worried about the formatting.

You can run YAPF on the entire program or just a part of the program. It’s also possible to flag certain parts of a program which YAPF shouldn’t alter, which is useful for generated files or sections with large literals.
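For example, assuming YAPF has been installed from PyPI (pip install yapf), a command-line session might look roughly like this; the file and directory names are placeholders, and the flags reflect the YAPF CLI at the time of writing, so check yapf --help for the authoritative list:

yapf --diff my_module.py                      # preview the changes without touching the file
yapf --in-place my_module.py                  # rewrite the whole file in place
yapf --in-place --lines 10-20 my_module.py    # only reformat lines 10 through 20
yapf --recursive --in-place my_project/       # reformat an entire source tree

Code that YAPF should leave alone, such as generated files or large literals, can be marked with comments like # yapf: disable (see the project README for the exact markers).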

Consider this horribly-formatted code:

x = {  'a':37,'b':42,

'c':927}

y = 'hello ''world'
z = 'hello '+'world'
a = 'hello {}'.format('world')
class foo  (     object  ):
 def f    (self   ):
   return       \
37*-+2
 def g(self, x,y=42):
     return y
def f  (   a ) :
 return      37+-+a[42-x :  y**3]

YAPF reformats this into something much more consistent and readable:

x = {'a': 37, 'b': 42, 'c': 927}

y = 'hello ' 'world'
z = 'hello ' + 'world'
a = 'hello {}'.format('world')


class foo(object):
   def f(self):
       return 37 * -+2

   def g(self, x, y=42):
       return y


def f(a):
   return 37 + -+a[42 - x:y ** 3]

Head to YAPF's GitHub page for more information on how to use it, and take a look at YAPF’s own source code to see a much larger example of the output it produces.

by Bill Wendling, YouTube Code Health Team

March 27, 2015

Google Code-in 2014 wrap up with OpenMRS

OpenMRS is a medical records system used around the world, especially in places where resources are scarce. It's also being used with Google's chlorine-submersible tablets designed for Médecins Sans Frontières to use while treating Ebola patients. The OpenMRS community recently participated in Google Code-in, providing young students with an opportunity to get involved with real open source projects and learn about contributing to them. Chaitya Shah, one of OpenMRS' two grand prize winners, shared this story with us about his participation in the contest.


For 7 weeks in December 2014 and January 2015, I worked with OpenMRS in the Google Code-in (GCI) competition. GCI introduces high school-aged students to open source software development by providing a wide variety of tasks we can complete. For me, it has worked wonders. I'd been interested in the concept of open source software for about a year and had even participated in GCI 2013, but this year the experience turned my interest into a passion. I worked on many new things, met lots of new people, and learned several important skills along the way.

A few days before the competition started, I decided to see how OpenMRS’s software worked. I went through the GitHub repositories and tried to get openmrs-core, the main application, running. After a few tries and the help of several contributors on IRC, I was finally able to do so. Their help showed me what the OpenMRS community was truly about: everyone was very helpful throughout the contest and there was always someone online to help me out at any time of the day.

Several of the tasks I worked on this year were much more complex than the ones I worked on last year, giving me more of a challenge and motivating me to put forth my best effort! The early tasks, however, involved getting acquainted with the OpenMRS community and learning how things work in the organization. Several of these tasks taught some key aspects of open source software or of programming in general. One of the simplest but most important tasks was introducing myself to the community. If the communication between a developer and an organization is weak, the code produced will suffer. It was also inspiring to see so many other people interested in contributing to OpenMRS through GCI.

After learning the basics of OpenMRS, I started to explore tasks in the UI Revamp epic. With guidance from a mentor, I worked on making the OpenMRS ID site look more like the redesigned wireframes provided. These tasks really taught me a lot about design, one of my weak points. I used to know very little about HTML/CSS in general. The revamp tasks taught me about good practices in UI Design and I loved every minute of it.

In the last two weeks of the competition, I decided that I was ready to contribute something brand new to the organization. While deploying OpenMRS on the OpenShift cloud platform as part of a task, I found the developer guide was vague in some areas and difficult to follow. It took me a few days and some experimentation to get it working. To ensure that others wouldn’t have the same troubles, I made two videos showing the exact steps to follow: one for Windows and one for Unix-based systems.

After that, I decided to take on a Docker task. Docker is a system that lets you build, ship, and run distributable applications. This task directed me to create an image that downloads, sets up, and runs OpenMRS automatically. I was slightly overwhelmed at first, but Docker proved to be quite useful because it uses a system of containers rather than virtual machines, making it much faster and easier to deploy applications. I felt a big sense of accomplishment once I had finished publishing my work, writing up documentation, and making a quick video tutorial on how to set it up.
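As a rough sketch of the workflow such an image enables (the image name below is a placeholder, not the one actually published, and port 8080 is an assumption based on the usual Tomcat default), running it would look something like this:

docker pull example/openmrs-standalone                     # placeholder image name
docker run -d -p 8080:8080 example/openmrs-standalone      # start OpenMRS in the background, exposed on port 8080
docker logs -f <container-id>                              # watch the automatic download and setup progress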

I learned a lot from OpenMRS and GCI this year. I was especially impacted by the weight that community interaction has in open source work. Previously, I’d always had the notion that being a programmer is very lonesome, sitting in a room with nothing but a computer for many hours at a time. However, I now know that everything in open source software development is collaborative; everyone works together to accomplish a single goal. I hope to someday find a job with a company that embraces this collaborative nature. Thank you to OpenMRS and GCI for an awesome experience this year!


By Chaitya Shah, GCI grand prize winner

March 24, 2015

LibrePlanet 2015 brings free software luminaries to MIT

Richard Stallman gave the opening keynote.

At a ceremony on Saturday, March 21st, Free Software Foundation executive director John Sullivan announced the winners of the FSF's annual Free Software Awards. Two awards were given: the Award for the Advancement of Free Software was presented to Sébastien Jodogne for his work on free software medical imaging, and the Award for Projects of Social Benefit was presented to Reglue, an Austin, TX organization that gives GNU/Linux laptops to families in need.

Software Freedom Conservancy executive director Karen Sandler closed out the conference with a rallying cry to "Stand up for the GNU GPL," in which she discussed a lawsuit recently filed in Germany to defend the GNU General Public License. When she asked the audience who was willing to stand up for copyleft, the entire room rose to its feet.

Karen Sandler gave the closing keynote.

Videos of all the conference sessions, along with photographs from the conference, will soon be available on https://media.libreplanet.org, the conference's instance of GNU MediaGoblin, a free software media publishing platform that anyone can run.

LibrePlanet 2015 was produced in partnership by the Free Software Foundation and the Student Information Processing Board (SIPB) at MIT.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Libby Reinish
Campaigns Manager
Free Software Foundation
+1 (617) 542 5942

March 23, 2015

The Ten Plagues of Egypt

Here they are, in order, in time for Passover: all Ten Plagues I animated for Seder-Masochism, my feature-film-in-progress. Yes, they will be joined together by a unifying narrative device in the final film, but for now they exist as separate clips. Click on each one for more information and higher resolution. Like this work? Donate here.


March 22, 2015

New committer: Kristof Provost (src)

March 21, 2015

Sébastien Jodogne, Reglue are Free Software Award winners

The Award for the Advancement of Free Software is given annually to an individual who has made a great contribution to the progress and development of free software, through activities that accord with the spirit of free software.

This year, it was given to Sébastien Jodogne for his work on free software medical imaging with his project Orthanc.

Sébastien Jodogne

One of Jodogne's nominators said, "The Orthanc project started in 2011, when Sébastien noticed in his work as a medical imaging engineer that hospitals are very exposed to lock-in problems when dealing with their medical imaging flows....Freely creating electronic gateways between imaging modalities (autorouting), between medical departments, or even between hospitals remains a challenging task. But the amount of medical images that are generated, analyzed, and exchanged by hospitals is dramatically increasing. Medical imaging is indeed the first step to the treatment of more and more illnesses, such as cancers or cardiovascular diseases."

Jodogne said, "Technology and humanism are often opposed. This is especially true in the healthcare sector, where many people fear that technological progress will dehumanize the treatments and will reduce the patients to statistical objects. I am convinced that the continuous rising of free software is a huge opportunity for the patients to regain control of their personal health, as well as for the hospitals to provide more competitive, personalized treatments by improving the interoperability between medical devices. By guaranteeing the freedoms of the users, free software can definitely bring back together computers and human beings."

Jodogne joins a distinguished list of previous winners, including the 2013 winner, Matthew Garrett.

The Award for Projects of Social Benefit is presented to a project or team responsible for applying free software, or the ideas of the free software movement, in a project that intentionally and significantly benefits society in other aspects of life. This award stresses the use of free software in the service of humanity.

This year, the award went to Reglue, which gives GNU/Linux computers to underprivileged children and their families in Austin, TX. According to Reglue, Austin has an estimated 5,000 school-age children who cannot afford a computer or Internet access. Since 2005, Reglue has given over 1,100 computers to these children and their families. Reglue's strategy diverts computers from the waste stream, gives them new life with free software, and puts them in the hands of people who need these machines to advance their education and gain access to the Internet.

FSF executive director John Sullivan and Ken Starks

One nomination for Reglue read, "Mr. Starks has dedicated his life to distributing free software in many forms, both the digital form...and by building new computers from old parts, giving a new life to old machines by re-purposing them into computers given to extremely needy children and families. They are always loaded with free, GNU/Linux software, from the OS up."

Ken Starks, founder of Reglue, was present at the ceremony to accept the award. While not all free 'as in freedom' software is free of charge, Reglue focuses on finding empowering free software that is also gratis. He said of his work with Reglue, "A child's exposure to technology should never be predicated on the ability to afford it. Few things will eclipse the achievements wrought as a direct result of placing technology into the hands of tomorrow."

Nominations for both awards are submitted by members of the public, then evaluated by an award committee composed of previous winners and FSF founder and president Richard Stallman. This year's award committee was: Hong Feng, Marina Zhurakhinskaya, Yukihiro Matsumoto, Matthew Garrett, Suresh Ramasubramanian, Fernanda Weiden, Jonas Öberg, Wietse Venema, and Vernor Vinge.

More information about both awards, including the full list of previous winners, can be found at https://www.fsf.org/awards.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software—particularly the GNU operating system and its GNU/Linux variants—and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942

Photos under CC BY-SA 4.0 Attribution

The Eighth Plague: Locusts (rough)

“4 Else, if thou refuse to let my people go, behold, to morrow will I bring the locusts into thy coast:

“5 And they shall cover the face of the earth, that one cannot be able to see the earth: and they shall eat the residue of that which is escaped, which remaineth unto you from the hail, and shall eat every tree which groweth for you out of the field:” –Exodus 10, KJV

Wrapping up here with the final stanzas of Oingo Boingo’s 1982 “Insects,” which I also used in “Lice” and “Flies.” Lyrics:

Insects make me scream and shout!
They don’t know what life’s about
They don’t have blood
They’ve got too many legs
They don’t have brains in their heads

They know they’ll rule the world someday
They bite and sting me anyway
They bite and sting and SUCK
They bite and sting and SUCK SUCK SUCK
They bite and sting and SUCK SUCK SUCK

Dance (Dance)
Those insects make me –

 


March 20, 2015

Unexpected turn at panel discussion on software patents and Free Software

On Monday 17 March 2015, I participated in a panel discussion organised by the European Patent Office at the Cebit in Hannover. The title of the discussion was “Patents, Standards, and Open Source — a changing landscape”. I prepared to discuss software patents, but something unexpected happened in the panel discussion.

I was invited by Grant Philpott (Principal Director of the ICT area in the European Patent Office) to participate in the panel discussion. Besides him as moderator, the participants were: Brian Hinman (Senior Vice President and Chief IP Officer, Royal Philips), Koen Lievens (Director DG1, European Patent Office), and myself.

To prepare, I first read the EPO’s position on software patents again, and then prepared for the discussion together with our current interns Marius Jammes, Miks Upenieks, and Nicola Feltrin. They had to read some articles — including one of my favourites, “The Most Important Software Innovations” by David Wheeler — and then we discussed the main arguments for and against software patents again. That was good practice for them, as well as for me. After this we were well prepared to discuss the details of software patents.

Before the event, Brian Hinman and myself were asked to prepare a short input statement about the “main IP needs of the ICT sector in the future, how you see these being ideally met, and what will need to change in order to get to that ‘ideal’ situation.” (My notes for this statement are below.) This was the start of the panel discussion.

I was astonished by what happened when the audience was included in the discussion: almost all of their questions were about Free Software, and almost none about patents. Instead of the expected comments like “but how do we give incentives to inventors” or “but we have to secure investments”, people were interested in Free Software specifics. Of the 45 minutes on the panel, we spent at least 25 minutes speaking exclusively about Free Software business models, compliance issues, copyright management, and why Free Software is important for our society and the economy. Afterwards I spent over an hour answering questions from the audience which we could not cover during the discussion.

So this discussion took a completely unexpected turn for me. But in this case I was very happy about that.

My introduction statement

Today Free Software runs on the majority of computers around the world: from supercomputers and other servers, to robots and space shuttles, to the computers we carry around every day like phones or tablets, to very small computers we often do not recognise as such.

How did we get to the point where the most important operating system is Free Software, every company uses Free Software, and it is almost impossible to develop other software without using Free Software yourself?

We achieved that because Free Software empowers people rather than restricting them. Based on copyright we use licenses which grant everybody the rights to use, study, share and improve software for any purpose.

  • The right to use it for any purpose guarantees that everybody can participate in using and developing software. There is no discrimination as to who can use the technology or what you can use it for.
  • Every Free Software license grants you the right to study how it works: in a world as complex as ours, we cannot afford to keep things secret if we want to solve problems. Source code plus documentation is the best way to share the knowledge of how IT devices work. Publishing source code is also the best way to enable interoperability, and therefore competition.
  • To adapt software to your own needs, it is crucial that you are allowed to improve it. Technology should do what you want it to do, not what others thought it should do. So you are allowed to modify all parts of the software, use only parts of it, experiment with it, and combine programs to create new products.
  • Furthermore, you have the right to share knowledge and workload with others. We have many problems in the world which can be solved with software, but few people who can actually solve them well. Let us enable them to concentrate on fixing new problems, instead of re-fixing ones that were already solved. So Free Software always allows you to share the software — modified or not — with others.

We guarantee everybody those rights through copyright.

Obstacles:

  • Legal issues: there are too many legal issues around technology. Let people be creative in fixing other people’s problems, instead of focusing on problems resulting, e.g., from copyright and patents.
  • Licenses: most Free Software licenses are much easier to understand than proprietary software licenses. Solution: still, we can make them easier to understand and work with, and have fewer licenses.
  • Patents: it is problematic to have additional monopolies on principles instead of implementations. They impose the burden of researching what other people have already done in a field, the need to negotiate with them, and dealing with lawsuits. Solution: a stronger clarification that patents on software are not allowed; where it is unclear whether something is software or hardware, patents should not be granted.
  • Secrecy: not publishing the source code prevents others in society from understanding how products work or from making interoperable products. This restriction also continues after the copyright period ends. Solution: at least publicly financed software (including research) needs to be published under Free Software licenses, so that the results can be integrated into all kinds of products. Possibly also a requirement to deposit source code.
  • Restricted hardware platforms: someone else controls what you can install on your computers. Solution: a clear right that you are allowed to change the software on your computers, and, as a company, also to sell them afterwards.

March 19, 2015

Presentation – Linux Collab Summit – Cloud 2.0: Containers, Microservices and Cloud Hybridization

Presented at Linux Collaboration Summit 2015 in Santa Rosa, CA on February 20th, 2015.

Abstract:

In a very short time cloud computing has become a major factor in the way we deliver infrastructure and services, and we have quickly breezed through the ideas of hosted cloud and orchestration. This talk will focus on the next evolution of the cloud and on technologies such as containers (like Docker), microservices (the way Netflix runs their cloud), and hybridization (applications running on Mesos across Kubernetes clusters in both private and public clouds).

[Sometimes the embed doesn't work, so you can also view the presentation here.]

Slides: "Cloud 2.0: Containers, Microservices and Cloud Hybridization" by Mark Hinkle, on SlideShare: https://www.slideshare.net/socializedsoftware/2015-linux-collaboration-summit-cloud-20-containers-microservices-and-cloud-hybridization


March 13, 2015

15th Anniversary and Spring Fundraising Kickoff

I'm so excited to announce our spring fundraising campaign. I know it's not officially spring yet, but it sure feels like it here at Foundation headquarters in Boulder, Colorado. We're kicking off our fundraising campaign in conjunction with some other exciting events. There's so much to celebrate. First, we are proud to be a Platinum sponsor of AsiaBSDCon. This is the tenth AsiaBSDCon, with over 140 attendees planned, and 31 talks, providing a venue for all things BSD in Asia. People from around the world attend this conference to learn about the BSD operating systems, share their knowledge and experience, and work together to develop, hack, fix, improve, and document the various BSD operating systems.

The most exciting news is that we are celebrating our 15th anniversary of supporting the FreeBSD Project and community worldwide! We have grown from our president and founder, Justin Gibbs, creating a non-profit to support FreeBSD, to an eight-member board with seven staff members. In case you missed it earlier, check out Justin's interview about the history of the Foundation on BSDNow.

As the first employee, 9 years ago, I've witnessed incredible growth in our ability to support the Project and community. The year we were founded we raised a whopping $7,000. My first year with the Foundation, in 2006, we raised a little over $100,000. And, last year we raised $2,436,194, spending $877,412 on the project.

When we first started out, we focused on funding project development, conference sponsorships, and travel grants. Fifteen years later, we have increased support in those areas and have grown to providing legal support for the Project; purchasing and helping manage hardware for FreeBSD infrastructure; providing release engineering support for consistent and timely releases; creating marketing literature and presentations that not only inform people about what FreeBSD is, but also provide detailed information on what's in new releases; attending more conferences to promote FreeBSD; and publishing a professional online FreeBSD magazine, The FreeBSD Journal.

To celebrate our anniversary, we are kicking off a fundraising campaign to help broaden the reach of our mission, by adding 500 new community investors in the next four weeks. What's a new community investor? An individual or organization that makes their first 2015 donation during this spring campaign. 

Why donate to the Foundation? Your donations will help us continue and increase our support in the following areas:
  • Funding improvement and development projects, including: the native iSCSI kernel stack, an updated video console (Newcons), UEFI system boot support, the Capsicum component framework, IPv6 support in FreeBSD, Auditdistd improvements for the FreeBSD cluster, and adding modern AES modes to OpenCrypto (to support IPsec).
  • Helping to provide consistent and on-time releases.
  • Educating the public and promoting FreeBSD with tools like our high-quality FreeBSD 10X Brochure, and making company visits to help facilitate collaboration efforts with the Project.
  • Sponsoring BSD conferences and summits in Europe, Japan, Canada, and the US.
  • Protecting FreeBSD IP and providing legal support to the Project.
  • Purchasing hardware to build and improve FreeBSD project infrastructure.
For the last 15 years, you as a community have allowed us to make a major impact on the FreeBSD Project and community. Please help us continue and increase our support by making a donation today.


Deb Goodkin, Executive Director

FreeBSD From the Trenches: Using autofs(5) to Mount Removable Media

This next FreeBSD From the Trenches story comes to us from Edward Tomasz Napierała, who shares his work on the new FreeBSD automounter.

My big project for 2014 was the new FreeBSD automounter.  Like any proper FreeBSD Foundation sponsored project, it included the usual kind of documentation - man pages and the Handbook chapter.  But there is no document that shows how it works inside, from the point of view of an advanced system administrator or a power user.

So, here it is.  The article demonstrates how modular the automounter is, and how easy it is to adapt it to any mount-related situation you might have, using the recently added removable media support as an example.  (And it shows some related mechanisms as a bonus.)

autofs(5) Basics

The purpose of autofs(5) is to mount filesystems on access, in a way that's transparent to the application. In other words, filesystems get mounted when they are first accessed, and then unmounted after some time passes. The application trying to access the filesystem doesn't even notice this, apart from a slight delay on first access.  It's a mechanism similar to ones available in other systems, in particular OS X.  It's a completely independent implementation; it's just that OS X is the other operating system I use.

Automounting requires cooperation of four things: the kernel filesystem, autofs.ko, which is responsible, among other things, for "pausing" the application until the filesystem is actually there; the automountd(8) daemon, which is the component that retrieves configuration information from maps (this includes fetching it from remote sources, such as LDAP) and actually mounts the filesystems; the automount(8) utility for various administrative purposes; and then the autounmountd(8) daemon to, well, unmount the filesystems mounted by automountd(8) after a timeout.

Setting it up is fairly simple: you obviously need to have autofs(5) enabled in /etc/rc.conf:
autofs_enable="YES"
And you need to have the autofs(5) daemons running - just like other daemons in FreeBSD, those will get started at system bootup if autofs_enable was set; otherwise you need to start them by hand:
# /etc/rc.d/automount start
# /etc/rc.d/automountd start
# /etc/rc.d/autounmountd start
The kernel driver will get loaded automatically; you can see it in kldstat(8) output.

autofs(5) and Removable Media

Note that at the time of this writing, this is only available in FreeBSD 11-CURRENT. This will change soon.

The main configuration file for autofs(5) is /etc/auto_master; you need to uncomment this line:
/media -media -nosuid
This basically says that there is a /media directory, that the "-media" map will be mounted there, and that everything mounted there will have the "nosuid" mount option, for security reasons.

If you already had autofs(5) running before uncommenting the line, you must refresh its configuration by running automount(8) as root; run it as "automount -v" for a detailed explanation of what it does.  It looks like this:
# automount -v
automount: parsing auto_master file at "/etc/auto_master"
automount: done parsing "/etc/auto_master"
automount: unmounting stale autofs mounts
automount: skipping /, filesystem type is not autofs
automount: skipping /dev, filesystem type is not autofs
automount: leaving autofs mounted on /net
automount: mounting new autofs mounts
automount: autofs already mounted on /net
automount: nothing mounted on /media; mounting
automount: mounting map -media on /media, prefix "/media", options "nosuid"
If you run mount(8), you will see the so-called "trigger nodes" of type autofs(5):
# mount
/dev/ada0p2 on / (ufs, local, noatime, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
map -hosts on /net (autofs)
map -media on /media (autofs)

Basic usage

With all that done, plug a drive into USB, and here is what happens in a real-world case:
[trasz@brick:~]% ll /media
total 9
drwxr-xr-x 3 root wheel 512 Feb 24 12:54 .
drwxr-xr-x 30 root wheel 1024 Feb 24 12:28 ..
drwxr-xr-x 1 root wheel 4096 Jan 1 1980 ADATA UFD
drwxr-xr-x 3 root wheel 512 Feb 24 12:54 md0
[trasz@brick:~]% cd /media/ADATA\ UFD
[trasz@brick:/media/ADATA UFD]% ll
total 10117
drwxr-xr-x 1 root wheel 4096 Jan 1 1980 .
drwxr-xr-x 3 root wheel 512 Feb 24 12:54 ..
drwxr-xr-x 1 root wheel 4096 Nov 24 00:03 .Spotlight-V100
drwxr-xr-x 1 root wheel 4096 Nov 24 00:03 .Trashes
-rwxr-xr-x 1 root wheel 4096 Nov 24 00:03 ._.Trashes
drwxr-xr-x 1 root wheel 4096 Jan 13 11:24 .fseventsd
drwxr-xr-x 1 root wheel 4096 Nov 22 22:44 Bonus
-rwxr-xr-x 1 root wheel 3309568 Nov 24 14:50 DSC05996.JPG
-rwxr-xr-x 1 root wheel 4063232 Nov 24 14:50 DSC05997.JPG
-rwxr-xr-x 1 root wheel 2953199 Nov 25 21:40 DSC05998.JPG
drwxr-xr-x 1 root wheel 4096 Nov 22 18:24 Meshuggah
drwxr-xr-x 1 root wheel 4096 Nov 22 21:06 System Volume Information
[trasz@brick:/media/ADATA UFD]% mount
/dev/ada0p2 on / (ufs, local, noatime, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
map -hosts on /net (autofs)
map -media on /media (autofs)
/dev/da0s1 on /media/ADATA UFD (msdosfs, local, nosuid, automounted)
[trasz@brick:/media/ADATA UFD]% cd /
[trasz@brick:/media/ADATA UFD]% sudo automount -u
[trasz@brick:/media/ADATA UFD]% mount
/dev/ada0p2 on / (ufs, local, noatime, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
map -hosts on /net (autofs)
map -media on /media (autofs)
Two things to notice here: first, the "ADATA UFD" is a factory default filesystem label on the flash drive.  If there was no filesystem label, autofs(5) would use the device name instead - in this case, that would be "da0s1".  Second - if you don't want to wait for autounmountd(8) to unmount the automounted volume, you can use "automount -u".  Or "automount -fu", if you want to force the unmount.

Not So Basic Usage

Take a close look at the directory listing for /media in the previous example. Did you notice the "md0" there?  It looks like a device node for a memory disk (md(4)), but it is a directory.  That's a leftover from my earlier experimentation, and it shows an interesting feature of the autofs(5)-based automounter: it's not limited to removable media; it can mount everything that's available for mounting.  In this case it's a memory disk (a kind of ramdisk, see "man mdconfig").  It can also be an iSCSI LUN.  And, of course, removable media.  How does that work?

GEOM

In FreeBSD, GEOM is the name of what could otherwise be called the block device layer.  It's a piece of code that manages all the "disk-like devices", both physical and virtual: SATA/SAS/FC/NVMe/USB drives, memory disks, iSCSI LUNs, partitions, encrypted GELI volumes, etc.

GEOM has another meaning: an instance of a GEOM class.  The "class" here means the "kind" of device, and an instance is an actual device of that kind. It's easiest to explain with an example:
# geom disk list
Geom name: cd0
Providers:
1. Name: cd0
   Mediasize: 0 (0B)
   Sectorsize: 2048
   Mode: r0w0e0
   descr: MATSHITA DVD/CDRW UJDA775
   ident: (null)
   fwsectors: 0
   fwheads: 0

Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 250059350016 (233G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e3
   descr: Samsung SSD 850 EVO 250GB
   lunid: 5002538da000f602
   ident: S21PNSAFC02149R
   fwsectors: 63
   fwheads: 16

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 7654604800 (7.1G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: ADATA USB Flash Drive
   lunname: USB MEMORY BAR
   lunid: 2020030102060804
   ident: 14A0711312300023
   fwsectors: 63
   fwheads: 255

# geom part list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 488397127
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 65536 (64K)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 1024
   Mode: r0w0e0
   rawuuid: 42dc1b8b-c49b-11e3-8066-001c257ac65f
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 65536
   offset: 17408
   type: freebsd-boot
   index: 1
   end: 161
   start: 34
2. Name: ada0p2
   Mediasize: 236223201280 (220G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 1024
   Mode: r1w1e1
   rawuuid: 42dc921f-c49b-11e3-8066-001c257ac65f
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 236223201280
   offset: 82944
   type: freebsd-ufs
   index: 2
   end: 461373601
   start: 162
3. Name: ada0p3
   Mediasize: 13836045312 (13G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 1024
   Mode: r1w1e0
   rawuuid: 21a8eef9-a0d4-11e4-ab80-001c257ac65f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 13836045312
   offset: 236223284224
   type: freebsd-swap
   index: 3
   end: 488397127
   start: 461373602
Consumers:
1. Name: ada0
   Mediasize: 250059350016 (233G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e3

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 14950399
first: 1
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
   Mediasize: 7654576128 (7.1G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 28672
   Mode: r0w0e0
   rawtype: 12
   length: 7654576128
   offset: 28672
   type: !12
   index: 1
   end: 14950399
   start: 56
Consumers:
1. Name: da0
   Mediasize: 7654604800 (7.1G)
   Sectorsize: 512
   Mode: r0w0e0
See?  I've used the geom(8) command to get information about two GEOM classes: "disk" and "part".  The first one returned information about three instances of the disk class: the DVD drive, the SSD, and the flash drive.  The second one returned information on the partitions known to the system. Everything that is potentially mountable - a physical disk, a partition, an encrypted ELI volume, a multipath device, a RAID3 volume, a memory disk, even a volume label - has its own GEOM class and can be queried in a similar way.  To see all the GEOM instances in the running system, use:
# sysctl kern.geom.conftxt
Now, notice the "Mode" lines.  Like the one for ada0: "r2w2e3".  Those are the three usage counters for the ada0 GEOM: read, write, and exclusive.  They are non-zero because ada0 is in use: there are three partitions on it, and the three instances of the PART GEOM class hold it open.  The partitions, just like any other GEOM nodes, have their own counters.  Take a look at the first one, ada0p1: the mode there is "r0w0e0".  This means it's not open by anything.  It is, in other words, available for mounting.  If you check the MD GEOM class:
# geom md list 
Geom name: md0
Providers:
1. Name: md0
   Mediasize: 1073741824 (1.0G)
   Sectorsize: 512
   Mode: r0w0e0
   type: swap
   access: read-write
   compression: off
   length: 1073741824
   fwsectors: 0
   fwheads: 0
   unit: 0
You will see the same thing: it's not open.  That's the first thing the autofs(5) "-media" map checks for: zero access counts; if the counts are not zero, it means the node is used by something: it's either mounted (like ada0p2, mounted on /), or there is something "on top of it" - like ada0.

But why is there no /media/ada0p1?  Because it's not mountable; there is no filesystem there.  It's a boot loader partition.  How does autofs(5) figure that out?

fstyp(8)

Before we can do anything with a filesystem, we need to determine what kind of filesystem it is - and whether it actually is a supported filesystem in the first place.  That means we need a piece of code that can take a look at it and determine if it has a format it recognizes.

It is possible to use file(1) for this, eg:
# file -s /dev/md0
Vermaden's sysutils/automount port uses this approach.  There are a few problems with doing it this way, though.  First, the output, for a typical FAT filesystem, looks like this:
/dev/md0: DOS/MBR boot sector, code offset 0x3c+2, \
OEM-ID "BSD4.4  ", sectors/cluster 32, root entries \
512, sectors/FAT 256, sectors/track 63, heads 255, \
sectors 2097144 (volumes > 32 MB) , serial number \
0x668a120e, unlabeled, FAT (16 bit)
It's not particularly easy to parse.  It's even harder to extract the volume label.

Second, file(1) can recognize all kinds of file types, from JPEG to 6502 assembly.  This means that if there is some strange data on the removable media instead of the filesystem we expect, file(1) will output something our script wasn't tested against, making the first problem even harder.

Third, file(1) has had its share of security bugs, e.g. CVE-2014-1943, CVE-2014-9620, or CVE-2014-3710.

For this reason I've decided the proper fix would be to just write a new utility. The strange name - "fstyp" - comes from the utility of the same name, installed by default on Solaris, IRIX, OS X, and perhaps most other UNIX systems.

fstyp(8) addresses the file(1) issues: the output is easily parsable (just a filesystem name, one word), it only recognizes filesystems supported by FreeBSD, and it uses Capsicum sandboxing to make sure that even if there is a vulnerability, its impact is limited to incorrectly reporting the filesystem type.  That's a good topic for another article, but in short - in FreeBSD, every process can enter what's called "capability mode". It's one-way: a process can enter it, but there is no way to exit it.  Child processes inherit the mode. In capability mode, the kernel will deny all attempts to open new files, create sockets, attach shared memory segments, etc., but the process is pretty much free to do anything it likes with the file descriptors it had already opened before entering capability mode - and it can receive other file descriptors over a UNIX socket.  So the fstyp(8) utility opens the device file, then calls cap_enter(2), which switches it into capability mode, and then continues execution, reading from the device to determine what's there.  Should it be compromised, it won't be able to execute /bin/sh, it won't be able to open a socket to transmit data to some external host, etc.
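To get a feel for it, here is roughly what using fstyp(8) by hand looks like on the devices from the earlier examples (the output below is what one would expect, not a verbatim transcript):
# fstyp /dev/md0
msdosfs
# fstyp /dev/ada0p1
fstyp: /dev/ada0p1: filesystem not recognized
The first device carries a FAT filesystem, so fstyp(8) prints its one-word answer; the boot loader partition has no filesystem, so fstyp(8) reports an error and the "-media" map simply skips it.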

The "-media" Map

Those are the components underneath autofs(5), but how does it all fit together? Let's start with the actual map.  In FreeBSD, special maps (the ones with names starting with "-") are just executables in /etc/autofs/:
# ls -al /etc/autofs
total 36
drwxr-xr-x 2 root wheel 512 Feb 14 21:18 .
drwxr-xr-x 25 root wheel 3072 Feb 24 11:22 ..
-rwxr-xr-x 1 root wheel 1010 Oct 17 11:26 include_ldap
-rwxr-xr-x 1 trasz wheel 43 Aug 17 2014 include_nis
-rwxr-xr-x 1 root wheel 367 Oct 17 11:26 special_hosts
-rwxr-xr-x 1 root wheel 2294 Dec 6 10:15 special_media
-rwxr-xr-x 1 root wheel 355 Feb 14 21:17 special_noauto
-rwxr-xr-x 1 root wheel 97 Oct 17 11:26 special_null
-rwxr-xr-x 1 root wheel 357 Aug 22 2014 special_smb
See the special_media?  That's the one.  It's a shell script.  The reason it's in /etc is that the system administrator can modify it if required, or add new special maps.

Now, let's try to run it by hand, as root:
# /etc/autofs/special_media
ADATA UFD
md0

# /etc/autofs/special_media md0
-fstype=msdosfs,nosuid  :/dev/md0
That's exactly how automountd(8) uses it, after the kernel component notifies it that it needs the /media directory taken care of.  It's described in more detail in the auto_master(5) manual page.  The shell script is pretty well commented, and I don't think there is any point in explaining it here.

Bottom line:
the core autofs itself doesn't know anything about removable devices; the special map "-media" does: it queries GEOM for the list of all disk-like nodes that are not in use, and then uses fstyp(8) to determine whether they contain a useful filesystem.  UNIX.  Modularity.  Plain text.  ;-)
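To make that modularity concrete, here is a hedged sketch of what a small custom special map could look like, following the same protocol special_media uses (called with no argument it prints the available keys, called with a key it prints the mount options and the location); the map name and the NFS export below are made up:
#!/bin/sh
# /etc/autofs/special_example - hypothetical custom special map, not part of the base system
if [ $# -eq 0 ]; then
        # called without arguments: list the keys this map can provide
        echo "backups"
        exit 0
fi
# called with a key: print mount options and location, like special_media does
case "$1" in
backups)
        echo "-fstype=nfs,ro   nfs.example.org:/export/backups"
        ;;
esac
Such a map would presumably be referenced from auto_master as "-example", the same way "-media" maps to special_media.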

Cache

Now, let's create a second memory disk, 1GB in size (the "1g" below) to see if it all works as intended:
# mdconfig -s1g
md1
# newfs_msdos /dev/md1
newfs_msdos: cannot get number of sectors per track: \
Operation not supported
newfs_msdos: cannot get number of heads: \
Operation not supported
newfs_msdos: trim 8 sectors to adjust to a multiple of 63
/dev/md1: 2096576 sectors in 65518 FAT16 clusters \
(16384 bytes/cluster)
BytesPerSec=512 SecPerClust=32 ResSectors=1 FATs=2 \
RootDirEnts=512 Media=0xf0 FATsecs=256 SecPerTrack=63 \
Heads=255 HiddenSecs=0 HugeSectors=2097144

# ll /media
total 5
drwxr-xr-x 3 root wheel 512 Feb 24 12:25 .
drwxr-xr-x 30 root wheel 1024 Feb 23 09:04 ..
drwxr-xr-x 1 root wheel 4096 Jan 1 1980 ADATA UFD
drwxr-xr-x 3 root wheel 512 Feb 24 12:25 md0
Whoops.  Where is /media/md1?

There is one more mechanism for the whole thing to work correctly: the autofs(5) cache needs to be dealt with.

The first paragraph mentioned that it's automountd(8) that does all the map parsing - including running /etc/autofs/special_media - and the actual mounting. Doing that every time someone accesses the /media directory - or any directory, for that matter - would kill performance.  For this reason, after the kernel component asks automountd(8) to do its magic, it doesn't do so again until some time later.  In most cases this doesn't matter - the list of NFS exports for a given host doesn't change too often - but in the case of removable media it's not acceptable.  The cache needs to be flushed, using "automount -c".  After that, the next lookup in /media will trigger automountd(8), which will query the devices list and refresh the directory contents.

This obviously needs to happen automatically.  And if you actually went and opened /etc/auto_master in a text editor, you would have noticed this:
# When using the -media special map, make sure to edit devd.conf(5)
# to move the call to "automount -c" out of the comments section.
devd(8) is a daemon responsible for listening for notifications from the kernel and running whatever is configured in its configuration file, /etc/devd.conf. There are all kinds of things in there: running utilities to upload firmware for various USB devices, launching moused(8) when a mouse gets connected, switching power profiles, and... discarding autofs(5) caches.  It looks like this:
notify 100 {
match "system" "GEOM";
match "subsystem" "DEV";
action "/usr/sbin/automount -c";
}
If you do "man devd.conf", you will see the description of those events. Note that, just like the "-media" map works the same way for flash drives and encrypted volumes over multipath over iSCSI, this mechanism does not care about any specific hardware either.

Caveats

Two, really.  First: you need to run 11-CURRENT.  Second: the nodes in /media never disappear.  I expect to merge this support to 10-STABLE after the second issue is addressed.

March 12, 2015

IPv6, issues and nginx

Hi all, this will be a longish post about #IPv6, its uptake in India, potential solutions for the interim, and a little bit of Debian news as well. First, to get the good news out of the way: Debian Jessie is slated to be released around April (at least that is the hope). This was mentioned […]

March 08, 2015

New committer: Jason Harmening (src)

February 14, 2015

I love Free Software: Thanks to all the GnuPG contributors

Today is “I love Free Software” Day, a day to thank all the hard-working people behind Free Software. Besides initiating #ilovefs, I also try to write a short thank-you note to one project every year. After thanking Coreboot in 2013, and mpd, ncmpcpp, and MPDroid in 2014, this year I want to thank all the people involved in coding and promoting GnuPG.

Unfortunately I do not remember exactly when I started using the GNU Privacy Guard (GnuPG). I just know that I started using a PGP implementation in 2001 on my GNU/Linux machine - first with some friends from our local Free Software group, to encrypt and sign our data and communication, which was a very cool feeling. Later I tried to convince close friends and family with whom I had private conversations to set it up, so “we can communicate like we do with letters instead of postcards”.
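For anyone who wants to try the same thing today, a minimal GnuPG session looks roughly like this (the key file, address, and filenames are placeholders):

$ gpg --gen-key                                          # interactively create your own key pair
$ gpg --import friend.asc                                # import a friend's public key
$ gpg --encrypt --sign -r friend@example.org letter.txt  # writes the encrypted, signed letter.txt.gpg
$ gpg --decrypt letter.txt.gpg                           # what the recipient runs to read it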

Someone expressing his love for GnuPG with a red #ilovefs balloon (photo: Matthias Kirschner, CC BY-SA).

February 02, 2015

DevConf.cz 2015: Useful Info

I just returned from FOSDEM (will have to write about it when I have more time) and DevConf.cz is just a few days away, so I jumped into the final preparations right from the airport. Are you going to DevConf.cz? Here is some useful info:

  • Venue – I have spoken to several people who were completely surprised that DevConf.cz is not going to be at the campus of FI MUNI, but at the campus of FIT BUT. You can find instructions on how to get to the new venue on the conference website. So make sure you’re going to the right place ;-)
  • Streaming – can’t make it to DevConf.cz? No problem! We will most likely stream all six talk tracks. The stream will be available on our YouTube channel, and it will also be linked on the conference website. The program starts at 8am UTC every day. If you miss the stream, no worries; recordings will be available on our channel as videos immediately.
  • Party – the conference is not just about talks and workshops. There will be a conference party on Friday. Again in Klub Fléda. You can get a ticket at the Red Hat booth at the venue during Friday. Make sure you’ll get it early enough because the limit is 600 people and we can’t exceed it because of safety limits of the club. Speakers and volunteers won’t need to get a ticket because their badges will serve as such.
  • Apps – you can have the schedule and important info in your pocket. We’ve created apps for Android, Blackberry 10, SailfishOS, just look up for them in respective catalogs. We’ve also created a DevConf.cz guide for Guidebook.com apps. You will find a schedule and important and useful info in it, all for offline usage.
  • Lightning talks – got an idea for a talk? You still have a chance to speak at DevConf.cz 2015: you can propose a lightning talk in the morning, people will vote during the day, and the proposals with the most votes will be picked for the last hour of the schedule.
  • Refreshment – besides your brain, you also need to feed your stomach at the conference. We will have refreshments at the venue again, so you won’t die of hunger if you stay there listening to talks all day long. As a response to demand, we will have Club Mate (not for free, but for a very reasonable price)! At the campus, there will be a nice cafe open if you’d like to have better coffee, some dessert, or beer (they have a great beer, Richard, from a local microbrewery). If you want a full meal, there are several good restaurants within 100m of the campus, including a really good Thai place.

See you in Brno!


February 01, 2015

MariaDB turns 5!

I stopped working on MySQL at Sun Microsystems in late 2009 (after a lengthy period of garden leave) to join Monty Program Ab, and was greatly anticipating a MariaDB release that we could take to market. The first GA release of MariaDB came out on February 1, 2010 – MariaDB 5.1.42. Today is MariaDB Server’s 5th birthday!

We didn’t even want to call it GA back then — we referred to it as a “stable” release. We didn’t make our own builds because we figured source code tarballs were good enough, so builds were made and hosted at OurDelta. It took some months (around August 2010) before we moved the release notes to the Knowledgebase (which you’ll notice has moved from kb.askmonty.org to its current location) from the old front-page wiki install that we had at askmonty.org.

I didn’t go to the first company meeting in Malaga due to having chickenpox, so my first meeting was the one we did in Reykjavik, Iceland. We did it towards the end of February 2010, and planned it in literally a month – maybe as a celebration that we brought 5.1 to market on time, and also to plan 5.2.

Speaking of companies, we were Monty Program Ab (professionally this quickly became MariaDB Services Ab), then SkySQL Ab (via merger), and finally MariaDB Corporation Ab (via re-branding). Shortly before the SkySQL Ab merger, we even saw the MariaDB Foundation appear.

Anyway, what have we released? MariaDB 5.1, MariaDB 5.2, MariaDB 5.3, MariaDB 5.5, MariaDB 10.0, MariaDB Galera Cluster 5.5 & 10.0, a special MariaDB 5.5 build with TokuDB, and a special MariaDB build with FusionIO improvements. To boot, we also have three client libraries (connectors, if you must): C, Java, and ODBC.

So 5 major server releases (7 if you count the Galera series), and we’re now working on MariaDB 10.1. I count 88 releases of the server across various versions (with breakdowns: 9 alphas, 11 betas, 7 release candidates and 61 GAs). We’ve had 23 Galera releases and 15 releases for the various client libraries.

We are shipping in all major Linux and BSD distributions. In many, we are even the default.

This birthday is a nice time to look back at our achievements, but also to remind ourselves to not rest on our laurels and continue to focus on growth. The last sanctioned press release talks of over 2 million users globally. 

Thank you to all our users. Thank you to all the contributors and developers. Here’s to a lot more adoption, growth, releases and technology improvements!

January 28, 2015

The GHOST Vulnerability

Heads up everybody – a Linux vulnerability known as GHOST (CVE-2015-0235), discovered by Qualys, has recently been publicized. This particular vulnerability is a nasty one, since it allows for remote code execution.

The vulnerability has been exhaustively documented in this Security Advisory, which you may find interesting. In short, the vulnerability exists within glibc in __nss_hostname_digits_dots(), which deals with hostname resolution via the gethostbyname() call.

Am I Vulnerable?

Yes, most likely. In order to address this, you’ll want to ensure that you have updated and rebooted your systems.

Debian and Ubuntu have updated packages for their supported distributions. Run apt-get update && apt-get dist-upgrade to bring your system up to date, and then reboot to ensure no references to the old libraries still exist.
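On a Debian or Ubuntu box the whole sequence is short. Note that the affected upstream versions are roughly glibc 2.2 through 2.17, but distributions backport fixes, so trust the package update rather than the version string alone:

$ ldd --version | head -n 1                           # shows the installed glibc version (informational only)
$ sudo apt-get update && sudo apt-get dist-upgrade    # pull in the patched glibc packages
$ sudo reboot                                         # make sure nothing keeps the old library mapped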

For other popular distributions, please follow their equivalent steps for upgrading packages.  For more information, you can follow our GHOST guide.

Is Linode Infrastructure vulnerable?

No. Our Security Team has worked to protect our infrastructure from this vulnerability and we have taken the appropriate steps to address this issue on all of our systems.

January 22, 2015

FLOSSK supports Wiki Academy Kukës

FLOSSK will support the Wiki Academy taking place on 22 and 23 March in Kukës. The Wikipedia Academy involves training young people to contribute to Wikipedia, ending with a full weekend dedicated to writing articles for the free encyclopedia Wikipedia.

January 21, 2015

Books and Music in 2014

As tradition mandates, here's my yearly post about the music releases I've enjoyed and the books I've read over the previous year.

Click on the images for details.
Top 10 music releases of 2014

Top 5 books I've read in 2014
You can also see last year's list.

January 20, 2015

Smart things powered by snappy Ubuntu Core on ARM and x86

“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently or more insightful security systems. Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS, drivers and device-specific software, and that firmware is almost never updated. So let’s fix that!

Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also being thoughtful about the responsibility that entails. So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices.

Transactional updates. App store. A huge range of hardware. Branding for device manufacturers.

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive BeagleBone Black to the $35 Odroid-C1 (1 GHz processor, 1 GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in and access to the amazing work of thousands of communities collaborating on GitHub and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!

Education Freedom Day registration launched!


We have just opened Education Freedom Day registration, scheduled for March 21st, 2015. For its second edition EFD has been moved to March to make it easier to celebrate in both the southern hemisphere and China (at least…), and we hope to cater to more events this year.

As usual for all our Freedom celebrations, the process is the same: you get together and decide to organize an event, then create a page in our wiki and register your team. As the date approaches you add more information to your wiki page (or to your organization's website, which is linked from the wiki), such as the date and time, the location and what people can expect to see.

Education Freedom Day is really the opportunity to review all the Free Educational Resources available, how they have improved since last year, and what you should start planning to deploy in the coming months. More importantly, it is a celebration of what is available and a chance to make people aware of it!

So prepare well and see you all in two months to celebrate Education Freedom Day!

Celebrate EFD with us on March 21, 2015!


January 19, 2015

FLOSSK's comments on the draft law on the interception of electronic communications in Kosovo

On January 19, in a letter sent to the Parliamentary Committee on European Integration, FLOSSK responded to the draft law on the interception of electronic communications in Kosovo. The letter lists the reasons why this draft law, in its current form, is harmful to the privacy of the citizens of Kosovo and therefore unacceptable to us.
 

Key Update

I’m a fossil, apparently. My oldest PGP key dates back to 1997, around the time when GnuPG was just getting started – and I switched to it early. Over the years I’ve been working a lot with GnuPG, which perhaps isn’t surprising. Werner Koch was one of the co-founders of the Free Software Foundation Europe (FSFE), so we share quite a long and interesting history. I was always proud of the work he did – and together with Bernhard Reiter and others I did what I could to support GnuPG when most people did not seem to understand how essential it truly was, and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial in starting Kolab's more than ten-year track record of supporting GnuPG development. And the usability of GnuPG in particular has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately continued to be an unsolved problem, but hopefully it will be solved “real soon now.”

 

In any case, I’ve been happy with my GnuPG setup for a long time, which is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <email address hidden>
uid                  Georg C. F. Greve <email address hidden>
uid                  Georg C. F. Greve <email address hidden>
uid                  Brave GNU World <email address hidden>
uid                  Georg C. F. Greve <email address hidden>
uid                  Georg C. F. Greve <email address hidden>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <email address hidden>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <email address hidden>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and moved the actual subkeys onto a hardware token – naturally an FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact, with 58,470 signatures and counting, so the key itself is most likely still intact and has never been compromised by sitting on a networked machine. But unfortunately there is no way to extend the length of a key, and while 1024 bits is probably still okay today, it’s not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <email address hidden>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <email address hidden>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <email address hidden>
uid                  Georg C. F. Greve (Kolab Community) <email address hidden>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <email address hidden>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <email address hidden>
uid                  Georg C. F. Greve (digitalSTROM.org Board) <email address hidden>
uid                  Georg C. F. Greve <email address hidden>
uid                  Georg C. F. Greve (GNU Project) <email address hidden>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.

 

Not that this key has any chance to ever again make it among the top 50… but then that is a good sign, insofar as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.
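
If you ever want to script a comparable key generation instead of driving gpg by hand, here is a minimal sketch using the third-party python-gnupg module – purely an illustration of the parameters involved (RSA, 4096 bits, an expiry date), not how the key above was created; all identity details, the passphrase and the home directory are placeholders:

#!/usr/bin/env python3
# Illustration only: generate a 4096-bit RSA key with an expiry date using the
# third-party python-gnupg module (it drives the gpg binary underneath).
# Identity details, passphrase and GnuPG home directory are placeholders.
import gnupg

gpg = gnupg.GPG(gnupghome="/home/user/.gnupg-test")  # placeholder directory

key_input = gpg.gen_key_input(
    key_type="RSA",
    key_length=4096,
    name_real="Example User",
    name_email="user@example.org",
    expire_date="5y",            # roughly the five-year expiry used above
    passphrase="correct horse",  # placeholder passphrase
)

key = gpg.gen_key(key_input)
print("New key fingerprint:", key.fingerprint)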

 

 

30 years of FSF

After an exciting weekend celebrating Hardware Freedom Day, what could possibly be better than going back to the very inspiring video made to celebrate the 30th anniversary of the Free Software Foundation? Indeed, it’s been made using Free Software only and goes through the work of the foundation over the past thirty years. It’s actually nice to look at, positive and very well animated. We will definitely encourage all our Software Freedom Day teams to use it during their events. But let us say no more and let you enjoy it if you’ve missed it so far:

And then, for the ones into this kind of work, and blender in particular, you can find a detailed explanation of the challenges that the makers of the work went through and how they fixed them right here. Definitely a great read into the whole process from design to finish. Great job guys! And of course a happy 30th anniversary to the FSF from the Digital Freedom Foundation and all its members!


January 16, 2015

Linode Datacenter Expansion

Happy New Year! 2014 was a great year at Linode and we’re very excited about all the projects we have planned for 2015.  Here’s a sneak peek at our datacenter expansion plans.

Singapore, SG

First, we’ll be expanding our presence in the Asia-Pacific region with a datacenter in Singapore. This new deployment will support all of the existing Linode services, including NodeBalancers, Backups and native IPv6, and features our latest-generation SSD-based servers and 40 Gbit connectivity to each host machine. The facility will be a fantastic alternative/complement to our Tokyo location, with great connectivity to Australia, China, Hong Kong, India and the rest of Asia.

Most of the networking and server hardware has been installed, and our team hopes to have Singapore ready for customer Linodes by the end of April. Unfortunately, some Cisco gear is backordered until the beginning of April. In the interim, we are adding more transit providers and peers to improve connectivity within the region. (updated 2015-03-10)

Frankfurt, Germany

We’re also expanding our European footprint with a deployment into Frankfurt, Germany. This will complement our UK-based facility. Interestingly, more than 35% of all European cloud traffic flows through Frankfurt and this location will enable customers to comply with Germany’s Federal Data Protection Act (a.k.a., Bundesdatenschutzgesetz (BDSG)) by hosting their data on German soil. Frankfurt is also centrally located in continental Europe, and we believe this will be a great location for all of Europe.

The Linode Frankfurt deployment will be coming in the next few months.

Tokyo, Japan

Our Tokyo facility has been a great success. So much so that we’ve actually exhausted all of the resources available there. Linodes in Tokyo are in limited supply and the datacenter is frequently sold out. While Singapore will be an excellent alternative for a presence in this region, we are also exploring a second Tokyo datacenter.

Stay tuned for the official announcements of these new locations and several other exciting developments coming soon!

January 13, 2015

Django Girls Workshop at DevConf.cz 2015

One of the events co-hosted with DevConf.cz 2015 this year is a Django Girls workshop. It’s organized for women who want to learn to code websites using the Django framework. It takes place in our lovely Red Hat lab on the campus of the Faculty of Information Technology of Brno University of Technology on the 5th of February, one day prior to the conference.

Admission is free and, thanks to the sponsors (Red Hat and ElasticSearch), you can even get financial aid for travel to Brno and accommodation. The deadline for registration is January 15th, so don’t hesitate and sign up!


December 23, 2014

GNOME Builder copr now for Rawhide only

GNOME Builder is under heavy development. This usually implies that such an application might require very new versions of its dependencies.

Upstream recently bumped their dependencies, and now require things that are only in Rawhide.

I have no intention to provide development builds of Gtk3 (among other things) in a Fedora 21 copr, as that might imply either breaking half of the distro, or having to rebuild it.

As a result, the GNOME Builder copr will from now on be Rawhide-only.

I have dropped the Fedora 21 repos; they won't be updated any more.

If you were using it on Fedora 21, please delete it:

# rm -f /etc/yum.repos.d/_copr_bochecha-gnome-builder.repo

If you still want to try GNOME Builder on Fedora 21, you'll now have to go the jhbuild route.

November 27, 2014

Lollipopp’d

I successfully updated my Nexus devices with Android 5.0 aka Lollipop earlier this week. Finally. It took three tries: the download failed the first time, the install failed the next time, and then it finally went through. Here is what I’m impressed with: * Look and feel polish – the visual change using new material […]

November 22, 2014

Release party in Barcelona


Once again – and there have been 16 of these by now – the ubuntaires celebrated the release party of the newest Ubuntu version, in this case 14.10 Utopic Unicorn.

This time we went to Barcelona, to the Raval, right in the centre, thanks to our friends at the TEB.

As always, we started by explaining what Ubuntu is and how our Catalan LoCo Team works, and later Núria Alonso from the TEB explained the Ubuntu migration carried out at the Xarxa Òmnia.


The installation room was full from the very first moment.


There was also a very productive self-guided workshop on how to build an Ubuntu metadistribution.


 

And in another room, there were two Arduino workshops.


 

And, of course, ubuntaires love to eat well.

 


 

Pictures by Martina Mayrhofer and Walter García, all rights reserved.

 
 

November 08, 2014

OpenStack on a diet, redux

Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago.

The key ideas in that draft are:

Only call services “core” if the user can detect them.

How the cloud is deployed or operated makes no difference to a user. We want app developers to be able to target the services they can actually see and use, regardless of how the cloud behind them is built.

Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible.

Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick, any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around a long time. Signalling that something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.

Require that “common” services can be self-deployed.

Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider; they can usually be run by users themselves, under their own account, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it; if a particular cloud doesn’t offer it, its users can self-provide it. All this means is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.

Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service.

For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.

Keep the whole set as small as possible.

We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, likely to be more efficiently implemented, easier to attract developer participation. In general, if something can be cut from the core specification it should. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.

In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:

  • Keystone (without identity, nothing)
  • Nova (the basis for any other service is the ability to run processes somewhere)
    • Glance (hard to use Nova without it)
  • Neutron (where those services run)
    • Designate (DNS is a core aspect of the network)
  • Cinder (where they persist data)

I would consider these to be common OpenStack services:

  • SWIFT (widely deployed, can be self-provisioned with Cinder block backends)
  • Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
  • Horizon (widely deployed, but we want to encourage innovation in the dashboard)

And these I would consider neither core nor common, though some of them are clearly on track there:

  • Barbican (not widely implemented)
  • Ceilometer (internal implementation detail, can’t be common because it requires access to other parts)
  • Juju (not widely implemented)
  • Kite (not widely implemented)
  • HEAT (on track to become common if it can be self-deployed, besides, I eat controversy for breakfast)
  • MAAS (who cares how the cloud was built?)
  • Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
  • Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
  • Triple-O (user doesn’t care how the cloud was deployed)
  • Trove (not widely implemented, might make it to “common” if widely deployed)
  • Tuskar (see Ironic)
  • Zaqar (not widely implemented)

In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean. If we had “OpenStack HPC” and “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple – use “common” as a layer, but be explicit that it will change to reflect common practice (of course, anything in common is self-reinforcing in that new players will defer to norms and implement common services, thereby entrenching common unless new ideas make services obsolete).

November 03, 2014

osquery is neat

Facebook recently open-sourced osquery. It gives you operating system data via SQL queries! It’s very neat, and you can test it even on Mac OS X (it works on that platform and Linux). Of the projects linked in this post, it is by far the one with the most advanced functionality.
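
If you would rather drive it from a script than from the interactive shell, here is a minimal sketch in Python (assuming the osqueryi binary is installed and on your PATH; the table and columns are just examples):

#!/usr/bin/env python3
# Minimal sketch: run a single osquery SQL statement from a script, assuming
# the osqueryi shell is installed and on the PATH. Table/columns are examples.
import json
import subprocess

query = "SELECT pid, name, path FROM processes LIMIT 5;"
result = subprocess.run(
    ["osqueryi", "--json", query],
    capture_output=True, text=True, check=True,
)

for row in json.loads(result.stdout):
    print(row["pid"], row["name"])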

I noticed that, rather quickly, a PostgreSQL project called pgosquery appeared, based on Foreign Data Wrappers, with a similar idea. (Apparently it was written in less than 15 minutes – a much lower learning curve than the regular MySQL storage engine interface.)

I immediately thought about an older MySQL project, by Chip Turner (then at Google, now at Facebook), called mysql-filesystem-engine. This idea was kicking around in 2008. I was intrigued by hearing about this at a talk (probably at the MySQL Conference & Expo); it’s a pity no one took this further.

On a similar tangent, did you also know that there is the option to use MySQL as storage via FUSE (see: mysqlfs)? An article by Ben Martin shows some practical examples.

In its heyday, MySQL had many storage engines (maybe around 50). Wikipedia has an incomplete list. I see some engines on that list and note that some of those folks are now also creating MongoDB backends — competition. At MariaDB we are probably shipping the most storage engines of any MySQL-based distribution; however, I think we could be doing an even better job of working with upstream vendors and figuring out how to support and augment business around it.

October 23, 2014

Ten years of Ubuntu

Today marks 10 years of Ubuntu and the release of the 21st version. That is an incredible milestone and one which is worthy of reflection and celebration. I am fortunate enough to be spending the day at our devices sprint with 200+ of the folks that have helped make this possible. There are of course hundreds of others in Canonical and thousands in the community who have helped as well. The atmosphere here includes a lot of reminiscing about the early days and re-telling of the funny stories, and there is a palpable excitement in the air about the future. That same excitement was present at a Canonical Cloud Summit in Brussels last week.

The team here is closing in on shipping our first phone, marking a new era in Ubuntu’s history. There has been excellent work recently to close bugs and improve quality, and our partner BQ is as pleased with the results as we are. We are on the home stretch to this milestone, and are still on track to have Ubuntu phones in the market this year. Further, there is an impressive array of further announcements and phones lined up for 2015.

But of course that’s not all we do – the Ubuntu team and community continue to put out rock solid, high quality Ubuntu desktop releases like clockwork – the 21st of which will be released today. And with the same precision, our PC OEM team continues to make that great work available on a pre-installed basis on millions of PCs across hundreds of machine configurations. That’s an unparalleled achievement, and we really have changed the landscape of Linux and open source over the last decade. The impact of Ubuntu can be seen in countless ways – from the individuals, schools, and enterprises who now use Ubuntu; to proliferation of Codes of Conduct in open source communities; to the acceptance of faster (and near continuous) release cycles for operating systems; to the unique company/community collaboration that makes Ubuntu possible; to the vast number of developers who have now grown up with Ubuntu and in an open source world; to the many, many, many technical innovations to come out of Ubuntu, from single-CD installation in years past to the more recent work on image-based updates.

Ubuntu Server also sprang from our early desktop roots, and has now grown into the leading solution for scale out computing. Ubuntu and our suite of cloud products and services is the premier choice for any customer or partner looking to operate at scale, and it is indeed a “scale-out” world. From easy to consume Ubuntu images on public clouds; to managed cloud infrastructure via BootStack; to standard on-premise, self-managed clouds via Ubuntu OpenStack; to instant solutions delivered on any substrate via Juju, we are the leaders in a highly competitive, dynamic space. The agility, reliability and superior execution that have brought us to today’s milestone remains a critical competency for our cloud team. And as we release Ubuntu 14.10 today, which includes the latest OpenStack, new versions of our tooling such as MaaS and Juju, and initial versions of scale-out solutions for big data and Cloud Foundry, we build on a ten year history of “firsts”.

All Ubuntu releases seem to have their own personality, and Utopic is a fitting way to commemorate the realisation of a decade of vision, hard work and collaboration. We are poised on the edge of a very different decade in Canonical’s history, one in which we’ll carry forward the applicable successes and patterns, but will also forge a new path in the twin worlds of converged devices and scale-out computing. Thanks to everyone who has contributed to the journey thus far. Now, on to Vivid and the next ten years!

October 16, 2014

Ubuntu Security Update on Poodle (CVE-2014-3566) and SSLv3 Downgrade Attack

The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an SSLv3 downgrade vulnerability.

Vulnerability Summary

“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards­ compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.” -https://www.openssl.org/~bodo/ssl-poodle.pdf

A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (ie, force a downgrade to SSLv3; CVE to be assigned).  Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566).  Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can now exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute force attacks.  For example, an attacker who is able to manipulate the encrypted connection is able to steal HTTP cookies.  Note that the protocol downgrade vulnerability exists in web browsers and is not implemented in the SSL libraries.  Therefore, the downgrade attack is currently known to exist only for HTTP.

OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV).  When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).

The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default since the SSLv3 protocol is known to be broken.  However, it will take time for sites to disable SSLv3, and some sites will choose not to, in order to support legacy browsers (eg, IE6).  As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.
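
To make the client-side part of that recommendation concrete, here is a minimal sketch (not part of Ubuntu's packaging work; the hostname and port are placeholders) of how an application using Python's ssl module can refuse SSLv3 outright:

#!/usr/bin/env python3
# Minimal sketch of the client-side fix: negotiate the best protocol available,
# but refuse to fall back to SSLv2/SSLv3. Hostname and port are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)          # best common protocol...
context.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3   # ...but never SSLv2/SSLv3

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())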

Ubuntu’s Response:

Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (ie SSLv3 must be disabled everywhere).  Ubuntu’s response provides a path forward to transition users towards safe defaults:

  • Add TLS_FALLBACK_SCSV to openssl in a USN:  In progress, upstream openssl is bundling this patch with other fixes that we will incorporate
  • Follow Google’s lead regarding chromium and chromium content api (as used in oxide):
    • Add TLS_FALLBACK_SCSV support to chromium and oxide:  Done – Added by Google months ago.
    • Disable fallback to SSLv3 in next major version:  In Progress
    • Disable SSLv3 in future version:  In Progress
  • Follow Mozilla’s lead regarding Mozilla products:
    • Disable SSLv3 by default in Firefox 34:  In Progress – due Nov 25
    • Add TLS_FALLBACK_SCSV support in Firefox 35:  In Progress

Ubuntu currently will not:

  • Disable SSLv3 in the OpenSSL libraries at this time, so as not to break compatibility where it is needed
  • Disable SSLv3 in Apache, nginx, etc, so as not to break compatibility where it is needed
  • Preempt Google’s and Mozilla’s plans.  The timing of their response is critical to giving sites an opportunity to migrate away from SSLv3 to minimize regressions

For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit http://www.ubuntu.com/usn/.

 

September 18, 2014

TL;DW for Clojure Data Science

Edmund Jackson talked at the 2012 Clojure/Conj, and you can see his talk here.

I took these notes as I watched it:
  1. What is "data science"?
    1. "That realm of endeavor that requires, simultaneously, advanced computational and statistical methods."
    2. Some people aren't sure whether "data science" is a thing, or just data analysis dressed up with a fancy name. That question amuses me.
  2. What's new, such that everybody suddenly cares about data science?
    1. widely available computing resources, open source tools such as R, and large amounts of data available in private companies and in public
    2. Compares to early days of Linux, when there was a bunch of new stuff that everybody could hack on
  3. Interactive tools aren't enough; you're not taking some data, analyzing it, and coming back with the answer. You need platform features like native language speed, data structures, language constructs, connectivity, and QC in order to embed your analysis in business processes.
  4. The tools with better analysis features (e.g., R, Mathematica) lack the platform features, and the tools with better platform features (he focuses primarily on C++ as his example here) lack the analysis features.
  5. Python is in the sweet spot, with platform features and (via numpy, scipy, and pandas) analysis features. But:
    1. It's full of mutable data!
    2. The mode of expression in imperative languages poorly matches the content of expression when you're dealing with maths.
  6. F#, Scala, and Clojure are all functional, and therefore (immutable data, more natural expression of maths) better alternatives than Python.
  7. Clojure yay! points:
    1. Native: Incanter, Storm, Cascalog, Datomic
    2. JVM: Mahout (ML on Hadoop), jBLAS, Weka (Java lib with many ML algorithms)
    3. Interop: Rincanter (call out to R), JNI
  8. From here he goes into calculating the entropy of a distribution, and the relative entropy of different distributions (sketched in the snippet just after this list).
  9. Demonstrates using relative entropy fns in Datomic queries
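
For reference, the two quantities mentioned in point 8 are easy to state outside Clojure; here is a minimal sketch in Python (purely for illustration) of Shannon entropy and relative entropy (KL divergence):

# Minimal sketch (Python rather than Clojure, purely for illustration):
# Shannon entropy H(P) = -sum(p * log2(p)) and relative entropy
# (Kullback-Leibler divergence) D(P||Q) = sum(p * log2(p / q)).
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution given as probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def relative_entropy(p, q):
    """KL divergence D(P||Q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(entropy([0.5, 0.5]))                       # 1.0 bit: a fair coin
print(relative_entropy([0.5, 0.5], [0.9, 0.1]))  # > 0: the distributions differ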

September 11, 2014

Mozilla Webmaker at Olivarez College Tagaytay a success


The Mozilla Webmaker party at Olivarez College Tagaytay on September 5, 2014 was a success. It was attended by different departments of Olivarez College Tagaytay in Computer Laboratory 2. Since the laboratory only has 20 system units, the participants were split into two batches, one in the morning and the other in the afternoon. I gave the introduction to Mozilla, Mr. Ian Mark Martin discussed and demoed “Thimble”, and Mr. Leo Caisip discussed “Popcorn Maker”; both of them attended the Mozilla PH orientation for Webmaker mentors on August 16, 2014 at the Mozilla Community Space Manila. The event ended at exactly 4:00pm, with the afternoon batch made up mostly of the nursing department.



We also distributed some Mozilla swag (bollards, Mozilla stickers, Mozilla tattoos and Mozilla pins) to participants after the event. Based on their survey responses after the successful event, they are requesting another similar event. The school's internet was not very stable that day, but we still managed to make the event a success.

 


Pictures can be found here:  https://www.flickr.com/photos/83515207@N04/sets/72157646987948838/

September 04, 2014

TL;DW for "How To Design A Good API and Why it Matters"

Josh Bloch's Google Tech Talk video How To Design A Good API and Why it Matters is about an hour long, and well worth your time. It's focused on OOP, but has lots of good principles that can be followed elsewhere.

In case you don't have an hour right now, here's a summary/index kind of thing that points out the bits I thought were most important.
  1. 6:27: Characteristics of a good API:
    1. Easy to learn
    2. Easy to use, even without documentation
    3. Hard to misuse
    4. Easy to read and maintain code that uses it
    5. Sufficiently powerful to satisfy requirements
    6. Easy to evolve
    7. Appropriate to audience
  2. 7:52: Gather requirements, but differentiate between true requirements (which should take the form of use cases) and proposed solutions.
  3. 10:02: Start with a short spec; one page is ideal.
    1. Agility trumps completeness at this point.
    2. Get as many spec reviews from as many audiences as possible, modify according to feedback.
    3. Flesh the spec out as you gain confidence.
  4. 15:10: Write to your API early and often
    1. Start writing to your API before you've implemented it, or even specified it properly.
    2. Continue writing to your API as you flesh it out.
    3. Your code will live on in examples and unit tests.
  5. 17:32: Write to SPI [Service Provider Interface]
    1. Write at least three plugins before your release.
    2. Application in Clojure-land: Not sure...
  6. 19:35: Maintain realistic expectations.
    1. You won't please everyone.
    2. Aim to displease everyone equally.
    3. Expect to make mistakes and evolve the API in the future.
  7. 22:01: API should do one thing and do it well.
    1. Functionality should be easy to explain.
    2. If it's hard to name, that's a bad sign.
      1. Example of bad name that I can't leave out of this summary: OMGVMCID
  8. 24:32: API should be as small as possible but no smaller
    1. "When in doubt, leave it out." You can always add stuff, but you can't ever remove anything you've included. (The speaker calls this out as his most important point.)
  9. 26:27: Implementation should not impact API.
    1. Do not over-specify. For example, nobody needs to know how your hash function works, unless the hashes are persistent.
    2. Don't leak implementation details such as SQL exceptions!
  10. 29:36: Minimize accessibility of everything.
    1. Don't let API callers see stuff you don't want to be public, and that includes anything you might want to change in the future.
  11. 30:39: Names matter: API is a little language.
    1. Make names self-explanatory.
    2. Be consistent.
    3. Strive for symmetry. (If you can GET a monkey-uncle, make sure you can PUT a monkey-uncle, too.)
  12. 32:32: Documentation matters.
    1. Document parameter units! ("Length of banana in centimeters")
  13. 35:41: Consider performance consequences of API design decisions.
    1. Bad decisions can limit performance -- and this is permanent.
    2. Do not warp your API to gain performance -- the slow thing you avoided can be fixed and get faster, but your warped API will be permanent.
    3. Good design usually coincides with good performance.
  14. 40:00: Minimize mutability
    1. Make everything immutable unless there's a reason to do otherwise.
  15. 45:31: Don't make the caller do anything your code should do.
    1. If there are common use cases that require stringing a bunch of your stuff together in a boilerplate way, that's a bad sign.
  16. 48:36: Don't violate the principle of least astonishment
    1. Make sure your API callers are never surprised by what the API does.
  17. 50:03: Report errors as soon as possible after they occur.
  18. 52:00: Provide programmatic access to all data that is available in string form.
    1. Rich Hickey makes a similar point here.
  19. 56:15: Use consistent parameter ordering across methods.
    1. Here's a bad example:
      1. char *strncpy (char *dst, char *src, size_t n);
      2. void bcopy (void *src, void *dst, size_t n);
  20. 57:15: Avoid long parameter lists.
  21. 58:21: Avoid return values that demand exceptional processing.
    1. Example: return an empty list instead of nil/null (a small illustration follows this list).
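
As a tiny illustration of point 21 (my own sketch in Python, not from the talk), returning an empty list keeps every caller on the same code path:

# My own sketch of point 21, not from the talk: returning an empty list keeps
# every caller on the same code path, with no special null handling.
def children_of(node, tree):
    """Return the children of `node` in `tree`, or an empty list if it has none."""
    return tree.get(node, [])   # never None

tree = {"root": ["a", "b"], "a": ["c"]}

# Callers can always iterate, whether or not children exist.
for child in children_of("b", tree):
    print(child)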

August 22, 2014

GNU hackers unmask massive HACIENDA surveillance program and design a countermeasure

After making key discoveries about the details of HACIENDA, Julian Kirsch, Dr. Christian Grothoff, Jacob Appelbaum, and Dr. Holger Kenn designed the TCP Stealth system to protect unadvertised servers from port scanning.

According to Heise Online, the intelligence agencies of the United States, Canada, United Kingdom, Australia and New Zealand are involved in HACIENDA. The agencies share the data they collect. The HACIENDA system also hijacks civilian computers, allowing it to leach computing resources and cover its tracks.

Some of the creators of TCP Stealth are also prominent contributors to the GNU Project, a major facet of the free software community and a hub for political and technological action against bulk surveillance. Free software is safer because it is very hard to hide malicious code in a program anyone can read. In proprietary software, there is no way to guarantee that programs don't hide backdoors and other vulnerabilities. The team revealed their work on August 15, 2014 at the annual GNU Hackers' Meeting in Germany, and Julian Kirsch published about it in his master's degree thesis.

Maintainers of Parabola, an FSF-endorsed GNU/Linux distribution, have already implemented TCP Stealth, making Parabola users safer from surveillance. The FSF encourages other operating systems to follow Parabola's lead.

The Free Software Foundation supports and sponsors the GNU Project. FSF campaigns manager Zak Rogoff said, "Every time you use a free software program, you benefit from the work of free software developers inspired by the values of transparency and bottom-up collaboration. But on occasions like these, when our civil liberties are threatened with technological tools, the deep importance of these values becomes obvious. The FSF is proud to support the free software community in its contributions to the resistance against bulk surveillance."

The Free Software Foundation works politically for an end to mass surveillance. Simultaneously, the Foundation advocates for individuals of all technical skill levels to take a variety of actions against bulk surveillance.

About Julian Kirsch, Christian Grothoff, Jacob Appelbaum, and Holger Kenn

Julian Kirsch is the author of "Improved Kernel-Based Port-Knocking in Linux", his Master's Thesis in Informatics at Technische Universität München.

Dr. Christian Grothoff is the Emmy-Noether research group leader in Computer Science at Technische Universität München.

Jacob Appelbaum is an American independent computer security researcher and hacker. He was employed by the University of Washington, and is a core member of the Tor project, a free software network designed to provide online anonymity.

Dr. Holger Kenn is a computer scientist specializing in wearable computing, especially software architectures, context sensor systems, human machine interfaces, and wearable-mediated human robot cooperation.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

About the GNU Operating System and Linux

Richard Stallman announced in September 1983 the plan to develop a free software Unix-like operating system called GNU. GNU is the only operating system developed specifically for the sake of users' freedom. See https://www.gnu.org/gnu/the-gnu-project.

In 1992, the essential components of GNU were complete, except for one, the kernel. When in 1992 the kernel Linux was re-released under the GNU GPL, making it free software, the combination of GNU and Linux formed a complete free operating system, which made it possible for the first time to run a PC without non-free software. This combination is the GNU/Linux system. For more explanation, see https://www.gnu.org/gnu/gnu-linux-faq.

Media Contacts

Zak Rogoff
Campaigns Manager
Free Software Foundation
+1-617-542-5942

"Knocking down the HACIENDA" by Julian Kirsch, produced by GNU, the GNUnet team, and edited on short notice by Carlo von Lynx from #youbroketheinternet is licensed under a Creative Commons Attribution NoDerivatives 3.0 Unported License.

August 13, 2014

SFD Tagaytay 2014 at Olivarez College

I am once again an official organizer for SFD 2014, but this time I will be organizing the event in Tagaytay City, hosted by Olivarez College Tagaytay. The event is scheduled for September 27, 2014.


The venue is their “AMPHITHEATER”, which can hold more than 500 participants. Here are some pictures of the exact venue.

We have also launched online registration; feel free to register using this URL: https://www.eventbrite.com/e/software-freedom-day-2014-at-olivarez-college-tagaytay-tickets-12455543867


June 30, 2014

Scancation - Scanning the Standing Stones of the Outer Hebrides

I just came back from a vacation where Kio and I visited most of the megalithic monuments on the islands of the Outer Hebrides in Scotland. Stone circles are all over the place on these islands, and the biggest one is the Callanish Stone Circle. One of the cool things about these places is that very little of their history is known, so all you can know about them comes from your experience of being around them. Most of them are taller than me, and you get the sense that these places were the sacred spaces of 5000 years ago.

One of the things I say a lot at MakerBot is that they really make the most sense when you connect your MakerBot to your passion. Since I'm into rocks, I scanned a few of my favorite stones and ran them through 123D Catch, which makes a 3D model from up to 70 photos of the object. It’s pretty cool to think that yesterday I was walking among these stones and today I’m printing them out on the MakerBots in my office.

It’s interesting to note that this feels a lot like the old days of vacation film photography. The process of processing the photos into a 3D model feels a lot like when I used to develop celluloid film after a vacation.

Someday, printing 3D models will be normal for everyone, for now, it’s just normal for all the MakerBot operators in the world.

If you decide to go on your own scanning vacation, aka scancation, here’s my process and tips for acquiring models. I use a Canon S110 camera, upload my photos later to the 123D Catch site, and then upload all the models and a zip file of all the photos to Thingiverse, because the photogrammetry software will get better someday and I want to have an archive of the photos so I can make better models later.

 

  • Lighting conditions matter. A cloudy sky is much better than a sunny one so that you can get all the details of your subject. 
  • Fill the frame, but make sure to leave some area around the object in the picture. 123D Catch uses reference points in the object to make everything fit together. 
  • Use all 70 pictures allowed by the software. The more pictures, the better the scan. 
  • Scan weird things. Sometimes the most iconic stuff of a location isn’t the most obvious. Some friends of mine scanned all of Canal St. in NYC and said the interesting parts were the giant piles of trash bags which are one of the local overlooked pieces of landscape art.
  • Don’t forget the top view. If you are capturing a subject that is tall, do your best to get above it and take a picture. A quadcopter could be handy for that.
  • Fix it up with Netfabb. After I upload the photos into the 123D Catch online portal, then I use Netfabb basic to slice off all the weird parts and cut a flat bottom onto the object.
  • Make sure to upload your scans to Thingiverse. We can all make models of your SCANCATION. 

 

Do you have any other scanning tips for those that would like to experiment with vacation scanning? Leave them in the comments!

June 22, 2014

the meaning of a word

i learned the word "feminist" at my first job. I was 15 and a trainee engineer in a hydro power scheme. I recall one young man I worked with asking me urgently if i was a feminist. I asked what that was. he said, "women who hate men". oh.. i'm not one of them....

why would i get a job as the only woman deep in a power station if i hated men? It was a long long time before i heard any other definition of feminist.

June 20, 2014

Launceston June Meeting

G'day all

For this month's Launceston meeting, Phil will be giving us an introduction to NAS4Free, a BSD-licensed fork/continuation of FreeNAS.

2:00pm
Saturday 28th June
Royal Oak
Launceston


As usual, some of us will be meeting for lunch beforehand at 1:00pm.

Hope to see you there!

Google Maps Link

NAS4Free Website
-----
Gov Hack 2014: June 11-13th (Hobart venue)
OpenStack 4th Birthday: June 17th (RSVP here: http://taslug-openstack.eventbrite.com.au/ )
Next Launceston meeting: 2:00pm July 26th (Topic TBC)

June 11, 2014

Hobart meeting - June 19th - (The aptosid fullstory)

Welcome to June. Yep. short days... stout beers. And source. LOTS OF SOURCE! I'm in the
middle of my exam session at uni so won't have time to prepare the usual slides and news
this month.

When: Thursday, June 19th, 18:00 for an 18:30 start
Where: Upstairs, Hotel Soho, 124 Davey St, Hobart.

Agenda:

18:00 - early mingle, chin wagging, discussion and install issues etc

19:00 - Trevor Walkley - aptosid fullstory


    This month's talk will be given by Trevor Walkley, an aptosid
    dev (bluewater on IRC), on building an ISO using the aptosid fullstory
    scripts which are currently held on GitHub (and the 'how to do it' is
    not well known).

    A live build will take place (hopefully Debian sid will cooperate on the
    night), followed by a live installation of the build to the famous milk
    crate computer belonging to Scott (faulteh on IRC).

20:00 - Meeting end. Dinner and drinks are available at the venue during the meeting.

We will probably get to a discussion on the Hobart LCA 2017 bid, ideas for upcoming
Software Freedom Day in September, Committee nomination and voting,
so our pre-talk discussion should be packed full of jam.

Also in June:
28th - Launceston meeting
July:
11-13th - Gov Hack 2014 - There's at least a Hobart venue for this event.
17th - OpenStack 4th Birthday - RSVP here: http://taslug-openstack.eventbrite.com.au/
September:
20th - Software Freedom Day - events in Hobart and Launceston

June 10, 2014

Integrate ToDo.txt into Claws Mail

I have been using Claws Mail for many years now. I like to call it “the mutt mail client for people who prefer a graphical user interface”. Like Mutt, Claws is really powerful and allows you to adjust it exactly to your needs. During the last year I began to enjoy managing my open tasks with ToDo.txt, a powerful but still simple way to manage your tasks based on text files. This allows me not only to manage my tasks on my computer but also to keep them in sync with my mobile devices. But there is one thing I always missed. Often a task starts with an email conversation, and I always wanted to be able to transfer a mail easily to a task in a way that the task links back to the original mail conversation. Finally I found some time to make it happen, and this is the result:

To integrate ToDo.txt into Claws Mail I wrote the Python program mail2todotxt.py. You need to pass the path of the mail you want to add as a parameter. By default the program will create a ToDo.txt task which looks like this:


<task_creation_date> <subject_of_the_mail> <link_to_the_mail>

Additionally, you can call the program with the parameter “-i” to switch to interactive mode. The program will then ask you for a task description and use the provided description instead of the mail subject. If you don’t enter a description, the program falls back to the mail subject. To use the interactive mode you need to install the Gtk3 Python bindings.
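
To give an idea of the mechanics, here is a much simplified, hypothetical sketch of such a script – not the actual mail2todotxt.py; the todo.txt path is a placeholder, and the real program records a link back to the mail rather than just its file path:

#!/usr/bin/env python3
# Hypothetical, simplified sketch -- not the actual mail2todotxt.py. It reads
# the mail file Claws Mail passes in (%f) and appends
# "<creation date> <subject> <path to mail>" to a todo.txt file.
import sys
from datetime import date
from email import message_from_binary_file
from email.header import decode_header, make_header

TODO_FILE = "/home/user/todo/todo.txt"  # placeholder path

def subject_of(mail_path):
    with open(mail_path, "rb") as mail_file:
        message = message_from_binary_file(mail_file)
    return str(make_header(decode_header(message.get("Subject", "(no subject)"))))

if __name__ == "__main__":
    mail_path = sys.argv[1]
    task = "{} {} {}".format(date.today().isoformat(), subject_of(mail_path), mail_path)
    with open(TODO_FILE, "a", encoding="utf-8") as todo:
        todo.write(task + "\n")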

To call this program directly from Claws Mail you need to go to Configuration->Actions and create an action that executes the following command:


/path_to_mail2todotxt/mail2todotxt.py -i %f &

Just skip the -i parameter if you always want to use the subject as the task description. Now you can execute the program for the selected mail by calling Tools->Actions-><The_name_you_chose_for_the_action>. Additionally, you can add a shortcut if you wish; e.g. I use “Ctrl-t” to create a new task.

Now that I’m able to transfer a mail to a ToDo.txt item, I also want to be able to go back to the mail while looking at my open tasks. For this I use the “open” action from Sebastian Heinlein, which I extended with a handler to open Claws Mail links. After you have added this action to your ~/.todo.action.d you can start Claws Mail and jump directly to the referred mail by typing:


t open <task_number_which_referes_to_a_mail>

The original version of the “open” action can be found at Gitorious. The modified version you need to open the Claws-Mail links can be found here.
