Sunday, November 20, 2011

MINIX versus Linux versus BSD

This morning an article was posted to Slashdot in which Andrew Tanenbaum is interviewed.  One question and answer from the interview seemed to draw the most reaction on Slashdot.  The interviewer asked: "If you could go back in time and change the original proprietary MINIX licence to the GPL licence, do you think your system might have become the dominant free OS today?"  Andrew Tanenbaum answered:

Never. The reason MINIX 3 didn't dominate the world has to do with one mistake I made about 1992. At that time I thought BSD was going to take over the world. It was a mature and stable system. I didn't see any point in competing with it, so I focused MINIX on education. Four of the BSD guys had just formed a company to sell BSD commercially. They even had a nice phone number: 1-800-ITS-UNIX. That phone number did them and me in. AT&T sued them over the phone number and the lawsuit took 3 years to settle. That was precisely the period Linux was launched and BSD was frozen due to the lawsuit. By the time it was settled, Linux had taken off. My mistake was not to realize the lawsuit would take so long and cripple BSD. If AT&T had not brought suit (or better yet, bought BSDI), Linux would never have become popular at all and BSD would dominate the world. 
Now as we are starting to go commercial, we are realizing the value of the BSD license. Many companies refuse to make major investments in modifying Linux to suit their needs if they have to give the code to their competitors. We think that the BSD license alone will be a great help to us, as well as the small size, reliability, and modularity.

My first UNIX experience was in the 1984-85 timeframe, and my first job out of college was developing software for intelligent I/O controllers for a UNIX System V mini-computer.  I remember what commercial UNIX was like in those days. You may have wanted it, but the options were limited and expensive.  On the 80286, you had XENIX.  When the i386 came out, there were a number of nice options including Interactive 386/ix and even SCO UNIX.  Yes, SCO had a decent product back in the day.

When I could finally afford a computer capable of running some UNIX-ish system, I found myself on the consumer side of the question above.

From my perspective, I didn't use MINIX because I viewed it as an educational and teaching OS. Its desired user base was not "real" users doing non-academic work.  We had experimented with it in our labs at work and found it quite primitive in comparison to the "real UNIX" we were used to.  Personally, I found the acquisition process painful as well, but all UNIXy systems were painful to get back then.  The focus on educational users was the deciding factor for me.  I don't ever want to be the "odd user" of anything, the one who is not in the desired target audience for a product.

Why did I choose a Linux distribution over a BSD?  I honestly don't remember.  My vague recollection is that I preferred System V to BSD systems and Linux leaned toward System V.  I doubt this was a factor for most others though.  If I had to guess, I would go back and look at how you had to obtain each system, community responses to newbies, and so on. Was the AT&T lawsuit a factor? Maybe. Linux was certainly perceived to be immune from that lawsuit among those I knew.  It did not suffer from that heritage.

Finally, I want to look back with the free software community building experience I now have.  Viewed through this perspective and the prism of time, I think the answer has a lot to do with what we should have learned from Google Summer of Code: a project has to be easy to obtain, easy to get started with, easy to contribute to, and have a vibrant and friendly community.  The license is important, but as long as it imposes no legal impediments or obligations, it won't stop most people.  In the old days, MINIX was not really easy to obtain and was not focused on general use. It was not available as an impulse download. That was enough of a hurdle to stop a lot of folks.

When one examines the choices faced by someone who wanted a UNIX-like system on their personal computer in the early 1990s, it is easy to see how Linux was the default choice.  It simply did not have the "targeted to teaching operating systems" stigma, was easy to obtain, and didn't have a lawsuit looming over its head.

But one of the nice things about free software is that if there is interest, a project will continue on.  MINIX 3 is a great OS that has a BSD-style license, is easy to obtain, and its developers are clearly interested in MINIX 3 being used for more than teaching operating systems design.  Variety is the spice of life.  I would recommend that you give it a try and tell them I sent you.

Thursday, November 10, 2011

Open Source and Generational Differences

It is time again for another entry from guest blogger Chris Johns. Chris and I have chatted and emailed a lot over the past few months about the issues in this post. They are tough because it is always hard to question your decisions and embrace change. But it is critical to do so with anything that is long-term in your life. RTEMS is a long-term software project and we need to embrace self-examination and change.

Developers start projects to scratch itches or to bring about change. They join projects as users because they need to use a piece of software. They get involved because they need to fix bugs or develop new features. The reasons are many, well documented, and understood by those who work in or around open source software. What is not well understood is what happens when a project becomes old enough that generational change is needed, and those who started the project reach an age where they no longer have the energy, mental capacity, or desire. As a project and its leadership age, do they move from being intensely productive developers to mentors and governors of the project? Understanding this change is difficult because the interests and focuses of the newer generations are different and sometimes clash with those of the original developers, yet both can be right and neither wrong. The primary function of the project may stay the same while the way it is developed and maintained changes.

Open source is starting to reach this point, and some projects have such long life cycles in user projects that it is becoming an issue. RTEMS is such a project. It is used in space flight, and some new projects do not take flight until 2018. Being open source, each user has the code and can make changes long past the life of the project, but it is the project and its community this discussion is about.

RTEMS is now 22 years old. It is able to drink, vote, and hold a driver's license in most countries. It has experimented with a few things it should not have and so far has not been in trouble with the law. You could say it has had a stable and happy upbringing. RTEMS is now looking to the future and to life without its current custodians.

RTEMS at its core is a collection of C source files that are built into a C library and linked with user application code to produce a single executable image, often embedded into a custom piece of hardware. The key factors for the user of this device are performance, resources, and stability. The key factors for the developer of this device are availability of source code, easy-to-use software interfaces, easy integration into a team environment, and stability of the project. The key factors for the maintainers of RTEMS are the ability to effectively integrate changes, respond to hardware changes, stable infrastructure, and the ability to attract new developers. Developers are the food source that feeds, refreshes, and sustains a project.
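To make the "library linked with application code" model concrete, here is a minimal sketch of a classic RTEMS application. The configuration macros follow the classic RTEMS API of this era, but exact macro names and defaults vary between releases, so treat the details as illustrative rather than authoritative:

```c
/* Sketch of a minimal RTEMS application. The user supplies an Init
   task plus configuration macros; including <rtems/confdefs.h> last
   turns those macros into the configuration tables the RTEMS library
   expects. The whole thing links against librtems to form a single
   executable image. */
#include <rtems.h>
#include <stdio.h>
#include <stdlib.h>

rtems_task Init(rtems_task_argument ignored)
{
  printf("Hello from an RTEMS application\n");
  exit(0);
}

/* Configuration: which drivers and resources this image needs. */
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_MAXIMUM_TASKS 1
#define CONFIGURE_INIT
#include <rtems/confdefs.h>
```

Building this requires an RTEMS cross toolchain and an installed BSP, which is exactly why build and installation issues matter so much to developers.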

RTEMS in its post-toddler years moved to a new version control tool called CVS that allowed concurrent development of the code. It was liberating because a single master copy of the code no longer had to be maintained by hand. Before CVS, patches were emailed to the maintainer, merged, and then released back to developers as tar files. With CVS this task could be spread among a number of trusted developers. RTEMS also moved from custom makefiles to autoconf and automake. This improved the productivity of the developers, allowing the code to be configured and built on a range of host operating systems. RTEMS still uses these same tools 10 to 15 years later and they still work. The developers are comfortable with their work flow and know the problems and issues they have. So why the need to change? There are problems, and over time they have grown in size as the project has grown. What were problems are now distant memories, and all we have left are the new problems that came with the tools.

We have files in places that have long since lost their meaning. The board support packages are an example. They are located under 'c/src/lib/libbsp' when they could be located in 'libbsp' or even 'bsps'. This path does not affect the build time or the disk space used, and the developers know the path very well, so why is it a problem? Because to anyone else it makes no sense. Any new user of RTEMS, and by new I mean anyone who has joined in the last 10 years, would have no idea why this structure exists. RTEMS used to have an Ada version, all code was under 'c' or 'ada', and the C source was under 'c/src'. Why not move the files? We cannot, because CVS does not have a rename command and repository hacks are something we discourage.
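To illustrate the limitation, here is a hedged sketch of what such a move looks like with and without rename support. The file path is hypothetical and the commands are illustrative only:

```shell
# In CVS a "rename" is a remove plus an add, so the file's
# revision history is severed (hypothetical BSP file shown):
cvs remove -f c/src/lib/libbsp/sparc/erc32/startup.c
cvs add bsps/sparc/erc32/startup.c    # history starts over here

# A version control tool with rename support (git, for example)
# carries history across the same move:
git mv c/src/lib/libbsp bsps
git commit -m "Move board support packages to bsps/"
```

The alternative in CVS, editing the repository files on the server directly, preserves history but is exactly the kind of repository hack the project discourages.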

Would we move them if CVS allowed it? Maybe, but this affects the build system. Why is that a problem? Is there something wrong with it? Building RTEMS is complex. As a user of RTEMS, a release comes with all the autotools-generated files in place, ready to work. You can configure RTEMS with a few options passed to configure, plus a few more passed on the make command line to select among a range of BSP-specific options, and then at runtime you have a large array of configurations and runtime options. Are these documented? Only a small number are. The user needs to look into the source to find the full set, and even for a seasoned developer this can be complex, inaccurate, or incomplete. As a user you just build RTEMS, and that does happen and it works well. By well I mean you get a library of code that is stable and will perform the task asked of it.

As a developer you need to work with the build system, and this is where problems start to appear. Performance is an issue. A clean checkout from CVS requires a bootstrap to generate all the autoconf and automake files, as they are not held in the repository, and this can take a lengthy period of time even on large hosts with fast disks. Fortunately this is not often needed because maintainer mode helps; however, it makes build-bot type support on check-in difficult if not impossible. Also contributing to this is the repeated installing of header files. If you build all 120+ BSPs you will install over 50,000 header files. This is just building RTEMS and does not include installation of the build output; when installing, those 50,000+ files are copied again to the install paths. Does this seem normal or OK? Maybe there really need to be this many headers, or maybe header files have been added to RTEMS following a common template with little regard for the consequences, and over the years the count has grown to this figure. Most users are only interested in one or two BSPs, so for them this is not a major issue. For a maintainer it is a problem, because they need to make sure everything builds and works.
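For readers who have not seen it, here is a hedged sketch of the developer build flow described above. The option names and the BSP are illustrative assumptions, not exact commands from any particular release:

```shell
# A fresh CVS checkout contains no generated autoconf/automake
# files, so bootstrap first -- this is the slow step:
cd rtems
./bootstrap

# RTEMS is built outside the source tree:
mkdir ../b-erc32 && cd ../b-erc32
../rtems/configure --target=sparc-rtems \
    --enable-rtemsbsp=erc32 \
    --disable-networking
make                  # installs thousands of headers as it builds
make install          # then copies them again to the install prefix
```

A release user skips the bootstrap step entirely, which is why the pain is felt mostly by developers and build bots working from a clean checkout.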

I suppose the important questions regarding the build system are "Is it efficient given the new generation of build tools?" and "Does it aid or inhibit the development process?". These are debatable questions that span the boundaries of technical merit, breadth of support, supported hosts, and personal preference. The last of these is the most contentious.

The question the current developers and maintainers of RTEMS need to ask is not "Are these tools working and doing the job they are supposed to?", but rather, if we handed the project to a new group of developers and maintainers, "What would the new maintainers think of the state of the project?" While we may be comfortable and able to release and maintain RTEMS, it may look to a new generation like something from a time past.

Change is never easy. There needs to be leadership, desire, and a willingness to refresh in order to bring about change. It is easy to be negative, to find fault in any proposed change, and then offer no path forward. Leading is not always about "what I think is right"; it is about being honest and openly critical of how we work and approach problem solving, and it is about providing paths to new ways of solving the problems we face in the project. Not all paths will succeed; however, being open to change means a new path can be taken until a solution is found. Inviting new and young talent to follow these paths and find solutions involves them in the project. They become responsible for various parts, and that builds pride and commitment. The hope is that someday they will be managing and leading the project.