Sunday, February 24, 2013

RTEMS Texinfo Tools Update

A couple of years ago, Chris Johns and I began to discuss RTEMS's long and successful history as a free software project. One outcome of this discussion was the post Open Source and Generational Differences, which reflected on how new developers can tend to avoid projects that use "uncool" tools; they want to be on the leading edge of technology. But another aspect of uncool tools is that they are old. RTEMS-based applications often have lifespans measured in decades from development, through fielding, to long-term sustainment. This insight led us to review the tools we depended upon, both for their long-term viability and to confirm that they continued to offer a high-quality solution. The transition from CVS to git has been the most visible outcome of this effort.

But lurking within the RTEMS source tree was a dependency on a long-dead tool: texi2www. RTEMS was its last user and had to carry the source code in our own tree. This was exactly the type of situation Chris and I had realized could happen, and it already had without us noticing. In November 2011, I posted to the GNU Texinfo help mailing list asking for advice on converting to the more modern texi2html program. Unfortunately, I learned that texi2html was considered deprecated and was being re-implemented in Perl, with the new implementation to be known as texi2any. This led me to convert us away from the stone-cold-dead texi2www to texi2html. With the recent release of texinfo 5.0, I began to ensure that our documentation would build with either texi2html 1.82 or texi2any from texinfo 5.0, and that the build infrastructure could detect which to use.

The goal of this post is to describe how we invoke the two tools, the differences in their initialization files, and the minor changes to our documentation required to support both. The other tools in the texinfo package, such as makeinfo, required changes to our source but did not change their invocation. I will detail the changes to our texinfo source files after showing the command line differences for the HTML converters.

The commands shown below are generated by our autoconf-based build infrastructure. This sometimes leads to longer than absolutely necessary command lines; I have made no attempt to shorten or clean them up. These are for the RTEMS C User's Guide, whose main texinfo file is c_user.texi.

The following is the invocation of texi2html 1.82:
 texi2html -D use-html --split node --node-files -o ./ --top-file index.html --init-file=../texi2html_init \
   -I /home/joel/rtems-4.11-work/rtems//doc/user -I /home/joel/rtems-4.11-work/rtems//doc \
   -I .. -I . --menu /home/joel/rtems-4.11-work/rtems//doc/user/c_user.texi

This is the corresponding invocation of texi2any from texinfo 5.0:
 texi2any --html -D use-html --split node -o ./ --init-file=../texi2any_init \  
   -I /home/joel/rtems-4.11-work/rtems//doc/user -I /home/joel/rtems-4.11-work/rtems//doc \  
   -I .. -I . /home/joel/rtems-4.11-work/rtems//doc/user/c_user.texi  

Notice that both command lines are quite similar. However, texi2html requires the --node-files argument to produce individual HTML files whose names are based on the section or chapter name. By default, they would be named using a pattern like DOC_nnn.html.
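The tool detection mentioned earlier can be sketched roughly as follows. This is an illustrative shell fragment only: find_html_converter is a hypothetical name, and the real logic lives in our autoconf scripts.

```shell
# Illustrative sketch only -- the actual detection is done by the
# autoconf-based build infrastructure, not by this function.
find_html_converter() {
  if command -v texi2any >/dev/null 2>&1; then
    echo texi2any        # texinfo 5.0 or later is installed
  elif command -v texi2html >/dev/null 2>&1; then
    echo texi2html       # fall back to texi2html 1.82
  else
    echo "no texinfo HTML converter found" >&2
    return 1
  fi
}
```

The caller can then branch on the result to select the matching command line and initialization file shown above.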

The other thing to note is that both accept initialization files. However, the format of the initialization files is very different between the two implementations. Texinfo supports hierarchically structured documents and allows the author to provide links to the next section, the previous section, and the section that contains or is logically above the current one. The RTEMS Project has a tool which automatically constructs the node markup based on chapter, section, and subsection headings. Thus, the RTEMS documentation is fully hierarchically linked with no manual node definition required. The RTEMS documentation build system uses the initialization file to define a custom header and footer and to modify the navigation buttons.
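For readers unfamiliar with texinfo node pointers, here is a hypothetical fragment of the kind of markup such a tool generates; the chapter and section names are invented, not taken from an actual RTEMS manual. Each @node line gives the node's name followed by its next, previous, and up pointers:

```texinfo
@c Hypothetical, auto-generated node markup (names are illustrative).
@node Task Manager, Clock Manager, Top, Top
@chapter Task Manager

@node Task Operations, Task Deletion, Task Manager, Task Manager
@section Task Operations
```

Generating these pointers from the headings is what keeps the hierarchy consistent without any manual @node maintenance.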

This is the file texi2html_init generated by our build infrastructure:
 my $button_text = '<a href="../index.html">Library</a>';
 push @SECTION_BUTTONS, \$button_text;
 push @CHAPTER_BUTTONS, \$button_text;
 push @MISC_BUTTONS, \$button_text;
 push @TOP_BUTTONS, \$button_text;
 $AFTER_BODY_OPEN = '<A HREF="" target="Text Frame">
 <IMG align=right BORDER=0 SRC="../images/rtems_logo.jpg" ALT="RTEMS Logo"></A>
 <H1>RTEMS On-Line Library</H1>';
 $PRE_BODY_CLOSE = 'Copyright &copy; 1988-2011
 <A HREF="" target="Text Frame">OAR Corporation</A>';

The initialization files reflect the internal implementation of the two programs, so the format used by texi2any is different. We have an initialization file which accomplishes similar things in the generated HTML files but looks quite different. For example, the texi2html output has navigation icons while the texi2any output has textual links.

This is the file texi2any_init generated by our build infrastructure:
 set_from_init_file('AFTER_BODY_OPEN',
 '<A HREF="" target="Text Frame">
 <IMG align=right BORDER=0 SRC="../images/rtems_logo.jpg" ALT="RTEMS Logo"></A>
 <H1>RTEMS On-Line Library</H1>');

 texinfo_register_handler('setup', \&add_button);

 my $button_text = '<a href="../dir.html">Directory</a>';
 sub add_button($)
 {
   my $self = shift;
   foreach my $button_type ('SECTION_BUTTONS', 'CHAPTER_BUTTONS',
                            'MISC_BUTTONS', 'TOP_BUTTONS') {
     my $buttons = $self->get_conf($button_type);
     push @$buttons, \$button_text;
   }
   return 1;
 }

There were only a couple of issues encountered with our use of texinfo which required modifying the source.
  • Missing @item in @itemize lists now results in warnings. 
  • The menu definition, @top and its @node, and the file @include statements in the top-level texinfo files had to be reordered. Texinfo 5.0 is not as forgiving about this.
In addition, I spotted mistakes in our documentation when reviewing the various output forms. The entire range of patches can be viewed online here.
NOTE: As of 24 February 2013, these have not been committed. When they are committed, links will be provided to the git repository.

We would like to take advantage of features in the newer tools and are investigating using a print-on-demand service for RTEMS manuals. I hope there is experience with this in the texinfo community but, if not, I suppose I will pester the maintainers until the results are satisfactory and report on what I had to do.

Thanks to the Texinfo maintainers Patrice Dumas, Karl Berry, and Eli Zaretskii for being incredibly patient and helpful through this process.

Friday, February 15, 2013

GSOC Presentation at University of Tennessee at Chattanooga

Earlier today, I returned to my alma mater, the University of Tennessee at Chattanooga, to give presentations on RTEMS and the Google Summer of Code 2013 (download here). About 25 people were in attendance, including two faculty members. Thankfully, my wife Michele had driven, which let me do a final review of the presentations. Chattanooga is about a two hour drive from Huntsville and in a different time zone (Eastern, not Central). We had allowed time for traffic and parking problems but had no traffic, so we ended up arriving about forty-five minutes early. We were met in the parking lot by a student who provided a visitor's parking pass, which greatly simplified having a car on campus. Parking at any university seems to be a challenge.

With no A/V difficulties, I put up a montage of pictures from some of the projects which use RTEMS. Those who attended the GSOC 2012 Mentor Summit will remember the slide from the lightning talks. It is memorable because someone from another project presented it; I had forgotten the talks and had gone to the Google Store. The montage highlights awesome projects based on RTEMS including the BMW Superbike, Curiosity, Herschel, Milkymist, Solar Dynamics Observatory, and MMS. As students came in, there were plenty of questions about the projects. I created the slide for an RTEMS-friendly workshop where most attendees knew what RTEMS was and I wanted to highlight users. It turns out this is a great slide to get conversations going. If other FLOSS organizations can brag about where their software is used, then a user montage is a good thing to have.

I presented the official GSOC slides first. I felt it was important to emphasize that all types of FLOSS software were represented and that all of the organizations were interested in student participation. Participating effectively and appropriately in GSOC requires organizations to provide wish lists, mentors, regular interaction with students, friendly communities, and more.

I then moved to the RTEMS-specific presentation, which very briefly introduces RTEMS but focuses more on recent activities, ongoing activities, and our wish list. It highlights areas where we want improvements to occur, even in our software development process. As the last slide came up, I realized I was finishing on time: I had presented for thirty minutes, leaving fifteen to twenty minutes for questions. I ended my talk by reminding them that I would love to see them all as RTEMS contributors but would be just as happy to see them involved in the FLOSS community on ANY project. We are a collection of organizations but we do have common goals.

There were a lot of questions on GSOC followed by some on RTEMS. One student asked where GSOC work occurred. There were questions on how the mentoring worked and what mechanisms were used to communicate with the mentors. I noticed students were packing up and realized they had ten minutes to get to their next class. There were no more questions but I hung around a while.

The big surprise was when a student came up to me while I was packing up. He asked about real-time and SMP as a potential area for Ph.D. work. I told him that I thought it was an open area of research and that with some literature review he should be able to find a good topic. Years of research into uniprocessor real-time systems and scheduling have given us practical engineering solutions. But the complexities of modern pipelines, caching, and interactions of multiple cores break some of the underlying assumptions. I am concerned that this same level of maturity has not been reached for SMP embedded systems, which require rigorous analysis of predictability.

My wife generously waited in the presentation room while I visited with the only faculty member left from when I was a student there, Dr. Jack Thompson. Then my wife and I walked around campus, enjoying a pretty day and reminiscing. After all, it was only one day after Valentine's Day, and we met one another while students here.

Sunday, December 23, 2012

Transfer Information to Another HTC One S

I am still using my trusty T-Mobile G2 while waiting for the new Nexus 4 to arrive. It has only been on order since Thanksgiving so maybe sometime in January it will arrive. In the meantime, my wife has an HTC One S which has a small crack in the corner of the screen. This isn't a real problem but the USB connector has become quite flaky and it often doesn't get charged. I ordered her a replacement and this is the hopefully short story of how I transferred her data and information to the new phone.

First, I did some research. It quickly became apparent that between Google's sync and backup plus what I could copy from the filesystem, all that would be left was SMS/MMS. Message Sync looked promising, so I loaded it. I then performed the following:

  • Placed the phone in Airplane mode to prevent new data from showing up.
  • Used Message Sync to back up her SMS/MMS to the phone's storage. 
  • Mounted the old phone on a computer.
  • Used rsync to mirror the phone to an external USB drive. 
  • Unplugged the old phone and powered it off.
At this point, I thought I was ready to power on the new phone and begin the restore process. I hooked it to the computer so it would get power and then proceeded to answer the initial questions and connect it to our wireless LAN. Within a few minutes, I could see gmail showing up. I realized that many of the directories probably had content which didn't need to be transferred so I focused on her pictures, downloads and the messagesync directory.

At this point, I loaded the Message Sync application from the Play Store and then did a "synchronize" operation. My wife had about 7500 SMS/MMS messages and the synchronization took a bit of time. I wrote this paragraph and previewed it in the time it took, so not too bad from a time perspective. But it failed to restore anything. :(

We have used Backup to Gmail for automated SMS/MMS backups, and it can restore SMS but not MMS. Unfortunately, it wanted to restore all 7500 SMS messages she has sent and received over the years. This is a fail: no MMS and way too many messages.

The third attempt was SMS Backup and Restore. It seemed simple enough, but there was no hint of MMS support. It doesn't appear to be a very fast application, but maybe it will get the SMS from one phone to the other.

I have given the new phone to Michele. I will wipe the old one once she thinks all is OK.

Sunday, September 9, 2012

New vim Trick (to me)

In a recent class, someone mentioned that the reason they liked the editors in IDEs was that they could collapse comments. Collapsing comments was something I had never even thought about doing. When I look at source code, I look at it in its entirety. I am old school in that I believe programming can be an art form and that source code should be both functional and nice to look at.

Today, I was cleaning up my many browser tabs and noticed that I had Googled "vim collapsing comments", which quickly got me to an explanation. This was very simple to turn on; I added a total of three lines to my .vimrc file. The first line turns on folding and instructs vim to use syntax as the guide.
:set foldmethod=syntax
In a few minutes of playing around with C and C++ files, it was clear that it collapses at least comment blocks and the code within {...} sections. There are commands to open (e.g. zo) and close (e.g. zc) folded sections. I quickly recognized that if I were going to use folded sections, I would have to bind these to function keys to make them easier to use.
:map <f9> zo
:map <F10> zc
When I open a file, all foldable sections are hidden. Pressing F9 will expand them and pressing F10 anywhere in the section will collapse it again.
Given that I have been using vi since 1986 without this enabled, it will be interesting to see if I like it or not in the long run. Time will tell.
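For reference, the complete fragment as it sits in my .vimrc. Vim also has built-in commands to open all folds at once (zR) and close them all (zM), which I did not bind:

```vim
" Fold based on the language syntax (comments, { } blocks, ...)
set foldmethod=syntax
" F9 opens the fold under the cursor, F10 closes it
map <F9> zo
map <F10> zc
```

Those who prefer files to open fully expanded can also set foldlevelstart to a large value such as 99.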

Someone may wonder about what F1 to F8 do. If anyone expresses interest, I will post that.

Saturday, July 14, 2012

New TCP/IP Stack Progress Report

It is no secret that OAR has been working furiously on an update of the very old FreeBSD TCP/IP stack in RTEMS. This effort builds upon a port of the USB stack portion of FreeBSD 8.2 implemented by Embedded Brains. Kevin Kirspel did some initial work on incorporating the TCP/IP FreeBSD files and bringing over RTEMS support code from the old port. This blog entry attempts to capture the current state of the project and highlight the challenges still remaining.

The rtems-libbsd code base is currently being debugged using the Intel EtherExpress Pro (e.g. fxp) NIC on the pc386 BSP on qemu. The highlights of the status are as follows:

  • Kernel with TCP/IP enabled and FXP NIC completes initialization successfully
  • Initialization of the loopback interface IP address and route is currently failing. This appears to be due to RTEMS having only a limited subset of the proc and ucred (i.e. process and user credential) structures, and these not yet being completely and correctly initialized.
  • User space has 51 of 56 methods in old libnetworking/libc directory compiling cleanly
  • Popular PCI NICs compile but no testing
  • Target specific code
    • Internet packet checksum (e.g. in_cksum) support for architectures supported by both RTEMS and FreeBSD is in place. For targets only supported by RTEMS, the intent is to use one of the implementations that is in 100% C with no assembly support.
    • cpufunc.h file in place for all architectures even though on many it is an empty file
    • FreeBSD PCI Bus and Legacy Bus drivers are x86 specific. RTEMS will need the equivalent drivers for all architectures. The hope is that this driver can be used across all targets with minor modification. If not, the backup plan is a minimal cross-target RTEMS specific version. Initial indications are that the x86 specific version is target independent.
  • BSPs need new sections added to their linkcmds.
  • No NICs for ISA or System on Chips.
We feel that we are close to having the fxp driver working on pc386 and qemu. But there is a lot of work left to ensure that the stack is ready to become the preferred TCP/IP stack for RTEMS. The community is encouraged to pitch in and take on some of the following:
  • Test FreeBSD PCI NIC drivers currently in tree
  • Update NIC drivers in RTEMS not found in FreeBSD
    • RTEMS Project may be forced to deprecate some NIC drivers if not updated
  • Port other FreeBSD NIC drivers of interest
    • no ISA NIC drivers
  • Help get rest of libc methods to compile
    • ensure set of user APIs is complete
  • Select best in_cksum implementation for all architectures
  • Bus support methods for all architectures.
    • currently assume simple memory access is OK and it is untested
  • Help in testing user space code
    • includes network demos, servers, clients, etc.
  • Need to write API verification tests for each network method
    • tests similar to those in psxhdrs which ensure that you can invoke the method using only the header files in the man page.
  • Optimizations in code space, resources, and execution time
    • effort has focused strictly on getting it to work
In general, OAR has focused on the "depth" part of the project. There has been an enormous amount of effort expended so far. Although some of the work has been sponsored, much of it has been done as OAR overhead and volunteer effort, including that of Kevin Polulak, a Google Summer of Code student. Once OAR can successfully ping an RTEMS target with the new stack, it will definitely be time for the RTEMS Community to rally and help speed the transition to the new TCP/IP stack.

Friday, May 18, 2012

Technical Debt and RTEMS

Dr. Dobb's recently published an interview with Ward Cunningham, who developed the first wiki among other notable contributions. The interview is interesting and I recommend reading it. But I wanted to pass along some thoughts on one term that resonated with me.
Technical Debt: Cunningham uses this term to refer to work that needs to be done before a desired change can be implemented or work required to propagate a desired change across a codebase. In the RTEMS world, we have multiple examples of this. 
The most common case of RTEMS technical debt is when a single change must be implemented across all or a set of BSPs at the same time. Recently, Jennifer and I converted the MIPS port from Simple Vectored (SV) to Programmable Interrupt Controller (PIC) interrupt model. We did this because the MIPS/Malta we were developing a BSP for had a more complicated interrupt structure than previous MIPS boards. It was logical to use the PIC rather than the SV model on this BSP. But to ensure that all MIPS BSPs were consistent, we had to implement the same change for six other BSPs.

There are cases with RTEMS where some preparatory work must be done before something else can be implemented. A prominent example was the CPU Scheduler Plugin Framework work by Gedare Bloom. Before the plugin framework existed, the RTEMS CPU scheduler was bits and pieces of code embedded in the places where threads changed states and priorities. The plugin framework captured those decision points and made it possible to change the scheduling algorithm at application configuration time (e.g. via confdefs.h in RTEMS terms). This was the refactoring and cleanup needed to be able to implement SMP support for RTEMS.

There are also cases where both types of debt must be paid within a single area. The RTEMS file system infrastructure has evolved over the past few years. Some desirable changes must be propagated across all file system implementations, while other changes have required cleaning up an area before refactoring or reworking it.

Not paying your technical debt can lead to long-term pain on a project. The most prominent case of this with RTEMS is when the PowerPC port was converted from the SV to the PIC interrupt model. In contrast to what Jennifer and I did for the MIPS, only some BSPs were converted to the PIC model at the time. This meant two different BSP interrupt models co-existed for years. Worse, they were not named at the time and were known simply as "old" and "new" for years. And if that wasn't bad enough, the method and type names were not the same as those in the x86 implementation. It has taken years of paying down technical debt to clean up this mess.

It is also something that RTEMS developers would like to avoid repeating. So when you submit a patch and you are asked to modify some code you don't care about to make it consistent or told to be consistent with another BSP, remember that we are just asking you to avoid incurring technical debt. After all, it will probably be someone else who has to pay it. Better to avoid it altogether.

Tuesday, May 15, 2012

RTEMS Build System Ruminations

This post is a collection of ruminations after a recent post on the RTEMS Users mailing list. The post asked about a few  issues the user was having (italics for quotes):
  • my changes not "taking"
  • is there any way to limit what bootstrap operates on?  Since it takes quite a while to complete and most of the BSPs are of no interest to me, I would like to avoid bootstrapping them.
Gedare Bloom and Ralf Corsepius replied to the post and I thought it would be a good idea to take the time to write up some issues, workarounds, and benchmarks for various operations using the current build system on a git checkout with no modifications. Any build times quoted in this article were performed on a quad-core computer in the RTEMS Build Farm or my personal laptop with the following specifications:
  • RTEMS Build Farm Computer
    • Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
    • 4 GB RAM
    • Seagate 320 GB 7200RPM HDD (ST3320613AS)
    • Western Digital Caviar 1 TB 7200RPM HDD (WD1001FALS)
  • My Laptop
    • Intel(R) Core(TM)2 Duo CPU T7500  @ 2.20GHz
    • 4 GB RAM
    • Hitachi 160GB 7200RPM HDD (HTS722016K9A300)
The CPUs and disks in these computers are reasonably comparable in performance except that the build farm machine is quad-core. I would expect the quad-core machine to be somewhat faster in single-core straight-line computation than the clock speed indicates, but I would not expect a 2-3x speed difference in single-core performance.

I will be the first to admit that neither of the above computers is the fastest one available today. The fact that one could spend money and get faster computers is important. These are reasonable computers and not obsolete. In fact, when I look at potential laptop upgrades, I am still surprised that my old laptop's CPU is rated much faster than those found in many available today. You have to move to a higher end laptop to beat that. When teaching RTEMS classes, I see attendees with computers that are both faster and much slower than mine. And performance is likely to be much worse in Cygwin or virtual machines than either of the two computers above.

The first issue was my changes not "taking". I personally always configure RTEMS with the --enable-maintainer-mode option which is documented as follows:
--enable-maintainer-mode  enable make rules and dependencies not useful
                          (and sometimes confusing) to the casual installer
This tends to ensure that any changes to Makefile.am and configure.ac files are taken into account in a build tree and the appropriate files regenerated. There are limits to this: if you change a compiler flag, it will not result in everything being recompiled. However, if you change the way a configure option is interpreted and that is propagated into cpuopts.h or bspopts.h, it should result in the impacted files being recompiled.

But what if a .h file changes? Based upon my experiment adding a one line comment to confdefs.h, every test "init file" was recompiled and every test executable was relinked. This is as expected.

Now let's consider changing a C file in cpukit. As an experiment, I added a one line comment to cpukit/sapi/src/exinit.c. This file contains rtems_initialize_data_structures() and thus every RTEMS application is dependent on this object file. The library librtemscpu.a was properly updated but no test was relinked. This untracked dependency is one possible explanation for my changes not "taking".

What if a C file in the BSP changes? As another experiment, I added a one line comment to c/src/lib/libbsp/shared/bootcard.c. Just as in the cpukit experiment, this file is required by every RTEMS application. The file librtemsbsp.a was properly updated but no test was relinked. This untracked dependency is another possible explanation for my changes not "taking".

Gedare makes a point of stating that long-time RTEMS developers know these deficiencies and work around them. Personally, I often find something like this in my command history:
rm -f `find . -name "*.exe"` ; make >b.log 2>&1 ; echo $?
That command ensures that tests are relinked. It covers up the fact that the library dependency is not properly tracked.

The first part of the second issue was is there any way to limit what bootstrap operates on? The solution is to run bootstrap only from the lowest level directory containing a Makefile.am that you modified or added. Often this is just a single BSP or, when initially adding a BSP, the directory c/src/lib/libbsp/ just above your new BSP. Gedare Bloom answered this quite thoroughly and I am just going to quote his answer:
Except when you add new files / modify files you need to re-run bootstrap at the closest level to your modified Makefile.am that contains a configure.ac file.
So for example if you add a .c file into say cpukit/score/src then the file needs to be added to cpukit/score/Makefile.am and then you need to re-run bootstrap from cpukit because that is the closest parent directory with a configure.ac in it. For BSPs usually you just have to deal with the libbsp/CPU/BSP directory.
In order to run bootstrap from there I use a shell variable that points to my RTEMS root directory ($r) so that I can just ... "cd cpukit ; $r/bootstrap"
The second part of the second issue -- Since it takes quite a while to complete and most of the BSPs are of no interest to me, I would like to avoid bootstrapping them. -- is more complicated. When you initially clone the RTEMS git repository, you have to bootstrap the entire tree. A full bootstrap takes a long time and appears to be very single threaded. On the build farm machine described above, it takes 5m18.331s of user time and 0m48.900s of system time for a total of about 6 minutes to complete. On my laptop, it took 7m56.394s of user time and 1m6.840s of system time for a total of about 9 minutes. Having a quad-core CPU does not help. The bootstrap process has not significantly improved in speed in years; I recall various computers used over the years for RTEMS development taking from 5 to 12 minutes to execute a complete bootstrap. And this time is much longer on Cygwin due to the inefficient way it must implement POSIX process forking on MS-Windows.

Another thing to note is that bootstrap -p ONLY has to be run when you have modified a Makefile.am and changed the set of header files it installs. This generates the preinstall.am files. It does not need to be run after cloning the RTEMS repository because the generated files are checked into git. Many people run it more often than it needs to be run.

The need to bootstrap and git branches do not get along as well as one would hope.  As Ralf Corsepius explains in the post in the thread:
One final advise: Do not switch git-branches in git checkouts. As git does not preserve timestamps, while make and the autotools rely on timestamps, this will break time-stamps on generated files and eventually result in havoc - you need a toplevel bootstrap with each "git branch checkout".
To avoid this, my advise is to use multiple checkouts instead.
I note that even with --enable-maintainer-mode enabled, my experience is that you do often get stuck bootstrapping from the top of the tree when switching branches. The builds will end with a cryptic message in the output. This is a serious hindrance to using git. The typical git usage pattern does not include having multiple clones for different purposes. This is what branches are designed for.

How long does it take to build RTEMS? The answer depends on a lot of factors, including the obvious, like the computer you are using, and the not so obvious, such as how you configured RTEMS and whether you used the -j option to make to enable parallel jobs. If you configure RTEMS to include all of the tests, the build time is significantly longer since there are 399 total tests to compile and link. If you enable only the sample tests, this number drops to 13. The execution of the configure command itself is not a huge factor in build times, taking only about 4 seconds on my laptop. It is the actual make that takes so long, because the make actually results in a lot of configuration being performed. On my laptop, I got the following times for configure and make with POSIX, TCP/IP, and all tests enabled for sparc/sis (forgive the bad line wrapping):
$ time ../rtems/configure --target=sparc-rtems4.11 --prefix=/home/joel/rtems-4.10-work/bsp-install/ --disable-multiprocessing --enable-cxx --disable-rdbg --enable-maintainer-mode --enable-tests --enable-networking --enable-posix --disable-deprecated --disable-ada --enable-expada --enable-rtemsbsp=sis >c.log 2>&1
real 0m12.511s
user 0m1.970s
sys 0m2.138s
$ time make -j3 >b.log 2>&1
real 10m1.806s
user 8m9.319s
sys 2m8.838s
Building all tests on the quad-core computer at -j7 resulted in a build time of approximately 5 minutes. Given the large number of tests, this indicates that there is opportunity to take advantage of multiple cores during a full build of RTEMS.

Building only the sample tests (e.g. --enable-tests=samples) on my laptop, resulted in a build time of 3m4.420s real time with system and user coming close to adding up to real time. That means 2/3 of the build time is compiling and linking the tests.

How much of the make time is actually configuration? After posting this, I was asked privately how much of the make is spent in configuring versus compiling. To answer this question, I found the first file in RTEMS compiled for the target (e.g. cpukit/score/cpu/sparc/cpu.c in this case) and introduced a compilation error. Then I manually fixed the compilation error and invoked make again. The second make invocation is likely verifying that the configuration didn't change so configuration overhead didn't go to zero but it is close enough.
$ time make -j3 >b1.log 2>&1
real 1m38.264s
user 1m8.027s
sys 0m13.357s
$ time make -j3 >b2.log 2>&1
real 1m29.903s
user 1m56.809s
sys 0m24.112s
I repeated this experiment on the quad-core build farm machine and got the following results:

$ time make -j7 >b1.log 2>&1
real 0m36.652s
user 0m10.918s
sys 0m10.383s
$ time make -j7 >b2.log 2>&1
real 0m50.618s
user 1m37.673s
sys 0m27.296s
Looking at the above, it is pretty clear that the configuration part of make is a significant portion of the entire build time. On my laptop it was slightly over half, while on the quad-core computer it was about 40%. It also appears that the configuration stage is unable to take advantage of multiple cores, as user plus system time is less than the real time in both cases. On the quad-core, the real time was well above the CPU time, which likely indicates that it is I/O bound.

In contrast, the build portion of the make command's actions is clearly parallelizable. On the dual-core laptop, the approximately 90 seconds spent in the second step used 120 seconds of CPU time, indicating that both cores were utilized for about 2/3 of the build time. On the quad-core machine, we see about 51 seconds of real time consuming 135 seconds of CPU time, for about 2/3 utilization again. The build time was reduced about 45% by moving from the dual-core to the quad-core computer.

I am personally a proponent of continuous integration and testing. It would be a boon to the RTEMS Project if there were a buildbot or similar system to get build and test execution feedback on every commit. Even better would be to get this feedback before a patch is officially committed. Considering that building all source for one BSP with all tests takes 5 minutes on a reasonable quad-core computer and NO TESTS WERE RUN, one can see the challenge. There are approximately 145 BSPs in the tree currently when one considers variants. On this computer, it would take over 12 hours to build all BSPs and tests. This assumes a fresh checkout and a single bootstrap; if you did a fresh bootstrap for each BSP, the build would take over 24 hours without executing any tests. Add in a test build of each target in multilib configuration and documentation, and that time goes up even further. And that is in a SINGLE CONFIGURATION -- this does not include verifying that BSPs build with and without networking or POSIX enabled.
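The arithmetic behind those estimates is easy to check; the per-BSP build time and bootstrap time below are the build farm figures quoted earlier in this post:

```shell
# Back-of-the-envelope check of the build-farm estimates above.
bsps=145            # approximate number of BSPs including variants
build_min=5         # one BSP, all tests, on the quad-core machine
bootstrap_min=6     # full bootstrap on the same machine

builds_only=$((bsps * build_min))
echo "builds only: $builds_only minutes (~$((builds_only / 60)) hours)"

per_bsp_bootstrap=$((bsps * (build_min + bootstrap_min)))
echo "bootstrap per BSP: $per_bsp_bootstrap minutes (~$((per_bsp_bootstrap / 60)) hours)"
```

That works out to roughly 12 hours for the builds alone and roughly 26 hours if each BSP build pays the bootstrap cost, matching the "over 12 hours" and "over 24 hours" figures above.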

This is completely unacceptable for a continuous integration and test effort. According to, we have had an average of 2.34 hours between commits since moving to git. No single solution will allow us to have a fast enough turn around on build and testing. In order to achieve a turn around under 2.34 hours, we will have to address the speed of the bootstrap, speed of the build process, distribution of building and testing, be smart about only building and testing areas impacted, and ultimately throw more hardware at the problem.

As final food for thought, this is just for RTEMS itself. This does not account for the testing that should be done on the GNU tools we rely upon (e.g. binutils, gcc, gdb, and newlib). A full build and test cycle for all targets  can take up to 4 days on the same quad-core computer. This time can vary based upon the languages being built and tested but GCC simply has a lot of tests.