Notes for February 16

Projected topic: Building a Linux kernel using a cross compiler. Unfortunately, because the network was unavailable, the process could not be fully demonstrated. Below are descriptions of the steps we will follow when cross compiling, and a few extra notes concerning Linux.

1. Administrative

Before beginning, there are three lab rules to cover.

  1. Log in as user, never as root. Using the root account can cause you to accidentally overwrite your system libraries and programs when doing your cross compile. Not a good thing.

  2. Use the corona or nopal lab PCs; they are faster. The other machines are best used for miscellaneous work, such as looking things up while you are cross compiling.

  3. Leave a note at a machine if you have started something and have not yet finished it. This lets other students know whether you intend to come back or have given up, so they can pick up wherever you left off.

2. Setup

To get the cross compiling process started, there are a number of extra things you'll need; a list can be found on the SlugOS page. One we went into a little depth on was the revision control software (either CVS or SVN). Why use special software instead of a standard database? It turns out that databases aren't suited to the way revision control works - there may never be a moment when everything has been checked in and the whole tree is consistent, and databases have a hard time with that. Revision control itself is an older idea, but it really gained popularity in the 1990s thanks to Linux and the needs of the open-source community. There are many different revision control options, and many people are adamant supporters of one or another (kernel developers use Git, other projects use Monotone - an interesting note: Monotone was apparently the inspiration for Linus Torvalds to create Git).
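To make the check-in idea concrete, here is a tiny local demonstration of revision control in action. It uses Git (since the notes mention kernel developers use it); the file name and commit messages are made up for illustration:

```shell
# Create a local repository and record two versions of a file.
git init -q demo
cd demo
git config user.email "you@example.com"   # identity for the commits
git config user.name  "You"

echo "version 1" > notes.txt
git add notes.txt
git commit -qm "first check-in"

echo "version 2" > notes.txt
git commit -qam "second check-in"         # -a stages the tracked file

git log --oneline                         # shows both recorded versions
```

The point is that each "check-in" is a snapshot you can always get back to, even while the working copy is in an inconsistent, half-edited state.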

Besides version control, there are several other tools you'll need, which again can be found on the SlugOS page.

3. Cross Compiling

A cross compiler creates binary files whose target architecture is different from the machine where they are compiled (i.e. binaries that will go to the embedded system). So, for us, we can compile a Linux kernel meant for the NSLU2's processor (an ARM architecture) on our x86 machines. Notice this means we can't run the resulting program locally. For some targets there are emulators that let you simulate the target architecture in your development environment; there is no such emulator for the NSLU2.

There are a number of sub-topics concerning cross-compiling for us. Two of those are mentioned below.

3.1. Toolchain

Instructions on how to build a development toolchain can be found here. These are details we probably won't ever have to get into, but if you want to, it is pretty interesting. One major point to take away is that the project uses BitBake, a Python program that can completely create a package from "recipes" (get it - baking, recipes?). It will automagically connect to the internet, download the needed sources, figure out the dependencies, and compile everything - whatever is needed to create a package. A useful feature!
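For flavor, a BitBake recipe is just a small text file describing where the source lives and how to build it. The sketch below is purely illustrative (made-up package name and URL, not an actual SlugOS recipe), but it shows the general shape:

```
DESCRIPTION = "Example package (illustrative only)"
LICENSE = "MIT"
SRC_URI = "http://example.org/example-1.0.tar.gz"

# Reuse the standard autotools steps (configure / make / make install)
inherit autotools
```

From a recipe like this, BitBake can fetch the tarball, resolve any dependencies declared in other recipes, and run the whole build unattended.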

3.2. Master Make

Make is a Linux utility that can automatically build (and help configure) packages for you. We have already seen it once with make menuconfig when we built the kernel. It reads a Makefile, and takes as an argument a word (make calls these words targets; in English we had decided to call them objects). You can think of a Makefile as kind of like a script that also has some logic built in, so make can figure out what it needs to do based on the target you give it and the current state of your files.

Below is the sequence of commands you would use if you were making SlugOS version 5.3. One caution: be connected to the internet before starting; if you are not, certain steps in make will fail. Luckily, make remembers where you stopped and resumes right from there! So making SlugOS may turn out to be an involved process, but if you monitor it and use the error messages as guidance to download needed packages, things should go OK (don't forget you'll always have to manually install the Intel drivers for the NSLU2). These steps are covered in detail here. The basic sequence of commands, using MasterMake, is:

     mkdir slug53
     cd slug53
     wget --cache=off
     make <optional target>

The .tar.gz files can then be unpacked by using (the z flag handles gzip compression; for .tar.bz2 files you would use j instead):

     tar xfz <fname>

It is good practice to make a directory with the same name as the .tar.gz file and extract into it, i.e.:

     mkdir <dirname>
     tar xfz <fname> -C <dirname>

After an archive is unpacked, run make with its name as the target. For example, if you unpacked setup.tar.gz, you would use make with setup as the target:

     tar xfz setup.tar.gz
     make setup

4. Linux Software Manual Installation

The second-to-last thing we covered in class was how to manually install a Linux package (instead of using the easy Slackware interface). In general, you can use the following sequence of commands (I have included a description of each command after the # - the shell treats everything from # to the end of the line as a comment, so you can type the lines as-is or leave the comments off):

su -             # switch to the root user (and move to root's home directory)
tar xfz filename # extract the gzip'd tar file you give it
                 # ('tar tfz filename' will "tell" you what files are in the archive instead)
vi INSTALL       # read the install instructions - at least skim these for what you have to do
./configure      # builds the Makefile you will be needing in the next step...
make             # actually builds the executable, but doesn't do anything with it yet
make install     # checks the Makefile for where the executable should go and puts it there
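To see what make install actually does without touching your real system, here is a toy Makefile (entirely made up) whose install target copies the program into $(PREFIX)/bin; overriding PREFIX on the command line keeps everything inside a local staging directory instead of /usr/local:

```shell
# Toy package: a "build" step and an "install" step.
# (Recipe lines must start with a real tab character.)
cat > Makefile <<'EOF'
PREFIX ?= /usr/local

hello:
	printf '#!/bin/sh\necho hello\n' > hello
	chmod +x hello

install: hello
	mkdir -p $(PREFIX)/bin
	cp hello $(PREFIX)/bin/
EOF

make install PREFIX="$PWD/stage"   # installs into ./stage, not /usr/local
./stage/bin/hello                  # run the "installed" program
```

This also shows why you normally need root for a real install: /usr/local is not writable by an ordinary user, which is the reason for the `su -` at the top of the sequence.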

You can Google make and find a lot of information - one quick description that seemed easy for me to follow was this one.

5. Command-Line Browsers

We finished class with a demonstration of command-line web browsers! When would you use one? The NSLU2 doesn't run X, so you might use one there. You may have skipped installing X because it takes too much space but still want occasional web browsing abilities (happens a lot!). Or maybe you want to impress your friends/coworkers/onlookers. Either way, we talked about Links, which is started (as you might expect) with the command:

links

If you want to add some fancy graphics, use:

links -driver fb

Once you're in Links, you can press 'g' to enter a URL or file location to browse.

On a side note, there is another browser called Lynx, which is entirely text-based. You'll see a lot of this kind of naming in Linux - homophones or words from the same "category": Links vs Lynx, less vs more, nano vs pico, etc. Before this class, I had never heard of Links. Learning there was a Links (with an i) was pretty cool for me - wow, graphics from the command line!

Feb16Notes (last edited 2014-05-25 18:15:48 by localhost)