In this issue:
This month's authors:
M. Raja Naresh, Manohar Vanga
This newsletter is intended to invigorate the Twincling community with code-intensive articles, programming tips and tricks, explorations of new technologies, and the unlocking of secrets of established ones!
We are mostly a bunch of lazy programmers, so the magazine is not strictly monthly; a new issue will only be released when we have enough material available from the community.
If you have an article you have written that you wish to share with the community, we request you to send it to us! Articles, feedback and queries regarding the newsletter can be sent to contribute@twincling.org.
Ever forgotten your root password or wanted access to a computer whose root password you didn't know? Here's how to do it! The only requirement is that the bootloader (I assume GRUB) doesn't have a password set (most good sysadmins should set one anyway).
In the GRUB menu, hit the 'e' key and edit the kernel arguments to look like the following:
root (hd0,0)
kernel /boot/vmlinuz root=/dev/hda1 ro init=/bin/sh
initrd /boot/initrd
Now press 'b' to boot and wait for it to boot up. That should give you a root shell immediately. Before you try changing the password, you need to remount the root filesystem, as it is mounted read-only. Do the following:
bash# mount -o remount,rw /
bash# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
bash#
If you don't remount the root filesystem, you will get the following error:
bash# passwd root
Changing password for user root
New UNIX password:
Retype new UNIX password:
passwd: Authentication token lock busy
bash#
Enjoy!
Ever wanted to write a bootloader for a Linux kernel? If so, the file Documentation/x86/boot.txt in the kernel source code provides an extremely detailed description of the boot protocol it uses.
contribute@twincling.org! They will be included in the next newsletter!

Some of us are basic users, some of us are intermediate users, and some of us are hungry power users of the bash shell. Here are some interesting scripts to speed up, or in some cases spice up, your everyday bash usage!
An important thing to do is to keep your personal scripts separate from the ones your Linux distribution provides. A good way is to create a bin directory in your home and add it to the search path:
$ mkdir ~/bin
$ echo 'export PATH=$PATH:~/bin' >> ~/.bashrc
(Note the single quotes: they keep $PATH from being expanded by the echo itself rather than when .bashrc runs.)
Remember that this makes the ~/bin directory the last path searched, so if you have a script in there with the same name as a system script, you will have to call it explicitly with the entire path.
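If you would rather have your own scripts win when names clash, you can prepend ~/bin instead (a matter of taste; be aware that a personal script can then shadow a system command):

```shell
# Prepend instead of append: ~/bin is now searched first
export PATH=~/bin:$PATH
# The first component of PATH is now your personal bin directory
echo "${PATH%%:*}"
```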
These are some scripts that are useful for bash shell users. The rule of thumb is that if you are using a specific command quite often, it is a good idea to sit and write a simple script for it. Below are some basic scripts that every programmer can use to improve their productivity!
~/bin/tx
#!/bin/bash
if [ $# -ne 1 ]; then
echo "tar.gz archive extractor"
echo "Usage: tx [archive-name]"
exit
fi
tar -zxvf "$1"
~/bin/tjx
#!/bin/bash
if [ $# -ne 1 ]; then
echo "tar.bz2 archive extractor"
echo "Usage: tjx [archive-name]"
exit
fi
tar -jxvf "$1"
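The two scripts above differ only in one tar flag, so if you prefer, a single script can pick the right flag from the file extension. A sketch (the name tex and the extension list are my own choices):

```shell
#!/bin/bash
# Extract a tar archive, choosing the compression flag by extension
if [ $# -ne 1 ]; then
echo "Archive extractor"
echo "Usage: tex [archive-name]"
exit
fi
case "$1" in
*.tar.gz|*.tgz) tar -zxvf "$1" ;;
*.tar.bz2|*.tbz2) tar -jxvf "$1" ;;
*) echo "Don't know how to extract $1" ;;
esac
```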
Have you ever had an archive that, when extracted, messed up your entire directory because it extracted all its files to the current directory rather than its own subdirectory? If so, then this script will let you view the archive before you decide to extract it!
~/bin/tv
#!/bin/bash
if [ $# -ne 1 ]; then
echo "Archive Content Viewer"
echo "Usage: tv [archive-name]"
exit
fi
tar -tvf "$1"
Another useful script to always have around is one that creates an archive of the current folder for you. Below are two scripts to do this in both tar.gz and tar.bz2 formats. The final output archive is saved in the parent directory with the same name as the directory itself.
~/bin/maketar.tgz
#!/bin/bash
cd "`/bin/pwd`"
n=`basename \`/bin/pwd\``
if [ -e ../"$n".tar.gz ]; then
echo -n "File ../$n.tar.gz already exists. Overwrite? [y/N]: "
read choice
case $choice in
y|Y) ;;
*) exit;;
esac
fi
cd ..
tar cvf - "$n" | gzip > "$n".tar.gz
~/bin/maketar.bz2
#!/bin/sh
cd "`/bin/pwd`"
n=`basename \`/bin/pwd\``
if [ -e ../"$n".tar.bz2 ]; then
echo -n "File ../$n.tar.bz2 already exists. Overwrite? [y/N]: "
read choice
case $choice in
y|Y) ;;
*) exit;;
esac
fi
cd ..
tar cvf - "$n" | bzip2 -9 > "$n".tar.bz2
If you find the auto-completion of the maketar.* utilities above to be irritating, you can add a symbolic link called simply maketar and point it at the one you use more often (tar.gz in my case), as shown below.
$ cd ~/bin
$ ln -s maketar.tgz maketar
Don't forget to do a "chmod +x <scriptname>" so that they are executable!
Linux has a couple of special devices that are very useful for programmers! This article takes a look at some of them and then tries coming up with interesting things to do with them!
The /dev/null device discards any input that is thrown at it, so it is used to discard output through redirection. Another use is while benchmarking some activity where printing to the screen could skew the results. For example, if you are measuring the latency of a network, writing information to stdout during the benchmarking process adds overhead and returns larger values than expected. In such cases, it is better to redirect all output to this file.
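For example, to run a command while everything it prints is thrown away:

```shell
# All of stdout is discarded; nothing appears on the terminal
ls -lR /etc > /dev/null 2>&1
echo "done"
```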
The "always full" device is a special device that always returns No space left on device (symbol ENOSPC) on writing, and provides infinite number of null characters to any process that reads from it (similar to /dev/zero). This device is useful when testing the behavior of a program when it encounters a disk full error!
$ echo "Hello World" > /dev/full
bash: echo: write error: No space left on device
This file acts as an infinite source of pseudo-random numbers! This is extremely useful in generating unique data for testing! Try printing out the contents of this special device:
$ cat /dev/urandom
This file gets its randomness from an entropy pool, which is noisy data collected from device drivers and other parts of the kernel. There is another file called /dev/random, which is a blocking version of /dev/urandom. If you print the data in /dev/random, the output will block waiting for random noise to be created by device drivers (for example, do a cat /dev/random and move your mouse around!). The difference with /dev/urandom is that it keeps reusing the data from the entropy pool if there is nothing new available, which allows it to be non-blocking.
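Since /dev/urandom never blocks, it is handy for generating files of unique test data; for example, with the dd command (the file name here is just an illustration):

```shell
# Fill a 4 KiB file with random bytes
dd if=/dev/urandom of=/tmp/testdata bs=1k count=4 2> /dev/null
# Verify the size: 4 blocks of 1024 bytes each
wc -c < /tmp/testdata
```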
This file acts as an infinite source of 0's! Try the following:
$ cat /dev/zero
There should be no visible output, but there are actually thousands of zeros being emitted! You can verify this by piping its output to the 'tr' command:
$ cat /dev/zero | tr '\0' '1'
The 'tr' command above replaces all occurrences of '\0' with '1's. You should see a lot of 1's being printed to the console on running the above command!
This is a character device file that is an image of the main memory of the computer. It may be used, for example, to examine (and even patch) the system. It refers to non-kernel memory; if you wish to access kernel memory, you can use the /dev/kmem device instead! Byte addresses in /dev/mem are interpreted as physical memory addresses. References to non-existent locations cause errors to be returned.
Here are some nice ways to play around with the above devices.
We can use the /dev/zero file to create files of any size! Since we have an infinite source of byte values, we can do things like:
$ dd bs=1k count=1024 if=/dev/zero of=/tmp/onemegfile
The 'dd' command takes an input file (specified by 'if') and an output file (specified by 'of'), and dumps the requested number of blocks (specified by 'count') of a given size (specified by 'bs'). The command above dumps 1024 KiB of data into the file /tmp/onemegfile! By playing around with the bs and count options, we can create a file of any size!
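As a quick check, the same recipe with different numbers gives, say, a 10 MiB file:

```shell
# 10240 blocks of 1 KiB each = 10 MiB
dd bs=1k count=10240 if=/dev/zero of=/tmp/tenmegfile 2> /dev/null
ls -lh /tmp/tenmegfile
```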
Ever wondered what it would be like to REALLY screw up your computer? You can use the /dev/urandom device to REALLY destroy things! For example, you could do something as stupid as:
$ sudo sh -c 'cat /dev/urandom > /dev/sda'
(The sh -c wrapper is needed because the redirection is performed by your own shell, which is not running as root.) DO NOT TRY THIS AT HOME! If you really want to try it, set up a virtual machine and knock yourself out! Think of this as worse than "rm -rf *". We are basically dumping random data onto the first hard disk on the machine!
A relatively safer (still potentially unsafe!) version of this is to redirect the output to the /dev/mem or /dev/kmem device. This will simply hang your system and panic the kernel, and you will need to restart to make things right again. A lot of data can still be lost depending on what the kernel was doing when you tried this sneaky trick:
$ sudo sh -c 'cat /dev/urandom > /dev/mem'
If you ever find yourself hunting for warnings and errors among all the commands that get dumped when you run a Makefile using make, then you can use the /dev/null device to discard all of stdout. Since errors have their own stream, stderr (it is simply attached to the same terminal as stdout by default, which is why it prints to the screen), we can do the following:
$ make 1> /dev/null
In the above, the "1" is the file descriptor for standard output (stdout), and we are redirecting it to the null device to be discarded. The stderr data, which is file descriptor 2, still prints to the screen!
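You can also do the opposite and keep only the errors, for example by sending stderr to a log file while stdout is discarded (the file name is arbitrary; the brace group stands in for a noisy build):

```shell
# stdout (fd 1) is discarded; stderr (fd 2) is captured in a log
{ echo "compiling..."; echo "warning: something smells" >&2; } 1> /dev/null 2> build.log
cat build.log
```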
We can use the 'dd' command to dump the binary of the BIOS to a file! We can do this by using:
$ sudo dd bs=64k count=1 skip=15 if=/dev/mem of=/tmp/biosdump
This command skips the first fifteen 64 KiB blocks and dumps the next 64 KiB of memory (the region starting at physical address 0xF0000, where the BIOS is mapped) to the file /tmp/biosdump. Since the BIOS is always available in memory, one can dump it to a file and reverse engineer it! You could also try looking at all the printable strings available in the BIOS binary using:
$ sudo cat /dev/mem | strings | grep bios
Try playing around with different grep arguments to see if you can find any interesting information about your machine!
Linux is filled with many interesting things and there is a lot of fun in finding them and coming up with interesting ways to play with them! Go ahead and explore!
The following tutorial is meant for newbie Debian maintainers and covers the topic of making Debian packages (.deb files). It is neither a complete tutorial nor the required way to make Debian packages. Making a Debian package can be irritating and time consuming for newbies (I know it was for me). After much reading and testing, I have been able to decode most of the process. It is also true that the process changes depending on the package you are trying to build. To add to the complexity, there are multiple ways to create a Debian package, with multiple tools. This tutorial builds a Debian package from a simple hello-world binary. To make things easier, I have tried to make a Debian package with just the minimum requirements.
Note: This is not the intended way of creating a Debian package and has been simplified for the sake of demonstration. It is purely meant for understanding the individual components of a Debian package.
Create a folder using the following name convention: (package name)-(version no.)
$ mkdir hello-0.1
$ cd hello-0.1/
Fire up your favourite editor and create a file hello.c inside the above folder; you know what to write. Compile the code with the following:
$ cc -o hello hello.c
Now you need to make a replica of a subportion of the root filesystem and copy the hello binary into one of its folders.
$ mkdir usr
$ mkdir usr/bin
$ cp hello usr/bin
Create a folder named DEBIAN in the current directory and create a file named control inside it.
$ mkdir DEBIAN
$ touch DEBIAN/control
$ vi DEBIAN/control
Enter the following into the file:
Package: hello
Version: 0.1-1
Section: base
Priority: optional
Architecture: i386
Depends: libc6 (>= 2.10.0)
Maintainer: Your Name <your.email@example.com>
Description: Hello World
Named must your fear be before banish it you can.
(the space before each line in the description is important)
To get a clear picture, the hello-0.1 directory is supposed to look something like this (assuming you are in the hello-0.1 directory, run the following commands):
$ ls
DEBIAN usr
$ ls usr/
bin
$ ls usr/bin/
hello
$ ls DEBIAN/
control
The fields in the control file, according to the Debian policy manual, are described below.
The name of the binary package. It must consist only of lower case letters (a-z), digits (0-9), plus (+) and minus (-) signs, and periods (.). It must be at least two characters long and must start with an alphanumeric character.
This field specifies an application area into which the package has been classified. The sections recognized by the Debian archive are: admin, cli-mono, comm, database, devel, debug, doc, editors, electronics, embedded, fonts, games, gnome, graphics, gnu-r, gnustep, hamradio, haskell, httpd, interpreters, java, kde, kernel, libs, libdevel, lisp, localization, mail, math, misc, net, news, ocaml, oldlibs, otherosfs, perl, php, python, ruby, science, shells, sound, tex, text, utils, vcs, video, web, x11, xfce, zope.
This field represents how important it is that the user have the package installed. There are five priority levels in Debian package management: "required", "important", "standard", "optional" and "extra", listed in decreasing order of importance.
The rest of the fields are pretty much self explanatory. The tool we are going to use (dpkg-deb) uses the DEBIAN/control file, together with the replica filesystem we made, to place the binary in the appropriate location. All that is left now is to issue the final command.
$ cd ../
$ dpkg -b hello-0.1
You should be able to see a hello-0.1.deb package in the same directory. To install the hello-0.1.deb package, issue the following command.
$ dpkg -i hello-0.1.deb
Since /usr/bin is already included in the PATH variable, you should be able to run the hello binary from your shell. To check the contents of the .deb package, you can run the command below; refer to the man page of dpkg for more.
$ dpkg-deb -c hello-0.1.deb
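Besides -c for the file contents, dpkg-deb can also show the control information that went into the package, which is a quick way to verify the fields we wrote:

```shell
# Print the control fields (Package, Version, Depends, ...) of the package
dpkg-deb -I hello-0.1.deb
```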
Creating a Debian package this way is archaic: you can neither upload such a package to the Debian repository, nor is this method in use anymore. It is just the skeleton of what is required to create a Debian package. Still, it will be useful for exploring further topics regarding Debian package management in coming newsletters.
In this article, we'll take a look at how a buffer overflow attack works (specifically a stack overflow). We'll do this by writing a simple program to demonstrate the dangers of buffer overflows.
Here is a simple (and extremely naive) program that checks an input password until the correct one is received. On receiving the correct password, it prints a message on the screen and ends.
Stupid Password Program (boverflow.c)
#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[])
{
char password[20];
int test = 1;
do
{
printf("Password: ");
scanf("%s", password);
if(strcmp(password, "cookies") == 0)
test = 0;
} while(test);
printf("Not-so-top secret password protected area\n");
return 0;
}
Let's first compile this program:
$ gcc boverflow.c -o over
$ ./over
Password: chocolates
Password: chickens
Password: pigeons
Password: dodos
Password: cookies
Not-so-top secret password protected area
It works. If you enter the right password ("cookies" in this case), it ends the loop and goes to the "secret" area. What is wrong with this code and how can we break through it?
First, the scanf function does no bounds checking, so if the password you enter crosses 20 characters (we allocated 20 characters for the password variable), the memory that comes right after password will be overwritten. How do we know what comes right after the password variable? We can see it in the code! The variables are allocated on the stack in the order they are written in. The first variable allocated on the stack will be password, followed by test. If we overflow the password variable, we end up overwriting the test variable!
From the code we can see that the test variable (conveniently...) is what holds the whole loop together. If set to 0, it will simply break the loop and move into the "secret" area! How do we overwrite the test variable with a value of 0? Remember that we need to set it to the integer value 0, so entering the character '0' will not suffice. Even more conveniently, C uses the value 0 ('\0') to mark the end of strings. So if we enter a string that is exactly 20 characters long, the '\0' will overflow into test and break the loop! Let's try to confirm:
$ ./over
Password: aaaaaaaaaaaaaaaaaaaa
Not-so-top secret password protected area
Whaddayaknow! It works! The string we entered is exactly 20 characters long. We just broke through simple (read: moronic) security with some good ol' thinking! Let's make it harder this time. Let's try and break a program whose source we have never seen. Well, actually, I have compiled the same source, but with a different length for the password string and a different password. We are assuming that we know how the program is organized but don't know the specifics. Go ahead and change the values in your code and pretend you don't know them (I changed the length of the password string to 10 and changed the correct password). Now try running it:
$ gcc -ggdb boverflow.c -o over
./over
Password: aaaaaaaaaaaaaaaaaaaa
Password: cookies
Password: ^C
$
Looks like this one is putting up a fight! Let's start by realizing that we've just compiled it with debugging enabled (once again, how convenient!). So we can use a debugger such as GDB to figure out what the hell is going on and then break through it! In case you are given a binary and want to know whether it was compiled with debugging symbols or not, you can use the file command as shown below:
$ file ./over
./over: ELF 32-bit LSB executable, Intel 80386, .... for GNU/Linux 2.6.8, not stripped
The executable above is not stripped (i.e., the debugging symbols haven't been removed). So we can proceed by stepping through the program in GDB:
$ gdb ./over
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...
(gdb) _
Let us start by setting a breakpoint at the main function and running the program:
(gdb) break main
Breakpoint 1 at 0x8048455: file boverflow.c, line 6.
(gdb) run
Starting program: /home/mvanga/Programming/test/boverflow/over
Breakpoint 1, main () at example.c:6
6 int test = 1;
(gdb)
Let's try and look at where on the stack the variables are stored this time:
(gdb) print &password
$1 = (char (*)[10]) 0xbfb67c16
(gdb) print &test
$2 = (int *) 0xbfb67c20
(gdb)
Whoops! Looks like GDB spoils the fun by telling us the size directly (10 in this case)! Let's just go ahead and crack the program:
$ ./over
Password: aaaaaaaaaa
Not-so-top secret password protected area
Let's make it even harder! Let's strip the executable so we have no way of using GDB for our dirty work. You can strip an executable as shown below:
$ file over
over: ELF 32-bit LSB executable, Intel 80386, .... for GNU/Linux 2.6.8, not stripped
$ strip over
$ file over
over: ELF 32-bit LSB executable, Intel 80386, .... for GNU/Linux 2.6.8, stripped
$
Now what? Fear not! objdump to the rescue! The objdump tool can be used to inspect ELF files and can provide us with extremely valuable information. Let's try disassembling our program with objdump:
$ objdump -d ./over
You should have gotten a mess of assembly as output. Lost? Don't be! Remember that our program uses scanf, which is a library function and will thus still be visible whether the executable is stripped or not. Poking around a bit, you can figure out that the main function is somewhere around here:
8048444: 8d 4c 24 04 lea 0x4(%esp),%ecx
8048448: 83 e4 f0 and $0xfffffff0,%esp
804844b: ff 71 fc pushl -0x4(%ecx)
804844e: 55 push %ebp
804844f: 89 e5 mov %esp,%ebp
8048451: 51 push %ecx
8048452: 83 ec 44 sub $0x44,%esp
8048455: c7 45 f8 01 00 00 00 movl $0x1,-0x8(%ebp)
804845c: c7 04 24 80 85 04 08 movl $0x8048580,(%esp)
8048463: e8 ec fe ff ff call 8048354 <printf@plt>
8048468: 8d 45 da lea -0x26(%ebp),%eax
804846b: 89 44 24 04 mov %eax,0x4(%esp)
804846f: c7 04 24 8b 85 04 08 movl $0x804858b,(%esp)
8048476: e8 c9 fe ff ff call 8048344 <scanf@plt>
804847b: c7 44 24 04 8e 85 04 movl $0x804858e,0x4(%esp)
8048482: 08
8048483: 8d 45 da lea -0x26(%ebp),%eax
8048486: 89 04 24 mov %eax,(%esp)
8048489: e8 e6 fe ff ff call 8048374 <strcmp@plt>
804848e: 85 c0 test %eax,%eax
8048490: 75 07 jne 8048499 <strcmp@plt+0x125>
8048492: c7 45 f8 00 00 00 00 movl $0x0,-0x8(%ebp)
8048499: 83 7d f8 00 cmpl $0x0,-0x8(%ebp)
804849d: 75 bd jne 804845c <strcmp@plt+0xe8>
804849f: c7 04 24 98 85 04 08 movl $0x8048598,(%esp)
80484a6: e8 b9 fe ff ff call 8048364 <puts@plt>
80484ab: b8 00 00 00 00 mov $0x0,%eax
80484b0: 83 c4 44 add $0x44,%esp
80484b3: 59 pop %ecx
80484b4: 5d pop %ebp
80484b5: 8d 61 fc lea -0x4(%ecx),%esp
80484b8: c3 ret
If we know how the stack works on the Intel architecture, we can see that the stack pointer makes room for variables by subtracting their sizes from itself. The stack pointer always points to the last used word in memory. All local variables are indexed using the base pointer (ebp), which is set to the stack pointer before all the subtractions for local variables (thus the offsets are negative, since the stack grows downwards). We know that the password pointer is being passed to the scanf function, so let's take a look at the assembly just before the call to scanf is made and look for something related to the ebp register:
...
8048452: 83 ec 44 sub $0x44,%esp
8048455: c7 45 f8 01 00 00 00 movl $0x1,-0x8(%ebp)
804845c: c7 04 24 80 85 04 08 movl $0x8048580,(%esp)
...
8048468: 8d 45 da lea -0x26(%ebp),%eax
804846b: 89 44 24 04 mov %eax,0x4(%esp)
804846f: c7 04 24 8b 85 04 08 movl $0x804858b,(%esp)
8048476: e8 c9 fe ff ff call 8048344 <scanf@plt>
...
The scanf function is being passed the address ebp-38 (0x26 in hex is 38 in decimal). This is where our password variable must lie. But where does test lie in memory? We can see from the instruction at 8048455 above that the value 1 is being stored into ebp-8 (we set test=1 at the beginning of our program). That's where test must lie. So what is the length of the password input string? It's:
Address of password = ebp - 38
Address of test = ebp - 8
Length of password = |(ebp - 38) - (ebp - 8)| = 30
Thirty characters! Let's confirm by breaking apart this program!
$ ./over
Password: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Not-so-top secret password protected area
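Typing thirty characters by hand is error-prone; the shell can generate the padding for us (piping it into ./over assumes the binary from above):

```shell
# Print exactly thirty 'a' characters (the format is re-applied per argument)
printf 'a%.0s' $(seq 30); echo
```

You can then feed it straight in with printf 'a%.0s' $(seq 30) | ./over.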
Tada! Quite simple, really. Buffer overflows are among the stupidest (yet surprisingly common) mistakes you can make in your code, so keep a lookout for them! How could you have written the code so that it would be safe? Check bounds whenever you take an input and are about to put it into a buffer. You can also use safer functions to get the job done. For example, in the above program we could have done:
scanf("%29s", password);
This would have limited the length. We could have also used fgets to read from stdin in a length-constrained way:
fgets(password, sizeof(password), stdin);
If you don't want to preallocate the string but want to input strings of any length dynamically, you can use the 'a' flag of the scanf function (a GNU extension; newer standards spell it 'm'):
scanf("%as", &password);
The 'a' flag makes scanf allocate the string based on the input. It stores the address of the allocated buffer into a char *, which is why we need to pass a pointer to our string pointer (notice the &); password must be declared as a char * here.
Happy hacking!
The greatest thing about the Linux kernel is that it really puts the fun back into computing! As developers, we find it an extremely interesting toy to play around with and customize to our own liking. This could mean recompiling the latest kernels, trying out various configurations, writing kernel modules for fun, or perhaps just trying out some destructive commands ("sudo cat /dev/urandom > /dev/sda", anyone?)! One mantra should always be to never use your development machine as a testbed, as that carries potential dangers. Since most of us do not have the luxury of multiple machines to play around with, we can use an alternative method involving virtual machines. Read on!
In this article, we will set up a virtual machine to use as a test machine for all our "experimental" work. I assume you are running a Debian-based system and will use apt-get. If you are using another distribution, you will have to find out which packages correspond to the ones mentioned here. We use QEMU in this article, as it is relatively lightweight and gives us finer control over execution.
Let us dive right in and install QEMU with:
$ sudo apt-get install qemu
You can also choose to install from source by following the instructions on the QEMU website.
We now need a distribution to run in the test environment. Luckily, the Debian website provides QEMU images for x86 processors. Grab the debian_lenny_i386_small.qcow2 image (Debian Lenny, minimal, no graphical environment) from the Debian QEMU images page. The qcow2 format is a QEMU disk image format; it stands for QEMU Copy-On-Write. Using this format, QEMU can use a read-only base image and store all writes to the qcow2 image. Among the formats QEMU supports, this is the most versatile: it gives smaller images, optional AES encryption, zlib-based compression and support for multiple VM snapshots. In short, it's great! You can also download the image quickly using wget as shown below:
$ mkdir ~/sandbox
$ cd !$
$ wget -c http://people.debian.org/~aurel32/qemu/i386/debian_lenny_i386_small.qcow2
The !$ trick above lets us pass the previous command's last argument to the current command. You can see this with a simple echo !$. So now we have finished downloading the image and are ready to run QEMU with it.
Run QEMU using the downloaded image as the first hard disk:
$ qemu -hda debian_lenny_i386_small.qcow2
Let it boot up. At the login prompt, you can login using the default passwords provided on the Debian QEMU images page:
All images are 10GiB images in QCOW2 format on which a Debian Etch or Lenny
system has been installed. The small images correspond to a "Standard system"
installation, while the other images correspond to a "Standard system" +
"Desktop environment" installation. Other options are as follow:
- Keyboard: British English
- Language: English
- Mirror: ftp.de.debian.org
- Hostname: debian-i386
- Root password: root
- User account: user
- User password: user
Login as root. The network should be detected automatically by QEMU and you should be able to access the internet from your QEMU instance. You can try something like the following to test internet connectivity:
# ping google.com
If your network is not up and running, take a look at the excellent article on setting up QEMU networking. Once this is done, we can install the additional packages required to make this a usable testbed. Here are some recommended and interesting packages you can install:
# apt-get install sudo # For root usage from user logins
# sudo apt-get install apache2 # For playing around with apache!
# sudo apt-get install linux-headers-$(uname -r) # For kernel development
We also want an easy way to access the virtual machine without having to mess around with QEMU. I personally use SSH to log into the virtual machine and work. This way, I can use my shell shortcuts and not have to keep switching between windows. You can set this up by installing the SSH tools on the virtual machine:
# sudo apt-get install openssh-server
You also want to set up the hostname of the virtual machine so you can identify it easily. Edit the /etc/hostname file and put your chosen name there:
/etc/hostname
# Set the name of your machine in this file
sandbox
You should also edit your /etc/hosts file and add the name as an alias for the local IP (127.0.0.1). Modify the file to look like this:
/etc/hosts
127.0.0.1 localhost
127.0.1.1 debian-i386 sandbox
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
You can set up a startup script for the virtual machine on your host as shown below. I have redirected the SSH and HTTP ports (using the -redir option in QEMU) so that the services on the guest can keep their original ports rather than being reconfigured.
~/bin/sandbox
#!/bin/bash
# Not the cleanest script, but it works :-)
running=`ps -e | grep " qemu"`
echo $running
if [ "$running" != "" ]
then
dialog --title "Whoops!" --yesno "QEMU already running! Kill?" 6 45
ans=$?
if [ $ans -eq 1 ]; then
exit
fi
killall sandbox-qemu
killall qemu
exit
fi
qemu -nographic -hda /home/mvanga/sandbox/debian_lenny_i386_small.qcow2 -redir tcp:5555::22 -redir tcp:5556::80 &
As you can see, I have used the -nographic option to prevent all graphical output and save processing power; I use the command line for any work. We can now try to SSH into the machine from our host! We have redirected the SSH port on the guest to port 5555 of the host, so we can SSH into the guest from the host using the -p option:
$ ssh -p 5555 localhost
If you installed Apache on the guest, you can test it out by opening up your browser and navigating to localhost:5556 (we have redirected the HTTP port 80 of the virtual machine to port 5556 of the host).
I would also suggest writing short scripts in your ~/bin directory (or setting up aliases in your ~/.bashrc file) to SSH quickly into the guest:
~/bin/ssh5555
#!/bin/bash
ssh -p 5555 user@localhost
~/bin/ssh5555r
#!/bin/bash
ssh -p 5555 root@localhost
Voila! A working debian test system which you can mess around with! You might want to make a copy of the image in a safe place in case you mess things up (isn't that why we made this?). Once you have made a copy, you can try out a destructive command when SSHed into the system, such as:
# cat /dev/urandom > /dev/mem # Do not try this at host!
or perhaps you would like something more subtle?
# :(){ :|:& };: # Jaromil bomb anyone?
Happy hacking!
When I first started out using Linux, one of my main problems was understanding how to compile programs under Linux. While the simple "./configure", "make" and "make install" worked for most programs, I found myself getting stuck when something failed. I also started feeling like I didn't have any control over the installation. This article starts exploring these issues so that beginners can gain a better understanding of how the process works.
Let's not go into the boring details of how things work; instead, let's just learn by trying to compile a new program from source! For the sake of keeping things fun, we will install DOSBox from its source code. DOSBox is an Intel x86 emulator with the DOS operating system. With its help you can "re-live" the good old days: it can run plenty of the old classics that don't run on newer computers! If you've ever longed to play the classic Prince of Persia again and kill Jaffar in that last level, then you can do it with DOSBox!
The first thing we need to do is get the source code from the DOSBox website (you can download the Linux source from this link). Once you have it, create a new directory in your home called source for keeping all the source code, and extract the source code there:
$ mkdir ~/source
$ mv ~/Downloads/dosbox-0.73.tar.gz ~/source
$ cd source
$ tar -xvf dosbox-0.73.tar.gz
$ cd dosbox-0.73
We can now try compiling the code. Make sure you have the GCC toolchain installed on your distribution before you proceed. The most common way of releasing programs is using the GNU Autotools utility. To install software that has been created with Autotools, you simply use the following steps:
$ ./configure
$ make
$ make install
If you are unsure whether the package you are trying to install uses Autotools, look in the source directory. An Autotools package has an executable script named "configure". There should be no file called Makefile yet, as that gets generated when the configure script runs. Doing an ls in the DOSBox directory shows that it uses Autotools:
$ ls -l
total 764
-rw-r--r-- 1 mvanga mvanga 12634 2009-05-20 13:39 acinclude.m4
-rw-r--r-- 1 mvanga mvanga 32675 2009-05-20 13:40 aclocal.m4
-rw-r--r-- 1 mvanga mvanga 232 2007-07-30 02:39 AUTHORS
-rwxr-xr-x 1 mvanga mvanga 367 2007-02-04 05:46 autogen.sh
-rw-r--r-- 1 mvanga mvanga 26352 2009-05-20 08:19 ChangeLog
-rwxr-xr-x 1 mvanga mvanga 44593 2008-12-11 04:05 config.guess
-rw-r--r-- 1 mvanga mvanga 7463 2009-05-20 13:40 config.h.in
-rw-r--r-- 1 mvanga mvanga 11406 2010-02-02 16:41 config.log
-rwxr-xr-x 1 mvanga mvanga 32724 2008-12-11 04:05 config.sub
-rwxr-xr-x 1 mvanga mvanga 366585 2009-05-20 13:40 configure
-rw-r--r-- 1 mvanga mvanga 16928 2009-05-20 13:34 configure.in
-rw-r--r-- 1 mvanga mvanga 17992 2002-07-27 09:08 COPYING
-rwxr-xr-x 1 mvanga mvanga 17867 2008-12-11 04:05 depcomp
drwxr-xr-x 2 mvanga mvanga 4096 2009-05-27 12:17 docs
drwxr-xr-x 2 mvanga mvanga 4096 2009-05-27 12:17 include
-rw-r--r-- 1 mvanga mvanga 3833 2009-05-26 14:15 INSTALL
-rwxr-xr-x 1 mvanga mvanga 13620 2008-12-11 04:05 install-sh
-rw-r--r-- 1 mvanga mvanga 91 2007-03-02 05:21 Makefile.am
-rw-r--r-- 1 mvanga mvanga 19569 2009-05-20 14:29 Makefile.in
-rwxr-xr-x 1 mvanga mvanga 11135 2008-12-11 04:05 missing
-rw-r--r-- 1 mvanga mvanga 27110 2009-05-20 14:24 NEWS
-rw-r--r-- 1 mvanga mvanga 51390 2009-05-27 06:06 README
drwxr-xr-x 13 mvanga mvanga 4096 2009-05-27 12:17 src
-rw-r--r-- 1 mvanga mvanga 932 2009-05-20 13:05 THANKS
drwxr-xr-x 2 mvanga mvanga 4096 2009-05-27 12:17 visualc_net
Notice that the configure script is executable. Run it as follows:
$ ./configure
You should see output similar to the following (the details may vary from system to system):
$ ./configure
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking target system type... i686-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make sets $(MAKE)... (cached) yes
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking for a BSD-compatible install... /usr/bin/install -c
checking for ranlib... ranlib
checking for sdl-config... no
checking for SDL - version >= 1.2.0... no
*** The sdl-config script installed by SDL could not be found
*** If SDL was installed in PREFIX, make sure PREFIX/bin is in
*** your path, or set the SDL_CONFIG environment variable to the
*** full path to sdl-config.
configure: error: *** SDL version 1.2.0 not found!
Whoops! Looks like we're missing one of the dependencies for DOSBox! As the error message tells us, we're missing the SDL package. We can search for it using the apt-cache tool (on Debian-based systems; check your distribution's documentation for an equivalent). Since we are compiling from source, we can deduce that the source requires the SDL library. Libraries on Linux are named using the standard format lib[NAME], so the SDL library will probably be called libsdl. Let us search for it using apt-cache:
$ apt-cache search libsdl
fische - Stand-alone sound visualisation for Linux
libsdl-console-dev - development files for libsdl-console
libsdl-console - console that can be added to any SDL application
libsdl-erlang - Erlang bindings to the Simple Direct Media Library
libsdl-ruby1.8 - Ruby/SDL interface for Ruby
libsdl-ruby - Ruby/SDL interface for Ruby
libsdl-sge-dev - development files for libsdl-sge
libsdl-sge - extension of graphic functions for the SDL multimedia library
libsdl-sound1.2-dev - Development files for SDL_sound
libsdl-sound1.2 - Decoder of several sound file formats for SDL
libsdl1.2-dev - Simple DirectMedia Layer development files
libsdl1.2debian-all - Simple DirectMedia Layer (with all available options)
libsdl1.2debian-alsa - Simple DirectMedia Layer (with X11 and ALSA options)
libsdl1.2debian-arts - Simple DirectMedia Layer (with X11 and aRts options)
libsdl1.2debian-esd - Simple DirectMedia Layer (with X11 and esound options)
libsdl1.2debian-nas - Simple DirectMedia Layer (with X11 and NAS options)
libsdl1.2debian-oss - Simple DirectMedia Layer (with X11 and OSS options)
libsdl1.2debian-pulseaudio - Simple DirectMedia Layer (with X11 and PulseAudio options)
libsdl1.2debian - Simple DirectMedia Layer
libsdl-ocaml-dev - OCaml bindings for SDL - development files
libsdl-ocaml - OCaml bindings for SDL - runtime files
libsdl-image1.2-dev - development files for SDL 1.2 image loading libray
libsdl-image1.2 - image loading library for Simple DirectMedia Layer 1.2
libsdl-mixer1.2-dev - development files for SDL1.2 mixer library
libsdl-mixer1.2 - mixer library for Simple DirectMedia Layer 1.2
libsdl-net1.2-dev - Development files for SDL network library
libsdl-net1.2 - network library for Simple DirectMedia Layer
libsdl-stretch-0-2 - stretch functions for Simple DirectMedia Layer
libsdl-stretch-dev - development files for SDL_stretch library
libsdl-ttf2.0-0 - ttf library for Simple DirectMedia Layer with FreeType 2 support
libsdl-ttf2.0-dev - development files for SDL ttf library (version 2.0)
libsdl-gfx1.2-4 - drawing and graphical effects extension for SDL
libsdl-gfx1.2-dev - development files for SDL_gfx
libsdl-pango-dev - text rendering with Pango in SDL applications (development)
libsdl-pango1 - text rendering with Pango in SDL applications (shared library)
libsdl-perl - SDL bindings for the Perl language
lgeneral - A "Panzer General" - like game
That is a lot of output, but looking carefully, we can find the package we want: the development package libsdl1.2-dev. Let us install it with:
$ sudo apt-get install libsdl1.2-dev
Now let us try configuring DOSBox again:
$ ./configure
Great! It works! If your system is missing further dependencies, you can resolve them the same way. The configure script also has other options that you might find useful. You can take a look at the program-specific options of configure using:
$ ./configure --help
A common option to pass to the configure script is --prefix. By default, Autotools programs install under /usr/local. If you would like to install a program under a different directory, you can use --prefix to specify it. For example:
$ ./configure --prefix=/opt/dosbox
The above makes sure that when you run the make install command, everything gets installed under the /opt/dosbox directory. So the binary that would otherwise have gone into /usr/local/bin/dosbox will now end up in /opt/dosbox/bin/dosbox (hence the name of the option, --prefix). This is great if you want to keep your programs organized in your own directories!
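To see the effect of --prefix without touching a real package, here is a toy sketch. The directory $HOME/opt/dosbox and the dosbox file are stand-ins for illustration, not a real build:

```shell
# Toy illustration of what --prefix changes: with ./configure --prefix=$PREFIX,
# make install copies the binary into $PREFIX/bin instead of /usr/local/bin.
PREFIX="$HOME/opt/dosbox"    # stand-in for ./configure --prefix=$HOME/opt/dosbox
mkdir -p "$PREFIX/bin"
touch "$PREFIX/bin/dosbox"   # make install would copy the real binary here
ls "$PREFIX/bin"             # → dosbox
```

Since $PREFIX is a directory your user owns, the install step needs no root privileges either.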
The role of the configure script is to probe your system for platform-specific information and generate a Makefile tailored to your specific environment. After the configure script runs, you should see a Makefile in the top-level directory. You can now compile DOSBox using:
$ make
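As an aside (not something the three-step recipe requires), make can compile independent files in parallel, which speeds up larger builds considerably. The -j flag sets the number of parallel jobs, and using the core count reported by nproc is a common choice:

```shell
# A common speedup: run make with one job per CPU core.
#   make -j"$(nproc)"
nproc   # prints the number of cores the command above would use
```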
If you get errors during this phase, it generally means a bug in the code or a difference in compiler versions; it can also mean some dependency is still missing. Looking at the failing file and the exact error message is a good first step, and a quick Google search for the error can also reveal solutions. Most programs put installation instructions in their README or INSTALL files, so make sure you go through those before beginning.
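When a build does fail, it helps to capture the output to a file so you can search it afterwards. Here is a minimal sketch; the build.log contents below are fabricated for illustration, and in a real build you would produce the log with make 2>&1 | tee build.log:

```shell
# Fabricated sample of what a failing build log might look like:
cat > build.log <<'EOF'
g++ -c -o dosbox.o dosbox.cpp
dosbox.cpp:42: error: 'SDL_Init' was not declared in this scope
g++ -c -o render.o render.cpp
EOF
# Pull out just the error lines, with their line numbers in the log:
grep -n 'error' build.log
# → 2:dosbox.cpp:42: error: 'SDL_Init' was not declared in this scope
```

On a long build this saves you scrolling back through thousands of lines of compiler output to find the one that matters.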
Once the build is done, you can install it using the make install command:
$ make install
The make install command might require root privileges, as some of the directories it writes to (e.g. /usr/bin) are only writable by the superuser. This step generally fails only when you lack write permission. The extra privileges are not needed if you used --prefix to point at a directory your user can write to. In general, the make install command simply copies the compiled binaries (which you can see if you do an ls now) to specific places in the file system.
When I am just testing out programs, I generally skip the installation step and just run the program from the same directory. For example, you can choose to skip the installation for DOSBox and run it directly by issuing the command below.
$ ./dosbox
You might wonder where the files of a standard make install generally end up. The various places where program parts are put are shown below:
/bin Essential global binaries. Programs generally don't install their binaries here.
/etc Configuration files (including things like cron tasks and log-rotation information)
/usr/bin This is where most of the program binaries end up
/usr/share/man Man pages (in the man<NUMBER> directories)
/usr/share/info Info files, usually more descriptive than man pages
/usr/src The source code or library headers
/usr/lib The libraries (.a and .so files)
/usr/share Architecture-independent data, such as documentation and licenses
As you can see, most of the stuff goes into the /usr
directory. This folder is meant to contain all the user binaries, their documentation, libraries, header files, etc. User programs (like telnet, ftp, etc.) are also placed here. In the original Unix implementations, /usr was where the home directories of the users were placed (that is to say, /usr/someone was then the directory now known as /home/someone). In current Unices, /usr is where user-land programs and data (as opposed to 'system land' programs and data) are. The name hasn't changed, but it's meaning has narrowed and lengthened from "everything user related" to "user usable programs and data". As such, some people may now refer to this directory as meaning 'User System Resources' and not 'user' as was originally intended.
There you have it! You now have DOSBox installed and running! Grab a few of those classic games (yes, it can run Doom!) and enjoy the nostalgia! You might wonder why anyone would compile from source when a package manager can do the job (after all, we used one to install the dependencies!). The reason is a simple one: control. When you compile from source, you can modify parts of the process to suit your needs, and you can even modify the code itself if you wish. There is also an added benefit to knowing how to install from source: many programs on the web are not part of any repository and are available only in source form. Use package managers for convenience; use source compiles for control.
Happy Hacking!