Happy First Birthday!

January 6, 2008

This blog has now been running for a year; the first post was Hello World on 17th Jan 2007.

I hadn’t realised it had been going for so long; in that time, I’ve made 41 posts, so I haven’t quite managed to make one post per week :( I have been a bit slack lately, for which I do apologise. New Year’s Resolution: I must make more posts here!

In the meantime, my main site, steve-parker.org, has celebrated its seventh birthday, having been born in June 2000 – looking forward to making the 8th birthday celebrations this June!


Ordering items

November 7, 2007

There are lots of little quirks in the *nix shells; this is just one of them.

If you want to list the files in a directory, then ls will list them all for you, in alphabetical order.

If you want to list them by size, you can use ls -S; by timestamp: ls -t, and so on.

But ls is just one particular utility. What happens when we do this in the shell itself:


for myfile in *
do
  echo "My file is called $myfile"
done

We get an alphabetically sorted list (see man ascii for the actual detail; they’re sorted by ASCII value, so numbers first, then uppercase letters, then lowercase letters).
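A quick way to see this for yourself, in an otherwise empty directory (this assumes the default C/POSIX collation; other locales may sort differently):

$ touch 1.txt Apple.txt apple.txt
$ echo *
1.txt Apple.txt apple.txt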

This can be a pain, but it can also be quite useful. If you’ve got a bunch of files:

1.install.txt
2.setup.txt
3.use.txt
4.uninstall.txt

Then you can play with them in order, just by using the asterisk:

for i in *
do
  echo "File $i" >> all.txt
  cat "$i" >> all.txt
done

And it will sort them into order for you (“1” comes before “2” in ASCII, and so on…)

Or you could just do this:

more * > all.txt

Because more will prefix each file with its name in a header, if there is more than one file to process.


Maths in bash shell

September 28, 2007

http://snap.nlc.dcccd.edu/reference/bash1/features_29.html has a bash-specific technique for processing calculations:

$[ expression ]
$(( expression ))

See the link for further details.
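As a quick sketch of the second form in use ($(( )) is the form defined by POSIX, so it also works outside bash; the numbers are arbitrary):

x=7
y=3
echo $(( x + y ))    # 10
echo $(( x * y ))    # 21
echo $(( x / y ))    # 2  (integer division; the fraction is dropped)
echo $(( x % y ))    # 1  (the remainder)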


IFS – Internal Field Separator

September 26, 2007

It seems like an esoteric concept, but it’s actually very useful.

If a line of your input file is “1 apple steve@example.com”, then your script could say:

while read qty product customer
do
  echo "${customer} wants ${qty} ${product}(s)"
done

The read command will read the three fields into the three variables, because they are separated by spaces.

However, critical data is often presented in spreadsheet format. If you save such a spreadsheet as a CSV file, the same line comes out like this:

1,apple,steve@example.com

This contains no spaces, so the above code will not be able to understand it. It will read the whole thing into the first variable, quantity ($qty), and leave the other two fields blank.

The way around this is to tell the shell that “,” (the comma itself) separates fields; it is the “internal field separator”, or IFS.

The IFS variable defaults to space/tab/newline, which isn’t easy to type back in at the shell, so it’s best to save the original IFS into another variable, so you can put it back again after you’ve messed around with it. I tend to use “oIFS=$IFS” to save the current value into “oIFS”.

Also, when the IFS variable is set to something other than the default, it can really mess with other code.

Here’s a script I wrote today to parse a CSV file:

#!/bin/sh
oIFS=$IFS     # Always keep the original IFS!
IFS=","          # Now set it to what we want the "read" loop to use
while read qty product customer
do
  IFS=$oIFS
  # process the information
  IFS=","       # Put it back to the comma, for the loop to go around again
done < myfile.txt

It really is that easy, and it’s very versatile. You do have to be careful to keep a copy of the original (I always use the name oIFS, but whatever suits you), and to put it back as soon as possible, because so many things invisibly use the IFS – grep, cut, you name it. It’s surprising how many things within the “while read” loop actually did depend on the IFS being the default value.
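Putting it all together, a complete minimal version might look like this (the file name orders.csv and the echo line are just illustrative; the final line restores the saved IFS once the loop has finished):

#!/bin/sh
# Minimal sketch: orders.csv holds lines like "1,apple,steve@example.com"
oIFS=$IFS            # Always keep the original IFS!
IFS=","              # Comma-separated fields for the read
while read qty product customer
do
  IFS=$oIFS          # Back to the default while we process the line
  echo "${customer} wants ${qty} ${product}(s)"
  IFS=","            # And back to the comma for the next read
done < orders.csv
IFS=$oIFS            # Finally, restore the original IFS for the rest of the script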


Logic

September 7, 2007

Whilst not directly related to shell programming, an understanding of the basic logic operations – AND, OR, NOR, XOR, NAND, and so on – is as important to shell programmers as it is to C, Java, .Net and other coders.

My recent interactive logic gate page seems to have become quite popular; it’s just a simple implementation of each of the major logic circuits in use. If you want to see more, say so – I’ll add anything you ask for ;-)
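For what it’s worth, bash can evaluate the bitwise forms of these directly in arithmetic expansion (a quick sketch; the values are just examples):

a=12    # binary 1100
b=10    # binary 1010
echo $(( a & b ))    # AND: 8  (binary 1000)
echo $(( a | b ))    # OR:  14 (binary 1110)
echo $(( a ^ b ))    # XOR: 6  (binary 0110)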


25 useful commands in Linux/UNIX for Beginners

August 22, 2007

The (often a bit geeky for this blog) FreeBSD-World website has a good “Top-25” list of 25 useful commands in Linux/UNIX for Beginners (note: new URL updated 31 Aug 2008)

I’m not sure that #24 (dig) and #25 (host) are absolutely necessary, #18 (startx) is possibly outdated these days, and the compression tools (6-9) are much of a muchness, but apart from that, #1 – #23 should be familiar to anyone who claims to be experienced with UNIX/Linux. If one had to be dropped, it would be #18 (startx), as (a) it’s not needed on servers, and (b) modern *nix distros will boot into a GUI automatically when possible.

So what’s the list?
25. host
24. dig
23. mkdir
22. rm
21. cp
20. grep
19. ls
18. startx
17. nano / vi
16. pwd
15. cat
14. man
13. kill
12. locate
11. ifconfig
10. ssh
9. gzip
8. bzip2
7. zip
6. tar (I would put 6-9 in one category, personally. rar should probably be in there too)
5. mount
4. passwd
3. ping
2. tail
1. top


vi

August 1, 2007

I use vi every day; to me, there is no better text editor. It is apparently a little intimidating for the newcomer, though… so here’s a beginner’s guide to the VI editor.

I would write one of these myself, but – much as I love vi, and I really do – everyone who uses vi seems to have their own experience of it, and their own shortcuts. We’ve all got our own quirks. One thing’s for sure: vi is not Notepad!


Understanding init scripts

July 25, 2007

UNIX and Linux systems use “init scripts” – scripts typically placed in /etc/init.d/ which are run when the system starts up and shuts down (or changes runlevels, but we won’t go into that level of detail here, being more of a sysadmin topic than a shell scripting topic). In a typical setup, /etc/init.d/myservice is linked to /etc/rc2.d/S70myservice. That is to say, /etc/init.d/myservice is the real file, but the rc2.d file is a symbolic link to it, called "S70myservice". The “S” means “Start”, and “70” says when it should be run – lower-numbered scripts are run first. The range is usually 1-99, but there are no rules. /etc/rc0.d/K30myservice (for shutdown), or /etc/rc6.d/K30myservice (for reboot; possibly a different scenario for some services), will be the corresponding “Kill” scripts. Again, you can control the order in which your services are shut down; K01* first, to K99* last.
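To make that concrete (the service name and numbers are only placeholders for a hypothetical service):

# /etc/init.d/myservice is the real script; the rc?.d entries are just symbolic links to it
ln -s /etc/init.d/myservice /etc/rc2.d/S70myservice   # Start at run-level 2, position 70
ln -s /etc/init.d/myservice /etc/rc0.d/K30myservice   # Kill at shutdown
ln -s /etc/init.d/myservice /etc/rc6.d/K30myservice   # Kill at reboot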

All of these rc scripts are just symbolic links to /etc/init.d/myservice, so there is just one actual shell script, which takes care of starting or stopping the service. The Samba init script from Solaris is a nice and simple script to use as an example:

case "$1" in
start)
	[ -f /etc/sfw/smb.conf ] || exit 0

	/usr/sfw/sbin/smbd -D
	/usr/sfw/sbin/nmbd -D
	;;
stop)
	pkill smbd
	pkill nmbd
	;;
*)
	echo "Usage: $0 { start | stop }"
	exit 1
	;;
esac
exit 0

The init daemon, which controls init scripts, calls a startup script as "/etc/rc2.d/S70myservice start", and a shutdown script as "/etc/rc0.d/K30myservice stop". So we have to check the variable $1 to see what action we need to take. (See http://steve-parker.org/sh/variables2.shtml to read about what $1 means – in this case, it’s either “start” or “stop”).

So we use case (follow link for more detail) to see what we are required to do.

In this example, if it’s “start”, then it will run the three commands:

	[ -f /etc/sfw/smb.conf ] || exit 0
	/usr/sfw/sbin/smbd -D
	/usr/sfw/sbin/nmbd -D

Line 1 checks that smb.conf exists; there is no point continuing if it doesn’t, so we just “exit 0” (success) so that the system continues booting as normal. Lines 2 and 3 start the two daemons required for Samba.

If it’s “stop”, then it will run these two commands:

	pkill smbd
	pkill nmbd

pkill means “Process Kill”, and it simply kills off the two processes started by the “start” option.

The "*)" construct catches any other uses, and simply replies that the correct syntax is to call it with either “start” or “stop” – nothing else will do. Some services allow for status reports, restarting, and so on. The one thing we do need to provide is “start”. Most services also have a “stop” function. All others are optional.

The simplest possible init script would be this, to control an Apache webserver:

#!/bin/sh
/usr/sbin/apachectl $1

Apache comes with a program called “apachectl” (or “apache2ctl”), which will take “stop” and “start” as arguments, and act accordingly. It will also take “restart”, “status”, “configtest”, and a few more options, but that one-line script would be enough to act as /etc/init.d/apache, with /etc/rc2.d/S90apache and /etc/rc0.d/K10apache linking to it. To be frank, even that is not necessary; you could just link /usr/sbin/apachectl into /etc/init.d/apache. In reality, it’s normally good to provide a few sanity-checks in addition to the basic stop/start functionality.

The vast majority of init scripts use the case command; around that, you can wrap all sorts of other things – most GNU/Linux distributions include a generic reporting script (typically /lib/lsb/init-functions – to report “OK” or “FAILED”), read in a config file (like the Samba example above), define functions for the more involved aspects of starting, stopping, or reporting on the status of the service, and so on.

Some (eg, SuSE) have an “INIT INFO” block, which may allow the init daemon a bit more control over the order in which services are started. Ubuntu’s Upstart is another approach; Solaris 10 uses SMF (the Service Management Facility), which starts and stops services, but also monitors them to check that they are running as expected.

After a good decade of stability, in 2007 the world of init scripts appears to be changing, potentially quite significantly. However, I’m not here to speculate on future developments; this post just documents the stable interface that is init scripts. Even if other things change, the basic “start|stop” syntax is going to be with us for a long time to come. It is easy, but often important, to understand what is going on.

In closing, I will list the run-levels, and what each run-level provides:

0: Shut down the OS (without powering off the machine)
1, s, S: Single-User mode. Networking is not enabled.
2: Networking enabled (not NFS, Printers)
3: Normal operating mode (including NFS, Printers)
4: Not normally used
5: Shut down the OS and power off the machine
6: Reboot the OS.

Some GNU/Linux distributions change these definitions – in particular, Debian provides all network services at runlevel 2, not 3. Run-level 5 is also sometimes used to start the graphical (X) interface.


Shell Pipes by Example

July 22, 2007

Pipes, piping, pipelines… whatever you call them, they are very powerful – in fact, they are one of the core tenets of the philosophy behind UNIX (and therefore Linux). They are also really very simple, once you understand them. The way to understand them is by playing with them, but if you don’t know what they do, you don’t know where to start… Catch-22!

So, here are some simple examples of how the pipe works.

Let’s see the code

$ grep steve /etc/passwd | cut -d: -f 6
/home/steve
$

What did this do? There are two UNIX commands there: grep and cut. The command “grep steve /etc/passwd” finds all lines in the file /etc/passwd which contain the text “steve” anywhere in the line. In my case, this has one result:
steve:x:1000:1000:Steve Parker,,,:/home/steve:/bin/bash
The second command, “cut -d: -f6” cuts the line by the delimiter (-d) of a colon (“:“), and gets field (-f) number 6. This is, in the /etc/passwd file, the home directory of the user.

So what? Show me some more

This is the main point of this article; once you’ve seen a few examples, it normally all becomes clear.

EG2

$ find . -type f -ls | cut -c14- | sort -n -k 5
rw-r--r--   1 steve    steve       28 Jul 22 01:41 ./hello.txt
rwxr-xr-x   1 steve    steve     6500 Jul 22 01:41 ./a/filefrag
rwxr-xr-x   1 steve    steve     8828 Jul 22 01:42 ./c/hostname
rwxr-xr-x   1 steve    steve    30848 Jul 22 01:42 ./c/ping
rwxr-xr-x   1 steve    steve    77652 Jul 22 01:42 ./b/find
rwxr-xr-x   1 steve    steve    77844 Jul 22 01:41 ./large
rwxr-xr-x   1 steve    steve    93944 Jul 22 01:41 ./a/cpio
rwxr-xr-x   1 steve    steve    96228 Jul 22 01:42 ./b/grep
$

What I did here was combine three commands: “find . -type f -ls” finds regular files, and lists them in an “ls”-style format: permissions, owner, size, etc.
“cut -c14-” discards the first 13 characters (keeping everything from character 14 onwards); those characters mess up the formatting on this website (!), and aren’t very interesting.
“sort -n -k 5” does a numeric (-n) sort on field 5 (-k5), which is the size of the file.
So this gives me a list of the files in this directory (and subdirectories), ordered by file size. That’s much more useful than “ls -lS”, which restricts itself to the current directory and doesn’t descend into subdirectories.

(As an aside, I have to admit that I only concocted this by trying to think of an example; it actually seems really useful, and worth making into an alias… I must do a post about “alias” some time!)
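If you did want to keep it to hand as an alias (the name here is just a suggestion), it could be as simple as:

alias sizesort='find . -type f -ls | cut -c14- | sort -n -k 5'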

So how does it work?

This seems pretty straightforward: get lines containing “steve” from the input file (“grep steve /etc/passwd“), and get the sixth field (where fields are marked by colons) (“cut -d: -f6“). You can read the full command from left to right, and see what happens, in that order.

How does it really work?

EG1 Explained

There are some gotchas when you start to look at the plumbing. Although we use the analogy of a pipe (think of water flowing through a pipe), the OS actually sets up the commands in the reverse order: it calls cut first, then it calls grep. If you have (for example) a syntax error in your cut command, then grep will never be called.
What actually happens is this:

  1. A “pipe” is set up – a special entity which can take input, which it passes, line by line, to its output.
  2. cut is called, and its input is set to be the “pipe”.
  3. grep is called, and its output is set to be the “pipe”.
  4. As grep generates output, it is passed through the pipe, to the waiting cut command, which does its own simple task, of splitting the fields by colons, and selecting the 6th field as output.

EG2 Explained

For EG2, “sort” is called first, which ties to the second (rightmost) pipe for its input. Then “cut” is called, which ties to the second pipe for its output, and the first (leftmost) pipe for its input. Then, “find” is called, which ties to the first pipe for its output.
So the output of “find” is piped into “cut”, which strips off the first 13 characters of the “find” output. This is then passed to “sort”, which sorts on field 5 (of what it receives as input), so the output of the entire pipeline is a numerically sorted list of files, ordered by size.


Shell Cheatsheet

July 14, 2007

There doesn’t seem to be any decent shell CheatSheet out there, so I have undertaken to write one.

This is my first attempt; once I actually printed it out, I realised that the font was rather large, so I had room to include much more than I had originally sketched out.

You may notice that there are a lot of blanks in the Cheatsheet; Shell Cheatsheet has the same content, with some ideas for new content. Please tell me what you want to see in there – there is still lots of room, it’s only about 60% full.

What would you fill the other 40% with?

Does this seem to be a useful cribsheet? What would you like to see in it?

Also, what format would you prefer? PDF? PNG? GIF? DOC?!

