Update on Shell Scripting Recipes book

April 23, 2011

Wow, it’s been nearly two months since I last made a post about the upcoming book on shell scripting. I’m really sorry; I had intended to give much more real-time updates here. The book focuses on GNU/Linux and the Bash shell in particular, but it does cover the other environments too – Solaris, the Bourne shell, as well as mentions of ksh, zsh, *BSD and the rest of the Unix family.

In terms of page count, it is currently 89% finished. There is still the proofreading to be done, and whatever delivery details the publishers need to deal with, so the August availability date is still on schedule. I notice that http://amzn.com/1118024486 is already offering a massive discount on the cover price; I have no idea what that is about, and I’m trying not to take offence – they can’t have dismissed the book already, as I have not quite finished writing it yet! So hopefully you can get a bargain while it’s cheap.

The subject matter has the potential to be quite boring if presented as a list of tedious system administration tasks, so I have tried to make it light and fun whenever I can; it’s still with Legal at the moment, but I hope to have a Space Invaders clone, written entirely in the shell, published in the book. People don’t tend to see the shell as being capable of doing anything interactive at all, so it is nice to write a playable interactive game in it. The main problem in terms of playability is working out how much to slow it down, and at what stage! Of course, being a shell script, you can tweak the starting speed, the level at which it speeds up, and anything else about the gameplay. If the game doesn’t make it into the book, I’ll post it here anyway, and will welcome your contributions on gameplay.
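As a sketch of that timing problem (the variable names and numbers here are illustrative, not the actual game code): `read -t` can act as both the keyboard poll and the frame delay, with the timeout shrinking as the levels advance.

```shell
#!/bin/bash
# Illustrative game-loop timing, not the actual Space Invaders code.
# DELAY and SPEEDUP_EVERY are hypothetical tunables.
DELAY=0.5          # starting frame delay, in seconds
SPEEDUP_EVERY=10   # speed up every N frames
frame=0
level=1
while [ "$frame" -lt 30 ]; do
  frame=$((frame + 1))
  # read -t doubles as the frame delay and the key poll;
  # it returns early if a key (or EOF) arrives first
  if read -r -t "$DELAY" key 2>/dev/null; then
    : # handle "$key" here: move, fire, quit...
  fi
  if [ $((frame % SPEEDUP_EVERY)) -eq 0 ]; then
    level=$((level + 1))
    # shrink the delay by 20% each level (awk does the float maths)
    DELAY=$(awk -v d="$DELAY" 'BEGIN { printf "%.2f", d * 0.8 }')
  fi
done
echo "reached level $level"
```

Because everything is a plain shell variable, tweaking the gameplay really is just a matter of editing the numbers at the top.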

Other than games, I’ve got recipes for init scripts, conditional execution, translating scripts into other (human) languages, even writing CGI scripts in the shell. There is coverage of arrays, functions, libraries, process control, wildcards and filename expansion, pipes and pipelines, exec and redirection of input and output; this book aims to cover pretty much all that you need to know about shell scripting without being a tedious list of what the bash shell can do.

There is a status page at http://sgpit.com/book which also has order information; you can pre-order your copy from there.

Shell Scripting Recipes

March 3, 2011

This is just a heads-up that my Shell Scripting Recipes book is due out in August 2011.

I hope to publish more details here as things progress; for now, it is well on the way, but it is not too late for readers to contact me (steve@steve-parker.org) if there is anything that you see as vital for a Shell Scripting Recipes book which was missing from other books you have seen.

Shell Scripting Recipes by Steve Parker

Part I covers Language and Usage; all of the concepts of the Shell and how it works.
Part II is Recipes using System Tools. This covers the commands that are necessary for shell scripting, and includes quite a few surprising ways to use them.
Part III is Recipes using Shell Features. This is similar to Part II but it gives concrete uses for the theory presented in Part I.
Part IV is Recipes for Systems Administration. This provides (and explains) various recipes for real-world systems administration tasks, both ordinary and out of the ordinary.

I do intend to keep you apprised of progress; you can also follow my personal blog at http://steve-parker.org/urandom/ for more detailed updates. The RSS feed for that blog is http://steve-parker.org/urandom/rss.php.

lsof, fuser, nohup, disown, bg, fg, and jobs

February 4, 2011

Bit of a cheeky one here – what does anybody want to know about these topics?

There is a book in the pipeline, and I have lots to say about all these things, but am very interested to hear what you think is easy / hard / intuitive / arcane / stupid about these commands and the whole job control side of Unix/Linux and the different shells.

lsof is great, but almost exclusively a GNU/Linux tool; fuser is good, but restricted in how much it actually tells you – you have to go digging into PIDs to see what has to be killed or otherwise dealt with.
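For example, here is the difference in a nutshell (the temporary file and background process are just a way to make the demo self-contained; the `command -v` guards are there because neither tool is universally installed):

```shell
#!/bin/bash
# Hold a file open from a background process, then ask who has it open.
tmp=$(mktemp)
sleep 5 > "$tmp" &     # a background process keeping the file open for writing
holder=$!

# fuser prints bare PIDs (with an access-mode letter), leaving you to
# look each PID up yourself...
command -v fuser >/dev/null && fuser "$tmp"

# ...while lsof names the COMMAND, PID, USER and FD in one table.
command -v lsof >/dev/null && lsof "$tmp"

kill "$holder" 2>/dev/null
rm -f "$tmp"
```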

What, oh faithful few who may still be following this terribly intermittent blog, do you want to see on the subject of processes and job control in the *nix shell?

2010 in review

January 2, 2011

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Fresher than ever.

Crunchy numbers

Featured image: a helper monkey made this abstract painting, inspired by your stats.

About 3 million people visit the Taj Mahal every year. This blog was viewed about 26,000 times in 2010. If it were the Taj Mahal, it would take about 3 days for that many people to see it.

The busiest day of the year was November 9th with 142 views. The most popular post that day was Simple Maths in the Unix Shell.

Where did they come from?

The top referring sites in 2010 were steve-parker.org, google.com, google.co.in, ubuntuforums.org, and rackerhacker.com.

Some visitors came searching, mostly for bash maths, suid bit, shell script timestamp, awk one liners, and bash field separator.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

Simple Maths in the Unix Shell (January 2007)
Timestamps for Log Files (March 2007)
suid shell scripts – setting “the SUID bit” (April 2007)
IFS – Internal Field Separator (September 2007)
Calculating Averages (March 2007)

inodes – ctime, mtime, atime

October 7, 2010

http://www.unix.com/tips-tutorials/20526-mtime-ctime-atime.html has a really good explanation of the different timestamps in a Unix/Linux inode. GNU/Linux has a useful utility called “stat” which displays most of the inode contents:
$ stat .bashrc
File: `.bashrc'
Size: 3219 Blocks: 8 IO Block: 4096 regular file
Device: fe00h/65024d Inode: 33 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ steve) Gid: ( 1000/ steve)
Access: 2010-10-07 01:11:21.000000000 +0100
Modify: 2010-08-19 21:22:20.000000000 +0100
Change: 2010-08-19 21:22:21.000000000 +0100
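With GNU stat you can also pull the individual timestamps out via format strings, which is handy in scripts (the temporary file below is just for illustration; any file, such as .bashrc, would do):

```shell
#!/bin/bash
f=$(mktemp)
stat -c 'atime: %x' "$f"    # last access, human-readable
stat -c 'mtime: %y' "$f"    # last data modification
stat -c 'ctime: %z' "$f"    # last inode (status) change
mtime=$(stat -c '%Y' "$f")  # %X/%Y/%Z give the same values as seconds since the epoch
rm -f "$f"
```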

As Perderabo explains in the above-linked post:

Unix keeps 3 timestamps for each file: mtime, ctime, and atime. Most people seem to understand atime (access time), it is when the file was last read. There does seem to be some confusion between mtime and ctime though. ctime is the inode change time while mtime is the file modification time. “Change” and “modification” are pretty much synonymous. There is no clue to be had by pondering those words. Instead you need to focus on what is being changed. mtime changes when you write to the file. It is the age of the data in the file. Whenever mtime changes, so does ctime. But ctime changes a few extra times. For example, it will change if you change the owner or the permissions on the file.

Let’s look at a concrete example. We run a package called Samba that lets PCs access files. To change the Samba configuration, I just edit a file called smb.conf. (This changes mtime and ctime.) I don’t need to take any other action to tell Samba that I changed that file. Every now and then Samba looks at the mtime on the file. If the mtime has changed, Samba rereads the file. Later that night our backup system runs. It uses ctime, which also changed, so it backs up the file. But let’s say that a couple of days later I notice that the permissions on smb.conf are 666. That’s not good – anyone can edit the file. So I do a “chmod 644 smb.conf”. This changes only ctime. Samba will not reread the file. But later that night, our backup program notices that ctime has changed, so it backs up the file. That way, if we lose the system and need to reload our backups, we get the new improved permission setting.

Here is a second example. Let’s say that you have a data file called employees.txt which is a list of employees. And you have a program to print it out. The program not only prints the data, but it obtains the mtime and prints that too. Now someone has requested an employee list from the end of the year 2000 and you found a backup tape that has that file. Many restore programs will restore the mtime as well. When you run that program it will print an mtime from the end of the year 2000. But the ctime is today. So again, our backup program will see the file as needing to be backed up.

Suppose your restore program did not restore the mtime. You don’t want your program to print today’s date. Well, no problem. mtime is under your control. You can set it to whatever you want. So just do:
$ touch -t 200012311800 employees.txt
This will set mtime back to the date you want and it sets ctime to now. You have complete control over mtime, but the system stays in control of ctime. So mtime is a little bit like the date on a letter while ctime is like the postmark on the envelope.
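That behaviour is easy to confirm with GNU stat’s format strings (the temporary file here stands in for employees.txt):

```shell
#!/bin/bash
f=$(mktemp)
touch -t 200012311800 "$f"   # set mtime to 31 Dec 2000, 18:00
mtime=$(stat -c '%Y' "$f")   # mtime as seconds since the epoch
ctime=$(stat -c '%Z' "$f")   # ctime as seconds since the epoch
echo "mtime: $(stat -c '%y' "$f")"   # reads as the end of 2000
echo "ctime: $(stat -c '%z' "$f")"   # reads as right now
rm -f "$f"
```

The letter and its postmark, exactly as described: touch rewrote the date on the letter, and the system stamped the envelope with today.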

This is a really clear, thorough explanation of ctime and mtime. Unfortunately, it is not possible to find the original creation time of a file, though that is somewhat meaningless anyway, as things are copied, moved, linked and changed; what is the creation time of a file which was created, removed, then created afresh, for example?

Interview with Steve Bourne

August 26, 2010

ARNnet have an Interview with Steve Bourne

I believe you can write shell scripts that will run either in the Bourne shell or Bash. It may have some additional features that aren’t in the Bourne shell. I believe Bash was intended as a strictly compatible open source version of the Bourne shell. Honestly I haven’t looked at it in any detail so I could be wrong. I have used Bash myself because I run a Linux/Gnu system at home and it appears to do what I would expect.

I have nearly finished reading Coders At Work – Steve Bourne could have been an interesting interviewee for that book.

When I first posted this link at urandom, I was not aware that I myself was quoted, at the top of page 5 of the 7-page interview:

Unix Specialist Steve Parker has posted ‘Steve’s Bourne / Bash scripting tutorial’ in which he writes: “Shell script programming has a bit of a bad press amongst some Unix systems administrators. This is normally because of one of two things: a) The speed at which an interpreted program will run as compared to a C program, or even an interpreted Perl program; b) Since it is easy to write a simple batch-job type shell script, there are a lot of poor quality shell scripts around.” Do you agree?

It would be hard to disagree because he probably knows more about it than I do. The truth of the matter is you can write bad code in any language, or most languages anyway, and so the shell is no exception to that. Just as you can write obfuscated C you can write obfuscated shell. It may be that it is easier to write obfuscated shell than it is to write obfuscated C. I don’t know. But that’s the first point.

The second point is that the shell is a string processing language and the string processing is fairly simple. So there is no fundamental reason why it shouldn’t run fairly efficiently for those tasks. I am not familiar with the performance of Bash and how that is implemented. Perhaps some of the people that he is talking about are running Bash versus the shell but again I don’t have any performance comparisons for them. But that is where I would go and look. I know when I wrote the original implementation of the shell I spent a lot of time making sure that it was efficient. And in particular with respect to the string processing but also just the reading of the command file. In the original implementation that I wrote, the command file was pre-loaded and pre-digested so when you executed it you didn’t have to do any processing except the string substitutions and any of the other semantics that would change values. So that was about as efficient as you could get in an interpretive language without generating code.

I think that the points were presented to Steve Bourne in reverse order; his answer to the first point seems to relate to “b” (quality of scripts), and his longer answer to the second point seems to relate to “a” (performance).

Regarding performance, as he says, the real cost is the Unix exec() call: “cat /etc/hosts | grep localhost” spawns two processes where “grep localhost /etc/hosts” spawns one, making it roughly half as fast. There is nothing that the shell itself can do about that.
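A rough way to see that cost for yourself (the file contents and iteration count are arbitrary, and the absolute numbers will vary from system to system):

```shell
#!/bin/bash
f=$(mktemp)
printf '127.0.0.1 localhost\n' > "$f"   # stand-in for /etc/hosts

t0=$(date +%s%N)
for i in $(seq 200); do cat "$f" | grep -q localhost; done   # two processes per iteration
t1=$(date +%s%N)
for i in $(seq 200); do grep -q localhost "$f"; done         # one process per iteration
t2=$(date +%s%N)

echo "pipeline: $(( (t1 - t0) / 1000000 )) ms"
echo "direct:   $(( (t2 - t1) / 1000000 )) ms"
rm -f "$f"
```

The extra fork/exec for cat, repeated 200 times, is what the pipeline version is paying for; the shell can only hand the work to the kernel.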

Regarding quality, deliberately obfuscated C is an institution; my point was merely that it is easy to write a bad shell script simply by not knowing how to write a better one. As this quote was from the introduction to a shell scripting tutorial, it should hopefully be clear from the context that the tutorial aims to enable the reader to write better shell scripts.

Solaris 10 SMF Manifests

July 17, 2010

I have recently written a web service which creates Solaris 10 SMF manifests based on the information you give it.

It creates a ZIP file with the XML Manifest file, and the startup/shutdown script, based on what you tell it.

There is much more that SMF can do – create entirely new runlevels, and so on – but this does the basic single-instance startup and shutdown work that /etc/init.d scripts did.

Feel free to go and check it out at sgpit.com/smf/

