Happy First Birthday!

January 6, 2008

This blog has now been running for a year; the first post was Hello World on 17th Jan 2007.

I hadn’t realised it had been going for so long; in that time, I’ve made 41 posts, so I haven’t quite managed to make one post per week :( I have been a bit slack lately, for which I do apologise. New Year’s Resolution: I must make more posts here!

In the meantime, my main site, steve-parker.org, born in June 2000, has celebrated its seventh birthday – I’m looking forward to its 8th birthday celebrations this June!


Redirection – Simple Stuff

May 30, 2007

Nobody deals with the really low-level stuff any more; I learned it from UNIX gurus in the 90s. I was really lucky to have met some real experts, and foolish not to have made better use of the opportunity to pick their brains.

Write to a file

$ echo foo > file

Append to a file

$ echo foo >> file

Read from a file (1)

$ cat < file

Read from a file (2)

$ cat file

Read lines from a file

$ while read f
> do
>   echo "LINE: $f"
> done < file
$
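
Putting those together – a minimal sketch you can paste straight into a shell (it creates a file called "file" in the current directory):

$ echo first > file
$ echo second >> file
$ while read f
> do
>   echo "LINE: $f"
> done < file
LINE: first
LINE: second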

Pipes Primer

May 8, 2007

The previous post dealt with pipes, though the example may not have been the best for those who are not accustomed to the concept.

There are a few concepts to understand – mainly, that of two (or more) processes operating together, how they put their data out, and how they get their data in. UNIX deals with multiple processes, all running at the same time (conceptually, at least, whether on one CPU or many), each with a standard input (stdin) and a standard output (stdout). Pipes connect one process’s stdout to another’s stdin.

What do we want to pipe? Let’s say we’ve got a small 80×25 terminal screen, and lots of files. The ls command will spew out tons of data, faster than we can read it. There’s a handy utility called “more”, which shows a screenful of text, then prompts “--More--”. When you hit the space bar, it scrolls down another screenful; hit ENTER to scroll one line.

I’m sure that you’ve worked this out already, but here is how we combine these two commands:


$ ls | more
<the first screenful of files is shown>
--More--

What happens here is that the shell starts both commands, connecting the stdout of “ls” to the stdin of “more”, so “more” reads whatever “ls” writes, as it arrives.
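
To see what the pipe buys us, here is the same effect without one – a sketch using a temporary file (the name /tmp/listing is arbitrary):

$ ls > /tmp/listing
$ more /tmp/listing
$ rm /tmp/listing

The pipe does away with the temporary file, and the two commands run together rather than one after the other.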

Most such tools can work another way, too:

$ more myfile.txt
<the first screenful of "myfile.txt" is shown>
--More--

Here, “myfile.txt” is not piped in at all: given a filename argument, “more” opens the file itself rather than reading from standard input (stdin).
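
You can get the stdin behaviour explicitly with redirection – the result on the screen is the same:

$ more < myfile.txt
<the first screenful of "myfile.txt" is shown>
--More--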


Pipelines in the Shell

May 3, 2007

One of the most powerful features of the *nix shell, and one which is currently not even covered in the tutorial, is the pipeline. I try to keep this blog and the tutorial from overlapping, but I really must fill that gap in the main site some time.

In the meantime, this is what it is all about.

UNIX (and therefore GNU/Linux) is full of small text-based utilities: wc to count words (and lines, and characters) in a text file; sort to sort a text file; uniq to get only the unique lines from a text file; grep to get certain lines (but not others) from a text file; and so on.
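
For example – a quick sketch of each (the file names here are just placeholders):

$ wc -l notes.txt           # how many lines?
$ sort names.txt            # print the file, sorted
$ sort names.txt | uniq     # sorted, with duplicates removed
$ grep error logfile.txt    # only the lines containing "error"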

Did you see the common trait there? Yes, it’s not just that “Everything is a file”: nearly everything is also text. It’s largely from this tradition that HTML, XML, RSS, Email (SMTP, POP, IMAP) and the like are all text-based. Contrast this with MS Office, for example, where all the data is in binary files which can only (really) be manipulated by the application which created them.

So what? It’s crude, simple, works on an old-fashioned green-and-black screen. How could that be relevant in the 21st Century?

It’s relevant, not because of what each tool itself provides, but what they can do when combined.

Ugly Apache Access Log

Here’s a line from my access.log (Apache2) – Yes, honest, it’s one line. It just looks very ugly:
12.106.111.10 - - [02/May/2007:12:09:58 -0700] "GET /sh/sh.shtml HTTP/1.1" 200 33080 "http://www.google.ca/search?hl=en&q=bourne+shell&btnG=Google+Search&meta=" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
That’s interesting (well, kind of), but on its own it doesn’t tell us much about this visitor beyond the basics: they came from Google Canada (google.ca), searched for “bourne shell”, and clicked the link that brought them to http://steve-parker.org/sh/sh.shtml. They’re using Firefox 1.5.0.3 on Windows.
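
Jumping ahead a little: awk (which we’ll meet properly below) numbers the space-separated fields of a line, so in this log format field 1 is the client IP, field 4 the timestamp (with a leading “[”), field 7 the requested URL, and field 9 the status code. A quick sketch, assuming the line above happens to be the first in the file:

$ head -1 access_log | awk '{ print $1, $4, $7, $9 }'
12.106.111.10 [02/May/2007:12:09:58 /sh/sh.shtml 200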

Great. What happened to them? Did they like the site? Did they stay? Did they actually read anything?

Filtering it

We can use grep to pick out just this visitor. However, this still gives us lots of stuff we’re not interested in – all the CSS, PNG and GIF files which support the pages themselves:
$ grep "^12.106.111.10 " access_log
12.106.111.10 - - [02/May/2007:12:09:58 -0700] "GET /sh/sh.shtml HTTP/1.1" 200 33080 "http://www.google.ca/search?hl=en&q=bourne+shell&btnG=Google+Search&meta=" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:09:58 -0700] "GET /steve-parker.css HTTP/1.1" 200 8757 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:09:59 -0700] "GET /images/1.png HTTP/1.1" 200 471 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:09:59 -0700] "GET /images/prevd.png HTTP/1.1" 200 1397 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:10:00 -0700] "GET /images/2.png HTTP/1.1" 200 648 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
... etc ...

This is looking ugly already, and not just because of the small width of the webpage – even at 80 characters wide, this is hard to understand.

Filtering Some More

At first glance, I should be able to pipe this through a grep for html files:

$ grep "^12.106.111.10 " access_log | grep shtml

However, Apache also logs the referrer, so even the “GET /images/2.png” request above includes a .shtml URL, and this simple grep matches it. Instead, I can invert the match with “grep -v” to throw away the requests I don’t want. I’ll add the “-w” option to grep to say that the search term must match a whole word – “grep -v gif” would wrongly discard a line mentioning “gifford.html”, whereas “grep -vw gif” would let it through, because “gif” is not a whole word there. I’ll add “\” to the code, so that you can cut and paste it… the “\” means that although the line breaks, it’s still really part of the same line of code:
$ grep "^12.106.111.10 " access_log | grep -vw css \
| grep -vw gif | grep -vw jpg \
| grep -vw png | grep -vw ico
12.106.111.10 - - [02/May/2007:12:09:58 -0700] "GET /sh/sh.shtml HTTP/1.1" 200 33080 "http://www.google.ca/search?hl=en&q=bourne+shell&btnG=Google+Search&meta=" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:10:32 -0700] "GET /sh/variables1.shtml HTTP/1.1" 200 48431 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:13:23 -0700] "GET /sh/external.shtml HTTP/1.1" 200 41322 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:13:45 -0700] "GET /sh/quickref.shtml HTTP/1.1" 200 42454 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"
12.106.111.10 - - [02/May/2007:12:14:27 -0700] "GET /sh/test.shtml HTTP/1.1" 200 48844 "http://steve-parker.org/sh/sh.shtml" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.3) Gecko/20060426 Firefox/1.5.0.3"

This pumps the output through a grep which gets rid of CSS files (“| grep -vw css“), then gif, then jpg, then png, then ico. That should just leave us with the HTML files, which is what we’re really interested in.
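
As an aside, GNU grep can do all of that filtering in one process, using an extended regular expression – a sketch equivalent to the chain above:

$ grep "^12.106.111.10 " access_log | grep -vwE "css|gif|jpg|png|ico"

The -E turns on alternation with “|”, and -w still insists that each alternative matches a whole word.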

Even so, it’s really hard to see what’s going on here. The narrow web page doesn’t help, but the real problem is that each line carries far more information than we want. We just want the key fields, presented simply, so that they’re readable however we view them.

If you look carefully, you can see that this visitor accessed /sh/sh.shtml, then /sh/variables1.shtml (after about half a minute), /sh/external.shtml (after 3 minutes), then /sh/quickref.shtml about 20 seconds later – presumably opened in a new tab, given that the referrer is still /sh/sh.shtml. Forty seconds after that, they opened /sh/test.shtml, which also suggests that they’ve loaded the pages in tabs, to read at their leisure.

Getting the Data we need

However, none of this is really very easy to read. If we just want to know which pages they visited, and when, we need to do some more filtering. awk is a very powerful tool, and we will only scratch the surface of its abilities here. We will pull out “fields” 4 and 7 – the timestamp and the URL accessed.
$ grep "^12.106.111.10 " access_log \
| grep -vw css | grep -vw gif \
| grep -vw jpg | grep -vw png \
| grep -vw ico | awk '{ print $4,$7 }'
[02/May/2007:12:09:58 /sh/sh.shtml
[02/May/2007:12:10:32 /sh/variables1.shtml
[02/May/2007:12:13:23 /sh/external.shtml
[02/May/2007:12:13:45 /sh/quickref.shtml
[02/May/2007:12:14:27 /sh/test.shtml

Okay, it’s the info we wanted, but it’s still not great. That “[” looks out of place now. We can use cut to tidy things up. In this case, we’ll use its positional selection, because we want to get rid of the first character. Cut’s “-c” parameter tells it which character positions to keep; we want the 2nd character onwards (“-c2-”), so we just add that to the end of the pipeline:
$ grep "^12.106.111.10 " access_log | grep -vw css | grep -vw gif | grep -vw jpg | grep -vw png | grep -vw ico | awk '{ print $4,$7 }'|cut -c2-
02/May/2007:12:09:58 /sh/sh.shtml
02/May/2007:12:10:32 /sh/variables1.shtml
02/May/2007:12:13:23 /sh/external.shtml
02/May/2007:12:13:45 /sh/quickref.shtml
02/May/2007:12:14:27 /sh/test.shtml
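
As an alternative sketch, awk could trim the “[” itself, using its substr() function, which saves starting the extra cut process:

$ grep "^12.106.111.10 " access_log \
| grep -vw css | grep -vw gif \
| grep -vw jpg | grep -vw png \
| grep -vw ico | awk '{ print substr($4,2), $7 }'

substr($4,2) returns field 4 from its 2nd character onwards, just like “cut -c2-”.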

And that’s the kind of thing that we can do with a pipe. We can get exactly what we want from a file.

Moving on

At the start of this post, we mentioned sort and uniq. Actually, “sort -u” will do the same as “sort | uniq”. So if we want a list of the unique visitors, we can just take the first field (“cut -d" " -f1”) and sort it uniquely:
$ cut -d" " -f1 access_log | sort -u


Regular Expressions

April 18, 2007

http://etext.lib.virginia.edu/services/helpsheets/unix/regex.html has a good introduction to Regular Expressions – grep, sed, and friends.

It includes a brief discussion of backreferences (that is, referring back to “the stuff that a \(...\) group matched”).
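
As a taster, a backreference lets a pattern re-match whatever an escaped \(...\) group captured earlier – a minimal sketch:

$ echo "hello hello" | grep "\(hello\) \1"
hello hello
$ echo "ta ta" | sed "s/\(ta\) \1/\1 again/"
ta again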


Harnessing the flexibility of Regular Expressions with Grep

February 14, 2007

There are two sides to grep – like any command, there’s the learning of syntax, the beginning of which I covered in the grep tool tip. I’ll come back to the syntax later, because there is a lot of it.

However, the more powerful side is grep’s use of regular expressions. Again, there’s not room here to provide a complete rundown, but this should be enough to cover 90% of usage. Once I’ve got a library of grep-related stuff, I’ll post an entry with links to them all, with some covering text.

This or That

Without being totally case-insensitive (which -i does), we can search for “Hello” or “hello” by listing the alternative first characters in square brackets:

$ grep "[Hh]ello" *.txt
test1.txt:Hello. This is  test file.
test3.txt:hello
test3.txt:Hello
test3.txt:Why, hello there!

If we’re not bothered what the third letter is, then we can say “grep [Hh]e.o *.txt“, because the dot (“.”) will match any single character.

If we don’t care what the third and fourth letters are, so long as it’s “he..o”, then we say exactly that: “grep he..o” will match “hello”, “hecko”, “heolo” – anything which is “he” + 2 characters + “o”.
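
Against the sample files we’ve already seen, that gives something like this (illustrative – your own test files will differ):

$ grep "he..o" *.txt
test2.txt:heclo
test3.txt:hello
test3.txt:Why, hello there!
test3.txt:hewlo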

If we want to find anything like that, other than “hello”, we can do that, too:

$ grep "he[^l]lo" *.txt
test2.txt:heclo
test3.txt:hewlo

Notice how it doesn’t pick up “hello”, or any of the other variations with “llo” in them? The [^l] refuses to match an “l” as the third character.

How many?!

We can specify how many times a character can repeat, too. The repeat operator applies to whatever immediately precedes it – a single character, or a [bracket] expression like the ones above:

  • “?” means “zero or one” – it might be there, or it might not
  • “+” means “one or more” – it’s there, but there might be loads of them
  • “*” means “zero or more” – lots (or none) might be there

Note that plain grep only understands “*”; for “?” and “+” you need extended regular expressions – grep -E, or egrep (GNU grep also accepts “\?” and “\+”) – as in the sketch after the next example.

So, we can match “he”, followed by as many “l”s as you like (even none), followed by an “o”, with “grep "he[l]*o" *.txt”:

$ grep "he[l]*o" *.txt
test2.txt:helo
test3.txt:hello
test3.txt:Why, hello there!
test3.txt:hellllo
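
To try “?” and “+” from the list above, switch grep into extended-regular-expression mode with -E (or use egrep) – a sketch against the same files:

$ grep -E "hel+o" *.txt     # one or more "l"s: helo, hello, hellllo
$ grep -E "hel?o" *.txt     # zero or one "l": only heo or helo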
