Lifeblog – Review and Thoughts

I’ve been lucky enough to have taken part in a Lifeblog trial for Nokia in the UK over the past few weeks.

We were given a lovely new Nokia 6630 phone (that unfortunately we have to return at the end of the trial), equipped with Lifeblog and just asked to evaluate it.

Here are my thoughts, experiences and opinions on using Lifeblog.

Well, firstly, Lifeblog is really two pieces of software. One part runs on your Series 60 based smartphone and the other runs on a fairly high spec PC running Windows. The phone stores your messages, photos, videos, etc. until you can sync up with a PC to download them. I’ll cover each part separately, then as a whole.

The phone based software is excellent. All content appears in Lifeblog automatically. So I now no longer have to open various different applications to see different content. Lifeblog captures SMS and MMS messages both sent and received. It also captures any photos or videos I take. Content is kept in order, so I can cycle through by day and see all my data in order. This is great at keeping messages in context.

Best of all is the ability to post to a blog directly from the phone handset. Because Lifeblog is so well linked into the way the phone works, I can quickly select the content I want to blog about and get it up on my site very rapidly. Behind the scenes, Lifeblog uses a flavour of the Atom protocol to communicate with the blog. Six Apart‘s Typepad service is supported by default, but other services are coming on stream now, with a Lifeblog plugin available for Movable Type and a gateway into Flickr. I was even able to link Lifeblog into my own homebrew Perl based blogging system. Looking over my website, you’ll probably spot the posts I’ve sent via Lifeblog, as I include a little strapline at the bottom of each entry highlighting the fact.

From a social point of view, as I always have my phone with me, I can blog wherever and whenever I like. It’s great that the high end Nokia phones have megapixel cameras, as the images are so sharp. Lifeblog does shrink the image when posting to the web, but that’s just great; it saves me money in data charges. It’s amazing to be able to just point the phone at something and know it’ll be online a minute later. I’ve been showing off this ability to the guys at work to much excitement.

Now for the PC side of Lifeblog…

Unfortunately this is where I’ve been having some problems. The concept is great, but its current incarnation still needs a bit of work. For example, it won’t run on my 1GHz laptop. It keeps asking to update DirectX, even though I’m on the latest version. This is a shame as it’s my main machine. However, it will run on my office desktop machine, so I can share my experiences of that.

The PC version of Lifeblog takes over the whole screen when it runs. Microsoft Windows disappears and Lifeblog fills the display.

The screen looks beautiful, and has the same timeline experience as the mobile version, though it contains everything that was ever in your handset. It’s great being able to scan back and see old messages and photos kept in order. There is also the ability to post to a blog from here as well, though I’ve not actually tried that, being such a fan of posting from the handset.

It’s easy to sync between the PC and the mobile phone. It just uses Nokia’s existing PC Suite software to connect up, and from there it’s just an option on the menu to copy everything across. Very simple. During data transfer, Lifeblog shows you the content coming across in real time on the screen.

Now for the overall take on Lifeblog.

I think it’s bloody brilliant. Nokia’s concept of a Digital Shoebox works really well. It’s a place to keep all that content that may otherwise be lost or backed up in various places all together. As the mobile phone takes a central role in modern lifestyles, the ability to automatically use it as a multimedia diary is very powerful.

The downside is the software needs a powerful PC to run on. This will probably be addressed as the software matures and older computers are replaced. The other issue is the cost. I’ve been lucky in being able to use a full version as part of the trial instead of having to pay for it. The price point is a little too high at present, I’d say, but a reduction here would really boost uptake.

There is a free version of Lifeblog available from Nokia that can store up to 200 items. If you have a compatible phone, I’d really urge you to give it a try. Beware though, it can be addictive 🙂

This review was based on Lifeblog 1.5.

Nokia Releases Series 60 Patch For Perl

It looks like Perl on Nokia Series 60 phones is getting closer, as Jarkko Hietaniemi has just posted a patch to the Perl 5 Porters mailing list that enables Perl 5.8.x and Perl 5.9.x to work on Symbian smartphones. The message specifically states that it is known to work on Nokia Series 60 phones. The port is copyright Nokia.

I’m now officially very excited! Perl could very soon be running on my Nokia 6630!

A quick delve into the attached README reveals…

The attached patches enable compiling Perl on the Symbian OS platform:
Symbian OS releases 7.0s and 8.0a; and the corresponding Series 60
SDKs 2.0, 2.1, and 2.6.

Note that the patches only implement a “base port”, enabling one to
run Perl on Symbian, the basic operating system platform. The patches
do not implement any further Symbian OS or Series 60 (an application
framework) bindings to Perl. (A small Symbian / Series 60 interface
class and a small Series 60 application are included, though.)

It also seems that the patch allows Perl to be embedded into Series 60 C++ applications.

Since the primary way of using Perl on Symbian is a DLL (as described above),
I also wrote a small wrapper class for Series 60 (C++) applications that
want to embed a Perl interpreter, and a small Series 60 demonstration
application (PerlApp) using that wrapper class. As a bonus PerlApp knows
how to install Perl scripts (.pl, or hash-bang-perl) and Perl modules (.pm)
from the messaging application’s Inbox, and how to run scripts when invoked
via a filebrowser (either the one builtin to PerlApp, or an external one).

It’s fantastic to see that Nokia are working on getting Perl onto their smartphones. I’ve jealously looked on as Python developers have had their language implemented; now it seems that Perl could well be nearing an official launch.

Datasherpa And Automatic Page Tagging

A new product called Datasherpa has just been launched by Clickstream Technologies with the aim of ensuring all pages served by a webserver are automatically loaded with web analytics tags.

They claim their new product eliminates the burden of creating, inserting and testing page tags, and ensures all pages are tracked accurately.

It’s a really simple idea, and a bloody good one. We’ve been caught out before at work when a page hasn’t been correctly tagged and we’ve lost valuable traffic information.

It sounds like it would be really simple to build as a mod_perl handler for Apache. The handler would scan each page served, probably using the HTML::Parser module or even just a simple regular expression, detect the closing </body> tag, and just before that insert the tracking tag corresponding to the virtual host being served.
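
As a very rough sketch of the core idea (just the tag insertion, without any of the mod_perl handler plumbing, and using a placeholder page and placeholder tag)…

#!/usr/bin/perl -w
use strict;

## insert a tracking tag just before the closing </body> tag.
## A simple regular expression is used here; HTML::Parser would
## be more robust against unusual markup.
sub insert_tracking_tag {
    my ($html, $tag) = @_;
    $html =~ s{</body>}{$tag\n</body>}i;
    return $html;
}

## placeholder page and tracking tag, purely for demonstration.
my $html = "<html><body><p>Hello, world.</p></body></html>\n";
my $tag  = '<script src="/tracking.js"></script>';

print insert_tracking_tag($html, $tag);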

We use Webtrends at work, and this approach sounds like it should work really well with their system. It may even be worth mocking up a proof of concept quickly.

Grayscaling Images With Perl

One thing that caught my interest today was how to convert a colour image into grayscale.

It turns out the basic algorithm is very simple. Basically it’s just…

grey = 0.15 * red + 0.55 * green + 0.30 * blue;

This can be turned into a Perl subroutine using the following code.

sub grayscale {
    my ($r, $g, $b) = @_;
    my $s = 0.15 * $r + 0.55 * $g + 0.30 * $b;
    return int($s);
}

Here we pass in the RGB values of the colour we want to turn into gray. We apply the algorithm and return the integer value of gray.

The value we get for gray is used to replace each of the values for red, green and blue.

We can test this subroutine out with the help of the Perl GD module (available for free on CPAN).

#!/usr/bin/perl -w
use strict;
use GD;

## grayscale subroutine
sub grayscale {
    my ($r, $g, $b) = @_;
    my $s = 0.15 * $r + 0.55 * $g + 0.30 * $b;
    return int($s);
}

## make sure we read and write binary data.
binmode STDIN;
binmode STDOUT;

## create a new GD object with the data passed via STDIN
my $image = GD::Image->new(\*STDIN);

## iterate over the number of colours in the colour table
for (my $i = 0; $i < $image->colorsTotal(); $i++) {
    ## get the RGB values for the colour at index $i
    my ($r, $g, $b) = $image->rgb($i);
    ## convert the RGB to grayscale
    my $gray = grayscale($r, $g, $b);
    ## remove the original colour from the colour table
    $image->colorDeallocate($i);
    ## add in the new gray in its place
    $image->colorAllocate($gray, $gray, $gray);
}

## pass the image as a raw GIF to STDOUT
print $image->gif;

This code takes an image piped in from STDIN and outputs a grayscale GIF version of the image to STDOUT.

If the code was saved as convert.pl it would be called as ./convert.pl < test.gif > test_result.gif.

Here’s a conversion I did earlier of a GIF image of Kitt, Bev and Justin at the Emap Performance Awards 2004 using the above Perl code.

Kitt, Bev and Justin in colour

Kitt, Bev and Justin in grayscale

CellTrack’ing Between Colchester And London

I’ve been looking at the CellTrack program for Series 60 phones recently.

This is a native Series 60 Symbian application that can record details of the current mobile phone cell your phone is using. It also lets you annotate each cell if you want.

CellTrack is something I downloaded for my Nokia 7610 a while ago, and have just installed on the Nokia 6630.

Screenshot of CellTrack running on a Nokia 6630

On Monday, while the train was running slow, I had it running and started to annotate stations so I could tell where I was in the evening when it’s dark outside. CellTrack has a feature that allows you to log used cells to a flat tab separated file. In my case, as I have the software installed on the 6630’s MMC card, the file can be found in the directory E:\Nokia\Others\CellTrack and copied off using the Nokia PC Suite.

Here’s the journey I took on Tuesday morning by train. I turned on CellTrack at Marks Tey station and had it running to just before the train pulled into Stratford station in East London.

Time Cell ID LAC Cell Name Description
07:26:08 12972 629 XXBC97 B Marks tey station
07:27:15 12973 629 XXBC97 C Approaching marks tey
07:27:35 8812 629 XXB881 B Approaching kelvedon
07:28:03 4340 629 XXB434 A no info
07:29:01 4339 629 XXB433 X Kelvedon station
07:29:25 4341 629 XXB434 A Approaching kelvedon
07:31:40 16772 629 XXBG77 B Between witham and kelvedon
07:32:10 16774 629 XXBG77 X Between kelvedon and witham
07:32:43 2084 629 XXB208 X Approaching witham
07:34:09 2086 629 XXB208 F Witham station
07:36:34 382 629 XXB038 B Approaching witham
07:37:15 2086 629 XXB208 F Witham station
07:37:55 7249 629 XXB724 X Hatfield Peveral station
07:38:33 7251 629 XXB725 A Approaching hatfield peveral
07:39:30 13877 629 XXBD87 G Approaching hatfield peveral
07:39:40 13878 629 XXBD87 X Between hatfield peveral and chelmsford
07:39:52 13879 629 XXBD87 X Between hatfield peveral and chelmsford
07:41:17 3910 629 XXB391 A Approaching chelmsford
07:41:37 3912 629 XXB391 B Approaching chelmsford
07:42:07 16055 629 XXBG05 E Chelmsford station
07:43:01 3877 629 XXB387 G Chelmsford station
07:43:52 16057 629 XXBG05 G Approaching chelmsford
07:44:10 3879 629 XXB387 X Approaching chelmsford
07:44:24 5282 629 XXB528 B Approaching chelmsford
07:44:46 16779 629 XXBG77 X Between chelmsford and ingatestone
07:44:58 16778 629 XXBG77 X Approaching chelmsford
07:45:08 16779 629 XXBG77 X Between chelmsford and ingatestone
07:45:31 16780 629 XXBG78 A no info
07:45:49 2073 629 XXB207 C Between chelmsford and ingatestone
07:46:01 367 629 XXB036 G Between chelmsford and ingatestone
07:46:11 12354 629 XXBC35 X Between ingatestone and chelmsford
07:46:25 12355 629 XXBC35 E Between ingatestone and chelmsford
07:47:03 2073 629 XXB207 C Between chelmsford and ingatestone
07:47:21 369 629 XXB036 X Approaching ingatestone
07:47:32 11240 105 XXBB24 A Approaching ingatestone
07:48:14 11242 105 XXBB24 B Ingatestone station
07:48:34 3755 105 XXB375 E Ingatestone station
07:49:14 3756 105 XXB375 F Between ingatestone and shenfield
07:49:30 11239 105 XXBB23 X Between shenfield and ingatestone
07:50:09 16872 105 XXBG87 B Approaching shenfield
07:50:35 16875 105 XXBG87 E Approaching shenfield
07:50:49 3661 105 XXB366 A Approaching shenfield
07:51:42 3662 105 XXB366 B Shenfield station
07:51:54 3663 105 XXB366 C Shenfield station
07:55:03 531957 0 XXB-76 X ?:no info
07:55:25 531957 65535 XXB-76 X ?:no info
07:55:59 0 0 XXB000 A ?:no info
07:56:50 7240 105 XXB724 A no info
07:57:26 3788 105 XXB378 X no info
07:57:52 3789 105 XXB378 X Approaching gidea park
07:58:09 2068 105 XXB206 X no info
07:58:19 16035 105 XXBG03 E Gidea park station
07:59:31 19568 105 XXBJ56 X no info
07:59:45 5057 105 XXB505 G no info
08:00:16 197140 3008 XXB-12 F *:Gidea park station
08:01:09 10925 105 XXBA92 E no info
08:01:26 5058 105 XXB505 X Approaching gidea park
08:01:59 6249 700 XXB624 X Approaching gidea park
08:02:18 1381 700 XXB138 A no info
08:02:30 197214 3009 XXB-69 A no info
08:03:19 4829 700 XXB482 X no info
08:03:23 8611 600 XXB861 A Seven kings station
08:03:49 7748 600 XXB774 X no info
08:04:49 11170 700 XXBB17 A Approaching ilford
08:05:17 9724 600 XXB972 X Manor park station
08:05:39 3325 600 XXB332 E Approaching manor park
08:06:02 9726 600 XXB972 F Manor park station
08:06:16 17536 600 XXBH53 F Approaching forest gate
08:06:44 17535 600 XXBH53 E Forest gate station
08:07:55 1335 600 XXB133 E no info
08:08:19 14197 600 XXBE19 G no info
08:08:38 10334 700 XXBA33 X Maryland station

So what do some of the columns mean? Well, Cell ID is the ID taken from the actual cell, and LAC is the location area code of the cell. I’m not sure what Cell Name actually is; the CellTrack site says it comes from the cell broadcast as I have a service number set. The description is the text I entered to give a rough location to the cell.

As I said before, the log file has the data in tab separated format. The data is recorded in the following order…

  1. Date
  2. Time
  3. Cell ID
  4. LAC
  5. Country
  6. Net
  7. Signal
  8. Signal dBm
  9. Cell Name
  10. Description

This makes it very easy for us to write a data extractor using Perl. Here’s the code I used to generate the table above.

#!/usr/bin/perl -w
use strict;

## Perl script to parse the CellTrack trace.log file, and split selected
## contents into an HTML table.
## Robert Price - rob@robertprice.co.uk - March 2005

## start the table, and print out a table header.
print "<table>\n";
print " <tr><th>Time</th><th>Cell ID</th><th>LAC</th><th>Cell Name</th><th>Description</th></tr>\n";

## iterate over each line, placing the contents in $line.
while (my $line = <>) {
    ## clean up the data a bit.
    chomp($line);        # lose trailing linefeeds.
    $line =~ s/\r//g;    # lose any rogue carriage returns.
    $line =~ s/\t */\t/g; # remove preceding spaces from data.

    ## split the data in $line into variables.
    my ($date, $time, $cellid, $lac, $country, $net, $strength, $dBm, $cellname, $description) = split(/\t/, $line);

    ## create a copy of $time, and format it so it has colons between
    ## the hours, minutes and seconds.
    my $nicetime = $time;
    $nicetime =~ s/(\d{2})(\d{2})(\d{2})/$1:$2:$3/;

    ## print out the data we're interested in.
    print qq{ <tr><td><a name="$time" />$nicetime</td><td>$cellid</td><td>$lac</td><td>$cellname</td><td>$description</td></tr>\n};
}

## close the table.
print "</table>\n";

You may have noticed I didn’t bother to print the country or network used. Well that’s because it’s always the same for me. The country is 234 (UK) and the network is 33 (Orange). This may be more interesting when travelling abroad and using roaming.

WSSE Authentication For Atom Using Perl

Atom uses WSSE authentication for posting to and editing weblogs.

Mark Pilgrim explains more about this in layman’s terms in an old XML.com article, Atom Authentication.

This information is passed in an HTTP header, for example…

HTTP_X_WSSE UsernameToken Username="robertprice", PasswordDigest="l7FbmWdq8gBwHgshgQ4NonjrXPA=", Nonce="4djRSlpeyWeGzcNgatneSA==", Created="2005-2-5T17:18:15Z"

We need 4 pieces of information to create this string.

  1. Username
  2. Password
  3. Nonce
  4. Timestamp

A nonce, in this case, is a cryptographically random string, not the word Clinton Baptiste gets in Phoenix Nights (thanks to Matt Facer for the link). Here it’s encoded in base64.

The timestamp is the current time in W3DTF format.
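
For illustration, here is one way those two values might be generated in Perl. This is just a sketch: the choice of 16 random bytes for the nonce is arbitrary, and Perl’s rand is not cryptographically strong, so a real implementation would want a better source of randomness.

use MIME::Base64 qw(encode_base64);
use POSIX qw(strftime);

## build a base64 encoded nonce from 16 random bytes (an arbitrary
## choice; rand is not cryptographically strong, so a real system
## should use a proper random source).
my $nonce = encode_base64(join('', map { chr(int(rand(256))) } 1 .. 16), '');

## format the current time as a W3CDTF timestamp.
my $created = strftime("%Y-%m-%dT%H:%M:%SZ", gmtime());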

The four items are then combined to form a password digest that is used to verify the authenticity of the request on the remote Atom system. As the remote system already knows your username and password, it can recompute the digest from the nonce and timestamp passed in the WSSE header and check that it matches the one you sent. The digest is made by hashing the values with the well known SHA1 algorithm and encoding the result in base64 for transportation across the web.

We can use Perl to create the password digest, as shown in this example code.

use MIME::Base64;
use Digest::SHA1;

my $username  = "robertprice";
my $password  = "secret password";
my $nonce     = "4djRSlpeyWeGzcNgatneSA==";
my $timestamp = "2005-2-5T17:18:15Z";
my $digest    = MIME::Base64::encode_base64(Digest::SHA1::sha1($nonce . $timestamp . $password), '');

The password digest is now stored in the variable $digest.

We can also create the HTTP header from this if needed.

print qq{HTTP_X_WSSE UsernameToken Username="$username", PasswordDigest="$digest", Nonce="$nonce", Created="$timestamp"\n};

Please note, to use this Perl code, you have to have the MIME::Base64 and Digest::SHA1 modules installed. Both are freely available on CPAN.

Update – 22nd November 2006

Some more recent versions of Atom expect the digest to be generated with a base64 decoded version of the nonce. Using the example above, some example code for this would be…


## generate alternative digest
my $alternative_digest = MIME::Base64::encode_base64(Digest::SHA1::sha1(MIME::Base64::decode_base64($nonce) . $timestamp . $password), '');

When using WSSE for password validation, I now always check the incoming digest against both versions of my generated digest to ensure compatibility with different versions of Atom enabled software. One of the best examples of this is Nokia Lifeblog: older versions expect the nonce to be used as-is, while newer versions expect it to be base64 decoded first.
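
For illustration, a minimal sketch of that double check might look like the following. The wsse_digest_ok helper is a hypothetical name of mine, and looking up the password for the incoming username is assumed to have happened already.

use MIME::Base64 qw(encode_base64 decode_base64);
use Digest::SHA1 qw(sha1);

## check an incoming WSSE digest against both variants: one built
## from the raw nonce string, one from the base64 decoded nonce.
sub wsse_digest_ok {
    my ($digest, $nonce, $timestamp, $password) = @_;
    my $raw_nonce     = encode_base64(sha1($nonce . $timestamp . $password), '');
    my $decoded_nonce = encode_base64(sha1(decode_base64($nonce) . $timestamp . $password), '');
    return ($digest eq $raw_nonce) || ($digest eq $decoded_nonce);
}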

Plain Scones Recipe

Delicious, simple to make scones that go just right with a dollop of fresh whipped cream. This recipe only takes about 20 to 30 minutes to complete.

You will need…

  • 250g self-raising flour
  • 40g butter
  • 150ml milk
  • 1.5 tbsp caster sugar
  • pinch of salt
  1. Heat the oven to gas mark 7 (220°C / 425°F).
  2. Sift flour into a bowl and add the butter.
  3. Rub the butter gently into the flour until it resembles bread crumbs.
  4. Add the salt and sugar.
  5. Slowly mix the milk in using a metal spoon to form a soft dough.
  6. Knead the dough with your hands to bind it.
  7. Roll out the dough so it’s about 2cm thick.
  8. Use a 4cm pastry cutter to cut the little scones out.
  9. Keep reforming and cutting until all the dough has been used.
  10. Place the scones, with a little dusting of flour on top, onto a greased baking sheet and put into the oven.
  11. Remove the scones after 12-15 minutes and cool on a wire rack.
  12. Enjoy with butter, cream and/or jam.

Scones go off very quickly so eat them within a few hours of baking. They are lovely when still warm!

UPDATE: I originally recommended 225g of plain flour, but I have since increased this to 250g as I found the dough too wet to shape sometimes.

Photo of fresh scones #1
Photo of fresh scones #2

Precompiling Templates With Template Toolkit

I’ve been playing about with configuration options in the Template Toolkit to try to improve the performance of a site I maintain.

I’ve been focusing on the caching and compiling options in particular.

By setting the COMPILE_DIR and COMPILE_EXT options, Template Toolkit automatically compiles all the templates it uses to the specified directory. Once they are compiled, Template Toolkit will try to use them instead of the original template wherever possible. This seems to be giving some real speed increases and also reducing the load on the server.

my $template = Template->new({
    COMPILE_DIR => '/tmp/compiled_templates',
    COMPILE_EXT => '.ttc',
});

Here we are storing our compiled templates in the /tmp/compiled_templates directory; Template Toolkit automatically replicates the directory structure of the original templates beneath it. We’re also saying we want all compiled templates to end in the file extension .ttc.
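
As an illustration, a minimal sketch of this configuration in use might look like the following; example.tt and its variables are hypothetical.

#!/usr/bin/perl -w
use strict;
use Template;

## create a Template Toolkit instance that writes compiled
## templates to /tmp/compiled_templates with a .ttc extension.
my $template = Template->new({
    COMPILE_DIR => '/tmp/compiled_templates',
    COMPILE_EXT => '.ttc',
}) or die Template->error();

## the first call compiles example.tt to disk; subsequent calls
## (even from later processes) reuse the compiled version.
$template->process('example.tt', { name => 'Rob' })
    or die $template->error();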

It definitely seems to be a quick win for improving the performance of Template Toolkit based sites.

Parsing RDF In Perl With RDF::Simple

In this article I’ll describe how to parse and extract data from an RDF file using Jo Walsh‘s RDF::Simple::Parser module in Perl.

RDF::Simple::Parser does what it says on the tin: it provides a simple way to parse RDF. Unfortunately, that can make it hard to extract data. All it returns from a successful parse of the RDF file is what Jo calls a “bucket-o-triples”. This is just an array of arrays. The outer array is a list of all the triples; each triple is itself an array, with the Subject in position 0, the Predicate in position 1 and the Object in position 2.

Let’s define these as constants in Perl as they’re not going to be changing.

use constant SUBJECT => 0;
use constant PREDICATE => 1;
use constant OBJECT => 2;

I’m going to use my usual example of parsing my FOAF file, and I’ll be extracting the addresses of my friends’ FOAF files from it. See the example in What Is An RDF Triple for a full breakdown of this.

We’ll define the two predicates we need to look for as constants.

use constant KNOWS_PREDICATE => 'http://xmlns.com/foaf/0.1/knows';
use constant SEEALSO_PREDICATE => 'http://www.w3.org/2000/01/rdf-schema#seeAlso';

We need to load in the FOAF file, so we’ll take advantage of File::Slurp’s read_file function to do this and put the contents in a variable called $file.

my $file = read_file('./foaf.rdf');

Before we can use RDF::Simple::Parser, we need to create an instance of it. I’ll set the base address to www.robertprice.co.uk in this case.

my $parser = RDF::Simple::Parser->new(base => 'http://www.robertprice.co.uk/');

Now we have the instance, we can pass in our FOAF file for parsing and get back our triples.

my @triples = $parser->parse_rdf($file);

Let’s take a quick look at my FOAF file to get an example triple.

I know Cal Henderson, and this is represented in my FOAF file as…

<foaf:knows>
  <foaf:Person>
    <foaf:nick>Cal</foaf:nick>
    <foaf:name>Cal Henderson</foaf:name>
    <foaf:mbox_sha1sum>2971b1c2fd1d4f0e8f99c167cd85d522a614b07b</foaf:mbox_sha1sum>
    <rdfs:seeAlso rdf:resource="http://www.iamcal.com/foaf.xml"/>
  </foaf:Person>
</foaf:knows>

Using the RDF validator we can get the list of triples represented in this piece of RDF.


Triple Subject Predicate Object
1 genid:ARP40722 http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://xmlns.com/foaf/0.1/Person
2 genid:ARP40722 http://xmlns.com/foaf/0.1/nick "Cal"
3 genid:ARP40722 http://xmlns.com/foaf/0.1/name "Cal Henderson"
4 genid:ARP40722 http://xmlns.com/foaf/0.1/mbox_sha1sum "2971b1c2fd1d4f0e8f99c167cd85d522a614b07b"
5 genid:ARP40722 http://www.w3.org/2000/01/rdf-schema#seeAlso http://www.iamcal.com/foaf.xml
6 genid:me http://xmlns.com/foaf/0.1/knows genid:ARP40722

The parts we are interested in are triples 5 and 6. We can see that triple 6’s Predicate is the same as our KNOWS_PREDICATE constant, and triple 5’s Predicate matches our SEEALSO_PREDICATE constant. What links the two is that triple 6’s Object is the same as triple 5’s Subject.

We know that if we search for triples with the same predicate as our KNOWS_PREDICATE, we’ll get the triples that are to do with people I know. We can use Perl’s grep function to get these triples, then we can iterate over them in a foreach loop.

foreach my $known (grep { $_->[PREDICATE] eq KNOWS_PREDICATE } @triples) {

We are only interested in the triples whose Subject matches the matching triple’s Object. Again, we can use grep to pull these out so we can iterate over them.

foreach my $triple (grep { $_->[SUBJECT] eq $known->[OBJECT] } @triples) {

Now we just need to make sure that the triple’s Predicate matches our SEEALSO_PREDICATE constant, and if it does, we can print out the value of its Object.

if ($triple->[PREDICATE] eq SEEALSO_PREDICATE) {
    print $triple->[OBJECT], "\n";
}

Let’s put this all together into a working example…

#!/usr/bin/perl -w
use strict;
use File::Slurp;
use RDF::Simple::Parser;

## constants defining position of triple components in
## RDF::Simple triple lists.
use constant SUBJECT   => 0;
use constant PREDICATE => 1;
use constant OBJECT    => 2;

## some known predicates.
use constant KNOWS_PREDICATE   => 'http://xmlns.com/foaf/0.1/knows';
use constant SEEALSO_PREDICATE => 'http://www.w3.org/2000/01/rdf-schema#seeAlso';

## read in my foaf file and put it in $file.
my $file = read_file('./foaf.rdf');

## create a new parser, using my domain as a base.
my $parser = RDF::Simple::Parser->new(base => 'http://www.robertprice.co.uk/');

## parse my foaf file, and return a list of triples.
my @triples = $parser->parse_rdf($file);

## iterate over a list of triples matching the KNOWS_PREDICATE value.
foreach my $known (grep { $_->[PREDICATE] eq KNOWS_PREDICATE } @triples) {
    ## iterate over a list of triples that have the same subject
    ## as one of our KNOWS_PREDICATE triples' object.
    foreach my $triple (grep { $_->[SUBJECT] eq $known->[OBJECT] } @triples) {
        ## find triples that match the SEEALSO_PREDICATE.
        if ($triple->[PREDICATE] eq SEEALSO_PREDICATE) {
            ## print out the object, which should be the address
            ## of my friend's foaf file.
            print $triple->[OBJECT], "\n";
        }
    }
}

The example will load in the FOAF file, parse it, and print out the addresses of my friends’ FOAF files, as defined by the seeAlso predicate.

The Current UK Population In JavaScript

One interesting webpage going round the office yesterday was the UK Population Statistics.

Looking at the figures for 2002-2003 we saw the total UK population grow from 59,321,700 people to 59,553,800 people.

That means the population grows by approximately one person every two and a quarter minutes (232,100 extra people spread across the 525,600 minutes in a year).

I’ve knocked together a quick JavaScript that can calculate the approximate population of the UK assuming this increase is constant.

We work out the per minute increase in population. We also know the starting population of the UK on the 1st January 2003, so all we have to do is work out how many minutes have elapsed between then and now, and multiply that by the population growth per minute.

var startDate = new Date("January 1, 2003 00:00:00");
var perMinuteIncrease = 232100 / (365 * 24 * 60);
var startPop = 59553800;

// return the estimated current population of the UK.
function currentPopulation() {
    var currentDate = new Date();
    var diffMinutes = Math.floor((currentDate.getTime() - startDate.getTime()) / 60000);
    return Math.round(startPop + (diffMinutes * perMinuteIncrease));
}

To get the current population you just call the currentPopulation() function.

Let’s see what the approximate current UK population is now according to this script…



Just reload/refresh the page to get an updated population count.