Using Disqus On WordPress Behind A Proxy

I had to implement the Disqus Comment System WordPress plugin on a website that will be located behind an outgoing proxy server. By default the Disqus WordPress plugin does not support proxies, so it is unable to run if a proxy is blocking its access to the internet.

Since WordPress 2.8, WordPress has supported proxy servers via a pair of defined constants, WP_PROXY_HOST and WP_PROXY_PORT. I have now forked the Disqus WordPress plugin on GitHub, and added support that checks whether these constants exist, and uses them if they do.

To use it, add the following to your wp-config.php file…

define('WP_PROXY_HOST', '');
define('WP_PROXY_PORT', '3128');

Change the values to match those of your proxy server, of course.

Now replace the url.php file in wp-content/plugins/disqus-comment-system/lib/api/disqus/url.php with the url.php found in my github repository.

Visit your WordPress admin panel and you should now be able to activate and configure the Disqus plugin successfully.

I have issued a pull request for my changes to be pulled back into the main plugin, but it’s up to Disqus if they want to implement this or not.

POSTing JSON To A Web Service With PHP

I needed to POST JSON formatted data to a RESTful web service using PHP earlier, so I thought I’d share my solution.

There are a couple of approaches that could be taken: using the cURL extension, or file_get_contents with an HTTP stream context. I took the latter approach.

When POSTing JSON to a RESTful web service we don’t send post fields or pretend to be a form; instead we have to send the JSON data in the body of the request, and let the web service know it’s JSON data by setting the Content-type header to application/json.

$article = new stdClass();
$article->title = "An example article";
$article->summary = "An example of posting JSON encoded data to a web service";

$json_data = json_encode($article);

$post = file_get_contents('http://localhost/rest/articles', false, stream_context_create(array(
    'http' => array(
        'protocol_version' => 1.1,
        'user_agent'       => 'PHPExample',
        'method'           => 'POST',
        'header'           => "Content-type: application/json\r\n" .
                              "Connection: close\r\n" .
                              "Content-length: " . strlen($json_data) . "\r\n",
        'content'          => $json_data,
    ),
)));

if ($post) {
    echo $post;
} else {
    echo "POST failed";
}
Here I’m first creating an example PHP stdClass object, populating it with some data, then serialising it to a JSON string. The real magic is using file_get_contents to POST it over HTTP to my test web service. If the POST succeeds, then it’s displayed, else an error message is shown.

It’s important to note I send the Connection: close header. Without this, your script will hang until the web server closes the connection. If we include it, then as soon as the data has been POSTed the connection is closed and control is returned to the script.
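For comparison, here’s a rough sketch of the same request in Python. The body and headers are the interesting part; the actual send (using the same placeholder localhost URL as above) is left commented out since it needs the test service running.

```python
import json
from http.client import HTTPConnection

# Mirror of the PHP stdClass example above.
article = {
    "title": "An example article",
    "summary": "An example of posting JSON encoded data to a web service",
}
body = json.dumps(article)

# The same headers the PHP stream context sets; "Connection: close" makes
# the server end the connection once it has replied.
headers = {
    "Content-type": "application/json",
    "Connection": "close",
    "Content-length": str(len(body)),
}

print(headers["Content-type"])
print(body)

# To actually send it (requires the test service to be running):
# conn = HTTPConnection("localhost")
# conn.request("POST", "/rest/articles", body, headers)
# print(conn.getresponse().read().decode())
```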

HTTP Methods For RESTful Web Services

The use of HTTP methods when designing and building a RESTful web service is very important.

Most developers only use the GET and POST methods offered by the HTTP specification, and for most web sites this is fine.

However, there are other useful methods in the specification that are often overlooked by web developers, but that you will need to know as a RESTful web service developer. The most important of these are PUT and DELETE.

REST uses HTTP methods as a one to one mapping to the CRUD (Create, Read, Update and Delete) operations you may be familiar with as a developer. These are

  • Create a resource – POST
  • Read a resource – GET
  • Update a resource – PUT
  • Delete a resource – DELETE

There are also two other terms associated with RESTful services that you should be aware of, “safe” and “idempotent”.

A safe request is a request to read some data, not to change any server state. GET requests are safe, as you should be able to GET a resource any number of times without affecting its state.

An idempotent request is a request that, however many times it is invoked, gives the same end result. GET, PUT and DELETE are all idempotent. You should be able to DELETE a resource any number of times; once deleted it’s gone, and trying to delete it again won’t change the fact it’s gone.

POST is the method that causes problems; it is neither safe nor idempotent. Using our CRUD example from earlier, POSTing the same request twice could create duplicate resources.
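The mapping and the two properties above can be summed up in a small sketch (Python here purely for illustration):

```python
# CRUD operation -> HTTP method, as described above.
CRUD_TO_HTTP = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",
    "delete": "DELETE",
}

# GET is the only safe method of the four; POST is the only one of the
# four that is not idempotent.
SAFE = {"GET"}
IDEMPOTENT = {"GET", "PUT", "DELETE"}

for operation, method in CRUD_TO_HTTP.items():
    print(operation, method, method in SAFE, method in IDEMPOTENT)
```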

For more details on other HTTP methods, such as HEAD and OPTIONS, have a look at RFC2616 – The HTTP 1.1 specification.

Uploading Files To A LAMP Server Using .NET

We had to setup and run a webcam for the Q Awards red carpet earlier this week.

The original plan was to have an old Axis ethernet webcam, connected to a wifi network using a Dlink bridge. After a nightmare of not being able to get the Dlink to work, we gave up and went for a USB webcam connected to a laptop approach.

I had to write some software to handle the capture of the image and the upload to the webserver. Because we wanted to do a few custom things along the way, we weren’t able to use off-the-shelf software.

I’ll post on how to use a webcam with .NET another time. What I wanted to document was how to upload a file using .NET to a LAMP server.

It turns out to be easier than I thought in .NET; one line of code can achieve this using the My.Computer.Network.UploadFile method.

For example, to upload test.txt to a web server we can do the following (in Visual Basic)…


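A minimal sketch of that call, with a placeholder local path and upload URL, would be:

```vb
' Upload test.txt to the server-side script (placeholder URL).
My.Computer.Network.UploadFile("c:\test.txt", "http://www.example.com/cgi-bin/upload.cgi")
```

The second argument is the address of the server-side script that will receive the upload.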
Now we need some code at the other end to store this upload. As I was building a webcam application, the uploaded file, an image in this case, was always overwritten.

The .NET upload uses a bog-standard HTTP file upload as described in RFC 1867.

If we use Perl to read this file, we can use the standard CGI module to do most of the hard work for us.

The uploaded file is passed as a parameter called file, so we can create a CGI object and get a reference to it using the param method. Once we have this, we can treat it as a filehandle and read the data off it. Then all we have to do is write it out to a file.

The following code achieves this.

#!/usr/bin/perl -w
use strict;
use CGI;

my $CGI = new CGI;
my $file = $CGI->param('file');

open(FILE, ">/path/to/my/file/test.txt") or die $!;
binmode(FILE);
while (<$file>) {
    print FILE $_;
}
close(FILE);

print "Content-type: text/html\n\n";
print "OK\n";

Using Twitter From Perl

The world and his dog is currently looking at Twitter and eyeing up the possibilities it offers.

I thought I’d jump on the bandwagon and have a look at the Twitter API.

I wanted to post to a timeline, so the solution is to use one of the update methods. I chose the XML one, though there is a JSON one also available.

To post to the timeline, Twitter expects an HTTP POST request with a status parameter containing the message you want to post. It associates this to your account by using HTTP’s basic authorization functionality.

It’s simple to throw together a spot of Perl code to post messages to Twitter knowing this. Have a look at this example…

use LWP::UserAgent;
use HTTP::Request;

my $ua = LWP::UserAgent->new;
my $message = "A test post from Perl";
my $req = HTTP::Request->new(POST => '');   # set this to the XML update method's URL
$req->content('status=' . $message);
$req->authorization_basic($username, $password);
my $resp = $ua->request($req);
my $content = $resp->content;
print $content;

You need to set $username and $password to your username and password, and $message to whatever message you want to appear on your timeline (in this case, “A test post from Perl”).
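For the curious, the Authorization header that authorization_basic sends is easy to build by hand. Here’s an illustrative Python sketch of the same request’s parts (the username, password and message are dummy values):

```python
import base64

# HTTP basic authorization is just "username:password", base64 encoded,
# sent in an Authorization header alongside the form-encoded status field.
username, password = "myuser", "secret"
message = "A test post from Perl"

credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {
    "Authorization": "Basic " + credentials,
    "Content-type": "application/x-www-form-urlencoded",
}
body = "status=" + message

print(headers["Authorization"])
print(body)
```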

Lifeblog Proxy Idea

Sitting in a Lifeblog debrief earlier, one thing that struck me was that others had the same problem as me regarding wanting to post to multiple blogs.

It seems most would like to separate a work blog from a personal blog, but unless they’re hosted on the same Typepad account, for example, Lifeblog doesn’t let you do this. From a service point of view it’s a one-to-one match.

Posting on Lifeblog

Sitting there, my mind was mulling the problem over, and it would appear that a simple Lifeblog proxy would solve the problem. If blogs are hosted on the same service and accessible by the same username and password, Lifeblog lets you post to different blogs. Why not just build a service that can proxy between various Lifeblog-compatible blogs, so you wouldn’t have to host them all together?

Posting on Lifeblog via a proxy

So how might this work from a technical perspective?

Well, Lifeblog posts using a flavour of the Atom protocol. For security it uses WSSE authentication on the posts. This means that the proxy would need its own username and password to authenticate against when talking to Lifeblog. The various blogs it would be proxying onto would also need different usernames and passwords, and the proxy would have to insert these as it passes the post on to the relevant blog. We could potentially store all the blogs we’re allowing posts to in an XML config file. For example…

<blogs>
  <blog>
    <name>My Blog</name>
    <url>http://www.example.com/blog1/atom</url>
    <username>username1</username>
    <password>password1</password>
  </blog>
  <blog>
    <name>My Blog 2</name>
    <url>http://www.example.com/blog2/atom</url>
    <username>username2</username>
    <password>password2</password>
  </blog>
</blogs>

Here all the blogs are listed, along with their name, posting URL, username and password. The proxy would take this list and return a localised list of blogs that, when posted to, would pass the relevant data across. So this means there are two areas to break the proxy down into.

First, the list of blogs. This reads the XML and returns a list of localised blogs and posting URLs that Lifeblog can use to upload content.

Secondly, the actual localised posting URL needs to remove the Lifeblog WSSE authentication, and replace it with the correct username and password for the real blog before passing it on to the real upload URL.

It could be as simple as that. Maybe I’ll mock something up in Perl to test the theory out.

Anyway, who’s to say this just has to proxy Lifeblog? It could alternatively be a gateway that could translate into one of the common blogging APIs, instantly opening up Lifeblog to millions more users. Now that would be cool!

UPDATE 23/04/05

Hugo emailed me to say Lifeblog 1.6 can handle some of what I have suggested…

Actually, Lifeblog 1.6 can have post to more than one account, and is
available for the Nokia 6630, 6680, 6681, 6682. Unfortunately Lifeblog
1.5 (for 7610, 6670, 6260, 3230) can only post to one blog. And the PC
can post to multiple accounts.

Lifeblog Posting Protocol Example

As you may have seen by the couple of test posts on this website, I’ve managed to get Nokia’s Lifeblog application posting entries to my blog.

Lifeblog uses a flavour of the Atom protocol to handle web posting. Nokia have also very kindly posted the Lifeblog posting protocol specification, which details how Lifeblog works. However, there are some differences I’ve found between the spec and how Lifeblog 1.5 actually works. Here I’ll document what I’ve done to get posting working.

Currently Nokia only claim that Typepad enabled blogs are supported, but now my homebrew blog supports live posting too.

Lifeblog works in two stages…

  1. Getting a list of supported blogs
  2. Posting to the users preferred blog

Let’s have a look at how the application works in practice.

Firstly you have to enter where Lifeblog can find the list of supported blogs into its web settings menu. When you first attempt to post a blog entry from your phone, this address will be called to retrieve the list of supported blogs.

It will send a WSSE header for authentication. I have explained how to implement WSSE in Perl before, so I won’t go through it again. Once validated, the list of supported blogs needs to look something like this.

<?xml version="1.0"?><feed xmlns=""><link type="application/x.atom+xml" rel="" href="" title="robertprice"/><link type="application/x.atom+xml" rel="service.feed" href="" title="robertprice"/><link type="application/x.atom+xml" rel="service.upload" href="" title="robertprice"/><link type="application/x.atom+xml" rel="service.categories" href="" title="robertprice"/><link type="text/html" rel="alternate" href="" title="robertprice"/></feed>

I return this with the MIME type of text/plain, though the spec says it should be application/atom+xml. I’ve not had it fail on me doing this. It is important to note, however, that when I inserted linefeeds to make the XML slightly more legible it failed. So I’d recommend not having any linefeeds in the XML to ensure it works correctly.

Notice how all the entries have the same title, robertprice. This is how Atom knows all the different links are related to the same blog.

If the WSSE authentication fails, just return a 401 Unauthorized error. In Perl, you can get a CGI script to do this using the following code.

print "Status: 401 Unauthorized\r\n";
print "Content-type: text/plain; charset=utf-8\r\n\r\n";
print "Unauthorised\r\n";

Now Lifeblog tries to send the actual entry to the atom scripts. Entries are in two parts, so your scripts need to keep track of the session being used. This is done by use of the id element that you send back with your reply to the first message. Lifeblog then makes sure the second part returns this id, allowing you to keep state.

For example, here’s a test entry of me trying to post a note to my blog.

Lifeblog sends the first batch of XML…

<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="" xmlns:dc="" xmlns:typepad="">
<content type="text/plain" mode="escaped">Just a quick test note</content>
<summary>Sat 19/02/2005 12:05 Text note</summary>
</entry>

We reply with a bit of XML, and 201 Created HTTP response, assuming the WSSE authenticates. In Perl we can do something like this…

print "Status: 201 Created\r\n";
print "Content-type: application/atom+xml; charset=utf-8\r\n";
print "\r\n";
print q{<?xml version="1.0"?>};
print q{<entry xmlns="">};
print q{<title>blog entry</title>};
print q{<summary>blog entry</summary>};
print qq{<issued>$issued</issued>};
print q{<link type="text/html" rel="alternative" href="" title="HTML"/>};
print qq{<id>$tag</id>};
print q{</entry>};
print "\r\n";

The id is returned in the variable $tag. Mark Pilgrim has an excellent article on his site about what makes a good atom id tag. In this example we can use a simple bit of Perl code like this…

my $time = time;
my ($sec, $min, $hour, $day, $month, $year) = (localtime($time))[0,1,2,3,4,5];
$year += 1900;
$month += 1;
# Build a tag URI (replace example.com with your own domain).
my $tag = "tag:example.com,$year-$month-$day:/lifeblog/$time";

Now Lifeblog sends the second part of the data. In our example, it looks like this.

<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="" xmlns:dc="" xmlns:typepad="">
<title>Lifeblog post</title>
<content type="text/plain" mode="escaped"></content>
<link rel="related" type="text/plain" href=",post-2:test-123"/>
</entry>

We reply with a confirmation XML message, again with the 201 Created HTTP status code.

print "Status: 201 Created\r\n";
print "Content-type: application/atom+xml; charset=utf-8\r\n";
print "\r\n";
print q{<?xml version="1.0"?>};
print q{<entry xmlns="">};
print qq{<title>$title</title>};
print qq{<summary>$summary</summary>};
print qq{<issued>$issued</issued>};
print q{<link type="text/html" rel="alternative" href="" title="HTML"/>};
print qq{<id>$tag</id>};
print q{</entry>};
print "\r\n";

In this case, I’m assuming the values for title, summary and id have been extracted from the XML sent by Lifeblog. I’m also returning an issued date, this can be generated using a little bit of Perl, for example…

my $issued = sprintf("%04d-%02d-%02dT%02d:%02d:%02dZ", $year, $month, $day, $hour, $min, $sec);

It’s trivial to extract the data from the XML; in my case, I just used XPath expressions to get the relevant values.
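As a sketch of that extraction step, here Python’s ElementTree stands in for whichever XPath library you use, and the XML is a cut-down stand-in for the second posting (real Lifeblog XML carries namespaces, omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Cut-down stand-in for the second Lifeblog posting.
xml = (
    '<entry>'
    '<title>Lifeblog post</title>'
    '<summary>An example summary</summary>'
    '</entry>'
)

entry = ET.fromstring(xml)
title = entry.findtext("title")      # simple path expressions pull the fields out
summary = entry.findtext("summary")
print(title)
print(summary)
```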

Let me just go over the difference between the two XML postings made by Lifeblog.

In the first part it sends over the item we want to post. In this case it’s the note. The content is in the <content> tag, and Lifeblog also adds some Dublin Core metadata telling us the type is Text and the format is Note. If we were sending an image, the metadata would be missing, and the content would be in base64 with the MIME type of image/jpeg.

The second part contains the Lifeblog data. Here we have the title, created date and any other data we added in Lifeblog. If we had entered some body text, it would have appeared here in the <content> tag. It also contains the <link> tag holding the id of the first part, so our script can tie the two XML items together.

Now you should have enough information to make the glue for Lifeblog to link into your own blogging system.

I’m just using the trial version of Nokia Lifeblog 1.5 on my Nokia 7610 phone, and it’s working just fine for me! Well done Nokia for producing such a useful bit of software.

WSSE Authentication For Atom Using Perl

Atom uses WSSE authentication for posting and editing weblogs.

Mark Pilgrim explains more about this in layman’s terms in an old article, Atom Authentication.

This information is passed in an HTTP header, for example…

HTTP_X_WSSE UsernameToken Username="robertprice", PasswordDigest="l7FbmWdq8gBwHgshgQ4NonjrXPA=", Nonce="4djRSlpeyWeGzcNgatneSA==", Created="2005-2-5T17:18:15Z"

We need 4 pieces of information to create this string.

  1. Username
  2. Password
  3. Nonce
  4. Timestamp

A nonce is a cryptographically random string in this case, not the word Clinton Baptiste gets in Phoenix Nights (thanks to Matt Facer for the link). In this case, it’s encoded in base64.

The timestamp is the current time in W3CDTF format.

Three of these items – the nonce, timestamp and password – are then encoded together to form a password digest that is used to verify the authenticity of the request on the remote Atom system. As the remote system already knows your username and password, it can recreate the digest from the nonce and timestamp passed in the WSSE header and compare it with the one you sent. The digest is created using the well-known SHA1 algorithm and encoded in base64 for transportation across the web.

We can use Perl to create the password digest, as shown in this example code.

use MIME::Base64;
use Digest::SHA1;

my $username = "robertprice";
my $password = "secret password";
my $nonce = "4djRSlpeyWeGzcNgatneSA==";
my $timestamp = "2005-2-5T17:18:15Z";
my $digest = MIME::Base64::encode_base64(Digest::SHA1::sha1($nonce . $timestamp . $password), '');

The password digest is now stored in the variable $digest.
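The same digest calculation is straightforward in any language; here is an equivalent sketch in Python, using the example values from the Perl code above:

```python
import base64
import hashlib

# digest = base64(sha1(nonce . timestamp . password)), as in the Perl above.
nonce = "4djRSlpeyWeGzcNgatneSA=="
timestamp = "2005-2-5T17:18:15Z"
password = "secret password"

digest = base64.b64encode(
    hashlib.sha1((nonce + timestamp + password).encode()).digest()
).decode()
print(digest)
```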

We can also create the HTTP header from this if needed.

print qq{HTTP_X_WSSE UsernameToken Username="$username", PasswordDigest="$digest", Nonce="$nonce", Created="$timestamp"\n};

Please note, to use this Perl code, you have to have the MIME::Base64 and Digest::SHA1 modules installed. Both are freely available on CPAN.

Update – 22nd November 2006

Some more recent versions of Atom expect the digest to be generated with a base64 decoded version of the nonce. Using the example above, some example code for this would be…

## generate alternative digest
my $alternative_digest = MIME::Base64::encode_base64(Digest::SHA1::sha1(MIME::Base64::decode_base64($nonce) . $timestamp . $password), '');

When using WSSE for password validation, I now always check the incoming digest against both versions of my generated digest to ensure compatibility with different versions of Atom-enabled software. One of the best examples of this is the Nokia Lifeblog. Older versions expect the nonce to be left as it is; newer versions expect the nonce to be decoded first.

Thoughts On Nokia 7610 Web Browsing

One thing that is really annoying me at present is Nokia not shipping a decent browser on the 7610 phone. I’ve found the built-in web browser usable, but it does have its limitations.

It doesn’t support file upload for starters. This would be a brilliant feature to have available on the phone. All the photos I take currently have to be downloaded to my PC, or sent via the phone’s email software to be used. Just think if file upload were there: it would be trivial to build web-based applications to upload photos to sites and use them. I guess Nokia want to make sure this is limited to ensure a market for their Lifeblog product, which allows photos to be uploaded via (what appears to be) an Atom interface. However, Lifeblog is a commercial product, and I don’t really want to pay extra for what I would have thought was standard functionality for a phone of this type.

Opera For Mobile would appear to offer a superior solution for web browsing on the 7610. It supports all current sites out of the box, re-rendering them if necessary to fit the screen on the phone. It also allows itself to be used and launched by third party software. The sore point here is that you only get a 14 day free trial before you have to buy it, and it’s really expensive. Opera do offer bulk purchase deals, and it would have been nice for Nokia or Orange (my network provider) to have supplied this with the phone. For individual purchases, the cost is just too high, and I can’t justify it with my current usage levels. If I bought it I may find my usage levels (and data bill) increasing, justifying the purchase price, but not at present. At a time when mobile networks are trying to increase their data revenues, surely providing good connectivity software to users would help them claw back any bulk purchase cost in profit from the increased data traffic. It’s also a shame that Opera can’t create a free version, maybe supported by ads. People are more inclined to spend on their phone, so the right ads could well subsidise the cost of giving it away for nothing.

I had installed Opera when I first got my 7610 a few months ago, but never really used it. This was a mistake as when I really want to evaluate it, I find my trial time has run out. As I didn’t have anything saved for it, I tried removing it and reinstalling it. The software is clever enough to know it’s been installed before, and won’t let me have another trial period. I don’t think Series 60 Symbian phones have the equivalent of the Registry in Windows, so it must be putting a file somewhere. I would assume this would be on the C drive, and not on the MMC drive. I’ve had a delve around with FExplorer and found an Opera folder in the images directory. I deleted it, but it hasn’t made a difference. Does anyone know how this works, or how to reset the trial period? I really want to give Opera a serious try, and I don’t want to have to buy it full price (yet) or get another phone to do so.

Incorrect HTTP Status Codes

A third party I work with made a change to some web code today that managed to knock out a system I have that relies on the data served from it.

The server handles content from other web applications and, when it becomes busy, spits out a generic warning message saying to come back later. However, this also affected the data feed I was taking from it.

I didn’t realise this, as their server always returns the HTTP status code 200, meaning everything is OK. In these situations, web-based applications should be returning 503 Service Unavailable, meaning the service is temporarily overloaded.

My code was checking for errors based on this status code, but as an incorrect code was returned, it failed. Time to add some more sanity checking to my code, and to ask that others try to follow the correct HTTP status code specifications.
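A sketch of the kind of extra sanity checking I mean, assuming the feed is XML (the body check here is a made-up heuristic for illustration, not my actual code):

```python
def feed_is_usable(status, body):
    # Trust the payload only if the status code really is 200 AND the body
    # looks like the feed we expect, not a generic holding page.
    if status != 200:
        return False
    return body.lstrip().startswith("<?xml")

print(feed_is_usable(200, '<?xml version="1.0"?><feed/>'))
print(feed_is_usable(200, "Busy - come back later"))
print(feed_is_usable(503, '<?xml version="1.0"?><feed/>'))
```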