As part of my Draw Something Solver, I have worked out how to load and extract images from an HTML 5 canvas, now I need to work out what letters the images represent.
I am hoping that as the images of the letters are all computer generated I can simply compare them and create a lookup table.
I initially tried a basic serialisation by calling join() on the ImageData, as it looked like an array, but this didn't work: ImageData is not a true JavaScript array. The pixel values live in its data property, an array-like object that doesn't inherit Array's methods.
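A minimal sketch of what that serialisation can look like. Since the canvas APIs only exist in the browser, the typed array below stands in for `ctx.getImageData(...).data`, which is where the RGBA pixel values actually live:

```javascript
// ImageData itself has no join(); the pixels live in its .data
// property, a Uint8ClampedArray of RGBA values (0-255).
// This small typed array stands in for ctx.getImageData(...).data.
const pixels = new Uint8ClampedArray([255, 0, 0, 255, 0, 255, 0, 255]);

// Array.prototype.join can be borrowed for array-likes that lack
// their own join() (older browsers exposed a bare CanvasPixelArray).
const serialised = Array.prototype.join.call(pixels, ",");

console.log(serialised); // "255,0,0,255,0,255,0,255"
```

Borrowing Array.prototype.join via call() works on anything indexable with a length, which is why it sidesteps the "not an array as such" problem.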
Canvases offer a toDataURL() method that saves the canvas out as a base64-encoded PNG or JPEG image. Base64 is a text-based encoding, so it could be suitable. When I tried this, I found the string to be very long and not very practical to use.
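For illustration, here is the shape of the string toDataURL() returns. The canvas call itself only works in the browser, so this sketch uses a hard-coded data URL for a 1×1 transparent PNG and shows how the base64 payload can be split off and decoded (atob() in the browser, Buffer in Node):

```javascript
// In the browser this would be: const dataUrl = canvas.toDataURL("image/png");
// Hard-coded here: the data URL of a 1x1 transparent PNG.
const dataUrl =
  "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==";

// The string is "data:<mime>;base64,<payload>" - strip the header
// to get at the encoded image bytes.
const base64 = dataUrl.split(",")[1];

// Decode and check the first four bytes against the PNG signature
// to prove the payload really is a PNG.
const bytes = Buffer.from(base64, "base64");
console.log(bytes.slice(0, 4)); // <Buffer 89 50 4e 47>, i.e. "\x89PNG"
```

Even this single-pixel image produces a string over a hundred characters long, which hints at why a full game tile is impractical to use directly as a key.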
So I have a long unique string that I need to look up and match against; this is crying out for a hashing function to turn it into something more manageable.
Running the resulting code over several images has produced some disappointing results. Not all letters consistently encode to the same SHA1 hash, which means there are sometimes tiny differences in the images of the letters, even though they look identical to the human eye.
I now need to look at another approach for letter detection. There are two I can think of: the first is to convert the image into something less complex, a simple black and white image that should hopefully be more likely to match consistently. The second is to investigate full OCR technology.
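The first approach could look something like the sketch below: thresholding RGBA pixel data to pure black or white, so that slightly different anti-aliased grey edge pixels snap to the same value. The small array again stands in for `ctx.getImageData(...).data`, and the threshold of 128 is an assumption that would need tuning against real tiles:

```javascript
// Convert RGBA pixel data to pure black and white: any pixel whose
// luminance is above the threshold becomes white, the rest black.
// Near-identical anti-aliased pixels snap to the same value, so two
// visually identical letters should now serialise (and hash) identically.
function toBlackAndWhite(data, threshold = 128) {
  const out = new Uint8ClampedArray(data.length);
  for (let i = 0; i < data.length; i += 4) {
    // Standard luminance weights for the red, green and blue channels.
    const lum = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    const v = lum >= threshold ? 255 : 0;
    out[i] = out[i + 1] = out[i + 2] = v;
    out[i + 3] = 255; // force fully opaque
  }
  return out;
}

// A light-grey pixel and a dark-grey pixel in RGBA order...
const sample = new Uint8ClampedArray([200, 200, 200, 255, 60, 60, 60, 255]);
console.log(toBlackAndWhite(sample)); // white pixel, then black pixel
```

The output array can then be fed back through the join-and-hash steps above; the hope is that the hashes become stable once the noisy low-order channel values are gone.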
I have looked at Grayscaling Image With Perl in the past, so I hope I can reuse that knowledge with this project.