Monthly Archives: March 2014

In-Browser, Computer-Vision-Based Pong

To begin, I just couldn't resist using an Atari-harkening font in this one. 🙂

This particular project evolved my GLSL skills and started to teach my brain how to work better on a GPU, which, naturally, means I have a very nicely decorated whiteboard to my right at the moment. And I'll be sharing it too, in a moment...

I've been thinking about doing a Computer Vision project for a little while now. I was inclined to do a desktop app and use OpenCV, but on doing some research into what goes into computer vision I realized, "Hey, I know the tools to make this work in browser." So, I did.

My first task to get Computer Vision working was to take the RGB feed from my webcam and convert it to HSV. I've done chroma keys before using just the raw RGB values, but with the recommendation that HSV might work better, I figured why not give it a try? This is where the GLSL came in, and what I had to write was an implementation of the following:

Let R be the Red RGB component in the range 0.0 to 1.0.

Let G be the Green RGB component in the range 0.0 to 1.0.

Let B be the Blue RGB component in the range 0.0 to 1.0.

Let ε be an inconsequential float value to prevent division by 0.

Value is simply the largest component:

V = max(R, G, B)

The definition of max() is elided, but it just returns the maximum of the values it was passed in.

Each per-channel Hue expression had to be accompanied with a related offset value:

If V = R: H′ = (G − B) / (6(V − min(R, G, B)) + ε), offset = 0/3

If V = G: H′ = (B − R) / (6(V − min(R, G, B)) + ε), offset = 1/3

If V = B: H′ = (R − G) / (6(V − min(R, G, B)) + ε), offset = 2/3

So my final result for Hue is actually H = fract(H′ + offset). This is because in the HSV colorspace, Hue is represented as a circle going round from 0.0 back around to 1.0. So, in order to select the right hue, an offset needs to be applied to rotate the value round to the right place.

And finally, Saturation:

S = (V − min(R, G, B)) / (V + ε)
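Putting the pieces together, the conversion can be sketched in plain JavaScript (the function name `rgbToHsv` and the `eps` value are mine, not from the original shader; the shader version works on vectors instead of branching like this):

```javascript
// Sketch of the RGB -> HSV math above in plain JavaScript.
// All channels and results are in the 0.0..1.0 range; eps guards
// against division by zero, as in the write-up.
function rgbToHsv(r, g, b) {
  const eps = 1e-10;
  const v = Math.max(r, g, b);        // Value
  const min = Math.min(r, g, b);
  const s = (v - min) / (v + eps);    // Saturation
  // Hue candidate and its offset depend on which channel is the max.
  let hPrime, offset;
  if (v === r)      { hPrime = (g - b) / (6 * (v - min) + eps); offset = 0 / 3; }
  else if (v === g) { hPrime = (b - r) / (6 * (v - min) + eps); offset = 1 / 3; }
  else              { hPrime = (r - g) / (6 * (v - min) + eps); offset = 2 / 3; }
  // Hue is a circle, so wrap into 0.0..1.0 (GLSL's fract()).
  const h = (hPrime + offset + 1) % 1;
  return [h, s, v];
}
```

Pure red lands at H = 0.0, pure green at 1/3, pure blue at 2/3, which is the 0°/120°/240° layout of the hue circle scaled to 0.0..1.0.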

I found some other documentation about this calculation online and found it messy. That might be normal when discussing math like, I don't really know, this, but my engineering brain likes things broken down into their individual components. Hopefully, the above will help someone else out! 🙂 Now to calculate that on the GPU, making it time to live up to my promise of bringing the whiteboard in...

I ended up leaning heavily on mix() and step() in order to reduce the branching I had to write. Branching, as I understand it, is becoming less of a concern in GPU programs, but the article saying that branching performance was improving was from 2011, and that's too recent to assume all GPUs will behave nicely. Below those function notes, there's a tracking of how my values were going to flow through the vectors so I could end up with the maximum in the first position, the two values I needed to subtract to get my Hue numerator, and finally the offset that went with the selected channel so I could calculate Hue properly. For the curious, the shuffling that's done with mix() and step() is effectively a vectorized ternary operator.
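To show how mix() and step() combine into a branchless ternary, here's a scalar JavaScript sketch of the two GLSL built-ins (in the shader they run per-component on vectors; the helper names below are mine):

```javascript
// GLSL's mix() and step() re-implemented for scalars.
const mix = (a, b, t) => a * (1 - t) + b * t;      // linear blend: t=0 -> a, t=1 -> b
const step = (edge, x) => (x < edge ? 0.0 : 1.0);  // 0.0 below edge, 1.0 at/above

// "x >= edge ? hiCase : loCase" without a branch:
const select = (loCase, hiCase, edge, x) => mix(loCase, hiCase, step(edge, x));

// e.g. shuffle two values so the larger comes first, no "if":
function sortPairDesc(a, b) {
  const t = step(b, a);                // 1.0 when a >= b
  return [mix(b, a, t), mix(a, b, t)]; // [larger, smaller]
}
```

The same pattern, applied to vec3/vec4 components, is what shuffles the maximum channel into the first position and picks the matching offset.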

The conversion of RGB to HSV, and writing the GLSL shader to do it, was most of the effort. But the shader wasn't quite done yet. The final step was passing Min and Max values for each channel: H, S, V. When all three were within the defined min and max, the shader needed to return white; when any of them was out of bounds, the shader had to return black. This way, the user can select an object in the real world, by its color, to function as a paddle. Now, I did look for and find shader implementations of RGB to HSV online; however, I opted to roll my own to make sure I understood what was going on, as this was pretty much the beating heart of this project.
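The bounds test itself is simple. Here's a branchless, GLSL-style sketch in JavaScript (the name `hsvMask` is mine, not from the project):

```javascript
// 0.0 below edge, 1.0 at/above -- GLSL's step() for scalars.
const step = (edge, x) => (x < edge ? 0.0 : 1.0);

// Returns 1.0 (white) when every HSV channel sits inside its
// [lo, hi] window, 0.0 (black) otherwise -- the mask the game
// reads paddle pixels from.
function hsvMask(hsv, lo, hi) {
  let inside = 1.0;
  for (let i = 0; i < 3; i++) {
    // in range means hsv[i] >= lo[i] AND hi[i] >= hsv[i]
    inside *= step(lo[i], hsv[i]) * step(hsv[i], hi[i]);
  }
  return inside;
}
```

Multiplying the per-channel results together is the branchless version of "all three must pass": a single out-of-bounds channel zeroes the whole product.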

On the JavaScript side of things, I did run into a problem. I wanted to use sliders so users could hone in on the color they wanted to select. However, HTML5 only allocates one thumb per slider. I know I could have used jQueryUI to get two thumbs, but I opted to stay away and handle the problem myself. It was also tempting to use three.js to handle the WebGL side of things, but I decided not to bring it in either. The changes weren't too significant from my ANSI.WebGL project, so I saw no need to take on another dependency.
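One way to handle the one-thumb-per-slider limitation (a hypothetical sketch, not necessarily how the project did it) is to use two `<input type="range">` elements per channel and clamp them against each other on every change, so the min thumb can never pass the max thumb:

```javascript
// Given the current values of a paired min/max slider and which
// thumb the user just moved, return a consistent [min, max] pair.
// When the thumbs cross, the thumb that was NOT moved gets pushed.
function clampPair(minVal, maxVal, changed) {
  if (minVal > maxVal) {
    return changed === "min" ? [minVal, minVal] : [maxVal, maxVal];
  }
  return [minVal, maxVal];
}
```

Wired to the `input` events of both range elements, this gives two-thumb behavior without pulling in jQueryUI.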

The JavaScript I did write is pretty straightforward. There's a ball (a div that's been styled to look like a ball) that flies around, and when it hits a paddle it flies back the other way. A "hit" here is defined as the left or right side of the ball touching a white paddle pixel. When that happens, that column of pixels is scanned for connected white pixels to figure out the length of the paddle. If it's above a threshold (provided to filter out artifacts), the calculation figures out where the ball hit in relation to the paddle, and the ball bounces away at an angle. The only "gotcha" I ran into with this part was that when I read the pixels from the buffer, I found them inverted, which broke my hit detection! Thankfully, it was a simple fix to right them.
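The column scan described above can be sketched like this (names and return shape are mine; `column` is one pixel column of the black/white mask as 0/1 values, and `hitRow` should already be flipped, since gl.readPixels returns rows bottom-to-top):

```javascript
// Scan one mask column for the connected run of white pixels that
// contains hitRow. Returns null when there's no paddle there or the
// run is too short (an artifact); otherwise returns the run length
// and where the ball struck, from -1.0 (top edge) to +1.0 (bottom).
// Assumes threshold >= 2 so (length - 1) is never zero.
function paddleHit(column, hitRow, threshold) {
  if (column[hitRow] !== 1) return null;           // no paddle pixel here
  let top = hitRow, bottom = hitRow;
  while (top > 0 && column[top - 1] === 1) top--;
  while (bottom < column.length - 1 && column[bottom + 1] === 1) bottom++;
  const length = bottom - top + 1;
  if (length < threshold) return null;             // artifact, not a paddle
  const where = ((hitRow - top) / (length - 1)) * 2 - 1;
  return { length, where };
}
```

The `where` value is what drives the bounce angle: hits near a paddle edge deflect more sharply than hits dead center.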

For much larger computer vision problems, I would probably turn to something like OpenCV. Still, getting to understand something like why, sometimes, we convert to the HSV color space is, I think, very important! Understanding how the tools you are using work, I hold to be key to actually using them well. It also allows you to answer a question I've mentioned before as being part of my process. 😉

You will need either a recent version of Firefox or Chrome in order to run this project due to the needs of getUserMedia() and WebGL.




The problem I set out for myself, rendering ANSI art in a web browser, turned out to have more ins and outs than I anticipated and quickly became a case where I needed to iterate and refine my idea in order to carry it out. From the beginning, I knew that doing something like rendering a few thousand <span>s, one for each character so each character could be styled, was not going to be a viable solution because it would not scale to work well on all browsers. So I attempted to go with SVG as my solution and use a mask in order to get all the colors I needed, kinda like using an image brush, as this would afford me the possibility of limiting the number of elements I needed to create. After I got the right @font-face setup, the solution worked great in Opera (Blink), but not so hot in Firefox.

At that point I considered dropping the whole project, as I found myself being pushed toward an HTML5 canvas-based renderer for speed, and I didn't really want to reinvent the wheel since I had found an implementation online that solved the problem that way already. Then, I lost my job. 🙁

But! Ok! 🙂 New reasons to putter: keep my skills fresh, solve an interesting problem, learn something new, and have a new project to show off to would-be employers!

On resuming the project, I opted to try the WebGL route. One thing I like about this solution is that I only need one sprite of all the characters to render any color combination I want, since I'm able to draw each character with a fragment shader: when it reads black for the current texture pixel, put the background color; white, put the foreground. A simple rule with a lot of flexibility.
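In GLSL that rule is a single mix() of the two colors by the sampled texel. A per-channel JavaScript sketch of the same idea (function names mine):

```javascript
// GLSL's mix(): t=0 -> a, t=1 -> b.
const mix = (a, b, t) => a * (1 - t) + b * t;

// texel is the sampled glyph-sprite value: 0.0 (black) or 1.0 (white).
// bg and fg are [r, g, b] colors; black texels take the background
// color, white texels take the foreground color.
function shadeTexel(texel, bg, fg) {
  return bg.map((bgC, i) => mix(bgC, fg[i], texel));
}
```

Because the blend weight is just the texel value, one monochrome sprite can render every foreground/background combination with no extra assets.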

The project was also a great opportunity to play with newer JavaScript features that I haven't been able to use due to the need to support legacy browsers. In addition, it features my ability to organize, which represents one of my two greatest strengths as a software engineer. The other being that I do this job primarily because I have a passion for it. 🙂