Wikipedia Deconstructed by Jack Wild


How did you start?

I’ve been experimenting with GPU instancing recently to take advantage of the huge performance boosts it offers, and this challenge felt like a good opportunity to take that further by combining it with physics simulation on the GPU. I’ve done a lot of physics on the CPU before, but never with GPGPU. I thought it would be nice to create an abstract piece using the text from the articles, and I just experimented with that until I arrived at this.
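The GPGPU simulation mentioned above stores particle state in textures and advances it in a shader each frame, ping-ponging between two render targets. As a rough illustration of that pattern (a CPU sketch, not the project's actual shader code; the buffer layout and the simple Euler step are my own assumptions), the idea looks something like this:

```javascript
// CPU sketch of the GPGPU ping-pong pattern: two state buffers
// stand in for the two render-target textures that the GPU
// version alternates between each frame.
const COUNT = 4; // number of particles (tiny for illustration)

// Each particle stores x, y position and vx, vy velocity,
// much as a GPGPU texture would pack them into RGBA channels.
let read = new Float32Array(COUNT * 4);
let write = new Float32Array(COUNT * 4);

// Seed some initial velocities.
for (let i = 0; i < COUNT; i++) {
  read[i * 4 + 2] = 0.1 * (i + 1); // vx
}

function step(dt) {
  // In the GPU version this loop body is the fragment shader:
  // it reads the previous state texture and writes the next one.
  for (let i = 0; i < COUNT; i++) {
    const o = i * 4;
    write[o] = read[o] + read[o + 2] * dt;         // x += vx * dt
    write[o + 1] = read[o + 1] + read[o + 3] * dt; // y += vy * dt
    write[o + 2] = read[o + 2]; // velocities unchanged here
    write[o + 3] = read[o + 3];
  }
  // Swap buffers, like swapping render targets on the GPU.
  [read, write] = [write, read];
}

step(1);
console.log(read[0]); // particle 0 has moved by vx * dt
```

On the GPU, each instanced plane would then read its position back out of the state texture in the vertex shader.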

Which difficulties did you face?

I hadn’t done any GPGPU work before, so it was a tricky technique to wrap my head around, in particular finding the best texture type to get the right precision for the simulation. A friend pointed me in the right direction and told me to use FloatType (or HalfFloatType on mobile), which means values don’t have to be mapped between 0 and 1, so you get more control and it’s easier to work with.

Debugging the shaders is another tricky one, because you can’t rely on console logs; instead you have to render the texture map to the screen to get an idea of what kind of values the simulation is producing.

UV mapping was also pretty tricky. I render a canvas with all the letters laid out in a grid, and then have to make sure that each instance of the plane is mapped to the correct area of that texture. And the maths involved in the vertex shader is still making my head spin.
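The per-instance UV mapping described above can be sketched as a small helper that, given a letter's cell index in the grid canvas, returns the offset and scale to apply to the plane's UVs (a minimal sketch under assumed names like `atlasUv`; the grid dimensions and bottom-up V convention are my own assumptions, not taken from the project):

```javascript
// Minimal sketch of mapping an instance to one cell of a
// letter-atlas texture laid out as a cols x rows grid.
// Returns the UV offset and scale for that cell.
function atlasUv(index, cols, rows) {
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    // Bottom-left corner of the cell in UV space.
    // Texture V runs bottom-to-top, so flip the row.
    offsetU: col / cols,
    offsetV: 1 - (row + 1) / rows,
    // Each cell covers 1/cols x 1/rows of the texture.
    scaleU: 1 / cols,
    scaleV: 1 / rows,
  };
}

// In a shader these would arrive as per-instance attributes,
// applied as: vUv = uvOffset + uv * uvScale;
const cell = atlasUv(5, 4, 4); // 6th letter in a 4x4 grid
console.log(cell); // { offsetU: 0.25, offsetV: 0.5, scaleU: 0.25, scaleV: 0.25 }
```

Keeping this lookup in per-instance attributes means every plane can share one geometry and one material while still showing a different letter.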

Which technologies did you use?


Who is your favourite creative developer / studio?

I wouldn’t really want to choose — I don’t have a favourite, many studios / developers create work I love, and each has a different strong point.

Author’s website