Projection mapping Arthur’s Seat

Posted: June 21, 2009
Here’s something fun I found while clearing out the hard drives on my old PC—my undergraduate graphics project. Basically it involved projection mapping photographs of Arthur’s Seat (a big hill in Edinburgh) onto a model created from map contours. Here are a couple of videos:
Looks like shit, huh? But it was a big deal for an undergraduate in 1997 😉
There were two parts to it: generating a mesh from contour data, and then texturing it using photographs.
Building the mesh
The starting point for the mesh was digitised contour data from the UK’s Ordnance Survey (the national mapping agency). As far as I know the data is literally the contour lines digitised from paper maps, so it isn’t as clean as one might like; for example, there are gaps where text appears on the map. And it’s all encoded in a fun text format called NTF.
After writing an NTF parser (which I think I did in Perl), my first attempt to get useful data was to try to create complete contours. I found a couple of papers on the internet that weren’t very useful, then tried taking the end point of each contour segment and connecting it to the nearest endpoint of another contour at the same elevation. This produced fairly crappy results (I don’t remember why, but I still ended up with a lot of gaps.)
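That endpoint-matching idea can be sketched roughly like this (a hypothetical reconstruction in Python, not the original Perl; the `max_gap` threshold and the naive tail-to-head matching are my assumptions):

```python
import math
from collections import defaultdict

def join_contour_segments(segments, max_gap):
    """Greedily join digitised contour fragments: repeatedly extend a
    polyline with the fragment (at the same elevation) whose start point
    is nearest to the current end point, stopping when the nearest start
    is further than max_gap away.
    segments: list of (elevation, [(x, y), ...]) fragments."""
    by_elev = defaultdict(list)
    for elev, pts in segments:
        by_elev[elev].append(list(pts))
    joined = []
    for elev, frags in by_elev.items():
        while frags:
            cur = frags.pop(0)
            while frags:
                tail = cur[-1]
                # nearest remaining fragment start at this elevation;
                # note this never tries reversed fragments or prepending,
                # which is one plausible reason the results had gaps
                best = min(range(len(frags)),
                           key=lambda i: math.dist(tail, frags[i][0]))
                if math.dist(tail, frags[best][0]) > max_gap:
                    break  # gap too big -- leave the contour open
                cur.extend(frags.pop(best))
            joined.append((elev, cur))
    return joined
```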
Time for a different approach. This time I took each digitised contour point and just treated it as an isolated point—a point cloud. Then, assuming that contours don’t cross (that the ground never twists over itself), I can project them all onto the ground plane (i.e. set z = 0) and create a mesh in 2D using a Delaunay triangulation. Then bring back the z coordinates and voilà, a 3D mesh.
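With modern tools (scipy here—obviously not what I used in 1997) the whole point-cloud trick fits in a few lines:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_contour_points(points_3d):
    """Build a terrain mesh from contour points treated as a point cloud.
    Assumes the surface is a height field (no overhangs), so triangulating
    the (x, y) projection and then restoring z gives a valid 3D mesh."""
    pts = np.asarray(points_3d, dtype=float)
    tri = Delaunay(pts[:, :2])    # triangulate in the ground plane
    return pts, tri.simplices     # vertices and (n, 3) triangle index array
```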
The mesh is nice and detailed, but with more polygons than we really want—I forget how many, tens of thousands? More than you wanted on 1997 hardware, anyway. I implemented Michael Garland’s scape algorithm to decimate the mesh to a specified number of points.
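A rough sketch of the greedy-insertion idea behind scape (my reconstruction; the real algorithm maintains the triangulation and an error heap incrementally rather than rebuilding from scratch each pass):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def greedy_decimate(points_3d, target_count):
    """Terrain simplification in the spirit of Garland & Heckbert's scape:
    seed with the convex-hull corners of the ground-plane projection, then
    repeatedly insert the input point with the largest vertical error until
    the vertex budget is reached."""
    pts = np.asarray(points_3d, dtype=float)
    xy, z = pts[:, :2], pts[:, 2]
    hull = Delaunay(xy).convex_hull
    selected = set(hull.ravel().tolist())
    while len(selected) < target_count:
        idx = sorted(selected)
        # piecewise-linear surface through the currently selected points
        interp = LinearNDInterpolator(xy[idx], z[idx])
        err = np.abs(interp(xy) - z)
        err[np.isnan(err)] = np.inf   # outside current hull: force inclusion
        err[idx] = -1.0               # already selected
        worst = int(np.argmax(err))
        if err[worst] <= 0:
            break                     # surface already exact everywhere
        selected.add(worst)
    idx = sorted(selected)
    return pts[idx], Delaunay(xy[idx]).simplices
```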
I could now texture the mesh with the OS 1:50,000 map of the area and spin it around in real time on an O2—whee!
[Hardware aside: I was mostly writing this on the University’s Sun workstations with the Mesa software OpenGL library. I also had access to the multimedia lab, which had some SGI kit: an O2, an Indy and an Onyx with RealityEngine graphics and a whole 4 MB of texture memory! Around this time I also bought a 3dfx Voodoo Rush card for my PC at home, and getting the project to run on that required porting the code to Windows.]
I set off with a borrowed SLR and my bicycle to take photos to use for texturing. It started off as a nice sunny day but was raining by the time I got to the back side of the hill, so the lighting is horribly variable. I had the film scanned onto PhotoCD up to a resolution of about 3k—you can see the set here.
Calibrating the photos was a two-stage process: first marking on the map where I took the photos from and working out those co-ordinates, then figuring out the camera rotation and field of view (I ignored lens distortion). For this I wrote a little GL app that displayed the photo and the mesh and allowed the rotation and fov to be varied until the horizons matched. This was kind of tedious, so I wrote a little routine to work out the error between the two horizons and threw a numerical optimiser (from Numerical Recipes) at it.
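The horizon-fitting step can be sketched as a tiny least-squares problem. This is a toy reconstruction, not the original code: a one-axis pinhole model (pitch and vertical fov only, no roll or distortion), minimised with scipy’s Nelder–Mead, which is similar in spirit to the Numerical Recipes simplex routine:

```python
import numpy as np
from scipy.optimize import minimize

def horizon_error(params, horizon_angles, observed_rows, image_height):
    """Sum-of-squares error between the horizon marked in the photo and the
    horizon predicted for a camera with the given pitch and vertical fov.
    horizon_angles: elevation angle (radians) of the terrain silhouette at
    sampled image columns; observed_rows: pixel row of the photo's horizon
    at those columns (row 0 = top of image)."""
    pitch, fov = params
    f = (image_height / 2) / np.tan(fov / 2)   # focal length in pixels
    predicted = image_height / 2 - f * np.tan(horizon_angles - pitch)
    return float(np.sum((predicted - observed_rows) ** 2))

def calibrate(horizon_angles, observed_rows, image_height,
              guess=(0.0, np.radians(40))):
    res = minimize(horizon_error, guess,
                   args=(horizon_angles, observed_rows, image_height),
                   method="Nelder-Mead")
    return res.x  # (pitch, fov) in radians
```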
Now I could project the textures onto the model. I had a lot of overlap in coverage, and the original idea had been to use view-dependent texturing (an idea borrowed from the Facade paper), where each pixel is textured from the image whose projection is closest to the surface normal. Fancy shading like this required ditching OpenGL and going to RenderMan (BMRT, actually).
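The selection rule itself is simple—here’s a toy per-point version (the real thing ran per-pixel in a shader; function names are mine):

```python
import numpy as np

def pick_texture(point, normal, camera_positions):
    """View-dependent texture selection: for a surface point, choose the
    photo whose viewing direction is best aligned with the surface normal,
    i.e. the most head-on view. camera_positions: (n, 3) array of camera
    centres. Returns the index of the chosen photo."""
    point = np.asarray(point, dtype=float)
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    to_cam = np.asarray(camera_positions, dtype=float) - point
    to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
    # cosine of the angle between the normal and each view direction
    alignment = to_cam @ normal
    return int(np.argmax(alignment))
```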
The view dependent texturing looked pretty bad, mainly because of the lighting differences between the photos. I attempted to balance the photos in PaintShop Pro, but it still looked poor. In the end the shader just ended up blending the photos.
If I find more stuff (like more images) I’ll update this post.