Camera Projection

I’m redoing a course on Learn Squared by matte painter Steven Cormann on, well, matte painting. He stresses the importance of camera projection in matte painting, but mostly in the context of camera moves: creating parallax without using dense 3D geometry, the classic 2.5D technique. I’m not considering any camera moves, but I am considering creating scenes not just for one shot, but for a number of shots from the same ‘location’, a bit like a gamer would see exploring a game environment. So: create a fairly fleshed-out environment, then see what nice pictures I can get while exploring it. Crazy, heh? Creating the shot first and then creating the image to realize it hasn’t been working so well for me. The other approach is more like a photographer’s. I’m hoping I can be a bit more free about creating an environment (basically a game level) without having to worry too much about whether any single view will actually look good. I’ll use very simple assets too, such as cards and projections onto slightly more complex geo.

Anyway, there’s a good tutorial on Blender Cookie about camera projection in Blender. There are several possible approaches to this problem. One can map a background image onto an object (e.g. a cube) using the Window output of a Texture Coordinate node, then just model against that. It doesn’t matter how simple the geo is with this approach, but one does need to stay in Camera View in the viewport. Another approach is the UV Project modifier, which implements classic camera projection. One can navigate around the viewport without disturbing the texture mapping, but a bit more geo is required to get a good mapping; otherwise you get very wonky lines. A third approach is to UV map the geo using the Project from View option, which also requires a decent number of verts for a clean mapping.
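All three approaches boil down to the same mapping: a point in front of the camera gets a perspective divide and lands somewhere in the 0–1 frame. Here’s a minimal sketch of that mapping, roughly what the Window output hands the shader. It assumes a camera at the origin looking down -Z (Blender’s camera convention), a 50mm lens on a 36mm sensor, and a square frame; `window_coords` is just an illustrative name, not a Blender API.

```python
def window_coords(point, focal_len=50.0, sensor_width=36.0):
    """Map a camera-space point to window coordinates in [0, 1].

    Assumes the camera sits at the origin looking down -Z, with a
    50mm lens on a 36mm sensor and a square frame (simplifications).
    """
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    scale = focal_len / sensor_width  # perspective scale at unit depth
    # Perspective divide, then shift so frame centre is (0.5, 0.5)
    return (0.5 + scale * x / -z, 0.5 + scale * y / -z)

# The point dead ahead of the lens lands at frame centre:
window_coords((0.0, 0.0, -10.0))   # -> (0.5, 0.5)
```

The division by depth is the whole trick: it’s why a flat image glued on this way looks correct from the projection camera and smears from anywhere else.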

There’s also the issue of how to manage the projection when the job is done. Steven’s approach is to keep the projection cameras, usually importing cameras and geo into Nuke. In Blender, however, one can simply apply the UV Project modifier, which has exactly the same effect as mapping with Project from View in the first place.

I think the best approach would be to use the Window output of the Texture Coordinate node while getting the basic geo to the right size and shape, then subdivide to get a decent amount of geo and switch to a UV Project modifier, and finally apply the modifier to eliminate the need to keep the projection camera.
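The subdivision step matters because once UVs are baked per vertex, the texture is interpolated linearly across each face, while the true projection is nonlinear in depth. A small sketch of the discrepancy, under the same simplified camera as before (origin, looking down -Z, 50mm/36mm):

```python
def project_uv(p, focal=50.0, sensor=36.0):
    # Camera at origin looking down -Z (simplifying assumption);
    # returns the window-space UV the projection would assign.
    x, y, z = p
    s = focal / sensor
    return (0.5 + s * x / -z, 0.5 + s * y / -z)

# A coarse edge whose endpoints sit at very different depths.
a = (0.0, 0.0, -5.0)
b = (3.0, 0.0, -20.0)
mid = tuple((a[i] + b[i]) / 2 for i in range(3))  # true 3D midpoint

true_uv = project_uv(mid)   # where the midpoint really lands on screen
lerp_uv = tuple((project_uv(a)[i] + project_uv(b)[i]) / 2
                for i in range(2))  # what linear interpolation gives

error = abs(true_uv[0] - lerp_uv[0])  # drift, as a fraction of frame width
```

Here the interpolated UV is off by 1/16 of the frame width at the midpoint, which is exactly the kind of wonky line you see on under-subdivided geo. Subdividing adds sample points where the projection is evaluated exactly, so the linear pieces in between stay close to the true curve.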


This scene required only a few hundred polys. I rendered a tree from xFrog and used camera mapping to project it onto geo consisting of a cylinder (for the trunk) and an icosphere (for the leaves). The shadows are a bit funky, but I’m sure I could fix that with a little more work. A set of assets like this would be very handy for playing around with landscape composition, and would allow a lot of detail in a scene. I could just use image planes, but this is a step up from that, perhaps for mid-ground assets.

So here’s a comparison. The foreground tree is a mesh, the mid-ground tree is a texture projected onto simplified geo, and the background trees are billboards. Lighting is a problem.

This definitely works better in Cycles. The cloud is an image plane courtesy of PhotoBash. The total scene is about 250,000 polys, nearly all of it in the hero tree at front left.

Matte Painting

I’m thinking of going back to matte painting, the style that uses 3D for some of the elements in the scene, along with a lot of photobashing. I’ve just been reviewing the camera projection technique in Blender (hence the image above, which is a cube with a projected texture).

I’ve been very uninspired lately, and matte painting might be a little more accessible than pure 3D. Not sure why that should be the case, but I’ve done quite a bit of comping renders into photos in the past and it’s a way to get a fairly complex scene that still has some of the 3D quality, whatever that is. Mostly lighting and shadows I guess.

There’s a good tutorial on the technique on CGCookie, which I’m still subscribed to because I never got around to cancelling my subscription, despite not doing any computer graphics for over a year.

Scene Construction

I moved some low-poly assets that I modelled a couple of years ago into the Asset Browser folder, and constructed this scene by dragging assets in from there. It’s basically the same scene I created back then, but with a different process. I had to fiddle around a bit converting a collection instance into actual individual objects, but once I found out how to do that (not intuitive) the rest was straightforward. So: proof of concept, as usual.

Rendered with Eevee.