I’m redoing a course on Learn Squared by Matte Painter Steven Cormann on, well, matte painting. He stresses the importance of camera projection in matte painting, but that seems to be mostly in the context of camera moves, creating parallax without using dense 3D geometry. That is, classic 2.5D technique. I’m not considering doing any camera moves, but I am considering creating scenes not just for one shot, but rather for a number of shots from the same ‘location’. A bit like a gamer would see exploring a game environment. So, create a fairly fleshed out environment, and then see what nice pictures I can get while exploring it. Crazy, heh? Creating the shot first and then creating the image to realize it is not working so well for me. The other approach is more like a photographer. I’m hoping that I can be a bit more free about creating an environment (basically a game level) without having to worry too much about whether it will actually look any good. I’ll use very simple assets too, such as cards and projections onto slightly more complex geo.
Anyway, there’s a good tutorial on Blender Cookie about camera projection in Blender. There are several possible approaches to this problem. One can map a background image to an object (e.g. a cube) using the Window output of a Texture Coordinate node, then just model against that. It doesn’t matter how simple the geo is with this approach, but one does need to stay in Camera View in the viewport. Another approach is the UV Project modifier, which implements classic camera projection. One can navigate around the viewport without disturbing the texture mapping, but a bit more geo is required to get a good mapping; otherwise, very wonky lines. A third approach is to UV map the geo using the Project from View option, which also requires a decent number of verts for a clean mapping.
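To make the projection idea concrete, here’s a minimal plain-Python sketch (no bpy, and simplified: the camera is assumed to sit unrotated, looking straight down -Z as Blender cameras do by default) of the mapping that the UV Project modifier and Project from View both compute. Each vertex is moved into camera space and divided by its depth to get a UV coordinate; the `focal` and `sensor` parameters are stand-ins for Blender’s camera settings in millimetres.

```python
def project_to_uv(vert, cam_loc, focal=50.0, sensor=36.0):
    """Project a world-space point through a pinhole camera to UV space.

    Simplified sketch: the camera sits at cam_loc looking straight down -Z
    with no rotation. A full version would first transform the vertex by
    the inverse of the camera's world matrix.
    """
    # move the vertex into camera space
    x = vert[0] - cam_loc[0]
    y = vert[1] - cam_loc[1]
    z = vert[2] - cam_loc[2]          # negative when in front of the camera
    if z >= 0:
        raise ValueError("point is behind the camera")
    # perspective divide, scaled so the sensor width spans 0..1 in U
    scale = focal / sensor
    u = 0.5 + scale * x / -z
    v = 0.5 + scale * y / -z
    return (u, v)

# a point straight ahead of the camera lands at the image centre
centre = project_to_uv((0.0, 0.0, 0.0), (0.0, 0.0, 10.0))
```

Points that project outside 0–1 fall off the image, which is exactly where the “wonky lines” come from: on sparse geo the interpolation between projected verts drifts away from the true projection.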
There’s also the issue of how to manage it when the job is done. Steven’s approach is to keep the projection cameras, usually importing cameras and geo into Nuke. However in Blender one can simply apply the UV Project modifier and this has exactly the same effect as mapping with Project from View in the first place.
I think the best approach would be to use the Window output of the Texture Coordinate node while getting the basic geo to the right size and shape, then subdivide to get a decent amount of geo and switch to a UV Project modifier, applying that at the end to eliminate the need to keep the projection camera.
This scene required only a few hundred polys. I rendered a tree from xFrog and used camera mapping to project it onto geo consisting of a cylinder (for the trunk) and an icosphere (for the leaves). The shadows are a bit funky, but I’m sure I could fix that with a little more work. A set of assets like this would be very handy for playing around with the composition of landscapes, and would allow a lot of detail in a scene. I could just use image planes, but this is a step up from that, perhaps for mid-ground assets.
So here’s a comparison. Foreground tree is mesh, midground tree is texture projected onto simplified geo, and background trees are billboards. Lighting is a problem.
This definitely works better in Cycles. The cloud is an image plane courtesy of PhotoBash. The total scene is about 250,000 polys, nearly all of which is in the hero tree at front left.
I’m thinking of going back to matte painting, the style that uses 3D for some of the elements in the scene, and a lot of photobashing as well. I’ve just been reviewing the camera projection technique in Blender (hence the above image, which is a cube with a projected texture).
I’ve been very uninspired lately, and matte painting might be a little more accessible than pure 3D. Not sure why that should be the case, but I’ve done quite a bit of comping renders into photos in the past and it’s a way to get a fairly complex scene that still has some of the 3D quality, whatever that is. Mostly lighting and shadows I guess.
There’s a good tutorial on the technique on CGCookie, which I’m still subscribed to because I never got around to closing my subscription despite not doing any computer graphics for over a year.
I moved some low-poly assets that I modelled a couple of years ago into the Asset Browser folder, and constructed this scene by dragging assets from there. It’s basically the same scene I created back then, but a different process. I had to fiddle around a bit converting a collection instance into actual individual objects, but once I found out how to do that (Make Instances Real, which is not intuitive to find) the rest was straightforward. So, proof of concept, as usual.
Same image as the last post, but I rendered this with Blender 3.0. I’ve often thought that game engines have one advantage over Blender, and that is an asset browser. Beats appending from some file (which one? what was the asset named? etc.). A course I’ve been doing is by an AAA game developer who creates assets in Maya and ZBrush but imports them into Unreal for environment creation and rendering.
The new Blender (3.0) has an Asset Browser!! As I want to focus on more stylized environments with a more agile design process this definitely seems the way to go. Composition and lighting have always been more interesting to me than the quality of assets or the design of characters/creatures/mechs or arch. viz. stuff. Anyway, I’m looking forward to the new phase in my artistic career.
I changed the angle of the sun and made it a bit more yellow for a late afternoon feel. I should probably post process all my images in Krita for best results. This could perhaps do with slightly higher contrast.
This probably doesn’t look very different from other recent work, but there are a few new features. One is that I didn’t use an HDRI for the sky, just the built-in Sky Texture. Also, I’ve used alphas for the clouds, which render a lot faster than any kind of volumetric cloud while giving more control than an HDRI would.
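What an “alpha” cloud actually does at render time is just the standard over operation per pixel. Here’s a sketch, assuming straight (unassociated) alpha as you’d get from a PNG cloud cutout:

```python
def alpha_over(fg, bg, alpha):
    """Composite one RGB pixel of a cloud card over the sky behind it.

    alpha=1 shows only the cloud, alpha=0 only the sky, and the values in
    between blend the edge -- which is why a flat card with a good alpha
    channel can stand in for a volumetric at a fraction of the cost.
    """
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))

# e.g. a half-transparent white cloud edge over a blue sky pixel
edge = alpha_over((1.0, 1.0, 1.0), (0.2, 0.4, 0.8), 0.5)
```

A volumetric has to march through the cloud for every sample; the card pays this one multiply-add per pixel instead.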
The rock is a sculpt following the technique of Julien deVille. These are all instances of the same rock. I added a bit of grunge over the top in Krita after rendering – they were all a bit too squeaky clean. I’m going for a more stylized look following Tyler Smith’s tutorial on LearnSquared. He’s working in Unreal, and I must admit just being able to paint on foliage and small rocks (like I could in Vue) is a very appealing workflow. Maybe I should start using Unreal for rendering!
I tried using Eevee for rendering but it was very slow. Cycles was actually faster, and that wasn’t even e-cycles. I must give that a go and see if it’s much faster.
In the spirit of keeping it more like an illustration I’ve upped the saturation a bit on the grass. I think it does give it a bit more punch. I noticed that Tyler Smith, in that LearnSquared course, spent a lot of time tweaking his colours.
And here’s the result of a couple more hours tweaking.
The atmosphere is not so obvious here because the HDRI is blurry from being low res. I’ve found another tutorial on environments, this time on Learn Squared. It’s going for the stylized look, not sure if the above image qualifies for that. Part of his process involves sculpting individual leaves in ZBrush, going for a low poly, game engine result!! Crazy, I know. Of course, lots of baking of textures so I guess it works. He did work on Ghost of Tsushima, so there’s that.
I found a new tutorial on procedural ground texture on YT, and here it is. Heavily supplemented by plants from RealGrass and an HDRI from HDRIHaven. In this case displacement of the mesh was achieved simply by moving verts around, instead of adding a Displace modifier or using a Displacement node with adaptive subdivision. As usual, controlling plant distribution required a mesh of some density, not just a displaced plane.
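The “just move the verts” displacement can be sketched in plain Python (no bpy). The noise here is hypothetical – a couple of sine octaves plus seeded jitter standing in for whatever noise texture the tutorial uses; in Blender the same loop would write to `mesh.vertices[i].co.z`:

```python
import math
import random

def displace_grid(n=8, amplitude=0.4, seed=1):
    """Build an n x n grid of verts in the XY plane and push each one up
    by a pseudo-random height -- moving the verts directly, instead of a
    Displace modifier. Returns a list of (x, y, z) tuples.
    """
    rng = random.Random(seed)        # seeded, so the result is repeatable
    verts = []
    for j in range(n):
        for i in range(n):
            x, y = i / (n - 1), j / (n - 1)
            # cheap stand-in "noise": one sine octave plus jitter
            z = amplitude * (0.6 * math.sin(4 * x + 1.7)
                             * math.cos(3 * y + 0.3)
                             + 0.4 * (rng.random() - 0.5))
            verts.append((x, y, z))
    return verts
```

The point about density falls out of this directly: the displacement (and any particle distribution weighted by it) can only vary per vertex, so an 8×8 grid gives 64 control points and a bare plane gives 4.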
So this is not too different from the previous image, except that I rendered it in layers and assembled them in Krita. Just reviewing that workflow. Also, I installed regular Blender again because e-cycles has been giving me lots of problems – crashing, renders stopping due to memory errors, etc. Regular Blender might be slower, but at least it works. The fault is probably mine; I just don’t know how to use it properly.