So I’m doing a course or two on Inkscape so I can create vector content for my LearnSquared course. The tutor of that course also uses various effects in Cinema4D that I think would best be done with geometry nodes in Blender, so I’ve just acquired a course on that on Udemy. I think it could be really useful for the kind of image we’re making on LearnSquared. Lots to do.
I’ve tried out a couple of techniques from the workshop. The plan for the building was created in Inkscape and extruded in Blender. The bump on the building was another Inkscape image exported as PNG. Not very appropriate, but proof of concept, as I like to say. The crescent is a sphere rendered in Blender with its own lighting – view layers are handy for isolating elements and lighting. Plus a grungy background from CGTextures composited in using Krita, though it’s not actually noticeable in this JPEG. It’s more obvious in the original file. I must admit that the tutor’s image was much more interesting than this one. Still some work to do, I guess.
I was a bit wrong about the approach of the new course I’m doing. The final image is mainly composited in Photoshop (Krita for me). Some vector elements are imported into Photoshop as picture elements. Others are exported as PNGs to be used as bump maps in the 3D app, and some as SVGs to be used as a basis for extruding geometry. The 3D app is used both to create complex geo based on the vectors and to create quite simple geo (e.g. spheres). A major use of the 3D app is to create interesting lighting (and materials) and export images for compositing in Photoshop.
The ‘UI’ in the course name is a bit misleading. The aim is to create the kind of scene one finds in sci-fi movies where the characters are interacting with some kind of interface, usually holographic, often 3D, for example a 3D holographic map of a star system or something similar. Future tech.
I’ve started a course on LearnSquared about creating UI-type elements, and it requires a lot of vector graphics, so I’m preparing with a Udemy course on Inkscape. I’ve never been able to get into Inkscape in the past – I don’t really know why, as I’ve used Illustrator a bit – but this time through it seems a lot more accessible. Anyway, here’s a fairly simple pattern using some functionality that could be very useful for the UI course.
The course starts with vector elements and then uses them to create 3D elements in a 3D DCC app, Cinema4D in the course, but of course I plan to use Blender. I was considering trying to use the Grease Pencil in Blender to create the vector content, but decided that Inkscape would probably have more sophisticated features, as it’s been around a lot longer. Also, it will be a lot closer in workflow to what the tutor is doing in Illustrator. Anyway, I’m enjoying the journey so far.
I’m redoing a course on Learn Squared by matte painter Steven Cormann on, well, matte painting. He stresses the importance of camera projection in matte painting, but that seems to be mostly in the context of camera moves, creating parallax without using dense 3D geometry. That is, classic 2.5D technique. I’m not considering doing any camera moves, but I am considering creating scenes not just for one shot, but rather for a number of shots from the same ‘location’. A bit like what a gamer would see while exploring a game environment. So, create a fairly fleshed-out environment, and then see what nice pictures I can get while exploring it. Crazy, eh? Creating the shot first and then creating the image to realize it is not working so well for me. The other approach is more like a photographer’s. I’m hoping that I can be a bit more free about creating an environment (basically a game level) without having to worry too much about whether it will actually look any good. I’ll use very simple assets too, such as cards and projections onto slightly more complex geo.
Anyway, there’s a good tutorial on Blender Cookie about camera projection in Blender. There are several possible approaches to this problem. One can map a background image to an object (e.g. a cube) using the Window output of a Texture Coordinate node, then just model against that. It doesn’t matter how simple the geo is with this approach, but one does need to maintain Camera View in the viewport. Another approach is the UV Project modifier, which implements classic camera projection. One can navigate around in the viewport without disturbing the texture mapping, but a bit more geo is required to get a good mapping – otherwise you get very wonky lines. A third approach is to UV map the geo using the Project from View option, which also requires a decent number of verts for a clean mapping.
There’s also the issue of how to manage it when the job is done. Steven’s approach is to keep the projection cameras, usually importing cameras and geo into Nuke. However in Blender one can simply apply the UV Project modifier and this has exactly the same effect as mapping with Project from View in the first place.
I think the best approach would be to use the Window output of the Texture Coordinate node while getting the basic geo to the right size and shape, then subdivide to get a decent amount of geometry and switch to a UV Project modifier, and finally apply that modifier to eliminate the need to keep the projection camera.
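All three mappings come down to the same perspective projection: transform a vertex into camera space, divide by depth, and remap the result into 0–1 UV space. Here’s a minimal sketch in plain Python (not Blender’s actual code – the function name and the default FOV, roughly Blender’s 50 mm lens on a 36 mm sensor, are my own assumptions):

```python
import math

def camera_project_uv(point, fov_deg=39.6, aspect=16 / 9):
    """Project a camera-space point (camera at origin, looking down -Z)
    to UV coordinates, like the Window output / UV Project modifier do.
    fov_deg defaults to roughly Blender's 50 mm / 36 mm sensor camera."""
    x, y, z = point
    if z >= 0:
        return None  # behind the camera
    half_w = math.tan(math.radians(fov_deg) / 2.0)  # half frame width at depth 1
    half_h = half_w / aspect
    # Perspective divide, then remap from [-1, 1] NDC to [0, 1] UV space.
    u = 0.5 + 0.5 * (x / -z) / half_w
    v = 0.5 + 0.5 * (y / -z) / half_h
    return (u, v)

# A point straight ahead of the camera lands in the centre of the frame:
print(camera_project_uv((0.0, 0.0, -5.0)))  # (0.5, 0.5)
```

Note the divide happens per vertex: across a big face the UVs get interpolated linearly, which doesn’t match true perspective – which is presumably why coarse geo gives those wonky lines until you subdivide.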
This scene required only a few hundred polys. I rendered a tree from xFrog and used camera mapping to project it onto geo consisting of a cylinder (for the trunk) and an icosphere (for the leaves). The shadows are a bit funky, but I’m sure I could fix that with a little more work. A set of assets like this would be very handy for playing around with the composition of landscapes, and would allow a lot of detail in a scene. I could just use image planes, but this is a step up from that, perhaps for mid-ground assets.
So here’s a comparison. Foreground tree is mesh, midground tree is texture projected onto simplified geo, and background trees are billboards. Lighting is a problem.
This definitely works better in Cycles. The cloud is an image plane courtesy of PhotoBash. The total scene is about 250,000 polys, nearly all of which is in the hero tree front left.
I’m thinking of going back to matte painting, the style that uses 3d for some of the elements in the scene, and a lot of photobashing as well. I’ve just been reviewing the camera projection technique in Blender (hence above image which is a cube with projected texture).
I’ve been very uninspired lately, and matte painting might be a little more accessible than pure 3D. Not sure why that should be the case, but I’ve done quite a bit of comping renders into photos in the past and it’s a way to get a fairly complex scene that still has some of the 3D quality, whatever that is. Mostly lighting and shadows I guess.
There’s a good tutorial on the technique on CGCookie, which I’m still subscribed to because I never got around to closing my subscription despite not doing any computer graphics for over a year.
I moved some low-poly assets that I modelled a couple of years ago into the Asset Browser folder, and constructed this scene by dragging assets from there. It’s basically the same scene I created back then, but with a different process. I had to fiddle around a bit converting a collection instance into actual individual objects, but once I found out how to do that (not intuitive) the rest was straightforward. So, proof of concept, as usual.
Rendered with Eevee.
Same image as the last post, but I rendered this with Blender 3.0. I’ve often thought that game engines have one advantage over Blender, and that is an asset browser. Beats appending from some file (which one? what was the asset named? etc.). A course I’ve been doing is by an AAA game developer who creates assets in Maya and ZBrush but imports them into Unreal for environment creation and rendering.
The new Blender (3.0) has an Asset Browser!! As I want to focus on more stylized environments with a more agile design process this definitely seems the way to go. Composition and lighting have always been more interesting to me than the quality of assets or the design of characters/creatures/mechs or arch. viz. stuff. Anyway, I’m looking forward to the new phase in my artistic career.
I changed the angle of the sun and made it a bit more yellow for a late afternoon feel. I should probably post process all my images in Krita for best results. This could perhaps do with slightly higher contrast.