I’m gradually getting my new pipeline under control. First, shoot lots of photos of the subject (well, first find an interesting subject, but I’ll ignore that part). Next, process them in Photoshop; this may be just a conversion to TIFF, but it could also involve masking, depending on the subject. Then load the photos into Photoscan and have it build a mesh.
The exported mesh is usually not at the origin, so I load it into Blender to move it to the right location, and maybe do some cleanup while I’m at it. A shiny object usually has lots of messed-up geometry that is a bit easier to clean in Blender than in ZBrush, for me at least.
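The recentring step can be sketched outside Blender too. Here is a minimal, hypothetical Python sketch that translates a simple ASCII OBJ so its bounding-box centre sits at the origin; the function name and the assumption of plain `v x y z` vertex lines are mine, not part of any tool in the pipeline.

```python
def recenter_obj(lines):
    """Translate all vertices so the bounding-box centre is at the origin.

    Assumes a simple ASCII OBJ where vertex lines look like "v x y z".
    Non-vertex lines (faces, normals, comments) pass through untouched.
    """
    # Collect vertex coordinates.
    verts = []
    for line in lines:
        if line.startswith("v "):
            verts.append([float(c) for c in line.split()[1:4]])
    if not verts:
        return list(lines)

    # Bounding-box centre per axis: midpoint of min and max.
    center = [(min(v[i] for v in verts) + max(v[i] for v in verts)) / 2
              for i in range(3)]

    # Emit translated vertices, preserving everything else as-is.
    out = []
    vi = 0
    for line in lines:
        if line.startswith("v "):
            x, y, z = (verts[vi][i] - center[i] for i in range(3))
            out.append(f"v {x:.6f} {y:.6f} {z:.6f}")
            vi += 1
        else:
            out.append(line)
    return out
```

Inside Blender itself the equivalent is just snapping the object origin and clearing its location, but a script like this is handy if the mesh needs to be fixed before import.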
Next, load into ZBrush, generate a low-poly duplicate with decent quad geometry (using ZRemesher), fill any holes, and UV map. Some additional cleanup is usually required here, and I might end up going back to Blender for this part. Then subdivide to approximately the same number of verts as the original, and project the high-poly detail onto the new mesh at every subdivision level. More cleanup is needed, on the highest subdivision at least.
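The "subdivide to approximately the same number of verts" step is easy to estimate up front, since each subdivision level roughly quadruples the vertex count. A small sketch of that arithmetic (my own helper, not a ZBrush feature):

```python
import math

def subdivision_levels(low_poly_verts, target_verts):
    """Estimate how many subdivision levels are needed to bring a
    low-poly remesh back up to roughly the original vertex count,
    assuming each level multiplies the count by about 4."""
    if low_poly_verts >= target_verts:
        return 0
    # Smallest n with low_poly_verts * 4**n >= target_verts.
    return math.ceil(math.log(target_verts / low_poly_verts, 4))
```

For example, a 10,000-vert ZRemesher result chasing a 2,000,000-vert scan needs 4 levels (10,000 × 4⁴ = 2.56 million).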
Now into xNormal to bake out some maps. This is required at least if the model is going to be used as a low-poly asset; for what I’m doing I can just stick with the high-poly cleaned-up mesh. However, maps such as cavity and AO can be used for material definition even when I am using the high-poly mesh; AO is often used to map grime and dirt onto a model. I’m not totally happy with xNormal for this job. Knald might be better, as it seems to give more dynamic control over the results.
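The AO-as-grime trick boils down to a per-pixel blend: the darker (more occluded) the AO value, the more of the grime colour shows through. A minimal per-pixel sketch, with names and the `strength` knob being my own illustration rather than anything from xNormal or Knald:

```python
def add_grime(base, grime, ao, strength=1.0):
    """Blend a grime colour into a base colour using an AO value.

    base, grime: (r, g, b) tuples in 0..1.
    ao: ambient occlusion, 0.0 = fully occluded, 1.0 = fully open.
    strength: scales how aggressively occluded areas pick up grime.
    """
    # Occluded crevices (low AO) get the highest grime weight.
    w = min(max((1.0 - ao) * strength, 0.0), 1.0)
    return tuple(b * (1.0 - w) + g * w for b, g in zip(base, grime))
```

In a real material this runs per texel with the baked AO map as input; the point is just that open surfaces keep the base colour while crevices collect dirt.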
For final render I’m using Marmoset Toolbag for the workshop I’m doing, but will probably just stick with Vue for most of my rendering needs.