These days, PBR shading is pretty much the standard, as it almost always results in much reduced setup time (saved artist time) while producing much better, more realistic-looking results. So the thing I miss most in EmberGen currently is an up-to-date physically based shading and rendering system.
Encountering EmberGen's shading settings feels like a direct throwback to the late 90's and early 00's, when people had to do hideous things with all sorts of Blinn and Phong shaders and non-physically-based lights, and work very, very hard to achieve even mediocre-looking results. EmberGen currently feels kind of like that.
The Volume and Scene nodes mash together tons of quite unrelated parameters into ambiguously and confusingly named parameter sections, making it borderline impossible to separate aspects of scene lighting and volume material shading from each other.
Compare that to modern approaches like a physically based volume shader, where the entire volume material shading set looks like this (or similar to this):
That's exactly the issue I ran into. Setting up the look of a volume in EmberGen unfortunately devolves into a bro-science loop of random trial and error, testing out all the unphysical, arbitrary properties EmberGen throws at the user.
Let’s first take a look at the Volume Node:
The very first red flag is the presence of the Light parameter set. If the node's job is to describe the properties of the volume, it should have absolutely nothing to do with light. A volume is a physical material. Light is energy.
Sure, fire, for example, blurs the line between the two, as it's a transition from a physical material to energy, but this line is not that hard to draw; most other software allowing smoke fluid simulation has gotten it right already.
A Shadow Softness parameter should not really exist in the modern world, as shadow softness is derived from the size of the light source. It still exists in rasterized game engines, but unless the label in the EmberGen viewport lies, EmberGen is a ray tracer and therefore shouldn't need it.
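To illustrate why a separate softness slider is redundant: with a ray tracer sampling an area light, the penumbra width falls out of simple similar-triangle geometry. A minimal sketch (my own illustrative function, not EmberGen code):

```python
def penumbra_width(light_diameter, light_to_occluder, occluder_to_receiver):
    """Soft-shadow penumbra width from similar triangles: the light's
    physical size projects past the occluder edge onto the receiver."""
    return light_diameter * occluder_to_receiver / light_to_occluder

# A 1 m area light 10 m from an occluder, receiver 2 m behind the occluder:
penumbra_width(1.0, 10.0, 2.0)  # -> 0.2 m of penumbra
```

So "softness" is fully determined by light size and scene distances; no extra artistic parameter is needed.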
Long, mid and short range shadows with arbitrary 100/65/25 default values really make no sense. They do something, but it's practically impossible to associate them with any properties of real-world light sources. They appear to be meant as artistic controls over the volume's appearance rather than over lighting. This is bad, as all of these properties should be derived from just two or three general parameters: Smoke Color, Smoke Density and Smoke Absorption Coefficient.
The parameters from AO Shadows all the way down to Occlusion Color are definitely not aspects of the smoke material, but rather of global environment/scene lighting. It makes for a very confusing mental model when you have to modify the properties of a light to affect the look of a physical material. If these really need to stay, they should be parameters of some global ambient light/sky light. But once again, the amount of self-shadowing and occlusion that occurs inside the smoke volume should be a product of three main, intuitive parameters: Smoke Color, Smoke Density and Smoke Absorption Coefficient, in combination with the external light sources illuminating the smoke material.
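For anyone wondering how those three parameters could replace all the range-shadow sliders: self-shadowing inside a volume is just Beer-Lambert attenuation along the light path. A minimal sketch, assuming a homogeneous volume (function names are my own, not EmberGen's):

```python
import math

def transmittance(density, absorption_coeff, path_length):
    """Beer-Lambert law: fraction of light surviving a path through the
    volume, T = exp(-sigma_a * rho * d). Shadow darkness is then 1 - T."""
    return math.exp(-absorption_coeff * density * path_length)

# Denser smoke or a longer path through it -> darker self-shadow:
transmittance(1.0, 0.5, 2.0)  # e^-1, roughly 0.37 of the light gets through
```

Short-, mid- and long-range shadowing all emerge from this one formula; the artist only touches density and absorption.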
Extinction Shadowing all the way down to Phase Eccentricity is a set of fractured parameters which can once again be encapsulated and better expressed with the same set of basic parameters mentioned above, plus one more you can see in the Blender screenshot: Anisotropy. There should be just a single parameter defining the anisotropy of light scattering within the volume, with the default value of 0.0 meaning evenly isotropic scattering, more positive values biasing the scattering forward, and negative values biasing it backward.
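This single anisotropy parameter is exactly the g of the Henyey-Greenstein phase function, which is what most physically based volume shaders use under the hood. A minimal sketch:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function. g = 0 is isotropic, g > 0
    scatters light forward, g < 0 scatters it backward."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

# With g = 0 the function is a constant 1/(4*pi) in every direction;
# with g = 0.5 forward scattering (cos_theta = 1) dominates backward:
henyey_greenstein(1.0, 0.5), henyey_greenstein(-1.0, 0.5)
```

One slider, and the whole forward/backward scattering behavior of the volume is covered.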
The whole Scattering parameter set can once again be expressed with the same 3-4 physically based parameters mentioned above.
The Colliders parameter section once again presents us with a set of settings that seems much more appropriate for an Environment/Sky light section than for here. It also duplicates a lot of parameters from the Light parameter set. I don't think there's much of a valid use case for wanting, for example, different AO intensities and radii for smoke and colliders. All of these should live in some node describing how the global environment light behaves, especially if EmberGen allows multiple grids/emitters per scene in the future.
The last parameter set, confusingly called Renderer Properties, actually presents us with something that is exactly not renderer properties: the actual properties of the smoke shader/material, not the properties of the ray-tracing renderer used to render it. And once again, many of these could be encapsulated in a much more minimalistic, easier-to-set-up set of physically based parameters.
The bottom line is that I (and, I dare say, many others) can usually achieve much better results much faster by using a much smaller set of physically based parameters.
My suggestion would be to:
Have a “Scene” node which actually serves as a representation of a generic, empty scene. The “world” that the simulation happens in.
The scene node would then take in Volumes, Lights, Shapes, Colliders, etc…
By default, scene node without anything plugged in would render pitch black. There are no objects, no volumes and no lights.
The current lights, Point and Directional, would be expanded with a third: Environment. The Environment light would carry all the settings related to global scene lighting, such as these:
[images had to be removed due to link limit ]
Point and Directional light would contain all the parameters related to their light and shadow. None of those would be present in the settings of the volume material.
The Volume node would contain only parameters related to the material of the volume, but a simplified, physically based set. This would give us a very clear distinction of which parameters concern light, and which concern volume material.
The Ground currently present in the scene node would be separated into new, optional Ground node which could be added to the scene like any other collider.
The scene node would then continue to work the way it does now: it would be plugged into the Capture node, which should be renamed to “Render” node for more clarity. The “Style” parameter set would also be a lot more appropriately placed in this Render node:
[images had to be removed due to link limit ]
As it contains mainly parameters concerning the rendering of the scene rather than its layout. So do the Tonemapping and Colors parameter sets.
After these changes, this would be the example workflow if you were to start from scratch without default preset:
You start with an empty scene node.
You plug an Environment Light node into scene node. Now you have empty scene with background sky lighting. This sky lighting will also correctly make all the ambient light (shadows) match the color of the environment. So blue in the case of “Atmosphere” mode.
Right now, the color of ambient, occluded light is gray despite the environment being blue. That really makes no sense. If one wanted neutral gray shadows, which is more often than not desired, they should set the Sky mode to “Uniform Color” and make the color gray.
You’d create a Ground node and plug it into the scene node. Now you have an empty scene with environment light and a ground.
You’d create the same, already existing snake of Primitive>Emitter>Simulation>Volume nodes:
[images had to be removed due to link limit ]
With the exception that the Volume node would only contain the properties of the volume shader/material, and would have no Lights input. Lights are an aspect of the scene, not of the volume material. The result of this node chain would then go into the Scene node. Now you have a scene with environment light, a ground and a smoke simulation.
You’d now add a Directional light to put some sort of sun in your scene. This would add the sun light to the scene in the physically based, realistic additive way, where its shadows would be completely opaque and lit only by the ambient light of the Environment light source.
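The additive behavior described above can be sketched in a few lines: each light adds its energy on top of the ambient term, so a fully shadowed point simply keeps the (correctly tinted) ambient light and never needs a separate "shadow color" control. A toy sketch, with my own hypothetical function names:

```python
def shade(ambient, lights):
    """Additive physically based lighting: start from the ambient/sky term,
    then each light adds (RGB intensity * visibility). A fully shadowed
    point (visibility = 0) is lit by the ambient term alone."""
    total = list(ambient)
    for intensity, visibility in lights:
        for i in range(3):
            total[i] += intensity[i] * visibility
    return total

# Sun fully occluded: the result is exactly the blue-ish ambient alone.
shade((0.1, 0.1, 0.2), [((1.0, 0.9, 0.7), 0.0)])  # -> [0.1, 0.1, 0.2]
```

Shadow color then automatically matches the environment (blue sky gives blue-tinted shadows), which is exactly the behavior missing right now.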
This assembled scene would then go into the “Render” node, which would also have a “Camera” input alongside the “Scene” input. It would be basically the same thing as the “Capture” node is right now, except it would be the place where you tweak rendering parameters such as tone mapping, glow, rendering style and so on. And then you’d output RGBA channels from the Render node the same way you do now.
Sorry for this giant wall of text. Anyway, the main point is to better separate the different aspects of the EmberGen scene so that it’s much easier to set up in a physically based manner.
Right now, EmberGen has a great thing going for it: speed, which saves a lot of artist time. But unfortunately it negates a significant part of this advantage with a UI set up in a way that consumes lots of artist time during the setup process. EmberGen could be much more powerful in terms of artist speed and efficiency if its super fast fluid simulation system were combined with the incredible ease and speed of use of a physically based material and light system, which lets one achieve much better looking results with far fewer parameters.
EDIT: I had to upload all the images to imgur, because according to the site, new users can attach only one image per post. That’s quite a frustrating limitation.
EDIT 2: So I can’t even put more than 2 external links in the post >:( Seems like detailed feedback is the mortal enemy of this forum.