Difference between 6-point and gradient normal maps

Hi there,
I’m a Unity coder and would like to use EmberGen’s normal maps in my shaders.
The definitions of the maps in the documentation are pretty vague.
- Six Point Normal Map: Interpretation of the normals based on six angles used as an input for a flipbook shader in other 3d-packages.
- Gradient Based Normal Map: Interpretation of the normals based on a gradient used as an input for a flipbook shader in other 3d-packages.

Can somebody give me a clearer description?

I might be able to give some insight into this. Let me first preface this by saying that neither of these is a normal map in the traditional sense. A true normal map records the surface normal of a solid object; smoke, however, does not have normals the way a solid object does.

A normal map is a texture that records the surface of an object to produce a lighting result. These work in the opposite way. Instead of using the surface normal to create a texture that produces the lighting result of that surface, we record the lighting result into a texture that resembles a normal map. It’s not describing the surface of the smoke per se; it’s describing the lighting/shadowing result from the lights within the volume. It would be more accurate to think of these “normals” as something like a lightmap bake. It just happens to be in the format of a normal map that a game engine can read.

The 6-point normal map generates this lightmap normal by using 6 lights, 2 for each axis. For example, the X axis (left/right) uses a light on each side. The lighting result is calculated for each, one of the results is inverted, and the two are combined to give you the X channel of a tangent-space normal. The benefit of the 6-point normals is that they can record how the light scatters within the volume, giving a more realistic result. The drawback is that because they use lights, they are affected by shadows. If you have strong shadows in the cavities of your smoke (due to very thick smoke), the detail in those areas won’t be recorded and will show up as a flat area (0.5 grey) in the normal map. Fortunately, you can mitigate this by adjusting the normal map settings. You can reduce the density so the light penetrates deeper into the volume. This will illuminate the details in the cavities, but it will make your normals much softer than they otherwise would be.
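
To make the combine step concrete, here is a minimal sketch in Python of how two opposing lighting results could be folded into one axis of a tangent-space normal. The `lit_pos`/`lit_neg` names and the exact combine are my assumptions; EmberGen’s internal math may well differ.

```python
def axis_channel(lit_pos: float, lit_neg: float) -> float:
    """Fold two opposing 0..1 lighting responses into one normal map channel.

    Hypothetical combine: invert one side and average, so equal lighting
    from both sides collapses to 0.5 (a "flat" normal on this axis).
    """
    signed = lit_pos - lit_neg   # -1..1
    return signed * 0.5 + 0.5    # remap to the 0..1 texture range

# A voxel fully lit from the right and shadowed from the left ends up
# near 1.0 on the X channel (normal pointing right).
print(axis_channel(lit_pos=0.9, lit_neg=0.1))  # 0.9
```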

The gradient normal map is a bit closer to a traditional normal map. It creates a gradient from higher-density voxels to lower-density ones and gets a direction vector along that gradient. From that vector it can extract a surface normal. These normals more accurately describe the variations in the surface/density, but they are not very good for getting realistic lighting results for something like smoke. They tend to exaggerate every small variation in density that an actual light would just ignore, which makes the normals turn out very noisy/stringy looking. They also tend to lack large shapes and only record the fine details, making it look like the normal map has been run through a high-pass filter.
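
For illustration, here is a rough sketch of that gradient idea in Python/NumPy, assuming `density` is a 3D voxel grid. This is just the concept described above, not EmberGen’s actual implementation.

```python
import numpy as np

density = np.random.rand(32, 32, 32).astype(np.float32)  # stand-in volume

# np.gradient points toward increasing density; flip it so the normals
# point "out" of the smoke, toward lower density.
gx, gy, gz = np.gradient(density)
normals = -np.stack([gx, gy, gz], axis=-1)
length = np.linalg.norm(normals, axis=-1, keepdims=True)
normals = normals / np.maximum(length, 1e-6)

# Packed like a normal map texture: remap each channel from -1..1 to 0..1.
normal_map = normals * 0.5 + 0.5
```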

If you are planning to add normals to something like a smoke cloud or an explosion, stick with the 6-point normal maps. Play with the settings and you should be able to get a result that is fairly close to how the light interacts with the smoke in your sim.

Hope that helps.

Wow, thank you so much for that detailed explanation.
Really useful information, especially their differences in actual use.

I’m working on an interactive wildfire visualization; as the shape and size of the fire/smoke are determined at runtime, I can’t capture the whole smoke column in EmberGen.
My smoke column is composed of multiple cards at the bottom (showing plumes) and particles for the rest of the column.

I tried 6-point lighting, but unfortunately it didn’t work well with my piece-wise approach. Each plume/particle shows its own lit and shadowed side, instead of global lighting. So I created a custom lighting solution.
I will look into adding more detail via normal maps; your explanation puts 6-point normal maps in first place for that experiment.

Thanks again, Patrick

This entire technique is a massive hack, and it has its limitations. There is one other limitation that you may or may not have encountered. Since normal maps were originally designed for opaque/solid surfaces, this technique will never work properly for a cloud of smoke that is lit from behind. A normal map can’t bake any transmittance/back scattering/subsurface scattering; it is not a true volumetric lighting solution. In the EmberGen lighting exports there is an option to export the back scattering, but it is ignored when combining the channels to make the tangent normals. You can get a better result than not using a normal map at all, but it will never look as good as a true volumetric object (i.e. a VDB). The particles have no concept of a contiguous volume; every sprite is treated as a separate object. The sprites don’t self-shadow either, which further breaks the illusion of a contiguous volume.

You could get close if you had the entire scene as a single sprite/texture. You could bake out the back scattering map for back lighting, use the normals for side lighting, and an ambient occlusion map for the front lighting/diffuse texture. Of course it would still be a flat plane and only work for a distant background object, not an interactive scene.
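
As a rough illustration of that single-sprite idea, a shader could blend the three baked responses by where the light sits relative to the camera. Everything below is my assumption, including the linear weights; it’s in Python just to keep it readable:

```python
def shade(front_lit: float, side_lit: float, back_lit: float, d: float) -> float:
    """Blend baked front (AO/diffuse), side (normal map) and back
    (backscatter) lighting. d = dot(toward_light, toward_camera):
    +1 means front-lit, -1 means the light is behind the cloud.
    These weights are one plausible choice, nothing more.
    """
    w_front = max(d, 0.0)
    w_back = max(-d, 0.0)
    w_side = 1.0 - abs(d)
    return w_front * front_lit + w_side * side_lit + w_back * back_lit

print(shade(front_lit=0.7, side_lit=0.5, back_lit=0.9, d=-1.0))  # back-lit -> 0.9
```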

Since you are already looking at entirely custom solutions, one possible option would be to render all the particles to a render target and do your operations on that. For example, for the back scattering you could take the combined alpha channel and do a levels adjustment on the render target so the less dense parts of the alpha are brighter and the denser parts are darker (similar to inverting the alpha), then composite that back into the scene. However, this could become a very deep rabbit hole, and I have no idea how expensive it would be to render/composite everything. It would only really be useful for the back scattering as well. It would, however, solve the back scattering issue where every individual sprite has a glowing halo surrounding its entire edge.
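
The levels adjustment itself is simple. A minimal sketch, assuming the black/white points are just tuned by eye:

```python
def backscatter_from_alpha(alpha: float, black: float = 0.05, white: float = 0.8) -> float:
    """Levels-style remap of the combined alpha so thin smoke (low alpha)
    comes out bright and dense smoke dark. Thresholds are made up.
    """
    t = (alpha - black) / (white - black)
    t = min(max(t, 0.0), 1.0)  # clamp to 0..1
    return 1.0 - t

print(backscatter_from_alpha(0.1))  # thin smoke -> bright backscatter
print(backscatter_from_alpha(0.9))  # dense smoke -> dark
```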

One other possibility you could explore is 6-way lightmaps saved in 2 textures. There was a thread on the realtimevfx forums about this a while ago. It is essentially the precursor to making a tangent normal. It uses the same 6-sided lighting setup, but instead of combining the results into a tangent normal, each lighting direction remains a separate channel: left, top and front in one texture, and right, bottom and back in another. The shader to put everything together is much more complicated/custom though, whereas a normal map is plug and play.
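
The shader-side combine boils down to weighting each baked direction by the light direction in the sprite’s tangent space. A minimal sketch in Python, with the channel packing taken from the description above (everything else is assumed):

```python
def six_way_lighting(tex_a, tex_b, light_dir):
    """tex_a = (left, top, front) responses, tex_b = (right, bottom, back),
    light_dir = (lx, ly, lz) unit vector toward the light in tangent space.
    """
    left, top, front = tex_a
    right, bottom, back = tex_b
    lx, ly, lz = light_dir
    return (left  * max(-lx, 0.0) + right  * max(lx, 0.0) +
            top   * max(ly, 0.0)  + bottom * max(-ly, 0.0) +
            front * max(lz, 0.0)  + back   * max(-lz, 0.0))

# A light coming purely from the left picks up only the "left" bake.
print(six_way_lighting((0.8, 0.5, 0.6), (0.2, 0.3, 0.4), (-1.0, 0.0, 0.0)))  # 0.8
```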

Yeah you’re right, getting normals from smoke is a massive hack. But in vfx, hackery pretty much describes the job, especially in realtime vfx :wink:
I’m wary of going the render target route, as compositing that with other transparent elements in the scene (fire, other smoke, and water) will be very tricky.
I have tried the 2-texture 6-point lighting, but it looked crap. I can only capture a tiny plume in EmberGen and use that on particles, so the lighting it captures doesn’t describe the larger smoke column. Each particle has its own light and shadow side.
For each smoke particle I also have access to a vector that points to the outside of the smoke column (created from our high-level simulation data). I’m thinking of using that as a normal vector and combining it with the normal map data I can get from EmberGen. That might add some nice detail. Otherwise, no sweat; the current solution isn’t looking bad at all, and we’re not aiming for photoreal.
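
If it helps, one common generic way to layer a detail normal over a base normal is “whiteout” blending. A minimal sketch, assuming both vectors are unit length in the same tangent space (Z as “up”); this is a general technique, nothing EmberGen-specific:

```python
import numpy as np

def blend_normals(base, detail):
    """Whiteout blend: sum the XY components, multiply the Z components,
    then renormalize. `base` would be the per-particle column vector,
    `detail` the EmberGen normal map sample.
    """
    n = np.array([base[0] + detail[0],
                  base[1] + detail[1],
                  base[2] * detail[2]])
    return n / np.linalg.norm(n)

# Detail tilted slightly right, layered over a straight-up base normal.
print(blend_normals(np.array([0.0, 0.0, 1.0]),
                    np.array([0.2, 0.0, 0.98])))
```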