Unreal Engine + EmberGen ideas

I can’t wait to start getting content into UE and finding ways to use it.

One thing I’m looking forward to in particular is trying out flow maps and motion vectors to smooth the animation, which would allow a larger step between frames and therefore a longer loop. I also want to see what we can do with POM and separate flame/smoke materials.

What are you guys planning on doing?


Within our alpha we do have motion vector exports, but the shader required to get them working inside of UE4 is a bit funky with what we currently have. We think that our channels may be off and that they need to be swizzled before they work properly inside of UE4. We’ll have to do more R&D on this front.

In terms of POM, I haven’t thought much about it for fire/smoke, but perhaps our future volume exports could replace something like that entirely. It’d be interesting to see a POM example of smoke in UE4 if you have one.


I’ve been looking at examples and it seems like it’s just a bit of math. Get your current time between frames and interpolate the UV to sample somewhere between the previous pixel and the next pixel. I think I might be thinking about that wrong though.

I guess you’d just be transforming the vector into tangent space, inverting it, scaling it by that inter-frame progress value, and then sampling that pixel instead.
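That’s roughly the standard flipbook motion-vector trick: advect the current frame forward along its motion vector, advect the next frame backward, and crossfade. Here’s a minimal Python sketch of the idea; the function names, the nearest-neighbour sampler, and the `strength` parameter are all my own for illustration, not anything from EmberGen or UE4:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def sample(tex, uv):
    """Nearest-neighbour lookup into a tiny 2D grid standing in for a texture."""
    h, w = len(tex), len(tex[0])
    x = max(0, min(int(uv[0] * w), w - 1))
    y = max(0, min(int(uv[1] * h), h - 1))
    return tex[y][x]

def flipbook_mv_blend(frame_a, frame_b, mv_a, mv_b, uv, frac, strength=1.0):
    """Blend two flipbook frames along their motion vectors.

    frame_a / frame_b: the current and next flipbook frames.
    mv_a / mv_b: per-frame motion-vector textures, values already decoded
                 from [0,1] into signed UV-space offsets.
    frac: 0..1 progress between the two frames.
    """
    # Advect the current frame forward along its motion vector...
    va = sample(mv_a, uv)
    uv_a = (uv[0] - va[0] * frac * strength,
            uv[1] - va[1] * frac * strength)
    # ...and the next frame backward, then crossfade between them.
    vb = sample(mv_b, uv)
    uv_b = (uv[0] + vb[0] * (1.0 - frac) * strength,
            uv[1] + vb[1] * (1.0 - frac) * strength)
    return lerp(sample(frame_a, uv_a), sample(frame_b, uv_b), frac)
```

With zero motion vectors this degrades gracefully to a plain crossfade, which is why it hides the pop between widely spaced frames without breaking anything when the flow is still.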

I developed a workflow like this: EmberGen for the sim -> export to VDB -> import into Houdini and process -> render to atlas -> import into Unreal. This is one atlas with a pretty simple shader (lit using spherically generated normals): https://imgur.com/V8H0Udp

That’s pretty nice! What did you do with it in Houdini?

You may want to add a bit of DepthFade in your shader so it doesn’t show hard edges against other surfaces.
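For anyone unfamiliar with the node, DepthFade just ramps opacity down as the translucent pixel approaches opaque geometry behind it. A rough Python sketch of the math (parameter names are mine; UE4’s node takes a FadeDistance input and does the equivalent internally):

```python
def depth_fade(scene_depth, pixel_depth, fade_distance):
    """Approximate UE4's DepthFade: returns an opacity multiplier in [0, 1].

    scene_depth: depth of the opaque surface behind this pixel.
    pixel_depth: depth of the translucent pixel being shaded.
    fade_distance: world-space distance over which to fade out.
    """
    # 0 when the translucent pixel touches the opaque surface,
    # ramping up to 1 once it is fade_distance or more in front of it.
    d = (scene_depth - pixel_depth) / fade_distance
    return max(0.0, min(1.0, d))
```

Multiplying the material’s opacity by this value is what removes the hard intersection line against walls and floors.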

What’s the need to export to houdini to do the atlas? You can generate the atlas directly in EmberGen :smiley:

Did you need extra camera controls or anything?

Hello. I have a simulation cache pipeline from Maya to UE4; it also works from Houdini. I call it Volume Raymarching Video Cache (VRVC).
I am using the h264 codec for this video cache into UE4. https://www.youtube.com/watch?v=eTNwQbOnLZo
All volume objects are cached from a Maya simulation. It runs as real-time video with 400 small animated stars.
The 12 explosions and other objects, and the galaxies, are a static cache.
If you are interested in exporting to UE4, Unity, or other engines, I will be glad to help.


I needed more control over shadows and multiple lights; I wanted more control over everything in the smoke shader. I also needed to add some VEX functions, like trimming distant parts of the simulation.

Also, in Houdini I can set up the camera once and it stays. For an iterative workflow that is a must.

There is some DepthFade there. I can’t make it stronger because the smoke moves along the wall.


I think our beta will provide much better controls for shadows and such. With the upcoming beta we also have cameras, so you can do the iterations you’re looking for. I think you’ll be able to kick Houdini out of your smoke/fire workflow soon :slight_smile:

As for this part: “Also I needed to add some vex functions like trimming distanced part of simulation.”

What do you mean exactly?

Any tips on how this works? The volumes look pretty good. Are you using an h264 codec as your 3d texture input?

Could you elaborate how you encode the volumetric data with h264. Video is typically a two-dimensional image in time, so do you slice the volumes in a special way to still fit as a 2D video?

For example, a smoke pillar: sometimes a good-looking sim is not stable, and sometimes there are leaks outside the expected range, so for such parts of the volume you lerp the density to nothing. Sometimes for fire you need to trim the bottom a little. Sometimes you want to apply a math operation to the volume, like a power. I love EmberGen, but it is hard to replace Houdini :wink:
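Those VEX-style cleanups boil down to simple per-voxel math. A minimal Python sketch of the two operations described above, fading density to nothing beyond a radius and applying a power curve; all names and the linear-ramp falloff are my own assumptions, not the actual VEX used:

```python
import math

def trim_and_shape(density, positions, center, fade_start, fade_end, gamma=1.0):
    """Fade voxel density to zero beyond a radius, then apply a power curve.

    density: flat list of voxel density values.
    positions: matching list of (x, y, z) voxel positions.
    fade_start / fade_end: radii between which density ramps from full to zero.
    gamma: power applied to the result (e.g. >1 thins out faint smoke).
    """
    out = []
    for d, p in zip(density, positions):
        r = math.dist(p, center)
        # 1 inside fade_start, 0 beyond fade_end, linear ramp in between.
        t = 1.0 - max(0.0, min(1.0, (r - fade_start) / (fade_end - fade_start)))
        out.append((d * t) ** gamma)
    return out
```

The same shape of loop covers “trim the bottom for fire” too; you would just ramp on one axis instead of the radius.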

Yes, I use the h264 codec as the input for the 3D texture. All platforms can work with it.
I have worked in both film production and game production. It would be very convenient to get a finished volumetric effect cache straight from EmberGen into UE4, Unity, or others. I’ll be glad to help.
If you use the standard 2D approach, then after the usual atlas (flipbook) you still need to create a particle system and configure it, which takes time. For AR/VR this is a new level of volumetric effects, and it is also very much in demand in movie preproduction.


I use a slice layer render in Maya for the 3D volume video texture, compiled into a video atlas.
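If the slicing approach isn’t clear: each Z slice of the volume becomes one tile of a 2D atlas frame, and the video is just those atlas frames over time. A minimal Python sketch of the packing (function name and grid layout are my own assumptions about the approach, not the actual Maya setup):

```python
def volume_to_atlas(volume, cols):
    """Pack the Z slices of a volume (a list of 2D slices) into one 2D atlas,
    laid out left-to-right, top-to-bottom."""
    depth = len(volume)
    h, w = len(volume[0]), len(volume[0][0])
    rows = (depth + cols - 1) // cols  # enough rows to hold every slice
    atlas = [[0.0] * (w * cols) for _ in range(h * rows)]
    for z, sl in enumerate(volume):
        ox, oy = (z % cols) * w, (z // cols) * h  # tile origin for slice z
        for y in range(h):
            for x in range(w):
                atlas[oy + y][ox + x] = sl[y][x]
    return atlas
```

Encoding a sequence of these atlases as h264 then gives you a “3D texture over time” that any video decoder can play back.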


I see what you mean. Support for this would be great.

A description with images is in the thread.

So more questions then:

Are you using ryan brucks volume shader for rendering or did you make your own?
How are you playing the video in a material in unreal?
What is the file size of the video for some of your bigger volumes?
Why did you choose a video format instead of something else?
Did you come up with the video format idea?
Does compression reduce the volume quality?

1. The basis for rendering is Ryan Brucks’ shader; I modified it a bit.
Once the principle of operation of that shader is clear, you can recreate it as your own shader with your own improvements and optimizations.
2. I use a Render Target for the texture input.
3. The video size is ~13 MB for a 4K video, 150 frames, at good quality without artifacts.
Sometimes, without visible image problems, the file size was 7-9 MB at 4K (4096*4096). Sometimes the file size was 1 MB for 60 frames at 4K. If you ask why the file is so small: there are many gaps in the atlas, and they compress very well.
4. The video format is supported by a huge number of platforms and video cards. Also, empty pixels in the texture compress down and do not take up much memory in the file, unlike the coordinates in a voxel container.
And of course, hardware decoding speeds up the process.
5. The process is controllable, and strong compression is not very noticeable during voxel animation. Inter-frame blending (interpolation) solves the problems that strong compression introduces.
All you need to do is create universal effects for all platforms: UE4/Unity/other.
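To tie the pieces together: at render time the raymarcher has to do the inverse of the atlas packing, mapping a 3D sample position back to a 2D UV in the decoded video frame. A minimal Python sketch of that address calculation (names are mine; a real shader like Ryan Brucks’ also samples the next slice and blends between the two):

```python
def volume_uvw_to_atlas_uv(u, v, w, cols, rows):
    """Map a 3D texture coordinate (u, v, w in [0, 1)) to the 2D atlas UV
    of the nearest Z slice in a cols-by-rows atlas."""
    depth = cols * rows
    z = min(int(w * depth), depth - 1)   # which slice tile to read
    tile_u = (z % cols + u) / cols       # offset into that tile, horizontally
    tile_v = (z // cols + v) / rows      # and vertically
    return tile_u, tile_v
```

Each raymarch step converts its world position to (u, v, w), runs it through a mapping like this, and samples the video texture at the result.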
