Render performance for a large-scale cloud scene


I am trying to optimize my render setup at the moment.

I have to generate a cloud-filled sky with the sun coming through, and fly closely above the clouds for a couple hundred frames.

To keep this as efficient as I can, I use 15 different low-res VDBs (< 1 MB each) in the InstanceVolumeCacheLoader (with LOD settings to keep times down) and use the particle instancer to populate my sky with around 500 clouds.
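The scattering side of that setup can be sketched in a few lines. This is only an illustration of the idea (pick one of N cached sources per instance, vary position, scale, and rotation to hide repetition); the function and the ranges are made up, not Eddy's actual API:

```python
import random

random.seed(42)

NUM_SOURCES = 15  # low-res VDB caches, as in the setup above
NUM_CLOUDS = 500  # instances scattered across the sky


def scatter_clouds(num_clouds=NUM_CLOUDS, num_sources=NUM_SOURCES):
    """Return (source_index, position, scale, rotation_y) per instance."""
    instances = []
    for _ in range(num_clouds):
        src = random.randrange(num_sources)       # pick one of the cached VDBs
        pos = (random.uniform(-5000.0, 5000.0),   # x: spread across the sky
               random.uniform(900.0, 1100.0),     # y: thin altitude band
               random.uniform(-5000.0, 5000.0))   # z
        scale = random.uniform(0.5, 2.0)          # vary size to hide repetition
        rot_y = random.uniform(0.0, 360.0)        # random heading
        instances.append((src, pos, scale, rot_y))
    return instances


instances = scatter_clouds()
print(len(instances))  # 500
```

The point of reusing a small pool of sources is that only 15 caches ever need to be loaded, no matter how many instances reference them.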

This is my base; later on I will add more high-res clouds closer to camera.

The problem is that even this base already takes around 40 minutes per frame at roughly 900x500 with low progressions, due to light scattering from clouds through clouds, through more clouds.

The question now is: is this a good way to approach this, or are all the LOD calculations and the caching of external files actually making rendering slower? Or would simple geometry with a noise applied be a faster option? (It didn't seem like it in my first test, but maybe I did something wrong.)

Thanks in advance for any helpful comments!

Hi Tim,

I am actually working on the same thing at the moment. I'm not using particles, but rather placing instances on Axis nodes for compositional control, and I am running into the same issue. Christoph helped me with a camera-culling setup that I hope to test today. I needed to get our version of Eddy updated before it would work; hopefully I can now test whether it speeds things up.

I actually ran into an issue where the GPU on the machine locked up on a local render after a day and a half of rendering.

I will report back on whether the camera culling helped. My min/max LOD settings almost seemed reversed: I got softer, less detailed clouds close to camera instead of further away from it. If I come across anything helpful, I'll let you know.


Hi Guys,

In the current state, LOD generation happens up front, before the renderer starts, so there shouldn't be too much overhead at render time. The only thing worth mentioning is that it can generate a lot of unique instances, which can degrade performance.
The max LOD width refers to the feature size in pixels for the highest resolution; the min width is the pixel width at which the coarsest LOD will be selected. Maybe that's why you think it's inverted?
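To make that selection rule concrete, here is a minimal sketch of the logic as described above: features at or above the max width get the finest level, features at or below the min width get the coarsest, with linear interpolation in between. The function name and the level numbering are made up for illustration; Eddy's internals may differ:

```python
def select_lod(feature_size_px, min_width_px, max_width_px, num_lods):
    """Pick an LOD level from the projected feature size in pixels.

    feature_size_px >= max_width_px -> level 0 (highest resolution)
    feature_size_px <= min_width_px -> num_lods - 1 (coarsest)
    In between, interpolate linearly between the two.
    """
    if feature_size_px >= max_width_px:
        return 0
    if feature_size_px <= min_width_px:
        return num_lods - 1
    t = (max_width_px - feature_size_px) / (max_width_px - min_width_px)
    return round(t * (num_lods - 1))


# Larger on screen -> finer LOD; smaller on screen -> coarser LOD.
print(select_lod(32.0, min_width_px=1.0, max_width_px=16.0, num_lods=4))  # 0
print(select_lod(0.5, min_width_px=1.0, max_width_px=16.0, num_lods=4))   # 3
```

So a cloud that is distant (small in pixels) ends up coarse, and a nearby cloud ends up detailed, which matches the intended behavior rather than the inverted one.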

Are you rendering with multiple scattering, or is it all single scattering?

We've seen some strange slowdowns with the instancer under certain circumstances, and it might very well be that you're hitting this here. We did some GPU-specific optimizations, but maybe there is more to it. I'll see if I can replicate a setup where things take way too long.

I suppose your camera is fully inside the volumes?

For the layer I am rendering, all the clouds are on the perimeter and are secondary; the camera never flies through these. I have a hero setup, rendered separately, that the camera does fly through.

The LOD explanation definitely helps, thanks.

Hey Christoph,

The camera moves in and out of the volumes. I have gotten the initial times for low-sample (300 progressions) renders at 600x400 down to about 12 minutes, way better than before, so I guess final resolution and progressions will land around the 1-1.5 h mark. I am now trying to prune the bounds to lower that time further, as that would still be pushing the time frame.
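As a rough sanity check on that kind of estimate, render time often scales roughly linearly with pixel count and progression count (only an approximation; scattering depth and culling change the picture). The target resolution and progression count below are hypothetical, just to show the arithmetic:

```python
def estimate_render_time(base_minutes, base_res, base_progressions,
                         target_res, target_progressions):
    """Linear extrapolation of per-frame render time from a low-res test."""
    pixel_ratio = (target_res[0] * target_res[1]) / (base_res[0] * base_res[1])
    prog_ratio = target_progressions / base_progressions
    return base_minutes * pixel_ratio * prog_ratio


# 12 min at 600x400 / 300 progressions, extrapolated to a
# hypothetical 900x500 at 600 progressions:
est = estimate_render_time(12.0, (600, 400), 300, (900, 500), 600)
print(f"{est:.0f} minutes")  # 45 minutes
```

Multiplied over a couple hundred frames, even small per-frame savings from pruning the bounds add up quickly.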

I really wonder how the demo video was made and what setup they had. I'm working on something now, and I'm fairly new at Eddy besides some random things here and there. I'm running 20 minutes a frame at HD with 200 progressions and motion blur, and 16 minutes without motion blur. That's not really cost effective when you add it up. I'm sure I have some inefficiencies, but I'm not sure where yet. I built my 3 clouds just like in the tutorial and then instanced them with an environment light, and the camera isn't going through any clouds either.