Monday, 3 November 2025

Back to the old grindstone


This morning I started with some housekeeping: I updated to the final Blender 4.x release (4.5.4 LTS) ahead of the 5.0 release, provisionally due in two days.



Rendering in tiles

I looked into using non-default tile sizes for rendering. Intuitively, smaller tiles should use less VRAM during a render, though I'm not sure that matters much on an RTX 5090 with its 32GB of VRAM. At the moment, actual VRAM usage during rendering is obscured by a limitation of the Vulkan API, which can't track usage.
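To get a feel for what tile size actually changes, here's a plain-Python sketch of how many tiles a frame splits into at a given square tile size. The 1920x1080 resolution is just an illustrative example, not my actual render settings:

```python
import math

def tile_count(width, height, tile_size):
    """Number of square tiles a frame splits into (partial tiles count)."""
    return math.ceil(width / tile_size) * math.ceil(height / tile_size)

# A 1920x1080 frame at the tile sizes discussed here:
for size in (2048, 1024, 512):
    print(f"{size}x{size}: {tile_count(1920, 1080, size)} tiles")
# 2048x2048 -> 1 tile, 1024x1024 -> 4 tiles, 512x512 -> 12 tiles
```

At the default 2048x2048 a 1080p frame is a single tile, so smaller sizes mean the GPU holds less of the frame buffer at once, at the cost of more per-tile overhead.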

Tile size: 1024x1024 

Tile size: 2048x2048

In a very quick, off-the-cuff test on my current scene, halving the tile size saved only a tiny amount of render time and memory.

Let's try Persistent Data, where Blender keeps render calculations in memory for reuse in follow-up renders. This could be a real win if, like me, you do an awful lot of iterative renders.

First render with Persistent Data

The follow-on render took seven minutes and five seconds: a catastrophic impact on render time, probably because not clearing out the render calculations exhausted available memory and performance collapsed. For a smaller scene, keeping part of the rendering process around could save some time. Not here, though.

Tiles: 512x512

Tiles: 2048x2048 (Default)

So the smaller tiles saved less than ten seconds. That could matter for an animation, where a few seconds per frame stack up over the total render time, but for stills it's not huge. I'll stay with the default until I hit a problem, then try a smaller tile size.

