This morning I started with some housekeeping: I updated to 4.5.4 LTS, the final release of the Blender 4.x series, ahead of the provisional release of 5.0 in two days.
Rendering in tiles

I looked into using non-default tile sizes for rendering. Intuitively, smaller tiles should use less VRAM, though I'm not sure that matters when running an RTX 5090 with its 32GB. For now, actual VRAM use during rendering is obscured by a limitation of the Vulkan API, which can't track it.
![Tile size: 1024x1024]()

![Tile size: 2048x2048]()
In a very quick, off-the-cuff test on my current scene, halving the tile size saved only a tiny amount of render time and memory.
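Tile size is also scriptable, so this kind of A/B test doesn't need repeated clicking through the UI. Below is a minimal sketch, assuming Cycles and an already-open .blend file; the sizes in the loop are just the ones compared above, not anything canonical.

```python
# Minimal sketch: time a Cycles render at a few tile sizes.
# Run inside Blender, e.g. `blender -b scene.blend -P tile_test.py`.
import time
import bpy

scene = bpy.context.scene
scene.cycles.use_auto_tile = True  # tiling must be on for tile_size to apply

for size in (512, 1024, 2048):
    scene.cycles.tile_size = size
    start = time.time()
    bpy.ops.render.render(write_still=False)  # render without saving the image
    print(f"Tile size {size}x{size}: {time.time() - start:.1f}s")
```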
Let's try using Persistent Data, where Blender stores render calculations for reuse in follow-up renders. This might be good if, like me, you do an awful lot of iterative renders.
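Persistent Data is a checkbox under Render Properties > Performance, and it's exposed to Python as well. A minimal sketch, again assuming Cycles: enable it, then time a first render against an immediate re-render; the second should start faster because the scene data stays cached between renders.

```python
# Minimal sketch: compare a cold render against a re-render with
# Persistent Data enabled, so scene data survives between renders.
import time
import bpy

scene = bpy.context.scene
scene.render.use_persistent_data = True

for label in ("first render", "re-render"):
    start = time.time()
    bpy.ops.render.render(write_still=False)
    print(f"{label}: {time.time() - start:.1f}s")
```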
![First render with Persistent Data]()

![Tiles: 512x512]()

![Tiles: 2048x2048 (Default)]()
So the smaller tiles saved less than ten seconds. That could matter for an animation, where a few seconds per frame stack up over the total render time, but for stills it's not huge. I'll stay with the default until I run into a problem, then try a smaller tile size.