Blender rendering experiment
I decided to try a more complex render to see what the optimum tile size is for my computer.
The box is slightly dented and has a simple image texture, plugged into the material output's displacement socket via a normal map node (to add a little complexity to the render).
The fence has a procedural texture which is difficult to see because it is black, small, and not the object closest to the camera, but Cycles still has to render it.
The grass and plants are in six particle systems using Blenderguru's Grass Essentials pack. I added these because they can really increase render times.
The image was rendered at 1920x1080 (100%) with 300 samples. The "Border" check box was ticked, so only the region within the camera view was rendered, and the device was set to GPU Compute.
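For anyone who wants to repeat the test from a script rather than the UI, the settings above can be sketched in Blender's Python API. This is only an illustrative sketch, assuming a pre-3.0 Blender where tile size is set per axis via `tile_x`/`tile_y` (newer versions use a single `cycles.tile_size` and handle tiling automatically):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Resolution: 1920x1080 at 100%
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100

# Render only the camera border region
scene.render.use_border = True

# 300 samples on the GPU
scene.cycles.samples = 300
scene.cycles.device = 'GPU'

# Tile size under test, e.g. 384x384
scene.render.tile_x = 384
scene.render.tile_y = 384
```

Looping over a list of tile sizes and timing `bpy.ops.render.render()` for each would automate the whole experiment.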
The times for the renders with different tile sizes were as follows:
256x256: 6 minutes 59.03 seconds.
384x384: 6 minutes 45.27 seconds.
512x512: 7 minutes 16.53 seconds.
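Out of curiosity, the timings above can be compared directly in seconds, along with how many tiles each size implies for a full 1920x1080 frame (with border rendering on, the real tile counts will be lower, so treat these as upper bounds; the arithmetic here is mine, not from the render log):

```python
import math

width, height = 1920, 1080
# Render times converted to seconds
times = {256: 6 * 60 + 59.03, 384: 6 * 60 + 45.27, 512: 7 * 60 + 16.53}

for size, secs in times.items():
    # Tiles needed to cover the full frame (partial tiles count)
    tiles = math.ceil(width / size) * math.ceil(height / size)
    print(f"{size}x{size}: {tiles} tiles, {secs:.2f} s total")

best = min(times, key=times.get)  # 384 in this run
for size, secs in times.items():
    if size != best:
        saving = (secs - times[best]) / secs * 100
        print(f"{best} vs {size}: {saving:.1f}% faster")
```

So 384 came out roughly 3% faster than 256 and about 7% faster than 512 in this single run.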
I know the results from two experiments don't constitute a proper statistical sample, but it does look as though a 384-pixel-square tile suits my computer better than a 256- or 512-pixel-square one.
I chose 384 pixels because, in the earlier, simpler render, the results from 256 and 512 were close, so I just plumped for the midpoint between the two to see if it would be faster, which it was.