The movie Toy Story needed top-of-the-line computers in 1995 to render every frame, and it took a lot of time (800,000 machine-hours according to Wikipedia).

Would it be possible to render it in real time on a single home computer with modern (2025) GPUs?

  • Ozymandias88@feddit.uk
    Others have covered modern renderers and their shortcuts, but if you wanted an exact replica: films like this are rendered on large HPC clusters.

    Looking at the Top500 stats for HPCs, the average Top500 cluster in 1995 was about 1.1 TFLOPS, and today that seems to be around 23.4 PFLOPS.

    That's an increase of roughly 21,000 times.

    So 800,000 machine-hours on 1995 hardware becomes about 38 hours on today's average Top500 cluster.
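
    If it helps, here's the same back-of-envelope scaling in a few lines of Python (the FLOPS figures are just the rough Top500 averages quoted above, not exact numbers):

    ```python
    # Back-of-envelope check of the scaling argument above.
    flops_1995 = 1.1e12        # ~1.1 TFLOPS, quoted 1995 Top500 average
    flops_today = 23.4e15      # ~23.4 PFLOPS, quoted current Top500 average
    render_hours_1995 = 800_000

    speedup = flops_today / flops_1995                 # ~21,000x
    render_hours_today = render_hours_1995 / speedup
    print(f"speedup ~{speedup:,.0f}x, render time ~{render_hours_today:.0f} hours")
    # -> speedup ~21,273x, render time ~38 hours
    ```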

    Edit: I found a few unconfirmed calculations that Toy Story was rendered on 294 CPUs in SPARCstation 20s with a combined power of only 8 GFLOPS. That would mean a render time of about 325,000 CPU-hours, or 1,100 wall-clock hours. So No. 500 of the Top500 has the theoretical raw power to render Toy Story in about 15 minutes, and you'd only need around 7 TFLOPS to render it in its 75-minute runtime.
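
    Working those unconfirmed numbers through (so treat this as an estimate, not a measurement):

    ```python
    # Rough check of the figures in the edit above.
    cpus = 294
    cluster_gflops = 8.0        # combined SPARCstation 20 throughput, GFLOPS
    wall_clock_hours = 1_100

    cpu_hours = cpus * wall_clock_hours                      # ~323,400 CPU-hours
    work_gflop_hours = cluster_gflops * wall_clock_hours     # ~8,800 GFLOP-hours of work

    runtime_hours = 75 / 60                                  # film runtime, 1.25 hours
    gflops_for_realtime = work_gflop_hours / runtime_hours   # ~7,040 GFLOPS
    print(f"{cpu_hours:,} CPU-hours, need ~{gflops_for_realtime/1000:.1f} TFLOPS for real time")
    # -> 323,400 CPU-hours, need ~7.0 TFLOPS for real time
    ```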

    Still, we're talking multimillion-dollar HPC clusters here, not your home rig, if you want to render exactly the same thing in the same way.

    If you could update the renderer to take advantage of modern GPU hardware, then it seems like you would have enough power to achieve something close to real-time rendering.
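
    As a very rough sanity check (and note that GPU peak FP32 numbers aren't directly comparable to sustained throughput on 1995 CPUs), here's how the ~7 TFLOPS estimate stacks up against an assumed high-end consumer GPU rated around 80 TFLOPS FP32:

    ```python
    # Order-of-magnitude comparison only; the 80 TFLOPS figure is an assumed
    # peak FP32 rating for a hypothetical modern consumer GPU.
    tflops_needed_for_realtime = 7.0
    gpu_fp32_tflops = 80.0

    headroom = gpu_fp32_tflops / tflops_needed_for_realtime
    print(f"~{headroom:.0f}x more raw FP32 than the real-time estimate")
    # -> ~11x more raw FP32 than the real-time estimate
    ```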