The movie Toy Story needed top-of-the-line computers in 1995 to render every frame, and that took a lot of time (800,000 machine-hours according to Wikipedia).

Would it be possible to render it in real time with modern (2025) GPUs on a single home computer?

  • Deestan@lemmy.world · 3 days ago

    Things that can affect it, with some wild estimates of how much they reduce the 800k hours:

    • Processors are 10-100 times faster. Divide by 100ish.
    • A common laptop CPU has 16 cores. Divide by 16.
    • GPUs and CPUs have more and faster floating-point operations. Divide by 10.
    • RAM is faster and processor caches are larger. Divide by 10.
    • Modern processors have more and stronger SIMD instructions. Divide by 10.
    • Ray tracing algorithms may be replaced with more efficient ones. Divide by 2.

    That brings it down to 3-4 hours I think, which can be brought to realtime by tweaking resolution (see the arithmetic sketch below).

    So it looks plausible!
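
    A quick back-of-envelope check of those divisors (a minimal sketch in Python; every factor is the wild guess from the list above, not a benchmark):

    ```python
    # Sanity check of the estimate above. All speedup factors are rough guesses.
    TOTAL_MACHINE_HOURS = 800_000  # Toy Story render time, per Wikipedia

    # [processor speed, cores, math throughput, memory, SIMD, better algorithms]
    speedups_conservative = [10, 16, 10, 10, 10, 2]
    speedups_optimistic = [100, 16, 10, 10, 10, 2]

    def remaining_hours(factors, total=TOTAL_MACHINE_HOURS):
        """Divide the original machine-hours by each guessed speedup in turn."""
        hours = total
        for factor in factors:
            hours /= factor
        return hours

    print(f"conservative: {remaining_hours(speedups_conservative):.2f} hours")  # ~2.5
    print(f"optimistic:   {remaining_hours(speedups_optimistic):.2f} hours")    # ~0.25
    ```

    That lands a bit under the 3-4 hour figure, but it is the same order of magnitude.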

    • magic_lobster_party@fedia.io · 3 days ago

      They used top-of-the-line hardware specialized for 3D rendering. It seems they used Silicon Graphics workstations, which cost more than $10k back in the day, not something the typical consumer would buy. The calculations above are probably a bit off with this taken into account.

      Then they likely relied on rendering techniques optimized for the hardware they had. I suspect modern GPUs aren’t exactly compatible with these old rendering pipelines.

      So multiply by 10ish and I think we have a more accurate number (rough adjustment sketched below).
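
      Folding that extra ×10 into the earlier ballpark (still a guess stacked on guesses):

      ```python
      # Adjust the earlier 0.25-2.5 hour ballpark for the fact that the 1995 baseline
      # was high-end SGI hardware, not an average machine. The 10x factor is a guess.
      ballpark_hours = (0.25, 2.5)      # range from the sketch above
      hardware_penalty = 10             # guessed correction for the SGI baseline

      adjusted = tuple(h * hardware_penalty for h in ballpark_hours)
      print(adjusted)  # (2.5, 25.0) hours for the full film
      ```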

      • lime!@feddit.nu · 3 days ago

        Remember how extreme hardware progress was back then: the devkit for the N64 was $250k in 1993, but the console was $250 in 1996.

        • magic_lobster_party@fedia.io · 3 days ago

          Most of that cost was likely not for the hardware itself, but rather Nintendo greed. Most of it was probably for early access to Nintendo’s next console and possibly direct support from Nintendo.

          • lime!@feddit.nu · 3 days ago

            The devkit was an SGI supercomputer, since SGI designed the CPU. No Nintendo hardware in it.

      • BuelldozerA · edited · 3 days ago

        There is no comparison between a top-of-the-line SGI workstation from 1993-1995 and a gaming rig built in 2025. The 2025 gaming rig is literally orders of magnitude more powerful.

        In 1993 the very best that SGI could sell you was an Onyx RealityEngine2 that cost an eye-watering $250,000 in 1993 money ($553,000 today).

        A full spec breakdown would be boring and difficult, but the best you could do in a “deskside” configuration was 4 single-core MIPS processors, either R4400s at 295 MHz or R10000s at 195 MHz, with something like 2 GB of memory. The RE2 system could maybe pull 500 megaflops.

        A 2025 gaming rig can have a 12-core (or more) processor clocked at 5 GHz and 64 GB of RAM. An Nvidia RTX 4060 is rated for roughly 15 teraflops of FP32 compute.

        A modern gaming rig absolutely, completely, and totally curb-stomps anything SGI could build in the early-to-mid 90s. The performance delta is so wide that it’s difficult to adequately express. The way Pixar got it done was by having a whole bunch of SGI systems working together, but 30 years of advancements in hardware, software, and math have nearly, if not completely, erased even that advantage.

        If a single modern gaming rig can’t replace all of the Pixar SGI stations combined, it’s got to be very close.
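
        For scale, a crude flops-only comparison using the figures quoted above (it ignores memory, bandwidth, and 30 years of software progress, so treat it as illustration only):

        ```python
        # Raw peak-compute comparison based on the numbers in this thread.
        # Rendering throughput depends on much more than peak flops.
        onyx_re2_flops = 500e6   # ~500 megaflops, deskside Onyx RealityEngine2 (figure above)
        rtx_4060_flops = 15e12   # ~15 teraflops FP32, Nvidia RTX 4060

        ratio = rtx_4060_flops / onyx_re2_flops
        print(f"One RTX 4060 is roughly {ratio:,.0f}x the peak flops of one deskside Onyx")
        # -> about 30,000x, before counting the modern CPU, RAM, and better algorithms
        ```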