• froost@lemmy.world · 7 months ago

    Those are valid points, but nothing there is insurmountable with even a little bit of advancement.

    For example, here is a relatively old demo from 2021, from before DALL-E 2, Stable Diffusion, or any of the video consistency models were out: https://www.youtube.com/watch?v=P1IcaBn3ej0

    Is it perfect? No, there are artifacts, and the quality only matches the training driving dataset, but this was an old, specialized example built on a completely different (now archaic) architecture. The newer image generation models are much better, and frame-to-frame consistency models are improving by the week; some of them are nearly there (though obviously not in real time).

    About the red-on-red bleed/background separation issues: for 3D-rendered games it is relatively straightforward to get not just the color but also depth and normal maps from the buffer (especially assuming this is done with the blessing of the game developers, game engines, DirectX, or other APIs). I don’t know if you follow the developments, but with ControlNet and Stable Diffusion, for example, it is trivial to constrain the generated image on color, depth, normal map, pose, line outline, and much more. So if the character is wearing red over a red background, the two are separated by the depth map, and the generated image will preserve that depth, or their surface normals will differ. You can use whatever aspects of the input game you like as constraints on the generation.
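    As a toy illustration of the depth idea (just numpy, not real ControlNet code; all the values are made up), here is how a depth buffer separates a red character from a red background where color alone fails:

```python
import numpy as np

# Hypothetical 4x4 frame: a red character sprite in front of a red background.
H, W = 4, 4
color = np.zeros((H, W, 3), dtype=np.uint8)
color[..., 0] = 200  # every pixel is the same red

# Depth buffer from the render (smaller = closer). The character
# occupies the 2x2 centre of the frame.
depth = np.full((H, W), 10.0)
depth[1:3, 1:3] = 2.0

# Color alone cannot separate character from background:
color_mask = np.any(color != color[0, 0], axis=-1)
print(color_mask.any())  # False: no pixel differs in color

# Depth separates them trivially -- this is the kind of per-pixel
# constraint a ControlNet-style conditioner consumes alongside the frame.
character_mask = depth < 5.0
print(int(character_mask.sum()))  # 4 pixels belong to the character
```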

    I am not saying we can do this right now. The generation speeds for high-quality images, plus all the other required tools in the workflow (understanding the context and generating captions / latent-space matching, extracting the color/depth/normal constraints, enforcing temporal consistency using the previous N frames, and generating the final image, all fast enough) obviously pose a ton of challenges. But it is quite possible, and I fully expect to see working non-realtime demos within a year, or a couple of years at most.
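    A minimal sketch of just the temporal-consistency step, assuming a naive blend against the average of the previous N frames (a real system would use optical-flow warping or a dedicated consistency model; N and the blend weight here are invented):

```python
import numpy as np
from collections import deque

# Keep the last N stabilized frames and blend each newly generated
# frame against their average to suppress frame-to-frame flicker.
N = 4
history = deque(maxlen=N)

def stabilize(generated_frame: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    if history:
        reference = np.mean(history, axis=0)
        out = alpha * generated_frame + (1 - alpha) * reference
    else:
        out = generated_frame.astype(float)
    history.append(out)
    return out

# A deliberately flickering sequence (alternating dark/bright frames)
# gets pulled toward a steadier brightness over time.
frames = [np.full((2, 2), v, dtype=float) for v in (0, 100, 0, 100)]
outputs = [stabilize(f) for f in frames]
print(round(float(outputs[-1].mean()), 2))
```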

    In 2D games it may be harder due to pixelation. As you said, there are upscaling algorithms that work more or less well to increase the resolution somewhat, though nothing photorealistic, obviously. There are also methods like this one, which uses segmentation algorithms to split the image apart and regenerate the pieces with AI generators: https://github.com/darvin/X.RetroGameAIRemaster

    To be honest, to render 2D games in a different style you can do much better, even now. Most retro games have already had their sprites and backgrounds extracted from the originals; you can upscale them once (by the publisher, or as fan edits), and then you don’t even need to worry about real-time generation. I wanted to upscale Laura Bow 2 this way, for example. One random example I just found: https://www.youtube.com/watch?v=kBFMKroTuXE
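    The “upscale once” step can be as simple as a nearest-neighbour resize before any AI pass (a hypothetical minimal sketch; real remasters would use an AI upscaler or regenerate each sprite):

```python
import numpy as np

def upscale(sprite: np.ndarray, factor: int) -> np.ndarray:
    # Repeat every pixel factor x factor times, preserving hard pixel edges.
    return np.kron(sprite, np.ones((factor, factor), dtype=sprite.dtype))

# A 2x2 pixel-art "sprite" scaled up 4x becomes 8x8.
sprite = np.array([[1, 2],
                   [3, 4]], dtype=np.uint8)
big = upscale(sprite, 4)
print(big.shape)  # (8, 8)
```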

    Replacing the sprites/backgrounds won’t make them truly photorealistic with dynamic drop shadows and lighting changes, but once the sprites are at a high enough resolution, you can feed them into full-frame regeneration frame by frame. Then again, I probably don’t want Guybrush to look like Gabriel Knight 2 or other FMV games, so I’m not sure about that angle.

    • webghost0101@sopuli.xyz · 7 months ago

      First of all thank you for the detailed reply.

      +10 for “randomly” linking Latent Vision. That’s the guy who made IPAdapter for Stable Diffusion, which is hands down revolutionary for my ComfyUI workflows.

      I actually fully agree on all the 3D stuff; I remember that GTA video.

      My comment was reflecting the following idea, ahem:

      “putting the whole image through AI. Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.” -bjoern_tantau

      But on the topic of modern 3D, I expect we can go very far. Generate high-quality models of objects, then generate from those a low-poly version plus a consistent prompt for the game engine’s AI to use during development and live gameplay, including raytracing matrices (not RTX, but something similar, for detection; admittedly I coded that exactly once to demonstrate it for an exam and barely understand it). What I’m trying to say is that some clever people will figure out how to calculate collisions and interactions using low-poly models plus AI.

      I am very impressed by X.RetroGameAIRemaster, but I think it may also depend on the game.

      In these older games, the consistency of the gameplay is core to their identity, even before the graphics. Hitbox detection is pixel-based, which is core gameplay and influences difficulty. Hardware limitations in a way also became part of the gameplay design.
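      Pixel-based hitbox detection of the kind these games rely on can be sketched like this (a toy version, with an invented helper: two sprites collide only where their opaque pixels actually overlap, not where their bounding boxes do):

```python
import numpy as np

def pixel_collision(mask_a: np.ndarray, mask_b: np.ndarray, offset: tuple) -> bool:
    # Place both opacity masks on a shared canvas, with mask_b shifted
    # by (dy, dx), then check whether any opaque pixels coincide.
    h, w = mask_a.shape
    canvas_a = np.zeros((h * 2, w * 2), dtype=bool)
    canvas_b = np.zeros((h * 2, w * 2), dtype=bool)
    canvas_a[:h, :w] = mask_a
    dy, dx = offset
    canvas_b[dy:dy + h, dx:dx + w] = mask_b
    return bool(np.logical_and(canvas_a, canvas_b).any())

# A diagonal 2x2 sprite: its bounding box overlaps itself at offset (1, 0),
# but no opaque pixels actually touch there.
sprite = np.array([[True, False],
                   [False, True]])
print(pixel_collision(sprite, sprite, (0, 0)))  # True
print(pixel_collision(sprite, sprite, (1, 0)))  # False
```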

      You can upscale them and give many of them fancy textures, maybe even layers of textures, modded items, and accessibility cheats.

      But the premise was: “Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.”

      An AI can cook up something like that, but it will be a new, distinct Mario game if you change that much of what’s happening on screen.

      Anyway, I’m tired and probably sound more like a lunatic the longer I talk, so again, thanks for the good read.