• GooberEar@lemmy.wtf · 1 month ago

    Fun! I always like to imagine what ancient technology would be like with more modern applications. I dunno why, but the idea of browsing the web on a Game Boy or watching full-motion video on an Atari 7800 is fascinating.

  • tal · 1 month ago

    There has got to be some kind of simple compression that the Game Boy processor can handle in real time that would let it push a typical frame within the available data rate. Maybe use run-length encoding, as it looks like most of those images have large flat color areas.
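
    For concreteness, a minimal sketch of that kind of scheme - byte-pair RLE, a run length then the byte to repeat, so flat areas collapse to two bytes per run. Ordinary C rather than Game Boy assembly, not tied to any particular pixel packing, and whether the real cycle budget works out is a separate question:

        /* Decode count/value byte pairs.  Each run of identical bytes
         * in the frame costs two bytes in the stream, so large flat
         * color areas compress heavily. */
        #include <stddef.h>
        #include <stdint.h>

        size_t rle_decode(const uint8_t *src, size_t src_len,
                          uint8_t *dst, size_t dst_cap)
        {
            size_t out = 0;
            for (size_t in = 0; in + 1 < src_len; in += 2) {
                uint8_t run = src[in];      /* 1..255 repeats          */
                uint8_t val = src[in + 1];  /* byte of packed pixels   */
                for (uint8_t i = 0; i < run && out < dst_cap; i++)
                    dst[out++] = val;
            }
            return out;  /* bytes written */
        }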

    • mindbleach@sh.itjust.works · 1 month ago

      Per-pixel 60 Hz video would need ~2.7 Mbps, and 5:1 sounds doable. But at some point VRAM becomes the bottleneck. Video memory is gated off for half of every scanline. You can spam data from the CPU, and the video chip will politely ignore mistimed writes. There is a DMA process - but it’s also not fast enough, and it locks up the CPU.
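
      For reference, that figure is just the DMG screen at full rate:

          160 x 144 px x 2 bpp x 60 Hz = 2,764,800 bit/s ≈ 2.76 Mbit/s

      so 5:1 puts it around 550 kbit/s.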

      Consider an alternative: only changing the tilemap. Even on DMG, it’s easy to blast a whole frame. It doesn’t fit in the vertical blanking period - but the screen takes an entire 60th of a second to update, top to bottom. You can be done with each row long before it’s drawn.
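
      A rough sketch of that race-the-beam loop, as C against the raw registers (STAT at 0xFF41, BG map at 0x9800, per the usual Pan Docs layout). The byte-by-byte STAT poll is a simplification; real code would batch writes per hblank and count cycles:

          #include <stdint.h>

          #define STAT   (*(volatile uint8_t *)0xFF41)  /* low 2 bits = PPU mode */
          #define BG_MAP ((volatile uint8_t *)0x9800)   /* 32x32 tile indices    */

          /* Push one frame of tile indices (20x18 visible tiles), top to
           * bottom.  The PPU doesn't read tilemap row r until scanline r*8,
           * so staying ahead of the beam means each row is in place before
           * it's drawn; we only have to dodge mode 3, when VRAM is locked. */
          void stream_frame(const uint8_t *tiles)
          {
              for (uint8_t row = 0; row < 18; row++)
                  for (uint8_t col = 0; col < 20; col++) {
                      while ((STAT & 0x03) == 0x03)
                          ;  /* wait out pixel transfer */
                      BG_MAP[row * 32u + col] = *tiles++;
                  }
          }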

      The GBC’s attribute map gives the screen a full set of 512 tiles, which can be flipped vertically and/or horizontally for 2048 distinct blocks. If you forgo color, it can additionally do eight greyscale palettes, e.g. for different brightness or gamma.
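
      All of that lives in the per-tile attribute byte (written to VRAM bank 1 at the same address as the tile index); going by the usual Pan Docs layout, packing one looks roughly like:

          #include <stdint.h>

          /* GBC background-map attribute byte: palette in bits 0-2, tile
           * VRAM bank in bit 3 (the second set of 256 tiles), X/Y flip in
           * bits 5-6. */
          static inline uint8_t bg_attr(uint8_t palette,   /* 0..7 */
                                        uint8_t tile_bank, /* 0..1 */
                                        uint8_t flip_x, uint8_t flip_y)
          {
              return (uint8_t)((palette & 0x07)
                             | ((tile_bank & 1) << 3)
                             | ((flip_x   & 1) << 5)
                             | ((flip_y   & 1) << 6));
          }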

      Encoding, though. Yeesh.

      • tal · 1 month ago

        > Encoding, though.

        If you’re willing to use a static set of tiles, and do non-real-time encoding, you can probably use photomosaic software.

        Metapixel is apparently packaged in Debian.

        Precomputing an optimal set of tiles for a given video would be a different matter.

        EDIT: This text was written about twenty years ago:

        > Metapixel is fast. It takes about 75 seconds to generate a classical photomosaic for a 2048x2432 image with constituent images of size 64x64 and a database of slightly more than 11000 images on my not-so-fast Alpha. Most of this time is spent loading and saving images.

        It’s not intended for encoding video, and I suspect it isn’t multithreaded, so it’s probably possible to parallelize metapixel processes at the frame level to speed that up significantly on modern processors with a lot of cores.

        EDIT2: If there were a way to compute an optimal set of tiles for a given video segment - I don’t have an out-of-the-box approach for that - then, given spare bandwidth (which you should have), you could probably send over a new set of tiles for each section of video while streaming tilemaps, and keep the next section’s set buffered on the Game Boy side ahead of time.
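
        On the GBC, one way to get that buffering is the second VRAM bank, assuming each section fits in one bank’s 256 tiles: display from bank 0, stream the next section’s tiles into bank 1 while there’s spare bandwidth, then flip the bank bit in the attribute map at the cut. A rough sketch (VBK at 0xFF4F, attribute map in bank 1 at 0x9800; VRAM-lock timing omitted):

            #include <stdint.h>

            #define VBK      (*(volatile uint8_t *)0xFF4F)  /* VRAM bank select  */
            #define TILE_RAM ((volatile uint8_t *)0x8000)   /* tile pixel data   */
            #define ATTR_MAP ((volatile uint8_t *)0x9800)   /* attrs (bank 1)    */

            /* Upload the next section's tile set (256 tiles x 16 bytes) into
             * the bank the PPU isn't currently drawing from. */
            void load_next_tileset(const uint8_t *data, uint8_t spare_bank)
            {
                VBK = spare_bank;                 /* 0 or 1 */
                for (uint16_t i = 0; i < 256u * 16u; i++)
                    TILE_RAM[i] = data[i];
                VBK = 0;
            }

            /* At the section boundary, point every map entry's attribute at
             * the freshly loaded bank (bit 3), keeping palette/flip bits. */
            void flip_tile_bank(uint8_t new_bank)
            {
                VBK = 1;                          /* attribute map is in bank 1 */
                for (uint16_t i = 0; i < 32u * 32u; i++)
                    ATTR_MAP[i] = (uint8_t)((ATTR_MAP[i] & ~0x08)
                                          | ((new_bank & 1) << 3));
                VBK = 0;
            }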

        • mindbleach@sh.itjust.works · 1 month ago

          The first edit gets you something like 8088 Corruption, which naively compared every 8x8 block of input video to every codepage 437 character in every color combination, then played from floppy to screen as quickly as possible. As an O(n²) algorithm it’s very easy to bloat beyond any hope of real-time use - especially as you make things more flexible with tile-flipping and so on. With a fixed-ish tileset you can at least speed up search by averaging colors or building some kind of tree.
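
          The inner loop of that brute force is tiny, which is exactly why it’s tempting and exactly how it blows up: per block you just keep whichever tileset entry has the least squared error. A sketch with blocks as flat 64-sample greyscale vectors (the color-combination axis from 8088 Corruption left out); the averaging and tree tricks amount to pruning this before the full compare:

              #include <stddef.h>
              #include <stdint.h>

              /* Naive matcher: for one 8x8 block (64 samples), return the
               * index of the tileset entry with the lowest sum of squared
               * differences.  Cost is blocks x tiles x 64 - fine for a small
               * fixed tileset, painful as the options multiply. */
              size_t best_tile(const uint8_t block[64],
                               const uint8_t (*tiles)[64], size_t n_tiles)
              {
                  size_t best = 0;
                  uint32_t best_err = UINT32_MAX;
                  for (size_t t = 0; t < n_tiles; t++) {
                      uint32_t err = 0;
                      for (int i = 0; i < 64; i++) {
                          int d = (int)block[i] - (int)tiles[t][i];
                          err += (uint32_t)(d * d);
                      }
                      if (err < best_err) { best_err = err; best = t; }
                  }
                  return best;
              }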

          The second edit gets you that time I tried shoving Dragon’s Lair onto NES, and every clever tweak made it mushier.

          Actually - I think the first Command & Conquer homebrewed its own video format, using big chunky tiles. (MPEG-1 decoding was bizarrely expensive in terms of both dollars and compute. And it still looked awful.) “The Bitter Lesson” tells us that’s a search problem we should attack with speed instead of complexity.

          So probably just K-means over all the tiles in your group-of-pictures.
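
          Which is pleasingly mechanical: every 8x8 block in the group-of-pictures is a 64-dimensional point, K is however many tiles you can afford (256, or 512 with the bank bit), the centroids become the tile set and the labels become the tilemaps. A rough single-threaded sketch - greyscale samples, naive seeding, no flip handling, and quantizing centroids back to 2 bpp left as a post-pass:

              #include <stdint.h>
              #include <stdlib.h>
              #include <string.h>

              #define DIM 64  /* one 8x8 block, one greyscale sample per pixel */

              /* blocks: n x DIM samples in; tiles: K x DIM centroids out;
               * assign: n tile indices out.  Error handling omitted. */
              void kmeans_tiles(const uint8_t *blocks, size_t n,
                                float *tiles, uint16_t *assign,
                                size_t K, int iters)
              {
                  /* seed centroids from evenly spaced input blocks */
                  for (size_t k = 0; k < K; k++)
                      for (int d = 0; d < DIM; d++)
                          tiles[k * DIM + d] = blocks[(k * n / K) * DIM + d];

                  float  *sum   = malloc(K * DIM * sizeof *sum);
                  size_t *count = malloc(K * sizeof *count);

                  for (int it = 0; it < iters; it++) {
                      memset(sum,   0, K * DIM * sizeof *sum);
                      memset(count, 0, K * sizeof *count);

                      /* assignment: nearest centroid by squared error */
                      for (size_t i = 0; i < n; i++) {
                          size_t best = 0;
                          float  best_err = 1e30f;
                          for (size_t k = 0; k < K; k++) {
                              float err = 0;
                              for (int d = 0; d < DIM; d++) {
                                  float diff = blocks[i * DIM + d] - tiles[k * DIM + d];
                                  err += diff * diff;
                              }
                              if (err < best_err) { best_err = err; best = k; }
                          }
                          assign[i] = (uint16_t)best;
                          count[best]++;
                          for (int d = 0; d < DIM; d++)
                              sum[best * DIM + d] += blocks[i * DIM + d];
                      }

                      /* update: move each centroid to the mean of its members */
                      for (size_t k = 0; k < K; k++)
                          if (count[k])
                              for (int d = 0; d < DIM; d++)
                                  tiles[k * DIM + d] = sum[k * DIM + d] / count[k];
                  }

                  free(sum);
                  free(count);
              }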