• deegeese@sopuli.xyz · 7 months ago

    Why let humans at Obsidian make Bethesda look bad when they can save money by using AI to make themselves look bad?

      • slimerancher@lemmy.world · 7 months ago

        They can already make themselves look bad without the use of AI. If they can use AI to do it cheaper and quicker, that’s just a win-win! 😀

        (I still like Bethesda BTW, though my expectations for their next games are at an all-time low)

    • The Pantser@lemmy.world · 7 months ago

      Yeah, AI sucks, but could you imagine if they took advantage of it in a good way? Like full conversations with NPCs, either by voice or keyboard input. Give an NPC a personality and a list of what they know, and make it the player’s job to interrogate them and get the info out. They wouldn’t need to give every NPC a full AI, but if you want to play as a charming hunk you could sweet-talk the NPC, or if you’re a brute you could threaten them, and get responses based on exactly what you said (rough prompt sketch below).

      Additionally, they could add an AI difficulty level where the mobs adapt. Tell the mobs to survive at any cost.

      I want AI in NPCs
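
      As a rough illustration only (not anything any studio has announced), a “personality plus known facts” prompt could look something like the sketch below; every name and string in it is made up:

      ```python
      # Hypothetical sketch of a "persona + known facts" prompt for an NPC.
      # None of this is a real game API; it just shows how the player's exact
      # wording could be handed to a text model along with what the NPC knows.

      def build_npc_prompt(persona: str, known_facts: list[str], player_line: str) -> str:
          """Assemble the text that would be sent to a language model."""
          facts = "\n".join(f"- {fact}" for fact in known_facts)
          return (
              f"{persona}\n"
              "You only know the facts below and must not invent others:\n"
              f"{facts}\n"
              "Stay in character and react to the player's tone (charm, threats, etc.).\n"
              f"Player: {player_line}\n"
              "NPC:"
          )

      if __name__ == "__main__":
          print(build_npc_prompt(
              persona="You are a nervous dock worker in a fantasy port town.",
              known_facts=["A smuggler's ship slipped out of the harbor last night."],
              player_line="Talk, or I start breaking fingers.",
          ))
      ```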

      • tal · 7 months ago

        Could maybe make a mod that bridges Skyrim to something like KoboldAI.
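
        For illustration, the model side of such a bridge could be as small as the sketch below. It assumes a KoboldAI/KoboldCpp-style local server exposing a /api/v1/generate endpoint (the path, port, and JSON fields may differ by version) and says nothing about the Skyrim-side plugin that would actually call it:

        ```python
        # Hypothetical sketch: posting a prompt to a locally running
        # KoboldAI/KoboldCpp-style server. Endpoint path, port, and field
        # names are assumptions and may differ between versions.
        import json
        import urllib.request

        KOBOLD_URL = "http://localhost:5000/api/v1/generate"

        def generate_dialogue(prompt: str) -> str:
            payload = json.dumps({
                "prompt": prompt,
                "max_length": 80,    # keep replies short to limit latency
                "temperature": 0.7,
            }).encode("utf-8")
            req = urllib.request.Request(
                KOBOLD_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                result = json.load(resp)
            # Servers in this family typically return {"results": [{"text": "..."}]}
            return result["results"][0]["text"].strip()

        if __name__ == "__main__":
            npc_setup = "You are an elven archer guarding a village gate at night.\n"
            print(generate_dialogue(npc_setup + "Player: Seen anything strange tonight?\nNPC:"))
        ```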

        I don’t think that we’re there yet in reasonable performance terms, though. Or technically.

        And I’m not sold that there’s enough of a corpus of text for even the existing mechanisms to chew on. Like, I can make an LLM that generates text that sounds like everyday American English, because I have a huge corpus of that to train on. But I don’t have a huge corpus of what, say, an elven archer sounds like.

        I have AMD’s RX 7900 XT. Even when that card is doing zero 3D work (which it wouldn’t be in a game, where it would also be rendering), existing software doesn’t allow for convincing text generation in real time.

        Unless you want to exclude them, console players don’t have a great text input mechanism. Well, I think that they can use Bluetooth keyboards, but I’d bet that only a tiny portion of people have those set up.

        And I’m not sure that existing AI techniques are necessarily great at producing interesting content. Procedural content in games has historically often felt kind of flat and samey: the fact that your game has 65,535 procedurally-generated star systems isn’t the same as having them hand-crafted, not yet at any rate. Starfield caught criticism because its procedural content didn’t feel that interesting.

        Like, I agree that the ability to converse with an NPC the way you would with a human would potentially be pretty neat. But I don’t think we’re at the point, from either a hardware or a software standpoint, where that’s ready.