• edge [he/him]@hexbear.net
    8 days ago

    Don’t LLMs work on text, though? Speech-to-text is a separate process whose output gets fed to an LLM. Even when you integrate them more closely to do stuff like figure out words based on context clues, wouldn’t that amount to “here’s a text list of possible words, which one would make the most sense?” (rough sketch of what I mean at the end of this comment)

    What counts as a “token” in a purely audio-based model?

    Unless they can somehow convert this to actual meaning/human language, I feel like we’re just going to end up with an equally incomprehensible Large Dolphin Language Model.

    I guess the next step would be associating those sounds with the dolphins’ actions, similar to how we would learn the language of people we’ve never contacted before.
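
    To make the first paragraph concrete, here’s a toy sketch of the kind of two-stage pipeline I mean: the speech-to-text stage proposes candidate words with confidence scores, and a language model picks the combination that reads most plausibly. The candidate words, the scores, and the tiny bigram table standing in for an LLM are all invented for illustration; this isn’t any real ASR or LLM API.

```python
from itertools import product

# Hypothetical output of the speech-to-text stage: for each position,
# a few candidate words with acoustic confidence scores.
candidates = [
    [("I", 0.9)],
    [("scream", 0.6), ("ice cream", 0.4)],
    [("for", 0.8), ("four", 0.2)],
    [("ice cream", 0.7), ("I scream", 0.3)],
]

# Stand-in for the LLM: a tiny table of how plausible adjacent word
# pairs are. A real system would query an actual language model here.
bigram_score = {
    ("I", "scream"): 0.5,
    ("scream", "for"): 0.6,
    ("for", "ice cream"): 0.9,
    ("I", "ice cream"): 0.2,
    ("ice cream", "for"): 0.1,
    ("for", "I scream"): 0.05,
}

def lm_score(words):
    """Plausibility of the word sequence under the stand-in language model."""
    score = 1.0
    for prev, nxt in zip(words, words[1:]):
        score *= bigram_score.get((prev, nxt), 0.01)
    return score

# Pick the combination that best balances acoustic confidence and
# language-model plausibility.
best_words, best_score = None, -1.0
for combo in product(*candidates):
    words = [w for w, _ in combo]
    acoustic = 1.0
    for _, confidence in combo:
        acoustic *= confidence
    total = acoustic * lm_score(words)
    if total > best_score:
        best_words, best_score = words, total

print(" ".join(best_words))  # -> "I scream for ice cream"
```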

    • GiorgioBoymoder [none/use name]@hexbear.net
      8 days ago

      I assume phonemes would be the tokens. We can already computer-generate the audio of spoken language; the tough part here seems to be figuring out what the dolphin sounds actually mean, especially when we don’t have native speakers available to correct the machine outputs as the model is trained.
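
      Roughly what I mean by audio tokens, sketched below. The codebook is random here for brevity; real systems learn it from data (e.g. with vector quantization), and nothing in this toy is the actual method used for the dolphin model. The point is just that the resulting integer IDs play the role that word/subword tokens play in a text LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "recording": 1 second of noise at 16 kHz in place of real audio.
sample_rate = 16_000
audio = rng.standard_normal(sample_rate)

frame_len = 400   # 25 ms frames
hop = 160         # 10 ms hop between frames
n_tokens = 64     # size of the audio "vocabulary"

def frame_features(signal):
    """Chop the signal into frames, describe each by its log-magnitude spectrum."""
    frames = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log(spectrum + 1e-8))
    return np.stack(frames)

features = frame_features(audio)

# Codebook of prototype frames. Random here; a real tokenizer would learn it
# from many recordings (e.g. k-means / vector quantization).
codebook = rng.standard_normal((n_tokens, features.shape[1]))

# Each frame's token ID is the index of its nearest codebook entry.
distances = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
token_ids = distances.argmin(axis=1)

print(token_ids[:20])  # first 20 "words" of the audio-token sentence
```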

    • CarbonScored [any]@hexbear.net
      7 days ago

      I don’t know enough about the intricacies of how AI text models differ from audio-only models, though I do know we already have audio-only models that work in basically the same way.

      I guess the next step would be associating those sounds with the dolphins’ actions

      Yeah, but we’re already trying to do this, and I’m not sure how the AI step really helps. We can already hear dolphins, isolate specific noises, and associate them with actions, but we still haven’t gotten very far. Having a machine that can replicate those noises without doing the actions sounds significantly less helpful than watching a dolphin.