cross-posted from: https://sh.itjust.works/post/18066953

On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

  • Anticorp@lemmy.world · 9 months ago

    Maybe the outcome of all this deep fake shit is that video and photo evidence will be inadmissible in court. Maybe that’s the goal?