A fake recording of a candidate saying he’d rigged the election went viral. Experts say it’s only the beginning

Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.

  • yetAnotherUser@lemmy.ca · 10 months ago

    The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.

    Why is it always the Russian hackers? /s

    • Chaotic Entropy@feddit.uk · 10 months ago

      Because they have state actors running around following the exact same foreign policy strategy as all the other international powers. Make sure your guy is in charge, and who cares how.

  • agent_flounder@lemmy.world · 10 months ago

    Fundamentally, we as a species have lost the ability to use a face and a voice in a video to establish authenticity.

    A person can spoof an email, and we have cryptographic signatures as a means of authentication.

    So if I record myself saying something I could sign the video I guess (implementation TBD lol).
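    The sign-then-verify idea above could be sketched like this. This is a toy example: it uses Python's stdlib `hmac` as a stand-in for a real public-key signature scheme (a real deployment would use something like Ed25519 so anyone can verify without holding the secret key), and the key and file contents are made up.

```python
import hashlib
import hmac

# Stand-in for a creator's private key (hypothetical value).
SECRET_KEY = b"hypothetical-creator-key"

def sign_recording(data: bytes) -> str:
    """Hash the recording, then produce a tag binding it to the key holder."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_recording(data: bytes, tag: str) -> bool:
    """Check the recording against the tag using a constant-time compare."""
    return hmac.compare_digest(sign_recording(data), tag)

video = b"raw video bytes..."
tag = sign_recording(video)

assert verify_recording(video, tag)             # untouched file verifies
assert not verify_recording(video + b"x", tag)  # any edit breaks the tag
```

    The point of the sketch is the failure mode, not the crypto: signing only proves who published a file, not that what it depicts actually happened.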

    But what if someone else (news agency say) takes a video of someone else, how do we authenticate that?

    If it’s a news agency they could sign it. Great.

    But then we have the problem of incentives, too. Does the benefit of a fake outweigh the detrimental effects for said news agency?

    The most damage would be to the person being videoed (reputation, loss of election, whatever). There would be less damage to the media company (“oops so sorry please stay subscribed”). You could add fines but corporate oversight is weak. And the benefit of releasing a fake would be clicks and money so a news company would be a lot more likely to pass along a fake as real.

    So I guess I have no idea what we do. At the moment we are fucked. Yay.

    • CommanderCloon@lemmy.ml · 10 months ago (edited)

      If it’s a news agency they could sign it. Great.

      But then it hinders the denouncing of police violence and whistleblowing, and it allows corporations to sign their claims while whatever regular people record will be assumed to be false

      Edit: reporting can also be done as a second hand account, reposting videos/photos already in circulation, meaning that either a news company will sign those second hand recordings at the risk of validating AI content, or only their own corporate recordings will be used.

      No one wins in any case

  • RobotToaster@mander.xyz · 10 months ago

    How do we know it’s fake, and he’s not just claiming it’s fake? Unless I missed it the article doesn’t seem to cover that.

    IMCO people claiming real recordings are AI fakes is going to be a bigger problem than actual fakes.

    • Gamera8ID@discuss.online · 10 months ago

      You might be kidding, but the answer is: The outcome plus Occam’s Razor.

      a candidate saying he’d rigged the election

      the candidate…was defeated

      Which is more likely, that he was recorded while lying about rigging the election that he ended up losing or that the recording was faked?

      • RobotToaster@mander.xyz · 10 months ago (edited)

        He could just be really bad at rigging elections!

        I guess I missed that rather obvious conclusion while I was looking for some technical explanation, thanks for pointing that out.

        • Gamera8ID@discuss.online · 10 months ago

          No worries.

          I do agree with you that we’re likely to see just as many (if not more) guilty politicians claiming AI fakes when they’re caught red-handed as we will see framed politicians targeted with actual AI fakes. Just not in this case.

    • Nurse_Robot@lemmy.world · 10 months ago (edited)

      IMCO

      I’m not familiar with this one. I assume “in my concerned opinion” but I really want it to be “in my cumble opinion”

  • Cossty@lemmy.world · 10 months ago

    I voted for him and I didn’t know about this AI audio. The primary social media in Slovakia is FB, and I stopped using that like 8 years ago; that’s probably why.

  • illi@lemm.ee · 10 months ago

    Could anybody copy/paste the article here? I’d love to read it, but apparently the site has issues with me using Firefox…

    • GeneralVincent@lemmy.world · 10 months ago

      Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.

      And if that wasn’t bad enough, his voice could be heard on another recording talking about raising the cost of beer.

      The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.

      While the number of votes swayed by the leaked audio remains uncertain, two things are now abundantly clear: The recordings were fake, created using artificial intelligence; and US officials see the episode in Europe as a frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election.

      “As a nation, we are woefully underprepared,” said V.S. Subrahmanian, a Northwestern University professor who focuses on the intersection of AI and security.

      Senior national security officials in the US have been gearing up for “deepfakes” to inject confusion among voters in a way not previously seen, a senior US official familiar with the issue told CNN. That preparation has involved contingency planning for a foreign government potentially using AI to interfere in the election.

      State and federal authorities are also grappling with increased urgency to pass legislation and train election workers to respond to deepfakes, but limited resources within elections offices and inconsistent policies have led some experts to argue that the US is not equipped for the magnitude of the challenge, a CNN review found.

      Already, the US has seen AI-generated disinformation in action.

      In New Hampshire, a fake version of President Joe Biden’s voice was featured in robocalls that sought to discourage Democrats from participating in the primary. AI images that falsely depicted former President Donald Trump sitting with teenage girls on Jeffrey Epstein’s plane circulated on social media last month. A deepfake posted on Twitter last February portrayed a leading Democratic candidate for mayor of Chicago as indifferent toward police shootings.

      Various forms of disinformation can shape public opinion, as evidenced by the widely held false belief that Trump won the 2020 election. But generative AI amplifies that threat by enabling anyone to cheaply create realistic-looking content that can rapidly spread online.

      Political operatives and pranksters can pull off attacks just as easily as Russia, China or other nation state actors. Researchers in Slovakia have speculated that the vote-rigging deepfake their country faced was the work of the Russian government.

      “I can imagine scenarios where nation state adversaries record deepfake audios that are disseminated using both social media as well as messaging services to drum up support for candidates they like and spread malicious rumors about candidates they don’t like,” said Subrahmanian, the Northwestern professor.

      The FBI or Department of Homeland Security can move more swiftly to speak out publicly against a threat if they know that a foreign actor is behind a deepfake, said a senior US official familiar with the issue. But if an American citizen could be behind a deepfake, US national security officials would be more reluctant to counter it publicly out of fear of giving the impression that they are influencing the election or restricting speech, the official said.

      And once a deepfake appears on social media, it can be nearly impossible to stop its spread.

      “The concern is that there’s going to be a deepfake of a secretary of state who says something about the results, who says something about the polling, and you can’t tell the difference,” said the official, who was not authorized to speak to the press.

      Efforts to regulate deepfakes and guard against their effects vary greatly among US states.

      Some states including California, Michigan, Minnesota, Texas and Washington have passed laws that regulate deepfakes in elections. Minnesota’s law, for example, makes it a crime for someone to knowingly disseminate a deepfake intended to harm a candidate within 90 days of an election. Michigan’s laws require campaigns to disclose AI-manipulated media, among other mandates. More than two dozen other states have such legislation pending, according to a review by Public Citizen, a nonprofit consumer advocacy group.

      CNN asked election officials in all 50 states about efforts to counter deepfakes. Out of 33 that responded, most described existing programs in their states to respond to general misinformation or cyber threats. Less than half of those states, however, referenced specific trainings, policies or programs crafted to respond to election-related deepfakes.

      “Yes, this is something that keeps us all up at night,” said Alex Curtas, a spokesperson for New Mexico’s secretary of state, when asked about the issue. Curtas said New Mexico has plans for tabletop exercises with local officials that will include discussion of deepfakes, but he said the state is still looking for tools to share with the public to help determine whether content has been generated with artificial intelligence.

      Jared DeMarinis, Maryland’s administrator of elections, told CNN his state issued a rule that requires political ads that involve AI-generated content to include disclaimers, but he said he hopes the state legislature will pass a law that gives the state more authority on the issue.

      “I don’t believe you can completely

      Some efforts to combat disinformation have triggered more distrust. Last year, Washington’s secretary of state’s office signed a contract with a tech company to track election-related falsehoods on social media, which would include deepfakes, a spokesperson told CNN. But in November, the state’s Republican Party submitted an ethics complaint related to that contract, alleging the secretary was using public funds to pay a company to “surveil voters … suppressing opposition views.” The state ethics board declined to move forward on the complaint, which elicited more protest from the party.

      Multiple pieces of federal legislation on election-related deepfakes have been proposed. US law currently prohibits campaigns from “fraudulently misrepresenting” other candidates, but whether that includes deepfakes is an open question. The Federal Election Commission has been considering the idea but has not reached a decision on the matter.

      • GeneralVincent@lemmy.world · 10 months ago

        Ilana Beller of Public Citizen, the consumer advocacy group, expressed cautious optimism over the rate that both red- and blue-leaning states have been proposing and passing legislation on deepfakes, but she said more must be done.

        “We would like to see more from the federal government, from the FEC and from many states that haven’t taken the step to regulate on this issue,” Beller said.

        Some US candidates have been forced to personally figure out how to respond to deepfakes.

        Paul Vallas, for example, ran for mayor of Chicago as a moderate Democrat last year and was targeted by an audio clip posted on X, formerly known as Twitter, by a mysterious account called “Chicago Lakefront News.”

        “These days people will accuse a cop of being bad if they kill one person that was running away. Back in my day, cops would kill, say, 17 or 18 civilians in their career and nobody would bat an eye,” said the voice in the post that sounded nearly identical to Vallas. “We need to stop defunding the police and start refunding them.”

        Vallas’ campaign responded by issuing a statement that denounced the video as fake and deceptive. But by then, it had been viewed thousands of times before being deleted. While Vallas won the first round of voting, he ultimately lost the election in a runoff to a progressive candidate, Brandon Johnson.

        Asked if he thinks the deepfake cost him the race, Vallas said, “No, you know, I think it was a factor in a close election.”

        “We’ll never know who actually created the video, but clearly there was a campaign on multiple fronts to try to misrepresent my record and to try to characterize my candidacy as something that it was not,” he added. “There’s some damage that’s not repairable, so in a close race something like that can be a factor.”

        Michal Šimečka, the leader of the Progressive Slovakia party, understands why some people could have been fooled by the deepfake that falsely purported to capture him discussing with a journalist a plan to manipulate votes at polling stations.

        “It does sound like me,” Šimečka told CNN, referring to the audio, which he said played into conspiracy theories that a segment of the population already believed.

        The fake audio emerged on the barely regulated messaging app Telegram two days before Slovakia’s parliamentary elections and quickly jumped to TikTok, YouTube and Facebook.

        Šimečka said his team and others complained to social media platforms and law enforcement. Despite some platforms removing or slapping factcheck warnings on some posts containing the audio, it continued to spread.

        Šimečka said there’s no way to know whether the deepfake altered the outcome of the election, which his party lost to a more Russia-friendly party, but said, “It probably had some effect.”

        Daniel Milo, who until December ran a center within Slovakia’s Ministry of Interior set up to counter disinformation, said the debacle showed how some major social media platforms lack processes to effectively respond to deepfakes.

        TikTok and YouTube outright deleted copies of the deepfake, he said, while Facebook deleted some, marked others as false but did not touch others. He estimates hundreds of thousands of people saw posts containing the audio.

        He said social media platforms need to “put measures in place” to prevent attempts to meddle with an election.

        A spokesperson for Meta, Facebook’s parent company, said in a statement, “Our independent fact-checking network reviews and rates misinformation, including content that’s AI-generated, and we label it and down-rank it in feed so fewer people see it.” While the statement said content that violates company policies is removed, it did not address why some posts containing the Slovak deepfake were not marked as false.

        While the original source of the vote-rigging deepfake has not been confirmed, Milo said that some of the earliest posts containing the audio came from pro-Russian politicians in Slovakia. He believes it’s not a coincidence that Russia’s government publicly pushed a similar conspiracy theory on the same day the deepfake emerged.

        “In my professional capacity, I do believe that this deepfake was part of a wider influence campaign by Russia to interfere in the Slovak elections,” Milo said.

        Janis Sarts, director of the NATO Strategic Communications Centre of Excellence, a NATO-accredited research organization based in Latvia, said in a statement that there’s no known evidence showing the deepfake originated in Russia, though he also noted that just over an hour before the deepfake surfaced, Russia’s Foreign Intelligence Service (SVR) released a press statement accusing the US of trying to influence Slovakia’s election in favor of Slovakia’s progressive party. The Russian statement specifically named Šimečka.

        “The claims made in the Russian Intelligence Service’s statement and the content of the deepfake that went viral simultaneously correspond to each other. They both target Progressive Slovakia and promote the same false narrative,” Sarts said. He added that one of the politicians in Slovakia who first posted the deepfake appeared on the news of a Russian channel within a day and made similar claims.

        Russia’s SVR did not respond to a request for comment.

        Regardless of the source, Milo said the US and other nations with elections this year should get ready.

        “My warning is brace yourself for upcoming barrage of deepfakes, of audio and video content that will be targeting presidential candidates that will try to polarize and disrupt the social cohesion in the US,” Milo said.

        It was a sentiment echoed by Šimečka.

        “I think this might be the year when we see a deepfake boom in election campaigns all across the world,” he said. “It’s effective. It’s fairly easy to produce. There isn’t regulation to combat it effectively.”

    • Keith@lemmy.zip · 10 months ago

      Have no issues with Firefox and uBO on Android, idk about desktop

    • General_Effort@lemmy.world · 10 months ago

      Dangerous approach.

      Bad actors will not watermark their output, or remove the watermark. All watermarking does is lend credibility to misinformation. It’s literally worse than nothing.
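      The fragility being described can be shown with a toy sketch. This is not any real vendor's watermarking scheme; the zero-width-space carrier and the text are made up for illustration, and real audio/image watermarks are harder to find but still removable by a motivated actor.

```python
# Toy in-band watermark: hide a zero-width space after every word break.
ZW = "\u200b"  # hypothetical watermark carrier character

def watermark(text: str) -> str:
    """Mark generated text by inserting an invisible character."""
    return text.replace(" ", " " + ZW)

def strip_watermark(text: str) -> str:
    """A bad actor removes the mark with a single replace call."""
    return text.replace(ZW, "")

claim = "this recording is authentic"
marked = watermark(claim)

assert ZW in marked                      # the mark is present...
assert strip_watermark(marked) == claim  # ...and trivially removable
```

      The asymmetry is the point: honest output carries the mark, dishonest output doesn't, so the mark's absence proves nothing and its presence can even lend unmarked fakes false credibility.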

      • redcalcium@lemmy.institute · 10 months ago (edited)

        It won’t stop bad actors, but it’ll allow AI companies to cover their asses and avoid being blamed for misuse of their tech, which is one of the reasons I think most AI companies will adopt it soon.

    • brbposting@sh.itjust.works · 10 months ago

      Doesn’t look like Spotify or Apple watermark music, although that author used them as a hypothetical example.

      Universal Music Group used to but moved away from the practice, it seems.

  • tsonfeir@lemm.ee · 10 months ago

    Everything I am recorded doing is AI generated. It’s up to the courts to prove that isn’t correct. Time to go remove some billionaires and rob some banks.

  • Paragone@lemmy.world · 10 months ago

    We’re going to have to put “circuit-breakers” on elections if well-timed enemy product like that can be significant.

    Same as we have ’em on the stock market.

    _ /\ _