The New York Times sues OpenAI and Microsoft for copyright infringement

The New York Times has sued OpenAI and Microsoft for copyright infringement, alleging that the companies’ artificial intelligence technology illegally copied millions of Times articles to train ChatGPT and other services to provide people with information – technology that now competes with the Times.

  • kromem@lemmy.world · 1 year ago

    What’s the value of old journalism?

    It’s a product where the value curve is heavily weighted towards recency.

    In theory, the greatest value theft is when the AP writes a piece and two dozen other ‘journalists’ copy it, changing the text just enough not to get sued. That’s completely legal, but it’s what effectively killed investigative journalism.

    An LLM taking years-old articles and predicting them until it effectively learns the relationships between language itself and the events those articles describe isn’t some inherent value theft.

    It’s not the training that’s the problem; it’s the application of the models that needs policing.

    Like if someone took an LLM, fed it recently published news stories in the prompts via RAG, and had it rewrite them just differently enough that no one needed to visit the original publisher.

    Even if we keep that legal for humans to do (which we really might want to revisit, or at least restrict with industry-specific rules), maybe we should have different rules for the models.

    But trying to claim that an LLM which lets coma patients communicate, problem-solves self-driving algorithms, or diagnoses medical issues is stealing the value of old NYT articles in doing so is not an argument I see much value in.

    • jacksilver@lemmy.world · 1 year ago

      Except no one is claiming that LLMs are the problem; they’re claiming that GPT, or more specifically GPT’s training data, is the problem. Transformer models still have a lot of potential, but the question the NYT is asking is “can you just take anyone else’s work to train them?”

      • kromem@lemmy.world · 1 year ago

        There’s a similar suit against Meta for Llama.

        And yes, as the dust settles we’ll see in case law whether training an LLM counts as fair use.