• @tal
    22 days ago

    The guy complaining left the company:

    Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.

    I don’t think that he stands to benefit.

    He also didn’t say that OpenAI was on the brink of having something like this.

    Like, I don’t think the fighting at OpenAI and people being ejected and such is a massive choreographed performance. I think there have been people who really strongly disagree with each other.

    I absolutely think that AGI has the potential to pose existential risks to humanity. I just don’t think that OpenAI is anywhere near building anything capable of that. But if you’re trying to build toward such a thing, the risks are something I think a lot of people would keep in mind.

    I think that human-level AI is very much technically possible. We ourselves are an existence proof, and we have hardware with superior storage and compute capacity. The problem we haven’t solved is the software side. And I can very easily believe that we may get there not all that far in the future: years or decades, not centuries, down the road.
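
    For what it’s worth, here’s a back-of-envelope sketch of that hardware claim in Python. Every number in it is a loose order-of-magnitude assumption (the brain figures are commonly cited estimate ranges, the cluster figures are just plausible round numbers), so treat it as illustration, not evidence:

    ```python
    # Rough back-of-envelope comparison: commonly cited estimates for the
    # human brain vs. a present-day training cluster. All figures are
    # order-of-magnitude assumptions, not measurements.

    BRAIN_SYNAPSES = 1e14        # often quoted as 1e14 to 1e15
    BYTES_PER_SYNAPSE = 4        # assume a few bytes of state per synapse
    BRAIN_FLOPS = 1e16           # middle of common 1e15 to 1e18 estimates

    GPU_FLOPS = 1e15             # roughly one modern accelerator, dense FP16
    GPU_COUNT = 10_000           # a single large training cluster
    CLUSTER_STORAGE = 1e17       # ~100 PB, unremarkable for a datacenter

    brain_storage = BRAIN_SYNAPSES * BYTES_PER_SYNAPSE
    print(f"storage: brain ~{brain_storage:.0e} B, cluster ~{CLUSTER_STORAGE:.0e} B")
    print(f"compute: brain ~{BRAIN_FLOPS:.0e} FLOP/s, cluster ~{GPU_FLOPS * GPU_COUNT:.0e} FLOP/s")
    # On these rough numbers the hardware already wins by orders of
    # magnitude; the missing piece is the software, as argued above.
    ```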

    • @Floey@lemm.ee
      22 days ago

      I didn’t think it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like “What can this actually do?” He obviously stands to gain from people thinking they are on the verge of AGI. And someone looking for a new job in the field also stands to gain from it.

      As for the software side, if anyone solves it, it won’t be OpenAI or the megacorporations following in its footsteps. They seem insistent on throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models, hoping they’ll reach some kind of tipping point.
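
      To put a number on the diminishing returns: the scaling-law fits these labs themselves publish (e.g. Hoffmann et al. 2022, the Chinchilla paper) model loss as a power law in parameters and data, so every extra 10x of scale buys a smaller improvement and the curve flattens toward a floor rather than hitting a tipping point. A quick sketch using the paper’s published fit constants (the scaling schedule in the loop is my own simplification):

      ```python
      # Chinchilla-style scaling law: L(N, D) = E + A / N**alpha + B / D**beta
      # Fit constants are from Hoffmann et al. 2022; the schedule below is
      # illustrative, not from the paper.
      E, A, B = 1.69, 406.4, 410.7
      alpha, beta = 0.34, 0.28

      def loss(n_params: float, n_tokens: float) -> float:
          return E + A / n_params**alpha + B / n_tokens**beta

      # Grow parameters and data together (~20 tokens per parameter, the
      # paper's compute-optimal rule of thumb) and watch each 10x of scale
      # buy less: the loss creeps toward the irreducible floor E = 1.69.
      for k in range(5):
          n = 1e9 * 10**k          # parameters: 1e9 ... 1e13
          d = 20 * n               # training tokens
          print(f"params {n:.0e}  tokens {d:.0e}  loss {loss(n, d):.3f}")
      ```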