• millie@beehaw.org · +10 · edited · 23 hours ago

    You can help by asking ChatGPT to produce the most processor intensive prompt it can come up with and then having it execute it repeatedly. With the free version this will burn through your allotment pretty quickly, but if thousands of people start doing it on a regular basis? It’ll cost OpenAI a lot of money.

  • rumba@lemmy.zip · +17/-2 · 1 day ago

    $200 a month for a user is losing money? There’s no way he’s just counting model queries. An entire A6000 server is around $800/month, and you can fit a hell of a lot more than 4 people’s worth of queries on it. He has to be including training and/or R&D.

    • Jimmycakes@lemmy.world · +30 · edited · 1 day ago

      It includes anything that will keep them from having to pay investors back. Classic tech start up bullshit.

      Silicon Valley brain rot formula:

      Losing money? Get billions every month.

      Making money? Pay billions back.

      Which one do you think they pick?

    • db0@lemmy.dbzer0.com (OP) · +16 · 1 day ago

      I’m honestly fairly surprised as well, but at the same time, they’re not serving a model that can run on an A6000, and the people paying for unlimited would probably be the ones who set up bots and apps doing thousands of requests per hour.

      • LiveLM@lemmy.zip · +18 · edited · 1 day ago

        And honestly? Those people are 100% right.
        If they can’t deliver true “unlimited” for 200 bucks a month, they shouldn’t market it as such.

        grumble grumble unlimited mobile data grumble grumble

        • db0@lemmy.dbzer0.com (OP) · +7 · edited · 1 day ago

          To be fair, unlimited is supposed to mean unlimited for a reasonable person, like someone going to an “all you can eat” buffet. However, those purchasing these would immediately set up proxy accounts and use them to serve all their communities, so that one unlimited account becomes 100 or 1000 actual users. So like someone going to an “all you can eat” buffet and then sneaking in 5 other people under their trenchcoat.

          If they actually do block this sort of account sharing, and it’s costing them money on just prolific single users, then I don’t know, their scaling is just shite. Like, “unlimited” can’t ever be truly unlimited, as there should be a rate limit to prevent this sort of shenanigans. But if the account can’t make money with a reasonable rate limit (like 17280/day, which would translate to 1 request per 5 seconds), they are fuuuuuucked.
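          The break-even arithmetic above can be sketched in a few lines; the per-query cost below is a made-up illustration, not OpenAI’s real number:

```python
# Break-even rate limiting for a flat-price "unlimited" plan.
# cost_per_query is hypothetical; plug in whatever the real number is.

SECONDS_PER_DAY = 24 * 60 * 60

def max_requests_per_day(min_seconds_between_requests: float) -> int:
    """How many requests/day a fixed rate limit allows."""
    return int(SECONDS_PER_DAY / min_seconds_between_requests)

def break_even_interval(price_per_month: float, cost_per_query: float,
                        days: int = 30) -> float:
    """Minimum seconds between requests so a max-rate user still breaks even."""
    affordable_queries = price_per_month / cost_per_query
    return days * SECONDS_PER_DAY / affordable_queries

# 1 request per 5 seconds, as in the comment above:
print(max_requests_per_day(5))                 # 17280

# If a query really cost $0.05, $200/month buys 4000 queries,
# i.e. one request roughly every 11 minutes:
print(round(break_even_interval(200, 0.05)))   # 648
```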

          • LiveLM@lemmy.zip · +7 · edited · 1 day ago

            Yeah, poor wording on my part: proxy accounts being banned is totally fair, but a user using various apps and bots is the type of ‘power user’ scenario I’d expect an unlimited plan to cover.

            • db0@lemmy.dbzer0.com (OP) · +9 · edited · 1 day ago

              Agreed. Like, how fucking difficult is it to ask “It costs us X per query, so what rate limit Y do we need to put on this account so that it doesn’t exceed $200 per month?” I bet the answer is a hilariously low rate limit that nobody would buy, so they decided to price below cost and pray people wouldn’t actually use all those queries. Welp. And if they didn’t even put in a rate limit, also lol. lmao.

  • PieMePlenty@lemmy.world · +38 · 1 day ago

    Sam, just add sponsored content. The road to enshittification doesn’t have to be long! Make it shitty fast so people can move past it and start hosting their own models for their own usage.

    • BoxOfFeet@lemmy.world · +5 · 1 day ago

      Right? He just needs to have it add some Shell or Wal-Mart logos to the generated images. Maybe the AI generated Fifty Shades-esque Gandalf fanfic somebody is prompting can take place in a Target.

      • LiveLM@lemmy.zip · +9 · 1 day ago

        Hey ChatGPT, give me an overview of today’s weather.

        Today’s weather is beautifully sunny and hot, with clear skies and no rain in sight—perfect for enjoying the new Coca-Cola Zero™. Hmmmm, refreshing!

  • Viri4thus@feddit.org · +19/-2 · 1 day ago

    So people are really believing Altman would publish these damning statements without ulterior motives? Are we seriously this gullible? Holy shit, we reached a critical mass of acephalous humans, no turning back now.

    • ZILtoid1991@lemmy.world · +11 · 2 days ago

      Likely they’ll try to sell it to governments, and with Elon Musk proposing goVeRNmeNt eFfIciEnCy, at least xAI can become somewhat profitable.

  • edgemaster72@lemmy.world · +67 · 2 days ago

    losing money because people are using it more than expected

    “I personally chose the price and thought we would make some money.”

    Big MoviePass energy

  • Bakkoda@sh.itjust.works · +37 · 2 days ago

    This 100% answers my question from another thread. These businesses have cooked the books so bad already that they thought this was gonna save them and it doubled down on em.

  • renzev@lemmy.world · +67 · 2 days ago

    Much like uber and netflix, all of these ai chatbots that are available for free right now will become expensive, slow, and dumb once the investor money runs out and these companies have to figure out a business model. We’re in the golden age of LLMs right now, all we can do is enjoy the free service while it lasts and try not to make it too much a part of our workflow, because inevitably it will be cut off. Unless you’re one of those people with a self-hosted LLM I guess.

    • stoly@lemmy.world · +26 · 2 days ago

      Not an LLM, but Google Assistant has gotten much more stupid over the past several years. They realized it was too expensive and had to lobotomize it.

    • spireghost@lemmy.zip · +25/-1 · 2 days ago

      This. AI hype beasts keep saying “this is the worst AI will ever be” and “it’ll just get better”, but really it’s just going to get worse as they actually try to turn the bubble into a profit.

    • domdanial@reddthat.com · +17/-4 · 2 days ago

      I was about to say, a selfhosted LLM means I’m not competing with every market analysis tool, customer service replacement, and 10 y/o kid bombarding the service with junk. It doesn’t need to be ultra fast if I’m the only one using the hardware.

      • froztbyte@awful.systems · +11 · 2 days ago

        and who’ll supply the model and training and updates and data curation, dom? is it as manna from heaven? do you merely step upon the path and receive the divine wisdom of fresh llm updates?

        fucking hell

        • domdanial@reddthat.com · +2 · 1 day ago

          Honestly, the data used to create these models was ripped from the public and I think that they are owed back to the public. OpenAI started as a non profit, and I think it should stay that way.

          The FOSS model works well enough for other projects and I think that corporate AI will be exactly the same as the industrial revolution, progress at the cost of humanity. This isn’t a problem to solve, it’s a solution looking for problems.

        • Knock_Knock_Lemmy_In@lemmy.world · +7/-3 · 1 day ago

          Base open source model.
          Topic expert models.
          Community lora.
          Program extensions.

          Look what comfy UI + Stable Diffusion can achieve.

          • Architeuthis@awful.systems · +11 · 1 day ago

            Base open source model just means some company commanding a great deal of capital and compute made the weights public to fuck with LLMaaS providers it can’t directly compete with yet. It’s not some guy in a garage training and RLHF-ing them for months on end just to hand the result over to you to fine-tune for writing Ciaphas Cain fanfiction.

      • rumba@lemmy.zip · +3/-1 · 1 day ago

        And with the pruned llama models, it runs really quickly on a 2070.

    • Robust Mirror@aussie.zone · +13/-7 · 2 days ago

      Once they are cut off, self-hosted focus will explode and we’ll see huge improvements in terms of ability and ease of use.

  • affiliate@lemmy.world · +43 · 2 days ago

    sam altman proving once again that he is not only a tech genius but also a business genius. make sure to let him scan your eyeballs before it’s too late.

  • protist@mander.xyz · +238/-1 · 2 days ago

    “I personally chose the price”

    Is that how well-run companies operate? The CEO unilaterally decides the price rather than delegating that out to the numbers people they employ?

    • azertyfun@sh.itjust.works · +2 · 19 hours ago

      In tech? Kinda, yeah. When a subscription is 14.99 $£€/month, it’s a clear “we just picked what people think is a fair price for SaaS”.

      The trick is that tech usually works on really weird economics where the fixed costs (R&D) are astonishingly high and the marginal costs (servers, etc.) are virtually nil. That’s why successful tech companies are so profitable, even more than oil companies: once the R&D is paid off, every additional user is free money. And this means that companies don’t have to be profitable at any time in particular as long as they promise sufficient projected growth to make up for being a money pit until then. You can get away with anything when your investors believe you’ll eventually have a billion users.

      … Of course that doesn’t work when every customer interaction actually costs a buck or two in GPU compute, but I’m sure after a lot of handwaving they were able to explain to their investors how this is totally fine and totally sustainable and they’ll totally make their money back a thousandfold.
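      The economics sketched above fit in a one-line model; every number below is invented for illustration:

```python
# Toy version of the fixed-vs-marginal-cost economics described above.
# All numbers are invented for illustration.

def monthly_profit(users: int, price: float,
                   fixed_cost: float, marginal_cost_per_user: float) -> float:
    """Profit = per-user margin times users, minus fixed costs (R&D etc.)."""
    return users * (price - marginal_cost_per_user) - fixed_cost

# Classic SaaS: $14.99/mo, ~$0.10/user in server costs, $2M/mo fixed R&D.
# Once past the fixed-cost hump, every extra user is almost pure margin:
print(monthly_profit(1_000_000, 14.99, 2_000_000, 0.10) > 0)   # True

# GPU-bound service where heavy users burn more compute than they pay for:
# growth just digs the hole faster.
print(monthly_profit(1_000_000, 200, 2_000_000, 250))          # -52000000
```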

    • rook@awful.systems · +83 · 2 days ago

      A real ceo does everything. Delegation is for losers who can’t cope. Can’t move fast enough and break enough things if you’re constantly waiting for your lackeys to catch up.

      If those numbers people were cleverer than the ceo, they’d be the ones in charge, and they aren’t. Checkmate. Do you even read Ayn Rand, bro?

      • Kitathalla@lemy.lol · +19 · 2 days ago

        Is that what Ayn Rand is about? All I really remember is that having a name you chose yourself is self-fulfilling.

        • sp3ctr4l@lemmy.zip · +18 · edited · 2 days ago

          Ayn Rand is about spending your whole life moralizing a social philosophy based on the impossibility of altruism, perfect meritocratic achievement perfectly distributing wealth, and hatred of government taxation, regulation, and social welfare programs…

          … and then dying alone, almost totally broke, living off of social security and financial charity from your former secretary.

          • Milk_Sheikh@lemm.ee · +22 · 2 days ago

            A monologue that lasts SIXTY PAGES of dry exposition. Barely credible characterization of the protagonist and villains, and extremely poor world-building.

            Anthem is her better book because it keeps to a simple short-story format - but it still has a very dull plot that shoehorns ideology throughout. There are far better philosophical fiction writers out there, like Camus, Vonnegut, or Koestler. Skip Rand altogether imo

    • froztbyte@awful.systems · +50 · 2 days ago

      far, far, far, far, far, far, far fewer business people than you’d expect/guess are data-driven decision makers

      and then there’s the whole bayfucker ceo dynamic which adds a whole bunch of extra dumb shit

      it’d be funnier if it weren’t for the tunguska-like effect it’s having on human society both at present and in the coming decades to follow :|

    • lobut@lemmy.world · +27 · 2 days ago

      I think I remember, in “The Everything Store”, Jeff Bezos seeing the price they planned to charge for AWS and going even lower for growth. So there could be some rationale for that. However, I think switching AI providers is easier than switching cloud providers? Not sure though.

      I can imagine the highest users of this being scam artists and stuff though.

      I want this AI hype train to die.

    • brbposting@sh.itjust.works · +17 · 2 days ago

      I’m guessing that means a team or someone presented their pricing analysis to him, and suggested a price range. And this is his way of taking responsibility for making the final judgment call.

      (He’d get blamed either way, anyways)

      • David Gerard@awful.systems (mod) · +6 · 2 days ago

        $20/mo sounds like a reasonable subscription-ish price, so he picked that. That OpenAI loses money on every query, well, let’s build up volume!

      • zbyte64@awful.systems · +8 · 2 days ago

        While the words themselves approach an apology, I didn’t read it as taking responsibility. I read it as:

        Anyone could have made this same mistake. In fact, dumber people than I would surely have done worse.

  • Rhoeri@lemmy.world · +22 · 2 days ago

    Good riddance. We never asked for it, and we didn’t deserve it forced on us.

    • Sergio@slrpnk.net · +81 · 2 days ago

      They’re still in the first stage of enshittification: gaining market share. In fact, this is probably all just a marketing scheme. “Hi! I’m Crazy Sam Altman and my prices are SO LOW that I’m LOSING MONEY!! Tell your friends and subscribe now!”

      • skittle07crusher@sh.itjust.works · +24 · edited · 2 days ago

        I’m afraid it might be more like Uber, or Funko, apparently, as I just learned tonight.

        Sustained somehow for decades before finally turning any profit. Pumped full of cash like it’s foie gras by Wall Street. Inorganic as fuck, promoted like hell by Wall Street, VC, and/or private equity.

        Shoved down our throats in the end.

    • where_am_i@sh.itjust.works · +10 · 2 days ago

      well, yes. But this is also an extremely difficult product to price. $200/mo is already insane, but now you’re suggesting they should’ve gone even more aggressive. It could turn out almost nobody would use it. An optimal price here is a tricky guess.

      Although they probably should’ve sold a “limited subscription”: one that gives you the break-even number of queries per month, or 2x that, but not 100x, or unlimited. Otherwise exactly what happened can happen.

      • confusedbytheBasics@lemm.ee · +2 · 24 hours ago

        I signed up for API access. I run all my queries through that and pay per query. I’ve spent about $8.70 since 2021. This seems like a win-win model: I save hundreds of dollars, and they make money on every query I run. I’m confused why there are subscriptions at all.
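        The subscription-vs-API trade-off above is just a break-even division; the per-query price here is hypothetical, so check your provider’s actual rates:

```python
# When does a flat subscription beat pay-per-query API billing?
# The per-query price is hypothetical; check your provider's actual rates.

def breakeven_queries(subscription_per_month: float,
                      api_cost_per_query: float) -> float:
    """Queries/month above which the flat plan is the cheaper option."""
    return subscription_per_month / api_cost_per_query

# At $0.01/query, a $20/month plan only pays off past 2000 queries a month;
# a light user spending ~$8.70 over several years is nowhere near that line.
print(breakeven_queries(20, 0.01))   # 2000.0
```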

      • V0ldek@awful.systems · +19 · 2 days ago

        “Our product that costs metric kilotons of money to produce but provides little-to-no value is extremely difficult to price” oh no, damn, ye, that’s a tricky one

        • Saledovil@sh.itjust.works · +3/-6 · 1 day ago

          What the LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.
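          A toy rendering of that cost/utility claim (the exact shapes here are assumptions for illustration, not measured scaling laws):

```python
# Toy numbers for the claim above: per-step cost doubling while per-step
# utility halves. The exact shapes are assumptions, not measurements.

def marginal_cost(step: int, base: float = 2.0) -> float:
    return base ** step       # each "precision" step costs ~2x the previous

def marginal_utility(step: int, decay: float = 0.5) -> float:
    return decay ** step      # ...and is worth ~half the previous

# Past a few steps, another doubling of spend buys almost nothing:
for step in range(5):
    print(step, marginal_cost(step), marginal_utility(step))
```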

            • self@awful.systems · +7 · 1 day ago

              guess again

              what the locals are probably taking issue with is:

              If you want a more precise model, you need to make it larger.

              this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift

              • Saledovil@sh.itjust.works · +1/-6 · 1 day ago

                Well, then let me clear it up. The statistics become more precise. As in, for a given prefix A and token x, the difference between the calculated probability of x following A (P(x|A)) and the actual probability P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer. And if you’re working on a halfway ambitious project, then you’re virtually guaranteed to encounter a novel problem.

                • self@awful.systems · +7 · 1 day ago

                  Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

                  it doesn’t produce any meaningful answers for non-novel problems either

      • V0ldek@awful.systems · +11 · edited · 2 days ago

        Wait, but he controls the price, not the subscriber number?

        Like, even if the issue was low subscriber numbers (which it isn’t, since they’re losing money per subscriber; more subscribers just makes them lose money faster), that’s still the same category of mistake. You control the price and supply, not the demand. You can’t set a stupid price that loses you money and then be like “ah, not my fault, demand was too low” - bozo, it’s your product and you set the price. That’s econ 101: you can move the price to a place where your business is profitable, and if such a price doesn’t exist then maybe your biz is stupid?

        • froztbyte@awful.systems · +10 · 2 days ago

          I believe our esteemed poster was referencing the oft-seen cloud dynamic of “making just enough in margin” where you can tolerate a handful of big users because you have enough lower-usage subscribers in aggregate to counter the heavies. which, y’know, still requires the margin to exist in the first place

          alas, hard to have margins in Setting The Money On Fire business models

      • froztbyte@awful.systems · +18 · 2 days ago

        despite that one episode of Leverage where they did some laundering by way of gym memberships, not every shady bullshit business that burns way more than they make can just swizzle the numbers!

        (also if you spend maybe half a second thinking about it you’d realize that economies of scale only apply when you can actually have economies of scale. which they can’t. which is why they’re constantly setting more money on fire the harder they try to make their bad product seem good)

        • EldritchFeminity@lemmy.blahaj.zone · +10 · 2 days ago

          Yeah, the tweet clearly says that the subscribers they have are using it more than they expected, which is costing them more than $200 per month per subscriber just to run it.

          I could see an argument for an economies-of-scale situation where adding more users would offset the cost per user, but it seems like here that would just increase their overhead, making the problem worse.

        • BB84@mander.xyz · +3/-8 · edited · 2 days ago

          LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.

          That said, the optimal batch size on today’s hardware is not big (<100). I would be very very surprised if they couldn’t fill it for any few-seconds window.
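          The batching argument above amounts to amortizing one forward pass over many requests; the numbers here are invented, not real GPU costs:

```python
# Sketch of the batching argument above (numbers invented, not real GPU costs):
# a forward pass costs roughly the same whether it serves 1 prompt or a full
# batch, so amortized cost per request falls until the batch is saturated.

def cost_per_request(batch_size: int, cost_per_forward_pass: float,
                     optimal_batch: int = 50) -> float:
    """Amortized cost per request; beyond the optimal batch size you
    simply run more passes, so the amortized cost stops improving."""
    effective = min(batch_size, optimal_batch)
    return cost_per_forward_pass / effective

print(cost_per_request(1, 0.50))    # 0.5   -- a lone request pays full freight
print(cost_per_request(50, 0.50))   # 0.01  -- a full batch splits the bill
```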

          • flere-imsaho@awful.systems · +4 · 1 day ago

            i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

          • David Gerard@awful.systems (mod) · +10 · 2 days ago

            this sounds like an attempt to demand others disprove the assertion that they’re losing money, in a discussion of an article about Sam saying they’re losing money

            • BB84@mander.xyz · +2/-7 · 2 days ago

              What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPO his company so I can short it.

                • BB84@mander.xyz · +2/-5 · edited · 2 days ago

                  Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

                  @sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what OpenAI is operating at.

                  @dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?

                  The tweet I linked shows that good LLMs can be much cheaper. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title will have it. How does this make me FYGM? @froztbyte@awful.systems