• immutable@lemmy.zip · +14 · 1 day ago

        Um you see um the reality is that this is very simple, it’s pixels, you know glowing dots. So I think with a um high degree a very high degree um of of confidence that within the next 12 to 18 months we will have self gooning AI.

        And that’s really the difference between X being a billion dollar company and being basically useless. But today, right now, self gooning AI is already safer and cheaper than human gooning.

        • Burninator05@lemmy.world · +1 · 22 hours ago

          The race for AI gooning is going to be who can make a model that goons the fastest using the least amount of resources.

        • Aeao@lemmy.world · +6 · 24 hours ago

          I’ve just been sitting here thinking of crazy headlines that wouldn’t shock me:

          “Elon Musk stabbed someone over a Beanie Baby dispute”

          “Elon Musk caught using a homeless man as a snow sled on a steep hill”

          “Elon Musk to reshoot the movie “Home Alone 2” with himself in the role of the dove lady”

          All of those sound like something he might do

          • jjjalljs@ttrpg.network · +5 · 23 hours ago

            I’m just waiting for “Elon Musk dies after following Grok’s advice to mix household cleaning chemicals”

  • brucethemoose@lemmy.world · +98 · edited · 2 days ago

    Fun bit of history: gooners were a huge component of open weights LLM development.

    Pygmalion 6B was pored over by tinkerers before ChatGPT and Llama were even a thing. Ravenous furries and roleplayers have been major contributors to frameworks that snowballed into huge projects, practical finetuning and quantization methods, CUDA kernels, sampling techniques OpenAI is still catching up on, you name it.

    Horniness (among other things) is a heck of a motivation. But the history is buried in obscure Discords and archived GitHub repos.


    And don’t even get me started on imagegen… Good lord. If Grok is going full “anime waifu,” oh, it has got some competition to catch up to.

    • arararagi@ani.social · +16 · 1 day ago

      First time I saw a great image upscaler, it was called “waifu2x”, of course.

      There was also an older machine learning project that removed the censorship commonly found on Japanese pornographic drawings, but it got deleted from GitHub by the author some years ago and I don’t even remember its name anymore.

      • brucethemoose@lemmy.world · +8 · 1 day ago

        Yeah, the history of GANs stretches way back before transformer LLMs, and evolved into ESRGAN, finetunes in obscure Discords…

        That history went somewhere, fortunately, and it definitely blows the venerable waifu2x out of the water: https://openmodeldb.info/

    • Truscape@lemmy.blahaj.zone · +13 · edited · 2 days ago

      I remember those days - using my 3080ti to test the latest Pygmalion model and reporting feedback for how understandable the outputs were to the developers…

      I didn’t use my self-host for horny though - I tried being a DM for a DND session with fictional characters :)

      • Landless2029@lemmy.world · +4 · 23 hours ago

        This is one reason I wanted to get into LLM DND stuff. I’ve tried virtual DMs and they suck due to hallucinations.

        Never tried the opposite. Would be interesting to DM and have the LLM be 4 players.

        • Truscape@lemmy.blahaj.zone · +2 · 22 hours ago

          One thing I learned is that you can’t rely on an inventory system (there’s no persistence), so you’re basically always running a oneshot campaign no matter what. After the novelty wore off, I found that quite boring and just played DND on tabletop sim or roll20 with discord friends instead.

          • Landless2029@lemmy.world · +1 · 22 hours ago

            I was wondering if I could set up an external YAML file for the pipeline to take notes in and reload, to help with the persistence issues.
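            A minimal sketch of what that could look like (the file name, schema, and helper names here are all hypothetical, and it assumes pyyaml is installed):

              import yaml  # pyyaml; assumed available in the pipeline environment

              NOTES_PATH = "campaign_notes.yml"  # hypothetical external notes file

              def load_notes() -> dict:
                  # Reload persisted party state (inventory, quest flags) before each turn.
                  try:
                      with open(NOTES_PATH) as f:
                          return yaml.safe_load(f) or {}
                  except FileNotFoundError:
                      return {"inventory": {}, "quest_flags": []}

              def save_notes(notes: dict) -> None:
                  # Write the updated state back out after the model's turn.
                  with open(NOTES_PATH, "w") as f:
                      yaml.safe_dump(notes, f, sort_keys=False)

              def notes_for_prompt(notes: dict) -> str:
                  # Serialize the notes so they can be prepended to the next prompt.
                  return ("Current party state (authoritative, do not invent items):\n"
                          + yaml.safe_dump(notes, sort_keys=False))

            The notes would still have to be injected into every prompt, and the model would have to actually respect them.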

            • Truscape@lemmy.blahaj.zone · +1 · 22 hours ago

              I’ve never tried something like that, but even if you do remind the party what they have equipped or in reserve, they’ll just make items up. It’s frustrating.

      • brucethemoose@lemmy.world · +5 · edited · 1 day ago

        Yeah I was being a bit facetious. There really was a lot of roleplaying and other neat things (like dungeon masters) that motivated people.

        That, and there are some even earlier, more primitive (and less horny) RP models, like Janeway and some others named after ST captains.

        By the way, there are some pretty awesome dungeon master finetunes that would fit on a 3080 TI these days.

        • Killer_Tree@sh.itjust.works · +3 · 1 day ago

          Can you name-drop any recommended DM fine tunes? Anytime I try to do model research I end up down rabbit holes and very confused…

          Appreciated!

          • brucethemoose@lemmy.world · +4 · edited · 1 day ago

            Oh, there are so many… Yeah, it’s a rabbit hole.

            For now, check out:

            https://huggingface.co/LatitudeGames/Harbinger-24B (and literally anything from Latitude Games, who explicitly specialize in dungeon master models for their site).

            https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.1-24b

            https://huggingface.co/Gryphe/Codex-24B-Small-3.2

            24Bs are very tight on your card (but so smart they’re worth it), so you will want ~3.6bpw (10 GB-ish) exl3 quantizations to minimize the quantization loss and keep them fast. They’re easy to make yourself if you know a little command line and have decent internet; I can walk you through it.
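            For a rough sense of where that “10 GB-ish” figure comes from, here’s the back-of-the-envelope math (illustrative only; a real quant also carries embeddings, quant metadata, and the KV cache on top):

              # Rough exl3 sizing estimate (illustrative numbers, not exact).
              params = 24e9   # ~24B-parameter dense model
              bpw = 3.6       # target bits per weight

              weight_gb = params * bpw / 8 / 1e9
              print(f"weights ≈ {weight_gb:.1f} GB")  # ≈ 10.8 GB of VRAM just for the weights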

            Or I can just quantize these three models just for you, overnight, if you wish. Maybe check how much VRAM your desktop takes up at idle so I can size them right, and let me know.

            • Killer_Tree@sh.itjust.works · +1 · 1 day ago

              Thank you very much! These all look very interesting and I’m excited to try them out.

              I’ve never quantized a model before (I usually find pre-quantized versions) but I would love to learn how. If you can provide the command-line details for doing so, or point me towards a good resource, that would rock!

              • brucethemoose@lemmy.world · +3 · edited · 23 hours ago

                So first of all, you run exl3s via tabbyAPI + your frontend of choice: https://github.com/theroyallab/tabbyAPI

                Check out their docs. Specific settings I’d recommend are like 16K context and “6,5” cache quantization. For example, these are some changed lines plucked from my own config files:

                  # Backend to use for the model (default: exllamav2)
                  # Options: exllamav2, exllamav3
                  backend: exllamav3
                
                  # Max sequence length (default: Empty).
                  # Fetched from the model's base sequence length in config.json by default.
                  max_seq_len: 16384
                
                  # Enable different cache modes for VRAM savings (default: FP16).
                  # Possible values: 'FP16', 'Q8', 'Q6', 'Q4'.
                  # For exllamav3, specify the pair k_bits,v_bits where k_bits and v_bits are integers from 2-8 (i.e. 8,8).
                  cache_mode: 6,5
                
                  # Chunk size for prompt ingestion (default: 2048).
                  # A lower value reduces VRAM usage but decreases ingestion speed.
                  # NOTE: Effects vary depending on the model.
                  # An ideal value is between 512 and 4096.
                  chunk_size: 512
                
                

                Now, to make a quantized model, you just download/install the exllamav3 repo (which you install for tabbyAPI anyway) and follow its documentation: https://github.com/turboderp-org/exllamav3/blob/master/doc/convert.md

                An example command would be: `python convert.py -i "/Path/to/model" -o "/output/directory" --work_dir "temporary/work/directory" -b 3.2 -hb 6`

                You probably want, like, 3.2 bits per weight (the ‘-b’ flag).


                …But that’s not how I would quantize it. If I were you, since the ~3bpw range is so sensitive to quantization, I’d use a custom per-layer quantization scheme described here: https://old.reddit.com/r/LocalLLaMA/comments/1mqwt76/optimizing_exl3_quants_by_mixing_bitrates_in/

                The process is like this: you either make or download 3bpw and 4bpw variants of the model you desire, like say, this one for 4bpw:

                https://huggingface.co/MetaphoricalCode/Harbinger-24B-exl3-4bpw-hb6

                And make a 3bpw yourself (since I don’t see one available for Harbinger 24B).

                Then, you “mix” the two models you’ve made with a command like this:

                python util/recompile.py -or overrides.yml -o "/output/folder" -i "/path/to/your/3bpw-exl3-quantization"

                And the overrides.yml file looks like:

                sources:
                  - id: 4
                    model_dir: /path/to/4bpw-exl3-quantization
                
                overrides:
                  #   Attention & router tensors – cheap, big gain on MoE models
                  - key: "*.self_attn.q_proj*"
                    source: 4          # +1 bpw
                  - key: "*.self_attn.k_proj*"
                    source: 4          # +1 bpw
                  - key: "*.self_attn.v_proj*"
                    source: 4          # +1 bpw
                  - key: "*.self_attn.o_proj*"
                    source: 4          # +1 bpw
                  # - key: "*.mlp.down_proj*"
                  #   source: 4          # +1 bpw
                
                  #  This would force the whole first layer to 4bpw
                  # - key: "model.layers.0.*"
                  #   source: 4
                

                What this example overrides.yml does is force the more sensitive attention layers to use 4bpw quantization (plucking them from the 4bpw quantization you downloaded), and everything else (namely the mlp layers) to use 3bpw. This should end up around ~3.2bpw or so. You can make it larger by uncommenting the mlp down layer (which is the next most sensitive layer), or make it smaller by commenting out the q_proj layer (with the kv layers being the most sensitive, and relatively tiny).
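                To see roughly where that ~3.2bpw figure comes from, here’s a toy weighted average; the attention-weight fraction below is an assumed round number for a dense 24B, not something measured from Harbinger:

                  # Toy average-bpw estimate for the mixed quant (fractions are assumptions).
                  attn_frac = 0.22                # q/k/v/o projections pulled from the 4bpw quant
                  mlp_frac = 1.0 - attn_frac      # everything else stays at 3bpw

                  avg_bpw = attn_frac * 4.0 + mlp_frac * 3.0
                  print(f"average ≈ {avg_bpw:.2f} bpw")  # ≈ 3.22 bpw with these assumptions
                  # Uncommenting *.mlp.down_proj* pushes the average up; commenting out q_proj pulls it down.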

                This seems convoluted, yep. But it has advantages:

                • It targets the ‘sensitive’ layers more accurately, whereas exllamav3 more randomly changes the quantization of layers to hit a specified bpw target (as it can only use integer quantizations).

                • It can be faster. If you can find 3bpw and 4bpw exl3s of the model you want to try, you can just download them and recombine them: no actual quantization needed, and no need to download the 50GB raw weights. convert.py takes a few hours to run, while util/recompile.py takes seconds.


                …And why go to all this hassle, you ask?

                Because exl3s let you stuff in a much better model, with less loss, than anything you’d find on ollama:

                [chart from the exllamav3 exl3.md docs linked below]

                https://github.com/turboderp-org/exllamav3/blob/d8167b0cf4491baeae7705c0dfec7f131f02aad4/doc/exl3.md

                You can cram a 24-billion-parameter model into the 11GB free you have, with minimal loss and no CPU offloading, whereas with ollama (and their unoptimized GGUFs/context quantization), you’d either need a Q4/Q5 of a much dumber 12B model, or a Q3/Q2 of a 24B that will spit out gibberish, or make the model glacially slow by offloading half of it to system RAM.

                And it takes better advantage of your 3080 Ti’s architecture.


                There are other ways to get really good quantization (like with ik_llama.cpp), but for dense models, I love exllamav3.

                Also, this whole field moves fast. Exllamav3 is like 5 months old, and this ‘manual’ quantization scheme was only tested a few days ago.

                • Killer_Tree@sh.itjust.works · +1 · 7 hours ago

                  Once again, thank you so much for sharing your knowledge! It looks like I have some weekend projects to look forward to.

            • gsdsam@lemmy.dbzer0.com · +1 · 1 day ago

              Did you play a specific system? I’ve been curious about playing Cyberpunk RED with AI for a bit. Most online options seem to be 5e-based, so I’m curious whether you can teach these models other systems and settings; that would be awesome.

              • brucethemoose@lemmy.world · +3 · edited · 1 day ago

                Did you play a specific system?

                Honestly I don’t use them for much RP these days, mostly novel-style writing instead :P.

                most online options

                ‘Online’ systems are probably taking bone-stock LLMs and using 5e rules banged into the system prompt anyway. You could do the same thing with a local UI (like Kobold, Open Web UI, or mikupad; take your pick).
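                For example, here’s a minimal sketch of that approach against a local OpenAI-compatible server (the URL, port, model name, and rules text are placeholders; tabbyAPI, for one, exposes an OpenAI-style chat endpoint):

                  import requests  # any HTTP client works

                  # Placeholder endpoint for a local OpenAI-compatible server (adjust host/port/auth).
                  URL = "http://localhost:5000/v1/chat/completions"

                  SYSTEM_RULES = (
                      "You are the GM for a Cyberpunk RED one-shot. "
                      "Resolve skill checks as 1d10 + STAT + SKILL against a Difficulty Value you set. "
                      "Never roll for the player; ask them to roll and report the result."
                  )

                  resp = requests.post(URL, json={
                      "model": "Harbinger-24B-exl3",  # whatever model the server has loaded
                      "messages": [
                          {"role": "system", "content": SYSTEM_RULES},
                          {"role": "user", "content": "I try to sweet-talk the fixer. What do I roll?"},
                      ],
                      "max_tokens": 300,
                      "temperature": 0.8,
                  })
                  print(resp.json()["choices"][0]["message"]["content"])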

                I’m curious if you can teach these other systems and settings, that would be awesome.

                Theoretically? You could collect some text from completed Cyberpunk RED games and finetune a model.

                Or maybe use constrained sampling to help it format certain answers, which would be much easier.

                But honestly I would just try some ‘strong’ models and see if they follow the rules you paste into the system prompt, unless you want to dump a ton of time (and some cash) down the finetuning rabbit hole.

          • brucethemoose@lemmy.world · +3 · 1 day ago

            Oh, also, I can just host any of these on the AI Horde for a bit if you want to try them out, via the KoboldAI Lite or Agnaistic web apps. Again, just lemme know.

  • pelespirit@sh.itjust.works · +150 / −3 · 2 days ago

    lmao

    (“Gooning” is internet slang that describes exaggerated and excessively long sessions of self-pleasuring.)

    In other recent posts, Musk urged followers to try the sometimes-pornographic companion Ani. Last week, he announced that new skimpy outfits had been added to her wardrobe and commented “Nice” when a user modeled her in one.

    He evidently crossed a line, though, when he posted an animation of Ani dancing in her underwear; even his right-wing fans were disgusted, telling him it was “time to stop” and that Ani looked like a “13 year old in lingerie.”

    Musk, likely observing the severe backlash, deleted the offending post. Yet he continued to annoy his supporters by engaging with Ani, replying to a video of the character in a short skirt and a see-through top with a “good morning” message and an emoji of a smiling face encircled by hearts.

    A fed-up X user wrote, “BRO STOP GOONING TO AI ANIME AND TAKE US TO MARS.”

    Question of the day, are you a pedo if it’s anime?

    • Part4@infosec.pub · +11 / −1 · 1 day ago

      MARS

      Literally anyone at all has as much chance of getting to Mars with Elon Musk as the Heaven’s Gate cult members did of getting to heaven on a spaceship chasing Hale-Bopp.

    • icylobster@lemmy.world · +7 / −1 · 1 day ago

      The definition of pedophilia is sexual attraction towards children. So if the anime characters are all clearly representing children, then sure, why not count that.

      The thing is, there is a lot of art that is kind of vague on age. The loli girl stuff creeps me out. But the older teenager / young adult anime/art feels a little more like the general human brain at work. Especially if the user isn’t trying to seek it out. No one is perfect.

      But in this case obviously sexualizing something that looks like a child is creepy. Elon’s age is not helping him either…

      • DefederateLemmyMl@feddit.nl · +3 · 1 day ago

        So if the anime characters are all clearly representing children, then sure, why not count that.

        “But she’s really a 120 year old vampire so it doesn’t count, you just don’t understand bro”

      • ZILtoid1991@lemmy.world · +3 / −1 · 1 day ago

        Guy who consumes that kind of thing and occasionally draws it here. I’m not attracted to real children, and in fact I left Twitter because it became infested with (real) CSAM. “Pedojacketing” everyone who likes anything that could remotely be called a “child” isn’t a product of “the new web” (4chan lingo for the internet after mainstream platforms started banning racists); it’s quite old, in fact, but it does not help victims of child sexualization. It’s the same energy as the “real old internet guy” (read: browsed 4chan and YouTube since 2012) who wanted to gatekeep his “formerly niche” fandoms from tourists (read: anyone who’s not white and cishet), then tried to smear you for the sin of not knowing the origin of some joke that originated from a loli doujinshi.

    • ameancow@lemmy.world · +72 · edited · 2 days ago

      BRO STOP GOONING TO AI ANIME AND TAKE US TO MARS

      “Take us to Mars”? Are there really people out there who think they personally are going to benefit from this groyper chud’s bottomless piggy bank and ketamine addiction?

      • affenlehrer@feddit.org · +11 / −1 · 2 days ago

        Can’t wait to get a seat in one of those huge rockets that explode, burn up, and fail constantly just flying to low Earth orbit. I mean, for Mars it only takes like 10 of them to do a super complicated refueling maneuver in space, then half a year of travel to Mars, and then a vertical landing on Mars. What could possibly go wrong?

    • quick_snail@feddit.nl · +6 · edited · 1 day ago

      What do you think it’s going to be like on Mars?

      Just a few white bois masturbating to their anime pr0n cache while slowly starving to death.

      You want that?

    • tigeruppercut@lemmy.zip · +37 · 2 days ago

      Also you know he’s never put any time into developing any coping strategies for dealing with mean comments, so people should keep em coming

    • ameancow@lemmy.world · +19 / −4 · 2 days ago

      Question of the day, are you a pedo if it’s anime?

      Gosh, never seen this debate come up on the fucking worst parts of the internet before, totally new one. Can’t wait for all the fresh and original takes.

      In before “Ephebophile” gets thrown around and used as some kind of moral high ground.

      • boonhet@sopuli.xyz · +4 · 1 day ago

        I’m pretty sure that some stand up comedian made the joke that if you know the exact terms for different ages, you should already be locked up

      • GalacticGrapefruit@lemmy.world · +2 / −1 · 1 day ago

        Harkness Rules.

        And if she’s choosing to appear as a kid to lure pedos in so she can kill them with fire and eat their ashes, more power to her. Like Chris Hansen but with a hell of a lot more time on her hands.

        • Modern_medicine_isnt@lemmy.world · +10 / −3 · 2 days ago

          My take is if the assets are those of a fully developed woman with a young-looking face or a “story” that she is young, I am a little less concerned, but if the anime girl is like preadolescent, then that is a serious problem. My logic is that fully developed women with very young faces are, the vast majority of the time, not minors. But if you take away the adult development, then nearly all of the real-world examples are minors. And then it’s time for some serious ass whoopin’.
          I myself don’t like anime mainly because it tries to blur the line a lot (and sometimes just flat out crosses it). I don’t need that in my life.

    • Mwa@thelemmy.club · +4 · edited · 2 days ago

      It depends on whether the characters are lolis and you simp for the loli characters.
      But for Ani over here, yes.

    • Landless2029@lemmy.world · +9 / −4 · edited · 2 days ago

      What really bothers me about anime is when there’s romance with a loli 300-year-old vampire.

      I don’t give a shit if she’s “300 years old” in context. She looks 12!

      You can have your loli vampire/elf/whatever girl; just look, no touch!!

    • lowleekun@ani.social · +6 / −10 · 2 days ago

      Wait, is pedos beating it to drawn stuff bad now? I guess we should tell them real kids is where it’s at. /s

      Also, is it bad if pedos are in denial because they only beat it to drawn stuff? I think I’d prefer a person self-identifying as a lolicon or not? Like, I really do not get why people are so adamant about making the group of pedophiles as big as possible.

      • supernight52@lemmy.world · +5 / −4 · 1 day ago

        The issue is that it gives pedophiles an outlet for their fantasies. The only outlet pedophiles need is therapy for non-offenders, and lead aspirin for the offenders.

        • veni_vedi_veni@lemmy.world · +1 / −1 · 5 hours ago

          Being a pedo is a mental condition, just like being gay. You can’t really change a sexual attraction that is innate.

          Unless there are studies which correlate watching animated underage fantasies with actually acting on them IRL, you really don’t have a leg to stand on besides “it’s icky and offensive.” You may as well preach that killing in video games causes mass shootings…

          To be clear, I find it weird af, but I’m a staunch believer that censoring media that isn’t hurting anyone is more abhorrent.

      • Modern_medicine_isnt@lemmy.world · +4 / −5 · 1 day ago

        I think the desire to protect kids, which most but not all humans have, leads people to include more people in the category of pedophile so they can be sure not to miss any real ones. So I can respect the intent. But I do agree that it gets overused a lot. Just like “fascist” and “nazi” these days. It kinda cheapens the term for those who truly deserve it. But that is human nature.

        • HikingVet@lemmy.ca · +4 / −3 · 1 day ago

          That sounds like some creeper shit. And we currently have a worldwide problem with fascists, with said fascists protecting their rich pedo friends.

          • Modern_medicine_isnt@lemmy.world · +1 / −2 · 20 hours ago

            Exactly, but we spend time calling people who haven’t done anything pedos and fascists instead of dealing with the ones who have. That is the problem I am trying to highlight.

    • ameancow@lemmy.world · +20 · edited · 2 days ago

      At least Howard Hughes had the decency to lose his fucking mind in isolation. That, and he also contributed actual advancements to cinema and aerospace.

      It’s tragic, though, that Musk will likely ensure his name is plastered on a thousand different agencies, institutions, foundations, statues, and bidets all over the world, and people growing up several generations from now will largely assume he was some great inventor or hero along the lines of the great pioneers of the past.

      • yumpsuit@lemmy.world · +8 · 2 days ago

        do you think Hughes Aircraft people ever referred to jorkin’ it as “sprucing the goose?”

    • 0ops@piefed.zip · +66 / −1 · 2 days ago

      ✓ AI girlfriend

      ✓ Nazi

      ✓ (Allegedly) techy

      Damn Elon is basically Krieger

          • Honytawk@lemmy.zip · +13 · 2 days ago

            Elon still invented nothing. The guy couldn’t even finish a degree in physics.

            It’s only his requirements for the real engineering team that made the inventions kill people.

        • hperrin@lemmy.ca · +6 / −1 · edited · 2 days ago

          Elon is credited as an inventor on two patents. Both are design patents: one for the shape of a vehicle door and the other for the shape of a charger plug. So, he is an inventor, technically. He draws shapes and then patents them.

          As an actual inventor (US Patent #12,095,717 B1), it’s pretty ridiculous that he technically qualifies, when all he’s done is doodle.

          The other patents he’s listed as inventor on all have several other inventors listed, who probably did almost all, if not all, of the actual work.

      • PunnyName@lemmy.world · +14 · edited · 1 day ago

        Krieger worked with the Nazis, but then killed them. He was actively trying to sabotage the Nazi regime.

        Edit: forgot to mention, it was specifically in Dreamland, so maybe IRL he actually worked with Nazis. Unsure.

      • DrWorm@piefed.social · +1 · 1 day ago

        I got Musk vibes from Fabian in the later seasons, but I’m not sure if that was intentional.

    • ayyy@sh.itjust.works · +3 · 1 day ago

      I’m trying to track down why this keeps happening. May I ask how/where you uploaded this picture?

      • Luouth@lemmy.world · +1 · 1 day ago

        I’ve noticed this, too. Haven’t worked out why they only work half the time. None of the gifs I post in the comments are my OC. I borrow the direct link from search results. Weirdly, it still looks OK on my end whilst viewing on Boost for Lemmy. I posted using Boost, too. If you see any in my comment history that are now no longer present, maybe posting via Boost is the answer to the issue

        • ayyy@sh.itjust.works · +1 · 6 hours ago

          The other person explaining is definitely right. Also FYI what you are describing is called hotlinking and is generally considered to be bad internet etiquette because it adds load to someone’s server without driving real traffic to their site (although there is some debate around this that’s been ongoing for 30 years lol). The “best practice” is to copy the actual image (not the link) and paste it into Boost, and that should automatically upload the image to your lemmy/piefed instance and host it from there.

        • PonyOfWar@pawb.social · +1 · edited · 1 day ago

          Probably because of the cloudflare layer. It shows me a captcha when I click on the link to your image, so it makes sense that embedding wouldn’t work correctly.

          • Luouth@lemmy.world · +1 · 1 day ago

            Does that mean only folks who use Cloudflare DNS will have issues? I am not very techy when it comes to network related things

            • PonyOfWar@pawb.social · +2 · 1 day ago

              No, it’s the website you linked (yarn.co) that uses cloudflare protection (against bots, DDoS attacks etc). When it detects any traffic it deems unusual, it shows a captcha that the visitor needs to click before being able to view the image. It can’t display a captcha when you’ve embedded the image into a Lemmy post though, so the image just won’t load at all in that case.

              • Luouth@lemmy.world · +1 · 24 hours ago

                Thanks for the explanation. I’m not sure why it works for half the time, though, as people are definitely seeing it. Is it because of the amount of traffic the gif has caused to Yarn.co after being embedded into a Lemmy post?

                • PonyOfWar@pawb.social · +2 · 24 hours ago

                  I guess not everyone’s traffic is being deemed unusual/suspicious. There are many deciding factors that cloudflare could use to differentiate from “normal” traffic, such as location, browser, OS, VPN usage etc.

  • MangioneDontMiss@lemmy.ca · +12 / −9 · 1 day ago

    I hate Elon Musk as much as anyone, but the person who wrote this article, specifically targeting anime, seems like a xenophobic prude.

    • arararagi@ani.social · +5 / −1 · 1 day ago

      They really are. Americans are panicking since they are only making trash and losing their own animation awards to foreigners.

  • buddascrayon@lemmy.world · +30 · 2 days ago

    Billions to build an AI neural net to generate something far tamer than what you can find on 4chan.

  • Not_mikey@lemmy.dbzer0.com · +23 / −4 · edited · 2 days ago

    Honestly, a lot of what the article points to isn’t that bad and wouldn’t be out of place in a lot of the SFW moe communities here. They even reference a “topless woman,” which is just a picture, from the shoulders up, of some AI-slop galaxy woman.

    Yeah, fuck Elon and his Nazi AI, but I don’t think he’s posting gooning content, or at least I don’t think I could goon to it.

    • Skua@kbin.earth · +17 · 2 days ago

      It’s worth noting that there are several other Grok personas besides Ani. If he’s only engaging with / posting the Ani stuff but not the others, that’s something of a smoking gun to me

    • brucethemoose@lemmy.world · +9 / −1 · edited · 2 days ago

      On one hand, I agree. Rolling Stone is overblowing this for clicks.

      On the other, there’s a suspicious grain of truth. This matches my observations of internet folks developing obsessions over SFW “AI Slop Women” like that galaxy girl. I wouldn’t be surprised if Musk is doing a lot of boob-free generations of women in private.

      To such folks, it seems to feel… I don’t know the right adjective. Transcendental is too strong, and there’s definitely a sexual component too? I kinda went through a phase like this with SD 1.4/1.5, but Musk (ironically) seems kinda gullible and susceptible to this kind of thing. He might get really into it.