A chart titled “What Kind of Data Do AI Chatbots Collect?” lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.

  • Gemini: Collects all 10 data types; highest total at 22 data points
  • Claude: Collects 7 types; 13 data points
  • CoPilot: Collects 7 types; 12 data points
  • Deepseek: Collects 6 types; 11 data points
  • ChatGPT: Collects 6 types; 10 data points
  • Perplexity: Collects 6 types; 10 data points
  • Grok: Collects 4 types; 7 data points
  • krnl386@lemmy.ca · 20 hours ago

    Wow, it’s a whole new level of f*cked up when Zuck collects more data than Winnie the Pooh (DeepSeek). 😳

    • Octagon9561@lemmy.ml · 19 hours ago

      The idea that US apps are somehow better than Chinese apps when it comes to collecting and selling user data is complete and utter propaganda.

      • Duamerthrax@lemmy.world · 17 hours ago

        Don’t use either. Until Trump, I still considered CCP spyware more dangerous because they would be collecting info that could be used to blackmail US politicians and businesses. Now, it’s a coin flip. In either case, use EU or FOSS apps whenever possible.

    • TangledHyphae@lemmy.world · 17 hours ago

      +1 for Mistral; they released one of the first (if not the first) Apache-licensed open-source models. I run Mistral-7B and variant fine-tunes locally, and they’ve always been really high quality overall. Mistral-Medium packs a punch (mid-size, obviously), but it definitely competes with the big ones at least.

  • Sonalder@lemmy.ml · 1 day ago

    Does anyone have this data for Mistral, HuggingChat, and Meta AI? It would be nice to add them too.

    Edit: Leo from Brave would be great to compare too.

    • serenissi@lemmy.world · 22 hours ago

      Nope, these services almost always require a user login, eventually tied to a cell number (i.e., non-disposable), and they associate user content and other data points with the account. Nonetheless, user prompts are always collected; how they’re used is a good question.

    • exothermic@lemmy.world · 2 days ago

      Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.

      • TangledHyphae@lemmy.world · 17 hours ago

        https://ollama.ai/ is what I’ve been using for over a year now; new models come out regularly, and you just run “ollama pull <model ID>” to make one available locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).

        All free and available to everyone.
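        To make that concrete, here is a minimal sketch of the workflow. The model tag is just an example, and the Open WebUI invocation follows their published Docker quick-start, so double-check both against the current docs:

```shell
#!/bin/sh
# Opt-in flag so the sketch is safe to paste and read before running:
# the pulls only happen when RUN_DEMO=true is set in the environment.
RUN_DEMO=${RUN_DEMO:-false}

MODEL="mistral:7b"   # example tag; any model from the Ollama library works

if [ "$RUN_DEMO" = "true" ] && command -v ollama >/dev/null 2>&1; then
    ollama pull "$MODEL"                # download the model weights
    ollama run "$MODEL" "Say hello."    # one-off prompt from the terminal
fi

# Open WebUI as a ChatGPT-style frontend for the local Ollama API
# (Ollama listens on port 11434 by default):
if [ "$RUN_DEMO" = "true" ] && command -v docker >/dev/null 2>&1; then
    docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui \
        ghcr.io/open-webui/open-webui:main
fi
```

        Set RUN_DEMO=true to actually execute the pulls; everything stays on your machine either way.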

      • Kiuyn@lemmy.ml · 2 days ago

        I recommend GPT4All if you want to run locally on your PC. It is super easy.

        If you want to run it on a separate server, Ollama plus some kind of web UI is best.

        Ollama can also be run locally, but IMO it takes more learning than a GUI app like GPT4All.

        • CodexArcanum@lemmy.dbzer0.com · 2 days ago

          If by more learning you mean learning

          ollama run deepseek-r1:7b

          Then yeah, it’s a pretty steep curve!

          If you’re a developer, you can also search “$MyFavDevEnv use local ai ollama” to find setup guides. I’m using the Continue extension for VS Codium (or Code), but there are easy-to-use modules for Vim and Emacs, and probably everything else as well.

          The main problem is leveling your expectations. The full DeepSeek is a 671b model (that’s billions of parameters), and the model weights (the thing you download when you pull an AI) are 404GB in size. You need an enormous amount of RAM available to run one of those.

          They make distilled models, though, which are much smaller but still useful. The 14b is 9GB and runs fine with only 16GB of RAM. They obviously aren’t as impressive as the cloud-hosted big versions, though.
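          As a sketch, pulling one of the distills looks the same as pulling any other model. The 14b tag below matches Ollama’s DeepSeek-R1 naming at the time of writing, but verify it against the current model library:

```shell
#!/bin/sh
PULL=${PULL:-false}             # opt-in: the download is roughly 9GB
DISTILL="deepseek-r1:14b"       # distilled model, not the full 671b

if [ "$PULL" = "true" ] && command -v ollama >/dev/null 2>&1; then
    ollama pull "$DISTILL"
    ollama list                 # shows the downloaded tag and its size
fi
```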

          • Smee@poeng.link · 1 day ago

            Or, if using Flatpak, it’s an add-on for Alpaca. One-click install, GUI management.

            Windows users? By the time you understand how to locally install AI, you’re probably knowledgeable enough to migrate to Linux. What the heck is the point of using local AI for privacy while running Windows?

          • Kiuyn@lemmy.ml · 2 days ago

            My assumption is always that the person I’m talking to is a normal Windows user who doesn’t know what a terminal is. Most of them even freak out when they see “the black box with text on it”. I guess on Lemmy the situation is better; it’s just my bad habit.

            • utopiah@lemmy.ml · 1 day ago

              normal window user who don’t know what a terminal is. Most of them even freak out when they see “the black box with text on it”.

              Good point! That being said, I’m wondering how we could help anybody, genuinely being inclusive, transform that feeling of dread, basically “Oh, that’s NOT for me!”, into “Hmm, that’s the challenging part, but it seems worth it and potentially feasible; I should try.” I believe it’s important because, in turn, the “normal Windows user” could come to understand limitations hidden from them until now. They would not instantly understand how their computer works, but the initial reaction would be different, namely considering a path of learning.

              Any ideas or good resources on that? How can we demystify the terminal with a pleasant onboarding? How about a Web-based tutorial that asks users to try things side by side? They’d have their own desktop with their file manager on one side (if they want) and, on the other, a browser window with e.g. https://copy.sh/v86/ (WASM); this way they lose no data no matter what.

              Maybe one such example could be renaming a file from ImagesHoliday_WrongName.123.jpg to ImagesHoliday_RightName.123.jpg, then doing that for 10 files, then 100 files, thus showing that it scales and enables one to do things practically impossible without the terminal.

              Another example could be combining commands, e.g. ls to see files, then wc -l to count how many files are in a directory. That would not be very exciting, so maybe then generating an HTML file with the list of files and the file count.

              Honestly, I believe finding the right examples that genuinely showcase the power of the terminal, and the agency it brings, is key!
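              Both demo ideas fit in a few lines of POSIX shell. The file names are the illustrative ones from above, and everything happens in a scratch directory so nothing real is touched:

```shell
#!/bin/sh
set -e

# Scratch directory so the demo can't touch real files.
dir=$(mktemp -d)
cd "$dir"

# Create 10 sample photos with the "wrong" name.
for i in $(seq 1 10); do
    touch "ImagesHoliday_WrongName.$i.jpg"
done

# Bulk rename: the same loop works for 10 files or 10,000.
for f in ImagesHoliday_WrongName.*.jpg; do
    mv "$f" "$(printf '%s' "$f" | sed 's/WrongName/RightName/')"
done

# Combine commands: list the files, count them, then turn the
# result into a small HTML report.
files=$(ls)
count=$(printf '%s\n' "$files" | wc -l)
{
    printf '<html><body><h1>%s files</h1><ul>\n' "$count"
    printf '%s\n' "$files" | sed 's|.*|<li>&</li>|'
    printf '</ul></body></html>\n'
} > report.html
```

              Opening report.html in a browser then gives the “wow, it scales” moment without any risk to the user’s own files.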

            • CodexArcanum@lemmy.dbzer0.com · 2 days ago

              No worries! You’re probably right that it’s better not to assume, and it’s good of you to provide some different options.

      • Pennomi@lemmy.world · 2 days ago

        Check out Ollama; it’s probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.

      • skarn@discuss.tchncs.de · 2 days ago

        If you want to start playing around immediately, try Alpaca on Linux or LM Studio on Windows. See if it works for you, then move on from there.

        Alpaca actually runs its own Ollama instance.

        • Smee@poeng.link · 1 day ago

          Ollama recently became a Flatpak extension for Alpaca, and it’s a one-click install from the Alpaca software management entry. All storage locations are the same, so there’s no need to re-download any open models or remake tweaked models from the previous setup.

        • SeekPie@lemm.ee · 4 hours ago

          And if you want to be 100% sure that Alpaca doesn’t send any info anywhere, you can restrict its network access in Flatseal, as it’s a Flatpak.
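          For the terminal-inclined, the same restriction can be applied with flatpak override. The app ID below is an assumption; check yours with flatpak list:

```shell
#!/bin/sh
APPLY=${APPLY:-false}           # opt-in so this is safe to read first
APP_ID="com.jeffser.Alpaca"     # assumed Flatpak ID for Alpaca; verify locally

# Equivalent to toggling "Network" off in Flatseal:
if [ "$APPLY" = "true" ] && command -v flatpak >/dev/null 2>&1; then
    flatpak override --user --unshare=network "$APP_ID"
fi
```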

      • Smee@poeng.link · 1 day ago

        It’s possible to run local AI on a Raspberry Pi; it’s all just a matter of speed and complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it on the CUDA accelerator (graphics card) in my main rig is far faster.

      • skarn@discuss.tchncs.de · 2 days ago

        I can actually run some smaller models locally on my 2017 laptop (though I have increased the RAM to 16 GB).

        You’d be surprised how much can be done with how little.

  • Cris16228 · 2 days ago

    Me when Gemini (aka Google) collects more data than anyone else:

    Not really shocked; we all know that Google sucks.

    • Telorand@reddthat.com · 2 days ago

      I would hazard a guess that the only reason those others aren’t as high is because they don’t have the same access to data. It’s not that they don’t want to, they simply can’t (yet).

    • will_a113@lemmy.ml (OP) · 1 day ago

      Not that we have any real info about who collects/uses what when you use the API.

      • morrowind@lemmy.ml · 1 day ago

        Yeah, we do; they list it in their privacy policies. Many of these they can’t really collect even if they wanted to.

    • will_a113@lemmy.ml (OP) · 2 days ago

      And I can’t possibly imagine that Grok actually collects less than ChatGPT.

      • HiddenLayer555@lemmy.ml · 23 hours ago

        Skill issue probably. They want to collect more but Musk’s shitty hires can’t figure it out. /s

        • Ziglin (it/they)@lemmy.world · 23 hours ago

          Yeah, I feel like there’s a “supposedly” missing somewhere. We don’t know their servers, so at the very least “User Content” is based on trust.

      • zr0@lemmy.dbzer0.com · 2 days ago

        All the services you see above are provided to EU citizens, which is why they also have to abide by the GDPR. The GDPR does not disallow the gathering of information; Google, for example, is GDPR compliant, yet they are number 1 on that list. That’s why I would like to know whether European companies still try to build a business case on personal data or not.

        • Sips'@slrpnk.net · 1 day ago

          If there’s one thing I don’t trust, it’s non-EU companies following the GDPR. Sure, they’re legally bound to, but Meta doesn’t care, so why should the rest?

          (Yes, I’m being overly dramatic about this, but I lost trust in big tech companies ages ago.)

        • Susurrus@lemm.ee · 2 days ago

          It doesn’t mean they “have to abide by GDPR” or that they “are GDPR compliant”. All it means is that they appear to be GDPR compliant and pretend to respect user privacy. The sole fact that these AI chatbots run in US-based data centres is against the GDPR. The EU has had many different personal-data transfer agreements with the US, all of which were canceled shortly after signing due to US corporations breaking them repeatedly (Facebook usually being the main culprit).

          • zr0@lemmy.dbzer0.com · 1 day ago

            I tried to say that, but you explained it better, so thank you. Without a court case, you will essentially never know if they are truly GDPR compliant.

  • abdominable@lemm.ee · 2 days ago

    I have a bridge to sell you if you think Grok is collecting the least amount of info.