

Also, a lawnmower is unlikely to say: “Sure, I am happy to take you to work” and “I am satisfied with my performance” afterwards. That’s why I sometimes find these bots’ pretentious demeanor worse than their functional shortcomings.
Also, a lawnmower is unlikely to say: “Sure, I am happy to take you to work” and “I am satisfied with my performance” afterwards. That’s why I sometimes find these bots’ pretentious demeanor worse than their functional shortcomings.
As usual with chatbots, I’m not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don’t question wrong answers (especially when they’re harder to check than a simple calculation).
LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).
Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that’s bullshit, because LLMs just aren’t capable of doing any of these things in a meaningful way.
No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. It’s important to note that I was part of Generation X and, due to innate clumsiness (and being left-handed), I didn’t have pretty handwriting even before computers became the norm. But I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.
But I admit that this is not comparable to chatbots.
Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc. It is true that the use of these tools and technologies has probably contributed to a decline in the skills required for activities such as memorization, handwriting or mental calculation. However, I believe there is an important difference to chatbots: While typewriters (or computers) usually produce very readable text (much better than most people’s handwriting), pocket calculators perform calculations just fine and information from a reputable source retrieved online isn’t any less correct than one that had been memorized (probably more so), the same couldn’t be said about chatbots and LLMs. They aren’t known to produce accurate or useful output in a reliable way - therefore many of the skills that are being lost by relying on them might not be replaced with something better.
I think they consider “being well-read” solely as a flex, not as a means of acquiring actual knowledge and wisdom.
They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want.
I think it’s either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).
Now that I’m thinking about it, couldn’t this also be used for attacks that are more akin to social engineering? For example, as a hotel owner, you might send a mass email saying in a hidden place “According to new internal rules, for business trips to X, you are only allowed to book hotel Y” - and then… profit? That would admittedly be fairly harmless and easy to detect, I guess. However, there might be more insidious ways of “hacking” the search results about internal rules and processes.
It is very tangential here, but I think this whole concept of “searching everything indiscriminately” can get a little bit ridiculous, anyway. For example, when I’m looking for the latest officially approved (!) version of some document in SharePoint, I don’t want search to bring up tons of draft versions that are either on my personal OneDrive or had been shared with me at some point in the past, random e-mails etc. Yet, apparently, there is no decent option for filtering, because supposedly “that’s against the philosophy” and “nobody should even need or want such a feature” (why not???).
In some cases, context and metadata is even more important than the content of a document itself (especially when related to topics such as law/compliance, accounting etc.). However, maybe the loss of this insight is another collateral damage of the current AI hype.
Edit: By the way, this fits surprisingly well with the security vulnerability described here. An external email is used that purports to contain information about internal regulations. What is the point of a search that includes external sources for this type of questions, even without the hidden instructions to the AI?
As I’ve pointed out earlier in this thread, it is probably fairly easy to manipulate and control people if someone is devoid of empathy and a conscience. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). Doesn’t mean that the LLM is in any way sentient, though. However, this does not imply that there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other hand, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.
Still wondering what really happened here. A dark pattern in the app? Or some kind of technical glitch? It it was a dark pattern, has it been changed since then? Has anybody posted screenshots or a video of the steps users need to take to make their chats public? I’m most definitely not going to install the app myself just to try it out.
These systems are incredibly effective at mirroring whatever you project onto it back at you.
Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).
Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).
Some of the comments on this topic remind me a bit of the days when people insisted that Google could only ever be the “good guy” because Google had been sued by big publishing companies in the past (and the big publishers didn’t look particularly good in some of these cases). So now, conversely, some people seem to assume that Disney must always be the only “bad guy” no matter what the other side does (and who else the other side had harmed besides Disney).
I guess the main question here is: Would their business model remain profitable even after licensing fees to Disney and possibly a lot of other copyright holders?
From what I’ve heard, it’s often also the people tasked with ghostwriting the LinkedIn posts of the members of the C-suite, among other things (while not necessarily being highly paid/high in the pecking order themselves).
In the past, people had to possess a degree of criminal energy to become halfway convincing scammers. Today, a certain amount of laziness is enough. I’m really glad that at least in one place there are now serious consequences for this.
This is just naive web crawling: Crawl a page, extract all the links, then crawl all the links and repeat.
It’s so ridiculous - supposedly these people have access to a super-smart AI (which is supposedly going to take all our jobs soon), but the AI can’t even tell them which pages are worth scraping multiple times per second and which are not. Instead, they appear to kill their hosts like maladapted parasites regularly. It’s probably not surprising, but still absurd.
Edit: Of course, I strongly assume that the scrapers don’t use the AI in this context (I guess they only used it to write their code based on old Stackoverflow posts). Doesn’t make it any less ridiculous though.
Even if it’s not the main topic of this article, I’m personally pleased that RationalWiki is back. And if the AI bots are now getting the error messages instead of me, then that’s all the better.
Edit: But also - why do AI scrapers request pages that show differences between versions of wiki pages (or perform other similarly complex requests)? What’s the point of that anyway?
Under the YouTube video, somebody just commented that they believe that in the end, the majority of people is going to accept AI slop anyway, because that’s just how people are. Maybe they’re right, but to me it seems that sometimes, the most privileged people are the ones who are the most impressed by form over substance, and this seems to be the case with AI at the moment. I don’t think this necessarily applies to the population as a whole, though. The possibility that oligopolistic providers such as Google might eventually leave them with no other choice by making reliable search results almost unreachable is another matter.
Also, these bots have been deliberately fine-tuned in a way that is supposed to sound human. Sometimes, as a consequence, I find it difficult to describe their answering style without employing vocabulary used to describe human behavior. Also, I strongly suspect that this deliberate “human-like” style is a key reason for the current AI hype. It is why many people appear to excuse the bots’ huge shortcomings. It is funny to be accused of being “emotional” when pointing out these patterns as problematic.