

(this probably deserves its own post because it seems destined to be a shitshow full of the worst people, but I know nothing about the project or the people currently involved)
Did you know there's a new fork of xorg, called x11libre? I didn't! I guess not everyone is happy with wayland, so this seems like a reasonable response.
It's explicitly free of any "DEI" or similar discriminatory policies… [snip]
Together we'll make X great again!
Oh dear. Project members are of course being entirely normal about the whole thing.
Metux, one of the founding contributors, is Enrico Weigelt, who holds such reasonable opinions as "everyone except the nazis were the real nazis in WW2", and also had an anti-vax (and possibly eugenicist) rant on the linux kernel mailing list, as you do.
I'm sure it'll be fine though. He's a great coder.
(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)
Relatedly, the gathering of (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) computer vision and machine learning and LLMs under the umbrella of "AI" is something I find particularly galling.
The eventual collapse of the AI bubble and the subsequent second AI winter is going to take a lot of useful technology with it that had the misfortune to be standing a bit too close to LLMs.
It isn't clear that anyone in trump's government has ever paused to consider that any of their plans might have downsides.
Little table of "ai fluency" from zapier via linkedin: https://www.linkedin.com/posts/wadefoster_how-do-we-measure-ai-fluency-at-zapier-activity-7336442774650556416-nKND
(original source https://old.mermaid.town/@Kymberly/114635617736977394)
The author says it isn't a requirements checklist, but it does have a column marked "unacceptable", containing gems like
Calls AI coding assistants too risky
Has never tested AI-generated code
Relies only on Stack Overflow snippets
Angry goose meme: what was the ai code generator trained on, motherfucker?
I don't think it's a stretch to see the independence of spacex classified as a national security risk and have it nationalised (though not called that, because that sounds too socialist) and have associated people such as elon declared traitors. Shouldn't even be that difficult these days, seeing how he's trashed his own reputation, and it'll be good to encourage the other plutocrats to stay in line.
Night of the long knives is in the playbook, after all
AI audio transcription is great.
https://mastodon.social/@nixCraft/114627512725655987
Sean Murray @NoMansSky
Ignore the auto-generated captions. We did not have a secret room hiding deaf kids.
Nintendo never once sent us deaf kids. We were hiding dev-kits. DEV-KITS.
For those of you who haven't already seen it, r/accelerate is banning users who think they've talked to an AI god.
https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/
There's some optimism from the redditors that the LLM folk will patch the problem out ("you must be prompting it wrong"), but they assume that the companies somehow just don't know about the issue yet.
As soon as the companies realise this, red team it and patch the LLMs, it should stop being a problem. But it's clear that they're not aware of the issue enough right now.
There's some dubious self-published analysis which coined the term "neural howlround" for some sort of undesirable recursive behaviour in LLMs; I haven't read it yet (and might not, because it sounds like cultspeak) and it may not actually be relevant to the issue.
It wraps up with a surprisingly sensible response from the subreddit staff.
Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well.
AI boosters not claiming expertise in something, or offloading the task to an LLM? Good news, though surprising.
FWIW, maemo still lives… Jolla released their C2 phone, which runs the maemo-descended sailfish OS, about 6 months ago. I don't know anything about it other than its existence, and that it doesn't have the N900 form factor, alas.
Interesting (in a depressing way) thread by author Alex de Campi about the fuckery by Unbound/Boundless (crowdfunding for publishing, which segued into financial incompetence and stealing royalties), whose latest incarnation might be trying to AI their way out of the hole they've dug for themselves.
From the liquidator's proposals:
We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and potentially creating new content based on our intellectual property *utilizing inexpensive artificial intelligence platforms*.
(emphasis mine)
They don't appear to actually own any intellectual property anymore (due to defaulting on contracts) so I can't see this ending well.
Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w
It's the usual "uninspiring right-centrist doesn't understand why they were elected, implements a bunch of stupid policies that don't improve things for anyone but some consultants and donors, hands country over to frothing far-right shithead" cycle.
I like that Soylent Green was set in the far off and implausible year of 2022, which coincidentally was the year of ChatGPT's debut.
I am absolutely certain that letting a hallucination-as-a-service system call the police if it suspects a user is being nefarious is a great plan. This will definitely ensure that all the people threatening their chatbots with death will think twice about their language, and no-one on the internet will ever be naughty ever again. The police will certainly thank anthropic for keeping them up to date with the almost certainly illegal activities of a probably small number of criminal users.
When confronted with a problem like "your search engine imagined a case and cited it", the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.
The other thing to be concerned about is how lazy and credulous your legal team are that they cannot be bothered to verify anything. That requires a significant improvement in professional ethics, which isn't something that is really amenable to technological fixes.
Loving the combination of xml, markdown and json. In no way does this product look like strata of desperate bodges layered one over another by people who on some level realise the thing they're peddling really isn't up to the job but imagine the only thing between another dull and flaky token predictor and an omnicapable servant is just another paragraph of text crafted in just the right way. Just one more markdown list, bro. I can feel that this one will fix it for good.
It's been a while since I watched idiocracy, but from recollection, it imagined a nation that had:
and for some reason people keep referring to it as a dystopiaā¦
eta
Ooh, and everyone hasn't been killed by war, famine, climate change (welcome to the horsemen, ceecee!) or plague, but humanity is in fact thriving! And even still maintaining a complex technological society after 500 years!
Idiocracy is clearly implausible utopian hopepunk nonsense.
Today's man-made and entirely comprehensible horror comes from SAP.
(two rainbow stickers labelled "pride@sap", with one saying "I support equality by embracing responsible ai" and the other saying "I advocate for inclusion through ai")
Don't have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148
I think that these are different products? I mean, the underlying problem is the same, but copilot studio seems to be "configure your own llm front-end" and copilot for sharepoint seems to be an integration made by the sharepoint team themselves, and it does make some promises about security.
Of course, it might be exactly the same thing with different branding slapped on top, and I'm not sure you could tell without some inside information, but at least this time the security failures are the fault of Microsoft themselves rather than incompetent third party folk. And that suggests that copilot studio is so difficult to use correctly that no-one can, which is funny.
Here's a fun one… Microsoft added copilot features to sharepoint. The copilot system has its own set of access controls. The access controls let it see things that normal users might not be able to see. Normal users can then just ask copilot to tell them the contents of the files and pages that they can't see themselves. Luckily, no business would ever put sensitive information in their sharepoint system, so this isn't a realistic threat, haha.
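The underlying pattern is the classic confused deputy: a service with broader permissions than its caller, happily answering on the caller's behalf. A minimal sketch of the bug (all names here are hypothetical illustrations, nothing to do with Microsoft's actual implementation):

```python
# Confused-deputy sketch: the assistant authorises reads against ITS OWN
# permissions rather than the requesting user's.

DOCS = {"salaries.xlsx": "CEO bonus: 1,000,000"}

# Per-principal read permissions: the assistant can see the file, alice cannot.
ACL = {"assistant": {"salaries.xlsx"}, "alice": set()}

def read_doc(principal: str, doc: str) -> str:
    """Return a document's contents if `principal` may read it."""
    if doc in ACL.get(principal, set()):
        return DOCS[doc]
    raise PermissionError(f"{principal} may not read {doc}")

def assistant_answer(user: str, doc: str) -> str:
    # BUG: checks the assistant's rights and ignores `user` entirely,
    # so any user can read anything the assistant can.
    return read_doc("assistant", doc)

def assistant_answer_fixed(user: str, doc: str) -> str:
    # Fix: verify the requesting user's own rights before delegating.
    read_doc(user, doc)  # raises PermissionError if the user lacks access
    return read_doc("assistant", doc)
```

With the buggy version, alice just asks the assistant for the file she can't open herself; the fixed version checks her rights first.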
Obviously Microsoft have significant resources to research and fix the security problems that LLM integration will bring with it. So much money. So many experts. Plenty of time to think about the issues since the first recall debacle.
And this is what theyāve accomplished.
https://www.pentestpartners.com/security-blog/exploiting-copilot-ai-for-sharepoint/
LLMs aren't profitable even if they never had to pay a penny on license fees. The providers are losing money on every query, and can only be sustained by a firehose of VC money. They're all hoping for a miracle.