sonia: Quilted wall-hanging (Default)
[personal profile] sonia
The Right to Say "No" by Audrey Watters. A rant about AI, eugenics, and Epstein (no details).
There is a real rot at the core of many of our institutions – and certainly at the core of those powerful players operating within and adjacent to them. "Artificial intelligence" emerges from this rot. It cannot be a bulwark against it.


Why Science’s press team won’t be using AI to write releases anytime soon by Emily Underwood at The Last Word On Nothing.
Every time a translator takes a book and puts it in their own words, they are interpreting the material slightly differently. What we found was that ChatGPT Plus couldn’t do that. It could regurgitate or transcribe, but it couldn’t achieve the nuance to count as its own interpretation of a study.

I think that’s because ChatGPT Plus isn’t in society — it doesn’t interact with the world. It’s predictive, but it’s not distilling or conceptualizing what matters most to a human audience, or the value that we place in narratives that are ingrained in our society. [...]

Now, after this experiment, we’re very against using it. After a year of data, we know it can’t meet our standards. If we ever did plan to use it, we’d have to implement super rigorous fact-checking, because we don’t want to lose reporters’ trust.


The AI Invasion of Knitting and Crochet by Jonathan Bailey in Plagiarism Today.
Creating a pattern requires considering the entire work; each step has to fit with and work with all the others. Blindly selecting the next step without that consideration will, more often than not, fail. This is especially true since AI can’t “test” the pattern after writing it, which is a big part of what humans do. [...]

However, the best and simplest advice is to buy from patternmakers that you trust. If you know someone who is a human making high-quality patterns, turn to them first. Rewarding known human creators rather than chasing the cheapest pattern is the best way to avoid buying AI slop.


Edited to add:
I don't care how well your "AI" works by Fiona Fokus.
No matter how well "AI" works, it has some deeply fundamental problems that won't go away with technical progress. I'd even go so far as to say they are intentional.


The dark side of AI: Climate chaos, pollution, and injustice by Dwaign Tyndal. "Massive data centers pose serious risks to Black and brown communities."

WorkersDecide.tech, including AI Implementation Bingo. "Frustrated by your employer's generative AI policies? We're here to help you organize."

[personal profile] erinptah's list of a lot more relevant links, content note: teen suicide.

More great links in comments!

Date: 2025-11-25 12:07 pm (UTC)
sabotabby: a computer being attacked by arrows. Text reads "butlerian jihad now. Send computers to hell. If you make a robot I will kill you." (bulterian jihad)
From: [personal profile] sabotabby
Thank you for these. I collect them.
Edited (updated to use my new anti-ai icon) Date: 2025-11-25 12:11 pm (UTC)

Date: 2025-11-25 03:42 pm (UTC)
castiron: cartoony sketch of owl (Default)
From: [personal profile] castiron
Janelle Shane was demonstrating that a GPT can't make a good knitting or crochet pattern back in 2019 with her SkyKnit and HAT3000 projects.

It would absolutely be possible to write a program that *could* test knitting patterns -- it wouldn't be able to knit them, of course, but it could create a model of what you'd get if you followed the pattern instructions, and you could then see if it looked right. But a GenAI can't do this, and I don't think even other types of machine learning would be necessary.
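(To illustrate the point that no machine learning is needed: a toy checker along these lines can be written in a few dozen lines of ordinary code. The stitch vocabulary and notation below are made up and drastically simplified — real knitting patterns are far richer — but the sketch shows how a program could deterministically simulate a pattern and flag rows whose stitch counts don't line up.)

```python
import re

# Toy stitch vocabulary: (stitches consumed, stitches produced).
# Hypothetical simplified notation, purely for illustration.
STITCHES = {
    "k2tog": (2, 1),  # decrease: knit two together
    "yo": (0, 1),     # increase: yarn over
    "k": (1, 1),      # knit
    "p": (1, 1),      # purl
}

def row_counts(row):
    """Return (consumed, produced) stitch counts for a row string
    like 'k2, yo, k2tog' in the toy notation above."""
    consumed = produced = 0
    for token in re.split(r"[,\s]+", row.strip()):
        if not token:
            continue
        m = re.fullmatch(r"(k2tog|yo|k|p)(\d*)", token)
        if not m:
            raise ValueError(f"unknown stitch: {token!r}")
        name, rep = m.group(1), int(m.group(2) or 1)
        c, p = STITCHES[name]
        consumed += c * rep
        produced += p * rep
    return consumed, produced

def check_pattern(cast_on, rows):
    """Simulate the pattern row by row; return a list of error
    messages (empty list means the stitch counts all work out)."""
    errors = []
    stitches = cast_on
    for i, row in enumerate(rows, start=1):
        consumed, produced = row_counts(row)
        if consumed != stitches:
            errors.append(
                f"row {i}: uses {consumed} stitches "
                f"but {stitches} are on the needle"
            )
        stitches = produced
    return errors
```

For example, `check_pattern(4, ["k4", "k2, yo, k2tog", "p4"])` returns no errors, while `check_pattern(4, ["k5"])` flags row 1 for using more stitches than were cast on. A real tool would also model the resulting fabric, but even this bare arithmetic catches the kind of impossible instructions generative models tend to emit.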

Were you the one who'd posted the link to an article about the same problem with AI slop in quilting patterns? I remember reading the article but don't remember how I got to it.

Date: 2025-11-26 09:56 pm (UTC)
merrileemakes: cartoon of 3 boys riding BMX bikes in mid air, performing the very loosest interpretation of stunts. One of them has fallen and is upside down. (stunts)
From: [personal profile] merrileemakes
Thanks for sharing Janelle's article, that was the funniest thing I've read in ages.

Date: 2025-11-26 10:16 pm (UTC)
merrileemakes: A comic speech bubble with the text "What the actual fuuuuck?!" (Wtaf)
From: [personal profile] merrileemakes
I really liked the "right to say no" article. I did yet another AI training workshop yesterday at work. I've done a few and was even part of my workplace's AI pilot last year. But I felt like I wasn't "getting" AI, because it's just... bad. It's often outright wrong, and the "hallucinations" (fabrications) really irk me. I thought surely I was just doing it wrong or missing something, because there's so much enthusiasm for AI. But the training once again showed me that, even with best-practice prompting, AI spits out crap. I think I'm going to start saying no.

I also read the Science AI white paper at work, while compiling an internal newsletter of recent publications relevant to our work. Like its authors, I too felt more secure in my job knowing that AI is bad at summarising articles and totally unable to describe them in context.
Edited Date: 2025-11-26 10:17 pm (UTC)

Date: 2025-12-02 06:21 am (UTC)
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
From: [personal profile] silveradept
The experiment was a nice write-up, and I'm glad that people are doing that kind of rigorous experiment with it to see whether or not the chatbots can actually measure up to the job they're being asked to do. Even if they could, the resource usage is still far too much.

I also am trying to say no, and to avoid using the things, and to ask that they be disabled completely for me so that they don't try to "accidentally" help. I don't think they're suitable for my profession at all, even if we might have to learn how to use them because people will ask us how to use them to make things better.