AI companies will fail. We can salvage something from the wreckage | Cory Doctorow
www.theguardian.com/us-news/ng-interactive/2026…
AI is asbestos in the walls of our tech society, stuffed there by monopolists run amok. A serious fight against it must strike at its roots
Comments from other communities
I agree with your choice to include the author’s name. Even without reading the article, it seems significant that Doctorow is writing in such a mainstream publication. I’m glad to see it
That’s the autofill choice, not mine 😅. Anyway, I think that putting his name in will push more people to read it
Fun fact: your local DC’s “workers” don’t have weapons. You can already salvage anything you want!
I admire Cory’s perennial optimism, but the more billions the governments pump into this scheme, the less likely it’s going to be. “Too big to fail” and whatnot.
I suspect that the house of cards will come tumbling down as soon as one of the companies in this massive Ponzi scheme fails to pay their bill.
That’s what I’m saying tho. If the government is all-in on this, and basically the only reason the stock market is growing is because of AI related things, they will get a backstop or a bailout. At which point we’ll probably be forced to use it even more to justify that action.
That’s a great article, thanks for posting.
Cory has a way of getting right to the heart of things, and does so marvellously here. Great explanation of why the investments continue despite the dogshit economics of this industry.
So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.
And I am here to tell you they are wrong. Wrong because this would represent a massive expansion of copyright over activities that are currently permitted – for good reason.
He goes on to say that prohibiting AI works from being copyrighted and worker collective bargaining are better solutions, and I really agree with the arguments for this. I also liked this bit about how some of what remains past the bubble could be useful:
And we will have the open-source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video; describing images; summarizing documents; and automating a lot of labor-intensive graphic editing – such as removing backgrounds or airbrushing passersby out of photos. These will run on our laptops and phones, and open-source hackers will find ways to push them to do things their makers never dreamed of.
Not just something but a ton of used RAM sticks and GPUs.
What I do not do is predict the future.
Ok.
maybe he means “ai companies will fail” as not so much as a prediction, but just a given. kind of like “one day, you must die” isn’t really a “prediction,” that’s just the way it is
“The way it is” rests on a huge body of statistical evidence. Claiming future results without that evidence is called “prediction”.
The guy is just PRing on the anti-AI sentiment.
Has this not always happened with any new technology?
Hysterical hatred? Not sure. Personal computers were welcomed, cars and planes too (planes were laughed at, but never hated as far as I know)… Nah, I don’t think that every significant new technology is hated at the start.
Companies fail all the time with any new technology, and some AI companies will fail. In this case it’s just business, not hatred. But I can see that by calling it hysterical you’re also not seeing the other perspective of the people hating on it, so it’s a waste of time to argue with you.
it’s a waste of time to argue with you
You’re correct. I believe at this point I have heard the full spectrum of anti-LLM arguments, and most of them are pathetic; the few that are somewhat reasonable aren’t actually anti-LLM but against consumer practices (regarding companies that try to shovel LLMs into everything without any reason, and end consumers as well, who use LLMs for purposes they were never made for, where they are still completely ineffective)
Likely more accurate to say ‘know the future’ instead of ‘predict the future’, but the intent is the same. He doesn’t share what will happen, only what will likely happen.
Hopefully dirt cheap gpus and ram when the bubble bursts
Unfortunately most of the GPUs aren’t usable by gamers. We aren’t talking mining booms where miners buy up gaming GPU stock, then sell cheap when the bubble bursts.
We’re talking companies buying huge GPUs that don’t have video outputs and run a software stack altered from what’s used for gaming, missing all kinds of features and game-specific patches.
Granted, many 4090/5090s were also used, and those will be usable by gamers, but even with a significant price drop on those, only richer gamers will find that to be viable.
Somewhat similar story for memory - a lot of it is tied up in HBM, or as GDDR on enterprise graphics cards.
I certainly wouldn’t let something like a cheap RTX PRO 6000 Blackwell Server with 96GB of VRAM go to waste. I’d put it to good use with Blender rendering, running models I actually care about, and maybe some Games on Whales.
Wow. I feel like I just stepped into a time machine back to the 90s and 00s. I didn’t realize Cory Doctorow was still writing blog stuff in other venues.
He’s been writing a lot lately … ever since enshittification hit the fans
His blog: https://pluralistic.net/
Back in them boingboing days.
I miss boingboing. I’ve never been able to get into it since they paywalled commenting.
Not to mention all the adverts after all the stories … including anti-capitalism stories
The truth is paywalled, but the lies are free!
Since the beginning of human history …
AI is in the hype section of the emerging technology curve. A lot of good will come out of AI once we calm down and stop losing our damn minds.
It won’t be just cheap GPUs either. It will be things like more accurate cancer diagnoses, as Cory says near the top.
What we need is for regulation to catch up and start incentivizing the right things…that is, the things that will benefit society as a whole, not just the oligarchs.
That won’t happen until the brick wall is not only hit, but completely demolished. While Democrats are significantly more rational than Republicans, they are still almost as much in the oligarchs’ collective pockets. Only the Progressives give me any hope, but there aren’t nearly enough of them in Congress yet to make shit happen.
It is already happening.
The distraction is that you only see the AI companies that have been blitzscaling, dumping unlimited amounts of money into orders and plans without any revenue plans on the other side. This move only pays off if they can essentially buy the entire market and lock out any competition (and then the rent-seeking enshittification will begin). The long-term prospects of these companies are shaky at best, but that doesn’t matter to the people currently dumping funding into them… they’re going to sell everything at the IPO and leave some other suckers holding the bag.
Because of this, there is many, many times more advertising and promotional hype than is justified by the actual progress in the field.
Everyone is familiar with this. If someone says AI, do you think of ChatGPT or an LLM? That’s because you’ve been affected by this hype wave, which is being intentionally propagated in order to drive valuations for AI companies that are looking to hit an IPO so all of the early investors can get out quick before the bubble bursts (it’s like a crypto rugpull, except it uses the stock market instead of a meme coin).
“Actual” AI, by which I mean machine learning, including neural networks, has made huge progress in a lot of fields following the discovery of the Transformer model (the T in GPT). The very real and impressive improvements that have been gained are not flashy; they do not make for immediate next-quarter profits, and they are mostly public discoveries coming out of academia, so they benefit everyone, which makes them worthless to the people trying to hoard emerging technology in order to push this bubble/rugpull.
Your life will be WAY more affected by the slow and incremental work being done in the field of robotics than by a slightly more personable chatbot. Your life, or the lives of people you love, will be saved by the advances in protein folding, which allow rapid development of new treatments that can be customized to the individual: cancer therapies optimized for the exact mutations in the patient’s cancer cells, or customized medicine aimed at reducing side effects or harmful interactions.
I was already aware of much of what you said, although I probably hadn’t gazed down the road quite as far.
However, what I was referring to was the legislation the commenter I replied to was calling for. What I meant was that proper legislation won’t happen until the “rugpull” you described actually happens, and the economy finally goes into the recession it would otherwise already be deep into by now. Being delayed so long may make it especially ugly, I fear; or it may just bridge us into the next thing that keeps the economy propped up, such as the robotics you mentioned.
May you live in interesting times, indeed.
Most of the hype isn’t about machine learning stuff for cancer diagnoses though. When the average C-level guy talks about AI they mean almost exclusively LLMs. Fancy autocomplete is their solution to everything, from summarizing an email to agentic OSes. And that’s just not going to happen.
not “our” minds. theirs.
techbros are the ones causing it to the detriment of everyone else.
PC parts
“Reverse centaur” is a super clunky term, I think we should say “Minotaur” instead.
A Centaur has the head of a person and the body of a beast, a Minotaur is the opposite.
Pretty sure a superb writer like Cory chose *reverse centaur* for a reason: to make it clear that a human is becoming a slave to a machine.
The term isn’t just about which half is human, though; it’s about the flip in the dynamic of humans’ relationship with tooling.
The “tool” part of the centaur is the horse body: all of the hard-to-duplicate bits of a human (reasoning, processing, fine motor skills) paired with the strength and speed of a horse. A reverse centaur is when a complex system is designed to need a “dumb human” to do the complex bits, while the system uses that human as an appendage.
Reverse centaur may sound clunky, but it’s a really elegant one-liner for Doctorow’s thesis.
Minotaurs have some potential for badass imagery. Reverse Centaur sounds intentionally clunky.
I’ll salvage me a cheap PC upgrade once this stupid ship finally sinks.
Possibly, but components from AI centers cannot be reused easily (or at all) for consumer machines.
Hopefully we have ways to recycle the old chips. Some of the GPU chips could maybe be put on consumer boards? I’ve already heard of memory chips being recycled, except HBM of course.
Of course a lot of these companies would probably just prefer to put them in a trash compactor lol.
their leaders need to be put into the trash compactor for once, where they belong.
most of these companies will probably fail. it’s expected. but the AI paradigm is not going anywhere
Haven’t read the article yet but hopefully it’s cheap used drives
This man has really built a career and following off saying common sense things…
I don’t know why people keep eating it up or acting like he’s a genius, but at least this time he isn’t coining a new term to “explain” something that leads to everyone using it but not understanding what’s actually happening.
He’s a good writer and leader on the technology front. He gets a bit repetitive but that’s because he is trying to get the word out.
He’s actually built a career on being an author…
I’m also assuming you only read the headline
What you might think is “common sense” may not be for others. There is value in this being documented, otherwise the person without “common sense” may be influenced by someone with an agenda who does document their thoughts.
Same as when people make fun of “obvious” research, there is value in having it peer reviewed as a reference for future researchers.
Oh yeah, because no one has “documented” that AI is bullshit yet…
Only the brave Cory Doctorow could gather and disseminate this to the masses!
/s
The fuck’s with your beef? This is just one article he wrote. The guy’s written hundreds of pieces, maybe thousands, about all sorts of technology-related issues
I don’t think he’s necessarily a genius, but he is a force for good and a great writer. We do need someone to keep stating the truth, keep saying what’s right and what’s wrong, this is literally how we win the information war currently waged against us.
It’s the same of the genocide experts calling out genocide, or the eco-activists calling out climate change. It might be obvious common sense to you, but it might not be to other people, and this is precisely why it needs to be shouted from every rooftop we have available.
Oh, and also, if you actually read his works he clearly does more than just state the obvious and coin new terms (even though both of those are important too). He is deeply and intimately familiar with the technical and social structures of the modern internet, his analysis of various phenomena and trends is usually on-point and has some predictive power. Most importantly he offers solutions to the issues facing us, and practices what he preaches too.
If you think he predicts anything, you’re streets behind
He repeats what anyone could find in a five minute Google search from articles written by others.
I’ve never seen an original thought, he’s the Carlos Mencia of tech.
Timing. Saying common sense stuff before others gives you an edge and being the first to say it with any eloquence, in a way people want to listen and are accepting of what’s being said. A lot of what Cory Doctorow says can be hard to hear or hard to deliver without sounding condescending like NDT. His talent is in the delivery.
He also has a ton of experience in this space, he’s worked for the Electronic Frontier Foundation for almost 25 years. On top of that, he’s whip-smart and enjoyable to read or listen to
Yeah bro…
No one else has pointed out AI will fail before 1/18/26…
Only the great Cory Doctorow could identify something that no other human has ever even contemplated.
I heard next, he’s going to release a blog post that Nazis might not be nice people
Again, did you even read the op-ed?
Why do people keep listening to this guy?
Because he knows his shit.
Sure, but nobody important cares, and he’s just repeating himself ad nauseam.
I’m not sure it’s targeted at anyone “important” (what a classist term!). It’s just analyzing the situation and predicting what happens next, so you and I can be prepared. It also hastens the end of the bubble ever so slightly, which is good: the sooner this poison-laced bandaid is ripped off the world’s economy, the faster we can start to rebuild something more meaningful, but now with new productivity tools by our side.
And you’re sure of that because …
In most fairy tales, it takes more than one blow to kill the dragon….