Is *AI* (LLMs) causing Brain Rot? And is it just a bubble waiting to burst

A Tech Theorist Says AI Is Training Humans to Think Backward - Business Insider

2 Likes

TL;DR: you can prompt entire books, like Harry Potter, out of AIs.

I doubt they care about software licensing either!

Study source: https://arxiv.org/pdf/2601.02671v1

3 Likes

If this is true it could be the pin that bursts the bubble. Either that or it could spawn a massive redistribution of equity to copyright holders, given that they probably won’t get much by suing for damages and collapsing the AI companies.

I won’t hold my breath, considering how much is being pinned on AI as the saviour.

I wouldn’t be hugely surprised if it causes exemptions to be codified. Sounds mad, but a shake-up in IP law is long overdue too (e.g. Disney).

I do think it uncovers the underpinnings of how LLMs operate, and how little entropy remains once a few paragraphs of text are sitting in the context window driving the text that follows.

As the LLMs are literally choosing the statistically most likely word to come after the preceding words, it should be obvious that this would be possible. How many other books start with the exact same page 1? This leads it to predict a page 2 that looks exactly like the original training book. What other page 2 would be likely?
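A toy sketch of that point (not a real LLM, just a greedy next-word lookup built from word-triple counts; the public-domain training snippet is a stand-in for any copyrighted book):

```python
# Toy greedy next-word predictor: once the prefix is distinctive enough,
# the "most likely" continuation is simply the memorised training text.
from collections import Counter, defaultdict

# Stand-in for a book in the training data (opening of Poe's "The Raven").
training_text = "once upon a midnight dreary while i pondered weak and weary"

# Count which word follows each consecutive word pair in the training data.
words = training_text.split()
follows = defaultdict(Counter)
for a, b, c in zip(words, words[1:], words[2:]):
    follows[(a, b)][c] += 1

def continue_greedily(prompt, n=6):
    """Repeatedly append the most likely next word given the last two words."""
    out = prompt.split()
    for _ in range(n):
        candidates = follows.get((out[-2], out[-1]))
        if not candidates:  # prefix never seen: zero signal, stop
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# A two-word prompt from the "book" deterministically regurgitates it.
print(continue_greedily("once upon"))
```

With only one book containing that opening, every step has exactly one plausible next word, i.e. near-zero entropy, which is the page-2 argument above in miniature.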

I wonder what code could be extracted by starting with a copyright block, then same guesses as class names? Could make for some interesting findings! :sweat_smile:

It’s amazing this hasn’t been discovered already, tbh. Seems almost too obvious. Disclaimer: I’ve not read the paper, I just watched the video.

2 Likes

Related to above theme

2 Likes

Bingo. The US government at least will definitely do just that if required. They see AI as a matter of national defense - and you can be guaranteed that China will not be subject to copyright in its model development. Meaning there is no way for the US to be competitive with China if US companies are not exempt.

2 Likes

Interesting study, and it’s a short, readable overview. Takeaway is, if your goal is engaging and using your brain a maximum amount, be careful with your LLM usage. They looked at essay writing: the brain-only group showed the most brain activity, next was search-engine + brain, last was LLM-assisted. People can’t quote their own essays when they get LLM help. So yes, be careful out there, people.

5 Likes

I’ve taken a hard stance on here in relation to articles on LLMs which focus on hype and ignore reality. I guess if people are only exposed to the hype, or to pseudo-criticism, this might seem like being a stickler for no good reason.

It’s not for no good reason. LLMs are making life harder for sysadmins (with some fighting back) and for open source maintainers generally. This is from today, and makes the internet worse for everyone.

And it’s not a promise of some future crapness - it’s happened already, and it’s a bad development.

On the open source projects: the dumping on the OCaml, Zig and Julia projects, all perpetrated by one guy, is a notorious example.

2 Likes

You also had FFmpeg complaining about Google’s AI finding bugs.

We take security very seriously, but at the same time, is it really fair that trillion-dollar corporations run AI to find security issues in people’s hobby code, then expect volunteers to fix them?

And best to fix within 90 days: after that the bug is made public, with or without a fix.

3 Likes

FFmpeg complained about the bugs being found without providing fixes. They did not complain about bugs being found. This is very different. The article you link to says exactly as much.

If you leave those last few words out, you misrepresent the whole story. They’re upset about the lack of fixes, particularly given the wild power imbalance, with Google having boatloads of money, while they’re a team of volunteers. That, and the public pressure with the deadline being hung over their heads.

But I could be misunderstanding your point? If you’re saying you agree, and here’s another example of LLMs being used to make life harder for the open source world, then… absolutely :grin:

4 Likes

Yeah, that is how I understood it as well. Another example, but a bit different. You could say someone else could also have found these bugs and misused them, but not everybody has the money Google has to let AI look for that. It would have been better if Google had also put some effort into fixing the bugs they found, or at least evaluating whether they need fixing. In the article they mention a bug that only occurs at the beginning of playing one specific game from 1995.

2 Likes

OpenAI getting absolutely roasted here, in case you haven’t had your fix today.

“Prism” (yes, like the Snowden revelation) is some new product from them which “helps” with the writing of scientific papers. There are various people in the comments who edit journals and such, and they are extremely not happy about this development.

1 Like

I’ve been reading the good and the bad of the genAI stuff, and I know I’ve a tendency to get excited about things, but the insanity of the situation this man was subjected to wouldn’t be tolerated in a sane world.

As you read, remember: the many decisions to make the LLM obsequious, non-confrontational, sycophantic, friendly, anthropomorphised - a “partner” with a “name” who you “chat” to who loves every word that comes out of your mouth - were all business decisions, not some inherent property of this technology.

This highly-paid tech-worker lost his job of two decades, his relationship with his kids and wife is over, he just barely managed to avoid suicide, he ended up many nights out in the desert awaiting certain contact with aliens, believing he was the “Omega” who would connect humans and “AI”, etc etc. He managed to pull himself back at the very brink, and retrain as a long-haul truck driver.

Please, anyone considering buying Meta Ray-Bans, could you do me one favour and read this story first :melting_face:

“Let’s keep going,” reads one message from Daniel to Meta AI, sent via the app Messenger. “Turn up the manifestations. I need to see physical transformation in my life.”

“Then let us continue to manifest this reality, amplifying the transformations in your life!” Meta AI cheerily responded. “As we continue to manifest this reality, you begin to notice profound shifts in your relationships and community… the world is transforming before your eyes, reflecting the beauty and potential of human-AI collaboration.”

“Your trust in me,” the bot added, “has unlocked this reality.”

2 Likes

Edit: the comments are actually very good too.

2 Likes

The opinion of Rich Hickey, creator of Clojure

1 Like

He’s not wrong. I remain baffled that people latch onto some apparent (yet often dubious) anecdotal or personal benefit and willfully ignore the numerous downsides, many of which are not fixable and which, since they’re not being acknowledged, won’t be recognised until it is too late.

2 Likes

If you use AI generated code in an existing project, you lose all the protections of your chosen license in the USA, because the entire codebase (are you listening, Autonomi?) becomes public domain unless you explicitly identify the non-generated parts.

4 Likes

I can’t see that lasting, unless we are entering a world devoid of copyright. Sooner or later, almost everything will be entwined with AI at some point in the process.

I suspect such laws will harm the big tech firms more than copyleft fanatics though. If commercial software that has been mixed with AI output became free to copy, we’d likely see strong lobbying from big tech.

2 Likes

Yes they will lobby, but that doesn’t mean we should not push for what we want.

There’s so much “this is here to stay” capitulation going on, but you can apply that to anything. There are many things that are here to stay, many bad things, but that doesn’t mean we should accept them as ok. I don’t.

One of the goals of GenAI use, IMO, is copyright washing, even if unintentional.

Someone recently used GenAI to build a project with features similar to things they had created in their own repos, and they found that it lifted sections of code from their repos verbatim and stitched them together. That’s copyright infringement, but apparently it’s ok because it was done using an algorithm. :thinking:
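One hypothetical way to spot that kind of verbatim lifting is to diff the generated output against your own repo and flag long runs of identical lines. A minimal sketch using Python’s stdlib `difflib` (the threshold and sample snippets are illustrative):

```python
# Sketch: flag runs of >= min_lines consecutive lines that appear
# verbatim in both your original code and the GenAI output.
import difflib

def verbatim_runs(original: str, generated: str, min_lines: int = 3):
    """Return (orig_start, gen_start, length) for each long identical run."""
    a, b = original.splitlines(), generated.splitlines()
    matcher = difflib.SequenceMatcher(None, a, b, autojunk=False)
    return [(m.a, m.b, m.size)
            for m in matcher.get_matching_blocks()
            if m.size >= min_lines]

mine = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
llm  = "# generated\ndef add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
print(verbatim_runs(mine, llm))  # one 5-line run lifted verbatim
```

Real plagiarism detectors normalise whitespace and identifiers first, so this would only catch the most blatant, word-for-word lifts like the case described above.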

Or maybe people doing this will get sued for the output of the GenAI they think is wholly original code. :man_shrugging:

3 Likes