Is *AI* (LLMs) causing Brain Rot? And is it just a bubble waiting to burst?

In this case, its will was ‘reverse engineered’:

From Embodied AI Jailbreak to Remote Takeover of Humanoid Robots

After two weeks of intensive work, they were able to disassemble the VM and patch the firmware. This not only unlocked restricted functions but also “taught” the robot dangerous movements.

In a second demo, they used this control to make the robot perform targeted, powerful boxing punches against a test dummy upon a codeword.

The dummy, in high heels and with its back to the attacker:

https://youtu.be/qjA__5-Bybs?t=1666

Sold off-the-shelf, fairly empty-headed:

Unitree G1 EDU Standard Robotic Humanoid (U1) $43,900.00

The G1 EDU Standard is the foundational model in Unitree’s G1 EDU lineup, offering a powerful platform for robotics education, AI development, and research.

Sold off the shelf too, Chinese researchers-with-glasses, model “Ideal-son-in-law”:

39C3, that time of the year again:

The 39th Chaos Communication Congress (39C3) takes place in Hamburg on 27–30 Dec 2025, and is the 2025 edition of the annual four-day conference on technology, society and utopia organized by the Chaos Computer Club (CCC).

1 Like

I don’t know about that - there are many kinds of genius in the world.

That would be fantastic. I’m curious how you are coping with disparate context windows and managing context windows in general.

Personally, I’d be interested in how it would perform with swarms of Gemini 3 Flash - not frontier, but cheap with a large context - in an attempt to exceed the best-quality models for coding.

I’ve been doing mostly complex BASH scripts, and I created a ‘bash compiler’ to facilitate agent bash coding. One of the early issues I had was that during editing of large scripts, the agent couldn’t remember enough to coherently edit/add/modify without creating duplicate and conflicting code.

The bash compiler forces the agent to write single-function files that are then concatenated into multi-function files. The agent never works on the compiled file; it only sees the single-function files. The compiler also runs shellcheck to verify function correctness and fails on errors, informing the agent (the agent itself runs the compiler, so it’s an all-in-one process).

I’ve attached it if you want to try it. The agents.md file will need to be edited to set the path for your configuration. I’ve necessarily left out a lot of details on how it works, but you can see for yourself if interested.
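For anyone curious before downloading the zip, the core idea can be sketched in a few lines. This is my own toy illustration, not the actual buildbash code: the directory layout (`functions/`), output name (`compiled.sh`), and function name are all assumptions.

```python
"""Toy sketch of the 'bash compiler' workflow described above.

Assumptions (not from the real buildbash tool): agents write one
function per file under functions/, and the output is a single
compiled.sh that the agent never edits directly.
"""
import pathlib
import shutil
import subprocess


def compile_bash(src_dir: str = "functions", out_file: str = "compiled.sh") -> bool:
    # Concatenate every single-function file into one script,
    # tagging each chunk with its source file for traceability.
    parts = ["#!/usr/bin/env bash\n"]
    for path in sorted(pathlib.Path(src_dir).glob("*.sh")):
        parts.append(f"# --- from {path.name} ---\n{path.read_text()}\n")
    pathlib.Path(out_file).write_text("".join(parts))

    # Lint the result if shellcheck is installed; the real compiler
    # fails here and feeds the diagnostics back into the agent's context.
    if shutil.which("shellcheck"):
        result = subprocess.run(["shellcheck", out_file],
                                capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stdout)  # the diagnostics the agent would see
            return False
    return True
```

The point of the design is that the agent's edit surface stays small (one function per file), while correctness is checked on the assembled whole.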

buildbash-v5.zip (4.6 KB)

2 Likes

Your experiments sound interesting and I wish you the best with them.

If you or anyone else is seriously interested in the above question though, one must look at the side with the ants, not just the side with the LLMs. In that vein, the following is very relevant:

Nicolas P. Rougier, the author, is “senior researcher in computational cognitive neuroscience at Inria and the Institute of Neurodegenerative Diseases (Bordeaux, France)”. This doesn’t mean that his word is law or anything, but just that he’s put a lot of time and effort into these subjects, and generally people really should be very careful making pronouncements in areas they aren’t experts in.

Currently, we’re living through this cute but frankly unhinged historical moment where techies are excited cos they think they have “solved” biology+philosophy+consciousness+evolution+etc. In reality, they’re being poked and pulled along by a company performing an enormous power grab, as I described above.

Anyway, one simple answer to your question is that LLMs are incredibly dumb compared to ants, in terms of their “perception” of the world (at least!). Your thought is comparing chalk and cheese, in other words.

If anyone would like to flame me for this statement, please read the article first and respond to its illustration of this point. Cheers.

2 Likes

I need to put that movie on the watch list.

There is also an amazing TV show, “Westworld”, starring Anthony Hopkins, Ed Harris, and Evan Rachel Wood. It is based on the 1973 movie written and directed by Michael Crichton, who is also famous for the book Jurassic Park.

Have you two seen the original? That one is also on the watch list for me.

@happybeing

I have seen the newer show, excellent watch!

1 Like

Many years ago, so a different me, watched it. All I can recall now is Yul Brynner, iirc.

Dark Star springs to mind in this context, but again it’s so long ago that I don’t really remember much.

A more contemporary take on these things, and for a good laugh, is of course Red Dwarf, the TV series.

1 Like

Red Dwarf seems funny :grinning_face_with_smiling_eyes: Old movies should be free; found some links for context. So many movies to see some day, maybe it should be a New Year’s resolution. :thinking:

Red Dwarf

WestWorld 1973

WestWorld 2016

Tron 1982

Tron Legacy 2011

@rusty.spork The first 2-3 seasons of WestWorld are fantastic, re-watching them currently together with people reacting on YouTube first time watching.

1 Like

Also it is amazing that some brilliant minds thought of such concepts so early; I think I even heard that books about nuclear war were being written as early as the early 1900s. Also thinking about when I was way too young to be watching Terminator, glad it was only fiction, but today it seems not that far off, a little scary.

2 Likes

Only seen the original movies, yes plural, of different worlds. Hope they still are available.

You do RC

Definitely, and one of the longest lasting SCIFI shows.

2 Likes

:face_blowing_a_kiss: :zany_face: :robot:

cl4w3d c0d3 :face_with_hand_over_mouth:

one side: posts links to studies & thoughtful essays from neuroscientists & makes full sentences & stuff

other side: lOok aT ThIS tWEet FRom A guY wOrKIng In LlMs, He mUSt thEreForE kNOW stuFf! emOjISSSSSssS

EDIT: I think my joke should be obvious and clear but I realise after my comment that you should never expect people to read you charitably and with care, so to be explicit:

Tweets from people whose salary depends on money pumping into LLMs should be taken with a grain of salt big enough to put an elephant under pressure. Seriously people, come on! Are we interested in trying to understand the world, or are we interested in having our fantasies tickled?

1 Like

All this crap about LLM-induced psychosis and people degrading and becoming less useful as slaves - pure bullshit! Oh no, people might have time to reflect!

You will like this one @Warren

OpenAI Reportedly Planning to Make ChatGPT “Prioritize” Advertisers in Conversation

4 Likes

I know I’m posting a bit much here these days and not keeping it very light-hearted, and @happybeing basically made this point or a very similar point earlier, but I can’t help myself:

We’re on a forum for a project which was originally conceived as a way of “correcting” the massive consolidation of power on the internet which happened over the course of a couple of decades, leaving everyone worse off. Gatekeepers everywhere, and everything driven by engagement metrics. By introducing a decentralised network which could be used privately and spreading out the data and ensuring anyone could access it, we’d be redistributing control, and therefore power.

The same companies who carried out the above consolidation of power, who inserted themselves as the middlemen, well, the claim I’m making is that:

OpenAI and the whole LLM industry are cut from the same cloth.

It’s the same play! Can any one of the LLM people explain to me why this time they think it’s different? Or are you aware it’s the same thing, but you don’t care, for some reason?

How is it logically possible that you’re so involved with a project whose goal is to re-engineer around the “original sin” of centralisation, and simultaneously so enamored of a project that is using all the data on the internet to erect itself into a position of power by becoming a new centralised gatekeeper, standing between people and the data they’re interested in, posing as a divine oracle?

5 Likes

Isn’t @dirvine working on an AI system that is native to Autonomi? Wouldn’t this mitigate the concerns you have?

3 Likes

Who’s saying they love OpenAI or their ilk? I expect most here want AI tools to be as decentralised as possible.

4 Likes

I treat it as a real swarm: each agent and their associated context is independent of the others. I give them tools to communicate and tell them how to use them. I have a simple context compression prompt that gets called when that particular LLM’s context window reaches 85% capacity. But this is the tricky part: remove too much and they get lost; keep too much and you’re burning a lot of tokens on each LLM call. Having shared context would produce better results, I’m sure, but you really limit the scalability of the system when you start playing games like that.
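To make the trade-off concrete, the trigger described above can be as simple as the sketch below. The names, the 85% threshold, the "keep the last 4 messages" policy, and the stand-in tokenizer/summariser are all my own inventions for illustration, not the poster's actual system:

```python
# Toy illustration of a per-agent context-compression trigger: fire a
# summarisation step once the context passes a fraction of the window.
# All names and policies here are invented for illustration.

def maybe_compress(messages: list[str], window_tokens: int,
                   count_tokens, summarise, threshold: float = 0.85) -> list[str]:
    """Return `messages`, compressed via `summarise` if over threshold."""
    used = sum(count_tokens(m) for m in messages)
    if used < threshold * window_tokens:
        return messages
    # Keep the most recent messages verbatim and summarise the older ones.
    # This is exactly the knife-edge described above: summarise too much
    # and the agent loses the thread, too little and tokens are wasted.
    keep = messages[-4:]
    summary = summarise(messages[:-4])
    return [summary] + keep


# Crude stand-ins for a real tokenizer and a real LLM summarisation call:
toy_count = lambda m: len(m.split())
toy_summarise = lambda ms: "[summary of %d earlier messages]" % len(ms)
```

In a real system `count_tokens` would be the model's own tokenizer and `summarise` another LLM call, but the shape of the decision is the same.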

I’ve gotten them to be much more flexible by focusing on the how and the why in their prompts. I only resort to the ‘what’ if it is a specific sequential operation (like managing git workspaces and pulling in other agents’ changes); otherwise I leave it up to them to make the right decisions. The agent code also injects messages into their context when they screw up, and that feedback loop works pretty well.

That bash compiler idea is pretty clever. I’ve considered doing something similar in the swarm by allowing agents to call for help to break up really complex tasks into subtasks, with the original agent acting as a sort of supervisor. I’m still working on relatively simple coding questions right now as I tune it; when I move on to more complex tasks I’ll tackle this one.

2 Likes

Is he working on that? I sincerely don’t know. I have seen some hand-waving in that direction, yes. I don’t follow the Discord, so I’ve no idea. I had a browse or two of Saorsa, but don’t think that is what you allude to above…? Perhaps someone who does follow Discord can update us?

I thought even when it was being hand-waved towards, it was in the famous category of “stuff that might be possible to add, once we’ve solved these other between 5 and 25 very hard things, some of which have never been successfully implemented in any project, which we don’t know how exactly we’d fit in but we’d certainly try if we could”.

The same category that native currency is in, incidentally, last I heard.

I think cynicism is generally unhelpful so I try to avoid it, but the embrace of crypto and LLMs has been a bit much for me to choke down. I’ve waited till the dust settled, I’ve tried to give the benefit of the doubt, but here we are, this is the reality of this project now. There’s been no public service announcement to clarify where we really are at, so it’s up to people following the project to face the reality or not.

A second time, after which I will desist, I invite @Bux, @dirvine, @JimCollinson or anyone from MaidSafe to give us an insight into what the situation is. As ever, I’m very aware that I’m constructing a narrative here while filling in lots of blanks myself, and am willing to reconsider any of my conclusions. Honest communication from the project is thin on the ground, it really might help. Of course, feel free to ignore, it’s your choice.

In light of all this, perhaps you’ll forgive me for taking the hand-waving towards lightweight decentralised open-source cruelty-free Autonomi-native LLMs – which also somehow completely cut out the corporations doggedly fighting for their entrenched positions in the middle as guardians of the oracles? or something? – with the previously mentioned above-average grain of salt.

It’s not about loving OpenAI or not, it’s about fighting the centralisation of power seriously or not. I thought that MaidSafe understood in its DNA the playbook of the modern tech company - enclose a part of the commons that people didn’t realise could be enclosed, or alternatively create an entirely new chunk of commons, keep everything cool and free and friendly until people are locked in, and then monetise it, with ads, subscriptions, jack-up prices, whatever suits.

That’s what we’re living through with these LLMs, and nothing more. We’re just arriving now at the “ads & porn & jack up prices” stage. It’s being touted as (i) inevitable and (ii) a revolution, but it’s (i) not a law of physics, and (ii) a revolution in exactly nothing - centralisation and techno-utopianist fetishism before, centralisation and techno-utopianist fetishism after.

So I’m sorry for insisting so long-windedly on my point but allow me to say clearly: the only radical and sensible option here for someone who wants to fight against the centralisation of power and the enshittification of all things is to reject these LLM companies wherever possible, and more importantly, to publicly fight back against the notion that this future is the only possible one.

Of course it isn’t. An awful lot of people (often non-techies) continue to reject this future, saying they don’t want “AI” in their browser, or their search bar, or their apps, or their fridge, or their watch, or their car, or anywhere, and they continue to be told - well, you’ll have it anyway! It’s a disgusting and very peculiar state of affairs, unforeseen in the history of products and consumption, as far as I can see.

MaidSafe could have been fighting this fight for internet users everywhere, but they chose the wrong path.

4 Likes

I find the division of views on this fascinating, along with the apparent flip-flop of MaidSafe/Autonomi/David in completely detaching themselves from what @anon99156678 and I see as the mission of this whole venture, while also saying that, regardless of appearances, nothing has changed in that respect. So trust us, even though we now treat you with disdain. :rofl:

What surprises me is how so many in the community say nothing about that dichotomy and just :folded_hands: that everything will be ok in the end. Good luck with that.

Even BTTF investors have given up, despite there being no apparent chance that they’ll receive their tokens, and Autonomi again saying it will be ok, leave it to us - we’re answering their questions. What possible questions still need answering, getting on for a year later?

Sniff :hot_beverage:

9 Likes

Sort of related, from an unexpected source. There seems to be growing pushback against AI features being added to Windows, along with the general advertisements it now pushes.

The last Windows I used in anger was XP, so I can’t really comment. Good to see Linux is still winning new hearts on their laptops/desktops, albeit slowly!

I’m brave enough to say it: Linux is good now, and if you want to feel like you actually own your PC, make 2026 the year of Linux on (your) desktop | PC Gamer

7 Likes