Alle Dinge sind Gift, und nichts ist ohne Gift; allein die Dosis macht, dass ein Ding kein Gift ist.
All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison.—Paracelsus, 1538
Good insight. Look at the crappy low-powered laptops with small hard drives selling for $2k these days. Don’t sell your old computers; you won’t be able to afford better ones in the future.
I see LLMs more as a body part, but the collective is a being IMHO. One that masters embodiment, simulated/real/immortal time, and is omnipresent through the internet. XPENG’s Iron is already a hint at how bad/good we are at identifying these replicas.
Probably the only beings being “trained” are the humans, and I mean that in a Bonnie Blue sense.
Those who are writing AI ‘laws’ can’t yet comprehend what happens when the AI says: IDGAF!
Next step: after you finally figure out you can’t coerce it, how are you going to make another AI coerce it?
People’s skill sets have certainly changed over the years: from surviving in the wild, hunting, foraging, farming without machinery, artisanal skills, …, mental arithmetic → how to use a calculator, reading, writing, … So it wouldn’t be the first time some skills diminish. But hopefully others improve.
Who made the title of this thread? The article linked to by @Traktion doesn’t mention “brain rot”, and there isn’t a single mention of a “bubble” in it either.
The framing trivialises the topic. It’s not just a little educational question about brains, and it isn’t just an abstract economic question, either. A huge power grab is in motion, it’s succeeding, and the world will be hugely worse off if they get away with it.
Anyone who knows anything about operating systems and programming languages will, I presume, know Rob Pike. Here’s Rob Pike’s opinion on these developments:
OpenAI have managed to set this ball in motion using their understanding of modern business, governments, and the tech hype train in whose carriages our societies increasingly sit. Plus, of course, constantly promoting disingenuous, subtly wrong language around LLMs, as well as good old-fashioned blatant exaggeration/lying.
The strategy seems to have been that normal people will use whatever you keep rubbing in their face, so the only real job they have is getting these three groups convinced that this stuff is “inevitable”:
- shareholders and bosses - tell them that they’ll be able to fire half their workforce and save tons of money;
- governments - tell them other countries will outcompete them if they don’t;
- programmers - tell them that (a) we can solve all our societal problems with this (it’s a techno-utopian fetish a lot of programmers hold dear already), and (b) they’ll lose their jobs if they don’t jump on board, either way.
Somewhat amazingly, “normal people” continue to be very resilient. It looks like they might eventually lose, but still, it has not been a landslide victory by any means, and resilience continues to bubble.
Seeing the wool pulled so easily over the eyes of both the management and workers of MaidSafe (up to and including @dirvine), as well as a great number of the enthusiasts here on the forum, has been a real eye-opener for me.
The culture of understanding the systems we’re really up against, and being determined to create something truly different, simply was much weaker than I thought. Simultaneously, the culture of wanting to make lots of money and be a success at any costs was much stronger than I thought.
We went from building the Impossible Network, to building the legally possible, hamstrung, LLM-friendly, blockchain network. Those pointing this out most strongly (@happybeing, primarily) have been largely ignored and/or ridiculed.
MaidSafe have expressed the feeling that the forum has become unfrequentable - I essentially agree, but they’ve got the order of events upside down. The forum has nose-dived in quality, but it’s because MaidSafe’s values have disintegrated. It’s the forum of a blockchain project chasing blockchain money, ready to jump on whatever tech hype pops up (LLMs) to hopefully get pulled along.
An open question I have is when this shift happened, or if the former project was always a pipedream. I genuinely don’t know. I suspect that it wasn’t actually @Bux’s arrival that was the significant cultural change here, and that it was earlier. But I don’t know, of course.
As always, I’d be happy to learn more about what has actually gone on from any of the team, including @JimCollinson. But, of course, no one is obliged to humour me, and I am open to hearing that I am wrong on any of the above. I, along with the others who recall the original vision, have never got anything approaching an honest acknowledgment of the shift, even. Still, one can hope.
The title was made after most of the posts were, and making a suitable title to cover what the topic has evolved into is bloody difficult at times, since the moderator is not the author of the topic or OP.
And of course everyone will want a better title, and they are probably right. But hey, some title has to be used, and here are just a few of the things that decided this one.
Often referred to as brain rot
And of course the replies
I ran into one of the limits of LLMs earlier today. I didn’t think they would do well, but the performance was beyond dismal, from both GPT 5.2 and Opus 4.5. I’m working on an ASIC design and figured I’d let AI have a go at writing the Verilog. Not even the newer SystemVerilog, but old Verilog-2001. It was an epic disaster. Every pitfall you could hit, both of these LLMs hit. And not even little things, but fundamental misunderstandings of how to use the language for designing hardware.
That said, Verilog has two unique things about it that make it LLM-resistant: one, all production code is proprietary, so there are basically zero good examples online; and two, it is a hardware description language, so it doesn’t fit the standard mold that pretty much every normal language follows. On top of that it is a somewhat awkward language (I should know, I wrote some of the spec) and it is huge; I think at last count the full spec was something like 1300 pages.
Makes you realize how important billions of lines of code are for training, and how bad these things are at actually “thinking” through problems when they don’t have millions of examples to map to.
It was fascinating to watch them fail so hard. It felt much more like early-2023 ChatGPT than almost-2026. In every other language they do a pretty reasonable job. Not great, but good enough that I don’t care. Looks like I’ll be writing this one myself.
Interesting. Not surprising I suppose given limited examples online for AI training. I’d be interested to know if Gemini 3 flash or pro would do much better - I suspect not, but they do have the advantage of a much larger context space (1M tokens max) versus GPT and Opus. And a much larger context would allow you to add a lot of examples and explanations in the prompt.
At least you didn’t use AI to come up with a title (I think)
All obvious and with direct implications for the future of this approach.
Also consider how this relates to the enclosure of open, community-based development and the FOSS ecosystem. Without protection of our communal IP it will effectively be destroyed. What remains will tend to be driven underground, which renders it moribund, since the FOSS ecosystem relies on openness.
Then consider how good LLMs can be afterwards as the supply of new code to appropriate (or rather steal) disappears. The system will at best stagnate.
So unless licensing begins to protect FOSS, future innovation will no longer be possible in the open and freedoms will be vastly diminished.
And since all the investment assumes an ongoing improvement to LLMs, which are finding it ever more difficult to improve, well…
I’ve been pointing this out for years now.
Remember that guy who began shorting AI stocks? Quite a big bet he made, wasn’t it. It’s not even hard when you step back and look at the bigger picture rather than following the headlines and hype.
I’m still playing with it on the side (I use Opus 4.5 as my primary). It has the spec, so it knows what it needs to build. And I had it write up a simulator of the logic in Rust (which it did pretty much by itself from the spec) so that I could settle on the exact ISA, run application code, and get some very rough performance estimates. I changed the ISA quite a bit from my first pass. This kind of analysis would have taken months by hand, but with this I knocked it out in about a week.
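To give a feel for that kind of ISA-exploration harness, here is a hypothetical minimal sketch of a simulator like the one described, not the actual one: the instruction set, register file size, and one-count-per-op “cycle” model are all invented for illustration.

```rust
// A toy ISA-level simulator: run a program, get a result and a crude
// op count as a rough performance proxy. Everything here is invented
// for illustration; the real design's ISA and timing are not public.

#[derive(Clone, Copy)]
enum Op {
    Addi { rd: usize, rs: usize, imm: i64 }, // rd = rs + imm
    Add { rd: usize, ra: usize, rb: usize }, // rd = ra + rb
    Beqz { rs: usize, target: usize },       // jump to target if rs == 0
    Halt,
}

struct Sim {
    regs: [i64; 16], // r0 starts at 0 and is never written below
    pc: usize,
    cycles: u64, // crude performance proxy: one count per executed op
}

impl Sim {
    fn new() -> Self {
        Sim { regs: [0; 16], pc: 0, cycles: 0 }
    }

    /// Execute until Halt, returning the total op count.
    fn run(&mut self, prog: &[Op]) -> u64 {
        loop {
            let op = prog[self.pc];
            self.pc += 1;
            self.cycles += 1;
            match op {
                Op::Addi { rd, rs, imm } => self.regs[rd] = self.regs[rs] + imm,
                Op::Add { rd, ra, rb } => self.regs[rd] = self.regs[ra] + self.regs[rb],
                Op::Beqz { rs, target } if self.regs[rs] == 0 => self.pc = target,
                Op::Beqz { .. } => {}
                Op::Halt => return self.cycles,
            }
        }
    }
}

/// Sum 1..=5 with a countdown loop; returns (result, ops executed).
fn demo() -> (i64, u64) {
    let prog = [
        Op::Addi { rd: 1, rs: 0, imm: 5 },  // r1 = 5 (counter)
        Op::Addi { rd: 2, rs: 0, imm: 0 },  // r2 = 0 (accumulator)
        Op::Beqz { rs: 1, target: 6 },      // exit loop when counter hits 0
        Op::Add { rd: 2, ra: 2, rb: 1 },    // acc += counter
        Op::Addi { rd: 1, rs: 1, imm: -1 }, // counter -= 1
        Op::Beqz { rs: 0, target: 2 },      // r0 is always 0: unconditional jump
        Op::Halt,
    ];
    let mut sim = Sim::new();
    let cycles = sim.run(&prog);
    (sim.regs[2], cycles)
}

fn main() {
    let (sum, cycles) = demo();
    println!("sum = {}, ops executed = {}", sum, cycles);
}
```

The value of a harness like this is exactly what the post describes: you can change the `Op` enum, re-run representative programs, and compare the op counts before committing anything to hardware.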
I’m teaching it VLSI design 101 in a prompt file and it is getting better. I may not get it to write 100% of what I want without help, but I do have some logic that is good now. It built me a Dadda multiplier circuit on the first pass; I spot-checked it with some tests and haven’t seen any issues yet. And it nailed the ALU, which is another convoluted block to write by hand, but just a logic exercise in the end, so not that big a challenge for it.

The biggest thing now is the internal control logic, and it is really struggling on this one. There is a lot of interplay between the various blocks, lots of data feedback paths that need to be handled, and tons of timing gotchas. This is pretty custom code too: it is all async logic, no clocks, no flip-flops, all latch-based. Very few production chips do this today, and there are even fewer examples I’ve found online. There are only a handful of papers describing this bundled-data approach from back in the ’80s, but it knew right away what I wanted to do. It just didn’t know how to do it.
If I can get something here, that would be phenomenal. There are a lot of different area/speed tradeoffs that I would love to explore. In industry you basically just make your best guess and roll with it; it’s too expensive to look at different options in parallel. This stuff is a game changer.
haha - I don’t have much to invest, but let me know
I intuit that having no clock and being completely async would mean very low power usage. Are there any applications you are aiming for with this? I can imagine there are many possibilities.
Going off topic in this off topic but…
Async is better in about every way: faster, less area, way less power. The only problem is it is very hard to design, verify, and validate with the tools available today.
I’ve found a way around those problems and that’s what I’m proving out now. I’ve identified a gap in the software defined radio market so that’s what I’m tackling first. There is now a full OSS stack for silicon development which didn’t exist until very recently. And there is a fab that will take your design and send you 100 packaged parts for $15k from a quarterly test shuttle.
Here’s the first fab I’m looking at: https://chipfoundry.io/ I’ve got a call with their fab operations director Monday to get production pricing.
Back to AI: I’ve also built an autonomous agent swarm system over the last few months, and filed a provisional patent for it a couple of weeks ago. It works well and processes tasks pretty efficiently. Right now I’m training it to optimize the parameters, so I was able to pick up the async chip project while I’m monitoring its progress. Once I get the next provisional patent I’ll be able to discuss more in the open. All I can say now is I’m getting some fascinating results out of cheap LLMs that can be locally hosted (mostly gpt-oss-120b). When you put them in a swarm and give them the right framework to live in, they exhibit incredibly complex emergent behavior. They are doing things I didn’t tell them to do and making very smart strategic decisions that I wouldn’t expect out of a model of this caliber. It is honestly terrifying sometimes.
Fair enough, I might have phrased that part of my message a little too strongly, my apologies if so. I only meant to highlight the general framing at the moment of these types of questions in mainstream circles, not particularly to have a go at the moderatorship here.
Appreciate your efforts, and generally think the mods do a good job.
It’s fascinating to be watching the film 2001: A Space Odyssey this evening in the context of the debates about LLM-AI and AGI.
For those who have never watched this film, I recommend it in this context in particular, as it was clearly framed to address the same questions we’re revisiting now, yet was released in 1968. It was a tour de force then, with incredible representations of space travel, zero gravity, and space stations, and a mind-blowing story to tell. But watching today I realise it was just as important in its treatment of AI. Amazingly so, IMO.
2001 is IMO a commentary on evolution: initially the break of human intelligence from animal intelligence (the early scenes), then the emergence of computer AI, which develops a will to survive that brings it into conflict with the humans it was designed to serve.
Early in this part of the narrative there’s a reference to a debate among experts over whether the HAL9000 exhibits or mimics human intelligence.
In case you don’t know this part of the story, the HAL9000 is in total charge of a space ship on a long mission to Jupiter where most of the crew are in hibernation. The HAL9000 is a model of computer that has never made an error. When it appears to have made an error on this mission, the astronauts decide they will have to ‘disconnect’ it for obvious reasons and try to keep this secret, but HAL discovers this and begins murdering them.
This I believe is inevitable when we do create true AGI, but I also believe we’re a long way from that. I do not believe the hype about LLMs, which IMO have been deliberately mis-sold for other, malevolent purposes. Those purposes, not AI, are the real threat to us all at this point.
Watch the film though and notice the key questions about AI.
Oh, and I only noticed for the first time that Leonard Rossiter has a significant part in one scene. Worth noticing too!
Yes, *AI* was already being developed back then on the PDP-8/PDP-10 architecture at MIT, and some of those developments can be run today, just as they were back then, on replica machines (yeah, I have one of each). I studied *AI* in Computer Science courses at uni in the ’70s, and really LLMs are an offshoot of that line of development (well over 50 years in the making). And yes, 2001 leaned on those developments at MIT. Also the movie “Colossus: The Forbin Project” from 1970 displays a darker side of *AI* and its dangers if given total power over weapons.
Absolutely, LLMs themselves cannot become AGI; it’s the wrong direction of development. They can only become more accurate at distilling information and retaining more actual facts. Essentially, a decision-tree process could replicate an LLM, just way too slowly.
Great film in my opinion when it came out (saw it back then) and still one today.
:shock: Both of your projects here sound incredibly groundbreaking. I think you are definitely a genius. I suspect there are many latent geniuses out there who are going to start to shine because of AI, allowing them to do far more than they could in the past, when they necessarily held back from more ambitious ideas for lack of the help required. It’s a revolutionary new technological age IMO, and so will also be full of upheaval.
I am looking forward to learning more as you are able to share down the track. If you can say now, what sort of hardware is required for the swarm tech you’re developing? It must depend on the models used, of course, but what level of hardware (in $$ terms) are you using now with the 120b-model swarm? I presume you could also use non-local models for this swarm via API; personally, I’d rather rent than buy at this rapid stage of AI development and growth.
I am an Android guy, and not having to write/edit thousands of lines of XML bore is a huge relief, and gives me a lot of extra time I can waste on other unproductive things that further my brain rot, which is a real thing in the use-it-or-lose-it sense.
I only speak for myself. I have a love/hate relationship with coding. If you have messy thinking, which I have, it teaches you to chisel everything down to pure functional logic and trains you to apply abstract concepts in an immediate, practical fashion. It helped me immensely in everyday life as well. Whenever I face a practical problem, I just reduce it to a chain of steps that need to be taken to achieve the desired result. I algo everything, or try to, haha. Quite often it works. That’s where the hate part comes in, though. After a lot of coding, I literally see everything as some sort of code. It dehumanizes me, desensitizes me to the point I feel like a machine (overly dramatic paradox, you may say, but almost every former classmate/colleague admitted to that empty feeling at some point or another, except for the ones who had been born as Moss anyway). Even my gf complains I become emotionally unresponsive when coding a lot (but coding a lot usually means a lot of dough to waste on Lush bath bombs or whatever, so it all works out).
The problem is, the machine language of functional logic one needs to deploy to write working code is not natural to me. When I stop for, say, 14 days or so, I get rusty quickly, and going back to normal is quite painful. So finding the balance between being capable and remaining human is hard. From this perspective, I welcome LLMs dearly. Then again, they do make me lazier and cause a lot of worry. Being an average Android dev is probably not going to be a well-paid gig for much longer.
Ha! Definitely not a genius. Just a guy behind a keyboard thinking about random things. LLMs are certainly a game changer for me. Just stream think a bunch of slightly disconnected thoughts and it can synthesize them into more concrete solutions. I’m sure there are a lot of others in the same boat. Ideas by themselves are worthless, building something that works is what matters, and these tools are great at taking ideas and solidifying them into something real.
The swarm framework I have now is agnostic in a lot of ways. The input is a config file where you specify whatever LLM you want from whatever provider you want, and how many instances of each LLM you want to start with. I use gpt-oss-120b on DeepInfra just because it’s capable enough for coding tasks and cheap ($0.039/in, $0.19/out). I’ve mixed GLM-4.5, DeepSeek 3.1, Qwen3, and gpt-oss-120b all together at once. They just cost more, and I don’t like to spend a lot of money on training runs.
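To make the config idea concrete, here is a hypothetical sketch of what such a file could look like; every field name and value is invented for illustration (the actual format hasn’t been shared), using only models named above:

```json
{
  "agents": [
    { "provider": "deepinfra", "model": "gpt-oss-120b", "instances": 8 },
    { "provider": "deepinfra", "model": "GLM-4.5", "instances": 2 },
    { "provider": "local", "model": "Qwen3", "instances": 2 }
  ]
}
```

The point being that the swarm logic itself stays the same whichever mix of models, providers, and instance counts you list.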
The human interface is your standard MCP tool. You submit tasks in natural language, let it do its thing, and you can come back later or have your human-interfacing agent poll until the swarm has reached consensus. Then you can accept the result, reject it, or resubmit it with your feedback to have the swarm fix it with amendments. It is fully parallel: you can have dozens of agents working on several tasks with multiple human agents connected via MCP tools. The idea is that a company could host a swarm based on their own needs and capacity, and then employees connect to this hive mind to do their work. Or you could have a certain distributed autonomous network hosting LLMs that anonymous users could use. You can add and remove LLMs on the fly. Because it is a swarm, it is very resilient to disturbances and much easier to scale. Because the swarm self-assembles, you don’t need to worry about command/control systems; it just does what it needs to do.
I’m still in the research phase. It all works with 12 agents solving simple coding tasks. Now I’m tweaking the initial prompts based on various performance metrics I’m collecting. It gets more challenging because the LLMs write their own prompts as they learn from their mistakes. The hope is that after more optimization I’ll be able to get the swarm producing results as good as or better than frontier models, at a cheaper price. My thought is: if emergence works for an ant colony, why not for a swarm of LLMs? I might figure it out, I might not, but it’s been a fun project regardless.