Is *AI* (LLMs) causing Brain Rot? And is it just a bubble waiting to burst

5 Likes

As predicted. None of this should surprise people who understand how they develop and improve their cognitive abilities.

There’s even an aphorism for this: “Use it or lose it”

10 Likes

But for those with chronic fatigue and brain fog, having the brain work less is a very good thing.

ChatGPT 5.2 is amazing: it made me a working PDF-to-image converter app with a GUI in 30 minutes, none of the horrible syntax errors of previous models. I have become so familiar with ChatGPT that it feels difficult to switch, but I don’t want to support Sam Altman because he gives me bad-for-humanity vibes.

1 Like

I’m not sure I fully follow this. It sounds to me as if we’re right back at the early stage of the internet. Back then, we had all the same warnings. Using the internet was a shortcut; information shouldn’t be “easy to find”. 30 years or so later, it’s massively improved how we learn and interact.

I guess the same will happen with AI. Sure, a few things will get easier, and we will no longer be able to understand some simple things ourselves. But there is just no use in resisting it.

I’ve been actively vibe coding my own mobile app lately, and it’s been a long time since I’ve been this focused and productive. I’ve learned much more than I did back in the days when I was actually a mobile developer. Yep, some parts of the code I do not fully understand. And yes, I don’t comment every section anymore like I used to. But any developer using AI can just mark a piece of code in-line and ask for an explanation. It’s only getting better every day, and still in major steps.

4 Likes

I’ve tried Antigravity and used the free credits to create some unit tests for AntTP. It coped with the basic structs, but it just aborted/failed with the more complex structs.

Adding test coverage is a pretty mechanical process. It just has to figure out what the inputs and outputs are and generate code to tally them. It is also a boring, often repetitive process, so it’s a shame it didn’t do better (as it is a task I’d rather avoid).
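To illustrate how mechanical that tallying is, here is a minimal Rust sketch of the kind of test an AI would be expected to generate for a basic struct. The struct and names are hypothetical, not taken from AntTP:

```rust
// Hypothetical struct standing in for a "basic struct" from the post.
#[derive(Debug, PartialEq)]
pub struct ByteRange {
    pub start: u64,
    pub end: u64,
}

impl ByteRange {
    /// Length of the range, saturating at zero if end < start.
    pub fn len(&self) -> u64 {
        self.end.saturating_sub(self.start)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Each test simply tallies a known input against a known output.
    #[test]
    fn len_is_end_minus_start() {
        assert_eq!(ByteRange { start: 10, end: 25 }.len(), 15);
    }

    #[test]
    fn len_saturates_when_inverted() {
        assert_eq!(ByteRange { start: 25, end: 10 }.len(), 0);
    }
}

fn main() {
    // Outside the `cargo test` harness, show the same tallied behaviour.
    assert_eq!(ByteRange { start: 10, end: 25 }.len(), 15);
}
```

`cargo test` runs both cases; the whole exercise is exactly the input/output tally described above, which is why it feels like a task an LLM ought to handle.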

Maybe I’ll try some other AIs, but the one I used was the best Gemini model in the list.

I use some of the AI code gen in Rust Rover too, but it often has worse results than IntelliSense. The AI offers more input, but it’s often not what is needed. IntelliSense gives less input, but is more predictable. The AI-generated stuff does sometimes save a bit of typing with boilerplate though.

Maybe it is easier to vibe code when there is already existing code for the AI to crib (licensed or otherwise). However, when even the likes of Anthropic need to buy teams and add developers to them, it makes me skeptical about the solution they are peddling.

2 Likes

“Experts warn AI is making your brain work less”

That’s funny to me - using AI makes my brain work a lot more. When you are coding with it, you have to orchestrate, monitor, troubleshoot … I suppose most people aren’t using it for coding though. I can see that coders might still think this, because they have to switch up their thinking quite a bit and engage different brain circuits - that’s probably uncomfortable for many. IMO, it’s great, as I’ve never been good with language syntax, so with every other command I had to refresh my memory on how to structure my code. Now I don’t have to do that at all - the AI knows all the syntax, so I can focus on what I do best: the overall logic, strategy, orchestration.

I’m using the OpenRouter API as I can pay in crypto. I then use that API in the Zed IDE (written in Rust! Unlike all the VS Code clones, it’s fast).

3 Likes

I think we’re witnessing the death of coding as a profession. People will do it for fun or for fully custom jobs, just like people make hand-crafted furniture. But for me, if I had to choose between someone giving me raw lumber or a flat-pack box from IKEA, I know which one I’m going to pick 99% of the time. Honestly, coding is an irritating middle layer that I always viewed as a necessary evil. In the end, I’m trying to get the computer to do ‘something’ for me. I used to have to write in language XYZ and interface with package ABC. All the tedium of dealing with the constant stream of errors, or refactoring, or writing documentation, or… tests! Now I can just talk in my spoken tongue and poof! There’s the answer. My mindset shifts from being a coder to a product manager. Still lots of work, but different work.

5 Likes

It is an interesting perspective, but I disagree.

Code is, and always has been, the distillation of requirements into precise logic. The coder’s job is to interpret imprecise natural language, read between the lines, then refine it into concise, unambiguous statements.

Natural language is wide open to interpretation unless copious amounts of text are used. Consider legal contracts, where the ‘small print’ is long and still difficult to read. Indeed, the text can and will be read in different ways, which is why alleged contract breaches end up in court, with barristers and judges arguing over the nuances.

Imagine each generation of an AI or even the same AI with different training, reading these contracts. Will they all interpret the document in the same way, forever? Highly doubtful.
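To make that gap concrete, here is a toy Rust sketch (my own illustration, not from any real contract): even a one-sentence requirement like "charge a 5% late fee after 30 days" forces the coder to commit to answers the sentence leaves open, such as whether day 30 itself counts as late and how fractions of a cent are handled.

```rust
/// One concrete reading of "charge a 5% late fee after 30 days".
/// The code must pin down what the sentence does not:
/// - "after 30 days" means strictly MORE than 30 whole days overdue;
/// - the fee is a flat 5% of the outstanding balance;
/// - fractions of a cent are rounded down.
pub fn late_fee(days_overdue: u32, balance_cents: u64) -> u64 {
    if days_overdue > 30 {
        balance_cents / 20 // 5%, truncated to whole cents
    } else {
        0
    }
}

fn main() {
    // Day 30 is not yet "after 30 days" under this interpretation.
    println!("{}", late_fee(30, 10_000)); // 0
    println!("{}", late_fee(31, 10_000)); // 500
}
```

A different reader (or a different LLM run) could just as reasonably pick `>= 30`, business days, or round-half-up, and all four programs would "match" the sentence. That is the repeatable precision code supplies and prose does not.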

I read a great article recently (I think I linked it), which highlighted this swing between high- and low-level languages. Even as we have moved from machine code to higher-level languages, concise, repeatable statements have always been needed. Engineers become trained in taking ambiguous verbiage and converting it into unambiguous statements.

To that end, there is a question mark over whether LLMs are ever going to fill the gap between ambiguity and precision. Natural language may just lack the required, repeatable clarity.

That said, I’m all for tools which can take away drudgery, and I think AI tooling can certainly help with that. I remain skeptical of the benefits beyond this though, at least with LLMs.

8 Likes

It would be fun to have a million AIs test my app, another million looking for zero-days, plus customer service for humans and AIs. I don’t mind paying as long as I get the results I’m after, or am compensated for time lost.

The cutting edge interests me the most; a combination of DeepSeek’s OCR and Google’s Titan would be fun. Hopefully we’re only a generation away from AI creating apps, organs, cures, etc., even better, without any vibe coding at all. The AI hears an irregular heartbeat, scans an eyeball, etc., and can tell whether there is a cure, or create a cure for it.

For me it’s more about the usage, so that apps get on the Network and AIs can explain the Network better, to different people, in different languages. Eventually everything gets better with time…

1 Like

There are many issues with LLMs and I’ve catalogued quite a few here.

Every time someone (including David) posts how great they are for this or that, those issues pop back into my mind. But there’s no point raising them further because people don’t wish to put the effort into creating a more complete picture of the situation.

That’s not a criticism, it’s a human reality: knowledge, skills and understanding require hard persistent work which few are willing to do unless they are highly motivated.

That in itself is one of the issues that leads people down this path: the reluctance to do work which feels hard, choosing easier work instead, even if it is counterproductive, less efficient or unsustainable in the long run.

I see much of that in many pro LLM posts.

I’ll mention one issue that again baffles me given the nature of this project and supposed understanding of its purpose in the community: centralisation of programming tools.

If you believe LLMs are good for code, why no discussion of this and how to mitigate it?

Could it be that one reason for the investment and hype in so-called AI is that it is another attempt to enclose the commons and control labour, and squeeze everyone until the pips squeak?

I see little awareness of even basic issues with LLMs, and no discussion of bigger picture issues at all. That for me is shocking to find in this community.

6 Likes

From this selection of goals, there are elements that AI can be useful for and areas where it will struggle.

Pattern matching, data collation and summary, etc, are core features of LLMs. They can act as a funnel, to quickly narrow down broad data into a digestible subset.

This sort of fuzzy matching has been in the domain of machine learning for a long time. LLMs, as I understand it, have broadened the capabilities to work well with more diverse input and output, such as text, video and audio.

Searching medical records, scans, etc, for patterns is highly assisted by LLMs. I am sure there will be big gains in this area. It’s something many of us will surely benefit from too, especially if using personally trained models.

Likewise, summarising broad data can benefit software development. It can be used to narrow down published code (licensed or otherwise, I underline!) and present it in a form that offers building blocks. That makes it effective for POCs and the like, where bits of code created by others are pasted together to form a whole. If it is something which has been done before, this approach is more likely to be successful.

However, we shouldn’t mistake this for an understanding of requirements. Nor should we presume the results will be refactored in ways which promote code reuse and maintainability. Mutating what LLMs have delivered is a very different flow and process, and needs a different, not yet invented, form of ‘AI’.

Moreover, the anthropomorphism of these LLMs is dangerous. They aren’t beings being ‘trained’. They are applications filling their databases. They shouldn’t be immune to licensing. Regurgitating prior work from others, in exchange for payment, is arguably intellectual property theft.

While we can argue about the merits of IP, these giant tech firms will never argue for it to be abolished. However, they are happy to take it from others, in an attempt to monopolise and rent seek from it.

Big tech want to win the AI race, as it allows them to extract the maximum rent from data they do not even own. They want to be, or remain, the gatekeepers.

7 Likes

I think this is a hugely important point.

As above, there is a clear motivation of big tech to monopolise this area.

I think the saving grace is that the faith in LLMs is misplaced. I don’t believe they will deliver the results people hope.

Throwing more hardware at the problem isn’t going to solve the inherent limitations of LLMs.

I’m expecting GPU fire sales, the capitulation of specialist AI companies and maybe a big tech firm collapse or two too. Much malinvestment will likely harm the economy and many a pension too.

8 Likes

Hard to believe this was 25 years ago:

11 Likes

It didn’t replace the high street though and it took another decade to deliver on the promise to investors. It was a huge bubble that caused massive economic dislocation.

Fwiw, I think LLMs deliver a lot of value… just nowhere near what is claimed and/or justified with the amount of money being thrown into it.

Ofc, being a winner in the LLM race may create the next Google or Amazon. It’s certainly what they are banking on. I suppose we will have to wait and see just how much the hype will be reflected in reality, and how long it will take to recover the expenditure.

3 Likes

I’ve seen so many people mention AI being a bubble: the cost we as consumers are charged is just a small fraction of the actual cost, and it’s not sustainable. And all of that may be right. What I’m personally missing in this conversation is that it provides a massive window of opportunity. Sure, it may not be sustainable or fully live up to the hype, but I’m pretty sure that a few years from now we’ll look back and wonder why we didn’t make more use of it while it was still affordable.

It reminds me of the time Uber Eats tried to penetrate the Dutch market. We basically got food at half the normal price, delivered for free, over and over again. I fully maxed out all the offers. Just like I’m doing now with the AI tools available.

And just to add: it sure does have its limitations. But by using them more and more, I feel like I’m rapidly learning how to instruct them better and to play their weaknesses and strengths off against each other for the best possible output. So far, all of the limitations I ran into could have been avoided by instructing it differently, combining several tools and/or using different models for a certain purpose.

7 Likes

From an individual perspective, you’re absolutely right. Why not make hay while the sun is shining?

However, at the macro level, we must also consider the opportunity cost of sinking so much capital into providing the above.

A glut of GPUs, abandoned data centres, experience and education diverted from where they may be better spent, projects that are not done (because they lack the AI buzz), etc.

If the gains tend to be closer to the 10% observed instead of the 10x claimed, then the economy will be in trouble. If the price needs to be 10x higher, or worse, even more so.

I’m sure LLMs will deliver improved productivity, much like the internet did/does. It may come at a financial cost over the next decade though.

2 Likes

That article written by @blvd? :joy:

2 Likes

My prediction

It’s got all the hallmarks of a bubble that will burst, but like the dot-com boom/bust it will come back in a more sustainable state.

4 Likes

Our brains have a certain capacity, some will adjust and use it to increase that capacity.

As usual, most will just “doom scroll” to get through the day.

Or learn to use it in a different way?

I think the biggest challenge will be to know what is real and what is bs.

I think the internet/social media is about to go from crap to useless.

8 Likes