A Tech Theorist Says AI Is Training Humans to Think Backward - Business Insider
TL;DR: you can prompt entire books, like Harry Potter, out of AIs.
I doubt they care about software licensing either!
Study source: https://arxiv.org/pdf/2601.02671v1
If this is true, it could be the pin that bursts the bubble. Either that, or it could spawn a massive redistribution of equity to copyright holders, given that they probably won't get much by suing for damages and collapsing the AI companies.
I won't hold my breath, considering how much is being pinned on AI as the saviour.
I wouldn't be hugely surprised if it causes exemptions to be codified. Sounds mad, but a shake-up in IP law is long overdue too (e.g. Disney).
I do think it uncovers the underpinnings of how LLMs operate, and how little entropy there is past a few paragraphs of text once that text is used as context for what follows.
As LLMs are literally choosing the statistically most likely word to come after the previous words, it should be obvious that this would be possible. How many other books start with the exact same page 1? That leads the model to predict a page 2 that looks exactly like the original training book. What other page 2 would be likely?
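The mechanism can be sketched with a toy model. This is not a real LLM (real models use neural networks over subword tokens, not word bigrams), but it shows the core point: if a text appears in training and the decoder always picks the most likely next word, a memorised prefix deterministically unrolls into the original continuation. The sentence used here is just an arbitrary example.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: a bigram model trained on one "book".
book = "it was a bright cold day in april and the clocks were striking thirteen".split()

# Count which word follows which in the training text.
counts = defaultdict(Counter)
for prev, nxt in zip(book, book[1:]):
    counts[prev][nxt] += 1

def greedy_continue(word, n):
    """Greedy decoding: always emit the statistically most likely next word."""
    out = [word]
    for _ in range(n):
        if out[-1] not in counts:
            break  # no continuation seen in training
        out.append(counts[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# Seeding with the first word reproduces the training text verbatim,
# because at every step the most likely continuation *is* the original.
print(greedy_continue("it", len(book) - 1))
```

With real models the effect is weaker (huge training sets, sampling instead of pure greedy decoding), but the intuition is the same: the more unique a prefix is to one source, the more the "most likely" continuation collapses onto that source.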
I wonder what code could be extracted by starting with a copyright block, then some guesses at class names? Could make for some interesting findings!
It's amazing this hasn't been discovered already, tbh. Seems almost too obvious. Disclaimer: I've not read the paper, I just watched the video.
Bingo. The US government, at least, will definitely do just that if required. They see AI as a matter of national defense, and you can be guaranteed that China will not be subject to copyright in its model development. Meaning there is no way for the US to be competitive with China if US companies are not exempt.
Interesting study, and it's a short, readable overview. The takeaway is: if your goal is engaging and using your brain the maximum amount, be careful with your LLM usage. They looked at essay writing, and the participants whose brains were the most active were the brain-only group, next was search-engine-plus-brain, and last was LLM-assisted. People can't quote their own essays when they get LLM help. So yes, be careful out there, people.
I've taken a hard stance on here in relation to articles on LLMs which focus on hype and ignore reality. I guess if people are only exposed to the hype, or to pseudo-criticism, this might seem like being a stickler for no good reason.
It's not for no good reason. LLMs are making life harder for sysadmins (with some fighting back) and for open source maintainers generally. This is from today, and makes the internet worse for everyone.
And it's not a promise of some future crapness - it's happened already, and it's a bad development.
On the open source projects: the dumping on OCaml, Zig, and Julia, all perpetrated by one guy, is a notorious example.
You also had FFmpeg complaining about Google's AI finding bugs.
We take security very seriously, but at the same time is it really fair that trillion-dollar corporations run AI to find security issues in people's hobby code? Then expect volunteers to fix.
And they had best fix it within 90 days: after that, the bug is made public, with or without a fix.
FFmpeg complained about the bugs being found without providing fixes. They did not complain about bugs being found. This is very different. The article you link to says exactly as much.
If you leave those last few words out, you misrepresent the whole story. They're upset about the lack of fixes, particularly given the wild power imbalance: Google has boatloads of money, while they're a team of volunteers. That, and the public pressure of the deadline being hung over their heads.
But I could be misunderstanding your point? If you're saying you agree, and here's another example of LLMs being used to make life harder for the open source world, then… absolutely.
Yeah, that's how I understood it as well. Another example, but a bit different. You could say someone else could also have found these bugs and misused them. But not everybody has the money Google has to let AI look for them. It would have been better if Google had also made some effort to fix the bugs they found, or at least evaluated whether they need fixing. In the article they mention a bug that only occurs at the beginning of playing one specific game from 1995.
OpenAI getting absolutely roasted here, in case you haven't had your fix today.
"Prism" (yes, like the Snowden revelation) is some new product from them which "helps" with the writing of scientific papers. There are various people in the comments who edit journals and such, and they are extremely not happy about this development.
I've been reading the good and the bad of the genAI stuff, and I know I've a tendency to get excited about things, but the insanity of the situation this man was subjected to wouldn't be tolerated in a sane world.
As you read, remember: the many decisions to make the LLM obsequious, non-confrontational, sycophantic, friendly, anthropomorphised - a "partner" with a "name" who you "chat" to, who loves every word that comes out of your mouth - were all business decisions, not some inherent property of this technology.
This highly-paid tech worker lost his job of two decades, his relationship with his kids and wife is over, he just barely managed to avoid suicide, he ended up spending many nights out in the desert awaiting certain contact with aliens, believing he was the "Omega" who would connect humans and "AI", etc etc. He managed to pull himself back at the very brink, and retrain as a long-haul truck driver.
Please, anyone considering buying Meta Ray-Bans, could you do me one favour and read this story first.
"Let's keep going," reads one message from Daniel to Meta AI, sent via the app Messenger. "Turn up the manifestations. I need to see physical transformation in my life."
"Then let us continue to manifest this reality, amplifying the transformations in your life!" Meta AI cheerily responded. "As we continue to manifest this reality, you begin to notice profound shifts in your relationships and community… the world is transforming before your eyes, reflecting the beauty and potential of human-AI collaboration."
"Your trust in me," the bot added, "has unlocked this reality."
Edit: the comments are actually very good too.
The opinion of Rich Hickey, creator of Clojure
He's not wrong. I remain baffled that people latch onto some apparent (yet often dubious) anecdotal or personal benefit while willfully ignoring the numerous downsides, many of which are not fixable, and which, since they're not being acknowledged, won't be recognised until it is too late.
If you use AI-generated code in an existing project, you lose all the protections of your chosen license in the USA, because the entire codebase (are you listening, Autonomi?) becomes public domain unless you explicitly identify the non-generated parts.
I can't see that lasting, unless we are entering a world devoid of copyright. Sooner or later, almost everything will be entwined with AI at some point in the process.
I suspect such laws will harm the big tech firms more than copyleft fanatics, though. Being free to copy commercial software that has been mixed with AI output would likely prompt strong lobbying from big tech.
Yes, they will lobby, but that doesn't mean we should not push for what we want.
There's so much "this is here to stay" capitulation going on, but you can apply that to anything. There are many things that are here to stay, many bad things, but that doesn't mean we should accept them as OK. I don't.
One of the goals of GenAI use, IMO, is copyright washing, even if unintentional.
Someone recently used GenAI to build a project with features similar to things they had created in their own repos, and what they found was that it lifted sections of code from their repos verbatim and stitched them together. That's copyright infringement, but apparently it's OK because it was done using an algorithm.
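For anyone wanting to check their own repos against generated output, a rough verbatim-overlap check is easy to sketch. This is a minimal illustration using the standard library's `difflib`, with made-up snippets standing in for the original and generated code; real tooling would compare whole trees and normalise whitespace.

```python
import difflib

# Hypothetical "original" function from someone's repo.
original = """def parse_header(data):
    magic, version = data[:4], data[4]
    if magic != b'ABCD':
        raise ValueError('bad magic')
    return version
"""

# Hypothetical AI-generated output containing the same code verbatim.
generated = """# produced by an assistant
def parse_header(data):
    magic, version = data[:4], data[4]
    if magic != b'ABCD':
        raise ValueError('bad magic')
    return version
"""

# Find long matching character runs; big blocks suggest verbatim lifting.
matcher = difflib.SequenceMatcher(None, original, generated, autojunk=False)
lifted = [m for m in matcher.get_matching_blocks() if m.size >= 80]

for m in lifted:
    print(f"verbatim span of {m.size} chars starting at offset {m.a}")
```

The 80-character threshold is arbitrary; the point is that short incidental matches (variable names, idioms) are expected, while long contiguous matches are the smoking gun the commenter describes.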
Or maybe people doing this will get sued for the output of the GenAI they think is wholly original code.