The new legislation changes the rules of the game for Autonomi

That’s really well set out, and I agree with much of it. I understand your position better now, but I’m not confident it will deliver the protection you seek: the value looks questionable and the downsides numerous. So I still wonder about the other uses you and others are thinking of.

Many are thinking of building apps, and some claim to have done this with little previous knowledge (though not with a local LLM). I doubt that is going to fly even with the best models, so let’s see how useful those apps turn out to be, for yourselves and others, over time. Regardless, it’s not feasible with a local LLM and I doubt it ever will be.

I’ll give a case that I think has uses: acting on voice commands that are sandboxed to prevent access to personal data or the network. (You can probably achieve that with WASI at the moment.) Playing music via a local voice assistant, for example. But I would not let it control my heating, read my emails and so on. Many will, I’m sure, but it’s a big risk with little or no benefit for me, so I won’t be doing that.
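For anyone curious what that sandboxing can look like, here’s a minimal sketch using the wasmtime crate’s classic WASI API (the API has shifted between versions, and voice_command.wasm is a hypothetical guest module). Because the WASI context is built with no preopened directories, no environment variables and no sockets, the guest can process the command it’s given but has no route to personal data or the network:

```rust
// Cargo.toml (illustrative): wasmtime = "1", wasmtime-wasi = "1", anyhow = "1"
use anyhow::Result;
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::sync::WasiCtxBuilder;

fn main() -> Result<()> {
    let engine = Engine::default();
    let mut linker = Linker::new(&engine);
    wasmtime_wasi::add_to_linker(&mut linker, |s| s)?;

    // Deny-by-default WASI context: no preopened directories, no env vars,
    // no sockets. Only stdio is inherited so the guest can answer us.
    let wasi = WasiCtxBuilder::new().inherit_stdio().build();
    let mut store = Store::new(&engine, wasi);

    // "voice_command.wasm" is a hypothetical guest that handles one parsed
    // command; it can only see what the host explicitly passes in.
    let module = Module::from_file(&engine, "voice_command.wasm")?;
    linker.module(&mut store, "", &module)?;
    linker
        .get_default(&mut store, "")?
        .typed::<(), ()>(&store)?
        .call(&mut store, ())?;
    Ok(())
}
```

Escaping that box would require a hole in the runtime itself, which is a far smaller attack surface than giving an assistant shell access or email credentials.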

Others are routinely using it to save themselves reading articles, papers, the forum and so on. Again, no value for me and plenty of downsides, but even more people will do that and bear the consequences, probably without recognising them, because it seems just as good, feels so much easier and appears to get things done faster. So they’ll feel fine about it, I’m sure.

I am wondering about building a one-off secure, air-gapped home automation system, and it’s conceivable there’d be a place for an LLM in that, or not. But anything that isn’t provably secure and private (i.e. a black box) should really be a no-no for folk here. Every time you update it you may be downloading something with malevolent content, with no way of detecting it. Perhaps services will develop that validate these black boxes as far as possible, but not before a lot of harm has happened to create the business case. And then you have to trust the service, and so on.

So what are the uses people are so keen to take advantage of with a local black-box LLM? And do you see any possible risks in that? People say, “It’s here to stay, Mark, you might as well use it.”

But why? It’s like asbestos: the first health risks were emerging in 1907, yet it was still legal to put it in UK buildings until 1999. Should I feed it to my kids, like John Gummer (cf. BSE)? Should researchers and doctors have just thrown in the towel because it was so difficult to prove how dangerous asbestos was, and to get the law changed to protect people from this useful, protective, fire-resistant material?

I am interested because an LLM is a fun and fascinating thing. I just don’t see enough value to outweigh so many issues, not limited to personal risks.

3 Likes

My guess is everything they’re doing now, but without all that information going to the highest bidder, and with the benefit that everything the LLM stores is on Autonomi, so it’s always recoverable if the black box is lost or stolen.

LLMs certainly are a fun and fascinating thing, and I think someone with your abilities would benefit more than most if you spent some reasonable time learning how to use them.

1 Like

I experimented with them early on. Since then I’ve followed the reports of numerous people, for and against, as the models have developed.

I understand the tech, its capabilities and its downsides, and it is of too little use to be worth spending time on, let alone reskilling for. I like the cognitive abilities I have, thank you, and they continue to do what I need, when I need it, without any of those downsides.

Local models will remain far less capable indefinitely, and they still come with risks, even if less obvious ones.

People assume much greater utility than exists and ignore the many issues and downsides. Agents are the latest dressing-up of chatbots to keep the hype train running. They’re still chatbots, with all the same problems only multiplied, and they’re already causing more widespread harm.

3 Likes

Fair enough. I’d like to see some of the reports from the pro-LLM people you follow, if you find time to post them.

1 Like

I’m unlikely to add to the imbalance here. Simon Willison, Bruce Schneier (pro and con) and Martin Fowler have all published positive takes, and even Cory Doctorow, to the dismay of many, recently had to defend his own use of LLMs (weakly, in the view of every comment I’ve seen).

They’re all programming/tech people, because that’s my interest. Simon was the most positive, but I stopped following him as that’s his focus and my interest has waned. They’re all very worthwhile reading, though, as they talk sense, write from experience and deal in facts rather than hype or wishful thinking.

4 Likes

If nothing else, Mark, bonus point for remembering that arsehole Gummer :slight_smile:

1 Like

There does seem to be a feeling in The Great Discord In The Sky Above Us that we (people not sold on LLMs as the future of everything, up to and including running our data networks) are just hiding from “the positive side”: shallowly glancing at negative headlines, fearing our jobs and livelihoods will be lost, then chuckling gleefully at how right we are. Or something like that.

Not saying you’re doing that here, but it really does seem prevalent, and I think it’s a cop-out.

I’ve done most of my reading on the positive side on Hacker News and LessWrong. I’d argue that LessWrong is closer to the real ideological/philosophical roots of these developments: people like Gwern, Yudkowsky, etc.

Tech people think they’re deciding for themselves what to be into, but the tail is wagging the dog here. The tech billionaires set the pace, and the ideologies they’re operating under are loosely described by the TESCREAL material I posted over in the LLM thread recently. Anyone interested should look up Émile P. Torres in particular.

As a live example, I read this (from HN) just now, before popping onto the forum.

I essentially don’t agree on various points, but I think the guy expresses himself clearly and sanely, and I have some sympathy for some of his points.

EDIT: Oh, and sorry, I meant to say as well: I very much appreciated reading your responses. I feel I understand better where you’re coming from now, too.

5 Likes

Unless they happened to be reading the chat at that time, they wouldn’t see that, and they wouldn’t know because of the lack of topics, alerts and useful threading. The search isn’t good either.

3 Likes

SEC Chair Atkins:
“After more than a decade of uncertainty, this interpretation will provide market participants with a clear understanding of how the Commission treats crypto assets under federal securities laws.

This is what regulatory agencies are supposed to do: draw clear lines in clear terms.”

🚨 THE SPEECH THAT CHANGED CRYPTO

SEC Chair Atkins introducing token taxonomy.

Clarity is here 🔥 pic.twitter.com/svcrhS3IuB

1 Like

I am sorry to tell you that, for home use, local AIs are very limited and simply aren’t able to compete with corporate AI compute power. The gap is also widening, thanks to costly hardware constraints at home and almost no constraints on funding for the megacorps. Consumer hardware costs are predicted to rise faster and further at least until 2030. Small models don’t compete, compression algorithms run into hard natural limits, and even the latest £3,000 gaming GPU won’t cut it. The good ‘open source’ models still need to be run from big AI data centres.

Add to this the fact that Moore’s law is flattening: GPU performance has been taking smaller and smaller steps over the last few years, and Big AI is gambling on GPU unit numbers rather than major advances in silicon. Then there are energy crises to add to the mix. Of course anything is possible, but with no new hardware technology available, IMO meaningful self-owned AI of the future is over already. I am skeptical of Fae’s abilities due to hardware limitations; I’ve seen this before. Rabbit R1 vibes. Sorry.

1 Like

I wonder if someone knows something we don’t. I certainly think Mr Irvine will have considered all this and has increased his knowledge beyond what we can guess at here.

I’m wondering if there’s a possibility of people running their individual workloads on the network in some way to get more processing power, in a secure and private way of course. I think there are ways of transforming data and computation so that what is actually being processed can’t be known without having the keys to it (see the toy sketch below). So you could get decent performance for some things purely locally, then call on more resources when you need them, and your ‘AI’ compute resource could earn by running workloads for others when you aren’t using it.
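To make that concrete, here’s a toy sketch of the idea in Rust. This is simple additive masking, not a real scheme (production systems would use homomorphic encryption or secure multiparty computation), and all the names are made up for illustration. The client blinds each value with a random pad before sending it out, an untrusted worker adds the blinded values without ever learning them, and the client strips the combined pad off the result:

```rust
use rand::Rng; // rand = "0.8" (illustrative dependency)

const Q: u64 = 1 << 61; // all arithmetic is done modulo Q

/// Client side: blind a secret value with a fresh random pad.
fn blind(secret: u64, rng: &mut impl Rng) -> (u64, u64) {
    let pad = rng.gen_range(0..Q);
    ((secret + pad) % Q, pad)
}

/// Worker side: sums blinded values; it never sees the secrets.
fn worker_sum(blinded: &[u64]) -> u64 {
    blinded.iter().fold(0, |acc, b| (acc + b) % Q)
}

fn main() {
    let mut rng = rand::thread_rng();
    let secrets = [42u64, 1_000, 7];

    // The client blinds each value and keeps the pads (the "keys") local.
    let (blinded, pads): (Vec<u64>, Vec<u64>) =
        secrets.iter().map(|&s| blind(s, &mut rng)).unzip();

    // An untrusted worker computes on data it cannot read.
    let blinded_sum = worker_sum(&blinded);

    // The client unblinds the result by subtracting the pads (mod Q).
    let pad_sum = pads.iter().fold(0, |acc, p| (acc + p) % Q);
    let result = (blinded_sum + Q - pad_sum) % Q;

    assert_eq!(result, secrets.iter().sum::<u64>());
    println!("sum computed remotely without revealing inputs: {result}");
}
```

The pads play the role of the keys: without them the worker sees only uniformly random numbers, yet the sum it returns is still useful to the client.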

I don’t think this has been mentioned at all but I wouldn’t be surprised if that is the direction we are going. Maybe not initially but eventually.

1 Like