Update 03 August, 2023

The Llama 2 models work well. On my Mac I can run the 70B-parameter model. An easy way to try some local models is to grab gpt4all: GitHub - nomic-ai/gpt4all: gpt4all: an ecosystem of open-source chatbots trained on a massive collections of clean assistant data including code, stories and dialogue. There are a few others, but that's a good start, and Wizard 1.1 is pretty good there, though the models change every week, even every few days. Well worth seeing what you can do locally and how easy it is.
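For anyone who wants to script against a local model rather than use the chat UI, here's a minimal sketch using the gpt4all Python bindings. The model filename is only an example; as above, the available models change week to week:

```python
# pip install gpt4all
from gpt4all import GPT4All

# Example model filename only; substitute any model gpt4all lists for download.
model = GPT4All("wizardlm-13b-v1.1.q4_0.bin")

# A chat session keeps conversational context between prompts.
with model.chat_session():
    reply = model.generate("Explain content addressing in one paragraph.",
                           max_tokens=200)
    print(reply)
```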

9 Likes

I agree there's a place for LLMs, and years ago Jonas and I had an interesting chat imagining this possibility and seeing it as an important element of decentralisation, restoring privacy and freeing people from manipulative, exploitative services.

What I'm concerned about is appropriate application and implementation, through attention to the pitfalls of even a personal assistant. I'm suggesting that some things will be appropriate and others will not, because they subject humans to risks that they are ill-prepared for, and predisposed to walk into because of how we respond to an LLM-style UI. This issue is psychological, not philosophical.

Suggesting that an LLM can be a general purpose UI for all SN functions seems to leapfrog those concerns, so I’m trying to highlight their importance and have a discussion about addressing them. But not succeeding in kicking it off.

6 Likes

I see what you mean. I think though we are past that, it’s here and we need to handle it.

I don't yet understand.

I feel you are saying this is not human, we will believe it is, but it's not conscious etc. and it's not intelligent. But then I am in the camp where I wonder what is consciousness, what is intelligence, if not things like this. I think AGI will be a fully different model, but this LLM stuff is way beyond Eliza, miles beyond it. Eliza, for instance, could make some people think it was real and clever, but it was not mainstream at all. It was a gimmick as we know and I think agree; this is totally different.

I would say we all have and we all should have, but the reality to me is this is here and that is a fact. We can use it or fight it in some way, but I doubt we can just ignore it.

The article was very philosophical, and the talk of human or not and so on is, I think, philosophy. I am not so concerned over whether it is really human or is conning us or is … I am concerned at what it can do, and what it can do is bewildering. It's a great tool and a great research assistant, but it's dangerous if we start to believe in all it says or somehow worship it.

5 Likes

I hear what you are saying though, and I do agree. It's just we need to build something, test it, and then get everyone on the same page as to the possibilities and pitfalls. We seem to be kinda shouting from a distance and it's wrongly perceived as:

me
This is all amazing, it will do everything and run our lives for us
You
It’s evil and will make errors persuading us it’s both correct and human

This is because it’s not built and we cannot run it just yet, but soon we can and then we can see how well or not it can handle our data, messages, money etc. I am positive as usual and you are rightly cautious. It’s not a bad place to be :wink:

15 Likes

Does adding this to Safe Network require solving problems comparable to what was needed to get Safe Network operational? Or is it pretty much engineering, bolting it all together to make it part of Safe Network?

5 Likes

I give up.

Where do I say anything about ignoring it?

And I’m not saying LLMs are the same as Eliza. Nor do I bring consciousness into it. None of your responses suggest you’ve understood what I’m saying about the dangers raised in the article which were not philosophical IMO. There may have been philosophical points, I honestly don’t remember.

None of my comments about LLMs on this forum have been philosophical and I’ve rejected people wanting to discuss consciousness on the topic I created because, like you, I think it is pointless and irrelevant.

3 Likes

The proposition is: rather than building apps, we provide an LLM interface. That becomes the apps and does what we want in terms of data manipulation etc. Some folk will still want apps, some may want search with no AI and so on, but this is the idea. A simple interface to work with Safe.

Implementation

What we are looking at is something like Gorilla (an API store) and adding SAFE and possibly SimpleX APIs to that. The AI is trained on the APIs (not only ours, thousands of apps) and then you can ask it to do X and it will choose the API (app) and carry out the calls.
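To make that routing step concrete, here is a minimal sketch assuming a hypothetical catalogue of Safe and SimpleX calls (all the API names below are illustrative, not real APIs), with any local text-completion model standing in for the fine-tuned one:

```python
import json

# Hypothetical API catalogue; a Gorilla-style model is trained/prompted on
# descriptions like these for thousands of apps.
API_CATALOGUE = {
    "safe.files.upload": "Store a local file on the Safe Network. args: path",
    "safe.wallet.send": "Send tokens to an address. args: to, amount",
    "simplex.message.send": "Send a SimpleX message. args: contact, text",
}

def route_request(user_request: str, llm) -> dict:
    """Ask the model to pick one API call for the request (Gorilla-style).

    `llm` is any text-completion callable, e.g. the local gpt4all model above.
    """
    prompt = (
        "Pick exactly one API for the request and answer only with JSON "
        'of the form {"api": "<name>", "args": {...}}.\n'
        f"APIs: {json.dumps(API_CATALOGUE)}\n"
        f"Request: {user_request}\n"
    )
    # Real code would validate this before executing anything; models do
    # sometimes return malformed or plain wrong calls.
    return json.loads(llm(prompt))

# e.g. route_request("send 5 tokens to my pal at 0xabc123", llm)
# might return {"api": "safe.wallet.send", "args": {"to": "0xabc123", "amount": 5}}
```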

Timescales

This is not as difficult as we think; there are projects already doing this and it's likely we just need to submit our APIs then wait on the fine-tuning (a few days). Then it's a case of bundling that with the client and node and providing a local interface.

Bottom line: this is not a big task and not something we need to solve ourselves; the industry is already doing this, it's just leading edge.

14 Likes

Thanks for the reply! Hopefully compute can come quickly as well. I’d love to share cpu cycles with Safe Network.

9 Likes

I understand. Perhaps it isn’t hard, perhaps it is hard to do well.

There's a flavour here of "if we don't do it, others will". This is true, and in a way a good reason for MaidSafe to do it - providing the result is good and not harmful.

Nowhere have I suggested we don’t do this if it is indeed possible and can be done safely. “Safely” might just mean that we limit it to tasks which are not dangerous when it does them incorrectly. Or we design out certain kinds of error etc.

But I’m not hearing that. I’m hearing that because others are going to offer LLM as UI, it should be the general purpose UI for every task on Safe Network, and that there’s no looking at the risk of this approach, or designing to mitigate such risk.

It appears that the LLM is going to be so good that those risks evaporate shortly (if they are acknowledged as present in current LLMs, though I’m not seeing that either).

Maybe it will, but let’s acknowledge that at present they are not good enough to be treated as infallible, and until they are, we need to protect users from their errors one way or another, as they use them to control the operation of certain critical functions.

9 Likes

@dirvine

Curious if a UI plugin system could be developed here (keep it simple of course) to add UI interfaces for particular apps created by MaidSafe itself or third parties for SAFE.

For example, to add or swap LLMs and interfaces, to add a particular Safe social media app interface, to add a trading app (goods and services - like a SAFE Amazon) interface, to add a financial exchange app interface, or even to add a plugin repo app interface! … Initially people could just publish their plugins here on the forum, and third-party repos could be developed later.

I expect MaidSafe shouldn't operate a plugin repo itself - it could run afoul of regulators - but having the ability to install plugins that allow quick and easy access to user apps is good enough, and I imagine it wouldn't take a lot of time or money to develop this, so perhaps it's something the community would support.

Assuming no NRS, it becomes more difficult for third parties to share their SAFE apps with the community. So I think this plugin ability in the core SAFE UI would help to supercharge third-party app development for SAFE - giving developers an easy means to integrate their app with SAFE. Particularly so if (when) a plugin repo app interface is developed by a third party, as it then becomes quite simple for people to build and share apps on SAFE.
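To make the idea concrete, a minimal sketch of what such a plugin contract could look like; every name here is hypothetical, purely to illustrate the shape:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SafeUIPlugin(Protocol):
    """Hypothetical contract a core Safe UI could load third-party plugins against."""

    name: str      # shown in the user's plugin list
    version: str

    def mount(self, ui_context: dict) -> None:
        """Called once when the user installs or enables the plugin."""
        ...

    def handle(self, action: str, payload: dict) -> dict:
        """Route a user action (e.g. 'list_offers', 'post_message') to the app."""
        ...
```

A third-party app (social media, trading, exchange) would then just ship a class satisfying this interface, and the core UI could discover and load it without MaidSafe hosting anything.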

1 Like

I am not sure. It’s a massive struggle to keep up with moves in the LLM / toolkit space right now. I will just be happy to get something simple to test first.

I am not sure the lack of NRS will be any issue at all to us or app devs, actually. I hope it will force more decentralisation, and perhaps that is a good thing. Another consideration: say you have an app and it's great, it lives at 0xabc123, so you tell yer pal. They think it's great and do the same, and so on. So great apps will get mass appeal very fast; bad apps will die before they get far. This can also be a powerful thing.

Apps could soon become prompts, we just don't know. It's really mayhem out there and the innovation is beyond belief, so getting stage 1 done is crazy hard, as every time I start on a path there is a huge improvement elsewhere.

I personally feel this is happening without us anyway, and the SAFE thing to do here is capture local-only LLM-type devices and ensure they are truly local first.

11 Likes

Even a simple keyboard, display, and (say) Notepad combination can and does already have side-effects, according to research done twenty years ago:

Emotion in Human–Computer Interaction

When confronted with an interface, users constantly monitor cues to the affective state of their interaction partner, the computer (though often nonconsciously).

Creating natural and efficient interfaces requires not only recognizing emotion in users, but also expressing emotion.

Ideally, in such computer-mediated communication contexts, emotion would be encoded into the message itself, either through explicit tagging of the message with affect, through natural language processing of the message, or through direct recognition of the sender’s affective state during message composition (i.e., using autonomic nervous system or facial expression measures).

I did ask IT-THE-BRAIN whether this is still the case now, twenty years later:

Yes, people are psychologically inclined to see chatbots as another human being. This happens as chatbots create a false mental perception of the interaction, encouraging the user to ascribe to the bot other human-like features they do not possess. This may seem alien, but this attribution of human characteristics to animals, events, or even objects is a natural tendency known as anthropomorphism which has been with us since ancient times.

Increased "humanization" of chatbots can trigger a crucial paradigm shift in human forms of interaction. This comes with risks, and the results may be anything but soft and fuzzy.

Chatbots are automated computer programs powered by natural language processing to engage consumers in interactive, one-on-one, personalized text- or voice-based conversations.

I hope this helps answer your question.

2 Likes

If I understand the context of this discussion, I think it is a psychological issue. People will trust AI more than they should because they anthropomorphize it. Makes sense. On the other hand, many people really fear flying despite it being much, much safer than driving per mile.

So back to simple transactions and other interface things… I would trust a well-written AI much more than myself for routine tasks. I make enough stupid little mistakes being human (send the wrong payment amount, mis-type an account number, wrong e-mail address, etc.) that an AI would really help. Then there are major transactions like perhaps buying a house. Those would obviously require a double check of amount, payee, etc.

But I think, like flying, self-driving cars, etc., that giving up some control for convenience, efficiency, competence, and even safety is a good trade provided there is data to support it. If we can build in AI, show it is a net benefit, and have some safeguards as appropriate, this seems like a hugely powerful and necessary addition to the network/interface capability.

7 Likes

Is the Safe-browser still going to be part of the launch release? If so, is the core Safe-UI going to be built into that or will it be separate?

2 Likes

You get it, and I agree with everything you say. The issue here is that people will tend to trust without discrimination of the kind you describe, because they will over-trust a human-like UI.

But it is more complex, and therefore more risky, than that. If we're putting more responsibility in the users' hands - an inevitable part of decentralisation - they are exposed to new risks (such as irreversible transactions). So it is one thing to do a big transaction with a bank, and another to do so with an autonomous network.

So my point is that there are things which a person should probably not trust to a voice/text interface and LLM, and that making everything controllable through that interface creates inappropriate risk.

This can be mitigated in various ways, but a cautious approach will be needed to do this. I don’t think it is sensible, or rather I think it might be very risky to users and the project, to begin with full control by LLM and fix problems which emerge later.

Now maybe I’m wrong and over cautious, but we can only decide that by examining the kind of thing people will be able to do using this LLM, addressing the failures that its mistakes could produce, and deciding what can be done to mitigate them.

Putting up a confirmation dialogue has been suggested. That's a valid approach to mitigation, but I think it is unlikely to have the effect that people imagine, and it's not a good way to catch infrequent but highly consequential errors, which I believe is the problem we face.

I don’t know this to be the case and will be happy if it is shown not to be, but if the LLM is a comprehensive interface for every kind of application of Safe Network it seems inevitable.

3 Likes

A couple of thoughts and perhaps a partial solution…

I tend to struggle at times to put a perfectly-worded e-mail together quickly, unemotionally, and with the correct attachments. My solution at work is an Outlook rule that gives every e-mail I send a 2-minute send delay (it sits in the outbox) unless I put a special string of characters in the title (every now and then I need an immediate send). This has worked brilliantly for me and is a huge improvement. It gives me the chance to fix the mistakes I realize I made one second after I hit send.

In CT, my state in the US, whenever someone applies for a loan there is a 3-day hold following closure of the loan. This delays getting the loan money, but it offers a no-penalty period to reconsider. They even provide the letter: you just sign and deliver it, and the entire loan application, approval, and closing process is nullified at no expense.

So the potential solutions I could see at least to financial transactions involve different kinds of delays and network-managed escrow of payments…

For small transactions a "quick pay" option is likely best. Say you want to give $20 to your friend for lunch and not wait: this option would just move the money instantly and irrevocably, as we envision it working today.

For moderate online purchases, say from eBay, Amazon, etc., perhaps a 1-hour delay (or other user-selected time) would be in order. There would be no penalty for cancelling the transaction, and it would be cancelable by the sender within that period. This kind of delay wouldn't disrupt anything and would be equivalent to buying something in the past and waiting for the check to arrive and clear. The network would simply hold the payment for whatever time the sender specifies. A simple click would cancel it like it never happened, though the receiver would be notified that a payment was scheduled and then that it was cancelled. An additional feature would of course be the ability to schedule payments for specific days/times. In that case, though, the receiver isn't notified that the payment was scheduled, so it can't be used to initiate the payment contract.

For large transactions I could see a more complicated escrow where the receiver has some say on the delay. For example, for a large purchase a sender may want 3 days to hold payment, but still enter into a contract. On the other hand, the receiver is accepting risk in this arrangement, so perhaps the network holds an additional 5% of the transaction for this delay privilege. Meaning a $1,000 payment would require $1,050 to be sent to validate the transaction. If not canceled, then 3 days later the receiver gets $1,000 and the $50 is refunded to the sender. If the transaction is cancelled, the receiver gets the $50 and the sender gets the $1,000 back. I could see an infinite variety of options like this negotiated between buyer and seller, sender and receiver. Basically different kinds of smart contracts, but with human interaction.
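To show how little logic this large-transaction tier actually needs, here is a minimal sketch of the hold-and-bond rule described above (the 5% rate and all names are taken from the example, not from any real network API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

BOND_RATE = 0.05  # the 5% delay-privilege bond from the example above

@dataclass
class EscrowedPayment:
    amount: float        # e.g. 1000.00; sender must lock amount + bond (1050.00)
    hold: timedelta      # e.g. timedelta(days=3)
    created: datetime
    cancelled: bool = False

    @property
    def bond(self) -> float:
        return self.amount * BOND_RATE

    def settle(self, now: datetime) -> dict:
        """Return who gets what once the sender cancels or the hold elapses."""
        if self.cancelled:
            # Sender backed out: receiver keeps the bond as compensation.
            return {"receiver": self.bond, "sender": self.amount}
        if now >= self.created + self.hold:
            # Hold elapsed: receiver is paid, bond refunded to the sender.
            return {"receiver": self.amount, "sender": self.bond}
        return {}  # still inside the hold window; nothing moves yet
```

The quick-pay and 1-hour tiers above are just the same structure with a zero bond and a shorter (or zero) hold.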

This is likely the tip of the iceberg in terms of options. I think, though, that the idea that all transactions are instant and irreversible needs to change, whether AI is involved or not. While the Safe Network can't be a bank in the sense that humans can manually intervene at any point, we need to change the perception that crypto is risky. One false click, whether manual or via AI, and money can be lost. But all legitimate buyers and sellers want a system that facilitates legitimate social contracts like the banks perform. The more of this type of functionality we can build in (and at no cost, vs. all the fees banks charge), the better. It will make people more trusting that if they mis-click, or their AI goes rogue, they have options. And mistakes will still happen, so perhaps building in a more fundamental ability to reverse transactions would be good. Not actually reverse, but maybe a simple way in the UI to enter a compensating transaction with a click of the mouse (or "Armageddon I screwed up again please refund the money for my lambo that I just accidentally sold"). Just some thoughts on a very interesting subject…

6 Likes

“much safer”… I believe this is a case where people always hear and repeat the airline industry’s preferred statistic.

I once looked into it in some detail because something smelled fishy.

The industry uses passenger-miles without accident, which is based on trip length times the number of passengers. That's an interesting way to look at things, sure.

But it's also a bit apples/oranges, because most trips in a car are short, not cross-country, and involve only one or a few people. Also, the majority of car accidents are survivable, whereas (iirc) the majority of airline accidents involve some fatalities.

So another way to look at it is trips-per-accident or trips-per-fatality. By these measures, cars do much better: better than general aviation and maybe even better than passenger aviation. I don't remember exactly. But just think about it for a second: many people drive daily, or even twice daily. There are probably 1 billion+ car trips per day in the world vs approx 100k flights. Yes, there are some car accidents, but that's only a tiny percentage of the total trips.

I've seen a couple of articles online that do the math, but I'm too lazy to search for them now. :wink:

4 Likes

Agree about the trips/miles issue, which is why I specified per mile traveled. I just did a Google search (for what that's worth): the rate of per-passenger airline fatalities is approximately 1 per 2 billion miles traveled. For cars it's about 1 per 100 million miles traveled, or about 20x the rate. These are US statistics. So if the question is whether to drive from point A to point B or fly, it's safer to fly. This neglects things like the fact that takeoff and landing are the most dangerous flight segments, so multiple short flights are more dangerous than one long flight, etc.; it's just an average. Looking at per-trip rates is misleading as it's not really a relevant statistic (people travel specific distances), which is why the industry uses per mile traveled by the individual. To be honest, I was surprised the difference was as low as 20x; I was expecting a bigger multiple. There are a lot more uncontrolled variables in driving and a lot less oversight and regulation. We have socialized that dying in a car crash is in some ways "normal" while every plane crash makes the national news. Self-driving cars will be interesting because there will be a similar shift in rates and perception. They will likely feel more dangerous, but the statistics will likely (eventually) show a significant average benefit in safety.

4 Likes

Hey JohnM, in general I think we are agreeing with each other. I will just add a bit more nuance:

Personally, when I get on an airliner, I'm thinking "I hope this thing doesn't crash during this trip." I'm not thinking "I hope it doesn't crash before we've travelled 600 miles." And if I were to quantify my "lifetime risk" of flying, I would add up the total number of flights as the primary factor. For me, the miles flown between the riskiest moments of takeoff and landing would be a secondary risk factor, not the primary one.

btw, another stat that could be used is vehicle-miles-per-accident. But then the airline industry loses the ~200x passenger multiplier.

If we insist on using passenger-miles-per-accident, then a more apt comparison would be airlines vs commercial buses. In this case both vehicles have a professional "driver" and there are many passengers. I don't know who comes out ahead, but I bet it's closer.

And if we insist on including ordinary cars, then we should also include general aviation on the airline side… guys in their Cessnas and such.

Anyway, sorry for taking this thread way off-topic. I’ll shut up about it now. :wink:

5 Likes

It seems that most of your suggestion could be done in the wallet application - just not doing the actual DBC generation, since that would lock the payment in stone.

4 Likes