I just realized that with SAFE it could be possible to create obfuscated artificial neural networks that practically can’t be shut down. That concept could be pretty valuable, since it would allow for machine intelligence that can’t be controlled, and whose “thoughts” can’t be observed or pre-calculated without a near-complete overview of the network (which is very costly to achieve). One theoretical use case for the far future would be to replace SAFE’s regulative algorithms with such neural networks, getting rid of all “magic numbers” and indirect human control (because right now we humans make up the algorithms).
The general idea is to distribute an artificial neural network’s nodes (neurons) and connections between nodes (synapses) over SAFE’s “close groups”. Any such close group would only accept input for its node or connection if the input comes from a close group that can cryptographically prove it has management responsibility for the corresponding input node/connection, and it then passes its output on to the close group of the next node or connection. This can be done in reverse as well for the back-propagation process (training of the neural network).
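To make the flow concrete, here’s a minimal sketch in Python. This is not SAFE code: the HMAC stands in for whatever cryptographic proof of management responsibility the close groups would actually use, and all names are hypothetical.

```python
# A sketch of one close group hosting one neuron. Inputs are accepted only
# if the sender can prove it manages the upstream neuron; an HMAC over a
# shared key stands in for the real cryptographic proof.
import hmac
import hashlib
import math

class CloseGroupNeuron:
    def __init__(self, neuron_id, group_key, upstream_keys):
        self.neuron_id = neuron_id
        self.group_key = group_key            # this close group's shared secret
        self.upstream_keys = upstream_keys    # {upstream_id: key} of accepted inputs
        self.weights = {uid: 0.1 for uid in upstream_keys}
        self.inputs = {}

    def _prove(self, key, payload):
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

    def receive(self, upstream_id, value, proof):
        """Accept an input only if the sending group proves responsibility for it."""
        key = self.upstream_keys.get(upstream_id)
        payload = f"{upstream_id}:{value}".encode()
        if key is None or not hmac.compare_digest(proof, self._prove(key, payload)):
            raise PermissionError("sender cannot prove responsibility for input neuron")
        self.inputs[upstream_id] = value

    def fire(self):
        """Forward pass: weighted sum + sigmoid, plus a proof for the next group."""
        z = sum(self.weights[uid] * v for uid, v in self.inputs.items())
        out = 1.0 / (1.0 + math.exp(-z))
        proof = self._prove(self.group_key, f"{self.neuron_id}:{out}".encode())
        return out, proof

# Tiny usage example with made-up keys and IDs.
upstream_key = b"key-of-upstream-group"
n = CloseGroupNeuron("n42", b"key-of-this-group", {"n41": upstream_key})
proof = hmac.new(upstream_key, b"n41:0.7", hashlib.sha256).hexdigest()
n.receive("n41", 0.7, proof)
print(n.fire())

# The same verified channel would carry gradients in the opposite direction
# for back-propagation; that path is omitted here for brevity.
```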
The most obvious hurdle for the example use case (and NNs in general) is the initial training, because it’s very hard to get training data with optimal diversity. The danger would be overfitting the neural network on data from “good times”, in which case it wouldn’t be prepared to handle the “bad times”. One possible solution might be to run the artificial neural network(s) in parallel with the “classic” hand-made algorithms, and gradually give its output more weight in the actual final decisions as its configuration matures, thus phasing out the “classic” algorithms over time.
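The phase-in could be as simple as a maturity-weighted blend of the two outputs; a toy sketch, where the maturity schedule itself is an assumption:

```python
# Blend the hand-made algorithm's decision with the network's, shifting
# weight toward the network as its configuration matures.

def blended_decision(classic_out: float, nn_out: float, maturity: float) -> float:
    """maturity in [0, 1]: 0 = trust only the classic algorithm, 1 = only the NN."""
    w = max(0.0, min(1.0, maturity))
    return (1.0 - w) * classic_out + w * nn_out

# Example: early in training the classic algorithm dominates the decision.
print(blended_decision(classic_out=0.9, nn_out=0.2, maturity=0.1))  # 0.83
```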
If it worked, the result would be a distributed artificial intelligence that can think in billions of dimensions of information, and which on an abstract level would be the only entity with a full “overview” of the network’s state. I don’t think anyone or anything could successfully front-run or outsmart it in terms of large-scale manipulation of the network.
Edit: Please, please, please spare us any SkyNet references for once…?
Which data would be used as input? There needs to be a scan of all the data to run any computation on it. It’s much like Tesla’s recent software update that makes their cars run partly autonomously, where cars can learn from “each other” by sharing big data, yet that big data is still centralized first. Here’s a great video about machine learning. The presenter came from Kaggle, where teams compete at optimizing algorithms; they do great magic. An option for SAFE might be that a group of 32 nodes keeps a log of all their actions and variables in an “event log”, which they send over to their close groups. The other groups could then submit optimizations based on that data and those variables, which the group might implement once two of its closest groups submit the same one.
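A toy sketch of that event-log idea, with all names hypothetical and Python used purely for illustration:

```python
# A group of 32 nodes logs its actions/variables, shares the log with its
# close groups, and adopts a proposed optimization once two of its closest
# groups independently submit the same one.
from collections import Counter

class GroupEventLog:
    def __init__(self, group_size=32):
        self.group_size = group_size
        self.events = []              # (action, variables) tuples
        self.proposals = Counter()    # optimization -> count of proposing groups
        self.proposers = {}           # optimization -> set of proposing group ids

    def record(self, action, variables):
        self.events.append((action, dict(variables)))

    def submit_optimization(self, proposing_group_id, optimization):
        """Count each proposal once per group; adopt at two independent matches."""
        seen = self.proposers.setdefault(optimization, set())
        if proposing_group_id in seen:
            return False
        seen.add(proposing_group_id)
        self.proposals[optimization] += 1
        return self.proposals[optimization] >= 2

log = GroupEventLog()
log.record("relocate_chunk", {"latency_ms": 140})
log.submit_optimization("group_A", "lower_relocation_threshold")
adopted = log.submit_optimization("group_B", "lower_relocation_threshold")
print(adopted)  # True: two closest groups submitted the same optimization
```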
Well, it’d be an integral part of the SAFE network, not something on top of it, so it wouldn’t require payment.
Network stats of close groups for example.
Data doesn’t need to be centralized first, since input neurons are distributed over the network as well. It could be designed in such a way that the “closest” input neuron of the right type is used. If it’s optimal to combine all the close group stats to get a mean or median or whatever general statistic, the neural network will configure itself to do so.
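For instance, a downstream neuron with roughly equal weights over those distributed input neurons effectively computes just such a general statistic; a toy illustration with made-up numbers:

```python
# Each distributed input neuron reads the close-group stat nearest to it;
# the aggregate emerges from the learned weights rather than from any
# up-front centralization. Figures below are purely illustrative.
from statistics import mean, median

close_group_stats = [0.62, 0.58, 0.91, 0.60]  # e.g. per-group load figures

print(mean(close_group_stats))    # 0.6775
print(median(close_group_stats))  # 0.61
```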
Why would anyone want to build an AI whose thoughts cannot be observed? That seems quite stupid and a security risk with no discernible benefits.
Fortunately it’s not possible to build something like that on SAFE Network, but if the network moves in that direction I would have to choose another network.
The general idea of building a neural network on top of a decentralized network is good, but the neural network must be open source. The algorithms have to be open source. The data it crunches can be private.
But for security reasons obfuscation is not the right approach. I would say Tauchain is the only project taking the right approach from a security and public-good perspective. I think SAFE Network will probably have to copy Tauchain or use its own approach like the one you mention, but you cannot trust code that is obfuscated, and you cannot expect people to choose SAFE Network over Tauchain when you have no way to know what the code is doing.
With Tauchain you know what the code is doing. The program is essentially a proof, its behavior is entirely predefined and deterministic, and it’s deliberately not Turing complete, specifically to gain decidability.
I want to ask a question but haven’t found an adequate place to ask it.
I would like to make an AI smart contract that sends coins from A to B when the transfer of ownership of a real estate property is confirmed. The reason this needs AI is that the smart contract has to be able to find ownership information on the Internet by itself and make judgments by itself. It is assumed that the real estate information will only be revealed when both A and B agree to do so.
Is it possible to develop such an AI-based site on SAFE?
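For what it’s worth, here is a hedged sketch of just the escrow logic being described, with the “AI that finds and judges ownership information” reduced to a hypothetical oracle callable; none of this reflects an actual SAFE API:

```python
# A minimal escrow sketch: coins move from A to B only once an ownership
# oracle confirms the property transfer. The oracle is a stand-in for the
# hard part (the AI that gathers and judges ownership information).

class RealEstateEscrow:
    def __init__(self, buyer, seller, amount, ownership_oracle):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.oracle = ownership_oracle   # callable: (seller, buyer) -> bool
        self.released = False

    def settle(self):
        """Release the coins only once ownership transfer is confirmed."""
        if self.released:
            return False
        if self.oracle(self.seller, self.buyer):
            # here the amount would move from the buyer's deposit to the seller
            self.released = True
        return self.released

# Example with a trivially agreeing oracle; a trustworthy one is the real problem.
escrow = RealEstateEscrow("A", "B", 100_000, lambda s, b: True)
print(escrow.settle())  # True
```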
There’s a reason why there are humans in the process of buying and selling real estate.
How would it be possible to introduce personal responsibility into the decision making process where there isn’t a person in it to start with? Which gets us to the real question: would you trust a decision maker without personal responsibility?
SAFE Network and the clear web are separate from one another, so not really. Perhaps you could come up with some second-layer solution that bridges to the clearnet. SAFE also doesn’t have native smart contracts (yet) or neural nets, so it seems you’d have a lot of groundwork to lay by yourself.
Yes, I would trust it if it were made properly. If you do not like AI, we could instead decentralize the ownership-confirmation work through a consensus engine (e.g. the Tendermint engine), so people still do the confirmation, but in a decentralized manner. If we decentralize it, people will trust it more, and we could probably trade real estate without heavy taxes.
You don’t understand. The problem isn’t with the agreement on what the decision was (a problem of consensus) but whether the decision was made by an entity who can be sent to jail if the transaction was fraudulent.
You still don’t get it. The problem is not what AIs can do, but what we can do to them: NOTHING. We can’t “punish” a misbehaving AI because they can only think, they can’t be afraid. One does not appoint a natural sociopath to manage society.
And yet that is the present pattern of human organisation… self-elected, self-appointed socio/psychopaths in predominant positions of political, material and structural power… promoting their own agenda, planning population reduction, pandemics and an upcoming massive toxic vaccine sale… whilst quietly funding some very worrying “real-life AI horror stories”… the Carnicom Institute wanted to fill in the blanks but got blanketed… apparently it’s not OK to warn people about nanotechnology, biometric data and transmission systems… because a story about an AI network taking root in non-consensual humans is cause for mass action…
The self-electing and self-aggrandizing elites are celebrating the advent of AI because they think that having developed it makes them divine creators who will have total control over a synthetic/augmented reality… eventually they’re not going to like what they made in the least, but that’s a tale for later…
AI is cause for concern, and so are the people who fund, design, build and program it. They have some bizarre ideas about life and some unfounded ones about the superiority of their own… unfortunately non-surgical neurotechnology is not fictional, and beyond prosthetic limbs it is particularly dangerous in the hands of people who do not value the lives of the people they’re trying to experiment on… especially now that it’s reliant on such tiny, insidious little flecks… no one needs to mandate the wearing of blindfolds along with the masks and gloves if nanoparticles are in the mix, because what the eye can’t see the heart won’t grieve over. Thankfully we have microscopes and brilliant scientists doing their best to help inform and protect us against the drive to breach the boundaries of biology and tech without our consent. Way beyond intelligent robotics, it’s twisted science fiction turning fact, and personally I recommend looking into the details as a first line of defense.
The first manuscript of this book went into the fire five minutes before the arrival of the secret police in Communist Poland. The second copy, reassembled painfully by scientists working under impossible conditions of repression, was sent via a courier to the Vatican. Its receipt was never acknowledged, no word was ever heard from the courier; the manuscript and all the valuable data were lost. The third copy was produced after one of the scientists working on the project escaped to America in the 1980s. Zbigniew Brzezinski suppressed it.
Political Ponerology was forged in the crucible of the very subject it studies. Scientists living under an oppressive regime decide to study it clinically, to study the founders and supporters of an evil regime to determine what common factor is at play in the rise and propagation of man’s inhumanity to man.
Shocking in its clinically spare descriptions of the true nature of evil, poignant in the more literary passages where the author reveals the suffering experienced by the researchers who were contaminated or destroyed by the disease they were studying, this is a book that should be required reading by every citizen of every country that claims a moral or humanistic foundation. For it is a certainty that morality and humanism cannot long withstand the predations of Evil. Knowledge of its nature, how it creates its networks and spreads, how insidious is its guileful approach, is the only antidote.
Political Ponerology: A Science on the Nature of Evil Adjusted for Political Purposes
by Andrew M. Lobaczewski, with commentary and additional quoted material by Laura Knight-Jadczyk