Legislated Algorithms

In the future, governments could pass algorithms as law…

That would ensure the law gets implemented in accordance with the legislators’ intent, and make the courts’ interpretation significantly less relevant.

The code does what it does, and without much doubt, that is what it was meant to do.

It would also significantly lessen the need for bureaucracies.

This is already here, but it’s reactive and enforces the court’s every whim in unintelligent ways. You’d need AI to go beyond that incompatible mix with law.


the good side is indeed that words can be interpreted … an algorithm does what it is meant to do (most of the time)
the bad side is that this algorithm must then have every possible case implemented … which could be difficult …

hmhmmm - but because of the impossibility of covering every possible case and still having a good algorithm, you might create a simple set of rules (probably lots of simple rules, but still not complex ones) … i think i like the idea :slight_smile: simple is good xD
(simple and bad can be identified very quickly as bad and then changed)

ps: of course there must not be a mixed structure … either you go with the simple algorithm rules or you use laws that are interpreted … mixing them would only end in having more problems than before …


Hopefully everything will run as well as the Obamacare signup website worked. :wink:

I know that oftentimes with Medicare there are 4-6 layers of “no” bureaucrats, then one more “what are our chances at arbitration” guy who nearly always says “yes”. Most “will we pay” functions can be automated algorithmically; then you move the bureaucrats to flipping the “does this person have this condition” bytes. Those questions tend to be a lot more binary and a lot less subjective.
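The “will we pay” decision described above can be sketched as a chain of simple deterministic rules, with the subjective “condition” bit left as a human-set input. Everything here is hypothetical — the rule names, claim fields, and coverage set are invented purely to illustrate the shape, not any real Medicare logic:

```python
# Hypothetical coverage rules: each returns (decision, reason) to halt
# processing, or None to pass the claim on to the next rule.
def rule_covered_procedure(claim):
    covered = {"x-ray", "mri", "physio"}          # made-up coverage list
    if claim["procedure"] not in covered:
        return (False, "procedure not covered")

def rule_deductible_met(claim):
    if claim["deductible_paid"] < claim["deductible_required"]:
        return (False, "deductible not met")

def rule_condition_flag(claim):
    # The "does this person have this condition" byte a human still sets.
    if not claim["condition_confirmed"]:
        return (False, "condition not confirmed by reviewer")

RULES = [rule_covered_procedure, rule_deductible_met, rule_condition_flag]

def will_we_pay(claim):
    """Run the claim through every rule; first objection wins."""
    for rule in RULES:
        result = rule(claim)
        if result is not None:
            return result
    return (True, "approved")

claim = {"procedure": "mri", "deductible_paid": 500,
         "deductible_required": 500, "condition_confirmed": True}
print(will_we_pay(claim))   # (True, 'approved')
```

The point of the structure is that every “no” comes with a machine-checkable reason, so disputes move from arguing about the rules to arguing about the input facts.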

All in all, cutting litigation and bureaucratic costs would be a great cost savings in most everything — a lot of the “programmable money Bitcoin 2.0” stuff is aimed at this kind of thing, in the private sector, but it would work in the public sector as well.

hmmm - and what do we do with all the jobless bureaucrats …?
…i remember the first time i had my motorbike registered very well … the woman there was completely lost trying to copy&paste the necessary information from paper into her program … i’m pretty sure i would have been faster if i had done it myself … i wouldn’t want her working in my company :open_mouth:

This entire exchange is quite scary.

Do you guys have any idea why I might think so?

I find DAOs kinda scary all around.

Technology is morally neutral. It can be used for good, it can be used for bad.

But having something that does what it is programmed to do, with little possibility of it being killed, is scary.

Things like Bitcoin and Maidsafe may be great benefits, but there can be less popular programs out there doing evil too.

Our opinion doesn’t matter a ton though – DAOs are not going to be uninvented at this point. It is hard to get the cat back in the bag.

I wonder if you could Merkle-tree your medical history, for example – to prove certain elements exist without disclosing the whole kit and caboodle to three-letter agencies?
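That idea can be sketched quite compactly. A minimal toy version, assuming SHA-256 and a plain binary Merkle tree (the record strings are invented): you publish only the root, then later reveal one record plus its sibling hashes to prove that record was committed to, without exposing any other entry.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to prove leaves[index] is in the tree."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf plus its proof; compare to the known root."""
    acc = leaf
    for sibling, is_right in proof:
        acc = h(sibling + acc) if is_right else h(acc + sibling)
    return acc == root

records = [b"blood_type:O+", b"allergy:penicillin", b"vaccine:tetanus:2013"]
leaves = [h(r) for r in records]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 1)
assert verify(leaves[1], proof, root)   # proves the allergy entry exists
```

The verifier only ever sees the root, one leaf, and log(n) sibling hashes, so the rest of the history stays undisclosed. (A production scheme would need salted leaves so record contents can’t be guessed by brute force.)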

…and totally irresponsible.

No experiment that had the capacity to cause real harm would ever (in modern times) get ethical approval if the experiment could not be controlled.

Humans have always suffered from arrogance, thinking we’re smarter than we are but at least in the vast majority of cases it’s been possible to at least control failed experiments (even though it can take decades), e.g. myxomatosis.

Granted we can’t easily shut the existing Internet down however there is an element of control as the location of data and code can be determined. If the only way to “control” the SAFE Network is by frying every silicon chip in existence with a massive electro-magnetic pulse then this is surely not an acceptable control mechanism.

In the immediate term the damage that can be done will be fairly local, i.e. only individuals or groups may have their lives ruined irreparably due to content on the network which can never be removed.

If we go a few decades down the road things are potentially much more serious. If we ever achieve truly sentient AI then it will be impossible to contain this if it lives within a global P2P network - short of frying every silicon chip and plunging ourselves back into the dark ages. If the AI has literally permeated throughout an entire global network then we could feasibly find we’re not in a position even to be able to generate this EM pulse as it’d stop us.

Scare stories like this have been doing the rounds since well before the advent of the PC. Had I been asked a few years back “could it ever happen?” I’d have said “don’t be stupid, of course not”. I’ve slowly been changing my mind. It’s great that computers are ubiquitous and pretty much anyone can develop software. However, the amount of power this has given small groups is frightening. Is it right that a small band of developers (I’m guessing without even going through an ethics committee) can develop an experiment which could potentially devastate life as we know it (in years to come), with no way to control the experiment once it’s been started? If, after SAFE is deployed and distributed execution made possible, some other group develops some crazy/genius software that has the capacity to destroy us, is that acceptable?

As I’ve said before, this network is a nice idea…but totally unethical

What I found scary about the exchange prior to my comment was not the points touched so far by follow-up comments. It was the idea of having legislative dictates enforced by algorithms because it would be a way of bypassing interpretations by courts, etc. The underlying assumption is that legislators or government in general (a) are not subject to corrupting influences, (b) are wise enough to make decisions which merit being implemented universally, and (c) that they have any right to “rule” so in the first place.

Decentralization is about restoring control to individuals and smaller groups. That is what the SAFE Network is about. Algorithms as servants of the individual are fine. As servants of the few “in power,” we’re looking at another beast.

I think you’re wrong about the SAFE Network being unethical. The attributes you point out as being problematic are much more likely in the internet scene as it is developing in the absence of SAFE and other decentralizing tech. Returning Privacy, Security and Freedom to individuals is the only way that it can be kept from being the unethical system you allude to, IMHO.


I do enjoy the philosophy that Cody Wilson espouses about some of this…

It is a revolutionary act. All revolutions are “unethical” – but by whose standard? There is never a valid triggering point for a revolution. Somebody’s gotta pull the trigger, then what happens happens.

People who are thinking MaidSAFE is a new and better Dropbox are underestimating things…

I do tend to agree with the CryptoAnarchist spirit — but I certainly expect governments to continue to do what they do, and I foresee what I foresee, whether we like it or not.


@fergish yes i agree, making some piece of code a dictator is very dangerous if you think about it a while … e.g. in germany there was a case where a man discovered that some seriously bad things were being done by the bank his wife worked for (and she helped to do these things … i don’t remember what they did), so he sued them … after that a corrupt shrink attested that he was mentally ill and therefore the case was dropped … after that there were 2 other psychologists who also decided the man was crazy (they never talked to him and only read the first file and decided based on that) … in the end this man was kept in an institution for several years …

pulling off these things would be way easier with some piece of code deciding …
hmhmm … but still, companies have so much money these days … and there is corruption in our judiciary … there needs to be some kind of change in the system … but i couldn’t say which direction would be “the right direction” …

The only thing necessary for evil to triumph is for good people to obey.

How so?

I am not claiming that decentralised systems are unethical. There are many benefits to them. The unethical aspect is that a potentially devastating experiment is being conducted, and if it does go tits up there’s no way to pull on the reins.

Of course we’re going to have differing opinions on what’s ethical and what isn’t. This is precisely the problem I was getting at by saying small groups of people can develop extremely dangerous experiments that can’t be controlled. Everyone within this small group will think “we’re doing the right thing” and therefore will consider themselves ethical. This is why formal experiments always have to pass through an ethics committee, which contains many different people considering the experiment from different angles.

Let me ask the question this way:

When an indestructible widget was being developed, it was known that it would help some people and hurt others. At that time the level of harm was unknown; however, the makers were surrounded by a small group of like-minded people who also agreed it was good, and they convinced each other of this fact. Originally it served a small group of people well, but then it malfunctioned and started to kill people - most of whom had never even heard of this widget.

Original intentions aside, was it ethical to construct this devastating, indestructible widget given that only a very small community wanted it at the beginning?

I’m sure we could exchange stories until the cows come home, where yours will be saying things like “a person oppressed by government X…”. Trust me I see where there are benefits, you don’t have to convince me of this.

The way I see it is that SAFE is potentially a very big deal. If a government were to develop something without putting it to a vote that destroyed the world, would you say to that government, “chin up chaps, we know you thought you were doing the right thing”? I suspect not; you’d be saying “why the **** didn’t you put this to a vote, we’re supposed to live in a democracy”.

I don’t expect a worldwide vote on the SAFE network. What would be appropriate though is for systems with such wide-ranging potential to have to pass through at least some ethical approval process.

Who knows, I may be wrong and the system is seen by most of humanity as an ethical undertaking. My point is that the tiny population that know about SAFE are not the people to make this decision.

Like the automobile? It’s probably killed millions…

You can argue this about nearly any technology…

Freedom is dangerous.

Since when have automobiles been indestructible? If there was a compelling reason to do so, every vehicle on earth could be destroyed if needed… it might be hard but it’s very possible. This is my point.

You can also take down all the vaults.

It’s not going to happen with any technology. Once adoption starts, if it provides value, the economy becomes addicted to the benefits.

Without destroying all computers, how?

ok - paint a horror-scenario that would legitimise stopping MS-development

you are talking about a risk what huge risk are you talking about …?
and you indeed could take down all vaults by just shutting down all computers everywhere … people could do it if they wanted (erm - or just shut down MS on those computers)

I am sure there will be an “uninstall” feature. Most computers have an off switch. There is also a needed network connection that can be unplugged. If enough folks decide “that’s yucky”, the network will not sustain itself.

The analogy doesn’t break. Cars could be stopped too… cut off gas, remove highways, etc. Not going to happen. But they still kill millions more than the horse and buggy might have. Or maybe not. Unless you have a control dataset, nobody knows and nobody ever will…
