When "bad actors" start using SAFE...?

@chrisfostertv a like just wasn’t enough. Yes, yes and yes!

2 Likes

Cheers :blush:

The bad actor is part of the market. We can do nothing, and we should do nothing. Without Silk Road, Bitcoin could not have found the value it has today. In the same way, MaidSafe needs the dark market to some degree.

When I’m home I live in Oakland, CA. Gang-infested. Hopefully when people trust the SAFE network, maybe, just maybe, they can report who the murderers are. As it is, everyone is too scared.

1 Like

Wow, that’s as real as it gets.

You have a very different dynamic in the US to us in Australia; we had our guns confiscated. I know the government here is very fearful of 3D-printed guns, but then again 3D printing directly threatens the big corporations, so maybe it’s a beat-up.

If you read Martin Armstrong, he believes the problem is always government, they will never change their ways and so Revolution becomes the end result.

It certainly seems US police are geared up for that scenario, as the #ferguson situation starkly shows. Revolution would be a big mistake though, wouldn’t it? The might of the military would crush the people like bugs.

With something like SAFE, communities will have a way of organising against any bad actors, including local gangs, police and governments…who seem to have synergistic criminal relationships to some extent.

I can’t see any other solution for rooting out systemic corruption but community-by-community organising: starving the beast via education and the ethical allocation of capital. People’s pension funds, for example, are collectively big players and can assert real pressure.

2 Likes

We cannot stop them, but we can do what we can to prove we aren’t like them. Each person should be allowed to interact with SAFE Network in the way they choose, as a unique tribe.

We aren’t all the same tribe; we don’t have the same needs, or fit into the same use cases. I think the majority of people do not want complete anonymity, but at the same time we do want to limit and/or control what people can learn about our activities (privacy).

I’ve offered my ideas in the past on what could be done. I’m advocating accommodating the mainstream users rather than the bad actors. It may take some developer time to accommodate the mainstream but this is important if SAFE Network is to be useful to the world and not just to bad actors.

We need to let each individual define how they use SAFE Network. Contracts can allow it, programmable privacy can also allow it.

SAFE Network isn’t a coin. People were not really asking for Bitcoin but everyone is asking for secure cloud storage.

SAFE Network just has to be hack-proof and cheaper. On the other hand, if ISIS uses SAFE Network from the start, then that alone would kill the SAFE Network in its infancy. We should do what we can to discourage bad actors, or at least allow people to use it in a way that distinguishes them from bad actors.

That is why I discussed techniques like revocable privacy, programmable privacy, and contracts. Some, perhaps most, people will want someone to know who they are, so the majority do not want complete anonymity. I would say that for privacy most people simply want to control access to their information, which isn’t the same as being completely anonymous.

So, for instance: this information is shared with people who meet certain criteria; that information is shared with the world; if something happens to you, then this information goes to the authorities, the journalists, or your family; or perhaps the full account history is released, if you so choose.

So, to be as concise as possible, I’m arguing for programmable privacy. I want that kind of control or I will not find SAFE Network very useful, because I’m not a bad actor. I just want to control who can access my information and make sure only authorized persons can access it.

If it’s so secret that no one on earth should access it, then I wouldn’t want it, because it’s a burden. Why would anyone want to be burdened like that? And it wouldn’t protect you at all, because you could just be captured, tortured, drugged, or brainwashed; it actually puts you in more danger to have secrets than not to.

This is why I would expect the vast majority of people to choose to limit access to their information. But no, I don’t think people are going to be able to protect a private key with their life unless they are hardcore and trained. So ultimately it’s going to end up being a situation where people choose exactly where their information goes, who can access it in an emergency, and so on.

I think technology itself is neutral. We can only make sure that we do not do bad things and do not participate in detrimental activity. But let’s face reality: if ISIS thinks communication via MaidSafe is much safer than via email, ISIS will still use it to communicate.

Are we the crazy ones who think restricting free speech is a slippery slope? I tend to think not, but it is a difficult argument to make at times.

How is this being achieved by the MaidSafe network?

Saw Fox News today, and the tone was unreal. Anything that ISIS looks at needs to be obliterated, etc. With any luck we (the US) will be arming another group of radicals with even more weapons by the end of the year. Anyway, I think we would have to hope that beneficial stories presented themselves as counter-examples.

This is already built in. Someone has to spend the resources to become 80% of the network, or convince 80% of the network to run a specific variant that will co-operatively remove the privacy and anonymity of that person(s).

Maths

It’s not already built in. Privacy is not programmable. There is no contract network so that I can only do business with people who abide by the same rules as me.

I’m not talking about giving arbitrary individuals in the network the ability to revoke my privacy. I’m talking about selecting the people who I choose to have the ability to do it.

The point is privacy has to be completely programmable. There can be no one-size-fits-all, because everyone is in a different situation with different reasons for needing privacy. Some people want privacy which can be revoked by their group if the group votes to do so. Some people want to release information which only people or entities with certain attributes can decrypt.

All of this could exist as contracts if SAFE Network builds a Turing complete scripting layer on top of the foundation. Programmable privacy will allow anyone to come up with any privacy contract they need to define how they would like to interact with SAFE Network.
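To make the idea concrete, here is a minimal sketch of what a privacy contract could look like as data plus code. All names here are hypothetical; SAFE Network has no such scripting layer today, so this only illustrates the shape of the idea:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Clause:
    """One rule in a privacy contract: who may access, under what condition."""
    audience: str                      # e.g. "family", "jury", "world"
    condition: Callable[[Dict], bool]  # predicate over the current context

@dataclass
class PrivacyContract:
    owner: str
    clauses: List[Clause] = field(default_factory=list)

    def may_access(self, audience: str, context: Dict) -> bool:
        # Access is granted only if some clause names this audience
        # and its condition holds in the given context.
        return any(c.audience == audience and c.condition(context)
                   for c in self.clauses)

# Example clauses: family gets access only if the owner goes missing;
# a jury gets access only after a majority (7 of 12) votes to release.
contract = PrivacyContract("alice", [
    Clause("family", lambda ctx: ctx.get("owner_missing", False)),
    Clause("jury",   lambda ctx: ctx.get("jury_votes", 0) >= 7),
])
```

Templates, as described above, would just be pre-built lists of clauses that a user picks and tweaks instead of writing from scratch.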

1 Like

I assume you are being facetious; I don’t recall any mathematical properties of this network that prevent malicious processes from executing on an end user’s machine. I would love to be proved wrong here (with some linked reading material), as that would be incredible for the end user. I think it’s important when explaining this system to the average user that they understand what it can provide. I would encourage people using this system to remain vigilant when deciding what processes to run on their computer.

One thing this network could provide is an app store that allows security researchers to digitally sign releases. I don’t recall seeing that done before, and while this wouldn’t eliminate malware or viruses, it should help concerned non-technical users identify programs that are less likely to be problematic. There should be a way for security researchers to sell membership to a “club” where apps they have researched are listed and signed. I don’t know how many users would be willing to pay for such a service, but I certainly wouldn’t mind paying known legitimate researchers.

Are you suggesting that you want a network in which everyone contractually agrees that a selected person(s) is allowed to revoke a person’s privacy? A form of limited government instead of anarchy? I think the people here would push for not giving someone special privileges; at least, I would argue against it. The constant fear of predators results in us seeking more effective predators who disguise themselves as beneficial and helpful.

You’re suggesting that MaidSafe have different levels of privacy within it? What incentive would there be for someone to join the one with less privacy? The assurance that “bad actors” can be forcefully removed will be enough to entice people?

To understand MaidSafe, it’s really not possible to think in our comfortable world of linear number lines and counting. The term used to describe what I am talking about is ‘non-Euclidean maths’.

Integrity Check

Do not panic; this is just a reminder that there are nodes on the network (Data Managers) that know which nodes should hold data (PmidNodes), and these PmidNodes are managed by the nodes closest to them. You can sum this all up by stating: the network knows what data you should be holding.

This proof in MaidSafe uses a mechanism similar to a zero-knowledge proof. In this case the check should not require knowledge of the content of any data being checked, but must confirm the data is in fact held, and held accurately. This means no corruption, viruses, etc. can have affected the data. This is achieved with the following steps:

  1. A checking group (Data Managers) creates a random string
  2. This random string is sent, encrypted, to all holders of the data
  3. Each data holder appends the string to its copy of the data and hashes the result
  4. The results are collected at the checking group and compared
  5. If any node returns a different result, it is believed compromised and de-ranked

This mechanism triggers on Get requests, during account transfers, etc. It is non-deterministic and randomised by user activity. It is considered to be secure and uses zero knowledge not to conceal content (as anyone can ask for any data), but to ensure that any data with a contamination does not need to be transferred.
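As a rough sketch of the challenge-response above (function names are mine, and the real vault code is far more involved):

```python
import hashlib
import os

def make_challenge() -> bytes:
    # Step 1: the checking group (Data Managers) creates a random string
    return os.urandom(32)

def holder_response(stored_copy: bytes, challenge: bytes) -> bytes:
    # Step 3: a holder appends the challenge to its copy and hashes the result
    return hashlib.sha512(stored_copy + challenge).digest()

def check_holders(copies: list, challenge: bytes) -> list:
    # Steps 4-5: collect and compare answers; any dissenting holder is
    # flagged for de-ranking. Note the checkers never need the content
    # itself, only agreement on the hash of content plus challenge.
    answers = [holder_response(c, challenge) for c in copies]
    majority = max(set(answers), key=answers.count)
    return [i for i, a in enumerate(answers) if a != majority]

# One holder's copy has been corrupted; it is the only dissenter:
good = b"original chunk"
check_holders([good, good, b"corrupted chunk", good], make_challenge())  # [2]
```

Because the challenge is fresh each time, a holder cannot cache an old answer; it must actually hold the intact data to respond correctly.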

It is really this simple.

1 Like

@chrisfostertv We are not talking about the same thing. The network does prevent malicious tampering with data. However, what if you view a publicly available document that the owner intended to be malicious? Anyone viewing the document in the intended target software can be exploited. This network (to my knowledge) cannot detect malicious documents being served to users. This network does prevent the document from being tampered with or removed, which is an additional feat not possible on the existing internet.

As an example, imagine someone were to put an HTML document with JavaScript code on both MaidSafe and the existing internet. The experience should be the same, right? Now imagine the “experience” was a remote JavaScript exploit that leveraged a vulnerability in Firefox.

Luckily internet-related software is improving, so the attack vector should be shrinking. And hopefully people avoid installing MaidSafe software that was poorly (or even maliciously) implemented. I just think it’s important that people using this system continue the wisdom of being judicious about the programs they install for MaidSafe usage, and are careful about any particular documents they download or view from the network. I think the phrase “virus free” could give the wrong connotation to users.

Programmable privacy. The point is to allow the user to create a contract with SAFE Network which defines how they intend to interact with it. This contract would allow the user to distinguish themselves from a terrorist for instance should governments choose to accuse them due to the clauses in the contract.

The problem with one-size-fits-all privacy is that it forces everyone to accept a certain mold which simply doesn’t suit everyone. As a result, the majority of people who don’t fit that mold will never use SAFE Network, and you’ll be stuck with only people who fit the mold using it. The problem is that your use might fit the mold of “terrorist suspect”, and if that happens there are no human rights or constitutional rules in the real world to protect your secrets.

Programmable privacy puts all the control in the hands of the user. The user defines themselves, their position, their interaction with SAFE Network, the kind of privacy they require from SAFE Network which fits their use case. It basically customizes SAFE Network to the usage patterns of the individual rather than trying to force all individuals to accept a usage pattern which they are not comfortable with.

Privacy contracts can come in the form of templates so people who intend to use the network in a way which fits a common use case can choose a template. In other cases they’ll write their own privacy contract which is designed specifically for their needs.

Some people will want their privacy to be revocable by a trusted group, or will want their families to get their information if something happens to them. Other people will want attribute-based or other kinds of encryption so that they can communicate with their doctors, lawyers, or psychiatrists, but they don’t need or want it to be secret from everyone. They might want it to be accessible only to people who have the right set of attributes.

The point is that “bad actors” will exist but you give the good actors the flexibility to distinguish themselves from the bad. Contracts allow good actors to distinguish themselves from the bad actors. Tribes will eventually form where people who all agree to compatible contracts will interact with and do business with each other.

If someone doesn’t have compatible contracts with yours then you probably will not want to associate with them on SAFE Network. This would mean “bad actors” would be isolated and forced to associate only with each other because the vast majority of people aren’t going to accept an anything goes policy.

So ultimately privacy is just about access control. Programmable privacy is really about having complete control over who, what, when and how your information gets accessed. You could lock it up to be released in 10 years for example, or make it revocable if a jury which you select agrees to revoke it (and only in situations you select).

So just like a contract can have all sorts of clauses, conditions, rules, procedures, the privacy contract could be set up the exact same way. In that way it’s programmable and the result is that SAFE Network would customize around the needs of every individual user rather than trying to fit every user into a mold.

You’re suggesting that MaidSafe have different levels of privacy within it? What incentive would there be for someone to join the one with less privacy? The assurance that “bad actors” can be forcefully removed will be enough to entice people?

I’m suggesting that SAFE Network be customized to the needs of each individual user. This means a privacy contract to programmatically define the needs of the user. This isn’t about levels but about control and giving as much control to the user as possible.

The user needs to be able to define themselves and determine how they wish to interact with SAFE Network. If users are forced to use it in a way which goes against how they want or need to use it then they’ll resort to Google or Dropbox (which doesn’t give them any power but which fits their morality).

SAFE Network developers cannot define morality for users. It should allow users to program their morality in the form of contracts with SAFE Network and with other groups of users. If you don’t agree with ISIS or any terrorist group, then you should be able to define yourself by your contracts. In that way you could use the network in ways which are customized to you, without having a crisis of conscience or risking being smeared as part of groups which have entirely different rules.

1 Like

Sponsorship is the worst bad actor there is. Find a way to disable it on Project Safe, and you probably save the world.

1 Like

I’m just a few miles away in Richmond! :slight_smile:

And I don’t think your analogy works, since the OPD are just another gang that residents have to deal with (and are especially afraid of reporting). In fact, Oakland is a great analogy for the existing Internet infrastructure with its Domain Awareness Center: http://oaklandwiki.org/Domain_Awareness_Center (a growing surveillance network that only law enforcement has access to).

When society mimics the autonomous nature of the safe network, that’s when we’ll truly be free IMO.

2 Likes

Small world, I go to the Planet Fitness on Macdonald and San Pablo. Anyway, it’s going to be very interesting when the public truly understands that they can tell on bad actors IRL completely anonymously. But you’re right. Who do they tell?

1 Like

And that is why everything has to be programmable, flexible, etc.

I don’t think people can necessarily be trusted to tell anything if they are completely anonymous. As a result you need pseudo-anonymity for that. Reputation is a necessity in reporting.

Suppose they don’t know who to tell? Then you have attribute-based encryption for that. The encrypted information is the equivalent of Excalibur stuck in a stone: only an entity with the correct set of attributes can decrypt it. This way you don’t need to know who to tell; you define the qualifications of the reader who can access it.
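The “Excalibur” idea boils down to an access decision over attributes. This sketch only models that decision, not the actual cryptography of real attribute-based encryption schemes, and every name in it is invented for illustration:

```python
def policy_satisfied(policy: set, attributes: set) -> bool:
    # In attribute-based encryption, decryption succeeds only when the
    # reader's attributes satisfy the policy attached to the ciphertext.
    # Here the policy is simply a required set of attributes.
    return policy.issubset(attributes)

# Only a credentialed journalist covering Oakland can "pull the sword":
required = {"journalist", "oakland"}
policy_satisfied(required, {"journalist", "oakland", "verified"})  # True
policy_satisfied(required, {"lawyer", "oakland"})                  # False
```

A real scheme would enforce this mathematically at decryption time, so no server has to sit in the middle making the decision.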

In other cases they can lock the information in a digital time capsule which decrypts after a specific period of time unless a jury of their peers votes to release it to the authorities.

All kinds of possible contracts, more than we could possibly predict.

Here is food for thought

Suppose citizens fly drones over Ferguson to report on the police. The drones would stream directly into SAFE Network, and then if something happens, such as someone getting shot, a jury pool of people as defined in the privacy contract would have the ability to vote to release the information.

Is that possible right now? Probably not. But programmable contracts would be a requirement for it to ever be possible because these drones wouldn’t have individual owners but would be owned democratically by the town, the tribe, or the members.

Say you can distinguish yourself somehow. What’s to stop the government or some other entity from pointing the finger at some privacy-conscious individual who DOESN’T opt to put such privacy-sharing protocols in place, accusing them of being a terrorist, and pressuring them to release their data? You’re setting people up to be targeted, just like people who store food, grow gardens, or have survival gear are targeted today. You don’t have to have committed a crime to be flagged; you just need to be an individualist and show some sign of self-sufficiency. So just as being self-sufficient sets you apart in our dependent sheeple world now, if the majority of people adopt the protocols you describe, would that not set the privacy-inclined folks apart and spark witch hunts?