Problem: an attacker obtains the credentials a user uses to log in to their SAFE Network data. Many malicious actions could follow, whether browsing the data for sensitive nuggets or downloading it en masse, but all of them involve the attacker gaining a copy of some part of the data. Deletion/corruption is ineffective because MaidSafe never actually deletes anything and keeps every version of a file, so data could be undeleted or rolled back.
Solution:
I can imagine two ways of dealing with this, which can ultimately be combined:
Within the user's data, but read/written only by the node they log in with, SAFE maintains a usage profile of how the user accesses private data (such as how often, times of day, duration, where from, etc). Independent of this, the network maintains a population of attacker profiles, which are optimised and added to based on user feedback. While the user is active, his node and its near neighbours compare his usage with his profile, and with the library of attacker profiles. As the likelihood of an attack increases, SAFE escalates through defensive measures such as challenges to the user (e.g. passwords, memorable data, and two-factor authentication), imposing access restrictions on all or part of the user's data, or limits on the rate of data access (to inhibit the effectiveness of search or mass copying), and raising alarms to alert the legitimate user.
I think we should create a system which does not rely on user configuration, and which can adapt as attack profiles develop. We may or may not give users some way to override defaults, either to strengthen or weaken them, or to broaden or narrow their scope.
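To sketch the idea above: a minimal, illustrative example of scoring current activity against a learned usage profile and escalating through the defensive measures described. All names (`UsageProfile`, `escalate`, the thresholds) are my own assumptions, not any SAFE Network API.

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class UsageProfile:
    """Rolling record of one behavioural feature, e.g. reads per minute."""
    samples: list = field(default_factory=list)

    def update(self, value: float) -> None:
        self.samples.append(value)

    def anomaly_score(self, value: float) -> float:
        """How many standard deviations `value` is from the user's norm."""
        if len(self.samples) < 2:
            return 0.0
        mu, sigma = mean(self.samples), pstdev(self.samples)
        return abs(value - mu) / sigma if sigma else 0.0

def escalate(score: float) -> str:
    """Map an anomaly score onto the escalating defences described above."""
    if score < 2.0:
        return "allow"
    if score < 4.0:
        return "challenge"       # extra password / memorable data / 2FA
    if score < 6.0:
        return "rate_limit"      # throttle reads to hinder mass copying
    return "lock_and_alert"      # restrict access, alarm the real user

profile = UsageProfile()
for rate in [5, 6, 5, 7, 6, 5]:              # normal activity: ~6 reads/min
    profile.update(rate)
print(escalate(profile.anomaly_score(6)))    # typical rate -> allow
print(escalate(profile.anomaly_score(60)))   # mass download -> lock_and_alert
```

A real deployment would track many features at once (time of day, location, duration) and tune the thresholds from the shared attacker-profile library rather than hard-coding them.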
Granularity of security, where the user has indicated in advance the level of security to be applied to stored data according to: a) which app saved it, b) which app has accessed it, c) which folder it resides in, d) file size, type (extension), name, etc., e) time since last read/written, and so on.
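One way to picture that granularity is a first-match policy table keyed on the file attributes listed above. The rule fields and level names here are illustrative assumptions only, a sketch of the idea rather than a real implementation.

```python
import fnmatch

# Hypothetical policy table: each rule matches file metadata against the
# criteria above (saving app, folder, extension, size); first match wins.
POLICY = [
    ({"app": "wallet-app"},        "high"),    # a) which app saved it
    ({"folder": "/private/*"},     "high"),    # c) which folder it resides in
    ({"ext": ".key"},              "high"),    # d) file type (extension)
    ({"min_size": 100 * 2**20},    "medium"),  # d) large files
    ({},                           "low"),     # default
]

def security_level(meta: dict) -> str:
    """Return the security level for a file described by `meta`."""
    for criteria, level in POLICY:
        if "app" in criteria and meta.get("app") != criteria["app"]:
            continue
        if "folder" in criteria and not fnmatch.fnmatch(
                meta.get("path", ""), criteria["folder"]):
            continue
        if "ext" in criteria and not meta.get("path", "").endswith(criteria["ext"]):
            continue
        if "min_size" in criteria and meta.get("size", 0) < criteria["min_size"]:
            continue
        return level
    return "low"

print(security_level({"path": "/private/notes.txt", "size": 10}))      # high
print(security_level({"path": "/music/song.mp3", "size": 5 * 2**20}))  # low
```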
The Malicious Applications topic discusses providing app-related granularity of data access - to prevent a rogue app from gaining access to data of other apps used by the same user. Clearly any measures used to combat that specific attack overlap with the more general cases I'm targeting here.
An interesting approach here could be: suspicious activity requires the user to initiate a VoIP call (or similar) with a frequently contacted contact. The contact then signs an access token for the user. The close nodes to the user will pick this up and grant access. This will be the user's MPID group, though the token could be passed to the MAID group to allow data manipulation. The only problem is that this links (even if temporarily) the user's public name and private ID (MAID).
Something along these lines may be interesting though. Using some form of trusted contacts to access your account - it would be an option for users to switch on high security like this. The trusted contact could be told the MAID address to send the unlock code to, thereby preventing the network from making the connection, come to think of it. It could probably be even more secure where the trusted contact has a representation of the MAID, or in such cases the MAID is allowed but creates a new keypair at the same time (especially if the user is a trusted unique human).
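The trusted-contact flow above could look roughly like this. I'm using a shared-secret HMAC purely as a stand-in for a real public-key signature (e.g. Ed25519), and every name here is a hypothetical illustration, not SAFE code.

```python
import hmac
import hashlib
import json
import time

def sign_unlock_token(contact_secret: bytes, maid_address: str) -> dict:
    """Trusted contact signs a short-lived unlock token for the user's group."""
    token = {"maid": maid_address, "issued": int(time.time()), "ttl": 300}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(contact_secret, payload, hashlib.sha256).hexdigest()
    return token

def verify_unlock_token(contact_secret: bytes, token: dict) -> bool:
    """Close nodes check the signature and that the token is still fresh."""
    claimed = dict(token)
    sig = claimed.pop("sig")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(contact_secret, payload, hashlib.sha256).hexdigest()
    fresh = time.time() < claimed["issued"] + claimed["ttl"]
    return hmac.compare_digest(sig, expected) and fresh

secret = b"shared-with-close-group"
tok = sign_unlock_token(secret, "maid-example-address")
print(verify_unlock_token(secret, tok))    # True: valid, fresh token
print(verify_unlock_token(b"wrong", tok))  # False: signature mismatch
```

Telling the contact only the MAID address to send the token to, as suggested above, keeps the network from linking public name and MAID; with real signatures the close group would verify against the contact's public key instead of a shared secret.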
I also like the option that an app like TrueCrypt gives: if a user is in a state of coercion, there is a delete password that allows the user to give the attacker some information but in the process delete the sensitive data.
This also mitigates the problem of being declared a criminal for refusing to give data to state agents.
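The duress-password idea boils down to two credentials over one vault: the real one reveals everything, the decoy one reveals harmless data while wiping the sensitive keys. A minimal sketch, with all names (`DuressVault`, the sample data) being my own illustrative assumptions:

```python
import hashlib
import secrets

class DuressVault:
    """Two passwords: the real one unlocks, the duress one wipes and decoys."""

    def __init__(self, real_pw: str, duress_pw: str):
        self._real = hashlib.sha256(real_pw.encode()).digest()
        self._duress = hashlib.sha256(duress_pw.encode()).digest()
        self.sensitive = {"contacts": ["alice", "bob"]}
        self.decoy = {"notes": ["shopping list"]}

    def unlock(self, password: str):
        h = hashlib.sha256(password.encode()).digest()
        if secrets.compare_digest(h, self._real):
            return self.sensitive
        if secrets.compare_digest(h, self._duress):
            self.sensitive = None      # drop the sensitive data on duress
            return self.decoy
        raise PermissionError("bad credentials")

vault = DuressVault("real-pass", "duress-pass")
print(vault.unlock("duress-pass"))   # {'notes': ['shopping list']}
print(vault.sensitive)               # None: sensitive data is gone
```

In a real system "wipe" would mean destroying the only copy of a decryption key rather than a Python attribute, since as noted above the network itself keeps old versions of data.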
I’m actually not so sure I would (personally) support such an auto-erase feature. I feel the balance is already nicely struck at building a system that leaves no traces on your local machine if you so wish. I feel an auto-destruct mechanism would push the scale too far outside of the realm of basic rights.
We have a basic right to (truly protected) privacy, but an easy-way-out of any trace of responsibility is not a good thing to my mind. The legal system should carefully wield the powers it’s been given, but upon reasonable suspicion requesting access from the actual person is not something that should be blocked.
Agreeing with Ben on this one. A trip system that destroys data isn't really for anyone except criminals. I encrypt to prevent people from getting into my data without my consent. If police showed up with a warrant and everything is hunky-dory, and the laws agreed upon by the majority are being followed, of course I'd give them access. It's within their legal bounds. By triggering some dead man's switch, I'm just incriminating myself.
I suppose the trip system could be an app on the network if it is not built in.
I can see lots of use cases outside just "criminals" using it. I guess many of us live in countries where corruption is low, but the vast majority of the world is not like this. People with opposing religious or political opinions are regularly targeted. I don't believe holding a belief is a crime, but many oppressive societies do.
So maybe I am an activist under a violent state: my door is kicked in and they demand my keys. I know if I give them up they'll have access to my contacts, thus endangering others too. If I don't give them, I'll be harmed, so I give the dummy key.
Or maybe I am a journalist, I work around the world in difficult areas investigating corruption. Same case as above.
I would actually agree with you. It does not always have to be for criminal intent. Like you suggest, this is perfectly possible at application level and maybe that is a very good place to put it.
Apart from that, with the MaidSafe system, even without any self-destruct function on your data, your own computers are clean and no traces are left on your drive if you so wish. So it's already a very strong protection against oppressive systems that want to control the ideas and communications of their people.
But like I said, for me it's not a clear-cut issue. What is clear is that both in Europe and in the US (and probably elsewhere) it is a criminal act to destroy evidence. So building this into the SAFE network, or into an app, brings the software project itself into a grey zone - but I'm no lawyer!
Yeah, no need to be hanging around in shades of grey when the system works fine as is.
Although, I do agree with Thoreau on this. The individual, he insists, is never obliged to surrender conscience to the majority or to the State. If a law “is of such a nature that it requires you to be the agent of injustice to another,” he declares, “then, I say, break the law.”
You guys are assuming the user lives like you, and not in an oppressive regime. It's just as valid for good - to help a persecuted group protect themselves and others from an oppressive state, or from corrupt state officers.
I agree it could be portrayed as for criminals, but so could Tor, and that is US government funded.
We are creating a technology that can and will be used for good and bad, so we need to consider both sides while considering whether features should be included or not on this basis.
I completely agree. Reading over my comment again, I'm dead wrong; I apologize.
What I was trying to get across is whether or not it’s something you bake into the protocol/system, or do you just let someone develop that as an application on top.
EDIT: And by baking it into the protocol, could that create a massive possibility for an attack? An accidental SAFE kill switch.
A proud moment, love seeing this. It is a strength we all need. Spot on @russell, this forum's working very well. @happybeing you're keeping us right man, must be the calming influence of the water. I got lost in this one for a while too and could have made the same mistake I think. Tricky areas, and tackling them is great fun though.