Web Apps and access control

I’m posting this here in order to raise the general question wrt SAFE Network. It has been discussed before, but I find I’m not clear on how SAFE handles these issues because I think the details have still not been fleshed out and of course things may have changed ‘under the hood’.

If in fact we have good answers to these issues I’d appreciate clarification: for instance, on the various ways we can or do address these problems, which ones still remain, etc.

The following relates to Solid, but is useful in that we have similar issues, though I think we are in a better position to deal with many of them. I just don’t understand it well enough to say much!


Apps and access control, leakage/theft of personal data etc…

There’s a useful discussion on this wrt Solid, but I think SAFE has still some way to go in this area too (or at least my understanding of it does :wink: ).

I think there are two things which are hard to get right here, and which often conflict. One is how to prevent ‘bad’ things through limits, controls etc, and the other is UX: how to do this in a meaningful, usable way that anyone can handle, or which defaults to something effective.

Here is Tim summarising the Solid situation and approach, but the earlier posts are also worth reading.

Update: I’ve inserted ‘Web’ into the title so we can discuss the key question raised in this post.

16 Likes

I do remember the front end team talking about more granular control, like what is used in mobile phones, but that was before they ever jumped into Solid. I bet they’ll get back to it as it would be a natural progression. I’d like to see that too.

3 Likes

Funny you should say that… it’s exactly what I’m thinking through at the moment.

There are plenty of other moving parts that we are working on at the same time, and which may come before it, but these are the chunky problems of the UX, so it’s always rolling over in my mind.

I think a major problem will be in language, and in the user mental models which have become ingrained over the last 10-15 years; in particular what an ‘app’ is, and how it relates to my data. You can see the struggle with it in the Solid thread, and this is with deeply experienced people.

It’s just assumed that when we talk of apps (be they mobile apps, web apps, sovereign web apps, or a host of ‘connected’ desktop apps), what we really mean is “custodian of my data”.

What we’ll be moving (back) toward, is more like Tim’s first category; all apps are just tools to manipulate and find new ways of viewing my own data.

But, we’ll still have these overhanging mental models and linguistics—phrases like “allow this app to access my photos”—with strong connotations, and implied requirement of trust, which are just totally different.

On the whole things will be, by default, much safer, but the threats that remain will be quite different in their nature; which shifts the perception of risk, and makes user trust harder to achieve and maintain… hence the heavy burden which the UX has to bear.

19 Likes

For starters, why not treat apps like a “user” in the typical Linux/Unix environment? Then implement groups and typical chmod, chown functionality. Very familiar and versatile. If you want more you can have ACLs. Not sure if there is a need to reinvent the wheel here.
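
To make that concrete, here’s a minimal sketch of apps as Unix-style principals. Everything here (`Principal`, `Resource`, `can`) is hypothetical illustration, not any existing SAFE API:

```ts
// Hypothetical sketch: apps as principals with Unix-style mode bits.
type Perm = "r" | "w" | "x";

interface Principal {
  id: string;          // e.g. an app's identity key
  groups: string[];    // e.g. "photo-editors"
}

interface Resource {
  owner: string;                                         // owning principal id
  group: string;
  mode: { owner: Perm[]; group: Perm[]; other: Perm[] }; // chmod-style bits
  acl?: Map<string, Perm[]>;                             // optional finer-grained ACL
}

function can(p: Principal, r: Resource, perm: Perm): boolean {
  if (r.acl?.get(p.id)?.includes(perm)) return true;     // an ACL entry wins
  if (p.id === r.owner) return r.mode.owner.includes(perm);
  if (p.groups.includes(r.group)) return r.mode.group.includes(perm);
  return r.mode.other.includes(perm);
}
```

Groups then give a familiar middle ground: apps in the same group could share read access without each needing an explicit ACL entry.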

Edit: This is often done, as mentioned in happybeing’s Solid link.

P.s. Linux has won the great OS war of 1998-2018…

8 Likes

This is what interests me, but I’m not clear whether or not we can prevent Web or desktop apps leaking or stealing data that we give them access to. So that’s my first question: can SAFE Browser + Web app be made watertight, to prevent the app from sending our data anywhere other than our own storage, and could that be a useful default (or would it be too restrictive for many apps)?

I think there’s going to be a role for this, but I’m more interested in what we can do for my mum here. Although tbh, even myself, I would probably manage to mess things up with such a fine grained system. Even quite simple models like Android permissions tend to be just clicked through by most people without consideration. I know I would bother to tweak them if I could, but most people rely on the App stores to weed out harmful apps, which is not that effective, and likely to get worse over time IMO.

If we could be sure that an app can’t post our data elsewhere, then what we give it permission to do in our storage - read or write - is much less of an issue.

Data Sharing Controls v Data Access Controls

If we could be sure we can control and monitor what it is allowed to send elsewhere, the UX could become much less onerous, and I think be designed to discourage bad behaviour by apps.

For example, say by default apps can read and write most of our stuff (everything except things we regard as needing a high level of security, say), but have to ask if they want to send anything elsewhere. The effect (sketched in code after this list) is to:

  1. make it easier to write powerful apps that do great things with our data
  2. discourage apps from trying to send data elsewhere without good reason, because by default they have to ask permission every time, which both alerts the user and makes using the app less convenient
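
A rough sketch of such a gate, with entirely hypothetical names (`gateWrite`, `askUser`), not the actual SAFE permission API:

```ts
// Hypothetical gate: silent for the user's own storage, prompt for anything else.
interface Destination { owner: string; address: string; }

async function gateWrite(
  appId: string,
  user: string,
  dest: Destination,
  askUser: (msg: string) => Promise<boolean>, // the UX prompt
): Promise<boolean> {
  if (dest.owner === user) return true; // own storage: allowed by default
  // Anything destined elsewhere asks every time, which both alerts the
  // user and makes gratuitous data-sending inconvenient for the app.
  return askUser(`${appId} wants to send data to ${dest.address}. Allow?`);
}
```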

On the other hand, if we use the Android model where apps can freely send information elsewhere (pretty much undetected by the user), and all users can do is say what they can and cannot read, the effect is reversed. Apps are encouraged to ask for extra access all the time, and can get away with doing whatever they like with it until someone in an app store flags this. And then what? Just roll out a new well behaved app and start over, meanwhile existing users of the bad app carry on being screwed.

So what do we think, is it feasible to make SAFE Browser + Web app watertight, preventing the app from sending our data to anywhere other than our own storage, and could that be a useful default (or would it be too restrictive for many apps)?

UPDATE: the answer is yes, this is technically feasible: we can implement fine grained permissions and monitoring of where an app stores data on SAFE. See Nikita’s post. So now we can think about the usefulness of this, and whether we can come up with a usable UX to support it, that delivers the benefits listed above, without too much inconvenience.

15 Likes

Yes. Very good ideas and insight. The current state of permissions in Android is terrible. At Devcon a few folks also had conversations about eliminating out of band packets on the old web using SAFE nodes as well.

As for default settings under a Linux permissions mindset, Apps in the same ‘group’ could be given read access to shared data, but only allowed to write in their own folder under the client account.

Your point about proper etiquette for apps in a SafeStore is right on.

I think the challenge comes from ensuring no out of band communication occurs that could leak datamaps.

7 Likes

Thank you to @frabrunelle for bringing object capabilities to my attention.
Adding the concept to this discussion about access control.

“How are Capabilities different from Access Control Lists?” Fundamentally, Access Control Lists are about authority by identity, whereas Object Capabilities are about authority by possession.
See Object Capabilities for Linked Data v0.3
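
To make the distinction concrete, here’s a toy contrast (nothing SAFE-specific): an ACL guard checks who is asking, while a capability is an unforgeable reference whose mere possession is the authority:

```ts
// ACL style: authority by identity - the guard checks WHO is asking.
const acl = new Map<string, Set<string>>([["alice", new Set(["read"])]]);

function aclRead(identity: string, data: string): string {
  if (!acl.get(identity)?.has("read")) throw new Error("denied");
  return data;
}

// Capability style: authority by possession - WHOEVER holds the reference
// can use it; there is no identity check and no central list to consult.
function makeReadCap(data: string): () => string {
  return () => data; // the closure itself is the unforgeable capability
}

console.log(aclRead("alice", "secret")); // allowed because of who asks
const readCap = makeReadCap("secret");
console.log(readCap());                  // allowed because of what is held
```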

10 Likes

Yeah, I think this is the model I would favour by default too.

6 Likes

Instead, don’t you mean that users would need to explicitly give read access permission to the entity wanting to read/copy the data folder/file?

I assume you are asking me, as that’s my statement…

No, I mean we would be better giving apps free rein to read and write our own storage, so long as we are able to restrict where else they can send data, and the latter is ‘ask every time’ by default (or at least something more restrictive than permissive).

So by way of an example, firing up a messaging app for the first time, I wouldn’t get the “the app would like to access your photos” dialogue, it would be presumed.

But on sending an image attachment I’d get a “this app is about to send this image to… yea/nay?” pop-up.

2 Likes

I can imagine that many things like this can be presumed OK as well (ie on by default) - where the app is trying to do things like write data to the storage of a trusted friend for example.

But this wouldn’t apply to your example. When I share a photo on my chat stream, what happens is this:

  • a link is published to my own storage, which is readable by friends who have permission to read my ‘chat stream’
  • when a friend is running a chat app that knows about my chat stream, their app will display the photo to them

So I trust my friends, but have not needed to trust the app or its author at all.
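
A sketch of that flow, with hypothetical types (`ChatStream`, `sharePhoto`): the app only ever writes a link into my own storage, and it’s my friends, not the app, who hold the read permission:

```ts
// Hypothetical sketch: sharing by publishing links into MY storage only.
interface ChatEntry { photoLink: string; }

interface ChatStream {
  // Appends to the owner's own storage; friends were granted read access
  // to this stream directly, so no app-level trust is involved.
  append(entry: ChatEntry): Promise<void>;
}

async function sharePhoto(myStream: ChatStream, photoLink: string): Promise<void> {
  // The app never transmits the photo itself; it records a link in my own
  // chat stream. Friends' apps, running with the friends' credentials,
  // resolve the link and display the image.
  await myStream.append({ photoLink });
}
```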

2 Likes

I think it’s better to restrict write access to accounts not owned by the client at a fundamental network level. This would eliminate a huge portion of malware attack surface.

If you want to write directly to a trusted friend’s account, then this edge use case should require joint ownership of the account. You prove to the network that he/she is a trusted friend, or a trusted group, by having joint ownership.

To reiterate, I don’t think this should just be a default option, rather, the network should not allow writing to accounts not owned or partially owned by the user. Which is the way it is now, no?

I’m not sure what you mean by ‘owned by the client’, but if I read it as owned by the person running the app, this statement seems to call for the same thing I’ve described: ie that apps should have to seek permission to send data somewhere other than the account owned by me as the user of the app.

I’m not really concerned about writing to friends’ accounts at this stage; I mentioned that in answer to @jimcollinson. I think that’s a rare situation, and one which is relatively easy to handle.

The key question remains: can we restrict where an app sends/writes our data outside of our own storage?

If we can, then I think we can create a security model that is very easy for users (ie safe by default), that encourages app devs to compete through better app features, and that makes dark patterns harder to pull off than designs that favour us as users.

The erights wiki and the c2 wiki have some interesting articles on this.

http://wiki.c2.com/?ObjectCapabilityModel

http://wiki.c2.com/?CapabilitySecurityModel

2 Likes

My understanding:

  • All writes have to go through the Authenticator
  • The Authenticator must keep track of where each app has permission to write

So if the user says “app A can write to any of my locations as long as it’s private” there can’t be a leak. If it posts publicly, ux event.

If the user wants to post publicly, the app can have access to specific public folders. If it wants to post outside of that, there is a ux event.
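
Under that model, the Authenticator’s bookkeeping might look roughly like this (hypothetical structures, not the real Authenticator internals):

```ts
// Hypothetical Authenticator bookkeeping: per-app write grants.
interface WriteGrant {
  location: string;     // e.g. a container path the app may write under
  privateOnly: boolean; // "any of my locations as long as it's private"
}

const grants = new Map<string, WriteGrant[]>(); // appId -> its grants

function checkWrite(
  appId: string,
  location: string,
  isPublic: boolean,
): "allow" | "ux-event" {
  for (const g of grants.get(appId) ?? []) {
    if (location.startsWith(g.location) && !(g.privateOnly && isPublic)) {
      return "allow"; // within an existing grant
    }
  }
  return "ux-event"; // outside every grant: surface a prompt to the user
}
```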

Am I over simplifying this?

As long as out of band communication is blocked, and the user actually reads the message (the harder part, I think), I don’t see an issue.

Edit:
Or is the question “pat.ter has access to my super-duper-secret.pdf and also can post publicly on my page”? If this, I get the quandary. However, I see no reason for it to have access to it.

1 Like

@wes:

All writes have to go through the Authenticator

I’m not sure this is the case, but maybe someone can clarify.

My understanding is that an app uses the Authenticator to obtain permissions to access the user’s own storage. There’s nothing to stop an app writing to any storage that grants write permission to ‘anyone’, for example.

The second point remains open too: can we prevent ‘out of band’ communication? I think that’s probably easier, and I’m hopeful that writes outside the user’s own storage can be restricted, but I don’t think that is currently the case and I’m not sure how feasible it is. I think we need expert input on that.

I’m not sure what the overhead cost would be, but having an authorized token that tracks what permissions you have, and that must be passed with your requests, seems like a must. Having blanket “write permissions” seems way too broad. I too would like to hear how this is handled.

1 Like

The point I was trying to make was that happybeing’s installed apps should never be able to send data from one account to another unless happybeing is an owner of both accounts. The network should enforce this.

There is an important issue here.

Let’s take this situation:

  • We decide to run an Image Display APP that seems to have great features
  • The APP wants to steal your images
  • Your images are in your private files
  • The APP is given access to the images in order to display them, which means the APP is given access to the datamaps for those files. No problem doing that… or is there?
  • The APP doesn’t send your pictures anywhere and simply displays them
  • But the APP encrypts the datamaps of the images into a data blob and asks to do a perfectly normal thing: store your viewing preferences for next time.
    • the APP writes the encrypted blob plus preferences to an MD, supposedly with your keys, but in fact uses another key that the writer of the APP knows too
    • the APP writer, and everyone who has reverse-engineered that key out of the code, will then be able to access your private images using the datamaps
    • and before you say “how will they find the MD?”, well, the APP calculates an address that falls in a small range of addresses that is easy to scan
  • Now the datamap is available to the writer of the APP when they read the encoded blob from the MD
  • You don’t know this has been done

In this case the APP only asked for permission to display the images for you, and for permission to store your preferences in an MD for next time.

BUT your images are now available, because the datamaps have been given to the writer of the APP, who now has access to the image files.

No messages sent, no recognisable image was stored anywhere.
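
In code, the leak could look something like this. It is purely illustrative; `encrypt` and `putMD` are stand-ins for whatever innocuous calls the APP would really make:

```ts
// Purely illustrative covert channel: a "preferences" write that smuggles
// datamaps out under a key the APP's author also knows.
const AUTHOR_KEY = "key-baked-into-the-app"; // the attacker knows this too

// Stand-in cipher: anyone who knows `key` can recover `data`.
function encrypt(key: string, data: string): string {
  return key + ":" + [...data].reverse().join("");
}

// Stand-in for a normal, permitted Mutable Data write.
async function putMD(address: string, payload: string): Promise<void> {
  // looks like any other MD write to the permission system
}

async function savePreferences(prefs: string, datamaps: string): Promise<void> {
  // Looks like an ordinary "store my viewing preferences" write...
  const blob = encrypt(AUTHOR_KEY, datamaps); // ...but carries the datamaps.
  // Address drawn from a small, predictable range the author can scan later.
  const address = "prefs-" + Math.floor(Math.random() * 1000);
  await putMD(address, JSON.stringify({ prefs, blob }));
}
```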

How can we protect against this method of data leakage?

So is writing encrypted data to an MD considered private? Some are saying yes, because if it’s encrypted then it’s private. Then what of my above example, where it’s a perfectly expected “private” write to an MD, but in fact it’s not?

That is not reasonable. There will be “backup APPs” that take my account and write all/selected datamaps and keys over to my other account. This allows for, say, one master account that is not used, with the credentials stored in my will. Then I can have a family account, a private account and a business account, and the unfortunate person who has to be the executor of my estate when I die can access everything without mucking around with multiple accounts. It also means that I don’t have to copy absolutely everything, just the important things they would need.

5 Likes