Security end-point in human brain

This topic continues from thinking about key loggers.

The SAFE network is constructed such that the network does not need to trust any single client, and no client needs to trust the network. However, as a human user, I still need to trust my client, as it is the endpoint of the cryptographic chain.

So I broke the problem down to first principles, and then it is pretty clear: the verification chain should not end at the client; it should end in the only truly private place: my brain.

Of course it is clear that I cannot remember a private key. Nor can I, when presented with a 4000-character challenge from the network, compute cryptographic functions on it with my remembered private key to produce a result the network can verify.

It does however open a question. A question I have not answered yet, but which I present to you already:

The human brain is (on average) very strong visually: can we invent a (visual) challenge that I can ‘solve’ in my head (easily), given private information I’ve never given out, to produce an answer to be transmitted to the network?

Preferably, this operation is not invertible (i.e. at least no trivial rearranging). And the network needs a way to check the result with the equivalent of a public key. I know the question sounds ridiculous, and one would think it is impossible.

I argue, though, that the security of this ‘public-private brain key infrastructure’ does not have to be as watertight as standard PKI. Firstly, it complements the rock-solid cryptographic fortress of the SAFE network. Secondly, it is primarily intended to reach beyond the client: the mathematical PKI ends at the client software, but the (weaker) visual PKI extends into the innermost private part: your brain.

So agreed, my question is rather mysterious, perhaps unsolvable. It might seem easier to handle if you think of it as an extended captcha: just some private information stored in your brain.

For example (and this is not good enough): I remember that my color is blue. The network presents a captcha with different letters in different colours. Only I know that I’m supposed to type only the blue letters and ignore the rest. For the client this could already be hard to solve: the letters are distorted as in a normal captcha, but they don’t have to be a single shade of blue, since ‘blue’ is a broad term for humans, and letters can also contain minor parts of other colours, as long as it is easy for a human to see which letters are blue. Think pointillism: from a distance the human eye blurs the dots into a picture, but the computer only sees the dots, not to mention the standard distortion. So this somewhat satisfies the criterion that the challenge should be hard for the client software to invert.

The other criterion is that the network should be able to verify the answer with a ‘public’ key that it gets from the client. This example does not satisfy that.

Crunch those brains of yours, and help us solve this end-point problem!


I’ll be thinking on this topic for the duration of my work day. However, how does this help the user trust their client?


Note that this is still just a toy model, but it’s to get the basic model laid out. Here is my proposal (so far) for the public verification, the remaining problem:

My public key consists of a ‘large’ set of color ranges (i.e. groups of red, groups of blue, etc.). My private key is that only I know which color I pay attention to. This visual PKI is completed at key generation: a unique string is created for every color range in the public key, and the private key stores only the unique string that corresponds to my brain private key (blue).
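To make the key-pair idea concrete, here is a minimal Python sketch of the generation step. Everything here is my own illustration (the function name, 16-byte unique strings), not anything from the SAFE codebase; the point is only that the public key lists a unique string per color range, while the memorised color itself is stored nowhere:

```python
import os

def generate_brain_keypair(color_ranges, my_color):
    """Sketch of 'visual PKI' key generation.

    Public key: one random unique string per color range.
    Client-side private key: only the unique string for the color
    the user memorised. The memorised color (the brain private key)
    never appears in any stored data structure.
    """
    assert my_color in color_ranges
    public_key = {color: os.urandom(16).hex() for color in color_ranges}
    private_unique_string = public_key[my_color]
    return public_key, private_unique_string

pub, priv = generate_brain_keypair(["red", "green", "blue", "yellow"], "blue")
```

Note that the public key openly contains the blue string too; the secrecy lies entirely in which of the strings I pay attention to.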

Now the network needs two new manager groups: one for generating the challenge based on my public key, and a second group for verifying the answer. The challenge-generator group can be the nodes closest to my public key (these managers should be able to read the public key). The challenge verifiers should be another group: the nodes closest to my ‘MAID login chunk’/‘brain verification chunk’. This chunk does not hold the MAID login details, but rather a redirection to where the MAID chunk can be found, properly encrypted with your password. The public key contains the hash needed to locate the brain-verification chunk and its group.

So the challenge group can generate a challenge based on the different color ranges in the public key. They send the challenge as an image to the client. As this group constructed the challenge from the public key, they take the hash of every possible solution (i.e. the random letters that were distorted in the “extended captcha”, per color range). Importantly, a hashed solution is the random letters for that color concatenated with the unique string for that color, which is also available in the public key. This gives a nice collection of hashes that they send to the verification group.
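A rough sketch of what the challenge group would compute. SHA-256 and six random letters per color range are assumptions of mine, chosen purely for illustration:

```python
import hashlib
import random
import string

def generate_challenge(public_key):
    """Challenge group: pick random letters for each color range,
    then pre-hash every candidate solution as
    sha256(letters_for_color + unique_string_for_color)."""
    challenge = {color: "".join(random.choices(string.ascii_uppercase, k=6))
                 for color in public_key}
    solution_hashes = {
        hashlib.sha256((letters + public_key[color]).encode()).hexdigest()
        for color, letters in challenge.items()
    }
    # challenge -> rendered as a distorted image for the client;
    # solution_hashes -> sent to the verification group.
    return challenge, solution_hashes
```

Only the hash set leaves the challenge group, so the verification group learns nothing about which color is mine.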

This verification group now knows nothing about the public key, except that for this particular challenge the solution has to be in the set of hashes they received from the challenge group. This set has to be signed, though, as it is crucial that they only accept challenges that authentically originated from the public key managers. (** How is this guaranteed? **)

So on the client side, after logging in normally, the client should only be able to retrieve from the network the private unique string associated with my private brain key (blue), and a hash of the public key to locate the generation group and request a new challenge. In my brain I solve the challenge, and type only the \blue/ (private brain key) letters that I read in the image. The client hashes this human solution with the private unique string it retrieved from the network after the first-stage login, and sends the resulting hash to the verification group (also known from retrieving the first-login details).

The verification group should now find exactly one matching hash in the list of possible solutions received from the generation group IF AND ONLY IF both the first ‘password’ login and the second ‘brain’ login were exactly correct. If the login verifies, they send their chunk back to the client, i.e. the encrypted ‘where-can-I-find-my-MAID’ package.
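The client-side hashing and the verification group’s check could then look like this toy round trip. Again, the function names, the SHA-256 choice, and the example strings are all mine:

```python
import hashlib

def client_response(typed_letters, private_unique_string):
    """Client side: hash the letters the human read off the image
    together with the unique string retrieved after the first-stage
    login. Only this hash ever leaves the client."""
    return hashlib.sha256(
        (typed_letters + private_unique_string).encode()).hexdigest()

def verify_login(response_hash, candidate_hashes):
    """Verification group: the login is accepted if and only if the
    client's hash appears in the (signed) set received from the
    generation group."""
    return response_hash in candidate_hashes

# Toy round trip: suppose the generation group pre-hashed two
# candidate solutions, one per color range.
candidates = {hashlib.sha256(b"QWERTYstr-blue").hexdigest(),
              hashlib.sha256(b"ASDFGHstr-red").hexdigest()}

# Correct human answer + correct unique string -> exactly one match:
assert verify_login(client_response("QWERTY", "str-blue"), candidates)
# Wrong unique string (failed first-stage login) -> no match:
assert not verify_login(client_response("QWERTY", "str-red"), candidates)
```

Both stages have to be right at once: the typed letters come from the brain key, the unique string from the password login.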

The client receives this package, can decrypt it, and continues as normal with the self-authenticated login. A key logger that stole your typed credentials can only get through the first stage and retrieve and decrypt the first response from the network: this only tells the attacker where your public brain key (generation group) and your MAID-finding chunk (brain verification group) live on the network, and of course the computer part of your brain private key! This is important information, but still useless without the ‘brain private key’ itself.

The proposal needs a lot of sifting, correcting, and work to make it actually compatible with the SAFE network. (For example, I left out that the NodeManagers should actually handle these client requests to query the network; I simplified the problem to help me get through it.)

I’m really quite happy with this :). It is very likely, of course, that the proposal might not work, but that is your task now: refute it!

NOTE: I used the concept of choosing a ‘color’ as a brain private key, but there could be many different schemes depending on user preference. For example, color-blind people might prefer to mark letters with triangles, squares, dots, etc. Or maybe I want to read only letters that follow a certain kind of curve through the picture… For blind people we still have to think of an audio equivalent; as a lousy example, only listen to ‘brands of cars’ in a longer “sentence” of words.

It might be obvious to some already that these brain-captchas need a higher-resolution image to be meaningful from a security and readability point of view. For vision this is not much of a problem. For audio, listening to a long distorted sentence is far from pleasurable, so that is unfortunate.

I hope you guys like the idea though! :slight_smile:


I guess my ramblings don’t really invoke much response. Here’s the short version:

Login normally with your password to unlock the first door. Then solve the human challenge, which can only be solved with secret information you learnt during ‘brain key pair generation’ but are never afterwards required to input into the system, just as a private key is never supposed to leave your possession.

This structure could defeat any attempt external to the client software (i.e. key logging, or screen grabbing without human review) to gain access to your account.

If someone has read through the proposal, please help me figure out how the verification group can be assured that the generation group was authentically the source of the list of hashes, and not conspiring with the client node. Otherwise an attack could easily be set up and falsely answered without knowledge of the private brain key, effectively nullifying the human check. That would bring us back to the current security level, where a key logger can gain access to your account.

@dirvine, any guesses whether the SAFE network can ensure that the verification group can verify that the challenge was generated by the true generation group? I’m a bit lost here.

Thanks !! Any comments welcome :slight_smile:

My brain is still hurting from the other discussions. :wink:

I do see where you’re going with this one and I think it has great potential. The concept is next level for some people. I especially like the idea of making it customizable. As you said, some people are color blind, some are deaf, some cannot determine shapes.

The Private Brain Key is stored in the user’s mind, rather than being transmitted through the keyboard. Very clever. I’ll have to think about this one.


There are many ways this can still be optimised, indeed. As I said, it’s just a toy model, which already made my head hurt today too.

Importantly, I still see (at least) this one question unresolved, and it is important: without an answer for it, the whole thing is basically a null operation. A whole lot for nothing. I hope I will never have to be involved in developing a cipher; applying existing tricks is already a brain breaker :smile:

I understand the need for this, but I’ve been taking a slightly different approach, which involves trusting a special client.

The human brain is simply not made to interface with technology in such a tightly coupled way. Asking people to do so is asking them to adapt to technology, and we’ve been trying that for long enough with limited success. I think technology should adapt to us instead, which led to my interest in ubiquitous and context-aware computing.

This is also the reason why I’m focusing my efforts on building a “digital mind” or “digital mind extension” that can act as an interface between our native brains and the Internet of Things (IoT). Of course you must trust this piece of equipment as if it were your own brain, which, given the state of the art in computer security, is currently hard to justify. Having said that, I think we can get there eventually.

I’m currently playing with the concept of data diodes for that very reason, so that a computer built out of untrustworthy components can actually be provably secure, and trusted to hold my most private memories without any risk of data leakage. I’ve already implemented a secure lifelog storage device with a couple of Raspberry Pis. Of course, it’s currently of limited use in interacting with the outside world (it physically can’t send any data outside) but that’s where we need to work to fix the underlying security (design) issues of computers, from the hardware level up.

Please note that in theory, for the majority of users a “digital mind extension” might actually be more secure than their native brains, since it won’t be as susceptible to social engineering. With context-awareness it may also detect when a user is being coerced into giving away her “keys” or provide access to private data against her wishes.

In summary, I agree that we need to trust the end-point, and that the end-point should be our brain, but just not our biological one.

For really interesting philosophical discussions on the Extended Mind, I highly recommend the work of David Chalmers, and Andy Clark.


I probably have not fully understood your suggestion, but it seems to me both our proposals are complementary! So the two will be stronger together.

One advantage I like, after considering hardware extensions that you ‘could trust’ in a previous topic, is that all people have already been given a brain. (They might not use it, but they have it nonetheless.) This for me is important for rolling out the SAFE network to all people equally.


Yes, this part is easy. I am thinking your proposal is indeed key logger resistant, which is superb. The only nag I have in my head is the entropy part, as the range of colours, for example, is maybe small. I am still wondering, though, as I am positive this is a small glitch. The idea is superb. :+1:


I agree, the ‘toy model’ of remembering a single colour from a list of colours is too small to be useful. Making it more complicated helps a bit: your private key could be, for example, “only green letters marked by triangles on closed loops, and only red letters marked by either dots or stripes on a straight line”. Combining characteristics grows the space of possibilities, but there is a downside: either the public-key challenge needs to represent all possible options, which for a captcha-like challenge quickly becomes untenable, or the chance of your private key being present in a given challenge becomes (exponentially) small. The latter would force you to regenerate many new challenges for a single login, and harvesting these challenges would leak statistical information about your private key upon analysis.
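To put rough numbers on this growth (the counts below are illustrative assumptions of mine, not figures from the discussion):

```python
import math

# Assumed, illustrative counts:
colors = 8    # distinguishable color ranges
markers = 4   # triangle, square, dot, stripe
paths = 3     # closed loop, straight line, curve

single_color_key = colors                        # toy model: pick one color
combined_key = (colors * markers * paths) ** 2   # two combined rules

# The key space grows exponentially with combined rules...
print(math.log2(single_color_key))   # 3.0 bits
print(math.log2(combined_key))       # ~13.17 bits

# ...but a single captcha would then have to render all 9216
# combinations, or the user's rule is likely absent from any
# given challenge.
```

So the win in entropy is paid for directly in challenge size, which is exactly the scaling problem described above.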

So clearly the ‘captcha letter model’ doesn’t grow desirably into a bigger space of keys. Maybe something else could. I’ll keep thinking about it.


An improved model could be the following challenge:

The generation group produces a distorted image of a grid (e.g. 5 x 5) of letters. Your private key is the sequence of grid locations you read off this grid. The difficulty level, and the related strength of the private key, can be chosen by each individual user.

From a ‘snake of at least 5 characters’ to an “unlimited” number of disconnected grid locations, the private key can be made arbitrarily hard. Interestingly, the mapping from challenge to correct solution is not necessarily unique: if a letter repeats on the grid, the private key is not uniquely determined by knowing the challenge and the correct solution. Of course, a stronger brain key can also repeat a given grid cell.

In the ‘public challenge’ (I should stop calling it a public key, because it’s not a key) there can be an arbitrary number of arbitrary sequences through the grid. Say the public challenge contains 10,000 sequences through the grid (the number will depend on the computational burden), each with an associated unique string. The generation group generates the challenge and the 10,000 hashes for possible solutions and sends these to the verification group. The rest proceeds as before.

This should already give a reasonable space of possibilities while keeping the burden on the network within bounds (i.e. the burden increases linearly while the space of possibilities grows exponentially). The image challenge can also stay within a fixed size.
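A toy sketch of this grid model, under the same assumptions as before (SHA-256, random sequences, and the sequence count scaled down from 10,000 for illustration):

```python
import hashlib
import os
import random
import string

def grid_challenge(n=5, num_sequences=100, seq_len=5):
    """Sketch of the grid model: a random n x n letter grid plus a set
    of candidate sequences through it, each hashed together with its
    own unique string (as in the color scheme)."""
    grid = [[random.choice(string.ascii_uppercase) for _ in range(n)]
            for _ in range(n)]
    sequences = []      # (grid positions, unique string) pairs
    solution_hashes = set()
    for _ in range(num_sequences):
        seq = tuple((random.randrange(n), random.randrange(n))
                    for _ in range(seq_len))
        unique = os.urandom(16).hex()
        letters = "".join(grid[r][c] for r, c in seq)
        sequences.append((seq, unique))
        solution_hashes.add(
            hashlib.sha256((letters + unique).encode()).hexdigest())
    return grid, sequences, solution_hashes

# The user's brain key is one of the published sequences; reading its
# letters off the grid and hashing them with the matching unique
# string reproduces one of the pre-computed hashes.
grid, seqs, hashes = grid_challenge()
my_seq, my_unique = seqs[0]
my_letters = "".join(grid[r][c] for r, c in my_seq)
assert hashlib.sha256((my_letters + my_unique).encode()).hexdigest() in hashes
```

Because letters repeat on a small grid, several sequences can yield the same typed letters, which is what keeps the correct solution from uniquely revealing the sequence itself.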

For blind people a different type of audio challenge still has to be imagined.

I’ll try to write the proposal out in a self-contained PDF document, clearing up some of the initial vagueness. From there we can work to correct and improve it.

Really happy again. :slight_smile:


‘Proof of unique human via voice print’ and other authentication ideas like the ones in this thread: have any of them gained traction for implementation in the Q4 2014 launch?

Not yet Chris, we have not been able to get ahead of the game in terms of the virus/anti-virus type wars, and also (possibly more importantly) in doing so without weakening any mechanism that guarantees privacy. I see this as an area we will continually probe, though, as the ability to know a unique (but unidentifiable) human gives us huge potential in many areas.

I am confident we will get there though, and we will all know when we do, as it will be very obvious and just look right. I like the idea of the network doing this a lot; it will scare some, but could be a killer app/design for Project SAFE. I think we are gathering the creative people we need to be able to do this now; this forum has provided a ton of amazing knowledge so far. It has not begun yet, and when we get our heads out of this manic workload we will get some minds focussed on this.

Thanks for the super fast response David, you guys are brilliant.


If your system can become a victim of a key logger, then this approach could be exploited as well, no?
I mean, somebody could replace part of the program with malicious code, or install a “screen logger” (a program that sends screenshots of your screen to the attacker), or use some other approach.

(Slightly related to the topic of “visual authentication” - a Google Glass app that allows you to “nod to pay”: 'Nod to Pay' App Lets Google Glass Wearers Spend Bitcoin)

Yes, that is right; we need to ensure strongly protected hardware (somehow validatable by the network) and an autonomous network. We have the latter, and I think we can do some great work with hardware and separation of info (do not give everything to one device), as well as a potential use for the zk-SNARK stuff, where we can be sure something specific has been executed (but question what else has been?). It is a huge area though, and will require a good bit of solid engineering.

The good position we will be in, if SAFE takes off as we expect, is that the network should be solid. The end points are the next thing to secure, but every individual can make the choice themselves to be secure there. I mean, you could use hardware you know and an OS you know (or built). It has to be the snoopers’ next battle, I think.

How is this any different from a brain wallet? I don’t see it as more secure unless the individual specifically doesn’t have to remember it.

If the individual has to remember it, then its security is exactly the same as any other password they have to remember.

There are millions of colours but how many colours can an individual see?

I am not sure, but I did read a very interesting study about this worldwide. On the plains in Africa, for instance, a person from the plains can see far more shades of green than me, but cannot pick out a blue square in a picture full of red squares. Then the study moved to Brazil, I think it was, and they could see different contrasts again. It seems to depend on your location just how many shades of a colour you can see. This may be a good thing in this instance, though.


Well challenged Ben!

I use exactly this kind of scheme to generate a unique password for every service I sign up to: only I know the process that generates the password, and if I can’t remember it, I just go through the process in my head to regenerate it.

So I’ll give this some thought.
