Last night there was an interesting discussion about limiting the number of accounts per person. It was started by @dyamanaka, and several others chimed in. @benjaminbollen suggested a captcha-type mechanism to at least prevent or hinder bots. The conversation moved on, and this is where I think there is an interesting possibility.
Imagine the network could take some proof of you as a human, and this proof were unique. If this proof were hashed, the network could allocate you the right to create an account (this is like mining). Every account would then have been mined by a specific human. That would solve many edge cases and allow us to move forward with a huge amount of opportunities.
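To make the "mined by a human" idea concrete, here is a minimal sketch. It assumes the proof stage can reduce a person to some stable canonical byte string (a big assumption; real biometric embeddings are fuzzy and would need fuzzy matching rather than exact hashing). The names `mining_proof` and `Network` are hypothetical, purely for illustration:

```python
import hashlib


def mining_proof(human_proof: bytes) -> str:
    """Hash a (hypothetical) unique-human proof into an account token.

    `human_proof` stands in for whatever biometric-derived data the
    network extracts; the raw data is discarded and only the hash is
    ever stored, so the account is not linked back to the person.
    """
    return hashlib.sha256(human_proof).hexdigest()


class Network:
    """Toy network that grants exactly one account per proof hash."""

    def __init__(self):
        self.mined: set[str] = set()

    def create_account(self, human_proof: bytes) -> bool:
        token = mining_proof(human_proof)
        if token in self.mined:
            return False  # this human has already mined an account
        self.mined.add(token)
        return True
```

Because only the hash is kept, the network can enforce one-account-per-human without knowing who the human is.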
If we knew each account was a unique individual, the network could very quickly start to prove itself for functions like voting, ranking others, and limiting data uploads per person based on some fairness algorithm (the easy part) that we implement. At the moment it is pure quid pro quo, but we are rapidly altering that as we now have farmer incentives.
In any case proof of a unique human would be incredible for a great many reasons.
There are several rules here though.
1: Any client side mechanism can be faked.
2: Any input digital information can be faked.
3: Any mechanism that requires digital parsing alone can be faked.
@benjaminbollen figured that voice could be used, and this led us to thinking about speaker recognition. This would be a system where an individual can be recognised via speech. Instead of using this as a password, though (some systems do this), we would use it as a mining proof. So a speech sample can be proven unique (within parameters, say a 1% failure rate), and this proof could be hashed and stored on the network. This storing would allow the network to let the user create an account. The account is not linked in this case; the mined human validation only allows you to create a single account, that's all. This means no privacy loss.
Each person could then have only one account, identified as belonging to a unique human. No further accounts could be created for that user. (There is a need for key-dispersal backups here, but ignore for now a person losing access and needing to re-create an account.)
The problem is that, if we could make this accurate enough, it would be client side. That means it can be scripted as @Traktion pointed out.
So this is where I think @benjaminbollen had a great point: captchas. What we need is a system where the network challenges a person with a voice-captcha mechanism. This would mean including speech recognition to an extent. So we use a captcha-type mechanism to request that a person say specific things; when we are happy these are what was said, we analyse the results for speaker recognition across each element of the returned results (making sure the same speaker produced all utterances). Then we do the mining trick. In this case the network is creating the digital information required for the mining attempt, not trusting any client.
This would mean the network requests from the client several lossless audio files that are analysed. These are then thrown away by the network to remove any link between the person and the mined unique identifier. These requests would be as unknown up front as a traditional captcha. If the client kept the recordings, they would be of no use; they have been made one-time use only, as a mining attempt.
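The challenge/verify/mine flow above can be sketched as follows. Everything here is hypothetical scaffolding: `PHRASES`, `issue_challenge`, and `verify_and_mine` are invented names, and the `speech_to_text` / `speaker_id` callables stand in for real speech-recognition and speaker-recognition components that this sketch does not implement:

```python
import hashlib
import secrets

# Hypothetical phrase pool; a real system would generate phrases the
# client cannot predict or pre-record.
PHRASES = ["purple monkey dishwasher", "seven silver swans", "quantum biscuit tin"]


def issue_challenge(n: int = 3) -> list[str]:
    """Network picks random phrases, unknown to the client up front."""
    return [secrets.choice(PHRASES) for _ in range(n)]


def verify_and_mine(challenge, utterances, speech_to_text, speaker_id):
    """Check each recording says the requested phrase, check one speaker
    produced all of them, then hash the speaker identity into a mining
    token. The audio itself is discarded; only the hash survives."""
    transcripts = [speech_to_text(u) for u in utterances]
    if transcripts != challenge:
        return None  # wrong words spoken: fails the captcha
    speakers = {speaker_id(u) for u in utterances}
    if len(speakers) != 1:
        return None  # not the same speaker in all utterances
    return hashlib.sha256(speakers.pop().encode()).hexdigest()
```

Since the network dictates the phrases, a scripted client cannot simply replay a stored recording; it has to produce the requested utterances, in one voice, on demand.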
Then we have unique anonymous humans.
A good thing for us is that we are not looking for 100% accurate recognition for network access each time, like a password. We are looking for close-enough speaker recognition. At the moment I believe the accuracy is circa 99%, so arguably not good enough for passwords, but probably good enough for us. If 1 person in 100 could create 2 accounts at the moment, then it's still an issue, but it is also a marked improvement on any system I can think of today. It may mean that trolls and suchlike would have to watch out, as reputation may be something that will be manageable by the network (huge caution here, I know).
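One caveat worth making explicit about that ~1% figure: if attempts are independent, someone who keeps retrying amplifies the failure rate. A quick back-of-envelope helper (hypothetical name, assuming independent attempts, which a real recogniser may not give you):

```python
def second_account_odds(attempts: int, failure_rate: float = 0.01) -> float:
    """Chance that someone retrying `attempts` times slips a duplicate
    account past a recogniser with the quoted ~1% failure rate,
    assuming each attempt is an independent trial."""
    return 1 - (1 - failure_rate) ** attempts
```

With one attempt the odds are 1%, but after 100 independent retries they pass 60%, so the network would also need to rate-limit or otherwise price repeated mining attempts, not just rely on per-attempt accuracy.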
So please fire in with ways to look at this. I think we can all attack the edge cases later (re-creating accounts, people who cannot speak, lifetime reputation, etc.). I see these as easier-to-fix issues, definitely to be dealt with, but only after we see whether we could make a system like this work at all. If we get the tech side to work, we can attack each of the possible downsides after that. This is a very interesting opportunity, and a good thing is that it could be retrospectively added as a new account type (validated unique human) later if required.
Plus the matrix needs to count each battery, surely? (strokes_white_cat)