I don’t know whether there’s an automatic bounce mechanism, but if there were, it could only make it easier to “probe” many groups, since more slots would open up in every group every day or hour. The problem is, the bad guys don’t need to control a group for a day: one minute is enough.
I don’t think this can be handled by either encouraging or discouraging churn. If you discourage churn, then any number of new nodes that come online at the same time are more likely to end up in the same group, it seems. I admit I don’t know how groups are formed; say there are 100 groups with 30 members, 50 with 31 and 50 with 32, and now 200 evil new nodes come online: where do they go? At this moment it seems it may be wise to reshuffle the deck a bit, but then one could cause constant reshuffling by having a bunch of VMs come up and down.
There is a solution to the problem & we are going to have a ‘Go’ at it, an ‘Inversely Go’, that is. How can I have a ‘Go’ to not win, to make winning impossible at all?
I think the benefit of shuffling the pack would be to distribute the rogue nodes more widely. The more widely they are spread, the harder it is to dominate any particular group, even if rogue nodes try to stick together.
Like you say though, maybe we will get sufficient churn from regular network operation. Something to consider if the network becomes too static, though.
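To make the intuition about shuffling concrete, here is a rough Monte Carlo sketch. This is my own hypothetical model, not the actual SAFE group-assignment logic, and all parameters (network size, rogue fraction, group size) are illustrative. It compares spreading rogue nodes uniformly at random against the worst case where they all land consecutively.

```python
import random

def max_rogue_share(num_nodes=10000, rogue_frac=0.1, group_size=32, shuffle=True):
    """Assign nodes to fixed-size groups and return the highest rogue
    fraction found in any group.

    shuffle=True  : rogue nodes are spread uniformly at random.
    shuffle=False : rogue nodes join consecutively (worst case, i.e. they
                    fill up whole groups before any honest node arrives).
    """
    num_rogue = int(num_nodes * rogue_frac)
    nodes = [1] * num_rogue + [0] * (num_nodes - num_rogue)
    if shuffle:
        random.shuffle(nodes)
    groups = [nodes[i:i + group_size] for i in range(0, num_nodes, group_size)]
    return max(sum(g) / len(g) for g in groups)

random.seed(0)
print(max_rogue_share(shuffle=False))  # 1.0: entire groups are rogue
print(max_rogue_share(shuffle=True))   # far lower; dominating a group is rare
```

With 10% rogue nodes overall, a random shuffle makes even a simple majority in a 32-member group astronomically unlikely, whereas clustered joining hands the attacker whole groups.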
Another idea could be to make joining the network tougher. Perhaps some PoW which needs to be done before a node can even join a group. This could be some sort of mining algorithm geared for CPU/GPU effort (like, say, Litecoin’s). It would only need to be hard enough to deter easy ‘hunting’ for slots in target groups. You then have something direct to lose if you don’t land in any of your rogue-populated groups.
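As a sketch of what such an entrance puzzle could look like: this is a generic hashcash-style construction, not part of the actual SAFE protocol. A real deployment aiming at the CPU/GPU-friendly goal above would likely use a memory-hard function such as scrypt (as Litecoin does); plain SHA-256 is used here only for brevity.

```python
import hashlib
import os

def make_challenge() -> bytes:
    """The group picks a random challenge for the joining node."""
    return os.urandom(16)

def solve(challenge: bytes, difficulty_bits: int = 16) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) falls below
    a target. Each extra difficulty bit doubles the expected CPU cost."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Verification costs the group a single hash, so it is cheap to check."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = make_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
```

The asymmetry is the point: joining (and therefore ‘hunting’ for slots across many groups) is expensive, while checking a candidate’s work is nearly free for the group.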
You know, you might be able to pull off a digital version with a screencasting app, a graphics tablet and Gimp or something. There are probably lots of whiteboarding apps actually, and if there aren’t, someone could create one.
@dirvine That really highlights the power of anonymity. The origin of bitcoin is still anonymous, and that gave the idea communication power; it had to be crucial to its uptake. It’s why, more than anything else, we need anonymous communication. It strips away the shill power and the conflicts of interest; it cuts away the distraction.
That makes it slightly easier for the new nodes to penetrate and overtake old groups (although not shuffling them isn’t enough to prevent it, so it doesn’t really matter either way).
PoW: that would be funny. Reminds me of Ethereum’s temporary excursion into PoW while they’re finalizing their PoS protocol.
Ha! Well, it is an option. It is pretty easy for a group to define an answer and request that the joining node figure out the question. We don’t need blockchains, network-wide consensus or any of that jazz; we could just use it as an entrance fee to get into an initial group.
Maybe it’s a solution without a problem to solve, but that is no bad thing to have, imo.
The best way would be, as it has always been in research, to write up and publish self-contained papers in peer-reviewed journals, describing all the details (remember that the devil is in the details).
It doesn’t help to have FAQs, videos, wikis, and loads of papers that lack the precise details, merely refer to each other, and have not been peer reviewed or passed some third-party scrutiny test. The fundamentals of the project must be set out in self-contained documentation.
At the core of your project, as I understand from your claims, you have a solution to Nakamoto’s Byzantine Generals Problem (that is, the consensus problem in an open network, which is more difficult than the original Lamport & Co. problem) without PoW.
I believe this would be a remarkable contribution. I recommend that you write a self-contained paper with your solution. I am sure you will be able to publish it in a top journal, and this will bring academic recognition to you and your project and would reassure future investors. I believe this should have been the first thing to do in this project. Once the foundations are firmly established, the only controversy could be about the technical implementation.
This is true of research, but remember we are not only a research group: we are also releasing a project. So it has to be a balance. The background for easy reading of the consensus mechanism is in this post here (in particular the sentinel implementation at the bottom): The language of the network | Metaquestions. I think/hope it makes sense. In terms of papers there are some already, but maybe not specifically on consensus; they are more on data security and distributed networking. There are another few blog posts and papers (not in academia yet; some of these take 18 months or more to go through the IEEE, for instance).
Hopefully there will be many papers soon, but we have a load of info out there to read and poke at just now. There has to be an effort to combine these in a better way though, as @hillbicks pointed out, as many of the algorithms have moved on (as you would expect) over time. The community have been great at asking lots of questions, getting answers and pulling these together as well.
The initial 8 months of my time were spent documenting and detailing the whole network in detailed patent designs, so there is a huge amount of detail there, with graphics etc., which is a good backgrounder.
I feel a paper on strictly the consensus mechanism will be warranted at some stage, but we cannot ask the community to wait a couple of weeks at the moment. We will, however, need to formalise this over time. The RFC process also helps when these algorithms are upgraded, like the recent routing update for maintaining the kademlia invariant: https://github.com/maidsafe/rfcs/blob/master/active/0019-new-kademlia-routing-logic/0019-new-kademlia-routing-logic.md These will all form part of such a document though, along with the likes of the blog post linked here.
Thank you for your answer and links. I understand that the priority now is coding the whole protocol. But I wouldn’t dismiss the importance of clarifying the solution to the Nakamoto Generals Problem, since this is fundamental for the viability of the project.
From the link you provide, if I understand this properly, you base your solution on structuring the net into “trustable groups”:
I understand that the nodes act as a higher trustable authority that will validate new groups. It would be helpful to explain the mathematical reasons behind the “sizing” of these groups that you propose and why it works. In particular, since there is no PoW, what is the PoS or similar mechanism that puts a cost on creating multiple identities that attempt to form malicious groups? That the groups are originally trustable doesn’t mean they can’t be tricked. If you have a link for this part of the protocol, it would help to understand why it works.
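For intuition on the sizing question, here is a back-of-the-envelope calculation; this is my own illustration, not MaidSafe’s published analysis, and the parameters are hypothetical. If a fraction p of all nodes is malicious and groups are assembled at random, the chance that attackers reach a given quorum in a group is a binomial tail, which shrinks rapidly as group size and quorum grow.

```python
from math import comb

def p_group_compromise(n: int, quorum: int, p: float) -> float:
    """Probability that at least `quorum` of the `n` members of a randomly
    assembled group are malicious, given that a fraction `p` of all nodes
    is malicious. Binomial model; sampling without replacement from a
    large network is very close to this."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(quorum, n + 1))

# Illustrative numbers only (not the actual SAFE parameters):
# 10% malicious nodes, group of 32, quorum of 28.
print(p_group_compromise(32, 28, 0.1))  # vanishingly small
```

This is exactly why random group assignment matters: the model assumes the attacker cannot choose which group their identities land in, so a PoW or PoS cost on creating identities multiplies directly against these already tiny odds.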
Peer review is another dead-end road. Do people actually believe that peer review works? It does not. I can’t remember the exact figure, but somewhere around 50% of all ‘peer reviewed’ research is false or unrepeatable! Aside from the gatekeeping of major journals, peer review doesn’t really catch major errors. A professor of some sort wanted to test this, so he dressed up a nonsense paper to look like real research and submitted it to several ‘peer review’ publishers. Guess what? They all accepted! The gold standard here is “Does it work?”, and frankly, those on this board are anxious enough for the MVP and other deliverables without having to wait for a ‘paper’ to be ‘peer-reviewed’. If you’re still skeptical after watching the videos, reading the wiki and the papers that are out there, grab a ticket and wait in line for the release like the rest of us. Isn’t the proof in the pudding at any rate?
Not nitpicking, but do you mean the Byzantine Generals Problem here? Bitcoin is an example of a possible solution (proving itself so far, though it shows where it would break). There are also a great many Byzantine-tolerant networks (check the many MS patents and papers on this subject).
I am not aware of a Nakamoto Generals problem; it’s perhaps a variant, but if you have some links they will help. Thanks for the tip if it is unknown to me.
In terms of papers, I hope you have found some so far. I do really appreciate papers and the work therein, but yes, as part of a larger picture. Until it is a law it’s a theory, and until it’s a working product it’s a hope; that is the way I treat much of this. With the exception of cryptographic algorithms and similar principles (which can be trusted only after many eyes on them and testing in the wild).
As even bitcoin shows, without network effect it does not solve the problem, but you cannot really mathematically prove network effect; you can only show sybil resistance etc. We used some maths for this (some posted on here) and also many simulation iterations to back that up. It’s not formalised in a paper, but it is available for all to see, so far anyway.
Actually, if we add these to the wiki with all referenced papers it may help a lot of folks get answers to the questions they may have. Perhaps, @hillbicks @fergish et al, it may be an idea. If folks follow the references there would be a good few papers; the RFC project should also be considered, as the RFCs move the project and its algorithms along.
The paper from Paul Greig is already in the whitepaper section of the wiki, along with some other papers. Unfortunately the IEEE paper is not available on the maidsafe.net host anymore. If it is still hosted somewhere on there, I’ll gladly change the link.
With regards to the RFCs, which ones should be included? Everything that is listed under ‘active’ on GitHub? If so, I’ll add a new category for RFCs, copy the content over to the wiki, and monitor for changes on GitHub.
I’m not sure how to handle that article, to be honest, since it seems to be behind a paywall and I don’t want to distribute copyright-protected material from the wiki server. I don’t think that would be particularly wise or nice of me.