From what I understand there will be no Fleming; the next release is much more than Fleming was supposed to be. We will get the network with almost all, or all, features.
Nice to see the change in attitude. Don’t let stress get the better of you. Have a great day.
Must be Flemmwell or Maxming
Paging @anon94252342 - looks like your account has been hacked.
Just kidding, bones beat me to it.
First Testnet
There are some new issues added by David to the routing GitHub repo. Am I wrong, or does it seem like they're related to node aging? It doesn't seem like too much work if that is all that's needed for node aging… but I'm probably wrong…
EDIT: the project plan for the last piece remaining - NodeAging - is up on GitHub, very nice indeed.
Could be the start of a project board anyhow. Let's see where it goes.
I am really glad to see the final stage of this great and ambitious mission.
Seems even slower connections will be able to participate again.
Unless I misunderstand?
Remove resource proof and relocate from node join and bootstrap.
resource_proof will be replaced with node-participation checks in a later PR. Relocate is moved to the node age project and again simplified.
Bootstrap will allow a node to connect to any node and find its nodes to Join the network.
The Join mechanism, for now, will be a simple connection to all Elders. Node age will add steps here to supply age and status to the node on join.
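Just to picture the two steps, here is a rough sketch of how bootstrap-then-join could look. None of these types or function names come from the actual routing crate; it's a toy simulation where the network round-trips are faked with hard-coded data.

```rust
use std::net::SocketAddr;

/// Illustrative stand-in for an elder's contact details (not the real routing types).
#[derive(Debug, Clone)]
struct ElderContact {
    addr: SocketAddr,
}

/// Step 1 (Bootstrap): ask any known node for the current elders.
/// Here the network round-trip is faked with a hard-coded list.
fn bootstrap(_any_known_node: SocketAddr) -> Vec<ElderContact> {
    vec![
        ElderContact { addr: "10.0.0.1:12000".parse().unwrap() },
        ElderContact { addr: "10.0.0.2:12000".parse().unwrap() },
        ElderContact { addr: "10.0.0.3:12000".parse().unwrap() },
    ]
}

/// Step 2 (Join): connect to every elder returned by bootstrap.
/// Node aging would later extend this handshake so the elders can hand the
/// joining node its age and status.
fn join(elders: &[ElderContact]) {
    for elder in elders {
        // In the real node this would open a connection and send a join request.
        println!("joining via elder at {}", elder.addr);
    }
}

fn main() {
    let contact: SocketAddr = "203.0.113.7:12000".parse().unwrap();
    let elders = bootstrap(contact);
    join(&elders);
}
```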
Yes, well kinda. Slow enough, but fast enough to participate in a vault network. resource_proof was a mechanism to ensure a minimum CPU/bandwidth. Now we don't care so much, and we don't know those figures anyway (we guessed high to test the network); instead vaults (and probably routing) will penalise nodes that are not responsive enough, which will depend on the network at that time.
So anyone will be able to join but they may get the boot?
Yes, here is an internal message regarding this that I posted earlier. Best we are all on the same page/train of thought. Hope it helps.
resource_proof was written to test a node's CPU and bandwidth capabilities. It was needed for testnets where we tried vaults from home. We realised that if a node did not have at least a 6Mib download then the test failed, so resource_proof was used to check a node had at least that. It stayed in the code from then on, morphing into something that could be used to test a node every so often, but it would only ever test a node's CPU and bandwidth, not that it would or did perform the functions the network requires. It could still be used as a minimum check of CPU and bandwidth, but it causes complexity to do so.
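For anyone wondering what that kind of test actually measures, here is a toy sketch (my own made-up code, not the resource_proof crate's real API): time a hashing loop for a rough CPU score, and divide bytes received by elapsed time to see whether the node clears a minimum-bandwidth floor.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{Duration, Instant};

/// Toy CPU check: how many hash rounds can this machine do within the budget?
fn cpu_score(budget: Duration) -> u64 {
    let start = Instant::now();
    let mut rounds = 0u64;
    let mut value = 0u64;
    while start.elapsed() < budget {
        let mut hasher = DefaultHasher::new();
        value.hash(&mut hasher);
        value = hasher.finish();
        rounds += 1;
    }
    rounds
}

/// Toy bandwidth check: given how many bytes arrived in how long,
/// is the node above the required megabit-per-second floor?
fn meets_bandwidth_floor(bytes_received: u64, elapsed: Duration, min_mbit_per_s: f64) -> bool {
    let mbits = (bytes_received as f64 * 8.0) / 1_000_000.0;
    mbits / elapsed.as_secs_f64() >= min_mbit_per_s
}

fn main() {
    println!("cpu rounds in 100ms: {}", cpu_score(Duration::from_millis(100)));
    // Pretend we timed a 3 MB test download that took 4 seconds.
    println!(
        "passes a 6 Mbit/s floor: {}",
        meets_bandwidth_floor(3_000_000, Duration::from_secs(4), 6.0)
    );
}
```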
Performance checks pretty much have to be an ongoing test of each node, to let the network know the node is working properly and at a level that is acceptable. If it is not, then it damages the network and needs to be removed.
So the 2 things are very different really, but we were running a risk that they were seen/treated equally.
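To make the difference concrete, here is a toy version of what an ongoing check could look like versus a one-off entry test: the section keeps a strike count per node and flags nodes that repeatedly miss responses. The names and the threshold are invented for illustration; this is not the actual vault/routing penalty code.

```rust
use std::collections::HashMap;

/// Toy ongoing-responsiveness tracker (illustrative only).
struct Liveness {
    strikes: HashMap<String, u32>,
    max_strikes: u32,
}

impl Liveness {
    fn new(max_strikes: u32) -> Self {
        Self { strikes: HashMap::new(), max_strikes }
    }

    /// Record whether a node answered a routine request in time.
    /// Returns true once the node exceeds the strike limit and should
    /// be flagged for removal by its section.
    fn record(&mut self, node: &str, responded_in_time: bool) -> bool {
        let entry = self.strikes.entry(node.to_string()).or_insert(0);
        if responded_in_time {
            *entry = entry.saturating_sub(1); // recover slowly when behaving
        } else {
            *entry += 1;
        }
        *entry >= self.max_strikes
    }
}

fn main() {
    let mut tracker = Liveness::new(3);
    for _ in 0..3 {
        if tracker.record("node_a", false) {
            println!("node_a unresponsive too often, flag for removal");
        }
    }
}
```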
So will we still be restricted to one vault per connection?
As an example, I have 200/200 and could probably run multiple, but am still very unclear as to whether that is beneficial in any way.
This sounds like a good step forward and much more robust and flexible than the old test.
No, but probably one vault per machine, at least for now, just because the config files etc. are in a fixed location. Of course we can make that better, but initially it will be like this.
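For illustration only, lifting that limit could be as simple as letting each instance override the otherwise fixed config/data root. The environment variable name and default path below are made up, not what the vault binary actually uses.

```rust
use std::env;
use std::path::PathBuf;

/// Toy sketch: each instance can point at its own config/data root,
/// so several vaults can coexist on one machine.
fn vault_root() -> PathBuf {
    env::var("VAULT_ROOT")
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from("/var/lib/vault"))
}

fn main() {
    // e.g. VAULT_ROOT=/var/lib/vault-2 ./vault  for a second instance
    println!("using config/data under {}", vault_root().display());
}
```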
After all the big things that have been accomplished to get to this point it is the 2 little things make me the most excited.
- No invites.
- Multiple vaults per LAN.
Call me simple, I don't care.
Also, us lower bandwidth folk will be able to run a vault still! That is pretty cool!
Anybody else following this????
What are your thoughts on virtualization? (Example: 16 vaults on a 16-core PC sharing a 1 Gb/s connection and a 128 TB HDD array.)