Fleming Testnet v1 Release - *NOW OFFLINE IN PREPARATION FOR TESTNET V2*

Zsh is a bit niche - sadly. I like oh-my-zsh, but there are just too many things that don't work immediately with it.

Yup. I took it as given that you are probably smart and competent and didn’t need me coming in and explaining things to you. I took it as given that even if you weren’t familiar with a particular topic, you’d be more than capable of looking into it. I took it as given there’s no reason to even suggest that you need to revisit observations that you’ve already made, even if it’s possible I missed a previous comment.

What exactly did you get out of assuming I’m naive? … don’t answer that.

Help me out here.

Your objection to random selection is that 1000 nodes vs 1 node is unfair odds. I agree. But compared to what? If 1000 to 1 would have maxed out both our budgets, doubling the cost means the ratio becomes 500 to 0. That’s not better.

By comparison, if you halve it the odds become 2000/2, which seems useless until a new person can afford to participate and it's really 2000/2/1. Or maybe a lot of new people can afford it, and we get 2000/2/1/1/1/1/1/1. Can we find a formula to predict how quickly people drop out as the price increases? Can we find one to predict the marginal benefit of adding new nodes compared to the size of an existing network?
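
To make that arithmetic concrete, here's a toy sketch - all budgets and prices are hypothetical, and nodes_affordable is just division rounded down, not anything from the actual codebase:

fn nodes_affordable(budget: f64, price_per_node: f64) -> u64 {
    // How many nodes a player can run: budget divided by hardware price,
    // rounded down. Below the price of one node you get zero.
    (budget / price_per_node).floor() as u64
}

fn main() {
    // Budgets: the big player, the small player, and a newcomer who can
    // only afford half of today's node price.
    let budgets = [1000.0, 1.0, 0.5];
    for price in [0.5, 1.0, 2.0] {
        let counts: Vec<u64> = budgets.iter().map(|b| nodes_affordable(*b, price)).collect();
        println!("price {}: {:?}", price, counts);
        // price 0.5 -> 2000/2/1, price 1.0 -> 1000/1/0, price 2.0 -> 500/0/0
    }
}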

Suppose instead you don't increase the cost of hardware. You ramp up the network response time. 1 second? 10 seconds? I haven't worked out the math, but within a certain range it should be possible to make the marginal benefits of buying up excess nodes drop off a lot more quickly than the rate at which it excludes new players. But the villains could have lots of IP addresses! Uh huh. I've seen that in production; it was a pain in the butt, but when we got the formula right it worked really well.

Okay. So you told me I have everything to gain by sticking it out for one more round. What are you offering?

1 Like

Indeed, it could be similar to what JPL seems to have experienced. I'm not sure if he re-tried the upload command, but he had enough balance to write to the network:

3 Likes

@david-beinn

It’s (obviously) more work than spamming join connections. I’d consider it a proof of work.

@happybeing

How? All it does is say: if you want me to enter you into my queue and occupy a slot, I need you to invest some effort first, so that the cost is more on your side than mine in case you're an attacker.

Whether desirable or not I don't know, but it's not possible, because it implies a naive understanding of the attack. If I have 10,000 separate nodes performing my attack, you'll have no way of knowing that an attacker is operating at scale, save for dynamic network-wide heuristics that are more trouble than they're worth.

Why is it not good enough to have a node perform a useful task for the network before being enqueued? A legitimate node would see negligible delay, far lower than the time spent in the queue awaiting random selection, and attackers would be opening their attack already at a deficit.
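
For concreteness, here's a minimal hashcash-style sketch of that idea - purely illustrative, not the actual Safe Network joining code, and assuming the sha2 crate for hashing. The joiner burns CPU finding a nonce; the network verifies with a single hash, which is exactly the cost asymmetry I mean:

use sha2::{Digest, Sha256};

// The joiner must find a nonce such that SHA-256(challenge || nonce)
// starts with `difficulty` zero bytes. Expected cost: ~256^difficulty
// hashes for the solver, a single hash for the verifier.
fn solve(challenge: &[u8], difficulty: usize) -> u64 {
    let mut nonce = 0u64;
    loop {
        if meets(challenge, nonce, difficulty) {
            return nonce;
        }
        nonce += 1;
    }
}

fn meets(challenge: &[u8], nonce: u64, difficulty: usize) -> bool {
    let mut hasher = Sha256::new();
    hasher.update(challenge);
    hasher.update(nonce.to_le_bytes());
    hasher.finalize().iter().take(difficulty).all(|b| *b == 0)
}

An attacker running 10,000 nodes pays the puzzle cost 10,000 times before occupying a single queue slot, while a legitimate node pays it once - far less time than it will spend in the queue anyway.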

2 Likes

First off, let me congratulate everyone on making it this far. Many doubted the vision but here we stand. Weary but proud souls. An exceptional dream has finally manifested.

Unfortunately, I feel testing is partially hampered by the lack of a moderated approach. I strongly believe the team should have encouraged testers to limit the storage they provide, to allow greater participation from others and a much larger test network.

Current conditions will keep the test network small and minimize the chaotic interactions inherent in decentralized networks. Without that chaos, the chances of discovering edge cases and flaws are greatly reduced.

@maidsafe, please consider asking this great community to reduce their hard drive contributions in order to increase participation from others and improve testing feedback.

10 Likes

I would second that, but I think a slightly more practical approach would be for @maidsafe to just keep filling the network with data, even if it's junk, until we get most of the community in. Just an idea. Although the next iteration having a queue will make it a lot less effort. I'm just glad my router wasn't a blocker this go-around.

1 Like

@Stark

Good idea. If that's not a config option, it should be. Being able to allot a maximum number of GBs (and/or a percentage) will soon be a crucial option to offer node operators. Having it now would also help the network tests evolve more quickly, with nodes filling up sooner and exercising corner paths in the code.

4 Likes

Does this mean that a stopped large file upload can be resumed?

It is --max-capacity for sn_node.
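
For example (hypothetical value; I'm assuming the flag takes bytes):

$ sn_node --max-capacity 5368709120    # cap this node at ~5 GiB
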
I asked about it some time ago:


It is just an implementation of copying --local-addr to --public-addr.
Nothing interesting if you don't use those parameters.

5 Likes

So, I wonder about this, as I saw it previously a while back

but now I do not see that final line

$ safe auth unlock
Passphrase: 
Password: 
Sending action request to authd to unlock the Safe...
Safe unlocked successfully

and cannot find any credentials file related to safe.

cd .safe
ls -R ./* > list

./authd:
cert.der
key.der
logs
sn_authd

./authd/logs:
sn_authd.log
sn_authd.pid

./cli:
config.json
safe

./client:

./node:
local-node
node_connection_info.config
sn_node

./node/local-node:
reward_public_key
reward_secret_key
sn_node.log

./qp2p:

What is the significance of not having $HOME/.safe/cli/credentials ?
:thinking:

The issue I can see is that I may want to run a couple of nodes on a data centre instance. And obviously I’d want a fairly automated system.

In addition to that, I may have some Pis or Odroids that are headless and want to do the same. I don't really want interactive sessions in order to run a node.

Yes, that was the plan from memory - the network is still doing work when the person tries.

1 Like

Just trying a fresh setup, but getting this on the first attempt:

$ safe auth create --test-coins
Passphrase: 
Password: 
Sending request to authd to create a Safe...
Error: AuthdError: AuthenticatorError: Failed to store Safe on a Map: Insufficient balance to complete this operation

You will need a lot of RAM for it to work, by the way:

David said this was a known error.

3 Likes

This is a great achievement, MaidSafe team. Congratulations. I'm just wondering if you guys are going to share your thoughts about how you think the network is doing. Anything that surprised you? Any stats, and your thoughts on those? I think it's amazing, and if you can optimize it you can truly be pioneers of something unbelievable!

9 Likes

I'm also wondering this. But considering that the network has been up and running since Thursday… it can't be doing too badly, I'd say.

10 Likes

For node trust, and by extension node queuing, I'd always assumed something similar to the following.

Time is the great leveler, as its cost applies to everyone equally, so the length of time a node has been queuing - and providing full services to the network - would be considered a cost it has paid. This is effectively a proof-of-work exercise, but one that's simultaneously useful to the network.

The only way to work around this would be for an attacker to queue multiple nodes simultaneously. But, from what I've understood, the Safe Net will also have a concept of median node capability; nodes closer to the median capability get greater rewards for providing services to the network than those better or worse than it. This is twofold: first, it encourages large server farms etc. to split their servers into smaller nodes to improve their earnings, and second, if they were to disconnect and take thousands of nodes with them, network churn is less impacted because data chunk distribution is made more homogeneous by being split across many nodes.
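
As a purely illustrative sketch of that median weighting (the shape of the curve and all names here are my invention, not MaidSafe's):

// Reward multiplier that peaks at the median capability and falls off
// symmetrically for nodes that are much weaker or much stronger.
fn reward_multiplier(capability: f64, median: f64) -> f64 {
    let deviation = ((capability - median) / median).abs();
    1.0 / (1.0 + deviation)
}

Under a curve like this, a farm earns more by splitting one machine at 10x the median into ten median-sized nodes (multiplier 1.0 each) than by running it as a single outlier (multiplier 0.1).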

So trust would be enforced such that:

- Nodes need to run & queue for [a week] before they are selected to store data.
- Nodes need to run for [six months] before they are considered trusted.
- Nodes need to run for [three years] to be considered senior.
- Nodes need to run for [a decade] before they become elders.

Where the above periods of time required to qualify a node for promotion, and the spec for a median node, are adjusted dynamically based on the state of the current network. I.e. the age requirement to be an elder may be shorter when the Safe Net is younger or if a lot of elders have suddenly left.

Equally, a node's age dictates how long it is allowed to be disconnected/unreliable before it is demoted, i.e. the greater the age, the more allowance it's given for disconnects within a given window of time.
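
To pin down what I mean, here's a sketch of the tiering in code. The names are mine, and the thresholds are passed in as parameters rather than hard-coded, reflecting the dynamic adjustment described above:

use std::time::Duration;

#[derive(Debug, PartialEq, Eq)]
enum TrustLevel {
    Queued,  // running and queuing, not yet storing data
    Storing, // selected to store data (after ~a week)
    Trusted, // after ~six months
    Senior,  // after ~three years
    Elder,   // after ~a decade
}

// Thresholds are parameters, not constants, because the network would
// tune them from its current state (e.g. a shorter elder requirement
// on a young network, or after many elders leave at once).
struct Thresholds {
    storing: Duration,
    trusted: Duration,
    senior: Duration,
    elder: Duration,
}

fn trust_level(age: Duration, t: &Thresholds) -> TrustLevel {
    if age >= t.elder {
        TrustLevel::Elder
    } else if age >= t.senior {
        TrustLevel::Senior
    } else if age >= t.trusted {
        TrustLevel::Trusted
    } else if age >= t.storing {
        TrustLevel::Storing
    } else {
        TrustLevel::Queued
    }
}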

I'm sure many of you have spent a lot longer thinking about this than me, so I'm keen to hear where you think the weaknesses of this approach might lie.

1 Like

I’m guessing they will let us know once the testnet is done and the info is analysed.

Currently we require Maidsafe to develop the network … in order for us to be able to run it … so this is a matter of degree as to when the network will run on its own.

If a token requirement and invite system apply only during an initial bootstrap period, then it seems there is no contradiction of principle here.

I presume the network has, or could have, knowledge of how many total nodes or sections there are - or some way for it to know when it's out of the danger zone of the early network and bootstrap … once there, this system could simply be deactivated.
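
Sketching that check (names and threshold entirely hypothetical, just to show how simple the rule could be):

// Invite/token requirements stay on only while the network is small.
fn invites_required(known_sections: usize, bootstrap_threshold: usize) -> bool {
    known_sections < bootstrap_threshold
}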

Alternatively, maybe there is a bootstrap version of the network with this feature, then, when Maidsafe thinks it is mature enough, it updates the network without this feature.

2 Likes