Update 19 January, 2023

I am sure this has been answered and my search abilities just suck.

How long can a node be unreachable before it gets penalized/kicked?


I suspect this will be a parameter that is subject to frequent change. There is IMHO a good argument for making this property dynamic, dependent on the current network health/weather.

This is not a plea to open that debate right now. A simple fixed value will be OK for the time being.


True, do you know if there is a window of grace at the moment?

I am thinking along the lines of:
Say I am running a Pi and the power supply gets interrupted briefly, so it reboots. I have set up systemd to autostart sn_node, so we are talking perhaps a minute or two of downtime.
Does it just carry on as if nothing happened, or is that event going to be an issue?


Think of it more like this.

If a node has more than 2X the faults of its neighbours over a period, then it’s kicked.

We have done a lot of work to make sure timers etc. are not used and this is a big reason for that. So if there is little or no network traffic (clients using the network or nodes dying) then that period could be very long. It can be almost infinite actually.

However, if the network is super busy and data flying around everywhere and a node is always missing, then it will get kicked out pretty quickly.

i.e. As long as a node is back fast enough to not build up too many failures, then it would survive.

(timers and cache kills this stone dead, and rightly so, they are horrible patterns in decentralised networks)
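The rule described above can be sketched in a few lines. This is purely illustrative, not the actual sn_node implementation: the function name, the use of the median as the neighbour baseline, and the exact comparison are all assumptions made for the example.

```python
# Hypothetical sketch of the "more than 2x the faults of its
# neighbours" rule. Not the real SAFE network code; names and the
# choice of median as the baseline are illustrative assumptions.

from statistics import median

def should_kick(node_faults: int, neighbour_faults: list[int]) -> bool:
    """Kick a node if its fault count exceeds twice the median
    fault count of its neighbours over the current period."""
    if not neighbour_faults:
        return False
    return node_faults > 2 * median(neighbour_faults)

# A node that came back quickly, with few accumulated faults, survives:
print(should_kick(3, [2, 2, 3]))   # False
# A node that is always missing while the network is busy is kicked:
print(should_kick(10, [2, 2, 3]))  # True
```

Note that nothing here is a timer: faults only accumulate when there is traffic, so on a quiet network the "period" stretches out indefinitely, exactly as described above.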


Thanks, now that you say that, I do remember that explanation from before!
I knew it existed here somewhere.


Doesn’t there have to be some semblance of a timer? How do you reconcile 2x more than neighboring nodes without one? Are we calculating back to when the nodes started and comparing failures divided by total transactions or averaging failures over some recent period of time?

Also, how do you reconcile any number of failures compared to zero? If the neighbouring nodes have no faults, then 1 fault is more than 2x their neighbours.


It’s simpler than that: you count the errors of every node and check the relationship between them all.

It sounds simple, but there are lots of things to consider, such as churn etc.; essentially you start a new generation.

Bottom line is you consider failures relative and never absolute.
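A minimal sketch of what relative, generational fault counting might look like. Everything here is an assumption for illustration: the class and method names (`FaultTracker`, `record_fault`, `new_generation`, `outliers`) are made up, and the real system will differ in how it aggregates and when generations roll over.

```python
# Illustrative sketch only: relative fault tracking with generations.
# Names and the mean-based comparison are assumptions, not the
# actual SAFE network algorithm.

class FaultTracker:
    def __init__(self):
        self.faults: dict[str, int] = {}  # node id -> faults this generation

    def record_fault(self, node_id: str) -> None:
        # Faults accumulate only on actual network activity, so a
        # quiet network penalises nobody (no timers involved).
        self.faults[node_id] = self.faults.get(node_id, 0) + 1

    def new_generation(self) -> None:
        # On churn a fresh generation starts: counts reset, so the
        # comparison always covers the same recent window for everyone.
        self.faults = {}

    def outliers(self) -> list[str]:
        # Relative, never absolute: a node is flagged only when its
        # count exceeds twice the mean of all the other nodes' counts.
        flagged = []
        for node, count in self.faults.items():
            others = [c for n, c in self.faults.items() if n != node]
            if others and count > 2 * (sum(others) / len(others)):
                flagged.append(node)
        return flagged

tracker = FaultTracker()
for _ in range(8):
    tracker.record_fault("laggy-node")
for node in ("a", "b", "c"):
    tracker.record_fault(node)
print(tracker.outliers())  # ['laggy-node']
```

Because every node is compared against its peers within the same generation, there is no need to reach back to when each node started or to divide by total transactions: the window is simply "since the last generation began".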


Thank you for the heavy work, team MaidSafe! I have added the translations to the first post :dragon:

Privacy. Security. Freedom