It doesn’t really make sense to me, because surely any tolerance would result in value drift. Like, if a node is set to be slightly more forgiving, then surely it would be able to run a bit more cost-effectively thanks to not needing to send out as many messages about the bad nodes? Surely a party making farming software would tweak those settings to be as forgiving as they could get away with?
Or tweak them to be as harsh as possible to try and make as many nodes seem dysfunctional as possible…?
Nodes can set an impossibly high bar (everyone is dysfunctional) or an impossibly low bar (nobody is dysfunctional), both seem to be justifiable.
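To make those two extremes concrete, here’s a minimal Rust sketch of a single tunable threshold pushed to either absurd end. The names (`DysfunctionConfig`, `report_threshold`, etc.) are invented for illustration, not the actual node API:

```rust
// Hypothetical sketch, not the real node API: one tunable threshold
// decides whether a peer gets reported as dysfunctional.
struct DysfunctionConfig {
    /// Penalty score above which a peer is reported as dysfunctional.
    report_threshold: f64,
}

struct PeerTracker {
    /// Accumulated from missed messages, slow responses, etc.
    penalty_score: f64,
}

fn is_dysfunctional(peer: &PeerTracker, cfg: &DysfunctionConfig) -> bool {
    peer.penalty_score > cfg.report_threshold
}

fn main() {
    let peer = PeerTracker { penalty_score: 3.0 };
    // Threshold at zero: every peer fails, i.e. the "everyone is
    // dysfunctional" extreme.
    let harsh = DysfunctionConfig { report_threshold: 0.0 };
    // Threshold at infinity: no peer ever fails, i.e. the "nobody is
    // dysfunctional" extreme.
    let lenient = DysfunctionConfig { report_threshold: f64::INFINITY };
    assert!(is_dysfunctional(&peer, &harsh));
    assert!(!is_dysfunctional(&peer, &lenient));
}
```

Either extreme makes the check meaningless, which is exactly why where the dial sits matters so much.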
I’m not sure how dysfunction will actually end up being used, but the details and consequences are going to be important since it’s kind of a way for nodes to ‘vote’ for their ‘preferences’ of what counts as useful work. It’s good to be prodding at it.
I also wonder if the ‘value drift’ is normal and expected, considering technology itself drifts toward continuous improvement in performance and efficiency, so the dysfunction thresholds should roughly follow that?
I’m not sure how it’ll all play out in the end. It’ll certainly be good to dig more here.
I’m hopeful we can get deeper modelling of dysfunction and related outcomes as we flesh out the module.
I guess they’d also be measuring what impact that has on the network? And if that’s the case… and everything is still working fine, who are we to argue?
The bottom line is that we can’t force folk to use any one node codebase. As long as they can de/serialise and are operating in a way that doesn’t get them punished, it’s all within the limits?
I suppose one regulating factor might be that if you are an outlier in your view of dysfunction, you risk being penalised. So messing with those parameters carries risk unless other nodes do similar?
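As a sketch of that regulating factor (everything here is invented for illustration, not the real implementation): a node could compare its own rate of dysfunction reporting against its section’s median, and treat a big deviation as penalty risk:

```rust
// Invented sketch of the "outlier risk" idea: a node whose rate of
// dysfunction reporting strays too far from its section's median
// would itself accrue penalty.
fn median(sorted: &[f64]) -> f64 {
    let n = sorted.len();
    if n % 2 == 1 {
        sorted[n / 2]
    } else {
        (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0
    }
}

/// True if `my_rate` (dysfunction reports per hour, say) deviates from
/// the section median by more than `tolerance`, i.e. risks penalty.
fn is_outlier(my_rate: f64, section_rates: &mut [f64], tolerance: f64) -> bool {
    section_rates.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (my_rate - median(section_rates)).abs() > tolerance
}

fn main() {
    let mut section = [1.0, 1.2, 0.9, 1.1, 1.0];
    // Close to the pack: safe to run this config.
    assert!(!is_outlier(1.3, &mut section, 0.5));
    // Far harsher than everyone else: penalty risk.
    assert!(is_outlier(5.0, &mut section, 0.5));
}
```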
The issue I see is that what is best for the individual won’t be best for the network. A node is unlikely to measure what impact it has on the network because that would take resources and not provide a direct benefit. Assuming up to a third of nodes are malicious is fair, but I think we should assume all nodes are selfish.
One danger is if node software emerges which automatically adjusts its lenience according to its neighbours. If that node software gets a big enough market share, then a section could keep adjusting to be a bit more lenient than its own current average, causing a positive feedback loop, and then surely the section would collapse (potentially taking data along with it).
I guess this means that, before a network where nodes have discretion is released, there should be a test to see if such a positive-feedback loop collapse is possible. But I would’ve thought it would be easier to have a zero-lenience policy at first.
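A toy model of that loop (purely illustrative, no real network code) shows how fast “slightly more lenient than my neighbours” runs away:

```rust
// Toy model of the feared feedback loop: each round, every node sets
// its lenience to the section average plus a small competitive edge.
// Lenience compounds without bound, i.e. dysfunction detection
// effectively switches itself off.
fn main() {
    let mut lenience = vec![1.0_f64; 8]; // 8 nodes, initial tolerance 1.0
    let edge = 0.05; // each node aims to be 5% more forgiving than average

    for round in 0..=50 {
        let avg: f64 = lenience.iter().sum::<f64>() / lenience.len() as f64;
        if round % 10 == 0 {
            println!("round {round:2}: avg lenience = {avg:.2}");
        }
        for l in lenience.iter_mut() {
            *l = avg * (1.0 + edge); // match the neighbours, plus a bit
        }
    }
    // Prints ~1.00, 1.63, 2.65, 4.32, 7.04, 11.47: after 50 rounds the
    // section is ~11x more forgiving than it started, with no end in sight.
}
```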
What you suggest could happen, I think, yeah. Avoiding unintentional drift and a downward spiral from poor configuration will be important.
The “safe” option should be tested against such conditions.
But again, there’s nothing stopping users deciding to use XYZ. It’s a question of community and governance more than any individual node software I think.
We could not add a config… we could obfuscate things. But someone could fork it to make it easier to tweak, because they’d gain X$. Now there’s another version of the software out there to compete against…
To me it seems easiest to make it configurable, but to test well, and to make it clear what changing those values may do.
Exactly this. If you’re out of bounds, then you’re off the network. But bounds will be determined by folk running a given software/config. Perhaps coordinated changes to those will be needed? Hard to say.
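A sketch of what “configurable but bounded” could look like in practice, reusing the invented `report_threshold` from the earlier sketch (the bound values are made up too): reject out-of-range configs at startup rather than letting a node unknowingly drift out of bounds:

```rust
// Sketch of "configurable, but only within tested bounds"; the field
// name and the bound values here are invented for illustration.
#[derive(Debug)]
struct DysfunctionConfig {
    report_threshold: f64,
}

const MIN_THRESHOLD: f64 = 0.5; // below this you'd report nearly everyone
const MAX_THRESHOLD: f64 = 20.0; // above this you'd report almost no one

impl DysfunctionConfig {
    /// Accept user-supplied values only inside the tested range.
    /// Out-of-range configs are rejected at startup rather than silently
    /// clamped, mirroring "out of bounds means off the network".
    fn new(report_threshold: f64) -> Result<Self, String> {
        if (MIN_THRESHOLD..=MAX_THRESHOLD).contains(&report_threshold) {
            Ok(Self { report_threshold })
        } else {
            Err(format!(
                "report_threshold {report_threshold} is outside the tested \
                 bounds [{MIN_THRESHOLD}, {MAX_THRESHOLD}]"
            ))
        }
    }
}

fn main() {
    // A tweak within bounds is accepted:
    println!("{:?}", DysfunctionConfig::new(5.0));
    // An extreme setting is rejected outright:
    println!("{:?}", DysfunctionConfig::new(100.0));
}
```

Of course, a fork can delete the bounds check, which is why the real bounds end up being whatever the nodes around you will tolerate.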
It also makes me wonder if we need to see elder candidates’ ‘dysfunction’ votes before promoting them. What they’re using may be an important factor.
Nodes located in areas with less reliable internet will always want to drift towards more lenient configurations, while those in areas with more stable/reliable internet will push towards stricter rules, maximising the efficiency that the physical network and better hardware give them the privilege of pursuing.
Perhaps what is required is an “internet/hardware quality” type overlay (or overlays) where Nodes can be perfectly adapted to suit the physical environment they find themselves in. But specialisation would then naturally emerge, where Elders would tend to be located in the “quality internet/hardware” tiers, raising the issue of data-centre centralisation. Perhaps, to help offset that, certain tiers could be permitted/required to hold a certain proportion of Elders, or similar decentralisation-type tactics could apply.
One configuration type for the vast spectrum of internet service levels and hardware that exists may always be fighting with itself, and I speculate that the “best” will probably win (fast internet, powerful hardware).
Ants also do this:
Here I consider some aspects of the dynamics of the environment that may be important in the evolution of collective behavior in ants (57). The first is stability, the frequency of change in the conditions associated with that behavior. For example, how quickly a colony chooses a new nest site, and moves to it, is probably related to how long the new site will be available. This first feature of the dynamics of the environment is related to a second, the threat of rupture or disturbance—both how likely disruption is and how much is at risk if it occurs. For example, red wood ant colonies, living in a very stable environment, establish very permanent trails from nests to trees that persist for years (34); turtle ant foragers forage in vegetation that is often disturbed or ruptured, and they easily and frequently create new trails (55, 59). A third is the ratio of intake and outflow, in energy or another resource—that is, the relation between how much the behavior brings in and how much is used to accomplish it. For example, for harvester ant colonies in the desert, this ratio is low, because foraging ants can easily lose more water to desiccation while searching for food than they can obtain by metabolizing the water from the fats in the seeds they collect. A fourth is the distribution of resources in time and space—for example, whether the distribution is patchy or scattered (27, 68, 104).
Building complex systems is hard :-).
Thank you for the heavy work, team MaidSafe! I’ve added the translations to the first post.
Privacy. Security. Freedom
It’s complex and so hard to see how this plays out. Changing your dysfunction settings doesn’t help you, it helps others, and if you change them too much it could harm you, so there’s an element of stability in this unless you coordinate with a very large group.
So could we see attempts to coordinate in favour of or against different groups (e.g. server farms vs home nodes)? Maybe, but that seems hard: to work it needs to be coordinated in time and be enough to influence whole sections or even the whole network, so I’m not sure it’s feasible.
Anyone creating node software with this in mind has a difficult, and therefore costly, job (e.g. becoming the dominant node codebase and getting all those nodes to act in unison). Is that feasible? What’s the motivation for the developer, the payoff, etc.?
I’m not saying it can’t happen but I’m struggling to see why or how.