Limits of the Safe Network Consensus Algorithm

The 16 September 2021 update covers Method2(B), and it has been briefly mentioned a few times since then IIRC, but of course we are on fast-moving ground. My questions so far have been stabs in the dark, trying to get an idea of what the problem is and what and where the limitations are. All we could tell previously is that it must be a fairly serious problem for a centralised Method1 solution to be seriously considered.

No one outside of the core devs will be able to even begin to think about the problem, let alone possible solutions, until the problem is documented/defined more publicly. With that in mind, I will try to sum up what we have learned so far (very open to corrections and elaborations!):

Method2 and Method2(B) are possible options and there is no limitation of the Safe Network Consensus algorithm preventing them. The problem lies elsewhere…

Going on what you're saying so far, the problem appears to have two parts:

A)

B)

On Problem part A)

I refer to the paragraph from my previous post, “From Day1”. It would be possible to quantify how many trusted individuals (of the “Script Network Maintainers” group) would be required, operating X Safe Network nodes as elders at network launch, to provide security more or less equivalent to that of the Method1 “Script Network Maintainers” group, given the initial median growth trajectory of similar data networks. This ignores, of course, Method1's external risk factors of running a centralised server network to do the job.
[Edit: The main point being that for any selected number of trusted Script Network Maintainers and the servers they administer for Method1, there likely exists a number Y of trusted Safe Node operators that would do the job to an equally probable level of security (and likely better, given that it's decentralised).]

There is a good argument to be made that “it's just too much to be controlled by too few” applies to Method1 as well. Method1 will have all the known security and related weaknesses that traditional servers have. Recent experience tells us it will be even more so, given that some absolutely spectacular hacks have been levied against companies operating in this industry, especially wherever they even slightly centralised risk (so-called “distributed bridges” and the like). Hacks so audacious that many speculate, with compelling but inconclusive proof, that only attackers with state-level resources and very low-level network access and reconnaissance could have pulled them off, repeatedly. By audacious, think 1000+ geographically distributed servers, maintained by different “Script Network Maintainers”, all somehow being identified and directly targeted. We have already seen worse.

If Method1 is the only way forward, I would recommend tried and battle-tested blockchain-based solutions to mitigate the risk through decentralisation.

On Problem part B) “autonomous distribution is not yet known how to solve”

This appears to be the heart of it.

Given this limited information, and to try to feel out the scope of problem part B): would there still be a problem if there were no complicated distribution schedule with percentages going to different groups, and node operators were simply paid for the resources they provide (through either Method2 or Method2(B))? Or is this a missing-the-forest, divide-by-zero question again?