I think this is where we can look to nature. We don’t see these problems (deadlocks, corruption and the like) in natural systems, because nature has evolved robust mechanisms that work well at their various scales.
The problem with humans is that the mechanisms we have are social, and they tend to operate well in small groups but not at scale. Culture (the passing down of accumulated knowledge, technology and ways of organising ever larger numbers of people) is what allows coordination of human activity at scale, yet since its advent things have not worked well at all at that scale.
This problem is mirrored in the systems that we design, because they are designed using cultural mindsets: central control, simplistic human ideals, perfect consensus, etc.
When we look to nature we don’t see those mechanisms. Can we put a value on the work of one bee in a hive, on the queen, on the drones? Does nature attempt to do that? Clearly not. We might attempt to, in order to build an abstract model or to predict behaviours, but any numbers we came up with would have dubious meaning in relation to the complexity that is really operating.
So, for example, it is hard for us to come up with better protocols such as Scuttlebutt, which are less precise in some ways but much more robust in others.
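To make that trade-off concrete, here is a minimal sketch in Python (a toy model, not Scuttlebutt’s actual implementation) of the gossip-style replication such protocols rely on: every node repeatedly merges what it knows with a random peer, with no coordinator and no guaranteed ordering, yet the whole network converges on the full picture.

```python
import random

def gossip_converge(num_nodes=10, max_rounds=100, seed=1):
    """Toy push-pull gossip: each node starts knowing only its own
    message. Every round, each node picks a random peer and the two
    merge their knowledge. Returns the round at which every node
    knows everything, or None if it never converged within the cap."""
    rng = random.Random(seed)
    state = [{i} for i in range(num_nodes)]  # node i knows message i
    for round_no in range(1, max_rounds + 1):
        for i in range(num_nodes):
            peer = rng.randrange(num_nodes)
            if peer == i:
                continue
            # Symmetric exchange: both parties end up with the union.
            merged = state[i] | state[peer]
            state[i], state[peer] = merged, set(merged)
        if all(len(s) == num_nodes for s in state):
            return round_no
    return None
```

There is no precise delivery guarantee for any single exchange, but the redundancy makes the system as a whole extremely hard to stall: losing any one node or message simply slows convergence slightly.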
As designers we are taught to think linearly, in terms of one person writing one piece of code. That leads us to designs which operate in a centralised way, with a single flow of control and execution, and that creates problems when such synchronous systems interact or are applied at scale.
Nature simply doesn’t work that way. If one bee ‘fails’, the hive is not affected. If the queen dies, the whole hive responds, rather than relying on a chain of command that might deadlock, or fail to reach a decision because one or two crucial bees must handle the eventuality in a particular way. Nature is often less direct and seemingly less efficient, but it gets there even at scale. And it doesn’t put a doomsday weapon in a suitcase in the hands of a single being.
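As a toy illustration of that hive-style redundancy (hypothetical names, sketched in Python): when every worker is interchangeable and tasks are simply retried against whoever is still alive, knocking out random workers degrades throughput but never stalls the work, because there is no crucial individual in the path.

```python
import random

def hive_process(num_workers=20, num_tasks=50, num_failures=8, seed=7):
    """Toy model of leaderless redundancy: tasks are offered to random
    workers until a live one accepts. No chain of command, so no single
    failure can block progress as long as any worker survives."""
    rng = random.Random(seed)
    alive = set(range(num_workers))
    # Knock out several workers at random; nobody coordinates a response.
    for _ in range(num_failures):
        alive.discard(rng.randrange(num_workers))
    done = 0
    for _ in range(num_tasks):
        while True:  # retry until some surviving worker picks it up
            if rng.randrange(num_workers) in alive:
                done += 1
                break
    return done
```

Compare this with a design where one designated worker must approve each task: a single failure there halts everything, which is exactly the deadlock-prone shape the hive avoids.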
So, for example, framing this problem as calculating the value of a piece of code is probably unhelpful. We think that way for the reasons explained above, but it frames the problem in a way which is hard or impossible to solve, and even if we accept its limitations it is likely to produce solutions which are brittle and contain unforeseen vulnerabilities. So, how do we do things differently?
I don’t know, so I’m just thinking out loud here… Maybe, instead of trying to value a piece of code in a mathematical or formulaic fashion, we could look for levers that incentivise better things and punish worse things, and not care whether one person’s code gets rewarded exactly fairly relative to another’s. Life isn’t fair; that’s just not how anything works. Fairness is a human ideal which actually gets in the way of doing things, so it’s a bit of a red herring here.
So I guess the lesson is to be aware of our assumptions and the ways we think (sequential, centralised, fair), and to look at how other systems operate for inspiration. It’s hard, but fun IMO.