Following on from the update two weeks ago, we thought it would be helpful to dig into more of the considerations around horrific content on the Network: why we need to be thinking about it, what effect it might have, and whether there is a path to squaring the circle by tackling it in a way that upholds fundamental rights and is resistant to censorship.
General Progress
First up, we’re delighted to welcome Benno (@bzee) to the team on secondment from Project Decorum. Those active here, or who attended our hackathon way back when, will know Benno and his work, and I’m sure you’ll agree this is very good news. Benno is acquainting himself with the codebase and also looking at item 2 on our progress update this week, which is CPU multithreading.
A common assumption is ‘multi-thread good, single-thread bad’ when it comes to performance, but that’s simplistic: it only holds if you actually need multithreading for concurrency. Oftentimes we don’t, especially since CRDTs are eventually consistent, and the multithreading implementation in some of the crates we use seems buggy; in fact, it could be the source of some of the more perplexing bugs we’re seeing. So we’re analysing our flows to find where multithreading is genuinely necessary, so we can test the effect of moving to a single thread where it’s not.
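To make that concrete, here’s a minimal sketch of the kind of comparison we mean, assuming a tokio-based async flow; the workload and names are illustrative, not our actual code. Async tasks remain concurrent on a single-threaded runtime, so the test is simply whether dropping the extra worker threads (and their synchronisation overhead) changes behaviour or performance:

```rust
// Illustrative only: run the same async workload on a multi-threaded
// vs a single-threaded (current-thread) tokio runtime.
// Cargo.toml: tokio = { version = "1", features = ["full"] }
use std::time::Instant;
use tokio::runtime::{Builder, Runtime};

// Stand-in for an I/O-bound flow, e.g. handling a client request.
async fn handle_request(id: u64) -> u64 {
    tokio::task::yield_now().await; // hand control back to the scheduler
    id.wrapping_mul(31)
}

fn run_workload(rt: &Runtime, label: &str) {
    let start = Instant::now();
    let sum = rt.block_on(async {
        let mut sum = 0u64;
        for id in 0..100_000 {
            sum += handle_request(id).await;
        }
        sum
    });
    println!("{label}: sum={sum}, took {:?}", start.elapsed());
}

fn main() {
    // Multi-threaded runtime: enables parallelism, but brings cross-thread
    // synchronisation and work-stealing overhead with it.
    let multi = Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .unwrap();

    // Single-threaded runtime: async tasks still interleave, but there is
    // no thread contention at all.
    let single = Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    run_workload(&multi, "multi-thread");
    run_workload(&single, "single-thread");
}
```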
Meanwhile, @yogesh continues his investigations into a sled DB replacement. Cacache still seems the best candidate so far, although Yogesh has been extending the DB benchmarks (using criterion) to the rest of the alternatives, and was struck by some really astounding results. The Rust rocksdb crate, a wrapper around the C++ implementation of Facebook’s RocksDB, seems to offer ~10x faster read/write performance than the quickest alternative (and sled), with a plethora of under-the-hood optimisation options. The team is currently weighing the benefits against the negatives (being a native dependency, its prerequisites are Clang and LLVM) before taking the call on switching the DB.
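For a flavour of what those benchmarks look like, here’s a minimal, hypothetical criterion sketch (not our actual benchmark code) comparing raw write speed between sled and the rocksdb crate; it assumes criterion, sled, rocksdb, and tempfile as dev-dependencies, and of course real numbers depend heavily on workload and tuning:

```rust
// benches/db_compare.rs (hypothetical) -- run with `cargo bench`
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_writes(c: &mut Criterion) {
    // Fresh throwaway directories so the two DBs don't interfere.
    let sled_dir = tempfile::tempdir().unwrap();
    let sled_db = sled::open(sled_dir.path()).unwrap();

    let rocks_dir = tempfile::tempdir().unwrap();
    let rocks_db = rocksdb::DB::open_default(rocks_dir.path()).unwrap();

    let value = vec![0u8; 1024]; // 1 KiB payload per write

    c.bench_function("sled write", |b| {
        let mut i: u64 = 0;
        b.iter(|| {
            // Monotonic keys; big-endian bytes keep them ordered on disk.
            sled_db.insert(i.to_be_bytes(), value.clone()).unwrap();
            i += 1;
        })
    });

    c.bench_function("rocksdb write", |b| {
        let mut i: u64 = 0;
        b.iter(|| {
            rocks_db.put(i.to_be_bytes(), &value).unwrap();
            i += 1;
        })
    });
}

criterion_group!(benches, bench_writes);
criterion_main!(benches);
```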
Safe as a Grand Commons
The Safe Network has the form of a decentralised, autonomous network of nodes that simply do their job of handling and serving data as requested by clients. But its function is to serve individuals and humanity, as described through the project’s objectives:
- Allow anyone to have unrestricted access to public data: all of humanity’s information, available to all of humanity.
- Enable people to securely and privately access their own data, and use it to get things done, with no one else involved.
- Allow individuals to freely communicate with each other privately, and securely.
- Give people and businesses the opportunity for financial stability by trading their resources, products, content, and creativity without the need for middlemen or gatekeepers.
Its form is defined by its intended function, which in turn informs the strategy to deliver, nurture, and support it so that it meets its objectives in the environment it will be launched into.
We have to be cognisant of the fact that we are neither launching into a vacuum, nor as some lab experiment; nor, in the face of this, does the technology stand as a neutral entity. It is a response to a web which has been overrun by surveillance business models, the abandonment of privacy, and rampant human rights violations. It’s also worth noting the history of the incumbent internet monoliths that started their lives with the false assumption that they were merely neutral tech stacks, and what then became of them.
The Network, when it comes to public data, is intended to be a shared resource, a grand commons, allowing “anyone to have unrestricted access to public data: all of humanity’s information, available to all of humanity.”
Commons are resources that are accessible to all, held and maintained for the collective good, be that public or private. That could be a natural resource, an environment, or another resource administered not by a state but by the self-governance and principles of the community it benefits.
In the case of the Safe Network, that’s public data, but also the infrastructure for private data and secure communication.
Commons are fragile things that need to be continually nurtured and tended. This isn’t a new challenge, or even a technological one… it’s sociological in nature. That commons could be a rice paddy, or a drinking-water well. All fine and serving everyone, until I decide I’d like to drain my paddy—the lowest on the hill—or that the well would be a mighty convenient place to dump my trash.
We have, of course, designed in mechanisms for the Network to cope with the bad behaviour of nodes in a decentralised way. This is vital so the Network can protect itself from bad actors and hostile threats. These mechanisms are there, when we trace back their trajectory, to serve the objectives of the project and the needs of the humans using the technology: security, privacy, sovereignty, and access to a shared global resource.
But it’s right to understand and acknowledge that threats to the Network don’t come only from malicious node operators: attacks (even those reputational or Sybil in form) can be waged from the client side, by uploaders, too. And when the security model of the Network relies on a continuous inflow of data, this again highlights the importance of accessibility, and how reputation supports utility, which in turn supports resilience.
So we have to explore and dutifully examine the options for defending the Network from the worst kind of content, and how we do that in a decentralised way that respects human rights and is resistant to the whims of hostile state actors.
While the client side is an obvious starting point for filtering unwanted content or communications, and is vital for protecting individuals, supporting communities, and solving the “welcome to hell” problem, we need to explore solutions from the node side as well.
Why is this? Because, as the present-day legal and regulatory landscape will tell you, any issues around moderation and liability always escalate until they reach the payment or storage layer, which in this case means the node operators, the core developers, and the economic end-points.
Or it all gets pushed back on the app and client developers, who are then liable for content they have no control over; once again the ecosystem end-points and on-ramps are vulnerable, the utility and accessibility of the Network dry up, and so do its resilience and security.
So there still are, and there must be, mechanisms for the Network to adapt, change, and course-correct over time based on the needs of humans. We aren’t making an indestructible robot, or a virus; we are making a shared resource that is owned by humanity, and it must be answerable to humanity. The question is: how does humanity articulate those needs and demands? That is the problem to be solved.
If we seek to improve on what came before, in building a new web that positively impacts humanity, then we must pursue an approach based on cooperation and broad consensus-building. Not only will this help curb the tendency to overestimate the extent to which technology can be a solution, it also demands checks on the kinds of power that would see creeping policy do the same.
Accountability in this pursuit starts with acknowledging that tendency, proactively assessing the risks of harm, and designing in governance structures aimed at mitigating them.
What are the Characteristics of the Solution?
The solution will be one that has, by necessity, no single arbiter, nor centralised control. It will be based on globally distributed decisions and consensus on societal norms; decisions will be corroborated by many entities (even across multiple Networks), with agreement across many globally distributed, independent nodes, all developed as open source. It will be decision-making in the commons.
A decentralised web cannot replace the need to continually work together to tend to the needs of one another and uphold fundamental rights, any more than previous iterations of the web, or any other technology, could.
Yet we still need to work toward a solution, along with many other teams and projects facing the same challenges, and the Network itself has design characteristics that make it an excellent candidate for squaring the circle. Globally distributed consensus mechanisms require globally agreed consensus on societal norms, transparency, and decision-making without centralised control. All within the context of a Network that maintains the privacy and sovereignty of personal data.
And again, this is where the randomised, even distribution of data throughout an address space, plus an internationally distributed constellation of nodes, is a primary advantage: it means no one state actor or jurisdictionally bound entity can have a unilateral say on moderating content. It demands a global approach, consensus, and the trust of resource providers, through transparent methodologies and policy that focus squarely on upholding and protecting rights. Nodes, and their operators, cannot be compelled to drop data or act in a certain way: it has to happen through collective, distributed agreement on what works in the interest of the Network and its users.
We may not have all the answers yet, but we must work diligently and responsibly on this, facing up to it directly and in good faith in order to strive toward a solution; failing to do so will have wholly foreseeable consequences for the future of the Network, and unintended consequences for its users. It’s not going to just go away with some quirky legal trick or sleight-of-hand launch tactic, nor through technology alone: because technology doesn’t uphold fundamental rights, humans do.
Useful Links
Feel free to reply below with links to translations of this dev update and moderators will add them here:
Russian; German; Spanish; French; Bulgarian
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!