Update 16 June, 2022

Following on from the update two weeks ago, we thought it would be helpful to dig into some more of the considerations and reasons why we need to be thinking about horrific content on the Network, what effect it might have, and if there is a path to squaring the circle on tackling it in a way that upholds fundamental rights, and is resistant to censorship.

General Progress

First up, we’re delighted to welcome Benno (@bzee) to the team on secondment from Project Decorum. Those active here, or who attended our hackathon way back when, will know Benno and his work, and I’m sure you’ll agree this is very good news. Benno is acquainting himself with the codebase and also looking at item 2 on our progress update this week, which is CPU multithreading.

A common assumption is ‘multi-thread good, single-thread bad’ when it comes to performance, but that’s simplistic: it only holds if you actually need multithreading for concurrency. Often we don’t, especially since CRDTs are eventually consistent, and the multithreaded implementation in some of the crates we use appears buggy. In fact, this could be the source of the more perplexing bugs we’re seeing. So we’re analysing our flows to find where multithreading is genuinely necessary, so we can test the effect of moving to single threads where it’s not.
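To illustrate the kind of comparison we mean, here is a minimal, hypothetical sketch (not the Network’s actual runtime setup) that times the same batch of spawned tasks on a multi-threaded Tokio executor and then on a single-threaded one; the task body, counts, and feature flags are placeholder assumptions, not our real flows:

```rust
// Assumes the tokio crate with the "rt", "rt-multi-thread", "macros" and
// "time" features enabled. Purely illustrative workload.
use std::time::{Duration, Instant};
use tokio::runtime::Builder;

async fn handle_msg(i: u64) -> u64 {
    // Stand-in for CPU-light, I/O-bound message handling.
    tokio::time::sleep(Duration::from_millis(1)).await;
    i
}

fn run_on(rt: tokio::runtime::Runtime, label: &str) {
    let start = Instant::now();
    rt.block_on(async {
        let mut tasks = Vec::new();
        for i in 0..1_000 {
            tasks.push(tokio::spawn(handle_msg(i)));
        }
        for t in tasks {
            t.await.unwrap();
        }
    });
    println!("{label}: {:?}", start.elapsed());
}

fn main() {
    // Multi-threaded executor: worker threads plus cross-thread synchronisation.
    let multi = Builder::new_multi_thread().enable_all().build().unwrap();
    run_on(multi, "multi-thread");

    // Single-threaded executor: still concurrent, but no cross-thread contention.
    let single = Builder::new_current_thread().enable_all().build().unwrap();
    run_on(single, "current-thread");
}
```

The point of a test like this is simply that concurrency (many tasks in flight) doesn’t require parallelism (many threads); where the work is I/O-bound, a single-threaded executor can perform just as well while removing a whole class of synchronisation bugs.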

Meanwhile, @yogesh continues his investigations into a sled DB replacement. Cacache still seems the best so far, although Yogesh has been extending the criterion DB benchmarks to the rest of the alternatives, and some of the results are striking. The rocksdb crate, a Rust wrapper around the C++ implementation of Facebook’s RocksDB, seems to offer ~10x faster read/write performance than the quickest alternative (and sled), with a plethora of under-the-hood optimisation options. The team is currently weighing the benefits against the negatives (being a C++ dependency, it needs Clang and LLVM to build) before deciding whether to switch.
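For the curious, the benchmarks follow the usual criterion pattern. Below is a rough, illustrative sketch of that shape, assuming the sled, rocksdb, criterion, and tempfile crates; the key/value workload and names are placeholders rather than our actual benchmark suite:

```rust
// Hypothetical put+get micro-benchmark comparing sled and rocksdb.
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_put_get(c: &mut Criterion) {
    let tmp = tempfile::tempdir().unwrap();
    let value = vec![0u8; 4096]; // 4 KiB payload

    // sled: pure-Rust embedded DB.
    let sled_db = sled::open(tmp.path().join("sled")).unwrap();
    c.bench_function("sled put+get", |b| {
        let mut i: u64 = 0;
        b.iter(|| {
            let key = i.to_be_bytes();
            sled_db.insert(key, value.clone()).unwrap();
            sled_db.get(key).unwrap();
            i += 1;
        })
    });

    // rocksdb: Rust bindings over the C++ library (needs Clang/LLVM to build).
    let rocks_db = rocksdb::DB::open_default(tmp.path().join("rocks")).unwrap();
    c.bench_function("rocksdb put+get", |b| {
        let mut i: u64 = 0;
        b.iter(|| {
            let key = i.to_be_bytes();
            rocks_db.put(key, &value).unwrap();
            rocks_db.get(key).unwrap();
            i += 1;
        })
    });
}

criterion_group!(benches, bench_put_get);
criterion_main!(benches);
```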

Safe as a Grand Commons

The Safe Network has the form of a decentralised, autonomous network of nodes that simply do their job of handling and serving data as requested by clients. But its function is to serve individuals and humanity, as described through the project’s objectives:

  • Allow anyone to have unrestricted access to public data: all of humanity’s information, available to all of humanity.
  • Enable people to securely and privately access their own data, and use it to get things done, with no one else involved.
  • Allow individuals to freely communicate with each other privately, and securely.
  • Give people and businesses the opportunity for financial stability by trading their resources, products, content, and creativity without the need for middlemen, or gatekeepers.

Its form is defined by its intended function, which also then informs the strategy to deliver, nurture, and support it to meet its objectives in the environment it will be launched into.

We have to be cognisant of the fact that we are not launching into a vacuum, or as some lab experiment; nor, in the face of this, does the technology stand as a neutral entity. It is a response to a web which has been overrun by surveillance business models, the abandonment of privacy, and rampant human rights violations. It’s also worth noting the history of the incumbent internet monoliths that started their lives with the false assumption that they were merely neutral tech stacks, and what then became of them.

The Network, when it comes to public data, is intended to be a shared resource, a grand commons, allowing “anyone to have unrestricted access to public data: all of humanity’s information, available to all of humanity.”

Commons are resources that are accessible to all, and are held and maintained for the collective good, be that public or private. That could be natural resources, or an environment, or another resource that’s administered not by a state but by the self-governance and principles of the community which it benefits.

In the case of the Safe Network, that’s public data, but also the infrastructure for private data and secure communication.

Commons are fragile things that need to be continually nurtured and tended. This isn’t a new challenge, or even a technological one… it’s sociological in nature. That commons could be a rice paddy, or a drinking-water well. All fine and serving everyone, until I decide I’d like to drain my paddy—the lowest on the hill—or that the well would be a mighty convenient place to dump my trash.

We have, of course, designed-in mechanisms to cope with bad behaviour of nodes and how these are handled by the Network in a decentralised way. This is vital so the Network can protect itself from bad actors, and hostile threats. These mechanisms are there, when we trace back their trajectory, to serve the objectives of the project and the needs of the humans using the technology: security, privacy, sovereignty, and access to a shared global resource.

But it’s right to understand and acknowledge that threats to the Network don’t come only from malicious node operators: attacks (even those that are reputational or Sybil in form) can be waged from the client, or uploader, side too. And when the security model of the Network relies on a continuous inflow of data, that again highlights the importance of accessibility, and how reputation supports utility, which in turn supports resilience.

So we have to explore and dutifully examine the options for defending the Network from the worst kind of content, and how we do that in a decentralised way that respects human rights and is resistant to the whims of hostile state actors.

While the client side is an obvious starting point for filtering unwanted content or communications, and vital for protecting individuals, supporting communities, and solving the “welcome to hell” problem, we need to explore solutions from the node side as well.

Why is this? Because, as the present-day legal and regulatory landscape will tell you, any issues around moderation and liability always escalate until they reach the payment or storage layer, which in this case means the node operators, the core developers, and the economic end-points.

Or it all gets pushed back onto the app and client developers, who are then liable for content they have no control over; once again the ecosystem end-points and on-ramps are vulnerable, the utility and accessibility of the Network dry up, and so do its resilience and security.

So there still are, and there must be, mechanisms for the Network to adapt, change, and course-correct over time based on the needs of humans. We aren’t making an indestructible robot, or a virus; we are making a shared resource that is owned by humanity, and it must be answerable to humanity. The question is how humanity articulates those needs and demands. That is the problem to be solved.

If we seek to improve on what came before, in building a new web that positively impacts humanity, then we must pursue an approach based on cooperation and broad consensus building. Not only will this help curb the tendency to overestimate the extent to which technology can be a solution, it also demands checks on the power that would see creeping policy do the same.

Accountability in that pursuit starts with acknowledging that tendency, proactively assessing the risks of harm, and designing in governance structures with the aim of mitigating them.

What are the Characteristics of the Solution?

The solution will be one that has, by necessity, no single arbiter, nor centralised control. It will be based on globally distributed decisions and consensus on societal norms; it will have decisions corroborated by many entities—even across multiple Networks—with agreement across many globally distributed, independent nodes, all developed with open source. It will be decision making in the commons.

A decentralised web cannot replace the need to continually work together to tend to the needs of one another, and to uphold fundamental rights, any more than any previous iteration of the web, or any other technology, could.

Yet we still need to work toward a solution, along with many other teams and projects facing the same challenges, and the Network itself has design characteristics that make it an excellent candidate for squaring the circle. Globally distributed consensus mechanisms require globally agreed societal norms, transparency, and decision making without centralised control, all within the context of a Network which maintains privacy and the sovereignty of personal data.

And again this is where the randomised, even distribution of data throughout an address space, and an internationally distributed constellation of nodes, is a primary advantage: it means no one state actor or jurisdictionally bound entity can have a unilateral say on moderating content. It demands a global approach and consensus, and the trust of resource providers, through transparent methodologies and policy that focuses squarely on upholding and protecting rights. Because nodes, and their operators, cannot be compelled to drop data or act in a certain way: it has to come through collective, distributed agreement on what works in the interest of the Network and its users.

We may not have all the answers yet, but we must work diligently and responsibly on this, and face up to it directly and in good faith in order to strive toward a solution; failing to do that will have wholly foreseeable consequences for the future of the Network, and unintended consequences for its users. It’s not going to just go away with some quirky legal trick, some sleight-of-hand launch tactic, or through technology alone: because technology doesn’t uphold fundamental rights, humans do.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

50 Likes

First. I hope :crazy_face:

11 Likes

Yolo Yolo! Second for a millisecond :stuck_out_tongue:

11 Likes

Thanks to all involved for the update.

Normal service is resumed.
We apologise for any inconvenience.

“Time flies like an arrow but fruit flies prefer a banana”

13 Likes

Seems like everything nowadays circles around censorship…

What about all the other aspects of the network? No updates on anything else?

Seems like the network’s dev has come to a full stop to deal with censorship regulation.

9 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :racehorse:

8 Likes

Yaaaas, not even on the podium but delighted to be here!! As always, thanks to the team, and now to read :slight_smile:

7 Likes

No Engineer is involved in the discussion past a fleeting glance. It’s still all heads down in Engineering :wink:

18 Likes

Glad to hear engineering is doing what they do best :slight_smile:

How is the crypto blood bath affecting financing for the team?
Are sales of crypto funding salaries, or is there cash in the bank?

I know a lot of people and other teams are hurting right now.

11 Likes

After many years of following this project through ups and downs, seeing timelines stretch out to near infinity, I never lost hope the project would eventually come to fruition because it’s what the world needed. Now I see the project is seemingly about to destroy itself by bowing to the whims of “global consensus” and censorship. The massive change in messaging from even a couple of months ago has shown me the team has lost their backbone and has decided they are now responsible for the world’s problems.

It is not a solvable problem from the node side without destroying the project, and it should never even be considered, even for “play acting” for the authorities that be. I see the project teetering on the edge of collapse, and unless they pull back, I feel a good portion of hodlers and long term followers of the project will soon abandon it.

12 Likes

I hear and agree with what you are saying, but when I talk to my wife and the obvious child abuse issues come up, she usually comes to the conclusion Safe is bad and wants nothing to do with it.

So on this one we have to put our trust in the team to find a solution that is going to work while upholding the project fundamentals.

Things are going to get interesting!!

7 Likes

I love a good old no-rules wild west as much as the next John Wayne, but I agree with this change of tack.

A 100 percent libertarian dark network with 100,000 users, or an imperfect version with 1 billion?

For me the potential economic paradigm shift that this network could bring was always the exciting thing.

The ability to move away from a narcissistic attention economy to one that pays people for useful applications was always an exciting and much needed prospect.

  1. Can whistle-blowers still blow the whistle without fear of identification?
  2. Do we move away from the attention economy to a better model?
  3. Can I still communicate without eavesdroppers?
  4. Can I still own my data securely and privately?

If these kinds of things are still true, I can’t see an issue with some filtering?

Surely with a decentralized network we can program some kind of management into it. Why can’t the network allocate nodes to help with the filtering decision process: what is and what isn’t filtered? Those nodes would get extra payment for the month before a churn event which allocates new loads. Would the ants make the correct decisions if there were enough of them?

16 Likes

Thanks for the update Maid team!

re: multi-threading … Wasn’t most of that removed when going to async/await code? Could definitely be the source of some problems.

RocksDB is interesting … but IMO the dependency isn’t worth the trouble I bet. Keep it all in Rust! :wink:

re: content management (censorship), and stuff … I don’t mind what Maidsafe has to do here (within some limits), but am very concerned that the cost in time to beta is going to be seriously affected by all of this.

Is it possible that this “stuff” can be done in-beta? That is to say, after beta launch. The legal issues could then possibly be deferred, as the company can say: well, it’s in beta, we haven’t finished it yet, it’s still being tested.

We need to get to launch and I fear that these concerns over defining how to deal with bad content will go on forever as there is no line in the sand for what is acceptable.

Cheers for the continued hard efforts!

12 Likes

Yes, Yes, Yes, and Yes.

24 Likes

I don’t know how many here now will know @bzee, but this is great news. Benno is an expert programmer and a great guy. He helped me a lot with my Safe related projects so he’ll add significantly to the development team. Good luck Benno.

27 Likes

I think the team should concentrate on making everything work and leave censorship behind.

The world will use it for better or worse, but censorship shouldn’t be part of the equation. The network should be its own entity without limits or walls.

If we start with censorship now, one day it won’t be any better than what we have right now.

Let it flourish and be free; it might not always be pretty, but it will be there flying high with the winds of liberty.

So how far are we really from a stable test network? Are there still so many unresolved bugs preventing a test? It was November last year when we were so close to having a proper test.

Are things just not functioning as expected? I don’t think I can take another do-over where DBCs are shoved in the trash for yet another technology.

4 Likes

Thx 4 the update Maidsafe devs

:clap: :clap: :clap: @bzee for helping out with the project

There are so many things in our universe that humans can’t censor…

Keep hacking super ants

7 Likes

I find myself in full agreement with TAJ here

And here also…

Taking myself outside for a good shaking… Something’s no right…

10 Likes

Yes, Yes, Yes, and Yes is music to my ears. Great update SAFE. Once again the technical stuff is sounding like optimization, which is great. Where are the current technical roadblocks?

8 Likes