Dealing with horrific content or something

I disagree. The correct solution is to understand the underlying cause of why someone would want to murder someone. I believe that cause is abuse, trauma, manipulation by peers or environment, etc. From that you can do the necessary work to rehabilitate, and also prevent kids from turning into murderers.

8 Likes

Banning money and videos is like curing the symptoms.

You do realize that a lot of medicine is about managing the symptoms of diseases you can’t cure, and doing that can add decades to people’s lives, right?

The correct solution is to make murder so dangerous that (almost) no one will dare to do it.

But that’s curing the symptoms! You need to fix it so that nobody would actually want to do it.

Seriously, the world is vastly more complicated than you’re allowing for.

4 Likes

Wouldn’t that mean nodes would have to be aware of what they are storing? Maybe I just misunderstood something.

6 Likes

It depends on what the goal is: protecting potential victims, or protecting both victims and attackers.
Of course, a healthy society is better, but that is a broader problem.

It is OK to use such medicines when it is impossible to cure the disease, but not when the doctor is just lazy.

I am not claiming to be 100% correct; thanks for the additions.

2 Likes

The official client can watch download attempts, and if it sees a file with a “criminal” hash, it can report that event directly to a three-letter agency.
Of course, it is possible to remove such a feature from an open-source program manually, or to wrap the file in a password-protected archive, but many users could be caught off guard.
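
A rough sketch of that kind of client-side check (Python, purely for illustration; FLAGGED_HASHES and report_to_authority are made-up names, not anything from a real Safe client):

import hashlib

# Assumption: a set of hex digests supplied out of band by some authority.
FLAGGED_HASHES: set[str] = set()

def report_to_authority(digest: str) -> None:
    # Placeholder: a real client would send this to some reporting endpoint.
    print(f"reporting flagged hash {digest}")

def allow_download(chunk: bytes) -> bool:
    """Return True if the chunk may be handed to the user."""
    digest = hashlib.sha3_256(chunk).hexdigest()
    if digest in FLAGGED_HASHES:
        report_to_authority(digest)
        return False
    return True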

1 Like

I’m not sure what you mean by “official client”, but if a three letter agency or anybody else, including myself, is able to see what my node is storing, people may as well use Google Drive. Not even https://mega.io makes this possible, as far as I know.

The possibility of listing “criminal hashes” and seeing where their content is being stored seems to nullify the whole point of the network to me.

Let’s say I’m storing file.txt containing: “Russia is at war. Putin is a prick. Women should be allowed to drive cars. Assange is a political prisoner.” If this file can be traced to my hard drive, nothing is easier than giving me polonium poisoning or, in my case, since I’m a nobody, just putting a bullet in my head and smashing my drive.

4 Likes

I mean that censorship can happen at the “client” level, not at the “server” level.
If almost no one can download a file, it is almost the same as if no one can store it.
Social networks use a similar approach: instead of deleting a record, they just add a “deleted=yes” field to it.
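
In other words, a soft-delete flag. Something like this (purely illustrative, hypothetical field names):

# The record stays in storage; clients that respect the flag simply hide it.
record = {"id": 42, "body": "some post", "deleted": False}

def soft_delete(rec: dict) -> None:
    rec["deleted"] = True   # nothing is physically removed

def render(rec: dict) -> str:
    return "[removed]" if rec["deleted"] else rec["body"]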

1 Like

Who decides what is “deleted=yes”?

1 Like

I can only imagine the bureaucratic hell of running a corp. these days when it has to deal with things like this. Kudos to the team for continuing to work to push through.

The longer the project goes on the worse things could get politically and the harder it may become for corporate entities to ‘legally’ work on it without having to compromise the original principles of the project.

The worst-case scenario is that Maidsafe is driven to the point where they have to walk away from the project.

Is the best way forward ultimately going to be that some stripped-down or otherwise compromised version of the Safe Network comes into existence, and then forks by non-corporate entities come about and finish the dream? IDK, but maybe it's a possibility.

Thanks for the update super ants! Don’t despair and never surrender!

11 Likes

:sascha:

10 characters.

4 Likes

The answer to both of these is consensus. Global consensus on societal norms.

So let’s take something that has broadly unanimous agreement as being something that should have no place in a civilised society, and that all of us would agree would be highly desirable to keep off the Network: the sexual abuse of children.

The question is, how might we go about that, without that same system being subverted by a regime bent on censorship or suppressing free speech?

Well the Safe Network is in a pretty good position to be able to deliver a solution to this, thanks to the beauty of global decentralised consensus by design.

Think of a potential solution like this…

Should some government entity or monitoring body flag a public hash as representing CSAM, it wouldn't need just some moderation team to agree, as in a centralised model; it would need the agreement, legitimacy (and trust) afforded by thousands of nodes, perhaps hundreds of thousands of globally distributed nodes.

On top of this, perhaps it would also need to be cross-referenced by multiple agencies, or through independent confirmation, community-derived flag lists, or even cross-network/project consensus, before content could be blocked, dropped, or hard-filtered. And this would be through the aggregate agreement of independent nodes.

This is the sort of method through which we could require global international decentralised agreement on what should not have a place on our Network. It will be a high bar indeed.
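
As a purely illustrative sketch (Python; the source names, thresholds, and function names are assumptions, not a proposed design), the kind of multi-source, multi-node agreement described above could look roughly like this:

# Independent flag lists (e.g. different agencies, a community list),
# each mapping to the set of hashes it has flagged.
FLAG_SOURCES: dict[str, set[str]] = {
    "agency_a": set(),
    "agency_b": set(),
    "community_list": set(),
}

REQUIRED_SOURCES = 2           # independent confirmations needed
REQUIRED_NODE_FRACTION = 0.9   # share of voting nodes that must agree

def independently_confirmed(content_hash: str) -> bool:
    hits = sum(content_hash in flagged for flagged in FLAG_SOURCES.values())
    return hits >= REQUIRED_SOURCES

def should_filter(content_hash: str, node_votes: list[bool]) -> bool:
    """Filter only if enough independent sources AND enough nodes agree."""
    if not independently_confirmed(content_hash):
        return False
    if not node_votes:
        return False
    return sum(node_votes) / len(node_votes) >= REQUIRED_NODE_FRACTION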

10 Likes

Pedophiles will exist no matter whether it is allowed or not. Availability of this content helps them vent without harming more children. Nothing is 100% good or bad.

Any filtering (if needed) should be at the application level and client-side. Imagine if the authors of the TCP or HTTP protocols had debated censorship and what content is allowable. Crazy, right?

5 Likes

If client/node software comes pre-configured to reject/warn about an illegal content hash match, that’s fine by me as long as the protocol doesn’t have built-in censorship. Then if a third party takes that pre-configuration out (such as for optimisation of farming), or makes new software for the network without those censorship efforts, I don’t see it as MaidSafe’s responsibility anymore.

3 Likes

Should some government entity or monitoring body flag a public hash as representing CSAM, it wouldn't need just some moderation team to agree

This is not a new idea.

It’s true that most governments would probably accept the idea of every node operator independently choosing what to carry, if you set it up that way. Maidsafe would be in the same position as the Apache project or something. It would take some heat off Maidsafe, and going after node operators would be much harder.

But maybe not all governments would accept that, especially if the resulting system weren't actually effective in suppressing the material they wanted to suppress. Even if they bought in at first, if it didn't actually achieve their goals, at least some of them would come back and ask for more.

And I claim that if the system were actually effective, it would devolve into good old centralized censorship. Here’s why–

The reality is that thousands of node operators aren’t going to be able to individually check whether every hash is legitimate.

  1. They just won’t actually take the time to make those decisions, no matter what. It’s not going to be practical for a node operator to personally review every one of probably thousands of bans, especially not if there are anything like enough nodes to let you call the network “decentralized”.

  2. In the case of “CSAM” (I hate that name), node operators would be legally prevented from checking whether a hash was legitimate. To audit a hash, you would need to have the actual material, and you are not allowed to possess that material at all, for any reason. You Just Have To Take Their Word For It™.

  3. That could easily be extended to non-CSAM cases, too. It already applies to some other categories in various countries: things like classified material, maybe copyright violations, and maybe even some “terrorist content”.

    Governments or whoever are not going to accept distributing all of the material they want suppressed to thousands of independent node operators so that those operators can second-guess the bans.

So, if you were an operator who wanted to filter on your node, you would end up both practically and legally forced to take some authority’s word for what was and was not forbidden and why. The keepers of such lists would be subject to all kinds of pressure to expand their scope. Much of that pressure would be covert. And at least some of the categories cannot be transparent enough for the bans to be audited by any large community.

… and you wouldn’t reach “broadly unanimous agreement” on very much, even if every operator were reviewing every piece of material. There are very, very few categories where you’ll even get close to that. Other than the most extreme cases, I doubt you’ll even get much consensus on exactly what qualifies as “CSAM”.

There also probably wouldn’t be enough authorities to let you choose in a very granular way. In most small jurisdictions, you probably wouldn’t even be able to find a list that forbade all and only the material that was illegal where you were operating… assuming that your own location were even the only jurisdiction that might claim authority over you or that might make trouble for you.

… and if there were that many different block lists available, most node operators wouldn’t even have the time to figure out which lists they should be subscribing to.

In practice, I would expect this to devolve to a situation where there was one “big list” that most operators used, one or two “maverick” lists that just a few operators used, and a small number of operators who simply did not filter. The “big list” definitely would not be transparent and probably wouldn’t be very accountable, either. And it would overblock massively, banning at least everything illegal in any jurisdiction with any meaningful clout.

Individual node exceptions wouldn’t help you here. Unless there were a massive public outcry, most nodes wouldn’t make exceptions even in the most extreme cases. And there would never be a massive public outcry if the material that would cause that outcry had been successfully suppressed.

So there would be a “big list”, or at most two or three. It’s extremely improbable for things to go any other way.

Then we come back to the question of whether the big list would be effective. If the file “terroristic-copyright-violating-child-porn-with-hate-laced-drug-recipes.zip” were on the big list, and most nodes blocked that file, would that actually make the file unavailable to all users of the network, or even to most users of the network?

If the list were effective, perhaps because you'd implemented some kind of voting mechanism, then you would, in fact, have a centralized censorship system. The voting mechanism would be just a fig leaf, even if it required some kind of large supermajority.

If the big list were not effective, perhaps because you'd actually kept real protection against Byzantine failures in the network, then you wouldn't have filtering in any really practical way. At most you'd have lousy performance and unreliability.

There is no actually effective way to do this.

Almost as a side issue, your suggestion would also require each operator to be able to find out what was on that operator’s node, which would be bad. And even if the existing design assumes that, you’d be foreclosing any possibility of hardening things more later on. Maybe you could engineer around that, but maybe not, too. And it still wouldn’t fix the main problem.

4 Likes

I think that some of these issues call into question the immutability of data on the network.

What if a piece of content I uploaded was deemed acceptable when I did so but consensus has now changed? If there is an element of censorship or filtering on the network then could I even recover my own data in that situation?

I had thought that the Safe Network was going to be designed and then released into the wild as an autonomous network. At that point then it could not be stopped as it would be autonomous and decentralized, like bitcoin.

5 Likes

Secure Access For Everyone should mean just that. Not Everyone, except the really nasty people.

EDIT: If this goal has changed, I’m out. Simple as that.

8 Likes

I have always feared the power that Safe Network could give repulsive, nefarious or downright illegal actors but when I have had those thoughts, in a quest for clarity, I have always asked myself the same questions: What would the founders of the Internet, during its formative years, have done about those same concerns if they had fully understood the scale of evil that could/would emerge from their “invention”? Would they have tried to put safeguards in place when designing the protocols? Should they have been expected to entertain such controls? And what would the Internet look like today if they had acted on those concerns? Would it be as pervasive or would they have choked out the possibilities along with stifling the objectionable content?

Safe Network is going to be a replacement for that Internet and I have come to the conclusion that it should respond to those questions the same way the Internet’s founding fathers, either through prescience or plain luck, did: They should provide secure access for everyone and hope order and goodness overall will emerge from whatever chaos is created along the way. Designing the network any other way, I believe, will only hasten its death.

14 Likes

There is no way to ban any person on the network. That is by design.

That would be a public file, not your data. It would need to be something horrendous for node operators to refuse it.

Who says horrendous? Well, authorities from several countries plus a consensus group on the network, including those nodes holding that chunk.

At least that is an option on the table.

8 Likes

The issue I see with node-level censorship is that it could render anti-government material inaccessible if that government decides to run a huge number of nodes. So maybe the censorship could be solely built into the client software? Would that tick the boxes?

If so, the client binaries could be compiled with a filtering config file like:

# If the client receives a request from the user
# to download a URI or upload a chunk whose hash
# is contained in any of these lists, the client
# will take the relevant action.

# Warn the user that the file is on a list.
URIs_of_filter_lists_warn = []

# Refuse to handle the file and warn.
URIs_of_filter_lists_block = ["safe://UKGovFilterList", "safe://IWFFilterList"]  # etc.

# Report to the authorities and refuse to handle.
URIs_of_filter_lists_report = {"safe://UKGovReportAddress": ["safe://UKGovDangerList"]}

# If editable, then the user will be able to add/remove lists in the client interface.
editable = True

Then any public illegal content would have to be uploaded/downloaded solely by people who modify this code.
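
To make the mechanics concrete, here is a rough Python sketch of how a client might act on such a config. Only the config field names come from the example above; fetch_list and send_report are placeholders for whatever a real client would do:

def fetch_list(uri: str) -> set[str]:
    # Placeholder: a real client would fetch and cache the hash list from the network.
    return set()

def send_report(address: str, chunk_hash: str) -> None:
    # Placeholder: a real client would notify the given reporting address.
    print(f"reporting {chunk_hash} to {address}")

def action_for(chunk_hash: str, config: dict) -> str:
    """Return 'report', 'block', 'warn', or 'allow' for a given chunk hash."""
    for report_addr, list_uris in config.get("URIs_of_filter_lists_report", {}).items():
        if any(chunk_hash in fetch_list(uri) for uri in list_uris):
            send_report(report_addr, chunk_hash)
            return "report"
    if any(chunk_hash in fetch_list(uri)
           for uri in config.get("URIs_of_filter_lists_block", [])):
        return "block"
    if any(chunk_hash in fetch_list(uri)
           for uri in config.get("URIs_of_filter_lists_warn", [])):
        return "warn"
    return "allow"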

I’d actually find this useful (and have suggested it before), especially if the laws permit it to be editable by default.

1 Like

Who says horrendous? Well, authorities from several countries plus a consensus group on the network, including those nodes holding that chunk.

Not really encouraging when you consider the increasing reach of totalitarians…

5 Likes