Dealing with horrific content or something

Undoubtedly

3 Likes

Well, if I were Maidsafe, I would then focus only on coin transfers and private data. Leave the public part for web3 projects (they can store their public data as private on Safe). Or maybe some anonymous community could create a public data layer on top of Maidsafe's Safe Network.

One issue I see here is that all aspects of the network can become illegal over time. If encryption becomes illegal, then this tactic won’t work either.

1 Like

Just want to note that the biggest issue here is governments censoring the development of the network itself. If they can blackmail MaidSafe into including this file censorship in the protocol, then they have too much power and the network isn’t autonomous.

I have nothing against attempts to get the super bad content off the network in general, but judging by this thread there isn’t an agreed-upon way to do that, so what we have is a discussion on how to change the protocol into something that the supporters of the network don’t actually want.

But, that government control isn’t just limited to this, and they’ll try to meddle again in the future. Back when this project started, there wasn’t quite so much overreach, but now there is and it’s set to get worse. Whatever happens with file censorship on network launch doesn’t matter, as long as there is a clear path to decentralisation of development and potential removal of censorship afterwards.

3 Likes

It seems that the discussion has stopped, so I will speak again. I would like to comment on deduplication, since it was mentioned in the topic of the last update. Deduplication doesn’t seem like such a good idea to me now.

We found out that there is a possibility of damage to the integrity of a file if someone creates censorship algorithms. If every owner of the nodes that store all copies of at least one chunk installs software that censors the network, all copies of that chunk can be deleted at the same time, and the file is corrupted. The larger the file, the more likely it is to be corrupted.

I understand that deduplication algorithms can significantly reduce the cost of uploading files to the network, but as a counterweight another algorithm could be created, one that increases the security of data integrity. The idea is not to remove “extra chunk copies”, but to add more chunk copies to the overall list if a file has been uploaded to the network more than once. In this way, the integrity of a file could be better protected from corruption by external censoring software. I understand that this idea is likely to be rejected by the developers, but I could not help expressing it.

If all copies of a file’s data are deleted, then the file can’t be recovered; that much is obvious.
But assuming that all nodes will do this at the same time does not produce useful conclusions.
There is no sense in the network existing under such circumstances.
It is better to assume that some percentage of nodes will damage data while the others act as they should.
From such a model, the required amount of duplication can be derived.
If nodes delete data frequently, then more copies are needed; if such events are rare, then fewer copies are fine.
“Every node stores every file” and “only a single copy in the whole network” are equally bad; something in between is needed.
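As a rough sketch of that model (my own back-of-envelope figures, assuming replicas land on independently chosen nodes and that a file is corrupted if any one chunk loses all of its replicas):

```python
# Back-of-envelope model: a fraction p of nodes silently delete chunks,
# each chunk is replicated on r independently chosen nodes, and a file of
# n chunks is corrupted if ANY chunk loses all r of its replicas.

def chunk_loss_probability(p: float, r: int) -> float:
    """Probability that all r replicas of one chunk land on deleting nodes."""
    return p ** r

def file_survival_probability(p: float, r: int, n: int) -> float:
    """Probability that none of the n chunks is completely lost."""
    return (1.0 - chunk_loss_probability(p, r)) ** n

n = 1000  # e.g. a ~1 GB file split into ~1 MB chunks
for p in (0.01, 0.10, 0.50):
    for r in (3, 5, 8):
        print(f"p={p:.2f} r={r}: chunk loss={chunk_loss_probability(p, r):.2e}, "
              f"file survives={file_survival_probability(p, r, n):.4f}")
```

Under these simplified assumptions the replica count r can be tuned to keep file survival near 1 for whatever fraction of misbehaving nodes you want to tolerate; bigger files (larger n) need either more replicas or a smaller p.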

If censorship of some kind is implemented, it’s implemented. If you dislike it, better to ensure it is not implemented. Messing around with deduplication misses the point: don’t allow censorship, or find efficient ways (e.g. increased replication) to nullify it.

I’m not sure of the maths, but even for large files I suspect it would require a significant proportion of all nodes to opt in to censorship of a particular file to ensure its deletion. Which, in a way, is a form of consensus.

Also, the network won’t be the sole holder of most data, so it’s not necessarily lost. If it is important, it’s likely to be recoverable somewhere.

I’m not arguing for censorship here; like others, I think that any filtering should be application layer and under the control of individuals. My point is that we don’t need to be as concerned as some are, because as things stand it is not likely to be implemented, and it should not be used to divert the team or change the implementation.

The point of this debate is to anticipate legal and regulatory moves which might undermine the Safe fundamentals and counter them.

3 Likes

Please read this carefully peolple!

3 Likes

The threat model should include both digital and physical activities.
The original thread reads like “wow, we found that we have not only digital but also physical threats”.
That does not mean the digital threats are gone, of course.

2 Likes

I suggest Maidsafe start expressing themselves, their goals and aims much more exactly and literally. No more ants, nature, and “soon”. Please pay attention to detail, including spelling and grammar. We don’t need abstract dreams of “goodness”. We need unbreakable code.

(peolple → people)

4 Likes

It is an inherent attribute of the design. The chunk’s address is determined from its content. If a chunk of data is stored, then trying to store the same chunk again always resolves to the same place to store it.

Yes, for private data the hashing of the content includes a salt (the user ID, from memory) to produce a different address. But that is the only difference.

It would require a redesign of this very simple but efficient method of determining the storage address.
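To illustrate the idea (sha256 here just stands in for the network’s actual hash; this is not the exact address derivation the network uses):

```python
import hashlib

def public_chunk_address(chunk: bytes) -> str:
    """Content-addressed: identical chunk content always yields the same address."""
    return hashlib.sha256(chunk).hexdigest()

def private_chunk_address(chunk: bytes, user_salt: bytes) -> str:
    """Salting with a per-user value gives the same content a different address."""
    return hashlib.sha256(user_salt + chunk).hexdigest()

chunk = b"some chunk of data"
assert public_chunk_address(chunk) == public_chunk_address(chunk)               # dedup falls out for free
assert public_chunk_address(chunk) != private_chunk_address(chunk, b"user-id")  # salt moves private data elsewhere
```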

For this discussion the main issue is to find a solution to the regulatory requirements that either minimises the effect or eliminates the need for network changes to be made.

For me, having chunks of a horrific file on the network is not the same as having the file available; the datamap is needed to make the file a file. A chunk is gobbledygook otherwise.

@dirvine If the solution must be node side and not in the client (sigh - it should be in the client), then why worry about the chunks? Be concerned only with the datamap (& meta). If the datamap of an horrific file is refused storage on the network, then that is equivalent to the file not being stored.

This way, if a chunk happens by some quirk to be the same as a chunk in an horrific file, the good file will not be destroyed.

@dirvine Important - correct me if I am wrong, but if you have a file of many chunks, and each chunk is encrypted using the chunks before and after it, then it’s possible to have a file where a few sequential chunks are replaced to make an horrific file. For instance, a good, legal image file that has a portion replaced to turn it into an horrific file (byte count kept the same, e.g. a JPG with a face replaced by an underage one) will see only a portion of its chunks change (plus the one before and the one after, or even the rest). Thus some chunks remain the same.

This means that I could upload perfectly legal images of nudes or whatever, and someone changing a bit of one would result in many chunks not being changed and some changed. The chunk list of the horrific file would then result in the good chunks of the legal file being deleted, because they also exist in the bad file.

This is like the example where a modified file only sees some chunks replaced, except this is for the specific case I outlined.
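To make that concrete, here’s a toy model following the description above (each chunk XORed with a keystream derived from its neighbours’ plaintext hashes; not the real self-encryption scheme): replacing one chunk in the middle changes the ciphertext of that chunk and its immediate neighbours only, so the rest keep their original addresses and would still be hit by a chunk-level block list.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a keystream by hash chaining (toy construction)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def toy_self_encrypt(chunks: list[bytes]) -> list[bytes]:
    """Chunk i is XORed with a keystream derived from the plaintext hashes
    of its neighbours (i-1 and i+1, wrapping around)."""
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    out = []
    for i, chunk in enumerate(chunks):
        key = hashes[(i - 1) % len(chunks)] + hashes[(i + 1) % len(chunks)]
        out.append(bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk)))))
    return out

good = [bytes([i]) * 1024 for i in range(10)]  # ten chunks of a "legal" file
bad = list(good)
bad[5] = b"\xff" * 1024                        # splice in different content for one chunk

enc_good, enc_bad = toy_self_encrypt(good), toy_self_encrypt(bad)
print([i for i in range(10) if enc_good[i] != enc_bad[i]])  # [4, 5, 6] - the other 7 chunks are identical
```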

That would have to be unacceptable, since good files would be destroyed because a horrific file had some of the same chunks.

But in the end, as @happybeing says, the point is the regulatory system and working towards a method that does not require censorship within the nodes. Maybe, if doing it in the client is not acceptable to the devs, then censoring the datamap is the cleaner solution, since it does not destroy encrypted chunks that may belong to other files as well.

1 Like

Many formats allow garbage at the end of the file, by the way.
So you can take a large legal file (.zip), replace its start with an illegal file (.jpg) - and lots of programs won’t complain about it.
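A quick sketch of that (file names here are hypothetical, and it concatenates rather than literally overwriting the start): most JPEG readers stop at the end-of-image marker and ignore trailing bytes, while ZIP readers find the central directory by scanning back from the end of the file, so both views keep working.

```python
import zipfile

# Hypothetical inputs: any valid JPEG and any valid ZIP already on disk.
with open("cover.jpg", "rb") as f:
    jpeg_bytes = f.read()
with open("archive.zip", "rb") as f:
    zip_bytes = f.read()

# The combined file starts like a JPEG and ends like a ZIP.
with open("polyglot.jpg", "wb") as f:
    f.write(jpeg_bytes + zip_bytes)

# A naive type sniff sees a JPEG (SOI marker at the start)...
with open("polyglot.jpg", "rb") as f:
    assert f.read(2) == b"\xff\xd8"

# ...while zipfile still reads the archive, since the end-of-central-directory
# record is located by searching backwards from the end of the file.
with zipfile.ZipFile("polyglot.jpg") as zf:
    print(zf.namelist())
```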

We just don’t know yet; there may be no issue at all node side, and handling it all client side may be fine. It’s a huge investigation really.

Yes, just like changing an actual file, but in our case you need to alter the data map, and data map holders won’t accept your data map. So it should be a non-issue in terms of chunks. But if you could inject a new data map, it could really be anything. How you would do that is not clear, though, unless you directed folk on a website (via something like a register) to a new link and then showed horrific content there.

3 Likes

I posted the Strategy Aims we are building plans around a few days ago, which are important in the context of this discussion, because we all want the project to meet its objectives. We all need a better Internet. My hope is you all manage to have a read of that and let the implications soak in.

A lot has changed in the 16-odd years since this project started. It’s a long time in the history of the Internet. In fact, the Safe Network’s conception is closer in time to the launch of the world wide web than it is to the present day.

I say this to make the point that the context and environment we will be launching the Network into matters. It matters in how it will be received, perceived, used, adopted, attacked, and abused too.

The Network is intended to be a shared resource. A knowledge commons.

Commons can be fragile things that need to be continually nurtured and tended to. This isn’t a new challenge, or even a technological one… it’s sociological in nature. That commons could be a rice paddy, or a drinking-water well.

Yes, the network will be autonomous, but it won’t be all powerful, and it will always have humans defining and organising the parameters of its operation, use, development, and its future.

Those people could be acting with good intentions, or with malice, and their decisions could have foreseeable, or unforeseen consequences. But they are still decisions taken by humans, for the benefit of other humans. The key is how decentralisation works to distribute power rather than concentrate it.

There still are, and there must be, mechanisms for the Network to adapt, change, and course correct over time based on the needs of humans. We aren’t making an indestructible robot, or a virus; we are making a shared resource that is owned by humanity, and it must be answerable to humanity. The question is: how does humanity articulate those questions and demands? That is the problem to be solved.

We are having this discussion in the light of the fact that we do not have the answers yet. Neither do any other teams and projects grappling with the same problem.

But we must work diligently and responsibly on it, and face up to it directly in good faith in order to solve that problem. As it’s not going to just go away with “This One Neat Legal Trick”, or some launch tactic, or technology alone. It’s rolling up sleeves time.

18 Likes

I have an answer.

Build a decentralized, content-neutral protocol/technology, walk away from it (provide no centralized ownership target), and let the chips fall where they may.

It is that, or become just another controlled, regulated, captured, censored entity.

That’s how I see it.

This project needs to decide if it is cypherpunk or not. There isn’t really a middle path.

9 Likes

After their public statements regarding this topic you should already know this is not a cypherpunk project.

1 Like

My stand is that nodes and the network should be neutral, and then MaidSafe should make a client with a system for filtering out downloads for each jurisdiction. Just like TCP, UDP, FTP, HTTP, HTTPS etc. are all neutral!

5 Likes

Wasn’t that the point from the beginning? Maidsafe has taken a wrong turn if they concern themselves with any 3rd party content whatsoever.

Wrong. The Safe Network itself was supposed to be a decentralized collection of computers that just manage chunks of random noise. If the storage technology and communications are ultimately secure, then it may be possible for a knowledge commons or perpetual web to emerge and grow on top of that. Content was only supposed to exist at the client end points. Clients are ultimately responsible for and in control of what they upload/download, and are accountable to their local laws and regs for it.

6 Likes

They aren’t concerning themselves with content, they’re concerned with getting the network built in an increasingly hostile environment.

7 Likes

I’m perplexed… then what is the topic title of this thread? “Dealing with an increasingly hostile environment or something”, or something else?

Maybe read the thread?

2 Likes