Dealing with horrific content or something

Very few node operators are going to maintain their own small, curated lists of hashes. Instead they’re going to opt in or out of large and growing hash lists, so maintaining your own copy is a large and growing overhead.

Add to that the fact that, as David notes, comparing hashes, while fast, doesn’t work, hence their popularity with people who make policy without understanding implementation and assume the adversaries will just hold up their hands and give up.

So you end up using ever more complex mechanisms as the cat and mouse game evolves. Examples already exist to detect images which are very similar, but inevitably there’s a cost in both processing load and false positives. But how hard are they to circumvent? Pretty damn easy IMO, unless you centralise and then you’ve lost the game.
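To illustrate the point about exact hashes: changing even a single bit of a file produces a completely unrelated digest, which is why bit-for-bit hash lists are so easy to defeat. A minimal sketch in Python (just standard-library SHA-256, nothing from the Safe codebase):

```python
# Flip one bit of a file's bytes and the cryptographic hash changes
# entirely -- there is no notion of "similar" between the two digests.
import hashlib

original = b"...example image bytes..."
tweaked = bytearray(original)
tweaked[0] ^= 0x01  # a single-bit change, imperceptible in an image

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()
print(h1 != h2)  # True: the two digests share no useful similarity
```

This is exactly why similarity-based (perceptual) hashing gets proposed instead, with the processing cost and false positives mentioned above.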

Like the breakable encryption debate, I don’t see a solution that satisfies the two camps here, but I do recognise the importance of being in the debate with those wanting to ‘protect the children’, whether through genuine concern and naivety, or to strangle and suppress autonomy and choke off freedoms. To be in this debate you have to have tried to satisfy their desires, to have studied the problem and a wide range of potential solutions, so that if they say, ‘Aha, but you could do X’, you can explain precisely what that would involve and what it would achieve in practice.

I think it’s a mistake to assume MaidSafe are breaking with the fundamentals, or to hold up your hands and walk away, but obviously some will do that.

Where they end up I’d like to know, because this project, David, and all those who have ever worked for MaidSafe have shown that they are unwaveringly committed to the original vision, and there’s still not been another project that comes close in over fifteen years.

11 Likes

In general, I fully share your position that censorship should be carried out at the client level and nowhere else. But I don’t really understand this particular argument, where you write “Lets be real that is never going to happen”. Why is it never going to happen? This is similar to ordinary moderation on Twitter or Facebook, but with additional software: users send links to “bad content” to moderators; the moderators enter the link into software specially written for this and receive a list of chunk hashes that match the hashes on the host (node) side. Couldn’t this process be automated to that degree? It would require additional spending on software development and moderators’ wages, but the task does not seem super-complex or unsolvable.
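A rough sketch of what that automation could look like: split a reported file into chunks and hash each one. Note this is only illustrative; the real Safe Network derives chunks via self-encryption, which any such tool would have to reproduce exactly for the hashes to match what nodes hold. The chunk size and function name here are assumptions:

```python
# Illustrative moderator pipeline: derive per-chunk hashes for a
# reported file. A plain fixed-size split, NOT Safe's self-encryption.
import hashlib

CHUNK_SIZE = 1024 * 1024  # assumed chunk size, for illustration only

def chunk_hashes(data: bytes) -> list[str]:
    """Return the SHA-256 hex digest of each fixed-size chunk of `data`."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]
```

The hard part is not this loop but reproducing the network’s exact chunking and encryption so the output matches node-side chunk names.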

Another aspect of the censorship problem bothers me: the ability to instantly and completely remove all copies of a chunk from the network. Suppose 95% of the nodes agreed to install software to automatically remove chunks with “bad content”. This scenario doesn’t seem all that far-fetched to me. It means the probability of maintaining the integrity of files would be very small, and the network could no longer be considered a secure data repository. I wrote above that network devs (or fork devs) could try to change the network so that, for an additional fee, a user could upload content whose chunks are located only on nodes that are not prone to censorship (the remaining 5%). But it is still a mystery to me whether this can really be implemented, and how best to do it. I would like to hear your opinion on this matter.

1 Like

This really is where the rubber meets the road: What is the nature of evil and what are our individual and social relationships to it? Can’t agree on how to transcend it if we can’t identify it.

Arguments FOR censorship are framed as it being necessary to counter evil.

Arguments AGAINST censorship assume that suppression of individual freedom of speech and conscience is evil.

Neither of these angles can be anything other than emotional in nature (as opposed to rationally discerning) in the absence of a clear understanding of, and agreement on, what makes evil evil, or what evil really IS (other than a rhetorical hot button).

For anyone who’s interested, we can discuss it in this off-topic thread>> https://forum.autonomi.community/t/what-is-the-nature-and-character-of-evil/37126.

4 Likes

It seems to me that, regardless of the MaidSafe company’s or foundation’s decision, the node software will be modified by someone to be able to shun content at the node based on lists (hereafter called shunlists), and that multiple SAFE networks with different rules will exist, because these possibilities are inherently and practically doable. Given that this seems inevitable, I think it’s proper for MaidSafe to have a guiding influence on designs that support these.

I sympathize with the desire to exclude any mechanisms that authoritarians could abuse for censorship, but these mechanisms will be created anyway and adopted by some at some point, and authoritarians will impose whatever laws they want banning or regulating the software, without regard for understanding. So whether or not voluntary shunlists exist at first will not, in my estimation, make a significant difference in their ability to mandate censorship, but it will gain more support from the public. I’m unconvinced that avoiding shunlists at the node would avoid malicious legislation: if the mechanisms for it don’t already exist, they’ll just mandate that they be created.

Do I understand correctly that the crux of whether or not data on SAFE can be shunlisted is that elder nodes enforce verification of whether the designated adult nodes continue to store it? I.e. if elders do not honor any shunlists, then adults that choose to shunlist would be demoted to the point of being kicked out; but if elders choose to honor some shunlists, then they would not enforce the requirement to store that data?

Do I understand correctly that anyone who can access an already-uploaded file’s mapping of chunks (or read a register), which for private files and registers is restricted but could permit multiple users, then knows its chunks’ (or register entries’) hashes and can report them for inclusion in shunlists? If so, then lists of chunks will be created as content is detected into the future, and lists of whole files will not be the only kind. Also, I’d think that the organizations which hold collections of files could, before too long, automate the process of generating chunk lists themselves without too much difficulty.

I’ve been following (to some extent) the progress of hopefully-unstoppable software projects for twenty years, and I’d already come to think that voluntary shunlists that are enforced by hosters/providers/nodes are an appropriate way to counter abhorrent content while respecting freedom. I imagined the lists would be provided by many independent more-trustworthy-than-states groups like how ad-blocker lists are. If involving lists from states is necessary to avoid being destroyed by them, and if these are honored only where they are corroborated by additional independent lists, then I could accept this as an unfortunate survival tactic until humanity outgrows the authoritarian mode.

It does not enable censors to acknowledge that, since node owners inherently must be able to identify chunks by their hash, they can choose to shunlist some; this should be, and inherently is, their choice. Whether or not they get penalized by their network for which content they choose to shun is another question. If MaidSafe does not officially provide more convenient support for shunlisting, then others probably will, if this open-source software becomes widespread.
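As a sketch of what such a voluntary node-side choice could look like, combined with the corroboration idea above (hypothetical code, not anything MaidSafe provides): a node subscribed to several independent lists might shun a chunk only when multiple lists agree.

```python
# Hypothetical voluntary node-side shunlist check. List contents and the
# corroboration rule are assumptions for illustration, not MaidSafe code.

# Chunk-hash lists from independent providers (placeholder hashes).
SHUNLISTS = [
    {"aaaa", "bbbb"},  # provider A
    {"aaaa", "cccc"},  # provider B
]

def shunned(chunk_hash: str, min_sources: int = 2) -> bool:
    """Refuse a chunk only when at least `min_sources` lists agree on it."""
    return sum(chunk_hash in lst for lst in SHUNLISTS) >= min_sources
```

Requiring agreement across independent lists is one way to blunt any single provider abusing its list for censorship.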

That there can be multiple SAFE networks with different rules for different moral philosophies is unavoidable and perhaps desirable (for social evolution, but not for simplicity of adoption). (With the global address space of content-addressing, some integration between different networks could be possible I’d think.)

While I am opposed to laws restricting information, I’m certainly going to resist facilitating truly evil content, within the locus of what I own/control/influence, same as I’d resist facilitating the abuse of hard drugs while still being opposed to laws criminalizing their use. I want tools that are not restricted by others, and I do not want to facilitate vileness. I know that vile content could still be hidden in ways and that my nodes might store a tiny bit of that, but I’m going to do what I reasonably can to prevent it. If I ran a business offering goods or services, I wouldn’t refuse it to unknown strangers even though I know some tiny percent are probably evil and that my product will assist them in continuing to exist, but if I knew that a particular individual was evil then I’d refuse them. Voluntary shunlists that are corroborated by independent sources is where I draw the line - I won’t support anything beyond that.

Having said all this, I do think that filtering at the client sounds like a better way for reasons of technical and jurisdictional simplicity.

About whether work on development should be done pseudonymously: there is a place for that of course, but we should not fool ourselves that we’d have much of a chance of remaining unidentified over time if they decide to use the full extent of their resources to find us. While there are many developers with skills, there will be few with enough familiarity with this project’s complexity, and that small number makes crushing them more feasible. I’m personally concerned about my safety if I were to work on apps for platforms like SAFE, but I decided not to fear exposing my legal identity: they have already known about my interests for a long time, I’d already be in their narrowed-down pools of targets, I’d waste a lot of time dealing with spycraft and worrying that I’d made a mistake, and I’d feel safer knowing that a growing part of the public can support me in meatspace, versus being easily crushed without witnesses because my skills at perfect untraceability still weren’t good enough.

3 Likes

I think you miss the point. The statement is for compliance. It’s up to the user to follow regulations and use the correct client, that is, the official Safe Network client.

It’s mine. I just reiterated the self-evident fact: it’s the design of the network. The client does a lot of work the nodes do not, and they work together to make the network function. For instance, for DBCs to work the client does most of the work, gathering signatures and building the DBC, while the nodes verify that the signatures are valid and write the spent book.

And as such, any filtering implemented in the official client is in fact implemented in the Safe Network

Same for the Node

But for regulations, the foundation and Maidsafe are working with official software, so hacking the software up is not in the scope of the questions being asked of them. Hacked code can be made to do anything, so rather than chasing one’s tail, you have to work with the official code.

It doesn’t matter for the questions being asked of Maidsafe. This issue applies to ALL systems that filter according to the supplied lists.

Upload and download are both contained in the ONE client. To meet the requirements of the questions, it needs to be both up and down. But that applies to the official client Maidsafe/the foundation is implementing; see above about others.

The point is that the answers to the questions have to be satisfied, and it would seem both up and down are required if any filtering is required. Some are looking at stopping the storage of chunks at the Node level, which is UP, and I totally disagree with doing that in the Node. The compromise is to include the UP check at the client level where anything like that is required. Remember, this is for the official client.

Hacked code can occur in the Node software or the client, so that issue is not an obstacle to meeting the regulations behind the questions.

And this is why it is absolutely the wrong place to implement it. Not to mention that someone at Maidsafe would have to be authorised to handle the material and self-encrypt the original files to build the chunk list for the nodes. At the client, though, the whole file exists and can be hashed and then checked against the list.

Anyway, drop the idea of doing it at the Node. Nodes can be hacked as easily as a client, so there is no advantage at the Node. Do it at the client, so a Maidsafe employee doesn’t have to self-encrypt the actual bad files to make the chunk list. The lists are ALL whole-file hashes.

It also causes extra processing at the Nodes, where it’s better to offload that to the client. The client also has the whole file to hash, so it is the obvious place to do it, with no need for a staffer to self-encrypt the original file to make a node list. I am absolutely surprised that doing it at the Node was even considered. (Safe Network software == Node software + client software, and a functional network doesn’t work without both working together.)

Honestly, I cannot believe doing it at the Node level is even being considered any more, other than as an exercise.

The authorities are not going to let a Maidsafe employee hold all the horrific material so they can self-encrypt the files to make a chunk list.

You see, the lists created are hashes of the actual files. The hash is made when a file is determined to be horrific and then added to the list, and the file is never looked at again by anyone in the organisation (or elsewhere) maintaining the list.

The organisation is not going to make a special set of hashes for every company implementing the filtering. They simply require that companies use the whole-file hashes.

That is why it is never going to happen.

Thus the client is the place to do it. The file being checked exists and can be hashed and checked against the list handed out. Even the authorities would demand it be done at the client, which is why I am shocked that doing it at the Node is even being considered.
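A minimal sketch of the client-side check described here (an illustration, not the official client; the function name is an assumption): hash the whole file in a streaming pass and look it up in the published whole-file hash list before any self-encryption or upload happens.

```python
# Illustrative client-side whole-file check against a published hash
# list. Hypothetical helper, not part of any official Safe client.
import hashlib

def file_is_blocked(path: str, blocked_hashes: set[str]) -> bool:
    """Stream-hash the whole file and test membership in the blocklist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest() in blocked_hashes
```

Because the client sees the complete file, it can use the authorities’ existing whole-file hash lists directly, with no self-encryption step needed to derive chunk hashes.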

This is not a technical consideration but a social one, since no one is going to hand out a folder of the actual horrific files that could end up in the hands of another person, let alone be accidentally released through a mishap.

Nah, the modification would be done at the client. Someone cannot just decide to do it at the Node, because the Node only sees a chunk of a file and has no way to identify it as part of any particular file, unless the original file is held by someone who then self-encrypts it to produce a hash list containing the hashes of its chunks. No one is going to be able to do that, so the client is the only place.

Doing it at the client then allows a single network, and the user chooses a client that adheres to their own ethics, their country’s requirements, or a generic set. Simple.

Nope, they just produce the whole-file hash list. Imagine the thousands of filter-software makers all asking them to produce a list using their particular flavour of hashing/encryption. No: they use the whole file and a simple hash, which is the least work for the authorities. This has been done for decades now, and you are not going to get them to make a separate list for you.

4 Likes

There appear to be at least some small possibilities, rather than it being impossible, that lists of chunks could be made, and so nodes could decide to honor them:

  • Investigators or unlucky bystanders could report files they happen upon, and since they accessed a file they now know its chunks’ hashes and can report these.
  • Organizations that have files archived could eventually decide to generate chunks lists.

That’s what they currently do. That doesn’t mean no other organization will ever compile its own chunk list (which indeed couldn’t include chunks for archived content that is not shared).

That would be a deterrent to doing it for all of them, but SAFE has the potential to become a top priority.

I can imagine those authorities being too lazy, but there could be other organizations that would make their own lists from the findings of their own investigations.

I agree this looks better. I still think that eventually, with massive adoption, enough people will be uncomfortable with the prospect of storing even a single self-encrypted abhorrent chunk and would rather shun it, so there will eventually be some interest in compiling chunk lists, modifying the node software, and forking, if necessary, a separate network that honors this. (I know that changing a single bit or pre-encrypting will circumvent lists, but I think people will eventually still want to shun to the extent they can.)

I’ve moreso been looking at raw technical possibilities and long-term eventualities. Your point is well taken about needing to focus on surviving the expectations of contemporary authorities.

1 Like

Hmm… finally! I had to read the whole thread just for this. Though I don’t completely understand the technical side of things, I was wondering how maintaining such lists of hashes could be scalable. Don’t we quickly end up with millions or billions of hashes that need to be compared against and blacklisted? As @dirvine mentioned, change a pixel and you have an entirely new set of hashes to blacklist.

Here’s my shot at this. Sorry for being naive, as I don’t have much idea of how the Safe Network works technically.

Every file (which means the hashes of that file) would carry a weight of 10:

  1. All governments of all countries in the world share a weight of 3
  2. The independent governing bodies of SAFE would share a weight of 3
  3. Commoners downloading a file would carry a weight of 4

When serving a file, SN would also add an option to flag it. If enough people flag the file, then after a certain threshold SN would start showing users a message like “This file has been flagged, do you agree?”. After a further threshold, it would be marked as ‘Dirty’ and nodes would drop it.

Now, this process could be expedited by the votes of governments or governing bodies. There should be some way of identifying them (not sure if this is possible).

Finally, when a file has been dropped from the network, all the users who helped flag it would carry more weight in subsequent grading.
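The scheme above could be sketched as a toy model; the weights and threshold here are just the hypothetical numbers proposed in this post, nothing implemented in Safe:

```python
# Toy model of the weighted-flag proposal: each voter class shares a
# fixed weight, and a file is 'Dirty' once the flagged weight reaches
# the total weight the file carries. All numbers are hypothetical.
WEIGHTS = {"government": 3, "governing_body": 3, "commoner": 4}
DIRTY_THRESHOLD = 10  # the total weight every file "carries"

def file_status(flagged_by: set[str]) -> str:
    """`flagged_by` holds the voter classes that have flagged the file."""
    total = sum(WEIGHTS[cls] for cls in flagged_by)
    return "Dirty" if total >= DIRTY_THRESHOLD else "Clean"
```

Under these numbers, no single class can mark a file Dirty on its own; all three classes must flag it, which is presumably the intent of splitting the weight.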

Sorry if this is the dumbest idea you have ever seen :grinning:. Just curious to know whether this is even possible.

3 Likes

There is a difference between authoring software (free speech) and providing a service.

If Maidsafe or the foundation is offering a service around the software, then it can be held liable for the content on said service.

Otoh, if Maidsafe and/or the foundation simply author the software, and people freely run it on their own without any remuneration to the authors or any agreement whatsoever, then the authors, I believe, have a much stronger position in terms of avoiding any responsibility for content on the network.

The devil may be in the legal details surrounding whether Maidsafe’s authors benefit substantially from, e.g., safecoin valuation.

I would think the cleanest approach would be for Maidsafe to author the code and then, in effect, throw it over the fence for the community to launch and run, with a fair coin launch that does not reward developers.

I doubt that will happen, but throwing it out there.

Alternatively, perhaps the censorship functionality could be written cleanly as a module that is easy for node operators to remove. So perhaps Maidsafe/the foundation launches the network with it in place, and then “oops”, the community of individual node operators removes it. Otherwise, if the system has promise, surely third parties or community members will fork it, and Maidsafe/the foundation will lose out anyway, because the community values a censorship-free network; anything else is watered-down more of the same.

But it’s best if the system is intentionally designed and authored to be censorship-resistant (content-neutral), and that this permeates all aspects, come what may. Anything else is… why bother?

just my thoughts.

6 Likes

My short response to this is that it’s dangerous for the foundation to be held legally liable at all, whether financially or for the content of the network. It just gives governments ammunition to shut down the network, which, if all goes well, should be impossible. This is something Maidsafe should never have agreed to. The SAFE network will, for all intents and purposes, become an overlay for the INTERNET and allow people to become uncensorable, which means they won’t have to lie. Much of our society is made up of people who say they believe one thing and believe another in order to get along in society. They deliberately misrepresent their public belief system to avoid punishment. Whole demographics of society do this. I don’t think the Maidsafe Foundation is taking this into account. When SAFE goes live and people can be truly anonymous and uncensored, they will be able to express what they actually feel and think, which is something very DIFFERENT from what they currently express publicly and from what may be considered morally and ethically, not to mention legally, acceptable.

Moreover, all this talk about fiat on- and off-ramps sounds rather ambiguous to me. What precisely is Maidsafe proposing here? Either you can track who someone is or you cannot. If you can, then their privacy is compromised. If you cannot, then the whole process of KYC and preventing money laundering becomes moot, because at some point you can just convert cash to SNT and create a new anonymous ID. Which is it? Is privacy ensured or not?

As for know-your-business checks for developer payments, I find that extremely fishy. Are you saying that the only developers you’ll give token rewards to will be those who provide their real-world ID in one form or another? What about all the anonymous developers out there who write good code but don’t want to be identified? Lots of good open-source code has been written and contributed by anon devs. Moreover, not all code is written for profit; that doesn’t mean it shouldn’t be rewarded. So again, if only the “above the table” devs get rewarded, that sounds like some pretty fishy politicking by Maidsafe to me.

Many of the things centralized governments want to censor (sexual deviance, terrorism/rebellion, non-conforming religious belief, radicalization, or political embarrassments of one kind or another) are exactly the kinds of things we need a decentralized anonymous internet for, so that they WON’T get censored and we can end up talking about them. We can’t solve anything if we keep driving everything underground and sweeping it under the carpet. If you believe x is immoral, then instead of censoring it, talk about it with those who disagree with you. If we have a problem with political issue y, don’t censor it; talk about it and solve it. If there is an uprising over value z, don’t try to hide it; discuss it openly, because it’s bound to come up again if it isn’t resolved. History tends to repeat like that.

7 Likes

The response from the community here has been a breath of fresh air for me.

The recent Ukraine and Musk threads have been hard to stomach, and I’d been telling myself that the difficulty of intelligent communication is surely related to the general state of communication on the internet. Each person’s tiny sliver of “reality” that they are fed on the slimenet becomes more and more disparate and extreme, and maybe this is the kind of thing that eventually results from that.

So to be reminded of why people are really here is great, and I don’t think we should be too harsh on Maidsafe either. I appreciate David’s reassurances, and it is true that it has become a regulatory and legal hellhole. Investigating all avenues there is clever, as long as no compromise on privacy and data integrity is ever made.

10 Likes

Also, I’ve mentioned it before, but it’s screaming out at me here now: I really think it’d be clever to reach out to the people at GNU Taler. Christian Grothoff seems like an approachable character; he’s at the Polytechnique in Lausanne. A partnership or discussions of some kind would be a great thing to have on record.

The point is: non-blockchain, private for the buyer, and auditable at the seller. This is very elegant. Governments that want taxes must then focus on sellers, getting them to register properly, declare properly, etc. Taler is working and tested in the real world, but it is a very young project.

Maidsafe’s message can then (sincerely) be: oh, we think money laundering is terrible, therefore we’ve been working with our friends here, who have this novel and excellent approach to making sellers auditable while preserving buyers’ privacy.

Here’s a quote which I hope explains my point:

https://nlnet.nl/project/GNUTaler/

GNU Taler is an advanced electronic payment system for privacy-preserving payments, also in traditional (“fiat”) currencies like the Euro and the dollar. Unusually, the entire Taler system is free/libre software. Unique to the GNU Taler system is that it provides anonymity for customers, while delivering various anti-fraud measures. Payments can in principle be made in any existing currency, or a bank can be launched to support new currencies.

9 Likes

That was one of my first thoughts.
Next ones: it’s the developers who can add censorship to the code, so whoever wants it implemented will not do it themselves; they will “ask” the developers.
Also, there is a reputation problem. There is a lot of SJW/cancelling/etc. activity in the modern world, and even if such people can’t physically reach developers, they can cause inconvenience by generating bad PR.

I think the idea is just to move the problem-solving to different people.
And I’m sure they will come up with solutions.
Also, if I understand correctly: 1) regular people can act as such a ramp; 2) people can carry out financial activity without leaving the system.

1 Like

Cypherpunk principles and corporate C.Y.A. are incompatible. The former is rare; the latter is common.

At some point, the project leadership, or more importantly the community, will have to choose. Walking a “middle path” is choosing the latter.

There is a reason both Bitcoin and Monero developers chose to be anonymous.

8 Likes

Out of scope for the project. Satoshi never bothered with this.

All the Safe Network needs to do is have a functioning payment system. If the system is useful, exchanges and other third parties will create on/off ramps and bear whatever legal burdens come with them. And there is always p2p, which really should be the most important route anyway.

4 Likes

I simply don’t understand why this is even being discussed. Nobody goes after Tim Berners-Lee or Al Gore for stuff posted on the Internet, or the Tor Project team for what they provided.

I would tell the regulators it’s an agnostic layer for data privacy and protection and it’s hands off from there.

Why not talk to the people at the EFF or the founders of Tor and get their ideas and support? The EFF have big-time money and lawyers for this kind of stuff. If they see value in your project, they could be a powerful ally.

Frankly, this sudden turn of events is disheartening. I’m firmly sided with the no censorship at all crowd. If people want to create filtering apps and lists on top of the network, more power to them. But keep that off the primary network layer.

9 Likes

Things have changed. Legislators are going after individuals involved in delivering online services, so these are real-world problems. As David says, sticking your head in the sand would be unwise; you have to prepare for, understand, and respond to the challenges as they arrive.

I understand why people are concerned, and I have concerns too, but I can see that these are already live issues, and that the jeopardy for anyone involved in this kind of project is increasing, from multiple directions at once.

If I were working for MaidSafe, I’d be very concerned if they didn’t have an expert working on this, and as a supporter of the project who has followed Heather since long before she joined MaidSafe, I know this is in good hands.

9 Likes

Yes, because they were and are extremely well connected with governments. For example, the World Wide Web is governed by the W3C, whose 450+ governing and standard-setting members include governments. Tor was sponsored by the US State Department.

1 Like

Personally, I think some kind of self-regulation by the users of a thread or channel etc. is the way to go. An algorithm or some sort of government-body oversight is going to be corrupted. An Internet full of dobbers!

If there is no censorship at all, then things devolve quickly into a cesspool.

1 Like

That is an excellent idea; they are surely aware of these issues and would have some ideas. @dirvine

This is why I think doing it at the client (which is an essential part of Safe) is the best compromise, if something has to be done. Maidsafe can produce the official version, the user then has the responsibility for what they upload and download, and the official client can enforce the regulations, satisfying any government requirements. Of course Maidsafe is not responsible for which version of the client a user runs; that is the user’s responsibility, and Maidsafe cannot be held accountable for any hacked code a user or node operator uses. It keeps the censoring off the core protocols and nodes, with all the potential for delays and errors that would bring, and the user chooses.

5 Likes

It has a slight issue: as you say, we cannot control the client. But the data remains on the network, hosted by people, and they may then be attacked?

2 Likes