A censorship-free platform for the world

I just posted this to another community regarding Dr. Mercola losing a lawsuit against YouTube for taking down his content.

Nothing new for the community here, but I post it as a high-level summary and reminder of where I hope the Safe Network goes in terms of censorship resistance.

I reiterate my view that a granular rating and filtering system will be critical to avoid getting bogged down in moderation and censorship. Only by providing an automated and well-functioning solution to “protect the children” and weed out spam can a censorship-free system hope to survive and flourish with any kind of mainstream audience.

The world needs a platform where:

  1. content is stored in a decentralized, redundant fashion globally.
  2. anyone can publish anonymously, possibly for a fee.
  3. content persists forever, or at least on a best-effort basis.
  4. anyone can rate content along many different criteria/axes.
  5. so-called moderators have no more power than anyone else; they can only apply ratings to content.
  6. AI automatically rates new/unrated content along multiple axes.
  7. default filters hide unrated and nsfw-type content from the general audience.
  8. anyone can personalize their filters however they wish.
  9. anyone can share their filter(s) as a template for use by others.

With such a system, censorship (and the power it wields) is a thing of the past.
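
To make that list a bit more concrete, here is a minimal Rust sketch of the data model it implies. Everything in it (the type names, the 0-5 scale, the averaging rule) is a hypothetical illustration on my part, not anything specified by the Safe Network:

```rust
use std::collections::HashMap;

/// Hypothetical content address, e.g. a 32-byte XOR-name.
type ContentId = [u8; 32];

/// One rating by one identity along one axis (criterion), e.g. "violence" = 4.
struct Rating {
    content: ContentId,
    rater: String, // an anonymous public key or similar
    axis: String,  // e.g. "violence", "humor", "grammar"
    score: u8,     // e.g. 0..=5
}

/// A per-axis rule: hide content whose average score on `axis` exceeds `max_avg`.
struct FilterRule {
    axis: String,
    max_avg: f32,
}

/// A personalizable, shareable filter template (points 7-9 above).
struct Filter {
    name: String,
    hide_unrated: bool, // point 7: unrated content hidden by default
    rules: Vec<FilterRule>,
}

impl Filter {
    /// Decide whether a piece of content should be shown to this user.
    fn passes(&self, ratings: &[Rating]) -> bool {
        if ratings.is_empty() {
            return !self.hide_unrated;
        }
        // Average the scores per axis.
        let mut sums: HashMap<&str, (f32, u32)> = HashMap::new();
        for r in ratings {
            let e = sums.entry(r.axis.as_str()).or_insert((0.0, 0));
            e.0 += r.score as f32;
            e.1 += 1;
        }
        // Every rule must be satisfied; an axis nobody rated counts as unrated.
        self.rules.iter().all(|rule| match sums.get(rule.axis.as_str()) {
            Some((sum, n)) => sum / *n as f32 <= rule.max_avg,
            None => !self.hide_unrated,
        })
    }
}
```

A `Filter` value like this is exactly the kind of thing point 9 imagines being published as a template for others to adopt or tweak.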

I fear that without such a ratings + filtering system, the Safe Network will either:

  1. Cave to political and legal pressures and implement some kind of human-controlled moderation (censorship) system for deletion/hiding of content. (most likely)
  2. Become so demonized by the press and “leaders” as a place for hate speech, porn, violence, and illegal activity of all kinds that it becomes basically a new dark web, or running nodes is outright outlawed.

I believe that only by having a well-thought-out and fully automated system for creating a default safe-for-work and safe-for-children environment can the Safe Network hope to avoid serious legal/political scrutiny and achieve a worldwide audience while also providing a censorship-free “persistent web”.

If sufficient interest develops in the ratings+filters approach, I would be willing to put together a somewhat more detailed proposal. It’s something I’ve been thinking about since at least Y2K, and I’m still waiting for some site/project to do it right.

14 Likes

I think for us, moderation is not a thing at the network level. Apps may introduce it, though they won’t be able to kill existing data; it just won’t appear in those apps.

So the line between app providers and the network is something for us all to reason about more.

I like these discussions, in particular automated approaches with clear code and logic, as well as AI-based systems where the fine-tuning is made clear.

Lots to think about, but a worthwhile line of thought for sure.

11 Likes

100%

App-level moderation only. I have ideas for a social app that would be moderated (tagged) by its users building a consensus on any post’s content.

At the network level, moderation becomes unfeasible - a huge burden for the network to ensure that public uploads are not in violation of the X, Y, Z laws of hundreds of differing jurisdictions.

6 Likes

I hope that apps would allow users to decide the level of moderation, and maybe which set of moderators to “follow” OR none at all.

Many users would particularly like a level of moderation where at least vile illegal images are blocked. The network technically is not censoring, so that will be an issue for public and some private forums.

6 Likes

Might not be a bad idea to have the new client AI be a first testing ground for filtering browsed, searched, and viewed network data (maybe with an option for what gets filtered, too).

I think part of Danda’s point is that this should be close to foundational. A layer 2 perhaps.

But in my eyes, if this client AI (which I love the idea of, btw) is people’s window into the network, it could be where this filtering is most effectively applied.

Obviously, outside of that, I think most apps whose developers are not anonymous (known individuals, public companies, etc.) will take moderation into their own hands for legal purposes.

But the point is that all content is created equal at the base layer, as the network should not know or care what data it is storing, maintaining, or sending.

5 Likes

Yes, I think it should be a component/API provided by the project for apps to use.

Otherwise, apps will each do their own thing, and we are back to blacklists/moderation/censorship and/or the wild west.

Further, unless I’ve missed something, Safe Browser is still intended to be a thing eventually, and it would perhaps be the foremost candidate to use ratings/filtering APIs.

Imagine if web browsers had had such functionality built in since the ’90s. I think the web would’ve evolved quite a bit differently. Sites like Reddit, YouTube, Facebook, or even Amazon reviews could just hook into the APIs instead of creating their own content rating, moderation, and filtering schemes.
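
As a rough sketch of what such a shared component might expose to apps (the trait and method names here are invented for illustration, not an actual MaidSafe API):

```rust
/// Hypothetical shared ratings component that apps and a Safe Browser
/// could link against, rather than each inventing their own scheme.
trait RatingsApi {
    type Error;

    /// Publish one rating for a content URL along a named axis.
    fn submit_rating(&mut self, content: &str, axis: &str, score: u8)
        -> Result<(), Self::Error>;

    /// Fetch all known (axis, score) pairs for a piece of content.
    fn ratings_for(&self, content: &str) -> Result<Vec<(String, u8)>, Self::Error>;

    /// Does this content pass the user's currently selected filter?
    fn is_visible(&self, content: &str) -> Result<bool, Self::Error>;
}
```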

9 Likes

Filters are needed, sure.
As for integrating them directly into the network - I doubt it’s a good idea.
No one knows exactly what the best filtering system looks like.
Attempting to guess will most likely result in a broken system integrated at the base level.
Keeping filtering separate from the core allows for several iterations of the filtering system, eventually arriving at the best solution.
Remember - people will try to exploit such a system, and it will be very hard to predict exactly how, and to develop protection mechanisms.
Also, SN will host many different kinds of content, some of which may be important to the functioning of services. If a bad filtering system blocks a chunk with important metadata, there will be problems.

4 Likes

Very much agree, and if it were to be done somehow, then who would be responsible for missed illegal content? It would be a real nightmare to manage at the network level, and it’s not really an autonomous system when humans have to maintain the filtering mechanism and lists. And which government do they listen to?

Filtering systems at the application layer are definitely the way to go in my opinion and this allows people a lot of choice as to which system(s) they use or not use.

If I am on an electronics forum, I don’t want to be bombarded with relationship stuff or pron, so I’d prefer there is moderation I can subscribe to. But if someone is on an NSFW forum, then they expect pron etc. Or on a relationship forum, they expect relationship stuff. One size does not fit all.

In addition, this is a worldwide network, and baked-in censorship/filtering means that governments of various types will demand their own version of censorship/filtering.

4 Likes

One more thing to think about, no matter how filtering is implemented:
Ratings are data, which means that either someone should pay for storage or, in the case of free storage, it will be abused to store random data.

4 Likes

Do you mean that every user has “moderation” rights?
If not, then how (socially and technically) can a regular user be promoted to moderator?

So if an arbitrary user rates something as nsfw, will it be hidden by default?

Expanding on the idea about “different kinds of content”:
With hundreds of existing applications come lots of data formats.
But the network stores just plain bytes.
How can a moderator know whether to rate such a 4-byte chunk file as nsfw or not?
B9 AA BC B4

1 Like

Chunks are information-theoretically secure. They don’t hold enough information to give us anything reasonable, so this has to be at the file level, in the app.

4 Likes

I mean that there is no such thing as a moderation role that gives one person more power than another. There are only ratings criteria such as grammar, quality, agree, violence, humor, profanity, sexuality, etc. Any user can rate based on these.

It could be that some persons just love to rate so much that they spend all their time rating new content, in which case their ‘votes’ are more widespread. But this is like someone who clicks up/down on every comment on Reddit. That doesn’t make them a subreddit moderator.

So if an arbitrary user rates something as nsfw, will it be hidden by default?

everything would be hidden by default, except to:

  • the user who created it.
  • anyone who has explicitly disabled the ‘unrated content’ filter. (Apps would present a parental control that makes it so kids can’t do that.)

In this sense, the act of disabling the unrated content filter makes one a moderator. But anyone can choose to do so at any time, so it is not a special power.

btw, ‘nsfw’ might be a filter that is a composite of more granular ratings: violence, sexuality, hate-speech, etc.
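
A toy sketch of how that composite could be derived, with made-up axis names and thresholds:

```rust
/// Per-axis average scores on a hypothetical 0-5 scale.
struct AxisAverages {
    violence: f32,
    sexuality: f32,
    hate_speech: f32,
}

/// "nsfw" as a derived verdict rather than a stored axis: content is
/// flagged if any granular component crosses its (made-up) threshold.
fn is_nsfw(avg: &AxisAverages) -> bool {
    avg.violence > 3.0 || avg.sexuality > 2.0 || avg.hate_speech > 1.0
}
```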

In order to facilitate new content being rated quickly, I envision an AI bot that rates new content on nsfw type criteria.

One place where it may be useful to have some kind of “special” role is in the selection/maintenance/translation of rating criteria, i.e., how criteria get added or removed and applied (or not) to which types of content. To begin with, that might simply be up to the app developer, but one could imagine it becoming more of a group/committee-type decision.

I hope that’s clear. It’s a simple idea overall, but there’s some nuance.

Back up to the high level: on the web today, moderation is used as a form of censorship to control what it is possible for others to see. The premise of the proposed system is to provide a framework that encourages the creation of ratings metadata, enabling a filtering system to provide a default ‘safe’ view of available content while allowing individuals to customize and bypass filters if desired. It is an acknowledgement that “bad stuff is out there” but “we don’t have to view it” if we don’t wish to. The ratings metadata can have many further uses beyond providing a default safe-for-work-and-kids environment.

4 Likes

Apps would define what the individual content objects are, and which rating criteria should apply to them.

For example, a video or audio file might have a criterion for ‘sound-quality’, while an ebook or blog post might have a rating for ‘grammar’.
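
In code, that could be as simple as each app shipping a mapping from its content types to the applicable axes; a toy sketch with invented type names and axes:

```rust
use std::collections::HashMap;

/// Each app declares which rating axes apply to which of its content types.
fn default_criteria() -> HashMap<&'static str, Vec<&'static str>> {
    HashMap::from([
        ("video", vec!["sound-quality", "violence", "sexuality"]),
        ("ebook", vec!["grammar", "quality", "age-level"]),
        ("blog-post", vec!["grammar", "political-bias", "humor"]),
    ])
}
```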

2 Likes

It would not be directly “in the network” in the sense that the network primarily deals with chunks, which are essentially meaningless data on their own.

Rather, I would think the ratings data would be associated with individual files in a FilesContainer or possibly with an entire FilesContainer. It could exist either within a FilesContainer or externally; that is an implementation detail.

It could be implemented as RDF or Solid data. As metadata expressing knowledge, these are a great fit. It should then become possible to query the metadata graph for similar/related content, compute statistics, generate recommendations, all sorts of things, solely using criteria+ratings+content links.
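
For a feel of the shape of that metadata, one rating might flatten into a handful of RDF-style triples; the `rate:` vocabulary below is invented for illustration, not an existing ontology:

```rust
/// Flatten one rating into RDF-style (subject, predicate, object) triples,
/// ready to insert into any graph store. The `rate:` prefix is invented.
fn rating_triples(rating_id: &str, file_url: &str, axis: &str, score: u8)
    -> Vec<(String, String, String)>
{
    vec![
        (rating_id.into(), "rate:target".into(), file_url.into()),
        (rating_id.into(), "rate:axis".into(), axis.into()),
        (rating_id.into(), "rate:score".into(), score.to_string()),
    ]
}
```

A graph assembled from triples like these could then be queried for related content or summarized per axis, as described above.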

Regardless, the point is that the ratings would be associated with meaningful content and should be available for any app that wants to display that content or learn about it or add ratings data.

Such a system does not necessarily need to be built by MaidSafe. The important thing is that it be a library that is available early on for use by Safe Network apps, especially including the Safe Browser, such that the entire ecosystem gets built up atop it. One can liken that situation to the Rust ecosystem, where foundational libraries like serde and tokio exist outside the “standard library” but are used by most apps that need serialization or async, respectively.

A difficulty is that, as a “public good”, there is no immediate profit incentive for a third party to create such a system, except perhaps in a self-serving way, which might then ruin its utility.

It would seem to me that it is absolutely in the interests of MaidSafe to support or directly work on such a project because of its potential to deflect strong and valid criticism that the Safe Network harbors all kinds of vile content.

Consider these alternative rebuttals to the obvious criticisms:

  1. The Safe Network is a content-neutral technology. It is up to App developers and users to decide what will or will not exist on the network. We take no responsibility whatsoever.

  2. The Safe Network is a content-neutral and censorship-free technology. Anyone can store any content they wish and anyone can view anything they wish. However, the platform also provides the world’s richest set of rating and filtering capabilities, such that apps default to a work- and child-friendly view of available content. While Safe Network developers are not responsible for content on the network, the rating+filtering system empowers individuals and families to view only the types of content they are comfortable with, and to be warned about (or, in the case of children, prohibited from viewing) content they might find objectionable.

5 Likes

One more thing I will point out.

This system is about more than just making a default “safe” experience. It is about adding rich metadata to content in a way that’s never really been done well before.

Let’s look at a few popular content rating systems in terms of their rating criteria (axes).

Facebook: Like (yes, or abstain).

Reddit: Promote (up or down: +1, -1).

Amazon product reviews: Stars (1-5).

In general, we can say that almost all sites today use a single axis, which requires the person rating to distill all their impressions into a single value. In doing that, a lot of valuable information is lost.

We have an opportunity to collect this lost information into a machine readable format. Such information should be valuable to both manufacturers and content producers.

Let’s say I am reviewing a chainsaw on an Amazon-type site. What might be some good rating criteria? Well, they are specific to the type of item. Perhaps: power, ergonomics, quality, price. If I’m rating a movie, they might be: plot, humor, impact, quality, sexuality, violence, profanity, acting, soundtrack, etc. If rating a post or comment on a social media site, they might be: agreement, quality, grammar, profanity, hate-speech, humor, spam, sexuality, age-level, political-bias, etc.

Of course, many people won’t rate at all, and some might rate along only one axis. But some may find it fun to rate, or, if they feel strongly about something, may take the time to fill out all available axes. Now we are collecting very granular information that can be used as input for filters, but that can also be used by manufacturers to improve their products, film-makers to improve their films, commenters to improve their comments, and so on.
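
A sketch of what aggregating those partial, multi-axis “surveys” might look like; the names and the (average, count) summary are hypothetical:

```rust
use std::collections::HashMap;

/// One rater's (possibly partial) survey: only the axes they chose to fill in.
type Survey = HashMap<String, u8>;

/// Collapse many partial surveys into a per-axis (average, vote count).
/// Axes a rater skipped simply contribute nothing.
fn aggregate(surveys: &[Survey]) -> HashMap<String, (f32, u32)> {
    let mut totals: HashMap<String, (f32, u32)> = HashMap::new();
    for survey in surveys {
        for (axis, score) in survey {
            let e = totals.entry(axis.clone()).or_insert((0.0, 0));
            e.0 += *score as f32;
            e.1 += 1;
        }
    }
    // Turn sums into averages, keeping the vote count alongside.
    for (sum, n) in totals.values_mut() {
        *sum /= *n as f32;
    }
    totals
}
```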

And of course for researchers to write papers about. :wink: And companies will spring up that process, summarize, and repackage these ratings in all kinds of ways. Basically a new industry.

Essentially we are attaching an optional survey to every single post, comment, file, “thing” on the network. And then providing a tool for people to browse the network with their agent filtering based on the survey results.

Now we still have the problem that much content, perhaps the majority, will go unrated by any human being. So how can “safe” but unrated content be surfaced? That is where AI can come in, to provide some initial ratings by checking for obviously nsfw content. Probably multiple rating bots would arise, written by different parties. It could be that only “trusted” bots are allowed to give out these initial ratings, so there is a potential lever of power there that would need to be considered carefully.

9 Likes

Such an idea works on the assumption that people will try their best to make correct ratings.
But other options are possible:

  1. Just as on forums some people upvote almost every message, there will be people in such a system who prefer quantity over quality.
  2. Some people will make intentionally wrong classifications; the reasons are not very important.
  3. Since identity in a decentralized system is “cheap”, people can make bots that overwhelm the system with wrong ratings.

4 Likes

Yes, rating bots would be a concern.

My first thought is that we shouldn’t let perfection be the enemy of the good. Of course the system will not be perfect, but it could be an improvement over doing nothing. As is true of any system.

And the alternative is what? Give up?

Rating systems have proven useful on social media platforms even though bots are possible there as well. Reddit might be the closest analogy, where signup for a new account is super easy and anonymous. As far as I know, bots haven’t taken over the ratings there, but then I’ve never really looked into it either.

A small fee or proof-of-work could be required for each rating to disincentivize automation. I would tend to think the PoW would be the better starting point, as anything that discourages a human from rating is detrimental to the system.
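
A toy sketch of the PoW variant: each rating carries a nonce whose hash meets a difficulty target, so one human rating costs milliseconds while bot-scale rating gets expensive. The hasher here is std’s non-cryptographic DefaultHasher, used purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Search for a nonce whose hash over (rating, nonce) has `difficulty`
/// leading zero bits. DefaultHasher is NOT cryptographic; a real system
/// would want SHA-256 or a memory-hard function.
fn mine_nonce(rating_bytes: &[u8], difficulty: u32) -> u64 {
    let mut nonce: u64 = 0;
    loop {
        if pow_ok(rating_bytes, nonce, difficulty) {
            return nonce;
        }
        nonce += 1;
    }
}

/// Verification costs a single hash, so honest raters pay a one-off cost
/// per rating while everyone else can check it almost for free.
fn pow_ok(rating_bytes: &[u8], nonce: u64, difficulty: u32) -> bool {
    let mut h = DefaultHasher::new();
    rating_bytes.hash(&mut h);
    nonce.hash(&mut h);
    h.finish().leading_zeros() >= difficulty
}
```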

As for human bad actors… well, they only get one “vote” per criterion per object. It’s really no different from any social media site. If there are only one or a few “votes”, such individuals can have an outsized impact, but as the number of votes adds up, a group consensus starts to form. I think we have to assume that the majority of humanity is “basically good”, else all hope is lost, no?

edit: it could also be that filters have a “minimum number of ratings” setting that can be applied to each criterion. So that’s a lever for adjusting the default filter-set with regard to nsfw ratings consensus.
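
That setting might look like one extra field on each filter rule (hypothetical names):

```rust
/// A per-criterion rule with a consensus requirement (names hypothetical).
struct AxisRule {
    max_avg: f32,     // hide once the average exceeds this...
    min_ratings: u32, // ...but only after this many votes exist
}

impl AxisRule {
    /// Below `min_ratings` the axis is treated as unrated, so a single
    /// early bad-faith vote can't hide content on its own.
    fn hides(&self, avg: f32, count: u32) -> bool {
        count >= self.min_ratings && avg > self.max_avg
    }
}
```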

3 Likes

I did not look into it in detail either, but my guess is that admins ban bots by IP and remove their votes.
I saw such a situation on GitHub (signup is easy there too): a person starred my repository; I looked at his profile, and he had made 100+ stars within a day or so of registering. The next day the star was removed from my repository (most likely alongside its “author”).

Maybe.
But if the PoW takes 1 second to calculate, it means one computer can make 86,400 wrong ratings per day.

A filter can have a list of trusted voters in it.
But that would mean disaster for the default filter.
However, it may be fine for custom-made filters.

I think that spam bots are a more general problem for the entire network. Probably the ratings system would rely on whatever incentive mechanism keeps nodes from being filled up by spammers.

In general though, both black hats and white hats will exist. Who will win… I can’t say for sure, but I think we shouldn’t just give up.

You are listing problems, Vort… that’s helpful to a degree, but do you also have ideas for solutions?

If not a ratings+filtering system… then what do you propose?

Or just let the Safe Network become darknet v2? A high-tech 4chan?

4 Likes