Bad people using SAFE network (a group dynamics perspective)

Ok, I know we have addressed before the question of what happens if there is child porn or hate speech on the SAFE network. I think we agreed yes, there probably will be some, just like on the current internet, but most people are good people, so it’s just a few bad apples and overall a positive experience. If that’s how it pans out, well, I wish it was ALL good people, but I would call that acceptable.

The issue I want to address is: what if that becomes our “brand”? It seems like with the anti-censorship platforms that are currently out there, you hit a kind of critical mass of assholes. When that happens, it’s now the official nazi clubhouse and you get bigger and bigger groups of these “people” congregating. On the other side, nice people say “I want nothing to do with this shithole,” so they are driven away and you get an even higher asshole concentration.

I think it would be a total disaster if, after all the hard work and good intentions, we get that kind of group dynamic happening. Right now it’s early. I think we can right our course and not hit this iceberg. That’s about where I run out of ideas. I think we can agree on the goal, but HOW do we accomplish it?

Edit: this topic was kinda inspired by the current convo in the marketing thread, and @Sotros25 has some good ideas:

Edit 2: so I think there are at least two categories of how to approach this problem. First is momentum… if we can start in the right direction with partnerships and friendly dApps, then maybe over time that positive momentum grows and drowns out the negative momentum I was alluding to with “critical mass assholery.” This tactic makes the most sense if we do it early and set the stage, so to speak. The second category is more for the later game. I’ll call it policing (even though there won’t be a group that is the “police”). What fits in this category are things like rating systems and blacklists.

I think both types of solution can be beneficial, and they are not mutually exclusive. That said, we can get the policing going whenever, but we only get one chance to set the stage at the start. Let’s not let that solitary opportunity pass and regret it later when nothing can be done.

9 Likes

Don’t think about what bad people can do, think about what you can do as a good person, friend. And do it :dragon:

For example, I want to see the following headlines in the media:

The community behind the world’s first Perpetual Web is creating a FREE repository for every free book ever published


The Perpetual Web strikes again with FREE streaming platform for emerging new musicians


Is the SAFE Network the New Internet Archive?

If we can raise donations for each of these things and more, we will be able to promote the network as a public good. :dragon:

5 Likes

It’s illuminating that many comments about freenet / i2p / ipfs / zeronet discuss exactly this but with no real insight.

I’ve put some relevant quotes below, but the linked comment pages are really worth looking at for a broad look at the perceptions and ideas around SAFE, not just the bad-actors concept.

freenet

What isn’t a solved problem is: make a darknet/freenet that your mom would feel comfortable using.

If you donate 10 GiB of disk space to Freenet, then you can be sure that at least 5 GiB of that is going to be dedicated to child porn.

Yes, Tor has CP, but I didn’t look for it so I didn’t find it.

i2p

We don’t need more tools, we need tools that non-geeks can use

zeronet

I argue that if you can’t see the benefits, we’ve done a terrible job explaining them to you.


Search is ‘the internet’ for a lot of people, so having a search that doesn’t show that info will be an important milestone.

Moderation / filtering is done ‘on the server’, e.g. the Facebook algorithm, reddit moderators, user flagging of content, codes of conduct. This can’t happen on SAFE, so take ‘many servers doing filtering’ and change it to ‘many clients doing filtering’ and you’d probably be on the right track. I think collaborative moderation is on the right track too.
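To make ‘many clients doing filtering’ concrete, here’s a minimal sketch, assuming content is addressed by hash and users subscribe to filter lists published by moderation teams they trust (all names and structures here are hypothetical, not anything from the actual SAFE APIs):

```rust
use std::collections::HashSet;

/// A filter list the user subscribes to, e.g. one published by a
/// moderation team they trust. Here it is just a set of content hashes.
struct FilterList {
    blocked: HashSet<String>,
}

/// The client applies its subscriptions locally; the network itself
/// never refuses to serve a chunk.
struct Client {
    subscriptions: Vec<FilterList>,
}

impl Client {
    /// Decide locally whether to display a piece of content.
    fn should_display(&self, content_hash: &str) -> bool {
        !self
            .subscriptions
            .iter()
            .any(|list| list.blocked.contains(content_hash))
    }
}

fn main() {
    let client = Client {
        subscriptions: vec![FilterList {
            blocked: ["abc123".to_string()].into_iter().collect(),
        }],
    };
    assert!(!client.should_display("abc123")); // hidden by a subscribed list
    assert!(client.should_display("def456")); // no list blocks it, so shown
}
```

Swapping which lists you subscribe to is the client-side equivalent of choosing your moderators.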

5 Likes

@dimitar and that’s what I mean by momentum :slight_smile: All of these headlines make cool people say “oh, this is a place for cool people, so I should be there too.” In the same way, headlines like “neo-nazis find a new home” could make nazis say “oh, this is a place for me.”

@mav yes, for someone like my mom, “internet” means google something and select a link from page 1, usually the first one. I like the idea of collaborative moderation to decide what results will be on your search page. The only thing is we need to get off on the right foot. If there are more pedos than normal people, then they will upvote the CP to the top of the search and that play totally backfired lol.

It is true that all you really need is one good moderation team, and then you subscribe to them and not the pedo team… still, I would be worried that my mom would not be able to set that up on her own flawlessly right away. That’s why I contend it’s still really important that overall the moderation is good, and not just a bunch of watchlists of places the haters like to hang out. I think I am just repeating my previous argument that critical mass is what we need to watch out for: if the group is on average dishonorable, that’s the direction this ship will sail, even if there are a few good people trying to turn it around.

2 Likes

Does Google run a search company or a moderation company? Where is the line, since they do both and the two cannot easily be separated? (They happen to use a server to achieve it, but search will almost always benefit from some added context about who is doing the searching and therefore what they probably mean.) So I am sure some company will set up search for SAFE and the browser will come preloaded with it, just like browsers such as Chrome and Firefox currently come with a few search engines installed by default.

It may be that search in the SAFE browser is installed as ‘vanilla search provided by company X’ and the user can optionally add or remove other contextualisers, such as their friends or trusted companies, a bit like you can log in to Google to have your search results delivered with additional context.

The best part is the added context comes in the form of data (about what the user prefers) so the context can be saved to the SAFE network and loaded and updated and shared as needed.
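As a rough sketch of what ‘context as data’ might look like (every field and name here is made up for illustration), the preferences could be an ordinary serialisable structure that lives on the network and can be merged from trusted sources:

```rust
/// Hypothetical search context stored as plain data, so it can be saved
/// to the network, loaded, updated, and shared. Field names are made up.
#[derive(Debug, Clone)]
struct SearchContext {
    /// Search providers the browser ships with or the user adds.
    providers: Vec<String>,
    /// Contextualisers (friends, trusted companies) whose preferences
    /// re-rank results, each with a user-assigned weight.
    contextualisers: Vec<(String, f32)>,
    /// Preferences learned from the user's own click-throughs.
    preferred_topics: Vec<String>,
}

impl SearchContext {
    /// Adopt a trusted contact's contextualisers at reduced weight,
    /// e.g. when opting in to a friend's recommendations.
    fn adopt(&mut self, other: &SearchContext, weight: f32) {
        for (name, w) in &other.contextualisers {
            self.contextualisers.push((name.clone(), w * weight));
        }
    }
}

fn main() {
    let mut mine = SearchContext {
        providers: vec!["vanilla-search-by-company-x".to_string()],
        contextualisers: vec![],
        preferred_topics: vec!["decentralisation".to_string()],
    };
    let friend = SearchContext {
        providers: vec![],
        contextualisers: vec![("trusted-reviewer".to_string(), 1.0)],
        preferred_topics: vec![],
    };
    mine.adopt(&friend, 0.5);
    println!("{:?}", mine); // context travels as data, not as a server
}
```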

Hmm yes, see reddit et al for what ‘group’ moderation turns into… it’s gonna be a tough problem to solve and I’m very curious to see what comes up in this topic.

I am very dubious about the hype around AI but I think this is a genuine area where a personal AI assistant to help manage your preferences would be helpful (and I use preferences in a fairly technical sense here, like economists would use preferences in contrast to true beliefs or objective utility).

2 Likes

I need an AI that helps me find the best (adult) porn! I do like the idea of learning about the user, so for my mom it learns she doesn’t want boob pics (I don’t think so anyways hehehe). The only thing I would worry about is that AI takes time to “learn things.” So I might get served the child porn initially, until I tell it no no no, that’s not the kind of porn I meant!

Not a super fan of this. It feels like centralization. Who decides which company’s search is going to be the default? Of course you can remove it and pick a new one, or edit the contextualisers within… but thinking of mom again… she is just gonna use the one that came packaged, as is.

I might be backpedaling a bit here. I started with the assumption that we all know hate speech is objectively bad. What about the porn, though? Some people like me think it’s ethical as long as it doesn’t depict anyone below 18… but I know lots of bible thumpers think all porn is unethical. So you start to get into moral relativism here. Do we ask who is doing the search, and if it’s a white supremacist, help him get to the most hateful sites on the web?

I think you have unpacked a very difficult question here and I am curious what others think… Do we say some things are bad and just stop them? Or do we say nothing is good or bad, there are only preferences that differ between people?

1 Like

We had similar discussions in the past:

And yes, the ideal would be to have very useful, legitimate applications (both for individuals and for enterprises) before the network becomes co-opted by the underworld.

9 Likes

How you remember and find something relevant you said in 2016 is beyond me.

Anyway, these definitely speak to the issue I wanted to bring up. It’s not the few bad actors doing shady things in some back alley that is really concerning. What is concerning is if that becomes our brand and the ratio of bad to good people accelerates towards only bad people.

I can’t think of any totally uncensored platform where that did not happen, but maybe someone can call me on that and point one out. Why is the SAFE network going to be the first one that has all this freedom but doesn’t get pulled into this abyss? Is there something different about the product that will make it immune? I don’t think so, really. I think what’s left is how we pitch it and position ourselves in the market. So I totally agree with you that it’s sorta a race to sell it to good people faster than criminals can sell it to bad people.

1 Like

I thought the same thing! @piluso You are sharp as a tack, brother!

5 Likes

A few things to consider

How much browsing do most people do before they install an adblocker? For most it’s no browsing at all without that critical bit of extra filtering.

How many lists do people use for their ad filtering? Turns out quite a few, usually just the lists which are selected by default, but many more are opt-in and it depends on the user. The filter lists are a very good analogy for the curated data-filtering experience I’d expect to see on SAFE. Unfortunately not enough people a) know to install an adblocker in the first place and b) know that they can and should customise their filter lists.

There are people who use /etc/hosts to do adblocking, as well as pi-hole, which is another level of filtering. The point being, there are lots of layers of filtering available, and I suspect a similar ecosystem will develop on SAFE. The defaults of the most popular browser/platform are what will matter most from a branding perspective. It’ll be important to get that right.

A filter list can also be a category list. For example, you may want to never see images of factory farming, but a journalist might use the very same list to research animal cruelty. A list is never just ‘show this’ or ‘hide this’; different people will use it for different reasons. And this feeds into the blurry line between search and moderation. They end up being very similar.
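A tiny sketch of that point (all names are made up): the list is just a category, and whether it hides or surfaces content is the subscriber’s choice, not the list author’s:

```rust
use std::collections::HashSet;

/// What the subscriber does with a category list is up to them.
enum Mode {
    Hide,     // most users: never show me this category
    Research, // e.g. a journalist: show me exactly this category
}

/// One subscription: a category (a set of content hashes someone has
/// tagged) plus the subscriber's chosen mode.
struct Subscription {
    category: HashSet<String>,
    mode: Mode,
}

impl Subscription {
    fn wants(&self, content_hash: &str) -> bool {
        let tagged = self.category.contains(content_hash);
        match self.mode {
            Mode::Hide => !tagged,
            Mode::Research => tagged,
        }
    }
}

fn main() {
    let category: HashSet<String> =
        ["grim789".to_string()].into_iter().collect();
    let viewer = Subscription { category: category.clone(), mode: Mode::Hide };
    let journalist = Subscription { category, mode: Mode::Research };
    assert!(!viewer.wants("grim789")); // same list, hidden for one user
    assert!(journalist.wants("grim789")); // and surfaced for the other
}
```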

How does advertising fall into the mix of filtering as a ‘genre of data’? It’s a legitimate business, but it’s also potentially malicious and damaging, so how is the ‘genre of data known as advertising’ going to be handled? A really tough question and one that I feel was not adequately addressed or foreseen in the early days of the internet.

If the very first experience of the network involves connecting to your friends and family rather than doing a ‘naked’ search, it would give a chance for the filter lists to be prepopulated with probably-correct biases, which the user could then guide further by their click-throughs and possibly manual intervention. That very first action on the network is going to be interesting. I know I have not browsed the internet for a very long time without adblock, since the very very very first thing I always do is install adblock, before loading a single page. Just in the last two hours my browser has blocked 330 trackers/ads. How is that a reasonable amount of advertising to be exposed to?

There’s a risk that filter lists become the fuel for enforcement agencies to regulate vaults, e.g. ‘if we search your computer and it has any of the chunks on this list you will be fined’. This could lead to ‘hot potato’ chunks, which would give very high churn in that region of the network, which is obviously not desirable and would lead to a kind of geographical centralization in the regions where the law is less stringent.

I know there are many arguments about edge cases and tricks and loopholes (congratulations for being so clever), but returning to the original topic title: the way filter lists are branded and presented may affect the resilience of the network in the broader social environment.

5 Likes

I think you are getting close to selling me on the filtering idea. Couple of things though…

A filter just puts a wall between me and the bad actors. It doesn’t mean there are any fewer of them on the network. Are there any consequences if they are actually a large chunk of the network, as long as they just stay in their own area? I feel like that might still give us the wrong image (even if, after trying it out, you quickly realize how easy it is to just avoid them)… as @piluso was saying, it could hurt us to be perceived as a criminal network by the outsiders we want to sell this thing to.

Secondly, I am not so sure about the assumption that you will be similar to your friends/family in terms of what you want to privately view online. What if you have a weird uncle that likes to browse erotic goat pictures? I do think there needs to be a starting point for sure, and not just ‘naked’ searches at first to see what you like out of a basically random assortment of options. Maybe something like “would you like to answer a few questions about your browsing habits to help us give you more relevant results?” Then, if you opt in, it’s a 20-question survey devised by some clever psychologists to capture how answers to those questions correlate with what people want to view online.

2 Likes

I think the most important goal is the creation of robust categorization mechanics.
If a user wants porn, that is not a problem.
The problem is porn placed in inappropriate places,
for example inside a discussion about quantum physics.
The same applies to spam.
Users should be able to see what they want and not see what they do not want.

3 Likes

Yeah, very interesting, like asking ‘how big is the network’ vs ‘how big is the network for me’.

A lot of what I describe about filtering can be derived from eigenmorality (it’s long and dense but a very good read). There are lots of ways to explain the ‘solution’ to various filtering scenarios but I think reading that article will hopefully give a pretty good idea of what I’m getting at. Happy to elaborate further if you like. It talks a lot about web search.

It may not be the right approach in the end but it’s a handy thing to keep in the mental toolkit anyhow.
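For anyone not up for the full article, here’s a toy power-iteration version of the core idea, with an arbitrary made-up rating matrix: your score is the score-weighted sum of how others rate you, iterated to a fixed point (much like PageRank):

```rust
/// Toy eigenmorality-style scoring: a rating counts in proportion to
/// the rater's own score, iterated until the scores settle.
fn eigenscores(ratings: &[Vec<f64>], iterations: usize) -> Vec<f64> {
    let n = ratings.len();
    let mut score = vec![1.0 / n as f64; n];
    for _ in 0..iterations {
        let mut next = vec![0.0; n];
        for rated in 0..n {
            for rater in 0..n {
                next[rated] += score[rater] * ratings[rater][rated];
            }
        }
        let total: f64 = next.iter().sum();
        for s in &mut next {
            *s /= total; // normalise so scores stay comparable
        }
        score = next;
    }
    score
}

fn main() {
    // Three participants: 0 and 1 endorse each other and barely endorse 2;
    // 2 endorses no one. ratings[rater][rated] is in 0.0..=1.0.
    let ratings = vec![
        vec![0.0, 1.0, 0.1],
        vec![1.0, 0.0, 0.1],
        vec![0.0, 0.0, 0.0],
    ];
    let scores = eigenscores(&ratings, 50);
    println!("{:?}", scores); // 0 and 1 converge well above 2
}
```

The well-known catch, which the article goes into, is that a self-endorsing clique of bad actors can prop up its own scores the same way, so where the seed trust comes from matters.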

4 Likes

If we can somehow get a ‘bully group’ on the network that actively hunts, tracks, and lists bad content and actors, we could try to separate them from normal forums, and as a bonus they’d try to crack into the network to find out which accounts own which IDs.

(That’s a good thing; if they succeed, MaidSafe can patch it.)

However, I fear that will only lead to more attention, and we’d want as little as possible.
Either way, I agree that we as a community must seed the place and steer it in the right direction, even though we still have to figure out a feasible way to do so.

3 Likes

I sort of tried to go this way with some comments some time back. Didn’t go well, not well thought out apparently.

I like @mav’s idea.

I wonder if there is a way for an autonomous network AI to just boot those pricks off our network?

1 Like

Haven’t read everything, apologies if I missed this point.

I see the danger @andyypants highlights but I don’t think it’s as significant as we might think in terms of discouraging participation.

The reason is that SAFE is a brand to us because we’re geeks who are here for particular characteristics. But we over-focus on those.

To most people, SAFE won’t be the thing they encounter, because it is, like the internet, a platform. Most people think the web is the internet, or even that Facebook or Google is, because that’s what they use every day when they go online.

People don’t say “I’m not touching the web, the internet, Facebook, or Google because they host criminals, terrorists, or child porn,” although all of them host all of these in various measures.

The dark web is so labelled because it really only offers one thing: a place to do stuff that most people don’t need and are probably uncomfortable with, and that is often illegal. Since that’s all it offers, that’s pretty much what the name means. It isn’t really a single thing either, but a collective term for anything that is primarily for that kind of activity. SAFE won’t be only for that, or even mainly for that, though like other ‘platforms’ some ‘dark’ stuff will go on.

The fear is that it will be dominated by such users but I doubt that.

IMO most people won’t come to SAFE Network for what we call the fundamentals, but for the services that are enabled by them. I think they will come for a range of services everyone can understand as valuable regardless of the fundamentals.

Take Syncer as an example, since I’ve just been working on it. Install it and you would have a local drive that is automatically backed up to the web, works as fast as your hard drive but is unlimited in size, and from which you can retrieve every version of every file you’ve ever saved to it. (David has envisioned this for a long time BTW, Syncer just looks like it might be a way to get a pretty impressive implementation of this kind of thing going quickly).

There will be many ‘ordinary’ apps of this kind. Irresistible features for everyday use that are nothing to do with uncomfortable stuff. So I think most people won’t even think of that as SAFE Network, but another thing they get by going online.

8 Likes

Reality is what reality is.

In a network that is permissionless, perpetual, and effectively censorship-proof, there will be some bad actors and reprehensible content, as the network is a reflection of society.

I hope/believe that over time, as freedom and prosperity grow, people will become more enlightened, and there will be relatively less “bad content” because there are relatively fewer “bad people.” But that is a long-term, multi-generational goal, and SAFE Network must deal with the here and now.

But as I said in the other thread about youtube/filtering, we cannot stick our head in the sand. There will be both good and bad content, depending on one’s personal or societal definitions, and we must face that and provide (or at least encourage) tools for people to have an enjoyable experience by filtering out content they find highly objectionable.

Basically, I see this as one part messaging, and the other part technology to empower individuals and parents, schools, etc to have a “safe” view of SAFE Network.

Messaging: The SAFE Network is a tool/infrastructure. “Bad” content is regularly sent over phone lines or internet cables. Crimes are regularly committed in cars/trucks. We do not monitor every phone call or have an inspection for every automobile trip. Sometimes the bad must be taken with the good because the good is so very useful to society, or has the potential to be. The SAFE Network lets everything in, but empowers you to view/see only the content you wish to. Then expound about all the SAFE Network benefits, etc, etc.

Technology:

  1. Granular rating criteria. Provide a framework whereby rating criteria can be applied to each piece of content (must make sense for the content-type) and a slick interface for people to rate things. Content-type could be as simple as mime-type, to begin with at least. A few examples of possible granular rating criteria: quality, grammar, obscenity, profanity, sexuality, violence, racism, humor, agreement, nsfw, child-friendly, etc, etc.

  2. A mechanism to reward rating new/unfiltered content as an act that helps the network. It’s debatable whether this is needed, as people may choose to do it on their own, or non-profit orgs or governments could sponsor it. Also, if the network provides an incentive/reward, how does it prevent people from rating badly/wrongly and getting rewarded for it? Interesting to think about.

  3. A way to define/extend/edit criteria for particular content-types. This is possibly a “political” area, so some care needs to be taken with the change-control process.

  4. Provide a filtering system whereby people can easily share and customize filters, including filtering out new/unrated content by default if desired.

Both the rating and filtering tech would ideally be baked into the SAFE API and available for every SAFE App to use.
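A hedged sketch of how ideas 1 and 4 above might fit together (the criterion names and the API shape here are mine, not a real SAFE API): per-criterion scores attached to content, plus a shareable filter that thresholds on them and has an explicit policy for unrated content:

```rust
use std::collections::HashMap;

/// Aggregated community ratings for one piece of content, keyed by
/// criterion (e.g. "violence"), each averaged into 0.0..=1.0.
type Ratings = HashMap<String, f64>;

/// A shareable filter: a maximum tolerated score per criterion, plus a
/// policy for content nobody has rated yet.
struct Filter {
    max_allowed: HashMap<String, f64>,
    show_unrated: bool,
}

impl Filter {
    fn allows(&self, ratings: &Ratings) -> bool {
        if ratings.is_empty() {
            return self.show_unrated; // idea 4: unrated-content policy
        }
        self.max_allowed.iter().all(|(criterion, max)| {
            ratings.get(criterion).map_or(true, |score| score <= max)
        })
    }
}

fn main() {
    let mut parental = Filter {
        max_allowed: HashMap::new(),
        show_unrated: false, // hide anything the community hasn't rated
    };
    parental.max_allowed.insert("violence".to_string(), 0.2);
    parental.max_allowed.insert("sexuality".to_string(), 0.0);

    let mut cartoon: Ratings = HashMap::new();
    cartoon.insert("violence".to_string(), 0.1);
    cartoon.insert("humor".to_string(), 0.9);

    assert!(parental.allows(&cartoon)); // under every threshold
    assert!(!parental.allows(&Ratings::new())); // unrated, so hidden
}
```

Because the filter is just data, sharing and customizing it (idea 4) is the same operation as sharing any other content.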

7 Likes

I’ve been thinking about this sort of thing quite a bit in relation to developing a search app.

If we’re serious about a decentralised network, then search must be decentralised too. The thing that brought me round to this was actually these worries.

I don’t want to be responsible as an app developer for linking to harmful and abusive material, but nor do I think that I, or any other individual, organisation, company or government should be responsible for making decisions about what is censored on a network level. I think that tension is what Facebook and Twitter are struggling with at the moment, and which Google has managed to compromise on in a way that has not offended anybody too much, but which, I personally believe, is causing a lot of hidden problems.

At the moment I’ve actually come round to preferring a model where search might be based purely on connections to contacts and trusted websites, as @mav seems to hint at above, rather than searching for everything and then filtering it.

The logical conclusion of this would probably be that the network would be like a giant social network (or perhaps that’s just the true meaning of a network), rather than a resource in the sky where we expect to find all that we need and desire. As someone who likes Wikipedia and hates social networks, that’s something I’m struggling with at the moment, but it’s an interesting thought experiment that I think might be worth pursuing.

3 Likes

Haha,

Once again you make a good case for the exact opposite of what I’m saying, @danda!

The idea of having ratings baked into the network API is really interesting, but it would certainly put a slightly different spin on the way people see the network (not necessarily a bad thing, though).

2 Likes

search might be based purely on connections to contacts and trusted websites

Sounds like a web of trust model.

In my experience, these are too expensive computationally and in data/memory. Consider that just 6 degrees of separation connects everyone in the world.

Also, if we look at just first-degree trusted connections (e.g. family, friends), how much content have they actually looked at and “approved” somehow? (And isn’t the approving act itself a rating?) Now consider all the rest of the content in the network that they’ve never seen or heard of. It is all essentially unrated as far as you are concerned.

That is solvable by traversing enough connections, but the amount of data quickly becomes huge. Plus, the entire graph of social connections for the web of trust is a potentially huge privacy problem in itself.
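Back-of-envelope arithmetic shows the blow-up (150 contacts per person is an assumed average, not a measured figure):

```rust
/// Nodes reachable within k hops grow roughly as degree^k, which is why
/// ~6 degrees connects everyone, and why naively walking the trust graph
/// per query is hopeless.
fn main() {
    let avg_contacts: u64 = 150; // assumption for illustration
    let mut reachable: u64 = 0;
    for hops in 1u32..=6 {
        reachable += avg_contacts.pow(hops);
        println!("within {} hops: up to ~{} people", hops, reachable);
    }
    // The 6-hop upper bound (~1.1e13) already exceeds the world
    // population, and any trust data carried per edge multiplies
    // that cost further.
}
```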

Lots to solve there.

3 Likes