Tried searching but couldn’t really find any relevant threads.
When I was watching @lightyear's talk from FOSDEM, I somehow got the impression that moderation would be made impossible (the "no one can delete/censor your data" kind of thing). I'm sure I just misunderstood, but I'll ask anyway.
Say, in a hypothetical scenario, I run a forum of some sort on the SAFE Network where people have to register to post. Will it be possible for me to moderate that forum, i.e. "censor"/delete posts that contain stuff I don't want on my forum? For example, someone spamming racist remarks, or worse.
Yeah, my understanding is that it just gives total power to the individual: people are empowered to moderate for themselves by blocking whatever content or users they choose.
Or they can optionally subscribe to moderation lists curated by other trusted people.
There will be different models, from what we already have (centrally collated) to completely user-controlled (as @whiteoutmashups describes).
Users will choose which become most common through popularity, but money will also be able to promote content and sites, using various tactics to manipulate attention.
The question is how good users are at discerning which are really the best and which are dodgy for one reason or another. This is where the hope for decent reputation systems comes in.
In many ways it's not really that different from what we have now, except that there will be more truly decentralised options and it will be harder to centralise control of the messages.
I tend to think of Twitter as a bastion of this kind of user-centric model - on SAFEnetwork there can be lots of similar user-controlled services with no barriers to setup and growth.
So my hope is for more choice and a better balance between commercial power and individuals - if (big if) users are discerning enough to gravitate towards services that serve them rather than manipulate them or farm them for profit.
I think these problems will be solved by collaborative filtering rather than by moderation or censorship.
I will be able to delegate decisions to others that I trust, and if enough of those personally appointed "oracles" agree on something, I will accept it as truth.
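As a rough illustration, the "enough trusted oracles agree" idea is essentially threshold voting. Here's a minimal sketch in Python - all the names and the simple majority-style scheme are my own assumptions, not anything from SAFE:

```python
# Hypothetical sketch of delegating decisions to personally appointed
# "oracles": accept a claim once enough of them vouch for it.

def accept_as_truth(votes, threshold):
    """Accept a claim if at least `threshold` trusted oracles agree.

    votes: dict mapping oracle name -> True/False verdict on the claim.
    """
    agreeing = sum(1 for verdict in votes.values() if verdict)
    return agreeing >= threshold

# I appoint three oracles and require two of them to agree.
votes = {"alice": True, "bob": True, "carol": False}
print(accept_as_truth(votes, threshold=2))  # True: 2 of 3 agree
```

A real system would presumably weight oracles differently and let trust propagate transitively, but the core decision is just this threshold check.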
On the current internet we depend on a lot of implied authority: HTTPS, trustworthy companies behind well-known domain names, app stores that verify the producers and the apps, and so on. (I worded the above in a tongue-in-cheek manner on purpose. No, we shouldn't put so much trust in so many places, but still: the current internet is not the Wild West that SAFE will be.)
So we will need a web of trust, built completely from the ground up, with no central authority.
In fact, I’m sure we’ll have a few different competing webs of trust, and that’s a good thing.
They will be used for a lot of things:
- verifying app installs: whether an app is safe, whether it's okay for it to ask for access to my camera, and so on
- blocking bad content: there's sh*t out there I don't want to stumble on even by accident; even more, I want my little nephew (and my own potential future offspring) kept away from it
- blocking some obnoxious fools from my life
- rating content: I know my friend likes the same music, so I'll sign up for whatever he decides is worth listening to
- following news: whatever gets popular with a certain group of people, I consider worth reading
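To make the "blocking bad content via trusted lists" idea concrete, here's a minimal sketch. Everything in it - the names, the data shapes, the `min_reports` threshold - is hypothetical, not a SAFE API:

```python
# Illustrative sketch: filter my content feed through blocklists
# published by curators I trust.

# Blocklists I subscribe to, each curated by someone I trust.
subscribed_blocklists = {
    "alice": {"spam-post-1", "racist-post-7"},
    "bob": {"racist-post-7", "scam-post-3"},
}

def visible(feed, blocklists, min_reports=1):
    """Hide a post once at least `min_reports` trusted curators block it."""
    def report_count(post_id):
        return sum(post_id in blocked for blocked in blocklists.values())
    return [post for post in feed if report_count(post) < min_reports]

feed = ["cat-video-2", "racist-post-7", "spam-post-1"]
# With min_reports=2, only posts blocked by two curators disappear.
print(visible(feed, subscribed_blocklists, min_reports=2))
# ['cat-video-2', 'spam-post-1']
```

The nice property is that nothing is deleted from the network: each user (or each forum owner) picks their own curators and their own threshold, which is exactly the "user-controlled moderation" model described above.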