Unethical uses of the network and "safe police"

It may be that censorship has some ground to stand on, i.e. there is a demand for censorship. The best censor is the one inside your head. Technical measures should help that censor.

The “100% anti-censorship system” looks like a pendulum swing from over-censorship to under-censorship, ignoring the optimal value. I imagine that even if, by some magic, the whole world’s communications were based on this SAFEnet (or a similar “0% censorship approved here” solution), there would be another pendulum swing back to over-censorship.

I prefer some optimum censorship level. Let’s call it “Personal censorship with a default policy”:

  1. By default you see content that is mostly approved as good for the local society;
  2. You can turn on a “100% free, uncensored” mode if you want. Freedom is opt-in, not opt-out;
  3. You can set up your own filtering according to your tastes;
  4. You can entrust the filtering to an outside entity if you want;
  5. Receiving any published information is not a crime in any case;
  6. Advertising some information may be a crime. Publishing things so that they may be received by those who have not opted in to freedom may be considered advertising.
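The policy above can be sketched in code. This is a minimal, hypothetical client-side filter, not anything SAFE specifies: the tag vocabulary, class name, and “default policy” label are all assumptions for illustration. The key point is that filtering lives entirely in the user’s own client, while the network stores and serves everything.

```python
from typing import Callable, List, Optional, Set

# Assumed label that a default local policy would attach to content (rule 1).
DEFAULT_BLOCKED_TAGS = {"flagged-by-default-policy"}

class PersonalFilter:
    """Hypothetical client-side filter; the network itself censors nothing."""

    def __init__(self) -> None:
        self.uncensored = False                                  # rule 2: freedom is opt-in
        self.user_rules: List[Callable[[Set[str]], bool]] = []   # rule 3: your own filters
        self.delegate: Optional[Callable[[Set[str]], bool]] = None  # rule 4: outsourced filtering

    def show(self, tags: Set[str]) -> bool:
        if self.uncensored:
            return True                          # opted in: see everything
        if self.delegate is not None:
            return self.delegate(tags)           # entrusted to an outside entity
        if any(rule(tags) for rule in self.user_rules):
            return False                         # user's own rules win
        return not (tags & DEFAULT_BLOCKED_TAGS) # fall back to default policy

f = PersonalFilter()
print(f.show({"news"}))                        # True: passes the default policy
print(f.show({"flagged-by-default-policy"}))   # False under defaults
f.uncensored = True
print(f.show({"flagged-by-default-policy"}))   # True after opting in
```

Receiving content is never blocked at the network level in this sketch, matching rule 5; `show` only decides what the local UI displays.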

Such a system preserves freedom, but looks much less anarchic. I don’t want SAFEnet to be anarchic, just pro-freedom. There is a difference between no wall and a little wall that anybody is allowed to pass.
Freedom should be cheap, but not free. Not all the people around us are really ready to carry the load of freedom.

The main “face” of Google is a search results page. Will SAFEnet also run a search engine for users?

The main face for Google is a blank, level-playing-field splash page.

If we get unsponsored, conflict-free, ad-free search, we really don’t need the ads. In a system like SAFE, the price of ads would be too high, as it’s likely to all be paid to the end user on a global opt-in basis, since there will be no power to coerce attention.

There is no pendulum here: sponsorship (bribery-based censorship) and de jure and de facto propaganda/censorship/drowning-out are obsolete, as are organizational secrets and non-transparency. The cost of freedom is paid in a distributed fashion inherently.

The only filter regime will be the one the end user optionally selects through an end-user interface over which the end user retains total control. SAFE conceptually heightens awareness of the conflicts. I think the days of media systems that take money from anyone but their end users are over. Much will be done at sustaining cost.

The message of top-down systems was sponsorship and surveillance, or “money is power”. The message here is limits on the power of money; it’s really a bottom-up level playing field, it’s anti-plutocracy. I think the rich really do lose control of the narrative, hopefully permanently.

Huh??? Freedom is choosing, not some binary opt-in/out

SAFE does not court censorship on any level, it is a system that freely allows people to store/retrieve information.

On top of SAFE are applications if you wish.

And if you want an APP that suggests/filters the public content according to your preferences, then you the user could write one, or use one already written to do that. To imply censorship is to imply that SAFE is somehow censored. It is just like going into a library: you search out what you want to see/read and then get the material. Today’s libraries have programs that can help you search according to your preferences, but the library is not censored itself. (Not the best analogy, as most libraries have rules about what they buy, but the point should be obvious.)

If you don’t like a particular subject, then do not search for it, and if you have an application that also removes search results that you don’t want, then great. But SAFE is not censored; you just removed some search results you didn’t want to see. Maybe you already have them, maybe they are for the wrong country, or maybe one is timid.
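An application of the kind described here can be a pure client-side post-filter over an unfiltered search. A minimal sketch, where the result format (plain title strings) and keyword list are assumptions for illustration:

```python
from typing import List

def filter_results(results: List[str], blocked_keywords: List[str]) -> List[str]:
    """Client-side post-filter: the network returns everything;
    each user drops only what they personally do not want to see."""
    blocked = [k.lower() for k in blocked_keywords]
    return [r for r in results
            if not any(k in r.lower() for k in blocked)]

hits = ["Gardening tips", "Casino bonus spam", "SAFE network FAQ"]
print(filter_results(hits, ["casino"]))
# ['Gardening tips', 'SAFE network FAQ']
```

Nothing is removed from the network or from other users’ views; a different user running no filter (or a different keyword list) sees the full, unchanged result set.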

The BIG problem with any system of censorship is that the good will always be caught up with the perceived bad. No form of censorship has helped progress the human race. Selection based on personal preferences, without denying others, is the way forward. If it’s criminal, then the police can handle that using their more than sufficient powers.

Scared that without censorship the world will fall into decay? The fix for that is to make sure that people/children get a good, balanced education.


@vi0oss

I like your discussion of i2p, and there are certainly going to be some good questions for the MaidSafe team once they build a launcher and get the network live, regarding the default settings and portal of the reference client. I hope they will be careful and conservative in their default settings and only list squeaky-clean stuff in the reference client downloaded straight from the open-source project homepage.

On search engines:

You can choose to run all your SAFEnet traffic through a third party, central hub, but then you will be giving that third party information about your online puts and gets. And it really won’t be all that different from the existing internet. Decentralized by design, centralized by convenience and for profit.

Look, the censorship that Google does is not free. It’s not free as in beer: it costs them money to hire armies of foreigners in backwater countries where wages are $1 an hour to look at all the stuff that people put on the internet and selectively click “ban”. And it’s not free as in speech: they recoup their censorship costs by collecting data about all their customers’ searches, then running algorithms on that data to determine whom to sell each customer’s attention or information to, in the form of advertising or police informing, respectively. Sometimes they are simply forced by government coercion to give up the information they collect, for free. That is why they stopped operating in some countries, but that’s just playing politics.

If that’s what you want, thousands of sweatshop workers clicking “ban” millions of times a day so you don’t have to tolerate the content they review showing up in your walled garden, then you really don’t want SAFEnet. If you are willing to give up all the information about what you say, see, hear, and read, and when and where you do so, to a team of corporate executives and their servants who will in turn sell it to the highest bidder or give it away at the flash of a badge, then you really don’t want SAFEnet.

But with all of that said, there will be people who set up search engines and police content on SAFEnet; that is a well-proven business model, and one that I expect will be successful on SAFEnet. So all that you are asking for is a portal that acts as a moderated search engine, and I can assure you that your wish will be granted.


I see MaidSafe as an alternative internet, not just another darknet. And yes, as a normal average Joe, I wish to minimize bad actors’ data on my hard drives by having the freedom to censor the data by blocking MaidSafe keys on my system.

Yes, ultimately the user should have total control. But by default it should be in safe mode.

At the early stages of SAFEnet I expect most users will be techno-geeks (or the like) who will all opt in to total control. But if the masses use SAFEnet, I expect about 50%-80% of users will just use whatever is the default.

Imagine the difference between a dangerous construction site surrounded by “warning, dangerous area” signs and tape (but you are allowed to go in if you want) and a construction site without any border at all.


Also, having some optional-but-activated-by-default filtering policy tells the people around us: “Yes, we are concerned about terrorism/extremism/whatever threats and have taken some steps to prevent such things on our territory.” That way it will look more like a serious, respectable, long-lasting entity and less like a teenage anarchist revolutionary group.

The “hard drives” level is too low for good censorship. At that level the system will prevent you from seeing what content is stored; you will just be storing SAFEnet’s content without knowing whether it’s good or bad.

Benevolent censorship should work at the user-interface level and in some apps (notably “default” or “official” ones).


@vi0oss

Yes, ultimately the user should have total control. But by default it should be in safe mode.

I think this is the key issue of this topic. It hangs on two things:

  1. personal opinion: should v should not

  2. practicality, which is a whole different thing and will simply not be available (i.e. impossible at launch without significantly delaying the project to sort out both the what and the how).

I think 2) kills opt-out censorship stone dead at launch. Your only practical option is opt-in censorship at a later date, exactly the same as it has always been, for essentially similar underlying reasons. I think it’s a natural, well-established pattern in culture, and also in nature.


There will be search engines on the network, however SAFE has not created a search engine. SAFE is the network and search engines for data on the network will be created by third parties.

Apparently there will be some necessary built-in search functionality at launch, but not yet full-blown search. Another thread explored this.

@vi0oss The only default filter I’d want would be one that filters out censorship itself, in particular “money as speech”, i.e. sponsorship and conflict-of-interest-based sponsored media.

As neo said above, if you don’t want it, don’t search for it. A lot of emphasis is on search accuracy. Also, the start screen should be completely neutral; to have it otherwise would be SAFE trying to sell stuff, which would be discrediting. Even accurate trending would be too much for a start-up page, because it might suffer manipulation or give the appearance of promotion and conflict of interest. This stuff can never relax, because it’s the kind of slippery slope that toying with addiction is in an intentionally set-up, money-addicted world.


The conundrum is: how do you know whether your search engine is accurate without being able to keep statistics on who searches for what, with what search terms, and which links are followed and which are not, etc.? You need data to measure, and that data is the very thing you are trying to prevent the need for.
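One possible way out of this conundrum (sketched here with assumed parameters, not anything SAFE specifies) is for clients to randomize their usage signals locally before reporting them, so an aggregator can estimate overall rates without learning any individual’s behaviour. This is the classic randomized-response technique:

```python
import random

def randomized_response(true_bit: bool, p_truth: float = 0.75) -> bool:
    """Client side: report the true bit with probability p_truth,
    otherwise report a uniformly random bit. Any single report is
    deniable; only aggregates carry signal."""
    if random.random() < p_truth:
        return true_bit
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Aggregator side: unbias the observed rate using
    observed = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 clients, 30% of whom actually followed a given link.
random.seed(1)
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 2))  # an estimate close to the true 0.3
```

The search operator gets an accuracy signal (what fraction of users followed a result) while never holding a trustworthy record of what any one user did; whether that trade-off is acceptable is exactly the judgment call discussed above.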

Everything is simple till you look at it and it is not.


@jreighley Agreed, but inside a DAO might be as safe as it gets, and the free/open code could be set up to disaggregate as early as possible in the process, almost the way self-authentication does? Also, people could choose not to use such a portal or filter. To me everything is opt-in by default.


I read most parts of this thread, and to me this is an interesting discussion without an easy solution. You are most likely right when saying that “the fix for that is make sure the people/children get a good balanced education”, but this is an ideal, a normative plan, a good plan, sure, but it doesn’t have anything to do with pragmatic behaviour in everyday life.

Let’s look at this scenario:

The SAFE network runs in a solid way. Several thousand people use it, which raises their privacy and gives them better control over their data. State regulators don’t care about it, since it is pretty small and irrelevant (now that I am writing this, it really sounds like the internet in its early days). Applications are developed on top of SAFE, and everyday Joes start using the system. Then someone starts storing illegal pornography on the system, and immediately there is a debate on whether this strange technology should be banned; legislators come under pressure and eventually ban access. This, of course, doesn’t change the fact that the pornography will still be around, but a large part of the mainstream demographic would stop using the system, because people would say: “ah, you are supporting this network that harms people”. Looking back at the recent past, in a way this is what happened to Silk Road. Silk Road had an ethical code: no items were allowed that serve to harm other people. The hype came with the open drug store; the shitstorm started with alleged weapons being sold over Silk Road. So I wonder: is it really productive to have no measure at all to flag content and make some content less visible than others? While I don’t see a fair and proper implementation that couldn’t slide into outright censorship, this is a question the project will have to deal with over and over again once it receives more attention.


But Silk Road ran on Tor, which didn’t get banned. And neither is the BitTorrent protocol banned at the state level, despite it being used for copyright infringement on a massive scale. Is there any P2P or privacy/anonymity-enhancing protocol that is actually banned in the Western world?


Acknowledged, Silk Road is not a good example because it was a market, not a protocol. The point is that while the protocol will happily exist over time, the second backbone of a cryptocurrency economy, the social infrastructure, can easily be challenged by regulatory measures. Startups get into the economy because they see chances for future economic models; the mainstream is risk-averse and will react to regulations, in particular when someone can argue that a certain technology is used by the “scum” of society. While we could argue that this is the same with cash, so there shouldn’t be any problem, the argument is a double-edged sword and may end up worse, since it is one of the reasons why there are efforts to move to a cashless world, where there is only bank money.

The future will tell whether and which political systems will try to take control. Anyway, IMHO this is not about whether Bitcoin or any other cryptocurrency will survive (they will), but about how society will be able to use it.


Silk Road was busted because it was centralized…

Done in a decentralized manner, there would be no central server to raid or take down…

The drug trade will always be “bustable”, as it requires shipping a physically detectable product, but the markets are evolving to be fairly immune from government shutdown…

Of course you could flag content (and that would be stored in a DB that would probably run on the “traditional” Internet), but that doesn’t do anything for those who aren’t subscribed to that “service”.

But, if you know you want to avoid (arbitrary) “unethical” sites, you don’t really need a filter - you simply don’t go to sites that you deem unethical.

How do you end up on an unethical website in the first place? You click on a link. Solution: don’t click on links except those provided by the site itself. If the site provides links to sites you find unethical, don’t visit it. What’s the point of visiting a site where every 2-3 days you click on a link which is blocked (and you don’t know why)?

MaidSafe will surely be used by the scum of society. Self-censorship won’t and can’t change that. And you’re mixing the personal ethics of thousands of users (each of whom has a different standard) with the laws of many countries (each of which is different), which is clearly oranges and bananas. On top of that, there is no way for regulators to prevent any kind of content from appearing on the MaidSafe network.


@Artiscience do you know of any network protocol or storage system (not a provider, but a system) that has been banned by governments in supposedly free countries? Countries like China are a different kettle of fish.

I would be interested to know if there is one, as I do not know of any. Except the export ban the USA had on encryption technology some time ago, but even they lifted that ban because they saw it as wrong/useless, and even then it did not stop their citizens from using encryption.

Basically, I think you will find that the governments would be doing something they have not yet done: ban/outlaw a protocol/system which is already used by a significant number of people worldwide.

Even Australia, which is at the beginning stages of its own “great firewall”, would not seriously consider banning a protocol. A lot of things would have to go down first before it banned protocols.

If there is bad stuff on SAFE, then the police will just have to do real police detective work, like they used to do before they got the idea that they could have a button they only need to press. Police today feel they have a right to listen to/read anything we do/say, whereas before the internet they had little hope of intercepting notes passed from one person to the next, yet they were still able to take down criminals.

This idea that we have to know that the private data stored on our node is ethical is an extension of the notion that we have a right to snoop on the people unwittingly storing data in our vault. No one knows which vaults their data will be stored on, and we have no way of knowing who owns the private data stored in our vault.

In order to know who/what is stored, we would have to break the security and privacy of SAFE, and then we would no longer have a SAFE system but another system. If you are worried that some unintelligible (to the vault) data stored in your vault is “bad”, then, as others have said, SAFE is not for you.


Maybe the answer to this question of “unethical” uses could be found by considering another system that has been in place for a very long time: banks’ safe deposit boxes (vaults).

The banks do not “police” what customers store in them and usually provide private areas for customers to inspect their boxes. While the banks could in theory check each box for “unethical” material, they don’t, and judging by occasional news reports, all sorts of “bad” things are stored. I am sure the banks’ shareholders would not approve of those “bad” items being stored, but they “allow” it because that is the “system” and they are not responsible for those items. If anybody does not want that, then they do not become a shareholder of the bank.
