Search as a natural monopoly, censorship implications

Hello all,
I am interested in any discussion and ideas on mitigating the risk that Search Engines (SEs) pose as natural monopolies [1], and all the problems that entails for the Safe network. Researching these forums on the topic, the general idea seems to be that website/data content producers will submit their data to one or more SEs for indexing as they wish, and each search engine will provide a custom interface to the MaidSafe network content it has indexed. Going by the research, one to three SEs will most likely rise to dominate, and as more people start using the MaidSafe network they will naturally gravitate towards the top SEs almost exclusively. This appears to open the Safe network up to exactly the censorship problems the network was designed to mitigate. Pressure on the maintainers of one or more of the top SEs would be feasible by identifying who they are the moment the SE intersects in any way with the existing web, an event that is likely, perhaps unavoidable, given the nature of search engines.

A first response might be that users can simply switch to another SE should any one of them start to censor, and besides, the data will still exist on the Safe network, so it is not a problem. The counter-argument is that under SE censorship the majority of users will not even notice they are subjected to it; only a minority of vigilant users really care about that kind of thing. When content is not indexed, left to go stale, or quietly down-ranked, then for the vast majority of people it is as good as if the data did not exist on the network at all. The power to decide what to censor, filter, or rank is no longer in the hands of the MaidSafe user.

One possible solution would be to quickly deploy a Search Engine held in the hands of a worldwide distribution of maintainers determined to uphold MaidSafe ideals, gambling that first-mover advantage is enough to afford it the natural monopoly over the longer term. This seems a fragile solution, however, now that we know just how persuasive a powerful adversary can be against a worldwide group of SE maintainers: see the worldwide extra-judicial targeting of the secretive maintainers of the specialised "search engine" WikiLeaks [2][3] as one example, or the rise of Baidu to dominance in China, attributed to government pressure against foreign search providers [4].

Is distributing the search function itself technically feasible within the MaidSafe framework? Say, for example, a default Safe-network-distributed graph database [5] into which each content provider indexes their own data if they want it to be searchable, with end users querying the database through a thin intermediary query layer, complete with to-taste ranking, filter bubbles, and censorship options of their own choosing?
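To make the idea concrete, here is a minimal sketch of what "each provider indexes their own data, each user ranks their own results" might look like. Everything here is a hypothetical illustration: the `NETWORK` dict stands in for whatever distributed key-value primitives the Safe network would actually expose, and `publish`/`search` are invented names, not SAFE APIs.

```python
# Sketch of self-indexing on a distributed store: providers append
# postings under deterministic per-keyword addresses; clients fetch
# posting lists and apply their OWN ranking, with no central engine.
import hashlib
from collections import defaultdict

NETWORK = defaultdict(list)  # simulated DHT: key -> posting list

def keyword_key(term: str) -> str:
    """Deterministic network address for a keyword's posting list."""
    return hashlib.sha256(term.lower().encode()).hexdigest()

def publish(content_addr: str, text: str) -> None:
    """A content provider indexes its own data: one posting per term."""
    for term in set(text.lower().split()):
        NETWORK[keyword_key(term)].append(content_addr)

def search(query: str, rank=None):
    """Client-side query: intersect posting lists, then apply the
    user's own ranking function rather than a central engine's."""
    postings = [set(NETWORK[keyword_key(t)]) for t in query.lower().split()]
    results = set.intersection(*postings) if postings else set()
    return sorted(results, key=rank) if rank else sorted(results)

publish("safe://cat-video", "video with smiling cat")
publish("safe://dog-video", "video with happy dog")
print(search("smiling cat"))  # -> ['safe://cat-video']
```

The point of the sketch is where the power sits: censoring a result would require either removing the data itself or subverting every client's `rank` function, since no intermediary decides what is returned.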


[1] Market Dominance and Quality of Search Results in the Search Engine Market

[2] Prosecution of Anonymous activists highlights war for Internet control

[3] Snowden Documents Reveal Covert Surveillance and Pressure Tactics Aimed at WikiLeaks and Its Supporters

[4] Internet censorship in China - Wikipedia

[5] Graph database - Wikipedia



Search will be a choke point due to its conflicts of interest, including sponsorship, oligarchy and profit. Cutting the cord on Search and going with user ranking and user-driven spam filtering systems seems to be where the community would like to go. I think there is also a consensus that these should be ad-free, to prevent the most obvious trust and conflict-of-interest issues.
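As a rough illustration of user-driven ranking, each client could weight votes by its own trust in the voters, so spam filtering happens per user rather than at a central engine. The `votes` structure and `rank_for` function below are purely hypothetical stand-ins, not anything SAFE actually specifies.

```python
# Toy user-driven ranking: a client aggregates up/down votes, but only
# counts voters it personally trusts; unknown voters get zero weight.
votes = {  # content address -> {voter: +1 up / -1 down}
    "safe://page-a": {"alice": 1, "bob": 1, "spammer": -1},
    "safe://page-b": {"spammer": 1, "mallory": 1},
}

def rank_for(user_trust):
    """Return content addresses sorted by trust-weighted vote score."""
    def score(addr):
        return sum(user_trust.get(voter, 0.0) * v
                   for voter, v in votes[addr].items())
    return sorted(votes, key=score, reverse=True)

# My client trusts alice fully and bob somewhat; spam voters count for
# nothing, so spam-promoted pages sink without any central blacklist.
my_trust = {"alice": 1.0, "bob": 0.5}
print(rank_for(my_trust))  # -> ['safe://page-a', 'safe://page-b']
```

Because every user supplies their own trust map, there is no single ranking anyone can pressure or buy, which is the property being argued for above.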

This approach fits what also seems to be a consensus on the need to cut the cord on ISP/Cable gateways and networks and replace them with open source secure hardware and software primarily controlled (or owned) by the end users. And it fits with the overall need to decentralize and distribute political and material power.

Assuming a DAO, we may be able to get goods at cost or much closer to cost, as the DAO would replace shareholders, execs and even employees. The DAO could be driven in some way by an open source community, or sealed up, presuming trust.

AI presents another interesting take on Search. Assuming better AI techniques that don't backfire, AI would seem to make the world more transparent and bring more direct corrective public process to bear, possibly supplanting some of the more laborious state-type procedural protections. AI and more aggressive Search work to make the world more transparent, more aware, and possibly more responsive to quality-of-life issues by speeding the conveyance and accuracy of information while improving analysis.

The privacy tech on SAFE will also make it possible to report anonymously in a way that collectively puts a stop to the tradition of using organizational secrecy to manipulate the public. This will be closely tied to better Search-type tech, like "Watson Debater," to aid public investigation. Personally I see this leading to the failure of much of the for-profit system, especially the stuff that was basically extractive welfare for the rich. Their gateways and scams, and their ways of maintaining them, will not survive increasing public scrutiny. Dispelling constant and ubiquitous fraud and its propaganda supports could be like dispelling the superstition of yesterday, and could lead to a much brighter future.


Hi Warren, it is an interesting idea to cut the cord completely on the Search Engine and run with user ranking and user-driven spam filtering systems instead. I am not sure how that would work or compete if, for example, one wished to find the "video with smiling cat" or similar vague search terms; I suspect user ranking would be complementary to the search function. Also, the SAFESearch App has scored very highly in the most-wanted-applications thread, so there is definitely the demand.

Is distributing the search function itself considered to be a basic low-level function of the MaidSafe framework, or will it be implemented (centralised?) by various third-party developers at the application level?

Of possible interest here is the HyperDex project, a key-value and document store which relies on a distributed, sharded architecture. See the FAQ and the Document Storage sections for a quick overview of how they have incorporated search via "hyperspace hashing".
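For anyone unfamiliar with the idea, here is a toy sketch of the hyperspace-hashing concept HyperDex describes: each attribute hashes to one coordinate, the coordinate tuple picks the shard, and a search that fixes some attributes only needs to contact the shards along the matching hyperplane. The grid size, attribute names, and functions below are all made up for illustration and are not HyperDex's actual API.

```python
# Two-attribute hyperspace hashing over a GRID x GRID shard grid.
import hashlib

GRID = 4  # shards per dimension (arbitrary toy value)

def coord(value: str) -> int:
    """Hash one attribute value to a coordinate on its axis."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % GRID

def shard_of(first: str, last: str):
    """Place an object by hashing each attribute to one dimension."""
    return (coord(first), coord(last))

def shards_for_search(first=None, last=None):
    """Shards that must be contacted: a fixed attribute pins its
    dimension, an unspecified one ranges over the whole axis."""
    xs = [coord(first)] if first else range(GRID)
    ys = [coord(last)] if last else range(GRID)
    return [(x, y) for x in xs for y in ys]

# A full-key lookup touches 1 shard; fixing only one attribute touches
# GRID shards instead of all GRID * GRID.
assert len(shards_for_search(first="ada", last="lovelace")) == 1
assert len(shards_for_search(last="lovelace")) == GRID
```

The relevance to this thread is that secondary-attribute search stays efficient without any central index: adding dimensions per searchable attribute is how HyperDex avoids broadcasting every query to every node.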


Thank you. I think the aim is always to go as decentralized as possible.