Parental control mechanisms - Heading off bad press

I haven’t followed everything here - what is the badge system you’re referring to @chadrickm? Thanks.

The walk-away videos by @russell.


Relative to spam filtering and ways to filter out any and all sponsorship mechanisms, I think parental controls are almost irrelevant. That said, I recognize that not having some parental control ability, even a function that doesn’t work or is a work in progress, will almost guarantee that the useless, totally perverse mainstream media labels Maidsafe a terrorist support network for money laundering and human trafficking. But that is their opinion of the free internet anyway (Oliver and crew are exceptions). I’ve noticed that YouTube now has licensing nonsense prominently displayed under the videos, and midway through the videos multiple modal ads are appearing. Network TV is something that should have been destroyed anyway, because it’s worse than teen smoking etc., and now the world’s most valuable asset aside from stuff like water and oxygen is being converted into just that.

Honestly, the best thing is to get rid of profit for the content regimes, because they exist to censor. They shouldn’t be involved, and their business models shouldn’t be allowed, because they lead to the theft of society; they convert society into bullshit.


wow dude that’s a beautiful way to put it.

as long as it isn’t restricting anybody else’s (children included) free will,

which people are born with,

then all good with me!

build anything you want on it! for sure!

Just don’t try to force people to use it!

if they want to use some piece of software that only accesses certain things, let ’em!

if not, then that’s that!


DuckDuckGo is nice; if only one could add to that the features of the early version of Verbase, before they sold out to ads.

I’ve been asked to contribute to this thread as someone with some expertise and experience in the area (insofar as it relates to the web); I have a tech startup, TwoTen, which provides solutions for controlling content access. I should preface my response with a confession that I don’t entirely understand how MaidSAFE works, nor have I read the entire thread in detail, so apologies if my response has mostly been covered already or is entirely irrelevant.

Setting the scene: I’m fairly sure that any filter that allows older children sufficient room to continue using it is easily circumvented; moreover, I am not convinced that trying to prevent teenagers from accessing content they know exists serves any useful purpose. Such filtering attempts are more likely to lead them to circumvent any protection put in place, leaving them in a more precarious position than would otherwise be the case.

So what I say about how MaidSAFE content could be tailored concerns younger children, specifically the age group my company’s tech is focused on, which starts with “my first browse” and ends at the end of primary school (more or less 2-10…).

I’m comfortable that the tech we’ve developed for the web would be adaptable to MaidSAFE, and moreover could be used as a portal through to web content. Our approach has been summarised as “whitelist plus”, which is a reasonable, if slightly over-simplified, summary. We rate content (not sites) using adapted film ratings, so for the UK that’s U, PG and 12A, with some porosity at the boundaries of the ratings. Parents or teachers set the level they see as appropriate for a given child, and they have local overrides, so there’s lots of room for different value sets; we also use some personalisation at the edge to guide the child rather than have them come up against a legacy-style “you’ve been blocked” message.
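To make that concrete, here’s a minimal sketch of the rating-plus-overrides idea in Python. All the names, numeric rating values and safe:// content IDs are my own illustration, not our production code:

```python
from enum import IntEnum

class Rating(IntEnum):
    U = 0         # universal
    PG = 1        # parental guidance
    TWELVE_A = 2  # 12A

class ChildProfile:
    """Hypothetical per-child settings: a maximum rating plus local
    allow/deny overrides set by a parent or teacher."""
    def __init__(self, max_rating, allow=None, deny=None):
        self.max_rating = max_rating
        self.allow = set(allow or [])  # content IDs explicitly permitted
        self.deny = set(deny or [])    # content IDs explicitly blocked

def is_permitted(profile, content_id, content_rating):
    # Local overrides take precedence over the central rating.
    if content_id in profile.deny:
        return False
    if content_id in profile.allow:
        return True
    return content_rating <= profile.max_rating

# A child allowed up to PG, with one parental exception.
profile = ChildProfile(Rating.PG, allow={"safe://nature-doc"})
print(is_permitted(profile, "safe://nature-doc", Rating.TWELVE_A))   # True
print(is_permitted(profile, "safe://action-film", Rating.TWELVE_A))  # False
```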

The nature of our approach is fairly brutal from a tech perspective, i.e. circumventing it is only possible by the old-fashioned approach of, well, not using it. The filtering all happens on the end-user side, with a central service providing real-time rating information per content pull request. In the web context the client can be a box on the local network or software on the device.
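Reusing the Rating and is_permitted pieces from the sketch above, the per-request flow looks roughly like this; a local table stands in for what is really a real-time lookup against the central service:

```python
# Stand-in for the central rating service; in the real system this would be
# a per-request network lookup, not a local table.
CENTRAL_RATINGS = {
    "safe://nature-doc": Rating.U,
    "safe://action-film": Rating.TWELVE_A,
}

def fetch_rating(content_id):
    # Only the content identifier is sent upstream, nothing about the child,
    # which is what keeps the central service anonymised. Unknown content
    # defaults to the most restrictive rating.
    return CENTRAL_RATINGS.get(content_id, Rating.TWELVE_A)

def handle_request(profile, content_id):
    rating = fetch_rating(content_id)
    if is_permitted(profile, content_id, rating):
        return f"<content for {content_id}>"     # normal pass-through
    return "How about something else instead?"   # a nudge, not a "blocked" wall

print(handle_request(profile, "safe://nature-doc"))   # passes through
print(handle_request(profile, "safe://action-film"))  # redirected
```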

Our approach tries to provide the best of both worlds: it’s not in the way of those not using it, while allowing those who have or care for younger children to ensure only age-appropriate content is available to those children; it also leaves them to decide what “age appropriate” means for each child, while retaining the ability to grow with the child. The anonymised central service with edge clients allows content filtering while still delivering on the privacy focus of MaidSAFE.

Questions and comments welcome.


Hello everyone :wink: I’m not a specialist in parental control software, but perhaps even my little bit of experience can help someone choose the right kind of parental control software.

Before proceeding with my post, I’d like to tell you about my case with my young daughter, who likes to use my laptop and spends too much time surfing the internet. :relieved: I could not prohibit her from doing it; I only wanted to monitor her and filter the content she is looking through. A colleague recommended the keylogger software [www.refog.com](http://www.refog.com). Now I’m happy to tell everyone that my daughter is protected from adult content and harmful sites.

Doesn’t using a keylogger only prevent children from having privacy? They’ll still be able to go to every website, but now you can see what websites they’ve been visiting.


@luckybit, that is brilliant. More generically, maybe contracts could publish information to certain public name(s) when the programmed circumstances were met, one of which could be a multi-signature validating a request to publish.
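As a rough illustration of what I mean (everything here is imagined, including the safe:// public name and the network_put call):

```python
class PublishContract:
    """Imagined sketch: publish to a public name only once an m-of-n
    multi-signature has approved the request."""
    REQUIRED = 2  # m in m-of-n

    def __init__(self, public_name, payload, signers):
        self.public_name = public_name
        self.payload = payload
        self.signers = set(signers)  # keys allowed to approve
        self.approvals = set()

    def approve(self, signer):
        if signer in self.signers:
            self.approvals.add(signer)
        return self.try_publish()

    def try_publish(self):
        # The "programmed circumstance": enough distinct signatures.
        if len(self.approvals) >= self.REQUIRED:
            network_put(self.public_name, self.payload)
            return True
        return False

def network_put(name, data):
    print(f"published {data!r} to {name}")  # placeholder for a network call

contract = PublishContract("safe://family-notices", "movie night!",
                           signers={"mum", "dad", "teacher"})
contract.approve("mum")  # 1 of 2: nothing happens yet
contract.approve("dad")  # 2 of 2: the publish fires
```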

This might also be used to automate other tasks, such as an information feed scanner which posted notices and withdrew them based on the current data. Maybe a network of information digester/publisher nodes running in an app could form a learning network like I think @dirvine has talked about. I’m sure many more uses could be devised.

Regarding an app that enforces a contract: could the user turn off that app while doing something contrary to the app’s rules and so escape its notice? I guess I’m thinking, where does the app get its information from? It seems it would either be from outside network resources, or from code running in the app, perhaps monitoring your activity in some way that you have opted in to. Maybe it would have a setting to launch by default, and if it was not launched near the beginning of connecting to the network, it could detect a time lag and take note of you not running it. Bad marks for trying to avoid contract enforcement?
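Something like this trivially simple check, assuming the app can somehow see when the network session began (MAX_LAG_SECONDS and all the names are made up):

```python
import time

MAX_LAG_SECONDS = 60  # assumed tolerance between connecting and launching the app

def record_compliance(network_connect_time, app_launch_time, marks):
    # If the enforcement app started well after the network session began,
    # note a "bad mark" for possible contract avoidance.
    lag = app_launch_time - network_connect_time
    if lag > MAX_LAG_SECONDS:
        marks.append(("late_launch", lag))
    return marks

marks = []
connected_at = time.time() - 300  # session began five minutes ago
record_compliance(connected_at, time.time(), marks)
print(marks)  # [('late_launch', ~300.0)] -> one bad mark
```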

Very promising idea.