Creating for the Safe Network as a web developer

I’ve been following for a while, and as Testnet3 approacheth, I’m super keen to start seeing how I could get involved.

For me, I love the idea that websites could be funded in some fashion other than advertising. And as a fledgling author with a book waiting to be self-published, the Safe Network could be a great way to wrest control away from tax-dodging e-publishers like Amazon. But before then I’d like to start learning about the app layer and how that might function.


I’m looking for projects I could join or learn from (are there examples about yet?), or really any reading about how traditional sites could work as ‘apps’ on the network.

I guess first off, knowing how network domains etc. might be accessed would be good. And then how one might serve static content, before diving in and trying to figure out more complex apps that could use NoSQL and Safe for storage and modification of DB data.

Does anyone know of any resources or guides out there as yet? Or does anyone have a project that needs a web dev (JS (getting toward full stack)/HTML/CSS/PHP)?




I’m with ya, buddy. That’s why I’m hoping someone (hopefully a MaidSafe pod) will develop a Squarespace equivalent with a Safecoin wallet and other SAFE Network features built in.

I can’t help you find a project @joshuef, but here are a couple of links that may be of interest:


Thanks, @nicklambert. That’s a good start.

So theoretically, I could make a static ‘site’ (HTML files, CSS etc.) and just save them, and this would then be accessible on the Safe Network (only if ‘published’). If I chose to attach a payment address, I’d get paid per GET request.

As for a DB: the whole thing is essentially NoSQL, so simple Node apps with JSON storage should be viable here. Cool.
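To make that concrete: since the network is essentially a big key/value store, a Node app could treat it like a simple document store. Here’s a minimal sketch, where an in-memory `Map` stands in for whatever storage API MaidSafe eventually exposes — `putDoc`/`getDoc` are hypothetical names of my own, not part of any real SAFE API:

```javascript
// Sketch only: an in-memory Map stands in for whatever key/value
// storage API MaidSafe eventually exposes. putDoc/getDoc are
// hypothetical names, not part of any real SAFE API.
const store = new Map();

function putDoc(key, doc) {
  // Documents are serialised to JSON, much as they would be
  // when written out to the network.
  store.set(key, JSON.stringify(doc));
}

function getDoc(key) {
  const raw = store.get(key);
  return raw === undefined ? null : JSON.parse(raw);
}

// Usage: persist a blog post and read it back.
putDoc('posts/hello-world', { title: 'Hello World', body: 'First post!' });
console.log(getDoc('posts/hello-world').title); // "Hello World"
```

Swapping the `Map` for real network calls would presumably be the only change needed once an API is released.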

A question off the back of that: payment options / popularity / caching. Say I have something incredibly popular that doesn’t involve a lot of GETs (maybe it’s a single-page site / book; maybe a video or song).

One download could be enough to get the data but then continue to use it for a long time. Is there anything baked into Safe that allows one to set the monetisation rate? Or would this be extra, over and above the Safe Network (you’d need an app that required payment for every play, or X Safecoin / minute, for example)? (I guess otherwise you’d end up with sites with a lot of pages for the content, much the same way as you get articles split up so you can show more pages / ads these days. Which is, IMO, less than optimal…).

Does network caching affect GET requests to content?

Is this sort of thing under discussion somewhere, or being implemented? I guess then there’d need to be a way to inform the user of the cost. Or perhaps they could allow any charges below a certain rate by default. HMMmmmm.

The potential for this is amazing. To have an internet that intrinsically supports the content creators would be brilliant. I could imagine people paying a subscription to get Safecoins so they could use it. Spotify style. Only it would be cheaper, with much less going to a middle man. That would be… rad.


I’m not 100% sure that the same payment/reward options will exist for SAFE web pages as will exist for apps, but I will check and come back to you on this thread. It has been discussed that the consumption of data would not require Safecoin, but could potentially be tipped, or paid for as an option. It may be possible for a web page owner to restrict access to specific pieces of content, requiring payment for full access, but I do need to check. I personally think it would be good if we can give as many options as possible and let content owners decide.

In terms of payment, I believe caching does affect it: I don’t believe either the farmer or the content creator is rewarded for cached data. It was referenced in this [thread](Net neutrality - #3 by chrisfostertv).

I’m waiting, too. For web developers, there would need to be either a dedicated browser (search for SafeSpace) or extensions for FF and Chrome. There is a topic here, but I’m not sure it’s currently being worked on:

So us web developers may be a bit behind the rest of the pack. Unless we all pitch in to create the extensions once the API is out :smile:

I guess first off, knowing how network domains etc. might be accessed would be good.

Not sure how that would work, either. Would be nice to have a core developer step in here. If I put, for example, safe://somekindofstring into the address bar, what exactly should happen in the background? Which files should be fetched?

I was talking with a guy last night about SAFE and how the network rewards content creators.

He specializes in custom fonts, and the question arose: “will I get paid if my font becomes widely used?”

That one threw me a bit… mainly because of deduplication. Fonts are files… so if only one copy exists, how can anyone else utilize it on their own SAFE site?

Maybe this is a situation where you only reference the original and the owner gets paid?

Fonts could be a nice earner if the creator gets rewarded.

The guys have continued work on the FF extension and have also been looking at the same for Chrome. It’s @Viv’s team who are working on this specifically. They have been working across a number of projects, so I’m not sure of the browser’s status, but maybe Viv can provide a brief update.

The Firefox POC (handling a custom URL scheme and communicating with backend C++ code which can establish network/filesystem access) has been completed. This mailing list thread should have the info, with the GitHub URL to the sample that demonstrates the approach we took. Please do note this is a POC (it doesn’t actually connect to the network yet); we’ll be picking it back up once the testnets are available and the MPID functionality is released to tie in with the network. Chrome seems a bit trickier, since it did not allow custom URL scheme handling like what we got with Firefox.

I think the behavior is still being debated, but the majority seemed to favor URLs corresponding to a public share in the network under a given MPID (public user ID),

so, say, a URL syntax of

safe:<MPID-Name>/<Pub Share name>/<path to file being fetched>.


In the current POC implementation, this will trigger the extension to fetch the data at the given URL and, on the native code side (when the APIs for public shares are available), make the corresponding calls via the NFS API to fetch the data.
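As a tiny illustration of the scheme described above, here’s roughly what an extension might do to split such a URL into its parts. The syntax follows the example given, but since the final format is still being debated, treat this as a sketch only:

```javascript
// Parse a safe: URL of the form described above:
//   safe:<MPID-Name>/<Pub Share name>/<path to file being fetched>
// Purely illustrative -- the final scheme is still under discussion.
function parseSafeUrl(url) {
  const match = url.match(/^safe:\/?\/?([^/]+)\/([^/]+)\/(.+)$/);
  if (!match) return null; // not a safe: URL we recognise
  return { mpid: match[1], share: match[2], path: match[3] };
}

console.log(parseSafeUrl('safe:alice/www/index.html'));
// { mpid: 'alice', share: 'www', path: 'index.html' }
```

The extension would then hand the three components to the native side, which would make the corresponding NFS API calls.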


Awesome. And I guess dynamic websites will use JavaScript to make API calls to the extension, which proxies the request to the MaidSafe client on the machine?

Thanks for the links @Viv! (Although the first one seems to be dead.) I’ll dive into the addon GitHub when I’m not at work.

Does that mean that MPID-Name is likely to be a type of domain then? (I read in the roadmap there’s a domain name registrar of some kind being planned… which I guess would supersede this setup eventually?)

@eblanshey I think something like that would work. With dynamic sites, you’ll probably still need a server for API calls etc., but the data it uses could equally be fetched/stored on the network (until processor resources can be shared on the network).

In general though, I think this is a pretty important topic (how content creators can earn Safecoin), as, if there is a way to earn directly from your content, without a middle man, it could well bring A LOT of content onto the network; such a scheme doesn’t exist anywhere right now. The potential is massive.

But with that, there are also questions linked to what @chrisfostertv was saying. (I think here you could just link to the font in much the same way as one uses the Google Fonts API right now.) But taking it further, there are for sure going to be issues with copyright infringement (what’s to stop anyone copying a file and putting it up as their own?). For sure it would make a lot of sense to share videos via Safe. All the benefits of torrenting + anonymity = piracy (with pirates making money from GETs).

I’m not saying it’s something that needs to be fixed per se (or that it could be “fixed”). These are problems inherent in a network like this, and consequences of wanting to create something open, encrypted and anonymous. (For sure it should just encourage creators to get onto the network and set up better portals for their own content, clearly marked as ‘official’…)

Cheers for spotting that. I have updated the link to the mailing list thread.

This POC was just for static sites. Well, client-side processing. @joshuef has the idea, I think. As long as we don’t rely on a server we’re all good, at least until we get the network to do the processing for us (even better).

I’ll be honest, I’m a bit confused. What makes the functionality of a website different from that of a desktop application? Desktop applications do not require a server to do the processing, and neither do websites on the SAFE network. I thought the whole idea here was that the SAFE network acts as the decentralized key-value store, thus acting as the actual database for all apps on the network. That means all apps, whether desktop or websites, can make API requests to the network to get the data they need.

For a website, this would typically mean Single Page Apps (SPAs), where the JavaScript would be downloaded on the initial page load, and subsequent AJAX API requests would be made to the network (the DB) to retrieve the data. No servers are required. Desktop applications would work the same way.
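As a sketch of that pattern — with the caveat that the URL shape and fetch mechanism here are entirely hypothetical, since no real SAFE endpoints are published yet:

```javascript
// Sketch of an SPA fetching just the data it needs for a view.
// The URL shape is hypothetical, and the fetch function is passed in
// because the real SAFE API is not released yet.
async function loadArticle(fetchFn, articleId) {
  // After the first page load, navigating to an article pulls only
  // its JSON; header, sidebar and scripts are already in the browser.
  const res = await fetchFn(`safe:myblog/www/articles/${articleId}.json`);
  return res.json();
}

// Usage with a stubbed fetch standing in for the network:
const fakeFetch = async (url) => ({ json: async () => ({ url, title: 'Post 2' }) });
loadArticle(fakeFetch, '2').then(a => console.log(a.title)); // "Post 2"
```

Whatever the eventual API looks like, the shape of the app stays the same: one initial load, then small data-only requests.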

Unless, of course, you mean that with your current implementation, the URL scheme you mentioned above would act as the API endpoints for the key value db (which would be called via AJAX calls). But I wouldn’t call that a “static” site.

If I am missing something, please do enlighten me.


@eblanshey I was meaning a server for generating dynamic content (think of your Wordpress CMSes etc.). But yeah, if it’s truly a key-value store, then it could easily be interacted with in a similar fashion to NoSQL DBs, as you say. (Which would be sweet.)

Depending on how the browsers deal with the data returned by the network, ajax calls to a Safe URI could be interpreted as HTTP responses to complete the picture.

But yeah, servers are only needed if you need to vary your content at load time (although this could be done via JS, it’s probably not the best way).

Sounds like what was discussed in a related thread recently. No need for servers going forward.

It very much is a key/value store. That’s how it works at the core.

What’s the point of that? Why not just do it the way I described in my post?

This is what I wanted to clarify with @Viv. Will we be able to make AJAX calls to these URLs (as a way to access the SAFE db), or will it only support URLs entered in the address bar?

The CEO of the Grid said something funny during this startup talk: that their website is a static website, but it looks dynamic.

What was even funnier was that he said that the Grid is not for businesses & consumers, but for humans. Hihihihi (sorry, I like to have a laugh as I go through life; a day without laughter = like a day without sunshine).

You can check out his explanation about how it works here:

Maybe this tool could still give us dynamic-looking static sites on MaidSafe.

Disclaimer: I’m a founding member (in other words, I bought a $96 Grid package). I talked about it before here. Be warned that post contains a referral link!!!


Hmmmm, I forgot to say: Grid sites can/will be spit out as a GitHub repo, so your site/content stays yours.

Wordpress is one of the most popular CMSes going (according to them, 23% of all sites) because of its ease of use in generating content and admin areas (backend editor etc.) and its open-source, modifiable nature. Making this work on the network could be a great way for us to easily bring a massive portion of the internet onto the Safe Network without having to reinvent the wheel.

There’s a bunch of reasons you might want a server for generating markup on the fly. The main one is site speed: it’s faster to use server-side languages to build a page than JS implementations these days, improving the user experience. Even if Safe might speed things up (with its automatic promotion of popular content etc.), in the end the work is still being done in the browser; e.g. this could mean six or seven API calls to generate your homepage with the latest blog posts, as opposed to one call grabbing the whole page markup as with server-side setups these days, slowing everything down.

If we were to still use Wordpress on a server to generate static pages (that could vary as we added new posts etc.) and save those pages to Safe (if it behaves as a simple drive, this should be relatively trivial), then we’d get the best of both worlds: speed, and distribution by Safe.

On top of that, it could mean we don’t need proper VPS servers, as many hosted Wordpress installs do these days; running WP on your own computer would suffice for generating the content, which is then saved to and served from Safe (which could also act as the DB for Wordpress!). Eliminating another set of costs.

SPAs are slower only the first time they’re loaded. Every subsequent interaction on the website will be faster, as it only loads the data it needs. This is why doing it server-side is slower and less efficient overall, since it does so much unnecessary work. Once an SPA is loaded and you want to, say, read another blog post, the API will only fetch the article text, instead of the sidebar, header, JavaScript, and a ton of other stuff that has already been downloaded.
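One way to soften the “six or seven API calls” worry raised earlier: an SPA can fire those requests in parallel, so the wall-clock cost is roughly a single round trip. A sketch, with stubbed async fetchers standing in for the not-yet-released SAFE API calls:

```javascript
// Sketch: issue the homepage's data requests concurrently via
// Promise.all, rather than one after another. The fetchers below
// are stand-ins for whatever SAFE API calls eventually exist.
async function loadHomepage(fetchers) {
  const [latestPosts, sidebar, siteMeta] = await Promise.all([
    fetchers.latestPosts(),
    fetchers.sidebar(),
    fetchers.siteMeta(),
  ]);
  return { latestPosts, sidebar, siteMeta };
}

// Usage with stubbed fetchers:
const stubs = {
  latestPosts: async () => ['post-1', 'post-2'],
  sidebar: async () => ['about', 'archive'],
  siteMeta: async () => ({ title: 'My Safe Blog' }),
};
loadHomepage(stubs).then(page => console.log(page.siteMeta.title)); // "My Safe Blog"
```

It’s still more requests than one server-rendered page, but they overlap, which narrows the gap considerably.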

I think an SPA blog site would be a cool wheel to reinvent :slight_smile: It is definitely not a pleasure working with Wordpress.