Ways To Implement Shared Data Distributed Apps (e.g. multi-user blog, etc.)

This is where StructuredData will kick in: when mixed with multiple owners, groups can be created. Now this leads to something I have not discussed yet, as I am trying really hard to release small bits at a time (issuing RFCs with each small part, and I have barely started) to ensure all engineers, the community, etc. can grasp it a bit at a time and grasp it completely.

So (impatient folks :wink: you know who you are :smiley: ), to dive right into the deepest part of this with no safety net, here are some options. There are so, so many ways; I am not even looking for the best or anything like that here, just samples. I am 100% sure people will come up with better, more sophisticated mechanisms. Just imagine semantic data types here, etc.

Create a blog type, say type 5555, and a blog page, say identity a4567b234ec456456...01. So we have a type that is a blog and an identifier.

Now a comment type, say 6666. A user creates comment 1, so its identity is a4567b234ec456456...01.

The next commenter can add another comment, a4567b234ec456456...02,

and so on.
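The scheme above can be sketched roughly as follows. This is a minimal illustration only, assuming made-up field names and a hash-derived prefix; it is not the real SAFE API, just the idea of a type tag plus a deterministic identity per element.

```python
import hashlib

# Hypothetical type tags from the example above.
BLOG_TYPE = 5555
COMMENT_TYPE = 6666

def identity(prefix, index):
    """Deterministic identity: shared prefix plus a two-digit suffix,
    mirroring a4567b234ec456456...01, ...02 in the example."""
    return "%s%02d" % (prefix, index)

def make_element(type_tag, ident, content):
    # A structured-data element here is just (type, identity, content).
    return {"type": type_tag, "identity": ident, "content": content}

# Derive a shared prefix for this blog's identity space (illustrative only).
blog_prefix = hashlib.sha256(b"my blog").hexdigest()[:16]

blog = make_element(BLOG_TYPE, identity(blog_prefix, 1), "First post!")
comment1 = make_element(COMMENT_TYPE, identity(blog_prefix, 1), "Nice post")
comment2 = make_element(COMMENT_TYPE, identity(blog_prefix, 2), "Me too")
```

Each commenter simply claims the next slot in the comment identity space, which is the first-come-first-served behaviour described below.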

This can be taken further: the comment type can hold not only content (unencrypted content, I hope :slight_smile: ) but also a parent field, so you can have threaded comments.
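The parent-field idea might look like this. Again the field names are illustrative, not a real API: a top-level comment has no parent, and a reply carries the identity of the comment it answers, so a reader can rebuild the thread.

```python
# Threaded comments via a parent field. A top-level comment has
# parent=None; a reply carries the identity of the comment it answers.
# Field names here are illustrative, not a real SAFE API.

def make_comment(ident, content, parent=None):
    return {"type": 6666, "identity": ident, "content": content, "parent": parent}

def build_thread(comments):
    """Group comment identities under their parent identity."""
    tree = {}
    for c in comments:
        tree.setdefault(c["parent"], []).append(c["identity"])
    return tree

c1 = make_comment("blog01", "top-level comment")
c2 = make_comment("blog02", "a reply", parent="blog01")
c3 = make_comment("blog03", "another top-level comment")
thread = build_thread([c1, c2, c3])
```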

This gives commenters full first-come-first-served control of commenting ability (as they will occupy the 6666 identity space on the network).

So now we want moderated comments. We can demand that the 6666 type includes the blog owner as one party of a two-part multisig (or we simply don't read it), which can allow either party to delete a comment.

Or we can have a variant where comment type 6666 requires that the element contains the blog owner (or owners, if the blog is owned by a multi-sig group) as the only party who decides whether to keep it or not.

This goes on and on, so you can have the flexibility you want, even without multi-user access to a specific time and place on the network. In the case here, the App (the blog reader) implements the rules of the types (it demands a particular layout of the data element) or considers the element invalid and ignores it.
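A toy version of that app-side rule: the blog reader only accepts a comment element if it matches the demanded layout, here the moderation rule that the blog owner's key must appear among the owners. The keys and element layout are invented for illustration.

```python
# App-side validation sketch: a comment of type 6666 is only accepted
# if the blog owner's key appears in its owner list (the multisig /
# moderation rule above). Keys and layout are invented for illustration.

BLOG_OWNER_KEY = "key_blog_owner"

def is_valid_comment(element, blog_owner=BLOG_OWNER_KEY):
    # Any element not matching the demanded layout is simply ignored.
    return (
        element.get("type") == 6666
        and blog_owner in element.get("owners", [])
    )

accepted = {"type": 6666, "owners": ["key_commenter", "key_blog_owner"], "content": "hi"}
ignored = {"type": 6666, "owners": ["key_commenter"], "content": "spam"}
```

The network never needs to know the rule; the app simply filters what it reads.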

I promise we will document and show many ways to achieve this, and much much more, as soon as we get feature complete for dev bundle 3 at least (then we have more than promised out there, and we can create some great apps and get the tech-debt and security sprints out of the way).

This is a super super simple and crude way of doing what you want; there is so much more, but you would not fail with this, for sure.

Now create a Map type and link it to Reduce types, and you can see system-wide Map-Reduce capabilities for big-data analysis (of public, non-encrypted data). Or perhaps a type that contains a semantic index like name:, then link that, as with the blog, for full semantics of name: types; then add city:, town:, population:, etc.

Then you can look at adding new owners based on certain characteristics to gain more knowledge, and then perhaps these identities can pool their findings. Maybe they are AIs or robots? The list goes on to the point it sounds mad, but this really is nothing yet. None of this has come close to what I imagine we will be able to achieve. The link between this and public data is going to be a huge step, but really we need to get to dev bundle 3 first. Next sprint, dev bundle 2 will be made available, so we are closing in.

None of this has even included private shares yet; these will add many more features and will require synchronisation. So a lot of work to get through. Anyhoo, hope this helps a wee bit.


The missing pieces are slowly starting to come together :smile: Thanks for the explanation!


Thank you so much David, my holiday is now ruined :wink: and you or I, or probably both could be in big trouble with my woman. Oh dear…

EDIT: but wait, she has some Safecoin… phew!

EDIT2: this is massively helpful David. It's exactly what I needed to keep me quiet… for a few days at least. I promise not to pester you or any of the devs 'til I'm back from hols and a bit more. That should be a whole sprint :wink:


To me it’s now starting to feel the design of (MPID) messaging needs to be revisited and can almost be an extension of the structured data type.

My remark goes two ways: initially I was also thinking along these lines, but the ‘problem’ is that when you want to put the “next” comment, you first need to know the index of the last comment. The client side should not have to care about that, and additionally we want the network to resolve concurrency issues when many new updates are posted. (Also, what if a preceding comment gets removed? If a client were iterating over the comments, gaps in the sequence would cause problems.)

So that pushed me to think more of the messaging design we already worked out, where sending groups kept the message but an alert was sent to the receiving groups. The whole messaging design can now be reduced to one reserved data type (and generalised for computing, commenting or any other data type): an ‘index-type’ that keeps a vector of the child types posted to it. This allows the whole messaging infrastructure to be put to disk just like any other data type, and groups would be able to update the index-type with newly received messages.
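The index-type idea above could be sketched like this. Everything here is hypothetical (type tag, identities, who performs the append): the point is only that writers append through the index, so readers never have to guess the next sequence number, and removed children leave no problematic gaps.

```python
# Sketch of the proposed 'index-type': one reserved data type whose
# content is a vector of child identities. All names are hypothetical.

INDEX_TYPE = 1  # a reserved type tag, invented for this sketch

def make_index(ident):
    return {"type": INDEX_TYPE, "identity": ident, "children": []}

def post_child(index, child_identity):
    # The group holding the index performs the append, resolving
    # concurrent posts by serialising them into the vector.
    index["children"].append(child_identity)

def remove_child(index, child_identity):
    # Readers iterate the vector, so removal leaves no gap to skip over.
    index["children"].remove(child_identity)

inbox = make_index("inbox_of_alice")
post_child(inbox, "msg_01")
post_child(inbox, "msg_02")
post_child(inbox, "msg_03")
remove_child(inbox, "msg_02")
```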



That won’t really be a problem, as the page + comments are parsed then it’s all there. So you will have all comments UID in them and can respond to any you wish (threaded or not). So like a blog with discuss extension.

Added to messaging, StructuredData will expand capabilities significantly. Adding deterministic caching means Twitter-like streams will also be possible, regardless of how popular a particular user is. So much to get in place, but it does offer so much in return, and with the current dev speed these things can all just fall into place nicely now.


This is a neat view; it needs further thought, but I think you are onto a pretty expansive design here. We did look at multi-capability messaging but never finished that. This would allow that as well: what I mean is email-like messages, system-level messages, simple messages (chat etc.). This covers that. Nice one.


These RFC’s may get a little hard to keep track of.

Would it be an idea to start a public ‘Language of the Network’ Mindmap, where you could frame the whole picture and gradually zoom into each area in depth? With Mindmaps you can create links at any node to relevant docs, RFCs, posts, podcasts, videos etc.

Maybe an internet Mindmap could be paralleled by a native StructuredData equivalent on SAFE… using the tech to document the tech.


It is also very hard for anyone not familiar with GitHub to engage with them there.

It took me quite a while last night to make a list of them and figure out how to view the document for each, as they don't appear in the repo yet: you need to find the outstanding pull requests, and then in each find the “View” button, none of which is obvious to the non-GitHub geek.

Once they appear in “rfcs/proposed” in the rfcs repo this will be much easier.

Are you watching the RFC repo? You should receive emails with links when there's activity.



+1 for @chrisfostertv's advice there; I had the same frustration as you until I sussed the watching feature.

Hint: don't watch too many threads at first, or you can become overwhelmed with unread notifications.


Yea, add Slack, all our repos plus dependent ones, and Mumble to these, along with the internal dev mailing list, and ye will truly be overwhelmed :slight_smile: (that's why I get wary with lots of private messages, as it's literally hundreds of notifications I need to read per day). It's really great there is so much info public though. Our hope is at least GitHub will capture the history of evolving documents like this, and I personally am pushing for more of that. So there is a lot of flurry, then quiet, from us: planning means a mad dash to read, catch up and plan, then a couple of weeks quieter while the sprint is on, then it starts again.

I like, though, that folk see part of what happens in planning. Saying “we are planning” sounds like “oh well, wonder what that means, a break perhaps?” Then you find it's the opposite: planning is manic, with meetings galore and a ton of questions. If we get it right, sprints should be quiet, so we will see; they seem to be gradually heading that way, so a sprint is real head-down-and-code-the-tasks. The rhythm is starting to set in though.


Thanks @chrisfostertv. Yes, I've been watching the rfc repo; as David says though, it would require a lot of time keeping up with them. I have all the notifications filed as threads, so I could do that, but haven't yet as there are so many! It looks to me as if it will be easier to read them as presented on GitHub than to wade through the emails. Anyway, I thought it would be best to go get the RFC docs and read them first, then pick up the comment threads. I now know how to do this, but it's not obvious and took a while to figure out.
