Thanks to all with technical understanding. I know many of us are a bit of a nuisance to the conversation. At this point, I think, we are all ambassadors of the SAFE Network. We must be able to field questions that come our way, to promote the network to people at any level as best we can. It’s not necessary for me to know most things; I trust David and the team. I ask in case I will be asked, because I want to be able to answer. So, again, thanks for your patience.
It’s not your fault, it’s mine. Lesson learned. I knew AES is encryption, but I don’t know why I was sure it was asymmetric. My bad, I should have read more about it beforehand.
As for me, it’s my first time participating in an open source project like this. I still have a lot to learn, and I’ll try not to make the same mistake again.
Baptism of fire
I think it’s how we all learn, and it’s valuable. Sometimes the core point gets lost in frustration; it’s worth the hassle though, as it helps others solidify their knowledge.
This continues to intrigue me, and it’s frustrating trying to fill in the gaps! I re-read the SD RFC carefully, and I’m thinking that you plan to extend the network handling of certain types to support more than the PUT/GET/UPDATE described for SD. Is that correct?
And if so, that it is these extensions to the handling of these types that will extend the power of SD in some of the ways referred to in the RFC, as in:
As these data types are now self validating and may contain different information, such as new protocols, rdf/owl data types, the limit of new data types and ability to link such data is extremely scalable. Such protocols could indeed easily encompass token based systems (a form of ‘crypto-currency’), linked data, natural language learning databases, pre-compilation units, distributed version control systems (git like) etc.
Ref: https://github.com/maidsafe/rfcs/blob/master/active/0000-Unified-structured-data.md
Assuming I’m on the right lines so far, I’m curious about what to expect for blog/comment types (or what we’ll be able to do by defining our own such types for these kinds of applications).
For example, without extra functionality - just using a naming convention such as you describe - we could enumerate blog posts by doing a GET for the SD, either inferring the hash from a deterministic (predictable) naming scheme, or by having maintained an index of blog posts that have been saved, along with some caching of post metadata (title, abstract, author, date, thumbnail etc.).
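To make the deterministic-naming option concrete, here’s a rough Python sketch. All the names here (the `blog/{owner}/post/{n}` scheme, the `get` stand-in for a network GET) are invented for illustration; the real API will differ.

```python
import hashlib

def sd_name(owner: str, post_index: int) -> str:
    # Hypothetical deterministic naming scheme: the SD identifier for the
    # n-th post is simply the hash of a predictable string. Anyone who
    # knows the owner's name can recompute it and GET the post.
    return hashlib.sha512(f"blog/{owner}/post/{post_index}".encode()).hexdigest()

def enumerate_posts(owner: str, get):
    # `get` stands in for a network GET: it returns the SD's content,
    # or None if no SD exists under that name.
    posts, i = [], 0
    while (post := get(sd_name(owner, i))) is not None:
        posts.append(post)
        i += 1
    return posts

# Toy in-memory "network" to show the idea end to end.
store = {sd_name("alice", i): f"post #{i}" for i in range(3)}
print(enumerate_posts("alice", store.get))  # ['post #0', 'post #1', 'post #2']
```

With this approach no separate index is needed at all: a reader just walks the name sequence until a GET misses.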
This would be one interpretation of what you’ve described, but is this as far as it goes or am I right in thinking there will be more to it, and that the network is going to implement handling that makes types such as these much more useful than this? If so, can you explain any more at this stage? I’d love to have you explain a lot about the ideas mentioned in the RFC, but focusing on the blog and comment example is fine if it is something that can be explained easily.
Very good question. I’m also interested in roughly what kind of “native” types we can expect in the future. Maybe we’ll see “traits” for SD types, like traits in Rust? It would be pretty amazing to be able to create your own SD type and assign one or more pre-defined traits to it, so the network knows how to handle the type based on its traits. The cost of creating an instance of such a type could depend on its traits as well: if certain traits are heavier on the network, the type is more expensive.
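Purely to illustrate the trait-pricing idea (nothing like this exists in the network today; the trait names and costs below are invented), a toy sketch:

```python
# Hypothetical trait-based pricing for SD types. Trait names and costs
# are made up for illustration only.
TRAIT_COSTS = {
    "appendable": 4,  # others can add entries, heavier on the network
    "versioned": 2,   # network keeps old versions
    "plain": 1,       # basic Put/Get/Post handling
}

def put_cost(traits: set) -> int:
    # The price of creating an SD instance is the sum of its traits' costs.
    return sum(TRAIT_COSTS[t] for t in traits)

print(put_cost({"plain"}))                    # 1
print(put_cost({"appendable", "versioned"}))  # 6
```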
What we will do (and have done this sprint) is supply a dns type for registration of services. It’s just an SD using Put, Get etc., that’s all. One purpose was to start showing how to use the SD types for different things. Then there is an SD type that holds directories for users. A user can have 2^512 directories, all unique to that user. That’s another SD, with only the data part encrypted. So it’s an enumerable data structure, a disk if you like, and it can be public: non-encrypted data content, but signed (we have implemented these too :)). So we’re enumerating public and private dir types as SD types using Put, Get and Post so far.
So we can show enumeration and protection of an SD namespace, if you like (this is Spandan’s upcoming RFC that provides collision-proof SDs).
It’s so simple it’s almost invisible.
What we need to add is appendable data types, to make blog comments even easier, but that is an RFC for sure; for now it can be done using the tools we have. It’s just a matter of selecting your type of SD for blogs and linking somehow to comments (in my example I used the blog ID as the entry point for the comment SD type, with the blog ID as the identifier and the version of the SD as a comment identifier). This will be made very much clearer soon.
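A rough sketch of that addressing scheme: the comment SD’s name is derived from the blog ID, and each new comment bumps the SD version, which doubles as the comment identifier. The type tag and the `CommentThread` class here are hypothetical stand-ins, not the real API.

```python
import hashlib

COMMENT_TYPE_TAG = 1001  # hypothetical SD type tag for comment threads

def comment_sd_name(blog_id: str) -> str:
    # The comment thread for a post lives at a name derived from the
    # blog ID, so readers can find it without any separate index.
    return hashlib.sha512(f"{COMMENT_TYPE_TAG}:{blog_id}".encode()).hexdigest()

class CommentThread:
    # Stand-in for a versioned SD: each post appends a new version, and
    # the version number serves as the comment identifier.
    def __init__(self):
        self.versions = []

    def post(self, comment: str) -> int:
        self.versions.append(comment)
        return len(self.versions) - 1  # comment id == SD version

thread = CommentThread()
first = thread.post("Nice article!")
second = thread.post("Agreed.")
print(first, second)  # 0 1
```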
Krishna and Viv have done some stellar work on the front end planning: they implemented node-wm, found deficiencies, and have now switched (in just a few days) to the electron libs (atom back end). When that’s all up and running (now the stabilisation sprint will happen) we can not only explain how to do this but also fire together a blog + comments app and put it in the examples. It should take us literally a few hours, I think. (I discount all the front end niceties; that is beyond my mechanical brain.)
As an aside, I am watching Eve a wee bit more these days; they are closing in on something worth looking at, a new programming paradigm indeed: GitHub - dirvine/Eve: Better tools for thought ← watch that space and wish them luck.
Thanks David. Again it sounds fantastic, but I’m not yet able to understand what “it” is sufficiently to make use of it.
An explanation would be a great help. I think example code is fine, but for most (including me) reading the MaidSafe code is not helpful beyond simple stuff, as there are next to no comments, and to understand any area one really needs to study it to a level that isn’t practical for users of an API. It’s really hard to read the code and understand it in small chunks, so I’d advocate producing an explanatory document that walks through a set of API calls to explain how each particular thing works: starting from posting a blog, then posting a comment, then another, then deleting a comment, etc.
I know it’s a lot of work to write, but the productivity gains on the API user side will be massive.
I agree, we will certainly need to explain this clearly in the new dev site that’s being created. This needs to be as close to obvious as possible. I think it will be, though: it’s a “click and it’s there” thing. It almost seems too simple, but that’s how it seems, I hope so.
As we move on it will all get clearer, one thing the forum does not lack is inquisitiveness and that’s great.
I have a couple of basic questions around this, apologies if they have already been answered.
What happens if I make a put request with some data that already exists on the safe network? Do I get to reference/use it for free of charge?
There’s often a lot of comparison between the safe network and the centralised services like Dropbox. Would “deletable data” be required in order to offer a similar service? I.e. you pay for a block of space that you can fill and clear as frequently as you choose?
It will be guaranteed to exist, that’s all, you won’t know any difference and neither will the network or anyone trying to snoop.
No, it can never be the same. Well, in the way an aeroplane is a transport mechanism like a horse is, but you really don’t want to hang a hay bag on a 747.
In SAFE you pay a tiny amount, once and forever. Deleting to try and get back some of that tiny amount really is just a remnant of old-world thinking that we need to be able to explain better. Any suggestions on the forum for explaining this would be good, I think.
I’ll try to help with this one. Let’s approach this from a Pirate perspective… literally!
Centralized Cloud Storage
When you upload/store a file to a centralized cloud service like Dropbox, you’re purchasing a plot of sand to bury your file. The cloud service charges you every month to maintain your plot of sand, making sure no one else can use it… except them when you stop paying.
Since you are the “sole” owner for that plot of sand, you can add/remove the contents.
SAFE Network Storage
When you upload/store a file on the SAFE Network, you pay for the “ability” to upload/store on the entire island.
The natives (non-client nodes) break up your file into small chunks and bury them throughout the island.
Then a treasure map (data map) is made which is required to retrieve your file. You personally cannot add/remove anything on the Island. That part is done by the natives in accordance with the rules of the SAFE Network.
Delete on SAFE means you destroyed your treasure map to that file, but not the content that is buried. If no one else has a map to that content, then it’s lost.
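The treasure-map idea can be sketched in a few lines of Python, with a toy in-memory “island” standing in for the vaults. Everything here is illustrative; real data maps also carry the keys needed to decrypt each chunk.

```python
import hashlib

island = {}  # chunks stored by hash; the natives see no link back to any file

def store_file(content: bytes, chunk_size: int = 4) -> list:
    # Split the file into chunks, bury each one under its own hash, and
    # return the "treasure map": the ordered list of hashes needed to
    # reassemble the file.
    data_map = []
    for i in range(0, len(content), chunk_size):
        chunk = content[i:i + chunk_size]
        name = hashlib.sha512(chunk).hexdigest()
        island[name] = chunk
        data_map.append(name)
    return data_map

def retrieve(data_map: list) -> bytes:
    return b"".join(island[name] for name in data_map)

data_map = store_file(b"buried pirate gold")
print(retrieve(data_map))  # b'buried pirate gold'
# "Delete" on SAFE means destroying your copy of data_map: the chunks
# stay buried on the island, unreachable without a map.
```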
De-duplication is a hidden effect that happens on the SAFE Network. If multiple people try to upload/store the exact same file, the natives will recognize it and avoid duplication, thereby reducing the total storage needed.
I hope this gives a basic idea of how SAFE is different from Cloud Storage.
Very cool
20 chars is a lot
How does one recognize the duplication if it is encrypted?
We all encrypt it the same way, from the same source (the data itself).
To encrypt something you need a password. That’s the cool thing about self-encryption: the data provides its own passwords. When you have a 10MB file, it gets split up into 10 pieces on your computer, and each piece gets hashed. So you have 10 pieces and their hashes. Now you use the hash of the second piece to encrypt the first piece; the hash of the second piece is effectively the password of the first one. This means that when you and I both self-encrypt the same 10MB file and put it on SAFE, it’s all encrypted but the results are still identical. So they end up at the same Vaults when you PUT the data to the network.
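Here’s a toy Python sketch of the scheme described above. A simple XOR keystream stands in for the real cipher, and the actual self-encryption algorithm differs in detail; the point is only that the keys come from the data itself, so identical inputs produce identical encrypted chunks.

```python
import hashlib

def toy_encrypt(chunk: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" standing in for a real symmetric cipher like AES.
    stream = hashlib.sha512(key).digest()
    while len(stream) < len(chunk):
        stream += hashlib.sha512(stream).digest()
    return bytes(c ^ s for c, s in zip(chunk, stream))

def self_encrypt(content: bytes, chunk_size: int = 4) -> list:
    chunks = [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]
    hashes = [hashlib.sha512(c).digest() for c in chunks]
    # Each chunk is encrypted with the hash of the *next* chunk (wrapping
    # around), so the keys come from the data itself - no password needed.
    return [toy_encrypt(c, hashes[(i + 1) % len(chunks)]) for i, c in enumerate(chunks)]

# Two people encrypting the same file produce byte-identical chunks,
# which is what makes network-wide de-duplication possible.
print(self_encrypt(b"same source file") == self_encrypt(b"same source file"))  # True
```

Since XOR is its own inverse, applying `toy_encrypt` again with the same key decrypts a chunk; the real algorithm likewise only needs the neighbouring hashes (held in the data map) to decrypt.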
So what if I take the source file and change the metadata? Then I could duplicate the file, right?
According to what you said, the software recognises files by size. If that’s the case, what if I have two different files both saved at 10MB? Would file B not be saved on the network because it looks the same as file A at 10MB?
He didn’t say that?
Indeed. I’m not sure why you’d want to do that though, and the network will charge you for uploading the duplicate.
I think I read somewhere that the metadata and the content of the file are stored as two different pieces, which makes sense.