This is actually pretty big news. Meeco is one of the leading lights in the Personal Information Economy. Sure, they're still a startup, but so is everyone else in that space, and I have it on good authority that they're one of the firms most likely to make it. Katryna Dow has been pursuing her vision relentlessly for the last five years, putting in an appearance at pretty much every privacy event going. It would seem to be a great match in terms of both vision and technology.
Katryna and I are looking at other non-blockchain options for 'immutable me'. There may be a way forward with the SAFEnet, which is something I am exploring in the coming few weeks. Provenance is powerful for self-asserted identity as well as for many other use cases.
What do you think of the concepts covered in that article and the appropriateness of the SAFEnet?
Katryna has been relentlessly pursuing the idea of digital self-sovereignty. She was recently ranked 14th in the top 100 most innovative thought leaders in identity for 2016: http://mecast.to/jy3o8m
Glad to know there is someone on here familiar with them, @JPL.
We are working on some really great concepts at the moment. Look forward to sharing more soon.
Above my pay grade really, but I'm of the belief that a blockchain is best suited to bodies that are answerable to the public, and that's about it.
Linking 'people' to the blockchain is a huge mistake in my opinion. A well-informed person would never consent to having their identity put on a chain, and any company that does it in the background is liable to be labelled authoritarian in the long run.
I hold a similar position on blockchain: good where transparency and immutability are required, such as public access to provenance on products, government spending, bureaucrats' expenses, etc. Not great for human identity and privacy; great for surveillance and compliance. Although that is somewhat simplistic. I am more interested in the concept of persona trees and Dr Irvine's datachain. That, I would like to explore. I don't want to hijack the thread with this, so I will likely engage with it in another thread.
It's really good news!
I'm guessing no recording, unfortunately.
But could someone who attended please provide an overview of what was said?
I know a lot of these things are the same thing in a different place, but recording and uploading them should be standard fare…
Been away for a while. Good to see, c'mon guys! Welcome @Michael
Can you provide a recommended minimum for CPU and bandwidth to pass the resource-proof test, both after initial implementation and also for the long run (after launch)? This would help in selecting the proper processing units (ODROID, Pine64, etc.) for maximum efficiency. Also, let's say a dedicated farmer did have enough resources to get 1000 nodes up and running, and maybe started them sequentially in order not to be tested all at once; are you saying it's possible to run multiple nodes on one instance, or would it be better to segregate them into virtualized environments (connected to NAS)?
At this point those are unknown figures. Until the code has been tested and put into alpha and/or beta testing in a larger network, it will be difficult to give you any figures that can be relied upon.
Having said that, my experience was that an ARM 32-bit, 1 GHz, single-core, credit-card-sized computer ran a vault/node with very little use of resources, other than storage for the chunks.
Bandwidth is more likely to affect the figures than the hardware setup.
I remember it was specified somewhere that a challenge would periodically be sent to all nodes in a group expected to hold a piece of data. The input would consist of a random key plus a random string. Each node would retrieve the data, concatenate it with the string and hash the result. Nodes returning a bad hash or replying too late would be expelled or deranked. Is this idea still planned?
It seems a very good one because it tests both the honesty and the overall capacity of nodes, without needing to measure CPU, bandwidth and disk performance separately.
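A minimal sketch of how that challenge/response could look, assuming a SHA-256 hash via the `sha2` crate; the types and function names here are purely illustrative, not the actual vault implementation:

```rust
use sha2::{Digest, Sha256};

/// Hypothetical challenge: the name of a chunk the node should hold,
/// plus a random string it cannot have precomputed.
struct Challenge {
    chunk_name: [u8; 32], // key chosen among the chunks the node supposedly stores
    nonce: Vec<u8>,       // the random string appended before hashing
}

/// The node proves it holds the chunk by returning H(chunk || nonce)
/// instead of the chunk itself, which keeps the proof small.
fn respond(c: &Challenge, fetch: impl Fn(&[u8; 32]) -> Option<Vec<u8>>) -> Option<Vec<u8>> {
    let chunk = fetch(&c.chunk_name)?; // no chunk stored, no valid answer
    let mut hasher = Sha256::new();
    hasher.update(&chunk);
    hasher.update(&c.nonce);
    Some(hasher.finalize().to_vec())
}

/// A verifier holding its own copy recomputes the hash; a mismatch (or a
/// timeout, handled elsewhere) would lead to deranking or expulsion.
fn verify(expected_chunk: &[u8], c: &Challenge, answer: &[u8]) -> bool {
    let mut hasher = Sha256::new();
    hasher.update(expected_chunk);
    hasher.update(&c.nonce);
    answer == hasher.finalize().as_slice()
}
```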
Or actually a random piece of data that the vault is supposedly holding, so it can prove it indeed holds it? Or is that what you referred to as the random key?
The random string would be replaced by a random offset, so that misbehaving nodes cannot store just a fixed subset of the data when the data seems never to be fetched.
Sorry, I wasn’t clear. The challenge sent to a node is composed of 2 parts:
- a key chosen randomly among those supposedly stored by the node
- a random string
The node appends the string to the data associated with the key, hashes the result and returns this hash. The aim is for the node to prove that it holds the data without sending it, to save on bandwidth.
Your remark indicates that the same result could be achieved if the node was asked to send a part of the data at a random offset (here the challenge is composed of a random key plus a random offset).
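For comparison, here is a sketch of that offset variant, again with purely illustrative names and types: the challenge carries a random offset instead of a random string, and the node answers with that slice of the chunk rather than a hash.

```rust
/// Hypothetical offset-based challenge: prove possession by returning
/// chunk[offset..offset + len] for a randomly chosen offset.
struct OffsetChallenge {
    chunk_name: [u8; 32],
    offset: usize, // random offset into the chunk
    len: usize,    // number of bytes requested
}

/// The verifier compares the returned bytes against its own copy. This is
/// simpler to check, but the answer costs `len` bytes of bandwidth instead
/// of a fixed-size hash.
fn respond_with_slice(
    c: &OffsetChallenge,
    fetch: impl Fn(&[u8; 32]) -> Option<Vec<u8>>,
) -> Option<Vec<u8>> {
    let chunk = fetch(&c.chunk_name)?;
    let end = c.offset.checked_add(c.len)?;
    chunk.get(c.offset..end).map(|s| s.to_vec())
}
```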
Thanks @tfa for clarifying. No, I wasn't suggesting using an offset; I think my understanding is just the same as yours, assuming that when you say a key you mean the XOR name/ID.
NAT traversal seems to be really tricky. If Crust works in real situations then that’s a major achievement. But what happens when/if future changes are made to NAT solutions on the internet?
NAT has been a standard for a while, and it has lived past its expiry date by many years. Instead of being changed, it should just go away; that is long overdue. The first IPv6 alpha for Linux came out 20 years ago, and Windows has included a production-quality implementation since 2007. NAT is the workaround that should just die. I'm getting redundant here…
I think everyone forgot it’s Tuesday.
Oh yeah…
The next dev update was announced before the holidays to be on Jan 10th, my friends.