How is Autonomi different from other technology?

If we have temp data, then each chunk will need a timestamp. So no dedup for temp data. Permanent data won’t have a timestamp, so there should be no issue with perm data dedup under such a scheme.

Nodes would no longer replicate expired chunks during churn, so no client-side deletion is needed for temp data.

Adding timestamps is one of the difficult parts of this, but not impossible.
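Roughly what I’m picturing, as a Rust-flavoured sketch. To be clear, `ChunkKind`, `replicate_on_churn` and friends are names I’ve made up to illustrate the idea, not anything from the actual codebase:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

enum ChunkKind {
    /// Permanent chunks carry no timestamp, so identical content always
    /// hashes to the same address and dedup keeps working.
    Permanent,
    /// Temporary chunks carry a plaintext expiry, which makes every
    /// upload unique and so breaks dedup for temp data.
    Temporary { expires_at_secs: u64 },
}

fn now_secs() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

/// During churn a node replicates permanent chunks as usual but simply
/// skips expired temporary ones: no client-side deletion needed.
fn replicate_on_churn(kind: &ChunkKind) -> bool {
    match kind {
        ChunkKind::Permanent => true,
        ChunkKind::Temporary { expires_at_secs } => *expires_at_secs > now_secs(),
    }
}

fn main() {
    let perm = ChunkKind::Permanent;
    let stale = ChunkKind::Temporary { expires_at_secs: 0 };
    assert!(replicate_on_churn(&perm));
    assert!(!replicate_on_churn(&stale)); // expired: quietly dropped
}
```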

Yes, but we also offer some things that Amazon can’t:

Whatever the price :ant: charges for permanent data, we can do temp data even cheaper.


I will make another argument against temporary data: trying to reach consensus between nodes is a whole different level of complexity.


I doubt it’d be much different to the current consensus on whether a node should hold data.

Currently, if a node is responsible for some data and doesn’t demonstrate availability, I guess there’s a chance of being shunned (I’m not clear on how this works, but I assume it does).

So, with temporary data, nodes are just asking the same question of each other, plus whether the data has expired. If it isn’t expired, shunning should happen if it’s not available, but no shunning if it is expired and not available.
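In other words, something like this toy rule (pure guesswork on my part; `should_shun` is a made-up name, not the real shunning logic):

```rust
/// Hypothetical shunning rule, assuming nodes can already check
/// availability and somehow agree on expiry.
fn should_shun(serves_chunk: bool, expired: bool) -> bool {
    // Only punish a node for failing to serve data that is still live;
    // an expired chunk excuses unavailability.
    !serves_chunk && !expired
}

fn main() {
    assert!(should_shun(false, false)); // live data missing: shun
    assert!(!should_shun(false, true)); // expired and missing: no shun
    assert!(!should_shun(true, false)); // live data served: no shun
}
```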

If the time of validity were built into self-encryption, nodes wouldn’t need to reach consensus on when the data is valid until; it’d be baked in.

I’m saying this confidently with little idea what I’m talking about, so let me know why I’m wrong in the likely event that I am :laughing:


Time is not part of the network protocols, so time is meaningless: it can be faked by the uploader, by the requester asking to delete, or by the node that claims to know the time.

And if the time is inside the encryption, then the node has no clue whether a chunk has passed its time or not; the chunk is encrypted.

If you want a node to delete based on time, then you need two things: 1) time to be part of the protocol everywhere, i.e. a global clock, and 2) some metadata that lets the node use this global clock against the hash of the chunk in question, checked on every tick of that clock; or, for 2), some greater complexity for knowing whether a chunk has passed its allocated time.
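To make concrete how much machinery that implies, here is a rough Rust sketch of that bookkeeping. The global clock, the `ExpiryIndex` and the per-tick sweep are all hypothetical; none of this exists in the protocol:

```rust
use std::collections::HashMap;

type ChunkHash = [u8; 32];

struct ExpiryIndex {
    /// Requirement 2: plaintext metadata keyed by chunk hash, because
    /// the node cannot learn the expiry from the encrypted chunk itself.
    expires_at: HashMap<ChunkHash, u64>,
}

impl ExpiryIndex {
    /// Requirement 1 supplies `global_tick` (the network-wide clock).
    /// This would have to run on every tick, dropping whatever lapsed.
    fn sweep(&mut self, global_tick: u64, store: &mut HashMap<ChunkHash, Vec<u8>>) {
        self.expires_at.retain(|hash, expiry| {
            let keep = *expiry > global_tick;
            if !keep {
                store.remove(hash);
            }
            keep
        });
    }
}

fn main() {
    let hash = [0u8; 32];
    let mut store = HashMap::from([(hash, vec![1, 2, 3])]);
    let mut idx = ExpiryIndex { expires_at: HashMap::from([(hash, 100)]) };
    idx.sweep(101, &mut store); // tick 101 > expiry 100: chunk deleted
    assert!(store.is_empty());
}
```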

It’s not as simple as it seems when you want a node to have no knowledge of a chunk other than its encrypted data and its hash (for addressing).

For a lean, fast network, complexity has to be kept to a minimum.


Consensus for what? If the client and the chunk-accepting node agree on the timestamp, that’s all that’s needed. The timestamp is added onto the encrypted chunk, outside the encryption, so it’s readable by all nodes. This breaks dedup, as the hash of each upload will be unique, so we lose that for temp data. Nodes will drop expired chunks on churn. No need for consensus.
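A toy demo of why dedup breaks, using std’s `DefaultHasher` purely as a stand-in for the network’s real content addressing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a (toy) address from the ciphertext plus an optional
/// plaintext expiry sitting outside the encryption.
fn address(ciphertext: &[u8], expiry: Option<u64>) -> u64 {
    let mut h = DefaultHasher::new();
    ciphertext.hash(&mut h);
    if let Some(t) = expiry {
        // Mixing in the timestamp makes otherwise identical uploads
        // land at different addresses: dedup is lost for temp data.
        t.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let chunk = b"same encrypted bytes";
    // Permanent uploads of identical content collide, so dedup works:
    assert_eq!(address(chunk, None), address(chunk, None));
    // Temp uploads with different expiries get unique addresses:
    assert_ne!(address(chunk, Some(1_000)), address(chunk, Some(2_000)));
}
```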

Can there be errors or attacks here? Maybe some small amount, IDK, but it doesn’t have to be perfect to work 99.99% of the time.


I don’t see how it could be that hard to add a timestamp as metadata for a chunk… I expect the only consensus issue is agreeing on pricing for temp data at upload, based on how long it’ll be valid for, which would be a bit more complex than the current quoting process.
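For example, a duration-based quote could look something like this sketch; the pro-rata formula and the ten-year horizon are numbers plucked from the air, not anything from the actual quoting protocol:

```rust
/// Toy quote: permanent data pays the flat price, temp data pays
/// pro-rata on its requested lifetime, capped at the permanent price.
fn quote(perm_price: u64, validity_secs: Option<u64>) -> u64 {
    const TEN_YEARS_SECS: u64 = 10 * 31_557_600;
    match validity_secs {
        None => perm_price, // pay once, stored forever
        Some(secs) => {
            let pro_rata = perm_price.saturating_mul(secs) / TEN_YEARS_SECS;
            pro_rata.min(perm_price)
        }
    }
}

fn main() {
    let perm_price = 1_000_000; // made-up figure, units irrelevant
    assert_eq!(quote(perm_price, None), perm_price);
    // A one-year lifetime quotes at a tenth of the permanent price here:
    assert!(quote(perm_price, Some(31_557_600)) < perm_price);
}
```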

Maybe there’s a reason adding non-encrypted metadata to the chunks is tricky? Or has it just not been done because there hasn’t been a motivation, or because of the keen focus on keeping things as simple as possible for launch?

If it could be done, would it add much complexity, or is it probably easy?

It could open the door to temporary data, and possibly enable the optional deletion of private data by its owner (though it shouldn’t be enabled for public data, for censorship-resistance reasons and to avoid breaking the perpetual web).


I don’t think it’s a simple thing, but I am certain it’s possible and given the benefits, it’s worth a lot to the network to add it to the road map, so I’ll keep pushing for it.


Adding it to the roadmap implies it will be done, rather than it may be done. No need to decide that until we find out how immutable data alone copes.


If it doesn’t cope, the network dies … so IMO we do need to do all we can to ensure all data copes. Reducing the risk of cascade collapse is important, and temp data is a risk-mitigation strategy that we ignore at everyone’s peril.

That’s a different argument to it being nice to have, etc.

If temporary data pricing is linked to permanent data pricing, the latter will drag the former down with it, if your assertion is true. The ant nodes still have to host both.


Sure, but I’ve been making this argument all along.

This seems to be an assumption on your part. Why would it be linked in the first place? IDK how it might be managed, but I can’t see it being treated as some percentage of the perm data cost.

How wouldn’t it be linked?

Ant nodes would host both kinds of data, right? So, temporary data would need to subsidise the cost of permanent data, should new permanent data become scarce. That would surely increase the cost of temporary data beyond that of a non-permanent data network.

This raises the question: why wouldn’t you choose a temporary-only data network instead?

Autonomi is REALLY different from other technology.

What other tech would attempt a TGE with vast sections of the core technology untested and unreliable?

How many of you have successfully uploaded a single file or a directory recently from a home network? Testing only internally in data centres is simply unrealistic, misleading and downright dangerous to the entire project.

boooooooooooooooooooooooooom :smiley: thx

Well, one thing I’m happy about in not having a native token is that basically everything on the network can fail without causing a massive shitstorm. So, while it’s not optimal to go with so little testing, the TGE is not as much a “point of no return” as one might think. I personally don’t even think of it as “Network Launch”, but merely a Token Generation Event.

I’m almost certain that this thin ice of a network we are walking on will break when up- and downloading becomes easy for everyone. But maybe that’s just a couple of fails until it starts to work?


Yes and no. IIUC there could be no real damage except to the reputation of the project, and at this stage that is a VERY major factor.


There will certainly be pain points, but at some point, getting it out there and fixing forward is the way to go.

You are right.

But there may be all sorts of plans, and maybe even binding contracts, that make moving the TGE practically impossible.

And one thing I’m personally keenly waiting for is to be done with the mess of all these proxy tokens, eMAID, MAID, and the lack of clarity about the issued amounts etc. I don’t mean that there is any factual lack of clarity, but this must all be pretty dense to make sense of for anyone approaching the project nowadays. After TGE, there is The Token, and communicating about it will be so much easier.


Very true. Tonight I was at my first Glasgow Astronomical Society lecture since COVID. I met a bunch of old pals, some of them advanced geeks, but I really did not feel like explaining Autonomi to them cos it’s just so confusing.
I stuck to “I’m working with a project that will redefine data storage and security, I’ll tell you more about it next month.”


Let’s see if I have this right: the network fails because temp data can’t be deleted. Hmmmm, temp data is going to be the major data on the network.

And if it’s filling up the network, that means people are earning huge amounts because of all the data uploads, but no one buys new drives or adds a ton more nodes to cash in on those uploads. Hmmm, right, understood. No new nodes despite huge returns. The network dies because older data is never useful and never deleted. Interesting dynamics. Drives never get bigger when people upgrade their computers. Got it.