Upgrade from safenode 0.109.0 to 0.110.0 using safenode-manager

Truly BOFH-esque.

Glad you are on our side :slight_smile:

You sure you are not a Kiwi?

1 Like

Take a step back and reconsider before doing a major redesign. What are the benefits of keeping the xor address between restarts/upgrades? @neo detailed the drawback, but how likely is that attack? And are there other ways to mitigate it while still keeping the benefits of a stable xor address between restarts?

Because not retaining the xor address between restarts/upgrades is going to create a lot of problems, e.g., having to massively reshuffle all of the data on the network every single time there's an update. Please take a look at why AWS has Snowball to appreciate how unfeasible that would be for a network hoping to hold any large quantity of data.

So please keep the current xor address retention design and let's quantify the attack risk and design mitigation strategies if deemed sufficiently serious. No knee-jerk reactions please.

3 Likes

That is just one attack vector. I can detail more. The moment you can engineer the xor address is the moment you can take over the network. My attack just outlines one way to engineer the xor address.

BUT the elephant in the room is the feature that was removed for beta testing to ensure smoother rounds of testing, since it would have made testing more difficult.

That is the node doing a secondary encryption of the chunks as it stores them, so that at rest a chunk cannot be read by other applications/malware on the computer, or accessed by reading the disk contents after the node is stopped. The key is kept in memory, so no other app can access it.

That means that on restart the node has no knowledge of how to decrypt the chunks it is storing, and they become useless.
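To make that concrete, here is a minimal sketch of the idea, assuming an AEAD cipher via the aes-gcm crate (my choice purely for illustration; the node's actual implementation may differ):

```rust
// A rough sketch only, not the node's real code. Assumes the aes-gcm crate.
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

fn main() {
    // Generated at node start and held only in process memory: once the
    // process exits, the key is gone and the at-rest ciphertext is unreadable.
    let key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&key);

    let chunk = b"already self-encrypted chunk bytes".to_vec();

    // PUT: encrypt once before writing the chunk to disk.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // kept alongside the chunk
    let at_rest = cipher.encrypt(&nonce, chunk.as_ref()).expect("encrypt");

    // GET: decrypt once on the way out. No cost while the chunk sits on disk.
    let served = cipher.decrypt(&nonce, at_rest.as_ref()).expect("decrypt");
    assert_eq!(served, chunk);
}
```

The point is that the key only ever lives in memory; the chunk on disk is ciphertext from the moment it lands there.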

We'll be back to square one when this feature is reintroduced, which is one reason the "redesign" is going to be done. This isn't a knee-jerk reaction by David, but rather a return to the design it was always supposed to be.

9 Likes

Any idea how much extra RAM and CPU load this will cause?

2 Likes

The secondary encryption?

Consider that HTTPS does encryption on the fly and no one notices that load anymore; it does load the system down a little. I would expect the secondary encryption to be no heavier than that.

And it is only done on a GET or a PUT (decrypt and encrypt respectively).

3 Likes

@neo have you considered the following?

I understand what you're saying. But I still believe that a thorough quantification of the risk, and consideration of strategies to mitigate it, is very worthwhile.

A voice is missing here. Who implemented the current design with xor retention, and why? Their practical experience, and what we're seeing with bandwidth requirements, need to be heard.

1 Like

bandwidth usage - at least atm - is many times the amount of stored data per day anyway …

if that doesn't change drastically (very drastically) I see no point in retaining data

4 Likes

And it will only be worse with a system where the xor address changes on every small update, i.e., if data isn't retained then even more bandwidth will be required. Bandwidth has a hard limit, but storage doesn't. Folks need to get practical here.

Same for nodes needing to be larger in size. When I first suggested that, I was pilloried, but practical experience is showing the reality to everyone.

still, for me a complete wipe and restart doesn't matter at all bandwidth-wise … because it would increase the overall daily usage by a single-digit percentage … if at all …

reducing the continuous load - connection-count and bandwidth-usage wise - would make a difference … resets don't

2 Likes

I hear what you're saying, but that's right now. It doesn't matter much right now. But think forward to when the network has exabytes of data stored (at least that's the hope). Do you want exabytes of old data having to be reshuffled every single time there's a small update?

how precisely does that change the picture?

then the continuous load would still be many tens or hundreds of exabytes per day anyway …

3 Likes

I fear we might be speaking past each other.

Imagine you already have y exabytes of old data stored. Then you have some amount x of new data. A system that retains old data where it is only has to deal with the bandwidth load of the x new data. But a system that doesn't retain old data where it is has to deal with the bandwidth load of the x new data AND the y old data. So it'll compound the issue you're talking about.

And when you add the compounding factor of time, y will become a far larger burden.
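To put made-up numbers on that (the figures below are purely hypothetical, chosen only to show the scaling):

```rust
// Hypothetical figures, only to illustrate the scaling argument above.
fn main() {
    let y_old_pb = 1_000.0_f64; // ~1 EB of old data already stored (assumed)
    let x_new_pb = 10.0_f64;    // new data written during one upgrade cycle (assumed)

    let retain = x_new_pb;           // addresses kept: only new data moves
    let reset = x_new_pb + y_old_pb; // addresses reset: old data reshuffles too

    println!("retain: {retain} PB, reset: {reset} PB (~{:.0}x more)", reset / retain);
}
```

Whatever the real figures turn out to be, the reset case always pays the x cost plus all of y on top, and y only grows over time.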

okay - I think the trouble with keeping the data is rather obvious now - so how about first doing the simple thing and removing the old data, and in a second step optimising it and thinking about ways of doing it smarter once we have a smoothly running and stable network …?

the current goal anyway is that it's not a few people providing exabytes of data but everybody providing just a couple of gigabytes, from what I thought :wink:

3 Likes

You have me completely confused. How does deleting previously stored data remove burden? It only adds to the burden, because the data still needs to be stored somewhere - new and old data alike. Your choice is between dealing with the bandwidth requirement of new data alone OR of new+old data if you delete the old data.

So many obvious things being conflated here.

Again, who designed retaining the xor address? I would love to hear that person's opinion. Otherwise, I'll assume that design comes with libp2p, and if that's the case, I'm sorry but I'd rather go with the practical experience of the team whose routing design moved us so far along.
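For anyone wondering what the libp2p side of this looks like, here is a minimal sketch of the usual rust-libp2p pattern, assuming the node persists its keypair to a file (the file name and error handling are illustrative only, not the node's actual code):

```rust
// A sketch assuming rust-libp2p: a persisted keypair pins the node identity.
use libp2p::identity::Keypair;
use std::fs;

fn load_or_create_identity(path: &str) -> Keypair {
    match fs::read(path) {
        // Restart/upgrade: reuse the stored key, so the PeerId (and the
        // Kademlia/xor address derived from it) stays the same.
        Ok(bytes) => Keypair::from_protobuf_encoding(&bytes).expect("valid key file"),
        // First run: generate a key and persist it for future restarts.
        Err(_) => {
            let kp = Keypair::generate_ed25519();
            fs::write(path, kp.to_protobuf_encoding().expect("encodable")).expect("write key");
            kp
        }
    }
}

fn main() {
    let kp = load_or_create_identity("node_identity.key");
    // Delete node_identity.key before a restart and you get a fresh PeerId,
    // i.e. a new xor address and a full reshuffle of the data the node held.
    println!("peer id: {}", kp.public().to_peer_id());
}
```

So retaining the address is just a matter of keeping that key file; as far as I know, libp2p itself doesn't force either choice.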

The change may have slipped through with the switch to libp2p.

Stepping back, I think what's lacking at this stage is review and oversight from people who know the project but aren't working on delivering it, including theoretical red-team thinkers doing attack thought experiments.

Also, AFAIK I'm the only person exercising the APIs against the real network, outside and possibly inside the team. We can see that running against testnets is one thing and running against a real network is another. So the project needs more of that as soon as possible, not later when they get around to documenting the API, adding support for other languages, etc.

It's not too late for any of the above, and I think that without it, it will be harder to prevent things dragging out much longer than anticipated or necessary.

I'm not optimistic though, because I don't see much listening to the community going on compared to the past. Earlier I accepted that change on the assumption that the network was pretty well done: partners lined up, docs and marketing straightforward, and so on. But I see I was mistaken, and so am calling for a shift in perspective - back to transparency, realism and building together to get things sound and on an achievable track.

The community has been downgraded if not sidelined, and in retrospect I think that was a mistake - or rather premature. Some wishful thinking, perhaps, driven by the idea that the hard work was done and it was a matter of turning the handle like any other project. But this is not yet a project like any other.

Some here know that, and there's a lot of value in the collective knowledge of the community. I think the change is premature and regrettable. Yet we see that, regardless, members of the community continue to surface issues, including what seem like design decisions or oversights from literally years ago.

I am not critical of anyone here; if I'm calling it correctly, this is nobody's fault, and we are all part of how we got here, including myself.

My point is that I want to be able to voice some things that might be hard to say or hear inside Autonomi, in case I'm right.

I still hope the narrative we're given isn't supported largely by wishful thinking. If I'm wrong, fantastic - I'd much prefer that to be the case, and to be watching a successful launch in a couple of months. :pray:

14 Likes

removes dev/testing burden right now, because there are more important things to do :wink:

2 Likes

This still makes no sense. What are you talking about? Haha. You still need to store the data somewhere. Deleting data will only make bandwidth requirements worse, as someone must recall that data and store it (and you have to send that data out before deleting it; double whammy lol).

and you keep saying we should make the pilot's seat of the jumbo jet out of carbon fibre instead of glass fibre/metal, because that really would reduce the weight of the thing dramatically and is therefore essential to the economic operation of the whole machine

keeping or throwing away the data is a drop in the ocean of bandwidth usage - but I'm out of here now - I have better use for my time …

3 Likes

You've articulated this very beautifully. I'm 100% in agreement.

Was thinking the same. And if so, best to leave as is.

Completely agree.

YES

I really think this isn't the time for further design changes. @maidsafe @bux @dirvine, can we please focus on addressing the API issues that @happybeing has identified and launch this network based on what has been tested over the past year?

3 Likes

so bogard can execute one of the attack angles that come from keeping the xor address, like @neo outlined above :handshake: - valid plan

2 Likes