Network performance concerns

Wouldn’t there be a minimum latency of:

number of hops × average latency per hop

And the unavoidable intercontinental hops are going to be killer in that equation.

In other words, as I noted elsewhere, you’re never going to get interactive (phones, action games) communications at an acceptable performance level if they are routed entirely within SAFEnet, even if overhead is reduced to zero.
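As a hedged illustration of that floor, here is the arithmetic with assumed hop counts and per-hop latencies (none of these are measured SAFE figures; the 150 ms intercontinental number matches the backbone tables discussed later in the thread):

```python
# Minimum-latency floor: sum of per-hop latencies along the route.
# All hop latencies below are assumptions for illustration only.

def min_latency_ms(hop_latencies_ms):
    """Lower bound on one-way latency: the hops cannot be avoided."""
    return sum(hop_latencies_ms)

# Example route: three regional hops plus two intercontinental ones.
regional_hops = [20, 20, 30]        # ms each (assumed)
intercontinental_hops = [150, 150]  # ms each (assumed backbone figure)

total = min_latency_ms(regional_hops + intercontinental_hops)
print(total)  # 370 -- far above a ~50 ms budget for fast-paced games
```

Even with zero protocol overhead, two intercontinental hops alone already exceed the budget, which is the point being made.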

1 Like

In this case we would be fine :wink:

Seriously though, there is rarely “never” and rarely “guaranteed” in engineering. Use cases differ and no one tool does everything. For voice calls, for instance: route the connection setup through SAFE, then create a secured channel between the two endpoints. This uses routing to establish a secure link, so establishment may be slower than a straight call, but possibly acceptably so. That is one simple example, but as I say there is no one-size-fits-all; we do not yet know what each use case entails or what advances will be made in this field.
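A minimal sketch of that voice-call flow, under loud assumptions: every name here (`Peer`, `rendezvous_via_safe`, `open_direct_channel`) is hypothetical, the crypto is stubbed out, and this is only the control flow being described, not SAFE's actual API:

```python
# Hypothetical sketch of "set up via SAFE routing, then talk direct".
# Names are illustrative assumptions, not the real SAFE interfaces.

class Peer:
    def __init__(self, name, endpoint, public_key):
        self.name = name
        self.endpoint = endpoint
        self.public_key = public_key

def rendezvous_via_safe(a, b):
    """Exchange endpoints and keys over the (slow, multi-hop) SAFE route."""
    # In reality this traverses SAFE routing; here we only model the
    # information each side learns about the other.
    return {"a_knows": (b.endpoint, b.public_key),
            "b_knows": (a.endpoint, a.public_key)}

def open_direct_channel(a, b, handshake):
    """Direct link using keys learned via SAFE, resisting MITM."""
    # Authenticated encryption would wrap this; the keys came over SAFE,
    # so neither side trusts anything learned out of band.
    assert handshake["a_knows"] == (b.endpoint, b.public_key)
    assert handshake["b_knows"] == (a.endpoint, a.public_key)
    return f"direct:{a.endpoint}<->{b.endpoint}"

alice = Peer("alice", "1.2.3.4:5000", "pkA")
bob = Peer("bob", "5.6.7.8:5000", "pkB")
channel = open_direct_channel(alice, bob, rendezvous_via_safe(alice, bob))
print(channel)  # direct:1.2.3.4:5000<->5.6.7.8:5000
```

The design point is that only the (latency-tolerant) setup pays the multi-hop cost; the latency-sensitive media then flows over the direct link.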

I am not even sure what “real time” means here. In any case, there is circa 80-100 milliseconds between your eyes and brain, then on to your muscles. Just as every rainbow is unique to each person who sees it (even standing together), the world we all see “changes” at different speeds for each of us (just a side point really).

i.e. if we had quantum entanglement we would be fine :wink: More likely, though, is terabit/byte broadband in X months/years. Then the ground moves under us as to what is achievable.

When we started this, the net was almost saturated, with only ADSL and still many modems. Look at where we are today: 4K movies, several at a time, streamed into many rooms of a house.

It’s all up for grabs, but the easiest way to predict the future, as they say, is to make it :smiley: so we try our bit and see where we fit. Design for tomorrow and realise we live in times of exponential growth in this space; that means change faster than we can imagine in many cases.

17 Likes

That’s what I was driving at. But it takes you to say it, before the Facebook-like groupmind endorses it with likes. :laughing:

Well, the stats I quoted were (off the top of my head) a few tens of ms within a country and perhaps 150 ms between continents (US to Australia, for example). So you would need only a couple of the latter to introduce a very perceptible delay, many times what is considered acceptable for first-person shooter games.

1 Like

It’s really just determined by the network speeds of the people using the network. They can’t really control that.

Once safecoin comes into play and vaults are rewarded, their quality in terms of connection speed and time online will skyrocket. Only 35 droplets on this test, too.

5 Likes

If QE comes (and it’s there in quantum computers now, I thought, over very short distances), it will be one distributed global computer, as if it were closer than local; imagine the throughput. Still, you’ll be able to do regional games. Wouldn’t the low-earth-orbit sats help? No, any sat jump adds way too much latency. Still, I think the most compelling use case is 5G mesh on open-hardware verified handsets, and hopefully IoT mesh if the IoT standard opens a bit to where it could accommodate SAFE.

Optimized for SAFE; I’ve been thinking that for a long time.

2 Likes

@Warren

And I assure you, you’re not by any means the only one thinking it. :slight_smile:

1 Like

…and with the 1-vault-per-machine limit, plus extra activity from increased allowances: 1.4 GB down / 1.9 GB up in 24 hours for me. I should be pulling in some good coin at that hit rate :wink:

6 Likes

Nope, you totally missed the point, no doubt because you didn’t follow my links. The fundamental limit (minimum latency of the network) is determined by:

Intercontinental backbone speeds (latency in the range of 150 ms; check the table I linked to) and the geographically-agnostic nature of SAFE’s implementation of XOR space.

And the study I linked to was indicating a maximum acceptable latency of about 50ms for fast-paced games.

That is even if SAFE’s internal overhead is reduced to zero. No-one has contested those technical facts.

So @dirvine, after some arm-waving about future technology, says that maybe SAFE could be used to set up secure out-of-channel connections, with the actual data traversing the non-SAFE Internet. And I noted that I already addressed that possibility with my post on obfuscated file transfer (you can search for the term, but no doubt you won’t :wink:)

1 Like

No, not arm waving (why do folk keep saying that?). I was saying the secured channel could be set up via SAFE, not out of band, but in SAFE. Then a secured direct connection is created, still using SAFE tech to resist MITM etc. I do not see how any of that is arm waving; can you explain, please? I try to keep things clean and clear for many to understand; maybe that is arm waving, but I think it’s a bit disrespectful to talk of arm waving. I could state the exact algorithms and authenticated encryption streams for such schemes etc. if you want to be more exact (they are coded up in the secure_serialisation crate if you need to see the detail of the code).

I would hope I do not arm wave, but if I do then let’s do something to stop that perception.

16 Likes

The latency of SAFE is not determined by “the network speeds of the people using the network” but by the presence of long-range hops around the world. Look up the table I linked to (but you won’t, of course) from a backbone provider listing latency (ping time) within and between countries and regions.

It should be derived from the fastest member of a group that can communicate with you. This is a part we are working on now. Every member sends back a validation message (which is lazily accumulated); you accept the first and ask for the data. This should move us from the slowest node determining the speed of recovery to the fastest one. It should be a large improvement.
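A toy model of that first-responder scheme, with assumed names and delays (this is not the actual routing code, just the described behaviour: accept the first validation, fetch from that node):

```python
# Hypothetical model: retrieval speed is set by the fastest group member
# rather than the slowest. Delays below are simulated, not measured.

import asyncio

async def member_validate(name, delay_s):
    await asyncio.sleep(delay_s)  # simulated per-member network latency
    return name                   # stands in for a validation message

async def fetch_from_fastest(group):
    """Return the first member to validate; fetch the data from it."""
    tasks = [asyncio.ensure_future(member_validate(n, d)) for n, d in group]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()  # in reality, late validations still accumulate lazily
    await asyncio.gather(*pending, return_exceptions=True)
    return done.pop().result()

group = [("slow-node", 0.30), ("fast-node", 0.01), ("mid-node", 0.10)]
winner = asyncio.run(fetch_from_fastest(group))
print(winner)  # fast-node
```

The slow and mid nodes never gate the fetch; averaged over many chunks, the observed speed tracks the fastest responders.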

9 Likes

That could be called extrapolation but in the context of what has to work here and now, it is arm-waving.

So any look to the future and using advances in tech is arm waving? Ok then I stand accused and guilty as I always will :wink:

3 Likes

Well, that’s where I start to get out of my depth technically but anyway:

Latency (response time for a request) is not the same as bulk speed (throughput of bits per second). Your neighbour in XOR space might have great “speed” (throughput) but poor latency (because he’s on the other side of the world). Could you clarify just which it is that the team is working to optimize?
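The latency/throughput distinction can be made concrete with assumed numbers (nothing below is a measured SAFE figure; the model is simply fetch time = round-trip latency + bits ÷ bandwidth):

```python
# Illustrative only: latency and throughput are separate axes.
# A fat-pipe neighbour on the other side of the world can still be
# slow to *respond*, even though bulk transfer is fast once started.

def transfer_time_s(latency_ms, size_mb, throughput_mbps):
    """One fetch: round-trip latency plus time to push the bits."""
    return latency_ms / 1000.0 + (size_mb * 8) / throughput_mbps

# Small 0.1 MB chunk: the nearby, modest-pipe node wins (latency dominates).
far_fat = transfer_time_s(latency_ms=300, size_mb=0.1, throughput_mbps=100)
near_thin = transfer_time_s(latency_ms=20, size_mb=0.1, throughput_mbps=20)
print(near_thin < far_fat)  # True

# Large 100 MB file: the far, fat-pipe node wins (throughput dominates).
far_fat_big = transfer_time_s(latency_ms=300, size_mb=100, throughput_mbps=100)
near_thin_big = transfer_time_s(latency_ms=20, size_mb=100, throughput_mbps=20)
print(far_fat_big < near_thin_big)  # True
```

Which one matters depends on the workload, which is exactly why the question of what is being optimized is worth asking.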

It’s arm-waving if such speculation about the next (or next-after-that) generation of network technology is outside the time horizon of alpha-to-beta-to-production SAFEnetwork development. These technologies have a typical generation time of (I would guess) 5-10 years. Is that the time you’re looking at for a production SAFEnet? Because I get the impression that it is supposed to be much less.

Measure the output: if, after all variables are taken into account, a node is first to respond, then it’s the fastest at that moment in time (it can change in the next millisecond). Take all the science and semantics out of it, and the outcome will be very much in favour of getting the data from the node that can give it to you fastest. Then average that over many, many chunks and you have the current network average speed.

Latency, throughput, congestion control, sliding windows, ICMP, TCP, SCTP, multilink TCP, bandwidth, CPU, disk space, caching etc. all taken into account, the outcome is what is best measured. A bit like programming: focusing on the efficiency of small parts is premature optimisation. Best to measure from the highest point, generally what happens at the user side after all the magic or whatever has happened.
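One way to picture that top-level measurement, with made-up per-chunk timings (nothing here is real network data):

```python
# Measure "from the highest point": time what the user actually observes
# per chunk, then average, instead of modelling each lower-layer variable.

chunk_times_s = [0.12, 0.09, 0.30, 0.11, 0.08]  # observed fetch times (made up)
chunk_size_mb = 1.0

avg_time_s = sum(chunk_times_s) / len(chunk_times_s)
avg_speed_mbps = (chunk_size_mb * 8) / avg_time_s  # network average, as seen

print(round(avg_time_s, 2), round(avg_speed_mbps, 1))  # 0.14 57.1
```

All the lower-layer effects (congestion, caching, CPU, disk) are folded into the observed numbers automatically, which is the point of measuring at the user side.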

5 Likes