Continuing the discussion from Wait, The Safe Network CAN'T Be Used As A VPN?:
Warren’s post raised some interesting questions that have been touched on in other threads in other categories. The topic is left as uncategorised because we don’t seem to have a suitable category for it.
From my understanding of the dynamics of the communications/processing, the overall speed of the system/network is determined by a few aspects, while the other areas are minuscule in comparison. The discussion has to rely in part on previous assurances from the dev team, while other areas rely on physics.
Please leave quantum physics/computing as theoretical, since no one will have them for the foreseeable future. And yes, these would make SAFE almost instantaneous, as they would most communications protocols/systems.
In my understanding, the areas that have the greatest effect on speed are:
Speed of light and resulting speed in wire/fibre
The time required to send data around the world will probably be the second-largest slowdown for the network. We know that there will be many hops for any chunk/SD sent from vaults to the requester. We also know that each hop is between close nodes in XOR space, which can mean any physical distance.
It would not be unreasonable to assume that any transfer will involve hops between continents. In a true XOR-space system with an even population distribution, the average hop would be expected to be a quarter of the way around the world. BUT the major internet population areas are the USA and the EU, with Asia/India catching up fast. So the average distance for the next couple of years is more likely to be considerably less. Let's use an overall average of 4,000 km, with the USA/EU as the major users of SAFE for the near future.
At 4,000 km, with the speed of light in a medium (plus repeaters) approximately half its speed in a vacuum, the average delay is going to be about 26 milliseconds one way, and handshaking requires at least 3 transmissions, giving approximately 80 ms of delay when sending a packet.
With large packets, a 1 MB chunk may require 2 or 3 packets, so the time is approximately 80 ms of handshake delay plus 3 × 26 ms for the packets. Thus, approximately 160 ms of delay per chunk due to electrical/light propagation.
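As a sanity check, those propagation figures can be reproduced with a few lines of Python. The 4,000 km average distance, half-of-c signal speed, 3-transmission handshake, and 3-packet chunk are the assumptions stated above, not measured values:

```python
# Napkin check of the propagation-delay figures above.
C_VACUUM_KM_S = 300_000            # speed of light in vacuum, km/s
signal_speed = C_VACUUM_KM_S / 2   # ~half of c in fibre/with repeaters
distance_km = 4_000                # assumed average hop distance

one_way_ms = distance_km / signal_speed * 1000
handshake_ms = 3 * one_way_ms              # 3 transmissions to handshake
chunk_ms = handshake_ms + 3 * one_way_ms   # plus 3 large data packets

print(f"one way:   ~{one_way_ms:.0f} ms")   # ~27 ms
print(f"handshake: ~{handshake_ms:.0f} ms") # ~80 ms
print(f"per chunk: ~{chunk_ms:.0f} ms")     # ~160 ms
```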
Data Rate
This is the second component in the communications delay issue and the major factor in the delay. A packet is not usable until it is fully received and its checksum verified. In simple terms, the real lag is the speed of the physical electrical/light layer plus the time required to receive the whole block over the physical layer.
With the current internet we can rely on the connecting backbone links being faster than any ISP<–>customer link, so they will not be significant.
For a sender and receiver each having a 10 Mbit/s up/down link, the data-rate delay for a 1 MB chunk is approximately 0.8 seconds. The delay is governed by the slower of the sender's uplink speed and the receiver's downlink speed.
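That serialization delay is a one-liner; the 1 MB chunk and 10 Mbit/s bottleneck link are the assumed figures from above:

```python
# Serialization (data-rate) delay for one chunk over the bottleneck link.
chunk_bytes = 1_000_000        # assumed 1 MB chunk
link_bits_per_s = 10_000_000   # assumed 10 Mbit/s bottleneck

transfer_s = chunk_bytes * 8 / link_bits_per_s
print(f"transfer delay: {transfer_s:.1f} s")  # 0.8 s
```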
Internet congestion
Some countries have “international” congestion due to the lack of adequate undersea links.
Many/most ISPs have congestion issues.
Most congestion occurs during "peak" times.
This will slow packets across the internet by an indeterminate amount: sometimes a few milliseconds, sometimes hundreds of milliseconds.
SAFE Routing
Unsure at this time; I just don't know the dynamics that will be required to successfully connect two nodes together. But it is assumed that the equivalent of one or a few extra hops will sometimes be required.
Areas with low impact on speed
Processor - most of the work in hops is data movement, which is very fast (microseconds).
Other hardware - this works at nanosecond speeds and requires little work.
When data reconstruction (decryption) OR node processing (decisions) occurs, the processor has more work to do, but we have been assured that this will be low, as one would expect from decryption and decision making. Expect less than a couple of milliseconds per chunk on a PC.
Routing processing, minimal - realise that PCs already route data when communicating on the net.
Summary
In my opinion, using the analysis above, the delay (lag) from the communication medium and from passing chunks/data from one node to the next many times to get a chunk from vault to receiver is by far the determining factor for the speed of the network.
Processor & hardware speeds play only a minor part in the delay equation: microseconds compared to milliseconds per hop. The largest processor/hardware delay will be en/decrypting the data at the client machine, but on modern processors that will still be much, much less than even one average hop's propagation delay.
For each hop, using fast connections (10 Mbit/s upload), the average expected delay will be on the order of (taking congestion delay as 0 ms):
150 ms + 800 ms + (max) 1 ms + 0 ms ~= 950 ms (average per hop)
(link delay + transfer delay + H/W & processor + congestion)
For a network requiring 5 hops per chunk, the delay can be expected (for a USA/EU user) to be on the order of 4.5-5 seconds for the first chunk. Chunks can be fetched in parallel, so ~5 seconds whether it is 1 MB or 1 GB. The hardware contributes about 10 ms of the whole delay, roughly 1 in 400 (0.25%).
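Putting the summary components together in Python, using the per-hop figures assumed above (150 ms link delay, 800 ms transfer, 1 ms hardware, 0 ms congestion, 5 hops):

```python
# Per-hop and end-to-end delay from the summary breakdown above.
link_ms = 150       # propagation + handshake (rounded)
transfer_ms = 800   # 1 MB over a 10 Mbit/s link
hw_ms = 1           # processor/hardware upper bound
congestion_ms = 0   # assumed idle network
hops = 5            # assumed hops per chunk

per_hop_ms = link_ms + transfer_ms + hw_ms + congestion_ms
total_s = hops * per_hop_ms / 1000

print(f"per hop: ~{per_hop_ms} ms")  # ~951 ms
print(f"5 hops:  ~{total_s} s")      # ~4.75 s
```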
Caching allows often-used chunks to travel fewer hops.
What can be done to speed this up?
Yes, quantum-entanglement communications, but seriously I'm really looking for practical solutions using today's technologies.
I can think of:
- Increased caching to reduce hops overall. This speeds up the overall average, but is useless for private data that is usually accessed only occasionally.
- Somehow reduce hops while keeping anonymity/security at the promised levels. I do not know how, though.
- Have more vaults in rented farms with high bandwidth. This means the average distance is reduced and the data rate between those nodes increases 100-fold. Only good if enough of the hops occur between these “virtual” machines.
- Reduce the maximum chunk size. At half the size, the average per-hop delay becomes 150 + 400 + 1 ~= 550 ms, or ~2.75 seconds for 5 hops; parallel fetching then means approximately the same time for a 1/2 MB file and a 1 GB file.
- EDIT: pass the chunk packet-by-packet through the “hop” node so that the per-hop transfer delay is reduced to roughly one packet's worth; this also nullifies the benefit of reducing the chunk size.
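The effect of the chunk-size idea can be sketched under the same assumptions as before (150 ms link delay per hop, 10 Mbit/s link, 5 hops); halving the chunk halves only the transfer component, which is why the saving is less than 2x:

```python
# Comparing full-size vs half-size chunks, same assumptions as above.
link_ms = 150   # propagation + handshake per hop
hops = 5

full_chunk_hop_ms = link_ms + 800   # 1 MB over 10 Mbit/s
half_chunk_hop_ms = link_ms + 400   # 1/2 MB over 10 Mbit/s

full_total_s = hops * full_chunk_hop_ms / 1000
half_total_s = hops * half_chunk_hop_ms / 1000

print(f"1 MB chunks:   {full_total_s:.2f} s")  # 4.75 s
print(f"1/2 MB chunks: {half_total_s:.2f} s")  # 2.75 s
```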
Please, can others help out with these figures, as they seem rather high for a simple 5-hop chunk transfer? Have I made a major mistake in my napkin analysis?