Jan 14 Update

Hey everyone!

To date, 48,875 wallets have been automatically entered into the Random Rewards pool. And guess what? There’s still 12 days to go! To get entered, all you need to do is run your node and host uploaded data to help us test the network ahead of launch.

Speaking of the network…
The team shared some incredible updates with me today:

  • We now have approximately 150,000 nodes running—potentially making this our biggest testnet yet!
  • Together, at the current levels, these nodes have an overall hosting capacity to store an impressive 1PB of user data, spanning 53 countries and more than 380 cities (as best as we can tell).

Let’s push forward so we can build the future together!

On building…
We’ve started sharing more about what we’re doing here. We know some of you are eager for everything to be perfectly in place, and rest assured, we’re working hard to make that happen. But here’s the thing—many in Web3 thrive on discovering credible projects and exciting technology before anyone else, and we believe we tick both of those boxes!

Oh, and one more thing…
The team at Altcoin Daily just released a video about us! Please take a moment to share and comment if you can—it really helps spread the word and supports what we’re building together.

:heart: Rusty & @Gill_McLaughlin

30 Likes

Can I sneak a "First!" in?

10 Likes

Any maths on that figure?

In fact, 64GB is possible per node, which gives nearly 2PB of raw storage at 5 responsible nodes per chunk.

To have 1PB of data stored means all nodes have 32GB worth of chunks they are responsible for.
EDIT: This is the maths they are going for, just a slight misnaming as “data stored”. 32GB is the average expected used size of the nodes that the network is sized for.

You will be fact-checked if this is marketed as such, and it does not end well with this sort of inflation of figures. If it’s done for this, then people will wonder what else it is being done for.

150K * 64GB = 9.6PB, and at 5 responsible nodes per chunk that is nearly 2PB of raw capability.

But nodes are nowhere near storing that amount of data. Last I looked, nodes held around 1-2% of that amount in responsible chunks.
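For anyone who wants to check, here is that arithmetic as a quick sketch (decimal units; the 64GB/32GB per-node figures and the 5 replicas per chunk are the ones quoted above):

```python
# Back-of-envelope testnet capacity, using figures from this thread.
NODES = 150_000
MAX_NODE_GB = 64    # maximum chunk storage per node
AVG_NODE_GB = 32    # average fullness the network is sized for
REPLICAS = 5        # responsible nodes (copies) per chunk

raw_pb = NODES * MAX_NODE_GB / 1e6              # total disk across all nodes
unique_max_pb = raw_pb / REPLICAS               # unique data after 5x replication
unique_avg_pb = NODES * AVG_NODE_GB / REPLICAS / 1e6

print(f"Total disk:              {raw_pb:.2f} PB")         # 9.60 PB
print(f"Unique data, nodes full: {unique_max_pb:.2f} PB")  # 1.92 PB ("nearly 2PB")
print(f"Unique data, 32GB avg:   {unique_avg_pb:.2f} PB")  # 0.96 PB (the "1PB")
```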

13 Likes

Are there nodes in China? :face_with_peeking_eye:

3 Likes

check the map above :slight_smile:

5 Likes

Looks like it, and even almost North Korea. :astonished: Might be Southside on holiday :joy: :rofl:

7 Likes

It’s on the border in South Korea. Kim is not running nodes :laughing:

5 Likes

I like that the Balkan Peninsula is also lit up!
:freedom:



4 Likes

Just fired up a new server!

7 Likes

Yeah, let’s not trip over the maths at this late stage!

@rusty.spork Did you mean the capacity of the network is 1PB?

But anyway, I’m thinking that 16,384 maximum records x 4.1MB maximum record size x 150,000 nodes gives a maximum network capacity of 10,076,160,000 MB = 10,076,160 GB = 10,076.16 TB = 10.07616 PB. But we know the network would get hugely expensive anywhere near that level, so it shouldn’t happen.

But I don’t see how nodes can be anywhere near 10% full at the moment, which is what 1PB of data stored would require. And I can’t make it work any other way that I can see.

The biggest number of records I have on a node is 204. Leaving aside whether the node is responsible for them, and even bumping the number of records to 500 in case mine are rubbish, this isn’t enough data in the network to justify 1PB of data stored. 500 x 4.1MB max record size x 150,000 nodes is 307,500,000 MB = 307,500 GB = 307.5TB.
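Putting both estimates side by side as a sketch (decimal units; 16,384 records and 4.1MB per record are the limits quoted above, and 500 records per node is the deliberately generous guess):

```python
# Theoretical maximum vs a generous estimate of current fill.
NODES = 150_000
MAX_RECORDS = 16_384      # maximum records per node
RECORD_MB = 4.1           # maximum record size
CURRENT_RECORDS = 500     # generous guess; my fullest node holds 204

max_pb = MAX_RECORDS * RECORD_MB * NODES / 1e9      # MB -> PB
now_tb = CURRENT_RECORDS * RECORD_MB * NODES / 1e6  # MB -> TB

print(f"Theoretical maximum: {max_pb:.2f} PB")  # ~10.08 PB
print(f"Generous current:    {now_tb:.1f} TB")  # ~307.5 TB, nowhere near 1PB
```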

Or what am I doing wrong?

I hope that 1PB of data stored isn’t a count from the uploaders of what they think they’ve uploaded!

1 Like

Already chatted with them

Yes, they are using the metric of a 32GB average node size. Note: average. They will be using this figure because that is what an optimal size/fullness of a node would be.

I still work in binary for sizing. So yes 1PB in decimal.

Remember that any sizing has to account for a minimum of 5 replicas, so raw sizing is 1/5 of the disk space available.

There was mis-wording, which they acknowledged and, I have been told, will be changing in future communications. Something like “Available Data Storage” rather than “Data Stored”, which implies actual data being stored.
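To show the binary-vs-decimal point, a small sketch, assuming the 32GB average is really binary (32 GiB):

```python
# Same ~1PB figure: binary node sizes, decimal vs binary output.
NODES = 150_000
AVG_NODE_BYTES = 32 * 2**30   # 32 GiB per node (binary sizing)
REPLICAS = 5

unique_bytes = NODES * AVG_NODE_BYTES / REPLICAS
print(f"{unique_bytes / 1e15:.2f} PB (decimal)")   # ~1.03 PB
print(f"{unique_bytes / 2**50:.2f} PiB (binary)")  # ~0.92 PiB
```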

6 Likes

Hmm. If chunks are the “stuff” that files are made of, and files A and B both are made of chunks that include a common intersection of N chunks, and A resides on the network already, then if B is uploaded, will those N chunks in common be replicated, or simply pointed to in the map created for B?

For that matter, what do chunks look like? If I were to dig down and examine a chunk, what might I see? What is the likelihood that any two files might have chunks in common?

What is the optimal chunk size? Or does that vary?

Just some random thoughts emanating from the discussion about storage space… how much is shared between files… etc.

1 Like

Binary mince, mostly.
Most chunks are 4.2MB, but you get the occasional short one, like here

1 Like

I believe the short answer is NO,
but I also think the chances of two files containing an exactly identical chunk are vanishingly small, unless perhaps you are overwriting old logs or similar.

2 Likes

They would not be replicated, since the XOR address for a chunk is derived from the hash of the chunk, so the exact same chunk will always give the exact same XOR address. “Double replication” cannot be a thing: if the client tries to upload it again, nothing needs to be stored unless the client thinks another node should have it (i.e. a routing mistake).

It is difficult to get into this situation where chunks are shared. But a small file made up of numbers with leading spaces, at an exact size of 18 bytes, did cause most of them to have a common chunk. It wasn’t one single shared chunk when I tried millions of files, but there was a pattern where a huge number of files had one of their 3 chunks in common, and that repeated many times over the 5 million count. E.g. files 1 to 10,000 had a common chunk and 10,001 to 200,000 had another common chunk. So for that test I did an MD5 of the number to get unique chunks.
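For anyone following along, here is a minimal sketch of the content-addressing idea, not the network’s actual code (SHA-256 stands in for whatever hash the XOR address is derived from):

```python
import hashlib

store: dict[str, bytes] = {}  # address -> chunk, standing in for the network

def chunk_address(chunk: bytes) -> str:
    # A chunk's address is the hash of its content.
    return hashlib.sha256(chunk).hexdigest()

def put_chunk(chunk: bytes) -> str:
    addr = chunk_address(chunk)
    if addr not in store:
        store[addr] = chunk   # stored once, however many files share it
    return addr

a = put_chunk(b"chunk shared by files A and B")
b = put_chunk(b"chunk shared by files A and B")
assert a == b and len(store) == 1  # identical content -> identical address
```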

3 Likes