New network going up now, testing 4MB chunks. Time to reset your nodes

But please don’t use the Blockchain to validate if a chunk has been paid for… I don’t think we want to start baking that stupid thing into our base layer…

4 Likes

That’s the plan, via servers run by the Foundation. It’s in the September plan.

2 Likes

As I understand it, the plan is for the Foundation Oracle to tell the nodes when payment has been made, check that the data has been uploaded, and then inform the smart contract that it can release the money to the node.

These are two different operations - the verification of the recorded data cannot be done on the blockchain, unless you call the release of the money from the smart contract such a record?
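To make the two operations concrete, here is a minimal sketch of the oracle step described above. All the types and names here (`MockChain`, `MockNode`, `oracle_release`) are hypothetical stand-ins, not the real Foundation Oracle or contract API: the point is just that release happens only when both sides check out, payment recorded on-chain and the chunk actually held by the node.

```rust
use std::collections::HashSet;

// Hypothetical stand-in for on-chain state: payments recorded per chunk,
// and which payments the contract has released to nodes.
struct MockChain {
    paid_chunks: HashSet<String>,
    released: HashSet<String>,
}

// Hypothetical stand-in for a storage node: the chunks it actually holds.
struct MockNode {
    stored_chunks: HashSet<String>,
}

/// Oracle step (sketch): release payment only if the chunk was paid for
/// on-chain AND the node is verifiably storing it.
fn oracle_release(chain: &mut MockChain, node: &MockNode, chunk_id: &str) -> bool {
    let paid = chain.paid_chunks.contains(chunk_id);
    let stored = node.stored_chunks.contains(chunk_id);
    if paid && stored {
        chain.released.insert(chunk_id.to_string());
        true
    } else {
        false
    }
}

fn main() {
    let mut chain = MockChain {
        paid_chunks: ["abc123".to_string()].into_iter().collect(),
        released: HashSet::new(),
    };
    let node = MockNode {
        stored_chunks: ["abc123".to_string()].into_iter().collect(),
    };

    // Paid and stored: the oracle tells the contract to release.
    assert!(oracle_release(&mut chain, &node, "abc123"));
    // Never paid for: nothing is released.
    assert!(!oracle_release(&mut chain, &node, "def456"));
}
```

Note the verification of the stored data happens off-chain (the oracle queries the node); only the release itself touches the contract.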



2 Likes

Connections on my router seem way down with this network. Apart from a peak of a couple of thousand when a node is starting, it seems to hover around 800 to 2,200 for 5 nodes. I was seeing about 3.5k constantly with the last network.

Getting a bit of shunning though…

1 Like

Well - yeah… On initial upload, when payment is done via the blockchain, a node validates it has been paid via the blockchain… But I’m with Dimitar here - that’s something slightly different…

1 Like

I’m going by this from the September plan:

Uploaders

Uploaders will receive quotes from nodes, along with their wallet addresses and will be responsible for paying nodes in ERC-20 tokens. For large files with many chunks, a smart contract that batches payments for all chunks will be used to make the process more gas-efficient. Once a payment is made, the uploader will notify the nodes to check the blockchain - including a chunk or upload identifier in the notification message.

Endpoint

The Foundation will operate its own Ethereum nodes and provide a default RPC endpoint that nodes can use to read blockchain data.

See network-roll-out-update
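The quoted flow above can be sketched end to end: one batched payment covering many chunks, then a notification carrying the chunk identifier so each node can check the chain itself. This is a hedged sketch under assumed names (`BatchPaymentContract`, `pay_batch`, `node_verifies` are invented for illustration, not the real contract or safenode API):

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the batching payment contract from the
/// September plan: one call records a payment entry for every chunk.
struct BatchPaymentContract {
    payments: HashMap<String, u64>, // chunk id -> amount (smallest token unit)
}

impl BatchPaymentContract {
    /// Uploader side: pay for all chunks of a file in one batch,
    /// which is what makes the process more gas-efficient.
    fn pay_batch(&mut self, chunks: &[(&str, u64)]) {
        for (id, amount) in chunks {
            self.payments.insert(id.to_string(), *amount);
        }
    }
}

/// Node side: on receiving a notification with a chunk identifier,
/// read blockchain state (mocked here; in practice via the Foundation's
/// RPC endpoint) and confirm the chunk was paid for.
fn node_verifies(contract: &BatchPaymentContract, chunk_id: &str) -> bool {
    contract.payments.contains_key(chunk_id)
}

fn main() {
    let mut contract = BatchPaymentContract { payments: HashMap::new() };

    // Uploader: one batched payment for three chunks.
    contract.pay_batch(&[("chunk-1", 10), ("chunk-2", 10), ("chunk-3", 10)]);

    // Each notified node checks the chain for its chunk id.
    assert!(node_verifies(&contract, "chunk-2"));
    assert!(!node_verifies(&contract, "chunk-9")); // never paid
}
```

The design point being debated in the thread is exactly where `node_verifies` runs: at initial upload it is a one-time check against the chain, which is different from baking a blockchain dependency into every later replication.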

I’m dumb obvs! Can you explain the difference with:

4 Likes

I meant on every replication later on… which would mean we would need the blockchain to be around forever.

1 Like

Ok, but that’s not what @Dimitar is talking about is it?

1 Like

I’m in too, you lot won’t shake me that easy :slight_smile:

7 Likes

Or rebooting the PC before Reset is finished.

2 Likes

Should we be seeing rewards balances with the safenode-manager status --details command? All of mine say 0.00000000, and I have several hundred nodes running.

1 Like

That’s what I call dedication, you running generators?

4 Likes

I am, but I snuck in a few. The dedicated home rig is still OOS.

5 Likes

Uploads haven’t started yet

2 Likes

So I’m not alone then?
Nobody has nanos?

Are all these chunks/records I am storing left-over crap from folk restarting upgraded nodes, rather than zapping them and starting with clean vaults? <----- there’s a word you never hear these days :slight_smile: BYKWIM

2 Likes

About 250MB of records after 8h running, and no nanos.
About 100 nodes.

1 Like

Uploads have started now (slowly at first). The network reached 100K nodes a few minutes ago.

More will be scaled up in the hours and days to come throughout this week.


The 2000 nodes from our end peaked at an average of 85% CPU usage on the higher capacity droplets in the first hour of going live, and memory peaked on average at 10GB (25 safenodes per droplet).

Since then, for the past few hours, the average host CPU has been hovering at 18% and memory at 6.5GB. Given the current network size, we have decided to enable uploads (slowly at first).

No crashes of safenodes thus far, unlike the first few hours of the prior testnet. :crossed_fingers:

10 Likes

Received my new drive today. Is there some application that requires storage?

16 Likes

What a beast! :muscle:t2:

9 Likes

I don’t know how it happened, but at some stage over the years of following this project, seeing 24TB drives with names like “IRONWOLF PRO” became a viscerally thrilling thing. I’m pretty sure that didn’t use to be the case. Thanks a lot, MaidSafe

10 Likes