But please don’t use the Blockchain to validate if a chunk has been paid for… I don’t think we want to start baking that stupid thing into our base layer…
That’s the plan, via servers run by the Foundation. It’s in the September plan.
As I understand it, the plan is for the Foundation Oracle to tell the nodes when payment has been made, check whether the data has been uploaded, and then inform the smart contract that it can release the money to the node.
These are two different operations - the verification of the recorded data cannot be done on the blockchain, unless you count the release of the money from the smart contract as such a record?
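To make that split concrete, here’s a rough Rust sketch of how I read it - every name here (`Network`, `PaymentContract`, `settle`) is a made-up stand-in for illustration, not anything from the actual codebase:

```rust
/// Off-chain side: can the network confirm the chunk was actually stored?
trait Network {
    fn data_is_stored(&self, chunk_id: &[u8; 32]) -> bool;
}

/// On-chain side: the only blockchain operation is releasing the escrowed tokens.
trait PaymentContract {
    fn release_payment(&self, chunk_id: &[u8; 32], node_wallet: &str);
}

/// The oracle's job: verify the upload off-chain, then trigger the
/// on-chain release if (and only if) the data is really there.
fn settle<N: Network, P: PaymentContract>(
    net: &N,
    contract: &P,
    chunk_id: [u8; 32],
    node_wallet: &str,
) {
    if net.data_is_stored(&chunk_id) {
        contract.release_payment(&chunk_id, node_wallet);
    }
}
```

The point being: only `release_payment` ever touches the chain - the storage check happens entirely off it.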
Check out the Dev Forum
Connections on my router seem way down with this network. Apart from a peak of a couple of thousand when a node is starting, it seems to be around 800 to 2200 for 5 nodes. I was seeing about 3.5k constantly with the last network.
Getting a bit of shunning though…
Well - yeah… On the initial upload - when payment is done via the Blockchain - a node validates it has been paid via the Blockchain… But I’m with Dimitar here - that’s something slightly different…
I’m going by this from the September plan:
Uploaders
Uploaders will receive quotes from nodes, along with their wallet addresses and will be responsible for paying nodes in ERC-20 tokens. For large files with many chunks, a smart contract that batches payments for all chunks will be used to make the process more gas-efficient. Once a payment is made, the uploader will notify the nodes to check the blockchain - including a chunk or upload identifier in the notification message.
Endpoint
The Foundation will operate its own Ethereum nodes and provide a default RPC endpoint that nodes can use to read blockchain data.
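Reading that literally, the node-side check would look something like this - a minimal sketch, assuming hypothetical types (`PaymentNotification`, `RpcClient`), since none of this is the real safenode API:

```rust
/// Sent by the uploader once payment is made; carries the identifier the
/// node needs to look the payment up on chain (per the plan above).
struct PaymentNotification {
    chunk_id: [u8; 32], // chunk or upload identifier
}

/// Stand-in for a client pointed at the Foundation's default RPC endpoint.
trait RpcClient {
    /// Amount the batching contract recorded for this chunk and this
    /// node's wallet, if any payment was made.
    fn paid_amount(&self, chunk_id: &[u8; 32], wallet: &str) -> Option<u128>;
}

/// Node-side check: accept and store the chunk only once the quoted
/// amount is confirmed on chain.
fn accept_chunk<C: RpcClient>(
    rpc: &C,
    note: &PaymentNotification,
    my_wallet: &str,
    quoted_amount: u128,
) -> bool {
    match rpc.paid_amount(&note.chunk_id, my_wallet) {
        Some(paid) if paid >= quoted_amount => true, // payment verified
        _ => false,                                  // unpaid or underpaid
    }
}
```

i.e. the node never writes to the chain itself - it only reads the payment state through the Foundation’s RPC endpoint and stores the chunk if the quote was honoured.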
I’m dumb obvs! Can you explain the difference with:
I meant on every replication later on… which would mean we would need the Blockchain to be around forever
Ok, but that’s not what @Dimitar is talking about, is it?
I’m in too, you lot won’t shake me that easy
Or rebooting the PC before Reset is finished.
Should we be seeing rewards balances with the `safenode-manager status --details` command? All of mine say 0.00000000, and I have several hundred nodes running.
That’s what I call dedication - are you running generators?
I am, but I snuck in a few dedicated ones - my home rig is still OOS.
Uploads haven’t started yet
So I’m not alone then?
Nobody has nanos?
Are all these chunks/records I am storing left-over crap from folk restarting upgraded nodes, rather than zapping them and starting with clean vaults? <----- there’s a word you never hear these days
BYKWIM
About 250MB of records for 8h of running, and no nanos.
About 100 nodes
Uploads have started now (slowly at first). We reached a network size of 100K nodes a few minutes ago.
More will be scaled up in the hours and days to come throughout this week.
The 2000 nodes from our end peaked at an average of 85% CPU usage on the higher-capacity droplets in the first hour of going live, and memory peaked at an average of 10GB (25 safenodes per droplet).
Since then, over the past few hours, the average host CPU has been hovering at 18%, and memory levels are at 6.5GB. Therefore, given the current network size, we have decided to enable uploads (slowly at first).
No crashes of safenodes thus far, unlike in the first few hours of the prior testnet.
What a beast!
I don’t know how it happened, but at some stage over the years of following this project, seeing 24TB drives with names like “IRONWOLF PRO” became a viscerally thrilling thing. I’m pretty sure that didn’t use to be the case. Thanks a lot, MaidSafe
