I got to uploading larger files by changing (source):

```diff
-const DEFAULT_TEST_COINS_AMOUNT: u64 = 777_000_000_000;
+const DEFAULT_TEST_COINS_AMOUNT: u64 = 4294967295_000_000_000;
```
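For context, the new number is just u32::MAX whole coins expressed in nano units, assuming one coin is 10^9 nanos. A quick sanity check (hypothetical, not code from the repo):

```rust
// Hypothetical check, not from the codebase: 4294967295 is u32::MAX whole
// coins, and NANOS_PER_COIN (10^9) is my assumption about the denomination.
const NANOS_PER_COIN: u64 = 1_000_000_000;
const DEFAULT_TEST_COINS_AMOUNT: u64 = 4294967295_000_000_000;

fn main() {
    assert_eq!(DEFAULT_TEST_COINS_AMOUNT, u32::MAX as u64 * NANOS_PER_COIN);
    // Comfortably below u64::MAX (about 1.8 * 10^19), so no overflow.
    println!("{}", DEFAULT_TEST_COINS_AMOUNT);
}
```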
Trying to display the account keys ended up being trickier than it was worth.
With this I could do some tests on large files. I used my desktop with 64 GB memory for this, since 16 GB on the laptop was going to be a bottleneck.
Keep in mind these tests are kinda silly; they’re not going to be possible on the real network, but pushing nodes to extreme stress can sometimes uncover useful things along the way.
Started with trying to upload a 5 GiB file using the `cat /tmp/5gb.dat | safe seq store -` hack.
- Creating the file to upload using `dd` took 117s (this is an indicator of disk speed so gives some point of comparison)
- client took 145s
- chunk never stored
- cpu idle after 145s
- max memory consumed was 25.6 GB (nodes never got started on this chunk so this is all memory consumed by the cli; see the sketch below)
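To illustrate why the cli alone can chew through memory like that with this hack: the entire piped file has to sit in RAM before anything can be stored, and each wrapping/serialization step makes another full copy. A rough sketch only, not the actual sn_cli code, with bincode standing in for whatever serialization is really used:

```rust
use std::io::{self, Read};

// Rough illustration of the buffering pattern, not the real sn_cli implementation.
fn main() -> io::Result<()> {
    // Copy 1: the whole piped file is read into memory up front.
    let mut data = Vec::new();
    io::stdin().read_to_end(&mut data)?;

    // Copy 2 (and beyond): packaging the bytes into a message makes further
    // full-size copies, so peak usage ends up a multiple of the file size.
    let message = bincode::serialize(&data).expect("failed to serialize");

    println!("buffered {} bytes, message {} bytes", data.len(), message.len());
    Ok(())
}
```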
The nodes could not process this and did no work; cpu use stayed very low, and they ultimately failed with “Error deserializing Client msg”:

```
[sn_node] INFO 2021-01-11T20:24:23.928531348+11:00 [src/node/mod.rs:229] unimplemented: Handle errors.. Logic error "Error deserializing Client msg"
```
But the nodes did return an xorurl, which the cli printed out; kinda odd. Fetching it would not work since no nodes had stored any chunks.
Then I tried uploading a 2 GiB file. The nodes didn’t balk at this like they did at 5 GiB, but were ultimately not successful at storing it.
- `dd` took 46s
- client took 97s
- chunk never stored
- cpu idle after 498s (34 Mbps; see the arithmetic below)
- maxed out the 64 GB of memory (got to 540 MB of swap)
The nodes did a lot of work trying to process this, but they failed with “Not enough space”:

```
[sn_node] INFO 2021-01-11T20:35:36.814288921+11:00 [src/node/node_duties/messaging/network_sender.rs:134] Sent to section with: MsgEnvelope { message: CmdError { error: Data(NetworkOther("Not enough space")), id: MessageId(16a602..), correlation_id: MessageId(42c648..), cmd_origin: Client(46412b..) }, origin: MsgSender { entity: Entity::TransientSectionKey::Bls(b42f83..)Section(Metadata), sig: None }, proxies: [] }
```
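As a rough cross-check on that 34 Mbps figure, 2 GiB over 498s works out to about 34.5 Mbit/s:

```rust
fn main() {
    // 2 GiB pushed over 498 seconds, expressed in megabits per second.
    let bits = 2.0 * 1024.0 * 1024.0 * 1024.0 * 8.0;
    let mbps = bits / 498.0 / 1_000_000.0;
    println!("{:.1} Mbps", mbps); // ~34.5 Mbps
}
```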
I think it’s just a bit too close to this limit once other account data is factored in (source):

```rust
const DEFAULT_MAX_CAPACITY: u64 = 2 * 1024 * 1024 * 1024;
```
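That constant works out to exactly 2 GiB per node, so a 2 GiB upload leaves zero headroom; anything else the node already holds tips it over. A toy check (the overhead figure here is made up purely to show the idea):

```rust
fn main() {
    const DEFAULT_MAX_CAPACITY: u64 = 2 * 1024 * 1024 * 1024; // exactly 2 GiB
    let upload: u64 = 2 * 1024 * 1024 * 1024;                 // the 2 GiB test file
    let other_data: u64 = 10 * 1024 * 1024;                   // hypothetical 10 MB of other account data

    assert!(upload + other_data > DEFAULT_MAX_CAPACITY);
    println!(
        "capacity {} bytes, needed at least {} bytes",
        DEFAULT_MAX_CAPACITY,
        upload + other_data
    );
}
```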
Anyway I’m satisfied that I’ve pushed this hack as far as I want. There are a few errors that could perhaps be looked into a bit further, but overall this hack isn’t gonna last and doesn’t represent realistic situations, so there’s not much point getting too invested in it.