Got a fun little hack for storing large files.
Previously I’ve maxed out at 160 MB using
$ safe files put /path/to/file
for immutable data; that 160 MB upload took 45 minutes, and anything larger than that failed to upload.
But we can store large files like this:
$ cat /path/to/my/big/file | safe seq store -
I tested it using random data of a fixed size (500 MiB):
$ dd if=/dev/urandom of=/tmp/data.bin bs=1M count=500
$ cat /tmp/data.bin | safe seq store -
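If anyone wants to reproduce the client-side numbers below, wrapping the commands in time is roughly how I'd do it (this only captures the client side; the node-side figures came from watching the node chunk stores and node activity, not from these commands):
$ time dd if=/dev/urandom of=/tmp/data.bin bs=1M count=500
$ time sh -c 'cat /tmp/data.bin | safe seq store -'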
Writing the initial 500 MiB file to disk with dd took 13.4736s (311 Mbps).
The cli command completed after 32s (131 Mbps), but the nodes were still computing and their chunk stores were still empty. As far as the client is concerned, the file has fully uploaded at this point.
The chunk was written to the node chunk store after 167s (25 Mbps), but the nodes were still computing.
The nodes stopped computing after 186s (22 Mbps). This is the ‘true’ upload time, although the client never knows it.
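(For anyone checking the Mbps figures, they're just 500 MiB expressed in megabits, about 4194 Mb, divided by the elapsed seconds, e.g.:)
$ echo "scale=1; 500*1024*1024*8/1000000/32" | bc    # ≈ 131 Mbps for cli completion
$ echo "scale=1; 500*1024*1024*8/1000000/186" | bc   # ≈ 22.5 Mbps for the 'true' upload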
The data is stored on the nodes as a single large chunk (that will need fixing for the real network) and does not go through self-encryption to break it down into smaller pieces.
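Purely as a sketch of what 'smaller pieces' could look like from the client side (this is my own workaround idea, not how self-encryption actually works; the 1 MiB piece size is arbitrary and you'd need to keep track of where each piece was stored to reassemble the file later):
$ split -b 1M /tmp/data.bin /tmp/piece_
$ for p in /tmp/piece_*; do cat "$p" | safe seq store -; done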
This isn’t going to be possible (I assume) in the real network, but I thought it was a nice way to store large files in the meantime. It’s also a good reminder that people will try all sorts of weird stuff and won’t necessarily follow the intentions of the api/cli.