That works as long as those par blobs cover only a single chunk. Ever tried to repair a 20GB file that had par sections? The time taken to repair the data is not linearly proportional to the size of the original data section; it grows super-linearly, to the point it feels exponential.
There were some discussions in the past about self encryption having par "slices" within each chunk, so the chunk could be repaired if a transmission error occurred. But transmission errors are actually quite rare, since faulty packets are detected and resent.
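For what it's worth, the per-chunk "slices" idea is easy to sketch. This is NOT how self encryption actually works, just a toy illustration of the concept using a single XOR parity slice (real par sections use Reed-Solomon and can survive more than one lost slice):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_slices(chunk: bytes, k: int = 4) -> list[bytes]:
    """Split a chunk into k equal slices and append one XOR parity slice.

    Any single lost slice can then be rebuilt from the other k slices,
    without touching any other chunk of the file.
    """
    assert len(chunk) % k == 0, "pad the chunk to a multiple of k first"
    size = len(chunk) // k
    slices = [chunk[i * size:(i + 1) * size] for i in range(k)]
    return slices + [reduce(xor, slices)]

def repair(slices: list[bytes], lost: int) -> bytes:
    """Rebuild the slice at index `lost` by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(slices) if i != lost]
    return reduce(xor, survivors)
```

The point of doing this per chunk rather than per file is exactly the repair-cost argument above: the work to fix a damaged slice is bounded by the chunk size, not the file size.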
Replication is perhaps overkill, but it also solves some other issues that, in a global system, start to dominate. I am sure both Storj's method and SAFE's method will work; the question is how well each compares overall. In SAFE's design, Storj's method would require some redesign and would make delivery of chunks quite a bit more complex, for unclear savings.
Storj's design, if I am not mistaken, relies on the storage nodes being continuously active with only small downtimes, maybe staying up for weeks at a time. SAFE does not have this requirement and could actually work with nodes that have much higher downtime, e.g. some nodes up for days at a time and some for a day or less. That is done so that running a node is feasible for a wider cross-section of the world's population.
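The uptime trade-off can be put in rough numbers with standard binomial math. The parameters below (8-way replication, a 20-of-40 erasure code, the uptime figures) are purely illustrative, not either network's real settings:

```python
from math import comb

def availability(p: float, n: int, k: int) -> float:
    """P(at least k of n independent nodes are up), each up with prob p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Replication: a chunk is retrievable if any 1 of its 8 copies is up.
# Erasure coding: retrievable only if 20 of its 40 pieces are up
# (2.5x less storage than 8 full copies, but far more uptime-sensitive).
for p in (0.5, 0.7, 0.9):
    rep = availability(p, 8, 1)
    ec = availability(p, 40, 20)
    print(f"node uptime {p}: replication {rep:.6f}  erasure code {ec:.6f}")
```

Running this shows the gap: with highly available nodes both schemes look fine, but as node uptime drops toward 50%, the erasure-coded chunk's availability collapses while the replicated one barely moves, which is the intuition behind SAFE tolerating flakier nodes.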