A few thoughts/opinions that came to mind:
It makes a lot of sense for vaults to be fixed size, if only to keep the network simple (KISS), especially considering XOR space. You want a nice contiguous block of storage with no fragmentation. In other domains, Linux swap partitions and most virtual machines use a preallocated fixed size for best performance.
For the user to get the same effect as dynamic vault allocation, the easiest thing would be to run a script at the APP level that automatically spins up a new vault process with the user’s credentials (a rough sketch is below). Taking advantage of some virtualization might make this easier or more intuitive for the user: a lightweight virtual machine image could be set up with all of the user credentials and a fixed but reasonable vault size (e.g. ~32 GB to ~128 GB for a 1 TB HDD). Copies of this machine image could then be spun up or down in parallel as desired. Some sort of virtualization like this would probably be standard practice for vaults running on server hardware anyway.

The only downside from the user’s perspective is that the new vaults would start out as infants rather than inheriting the node age of the original. However, I think this is a good thing for keeping the network healthy. Hypothetically, the farming rewards may also balance out, since some of the younger nodes would be storing proportionately larger amounts of hot data. Time and experimentation will tell.
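Just to make the idea concrete, here’s a minimal sketch of what that APP-level script could look like. The `safe_vault` binary name and the `--root-dir`/`--max-capacity` flags are placeholders I made up for illustration, not the real CLI; the point is just that spinning up several identical fixed-size vault processes in parallel is a few lines of scripting:

```python
import subprocess
from pathlib import Path

# Placeholder binary name and flags -- swap in whatever the real vault
# executable and its config options actually are.
VAULT_BIN = "safe_vault"
VAULT_SIZE_BYTES = 64 * 1024**3   # fixed 64 GB per vault (within the ~32-128 GB range above)
NUM_VAULTS = 8                    # e.g. 8 x 64 GB fills roughly half of a 1 TB drive
BASE_DIR = Path.home() / "vaults"

def spawn_vaults():
    """Spin up NUM_VAULTS identical fixed-size vault processes in parallel."""
    procs = []
    for i in range(NUM_VAULTS):
        root = BASE_DIR / f"vault_{i}"
        root.mkdir(parents=True, exist_ok=True)
        # Each process gets its own storage directory and the same fixed capacity.
        # Flag names here are assumptions, not the real CLI.
        cmd = [
            VAULT_BIN,
            "--root-dir", str(root),
            "--max-capacity", str(VAULT_SIZE_BYTES),
        ]
        procs.append(subprocess.Popen(cmd))
    return procs

if __name__ == "__main__":
    for p in spawn_vaults():
        p.wait()
```

The same loop could just as easily clone and boot copies of a preconfigured VM image instead of launching local processes; either way the fixed vault size stays constant and only the count changes.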