For sanity’s sake we made quotes valid for an hour, so that should be plenty for big uploads.
I think holding the last 3 (or even 5) price changes could also work to some extent, but in periods of heavy churn the price will change so quickly that we end up in the same scenario of people being rejected after paying, multiple times. The node’s contract with itself in the quote remains the simplest way for the node to keep track of pending uploads without having to store them locally.
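For illustration, that “contract with itself” might look something like this (a minimal sketch only; field names and the signing scheme are placeholders, not the actual node types):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Illustrative only: these fields are placeholders, not the real node types.
struct Quote {
    chunk_address: [u8; 32], // which chunk this quote covers
    price: u64,              // quoted store cost
    timestamp: u64,          // unix seconds when the quote was issued
    signature: Vec<u8>,      // node's signature over the fields above
}

const QUOTE_VALIDITY_SECS: u64 = 3600; // the one-hour window

impl Quote {
    /// On payment the node just re-checks its own signature and the expiry;
    /// nothing about the pending upload is stored locally in the meantime.
    fn is_valid(&self, verify_sig: impl Fn(&Quote) -> bool) -> bool {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before unix epoch")
            .as_secs();
        verify_sig(self) && now.saturating_sub(self.timestamp) <= QUOTE_VALIDITY_SECS
    }
}
```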
If you mean no upload should take more than an hour, I don’t think that’s a valid assumption. It may be acceptable, but it would be useful to understand the boundary conditions (e.g. expected upload time for given upload sizes on different speed connections).
In times of fast price changes it also seems unlikely people will want nodes to ignore them and allow anything <1hr old so…
…maybe a node can use a combination of both time and recent store cost changes? Then accept if either criterion is met. That might help reduce unnecessary rejections, making the node more efficient.
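Roughly, in pseudo-Rust (names are illustrative only, not real node code):

```rust
/// Accept a quote if EITHER it is still inside the time window OR the price
/// it carries is one the node actually charged recently.
fn accept_quote(quote_age_secs: u64, quoted_price: u64, recent_costs: &[u64]) -> bool {
    const MAX_AGE_SECS: u64 = 3600; // the one-hour window
    let fresh_enough = quote_age_secs <= MAX_AGE_SECS;
    // e.g. recent_costs holds the node's last 3-5 store cost values
    let price_still_current = recent_costs.contains(&quoted_price);
    fresh_enough || price_still_current
}
```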
I was thinking along the lines of a video many GB in size. I’ve seen YouTube videos of 28 GB at 1080p quality.
Thus if a client requests quotes from up to 28,000 nodes for 28,000 chunks just so the person running the client can know whether they can afford to upload the file, your 1 hour time frame wouldn’t be enough, because much of the world is still limited to upload speeds of 40 Mbit/s (5 MB/s) or less. And that is assuming they can max out their upload. Now imagine higher quality videos.
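Spelling out the arithmetic for that case (28 GB pushed through a fully saturated 5 MB/s uplink):

```latex
t_{\text{upload}} = \frac{28\,\text{GB}}{5\,\text{MB/s}}
                  = \frac{28{,}000\,\text{MB}}{5\,\text{MB/s}}
                  = 5600\,\text{s} \approx 1.6\,\text{h} > 1\,\text{h}
```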
Now I fully understand this is a starting point, and I agree that time is a reasonable one; maybe, as @happybeing suggests, a combination of time and a history of a few price changes could work well in the longer term.
Just wondering: do you expect this to be happening once the network is no longer a tiny one, or a micro one like our testnets? Surely (don’t call me Shirley) you wouldn’t expect heavy churn to cause nodes to just fill up too much, would you? In my mind, if it is happening a lot then that suggests many other issues will be occurring, since bandwidth and/or quotas will be causing problems for people running many nodes at home.
Good example! I see where I might have misled you. The way we pay for data is the following: we pay for and send chunks in batches (the default being 64 chunks), so the one-hour frame is the time allowed to upload those 64 chunks. The batches go one after the other until all 28,000 chunks are done. So the time frame isn’t per file but per chunk, and the biggest concurrent competition a chunk will ever have is with its own batch!
This means you need an internet connection of roughly 64 MB / 3600 s ≈ 18 kB/s (about 142 kbit/s) to be in time.
But then one might ask: what about HUGE batches, like 28,000 chunks? Having HUGE batches would create a world of problems of its own, from memory usage to transaction/spend/cashnote size, and thus would not necessarily increase throughput.
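To make the batching concrete, the client-side loop is roughly this (a sketch only; the `Client` trait and its method names are hypothetical stand-ins for the real calls):

```rust
const BATCH_SIZE: usize = 64; // the default batch size mentioned above

/// Hypothetical stand-in for the real client API; method names here are
/// illustrative, not the actual client calls.
trait Client {
    type Quote;
    type Error;
    fn get_quotes(&self, batch: &[Vec<u8>]) -> Result<Vec<Self::Quote>, Self::Error>;
    fn pay(&self, quotes: &[Self::Quote]) -> Result<(), Self::Error>;
    fn upload(&self, batch: &[Vec<u8>], quotes: &[Self::Quote]) -> Result<(), Self::Error>;
}

/// Quote, pay and upload one batch at a time: a quote only has to outlive
/// the upload of its own batch, never the whole 28,000-chunk file.
fn upload_file<C: Client>(client: &C, chunks: &[Vec<u8>]) -> Result<(), C::Error> {
    for batch in chunks.chunks(BATCH_SIZE) {
        let quotes = client.get_quotes(batch)?;
        client.pay(&quotes)?;
        client.upload(batch, &quotes)?;
    }
    Ok(())
}
```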
Presumably batch size will remain adjustable by the client? So clients with slower connections can adjust/adapt?
It took me just over an hour to upload a single album. OK, that’s on a crappy ADSL line, but that’s what many, if not most, have in the UK.
Ah OK, you force uploading in batches of up to 64 chunks.
I did realise it was a quote per chunk. My thought was that some people might like to know if it’s worth uploading their 28 GB (or much larger) file before starting. I guess they would get an estimate first, then, if satisfied, start the process, and the client does it in batches of 64 with (potentially) new quotes batch by batch.
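Something like this, purely to illustrate the estimate-first idea (a hypothetical helper, not a real client call):

```rust
/// Quote a handful of sample chunks up front and extrapolate, just to
/// answer "can I afford this?" before committing. Illustrative only; the
/// real per-batch quotes will drift as prices change.
fn estimate_cost(total_chunks: u64, sample_prices: &[u64]) -> Option<u64> {
    if sample_prices.is_empty() {
        return None;
    }
    let avg = sample_prices.iter().sum::<u64>() / sample_prices.len() as u64;
    Some(avg * total_chunks) // an estimate, not a locked-in price
}
```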
Very much so, and I would never suggest that. I guess I was pointing out that a person cannot get an exact price on large files by this method, just an estimate. Mind you, I don’t know of a way to ever get an exact price without, say, long everlasting quotes, and I argued against those long ago when they were proposed.
Are you able to comment on the issue you raised, that there will be times when heavy churn is occurring and the price is changing very fast? Is that just for micro (testnet) to small networks, or will it also occur in large networks of a million or more nodes, which we hope Safe will exceed hundreds of times over in years to come?
If enough people want it then either it will land in the official release or someone will make a mod for the client to do it. It’s in the client code, so it can be changed without needing any changes to the nodes.