Bandwidth usage in vault configuration - RFC research

I’m in the process of writing an RFC for managing bandwidth consumption of vaults.

It would be useful to get some input from the community before it’s submitted for review.

Bandwidth settings are very important for vaults, since many users have bandwidth caps and hitting that cap results in an extremely negative experience of slow internet, possibly for several weeks. Even without caps, having the entire connection consumed in bursts can significantly disrupt other activities.

In my experience with coin farming/mining, users simply want to get started ASAP and sometimes don’t do enough research, especially when getting started is easy. Without a bandwidth cap setting, new farmers will get stung by running a vault and maxing out their bandwidth. This could result in unnecessary negativity toward SAFE, similar to the negativity Bitcoin experienced when users were stung by wallets lacking encryption.

There are a couple of configuration options required:

  • maximum instantaneous bandwidth, so the vault doesn’t consume the whole connection (needs separate upload and download settings)
  • total monthly (or weekly, etc.) consumption, set according to the user’s plan and the bandwidth used by non-vault activity; see the sketch just after this list.
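To make that concrete, here’s a rough sketch of how those two settings might look in a vault config. All of the names here are hypothetical illustrations, not anything from existing vault code:

```rust
use std::time::Duration;

/// Hypothetical bandwidth section of a vault config.
/// Field names are illustrative only.
pub struct BandwidthConfig {
    /// Instantaneous caps, in bytes per second. `None` = unlimited.
    pub max_upload_bps: Option<u64>,
    pub max_download_bps: Option<u64>,
    /// Total budget per accounting period, in bytes (upload + download combined).
    pub period_budget_bytes: Option<u64>,
    /// Length of the accounting period, e.g. 7 or 30 days.
    pub period: Duration,
}
```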

But there’s a catch to bandwidth settings related to vault ranking. If bandwidth limits are reached, the vault appears unavailable (and effectively is unavailable, albeit temporarily). How should this be treated in the vault ranking algorithm? Should it have the effect of the vault being offline, or should it have some other in-between effect?

This is a tricky bit of configuration to design, but I feel it to be essential. Looking forward to hearing your thoughts on this topic.

Some relevant reading:

8 Likes

I see no benefit in adding complexity trying to understand why a vault went offline. If a vault is useful to the network, then the network ranks that; if it’s unreliable for whatever reason, then it gets reassessed periodically and shunned until it becomes useful again.

Even the best vaults will see breaks in performance, so there perhaps needs to be an easy route that allows strong vaults to return quickly. If a vault returns to the network with a signed kudos, then perhaps the network can adopt it promptly.

There obviously need to be settings for both upload and download limits… so for each, a range plus unlimited. The BitTorrent client Transmission’s settings for that are easy to understand… a range of [5, 10, 20, 30, 40, 50, 75, 100, 150, 200, 250, 500, 750] kB/s and unlimited. Perhaps the lower end of that is not useful to SAFE, so it could be truncated to a higher minimum, which would help users understand the minimum contribution that’s worthwhile.
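For the instantaneous-cap side of this, a per-direction limit is usually just a token bucket. A minimal sketch (my own illustration, not existing vault code), with one bucket per direction:

```rust
use std::time::Instant;

/// Minimal token bucket: refills at `rate` bytes/sec, up to `burst` bytes.
/// One instance per direction (upload, download).
struct TokenBucket {
    rate: f64,     // sustained rate in bytes per second
    burst: f64,    // maximum bucket size in bytes
    tokens: f64,   // currently available bytes
    last: Instant, // last refill time
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Returns true if `bytes` may be sent/received now, spending tokens.
    fn try_consume(&mut self, bytes: f64) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + self.rate * elapsed).min(self.burst);
        self.last = now;
        if self.tokens >= bytes {
            self.tokens -= bytes;
            true
        } else {
            false
        }
    }
}

fn main() {
    // e.g. the "200 kB/s" preset with a 1 MB burst allowance
    let mut upload = TokenBucket::new(200.0 * 1024.0, 1024.0 * 1024.0);
    assert!(upload.try_consume(64.0 * 1024.0)); // a 64 kB message fits
}
```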

I wonder whether just capping up and down is sufficient. Anyone with a monthly cap can manage their own usage - unless there is a real risk to the network from certain large ISPs enforcing the same cutoff period for all their users, in which case the end of the month might risk losing those users’ support. The average noob with a monthly bandwidth cap, asked what total bandwidth contribution they want to make and to average it out, is going to get confused. So keep it simple, and good hosts will learn quickly enough.

3 Likes

Those with soft caps and excessive charges/throttling on breach of the cap would likely want an easy way to manage this in the client.

Fwiw, this stuff can be managed at the router for those with decent equipment. The same goes for bandwidth limits. It would be much more user friendly to be able to manage such things in the app though.

Total use for the application would be trivial to average, but we shouldn’t look to be managing a user’s bandwidth, because that’s a liability. I would expect a user to want some other bandwidth manager, but ~total/days-in-month is simple enough to stop the connection maxing out because of SAFE - and perhaps that would be a rolling calculation while bandwidth is not maxed out.
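As a sketch of that ~total/days-in-month idea (purely illustrative, hypothetical names): give the vault a baseline daily allowance and let unused bytes roll forward, while the instantaneous caps discussed above still keep it from saturating the link:

```rust
/// Illustrative rolling daily budget: monthly_cap / days_in_month is the
/// baseline, and unused bytes from earlier days roll forward.
fn todays_allowance(
    monthly_cap: u64,
    days_in_month: u64,
    used_so_far: u64,
    day_of_month: u64,
) -> u64 {
    let baseline = monthly_cap / days_in_month;
    // What the vault is "entitled" to by the end of today...
    let entitled = baseline * day_of_month;
    // ...minus what it has actually used, never below zero.
    entitled.saturating_sub(used_so_far)
}

fn main() {
    // 100 GB cap, day 10, 20 GB already used -> roughly 13.3 GB available today
    let gb = 1024 * 1024 * 1024u64;
    println!("{} bytes", todays_allowance(100 * gb, 30, 20 * gb, 10));
}
```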

Couldn’t you set the vault to average the bandwidth it uses over time? So if the vault knows it only has x amount of bandwidth to play with over y amount of time, then its allocation and usage would be x/y. So when it wanted to devote resources to the network it would be resources times usage over time, or R * u/t. You could set it for a smooth curve over a week’s time, or it could spend what it needed at the start, and as the week or month dragged on it would have less resources, but also less time.
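A worked version of that x/y pacing (again just an illustration with hypothetical names): average whatever budget remains over whatever time remains, so spending early simply lowers the sustained rate later on.

```rust
/// Illustrative pacing: spread the remaining byte budget over the
/// remaining seconds in the period, per the "x over y" suggestion above.
fn allowed_rate_bps(budget_remaining: u64, seconds_remaining: u64) -> u64 {
    if seconds_remaining == 0 { 0 } else { budget_remaining / seconds_remaining }
}

fn main() {
    // 50 GB left with 20 days of the month to go -> roughly 30 kB/s sustained
    let remaining = 50u64 * 1024 * 1024 * 1024;
    let secs = 20u64 * 24 * 3600;
    println!("{} bytes/sec", allowed_rate_bps(remaining, secs));
}
```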

I expect there are two user groups - one will want a simple cap like Transmission’s, so it doesn’t interfere with other use; the other will want some total bandwidth budget that follows a formula like you suggest.

1 Like

I agree less complexity is better. If it results in churn then so be it.

Agreed; torrent apps usually only have settings for max up/down speed and I think that’s enough. Users can manage their own cap, and set max up/down appropriately.

Tor mostly relies on two configuration options - a ‘normal’ rate and a ‘burst’ rate, as explained in the bandwidth shaping options available to Tor relays, set to the lower of the up/down speeds (usually up). There’s also a hibernation configuration option for managing total bandwidth over longer periods.
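For reference, those knobs look roughly like this in a torrc (option names are from the Tor manual; check it for exact semantics):

```
## Sustained rate and short-term burst.
BandwidthRate 200 KBytes
BandwidthBurst 400 KBytes

## Hibernation: stop serving once this much traffic has been used in the
## accounting period, then sleep until the next period starts.
AccountingMax 40 GBytes
AccountingStart month 1 00:00
```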

1 Like