Update August 5th, 2021

It depends! It could be a good idea, but charging fees is extra work so ideally you handle spam in less costly ways when you can.


Do you really want to go down this DARVO path? I said what I said, with the quotes, to show your lies; the evidence is in my posts. I did not claim to know what you were thinking, other than that you are passionate about this, which seemed obvious. What I did do was call out the tactics being used to attack me (calling out tactics is not claiming to know what you are thinking). Do you really want me to repeat the evidence again? No one else is even reading these posts of mine now anyhow.

I offered to accept responsibility (i.e. guilt) and suggested a truce with only one request attached.

Again, in clearer language, and I will leave off the request (I thought it was reasonable, but hey):

I am willing to accept responsibility and ask you for a truce. I do not care if you hate me or think I am the liar; I just ask for a truce.

I would not think a quote running out midway through an upload would be a major issue, since any event-based timeframe used would give a sensible amount of time to complete uploads, even when the person has to obtain tokens and has a slow connection. Warnings can be given (i.e. text output) if a quote that is about to expire is used, or if a quote is for data so large that it cannot be expected to upload in time.

I do agree with Jim that, especially for testnets, perpetual quotes are reasonable to go with, given the extra UX work otherwise required. Get quotes out, then consider enhancements.

The old joke comes to mind. Forgive me, it's from the 70s and meant for humour only.

In the development of an operating system for a mini-computer (very old joke, from a PDP-11 list):

  • Dev: Hey boss, I have a new feature I built into the file system.
  • Boss: And it is?
  • Dev: The files will remain on the diskette, and no faulty programme or other user can wipe them out from under the user.
  • Boss: How much extra work will this involve to implement?
  • Dev: Actually, it will take less work than we planned on.
  • Boss: Wow, how?
  • Dev: We simply do not implement the "delete file" function that was planned, and we get this great feature.

There is more than just DDoS attacks; there is simple ubiquitous use of quotes for larger data sets (e.g. music/movie/etc. files) and its effect on the system's ability to adjust to events such as a larger-than-normal influx of data, market changes affecting farmers' willingness to stay, and larger outages.

The quoting system introduces an out-of-band influence on the algorithm that adjusts store cost, which aids in balancing the network. It affects the rewards given, since what is paid to the network is now directly (perhaps with a pool to help) given to farmers (delayed, due to pay at section split).

I am personally concerned with the reliance shown on the network rejecting upload requests when the node cannot store the chunk. I know this has to be there, obviously, but to my mind human nature says it will cause bad reports about the network. Yes, I know users will eventually be able to upload the chunk. We need the system balanced in a way where this is a rare event, because uploading is gently (rising to aggressively) discouraged as spare space becomes low.
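To illustrate what "gently rising to aggressively" could look like, here is a toy curve. This is my own sketch; the function name, the constant, and the shape are illustrative, not the actual network algorithm.

```python
def store_cost(free_ratio, base_cost=1.0):
    """Toy store-cost curve: stays near base_cost while plenty of space
    is free, then rises steeply as free space runs out, so uploads are
    discouraged well before a hard "no space" rejection is ever needed."""
    assert 0.0 < free_ratio <= 1.0
    # Inverse relationship: cost grows without bound as free space -> 0.
    return base_cost / free_ratio

print(store_cost(0.8))   # gentle while 80% of space is free
print(store_cost(0.05))  # aggressive once only 5% remains
```

The steepness of the curve is the knob: the steeper it is near zero free space, the rarer an outright rejection becomes.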

@tfa's suggestion of rejecting an upload when the quoted store cost of that chunk is less than a percentage of the current store cost would handle this: e.g. when the current store cost is x% above the quoted cost, the upload is rejected.

So the rejection message could be "no space" when there is no space available to store the chunk, or "exceptional circumstances" for when the quoted price is just way too low at the moment.

Obviously the "T&Cs" of quotes need to include that the network will not accept quoted prices during times when the network is under exceptional load, i.e. that message is displayed when getting a quote.
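Putting the two rejection reasons together, the acceptance check could be sketched like this. The function and parameter names are mine, and the 50% threshold is just a placeholder for the x% discussed above.

```python
def accept_quoted_upload(quoted_cost, current_cost, free_space, chunk_size,
                         max_rise_pct=50):
    """Illustrative acceptance rule for a quoted upload.
    Returns (accepted, reason)."""
    # Hard limit: the node genuinely cannot hold the chunk.
    if free_space < chunk_size:
        return False, "no space"
    # Soft limit: the quote is too stale relative to the current store cost.
    if current_cost > quoted_cost * (1 + max_rise_pct / 100):
        return False, "exceptional circumstances"
    return True, "ok"
```

For example, a quote taken at cost 10 would still be honoured at a current cost of 12, but rejected with "exceptional circumstances" at a current cost of 20.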


With my proposal you can keep the simplicity and prevent the risks mentioned by @neo and @Antifragile.

To clarify things, say that x is set to 10%. In addition to the data you plan to store, the quote would also store a limit price equal to the current cost plus 10%.

The quote would be perpetually valid, on the condition that at time of payment the new cost is less than this limit price.

This is enough for normal usage of quotes, and it avoids the risk that old cheap quotes are used at times when prices are high because the network has less free space.
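A rough sketch of that proposal (the names are illustrative, not real network code): the quote carries a limit price at issue time, and validity is a single comparison at payment time.

```python
def make_limit_quote(current_cost, x_pct=10):
    """Issue a quote carrying a limit price = current cost + x%."""
    return {"limit_price": current_cost * (100 + x_pct) / 100}

def quote_valid(quote, cost_at_payment):
    """Perpetually valid, provided the cost at payment stays under the limit."""
    return cost_at_payment <= quote["limit_price"]
```

So a quote taken at cost 100 stays valid while the cost is at or below 110, and lapses only if the cost rises beyond that.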

You reduce this risk to simple spam. I don't, and I am not alone. It's better to stay on the safe side by just eliminating this risk.


Or rather a lack of a 1b (then x and y before t, else fail), I'd say?

Or "can of worms".
I agree that it could be easy to get a reaction of the sort that this can't be good.
So far I haven't seen detailed reasoning that actually convinces.


Then the "solutions".

In my initial, rudimentary walk-through of option 2 (pay for upload based on the prevailing price, which could be less or more than quoted), all the problems started heaping up when trying to get absolute ordering, agreement across the network, etc. That's a can of worms indeed. It's not uncommon for even very experienced developers to get trapped in those approaches. A large part of previous devs at MaidSafe did.

That doesn't mean there isn't a gem in there when shuffling the pieces around a bit more. Just that from my initial glance it wasn't promising.

This hits the same issues that I described in the reply to @VaCrunch (linked above).

I can try to explain in more detail, but see if you get what I mean from that post?

But hey, the "risks mentioned" still lack convincing reasoning showing that they're significant. Some of them have even been shown to be invalid.

Spending a lot of time on solving a "maybe" problem when there are a bunch of more pressing ones… Well. I'll have to bow out from this discussion for a while at least.

As said before, development moves on; it's KISS.


What you're asking here is not impossible, but it's far from straightforward, because in forming this assumption we can't know:

  • The size of data the user is uploading
  • The connection speed of the user uploading
  • The stability of the user's connection
  • How long it has taken the user to obtain the tokens
  • The cost to the user of any of the above
  • The speed at which the Network is operating
  • The volatility of the storecost or the token price

Given we can't know these things, how can we define what a "sensible amount of time" is? We can't.

So given we can't really define what is sensible to the user, we'll have trouble designing a solution that is sensible too. So we can't really say whether it will be a major issue either.

And in terms of adding complexity to the UX, or time to design and develop that UX, it doesn't really matter how major it is, or how often it comes up; if it happens at all, it needs to be provided for.

This sounds simple (just a text output), but these failure states quickly add up to a lot more complexity, and many more flows multiply out of them; and not just for MaidSafe with the core tools, but for anyone wanting to build apps for the Network too.

And again, we cannot really adequately define "about to expire", "data so large", or "expected to upload in time", again because of what I listed above. And this is not even taking into account the elephant in the room of introducing time.

Don't worry, I get that this is just humour; zero offence taken… But it is a classic example of putting the needs of a system (or the development of said system) above the needs of the person using the system. Or perhaps not really even understanding the needs of the user to begin with. And we can be in danger of doing that here.


There is a lot I don't understand about this discussion, but the above sums up most of my confusion.


From what he said, in his mind it would be the reissuing of his input DBCs into the output DBCs belonging to the recipients. (And this is the only thing I see working as a commitment as well).

That is a commitment (as long as we do not have a reissue possibility for the sender baked in; I don't think we've included that now), in that your tokens are no longer owned by you.

But as long as you don't send them to the owners, they don't have them either; so a commitment has been made, but no payment yet.

Looking at then rejecting that if the current op cost has risen: it's an issue if the network tells the user after the commitment "Sorry, not enough", because the user has already spent it. There are workarounds for that, but they are not very smooth.

Also, the argument that rejecting chunks when the network is full is very bad for the network, that people will be turned off en masse… I think it applies more to the "your payment is not enough" a millisecond after you "paid" (committed), which is something that would happen all the time. In contrast, a full network is… far away, and not in any way shown to be likely to even be seen by a user.


Wow, I just caught up with this whole thread, and it seems there are a lot of valid comments that address different parts of the whole, but people are talking across each other. My summary:

  1. Stability in quotes is necessary for good UX. The minimum requirement is that once a user sees a price, they can pay it and be assured that the upload will be honoured, even across client sessions and/or upload failures.
  2. Perpetual quotes introduce an economic asymmetry that can only ever cause the price of uploads to be less than the true value to the network at that time. This issue certainly exists and has an (unknown) impact between 0 (nothing) and 1 (deadly).
    2.1 As a corollary to this, quotes themselves have economic value.
    2.2 This is an economic issue related to, but separate from, an economically driven DDoS attack.
  3. There is disagreement as to what constitutes an appropriate path and which development orders cause extra work.

In my understanding of the network code, perpetual quotes are the simplest implementation. The basic steps for a quote (ask nodes to give the input variables for a quote, make the quote, get it signed, present it to the client) have to be implemented in all cases, and not adding any condition to check for validity (i.e. a perpetual quote) is the simplest possible implementation, which also fulfils 1. Additional terms and conditions can be added without changing the nucleus of the quote primitives.
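Those basic steps can be sketched as follows. This is only an illustration of the shape, not the real code: HMAC with a shared key stands in for the section's aggregate signature, and the field names are mine.

```python
import hashlib
import hmac

SECTION_KEY = b"demo-section-key"  # stand-in for the section's signing key

def make_quote(data_name, store_cost):
    """Gather the input variables, make the quote, and sign it."""
    payload = f"{data_name}:{store_cost}".encode()
    sig = hmac.new(SECTION_KEY, payload, hashlib.sha256).hexdigest()
    return {"data_name": data_name, "store_cost": store_cost, "sig": sig}

def quote_is_valid(quote):
    """A perpetual quote needs only a signature check: no expiry
    condition, no price condition. That is the whole point."""
    payload = f"{quote['data_name']}:{quote['store_cost']}".encode()
    expected = hmac.new(SECTION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(quote["sig"], expected)
```

Any of the limits listed below would slot into `quote_is_valid` as extra conditions, without touching `make_quote`.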

If issue 2 can be demonstrated (through modeling, or measurement of a test network), there are solutions of different classes. A non-exhaustive list, without much regard to quality:

  1. Add limits to network acceptance of 'perpetual' quotes (no change in quote generation code, added complexity to simply accepting payment, maybe some UX issues)
    1.1 … limit when storecost 'greatly' exceeds quoted value
    1.2 … limit in 'time' (churn events) (UX change) before the quote is no longer valid
    1.3 … ?

  2. Add limits to quote generation/issuance
    2.1 … rate-limit quote requests (smooth out instantaneous velocity and reduce the problem surface to meso/macro shifts in the network)
    2.2 … limit quote size to some fraction of network free space
    2.3 …

  3. Attempt to directly recapture some of the economic value of a perpetual quote
    3.1 … small fee for a perpetual quote
    3.2 … offer two pricings, 'pay nowish' and 'perpetual', with the network adding some measure on top of the instant quote (e.g. calculate an option price based on Black-Scholes etc. and add it to the current price)
    3.3 … slippage pricing: have the quoted price scale somewhat with the size of the data being quoted

In the end, there is a wealth of solutions that can be built on top of the simplest (unlimited perpetual) system, and I have good faith that when we can measure/simulate the problem, we can work on it then.


I will point out that this is not the argument being made.

No one I know of said people will be turned off en masse; just that people will talk about it and some will be pissed off. That it should be a last resort, with effort made so it does not happen normally. And not to use it as a way to simplify code and algorithms so that issues can be ignored because the rejection will happen.

The "very bad" is a relative term, of course, and if I said very bad then I meant it in PR terms.

Thought I'd clear that up for you.

There's also another element to the UX (which I briefly mentioned earlier) which perpetual quotes take a lot of pain out of, and that's all the flows around obtaining tokens to pay for uploads.

So I get a quote for an upload, and I think I know the token cost to get it on the Network. Then off I go to an exchange, or a third party, or a friend (or farming, of course) to obtain the required amount. There are potentially a number of hoops to jump through here, and a bunch of variables: exchange rates, various transaction costs. And of course we can't know how long this will take the user either.

Can you imagine the frustration of returning to the upload process moments/hours/weeks later, only to find I'm a few fractions of a token short? It's tear-your-hair-out, never-bother-again time.

I've had this very pain with existing crypto before, as I'm sure many others have too… and it's not fun.

But with firm pricing this goes away in a flash, and designing tightly integrated purchase solutions becomes so much more straightforward.

Again, just to reiterate, I'm not saying it is a silver bullet, but it's a worthy starting point that we can always build upon as and when we need to.


Won't delve deep into the stuff now; so much else to do :slight_smile:
But just wanted to give a couple of quick comments:

From my POV, we can't say what the true value of the system op cost is.
The op cost is calculated using an algorithm set at design time, which is unfortunately fraught with assumptions as long as it is coded by us devs, and which also has limitations in what information it can capture.
The big benefit of it is simplicity, but I can't see how it is in any way able to reflect true value. It is a value, and it will adjust towards truer rather than less true.

That is also why any scaling factor can be more or less dropped from the equation.

can only ever cause the price of uploads to be less than

This is a scaling factor.

100% correct.

Edit:

Actually, I have to correct myself here.

This is not true :slight_smile:

The op cost is an inverse function of network size. If you get a quote now, it is very likely that you could later get a cheaper one. So there is plenty of opportunity to pay the network more than it asks for :slight_smile:
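A toy illustration of that inverse relationship (the constant and the network sizes are made up): a quote taken while the network was smaller asks for more than the price at a later, larger size, so paying it gives the network extra.

```python
def op_cost(network_size, k=1_000_000):
    """Toy inverse-cost curve: op cost falls as the network grows."""
    return k / network_size

quote_then = op_cost(1_000_000)  # quote taken at 1M nodes
price_now = op_cost(4_000_000)   # price once the network has grown to 4M
# Paying the old quote hands the network quote_then - price_now extra.
```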


For god's sake man, spend your time on the network and not on punters like me. :slight_smile:

can only ever cause the price of uploads to be less than

This is a scaling factor.

If it were by a constant fraction, I'd agree with your conclusion here. I think the argument is more subtle, e.g. that the 'underpayment' factor depends partially on external systems (not knowable by the network), and that external actors may be able to make a better, more information-complete estimate of the instantaneous value of the factor than the network, and exploit it via the perpetual system. The severity of this is, as I said, somewhere between 0 and 1, with options to iterate and mitigate if it actually is a practical problem.


There's also a share set aside for community interaction :slight_smile:
I had a built-up stash of it :wink:

(Hey, make sure you look at my added edit as well.)

Yes, I was about to say that it is simplified.
What I mean is that that is the major contribution of it. The effects you describe are at the margin.


Yes, I expect the cost in SNT to trend downwards in the long term. This would make the vast majority of quotes superseded in the future by cheaper quotes on average; but you would never exercise the more expensive one, and thus the network does not keep the 'profit'.

The converse is not true, however: if you get a cheap quote and the price (in the short/mid term) gets much higher, then you have an exercisable disparity vs the network's best estimate of its value. The network will eat these 'losses'. This is what I meant by my statement… i.e. from the network's view, payments will always be at or lower than its current best estimate of value.

From where I sit, this would be one argument to add options-like pricing to the perpetual quote. As volatility decreases, the option value will trend to zero; but in high-volatility times (e.g. a price pump), the volatility will cause the option premium to rise quite quickly, preventing much volatility-based exploitation. I.e. if the perpetual quote price were calculated as (however you do it) + Black-Scholes on the volatility of (however you do it), that would take care of a ton of the attack surface. And as the network grew, volatility would decrease, and the option value would trend lower (and volatility in the option value would trend lower) until it was a so-called scalar factor.
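A sketch of that options-like pricing, using the textbook Black-Scholes call formula at the money. Treating the quote as a call option on store cost, the one-year horizon, and the zero interest rate are all assumptions for illustration, not a network design.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call_premium(spot, strike, vol, t, r=0.0):
    """Standard Black-Scholes call price. Here the 'option' is the
    right to upload later at the quoted (strike) store cost."""
    d1 = (log(spot / strike) + (r + vol * vol / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    n = NormalDist().cdf
    return spot * n(d1) - strike * exp(-r * t) * n(d2)

def perpetual_quote_price(current_cost, vol, horizon=1.0):
    """Perpetual quote = pay-now cost + at-the-money option premium.
    As volatility falls, the premium trends towards zero."""
    return current_cost + bs_call_premium(current_cost, current_cost,
                                          vol, horizon)
```

In calm conditions the quoted price barely differs from the pay-now price, while during a price pump the premium grows quickly, which is exactly the deterrent described above.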


Not with perfect information, but that won't be there. The moment a quote leaves a section it is stale, and potentially more expensive. This is true for every quote that will ever be sent out by a section.

At some point a user will stop polling and accept the quote they have. Every user will do this for every payment they will ever make.

Then there is a wide range in how soon they will convert the quote to a payment, and the same with respect to how soon they will upload. That is additional time during which the mechanism of lowering the cost is in play.

It is fair to say that a significant part of payments will come in at a time when the current price is lower than the quote. This translates to me as a constant flow of higher-than-expected payments. Don't you agree?


Yes. Agreed that this will be the normal 'growth regime'. IMHO, we are exploring the corner cases… I feel the growth cases (both low and high volatility) will be handled just fine by the network. I think that gradual degrowth (in terms of farmers) is OK up until a certain limit (and that the feedback loops should catch it), but that there are some interesting corner cases in terms of volatile degrowth events that deserve as much thought as we can give them… when you are talking about high-value 'forever' networks, no amount of thought is too much.


:ok_hand: Yeah, I like the idea.


This can be done prior to the actual re-issue, so the client has set up the transaction and also revoked his key with this transaction. As it is committed, it can be replayed many times (the re-issue is always accepted until the outputs are spent).

So commitment means the client commits (sets up the payment and creates the Pedersen commitments and bulletproofs), writes that to the SpentBook, and later does the real re-issue and issues the DBCs with storage requests, and those are guaranteed indefinitely.

Hope that makes sense.
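A very rough sketch of that two-step flow, as I read it. The SpentBook and transaction shapes here are illustrative stand-ins, not the real sn_dbc API.

```python
class SpentBook:
    """Toy record of committed transactions, keyed by the input DBC."""

    def __init__(self):
        self.entries = {}

    def commit(self, input_dbc, tx):
        """Step 1: the client writes the transaction to the SpentBook.
        From this point the input is spent: the commitment is made.
        Replaying the same committed tx is accepted; a conflicting
        tx for the same input is not."""
        if input_dbc in self.entries:
            return self.entries[input_dbc] == tx
        self.entries[input_dbc] = tx
        return True

def reissue(book, input_dbc, tx):
    """Step 2 (possibly much later): the real re-issue, producing the
    output DBCs for the recipients along with the storage request."""
    if book.entries.get(input_dbc) != tx:
        return None
    return [f"output-dbc:{owner}:{amount}" for owner, amount in tx]
```

The key property is that the commitment (step 1) is binding even before any recipient has been paid, which is the gap between "committed" and "paid" discussed above.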


There has been a user who decided to leave the forum, requesting full deletion. I was assured that the reasons had nothing to do with the discussions in this topic, but were more personal to them.

Please accept my sincere apologies for the disruption this will cause to anyone trying to read through the topic.
