Update 24 August, 2023

We continue to refine features ready for the next testnet, notably introducing pay-per-chunk and UTXO.

Paying per chunk means treating each chunk individually rather than the client bundling them into a Merkle tree and asking nodes for a quote on the whole file. Now clients query each node in the close group for the price of storing each chunk before sending it (whereas previously we attempted to get a price over the whole network, which was quite inaccurate), and pay the nodes returning the chosen quote directly. The nodes send their public key with their individual quote, and clients pay to that key. As mentioned last week, with the current group size of 8 the client should choose a price that guarantees at least 5 nodes will store the chunk.
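As a rough sketch of that quoting flow (the type and function names below are illustrative placeholders, not the actual sn_client API), the client gathers one quote per close-group node and pays the price of the 5th-cheapest quote, which guarantees at least 5 of the 8 nodes consider the payment sufficient:

```rust
const CLOSE_GROUP_SIZE: usize = 8;
const REQUIRED_STORERS: usize = 5;

struct Quote {
    node_pubkey: Vec<u8>, // key the node wants paying to (unused in this sketch)
    price: u64,           // quoted price for this one chunk, in nanos
}

/// Pick a price high enough that at least REQUIRED_STORERS of the
/// CLOSE_GROUP_SIZE nodes consider the payment sufficient: sort the quotes
/// and pay the 5th-cheapest one.
fn choose_price(mut quotes: Vec<Quote>) -> Option<u64> {
    if quotes.len() < REQUIRED_STORERS || quotes.len() > CLOSE_GROUP_SIZE {
        return None;
    }
    quotes.sort_by_key(|q| q.price);
    Some(quotes[REQUIRED_STORERS - 1].price)
}

fn main() {
    let quotes: Vec<Quote> = (1u64..=8)
        .map(|p| Quote { node_pubkey: vec![], price: p * 10 })
        .collect();
    // The 5th-cheapest quote is 50, so paying 50 satisfies 5 of the 8 nodes.
    assert_eq!(choose_price(quotes), Some(50));
}
```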

Payment details including DBCs are now sent with the chunk, rather than a spent-proof as before, with chunks being stored only if the given node has been paid.
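On the node side the rule is essentially "no payment, no store". A minimal sketch of that check, assuming hypothetical types rather than the real sn_node code:

```rust
struct Dbc {
    owner: Vec<u8>, // the public key this DBC pays to
    amount: u64,
}

struct StoreRequest {
    chunk: Vec<u8>,
    payment: Option<Dbc>, // the DBC now travels with the chunk itself
}

/// Store the chunk only if it arrives with a DBC addressed to this node's
/// key for at least the quoted price; the DBC itself is the node's payment.
fn handle_store(
    req: StoreRequest,
    my_key: &[u8],
    my_quote: u64,
    stored_chunks: &mut Vec<Vec<u8>>,
) -> Result<(), &'static str> {
    match req.payment {
        Some(dbc) if dbc.owner == my_key && dbc.amount >= my_quote => {
            stored_chunks.push(req.chunk);
            Ok(())
        }
        Some(_) => Err("payment not addressed to this node or below its quote"),
        None => Err("no payment attached to the chunk"),
    }
}

fn main() {
    let my_key = vec![1u8, 2, 3];
    let mut stored_chunks = Vec::new();
    let req = StoreRequest {
        chunk: b"chunk bytes".to_vec(),
        payment: Some(Dbc { owner: my_key.clone(), amount: 42 }),
    };
    assert!(handle_store(req, &my_key, 40, &mut stored_chunks).is_ok());
}
```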

Operating at the per-chunk level is more granular than at the per-file level, which should smooth out transactions and allow for more precise payments to nodes and more accurate auditing. It also makes it easier to begin implementing rewards, as it should simply be a matter of the node saving the DBCs sent with new PUTs.

Which brings us neatly on to UTXO (unspent transaction output), a model a bit more like Bitcoin's, in which the spentbook is stored on the network. BLS one-time keys are also used to de-link the owner from the transaction and prevent double spend. Refactoring DBCs to incorporate these features is ongoing. @anselme and @bochaco, who are the ones deepest into this, are currently away, but we promise a full write-up when they return.
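To illustrate the de-linking idea only: each output is paid to a one-time BLS key rather than the owner's long-lived key, so the public spentbook never reveals who owns what. The sketch below uses the blsttc crate and simply generates a fresh random key per output; the actual derivation scheme in the DBC refactor is still being worked out, so treat this purely as a stand-in.

```rust
// Conceptual only: the real DBC/UTXO refactor derives one-time keys from the
// owner's main key (details still in flux), so generating a fresh random key
// per output here is just a stand-in for that derivation.
// Assumes the blsttc crate as a dependency.
use blsttc::SecretKey;

fn main() {
    // The owner's long-lived key never needs to appear on the network.
    let _owner_main_key = SecretKey::random();

    // Each transaction output is instead paid to a brand-new one-time key,
    // so an observer of the public spentbook cannot link outputs to an owner.
    let one_time_sk = SecretKey::random();
    let one_time_pk = one_time_sk.public_key();

    // Spending later means signing with the one-time key; the network only
    // ever sees the one-time public key and the signature.
    let msg = b"spend this output";
    let sig = one_time_sk.sign(msg);
    assert!(one_time_pk.verify(&sig, msg));
}
```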

Bug fixes

Slow client connections have been fixed, as have a few testing issues with logging and file uploads and downloads. We’ve sorted the issue of reuploads reusing incorrect cached payment proofs for the moment by eliminating the caching stage – we’ll reintroduce that at a later date. And we’re now verifying data is copied to a majority of the close group before storing it to reduce storage errors.
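For reference, with a close group of 8 a majority is 5 nodes. A tiny sketch of that check (names illustrative only):

```rust
const CLOSE_GROUP_SIZE: usize = 8;

/// With a close group of 8, a majority is 5 nodes.
fn close_group_majority() -> usize {
    CLOSE_GROUP_SIZE / 2 + 1
}

/// Only treat a chunk as safely stored once a majority of its close group
/// has acknowledged holding a copy.
fn is_stored(acks: usize) -> bool {
    acks >= close_group_majority()
}

fn main() {
    assert!(is_stored(5));
    assert!(!is_stored(4));
}
```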

General progress

@joshuef has been driving the pay-per-chunk changes, which allow attaching payment info to each chunk. He’s also fixed some bugs around this caused by attached payments changing the file size.

@Qi_ma is focusing on resolving niggling problems with testing and benchmarks related to file uploads, network communication and test reliability.

@roland is optimising how record distance ranges are set during replication and looking at fault detection to identify and reject invalid records.

@bzee has been implementing more connection debugging, and trying to find the source of the delayed join (and PUT receipt) some folk were seeing in the last testnet.

@chriso is implementing faucet as a service for testnets as part of his work into automating testnet deployment. He’s also extended the testnet inventory report to provide peer IDs to help bootstrap connections.

The travails over QUIC and libp2p continue to occupy @bzee, who has also investigated reasons why unroutable peers remain inactive and don’t get dialled/added by other peers.

And @anselme has been researching and thinking through the evolving transaction system.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

53 Likes

First! Xxxxxxxx

20 Likes

second! and now to read.

19 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:


21 Likes

May the Fourth be with me!!!

Thanks everybody for all their efforts.
Is there any point in trying this at home with ~ 50-100 local nodes or would it just be a disappointing waste of time at this stage?

14 Likes

Beauty! Thanks for the work team.

Sounds like the next testnet is going to be amazing.

Cheers :beers: :clap:

14 Likes

Thanks 4 the update and your hard work Maidsafe devs

Great to see the introduction of pay-per-chunk and UTXO :exploding_head:

Who knows, soon we’ll also have smart contracts :sweat_smile:

Keep up the good work and keep hacking super ants

13 Likes

Fantastic work to all the team.

I’m looking forward to the next testnet. I just got a new VPS on a 6Gb connection with 4TB of storage, so I’m keen to give the next one a good run for its money :slight_smile:

13 Likes

This is great, definitely a better system.

I know it is early days, patience is a virtue and all that good stuff, but :wink:… it would be nice if the user were shown the quote and given the opportunity to either approve or deny the PUT.

17 Likes

More granular and more accurate!

8 Likes

I agree, but perhaps only when what is being proposed is mental?

5 Likes

Idk, I nearly agree, but who decides what counts as a high or mental price for someone else?

It’s not necessarily only about whether the network is low on storage and prices are high; perhaps I’m low on SNT and think uploading a dir is going to use half my balance, and it’s a nasty surprise when my wallet is empty afterwards.

12 Likes

I am wondering about the situation where:

  • the client gets a quote for storing the record from a node
  • the client generates the DBC
  • in that short window the node has had to change its charging price and so rejects the record store request and payment
    • the node will increase its price at some point, so this is an edge case, but one that will happen often enough to need consideration
  • the client now has a DBC destined for that node, but the node doesn’t accept the payment because it is too low
  • and if the node keeps the DBC while refusing to store the record, the client has lost that amount of SNT and either coughs up another DBC or the amount is lost forever.

I realise it’s early days and I’m sure this has not been considered yet, seeing as there are more important things to get going, but I would like to think it is placed in the list of things to account for.

I’d say the solution is for the node to return an error stating the price increased by “X” amount; the client then only has to resend the chunk with the original DBC plus a DBC for the increased amount. This can be repeated if somehow the price increases yet again (rare, I’d expect, but it can happen). It would mean the procedure must allow for multiple DBCs to be sent with the record.

In fact the payment to the node could consist of a DBC for the reward and a DBC for payment to the foundation. Thus two DBCs, unless the amount is too small and the node needs to send 30% of payment DBCs to the foundation. @joshuef ?
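Something like this top-up loop, sketched with made-up types purely to illustrate the suggestion above (none of this is the actual client API):

```rust
struct Dbc {
    amount: u64,
}

enum StoreResponse {
    Stored,
    PriceIncreased { shortfall: u64 },
}

/// Keep resending the chunk, adding an extra DBC to cover any price rise the
/// node reports, until it stores the record or we give up.
fn store_with_topup(
    chunk: &[u8],
    mut payments: Vec<Dbc>,
    mut send: impl FnMut(&[u8], &[Dbc]) -> StoreResponse,
    mut mint_dbc: impl FnMut(u64) -> Dbc,
    max_retries: usize,
) -> Result<(), &'static str> {
    for _ in 0..=max_retries {
        match send(chunk, payments.as_slice()) {
            StoreResponse::Stored => return Ok(()),
            StoreResponse::PriceIncreased { shortfall } => {
                // Resend with the original DBC(s) plus one covering the increase.
                payments.push(mint_dbc(shortfall));
            }
        }
    }
    Err("node kept raising its price")
}

fn main() {
    // Simulate a node whose price rose from 10 to 12 after quoting.
    let price = 12u64;
    let result = store_with_topup(
        b"chunk bytes",
        vec![Dbc { amount: 10 }], // DBC created from the original quote of 10
        |_chunk, payments| {
            let paid: u64 = payments.iter().map(|d| d.amount).sum();
            if paid >= price {
                StoreResponse::Stored
            } else {
                StoreResponse::PriceIncreased { shortfall: price - paid }
            }
        },
        |amount| Dbc { amount },
        3,
    );
    assert!(result.is_ok());
}
```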

It would be operator (user) overload to ask on each chunk of a 15GB movie.
It might be OK if asked once for the whole 15GB of chunks.
It would be nonsensical to ask for forum posts and for apps doing database work, e.g. point of sale: the salesperson could not be expected to approve 20 database updates for every item sold.

I would think the client configuration would have a value for the maximum amount per record store, and if the 15GB averages out at less than the max per chunk then the user is simply informed of the cost. Or something similar. Database work might require warning levels and max levels, expected to be set higher than for chunk records, to prevent a salesperson being hit with errors while selling a product.
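Purely to illustrate, those client-side limits could look something like this (all names made up, not an existing config):

```rust
struct PaymentLimits {
    warn_per_chunk: u64, // inform/warn the user above this (in nanos)
    max_per_chunk: u64,  // require explicit approval above this
}

enum Decision {
    Pay,
    Warn,
    NeedsApproval,
}

/// Decide how to handle a quoted per-chunk price against the user's limits.
fn check_quote(price: u64, limits: &PaymentLimits) -> Decision {
    if price > limits.max_per_chunk {
        Decision::NeedsApproval
    } else if price > limits.warn_per_chunk {
        Decision::Warn
    } else {
        Decision::Pay
    }
}

fn main() {
    let limits = PaymentLimits { warn_per_chunk: 100, max_per_chunk: 1_000 };
    assert!(matches!(check_quote(50, &limits), Decision::Pay));
    assert!(matches!(check_quote(500, &limits), Decision::Warn));
    assert!(matches!(check_quote(5_000, &limits), Decision::NeedsApproval));
}
```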

8 Likes

This is what we have atm. We’re just missing some summation of DBCs “for me” at the node and we’d have this in place.

Aye, that’s the plan :slight_smile:

8 Likes

I keep going back to this poll for storage costs and conclude: let everyone pay what they want to pay for storage.

Undoubtedly the 1.1-2 safecoin PUTs will be farmed first, resulting in the endgame of paying 1 nano per GB in this example.

When comparing decentralized storage with centralized storage, I feel like I would be cheating myself not to get the 1TB for free from Terabox before I spend a nano on the SAFE Network.

Today I would be willing to spend a nano for 1TB; tomorrow I might get 1 gram of DNA storage for that.

How much we spend on computation and storage will change over time. It’s like going from rewarding AGI to ASI, just my clueless consumer pov.

9 Likes

Congratulations @maidsafe on all the tremendous efforts!

13 Likes

Thank you for the heavy work, team MaidSafe! I’m adding the translations to the first post :dragon:


Privacy. Security. Freedom

10 Likes