Pre-Dev-Update Thread! Yay! :D

C A L M D O W N!!! They're just GitHub email notifications, but the anticipation!!!

Sorry, I've seen a few and they get me too excited! Ha

3 Likes

Well I’m grabbing and attempting to build the latest with

git pull && cargo build --release --features local-discovery,quic

Dunno if those are the correct flags I'm giving it, but no doubt I'll find out shortly.

5 Likes

Lots of pay per chunk stuff just merged. :partying_face:

16 Likes

I read a lot of the PRs and have long stopped looking at the lengthy automated summaries generated by Reviewpad which were added a few months ago.

Are they of use to anyone? Maybe MaidSafe code reviewers read them but I doubt it.

The only overview of the PR is now the title which is enough some of the time but mostly not.

I think a short overview of the purpose and effect of the PR before the Reviewpad section would help, and the latter could be removed unless people are actually using it.

For anyone wondering what I’m talking about see the description of code changes at the start of the PR conversation here: feat!(protocol): make payments for all record types by joshuef · Pull Request #697 · maidsafe/safe_network · GitHub

I find it helpful myself, though I agree it can be a tad lengthy at times!

6 Likes

Seems like a good idea to only ramp up once storage is getting fuller, and to have more granular steps initially, which is where the market would hopefully be operating.

I have a concern that things could break down due to the lack of granularity in the later stages. For example, if nodes are filling up, a point could be reached where there's not quite enough incentive to boost supply (encourage nodes to increase storage sufficiently), but also not enough demand to justify jumping to the next price level people will pay for storage, so the market, and network usage / growth, could stall.

Could the same, or a similar price ‘curve’ be made, but with far higher granularity, e.g. 128 levels vs 12? I expect that would lead to a far more effective market mechanism for balancing supply & demand than the massive price jumps (0.000000019 to 0.000000036 to 0.000000126 to 0.000001591).
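For a concrete sense of the difference, here's a rough sketch of how a step-quantized cost curve behaves with 12 vs 128 levels. The function names, the exponential shape, and the endpoint prices (taken from the numbers above) are my own assumptions for illustration, not the actual safe_network pricing code:

```rust
// Hypothetical step-based store-cost curve: cost grows exponentially with
// how full a node is, quantized into `steps` discrete levels.
// None of these names come from the safe_network codebase.
fn store_cost(used: u64, capacity: u64, steps: u32, base: f64, max: f64) -> f64 {
    // Which step of the curve are we on, given the fill ratio?
    let fill = used as f64 / capacity as f64; // 0.0 ..= 1.0
    let level = ((fill * steps as f64).floor() as u32).min(steps - 1);
    // Exponential interpolation between `base` and `max` across the steps.
    let ratio = (max / base).powf(level as f64 / (steps - 1) as f64);
    base * ratio
}

fn main() {
    // With 12 steps, each jump between adjacent levels multiplies the
    // price by roughly 1.5x; with 128 steps it is closer to 1.04x.
    let coarse_jump = store_cost(2, 12, 12, 1.9e-8, 1.6e-6)
        / store_cost(1, 12, 12, 1.9e-8, 1.6e-6);
    let fine_jump = store_cost(2, 128, 128, 1.9e-8, 1.6e-6)
        / store_cost(1, 128, 128, 1.9e-8, 1.6e-6);
    println!("coarse jump x{:.2}, fine jump x{:.3}", coarse_jump, fine_jump);
    assert!(fine_jump < coarse_jump);
}
```

The endpoints stay the same; only the step size between them shrinks, which is the granularity question above.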

It’s great to see this all being implemented and tested after years of it being only theory :slight_smile:

5 Likes

Right now we go off record count. It could be more granular with a higher max record count, e.g.

It may be that that becomes entirely configurable down the line. We'll be trying this curve and others, I expect, to see how things feel. (And collecting what data we can w/o a real economy.)

10 Likes

That sounds good. True, right now there’s no real cost, so the steps aren’t a problem, and glad there’s no reason to expect more granularity can’t be added in the future.

3 Likes

It will be interesting to see whether this IGD functionality in libp2p will help people connect from home. The PR was just merged; I don't know whether we still need a release before it can be applied to our testnets:

Some good-looking merges in MaidSafe's GitHub too!

16 Likes

And is it now possible to create a node ("server"?) from home as well?

1 Like

No, not yet, unless you can set up port forwarding.

3 Likes

Again, the place for true spies to spy is around libp2p. :man_detective:

From a PR comment:

We are thinking of cutting a new (breaking) release and thus would like all the queued PRs to be ready for merge :slight_smile:

There should be some goodies for us too.

10 Likes

Double the anticipation today. Make it stop :crazy_face:

7 Likes

My gut tells me it will be huge :smiley: YAY

2 Likes

I think the team were too busy and forgot what day of the week it was!!

But secretly rooting for the huge option :slight_smile:

2 Likes

Jim’s getting his presentation outfit ready :joy:

3 Likes

I certainly hope he has his Dapper Dan on for a presentation :slight_smile:

1 Like

Really curious to hear all about this PR:

4 Likes

Does that mean effectively moving from 8 copies of each piece of data to 5 copies?

If so, it’ll be interesting to see whether 5 is sufficient.
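Purely as a back-of-envelope way to think about it: if each of k copies is lost independently with probability p, the chunk disappears entirely with probability p^k. (The per-copy loss rate below is an assumed number, and real copy loss isn't independent, since the network actively re-replicates.)

```rust
// Toy model: if each of `copies` replicas is lost independently with
// probability `p`, the chance that every copy is gone is p^copies.
// `p` here is an assumed figure, not measured from the network.
fn p_all_lost(p: f64, copies: u32) -> f64 {
    p.powi(copies as i32)
}

fn main() {
    let p = 0.10; // assumed per-copy loss probability
    println!("8 copies -> {:.0e}", p_all_lost(p, 8)); // ~1e-8
    println!("5 copies -> {:.0e}", p_all_lost(p, 5)); // ~1e-5
}
```

So dropping from 8 to 5 copies trades roughly three orders of magnitude of safety margin (in this toy model) for lower storage and replication overhead.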

1 Like