Beta Phase Two starts tomorrow

The client will go for the lowest price, so would it matter if someone set the price higher?

No, but someone setting it just a nano lower would collect all the payments.

I don’t think pricing will be a problem; the market will solve it somehow. Could it be gamed by avoiding the price algorithm? One solution could be that the network finds the mean price and chooses a random node within ±1 standard deviation, or something like that. Though maybe the median would be better in this case.
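
Something like this, as a very rough sketch of the idea (all names and numbers are made up, just to illustrate; this is not how the network actually selects nodes):

```python
import random
import statistics

def pick_node(quotes):
    """Pick a storage node at random from those quoting within one
    standard deviation of the median price. `quotes` is a hypothetical
    mapping of node id -> quoted price (needs at least 2 quotes)."""
    prices = list(quotes.values())
    mid = statistics.median(prices)    # median resists outlier/gamed quotes
    spread = statistics.stdev(prices)
    candidates = [n for n, p in quotes.items() if abs(p - mid) <= spread]
    return random.choice(candidates)

# The undercutter quoting 70 falls outside the band and never gets
# picked, so shaving a nano off the going rate no longer wins every upload.
quotes = {"node_a": 100, "node_b": 102, "node_c": 98, "node_d": 70}
print(pick_node(quotes))
```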


The pricing itself is not the problem. But if the network is sold to the general public as following some kind of automatic pricing system, while in reality some folks circumvent it and earn more, it may lead to a mass exodus when that is revealed.

And on the other hand, if pricing is left to the free market alone, folks will use all their resources, without any margin, to maximise earnings. Then even a small number of nodes going offline, offloading their chunks to the rest of the network, would cause the remaining nodes, and thus the network, to become fuller than full, leading to data loss and all sorts of cascading effects.
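
To see how fast that can snowball, here is a toy simulation (nothing to do with the real chunk-distribution logic, and all the numbers are invented): each wave spreads the homeless chunks evenly over the survivors, and any node pushed past capacity drops off and dumps everything it held.

```python
def simulate_cascade(nodes, dumped):
    """Toy cascade model. `nodes` is a list of (stored, capacity) pairs
    in chunks; `dumped` is the chunk count offloaded when an operator
    goes offline."""
    wave = 0
    while dumped and nodes:
        wave += 1
        share = dumped / len(nodes)
        survivors, dumped = [], 0
        for stored, cap in nodes:
            if stored + share > cap:
                dumped += stored + share   # overflows, drops off, dumps its load
            else:
                survivors.append((stored + share, cap))
        nodes = survivors
        print(f"wave {wave}: {len(nodes)} nodes left, {dumped:.0f} chunks homeless")

# 1000 nodes run with almost no margin (80-99% full): a modest failure
# of 10,000 chunks wipes out the whole network in two waves.
simulate_cascade([(80 + i % 20, 100) for i in range(1000)], dumped=10_000)
```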


Yeah something like that might work. I’m not sure where the system stands as of now.


If this kind of method could somehow prove the total space used, and how much is available to be used by the network, it could also prove that the node is charging the ‘correct’ price given its level of utilised vs available space.

I have no idea how it could work technically, but I think it could prevent a lot of issues with gaming, which, as you say, could become a serious problem if people can game the system: taking payments by undercutting and under-provisioning, then dumping their nodes and burdening the network with all the data they took payment for.


What goes around comes around: they’ll get their just deserts, they’ll have poked the void and the void will have poked them back, they’ll complain, and they’ll learn there is no free lunch…

Yes, and some complain about that.

You could suffer from a cascading effect. One big operator (28K nodes during the previous phase) gets overloaded and all 28K drop off. Then the next with 8K nodes (or more) overflows, then the next with 5K (from memory). At some stage yours overflows. All within an hour. You’d have to watch like a hawk, since there would be little warning: your nodes just fill up more and more over the hour, and not all at the same rate, then bam, your disk fills and they all get shunned.

You could have a script watching disk space and killing off nodes one by one to keep enough free disk space.
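
Something along these lines would do it (a rough sketch; the path, the threshold and `stop_one_node` are all placeholders for whatever matches your setup):

```python
import shutil
import subprocess
import time

DATA_PATH = "/var/antnode"   # hypothetical: wherever your node data lives
MIN_FREE_GB = 10             # made-up safety margin

def free_gb(path):
    """Free space in GiB on the filesystem containing `path`."""
    return shutil.disk_usage(path).free / 2**30

def stop_one_node():
    # Placeholder: replace with whatever stops a single node on your
    # machine (systemctl, your node manager, kill by pid, ...).
    subprocess.run(["echo", "stopping one node"], check=True)

while True:
    if free_gb(DATA_PATH) < MIN_FREE_GB:
        stop_one_node()      # shed load one node at a time
    time.sleep(60)           # re-check every minute
```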

Nodes hold more than just the records they are one of the closest 5 for. This is by design.

So a network that is 1/2 full could mean the average node is keeping 3/4 or more of its max records. Given enough time, every node will end up holding 100% of its max records, due to the churn that is forever happening across the network.
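
A toy illustration of that drift (invented numbers; the point is just that responsibility churns both ways, but chunks you stop being responsible for are kept as cached copies until space is needed, so the total held only ratchets upward):

```python
import random

MAX_RECORDS = 1000
responsible = set(range(500))   # "half full" network: 50% responsibility
cached = set()                  # chunks we are no longer closest-5 for

for step in range(10_000):      # simulate churn events
    if responsible and random.random() < 0.5:
        # another node became closer: no longer responsible, but the
        # chunk is kept around as a cached copy
        cached.add(responsible.pop())
    else:
        # became responsible for a new chunk; evict cache only when full
        if len(responsible) + len(cached) >= MAX_RECORDS and cached:
            cached.pop()
        responsible.add(1_000_000 + step)   # fresh chunk id

print(f"responsible: {len(responsible)}, "
      f"held: {len(responsible) + len(cached)}/{MAX_RECORDS}")
```

Responsibility hovers around 50%, but the node ends up holding its full 1000 records anyway.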


In this scenario of nodes holding 100% of max records, what does a node use to determine store cost? I guess it doesn’t appear as 100% full for the store cost calculation?

This is a feature of @aatonnomicc’s anms script. NTracking/anm at main · safenetforum-community/NTracking · GitHub
I doubt it has ever been used this way, but if any one of LoadAvg, FreeMem, MaxNodes or DiskFree is exceeded, the action is to stop nodes until the situation is resolved.


bawws deep status under control :rofl: :rofl:



Don’t get carried away.


I’m about done for today, I’ll review the situation tomorrow.


It is the chunk count that the node is responsible for, i.e. the chunks for which that node is one of the closest 5.

Both figures are exposed in the node’s /metrics, and vdash shows the “responsible” figure, since that is what is used for quoting.

The other chunks are deleted one by one as more room is needed to store the chunks that that node is one of the closest 5 for.
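
So, as a sketch of my understanding (the field names and the pricing curve here are made up, not the actual implementation), the behaviour is roughly:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    max_records: int
    records: dict = field(default_factory=dict)        # chunk id -> data
    responsible_for: set = field(default_factory=set)  # closest-5 chunk ids

def store(node, chunk_id, data, responsible):
    """Store a chunk, evicting cached (non-responsible) chunks one by
    one when room is needed."""
    while len(node.records) >= node.max_records:
        cached = [c for c in node.records if c not in node.responsible_for]
        if not cached:
            return False                  # full of responsible records only
        del node.records[cached[0]]       # drop one cached copy
    node.records[chunk_id] = data
    if responsible:
        node.responsible_for.add(chunk_id)
    return True

def quote_price(node, base=1.0):
    """Quote from the *responsible* count, not raw disk usage, so the
    cached extras don't make the node look full when quoting."""
    fullness = len(node.responsible_for) / node.max_records
    return base * (1 + fullness ** 2)     # invented curve, just for shape
```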


If you delete them manually, what happens?


You will likely wreck indexing within the program.

Also, you remove redundancy built into the network, especially Sybil attack protection.

You see, it provides:

- caching
- protection from an attacker gaining control over the 5 closest nodes to many chunks
- protection for major outages where the 5 closest nodes are all in the outage


Let’s hope the 0 is soon gone :slight_smile:


How long after launching the nodes should I expect earnings?

Yesterday I started 3 nodes on one computer but did not get any nanos. After some time I shut them down and started 5 nodes on a new computer (where earnings had appeared earlier, so these nodes are working properly), but so far neither the Launchpad nor the Discord Bot shows any nanos.

At the same time, I noticed that this morning the computer had the screen lock on and resource consumption was not very high, although I have the screen’s power and sleep settings set to “never”, for both mains and battery power. What could be the reason for the lack of earnings?

Keep in mind, if you are not having luck, you may need to port forward and change the connection type.


With a small number of nodes, it can take time to get nanos. One thing to check in your settings is to make sure you have both of these set to “never”. Also, as I mentioned to Erik, port forwarding may be needed.


As I wrote, I have both of them set to “never”, but I want to make sure I understand: turning on the screen lock does not disable the nodes?

How should the ports be forwarded? Are there any instructions?