How much disk space do you have per node?

And how long after the nodes become operational should any earnings appear?

15 nodes should see some earnings within 6 to 12 hours, but that’s not guaranteed, and since the network is so large now, each node earns less often.

3 Likes

The current nodes have been running for about 12 hours without earnings. I will see whether the change to home-network has any effect.

Sometimes the NAT detection used by the automatic setting gets the connection type wrong, which prevents your nodes from receiving requests for quotes (i.e. no earning can happen). By setting it directly to home-network, they should be able to earn.

The nodes have to be restarted after changing this setting. It’s best to do a reset and start the nodes all over again.

That’s what I did. But I’m still wondering whether turning on the lock screen on the laptop can disconnect the nodes. I have sleep mode set to “never”, but the screen turns off and the lock screen comes on. Previously this setting didn’t interrupt the nodes, but now I’m not sure; data transfer at night was low, hence I wonder…

The lock screen doesn’t affect running programs, or at least it shouldn’t.

If you’re concerned, look at the activity LED on the router port your laptop is plugged into. If wireless, there may be an activity LED for Wi-Fi.

3 Likes

But aren’t we forgetting about inactive chunks? The 60% fullness is calculated from the active chunks - the inactive chunks remain on the system…

Here again - what about inactive chunks?

… Fullness is calculated from active chunks (and at least in the current network, nodes store many more inactive records than active ones)
… From what I’ve seen so far, I’d assume that in a 60%-filled network pretty much all nodes might be storing the maximum number of records…
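
To make the distinction concrete, here’s a minimal sketch in Rust of what that means; the names are illustrative, not the actual record_store.rs types:

    // Illustrative sketch only; these names are assumptions, not the
    // actual ant-networking types.
    struct RecordStoreStats {
        active_records: usize,   // records within the node's close range
        inactive_records: usize, // out-of-range records still held on disk
        max_records: usize,
    }

    impl RecordStoreStats {
        // "Fullness" as discussed above: only active records count.
        fn fullness(&self) -> f64 {
            self.active_records as f64 / self.max_records as f64
        }

        // Actual disk usage includes the inactive records too.
        fn records_on_disk(&self) -> usize {
            self.active_records + self.inactive_records
        }
    }

So a node can report 60% fullness while records_on_disk() is far larger, which is exactly the concern here.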

1 Like

We should expect there to always be a large number of nodes that are under-provisioned. The more this is the case, the greater the risk that it destabilises the network.

So I think it would help to give warnings about this to anyone doing so, whether deliberately or by accident.

This could be:

  • error messages from each node to the terminal with increasing alarm (severity terms such as BEWARE, WARNING, CRITICAL, etc.) as storage on the system drops below a perceived safe level; see the sketch after this list. I realise this will not necessarily be able to take into account the number of nodes running, but folk who run more nodes will likely be more able to monitor and manage this themselves, so I still think it’s helpful. However, maybe nodes can determine how many others are running on the same system without difficulty.
  • the same, but as part of the metrics, with visibility in launchpad and other monitoring setups. (Such apps would of course be able to do their own threshold calculations, and launchpad also imposes its own limit on node numbers.)
  • the same, but written to the log file for those who keep logs enabled.
  • launchpad, formicaio, vdash and other node monitoring apps can do their own calculations and flag warnings, but good guidance on what the thresholds and warning terminology should be would help everyone work towards a common experience, which has benefits.
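
As a rough illustration of the first bullet, a mapping from free space to a severity level could look something like this; the thresholds and names are my own invention, not anything in the node code:

    // Sketch only: the thresholds and names here are invented for
    // illustration, not taken from the node code.
    #[derive(Debug)]
    enum StorageSeverity {
        Ok,
        Beware,
        Warning,
        Critical,
    }

    // Map free disk space, relative to what the local nodes would need
    // if they all reached max_records, to a severity level.
    fn storage_severity(free_bytes: u64, needed_bytes: u64) -> StorageSeverity {
        let ratio = free_bytes as f64 / needed_bytes as f64;
        match ratio {
            r if r >= 1.0 => StorageSeverity::Ok, // fully provisioned
            r if r >= 0.5 => StorageSeverity::Beware,
            r if r >= 0.2 => StorageSeverity::Warning,
            _ => StorageSeverity::Critical,       // likely to run out
        }
    }

Each node, or launchpad and friends, could then log or display the result with increasing alarm as the level rises.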
5 Likes

And you’re ignoring the inactive chunks too… And having less space than the max size shouldn’t even be possible any more once the network is filled to reasonable levels…

When we go live, will folks still be able to run under-provisioned nodes? I’m rocking LP, so it’s not an option on my end at this time.

Unless nodes are tweaked to purge inactive records.

It is a fair point that will make under-provisioning a little more difficult.

2 Likes

And because storage space isn’t very expensive, it’s a very questionable investment of time and energy… This probably isn’t a trivial mod any more, and it would need to be kept in sync with the network on every upgrade…

Given that nodes already have the code to do this, I expect it is trivial, and given that people are under-provisioning now, apparently they believe it is worthwhile.

Things may change once the network stands on its own economics, idk, but it’s not obvious to me that it will change much.

1 Like

From looking at disk usage on the current network, there is loads of space, even with inactive chunks. Obviously, on a fuller network, it will be a different story.

1 Like

On tiny PCs with 2.5" drives, storage may be at least a third of the unit price, maybe nearly half. So I wouldn’t discount the cost of storage too much. It’s definitely a factor.

2 Likes

I think it is more about the rewards right now. Folks are getting paid more than the storage fee suggests, which makes over-provisioning worthwhile. In short, the testnet incentives are provoking this behaviour.

4 Likes

Where are you getting these numbers from? You quoted 50% in Discord? Now 60%?

https://github.com/maidsafe/autonomi/blob/main/ant-networking/src/record_store.rs#L543-L578
    // When the accumulated record copies exceed the `exponential pricing point` (max_records * 0.1)
    // those `out of range` records shall be cleaned up.
    // This is to avoid:
    //   * holding too many irrelevant records, which occupy disk space
    //   * `over-quoting` during restart, when RT is not fully populated,
    //     resulting in mis-calculation of relevant records.
Unless I have the wrong function here, this is the 10% threshold I mentioned earlier in Discord.

2 Likes

Okay - so from what I’m seeing: below 10% fullness the cleanup is inactive; above 10%, it cleans records with a distance larger than [random-number].

In the current network we don’t seem to clean, which is why the inactive records dominate so much.
But that doesn’t really give an indication of how large that effect will be in a network later on, in steady state :thinking:
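
If I’m reading the linked function right, the shape of the logic is roughly this (a simplified sketch, not the real implementation; the cutoff distance stands in for the [random-number] above):

    // Simplified sketch of the cleanup logic as I read it; the real
    // code is in ant-networking/src/record_store.rs (linked above).
    fn maybe_prune(stored_copies: usize, max_records: usize,
                   record_distances: &mut Vec<u64>, cutoff_distance: u64) {
        // Below the pricing point (max_records * 0.1) no cleanup runs,
        // which would explain inactive records dominating on an
        // under-filled network.
        if stored_copies <= max_records / 10 {
            return;
        }
        // Above it, out-of-range records (distance beyond the cutoff)
        // are dropped.
        record_distances.retain(|&d| d <= cutoff_distance);
    }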

=) thank you very much for the insight - it seems my assumption might indeed be incorrect, and the effectively used storage space might be closer to the active records as the network fill level rises :thinking:

2 Likes

Yes, I believe this is the current scenario once the threshold is reached.

1 Like

It’s been nearly a day and the nodes still haven’t earned any Attos. I don’t know whether changing the connection to home-network has had no effect, or whether this could be a result of the recent change extending the current node rewards programs.

Do you have any further advice?