Update 29 August, 2024

Welcome Ermine Jose! Great job team! Nodes from home have never run as well as they are right now. Still no nanos to be seen through the bot, however, but apparently 11 nanos earned. Fingers crossed. :crossed_fingers:

9 Likes

Well, one factor is that a large portion of a node's memory is a RAM cache of the on-disk record storage.

But if you swap that RAM to disk (presume it's all on SSD for this), then you are no longer saving access time, disk I/O, or speed.

In fact, accessing one of those records when the record store's RAM cache has been swapped out requires an OS event to swap that section of RAM back in (swapping out another section to make room), and only then can the node program resume execution and access the record.

So when you consider the effects:

Without swap

  • record is retrieved from RAM cache

With swap, and some of the record store's RAM cache swapped out

  • node program retrieves a record from the RAM cache of the record store
  • OS event triggered (RAM page fault)
  • OS swaps out some RAM to make room
  • OS swaps into RAM those pages from swap needed to satisfy the RAM page fault
  • OS eventually resumes the node program
  • node program completes the RAM access

Now tell me why have the cache at all, when swapping in some cases turns one record retrieval into 3 or more disk I/Os (the earlier swap out, the swap out to make room, and the swap in of the needed RAM).

Also with protocols, you might get a section of code that is not used much swapped out; then when the comms needs it, it has to be swapped back in. All time wasted when building buffers to be sent out. The protocol program can no longer rely on RAM speeds when executing instructions to build buffers and transmit packets. Imagine if the TCP/IP protocol stack was swapped out at times; you think browsers etc. are slow now, wait till the TCP/IP stack is swapped in and out, and in and out.
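If you want to check whether this is actually happening to your nodes, Linux exposes per-process swap usage in /proc. A minimal sketch, Linux only; the PID argument and field parsing here are just for illustration, not anything from the node software:

```rust
use std::env;
use std::fs;

fn main() {
    // PID of the node process to inspect; defaults to this process ("self").
    let pid = env::args().nth(1).unwrap_or_else(|| "self".into());
    let status = fs::read_to_string(format!("/proc/{pid}/status"))
        .expect("could not read /proc/<pid>/status; is the PID valid?");

    // VmRSS = memory resident in RAM; VmSwap = memory pushed out to swap.
    // A non-zero, growing VmSwap means the record cache (or even code
    // pages) is being paged out, and every access to it costs disk I/O.
    for line in status.lines() {
        if line.starts_with("VmRSS") || line.starts_with("VmSwap") {
            println!("{line}");
        }
    }
}
```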

3 Likes

I have used around 50-100 GB of swap to spinning disks without issues. On datacenter VPSes I always use NVMe swap. In addition you can use swap compression. It all depends on the configuration. Swap is great to prevent losing nodes unnecessarily.

3 Likes

My comment was tailored to Southside's computer setup, which has a lot more RAM than he needs. So good practices ensure no problems in the future when the nodes are under load.

Using swap may work, but you can also hit the situation where a combination of transmission lag + swap disk spin-up after sleeping + swapping into memory on a CPU-loaded machine during higher node traffic causes your node to be marked in error due to the lag.

So good for 99% of the time, but running under suboptimal conditions. If, like Southside, your machine has far more than enough RAM, then swap for nodes is very much suboptimal.

Losing a node to OOM because you have no swap means you had too many nodes in the first place.

These are not user programs where swapping is inconsequential; they are network-level protocol communication programs where filling and unpacking buffers in a timely manner is very important and strongly affects transmission times. The more nodes you have, the more important it is that node operation is unhindered by having to swap buffers back into memory.

Now there are many swap strategies, and the optimal one would be to have the node software locked into a state where it cannot be swapped, just like the TCP/IP stack cannot be swapped. These are low-level programs of the same processing importance as any other protocol service and should be treated as such.
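For illustration, a minimal sketch (assuming Linux and the `libc` crate; this is not what the node software actually does today) of how a program can lock itself into RAM with mlockall(2) so the OS cannot swap it out:

```rust
// Lock all of this process's memory so the OS cannot swap it out.
// Requires the `libc` crate in Cargo.toml; Linux only.

fn lock_all_memory() -> std::io::Result<()> {
    // MCL_CURRENT locks pages already mapped; MCL_FUTURE also locks pages
    // mapped from now on (heap growth, new buffers, etc.).
    let ret = unsafe { libc::mlockall(libc::MCL_CURRENT | libc::MCL_FUTURE) };
    if ret != 0 {
        // Typically fails with EPERM/ENOMEM unless the process has
        // CAP_IPC_LOCK or a high enough RLIMIT_MEMLOCK.
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

fn main() {
    match lock_all_memory() {
        Ok(()) => println!("memory locked; this process can no longer be swapped"),
        Err(e) => eprintln!("mlockall failed: {e}"),
    }
}
```

A service run this way normally needs CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK (e.g. LimitMEMLOCK= in a systemd unit), otherwise the call fails.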

On Windows the swap algorithm is so bad that nodes would end up considered bad within a week, if so many nodes were running that they absolutely needed swap space.

3 Likes

On my birthday! yay!

4 Likes

It feels great that we are at this point.

Stable software releases and a running network.

Have to pinch myself occasionally.

4 Likes

It is faaast. BegBlag downloaded in 1 second 453 milliseconds :slight_smile:

It is very situational; it may help if you have low RAM but a strong CPU.

My experience running nodes is that on most machines, once the machine starts swapping, the increased IO wait leads to CPU inefficiency (and overload), which leads to longer queues using more RAM, and that completes the circle of death.

9 Likes

Thx 4 the update Maidsafe devs

Welcome Ermine Jose

Wow just wow, what a great update
Would be nice to have an overview of all the commands you can use to interact with the Network

Keep coding/hacking/testing super ants

Oops…

7 Likes

Uhm… this means the wave 1 total pool is now smaller than the wave 2 total pool, right?

The sum of all rewards was doubled at wave start, if I recall correctly, and now it's:

Wave 1: 250k * 2 * 1.25
Wave 2: 250k * 2 * 1.5

Right?
… I mean, we signed up to a game with unknown rules, so I guess it's all up to you. Just thought I'd point out that this is not how it was communicated in the beginning; I thought all pools would be the same size. (I understood wave 3 is now half the size and half the rewards, but half of wave 1's rewards or half of wave 2's? Or maybe I got it all wrong?)
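Working it through, assuming those multipliers are right:

Wave 1: 250k * 2 * 1.25 = 625k
Wave 2: 250k * 2 * 1.5 = 750k

So the wave 2 pool would come out 125k larger than wave 1's.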

3 Likes

Who knows. It’s all part of the chaos over there.

4 Likes

Also check my post earlier, above the one who got dropped as a baby and could not control himself from attacking me at every available moment, no matter the cost or reason.

On the leaderboard, check the waves and what they get: take 2x wave 1 and compare it to wave 2, and there is not much difference. Because such a large % in wave 2 only get 50 per week, those higher up in the rankings get a relatively higher share of the total pool.

And when they get the 1.5x, the gap will get even smaller.

My understanding is that wave 1 already had a double and when wave 3 starts it gets an additional 25%.

Wave 2 will only get a 50% increase because wave 3 did not fill.

Wave 3 gets no double or increase.

Or maybe I did :laughing:

3 Likes

Thank goodness we’re not litigious here. :laughing:

5 Likes

From the start, wave 1 was 250 000.

When wave 2 started, wave 1 doubled to 500 000.

But wave 2 started at 500 000.

Those higher up in the wave 2 leaderboard get relatively much higher rewards, because such a large % in wave 2 only get the 50 a week, giving those higher up a relatively larger share of their pool.

So the gap between wave 1 and 2 is quite small, and it will get even smaller when wave 2 starts getting 1.5x the 500k.

I just want the next challenge now; 12 weeks of the same is a long time. Let's start testing token conversion already :partying_face:

6 Likes

Are you sure about this? I clearly missed that. Why?

Is that what you are saying too, @riddim?

2 Likes

Example: 20th place in wave 1 gets 222 a week; when wave 2 started, x2 = 444; when wave 3 starts, x1.25 = 555.

20th place in wave 2 got 373 when their wave started; when wave 3 starts, they get x1.5 = 557.

So 20th place in wave 2 gets more per week at the moment than 20th place in wave 1, which is strange, because it was said that wave 1 would reward the OGs for being long-term supporters of the project.

But hopefully there will be some rewards further on, testing the coin process and so on, as something was said about further incentive plans towards launch and beyond.

At the end of the day, Maidsafe are not out to get anyone. I am not going to comment further, as I apparently don't know all the details.

However, let's just keep it fun.

6 Likes

We are having fun; it might just be logical thinking mistakes. Right now it seems to affect the OGs, but when we see the future plans and the bigger picture there might be good reasons; otherwise some will probably feel like they are not being treated well.

Don’t understand why wave 1 did not get the 1.5x larger pool, as it does not make sense right now.

Whoopsie

Didn’t want to start a storm :sweat_smile:

Just was a bit confused because what I read there didn’t make sense in my mind…

3 Likes