I have two SBC devices, each with 8GB RAM and a 4-core/4-thread Intel CPU. Each node on them is using less than 400MB. One has a 1TB SSD as its drive and the other a 2TB SSD.
I have a desktop with 256GB RAM and a 24-core/48-thread CPU. Each node on it is using around 800MB. It has a 2TB SSD for root and another 2TB for home.
Both machines are running the same Linux distro and the same version, with a similar set of applications loaded.
Same method of starting nodes (directly running safenode from a script) and the same interval between node starts (around 5 minutes).
Now others are reporting that their nodes are using around 200MB.
My nodes are earning, not getting shunned, and seem to be working absolutely fine, with plenty of bandwidth available beyond what they are using.
Can someone explain why the larger machine (more RAM/threads) is using so much more RAM per node than the smaller devices, and much more than others who have reported their node memory usage?
Is there something about Rust where it allocates more memory because more memory is available?
How many physical threads does that machine have (or cores, if it's a VPS)? I wondered if it might have something to do with the number of physical cores/threads the machine has.
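One mechanism that would fit (just a guess, nothing confirmed for safenode): glibc's malloc creates up to 8 arenas per logical CPU by default, so resident memory can grow with core count. Capping the arenas is a cheap way to test it (the binary path here is illustrative):

```bash
# If glibc malloc arenas are the cause, limiting them should shrink RSS.
# MALLOC_ARENA_MAX is a standard glibc tunable.
nproc                          # logical CPUs glibc sizes its arena limit against
MALLOC_ARENA_MAX=2 ./safenode  # cap this process at 2 malloc arenas
```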
Right now I have 429 nodes using 123.6GB of a nominal 128GB ≈ 288MB each.
This is on an AMD Ryzen 9 3900 12-Core Processor, effectively 24 CPUs.
Also, off-topic: looks like lottery nodes are back. I had a clear outlier on 48760 nanos; around 10:30 local time it got another huge reward, taking it to 114860. I also noticed my total nanos is no longer divisible by 10 - currently 0.000166946.
It shouldn't be either. That algorithm involves powers and divisions; 10 nanos is only the minimum value, and 10 is just one of the factors involved.
Explains why I dropped from 11th place to 21st overnight.
Damn, now I am upset. Why, in all things we call good, are my nodes using so much RAM? Good connections and all that. Even my SBCs are using more than 288MB, although not much more - maybe 50MB to 100MB over, which is within the range any node could vary by.
Can any devs give me hints on how to track down why my nodes are using more memory? Are there any log entries that can give me clues about the node's memory usage - stack size, heap size, etc.?
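In the meantime I'm poking at what the kernel already exposes, no special tooling needed (this assumes the process is named safenode):

```bash
# Per-process memory breakdown straight from the kernel
ps -C safenode -o pid,rss,vsz,comm --sort=-rss | head   # quick RSS ranking of node processes
pid=$(pgrep -x safenode | head -1)
cat /proc/$pid/smaps_rollup                             # Rss, Private_Dirty, Anonymous, etc.
grep -E 'Vm(RSS|Data|Stk)' /proc/$pid/status            # resident, data/heap (VmData), stack (VmStk)
```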
With a little more digging I seem to have found a bit of a pattern.
So I build the nodes myself from a copy of the git code taken on the day of each release, to make sure I can build the node if I want to test things. The node software I'm running was built from the code as it stood on the day of the last beta release, following the build instructions from the GitHub repo.
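For reference, the build was just the standard cargo steps, roughly as follows (the tag is a placeholder, not the exact name):

```bash
# Standard source build; tag placeholder, default host target
git clone https://github.com/maidsafe/safe_network
cd safe_network
git checkout <release-tag>             # the commit from the day of the release
cargo build --release --bin safenode   # default host target, i.e. glibc on my machines
```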
These builds produce nodes that use 800+MB on the big PC and around 400MB on the SBCs.
Now when I test with the official release binary, the big PC drops to around 300MB per node.
@joshuef @chriso Not sure who to ask, but there must be some difference in the build. What could it be? Is there an environment variable I need to set? Maybe something to do with stripping debug code?
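One thing I can compare myself is how the two binaries are linked, since a statically linked release would behave differently from my default glibc build:

```bash
# Compare linkage of my build vs the official release binary
file ./safenode   # "dynamically linked" (glibc build) vs "statically linked" (musl build)
ldd ./safenode    # "not a dynamic executable" also indicates a static/musl binary
```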
I will try that, thank you. It could very well cause a difference in memory allocation.
Also, later on I will go through the whole build to see if an env variable set earlier is affecting the Rust build process. Because if setting the target arch to that doesn't solve it, then something else must be causing it.
EDIT: Damn, I cannot get musl-tools for openSUSE. Having to fall back to an Ubuntu VM to build. Trying now…
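The build I'm attempting in the VM is the standard cross-target recipe (musl-tools is the Ubuntu package name):

```bash
# Build safenode against musl instead of glibc
sudo apt install musl-tools                    # Ubuntu package providing musl-gcc
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl --bin safenode
```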
EDIT2: @chriso I got this warning/error when building:
dropping unsupported crate type `cdylib` for target `x86_64-unknown-linux-musl`
Not sure if this is going to be a problem, or if it's normal for musl?
Testing it now, and it's looking good, with memory dropping below 300MB after some minutes on the large machine. Will be checking to make sure it holds long term. EDIT: after many hours the nodes are at 200-300MB, which is around what others are reporting, so that confirms the musl build is the most likely solution.
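For the long-term check I'm just logging total and average RSS of the nodes periodically, something like:

```bash
# Log total/mean resident memory of all safenode processes every 10 minutes
# (ps reports rss in KB; assumes at least one node is running)
while sleep 600; do
  echo -n "$(date '+%F %T') "
  ps -C safenode -o rss= | awk '{s+=$1; n++} END {printf "total=%.1fGB mean=%.0fMB nodes=%d\n", s/1048576, s/1024/n, n}'
done
```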
Any thoughts on the "warning" and its effect on the binary?
dropping unsupported crate type `cdylib` for target `x86_64-unknown-linux-musl`
I ran into this memory problem early in the beta when running on a huge server with lots of RAM and lots of cores. I was looking at up to 1.6GB per node.
After hours checking the Rust language I couldn't find any reason for this other than the generic build method. I couldn't find build instructions either (wrt keys), so I have been running with swap and compressed swap ever since. Hugely frustrating at the time, but just one of many. Now it's amusing to see this being cleared up a bit.
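For anyone curious, the compressed swap was a zram device set up along these lines (sizes from memory, so treat them as illustrative):

```bash
# Set up a compressed swap device with zram
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm   # set algorithm before sizing
echo 16G  | sudo tee /sys/block/zram0/disksize         # uncompressed capacity
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0                          # prefer zram over disk swap
```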
More info: it is openSUSE on both machines, freshly installed about 3 months ago using the same version/release.
Yes I wondered about that.
@Southside shared his memory usage: around 233MB average on his machine with 128GB of memory, yet others on RPis are reporting much less than 200MB, like 120-150MB.
So even with the release supplied by MaidSafe, memory usage seems to have some correlation with system RAM and maybe CPU count.
I did read that Rust allocates a stack for each thread. Does the number of threads in the node have any correlation with the total physical threads of the device (virtual threads in a VPS)?
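A quick way to check that correlation, assuming the process is named safenode, and to experiment with the standard RUST_MIN_STACK variable that Rust's std consults for spawned-thread stack sizes:

```bash
# Compare one node's thread count against the machine's logical CPU count
nproc                                      # logical CPUs on this machine
pid=$(pgrep -x safenode | head -1)
ls /proc/$pid/task | wc -l                 # thread count of one node process
# std reads RUST_MIN_STACK (bytes) for spawned threads' stack size:
RUST_MIN_STACK=1048576 ./safenode          # e.g. 1MiB stacks instead of the 2MiB default
```

Worth noting that thread stacks are mostly virtual memory; only pages actually touched count towards RSS, so stacks alone probably can't explain hundreds of MB per node.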