Update 28th November, 2024

The nodes. I am just saying that this is incorrectly displayed in LP

4 Likes

What, are we on 65GB nodes now? :exploding_head:

3 Likes

We have been all along, ever since 32GB nodes were introduced. You see, roughly half the records were transaction records, so the claim was that a full node with all 16K records used would hold a tad over 32GB at most. Transaction records were only bytes in size (maybe 1KB).

Now those transaction records are not there, so the 16K records can be nearly all 4MB in size, and thus 64GB max.

But technically, market speak aside, they were 64GB nodes all along, which Jim acknowledged early on after being made aware of it. And technically it was never a fixed rule that half the records had to be transactions: some nodes might have had only 1/4 of their records as transaction records, and some 3/4.
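
To put numbers on it, a quick back-of-envelope sketch in Python, using the thread's own figures (16K records max, 4MB max record size, ~1KB transaction records, and the old nominal half-and-half mix):

```python
# Per-node sizing using the figures quoted in this thread.
MAX_RECORDS = 16 * 1024          # 16K records per node
DATA_RECORD = 4 * 1024**2        # 4 MB max data record
TX_RECORD = 1024                 # ~1 KB old transaction record

# Old nominal claim: roughly half the records are tiny transaction records.
old_max = (MAX_RECORDS // 2) * DATA_RECORD + (MAX_RECORDS // 2) * TX_RECORD
# Now: no transaction records, so every record can be a full 4 MB.
new_max = MAX_RECORDS * DATA_RECORD

print(f"old nominal max: {old_max / 1024**3:.2f} GB")  # ~32.01 GB
print(f"worst-case max:  {new_max / 1024**3:.2f} GB")  # 64.00 GB
```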

1 Like

But does that mean 100 nodes need 3.2TB or 6.4TB of HDD space?

6.4TB since that is 100 x 64GB

Each node has 16K records max
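
So for anyone sizing drives, the 6.4TB figure falls straight out of that per-node max. A sketch, treating GB loosely the way the thread does:

```python
# Fleet sizing: worst case assumes every node completely full.
NODE_MAX_GB = 64     # per-node worst case from above
nodes = 100
print(nodes * NODE_MAX_GB / 1000, "TB")   # 6.4 TB
```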

HDDs are so expensive, and now I need more of them.

Not like you aren’t going to make money from it

Suggest you get the biggest that still fits in the sweet spot of $$$/TB

EDIT: also suggest not getting SMR (shingled) drives but CMR. CMR may cost a tad more, but they are better at writing data.

1 Like

Can I tell them that when I buy them, I'll pay them a few months later? :smile: My balance has been going down the drain since June, when the rewards started. I only have 2 CPUs, 6TB, 32GB RAM, and I need 3×32GB RAM, 1 CPU, 1 mobo, 3×6TB HDDs, 1 PSU.

Think the sweet spot for HDDs might be 6TB, so I'm planning on getting 6TB ones and setting them up in RAID 0, or the Linux thing, if I ever get my head around that.

1 Like

I hate SMR drives with a passion because they are no good for most uses, and the manufacturers tried to make out they weren't using them when they were.

However, I think Autonomi could be the one valid use for them. I think the relatively small amount of writing from nodes won't cause a problem with destaging from the large write buffer manufacturers have to put in front of the drives' awful write performance, as long as nodes are not started all at once. And they can't all be started at once anyway, because of CPU and bandwidth considerations.
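
For a feel of the rates involved, here is a sketch. The ingest figure is purely hypothetical (I haven't measured anything); the point is only that steady-state writes are orders of magnitude below what an SMR drive's staging cache can absorb:

```python
# Illustrative only: records_per_hour is a made-up steady-state guess.
RECORD_SIZE = 4 * 1024**2        # 4 MB max record (from this thread)
records_per_hour = 60            # HYPOTHETICAL per-node ingest rate
rate = records_per_hour * RECORD_SIZE / 3600   # bytes per second
print(f"~{rate / 1024:.0f} KiB/s per node")    # ~68 KiB/s
```

Even a few dozen nodes at that rate is only a few MiB/s, which is why startup replication is the only window I'd worry about.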

I need to test all this.

1 Like

While I agree, I think if I were buying new drives for a project where they'd be running and storing data 24/7, even at low rates, I'd go for CMR drives even if they cost more.

But yes, the write rate is not high, except on node startup.

So would I, but if there is a chance there is a use case out there for all the millions of otherwise useless drives that have been manufactured, I'll invest a bit of time in it.

2 Likes

Thanks. Has anybody looked at having safenodes, on a fresh install of say Linux, use the btrfs filesystem to periodically and dynamically expand the size of the block volumes for safenodes as they fill up? I am looking at exploring this in a future setup on Linux, now that I have just shut down this MSWIN11 daily driver as a node runner.

I haven't, and I'm not sure if others have.

We have a setup with our own deployment now where the node storage is on an additional volume that is expandable.

I would generally recommend that node operators run their nodes on some kind of external storage.

If your nodes' storage gets full, you can still have them tick away gracefully. If the storage on the root partition gets full, that's bad for the OS, and the node manager state can become corrupted.
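
On that note, a minimal sketch of the kind of check you could run on a dedicated node volume (the mount point is hypothetical; point it at your own volume):

```python
# Warn before a dedicated node volume fills up.
import shutil

NODE_VOLUME = "/mnt/node-storage"   # hypothetical mount point
usage = shutil.disk_usage(NODE_VOLUME)
pct_free = 100 * usage.free / usage.total
print(f"{pct_free:.1f}% free on {NODE_VOLUME}")
if pct_free < 10:
    print("Warning: volume nearly full; expand it or stop adding nodes")
```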

6 Likes

Thanks @chriso This helps out on this journey we are taking to build a robust ‘turnkey’ low cost appliance for Autonomi

2 Likes