BasicEconomyTweaks [Early Technical Beta] [OFFLINE - see new beta test - part deux]

I’ve spun up some new nodes since the price went through the roof, but they’re not getting much data at all, and the upload price still isn’t going down. I don’t think the balancing system is in this iteration, or if it is, it’s not working as it should.

4 Likes

Like, how much do they have? My node, which I’ve had since close to the beginning, has 1291 records.

These are the last 5 I spun up:

------------------------------------------
Timestamp: Tue Apr 02 08:22:53 EDT 2024
Number: 10
Node: 12D3KooWQywUDngwNmUB8BpWENsYBxhVib2ARmUWWka8kNw6SeZr
PID: 538223
Memory used: 41.0195MB
CPU usage: 1.5%
File descriptors: 412
Records: 848
Disk usage: 190MB
Rewards balance: 0.000000010
------------------------------------------
Timestamp: Tue Apr 02 08:22:53 EDT 2024
Number: 11
Node: 12D3KooWKS8v2uWUG4GeeqdJNJb8yP4VfqNuL1acxjJqY7k63Ryn
PID: 509334
Memory used: 30.9258MB
CPU usage: 1.4%
File descriptors: 368
Records: 621
Disk usage: 152MB
Rewards balance: 0.000000472
------------------------------------------
Timestamp: Tue Apr 02 08:22:53 EDT 2024
Number: 12
Node: 12D3KooWSpq8uJZaJ5prsaTDEgJXCHtMuAHJieDPsUEbEiv9znHL
PID: 493392
Memory used: 38.9375MB
CPU usage: 1.3%
File descriptors: 301
Records: 741
Disk usage: 185MB
Rewards balance: 0.000008720
------------------------------------------
Timestamp: Tue Apr 02 08:22:53 EDT 2024
Number: 13
Node: 12D3KooWDprQ7HnXGzj6TZJwF4LJzExjaPB7ffTypcMYsopaBzQG
PID: 493352
Memory used: 31.8438MB
CPU usage: 1.6%
File descriptors: 268
Records: 1304
Disk usage: 307MB
Rewards balance: 0.000083139
------------------------------------------
Timestamp: Tue Apr 02 08:22:53 EDT 2024
Number: 14
Node: 12D3KooWS9eqBnhjxkw6Smhj84rHazMZnTKwgCaswpwKbVcnf9rx
PID: 538208
Memory used: 35.1836MB
CPU usage: 1.6%
File descriptors: 401
Records: 1046
Disk usage: 245MB
Rewards balance: 0.000000010
------------------------------------------

The older ones are fuller.

I have had a few nodes running since Sunday; I don’t think they have received much, or anything at all. Is anyone trying to make uploads?

Is there a way to get the number of records my node is “responsible for”? I can get the number of files in the records dir easily.
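Counting the files in the records dir is indeed a one-liner. A minimal sketch, with a stand-in directory (the real store lives somewhere under your node’s data dir; the exact path varies by setup, so adjust `record_dir` accordingly):

```shell
# Stand-in directory with fabricated records, purely for illustration.
record_dir=./record_store
mkdir -p "$record_dir"
touch "$record_dir/chunk_a" "$record_dir/chunk_b" "$record_dir/chunk_c"

# Count the record files held on disk (tr strips wc's leading padding).
stored=$(find "$record_dir" -type f | wc -l | tr -d ' ')
echo "records stored: $stored"
```

Note this only tells you what is physically on disk, not what the node considers itself "responsible for" — which is exactly the distinction being asked about.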

Also, I’m curious whether the price calc is based on how many records are “in the node” vs. how many records my node is “responsible for”. If it’s the former, that may explain why the price isn’t going down.

You will be able to grep the relevant-records count in your logs with the new release, I think. It was recently merged.
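Once that log line lands, something like the following should pull the latest count. This is a sketch against a fabricated log file; the message text matches the `Relevant records len is ...` line mentioned later in the thread, but the log path and format here are assumptions:

```shell
# Fabricated sample log, for illustration only; point grep at your real node log.
mkdir -p logs
printf '%s\n' \
  '[2024-04-02T12:00:00Z DEBUG sn_networking::record_store] Relevant records len is 848' \
  > logs/safenode.log

# Extract the most recent relevant-records count.
grep -o 'Relevant records len is [0-9]*' logs/safenode.log | tail -1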

1 Like

This is why the price doesn’t go down when we spin up new nodes: it’s based on records stored. So it doesn’t matter if new nodes spin up and take some of the load off the fuller nodes. The “from” nodes don’t delete records, and the formula only looks at records on the node. With this, the price will only ever be based on the maximum number of chunks a node has ever held.

2 Likes

Couldn’t nodes simply communicate to their node manager the current price they are using plus their current storage capacity?

The node manager could then calculate the median of each and return those values, so that nodes could target the median price and adjust their current price relative to the median storage and their own storage, with all communication staying local.
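The median step of that idea could be sketched in a few lines. The prices here are made-up sample values; a real node manager would collect them from its local nodes:

```shell
# Fabricated per-node prices, one per line, for illustration.
printf '%s\n' 5 1 4 2 3 > prices.txt

# Sort numerically, then take the middle value (or the mean of the two middles).
median=$(sort -n prices.txt | awk '
  { v[NR] = $1 }
  END {
    if (NR % 2) print v[(NR + 1) / 2]
    else print (v[NR / 2] + v[NR / 2 + 1]) / 2
  }')
echo "median price: $median"
```

The same pipeline works for storage capacity; the manager would just run it over a second column of reported values.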

edit: I think, though, that ultimately we need a more market-oriented mechanism. I don’t think clients are comparing prices (and allowing users to set limits) yet, but I assume they must do this somehow in the future. That being so, if nodes get turned down by a client (get asked for a price, but don’t receive a chunk), that should be a signal to reduce their price, and every time they successfully receive a chunk, that should be a signal to increase it.

With those taken into consideration, I think we would have a basic market system.
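The turn-down/accept rule described above could be sketched as a toy adjustment loop. Everything here (starting price, the 10% step) is made up purely to show the shape of the feedback:

```shell
# Toy price-feedback sketch: nudge the quoted price down when a quote is
# rejected, and up when a chunk is actually stored. All numbers fabricated.
price=100

adjust() {    # usage: adjust accepted|rejected
  case "$1" in
    accepted) price=$((price + price / 10)) ;;   # chunk stored: raise ~10%
    rejected) price=$((price - price / 10)) ;;   # quote turned down: cut ~10%
  esac
}

adjust rejected
adjust rejected
adjust accepted
echo "price: $price"
```

Two rejections followed by one acceptance walk the price 100 → 90 → 81 → 89, so an unpopular node drifts cheaper until it starts winning chunks again.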

4 Likes

All the nodes under one manager will be in (roughly) different groups. One could have 30% free, another could be 70% free. (Ideally that wouldn’t be the case, but it’s showing to be true.)

IMO, the pricing calc must be done on relevant chunks, not on everything stored.

Edit: I see now that it just got merged.

Hi, just another eager newbie here learning/testing/running safe nodes!

BTW many thanks to the Autonomi team for their 18 years of dedicated effort in creating such an epic project. The hope it offers for humanity is truly inspirational!!

On Friday I set up an Ubuntu server VM in Azure, then got 10 safenodes running. After installing the helpful vdash tool today, I see there appear to be plenty of PUTS and GETS, yet Records still = 0?

Skimming through the verbose log files, I can see entries like ‘ValidChunkRecordPutFromNetwork’, so I would have thought that would equate to Records in vdash being > 0?

As far as I can tell, everything otherwise appears to be running OK, so I can’t really tell whether the volume of chunk-saving activity is just too low so far for vdash to indicate anything, or whether there’s something server-related that I’ve overlooked. Perhaps I just need to leave it running for a good while longer? Any clarification or advice on whether this all looks as expected would be greatly appreciated.

6 Likes

It’s either a problem with your node (but if so, probably not something you can fix right now) or with the information vdash is using. What node version do you have?

safenode --version
safenode cli 0.104.41

BTW your nodes are doing better than mine, which for the first time don’t even connect, but I think an update to safenode will fix that soon.

Welcome to the band of noderunners! I think we’ll have a new testnet soon so you should be ready for that and then let’s see how that goes.

5 Likes

I’d love to see this testnet roll into the next one with a rolling node update. Is that a thing yet?

4 Likes

Thx, I currently have safenode cli 0.105.2.

Is it perhaps just a compatibility issue with the way vdash reads logs in the newer version of safenode? Or do you think I should try reverting specifically to 0.104.41?

Anyway, I’m aware it’s early days and there are all these teething issues to sort through. Happy to wait for any later release in the pipeline if that’s more helpful for wider testing purposes.

3 Likes

It’s about time I had a look into this.

Sorry, I think I was unclear. I was trying to ask: if I’m stuck behind my network’s firewall, why are PUTs and GETs showing up in the first place? I guess it means that my node sets up correctly and gets requests for PUTs and GETs in the usual fashion, but then can’t actually store anything because my network won’t let the data through when the time comes. Actually, now that I write it out, maybe I had no question; it must be that, I suppose.

What do you use for your statistics? Very clean-looking, whatever it is. (I feel like I asked this before, but I’m not sure if it’s déjà vu.)

Yes, I believe so. I had a quick look there and there are loads of entries like this:
[2024-04-01T20:47:42.944008Z WARN sn_networking::event] OutgoingConnectionError to PeerId("12D3KooWKScPGfag1qrHPYd4oxzsGMJZee348QAagwHKNMyAzAeQ") on ConnectionId(232) - Transport([("/ip4/99.43.124.25/udp/12164/quic-v1/p2p/12D3KooWKScPGfag1qrHPYd4oxzsGMJZee348QAagwHKNMyAzAeQ", Other(Custom { kind: Other, error: Custom { kind: Other, error: HandshakeTimedOut } }))])

I enjoyed figuring out these one-liners (regex101 and explainshell are great websites). Running the two on the logs of this node of mine, 12D3KooWEh6FABd3GEZcAipq44kZqRReCREnyGYbCyEHWieTgUFE, I got 357 and 98, so ~27.45% of them ended up like that. I can provide full logs if that is any use. Of my five nodes, I presume it’s similar on the others, but I haven’t run the commands on all of them.
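For anyone wanting the same kind of numbers, a sketch of the counting-and-percentage step. The sample log lines below are fabricated (so the ratio here is 2/3, not the 98/357 above), and the real log path will differ per node:

```shell
# Fabricated sample log; in practice, grep your node's actual log file(s).
mkdir -p logs
printf '%s\n' \
  'WARN OutgoingConnectionError ... HandshakeTimedOut' \
  'WARN OutgoingConnectionError ... ConnectionRefused' \
  'INFO unrelated line' \
  'WARN OutgoingConnectionError ... HandshakeTimedOut' > logs/safenode.log

# Count all outgoing-connection failures, and the subset that timed out on handshake.
total=$(grep -c 'OutgoingConnectionError' logs/safenode.log)
timed_out=$(grep -c 'HandshakeTimedOut' logs/safenode.log)

# Report the share of failures that were handshake timeouts.
awk -v t="$total" -v h="$timed_out" 'BEGIN { printf "%.2f%% timed out\n", 100 * h / t }'
```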

@aatonnomicc, doing the Lord’s work spreading the tunes :pray:

6 Likes

Welcome, and may the chunks soon be upon you :bowing_man:

I also have no records or earnings. I just fired up a few nodes at home with no port forwarding though, so it makes sense that I got no records; I was just messing about, seeing what happens. Your vdash output looks suspiciously similar to mine though; see this screenshot I took after a few hours: BasicEconomyTweaks [Early Technical Beta] - #387 by JayBird

So I’d be having a peek at the server if I were you! From reading other posts, it seems people get records fairly soon after spinning up nodes. Perhaps a more experienced tester than I am can jump in and provide some more specific tips soon.

2 Likes

Can do, I’m up for that plan :slight_smile:

1 Like

Yes, they’ve messed with it again. You can try the earlier version, or wait and see if I can fix this and update vdash, which I’ll have to do anyway. I’ll have a look now and update this reply shortly.

EDIT:
Yes, the INFO log message that was providing a count of relevant records has been changed. It is now a DEBUG-level message, and the format has changed, so until they make it an INFO message again I won’t be able to update vdash.

I had marked their source to try to prevent this when they make changes, but it hasn’t worked. cc @joshuef (any chance you can switch this message from DEBUG to INFO in record_store.rs: debug!("Relevant records len is {relevant_records_len:?}");, and find a way to let me know of changes?). The comments I inserted haven’t worked, and one has been left hanging where there isn’t a log message statement (at record_store.rs line 453 here). Thanks.

1 Like

Many thanks, though there’s no rush on my account. I’m happy to wait till you have a chance to catch up in your own good time! I very much appreciate your efforts on the very helpful vdash monitor.

4 Likes

It’s too expensive.

There is nothing to stop you trying and finding out for yourself. Good to see you finally contribute something positive here. Well done!

2 Likes