BasicEconomyTweaks [Early Technical Beta] [OFFLINE - see new beta test - part deux]

I’m a little worried it’s going to just roll through them all and bang them all in. I’d like to be able to control it. Do one, then 3-4, then as they stabilize keep it going.

There is an rpc command to upgrade the node, so it would be easy to script a solution to do this.
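A staged rollout like that could be sketched as a small bash script. Note that `upgrade_one` below is a placeholder, not a documented command: substitute whatever RPC or node-manager call actually upgrades a single node in your setup.

```shell
#!/usr/bin/env bash
# Staged upgrade sketch: upgrade nodes in small batches, pausing between
# batches so you can confirm each batch stabilises before continuing.

upgrade_one() {
  # Placeholder (an assumption): swap in the real RPC/CLI upgrade call here.
  echo "upgrading $1"
}

staged_upgrade() {
  local batch="$1" delay="$2"
  shift 2
  local total=$# i=0 node
  for node in "$@"; do
    upgrade_one "$node"
    i=$((i + 1))
    # Sleep after each full batch (but not after the last node) so the
    # freshly upgraded nodes can settle before the next batch starts.
    if [ $((i % batch)) -eq 0 ] && [ "$i" -lt "$total" ]; then
      sleep "$delay"
    fi
  done
}

# Example: one node first, then batches of 3 with a minute between them.
# staged_upgrade 1 60 node1
# staged_upgrade 3 60 node2 node3 node4 node5
```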

It’s outdated.

:smile: aaaah - I’d just go for the big bang anyway :sweat_smile:

Guess ideally the network/system approaches a state robust enough to just survive it

This is an old thing and definitely will not work correctly. I’ll need to have it removed.

If you don’t want to upgrade all the nodes at the same time, you can do that with the node manager.

Noted. Thanks. I was waiting for the next update to try it.

Btw, out of 15 nodes, I have only one with no rewards.
Log file attached in case it can be useful
safenode.zip (1.0 MB)

Amazing experience thus far, thank you to everyone who made this happen (I’m loving vdash as well).
Last testnet I was able to collect records and nanos, but this time I’m collecting nanos but no records. This is a droplet seeing constant CPU usage; the inactive nodes were started without ‘& disown’, but the active ones are still stacking puts/gets.

Any ideas as to why I’m not seeing records this time? I wouldn’t have thought I could earn without records. Thoughts?

7 Likes

See @web’s post, similar to yours and my replies:

1 Like

As @happybeing says, this testnet is perhaps winding down. @web was getting puts and gets but no records or earnings last I heard - any updates there? Getting earnings with no records does indeed seem paradoxical, I didn’t see anyone else with that happening.

You’ve given me a little clue here as to how I can better manage my nodes in the future. Reading a Baeldung tutorial on Linux Job Control here now. Cheers.
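For anyone else landing here, a minimal sketch of the job-control pattern mentioned above (`& disown`), with `sleep` standing in for a long-running safenode process:

```shell
# Start a long-running command in the background, then remove it from the
# shell's job table so it won't receive SIGHUP when the shell exits.
sleep 30 &      # stand-in for a long-running process such as safenode
disown %%       # detach the most recent background job
```

`disown` is a bash/zsh builtin; `nohup some_command &` is the portable POSIX alternative if you're launching from a plain sh.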

4 Likes

I’ve got an updated script to run with the safe node manager

bash <(curl -s https://raw.githubusercontent.com/safenetforum-community/TIG_Stack/main/maid.sh)

Got a bit of a readme; it’s a work in progress, so pull requests are most welcome :grin:

4 Likes

Ahhh thanks, I hadn’t seen that. So Grafana is where these fancy-pants graphics are coming from. Very cool. Will investigate!

EDIT: last knowledge-get of the day: sn-node-manager is where people go for smoothly removing nodes, checking the status of everything, etc. Finally, the penny dropped. A great day all round

3 Likes

Whoever draws this has a lot of talent.

1 Like

Try this:-

ls $HOME/.local/share/safe/node/ | while read f; do echo ${f} ; ls $HOME/.local/share/safe/node/${f}/record_store | wc -l ; echo ; done

Edit
Just realised you are talking about something different: how to get a count of records you are definitely responsible for - not the total of records you are storing.

1 Like

not working here

1 Like

Yeah, Grafana is the new place to be. Read the readme in the GitHub repo; there are still a few niggles.

But if you want to try it out @scottefc86 was about to create a dedicated thread :slight_smile:

5 Likes

Ahhh dangit, apologies @happybeing! I did see a comment here and had reworked a PR to avoid removing it, but might have lost that change amongst some rebasing :man_facepalming: I think we should set up LogMarkers for everything we want logged, and that would warn us of unused variants there, I think. I’ll try and set that up to get this back in.

edit: @happybeing, there’s a log here: safe_network/sn_networking/src/record_store.rs at da05ca03e04700aec3338164606e5094d0461053 · maidsafe/safe_network · GitHub

(seems like the comment was duplicated in the merge, but the log line lives on?)

Can you link us and cc @chriso there, I’m not sure what the safeup issue is.

edit:

@happybeing I’ve added a commit formalising LogMarkers for sn_networking over here: fix(networking): use REPLICATE_RANGE constant for responsibility dete… by joshuef · Pull Request #1558 · maidsafe/safe_network · GitHub

We can add more logs that are forming an effective API surface in there to hopefully avoid this in future :+1:

6 Likes

Has anyone created a Helm chart for deploying? I should be getting my Pi 5 POE hats in the next couple days and was going to setup a 5 node Kubernetes cluster.

I just wanted to see if someone has done that work so I could save some time.

How is this testnet going…?

That is a very good idea! It would make the network growth a bit more realistic.

For this network - and others before it - I think the issue we have is that just about all the people who will join in have done so, with all the resources they are able to sling at it. Yes, we’re getting a couple of people coming in a few days late with a few nodes, but not enough to move the needle now it’s very full.

3 Likes

I don’t think that even removing the directories like that will kill the processes. If you want to kill all the safenode processes you can just do

killall safenode

But I don’t think that is what you’re after either. I assume you want to kill only the ones that are unresponsive. That is a bit more involved, but it can be done using ‘lsof’, which lists open files. The safenode process will have its log file open.

1. Work out which of the safenodes is not working right by looking at the logs in:-

$HOME/.local/share/safe/node/<peer_id>/logs/safenode.log

2. Run this:-

lsof | grep '<peer_id>'

eg.

lsof | grep '12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi'

You’ll get something like this:-

lsof | grep '12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi'
safenode  31319                          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31321 tracing-a          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31322 tokio-run          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31323 tokio-run          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31325 tokio-run          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31326 tokio-run          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
safenode  31319 31327 futures-t          ubuntu    3u      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834                          ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834 47835 tokio-run          ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834 47836 tokio-run          ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834 47837 notify-rs          ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834 47839 vdash              ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log
vdash     47834 80575 tokio-run          ubuntu   13r      REG              259,1 10084674     532775 /home/ubuntu/.local/share/safe/node/12D3KooWFo7KPU3cf2XuBqbgwDmZJWg33oTnM2wUx4fzijJkgAQi/logs/safenode.log

Ignore the entries for vdash there because I’m running vdash. You now have the process id of the safenode which is 31319 in this case.

So then you can kill that with:-

kill 31319
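The steps above can be wrapped into a small function: `lsof -t` prints only the PIDs holding a file open, and `ps -o comm=` lets us skip other readers such as vdash. The process name is a parameter here purely so it can be exercised against any command; the default matches the safenode case above.

```shell
# Kill only the process of a given name that holds a log file open;
# other readers of the same file (e.g. vdash) are left alone.
kill_holder() {
  local log="$1" name="${2:-safenode}" pid
  # -t (terse) makes lsof print just the PIDs, one per line
  for pid in $(lsof -t "$log" 2>/dev/null); do
    # ps -o comm= prints only the command name for that PID
    if [ "$(ps -o comm= -p "$pid")" = "$name" ]; then
      kill "$pid"
    fi
  done
}

# Usage, with the peer-id path from step 1 above:
# kill_holder "$HOME/.local/share/safe/node/<peer_id>/logs/safenode.log"
```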
4 Likes

So far so good. The most feature-rich one in many years, and probably the most stable to date. Some more tweaks are in the works, and we’ll see if they’re backwards compatible or whether nodes can just upgrade in place.

Should hop in and give it a try!

3 Likes