+++ Beta Network is Now Live! +++

Yes, though it’s actually the KeyBytes inside it.

yes

You can have a read of the libp2p key code.

Yes

6 Likes

Not sure what to do with my nodes besides thin them down and double check router settings.

Only running 10, and they were doing great before, but now, even with most having decent peer counts, they’re earning zero nanos.

If anyone has any advice, lemme know!

We are waiting for an update to fix this. I’m seeing the same thing. It should be soon.

2 Likes

How do I do a safe wallet send 42 b060b8633fb52d3? This is with a non-owner node, btw.

Safenode1/wallet is where I can find my pub/priv key.

1 Like

@19eddyjohn75 if you’re trying to send out from a node wallet, the best way I’ve found so far is to stop the node, move the wallet to the client folder, send the coins out, then put the wallet back in the node folder and restart the node.

There is a script here that should work, but beware: if it’s a large machine with lots of nodes and lots of nanos, restarting all the nodes could send the machine into meltdown.

NTracking/scrape.sh at main · safenetforum-community/NTracking · GitHub
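For anyone wanting to script the stop/move/send/restore dance above, a minimal sketch might look like this. The wallet paths, the service name, and the address are all assumptions (placeholders), not the actual layout of any given install, so check your own node’s data directory before running anything:

```shell
#!/usr/bin/env bash
# Sketch of: stop node -> move wallet to client -> send -> restore -> restart.
# All paths and the service name below are ASSUMPTIONS; adjust to your setup.
set -euo pipefail

NODE_WALLET="${NODE_WALLET:-${HOME:-/tmp}/.local/share/safe/node/safenode1/wallet}"
CLIENT_WALLET="${CLIENT_WALLET:-${HOME:-/tmp}/.local/share/safe/client/wallet}"

# Move a wallet directory into place, backing up anything already there.
swap_wallet() {
  local src="$1" dst="$2"
  if [ -d "$dst" ]; then
    mv "$dst" "${dst}.bak"   # keep any existing wallet, just in case
  fi
  mv "$src" "$dst"
}

# The actual dance (uncomment once the paths above match your machine):
# systemctl --user stop safenode1                # stop the node first
# swap_wallet "$NODE_WALLET" "$CLIENT_WALLET"    # node wallet -> client
# safe wallet send 42 <address>                  # send out as the client
# swap_wallet "$CLIENT_WALLET" "$NODE_WALLET"    # put the wallet back
# systemctl --user start safenode1               # restart the node
```

The backup step means a misfire leaves the old client wallet in a `.bak` directory rather than overwriting it.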

1 Like

Yeah, I had 170 nodes running; somehow I messed around and bumped it up to 190 nodes, and now that VPS is basically the T-1000. Can’t SSH into it anymore, and can’t say if I can still access it at all. (Funny that I try to give the Network as many nodes as possible, but now I’ve got 100+ nodes melting.)

Would be fun if we made some tuts on the Network showing all this know-how.

Thx 4 the script :beers:

2 Likes

I’m playing with a version of that script, running it hourly as a cron job so as to spread the node restarts out and not kill the machine.

Found out the hard way, trying to run it on a machine that had been up for a week, that it’s best to run it frequently :slight_smile:
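The hourly-spread idea could be sketched like this (this is not the NTracking script itself; the node count, batch size, unit naming, and paths are all assumptions): each hourly cron run restarts only a small batch, and successive runs rotate through all the nodes.

```shell
#!/usr/bin/env bash
# Restart nodes in small rotating batches instead of all at once.
# Node count, batch size, and the restart command are ASSUMPTIONS.
set -euo pipefail

# Print the 1-based node indices to restart for a given hour, so that
# successive hourly runs cycle through all nodes rather than hitting them all.
pick_batch() {
  local total="$1" batch="$2" hour="$3"
  local start=$(( (hour * batch) % total ))
  local i
  for i in $(seq 0 $(( batch - 1 ))); do
    echo $(( (start + i) % total + 1 ))
  done
}

# Example cron entry (crontab -e) to run this hourly:
#   0 * * * * /home/me/restart_batch.sh >> /home/me/restart_batch.log 2>&1
#
# for n in $(pick_batch 20 2 "$(date +%-H)"); do
#   systemctl --user restart "safenode$n"   # assumed unit naming
# done
```

With 20 nodes and a batch of 2, every node gets restarted over a 10-hour cycle instead of the whole fleet thrashing at once.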

2 Likes

Having some issues where, roughly every 48 hours, all my nodes seem to randomly stop without any interference. Unsure why… any suggestions, folks?

1 Like

@neo saw something similar IIRC

He’ll be along once the sun rises over the billabong.

Keep the logs…

Yeah, my logs said I ran out of disk space. Yet a couple of nodes survived and were quite happily writing to the drive, which had 1.5TB of free space. And ironically the node still wrote to its log on a “full” disk, LOL.

Oh, and when it detected this, it triggered self-destruction of the node and deletion of its directory.

Supplied the log file to a dev and no word at all from them. @rusty.spork did tell me they were notified of it.

1 Like

Does this happen when nodes are under-provisioned for 5GB per node, or when logs reach 3GB?

Is that 3GB allocation separate from the 2GB?

Nope, the space is only consumed as it’s used. Most nodes are using less than the 2GB, and the 3GB is only an estimate of how much space the logs will take once there are enough logs to hit the max-logs-to-keep value.

But yes, if you have 20 nodes and only 50GB of free space, it will eventually run out, and from my look at the logs the nodes kill themselves when they detect they are out of disk space. And I suppose they kill the directory as well so that other nodes may survive.

But in my case it happened consistently for 2 days; after the first time, most of the nodes would self-destruct within an hour of them all starting. Somehow one of the nodes that self-destructed left its log file behind. The problem has not resurfaced since I left the nodes off for a day.
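The back-of-envelope check from the figures above (assuming a worst case of ~2GB of records plus ~3GB of logs per node, i.e. ~5GB each) can be made explicit; the node count and free-space figure here are just the example numbers from this thread:

```shell
#!/usr/bin/env bash
# Worst-case disk provisioning check, using the ~2GB data + ~3GB logs
# per-node figures discussed above (assumptions, not guarantees).
set -euo pipefail

nodes=20
per_node_gb=$(( 2 + 3 ))           # ~2GB records + ~3GB logs, worst case
need_gb=$(( nodes * per_node_gb ))
free_gb=50                         # e.g. read from: df -BG /path/to/node/data

echo "worst case: need ${need_gb}GB, have ${free_gb}GB"
if [ "$need_gb" -gt "$free_gb" ]; then
  echo "under-provisioned: nodes may hit the disk-full self-shutdown"
fi
```

In this example 20 nodes could eventually want 100GB, double the 50GB available, which matches the “it will eventually run out” observation.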

2 Likes

Unfortunately this seems to be the norm. I have had the odd exception, but generally only after they have fixed the issue, or are working on it and need more information. And then only in a GitHub issue, I think.

Once or twice I’ve asked Rusty to check and he will either say they know about it or are working on it.

I think it would help us if they let us know the current state of any reports:

  • received and noted
  • assigned
  • being fixed
  • PR ready
  • waiting for x y or z
  • etc

They must - I hope they must - have this info to hand.

4 Likes

Give me a list of questions for @chriso and we will get them (semi) formally submitted, as per the agreement of the other week.

I’m still waiting on the list of error msgs, and the promised rationalisation of the Discord channels.
However, given the encouraging noises re the new Launchpad, perhaps a n00b-friendly log inspector is not such a priority. I still think it would be a useful tool, though.

2 Likes

You have to try and be a bit patient, and understand that we are all very busy. I don’t actually know why Aaron directed this one to me, because I’m not sure I would actually be the best person to debug the issue. I didn’t request the logs. That’s a miscommunication.

As far as I knew, it was something you had experienced in isolation. I wasn’t aware that anyone else had experienced the problem.

I’m not even sure how I would go about reproducing the issue. We haven’t encountered anything similar in the nodes that we are running.

Give me a bit of time and I might be able to do something about it next week.

3 Likes

I am. I was mostly trying to convey that I have no answer, even though I have seen it and submitted the logs. I never expected it to be worked out until you guys fluked seeing it, or others also submitted their logs.

For others, maybe just an acknowledgement that the logs were seen would be great, so they know they were heard and not being ignored.

Just yesterday there was someone on Discord who mentioned their nodes had been running for some time and then suddenly stopped. I hear that on Discord often enough to know there is some issue. After we weeded out the laptop issues (hibernation, Windows updates, and so on, which cause an OS reboot or the CPU going “off”), we still have reports of most of one’s nodes just stopping. For those with auto-restart enabled, I think it masks this issue.

It’s something to do with the file-system interface. Is there any way for the node to lose track of which directory/filesystem it is using for storage? The errors are things like “file not found” when trying to retrieve a record, and “disk full”. Almost as if it changed filesystems on the Linux box, as odd as that would be.
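On the auto-restart masking point: a supervisor config along these lines (a sketch only; the template unit name, binary path, and options are assumptions, not the actual packaging) will silently bring a crashed node back, which is exactly why the sudden stops can go unnoticed:

```ini
# /etc/systemd/system/safenode@.service  (hypothetical template unit)
[Unit]
Description=safenode instance %i

[Service]
ExecStart=/usr/local/bin/safenode
Restart=on-failure     # resurrects a crashed node, masking the sudden-stop issue
RestartSec=30

[Install]
WantedBy=multi-user.target
```

Checking `systemctl status` or journal timestamps for unexpected restart counts would reveal whether nodes are quietly dying and being revived.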

3 Likes

For the people who encounter this, how many nodes are they running?

Some 5, some 10, maybe figures around 20; usually not the large numbers, as those would suggest problems elsewhere, like CPU issues or the disk actually filling up. I had like 30 or 40 (I’d have to check again), but for me there was 1.5TB free on the home drive, over 100GB free in /tmp, and over 1TB on the root drive. And a 48-thread CPU doing like 5% work, with over 200GB of free RAM.

1 Like

Thanks. Is there also something leading you to believe that everyone is experiencing the same issue here?

It’s more that there is some issue affecting those we couldn’t find a cause for. I can’t say it’s the same issue, but it kinda suggests it when nodes just suddenly die without an obvious reason. It’s more that there is nothing else around to explain it.

Question: am I right in assuming that when nodes shut down for what they think is no disk space, they delete their own directory under the node directory? When mine died, this was one of 3 directories left and the only one with logs; the other 2 had their (empty) wallet subdirectories. I am guessing the log for this one survived because the file had not yet been closed when the directory was requested to be deleted.

1 Like