It’s most likely because you have to refer to the node manager like this: sudo ~/.local/bin/safenode-manager
.
Sorry, this will be remedied soon.
Maybe try hosting it somewhere else.
I’ve been a bad boy! I have a node reporting as Shunned because of ‘BadQuoting’. I don’t think I’ve seen that one before.
[2024-05-09T21:03:25.439991Z WARN sn_networking::event::request_response] Peer NetworkAddress::PeerId(12D3KooWJ8rrFWkbesi77JFKMtSDJgvEh9ZWWTcPKxp9eTAbPdKz - 181935f1e372f03cb2172a5a22e849ca02aa86869432fc4a8ad0e1d975b1c887) consider us as BAD, due to "BadQuoting".
Only 1 entry like that.
Or is it the other node thinking I’m bad when I’m not?
This node has earned a small balance. So it’s not all bad.
Edit: these are the log entries right before that message, which might be useful:
[2024-05-09T21:03:25.138707Z INFO sn_networking::record_store] Cost is now 20 for quoting_metrics QuotingMetrics { close_records_stored: 82, max_records: 2048, received_payment_count: 2, live_time: 10096 }
[2024-05-09T21:03:25.140273Z INFO sn_networking::log_markers] StoreCost { cost: 20, quoting_metrics: QuotingMetrics { close_records_stored: 82, max_records: 2048, received_payment_count: 2, live_time: 10096 } }
[2024-05-09T21:03:25.140880Z DEBUG sn_node::quote] Created payment quote for NetworkAddress::ChunkAddress(2fdab9 - 18305ff65925f11f580318a09ade37e20b016dd4ad9f84713db1e2bb2cf7dde3): PaymentQuote { content: 2fdab9(00101111).., cost: NanoTokens(20), timestamp: SystemTime { tv_sec: 1715288605, tv_nsec: 140723060 }, quoting_metrics: QuotingMetrics { close_records_stored: 82, max_records: 2048, received_payment_count: 2, live_time: 10096 }, owner: "user" }
[2024-05-09T21:03:25.332643Z DEBUG sn_node::quote] Verifying payment quote for NetworkAddress::ChunkAddress(2fdab9 - 18305ff65925f11f580318a09ade37e20b016dd4ad9f84713db1e2bb2cf7dde3): PaymentQuote { content: 2fdab9(00101111).., cost: NanoTokens(20), timestamp: SystemTime { tv_sec: 1715288605, tv_nsec: 140723060 }, quoting_metrics: QuotingMetrics { close_records_stored: 82, max_records: 2048, received_payment_count: 2, live_time: 10096 }, owner: "user" }
[2024-05-09T21:03:25.337359Z DEBUG sn_transfers::wallet::data_payments] The new quote has 90 close records stored, meanwhile old one has 89.
[2024-05-09T21:03:25.337665Z DEBUG sn_transfers::wallet::data_payments] The new quote has 89 close records stored, meanwhile old one has 88.
[2024-05-09T21:03:25.337791Z DEBUG sn_transfers::wallet::data_payments] The new quote has 90 close records stored, meanwhile old one has 89.
[2024-05-09T21:03:25.338441Z DEBUG sn_transfers::wallet::data_payments] The new quote has 90 close records stored, meanwhile old one has 89.
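If you want to see how often (and why) peers are marking a node as bad, a quick grep over the node's logs works. This is a sketch: the sample file and its `/tmp` path below are stand-ins for a real safenode log, whose location will depend on your setup.

```shell
# Create a sample log standing in for a real safenode log (hypothetical path)
mkdir -p /tmp/safenode-logs
cat > /tmp/safenode-logs/safenode.log <<'EOF'
[2024-05-09T21:03:25.439991Z WARN sn_networking::event::request_response] Peer NetworkAddress::PeerId(12D3KooW...) consider us as BAD, due to "BadQuoting".
[2024-05-09T21:03:26.000000Z INFO sn_networking::record_store] Cost is now 20
EOF

# Count shun reports, grouped by reason
grep -o 'due to "[A-Za-z]*"' /tmp/safenode-logs/safenode.log | sort | uniq -c
```

Run the same grep against your node's actual log files to see whether BadQuoting is a one-off or recurring.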
I’m earning, but no records showing. This has happened to me on the last couple of testnets. Can’t figure it out, but I’m guessing I’m storing records if I’m collecting nanos.
It may be due to some change in the log files.
If Autonomi change the messages which vdash is using, then the status in vdash won’t be correct.
I don’t know if that’s the case but somebody could have a look. It’s not difficult to see what vdash looks for and to see if that has been changed in the Autonomi code.
I don’t have time to look myself atm.
If you are using sudo, you must give the full path to the binary,
so do sudo ~/.local/bin/safenode-manager stop --service-name safenode1
It’s a bit of a PITA having a binary that needs root privileges in your .local/bin/, but we all know this is just temporary and we will either get safenode-manager working without the need for root or there will be another workaround. For now it’s a bit of extra typing.
You can do sudo pkill, but that’s because pkill lives in /sbin or some other dir that is in the $PATH for root.
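The underlying reason is that sudo typically resets PATH to its compiled-in `secure_path` (see sudoers), which usually doesn't include `~/.local/bin`. A small demo of the effect, using a throwaway script in a `/tmp` directory rather than the real binary:

```shell
# Put a throwaway executable in a user-local style directory
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho ok\n' > /tmp/demo-bin/mytool
chmod +x /tmp/demo-bin/mytool

# Found when the directory is on PATH (like your normal shell)
PATH="/tmp/demo-bin:$PATH" command -v mytool

# Not found with a stripped-down PATH like sudo's secure_path
PATH="/usr/bin:/bin" command -v mytool || echo "not found"
```

Besides typing the full path, `sudo "$(command -v safenode-manager)" …` (letting your own shell resolve the path first) is a common workaround.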
@MuchaLechuga you are on vdash 0.17.1; try updating vdash to 0.17.6:
cargo install vdash
I have records on 0.17.6
A couple of uploads, good speed. (The first file choked on the last chunk for a very long time; the second, much bigger one went like a charm. Its last chunk also took longer, but nothing compared with the other file. Actually, before choking it was yielding around 21 Mbps on a 1 Gbps line.)
Edit: stupid me, files had already been mostly uploaded by myself from another client before…
raul@raspberrypi4:~> $ ls -ltrh *.iso
-rw-r--r-- 1 raul users 203M sep 23 2022 openSUSE-Leap-15.4-NET-aarch64-Build243.2-Media.iso
-rw-r--r-- 1 raul users 4,2G may 8 14:47 openSUSE-Tumbleweed-DVD-x86_64-Current.iso
raul@raspberrypi4:~> $ time safe files upload -p openSUSE-Leap-15.4-NET-aarch64-Build243.2-Media.iso
Logging to directory: "/home/raul/.local/share/safe/client/logs/log_2024-05-10_00-43-56"
safe client built with git version: 16f3484 / stable / 16f3484 / 2024-05-09
Instantiating a SAFE client...
Connecting to the network with 49 peers
🔗 Connected to the Network Chunking 1 files...
"openSUSE-Leap-15.4-NET-aarch64-Build243.2-Media.iso" will be made public and linkable
Splitting and uploading "openSUSE-Leap-15.4-NET-aarch64-Build243.2-Media.iso" into 407 chunks
**************************************
* Uploaded Files *
**************************************
**"openSUSE-Leap-15.4-NET-aarch64-Build243.2-Media.iso" e94ef66956173e460fc04b013491aedfac7bd94b33957307085ff8e0a66d9d55**
Among 407 chunks, found 143 already existed in network, uploaded the leftover 264 chunks in 6 minutes 32 seconds
**************************************
* Payment Details *
**************************************
Made payment of NanoTokens(22307) for 264 chunks
Made payment of NanoTokens(3810) for royalties fees
New wallet balance: 0.999973883
real 7m18,533s
user 3m33,416s
sys 1m3,260s
raul@raspberrypi4:~> $ ls -lh *.xz
-rw-r--r-- 1 raul users 649M sep 23 2022 openSUSE-Leap-15.4-ARM-JeOS-raspberrypi.aarch64.raw.xz
raul@raspberrypi4:~> $ time safe files upload -p openSUSE-Leap-15.4-ARM-JeOS-raspberrypi.aarch64.raw.xz
Logging to directory: "/home/raul/.local/share/safe/client/logs/log_2024-05-10_00-51-40"
safe client built with git version: 16f3484 / stable / 16f3484 / 2024-05-09
Instantiating a SAFE client...
Connecting to the network with 49 peers
🔗 Connected to the Network Chunking 1 files...
"openSUSE-Leap-15.4-ARM-JeOS-raspberrypi.aarch64.raw.xz" will be made public and linkable
Splitting and uploading "openSUSE-Leap-15.4-ARM-JeOS-raspberrypi.aarch64.raw.xz" into 1298 chunks
**************************************
* Uploaded Files *
**************************************
**"openSUSE-Leap-15.4-ARM-JeOS-raspberrypi.aarch64.raw.xz" 665244f4f93263cdc70fe50b043c6cc06ca7731966c0d953c4d275ff01af1fc9**
Among 1298 chunks, found 1297 already existed in network, uploaded the leftover 1 chunks in 4 minutes 16 seconds
**************************************
* Payment Details *
**************************************
Made payment of NanoTokens(255) for 1 chunks
Made payment of NanoTokens(45) for royalties fees
New wallet balance: 0.999973583
real 6m6,141s
user 6m11,572s
sys 1m35,062s
raul@raspberrypi4:~> $
Our suspicion is upload issues are relay/node/shunning related, that’s kind of what we want to assess here.
Ah, another thing that we’ve not mentioned in the OP (I’ll edit): we have the --upnp flag on safenode, which could well open ports for you (if your router supports this).
We’re debating making that a default behaviour, one to try in an upgrade soon, I think.
Ahhh, interesting. cc @qi_ma: a good node should never see this, right? Do we have something buggy going on there?
For information’s sake (and maybe MaidSafe can keep a list somewhere for general information for people):
Starlink does not have port forwarding or UPnP.
--home-network does work for Starlink.
Finally getting around to setting up the environment for this testnet. Just to confirm: is this testnet based off the latest code base in the ‘stable’ branch? (The versioning for sn_node and sn_cli seems to match the OP.)
For those who need to compile from source, it would be nice to extend the instruction template with a GitHub tag/branch/commit id reference etc as an additional section or note (maybe a one-liner git command to clone the code base at the right checkpoint)?
Sorry if I missed this information somewhere in the OP (couldn’t find it). For now, I am going to assume this is off the ‘stable’ branch until further notice or correction. I am eager to test out safenode (A vs B vs C scenarios (three scenarios)).
I swear someone spiked the punch in the punchbowl. Getting drunk on excitement
Some downloads - if 20 yr old MIT lectures on Ada and aerospace interest you
**************************************
* Uploaded Files *
**************************************
"RocketLab2003.pdf" 7c24c6441f5289937fb76ae0572affc9f46db99738ef1bf4ec7bc067563a7976
"LHC_Note_78.pdf" a6492225717b8dac9477a9487e06588db41dd3b2a2a95a3eee2c97ced0915968
"1_intro_to_cp.pdf" 5ef5dbb489c60dc7ce892677ff62e333118942c3900a37b7cf42004f40dbcb01
"aldrdg_space_war.pdf" 3ba1b1d21ea9b7da7e013acd68122b787596970db9368d549ecb61c2117385bb
"orb_str_des_ver.pdf" cc04491d50c2d19dfe86336170be058c98efcf73e45eb731f7ce81fd49ce3580
"PS3sol.pdf" 2552bf87634fc3f28090023faf3b0c06f96a00eb23ca4ade25c88e600ca915a7
"reliability_eng.pdf" 9329481c333c91cccf1abb92ad13b992aa303e87351cb7b1b2e3cd9c9d7efb7c
"EPAC1992_1545.PDF" c21b5e74db9b793368d511a5e8ea1dd41db674d790ef2fc31864c72dd98cd680
"russian_fighters.pdf" 885510b452b08a7c8f538224de9ab454ba8799dcffb3b86fff129b4b22968e80
"spce_shtl_orbtr.pdf" 20865581a3941a24959d4a587421faf89ac01be61b78616d188cab26ee0b8a5c
"PS3_2003.pdf" 44e8accd67744a3c5346af666919215833df25c7f8e46900034de98eee99dc5a
"shtl_strct_dynmc.pdf" bac6b6ce87ebaf4385ab7ef4b45353902b013d1b93a0566fa7379dae19598bad
"interfaces_ac.pdf" 56e0ad9bed595b126a1b2d5477570e2e82f0b6fbd08db64453dff951cca75850
"adagide_instrctn.pdf" 2017880b7c187003cbdc51cdeb877491febbb106a101a692fedb927425df1a1b
"c02_ps02_fall03.pdf" 8f7ba7843a59ceb63008440c51862479326654f6eff56366c43fbdf03abcef37
"moser_strl_loads.pdf" 17cfa3a06e30e4fb84de798c2308b82689cc7dc1f8afd1f71f281f60474957af
"HW1_03.pdf" 3e2f601d8784b28ed9ce6c3a84a817f5e00612ba64f238668cab1f8f9b1701f1
"LTA2003.pdf" 1f0e5952d95de8dd4f4450405f36c9f3c478227a528615f0c7755260587d9dcc
"3_ada_syntax.pdf" 2edbc3e15d2ae3b0de846d7d256985bfe180bfd2681e240545abd373b7455d1d
"costs.pdf" 41d5070081ee3fc9e4f6e114e68e2186a15a365e61efe053cc4dd3f088ff481e
"mud2.pdf" 97384836631653eabc52645954494b370630af459efec84149e371c7f95924f9
"2_hellowrld_spdr.pdf" 169647e5468be80ef5fe5dc3a3fd840a99640bfe31ef8a4ae2cec0f9c0a1403d
"mud1.pdf" ca897b480d566361a6d247b4698bf0c42da4c839c0cd99f381d7d7244fabc1ef
"ada95.pdf" 2038081948e146eae86aa1784d1f8e452432cf67e619555c61b1120a9e3f4c20
"c02_ps02_sol.pdf" c7bf2226cdc596481a5a9f0d1066736952856be27b85b3b77e31e6929ca6868d
"HW2_03.pdf" e8c18e11b7054f201a9b99c0dfed08a5af9bed53f1b4c9427d64188fa8b0c96e
"cohen_orbtr_subs.pdf" 744c0c770821f6334f7b0147d1c0968fd0f1e472593158746cfa4ba96103e36a
"esas_presntn.pdf" 499edf8131f0b63c6d2cf8beab95edfeb8334f20be6e8a1181997ce43793fc58
Among 117 chunks, found 0 already existed in network, uploaded the leftover 117 chunks in 2 minutes 38 seconds
Files uploading and downloading fine, nodes running and earning, what can I ask more?
Well, actually there is one thing.
I upgraded my punch net nodes, which worked in 2 steps (the upgrade managed to restart only the first 15 nodes; I had to do a safenode-manager start as well), but that obviously means that the previous rewards have been kept mixed up with new rewards, which I suppose could cause problems?
safenode-manager balance
=================================================
Reward Balances
=================================================
Refreshing the node registry...
safenode1: 0.000012886
safenode2: 0.000006900
safenode3: 0.000003345
safenode4: 0.000026906
safenode5: 0.000008315
safenode6: 0.000036575
safenode7: 0.000007180
safenode8: 0.000058758
safenode9: 0.000018132
safenode10: 0.000003932
safenode11: 0.000021081
safenode12: 0.000006932
safenode13: 0.000011971
safenode14: 0.000007907
safenode15: 0.000008273
safenode16: 0.000024733
safenode17: 0.000024110
safenode18: 0.000031593
safenode19: 0.000043419
safenode20: 0.000030799
safenode21: 0.000008517
safenode22: 0.000041617
safenode23: 0.000021125
safenode24: 0.000019747
safenode25: 0.000035628
safenode26: 0.000065581
safenode27: 0.000005424
safenode28: 0.000020625
safenode29: 0.000022375
safenode30: 0.000012041
safenode31: 0.000065639
safenode32: 0.000008883
safenode33: 0.000032905
safenode34: 0.000010524
safenode35: 0.000014056
safenode36: 0.000008165
safenode37: 0.000008500
safenode38: 0.000006439
safenode39: 0.000007561
safenode40: 0.000028860
safenode41: 0.000005913
safenode42: 0.000012587
safenode43: 0.000024873
safenode44: 0.000021336
safenode45: 0.000057365
safenode46: 0.000008084
safenode47: 0.000017240
safenode48: 0.000012485
safenode49: 0.000020959
safenode50: 0.000006695
safenode51: 0.000000020
safenode52: 0.000000010
safenode53: 0.000000010
safenode54: 0.000000010
safenode55: 0.000000020
safenode56: 0.000000000
safenode57: 0.000000010
safenode58: 0.000000030
safenode59: 0.000000010
safenode60: 0.000000000
safenode61: 0.000000040
safenode62: 0.000000020
safenode63: 0.000000010
safenode64: 0.000000000
safenode65: 0.000000110
safenode66: 0.000000040
safenode67: 0.000000000
safenode68: 0.000000010
safenode69: 0.000000000
safenode70: 0.000000080
safenode71: 0.000000000
safenode72: 0.000000000
safenode73: 0.000000000
safenode74: 0.000000000
safenode75: 0.000000010
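If you want a single total instead of eyeballing 75 lines, an awk one-liner over the `safenode-manager balance` output adds them up. A sketch, assuming the `node: balance` line format shown above (the three-line sample file stands in for the real output):

```shell
# Sample of the balance output above (first three nodes only)
cat > /tmp/balances.txt <<'EOF'
safenode1: 0.000012886
safenode2: 0.000006900
safenode3: 0.000003345
EOF

# Sum the per-node balances
awk -F': ' '/^safenode/ { total += $2 } END { printf "%.9f\n", total }' /tmp/balances.txt
```

On a live setup you would pipe the real command into the same awk filter instead of using the sample file.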
Running 3 scenarios now:
Note: safenode-manager gives me a service error with --upnp (I manually added it to the /etc/init.d/safenode service definitions), likely due to it not recognizing the extra args (failed ops). I realize --upnp is outside the scope of this testnet, but I’ll let it chug along.
For group C, it seems to have created a UPnP mapping on my router:
However, it seemed to have tried port 54066 first (the external port vs the one being opened didn’t align), but then it detected it properly (port 50644), then again detected it as if it didn’t (port 54066). I haven’t really investigated reproducibility or looked into it more yet:
{"timestamp":"2024-05-10T03:07:32.350551Z","level":"INFO","message":"external address: confirmed","address":"/ip4/X.X.X.X/udp/50644/quic-v1","target":"sn_networking::event::swarm"}
{"timestamp":"2024-05-10T03:07:32.517969Z","level":"INFO","message":"external address: new candidate has a different port, not adding it.","address":"/ip4/X.X.X.X/udp/54066/quic-v1","our_port":"50644","target":"sn_networking::event::swarm"}
{"timestamp":"2024-05-10T03:07:33.183961Z","level":"INFO","message":"external address: new candidate has the same configured port, adding it.","address":"/ip4/X.X.X.X/udp/50644/quic-v1/p2p/12D3KooWDNeATJdCpRr1EEFWrktQZayxc9NqsqVgCqMNSxUz6fEU","target":"sn_networking::event::swarm"}
{"timestamp":"2024-05-10T03:09:27.520938Z","level":"INFO","message":"external address: new candidate has a different port, not adding it.","address":"/ip4/X.X.X.X./udp/54066/quic-v1","our_port":"50644","target":"sn_networking::event::swarm"}
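To see how often each of those outcomes occurs across a whole log, you can tally the distinct `external address:` messages with grep. A sketch, assuming the JSON log format shown above (the two sample lines stand in for a real log):

```shell
# Sample JSON log lines like the ones above
cat > /tmp/swarm.log <<'EOF'
{"timestamp":"2024-05-10T03:07:32Z","level":"INFO","message":"external address: confirmed","address":"/ip4/X.X.X.X/udp/50644/quic-v1"}
{"timestamp":"2024-05-10T03:07:32Z","level":"INFO","message":"external address: new candidate has a different port, not adding it.","address":"/ip4/X.X.X.X/udp/54066/quic-v1","our_port":"50644"}
EOF

# Tally each distinct external-address outcome, most frequent first
grep -o '"message":"external address: [^"]*"' /tmp/swarm.log | sort | uniq -c | sort -rn
```

That makes it easy to spot whether the wrong-port candidate (54066 here) shows up once or keeps recurring.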
The safenode pid in Group C (1 of 3) is listening on port 50644, which matches the router’s UPnP status:
udp 0 0 0.0.0.0:50644 0.0.0.0:* 7109/safenode
The 3 safenodes in Group C do have connected peers > 20+.
Will circle back in a few hours with charts (need to let some data get collected first)…
How are your port forwarding nodes going? In punchnet they didn’t earn, but it was the same for most people.
In this testnet they also have not earned.
I am starting to wonder if port forwarding is working for earning. Everything else seems to be working: puts and gets are happening, plenty of peers etc.
My home-network nodes are doing fine and earning on Starlink, just with the large amount of errors I get for every home-network node.
Here is a preview of the first hour of data in Groups A, B & C:
DCUTR for Groups B & C: success rate at 100% thus far.
LIBP2P Identity Received/Sent ratios for Group B vs A are no longer 2x higher compared to the PunchNet test (baseline).
LIBP2P KAD Inbound Request ratios for Group B vs A are also not 5x higher compared to PunchNet (baseline).
LIBP2P KBucket Routing: same as PunchNet in terms of ratios between Group B and Group A.
LIBP2P Swarm Outgoing Connection Errors: Group A is still a lot higher than Group B.
UPnP is showing a lot of promise (very similar trend lines to Group B) when compared to Group A across a wide majority of the panels/charts above.
CPU & memory look stable across Groups A, B & C (different baselines but stable).
Groups A & C seem to have a lot more connected peers on average, and also a larger variance, compared to Group B.
Port forwarding (aka Group B) has earned the most for me so far in this testnet and the prior one. It’s also the group with the fewest outgoing connection errors thus far.
Outgoing errors with Transport Other (i.e. HandshakeTimedOut) within LibP2P Swarm for Group A dwarf Groups B & C (could be a continued area of focus there in future (TBD)).
Will circle back in 8+ hours and reflect more on a larger time frame (the above charting was done rather quickly for now).
It is, yep.
If you try and spend them, yeah. If you don’t, it should not be an issue, I think?
just a dumb question …
In the official instructions in the first post there is no mention of the --home-network flag … and if I start a node without it (behind a router) everything looks normal … it’s just not earning … which is normal for freshly joined nodes anyway … I very much assume this is not intentional, because for sure it is needed when running nodes behind a router without port-forwarding …?
I started 20 nodes with --home-network using safenode-manager and they work and have earned.