Light on resources too, my little AArch64 2GB is cruising at a steady 25% CPU / 20% Mem.
feat: paying for Chunks to upload them
maidsafe:main
← bochaco:feat-chunk-payment
Light on resources too, my little AArch64 2GB is cruising at a steady 25% CPU / 20% Mem.
Please share your binary and all networking info
In other news it looks like NatNet [May 26 Testnet 2023] - #57 by Southside was another crap prediction from me and @scottefc86 was correct about my hat-trick
O/T I doubt @scottefc86 will see this, probably still pished after his team's miraculous escape from relegation.
Giggle of the day here yesterday was seeing the hurting huns implode (again) after their favourite German team HSV got gubbed, Heidenheim scoring two in injury time to deny the neo-Nazi favourites promotion to the Bundesliga.
https://12ft.io/proxy?q=https%3A%2F%2Fwww.thesun.co.uk%2Fsport%2F22508469%2Fhamburg-fans-invade-pitch-promotion%2F
Just click - Scots and Scousers don't share links to the Sun
Not sure what you mean by networking info?
I forgot about the node process for some time, and when I looked at it several minutes ago I found that it was using 680 MB of RAM.
Probably the memory leak is still there.
Also I see suspiciously high network activity without new chunks appearing. Maybe the balancing algorithms are doing something wrong.
Can developers look at the state of this network?
Looks like it.
Most likely because there are no instructions on how to upload files in the first message of this topic.
When the network is doing approximately nothing, it is stable, yeah.
Yet there are clearly described goals of the test net, and specific clarifications:
It was never in doubt. As for yesterday, it was a day I'd rather forget and nothing to celebrate, so no booze for me
There was a bit of uploading by people early on to test functionality and when I’ve been connecting new nodes I’ve uploaded a small file from a VM with just ‘safe’ on it and downloaded it again to see if one of my nodes receives data. So there has been a bit of uploading and downloading. Not at the same breakneck pace as previous networks admittedly!
I know, my node stores 235 MB of chunks.
That’s what I’m talking about.
Less load → higher uptime.
What ports did you open?
Or is this an arm64 cloud instance?
I'm a bit slow, dehydrated from working on the car in this heat.
Well you are an Everton fan so why act surprised?
Harsh but fair…
Ahh, no not a cloud instance. I am running it from home. I typically start at what has become the standard 12000.
Just remember the port flag when starting the node.
Windows:
[2023-05-29T15:25:25.664288Z INFO safenode] Starting node …
[2023-05-29T15:25:25.664480Z INFO safenode::network] Node (PID: 13068) with PeerId: 12D3KooWKpKiF5ysQc1qzq61uuwTf9z2RE317xmJ932bzGVZAgoa
…
…
[2023-05-29T15:25:32.306651Z WARN safenode::node::api] NAT status is determined to be private!
[2023-05-29T15:25:32.306679Z INFO safenode] Connected to the Network
[2023-05-29T15:25:32.306769Z INFO safenode] Node is stopping in 1s…
this much is true…
and it appears that the NatNet has done its job and fulfilled the objectives.
why keep it running? OR why NOT keep it running?
I suspect it would fail if we all tried to up/download the same volumes as before and for reasons that may not add too much to our existing knowledge.
I did note earlier that larger files - a 350MB video and BegBlag.mp3 (~15MB) - failed to upload - dunno if this is a filesize filter or a limitation? I think most folk who have been uploading have been very restrained but I dunno for sure.
I think most folk who have been uploading have been very restrained
As they would be if uploading was not free of charge.
Not exactly the same but small networks being bombarded with data is not really a level playing field either.
safenode --log-dir=/tmp/safenode --port=22000
so this should work?
safenode --log-dir=/tmp/safenode --port=12000
I’ll try later, darling wife just in after a trying day looking after her mum, I am now on soothing cuppa and then dinner duties…
So, a bit late to this but as it looked quick to try, my result below…
It didn't seem to stop after a few minutes, and then centred on repeats of:
Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:09:28.892397Z DEBUG safenode] Current build's git commit hash: ---- No git commit hash found ----
[2023-05-29T19:09:28.892605Z WARN safenode::peers_acquisition] No SAFE_PEERS env var found. As `local-discovery` feature is disabled, we will not be able to connect to the network.
[2023-05-29T19:09:28.892679Z INFO safenode]
Running safenode v0.1.0
=======================
[2023-05-29T19:09:28.894513Z INFO safenode] Starting node ...
[2023-05-29T19:09:28.894845Z INFO safenode::network] Node (PID: 139539) with PeerId: 12D3KooWSSz5c7WX3Xf17q3f5xPmoszaf9AranGLpjiPyTyJZ3ek
[2023-05-29T19:09:28.899062Z INFO safenode::network::event] Local node is listening on "/ip4/127.0.0.1/tcp/46519/p2p/12D3KooWSSz5c7WX3Xf17q3f5xPmoszaf9AranGLpjiPyTyJZ3ek"
[2023-05-29T19:09:28.899249Z INFO safenode::network::event] Local node is listening on "/ip4/192.168.0.15/tcp/46519/p2p/12D3KooWSSz5c7WX3Xf17q3f5xPmoszaf9AranGLpjiPyTyJZ3ek"
[2023-05-29T19:09:28.899395Z INFO safenode::network::event] Local node is listening on "/ip4/10.7.2.2/tcp/46519/p2p/12D3KooWSSz5c7WX3Xf17q3f5xPmoszaf9AranGLpjiPyTyJZ3ek"
[2023-05-29T19:09:31.897605Z TRACE safenode::network::event] AutoNAT outbound probe: Error { probe_id: ProbeId(0), peer: None, error: NoServer }
[2023-05-29T19:09:50.901104Z DEBUG safenode::node::api] No network activity in the past 22s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 242, 249, 57, 78, 188, 1, 64, 0, 69, 76, 190, 65, 5, 167, 200, 50, 3, 117, 56, 125, 175, 237, 86, 185, 134, 153, 84, 68, 203, 195, 158, 148])
[2023-05-29T19:09:50.901123Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 242, 249, 57, 78, 188, 1, 64, 0, 69, 76, 190, 65, 5, 167, 200, 50, 3, 117, 56, 125, 175, 237, 86, 185, 134, 153, 84, 68, 203, 195, 158, 148])
[2023-05-29T19:09:50.901251Z TRACE safenode::network::event] Query task QueryId(0) returned with peers GetClosestPeersOk { key: [0, 32, 242, 249, 57, 78, 188, 1, 64, 0, 69, 76, 190, 65, 5, 167, 200, 50, 3, 117, 56, 125, 175, 237, 86, 185, 134, 153, 84, 68, 203, 195, 158, 148], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533349, tv_nsec: 296194129 }), end: Some(Instant { tv_sec: 533349, tv_nsec: 296194129 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:09:50.901292Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:10:22.901693Z DEBUG safenode::node::api] No network activity in the past 32s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 19, 198, 189, 90, 247, 166, 188, 254, 8, 170, 73, 182, 250, 249, 84, 186, 70, 133, 232, 85, 152, 51, 49, 125, 174, 113, 85, 21, 238, 47, 62, 100])
[2023-05-29T19:10:22.901725Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 19, 198, 189, 90, 247, 166, 188, 254, 8, 170, 73, 182, 250, 249, 84, 186, 70, 133, 232, 85, 152, 51, 49, 125, 174, 113, 85, 21, 238, 47, 62, 100])
[2023-05-29T19:10:22.901912Z TRACE safenode::network::event] Query task QueryId(1) returned with peers GetClosestPeersOk { key: [0, 32, 19, 198, 189, 90, 247, 166, 188, 254, 8, 170, 73, 182, 250, 249, 84, 186, 70, 133, 232, 85, 152, 51, 49, 125, 174, 113, 85, 21, 238, 47, 62, 100], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533381, tv_nsec: 296849014 }), end: Some(Instant { tv_sec: 533381, tv_nsec: 296849014 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:10:22.901968Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:10:43.903645Z DEBUG safenode::node::api] No network activity in the past 21s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 71, 192, 209, 88, 6, 49, 167, 134, 70, 197, 191, 102, 114, 70, 198, 11, 221, 120, 23, 22, 17, 171, 239, 243, 113, 193, 97, 18, 207, 205, 132, 56])
[2023-05-29T19:10:43.903687Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 71, 192, 209, 88, 6, 49, 167, 134, 70, 197, 191, 102, 114, 70, 198, 11, 221, 120, 23, 22, 17, 171, 239, 243, 113, 193, 97, 18, 207, 205, 132, 56])
[2023-05-29T19:10:43.903920Z TRACE safenode::network::event] Query task QueryId(2) returned with peers GetClosestPeersOk { key: [0, 32, 71, 192, 209, 88, 6, 49, 167, 134, 70, 197, 191, 102, 114, 70, 198, 11, 221, 120, 23, 22, 17, 171, 239, 243, 113, 193, 97, 18, 207, 205, 132, 56], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533402, tv_nsec: 298830583 }), end: Some(Instant { tv_sec: 533402, tv_nsec: 298830583 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:10:43.904017Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:11:01.898161Z TRACE safenode::network::event] AutoNAT outbound probe: Error { probe_id: ProbeId(1), peer: None, error: NoServer }
[2023-05-29T19:11:18.905624Z DEBUG safenode::node::api] No network activity in the past 35s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 243, 77, 21, 205, 206, 253, 76, 166, 23, 212, 52, 0, 29, 227, 113, 76, 93, 135, 167, 138, 249, 43, 25, 233, 178, 132, 125, 21, 192, 66, 124, 157])
[2023-05-29T19:11:18.905677Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 243, 77, 21, 205, 206, 253, 76, 166, 23, 212, 52, 0, 29, 227, 113, 76, 93, 135, 167, 138, 249, 43, 25, 233, 178, 132, 125, 21, 192, 66, 124, 157])
[2023-05-29T19:11:18.906006Z TRACE safenode::network::event] Query task QueryId(3) returned with peers GetClosestPeersOk { key: [0, 32, 243, 77, 21, 205, 206, 253, 76, 166, 23, 212, 52, 0, 29, 227, 113, 76, 93, 135, 167, 138, 249, 43, 25, 233, 178, 132, 125, 21, 192, 66, 124, 157], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533437, tv_nsec: 300900987 }), end: Some(Instant { tv_sec: 533437, tv_nsec: 300900987 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:11:18.906140Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:11:44.907088Z DEBUG safenode::node::api] No network activity in the past 26s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 197, 67, 29, 213, 31, 97, 91, 167, 180, 61, 37, 76, 150, 146, 136, 254, 194, 243, 254, 80, 61, 113, 14, 193, 171, 88, 181, 56, 41, 66, 254, 15])
[2023-05-29T19:11:44.907137Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 197, 67, 29, 213, 31, 97, 91, 167, 180, 61, 37, 76, 150, 146, 136, 254, 194, 243, 254, 80, 61, 113, 14, 193, 171, 88, 181, 56, 41, 66, 254, 15])
[2023-05-29T19:11:44.907405Z TRACE safenode::network::event] Query task QueryId(4) returned with peers GetClosestPeersOk { key: [0, 32, 197, 67, 29, 213, 31, 97, 91, 167, 180, 61, 37, 76, 150, 146, 136, 254, 194, 243, 254, 80, 61, 113, 14, 193, 171, 88, 181, 56, 41, 66, 254, 15], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533463, tv_nsec: 302287614 }), end: Some(Instant { tv_sec: 533463, tv_nsec: 302287614 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:11:44.907565Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:12:16.909572Z DEBUG safenode::node::api] No network activity in the past 32s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 208, 29, 171, 67, 148, 37, 130, 215, 6, 140, 58, 186, 113, 148, 229, 22, 44, 219, 237, 92, 224, 24, 100, 247, 161, 3, 187, 51, 58, 19, 75, 48])
[2023-05-29T19:12:16.909639Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 208, 29, 171, 67, 148, 37, 130, 215, 6, 140, 58, 186, 113, 148, 229, 22, 44, 219, 237, 92, 224, 24, 100, 247, 161, 3, 187, 51, 58, 19, 75, 48])
[2023-05-29T19:12:16.910011Z TRACE safenode::network::event] Query task QueryId(5) returned with peers GetClosestPeersOk { key: [0, 32, 208, 29, 171, 67, 148, 37, 130, 215, 6, 140, 58, 186, 113, 148, 229, 22, 44, 219, 237, 92, 224, 24, 100, 247, 161, 3, 187, 51, 58, 19, 75, 48], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533495, tv_nsec: 304903926 }), end: Some(Instant { tv_sec: 533495, tv_nsec: 304903926 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:12:16.910145Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
[2023-05-29T19:12:31.898402Z TRACE safenode::network::event] AutoNAT outbound probe: Error { probe_id: ProbeId(2), peer: None, error: NoServer }
[2023-05-29T19:12:54.911619Z DEBUG safenode::node::api] No network activity in the past 38s, performing a random get_closest query to target: NetworkAddress::PeerId([0, 32, 220, 89, 21, 200, 118, 38, 119, 185, 16, 4, 141, 34, 41, 54, 161, 218, 247, 86, 193, 154, 164, 210, 223, 229, 157, 241, 71, 128, 32, 17, 109, 29])
[2023-05-29T19:12:54.911669Z DEBUG safenode::network] Getting the closest peers to NetworkAddress::PeerId([0, 32, 220, 89, 21, 200, 118, 38, 119, 185, 16, 4, 141, 34, 41, 54, 161, 218, 247, 86, 193, 154, 164, 210, 223, 229, 157, 241, 71, 128, 32, 17, 109, 29])
[2023-05-29T19:12:54.911943Z TRACE safenode::network::event] Query task QueryId(6) returned with peers GetClosestPeersOk { key: [0, 32, 220, 89, 21, 200, 118, 38, 119, 185, 16, 4, 141, 34, 41, 54, 161, 218, 247, 86, 193, 154, 164, 210, 223, 229, 157, 241, 71, 128, 32, 17, 109, 29], peers: [] }, QueryStats { requests: 0, success: 0, failure: 0, start: Some(Instant { tv_sec: 533533, tv_nsec: 306846578 }), end: Some(Instant { tv_sec: 533533, tv_nsec: 306846578 }) } - ProgressStep { count: 1, last: true }
[2023-05-29T19:12:54.912021Z WARN safenode::network] Not enough peers in the k-bucket to satisfy the request
(repeats - and Discourse complaining about the length of the post)
That was with a VPN connection and then also without.
I also now have a Windows computer, which didn't look different for that k-bucket error. Though the detail of the paths was not clean - they appeared as \\ instead of \ in the paths.
Perhaps I'm just too late for the network to still be there?.. but I'm surprised it didn't just stop itself.
Noting at the top of the log, "As local-discovery feature is disabled" - perhaps something I've missed?
Did you set the environment variable as in the OP?
nope… that sounds like a good idea …trying again then
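For anyone else landing here with the same "No SAFE_PEERS env var found" warning, the gist is something like this - the multiaddr below is a placeholder, not a real peer from this testnet; substitute the one given in the OP:

```shell
# Point the node at a known peer before starting it,
# otherwise it sits alone with empty k-buckets (placeholder multiaddr):
export SAFE_PEERS="/ip4/<server-ip>/tcp/<port>/p2p/<PeerId>"
safenode --log-dir=/tmp/safenode --port=12000
```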
…
Better then as…
PeerId: 12D3KooWH6Eb6zkx6oWRmsn6bWCnAGkVj3wPEqVPuExr8uoXpzKy
Error: We have been determined to be behind a NAT. This means we are not reachable externally by other nodes. In the future, the network will implement relays that allow us to still join the network.