I can only imagine that there is a slow node you are uploading to, or a hiccup.
Maybe someone who knows more could chime in here
All 1 resource proof responses fully sent, but timed out waiting for approval from the network. This could be due to the target section experiencing churn. Terminating node.
Trying to join; it seems we are not enough yet! I’m waiting for AWS access…
I did connect to the community network with all the right IPs/ports/network name etc. I did a test for resource proof; even though it succeeded, there’s still a problem. Here are the last lines of my vault log:
INFO 12:41:11.248979700 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 16% of data sent. 170/410 seconds remaining.
INFO 12:41:41.249538900 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 18% of data sent. 140/410 seconds remaining.
INFO 12:42:11.250987900 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 21% of data sent. 110/410 seconds remaining.
INFO 12:42:41.251089300 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 21% of data sent. 80/410 seconds remaining.
INFO 12:43:11.252580900 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 21% of data sent. 50/410 seconds remaining.
INFO 12:43:41.258177500 [routing::states::node node.rs:2316] Node(917ae8..()) 0/1 resource proof response(s) complete, 21% of data sent. 20/410 seconds remaining.
INFO 12:44:01.211202200 [routing::states::node node.rs:2344] Node(917ae8..()) All 1 resource proof responses fully sent, but timed out waiting for approval from the network. This could be due to the target section experiencing churn. Terminating node.
DEBUG 12:44:01.211202200 [routing::state_machine state_machine.rs:267] State::Node(917ae8..()) Terminating state machine
“All 1 resource proof responses fully sent, but timed out waiting for approval from the network. This could be due to the target section experiencing churn. Terminating node”.
EDIT: I’m trying again, but it’s only connected to 2 peers. I guess we need more Vaults to start up the network??
@anon40790172 We might need to organise in advance those who will be “seed” nodes and coordinate the startup
I remember reading somewhere that “hard coded contacts” are excluded from doing resource proof. That would make sense. So indeed we need 8 hard coded contacts at least to start the network. Maybe make it even 10 or 12.
As more people get used to using droplets or AWS instances (free tier) we can have a good user-run network set up after each test.
I just started running a node on this user test network. How do I set it up so I can make my own network as well?
I’ll move my vault across to this network next week. Happy to have my IP listed once it’s moved, as I’ll have it running 24x7 once it’s set up.
rup
hiya,
My crust config file, including my vault IP…
Will now leave it running.
{
  "hard_coded_contacts": [
    "35.167.139.205:5483",
    "52.65.136.52:5483",
    "138.197.235.128:5483",
    "73.255.195.141:5483",
    "185.16.37.149:5438"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "user_network_12_b"
}
cheers
rup
Updated config including my vault. I think we only need 1 more!
{
  "hard_coded_contacts": [
    "35.167.139.205:5483",
    "52.65.136.52:5483",
    "138.197.235.128:5483",
    "73.255.195.141:5483",
    "108.61.165.170:5483",
    "185.16.37.149:5438",
    "78.46.181.243:5438"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "user_network_12_b"
}
{
  "hard_coded_contacts": [
    "35.167.139.205:5483",
    "52.65.136.52:5483",
    "138.197.235.128:5483",
    "73.255.195.141:5483",
    "108.61.165.170:5483",
    "185.16.37.149:5438",
    "78.46.181.243:5438",
    "31.151.192.2:5438"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "user_network_12_b"
}
OK we have 8 now.
EDIT: Screwing up the lines here. Just made an edit.
There’s a " missing at the end of the hard coded contacts and a "." before the last IP address – that might catch someone out.
Sorry for being picky!
Funny, I make it 9 …
{
  "hard_coded_contacts": [
    "192.168.1.64:5483",
    "35.167.139.205:5483",
    "52.65.136.52:5483",
    "138.197.235.128:5483",
    "73.255.195.141:5483",
    "108.61.165.170:5483",
    "185.16.37.149:5438",
    "78.46.181.243:5438",
    "31.151.192.2:5438"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "test_network_12_b"
}
@anon40790172 I think you’re missing a comma after "78.46.181.243:5438"
@southside beat me to it
Handy hint - just do a scan down the hard coded addresses - check they all have opening and closing " and that they are all separated with commas - except the last one. Otherwise you get:
ubuntu@ip-172-31-26-108:/maidsafe/safe_vault-v0.13.0-linux-x64$ sudo ./safe_vault
thread ‘main’ panicked at ‘Unable to start crust::Service ConfigFileHandler(JsonParserError(SyntaxError(“invalid syntax”, 8, 6)))’, /media/psf/Home/Dev/Rust/routing/src/state_machine.rs:205
note: Run with RUST_BACKTRACE=1 for a backtrace.
INFO 16:53:49.315146957 [safe_vault safe_vault.rs:95]
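If eyeballing the file feels error-prone, you can also run it through a JSON parser before starting the vault and get the exact line/column of any mistake. Here is a minimal sketch in Python (the filename safe_vault.crust.config is just an assumption - pass whatever file your vault binary actually reads):

# check_config.py - quick sanity check for a crust config file before starting the vault.
# The default filename below is an assumption; pass the real one as an argument.
import json
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "safe_vault.crust.config"

try:
    with open(path) as f:
        cfg = json.load(f)
except json.JSONDecodeError as e:
    # Reports line/column info similar to the vault's own parser error above
    sys.exit(f"{path}: invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")

contacts = cfg.get("hard_coded_contacts", [])
print(f"{path}: OK - {len(contacts)} hard coded contacts, "
      f"network_name = {cfg.get('network_name')!r}")

Run it as python3 check_config.py; a missing comma, a stray ".", or smart quotes pasted from the forum will all show up as a syntax error this way.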
I just did, should be good now. But I’ll update to JPL’s version anyway.
You have the wrong network name… replace with user_network_12_b
I wouldn’t just yet - it’s not working
Erm, 192.168.1.64 is a local IP.
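For what it’s worth, 192.168.0.0/16 (along with 10.0.0.0/8 and 172.16.0.0/12) is a private range, so other vaults out on the internet can’t reach it; the hard coded contact list needs the machine’s public IP. A rough way to check an address, sketched in Python (the two addresses below are just taken from this thread):

# is_public.py - flag addresses that peers on the internet cannot reach.
import ipaddress

for addr in ["192.168.1.64", "35.167.139.205"]:
    ip = ipaddress.ip_address(addr)
    public = not (ip.is_private or ip.is_loopback or ip.is_link_local)
    print(f"{addr}: {'public' if public else 'private/local - unreachable from outside'}")

This prints private/local for 192.168.1.64 and public for the AWS address.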
Ahhhh, I already wondered. Will wait for an update now