Maybe I should take mine down, I dunno what's best.
I have 10 vaults that have been running for almost a week on different droplets, and they all have similar timeout logs as mentioned above… I would take 1-2 down to see if the network's routing table size drops, but I have no more vaults giving this info in the logs.
Edit:
If the team likes the logs, you can find them here: Vault_Logs
btw
still up and running
didn't spread the vaults across regions yet, so all are located on DigitalOcean NYC3
8 vaults booted the same evening; 2 more (vaults 9 and 10) a day later
@digipl
Here is a new version. I had to restart my vault. Does anybody else have a changed IP or an additional one? As long as it's the same IP it can even be offline from time to time. Mine is running just fine, no errors I can see.
I tried again and it was impossible to connect. Even when I finish the resource proof, the vault closes with this message.
All 13 resource proof responses fully sent, but timed out waiting for approval from the network. This could be due to the target section experiencing churn. Terminating node.
DEBUG 14:42:08.633857700 [routing::state_machine state_machine.rs:267] State::Node(66fc07…()) Terminating state machine
Have you tried a config with the list back to front? Purely guessing, but I wonder if it tries the first entries first and then never gets to the others that are less busy contending with churn etc.
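If anyone wants to try that without hand-editing the file, something like this would flip the contact order (the `hard_coded_contacts` field name is assumed from the TEST-12-era crust configs; adjust the filename/field to whatever your file actually contains):

```python
import json

def reverse_contacts(config_text: str) -> str:
    """Return the crust config with hard_coded_contacts reversed,
    so the last bootstrap entry gets tried first.

    The 'hard_coded_contacts' field name is an assumption based on
    the TEST-12-era config files; check your own file.
    """
    cfg = json.loads(config_text)
    cfg["hard_coded_contacts"] = list(reversed(cfg["hard_coded_contacts"]))
    return json.dumps(cfg, indent=2)

# Example usage (paths are placeholders):
# with open("safe_vault.crust.config") as f:
#     text = f.read()
# with open("safe_vault.crust.config", "w") as f:
#     f.write(reverse_contacts(text))
```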
All my vaults are still up. The network seems fine to me.
My vaults are continuously sending 150-200 Kbit/s… nload gives 4.38GB Incoming / 3.99GB Outgoing
The vault logs keep giving the same messages as I posted before, so I don't know if they are working with or against the network somehow now.
Maybe they slow down the network, maybe they do fine?
Am I still in the routing table?
Maybe someone could check the routing table while I take 2 of them offline. If you can give me a post within 30 min I can check it before I'm away for an hour; otherwise I'll check at another moment later.
Update: I've tested it myself by changing ‘safe-launcher.crust.config’ to only connect to one of my vaults, and the connection was OK.
I could retrieve the files in my private folder etc…
So my vaults still do the job; only the logging is dead somehow.
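For anyone who wants to repeat that test: I just replaced the contact list in the launcher's crust config with a single entry pointing at my own vault, roughly like this (exact field names can vary between releases, and the IP/port here are placeholders):

```json
{
  "hard_coded_contacts": [
    "203.0.113.10:5483"
  ]
}
```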
I ran a node on AWS and it worked fine for over 60 hours. I took it down when everybody got in trouble. There have been quite a few updates in routing over the last week. I think 12c will be very stable.
I had kinda given up on this, being unable to reconnect at all and thinking we had learned all we were going to learn. However, your logs tend to suggest that new nodes are being accepted.
Can you post your crust.config so I can have another try please?
I think this is the error we had. The users who bootstrapped before the split, like me and most of us, have problems re-establishing the connection after the split. That is why, despite all the attempts, we could not reconnect.
There's been a great amount of work done in Routing since TEST 12b. Another problem was probably the FailedExternalReachability thing. No problem if you connected to the droplet Linux nodes, but certainly a problem if you connected from Windows to Windows. I've tested quite a bit with the community nets, and my guess is that 12c will be very stable. I can hardly wait.
Indeed.
I’m still not connecting with the safe_vault.crust.config above. test_12_b was fun, as was community_network_feb_2017.
They have served their purpose and I learned a lot.
I’m not going to waste AWS credits on it any more and I’ll wait for 12_c.
Looking forward to getting tore in when it's ready.
Thanks to everyone.