User-run network based on TEST 12b binaries

Maybe I should take mine down; I dunno what’s best.
I have 10 vaults running for almost a week on different droplets, and they all have similar timeout logs as mentioned above… I would take 1–2 down to see if the routing table size of the network drops, but I have no more vaults giving this info in the logs.

Edit:

If the team likes the logs, you can find them here: Vault_Logs
btw

  • still up and running
  • didn’t spread vaults across regions yet, so all are located on DigitalOcean NY3
  • 8 vaults booted the same evening; 2 (numbers 9 and 10) one day later
3 Likes

@digipl
Here is a new version. I had to restart my vault. Does anybody else have a changed IP or an additional one? As long as it’s the same IP, it can even be offline from time to time. Mine is running just fine, no errors that I can see.

{
  "hard_coded_contacts": [
    "35.167.139.205:5483",
    "108.61.165.170:5483",
    "52.65.136.52:5483",
    "206.116.50.52:5483",
    "99.199.170.208:5483",
    "31.151.192.2:5483",
    "86.184.57.178:5483",
    "13.80.119.160:5483",
    "85.93.19.51:5483",
    "35.157.162.33:5483",
    "185.16.37.149:5483",
    "35.156.220.253:5483",
    "107.191.39.213:5483"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "community_network_feb_2017"
}
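Note that crust configs are plain JSON: curly quotes or a stray comma (for example after "bootstrap_whitelisted_ips") will make the file unparseable. A quick way to sanity-check an edited config is to run it through a JSON parser before starting the vault; here is a minimal sketch in Python, with a cut-down config inlined rather than read from disk:

```python
import json

# Cut-down version of the config above; a stray comma or curly quote
# here would make json.loads raise a JSONDecodeError before the vault
# ever gets to use the file.
raw = """
{
  "hard_coded_contacts": ["35.167.139.205:5483", "108.61.165.170:5483"],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "community_network_feb_2017"
}
"""

cfg = json.loads(raw)

# Basic shape checks for the fields used in this thread.
assert isinstance(cfg["hard_coded_contacts"], list)
assert all(":" in c for c in cfg["hard_coded_contacts"])  # "ip:port" pairs
print(len(cfg["hard_coded_contacts"]), cfg["network_name"])
```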

1 Like

I tried again and it’s impossible to connect. Even if I finish the resource proof, the vault closes with this message.

All 13 resource proof responses fully sent, but timed out waiting for approval from the network. This could be due to the target section experiencing churn. Terminating node.
DEBUG 14:42:08.633857700 [routing::state_machine state_machine.rs:267] State::Node(66fc07…()) Terminating state machine

Have you tried a config with the list back to front? Purely guessing, but I wonder if it tries the first entries first and then doesn’t try the others that are less busy contending with churn etc.
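One low-effort way to test that guess is to rewrite the contact list programmatically instead of hand-editing the file each time. A minimal sketch in Python; the config is inlined and cut down here, whereas in practice you would read and rewrite your own crust config file:

```python
import json
import random

# Sketch only: a cut-down crust config, inlined instead of loaded
# from the real safe_vault.crust.config.
cfg = {
    "hard_coded_contacts": [
        "35.167.139.205:5483",
        "108.61.165.170:5483",
        "52.65.136.52:5483",
    ],
}

# Back to front, as suggested:
cfg["hard_coded_contacts"].reverse()

# ...or a random order, so repeated join attempts don't always hit
# the same first contact:
random.shuffle(cfg["hard_coded_contacts"])

print(json.dumps(cfg["hard_coded_contacts"], indent=2))
```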

1 Like

More or less, yes. I deleted most of the IPs and tried to connect with only a few. In most cases the vault exits immediately.

Now I have tried with the @rand_om config, but the same problems. Strange; I think I will wait until TEST 12c.

1 Like

All my vaults are still up. The network seems fine to me.
My vaults are continuously sending 150–200 Kbit/s… nload shows 4.38 GB incoming / 3.99 GB outgoing.

The vault logs keep giving the same messages as I posted before, so I don’t know if they are now working with or against the network somehow.
Maybe they slow down the network, maybe they do fine?
Am I still in the routing table?

Maybe someone could check the routing table while I take 2 of them offline. If you can give me a post within 30 min then I can check it before I’m away for an hour, and otherwise at another moment later.



Update: I’ve tested it myself by changing ‘safe-launcher.crust.config’ to connect to only one of my vaults, and the connection was OK.

I could retrieve the files in my private folder etc.
So my vaults still do the job; only the logging is dead somehow.
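For reference, that single-vault test amounts to a Launcher config whose hard_coded_contacts list holds just one entry, along these lines (the address below is a placeholder, not one of the real vaults):

```json
{
  "hard_coded_contacts": ["203.0.113.7:5483"],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": null,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "community_network_feb_2017"
}
```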

2 Likes

My vault terminated and I can’t join anymore… At the end of the resource proof, it just terminates although bandwidth is no issue whatsoever.

/edit
I even tried @davidpbrown’s suggestion and changed the order of the IPs, but it didn’t change anything.

2 Likes

I’ll close one and try to reboot it, to see if the same problem happens here. Results in an hour…

Update: Closed a vault and it can’t rejoin either.

1 Like

I ran a node on AWS and it worked fine for over 60 hours. I took it down when everybody got into trouble. There have been quite a few updates in Routing over the last week. I think 12c will be very stable.

6 Likes

Vault here will be running for a week at noon tomorrow.

Visited these just now:

safe://hello
safe://safe-blues.jpl
safe://eeek.southside
safe://testsite.lostfile
safe://weare.live

Still using @riddim’s setup for the Launcher:

{
  "hard_coded_contacts": [
    "86.184.57.178:5483",
    "13.80.119.160:5483",
    "85.93.19.51:5483",
    "35.157.162.33:5483",
    "185.16.37.149:5483",
    "35.156.220.253:5483"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": null,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "community_network_feb_2017"
}

4 Likes

safe://play.safe/ SPA from the web
:keyboard:

2 Likes

Ha! safe://whynot.allofme

2 Likes

hiya polpolrene,

Has 12c been released?

If so, do you have a current link?

rup

still chugging away…

INFO 15:43:50.298130238 [routing::states::node node.rs:277] ----------------------------------------------------------------
INFO 15:43:50.298142460 [routing::states::node node.rs:278] | Node(8a5cb9…(1)) PeerId(ecca0561…) - Routing Table size: 9 |
INFO 15:43:50.298148085 [routing::states::node node.rs:279] ----------------------------------------------------------------
INFO 15:43:50.303576761 [safe_vault::personas::data_manager data_manager.rs:1041] This vault has received 11519 Client Get requests. Chunks stored: Immutable: 1016, Structured: 128, Appendable: 0. Total stored: 540730776 bytes.
INFO 15:43:50.431889984 [safe_vault::personas::maid_manager maid_manager.rs:221] Managing 10 client accounts.
INFO 15:43:50.495336498 [safe_vault::personas::maid_manager maid_manager.rs:221] Managing 11 client accounts.
INFO 15:43:50.526573733 [safe_vault::personas::maid_manager maid_manager.rs:221] Managing 12 client accounts.
INFO 15:44:20.343862062 [routing::stats stats.rs:203] Stats - Sent 525000 messages in total, comprising 1750507942 bytes, 194 uncategorised, routes/failed: [87327, 471, 287, 15, 5, 3, 2]/24921
INFO 15:44:20.343903305 [routing::stats stats.rs:211] Stats - Direct - NodeIdentify: 153, CandidateIdentify: 14, MessageSignature: 294891, ResourceProof: 115/9397/61377, SectionListSignature: 3602
INFO 15:44:20.343915973 [routing::stats stats.rs:221] Stats - Hops (Request/Response) - GetNodeName: 93/11, ExpectCandidate: 168, AcceptAsCandidate: 138, SectionUpdate: 136, SectionSplit: 0, OwnSectionMerge: 0, OtherSectionMerge: 0, RoutingTable: 40584/62, ConnectionInfo: 100/214, CandidateApproval: 82, NodeApproval: 6, Ack: 59032
INFO 15:44:20.343927844 [routing::stats stats.rs:241] Stats - User (Request/Success/Failure) - Get: 14991/11188/3803, Put: 1025/1013/12, Post: 1063/1054/14, Delete: 6/5/1, Append: 0/0/0, GetAccountInfo: 776/776/0, Refresh: 18904

rup

5 Likes

No, not yet. Maybe in today’s update? I guess you’d see quite a big topic surrounding it.

I had kinda given up on this, being unable to reconnect at all and thinking that we had learned all we were going to learn. However, your logs suggest that new nodes are being accepted.

Can you post your crust.config so I can have another try, please?

Thanks :slight_smile:

I think this is the error we had. The users who bootstrapped before the split, like me and most of us, have problems re-establishing the connection after the split. That is why, despite all the attempts, we could not reconnect.

2 Likes

There’s been a great amount of work done in Routing since TEST 12b. Another problem was probably the FailedExternalReachability thing: no problem if you connected to the droplet Linux nodes, but certainly a problem if you connected from Windows to Windows. I’ve tested quite a bit with the community nets, and my guess is that 12c will be very stable. I can hardly wait :heart_eyes:.

3 Likes

Indeed.
I’m still not connecting with the safe_vault.crust.config above.
test_12_b was fun, as was community_network_feb_2017.
They have served their purpose and I learned a lot.
I’m not going to waste AWS credits on it any more, and I’ll wait for 12c.

Looking forward to getting tore in when it’s ready :slight_smile:
Thanks to everyone.

2 Likes

Sounds like this bug they found

Maybe terminate your VM and fire up a new one so that you appear to be a completely new candidate.

3 Likes