I just connected. 1 node in routing table.
Wow, it is zooming along.
It quickly went to four entries, but soon after dropped to one. These are the relevant lines:
WARN 18:46:35.504650667 [safe_vault::vault vault.rs:134] Received Event::Disconnected. Restarting Vault
INFO 18:46:37.519574385 [routing::core core.rs:1211] Running listener.
What does "Received Event::Disconnected. Restarting Vault" mean?
Please remember not to use the --first option. That is only for the initial seed, which announces the named network. I'm wondering if some of the instability is due to nodes sending improper requests. Can an ill-formed request crash the seed?
It gets to "9 connections to 9 peers - direct: 9, punched: 0" then drops them one by one until it is down to one.
Looks like a situation where the vault decides to restart itself, maybe because of errors, or because too few nodes are joining?
Am I right in thinking the OP is out of date - pre the "community3" id?
I want to run a safe_launcher for some testing - testnet3 is constantly failing. Can someone explain how I do this (or point me to the relevant post)?
EDIT: Wait, I found it here (community1, not community3).
It appears we're stuffed with so few nodes - clients can't connect.
sorry my vaults were down while i was at work. we are in community1 then? [added 2 nodes a minute ago - trying to add 2 more now]
It's up to 4 now. We won't see all that many until they officially shut down testnet3.
community1 might last a long time, undergoing upgrades of the binaries. It is like a car starting on a winter's day with hardly any fuel and a low battery.
We need seven for launchers to connect.
The testnet allows me to connect but that's all, and it has been that way for several hours.
eh - all my vaults disconnected a second ago Oo
What I'd like to know is: if there are "16 connections to 16 peers - direct: 16, punched: 0", then why is the routing table size 3?
I take "peers" to be other vaults, and "clients" to be launchers. Correct?
EDIT: It turns out that "peers" includes both vaults and clients (i.e., launchers), with clients counted twice. So when I run a launcher, the number of peers goes from 8 to 10.
today my vaults have big problems staying connected Oo i don't know what's different or if my internet is slow today … 2 running … i hope they stay connected
I did a restart, so could I ask the other vault operators to also restart?
There were 7 entries in my routing table before the restart. The magic number! There are five right now, but if the others restart then that should bring it back up to 7.
Then it's time to fire up your launchers!
By the way, I cheated a wee bit by dropping in a newly compiled safe_vault (from source code hot off GitHub), still nominally at version 0.8.1. I saw how many commits they had done today to the routing crate and reasoned that it couldn't hurt. The compile pulled down the latest routing as well as kademlia and one or two others. Not sure what is really new except for routing.
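In case anyone else wants to try building their own, the rough recipe is something like this (assuming you already have Rust and cargo installed; treat the exact repository URL and branch as whatever you actually want to build from):
$ git clone https://github.com/maidsafe/safe_vault
$ cd safe_vault
$ cargo build --release
# cargo pulls the routing crate and the other dependencies for you;
# the fresh binary ends up at ./target/release/safe_vault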
Then I started some other vaults with the same binary, but listening on ports other than 5483, otherwise with identical configs pointing to the starter vault. That is, I changed the number in their configs, on the line ""tcp_acceptor_port": 5483". Again, I figure that it can't hurt, what with all the powerful hole-punching. Before I did that I was seeing many "address already in use" errors from the multiple vaults on one machine, and it seemed this might help. And indeed, the logs seem less verbose, thanks to one of today's commits to routing, and I don't see that particular error so much.
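If you want to set up several vaults on one box the same way, something along these lines should do it. Note that the config file name used here (safe_vault.crust.config) is an assumption on my part - use whatever config file your vault actually reads, and pick any free port:
$ cp -r vault1 vault2
# point the second vault at a different acceptor port, e.g. 5484
$ sed -i 's/"tcp_acceptor_port": 5483/"tcp_acceptor_port": 5484/' vault2/safe_vault.crust.config
$ cd vault2 && ./safe_vault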
I have put that newly-compiled Linux safe_vault binary here, in case you want to drop it into your safe_vault folder and then restart.
Donāt forget to make it executable, since that flag could be lost during download:
$ chmod +x ./safe_vault
Of course you know that you can download a file at the commandline, with:
$ wget "http://91.121.173.204/safe_vault"
Note: You DON'T need to use the binary I'm running; the one from the (testnet3) redistributable (linked to in the OP) will do fine. I put the self-compiled binary in to get a bit more performance, and to see how it ran.
hi all,
back on now comm1, 6 nodes in table…
rup
Iāve now got seven in my routing table.
Registered a new account (fast).
The demo app timed out authorizing. Trying again…
Uploaded: http://test.bluebird.safenet
lol… i think 7 is me!!
node a808
rup
connected nice and quick, connected to http://test.bluebird.safenet without any issue…
demo app not really working, as I'm not able to publish a site or upload files.
will leave vault running, enjoy…
nite!
rup
I had to re-create my account, username and website from a couple of hours ago, although each step went quickly. That was after I had restarted all but one vault. The one I didn't restart was the seeder.
There are 7 vaults in my routing table. At that number, does routine churn destroy all data eventually, or is data lost only if vaults are manually restarted? If the 7 are left alone, would data survive? I suspect it has more to do with the more distant vaults; chunks are getting stored there, but you have no way of observing those vaults, so you're counting on there being enough of them. The minimum therefore has to be more than 7: just 7 vaults gets you in as a client, but for your data to survive there has to be some number, unknown to me, greater than 7.
Here's an interesting, fun experiment:
- Create a safenet website, on whatever SAFE network your launcher is connecting to.
- On a Linux machine, run the launcher if it isn't already running.
- Give the following command, substituting the name of your webpage for name-of-your-webpage:
$ wget "http://name-of-your-webpage.safenet"
You will get an error message: "unable to resolve."
- Now do this:
$ export http_proxy=http://visualiser.maidafe.net/safe_proxy.pac
That command sets an environment variable that is accepted on most Linux systems as the system proxy. And by the way, it doesn't matter whether you have set your browser proxy.
- Now if you do the preceding wget command, it downloads an index.html.
- Open that file in your browser, and you will find it is an Apache test file, from a Maidsafe webserver, and not your uploaded webpage!
- Now do this:
$ export http_proxy=http://127.0.0.1:8101
- Run the wget again. You will download your SAFE webpage!
- Run this command to put your environment back to the way it was:
$ unset http_proxy
Experiment with other variables, options, values and utilities, such as ftp_proxy, curl, and so on.
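For instance, curl will take the proxy straight on the command line instead of from the environment, so a rough equivalent of the wget steps above (same local proxy address assumed) is:
$ curl -x http://127.0.0.1:8101 -o index.html "http://name-of-your-webpage.safenet"
# or let curl pick the proxy up from the environment, just like wget does:
$ http_proxy=http://127.0.0.1:8101 curl -o index.html "http://name-of-your-webpage.safenet"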
Here's another fun experiment: How to keep your data alive FOREVER! (or not, lol)
The results aren't in yet for this one:
The premise is that inactivity on test nets is what eventually kills launcher sessions, credentials, IDs and safenet webpages.
- Create a safenet webpage.
- On a Linux box, run the launcher.
- Create a shell script that downloads your webpage. Something like:
#! /bin/sh
# keepalive, a script to download my webpage
export http_proxy=http://127.0.0.1:8101
wget "http://test.bluebird.safenet"
# to clean up afterwards:
unset http_proxy
Then:
$ chmod +x keepalive
and:
$ crontab -e
… to open your cron job editor, and add a line like this, and save:
0 */1 * * * ~/bin/keepalive
… or whatever the path is to your script.
That will do the download every hour, on the hour: at 1 o'clock, 2 o'clock, and so on.
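If you want to be sure the cron job is really firing, a variant of the script that logs each run might look like this (the paths are just examples):
#! /bin/sh
# keepalive with a simple log, so I can see whether cron ran it and whether the fetch worked
export http_proxy=http://127.0.0.1:8101
if wget -q -O /tmp/keepalive-index.html "http://test.bluebird.safenet"
then
    echo "$(date): fetch ok" >> "$HOME/keepalive.log"
else
    echo "$(date): fetch FAILED" >> "$HOME/keepalive.log"
fi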
15 minutes ago the machine on which I was running a bunch of vaults crashed. I fear that I might have lost some data (sorry about that), including my own. I can log in quickly but the demo app times out. But my safenet webpage is still there!
The seed vault, on my cloud server, has been rock-solid, though, and running continuously since the last reboot 11 hours ago.
EDIT: After thinking some more about this, I have concluded that the failure of the demo app to run is due to inactivity of some particular data that is only accessed when the demo app is run. My hourly cron job has been exercising both the safenet website and the launcher (i.e., the credentials, if that makes sense), but because the demo app isn't started often enough, something is lost.
Is someone uploading a large amount of data? It seems to be bogging down with frequent time-outs.
I was finding that my non-SAFE usage of the Internet was timing out; SSH sessions were crashing, due to the six vaults on my LAN sucking up all the bandwidth.
To deal with this I have segmented my LAN into two VLANs: one that allows the vaults 80% of the upstream and downstream bandwidth, and the other for everything else.
So now I can easily access the Internet, but the vaults are struggling with the number of requests they are getting, with frequent time-outs.
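For anyone without VLAN-capable gear, a rough software-only substitute (not what I actually did, just a sketch) is to cap the interface on the box running the vaults with tc; the interface name and rate here are examples:
# cap outbound traffic on eth0 to 8 Mbit/s with a simple token bucket filter
$ sudo tc qdisc add dev eth0 root tbf rate 8mbit burst 32kbit latency 400ms
# and to remove the cap again:
$ sudo tc qdisc del dev eth0 root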
Launcher users: It would help to hold off on uploading your music collection going all the way back to The Doors, and instead to add an extra vault or two.
The current version of safe_vault makes no distinction between vaults that have a lot of bandwidth and those that have only a little. So the seed vault is very stable because, at least in part, it is on a huge pipe by itself, while the six on my LAN are struggling, sharing 80% of a consumer broadband connection.
I am loth to reconfigure the cloud server to run multiple vaults, since it has been doing so well, rock-solid for days now.
I've added a feature request for some way of adjusting this. For all I know there might already be a hidden way to do this.