Many nodes announced here didn’t last long. Depending on the user’s setup, nodes at home can be fragile (computer switched off, poor bandwidth, a changing IP address).
When that’s the case I would advise using a VPS, which is more reliable. It is possible to find some that are cheap yet powerful enough to pass the resource proof tests (like Hetzner or Cinfu, mentioned earlier).
I actually stopped running my node on Wednesday because I could not connect to the alpha network from other devices at home. This is something I hadn’t considered. There’s probably a solution having to do with ports, but I didn’t bother looking into the exact issue.
I don’t recommend Cinfu. I don’t know why, but the vault is not stable there and the connection is lost once or twice a day. Others like Linevast or Crowncloud work without problems.
Hetzner has been working flawlessly as far as I can tell, and I’ve also managed to set up a VPN on the same server, which, if it proves OK (so far so good), will offset almost all of the €5.88/month cost.
Our experience here suggests that cloud vaults will dominate the network because long-term reliability is so important. Much more so than we realised in the first couple of years after the crowdsale, I think.
Some have been saying this for a while (@mav for one) and I think they are right, which bothers me.
I too find it hard to see a way around this fact; it’s like arguing with physics. If farming tends to reward the first node to return a chunk, that intrinsically favours machines with high bandwidth and powerful CPUs, while node ageing favours those with the most uptime. Spread the rewards too widely and you sacrifice performance and reliability.
The high bandwidth upload requirement is also putting off home vaults, IMO. Only serious enthusiasts will pay to host a vault when they can’t earn anything from it, especially for a POC network (with PUT limits, etc.).
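A toy simulation (node names and latency figures are my own assumptions, not measured network behaviour) illustrates how strongly first-responder rewards concentrate on the lower-latency machine, even when its advantage is modest:

```python
# Toy "first responder wins" simulation: each request is won by whichever
# node returns the chunk fastest. Latencies are assumed, in milliseconds.
import random

random.seed(1)

# base latency per node; each request adds uniform jitter of up to 20 ms
nodes = {"datacentre_vps": 10, "home_dsl": 20}

wins = {name: 0 for name in nodes}
for _ in range(10_000):
    times = {n: base + random.uniform(0, 20) for n, base in nodes.items()}
    wins[min(times, key=times.get)] += 1

# Despite only a 10 ms head start, the VPS takes the large majority of rewards.
print(wins)
```

With these numbers the faster node should win roughly 7 times out of 8, which is the concentration effect described above.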
The safe_vault on my Raspberry Pi stopped on Jan 15 at 03:32 GMT, ending with the log output below.
I just started safe_vault (node 958753) again and it keeps running for the moment with network size = 23.
...
INFO 03:32:21.376023929 [<unknown> <unknown>:3566] Node(d28087..(1)) Dropped eee3ba.. from the routing table.
INFO 03:32:21.379442465 [<unknown> <unknown>:368] ---------------------------------------------
INFO 03:32:21.379521423 [<unknown> <unknown>:369] | Node(d28087..(1)) - Routing Table size: 1 |
INFO 03:32:21.379567465 [<unknown> <unknown>:370] | Exact network size: 2 |
INFO 03:32:21.379614704 [<unknown> <unknown>:371] ---------------------------------------------
INFO 03:32:21.576311968 [<unknown> <unknown>:3566] Node(d28087..(1)) Dropped 9c12e8.. from the routing table.
WARN 03:32:21.596595058 [<unknown> <unknown>:117] Restarting Vault
INFO 03:32:23.806263469 [<unknown> <unknown>:113] Created chunk store at /mnt/exhdd/tfa_v/csr/safe_vault_chunk_store.wxAJ7HNQBiQL with capacity of 34359738368 bytes.
ERROR 03:32:30.965110502 [crust::main::bootstrap mod.rs:210] Bootstrapper has no active children left - bootstrap has failed
INFO 03:32:30.965370085 [<unknown> <unknown>:269] Bootstrapping(208657..) Failed to bootstrap. Terminating.
For users or organizations wanting to be highlighted as a sponsor on the web site, I have updated the README.md file with instructions on how to install Docker.
@ridserver02, your node doesn’t appear in the honor roll and you can’t display its name in the galaxy because the host name doesn’t contain a double dash.
@maidsafe-Z390-GAMING-X, same problem and additionally your node doesn’t work. You seem to have a very powerful PC, so I suspect a network problem. Did you open and forward all needed ports (for docker and safe_vault)?
Edit: is it possible that your port 5483/tcp is already allocated, maybe by another instance of safe_vault?
Edit 2: the command to properly stop a previous docker node is docker swarm leave
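If you want to check whether a port such as 5483 is already taken before starting the vault, a quick generic check (plain Python, not a SAFE tool; the function name is my own) is to try binding it:

```python
# Returns True if something is already bound to the given TCP port locally.
import socket

def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False   # bind succeeded: nothing else holds the port
        except OSError:
            return True    # bind failed (EADDRINUSE): port already allocated

print(port_in_use(5483))
```

If this prints True before you start anything, some other process (possibly a leftover safe_vault instance) still holds the port.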
All good =) I only had a little bit of time to start it up and didn’t worry about the naming - thanks for mentioning it.
PS: oh, now I get it - with the correct host name I can just select my server to be shown in the galaxy - cool! I might adjust that when I have some time again.
In terms of interpretation, this is an unexpected scenario, hence the error log. What it’s indicating is that the vault has concluded it is no longer responsible for a particular chunk because of a NodeLoss event. That wouldn’t be expected: when a NodeLoss occurs, if anything we’d expect the vault to become responsible for more data, not less. With a NodeAdded event, by contrast, a vault can legitimately stop being in the close group for, and thus responsible for, an existing chunk.
As to why this is occurring here, that’s going to be tricky. We might need to collect logs from this instance, and maybe others, to see why this flow gets triggered as shown in the logs.
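The reasoning above can be sketched with a tiny model of XOR-distance close groups (names, group size, and node IDs are illustrative assumptions, not the actual routing code): losing a group member can only add to a surviving node’s set of responsible chunks, never remove from it.

```python
# Minimal close-group model: a chunk is the responsibility of the
# GROUP_SIZE nodes whose IDs are XOR-closest to its address.
GROUP_SIZE = 3

def close_group(chunk, nodes, k=GROUP_SIZE):
    """The k node IDs XOR-closest to the chunk address."""
    return sorted(nodes, key=lambda n: n ^ chunk)[:k]

nodes = [0b0001, 0b0010, 0b0100, 0b1000, 0b1111]
chunk = 0b0011

before = set(close_group(chunk, nodes))

# NodeLoss: a close-group member disappears...
nodes.remove(0b0010)
after = set(close_group(chunk, nodes))

# ...and every surviving member of the old group is still in the new one,
# i.e. NodeLoss can only grow a node's responsibility.
assert before - {0b0010} <= after
print(sorted(before), "->", sorted(after))
```

Under this model, a vault deciding it is *not* responsible for a chunk after a NodeLoss would contradict the metric, which is why the log is flagged as an error.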
I understand it can be hard to tell what went wrong. If you need me to do something, like sending you guys logs or anything, I’m at your service and will try to help as much as I can. The vault had been running for several weeks without problems; I noticed a few (around 6-8) node lost events in a row from time to time, but in the last few days I’ve been getting this long list of node lost events.