Another client error today with the new users. I have no time to recreate other users, so for now all my sites are frozen.
Are there plans to build the vault into the launcher, or is the plan to keep them separate?
Or perhaps just have the launcher with the messaging capability… a vault-lite. Then all launchers, even while passive, are adding strength to the network.
A reason not to, perhaps, would be drawing on network limits… not all networks are free. If messaging is light impact, perhaps it's still not unreasonable.
I left my vault on today while I was at work and picked up 762MB of data. That's a lot more in 10 hours than I picked up before in a few days, 3 times more I think…
very pleasing
cheers
Al
Are there any safe sites working for you?
I tried several sites listed in this topic and also from Test 6 Safesites! and none of them seem to be working. The status in launcher logs is always "IN PROGRESS".
Am I alone in this?
Not alone, this testnet is a real stinker… when running a vault, clearnet performance takes a hit at times.
This test has shown the vault to be a resource pig, but it's not obvious how when looking at regular resource utilization.
One weird thing I had was with DNS… I had 8.8.8.8 as primary and Google services were suffering significantly. When I reverted to the ISP's DNS, things cleaned up… weird that running a vault would have that impact.
I'm not sure how it all works, but we have one vault process with what seems like 8+ threads, each with different ingress/egress ports.
By limiting the upload bandwidth of the vault to slightly less than my connection is capable of, all lagging disappeared. Before that, opening the Google front page took ~10 seconds.
Is there any reason why the vault couldn't have a simple settings window, for example to limit its outgoing speed or something?
Or does its config file already have something like that?
The best answer is probably that they are busy doing other things. Since there exist numerous ways to achieve it with already existing tools, this probably shouldn't be a priority before launch.
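For anyone curious what a built-in limit might look like one day, here is a rough token-bucket sketch in Rust, driven by a made-up `max_upload_bytes_per_sec` setting. None of these names exist in the actual vault code or config; it just illustrates the idea of delaying outgoing chunks until the budget allows.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Minimal token-bucket throttle: callers ask for permission to send
/// `n` bytes and are delayed until enough budget has accumulated.
struct UploadThrottle {
    bytes_per_sec: f64, // e.g. read from a hypothetical config field
    available: f64,     // bytes we are currently allowed to send
    last_refill: Instant,
}

impl UploadThrottle {
    fn new(bytes_per_sec: f64) -> Self {
        UploadThrottle {
            bytes_per_sec,
            available: bytes_per_sec, // start with one second of budget
            last_refill: Instant::now(),
        }
    }

    /// Block until `n` bytes may be sent without exceeding the limit.
    /// Callers should keep `n` below one second's worth of budget,
    /// since the bucket is capped at that size.
    fn acquire(&mut self, n: f64) {
        loop {
            // Top up the budget based on elapsed time, capped at 1s worth.
            let now = Instant::now();
            let elapsed = now.duration_since(self.last_refill).as_secs_f64();
            self.available = (self.available + elapsed * self.bytes_per_sec)
                .min(self.bytes_per_sec);
            self.last_refill = now;

            if self.available >= n {
                self.available -= n;
                return;
            }
            // Sleep roughly until enough budget should be available.
            let deficit = n - self.available;
            sleep(Duration::from_secs_f64(deficit / self.bytes_per_sec));
        }
    }
}

fn main() {
    // Pretend the config said "limit uploads to 100 KB/s".
    let mut throttle = UploadThrottle::new(100.0 * 1024.0);
    for chunk in 0..5 {
        throttle.acquire(64.0 * 1024.0); // each fake chunk is 64 KB
        println!("chunk {} allowed to go out", chunk);
    }
}
```

Wrapping the vault's outgoing socket writes in something like `acquire()` would give the same effect as the external tools mentioned above, just driven from a config value instead.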
I think it will; at the moment all vaults are equal, but some are more equal than others.
The problem is that if your vault needs to do some work, then it must, but all of them do the same and transfer a ton of data on every churn event. It's one of the main parts of data chains. However, when folk do limit their vaults, they will earn less safecoin (obviously), and if a vault is limited too much it may get kicked from the network (this is what the new routing table RFC makes easier).
So the 3 issues we need to resolve are:
- Allow the limits
- Reduce (vastly) the data xfer during churn
- Allow the network to decide when to kick a vault
The reason it's not in place at the moment is really number 2. The current testnets have this issue and it also causes data loss under high churn (which slows down computers, which makes folk switch off, which creates more churn ;-( ). This will improve a lot as the code takes on the routing table and data chain RFCs as we progress.
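To make the churn cost concrete, here is a toy sketch (nothing to do with the real routing code; names, sizes and numbers are made up): if chunk responsibility is decided by XOR closeness, then when one node drops out, only the chunks whose close group actually changed need to move, which is a tiny fraction of re-sending everything on every churn event.

```rust
// Toy model only: names are u64 and "closeness" is XOR distance,
// just to show why re-sending everything on churn is so much heavier
// than sending only the chunks whose close group changed.

const GROUP_SIZE: usize = 3;

/// The GROUP_SIZE node names closest (by XOR distance) to `target`.
fn close_group(nodes: &[u64], target: u64) -> Vec<u64> {
    let mut sorted: Vec<u64> = nodes.to_vec();
    sorted.sort_by_key(|&n| n ^ target);
    sorted.truncate(GROUP_SIZE);
    sorted
}

fn main() {
    let mut nodes: Vec<u64> = vec![0x11, 0x35, 0x7c, 0xa2, 0xd9, 0xf0];
    // 1000 fake chunk names spread over the name space.
    let chunks: Vec<u64> = (0..1000u64)
        .map(|i| i.wrapping_mul(0x9e3779b97f4a7c15))
        .collect();

    // Record each chunk's close group, then lose one node (a churn event).
    let before: Vec<(u64, Vec<u64>)> =
        chunks.iter().map(|&c| (c, close_group(&nodes, c))).collect();
    let lost = 0xa2;
    nodes.retain(|&n| n != lost);

    // Naive behaviour: every stored chunk gets re-sent on every churn event.
    let naive_moves = chunks.len();

    // Smarter behaviour: only chunks whose close group actually changed move.
    let needed_moves = before
        .iter()
        .filter(|(c, old)| close_group(&nodes, *c) != *old)
        .count();

    println!("naive churn transfer: {} chunk copies", naive_moves);
    println!("actually needed:      {} chunk copies", needed_moves);
}
```

The gap between the two numbers is essentially issue number 2 above.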
What makes a vault running on a 1mbit line different from one running on a 100mbit line limited to 1mbit?
Nothing, the speed just is what it is. If the network could not handle vaults as slow as 10 Kbps, they would get kicked off; if limited to that, the same. If a vault cannot deliver a chunk it should deliver in a reasonable time, it will get downgraded. These times are dynamic though, and the network should calculate them in "real time". Nodes that cannot deliver data can still have value; if they then cannot handle control packets, they will get kicked.
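On the "calculate them in real time" part, one common way such a dynamic deadline is computed (a sketch only, all names and constants made up, not taken from the vault code) is a smoothed average plus deviation over observed delivery times, much like TCP's RTT estimate:

```rust
/// Rough sketch of a self-adjusting delivery deadline. Purely
/// illustrative; the real network may weigh things very differently.
struct DeliveryDeadline {
    avg_secs: f64, // smoothed observed delivery time
    var_secs: f64, // smoothed deviation
}

impl DeliveryDeadline {
    fn new(initial_secs: f64) -> Self {
        DeliveryDeadline { avg_secs: initial_secs, var_secs: initial_secs / 2.0 }
    }

    /// Feed in how long the last chunk actually took to arrive.
    fn record(&mut self, observed_secs: f64) {
        const ALPHA: f64 = 0.125; // weight of the newest sample
        const BETA: f64 = 0.25;   // weight of the newest deviation
        self.var_secs = (1.0 - BETA) * self.var_secs
            + BETA * (observed_secs - self.avg_secs).abs();
        self.avg_secs = (1.0 - ALPHA) * self.avg_secs + ALPHA * observed_secs;
    }

    /// A node slower than this becomes a candidate for downgrading.
    fn allowed_secs(&self) -> f64 {
        self.avg_secs + 4.0 * self.var_secs
    }
}

fn main() {
    let mut deadline = DeliveryDeadline::new(2.0);
    for &sample in &[1.8, 2.4, 1.9, 3.1, 2.2] {
        deadline.record(sample);
    }
    println!("current allowed delivery time: {:.2}s", deadline.allowed_secs());
}
```

The nice property of something like this is that the allowed time tightens or loosens with real network conditions rather than being a fixed number someone picked.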
This may be a silly question, but do the vaults still work if my computer is asleep? When I wake it up and check the terminal, it has activity logged and there are nodes in my routing table, so I assume yes, but I just want to confirm.
Nope, none of the safenet sites work for me, and the two accounts I had are not working either, but I've been able to create a new account using the exact same pw. The demo app let me in, let me choose a DNS name I had used before, and almost let me upload my regular website, but it failed at 79%. The vault seems to be fine; I don't think I have noticed any impact on PC performance or the cable connection, but then they are both pretty beefy.
A few drinks, apologies if the above is gibberish.
Al
Makes sense to me! And the same applies here: no access to safenet sites at all, and for me the demo app has been unable to connect for three days now.
So I am not alone in being unable to access any safe sites.
Yes, we have been warned in the OP:
But testnet 5, which was supposed to be destructive, worked like a charm, so I had been spoiled beforehand and didn't expect failure of testnet 6. A few words from Maidsafe explaining the sequence of events that led to this state would be welcome.
Not a sequence of events, but with every vault acting the same as every other vault, and much more data per vault, this test shows the need to limit upload sizes right now. The network seems more than able to handle video etc., but the vaults cannot cope with churning all data on every churn event. They will not need to do that for much longer though, with the changes to routing and data chains etc.
So the network can handle very small amounts of data per file right now, but large videos are not working well with churn, due to the increased capacity spread over a small number of nodes; most importantly though, every vault churning all data to every other vault when a node is lost/gained is way too much. That's good news though, as we do have answers and they are actively being worked on.
Should we quit the vault?? Mine is still running at the same speed as usual…
I wonder if there'll be a way in future to know where the limits lie. Users will have more confidence if they know the network is well inside its limitations. Perhaps it's not a linear problem that can be made obvious like that, but it would be nice to have some real, tangible evidence of network stability over time. Legacy is an important factor in attracting users, and more detail will speed that along.