Sorry, the message you quoted was just me saying that for future testnets we may tie to specific versions. However, we weren’t doing that for this one. Hope that makes it clear.
If you’ve used the binaries on Windows previously, they were being installed to this directory: C:\Users\<username>\.safe.
Dot-prefixed directories aren’t very common on Windows. On Linux, programs respect a convention that these directories should be hidden, but that’s not the case on Windows; in Windows Explorer, the .safe directory will not be hidden. It has now been changed to use safe rather than .safe.
You can manage your environment variables on Windows by typing something like “environment” into the Start menu search, which should give you an option to “Edit the system environment variables”. You can then remove any .safe entries. This isn’t strictly necessary; it was just something to keep your environment variables clean.
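If you’d rather check from the command line first, here’s a quick PowerShell sketch (it only inspects the current session’s PATH; the persistent values still live in that dialog):

# List any PATH entries that still reference the old .safe directory
$env:Path -split ';' | Where-Object { $_ -match '\.safe' }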
As it happens, 0.83.38 is a release from just a few hours ago, and it’s a bad release that doesn’t have any assets attached. I don’t know why yet. This is also not Windows specific; it would fail on any platform.
If you used --version to get an earlier version, I think it would be fine.
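Something along these lines, for example (hedged: double-check the exact subcommand with safeup --help, and <earlier-version> is just a placeholder):

# Install a specific earlier version rather than the latest
safeup node --version <earlier-version>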
How you use environment variables on Windows will vary depending on whether you’re using PowerShell or cmd. Personally, on my Windows PC, I always just use PowerShell now rather than cmd. The set command works in cmd but not in PowerShell; in PowerShell, you use $env:VariableName = "Value".
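To make the contrast concrete, setting the same variable both ways (in both cases it only lasts for the current session):

rem cmd
set SN_LOG=all

# PowerShell
$env:SN_LOG = "all"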
Next question: safenode is not installed, so I cannot start a node. Is this not part of safeup?
PS C:\Users\a> $env:SN_LOG="all"; safenode --log-dir=$TempDir\safenode
safenode : The term 'safenode' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
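FWIW, a quick way to confirm it’s genuinely not on the PATH (this prints the binary’s location if found, and nothing otherwise):

Get-Command safenode -ErrorAction SilentlyContinue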
I know the main focus here is on the installation SOP, ease of use, and being able to use safeup across multiple platforms, but is anyone else getting any chunks stored on their node in the past 12+ hrs?
Here is where my current dashboard stands (no RequestIDs or Chunks):
I did a manual grep for request_id, RequestId, Chunk, and StoreChunk, and found nothing yet.
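For reference, this is the kind of search I mean, as a PowerShell sketch (the log path is my assumption; substitute whatever you passed to --log-dir):

# Search the node logs for any sign of chunk or request activity
Select-String -Path "$env:TEMP\safenode\*.log" -Pattern 'request_id|RequestId|Chunk|StoreChunk'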
I am starting to wonder if something changed, or if folks simply haven’t uploaded enough yet.
FWIW, I see only ~1,000 log entries for Network inactivity (not recorded in the panel above), but nearly 30,000 PEER_CONNECTION_CLOSED / PEER_CONNECTION_CONNECTED entries in the panel above, along with roughly 35 OUTGOING_CONNECTION_ERROR_OPERATION_TIMED_OUT messages for the same time frame.
Also, is there a hard cap of 1024 chunks written to disk on this testnet as well?
It can take a while before chunks start flowing, so I will keep the safenode running as long as the network stays up.
I was curious about the impact of the following PRs (since they show as merged a while back):
I still find it fascinating, and to be honest I’m pretty surprised by the number of peer connections being opened and closed even when there are no chunks to replicate or store yet.
The ratio of those two counts to the operation-timed-out and network-inactivity messages is orders of magnitude higher. Maybe it’s all just the expected workflow of the libp2p and Kademlia implementations as they stand? Hmmm.
I know it’s a work in progress, so let’s wait and see on future testnets.
I have 100 nodes running and have chunks on a lot of them but by no means all. Maybe 1/4? I’ll edit with a breakdown when I’m back at my desk.
I’m not very surprised because I’m sure we are still beholden to the spread of nodes across the hash space. If we want to see more nodes in use I think we need to upload a multitude of small files. A large file will just write to a small proportion of nodes. Uploading a 1GB file will do something very different to the network than uploading 1 million 1KB files. Or indeed 1 billion 1 byte files!
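Back-of-envelope, assuming roughly 1 MB chunks (an assumption on my part; I haven’t checked the current self_encryption settings): a 1 GB file self-encrypts into ~1,000 chunks, so at most ~1,000 close groups ever see any of it, whereas 1 million 1 KB files produce at least 1 million chunks (however small files end up being stored), scattered right across the XOR address space.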
Impact on what? On uniformity of data distribution across nodes?
Most likely, dropped connections just mean that they will be re-established slightly later.
I’m thinking a lot of small files would cause a lot of nodes to have a little bit of data on them, while a single large file would leave a small number of nodes holding a lot of data.
And it will be interesting to stress the system with a lot of small uploads, because while we all think chunky media files are what people will want to upload and download, and that they will be the bulk of the data, very small files are an equally valid use case, and any system has to be able to cope with their challenges.