then put in cmd:
safe networks switch wild-testnet
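To check which network you are switched to afterwards, you can list the networks the CLI knows about (assuming your version of the CLI supports listing; the current one should be marked):
safe networks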
5. Upload some file to the network:
First create a file in the cli directory (any file will do), then put in cmd:
safe files put heyheyhey.txt
(You may notice the .txt at the end of the file name isn't visible; whether extensions are shown is a Windows setting that can be turned on or off. If you are not sure what your setting is, you can put the command “dir” in cmd; it shows all the files in the folder you opened before with the cd command.) You can see there is a file heyheyhey.txt.
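If you don't have a file handy, you can create one straight from cmd (the content doesn't matter) and confirm it with dir:
echo hey hey hey > heyheyhey.txt
dir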
6. Download a file from the network:
Put in cmd the address of the file and a name to save it as (addresses can be found on the forum):
safe cat safe://hygoygym7tsj5hhyyykd1aqpw3djxea6om6xku568ahm7hy7gfn6q5gy7xr > 1.jpg
(This file may have been deleted? If it wasn't deleted, you will see the image in the folder you are in.)
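If the download gives an error, you can first inspect what is stored at the address with the dog command (assuming your CLI version includes it; it prints metadata rather than the content):
safe dog safe://hygoygym7tsj5hhyyykd1aqpw3djxea6om6xku568ahm7hy7gfn6q5gy7xr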
Tried a random 10MB PUT, and it worked on the first try in an Alpine LXC container. Yessss!!
Looking forward to more frequent testnets! Thank you Maidsafe team!
Personally, I am really interested in the format and content of the JSON logs for the sn_node binary, so that different types of dashboards can be built for monitoring purposes.
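For example, if every log line turns out to be one JSON object, even plain jq could feed a quick dashboard or alert. Just a sketch; the log path and the field names (level, timestamp, message) are my guesses, not the actual sn_node format:
tail -f ~/.safe/node/local-node/sn_node.log | jq 'select(.level == "ERROR") | {time: .timestamp, msg: .message}'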
I applaud the approach, although we know the network can already do more than this.
Data integrity is fundamental, so isolating it makes perfect sense.
Testing this on my Windows PC for once, rather than my usual Linux ARM instance, but I'm running into some issues…
I can successfully cat the test image (as well as the others linked, including @Southside cock…) but Windows is telling me the files are corrupt?! @Southside, can you please verify this is the content of your co…ok that sounds wrong…content of your cock01.jpg file?
This is how the start of it looks in hex:
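For anyone wanting to rule out a corrupted transfer rather than a viewer issue: certutil ships with Windows, so both ends could compare a hash of the file (file name as above):
certutil -hashfile cock01.jpg SHA256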
Important, but there should also be a disincentive for nodes getting too full. Some way to make sure nodes don't go over something like 50% of their capacity; otherwise, if a global disaster strikes and a huge amount of data needs re-spreading, there might not be enough space. I'd suggest having junk ‘bonus’ data designed specifically to occupy node storage space.
Maybe for every chunk of user data stored, an equally-sized random bonus chunk could be stored on the same node, which would naturally cap user data at 50% of capacity, and presentation of both could be necessary for the full GET reward (if those are still a thing; if not, then some equivalent).
If a node is offered a user chunk but is already full (of user chunks and bonus chunks), then the node should tell the others in its section that it is full, rather than risk not getting a bonus for another chunk. If the section needs the node to take the chunk anyway (due to a catastrophe, maybe), it could exchange two of the node's bonus chunks for a single bonus chunk applicable to two different user chunks.
Uploading files of various sizes suggests the lag is at startup (larger files are effectively faster because later chunks can reuse the connection already established for the early chunks).
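A rough way to see this, assuming a bash shell with safe on the PATH (the sizes and file names are arbitrary):
head -c 1M /dev/urandom > small.bin
head -c 100M /dev/urandom > big.bin
time safe files put small.bin
time safe files put big.bin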
Ok, update on this: re-tried to cat the file and it now works fine… not sure what was happening before, but please ignore it and don't waste time investigating… weird stuff.