Upload failure in my first batch - tried twice same result:
Error:
0: Failed to upload chunk batch: Transfer Error Failed to send tokens due to Wallet has pre-unconfirmed tx, resent them and try later…
When updating the client I deleted the wallet at the start and failed to get tokens from the faucet, so Southside sent me some, and that transfer went through without an error … so I'm not sure what's happening.
Making a private file public afterwards doesn't seem to make it downloadable. It also doesn't change its ID (which may have something to do with it?).
Steps:
Create small testfile “2meg”
dd if=/dev/random of=2meg bs=1M count=2
Upload 2meg without -p
safe files upload 2meg
Logging to directory: "/root/.local/share/safe/client/logs/log_2024-01-06_10-03-36"
Using SN_LOG=none
Built with git version: b633736 / main / b633736
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Starting to chunk "2meg" now.
Chunking 1 files...
Uploading 4 chunks
**************************************
*           Uploaded Files           *
*                                    *
*  These are not public by default.  *
*     Reupload with `-p` option      *
*      to publish the datamaps.      *
**************************************
"2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
Make 2meg public by reuploading with the -p flag — notice the hex address hasn't changed:
safe files upload -p 2meg
[snip]
"2meg" will be made public and linkable
Starting to chunk "2meg" now.
Chunking 1 files...
Files upload attempted previously, verifying 5 chunks
All files were already uploaded and verified
**************************************
*           Uploaded Files           *
**************************************
"2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
Hey! I know what might be causing that. When the XOR address of the file is provided, we don't check whether the datamap exists locally; instead, we query the network for it directly. Since the file is private, the datamap was never uploaded, hence the error.
I will put up a quick fix to check whether the file is private before proceeding. Thank you for reporting this!
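To illustrate the shape of the fix described above, here is a minimal sketch of the decision: a privately uploaded file's datamap only exists locally, so the client should fall back to the local copy rather than querying the network and failing. All names here (`FileAccess`, `DatamapSource`, `resolve_datamap`) are illustrative, not the real `sn_cli` API.

```rust
// Hypothetical sketch of the proposed fix, not real sn_cli code.

#[derive(Debug, PartialEq)]
enum FileAccess {
    Public,  // datamap chunk was published to the network (-p upload)
    Private, // datamap only exists locally; never uploaded
}

#[derive(Debug, PartialEq)]
enum DatamapSource {
    Network,
    LocalOnly,
}

/// Decide where to fetch the datamap from before attempting a download.
fn resolve_datamap(access: &FileAccess) -> DatamapSource {
    match access {
        // Public upload: safe to query the network by XOR address.
        FileAccess::Public => DatamapSource::Network,
        // Private upload: a network query would fail, so use the
        // locally stored datamap instead of returning an error.
        FileAccess::Private => DatamapSource::LocalOnly,
    }
}

fn main() {
    assert_eq!(resolve_datamap(&FileAccess::Public), DatamapSource::Network);
    assert_eq!(resolve_datamap(&FileAccess::Private), DatamapSource::LocalOnly);
    println!("ok");
}
```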
I do not know if this is supposed to happen, but yesterday the token download worked fine.
Today, after re-running the command (to check), an error appeared:
PS C:\Users\gggg> safe wallet get-faucet 134.209.21.136:8000
Logging to directory: "C:\\Users\\gggg\\AppData\\Roaming\\safe\\client\\logs\\log_2024-01-06_13-00-35"
Built with git version: ba2bb2b / main / ba2bb2b
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Requesting token for wallet address: b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31
Failed to get tokens from faucet, server responded with: "Failed to send tokens: Transfer Error Failed to send tokens due to The transfer was not successfully registered in the network: CouldNotSendMoney(\"Network Error GetRecord Query Error SplitRecord { result_map_count: 3 }.\")."
To be sure, I reinstalled the client binary and re-ran the token request command; this time a connection error appeared:
PS C:\Users\gggg> safeup client --version 0.86.90
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe.exe for x86_64-pc-windows-msvc at C:\Users\gggg\safe...
[########################################] 6.72 MiB/6.72 MiB
safe.exe 0.86.90 is now available at C:\Users\gggg\safe\safe.exe
PS C:\Users\gggg> safe wallet get-faucet 134.209.21.136:8000
Logging to directory: "C:\\Users\\gggg\\AppData\\Roaming\\safe\\client\\logs\\log_2024-01-06_13-03-29"
Built with git version: ba2bb2b / main / ba2bb2b
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Requesting token for wallet address: b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31
Error:
0: error sending request for url (http://134.209.21.136:8000/b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31): error trying to connect: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
1: error trying to connect: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
2: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
3: No connection could be made because the target machine actively refused it. (os error 10061)
Location:
sn_cli\src\subcommands\wallet.rs:268
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
Is it because I retyped the command to download tokens? Is something wrong?
I can’t think of a way of checking the response time of individual nodes.
But I might have an answer for the CPU usage increasing over time; it's something I was puzzling over. Then I realised: the number of records stored on a node grows as time goes by, while the volume of downloads people are doing (plus, I assume, MaidSafe's running tests) stays roughly constant. So each node serves more GETs over time, which means more CPU usage and more outbound bandwidth used to serve them.
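A back-of-envelope sketch of that reasoning: if each stored record attracts a roughly constant rate of GET requests, a node's serving load grows linearly with the number of records it holds. The numbers below are purely illustrative, not measurements.

```rust
// Illustrative model: GET load scales linearly with records held,
// assuming a roughly constant per-record request rate.

fn gets_per_hour(records_held: u64, gets_per_record_per_hour: f64) -> f64 {
    records_held as f64 * gets_per_record_per_hour
}

fn main() {
    let rate = 0.5; // assumed GETs per record per hour (made-up figure)

    // A node holding 1,000 records early in the testnet's life...
    let early = gets_per_hour(1_000, rate);
    // ...versus 10,000 records later on.
    let late = gets_per_hour(10_000, rate);

    assert_eq!(early, 500.0);
    assert_eq!(late, 5_000.0);
    // 10x the records means 10x the GET traffic, CPU, and outbound bandwidth.
    println!("early: {early} GETs/h, late: {late} GETs/h");
}
```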
If the default CLI/API upload can cause small files to be stored in plain text, I think that's a problem. So padding (or whatever) to prevent this should be the default, with an option to disable it.
When using the Safe (app/API) this won't be necessary, because everything inside the Safe will be encrypted.
But the CLI and API are separate (in fact, they're used to build the Safe), so we still need to be careful, and a sensible default is to ensure everything is encrypted unless explicitly turned off (e.g. by a CLI option, or a flag in an API call).
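The pad-by-default idea could look something like this sketch: if a file is below the self-encryption minimum, pad it (with a length prefix so the padding can be stripped on download) so it always goes through the chunk-encryption path unless the caller explicitly opts out. `MIN_SIZE` and the framing are assumptions for illustration, not the real threshold or format.

```rust
// Sketch of opt-out padding for small files. MIN_SIZE is an assumed
// placeholder for the self-encryption minimum, not the real value.
const MIN_SIZE: usize = 3072;

/// Pad `data` up to MIN_SIZE unless the caller explicitly allows
/// a plaintext chunk (mirroring a hypothetical `--allow-plaintext` flag).
fn pad_for_encryption(data: &[u8], allow_plaintext: bool) -> Vec<u8> {
    if allow_plaintext || data.len() >= MIN_SIZE {
        return data.to_vec();
    }
    // Prefix the real length so the padding can be stripped on download.
    let mut out = (data.len() as u64).to_le_bytes().to_vec();
    out.extend_from_slice(data);
    if out.len() < MIN_SIZE {
        out.resize(MIN_SIZE, 0); // zero-fill up to the minimum
    }
    out
}

fn main() {
    let small = b"hello";
    let padded = pad_for_encryption(small, false);
    assert_eq!(padded.len(), MIN_SIZE);

    // Recover the original bytes from the length-prefixed padding.
    let len = u64::from_le_bytes(padded[..8].try_into().unwrap()) as usize;
    assert_eq!(&padded[8..8 + len], &small[..]);

    // Large files pass through untouched.
    assert_eq!(pad_for_encryption(&[0u8; 4096], false).len(), 4096);
    println!("ok");
}
```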
data_map can be very small.
A directory is really a list of data maps (pointers to files) → so it is larger, but still not encrypted.
So think of a dir as a register full of data maps and possibly other registers (sub directories)
The root dir (register) can live encrypted in our account packet and therefore all your private files data maps are contained in registers, all the way back to the root dir.
So all we need to do is provide this directory structure, and folk upload their files into their own "SAFE disk". With registers encrypted (which we can do easily by deterministically creating new BLS keys on the fly), there are really no data_maps for folk to worry about.
I hope this makes sense.
To make some files public, a plain unencrypted register is enough. Only you can add files to the dir, but everyone can see them.
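The directory model described above can be sketched as follows: a directory is a register holding datamaps (files) and other registers (subdirectories), with the root register living encrypted in the account packet, and public directories simply left unencrypted. All types here are illustrative, not the real `sn_*` APIs.

```rust
// Illustrative model of "a dir is a register full of data maps
// and possibly other registers (sub directories)".

#[derive(Debug)]
enum Entry {
    /// Pointer to a file: its datamap (chunk addresses), kept out of
    /// public view whenever the enclosing register is encrypted.
    File { name: String, data_map: Vec<u8> },
    /// A nested sub-directory, itself a register.
    Dir { name: String, register: Directory },
}

#[derive(Debug)]
struct Directory {
    /// Private dirs: register encrypted with a derived BLS key.
    /// Public dirs: plain register — owner-writable, world-readable.
    encrypted: bool,
    entries: Vec<Entry>,
}

/// Walk the tree counting files, to show the recursive structure.
fn count_files(dir: &Directory) -> usize {
    dir.entries
        .iter()
        .map(|e| match e {
            Entry::File { .. } => 1,
            Entry::Dir { register, .. } => count_files(register),
        })
        .sum()
}

fn main() {
    let root = Directory {
        encrypted: true, // root lives encrypted in the account packet
        entries: vec![
            Entry::File { name: "2meg".into(), data_map: vec![0xe2, 0xe4] },
            Entry::Dir {
                name: "public".into(),
                register: Directory {
                    encrypted: false, // plain register: everyone can see these
                    entries: vec![Entry::File {
                        name: "photo.jpg".into(),
                        data_map: vec![0x01],
                    }],
                },
            },
        ],
    };
    assert_eq!(count_files(&root), 2);
    println!("ok");
}
```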
I assume this won't remove the current ability to upload a single small file such that it's stored in an unencrypted chunk?
In which case the CLI and API should still be designed defensively, so that this only happens if explicitly requested (e.g. by providing an option on the CLI or setting a flag in the API to disable the default padding).
If the API allowing that is withdrawn, or perhaps left undocumented, that is different.
But if the official API (or CLI) allows people to upload data they think is being encrypted but is not, I don’t think that’s a good position for a network which claims to be secure and private from the ground up.
So it’s not just a matter of whether there’s a good use case but also about making the system live up to its promises.
I agree. It's been like this so far for testing the nodes, replication, etc. It has to change before release IMO, and that should be fine. Really, it's a showstopper for building apps right now.