NewYearNewNet [04/01/2024 Testnet] [Offline]

Here are 199 coins for you @raulillo

f0cc71e6cc44d7cc7551cccc1adaccb3ccb6ccddcc15dbcc9dcc4f661cd3cc0f37c5cca9cc219dcc3ff0cc3bd6cc667be6cca3cc95cc292fe1cc53e1cca7cc19654c14e5cc7206a7cc9acc7914a8ccdfcc90cc27729bccb2ccecccbeccefcc699fcc91cce2cc5d4812441ececc81cc1584cc8acc64cbcc3e7d64096b5881ccafccdacc84cc193752cdcccacc34b9cc95cc6000dc466bb3cc48f0cc7c07c5cc91cc6c94cc93cc4e07f6cca4cc73b1cccccceecc65f5ccc5ccb0cc6539438acc47bcccc9cc380372eacc3228d5cc91cc86ccd9cc01e3ccdcccdbccaaccdccc555a1b627788cca6cc88ccf6cc4644799ccc44021884cc83ccaecc5e4fafcc85cc125ccccc60c8cc70cdcc89cc92cc7ed7ccc7cc1f58231d61fcccc4cca2cc1ae1ccc0cce1cc8bcc4fd5ccf0ccceccfdcc6daeccf8ccf5cc1041d8cc355b95cc6e00dc31cfcce0cc1b163d198dcc479acca5cc80cc8bcc10bdcc46d9cc66accc34ceccfaccffcc48b8cc094babcc8cccd8cc66d7ccd1cc301550abccb1cccacc80cc93ccc6cc31319bccd9ccbfcca6cc3000dc9391646574707972636e45a981
2 Likes

Thanks a lot, everyone!

2 Likes

Upload failure in my first batch - tried twice, same result:

Error:
0: Failed to upload chunk batch: Transfer Error Failed to send tokens due to Wallet has pre-unconfirmed tx, resent them and try later…

When updating the client I deleted the wallet at the start and failed to get tokens from the faucet, so Southside sent me some, and that went through without an error … so I’m not sure what’s happening.

Making a private file subsequently public doesn’t seem to make it downloadable. It also doesn’t change its ID (which may have something to do with it?).

Steps:

  1. Create small testfile “2meg”

dd if=/dev/random of=2meg bs=1M count=2

  2. Upload 2meg without -p
safe files upload 2meg
Logging to directory: "/root/.local/share/safe/client/logs/log_2024-01-06_10-03-36"
Using SN_LOG=none
Built with git version: b633736 / main / b633736
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Starting to chunk "2meg" now.
Chunking 1 files...
Uploading 4 chunks
**************************************
*          Uploaded Files            *
*                                    *
*  These are not public by default.  *
*     Reupload with `-p` option      *
*      to publish the datamaps.      *
**************************************
"2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
  3. Try downloading 2meg
root@localhost:~/testfiles# safe files download "2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
[snip]
Downloading "2meg" from e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6 with batch-size 16
Error downloading "2meg": Network Error GetRecord Query Error RecordNotFound.
  4. Make 2meg public by reuploading with -p flag - notice the hex address hasn’t changed
safe files upload -p 2meg
[snip]
"2meg" will be made public and linkable
Starting to chunk "2meg" now.
Chunking 1 files...
Files upload attempted previously, verifying 5 chunks
All files were already uploaded and verified
**************************************
*          Uploaded Files            *
**************************************
"2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
  5. Try downloading again
safe files download "2meg" e2e40b66937ff9e882947f885f2347c826cf1264f305140d2d2b7be017ed48d6
[snip]
Error downloading "2meg": Network Error GetRecord Query Error RecordNotFound.
3 Likes

Morning update:

Number of connections seems to have stabilized at around 85k … about 137 per node

CPU usage is slowly going up on all machines … any idea how to check the response time of individual nodes?

Total traffic since the start: 1 TB download, 0.87 TB upload … 1.65 GB down, 1.59 GB up per node

1 Like

The address shouldn’t change, as it’s the hash of the datamap. What should happen is that the datamap gets uploaded and so can be used to download the file.
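
To illustrate that property, a tiny sketch assuming the `sha2` and `hex` crates (illustrative only; the real network derives XOR-names with its own scheme):

```rust
// Illustrative only: the real network uses its own XOR-name derivation.
// The point is that the address is a pure function of the datamap bytes,
// so publishing the datamap later cannot change the address.
use sha2::{Digest, Sha256};

fn address_of(datamap_bytes: &[u8]) -> [u8; 32] {
    Sha256::digest(datamap_bytes).into()
}

fn main() {
    let addr = address_of(b"serialized datamap");
    println!("{}", hex::encode(addr));
}
```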

People may have been hitting this problem since the start but for some reason MaidSafe haven’t.

4 Likes

Hey! I know what might be causing that. When the xor addr of the file is provided, we don’t check whether the datamap exists locally. Instead, we query the network for the datamap directly, and since the file is private, the datamap would not have been uploaded, hence the error.

I will put up a quick fix to check if the file is private before proceeding. Thank you for reporting this :smile:
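
Roughly the shape of the check, with hypothetical names (`LocalUploads`, `resolve_datamap` and friends are illustrative, not the actual sn_cli API):

```rust
// Illustrative sketch only: these are hypothetical names, not the
// actual sn_cli/sn_client API.
use std::collections::HashMap;

#[derive(Debug)]
enum DownloadError {
    /// The file was uploaded privately, so its datamap was never published.
    FileIsPrivate,
    /// No published datamap was found on the network.
    RecordNotFound,
}

struct LocalUploads {
    /// xor address -> was the upload public?
    uploads: HashMap<String, bool>,
}

impl LocalUploads {
    fn is_private(&self, xor_addr: &str) -> bool {
        self.uploads.get(xor_addr) == Some(&false)
    }
}

fn resolve_datamap(local: &LocalUploads, xor_addr: &str) -> Result<Vec<u8>, DownloadError> {
    // Proposed order: consult the local uploads record first, so a private
    // upload fails with a clear error instead of a network RecordNotFound.
    if local.is_private(xor_addr) {
        return Err(DownloadError::FileIsPrivate);
    }
    // Otherwise query the network for the published datamap (stubbed here).
    Err(DownloadError::RecordNotFound)
}

fn main() {
    let local = LocalUploads {
        uploads: HashMap::from([("some-xor-addr".to_string(), false)]),
    };
    match resolve_datamap(&local, "some-xor-addr") {
        Err(DownloadError::FileIsPrivate) => {
            println!("file was uploaded privately; re-upload with -p first")
        }
        other => println!("{:?}", other),
    }
}
```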

13 Likes

So a data_map per file is an issue because of file size, and its not being encrypted causes security issues.
Is padding inefficient/undesirable?

Would a register pass the minimum threshold for encryption?

Am I following?

1 Like

I do not know if this is supposed to happen, but yesterday the token download worked fine:

And today, after restarting the command (to check), an error appeared:

PS C:\Users\gggg> safe wallet get-faucet 134.209.21.136:8000
Logging to directory: "C:\\Users\\gggg\\AppData\\Roaming\\safe\\client\\logs\\log_2024-01-06_13-00-35"
Built with git version: ba2bb2b / main / ba2bb2b
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Requesting token for wallet address: b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31
Failed to get tokens from faucet, server responded with: "Failed to send tokens: Transfer Error Failed to send tokens due to The transfer was not successfully registered in the network: CouldNotSendMoney(\"Network Error GetRecord Query Error SplitRecord { result_map_count: 3 }.\")."

To be sure, I reinstalled the client binaries and re-entered the command to get tokens, and then a connection error appeared:

PS C:\Users\gggg> safeup client --version 0.86.90
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe.exe for x86_64-pc-windows-msvc at C:\Users\gggg\safe...
  [########################################] 6.72 MiB/6.72 MiB
safe.exe 0.86.90 is now available at C:\Users\gggg\safe\safe.exe
PS C:\Users\gggg> safe wallet get-faucet 134.209.21.136:8000
Logging to directory: "C:\\Users\\gggg\\AppData\\Roaming\\safe\\client\\logs\\log_2024-01-06_13-03-29"
Built with git version: ba2bb2b / main / ba2bb2b
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
Requesting token for wallet address: b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31
Error:
   0: error sending request for url (http://134.209.21.136:8000/b8f5046ea7cc9d4b55b93d342893651d1ecb465eb4c3c27d600a7f8534a6cf1d4bb0f39ae55c83073d20aa7dc17a8e31): error trying to connect: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
   1: error trying to connect: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
   2: tcp connect error: No connection could be made because the target machine actively refused it. (os error 10061)
   3: No connection could be made because the target machine actively refused it. (os error 10061)

Location:
   sn_cli\src\subcommands\wallet.rs:268

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

Is it because I retyped the command to download tokens? Is something wrong?

I can’t think of a way of checking the response time of individual nodes.

But I might have an answer for the CPU usage increasing over time. It is something I was puzzling over. Then I realised that the number of records stored on a node grows as time goes by, which means more GETs land on that node, as long as the downloading people are doing (and I assume MaidSafe have tests running) stays roughly constant. More GETs mean more CPU usage and more outbound bandwidth used to serve them.

2 Likes

I think adding a check is a good idea but may not be a fix for the problem being reported.

My understanding is that people have had problems when they upload privately and then repeat with -p to make the file public.

Either that or the problem arises when trying to download a single file that you have only uploaded privately (perhaps with others in a directory).

I’m not sure because the reports don’t all seem to be for the same thing, so worth checking both scenarios.

1 Like

Hi, All,

Sorry to let you all know that we have to shut down the faucet_server due to an unrecoverable issue.

We tried to restart it, but it is not working.
It will need some further investigation/work to get it resolved.

You can still get tokens from other members.

Sorry for the hassle.

7 Likes

If the default CLI/API upload can cause small files to be uploaded in plain text, I think that’s a problem. So padding (or whatever) to prevent this should be a default, with the option to disable it.

When using the Safe (app/API) this won’t be necessary, because everything will be encrypted if it is inside the Safe.

But the CLI and API are separate (in fact, they are used to build the Safe), so we still need to be careful, and a sensible default is to ensure everything is encrypted unless explicitly turned off (e.g. by a CLI option, or a flag in an API call).
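
As a rough sketch of the default I have in mind, assuming some minimum size below which self-encryption can’t split a file into chunks (the constant and helpers here are my own illustration, not the actual self_encryption API):

```rust
// Illustrative constant and helpers, not the real self_encryption crate API.
const MIN_ENCRYPTABLE_BYTES: usize = 3 * 1024;

/// Pad small inputs up to the threshold so they always go through
/// self-encryption rather than being stored as a single plain-text chunk.
/// The original length is prepended so padding can be stripped on download.
fn pad_for_encryption(data: &[u8]) -> Vec<u8> {
    let mut out = (data.len() as u64).to_le_bytes().to_vec();
    out.extend_from_slice(data);
    if out.len() < MIN_ENCRYPTABLE_BYTES {
        out.resize(MIN_ENCRYPTABLE_BYTES, 0);
    }
    out
}

/// Strip the padding again using the stored length prefix.
fn unpad(mut padded: Vec<u8>) -> Vec<u8> {
    let len = u64::from_le_bytes(padded[..8].try_into().unwrap()) as usize;
    padded.drain(..8);
    padded.truncate(len);
    padded
}

fn main() {
    let secret = b"tiny file".to_vec();
    let padded = pad_for_encryption(&secret);
    assert!(padded.len() >= MIN_ENCRYPTABLE_BYTES);
    assert_eq!(unpad(padded), secret);
}
```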

4 Likes

This has not been my case: after -p there have been no problems with the other people.

It’s more like this really

A data_map can be very small.
A directory is really a list of data maps (pointers to files) → so it is larger, but still not encrypted.

So think of a dir as a register full of data maps and possibly other registers (sub directories)

The root dir (register) can live encrypted in our account packet, and therefore all your private files’ data maps are contained in registers, all the way back to the root dir.

So all we need to do is provide this directory structure and folk upload their files into their own “SAFE disk”. With registers encrypted (which we can do easily by deterministically creating new BLS keys on the fly), there are really no data_maps for folk to worry about.

I hope this makes sense.

To make some files public, just using a plain unencrypted register is enough. Only you can add files to the dir, but everyone can see them.
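
A rough sketch of the structure being described (types are illustrative only, not the real register API):

```rust
// Illustrative types only; the real register and datamap types live in the
// Safe Network crates and differ from this sketch.
struct DataMap(Vec<u8>); // per-file datamap, kept small

enum DirEntry {
    /// A file: its name plus the datamap needed to fetch (and decrypt) it.
    File { name: String, data_map: DataMap },
    /// A subdirectory: the address of another register.
    Dir { name: String, register_addr: [u8; 32] },
}

/// A directory is a register holding datamaps and pointers to sub-registers.
/// For a private tree the register contents would be encrypted with a key
/// derived deterministically (e.g. a fresh BLS key per register); a public
/// directory can be left as a plain, unencrypted register.
struct DirectoryRegister {
    entries: Vec<DirEntry>,
}
```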

cc @joshuef @qi_ma @roland

10 Likes

I assume that this will not remove the ability to upload a single small file as now, causing it to be stored in an unencrypted chunk?

In which case the CLI and API should still be designed defensively, so that this only happens if explicitly requested (e.g. providing an option on the CLI or setting a flag in the API to disable default padding).
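
For illustration, the kind of opt-out I mean might look like this (hypothetical names, a sketch rather than a proposal for the real API):

```rust
/// Hypothetical upload options: padding on by default, explicit opt-out.
struct UploadOpts {
    /// Pad files below the self-encryption minimum by default; set to
    /// false only when unencrypted storage is explicitly acceptable.
    pad_small_files: bool,
}

impl Default for UploadOpts {
    fn default() -> Self {
        Self { pad_small_files: true }
    }
}

fn main() {
    // Safe by default…
    let default_opts = UploadOpts::default();
    assert!(default_opts.pad_small_files);
    // …and plain-text storage only on explicit request.
    let opt_out = UploadOpts { pad_small_files: false };
    assert!(!opt_out.pad_small_files);
}
```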

7 Likes

Yes, thank you. :+1:

I am not sure what use case there is for uploading single unconnected files though. It’s extra work and code, so the use case should be compelling.

It can perhaps be done outside of self-encryption and use some other form, but what would such a small single file be for? Genuine query here.

3 Likes

I’m not arguing for it but clarifying.

If the API to allow that is withdrawn, or perhaps left undocumented, that is different.

But if the official API (or CLI) allows people to upload data they think is being encrypted but is not, I don’t think that’s a good position for a network which claims to be secure and private from the ground up.

So it’s not just a matter of whether there’s a good use case but also about making the system live up to its promises.

9 Likes

I agree. Right now it has been like this for testing the nodes and replication etc. It has to change before release IMO and that should be fine. It’s a showstopper for building apps really right now.

9 Likes