NewYearNewNet [04/01/2024 Testnet] [Offline]

I think @joshuef intends this to happen anyway, I mean encrypting what they store on disk. We can do it now as we have small nodes that on reboot just restart without causing tons of churn issues. So let’s see what he says, but I think this is a definite, unless we come up with an even simpler mechanism.

So to be clear, it’s not nodes encrypting everything for us, as we need to decrypt it to read it, but they can make sure data at rest is encrypted.

7 Likes

“So to be clear…” Sorry, but I don’t understand. Is this something different to your earlier solution? If so, can you elaborate?

2 Likes

No, it’s the same. It’s a thing that prevents users from seeing the data, but bad nodes run by bad users won’t encrypt it. So we cannot depend on this mechanism to make sure data is encrypted.

There are additional mechanisms clients can use though, but the proposal above simply stops readable data from appearing on honest people’s machines. Making sure your data is encrypted will have to be done client side.

5 Likes

Complicateder and complicateder as Alice once said! No, not that Alice :wink:

So many twists and turns.

So honest users can, and you think will, be protected by nodes adding a local layer of encryption. This has the downside that data cannot be recovered after a crash, reboot etc., which also prevents network-wide recovery from a widespread outage, although nodes storing less ameliorates this to an extent.

It should though keep honest users safe from legal jeopardy from storing illegal data.

Dishonest nodes would be able to see unencrypted small files and chunks that are put without self encryption.

I’m not sure what the risks are, or to whom, from that. They seem quite a mixture - some obvious, some probably hard to imagine.

5 Likes

[EDIT Added P to show pub and secret keys]

Here is a left-field lazy Saturday idea: encrypting small entries (the value in a KV), which are 32 bytes long IIRC. Let’s call the entry E.

  • E is stored in a register R at a given address Ra
  • S is our owner secret key for register R (nobody but us knows this)
  • P is our pub key, which is the owner key in the register that folk can see

So we take XOR(E, S) → C (ciphertext)

Now at Ra we don’t store E but instead store the ciphertext C

So the register entries are now obfuscated using a key that is only known to the holder of the secret key and is only used to read or write the entry; it does not stay in RAM and is never transmitted.

To read an entry
→ Go to Ra
→ Retrieve the entry (C)
→ XOR(C, S) → E

Nice and simple, and all on the client side. It can only be broken if folk find the secret key corresponding to the register owner’s key, which is of course not feasible. You can derive another key for this, but I don’t think there is a need to. Doing so would create a one-time-pad type algorithm, but that’s not necessary here AFAIK.

11 Likes

All on the client side :dart:
is the most important thing for me

6 Likes

I made the silly mistake of using option 1 in the script to restart everything. I thought the wallet would persist… but no, it did not; I have a new wallet and no coins.
Is there any way to access my old wallet? Probably the script wiped everything anyway…

So if I can get a few coins in 9418294fd9d6081392fcc79668058a0ffcfcb11599c24d43838f5265fdd1b0d339259ffa998a0798233db6b4de789e3f
I’d appreciate it. Can I backup the wallet? Does it use public/private key? Where is the private one in that case and how to restore it?

1 Like

Here’s a few to keep you going

125123199ecc23dccc197ffeccbacc28fdcc3e53bdcc14a5cc4bebcc613aa8ccc4cc80cc0548f9ccb4ccbdccfccc726a334631bdcce3cc6397cc6c4dffcca6cc95cc0ad8cc1981ccf0cc04b9cc17ccccd7cc2323519dcc5a0efdccb9cc2f6c7f80cccccc3341e8cc480620facce2ccc9ccf7ccc1ccb5ccabccd4cca7cc449ccca5cc0e4602c1cc9acc799ccc90cc738ccc6000dc4f64ddcc0a5914eacc6edccc98ccd7cca2ccb6ccd2cc1283cc94cc8acc09e4ccb3cc9bcc312b0da8cc4be5cc5e8accc1cc47decc14fecc32f1cc341e7ff6ccd0cc6694cc18e8cc72fbccafcc6fcccc65f7cc7984cc40cbcca9ccc2cc4b8acc458bcc065c6edbcc568bccd9ccdbcc4cfecc40a3ccbbcc303498cc89cc97cc00eecce2cc5888cc595bc6ccc4cc33311de7cca2cc100886ccb9cc34ddcc83cc1d78e3cc6900dc99cc0fbacc8dccb3ccc6ccf5ccc7cc95cceecc2e375c3e0effccfecccfcc5bffcc159fccddccc0cc8bcc49cbcc36faccdbcc0aa2cc96cca9ccf2ccd9cce8cc0fadccefcc42e3ccc9cccccc86cc21f6cca7cc3000dc9391646574707972636e45a981

4 Likes

Here you go, it’s as simple as this:


type Entry = [u8; 32];
type SecretKey = [u8; 32];
type CipherText = [u8; 32];

// Function to combine Entry and SecretKey to produce CipherText
fn entry_secretkey_to_cipher(entry: Entry, secret_key: SecretKey) -> CipherText {
    let mut cipher_text = CipherText::default();
    for i in 0..32 {
        // Using XOR to combine for demonstration
        cipher_text[i] = entry[i] ^ secret_key[i];
    }
    cipher_text
}

// Function to combine SecretKey and CipherText to produce Entry
fn secretkey_cipher_to_entry(secret_key: SecretKey, cipher_text: CipherText) -> Entry {
    let mut entry = Entry::default();
    for i in 0..32 {
        // Assuming reverse process is also XOR
        entry[i] = cipher_text[i] ^ secret_key[i];
    }
    entry
}

fn main() {
    // Example usage
    let entry = Entry::default(); // Replace with actual Entry
    let secret_key = SecretKey::default(); // Replace with actual SecretKey

    // Create CipherText from Entry and SecretKey
    let cipher_text = entry_secretkey_to_cipher(entry, secret_key);
    println!("CipherText: {:?}", cipher_text);

    // Recover Entry from CipherText and SecretKey
    let recovered_entry = secretkey_cipher_to_entry(secret_key, cipher_text);
    println!("Recovered Entry: {:?}", recovered_entry);
}
7 Likes

I tried to send some but got the error “Wallet has pre-unconfirmed txs, cann’t progress further.”

I get a similar error when trying to upload… so it looks like my wallet is broken. Any way to fix it without starting from scratch?

It’s all fun and games until it’s real money … then wallet hiccups are going to be hell.

Edit: I am supposing that the error first developed when my first upload failed.

Perhaps each upload needs to somehow extract the amount needed into a separate temporary wallet, so that if the upload or transaction goes awry, only that amount gets locked or destroyed - and it doesn’t affect the actual true wallet.

2 Likes

If the entry and ciphertext are both known, the secret key is also known. Does this create any problems in real-world use?

I think the normal crypto expectation is that knowing both the ciphertext and plaintext should not reveal a secret…?

AES-GCM produces a 48-byte output for a 32-byte input; would the larger size be an OK tradeoff?

type Entry = [u8; 32];
type SecretKey = [u8; 32];
type CipherText = [u8; 32];

// Does this function cause a problem??
// Function to combine CipherText and Entry to produce SecretKey
fn cipher_entry_to_secretkey(cipher_text: CipherText, entry: Entry) -> SecretKey {
    let mut secretkey = SecretKey::default();
    for i in 0..32 {
        // Assuming reverse process is also XOR
        secretkey[i] = cipher_text[i] ^ entry[i];
    }
    secretkey
}

// Function to combine Entry and SecretKey to produce CipherText
fn entry_secretkey_to_cipher(entry: Entry, secret_key: SecretKey) -> CipherText {
    let mut cipher_text = CipherText::default();
    for i in 0..32 {
        // Using XOR to combine for demonstration
        cipher_text[i] = entry[i] ^ secret_key[i];
    }
    cipher_text
}

// Function to combine SecretKey and CipherText to produce Entry
fn secretkey_cipher_to_entry(secret_key: SecretKey, cipher_text: CipherText) -> Entry {
    let mut entry = Entry::default();
    for i in 0..32 {
        // Assuming reverse process is also XOR
        entry[i] = cipher_text[i] ^ secret_key[i];
    }
    entry
}

fn main() {
    // Example usage
    let mut entry = Entry::default(); // Replace with actual Entry
    let mut secret_key = SecretKey::default(); // Replace with actual SecretKey
    entry[0] = 44;
    println!("Original entry: {:?}", entry);
    secret_key[0] = 88;
    println!("Original secret_key: {:?}", secret_key);

    // Create CipherText from Entry and SecretKey
    let cipher_text = entry_secretkey_to_cipher(entry, secret_key);
    println!("CipherText: {:?}", cipher_text);

    // Recover Entry from CipherText and SecretKey
    let recovered_entry = secretkey_cipher_to_entry(secret_key, cipher_text);
    println!("Recovered Entry: {:?}", recovered_entry);

    // Create SecretKey from CipherText and Entry
    let recovered_secretkey = cipher_entry_to_secretkey(cipher_text, entry);
    println!("Recovered SecretKey: {:?}", recovered_secretkey);
}
5 Likes
root@e6cb40ba73ec:~# safe files upload -p openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso 
Logging to directory: "/root/.local/share/safe/client/logs/log_2024-01-07_00-35-38"
Built with git version: ba2bb2b / main / ba2bb2b
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
"openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso" will be made public and linkable
Starting to chunk "openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso" now.
Uploading 311 chunks
**************************************
*          Uploaded Files            *
**************************************
"openSUSE-Leap-15.5-NET-x86_64-Build491.1-Media.iso" ChunkAddress(2ff418)
Among 311 chunks, found 311 already existed in network, uploaded the leftover 0 chunks in 11 minutes 7 seconds
**************************************
*          Payment Details           *
**************************************
Made payment of 0.000000000 for 0 chunks
Made payment of 0.000000000 for royalties fees
New wallet balance: 20.000000000

Looks like I had already uploaded this and forgot to share it…
Uploading the full ISO of 4,X gigs instead of the net version of 200 or so megs now…

Edit: address of the reduced iso uploaded in this message is: 2ff4187289456c669c5bf6481da917ceaed92e676cb85f227b5c749b723fadd9

2 Likes

Nice, as usual ian

It should not, as the Entry is local to the client and never transmitted. It should be like a secret key itself, but maybe that’s asking too much and opens the door for terrible mistakes. No worries using AES, mind you.

Using AES could be easier to audit against the case of a leaked entry exposing a secret key. It would mean entries need to be 48 bytes as you say, but maybe worth it?

What we would potentially lose is the group or multisig capability. So to overcome that we would do something like use the BLS signature of the Entry plus some word like ENCRYPT, in some fashion such as:
E == “abc…xyz”
Enc key == Sign(Hash(“abc…xyz” + “ENCRYPT”)) → Sig
C == AES(“abc…xyz”) using Sig as the key
In this way each threshold member can provide the encrypt or decrypt share. Or something of that fashion.

This might make better use of the threshold scheme? (I had thought we could do threshold encryption natively, but I cannot find that any more.)
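
A minimal sketch of that signature-derived key, assuming the blsttc and sha2 crates and my own function name derive_entry_key; the signing step is the part each threshold member could contribute a share to:

use blsttc::SecretKey;
use sha2::{Digest, Sha256};

// Enc key == Sign(Hash(entry + "ENCRYPT")), reduced to a 32-byte symmetric key.
fn derive_entry_key(owner_sk: &SecretKey, entry: &[u8]) -> [u8; 32] {
    // Hash the entry together with the ENCRYPT domain tag.
    let mut hasher = Sha256::new();
    hasher.update(entry);
    hasher.update(b"ENCRYPT");
    let digest = hasher.finalize();

    // Sign the digest. With a threshold owner key, each member would sign the same
    // digest and the signature shares would be combined; a plain SecretKey stands in here.
    let sig = owner_sk.sign(digest);

    // Reduce the 96-byte BLS signature to 32 bytes for use as an AES (or XOR) key.
    let mut key = [0u8; 32];
    key.copy_from_slice(&Sha256::digest(sig.to_bytes()));
    key
}

The XOR or AES step from the earlier posts would then use this derived key in place of S.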

We could have an entry type that is an enum, where 48 bytes means encrypted and 32 bytes means plain? So if folk are making stuff public, we know it’s 32 bytes?
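
Something like this perhaps (the names are my own), so readers could tell public from encrypted entries by size alone:

// Hypothetical entry type: 32 bytes means plain/public, 48 bytes means
// AES-GCM output (32-byte value plus the 16-byte authentication tag).
enum RegisterEntry {
    Plain([u8; 32]),
    Encrypted([u8; 48]),
}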

[EDIT → Another, perhaps simpler way is this:

  • For each entry, take the previous entry and derive a new PublicKey
  • Use that PublicKey as above (XOR only)
  • For the first entry, derive a secret key from the Reg name or similar

Then if the pub_key is exposed, it’s only ever used to obfuscate that single entry, so it’s useless for anything else. The secret key of that derivation is still secure. That makes it crypto-safe AFAIK.

We need to consider two identical entries and whether that harms anything. I don’t think it does, mind you, as the derivation root should be the secret key of the register’s owner key.]
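
A rough sketch of that derivation chain, assuming blsttc’s derive_child (the function and variable names are mine):

use blsttc::SecretKey;

// First entry: derive from the register name; later entries: derive from the
// previous entry. Only the owner, who holds the register's secret key, can
// reproduce these derived keys.
fn entry_obfuscation_key(
    owner_sk: &SecretKey,
    reg_name: &[u8],
    prev_entry: Option<&[u8; 32]>,
) -> [u8; 32] {
    let index: &[u8] = match prev_entry {
        Some(prev) => prev,
        None => reg_name,
    };
    let derived = owner_sk.derive_child(index);

    // Use the derived public key's bytes (truncated to 32) as the XOR pad for this
    // one entry; exposing it tells an attacker nothing about other entries or the owner key.
    let pk_bytes = derived.public_key().to_bytes(); // 48-byte compressed point
    let mut pad = [0u8; 32];
    pad.copy_from_slice(&pk_bytes[..32]);
    pad
}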

6 Likes

downloaded here OK :slight_smile:

I don’t think I have used SuSE for >20 yrs now…

My bad, I did 136 / 777 instead of 136 / (136 + 777),

which gives 14.9% - really 15%, but with quantisation errors - and I guess it should average out to 15% over a great number of uploads.

4 Likes

@dirvine I can see one of @happybeing’s points (unless I’m mistaken): an attack vector against the network is that attackers can upload illegal files unencrypted, and as long as they are under 1 MB they are easily “read” by the node operators and anyone with the chunk address, or by the ABCs gaining physical access to nodes.

So then the opponents of the Safe Network can point to those chunks and “cry foul”, potentially causing those node operators real problems and/or bringing the network into disrepute by claiming that illegal files are stored forever, easily read by all. (Yeah, I know public encrypted files can be read, but it’s a bit harder to push the claims.)

I followed a lot of the conversation after that post of @happybeing’s but did not see whether a solution to this specific issue was found.

To me, a possible solution is for the data map to hold an extra key (to decrypt) for each chunk, where that key was used by the node to encrypt the chunk a second time on upload. The chunk at rest has a different hash because it was encrypted by the node when stored, but a tag store links the stored chunk to its hash as sent by the client.

The node sends the decrypt key back to the client for inclusion in the data map for that chunk. The node then destroys that key pair so that even it cannot decrypt the chunk. When replication occurs, the chunk and its hash tag are distributed together, so no required info is lost and the node receiving the replicated chunk knows the hash for that re-encrypted chunk.

Then, when a client retrieves the chunk, it uses the decrypt key on each retrieved chunk before reversing the self-encryption to finally decode the whole file.

Now, if the client uploads each chunk to three or five nodes, there will be up to three or five “decode” keys to be stored for each chunk, and the process to decode a chunk may require trying each of those keys. The chunk hash allows the client to know it succeeded.

To reduce client work when decoding, the last 4 bytes of each “decode key” can be stored with the hash tag file on the node and returned on chunk retrieval.

Yes, it’s extra work for nodes on storing, but no extra work on GETs other than accessing the tag store to find the chunk.

This works for both private and public files and should be an effective add-on, with some changes to existing code.
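
If I have the flow right, a rough sketch would look something like this; all the names (ReEncryptedChunk, TagStore, store_chunk) are hypothetical, the SHA-256 keystream is just a stand-in for a real cipher such as AES, and the rand and sha2 crates are assumed:

use std::collections::HashMap;
use sha2::{Digest, Sha256};

struct ReEncryptedChunk {
    stored_bytes: Vec<u8>, // ciphertext actually written to disk
    stored_hash: [u8; 32], // hash of the ciphertext (its new name at rest)
}

/// Maps the client-supplied chunk hash to the re-encrypted chunk, plus the
/// last 4 bytes of the decode key as a cheap disambiguator for the client.
type TagStore = HashMap<[u8; 32], (ReEncryptedChunk, [u8; 4])>;

// Placeholder keystream: SHA-256(key || block counter), XORed block by block.
fn keystream_xor(data: &[u8], key: &[u8; 32]) -> Vec<u8> {
    data.chunks(32)
        .enumerate()
        .flat_map(|(i, block)| {
            let mut h = Sha256::new();
            h.update(key);
            h.update((i as u64).to_le_bytes());
            let ks = h.finalize();
            block
                .iter()
                .zip(ks.iter())
                .map(|(b, k)| b ^ k)
                .collect::<Vec<u8>>()
        })
        .collect()
}

/// Node side: re-encrypt the incoming chunk with a throwaway key, record it in
/// the tag store, hand the key back to the client, then keep no copy of it.
fn store_chunk(tags: &mut TagStore, original_hash: [u8; 32], chunk: &[u8]) -> [u8; 32] {
    let key: [u8; 32] = rand::random();
    let stored_bytes = keystream_xor(chunk, &key);
    let mut stored_hash = [0u8; 32];
    stored_hash.copy_from_slice(&Sha256::digest(&stored_bytes));
    let mut key_tail = [0u8; 4];
    key_tail.copy_from_slice(&key[28..]);
    tags.insert(original_hash, (ReEncryptedChunk { stored_bytes, stored_hash }, key_tail));
    key // returned to the client for its data map; the node forgets it
}

/// Client side: undo the node's layer before the normal self-encryption decode.
fn decrypt_chunk(stored_bytes: &[u8], key: &[u8; 32]) -> Vec<u8> {
    keystream_xor(stored_bytes, key)
}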

Benefits

  • no chunk at rest is unencrypted, and this is enforced by the nodes
  • a client cannot cause unencrypted data to be stored
  • node operators are protected from potentially storing readable illegal files
  • the node is not required to keep keys, only to return the decode key to the client. The node cannot decrypt the chunk since it destroys the keys.
  • the hash tag store allows replication to occur normally, and the new node receives the chunk and its hash tag

possible cons

  • extra step/work for node when storing (offset by extra security for node operator?)
  • extra work for clients when reconstructing files
  • morally elite hackers lose one avenue of attack. Oh wait… thats a benefit for the network
  • dedup will be lost in some/most cases.
    • if another client uploads the same chunk then either the node will encrypt it too with new key pair and will need the last 4 bytes from client when retrieving the chunk to know which chunk to retrieve OR node will ignore the new chunk and it becomes unreadable in this new datamap. Obviouly the 1st option should be used to prevent data loss
9 Likes

Doesn’t this open up the uploader to prosecution under the law? If node operators can see it, then they can report it along with an IP address.

I’m not criticizing your post, just wanted to add that thought in there.

2 Likes

If they can be found, then yes.

But where on earth are they? No IP address is recorded, but the node is easily found long after the store.

Oh, I am sure there are things to criticise it for. One being that this is not for now, but for late beta or after launch.

6 Likes

The at-rest solution is that nodes have a temp key created at random. They encrypt all data stored to disk, and decrypt and deliver it on request. This means the on-disk data is secured from everyone, but when the node reboots that data is now useless, which is fine for our small nodes.
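
Something along these lines, perhaps, as a minimal sketch; NodeStore is a made-up name, the XOR keystream is a placeholder for a real cipher, and the rand crate is assumed for the random key:

use std::fs;
use std::path::PathBuf;

struct NodeStore {
    at_rest_key: [u8; 32], // random per-process key, held only in memory
    dir: PathBuf,
}

impl NodeStore {
    fn new(dir: PathBuf) -> Self {
        // A fresh key every start; after a reboot the old on-disk records are unreadable.
        Self { at_rest_key: rand::random(), dir }
    }

    fn write_record(&self, name: &str, data: &[u8]) -> std::io::Result<()> {
        let enc: Vec<u8> = data
            .iter()
            .zip(self.at_rest_key.iter().cycle())
            .map(|(b, k)| b ^ k)
            .collect();
        fs::write(self.dir.join(name), enc)
    }

    fn read_record(&self, name: &str) -> std::io::Result<Vec<u8>> {
        let enc = fs::read(self.dir.join(name))?;
        Ok(enc
            .iter()
            .zip(self.at_rest_key.iter().cycle())
            .map(|(b, k)| b ^ k)
            .collect())
    }
}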

6 Likes