Close group security

I will try to follow this a bit further here.

This is where I mention it’s like randomly creating nodes: you won’t know your close group in advance. So say you are trying to create a close group; you connect to wherever, so you are connected “somewhere”.

  • From this somewhere, you calculate your close group IDs with respect to a particular target ID.
  • Then you try to create IDs that will fit in that group once relocated.
  • You create such an ID.

So you have a single ID abc, and it gets hashed with the two closest nodes’ IDs, giving a result pqr. Now you take those two IDs that were hashed with abc to give pqr and repeatedly create an ID (within the closeness of the original ID with respect to the closest nodes) that will be close to pqr in the address space, and do so before any node changes etc. (so let’s ignore churn to make it easier).
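A minimal sketch of that relocation step (Python for illustration; SHA-512, the ID size, and the concatenation order are assumptions, since the thread describes the hashing loosely):

```python
import hashlib
import os

ID_BYTES = 64  # assuming a 512-bit address space

def relocated_id(original: bytes, closest: bytes, second_closest: bytes) -> bytes:
    """Hash the freshly created ID with the two closest node IDs;
    the result is the address the node is actually relocated to."""
    return hashlib.sha512(original + closest + second_closest).digest()

abc = os.urandom(ID_BYTES)       # the single ID "abc"
n1 = os.urandom(ID_BYTES)        # closest node to abc
n2 = os.urandom(ID_BYTES)        # second closest node
pqr = relocated_id(abc, n1, n2)  # the relocated address "pqr"
```

While n1 and n2 stay fixed (no churn), an attacker can evaluate this function offline as fast as keys can be generated.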

Is this what you are looking at? (It could be an attack; we can dig deeper. It is also easily fixed, though I am not sure it needs to be yet. There is an address accumulator not discussed yet, but ignore that too.)

I am sorry, I don’t understand your algorithm, because words are ambiguous, so I am not sure it is the same as @bcndanos’s.

I prefer his algorithm because he uses code which is clearer for me. Here is what I understand (but I may be wrong):

  • The attacker creates and connects 2 vaults with the standard two steps connection
  • The first one (called origin) is used for the initial connection of the subsequent vaults (3, 4, 5, …)
  • The second one (called destination) together with the relocated subsequent vaults will form the hostile group that is controlled by the attacker
  • The initial connection of the subsequent vaults is done with IDs that are known to produce relocated IDs close to the destination node. This is possible because the attacker knows the closest nodes of origin, which he also controls.
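As I read those steps, the offline search could be sketched like this (a sketch, not the real routing code: XOR distance as the closeness metric, SHA-512, and the hashing order are all assumptions):

```python
import hashlib
import os

def xor_distance(a: bytes, b: bytes) -> int:
    # Kademlia-style closeness metric, assumed here
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

def find_colliding_key(origin_close1: bytes, origin_close2: bytes,
                       destination: bytes, threshold: int):
    """Brute-force fresh keys until the relocated ID lands within
    `threshold` of `destination`. The attacker can run this offline
    because he controls origin and so knows its two closest node IDs."""
    while True:
        public_key = os.urandom(64)  # stands in for generating a real keypair
        relocated = hashlib.sha512(
            origin_close1 + origin_close2 + public_key).digest()
        if xor_distance(relocated, destination) < threshold:
            return public_key, relocated
```

The cost of the attack is then just the expected number of loop iterations, which depends on how tight `threshold` has to be.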

Please can you read again his algorithm with these remarks in mind?
Plausible attack, isn’t it?

This point is different from what I proposed.
Instead, I look for those public keys (IDOrigin) whose hash with the two closest nodes (those nearest at this moment) gives the closest ‘pqr’.

As @tfa said, I’m not sure if your explanation is the same as my first proposal for the attack.

I think we are saying the exact same thing here?

Use the two closest known addresses and keep hashing with the public_key you create until you get an address near pqr.

or

SHA512(address1 + address2 + public_key), just to be clear. Then repeat this until the result is closer to pqr than the furthest close node (not owned by you) in the group closest to pqr. If you are not saying this, then sorry, but I am definitely missing something.

A really (really) good way to do this is to write a mini program (I would suggest using the C++ address_space tool and altering it to inject this attack) and measure the attack: how many attempts are required, and how long they take (since the switch to Curve25519 we can create keys considerably faster, which must help offline attack model analysis).
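In that spirit, here is a toy measurement in Python rather than the C++ address_space tool (everything here is assumed for illustration: a 512-bit space, XOR distance, a population of 10,000 random addresses, and a close group of 32). It counts how many key generations it takes for a relocated ID to beat the furthest member of the target’s current close group:

```python
import hashlib
import os

NETWORK_SIZE = 10_000  # assumed population; a real study would vary this
GROUP_SIZE = 32        # assumed close-group size

def xor_distance(a: bytes, b: bytes) -> int:
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

def attempts_to_enter_group(addr1: bytes, addr2: bytes, target: bytes,
                            group: list, max_tries: int = 500_000) -> int:
    """Count key generations until SHA512(addr1 + addr2 + key) is closer
    to `target` than the furthest group member. Returns -1 on give-up."""
    furthest = max(xor_distance(member, target) for member in group)
    for attempt in range(1, max_tries + 1):
        key = os.urandom(64)  # stands in for a freshly created public key
        relocated = hashlib.sha512(addr1 + addr2 + key).digest()
        if xor_distance(relocated, target) < furthest:
            return attempt
    return -1

target = os.urandom(64)
population = sorted((os.urandom(64) for _ in range(NETWORK_SIZE)),
                    key=lambda a: xor_distance(a, target))
group = population[:GROUP_SIZE]  # target's current close group
attempts = attempts_to_enter_group(os.urandom(64), os.urandom(64),
                                   target, group)
print("attempts needed:", attempts)
```

With these toy numbers, the expected count is roughly NETWORK_SIZE / GROUP_SIZE (a few hundred attempts), which is exactly why faster key generation matters for the offline attack model.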

Ok David. You know my English is not very good.
I think that now we are saying the same thing. :laughing:


At least 1000 times better than my Spanish :smiley: It’s me who is the infant when it comes to multilingual stuff.

We are in a manic time right now and this is a very, very deep subject (the sentinel checks group messages and reconfirms them, all nodes must agree, and you cannot create many addresses the same: there is a 10 minute block after such an address is used, etc. N.B. this is an area of great consideration for security, so don’t mistake my appreciation here), so this will get very deep very fast and my head is in another part right now. I will try to detail that soon though. So I can help, but my answers may be very fast and not great English either :slight_smile:


@dirvine I’ve only been able to follow this marginally because of my lack of technical depth, but if I’m not mistaken there is another factor that makes this even more interesting:

If one were able to control enough malicious nodes and place them in such a way as to surround a target node, what profit would it be to the attacker? Safecoin ownership is transferred atomically, so even if ownership of a safecoin could somehow be falsely transferred within a close group, the verification of the larger group of groups would thwart it, since those groups would have to be surrounded as well, a task fantastically improbable, and then to what gain? And that’s just concerning safecoin. What other harm could be wreaked by surrounding one, or even several, nodes? As far as I can tell, only DoS, because vaults and clients will always control their own keys. Surrounding a node may force a wrong consensus, but it can’t change its control of its keys.

Additionally, are there not different relationships, and thus different consensus group formations, amongst the different personas, which starts to complicate this beyond the ability to predict?


Yes, this is all true, as is what you said prior to this. In the design, a single close group being taken over is considered able to cause some vandalism, but it cannot do much more than that; it will not be able to create keys. With Immutable Data it’s cool and not a problem (a group is not near enough), but with safecoin this goes a bit further, so it’s worth ensuring it cannot happen, or that it’s a reduced capability. The interesting thing is that at the beginning some attack vectors are large, but they decay exponentially with population.

So it is good to imagine how to take over a group; we continually do, and sometimes we do find an attack. In the case here I am not sure there is an attack, but it is well worth testing. The ability to create offline keys and then try to get them onto the network is the reason for the address relocation in routing. It currently uses only the 2 closest nodes’ addresses, though; it could use the quorum_number instead, which would reduce the attack vector considerably.

There are also a few accumulators to filter such activity, so it’s easy to ensure there can be no more than 1 node relocating from a group to a group in an X minute period (X is initially 20), which basically kills off any chance of this. There are a million more things, but this is why we write programs to simulate attacks when possible; the network reacts to such attacks in incredible ways, mostly silently dropping them.
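That kind of accumulator could look something like this hypothetical sketch (the class name, the keying by group pair, and the API are mine; only the "at most 1 relocation from a group to a group per X = 20 minutes" rule comes from the text above):

```python
import time
from typing import Dict, Optional, Tuple

class RelocationAccumulator:
    """Allow at most one relocation per (from_group, to_group) pair
    within a sliding window; later attempts inside the window are dropped."""

    def __init__(self, window_secs: float = 20 * 60):  # X = 20 minutes
        self.window_secs = window_secs
        self.last_seen: Dict[Tuple[str, str], float] = {}

    def allow(self, from_group: str, to_group: str,
              now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        key = (from_group, to_group)
        last = self.last_seen.get(key)
        if last is not None and now - last < self.window_secs:
            return False  # this group pair already relocated a node recently
        self.last_seen[key] = now
        return True
```

Rate-limiting per group pair means an attacker cannot pour many brute-forced IDs into one target group quickly, which is what makes the offline search above much less useful.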

So getting a program to simulate it often shows gaps where linear thinking jumped into play again. The changing personas in vaults also add a lot of security, as messages are checked by all surrounding personas and nodes.

It’s all good to investigate, but hard to focus when you’re sprinting at top speed :slight_smile: so it’s nice that folks look and poke around.


@bcndanos You may have something. I do believe it’s much more difficult than you imagine, but still possible, perhaps.

The NodeId (routing table struct) holds the relocated ID; in the branch I am in, this includes the IDs used to relocate it (those two IDs again; let’s call them relocating_id :-)). So when checking acceptability of a relocated ID, there is this check
(ps, this will not compile so don’t try :wink: )

    match routing_table.find(&relocating_id) {
        // reject if the relocating IDs are already known in our table
        Some(_) => Err(RoutingError::InvalidId),
        None => Ok(routing_table.add(relocated_id)),
    }

An initial routing_table.len() check is required for the zero state to operate properly (please don’t ask me to explain; it’s a massive post). In any case such a check would prohibit a single-group attack as you suggest, so let’s see if we can take it further; perhaps something similar works from a multi-group initial state? Worth looking at for sure. (I am rushing as usual: too many meetings, too many mails and private messages, plus a network to get all up and running nicely :slight_smile: ) So forgive my brevity.


[added note] probably most of what I say has already been said; I didn’t see the full discussion first

the name is the hash of the concatenation of these names (node’s name + closest + 2nd closest)

The (calculated, i.e. not the network-given) name for a node is now simply the hash of the public signing key. This has been simplified from the Client IDs (MAID, MPID), as a revocation key is no longer needed for vaults: vault IDs are not stored on the network anymore. They are kept solely in the routing tables of the existing nodes, so when you disconnect, your vault key is erased everywhere instantly too. There is no meaning in revoking a key that no longer exists. This change happened with the push for “non-persistent vaults” some months ago.
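A quick sketch of the two naming steps just described (SHA-512 and the key size are assumptions for illustration):

```python
import hashlib
import os

public_signing_key = os.urandom(32)  # stands in for a real signing key

# calculated name: simply the hash of the public signing key
own_name = hashlib.sha512(public_signing_key).digest()

# network-given name: the hash of the concatenation
# (node's name + closest + 2nd closest)
closest = os.urandom(64)
second_closest = os.urandom(64)
network_name = hashlib.sha512(own_name + closest + second_closest).digest()
```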
