SAFE and anonymity - 81% of Tor users easily de-anonymised

So can we quantify SAFE’s anonymity?

Apparently even a poorly resourced attack can de-anonymise 81% of Tor users:


The core of SAFE is that we provide anonymity and security for everyone, but importantly as simply as possible, in the code, but also in the very core algorithms and types (very hard). To me the balance between over-engineering and launch is critical. Hard, invisible to most, but very important. The balance is tough, but part of the journey we need to take. In the office we are very aware of this, and thus there is not so much communication about it, but it is vital.


From my brief read of the article and paper: the “81%” attack is actually a confirmation attack, which, from what I’ve been told, is not in Tor’s threat model. That said, Tor is pretty sucky against these types of attacks – but more secure (though less practical/efficient) mixing systems exist.

On that note, is there a place I could read more about SAFE network anonymity?


This, and a related post today about Cameron going after darknet paedophiles, also raises the spectre of attacks on individuals working on SAFE, including psychological ones.

Cf. Aaron Swartz, and more recently I think the targeting of the Tor team (though I’ve not read the details yet; there was some comment on Twitter today about the legality of GCHQ doing the latter).

The team need to look out for each other; anyone going through tough times, doubts, etc. needs support, whether this is being orchestrated or not.


This is not easy: because it is the core of the network, the anonymity and security are baked into nearly every part (encryption, routing, vaults, NFS, etc.). In a way it is like asking which part of AES you can look at to see the security. So really the only way is to go through the whole system docs. Many people, including us, have tried to create FAQs, listings, videos, papers, etc.; there is a ton of info, and that is the hard part, it all needs to be considered. Here is a very brief overview of some parts:

  • rUDP encrypts every message hop-to-hop
  • routing scrubs IP addresses after hop 1
  • data put/get is done via an identifier that is not linked to a person
    or public name
  • no server login, so no central point of knowledge or attack
  • passwords are not stored or transmitted on the network
  • messaging is encrypted, and the identifier of the sender/receiver is not what anyone logs in as (it’s retrieved inside an encrypted packet)
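The “passwords are not stored or transmitted” point can be sketched as self-authentication: the client turns its credentials into a network location and a local encryption key entirely on its own machine, and only the unlinkable location ever touches the wire. A minimal sketch of the idea, where the KDF choice, parameters, and names are my own illustrative assumptions and not SAFE’s actual scheme:

```python
import hashlib

def locate_and_key(secret: str, password: str) -> tuple[str, bytes]:
    """Derive (network_id, local_key) purely on the client.

    Only network_id is ever sent to the network; the password and
    local_key never leave the machine. Illustrative sketch only.
    """
    # Network-facing identifier: a hash, unlinkable to the person behind it.
    network_id = hashlib.sha512(secret.encode()).hexdigest()
    # Key used locally to encrypt the account packet stored at network_id.
    local_key = hashlib.pbkdf2_hmac("sha512", password.encode(),
                                    secret.encode(), 100_000, dklen=32)
    return network_id, local_key

nid, key = locate_and_key("my secret", "my password")
# nid reveals nothing about the password; the network only ever sees nid
# and an encrypted blob stored there.
```

Anyone holding the network can see where the encrypted account packet lives, but without the password there is nothing to brute-force server-side, because there is no server holding a password hash at all.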

There is a load more, and some of these, folk will argue, are not anonymity but security; I argue it’s very much the same thing. Privacy, Security & Freedom: the first two are mutually inclusive and required together to provide the freedom we desire.


Tor is really getting hammered nowadays… Why doesn’t the Tor team join Maidsafe?


@dirvine What about people creating a false hop? They give you the idea that you’re connected to the 4 closest nodes, but all they actually did was create false “virtual” nodes that only live on their system. I guess you’ve probably thought of things like that as well.

This is handled by the PKI-type interface. You will encrypt messages to the close nodes with the key the network provides to you. There is a load more stuff like this going on, but creating false nodes is a Sybil-type attack. The address_space_tool in common can calculate the numbers required to create a false group: it’s almost 3 times the size of the network, and that assumes nearly all the other defences are not included in the code. It’s all due to the non-Euclidean distances and the network holding the keys, which all validate in a chain. It makes the Sybil attack really, really hard, fortunately.
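The “non-Euclidean distances” here refer to XOR-metric addressing, as used in Kademlia-style DHTs: closeness to a target address is the XOR of the two IDs taken as an integer, and only the nodes whose (hashed, not freely chosen) IDs land nearest the target form its close group. A toy sketch of that metric, not MaidSafe’s actual routing code:

```python
import hashlib

def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style XOR distance: a metric, but not Euclidean, so an
    attacker can't 'move toward' a target by small adjustments."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def closest_group(target: bytes, node_ids: list[bytes], k: int = 4) -> list[bytes]:
    """The k nodes closest to target in XOR space form the close group."""
    return sorted(node_ids, key=lambda n: xor_distance(target, n))[:k]

# An attacker must land forged IDs inside the close group of a chosen
# target, but IDs are hashes they cannot pick freely, so the expected
# number of nodes they must create grows with the network size.
nodes = [hashlib.sha256(str(i).encode()).digest() for i in range(100)]
target = hashlib.sha256(b"some-data-name").digest()
group = closest_group(target, nodes)
assert len(group) == 4
```

With keys issued and chained by the network itself, even a node that does land near a target still has to present credentials the rest of the group can validate.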


I wanted to point out that this isn’t exactly what the article and paper stated. Both stated that this new research suggests it can de-anonymize a user with 81.4% accuracy “in the wild”. This is not the same as being capable of identifying 81.4% of Tor users. The attack relies on a user going to a website controlled by an adversary, or one in which the TCP stream between the exit node and the server can be manipulated, and then a large chunk of data must be transmitted continuously between the two points. If a user manages to avoid both (a bad server, a manipulatable TCP stream from exit node to server), then this attack wouldn’t work. Tor hidden services should be safe (from this attack) because data never leaves the Tor network and is encrypted end-to-end.
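At its core a confirmation attack like this correlates the traffic volume seen at two observation points: if per-interval byte counts on the entry side track those on the exit side, the two flows are probably the same. A toy sketch of the underlying statistic, assuming simple per-second byte counts (the paper’s actual classifier is more sophisticated; the numbers below are made up for illustration):

```python
def correlate(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation of per-interval byte counts observed at two
    vantage points; values near 1.0 suggest the same flow.
    Assumes equal-length, non-constant series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

entry = [120, 5, 900, 30, 640, 15]       # bytes/sec seen entering the network
exit_ = [118, 7, 905, 28, 632, 14]       # bytes/sec seen leaving toward the server
unrelated = [400, 380, 2, 410, 3, 395]   # some other user's flow
assert correlate(entry, exit_) > 0.99    # same flow: near-perfect match
assert correlate(entry, unrelated) < 0.5 # different flow: weak match
```

This is also why the attack needs a large, continuous transfer: short or bursty flows don’t give the correlator enough signal to separate the true match from coincidences.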

@dirvine never pointed this out directly, but this attack wouldn’t be viable against the SAFE network because of the lack of servers. Additionally, the DoS attacks which may have been the FBI’s method for identifying Tor hidden services (basically this attack in reverse, with much more noise) wouldn’t make much sense, because there isn’t a specific server holding the document being retrieved. The SAFE network does have other concerns in linking an IP to a document request, as David already discussed.

I hope to have time to go through the available literature on Tor de-anonymization techniques. The closest thing SAFE has to a “server” is direct communication between two nodes, and I’m wondering if traffic analysis could reveal the two IPs communicating with each other. Since neither end should be malleable (end-to-end encryption), an attack similar to the one in this paper should be difficult. However, if one of the endpoints were “owned”, it might be possible to reveal the other IP with a similar technique. Additionally, I imagine other side-channel attacks could still be possible, but they should be easy to defeat by padding packets to equivalent sizes (or perhaps even changing the size randomly). I will have to dig up information on timing attacks for direct communication too.
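Padding packets to equivalent sizes can be as simple as rounding every payload up to one of a few fixed bucket sizes, so an observer on the wire sees only a handful of distinct lengths instead of the true message size. A minimal sketch; the bucket sizes and length-prefix framing are my own assumptions for illustration:

```python
BUCKETS = (256, 1024, 4096)  # illustrative fixed wire sizes, smallest first

def pad_to_bucket(payload: bytes) -> bytes:
    """Length-prefix the payload and zero-pad up to the next bucket size,
    so observed packet lengths collapse to a few fixed values."""
    framed_len = len(payload) + 4              # 4-byte length prefix
    size = next(b for b in BUCKETS if b >= framed_len)  # raises if oversized
    return len(payload).to_bytes(4, "big") + payload + b"\x00" * (size - framed_len)

def unpad(packet: bytes) -> bytes:
    """Recover the original payload from a padded packet."""
    n = int.from_bytes(packet[:4], "big")
    return packet[4:4 + n]

wire = pad_to_bucket(b"hello")
assert len(wire) == 256        # a tiny message still occupies a full bucket
assert unpad(wire) == b"hello"
```

Padding hides sizes but not timing, so it complements rather than replaces defences against the timing correlation discussed above.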


As in a “honeypot node”, which was discussed in a topic on security where China was mentioned.
I won’t search for it now, but I think they’d need to set up many of those, and they still wouldn’t know what MaidSafe is being used for. They’d also have to seed some counterrevolutionary content and watch for time correlation between disk activity (on the honeypot side) and network activity to be able to merely guess who does what. That doesn’t seem very effective…
