Tor security advisory: "relay early" traffic confirmation attack
This advisory was posted on the tor-announce mailing list.
SUMMARY:
On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.
The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.
Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.
Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.
THE TECHNICAL DETAILS:
We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.
A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://ocewjwkdco.tudasnich.de/blog/one-cell-enough
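To illustrate the idea, here is a toy sketch (made-up numbers, not anything Tor or the attackers actually run): two observers bin the cell traffic they see into fixed time windows and check whether the two volume series line up. The simple Pearson correlation here stands in for the more robust statistics a real correlation attack would use.

```python
# Toy illustration of passive traffic confirmation (made-up numbers, not a
# real tool): an observer at the entry guard and an observer at the far end
# each count the cells they see per 100 ms window; if the two volume series
# line up, the two observation points are probably on the same circuit.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

entry_counts = [3, 0, 7, 2, 0, 9, 1, 4]   # cells per window at the guard
far_counts   = [3, 1, 6, 2, 0, 9, 1, 5]   # nearly the same pattern downstream
unrelated    = [5, 5, 0, 8, 2, 1, 7, 0]   # some other circuit

print(pearson(entry_counts, far_counts))  # close to 1.0: likely same circuit
print(pearson(entry_counts, unrelated))   # much lower
```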
The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.
The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.
So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
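To make the encoding concrete, here is a toy sketch of the signalling idea (the framing and bit layout are invented for illustration; this advisory does not publish the attackers' exact scheme): treat each cell as one bit, using the cell type to carry it.

```python
# Toy sketch of the signalling idea (invented framing; not Tor source code and
# not necessarily the attackers' exact scheme): encode a hidden service name
# as bits and send one cell per bit, RELAY_EARLY for 1 and plain RELAY for 0.
# Because clients should never receive RELAY_EARLY cells at all, the presence
# of any inbound RELAY_EARLY cell is itself the telltale the fix now logs.

RELAY, RELAY_EARLY = "relay", "relay_early"

def encode_onion_name(onion_name):
    bits = "".join(f"{byte:08b}" for byte in onion_name.encode("ascii"))
    return [RELAY_EARLY if bit == "1" else RELAY for bit in bits]

def decode_cell_types(cells):
    bits = "".join("1" if cell == RELAY_EARLY else "0" for cell in cells)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

cells = encode_onion_name("duskgytldkxiuqc6")
print(len(cells))                  # 128 cells for a 16-character name
print(decode_cell_types(cells))    # -> "duskgytldkxiuqc6"
```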
There are three important points about this attack:
A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.
(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)
B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.
C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://ocewjwkdco.tudasnich.de/blog/one-cell-enough for more discussion.
Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
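As a rough back-of-the-envelope illustration of why 6.4% of guard capacity matters (assuming, unrealistically, that each guard slot is filled independently in proportion to guard capacity; the real selection weights and rotation timing differ):

```python
# Back-of-the-envelope only: assume (unrealistically) that each guard slot is
# filled independently, in proportion to guard capacity, and that clients keep
# three guards (as they did at the time). Treat the results as rough orders of
# magnitude, not as the output of Tor's real guard selection algorithm.

malicious_fraction = 0.064   # ~6.4% of guard capacity, from above
guards_per_client = 3        # clients used three entry guards at the time
rotation_periods = 3         # guessed number of guard rotations over ~5 months

p_one_set = 1 - (1 - malicious_fraction) ** guards_per_client
p_over_time = 1 - (1 - malicious_fraction) ** (guards_per_client * rotation_periods)

print(f"chance of >=1 bad guard in one guard set:   {p_one_set:.0%}")    # ~18%
print(f"rough chance over several rotation periods: {p_over_time:.0%}")  # ~45%
```

Under those over-simplified assumptions, roughly one client in five would pick up at least one attacker-controlled guard per guard set, and the exposure compounds as guards rotate — which is part of the motivation for moving clients to a single, longer-lived guard.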
We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)
In response, we've taken the following short-term steps:
1) Removed the attacking relays from the network.
2) Put out a software update for relays to prevent "relay early" cells from being used this way.
3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.
4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
The following longer-term research areas remain:
5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.
6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://ocewjwkdco.tudasnich.de/blog/how-report-bad-relays
7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://ocewjwkdco.tudasnich.de/blog/lifecycle-of-a-new-relay
https://ocewjwkdco.tudasnich.de/blog/improving-tors-anonymity-changing-guar…
8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.
9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://ocewjwkdco.tudasnich.de/blog/hidden-services-need-some-love
OPEN QUESTIONS:
Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?
Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.
Comments
Please note that the comment area below has been archived.
A very irresponsible way to
A very irresponsible way to carry out research. Shame on them.
Yeah? and who is "them"?
Yeah? and who is "them"?
Research? That's just a guess. Tor guys don't know who and they don't know why.
"Together these relays summed to about 6.4% of the Guard capacity in the network. "
Does that sound like something Joe Blow could afford? The presentation that never was talked about doing this for $3k, but they didn't even have that. I don't think you run all these boxes for 6 months for that little anyway. I don't think 'researchers', i.e. guys in their basement, throw thousands of dollars at something just so they can write a PDF.
Those network ranges
Those network ranges coincide with fdcservers. Looking at the prices, yes, they could get 116 servers on fast connections for $3k. Whois says they're out of Chicago, which sounds like something a researcher might use (centralized, not hiding their tracks, etc.)
$30/mo * 115 VPSs * 5 months
$30/mo * 115 VPSs * 5 months = $17k. Totally within the budget of some research group who decided that was a good use of their money.
The larger the Tor network gets (in terms of capacity), the more expensive it is to sign up a given fraction of it. Alas, bandwidth prices are very different depending on where the relay is, so getting good diversity is more expensive. I'm glad Hart Voor Internetvrijheid and other groups are working hard at the location diversity goal even though it's more expensive:
https://www.torservers.net/partners.html
See "fix 4" on
https://ocewjwkdco.tudasnich.de/blog/improving-tors-anonymity-changing-guar…
for more discussions on the topic of growing the total network capacity as a defense against these sorts of attacks.
One offer from this ISP
One offer from this ISP is
VPS Special 3
1. 50Mbps unmetered
2. 1GB RAM
3. 150GB HDD
4. 5 IP Addresses
5. 2CPU Core
$31.90
Note that's 5 IPs for ~$30.
115 / 5 = 23
23 * $30 ~ $600
5 months ~ $3000
The number from the canceled BH talk.
50 megabits divided by 5
50 megabits divided by 5 isn't enough per relay to handle much capacity.
Also, 1 gigabyte of ram divided by 5 relays means you'll run out of memory right quick if you're trying to push a lot of bytes.
I assume that the $3k number was for a month, and they were planning to say something like "in the first month we ran 6% of the network and became the entry guard for 6% of the users, look it works".
In any case, we can speculate about how to make the numbers add up, or if they ever even did, but it's pretty much moot now -- and whether it's $3k or $27k doesn't really matter.
We know exactly who "them"
We know exactly who "them" is. Their names are Alexander Volynkin and Michael McCord (https://img.4plebs.org/boards/pol/image/1404/73/1404736805983.png) and they are researchers/students affiliated with Carnegie-Mellon University (http://www.reuters.com/article/2014/07/21/cybercrime-conference-talk-id… and http://www.theregister.co.uk/2014/07/22/legal_wrecking_balls_break_budg…).
Now, could one or more of our US-based colleagues please kindly FOIA the hell out of CMU on behalf of the community, please? Thanks!
Not viable under US law.
Not viable under US law. CMU, being non-governmental, is not subject to FOIA. And any request directed to a government agency would be stiff-armed on a secrecy basis.
CERT is Federally Funded…
CERT is a Federally Funded Research and Development Center. Because they take federal funds, wouldn't they be subject to FOIA?
This is not "Joe Blow". CERT
This is not "Joe Blow". CERT is one of the most well-funded computer security research organizations in the country. 30K, let alone 3K, is easily within the budget of the powers that be, if they feel it's worth spending that much. It's also easily within the budget of external funding providers (I'm sure your conspiracy theory-oriented brain could come up with some plausible ones).
These are not "guys in their basement". They are researchers in their well-funded computer lab.
And for those who do not
And for those who do not follow general infosec issues closely, CMU/(US)CERT has a very close collaboration and funding association with US Homeland Security. Which means that it is nearly certain that all of the results of this research attack have been passed on to NSA.
Thanks!
Thanks!
Thank you for the
Thank you for the comprehensive write-up.
Is there anything that users can do to check whether they have been affected (e.g. whether they have been using a bad guard)? For example, by examining the Data/Tor/state file in their TBB directory?
Good thinking. Yes, this
Good thinking. Yes, this should work. It won't tell you if you used them in the past (and then discarded them), but it will tell you if they're in your recent set.
Grab a copy of e.g. https://collector.torproject.org/archive/relay-descriptors/server-descr… and then pull out the relays with nickname Unnamed running Tor 0.2.4.18-rc in the two /16 netblocks I described.
Once you've done that, maybe put the set of fingerprints on a paste bin or something so other people can use them too?
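For the script-inclined, here is a rough sketch of that procedure (field handling is simplified and the exact descriptor and state-file formats vary between Tor versions, so treat it as a starting point rather than an exact tool; the file paths at the bottom are placeholders):

```python
# Rough sketch: find descriptors matching the profile described above
# (nickname "Unnamed", platform Tor 0.2.4.18-rc, address in 50.7.0.0/16 or
# 204.45.0.0/16) in an extracted CollecTor server-descriptor archive, then
# check whether any of those fingerprints appear in a Tor state file.
import re

BAD_NET_PREFIXES = ("50.7.", "204.45.")

def suspicious_fingerprints(archive_path):
    text = open(archive_path, encoding="utf-8", errors="replace").read()
    found = set()
    # each server descriptor starts with a "router <nickname> <address> ..." line
    # (zero-width split needs Python 3.7+)
    for desc in re.split(r"(?m)^(?=router )", text):
        m = re.match(r"router (\S+) (\S+)", desc)
        if not m:
            continue
        nickname, address = m.groups()
        if nickname != "Unnamed" or not address.startswith(BAD_NET_PREFIXES):
            continue
        if "Tor 0.2.4.18-rc" not in desc:
            continue
        fp = re.search(r"(?m)^fingerprint ([0-9A-F ]+)$", desc)
        if fp:
            found.add(fp.group(1).replace(" ", ""))
    return found

def guards_in_state_file(state_path, fingerprints):
    # crude substring check; state-file syntax differs across Tor versions
    state = open(state_path, errors="replace").read().replace(" ", "")
    return sorted(fp for fp in fingerprints if fp in state)

bad = suspicious_fingerprints("server-descriptors-2014-06")  # placeholder path
print(guards_in_state_file("Data/Tor/state", bad))           # placeholder path
```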
> maybe put the set of
> maybe put the set of fingerprints on a paste bin or something so other people can use them too?
fwiw, these seem to be the fingerprints in question: http://ravinesmp.com/volatile/tor_relay_early_nodes.csv (CSV file generated from http://paste.debian.net/112652/ )
Note that there are 116 (not 115) nodes in this list. (Obviously don't trust this information, etc.; ideally one would reproduce it from collector.torproject.org or elsewhere.)
Here's the full message to
Here's the full message to check for in the Tor log:
"Received an inbound RELAY_EARLY cell on circuit %u."
" Closing circuit. Please report this event,"
" along with the following message.",
followed by a list of the relays in your circuit.
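If you want to scan an existing log file for that message, a few lines of Python are enough (a convenience sketch, not an official tool; point it at wherever your Tor log actually lives):

```python
# Convenience sketch: print any lines in a Tor log that contain the warning
# quoted above. The log path is supplied on the command line.
import sys

def relay_early_warnings(log_path):
    with open(log_path, errors="replace") as log:
        return [line.rstrip() for line in log
                if "Received an inbound RELAY_EARLY cell" in line]

if __name__ == "__main__":
    for warning in relay_early_warnings(sys.argv[1]):
        print(warning)
```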
Where can one find the
Where can one find the identity key fingerprints for the removed 50.7.0.0/16 and 204.45.0.0/16 relays?
I'd like to scan backups of my /var/lib/tor/data/state file and see if I had one of those relays for a guard. (Although the real question of course is, was one of them my last hop...)
Can Tor users check if
Can Tor users check if they've been using one of the guards in the ranges that were removed from the network, or would those guard entries have been immediately removed from the client's state file upon learning that they'd been declared invalid?
(Of course, knowing that one *wasn't* using one of these guards would *not* mean you weren't affected, but it would still be interesting to know.)
I wonder how many people
I wonder how many people have obtained the attacker's data! Locations of a lot of hidden services and their users would be quite interesting to many people - operators of hidden services are quite diverse and even include hardened criminals like the GCHQ's JTRIG hacking/trolling department: https://firstlook.org/theintercept/2014/07/14/manipulating-online-polls… (their catalog of capabilities include several that use, rather than attack, Tor hidden services).
If the attacker is the CMU researchers and law enforcement seizes their data to selectively prosecute certain hidden services, perhaps that data could also be used to investigate and litigate against JTRIG? Sadly though, we probably would not hear about it if such seizure happens since everything would be parallel constructed for the public case... unless the researchers decide to tell us (which would probably be violating an NSL or something).
"If the attacker is the CMU
"If the attacker is the CMU researchers and law enforcement seizes their data to selectively prosecute certain hidden services" - seems like that would be fruit of the poisonous tree, but I am not a lawyer
Sorry to ruin your day, but
Sorry to ruin your day, but Law & Order is fiction.
https://www.muckrock.com/news/archives/2014/feb/03/dea-parallel-constru…
https://en.wikipedia.org/wiki/Parallel_construction
Quite often, even the prosecutors don't know that the actual investigatory tools used in their cases are not being disclosed.
Don't be silly! They don't
Don't be silly! They don't need the researchers for that.
The 'NSA' was using MIT (and others) to seed TOR for the SOD. Notice how the FOIA leaves the SOD out; they often masquerade behind the DEA title. Despite early news reports (circa 2009), it wasn't the DEA that busted Viktor Bout, it was the SOD (I think it was a Time article in 2011).
Muckrock also got the Hemisphere FOIA from the LAPD or TacPD; combine that with the telecom immunity act of 9/2007.
Should serve as an example
Should serve as an example for other researchers of how not to go about things. Thanks for the good work patching the vulnerability and writing it up - if I knew you IRL I would buy you a beer.
Donate to the project then :)
Donate to the project then :)
Is there a guide somewhere
Is there a guide somewhere on how to do research like this on Tor?
> Is there a guide somewhere
> Is there a guide somewhere on how to do research like this on Tor?
Start with RFC 1 and keep on going to the end. That'll get the basics started.
We have panels on the topic
We have panels on the topic periodically at the PETS conference, e.g.
https://www.petsymposium.org/2011/program.php
But nobody has sat down to write a full guide. That would be very useful! Somebody want to start one? :)
The network IP blocks
The network IP blocks 50.7.0.0/16 and 204.45.0.0/16 are assigned to a U.S. provider.
Further on I relate to the hidden service explanation at
https://sedvblmbog.tudasnich.de/docs/hidden-services.html.en
If U.S. IP blocks are excluded in the torrc via 'ExcludeNodes {US}'
can any point in the connection from client to hidden service (rendezvous point, introduction point) nonetheless be on U.S. IP blocks?
Does the rendezvous point know that it connects the anonymous client to a specific hidden service? Does it know the server's .onion address?
Does 'DB' in the graphics on the explanation page stand for a Hidden Service Directory server? Why is 'DB' not drawn within the Tor cloud?
I ran WHOIS on a couple of
I ran WHOIS on a couple of addresses from both blocks, and they came back as https://fdcservers.net in different US and European locations. Which makes sense, because they have a lot of datacenters: https://fdcservers.net/network.php
ExcludeNodes cannot be
ExcludeNodes cannot be applied to HSDir selection because clients need to be able to construct the same list of HSDirs as the publisher (service) so that they can find the place where the descriptor is published.
If you had ExcludeNodes {US} and the two blocks listed are indeed identified as US by tor's geoip data source (I haven't checked about that), then at least you won't have one of them as a guard. But any other guard could also potentially be passively decoding the signals sent by the malicious HSDirs.
It is, for me, not really
It is not really clear to me what data the attacker actually got from users.
Suppose someone used a clean install of tails and visited a hidden service site. What do "they" know about the user?
You wrote: "but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up)"
Does this mean they have the IP address of the user but not the page he actually visited?
I'm pretty curious right now.
Thanks for the answer!
The attacks observed were
The attacks observed were coming from HSDirs, which know the address of the hidden service they're serving a descriptor for. The message transmitted was the hidden service address. This message can be decoded by the guard, which knows the IP of the client (which is accessing or publishing the descriptor).
So, when the attacker is a hidden service's HSDir (which will probably happen eventually, as the position in the DHT rotates at some interval - it would be good to know how long it takes to cycle through 50% of the HSDirs) the guards for the hidden service can deanonymize it - meaning, they can link its IP address with its onion address. Clients using a malicious guard can also be deanonymized (their IP can be identified as one which accessed the service).
It is entirely possible that other guards which are not in the set of nodes mentioned above (and/or not controlled by the attacker running the nodes caught doing the active part of the attack) are or were also decoding these messages.
The same attack could also deanonymize non-hidden-service traffic if these messages were sent from exit nodes. There have not (yet) been exit nodes observed sending relay_early cells backwards.
Thank you for
Thank you for explanation,
you wrote: "their (the clients) IP can be identified as one which accessed the service".
Do you mean they know what specific hidden site the client has visited (worst case one can imagine!!), or do they only know that the client accessed the hidden service generally?
Just asking because they said in the article: "but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up)"
Thanks again!
The attacker, if his relays
The attacker, if his relays are in the right place, could learn that you (that is, your IP address) did a lookup for the hidden service address (e.g. duskgytldkxiuqc6.onion). But he won't learn whether you actually loaded the page in your browser. He also won't learn whether you visited http://duskgytldkxiuqc6.onion/comsense.html or http://duskgytldkxiuqc6.onion/fedpapers/federa00.htm or what.
Hope that helps.
That is he wont know
That is, he won't know directly. But since he is also your entry guard, he can watch the traffic over the circuit and get a good idea of how much traffic is passing, which, combined with knowing the site, could very well let him figure out some of those details.
Even if he didn't, it would likely be enough information over time to separate casual observers who find a site and check it out from serious users who may be more interesting targets.
Unless this is pure research, I would assume this is not the end game but simply a way of trolling for targets.
Good point. The result of
Good point. The result of the attack in this advisory is that he knows which hidden service you looked up or published about. It's totally possible that he would then go on to do some other attack, like the website fingerprinting one you hint about, or just general "gosh she's using Tor a lot" observations.
Speaking of website fingerprinting, first read
https://ocewjwkdco.tudasnich.de/blog/critique-website-traffic-fingerprintin…
and then read "I Know Why You Went to the Clinic" from
https://www.petsymposium.org/2014/program.php
and finally, I hear there will be another website fingerprinting research paper at CCS this year (showing that false positive rates on realistic data are indeed higher than originally suspected).
@arma: I think I see a
@arma: I think I see a technical limitation to this attack, can you confirm or deny: if you are running a relay as well as a client, will you be protected at all?
"The attacker, if his relays
"The attacker, if his relays are in the right place, could learn that you (that is, your IP address) did a lookup for the hidden service address (e.g. duskgytldkxiuqc6.onion)."
It's funny to think that some FBI IP could appear among the ones they de-anonymize.
Duh, the FBI regularly infiltrates cp and drug hidden services. (Think Silk Road; the FBI had several accounts on it from the beginning.)
I know, it doesn't change anything but it makes me smile.
Anonymity cannot be
Anonymity cannot be free.
Set up an exit relay on your dedicated server and use StrictNodes. This will make you invulnerable to this attack.
At the cost of making you
At the cost of making you obviously and trivially fingerprintable. Horrible idea.
No, because hidden service
No, because hidden service traffic does not go through exit nodes.
Correct, running your own
Correct, running your own exit relay won't really impact this attack.
(Also, the StrictNodes torrc directive does not apply to ExitNodes. It only applies to ExcludeNodes.)
Thank you for an informative
Thank you for an informative post and for releasing a timely fix!
I have a quick release coordination question. Why wasn't the version of tor in TBB also bumped up, especially given how recently TBB 3.6.3 was released? Doesn't the current release cycle gap between TBB and tor potentially increase the likelihood that .22 (mostly TBB client) users will be distinguished from .23 (mostly relay) users?
I know it's not necessarily preferred/ideal practice, but some people run relays from TBB instances. I certainly have in the past...so if you agree with the sentiment, it might even be a good idea to append a notice to the most recent TBB blog post discouraging people from configuring TBB's tor as a relay until it gets bumped up to .23.
Perhaps I'm making too big a deal of this, and TBB 3.6.4 is already on its way...
Yeah, the coordination
Yeah, the coordination didn't go as smoothly as it could have.
The new Firefox in TBB 3.6.3 was urgent to get out, since it includes the usual raft of fixes for Firefox vulnerabilities:
https://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html#…
Whereas the new Tor release isn't urgent for clients, since it only 1) adds a log entry (and the interface letting TBB users read their log lines sure isn't easy to use) and 2) prepares them to move from 3 guards to 1, but only once we set the consensus parameter to instruct them to switch (which I plan to do after the next TBB has been out for a while).
Hopefully it won't be too long until there's a TBB with the next Tor stable in it. But the TBB team have been working hard on a TBB 4.0 alpha, which will include the Tor 0.2.5.x tree, and I sure want to see that too. So much to do, and not enough people working on it all!
I would love to help on this
I would love to help on this sort of front as a volunteer, but this type of issue--as minor as I hope it will turn out to be--seems pretty staff-driven in terms of progress. So in terms of inviting volunteers to help, it unfortunately seems like one of the few areas where volunteers working mostly in *other* areas would be the precondition to open up staff bandwidth to address this type of issue.
they signed up around 115
they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network.
Did they sign up these relays all at once, or did the number grow gradually? If all at once, wouldn't that be an alert signal?
They signed up in two main
They signed up in two main clumps. And yes, it was (could have been) an alert signal -- see the paragraph in the advisory about the DocTor scanner.
You mentioned in your
You mentioned in your write-up that while their signing up all those relays triggered warnings in DocTor, they were left in the consensus since it was felt that they weren't too significant a portion of the network.
Hypothetically, what would be enough to make the authority operators say "Hey, these guys are bad news, let's get them out of the consensus ASAP"? Maybe this is a dumb question, and it depends on specific circumstances I don't know enough about.
Thanks for your tireless efforts as always!
Yeah. That's still not
Yeah. That's still not entirely resolved. I hope the answer is "it will take a lot less the next time we see it!" :)
But really, I worry about more subtle attacks, where the adversary signs up a few relays at a time over the course of months. Also, there are events (like the blocking in Egypt) where many people decide to run relays at once. If the adversary signs up a pile of relays on that day, will we say "oh good, people care about security and safety"?
The attack was possible to notice this time because the relays all came from the same netblocks at around the same time and running the same (kind of old by now) Tor version. Detecting this sort of attack in the general case seems like a really hard research problem.
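For what it's worth, the crude version of that kind of check is only a few lines (a toy heuristic, not what DocTor actually runs; and as noted above, a patient adversary who trickles relays in can stay under any fixed threshold):

```python
# Toy heuristic, not DocTor's real checks: flag any /16 netblock that gains an
# unusually large number of new relays within the window being examined.
from collections import defaultdict

def flag_suspicious_netblocks(new_relays, threshold=20):
    """new_relays: iterable of (ip_address, first_seen) for relays that first
    appeared during the window; returns {netblock: [first_seen, ...]}."""
    by_netblock = defaultdict(list)
    for ip, first_seen in new_relays:
        netblock = ".".join(ip.split(".")[:2]) + ".0.0/16"
        by_netblock[netblock].append(first_seen)
    return {nb: seen for nb, seen in by_netblock.items() if len(seen) >= threshold}

# e.g. the January 2014 clump of attacker relays would have tripped this:
# flag_suspicious_netblocks([("50.7.1.2", "2014-01-30"), ...], threshold=20)
```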
Any machine learning people
Any machine learning people in the building?
Yes, but the field of
Yes, but the field of adversarial machine learning is very young compared to typical machine learning. Or said another way, most machine learning algorithms fall very quickly to an adversary that knows the algorithm and tries to manipulate it.
See the papers on 'adversarial stylometry' for some fun examples here.
When you're doing any kind
When you're doing any kind of AI against an adversary, independence assumptions turn into vulnerabilities that he can exploit. Just look at search engines. They have to devote considerable resources to thwarting the "SEO" guys who try to cheat their ranking algorithms.
Interesting - that'd be
Interesting - that'd be quite nice, you could encode the hidden service address as zeros and ones and send it back down the circuit (e.g. EARLY = 0 RELAY=1). As EARLYs would never normally be sent backward, it's also trivial to spot a client that has been caught.
But as Roger says, traffic confirmation attacks are still very easy even with this patched - it's just that this attack made it marginally easier.
"it's also trivial to spot a
"it's also trivial to spot a client that has been caught" -- it depends what you mean here. It's trivial to notice (at the client) that somebody is sending you relay-early cells. But that doesn't tell you about whether your entry guard is looking for them. So if you mean "attacked" when you say "caught", I agree.
Most likely the NSA
Most likely the NSA "intercepted" the talk and made use of the technique for several months. Had it not been axed the hole would've been patched sooner. Isn't it obvious?
The talk was due to happen
The talk was due to happen next month.
Actually, the NSA doesn't
Actually, the NSA doesn't need to (and from the evidence we've seen, actually doesn't) run relays of their own.
But that shouldn't make you happy, since one of the huge risks is about how many parts of the network they can observe, not how many relays they operate. They don't need to run their own relays, if they can just wait until nice honest folks set up a relay in a network location that they're already tapping.
Now, the interesting thing about the traffic confirmation attack here is that you actually do need to operate the entry guard, not just observe its traffic (because you need to see inside the link encryption). So in fact the NSA would have to run a bunch of relays in order to do this exact attack.
But the more general form of traffic confirmation attack can be done (if you're in the right places in the network) by correlating traffic volume and timing -- and that can be done passively just by watching network traffic.
The two blog posts to read for more details are:
https://ocewjwkdco.tudasnich.de/blog/one-cell-enough
https://ocewjwkdco.tudasnich.de/blog/improving-tors-anonymity-changing-guar…
"you actually do need to
"you actually do need to operate the entry guard, not just observe its traffic (because you need to see inside the link encryption"
Except, Heartbleed lets you dump the encryption keys... or did around that time, right?
Correct -- that's part of
Correct -- that's part of why I mentioned a global adversary who logs traffic and then tries to break the link encryption.
Though to be fair, nobody has successfully shown that heartbleed could be used to extract link encryption keys from a Tor relay.
(Though to be extra fair, nobody has successfully shown than you couldn't.)
Yes, I get that the NSA's
Yes, I get that the NSA's strength is in network observation.
Although they could also run numerous malicious relays, thousands perhaps, that would be too obvious too quickly to be very useful. Right?
When I updated to 3.6.3Mac
When I updated to 3.6.3 (Mac) there was no reference to Tor 0.2.4.23. When I click on "about tor" in the browser, it reads version 1.6.11.0, maintainer Mike Perry. Is there a problem here?
Also, when I go to the tor announcement page at the bottom there is url: http://lists.torproject.org/pipermail/tor-announce/attachments/20140730…
that, when clicked, opens a window offering to open it with GPGServices.service. When downloaded and opened, a window appears that says "attachment-1.sig, Verification FAILED: No signature."
What is happening? Thank you.
There hasn't been a Tor
There hasn't been a Tor Browser Bundle with the new log message released yet.
BTW, that 1.6.11.0 version number is for the Tor Button Firefox extension.
Correct, TBB 3.6.3 doesn't
Correct, TBB 3.6.3 doesn't have Tor 0.2.4.23 in it yet. See
https://ocewjwkdco.tudasnich.de/blog/tor-security-advisory-relay-early-traf…
for details.
As for your gpg thing trying to automatically interpret the signature on the mailman archives, I am not surprised that it doesn't work. If you were on the tor-announce mailing list, and got the signed mail, then you could check the signature on the mail there. But the result will be that you can verify that I really sent the mail -- it sounds from your questions like you were hoping it would do something else.
BBC News and other outlets
BBC News and other outlets in the UK recently (early-July) carried a set of reports regarding large numbers of people (over 600) who were arrested for accessing illegal material via TOR hidden services.
The reports extensively quoted police sources, who, amongst the usual fluff associated with such reports, explicitly claimed that the UK police and intelligence services could de-anonymise TOR hidden services, but declined to indicate how.
The dates quoted in the reports for the 600 arrests supposedly connected with de-anonymisation of TOR hidden services were "the last six months" preceding mid-July, i.e. almost exactly matching the Jan 30th to July 4th window quoted above by the TOR foundation.
The story also appears to have been fed to the UK press by the UK police and Intelligence services within a few days of the compromised TOR relays being disconnected.
This may just be coincidence, but it smells fishy to me.
It is also worth noting that, according to BBC Wales reports, several of the 600 arrested subsequently committed suicide (before they could be either charged or tried).
Big busts like that are
Big busts like that are always hyped up in the media as much as possible. They want to make it sound as dramatic as possible, spread FUD among other criminals, win brownie points with tabloid newspapers, and so on. If a journalist asks you, the detective, "hey, that dark web thing, did some of the guys you arrested use the dark web?", of course you're gonna say "yes". But that doesn't mean that they broke tor itself. Maybe some of the people arrested messed up and deanonymized themselves another way. Maybe they were targeted with another Firefox 0-day. Who knows?
The announcement of this big "bust" also coincided with the rushing through of the UK's new data retention law through parliament. It coincided with a whole lot of things. There isn't anything to be gained out of speculating like this.
Indeed. I enjoyed reading
Indeed. I enjoyed reading http://phys.org/news/2014-07-tor-cops.html the other day.
Presumably the increased
Presumably the increased interagency coöperation of Operation Notarise snagged a lot of low-hanging fruit, so the NCA can spread all the FUD it wants. That OPSEC is hard shall be borne out in the trial transcripts. Which should make for an interesting study nonetheless: there are more paedophiles than terrorists.
This is nonsense. I assume
This is nonsense.
I assume that the UK is a state ruled by law. Merely accusing users of having asked for hidden services, without even knowing the exact service or page they requested, or even whether they really visited the web site, is not enough to arrest anyone.
Actually they tracked a
Actually they tracked a total of 10000, but 'cherry picked' 660.
They could also be using some of the FH data from last year.
I have heard law enforcement
I have heard law enforcement officials describing such investigations as "shooting fish in a barrel"/
I'd really be surprised if
I'd really be surprised if they weren't just doing the same old P2P file sharing stuff. That does fit the broadest definition of "dark net", i.e. not indexed by conventional search engines. And last time I looked people were still dumb enough to share illegal porn that way. (Based on file names only, I certainly didn't download any.)
Agree. Using not-anonymous
Agree.
Using non-anonymous P2P is the most foolish thing to do with certain kinds of stuff, and there were some other European mass stings in the previous couple of years, with hundreds of people investigated or arrested.
It could be just a PSYOP trying to scare Tor users, even if Tor weren't involved.
BTW, "dark net" could also mean Freenet, not necessarily Tor!
You're right. And "darknet"
You're right.
And "darknet" could be I2P, too.
Who says those people didn't
Who says those people didn't screw up something else? They could have used tor2web. They could have sent emails. They could have been in the Tormail database that the FBI seized.
Don't believe everything the press tells you.
That number of 115/116
That number of 115/116 relays is too low. I checked the server descriptor archives from January to July and found a total of 167 relay fingerprints, and here's the kicker: January had the largest number (161), then it decreased from month to month until it was at 116 for June and July! (The IP address count was 121 from January to May, and 116 from June to July.) Someone should definitely analyze the data from a lot further back in time, because we might be looking at the wind-down phase of something.
Fingerprints January - July:
https://gist.github.com/anonymous/901239f40977e6045756
IP addresses January - July:
https://gist.github.com/anonymous/a0e0f0725f88c5dfc471
relayearly_extractor.sh:
https://gist.github.com/anonymous/1c5c9328acb8b686f155
There are definitely more
There are definitely more attacker relay descriptors in 2013:
Lots of additional ones in December 2013 that match the original criteria (IP blocks + Unnamed + 0.2.4.18-rc), but in November it gets complicated because v0.2.4.18-rc was only released in the middle of that month, and the attacker did not immediately upgrade. Take a look at November's descriptors for relay fingerprint 06D5 508F 225A 3D94 C25B E4E7 FD55 1CAD 1CE3 5672, they used v0.2.3.25 and then only in January 2014 was it finally upgraded to 0.2.4.18-rc.
237 likely attacker relay fingerprints 2013-10 - 2014-07 (IP blocks + Unnamed + either of both versions)
https://gist.github.com/anonymous/a7f5addc58f5418e045b
Observed attacker platform descriptors:
32997 Tor 0.2.4.18-rc on FreeBSD
1700 Tor 0.2.4.18-rc on Linux
749 Tor 0.2.3.25 on Linux
42 Tor 0.2.2.35 (git-73ff13ab3cc9570d) on Linux x86_64
1 Tor 0.2.3.25 on Windows XP
1 Tor 0.2.3.25 on Windows 8
(Apparently using Windows as a development platform, which suggests that the attacker truly is evil.)
Updated relayearly_extractor_v2.sh (the first one didn't work if your grep is not egrep):
https://gist.github.com/anonymous/c714a58b2c7cebc1b051
My old netbook is quickly reaching its limits here, performance-wise, so can someone else please take this up?
Also people, if you have btrfs/zfs snapshots of your filesystem, that's a way to check your historical Tor state files.
I suspected that there were
I suspected that there were false positives and retried using a stricter approach, namely grepping for the original 116 relay fingerprints on the 2013-09 - 2014-07 descriptor dataset. This still returned matches dated 2013-12-26 - 2014-07-09, but the only platform was "Tor 0.2.4.18-rc on FreeBSD".
A build for 0.2.4.23 isn't
A build for 0.2.4.23 isn't available in the repository for Trusty armhf; it's still on 0.2.4.22. Is this an oversight, or is it just still coming for the ARM architecture?
https://lists.torproject.org/
https://lists.torproject.org/pipermail/tor-relays/2014-July/005033.html
One question came up was
One question that came up was "What about Tor users who connect to non-hidden services?" That is, people who just use Tor for ordinary web browsing.
For example www.google.com. I suppose the traffic confirmation attack would gather data, but that would only identify the user, not the fact that the user connected to Tor and then went to google.com.
You could do this from an
You could do this from an exit node as well.
But just as importantly, if you visit an HTTP site through an exit node, it is trivial for an adversary to do a traffic confirmation attack by injecting a bit of javascript into the fetched reply.
And even for HTTPS, the adversary can manipulate your traffic reply rate when they control the exit node for a confirmation attack.
The real innovation is that you can use it to say "who's accessing OR assigning a given hidden service", which is a particularly powerful capability.
No, I don't think that last
No, I don't think that last part is accurate.
If you run a bunch of relays and you manage to get one of your relays into the HSDir position in the circuit and another in the entry guard position, you can do passive traffic volume and timing correlation between them to realize they're on the same circuit. You don't need to do anything active (either the header tagging approach described in this advisory, or the javascript injection thing you describe, etc).
So the "real innovation" here isn't that traffic confirmation attacks are now enabled in a new situation where they didn't work before. They likely worked just fine. "So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work."
For this specific attack, it
For this specific attack, it doesn't look like they were deanonymizing "regular" users. Since none of the relays they set up were exit relays, you would never pick one of them as the last hop in your path to the website you want to visit.
Still, it's possible that the particular method they used could be modified to deanonymize "regular" users. Wiretap your exit relay, parse destination IP addresses from packet headers, encode them in RELAY/RELAY_EARLY sequences, then send them back down the circuit to the guard like before. Such an attack could discover that you (your IP address) connected to google.com at some date and time. If you weren't using HTTPS, it could find out that you (your IP address) connected to google.com and googled for kittens.
Also, it's important to realize that this is not the first and certainly not the last example of a traffic correlation attack on the tor network. It's one of the fundamental problems which is very hard (maybe impossible?) to get around in low-latency anonymity systems. You can help everyone mitigate the threat from these kinds of attacks by running a relay, which lowers the bad guy's chances of being your first and last hop.
Correct.
Correct.
The problem lies in the fact
The problem lies in the fact that many of us users are technologically deprived.
I'd love to "run a relay" or however it's termed because I would so enhance the security/anonymity benefits but lack the knowledge and skills to implement same.
And flinging me at anything less than a dummies guide for the ungeeks is simply a waste of your'n valuable time.
Like many on this planet I still use a XP and I'm not even sure the 1.5GHz OS would be able to support y'alls nodes and relays.
I think y'all need a higher class of clientele - which sorta leaves us poor, struggling revolutionaries who be aspiring to also be members of a stable proletariat out inna cold...
If your PC is on all the
If your PC is on all the time anyway and you have a decent broadband connection, running a relay is helpful and not too difficult: https://sedvblmbog.tudasnich.de/getinvolved/relays.html.en
It is possible to run a relay on XP, but... XP is no longer supported by Microsoft and thus has an ever-growing list of known vulnerabilities which they don't plan to ever fix. So, you really shouldn't be using XP for anything, much less a Tor relay.
You don't need to buy a new PC to escape the security disaster that is Windows XP. I highly recommend trying some flavor of GNU/Linux. Many of them are actually very easy to use these days. Check out http://getgnulinux.org/ to get started.
If you can't switch immediately but want to try it out, you could also get a copy of Tails https://tails.boum.org/ a privacy-focused (Tor preinstalled) GNU/Linux distribution you can run from a CD or USB stick without touching your hard drive. You can't (easily) run a relay from Tails, though.
"XP is no longer supported
"XP is no longer supported by Microsoft and thus has an ever-growing list of known vulnerabilities which they don't plan to ever fix. So, you really shouldn't be using XP for anything, much less a Tor relay."
Amen to that!
OP here: Grateful thanks fer
OP here:
Grateful thanks fer y'alls responses.
Regrettably, living in a country with a fast falling exchange rate makes replacing my XP prohibitive and Win8 or similar ain't my preference. My personal browsing/computing hobby don't need any "enhancing" an' I love touch-pads. It's all I use as an interface - I don' wanna to keep needing to do the "gorilla-arm" thing.
My security never, ever bin breached 'cos I allus go offline [an' I don't follow dodgy links] after use - so bye-bye relays. I weren't aware that the device needed to be online 24/7.
When we were still being issued with a separate Vidalia interface I did notice that, betimes, some operators not allus online so I assumed that relay availability too would fluctuate. But in those days I wuz dialup with a cap on 512Mb monthly.
Now I broadband with a 2Gb month cap and I thought I could somehow add "support" to Tor security by adding m'self to the network on an ad hoc basis - thereby adding to my own security too.
What else kin I do then here to make our world a better, more secure place - and to stop me feelin' jus' like a parasitic appendage...
"My security never, ever bin
"My security never, ever bin breached 'cos I allus go offline"
LOL. Sorry, but if you're using XP and ever going online at all, your computer is most probably compromised by multiple people and organizations.
"What else kin I do then here to make our world a better, more secure place - and to stop me feelin' jus' like a parasitic appendage..."
Educate yourself, and then teach others.
Here are a few links:
How to get started using Tails (an easy way to start quitting XP): https://tails.boum.org/getting_started/index.en.html
Find a hackerspace near you, or start one yourself: http://hackerspaces.org
Find a cryptoparty near you: http://www.cryptoparty.in/parties/upcoming
...or start one yourself: http://www.cryptoparty.in/organize/howto
There are lots of free university courses online, for instance:
https://www.coursera.org/course/pythonlearn
https://www.coursera.org/specialization/fundamentalscomputing/9
https://www.coursera.org/course/crypto
There are more opportunities to educate yourself for free now than ever before. All that you really need (besides a desire to learn) is some free time and an internet connection. It sounds like you might be one of the people lucky enough to have those two things (and certainly, many people are not so lucky) so you should go for it!
Can anyone shed any light on
Can anyone shed any light on how the actual execution of this attack was observed?
If the attackers were only sending the hidden service name in response to HSDir reads/writes, the detector would need to be running, or be accessing, a hidden service for which the attacker was the current HSDir.
A relaying node could also detect relay_early cells in the wrong direction, but it wouldn't know they were related to HSDir traffic.
It sounds like the
It sounds like the researchers begrudgingly dropped hints about the attack involving relay_early cells. This, combined with knowledge of the mysterious group of 115 relays joining the network, and the fact that they directly encoded the .onion address and sent it down the wire, means you could set up your own guard relay, watch for the relay/relay_early messages, and try to work out the pattern.
It's easier to set up your
It's easier to set up your own client, and go to a hidden service, and see if you get any relay_early cells back. The relay at the HSDir point doesn't know who it's attacking when it's making the decision about whether to inject the signal.
So to confirm that these
So to confirm that these nodes were doing the relay_early attack someone accessed hidden services mapped to those HSDirs at that time?
You could just set up your
You could just set up your own "testing" hidden service and "testing" client, and monitor them for a while. Eventually, your hidden service's descriptor will be served by a bad HSDir, and your client will pick a bad guard.
I think that everything
I think that all content which goes over the tor network should be strongly encrypted, so that if they de-anonymize you they still have plenty of fun decrypting your content.
During browsing, you communicate with servers whose certificates you cannot easily verify manually, and agencies like the NSA can even produce faked or stolen SSL certificates for Google and Yahoo to perform man-in-the-middle attacks: https://www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html
So you have to restrict your communications to people that you personally know and whose security certificate you can verify.
For this, retroshare http://retroshare.sourceforge.net/ does a good job, when it operates over tor.
Good luck talking to
Good luck talking to clearnet then.
The data has to exit somehow!
Please take your spam about
Please take your spam about your project somewhere else.
Actually, to ensure security
Actually, to ensure security I create a TC volume and upload it to a random upload service. The PW can be anything up to the maximum number of characters. And by using Tor [without Java or JavaScript] I'm ensuring that any traces of my IP are negligible. The upload itself has a short lifespan, too.
Of course, now that TC is no longer with us this isn't a viable method any longer - so I'm free to reveal it. But I'm sure you get my drift.
1)to those who said good
1) To those who said "good luck talking to clearnet then":
I said, run retroshare http://retroshare.sourceforge.net/ over tor. There are tutorials for this on the net. For example here: http://wiki.piratenpartei.de/RetroShare#Paranoide_Konfiguration
https://www.whonix.org/wiki/Chat#RetroShare
The german pirate party calls this the "paranoid configuration".
So no, when you are using retroshare over tor, you are not "talking to clearnet then".
Use tor to anonymize your IP, and then retroshare, to encrypt your content.
2)
yes, truecrypt also does a good job of encrypting files. Actually, I do this: I run tor, retroshare over it, and with this I send truecrypt containers to my friends.....
Truecrypt is, however, not a good way to encrypt chat, email and voip. And that is what retroshare is good for. Note that your mobile is one of the biggest sources of metadata for every agency. With retroshare, you have encrypted voip, and when you run this over tor, then the agencies have much fun. They have to de-anonymize tor, and then crack the retroshare encryption, and for files they have to decrypt my truecrypt containers....
I wish them luck with that....
As for webbrowsing: there you communicate with webservers that you do not know personally. It is difficult for the average user to verify a google certificate. As I explained above, the NSA can fake certificates for google servers. So the process of webbrowsing is probably insecure in principle. Perhaps one should abandon webbrowsing altogether and instead restrict one's communication to individual people whose certificate one can verify. And that is why I recommended retroshare to be used over tor.
I don't understand how the
I don't understand how the headers can make it past a single relay-to-relay hop.
No information should be transferred beyond a relay-to-relay transfer except perhaps the exit node, entry node, and hop count (and even those should be avoided if possible).
Client - request exit public key from directory of exit nodes
\|/
Node1-entry node (sends request to exit node)
\|/
Node3-exit node (responds with exit node public key)
\|/
Node1-entry node (responds with exit node public key)
\|/
Client - encrypts with exit node public key, sends that and Client public key
\|/
Node1 (requests Node2 public key)
\|/
Node2 (responds with Node2 public key)
\|/
Node1 (requests Node3 public key)
\|/
Node3 (responds with relay public key -- not exit node public key)
\|/
Node1 (encrypts data, entry node, and Node1 public key with Node3 public key, then also exit node with Node2 public key)
\|/
Node2 (decrypts Node1-Node2 information, requests Node3 public key with information about chosen exit node)
\|/
Node3 (responds to Node2 with public key)
\|/
Node2 (encrypts previously encrypted data with Node3 public key and sends Node2 public key)
\|/
Node3-exit node (decrypts data encrypted by Client (hopefully still SSL encrypted), sends request, receives response, encrypts with Client public key and then includes entry node in encryption with Node2 public key, encrypts request id with Node1 public key)
\|/
Node2 (decrypts with Node3 public key, has entry node, request id is still encrypted with Node1 public key)
\|/
Node1 (decrypts request id that was encrypted with Node1 public key, looks up client, sends data that is still encrypted with Client public key to Client)
\|/
Client receives response and decrypts data that was encrypted with its public key and then further decrypts any extra encrypted data (like SSL) from the response
This way, there's no room for unnecessary headers, which would mess up the data. The only way an attack like the one mentioned would work (as far as I can see) would be to tack data onto the encrypted data block, which the exit node could read and the entry node and exit node could then correlate. This would result in a majority of connections being bad, so it would be noticeable.
If I'm reading your proposed protocol correctly, Node 1 knows the identity of Node 3? If so, there is no point in having a middle node. Read https://gitweb.torproject.org/torspec.git/blob/HEAD:/tor-spec.txt to learn how Tor works, and https://trac.torproject.org/projects/tor/ticket/1038 to learn a little about the history of relay_early cells.
dammit
http://www.bbc.co.uk/news/uk-28326128
related?
Seems unlikely. But in any case, see the same thread earlier:
https://ocewjwkdco.tudasnich.de/blog/tor-security-advisory-relay-early-traf…
I was thinking about this after reading the article; it seems to me that a way to be a bit more secure when using Tor would be:
1. Set up an OS on a virtual machine to keep it isolated
2. Get an account with an anonymous VPN provider that doesn't log.
3. Connect to the VPN
4. Connect to Tor
That way, if the attacker did get your IP, it's not your actual address, and if they gleaned information about your operating system, it's not your actual operating system.
Unfortunately I couldn't recommend which virtual machine software, operating system, or VPN provider might be best, since I've not done any research into it, but it does seem like it would be a much more secure way of using Tor.
>anonymous VPN provider that doesn't log
Found your problem.
This. This so much.
I can't say this enough.
How about vip72?
Isn't it a VPN run on a botnet?
Basically log-less.
<facepalm>
If you like GNU/Linux, then Whonix is a virtualized Tor operating system. The basic concept of Whonix is called an 'isolated proxy'.
Two VMs are used. One runs as a virtualized Workstation (Debian, Fedora, even Windows if you're crazy). The other VM runs as a Tor Gateway VM and routes the Workstation VM's software applications over Tor.
This is what's called the isolated proxy concept. It prevents IP and DNS application leaks. You can read more about isolated proxies and Whonix here.
https://trac.torproject.org/projects/tor/wiki/doc/TorifyHOWTO/Isolating…
Qubes is another virtualized OS, like Whonix, except Qubes requires hardware virtualization support and uses the Xen hypervisor instead of a host operating system like Whonix does. In theory, Qubes is probably more secure than Whonix, because hypervisors have less of an attack surface than the Linux kernel running as a host OS does.
Great work and keep looking for malicious nodes, because we can be sure bad guys will keep doing this.
"We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relay"
Hah! I remember the 50.7.0.0/16 family and I thought they were up to no good based on my observations, which suggest that the nastiness may have been much worse than you report in the blog. If only it were easier to contact you anonymously with encrypted anecdotes, I would certainly have reported them soon after they started running those.
Some years ago another bunch of FDC servers were hired by another "offensive research" company. This earlier family was also notable for its inexplicable ties to the Chicago Board of Trade network.
Unfortunately, future malicious families will probably try to diversify their nodes, so they may be harder to spot, but I am sure you have thought of ways, and will continue to think of ways, to spot dodgy families of nodes.
1. That probably would provide some additional security against application-level attacks, especially on Windows. However, remember that VM breakout bugs are a thing. Also, if you use something like Tails, it's not persistent, so you don't have entry guards, actually making you more vulnerable to this kind of traffic correlation attack.
2. Who cares if a VPN provider claims they don't log? Maybe they log everything. How could you tell if they logged everything or not?
You should trust any VPN you don't run yourself as much as you trust a random router somewhere on the Internet. Which is not at all.
Nice that you're trying, but another MAILING list is not going to encourage people to report bad relays to you:
https://trac.torproject.org/projects/tor/wiki/doc/ReportingBadRelays
1. It is almost certain that our enemies monitor all your mailing lists.
2. We know from Snowden leaks that the enemy maliciously exploits bug reports to attack "targets". And we know that all Tor users are among their targets.
3. It is almost certain that our enemies use their database of vulnerabilities found in individual personal electronic devices (recall the JTRIG "tool" which does a torified nmap scan of every device "in an entire country" which is attached to the internet) to tailor their attack to particular devices. It follows that anomalies observed by users are likely to deanonymize them if they report what they have seen in unencrypted communications.
Why wouldn't the "researchers" fully inform the Tor devs of all vulnerabilities ahead of disclosing their findings, so that they can work to address them? Seems very reckless if you're not a bad actor. What is CERT's role in this situation?
There are many possible reasons, all plausible to various degrees. E.g.
1. They wanted to disclose to the Tor project, but CERT didn't let them
2. They wanted publicity and dramatic effect by revealing this attack in the wild at Black Hat
3. They were going to disclose to the Tor project, but far later than expected
...
As always, without the actual details all we can do is speculate, which isn't always the most useful thing we could be doing.
Does the Tor Project expect more data from the researchers?
Many thanks to Philip and Damian for "adopting" the node monitoring task.
Tor Project is a victim of its own success, in the sense that Tor has been so successful that increasing numbers of well funded entities are pouring serious time, money, and effort into subverting the Tor network.
Some users have long urged the Project to pay more attention to projects like node monitoring, as well as to devising strategies in legal/psychological/economic/political space. The developments reported in the blog show that we were right. Inevitably this will take time away from work in coding space, but it has to be done.
I'm sure this whole fiasco is working wonders for Alexander Volynkin and Michael McCord's reputations as security researchers.
Maybe running Tor over Tor might have mitigated this specific problem.
I.e., the first Tor's exit asking the second running Tor to access the HSDir.
Since, if I'm understanding this correctly, the tag would have been seen as far as the second Tor's guard node, but then gets encapsulated and pushed down the first Tor's exit and on to the client.
But then again, even Tor over Tor can't help against the other types of traffic confirmation attacks. Nor can a VPN, for that matter.
All that can be done right now is running more "good" relays!
The answer to that is "maybe, maybe not". There hasn't been enough research into what happens when you connect to tor, over tor. What if you pick relays 1, 2, and 3 for the first circuit, then you pick relays 3, 2 and 1 for the second circuit? Will that deanonymize you? Maybe.
You will have to manage your circuits better to avoid the scenario you mention. The first instance runs as normal but uses just one guard or a bridge; this will be the default for everybody very soon. On the second instance, make it also use just one guard, but add an exclude clause to the configuration file to exclude the guard(s) of your first instance. You can take the extra step of excluding the guards of the second instance from the first instance, but that might not be necessary, since you are only concerned with your immediate guard not being used on the circuit path of the second instance. The other scenarios or loops that can happen are not ideal, but you can live with them, as long as it does not happen to your immediate guard. Note that the StrictNodes option is not used. As for allowing the first instance to recycle circuits and closing "dirty" circuits, run a cron job script on the first instance that gets the circuit numbers of the streams that are open and closes them with the CLOSECIRCUIT command every now and then. More thought and research should be done on this. It's a better alternative than the more prevalent VPN-to-Tor suggestions that are floating around.
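Below is a minimal, illustrative sketch of that cron-job idea using the stem control-port library (not an official or vetted script). It assumes your first Tor instance has ControlPort 9051 and cookie authentication enabled, and the 600-second threshold is just a placeholder:

import datetime
from stem import CircStatus
from stem.control import Controller

MAX_CIRCUIT_AGE = 600  # seconds; placeholder threshold for "old enough to recycle"

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # uses the control auth cookie by default
    now = datetime.datetime.utcnow()
    for circ in controller.get_circuits():
        # Only touch fully built, general-purpose circuits.
        if circ.status != CircStatus.BUILT or circ.purpose != 'GENERAL':
            continue
        if circ.created is None:
            continue
        age = (now - circ.created).total_seconds()
        if age > MAX_CIRCUIT_AGE:
            # Equivalent to issuing CLOSECIRCUIT <id> on the control port.
            controller.close_circuit(circ.id)

Run something like this from cron every few minutes; the same thing can be done by hand over the control port with the raw GETINFO circuit-status and CLOSECIRCUIT commands.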
Suppose my relay is running 0.2.4.23, and suppose it's the middle hop between adversary's guard node and adversary's hidden service directory. Will it kill circuits sending relay_early cells backwards? Or is that impossible, since it doesn't have the keys to decrypt that stream?
It will indeed kill circuits if it sees an inbound (towards the client) relay_early cell.
It doesn't have to decrypt the stream to see it, because whether a cell is relay or relay_early is a property of the (per hop) link, not a property of the (end-to-end) stream.
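To make that concrete, here is a highly simplified illustration (in Python, not Tor's actual C code) of the kind of per-hop check a patched relay performs: it looks only at the cell's command byte and the direction the cell is travelling, never at the encrypted relay payload. The command values follow tor-spec (3 = RELAY, 9 = RELAY_EARLY); everything else is made up for the example.

CMD_RELAY = 3        # tor-spec cell command for ordinary relay cells
CMD_RELAY_EARLY = 9  # tor-spec cell command for relay_early cells

def handle_cell(circuit, command, toward_client):
    """Toy model of a relay's per-circuit cell handling."""
    if command == CMD_RELAY_EARLY and toward_client:
        # Inbound (toward the client) relay_early is never legitimate,
        # so a 0.2.4.23 / 0.2.5.6-alpha relay tears the circuit down.
        circuit['alive'] = False
        return 'destroy'
    # Outbound relay_early is allowed, but only a limited number per circuit.
    return 'forward'

circ = {'id': 1, 'alive': True}
print(handle_cell(circ, CMD_RELAY_EARLY, toward_client=True))  # destroy
print(handle_cell(circ, CMD_RELAY, toward_client=True))        # forward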
Awesome, now post the data that is relevant so we can track these people down and end them.
Or are the maintainers of Tor too scared to let everyone know who's attacking them, and are in fact trying to cover up the addresses of those trying to attack us?
Get to it, bois. Start dropping information.
NOW.
Huh? What data? We all know who the researchers who planned to give the Black Hat talk are. I think arma made it very clear in the blog post that we don't know 100% that they are definitely the ones behind this particular attack, but it's most likely them. The Tor Project knows as much about the researchers as any other public entity.
Note that they had to go through the trouble of the Sybil attack because they are not a global adversary. So, probably not NSA.
Russia offers $110,000 to crack Tor anonymous network
http://www.bbc.com/news/technology-28526021
Should we be concerned?
No.
Other government agencies would pay more; if someone is trying to break Tor for cash, they're not going to report it to the Russians.
So no, you shouldn't be concerned about Russia wanting to break Tor, given that most intelligence agencies want to. Unless you live in Russia, all the other agencies (with more money) are more important; and if you live in Russia, then based on what I've heard you should be more concerned about what actions the government might take against you for simply using Tor.
Nah, either it's a weird form of psyops, or the Russian government is hilariously naive.
If you really could "crack Tor anonymous network", you could probably make a lot more money by going to intelligence agencies in the US or the UK. $110,000 is chump change even for a private exploit contractor like VUPEN.
All these scary dramatic articles are just click bait written by journalists who don't really know what they're talking about.
Thank you for taking the time to explain all these technical details. I'd be lost without the blog author breaking down how Tor works. I'm learning a lot!
I'm glad the technical details were useful! I want everybody to get up to speed on Tor (as much as you're willing to) so we're all in a better position to decide what sort of a world we want to live in.
Guys, while we're at it, let's fix the hidden service referer leak too:
https://trac.torproject.org/projects/tor/ticket/9623
Sounds good, submit a patch!
We are not going to change the fate of the world with an anonymity network that is pwnt to fuck and back with $3,000 arma, you are delusional. Tor is pretty much a honeypot more than anything else these days. You guys need to be way more proactive.
1. The idea of using PIR for HSDIR lookups was suggested a long time ago, never implemented of course, would have protected from this attack
2. Using a single entry guard and slowing rotation should have happened four years ago when it was first being suggested, would have greatly reduced the damage of this attack
3. Tor is way too fucking vulnerable to confirmation attacks the entire thing needs to be scrapped and a new network needs to be designed and implemented, Tor failed, come to terms with it, accept it, stop running a network that gets people pwnt on a daily basis now
You guys will never understand defeat because you repeat the "Tor is the awesomest" mantra just like any other propaganda that people repeat over and over and then come to blindly accept as truth. Sorry, I would be better off using a VPN chain than a low latency network that arbitrary assholes can inject nodes on.
Tails users are especially fucked of course because they go through entry guards like a junkie goes through lines of cocaine, who would have ever guessed that was a horrible idea?
Please stop with your ignorant FUD. You've been spouting this all over the blog and the IRC. I'm sorry, but Tor is not "pwnd as fuck", and as much as you hate to hear it, low-latency anonymity networks will never be immune to confirmation attacks. The best we can do is make it very hard both technically and legally, and that can be best done by improving the code and having more users run relays. I know you have a hard-on for high-latency mixnets, but they have a totally different purpose. No one could edit Wikipedia or chat with a friend using a high-latency network, it's just not possible. For networks capable of transmitting data through exit nodes on the time scale required by computers using TCP/IP and which have a sane TTL (aka "all of the internet"), a low-latency network is required. As for using a VPN chain, Tor is not just a 3-hop proxy, it has many other features which make it resistant to a wide variety of attacks.
Please stop spreading your "zomg tor is b0rked!1" nonsense and naively suggesting we move to a poorly studied, extremely difficult to implement, high-latency mixnet which would be incapable of communicating with anything but itself. If you want that, go to the painfully slow, and questionably secure, Freenet (not that I have anything against Freenet, I like it, but it is not a replacement for an anonymity network).
To go over your points one by one:
1) Tor was never meant to focus on HSes; they were always an afterthought. If your priority for fixing HSes is to implement PIR, then you need to take a long hard look at how they work. The next generation of HSes will be focused on fixing issues that are impossible to fix without rewriting them. Wait for those, and please don't make your very first point the criticism of an afterthought.
2) There is a lot of math involved in that. I suggest you read some of the papers involving such things before assuming it's as simple as that.
3) See my original response to your misunderstanding of the differences in the purposes of high and low latency mixnets.
>You guys will never understand defeat because ...
Ah yes, throw in the obligatory fanboi accusations that come up as soon as you realise how little substance the rest of your points have.
>Tails users are especially fucked of course because ...
And that is Tor's fault how? It's as if you criticized internal combustion engines by saying "Wow, look how much pollution this releases when the catalytic converter is removed! Internal combustion engines suck!". You don't see the irony in that?
I'm sorry if I sound excessively confrontational, but you've been spouting this FUD non-stop, and it's getting old.
https://mailman.boum.org/pipermail/tails-dev/2013-May/003113.html
In short words, Tor fucking sucks, I give up!
Hi, Tor is newly blocked in Iran; the Tor Browser 3.6.3 bridges just do not work. Please help; the Iranian regime uses more advanced techniques. You are so retarded.
You should first learn to use punctuation and write in proper English. I can use Tor just fine in Iran... it is a shame you bash what you don't understand.
Same person as above, just thought to give you some instructions... When using the Tor Browser Bundle you should select the "configure" option, select "No" for the first two questions and "Yes" for the third one, then click connect. It should resolve the problem in most cases.
This is experimental software. Do not rely on it for strong anonymity.
it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.
Even if traffic confirmation/Sybil attacks were not feasible on Tor, isn't encrypted traffic recording by States putting them at risk indefinitely in the future anyway?
I mean, some cryptographers don't predict a brilliant future for the maths underlying current public-key cryptography. It would mean that Diffie-Hellman could be broken and the recorded Tor traffic decrypted in the near future (5 years, 10 years?), when lots of Tor users taking risks on Tor would still be alive.
Betting your life on the maths behind public key cryptography is also a serious risk.
After the Heartbleed problem at the end of April, more and more relays upgraded OpenSSL and exchanged keys.
The discussion was about closing out relays that did not do so.
Two weeks later I checked the lifetime of relays and found the following:
about 90 relays 9001/9030 unnamed average 4000-5000KB Tor 0.2.4.18-rc on FreeBSD Guard
50.7.134.* 86d
50.7.159.* 84d
50.7.110.* 84d
50.7.111.* 84d
about 15 relays 9001/9030 Unnamed average 4000-5000KB Tor 0.2.4.18-rc on FreeBSD Guard
204.45.252.* 119d/86d
At the end of April they had been online between 84 and 119 days, which means they started well before Heartbleed.
Was FreeBSD safe against Heartbleed at that time?
"using one entry guard
"using one entry guard rather than three"
...
"extending the guard rotation lifetime"
I have the impression you focus on the threat of an attacker following the circuits from the content server to the entry guard, i.e. that an attacker wants to know who is looking at that content.
I think many users are just worried about the opposite: that an attacker who already knows them wants to know what they are doing online.
At least reducing the entry guards makes it a lot easier to judicially order the one entry guard's ISP into submission.
Do you think the Tor network would be able to bear getting rid of the entry guard model and randomly pick a new relay every x requests?
For example, a web page with 50 graphics: randomly picked relay A gets the requests for the page and the first nine graphics, randomly picked relay B the requests for the next ten graphics, and so on.
If there is a larger download like an embedded video the formula could be to change the relay every 10 objects or if the sum of object sizes is larger than 1 MByte.
Since anyone can run a relay, there will be malicious relays. The Tor Project tries to catch and remove them, but they can't prevent malicious relays entirely as long as it is an open volunteer network.
So, every time you randomly pick a relay, you have some chance of picking a malicious relay. The purpose of guards (entry nodes that you keep using for a long time) is to roll the dice less often. Assuming you aren't moving around, this makes lots of sense as the only thing the guard knows is your IP (and the middle nodes you're connecting to).
Assuming you are moving around, however, you might consider either not using guards or using a different set of guards for every location. See https://trac.torproject.org/projects/tor/ticket/10969 for details.
If your threat model is concerned about an attacker who knows who and where you are and is willing/able to perform sophisticated judicial or extra-judicial attacks against you specifically, I have bad news: they can most likely deanonymize you at least some of the time, even if your guard is honest. One way is by correlating surveilled traffic from your internet connection with surveilled traffic at the exit (you'll eventually use one they can see, or perhaps one they own); or, if they're also already looking at your destination (outside Tor), they could correlate traffic there without even monitoring a single part of the Tor network.
Tor does not claim to be able to protect against traffic confirmation attacks, particularly by a global passive adversary and especially not against a global active adversary. :(
The good news is that now that these types of threats are no longer hypothetical (although law enforcement use of them is still secret and obscured by parallel construction) there are renewed efforts to build tools that *are* resistant to them. I expect that we'll soon see Tor making it much harder if not impossible to do these kind of attacks passively, and forcing adversaries to go active is a win as they have a lot more chance of getting caught that way.
Wait a minute, you guys pay people to break into Tor... The Russian government is offering 3.9 million roubles, around $111,000 or £65,000, to anyone who can produce a system for finding data on those using Tor.
Ok, here's the source:
CALL FUNCTION 'GUID_FINGERPRINT'
IMPORTING
ev_guid_16 = ev_guid_16
ev_guid_22 = ev_guid_22
ev_guid_32 = ev_guid_32
WRITE: /, ev_guid_16, ev_guid_22, ev_guid_32.
Send HTTP GET requests to: freegeoip.net/{format}/{ip_or_hostname}
fingerprint('image/exif/gpsCoordinates') =
file_ext('jpeg' or 'pjpeg' or 'jpg' or 'pjpg' or 'tiff' or 'gif' or 'png' or 'riff' or 'wav') and
'exif:GPSLatitude' or 'exif:GPSLongitude' or 'exif:GPSDestLatitude' or 'exif:GPSDestLongitude';
The API supports both HTTP and HTTPS.
Now where's my Rubles?
The annual risk of someone being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.00000000006 (6 × 10^−11), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate.
In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%. Or, to put it another way, the probability of one duplicate would be about 50% if every person on earth owned 600 million UUIDs.
So tracking users via Tor using GUIDs and UUIDs as your attack vector is not an impossibility, nor an improbability. The source code for XKeyscore was written in C, so it stands to reason that if you want to deanonymise millions of people just to obtain a better pay grade, these things can be done, and done very easily. Why waste time with a system that is inherently broken, and by all accounts broken quite badly? If people go around degrading the security standards to the point where anyone can just come along and do this kind of evil crap, then it's time to go back to the drawing board.
Unless you like the idea of walking around with your brain being controlled by nano-robots serving Google ads 24/7!
Do you expect me to talk? No, Mr. OOG, we expect you to die!
Does this Logo remind anybody of anything?
https://en.wikipedia.org/wiki/SPECTRE
When it comes to the secret services, I think of a few sterling examples of all their hard work: importing Nazis under Project Paperclip, torturing kids under MK-Ultra, inventing the atomic bomb. And when it comes to their logo of a giant octopus, well, it's kind of surprising how much it can look like a super-villain organisation from an Ian Fleming novel.
Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
When is it assumed the relays became entry guards? After one day, one week, or several days or weeks?
Probably a week or two after they started up.
https://ocewjwkdco.tudasnich.de/blog/lifecycle-of-a-new-relay
And they became HSDir relays a day after they started up.
Could you please clarify that:
To deanonymize a user, the malicious source must get an entry guard as well as another relay acting as the "exit node" to the hidden service. Is that correct?
Then: what is the probability of getting an entry AND an "exit" node (a relay as "middle node" instead of entry or "exit" wouldn't help) from this malicious source?
So if I got this right, 6.4% of nodes were rogue? So that means for each connection to Tor there was a 6.4% chance you'd connect to one of the rogues, and then if you were accessing a HS, there was also a 6.4% chance the HSDir you queried was also rogue. So there's roughly a 0.4% chance that connection is affected.
BUT, if you did this 100 times over the affected period, there would be roughly a 1 in 3 chance it occurred. Anyone care to check my math?
It was 6.4% of the guard capacity. Tor load balances by capacity, so the number of relays isn't usually the right metric for things.
Also, the attack you have invented with your "if you did this 100 times" line is exactly what guard relays are designed to handle:
https://sedvblmbog.tudasnich.de/docs/faq#EntryGuards
The math is complicated by the fact that clients pick among three guards at once (so the chance of having one of them is more than 6.4% over time).
Yes, the math is mostly OK. The first calculation comes from 6.4% times 6.4%, which gives 0.4096%, or roughly 0.41% (less than 1%). For the other number: the probability of going clean all 100 times is 0.9959 raised to the 100th power, which is about 66%. So roughly 1 out of 3 people using the service 100 times would be hit at least once, and about 2 out of 3 would come out completely clean, i.e., undetected...
No, you should read about how entry guards work.
If the entry guard you picked turns out to be bad, then over the course of the 100 circuits you'll probably pick a bad exit and you lose. But if the entry guard you picked turns out to be good, then all 100 circuits will be safe.
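For anyone who wants to check these numbers themselves, here is a small back-of-the-envelope calculation in Python. It is a toy model only: it takes the quoted 6.4% figure at face value for both the guard and the HSDir pick, ignores Tor's real bandwidth-weighted path selection, and the single-guard line simply assumes a bad guard eventually catches you.

p = 0.064                                # attacker's share of guard capacity, as quoted above

# Naive model without guards: each circuit rolls new dice.
per_circuit = p * p                      # ~0.0041, i.e. ~0.41% per HS connection
clean_100 = (1 - per_circuit) ** 100     # ~0.66: about 2 in 3 stay clean
hit_100 = 1 - clean_100                  # ~0.34: roughly a 1 in 3 chance of a hit

# With one long-lived entry guard: you only roll the guard dice once.
hit_with_guard = p * (1 - (1 - p) ** 100)  # ~0.064: bounded by the guard choice

print(round(per_circuit, 4), round(hit_100, 3), round(hit_with_guard, 3))

Running it prints roughly 0.0041, 0.337 and 0.064, which is the point of the reply above: with a good guard you stay safe across all 100 circuits, and with a bad one you lose, so the long-run risk stays capped near 6.4% rather than creeping toward certainty.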
Maybe some of you passionate, extra-communicative, literate Tor users could contact the reporters who penned this incredibly one-sided and garbled story for the Post-Gazette here in Pittsburgh, where the Software Engineering Institute is based (at CMU), and get them to perhaps at least include a reference to the incredible immorality of the researchers' actions in attacking this wonderful project?
Please? Seriously.
Carnegie Mellon engineers aim to unmask anonymous surfing software
http://www.post-gazette.com/business/technology/2014/07/31/2-CMU-experts-said-to-unmask-surfing-software/stories/201407310210
By Matt Nussbaum, Bill Schackner and Liz Navratil / Pittsburgh Post-Gazette
July 31, 2014 12:00 AM
Two Carnegie Mellon University researchers may have removed the veil of secrecy from the Tor Project, a free software program that allows users to anonymously surf the Internet.
Tor was the preferred mode of covert communication used by Edward Snowden, a former National Security Agency contractor. Mr. Snowden leaked a trove of confidential documents that revealed secret snooping of personal communications conducted by the NSA.
Governments in the United States, Russia and Britain have worked for years to unmask users of Tor, which, according to the British Broadcasting Corp., has been linked to illegal activity including drug deals and the sale of child-abuse images.
It seems that the CMU researchers may have cracked it.
Alexander Volynkin and Michael McCord work at the university’s Software Engineering Institute, whose efforts in Oakland are financed by the Defense Department.
Mr. Volynkin was slated to give a talk titled “You Don’t Have to be the NSA to Break Tor: Deanonymizing Users on a Budget” at the Black Hat USA hacker conference in Las Vegas beginning Saturday.
The talk was canceled after CMU said neither the university nor SEI had approved of the talk, according to a post by Black Hat organizers.
Messages left Wednesday night for Mr. Volynkin and Mr. McCord were not returned.
According to the university’s website, Mr. Volynkin is a research scientist and Mr. McCord is a software vulnerability analyst, both with SEI’s cybersecurity solutions department.
“Right now, I’m told we’re not commenting,” CMU spokesman Ken Walters said when asked Wednesday night about the scientists’ work.
SEI spokesman Richard Lynch said he could offer no elaboration beyond a schedule update added to the Black Hat conference website.
According to the Black Hat website, Mr. Volynkin has research interests that include network security, malware behavior analysis, advanced reverse engineering techniques and cryptanalysis. He wrote various scientific publications and a book on malware behavior analysis, and has a patent related to full disk encryption technologies, the site said.
One of Tor’s creators, Roger Dingledine, announced on his blog Wednesday that an attack on the site was discovered July 4.
He wrote that he believes the attack, initiated in January, was led by the CMU researchers, who were trying to “deanonymize users.”
“We spent several months trying to extract information from the researchers who were going to give the Black Hat talk,” Mr. Dingledine wrote. “They haven’t answered our emails lately, so we don’t know for sure, but it seems likely that” they were the hackers.
SEI is a federally funded research and development center with a mission to “research software and cybersecurity problems of considerable complexity,” according to its website.
In 2010, the Defense Department extended its contract with SEI through June 2015. The contract was worth $584 million, or a little over $110 million a year.
In what may have been a coincidence, the director of the FBI, James B. Comey, and the assistant U.S. attorney general for national security, John Carlin, came to Pittsburgh on Wednesday to laud the city's contributions to efforts to fight cybercrime.
Mr. Carlin spoke to a crowd of about 100 at the SEI in Oakland but didn't mention Tor.
Later, he stood beside Mr. Comey at a news conference at the FBI's Pittsburgh office.
Mr. Comey, who did not mention Tor either, said his plan to boost FBI staffing across the nation by 1,500 later this year will include sending more agents to the FBI's Pittsburgh office to focus on cybercrime.
Briefly touching on the first-of-their kind indictments of Chinese officials for stealing trade secrets from Pittsburgh-based companies, Mr. Comey said, “It is no coincidence the work that has come out of Pittsburgh is the product of something that I was just about to describe as magical. I hope it's not magical because that will be harder to replicate.”
Matt Nussbaum: mnusbaum@post-gazette.com or 412-263-1504; Bill Schackner: bschackner@post-gazette.com or 412-263-1977; Liz Navratil: lnavratil@post-gazette.com or 412-263-1510.
Wow. Really, someone from Tor should see about this FUD and these outright lies.
You still have not solved the problems with MITM attacks?
Two simple working examples (attack on client users).
1) Ineffective way
Small device.
Mark packets before they reach the Tor servers. Needs one device and one modified exit node.
I theoretically worked out, a few years ago, the possibility of marking packets in the Tor network.
You cannot solve this problem completely. Another slight modification of the node...
This is the simplest "attack"; any semi-professional hacker familiar with the Tor network is able to carry out such an "attack" alone.
2) Super effective way
Interception and substitution of network connections and traffic
+
Emulation of Tor network, including the master server.
Hardware or software.
Hardware: a small device mounted on the Internet line, anywhere before the Tor servers, plus spoofing.
Needs one device and nothing more.
There are much cleverer MITM attacks (without the use of spy devices), because the "encryption" "system", the node-changing "system", and almost all the rest of Tor's features are no longer current.
They were current and tough 10-14 years ago.
And today, Tor is just outdated trash.
For approach 1, you're absolutely right. Read
https://ocewjwkdco.tudasnich.de/blog/one-cell-enough
for more examples.
For approach 2, if I understand your description correctly, no this would not work.
https://sedvblmbog.tudasnich.de/docs/faq#KeyManagement
You do not understand.
The second one also works.
Please provide enough details for us to be able to reproduce the issue. Thanks!
I'm frankly amazed how complacent people are here. "Don't worry", "it's fine", "this is a conspiracy theory" in reply to others.
I think we have to make certain assumptions. If this attack came through that university network, and the university is impervious to FOIA requests, it seems reasonable to assume the universities are effectively acting on behalf of the government or power itself. It's a convenient front.
If governments specifically claim to have arrested people accessing illegal pornography over Tor, we should assume the worst and that the claim is true (even if it isn't). We don't know how it is true, whether it is a bug in FF, or some other, as of yet undiscovered defect but we should take the claim seriously. It may be governments with access to virtually the entire internet (NSA/GCHQ) blanket target Tor users, and assume they are doing something suspicious. I imagine this could be a genuine threat in itself.
So, what's your suggestion?
To stop using Tor?
Going on the web NOT anonymously?
Or do you think that political activists all over the world should stop their activities, just because some fanatics said that "we won't reveal our methods to track suspects" (without mentioning Tor, BTW!)?
These months we are looking at simultaneous attacks to the best security tools (Tor, Truecrypt), all of them lacking credibility.
Being panicked is just what they want.
Regardless of how hard it is for them to track Tor users, it's easier to track non-Tor users; ergo, the more people they convince not to use Tor, the easier their job is.
tor is dead
From http://archives.seul.org/or/talk/Jul-2014/msg00648.html
"Trying this right now gives (unexpectedly) only 121 Guards (> 2MBps) and 130 Exit nodes, really working."
Can you determine whether these smaller-than-expected numbers are the result of a deliberate blocking of nodes to push selected nodes into a favoured position?
I don't know what the person was smoking to produce those numbers.
So no -- you'd have to figure out what he/she was doing. My assumption based on the other things that person has done is that they're wrong here.
So is the next update of tor legit?
You'd have to provide more details (exactly which one, what its version is, what its hash is, etc) before we can answer that.
will there be a torbrowser for ios? why isnt there one now?
what do you think of the OnionBrowser?
https://sedvblmbog.tudasnich.de/docs/faq#Mobile
There's no Tor Browser for iOS because there's no Firefox for iOS:
https://support.mozilla.org/en-US/kb/is-firefox-available-iphone-or-ipad
orweb isn't based on firefox!
Yeah, this is a real problem too. All the things that Tor Browser gives you:
https://sedvblmbog.tudasnich.de/projects/torbrowser/design/
are missing in Orweb too. It's bad news. That's why the Guardian folks are trying to switch to Orfox. And we're also excited that Dave Huseby is working on Orfox OS, a version of Firefox OS that has Tor embedded in it.
Orweb hasn't had an update in years!!!
What about OnionBrowser? How can OnionBrowser make an app but the Tor Project can't?
It's because we don't want to give you an app full of application-level privacy holes and then put the Tor name on it.
I'm glad the onionbrowser guy is working on it, since it helps everybody get closer to a world where we could have a safe browser on iOS. But that's not the same as having a package that normal users can use and get safety comparable to the Tor Browser.
It's nice that someone put effort into making an app like OnionBrowser. Unfortunately, there are fundamental limitations in iOS which prevent it being useful. For example, the system-wide video player leaks your IP address. Also, sites can access the HTML5 geolocation API from the embedded safari frame. This could make your iPhone anonymously send your GPS coordinates to an attacker!
tor does not work in Iran now.
You have to use bridges. I believe any (types of) bridges will do.
@arma: Would it be a clever idea to use my one free year of the Amazon cloud service to set up my own obfsproxy bridge and use that very bridge as my entry node, knowing that it has not been tampered with, as it essentially is my own?
In doing so, I would have a 12 month free subscription to a safe and uncompromised entry node - or am I missing something?
Well, it isn't yours now, is it? It's Amazon's. How much do you (or can you) trust Amazon?
That is a new angle.
Could they tamper with the actual running instance (or maybe have someone tamper with it), or is it solely about (for example) logging connections to the bridge?
Please, a reply from somebody... I am interested in an answer too!
660 pedophiles from the darknet were caught in the UK last month; I guess this is how they did it. If all this was done to remove CP and drugs, I don't really mind.
Seems unrelated to me. See the enormous thread about this above.
"If all this was done to
"If all this was done to remove cp and drugs I don't really mind."
1.) Regarding "CP": How much does merely removing (some of) the evidence of a crime do anything to help the victims of the crime or prevent future ones?
2.) Regarding drugs:
- These are actions done by countries that glorify and promote alcohol. How much more deadly than alcohol are any of the substances in question that don't enjoy such legal and social blessings?
- What about people dying a miserable death who find their only relief in marijuana? It is still forbidden even in such cases in many places.
3.) Regarding both and anything else: Do you really believe the /ends/ (prosecuting what you consider evils) always justify the /means/ (arguably massive privacy and other civil liberty violations, massive expenditure of resources that could arguably be used more efficiently and for more urgent needs, etc.)?
Those 660 arrests were from peer-to-peer networks, the most common source of prosecutions for CP. Check your sources again...
Traffic padding. Random, send and receive dummy traffic.
Yes, what about it?
Intuitively it seems like it should help, but nobody has ever worked out the details of the idea in a way that convincingly *does* help.
Some URLs you might find useful:
https://petsymposium.org/
http://freehaven.net/anonbib/
http://freehaven.net/anonbib/#active-pet2010
https://ocewjwkdco.tudasnich.de/blog/one-cell-enough
"tor is dead" Every time he
"tor is dead"
Every time he says this, we should all start thumping our drum and singing the Dies Irae from Verdi's Requiem at the top of our lungs, so that NSA will know we are listening!
When facing state-sponsored attacks from lethal organizations like NSA (but also smaller intelligence agencies working for other nations, including nations adversarial to the USA), we need to continually be mindful not only of technical but also economic and socio-political considerations.
HRW (Human Rights Watch) and the ACLU (American Civil Liberties Union), two leading human rights organizations, have published a major joint white paper outlining the chilling effects of NSA's global panopticon on free speech, journalism, and democracy itself in the EU and FVEY nations:
https://www.hrw.org/news/2014/07/28/us-surveillance-harming-journalism-…
https://www.aclu.org/blog/national-security-free-speech/how-surveillanc…
https://www.eff.org/deeplinks/2014/07/nsa-surveillance-chilling-effects
http://www.wired.com/2014/07/the-big-costs-of-nsa-surveillance-that-no-…
Personal Privacy Is Only One of the Costs of NSA Surveillance
Kim Zetter
29 Jul 2014
One comment the authors often encountered in interviewing journalists and lawyers had an all too familiar ring:
Both journalists and lawyers also emphasized that taking such elaborate steps to do their jobs makes them feel like they're doing something wrong. As one lawyer put it, "I'll be damned if I'm going to start acting like a drug dealer in order to protect my client's confidentiality."
I have lost count of the number of times I have heard that over the last ten years, even from Pulitzer prize winning reporters. We need to all keep telling them: no-one likes acting paranoid, but let's not forget that WE are not in fact doing anything wrong. To the contrary, when we resist oppression, we are doing something RIGHT. Our enemies have been revealed as porn-passing lethal drone-striking kidnappers who lie to their own governments. THEY are doing something wrong. Citizens are not doing anything wrong by exercising our natural right of self-defense, and journalists and lawyers have a DUTY (to the public, to their clients) to use all available countermeasures.
"Maybe some of you passionate, extra-communicative, literate Tor users could contact the reporters who penned this incredibly one-sided and garbled story for the Post-Gazette here in Pittsburgh, where the Software Engineering Institute is based (at CMU), and get them to perhaps at least include a reference to the incredible immorality of the researchers' actions in attacking this wonderful project?"
No chance of that happening. "Reporters" like that think they are simply a cheering squad for the local business associations, or their publisher's "most favored" politicians.
Carnegie Mellon has long-standing close ties to the USIC. Some researchers there are involved in things like exploiting AI for "algorithmic governance", for example. Every lolcat's ears should prick up whenever that term is mentioned because this notion is the foundation of the population oppression machinery to which NSA is feeding all that data ("collect it all").
Journalism is not quite dead, as the excellent work of journalists like Glenn Greenwald, Laura Poitras, Barton Gellman, Kim Zetter, Julia Angwin, and Marcy Wheeler shows.
"[Tor] needs to be scrapped
"[Tor] needs to be scrapped and a new network needs to be designed and implemented, Tor failed, come to terms with it, accept it, stop running a network that gets people pwnt on a daily basis now"
Tor failed? That is not the story told by the Snowden leaks, which suggest that when talking amongst themselves (so, not hawking anti-Tor FUD), the spooks confess Tor creates lots of problems for them. Good. We need to make even more problems for them, until THEY give up and go home.
Recalling that the capabilities of occasionally adversarial nations like Russia and China are comparable to those of the FVEY monster, GCHQ would hardly have deeply incorporated Tor into their own infrastructure if they thought that Russia or China could routinely deanonymize Tor users at will.
We are all caught up in an arms race pitting governments against (mainly) their own citizens. Tor alone is not enough, but because it has been continually developed, tested, attacked, fixed, and improved by smart researchers for a decade, it is one of the best understood and least unreliable platforms currently available to bloggers, journalists, organizers, whistleblowers, elected politicians, lawyers, and others (such as the spooks themselves) who often need to use secure anonymous communication.
An important point about NSA's human capital: while the agency's thousands of employees do include experts in arcane specialties in ELINT and such, for any novel problem (and these days many of its technical problems are novel, as the agency itself is constantly complaining), they try to bring in outside consultants. In particular, some of their tailored CNE (computer network exploits, i.e. malware) appear to be bought from "cybersurveillance-as-a-service" companies like Gamma International and Hacking Team, which also sell to the most oppressive governments on the planet, such as Saudi Arabia and Vietnam.
Citizen Labs of Toronto is a good example of the rising number of extremely smart and knowledgeable experts who are organizing to fight on-line oppression wherever it rears its ugly head. They just published an important new report in their long running series on exposing the misdeeds of companies like Gamma:
http://www.theregister.co.uk/2014/07/31/citizen_lab_alleges_middle_east…
Securobods claim Middle East govts' fingerprints all over malware flung at journos
Darren Pauli
31 Jul 2014
Citizen Labs has also just been profiled in Edward Snowden's hangout-of-record, Ars Technica:
http://arstechnica.com/security/2014/07/inside-citizen-lab-the-hacker-h…
Inside Citizen Lab, the “Hacker Hothouse” protecting you from Big Brother
Globe-spanning white hat network hacked for the Dalai Lama, inspired arms legislation.
Joshua Kopstein
31 Jul 2014
"I think we have to make
"I think we have to make certain assumptions. If this attack came through that university network, and the university is impervious to FOIA requests, it seems reasonable to assume the universities are effectively acting on behalf of the government or power itself. It's a convenient front."
No need to assume. Many FVEY universities (mostly in USA, but also some in UK and a handful in Canada, Australia, and even New Zealand) have certain research groups which are very closely tied to their spooks. These ties are even public information, although the evidence is obscure and rarely discussed by media organizations. Carnegie Mellon is one of the best known examples.
If you know who Carnegie was, and who the Mellons are, it makes perfect sense that CMU acts on behalf of the real power behind the USG: the money interests.
Never forget that, as even the US and UK governments privately acknowledge, over the next few decades the very notions of nation states and the rule of law will become irrelevant.
It is ironic in a way that in public they say such nasty things about "anarchists", because the fact is that they have themselves made the willing choice to abandon their responsibility to govern, in favor of encouraging the transition from laws and governance as humanity has known it, to "algorithmic governance". Which means "continual monitoring" by corporations who put us all into computer models fed by all that data the surveillance machinery is collecting 24/7/365, and use the model predictions to decide what "suasion" methods to apply to us individually in order to modify our behavior in ways which will benefit their bottom line. "Algorithmic governance" is the real anarchy, and it has been brought into being by the very governments which it is rapidly supplanting.
So chin up: NSA is doomed, even if we don't succeed in eradicating it within the year, as I hope we will. In the longer term, the USG may not be doomed to vanish entirely, but it will become irrelevant.
Many of the recent actions of the USG only make sense in view of its collective sense of desperation at losing all its power and prestige on the world stage. But of course it has people like Hayden and Alexander to thank for encouraging the growth of "algorithmic governance", so it nursed the viper at its own breast.
Watching the US and UK parliaments passing law after law which further reduces their own relevancy to governance reminds one of the self-destructiveness of the last stages of Romanov dynasty. That government also made decision after decision which further reduced its own ability to survive, as many intellectuals pointed out at the time.
Governments have always passed irrelevant laws; this doesn't change their ability to wage wars or imprison people. While the powerful are using big data more and more to make their decisions, they still remain powerful. We're a long way away from computers actually making the decisions and we'll probably never get there; the powerful like to maintain their hold on power.
"Even if traffic
"Even if traffic confirmation/sibyl attacks were not feasible on Tor, isn't encrypted traffic recording by States putting them at risk indefinitely in the future anyway?"
Perfect Forward Secrecy is our friend here.
The situation is dangerous but not hopeless.
Remember, fabulous researchers like Citizen Labs are working on behalf of The People. We are not entirely friendless, although we certainly have a dismaying variety of state-sponsored enemies.
" It may be governments with
" It may be governments with access to virtually the entire internet (NSA/GCHQ) blanket target Tor users, and assume they are doing something suspicious."
No need to suppose anything. This is verified fact, strongly documented in published documents from Snowden's trove.
So we all have a problem. But they have worse problems, so we can win the War on US.
In recent weeks, certain entities have mounted quite an effort to persuade the Tor userbase to abandon Tor entirely. We can and should turn such "suasion" right back upon them. How? Just think of ways in which we can make them stop and think before sending more malware our way.
One effective way to do this has been suggested, in a slightly different context, by Nathan Yee (Univ. of Arizona):
http://www.theregister.co.uk/2014/08/01/bust_comment_crew_with_this_arm…
Security chap writes recipe for Raspberry Pi honeypot network
Cunning security plan: dangle £28 ARM boxes and watch crooks take the bait
Darren Pauli
1 Aug 2014
1. Set Hidden Service Baits
2. Capture
3. Share with Citizen Labs, CCC, EFF....
4. Reverse Engineer
5. Analyze
6. Trace back to Source
7. Publish
8. Dissuade future attacks
Security researchers: who would you rather outwit? NSA/GCHQ or some inconsequential professional criminal?
I have a technical question:
As I understand it, the attack was independent of the way Tor gets used; or is there any difference regarding this type of attack between using Tor via e.g. the Browser Bundle, Tails, or Linux Liberté?
The attack was independent of the way Tor is packaged, but it's not independent of the way Tor is configured.
So yes, there isn't any difference in theory whether you're on Windows, Linux, Tails, etc.
But some of those provide different configurations for Tor, which impact the attack. For example, Tails configures its Tor to disable entry guards, so I think the attack would have worked much more quickly on Tails users.
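If you want to check what your own setup does here, a quick (unofficial) way is to ask the control port; this sketch uses the stem library and assumes ControlPort 9051 is enabled. UseEntryGuards and NumEntryGuards are the standard torrc options involved.

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # The defaults given here are just illustrative fallbacks.
    use_guards = controller.get_conf('UseEntryGuards', default='1')
    num_guards = controller.get_conf('NumEntryGuards', default='3')
    print('UseEntryGuards =', use_guards, '/ NumEntryGuards =', num_guards)
    if use_guards == '0':
        print('Entry guards disabled: every circuit rolls new dice on its entry relay.')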
Thanks for this information.
Why has Tails not released a statement about this? Can anyone fill a brother in?
Please update the MacPorts configuration to use version 0.2.4.23. Can't update my relay.
Who/what maintains that?
Never mind. I sent the guy an email.
https://trac.macports.org/browser/trunk/dports/security/tor/Portfile
Have any more relays been detected using this attack since the latest Tor update?
No.
We did find the node-Tor (javascript) relay implementations sending relay_early cells, but that was due to a bug in their implementation, which they fixed today. And those relays are experimental and tiny so not really a big deal either way.
This explains a lot about why attacks took place on the Tor network: http://www.dailydot.com/business/tor-hackers-break/
It does? That looks like another generic journalist article saying "there's this talk, but then it got cancelled".
I began getting some warnings in the Tor log (Tor Browser Bundle ver. 3.6.3). Would you take a look and tell me if there is something suspicious? http://pastebin.com/mrVch5Aa I can't reach some public sites through Tor. It looks like my provider can see public addresses. Moreover, I have already been given an official warning right inside the Tor Browser. How big is the threat? Could they see my traffic, or only the address I was looking for?
This all seems to be beating around the bush. What is the real impact here? What % of users can expect to be deanonymized in terms of having the hidden service they were connecting to known? If you've connected to a hidden service that's suspicious during the affected time period, should you be shredding your hard drive or what? I'd like a practical advisory.
I believe the problem is it's unknown. Also, shredding your hard drive won't do anything but make you look suspicious unless you did more than just connect to a hidden service.
1. Nobody can tell what % of users can expect to be deanonymized, given some hidden service address. There are too many factors involved.
2. The definition of "suspicious" depends on who and what
3. Sure, shred your hard drive. Couldn't hurt.
Hard drive shredding is not easy, and I'd imagine it actually could hurt you quite a lot if appropriate safety precautions are not taken. http://www.ssiworld.com/watch/hard_drives.htm
Probably better to fill the disk with zeroes and then install something innocuous like Windows XP. Maybe set the clock back a few years first for added plausibility.
Is this RELAY_EARLY attack any worse (better for the attacker) than the PADDING attack last year that also deanonymized hidden services and users? Was that attack ever mitigated in some way?
http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
http://arxiv.org/pdf/1308.6768v1.pdf
Good questions! It seems like the PADDING attack is at least as bad if not worse. And couldn't it be done exit->guard too?
can anyone answer this?
They're both instances of traffic confirmation attacks, and we don't have a general solution for traffic confirmation attacks.
I talk more about the issue here:
https://ocewjwkdco.tudasnich.de/blog/one-cell-enough
What about android tor users?
This attack is platform independent.
leave the country
"leave the country" Implying
"leave the country"
Implying TLAs can't hunt you down...
Were all servers that were affected located in the US?
I am routing my traffic through a handful of European nodes only; would that have limited the chances of Tor picking an affected entry guard?
No, they were wherever fdcservers had them -- in a variety of countries I believe.
Yes, it will help. But the signal sent by the rogue HSDir will be recorded by any logging entry guard that was used at the time, forever.
I'm thinking the advice to switch to Tails after last year's FH bust was bad advice.
Why was it bad advice? It is a great improvement against last year's attack vectors.
I think the big downside with Tails currently is the lack of persistent entry guards. If you (the user) have linkable activities, then sometimes an adversary (whether he runs relays or just watches points on the Internet) will be in a position to do traffic confirmation attacks. The exact definition of 'sometimes' is up for grabs, but I think it's pretty clearly more than with entry guards.
[Tails-dev] Added support for keeping entry guards
https://mailman.boum.org/pipermail/tails-dev/2013-May/003113.html
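(A back-of-the-envelope way to see why persistent entry guards matter: suppose, purely as an assumption, that 10% of guard capacity is malicious. A client that keeps one guard is exposed with probability about 10%; a client that picks a fresh guard every session is almost certain to hit a bad one eventually. The numbers below are arbitrary illustrations, not measurements.)

```python
# Rough comparison of guard strategies under an assumed adversary.

c = 0.10          # assumed fraction of guard capacity that is malicious
sessions = 100    # e.g. one non-persistent Tails session per day for ~3 months

# Sticky guard: exposed only if the single guard choice was bad.
p_sticky = c

# Fresh guard every session: exposed if *any* of the choices was bad.
p_fresh = 1 - (1 - c) ** sessions

print(f"sticky guard : {p_sticky:.2%} chance of ever using a bad guard")
print(f"fresh guards : {p_fresh:.2%} chance of ever using a bad guard")
```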
Seems that the biggest security hole in Tor is the Tor Browser Bundle.
http://www.wired.com/2014/08/operation_torpedo/
"For the last two years, the FBI has been quietly experimenting with drive-by hacks as a solution to one of law enforcement’s knottiest Internet problems: how to identify and prosecute users of criminal websites hiding behind the powerful Tor anonymity system.
The approach has borne fruit—over a dozen alleged users of Tor-based child porn sites are now headed for trial as a result."
The simple solution is:
Just restrict your communication to people that you know, and do not communicate with random webservers.
Solution:
1) run tor
2) run some program over Tor that lets you communicate with strong encryption enabled, and only with people that you know, instead of relatively arbitrary servers.
For this, applications like torchat, or retroshare are the way to go.
Web browsing will probably never be secure. It is unlikely that a browser with no security holes will ever be built. But on top of that, the process of browsing essentially means communicating with servers that you do not personally know. Your friend is much less likely to deliver malware to your computer than some webserver that you visit. To reduce this risk, you have to switch from browsing to targeted communication.
For the whole freedom hosting fuss from last year, see
https://ocewjwkdco.tudasnich.de/category/tags/freedom-hosting
And also let me point out how "there's that guy telling everybody to use retroshare" is a way to link your comments to each other despite the anonymity. :)
If those relays were removed, why are there 6189 relays online again? There were 59xx at the time of the attack...
https://metrics.torproject.org/network.html
(If the Tor network were static, it wouldn't be able to keep up with the growth in users.)
TOR IS UNDER ATTACK!
FBI was the only agency not to comment, so that pretty much confirms they have the data
See below; I think your conclusion doesn't follow.
http://www.reuters.com/article/2014/08/06/us-cybersecurity-hackers-tor-…
FBI
Wow, yeah. Two points for some defense spokeswoman for telling us that.
But the fact that FBI and CMU didn't answer the question the 15th time somebody asked it doesn't really tell us much. CMU seems to be sticking to their "I don't answer questions" advice from whichever lawyers decided that was the best way to handle things. And the FBI just never answers these sorts of questions in the first place.
Edit: though to be fair, the DoD just never answers these sorts of questions in the first place either. How odd!
"This particular project was
"This particular project was focused on identifying vulnerabilities in Tor, not to collect data that would reveal personal identities of users,"
So I assume these vulnerabilities identified by CMU will be disclosed to Tor at some stage; otherwise, what's the point of DoD contracting this?
Does the DoD statement "not
Does the DoD statement "not to collect data that would reveal personal identities of users" now give the Carnegie-Mellon University the green light to legally destroy the data.
DoD doesn't want people to stop using Tor. Hopefully they tell the FBI not to bust us.
Someone should go to CMU and kill those researchers.
they would be smarter to use a quadcopter with a gun attached to it controlled with a cellphone while they hide behind Tor
it would be a Tor fragging attack
Yeah, uh, please don't do this. We like researchers. That's how we understand privacy and security these days. That's how the papers on http://freehaven.net/anonbib/ come to exist. Many of us are active in the research community.
There is a lot of quite reasonable talk these days about "the real criminals", but it sure isn't those two researchers at cert.
Hi arma, I want an answer to a question that I don't really 100% get... If an agency took control of a bunch of Tor relays (and generally, with their funds, they could all do it), would this represent a serious problem for the whole network? What are the countermeasures?
Security experts call it a “drive-by download”: a hacker infiltrates a high-traffic website and then subverts it to deliver malware to every single visitor. It’s one of the most powerful tools in the black hat arsenal, capable of delivering thousands of fresh victims into a hackers’ clutches within minutes.
Now the technique is being adopted by a different kind of a hacker—the kind with a badge. For the last two years, the FBI has been quietly experimenting with drive-by hacks as a solution to one of law enforcement’s knottiest Internet problems: how to identify and prosecute users of criminal websites hiding behind the powerful Tor anonymity system.
Watch out, Tor team, because the FBI has you in a chokehold!