OpenSSL bug CVE-2014-0160

by arma | April 7, 2014

A new OpenSSL vulnerability, affecting versions 1.0.1 through 1.0.1f, is out today; it can be used to reveal memory to a connected client or server.

If you're using an older OpenSSL version, you're safe.

Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today. If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle.

Here are our first thoughts on what Tor components are affected:

  1. Clients: The browser part of Tor Browser shouldn't be affected, since it uses libnss rather than openssl. But the Tor client part is: Tor clients could possibly be induced to send sensitive information like "what sites you visited in this session" to your entry guards. If you're using TBB we'll have new bundles out shortly; if you're using your operating system's Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor. [update: the bundles are out, and you should upgrade]
  2. Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you're at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor's multi-hop design means that attacking just one relay in the client's path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys. (You will need to update your MyFamily torrc lines if you run multiple relays.) [update: we've cut the vulnerable relays out of the network]
  3. Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn't allow an attacker to identify the location of the hidden service [edit: if it's your entry guard that extracted your key, they know where they got it from]. Also, an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  4. Directory authorities: In addition to the keys listed in the "relays and bridges" section above, Tor directory authorities might leak their medium-term authority signing keys. Once you've updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don't rotate that yet. We'll see how this unfolds and try to think of a good solution there.
  5. Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  6. Orbot looks vulnerable; they have some new packages available for testing.
  7. The webservers in the https://sedvblmbog.tudasnich.de/ rotation needed (and got) upgrades. Maybe we'll need to throw away our torproject SSL web cert and get a new one too.

Comments

Please note that the comment area below has been archived.

April 07, 2014

Permalink

We have been speculating about whether a Tor relay could be made to leak information about the IP address of the next hop in a circuit, since an arbitrary memory leak is possible. Then, in theory, one could walk all nodes in a circuit to eventually uncover the other end of the circuit... (if all nodes in the circuit are linked with vulnerable OpenSSL). Is that somehow prevented by the implementation design?

Sounds doable in theory (no idea about practice, but we should assume so).

Arbitrary memory leaks are bad news.

Another fine reason to get relays to update (many of the big ones are in the process of updating right now).

April 07, 2014

In reply to arma

Permalink

Do heartbeat messages go both ways? If so, can a relay also theoretically read a tor client process's memory?

Is Tor working on removing http://opensslfoundation.com/ as a dependency yet? It seems riddled with bugdoors. Personally, I'd like to avoid using software from Maryland.

Yes, heartbeat messages can go both ways. See the "clients" section above.

Removing the openssl dependency, and replacing it with what? The world is missing an actually good crypto library.

Yes, maybe. There aren't [m]any good crypto libraries out there to choose from. It's not clear to me that libnss is any better -- at least people *find* some of the bugs in openssl. :)

April 08, 2014

In reply to arma

Permalink

Since this attack relies on the entire chain being owned, a mix of libraries will prevent any single compromise from owning the system.

By "chain" I assume you mean "Tor circuit".

In that case check out the comment at the top:
https://ocewjwkdco.tudasnich.de/blog/openssl-bug-cve-2014-0160#comment-55451

And then imagine fetching the memory from the relay that turns out to be Alice's entry guard, and also fetching it from the relay that turns out to be Alice's exit relay.

So the "entire chain" isn't needed. Maybe.

April 13, 2014

In reply to arma

Permalink

Entry guards or exits could use different crypto than other relays. Would you consider having exits use lighter weight encryption to ease the load on their CPUs?

April 08, 2014

In reply to arma

Permalink

"Replacing it with what?".

Isn't OpenSSL somewhat bloated? Tor does not need the many ciphers and operation-modes implemented in OpenSSL. Tor could operate just fine using a single cipher and mode of operation. That can be implemented without the baggage of a huge crypto library.

Yes, OpenSSL is a massive library, with several cipher suites. The protocol has seen many years of (from time to time, shaky) service with a huge install base.

The benefits of keeping the same code versus rewriting are well known. I don't like OpenSSL, and would love to see it fixed in so many ways. Better test suites, static analysis, real verification, additional cipher suites, fixes to the protocol design all spring to mind; some of these cannot be done in a backwards-compatible way. It can be rescued, it's just a herculean effort.

Having lots of ciphers is very useful. For instance, when BEAST, CRIME, etc. came along and exploited padding oracles that were only present in block ciphers, servers could switch to RC4. When RC4 was shown to be broken, but the previously mentioned attacks had been fixed, we could switch back. This flexibility is crucial for responding quickly to new attacks, and for providing a smooth migration path for users.

April 07, 2014

Permalink

So in practice, what does this mean for Tor? Could an adversary like the NSA completely unmask the entire Tor network without anyone knowing, or could they unmask a user that connects to a compromised or honeypot website?

This is quite scary...

Completely unmask the entire Tor network? Not anymore, since many relays have upgraded. But before the vulnerability was announced? Who knows.

A compromised website won't be a good place to launch an attack, since the Tor Browser shouldn't be affected by the bug, and the website doesn't interact with the Tor client at the link encryption layer.

But an entry guard (the first Tor relay you connect to) can potentially read client-side memory. See the 'clients' section above.

April 07, 2014

In reply to arma

Permalink

So if I had done something "bad" in the past before the CVE was out, how much should I worry? By "bad" I mean things on the level of drug dealing, child porn, dissidence from nasty countries, etc. (not that I actually _do_ those specific things, but hypothetically if I did something on that level). Should I toss all my online pseudonyms out the window? I'm not quite sure what _practical_ steps I should take to ensure my safety.

By "bad" I mean things on the level of drug dealing, child porn, dissidence from nasty countries, etc.

If you deal in drugs, you should pack your bags immediately and head for Mexico, Colombia or Honduras. You will find sanctuary there with like-minded people.

If you indulge in child porn, you should head for Russia. I heard some of Putin's men are child porners.

If you are a political dissident, you are safe. NSA and GCHQ will never uncover your activities or reveal your identity to North Korea, Iran, China, Turkey, Pakistan, etc.

"If you deal in drugs, you should pack your bags immediately and head for Mexico, Colombia or Honduras. You will find sanctuary there with like-minded people." -

Well, not necessarily. Mexico, Colombia, Honduras, and some other countries produce huge quantities of drugs, but the drugs are not for them; they are for the worldwide drug-hungry consumer countries like the US and the EU's members. In fact, the US is the winner on this subject: it's the greatest drug consumer in the world.

"if you are a political dissident, you are safe. NSA and GCHQ will never uncover your activities or reveal your identity to North Korea, Iran, China, Turkey, Pakistan, etc."

Well, not necessarily. If you are a political dissident toward US policy, or a journalist with a profound commitment to the US Constitution's freedom guarantees and the law, who looks for the truth and nothing but the truth about the US government's illegal activities perpetrated against American citizens and against the governments of countries that are not fond of US policy, then you should be careful: they may well call you a whistleblower and persecute you around the world, even though you know your duty and responsibility will always be to release those offensive government activities to public opinion. Hence, you may be in serious trouble, especially if living on US soil.

If you believe NSA or GCHQ wouldn't shop your ass to the security services, I've got a bridge I want to sell you. Either of them will do anything to anyone precisely as it suits their perceived needs (which mutate constantly).

I disagree. Vulnerability disclosure starts with the source and, based on severity, escalates quickly to vendors. I know of at least two major OS vendors that were blindsided by this. They did a great job of releasing patches quickly, but there will be serious fiscal impact, some of which could have been mitigated.

April 07, 2014

Permalink

Would the disclosure be limited to the memory that belonged to the OpenSSL process?

April 07, 2014

Permalink

I made a tool to check the status of your SSL and see if heartbeat is enabled. If it is, you should run this command: openssl version -a

Ensure your version is NOT 1.0.1f, 1.0.1e, 1.0.1d, 1.0.1c, 1.0.1b, 1.0.1a, 1.0.1, 1.0.2-beta1

Tool at: http://rehmann.co/projects/heartbeat/
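The version check described above can be sketched in a few lines. This is an illustrative helper, not part of the linked tool; it does an exact match against the list in the comment, and note that distro backports can make the raw version string misleading.

```python
# Sketch: decide whether an OpenSSL version string falls in the
# Heartbleed-vulnerable range listed above (exact-match check only;
# distros sometimes backport the fix without bumping the version).
VULNERABLE = {
    "1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
    "1.0.1d", "1.0.1e", "1.0.1f", "1.0.2-beta1",
}

def is_vulnerable(version: str) -> bool:
    return version in VULNERABLE

# Typical use: feed it the second field of `openssl version` output.
print(is_vulnerable("1.0.1e"))  # True
print(is_vulnerable("1.0.1g"))  # False: 1.0.1g carries the fix
```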

April 08, 2014

In reply to arma

Permalink

Well, anyone who runs a site and is affected by this should call their host and find out. :)

April 14, 2014

In reply to arma

Permalink

Discarding CA keys is unnecessary, as only SSL keys, not key-signing keys, are affected.

Here's a quote I heard today: "When it comes to deciding between maliciousness and incompetence in OpenSSL, there's a whole lot of incompetence to go around."

It's actually really hard to write a secure crypto library that implements all the things openssl implements.

So I guess that means my answer is "not necessarily".

April 12, 2014

In reply to arma

Permalink

In that case, if some guys have unlimited resources, IMHO they have known about this bug from day zero. Take just 100 programmers and ask them to watch OpenSSL development; surely they'd catch this bug at once.

April 18, 2014

In reply to arma

Permalink

Agreed. Another quote, from Napoleon (incorrectly ascribed to an American, who had just repeated it and became the "author"): "don't ascribe to malice what can plainly be explained by incompetence".

April 08, 2014

Permalink

Out of curiosity: if the webserver uses Diffie-Hellman for the SSL key exchange, old and new traffic should still be secure, even if the cert was leaked, right?

(You obviously would want to replace your cert either way, but as I said, curiosity).

The vulnerability has been there for 2 years. No one can guarantee that it wasn't exploited earlier to extract private keys, or that you didn't do your Diffie-Hellman key exchange with a man-in-the-middle who possessed the right key.

If you're ruling a MITM out, then yeah, that's perfect forward secrecy, and you should be good to go with that old traffic.

It should still be safe against a passive attacker -- that's one of the nice features of PFS in handshakes. They have to actually mitm every connection, or they don't get to learn the session key that's computed for that connection.

If the server is using forward secrecy (DHE or ECDHE cipher suites) old traffic is secure, but if the certificate is leaked new traffic can be MITMed. The actual key exchange algorithm doesn't matter.

No, the old traffic isn't always secure. You're still screwed if TLS session tickets are in use and the server hasn't restarted and cleared state. It's rare, but google for more.
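A quick way to tell which case you're in is to look at the key exchange of the negotiated suite. A small sketch, assuming the suite names printed by `openssl s_client`:

```python
# Sketch: classify a negotiated cipher suite name (as printed by
# `openssl s_client -connect host:443`) by whether its key exchange
# is forward secret. Prefixes follow OpenSSL suite-naming conventions.
def forward_secret(cipher: str) -> bool:
    # Ephemeral (EC)DH suites derive a fresh key per connection, so a
    # later certificate-key leak can't decrypt recorded traffic.
    return cipher.startswith(("ECDHE-", "DHE-", "EDH-"))

print(forward_secret("ECDHE-RSA-AES128-GCM-SHA256"))  # True
print(forward_secret("AES256-SHA"))  # False: plain RSA key transport
```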

April 08, 2014

Permalink

Hi. What do you mean by
«best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys»
?

1. aptitude update -> aptitude safe-upgrade
2. rm -rf /var/lib/tor/keys/*
3. /etc/init.d/tor restart

Is this correct? Or is the second step superfluous (or erroneous)?
Thanks in advance for your attention.

Well, the capacity of the tor network would be harmed a lot now if everybody were doing this, right? As I understand it, the new relay would need to go through "unmeasured" and "measured" phases, during which the full capacity is not used.

Do we have some data on how many relays actually changed the keys during update?

Correct. Once they get measured by the bwauths, which should take just a couple of days, they should ramp up. Hopefully it won't be too bumpy.

As for data, sort of, but certainly nothing comprehensive yet.

We got the advor person to rename it from advtor, since "advanced Tor" is a poor name when what you're doing is adding a bunch of un-audited patches to Tor. Maybe it is better, maybe it is worse, most likely it's a combination (which in sum is probably not a great result for its users).

So, feel free to use a program called advor if you find one on the internet, if you really want to, but know that it's not endorsed or checked or anything by the Tor people.

April 08, 2014

Permalink

I wonder if this is related to an observation concerning Google's recaptcha service.
Various web services embed Google's captchas before they let you use their service.
I observed many times that after solving the captcha the SSL connection to Google on port 443 remains open for a long time.
This could be 30 minutes or longer.
Would it make sense regarding the heartbeat bug that Google keeps those connections open to read parts of the memory of connected Tor clients on a large scale?

April 08, 2014

Permalink

I've been to some really bad websites lately. I'm quite PC illiterate. Please explain clearly where I must go and what I must do to ensure my safety. I went through Tor to these sites. Was that enough?

Are you one of the tiny fraction of Tor users who gives Tor a bad name by trying to reach child porn sites? If so, please go away and stop using Tor. That's not what Tor is for, and you're hurting us.

(If not, I'm sorry I yelled at you.)

April 08, 2014

In reply to arma

Permalink

LOL talk about jumping to conclusions. There are plenty of websites that are considered "bad" in certain countries that in the west we wouldn't even bat an eye at.

April 08, 2014

In reply to arma

Permalink

I don't like it when people assume that just because someone said they went on bad websites it has to be child porn, even from a Tor dev. Never mind that much of what people think is child porn is not actually what it seems (e.g. jailbait, not rape or real children, though those can still be bad); there are so many other websites that various governments or societies consider bad, whether they concern drugs, various religious or atheist views, political views, or even freedom of speech. If anything, we need to help anyone who asks, not shun them or throw out so much suspicion just because they say "bad websites" and the first thing that comes to mind is the most emotional worst-case scenario.

April 08, 2014

Permalink

It's time to switch to 4096-bit RSA cryptosystems. I'm on a VPN using AES-256 for encryption, SHA-256 for data authentication, and 4096-bit RSA for the handshake, on a 1.7GHz processor with 2GB memory, and I have never EVER had any issues concerning speed, never experienced lag or hiccups; on the contrary, I couldn't tell the difference between switching the VPN on and off.

Bumping up key sizes is a fine idea. And it's for that reason that we switched to much stronger ECC for our circuit handshakes, and for link encryption when it's available:
https://gitweb.torproject.org/tor.git/blob/tor-0.2.4.21:/ReleaseNotes#l…

But switching to stronger cryptosystems is not what this vulnerability is about. Even if you had switched to 4096-bit RSA, this vulnerability would be just as bad for you.

April 08, 2014

In reply to arma

Permalink

Which is better, ECC or RSA?
And I do second the OP: stronger cryptosystems will put the minds of laymen at rest, after fixing the current vulnerability of course!

"It depends" is the only answer that can fit in a blog comment.

At this point they're both likely to be stronger than other components in the system (as we learned this week).

April 08, 2014

Permalink

This bug exposes just how bad the NSA's circumventing of encryption really is. Watch everyone panic over this, yet the NSA and probably foreign intelligence services have even better exploits. This is a reminder to myself that nothing is safe or secure online.

Yes.

And I'm not sure if I should feel happier or sadder to imagine a world where these are the *accidental* bugs that we, the security community, introduce.

Security sure is hard, even without government-level adversaries.

April 08, 2014

Permalink

I travel a lot, like every week, and sometimes every 2-3 days I'm in a different country. Should I download TBB in every country I'm in, to be safe, or is there no problem with using the same TBB I downloaded many countries ago?

As long as you have the latest TBB, there should be no difference between fetching it from one country vs fetching it from another country.

But be sure to check the signatures on it, to make sure it really is the TBB we made for you.

April 08, 2014

Permalink

Is there any way to check the first relay my client is connected to for the vulnerability?

April 08, 2014

Permalink

I can't believe how few people on this and other sites have even mentioned using a VPN. Running a VPN server on the inside IP of your -second- router with Gargoyle or OpenWrt on it would protect you against any vulnerabilities in SSL/TLS. Don't use Tor or surf the web without it. Configuration of the VPN server/client is quite simple on Gargoyle.

Wait, what? No, using a vpn would not protect you from "any vulnerabilities in SSL/TLS". For example, if you go to an https website with your browser, and it goes through your vpn, and there's a vulnerability in your browser's SSL library, the vpn does not help you.

For another example, if you use Pidgin to talk to an xmpp server over ssl, and it goes through your vpn, a vulnerability in openssl (like the one in this post) will be bad news for you.

It's about what applications you use, not about how you transport your traffic. Notice that that statement is true in the same way for Tor itself.

April 08, 2014

In reply to arma

Permalink

I see, you mean in terms of anonymity, of course.
I would think a VPN would still encrypt the content, though.
Or am I wrong?

I don't mean in terms of anonymity.

The vpn does encrypt the content, but the unencrypted content is still exposed on both ends. And that's where the vulnerability is.

April 08, 2014

In reply to arma

Permalink

Vexing (bug) and enlightening.
The key lies in memory.
I'm glad I updated my systems, but we'll have to wait for servers to do the same.
Looking forward to the latest TBB as per usual.
Keep up the good work.

April 13, 2014

In reply to arma

Permalink

And Tor encrypts already-encrypted content, right? What info can be collected from that? Endpoints? What else?
To prevent similar problems, never collect and keep unnecessary historical information.

Am I right that the entry guard potentially can see data for the exit relay? tor-tls(tor-tls(tor-tls(tls(plain))))

For the general encryption question, you might like
https://svn.torproject.org/svn/projects/articles/circumvention-features…

And no, you are mistaken that the entry guard gets to see data for the exit relay. The Tor client does, but that's the one that you run under your own control so it's ok that it does. (Otherwise there would be a point in the network that gets to both see you and learn where you're going, which is exactly what Tor's decentralized design aims to avoid.)

https://sedvblmbog.tudasnich.de/about/overview
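The layering in the comment above can be illustrated with a toy script. This is purely pedagogical: XOR with a hash-derived keystream stands in for Tor's real per-hop AES-CTR, the three-hop circuit and key names are assumptions, and nothing here is Tor's actual code.

```python
from hashlib import sha256

# Toy onion encryption for an assumed 3-hop circuit. Each hop shares one
# symmetric key with the client; XOR-with-keystream stands in for AES-CTR.
def xor_layer(data: bytes, key: bytes) -> bytes:
    stream = sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

hops = [b"guard-key", b"middle-key", b"exit-key"]
cell = b"GET / HTTP/1.0"
for k in reversed(hops):                # the client wraps one layer per hop
    cell = xor_layer(cell, k)

peeled = xor_layer(cell, hops[0])       # what the guard sees after peeling
assert peeled != b"GET / HTTP/1.0"      # its own layer: still ciphertext

for k in hops[1:]:                      # the remaining relays peel theirs
    peeled = xor_layer(peeled, k)
print(peeled)  # plaintext appears only after the exit's layer comes off
```

So the guard never sees the exit's data: only the client, which holds all three keys, could remove every layer at once.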

April 08, 2014

Permalink

I scanned all the nodes in consensus for Heartbleed,
with the following results:

at least 530 nodes are VULNERABLE,
at least 2995 are NOT VULNERABLE,
and the rest I don't know, because of network timeouts or something.

I did not check for rekeying.

To test for yourself, do this:
- get IP:ORPORT list of relays (grep the microdesc-consensus)
- get the script https://github.com/noxxi/p5-scripts/blob/master/check-ssl-heartbleed.pl
- run it against each IP:PORT
- count the results
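The counting step can be sketched as follows. The "IP:PORT STATUS" line format is an assumption about how one might record the per-relay output of the linked script, not something the script itself produces:

```python
from collections import Counter

# Sketch: tally per-relay scan results of the form "IP:PORT STATUS"
# into totals like those reported above (line format is an assumption).
def tally(lines):
    return Counter(line.split()[1] for line in lines if line.strip())

results = [
    "1.2.3.4:443 VULNERABLE",
    "5.6.7.8:9001 OK",
    "9.9.9.9:443 VULNERABLE",
]
print(tally(results))  # Counter({'VULNERABLE': 2, 'OK': 1})
```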

Yep. I think it's more like 1000 relays vulnerable at this point.

But counting relays is the wrong way to assess the vulnerability of the Tor network as a whole. You should ask what fraction of total consensus weight, or what fraction of advertised descriptor bandwidth, is vulnerable.

Otherwise you fall into the trap of various past researchers who see 500 relays on Windows, claim Windows is vulnerable, and conclude they can compromise 10% of the Tor network. This is true except that 10% of the Tor network doesn't see 10% of the users or traffic.

Now, in this case a big fraction of the network by weight *is* vulnerable. I mailed the top 400 relay operators last night to tell them about the bug. You can also follow along the tor-relays list:
https://lists.torproject.org/pipermail/tor-relays/2014-April/thread.htm…

Eventually we'll start taking away the Valid flag from fingerprints of relays who were known to be running with a vulnerable openssl.

It's going to be a bumpy couple of days / weeks.

April 10, 2014

In reply to arma

Permalink

Hi! Sorry if I ask something stupid; I'm not an advanced user. I don't quite understand what these numbers mean to the average user. Can I assume that there is a good chance that a few of my connections weren't vulnerable at all?
If I understand correctly, someone could probably have connected my IP with my traffic. Was this an easy thing, or hard to achieve? What I'm trying to figure out is: how probable is it that someone kept their anonymity?

April 08, 2014

Permalink

keep up the good work tor and hopefully in a few days everyone above can carry on with all their "bad" stuff ;-) lel

April 08, 2014

Permalink

What the hell is OpenSSL? I never installed anything called OpenSSL. Can't anyone write who is affected and what needs to be done in easy to understand words?

Every freaking time something happens everyone seems to start speaking binary...

OpenSSL is a library which many programs and websites (including but not limited to tor) use to do cryptography.

A critical security vulnerability was found in this library yesterday. Just about everyone who uses tor (and the whole Internet, in fact, not just tor) is affected in some way.

What needs to be done is for an updated Tor Browser Bundle (coming soon) to be released, and for all users to upgrade. The relay operators also need to update their relays, and generate new keys for them.

Unfortunately, there isn't a lot more a user can do than that. Nobody knows if anyone has actually been attacked using this vulnerability, and even if they were, it would be basically impossible to find out. That's why everyone is scrambling today.

April 08, 2014

Permalink

How many directory authorities were vulnerable? Assuming more than half, how soon can we expect a new tor build with their new keys?

Two or three of the nine directory authorities were unaffected, and the rest were vulnerable.

The long-term authority identity keys are unaffected, since they're kept offline.

The medium-term authority signing keys were affected, and they've been rotated (except for two authorities, which are offline until they can be brought back safely). Rotating them makes things a lot better, but still not perfect.

And the relay identity keys might have been taken, but really that's more of a hassle than a security thing -- if we rotate them, existing clients will scream in their logs that they're being mitm'ed, and more importantly existing clients will refuse to proceed, even though the directory documents they fetch are signed with other keys.

It's not clear to me yet that this change is worth a flag day. Hopefully we can do it more smoothly.

April 08, 2014

Permalink

Is there going to be a new OS Tor version coming out, or do I have to update OpenSSL? I have no idea how to do that. Any help??? Please

New releases of the Tor Browser Bundle which include the security upgrade are on the way.

If you use another version of Tor distributed with your operating system, you should ask the people who maintain that package for your operating system. You should also ask those who maintain OpenSSL for your operating system.

April 08, 2014

Permalink

Let me get this straight: an attacker can know what relays and clients are communicating with each other? And de-anonymize traffic?

April 08, 2014

Permalink

OK, I'm not as tech-savvy as many on here. Am I understanding this correctly though?

The bug does not allow your computer to be directly compromised or identified? It can allow people to see what data is going backwards and forwards though. It can allow the first computer in Tor to see what other websites you have visited in this session and what cookies you have?

So if you follow Tor's advice and never enter personally-identifying information, you are still safe; it is not like the last big problem then?

Thanks for any clarification; there must be lots like me who don't really understand the significance of this.

I would also really appreciate an answer to this. I am still pretty confused about whether or not the Tor Browser Bundle was compromised.

It can allow the first relay in your Tor circuit to see what other websites you have visited in this session, yes. That's because your Tor client might keep past destinations (e.g. websites you visited) in memory, and this bug allows the SSL server (in this case, the first relay in your Tor circuit) to basically paw through your Tor client's memory.

If you visit https websites using your browser, though, your Tor client will never have your web cookies in its memory, because they would be encrypted -- just as the exit relay can't see them because they're encrypted, so also your Tor client can't see them because they're encrypted:
https://svn.torproject.org/svn/projects/articles/circumvention-features…
https://www.eff.org/pages/tor-and-https

The Tor Browser, which is based on Firefox, is not affected. But your Tor client, which is the program called Tor that comes in the Tor Browser Bundle and that your Tor Browser proxies its traffic into, is affected.

I'm sorry this is complicated. If the above doesn't make sense, a little box on a blog comment isn't going to fix it for you. I recommend starting at
https://sedvblmbog.tudasnich.de/docs/documentation#UpToSpeed
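Mechanically, the "pawing through memory" works because of how the heartbeat message is built. A minimal sketch of the message format (field layout per RFC 6520; the TLS record framing and padding are omitted, and this is not OpenSSL's actual code):

```python
import struct

# Sketch of the malformed heartbeat request behind CVE-2014-0160.
def heartbeat_request(payload: bytes, claimed_len: int) -> bytes:
    # 1 byte type (1 = request), 2-byte big-endian payload_length, payload.
    return struct.pack(">BH", 1, claimed_len) + payload

# The attack: claim 16384 bytes but send only 3. A vulnerable peer trusts
# the claimed length and echoes back ~16 KB of whatever sits in memory
# next to the 3-byte payload -- keys, cookies, visited destinations.
msg = heartbeat_request(b"hat", 16384)
print(len(msg))  # 6: just 6 bytes on the wire, yet ~16 KB can come back
```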

April 08, 2014

In reply to arma

Permalink

Hidden services are encrypted end to end, right? Would they show in memory, exposing the user?

They could potentially show up in memory, yes. That's because one of the ends *is* your Tor client, and the other end is the hidden service itself. If the entry guard ran this attack on either end, it could potentially learn things.

April 08, 2014

Permalink

It can also get passwords, can't it? So we should all be visiting all of the places we have passwords for, like Amazon, eBay, and websites where we leave comments, and checking to see if we need to change our passwords for any of them also?

Yes, maybe. But that's a question about those websites and their past security practices, and has nothing to do with Tor or the Tor Browser Bundle.

That is, if it was a problem, it was due to security at the webserver end, not the web browser end.

April 08, 2014

Permalink

Would my privacy and anonymity be safe if I use Tails to visit vulnerable clearnet/.onion sites and use vulnerable relays?

April 08, 2014

Permalink

I was running a virtual machine, and I used both the guest OS (Tails) and the host (Windows) to visit vulnerable sites. Could the attacker read and decipher the memory content of the virtual machine?

As mentioned before: Tails was not exploitable because it is using an older OpenSSL that doesn't support the heartbeat feature. You got lucky there, pal. ;)

April 08, 2014

Permalink

A client side question:

What exactly does it mean that "... the Tor browser is not affected but your Tor client is affected ..."? I understand that when one operates the Tor Browser, it connects to the Tor network through the Tor client. What kind of data could leak out from the Tor client?

You can think of the Tor client like your VPN provider. Or if that metaphor doesn't make sense, think of it like your network.

For example, when you're sitting at Starbucks just using the Internet directly, people next to you get to see your network traffic and learn things about what you do -- if you only go to https sites then they learn which sites you go to but not what you send since it's encrypted. And if you go to an http site then they learn not only where you went but also what you said.

The Tor client sees exactly this same information.

Or if that metaphor didn't make sense, but you know about the Tor exit relay issue: the Tor client gets to see everything that all your exit relays get to see.

No, I think this is bad advice. TBB ships with its own libssl.

(The answer to the original question, if you're a normal user, is "you can read the TBB changelog to find out". If you're not a normal user, there are plenty of ways you can actually learn on your own, none of which fit into a little blog comment box.)

April 09, 2014

Permalink

I'm guessing those people using TBB 3.6 Beta-1 are affected but have nowhere to go for Pluggable Transports except the last stable TBB-PT, which has its own security problems, hence the update to a newer Firefox ESR.

April 09, 2014

Permalink

Another out-of-bounds and pointer-arithmetic bug... shouldn't we just walk away from C and C++? These two languages simply make it too easy to make bad mistakes like these... I am quite sure that just by dropping them we could improve our security a lot. It is simply too hard for a human being to write correct programs in such languages.

Also... C was a good idea back in the '70s... but with the hardware of today it makes little sense to use it (except for OSes)... it doesn't even support multi-core processors natively (you need an external library to use threads, which is heavy and memory-hungry, and the language itself has no safe native way to do inter-process communication).

Most applications of today, Tor included, could be written in a modern language, without pointer arithmetic (which is unnecessary) and with a garbage collector that frees the developer from having to remember to allocate and free memory. What about Google's Go, for instance? It is fast, it is designed to make it easy to write error-free software, it natively supports multi-core and multi-threaded programming in a lightweight way, it natively supports inter-process communication, it needs no pointer arithmetic, and it has a garbage collector...

Really, it is time to evolve. Especially if we care about security.

You are jumping to conclusions: behind your magic VM curtain everything behaves as before, and there memory, addresses, and "pointers" are the fabric every program runs on.

With VMs -- which actually JIT-compile your bytecode into native code -- you are just shifting your dependence onto layers upon layers of layers... If your bottom layer has a vulnerability, your program itself might be untouched, but the outcome is no different.

A VM is just a layer, and if one layer breaks, your program is broken; the same goes if you replace "layer" with "sandbox".

What is really needed is a best-practice "book" of code examples that are vulnerable, and of how to do things better.

You are wrong. I am not jumping to anything.

I have several decades of experience in programming, in more languages than I can remember, and I know perfectly well what I am saying.

And no. No "best practice book" or amount of experience, no matter how long, can help you with this. This is the usual excuse that C and C++ fanboys throw at you whenever you tell them the pure and simple truth: the only thing that DOES help is a language that simply stops you from doing stupid things and that takes care of tasks, such as memory management, that are better left to the machine itself.

Please also note that those who work on complex projects such as OpenSSL are usually very experienced programmers with a huge background in security and "best practices"... they try their best to avoid bugs... yet they DO make mistakes, as this (very serious) bug proves. Probably now you see my point. No matter how many books you read, no matter how much experience you have, you will make mistakes. More so if the language seems to be DESIGNED to work against the developer.

And yes, at the bottom of every VM there is low-level code. So what? I did not claim that this solution would be perfect... I said that it would be BETTER.

In a few words... it is much harder to find and exploit bugs in a VM (which gets better and better over time) than to find and exploit bugs in the thousands (or even millions) of lines of code of every piece of software ever made, because... well... you are human... and humans make mistakes. Humans forget to free a block of allocated memory; machines don't. Humans make math mistakes such as accessing an index outside an array of bytes; machines don't. Humans forget to release resources such as open files; machines don't. In simple words... machines are just better than humans at certain tasks... so we should let them do those tasks in our place. And unfortunately C and C++ fail at this.

We cannot completely avoid bugs -- it is in our nature to make mistakes -- but we can make the surface for a possible attack smaller. My proposal is exactly this: shrink the attack surface by using better, newer languages that make it easy to write better software and harder to make disastrous mistakes.

Well, I think that C and C++ are still the best languages out there for doing actual computation, or more generally for anything where speed is important...unless you want to break down and code in assembly language.

That being said, I agree that they should not be the medium for network-facing applications. Type-safe languages may be the only practical way to prevent buffer-overrun attacks.
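To illustrate that point with a toy sketch (not from the post): the same out-of-bounds read that C performs silently is stopped cold by a bounds-checked runtime:

```python
# Toy illustration: a memory-safe runtime refuses the out-of-bounds
# read that C would silently perform on adjacent memory.
buf = bytearray(b"bird")

try:
    _ = buf[10]          # index past the end of the 4-byte buffer
except IndexError as e:
    print("caught:", e)  # the runtime raises instead of leaking bytes
```

The whole class of Heartbleed-style over-reads disappears when every indexing operation carries this check.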

Well, to my understanding this bug has been there for a year or more.
If so... then the NSA could have found it and used it to deanonymize the whole Tor network...

This could explain how they found SilkRoad and Freedom Hosting servers.

Although I still believe that they used vulnerabilities on the servers themselves to gain root access and find out the real IP.

Roger, how much work would it be to make Tor use PolarSSL and GnuTLS?

I think it would be good if relay operators could run Tor on a mixture of operating systems and SSL libs.

I think if we're going to do that, and maintain them all, we should seriously consider switching to a link encryption that doesn't use the SSL protocol at all.

That said, it shouldn't be *too* hard technically. Check out src/common/crypto.[ch] and src/common/tortls.[ch].

April 09, 2014


Now the Tor expert bundle 0.2.4.21 ships OpenSSL 1.0.1g.
Does this fix the bug for the client?
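For what it's worth, a quick sanity check of which OpenSSL a program is linked against can be done from Python's ssl module. Caveat: this reports the library that Python itself links, which is not necessarily the copy a bundled Tor binary carries; Tor logs its own OpenSSL version at startup.

```python
import ssl

# Reports the OpenSSL this Python interpreter is linked against --
# a quick check of the system library, not the bundled Tor copy.
print(ssl.OPENSSL_VERSION)        # e.g. "OpenSSL 1.0.1g 7 Apr 2014"
print(ssl.OPENSSL_VERSION_INFO)   # numeric version tuple
```

Any 1.0.1 version before 1.0.1g is vulnerable; 1.0.0 and earlier never had the heartbeat bug.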

April 09, 2014


Isn't it kind of hilarious that human beings can both design cryptosystems that take multiple lifespans of the universe to break and yet also have them be undone by simple memory management bugs? When will the software development community adopt formal verification as a standard practice for critical programs? Save us BitC.

Um, for the record: in general, verifying the correctness of an arbitrary computer program with another program is impossible... theoretically, you can't even tell whether a program will finish executing; this is in fact what motivated the Turing machine in the first place.

I should qualify this by pointing out that this works a little differently for quantum computers... that is why Lockheed bought the D-Wave machine they keep at USC, actually. But, that caveat notwithstanding, I think the answer to your question about adoption of code verification is, more or less, "not any time soon."

April 09, 2014


I have updated to the newest TBB, and when I go to https://www.howsmyssl.com/ it tells me that my browser is still vulnerable. That's due to the use of TLS 1.0 by Firefox 24.4. Does this affect browsing through Tor in any shape or form?

FF is capable of using TLS 1.2, but it's not enabled in FF 24 or even 26, as far as I know. I can fix this by modifying security.tls.version.max to 3 and security.tls.version.min to 1 in about:config. Would this modification single me out among other Tor users in any shape or form?

Thanks.

I can fix this by modifying security.tls.version.max to 3 and security.tls.version.min to 1 in about:config .

For the benefit of those who are not computer savvy, could you outline in greater detail on how to make those modifications to FF 24 and FF 26?

And what do Tor developers have to say about FF 24.4 using the very old version of TLS 1.0 instead of 1.2? Let us hear what they (through arma, probably) have to say.

- type about:config in the URL bar, then click on "I'll be careful, I promise!"
- search for "security.tls.version.max", double-click on it, and change the number from 1 to 3
- (optional) search for "security.tls.version.min", double-click on it, and change it from 0 to 1 or 2
- search for "security.ssl3.rsa_fips_des_ede3_sha" and double-click on it; that sets it to "false"
- search for "security.enable_tls_session_tickets" and double-click on it; that sets it to "true"

(thanks to https://blog.dbrgn.ch/2014/1/8/improving_firefox_ssl_tls_security/ and https://blog.samwhited.com/2014/01/fixing-tls-in-firefox/)

You may have to restart Tor Browser to make sure the changes take effect.

You can now re-test on https://www.howsmyssl.com/; it should now say "Probably Okay".

Note: SamWhited says that these settings were disabled by default because "Firefox is vulnerable to downgrade attacks"; I definitely don't know which option is better.

Note 2: you may want to check your browser fingerprint on https://panopticlick.eff.org/ before and after the changes. For me it was OK.
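For what it's worth, the same about:config changes can be made persistent by placing them in a `user.js` file in the browser profile directory. This is a sketch of the settings from the list above (pref names as they existed in Firefox 24/26; the downgrade-attack and fingerprinting caveats from the notes above still apply):

```
// user.js -- applied on every browser start
user_pref("security.tls.version.max", 3);   // allow up to TLS 1.2
user_pref("security.tls.version.min", 1);   // require at least TLS 1.0 (0 = SSL 3.0)
user_pref("security.ssl3.rsa_fips_des_ede3_sha", false);  // drop weak 3DES-FIPS suite
user_pref("security.enable_tls_session_tickets", true);
```

Remember the commenter's question above: any non-default TLS settings may make your handshake look different from other Tor Browser users', which is itself a fingerprinting concern.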

April 09, 2014


Please help me understand. Could the hacker see the entire memory of the victim or only the memory being used by the tor client?

April 10, 2014

In reply to arma


Are you really, absolutely, positively sure of that?
Yes, technically it may be true that the memory used to send the (dummy) "heartbeat" data belongs to the Tor client process at the moment that data is being sent;
but if the Tor client had to request (allocate) the ~64 KB block of memory from the OS, that particular block of physical memory might come from anywhere (depending on the OS and unpredictable conditions) and could still hold data which formerly belonged to other processes, the system's or users'. Or does the OS zero out a block of memory it hands over in response to a process's requests? Surely not your vanilla OS, including MS Windows!

Assuming Tor requests a 64 KB block for the stated uninitialised "heartbeat" operation, does it then release the block, or keep it for use in other heartbeat (or other) circumstances? If the block is allocated once and for all, it would mitigate the vulnerability, as an attacker could not dig into the client's memory again and again by repeating the attack. OTOH, if the block is released and a new block allocated each time, then it's the worst possible scenario: depending on exactly how the OS kernel satisfies memory requests, a large part of the system's physical memory contents might be grabbed by a persistent attacker.

Opinions, please ?

--
Noino

I had a similar train of thought and I think this is the most important question on this page.

Many people are using Tor for online research and making notes while browsing. Their text may be stored encrypted on the hard disk, but in memory it is clear text. In addition, an auto-save feature of the word processor or text editor may put multiple versions of the clear text in dynamically allocated memory locations.

Say such a person downloads a video while writing and during this download every few seconds OpenSSL happily sends 64KB chunks of memory into the internet.

I would like to see a table with the threat potential for memory disclosures for the prevalent Windows OS from XP to 8.1, for administrator and user sessions, as well as for Linux, if necessary with a differentiation for 32-bit and 64-bit systems.

Every serious OS, including Windows, zeros the memory it hands out to programs. This is to prevent security issues like reading the memory of sensitive programs, including those that are a part of the OS themselves.

The problem is that OpenSSL has its own memory management; it does not use the memory management from the OS. It has been a known bug for, I think, 5 years that disabling the OpenSSL internal memory management at compile time results in a non-functional version of OpenSSL, because of all the memory-handling bugs that exist in the OpenSSL code. This is why the OpenBSD folks are forking the code base and attacking it with chain saws, in order to get it down to a code base that they can audit and fix to their satisfaction.
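A toy sketch of why an application-level freelist defeats the OS's page zeroing (names and structure are hypothetical, not OpenSSL's actual code): freed buffers come back with their previous contents intact, so a later over-read can expose them even though the OS only ever handed out zeroed pages:

```python
class FreelistAllocator:
    """Toy model of an application-managed freelist, in the style the
    comment above describes: freed buffers are kept for reuse and are
    NOT wiped, so their old contents survive into the next allocation."""

    def __init__(self):
        self._free = []

    def alloc(self, size):
        for buf in self._free:
            if len(buf) >= size:
                self._free.remove(buf)
                return buf               # recycled -- old bytes still there
        return bytearray(size)           # fresh "page" from the OS: zeroed

    def free(self, buf):
        self._free.append(buf)           # no scrubbing on free


heap = FreelistAllocator()
key = heap.alloc(32)
key[:11] = b"private-key"                # a secret lives here for a while
heap.free(key)

reply = heap.alloc(32)                   # later: buffer for, say, a heartbeat reply
print(bytes(reply[:11]))                 # the old secret is still sitting there
```

So the OS-level zeroing guarantee only protects you *between* processes; within a process that recycles its own buffers, anything the process ever held is fair game for an over-read.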

April 09, 2014


If being able to dump memory from a relay exposes users, wouldn't the admin that is running the relay be able to dump his memory (via, say, gdb) and expose the clients that are going through him?

Yes, a given relay operator can see whatever his relay can see. That's why Tor circuits are multiple hops, and no single relay gets to know both the client and also her destination.
https://sedvblmbog.tudasnich.de/about/overview

But if you can break into many relays, your odds go up of running across both the first hop in the user's path and also the last hop.

It would be pretty cool to have a design where the relay can't learn anything about the connections it's handling. But any such solution would also need to account for somebody watching the traffic flows into / out of the relay, which has nothing to do with Tor process memory.

April 10, 2014

In reply to arma


Makes complete sense. I don't see how that could deanonymize a user, though. If I am a relay operator, then, like you said, I can dump memory / use Wireshark / whatever and see the data that is going through. The exploit does the same thing (dumps memory). The two are the same, so how can that be used as an attack vector?

Thanks for answering my questions

April 15, 2014

In reply to arma


Can Tor use three processes connected through pipes or the like?
client <-->
[p3 <--> (p2 <-->(p1 <--> entry_guard)<-->inner) <--> exit ]
<--> inetsvr
Any leakage would be restricted to the corresponding process.
BTW, you would be free to use distinct codecs/TLS versions/etc. at each stage.

April 10, 2014


How many relays have upgraded to an acceptable OpenSSL so far? When does the process reach 90% complete? What is the schedule for the directory authorities?

You'll do better following the answers to these questions on the #tor-dev IRC channel and the tor-dev mailing list. (Also, don't think of these things as numbers of relays, but rather as percentages of capacity or consensus weight.)

April 10, 2014


So if I understand this correctly, for the past two years any malicious entry guard has been able to match up a user's real IP address (which it has) with a list of sites they have visited in TBB (which it obtains via heartbleed)?

If so, yikes! I wonder how many western agencies have been exploiting this little baby.

For someone to connect my traffic with my IP, they have to be connected directly to me (so they are the relay I'm connected to), they have to know this fact, and they have to know about the vulnerability.
Am I correct? If so, there is not a big chance that someone did this; and even if they did, only a small number of people are affected (at least not every Tor user). Am I wrong?

The counters for this are obvious:
1: Always run Torbrowser from a newly-extracted, never-used directory or from a copy of that in a directory on a tmpfs in RAM.
2: When it really counts, do not log into anything or engage in any activity that would identify you. Boot, do your secure work, then shut down.
3: Any time security forces could be a danger, use Tor from public wifi hotspots, using that hotspot for nothing else. Use it at home only to avoid things like building up an unwanted Google search history.

This way, heartbleed and any similar attacks all fail. They get an empty history and the IP address of a public wifi hotspot, after working like hell to get it. Just like running a brute-force encryption cracking program for three months, only to find another encrypted tarball as the only contents...

April 10, 2014


Could duckduckgo.com be made to replace the google.com in the search space in the upper right corner of Firefox's browser?

April 10, 2014


Following on from this if the user was using a VPN although the malicious entry guard would know the sites visited and whatever it could get out of memory, would the associated IP be the one of the VPN? Or is there a way of gaining the real IP through this bug?

April 10, 2014


If I'm using OpenSSL 1.0.0j (which is what is in Liberte) then I'm not affected by this bug correct?

April 10, 2014


Sorry for the dumb question, but reading news, this blog, the comments there is one thing I'm not sure of.

I know that there is no way to know whether someone actually exploited this vulnerability or not.

But could they listen to everyone, or was it just based on luck? That is, was it technically possible to monitor everyone, or just random members? Let's assume that in the past two years someone did actively exploit this vulnerability (let's assume the worst). Would that mean that everyone's traffic is affected, or just a few, or a lot of people?

April 10, 2014


Is there a patch I can run to fix this problem? Will running OpenSSL 1.0.1g fix my computer? Will it ask me questions that I can't answer (as an intermediate computer user)?

April 10, 2014


I have Tor v0.2.3.25 (installed from expert bundle) running on Windows. I use it as client-only: no hidden service or relay. Is it affected by this vulnerability?

April 10, 2014


This is not really about Tor, but please could someone knowledgeable help me, as I can't find the answer via searches?

While logged in to Yahoo (when it was vulnerable) and logged in to eBay at the same time (which was not vulnerable), could the bug have revealed my eBay password and so I need to reset that as well as the Yahoo one?

"It depends."

Not through the obvious version of the attack (since it's the server that's vulnerable, not your browser), but maybe through some non-obvious version of it.

April 10, 2014


Was the old version of TorChat (0.9.9.553) affected, or is its version of OpenSSL too old?

You'd have to ask the Torchat people. Torchat has nothing to do with Tor and we haven't looked at it or evaluated it in any way. (In large part this is because they picked a confusing name for their program, so we spend energy teaching people that it's a confusing name rather than actually looking at it).

TorChat hasn't been updated in ages, so you need to do this manually.

Upgrade Tor in TorChat

1. Close TorChat
2. Download the official Tor Browser Bundle from Tor Project
3. Extract Tor Browser Bundle to: c:\
4. Copy: C:\Tor Browser\Tor\tor.exe to c:\TorChat\bin\Tor\
5. Copy: C:\Tor Browser\Tor\libeay32.dll to c:\TorChat\bin\Tor\
6. Copy: C:\Tor Browser\Tor\libevent-2-0-5.dll to c:\TorChat\bin\Tor\
7. Copy: C:\Tor Browser\Tor\libssp-0.dll to c:\TorChat\bin\Tor\
8. Copy: C:\Tor Browser\Tor\ssleay32.dll to c:\TorChat\bin\Tor\
9. Copy: C:\Tor Browser\Tor\zlib1.dll to c:\TorChat\bin\Tor\
10. Start TorChat: c:\TorChat\bin\torchat.exe

Remember, TC is a hidden service, and as mentioned in the post above you should update Tor and then switch IDs.

April 11, 2014


Wow

April 11, 2014


Since any active security agency had plenty of time to map all Tor users' IP addresses and more, what is the best practice for becoming anonymous from now on? They know all their targets and their signatures as far as how they use the Internet. Does one need to restart with a new IP address, new persona, new hardware (computer, etc.), new software, new firmware, new VPN, new guards (relays), and essentially get rid of everything that could connect one to the old persona?

April 11, 2014


Hi!

I'm also using 0.2.3.25. What do I have to do / change / check, please?

Regards,
Me.

Stop using the outdated version of Tor and switch to the latest version.

(I bet there are a lot of other things wrong with your setup too, if that version is a part of it.)

April 11, 2014


After one changes the keys on a relay, Tor Weather jumps in with an announcement. Something should be done about that -- a rekeying API or something.

I'm just happy Tor weather is still running at all. We've had nobody to maintain it or fix bugs or anything in it for years. Perhaps somebody wants to volunteer to help? See the tor-dev threads about it.

As far as I know, there are zero cases where anybody has successfully extracted a hidden service private key from a Tor client. Or for that matter a relay identity key from a relay.

That doesn't mean it can't be done. But it means we're nowhere near answering your "how long, how often" questions.

April 11, 2014


Could there be a future torrc option to restrict OpenSSL heartbeat to once every few minutes or shut it off altogether?

Somebody should indeed go through openssl and figure out all of its 'features' like this one. So far as I can tell, Tor doesn't need this heartbeat thing -- the Tor protocol has its own heartbeats built in.

The other question for each one will be whether an external observer can use any of the features we take out to distinguish us from 'real' SSL handshakes -- that's a major way that governments like Iran have been blocking Tor via DPI over the years.

I'm guessing at this point that focusing on just the heartbeat feature is like closing the barn door after the horses are extinct. But there are bound to be more issues remaining in other parts of openssl.
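For anyone building from source, one widely circulated mitigation at the time was to compile OpenSSL with the heartbeat extension removed entirely. This is a build recipe sketch, not something the Tor packages do for you; check the flag against your OpenSSL version's INSTALL notes before relying on it:

```
# Build OpenSSL with the TLS heartbeat extension compiled out
./config -DOPENSSL_NO_HEARTBEATS
make && make test
```

Note arma's caveat above: removing protocol features can also make your handshake easier to distinguish from "real" SSL, which matters for censorship resistance.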

April 13, 2014

In reply to arma


Tor Browser's anonymity stands on OpenSSL and NSS.
I think a proactive code review of NSS would be well advised.
https://developer.mozilla.org/en-US/docs/NSS_Sources_Building_Testing

You should give out a bounty if someone reports a deanonymizing bug in one of those libraries -- I'm thinking about $1000-2000. This would be a nice reward without the need for shady dealings to sell such a bug on the black market.

I think this is the only realistic approach to get enough people to actually look through the code.

We talked a while ago about doing bug bounties. Note that Mozilla itself does bounties for "security" problems, though you're right that our definition of security problem differs from theirs.

In the end we decided that we already know about plenty of important bugs that need fixing (see trac.torproject.org), and our Tor Browser money is better spent fixing as many of the known issues as we can than finding yet more issues but not fixing them.

That said, if anybody knows somebody who wants to fund Tor Browser bug bounties, we'd love to reconsider this plan.

April 12, 2014


In retrospect, did the Tor client with an unpatched OpenSSL send the heartbeat over TCP only once per server session, or could it have been more often? Multiple heartbeats could pave the way for reading larger memory areas on the client side.

I understand from http://tools.ietf.org/html/rfc6520 that multiple heartbeats are only necessary over UDP.

This heartbeat implementation is so silly that one does not have to change anything to turn it into a joke: http://xkcd.com/1354/
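The core of the bug, in a toy Python model (the real code is C in OpenSSL; the names and memory layout here are purely illustrative): the responder copies as many bytes as the request *claims* to contain rather than as many as actually arrived, and the fix is a single bounds check:

```python
# Simulated process memory: a 7-byte heartbeat record (1-byte type,
# 2-byte claimed payload length, 4-byte payload "bird"), followed by
# unrelated secrets that happen to sit next to it on the heap.
MEMORY = bytearray(b"\x01\x40\x00" + b"bird" + b"SECRET KEY MATERIAL")
RECORD_LEN = 7  # bytes the peer actually sent

def respond_vulnerable(mem):
    claimed = int.from_bytes(mem[1:3], "big")   # attacker claims 0x4000 bytes
    return bytes(mem[3:3 + claimed])            # copies far past the record

def respond_fixed(mem, record_len):
    claimed = int.from_bytes(mem[1:3], "big")
    if 3 + claimed > record_len:                # the one missing bounds check
        return None                             # silently drop, as the patch does
    return bytes(mem[3:3 + claimed])

print(b"SECRET" in respond_vulnerable(MEMORY))  # True: adjacent memory leaks
print(respond_fixed(MEMORY, RECORD_LEN))        # None: malformed record dropped
```

Per RFC 6520 the responder must echo the payload back, which is why the claimed length drives the copy; the patched code simply discards any record whose claimed length exceeds what was received.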

April 12, 2014


Is anyone on the project doing practical tests to see how effective the attacks would be? If malicious entry guards are able to see sites a user had visited, possibly for 2 years, that is quite worrying.

If it turned out to be quite hard in practice (like private keys on web servers) it might be a bit more reassuring for tor users.

April 12, 2014


Hi, I'm still confused (after reading all these posts) about exactly how to go about sorting this problem out. I use Tor bundle 3.5.4, which I updated a couple of days ago; I have no idea how to "update my SSL package" and don't understand whether that applies to me, since I use the bundle. Also, what's this about a tool to check whether my SSL is compromised?
Tool at: http://rehmann.co/projects/heartbeat/
Is this a good idea?
Basically, is there anything I personally can do to protect myself, and should I still use Tor?

thanks
ps i recon the dude that posted that he looks at "very bad" websites is into kiddy porn and i hope he's sweating waiting for the feds to to make a "hard entry" on his front door and take him to live in the big house with "bad bubba and the shower sisters"

If you're just using Tor as a client, and only using TBB, then moving to TBB 3.5.4 should be all you need to do for Tor.

(I say "for Tor" because if you logged into some website using https over the past few years, it's possible that the website was vulnerable, completely separate from what browser you used to reach it -- people could attack the website to extract whatever personal information you might have given it.)

>ps i recon the dude that posted that he looks at "very bad" websites is into kiddy porn
OK, that's disgusting. Not him, you. Let me guess, you're from America, Canada, or the UK, right? Either way, most of the world does not jump to the conclusion that "I look at bad websites" = "I look at kiddie porn". It's people with your views who try to get Tor banned or censored, because they assume the only reason people use Tor is for "bad things". Please, don't make completely and utterly unfounded assumptions, to the point where you actually wish great suffering upon a person. Honestly, I find what you're doing more disturbing than the slim chance that his version of "bad sites" is exactly the same as your view.

I'm not trying to be rude, but I'm really quite tired of this. Quite often I'm on various chats, or forums, and I mention I like anonymity and privacy, and the first thing people assume is drugs, kiddie porn, or terrorism, and refuse to help me, or just as you do, wish for pain and suffering.

How's this: I go on "bad sites". Do you hope I suffer now? Do you hope I'm terrified of being locked up for decades and raped? Well, too bad for you, because the oh-so-bad sites I go to are websites about atheism.

Once we're all done with our moral panics, can we please show some compassion for others who are lumped into one category just because we live in a place where we might like things big brother doesn't approve of?

April 12, 2014


BTW, how about returning to rotating entry guards? The longer you are connected to a guard, the more leakage it can collect. New Tor development locks you to a single entry guard -- is that a coincidence?

April 12, 2014


Never ever use shared libraries! Consider what happens if your application was from the OpenSSL 1.0.0 era and you have updated the system to the "newest" 1.0.1...

April 13, 2014


I think it's great the way the Tor Project actively responds to so many user inquiries. The EFF linked to this thread, and I just wanted to say: you guys sincerely care about your work, and it's very admirable.

Anyway, is it possible to incorporate PFS into Tor nodes?

April 13, 2014


Is using a remote (shared) Tor through the SOCKS protocol preferable? No personal data leakage would be possible, because the process doesn't have access to it.

No, because that remote Tor client still knows everything that a local Tor client would, and it would still be vulnerable to this same sort of attack (if you haven't upgraded).

Plus, if you use a remote Tor client you add yet another point in the network that gets to know both you and everything you do.

And if that's not enough, you're still going to be running whatever application (e.g. a browser) on your own computer, so if it has problems, you haven't dealt with those either.

Bad idea IMO.

April 13, 2014


If you are using Tor, STOP NOW.
I suggest EVERYONE GO BACK TO 2.3.5.
Client and server. Clients, enable NoScript!
2.3.5 is back from 2011 and is tried and true, as far as I know.

Any hidden sites using compromised versions of Tor/SSL are unsafe and can never be considered safe again, unless the owner can prove through use of an old PGP key/.bit address that they own the new .onion site. All .onion sites using newer versions of Tor ARE POTENTIALLY COMPROMISED.

Also, any clients who have used a version of Tor within the last 2-3 years should be considering all of their keyrings potentially compromised by the entry guard (first relay) and should be completely re-encrypting their systems and generating new PGP keys, etc, as the first relay could have been reading our RAM through HeartBleed.

* IF YOU CANNOT SAFELY THREATEN TO KILL POLITICIANS OR DOWNLOAD/POST CP, YOU ARE NOT ANONYMOUS. *

I knew that it was sketchy when Tor Project was telling everyone to update their browser bundles after the Firefox javascript exploit that was revealing IP addresses of pedos.

THIS REQUIRES FURTHER RESEARCH AND THE TOR PROJECT IS COMPLETELY INCAPABLE OF DOING IT, AS IT HAS BEEN OVERRUN BY NSA SHILLS. We need to go back and fork the project!

Be careful listening to this person's advice.

For example, the 2.x TBBs have old obsolete insecure versions of Firefox in them, so that part is clearly bad advice.

Hidden services that used insecure versions of openssl are indeed unsafe and shouldn't be used again -- I agree. But this notion of proving something via PGP? Not enough details. And why blame the newer versions of Tor? Haven't you looked at the code? Or at least the changelog that shows all the bugs we fixed since the version you prefer to run?

Ha, and then we get to the end of your comment. I guess I'll let people judge that one for themselves.

April 13, 2014


Is it possible to check which SSL version is installed by my TBB? (I'm only using the browser / client.)

The version is 0.2.3.25-?. Do I have to look in Vidalia or the browser options?

Wow. You are using an obsolete insecure version of Tor Browser from years ago.

That browser you have probably has security holes by now. I recommend against running it.

April 18, 2014

In reply to arma


I recommend you implement a way to make Tor nodes refuse connections from older versions -- anything that isn't the latest TBB. Too many people have no idea what dangers they're putting themselves in when using a not-up-to-date TBB.

Part of the trouble is that Tor is an anonymity system, and our protocol is open and there are multiple implementations.

So there are no good ways for relays to reach in and figure out what version the Tor client is.

I guess we could have the Tor client volunteer its version info, and then the relays can hang up if they don't like it. But if we're to do that, why not have the clients just opt to fail if they're out of date?

That sure would make some users upset, e.g. the ones who put Tor on their USB stick, go to the censored/surveilled place with really crappy Internet, and then can't use it because of the update that came out the day before -- even if all they planned to use it for was to fetch the newer version.

So in sum, "it's complicated; somebody should come up with a clear plan and then we can see if it would actually work. Most versions of such plans don't seem good."

April 14, 2014


PersonalWeb, the True Names patent troll, claimed they could hack SSL to insert advertising via MITM attacks on ISP customers. Does anybody know if they were using this bug?

April 19, 2014


No question, but I just wanted to thank you very much for answering so many questions on here, it must be hard with all the other work you are doing. I wish I knew as much as you do.

Yes, the bug is fixed. But some of the fallout from the bug is still ongoing. For example, we'll be putting out a Tor update in the next while that blacklists the old (no longer in use) directory signing keys from the directory authorities -- not because we know they were compromised, but because we don't know they weren't. For another example, we cut 1000 relays out of the network because they were still vulnerable to the bug. And there are another 500-1000 that have upgraded (so the bug is fixed for them) but maybe their long-term identity key was extracted from them before they upgraded, and we'll be cutting those out of the network at some point.

It'll be a while yet until we can call it all resolved. I recommend following on the tor-relays list and other places than these blog comments.