Tor Messenger 0.1.0b5 is released

by sukhbir | March 9, 2016

We are pleased to announce another public beta release of Tor Messenger. This release features important security updates to libotr, and addresses a number of stability and usability issues. All users are highly encouraged to upgrade.

The initial public release was a success in that it garnered a lot of useful feedback. We tried to respond to all your concerns in the comments of the blog post but also collected and aggregated a FAQ of the most common questions.

OTR over Twitter DMs

Tor Messenger now supports OTR conversations over Twitter DMs (direct messages). Simply configure your Twitter account with Tor Messenger and add the Twitter account you want as a contact. Any (direct) message you send to another Twitter contact will be sent over OTR provided that both contacts are running Tor Messenger (or another client that supports Twitter DMs and OTR).

Facebook support dropped

Facebook has long officially deprecated their XMPP gateway, and it doesn't appear to work anymore. We had multiple reports from users about this issue and decided that it was best to remove support for Facebook from Tor Messenger.

We hear that an implementation of the new MQTT-based protocol is in the works, so we hope to restore this functionality in the future.

Before upgrading, back up your OTR keys

Before upgrading to the new release, you will need to back up your OTR keys or simply generate new ones. Please see the following steps to back them up.

In the future, we plan to port Tor Browser's updater patches (#14388) so that keeping Tor Messenger up to date is seamless and automatic. We also plan to add a UI to make importing OTR keys and accounts from Pidgin, and other clients, as easy as possible (#16526).

The secure updater will likely be a part of the next release of Tor Messenger.

Downloads

Please note that Tor Messenger is still in beta. The purpose of this release is to help test the application and provide feedback. At-risk users should not depend on it for their privacy and safety.

Linux (32-bit)

Linux (64-bit)

Windows

OS X (Mac)

sha256sums.txt
sha256sums.txt.asc

The sha256sums.txt file containing hashes of the bundles is signed with the key 0x6887935AB297B391 (fingerprint: 3A0B 3D84 3708 9613 6B84 5E82 6887 935A B297 B391).
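Concretely, the check might look like the following (the key ID and fingerprint are the ones above; the gpg steps need network access to fetch the key, so the hash check is demonstrated on a stand-in file rather than a real bundle):

```shell
# 1) Verify the signature on the hash list, and compare the fingerprint that
#    gpg prints with the one quoted above:
#      gpg --recv-keys 0x6887935AB297B391
#      gpg --verify sha256sums.txt.asc sha256sums.txt
# 2) Check the downloaded bundle against the signed hash list. Shown here with
#    a placeholder file, since the real bundle names vary by platform:
printf 'example bundle contents' > tor-messenger.tar.xz
sha256sum tor-messenger.tar.xz > sha256sums.txt
sha256sum -c sha256sums.txt    # prints "tor-messenger.tar.xz: OK" on a match
```

Only install a bundle if both the signature verifies against the fingerprint above and `sha256sum -c` reports OK for it.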

Changelog

Here is the complete changelog since v0.1.0b4:

Tor Messenger 0.1.0b5 -- March 09, 2016

  • All Platforms
    • Bug 13795: Remove SPI root certificate because Debian no longer ships it
    • Bug 18094: Remove references to torbutton from start-tor-messenger script
    • Bug 18235: Disable Facebook as they no longer support XMPP
    • Bug 17494: Better error reporting for failed outgoing messages
    • Bug 17749: Show version information in the "About" window
    • Bug 13312: Add support for OTR over Twitter DMs
    • Bump libotr to 4.1.1
  • Mac
    • Bug 17896: Add Edit menu to the conversation window on OS X
  • Windows
    • ctypes-otr
      • GH 65: Support Unicode paths on Windows

Comments

Please note that the comment area below has been archived.

March 09, 2016

Permalink

Grading system of the level of the trust for all nodes. And best node's auto & manually selectable in the chain.

March 09, 2016

Permalink

:C

couldn't tor messenger be made to use a javascript otr implementation instead of libotr?

Yes, it could and that's something we've considered (and are still considering). We went with libotr because it's correct and constant time and audited, but these memory safety issues are indeed troubling.

March 11, 2016

In reply to arlo

Permalink

Even though I don't yet really understand all of the design decisions you explain briefly, I think one of the most promising things about TM is the fact that you are trying to explain design decisions in response to queries from technically able users. That's very important because it helps less able users to trust that you are thinking hard about all these things. I think most less able users (like me) are at least capable of understanding that everything is tradeoff, and that all we can reasonably ask is that you make careful choices and continually reexamine them. (I hope TM will be able to remain sufficiently nimble to undo a bad decision in the event of some possible future revelation in BlackHat etc. which changes expert opinion on the relative hazard of various different vulnerabilities affecting TM's reverse dependencies.)

I'd love to see an Ars article by one of their intrepid journalists which tries to explain the design decisions you have defended in more detail. Even better if Tor Project can get a sizable fraction of journalists who write about tech to help us all beta test TM.

We can certainly do better and one of the things we want to get across is who can see what in a typical Tor Messenger conversation. Can your ISP or the server see what you are talking about (no)? Can the server see who you are talking to (yes)? We need to get this information across to users, in a simple "yes" "no" "maybe" tabular format. We are tracking this in https://trac.torproject.org/projects/tor/ticket/17528.

March 09, 2016

Permalink

What's changed regarding the IRC client? Because it doesn't work anymore. Now I get 'Error: Peer's Certificate issuer is not recognized'. Never saw that before. I then add a permanent security exception, but it still doesn't connect, just says 'Lost connection' and then after a while I get again 'Error: Peer's Certificate issuer is not recognized'. I'm using the irc.oftc.net network

As the changelog says, "Bug 13795: Remove SPI root certificate because Debian no longer ships it". The OFTC certificates are signed by the SPI certificate, which we were including earlier because Debian also used to include it, but now no longer does (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=796208). When we discussed adding certificates to our builds (see https://lists.torproject.org/pipermail/tbb-dev/2014-November/000181.html), we decided that we are comfortable adding the SPI certificate as Debian packages it as part of the ca-certificates package. Since they no longer consider it "safe", we decided to remove it as well.

OFTC has been known to block Tor but it seems like the situation has improved recently. So the "lost connection" messages are most likely OFTC blocking connections from your specific exit.

March 11, 2016

In reply to sukhbir

Permalink

Does tor-messenger actually store the security exceptions? I've been clicking away at permanently store security exceptions for I don't know how many times, but they keep coming back.

My subjective feeling is that it is much harder to connect to an IRC network with this version than with the previous one.

Yes, it should (and does in my limited testing) store the exception. Can you clarify which platform you're on?

This change would only affect OFTC (well, and other networks with SPI-issued certs, which I assume are likely few), but I accept your premise that this was a regression in usability.

March 10, 2016

Permalink

Tor Messenger 0.1.0b5 and Tor Messenger 0.1.0b4 crash when I try to open a menu (File, Tools, or Help) on Kubuntu 14.04.4 LTS.
This happens before I create an account, so the accounts configuration window is closed by me.
I think it is a bug related to the GTK application theme in the KDE environment.

That looks like a mirror of the website,
https://lists.torproject.org/pipermail/tor-mirrors/2014-August/000673.h…
https://sedvblmbog.tudasnich.de/getinvolved/mirrors.html.en

Wherever you download from, please verify the hash and signature.
https://sedvblmbog.tudasnich.de/docs/verifying-signatures.html.en

It should support that platform / architecture, yes.

ITsn is listed on the mirrors page, there just doesn't happen to be a column for onion addresses.

But you're right to say that if you can access the .onion, you should be able to just visit torproject.org and get it from there.

March 10, 2016

Permalink

I think TM is one of the most promising projects Tor Project is doing right now, so very glad to see the next edition coming out!

@ Sukhbir: are you using Shamir Secret Sharing Scheme (SSSS) or similar to take modest precautions against "rubber hose cryptanalysis" by US, RU, CN, etc. governments?

The fear is that an agency like the FBI could try to compel you, on pain of indefinite imprisonment without trial or something like that, to misuse your authentic signing key to sign FBI-made/bought malware disguised as a genuine TM tarball. Further, if the legal coercion came in the form of an NSL (National Security Letter), you would be forbidden from ever revealing to anyone, even your lawyer, that you had been served with a demand and a gag order. If that happens I strongly encourage you to immediately disobey the gag order and tell everyone, because no-one has ever had the courage to do that and I strongly suspect that courts might surprise everyone by ruling that the NSL gag orders violate the US Constitution. (IANAL, so that guess represents a political judgment, not a legal one!)

SSSS (look in the Debian software repositories) provides some protection against that by allowing trusted people in distinct jurisdictions (e.g. people in US, Norway, RU, CN, Brazil all must sign, and we hope those countries would not all cooperate in rubber hose cryptanalysis to obtain a backdoor in TM, targeted or otherwise). And there is Cothority, which offers massively scaled SSSS-like "witnessing":

http://arstechnica.com/security/2016/03/cothority-to-apple-lets-make-se…
Cothority to Apple: Let’s make secret backdoors impossible
Decentralized cosigning could make it tough for government to gain access.
J.M. Porup (UK)
10 Mar 2016

> Cothority decentralises the signing process, and scales to thousands of cosigners. For instance, in order to authenticate a software update, Apple might require 51 percent of 8,000 cosigners distributed around the world.

March 11, 2016

In reply to arlo

Permalink

@ arlo:

Apparently I was insufficiently clear and I regret that.

The deterministic builds project is much needed and long overdue, but as I understand it, this project addresses a very different kind of threat: "scenarios where malware sneaks into a development dependency through an exploit in combination with code injection, and makes its way into the build process of software that is critical to the function of the world economy" (according to Mike Perry's blog post which you cited in the link).

That is a very serious (and all too plausible) potential threat, but it is quite different from the very serious (and too plausible) potential threat I was talking about, which is a kind of "rubber hose cryptanalysis" in which a key developer (or a set of same) is forced by some government (or coalition of governments) to misuse their *genuine* cryptographic signing key by signing a version of their product which has been "backdoored" by some government malware-as-a-service contractor. Wary users who check the gpg signature before installing would still be fooled because the "bad" version has been signed with the *genuine* signing key.

See this explainer:

http://arstechnica.com/security/2016/02/most-software-already-has-a-gol…
Most software already has a “golden key” backdoor: the system update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

> Q: What does almost every piece of software with an update mechanism, including every popular operating system, have in common?
>
> A: Secure golden keys, cryptographic single-points-of-failure which can be used to enable total system compromise via targeted malicious software updates.
>
> I'll define those terms: By "malicious software update," I mean that someone tricks your computer into installing an inauthentic version of some software which causes your computer to do things you don't want it to do. A "targeted malicious software update" means that only the attacker's intended target(s) will receive the update, which greatly decreases the likelihood of anyone ever noticing it. To perform a targeted malicious software update, an attacker needs two things: (1) to be in a position to supply the update and (2) to be able to convince the victim's existing software that the malicious update is authentic. Finally, by "total system compromise" I mean that the attacker obtains all of the authority held by the program they're impersonating an update to. In the case of an operating system, this means that the attacker can subvert any application on that computer and obtain any encryption keys or other unencrypted data that the application has access to.
>
> A backdoored encryption system which allows attackers to decrypt arbitrary data that their targets have encrypted is a significantly different kind of capability than a backdoor which allows attackers to run arbitrary software on their targets' computers. I think many informed people discussing The Washington Post's request for a "secure golden key" assumed they were talking about the former type of backdoor, though it isn't clear to me if the editorial's authors actually understand the difference.
>
> From an attacker perspective, each capability has some advantages. The former allows for passively-collected encrypted communications and other surreptitiously obtained encrypted data to be decrypted. The latter can only be used when the necessary conditions exist for an active attack to be executed, but when those conditions exist it allows for much more than mere access to already-obtained-but-encrypted data. Any data on the device can be exfiltrated, including encryption keys and new data which can be collected from attached microphones, cameras, or other peripherals.
>
> Many software projects have only begun attempting to verify the authenticity of their updates in recent years. But even among projects that have been trying to do it for decades, most still have single points of devastating failure.
>
> In some systems there are a number of keys where if any one of them is compromised such an attack becomes possible. In other cases it might be that signatures from two or even three keys are necessary, but when those keys are all controlled by the same company (or perhaps even the same person) the system still has single points of failure.
>
> This problem exists in almost every update system in wide use today. Even my favorite operating system, Debian, has this problem. If you use Debian or a Debian derivative like Ubuntu, you can see how many single points of failure you have in your update authenticity mechanism with this command:
> ...

As Ryge explains, something like Shamir's Secret Sharing System (SSSS in Debian repository) can help combat the threat I am talking about, by distributing "shares" of a single secret, such as a gpg signing key, to a number of people in various countries, some minimal number of whom can combine their shares to recreate the secret in order to sign software. But when your enemy is the USG, even with SSSS it would not be easy to guarantee that USG could not force developers to misuse their shares, because it would be very difficult to name a set of countries in which key developers already live, which cannot be pressured by FBI into cooperating in "rubber hose cryptanalysis" of the kind described by Ryge.
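For concreteness, the k-of-n splitting that SSSS implements can be sketched in a few lines of Python. This is a toy illustration of the math only (the prime and parameters are arbitrary choices for the example); real key shares should of course be made with an audited tool like the `ssss` package itself.

```python
# Toy Shamir secret sharing: split a secret into n shares so that any k of
# them recover it, but fewer than k reveal nothing about it.
import random

P = 2**127 - 1  # a prime; all arithmetic happens in the field mod P

def split(secret, k, n):
    """Split `secret` (an int < P) into n shares, any k of which recover it."""
    # A random degree-(k-1) polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the polynomial evaluated at x = i (x = 0 would leak the secret).
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[1:4]) == 123456789
```

The geographic idea above maps onto the parameters: n is how many trusted people hold shares, and k is how many jurisdictions an adversary would have to coerce at once.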

The potential threat I am talking about, that Ryge is talking about, affects all Linux distributions which use a package manager, all Open Source software projects like Tails which offer a gpg signed ISO image of a specialized Linux distribution for at-risk users, all Open Source software projects such as Tor Project which offer cryptographically signed tarballs such as TBB or TM tarballs for download by ordinary citizens who need privacy/anonymity/security. And all users of a smart phone who download cryptographically signed software upgrades from the phone's manufacturer. All users of a router who download cryptographically signed firmware upgrades. Pretty much everyone who uses an electronic device which accepts cryptographically signed upgrades.

This threat is not hypothetical, as FBI's demands to Apple show.

The potential threat Mike Perry is talking about in the cited blog post is also rather general, and I certainly regard it as non-hypothetical and very serious. But it is a different threat which calls for a different response.

No, you were clear. I just think that reproducible builds solve the same problem. Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

March 11, 2016

In reply to arlo

Permalink

> Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

I think I understand your point, and agree as far as it goes, but it seems that keeping the signing key distributed (e.g. using something like SSS) is still a very good idea, and I hope you will consider it.

Are the auditors geographically distributed? Not all in countries where Comey can force the local government to apply "rubber hose cryptanalysis" by forcing an auditor to abuse their privileges by falsely stating that their build matched the expected hash?

A tricky issue with SSSS and allied schemes: you want to set the number of required shares sufficiently low so that if the USG organizes a global roundup of all Tor devs they can arrest, there will still be sufficiently many survivors to carry on and sign the next bundle of TM or TB, or the next ISO image of Tails. But you want to set it high enough (ideally with some geographical constraints also) so that FBI will find it difficult to pressure enough governments to round up all the devs at once.

Hope this is clear; it's hard to describe in words late in the day.

I think it's difficult to use SSSS to keep the signing key distributed. At some point someone has to reassemble the signing key on some computer to sign the release, and then we have a single person with a copy of the signing key, which is no longer distributed.

What we can do, however, is have the build signed by multiple people. We do it for Tor Browser. We should do it for Tor Messenger too, but first we need to have the builds for all platforms reproducible (currently only the Linux builds are, so the sha256sums.txt file, which contains hashes for all platforms, wouldn't be matching).

But the signatures of official builders are not everything. We also hope that other people whom we don't know will verify our builds anonymously, and say something if something seems wrong.

March 15, 2016

In reply to boklm

Permalink

> I think it's difficult to use SSSS to keep the signing key distributed. At some point someone has to reassemble the signing key on some computer to sign the release, and then we have a single person with a copy of the signing key, which is no longer distributed.

Point taken. Perhaps SSSS used *in isolation* is really more suitable for a Board of Trustees which wishes to be able to reconstruct a copy of the master key to the company networks in case their cybersecurity chief "breaks bad" (c.f. the experience of San Francisco some years ago).

> What we can do however is having the build signed by multiple people. We do it for Tor Browser. We should do it for Tor Messenger too, but first we need to have the builds for all platforms reproducible (currently only the Linux builds are, so the sha256sums.txt files which contains builds for all platforms wouldn't be matching).
>
> But the signature of official builders is not all. We also hope that other people that we don't know will verify our builds anonymously, and say something if something seems wrong

I think the Open Source community must consider every available technical tool, legal stratagem, and geolocational distribution to come up with a workable solution to the threat from "rubber hose" breakage of the authentication and data-integrity functions of cryptography.

It worries me, boklm, that no-one at Tor Project has yet clearly stated that they even understand the nature of this threat. If that's true, there is no way you can prepare defenses against it. That's terrible because I believe you may have only weeks or months before you (and some or all of your users) fall victim to coerced cooperation by

o key Tor Project people in abusing cryptographic signing keys to sign maliciously modified TP products (TB, TM, tor client, tor server software) provided by the bad guys,

o key certificate authority people in abusing signing keys by signing bad certs produced by the bad guys.

Here is that link again:

http://arstechnica.co.uk/security/2016/02/most-software-already-has-a-g…
Most software already has a “golden key” backdoor—it’s called auto update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

March 14, 2016

In reply to arlo

Permalink

> No, you were clear. I just think that reproducible builds solve the same problem.

I don't think that's true. If I'm wrong you need to explain this:

> Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

Here is how Leif Ryge (I think some Tor people know him) described the threat:

http://arstechnica.co.uk/security/2016/02/most-software-already-has-a-g…
Most software already has a “golden key” backdoor—it’s called auto update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

> Q: What does almost every piece of software with an update mechanism, including every popular operating system, have in common?
>
> A: Secure golden keys, cryptographic single-points-of-failure which can be used to enable total system compromise via targeted malicious software updates.
>
> I'll define those terms: By "malicious software update," I mean that someone tricks your computer into installing an inauthentic version of some software which causes your computer to do things you don't want it to do. A "targeted malicious software update" means that only the attacker's intended target(s) will receive the update, which greatly decreases the likelihood of anyone ever noticing it. To perform a targeted malicious software update, an attacker needs two things: (1) to be in a position to supply the update and (2) to be able to convince the victim's existing software that the malicious update is authentic. Finally, by "total system compromise" I mean that the attacker obtains all of the authority held by the program they're impersonating an update to. In the case of an operating system, this means that the attacker can subvert any application on that computer and obtain any encryption keys or other unencrypted data that the application has access to.
>
> A backdoored encryption system which allows attackers to decrypt arbitrary data that their targets have encrypted is a significantly different kind of capability than a backdoor which allows attackers to run arbitrary software on their targets' computers. I think many informed people discussing The Washington Post's request for a "secure golden key" assumed they were talking about the former type of backdoor, though it isn't clear to me if the editorial's authors actually understand the difference.
> ...
> Many software projects have only begun attempting to verify the authenticity of their updates in recent years. But even among projects that have been trying to do it for decades, most still have single points of devastating failure.
>
> In some systems there are a number of keys where if any one of them is compromised such an attack becomes possible. In other cases it might be that signatures from two or even three keys are necessary, but when those keys are all controlled by the same company (or perhaps even the same person) the system still has single points of failure.
>
> This problem exists in almost every update system in wide use today. Even my favorite operating system, Debian, has this problem.
>
> [Software developers] probably thought they would be able keep the keys safe against realistic attacks, and they didn't consider the possibility that their governments would actually compel them to use their keys to sign malicious updates.
>
> Fortunately, there is some good news. The FBI is presently demonstrating that this was never a good assumption, which finally means that the people who have been saying for a long time that we need to remove these single points of failure can't be dismissed as unreasonably paranoid anymore.

Here is one possible scenario:

FBI serves Tor Project with an NSL (which is automatically accompanied by a gag order forbidding recipients from telling anyone about the secret order, on pain of very lengthy prison terms) or some court order (accompanied with a "delayed notification" which can be renewed every 90 days simply by FBI asking for a renewal) ordering developers to use their authentic signing key(s) to sign a version of latest TM or TB tarball which has been provided by FBI, and which contains hidden malware. Facing indefinite imprisonment or worse--- perhaps they are sitting in jail and told they will remain there unless they agree to cooperate--- the developers sign the maliciously modified tarball. That's "rubber hose" breakage of the *authentication* function of gpg.

With NSA help, FBI then manages to divert https connections by targeted Tor users attempting to download the next edition of the TB or TM tarball to their own site, where they are served the trojaned version of the tarball. Perhaps NSA has compelled the Certificate Authority to use *their* genuine signing key to validate FBI's fake torproject.org PEM certificate, thus fooling users into thinking they are connected to the genuine website.

Since the detached signature verifies against the genuine signing key, the targeted users accept the "update" as genuine.

Meanwhile, the "verifiers" you describe get the genuine tarball like almost everyone else, and thus fail to detect the targeted attack, and because of the gag order they have no way of knowing you were forced to sign malware with your authentic key.

Another scenario: FBI not only compels you to use your authentic key to sign maliciously modified tarballs, they secretly seize the domain torproject.org and serve the bad tarballs to *everyone* (no bad PEM needed).

Maybe I misunderstood what you said, but I am not sure the verifiers would catch the deception in this case either, because FBI has complete control over your persons and your domain, so they can manipulate all your communications with the verifiers.

March 10, 2016

Permalink

could you add an option to enable logs again? This might be slightly against the concept, but I can't trust other ppl I can with to have logging disabled anyway.

March 11, 2016

In reply to arlo

Permalink

I tried this a couple of months ago; it ostensibly re-enables logging, but it's functionally unusable. It overwrites the logfile every time the app launches, the contents are JSON rather than human-readable text, and it's buried deep inside the application directory.

Logging is very important to average users. That I can't get a meaningful log is keeping me from recommending it to others I'd like to have using it. (I'm not the OP here, either.)

You can open the context menu (right click) of a conversation and select "Show Logs" to view them in a human-readable way in the application itself, right? (Not sure if you're aware of that.) Also, there are many tools to process JSON to a more readable format. You can also change the `purple.logging.format` pref from `json` to `txt`, if you really want, but I'm not sure how much longer that'll work. I believe the structured logging is by design.

But, more substantively, there're a number of bugs in the way before we can safely reenable logging,
https://bugzilla.mozilla.org/show_bug.cgi?id=1175706
https://bugzilla.mozilla.org/show_bug.cgi?id=1175374
https://github.com/arlolra/ctypes-otr/issues/49

Thank you for the feedback though. It's helping me gauge the issue.

March 12, 2016

In reply to arlo

Permalink

The menu from an item in the contacts list does bring up Show Logs, thank you! And the viewer respects my font settings, which is terrific. It would be good of course to have it available by a regular menu and keyboard shortcut so users can find it in their preferred way. (Which you can probably guess hunting for stuff behind mouse buttons isn't mine.)

March 11, 2016

In reply to arlo

Permalink

Correction: The files aren't being overwritten anymore. But the rest is the same.

On OS X, the logs are in

/Applications/Tor Messenger.app/Contents/TorMessenger/Data/Browser/[myprofile].default/logs/jabber/myuser@domain/otheruser@domain

sample:

{"date":"2016-03-11T17:19:54.000Z","who":"myuser@domain/Instantbird","text":"what i typed","flags":["outgoing"],"alias":"myuser"}
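A few lines of Python will turn lines like that into readable text. This is only a sketch based on the fields visible in the sample above; any other field names the log format uses are not covered here.

```python
# Render one line of Tor Messenger's JSON chat log as readable text.
import json

def render(line):
    m = json.loads(line)
    who = m.get("alias") or m.get("who", "?")
    # "outgoing" in flags marks messages you sent; everything else is incoming.
    arrow = "->" if "outgoing" in m.get("flags", []) else "<-"
    return f'{m["date"]} {arrow} {who}: {m["text"]}'

sample = ('{"date":"2016-03-11T17:19:54.000Z","who":"myuser@domain/Instantbird",'
          '"text":"what i typed","flags":["outgoing"],"alias":"myuser"}')
print(render(sample))
# 2016-03-11T17:19:54.000Z -> myuser: what i typed
```

Run over a whole log file, one JSON object per line, this gives a plain-text transcript without changing the `purple.logging.format` pref.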

> I can't trust other ppl I can with to have logging disabled anyway.

I have sometimes told people in chats that I was not logging, and I really was not. You probably meant "I cannot be certain that other people I chat with have disabled logging" and of course I agree with that.

March 10, 2016

Permalink

How is it if I have defined multiple IRC accounts: do they share the same Tor circuit when connecting to the network?

March 11, 2016

In reply to arlo

Permalink

I have a few accounts on the same XMPP server which I don't want to associate with one another (the server is the adversary). Could I get different Tor circuits for them in this case? Right now I could use a different bundle for each account to specify a different Tor SOCKS host and port, but that is quite inconvenient. I would like to see the possibility to specify different

  • SOCKS host
  • SOCKS port
  • OTR key
  • OTR fingerprints

for different accounts in the same Tor Messenger.

Then, the second question: Tools → Preferences → Content → "Send these fonts and colors as part of my messages when possible". ← What is this? Is it safe to allow? Why should I send any info about my fonts and colors to somebody? The interface should be standard for everybody.

P.S. Thanks for the project! I waited for a long time any officially supported messenger for Tor. Now I'm happy the project exists and evolves.

March 10, 2016

Permalink

Thanks for making Tor Messenger :) running it for a few weeks 24/7 now and it works perfectly.

March 11, 2016

Permalink

To defeat browser profiling, inject JavaScript to make random micro mouse-moves added and/or subtracted to or from each real move to block "mousemove fingerprinting"; also allow the browser to lie about plugins, with a randomized plugin list per site, etc.

Someone else already cited the link, but here are the details of the cited attack:

http://jcarlosnorte.com/security/2016/03/06/advanced-tor-browser-finger…
Advanced Tor Browser Fingerprinting
6 March 2016

> The ability to privately communicate through the internet is very important for dissidents living under authoritary regimes, activists and basically everyone concerned about internet privacy.
>
> While the TOR network itself provides a good level of privacy, making difficult or even practically impossible to discover the real I.P. address of the tor users, this is by no means enough to protect users privacy on the web. When browsing the web, your identity can be discovered using browser exploits, cookies, browser history, browser plugins, etc.

Tor Project is tracking this issue so we hope the next edition will field countermeasures.

March 11, 2016

Permalink

OTR over Twitter DM is an incredible feature. I wonder if moxie would be willing to share his work on encrypted twitter DMs that was blocked for surveillance considerations... could be an excellent feature set to incorporate!

FBI Director Comey is spending quite a bit of time these days testifying before Congress, and not all his auditors are entirely happy with his rumored decision to charge Mrs. Clinton or his insistence on breaking American cybersecurity (not to mention privacy) in order to spy better on dead criminals or whatever nonsensical excuse he offers.

Suggestion for a fun PR stunt:

Get some journalists to communicate by TM over Twitter DM with a well known whistleblower and Tor supporter currently residing in Russia, and make sure someone passes a note to Comey to tell him about it during his testimony.

March 14, 2016

In reply to sukhbir

Permalink

> We plan to highlight this feature a bit more.

Good.

> I guess without the two people you mentioned :)

I defer to your judgment on that score, but you/Shari *must* issue a statement on DOJ orders served on Apple and other companies, especially the rumored forthcoming backdoor order naming WhatsApp.

Oppressive governments may be willing to cooperate in a concerted attack on their perceived common enemy, human rights activists who use Tor, Ricochet, WhatsApp, Signal in iPhone, or whatever. On this basis, it seems not impossible that the security services of USA, UK, CN, RU, IR, VN might ink deals to collaborate in "rubber hose" breakage of cybersecurity measures protecting Open Source software. Even though those countries would be unlikely to collaborate on anything else, they are all likely to see HRW, Riseup Networks, Tor Project, WhisperSystems, Silent Circle, Apple, etc, as "dangerous adversaries" worthy of overt oppression.

It may now be true that the hand of every government is raised against us.

> Oppressive governments may be willing to cooperate in a concerted attack on their perceived common enemy, human rights activists who use Tor, Ricochet, WhatsApp, Signal in iPhone, or whatever. On this basis, it seems not impossible that the security services of USA, UK, CN, RU, IR, VN might ink deals to collaborate in "rubber hose" breakage of cybersecurity measures protecting Open Source software.

Not a day later, comes this grim news:

http://thehill.com/policy/cybersecurity/273047-china-asks-fbi-chief-to-…
China asks FBI chief to help battle terrorism, hackers
Cory Bennett
15 Mar 2016

> Chinese leaders on Monday urged FBI Director James Comey to work more closely with his Beijing counterparts on Internet security and anti-terrorism cases.
>
> The message came during a meeting in Beijing between Comey and Chinese Public Security Minister Guo Shengkun, according to Xinhua, a state-run news agency.
>
> “The two sides agreed to have more pragmatic cooperation in cybersecurity and anti-terrorism,” the report said.

A key point here is that China (and increasingly, the US--- cf Prepresident Trump) have rather broad interpretations of the meaning of the word "terrorism". China already uses this term to include political dissidents, and recently FBI keeps broadening its own use of the term, to cover for example eco-activists, animal-rights activists, BLM activists, divestment activists, social-justice activists, etc. (since any of these people, according to FBI, could turn violent at any moment, or might become "anarchists" or cybersecurity enthusiasts).

The major tech companies are outraged that the USG never really supported their attempts to stand up to Chinese demands for data on Chinese citizens (and exiles living in "the West"), and have been further outraged by the hypocrisy of FBI's anti-encryption campaign (CWII) and NSA's all-pervasive economic espionage.

And now it seems FBI and NSA are considering voluntarily sharing with the government of China the personal data of US persons and proprietary information of US companies which they collect under "counter-terror" mandates. What next? NSA sharing its data trove with the government of RU? VN? IR?

All the world's governments increasingly see themselves at war with the giant tech companies, because these companies increasingly operate independently of any government's control. A spate of trade treaties even prohibits national governments from enacting laws which would attempt to bring them back under government control. So to some extent, CWII ties in with a rather desperate attempt by the world's governments to wrest back control of their national portions of the global economy. Hence demands such as these by the governments of the USA and CN and other nations:

> China has also irked the international business community with a series of national security laws that foreign businesses say could give Beijing access to their source code and user data.

All the world's governments also see themselves at war with their own citizens, because increasingly the masses everywhere see their interests as being grossly abused by the political/economic elite.

So in broad outline, the history of the 21st Century seems likely to involve a grand global struggle between governments, corporate mega-conglomerates, and citizens. All three of these groups will increasingly tend to put the situation like this:

It's all of them against all of us.

Yes, that's an exciting feature indeed. It fits with our goal of making a product that works with existing social networks and yet provides a way for secure communication.

March 11, 2016

Permalink

@ sukhbir:

Thanks for bringing us TM! Early days yet but I think this will be great!

Suggestion for the FAQ:

https://trac.torproject.org/projects/tor/wiki/doc/TorMessenger/FAQ

Under libpurple, add what you replied in a previous thread to questions about the security of JavaScript. (You said it was a decision you made carefully, and that you believe JavaScript as used in TM is not as dangerous as JavaScript as used in Mozilla Firefox, and thus in TB. Words to that effect.)

I hear many good things about Ricochet. Maybe explain in the FAQ why TM is not compatible with Ricochet and whether that might change?

This is a big one: any chance you can persuade Shari to have Tor people reach out to journalists to promote TM use by journalists and whistleblowers? That doesn't quite contradict "at-risk people shouldn't beta test" since there is *no alternative* to TM that I see for at-risk people who need to chat with a journalist.

Good point, I've updated the FAQ with a section on JavaScript.
https://trac.torproject.org/projects/tor/wiki/doc/TorMessenger/FAQ#Java…

We've been planning a table to detail who has access to what metadata for the various protocols that Tor Messenger supports, and how that compares to Ricochet.
https://trac.torproject.org/projects/tor/ticket/17528

Isn't Ricochet an example of an alternative though? She'd sooner encourage you to use that. It was recently audited,
https://ricochet.im/files/ricochet-ncc-audit-2016-01.pdf

Tor Messenger has only had a minimal internal audit at this point, with lots left to cover. We mean it when we say, "At-risk users should not be depending on it for their privacy and safety."
https://trac.torproject.org/projects/tor/ticket/10944

March 14, 2016

In reply to arlo

Permalink

OT, but speaking of tables, I would love to see Tor Project, in concert with EFF, Access, ACLU and other civic-minded groups, reach out to student orgs at major universities around the world, but especially in the US, possibly providing materials for "tabling", in which activists sit at a table at some prominent location on campus, perhaps near the student union, and offer information, leaflets, advice, Tails DVDs, and Tor stickers to students. Another good location might be outside Apple stores or Google stores, if management agrees.

March 26, 2016

In reply to arlo

Permalink

Unfortunately Tor Messenger is much more stable than Ricochet currently. With Ricochet I've been having issues seeing people online who I know are online (because they're in the same room as me), while we can see other people online (so we're clearly connected to the network). Plus random crashes. I guess I should file a bug report.

Tor Messenger is working great, though! Would like to be able to have the same options editing an account as creating it. (For example, if I made a typo in the username/server when adding an account, I have to delete and re-add to fix it.)

March 11, 2016

Permalink

@ sukhbir:

Can you please have a dozen other Tor Project people sign your signing key? Some of us actually do try to use the Web of Trust. I know it's clunky and far from ideal, but given the lethality and determination of our enemies, we need to use every tool at our disposal.

Thanks for your work on TM! Stay safe, and don't let the b*tards get to your keyring!

March 14, 2016

In reply to sukhbir

Permalink

Good, good.

I beg you to respond to questions raised on this blog about the rapidly rising potential that the USG and its allies will attempt to compel TM devs (and other signers) to sign, under duress, an FBI-bought/made modification of TM, for either a targeted or dragnet attack on your users, who would accept the fake TM as genuine because they have no way of knowing the signature was compelled.

This would be analogous to a healthy patient (happy TM user) who goes to the doctor for an annual flu shot (TM user who goes to torproject.org for latest edition of TM), who trusts her doctor (Tor Project) to inject her with a genuine beneficial health-preserving medicine (latest edition of TM), but unknown to the hapless patient, the government has coerced the doctor to replace the genuine vaccine with a deadly neurotoxin (state sponsored APT malware), and forbids the doctor to tell anyone what happened.

Can something like SSSS (Shamir's Secret Sharing Scheme) and geographic diversity of the TP people who sign future editions of TM help?
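For readers unfamiliar with Shamir's scheme, here is a minimal illustrative sketch in Python of the split/recover idea (a toy over a prime field; it says nothing about how Tor Project actually manages its signing keys):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime defining the finite field

def split_secret(secret, threshold, shares):
    """Split `secret` into `shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        points.append((x, y))
    return points

def recover_secret(points):
    """Lagrange interpolation at x=0 recovers the polynomial's constant
    term, which is the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Multiply by the modular inverse of den (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, threshold=3, shares=5)
```

The point of a 3-of-5 split is that no single coerced developer could produce a valid signature alone; any three shareholders, ideally in different jurisdictions, would have to cooperate.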

March 11, 2016

Permalink

@ sukhbir:

Although sha256 is currently considered good, md5 and sha1 are considered thoroughly broken by our most lethal adversary, NSA. Given that NSA is known to be expert in breaking cryptographic hash algorithms, wouldn't it be wiser to use GPG to sign the tarballs directly, instead of signing a statement of the expected sha256 results?

(If you hear different from anyone like Bruce Schneier or Jacob Appelbaum I defer to their judgment. But please share what they said with us!)

The reason we sign sha256sums.txt instead of the bundles directly is that it makes verification easier when multiple people build Tor Messenger, which is currently the case; the Linux builds are reproducible, so boklm and I each build, compare the hashes, and then I sign the sha256sums.txt file with my key. This also makes verification by users easier, and we encourage you to do so.
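To make the user's side of this concrete, here is a rough Python sketch of the hash comparison one would do after verifying the signature on sha256sums.txt with `gpg --verify`; the file names and demo contents are made up for illustration:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_sums(sums_path, bundle_path):
    """Check that bundle_path's digest matches its line in the
    (already GPG-verified) sha256sums.txt file."""
    expected = None
    with open(sums_path) as f:
        for line in f:
            digest, _, name = line.strip().partition("  ")
            if name == bundle_path:
                expected = digest
    return expected is not None and expected == sha256_of(bundle_path)

# Demo with a throwaway file (the real inputs would be the downloaded
# bundle and the signed sha256sums.txt):
tmp = tempfile.mkdtemp()
bundle = os.path.join(tmp, "bundle.tar.xz")
with open(bundle, "wb") as f:
    f.write(b"pretend this is the Tor Messenger bundle")
sums = os.path.join(tmp, "sha256sums.txt")
with open(sums, "w") as f:
    f.write(sha256_of(bundle) + "  " + bundle + "\n")
ok = verify_against_sums(sums, bundle)
```

In practice the same comparison is a one-liner with `sha256sum --check sha256sums.txt` after the signature on the sums file has been verified.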

March 12, 2016

In reply to sukhbir

Permalink

I would support that, Anonymous. It is much more convenient to run a single "gpg --verify file.asc" command than to compare hashes with grep. I would also point out that the Tor Browser Bundle provides both: a file of hashes signed by all developers, and a directly signed tarball (with a detached signature).

Thanks for the support. Yes, I too don't see how a signed statement of hash values is more convenient than a detached gpg signature.

In any case, the new threat Tor Project needs to guard against is the possibility that the FBI will compel coerced signing of FBI-modified files using authentic signing keys, and/or compel CAs to collaborate in secretly evading HTTPS and sending Tor users to a fake torproject.org site serving malware, and possibly also compel faked statements from verifiers that the hash values for their own builds check out.

It is far from clear that FBI could not also compel the verifiers to cooperate if they thought that would be useful. Once FBI starts using Putin-style rubber hose breakage of cryptosecurity measures, it would be difficult to imagine any national government which is truly immune from USG pressure. At best, some governments might be strong enough to demand a quid pro quo for their cooperation.

> I too don't see how signed statement of hash values is more convenient than a detached gpg signature.

In the open-source community it is quite common to sign a file of hashes, because typically one needs to sign a bunch of different files. However, if only a single file or a couple of files will typically be downloaded and verified by users (e.g., the TBB case), then it is better to produce separate detached signatures for those files too.

March 14, 2016

In reply to sukhbir

Permalink

I always do, but I am very concerned that DOJ will shortly serve TP with a court order demanding that devs abuse the genuine cryptographic signing keys by signing a version of TB or TM which has been modified by USG agencies or contractors to contain a covert, undetectable APT malware function (a cyberespionage backdoor with a cyberwar data modification/destruction function), and prohibiting any of you from telling anyone what happened. This could be used in either a targeted attack on some Tor users (people who speak out against FBI abuses, for example) or a dragnet attack on all Tor users.

March 12, 2016

Permalink

There are already Tor Chat, Ricochet, and other ones out there. Would it be best to combine them all into one and merge with Tor Messenger?

I seem to recall that there may be some technical reason why Ricochet and TM can't work well together?

I think TorChat has been superseded by Ricochet (the former is no longer being developed, the latter is).

I am not sure whether I can use Ricochet yet, but the software is available as a free download (with a gpg detached signature) and the coder seems to know Sukhbir, and as mentioned above, unlike TM (so far), Ricochet has been audited and fared well--- which is better than most or even all other IM apps, I think, except Signal. Signal is only available for iOS devices, I believe.

Signal is available for Android, but won't work if you have a de-googled operating system without the Google Play Store service.

I'm waiting for something like Ricochet for phones.

March 12, 2016

Permalink

After connecting, receive error message: "Received unexpected data. Reconnecting in X seconds." Rinse, repeat. Using Yosemite.

March 15, 2016

Permalink

Is Twitter group DM supported with OTR or just a one on one DM conversation?

The last time I used Tor Messenger with a Twitter account, it didn't prompt for a password the way Jabber does. That would be bad for people who share computers.

OTR is for one-to-one conversations, so no, Twitter group DM is not supported yet. Re: your second point, do you mean you would like an option not to save the Twitter password?

March 17, 2016

Permalink

A million thanks to Calyx Institute for allowing us to register jabber accounts anonymously. But are they prepared to defend against FBI demands to backdoor their chat accounts?

The current edition of TM appears to allow my laptop to successfully connect to my Calyx account, but I think I accidentally generated a new fingerprint when I intended to import my old one (I tried to follow the directions in the FAQ). Any advice on checking/fixing?

Hoping to engage in one-on-one encrypted chat with a tech reporter or two or three million...

March 17, 2016

Permalink

I don't think I am using TM correctly.

Is there a simple tutorial showing how to initiate an OTR chat once you are connected to a chat server?

March 18, 2016

In reply to arlo

Permalink

My questions:

1. Using a previous edition of TM, I was able to create an account on a "Jabber" (xmpp?) chat server A and to generate OTR keys. J said he would be available there, but his handle suggests he is using another chat server entirely, chat server B.

When the current edition of TM came out, I followed the directions to import my previous key's fingerprint into the current TM's directory on my computer.

When I launch TM using the provided script, it appears to connect successfully to chat server A. But I don't seem to be able to contact J for a private OTR chat. And it seems TM might be confusing me with J.

Any suggestions?

2. I usually use Tails and have a strong preference for security/amnesia as well as anonymity. To use TM, AFAIK, I have to go online with my usually offline OS (Debian stable). I believe TM tries to keep the connection to chat server A alive by exchanging data every second. Is that right?

When I experiment with latest edition of TM, I also see an http connection to a mystery server associated with my ISP every minute or so, and this worries me. Should I be worried?

Vague suggestion for writing future TM documentation:

Some of your prospective users have virtually no experience with chat, and not all of them can buy a chat account. So we need explanations for everything. Videos work well for many people, but some of us disable videos out of security concerns.

March 19, 2016

Permalink

I have TBB 5.5.4

I keep getting disconnected when using Tor Messenger on my Twitter account. This message keeps appearing:

http://pho.to/A4iMY

While this message appears in the Tor Messenger tab in the section I try to send a Twitter DM to someone:

"An error (Your credentials do not allow access to this resource.) occurred while sending: ?OTRv2?
(my twitter account name) has requested an Off-the Record private conversation. However, you do not have a plugin to support that. See http://otr.cypherpunks.ca/ for more information."

I do have the OTR plugin. The people I try to DM also have Tor Messenger. They can't DM me either on Tor Messenger. Here is a screenshot of the OTR plugin I have:

http://pho.to/A4iOG

This also appears with a 401 error. It keeps getting disconnected, then re-connects, then disconnects and on and on.

http://pho.to/A4iMY

The Instantbird app does appear in my twitter account.

http://pho.to/A4iP5