In the USA presidential election of 2016, e-mail was stolen from the Democratic National Committee and the campaign organization of Hillary Clinton. I am sure that worst practices were followed in several areas, but in this essay I'm focusing on just one aspect, secure e-mail. I'm taking this opportunity to tighten up my own e-mail security.
Abstract: I continue to use RoundCube webmail on my private e-mail server. I am using RoundCube's Enigma plug-in to sign, verify and encrypt mail. I'm accepting the security imperfection of having my secret key on the webserver (itself encrypted), to get the benefit of message integrity through signed e-mail. Users for whom RoundCube is not feasible would use Mozilla Thunderbird (for Linux, Windows or OS-X, not handheld devices).
Published reports of the hacks vary in their depth and usefulness for someone trying to improve security. If you are trying to improve, you should ask whether you are vulnerable in these various ways, and fix the problems if you are.
While the stolen mail had only a few real nasties (that led to the resignation of Debbie Wasserman Schultz as the DNC chairperson), if the organization regularly deals with exploitable secrets then their mail should be encrypted. A nicer alternative is for them to conduct their business in a way that does not need secrecy.
In Julian Assange's manifesto "Conspiracy as Governance" (2006-12-03, page 7 in the linked document collection) he takes the view that evil governments cannot function without secrecy. If he can reveal secrets en masse, he can shut down the evil governments he has taken on.
In "Want to Know Julian Assange's Endgame? He Told You a Decade Ago" (2016-10-14 by Andy Greenberg in Wired) this document collection is reviewed in connection with the 2016 election. If you are a minion of evil government, you need to take encryption seriously. And you should also take seriously the abuses of power of your employer.
Neither the DNC nor the Hillary campaign was able to confirm which of the leaked messages were authentic, i.e. unaltered. This strikes me as a serious weakness.
On 2016-10-10, Kurt Eichenwald of Newsweek posted a story titled "Dear Donald Trump and Vladimir Putin, I Am Not Sidney Blumenthal". He reported that one of the documents posted on WikiLeaks was an e-mail message allegedly sent by Sidney Blumenthal, containing content which Eichenwald had previously written and published, rearranged into a "damning revelation" about Hillary Clinton's handling of the Benghazi incident. If Blumenthal were known to regularly sign his messages, this forgery would not have been possible.
John Podesta, the manager of Clinton's campaign, received a legitimate-looking e-mail from "Google Security" warning him that his Gmail account had been abused and he needed to change his password. The message included a convenient link -- to the Russian attackers' fraud site. He followed it, and the destination site acted as a man in the middle, so both Google and the Russians knew his new password. His mail was then downloaded from Gmail.
In a situation like this, do not click on links you cannot be confident in; instead, originate a connection to your business partner yourself, using your own bookmark or by typing the URL by hand from memory, and then follow the account maintenance link on their own page (not the fraud site).
If Google Security normally signed their messages, and if their customers could know and trust Google's public key, a user would immediately recognize that the phishing e-mail was fraudulent, because the attacker's signing key would not be vouched for by a trust anchor that the victim believes in.
Do remember that it is fairly easy to purchase trust (an X.509 host certificate) in a domain name that is similar but not identical to the one the victim is expecting, e.g. gmail.google.com (correct) vs. gmail.g00gle.com (fraudulent). For digital signatures the Web of Trust has to be applied effectively.
Here is some related reading:
The term message integrity means that the recipient or the general public can verify for themselves that the sender (and not an impostor) did send the message and that it was not altered afterward.
When the sender signs a message, she uses her secret key to append a code that is very hard to fake; this is what gives message integrity.
When the sender encrypts a message, she turns it into random-looking gibberish which only the intended recipient(s) can decrypt, i.e. return to readable form. An attacker faces a very hard task to decrypt the message without knowing the recipient's secret key.
In modern cryptography each participant creates a private or secret key, never to be revealed, and then derives from it a public key, which is published so everyone who needs it can get it. The secret key is protected from theft by a physically removable container (smart card), biometric data, and/or a good password, all of which must be gotten past to wield the secret key. If even so the enemy manages to steal the secret key, all security assurances are void. Inverting a public key to steal the secret key is very hard.
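As a concrete illustration, here is a minimal GnuPG sketch of creating a key pair and handing out the public half; the e-mail address and file name are placeholders, and the exact prompts vary with the gpg version.

# Create a new key pair interactively; answer the prompts and pick a good passphrase.
gpg --gen-key
# Confirm what was created.
gpg --list-keys
# Export the public key, ASCII armored, so correspondents can import it.
gpg --armor --export you@example.org > pubkey.asc

The secret key never leaves your keyring in this procedure; only pubkey.asc is handed out.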
The very hard tasks mentioned above require billions of dollars worth of computing equipment working together for months or years, to finish stealing that one secret. The hacking exploits are well known, and generally are published together with the encryption procedure, but are outrageously expensive.
First my goals for this project:
All e-mail that I send shall be signed. I hope that correspondents will also sign their mail; all signatures seen shall be checked and the result displayed.
I will very rarely encrypt mail or receive encrypted mail, but an encryption feature is desired for other people to use.
I want the solution to be feasible not just for me but for general users who are not highly trained and experienced in information technology. I am trying to set an example for people of good will so they can avoid having this year's debacle repeated.
The wire format needs to be standard and interoperable, so the sender and recipient can have integrity (signed messages) and encryption even though using different mail readers.
The solution needs a balance between security, availability, and usability. Security means that it will ensure the integrity (and privacy) of the mail despite attacks from motivated geopolitical enemies. Availability means that the user should be able to sign, verify, encrypt and decrypt the mail when needed, not to be prevented by ordinary disruptions or by enemy attacks. Usability means that the procedures should be simple and easily performed, so that users will use the solution and will not bypass it because it is too onerous.
As a component of security, it must be feasible for any user at any time to look in the program's code and verify that correct actions are happening and maliciously inserted incorrect actions are not happening. It is understood that ordinary users will not have the skill to read the source code, but they are expected to outsource to the expert of their choice who does have that skill. If the software is proprietary the user must trust the vendor, which does not really count as security. Therefore the solution software must be open source.
There are a few issues with the basic design of mail readers and attached crypto packages.
In general, messages should reside on the server, not the client(s). Server residence has these advantages:
Some mail readers are very helpful and user-friendly: they index the mail locally and download the message bodies for quick presentation. The balance is pushed too far toward usability and too far away from security; see above for the advantages of residence on the server.
Webmail is an application that executes on the webserver. It retrieves mail and message indices and formats them into a web page which the user can read with any web browser. Typically the user has different software on each device on which she reads mail, e.g. Microsoft Internet Explorer on the work desktop machine, Mozilla Firefox for Linux on the home desktop, and Safari on the Apple iPhone. Making all of them do crypto is turning out to be impossible, whereas if the webmail app handles crypto, interoperation becomes very feasible.
However, server-side crypto means the server needs access to the user's secret key. This means the user needs a private e-mail server that she can trust. A public service like Gmail keeps custody of your secret key in a way that maximizes their profits, which cannot be considered secure, regardless of the well-respected skills of e.g. the Google security team. A possible compromise for work is to tolerate enterprise I.T. having custody of work-related secret keys.
Another problem with server-side crypto is that all the secret keys are readable by the webserver's special user (wwwrun on SuSE systems). The webserver must be configured carefully so untrusted user-written web apps execute as the user and not as wwwrun. A mitigating factor with RoundCube's Enigma is that the secret keys are encrypted and the passphrase is saved (itself encrypted) in the session cookie, so the sessions of inimical other users will not have the passphrase.
The mail readers I have seen are not designed for mail security. In particular, for the nicest display the message has to be transformed in various ways, e.g. emoticons are replaced with cute little pictures, or quoted messages are highlighted more readably than the original text. Of course, after alteration the message has a different hash than the one in the signature and is rejected. The mail reader has to sign the exact message being sent and verify the exact message received.
Data flow in my mail system:
I use an outsourced mail exchanger (MX) (MailRoute). Spammers, and a few legitimate correspondents, send to that server. It detects and tags spam, doing better than I can do locally, then forwards the mail to my private server.
My server accepts mail via SMTP on port 587 with TLS, using Postfix. The mail is filed in my home directory using maildir format, each message in a separate file. Postfix uses root privilege to write on these files.
My mail delivery agent is Dovecot. Mail readers connect to it using IMAP-4 on ports 143 (recommended) or 993 (deprecated). My security rules require TLS to be used, although the server configuration for port 143 does not enforce it (a quick way to check appears after this data-flow list). Dovecot uses root privilege to read and write the mail files.
I prefer to read the mail with RoundCube webmail, which is a web app, executing as the Apache user. Besides accepting a loginID and password (validated by PAM, i.e. using host authentication), it has a plugin which is sensitive to various styles of web authentication including Kerberos and X.509 certificates. It uses split authentication and authorization to connect to Dovecot as myself, since web authentication is not transitive.
Outgoing messages are passed to Postfix to be sent out.
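Here is one quick way to confirm that the server really offers TLS on those ports; the hostname is a placeholder for your own server.

# STARTTLS on the standard IMAP port:
openssl s_client -connect mail.example.org:143 -starttls imap
# Implicit TLS on the legacy port:
openssl s_client -connect mail.example.org:993

Both commands should print the server's certificate chain; if the first one fails to negotiate TLS, port 143 is running in the clear.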
Here is Wikipedia's list of mail clients. Clients are filtered by:
These readers and webmail programs survived the selection procedure. A lot more programs were rejected; they are listed in an appendix.
This is the currently market leading webmail program. A lot of mail services have RoundCube installed, some quite large, including jimc and UCLA-Mathnet. The users like it. Inception 2006, v1.0.0 2014-04-07, latest 2015-09-14; the Wikipedia article is not up to date; v1.2.2 was released on 2016-09-28. IMAP4, LDAP3, IPv6. Authentication: PLAIN; jimc has hacked for all web auth methods. PGP support (Enigma plugin) is distributed with the core software.
GUI uses XUL, cross platform. For Linux, Windows, OS-X, none for Android. Inception 2003-07-28, v1.0 2004-12-07, most recent 2016-10-03, active development. IMAP4, IPv6, partial LDAP. CardDAV via the Lightning plugin. Authentication: PLAIN, X.509, find out about GSSAPI. S/MIME is supported intrinsically; OpenPGP by the Enigmail plugin. PIM features are available through Lightning, which is installed by default.
SeaMonkey is a continuation of the Mozilla Application Suite (discontinued); this is the mail client component. Much code is shared with Thunderbird. GUI uses XUL. Inception 2005-09-15, v1.0 2006-01-30, recent 2016-03-14, actively maintained. IMAP4, IPv6, partial LDAP3. Authentication: PLAIN, X.509, find out about GSSAPI. Has all PGP and S/MIME features, also all other "general" features. (Other suite components: web browser, HTML editor, and IRC client.)
It was forked from Sylpheed. Inception 2001-05-xx, v1.00 on 2005-01-xx, latest 2016-11-06, i.e. active development. IMAP4, LDAP3, IPv6. Find out about CardDAV and CalDAV. Authentication: PLAIN, no X.509. PGP and S/MIME support: intrinsic GPG interface. Has 3 variant plugins to view HTML mail.
Claws Mail was its development version but eventually forked completely. Inception 2000-01, v1.0.0 2004-12-xx, recent 2016-07-29, actively maintained. IMAP4, LDAP3, IPv6. Authentication: PLAIN, no X.509. Has PGP support, no S/MIME. Does not display HTML messages.
Outlook is able to sign and encrypt e-mail. But it is proprietary (not open source), and so cannot be considered secure. It is available on Microsoft Windows, Apple iOS, and Android, but not desktop Linux.
Of the local clients, Thunderbird is the most promising, followed by Claws Mail. I'm not tempted to try SeaMonkey or Sylpheed in competition with their newer or more well-known siblings. No credible local client is found for Android. RoundCube is the currently market leading webmail. Its competitors are either obsolete or limited. I am currently using RoundCube webmail, and the way this selection process is going, I expect I will end up using its crypto support rather than switching to a local client.
This is an add-on for Firefox, using OpenPGP (RFC 2440) wire format. It is intended to work with (almost) all webmail services, when they are viewed with Firefox. It implements PGP within the plugin, coded in Javascript, and it stores your secret and public key(s) in its own file. Jimc's experience is that it works great some of the time, but if the webmail service alters the message, e.g. converting emoticon text codes to pictures or using bars vs. '>' signs to quote message fragments, signature verification fails. Therefore I'm looking for a different crypto add-on.
This plugin is distributed with the RoundCube core. It can use the OpenPGP (RFC 2440), PGP/MIME (RFC 3156) and S/MIME (RFC 3369 et seq) wire formats. It can be configured to sign and/or encrypt all messages; this is off by default. It uses gnupg and phpssl (for S/MIME) as backends. By default, users' keyrings are kept in .../roundcubemail/plugins/enigma/home, which must be read/writable by the Apache user. See the discussion previously about the security implications of storing your secret key on the server.
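Since those keyrings hold (encrypted) secret keys, the directory should be readable only by the webserver user. A minimal sketch, assuming a SuSE-style layout and the wwwrun user with group www (adjust the path and names to your installation):

# Create the keyring home and lock it down to the Apache user.
mkdir -p /srv/www/roundcubemail/plugins/enigma/home
chown -R wwwrun:www /srv/www/roundcubemail/plugins/enigma/home
chmod -R go-rwx /srv/www/roundcubemail/plugins/enigma/home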
It's a plugin for Thunderbird and SeaMonkey. Wire format is OpenPGP (and I think I saw that it can do S/MIME also, of course not in the same message). gpg-agent required.
For Android. Uses OpenPGP wire format.
It's an Android app and is sort of like GPG for desktop Linux. It has integration with the K-9 mail reader.
Requires Crypt/GPG.php which we need to get from the SuSE Build Service. On SuSE it's package php5-pear-Crypt_GPG. Dependencies: php-pear(Console_CommandLine) in package php5-pear-Console_CommandLine.
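On a SuSE system the dependencies can be pulled in with zypper, assuming the relevant Build Service repository has already been added; the package names are the ones given above.

zypper install php5-pear-Crypt_GPG php5-pear-Console_CommandLine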
To export your private and public key use these commands. RoundCube Enigma requires ASCII armored files (the --armor option). When you export the secret key the public key is included; you don't need two command executions and files. The umask is set so the resulting files will be readable only by you.
(umask 077 ; gpg --output pubkey.asc --armor --export jimc@jfcarter.net)
(umask 077 ; gpg --output seckey.asc --armor --export-secret-keys jimc@jfcarter.net)
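If you don't remember which address the key is tagged with, or want to sanity-check the exported files, these listing commands help (the file name matches the export above):

# Show the user IDs on your keyrings.
gpg --list-secret-keys
gpg --list-keys
# Dump the OpenPGP packets in an exported file, as a sanity check.
gpg --list-packets pubkey.asc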
Log in to RoundCube. Turn to Settings, and within that, the PGP Keys tab. Click on the Import Keys icon (above the key name panel; use the tooltip to tell which icon does what). Pick one of the key files using the Browse button, then hit Import.
Oops! "PHP Error: Enigma plugin: Unknown error importing GPG key." But hit Refresh; the key(s) were imported and appear correct.
Enigma appears to do the operation but then gets an unknown error, tested on importing a public key and deleting it. The problem turns out to be that the Web of Trust database doesn't exist. Keep an eye on this bug report for Enigma for how the issue gets resolved. In the meantime if you have gpg2-2.0.24, just ignore the error, hiss, boo!
The state as of 2016-11-24 is: gpg2-2.0.24 incorrectly opens trustdb.gpg even when told not to, and gets this error. I don't know how far back this bug goes, probably not far. It's fixed in gpg2-2.0.27. However, new versions of my distro are using gpg2-2.1.16. This version, and probably all 2.1.x versions, has a bad interaction with the Enigma plugin: when importing a secret key, gpg expects enigma to send it something, which isn't happening, so it just sits there and the secret key isn't imported. (The public key does get imported.) Watch the bug report to see progress on the issue with gpg2-2.1.16.
The real problem is that the Web of Trust is not being used. For Enigma to be sensitive to the Web of Trust is not as easy as it looks. The developer intends to make progress on this, but it isn't working yet. So the degree of trust in a key cannot be configured. You need to put only trusted keys into your keyring on the webserver.
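One workaround I have not fully tested: create the missing trust database by hand in Enigma's per-user GnuPG home directory, so gpg2-2.0.24 has something to open. The path below follows Enigma's default layout and is an assumption.

# Build (or rebuild) trustdb.gpg in the webserver-side keyring.
gpg --homedir /srv/www/roundcubemail/plugins/enigma/home/jimc --check-trustdb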
Ignoring the errors, I created a message with no gotchas, which in previous tests confused Mailvelope. There's a padlock icon in the compose window, second from the right of the icon row. Click it; you get checkboxes to sign, encrypt, or attach the public key. This time I just signed and attached the key. With the checkboxes still showing, hit Send. It pops a dialog asking for the passphrase.
I read the message. There are two attachments: the signature and the public key that I requested. The signature is verified. There is an icon to import the attached key (which I already have). No error messages in any of this.
I re-did many of the tests I used for Mailvelope. This is all with web authentication (Kerberos), and also there is no guarantee that the password used for RoundCube is the same as the passphrase for the various secret keys, so Enigma will have to ask for the password itself. By default the password persists until you log out of RoundCube; you can set a site-wide time limit.
So Enigma for RoundCube seems to be operational and reliable for e-mail security.
There is one last detail in using PGP keys: anyone can create a secret and public key pair tagged with whatever name they choose, so how do you know that a public key is authentic?
What does it mean for a key to be authentic? In one class of applications, all you care about is that the same person returns in a sequence of visits. For example, someone registers at your online store by giving a public key. She places an order (and pays). Then when she returns to check order progress or do customer service things, you want to know authoritatively that it is the same person returning, and you know that from a digital signature which requires the customer to wield her secret key. But you don't need to know the customer's real-world identity.
On the other hand, the most important cases involve real identities, for example security notices (avoid phishing) or work instructions: is your boss telling you to do something or is it an enemy? PGP (and GPG) handles key authenticity by a Web of Trust. Here's an example.
As a campaign worker you receive a public key from the hand of someone you trust, such as Hillary Clinton. She signs the public keys of senior campaign staff such as your boss. Your boss will give you his or her public key with Hillary's signature, or someone you haven't met may send you a signed message with his public key attached. Because you have Hillary's key you can recognize that your boss's key was signed with hers, and you will then believe in your boss's key.
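A sketch of what that looks like on the command line, with hypothetical file names and addresses:

# Obtain the trust anchor's key in person, then import it.
gpg --import hillary-pubkey.asc
# Import your boss's key, received over any channel.
gpg --import boss-pubkey.asc
# Check whose signatures (certifications) are on the boss's key.
gpg --check-sigs boss@example.org

If Hillary's signature shows up as good on the boss's key, that key can be trusted even though it arrived over an untrusted channel.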
GPG has this trust procedure automated according to configured security rules. But unfortunately the current version of Enigma is not able to use the Web of Trust, so you need to assess trust by hand (or by another GPG management program like Seahorse), and you need to put only keys that you trust into the keyring that Enigma has on the webserver.
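Assessing and recording trust by hand, on a keyring you control, can be done with gpg's interactive key editor; the address is a placeholder:

# Inspect the key and set its owner-trust level at the gpg> prompt.
gpg --edit-key boss@example.org
#   gpg> trust      (pick a level, e.g. full)
#   gpg> quit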
Then a message signed with a trusted key will be tagged as having a good signature. Whereas if the message were validly signed by someone not on your Web of Trust, it would be tagged as valid but untrusted, and you should treat it as phishing or a similar fraud.
Unfortunately the Web of Trust does not scale globally, and also, a compromised high-level key can be very dangerous. See this Wikipedia article about the DigiNotar debacle in which a national level X.509 certificate authority (not Web of Trust) was compromised with massive inimical consequences perpetrated by a repressive regime. So how is an ordinary user such as myself going to trust a signed message from outside, for example a notice, like the one John Podesta got, that my Google account had been compromised?
Web browser vendors work together to make a common list of approved X.509 root certificates (excluding DigiNotar's root certs, among others), and the various operating system distros include a package of these certs, signed by their own software signing keys, which are distributed to the end users as part of the distro. The X.509 trust anchor issue is different from the PGP Web of Trust, but issues there can illuminate how we could establish a Web of Trust when person-to-person trust relations are impossible.
Even so, the Web of Trust can be effective as-is within a smaller organization.
So with RoundCube's Enigma I have achieved my main goal, to be able to routinely sign my mail. But some other goals were not met.
All outgoing e-mail can be signed and/or encrypted. Enigma can be configured to make these the default, but I didn't. Incoming signed mail is verified and/or decrypted, and the outcome is displayed.
The wire formats are standard and are supported by most of the alternative crypto packages. I have not personally verified interoperability.
This is open source software, a very important point when security is the goal because the code can be inspected.
Enigma's balance between security, availability and usability seems to be not too far off the mark.
The major unmet goal is that few users will be competent to install a webserver plus RoundCube. In contrast, a local mail reader such as Thunderbird is normally included with the distro, and installing the Enigmail add-on is simple; but the disadvantages of the local reader have also been discussed above. As a compromise for work, it would be reasonable to deem the enterprise I.T. staff trustworthy to keep custody of the secret keys of work-related identities. In other words, enterprise I.T. should install RoundCube. But it is unacceptable for one of the big commercial mail services like Google's Gmail to have access to the secret keys, no matter how respected their security and operational staff are.
These passed initial filtering but were rejected later.
No Windows. Intended for Gnome. It's been around for a long time but may be on the back burner: last stable release 2015-05-30. Intrinsic GPG, also GSSAPI and OpenLDAP. But it's not going to be useful on non-UNIX OS's.
No Linux. Strange licensing and support. Poor security fixes and customer support for UCLA-Mathnet; they switched to Thunderbird. Last known release about 2006-10-11 (10 years ago). Rejected.
Both GUI and text interface provided. Inception 1987, v2.0 1998-02-01, most recent 2015-09-18. Not promising. It is written in Emacs Lisp and runs under Gnu Emacs. Not going to fly.
Part of Horde, a PIM suite. Pure webmail. Inception and v1.0 1998. Most recent 2016-09-06, active development. Information is missing (research not done?) about protocols served, and about other details. Jimc had poor user/sysadmin experience with Horde and replaced it with RoundCube.
Part of KDE, but is listed as cross platform. Inception 1998-10-xx, v1.0.17 1999-02-xx, most recent 2014-11-11, 2 years ago for KDE-4; looks like it's been replaced by something else. Also Jimc doesn't use KDE and wants to avoid dragging in all the infrastructure just for this program. Rejected.
Webmail. No data on release dates or development. Jimc has never heard of it and already has two good webmail candidates. Forget this.
GUI uses XUL, cross platform. Part of the Mozilla Application Suite, which has been officially discontinued but carried forward as SeaMonkey, q.v. Inception 1996-09-20, last release 2007-02-21; forget this one.
GUI uses XUL. No data on inception or releases. It was a fork of Thunderbird but has been discontinued.
Webmail. Jimc had moderately good experience with it. Inception 1999-12-xx, v1.0 2001-01-xx, last release 2001-07-12, seems to not be actively maintained. Too bad.
It would be a good idea to explain some concepts from cryptography here.
When I say that an operation is hard I'm talking about billions of dollars worth of computing equipment working together for months or years on that one problem. While computers become more capable as the years pass, it is relatively inexpensive for the intended victim to increase the problem size, putting an attack out of the enemy's reach. For example, in the near future it will become prudent to lengthen symmetric keys from 128 bits to 256 bits, which is not a big burden for the legitimate users, but is an insurmountable burden for the attacker. Generally the procedure for the exploit is well known, and the attacker cannot be prevented from putting on enough resources to perform it, except that there are practical limits to the resources available.
Your e-mail message is already encoded in numbers, and arithmetic functions can be applied to it, which accomplish the cryptography, e.g. encrypting, decrypting, signing or verifying the message.
When encryption is done with a symmetric key, the recipient needs to use the same key in a different function to decrypt the message. Then you have a big problem informing the recipient safely of the key. During World War II the Allies on several occasions intercepted keys being distributed, without the enemy's knowledge, and were able to decrypt important messages, e.g. sent to submarines (on the surface) by radio. Modern cryptography uses an asymmetric key: one party has a private or secret key, never to be revealed, and a public key is derived from it and posted for the other party (and the enemies) to use. It is very hard to map a public key back to the private key.
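The two styles look like this with GnuPG; the file names and address are placeholders:

# Symmetric: both sides must somehow share the same passphrase.
gpg --symmetric --armor message.txt
# Asymmetric: encrypt to the recipient's public key; only the matching
# secret key (plus its passphrase) can decrypt.
gpg --encrypt --armor --recipient alice@example.org message.txt
# Either way, the recipient runs:
gpg --decrypt message.txt.asc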
Signing a message means that the sender appends a code by which the recipient can be assured that the sender made the signature and that the message was not fraudulently altered later. When you sign a message you first compute a function, called a hash function, with a short result -- currently 256 bits is recommended -- but for which it is hard to fraudulently alter the message and have the same result come out. The recipient will compute the same hash function, and will reject the message if the result is not as expected. But could an attacker alter the message, re-hash it, and get the victim to believe in the new hash? The sender encrypts the hash with his private key. The recipient decrypts it with the sender's public key. If it fails to match the hash of the received message, then either the message was altered, or the private key of someone other than the alleged sender (i.e. the attacker) was used, or both. Thus the recipient can be assured that the sender made the signature and that the message is unaltered.
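GnuPG wraps the hash-then-sign steps into one command; here is a sketch using a detached signature, with placeholder file names:

# The hash alone proves nothing; anyone could recompute it.
sha256sum message.txt
# The sender signs: hash the file and encrypt the hash with the secret key.
gpg --armor --detach-sign message.txt
# The recipient verifies the signature against the original file.
gpg --verify message.txt.asc message.txt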
When the message is encrypted, the idea is that only the intended recipient(s) can decrypt and read it. Also, public key encryption is not very efficient. The usual procedure is that the sender picks a symmetric key at random, and encrypts the message with it. Then the symmetric key is encrypted with the recipient's public key. If there are several recipients, each of their public keys is used in turn. The recipient uses his private key to retrieve the session key, and then uses that to decrypt the message.
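GnuPG performs this hybrid procedure automatically; listing several recipients makes it encrypt the one random session key to each of their public keys (addresses are placeholders):

gpg --encrypt --armor -r alice@example.org -r bob@example.org message.txt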
In the normal case that the message is both encrypted and signed, which should be done first? The standard doctrine is that a message with a bad signature should not be touched in any way, lest unrecognized threats be set off. (I've seen a cute story: a dog's master sends a signed message warning that there is a dangerous coyote in the back yard. But the evil cat changes the word "coyote" to "squirrel". Of course the signature doesn't match, but when the dog sees "squirrel" it immediately runs to the back yard and is eaten by the coyote. Moral: check the signature first.) So the sender should encrypt first, and sign the encrypted text. However, the GPG software handles nested MIME messages imperfectly, and needs the message to be signed first, then encrypted.
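This sign-then-encrypt order is what GnuPG's combined operation does; a minimal sketch, with a placeholder recipient:

# Sign the plaintext with your secret key, then encrypt the result to the recipient.
gpg --sign --encrypt --armor -r alice@example.org message.txt
# The recipient decrypts and verifies in one step:
gpg --decrypt message.txt.asc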