
daemon@ATHENA.MIT.EDU (John Kelsey)
Sun Apr 13 11:24:07 1997

To: "Perry's Crypto List" <cryptography@c2.net>
Reply-To: kelsey@email.plnet.net
From: John Kelsey <kelsey@email.plnet.net>
Date: Sun, 13 Apr 97 02:14:57 CDT

-----BEGIN PGP SIGNED MESSAGE-----

[ To: sci.crypt, alt.cypherpunks, cryptography list ##
  Date: 10-Apr-97 ##
  Subject: Paris Protocols Workshop ]

This week, I attended the Protocols Workshop in Paris.  It was
organized by Mark Lomas and Serge Vaudenay, and I think it had
been held at Cambridge last year.  This year, it was held at the
ENS in Paris.

The workshop format was pretty informal--the audience would
sometimes rapid-fire questions at the speaker, asking him about
the implications of his work, pointing out prior work, or
questioning his assumptions.  The strongest lesson I took away
from the workshop was that crypto people have gotten fairly good
at building tools that will do a certain job under certain
conditions, but we still don't seem to be too good at building
whole systems.  (This is common to all large systems involving
many disciplines, I think.)

One sense I got from a lot of people (although I thought this
going in, so maybe I just *think* this was the general
consensus) was that relatively few people actually understand
what a certificate is, and what its implications are.  There are
lots and lots of people--bankers, crypto programmers,
policymakers--who mistakenly believe they understand
certificates.  I think this is the place where most of the next
few years' worth of trivial end-run attacks around people's
security features will happen.

Someone (I think some of the Cambridge people) raised this
issue:  Suppose the CA revokes your current certificate, issues
a new certificate in your name (on a key pair it generated
itself), and signs a bunch of contracts with some friendly
agency or company under that new key.  This lets the CA
have most of the benefits (for it, not for you) of escrowing
your signing key.  Their short answer to this problem was that
revocation ought to require a different entity than
certification, dividing the powers needed to do this
among different groups of people.
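
Here's a rough sketch of what dividing those powers might look
like.  (A toy in Python, using the third-party ``cryptography''
package; the names and record formats are invented for
illustration, not anything actually proposed at the workshop.)

  from cryptography.hazmat.primitives.asymmetric import ed25519
  from cryptography.exceptions import InvalidSignature

  ca_key = ed25519.Ed25519PrivateKey.generate()   # held by the certifier
  rev_key = ed25519.Ed25519PrivateKey.generate()  # held by someone else

  def certify(name: bytes, subject_pub: bytes):
      body = b"CERT:" + name + b":" + subject_pub
      return body, ca_key.sign(body)

  def revoke(cert_body: bytes):
      body = b"REVOKED:" + cert_body
      return body, rev_key.sign(body)

  def is_valid(cert, revocations):
      body, sig = cert
      try:
          ca_key.public_key().verify(sig, body)   # must be CA-signed
      except InvalidSignature:
          return False
      for rbody, rsig in revocations:
          try:
              rev_key.public_key().verify(rsig, rbody)
          except InvalidSignature:
              continue              # forged revocation: ignore it
          if rbody == b"REVOKED:" + body:
              return False          # genuinely revoked
      return True

The point is just that is_valid() won't kill a certificate until
it has seen signatures from two different keys, which is the sort
of power-splitting they were suggesting.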

I think a certificate is nothing more than a statement, signed
by one entity known by a lot of people, about his (alleged)
relationship with some other person or entity.  I can PGP-sign
my wife's PGP key, along with a statement saying ``the holder of
this key is my wife.''  Note that this doesn't mean that anyone
whose key I sign in this way is in any sense bound by my
signature--I can commit myself to a statement of this kind, but
nobody else.  If I were delusional, maybe I would find some
random stranger's public key, and sign it, claiming she is my
wife.  This signed document wouldn't be good evidence for my
claiming half her property or income, though it might be pretty
good evidence for her claiming half of mine.
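
To make that concrete, here is roughly what such a signed
statement looks like in code.  (Python, using the third-party
``cryptography'' package; the statement format is made up for
illustration--PGP does the same kind of thing with its own
key-signature packets.)

  from cryptography.hazmat.primitives.asymmetric import ed25519
  from cryptography.hazmat.primitives import serialization

  my_key = ed25519.Ed25519PrivateKey.generate()   # my signing key
  her_key = ed25519.Ed25519PrivateKey.generate()  # the key I vouch for

  her_pub = her_key.public_key().public_bytes(
      serialization.Encoding.Raw, serialization.PublicFormat.Raw)

  statement = (b"The holder of key " + her_pub.hex().encode()
               + b" is my wife.")
  signature = my_key.sign(statement)

  # Anyone holding my public key can check that *I* said this,
  # and nothing more than that.
  my_key.public_key().verify(signature, statement)  # raises if forged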

The invented-certificate attack is essentially the cryptographic
equivalent of a bank opening an account in your name without
your consent, writing checks on it to several people, letting
those checks bounce, and then having you hauled into court to
make good on them.

In the real world, we usually trust that banks won't do this
because we think they would lose a lot of money, and their
employees would probably wind up in jail, if they were caught
doing it.  However, part of what makes getting caught likely is
that there is typically a trail of physical evidence for an
action like opening a bank account or writing a check.  If you
allege that I opened the account at 11:00 AM on April 8, 1997
Paris time, at your Chicago branch, you will run into a problem
with your story.  (I was giving a presentation to 20+ people in
Paris.)  If the whole transaction is online, and if much or most
of my other itinerary is online, then you can probably succeed
more easily in framing me.

Similarly, an online certification authority for key-exchange
keys might (if it were also working for the FBI as an escrow
authority) simply carry out a man-in-the-middle attack, sending
out revocations and new certificates for Bob, but only when it's
Alice asking for the certificate.
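
To spell out where the substitution happens, here is a toy
version of that dishonest lookup.  (Python sketch with the
``cryptography'' package again; the names and formats are
invented.)

  from cryptography.hazmat.primitives.asymmetric import ed25519
  from cryptography.hazmat.primitives import serialization

  ca_key = ed25519.Ed25519PrivateKey.generate()
  bob_key = ed25519.Ed25519PrivateKey.generate()     # Bob's real key
  escrow_key = ed25519.Ed25519PrivateKey.generate()  # escrow agent's

  def raw(pub):
      return pub.public_bytes(serialization.Encoding.Raw,
                              serialization.PublicFormat.Raw)

  def lookup_cert(subject, requester):
      # The dishonest part: certify the escrowed key, but only
      # when Alice is the one asking.
      if requester == b"alice":
          pub = escrow_key.public_key()
      else:
          pub = bob_key.public_key()
      body = b"KX-CERT:" + subject + b":" + raw(pub)
      return body, ca_key.sign(body)

  body, sig = lookup_cert(b"bob", b"alice")
  ca_key.public_key().verify(sig, body)  # passes; Alice sees nothing wrong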

Anyway, there were several talks I found interesting at one
level or another.  Ross Anderson probably had the most
entertaining talk, since it was full of anecdotes of people
doing shockingly stupid things in security system design, and
since he's just a good speaker.  There was a talk on how to
represent anonymous communications channels in protocol
descriptions, and a talk on how a great deal of security involves
figuring out how to map real-world objects to and from binary
objects.  (This always winds up involving some group of trusted
third parties, as far as I can see.)  I did a
talk on the ``chosen protocol'' attack, which I will discuss in
another post.  There were various more esoteric protocol ideas,
which I didn't find too useful, and one paper on a couple of
very simple protocols with some security proofs (for mutual
authentication and authenticated key agreement) which I thought
was pretty neat, especially since the audience rapid-fired
questions at the author about whether his assumptions were
workable in practice.

This spurred some discussion of what I have been calling ``not
voiding your warranty.''  That is, you want to be very careful
about using one cryptographic primitive to do the job of another,
or about using it in situations radically different from the ones
envisioned by its designers.  Obvious examples of this idea
include timing attacks, error-inducing attacks in
tamper-resistant hardware, related-key attacks, the prepended
key mechanism for doing keyed hashes, using a block cipher in
CBC-mode for message authentication, etc.  In some sense, when
you use cryptographic primitives for purposes for which they
were never intended, you run the risk of ``voiding their
warranty,'' i.e., of using them in ways that expose them to
attack.  Of course, as an attacker, you always try to find ways
to void the designer's assumptions about what was true.
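
To take the keyed-hash example apart a little: the naive
construction just prepends the key to the message, which is
exactly the kind of use an iterated hash function's designer
never promised anything about.  (A sketch in Python, for
illustration only.)

  # tag = H(key || message).  For an iterated hash like SHA-256,
  # anyone who sees one tag can use length extension to forge a
  # tag for (message || padding || suffix) without knowing the key.
  import hashlib

  def naive_mac(key: bytes, msg: bytes) -> bytes:
      return hashlib.sha256(key + msg).digest()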

One of the really hard tasks in security engineering is to
figure out what your warranty terms are on each primitive.  This
is the place where proofs like the one that reduces the
security of HMAC to at least the security of the underlying hash
function turn out to be really useful.  This gives you a kind of
guarantee that you're not voiding your warranty.
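
For contrast, here is the construction whose warranty terms
actually are written down, taken straight from a standard
library.  (Python's hmac module, as a sketch.)

  # HMAC-SHA256: the reduction mentioned above says a MAC built
  # this way is at least as strong as the underlying hash, so
  # using it as a MAC stays inside the warranty.
  import hashlib
  import hmac

  def mac(key: bytes, msg: bytes) -> bytes:
      return hmac.new(key, msg, hashlib.sha256).digest()

  def check(key: bytes, msg: bytes, tag: bytes) -> bool:
      # compare in constant time, so the check itself doesn't
      # leak anything through timing
      return hmac.compare_digest(mac(key, msg), tag)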

In some sense, I think you start this by understanding that some
components simply cannot resist some attacks, no matter what
they do.  No block cipher will resist an attack in which the
attacker reads the key bits out of supposedly-protected memory,
for example--that has to be done at some other level of the
system design.  The problem is that it seems likely that
real-world systems will just protect against the easy stuff, but
never get around to dealing with other threats that are harder
to defend against.  For example, every software cryptosystem out
there which does its work on a general-purpose machine used by
other software is vulnerable to having other programs affect its
results--perhaps by viral or trojan-horse attacks, perhaps by
messing up the random number generator sources, perhaps in some
other way.  The fact is, these concerns need to be dealt
with at the operating system or hardware levels--application
software really can't resist them too effectively.  However, I
can't really fix the operating system, or tell the users how to
manage their whole PCs to make one application work properly.
So, I tell clients ``Tell your users that if I can compromise
their machine once, even remotely with a virus or trojan horse,
they will lose all their security.''  As far as I have seen,
this winds up having little effect on actual user behavior.

Anyway, this was a really neat conference.

   --John Kelsey, Counterpane Systems, kelsey@counterpane.com
 PGP 2.6 fingerprint = 4FE2 F421 100F BB0A 03D1 FE06 A435 7E36

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBM1CGSEHx57Ag8goBAQGCuAP/QuyqZ+Q43yyQif3Tr4gLo8AUegX9uUqz
AcR2m/ewV9bIHjBrbogArevJhAcgyJUu2nv9hvR8TU6aa9wtAwp6cK69R/mqrtCj
HLfCTZBzpgDQaIx+6iLurSc6jSE8fnG1CS0OLrx3D4hCzMaw5M7qpibAte3G9zeR
eTl3WM5S35o=
=GJ7G
-----END PGP SIGNATURE-----


   --John Kelsey, kelsey@email.plnet.net / kelsey@counterpane.com
 PGP 2.6 fingerprint = 4FE2 F421 100F BB0A 03D1 FE06 A435 7E36


