Medium-term real fix for buffer overruns

Date: Wed, 14 Oct 1998 23:28:22 -0400
From: Lenny Foner <foner@media.mit.edu>
To: gnu@toad.com, karn@qualcomm.com, decius@ninja.techwood.org,
	smb@research.att.com, reinhold@world.std.com
Cc: cryptography@c2.net, foner@media.mit.edu
In-Reply-To: <199810150237.TAA29923@servo.qualcomm.com> (message from Phil
	Karn on Wed, 14 Oct 1998 19:37:54 -0700 (PDT))

I must admit, I'm enjoying a sort of grim satisfaction in this
exchange, mostly because I have spent the better part of a decade
trying to avoid C whenever possible.  Why?  Because I've used---and
helped write a compiler for---Ada, and long ago worked at Symbolics as
a developer on the Lisp Machine.  Both languages solve basically this
entire class of dumb bugs, be it at compile-time or runtime, and many
more besides.
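
[For anyone who hasn't watched this class of bug up close, here is a
minimal C sketch of the difference; the names unchecked/checked are
mine, and checked() just makes explicit the bounds test that Ada or a
Lisp system performs for you, invisibly, on every access:]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The classic bug: nothing stops the copy at the end of the buffer. */
void unchecked(const char *input)
{
    char buf[16];
    strcpy(buf, input);       /* overruns buf if input is >= 16 bytes */
    printf("%s\n", buf);
}

/* The check a safe language inserts implicitly: compare the access
   against the object's declared bounds, and stop the program instead
   of silently smashing the stack. */
void checked(const char *input)
{
    char buf[16];
    size_t len = strlen(input);
    if (len >= sizeof buf) {
        fprintf(stderr, "bounds violation: %zu >= %zu\n",
                len, sizeof buf);
        abort();
    }
    memcpy(buf, input, len + 1);
    printf("%s\n", buf);
}

int main(void)
{
    checked("short and safe");
    unchecked("deliberately longer than sixteen bytes");  /* undefined behavior */
    return 0;
}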

It's heartening to see people actually advocate doing it -right- as
opposed to merely doing it -fast-.  (Cf. _The Mythical Man-Month_:
"Hey, if my program is allowed to -get the wrong answer-, it can sort
cards -ten- times as fast as yours!")

Things like the lispm (and various capability-machine architectures)
also showed that you can do a lot of the runtime stuff in hardware,
with little or -no- cost in performance.  Even a lot of GC could
happen very fast, given an extra 5% in CPU transistor count, which is
about what you'd add to implement, say, virtual memory.  Perhaps in 20
years we'll look at hardware bounds protection as just as necessary as
VM is now to a "real" computer---or perhaps the New Jersey approach
will continue to triumph.  [I should point out that these lessons may
apply less cleanly to (a) RISCs and (b) machines with a much larger gap
between cache and main-memory access times than was common in 1990; I
really don't know.  On the other hand, if people are seriously
suggesting designing CPUs with Java in mind, maybe we -might- get a CPU
that wasn't optimized for straight C.]
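
[In software terms, here is roughly what such hardware would enforce: a
minimal sketch with the bounds carried alongside the pointer, much as a
capability machine or the lispm carried them in tags.  fat_ptr and
fat_read are names I've invented for illustration; in hardware the
comparison can run in parallel with the access itself, which is
presumably where the little-or-no performance cost comes from.]

#include <stdio.h>
#include <stdlib.h>

/* A "fat pointer" carrying its own bounds. */
typedef struct {
    char  *base;
    size_t length;
} fat_ptr;

/* Every dereference goes through a check; out-of-bounds traps instead
   of reading or writing whatever happens to be adjacent in memory. */
char fat_read(fat_ptr p, size_t index)
{
    if (index >= p.length) {
        fprintf(stderr, "bounds trap: index %zu, length %zu\n",
                index, p.length);
        abort();
    }
    return p.base[index];
}

int main(void)
{
    char storage[8] = "abcdefg";
    fat_ptr p = { storage, sizeof storage };
    printf("%c\n", fat_read(p, 3));   /* fine */
    printf("%c\n", fat_read(p, 12));  /* traps instead of reading garbage */
    return 0;
}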

I've resisted contributing to this thread up to now, since it's
clearly not quite cryptography-related, but as long as we're tossing
around radical ideas like compile-time or runtime checking, it can't
hurt to bring up "modern", everything-old-is-new-again languages such
as Ada, Lisp, or any of a number of polymorphic ones.

P.S.  While we're talking about modifying gcc to avoid random pointer
fiascos, it might make sense to (a) put the equivalent of Purify in
there, to catch other sorts of bugs, and (b) include a switch to
allocate variables in stack frames -in random order-, so
memory is laid out across compilations nondeterministically.  This
latter approach gives even a successful buffer-overrun attack much
less wiggle room, since the attacker can't assume that his memory
image matches yours, even if the source was the same.  In essence,
it raises the morphological diversity of the population (even though
genotypically it might still be a monoculture), and hence improves its
resistance to a wide variety of attacks.  Diversity is good.  That's
why sex evolved.
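
[To make (b) concrete, here's a rough sketch in C of the compiler-side
transformation; slot and assign_offsets are names I've made up, not
anything actually in gcc.  The point is just that a per-build random
seed permutes the frame layout:]

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Given the locals of one stack frame, assign their offsets in a
   randomly permuted order, so two builds of the same source lay the
   frame out differently. */
typedef struct {
    const char *name;
    size_t      size;
    size_t      offset;   /* filled in by assign_offsets() */
} slot;

static void assign_offsets(slot *slots, size_t n)
{
    size_t order[16];     /* allocation order; assumes n <= 16 here */
    for (size_t i = 0; i < n; i++)
        order[i] = i;
    /* Fisher-Yates shuffle of the allocation order. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    /* Pack the slots into the frame in the shuffled order. */
    size_t off = 0;
    for (size_t i = 0; i < n; i++) {
        slots[order[i]].offset = off;
        off += slots[order[i]].size;
    }
}

int main(void)
{
    srand((unsigned)time(NULL));   /* per-build seed in a real compiler */
    slot frame[] = {
        { "buf",      64, 0 },
        { "saved_fd",  4, 0 },
        { "count",     4, 0 },
    };
    size_t n = sizeof frame / sizeof frame[0];
    assign_offsets(frame, n);
    for (size_t i = 0; i < n; i++)
        printf("%-8s at frame offset %3zu\n",
               frame[i].name, frame[i].offset);
    return 0;
}

[Two builds with different seeds put buf at different offsets, so an
exploit that hard-codes the distance from buf to whatever it wants to
clobber only works on a fraction of the population.]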
