Sunday, December 17th, 2006 08:29 pm
Back in the days of Russell and Whitehead, there was this attempt to make mathematics totally rigorous by turning everything into manipulation of strings of symbols in formally-specified languages. The idea was that mistakes could be eliminated (in fact, I think the hope was that proofs could become machine-checkable, if not machine-generable). This attempt foundered for two reasons:

1) Gödel's theorems, which show that any consistent formal system powerful enough to talk about the natural numbers (0, 1, 2, etc.) will always be incomplete, i.e. there will be true statements not provable in the system;
2) The sheer, mind-destroying boredom and ghastliness of writing proofs in this style. In my first year, when I was studying formal logic, I found that the only way I could force myself to actually do my logic homework every week was to get blind drunk on Wednesday nights, and then wake up at six on a Thursday morning to do the work, which was to be handed in at nine. Only then, when I was sleep-deprived, hungover and in a deadline panic, would my brain be in the right state of zombified automatism to do the work.

Now, we can't do anything about 1). But I'm starting to wonder if we can do something about 2). The early systems were horribly low-level: they didn't have any mechanisms for adding useful abstractions. In programming terms, it was like writing everything in assembly language. Programmers (mostly) know that this is a daft thing to do: apart from hardcore graphics/numerical types, everyone now writes in languages that are more-or-less high-level, which is to say closer to the way humans think about problems, rather than how computers think about them. Intermediate programs called "compilers" and "interpreters" do the translation from the high-level language to the low-level language that the computer speaks.
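To make the analogy concrete, here's a sketch of what the two levels might look like side by side. I'm borrowing the syntax of the Lean proof assistant purely as an illustration (the details are mine, not anything from Lamport or Graham): the hand-written proof term is the "assembly language", and the tactic script is the high-level program that the system compiles down into such a term.

    -- Illustrative sketch in Lean syntax.
    -- Low-level: the proof term written out by hand, Principia-style.
    theorem and_swap' (p q : Prop) (h : p ∧ q) : q ∧ p :=
      And.intro h.right h.left

    -- High-level: a tactic script; the elaborator "compiles" it
    -- down to a term like the one above.
    theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p := by
      constructor
      · exact h.right
      · exact h.left

    #print and_swap  -- inspect the compiled low-level term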

As far as I know, there's only been one attempt to apply this insight to formal proofs, by Leslie Lamport: paper abstract here. It's awful. Not quite as bad as writing things in the language of Principia Mathematica, but close. Surely we can do better. It has line numbers, for God's sake. From our experience with programming languages, we know about all kinds of useful abstraction mechanisms and syntactic constructs - object/aspect orientation, function abstraction, macros/templating, block structure. Maybe some of these could be applied to proofs?
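Two of those mechanisms translate pretty directly, at least in the same illustrative Lean-style sketch as above (my own toy example, not anything Lamport proposes): function abstraction becomes "prove a lemma once, then call it like a library function", and macros become user-defined tactics that expand into canned proof steps.

    -- "Function abstraction": prove the general fact once...
    theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p :=
      fun h => ⟨h.right, h.left⟩

    -- ...then reuse it instead of re-deriving it each time.
    example (h : (1 < 2) ∧ (2 < 3)) : (2 < 3) ∧ (1 < 2) :=
      and_comm' _ _ h

    -- "Macros/templating": package a recurring proof pattern
    -- as a tactic macro and invoke it by name.
    macro "swap_and" : tactic =>
      `(tactic| exact fun h => ⟨h.right, h.left⟩)

    example (p q : Prop) : p ∧ q → q ∧ p := by swap_and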

In case you're wondering, this line of thought was inspired by the following footnote in Paul Graham's latest article:
    It's conceivable that the elegance of proofs is quantifiable, in the sense that there may be some formal measure that turns out to coincide with mathematicians' judgements. Perhaps it would be worth trying to make a formal language for proofs in which those considered more elegant consistently came out shorter (perhaps after being macroexpanded or compiled).
"Nah, never work", I thought. "Formal proofs are ghastly. But hang on... he's talking about using macroexpansion. Maybe we could use that..."
Tuesday, December 19th, 2006 03:21 pm (UTC)
Every so often my mind wanders around this very topic.