pozorvlak: (Default)
Wednesday, November 2nd, 2011 05:18 pm
I was pleasantly astonished to see Peter Norvig comment on my recent post about the Stanford online courses - with 160,000 students competing for his attention, that's dedication! I completely take his point that I'm only a small part of the intended audience, and what works best for me is probably not what works best for the students they really want to reach.

Nevertheless, the course has sped up in the last couple of weeks to the point where I'm finding it pleasantly stretching. Today I tweeted:
I take it back: #aiclass is covering the Oxford 1st-year logic course in <1wk. The good bits of the syllabus, anyway :-)
I want to expand on that remark a bit. There were actually three first-year Oxford logic courses:
  • "Introduction to Symbolic Logic", taught in the first term to all Philosophy students (including those like me who were studying Maths and Philosophy), and also to a few others, such as Classics students. This covered propositional logic, first-order predicate logic, proof tableaux, and endless pointless arguments about the precise meaning of the word "the" and whether or not the symbol → accurately captures the meaning of the English word "if" (see the truth-table sketch below this list). This is the course I was talking about on Twitter. I found it unbearably slow-paced, but I remember a couple of folk who'd given up maths years before and couldn't handle being asked to do algebra again. "The alarm is sounding and Mary called" was fine, but "A & M" was apparently unintelligible to them.

    According to a legend which was told to me by the Warden of New College and is thus of unquestionable veracity, a class of ItSL students were once sent to the Wykeham Professor of Logic's graduate-level logic seminar due to a scheduling error. The WPoL walked in, saw the expected roomful of youngsters, and started "Let L be a language recursively defined over an alphabet X..." The poor undergrads, still in their first week (and quite possibly at their first class) of their undergraduate careers, must have started to entertain grave doubts about their ability to handle this "Oxford" place. When the WPoL was eventually informed of the mistake, he is supposed to have meditatively said "I thought they seemed rather ill-prepared..."

  • "Elements of Deductive Logic", taught in the second term to those studying (Maths|Physics) and Philosophy. This bumped the mathematical content up a notch, covering (for instance) completeness and consistency theorems for the languages that had been introduced in ItSL. There was also a rather handwavy treatment of modal logic, and lots more philosophical wrangling about what it all meant and how relevant it was to the broader philosophical project. This was one of the most intense courses I did in my four years as an undergrad.

  • There was an introduction to symbolic logic buried somewhere in the first-year computer science course - without the philosophy, I'm assuming.
Maths students were expected to pick the basics of logic up in the course of learning real analysis, as is traditional.
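
For what it's worth, the arguments about → mostly turn on its truth table, which a few lines of code can print out (this is my illustration, not part of the course materials): material implication is true whenever its antecedent is false, which sits oddly with everyday uses of "if".

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} -> {q!s:5} : {implies(p, q)}")
# Note that p -> q comes out True whenever p is False ("vacuous truth") -
# which is exactly what sits oddly with the everyday English "if".
```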

I always thought they'd have been better off splitting ItSL and EoDL into two more evenly-sized courses - perhaps a heavily mathematical one, to be shared with the CS students (and perhaps incorporating some digital electronics), followed by a purely philosophical one to be taken by the (Maths|Physics) & Philosophy students. Meanwhile, the algebra-phobes could do a single course more tailored to their level. I'm guessing that resource availability ruled this idea out, though.

There was also a two-term second-year maths course called "b1 Foundations", which was set theory, logic (up to, IIRC, the Löwenheim–Skolem theorem and Skolem's paradox) and some computability theory. This was compulsory for Maths & Philosophy students, but I didn't take it because I'd given up philosophy by then and was sick of the whole thing. In light of my subsequent career, this was probably a bad decision.
pozorvlak: (Default)
Thursday, July 2nd, 2009 08:04 am
All the way back in 2006, I wrote about the problem of the philosopher's axe: if you replace the shaft, and then later you replace the head, is it still the same axe? So as you can imagine, I was delighted to discover the existence of the Helko modular axe system.

In contrast to my ice-climbing friend, I was interested to note, they reckon it's the handle, not the head, that you're more likely to have to replace.

But according to the always-interesting Eliezer Yudkowsky, the whole notion of particles having a continuing identity in time makes no sense on a quantum-mechanical level, so the whole question is moot anyway.
pozorvlak: (polar bear)
Tuesday, April 28th, 2009 11:14 am
Most people know the story of the three blind men and the elephant. There were three wise blind men (the story goes) who had never before encountered an elephant. One day, an elephant was brought before them, and they were asked to describe it. The first wise man felt the trunk, and said "Elephants are like snakes", the second felt a leg, and said "Elephants are like tree-trunks", and the third felt an ear and said "Elephants are like bats".

This story gave rise to the title of the mathematician Peter Johnstone's epic work Sketches of an Elephant: A Topos Theory Compendium. Topoi, you see, can be described from several different viewpoints, and it's not at all obvious that the different descriptions all describe the same object.

As is so often the case, the story of the blind men and the elephant has a dual version, which is less well-known but (IMHO) equally interesting. It concerns three wise, blind elephants, who had never before encountered a human being. One day, a human was brought before them, and they were asked to describe it. The first elephant stepped forward, and felt the human with its front hooves for a while.

"Humans," the elephant pronounced, "are flat".

And both the other elephants agreed.

I once told the dual story to Peter Johnstone. He didn't appear to enjoy it.
pozorvlak: (kittin)
Thursday, June 12th, 2008 12:23 pm
  • Mathematics is the study of statements which can be proved.
  • Science is the study of statements which can't be proved, but can be falsified.
  • The humanities are the study of statements which can neither be proved nor falsified, but whose credibility can be supported or undermined by advancing evidence.
  • Philosophy is the study of statements which can neither be proved nor falsified, and for which evidence cannot be advanced.
This classification suggests some further ideas. Firstly, mathematics is in some sense the easiest branch of scholarship: in fact, mathematics is precisely "that which is easy", for the appropriate definition of "easy". Secondly, philosophy is really, really hard. This accounts for the almost total lack of progress in philosophy in the last 2,500 years. Philosophers are still debating problems posed by Thales of Miletus, and defending (or attacking) positions advanced by Plato; pretty much all we've achieved is to clarify our statements of the problems[1]. Is there a single statement whose truth would be agreed on by all philosophers? I'd love to be corrected, but I don't think there is. Compare the progress achieved in philosophy with the progress achieved in mathematics over the same time period, or with the progress achieved in science in a mere 500 years, and you'll see what I mean. As far as I can see, all progress in philosophy has come by re-stating philosophical questions as scientific or mathematical ones. And this despite philosophy attracting some of the best and brightest minds of every generation. Hell, even the humanities people are arguing about different books now. Thirdly, mathematics is neither a science nor a branch of philosophy, though it has things in common with both.

We're left with a puzzle, though: empirically, mathematics is difficult, when it ought to be easy. I'd like to suggest several reasons for this. Firstly, mathematics is very old, and has been worked on in the past by beings of otherworldly intelligence: all the easy and accessible problems were solved long ago, mostly by Euler. The git. These days, even finding a sufficiently easy problem is challenging for us mere mortals. Secondly, much of mathematics is highly abstract, and humans are not evolved for highly abstract thought: the capacity to grasp concepts with high degrees of abstraction (which is not the same thing as intelligence) seems to be quite a rare one, and requires substantial training to be brought to a useful level. Thirdly, performing experiments in mathematics was largely impractical until the invention of the computer, and even today the technology for performing mathematical experiments is at an early stage of development. This means that until recently our experience was limited to those systems which can be worked out in the head or on paper.

[1] This is, of course, a slight exaggeration. For instance, the alert reader will have noticed my implicit appeal in point 2 to Karl Popper's principle of falsifiability: Popper's theories have greater credibility and explanatory power than those of the logical positivists, and thus represent an advance in the philosophy of science. But I bet you could find a philosopher who disagreed with it without too much difficulty, probably just by walking into any philosophy department common room and declaring your support for the principle in a loud voice. Philosophers are an argumentative bunch. For comparison, try finding a mathematician who doesn't agree with Cauchy's residue theorem, or a physicist who doesn't agree that general relativity represents a good approximation to reality.
pozorvlak: (polar bear)
Wednesday, September 26th, 2007 10:47 am
OK, it looks like the gender theory quotation thread has reached the end of its life-cycle, so it's time to reveal the answer. Yes, that's right: I knew where the quotation came from all along, and was using it... not really to test you, but to test the author's assumptions. I'm sorry if any of you feel betrayed by this - if it helps, I think you all passed :-)

Anyway, Spoilers ho! )
pozorvlak: (sceince)
Monday, September 24th, 2007 02:46 pm
Gender-theory kru, your attention please:

I came across the following recently, and wondered if any of you might have some idea of its source:
Gender is not like some of the other grammatical modes which express precisely a mode of conception without any reality that corresponds to the conceptual mode, and consequently do not express precisely something in reality by which the intellect could be moved to conceive a thing the way it does, even where that motive is not something in the thing as such.
Any thoughts? And can someone tell me what it means?
pozorvlak: (Default)
Tuesday, June 12th, 2007 02:43 pm
Here's something that's been bugging me for a while: why are computers useful?

A bit of an odd question, you might think, and I'm going to need to explain some background. First, you need to know that there are problems that computers can't solve, even in principle. A classic example is the halting problem: given a program and its input, can you predict whether or not the program will terminate? Alan Turing showed that it's impossible to write a program that will do this correctly in all cases. A co-worker of mine was once asked to do precisely this by a client...
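
Turing's argument is short enough to sketch in code. Suppose for contradiction that someone hands us a working `halts` function (the names here are mine, and the whole point is that no such function can exist):

```python
def halts(program: str, input_data: str) -> bool:
    """Hypothetical oracle: True iff running `program` on `input_data`
    eventually terminates. Assumed to exist, for contradiction."""
    raise NotImplementedError  # Turing: this can never be filled in

def troublemaker(program: str) -> None:
    """Halts if and only if `program`, run on its own source, doesn't."""
    if halts(program, program):
        while True:  # loop forever
            pass

# Now feed troublemaker its own source code: it halts if and only if
# it doesn't halt. Contradiction - so no correct `halts` can be written.
```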

Next, you need to know a bit about infinity, and the surprising behaviour of infinite sets )

Armed with this knowledge, we can see that Turing didn't need to exhibit an actual example of an uncomputable problem: he could have shown that some problems are uncomputable much more easily. Consider the following problem: you have some subset X of the natural numbers. X might be the set of even numbers, or the set of prime numbers, or the set of perfect numbers, or the set of Fibonacci numbers, or something like that. You want to write a program which will accept a number as input, and output "Yes!" if the number is in X, and "No!" if not. Now, by a close relative of Cantor's diagonal argument, there are uncountably many subsets of the natural numbers. But every program in your favourite programming language is a finite sequence chosen from an alphabet of finitely many symbols, so there can only be countably many programs! Hence, there must be some subsets X for which there is no such program. [I'd like to thank Jeff Egger for introducing me to this argument]. In fact, it's much worse than that - if you were to pick a subset at random, the probability that it would be computable would be 0. Not 1%, or 0.01%, or 0.000000001%, but actually zero. You need to be quite careful about how you define probability for this statement to make sense, but it's true. [Edit: having discussed this with an actual analyst, it turns out the truth is more complicated than I'd thought. There may be reasonable ways to define the probability measure on the space such that the probability of computability is non-zero. However, there is at least one way of defining a probability measure such that my statement is true - see my reply to [livejournal.com profile] benparker below.]
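
Here's the countability half of that argument as a sketch (the toy alphabet is mine - a real language just has a bigger one): list every finite string, shortest first, and each program shows up at some finite position in the list.

```python
from itertools import count, islice, product

ALPHABET = "ab!"  # stand-in for a real programming language's character set

def all_programs():
    """Yield every finite string over ALPHABET, shortest first.

    Every program is a finite string of symbols, so it appears at some
    finite position in this single list - i.e. programs are countable."""
    for n in count(0):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

print(list(islice(all_programs(), 8)))  # ['', 'a', 'b', '!', 'aa', 'ab', ...]
```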

This seems like quite a contrived and artificial problem, but it's not really: lots and lots of useful problems can be re-stated in this form, with a judicious choice of X - the halting problem itself, for instance, becomes membership in the set of (Gödel numbers of) program-input pairs that halt. We're not worried about how efficient the solution would be, only that one exists; hence, if the "subset of N" form of the problem is incomputable, so is the original form.

Hence my original question: why are computers useful? If the probability of a computer being able to answer our questions is zero in theory, how come they are so useful in practice? My former colleague aside, most of us don't need to worry about computability in our day-to-day coding: we can pretty much assume that there is a solution to our problem, and that finding it is the hard part.

I don't really know the answer, but it seems to me that it must have something to do with approximations. We're finite creatures, and we're usually happy with finite answers - we'll approximate an infinite decimal to one with ten (or thirty, or five hundred, or at most a few billion) decimal places, we'll just ask for the first few elements of an infinite sequence, we'll approximate the movement of continuous fluids by thousands of discrete chunks, and so on. This can lead to errors and problems, but usually these can be dealt with or lived with. Also, since we're finite, we're only capable of creating a finite amount of data for our computers to search through and manipulate. If we restrict the problem above to finite subsets of the natural numbers, the picture becomes a lot better: now there are only countably many subsets to choose from, and in fact every finite subset is computable - a program can simply hard-code its members - so computability is no longer doomed to have probability zero.
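
Concretely, the finite subsets can even be put into one-to-one correspondence with the natural numbers themselves - here's a sketch of the standard bitmask encoding:

```python
def encode(s: set[int]) -> int:
    """Encode a finite set of naturals as a single natural: bit n is set
    exactly when n is in the set."""
    return sum(1 << n for n in s)

def decode(code: int) -> set[int]:
    """Inverse of encode: read off which bits are set."""
    return {n for n in range(code.bit_length()) if (code >> n) & 1}

assert encode({0, 2, 5}) == 0b100101 == 37
assert decode(37) == {0, 2, 5}
```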

In practice, as I said, the probability of there being a useful computable approximation to our original problem seems to be near to 100%. This strikes me as something close to miraculous. Is this just a mirage? Do we not encounter uncomputability because we subconsciously shy away from it, just as 19th-century scientists largely failed to notice the ocean of nonlinearity that they were swimming in? Or are the computable functions somehow dense in the uncomputable ones, in the way that the (countable) rational numbers are dense in the (uncountable) real numbers?
pozorvlak: (Default)
Monday, April 9th, 2007 08:53 pm
I'd like to talk about a couple of simple thought experiments invented by the mathematician and philosopher Gottlob Frege, which for me really nail the relationship between mathematics and the physical world, and at the same time raise some fundamental questions in the philosophy of mathematics.

Suppose you have a vessel containing 2cc of liquid. To this, you add a further 3cc of liquid. How much is in the vessel now? Well, 2+3 = 5, so obviously it's 5cc. But the liquid you added wasn't the same as the liquid that was in there: the two underwent a chemical reaction, emitting some gas, and so the volume of the liquid in the vessel is actually less than 5cc. Or suppose you have five things, and you then add two more half-things. You've got six things, right? Except the "things" were pairs of boots, and the half-things were individual boots, but both individual boots were left boots. You have twelve boots, but only five pairs.

In both cases, you have some physical system (liquid in a vessel, pairs of boots in a rack) that you're trying to model using some mathematical formalism (whole numbers and addition, fractions and addition). In each case, it turns out that the model isn't a very good one, as it incorrectly predicts the behaviour of the system. But this doesn't mean that 2+3 is not equal to 5, or that 5 + 1/2 + 1/2 is not equal to 6! Nor does it mean that there's no mathematical model that would fit - in both cases, it would be pretty easy to construct one. It just means that we've chosen the wrong models. Standard arithmetic still works fine, it just happens to be the wrong thing in this case.
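
To make "it would be pretty easy to construct one" concrete for the boots, here's a minimal sketch (my model, not Frege's): track left and right boots separately, and a pair needs one of each.

```python
def pairs(left: int, right: int) -> int:
    """A pair needs one left and one right boot, so the number of
    complete pairs is the smaller of the two counts."""
    return min(left, right)

# Five pairs (5 left + 5 right), then add two more left boots:
left, right = 5 + 2, 5
assert left + right == 12       # twelve boots...
assert pairs(left, right) == 5  # ...but still only five pairs
```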

SCIENCE! )

PHILOSOPHY! )

Frege's story's rather a sad one, as it happens. He did pioneering work in the philosophy of language and in logic (he's got a solid claim to be one of the three great logicians of all time, along with Boole and Aristotle, and without his work much of modern mathematics would be literally unthinkable). In 1902 the second volume of the culmination of his life's work, the Basic Laws of Arithmetic (Grundgesetze der Arithmetik), was on the verge of publication: in it, he claimed to show that arithmetic (and thus all of mathematics) could be derived from the obvious truths of logic. As it was about to go to press, he received a letter from Bertrand Russell informing him of what is now known as Russell's Paradox: consider the set R of all sets that are not members of themselves; then R is a member of itself exactly when it isn't. This blew the entire enterprise out of the water. Frege inserted an appendix to the effect of "This doesn't work any more, but I hope you find it interesting", then went off and had a nervous breakdown.

[1] If you can get hold of a copy of his (sadly out-of-print) Foundations of Arithmetic, I strongly recommend it, if only for the chapter contra Mill.
pozorvlak: (Default)
Sunday, April 8th, 2007 03:15 pm
Closely related to the No True Scotsman fallacy is Sliding Definition Ploy[1]. It goes like this: "for the sake of argument", give a slightly-odd definition to some common term, like Mind, or Freedom, or Justice, or Beauty. Deduce some consequences from that definition (ideally, this stage should take a while, to overflow your audience's input buffers). Claim that these deductions tell you something about the ordinary meaning of the term you started with. Better yet, don't claim it; just proceed as if they do.

SDP is, unfortunately, endemic to philosophy, or at least was when I last took a look. In philosophy, one of the big problems is that all the really interesting ideas you want to talk about (Truth, Justice, Ethics, Vision, Desire, the Soul, Mind, Body, even simple things like "looking"[2]) are ill-defined. The other big problem, of course, is that you can't do experiments on most of these things (even if you're the sort of philosopher who believes we can trust our senses and the sort who's prepared to accept the validity of the scientific method, which is by no means all of them). So you're left with pure reasoning. Mathematicians can get away with relying on pure reasoning, because they're working with abstract things that are precisely defined. Sorta[3]. So, in order to get any traction at all on their problems, philosophers often have to provide definitions of common terms. This allows the unscrupulous philosophers[4] to insert a Sliding Definition Ploy or two.

SDP is also common in arguments about politics: we're all in favour of Liberty, Justice, etc, but these terms are all ill-defined, and by choosing definitions carefully, it's easy to show that your opponent is opposed to any given Good Thing. [livejournal.com profile] zompist has written more about this in one of his rants, in which he also elaborates on the linguistic problems with definitions.

There are lots more standard logical fallacies: Wikipedia has a list (Googling will reveal several others), and here's infidels.org's guide to logic and fallacies, which also has what looks like a good section on what logic is, what it isn't, and what it's useful for.

[By the way, I am no longer eating salted porridge. I am now about to tuck into a sausage sandwich. An organic venison sausage sandwich, no less, filled with sausages from the farmers' market at Queen's Park yesterday :-) ]

[1] The term is due to John Puddefoot, AFAIK.
[2] I know a philosopher who recently wrote a paper on the difference between "looking" and "watching". Or possibly "watching" and "seeing". Or something like that. It's subtle stuff.
[3] At the risk of being accused of SDP myself, this is probably the best definition of mathematics we have.
[4] A depressingly large subset, from what I can see.
pozorvlak: (Default)
Sunday, April 8th, 2007 03:14 pm
Happy Easter, everybody! Today, I'm going to talk about two of my favourite logical fallacies. This post got a bit long, so I've split it into two: the second part, about Sliding Definition Ploy, is here.

My favourite, for the name if nothing else, is the No True Scotsman Fallacy[1], which canonically goes like this:
Hamish: No true Scotsman puts sugar on his porridge.
Jock: But my cousin Angus MacAlasdair from Glen Coe puts sugar on his porridge.
[Beat.]
Hamish: No true Scotsman puts sugar on his porridge.
[We're assuming that Angus MacAlasdair's Scottish nationality is above reproach, apart from his unnatural and perverted breakfast habits.]

Discussion and further examples )

By the way, porridge really is a lot tastier with salt on it. In fact, I'm eating some salted porridge now. Mmmm...

[1] That's its name. Really!
pozorvlak: (Default)
Thursday, August 31st, 2006 03:28 pm
Fascinating idea, courtesy of [livejournal.com profile] bruce_schneier: the correct legal framework for dealing with terrorism is the one already in use for dealing with pirates. Apparently, piracy is a crime defined in international law, and pirates can be arrested anywhere and by anyone. This would, apparently, both deter states from harbouring terrorists, and deter the US, UK, China and others from abusing the term "terrorist" to crack down on their own populations, deny human rights to terrorist suspects and so on. Full article here. [livejournal.com profile] r_e_mercia and [livejournal.com profile] mi_guida (and any others, of course), a penny for your thoughts.

Yesterday I found a mistake in a paper ("A General Coherence Result" by John Power, if you're interested). The mistake itself is trivial - an equation failed to typecheck, it took about a minute to fix and another couple of minutes to be sure that it really was an error, and the mistake had no knock-on effects - but this still disquiets me. This was a paper by a highly-regarded mathematician, presenting a significant result, that had passed refereeing and been published. The paper's written in a very clear style, uses a notation that makes that kind of error immediately obvious, and it's only nine pages long! I read the whole thing in a couple of hours, and I wasn't reading it that closely. If an error that obvious, and that simple to find and fix, can make it through to a published paper, then we are all doomed. Forget worries about errors in the 11,000-page proof of the classification of finite simple groups (see Marcus du Sautoy's excellent New Scientist article): if we can't keep errors out of something only nine pages long, then the entire notion of peer review (and possibly even proof) becomes bunk, and the only thing to be done with things like the finite simple group proof is to burn them, lest anyone be tempted to take them seriously. I've mentioned my idea for statistical estimation of error rates in proofs before - anyone else care to proffer ideas? It occurs to me that the FSG proof is of a similar size (and probably greater complexity) to the Linux kernel: Debian's bug database reveals about 300 open kernel bugs, but that's probably only the tip of the iceberg.

[By the way, there's what looks like a good article in the New Yorker on the Perelman-Yau Fields Medal snafu.]
pozorvlak: (kittin)
Thursday, June 22nd, 2006 05:22 pm
I'm going away (first to London, then to Nova Scotia) tomorrow, so I'll have to stop work on the fridge poetry for a while. So, here's what I've done so far (100K tarball), in case any of you want to have a crack while I'm away. There are lots of dead-ends and abandoned files there, but you should be able to work it out from the README and the Makefile.

Also, I've now fixed enough errors in my P-categories paper that it's met my rapidly plummeting idea of acceptable quality. Get it while it's hot, it's lovely. If this is the amount of effort required to write a twelve-page paper, I'll be totally screwed in a year when I have to write my thesis.

Actually, this has provoked an interesting philosophical musing: there are statistical techniques available for estimating the number of undiscovered errors in a piece of software, which could be adapted for estimating the number of undiscovered errors in a mathematical paper. So I could keep looking until the chance of an undiscovered error is less than (say) 1%, or the number of estimated errors is lower than some critical threshold, and then release, possibly with a statement like "Error-free with 99% probability". But if I'm only 99% certain that a proof is error-free, does that count as a proof, whose defining characteristic should be 100% confidence? OTOH, that kind of estimation is clearly better than the ad-hoc approach currently favoured...
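
One such technique - my example of the genre, not necessarily the one I had in mind - is capture-recapture estimation, borrowed from ecology: have two proofreaders check the paper independently, then estimate the total number of errors from how much their findings overlap.

```python
def lincoln_petersen(found_a: int, found_b: int, overlap: int) -> float:
    """Capture-recapture estimate of the total number of errors.

    If reader A finds found_a errors, reader B independently finds
    found_b, and overlap errors were spotted by both, the estimated
    total is found_a * found_b / overlap."""
    if overlap == 0:
        raise ValueError("no overlap between readers: estimate is undefined")
    return found_a * found_b / overlap

total = lincoln_petersen(found_a=10, found_b=8, overlap=4)  # 20.0
distinct_found = 10 + 8 - 4                                 # 14
print(total - distinct_found)  # ~6 errors estimated still lurking
```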
pozorvlak: (kittin)
Tuesday, March 21st, 2006 05:31 pm
Another idea/problem of which everyone should be aware.

The problem is simple: a philosopher has an axe (bear with me here). He uses it for a while, until the blade wears out and has to be replaced. He uses it for a bit longer, until the handle breaks. The head's still OK, so he puts it on a new handle.

Is the axe he has now the same axe as the one he started with?

Wikipedia calls this the Ship of Theseus paradox, and gives examples from fields as diverse as rock music and automobile registration fraud, some with serious consequences. It's a problem that often comes up in discussions of the feasibility of Star Trek-style matter transference. Like so many important philosophical problems, the answer depends on what your definition of "is" is, and these are seriously deep waters. As a mathematician, I'd say it depends on what context you're operating in: sometimes two things can be the same for one purpose but different for another. Two groups can have the same elements, and thus be the same for the purposes of set theory, but have quite different algebraic structure. Or vice versa - two groups might happen to have different names for their elements, but exactly the same structure, in which case we'd call them the same group. Topologists will frequently say that two objects are "the same" when they're homotopy equivalent, which is a very weak condition - for instance, Euclidean n-dimensional space is homotopy equivalent to a single point for all n. This can all be made precise using category theory: each category has a different notion of isomorphism, and you say that two things are "the same" when they're isomorphic in the appropriate category. But then you get into higher categories, and it all goes wrong: in a 2-category, the moral notion of "the same" isn't isomorphism but rather equivalence...
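
"Same elements, different structure" is easy to exhibit concretely (my example): here are two group operations on the same four-element set {0, 1, 2, 3}.

```python
def z4(a: int, b: int) -> int:
    """The cyclic group Z/4: addition modulo 4."""
    return (a + b) % 4

def klein(a: int, b: int) -> int:
    """The Klein four-group: bitwise XOR on {0, 1, 2, 3}."""
    return a ^ b

# Identical underlying sets, genuinely different groups: in Z/4 the
# element 1 has order 4, while under XOR every element is its own
# inverse, so nothing has order 4 - hence no isomorphism can exist.
assert z4(z4(1, 1), z4(1, 1)) == 0 and z4(1, 1) != 0
assert all(klein(a, a) == 0 for a in range(4))
```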

By the way, I once mentioned the Philosopher's Axe to an ice climber. He said that ice-axe heads wear out all the time, and that when you replace the handle then you've got a new axe, regardless of whether you then put an old head on it :-)

[By the way, it seems my Lie Algebras lecture wasn't as bad as I'd thought. Catharina said I'd done well to get the whole theorem into one lecture, albeit one that ran nearly half-an-hour over time...]
pozorvlak: (sceince)
Monday, March 20th, 2006 04:19 pm
The first in an occasional series discussing ideas and concepts that (in my totally unhumble opinion) everyone should know. We'll kick off with an idea from the world of historical wargaming, an idea from 19th Century German philosophy, an idea that comes from varnish manufacture via programming language design, and an idea from a touring Cornish children's play of the mid-1980s.

Grognard capture )

Hegelian dialectic )

Onions in the varnish )

Sidneying )

My justification for this post is the one the Design Patterns guys use: by giving names to fairly common things, you make it easier to recognize them and to discuss them. Grognard capture, in particular, can be a Bad Thing (though it isn't always), and recognizing it is the first step to dealing with it.