pozorvlak: (Default)
Tuesday, March 20th, 2007 05:35 pm
There's a scene in John Wyndham's marvellous Chocky where the main character asks his father "Why does a cow stop?"

What he means by this is why does a cow's intelligence stop: why are cows smart enough to escape out of an open gate, but not smart enough to see that they could lift up the latch with their noses and escape whenever they want? Why do cows hit a point where their problem-solving ability just stops dead?

I was reminded of this as I read Matthew Huntbach's essay What's Wrong with Ruby this morning. He points out that "Ruby, and the whole scripting language phenomenon is a slap in the face" for the academic language-design community of which he is apparently a part.
We have been used to complaining that the programming languages we had developed were so much better than the mainstream programming languages, the only reasons they hadn’t taken off were that they weren’t developed and promoted by big companies, and that the inherent conservatism of industry restricted commercial programming to tried and trusted languages.

Yet a whole stream of new languages: Perl, PHP, Python and now Ruby, each initiated by one person as back-room projects, have been adopted for serious use and achieved many thousands of users. Unlike academically-derived languages, they have given the impression of being thrown together to meet a need without any strong underlying theoretical basis.
That's right. Your languages weren't unsuccessful for the reasons you thought they were, and languages that don't meet your standards of elegance have flourished in actual use. Something you thought was necessary for a language to succeed (corporate backing) turns out not to be necessary after all. Now, let's apply your insights to the rest of your essay (which, by the way, is a perfect example of the kind of thing I was talking about in my post Tightrope Walking). Ruby et al. don't have static typing, formal semantics, standards documents, or the guarantee that each object will have a class that determines its behaviour - in short, they're not academic languages - and yet thousands of bright people use them and like them. Could it be that the lack of these things isn't such a problem as you think? Might they, in fact, have advantages that you're too blind to see?

See that bit of metal attaching the gate to the fence? You might like to think about what you can do with that.
pozorvlak: (Default)
Tuesday, March 13th, 2007 01:08 pm
brian d. foy says that you can't be an effective advocate for a language unless you can think of five things that you hate about it off the top of your head, the reason being that if you can't, then you don't know enough about the language to advocate it effectively. So, just to go one (or five) better, here are ten things I hate about Perl )
This is not to say that I hate Perl: far from it. I think it's a wonderful, fun language, with a great community around it, producing some insanely cool software (CPAN might be considered the world's premier laboratory for module and language-extension design). I am continually surprised that Perl gets such a bad press, and such short shrift from the language-design community. You might expect that a language that breaks almost every accepted precept of language design would simply be a bad language, and no fun at all to use. Yet Perl does this, and has a fanatical community of users, some of them truly excellent hackers. This, it seems to me, is a data point of enormous importance.

But anyway, I'd like to ask this question to the hackers reading this. What five things do you hate most about your favourite language?

* Unless you've set $[ to 1 - thanks, [livejournal.com profile] paddy3118!
pozorvlak: (Default)
Saturday, March 10th, 2007 12:57 pm
A while ago I wrote a post about what appeared to be the endemic pusillanimity of the Haskell community, tried to find more charitable explanations for my observations, and, I think, mostly succeeded. Unfortunately, the example I gave turned out not to be a good one: it is in fact possible to have integer-parametrized types in Haskell (though you have to go through Hell and high water to do so). But the other day, I remembered a much better example )
pozorvlak: (Default)
Monday, February 26th, 2007 11:24 pm
[Non-coders: it shouldn't matter if you don't know about the specific technical issue that I'm discussing here. Hopefully, I've managed to convey something of what it feels like to code in different languages, which is the important thing.]

I was brought up on static languages. Admittedly, the first programming language I touched was BBC Basic, which had autovivification of variables, but it also had separate namespaces for string and numerical variables. Then I was taught programming properly on Turbo Pascal, which is all about the type system. I learned object-oriented programming on Delphi, which is what Turbo Pascal turned into, and was also heavily typeful (and, incidentally, one of the nicest development environments I've ever had the pleasure to use). When I first encountered dynamically typed languages, in the form of RSI's noxious Interactive Data Language (and later VB), I thought dynamic typing was an accident waiting to happen. Programming without static typing felt like walking a tightrope without a safety net, with the slightest wobble ready to pitch me into the abyss below. Wherever possible, I'd declare variables, turn off everything implicit, and curse if there was no way to do these things.

It wasn't until I started using Perl (and I discover with surprise that I can't remember exactly when that was - presumably around 2000-2002, when I started using Linux regularly) that this began to change. Perl offers no facilities for static typing - it's dynamic all the way. More than that, it's mutably typed - if I ask for "3" + 4, it won't die at compile-time like Haskell, or at run-time like Python, it'll simply convert the "3" to a 3 and return 7. If you assign to an array element that's out of bounds, it won't die with an "array subscript out of bounds" error, it'll just lengthen the array so it's big enough and autovivify the intervening elements. Yes, this can be slow. But there are plenty of ways of finding out how big an array is, so obviously you did that because you wanted to.
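
A minimal sketch of the two behaviours just described (plain Perl 5, no modules needed):

```perl
# "3" + 4: the string "3" is silently numified; no error at any stage.
my $sum = "3" + 4;    # 7

# Assigning past the end of an array just extends it;
# the intervening elements spring into existence as undef.
my @xs = (1, 2, 3);
$xs[6] = 42;
my $len = scalar @xs; # 7: elements 3..5 were autovivified as undef
```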

This was presumably painful at first (I can't remember). But after a while, I started to get it. All my old fears were revealed, in the light of day, to be groundless, and even slightly ridiculous - if I wobbled, then the tightrope moved under me, to counterbalance my movement and hold me up. I could walk along the rope as easily as if it were pavement, and on the few occasions where I fell off, it turned out that the bottom of the abyss was actually covered with big trampolines. The sense of liberation was wonderful. Why should I, as a thinking human being, have to worry about the trivial difference between 3, 3.0, "3" and (3), when a machine could do it for me and leave me free to concentrate on the important stuff? I started to code in tune with the language, and as a result my code became simpler, clearer, more robust. I could Say What I Meant, and perl1 was smart enough to Do What I Meant.

Which is why it's so jarring coming back to static typing for Haskell. After grokking dynamic typing, static typing just seems antediluvian. Every time a compilation dies because of a type error (and actually, having to compile code before running it seems pretty antediluvian too), it's like the hip young guy I thought I was talking to has suddenly adopted the tone of a boarding school headmaster circa 1950 and caned me for having my shirt untucked and misusing the optative. And if you have no idea what an optative is or in what context you might wish to use one, then you have some idea of my bewilderment at most GHC error messages.

Maybe I'll grow to love static typing again, as I grew to love dynamic typing before it. Maybe (and this is what I'm hoping for), I'll be able to transcend the dichotomy, and come to a deeper understanding in which I realise there is no real opposition. But right now, it's hella frustrating.

1 As any fule kno, Perl is the name of the language, and perl is the name of the interpreter for that language. PERL is something different :-)
pozorvlak: (Default)
Friday, February 23rd, 2007 10:55 am
pozorvlak@delirium:~> clisp
  i i i i i i i       ooooo    o        ooooooo   ooooo   ooooo
  I I I I I I I      8     8   8           8     8     o  8    8
  I  \ `+' /  I      8         8           8     8        8    8
   \  `-+-'  /       8         8           8      ooooo   8oooo
    `-__|__-'        8         8           8           8  8
        |            8     o   8           8     o     8  8
  ------+------       ooooo    8oooooo  ooo8ooo   ooooo   8

Copyright (c) Bruno Haible, Michael Stoll 1992, 1993
Copyright (c) Bruno Haible, Marcus Daniels 1994-1997
Copyright (c) Bruno Haible, Pierpaolo Bernardi, Sam Steingold 1998
Copyright (c) Bruno Haible, Sam Steingold 1999-2000
Copyright (c) Sam Steingold, Bruno Haible 2001-2006

[1]> (defun car (x) (cdr x))

** - Continuable Error
DEFUN/DEFMACRO(CAR): #<PACKAGE COMMON-LISP> is locked
If you continue (by typing 'continue'): Ignore the lock and proceed
The following restarts are also available:
ABORT          :R1      ABORT
Break 1 [2]> continue
WARNING: DEFUN/DEFMACRO: redefining function CAR in top-level, was defined in
         C
CAR
[3]> (car '(3 4))
(4)
[4]> (defun defun (x) (+ 3 x))
DEFUN
[5]> (defun 7)
10

Bad. Ass.

[Those of you who don't speak Lisp may find a measure of enlightenment here. Should be reasonably accessible.]
pozorvlak: (babylon)
Sunday, December 17th, 2006 08:29 pm
Back in the days of Russell and Whitehead, there was this attempt to make mathematics totally rigorous by turning everything into manipulation of strings of symbols in formally-specified languages. The idea was that mistakes could be eliminated (in fact, I think the idea was that proofs could become machine-checkable, if not machine-generable). This attempt foundered, for two reasons:

1) Gödel's theorems, which show that any such consistent formal system powerful enough to talk about the natural numbers (0, 1, 2, etc) will always be incomplete, ie there will be truths not provable in the system;
2) The sheer, mind-destroying boredom and ghastliness of writing proofs in this style. In my first year, when I was studying formal logic, I found that the only way I could force myself to actually do my logic homework every week was to get blind drunk on Wednesday nights, and then wake up at six on a Thursday morning to do the work, which was to be handed in at nine. Only then, when I was sleep-deprived, hungover and in a deadline panic, would my brain be in the right state of zombified automatism to do the work.

Now, we can't do anything about 1). But I'm starting to wonder if we can do something about 2). The early systems were horribly low-level: they didn't have any mechanisms for adding useful abstractions. In programming terms, it was like writing everything in assembly language. Programmers (mostly) know that this is a daft thing to do: apart from hardcore graphics/numerical types, everyone now writes in languages that are more-or-less high-level, which is to say closer to the way humans think about problems, rather than how computers think about them. Intermediate programs called "compilers" and "interpreters" do the translation from the high-level language to the low-level language that the computer speaks.

As far as I know, there's only been one attempt to apply this insight to formal proofs, by Leslie Lamport: paper abstract here. It's awful. Not quite as bad as writing things in the language of Principia Mathematica, but close. Surely we can do better. It has line numbers, for God's sake. We know about all kinds of useful abstraction mechanisms and syntactic constructs, from our experience with programming languages - object/aspect orientation, function abstraction, macros/templating, block structure - maybe some of these can be applied to proof?

In case you're wondering, this line of thought was inspired by the following footnote in Paul Graham's latest article:
It's conceivable that the elegance of proofs is quantifiable, in the sense that there may be some formal measure that turns out to coincide with mathematicians' judgements. Perhaps it would be worth trying to make a formal language for proofs in which those considered more elegant consistently came out shorter (perhaps after being macroexpanded or compiled).
"Nah, never work", I thought. "Formal proofs are ghastly. But hang on... he's talking about using macroexpansion. Maybe we could use that..."
pozorvlak: (sceince)
Sunday, December 17th, 2006 05:16 pm
[livejournal.com profile] michiexile has written a really nice introduction to his work for the layman. There's some commonality between what he and I do (if I'm understanding his work right), in that we're both concerned with structures that don't quite satisfy equations, but do up to some kind of transformation. So, situations in which a+(b+c) isn't equal to (a+b)+c, but you can easily transform one into the other. I suspect his #haskell username refers to this - "syzygy" is used to mean a relation between relations between relations between... The fundamental difference is that I'm interested in situations where the things you're "adding" lie on some sort of network (to be precise, they're objects in a category, which is just a funky directed graph), whereas he's interested in situations where they live in some kind of geometric space, and the transformations are paths between points. But there's a lot of overlap, most of which I don't understand, and we use some of the same tools to attack our problems.
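
For the categorically-minded: the prototypical "transformation between two bracketings" is the associator of a monoidal category (standard material, not specific to either of our projects):

```latex
% Associativity holds not on the nose but up to a natural isomorphism:
\alpha_{a,b,c} \colon (a \otimes b) \otimes c
    \;\xrightarrow{\;\sim\;}\; a \otimes (b \otimes c)
% Mac Lane's pentagon axiom then guarantees that any two ways of
% rebracketing a longer product agree.
```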

[I wrote an explanation of what I do here, if you're interested]

The other day I was looking through my email archives for something, and found the following in an email to Duncan:
BTW, it occurred to me the other day that the security technique called Bernstein chaining can be considered an example of continuation passing style. Don't you love it when two things that you don't quite understand turn out to be only one thing, reducing the number of problems in the world by one? :)
To which my reaction was, roughly, "Is it really? Gosh. Er, what exactly is Bernstein chaining again, younger self?"

In further serendipity news, my office-mate showed me a paper called something like "Geometric presentations of the Thompson groups" and, despite the irrelevant-looking title, it looks like it may actually have serious bearing on my problem. It seems that every set of formal algebraic laws (such as associativity) has associated to it a "geometry group" (which, in the case of associativity, is Thompson's group F, and in the case of associativity + commutativity is Thompson's group V). This totally needs following up. Any successful categorification of an algebraic theory will give rise to groups, namely the automorphism groups of the operators in the theory. It looks like the geometry groups aren't exactly what I want (what with F being vast, and the automorphism group of any tree of tensor products in a monoidal category being trivial), but maybe I can do something like (free group)/(geometry group). Or, more likely, (free groupoid)/(geometry groupoid), whatever taking quotients of groupoids means. Anyway, I need to read the paper properly now.

Another thing I need to read up on is term-rewriting. I was dimly aware that there was a well-developed theory of "ways of rewriting words in a given formal language", and that this might have some bearing on what I'm doing, but it seems like the connection may be closer than I'd thought. In particular, the Knuth-Bendix algorithm looks interesting. I also find it suggestive that two of the big names in string-rewriting (Thue and Huet) have names that are not only anagrams of each other, but actually cyclic permutations! And has anyone ever seen them together in the same room? :-)
pozorvlak: (Default)
Monday, December 4th, 2006 09:02 pm
As I've mentioned a few times, I'm (re)-learning Haskell. Here are my current thoughts on the language. No doubt, when I have drunk of the Kool-aid in fullness, I shall look back on them and laugh/cringe at my naiveté. Please try not to get upset about them: if it helps, think of them as having big <at-my-current-state-of-knowledge> tags around them.
Rant )

By the way, I've fixed the juggling program. But then I was reading Burkard Polster's The Mathematics of Juggling on Saturday, and came across a much simpler algorithm: a siteswap [a_0, ..., a_{n-1}] is juggleable iff the function (i |-> (i + a_i) mod n) is a permutation of [0..n-1] (by finiteness, iff it's a surjection). In Haskell (untested),

isJuggleable ss = and (map (`elem` test) [0 .. n-1])
        where test = map (`mod` n) $ zipWith (+) [0 .. n-1] ss
              n    = length ss
Or in Perl:
sub is_juggleable {
        my (%hit, $i);
        $hit{ ($_ + $i++) % @_ }++ foreach @_;
        return scalar(keys %hit) == @_;
}
See what I mean about the hashes? :-) Tests: er, my first attempt had a precedence bug in it (% binds tighter than +), but my girlfriend has been wanting to go home for a while now, so checking the version above will have to wait until tomorrow.
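
As a sanity check, here is the same permutation test written out self-containedly, run on a few standard siteswaps (3 and 441 are classic juggleable patterns; 321 is the classic non-juggleable one):

```perl
# [a_0 .. a_{n-1}] is juggleable iff i -> (i + a_i) mod n
# is a permutation of 0 .. n-1 (equivalently, a surjection).
sub check_siteswap {
    my @ss = @_;
    my $n  = @ss;
    my %hit;
    $hit{ ($_ + $ss[$_]) % $n }++ for 0 .. $n - 1;
    return scalar(keys %hit) == $n;
}

my $cascade = check_siteswap(3);        # true: the three-ball cascade
my $valid   = check_siteswap(4, 4, 1);  # true: 441 is a standard pattern
my $invalid = check_siteswap(3, 2, 1);  # false: all three throws land at beat 0
```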

1 I strongly recommend this exercise to everyone, actually - writing code helps you to absorb the ideas, and thinking about why the code's different in the two languages helps you to see the tradeoffs between their design decisions. And On Lisp is a great book, with rather more than its fair share of Keanu moments - those moments when you sit back and say "Whoa." The chapter on anaphoric macros is particularly cool.
2 By "hashes", I mean associative arrays/dictionaries/etc, I'm not bothered about the implementation. Data.Map qualifies, but it's a lot less convenient than just being able to say $foo{$bar} = $fred, or $foo{bar} = $fred (barewords are treated as strings in hash indexes). Consider how ugly most Haskell code would be if there were no syntactic sugar for lists...

pozorvlak: (Default)
Friday, November 3rd, 2006 05:16 pm
This morning I wrote a TeX macro using \expandafter (ie, I had to muck about with the order in which the macro engine expands things). Possibly this makes me some sort of l337 TeX h4x0r, but mainly it makes me unhappy with (a) TeX and (b) XY-Pic for forcing me to do such an ugly thing.

[The macro in question defines an XY-Pic arrow with certain decorations. The code to produce the decoration is (since it's XY-Pic) hairy and unreadable, so I've packaged it into a macro. But this needs to be expanded before the rest of the code for drawing the arrow, or it gets swallowed up by the label on the arrow. Aaargh.]
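
For readers who haven't met it: \expandafter expands the token after the *next* token before anything else happens. A minimal sketch of the standard idiom (not the actual XY-Pic macro):

```tex
\def\name{world}
\def\greet#1{Hello #1!}
% Without \expandafter, \greet would grab the unexpanded token \name.
% The pair of \expandafters forces \name to expand to `world' first:
\expandafter\greet\expandafter{\name}
% typesets: Hello world!
```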

In other mathematical news, today I sent this email to two of my students who have yet to show their faces at my tutorials:
Hi [students' names]

Are you planning to turn up to our tutorial this week? I've got some work from you both that I've marked and would like to hand back. In case you've forgotten, it's in room [number] on the [n'th] floor of the maths building (the really ugly one next to the Boyd Orr and opposite the QM), at 12 noon on Wednesday. It would be good to see you both.

I hope your class tests went well!

[My name]
I think that strikes a nice balance between friendliness and menace :-)