pozorvlak: (Default)
Friday, February 4th, 2011 02:11 pm
On Monday I went to the first day of Conor McBride's course Introduction to Dependently-Typed Programming in Agda. "What's dependently-typed programming?" you ask. Well, when a compiler type-checks your program, it's actually (in a sense which can be made precise) proving theorems about your code. Assignments of types to expressions correspond to proofs of statements in a formal logical language; the precise logical language in which these statements are expressed is determined by your type system (union types correspond to "or", function types correspond to "if... then", that kind of thing). This correspondence goes by the fancy name of "the Curry-Howard isomorphism" in functional programming and type-theory circles. In traditional statically-typed languages these theorems are mostly pretty uninteresting, but by extending your type system so it corresponds to a richer logical language you can start to state and prove some more interesting theorems by expressing them as types, guaranteeing deep properties of your program statically. This is the idea behind dependent typing. A nice corollary of this approach is that types in dependently-typed languages (such as Agda, the language of the course) can be parametrised by values (and not just by other types, as in Haskell), so you can play many of the same type-level metaprogramming games as in C++ and Ada, but in a hopefully less crack-fuelled way. I spent a bit of time last year playing around with Edwin Brady's dependently-typed systems language Idris, but found the dependent-typing paradigm hard to wrap my head around. So I was very pleased when Conor's course was announced.
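
To make "parametrised by values" concrete: the canonical example is the vector whose length is part of its type, so that (say) appending one vector to another provably adds their lengths. Here's a sketch in heavily-extended modern GHC Haskell, which can approximate the idea by promoting values to the type level - in Agda the index is an honest value and none of this machinery is needed:
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Zero | Suc Nat

-- a list indexed by its own length
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Suc n) a

type family Plus (m :: Nat) (n :: Nat) :: Nat where
  Plus 'Zero    n = n
  Plus ('Suc m) n = 'Suc (Plus m n)

-- the type checker statically guarantees that append adds the lengths
append :: Vec m a -> Vec n a -> Vec (Plus m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)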

The course is 50% lab-based, and in these lab sessions I realised something important: fancy type-system or no, I need to test my code, particularly when I'm working in such an unfamiliar language. Dependent typing may be all about correctness by construction, but I'm not (yet?) enlightened enough to work that way - I need to see what results my code actually produces. I asked Conor if there were any way to evaluate individual Agda expressions, and he pointed me at the "Evaluate term to normal form" command in Emacs' Agda mode (which revealed that I had, indeed, managed to produce several incorrect but well-typed programs). Now, that's better than nothing, but it's clearly inadequate as a testing system - you have to type the expression to evaluate every time, and check the result by eye. I asked Conor if Agda had a more extensive unit-testing framework, and he replied "I'm not aware of such a thing. The culture is more 'correctness by construction'. Testing is still valuable."

So I wrote one.

I've written - or at least hacked on - a test framework of some sort at almost every programming job I've had (though these days I try to lean on Test::More and friends as much as possible). It gets hella tedious. This one was a bit different, though. One message that came out of Conor's lectures was that the correct way to represent equality in dependently-typed languages is somewhat controversial; as a beginner, I didn't want to dip a toe into these dangerous waters until I had a clearer idea of the issues. But the basic operation of any testing system is "running THIS should yield THAT". Fortunately, there was a way I could punt on the problem. Since Agda development seems to be closely tied to the interactive Emacs mode, I could deliver my system as a set of Emacs commands; the actual testing could be done by normalising the expression under consideration and testing the normalised form for string-equality with the expected answer.

This was less easy than I'd expected; it turns out that agda-mode commands work by sending commands to a slave GHCi process, which generates elisp to insert the results into the appropriate buffer. I'm sure that agda-mode's authors had some rationale for this rather bizarre design, but it makes agda-mode a lot harder to build on than it could be. However (in close collaboration with Aaron Crane, who both contributed code directly and guided me through the gargantuan Emacs API like a Virgil to my Dante) I eventually succeeded. There are two ways to get our code:
darcs get http://patch-tag.com/r/pozorvlak/agda-test
or
git clone git://github.com/pozorvlak/agda-test.git
Then load the file agda-test.el in your .emacs in the usual way. Having done so, you can add tests to your Agda file by adding comments of the form
{- test TESTNAME: ACTUAL is EXPECTED; -}
For instance,
{- test 2+1: (suc (suc zero)) +N (suc zero) is (suc (suc (suc zero)));
   test 3+0: (suc (suc (suc (zero)))) +N zero is (suc (suc zero)); -}
When you then invoke the function agda2-test-all (via C-u C-c C-v for "verify"), you should be presented with a new buffer called *Agda test results*, containing the text
1..2
ok 1 - 2+1
not ok 2 - 3+0
    got 3
    expected 2
[ Except, er, I get "expected _175" for that last test instead. I don't think that's a bug in my elisp code, because I get the same result when I evaluate that expression manually with C-c C-n. Halp pls?]

You should recognise the output as the Test Anything Protocol; it should be easy to plug in existing tools for aggregating and displaying TAP results.

There are a lot of things I'd like to do in the future:
  • Add commands to only run a single test, or just the tests in a given comment block, or a user-specified ad-hoc test group.
  • Highlight failing tests in the Agda buffer.
  • Allow the user to specify a TAP aggregator for the results.
  • Do something about the case where the test expressions don't compile cleanly.
If you'd like to send patches (which would be very welcome!), the darcs repository is currently considered the master one, so please submit patches there. I'm guessing that most Agda people are darcs users, and the slight additional friction of using darcs as my VCS is noise next to the friction of using Emacs as my editor :-) The git repository is currently just a mirror of the darcs one, but it would be easy to switch over to having it as the master if enough potential contributors would prefer git to darcs. Things might get confusing if I get patches submitted in both places, though :-)
pozorvlak: (Default)
Monday, March 10th, 2008 08:47 pm
A guy on Reddit pointed me at this article today. It's the Wikipedia biography of the Welsh computer scientist Donald Davies. Like my father, he was from the Rhondda Valley; he was the co-inventor of packet-switched networks (like this big one you're using right now); and he once found a bug in Alan Turing's code, before the first computer had been built :-)

I think he may be my new hero.

In other news:
  • I've just finished Portal. It was great, but far too short :-( If you haven't played it yet, then you should go and buy/download a copy right now, and cover your ears and sing "La la la la la..." until it's loaded to avoid being spoilered like I was. The puzzles are still a lot of fun, but the plot and jokes would have been a lot better if I hadn't half-known they were coming.
    [And I didn't find "The Cake is a Lie!", even though I was looking out for it...]
  • Spent a lot of time sitting at the computer today, but didn't get any thesis done. Bah.
  • After a similarly unproductive morning yesterday, I went to the climbing wall, and had quite a good session: I did two 6as (which is good, for me), led a widely-agreed-to-be-undergraded 5+, and finally knocked off a 5+ with a big scary overhang that had been tormenting me for months. Yay!
  • This may be the most awe-inspiring programming war story I've ever read.
  • Letter from Linacre: I didn't get the job. Never mind.
  • The winter mountaineering course I was booked on next weekend has been cancelled due to lack of interest. Just when the snow's started up again! I'm much more annoyed about that than about the rest, to be honest.

Ah well, tomorrow will be better. Anyway, have a look at this video, which is the trailer for one of the films I saw at the mountain film festival on Saturday:



That sea arch he's climbing up the underside of, by the way, is provisionally graded as a French 9b :-)
pozorvlak: (gasmask)
Thursday, March 6th, 2008 11:25 am
A comparison of the Haskell argmunger with its Arc equivalent (or even its Perl equivalent) should make something clear. It's been claimed that Haskell doesn't need macros, because most of the things that Lispers need macros for can be done in Haskell using lazy evaluation and so on. But the argmunger example makes it clear that there are things for which Lispers don't need macros and Haskellers do - if not Template Haskell macros, then SYB or DrIFT or some other macro-like facility.

Lispers often say that coding in other languages feels like manually generating macroexpansions. I'm nobody's idea of a Lisp hacker, but I often feel like this when I'm writing Haskell. Which is why I'm learning about Template Haskell even though I'm not yet au fait with things like do-notation and monad transformers, which most Haskell experts would probably consider more basic. Would you choose before or after you'd walked to Moscow to staunch the blood from your severed arm?
pozorvlak: (polar bear)
Tuesday, February 12th, 2008 10:26 am
I'm going to make what should be an uncontroversial statement: if you don't understand and use monads, you are at best a quarter of a Haskell programmer. A corollary of this is that, since using monad transformers is the only (or at least the approved) way to use two or more monads together, if you don't understand and use monad transformers you are at best half a Haskell programmer.
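
For concreteness, here's the smallest example I know of what transformers are for - a toy line-counting loop that needs State and IO at the same time, written with mtl's StateT (a sketch of my own, nothing canonical):
import Control.Monad.State

-- count lines of input until the user types "quit"
countLines :: StateT Int IO ()
countLines = do
  line <- lift getLine       -- the IO part, lifted into the combined monad
  modify (+1)                -- the State part
  n <- get
  if line == "quit"
     then lift (putStrLn ("bye after " ++ show n ++ " lines"))
     else countLines

main :: IO ()
main = evalStateT countLines 0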

[Another corollary is that I am, at best, about an eighth of a Haskell programmer: though I understand monads well on a theoretical level, I invariably emerge defeated from any attempt to bend them to my will.]

But we'll come back to that later.

Something I've been thinking about for a while is this whole business of designing languages to make programs shorter. )

1 There really ought to be a word that means "would never use a twopenny word when a half-crown word would do", but I can't think of one. English grads? Edit: sesquipedalian! Of course! Thanks, [livejournal.com profile] fanf! (Originally, I used "prolix")
2 I actually came up with this list by thinking about languages whose users were the most passionate. But they're also extremely concise, which I think is a large part of the reason for the passion. If I were focusing purely on concision, I should probably consider Forth, but I don't know enough about it.
3 J has "boxed arrays" too, which are something like two-dimensional s-expressions, but let's leave those aside for now.
4 You might want to raise this objection against Smalltalk, too: objects are members of classes, which are something like types. Now, I've hardly used Smalltalk, so I'm probably talking out of my elbow, but: since everything is an object, and the language has powerful reflection features and duck typing, we can in fact write generic operators that work for objects of many or all classes. But maybe I'm entirely wrong about Smalltalk programming: in which case, please delete all references to the language from my argument.
5 Do you find yourself wanting to go out and strangle small fluffy animals every time you have to type out an instance declaration that would be entirely unnecessary in a duck-typed language? I do. Particularly when it doesn't work and spits out some ludicrous error message at me, telling me that I've encountered another stupid corner case of the typeclass system.
6 I learned to my surprise the other day that I'm a member of the Geometry and Topology research group, and not the algebra research group as I'd always assumed - apparently universal algebra is now considered a branch of geometry!
pozorvlak: (gasmask)
Tuesday, February 5th, 2008 02:35 am
[livejournal.com profile] totherme kindly drew my attention to this blog post today. The author, Slava Akhmechet, tries to explain away that horrible feeling of unproductivity and frustration that I know so well from my attempts to use Haskell: his claim is that it's just that my expectations are miscalibrated from using imperative languages. A Haskell solution to a given problem, he claims, will take the same amount of thought as a solution in another language, but much less typing: by simple statistics, therefore, you're going to spend a lot of time staring at the screen not doing anything visible, and this can feel very unproductive if you're not used to it.

As an example, he gives the code
import Data.Generics (Data, listify)              -- SYB
import Data.List (isSuffixOf, nub)
import Language.Haskell.Syntax (HsName (HsIdent)) -- haskell-src

extractWidgets :: (Data a) => a -> [String]
extractWidgets = nub . map (\(HsIdent a)->a) . listify isWidget
    where isWidget (HsIdent actionName)
              | "Widget" `isSuffixOf` actionName = True
          isWidget _ = False
Bleh! :-( This function takes a parse tree representing some Haskell code, and extracts all the identifiers ending in "Widget", then removes the duplicates. Slava challenges any Java programmers reading to do the same in five lines or fewer.

Pointless language pissing match, including some Arc coding )

On language concision in general: I typically find that Haskell requires fewer lines for a given program, but Perl requires fewer characters, and they both use about the same number of tokens. Lisp is longer and messier than Haskell for easy problems, but quickly gains ground as the problems get harder.1 The APL family own on all three axes. This is extremely unscientific, of course, and because I don't know much APL/J I can't be sure how it extends to harder problems. I did once email Paul Graham asking why, if succinctness was power, he didn't switch to a member of the APL family; he has yet to reply, but I don't attach any great significance to this.

And having got all that stuff out of my head, I'm going back to bed. Hopefully I'll be able to sleep this time :-)

1Fun exercise: translate the example code in On Lisp into Haskell (or Perl, Ruby, etc.). It really helps you to get to grips with the material in the book. I found that the Haskell solutions were shorter and cleaner than the Lisp solutions up until about a third of the way through the book, at which point Lisp started to catch up with a vengeance: shortly thereafter, he was doing things in a few lines of Lisp that simply couldn't be done in finitely many lines of Haskell. I'd be very interested to see solutions written by someone who knew what they were doing, though!
pozorvlak: (Default)
Wednesday, January 30th, 2008 11:04 am
http://arclanguage.org/

Well, that makes the "which language shall I learn next" question rather easier...

First impressions (based on reading the tutorial rather than playing with it): I like it. It embodies PG's philosophy that a language should get out of your way and let you shoot yourself in the foot because one day, you might need to do tarsal surgery and only have a pistol to hand. In many respects, it's the anti-Haskell: it encourages you to put off the decision of how to represent your data as long as possible. Here's a feature along those lines that I liked: indexes into data-structures are indistinguishable from function calls. So if I write
(foo 0)
you have no way of knowing if foo is a list, a string or a function. Evaluation is strict by default, which I think is a net loss (but you've got macros, so it's swings and roundabouts, I suppose). The anaphoric (pronoun-introducing) macros from On Lisp are included by default - I've found pronouns to be very useful in Perl, so this can only be a Good Thing. I was amused to see that most of the language is defined in Arc, and that PG seems to think that this is a bold and novel experiment :-)
pozorvlak: (Default)
Monday, January 7th, 2008 10:33 pm
This afternoon I gratefully took possession of the copy of Iverson's A Programming Language which the library had been holding for me. Within two sentences, he'd managed to say something so thought-provoking that I felt compelled to post about it here. )
pozorvlak: (kittin)
Sunday, September 30th, 2007 08:26 pm
SICP section 2.4 set off my wheel-reinvention alarms in a big way. It discusses writing code to use complex numbers in a representation-agnostic way; your complex numbers can be either in polar (r, θ) form or rectangular (x + i y) form, and ideally you'd like not to care. Their solution? "Tag" each number with its representation, by passing around pairs whose first element is a symbol: either 'rectangular or 'polar. Your procedures to (say) find the magnitude of your complex numbers first check the symbol and then (in a nice, generic, data-driven, extensible way) dispatch the correct procedure given the representation you're using.

Which is all very well, but isn't this... y'know... a type system? Given that Scheme obviously has some sort of type system in place so it can complain when I try to add 1 to "fred", aren't we reinventing a pretty large wheel? Is this, in fact, final and clinching proof of Smith's (First) Law?

Well, yes and no. Yes, we are implementing a simple type system, but given that the main thrust of the book is how to write interpreters and compilers, that's a pretty useful thing to know. It's also interesting to see how you can whip up your own type system (and module system) atop pairs and lambdas. It's a very simple type system, but it's not too hard to see how, with a slightly more complicated data structure storing your dispatch tables and possibly some macros to take away the busywork, you could implement inheritance, generics... even something like Frink's physical units tracking. And you could mix and match such systems throughout your program. I can't decide if that would be really really cool, or a maintenance and reuse nightmare :-)
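
For comparison, here's roughly what the same dispatch looks like when the tag is pushed into the type system proper - a minimal Haskell sketch, in which SICP's runtime symbol becomes a constructor and the dispatch-on-tag becomes pattern-matching, checked at compile time:
-- SICP's 'rectangular and 'polar tags, as constructors of a sum type
data Complex = Rectangular Double Double   -- x + iy
             | Polar Double Double         -- (r, theta)

magnitude :: Complex -> Double
magnitude (Rectangular x y) = sqrt (x*x + y*y)
magnitude (Polar r _)       = r
Of course, the Scheme version lets you bolt new representations on at runtime without touching existing code, which is exactly the extensibility the book is after.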
pozorvlak: (babylon)
Saturday, September 29th, 2007 12:58 am
While trying to do one of the exercises from SICP, I wanted to apply and to a list. Now, I could have used fold, but it seemed a bit of a waste given that Scheme's and is variadic: what I needed was an equivalent of Perl's func(@list_of_args) idiom, or Python and Ruby's func(*list). So, inspired by the syntax for variadic function definitions, I tried
(and . list)
for various values of list, and it worked! Rather elegant, I thought: no special syntax, just an application of Lisp's usual list-construction and argument-passing rules (like the Perl idiom, in fact). But when I tried it with + instead of and, no dice - instead, I get a "Combination must be a proper list" error. Obviously something to do (I realise, belatedly) with the fact that and is a special form and not an ordinary function. Maybe Larry Wall was onto something with his idea that different things should look different :-) But why doesn't it work for ordinary functions? Isn't (func . args) equal to (func arg1 arg2 arg3...)?

[You can get the effect I was originally after by using (apply func list).]


One of the especially nice things about my shiny new Ubuntu installation is that I now have a working Java system, so I can run J in all its graphical glory. So today I worked through one of the J labs: specifically, the one about the Catalan numbers (I can't find the lab online, alas - you'll just have to download J and try it for yourselves :-) ). The standard formula for the Catalan numbers involves factorials, so both the nth Catalan number itself and the time needed to calculate it grow very quickly with n. You can improve the run-time by rearranging the formula to use binomial coefficients (J has a built-in function for calculating nCr in a non-stupid way), and this makes calculation of Catalan numbers up to the 10,000th or so almost instant. But when I tried to use this code to find the 100,000th Catalan number, my computer locked solid. Response became glacial, the hard drive paged constantly, and I had to kill J and restart the lab.

Fortunately, there was a better way. The rest of the lab showed how you could use the prime decomposition of (2n)! and n! to find a much quicker way of calculating Catalan numbers. With this code, the calculation again became instant - a salutary reminder of the power of a good algorithm :-) I'm still finding J a bit hard to read, though: code like
+/ <. n % p ^ >: i. <. p ^. n
or worse,
cat1=: (! +:) % >:
makes my eyes cross :-(

[As any fule can see, the first line calculates the exponent of each prime p in n!, and the second is the naive formula for calculating the Catalan numbers.]
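
For fellow strugglers, here's the same algorithm spelled out in Haskell - a sketch with helper names of my own invention, not the lab's code; exponentIn is Legendre's formula, i.e. what that first J line computes:
-- exponent of the prime p in n!: sum of floor(n / p^k) for k = 1, 2, ...
exponentIn :: Integer -> Integer -> Integer
exponentIn p n = sum (takeWhile (> 0) [n `div` (p ^ k) | k <- [1 ..]])

-- naive trial-division primes; fine for illustration
primesUpTo :: Integer -> [Integer]
primesUpTo m = sieve [2 .. m]
  where sieve []       = []
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- catalan n = (2n)! / (n! * (n+1)!), assembled prime by prime
catalan :: Integer -> Integer
catalan n = product
  [ p ^ e
  | p <- primesUpTo (2 * n)
  , let e = exponentIn p (2 * n) - exponentIn p n - exponentIn p (n + 1)
  , e > 0 ]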

While writing this post, I came across this link, which is rather fun: Culture Shock.
pozorvlak: (pozorvlak)
Monday, September 24th, 2007 12:49 pm
I got back from Wales on Tuesday, to find my copy of Structure and Interpretation of Computer Programs (sometimes known as the Wizard book, or SICP, or "sickup", as I've taken to calling it) waiting for me. It's the first-year computer science textbook used at MIT, and early signs are that it's going to be worth its weight in rather good coffee.

Chapter summary )

I've been working through the exercises, and I'm up to about page 70 (most of the way through Chapter 1). So far, there hasn't been anything really difficult, though a couple of the exercises require you to do Actual Experiments on a computer, and since, what with one thing and another, I don't have a working one at the moment (I'm typing this on [livejournal.com profile] wormwood_pearl's laptop), I haven't been able to do these. Some of the algorithms are new to me, and I've particularly enjoyed doing some programming with loop invariants (but how do you find them in the first place? Presumably there are techniques...), but it's hard to shake off the feeling that a lot of it is old hat; I've been programming computers for nearly twenty years, I'm no longer surprised by the concept of a data structure :-) I've enjoyed the philosophical and historical asides - if an algorithm dates back to ancient Egypt, they'll tell you which papyrus fragment it was found on - and the generally thoughtful and informed tone of the book. The progression from procedural to data-driven to metalinguistic programming reminds me of Eric Raymond's The Art of Unix Programming, but this is surely not coincidence - like many expert Unix hackers, Eric was a Lisper first. [livejournal.com profile] totherme may be amused to learn that his old tutor, Joe Stoy, is a frequently-cited contributor :-)

1Including some probabilistic algorithms - the ease with which one can roll a die in Lisp is not such a trivial matter :-)
2Plus ways of making change from a dollar or pound - I was amused to note that they haven't updated it to reflect the disappearance of the halfpenny :-)
3They're quite fond of the prefix "meta": for instance, they call an interpreter for language X written in language X a "metacircular interpreter", rather than just a circular one, for reasons which I can't quite fathom.
pozorvlak: (gasmask)
Tuesday, September 11th, 2007 07:47 pm
irb(main):001:0> def fred(x); x + 1; end
nil
irb(main):002:0> fred.methods
ArgumentError: wrong # of arguments(0 for 1)
        from (irb):2:in `fred'
        from (irb):2
irb(main):003:0> 1.+.methods
ArgumentError: wrong # of arguments(0 for 1)
        from (irb):3:in `+'
        from (irb):3
irb(main):004:0> {|x| x+1}.methods
SyntaxError: compile error
(irb):4: parse error
{|x| x+1}.methods
  ^
(irb):4: parse error
{|x| x+1}.methods
         ^
        from (irb):4
irb(main):005:0> lambda {|x| x+1}.methods
["call", "==", "[]", "arity", "to_s", "dup", "eql?", "protected_methods",
 "frozen?", "===", "respond_to?", "class", "kind_of?", "__send__", "nil?",
 "instance_eval", "public_methods", "untaint", "__id__", "display",
 "inspect", "taint", "hash", "=~", "private_methods", "to_a", "is_a?",
 "clone", "equal?", "singleton_methods", "freeze", "type", "instance_of?",
 "send", "methods", "method", "tainted?", "instance_variables", "id",
 "extend"]
In unrelated news, Happy birthday [livejournal.com profile] stronae!

And does anyone know if there's a portable way of finding the arity of a function in Scheme?
pozorvlak: (kittin)
Monday, September 3rd, 2007 12:41 am
I have a question for the CS types out there: what exactly is the point of tail-call elimination?

Before you go into great detail about stack overflows, rest assured that I understand that bit. But why is tail-recursive code considered such a good thing? I mean, you write your code in a non-obvious way so that the optimiser can turn it into the loop that you meant to write in the first place. Why not just write the loop yourself? It has the same effect, expresses your intent more clearly, and doesn't require the mental gymnastics to make your code tail-recursive, or to decode it when reading.
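
To be concrete about the shape I mean, here's the standard summing example (Haskell notation, names mine; the point is language-independent):
-- the obvious version: not tail-recursive, because the (+) is still
-- pending when the recursive call returns, so it eats stack
sumTo :: Integer -> Integer
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)

-- the accumulator version: the recursive call is the entire result, so a
-- tail-call-eliminating compiler can run it as a loop in constant space
-- (in GHC you'd also want to force acc, but the shape is the point)
sumTo' :: Integer -> Integer
sumTo' = go 0
  where go acc 0 = acc
        go acc n = go (acc + n) (n - 1)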

I can see why you need it if you're determined to make your language stateless (though actually, that's a related question: how is recursive code with accumulators any safer than iterative code with local state?), but why is it needed in languages like Lisp, which already have perfectly good looping constructs?

Don't get me wrong: recursion is clearly a Good Thing. But all the really interesting uses of recursion, it seems to me, are things like recursion on trees or other data structures, which are not in general tail-recursive.

In other hacking news, I spent much of my evening making a set of spice racks out of chopsticks, duct tape and strawberry boxes. My kitchen is now a fraction tidier.
pozorvlak: (Default)
Wednesday, May 16th, 2007 09:32 am
The relationship between human languages and programming languages is interesting to me. In general, programming languages are a lot simpler (which is great, because it makes it easier to learn lots of them): they also have rather less vocabulary (but that's OK, because you're actively encouraged to make up your own). We talk about programming "idiomatically": while it's often possible to use one language's idioms in another, they can look almost as odd as, say, Japanese idioms used in English. In the evolution of programming languages, you can see a super-compressed version of the evolution of human languages: ideas, grammatical forms, and vocabulary transfer from one language to another where there's contact between the two communities, and languages split into mutually-unintelligible descendants when the communities fracture. There's at least one example of creolization that I'm aware of: the programming language Perl can be considered a creole of C, shell, sed and awk (and has since absorbed features from many other languages). Perl's also been heavily influenced by that other notable creole, English, and incorporates several features from human languages (like topicalization). It's no coincidence that Larry Wall, the original creator of Perl, studied linguistics at graduate school. In the opposite direction, you'll sometimes hear hackers describing human languages in terms derived from programming languages: Japanese is sometimes said to use reverse Polish notation.

But as I said, programming languages tend to be a lot simpler than human languages. In fact, so-called object-oriented languages (like, say, Java) have been described (perhaps "derided" is a better term) as languages in which everything is a noun: dually, so-called functional languages (like Haskell) could be described as languages in which everything is a verb. Actually, verbs and nouns cover pretty much everything in most programming languages. Perl has pronouns and conjunctions, and pronouns can be added to Lisp with sufficient cleverness; a couple of languages have something vaguely resembling adverbs.

So you can imagine my pleasure at learning yesterday that the language called J has gerunds :-)

*downloads J interpreter*


Fig. 1: The gerund attacks some peaceful pronouns (image courtesy n. molesworth)
pozorvlak: (Default)
Thursday, March 22nd, 2007 11:33 am
[Non-geeks: sorry for the recent rash of beware-the-geek posts: this stuff has been much on my mind lately. Normal service to be resumed shortly.]

Is this the kind of thing people mean when they talk about embedded domain-specific languages in Haskell?

Code listing )

There's a long tradition of writing symbolic differentiators in epsilon lines of Lisp, so this seemed like a reasonable exercise. It could be straightforwardly extended to deal with other variables, partial differentiation, standard functions like sin and cos, etc, though it would quickly get unwieldy. I'm not too happy with the simplifier - can anyone see how to make it tidier? Basically, I'm applying rules until no more apply (this is how Mathematica works internally, btw): maybe an explicit rules database would help? Expressing the rules as Haskell functions allows me to use pattern-matching, which helps, but less than I'd hoped.
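
(The listing itself is behind the cut, but for flavour, here's a stripped-down sketch of the shape under discussion - one variable, two operations, a derivative, and a simplifier that applies rules until it reaches a fixpoint. This is a toy of mine, not the code behind the cut.)
data Expr = Var | Const Integer | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Show)

diff :: Expr -> Expr
diff Var       = Const 1
diff (Const _) = Const 0
diff (Add f g) = Add (diff f) (diff g)
diff (Mul f g) = Add (Mul (diff f) g) (Mul f (diff g))

-- rewrite bottom-up, then repeat the whole pass until nothing changes
simplify :: Expr -> Expr
simplify e = let e' = rewrite e in if e' == e then e else simplify e'
  where
    rewrite (Add f g) = rule (Add (rewrite f) (rewrite g))
    rewrite (Mul f g) = rule (Mul (rewrite f) (rewrite g))
    rewrite x         = rule x
    rule (Add (Const 0) f)         = f
    rule (Add f (Const 0))         = f
    rule (Mul (Const 1) f)         = f
    rule (Mul f (Const 1))         = f
    rule (Mul (Const 0) _)         = Const 0
    rule (Mul _ (Const 0))         = Const 0
    rule (Add (Const a) (Const b)) = Const (a + b)
    rule (Mul (Const a) (Const b)) = Const (a * b)
    rule x                         = x
With this, simplify (diff (Mul Var Var)) comes out as Add Var Var - that is, 2x, as it should.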

Edit: on reflection, this probably shouldn't qualify as a DSL. What could really use DSLifying, however, is the simplification rules. (Slot 1 + Slot 2) + Slot 3 --> Slot 1 + (Slot 2 + Slot 3), or something, or better yet, associative op = (Slot 1 `op` Slot 2) `op` Slot 3 --> Slot 1 `op` (Slot 2 `op` Slot 3); associative (+); associative (*). Now, how to implement that? :-)
pozorvlak: (Default)
Friday, February 23rd, 2007 10:55 am
pozorvlak@delirium:~> clisp
  i i i i i i i       ooooo    o        ooooooo   ooooo   ooooo
  I I I I I I I      8     8   8           8     8     o  8    8
  I  \ `+' /  I      8         8           8     8        8    8
   \  `-+-'  /       8         8           8      ooooo   8oooo
    `-__|__-'        8         8           8           8  8
        |            8     o   8           8     o     8  8
  ------+------       ooooo    8oooooo  ooo8ooo   ooooo   8

Copyright (c) Bruno Haible, Michael Stoll 1992, 1993
Copyright (c) Bruno Haible, Marcus Daniels 1994-1997
Copyright (c) Bruno Haible, Pierpaolo Bernardi, Sam Steingold 1998
Copyright (c) Bruno Haible, Sam Steingold 1999-2000
Copyright (c) Sam Steingold, Bruno Haible 2001-2006

[1]> (defun car (x) (cdr x))

** - Continuable Error
DEFUN/DEFMACRO(CAR): #<PACKAGE COMMON-LISP> is locked
If you continue (by typing 'continue'): Ignore the lock and proceed
The following restarts are also available:
ABORT          :R1      ABORT
Break 1 [2]> continue
WARNING: DEFUN/DEFMACRO: redefining function CAR in top-level, was defined in
         C
CAR
[3]> (car '(3 4))
(4)
[4]> (defun defun (x) (+ 3 x))
DEFUN
[5]> (defun 7)
10

Bad. Ass.

[Those of you who don't speak Lisp may find a measure of enlightenment here. Should be reasonably accessible.]
pozorvlak: (Default)
Saturday, February 17th, 2007 12:22 am
Re. today's xkcd comic:

Hell yeah. Duct tape of the Internet, baby, and don't you forget it.

See also In the Beginning Was the Command Line.