Thursday, August 26th, 2010 03:32 pm
A couple of days ago, Chris Yocum asked me to help him out with some OCaml code. He's learning OCaml at the moment, and thought he'd practice it by solving a problem he'd encountered in his research on early Irish law.

In early Irish law, murder was punishable by a fine dependent on the status of the victim. Ten-and-a-half cows per king murdered (no, really!), five for a major nobleman, and so on. At one particular event, we know the total fine levied, but not the number of victims. Chris wanted to find out the range of possibilities. This is an example of the change-making problem: given small change of various denominations, how can you give your customer 73p? I most often encounter this problem with rugby scores: if Wales beat England 67 - 3 (I can dream...), then at seven points for a converted try, five for an unconverted try and three for a penalty, what might have happened?
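Just to fix ideas, here's a minimal brute-force sketch in OCaml of the kind of enumeration involved (function and variable names are mine; the real code is behind the cut below). For the cows, count in half-cows so everything stays integral.

    (* Enumerate every way of expressing [total] as a non-negative
       combination of [denoms]. Each solution is a list of counts,
       one per denomination. *)
    let rec solutions denoms total =
      match denoms with
      | [] -> if total = 0 then [ [] ] else []
      | d :: rest ->
          (* Try every feasible count of denomination d, then recurse. *)
          let rec go n acc =
            if n * d > total then acc
            else
              let tails = solutions rest (total - n * d) in
              go (n + 1) (List.map (fun counts -> n :: counts) tails @ acc)
          in
          go 0 []

    (* Wales 67: counts of (converted tries, unconverted tries, penalties). *)
    let () =
      List.iter
        (fun counts ->
           List.iter (Printf.printf "%d ") counts;
           print_newline ())
        (solutions [ 7; 5; 3 ] 67)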

( Be the change you want to make )

All suggestions for improvement, as ever, gratefully accepted :-)
Sunday, August 19th, 2007 12:09 am
Here's something that occurred to me the other day: ( consider duck-typed languages like Python )

OK, so far so standard. Now, most duck-typed languages are dynamic, which means that we only try to determine if bar has a spoffle method at runtime, and die with an error message when it doesn't (possibly after trying some error recovery, e.g. by calling an AUTOLOAD method or similar). But it occurred to me that in simple cases (which is to say, the majority), we could work out most of this stuff statically. For each function definition, see what methods are called on its arguments. Recurse into functions called from that function. Now, each time a function is called, try to work out what methods the arguments will support, and see if that includes the interface required by the function. Issue an error if not. Thus we get the code reuse benefits of duck typing, and the correctness benefits of static checking. If the static checking is slowing down your development cycle too much, drop back to fully dynamic checking, and only run the static checks on your nightly builds or something.

This also cleared up something else I'd been vaguely wondering about. In his splendid Drunken Blog Rant "Rich Programmer Food", Steve Yegge says:
Another problem is that they believe any type "error", no matter how insignificant it might be to the operation of your personal program at this particular moment, should be treated as a news item worthy of the Wall Street Journal front page. Everyone should throw down their ploughshares and stop working until it's fixed. The concept of a type "warning" never enters the discussion.
I'd wondered at the time what a type warning would mean. When is a type error mild enough to only warrant a warning? Here's one idea: a type warning should be issued when it cannot be proved that an argument to a function implements the interface that function needs; a type error should be issued when it can be proved that it doesn't.
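To make that concrete, here's a toy sketch in OCaml (the names and the three-way split are mine, purely illustrative): suppose that for each argument we know both the set of methods it provably has and the set it might pick up dynamically (via AUTOLOAD or similar).

    module S = Set.Make (String)

    type verdict = Passes | Warning | Error

    (* [required]: methods the function calls on its argument.
       [known]: methods we can prove the argument has.
       [maybe]: methods it might acquire dynamically (AUTOLOAD etc.). *)
    let check ~required ~known ~maybe =
      if S.subset required known then Passes
      else if S.subset required (S.union known maybe) then Warning
      else Error

    let () =
      let required = S.singleton "spoffle" in
      match check ~required ~known:(S.singleton "quack") ~maybe:S.empty with
      | Passes -> ()
      | Warning -> print_endline "type warning: can't prove spoffle exists"
      | Error -> print_endline "type error: provably no spoffle method"

A warning means we couldn't prove the interface is implemented; an error means we proved it isn't.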

This all seemed very interesting, and struck me as potentially a fun and reasonably easy hacking project, at least to get something workable going. But if it's occurred to me, it has probably occurred to someone else, so I asked the Mythical Perfect Haskell Programmer if he was aware of any work that had been done on static duck-typed languages. "Oh yes," he said, "O'Caml's one." Buhuh? Really? Well, that's O'Caml moved a couple of rungs up my "cool languages to learn" stack...
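For the record, here's roughly what that looks like in OCaml (a quick sketch): the compiler infers the interface a function requires from the methods it calls on its arguments, exactly as described above.

    (* OCaml infers poke's type as < spoffle : 'a; .. > -> 'a:
       "anything with a spoffle method". *)
    let poke bar = bar#spoffle

    let duck = object
      method spoffle = print_endline "spoffle!"
    end

    let goose = object
      method spoffle = print_endline "honk (but spoffling)"
      method fly = ()
    end

    let () =
      poke duck;  (* accepted *)
      poke goose  (* accepted: extra methods are fine *)
      (* poke (object method quack = () end)
         would be rejected at compile time: no spoffle method. *)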
Tuesday, April 17th, 2007 01:31 pm
About a month ago, [livejournal.com profile] wormwood_pearl and I went to see the fantastic stand-up comedian Daniel Kitson (who you should all go and see if the slightest opportunity presents itself). One thing that he said really struck me:
If you ever think you've had an original thought, put it in quotation marks and type it into Google. Now that's a humbling experience.
So I really shouldn't have been surprised when I found out that someone else had already gone ahead with my idea to extend TeX with OCaml. You can get OCamlTeX here. Somewhat to my surprise, it's based on PerlTeX, which allows you to extend TeX with Perl, writing the bodies of your macros in Perl code. For example, \perlnewcommand{\reversewords}[1]{join " ", reverse split " ", $_[0]} defines a macro that reverses the words in its argument (not at all easy in raw TeX). I'm not entirely sure (the Perl is less than clear, and the TeX is completely beyond me), but it seems to work by setting up intermediate files to communicate between a perl process and a latex process: latex writes requests to one file; perl reads them and writes the results to another file, which latex then reads and incorporates into the document. Thus there's no need to reimplement TeX's hairy lexer and macro-expander (as a preprocessor-based approach would require), or to rewrite the whole of TeX in Perl/OCaml (as I originally envisaged - but come on, it could be fun :-) ).
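For comparison, the word-reversal is a one-liner in plain OCaml too (modern standard library; how OCamlTeX actually binds such code to a macro, I'll leave to its documentation):

    (* Reverse the words of a string, as in the \reversewords macro above. *)
    let reverse_words s =
      String.concat " " (List.rev (String.split_on_char ' ' s))

    let () = print_endline (reverse_words "the quick brown fox")
    (* prints: fox brown quick the *)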

Another one for the originality-is-hard file is this article, which says a lot of the things I've been thinking about the differences between static typing and unit testing, and how they each influence development methodology. Only the author, mwh, said it back in 2004 :-(. He mentions some things I hadn't thought of, like unit testing tending to make your designs more loosely coupled (because it's such a pain setting up dozens of helper objects for each test). I was particularly struck by this paragraph:
As to which "side" is better, well, I'm not going to attempt an answer to that question :-) One thing to consider, though, is the side benefits of a given methodology such as the unit testing driving loose coupling. There are probably side benefits to the powerful static type system approach, too. I will notice that for sub-genius programmers, the unit test approach is probably just plain easier.
This resonated with something else that I've been thinking about. Kevin Barnes draws a distinction between "freedom languages" and "safety languages". Freedom languages are languages like Lisp or Perl or C, which strive to give you as much freedom as possible; the rationale for this was expressed wonderfully by chromatic: bad programmers will shoot themselves in the foot anyway, and good programmers will use the features to do things that would otherwise be impossible. Safety languages, like Java or Ada, go to the opposite extreme and make it very hard to do unsafe things. The rationale is roughly (a) fewer bugs (even good programmers make mistakes), and (b) better automated tools.

Compare this with the distinction drawn here between Languages for the Masses and Languages for Smart People. At first I thought he was talking about the same thing, but then it struck me: Haskell (and ML, etc.) are Languages for Smart People that are also Safety Languages. Which makes them interesting, as most other LFSPs are Freedom Languages. This suggests the question: are there any LFMs that are Freedom Languages? Possibly Python: it's certainly an LFM (it's closely entwined with a project called Computer Programming for Everybody, it will be the main programming language on the One Laptop Per Child laptops, and so on - note that I'm not saying that being an LFM is a bad thing!), and yet you can (if you're prepared to ignore the scary DEPRECATED! warnings) do most of the run-time metaprogramming and dark hackery that are more common in Ruby, Perl etc.

Re-reading the LFSP article, I notice that he's mainly concerned with abstractive power: maybe this is the distinction. Safety Languages for Smart People employ powerful abstractions but restrict dark hackery; Freedom Languages for Smart People allow for both. Maybe the distinction is only visible above a certain level of power, just as the weak and electromagnetic forces are only distinct below a certain energy...

Of course, if all my "thoughts" are formed by stitching together things I read on the web, it's hardly surprising that they're not original :-)
Monday, May 15th, 2006 04:03 pm
Sadly, Doug Bagley's Functional Programming Koans seem to have disappeared from the web. Googling will get you some, but not all. Here's my response to some of them anyway.

[If you're unaware of the connection between hackerdom and Zen Buddhism, you should probably go and read the AI koans, and then the Unix koans of Master Foo, though they're not as good. And if anyone can find a complete archive of the Tales of Zen Master Greg I will love you forever. Oh, and here's a nice koan I stumbled across while writing this.]

[And then you should read The Gateless Gate of Mumon, and blow your mind apart.]

( The koan of Higher Order Functions )
( The koan of Static Typing )