pozorvlak: (Default)
Friday, January 14th, 2011 03:01 pm
[This is a cleaned-up version of the notes from my Glasgow.pm tech talk last night. The central example is lifted wholesale from apenwarr's excellent README for redo; any errors are of course my own.]
Read more... )
pozorvlak: (pozorvlak)
Wednesday, January 12th, 2011 08:28 pm
I think that djb redo will turn out to be the Git of build systems.

Read more... )
pozorvlak: (Default)
Wednesday, December 1st, 2010 03:03 pm
After my git/darcs talk, some of the folks on darcs-users were kind enough to offer constructive criticism. In particular, Stephen Turnbull mentioned an interesting use-case which I want to discuss further.

As I tried to stress, the key insight required to translate between git-think and darcs-think is
In git, the natural thing to pull is a branch (an ordered list of commits, each of which is assumed to depend on those before it); in darcs, the natural thing to pull is a patch (a named change whose dependencies are calculated optimistically by the system).
Stephen's use-case is this: you're a release manager, and one of your hackers has written some code you want to pull in. However, you don't want all their code. Suppose the change is nicely isolated into a single commit. In darcs, you can pull in just that commit (and a minimal set of prior commits required for it to apply cleanly). This is as far as my thinking had got, but Stephen points out that the interesting part of the story is what happens next: if you subsequently pull in other changes that depend on that commit, then darcs will note that it's already in your repository and all will be well.

This is true in git too, if the developer has helpfully isolated that change into a branch: you can pull that branch, and subsequent merges will take account of the fact that you've done so. However, if the developer hasn't been so considerate, then you're potentially in trouble: you can cherry-pick that commit (creating a new commit with the same effect), but if you subsequently pull a branch containing the original then git will not take account of your having cherry-picked it earlier. If either of you has changed any of the lines affected by that diff, then you'll get conflicts.

Thinking about this further, this means that I was even righter than I realised. In the git view of the world, the fact that that commit is not in its own branch is an assertion that it only makes sense in the context of the rest of the branch. Attempting to pull it in on its own is therefore not useful. You can do it, of course - it's Unix git, you can do anything - but you're making a rod for your own back. As I tried to emphasize in the talk, git-cherry-pick is a low-level, hackish tool, only really intended for use in drastic situations or in the privacy of your own local repo. If you want something semantically meaningful, only pull branches.

Git-using release managers, therefore, have to rely on developers to package atomic features sensibly into branches. If your developers can't be trusted to do this, you may have a problem. But note that darcs has the dual problem: if you can't trust your developers to specify semantic (non-textual) dependencies with darcs commit --ask-deps, then you're potentially going to be spending a lot of time tracking down semantic dependencies by hand. Having been a release manager under neither system, I don't have any intuition for which is worse - can anyone here shed any light?

[The cynic in me suggests that any benefits to the Darcs approach would only become apparent in projects which are large enough to rule out the use of Darcs with its current performance, but I could, as ever, be completely wrong. And besides, not being very useful right now doesn't rule out its ultimately proving to be a superior solution.]

On another note: Mark Stosberg (who's written quite a lot on the differences between darcs and git himself) confirms that people actually do use spontaneous branches, with ticket numbers as the "branch" identifiers. Which got me thinking. Any git user can see that spontaneous branches are more work for the user than real branches, because you have to remember your ticket number and type it into your commit message every time. Does that sound like a trivial amount of work to complain about? That's because you have no idea how easy branching and merging is in git. But it's also work that can be automated away with some tool support. Stick a file somewhere in _darcs containing the name of the current ticket, and somehow prepend that to your commit messages. I have just written a Perl script to do that (GitHub repo, share and enjoy).

Now we just need the ability to easily back-out and restore incomplete tickets without creating a whole new repo, and they'll be as convenient as git topic branches :-)
pozorvlak: (Default)
Tuesday, November 30th, 2010 04:03 pm
[livejournal.com profile] fanf linked to this on Twitter: I thought I'd fill it in with my "score". Domain-specific stuff applies to my current main gig.

Read more... )
pozorvlak: (Default)
Wednesday, November 17th, 2010 09:59 am
Here's a pattern I see a lot in imperative languages:
sub doStuff {

    # do this
    code
    code
    code
    more code

    # do that
    code
    code
    code
    code
    more code

    # do t'other
    code
    code
    code
    code
}
That is, a long subroutine with different sections separated out by vertical whitespace.

I try to live by the following rule:
If you are ever tempted to insert vertical whitespace in the middle of a subroutine to separate out different sections, just put the sections in their own freaking subroutines already.
The usual reasons to strive for short subroutines apply:
  • It's very hard to get an overview of a block of code that doesn't fit into one screen.
  • Long subroutines make it much harder to follow the dataflow: where was this variable defined? When was it last set? When will it be used again?
  • The subroutine is the basic unit of re-use; if a section of code isn't isolated into a subroutine, you can't use it again elsewhere.
  • For maximum comprehensibility, a subroutine should do one well-defined task. If your subroutine's long, it suggests that it's doing lots of things.
The hard part of splitting up large subroutines is usually finding the right subsections to extract; but if you feel a temptation to add vertical whitespace, that's a great big hint that this is one such place. Take that comment you were tempted to add, change the spaces to underscores, and you've even got the name of your new subroutine¹. If you're using a modern IDE like Eclipse or Padre, extracting the new subroutine is a matter of selecting some text and clicking "Extract Method"; but even in vim it should be pretty straightforward².
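To make the rule concrete, here's the transformation on a made-up example (in Python rather than Perl, but the shape is the same in any imperative language). Note how the section comments become the names of the new subroutines:

```python
# Before: one routine with comment-separated sections.
def report_before(lines):
    # parse the values
    values = [int(s) for s in lines if s.strip()]

    # compute the summary
    total = sum(values)
    mean = total / len(values)

    # format the output
    return f"n={len(values)} total={total} mean={mean:.1f}"


# After: each commented section is its own freaking subroutine already.
def parse_values(lines):
    return [int(s) for s in lines if s.strip()]

def compute_summary(values):
    total = sum(values)
    return total, total / len(values)

def format_output(values, total, mean):
    return f"n={len(values)} total={total} mean={mean:.1f}"

def report_after(lines):
    values = parse_values(lines)
    total, mean = compute_summary(values)
    return format_output(values, total, mean)
```

The refactored top-level routine now reads like the comments did, and each piece can be tested and reused on its own.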

So please, take that hint. Or I shall track you down and strangle you.

¹ As any fule kno, there are only two really hard problems in computer science: cache invalidation, naming things, and off-by-one errors.
² I once had a colleague who'd not only write subroutines that consisted of commented whitespace-separated sections, but who put each section into its own lexical scope. At which point, you've done 90% of the work to extract a new subroutine, so why not go the final step?
pozorvlak: (Default)
Friday, November 12th, 2010 10:37 am
Last night was Glasgow.pm's second technical meeting, and I finally gave a version of the DVCS-comparison talk I've been thinking about doing for at least a year. I could have done with a lot more rehearsal, something that would have been helped by finishing off the slides before (checks git-log) 5.33pm - the meeting started at 6.15 - but I think it went OK.

The idea of the talk was to
  • explain a little bit of what git's doing under the hood
  • explain how the darcs model differs
  • cut through some of the tangle of conflicting terminology
  • explain why git users think branching's such a big deal, and why darcs users think that cherry-picking's such a big deal (spoilers: the answers are the same!).
I didn't try to explain the details of how you use either system, because in both cases that's fairly easy to work out once you have a decent handle on what's going on under the hood. The audience was mostly full of git users, so I spent more time explaining the darcs model; hopefully darcs users wanting to learn git will also find the slides helpful. For a more detailed introduction to how git works, I strongly recommend mjd's recent talk on the subject, and for more on the (much ameliorated) "exponential merge" problem in darcs see here. Edit: and for the details of how to use git from a darcsy perspective, try the GHC wiki's git for darcs users page.

By the way, there's a useful consequence of git's design which neither mjd's slides nor mine mention, but which I mentioned in the talk, namely the reflog. It's a list of every commit you've visited, and when you were there. This means you can say things like "Show me the state of the repository at 11.15 last Monday"; more usefully, it lets you track down and recover commits that have been orphaned by some botched attempt at history rewriting. This is not a feature that you need often, but when you do need it it's an absolute lifesaver. Git's "directed graph of snapshots" model makes this feature almost trivial to add (and because git's built on a content-addressable filesystem, jumping to those orphaned commits is fast), but darcs' "bag of patches" model makes it much harder to add such a feature (though they're thinking about possible approaches that make more sense than storing your darcs repo in git).

Thanks very much to Eric Kow and Guillaume Hoffman for answering my questions about darcs. Any errors remaining are of course my own.

Anyway, you can get the slides here (slightly cleaned-up). Please let me know if they don't make sense on their own. Or, if you really care, you can browse the history here. To build the TeX source, you'll need the prosper, xypic and graphicx packages installed.

Edit: some of the people on darcs-users were kind enough to comment, prompting some further thought. I've written a followup post in which I respond to some of the things they said.
pozorvlak: (Default)
Friday, November 5th, 2010 11:17 am
Yesterday was a bit of a weird one: I was invited to attend a friend's citizenship ceremony at the last minute, and it threw my schedule off a bit. I managed to get a bit of work done on the SEKRIT WEB PROJECT (all setup stuff; didn't close any actual tickets, but I did write some bash, and did have a look through some of the existing code), and had another brief play with Idris.

Idris doesn't, as far as I can tell, have a name-overloading mechanism like Haskell's typeclasses (the keywords are proof, data, using, idiom, params, namespace, module, import, export, inline, where, partial, syntax, lazy, infix, infixl, infixr, do, refl, if, then, else, let, in, return, include, exists, and with - suggesting Edwin's added a module system and possibly even a syntax-extension mechanism since writing the tutorial...). "But so what?" I thought. "You can treat types as values, and implement Haskell's Eq as a function from a type to its equality function." So saying, I wrote this:
eqFun : {a:Set} -> a -> a -> Bool;
eqFun String = strEq;
eqFun Char = charEq;
eqFun Int = (==);
I loaded it into the interpreter, and got the response
eqs.idr:2:Can't unify Set -> Bool and String -> String -> Bool
Bah. Still, it could be worse: when working on the cat fragment on Day 3, I got the message user error (EPIC FAIL). Protip: don't give any of your programs the name io.idr (or any of the other names used by the standard library) - it looks like "." is an early part of the library search path.

Still, it looks to me like I'm misunderstanding something big: can types only be used as arguments in type declarations? Looking through the tutorial so far, that's the only use I can see. In that case, I'm not sure how you'd go about writing a polymorphic equality-tester. I'd like one of those so I can write a test framework, so I can write some more interesting code... the "prove invariants by constructing inhabitants of types which correspond to logical statements about your code" approach looks very interesting, but also like the kind of thing that will take some time to wrap my head around.
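For what it's worth, the idea I was reaching for works directly in a language where types are ordinary runtime values. Here's a Python sketch of the same type-indexed equality table (purely illustrative, and nothing to do with how Idris actually resolves this; the float case is my own addition, to show that the right notion of equality genuinely differs by type):

```python
import math

# A table mapping a type (a first-class value!) to its equality function,
# mirroring the intent of eqFun: strEq for strings, (==) for ints, etc.
eq_table = {
    int:   lambda a, b: a == b,             # like Idris's (==)
    str:   lambda a, b: a == b,             # like strEq
    float: lambda a, b: math.isclose(a, b)  # tolerant comparison for floats
}

def eq_fun(t):
    """Look up the equality test for type t."""
    return eq_table[t]
```

So `eq_fun(float)(0.1 + 0.2, 0.3)` holds even though `0.1 + 0.2 != 0.3` exactly, which is the whole point of dispatching on the type.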

So all-in-all not a very successful day, coding-wise. But a day in which you try something and fail is more successful than a day in which you don't try.

[Meanwhile, Tom Scott's similar "produce a web thing every week of November" challenge resulted in a text-mining and data-visualisation project that claims to show that the inhabitants of the House of Lords are not, contrary to recent evidence, going senile. I must stop comparing myself to others.]
pozorvlak: (Default)
Wednesday, November 3rd, 2010 07:12 pm
Today I have
  • Submitted some more documentation patches for Idris.
  • Written a fragment of the Unix tool cat in Idris:
    eachLine : (String -> IO ()) -> File -> IO ();
    eachLine f file = do {
    	finished <- feof file;
    	if finished then return II -- II is the sole element of the unit type ()
    	else do {
    		line <- fread file;
    		f line;
    		eachLine f file;
    	};
    };
    
    main : IO ();
    main = do {
    	file <- fopen "fred" "r";
    	eachLine putStrLn file;
    };
    You'll notice that the file to catenate is hard-coded: I haven't yet worked out how to access your program's command-line arguments.
  • Started work on a SEKRIT WEB PROJECT for some friends; however, so far all my time has been spent mucking about with configuration files and installing dependencies rather than actually coding.
    pozorvlak: (Default)
    Tuesday, November 2nd, 2010 02:25 pm
    Today's hack is up. It's to the flashlight app again; now the UI stays in portrait orientation no matter how you rotate the phone. This prevents the pause/resume/create cycle that was killing my Activity and causing the light to go out. Most of yesterday's code is no longer needed and has been taken out again, but that's a good thing, right? :-)

    Sideload the app (if you care - HTC Sense has such a thing built-in) from here.

    Anyway, the way you do this is by adding the attribute android:screenOrientation="portrait" to the activity element in your manifest. This tip came from this StackOverflow post: I tried the more complicated "add a configChanges attribute and override onConfigurationChanged" approach described there, but that resulted in the LED wedging in whatever state it was in when you rotated the phone and not accepting any further changes. God knows what was going on there.

    By the way, does anyone use git add -p much? I tried the "edit this hunk" feature a couple of times, but it told me that my patch wouldn't apply cleanly, and then rejected the whole thing. Also, I'm having trouble uploading files to GitHub's "download" section.

    Edit: and I've had a documentation patch accepted into Idris. Go me!
    pozorvlak: (Default)
    Thursday, October 28th, 2010 11:30 am
    I badly need some better strategies for making sense of large, twisty, underdocumented codebases.

    My current "strategy" is
    • grep for interesting words or phrases
    • find relevant-looking functions
    • look for their call-sites
    • look up definitions of other functions called in those sections of code
    • if I don't understand what a variable's for (almost certain) then look for assignments to it
    • once I've identified all the bits of code that look relevant, stare at them until my eyes cross
    • maybe put in a few printf's, try to make sense of the logs
    • enter procrastinatory spiral of despair
    • stress about losing job
    • make more coffee
    • repeat.
    What do you do?
    pozorvlak: (Default)
    Thursday, August 26th, 2010 03:32 pm
    A couple of days ago, Chris Yocum asked me to help him out with some OCaml code. He's learning OCaml at the moment, and thought he'd practice it by solving a problem he'd encountered in his research on early Irish law.

    In early Irish law, murder was punishable by a fine dependent on the status of the victim. Ten-and-a-half cows per king murdered (no, really!), five for a major nobleman, and so on. At one particular event, we know the total fine levied, but not the number of victims. Chris wanted to find out the range of possibilities. This is an example of the change-making problem: given small change of various denominations, how can you give your customer 73p? I most often encounter this problem with rugby scores: if Wales beat England 67 - 3 (I can dream...), then at seven points for a converted try, five for an unconverted try and three for a penalty, what might have happened?
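The brute-force enumeration is only a few lines in any language. Here's a sketch in Python (Chris was working in OCaml, and this isn't his code; the function name is mine):

```python
def score_breakdowns(total, denominations):
    """All ways of reaching `total` as counts of the given point values."""
    if not denominations:
        return [()] if total == 0 else []
    head, *rest = denominations
    results = []
    # Try every possible count of the largest denomination, then recurse.
    for count in range(total // head + 1):
        for tail in score_breakdowns(total - count * head, rest):
            results.append((count,) + tail)
    return results

# 67 points from converted tries (7), unconverted tries (5) and penalties (3):
ways = score_breakdowns(67, (7, 5, 3))
```

Each result is a tuple of counts, so (8, 1, 2) means eight converted tries, one unconverted try and two penalties. What a match that would be.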

    Be the change you want to make )

    All suggestions for improvement, as ever, gratefully accepted :-)
    pozorvlak: (Default)
    Tuesday, August 24th, 2010 06:10 pm
    I've updated the JSON utilities first described here. Changes:
    • jd outputs JSON, as it should really have done all along. It's now an even thinner wrapper around Makamaka Hannyaharamitu's excellent JSON module :-)
    • jf now has a -p option that pretty-prints its output directly.
    • jf can now handle several field specifiers at once, and preserves enough of the structure of the input to contain them all. Here's an example:
      $ curl -s -upozorvlak:<my password here> http://api.twitter.com/1/statuses/mentions.json \
      | jf -p user/name user/screen_name text
      [
         {
            "text" : "@pozorvlak I have to admit that if you have polymorphism then
                     things like +. are particularly pointless.",
            "user" : {
               "name" : "Christopher Yocum",
               "screen_name" : "cyocum"
            }
         },
         {
            "text" : "@pozorvlak Huh, I still like the safety that static typing gives you.",
            "user" : {
               "name" : "Christopher Yocum",
               "screen_name" : "cyocum"
            }
         }
      ... etc ...
      ]
      The XPathy syntax is due to Leon Timmermans.
    You can download the source or check it out from GitHub. All suggestions (or better, patches!) gratefully received.

    So, what else could I do to make these useful? One option would be to allow a wider range of XPath selectors - currently only the child axis is supported - but I'm not sure how to do that and preserve the structure of the input, which is essential for the kind of thing I use jf for. I could certainly document the programs better. For now, though, I think I'll email Makamaka and ask if he'd be interested in including jf in the JSON.pm distribution.
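In case it helps to see the idea without the Perl, here's a rough Python sketch of the structure-preserving extraction jf does (illustrative only: paths are child-axis-only, as in jf, and the sample record is a cut-down version of the Twitter output above):

```python
def extract(record, paths):
    """Keep only the given child-axis paths, preserving the input's nesting."""
    out = {}
    for path in paths:
        src, dst = record, out
        parts = path.split("/")
        for key in parts[:-1]:            # walk down to the parent dict...
            src = src[key]
            dst = dst.setdefault(key, {})
        dst[parts[-1]] = src[parts[-1]]   # ...and copy the leaf across
    return out

# A cut-down record shaped like the Twitter API output:
tweets = [{"text": "hi", "user": {"name": "Christopher Yocum",
                                  "screen_name": "cyocum", "id": 1}}]
slimmed = [extract(t, ["user/name", "user/screen_name", "text"]) for t in tweets]
```

Because the two "user/..." specifiers land in the same sub-dictionary, the output keeps the input's shape minus the fields you didn't ask for, which is the property I rely on.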
    pozorvlak: (Default)
    Tuesday, July 13th, 2010 04:39 pm
    It occurs to me that not enough people know about TeX's line-breaking algorithm.

    Take a paragraph of text; this one, for instance. Form a graph whose vertices are possible line-breaks, and whose edges are the words (and part-words, in the case of hyphenation) between two possible breaks. For instance, if your text is "Fred loves Wilma", then you'd have four vertices: the beginning, the space between "Fred" and "loves", the space between "loves" and "Wilma", and the end. You'd have six edges: "Fred", "Fred loves", "Fred loves Wilma", "loves", "loves Wilma" and "Wilma".

    Now, here's the clever bit: you decorate each edge with the "badness" associated with fitting those words onto a line, represented as a number between 0 and 10,000. There are various heuristics used to calculate this "badness" score, but basically it comes down to how much you have to squash or stretch the words and the spaces between them, plus extra badness penalties for hyphenation etc.

    Finding the optimal set of line breaks is now a simple matter of applying a standard minimum-weight graph-traversal algorithm. Some further cleverness brings the average-case running time for the algorithm down to linear, and the worst-case to O(n²).
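If you want to play with the idea, here's a toy Python version. The badness formula is a crude stand-in (squared leftover space, with the last line free), nothing like TeX's real one, but the minimum-weight-path structure is the real thing:

```python
def break_lines(words, width):
    """Optimal line breaks: minimum-weight path over possible break points."""
    n = len(words)
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[j] = minimal badness of setting words[:j]
    prev = [0] * (n + 1)       # back-pointers to recover the break points
    for j in range(1, n + 1):
        for i in range(j):     # consider words[i:j] as one line
            length = sum(map(len, words[i:j])) + (j - i - 1)  # words + spaces
            if length > width:
                continue       # line doesn't fit (no overfull boxes here)
            badness = 0 if j == n else (width - length) ** 2  # last line free
            if best[i] + badness < best[j]:
                best[j] = best[i] + badness
                prev[j] = i
    # Walk the back-pointers to recover the lines.
    lines, j = [], n
    while j > 0:
        i = prev[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return lines[::-1]
```

Try it on "aaa bb cc ddddd" at width 6: a greedy breaker would grab "aaa bb" and leave a ragged "cc", while the minimum-weight path gives the more even "aaa" / "bb cc" / "ddddd".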

    Theoretical computer science for the win.

    Unfortunately, TeX doesn't apply this algorithm to the problem of breaking paragraphs into pages, for (AIUI) historical reasons: the machines on which TeX was developed didn't have enough memory to hold more than one or two pages in memory at a time, and so had to write out each page as it was created. The page-breaking algorithm is thus local rather than global, and hence sometimes gives strange results. Wikipedia informs me that optimal page-breaking is NP-complete in the presence of diagrams, but in practice we efficiently find good-enough solutions to NP-complete problems every day, so I don't know why they're still using the local algorithm. Hopefully some TeXpert will pop up in the comments and enlighten me :-)
    pozorvlak: (Default)
    Sunday, December 13th, 2009 03:16 pm
    We've been using git for version control at work for the last couple of months, and I'm really impressed with it. My favourite thing about git is what might be termed its unfuckability: no matter what you do to a repository to fuck it up, it seems, it's always possible to unfuck¹ it, usually simply by keeping calm and reading the error messages. I've managed to lose data with an ill-advised git reset --hard, but that was before I knew about the reflog, and I've always been able to recover "lost" work in every other case. And then there's the rest: cheap local branching², the index, the raw speed, git-bisect, git-gui and gitk (which has rapidly become an indispensable part of my development toolchain)³.

    The merge algorithm pretty much Just Works: we get the occasional merge conflict, sure, but (so far) never without good reason. So I was surprised to learn (from Mark Shuttleworth) of a really simple case where git's merge algorithm does the Wrong Thing ).

    Bazaar obviously gets this right, otherwise Mark Shuttleworth wouldn't have written his post. Commenters there suggest that Darcs gets this right too, but after spending a while looking through the Darcs wiki I discover that I really, really can't be arsed to work out how to do the necessary branching to test it. Hopefully some helpful Darcs user (I know you're still out there...) will be able to post the relevant transcript in a comment. [Edit: I realised belatedly that you don't need branches for this. Transcript here.]

    Overall, I don't think this is a show-stopper, or even a reason to seriously think about switching to another DVCS, but it's certainly worth remembering and watching out for.

    ¹ Why yes, I have been watching the excellent Generation Kill. How did you guess? :-)
    ² If you're not used to systems in which this is possible, it probably sounds really scary and difficult, and like the kind of thing you'd never need or use. It's not. It's actually really simple and incredibly useful. The article that made it click for me, Jonathan Rockway's Git Merging By Example, is no longer online, but I'm sure there are equally good ones out there. You'll probably find Aaron Crane's "branch name in your shell prompt" utility helpful.
    ³ Just to clarify: I'm not saying that Your Favourite VCS sucks, or that it's impossible to get these features using it: I'm just saying that git has them, and they're really, really helpful.
    pozorvlak: (Default)
    Sunday, September 27th, 2009 12:00 pm
    It occurred to me recently that programming comes in several flavours, and that how much I enjoy a programming task depends strongly on which flavour predominates. The flavours I've identified so far, in descending order of how much I like them, are the following:

    Data munging: You have a large mass of data in some fairly generic form, and must fold, spindle and mutilate it with well-understood tools to extract the information of interest to you. Writing Unix scripts is the classic example, but list manipulation in Haskell or Lisp and array manipulation in J or APL have this flavour too.

    Clever algorithms: You have some calculation or task to perform, and brute force has proved inadequate. Now you must apply the Power Of Your Mind to find a cunning and better approach. I haven't actually done very much of this stuff, but I have had to solve a couple of problems of this nature at my current employer, and have another one waiting for me on Monday morning.

    Twisty if-statements, all alike: You want to zom all the glaars, but only if (the moon's in (Virgo or Libra), unless of course the Moon's in Libra and Hearts are playing at home), or (the engine-driver's socks are mismatched xor (the year ends in a seven and the month contains an R)). And you meanwhile want to wibble the odd-numbered spoffles if the Moon's in Libra, Hearts are playing at home and (the year ends in a seven or the engine-driver has mismatched socks). The challenge lies in making sure you've identified all the exceptions and special cases, and in actually coding them up correctly. Not remotely elegant, but better than...

    Doctor X-style wizardry: Making a system do things that it was never intended to do. If you squint at the problem just right you have all the tools you need to do the job, sort of, but it's at best a witty hack and at worst a horrible bodge, and certainly not something you'd want to put much weight on. All non-trivial TeX programming has this nature. This kind of thing is quite fun when you're doing it as a joke or a proof-of-concept, but it's downright horrible when you need to do it to get something important done. But it's still far more fun than...

    API spelunking. You have a candle, a slice of cheese, and a pair of old boots. You need a fork-handle. Is it even possible to construct one out of what you have? Is there a chain of method calls and constructors that will lead you from what you have to what you need? And if you succeed in constructing your fork-handle, is it the fork-handle you need? Or some nearby, but completely inappropriate, fork-handle? This is in some sense dual to data-munging. The worst examples I've encountered have actually not been in relation to documented APIs (though try reading lines out of a zipped text file in Java, if you want to see what I'm talking about), but rather in large crufty systems with complicated and ill-thought out data models. The Law of Demeter addresses this problem, but I have yet to work on a project that sticks to it. I'd settle for some sort of coherence theorem, but that would require coherent API design, which is kind of the problem to begin with...
    pozorvlak: (gasmask)
    Saturday, July 4th, 2009 12:16 am
    By popular demand, a picture of Slava Pestov wailing on guitar:

    Slava Pestov wailing on guitar, in mid-air.


    NB: image may not actually depict Slava Pestov.

    Previously.
    pozorvlak: (Default)
    Friday, June 26th, 2009 12:02 am
    Hi, this post is about Factor, real Factor. This post is awesome. My name is [livejournal.com profile] pozorvlak and I can't stop thinking about Factor. Factor is cool, and by cool, I mean totally sweet.

    Facts:
    1. Slava Pestov is a mammal.
    2. Slava Pestov hacks on Factor ALL the time.
    3. The purpose of Slava Pestov is to flip out and kill people.

    Weapons and Gear:
    1. Concatenative (stack-based, Forth-like) language.
    2. Dynamic types.
    3. First-class functions.
    4. Object-orientation.
    5. Real macros.
    6. Batteries included.
    7. The listener: debugger/REPL/help browser/etc.
    8. Optimizing, self-hosting native-code compiler.
    9. Ninja stars.

    Testimonial:
    Factor programmers can kill anyone they want! Slava cuts off heads ALL the time and doesn't even think twice about it. The guys on #concatenative are so crazy and awesome that they flip out ALL the time. I heard that littledan was eating at a diner. And when some dude dropped a spoon littledan GC'ed every object in memory. My friend Mark said that he saw a Factor programmer totally uppercut some kid just because the kid opened a gl-window.

    And that's what I call REAL Ultimate Power!!!!!!!!!!!!!!!!!!

    If you don't believe that Factor programmers have REAL Ultimate Power you better git clone their repository right now or they will chop your head off!!! It's an easy choice, if you ask me.

    Writing OpenGL code in Factor:
    Step 1: Look for some Factor OpenGL documentation.
    Step 2: Fail to find any.
    Step 3: Get really super pissed.
    Step 4: Get some C++ OpenGL documentation instead.
    Step 5: Put something slippery on it, like butter or cream.
    Step 6: Bend it to fit (this is crucial).
    Step 7: Keep folded and insert into listener hard.
    Step 8: Push hard until you can't see it.
    Step 9: Wait.
    Step 10: Die.

    If you succeed, everybody will be like “Holy Crap!”

    Update: by popular demand, a picture of Slava wailing on guitar.
    pozorvlak: (Default)
    Saturday, June 6th, 2009 10:33 pm
    I've got a problem: I've had an idea I want to write about, but it depends on two or three other ideas I wanted to write about but never got around to. So I'm going to write this post in a top-down, Wirthian fashion, stubbing out those other posts: maybe, if there's enough interest, I'll come back and write them properly and replace the stubs here with links. OK with everyone?

    Right, on with the motley.

    Stub post no. 1
    Extreme Programming (XP), whether intentionally or unintentionally (and my money is on "intentionally, but try getting them to admit it") is really good for getting work out of people who are bright but have short attention spans. This is a Good Thing. It's most obvious in the case of pair programming - imagine saying to your partner "Y'know, this is kinda hard. Let's surf Reddit for a while" - but actually, most XP practices have this benefit. Short feedback cycles, concrete rewards, definite "next moves" (given by failing tests and the "simplest thing that could possibly work" approach) - all of these things have the effect of maintaining flow and reducing the incentive to slack off. It's programming as a highly addictive game. Dynamic languages work well with this approach, because they make it as easy as possible to get something up and running, and to test the things you've written.

    Stub post no. 2
    Haskell is the opposite. It encourages deep thinking, and everything about the language makes it as hard as possible to get something running unless it's just right. Screw up, and you're not presented with a running program and a failing test that you can run in the debugger; you're presented with an unfriendly compiler message and a bunch of dead code that you can't interrogate in any meaningful way. After a morning hour few minutes of this (usually involving no small loss of hair), the consultant Barbie that lives in my head invariably says "Statically-typed pure functional programming is hard. Let's go shopping!" And I, fed up and mindful of my scalp, agree. This is why I am no good at Haskell.

    Stub post no. 3
    Everything I read by or about the climber Dave MacLeod (blog) makes me more inspired by him. Partly for his visionary climbs, but mostly for his approach to choosing, training for and tackling really hard problems, which I think should generalise really well, if only I could put my finger on what exactly it is. It helps that he's a really friendly, pleasant guy in person. Check out the BAFTA-winning film Echo Wall that he and his wife made about his preparation for his first ascent of the trad route of the same name. If you're in Edinburgh, you can borrow my DVD, I'm positively eager to lend it out.

    Anyway, something Dave wrote about training (which I can't be arsed to find right now) said that in order to train effectively, you have to be constantly pushing yourself in some way: either in terms of power, or stamina, or technique, or fear¹, or whatever. You have to find your comfort zone and then consciously go beyond it, in whichever direction you wish to improve. As you improve, your comfort zone shifts, and you need to keep pushing yourself harder and harder in order to continue to improve. But (and here's the interesting bit), he said that if you do this for long enough, your whole conception of comfort shifts, and you start to feel uncomfortable if you aren't pushing yourself in some way.

    So, here's the thought I had. Maybe all the Haskellers have been training themselves Dave MacLeod-stylee, and now only feel comfortable pushing themselves really hard, and that's why they like using such a bloody difficult language.

    ¹ About a year and a half ago, I was seconding a route called Dives/Better Things (Hard Severe) in Wales, and got to a bit that was a bit too hard and a bit too scary. I floundered around for a bit, getting more and more freaked out, and then said to myself "What would Cale Gibbard do? He'd pause for a bit, think really hard, work out exactly what to do next, and then do that. OK. Do that, then." I have no idea if Cale climbs, but it did the trick. Cale, if you're reading, thanks for that :-)
    pozorvlak: (Default)
    Saturday, December 20th, 2008 05:34 pm
    Whenever I spot myself doing something repeatedly on a computer, I try to automate it. Consequently, I have a directory full of random little scripts, and a long list of aliases in my .bash_profile, all of which arose because I had to do something lots of times. Often, these scripts and aliases call other ones which I wrote earlier, as I spot higher-level patterns in what I'm doing. But every time, I have to fight against the False Laziness that tells me not to bother, I'll only have to do it once or twice more, and it's not worth the effort of doing it right.

    So yesterday I added the following line to my .bash_profile:
    alias viprof="vim ~/.bash_profile && source ~/.bash_profile"
    We'll see how it works out.