pozorvlak: (Default)
Monday, November 11th, 2013 03:21 pm

We've recently moved house, to a refitted Victorian tenement flat in Leith. We're renting it from a lovely couple from Continental Europe, and this I suspect is the reason for one of the few things that annoy me about the place: that every sink in the flat is fitted with mixer taps. Ordinarily this is merely a mild irritant, but occasionally (as happened this morning), they drive me into a towering rage. Let me explain...

I'd taken out the contents of the food recycling bin, but a foul-smelling brown gunge still coated the insides of the bin itself. I was therefore filling the bin with a mix of bleach and hot water, the latter from the bathroom sink. The sink was too small to fit the bin in, so I was filling a pint cup with hot water from the sink and tipping it into the bin. Fortunately it's quite a small bin. My attention lapsed for a moment, though, and the water overflowed, mildly but painfully scalding my left hand. No problem: I could keep filling the bin with hot water using my right hand, while holding my left hand under the cold tap for as long as it took to cool down. Except, oh, wait, mixer taps. Dammit. So I had to turn off the hot tap, put down the cup, turn on the cold tap, and wait uselessly for however long it took for my hand to stop hurting.

Except I had forgotten about the other problem with mixer taps: hysteresis. When you turn off the hot water in a mixer tap system, you see, you don't reset the tap to a safe state: a slug of hot water remains in the pipe, lying in wait for the unwary. And so when I put my sore hand under the tap and turned on the cold water, I was instead treated to a high-pressure dose of painfully hot water onto the already painful area.

And then a few minutes later, while mentally composing this blog post and muttering curses against the inventors of mixer taps and their descendants, yea, unto the seventh generation, the same thing happened to me again.

In conclusion: fuck mixer taps. Fuck them right in their stupid single non-parallelisable pain-causing water outlets.

This post is dedicated to [personal profile] elmyra, who labours under the misapprehension that mixer taps are not only a superior technology, but so obviously a superior technology that the only possible reason they have not been universally adopted can be ignorance of their existence.

pozorvlak: (polar bear)
Thursday, September 15th, 2011 12:11 am
There's a petition up on the British Government's e-petitions website, called "teach our kids to code". Despite being plugged by geek luminaries like Ben Goldacre, it's received barely a thousand signatures at the time of writing. I think this issue is much more important than that.

Goldacre says "heaven knows where our successive governments think the next generation of nerds is going to come from". But that's not the point. Call us nerds, geeks, hackers or Morlocks, the technological priesthood of humanity isn't going anywhere: this stuff is so goddamn fascinating that enough people will teach it to themselves for the wheels to keep turning. I don't know any programmers who weren't at least partly self-taught. Today's proto-hackers don't have the easily-programmable 8-bit micros that programmers of my generation cut their teeth on, but they've got something much better: a full open-source software stack whose source they can read, resources like Project Euler and Hackety Hack to help them take their first steps, and a huge community of open-source hackers to learn from. The petition talks about narrowing the appalling gender gap in IT, and that's a genuinely important issue, but it's not the main reason we should be teaching coding in schools.

John Graham-Cumming very nearly nails it:
I fully support that idea because I think that 'programming thinking' is an important skill that needs to be taught. Children first need to learn to be literate, then they need to learn to be numerate and finally they need to learn to be 'algorithmate' (yes, I just made that word up)... It's obvious to most people that illiteracy and innumeracy are problems to be tackled at school, but it's not obvious that we are now living in a world where logical and algorithmic thinking are very, very important.
Yes, we need some people who know how to instruct the machines if we want to have a viable economy. But the real point is that we currently live in a pre-algorate society¹. People who object that "Kids no more need to know how to code than they need to know car maintenance or how to build a table - useful for some, pointless for the majority" (a quote from one of JGC's commenters) are missing this point entirely. Computers are not like cars. Cars have a well-defined purpose, for which they have been ruthlessly optimised. Computers are more like writing. Writing is a means for expressing and conveying ideas (and here's the key bit: any ideas); computers are machines for executing instructions (and here's the key bit: any instructions). Saying that only a few people need to code today is like saying that only a few people needed to read and write in the 1500s². Their whole society was set up with the assumption that very few people could read: pub signs were carved representations of the pub's name, for instance. Everyone got by, but our society is incomparably richer since the advent of mass literacy. We're in the same situation with regards to algorithms today: a technological priesthood tries to do the algorithmic thinking for everyone, but the shiny toys they produce for non-algorists are often more hindrance than help. And unsurprisingly they miss a lot of opportunities for algorithmic assistance: ever see someone repetitively copying-and-pasting something over and over in Word? Do you feel the same almost physical pain that I get when I see someone doing drudge-work on a computer? In a mass-algorate society, that Word user would realise immediately that they were doing work that the computer could be doing for them, and the software would be set up to make it easy for them to automate their drudgework.
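To make that concrete: suppose (a hypothetical example, invented names and all) our Word user has a list of names, one per line, and needs a form paragraph for each. That's not a job for copy-and-paste; it's a three-line loop:
#!/usr/bin/perl
# Invented example: read names from standard input and emit a form
# paragraph for each one, instead of pasting and editing in Word.
use strict;
use warnings;

while (my $name = <STDIN>) {
    chomp $name;
    print "Dear $name,\n\nThank you for your kind donation.\n\n";
}
Trivial, yes - but the point of mass algoracy is precisely that everyone would see that it's trivial.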


A very, very old description of an algorithm for calculating volumes. From the British Museum's Babylonian collection.


I want to live in a mass-algorate society. I want to use the software that a mass-algorate society would develop: sane and hackable, because everyone would know what computers fundamentally do and how to make them do what they want. I also expect that a mass-algorate society's software would have discoverable, predictable APIs, because 99% of coders would not be specialist programmers and would have better things to do than read endless documentation. But again, that's not really the point; the point is that I expect mass algoracy to have knock-on effects at least as dramatic as those of mass literacy.

Where do you want to go today?

Edit: followup post; Reddit discussion.




¹ "Algorithmate" strikes me as rather a mouthful. I propose "algorate", because it's shorter, easier to say, and a backhanded compliment to the man who, after all, was the first political leader to recognize the importance of the Internet and to promote and support its development. Poor Muhammad ibn Musa al-Khwārizmī al-Majousi al-Katarbali has already lost 14 of his name's 18 syllables by the time we've got to "algorithmate", two more seems like a relatively minor loss.

"Algorist", by the way, isn't a neologism: it's a long-established term meaning "one who invents algorithms".

² Car maintenance isn't analogous to coding; car maintenance is analogous to system administration, which I agree most users shouldn't have to bother with.
pozorvlak: (Default)
Monday, July 4th, 2011 10:10 pm
Occasionally, when I link to my post "complex numbers made straightforward", people tell me "but students are used to extending the number line with fractions and surds and negatives already, you should just tell them that you're extending it again in an analogous way!"

So here's an example of someone who was hopelessly confused by complex numbers (or, more precisely, someone who thought they were "bullshit"), whose confusion was instantly cured by an explanation exactly along the lines in my post. Not by my post, alas: today must have been one of those rare days when I don't check Reddit enough.

Ideally we'd want more: it would be nice to know whether there exist people for whom "a complex number is an ordered pair of reals" doesn't work. Probably there are. But this at least shows us that there is a problem with the "by analogy with the extension from N to Z" approach, and not just among the mathematically clueless. In fact, I'd argue that their very confusion shows that they have a better handle on the crucial mathematical notion of well-definedness than their teachers.
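For concreteness, the construction at issue - the ordered-pair one, which goes back to Hamilton - runs as follows: a complex number is a pair (a, b) of reals, added and multiplied by

(a, b) + (c, d) = (a + c, b + d)
(a, b) × (c, d) = (ac − bd, ad + bc)

Then (0, 1) × (0, 1) = (−1, 0); write i for (0, 1), and you have your square root of −1 without conjuring any mysterious new numbers from the void.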
pozorvlak: (Default)
Saturday, July 2nd, 2011 06:37 pm
I'm currently running a lot of benchmarks in my day job, in the hope of perhaps collecting some useful data in time for an upcoming paper submission deadline - this is the "science" part of "computer science". Since getting a given benchmark suite built and running is often needlessly complex and tedious, one of my colleagues has written an abstraction layer in the form of a load of Makefiles. By issuing commands like "make build-eembc2", "make run-utdsp" or "make distclean-dspstone" you can issue the correct command (build/run/distclean) to whichever benchmark suite you care about. The lists of individual benchmarks are contained in .mk files, so you can strip out any particular benchmark you're not interested in.

I want to use benchmark runs as part of the fitness function for a genetic algorithm, so it's important that it run fast, and simulating another processor (as we're doing) is inherently a slow business. Fortunately, benchmark suites consist of lots of small programs, which can be run in parallel if you don't care about measuring wallclock seconds. And make already has support for parallel builds, using the -j option.

But it's always worth measuring these things, so I copied the benchmark code up onto our multi-core number crunching machine, and did two runs-from-clean with and without the -j flag. No speedup. Checking top, I found that only one copy of the simulator or compiler was ever running at a time. What the hell? Time to look at the code:
TARGETS=build run collect clean distclean

%-eembc2: eembc-2.0
        @for dir in $(BMARKS_EEMBC2) ; do \
          if test -d eembc-2.0/$$dir ; then \
            ${MAKE} -C eembc-2.0/$$dir $* ; \
          fi; \
        done
Oh God. Dear colleague, you appear to have taken a DSL explicitly designed to provide parallel tracking of dependencies, and then deliberately thrown that parallelism away. What were you thinking?¹ But it turns out that Dominus' Razor applies here, because getting the desired effect without sacrificing parallelism is actually remarkably hard... )
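For flavour, here's a minimal sketch of one way to get the parallelism back - not necessarily the approach behind the cut, and assuming BMARKS_EEMBC2 comes from the .mk files mentioned above. The idea is to give every (action, benchmark) pair its own phony target, so that make -j has independent graph nodes to schedule:
# Hypothetical sketch: one phony target per (action, benchmark) pair.
define eembc2_rule
.PHONY: $(1)-eembc2 $(addprefix $(1)-eembc2-,$(BMARKS_EEMBC2))
$(1)-eembc2: $(addprefix $(1)-eembc2-,$(BMARKS_EEMBC2))
$(addprefix $(1)-eembc2-,$(BMARKS_EEMBC2)): $(1)-eembc2-%: eembc-2.0
	@if test -d eembc-2.0/$$* ; then $${MAKE} -C eembc-2.0/$$* $(1) ; fi
endef
$(foreach t,$(TARGETS),$(eval $(call eembc2_rule,$(t))))
Now "make -j8 run-eembc2" can keep eight simulators busy at once. The subtleties - recipes that must still start with a tab even inside define, the $$-escaping, the fact that phony targets don't match pattern rules (hence the static pattern rule) - give a taste of why this is harder than it looks.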

Doing it in redo instead )

Time to start teaching my colleagues about redo? I think it might be...

¹ He's also using recursive make, which means we're doing too much work if there's much code shared between different benchmarks. But since the time taken to run a benchmark is utterly dominated by simulator time, I'm not too worried about that.
pozorvlak: (Default)
Wednesday, March 2nd, 2011 01:15 pm
A couple of months ago, I decided to re-evaluate my long-held hatred for Radiohead and their works, by sitting down and listening to their first five albums in a row. I learned that my teenage self was wrong: Radiohead are capable of producing good music. However, they don't seem to be capable of doing so consistently. Some of their albums I genuinely (in the main) liked: Pablo Honey is a decentish soft-rock album, Kid A was enjoyable enough, and Amnesiac was actually quite good, apart from a couple of tracks I absolutely loathed. If I'd been able to approach The Bends on my own terms, I might have quite liked it: it's not Physical Graffiti or Nevermind or Kind of Blue or Rounds, but, you know, there are some good bits in there. As it is, though, my conditioned flinch reaction to the sound of Thom Yorke's voice - like that of a puppy that's been kicked too many times - makes listening to it really rather stressful. And OK Computer is just a trainwreck.

[While I'm aware that my opinions on music are largely subjective, I shall be stating them as facts throughout this post. This will hopefully spare us a tedious debate about the philosophy of aesthetics and keep my prose crisper.]

Radiohead have just released a new album, and so it's time to finish off the project and listen to their albums Hail to the Thief, In Rainbows and The King of Limbs. Afterwards, if I still have time and energy, I'll listen to the Easy Star All Stars' Radiodread, a reggae version of OK Computer. As before, I'll put up comments under each track name as I get to it, so check back throughout the afternoon.

Hail to the Thief )

In Rainbows )

The King of Limbs )

Radiodread )

Overall conclusions )
pozorvlak: (Default)
Friday, January 7th, 2011 12:11 pm
Today I shall be mostly listening to Radiohead, in an attempt to find out if they're as bad as I remember.

While it's not 100% true to say that I dislike all things which are popular, it is true that the more ubiquitous something is, the more likely I am to take violently against it. My answer to "Blur or Oasis?" was "Get the hell away from me before I cut you." And Radiohead were everywhere when I was at school. Playing out of every other window, lyrics scrawled on walls, you name it. As one spotty oik, my entire year (except me) went into town and bought OK Computer on the day it was released. But as far as I could see it was just tuneless dirge played by wankers who hated their fans and couldn't even be classy about saying so. I have no problem with depressing music - I listened to quite a lot of depressing music back then, being a teenager and all - but Radiohead were depressing and rubbish.

But it's thirteen years later, and Radiohead are still inexplicably popular, so perhaps I was the one who was wrong. Sometimes you're just not ready for certain music, and will enjoy it more later. And music's always easier to appreciate when it isn't assaulting you every time you turn a corner.

So today I'm going to listen to a bunch of Radiohead albums in chronological order, make notes on each track, and post them on my blog for your amusement.

Here's what I'll be listening to: notes will appear throughout the afternoon. )

Overall verdict )

[livejournal.com profile] wormwood_pearl is now teaching herself to play Karma Police on the ukulele :-(
pozorvlak: (Default)
Wednesday, November 17th, 2010 09:59 am
Here's a pattern I see a lot in imperative languages:
sub doStuff {

    # do this
    code
    code
    code
    more code

    # do that
    code
    code
    code
    code
    more code

    # do t'other
    code
    code
    code
    code
}
That is, a long subroutine with different sections separated out by vertical whitespace.

I try to live by the following rule:
If you are ever tempted to insert vertical whitespace in the middle of a subroutine to separate out different sections, just put the sections in their own freaking subroutines already.
The usual reasons to strive for short subroutines apply:
  • It's very hard to get an overview of a block of code that doesn't fit into one screen.
  • Long subroutines make it much harder to follow the dataflow: Where was this variable defined? When was it last set? When will it be used again?
  • The subroutine is the basic unit of re-use; if a section of code isn't isolated into a subroutine, you can't use it again elsewhere.
  • For maximum comprehensibility, a subroutine should do one well-defined task. If your subroutine's long, it suggests that it's doing lots of things.
The hard part of splitting up large subroutines is usually finding the right subsections to extract; but if you feel a temptation to add vertical whitespace, that's a great big hint that this is one such place. Take that comment you were tempted to add, change the spaces to underscores, and you've even got the name of your new subroutine¹. If you're using a modern IDE like Eclipse or Padre, extracting the new subroutine is a matter of selecting some text and clicking "Extract Method"; but even in vim it should be pretty straightforward².
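Applied to the doStuff example above, the end state looks something like this (names invented by mechanically transforming the comments):
sub doStuff {
    do_this();
    do_that();
    do_tother();
}

sub do_this   { ... }   # formerly the "# do this" section
sub do_that   { ... }   # formerly the "# do that" section
sub do_tother { ... }   # formerly the "# do t'other" section
(The ... is Perl 5.12's "unimplemented" operator, standing in for the moved code.) doStuff now reads as a table of contents, and each section can be understood, tested and reused on its own.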

So please, take that hint. Or I shall track you down and strangle you.

¹ As any fule kno, there are only two really hard problems in computer science: cache invalidation, naming things, and off-by-one errors.
² I once had a colleague who'd not only write subroutines that consisted of commented whitespace-separated sections, but who put each section into its own lexical scope. At which point, you've done 90% of the work to extract a new subroutine, so why not go the final step?
pozorvlak: (Default)
Tuesday, July 20th, 2010 03:02 pm
You may have seen this image, on a poster or a fridge magnet or a birthday card or some other flat surface:

Beer: helping ugly people have sex since 1862!


You know what annoys me about it? Beer hasn't been helping ugly people have sex since 1862 - it's been helping ugly people have sex since at least 7000 BC¹, and probably as far back as 9000 BC. Our oldest written recipe for beer dates from 1800 BC, and even older beers have been reconstructed from chemical evidence.

Now, you can argue that much of the beer we know today owes a lot to industrial and microbiological advances in the latter half of the 19th century - Saccharomyces carlsbergensis was isolated in 1883, for instance - but why 1862, for God's sake?

¹ Not least by keeping them alive long enough to make it through puberty.
pozorvlak: (Default)
Sunday, September 27th, 2009 12:00 pm
It occurred to me recently that programming comes in several flavours, and that how much I enjoy a programming task depends strongly on which flavour predominates. The flavours I've identified so far, in descending order of how much I like them, are the following:

Data munging: You have a large mass of data in some fairly generic form, and must fold, spindle and mutilate it with well-understood tools to extract the information of interest to you. Writing Unix scripts is the classic example, but list manipulation in Haskell or Lisp and array manipulation in J or APL have this flavour too.
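(A made-up but representative example: "total up the second column of this log file" is a one-liner,

perl -lane '$total += $F[1]; END { print $total }' access.log

and a data-munging session is mostly things like that, strung together with pipes.)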

Clever algorithms: You have some calculation or task to perform, and brute force has proved inadequate. Now you must apply the Power Of Your Mind to find a cunning and better approach. I haven't actually done very much of this stuff, but I have had to solve a couple of problems of this nature at my current employer, and have another one waiting for me on Monday morning.

Twisty if-statements, all alike: You want to zom all the glaars, but only if (the moon's in (Virgo or Libra), unless of course the Moon's in Libra and Hearts are playing at home), or (the engine-driver's socks are mismatched xor (the year ends in a seven and the month contains an R)). And you meanwhile want to wibble the odd-numbered spoffles if the Moon's in Libra, Hearts are playing at home and (the year ends in a seven or the engine-driver has mismatched socks). The challenge lies in making sure you've identified all the exceptions and special cases, and in actually coding them up correctly. Not remotely elegant, but better than...

Doctor X-style wizardry: Making a system do things that it was never intended to do. If you squint at the problem just right you have all the tools you need to do the job, sort of, but it's at best a witty hack and at worst a horrible bodge, and certainly not something you'd want to put much weight on. All non-trivial TeX programming has this nature. This kind of thing is quite fun when you're doing it as a joke or a proof-of-concept, but it's downright horrible when you need to do it to get something important done. But it's still far more fun than...

API spelunking: You have a candle, a slice of cheese, and a pair of old boots. You need a fork-handle. Is it even possible to construct one out of what you have? Is there a chain of method calls and constructors that will lead you from what you have to what you need? And if you succeed in constructing your fork-handle, is it the fork-handle you need? Or some nearby, but completely inappropriate, fork-handle? This is in some sense dual to data-munging. The worst examples I've encountered have actually not been in relation to documented APIs (though try reading lines out of a zipped text file in Java, if you want to see what I'm talking about), but rather in large crufty systems with complicated and ill-thought-out data models. The Law of Demeter addresses this problem, but I have yet to work on a project that sticks to it. I'd settle for some sort of coherence theorem, but that would require coherent API design, which is kind of the problem to begin with...
pozorvlak: (gasmask)
Friday, December 12th, 2008 08:26 am
Dear Facebook,

You know I'm not single. If you track IP addresses, which I'm sure you do, then you also know that I share a computer with my girlfriend, and thus even if I were inclined to stray, I wouldn't do so from here.

So could you please stop showing me adverts for hot singles I could meet in my area?
pozorvlak: (gasmask)
Thursday, March 6th, 2008 11:25 am
A comparison of the Haskell argmunger with its Arc equivalent (or even its Perl equivalent) should make something clear. It's been claimed that Haskell doesn't need macros, because most of the things that Lispers need macros for can be done in Haskell using lazy evaluation and so on. But the argmunger example makes it clear that there are things for which Lispers don't need macros and Haskellers do - if not Template Haskell macros, then SYB or DrIFT or some other macro-like facility.

Lispers often say that coding in other languages feels like manually generating macroexpansions. I'm nobody's idea of a Lisp hacker, but I often feel like this when I'm writing Haskell. Which is why I'm learning about Template Haskell even though I'm not yet au fait with things like do-notation and monad transformers, which most Haskell experts would probably consider more basic. Would you staunch the blood from your severed arm before or after you'd walked to Moscow?
pozorvlak: (gasmask)
Monday, January 28th, 2008 04:40 pm
Why, in this day and age, does it take two working days and £27 to send a few hundred dollars from Britain to the US?
pozorvlak: (Default)
Wednesday, December 19th, 2007 08:14 pm
An email from the people to whom I applied:
I missed the point entirely, it seems :-( )
I wouldn't mind so much if they'd asked for a "Personal Statement" or something - to me, the term "Research Proposal" suggests that it should mainly be about the research that you propose to undertake. But apparently not. I've got until, er, tomorrow to submit another one :-(

In other news, hearken ye programmers unto Steve Yegge's latest drunken blog rant. I've been having similar thoughts myself, related to Bad Things that have happened to me with big codebases: it's a large part of why I'm so interested in the APL family*. But I'd like to stick my neck out and say that the way Steve feels about Java is the way I feel about Haskell.

* I note in passing that that page comes up first in a Google search for "APL lesson" - epic win!
pozorvlak: (pozorvlak)
Monday, November 26th, 2007 12:53 pm
For reasons that are increasingly unclear to me, I've been using the TeX package XY-Pic for the commutative diagrams (of which there are many) in my thesis and papers. If there is one and only one Right Way to do something, you can pretty much guarantee that XY-Pic will:
  1. do something else by default, producing horribly ugly output;
  2. only do the Right Thing if you utter some cryptic and fragile incantation in a language that looks like the bastard offspring of APL and the Black Speech of Mordor¹;
  3. hide the details of said incantation away somewhere in the depths of the voluminous, poorly-indexed, verbose and maddeningly unclear manual.
As should be clear, I'm not a huge fan.

One of its more annoying characteristics is the way it handles tails of arrows, which (by default) are drawn sticking out behind the arrow's starting point, so the tail invariably collides with whatever the arrow is pointing away from. Like this:
That was produced by the code \xymatrix{ A \ar@{>->}[r] & B }, which, while not terribly clear, is the obvious thing to try, and the only thing you'll know how to do unless you've wasted days reading the manual.

Fortunately, there is a fix )
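(The incantation I've seen suggested elsewhere - no promises it's the same one as behind the cut - is to sneak a space into the arrow style, as in \ar@{ >->}[r], which persuades XY-Pic to set the tail clear of the source object instead of on top of it.)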

¹ Thinking about it, XY-Pic is actually kinda agglutinative, much like the Black Speech...
pozorvlak: (gasmask)
Monday, October 8th, 2007 03:24 pm
In a world such as this one, filled with war, famine, impending ecological collapse and Celtic fans, it really ought to take more to horrify me than this. But I guess I'm shallow that way.

Before diving into the actual text, let's just take a step back and consider the implications: the University of Limerick was teaching its "introduction to programming" course in COBOL, in 2002 (and as far as I can see, still is).

[Non-geeks: COBOL is the Common Business-Oriented Language, an ancient language used on dinosaur mainframes with about as much style and joie de vivre as the name implies. The Jargon File notes that in hacker circles its name is "synonymous with evil", and Edsger Dijkstra famously commented that "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense."]

Read more... )
pozorvlak: (gasmask)
Monday, October 1st, 2007 09:18 pm
Implementation-defined languages come in for a lot of flak. Users of certain languages will point to their language's standards document as if it were self-evidently proof of greater inherent worth: a language that's defined by its implementation, it is implied, is always going to be a bit non-U. It may train itself out of its dropped H's and tendency to mispronounce the word "scone"¹, but it will never have the true breeding that comes from a standards document.

Which is daft, because implementation-defined languages have some important advantages )

By the way, I'm not saying that all specifications are bad (a good one is an excellent thing to have) or that specification-defined languages have no advantages - I'm assuming that the advantages of specification-defined languages are so well-rehearsed that I don't need to repeat them. Anyone needing further enlightenment is encouraged to go to comp.lang.scheme and say "I think spec-defined languages suck!" :-)

Now for the second part of my rant: Haskell, as we know it today, is an implementation-defined language, defined by the latest CVS snapshot of GHC. "But what about the Haskell Report, and the Revised Report, and Haskell prime, and Hugs, and, er, those other implementations?" I hear you cry. Well, what about them? Every time I asked some question, it seemed, the answer would begin with "first download the latest version of GHC, then turn on these extensions..." A spec for a language that people don't actually use is - well, not useless, if it's sufficiently close to a language that they do use, but it ain't a specification as I understand the term. Everyone in the Haskell community seems to track the latest version of GHC for its new features. This is not spec-like behaviour. Now, as I said above, I don't think that implementation-defined languages are bad: quite the reverse. I just think it would save everyone a lot of bother and false superiority if the community could admit this to itself and to outsiders.

¹ The O is short: "scone" rhymes with "gone", not "groan". Unless you're talking about the town in Scotland, when it rhymes with "boon". People who insist on mispronouncing this word around me will have their scones taken away from them and donated to the Pozorvlak Pedant-Feeding Drive.
pozorvlak: (Default)
Thursday, March 29th, 2007 03:19 pm
chromatic, in a conversation about domain-specific languages in Perl 6, put something I've been trying to say for a while into words that would never quite come for me.
I can decipher and fix bad Ruby code, for example, because I know the underlying language.

You know Ruby's syntax maybe, but who says you know the domain of the business problem the code attempts to solve?

I maintain that that is much more important than syntax--and if you have undisciplined, barely-competent monkeys who cannot or will not write maintainable code, then your biggest risk is not that they might use powerful language features to do bad things that are hard to unravel.

Your biggest risk is that they will do anything. It doesn't matter what they do, if they have access to your source code. They may never know about symbol tables or run-time code evaluation or method aliasing or macros or code generation or monkey patching, but you can be sure that they'll pull a stupid trick such as writing files and reading them in line by line because they don't know how to use arrays.

They're barely competent! They're undisciplined! They're monkeys! Want to fix your coding problems? Start by getting rid of monkeys, not by complaining about powerful tools that competent developers might be able to use productively.
From what I've seen, all the safety features in the world will not prevent monkeys from shooting themselves and others in the foot. Paw. Whatever. Whereas good, disciplined programmers will be able to get great mileage out of supposedly dangerous features, by using them correctly (note that "using them correctly" includes not using them in cases where it's not a good idea). I might, of course, be wrong, but this has been my experience thus far, and this is why I'm finding it such a big adjustment to come to a language community that seems to believe the opposite.

[Remember that crack I made a while ago about how, in the limit case, the "guarantees over expressivity" philosophy would lead to preferring guaranteed termination over Turing-completeness? It seems someone actually does believe that. Disclaimer: I've only skimmed the paper.]

By the way, I came across chromatic's post from Piers Cawley's post DSLs, Fluent Interfaces, and how to tell the difference, which says something I've long suspected: that this whole (embedded) DSL business is mostly just a buzzword slapped on a lot of fairly straightforward and common practices.
pozorvlak: (Default)
Tuesday, March 13th, 2007 01:08 pm
brian d foy says that you can't be an effective advocate for a language unless you can think of five things that you hate about it off the top of your head, the reason being that if you can't, then you don't know enough about the language to advocate it effectively. So, just to go one (or five) better, here are ten things I hate about Perl )
This is not to say that I hate Perl: far from it. I think it's a wonderful, fun language, with a great community around it, producing some insanely cool software (CPAN might be considered the world's premier laboratory for module and language-extension design). I am continually surprised that Perl has such a bad press, and gets such short shrift from the language-design community. You might expect that a language that breaks almost every accepted precept of language design would simply be a bad language, and no fun at all to use. Yet Perl does this, and has a fanatical community of users, some of them truly excellent hackers. This, it seems to me, is a datum point of enormous importance.

But anyway, I'd like to ask this question to the hackers reading this. What five things do you hate most about your favourite language?

* Unless you've set $[ to 1 - thanks, [livejournal.com profile] paddy3118!
pozorvlak: (Default)
Saturday, March 10th, 2007 12:57 pm
A while ago I wrote a post about what appeared to be the endemic pusillanimity of the Haskell community, tried to find more charitable explanations for my observations, and, I think, mostly succeeded. Unfortunately, the example I gave turned out not to be a good one: it turns out that it is possible to have integer-parametrized types in Haskell (though you have to go through Hell and high water to do so). But the other day, I remembered a much better example )
pozorvlak: (Default)
Thursday, January 25th, 2007 06:10 pm
Many times in the course of my Haskell apprenticeship, I've had conversations of the following form:

Me: I want to do X in Haskell, but the obvious thing doesn't work. How do I do it?
Sensei (usually Duncan or [livejournal.com profile] totherme): Oh no, you can't do that! If you could do that, then you could have a transperambic Papadopolous twisted asiatic sine-curve freezeglargle Windhoek chromium bebop.
Me: Er, come again?
Sensei: well, [note of horror creeps into voice] it would be really bad! You could get [thing that doesn't sound all that bad, really].
Me: But that's a good thing/Nah, that would never happen/would never happen in real-world code/that's a price I'd gladly pay.

Here's why I think this happens - Haskellers prefer guarantees to expressivity )

So, Haskellers: am I right? More importantly, is this widely known? Is it explicitly written down anywhere? (The second article comes very close, but I don't think it gets there in full generality). Because it strikes me that it would be a good thing to tell newbies as early as possible, and also a good thing to have it explicit so it can be appealed to, or (better) debated.