pozorvlak: (Default)

We've recently moved house, to a refitted Victorian tenement flat in Leith. We're renting it from a lovely couple from Continental Europe, and this I suspect is the reason for one of the few things that annoy me about the place: that every sink in the flat is fitted with mixer taps. Ordinarily this is merely a mild irritant, but occasionally (as happened this morning), they drive me into a towering rage. Let me explain...

I'd taken out the contents of the food recycling bin, but a foul-smelling brown gunge still coated the insides of the bin itself. I was therefore filling the bin with a mix of bleach and hot water, the latter from the bathroom sink. The sink was too small to fit the bin in, so I was filling a pint cup with hot water from the sink and tipping it into the bin. Fortunately it's quite a small bin. My attention lapsed for a moment, though, and the water overflowed, mildly but painfully scalding my left hand. No problem: I could keep filling the bin with hot water using my right hand, while holding my left hand under the cold tap for as long as it took to cool down. Except, oh, wait, mixer taps. Dammit. So I had to turn off the hot tap, put down the cup, turn on the cold tap, and wait uselessly for however long it took for my hand to stop hurting.

Except I had forgotten about the other problem with mixer taps: hysteresis. When you turn off the hot water in a mixer tap system, you see, you don't reset the tap to a safe state: a slug of hot water remains in the pipe, lying in wait for the unwary. And so when I put my sore hand under the tap and turned on the cold water, I was instead treated to a high-pressure dose of painfully hot water onto the already painful area.

And then a few minutes later, while mentally composing this blog post and muttering curses against the inventors of mixer taps and their descendants, yea, unto the seventh generation, the same thing happened to me again.

In conclusion: fuck mixer taps. Fuck them right in their stupid single non-parallelisable pain-causing water outlets.

This post is dedicated to [personal profile] elmyra, who labours under the misapprehension that mixer taps are not only a superior technology, but so obviously a superior technology that the only possible reason they have not been universally adopted can be ignorance of their existence.

pozorvlak: (Default)

I hate making decisions. And I'm right to do so, as the emerging body of evidence on decision fatigue makes clear. But sometimes you have to make a decision in situations where there's no obviously good choice: sometimes the differences between options are trivial, sometimes the differences are significant but the advantages and disadvantages are finely balanced, and sometimes you just don't have enough information to assess what those advantages and disadvantages are, but need to make a decision anyway so you can move on.

Some people advocate rolling a die or tossing a coin in this situation. These people are clearly less indecisive than me. Some people advocate tossing a coin, and if you catch yourself thinking "dammit, the other option would have let me do X" then you have discovered your hidden underlying preference and can go for that one. These people are also clearly less indecisive than me: I do that every time. Finely-balanced advantages and disadvantages; if there were no opportunity cost, there would be no decision to be made.

However, I have discovered a procedure that allows me to deal with many of these situations, and to do so quickly and with minimal stress. If you can't find a good argument for choosing one option over the others, look for a stupid reason instead. And do so in a consistent and general way, to minimise the mental effort required. The short version of my system is

  1. Pick the red one.
  2. If that doesn't work, pick the one with more cats.
  3. If that doesn't work, pick the one with more dogs.
  4. Give up.

The more detailed version is

  1. Eliminate all choices that are less than maximally red. Red, of course, is the Best Colour.
  2. If more than one choice remains, eliminate all choices which are less than maximally feline. More cats beat fewer cats, fluffy cats beat smooth cats, cute cats beat ugly cats, kittens beat adult cats.
  3. If more than one choice remains, eliminate all choices that are less than maximally canine, subject to the rules above with the obvious formal modification applied.
  4. If more than one choice remains, it is officially Too Hard and you are entitled to give up. Go to the pub and order the beer you haven't had before¹.
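For the programmers in the audience, the procedure above is just a sequence of tie-breaking filters, and can be sketched in a few lines of Python (the attribute names here are invented for illustration — score your options however you see fit):

```python
def decide(options):
    """Stupid-reason decision procedure: red, then cats, then dogs.

    Each option is a dict with (invented) keys 'redness', 'cats'
    and 'dogs'. At each step, keep only the maximally red/feline/
    canine options; if exactly one survives, that's the answer.
    """
    for key in ('redness', 'cats', 'dogs'):
        best = max(opt.get(key, 0) for opt in options)
        options = [opt for opt in options if opt.get(key, 0) == best]
        if len(options) == 1:
            return options[0]
    return None  # Rule 4: officially Too Hard. Go to the pub.

bikes = [
    {'name': 'red bike', 'redness': 2, 'cats': 0, 'dogs': 0},
    {'name': 'blue bike', 'redness': 0, 'cats': 0, 'dogs': 0},
]
print(decide(bikes)['name'])  # 'red bike', by Rule 1
```

Note that two equally red, equally cute kittens correctly fall through to Rule 4.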

This may sound stupid, but inventing this procedure has had a noticeable positive effect on my life. It's quick to run through. It turns angsty and tiring vacillation into either a purely mechanical procedure, or a fun game of inventing reasons why abstract things are red, or catlike, or doglike. It works remarkably often: I can't remember when I last had to invoke Rule 4 (though it has had at least one notable success: see below). Hell, Rule 1 is enough most of the time. And, applied consistently over a long period, it causes my life to fill with things that are red, feline or canine, all of which make me happier.


Q: Which of these two equally cute and fluffy kittens should you pick to take home with you?
A: Mu.

Obviously these advantages are not tied to the specific steps listed above. Feel free to substitute your own favourite colour or animals, or to come up with different steps entirely. But I do recommend inventing a procedure like this one if you also struggle to make decisions.

¹ If there is more than one beer available that you haven't had before, drink them all in left-to-right order.

pozorvlak: (babylon)

We've just moved house, which means that we're waiting for a new broadband connection to be set up. While we're waiting for reliable Internet we've been making our own entertainment, and by "entertainment" I of course mean "beer". It's our first attempt at homebrewing, so we decided to walk before we tried running and bought a beginner's homebrew kit.

Which made me think, as one does, of Ninkasi, the Sumerian goddess of beer and brewing, who every day used to brew beer for the rest of the Sumerian pantheon. Beer was one of the crucial enabling technologies for the early urban civilisations in Egypt and Mesopotamia: as well as being fun to consume, it allows you to live in close proximity to other humans without dying of cholera from contaminated drinking water. Beer brewing tech has moved on a bit since Ninkasi's day, but the essentials of the process remain the same: germinate grains to turn the starches into sugars, soak in water to extract the sugars, drain off the resultant liquid (the "wort"), mix with flavourings, ferment in a large vat, drink, enjoy.

And since today is National Poetry Day (with a theme of "water", no less), I thought it might be fun to update the Hymn to Ninkasi. Dating from around 1800BC, it's one of the oldest known recipes for beer. Here's an academic translation, and here's a discussion, and a looser but more poetic translation.

Given birth by the flowing water, tenderly cared for by Ninhursaja! Ninkasi, given birth by the flowing water, tenderly cared for by Ninhursaja!

Having founded your town upon wax, she completed its great walls for you. Ninkasi, having founded your town upon wax, she completed its great walls for you.

Your father is Enki, the lord Nudimmud, and your mother is Ninti, the queen of the abzu. Ninkasi, your father is Enki, the lord Nudimmud, and your mother is Ninti, the queen of the abzu.

It is you who watches the included instructional DVD, and is soothed by the presenter's reassuring Australian accent and self-deprecating humour. Ninkasi, it is you who watches the included instructional DVD, and is soothed by the presenter's reassuring Australian accent and self-deprecating humour.

It is you who puts a cupful of bleach in the plastic bucket, and fills it up with hot water. Ninkasi, it is you who puts a cupful of bleach in the plastic bucket, and fills it up with hot water.

It is you who leaves the bucket to sterilise for half an hour, and then rinses it down carefully in the shower. Ninkasi, it is you who leaves the bucket to sterilise for half an hour, and then rinses it down carefully in the shower.

It is you who carries the bucket through to the kitchen, and places it on the counter. Ninkasi, it is you who carries the bucket through to the kitchen, and places it on the counter.

It is you who peels the backing from the thermometer strip, and sticks it to the outside of the bucket. Ninkasi, it is you who peels the backing from the thermometer strip, and sticks it to the outside of the bucket.

It is you who upends the wort can into a saucepan of hot water so that it may flow more easily. Ninkasi, it is you who upends the wort can into a saucepan of hot water so that it may flow more easily.

It is you who opens the can of hopped wort, and guards it even from the noble cats. Ninkasi, it is you who opens the can of hopped wort, and guards it even from the noble cats.

It is you who dissolves the wort in two litres of boiling water, then adds the 1kg packet of fermentable and non-fermentable sugars and stirs vigorously. Ninkasi, it is you who dissolves the wort in two litres of boiling water, then adds the 1kg packet of fermentable and non-fermentable sugars and stirs vigorously.

It is you who wonders what the hell the non-fermentable sugars are there for anyway, and tries to tell your girlfriend about that Holsten Pils advert with Denis Leary, and remembers that she's too young to remember it. Ninkasi, it is you who wonders what the hell the non-fermentable sugars are there for anyway, and tries to tell your girlfriend about that Holsten Pils advert with Denis Leary, and remembers that she's too young to remember it.

It is you who tops up the bucket to twenty litres with cold water, stirs and checks the temperature. Ninkasi, it is you who tops up the bucket to twenty litres with cold water, stirs and checks the temperature.

It is you who adds another couple of litres of cold water to ensure the temperature is within the range 21C-27C. Ninkasi, it is you who adds another couple of litres of cold water to ensure the temperature is within the range 21C-27C.

It is you who takes the packet of yeast, tries to tear it open, realises there is no slit in the packet, and looks frantically for a pair of scissors. Ninkasi, it is you who takes the packet of yeast, tries to tear it open, realises there is no slit in the packet, and looks frantically for a pair of scissors.

It is you who cuts open the packet of yeast with a Swiss Army knife and pours it into the bucket. Ninkasi, it is you who cuts open the packet of yeast with a Swiss Army knife and pours it into the bucket.

It is you who curses when your girlfriend reminds you that you were meant to sprinkle the yeast evenly over the surface of the liquid. Ninkasi, it is you who curses when your girlfriend reminds you that you were meant to sprinkle the yeast evenly over the surface of the liquid.

It is you who accepts all the blame if this whole thing goes wrong. Ninkasi, it is you who accepts all the blame if this whole thing goes wrong.

It is you who slots the stupidly-named Krausen Kollar onto the bucket, and then fits the lid. Ninkasi, it is you who slots the stupidly-named Krausen Kollar onto the bucket, and then fits the lid.

It is you who wonders why this system doesn't use an airlock, Googles to find out, and emerges no wiser. Ninkasi, it is you who wonders why this system doesn't use an airlock, Googles to find out, and emerges no wiser.

It is you who hopes that you won't end up with partially-fermented beer all over your new kitchen floor. Ninkasi, it is you who hopes that you won't end up with partially-fermented beer all over your new kitchen floor.

It is you who half-fills the hydrometer tube with the diluted wort and drops in the weighted bulb, to determine the beer's original gravity. Ninkasi, it is you who half-fills the hydrometer tube with the diluted wort and drops in the weighted bulb, to determine the beer's original gravity.

It is you who attempts to read the specific gravity at the meniscus, but can't actually see the meniscus because beer is fizzy. Ninkasi, it is you who attempts to read the specific gravity at the meniscus, but can't actually see the meniscus because beer is fizzy.

It is you who decides that accuracy to +/- 0.001 is probably good enough, and writes down your best guess in the Brewer's Log. Ninkasi, it is you who decides that accuracy to +/- 0.001 is probably good enough, and writes down your best guess in the Brewer's Log.

[It is you who has only proceeded up to this point so far, and is copying the remaining steps out of the instruction booklet. Ninkasi, it is you who has only proceeded up to this point so far, and is copying the remaining steps out of the instruction booklet.]

It is you who waits for four days, periodically checking that the temperature is within the range 21-27C, and then draws off another tubeful of liquid and measures the specific gravity with the hydrometer. Ninkasi, it is you who waits for four days, periodically checking that the temperature is within the range 21-27C, and then draws off another tubeful of liquid and measures the specific gravity with the hydrometer.

It is you who checks the specific gravity daily until it has remained the same for 24 hours. Ninkasi, it is you who checks the specific gravity daily until it has remained the same for 24 hours.

It is you who nervously taste-tests the beer, hoping that it has avoided infection or other problems. Ninkasi, it is you who nervously taste-tests the beer, hoping that it has avoided infection or other problems.

It is you who sterilises the supplied heavy PET bottles, ready to receive the beer. Ninkasi, it is you who sterilises the supplied heavy PET bottles, ready to receive the beer.

It is you who fits the bottling valve to the tube, and opens the tap. Ninkasi, it is you who fits the bottling valve to the tube, and opens the tap.

It is you who fills the bottles with the beer. Ninkasi, it is you who fills the bottles with the beer.

It is you who puts one sugar tablet into each 500mL bottle so that the beer may undergo a second fermentation, then caps them and inverts each one several times. Ninkasi, it is you who puts one sugar tablet into each 500mL bottle so that the beer may undergo a second fermentation, then caps them and inverts each one several times.

It is you who wonders what was going on in that bit in 1984 where the old guy complained that half a litre of beer wasn't enough, seriously, you can barely tell the difference between half a litre and a pint. Ninkasi, it is you who wonders what was going on in that bit in 1984 where the old guy complained that half a litre of beer wasn't enough, seriously, you can barely tell the difference between half a litre and a pint.

It is you who stores the bottles upright at a temperature above 18C (in Scotland, in October) for two weeks while the beer undergoes carbonation. Ninkasi, it is you who stores the bottles upright at a temperature above 18C (in Scotland, in October) for two weeks while the beer undergoes carbonation.

It is you who invites your friends round to try the finished beer; it is like the onrush of the Tigris and the Euphrates. Ninkasi, it is you who invites your friends round to try the finished beer; it is like the onrush of the Tigris and the Euphrates.

Cheers!
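(A prosaic footnote for fellow novices: the two hydrometer readings aren't just ritual. The drop in specific gravity measures how much sugar the yeast has converted to alcohol, and the standard homebrewers' rule of thumb ABV ≈ (OG − FG) × 131.25 turns the two readings into an estimated strength. A quick sketch, with made-up but typical kit-beer numbers:)

```python
def abv(original_gravity, final_gravity):
    """Estimate alcohol by volume from two hydrometer readings.

    Uses the standard homebrewers' approximation:
    ABV ~= (OG - FG) * 131.25.
    """
    return (original_gravity - final_gravity) * 131.25

# A typical kit beer: OG 1.040 fermenting down to FG 1.010
print(round(abv(1.040, 1.010), 1))  # about 3.9% ABV
```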

pozorvlak: (Default)

I got back from my first climbing trip to the Alps a bit over a month ago. My friend Andy and I spent two-and-a-bit weeks in the Écrins climbing routes graded Facile and Peu Difficile, and generally having a blast. I'd wanted to go to the Alps for years, but I'd heard so much about how hard and scary Alpine climbing was that I consistently failed to get a trip together. This year I finally overcame my fears and disorganisation and made it out there, and I found that low-grade Alpine mountaineering (the stuff I was interested in, in other words) was much easier, more fun and less scary than I'd been led to expect. I wish I'd gone five years ago.

Some of the stuff I read beforehand was useful (in particular, I recommend the BMC's DVD Alpine Essentials, which does a wonderful job of demystifying Alpine mountaineering), but I was still left with some misconceptions and gaps in my knowledge. Here's some of what I wish I'd known back then; the usual disclaimer applies.

There is absolutely no need to go to the Dolomites first

People kept telling me that I should go to the Dolomites and do lots of long rock routes before attempting the high Alps. I'm sure the Dolomites are lovely, but this is nonsense. If, like me, you want to do low-grade mountaineering on high mountains, go and do it. Experience of 10+-pitch rock routes is not necessary; I doubt it's even very useful. Moving together on scrambling terrain in big boots is a different skill.

It's nothing like the books

Climbing memoirs and films concentrate disproportionately on the super-hard routes and the times when Everything Went Wrong. It turns out that there are plenty of easier routes too.

Guidebook times are perfectly achievable

I received differing advice on this. All the books and DVDs stressed the importance of completing routes within guidebook time, and only increasing your grade/altitude/length once you were doing so. However, most of the people I spoke to said that this was an unrealistic aim. One guy even said that 1.5x guidebook time was a more reasonable target, but that I (as a Slow Climber) should allow double. In the event, we did almost all of our routes either within guidebook time or only a few minutes over. On the one exception, the South Ridge of the Aiguille Centrale de Soreiller, we took 4.5h versus 3h, but (a) the time quoted was for the shorter variation of the route, and we did the longer variation, and (b) we deliberately decided to pitch the exposed summit ridge rather than moving together, having assessed the glacier below and decided it would be safe to descend later in the day.

It's not that scary

All the books said "you need a few days to get used to the sheer scale of the Alps". I really didn't find this. Granted, the Écrins are not the tallest part of the Alps, but the exposure levels were at most about double what I'm used to from Scotland, and I felt pretty much at home. In fact, I found fear-management much easier than on the average UK roadside crag. If you can handle the Cuillin Ridge, you'll be fine.

That said, learning the Litany Against Fear is not a bad idea. It actually helps, and as an earworm it beats the hell out of Brown Girl in the Ring.

The days needn't be that long

Similarly with the oft-repeated advice that Alpine days can be really really long. Our longest day was ten hours (although we stopped back at the hut - it would have been more like 14 if we'd descended to the valley the same day); I've done 13-hour winter days in Scotland, and 14-hour days in summer. Or 18-hour days if you count epics. There are obviously plenty of very long routes in the Alps, but you don't have to do them. Pick a shorter route and move fast.

I suspect that most advice to newbie Brit alpinists is aimed at hot-headed wannabe Sheffield hardmen. The exposure's huge and the days are long if you think that 10m of gritstone is a long route. Which reminds me of the time I was climbing Curved Ridge in summer with three friends, moving roped together, and two know-it-alls with Yorkshire accents told us that they'd been climbing for thirty years and what we were doing was "not a recognised rope technique". Hey, how about you (a) fuck off back to Stanage, and (b) pick up a fucking book? I'm sure if you ask nicely in the climbing shop they'll help you with the big words.

You'll spend a lot of time downclimbing

The voies normales are, almost by definition, the easiest routes up the mountains. Hence, if there were an easy route off the top, you'd have climbed that instead. You may be able to abseil some sections, but there's no guarantee of this.

It's surprisingly warm up there

My previous experience of climbing in snow and ice had all been in Scottish winter conditions, where your fingers are usually painfully cold, touching the rock ungloved will chill you to the bone, hydration tubes freeze solid, and if you stop for more than a minute you'll need to layer up or dance about or both to keep warm. This was not the case in the Écrins. I did most routes in just a base layer, with my thin belay jacket coming out for summit stops or the occasional fixed belay in the wind. My hardshell only got used during rainstorms in the valley, and my outer gloves were entirely unused. Softshells were more useful: in particular, my Rab Sawtooth softshell trousers were excellent.

On the Barre des Écrins, a guide asked me "do you get many days like this on Ben Nevis?" "Oh yeah, we get some sunny days, even in winter." "No, I meant with the wind!" "Oh, right. In Scotland, we don't consider it windy if you can hold a conversation."

Staying hydrated is hard

Non-freezing hydration tubes make it easier to take a drink without stopping - and you will have very few stops if you're doing it right - but we kept running out of water in the heat. The lack of stops also means you can't do much to adjust your layering system if you get too hot. On our first route - which took a mere 4.5 hours hut-to-hut - I drank the whole of my 2L hydration bladder, then knocked back a 1.5L bottle of water on my own back at the hut.

We also struggled to eat enough on the routes; we never properly hit the wall, but we were definitely suffering from depleted blood-sugar on several occasions. My normal strategy is to scoff chocolate biscuits and sandwiches on belays, or eat while hiking, but this doesn't work when you're moving together on class 3-4 terrain, need your hands to make progress and can't spare the time to stop. I suggested to Andy that we fill our drinking bladders with Gatorade or something similar, but apparently when he tried that on a previous trip he lost a tooth. Suggestions?

Lassitude is a real thing

I was astonished how little energy I had down in the valleys. The heat sapped the power to do anything except lie about and drink tea.

You'll do a lot of traversing

It turns out that

  1. you have muscles in the side of your calves
  2. they're used a lot when you traverse steep slopes
  3. almost nothing else trains them.

Ow.

You'll need to switch very quickly between belayed climbing and moving together

File this one under "try not to stop for any reason" - you quite often reach a spot where you can belay the leader over a tricky bit, but the second wants to move off immediately once the rope comes tight. This argues for the use of direct belays off spikes, a technique which horrified me when I first saw it but to which I quickly became accustomed.

Fitness is useful, but you don't have to be an elite super-athlete

I had an ambitious training plan, involving half-marathons and marathons and mountain marathons, but due to various injuries and illnesses and my local gym closing down and other such excuses, I utterly failed to go through with it. Consequently I headed out to the Alps well below my usual level of fitness and carrying about 15kg of excess weight. About a week before I went out, I ran 10km and got delayed onset muscle soreness, so long had it been since I'd done any running. And, you know what? I was mostly fine. The walk-ins to the huts were hard, largely because we were doing them in the heat of the afternoon (see above, "lassitude is a real thing"), and I was pretty spaced out with tiredness on the descent from the Barre des Écrins, but I managed. More fitness would definitely have helped, sure, but lack of fitness wasn't (usually) the limiting factor.

Alpine star fields are amazing

Install a star-map app on your phone before you go. Trust me on this.

So what would be a really useful training plan for that sort of trip? I suggest the following:

  • Do as much hillwalking as you can. If it involves some scrambling, all the better. Practice traversing steep slopes.
  • Practice climbing easy routes in big boots.
  • Practice downclimbing easy routes in big boots.
  • Practice climbing with a full bladder (once again, you don't want to stop if you can avoid it).
  • Do lots of long, grade I/II Scottish winter routes: the kind of thing where you want to move together. This was the only part of this training programme that I actually did, and I'm extremely grateful for it.
  • Practice your French (or the language of whatever country you're visiting). High school was a long time ago for me, and it's embarrassing asking "Parlez-vous anglais?" all the time. Also, the English-language guidebooks are selective and concentrate a lot on the more aspirational routes; reading the local guidebooks will give you more options.

tl;dr Alpine climbing is the Best Thing Ever. All the fun of Scottish winter climbing without the hot aches.

Me on the summit of La Grande Ruine
Me on my first Alpine summit, La Grande Ruine 3765m. More photos here.

pozorvlak: (Default)
Ah've drank
the specials
that wur in
the fridge

n thit
ye wur prably
haudin back
fer the party

Awright
They were great
that strang
that cauld
pozorvlak: (Default)
If I ever design a first course in compilers, I'll do it backwards: we'll start with code generation, move on to static analysis, then parse a token stream and finally lex source text. As we go along, students will be required to implement a small compiler for a simple language, using the techniques they've just learned. The (excellent) Stanford/Coursera compilers course does something similar, but they proceed in the opposite direction, following the dataflow in the final compiler: first they cover lexing, then parsing, then syntax analysis, then codegen. The first Edinburgh compilers course follows roughly the same plan of lectures, and I expect many other universities' courses do too.

I think a backwards course would work better for two reasons:
  1. Halfway through the Stanford course, you have a program that can convert source text into an intermediate representation with which you can't do very much. Halfway through the backwards course, you'd have a compiler for an unfriendly source language: you could write programs directly in whatever level of IR you'd got to (I'm assuming a sensible implementation language that doesn't make entering data literals too hard), and compile them using code you'd written to native code. I think that would be pretty motivating.
  2. When I did the Stanford course, all the really good learning experiences were in the back end. Writing a Yacc parser was a fiddly but largely straightforward experience; writing the code generator taught me loads about how your HLL code is actually executed and how runtime systems work. I also learned some less obvious things like the value of formal language specifications¹. Most CS students won't grow up to be compiler hackers, but they will find it invaluable to have a good idea of what production compilers do to their HLL code; it'll be much more useful than knowing about all the different types of parsing techniques, anyway². Students will drop out halfway through the course, and even those who make it all the way through will be tired and stressed by the end and will thus take in less than they did at the beginning: this argues for front-loading the important material.
What am I missing?

¹ I learned this the hard way. I largely ignored the formal part of the spec when writing my code generator, relying instead on the informal description; then towards the end of the allocated period I took a closer look at it and realised that it provided simple answers to all the thorny issues I'd been struggling with.
² The answer to "how should I write this parser?" in an industrial context is usually either "with a parser generator" or "recursive descent". LALR parsers such as those produced by Yacc are a pain to debug if you don't understand the underlying theory, true, but that's IMHO an argument for using a parser generator based on some other algorithm, most of which are more forgiving.
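To make the "backwards" idea concrete, here's the sort of toy code generator students could write in week one (everything here is invented for illustration): the "unfriendly source language" is just nested tuples entered directly in the implementation language, compiled to instructions for a little stack machine, with no lexer or parser anywhere in sight.

```python
# Compile a tiny expression IR -- nested tuples like ('+', 1, ('*', 2, 3)) --
# to code for a toy stack machine, then execute it. The IR doubles as the
# source language: data literals stand in for parsed syntax trees.

def compile_expr(expr, code=None):
    """Emit stack-machine instructions for an expression tree."""
    if code is None:
        code = []
    if isinstance(expr, (int, float)):
        code.append(('PUSH', expr))
    else:
        op, left, right = expr
        compile_expr(left, code)   # operands first...
        compile_expr(right, code)
        code.append(('BINOP', op))  # ...then the operator
    return code

def run(code):
    """A trivial stack-machine interpreter, standing in for real hardware."""
    import operator
    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}
    stack = []
    for instr, arg in code:
        if instr == 'PUSH':
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[arg](a, b))
    return stack.pop()

program = ('+', 1, ('*', 2, 3))
print(run(compile_expr(program)))  # 7
```

Later weeks would replace the tuple literals with the output of a parser, and the toy stack machine with a real target.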
pozorvlak: (Default)
I've been learning about the NoSQL database CouchDB, mainly from the Definitive Guide, but also from the Coursera Introduction to Data Science course and through an informative chat with [personal profile] necaris, who has used it extensively at Esplorio. The current draft of the Definitive Guide is rather out-of-date and has several long-open pull requests on GitHub, which doesn't exactly inspire confidence, but CouchDB itself appears to be actively maintained. I have yet to use CouchDB in anger, but here's what I've learned so far:

  • CouchDB is, at its core, an HTTP server providing append-only access to B-trees of versioned JSON objects via a RESTful interface. Say what now? Well, you store your data as JavaScript-like objects (which allow you to nest arrays and hash tables freely); each object is indexed by a key; you access existing objects and insert new ones using the standard HTTP GET, PUT and DELETE methods, specifying and receiving data in JavaScript Object Notation; you can't update objects, only replace them with new objects with the same key and a higher version number; and it's cheap to request all the objects with keys in a given range.

  • The JSON is not by default required to conform to any particular schema, but you can add validation functions to be called every time data is added to the database. These will reject improperly-formed data.

  • CouchDB is at pains to be RESTful, to emit proper cache-invalidation data, and so on, and this is key to scaling it out: put a contiguous subset of (a consistent hash of) the keyspace on each machine, and build a tree of reverse HTTP proxies (possibly caching ones) in front of your database cluster.

  • CouchDB's killer feature is probably master-to-master replication: if you want to do DB operations on a machine that's sometimes disconnected from the rest of the cluster (a mobile device, say), then you can do so, and sync changes up and down when you reconnect. Conflicts are flagged but not resolved by default; you can resolve them manually or automatically by recording a new version of the conflicted object. Replication is also used for load-balancing, failover and scaling out: you can maintain one or more machines that constantly replicate the master server for a section of keyspace, and you can replicate only a subset of keyspace onto a new database when you need to expand.

  • CouchDB doesn't guarantee to preserve all the history of an object, and in particular replications only seem to send the most recent version; I think this precludes Git-style three-way merge from the conflicting versions' most recent common ancestor (and forget about Darcs-style full-history merging!).

  • The cluster-management story isn't as good as for some other systems, but there are a couple of PaaS offerings.

  • Queries/views and non-primary indexes are both handled using map/reduce. If you want to index on something other than the primary key - posts by date, say - then you write a map query which emits (date, post) pairs. These are put into another B-tree, which is stored on disk; clever things are done to mark subtrees invalid as new data comes in, and changes to the query result or index are calculated lazily. Since indices are stored as B-trees, it's cheap to get all the objects within a given range of secondary keys: all posts in February, for instance.

  • CouchDB's reduce functions are crippled: attempting to calculate anything that isn't a scalar or a fixed-size object is considered Bad Form, and may cause your machine(s) to thrash. AFAICT you can't reduce results from different machines by this mechanism: CouchDB Lounge requires you to write extra merge functions in Twisted Python.

  • Map, reduce and validation functions (and various others, see below) are by default written in JavaScript. But CouchDB invokes an external interpreter for them, so it's easy to extend CouchDB with a new query server. Several such have been written, and it's now possible to write your functions in many different languages.

  • There's a very limited SQL view engine, but AFAICT nothing like Hive or Pig that can take a complex query and compile it down into a number of chained map/reduce jobs. The aforementioned restrictions on reduce functions mean that the strategy I've been taught for expressing joins as map/reduce jobs won't work; I don't know if this limitation is fundamental. But it's IME pretty rare to require general joins in applications: usually you want to do some filtering or summarisation on at least one side.

  • CouchDB can't quite make up its mind whether it wants to be a database or a Web application framework. It comes by default with an administration web app called Futon; you can also use it to store and execute code for rendering objects as HTML, Atom, etc. Such code (along with views, validations etc) is stored in special JSON objects called "design documents": best practice is apparently to have one design document for each application that needs to access the underlying data. Since design documents are ordinary JSON objects, they are propagated between nodes by replications.

  • However, various standard webapp-framework bits are missing, notably URL routing. But hey, you can always use mod_rewrite...

  • There's a tool called Erica (and an older one called CouchApp) which allows you to sync design documents with more conventional source-code directories in your filesystem.
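A minimal sketch of what such a design document might contain (the names "blog" and "by_date", and the field names inside the embedded functions, are illustrative assumptions, not from any real app):

```javascript
// Sketch of a CouchDB design document: an ordinary JSON object under the
// _design/ namespace, holding view, validation and rendering code as
// strings. All names here are illustrative.
const designDoc = {
  _id: "_design/blog",
  language: "javascript",
  views: {
    by_date: {
      map: "function (doc) { if (doc.type === 'post') { emit(doc.date, doc.title); } }",
    },
  },
  validate_doc_update:
    "function (newDoc, oldDoc, userCtx) { if (newDoc.type === 'post' && !newDoc.date) { throw({ forbidden: 'posts need a date' }); } }",
  shows: {
    post: "function (doc, req) { return '<h1>' + doc.title + '</h1>'; }",
  },
};

// Being plain JSON, it can be serialised and PUT to the database like any
// other document, and replicated between nodes in the same way:
const body = JSON.stringify(designDoc);
```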

  • CouchDB is written in Erlang, and the functional-programming influence shows up in other places: most types of user-defined function are required to be free of side-effects, for instance. Then there's the aforementioned uses of lazy evaluation and the append-only nature of the system as a whole. You can extend it with your own Erlang code or embed it into an Erlang application, bypassing the need for HTTP requests.

tl;dr if you've ever thought "data modelling and synchronisation are hard, let's just stick a load of JSON files in Git" (as I have, on several occasions), then CouchDB is probably a good fit to your needs. Especially if your analytics needs aren't too complicated.

Commuting

Feb. 7th, 2013 12:41 pm
pozorvlak: (Hal)
Quaffing the last of my quickening cup,
I chuck fair Josie, my predatory protégée, behind her ear.
Into my knapsack I place fell Destruction,
my weapon in a thousand fights against the demon Logic
(not to mention his dread ally the Customer
who never knows exactly what she wants, but always wants it yesterday).
He sleeps lightly, but is ready
to leap into action, confounding the foe
with his strings of enchanted rubies and pearls.
To my thigh I strap Cecilweed, the aetherial horn
spun from rare African minerals in far Taiwan
and imbued with subtle magics by the wizards of Mountain View.
Shrugging on my Cuirass of Visibility,
I mount Wellington, my faithful iron steed
his spine wrought in the mighty forge of Diamondback
his innards cast by the cunning smiths of Shimano
and ride off, dodging monsters the height of a house
towards the place the ancients knew as Sràid na Banrighinn
The Street of the Queen.

Just wanna clarify that in lines 5 and 6 I'm not talking about the Growstuff customers, all of whom have been great.
pozorvlak: (Hal)
[Wherein we review an academic conference in the High/Low/Crush/Goal/Bane format used for reviewing juggling conventions on rec.juggling.]

High: My old Codeplay colleague Ally Donaldson's FAT-GPU workshop. He was talking about his GPUVerify system, which takes CUDA or OpenCL programs and either proves them free of data races and synchronisation-barrier conflicts, or finds a potential bug. It's based on an SMT solver; I think there's a lot of scope to apply constraint solvers to problems in compilation and embedded system design, and I'd like to learn more about them.

Also, getting to see the hotel's giant fishtank being cleaned, by scuba divers.

Low: My personal low point was telling a colleague about some of the problems my depression has been causing me, and having him laugh in my face - he'd been drinking, and thought I was exaggerating for comic effect. He immediately apologised when I told him that this wasn't the case, but still, not fun. The academic low point was the "current challenges in supercomputing" tutorial, which turned out to be a thinly-disguised sales pitch for the sponsor's FPGA cards. That tends not to happen at maths conferences...

Crush: am I allowed to have a crush on software? Because the benchmarking and visualisation infrastructure surrounding the Sniper x86 simulator looks so freaking cool. If I can throw away the mess of Makefiles, autoconf and R that serves the same role in our lab I will be very, very happy.

Goal: Go climbing on the Humboldthain Flakturm (fail - it turns out that Central Europe is quite cold in January, and nobody else fancied climbing on concrete at -7C). Get my various Coursera homeworks and bureaucratic form-filling done (fail - damn you, tasty German beer and hyperbolic discounting!). Meet up with [livejournal.com profile] maradydd, who was also in town (fail - comms and scheduling issues conspired against us. Next time, hopefully). See some interesting talks, and improve my general knowledge of the field (success!).

Bane: I was sharing a room with my Greek colleague Chris, who had a paper deadline on the Wednesday. This meant he was often up all night, and went to bed as I was getting up, so every trip into the room to get something was complicated by the presence of a sleeping person. He also kept turning the heating up until it was too hot for me to sleep. Dually, of course, he had to share his room with a crazy Brit who kept getting up as he was going to bed and opening the window to let freezing air in...
pozorvlak: (Hal)
I've been using Mercurial (also known as hg) as the version-control system for a project at work. I'd heard good things about it - a Git-like system with a cleaner UI and better documentation - and was glad of the excuse to try it out. Unfortunately, I was disappointed by what I found. The docs are good, and the UI's a bit cleaner, but it's still got some odd quirks - the difference between hg resolve and hg resolve -m catches me every bloody time, for instance. Unlike Git, you aren't prompted to set missing configuration options interactively. Some of the defaults are crazy, like not sending long output to a pager. And having got used to easy, safe history-rewriting in Git, I was horrified to learn that Mercurial offered no such guarantees of safety: up until version 2.2, the equivalent of a simple commit --amend could cause you to lose work permanently. Easy history-rewriting is a big deal; it means that you never have to choose between committing frequently and only pushing easily-reviewable history.

But I persevered, and with a bit of configuration I was able to make hg behave more like Git, and hence more comfortable to use. Here's my current .hgrc:
[ui]
username = Pozorvlak <pozorvlak@example.com>
merge = internal:merge
[pager]
pager = LESS='FSRX' less
[extensions]
rebase =
record =
histedit = ~/usr/etc/hg/hg_histedit.py
fetch =
shelve = ~/usr/etc/hg/hgshelve.py
pager =
mq =
color =

You'll need at least the username line, because of the aforementioned lack of interactive configuration. The pager = LESS='FSRX' less and pager = lines send long output to less instead of letting it all spew out and overflow your console scrollback buffer. merge = internal:merge tells it to use its internal merge algorithm as a merge tool, and put ">>>>" gubbins in files in the event of conflicts. Otherwise it uses meld for merges on my machine; meld is very pretty but not history-aware, and history-aware merges are at least 50% of the point of using a DVCS in the first place.

The rebase extension allows you to graft a sequence of changesets onto another part of the history graph, like git rebase; the record extension allows you to select only some of the changes in your working copy for committing, like git add -p or darcs record; the fetch extension lets you do pull-and-merge in one operation - confusingly, git pull and git fetch are the opposite way round from hg fetch and hg pull. The mq extension turns on patch queues, which I needed for some hairy operation or other once.

The non-standard histedit extension works like git rebase --interactive but not, I believe, as safely - dropped commits are deleted from the history graph entirely rather than becoming unreachable from an active head. The non-standard shelve extension works like git stash, though less conveniently - once you've shelved one change you need to give a name to all subsequent ones. Perhaps a Mercurial expert reading this can tell me how to delete unwanted shelves? Or about some better extensions or settings I should be using?
pozorvlak: (Hal)

I've been running benchmarks again. The basic workflow is

  1. Create some number of directories containing the benchmark suites I want to run.
  2. Tweak the Makefiles so benchmarks are compiled and run with the compilers, simulators, libraries, flags, etc, that I care about.
  3. Optionally tweak the source code to (for instance) change the number of iterations the benchmarks are run for.
  4. Run the benchmarks!
  5. Check the output; discover that something is broken.
  6. Swear, fix the problem.
  7. Repeat until either you have enough data or the conference submission deadline gets too close and you are forced to reduce the scope of your experiments.
  8. Collate the outputs from the successful runs, and analyse them.
  9. Make encouraging noises as the graduate students do the hard work of actually writing the paper.

Suppose I want to benchmark three different simulators with two different compilers for three iteration counts. That's 18 configurations. Now note that the problem found in stage 5 and fixed in stage 6 will probably not be unique to one configuration - if it affects the invocation of one of the compilers then I'll want to propagate that change to nine configurations, for instance. If it affects the benchmarks themselves or the benchmark-invocation harness, it will need to be propagated to all of them. Sounds like this is a job for version control, right? And, of course, I've been using version control to help me with this; immediately after step 1 I check everything into Git, and then use git fetch and git merge to move changes between repositories. But this is still unpleasantly tedious and manual. For my last paper, I was comparing two different simulators with three iteration counts, and I organised this into three checkouts (x1, x10, x100), each with two branches (simulator1, simulator2). If I discovered a problem affecting simulator1, I'd fix it in, say, x1's simulator1 branch, then git pull the change into x10 and x100. When I discovered a problem affecting every configuration, I checked out the root commit of x1, fixed the bug in a new branch, then git merged that branch with the simulator1 and simulator2 branches, then git pulled those merges into x10 and x100.

Keeping track of what I'd done and what I needed to do was frankly too cognitively demanding, and I was constantly bedevilled by the sense that there had to be a Better Way. I asked about this on Twitter, and Ganesh Sittampalam suggested "use Darcs" - and you know, I think he's right, Darcs' "bag of commuting patches" model is a better fit to what I'm trying to do than Git's "DAG of snapshots" model. The obvious way to handle this in Darcs would be to have six base repositories, called "everything", "x1", "x10", "x100", "simulator1" and "simulator2"; and six working repositories, called "simulator1_x1", "simulator1_x10", "simulator1_x100", "simulator2_x1", "simulator2_x10" and "simulator2_x100". Then set up update scripts in each working repository, containing, for instance

#!/bin/sh
darcs pull ../base/everything
darcs pull ../base/simulator1
darcs pull ../base/x10
and every time you fix a bug, run for i in working/*; do $i/update; done.

But! It is extremely useful to be able to commit the output logs associated with a particular state of the build scripts, so you can say "wait, what went wrong when I used the -static flag? Oh yeah, that". I don't think Darcs handles that very well - or at least, it's not easy to retrieve any particular state of a Darcs repo. Git is great for that, but whenever I think about duplicating the setup described above in Git my mind recoils in horror before I can think through the details. Perhaps it shouldn't - would this work? Is there a Better Way that I'm not seeing?

pozorvlak: (Hal)
Inspired by Falsehoods Programmers Believe About Names, Falsehoods Programmers Believe About Time, and far, far too much time spent fighting autotools. Thanks to Aaron Crane, [livejournal.com profile] totherme and [livejournal.com profile] zeecat for their comments on earlier versions.

It is accepted by all decent people that Make sucks and needs to die, and that autotools needs to be shot, decapitated, staked through the heart and finally buried at a crossroads at midnight in a coffin full of millet. Hence, there are approximately a million and seven tools that aim to replace Make and/or autotools. Unfortunately, all of the Make-replacements I am aware of copy one or more of Make's mistakes, and many of them make new and exciting mistakes of their own.

I want to see an end to Make in my lifetime. As a service to the Make-replacement community, therefore, I present the following list of tempting but incorrect assumptions various build tools make about building software.

All of the following are wrong:
  • Build graphs are trees.
  • Build graphs are acyclic.
  • Every build step updates at most one file.
  • Every build step updates at least one file.
  • Compilers will always modify the timestamps on every file they are expected to output.
  • It's possible to tell the compiler which file to write its output to.
  • It's possible to tell the compiler which directory to write its output to.
  • It's possible to predict in advance which files the compiler will update.
  • It's possible to narrow down the set of possibly-updated files to a small hand-enumerated set.
  • It's possible to determine the dependencies of a target without building it.
  • Targets do not depend on the rules used to build them.
  • Targets depend on every rule in the whole build system.
  • Detecting changes via file hashes is always the right thing.
  • Detecting changes via file hashes is never the right thing.
  • Nobody will ever want to rebuild a subset of the available dirty targets.
  • People will only want to build software on Linux.
  • People will only want to build software on a Unix derivative.
  • Nobody will want to build software on Windows.
  • People will only want to build software on Windows.
    (Thanks to David MacIver for spotting this omission.)
  • Nobody will want to build on a system without strace or some equivalent.
  • stat is slow on modern filesystems.
  • Non-experts can reliably write portable shell script.
  • Your build tool is a great opportunity to invent a whole new language.
  • Said language does not need to be a full-featured programming language.
  • In particular, said language does not need a module system more sophisticated than #include.
  • Said language should be based on textual expansion.
  • Adding an Nth layer of textual expansion will fix the problems of the preceding N-1 layers.
  • Single-character magic variables are a good idea in a language that most programmers will rarely use.
  • System libraries and globally-installed tools never change.
  • Version numbers of system libraries and globally-installed tools only ever increase.
  • It's totally OK to spend over four hours calculating how much of a 25-minute build you should do.
  • All the code you will ever need to compile is written in precisely one language.
  • Everything lives in a single repository.
  • Files only ever get updated with timestamps by a single machine.
  • Version control systems will always update the timestamp on a file.
  • Version control systems will never update the timestamp on a file.
  • Version control systems will never change the time to one earlier than the previous timestamp.
  • Programmers don't want a system for writing build scripts; they want a system for writing systems that write build scripts.

[Exercise for the reader: which build tools make which assumptions, and which compilers violate them?]

pozorvlak: (Default)
I've recently submitted a couple of talk proposals to upcoming conferences. Here are the abstracts.

Machine learning in (without loss of generality) Perl

London Perl Workshop, Saturday 24th November 2012. 25 minutes.

If you read a book or take a course on machine learning, you'll probably spend a lot of time learning about how to implement standard algorithms like k-nearest neighbours or Naive Bayes. That's all very interesting, but we're Perl programmers - all that stuff's on CPAN already. This talk will focus on how to use those algorithms to attack problems, how to select the best ML algorithm for your task, and how to measure and improve the performance of your machine learning system. Code samples will be in Perl, but most of what I'll say will be applicable to machine learning in any language.

Classifying Surfaces

MathsJam: The Annual Conference, 17th-18th November 2012. 5 minutes.

You may already know Euler's remarkable result that if a polyhedron has V vertices, E edges and F faces, then V - E + F = 2. This is a special case of the beautiful classification theorem for closed surfaces. I will state this classification theorem, and give a quick sketch of a proof.
pozorvlak: (Default)
Remember how a few years ago PCs were advertised with the number of MHz or GHz their processors ran at prominently featured? And how the numbers were constantly going up? You may have noticed that the numbers don't go up much any more, but now computers are advertised as "dual-core" or "quad-core". The reason that changed is power consumption. Double the clock speed of a chip, and you more than double its power consumption: with the Pentium 4 chip, Intel hit a clock speed ceiling as their processors started to generate more heat than could be removed.

But Moore's Law continues in operation: the number of transistors that can be placed on a given area of silicon has continued to double every eighteen months, as it has done for decades now. So how can chip makers make use of the extra capacity? The answer is multicore: placing several "cores" (whole, independent processing units) onto the same piece of silicon. Your chip can still do twice as much work as the one from eighteen months ago, but only if you split that work up into independent tasks.

This presents the software industry with a problem. We've been conditioned over the last fifty years to think that the same program will run faster if you put it on newer hardware. That's not true any more. Computer programs are basically recipes for use by particularly literal-minded and stupid cooks; imagine explaining how to cook a complex meal over the phone to someone who has to be told everything. If you're lucky, they'll have the wit to say "Er, the pan's on fire: that's bad, right?". Now let's make the task harder: you're on the phone to a room full of such clueless cooks, and your job is to get them to cooperate in the production of a complex dinner due to start in under an hour, without getting in each other's way. Sounds like a farce in the making? That's basically why multicore programming is hard.

But wait, it gets worse! The most interesting settings for computation these days are mobile devices and data centres, and these are both power-sensitive environments; mobile devices because of limited battery capacity, and data centres because more power consumption costs serious money on its own and increases your need for cooling systems which also cost serious money. If you think your electricity bill's bad, you should see Google's. Hence, one of the major themes in computer science research these days is "you know all that stuff you spent forty years speeding up? Could you please do that again, only now optimise for energy usage instead?". On the hardware side, one of the prominent ideas is heterogeneous multicore: make lots of different cores, each specialised for certain tasks (a common example is the Graphics Processing Units optimised for the highly-parallel calculations involved in 3D rendering), stick them all on the same die, farm the work out to whichever core is best suited to it, and power down the ones you're not using. To a hardware person, this sounds like a brilliant idea. To a software person, this sounds like a nightmare: now imagine that our Hell's Kitchen is full of different people with different skills, possibly speaking different languages, and you have to assign each task to the person best suited to carrying it out.

The upshot is that heterogeneous multicore programming, while currently a niche field occupied mainly by games programmers and scientists running large-scale simulations, is likely to get a lot more prominent over the coming decades. And hence another of the big themes in computer science research is "how can we make multicore programming, and particularly heterogeneous multicore programming, easier?" There are two aspects to this problem: what's the best way of writing new code, and what's the best way of porting old code (which may embody complex and poorly-documented requirements) to take advantage of multicore systems? Some of the approaches being considered are pretty Year Zero - the functional programming movement, for instance, wants us to write new code in a tightly-constrained way that is more amenable to automated mathematical analysis. Others are more conservative: for instance, my colleague Dan Powell is working on a system that observes how existing programs execute at runtime, identifies sections of code that don't interfere with each other, and speculatively executes them in parallel, rolling back to a known-good point if it turns out that they do interfere.

This brings us to the forthcoming Coursera online course in Heterogeneous Parallel Programming, which teaches you how to use the existing industry-standard tools for programming heterogeneous multicore systems. As I mentioned earlier, these are currently niche tools, requiring a lot of low-level knowledge about how the system works. But if I want to contribute to projects relating to this problem (and my research group has a lot of such projects) it's knowledge that I'll need. Plus, it sounds kinda fun.

Anyone else interested?
pozorvlak: (polar bear)
1. Start tracking my weight and calorie intake again, and get my weight back down to a level where I'm comfortable. I've been very slack on the actual calorie-tracking, but I have lost nearly a stone, and at the moment I'm bobbing along between 11st and about 11st 4lb. It would be nice to be below 11st, but I find I'm actually pretty comfortable at this weight as long as I'm doing enough exercise. So, I count that as a success.

2. Start making (and testing!) regular backups of my data. I'm now backing up my tweets with TweetBackup.com, but other than that I've made no progress on this front. Possibly my real failure was in not making all my NYRs SMART, so they'd all be pass/fail; as it is, I'm going to declare this one not yet successful.

3. Get my Gmail account down to Inbox Zero and keep it there. This one's a resounding success. Took me about a month and a half, IIRC. Next up: Browser Tab Zero.

4. Do some more Stanford online courses. There was a long period at the beginning of the year where they weren't running and we wondered if the Stanford administrators had stepped in and quietly deep-sixed the project, but then they suddenly started up again in March or so. Since then I've done Design and Analysis of Algorithms, which was brilliant; Software Engineering for Software as a Service, which I dropped out of 2/3 of the way through but somehow had amassed enough points to pass anyway; and I'm currently doing Compilers (hard but brilliant) and Human-Computer Interaction, which is way outside my comfort zone and on which I'm struggling. Fundamentals of Pharmacology starts up in a couple of weeks, and Cryptography starts sooner than that, but I don't think I'll be able to do Cryptography before Compilers finishes. Maybe next time they offer it. Anyway, I think this counts as a success.

5. Enter and complete the Meadows Half-Marathon. This was a definite success: I completed the 19.7km course in 1 hour and 37 minutes, and raised over £500 for the Against Malaria Foundation.

6. Enter (and, ideally, complete...) the Lowe Alpine Mountain Marathon. This was last weekend; my partner and I entered the C category. Our course covered 41km, gained 2650m of height, and mostly consisted of bog, large tufts of grass, steep traverses, or all three at once; we completed it in 12 hours and 33 minutes over two days and came 34th out of a hundred or so competitors. I was hoping for a faster time, but I think that's not too bad for a first attempt. Being rained on for the last two hours was no fun at all, but the worst bit was definitely the goddamn midges, which were worse than either of us had ever seen before. The itching's now just about subsided, and we're thinking of entering another one at a less midgey time of year: possibly the Original Mountain Marathon in October or the Highlander Mountain Marathon next April. Apparently the latter has a ceilidh at the mid-camp, presumably in case anyone's feeling too energetic. Anyway, this one's a success.

5/6 - I'm quite pleased with that. And I'm going to add another one (a mid-year resolution, if you will): I notice that my Munro-count currently stands at 136/284 (thanks to an excellent training weekend hiking and rock climbing on Beinn a' Bhuird); I hereby vow to have climbed half the Munros in Scotland by the end of the year. Six more to go; should be doable.
pozorvlak: (Default)
Yesterday, hacker-turned-Tantric-priest-turned-global-resilience-guru Vinay Gupta went on one of his better rants on Twitter. I've Storified it for your pleasure here. The gist was roughly
  1. We don't have enough resources to give everyone a Western lifestyle.
  2. Said lifestyle isn't actually very good at giving us the things which really make us happy.
  3. We do, on the other hand, have the resources to throw a truly massive party and invite everyone in the world. Drugs - especially psychedelics - require very little to produce, and sex is basically free.
My favourite tweet of the stream was "Hello, I'm the Government Minister for Dancing, Getting High and Fucking. We're going to be extending opening hours and improving quality."

It strikes me that this is a fun thought experiment. Imagine: the Party Party has just swept to power on a platform of gettin' down and boogying. You have been put in charge of the newly-created Department of Dancing, Getting High and Fucking (hereinafter DDGHF)¹. Your remit is to ensure that people who want to dance, get high and/or have sex can do so as safely as possible and with minimal impact on others. What do you do, hotshot? What policies do you implement? What targets do you set? How do you measure your department's effectiveness? How do you recruit and train new DDGHF staff, and what kind of organisational culture do you try to create?

Use more than one sheet of paper if you need.

You have a reasonable amount of freedom here: in particular, I'm not going to require that you immediately legalise all drugs. You might even want to ban some that are currently legal, though if so, please explain why your version of Prohibition won't be a disaster like all the others. However, I think we can take it as read that the Party Party's manifesto commits to at least scaling back the War on Drugs.

Bonus points: how does the new broom affect other departments? How do we manage diplomatic relations with states that are less hedonically inclined? What are the Party Party's policies on poverty, the economy, defence and climate change?

I guess I should give my answer )

Edit: LJ seems to silently fail to post comments that are above a certain length, which is very irritating of it. Sorry about that! If your answer is too long, perhaps you could post it on your own blog and post a link to it here? Or split it up into multiple comments, of course.

¹ Only one Cabinet post for all three? I hear you ask. That's joined-up government for you. Feel free to create as many junior ministers as you think are merited.
pozorvlak: (kittin)
I've been doing some work with Wordpress off and on for the last couple of weeks - migrating a site that uses a custom CMS onto a Wordpress installation - and a couple of times I've run into the following vexing problem when setting up a local Wordpress installation for testing. I couldn't find anything about it on the web, and it took me several hours to debug, so here's a writeup in case someone else has the same problem.

Steps to reproduce: install Wordpress 3.0.5 (as provided by Ubuntu). Using the command-line mysql client, load in a database dump from a Wordpress 3.3.1 site. Visit http://localhost/wordpress (or wherever you've got it installed).

Symptoms: instead of your deathless prose, you see an entirely blank browser window. HTTP headers are sent correctly, but no page content is produced. However, http://localhost/wordpress/wp-admin is displayed correctly, and all your content is in the database.

What's actually going on: Wordpress has decided that the TwentyTen theme is broken, so it's reverting to the default theme. It is hence looking for a theme called "Wordpress Default". But the default theme is actually just called "Default". So it doesn't find a theme, and, since display is handled by the theme files, nothing gets displayed.

How to fix it: go into the admin interface, and select Appearance->Themes. Change the theme to "Default". Your blog is now visible again!

If you wish, you can now change the theme back to TwentyTen: it turns out that it's not actually broken at all.

Thanks to Konstantin Kovshenin for suggesting I turn WP_DEBUG to true in wp-config.php. This allowed me to eventually track down the problem (though, annoyingly, the "theme not found" error was only displayed on the admin page, so I didn't see it for a while).

Next question: this is clearly a bug, but it's a bug in a superseded version. Where should I report it?

Edit: on further thought, I think this may be more to do with the site whose dump I was loading in using a theme that I don't have installed. In which case, the bug may well affect the latest version of Wordpress. But I haven't yet proved this to my satisfaction.
pozorvlak: (polar bear)
You may recall that one of my New Year's Resolutions was to enter and complete the half-marathon event at the Meadows Marathon. Well, I entered it! Now I just have to actually run the thing. This Sunday, to be precise. I've pounded enough cold pavements over the last three months that I'm fairly confident of finishing, though I've no idea whether I'll finish within my target of two hours.

In keeping with the spirit of the event, I'm trying to raise money for the Against Malaria Foundation, who are one of GiveWell.org's two top-rated charities in terms of misery alleviated per dollar donated. It is, in other words, a very good cause. Please sponsor me!

Edit: I completed the race in 1 hour and 37 minutes, despite being hailed on for the last 1.5 laps. Better than that, though, was my friends' generosity: together, they donated over £400 to the Against Malaria Foundation.
pozorvlak: (Default)
It was often uncomfortable, often painful, particularly for the first month, but other days were pure joy, a revelling in the sensation of movement, of strength and wellbeing. My regular headaches stopped. For the first time ever, I got through winter without even a cold. I felt incredibly well, began to walk and hold myself differently. When friends asked "How are you?", instead of the normal Scottish "Oh, not too bad," I'd find myself saying "Extremely well!"

How obnoxious.

On other days training was pure slog, the body protesting and the will feeble. The mind could see little point in getting up before breakfast to run on a cold, dark morning, and none at all in continuing when it began to hurt. Take a break, why not have a breather, why not run for home now?

It is at times like that that the real work is done. It's easy to keep going when you feel strong and good. Anyone can do that. But at altitude it is going to feel horrible most of the time - and that's what you're really training for. So keep on running, through the pain and the reluctance. Do you really expect to get through this Expedition - this relationship, this book, this life for that matter - without some of the old blood, sweat and tears? No chance. That's part of the point of it all. So keep on running...

The real purpose of training is not so much hardening the body as toughening the will. Enthusiasm may get you started, bodily strength may keep you going for a long time, but only the will makes you persist when those have faded. And stubborn pride. Pride and the will, with its overtones of fascism and suppression, have long been suspect qualities - the latter so much so that I'd doubted its existence. But it does exist, I could feel it gathering and bunching inside me as the months passed. There were times when it alone got me up and running, or kept me from whinging and retreating off a Scottish route. The will is the secret motor that keeps driving when the heart and the mind have had enough.

[From Summit Fever.]

Page generated Sep. 2nd, 2014 04:38 pm
Powered by Dreamwidth Studios