pozorvlak: (Default)
Monday, January 2nd, 2017 12:57 pm
Happy New Year! You're all no doubt itching to learn how I got on with my 2016 New Year's Resolutions.

[Content note: weight loss/gain]
pozorvlak: (Default)
Wednesday, February 3rd, 2016 10:34 pm
Following on from my 2015 review post, I should document my New Year's Resolutions for this year. In descending order of priority:
  1. Make a first ascent in the Greater Ranges. Same plan as last year: we've found some 4000m mountains in Kyrgyzstan with no recorded ascents, we've applied for financial assistance from the Mount Everest Foundation (who exist to fund this sort of thing), I've booked the time off, I'm learning Russian via a combination of night classes and Duolingo (befriend me here!), and I'm training in earnest. Speaking of which:
  2. Get my body mass below 70kg, from a starting point of 81.2kg on New Year's Day. I want to retain a few kilos of body fat, because food is heavy and burning fat on a route is way better than burning muscle, but every extra gram of body mass will have to be carried 2000m up a mountain, at altitude, as fast as possible. If you've never done hard physical work at altitude while overweight, let me share a secret with you: it is Not Fun. I'm tracking my weight using the Libra app for Android, which implements Hacker's Diet-style smoothing on your noisy daily weigh-in data; calorie intake via MyFitnessPal; and calorie expenditure via a FitBit exercise-tracking band, because MyFitnessPal's calorie-per-hour estimates for most forms of exercise are laughably high. FitBit can sync calories-burned to MFP, which I currently have set up; Libra doesn't sync to either of them, which is annoying, but I really want the trend rather than the raw weight data. FitBit also have a native food-tracking system, so I may ditch MFP at some point.
  3. Show up for work in a timely fashion. This is something I struggle with horribly at the moment. I almost always arrived in time for our morning standup meeting at my last job, but now I'm working remotely as part of a distributed team, and we don't have any equivalent for that. I've just signed up for Beeminder and created a "Do Less" goal with units of "minutes late to work", and a fairly generous weekly target; we'll see how that goes.
  4. Actually do some work while I'm there. Not sure how to make this SMART or how to achieve it. The Pomodoro technique is... moderately effective, if I actually start doing it (which is much easier if I show up not-too-late in the morning). RescueTime integrates with Beeminder, so I could set myself a goal for "spend more time looking at an IDE, terminal or job-related websites" (or one for "spend less time looking at blogs and social media"). By the way, a pet peeve: if you're reading about the Pomodoro Technique and thinking "sounds interesting, but 25 minutes isn't enough" then you are not the kind of person who needs it. 25 minutes is a major challenge for some of us.
  5. Read an average of one book a week. You can follow my progress on this one at Goodreads.
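The Hacker's Diet-style smoothing that Libra applies to weigh-in data (resolution 2) is just an exponentially-smoothed moving average: each day the trend moves a fixed fraction of the way towards the raw reading, damping out daily noise. A minimal sketch in Python, assuming the book's usual 10% smoothing factor (the sample readings are made up):

```python
def trend(weights, alpha=0.1):
    """Exponentially smoothed moving average of daily weigh-ins.

    Each day the trend moves alpha (10%) of the way towards the raw
    weight, damping day-to-day noise from water weight, meal timing, etc.
    """
    smoothed = []
    t = weights[0]  # seed the trend with the first weigh-in
    for w in weights:
        t += alpha * (w - t)
        smoothed.append(round(t, 2))
    return smoothed

# Noisy daily weigh-ins around a slowly falling true weight:
readings = [81.2, 81.6, 80.9, 81.3, 80.7, 81.0, 80.4]
print(trend(readings))
```

The trend line lags the raw data, which is the point: it answers "am I actually losing weight?" rather than "what did the scale say today?".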

pozorvlak: (Default)
Thursday, December 31st, 2015 08:18 pm

Last year I made three New Year's Resolutions:

  1. Get better at dealing with money.
  2. Run a marathon.
  3. Make a first ascent in the Greater Ranges.

Number 2 was an obvious success: I finished the Edinburgh Marathon in 4:24:04, and raised nearly £900 for the Against Malaria Foundation. I'd been hoping to get a slightly faster time than that, but I lost several weeks of training to a chest infection near to the end of my training programme, so in the end I was very happy to finish under 4:30. The actual running was... mostly Type II fun, but also much less miserable than many of my training runs, even at mile 21 when I realised that literally everything below my navel hurt. Huge thanks to everyone who sponsored me!

Number 3 was an equally obvious failure. My climbing partner and I picked out an unclimbed mountain in Kyrgyzstan and got a lot of the logistics sorted, but then he moved house and started a new job a month before we were due to get on a plane to Bishkek. With only a few weeks to go and no plane tickets or insurance bought yet (and them both being much more expensive than we'd expected - we'd checked prices months earlier, but forgot how steeply costs rise as time goes on), we regretfully pulled the plug. We're planning to try again in 2016 - let's hope all the good lines don't get nabbed by Johnny-come-lately Guardian readers.

Number 1 was a partial success. I tried a number of suggestions from friends who appear to have their financial shit more together than me (not hard), but couldn't get any of them to stick. I was diagnosed with ADHD at the end of 2014; I don't want to use that as an excuse, but it does mean that some things that come easily to most people are genuinely difficult for me - and financial mismanagement is apparently very common among people with ADHD. The flip-side, though, is that I have a license to do crazy or unusual things if they help me be effective, because I have an actual medical condition.

I've now set up the following system:

  • my salary (minus taxes and pension contributions) is paid into Account 1;
  • a couple of days later, most of it is transferred by standing order into Account 2;
  • all bills are paid from Account 2 by direct debit, and Account 2 should maintain enough of a balance for them to always clear;
  • money left in Account 1 is available for spending on day-to-day things;
  • if I pay for something on a credit card, I pay it off from Account 1 (if small) or Account 2 (if big) as soon as possible;
  • Account 2 pays interest up to a certain ceiling; above that I'm going to transfer money out into a tax-efficient Account 3, which pays less interest but which doesn't have a ceiling.

I'll have to fine-tune the amount left in Account 1 with practice, but this system should ensure that bills get paid, I can easily see how much money I have left to spend for the month, and very little further thought or effort on my part is required.

While I was in there, I took the opportunity to set up a recurring donation to the Against Malaria Foundation for a few percent of my net salary - less than the 10% required to call yourself an Official Good Person by the Effective Altruism movement, but I figure I can work up to it.

It's too early to say whether the system will work out, but setting it up has already been a beneficial exercise - before, I had seven accounts with five different providers, most of them expired and paying almost zero interest (in one file, I found seven years' worth of letters saying "Your investment has expired and is now paying 0.1% gross interest, please let us know what you want us to do with it.") I now have only the three accounts described above, from two different providers, so it should be much easier to keep track of my overall financial position. Interest rates currently suck in general, but Accounts 2 and 3 at least pay a bit.

I've also started a new job that pays more, and [profile] wormwood_pearl's writing is starting to bring in some money. We're trying not to go mad and spend our newfound money several times over, but we're looking to start replacing some broken kit over the next few months rather than endlessly patching things up.

What else has happened to us?

I had a very unsuccessful winter climbing season last year; I was ill a lot from the stress of marathon training, and when I wasn't ill the weather was terrible. I had a couple of good sessions at the Glasgow ice-climbing wall, but only managed one actual route. Fortunately, it was the classic Taxus on Beinn an Dothaidh, which I'd been wanting to tick for a while. I also passed the half-way mark on the Munros on a beautiful crisp winter day in Glencoe.

One by one, my former research group's PhD students finished, passed their vivas, submitted their corrections, and went off, hopefully, to glittering academic careers or untold riches in Silicon Valley. Good luck to them all.

In June, I did the training for a Mountain Leadership award, the UK's introductory qualification for leading groups into the hills. Most of the others on the course were much fitter than me and more competent navigators, but the instructor said I did OK. To complete the award, I'll need to log some more Quality Mountain Days and do a week-long assessment.

In July, we went to Mat Brown's wedding in Norfolk, and caught up with some friends we hadn't seen IRL for far too long. Unlike last year, when it felt like we were going to a wedding almost every weekend, we only went to one wedding this year; I'm glad it was such a good one. Also, it was in a field with camping available, which really helped to keep our costs down.

In July, I started a strength-training cycle. I've spent years thinking that my physical peak was during my teens, when I was rowing competitively (albeit badly) and training 15-20 hours a week, so I was surprised to learn that I was able to lift much more now than I could then - 120kg squats versus around 90kg (not counting the 20kg of body weight I've gained since then). Over the next few weeks, I was able to gain a bit more strength, and by the end I could squat 130kg. I also remembered how much I enjoy weight training - so much less miserable than cardio.

In August, we played host to a few friends for the Edinburgh Fringe, and saw some great shows, of which my favourite was probably Jurassic Park.

In September, we went to Amsterdam with friends for a long weekend, saw priceless art and took a canal tour; then I got back, turned around within a day and went north for a long-awaited hiking trip to Knoydart with my grad-school room-mate. There are two ways to get to Knoydart: either you can take the West Highland Line right to the end at Mallaig, then take the ferry, or you can get off at Glenfinnan (best known for the viaduct used in the Harry Potter films) and walk North for three days, sleeping in unheated huts known as bothies. We did the latter, only it took us six days because we bagged all the Munros en route. I'm very glad we did so. The weather was cold but otherwise kind to us, the insects were evil biting horrors from Hell, and the starfields were amazing. It wasn't Kyrgyzstan, but it was the best fallback Europe had to offer.

In October, I started a new job at Red Hat, working on the OpenStack project, which is an open-source datacenter management system. It's a huge, intimidating codebase, and I'm taking longer than I'd like to find my feet, but I like my team and I'm slowly starting to get my head around it.

That's about it, and it's five minutes to the bells - Happy New Year, and all the best for 2016!

pozorvlak: (Default)
Monday, August 19th, 2013 09:10 pm
If I ever design a first course in compilers, I'll do it backwards: we'll start with code generation, move on to static analysis, then parse a token stream and finally lex source text. As we go along, students will be required to implement a small compiler for a simple language, using the techniques they've just learned. The (excellent) Stanford/Coursera compilers course does something similar, but they proceed in the opposite direction, following the dataflow in the final compiler: first they cover lexing, then parsing, then semantic analysis, then codegen. The first Edinburgh compilers course follows roughly the same plan of lectures, and I expect many other universities' courses do too.

I think a backwards course would work better for two reasons:
  1. Halfway through the Stanford course, you have a program that can convert source text into an intermediate representation with which you can't do very much. Halfway through the backwards course, you'd have a compiler for an unfriendly source language: you could write programs directly in whatever level of IR you'd got to (I'm assuming a sensible implementation language that doesn't make entering data literals too hard), and compile them to native code using the code you'd written. I think that would be pretty motivating.
  2. When I did the Stanford course, all the really good learning experiences were in the back end. Writing a Yacc parser was a fiddly but largely straightforward experience; writing the code generator taught me loads about how your HLL code is actually executed and how runtime systems work. I also learned some less obvious things like the value of formal language specifications¹. Most CS students won't grow up to be compiler hackers, but they will find it invaluable to have a good idea of what production compilers do to their HLL code; it'll be much more useful than knowing about all the different types of parsing techniques, anyway². Students will drop out halfway through the course, and even those who make it all the way through will be tired and stressed by the end and will thus take in less than they did at the beginning: this argues for front-loading the important material.
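To make the first point concrete, here's what a hypothetical week-one exercise for the backwards course might look like, sketched in Python: a code generator from a Lisp-ish expression "IR" (nested tuples, cheap to enter directly as data literals) to a toy stack machine, plus an interpreter for that machine so students can run their compiled output immediately. All the names and the instruction set here are invented for illustration:

```python
# IR: a literal int, or a tuple (op, lhs, rhs) with op in {'+', '-', '*'}.
# Target: a toy stack machine with PUSH/ADD/SUB/MUL instructions.

def codegen(expr):
    """Emit stack-machine code for an expression entered directly as IR."""
    if isinstance(expr, int):
        return [('PUSH', expr)]
    op, lhs, rhs = expr
    opcode = {'+': 'ADD', '-': 'SUB', '*': 'MUL'}[op]
    return codegen(lhs) + codegen(rhs) + [(opcode,)]

def run(program):
    """Interpret the generated code, so the compiler is testable from day one."""
    stack = []
    for instr in program:
        if instr[0] == 'PUSH':
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({'ADD': a + b, 'SUB': a - b, 'MUL': a * b}[instr[0]])
    return stack.pop()

# (1 + 2) * (7 - 3), written directly in the IR:
program = codegen(('*', ('+', 1, 2), ('-', 7, 3)))
print(run(program))  # 12
```

From there, each subsequent week peels one layer off the front: semantic analysis over a richer IR, then a parser producing the tuples, then a lexer.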
What am I missing?

¹ I learned this the hard way. I largely ignored the formal part of the spec when writing my code generator, relying instead on the informal description; then towards the end of the allocated period I took a closer look at it and realised that it provided simple answers to all the thorny issues I'd been struggling with.
² The answer to "how should I write this parser?" in an industrial context is usually either "with a parser generator" or "recursive descent". LALR parsers such as those produced by Yacc are a pain to debug if you don't understand the underlying theory, true, but that's IMHO an argument for using a parser generator based on some other algorithm, most of which are more forgiving.
pozorvlak: (Hal)
Thursday, January 24th, 2013 09:59 pm
[Wherein we review an academic conference in the High/Low/Crush/Goal/Bane format used for reviewing juggling conventions on rec.juggling.]

High: My old Codeplay colleague Ally Donaldson's FAT-GPU workshop. He was talking about his GPUVerify system, which takes CUDA or OpenCL programs and either proves them free of data races and synchronisation-barrier conflicts, or finds a potential bug. It's based on an SMT solver; I think there's a lot of scope to apply constraint solvers to problems in compilation and embedded system design, and I'd like to learn more about them.

Also, getting to see the hotel's giant fishtank being cleaned, by scuba divers.

Low: My personal low point was telling a colleague about some of the problems my depression has been causing me, and having him laugh in my face - he'd been drinking, and thought I was exaggerating for comic effect. He immediately apologised when I told him that this wasn't the case, but still, not fun. The academic low point was the "current challenges in supercomputing" tutorial, which turned out to be a thinly-disguised sales pitch for the sponsor's FPGA cards. That tends not to happen at maths conferences...

Crush: am I allowed to have a crush on software? Because the benchmarking and visualisation infrastructure surrounding the Sniper x86 simulator looks so freaking cool. If I can throw away the mess of Makefiles, autoconf and R that serves the same role in our lab I will be very, very happy.

Goal: Go climbing on the Humboldthain Flakturm (fail - it turns out that Central Europe is quite cold in January, and nobody else fancied climbing on concrete at -7C). Get my various Coursera homeworks and bureaucratic form-filling done (fail - damn you, tasty German beer and hyperbolic discounting!). Meet up with [livejournal.com profile] maradydd, who was also in town (fail - comms and scheduling issues conspired against us. Next time, hopefully). See some interesting talks, and improve my general knowledge of the field (success!).

Bane: I was sharing a room with my Greek colleague Chris, who had a paper deadline on the Wednesday. This meant he was often up all night, and went to bed as I was getting up, so every trip into the room to get something was complicated by the presence of a sleeping person. He also kept turning the heating up until it was too hot for me to sleep. Dually, of course, he had to share his room with a crazy Brit who kept getting up as he was going to bed and opening the window to let freezing air in...
pozorvlak: (Hal)
Sunday, December 9th, 2012 09:17 pm
I've been using Mercurial (also known as hg) as the version-control system for a project at work. I'd heard good things about it - a Git-like system with a cleaner UI and better documentation - and was glad of the excuse to try it out. Unfortunately, I was disappointed by what I found. The docs are good, and the UI's a bit cleaner, but it's still got some odd quirks - the difference between hg resolve and hg resolve -m catches me every bloody time, for instance. Unlike Git, you aren't prompted to set missing configuration options interactively. Some of the defaults are crazy, like not sending long output to a pager. And having got used to easy, safe history-rewriting in Git, I was horrified to learn that Mercurial offered no such guarantees of safety: up until version 2.2, the equivalent of a simple commit --amend could cause you to lose work permanently. Easy history-rewriting is a big deal; it means that you never have to choose between committing frequently and only pushing easily-reviewable history.

But I persevered, and with a bit of configuration I was able to make hg more comfortable (which mostly means: more like Git). Here's my current .hgrc:
[ui]
username = Pozorvlak <pozorvlak@example.com>
merge = internal:merge

[pager]
pager = LESS='FSRX' less

[extensions]
rebase =
record =
histedit = ~/usr/etc/hg/hg_histedit.py
fetch =
shelve = ~/usr/etc/hg/hgshelve.py
pager =
mq =
color =

You'll need at least the username line, because of the aforementioned lack of interactive configuration. The pager = LESS='FSRX' less and pager = lines send long output to less instead of letting it all spew out and overflow your console scrollback buffer. merge = internal:merge tells it to use its internal merge algorithm as a merge tool, and put ">>>>" gubbins in files in the event of conflicts. Otherwise it uses meld for merges on my machine; meld is very pretty but not history-aware, and history-aware merges are at least 50% of the point of using a DVCS in the first place. The rebase extension allows you to graft a sequence of changesets onto another part of the history graph, like git rebase; the record extension allows you to select only some of the changes in your working copy for committing, like git add -p or darcs record; the fetch extension lets you do pull-and-merge in one operation - confusingly, git pull and git fetch are the opposite way round from hg fetch and hg pull. The mq extension turns on patch queues, which I needed for some hairy operation or other once. The non-standard histedit extension works like git rebase --interactive but not, I believe, as safely - dropped commits are deleted from the history graph entirely rather than becoming unreachable from an active head. The non-standard shelve extension works like git stash, though less conveniently - once you've shelved one change you need to give a name to all subsequent ones. Perhaps a Mercurial expert reading this can tell me how to delete unwanted shelves? Or about some better extensions or settings I should be using?
pozorvlak: (Hal)
Thursday, December 6th, 2012 11:41 pm

I've been running benchmarks again. The basic workflow is

  1. Create some number of directories containing the benchmark suites I want to run.
  2. Tweak the Makefiles so benchmarks are compiled and run with the compilers, simulators, libraries, flags, etc, that I care about.
  3. Optionally tweak the source code to (for instance) change the number of iterations the benchmarks are run for.
  4. Run the benchmarks!
  5. Check the output; discover that something is broken.
  6. Swear, fix the problem.
  7. Repeat until either you have enough data or the conference submission deadline gets too close and you are forced to reduce the scope of your experiments.
  8. Collate the outputs from the successful runs, and analyse them.
  9. Make encouraging noises as the graduate students do the hard work of actually writing the paper.

Suppose I want to benchmark three different simulators with two different compilers for three iteration counts. That's 18 configurations. Now note that the problem found in stage 5 and fixed in stage 6 will probably not be unique to one configuration - if it affects the invocation of one of the compilers then I'll want to propagate that change to nine configurations, for instance. If it affects the benchmarks themselves or the benchmark-invocation harness, it will need to be propagated to all of them. Sounds like this is a job for version control, right? And, of course, I've been using version control to help me with this; immediately after step 1 I check everything into Git, and then use git fetch and git merge to move changes between repositories. But this is still unpleasantly tedious and manual. For my last paper, I was comparing two different simulators with three iteration counts, and I organised this into three checkouts (x1, x10, x100), each with two branches (simulator1, simulator2). If I discovered a problem affecting simulator1, I'd fix it in, say, x1's simulator1 branch, then git pull the change into x10 and x100. When I discovered a problem affecting every configuration, I checked out the root commit of x1, fixed the bug in a new branch, then git merged that branch with the simulator1 and simulator2 branches, then git pulled those merges into x10 and x100.

Keeping track of what I'd done and what I needed to do was frankly too cognitively demanding, and I was constantly bedevilled by the sense that there had to be a Better Way. I asked about this on Twitter, and Ganesh Sittampalam suggested "use Darcs" - and you know, I think he's right, Darcs' "bag of commuting patches" model is a better fit to what I'm trying to do than Git's "DAG of snapshots" model. The obvious way to handle this in Darcs would be to have six base repositories, called "everything", "x1", "x10", "x100", "simulator1" and "simulator2"; and six working repositories, called "simulator1_x1", "simulator1_x10", "simulator1_x100", "simulator2_x1", "simulator2_x10" and "simulator2_x100". Then set up update scripts in each working repository, containing, for instance

darcs pull ../base/everything
darcs pull ../base/simulator1
darcs pull ../base/x10
and every time you fix a bug, run for i in working/*; do $i/update; done.

But! It is extremely useful to be able to commit the output logs associated with a particular state of the build scripts, so you can say "wait, what went wrong when I used the -static flag? Oh yeah, that". I don't think Darcs handles that very well - or at least, it's not easy to retrieve any particular state of a Darcs repo. Git is great for that, but whenever I think about duplicating the setup described above in Git my mind recoils in horror before I can think through the details. Perhaps it shouldn't - would this work? Is there a Better Way that I'm not seeing?

pozorvlak: (Default)
Sunday, September 9th, 2012 01:12 pm
Remember how a few years ago PCs were advertised with the number of MHz or GHz their processors ran at prominently featured? And how the numbers were constantly going up? You may have noticed that the numbers don't go up much any more, but now computers are advertised as "dual-core" or "quad-core". The reason that changed is power consumption. Double the clock speed of a chip, and you more than double its power consumption: with the Pentium 4 chip, Intel hit a clock speed ceiling as their processors started to generate more heat than could be removed.

But Moore's Law continues in operation: the number of transistors that can be placed on a given area of silicon has continued to double every eighteen months, as it has done for decades now. So how can chip makers make use of the extra capacity? The answer is multicore: placing several "cores" (whole, independent processing units) onto the same piece of silicon. Your chip can still do twice as much work as the one from eighteen months ago, but only if you split that work up into independent tasks.

This presents the software industry with a problem. We've been conditioned over the last fifty years to think that the same program will run faster if you put it on newer hardware. That's not true any more. Computer programs are basically recipes for use by particularly literal-minded and stupid cooks; imagine explaining how to cook a complex meal over the phone to someone who has to be told everything. If you're lucky, they'll have the wit to say "Er, the pan's on fire: that's bad, right?". Now let's make the task harder: you're on the phone to a room full of such clueless cooks, and your job is to get them to cooperate in the production of a complex dinner due to start in under an hour, without getting in each other's way. Sounds like a farce in the making? That's basically why multicore programming is hard.

But wait, it gets worse! The most interesting settings for computation these days are mobile devices and data centres, and these are both power-sensitive environments; mobile devices because of limited battery capacity, and data centres because more power consumption costs serious money on its own and increases your need for cooling systems which also cost serious money. If you think your electricity bill's bad, you should see Google's. Hence, one of the major themes in computer science research these days is "you know all that stuff you spent forty years speeding up? Could you please do that again, only now optimise for energy usage instead?". On the hardware side, one of the prominent ideas is heterogeneous multicore: make lots of different cores, each specialised for certain tasks (a common example is the Graphics Processing Units optimised for the highly-parallel calculations involved in 3D rendering), stick them all on the same die, farm the work out to whichever core is best suited to it, and power down the ones you're not using. To a hardware person, this sounds like a brilliant idea. To a software person, this sounds like a nightmare: now imagine that our Hell's Kitchen is full of different people with different skills, possibly speaking different languages, and you have to assign each task to the person best suited to carrying it out.

The upshot is that heterogeneous multicore programming, while currently a niche field occupied mainly by games programmers and scientists running large-scale simulations, is likely to get a lot more prominent over the coming decades. And hence another of the big themes in computer science research is "how can we make multicore programming, and particularly heterogeneous multicore programming, easier?" There are two aspects to this problem: what's the best way of writing new code, and what's the best way of porting old code (which may embody complex and poorly-documented requirements) to take advantage of multicore systems? Some of the approaches being considered are pretty Year Zero - the functional programming movement, for instance, wants us to write new code in a tightly-constrained way that is more amenable to automated mathematical analysis. Others are more conservative: for instance, my colleague Dan Powell is working on a system that observes how existing programs execute at runtime, identifies sections of code that don't interfere with each other, and speculatively executes them in parallel, rolling back to a known-good point if it turns out that they do interfere.

This brings us to the forthcoming Coursera online course in Heterogeneous Parallel Programming, which teaches you how to use the existing industry-standard tools for programming heterogeneous multicore systems. As I mentioned earlier, these are currently niche tools, requiring a lot of low-level knowledge about how the system works. But if I want to contribute to projects relating to this problem (and my research group has a lot of such projects) it's knowledge that I'll need. Plus, it sounds kinda fun.

Anyone else interested?
pozorvlak: (Default)
Wednesday, October 19th, 2011 12:08 pm
Previously on posts tagged with 'angst' (mostly friendslocked): our hero became depressed, sought help, and is now (mostly) feeling better. Now read on...

John Walker, in his excellent book The Hacker's Diet, distinguishes two approaches to tackling problems, which he calls the Manager's approach and the Engineer's approach. Management is about monitoring and ameliorating chronic problems to keep their symptoms to within an acceptable level; engineering is about solving problems outright. Most difficult problems, he claims, must be tackled using a combination of both approaches.

My problem, which might be summarised as "I became depressed because I suck at my job", has so far responded well to a management approach: leaning on friends, attending CBT workshops, prioritizing exercise, talking to my supervisor about my problems. I'm very grateful to all of you for your support, and to the Glasgow NHS mental health team. Now that I'm feeling a bit more spoonful, it's time to apply the engineer's approach, and suck less at my job.

[A perhaps more honourable alternative would be to find another job at which I wouldn't suck, but that would be fraught with risk, and my current job has much to recommend it. Besides, I'm not sure that such a job exists.]

I have three basic problems:
  1. I'm not a good enough programmer;
  2. I don't know enough about the problem domain;
  3. I am now effectively an experimental scientist, but I don't know anything about experiment design.

On the first point, I'm reasonably happy with the readability of my code, but I'm unhappy with its testability and correctness, and I'm very unhappy with the time it takes me to produce it. I'm frequently struck by analysis paralysis. I've spent most of my programming career working with high-level languages, so I'm not very good at lower-level programming. I think the only solution to this problem is to write (hundreds of) thousands of lines of code, at as low a level as possible.

On the second point: before starting this job, I'd worked at a compiler vendor and at a web startup which did some machine-learning; back in high school, I'd done some assembly programming for an embedded system. I'd also done a bit of background reading on compiler theory. It turned out that this was insufficient preparation for a job using machine-learning to improve the state of the art in compilers targeting embedded systems. Astonishing, I know.

There used to be a cute slide in the Edinburgh University first compilers course:

In the front end, everything's polynomial. In the back end, everything's NP-complete. In the middle-end, everything's uncomputable.

Now, that's true for compiler theory, and it explains why compiler research is still a going concern after sixty years, but it doesn't explain why day-to-day hacking on compilers is hard. For me at least, that's because hacking on compilers is systems programming. You need to know, at least to the level of "understand the basic principles of and can look up the details if needed", about things like addressing modes, instruction set architecture design, executable layout, and calling conventions. Forget the fat guys who know C++ who apparently keep Silicon Valley running; I work with guys who know Verilog and the GNU ld control language.

[Betcha didn't know that the GNU linker embeds a Turing-equivalent programming language :-)]

Now, none of this stuff is especially difficult as far as I can see. But it's a lot of background knowledge to have, and if you lack it then you'll be constantly hitting problems that you lack the mental toolkit to address. So here's what I'm doing about it:
  • To address the patchiness of my knowledge about machine-learning, I went to the library and got out a couple of ML textbooks, one of which I've now read. I've also signed up to the free online machine-learning class from Stanford, and, while I was at it, the Introduction to AI class too.
  • To address my ignorance of architectural issues, I'm auditing the computer design undergrad course at Edinburgh; when I've finished that, I'll audit the follow-on course in computer architecture. So far it's mostly been the sort of boolean algebra and digital electronics that I learned at my mother's knee, but there's a lot of stuff I don't know later on in the course.
  • Linkers are a particular problem for me; I think linker errors are the Universe's way of telling me what it's like for a non-techie who thinks they need to press the "any" key. [livejournal.com profile] zeecat kindly lent me a copy of Levine's book Linkers and Loaders, which I am now reading; as an unexpected bonus, one of the early chapters is a 30,000ft tour of computer architecture. To my delight, the book walks you through the construction of a simple linker in Perl.
  • To address my lack of C and assembly language experience, to solidify my understanding of basic compiler theory, and to give me a testbed for implementing some optimisation algorithms later on, I started writing a simple compiler in C. Currently it accepts a simple term/factor/expression grammar and outputs x86 assembly; the plan is to extend it so it accepts a subset of C. "Compiler" is a bit of a joke at this stage; it is technically a compiler, but one that does really aggressive constant folding :-) I haven't hacked on this for a while, because work got in the way, but I intend to pick it up again soon.
  • To address my ignorance of C minutiae, I started reading through the comp.lang.c FAQ, at Aaron Crane's suggestion. This is also stalled, as is a project to convert said FAQ to a collection of Mnemosyne flashcards.
  • The world seems to be standardising on Hadoop as a clustering solution; I should try that out at some point.
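
In case anyone's wondering what a "compiler that's really just an aggressive constant folder" looks like, here's a hypothetical sketch in Python (the real thing is in C; this is my reconstruction of the idea, not the actual code): a recursive-descent parser for the term/factor/expression grammar that folds every expression down to a single constant and emits one x86 instruction.

```python
import re

def tokenize(src):
    # integers, arithmetic operators and parentheses only
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expression(self):          # expression := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.next()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):                # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.next()
            rhs = self.factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    def factor(self):              # factor := NUMBER | '(' expression ')'
        if self.peek() == "(":
            self.next()
            value = self.expression()
            self.next()            # consume ')'
            return value
        return int(self.next())

def compile_expr(src):
    # "code generation": the whole program has folded to one constant,
    # so we emit a single mov into %eax and return
    value = Parser(tokenize(src)).expression()
    return "\tmovl ${0}, %eax\n\tret\n".format(value)
```

So `compile_expr("2+3*(4-1)")` emits a single `movl $11, %eax` - technically compilation, as promised.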

So, anything else I should be doing? Anything in that list I should not be doing?

I have no real idea how to start addressing my third problem, namely my ignorance of experimental design. Go to the library and get another book out, I guess. Anyone got any recommendations?
pozorvlak: (babylon)
Monday, October 17th, 2011 12:38 pm
I was recently delighted to receive an email from someone saying that he'd just started a PhD with my old supervisor, and did I have any advice for him? You'll be unsurprised to learn that I did; I thought I'd post it here in the hope that someone else might find it useful. Some of what follows is specific to my supervisor, my field, or my discipline; most is more general. Your mileage may vary.
  • Your main problem for the next 3-4 years will be maintaining morale. Don't beat yourself up for slow/no progress. Do make sure you're eating, sleeping and exercising properly. Consider doing some reading about cognitive behavioural therapy so you can spot negative thought-patterns before they start to paralyse you.
  • Try to get some structure in your life. Weekly meetings are a minimum. Set yourself small deadlines. Don't worry overly if you miss them: if this stuff were easy to schedule, they wouldn't call it "research".
  • Sooner or later you'll discover that something you're working on has already been done, probably by Kelly. Do not panic. Chances are that one of the following is true:
    • his technique applies in some different domain (actually check this, because folklore often assigns greater utility to theorems than they actually possess)
    • your technique is obviously different (so there's an equivalence theorem to prove - or maybe not...)
    • your technique can be generalised or specialised or reapplied in some way that his can't.
  • Start writing now. I know everyone says this, but it's still good advice. It doesn't matter if you don't think you've got anything worth writing up yet. Write up background material. Write up rough notes. The very act of writing things up will suggest new ideas. And it will get you familiar with TeX, which is never a bad thing. As a category theorist, you will probably need to become more familiar with TeX than the average mathematician. And writing is mostly easier than doing mathematics - important, since you'll need something to do on those days when you just don't have enough energy for actual research.
  • Even if you don't start writing, you should certainly start maintaining a bibliography file, with your own notes in comments.
  • Speaking of fluctuating energy, you should read Terry Tao's advice on time management for mathematicians.
  • Keep your TeX source in version control. It's occasionally very helpful to be able to refer back and find out what changed when and why, and using a properly-designed system avoids the usual mess of thesis.old.tex.bak files lying around in your filesystem. I like Git, but other systems exist. Mercurial is meant to be especially nice if you haven't used version control before.
  • Make sure you have up-to-date backups (perhaps via a source-code hosting site like GitHub or BitBucket). And try to ensure you have access to a spare machine. You don't want to be futzing around with screwdrivers and hard drive enclosures when you've got a deadline.
  • Tom's a big fan of using rough sheets of paper to write on in supervision meetings [and perhaps your supervisor will be too, O reader]. You'll need to find a way of filing these or otherwise distilling them so that they can be referred to later. I never managed this.
  • For my own rough working, I like paper notebooks, which I try to carry around with me at all times. Your mileage may vary. Some people swear by a personal wiki, and in particular the TiddlyWiki/Dropbox combo.
  • Speaking of filing: the book Getting Things Done (which I recommend, even if I don't manage to follow most of its advice myself) recommends a simple alphabetical filing system for paper documents, with those fold-over cardboard folders (so you can pick up your whole file for a given topic and cart it around with you). I find this works pretty well. Make sure you have some spare folders around so you can easily spin up new files as needed.
  • Don't be afraid to read around your field, even if your supervisor advises you not to. I really wish I'd ignored mine and read more about rewriting systems, for instance.
  • Try to seize that surge of post-conference inspiration. My major theorem was proved in the airport on the way back from a conference. Also, airports make great working environments at 2am when hardly anybody's around :-)
  • Don't forget that if things get too bad, you can quit. Sometimes that's the best choice. I know several people who've dropped out of PhD programmes and gone on to happy lives.
  • The supply of newly-minted PhDs now outstrips the number of academic jobs available to them, and category theory's a niche and somewhat unfashionable field (in maths, at least - you may well have more luck applying to computer science departments. Bone up on some type theory). When you get to the end of your studies, expect finding an academic job to take a long time and many iterations. Try to have a backup plan in case nothing comes up. Let's hope the economy's picked up by then :-)
pozorvlak: (Default)
Tuesday, July 26th, 2011 10:31 am
Back in 2003 when I was working for $hateful_defence_contractor, we were dealing with a lot of quantities expressed in dB. Occasionally there was a need to add these things - no, I can't remember why. Total power output from multiple sources, or something. Everyone cursed about this. So I wrote a desk calculator script, along these lines:

print "> ";
while (<>) {
    s{(-?\d+(\.\d+)?([eE][-+]?\d+)?)}{10**($1/10)}ge;  # each number is dB: convert to a raw ratio
    print 10*log(eval $_)/log(10)."\n> ";
}
I've always thought of this as The Most Evil Code I've Ever Written. For those of you who don't speak fluent regex, it reads a line of input from the user, interprets everything that looks like a number as a number of decibels, replaces each decibel-number with the non-decibel equivalent, evaluates the resulting string as Perl code, and then converts the result back into decibels. Here's an example session:
> 1 + 1
> 10 * 10 
> cos(1)
Some of you are no doubt thinking "Well of course that code's evil, it's written in Perl!" But no. Here's the same algorithm written in Python:
#!/usr/bin/python -u
import re, math, sys

def repl(match):
        num = float(match.group(0))
        return str(10**(num/10))

number = re.compile(r'-?\d+(\.\d+)?([eE][-+]?\d+)?')
while 1:
        line = sys.stdin.readline()
        if len(line) == 0:
                break
        line = re.sub(number, repl, line)
        print 10*math.log10(eval(line))
If anything, the Perl version is simpler and has lower accidental complexity. If Perl's not the best imaginable language for expressing that algorithm, it's damn close.

[I also tried to write a Haskell version using System.Eval.Haskell, but I got undefined reference to `__stginit_pluginszm1zi5zi1zi4_SystemziEvalziHaskell_' errors, suggesting my installation of cabal is b0rked. Anyone know what I need to do to fix it? Also, I'm sure my Python style can be greatly improved - suggestions most welcome.]

No, I thought of it as evil because it's doing an ugly thing: superficially altering code with regexes and then using string eval? And who the hell adds decibels, anyway?

Needless to say, it was the most successful piece of code I wrote in the year I spent in that job.

I was talking about string eval to Aaron Crane the other day, and I mentioned this program. His response surprised me:
I disagree; I think it’s a lovely piece of code. It may not be a beautiful jewel of elegant abstractions for a complex data model, true. But it’s small, simple, trivial to write, works on anything with a Perl interpreter (of pretty much any vintage, and with no additional dependencies), and clearly correct once you’ve thought about the nature of the arithmetic you’re doing. While it’s not something you’d ship as part of a safety-critical system, for example, I can’t see any way it could be realistically improved as an internal tool, aimed at users who are aware of its potential limitations.
[Full disclosure: the Perl version above didn't work first time. But the bug was quickly found, and it did work the second time :-)]

The lack of external dependencies (also a virtue of the Python version, which depends only on core modules) was very much intentional: I wrote my program so it could be trivially distributed (by samizdat, if necessary). Most of my colleagues weren't Perl programmers, and if I'd said "First, install Regexp::Common from CPAN...", I'd have lost half my potential audience. As it was, the tool was enthusiastically adopted.

So, what do you all think? Is it evil or lovely? Or both? And what's the most evil piece of code that you've written?

Edit: Aaron also pointed me at this program, which is both lovely and evil in a similar way. If you don't understand what's going on, type
perl -e 'print join q[{,-,+}], 1..9'
perl -e 'print glob "1{,-,+}2"'
at the command-line.
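
(For the glob-averse: `1{,-,+}2` expands to `12`, `1-2` and `1+2`, so the full pattern enumerates every way of inserting nothing, a minus or a plus between the digits 1..9 - a brute-force attack on the classic "make 100" puzzle. Here's a rough Python equivalent - my sketch, not the linked program:)

```python
# Rough Python equivalent of the glob trick: expand "1{,-,+}2{,-,+}...9"
# into every way of inserting '', '-' or '+' between the digits 1..9.
from itertools import product

def expansions():
    for seps in product(("", "-", "+"), repeat=8):
        yield "".join(d + s for d, s in zip("12345678", seps)) + "9"

all_exprs = list(expansions())                        # 3**8 = 6561 candidates
century = [e for e in all_exprs if eval(e) == 100]    # the classic puzzle
```

One of the hits is `123-45-67+89`, which is rather pleasing.
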
pozorvlak: (Default)
Saturday, July 2nd, 2011 06:37 pm
I'm currently running a lot of benchmarks in my day job, in the hope of perhaps collecting some useful data in time for an upcoming paper submission deadline - this is the "science" part of "computer science". Since getting a given benchmark suite built and running is often needlessly complex and tedious, one of my colleagues has written an abstraction layer in the form of a load of Makefiles. By issuing commands like "make build-eembc2", "make run-utdsp" or "make distclean-dspstone" you can issue the correct command (build/run/distclean) to whichever benchmark suite you care about. The lists of individual benchmarks are contained in .mk files, so you can strip out any particular benchmark you're not interested in.

I want to use benchmark runs as part of the fitness function for a genetic algorithm, so it's important that it run fast, and simulating another processor (as we're doing) is inherently a slow business. Fortunately, benchmark suites consist of lots of small programs, which can be run in parallel if you don't care about measuring wallclock seconds. And make already has support for parallel builds, using the -j option.

But it's always worth measuring these things, so I copied the benchmark code up onto our multi-core number crunching machine, and did two runs-from-clean with and without the -j flag. No speedup. Checking top, I found that only one copy of the simulator or compiler was ever running at a time. What the hell? Time to look at the code:
TARGETS=build run collect clean distclean

%-eembc2: eembc-2.0
        @for dir in $(BMARKS_EEMBC2) ; do \
          if test -d eembc-2.0/$$dir ; then \
            ${MAKE} -C eembc-2.0/$$dir $* ; \
          fi; \
        done
Oh God. Dear colleague, you appear to have taken a DSL explicitly designed to provide parallel tracking of dependencies, and then deliberately thrown that parallelism away. What were you thinking?¹ But it turns out that Dominus' Razor applies here, because getting the desired effect without sacrificing parallelism is actually remarkably hard... )

Doing it in redo instead )

Time to start teaching my colleagues about redo? I think it might be...

¹ He's also using recursive make, which means we're doing too much work if there's much code shared between different benchmarks. But since the time taken to run a benchmark is utterly dominated by simulator time, I'm not too worried about that.
pozorvlak: (Default)
Monday, May 30th, 2011 06:44 pm
I have just written the following commit message:
"Branch over unconditional jump" hack for arbitrary-length brcc.
 - brcc (branch and compare) instructions can have static branch-prediction
   hints, but can only jump a limited distance.
 - Calculating the distance of a jump at expand-time is hard.
 - So instead of emitting just a brcc instruction, we emit an unconditional
   jump to the same place and a branch over it.
 - I added a simple counter to emit/state to number the labels thus introduced.
 - With this commit, I forfeit all right to criticise the hackiness of anyone
   else's code.
 - Though I doubt that will stop me.
pozorvlak: (Default)
Thursday, May 19th, 2011 02:45 pm
I mostly work from home. However, due to stupid IP licensing requirements, much of my work has to be done on a machine physically located in my employer's building. This is OK, because I can login to said machine over the Internet using ssh.

But! My work machine (sentinel) is not visible over the public Internet. First I have to ssh into a gateway machine (rydell), and then ssh from rydell into sentinel. I like to open a lot of xterms at once, and so I'd like this process to be as simple as possible: ideally, I'd like to click one button and get an xterm sshed to sentinel and cd'ed to the directory containing the code I'm currently working on.

Oh, there's another wrinkle: rydell doesn't allow passwordless login using the normal ssh public key infrastructure. Instead, you have to use Kerberos. Kerberos is an authentication protocol developed at MIT that utilises zzzzzz...

Sorry, drifted off for a minute there. The key point about Kerberos is that you ask a keyserver for a time-limited session key, which is decrypted locally using your password. This session key is then used to establish encrypted connections to other servers in the same authentication realm. You never have to send your password over the network, and you don't have to distribute your public key to every host you ever want to talk to. So, once I've acquired a session key by typing kinit and then giving it my password, I should be able to log in to any machine on my employer's network (including sentinel) without typing my password again that day. Which is brilliant.

Except sentinel still isn't visible over the public Internet. So I still need to ssh into rydell and then ssh into sentinel from there. Both of these logins are now passwordless, but this is still annoying. Here are some things I've tried to improve the situation:

The simplest thing that could possibly work

0 $ ssh rydell ssh sentinel
Pseudo-terminal will not be allocated because stdin is not a terminal.

Automating the double-login with expect

#!/usr/bin/expect -f
set timeout 30
match_max 100000
spawn ssh rydell
send "\r"
expect "0 "             ;# prompt
send "ssh sentinel\r"
expect "0 "
send "cde\r"            ;# cd to work directory
interact
This actually works, right until I open a text editor or other ncurses program, and discover that I can't resize my xterm - or rather, that the resize information is not passed on to my programs.

Using sshuttle

sshuttle is a poor man's VPN written by the author of redo. Using the -H option, it allows you to proxy your DNS requests through the remote server's DNS server. So a simple
sshuttle -H -vvr rydell 0/0
at the beginning of the day allows me to ssh directly from my local machine (delight) to sentinel. But! It asks me for my sodding password every single time I do so. This is not what I wanted.

ssh tunnelling

I am too stupid to make sense of the "tunnelling" section of the ssh manpage, but fortunately some Googling turned up this, which describes exactly the case I want.
0 $ ssh -fN -L 9500:sentinel:22 rydell
0 $ ssh -p 9500 pvlak1@localhost
pvlak1@localhost's password: 
Last login: Thu May 19 14:31:32 2011 from rydell.my.employ.er
pvlak1@sentinel 14:34 ~
0 $ 
Yes, my employer is located in Eritrea, what of it? :-) Anyway, you will note that this suffers from the same problem as the previous attempt: I have to type my password for every login. Plus, if the sshuttle manpage is to be believed, tunnelling ssh over ssh is a bad idea performance-wise.

I notice that I am confused. Specifically, I notice that I have the type of confusion that comes from lacking an appropriate conceptual framework for attacking the problem.

Can anyone help?

Edit: Yes! Marco Fontani pointed out that the -t option to ssh allocates a pseudo-terminal, so ssh -t rydell ssh sentinel Does What I Want. Thanks, Marco! And thanks to everyone else who offered suggestions.

Edit 2: hatfinch and [livejournal.com profile] simont (who you may recognise as the author of the ssh client PuTTY) came up with an alternative solution. My .ssh/config now contains the stanza
Host sentinel
    User pvlak1
    ProxyCommand=ssh rydell nohup nc sentinel 22
    HostName sentinel.my.employ.er
This doesn't require me to type a password for every login, does allow me to resize ncurses apps, and feels slightly snappier than ssh -t rydell ssh sentinel, so that's what I'll be using from now on. Thanks very much!
pozorvlak: (Default)
Wednesday, March 30th, 2011 10:57 pm
In The Art of Unix Programming, Eric Raymond lists among his basics of the Unix philosophy the "Rule of Generation":
14. Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
He goes into this idea in more detail in chapter 9 of the same book.

I used to believe this was a good idea, and in many situations (here's a great example) it is. But my current work project, which makes heavy use of code generators and custom minilanguages, has been a crash course (sometimes literally) in the downsides. Here's the latest example.

I've recently been merging in some code a colleague wrote about a year ago, just before I started. As you'd expect, with a year's drift this was a non-trivial exercise, but I eventually got all the diffs applied in (I thought) the right places. Protip: if forced to interact with a Subversion repository, use git as your client. It makes your life so much less unpleasant. Anyway, I finished the textual part of the merge, and compiled the code.

Screens-full of error messages. Oh well, that's not so unexpected.

I'm a big fan of Tilton's Law: "solve the first problem". The chances are good that the subsequent problems are just cascading damage from the first problem; no sense in worrying about them until you've fixed that one. Accordingly, I looked only at the first message: "The variable 'state' has not been declared at line 273".

Hang on...

Git checkout colleagues-branch. Make. No errors.

Git checkout merge-branch. Make. Screens-full of errors.

Git checkout colleagues-branch. Grep for a declaration of "state". None visible.

Clearly, there was some piece of voodoo that I'd failed to merge correctly.

I spent days looking through diffs for something, anything, that I'd failed to merge properly that might be relevant to this problem. I failed.

I then spent some serious quality time with the code-generation framework's thousand-page manual, looking for some implicit-declaration mechanism that might explain why "state" was visible in my colleague's branch, but not in mine. I failed.

Finally, I did what I probably should have done in the first place, and took a closer look at the generated code. The error messages that I was seeing referred to the DSL source code rather than the generated C code, because the code-generator emitted #line directives to reset the C compiler's idea of the current file and line; I could therefore find the relevant section of generated code by grepping for the name of the buggy source file in the gen/ directory.
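
The #line trick itself is simple to reproduce. Here's a hypothetical sketch (mine, not the framework's actual generator) of how a snippet-paster keeps compiler diagnostics pointing at the DSL source:

```python
def paste_snippet(out, snippet, dsl_file, dsl_line, gen_file):
    """Append a user-written snippet to the generated-file lines in `out`
    (assumed to hold the whole generated file, starting at line 1),
    bracketed by #line directives.  A `#line N "f"` directive tells the C
    compiler that the *next* line is line N of file f."""
    out.append('#line {0} "{1}"'.format(dsl_line, dsl_file))
    out.extend(snippet.splitlines())
    # Point the compiler back at the generated file: this directive will
    # sit at line len(out)+1, so the line after it is line len(out)+2.
    out.append('#line {0} "{1}"'.format(len(out) + 2, gen_file))
```

Errors inside the pasted snippet are then reported against `dsl_file`, which is exactly what I was seeing.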

The framework uses code generators for all sorts of things (my favourite generator being the shell script that interprets a DSL to build a Makefile which is used to build another Makefile), but this particular one was used to implement a form of polymorphism: the C snippet you provide is pasted into a honking great switch statement, which switches on some kind of type tag.

I found the relevant bit of generated code, and searched back to the beginning of the function. Yep, "state" was indeed undeclared in that function. And the code generator had left a helpful comment to tell me which hook I needed to use to declare variables or do other setup at the beginning of the function. So that was the thing I'd failed to merge properly!

Git checkout colleagues-branch. Grep for the hook. No results.

And then it hit me.

Like all nontrivial compilers, ours works by making several transformation passes over the code. The first pass parses your textual source-code and spits out a machine-independent tree-structured intermediate representation (IR). There then follow various optimization and analysis passes, which take in IR and return IR. Then the IR is expanded into a machine-specific low-level IR, and finally the low-level IR is emitted as assembly language.
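
(As a toy illustration of that pass structure - with an entirely made-up IR and instruction set, nothing like our compiler's - each pass is just a function from one representation to the next:)

```python
def parse(src):
    # text -> machine-independent tree IR (here, a tuple)
    lhs, op, rhs = src.split()
    return (op, int(lhs), int(rhs))

def optimise(ir):
    # IR -> IR: constant-fold additions
    op, lhs, rhs = ir
    return ("const", lhs + rhs) if op == "+" else ir

def expand(ir):
    # machine-independent IR -> machine-specific low-level IR
    if ir[0] == "const":
        return [("li", "r0", ir[1])]    # load-immediate on a made-up ISA
    raise NotImplementedError(ir)

def emit(low_ir):
    # low-level IR -> assembly text
    return "\n".join(" ".join(str(field) for field in instr) for instr in low_ir)

def compile_source(src):
    rep = src
    for stage in (parse, optimise, expand, emit):
        rep = stage(rep)
    return rep
```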

The code that was refusing to compile was part of the expansion stage. But at the time that code was written, the expansion stage didn't exist: we went straight from the high-level IR to assembly. Adding an expansion stage had been my first task on being hired. Had we been using a language that supported polymorphism natively, that wouldn't have been a problem: the code would have been compiled anyway, and the errors would have been spotted; a smart enough compiler would have pointed out that the function was never called. But because we were using a two-stage generate-and-compile build process, we were in trouble. Because there was no expansion stage in my colleague's branch, the broken code was never pasted into a C file, and hence never compiled. My colleague's code was, in fact, full of compile-time errors, but appeared not to be, because the C compiler never got a look at it.

And then I took a closer look at the screens-full of error messages, and saw that I could have worked that out right at the beginning: subsequent error messages referred to OUTFILE, and the output file isn't even open at the expansion stage. Clearly, the code had originally been written to run in the emit phase (when both state and OUTFILE were live), and he'd got half-way through converting it to run at expansion-time before having to abandon it.

Lessons learned:
  1. In a generated-code scenario, do not assume that any particular snippet has been compiled successfully just because the whole codebase builds without errors.
  2. Prefer languages with decent native abstraction mechanisms to code generators.
  3. At least skim the subsequent error messages before dismissing them and working on the first bug: they may provide useful context.
  4. Communication: if I'd enquired more carefully about the condition of the code to be merged I could have saved myself a lot of time.
  5. Bear in mind the possibility that you might not be the guilty one.
  6. Treat ESR's pronouncements with even greater caution in future. Same goes for Kenny Tilton, or any other Great Prognosticator.
Any more?

Edit: two more:
  1. If, despite (2), you find yourself writing a snippet-pasting code generator, give serious thought to providing "this snippet is unused" warnings.
  2. Learn to spot when you're engaged in fruitless activity and need to step back and form a better plan. In my case, the time crawling through diffs was wasted, and I probably could have solved the problem much quicker if I'd rolled up my sleeves and tried to understand the actual code.
Thanks to [livejournal.com profile] gareth_rees and jerf.
pozorvlak: (Default)
Tuesday, November 30th, 2010 04:03 pm
[livejournal.com profile] fanf linked to this on Twitter: I thought I'd fill it in with my "score". Domain-specific stuff applies to my current main gig.

Read more... )
pozorvlak: (Default)
Thursday, October 28th, 2010 11:30 am
I badly need some better strategies for making sense of large, twisty, underdocumented codebases.

My current "strategy" is
  • grep for interesting words or phrases
  • find relevant-looking functions
  • look for their call-sites
  • look up definitions of other functions called in those sections of code
  • if I don't understand what a variable's for (almost certain) then look for assignments to it
  • once I've identified all the bits of code that look relevant, stare at them until my eyes cross
  • maybe put in a few printf's, try to make sense of the logs
  • enter procrastinatory spiral of despair
  • stress about losing job
  • make more coffee
  • repeat.
What do you do?
pozorvlak: (Default)
Monday, May 10th, 2010 12:41 pm
A while back I read a book called Gin: the Much-Lamented Death of Madam Geneva, from which I learned all kinds of interesting things about the Gin Craze (and parallel Gin Panic) in eighteenth-century London. I can't whole-heartedly recommend the book - it would have been better with fewer rhetorical flourishes and more serious analysis - but it should be required reading for anyone proposing measures aimed at cutting "binge drinking", or any other kind of drug abuse. Short version: whatever your idea is, they tried it in the eighteenth century, and it either didn't work or made the problem worse. Concentrate on fixing poverty instead.

One titbit that particularly surprised me was that it used to be common for workplaces to provide gin to their workers, with the cost of the gin being deducted from wages. No risk of being sacked for showing up drunk to work! However, I wonder if our descendants will feel the same about workplaces today which provide unfiltered Internet access to employees...
pozorvlak: (sceince)
Tuesday, March 2nd, 2010 08:00 pm
Every time I read about a new development in the weird and wonderful world of materials science, I wonder if I've gone into the wrong field. Here are a few new products that have caught my eye recently:

Sugru - a bit like modelling clay, only it cures into a flexible silicone overnight. Make your stuff waterproof, or more ergonomic, or funky and artistic, or simply not broken. Check out the video on their site, and some of the hundreds of pictures of cool sugru hacks submitted by their users. Currently out of stock, due to (foolishly, IMHO) unanticipated massive demand.

Spray-on glass - I can't decide if this is a hoax or not. According to the article, the spray can coat whatever surface you like with a 100nm film of glass, with some really bizarre properties (breathable, waterproof, non-toxic, flexible...). Apparently it makes clothes stain-resistant, kitchen counters wipe-clean and antibacterial, wood termite-proof, and vines resistant to fungi. My bogon detector is triggered by the bit about "not available in supermarkets because they make too much money off conventional cleaning products", however.

Woolfiller - not sure if this is really materials science, but - well, watch the video. If you've ever darned an item of clothing, you'll see what I mean.
[Edit: turns out this is clever marketing of a well-known (for suitable values of "well-known") technique called "needle felting". I still think it's cool, though. Thanks to [livejournal.com profile] susannahf and [livejournal.com profile] taimatsu for pointing this out.]

Rather older, but still cool: metallic glass and rubberized asphalt.

PS: I am now employed again - I started work here this morning :-)