While I'm here, I'll write down a few more IITESKAs that have been buzzing, half-formed, around my head, in some cases for months.
Junk Dilemmas no. 63: This is the name of a chapter in Trainspotting. Renton is lying on the floor of his flat, slowly coming down from heroin. It's cold - "unheated leaky flat in a Scottish winter" cold, and he's not exactly in great physical shape. There's a real possibility that he might freeze to death. He has a small electric fire at the other end of the room. He could drag himself to the heater and turn it on, but it probably wouldn't make any difference. Instead, he prefers to stay where he is, reasoning that the comfort he gains from knowing that he could turn the heater on if it got really bad is better than the comfort he'd get from actually doing it and making the room infinitesimally warmer.
But what he's really doing is rationalising his reluctance to move.
This was actually one of the ideas that suggested this series to me - once I was aware of this thought (anti?)pattern, I kept noticing it in other contexts. I bet you will too, now. Which brings us on to...
Whorfian mind-lock: The Sapir-Whorf hypothesis is that the language(s) you speak constrain the thoughts you can think. I'm not enough of a linguist to have noticed this myself for human languages, but I've written code in enough computer languages to see that some algorithms really are much more elegant in some languages than others. Similarly (claim Sapir and Whorf*), some sentences are more easily expressible in some human languages than in others. If something goes against the grain of the language you're trying to write it in, you're less likely even to think of it. This is the basis of both Orwell's Newspeak and Iain Banks' Marain, as well as the real-world conlang Lojban. The upside of this effect is that learning new languages (particularly more powerful languages) can give you new ways of thinking. This is what I say to people who tell me that category theory is "just a language"** - fine, but it's a more powerful language than yours, so it allows me to think more powerful thoughts :-) The downside is Whorfian mind-lock, which occurs when your language is inadequate to describe the thoughts you need to think, so you don't think them.
In the case of programming languages, this is interesting, because there's a well-known result (Turing equivalence, closely related to the Church-Turing thesis) that every programming language above a certain (low) power threshold is as powerful as every other. So every C/Python/Fortran/Lisp/TeX/ZX80 assembler/sendmail config file program could, in principle, be re-written in any of the other languages mentioned. And yet some languages are more powerful than others, in the practical sense that they let you write the same program in less space and with less effort (there's a tiny illustrative sketch of this after the conjecture below). With that in mind, I'd like to propose a sort of dual or converse to the Sapir-Whorf hypothesis for human languages, by analogy with the Church-Turing thesis:
Conjecture: every human language is capable of expressing the same set of statements as every other.
This needs sharpening up - what exactly do we mean by a "human language"? For the sake of argument, let's restrict ourselves to languages spoken as a first language by more than 10,000 people. And we need to be more precise about what we mean by the meaning of a sentence, as the emotional overtones of a sentence might well be peculiar to a language community. Could we define the meaning of a statement as the state of affairs in the physical world that it describes? Then what about mathematical or philosophical statements?
Note that I'm not claiming that the translation process is going to be easy; some languages have words that are extremely difficult to translate into other languages. I'm merely conjecturing that it can be done (possibly by turning words in the source language into entire books in the target language, but hey).
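Back to the programming-language sense of "power" for a second, since it's the only part of this I can make properly concrete. Here's a minimal, purely illustrative sketch in Python (the task and the variable names are mine, and deliberately trivial): both versions compute exactly the same thing, so they're interchangeable in the Church-Turing sense, but one goes with the grain of the language and one against it.

    # Both versions compute the squares of the even numbers below 20.

    # With the grain: a single comprehension.
    squares_of_evens = [n * n for n in range(20) if n % 2 == 0]

    # Against the grain: the same program written as you might in a lower-level language.
    squares_of_evens_verbose = []
    for n in range(20):
        if n % 2 == 0:
            squares_of_evens_verbose.append(n * n)

    assert squares_of_evens == squares_of_evens_verbose  # same result, different effort

Neither version can do anything the other can't; one just makes the thought cheaper to have.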
Whew. A couple of lighter ones:
Paving cowpaths: another antipattern. You have some process that you're trying to update using shiny modern technology. But, rather than take a step back and look at what the technology could actually do for you, you simply whizzify every stage of your existing process. It's as if you had a twisty, winding cowpath through the hills which you wanted to upgrade to a proper road, and you built a narrow, winding paved road along the path of the original cowpath, when you could have built a perfectly straight motorway that goes through a cutting or a tunnel. My favourite example: the company I used to work for won a contract with $big_government_department to provide a computer-based solution to their document-processing needs. They handled huge numbers of documents, warehouses full of them in fact, and were finding it increasingly difficult and expensive to keep track of them and get them to where they needed to be on time. When they initially announced the tender, the client wanted an RFID system to keep track of paper documents in their warehouses...
And finally, another one with a great name:
Yak shaving: to take the Jargon File's admirably clear definition, "[MIT AI Lab, after 2000: orig. probably from a Ren & Stimpy episode.] Any seemingly pointless activity which is actually necessary to solve a problem which solves a problem which, several levels of recursion later, solves the real problem you're working on." There's a useful military/NATO saying for when you find yourself in this situation: "Maintain the Aim". In other words, look to see if there's an easier way of solving the actual problem which bypasses the yak-shaving you're stuck on.
* No relation to the Klingon.
** It's not - we have actual theorems and everything. It's just that the naysayers aren't comfortable enough with the language to understand their statements :-)
no subject
It very much seems as if Sapir-Whorf is not only untrue, but actually damaging to the way you think about languages, to the extent that (at least some) linguists take almost excessive care to eradicate the meme as early as possible.
no subject
The sci.linguistics FAQ says the following:
no subject
no subject
Well, I don't know anything about human linguistics, but I wanted to comment on a possible link between Sapir-Whorf and the particular Junk Dilemma mentioned.
It seems to me that the dilemma only exists because of a confusion of physical comfort and psychological comfort. I can imagine that it could be argued that the confusion wouldn't exist, if only the language made the distinction between the two clear.
I can also imagine the argument that the connection between physical and psychological comfort is ingrained, and not a product of the language, so even if, by some freak of nature, the language turned out to clearly distinguish the two, the confusion would still exist. In fact, it might come to the surface in creative use of the language, in the same way as a book reviewer might describe "delicious flowing prose" while making no pretence to actually refer to the sensation we experience via our taste buds.
Of course the book reviewer is deliberately using the language creatively, and so presumably couldn't seriously consider the possibility that the words actually tasted of something... But then, perhaps that is the same as Renton - maybe there's no genuine confusion at all, just a creative exercise to take his mind off the pain.
no subject
Having said all that in incoherent English, I will return to my French Constitutional Law essay and stop procrastinating!
no subject
Yeah, this and things like the experimentally verified necessity of high-level programming languages and mathematical notation (ever tried doing arithmetic with Roman numerals?) convince me that there's something Sapir-Whorfish going on somewhere. I'm presuming therefore that the argument against Sapir-Whorf takes a form similar to "we create language, and are not created by it" - asserting that where we want to reason in a way that our current language doesn't support, we'll just extend our language or invent a new one. So the correlation is there, but the causal relationship is reversed...
no subject
Marilyn Manson: No, I think we cause the music to do what it does.