Here's something that occurred to me the other day: consider duck-typed languages like Python. The idea of duck typing is that if something walks like a duck, and quacks like a duck, then it might as well be a duck. Or rather, if something walks and quacks in a sufficiently ducklike manner, it doesn't really matter for your purposes whether it actually is a duck or not. Less metaphorically, we don't specify that our functions accept only (say) Integers as arguments; we only specify that the arguments have to have an add() method. In fact, duck typing as commonly understood goes further: we don't explicitly specify a necessary interface at all, we specify it implicitly by the methods we call on our arguments.
An example is probably in order. Consider the Python code
def foo(bar, baz):
    bar.spoffle()
    bar.buffy(baz.willow())
    baz.xander(bar.angel())

From that code, we can deduce that the first argument to foo must have methods called spoffle, buffy and angel, and the second argument must support methods called willow and xander. Now, here comes the clever bit. That's all that's necessary for foo to work. Supposing some clever hacker comes along later and invents a new datatype which it would make sense to foo, all they need to do to allow your code to work on theirs is to implement those methods in a sensible way. This can be done without any knowledge or forethought on your part. Maybe you honestly believe that fooing is something that can only be done to ScoobyGang objects, but in fact the algorithm is much more general, and will work with any TeenSuperHeroTeam. It doesn't matter: your code will still work. To achieve the same effect in Java or Haskell, you'd have to have defined a Fooable interface explicitly, and users of your library would have to start their code with a raft of
instance Fooable TheThem where ...
instance Fooable AquaTeenHungerForce where ...
instance Fooable PowerRangers where ...

type stuff. This is in fact how most of my Haskell code ends up looking: I find typing it almost physically painful. Until the RSI starts to kick in, at which point it becomes actually physically painful. And that's in the good case, where the author of the library I'm trying to use has thought about the kind of generalisation I want to make and defined the relevant interfaces.
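To make the duck-typing side of this concrete, here's a small Python sketch (the class names besides ScoobyGang are my own inventions) showing a later hacker's type working with foo purely by implementing the right methods, with no shared base class or declared interface:

```python
# foo only cares that its arguments respond to the right methods.
def foo(bar, baz):
    bar.spoffle()
    bar.buffy(baz.willow())
    baz.xander(bar.angel())

# A brand-new type, written with no knowledge of foo's author, that
# happens to implement the implicit interface foo requires.
class PowerRangers:
    def __init__(self): self.log = []
    def spoffle(self): self.log.append("spoffle")
    def buffy(self, x): self.log.append(("buffy", x))
    def angel(self): return "angel-value"

class Sidekicks:
    def __init__(self): self.log = []
    def willow(self): return "willow-value"
    def xander(self, x): self.log.append(("xander", x))

rangers, sidekicks = PowerRangers(), Sidekicks()
foo(rangers, sidekicks)   # just works: the interfaces line up
```

No instance declarations, no Fooable: the only contract is the set of method names foo actually calls.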
OK, so far so standard. Now, most duck-typed languages are dynamic, which means that we only try to determine if bar has a spoffle method at runtime, and die with an error message when it doesn't (possibly after trying some error recovery, e.g. by calling an AUTOLOAD method or similar). But it occurred to me that in simple cases (which is to say, the majority), we could work out most of this stuff statically. For each function definition, see what methods are called on its arguments. Recurse into functions called from that function. Now, each time a function is called, try to work out what methods the arguments will support, and see if that includes the interface required by the function. Issue an error if not. Thus we get the code reuse benefits of duck typing, and the correctness benefits of static checking. If the static checking is slowing down your development cycle too much, drop back to fully dynamic checking, and only run the static checks on your nightly builds or something.
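The "see what methods are called on its arguments" step can be done mechanically from the source. Here's a rough sketch (my own toy code, not a real checker — it handles only direct method calls on parameter names, with no recursion into callees) using Python's ast module:

```python
import ast

def required_methods(source, func_name):
    """Map each parameter of func_name to the set of methods called on it."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            params = {a.arg for a in node.args.args}
            needs = {p: set() for p in params}
            # Find every call of the form <param>.<method>(...)
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and isinstance(call.func.value, ast.Name)
                        and call.func.value.id in params):
                    needs[call.func.value.id].add(call.func.attr)
            return needs
    raise ValueError("no such function: " + func_name)

src = """
def foo(bar, baz):
    bar.spoffle()
    bar.buffy(baz.willow())
    baz.xander(bar.angel())
"""
print(required_methods(src, "foo"))
# bar needs {spoffle, buffy, angel}; baz needs {willow, xander}
```

A real checker would then compare these sets against what can be inferred about each call site's arguments, recursing through called functions as described above.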
This also cleared up something else I'd been vaguely wondering about. In his splendid Drunken Blog Rant Rich Programmer Food, Steve Yegge says
Another problem is that they believe any type "error", no matter how insignificant it might be to the operation of your personal program at this particular moment, should be treated as a news item worthy of the Wall Street Journal front page. Everyone should throw down their ploughshares and stop working until it's fixed. The concept of a type "warning" never enters the discussion.

I'd wondered at the time what a type warning would mean. When is a type error mild enough to only warrant a warning? Here's one idea: a type warning should be issued when it cannot be proved that an argument to a function implements the interface that function needs; a type error should be issued when it can be proved that it doesn't.
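Here's a toy illustration of that warning/error split (my own framing, not a real checker): an error when we can prove the required method is absent, a warning when we simply can't tell what the argument will be.

```python
def check_call(required, arg_type):
    """Check one call site. arg_type is the argument's class if it could be
    inferred statically, or None if it couldn't."""
    if arg_type is None:
        # Can't prove anything either way: warn, don't reject.
        return ["warning: cannot prove argument supports " + m
                for m in sorted(required)]
    # Type is known: anything missing is a provable error.
    return ["error: %s has no method %s" % (arg_type.__name__, m)
            for m in sorted(required) if not hasattr(arg_type, m)]

class ScoobyGang:
    def spoffle(self): pass

print(check_call({"spoffle", "buffy"}, ScoobyGang))
# -> ['error: ScoobyGang has no method buffy']
print(check_call({"spoffle"}, None))
# -> ['warning: cannot prove argument supports spoffle']
```

Warnings become the honest answer for the calls the analysis can't see through, rather than being escalated to front-page errors.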
This all seemed very interesting, and struck me as potentially a fun and reasonably easy hacking project, at least to get something workable going. But if it's occurred to me, it has probably occurred to someone else, so I asked the Mythical Perfect Haskell Programmer if he was aware of any work that had been done on static duck-typed languages. "Oh yes," he said, "O'Caml's one." Buhuh? Really? Well, that's O'Caml moved a couple of rungs up my "cool languages to learn" stack...
no subject
And you're also right that generally you can figure that stuff out statically.
But in Haskell (or rather GHC), sadly, you generally don't get that static analysis, and instead you end up with functions passing dictionaries/virtual-function-table-pointers around all the time.
no subject
[Though dynamic/duck-typed languages could also be a lot faster (http://www.avibryant.com/2006/09/ruby-and-strong.html), apparently.]
no subject
I think it's a bit harder than that: you'd need a typeclass for every function's interface and every intersection of those interfaces that occurs. So if you had then you'd need typeclasses as follows:
One typeclass per primitive function
If instead, winnow was required to return an object that had a particular interface:
Also
And for more fun:
s x y z = x z $ y z
bar = s spodify winnow
typechecks just fine to the correct type, which surprised me!
Re: Also
no subject
http://msdn2.microsoft.com/en-us/library/3y20cc1z(VS.80).aspx
For instance, if I say something like:
Dim x as Integer = 0
MessageBox.Show(x)
then I'm doing implicit type conversion from Integer to String, so that can be a warning or an error depending on your preferences. (I use "Option Strict On", so it's an error for me.)
More generally, I think the key advantage of interfaces is for separate libraries, where you may not know the specific one until runtime. For instance, I've written n-tier applications which have a choice of data layers, so there's just one point in the code where I choose which DLL to load (e.g. SQL Server vs Oracle) and all the rest of my code refers to the interfaces rather than the specific classes. That means that the type checking is done at compile time.
There are also issues if I'm working on a library and I want to change some behaviour, e.g. by no longer supporting a particular method. With interfaces, I can say "I no longer implement IScoobyGang but I do implement IPowerRangers", then make whatever changes I like as long as I stick to those interface definitions. The calling code should check whether a given object implements a particular interface at runtime, and if not then it can handle that appropriately, rather than assuming that my code has a particular method and then crashing mid-function.
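The pattern described in these two paragraphs can be rendered in Python too (this sketch is mine, not the commenter's VB.NET; the class and layer names are invented): pick the concrete data layer at one point, code to the interface everywhere else, and check at runtime that the object really implements it.

```python
from abc import ABC, abstractmethod

class DataLayer(ABC):
    """The interface all the rest of the code is written against."""
    @abstractmethod
    def fetch(self, key): ...

class SqlServerLayer(DataLayer):
    def fetch(self, key): return "sqlserver:" + key

class OracleLayer(DataLayer):
    def fetch(self, key): return "oracle:" + key

def make_layer(name):
    # The single point in the code where a concrete implementation is chosen.
    layer = {"sqlserver": SqlServerLayer, "oracle": OracleLayer}[name]()
    if not isinstance(layer, DataLayer):
        # Runtime interface check: handle gracefully instead of
        # crashing mid-function later.
        raise TypeError("%r does not implement DataLayer" % layer)
    return layer

print(make_layer("oracle").fetch("users"))   # oracle:users
```

Swapping SQL Server for Oracle then touches only make_layer; every caller sees just the DataLayer interface.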
no subject
I understand the rationale behind explicit interfaces (you make a good point when you say "The calling code should check whether a given object implements a particular interface at runtime, and if not then it can handle that appropriately, rather than assuming that my code has a particular method and then crashing mid-function" - this can be done in dynamic languages, but it's a bit more of a faff), but for the kind of code I write, they're overkill. And I have a very low tolerance for writing unnecessary lines of code.
no subject
Consider the following interactive session:
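The session itself hasn't survived here; the following is a guess at the kind of thing it showed, reconstructed from the description that follows: list a class's methods by name, skipping underscore-prefixed attributes, then call them inside a (deliberately bad) bare except.

```python
class Duck:
    def quack(self): return "quack"
    def walk(self): return "waddle"
    def _preen(self): pass   # internal by convention: filtered out below

# Keep only the public methods (drop __dunder__ and _internal names).
methods = [method for method in dir(Duck) if not method.startswith('_')]
print(methods)   # ['quack', 'walk']

duck, results = Duck(), []
for method in methods:
    try:
        results.append(getattr(duck, method)())
    except:   # a bare except, purely for illustration; don't do this
        results.append(None)
print(results)   # ['quack', 'waddle']
```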
The "method for method" bit of the comprehension removes all the attributes on the class that are, by convention, built-in (starting with __) or internal (starting with _). Also, never use a bare except: in real code.
How do you type check that without actually running it?
You can't. So why bother?
The whole point is to ignore the type of the object. Your tests are supposed to catch the common errors, and as likely as not the uncommon cases come from developers getting things wrong or testing your code anyway. And they should test their own code to catch those errors.
Given that it's impossible to prove a python program's type correctness, we prefer to not bother, and get on with stuff that actually matters. Yes, having type warnings from pychecker et al can sometimes help, but normally they are more of a hindrance, and the sort of type errors that actually worry you won't be picked up by such tools.
no subject
And as for I can only say: "well said!". The trouble is that this is such a horribly polarised debate: those who believe that compilation is just a weak and frustrating form of testing on one hand (you, me, Steve Yegge, etc) and those who believe that testing is just a weak and labour-intensive form of compilation on the other (the Haskell, Java etc communities - and have you ever tried to argue with Haskell users? The dead, staring eyes, the moans of "monadsss... monadssss..." *shudder*. Almost as bad as Mac users :-) )
no subject
If you want to see some truly scary Python type mangling, I recommend a peek at some of the Django ORM.
no subject
Personally I like to see type information where I can read it, not in the compiler's head.
no subject
I'm a bit confused by type inference too - Haskell's implementors have had to leave out or limit a whole raft of useful features because they would make type inference Too Hard, and yet preferred Haskell style is to give explicit type declarations for all top-level functions (at least). If it makes desirable things impossible, and you're not actually meant to use it, what's it good for besides impressing newbies? I asked
Personally, I like to ignore the way my data is represented, and focus on what it means :-)
Anyway, welcome. I take it you're here via
no subject
So to take an example in Java (java.nio.channels specifically): In the former I could pass a Selector to a method requiring a Channel. In the latter, the compiler would detect that I've screwed up (or rather, that I'm being malicious), because although Selector has all the methods necessary to be a Channel, it certainly isn't a Channel. [Channel just has isOpen and close methods.]
Polymorphism and encapsulation sufficiently hide how data is represented as far as I am concerned.
And yes, I was just nosing about having seen a comment of yours on firefliesinjune.
Statically duck-typed languages
In particular, the C++ template system is duck-typed. Suppose I write a template with a parameter T, and in the body of the template, it has a method that adds two T's with +, and another method that calls the method void quack(bool). C++ doesn't check this at all until you apply the template, and then it will check that the actual parameter you give for T supports those operations.
OCaml is duck-typed in the sense that it supports structural subtyping for both objects and open variants (tagged sums). The idea, called row types, is actually due to Mitch Wand (http://www.citeulike.org/user/tov/article/1628251) and Didier Remy (http://www.citeulike.org/user/tov/article/600741). These original type systems supported both record subtyping, in the sense that you can pass a record with more fields to a function that knows about only some of them, and record update, in the sense that you can polymorphically append a new field to a record, preserving all the old fields. OCaml sadly did away with the latter. OCaml objects have duck-subtyping, but you can't extend an OCaml object with new methods.
Matthias Blume et al. implemented MLPolyR (http://www.citeulike.org/user/tov/article/1652932), which supports extensible, structurally-subtyped records and sums. I found it really impressive.
Re: Statically duck-typed languages