
Inferential Duck Typing, or: Type Hierarchies Considered Harmful

Let's begin with some Java code:

interface File {
    void open();
    void close();
    void delete();
}

interface Fridge {
    void open();
    void close();
    void addSticker(Sticker s);
}

class Util {
    // Too bad this version only works for Files, not Fridges.
      
    void with(File file, Runnable r) {
        try {
            file.open();
            r.run();
        } finally {
            file.close();
        }
    }
}


How can we make Util.with work with both Files and Fridges? Obviously, by crafting an interface IOpenClose and making both File and Fridge extend it:

interface IOpenClose {
    void open();
    void close();
}

interface File extends IOpenClose {
    void delete();
}

interface Fridge extends IOpenClose {
    void addSticker(Sticker s);
}

class Util {
    // This works for both Fridges and Files.
      
    void with(IOpenClose obj, Runnable r) {
        try {
            obj.open();
            r.run();
        } finally {
            obj.close();
        }
    }
}


Unfortunately, the above version requires us to go through all existing code and mark every open-closeable class and interface with IOpenClose. This is simply superfluous: anyone with two eyes can see that both Fridges and Files provide open and close methods, so why should that fact be repeated through an additional hierarchy parent (IOpenClose)? And it is not only humans who can see this: a compiler can automate the check.
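To make that claim concrete, here is a sketch in plain, present-day Java (names and helper are mine, not part of the proposal) that performs the structural check at runtime with a dynamic proxy. A compiler could do the same member lookup statically at the point of conversion:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class StructuralCast {
    public interface IOpenClose {
        void open();
        void close();
    }

    // A Fridge-like class that never mentions IOpenClose.
    public static class Fridge {
        public boolean isOpen = false;
        public void open()  { isOpen = true; }
        public void close() { isOpen = false; }
    }

    // Convert obj to iface if obj's class structurally provides every
    // method the interface declares; throw ClassCastException otherwise.
    @SuppressWarnings("unchecked")
    public static <T> T structuralCast(Object obj, Class<T> iface) {
        for (Method m : iface.getMethods()) {
            try {
                obj.getClass().getMethod(m.getName(), m.getParameterTypes());
            } catch (NoSuchMethodException e) {
                throw new ClassCastException(obj.getClass() + " lacks " + m);
            }
        }
        InvocationHandler h = (proxy, method, args) ->
            obj.getClass()
               .getMethod(method.getName(), method.getParameterTypes())
               .invoke(obj, args);
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[] { iface }, h);
    }

    public static void main(String[] args) {
        Fridge f = new Fridge();
        IOpenClose oc = structuralCast(f, IOpenClose.class);
        oc.open();
        System.out.println(f.isOpen);
        oc.close();
        System.out.println(f.isOpen);
    }
}
```

The runtime version pays reflection overhead on every call; the point is only that the conformance check itself is mechanical.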

Here's what I propose: the language should provide implicit downcasting between classes and interfaces with compatible methods. Perhaps this shouldn't be the default behavior, but at least it should be possible. Here's an example:

interface IOpenClose {
    void open();
    void close();
}

interface File {
    void open();
    void close();
    void delete();
}

interface Fridge {
    void open();
    void close();
    void addSticker(Sticker s);
}

class Util {  
    static void with(IOpenClose obj, Runnable r) {
        try {
            obj.open();
            r.run();
        } finally {
            obj.close();
        }
    }

    public static void main(String[] args) {
        // Assume some concrete class implementing File with this constructor.
        with(new File("temp.txt"), new Runnable() {
            public void run() {
                System.out.println("Hello, world!");
            }
        });
    }
}


OK, so the change wasn't really that big. But suppose Java did support operator overloading. Then look at this example:

interface VecArithmetic<E> {
    E operator + (E rhs);
    E operator * (E rhs);
}

class Vec3<E extends VecArithmetic<E>> {
    public E x;
    public E y;
    public E z;

    public Vec3(E x_, E y_, E z_) {
        this.x = x_;
        this.y = y_;
        this.z = z_;
    }

    public Vec3<E> operator + (Vec3<E> rhs) {
        return new Vec3<E>(x + rhs.x, y + rhs.y, z + rhs.z);
    }

    public Vec3<E> operator * (E rhs) {
        return new Vec3<E>(x * rhs, y * rhs, z * rhs);
    }
}


This version should work for integers, floats, doubles, complex numbers (a class Complex), or any other type that defines addition and multiplication.
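Since real Java has no operator overloading, the same structure can be expressed today with named methods standing in for operator + and operator * (a sketch; the add/mul names and the Complex class are mine):

```java
// Named-method stand-ins for operator + and operator *.
interface VecArithmetic<E> {
    E add(E rhs);
    E mul(E rhs);
}

// A minimal complex-number type that satisfies VecArithmetic.
class Complex implements VecArithmetic<Complex> {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    public Complex add(Complex rhs) {
        return new Complex(re + rhs.re, im + rhs.im);
    }
    public Complex mul(Complex rhs) {
        return new Complex(re * rhs.re - im * rhs.im,
                           re * rhs.im + im * rhs.re);
    }
}

class Vec3<E extends VecArithmetic<E>> {
    final E x, y, z;
    Vec3(E x, E y, E z) { this.x = x; this.y = y; this.z = z; }

    // Component-wise vector addition.
    Vec3<E> add(Vec3<E> rhs) {
        return new Vec3<>(x.add(rhs.x), y.add(rhs.y), z.add(rhs.z));
    }

    // Scalar multiplication.
    Vec3<E> scale(E rhs) {
        return new Vec3<>(x.mul(rhs), y.mul(rhs), z.mul(rhs));
    }
}
```

Note that this working version still requires Complex to explicitly implement VecArithmetic, which is exactly the ceremony the proposed implicit conversion would remove.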

So, why would such implicit downcasting be preferable?
  • First, it doesn't require changing existing classes.
  • Second, classes such as Vec3 above can effectively define the set of methods they require from their type parameters. For example, some classes may only require operator +, while others require both operator + and operator *. The same effect could be achieved by building an interface hierarchy with interfaces IAdd and IMultiply, but that is superfluous and redundant.
  • Third, implicit downcasting is very easy to check statically. In the example above, the compiler doesn't have to work through the whole of class Vec3 to see that it requires operator + and operator * from E. The legitimacy of the type argument is checked only at the point of downcasting. This is vastly easier to implement than C++-style latent typing (templates), at the cost of some boilerplate code (e.g., interface VecArithmetic).
  • Ultimately, this method reduces what could be termed hierarchy interdependency: the class Vec3 and its type parameter E no longer need to share a common type hierarchy. This often obviates the need for adapter classes.
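For contrast, here is the adapter-class workaround the last point alludes to, as it looks in today's Java (the FridgeAdapter name is mine): a wrapper whose only job is to re-export Fridge's methods under the IOpenClose interface.

```java
interface IOpenClose {
    void open();
    void close();
}

// Defined elsewhere, with no knowledge of IOpenClose.
class Fridge {
    boolean isOpen = false;
    void open()  { isOpen = true; }
    void close() { isOpen = false; }
}

// Pure plumbing: exists only because Fridge cannot retroactively
// implement IOpenClose. Implicit structural conversion would make
// this class unnecessary.
class FridgeAdapter implements IOpenClose {
    private final Fridge fridge;
    FridgeAdapter(Fridge fridge) { this.fridge = fridge; }
    public void open()  { fridge.open(); }
    public void close() { fridge.close(); }
}
```

Every such adapter is a line-for-line restatement of facts the compiler already knows, which is the redundancy this proposal targets.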
But are there any downsides? Well...
  • Explicitly marking a class C as adhering to some interface B improves documentation. If you know that "C is a B" and that "A uses objects of type B", then you know objects of type C can be used by A. In the implicit-conversion model, we only know that "A uses objects of type B"; on the other hand, the classes conforming to interface B can be deduced automatically.
