Adding Optional Static Typing to Python -- Part II

59 replies on 4 pages. Most recent reply: Jun 21, 2005 2:42 PM by A. Ellerton

Guido van Rossum

Posts: 359
Nickname: guido
Registered: Apr, 2003

Adding Optional Static Typing to Python -- Part II
Posted: Jan 3, 2005 5:08 PM
Summary
On Dec. 23 I posted some thoughts about this topic, which received a record amount of feedback. Here's a follow-up, based on the responses as well as some thinking I did while off-line for the holidays.

** Note: please read my later post (http://www.artima.com/weblogs/viewpost.jsp?thread=87182) **

This blog entry will contain two parts: a response to the most significant feedback, and a more detailed strawman proposal. (Alas, it's becoming a monster post, and at some point I will just have to stop writing in blog form and instead start writing a PEP. But I'd like to try the blog thing once a week for a few more weeks first.)

Feedback Response

A couple of themes were prevalent in the feedback: concern that Python would lose its simplicity; quibbles with the proposed syntax; questions about which problem(s) I'm trying to solve; and the desire for interfaces, and more specifically design by contract. There were also some suggestions out in left field, and some questions with simple answers ("why can't you do X?"); I'm ignoring these. If you feel left out, please send me an email.

Simplicity

I share this concern. Every feature added causes Python to lose some simplicity; some more so than others. New syntax is particularly prone to this problem, and the proposed syntax (any syntax imaginable for type declarations, really) is relatively heavy. At the same time, this is something that many people, especially folks writing frameworks or large applications, need -- and as long as there's no standard syntax, they are forced to make up their own notation using existing facilities. The same thing happened before bool was a standard Python type -- everybody defined their own Booleans. This duplication of effort is wasteful, and replacing the various home-grown approaches with a standard feature usually ends up making things more readable, and interoperable as well. So, given that there is quite a bit of demand, I think this feature will be a net win.

Syntax

We won't be able to all agree on one syntax; this has historically been true for any new feature added to Python. I have reasons for liking the syntax I picked, but for now I don't feel like discussing the merits of various counter-proposals; the underlying functionality deserves our attention first.

Motivation

I'm not doing this with code optimization in mind. I like many of the reasons for type declarations that were given by various proponents: they can be useful for documentation, for runtime introspection (including adaptation), to help intelligent IDEs do their magic (name completion, find uses, refactoring, etc.), and to help find certain bugs earlier. Indeed, my original motivation was mostly the latter: I've been thinking about integrating the functionality of PyChecker in the core, and I feel that sometimes the type inferencing used there needs a little help. Someone pointed me to a blog entry by Oliver Steele arguing that type declarations are a good tool because they serve several purposes at once reasonably well.

Indirectly, optimization is also served: the best way to optimize code is probably to use type inference, and type declarations can sometimes help the type inferencing algorithm overcome dark spots. Python is so dynamic that worst-case assumptions often make optimizations nearly impossible; this was brought home to me recently when I saw a preview of Brett Cannon's thesis (sorry, no URL yet). But most programs use the dynamism sparingly, and that's where type declarations can help the type inferencer.

Interfaces and Design By Contract

I'm all for interfaces, and I think I will introduce them at the same time as type declarations (all optional, of course). The Python interface framework with which I'm most familiar is Zope's, and it always felt to me that it needed argument type declarations for its methods; the proposed syntax would solve that. But I don't want to lose duck typing; I think that when I declare an argument's type to be some interface, any object that happens to implement the set of methods defined by that interface should be acceptable, whether or not its class explicitly declares conformance to that interface. In general, I think type checking should be structural, so two types are considered equal if they look the same (have the same methods with the same signatures, etc.).
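To make "structural" concrete, here is a minimal sketch of a conformance check that looks only at which methods an object actually has (conforms() is a made-up helper, not proposed syntax):

def conforms(obj, method_names):
    # Structural check: obj conforms if it has all the required methods,
    # regardless of which classes or interfaces it declares as bases.
    for name in method_names:
        if not callable(getattr(obj, name, None)):
            return False
    return True

class Duck:
    def quack(self):
        return "quack"

assert conforms(Duck(), ["quack"])    # no declared relationship needed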

Design By Contract, Eiffel's approach to integrating pre- and post-conditions into the type system, was recommended frequently (also in the past) and I would love to do something with this. Thinking aloud, perhaps the body of a method declaration in an interface could be interpreted as code implementing the precondition? I had previously thought that interfaces could use this syntax:

interface I1:
    def fumble(name: str, count: int) -> bool:
        """docstring"""

Now it seems easy enough to extend this by allowing additional code in the body following the docstring, for example:

interface I1:
    def fumble(name: str, count: int) -> bool:
        """docstring"""
        assert count > 0
        assert name in ReferenceTable

(But what to do for the postcondition? Perhaps we could use a nested function with a designated name, e.g. check_return().)
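Spelled out in the proposed notation, that might read like this (check_return() being entirely hypothetical):

interface I1:
    def fumble(name: str, count: int) -> bool:
        """docstring"""
        assert count > 0
        assert name in ReferenceTable
        def check_return(result):
            assert isinstance(result, bool)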

Strawman Proposal

This is still extremely raw and rambling. Perhaps the most cooked part is the proposed notation for parameterized types, so I'll start with that.

Parameterized Types

In Java 5 (and who knows where else) these are known as generic types; while that is shorter, it's more mysterious (what's so generic about these types?), so I prefer the other term.

Do we need parameterized types? I think we do; I often write comments explaining to the reader what the element types of lists and dicts are. Python's parameterized types will be primarily a run-time construct; they won't be anything like C++ templates with the accompanying compile-time complexity. (Almost everything in Python happens at runtime rather than at compile time; in fact this is one of the biggest differences between C++ and Python, when you think about it.)

I'm now proposing to use [square brackets] rather than <pointy ones> for these, because then some of this can be prototyped today using a metaclass that implements a suitable __getitem__ method. I have some working sample code in my /tmp directory that lets me declare a class List (a subclass of list) which can be parameterized by writing List[int], List[str] etc., and the right thing will happen. The List class implements explicit type checks; for example, here is the append() method:

def append(self, x):
    assert isinstance(x, self.T)    # self.T references a class variable
    super(List, self).append(x)

A syntax change to the class statement should allow declaring the type parameters, so that the List class could start as follows:

class List(list) [T]:
    ...etc...

Without this syntax change, we could fake it as follows:

class List(list):
    __metaclass__ = ParameterizedType
    __typeargs__ = ["T"]    # the metaclass looks for this
    ...etc...

The new syntax would simply translate into this. Of course, once all of this becomes part of the language, the built-in list type itself would already be parameterizable like this, but the building blocks would be the same and available to all.
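To make that concrete, here is a rough sketch of how such a ParameterizedType metaclass could be prototyped today; only the __typeargs__ convention comes from above, the rest of the names are invented:

class ParameterizedType(type):
    def __getitem__(cls, params):
        if not isinstance(params, tuple):
            params = (params,)
        names = cls.__typeargs__
        assert len(params) == len(names), "wrong number of type parameters"
        # Bind the type parameters as class variables of a new subclass.
        bindings = dict(zip(names, params))
        name = "%s[%s]" % (cls.__name__, ", ".join(t.__name__ for t in params))
        return ParameterizedType(name, (cls,), bindings)

class List(list):
    __metaclass__ = ParameterizedType
    __typeargs__ = ["T"]    # the metaclass looks for this

    def append(self, x):
        assert isinstance(x, self.T)
        super(List, self).append(x)

IntList = List[int]    # a new class with T bound to int
x = IntList()
x.append(42)           # fine
x.append("abc")        # AssertionError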

Aside: it would be nice if str(List[int]) returned "List[int]"; str(int) should probably return "int" rather than the current "<type 'int'>".

There are some dark corners here. For example, consider a typed library function declared as taking a list[int] argument. Now we call this from untyped Python code with a plain list argument, where we happen to have ensured that this list only contains ints. This should be accepted of course! But if we pass it a list containing some ints and a float, this ought to fail, preferably with a TypeError at the call site. That means that each list element must be typechecked, which could slow down the call considerably, alas. I can think of various ways to minimize the cost, but it won't completely disappear (we could skip such typechecks when -O is used, which seems reasonable enough).
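One cheap way to get the -O behavior is to guard the whole-container check with __debug__, as in this sketch (typecheck_list() is a made-up helper):

def typecheck_list(xs, T):
    if __debug__:    # skipped entirely when Python runs with -O
        for i, x in enumerate(xs):
            if not isinstance(x, T):
                raise TypeError("element %d is %s, expected %s"
                                % (i, type(x).__name__, T.__name__))
    return xs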

Worse, if instead of list[int] an argument is declared as iterator[int] (a hypothetical notation for an iterator yielding ints), we can't typecheck the iterator (since this would exhaust it prematurely); we must typecheck each value returned by the iterator's next() method. So, perhaps the generic approach will have to be that the code generated for a function declared to take a constrained container argument must check the types of individual values as they are retrieved from the container. I can see why some responses indicated that they didn't want to see such type checks at all, but (except for -O) that seems the wrong approach, throwing away one of the benefits of the type declaration. Rather, the compiler should use additional type inferencing to avoid inserting unnecessary typechecks most of the time, and to insert typechecks as early as possible.
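A sketch of that per-value approach (checked_iter() is illustrative only): rather than exhausting the iterator up front, wrap it so each value is checked as the consumer retrieves it:

def checked_iter(it, T):
    for x in it:
        if not isinstance(x, T):
            raise TypeError("iterator yielded %s, expected %s"
                            % (type(x).__name__, T.__name__))
        yield x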

Interface Declarations

Everybody seems to agree that we should have interface declarations. Rather than trying to overload the class keyword (which all current interface implementations have done out of necessity), I think we should bite the bullet and create separate syntax, for example:

# declare an interface named I1, inheriting from interfaces I2 and I3
interface I1(I2, I3):
    "docstring"

    # declare a method, with argument and result types and input validation code
    def method1(arg1: type1, arg2: type2) -> resulttype:
        "docstring"
        ...statements to validate input values...

    # declare a read-only attribute, with type and default (== initial) value
    attrname: attrtype = defaultvalue

Interfaces can be composed using multiple inheritance just like classes. An interface should only have interfaces as its bases. If an interface overrides a method defined in a base interface, it must be an "extension" of the base method. Examples of extending a method include: adding arguments with default values; adding default values to existing arguments; replacing an argument type with a supertype (this is contravariance, required by Liskov substitutability!); replacing a return type with a subtype (covariance). There might also be a way to declare and add overloaded methods a la Java.

An interface declaration ends up creating an object just like a class declaration, and it can be fully inspected. The body should only contain method and attribute declarations (this is a departure from the class statement, and I haven't researched all the ramifications yet!).

Methods in interfaces should not declare "self" as an explicit argument. Interfaces should not declare class methods or static methods (or any other decorators). The interface only cares about what the signature is in the actual call, so for static methods, all arguments should be declared, and for class methods, the "cls" argument should be omitted from the interface method declaration.

Method declarations can be inspected to find out their signature. I propose a __signature__ attribute (also for methods defined in classes!) which might be an object whose attributes make the signature easily inspectable. This might take the form of a list of argument declaration objects giving the name, type and default (if any) for each argument, and a separate argument for the return type. For signatures that include *args and/or **kwds, the type of the additional arguments should also be given (so you can write for example a varargs method whose arguments are all strings).
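One conceivable shape for such a __signature__ object, with every attribute name here a guess rather than a settled design:

class Argument:
    def __init__(self, name, type, default=None, has_default=False):
        self.name = name
        self.type = type
        self.default = default
        self.has_default = has_default

class Signature:
    def __init__(self, args, returntype, varargs_type=None, kwargs_type=None):
        self.args = args                  # list of Argument objects
        self.returntype = returntype
        self.varargs_type = varargs_type  # element type of extra *args, if declared
        self.kwargs_type = kwargs_type    # value type of extra **kwds, if declared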

Non-method declarations can similarly be inspected. I'd like this mechanism to be available for classes too; it will need an augmentation to allow the declaration of class variables and writable attributes. (Actually, interfaces should also allow declaring writable attributes, although the default should be that attributes declared by interfaces are only writable by the implementing class, not by outsiders -- IOW, self.foo should always be writable, but x.foo would not be unless explicitly declared as writable).

It is fine for argument types (etc.) to be omitted; there's a special default type 'any' which means "don't care". This is the type assumed whenever a type is not explicitly given (and cannot be deduced reliably at compile time). Note that 'any' differs from 'object' -- when something is declared as 'object', the type inference must assume it has no methods (except for the few standard ones that every object has, like __repr__); when something is declared as 'any', we assume it may have any methods at all. This distinction is important for compile-time type checking: 'any' effectively shuts up compile-time errors/warnings about type mismatches, and instead causes run-time typechecks to be emitted where necessary.

A class can declare that it intends to conform to an interface by including that interface in its list of base classes; bases that are interfaces are treated differently by the default metaclass. The metaclass should match up the implemented methods and attributes with the methods and attributes declared by the interfaces, make sure they match, and add wrappers to implement on-call type checking. For example:

interface I:
    def foo(x: int) -> str:
        "The foo method."
        assert x > 0

class C(I):
    def foo(self, x):
        return str(x)

Here the metaclass would replace C.foo with a wrapper that (1) checks that exactly one argument is passed, (2) checks that it is an int, (3) executes the body of the method declared in the interface (which in this example checks that x is > 0), (4) calls the method implementation; (5) checks that the return value is a string; and (6) returns the return value. (As I said before, I haven't quite figured out how additional validation of the contract for the return value should be done; perhaps the interface method could contain special syntax for that.) The wrapper should be inspectable; its __signature__ attribute should match that of the interface method, and it should also be possible to retrieve the original implementation.
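A rough sketch of such a generated wrapper, numbered to match the steps above (make_wrapper() and its arguments are invented for illustration):

def make_wrapper(impl, validate, argtypes, returntype):
    def wrapper(self, *args):
        assert len(args) == len(argtypes)         # (1) arity check
        for a, T in zip(args, argtypes):
            assert isinstance(a, T)               # (2) argument types
        validate(*args)                           # (3) interface body
        result = impl(self, *args)                # (4) the implementation
        assert isinstance(result, returntype)     # (5) return type
        return result                             # (6)
    wrapper.__original__ = impl    # keep the raw implementation retrievable
    return wrapper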

Note that if an argument type is given as a parameterized type (e.g. list[int]), type checking the call arguments may be expensive or impossible; see the "dark corner" described in the subsection on parameterized types above. When in doubt, the type checking should pass; the dynamic type checking that's "always on" in Python should catch those cases at least as well as they are handled currently.

Interfaces can be parameterized by adding [T1, T2, ...] just like shown above for classes. If a class implements a parameterized interface, it can either specify a type parameter, like this:

interface I[T]:
    ...

class C(I[int]):
    ...

or else it becomes a parameterizable class itself:

interface I[T]:
    ...

class C(I)[T]:
    ...

(Actually, this is the same for inheritance among interfaces or among classes.)

Types vs. Classes

The way I think about it (and I believe I'm in good company), a "type" is an abstract set of method signatures; a "class" is an implementation. A type T2 is a subtype of a type T1 if T2 defines the same methods as T1, plus perhaps some new ones; also if T2 makes certain modifications to the signatures of methods defined by T1, while satisfying the Liskov substitution principle: a T2 instance should be acceptable whenever a T1 instance is acceptable. Interestingly (and frustratingly), this means that a subtype can change method argument types to supertypes of the corresponding argument types in T1, but not to subtypes. This is called contravariance; I don't have time to explain it in more detail, but it's a fundamental theorem of most type systems. Each class implicitly defines a type, but often a subclass does not define a subtype of its base class's type, since subclasses often violate contravariance: methods taking an argument of the same class are often refined in a subclass to expect an instance of that subclass as argument.

That's all fine with me; I'm not about to restrict classes to implement a subtype of their base classes' (implied) type. But for interfaces I'd like to maintain subtyping relationships; hopefully, interfaces will be nothing more than concrete representations of abstract types.

I'll write T2 <= T1 when T2 is a subtype of T1; this matches Python's notation for the subset relationship, and a subtype defines a set of objects that is a subset of the set of objects defined by its supertype. I think we'll have a use for this when writing restrictions on parameterized types, perhaps like this:

interface I[T <= sequence]:
    ...

which means that the type parameter T should be a sequence; I[list] would be acceptable, but I[int] would be an error.

I'm thinking that in some cases a method signature requires relationships between argument types that can't be attached to an individual argument; in that case, perhaps we could add a 'where' clause to the signature. This is a pretty vague design so far; I haven't figured out exactly where 'where' clauses should be allowed, but here's a simple example:

def foo(a: T1, b: T2) -> T2 where T2 <= T1:
    ...

What does it mean when a parameter's type is given as a class name rather than an interface name? Or, for that matter, what does it mean when an interface name is given? My proposal is that the default interpretation is that the type of the argument must match the declared type -- the explicit type represented by an interface, or the implicit type represented by a class. This is closest to Python's practice of duck typing.

Perhaps the Eiffel notion of conformance can be used here. I also expect that there will be some built-in or standard interfaces for things like iterators, iterables, sequences, mappings, files, numbers, and the like.

Unions and Such

Often it is useful to have an argument that can take one of several types. This is a union type in other languages; in Python, I'd like to use the '|' operator on types to express this, for example:

def read(f: file | str) -> str:
    "read data from a file"

A common use would be to declare optional values, where None is acceptable in addition to a designated type; it would be nice if we could write "int | None" for this case, rather than having to invent a name for the type of None (currently NoneType; I think I used 'void' in part I).

The '&' operator can be useful at times as well, to require an argument that should implement several interfaces at once:

def update(f: readableFile & writableFile):
    "blah"

Finally, I'd like to use the '*' operator to represent the cartesian product of two or more types; for example, the type of a tuple containing two strings and a float could be expressed as "str * str * float". OTOH, tuple[T] would stand for a tuple of arbitrary length whose elements all have type T (analogous to list[T]).

These operators should construct objects that can be introspected at run-time.
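A sketch of what such introspectable objects might look like; Atom and the other names are invented, and real built-in types would need to grow these operators themselves:

class TypeExpr(object):
    def __or__(self, other):  return Union(self, other)
    def __and__(self, other): return Intersection(self, other)
    def __mul__(self, other): return Product(self, other)

class Atom(TypeExpr):
    def __init__(self, type): self.type = type

class Union(TypeExpr):
    def __init__(self, left, right): self.left, self.right = left, right

class Intersection(TypeExpr):
    def __init__(self, left, right): self.left, self.right = left, right

class Product(TypeExpr):
    def __init__(self, left, right): self.left, self.right = left, right

# Atom(file) | Atom(str) yields a Union object whose operands a
# documentation tool or type checker can walk at run time.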

(Note: I'm not quite sure that I really like this operator syntax; I've toyed with alternatives, like union[T1, T2] instead of "T1 | T2", but I'm not keen on intersection[T1, T2] or cartesian[T1, T2] (both too long) and I can't seem to find shorter words. For now, the operators will do.)

Code Generation

One option would be to generate no type checking code, but simply make the types and interfaces available for introspection, e.g. through a __signature__ attribute on all methods. This would be sufficient for people interested in using type declarations for purposes of adaptation (PEP 246). But it seems silly to be able to write buggy code like this:

def foo(x: int) -> str:
    x.append(42)
    return x[:]

a = foo([])

and not get an error at compile time or at run time. I think that this particular example should generate code that includes a type check on the argument so that the invocation foo([]) will be trapped. In addition, a decent compiler should be able to do enough type inferencing within the body of foo() to figure out that both lines in the body of foo() contain static type errors: an int has no append() method, nor does it support the __getslice__() operation. Under certain circumstances the compiler should also be able to figure out that the call foo([]) is a static type error. The compiler can choose to issue errors or warnings about static type errors detected at compile time; in some cases (like foo()'s body) its case is strong enough to issue an error, while in other cases (like perhaps the foo([]) call) there might be a way that the code would execute without errors, e.g. when the foo variable at runtime is bound to a different function than the one visible to the compiler. We could choose to report all static type errors as warnings; the standard warnings module could then be used to turn these into errors, to suppress them, and/or to control the way they are reported.

Extreme Dynamic Behavior

In order to make type inferencing a little more useful, I'd like to restrict certain forms of extreme dynamic behavior in Python.

  • When a built-in is referenced and there is no assignment to a global with the same name anywhere in the module, the type inferencer should be allowed to assume that the real built-in is meant, and it should be allowed to generate errors and code based on that assumption. For example, consider a module A containing this code:

    def foo(a):
        return len(a)
    

    Then the compiler should be allowed to assume that there isn't code in some other module that does something like this:

    import A
    A.len = sum
    

    On the other hand, if module A contains this code:

    def foo(a):
        return len(a)
    
    def setlen(func):
        global len
        len = func
    

    then the compiler should assume that some other module might call setlen() and it should make sure that the reference to len inside foo() grabs the global variable named len if it is set (i.e. it should use the current, fully dynamic lookup, first in globals, then in builtins).

    The compiler should be allowed to assume that no changes to the __builtin__ module are made (except additions, and except documented cases like __import__).

  • Similarly, when a class defines a method with a certain signature, and the compiler cannot see (in the current module) any assignments to an instance variable overriding the method, then the compiler should be allowed to assume that any call to that method will actually call the method it can see, rather than some other method substituted dynamically. It should allow for a dynamic substitution of another method with the same signature, but nothing else. (This may depend on the metaclass used; the compiler should be able to tell which metaclass is used and adjust its assumptions accordingly. If a non-standard metaclass is used, it should conclude that no assumptions are safe.)

  • Various things should be interpreted by the compiler as hints that extreme dynamism may be used; for example, using an object's __dict__ attribute, or the globals() function, or setattr() with an argument whose value isn't a string literal.

  • The presence of "from ... import *" poses extra problems. The compiler should be allowed to assume that this doesn't change the meaning of any previously imported names, and this should be checked at run-time as well. It should be allowed to do redundant imports this way, e.g. if module A contains "import sys", then "import sys; from A import *" does not constitute an error; but if module A contained instead "sys = 42" then that same import would be a run-time (and possibly compile-time) error.

  • The exec statement (without an 'in' clause) poses similar problems. Today, you can write:

    s = raw_input()
    exec s
    print x
    

    and assume that s defines x. I never use this myself (instead, I use "d = {}; exec s in d" and then carefully pick apart what appeared in d). It's probably okay if the compiler simply turns off most of its type inferencing after seeing an exec without 'in' in a given scope. (Maybe the presence of exec without 'in' should by itself be worthy of a warning.)

  • Duplicate definitions in a class body should be considered errors, except when guarded by if clauses.

  • This list is open-ended; I expect that PyChecker already implements many heuristics from which we can learn (a) how to recognize extreme dynamic behavior, and (b) which dynamic behavior is likely or (almost) certainly a mistake.

Standard Interfaces

There are a bunch of de-facto interfaces that are used all the time in current Python, for example number, file-like, callable, mapping, sequence, iterable, iterator. (What about sets?) I think all of these should become built-in formal interfaces that should be used in preference to the corresponding concrete types. This would once and for all decide the question of whether something needs to implement writelines() or isatty() before it can be considered file-like, and whether every mapping needs to implement iteritems(). I don't intend to write all these up exactly at this point; I expect that getting all the little details right here is probably worthy of a PEP if not several (I imagine file-like and number could turn into great little mine fields).

Two special types deserve some attention:

  • 'any', already mentioned, is the type that's the union of all possible types; no operation on an object of type 'any' should ever be flagged as a static error by the compiler. Assignment from 'any' to a more specifically typed object should never be considered a static error but instead cause a run-time typecheck. (In general, I think narrowing operations shouldn't require explicit casts.) 'any' is the type of any expression whose type is unknown. In current Python, the implicit type of every variable and expression is 'any'.
  • 'nothing' is pretty much the opposite of 'any' -- it is the union of no types. There are no values of type 'nothing'; this sets it apart from NoneType, which has one value, None. 'nothing' is the element type of the empty sequence or mapping. 'nothing' disappears when united with any other type: for any type T, (nothing | T) == (T | nothing) == T. You can't declare a variable, argument or return type to be nothing, but it is useful as the "zero" of the type algebra.

Loose Ends

  • There could be an operator (a keyword, not a built-in function) 'typeof' that takes an expression argument and returns the static type (as inferred by the compiler) of that expression, without evaluating it. For example, typeof(3) == int, typeof(int) == type, typeof([]) == list[nothing], typeof([1, 2, 3]) == list[int], typeof([1, "a"]) == list[int|str], typeof(C()) == C (assuming C is a class -- more strictly, this should return the interface implied by the class C). If the compiler is doing good type inferencing, this example:

    def foo():
        return 42
    print typeof(foo())
    

    should print "int", not "any".

  • Sometimes we want to insist that an argument's type is a specific class, rather than just something that conforms to that class's interface. (I.e., we don't want duck typing.) We could express this by inserting the keyword 'class' in front of the type. For example:

    def foo(x: class int) -> int:
        ...
    

    In this example, x should be a real int (or a real subclass thereof). In particular, this would exclude a long argument.

  • There should be a notation for call signatures. For example, I should be able to declare x as being a callable that takes two ints and returns a string. Perhaps this would work?:

    x: def(int, int) -> str
    

    How to represent keyword arguments? Perhaps this:

    x: def(a: int, b: int) -> str
    

    But the current LL(1) parser can't quite handle that; it needs to be able to commit to one of the syntactic alternatives after seeing the first token, and 'int' is just an identifier just like 'a'. (Incidentally, this is also the reason why the C/Java/Pyrex style of argument declarations won't work.) It would be ugly to have to always require parameter names, or to require a dummy parameter name. I'm toying with these variants:

    x: def(:int, :int) -> str
    x: def(_: int, _: int) -> str
    

    but neither looks very attractive given that callables with only positional arguments are much more common than ones with keyword arguments (in those cases where one needs to declare an attribute or argument to have a callable type).

    Perhaps the keyword used should be lambda instead of def? I've also played with the idea of not having a keyword at all:

    x: (int, int) -> str
    

    but this would mean we can't use parentheses for grouping in type expressions -- I don't know if that's acceptable, especially when using the &, |, * composition operators. (And then again, -> could be the operator that makes something into a callable?)

  • Let's get rid of unbound methods. When class C defines a method f, C.f should just return the function object, not an unbound method that behaves almost, but not quite, the same as that function object. The extra type checking on the first argument that unbound methods are supposed to provide is not useful in practice (I can't remember that it ever caught a bug in my code) and sometimes you have to work around it; it complicates function attribute access; and the overloading of unbound and bound methods on the same object type is confusing. Also, the type checking offered is wrong, because it checks for subclassing rather than for duck typing.

Conclusion

I've covered a lot of ground here. And there's a lot more to cover. Making it longer isn't going to help, and I don't really have time to make it shorter at this point, so here goes. I've been rambling too long already; I'm throwing this over the fence now and am expecting another fruitful discussion (without too many repeats or me-toos, please). I hope to have time to blog weekly until the topic is exhausted, hopefully resulting in a PEP and an implementation plan.


Phillip J. Eby

Posts: 28
Nickname: pje
Registered: Dec, 2004

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 3, 2005 6:59 PM
Mostly good; but please don't actually type check; use PEP 246 semantics instead. I.e. instead of asserting that you receive an object that already conforms to an interface, invoke adapt() and let it raise an error if the item can't be adapted.

Basically, if you require an actual type check, PEP 246 is DOA with respect to declarations. Here's why. Let's say I have a routine that takes an IFoo parameter, that I currently adapt to *inside* the routine, so that callers can pass a variety of items. For example, in PEAK there are certain interfaces that can accept either a filename, a URL, or a stream factory. There's a 'config.IStreamFactory' interface that many methods adapt their arguments to. This allows the caller freedom to pass whatever makes sense in context; they don't have to explicitly wrap the arguments in an 'IStreamFactory()' call.

However, if this proposal were implemented as it stands, these methods in PEAK could *not* declare themselves as receiving IStreamFactory arguments, because this would then break adaptation! A string isn't going to conform to the typechecker's idea of IStreamFactory, so it can't get into the method to be adapted. So, if you're using PEP 246 semantics now (as Zope, Twisted, and PEAK do), you must declare your method arguments 'any' or leave them undeclared. It would be one thing if your proposal broke *some* of the code out there, but as it sits it will break *all* adaptation-based (PEP 246 or otherwise) systems.

Of course, I suppose one could use 'IStreamFactory | any' as a solution, but that seems odd, and if documentation tools only end up with the result of that union (i.e. 'any'), then it doesn't help and you might as well not bother declaring anything.

Anyway, this is one of the reasons I think type checking should either use PEP 246 semantics, have no required semantics at all, or provide some kind of hooks so that people can define custom semantics for type checking/conversion/adaptation. Maybe you are already thinking something like this, but if so it wasn't clear from your post.

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 3, 2005 7:26 PM
For me, type checking first gives the programmer/reader/compiler (let's summarise all this as "the client") information about the method, just as the docstring, pre- and post-conditions or interfaces do. I don't know if many other folks here share this view, but for me it's the main point: the client needs information about the method before diving into the code itself.

So, what are the consequences? That information must be as readable as possible and, if possible, easily accessible before reading the body of the method.

Something like this:

info1
info2
info3
def foo(arg1, arg2, arg3):
    body

or more precisely:

args_types
return_types
interfaces
preconditions
postconditions
docstring
def foo(arg1, arg2, arg3):
    body

would be very readable and useful (OK, maybe a little heavy!).

If we now consider the syntax used in the article (I do not want to criticise the syntax itself, I just want to discuss the concept of information about methods in general):


def foo(arg1: type1, arg2: type2, arg3: type3) -> return_type:
    """docstring"""
    preconditions
    body
    postconditions


Here all the information is scattered... for someone who just wants to know "what the method does", "how to use it", "how to check the inputs and outputs", it's quite hard... and hard to automate...

That's why, for me, the principle of using decorators to express all of this would be quite elegant:

@accepts(type1, type2, type3)
@returns(return_type)
@preconditions(pre1, pre2, pre3)
@postconditions(post1, post2, post3)
@documentation("docstring")
def foo(arg1, arg2, arg3):
    body
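For concreteness, a minimal runnable sketch of @accepts and @returns (the decorator names are just the ones used above; nothing here is standard):

def accepts(*types):
    def decorate(func):
        def wrapper(*args, **kwargs):
            for a, t in zip(args, types):
                assert isinstance(a, t), "expected %s, got %r" % (t.__name__, a)
            return func(*args, **kwargs)
        wrapper.__name__ = func.__name__
        wrapper.__accepts__ = types    # keep the declaration introspectable
        return wrapper
    return decorate

def returns(rtype):
    def decorate(func):
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            assert isinstance(result, rtype), "expected %s, got %r" % (rtype.__name__, result)
            return result
        wrapper.__name__ = func.__name__
        wrapper.__returns__ = rtype
        return wrapper
    return decorate

@accepts(int, int)
@returns(str)
def add_repr(a, b):
    return str(a + b)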

This approach seems (to me) simple because it keeps the current Python syntax unchanged and makes it easy to:
- auto-generate documentation about a method
- understand the method's inputs and outputs
- enable compiler optimisations (if all the above decorators are some kind of builtins/standards)
- keep everything optional
- allow easy migration of old Python code (all the new code is outside the body of the method)

But I have to admit that such an approach has some defects:
- the syntax is not very "common" (we are all used to the idiom "arg_type : arg_name" or "arg_name : arg_type", which is very common in many programming languages: C, Java, Basic, Delphi...)
- it uses the decorators feature... not everybody loves them... and they are brand new in Python... (but what's the point of adding them to the language and not using them ^_^ )
- many others I can't think of right now...

As Guido wrote, design by contract, interfaces and static typing (and for me documentation too) are linked... that's why I wrote this... Python is a very simple/clear language, and I think that is its strength, and that the "decorator" approach is the simplest way to write all those things... I may be wrong... but I think the topic IS important.

Guido van Rossum

Posts: 359
Nickname: guido
Registered: Apr, 2003

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 3, 2005 8:01 PM
I was planning (but forgot to write) that all run-time typechecking would be of the form typecheck(x, T) which would return x if isinstance(x, T) and otherwise raise an exception. It would be simple enough to make this do adaptation instead -- either by default or through a hook.
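A sketch of that typecheck() with an optional adaptation hook (the hook argument is illustrative; PEP 246's adapt() would be the natural thing to plug in):

def typecheck(x, T, adapt=None):
    if isinstance(x, T):
        return x
    if adapt is not None:
        return adapt(x, T)    # e.g. PEP 246 adaptation; raises if impossible
    raise TypeError("%r does not conform to %s" % (x, T))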

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Interface and design by contract Posted: Jan 3, 2005 8:18 PM
The idea of using interfaces to get design-by-contract behaviour is quite interesting... but here is my problem with it:

interface I:
    def foo(x: int) -> str:
        "The foo method."
        # precondition
        assert x > 0

class C(I):
    def foo(self, x):
        # here the precondition must be verified
        return str(x)

How do you have a precondition without implementing an interface?

The simplest way is just something like this:

class C:
    def foo(self, x):
        # precondition
        assert x > 0
        # body (the precondition must be verified)
        return str(x)


What are the problems with this? First, it's not really readable, because it's hard to tell (except from the comment) that the first and second pieces of code share the same precondition... Second, preconditions are hard to extract from the code (for example for documentation purposes), and third, how do you write a postcondition?

But in principle it is very smart, because it makes it possible to inherit contracts... and therefore makes interfaces even more powerful than before.

Let's see the possible variants.

"Perhaps we could use a nested function with a designated name"

interface I:
    def foo(x: int) -> str:
        "The foo method."
        # nested functions for pre- and postconditions
        def pre(x):
            assert x > 0
        def post(r):
            assert r is not None

class C(I):
    def foo(self, x):
        # here the precondition must be verified
        return str(x)

and in the no-interface version:

class C:
    def foo(self, x):
        # nested functions for pre- and postconditions
        def pre(x):
            assert x > 0
        def post(r):
            assert r is not None
        # body (the precondition must be verified)
        return str(x)

I chose short and simple names for the "nested function with a designated name"; the point is not the names but the fact that now we have a common behaviour for pre- and postconditions with and without interfaces. I also decided to use the same principle (a nested function) for both pre- and postconditions because it seems more natural (to me) to have symmetric behaviour.

General purpose of interfaces for methods
Interfaces are an easy way to define a type and to specify a behaviour. But they also seem to be a good way to define many common things for the methods they declare:
- name
- docstring
- contract (pre/postconditions)
- signature
- others???

Interface and type checking
Some prefer interfaces rather than type checking... I think those are just two different things. You may want to check the type of your parameters (just to be sure you are working with integers, for example), or you may want to ensure your parameter has an append() method so that you can use it.

You may even want to be sure that your param has an append(int) method -- so you want to check an interface and do some type checking.

Type checking is very useful for optimization, readability and simple debugging. Interfaces are very useful for their incredible flexibility. But having both makes each more powerful.

Syntax is not the point
Even though I'm a "decorator syntax" defender for all this stuff (cf. my first reply to this post), I agree with Guido about not discussing the syntax of all those things... what is interesting here is to see what is important for the language, how to make everything work together (contracts/interfaces/typechecking) and how to keep Python pythonic (which means, for me, smart, simple and readable).

Guido van Rossum

Posts: 359
Nickname: guido
Registered: Apr, 2003

Re: Interface and design by contract Posted: Jan 3, 2005 9:06 PM
Interesting ideas. A question: by what mechanism do you think the nested functions for the pre- and post-condition will be invoked?

Phillip J. Eby

Posts: 28
Nickname: pje
Registered: Dec, 2004

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 3, 2005 11:08 PM
Okay. So my next question is, why so sophisticated of a type system? I mean, relatively few languages support all the bells and whistles you've got here, and Python certainly has gotten on well enough without most of them. It seems to me that your past philosophy has been very 80/20-ish - get 80% of the use cases with 20% of the baggage, and most of what you've posted here seems outside that 20%. Declarations+adaptation seem to me all that's needed for runtime use and foreign language interfacing, which is historically what Python has focused on. Runtime use, I mean. Well, and interfacing with other languages, to some extent.

I mean, it would also be *nice* to be able to spell things like 'list[int]' and 'IComponentFactory[INamingService]' and such (although the latter is technically possible now, except for the declaration part). I'm just not seeing them as having that much runtime value.

However, I can see that without a "complete" system it would be hard to write a "complete" type checker, and if that's your motivation I suppose it makes sense. I'm just finding it hard to see why a good code coverage tool wouldn't be more helpful in finding *actual* bugs, as opposed to just holes in the type checker's understanding of the code. I mean, how many errors in *your* code would the thing you plan to write have caught?

Seriously, I think that if Python could just warn you about unexecuted branches of code it'd probably be almost 80% as effective as any super-intelligent analysis tool, at a fraction of the development cost. (Although I admittedly have no hard facts to back that up.) Pretty much by definition, though, the bugs in my code are nearly always in branches not covered by tests. (Which is why I'm starting to get very particular about writing tests first, but that's another topic.)

I'm also *really* surprised (and I get the impression others are as well), that someone as pragmatic as you, who perhaps thinks that folks like me and the Zope folks are maybe a little too far out there in framework land, wants to build a framework as complex as this one. :)

I mean, if I had to build the framework you're proposing, I'd be more than a little scared, because I'm not sure it's do-able without creating lots of unanticipated holes. (E.g., just the issue of covariance vs. contravariance for parameterization of interfaces -- which I think you skipped here -- gives me the shivers. I only touched the edge of this issue in PyProtocols and backed away fairly quickly.)

I would also especially worry about creating situations where users would have to turn off the checking to get anything done, but then would have the problem of interfacing with code written by other users with static backgrounds who typecheck everything as a security blanket.

So I guess I'd just like to ask you to treat the type checking project as just another "client" of type declarations, interfaces, and such, rather than making the static type checker a central focus of your work. After all, the primary value of a program is what it does at runtime, not (usually) its compilation!

I also think there are lots of *other* use cases for declarations and interfaces and lots of the stuff you've got here, and I do like most of it. But not all of the people who want that stuff want to have anything to do with static typechecking, and for some of us the first question is going to be how to switch *off* the typechecking, even if that's a purely superstitious emotional reaction, or kneejerk performance obsession. :)

I also wonder how this impacts Jython and IronPython in their interaction with foreign type systems. It seems to me that it's another area where keeping the type system simple would be helpful, since types from non-Python languages obviously aren't going to be accessible to the Python type checker.

Oh well, I think I'm rambling now too, and you said you didn't want repetition, so I'll quit now. :)

Ka-Ping Yee

Posts: 24
Nickname: ping
Registered: Dec, 2004

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 4, 2005 3:55 AM
I'm rather surprised that you're suggesting such an ambitious and complex type system. Not that this is a bad thing, but it feels a little out of character for Python as i'm used to it. I've just been reading a bit about Scala (http://scala.epfl.ch/index.html) and it's pretty interesting stuff — if you're going to explore things like parameterized types, i'd say Scala is worth a look. Even if you don't model Python after aspects of its type system, there's still a lot to learn from it.

To prevent possible YAGNI pitfalls (designing lovely deep complex type systems for the sake of their own elegance rather than to serve a practical purpose) i'd like to give a gentle push in the direction of motivating each feature of the design with a practical example. The examples don't have to be complicated, but working through examples from start to finish can yield some insights as to what is actually useful or confusing.

What do you think of allowing code in interfaces? Your idea of putting precondition checks as method bodies seems to preclude the possibility of putting mixin functionality in interfaces. But i think it would be quite useful to be able to define some code in an interface:
interface Dict:
    # implementors must implement these (no trailing colon means no implementation)
    def __delitem__(self, key)
    def __getitem__(self, key) -> any
    def __setitem__(self, key, value)
    def keys(self) -> list

    # implementors may optionally override these
    def __len__(self):
        return len(self.keys())
    def __contains__(self, key):
        return key in self.keys()
    def clear(self):
        for k in self.keys():
            del self[k]
    def copy(self) -> Dict:
        result = self.__class__()
        result.update(self)
        return result
    def get(self, key, defaultvalue=None):
        try:
            return self[key]
        except KeyError:
            return defaultvalue
    def has_key(self, key):
        return self.__contains__(key)
    def items(self):
        return [(key, self[key]) for key in self.keys()]
    def iteritems(self):
        return iter(self.items())
    def iterkeys(self):
        return iter(self.keys())
    def itervalues(self):
        return iter(self.values())
    def pop(self, key, defaultvalue=None):
        try:
            value = self[key]
        except KeyError:
            return defaultvalue
        else:
            del self[key]
            return value
    def popitem(self):
        if len(self.keys()):
            key = self.keys()[0]
            return (key, self.pop(key))
        else:
            raise KeyError
    def setdefault(self, key, defaultvalue=None):
        if key not in self:
            self[key] = defaultvalue
        return self[key]
    def update(self, dict):
        for key in dict:
            self[key] = dict[key]
    def values(self):
        return [self[key] for key in self.keys()]
The interface seems to be a natural place to put such boilerplate code, rather than having to repeat it in every class that implements Dict. Admittedly, the implementations i've written here are quite inefficient, but i'm trying to make the point that having this in the interface allows implementations of Dict to cheaply achieve consistency with a wide API: you only really need to write four methods to get all 15 or so methods in working order, which is good for rapid prototyping.

You could say this is in line with the "adapt rather than fail" philosophy of PEP 246. Just as PEP 246 suggests that the runtime should try to find an appropriate adaptor rather than aborting when there is a type mismatch, allowing code in the interface allows the programmer the flexibility to implement more or less of an interface but still end up presenting a standard, complete API to clients.

There are only a few type declarations in the above example. I initially had type annotations on everything. Then i assumed that the default type for an argument is any, and that the default type for a return value is any if there are any return statements present, or None otherwise, and removed annotations that were covered by these defaults. Then i removed type annotations for return types that you could obviously read right off of the outermost expression in the return statement. That got rid of almost all of them.

For fun, here's an attempt to do the same example with parameterized types.
interface Dict[K, V]:
    # implementors must implement these (no trailing colon means no implementation)
    def __delitem__(self, key: K)
    def __getitem__(self, key: K) -> V
    def __setitem__(self, key: K, value: V)
    def keys(self) -> list[K]

    # implementors may optionally override these
    def __len__(self):
        return len(self.keys())
    def __contains__(self, key):
        return key in self.keys()
    def clear(self):
        for k in self.keys():
            del self[k]
    def copy(self) -> Dict[K, V]:
        result = self.__class__()
        result.update(self)
        return result
    def get(self, key: K, defaultvalue=None: V | None) -> V | None:
        try:
            return self[key]
        except KeyError:
            return defaultvalue
    def has_key(self, key: K):
        return self.__contains__(key)
    def items(self) -> [K * V]:
        return [(key, self[key]) for key in self.keys()]
    def iteritems(self) -> iter[K * V]:
        return iter(self.items())
    def iterkeys(self) -> iter[K]:
        return iter(self.keys())
    def itervalues(self) -> iter[V]:
        return iter(self.values())
    def pop(self, key: K, defaultvalue=None: V | None) -> V | None:
        try:
            value = self[key]
        except KeyError:
            return defaultvalue
        else:
            del self[key]
            return value
    def popitem(self) -> K * V:
        if len(self.keys()):
            key = self.keys()[0]
            return (key, self.pop(key))
        else:
            raise KeyError
    def setdefault(self, key: K, defaultvalue: V) -> V:
        if key not in self:
            self[key] = defaultvalue
        return self[key]
    def update(self, dict: Dict[K, V]):
        for key in dict:
            self[key] = dict[key]
    def values(self) -> list[V]:
        return [self[key] for key in self.keys()]


Note that i found myself forced to change the API here. I had to remove =None as the default value for the last argument to setdefault because i could no longer assume that None would be an allowed member of type V.
Urg. I know this is a side issue, but the inconsistent capitalization of type names was starting to drive me batty.

Ka-Ping Yee

Posts: 24
Nickname: ping
Registered: Dec, 2004

Preconditions and Postconditions Posted: Jan 4, 2005 4:18 AM
Hmm. I just thought of a way to do preconditions and postconditions that seems conceptually elegant (though perhaps syntactically ugly).

Suppose you establish a convention that calling any method foo first calls the method pre_foo with the arguments to foo (if pre_foo exists); then calls foo; then calls the method post_foo with the value returned by foo (if post_foo exists).

Then preconditions and postconditions are just normal methods.

If you allow interface definitions to include method bodies, then you can easily put preconditions and postconditions in the interface. The precondition and postcondition methods are inherited by implementors of the interface, just like any other methods defined in the interface.

Using a name-mangling convention is icky to me, but the simplicity of having preconditions and postconditions inherited in the normal fashion is appealing. It also enables usual techniques for code reuse like calling other preconditions or postconditions in the current class or superclass, and so on. (If preconditions and postconditions were written in nested functions, you wouldn't be able to refer to them elsewhere.)

interface Account:
    def deposit(self, amount: int)
    def get_balance(self) -> int
    def withdraw(self, amount: int)

    def pre_deposit(self, amount):
        assert amount >= 0
    def post_deposit(self, amount):
        assert self.get_balance() >= amount
    def post_get_balance(self, result):
        assert result >= 0
    def pre_withdraw(self, amount):
        assert amount >= 0
        assert self.get_balance() >= amount
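For what it's worth, here is a rough sketch of how the pre_/post_ convention could be wired up with a metaclass in today's Python; every name here is illustrative, none of it is part of any proposal:

def _checked(func, pre, post):
    def wrapper(self, *args, **kwargs):
        if pre is not None:
            pre(self, *args, **kwargs)    # precondition sees foo's arguments
        result = func(self, *args, **kwargs)
        if post is not None:
            post(self, result)            # postcondition sees foo's result
        return result
    wrapper.__name__ = func.__name__
    return wrapper

class ContractMeta(type):
    def __init__(cls, name, bases, ns):
        super(ContractMeta, cls).__init__(name, bases, ns)
        for attr, func in ns.items():
            if not callable(func):
                continue
            if attr.startswith('pre_') or attr.startswith('post_'):
                continue
            pre = getattr(cls, 'pre_' + attr, None)
            post = getattr(cls, 'post_' + attr, None)
            if pre is not None or post is not None:
                setattr(cls, attr, _checked(func, pre, post))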

Tricky questions:

1. What about postconditions that specify when an exception should or should not be thrown?

2. What about postconditions that refer to the initial values of variables? It's quite common to describe a postcondition on x in terms of "the initial value of x". I suppose it would be sufficient to provide the precondition routine with a place to stash away the initial value of x, though that seems a bit tedious.

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Re: Interface and design by contract Posted: Jan 4, 2005 4:44 AM
Here is a simpler way to use the pre/post idioms:

interface I:
    def foo(arg1, arg2):
        def pre():
            assert arg1 > 0
            assert arg2 > 0
        def post(result):
            assert result is not None


About the arguments

First, there is no problem for the pre and post functions to access the parameters of the foo method, because they are nested.
Second, the post function *needs* an argument to access the result of the function if we do not want to use another reserved word ("_result" or something like that).

How to invoke the pre/post functions?

Any callable/function should have attributes __pre__ and __post__ which are themselves functions/callables. If pre/postconditions are activated (not in -O mode???), any call to the function could become something like this:

__pre__()
result = __raw_function__(*args, **kwargs)
__post__(result)
return result

which supposes that we have a __raw_function__ that is the exact equivalent of the function itself without the pre and post... or we could do something like this:

__pre__()
# execution of the code
# the result is retrieved in a variable internally called "result"
__post__(result)
return result

For me this second solution is better, because we do not have to copy the params to the __raw_function__. And I do not like the idea that every function has to carry an attribute corresponding to a raw version of itself (which is most of the time useless).
Here we just have to add a little piece of code to retrieve the result of the function and then test it through the __post__ function.

nested function redefinition problem

I already implemented such a system of pre/postconditions (without the interface system), and here is the problem I had.

def foo1(arg1, arg2):
    # start of body
    # all stuff here is executed for each call
    # end of body

def foo2(arg1, arg2):
    # start of body
    def pre():
        # body of the precondition
    def post(r):
        # body of the postcondition
    # all stuff here is executed for each call
    # end of body

The definitions of pre() and post(r) here are executed each time the function is called... which means that a new function is defined every time:

>>> def foo2():
...     def nested():
...         pass
...     return nested
...
>>> foo2() == foo2()
False

So if we want to use this kind of syntax we have to change something... it is stupid to define new pre/post functions every time the function is called.

I thought of a solution with a syntax like this one:

>>> def foo2():
...     global def nested():
...         pass
...     return nested
...
>>> foo2() == foo2()
True

...of course this syntax currently doesn't work... it is not possible to use the global keyword for function definition... but it may be a solution...

The solution I used... and its defects

Here is what I did to solve the problem of function redefinition... I decided to define the pre- and postconditions as decorators (because they are executed only once, when the function itself is defined), but decorators are not nested... so I cannot access the args of the function except by passing them through as arguments (copying the arguments is quite complicated and long, considering the *args and **kwargs system).

Let's come back to the interface problem

We have gone far away from the interface problem itself... but what have we learned about it?

An interface is a very simple way to express behaviour common to many classes/methods. It seems simple to say that it's an elegant way to feed function attributes such as:
- __name__
- __docstring__
- __signature__
- __precondition__
- __postcondition__
but we must not forget that these attributes (if they exist) have to be settable from a classic method (I mean one that is not defined in an interface).

Using the interface method body to define pre- and postconditions is a good idea... as long as it is also possible in non-interfaced methods. It's just as if it were only possible to specify the typing of arguments in interfaces... that would obviously seem too heavy/complicated a way of implementing static typing.

So here is my solution (based on an extension of the "global" keyword).

# without interface
class MyClass:
    def foo(arg1, arg2):
        global def __pre__():
            assert arg1 > 0
            assert arg2 > 0
        global def __post__(result):
            assert result is not None
        # body of the method

# with interface
interface MyInterface:
    def foo(arg1, arg2):
        """docstring"""
        global def __pre__():
            assert arg1 > 0
            assert arg2 > 0
        global def __post__(result):
            assert result is not None

class MyClass(MyInterface):
    def foo(self, arg1, arg2):
        # body of the method

With that solution everything seems possible, and we used as few new keywords as possible... we just need to extend the "global" keyword so that it can be applied to a function definition.
Of course, if a nested function called __pre__() is not declared as global there should be a warning... to prevent mistakes.
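
Until such an extension exists, one present-day approximation defines the condition functions once, at class-creation time, and attaches them as attributes. This reuses the with_contracts helper sketched earlier; all names are illustrative:

class MyClass:
    def foo(self, arg1, arg2):
        return arg1 + arg2              # body of the method

    def _pre(self, arg1, arg2):        # defined once, not per call
        assert arg1 > 0
        assert arg2 > 0

    def _post(result):                 # checks only the result
        assert result is not None

    foo.__pre__ = _pre
    foo.__post__ = _post
    foo = with_contracts(foo)          # helper from the earlier sketch

obj = MyClass()
print(obj.foo(1, 2))                   # both conditions pass; prints 3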

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Some code in interface... Posted: Jan 4, 2005 5:02 AM
I'm not a "pure interface bigot" but I have some objection to the addition of code in interface definition:
- theorically an interface as no code in it
- allowing code in interfaces is equivalent to have a C++ style interface... what is class with pure virtual methods...
- interface are here (in this discussion) used for static typing what is not directly linked to implementation of the defined method (it is another problem)

I think your remark is really interesting... but here is how I would provide the same feature with no code in the interface:

interface Dict:
    def __delitem__(self, key)
    def __getitem__(self, key) -> any
    def __setitem__(self, key, value)
    def keys(self) -> list
    def __len__(self)
    def __contains__(self, key)
    def clear(self)
    def copy(self) -> Dict
    def get(self, key, defaultvalue=None)
    def has_key(self, key)
    def items(self)
    def iteritems(self)
    def iterkeys(self)
    def itervalues(self)
    def pop(self, key, defaultvalue=None)
    def popitem(self)
    def setdefault(self, key, defaultvalue=None)
    def update(self, dict)
    def values(self)

class DictImplementation(Dict):
    # implementors must implement these (if not, an assertion fails)
    def __delitem__(self, key):
        assert False
    def __getitem__(self, key) -> any:
        assert False
        return None
    def __setitem__(self, key, value):
        assert False
    def keys(self) -> list:
        assert False

    # implementors may optionally override these
    def __len__(self):
        return len(self.keys())
    def __contains__(self, key):
        return key in self.keys()
    def clear(self):
        for k in self.keys():
            del self[k]
    def copy(self) -> Dict:
        result = self.__class__()
        result.update(self)
        return result
    def get(self, key, defaultvalue=None):
        try:
            return self[key]
        except KeyError:
            return defaultvalue
    def has_key(self, key):
        return self.__contains__(key)
    def items(self):
        return [(key, self[key]) for key in self.keys()]
    def iteritems(self):
        return iter(self.items())
    def iterkeys(self):
        return iter(self.keys())
    def itervalues(self):
        return iter(self.values())
    def pop(self, key, defaultvalue=None):
        try:
            value = self[key]
        except KeyError:
            return defaultvalue
        else:
            del self[key]
            return value
    def popitem(self):
        if len(self.keys()):
            key = self.keys()[0]
            return (key, self.pop(key))
        else:
            raise KeyError
    def setdefault(self, key, defaultvalue=None):
        if key not in self:
            self[key] = defaultvalue
        return self[key]
    def update(self, dict):
        for key in dict:
            self[key] = dict[key]
    def values(self):
        return [self[key] for key in self.keys()]

Of course a drawback of that solution is that it is no longer possible to detect whether the implementor forgot to implement a method... but I prefer to let him take care of that (he will notice the error at runtime) rather than have an "interface with code".
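
A small variation worth noting: raising NotImplementedError instead of assert False still fails loudly under python -O (which strips assert statements), so a forgotten override is caught at the first call. A sketch, with an ordinary base class standing in for the hypothetical interface:

class DictCore:
    # stand-in for the core methods of the Dict interface above
    def __delitem__(self, key):
        raise NotImplementedError("implementors must override __delitem__")
    def __getitem__(self, key):
        raise NotImplementedError("implementors must override __getitem__")
    def __setitem__(self, key, value):
        raise NotImplementedError("implementors must override __setitem__")
    def keys(self):
        raise NotImplementedError("implementors must override keys")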

For me an interface must:
- define a type
- define behaviour
- initialize class/function attributes

If you want to keep implementors from typing the same code a thousand times, you just need to provide a default implementation of the interface (that is what you do in Java, and it is not so bad).

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

no trailing colon means no implementation Posted: Jan 4, 2005 5:32 AM
Having something as clean as this code is obviously interesting (I simplified the example so that it is easier to read):

interface Dict:
    def __delitem__(self, key)
    def __getitem__(self, key) -> any
    def __setitem__(self, key, value)
    def keys(self) -> list

...but what kind of information do we have about the declared methods? Signatures... and nothing more. What is the purpose of __delitem__? We could add a comment about it... but this should really go in the docstring (and of course then be inherited by the implementation).

So for me it is better to always have a colon (just as in any function definition) and then anything that could appear in a normal function... except maybe code (that is another discussion).

interface Dict:
    def __delitem__(self, key):
        """Delete an item from the dictionary. Raise an exception if the key does not exist."""
    def __getitem__(self, key) -> any:
        """Return the item corresponding to the key. Raise an exception if the key does not exist."""
    def __setitem__(self, key, value):
        """Set an item's value for the corresponding key. If the key does not exist it is added; if it already exists the value is replaced."""
    def keys(self) -> list:
        """Return all the keys as a list."""

(Here I just added the docstrings, but many other things could appear, such as pre/postconditions.)
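
The docstring-inheritance part is already expressible in plain Python; here is a minimal sketch, where inherit_docstrings is my own helper and an ordinary base class stands in for the interface:

def inherit_docstrings(cls):
    # copy docstrings from base classes onto overriding methods that lack one
    for name, member in vars(cls).items():
        if callable(member) and member.__doc__ is None:
            for base in cls.__bases__:
                parent = getattr(base, name, None)
                if parent is not None and getattr(parent, '__doc__', None):
                    member.__doc__ = parent.__doc__
                    break
    return cls

class DictSpec:
    def keys(self):
        """Return all the keys as a list."""

class MyDict(DictSpec):
    def keys(self):                     # no docstring of its own
        return list(self._data)

inherit_docstrings(MyDict)
assert MyDict.keys.__doc__ == "Return all the keys as a list."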

Jeremy Stein

Posts: 4
Nickname: jstein
Registered: Jan, 2005

Perspective from a Java developer Posted: Jan 4, 2005 6:07 AM
I'm primarily a Java developer (though I've used C++, Eiffel, etc.). I thought I'd give you my perspective on this issue. I've read the Python manual, but haven't had the opportunity to write any real programs with it.

It took me a while to get used to dynamic typing. It's been well-ingrained in me that loosely typed variables are bad, and I've been burnt by it in VBScript programming with ASP. The first hurdle was to understand the difference between loose typing and dynamic typing. The second was to understand duck typing. Once I got these smashed into my brain, it opened my eyes to a whole new, easier, better(?) way of programming.

However, if I had been learning Python and read that I could declare types for function parameters, but I didn't have to, I certainly would have assumed that it's better to do so. I think you'll find that people moving to Python will take advantage of whatever language features look most familiar. This might be great for experts who already understand the advantages of programming the Python way, but you might be throwing the newbies for a loop. A great way to teach Python philosophy is to require it in the language.

Marek Baczyński

Posts: 3
Nickname: imbaczek
Registered: Jan, 2005

Re: Perspective from a Java developer Posted: Jan 4, 2005 6:30 AM
> Python way, but you might be throwing the newbies for a
> loop. A great way to teach Python philosophy is to
> require it in the language.

I concur. Python as-is is a great eye-opener (and that's what makes Java/static-typing drones hate it the most, IMHO). Adding a type system like the one described in the article would make Python a poor man's ML, because most people will use a feature simply because it is there.

It's not that I don't want a faster Python (and I believe a system like this plus a type inferencer would make Python so much faster), but I don't know if damaging the simplicity of the language is really worth it. I have a feeling that writing today's Python under your proposal would become a form of art rather than usual everyday coding. (IMHO, using adaptation will help reduce the scale of this issue, but won't remove it.)

Ka-Ping Yee

Posts: 24
Nickname: ping
Registered: Dec, 2004

Re: Some code in interface... Posted: Jan 4, 2005 7:25 AM
This is regarding my suggestion that the interface contain code to define the behaviour of "extra" methods (i.e. methods whose behaviour is expected to derive from the behaviour of the "core" part of a class).

I'm aware that it's possible to achieve the same goal by writing out the interface and then separately providing a mixin class that implements the extra methods. But what advantage does it give you? I don't see any. The pure-virtual-interface-plus-mixin-class solution has several drawbacks:

1. You have two names to deal with (Dict, DictImplementation), which you have to keep straight (presumably by some naming convention).
2. Some classes will inherit from Dict, others will inherit from DictImplementation. If the recommended style is to inherit from DictImplementation, why not just have one type?
3. You have to repeat the signatures of all the methods in DictImplementation even though you already wrote them for Dict.

The single-interface solution is less work, but I'd also argue that it makes more sense: to me, the behaviour of a method like setdefault is part of the Dict interface semantics. So it seems reasonable that it should be right there in the interface.
