The Artima Developer Community

Weblogs Forum
Adding Optional Static Typing to Python -- Part II

59 replies on 4 pages. Most recent reply: Jun 21, 2005 5:42 PM by A. Ellerton

daniel suarez

Posts: 2
Nickname: danielsu
Registered: Jan, 2005

Re: Roadmap... Posted: Jan 4, 2005 11:41 PM
fixed:

def foo(a: int):
    def pre_Something(): pass
    # foo code
    def post_Something(): pass

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Decorators for typechecking Posted: Jan 5, 2005 4:55 AM
You seem to have run into some problems using decorators :) but in any case, here is my point of view.

For the type checking problem we have many things to deal with:
- won't type checking be too strict for Python?
- what mechanism is going to be used to manage types?
- can we gain a lot in optimisation?
- can we gain a lot in documentation?
- what is the best syntax for this?

For me the last question is the least interesting: a syntax choice is just an arbitrary choice (or a choice that depends on how much one is willing to complicate the parser).

But with no syntax, how do we deal with the other questions? We cannot do everything by thought experiment like Einstein! We have to test things... and using the current possibilities of the language is, for me, a good way to do so.

Maybe decorators are not the best way... but they are the easiest from my point of view. If you read PEP 318 you will see that this was one of the stated aims of decorators.

And last thing: I do not think that the syntax:

@accepts(int, int)
def foo(arg1, arg2):
    # foo code here

is a hack... it's just a decorator, nothing more. I am just asking for a standardisation of such a mechanism so that we can start to use type checking and see the real advantages.

Maybe creating a PEP about it would surface some needs or problems around type checking in Python.

Here is what I mean with my "decorators for type checking".

Note: using decorators is not really "static" type checking, I admit... but even with that syntax it could be used for static optimisation, I think. For example, the Psyco library could use it for optimisation... this is just one example...
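For concreteness, here is one way such an accepts() decorator could be sketched today (a minimal isinstance-based version of my own, not the standardised mechanism being requested):

```python
import functools

def accepts(*types):
    """Check positional argument types at call time (a sketch, not a standard)."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Only the positional arguments covered by the declaration are checked.
            for arg, expected in zip(args, types):
                if not isinstance(arg, expected):
                    raise TypeError("%s: expected %s, got %s" % (
                        func.__name__, expected.__name__, type(arg).__name__))
            return func(*args, **kwargs)
        return wrapper
    return decorate

@accepts(int, int)
def foo(arg1, arg2):
    return arg1 + arg2
```

A library like Psyco could in principle read such declarations for specialisation, but that integration is speculation on my part.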

Nick Coghlan

Posts: 13
Nickname: ncoghlan
Registered: Dec, 2004

Re: Roadmap... Posted: Jan 5, 2005 7:11 AM
> so here is the roadmap with an example for each step...

I think coming up with a roadmap is a great idea. There's no point coming up with a Grand Unified Type System without a practical means of getting from here to there.

I think this latest essay gives a much better idea of where Guido is thinking of heading, which means it should be feasible to start plotting a route.

> 1 - adaptation

I agree that this should be number 1 on the list. One thought is whether it would be useful for adapt() to include a 'strict' keyword argument (or provide a separate adapt_strict() function). Strict adaptation would require that the result be an actual instance of the specified type.

Then type declarations would be syntactic sugar for adaptation - normal adaptation (duck typing) in most cases, or adaptation plus a type-check for strict adaptation (e.g. using Guido's syntax of adding the 'class' keyword to the type declaration).
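As a rough sketch of how a 'strict' flag might fit into a PEP 246-style adapt() (the hook handling is heavily simplified here, and the flag itself is hypothetical):

```python
class AdaptationError(TypeError):
    pass

def adapt(obj, protocol, strict=False):
    # Simplified sketch of PEP 246-style adaptation with a
    # hypothetical 'strict' keyword; only the __conform__ hook is shown.
    if isinstance(obj, protocol):
        return obj
    conform = getattr(obj, '__conform__', None)
    if conform is not None:
        result = conform(protocol)
        if result is not None:
            if strict and not isinstance(result, protocol):
                raise AdaptationError("adapted result is not a %s instance"
                                      % protocol.__name__)
            return result
    raise AdaptationError("cannot adapt %r to %s" % (obj, protocol.__name__))
```

With strict=False this is ordinary duck-typed adaptation; with strict=True the adapted result must additionally pass an isinstance() check.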

> 2 - standard type checking decorators

Again, I think this is a nice way of trying out the machinery before changing the syntax. I see it as somewhat equivalent to the cumbersome repeat-the-name-three-times decorator syntax that existed prior to Python 2.4

Standard accepts() and returns() decorators should naturally be based on adaptation. They might also populate Guido's proposed "__signature__" function attribute.

If the adaptation PEP is updated to include the concept of 'strict' adaptation, there is the question of how that can be incorporated into these decorators.

> 3 - advanced type operations

I'd prefer to see this concept integrated with the adaptation PEP before moving down the operator syntax path.

The associated concept for "|" is "adapt_to_any", which could be obtained simply by allowing a tuple for the protocol argument to adapt(), trying to adapt to each protocol in left-to-right order (a la isinstance()). As soon as the supplied object is adapted successfully, the adapted result is returned. For example (using a special type, rather than a plain tuple):
  adapt(x, any(prot1, prot2))
to mean:
  for prot in (prot1, prot2):
      try:
          return adapt(x, prot)
      except AdaptationError:
          pass
  raise AdaptationError
An appropriate definition of "adapt_all" (for "&") in terms of adaptation is slightly less obvious. The simplest seems to be to handle it as a nested adaptation to each of the given protocols. That is:
 adapt(x, all(prot1, prot2))
to mean:
 adapt(adapt(x, prot1), prot2)
Finally, there is the proposed operator syntax for typing a tuple ("*"). Again, this can be done manually initially to determine if there is enough demand to make it operator based:
  adapt(x, each(prot1, prot2))
to mean:
  (adapt(y, prot) for y, prot in zip(x, (prot1, prot2)))
Assume any(), all() and each() are appropriate classes from an adaptation module.
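Those any()/all()/each() semantics can be prototyped today with plain functions, using a bare isinstance() check as a stand-in for a real adapt() (a sketch only; none of these names are standard):

```python
class AdaptationError(TypeError):
    pass

def adapt(obj, protocol):
    # Stand-in for a real PEP 246 adapt(): isinstance check only.
    if isinstance(obj, protocol):
        return obj
    raise AdaptationError(obj, protocol)

def adapt_any(obj, protocols):
    # "|": first protocol that adapts successfully wins, left to right.
    for prot in protocols:
        try:
            return adapt(obj, prot)
        except AdaptationError:
            pass
    raise AdaptationError(obj, protocols)

def adapt_all(obj, protocols):
    # "&": nested adaptation to each protocol in turn.
    for prot in protocols:
        obj = adapt(obj, prot)
    return obj

def adapt_each(objs, protocols):
    # "*": positional adaptation of a tuple.
    return tuple(adapt(o, p) for o, p in zip(objs, protocols))
```

Replacing the stand-in adapt() with a full adaptation function would give the behaviour described above without any new operator syntax.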

> 4 - interfaces/protocols: have a nice way to define
> protocols (I think this term is nearer to the exact
> purpose: defining a behaviour).

Interfaces certainly seem like a nice way of formalising Python's duck-typing.

However, if interfaces are pursued, adaptation should, as a last resort, inspect the supplied object to see if it actually supports the methods and attributes required by the interface, and then allow the adaptation to succeed with the original object.

> 5 - pre/post condition

This is again something that could be done with decorators now, and a couple of magic function attributes that get invoked automatically if present (e.g. __pre__ and __post__):

def pre(orig):
    def bind_pre(func):
        orig.__pre__ = func
        return orig
    return bind_pre

def post(orig):
    def bind_post(func):
        orig.__post__ = func
        return orig
    return bind_post

def foo(x):
    # Main body
    ...

@pre(foo)
def foo(x):
    # precondition
    ...

@post(foo)
def foo(x, result):
    # postcondition
    ...
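To complete the sketch, a hypothetical checked() wrapper shows how those magic attributes could be invoked automatically at call time (no such machinery exists in Python today; all names here are illustrative):

```python
import functools

def checked(func):
    # Hypothetical runtime support: run __pre__ before the body and
    # __post__ (with the result appended to the arguments) after it.
    @functools.wraps(func)
    def wrapper(*args):
        pre_hook = getattr(func, '__pre__', None)
        if pre_hook is not None:
            pre_hook(*args)
        result = func(*args)
        post_hook = getattr(func, '__post__', None)
        if post_hook is not None:
            post_hook(*(args + (result,)))
        return result
    return wrapper

def foo(x):
    return x * 2

def foo_pre(x):
    assert x > 0

# Attach the precondition by hand, as the pre() decorator would.
foo.__pre__ = foo_pre
foo = checked(foo)
```

In the real proposal the interpreter (or a decorator applied automatically) would do what checked() does here.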
> 5 - parametrised type

I like Guido's idea of using type.__getitem__ to handle parameterisation of a type. However, I think it should be up to type instances to determine exactly what the parameter list means.

Going back to adaptation again, consider an optional third argument to the __adapt__ special method that contains the tuple of type parameters. Types that want to be parameterisable declare __adapt__ with the extra argument (defaulting it to None so normal adaptation can still work). Types that aren't parameterisable simply use the standard __adapt__ signature.

type.__getitem__ returns an interface that delegates to the type instance's standard __adapt__ method with that 3rd argument populated. For instance:
    adapt(x, list)      -> list.__adapt__(x)
    adapt(x, list[int]) -> list.__adapt__(x, (int,))
The result of the __getitem__ call would presumably be some special object with an appropriate __adapt__ method (again, accessible through an adaptation module). For example:
class parameterised_type(object):
    def __init__(self, cls, types):
        self._cls = cls
        self._types = types

    def __adapt__(self, obj):
        return self._cls.__adapt__(obj, self._types)

class type:
    def __getitem__(self, item):
        if isinstance(item, tuple):
            return parameterised_type(self, item)
        return parameterised_type(self, (item,))

Anthony Baxter

Posts: 2
Nickname: anthonyb
Registered: Jan, 2005

Re: Preconditions and Postconditions Posted: Jan 5, 2005 7:31 AM
Rather than using new keywords (blech) or magic names, why not just use a trivial decorator?

class Account:

    def pre_decrement(self, amt):
        assert amt <= self.balance

    def post_decrement(self, balance):
        assert balance > 0

    @contract(pre_decrement, post_decrement)
    def decrement(self, amt):
        ... # method body ...
        return self.balance
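The contract() decorator used here is not standard; a minimal version might look like this (the signature is my guess at Anthony's intent):

```python
import functools

def contract(precondition, postcondition):
    # Hypothetical design-by-contract decorator: run the precondition
    # with the method's arguments, then the postcondition with the result.
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            precondition(self, *args, **kwargs)
            result = method(self, *args, **kwargs)
            postcondition(self, result)
            return result
        return wrapper
    return decorate

class Account:
    def __init__(self, balance):
        self.balance = balance

    def pre_decrement(self, amt):
        assert amt <= self.balance

    def post_decrement(self, balance):
        assert balance > 0

    @contract(pre_decrement, post_decrement)
    def decrement(self, amt):
        self.balance -= amt
        return self.balance
```

Note that inside the class body pre_decrement and post_decrement are still plain functions at decoration time, which is what makes this work without new keywords.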

Steve Massey

Posts: 2
Nickname: stephenm
Registered: Jan, 2005

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 5, 2005 10:50 AM
How about this for postcondition syntax:

def foo(a):
    assert a > 0
finally (result):
    assert result > 0

... although actually, I think that only some small subset of this proposal should be implemented: as it stands, the proposal is a major change to the overall "feel" of Python.

Phillip J. Eby

Posts: 28
Nickname: pje
Registered: Dec, 2004

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 5, 2005 1:59 PM
> why not:
> do_something(stream : str | URL | IStreamFactory)
> and again do the adaptation inside of do_something() ?

Because that's not extensible; it doesn't allow somebody to create an adaptation from some other type to IStreamFactory.

Robin Dunn

Posts: 1
Nickname: robind
Registered: Jan, 2005

Re: Does "optional" really mean optional? Posted: Jan 5, 2005 2:49 PM
> 1. Adaptation is perhaps the lowest hanging fruit, since
> it requires no syntax changes and provides useful
> flexibility.
> 2. Interfaces may be a possible burning need. It's worth
> considering how far you can get just by adding interfaces
> without type declarations. The ambiguity of "standard"
> protocols (sequence, mapping, etc.) has been bugging
> people for some time, and it would be nice to settle some
> of those things cleanly, which is what motivated my
> suggestion of including mixin code in interfaces.
> 3. Preconditions and postconditions might be the next
> low-hanging fruit after that. (I consider methods defined
> on the interface a smaller step in complexity than nested
> functions that aren't redefined each time like normal
> nested functions.) I don't know how badly anyone wants
> this, though.
> 4. Types seem to me a lower priority than the above...
> 5. ...and parameterized types even lower than that.

For what it's worth, I agree with this ordering of priorities. 1, 2, and 3 would be useful to have in Python even if 4 and 5 are not done. But 4 and 5 worry me for many of the same reasons that have been expressed already by others: complexity, runtime overhead, fragmentation of the community, uglification of the code, etc.

Jayson Vantuyl

Posts: 7
Nickname: kagato
Registered: Jan, 2005

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 5, 2005 4:03 PM
You don't want static typing. Static typing has a great deal to do with what an object is. Tying that distinction to bindings in a namespace (which is what Python symbols really are) is a mistake.

We should only care about what an object can do. That's what interfaces are. That is something the Java people get, and that perhaps we are groping for here.

What does static typing do? It's been determined that it can do the following:

* Provide Documentation
* Ease Integration into Statically Typed Languages
* Catch Bugs Involving Bad Data

Additionally, we have imposed the restriction that it be optional. All of these requirements add up to a headache that can be avoided.

My one overriding fear, which you cannot soothe by any means, is that, despite it being optional, there will be no way I can avoid static typing in Python once it is implemented. If you can convince yourself otherwise, be my guest, but I am not fooled. It will cause problems for me. It will not help me.

Let me also respond to a few points you made in favor of static typing:

> They can be useful for documentation

I think my proposal will address this. I also suggest that type declarations are ill-suited to this purpose, as a large number of systems are duck-typed and will be unable to use them.

> for runtime introspection (including adaptation)

This gives us nothing we don't already have with type(x), except kludgy syntax and the potential to explore the vagaries of function overloading (whether that be duplicated code or differing behavior in overloaded variants).

> to help intelligent IDEs do their magic (name completion, find uses, refactoring, etc.)

See my proposal on this. I feel that IDEs will not be any better off, because types are often not the issue.

> to help find certain bugs earlier

I haven't had a real type-mismatch-related bug in almost a decade. I think most people will identify with this. And we can already use assert type(x) is types.IntType.


I propose that we implement interfaces similarly to those proposed in your post. Interfaces are far more Pythonic than most people realize. To illustrate this, let me state some things that I believe are unquestionably obvious in a tongue-in-cheek pretending-to-be-mathematically-rigorous style.

Lemma 1: Interfaces Are Duck-Typing Formalized

Duck-typing means caring only about how an object can be used. An interface specifies this formally. It describes behavior.

Lemma 2: Types Are Not Important, Behavior Is

Data is there to be used. It's often not important that the data is an Int, Long, Complex, Float, or Decimal. What is important is that it can be used as a numeric value. It's not important that the data is a tuple or an instance, what's important is that our program can use a tuple of integers or a special caching DNS class to find a network address. Type is nothing, behavior is everything.

Corollary 1 to Lemma 2: Types Try To Specify Behavior

Ints behave like Ints. Floats behave like Floats. Classes have a behavior all their own. However, the type of any of these objects encapsulates its identity (the real data) in a very real way. At its core, an Int or a Float holds a kind of data that is unique to it in a primitive way that cannot be effectively implemented more generally. It is tempting to use this to indicate behavior.

Corollary 2 to Lemma 2: Classes Try Even Harder

Classes take the above primitives and bundle together data and methods. This takes us one step closer. Ironically, the way we have handled classes makes this so flexible that we can do duck-typing. We also implement a very sophisticated form of multiple inheritance to make subclassing very flexible.

Lemma 3: Behavior Is Not Mutually Exclusive

The problem with using types or classes (or the final result of the type-class unification) to represent behavior is that it is too restrictive. In the end, there are situations where Ints, Longs, Floats, Strings, and certain Classes can all be used equally. I have a type-neutral Binary Tree library that makes use of this.

Subclassing and subtyping (and multiple inheritance) are all artifacts of trying to indicate which behavior certain classes/types share with each other. Unfortunately they are very hierarchical in nature. Breaking with that (via multiple inheritance) can introduce complexities that come at a heavy cost.

Corollary to Lemma 3: Interfaces Express This

Interfaces give a way to indicate behavior that is agnostic to the type of core data and the class that may be wrapped around it. This is the Right Thing (TM) in the same way that duck-typing is.

Theorem: Behavior Must Be Abstracted From Type

Putting it all together:
* typing determines the core data that is held in a data type
* classing determines how some data is bundled together and holds an implementation of its behavior
* metaclassing fundamentally alters the way that a class is constructed and behaves--by altering the implementation.

The chief gain of static typing is the ability to encapsulate how an object behaves (for runtime identification, documentation, or bug checking). We should abstract behavior from type.

This achieves some useful things:

* Convenient behavior of pre-existing data structures can be codified.

* Behavior specifications are optional.

* Python can still infer behavior when it's not specified.

* Behavior specifications can be attached to pre-existing data at runtime.

In a sense, this is really what has been attempted with C prototypes, Java interfaces, and the like. I would like to be able to do something like the following:

import StringIO

interface FileInterface:
    def read(n = ...)

def myFileFunc(FileInterface f):
    ...

implement FileInterface with File, StringIO.StringIO

from fileStuff import FileInterface, myFileFunc
from StringIO import StringIO

# This is a hack to show two different syntaxes for this
if syntax == 0:
    # This syntax specifies an interface at definition time
    class superFile implements FileInterface:
        def __init__(filename):
            ...

        def read(self, n = None):
            ...

        def superfunc(self):
            ...
elif syntax == 1:
    # This syntax allows an interface to be applied post-hoc
    class superFile:
        ...

    implement FileInterface with superFile

c = superFile('/etc/passwd')
d = File('/etc/passwd')
e = StringIO('magic data')
f = 5
# Raises BehaviorException: repr(5) does not implement a
# FileInterface

Of course I'm flexible on the syntax, but I think we should force interfaces to be a new primitive. I think allowing the use of an existing class or type to indicate behavior will create problems. We need a separate notion of this that can be applied post-hoc to support duck-typing and refactoring. It would also be fairly trivial for Python to verify that an interface applies to an object even though it didn't specifically implement it (thus allowing type inference).
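Lacking an interface primitive, the post-hoc "implement X with Y" idea can be approximated today with a registry plus attribute inspection (a sketch only; Interface, implement_with and provided_by are invented names):

```python
import io

class Interface:
    """Sketch of a post-hoc interface: a set of required method names
    plus a registry of classes declared to implement it."""
    def __init__(self, *required):
        self.required = required
        self.implementors = []

    def implement_with(self, *classes):
        # Post-hoc declaration, like 'implement FileInterface with ...'
        self.implementors.extend(classes)

    def provided_by(self, obj):
        # Declared implementor, or duck-typed: has every required name.
        if any(isinstance(obj, cls) for cls in self.implementors):
            return True
        return all(hasattr(obj, name) for name in self.required)

FileInterface = Interface('read')
FileInterface.implement_with(io.StringIO)
```

The duck-typed fallback in provided_by() is what would let Python "verify that an interface applies" to objects that never declared it.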


For the record, I think that parameterized types are a bad idea. They are only needed as a hack to make static typing more dynamic. They solve a problem we should not have. Look at the C++ source for the STL implementations. Ask yourself two questions: Can this be made simple? Does it really do anything useful here? I think you'll find the answer to both is no.

Peter William Lount

Posts: 1
Nickname: pwl
Registered: Jan, 2005

The Futility of Adding Types to a Dynamic Language Posted: Jan 5, 2005 6:07 PM
I've made some comments regarding the proposed extensions to Python in the article "The Futility of Adding Types to a Dynamic Language" at

Hopefully you'll make a wise choice for the next version of Python. I urge you to read and seriously consider the two "encapsulation extensions for dynamic languages" papers.

Steven Yi

Posts: 1
Nickname: stevenyi
Registered: Apr, 2003

Re: Adding Optional Static Typing to Python -- Part II Posted: Jan 5, 2005 7:10 PM
I'm not aware of any other programming language going through a process of adding static typing, except for Actionscript in Macromedia's Flash application. There, they introduced Actionscript 2.0, which uses a much more Java-like class syntax and static typing of variables (similar to what is proposed for Python here).

I find I like using both statically-typed languages (I develop my personal music application in Java) and dynamically-typed languages (musical scripts with Python), but I like to work one way or the other, and find it tricky to work with mixed static/dynamically-typed code.

I wonder what the impact on static typing in Python will be for developers in the long run. I hadn't seen a post that mentioned Actionscript and thought it was worth looking at in terms of its impact on its community of developers, perhaps as a precedent for what may happen with Python.

Bengt Richter

Posts: 2
Nickname: bokr
Registered: Jan, 2005

Re: Interface and design by contract Posted: Jan 5, 2005 7:30 PM
> I thought of a solution with a syntax like this one:

> >>> def foo2():
> ... global def nested():
> ... pass
> ... return nested
> ...
> >>> foo2() == foo2()
> True

> ...of course this syntax currently doesn't work... it is
> not possible to use the global keyword for function
> definition... but it may be a solution...
Not true, apparently, though I haven't explored
this interesting possible way of accessing a closure:

>>> dir()
['__builtins__', '__doc__', '__name__']
>>> def foo():
...     global bar
...     def bar(): print 'bar!'
...     return bar
...
>>> foo()()
bar!
>>> dir()
['__builtins__', '__doc__', '__name__', 'bar', 'foo']
>>> bar()
bar!

Ryan Paul

Posts: 2
Nickname: segphault
Registered: Dec, 2004

post conditions Posted: Jan 5, 2005 8:25 PM
"But what to do for the postcondition? Perhaps we could use a nested function with a designated name, e.g. ..."

I think my suggested post-condition syntax is much nicer.

def blah(x):
    return x:
        x > 5

When return is used as a block, it allows you to establish post-conditions.

Marek Baczyński

Posts: 3
Nickname: imbaczek
Registered: Jan, 2005

Re: post conditions Posted: Jan 5, 2005 8:46 PM
What about multiple returns? Maybe your proposal could be used right after def:... I don't feel it's good enough, though.

Pierre-Yves Martin

Posts: 17
Nickname: pym
Registered: Jan, 2005

Re: Interface and design by contract Posted: Jan 5, 2005 8:54 PM
Sorry Bengt... but your proposition does not work any better than mine (in fact I tested it a long time ago but did not mention it).

>>> def foo():
...     global nested
...     def nested():
...         pass
...     return nested
...
>>> foo() == foo()
False

the variable "nested" here is global, but the function in it is redefined at each call... which is the same problem as with my current implementation... you just keep track of the previously defined nested function before redefining a new one...


Copyright © 1996-2014 Artima, Inc. All Rights Reserved.