This amazing feature appeared in the language almost apologetically and with concern that it might not be that useful.
I predict that in time it will be seen as one of the more powerful features in the language. The problem is that all the introductions to decorators that I have seen have been rather confusing, so I will try to rectify that here.
First, you need to understand that the word "decorator" was used with some trepidation, because there was concern that it would be completely confused with the Decorator pattern from the Design Patterns book. At one point other terms were considered for the feature, but "decorator" seems to be the one that sticks.
Indeed, you can use Python decorators to implement the Decorator pattern, but that's an extremely limited use of it. Python decorators, I think, are best equated to macros.
The macro has a long history, but most people will probably have had experience with C preprocessor macros. The problems with C macros were (1) they were in a different language (not C) and (2) the behavior was sometimes bizarre, and often inconsistent with the behavior of the rest of C.
Both Java and C# have added annotations, which let you attach extra information to elements of the language. Both have the problems that (1) to do what you want, you sometimes have to jump through enormous and untenable hoops, which follows from (2) the fact that these annotation features have their hands tied by the bondage-and-discipline (or, as Martin Fowler gently puts it, "directing") nature of those languages.
In a slightly different vein, many C++ programmers (myself included) have noted the generative abilities of C++ templates and have used that feature in a macro-like fashion.
Many other languages have incorporated macros, but without knowing much about it I will go out on a limb and say that Python decorators are similar to Lisp macros in power and possibility.
I think it's safe to say that the goal of macros in a language is to provide a way to modify elements of the language. That's what decorators do in Python -- they modify functions, and in the case of class decorators, entire classes. This is why they usually provide a simpler alternative to metaclasses.
The major failings of most languages' self-modification approaches are that they are too restrictive and that they require a different language (I'm going to say that Java annotations, with all the hoops you must jump through to produce an interesting annotation, constitute a "different language").
Python falls into Fowler's category of "enabling" languages, so if you want to do modifications, why create a different or restricted language? Why not just use Python itself? And that's what Python decorators do.
Decorators allow you to inject or modify code in functions or classes. Sounds a bit like Aspect-Oriented Programming (AOP) in Java, doesn't it? Except that it's both much simpler and (as a result) much more powerful. For example, suppose you'd like to do something at the entry and exit points of a function (such as perform some kind of security, tracing, locking, etc. -- all the standard arguments for AOP). With decorators, it looks like this:
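Sketched minimally, the decoration has this shape (the decorator body here is a placeholder; the class is developed more fully in the discussion that follows):

```python
# A minimal sketch: 'myDecorator' is a placeholder class-based decorator.
class myDecorator(object):
    def __init__(self, f):
        self.f = f  # capture the function object being decorated

    def __call__(self):
        # entry/exit code (security, tracing, locking...) would go here
        print("inside myDecorator.__call__()")

@myDecorator
def aFunction():
    print("inside aFunction()")

aFunction()  # invokes myDecorator.__call__(), not the original body
```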
When the compiler passes over this code, aFunction() is compiled and the resulting function object is passed to the myDecorator code, which does something to produce a function-like object that is then substituted for the original aFunction().
What does the myDecorator code look like? Well, most introductory examples show this as a function, but I've found that it's easier to start understanding decorators by using classes as decoration mechanisms instead of functions. In addition, it's more powerful.
The only constraint upon the object returned by the decorator is that it can be used as a function -- which basically means it must be callable. Thus, any classes we use as decorators must implement __call__.
What should the decorator do? Well, it can do anything, but usually you expect the original function code to be used at some point. This is not required, however:
class myDecorator(object):
    def __init__(self, f):
        print("inside myDecorator.__init__()")
        f()  # Prove that function definition has completed

    def __call__(self):
        print("inside myDecorator.__call__()")

@myDecorator
def aFunction():
    print("inside aFunction()")

print("Finished decorating aFunction()")

aFunction()
Notice that the constructor for myDecorator is executed at the point of decoration of the function. Since we can call f() inside __init__(), it shows that the creation of f() is complete before the decorator is called. Note also that the decorator constructor receives the function object being decorated. Typically, you'll capture the function object in the constructor and later use it in the __call__() method (the fact that decoration and calling are two clear phases when using classes is why I argue that it's easier and more powerful this way).
When aFunction() is called after it has been decorated, we get completely different behavior; the myDecorator.__call__() method is called instead of the original code. That's because the act of decoration replaces the original function object with the result of the decoration -- in our case, the myDecorator object replaces aFunction. Indeed, before decorators were added you had to do something much less elegant to achieve the same thing:
def foo(): pass
foo = staticmethod(foo)
With the addition of the @ decoration operator, you now get the same result by saying:
@staticmethod
def foo(): pass
This is why some people argued against decorators: the @ is just a little syntactic sugar meaning "pass a function object through another function and assign the result to the original function's name."
The reason I think decorators will have such a big impact is that this little bit of syntactic sugar changes the way you think about programming. Indeed, it brings the idea of "applying code to other code" (i.e., macros) into mainstream thinking by formalizing it as a language construct.
The only constraint on the result of a decorator is that it be callable, so it can properly replace the decorated function. In the above examples, I've replaced the original function with an object of a class that has a __call__() method. But a function object is also callable, so we can rewrite the previous example using a function instead of a class, like this:
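Consistent with the names used in the surrounding text (entryExit, new_f, func1), the function-based version looks roughly like this:

```python
def entryExit(f):
    def new_f():
        print("Entering", f.__name__)
        f()
        print("Exited", f.__name__)
    return new_f

@entryExit
def func1():
    print("inside func1()")

func1()
print(func1.__name__)  # prints: new_f
```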
new_f() is defined within the body of entryExit(), so it is created and returned when entryExit() is called. Note that new_f() is a closure, because it captures the actual value of f.
Once new_f() has been defined, it is returned from entryExit() so that the decorator mechanism can assign the result as the decorated function.
The output of the line print(func1.__name__) is new_f, because the new_f function has been substituted for the original function during decoration. If this is a problem, you can change the name of the decorator function before you return it:
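A sketch of that fix, assigning the original name back before returning:

```python
def entryExit(f):
    def new_f():
        print("Entering", f.__name__)
        f()
        print("Exited", f.__name__)
    new_f.__name__ = f.__name__  # restore the original function's name
    return new_f

@entryExit
def func1():
    print("inside func1()")

print(func1.__name__)  # prints: func1
```

In current Python, applying functools.wraps(f) to new_f accomplishes the same thing and also copies __doc__ and other metadata.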
Both of these are powerful forms in Python. You can rewrite the semantics of the language on the fly to do things automatically that would normally need to be left to the rules of using a framework properly.
Database transactions, persistence, logging, debugging, warnings... And it's metaprogramming using the same syntax as the programming language itself. Decorators can fill the same roles as templates in C++, or JNI and annotations in Java, but being able to express orthogonal concerns orthogonally in code, with the same idioms as the core flow of the system, is so much more powerful.
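As one concrete sketch of the logging role mentioned above (the names here are illustrative, not from any particular library):

```python
import functools

def logged(f):
    """Log entry and exit of every call to f."""
    @functools.wraps(f)  # preserve f's name and docstring on the wrapper
    def wrapper(*args, **kwargs):
        print("calling %s%r" % (f.__name__, args))
        result = f(*args, **kwargs)
        print("%s returned %r" % (f.__name__, result))
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

add(2, 3)  # logs the call, then returns 5
```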
We agree again. But there is something fundamentally awesome about being able to change the language to fit the problem instead of changing the problem to fit the language. And to do it using the language itself.
> Python decorators are similar to Lisp macros in power and possibility.
This is a tempting association (decorators <=> macros) but unfortunately it is not true. Lisp macros are strictly *more* powerful than Python decorators because:
1. macros can change the syntax of the language (decorators cannot)
2. macros work at compile time (decorators cannot)
For most practical purposes decorators are powerful enough (the fact that they work at runtime is even an advantage in some cases) but there are things you cannot implement with decorators but only with macros.
For instance, an impressive usage of macros in the Scheme world is the "Typed Scheme" project: here you take a dynamically typed language such as Scheme and you implement on top of it a statically typed language where types are checked at compile time (see http://docs.plt-scheme.org/typed-scheme/). You could not implement that on top of Python.
Decorators can't change the syntax of Python, of course. But Python doesn't have the same separation between compile time and run time, even though it has rough equivalents to what Lisp has. Decorators and metaclasses can change fundamental behaviors of the language, and they do it at the equivalent point in the Python world to macros in Lisp.
While not being able to change the syntax of the language does limit their power, I think it limits it in ways that make them much more useful. Python always looks like Python, and always behaves like Python inside a block of code. But you can easily change the behavior at the block boundaries, whether by using context managers for anonymous blocks, decorators for functions, or metaclasses for types. And since the changes to execution are explicitly managed with Python code, these mechanisms are much simpler to unravel for less experienced Python users.
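For instance, a context manager injects behavior at an anonymous block's boundaries while the code inside stays plain Python (a minimal sketch using contextlib):

```python
from contextlib import contextmanager

@contextmanager
def traced(label):
    print("entering", label)     # behavior injected at block entry
    try:
        yield
    finally:
        print("exiting", label)  # ...and at block exit, even on error

with traced("setup"):
    total = 1 + 1  # ordinary Python inside the block
```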
It is actually possible to use decorators and/or metaclasses to implement something similar to Typed Scheme (http://docs.plt-scheme.org/typed-scheme/). However, the key difference is that Python doesn't have the same kind of "compile-time" that Scheme does, and there is no way at all to do early type checking.
However, it is possible to tag functions with decorators that enforce type checking at call time in Python, and even provide automatic conversion for types that should be equivalent. It is then much easier to produce informative errors when something does go wrong. Since decorators are bound to each function only once, it is even possible to switch from expensive checking behavior to "free" non-checking behavior by changing a global setting that tells the type-checking decorator what to do. It is also easy to enable checking for some modules or functions without globally enabling or disabling it. Again, since decorators are bound when the functions are constructed, once this is done the only cost is that of a decorator returning a modified function.
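A sketch of that idea, with an illustrative global switch read at decoration time (all names here are made up for the example):

```python
CHECK_TYPES = True  # global setting, consulted when functions are decorated

def typechecked(*types):
    """Enforce argument types at call time; a sketch, not a real library."""
    def decorate(f):
        if not CHECK_TYPES:
            return f  # no wrapper at all: checking costs nothing when off
        def wrapper(*args):
            for arg, t in zip(args, types):
                if not isinstance(arg, t):
                    raise TypeError("%s() expected %s, got %r"
                                    % (f.__name__, t.__name__, arg))
            return f(*args)
        return wrapper
    return decorate

@typechecked(int, int)
def add(a, b):
    return a + b
```

Because the decorator runs once, when add is defined, flipping CHECK_TYPES later has no effect on already-decorated functions -- which is exactly the "bound only once" property described above.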
The mechanism may not be exactly the same as LISP macros, but LISP and Python are fundamentally different languages. But decorators and metaclasses happen at the closest equivalent Python has to LISP's "compile" time.
Macros... macros? It seems macros have become a matter of sloppy theoretical discourse about DSLs rather than a living but thorny programming practice. Decorators don't need to be hyped as some kind of macro for a Python that lacks them. Note that I'm not a fan of Lispy macros for Python either, which reflects my experience implementing a macro system for the language, with all its complexities. Evaluating code defined inside a macro body at compile time, or suspending evaluation there, often leads to bugs that are hard to track. The behavior of such code is generally not easy to understand, and I'm not sure that is a value in itself. I've started to work on something simpler and more straightforward to use. Otherwise, it was fun to create a recursive macro that produced the n-th Fibonacci number as syntactically correct source code.
Compared to fancy macros, decorators are a rather convenient composition and modularization technique for functions. They are so appealing because they are just a small step away from techniques we have used for ages and still use every day.
I don't know Python so I've probably got the wrong end of the stick, but the examples give the impression that the source code of the thing to be decorated must be edited. That is, if I want to decorate func1, I have to put @entryExit just before the definition. Is it possible to decorate func1 without having to edit its source file?
> I don't know Python so I've probably got the wrong end of the stick, but the examples give the impression that the source code of the thing to be decorated must be edited. That is, if I want to decorate func1, I have to put @entryExit just before the definition. Is it possible to decorate func1 without having to edit its source file?
The @deco decoration just gives a visual cue for the reassignment foo = deco(foo), which otherwise would have to be written after foo is defined. So it's possible to load a module and decorate each of its functions without prior manipulation of the source.
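A sketch of decorating after the fact, without touching the function's definition (entryExit as in the earlier examples):

```python
def entryExit(f):
    def new_f():
        print("Entering", f.__name__)
        f()
        print("Exited", f.__name__)
    return new_f

def func1():  # defined without any decoration
    print("inside func1()")

# The @ form is only sugar for this reassignment, so it can be applied
# later -- e.g. to a function imported from another module:
func1 = entryExit(func1)
func1()
```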
Just today I have discovered a nice little library based on decorators to profile functions: http://mg.pov.lt/profilehooks/ It makes a good example of the usefulness of decorators for people not knowing the concept, and you may want to keep it in mind for your book.
It would be nice if you used a style more in line with the stdlib/PEP 8 -- specifically my_decorator instead of myDecorator. (What to do with the class is a little unclear -- PEP 8 strongly suggests MyDecorator, but in some ways the class is an artifact of the implementation and not the interface, which itself is indistinguishable from a function -- I'd stick with lower case.)
entryExit.__call__ should probably return the result of the function call. Otherwise the decorator will cause all calls to the function to return None. In general, when delegating calls, it's safest to always return the result of the delegated call.
class entryExit(object):
    def __init__(self, f):
        self.f = f

    def __call__(self):
        print("Entering", self.f.__name__)
        r = self.f()
        print("Exited", self.f.__name__)
        return r
> Why decorators and not full compile-time programming?
Probably because Python is a scripting language: one of the source files of your program might be used for "configuration purposes". A sysadmin somewhere may even have a cron job that modifies one of the source files of your program while it is running and periodically does an "eval" or "exec" on it.