A trip report from the Python UK conference (and the C/C++ conference of which it was a part).
Here's my dilemma: what should a blog post be? Some of the entries I've seen here are about the author's philosophy of programming. Others are news items. Being an inexperienced blogger I'll have to try various things to see what works best. Oh, and how do you measure success of a blog entry? I suppose the amount of feedback. By that measure, Ken Arnold's post "Are Programmers People?" is the winner of the Artima.com blogs so far. But that measure of success suggests that successful blog posts should be controversial, and rehashing an existing controversy is an easy way to get feedback. Not that I'm saying that that's what Ken is doing - just that it's easy to "win" by trolling for flames.
I'll take my own approach for now, which today is thinking aloud while reminiscing on a recent conference.
I can't quite pinpoint it, but Python UK was almost more exciting than PyCon for me. Maybe it was the location: I'd never been to Oxford before, and while the conference itself was in a Holiday Inn outside the city, almost a piece of America away from home, I visited the city a few times and came away impressed with both its life and its concentration of ancient and historic buildings. Or maybe it was the fact that I wasn't so much the center of attention: I had more opportunity to watch presentations. Perhaps I also learned more because the topic of the whole conference was C and C++ (and Java) as well as Python.
One highlight of the conference was the attention being paid to Python outside the Python tracks. Apart from my own opening keynote (given to the whole conference, not just the Python tracks), where I summarized some of Python's history and influences, each of the following three days' keynotes, on various aspects of C and C++, mentioned Python at least once. For example, in Andrew Koenig's keynote about what's wrong with C++ (read in his absence by conference chair Francis Glassborow because of a travel restriction), he mentioned Python as one of the languages that had made him think about alternatives for some of C++'s problems. In general, quite a few dyed-in-the-wool C++ programmers showed interest in learning Python, and Python in a Nutshell was one of the bestsellers at the well-stocked bookstand.
My own take on C++, by the way, is that it has grown such incredible compile-time power that the average C++ programmer is stuck with following cookbook-style examples and cannot really understand why things work, or, especially, why they don't work. The more you think about it, the less you understand, unless you have the time and IQ points to really understand the deepest parts of the language definition. I'm sure there are places where this is just what the doctor ordered; but in most cases, I think Python's approach, a dumb compiler and a smart run-time, is much saner, and makes it simple for the programmer to predict what will happen. Another way of viewing the difference between the two languages is to say that in Python, all you've got is run-time; in C++, incredibly complex things go on at compile time, and then on top of that there's the run-time model, which boils down to a really low-level old-fashioned hardware model, whose existence you cannot abstract away because it affects when things go wrong at run-time.
Probably the talk I most enjoyed was neither about Python nor about C++, but about Haskell. Simon Peyton-Jones (Mr. Haskell himself AFAICT) gave a talk on metaprogramming in Haskell. Simon is an incredibly lively and entertaining speaker; he puts real drama in his presentations. I wish I could speak like that! Like Python, Haskell uses indentation for blocks; after that, the similarities stop. It is a strongly typed pure functional language, where just about everything is written as recursive functions, because the language has no looping construct. Because of this it will probably always remain a language of mostly academic interest: loops may be theoretically inferior to recursion, but I have no doubt that the human brain has special reasoning abilities for loops, and many real-world problems are most naturally expressed using loops rather than recursion. I'd say that a loop is a higher-level concept than recursion; recursion is more powerful, but also more low-level, like assembly language. That said, Haskell is incredibly expressive - and not just because it uses a very sparse syntax. (I picked up a book on Haskell, and it fell open on a page of list comprehensions - which are exactly like Python list comprehensions except that 'for' and 'if' are written using different symbols. The book claimed that no other language had this feature. Sigh.)
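To make the parallel concrete (my own illustration, not from the book in question), here is a Python list comprehension next to the Haskell spelling of the same thing, which differs only in which symbols stand for 'for' and 'if':

```python
# Python list comprehension: squares of the even numbers in a range.
# Haskell writes the same thing as:  [x * x | x <- xs, even x]
xs = range(10)
squares = [x * x for x in xs if x % 2 == 0]
print(squares)  # [0, 4, 16, 36, 64]
```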
Anyway, metaprogramming is a new extension to Haskell, which lets the Haskell programmer write Haskell code that runs at compile-time and generates other Haskell code which is then incorporated into the program being compiled. This is just about the reverse of what Python does: in Python, you can invoke the compiler from the run-time (e.g. with "exec" or "eval"); in Haskell, you can invoke the run-time from the compiler. Both mechanisms are equivalent in power if you ask me. Curiously, Haskell uses $(...) to invoke the run-time from the compiler; this gives metaprogramming a vague Perlish flavor. It makes sense though: $(...) feels like substitution, and that's exactly what it does. Inside a substitution (and I suppose also outside), another kind of brackets is used to "quote" syntax (rather than parse it): [|...|]. (I mainly note this because it is a nice pair of symmetric brackets that may come in handy during language design when you need more styles of brackets than you have symmetric character pairs.) During Simon's elaboration of an example (a type-safe printf function) I realized the problem with functional programming: there was a simple programming problem where a list had to be transformed into a different list. The code to do this was a complex two-level lambda expression if I remember it well, and despite Simon's lively explanation (he was literally hopping around the stage making intricate hand gestures to show how it worked) I failed to "get" it. I finally had to accept that it did the transformation without understanding how it did it, and this is where I had my epiphany about loops as a higher level of abstraction than recursion - I'm sure that the same problem would be easily solved by a simple loop in Python, and would leave no-one in the dark about what it did.
I'll stop now, before this becomes a novel. I should mention that I also spent some time hacking on a prototype for a new standard I/O library for Python. Stackable streams, yeah! The code is checked into the Python sandbox for now.
I've travelled from Assembly, to C, to C++ over the years, and of all the new developments I have attempted to track in the last few years, Python has interested me the most.
Your point about Python's approach, a dumb compiler and a smart run-time, is really a great point that I hadn't seen put so succinctly.
You mention how the C++ run-time model boils down to an old-fashioned hardware model. But there isn't a newer hardware model, right? So C++, C, and assembly all come in handy when you need a closer connection to the hardware. In those situations, Python is great as the high-level layer calling into the lower-level code written in these other languages.
And yes, the fact that Python can invoke the compiler or an interactive console from the run-time is also a big draw for me. Plus, it is possible for a small team to port Python to new platforms and maintain the code base. Python scales well in either direction: to smaller projects and bigger projects.
Reading the comment about loops versus recursion made me curious what the complicated function in the presentation was. I assume it is the "gen" function in the linked paper, which takes a list like [D,D,D] and converts it into a nested lambda-expression \n -> (\m -> (\l -> show n ++ show m ++ show l)).
I think the comparison is a bit misleading though, because that sounds like an inherently recursive problem, and I can't think of any simple way to do it with a loop. One idea would be to first scan through the list forwards to construct the "kernel" expression, and then scan through it once more in reverse order to add on the lambdas. However, I suspect that might be more complicated than it sounds, because we must make sure that the free variable "n" in the body is the same as the variable that is bound by the outer lambda-abstraction. The quasi-quote syntax no doubt does some terribly clever things to free variables in the syntax trees it builds, and second-guessing it by building the lambda-expressions in "stages" might be tricky.
Anyway, what I'm trying to say is that the Haskell code might be hard to understand not only because it is written using recursion, but also because the problem is a bit more complex than just transforming a list into a different kind of list.
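As a rough run-time analogue of what that "gen" function seems to do (the Haskell version builds syntax trees at compile time, so this Python sketch of mine is only an approximation using closures):

```python
def gen(spec):
    # Build a curried function of len(spec) arguments that
    # concatenates the string form of each argument, e.g.
    # gen(['d', 'd']) behaves like: lambda n: lambda m: str(n) + str(m)
    def build(remaining, collected):
        if not remaining:
            # All arguments gathered: produce the formatted string.
            return "".join(str(v) for v in collected)
        # Otherwise return a one-argument function that collects
        # the next value and recurses on the rest of the spec.
        return lambda x: build(remaining[1:], collected + [x])
    return build(spec, [])

f = gen(['d', 'd', 'd'])
print(f(1)(2)(3))  # "123"
```

Note that the closures take care of variable binding automatically here, which is exactly the part that the quasi-quote machinery has to handle explicitly when building syntax trees.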
> because the language [Haskell] has no looping construct. Because of this it will probably always remain a language of mostly academic interest: loops may be theoretically inferior to recursion, but I have no doubt that the human brain has special reasoning abilities for loops, and many real-world problems are most naturally expressed using loops rather than recursion. I'd say that a loop is a higher-level concept than recursion;
Yes, loops are "higher-level" than recursion, in the sense that all loops can be expressed as recursion, but not all uses of recursion can be expressed as loops. That said, Haskell doesn't really need looping constructs; it's quite possible to build one in the language. Compare with Common Lisp's loop macro, for example.
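For instance (a minimal Python sketch of my own, not from the post), any accumulating loop can be rewritten mechanically as recursion by turning the loop variables into arguments:

```python
def total_loop(xs):
    # An ordinary accumulating loop.
    acc = 0
    for x in xs:
        acc += x
    return acc

def total_rec(xs, acc=0):
    # The same loop mechanically rewritten as tail recursion:
    # the accumulator becomes a function argument.
    if not xs:
        return acc
    return total_rec(xs[1:], acc + xs[0])

print(total_loop([1, 2, 3, 4, 5]))  # 15
print(total_rec([1, 2, 3, 4, 5]))   # 15
```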
> The book claimed that no other language had this feature. Sigh.
It must be old. In addition to Python, Erlang, Clean and a handful of other minor languages also have list comprehensions.
Hi, Guido. About blogging: I've given this a lot of thought. I came to blogging from the fake-blogging community [in my case, a community that blogged as characters from Buffy the Vampire Slayer, according to what happened on the show.] There I learned to think of blogging as collaborative fiction. I still do -- that the events I blog about are true doesn't make them less fictional. So how do you measure the success of fiction? Now, this is still controversial, but at least this is familiar territory. So now one of the alternatives is crossed right out -- it is obvious it is *not* measured by the amount of feedback and controversy you generate :) I do a simple thing: occasionally, I ask my readership what kind of posts they liked. Once you start thinking of your readers *as* a readership you want to satisfy, you'll see yourself thinking in completely different ways.
For example, one thing I realized early on is that "you can't win 'em all" with every post. Therefore, I try to be somewhat diverse in my posts. I also realized that I am most entertaining when I write stuff I care about, so I tend to post stuff I enjoy writing, like short stories or philosophical/political rants.
> It is a strongly typed pure functional language, where just about everything is written as recursive functions, because the language has no looping construct. Because of this it will probably always remain a language of mostly academic interest: loops may be theoretically inferior to recursion, but I have no doubt that the human brain has special reasoning abilities for loops, and many real-world problems are most naturally expressed using loops rather than recursion.
Note: I don't get any money for plugging those. :-)
I agree that looping is more high-level and natural for many common things, like doing something to each element of a list. These are the cases where functional languages use higher-order (and higher-level) functions like map and foreach. I don't find those so different from Python's "for x in" loops, though in Erlang it's more awkward because the lambda syntax is verbose. (Both approaches are certainly higher-level than needing a separate integer loop-counter!)
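In Python terms (my example, not the commenter's), the two styles are nearly interchangeable for element-wise work:

```python
words = ["spam", "eggs", "ham"]

# Higher-order style: map applies a function to each element.
lengths_map = list(map(len, words))

# The equivalent "for x in" loop.
lengths_loop = []
for w in words:
    lengths_loop.append(len(w))

print(lengths_map)   # [4, 4, 3]
print(lengths_loop)  # [4, 4, 3]
```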
But in some other examples I find that looping is lower-level than recursion, in the sense of "requiring attention to the irrelevant". For example, searching for an object in a binary tree.
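A minimal Python sketch of that example (the Node class is hypothetical, not from the comment): the recursive search mirrors the tree's own recursive shape, while the loop version has to manage a "current node" variable by hand:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def contains_rec(node, key):
    # Recursive search: the code's shape matches the data's shape.
    if node is None:
        return False
    if key == node.key:
        return True
    return contains_rec(node.left if key < node.key else node.right, key)

def contains_loop(node, key):
    # Loop version: an explicit cursor variable is threaded through,
    # attention to bookkeeping that the recursion handles implicitly.
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

tree = Node(5, Node(2, Node(1), Node(3)), Node(8))
print(contains_rec(tree, 3))   # True
print(contains_loop(tree, 7))  # False
```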
The Little Schemer is a pleasant little book that demonstrates the functional programmers' view of recursion. It could be interesting to people who want to see the FP'ers' point of view.
> Like Python, Haskell uses indentation for blocks; after that, the similarities stop. It is a strongly typed pure functional language, where just about everything is written as recursive functions, because the language has no looping construct.
Haskell has no syntactic looping construct, but you have map, foldr, foldl, scanr, scanl, mapAccumL, mapAccumR, and iterate defined in the standard Prelude or the standard List library. These cover just about everything you ever want to do with a loop. And when they don't, you can write your own, which is what you always have to do in a language that only has a generic loop construct. Except that with recursion, there are many more interesting operations you can write than with loops alone.
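For readers coming from Python, some of these have direct analogues in the standard library (the pairing below is my own approximation, not from the comment):

```python
from functools import reduce
from itertools import accumulate

xs = [1, 2, 3, 4]

# Haskell's foldl roughly corresponds to functools.reduce:
#   foldl (+) 0 xs  ==  reduce(lambda a, b: a + b, xs, 0)
total = reduce(lambda a, b: a + b, xs, 0)
print(total)  # 10

# Haskell's map corresponds to Python's map (or a comprehension):
doubled = list(map(lambda x: x * 2, xs))
print(doubled)  # [2, 4, 6, 8]

# scanl (a fold that keeps its intermediate results) has a rough
# analogue in itertools.accumulate:
running = list(accumulate(xs))
print(running)  # [1, 3, 6, 10]
```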
> During Simon's elaboration of an example (a type-safe printf function) I realized the problem with functional programming: there was a simple programming problem where a list had to be transformed into a different list.
I think you misunderstood the problem; I think that the type-safety Simon was striving for makes it more complicated than that.
> I'm sure that the same problem would be easily solved by a simple loop in Python, and would leave no-one in the dark about what it did.