The Artima Developer Community

Weblogs Forum
Other Programmers and Shared-Memory Concurrency

32 replies on 3 pages. Most recent reply: Sep 28, 2007 6:09 PM by Raoul Duke

Bruce Eckel

Posts: 875
Nickname: beckel
Registered: Jun, 2003

Other Programmers and Shared-Memory Concurrency (View in Weblogs)
Posted: Sep 16, 2007 1:21 PM
Summary
Continuing the discussion of the GIL and parallel programming in Python.

In the comments to his posting, Guido van Rossum said:

But I've got a feeling that Bruce isn't thinking of this scenario when he asks for actors (which I remember him bringing up in 2001-2003, so at least he's consistent :-). Unfortunately I can't quite think what problem area he wants to address. There are many different ways one can use multiple CPUs to make a given algorithm faster, but it depends a lot on the algorithm how you have to code it to benefit. E.g. I believe that in the numpy world, GIL removal is pretty much a non-issue: all their heavy lifting is done by C, C++ or Fortran code, which can easily benefit from multiple CPUs by using special vectorizing operations or by creating OS-level threads that aren't constrained by the GIL (since they don't touch Python objects, only arrays of numbers).

My mistake is in introducing too many concepts at once.

First, I want to be able to easily use multiple CPUs to solve parallelization problems, without leaving Python. I don't know that I explicitly asked for GIL removal, and if I did, I didn't mean to. I don't want to specify the solution, just say what the problem is.

From what I've understood about the GIL, removing it would be a huge task, and I've probably moved over to the camp of saying "don't remove it," especially after seeing Parallel Python (pp). Introducing true threads to the language would probably cause more problems than it solves, especially because it would introduce subtle programming problems that the GIL now prevents. Basically, it keeps you from cutting yourself by preventing a lot of the collisions that you would get with true threading.

Now, I've heard all the arguments before, how we really need true shared memory and all that. But every time I drill into such arguments it turns out that the person has learned threading and that's their entire world view -- that's all they can imagine. So that's what they want, and the concern about not having threads is superstition. The same kind of superstition that makes them believe they can write bug-free concurrent programs.

Before you hit the 'reply' button, let's just assume that you're one of the elite who can actually do this -- we're talking about all those OTHER programmers out there who aren't as smart as you, and whose code you'll have to fix if they aren't put into a straitjacket so they can't do any damage.

If I can ever get Brian Goetz to do it, or to give me the list, I'll write about how we got where we are in thinking that we simply must have threads.

So before you hit the 'reply' button, imagine this scenario: you have a computer with 64 or 128 cores, and you want to use those cores. If you allow those OTHER programmers to write with threads, they're going to muck it up; even the best ones will have little race conditions they can't find. But with 64 or 128 or 1024 cores, those race conditions will show up as bugs for the end user, and you're going to have to either figure out how to fix them or give up.

Now, wouldn't you rather have a system where people can blindly write concurrent programs and not worry about guarding that shared memory? If you have all those cores, do you REALLY need to worry about performance so much that you have to do that dangerous memory sharing? Why not let the OS protect all those OTHER programmers from the problem of shared memory -- let the OS guarantee that there isn't any, and eliminate the problem.

You're going to say you really, really need shared memory threads. But you won't be able to keep those OTHER programmers from messing it up. Someday you'll thank me.

The reason for asking for Agents is definitely icing on the cake, but I think it's icing that will make us much more productive with all those cores. Agents make it easier to think about concurrency problems because they're an object-oriented way to think about concurrent programming. Again, Agents will make it easier for those OTHER programmers to write correct concurrent programs.

Introducing Agents, as shown in Scala, can be done with a library as long as the language has the basic support built in (well, there are even Agent packages for Java, but the right language support can make the use of Agents much more pleasant than you typically expect in Java). So it's not as big a risk as asking for complete support for Agents directly in the language.

So to summarize, I want support for multiple cores, but not true threading -- I want process support. And it looks like pp or a similar design will accomplish that. And secondly, I'd like to raise the level of abstraction for concurrent programming via Agent support.

The first is essential, but the second would be Pythonic.
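
To make the Agent idea concrete, here is a minimal sketch of what such an abstraction could look like in plain Python: each agent is an ordinary OS process that owns its state and reacts only to messages pulled from a queue, so there is no shared memory to guard. It uses the standard multiprocessing module (the descendant of the "processing" package mentioned later in this thread); the Agent and Counter names are made up for illustration, not taken from any existing library.

    # A toy Agent: a process with private state that reacts to queued messages.
    import multiprocessing

    class Agent(multiprocessing.Process):
        """A process that owns its state and communicates only via messages."""
        def __init__(self):
            super(Agent, self).__init__()
            self.inbox = multiprocessing.Queue()

        def send(self, message):
            self.inbox.put(message)

        def run(self):
            while True:
                message = self.inbox.get()
                if message is None:          # sentinel: shut down
                    break
                self.receive(message)

        def receive(self, message):
            raise NotImplementedError

    class Counter(Agent):
        """Example agent: counts 'tick' messages, reports the total on request."""
        def __init__(self, replies):
            super(Counter, self).__init__()
            self.replies = replies
            self.count = 0                   # private to this process: no locks needed

        def receive(self, message):
            if message == "report":
                self.replies.put(self.count)
            else:
                self.count += 1

    if __name__ == "__main__":
        replies = multiprocessing.Queue()
        counter = Counter(replies)
        counter.start()
        for _ in range(1000):
            counter.send("tick")
        counter.send("report")
        print(replies.get())                 # 1000
        counter.send(None)                   # stop the agent
        counter.join()

Nothing in this sketch needs new syntax; the open question is whether a polished version of something like it belongs in the standard toolbox.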


Chris Stiles

Posts: 2
Nickname: chrisstile
Registered: Jul, 2006

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 16, 2007 1:50 PM
I think that in general the Actor approach scales further than the threaded approach for a certain class of problems.

I'm not at all convinced that the Actor approach is a panacea for systems with large numbers and types of Actor objects which interact in a multiplicity of ways. It seems to me that generally Actors work very well when the flow of data within the system is a fairly simple one - say, something resembling an assembly line of types passing messages to and fro.

However, once the flow of data becomes complex or meshed, datalock becomes as large a problem as deadlock used to be:

http://www.erights.org/elang/concurrency/epimenides.html

Bruce Eckel

Posts: 875
Nickname: beckel
Registered: Jun, 2003

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 16, 2007 1:54 PM
I won't say that Actors are the only way to simplify concurrent programming, but I think they might take a big, interesting chunk out of the problem.

In general I'd like to see Python adopt higher-level concurrency abstractions on the order of Actors. If we end up with several of these abstractions, each of which solves a different type of concurrency problem, I'd be happy.

For some reason this is an area of fascination for me so I will probably keep poking at the problem, and suggestions for other abstractions are welcome.

Jesse Noller

Posts: 3
Nickname: jnoller
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 16, 2007 4:36 PM
Bruce - good post, and I agree that we may need multiple abstraction libraries to achieve a "good enough" level of coverage on the concurrency issue. Just to do some benchmarking/code examples and data collection, I've been working on a google-code project for this. I've collected a bunch of information here: http://code.google.com/p/python-distributed/w/list
The interesting thing is that there are already more than a few libraries aiming at this space; Parallel Python (pp) is one of them (and mainly aimed at job dispatching).

I myself like(d) the Processing module (http://pypi.python.org/pypi/processing/) because it takes minimal effort to port threaded apps to a fork/exec model.
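
The appeal here comes from the fact that the processing package (which later became the standard multiprocessing module) deliberately mirrors the threading API, so a port can be close to a one-line change. A rough sketch under that assumption, using the modern module name; crunch and run_with are made-up names for illustration:

    # The threaded and the process-based versions differ only in the class used.
    import threading
    import multiprocessing   # 'processing' on PyPI in 2007; same API shape

    def crunch(n):
        # CPU-bound work: threads serialize behind the GIL, processes do not.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def run_with(worker_class, count=4):
        workers = [worker_class(target=crunch, args=(10**6,)) for _ in range(count)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    if __name__ == "__main__":
        run_with(threading.Thread)           # GIL-bound: roughly one core
        run_with(multiprocessing.Process)    # one OS process per worker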

Paul Boddie

Posts: 26
Nickname: pboddie
Registered: Jan, 2006

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 3:55 AM
Jesse Noller wrote: "The interesting thing is that there are already more than a few libraries aiming at this space; Parallel Python (pp) is one of them (and mainly aimed at job dispatching)."

Indeed. You might be interested in looking at the pprocess module if only for the API experiments described in the tutorial:

http://www.boddie.org.uk/python/pprocess/tutorial.html

It should be possible to hide most of the plumbing involved in spawning processes and managing communications, even though that stuff can be quite tricky. Certainly, Python provides enough language support to make all this seem very natural, in contrast to the claims made by the "shiny new syntax" brigade. If any language-related support is desirable, it'd be the ability to know whether a parallelised function has side-effects that would disqualify it from being executed in parallel; but you'd solve this problem with code analysis, not new syntax, and the pprocess approach is to suppress global side-effects by relying on the copy-on-write memory model associated with the fork system call.
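
To make "the plumbing involved in spawning processes and managing communications" concrete, here is roughly the kind of work a library like pprocess has to do under the hood. This is a generic os.fork/os.pipe sketch (Unix only), not the pprocess API itself, and run_in_child is a made-up name:

    # Fork a child, run a function there, and pipe a pickled result back.
    import os
    import pickle

    def run_in_child(func, *args):
        """Run func(*args) in a forked child and return its result to the parent."""
        read_fd, write_fd = os.pipe()
        pid = os.fork()
        if pid == 0:                             # child: compute, send, exit
            os.close(read_fd)
            with os.fdopen(write_fd, "wb") as w:
                pickle.dump(func(*args), w)
            os._exit(0)
        os.close(write_fd)                       # parent: read the result, reap the child
        with os.fdopen(read_fd, "rb") as r:
            result = pickle.load(r)
        os.waitpid(pid, 0)
        return result

    if __name__ == "__main__":
        print(run_in_child(sum, range(10)))      # 45

Because the forked child works on a copy-on-write image of the parent, any global side-effects it makes stay in the child, which is the behaviour mentioned above.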

I'm actually surprised that Twisted hasn't driven this field more prominently, and I think distributed object solutions should also have a lot of promise, mostly because they, too, often have to support a certain amount of asynchronous behaviour and things like re-entrancy.

Stefan Arentz

Posts: 4
Nickname: st3fan
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 10:37 AM
> I'm actually surprised that Twisted hasn't driven this field more prominently,

Twisted is a great framework and I would use it immediately if I had a good use-case for it. Unfortunately their developers (at least the folks on IRC) have an attitude like: "Threads are dumb. If you cannot write a giant state machine to adapt your code into Twisted then you are an idiot."

Completely event driven like Twisted is another extreme. And it will only get you so far. The fact is, the Python standard library and the huge number of open-source modules available are basically all blocking. Period. None of these return Deferreds the way Twisted prefers, which from a pragmatic point of view is not very useful.

As a simple example, Python has some GREAT solutions for accessing databases, like SQLObject. Unfortunately this (blocking) code expects simply to run in a thread, and to use it in a Twisted application you have to jump through so many hoops to get it working without locking up your whole app. The end result is so much more complicated than simply spawning a worker thread and doing your thing in ten normal lines of code.
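
The "ten normal lines" look roughly like this; query_database and in_worker are hypothetical stand-ins for a blocking call (for instance one made through SQLObject) and a tiny helper, not real library functions:

    # Run a blocking call in a worker thread and pick up the result from a queue.
    import threading
    try:
        import queue                 # Python 3
    except ImportError:
        import Queue as queue        # Python 2

    def query_database(sql):
        # Hypothetical stand-in for a blocking database call.
        import time
        time.sleep(1)
        return [("row", 1)]

    def in_worker(func, *args):
        results = queue.Queue()
        worker = threading.Thread(target=lambda: results.put(func(*args)))
        worker.start()
        return results               # call .get() when the answer is needed

    if __name__ == "__main__":
        pending = in_worker(query_database, "SELECT * FROM users")
        print(pending.get())         # blocks here, not in an event loop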

You don't get my vote for event-driven concurrency.

S.

Paul Boddie

Posts: 26
Nickname: pboddie
Registered: Jan, 2006

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 10:58 AM
Stefan Arentz wrote: "Twisted is a great framework and I would use it immediately if I had a good use-case for it. Unfortunately their developers (at least the folks on IRC) have an attitude like: "Threads are dumb. If you cannot write a giant state machine to adapt your code into Twisted then you are an idiot.""

Yes, but regardless of whether threads are dumb or not, I've found myself writing communications handlers which use techniques similar to those employed by Twisted because, even if you dispatch work to separate processes, the parent process can't be waiting for one specific process to communicate results (in the optimal case): it has to be waiting for any of them to communicate. And it isn't easier to have a load of threads listening for results, since they must ultimately be synchronised somehow. I'd have considered using asyncore instead if it were adequately documented and appeared to work as anticipated, but eventually I just decided to learn the basics of select.poll and get my hands dirty.
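
That pattern, waiting for whichever child answers first rather than for one specific child, boils down to a select/poll loop over the pipes' read ends. A bare-bones sketch (Unix only, generic code, not pprocess internals; spawn and triangle are made-up names):

    # Collect results from whichever forked child finishes first, using select().
    import os
    import pickle
    import select

    def spawn(func, arg):
        """Fork a child that writes pickle(func(arg)) to a pipe; return the read end."""
        read_fd, write_fd = os.pipe()
        if os.fork() == 0:
            os.close(read_fd)
            with os.fdopen(write_fd, "wb") as w:
                pickle.dump(func(arg), w)
            os._exit(0)
        os.close(write_fd)
        return read_fd

    def triangle(n):
        return sum(range(n))

    if __name__ == "__main__":
        pending = [spawn(triangle, n) for n in (10, 100, 1000)]
        while pending:
            ready, _, _ = select.select(pending, [], [])   # any child, not one specific one
            for fd in ready:
                with os.fdopen(fd, "rb") as r:
                    print(pickle.load(r))
                pending.remove(fd)
        # A real program would also reap the children with os.wait().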

"Completely event driven like Twisted is another extreme."

Let's not go there! ;-)

"You don't get my vote for event-driven concurrency."

I wasn't asking for it. I think that we need to think more in terms of lazy evaluation: like generators but without so many of the restrictions.

Stefan Arentz

Posts: 4
Nickname: st3fan
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 10:58 AM
I'm really disappointed that people keep chanting the 'nobody can write correct concurrent code because it is too complicated' mantra. I guess I'm dreaming when I see C++ and Java app servers here running complex applications without any problems, even when they sometimes use thousands of threads to do their work.

It is funny that you mention Brian Goetz because he actually spells out very clearly that writing concurrent code (in Java) is certainly not impossible and also not rocket science. I'm sure you are familiar with his book.

For me, the key to writing correct concurrent code is to rely on basic foundations like java.util.concurrent, using proven patterns and common sense. I bet that if Python had great thread support, the same kinds of frameworks would appear in the Python ecosystem and people would be just as comfortable writing concurrent code in Python.

Concurrent code can definitely turn into debugging hell, with race conditions and vague problems, if you choose to work at that level.

On the other hand, it can also mean very successfully putting results together quickly by using basic building blocks like (in the case of Java) BlockingQueue, ExecutorService, Runnable and friends. Again, proven patterns.
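
For comparison, the Python counterpart of those building blocks is the standard blocking Queue plus a small pool of worker threads; what follows is a sketch of the pattern itself, not of any particular framework:

    # Producer/consumer on a blocking queue: the Python analogue of
    # BlockingQueue plus a pool of workers. No hand-rolled locking.
    import threading
    try:
        import queue                 # Python 3
    except ImportError:
        import Queue as queue        # Python 2

    tasks = queue.Queue()
    results = queue.Queue()

    def worker():
        while True:
            n = tasks.get()
            if n is None:            # sentinel: no more work
                break
            results.put(n * n)

    if __name__ == "__main__":
        pool = [threading.Thread(target=worker) for _ in range(4)]
        for t in pool:
            t.start()
        for n in range(100):
            tasks.put(n)
        for _ in pool:
            tasks.put(None)
        for t in pool:
            t.join()
        print(sum(results.get() for _ in range(100)))   # 328350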

Of course anyone can choose to use the much lower-level threading/concurrency primitives as they please and have more chance of shooting themselves in the foot. But how is that different from ANY technology that Python currently exposes? Everything is potentially dangerous in the hands of inexperienced users.

So I guess what I'm asking here is to let go of the Evil Threads idea and let people make that decision based on their own experience and project.

I think it will take Python to the next level and attract a LOT of people who are now doing very smart things in other languages.

S.

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 12:00 PM
> It is funny that you mention Brian Goetz because he
> actually spells out very clearly that writing concurrent
> code (in Java) is certainly not impossible and also not
> rocket science. I'm sure you are familiar with his book.

I agree, and this was true in Java even before the introduction of the java.util.concurrent libraries.

Writing basic concurrency into code is not extremely difficult if you understand how threads work. The really hard part about concurrent code is testing it.

This leads to the real problem with concurrent code, IMO: most developers don't really understand how the code they write works and depend on testing (trial and error) and debugging to make things work. And I don't mean to leave myself out of this; there are definitely times when I have just blindly followed a recipe for a solution. I generally try to understand it all before I finish, but ultimately this is a very efficient way to get things done.

But with multi-threading, that strategy doesn't really work. You can't just run a test and expect any issues to pop up. And if they do, it can be a real pain to figure out what is going on, especially for those who depend on debuggers as the problems often disappear completely once you slow things down.
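
A tiny example of the kind of problem testing misses: the classic lost-update race below may pass a test run hundreds of times and then fail once, and slowing it down in a debugger tends to make it disappear.

    # Unsynchronized read-modify-write from several threads: a lost-update race.
    import threading

    counter = 0

    def increment(times):
        global counter
        for _ in range(times):
            value = counter      # read
            value = value + 1    # modify
            counter = value      # write: another thread may have updated it meanwhile

    if __name__ == "__main__":
        threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Should be 400000; the printed value can vary from run to run when updates are lost.
        print(counter)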

From my experience, hack and slash development doesn't work with concurrency. It's not efficient. You need to think about what could happen. And without a really clear understanding of how threads behave, it's just not possible.

So, I think abstractions over threads are a very good idea. It's just choosing the right set of abstractions at this point.

Mike Ivanov

Posts: 23
Nickname: mikeivanov
Registered: Jul, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 12:47 PM
> So, I think abstractions over threads are a very good
> idea. It's just choosing the right set of abstractions
> at this point.

There are such abstractions. They are called Fork, Process, Shared Memory and IPC, and they work just fine. Is Fork inefficient? Then let's solve that problem first.
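
Those abstractions are already reachable from Python; here is a minimal sketch of processes sharing one explicit piece of memory under a lock, using the multiprocessing module (the descendant of the processing package) as the wrapper around fork, shared memory and IPC:

    # Four processes incrementing a single value in shared memory, under a lock.
    import multiprocessing

    def add(shared_total, n):
        for i in range(n):
            with shared_total.get_lock():      # the only shared state is this one value
                shared_total.value += i

    if __name__ == "__main__":
        total = multiprocessing.Value("l", 0)  # one C long in shared memory
        workers = [multiprocessing.Process(target=add, args=(total, 1000))
                   for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(total.value)                     # 4 * sum(range(1000)) = 1998000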

Stefan Arentz

Posts: 4
Nickname: st3fan
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 12:56 PM
> > So, I think abstractions over threads are a very good
> > idea. It's just choosing the right set of abstractions
> > at this point.
>
> There are such abstractions. They are called Fork,
> Process, Shared Memory and IPC, and they work just fine.
> Is Fork inefficient? Then let's solve that problem first.

I think the problem with a lot of Pythonistas is that they don't understand what it means to have a truly concurrent environment available to them, simply because that has never existed on the platform. If you are never exposed to the 'luxury' of that, then you have to fall back on '80s solutions, I guess ;)

(BTW, I'm well aware of what fork and IPC mean. I use the 'classic' process model at least a few times a year for specific projects, mostly in Python or C++, but it is in NO way a real replacement for having threads.)

S.

Mike Ivanov

Posts: 23
Nickname: mikeivanov
Registered: Jul, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 4:05 PM
> I think the problem with a lot of Pythonistas is
> that they don't understand what it means to have
> a truly concurrent environment...

At least some understand it well enough to keep away from that stuff :o)

The real problem is that the vast majority of third-party libraries are not thread-safe and never will be. Even Swing is not thread-safe. That means a lot of valuable development time is spent on arranging safe calls, synchronizing heap variables and other stuff not directly related to the problem being solved. For what? To work around a lame programming model?

All we need is lightweight processes with safe heap memory; that's it. Everything else was invented 30 years ago.

On a related issue, consider how many library issues people have with multithreaded Apache:
http://www.google.ca/search?q=httpd+apache+thread-safe+problem

Also, the PostgreSQL developers don't use threads, and it's not because they don't know how.

Dave Aitel

Posts: 2
Nickname: daveaitel
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 5:00 PM
I completely agree that concurrency is the next hard problem Python should solve. To do this we need to abstract away data as well...I don't want to have to know how many computers my object is executing on - I just want the result!

Someone in this thread said that you CAN program using Thread and Lock primitives correctly - which is true, until you come up against an attacker. The same way you CAN, theoretically, do memory management correctly, until a hacker finds that one buffer overflow....

Luís Pureza

Posts: 2
Nickname: pureza
Registered: Sep, 2007

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 5:20 PM
Process based concurrency, as proven by Erlang, can be quite scalable while easing the burden on the programmer. It's also very elegant and I understand why people like it.

However, there are other shared-memory concurrency models under active research, such as software transactional memory. What do you think of these?
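
Very roughly, software transactional memory lets you write a read-compute-commit block that is automatically retried if another thread changed the data underneath you. A toy illustration of that retry idea (the Ref class and atomically method are made up here; this is nothing like a production STM):

    # A toy optimistic 'transaction': snapshot, compute, commit only if unchanged.
    import threading

    class Ref(object):
        """A single transactional cell with a version counter."""
        def __init__(self, value):
            self.value = value
            self.version = 0
            self._lock = threading.Lock()        # guards snapshots and commits only

        def atomically(self, update):
            while True:
                with self._lock:
                    snapshot, seen = self.value, self.version
                new_value = update(snapshot)     # user code runs outside the lock
                with self._lock:
                    if self.version == seen:     # nobody interfered: commit
                        self.value = new_value
                        self.version += 1
                        return new_value
                # somebody committed first: retry with a fresh snapshot

    def bump(ref, times):
        for _ in range(times):
            ref.atomically(lambda v: v + 1)

    if __name__ == "__main__":
        balance = Ref(0)
        threads = [threading.Thread(target=bump, args=(balance, 10000)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(balance.value)                     # 40000, despite concurrent updates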

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Other Programmers and Shared-Memory Concurrency Posted: Sep 17, 2007 10:29 PM
> Process based concurrency, as proven by Erlang, can be
> quite scalable while easing the burden on the programmer.

Well, yes, for making concurrency explicit. But that's not really what the "multicore challenge" might be all about, and I understand Guido's reservations about the latest hysteria. My own take on process-oriented programming in Python is to use a generator framework (very similar to Stackless Python) and an implicit distribution mechanism using something like the processing package. The question is just whether or not we want to program our software in this style all the way down.
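
For readers who haven't played with this style: such a generator framework is essentially cooperative tasklets, where each task is a generator and a trampoline loop switches between them at each yield. A toy sketch of that idea (not Stackless and not any particular framework; scheduler and printer are made-up names):

    # A toy round-robin scheduler over generator-based tasks.
    from collections import deque

    def scheduler(tasks):
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)              # run the task up to its next yield
                ready.append(task)      # still alive: put it back in the queue
            except StopIteration:
                pass                    # task finished

    def printer(name, count):
        for i in range(count):
            print("%s: %d" % (name, i))
            yield                       # cooperative switch point

    if __name__ == "__main__":
        scheduler([printer("a", 3), printer("b", 3)])
        # interleaves: a: 0, b: 0, a: 1, b: 1, a: 2, b: 2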

Honestly, I'd prefer alternatives using dataflow variables and implicit synchronization, or declarative programming idioms where the solver engine takes care of distribution rather than the application programmer. Process-oriented thinking might be worthwhile in cases where we model our system as a set of independent processes, but I can't imagine wanting this as an optimization hack for all my algorithms. Note that I'm aware of research in this area in the Java domain as well (FlowJava, for example).

Of course real men want to deal with threading and shared-state concurrency explicitly, just like others love to malloc and free memory buffers. No one can help those fairly advanced programmers with their superior skills and intellectual abilities.
