The Artima Developer Community
Weblogs Forum
Musing about Closures

30 replies on 3 pages. Most recent reply: Oct 24, 2005 6:54 PM by Howard Lovatt

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Musing about Closures (View in Weblogs)
Posted: Sep 30, 2005 11:17 AM
Summary
Closures are extremely useful, no question, but they are hard to implement efficiently. I am considering implementing them in Heron but with auto-destruction semantics.
In many dynamic languages, code blocks (a series of statements enclosed by curly braces) are first-class objects, along with nested functions. Within a code block or nested function it makes sense for a programmer to be able to refer to variables outside the block's own scope; doing so makes it a closure. The challenge is that the language must ensure that any object referred to from a closure lives at least as long as the closure itself. This is a trivial problem in a garbage-collected language, but considerably less so in a non-garbage-collected imperative language like Heron. A few options are available to me, but there is a nagging question:
Does the usage of closures lead to programmers inadvertently extending the lifetime of objects and if so is it a significant problem?
Some readers will surely say I worry too much, and that in a garbage-collected language everything works out magically in the end. I am not convinced of that, and I think it is important to find an implementation technique that is efficient and doesn't quietly make objects last longer than intended. I am trying to strike a balance: stay efficient without being obliged to use garbage collection as a crutch for implementing closures.

As an alternative I am considering auto-destruction semantics for closures. The idea is that if a nested function or code-block object refers to a variable, its lifetime becomes tied to that of the shortest-lived object it refers to. This means closures can be passed to functions, but not returned from one; ordinary non-capturing functions can still be returned. So in a sense I would be somewhat limiting the usefulness of closures, but at the same time limiting the associated expenses.
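For comparison, here is a sketch of the same rule in Rust, which enforces something very similar for borrowing closures: a closure that refers to a local can be passed down the stack, but cannot outlive what it captures. (Rust is purely an illustration here; none of this is Heron syntax.)

```rust
// `apply` accepts any closure, including one that mutably
// borrows a local from the caller's stack frame.
fn apply<F: FnMut()>(mut f: F) {
    f();
}

fn run() -> i32 {
    let mut count = 0;
    apply(|| count += 1); // fine: the closure dies before `count` does
    count
}

fn main() {
    assert_eq!(run(), 1);
    // Returning such a closure is rejected at compile time:
    // fn make() -> impl FnMut() {
    //     let mut x = 0;
    //     || x += 1 // error: closure may outlive the function, but borrows `x`
    // }
}
```

Passing down is allowed because the stack discipline guarantees the captured variable outlives the call; returning up is exactly the case the auto-destruction rule forbids.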

There could be very good reasons for not doing things this way, and I would like to hear them. I am also wondering if there are alternative implementation approaches which don't require a garbage-collector which I can consider.


Vesa Karvonen

Posts: 116
Nickname: vkarvone
Registered: Jun, 2004

Re: Musing about Closures Posted: Sep 30, 2005 1:20 PM
> There could be very good reasons for not doing things this
> way, and I would like to hear them.

Well, the reason is that it is just plain wrong. It severely limits what you can express conveniently in the language. So, I think that you have the question backwards. You should be asking:

Does the absence of proper closures (and GC) make
it significantly more difficult or even impossible
to express certain abstractions?

Yep. You just found out about one of the problems I was thinking about in the GC debate and I'm sure that you can find some complicated and expensive workaround that almost works.

Todd Blanchard

Posts: 316
Nickname: tblanchard
Registered: May, 2003

Re: Musing about Closures Posted: Sep 30, 2005 1:24 PM
> Does the usage of closures lead to programmers
> inadvertently extending the lifetime of objects and if so
> is it a significant problem?

The answer to both is "sometimes". Like everything else, you need to be aware of the consequences of your actions. However, it is kind of rare that the problem manifests itself in any meaningful way and when it does, you do the same kinds of things you might when your program hits a size or performance limit. You analyze your usage and optimize somehow.

> Some readers will surely say I worry too much, and in a
> garbage collected language, everything works out magically
> in the end. I am not convinced of that, and I think it is
> important to try and find an implementation technique
> which is efficient, and doesn't quietly make objects last
> longer than intended.

That would typically be garbage collection.

> I am trying to find a happy medium
> between efficiency, and also not be obliged to use garbage
> collection as a crutch for implementing closures.

Except that garbage collection is usually more efficient than manual memory management.

> As an alternative I am considering the possibility of
> auto-destruction semantics for closures. The idea is that
> if a nested function or code-block object refers to a
> variable, its lifetime becomes tied to the lifetime of the
> shortest lived object to which it refers. This approach
> means that closures can be passed to functions, but not
> returned from a function. However functions can still be
> passed from functions. So in a sense what I would be doing
> is somewhat limiting the usefulness of closures, but at
> the same time limiting the associated expenses.

Well, you get what you pay for. I'm not sure about your strategy for cleaning up closures; it seems like you'll have a circularity problem. If closure C refers to objects A and B, then A and B must not be destroyed as long as C exists. Saying that C can go as soon as A or B does creates a paradox, unless you are somehow willing to say "kill A" to trigger this.

It makes for more complexity and smells fragile to me.

> There could be very good reasons for not doing things this
> way, and I would like to hear them. I am also wondering if
> there are alternative implementation approaches which
> don't require a garbage-collector which I can consider.

I don't know of any. As you note, closures vastly increase the complexity of object lifetime management - at some point you should just let the computer handle it.

Max Lybbert

Posts: 314
Nickname: mlybbert
Registered: Apr, 2005

Re: Musing about Closures Posted: Sep 30, 2005 1:36 PM
/* Does the usage of closures lead to programmers inadvertently extending the lifetime of objects and if so is it a significant problem?
*/

By definition, as long as the objects are interesting, shouldn't they be kept alive?

I know that you have several pointer classes in Heron. IIRC, you have something of a weak ref pointer and a strong ref pointer. It seems to me that the programmer can be asked to choose which closure he wants, and the right pointer type can be picked based on that.
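In Rust terms (offered only as an analogy; I don't know Heron's actual pointer classes in detail), the choice looks like capturing an `Rc` (strong: the closure keeps the object alive) versus a `Weak` (the closure must check at call time whether the object still exists):

```rust
use std::rc::Rc;

fn demo() -> (i32, Option<i32>) {
    let data = Rc::new(42);

    // Strong capture: the closure owns a reference and extends the lifetime.
    let strong = Rc::clone(&data);
    let strong_closure = move || *strong;

    // Weak capture: the closure does not extend the lifetime.
    let weak = Rc::downgrade(&data);
    let weak_closure = move || weak.upgrade().map(|r| *r);

    drop(data); // release the original handle

    let alive = strong_closure(); // 42: kept alive by the strong capture
    drop(strong_closure);         // the last strong reference is gone
    (alive, weak_closure())       // the weak capture now sees the death
}

fn main() {
    assert_eq!(demo(), (42, None));
}
```

The programmer's choice of capture strength decides whether the closure silently extends lifetimes or merely observes them.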

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: Musing about Closures Posted: Sep 30, 2005 2:06 PM
I'm imagining this discussion being repeated with call/cc as the subject in a month or so.

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Sep 30, 2005 4:11 PM
> I'm imagining this discussion being repeated with call/cc
> as the subject in a month or so.

For the uninitiated, call/cc stands for call-with-current-continuation.

http://community.schemewiki.org/?call-with-current-continuation

To be honest, call/cc is not one of those features I instinctively found myself missing as a natural way to express my designs, unlike closures. Though maybe you are right, and the topic will come back later on.

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Sep 30, 2005 4:29 PM
> Well, the reason is that it is just plain wrong. It
> severely limits what you can express conveniently in the
> language.

Do you have any examples?

> Does the absence of proper closures (and GC) make
> it significantly more difficult or even impossible
> to express certain abstractions?

So fine, whatever semantics you want, that is the question I am seeking to answer. How specifically is the auto-destructive closure I am proposing unable to achieve the same abstraction as a full closure?

I am seeking specific canonical examples of closures which I can compare against.

To get you started consider the following code:

void repeat(function f, int n) {
  while (n > 0) {
    f();  
    --n; 
  }
}
 
void test()
{
  int x;
  void increment_x() { 
    ++x;
  }
  repeat(increment_x, 42);
  write(x); 
}


At least this is possible with what I am proposing. What you can't do is return a locally defined closure from a function, or store a closure in a variable that outlasts any of the objects it references.
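For what it's worth, the sketch above is directly expressible in Rust, whose borrow checker enforces much the same restriction I'm proposing (Rust is only an illustration here, not Heron):

```rust
// `repeat` accepts any closure, including one that mutably
// borrows `x` from the caller's stack frame.
fn repeat<F: FnMut()>(mut f: F, mut n: u32) {
    while n > 0 {
        f();
        n -= 1;
    }
}

fn test() -> i32 {
    let mut x = 0;
    repeat(|| x += 1, 42); // increment_x written inline
    x // the borrow has ended; x is readable again
}

fn main() {
    println!("{}", test()); // prints 42
}
```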

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Sep 30, 2005 4:38 PM
> Well, you get what you pay for. I'm not sure about your
> strategy for cleaning up closures, seems like you'll have
> a circularity problem. If closure C refers to objects A
> and B, then A and B must not be destroyed as long as C
> exists.

That is not what I am proposing. I am proposing that C auto-destructs when the first of A or B is destroyed.

> I don't know of any. As you note, closures vastly
> increase the complexity of object lifetime management

I am not convinced this is necessarily the case, even though traditionally it is. Not many language designers have even tried to solve the problem for stack-based, non-GC languages.

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Sep 30, 2005 4:46 PM
> /* Does the usage of closures lead to programmers
> inadvertently extending the lifetime of objects and if so
> is it a significant problem?
> */
>
> By definition, as long as the objects are interesting,
> shouldn't they be kept alive?

In a stack-based language like Heron, when a stack-allocated object reaches the end of its life because the stack is unwound, that is taken to mean the programmer intends the object to die at precisely that moment. When the object is heap-allocated, it is taken to be intended to die precisely when delete is called. This is how the programmer signals their "intent" to kill the object. If an object is used beyond that point, there is a discordance: does the programmer want the object to live longer or not? The explicit destruction is usually the intended behavior, but I can't be sure of that; the programmer has designed conflicting goals into their software. So this throws a runtime error, as it indicates a design flaw. This whole philosophy is based on the principle of error management through redundancy.

In a GC language, the programmer has no way of explicitly stating when the object is supposed to be killed, so everything is kept alive as long as the computer deems needed, removing the ability of the programmer to detect design flaws.

Ummm... does this make my thought process more clear or more muddy?
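To make the "explicit intent" idea concrete, here is how it looks in Rust, where `drop` is the explicit kill and any later use is rejected. (Rust is only an analogy: it catches the contradiction at compile time, where the Heron design described here would raise a runtime error.)

```rust
fn demo() -> usize {
    let v = vec![1, 2, 3];
    let n = v.len();
    drop(v); // explicit destruction: the stated intent to kill `v`

    // Using `v` past this point would contradict that intent.
    // Rust rejects it statically; Heron as described would
    // instead raise a runtime error:
    // println!("{:?}", v); // error[E0382]: borrow of moved value: `v`
    n
}

fn main() {
    assert_eq!(demo(), 3);
}
```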

> I know that you have several pointer classes in Heron.
> IIRC, you have something of a weak ref pointer and a
> strong ref pointer. It seems to me that the programmer
> can be asked to choose which closure he wants, and the
> right pointer type can be picked based on that.

If a programmer wants the closure to live longer, they have to explicitly make the referenced objects live longer. Then everyone is happy. Automatically extending the life because the closure wants it, is not something I like at all for the reasons stated above.

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: Musing about Closures Posted: Sep 30, 2005 5:22 PM
> > /* Does the usage of closures lead to programmers
> > inadvertently extending the lifetime of objects and if so
> > is it a significant problem?
> > */
> >
> > By definition, as long as the objects are interesting,
> > shouldn't they be kept alive?
>
> In a stack based language like Heron, when a stack
> allocated object reaches the end of its life due to the
> stack being unrolled that is taken to mean that the
> programmer's intention is for the object to die precisely
> at that moment.

Yeah, I think that's one way of looking at it. Don't you see stack deallocation as problematic? It is silent in much the same way as GC. Maybe, for consistency's sake, all variables should be explicitly deleted, auto or not. Then you'd have a check on design flaws.

I'm not saying I agree, but it seems like it would be consistent with the principles you're laying out.

> When the object is heap allocated, the
> object is taken to be intended to be killed precisely when
> delete is called. This is how the programmer signals their
> "intent" to kill the object. If an object is used beyond
> that point, there is discordance. Does the programmer want
> the object to live longer or not. The explicit destruction
> is usually the intentional usage, but I can't be sure of
> that. The programmer has designed conflicting goals into
> their software. This throws a runtime error, as it
> indicates a design flaw. This whole philosophy is based
> on the principle of error management through redundancy.

Explicit deletion for everything would take care of it and it fits that philosophy.

> In a GC language, the programmer has no way of explicitly
> stating when the object is supposed to be killed, so
> everything is kept alive as long as the computer deems
> needed, removing the ability of the programmer to detect
> design flaws.

Hmm... I think that the way that programmers state their intention to kill is to lose reference. And, it is no less explicit than stack deallocation or losing a reference in a reference counting scheme.
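In a reference-counting scheme that moment is perfectly deterministic, which can be sketched with Rust's `Rc` (purely illustrative; Rust is not the language under discussion):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Log destruction so we can see it happen exactly when the last
// reference is released, not at some later collection point.
struct Tracked(Rc<RefCell<Vec<&'static str>>>);

impl Drop for Tracked {
    fn drop(&mut self) {
        self.0.borrow_mut().push("destroyed");
    }
}

fn demo() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let a = Rc::new(Tracked(Rc::clone(&log)));
    let b = Rc::clone(&a); // a second reference to the same object

    drop(a); // one reference lost: the object survives
    log.borrow_mut().push("after drop(a)");
    drop(b); // the *last* reference lost: the destructor runs right now

    let result = log.borrow().clone();
    result
}

fn main() {
    assert_eq!(demo(), vec!["after drop(a)", "destroyed"]);
}
```

Losing the last reference is as sharp a signal as reaching the end of a block; the destructor fires at that exact point.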

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Sep 30, 2005 6:58 PM
> > In a stack based language like Heron, when a stack
> > allocated object reaches the end of its life due to the
> > stack being unrolled that is taken to mean that the
> > programmer's intention is for the object to die precisely
> > at that moment.
>
> Yeah, I think that's one way of looking at it. Don't you
> see stack deallocation as problematic? It is silent in
> much the same way as GC.

It is, however, extremely predictable, and not context-dependent.

{
  object o;
  flubber(o);
} // o is always destroyed here 


> Hmm... I think that the way that programmers state their
> intention to kill is to lose reference.

Dropping a reference might not delete the object if there is a design flaw (e.g. another reference lurking elsewhere in the software).

My point is this: in any language lacking explicit destruction semantics, the programmer cannot explicitly ensure that destruction occurs and that the design is sound. In non-trivial systems this is a very bad thing.

Todd Blanchard

Posts: 316
Nickname: tblanchard
Registered: May, 2003

Re: Musing about Closures Posted: Sep 30, 2005 8:57 PM
> I am proposing that C
> auto-destructs when the first of A or B is destroyed.

Sounds hard to implement, but let's ignore how that works for a minute.

> > closures vastly
> > increase the complexity of object lifetime management
>
> I am not convinced this is necessarily the case, even
> though traditionally it is.

Hmmm, I just remembered that Squeak's blocks are not currently full closures (there is an extension that makes them full closures, it is not yet in the standard distro).

Actually blocks are instances of BlockContext. All references to external objects are weak in a BlockContext. It is possible to make them strong by sending the block the message #fixTemps. That essentially turns it into a full closure. In your case you could do something similar by having the block take ownership of the objects it references -or- have a way to specify that the reference is weak or strong.
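Rust's `Weak` pointers behave much like those weak BlockContext references (Rust offered purely as an analogy): the block observes its referent's death instead of silently extending its life, and strengthening a capture is a deliberate act, somewhat like sending #fixTemps.

```rust
use std::rc::{Rc, Weak};

fn demo() -> (Option<i32>, Option<i32>) {
    let owner = Rc::new(7);

    // A weak-by-default "block": it must upgrade at call time.
    let weak: Weak<i32> = Rc::downgrade(&owner);
    let block = move || weak.upgrade().map(|r| *r);

    let before = block(); // Some(7): the owner is still alive
    drop(owner);          // the owner releases the object
    let after = block();  // None: the block never kept it alive
    (before, after)
}

fn main() {
    assert_eq!(demo(), (Some(7), None));
}
```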

I'm not too keen on the last approach as it complicates the programming model and we're still at the level of walking fish in the computer world as it is. Really we ought to stand up and get out of the mud one of these days.

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: Musing about Closures Posted: Oct 1, 2005 4:19 AM
> My point is this: in any language lacking explicit
> destruction semantics, the programmer can not explicitly
> assure the destruction occurs and that the design is
> sound. In non-trivial systems this is a very bad thing.

Yes, but the thing is, automatic destruction at the end of a block is not explicit, it is implicit.

Christopher Diggins

Posts: 1215
Nickname: cdiggins
Registered: Feb, 2004

Re: Musing about Closures Posted: Oct 1, 2005 6:39 AM
> > My point is this: in any language lacking explicit
> > destruction semantics, the programmer can not explicitly
> > assure the destruction occurs and that the design is
> > sound. In non-trivial systems this is a very bad thing.
>
> Yes, but the thing is, automatic destruction at the end of
> a block is not explicit, it is implicit.

You are right. I guess I should have said deterministic, or reproducible and predictable.

Max Lybbert

Posts: 314
Nickname: mlybbert
Registered: Apr, 2005

Re: Musing about Closures Posted: Oct 3, 2005 6:50 AM
/* If a programmer wants the closure to live longer, they have to explicitly make the referenced objects live longer. Then everyone is happy. Automatically extending the life because the closure wants it, is not something I like at all for the reasons stated above.
*/

Well-reasoned. OTOH, I can't shake the feeling that if the programmer chooses to use a closure after the variables it references are destroyed, then the programmer most likely jumped the gun.

Then again, it's probably best to alert the programmer to his over-eagerness instead of making changes behind his back.


Copyright © 1996-2019 Artima, Inc. All Rights Reserved.