The post title is derived as a counterpoint to Michael Lucas-Smith's fun Uh.. wait.. before you leave...
I owe a big thanks to Michael Lucas-Smith and Vassili Bykov for the fun that went into what's behind the post. Thanks for the engaging back and forth discussions. They each deserve a portion of the credit.
It all started with a discussion Vassili and I were having shortly after he posted one of his latest. Mostly I was lavishing respect on him. But I was also wishing aloud that it would be nice if we didn't have to have a special compiler to "change the method's guts". Partly, I'd like to be able to generalize the notion and use it for resources that are indeed generated in the image, but that I might want the same caching benefit for.
This wandered into a discussion of the ##() syntax that Dolphin has, and VA too I guess. I'm not as familiar as I ought to be with these environments, but If I Understand Correctly, they give you compile time expressions (CTEs). The expression is evaluated when you accept the method (and possibly when it is compiled at load time, if you're not loading binary). This spiraled out into the effort it might take to hammer another syntactic element into VisualWorks. I also began hypothesizing that what I really wanted was lazy expression caching. CTEs can't take advantage of any local references; the expression must be totally clueless about the environment it's running in. And from a load-by-compile perspective, CTEs may place an unnecessary burden on load time by doing all of the initialization up front at once. If we could treat the ##() as a normal expression the first time it is encountered, but have it remember the result for future evaluations, we'd have something similar, but subtly different. One could take advantage of local state in that case (I've imagined some interesting things one could do with this; I'm not sure how useful, but who knows). It's very JIT-like from the perspective of optimizing the access to those resources.
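To make the distinction a little more concrete, here are the two flavors side by side. This is just an illustration: the Dolphin syntax is from memory, the second method's semantics are purely hypothetical, and computeBigNumber is a made-up selector.

bigNumber
	"Compile time expression, Dolphin-style: evaluated once when the method is
	accepted, and the result is stored as a literal in the compiled method. The
	expression can't use self, arguments, or anything else local."
	^##(1000 factorial)

bigNumber
	"The lazy flavor I was wishing for (hypothetical semantics): nothing happens
	until the first time the method actually runs; the result is remembered for
	later runs, and because it's an ordinary expression until then, it could
	use self and friends."
	^##(self computeBigNumber)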
Either way, we'd be able to simply take those generated image store strings and wrap ##() around them. This is where I got talking to Michael about it. He had all kinds of grand ideas (he thinks grander than I do). When I was describing the implementation as basically a "special kind of block", he had the insight to ask, why not just:
["some code"] once
then? Beautiful! No new syntax. So I went off and implemented it. It was quite quick; it's a one-class, couple-of-methods kind of thing. We spent more time than anything coming up with names for the access method and the package name. For access names, I ended up torn between #once, which is nice and terse and embodies that Smalltalk tradition of anthropomorphizing the objects we talk about with everyday language, and #cached, which was the more technically correct term. So I did both. Runner up was #remember(ed).
It's published in the Open Repository as OnceUponATime. Runners up for the package name included: WhenYouComeBack, SecondVerseSameAsTheFirst, SayItAgain, and GiveMySideAffectsToBroadway.
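If you're curious what a minimal version of the idea looks like, here's a sketch (not the published code). OnceCache is a made-up Object subclass with a single class variable, Values; the identity-keyed dictionary really only pays off for clean blocks that the compiler can reuse as literals, and the sketch doesn't attempt the revert-on-#value behavior that the test further down exercises.

OnceCache class>>valueFor: aBlock
	"Evaluate aBlock the first time we see it; answer the remembered value on
	every subsequent request. Values is a class variable lazily initialized to
	an IdentityDictionary keyed by the block itself."
	Values isNil ifTrue: [Values := IdentityDictionary new].
	^Values at: aBlock ifAbsentPut: [aBlock value]

BlockClosure>>cached
	"Answer the receiver's value, evaluating it only the first time."
	^OnceCache valueFor: self

BlockClosure>>once
	"The friendlier-sounding synonym for #cached."
	^self cached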
One of the things that simply amazes me about Smalltalk is how it's the littlest thing that can end up being the most powerful. As I continue to stroll the path of Smalltalk Enlightenment after all of these years, two axioms that just keep repeatedly showing up are the "Less is More" and the "It's all in the messages" themes.
In case you're still not clear on what this does, perhaps this test (which passes) will illuminate further:
| block a b c d e f |
block := [Time microsecondClock].
"make sure that the block will produce a different result for each value as expected"
a := block value.
b := block value.
self assert: b > a.
"now emit cached to it and verify that the results are the same, but that it is unique from the previous values"
c := block cached.
d := block cached.
self assert: c = d.
self assert: c > b.
"finally use a value to ascertain that it reverts when sent a value, and a new cached to make sure it caches a fresh value"
e := block value.
f := block cached.
self assert: e > d.
self assert: f > e
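And to tie it back to where this started, the bigNumber example from earlier ends up looking like this, with no new syntax at all:

bigNumber
	"Evaluated the first time bigNumber is called; every later call answers the
	remembered result. Same effect as the ##() version, except the work happens
	lazily on first use instead of at compile time."
	^[1000 factorial] once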