fibs = [0, 1, 1]

def smartFib(n):
    for i in range(len(fibs), n + 1):
        fibs.append(fibs[i - 1] + fibs[i - 2])
    return fibs[n]
This is O(n) space and time. You can improve to O(1) space like this:
def smarterFib(n):
    if n == 0:
        return 0
    (a, b) = (1, 1)
    for i in range(3, n + 1):
        (a, b) = (b, a + b)
    return b
Then you can improve to O(ln(n)) time like this:
(a, b, c, d) = (1, 0, 0, 1)

def f(n):
    # Raise the matrix [[1, 1], [1, 0]] to the nth power by repeated
    # squaring, leaving the result [[a, b], [c, d]] in the globals.
    global a, b, c, d
    if n > 0:
        f(n // 2)
        (a, b, c, d) = (a*a + b*c, a*b + b*d, c*a + d*c, c*b + d*d)
        if n % 2:
            (a, b, c, d) = (a + b, a, c + d, c)

def evenSmarterFib(n):
    global a, b, c, d
    (a, b, c, d) = (1, 0, 0, 1)
    f(n)
    return b    # [[a, b], [c, d]] is now M**n, whose b entry is fib(n)
smartFib will not compute the 1,000,000th Fibonacci number on my computer; it runs out of memory and is killed. The second version will compute the 1,000,000th Fibonacci number, but I need to go make coffee in order to wait long enough for it to finish. The last version computes the 1,000,000th Fibonacci number in less than 1 minute. This method is based on the matrix identities for Fibonacci numbers and the repeated-squaring optimization for raising a matrix to a power. You could get even faster than this by using some of the modular identities, or identities involving Lucas numbers, etc. Many of these methods (or at least the identities) are discussed in volume 1 of Knuth.
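For reference, the matrix identity being exploited, together with the squaring steps that give the logarithmic number of matrix multiplications:

```latex
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{\!n}
=
\begin{pmatrix} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{pmatrix},
\qquad
M^{2k} = \bigl(M^{k}\bigr)^{2},
\qquad
M^{2k+1} = \bigl(M^{k}\bigr)^{2} M.
```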
A little context is missing: smarterFib() and evenSmarterFib() are not going to be very smart about calculating the Fibonacci number for n=100001 right after calculating the one for n=100000. The original problem was testing the function in a loop, computing the first Fibonacci number, then the second, then the third, and so on. In that case, there was an advantage to caching the previous results. Of course, you could argue that you only need to cache the last two results, but that might be going overboard on specificity to the original problem. (By the way, if you're concerned about space, shouldn't you use xrange() instead of range()?)
Anyway, I think the salient point is that the original recursive implementation is a ridiculous way to judge the value of a language.
By the way, evenSmarterFib() is fast; I can do a million in a little over 5 seconds. However, two million is over 16 seconds. That's not O(ln(n)), is it?
      N    Time (ms)
 100000          140
 200000          437
 300000          734
 400000         1313
 500000         1719
1000000         5265
2000000        16203
Ah ha. Whilst riding home on my bicycle, I had a moment to contemplate the universe and another to spare for this question. Naturally, for the former I came to the predictable conclusion of 42, and for the latter to the conclusion that the large numbers themselves are the problem (or at least a part of the solution). This Knuth character naïvely* assumes that operations on numbers take the same amount of time, regardless of the size of the numbers involved. Of course, this isn't really the case as soon as the numbers are larger than the word size of the processor, and particularly not so when they are huge in comparison, as in this case. (Anyone who's written a BigInt class in C++ Programming 101 knows this.)
* For those who don't realize that this is tongue-in-cheek, please note: this is tongue-in-cheek!
> If you would make everything polymorphic in C++, would you still keep value semantics? I think the clear value/pointer semantics of C++ are one of its strengths.
Thanks for the discussion of boost::any.
Yes, I would keep value semantics, which of course makes integer a non-object, or at least a non-reference type, as Joe Cheng has been pointing out. The Java/.NET way is to tie value semantics fundamentally to a type, whereas of course in C++ you can treat any type both ways.
I still like calling methods through anything at all and it seems to me more, not less, elegant to say that toString is available for everything.
> The question is when, not where: I believe the second installment will appear on the Artima home page this Monday.
Actually, this coming Monday will be Part I of Bertrand Meyer. Bjarne's Part II should appear within a few weeks.
The best way to be notified when articles come out is probably to subscribe to the newsletter. Just click on Your Settings in the upper left hand corner of this page (after you log in), then scroll down to Your Subscriptions and click the checkbox next to Artima Newsletter. In this weekly email I announce and make a short comment about the new articles.
> > BTW, there's also a new signal/slot library added to Boost. And not mentioning Boost when talking about C++ libraries which offer what the standard library doesn't is a severe shortcoming, IMHO.
>
> I usually refer to BOOST in my talks, and if I remember rightly I mention it later in this interview; you haven't seen half of it yet.

Actually, Bjarne mentioned Boost in this installment, but I overambitiously edited it out. I have put the mention of Boost back in (on Page 1 of the interview), and linked to Boost from the resources section.
> > The question is when, not where: I believe the second installment will appear on the Artima home page this Monday.
>
> Actually, this coming Monday will be Part I of Bertrand Meyer. Bjarne's Part II should appear within a few weeks.

I assume that the majority of these interviews were done in a single 'take'. I must admit that I find the Artima format of splitting interviews and releasing them over a period of time, in order to artificially give the impression of a continuous drip feed of 'new' information, increasingly frustrating.
(Apart from that, Artima is still my favourite Java oriented web site.)
> > > The question is when, not where: I believe the second installment will appear on the Artima home page this Monday.
> >
> > Actually, this coming Monday will be Part I of Bertrand Meyer. Bjarne's Part II should appear within a few weeks.
>
> I assume that the majority of these interviews were done in a single 'take'. I must admit that I find the Artima format of splitting interviews and releasing them over a period of time, in order to artificially give the impression of a continuous drip feed of 'new' information, increasingly frustrating.

Yes, I was going for the literary equivalent of Chinese water torture. Actually, the situation is this. First of all, I want to do in-depth interviews. I speak with people for about 90 minutes. That comes out to about 12,000 to 15,000 words, give or take, which in practice is too much for a single article. People won't read an article that long. 3000 words is already really long. I aim for 1500 to 2500 words per article on Artima.com. So with the interviews, given I want them to be in-depth, either I cut out a lot of stuff (which I do anyway), or I split them up.
I chose to split them, and that has worked rather well, with a few hiccups. First, when I started, I did one interview at a time. So for six weeks I published an interview with Bob Scheifler. Then the next six weeks were Ken Arnold. Then the next six weeks were Martin Fowler, and so on. That seemed to work well until I got to Dave Thomas and Andy Hunt. Perhaps because there were two of them, and they kept responding to each other, I ended up with 2 hours of really good material that I wanted to publish. So that interview went on for 10 weeks straight. It was getting to the point where I figured I should just rename the site DaveAndAndy.com. It was too much. A similar thing happened somehow with Rusty Harold. I tried to find parts to cut out, but I felt that I needed to publish all I published for it to be a cohesive whole. So Rusty's interview just seemed to go on and on.
So that's when I decided to start staggering people, because I felt that it was too monotonous to have the same voice week after week. It may be that I have gone too far in the other direction, though, because now it is hard to follow a single interview. I now have Anders, James, Bjarne, Bertrand, Matz, and Ward going at the same time. So if all you care about is Bjarne, you get an article only once every six weeks. I do provide easy ways to be notified of articles, though: you can subscribe to the newsletter or the NewAtArtima RSS feed. But by the time you read Bjarne II, you've probably forgotten what was in Bjarne I.
But here's the thing. When I split up these interviews, I try very hard to make each article a cohesive whole, if possible, focused on one subject. So they are a series, but they're also independent.
My real goal is to give voice to a lot of perspectives and provide a forum in which people can discuss ideas and opinions, and hopefully get insights and learn from each other things that can help us in our day-to-day work. That's the real reason there's a different interviewee each week: to provide that kind of stimulus of differing opinions and perspectives. But I do recognize that I may have gone too far in the multi-voice direction, such that the interviews don't feel very serial anymore.
In the future, I want to publish more articles each week. So that may help. I may also reduce the number of interviews I interleave. If I publish two interview installments a week, and have four interviews going, then each installment of each particular interviewee would come out every two weeks. That might be the best compromise. I'm open to suggestion.
I'd also like to publish non-interview articles (tutorials, how-tos, etc.) by others, especially by all of you, so if you have ideas for Artima.com articles, please submit them. Info on how is here:
> you can define methods like toString and finalize and know everything will have them
The main reason I dislike the single hierarchy of Java is that you can call methods like toString and have them do something even if that's meaningless in a given case.
FWIW though you can do something like the following in C++:
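Something along these lines; the name toString and the exact signatures are just my sketch, assuming the general version streams through an ostringstream:

```cpp
#include <sstream>
#include <string>

// General case: build the string via the standard ostream mechanism.
// This compiles only for types that provide an operator<<.
template <typename T>
std::string toString(const T& value) {
    std::ostringstream out;
    out << value;
    return out.str();
}

// Specialisation for the trivial case where the object actually is a
// string; a decent compiler should optimise this to (nearly) nothing.
template <>
std::string toString<std::string>(const std::string& value) {
    return value;
}
```

Called as toString(42) or toString(myThing), the right version is picked at compile time; a type with no operator<< simply fails to compile.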
The second of these two functions is a specialisation, which means that if the object already defines a string() operator then that will be used; in the most trivial case, where the object actually is a string, this should hopefully be optimised away to nothing.
The first function handles the general case: the standard mechanism for serialising an object to characters is used to construct the string. This will work for all the built-in types, and for classes whose author could see a reason for providing it. If there is no reason to represent the class as a string then it won't compile, whereas with Java it will compile but then produce output of little value other than as debugging clues.
If the class's author has genuinely been remiss in not providing a way to serialise the class to a character stream, then it can be provided without altering the class's definition by defining the function ostream& operator<<(ostream&, const someObject&);. This also makes it easier to add support for different character types, rather than tying that choice into an entire object hierarchy that is inseparable from the language. (I'll admit, though, that I would often be happier if C++ had a character type with guaranteed UTF-16, UTF-32 or even UTF-8 semantics.)
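For example, given a hypothetical someObject whose author provided no output support at all, the operator can be bolted on afterwards without touching the class:

```cpp
#include <ostream>
#include <sstream>

// A hypothetical class shipped without any way to print it.
struct someObject {
    int id;
    double weight;
};

// Retrofitted serialisation: a free function, added after the fact,
// that leaves the class definition untouched.
std::ostream& operator<<(std::ostream& os, const someObject& o) {
    return os << "someObject{" << o.id << ", " << o.weight << "}";
}
```

Once this exists, any stream-based machinery (including a toString built on ostringstream) works for someObject too.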
So I'd go further than saying that integer shouldn't be part of a hierarchy and say that neither should string, vector<pair<int, void*> > or many other types, unless their use or implementation genuinely gains from it (and in the latter case the inheritance could be private or protected). I think that's what Bjarne Stroustrup meant too, though I won't assume he agrees with me.
Further still, once we have classes free from any hierarchy, and we have control over whether objects and primitives are created on the stack or the heap, the distinction between an integer and an integer-like class (like the wrapper classes of Java and .NET) becomes less striking, both to us and to the compiler (which may well optimise away all or most operations on such a class to the equivalent on a bare integer). Of course this means that such classes aren't of much use to us, and C++ doesn't have such wrappers (though classes that contain a single primitive aren't unheard of).
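A minimal sketch of such an integer-like wrapper (the name Integer is purely illustrative): every operation just forwards to the wrapped int and can be inlined, so an optimising compiler will typically emit the same code as for a bare int.

```cpp
// A stack-allocated, hierarchy-free wrapper around int with value
// semantics. No virtual functions, no heap allocation, no base class.
class Integer {
    int value;
public:
    explicit Integer(int v) : value(v) {}

    // Arithmetic and comparison forward directly to the wrapped int.
    Integer operator+(Integer other) const { return Integer(value + other.value); }
    Integer operator*(Integer other) const { return Integer(value * other.value); }
    bool operator==(Integer other) const { return value == other.value; }

    int get() const { return value; }
};
```

Since the class is exactly one int wide and all its operations are inline, `Integer(2) + Integer(3)` compiles down to the same addition as `2 + 3`.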
> The interesting part of the interview was the necessity to keep the number of virtual functions / member functions to a minimum. ...
That wasn't what I understood him to be saying. I thought he was saying that a big part of designing a class is thinking about whether, how and why additional classes would be derived from it. Following some rule of thumb like "just make 'em all virtual" only helps you ignore an important part of the class design thought process.
The only place I've seen good information on the performance of virtual functions and base classes is in the C++ Annotated Reference Manual. It discusses possible implementations of these features, which are interesting in their own right because they're so ingenious. Unfortunately, the ARM is very out of date, so only old geezers like myself are likely to have a copy of it. It's too bad Dr. Stroustrup has not incorporated this material into C++PL.
Flat View: This topic has 65 replies on 5 pages