> > Instead A can be implemented as an indirection over AA.
> >
> > For example (Java code):
>
> Ok, so you have shown me it's possible to break layers in
> Java. However, I have two objections:
One caveat. OO is extremely poorly defined. Many people disagree about what is and isn't OO. In particular, people from the Smalltalk school of OO have different ideas about OO than the C++ school of OO.
> 1. Is it still OOP? What happened to the encapsulation
> principle?
I consider this encapsulated. How is it any less encapsulated? Neither AA nor A exposes its implementation to the outside. One of the fallacies (IMO) of novice OO is that all objects must be standalone. OO doesn't prescribe how you implement objects internally; it's about how they appear to the outside.
> (I know that the "correct" set of OOP
> principles is fuzzy and may vary from person to person.) I
> don't see this technique being the canonical way to
> program in Java,
Well, a lot of people don't do it; I'll give you that. But if you go and look at the source for the JDK, you'll find this kind of thing done often. One good example is that when a StringBuffer (and I imagine StringBuilder) is converted to a String, the newly created String will actually use the underlying array of the StringBuffer.
Another good example is that the HashSet class is really just the key set of a HashMap instance. Look at HashMap and how entrySet and keySet are implemented and you'll find that the actual underlying structure is an array.
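The HashSet-over-HashMap idea can be sketched in a few lines. SimpleSet below is a hypothetical name, not the JDK source; it just shows the shape of the indirection: the "set" owns no storage of its own, only a reference to a backing map.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch (not the JDK source) of the HashSet idea:
// the set delegates everything to a backing map and stores
// a shared dummy object as every value.
class SimpleSet<E> {
    private static final Object PRESENT = new Object();
    private final Map<E, Object> map = new HashMap<>();

    // put() returns null only when the key was absent,
    // which is exactly the "was it newly added?" answer a set needs.
    public boolean add(E e)      { return map.put(e, PRESENT) == null; }
    public boolean contains(E e) { return map.containsKey(e); }
    public int size()            { return map.size(); }
}
```

From the outside this looks like a standalone set; the fact that it is implemented as an indirection over a map is invisible to callers.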
> just like it's not canonical to have your
> program in one class, with the instance variables as
> globals (though it's certainly possible to program this
> Cobol way too).
>
> 2. How many programs out there are written like that? If
> the language doesn't make it easy or obvious to write
> programs this way, even if the technique may be superior,
> it won't help in practice.
This is pretty much the whole point of inner classes in Java.
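A minimal sketch of why inner classes fit here: an inner instance carries only an implicit pointer to its outer object, so it can act as a view over the outer object's data without copying anything. IntBag and Evens are invented names for illustration.

```java
// Sketch: an inner class as a zero-copy view of the outer object's state,
// in the same spirit as keySet()/entrySet() on HashMap.
class IntBag {
    private int[] data;
    private int size;

    IntBag(int capacity) { data = new int[capacity]; }
    void add(int v) { data[size++] = v; }

    // The inner class holds no data of its own beyond the implicit
    // pointer to its enclosing IntBag; it reads the shared array directly.
    class Evens {
        int count() {
            int n = 0;
            for (int i = 0; i < size; i++) {
                if (data[i] % 2 == 0) n++;
            }
            return n;
        }
    }
    Evens evens() { return new Evens(); }
}
```

To the caller, `bag.evens()` looks like an ordinary small object, even though internally it is just a window onto the bag's array.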
> In fact, I would probably rather deal with program that
> has networks of small objects than program of the same
> complexity written this way.
These are small objects. They are actually smaller than objects that contain their own data, because they only contain a few pointers.
I think it's probably more that the code is foreign than that it is complex. The technique allows a large monolithic structure with strong internal binding to appear to be a network of related objects. For example, instead of passing an array, index, and length to a method and hoping that the method handles it properly, I can hand over something that is equivalent in performance but requires no understanding of the structures. This decreases complexity at the level that matters.
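The array/index/length point can be made concrete. IntSlice below is a hypothetical view class: callers get bounds-checked access that is close to raw-array performance, without needing to understand (or being able to corrupt) the underlying layout.

```java
// Sketch: instead of passing (array, offset, length) and hoping every
// caller respects the bounds, pass a small view object that encapsulates
// them. The view holds only a pointer and two ints.
class IntSlice {
    private final int[] array;
    private final int offset;
    private final int length;

    IntSlice(int[] array, int offset, int length) {
        if (offset < 0 || length < 0 || offset + length > array.length) {
            throw new IllegalArgumentException("bad slice bounds");
        }
        this.array = array;
        this.offset = offset;
        this.length = length;
    }

    int get(int i) {
        if (i < 0 || i >= length) throw new IndexOutOfBoundsException();
        return array[offset + i]; // no copy: reads the shared array
    }

    int length() { return length; }
}
```

A method that takes an IntSlice cannot walk off the ends of the region, and it never learns (or cares) that the data lives inside a larger shared array.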
And really the point here is that you can start off with the kind of standard solution you describe and then, if performance becomes an issue, move to a more sophisticated one. But often it's easier to do things this way. It just takes a little getting used to.
> I don't want to argue much. I simply believe it can be
> done better, essentially, that compiler can select good
> implementation for A (or even several different
> implementations) without much fuss on the side of
> programmer. If you don't believe that, fine, at least you
> won't compete with me trying to figure that out. :)
I agree that things could be done better, and I think you've got some really interesting and maybe even new ideas on this. I don't want to discourage you at all. My point is that you are ignoring that there is already at least one paradigm that provides a possible path toward what you have imagined. New achievements are almost always built upon the work of others.
In a nutshell, you are talking about changing the internal data structures of a program without changing the program. OO has already solved some, but not all, of this problem. For example, when I write code to work with a List, the list provided can be array-backed or a linked list or something else entirely. My code will still do the same thing. If I understand correctly, this is a core piece of your idea.
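For instance, code written against the List interface never learns which data structure backs it, so the implementation can be swapped without touching the algorithm (JoinDemo is an invented name for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class JoinDemo {
    // This method is compiled against the List interface only; whether
    // the argument is array-backed or a linked list is invisible here.
    static String join(List<String> words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            sb.append(w).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("data", "structure"));
        List<String> b = new LinkedList<>(List.of("data", "structure"));
        // Same result from either implementation.
        System.out.println(join(a).equals(join(b)));
    }
}
```

The choice of backing structure becomes a one-line change at the construction site, which is a piece of the "compiler picks the implementation" idea done by hand.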
> One good example is that when a StringBuffer (and I
> imagine StringBuilder) is converted to a String, the
> newly created String will actually use the underlying
> array of the StringBuffer.
That used to be true for StringBuffer, but I don't think it is any more. There was a long series of performance 'bugs' where each attempted fix caused a problem for someone else. StringBuilder's lack of synchronization meant it could never use the buffer sharing trick (it would have been a security loophole).
> > One good example is that when a StringBuffer (and I
> > imagine StringBuilder) is converted to a String, the
> > newly created String will actually use the underlying
> > array of the StringBuffer.
>
> That used to be true for StringBuffer, but I don't think
> it is any more. There was a long series of performance
> 'bugs' where each attempted fix caused a problem for
> someone else. StringBuilder's lack of synchronization
> meant it could never use the buffer sharing trick (it
> would have been a security loophole).
You're right. I'm kind of curious as to what the issue was. It seems like a sound idea on the surface (as long as the structure is synchronized).
What would be really great is if escape analysis could be expanded to eliminate copying in situations where the original copy is never used after the copy. I'm not sure it's feasible, but who knows. It would also seem that a good bit of escape analysis could be done statically, but I've not seen anything suggesting that this is being done.
> You're right. I'm kind of curious as to what the issue
> was. It seems like a sound idea on the surface (as long

Interaction with the sizing policy for the StringBuffer. So a 12 character String might be sharing a 4000 character array from the buffer. On the other hand aggressively reducing the buffer size caused some applications to run more slowly. So large buffers were expected by some applications, but this caused other applications to use excessive memory. Abandoning the sharing caused many applications to run a bit slower, but also eliminated the pathological bad cases (which were very bad).
> > You're right. I'm kind of curious as to what the issue
> > was. It seems like a sound idea on the surface (as long
>
> Interaction with the sizing policy for the StringBuffer.
> So a 12 character String might be sharing a 4000 character
> array from the buffer. On the other hand aggressively
> reducing the buffer size caused some applications to run
> more slowly. So large buffers were expected by some
> applications, but this caused other applications to use
> excessive memory. Abandoning the sharing caused many
> applications to run a bit slower, but also eliminated the
> pathological bad cases (which were very bad).
Maybe it's not a good idea at the scope of the JDK, but couldn't that be solved by switching on a system property? Then those apps that had the 'pathological' cases could turn off the sharing without affecting everyone else.
> > Java and C# are static languages.
>
> Yes, they're statically typed, but they must preserve the
> class structure, to be able to handle overloaded classes
> and for reflection. It means that they cannot optimize
> classes away like C++ can do (if virtual methods are not
> used, that is). Java and C# are not alike in this respect.
In Java, all methods are indeed virtual. The VM may inline the known method implementations anyway, but it must still check for new method implementations (from a derived class loaded via dynamic class loading). This is what HotSpot optimization does.
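A small illustration of default-virtual dispatch in Java (the class names are invented): the call site is compiled against the static type, but the override for the runtime type is what runs, which is exactly why the VM must be prepared to deoptimize an inlined call if a new subclass is loaded later.

```java
class Shape {
    double area() { return 0.0; } // virtual by default: no keyword needed
}

class Square extends Shape {
    final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class Dispatch {
    public static void main(String[] args) {
        Shape s = new Square(3.0);    // static type Shape, runtime type Square
        System.out.println(s.area()); // dispatches to Square.area, not Shape.area
    }
}
```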
In C#, methods are non-virtual unless the virtual keyword is used. Given that the CIL is always JITted, these methods are usually inlined.
In addition, C# has an extra construct designed for performance: value types. These are stack-allocated, implicitly sealed types, which guarantees that they can always be inlined where used as a field; a value-type field is stored inline rather than behind a pointer.
I think all the algorithm stuff missed the point and just slips back into the nerdy safety zone of premature optimization. What I mean is that it seems like a lot of slow code is slow not because of algorithm choice, but simply because it is bad code.
The example of a slow app that you started out with was an installer. It may be that the installer is so slow because it is doing things like checking whether a file is already there; if it is there, whether it has some reference count in the registry (assuming Windows) from a previous install; then comparing the version of the already installed file to the version in the compressed archive. If it is doing all these things in such a way that it decompresses the file more than once, or accesses the file on disk many times (to get its size, then its last modified date, version, etc.), its performance will suck. Ironically, it can actually be a case where code that is well-decoupled and nicely object-oriented performs worse than code that special-cases to gather all the registry information for all files in one shot, then all the information from the archive in another.
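The batching point can be shown abstractly (this is a toy sketch, not installer code; the names are invented): asking several separate questions per file multiplies the expensive lookups, while one gathering pass answers the same questions from a cache.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchDemo {
    static int lookups = 0;

    // Stand-in for an expensive per-file operation
    // (a registry read, a stat call, decompressing an archive entry, ...).
    static int expensiveSizeOf(String file) {
        lookups++;
        return file.length(); // pretend this is the file size
    }

    public static void main(String[] args) {
        List<String> files = List.of("a.dll", "bb.dll", "ccc.dll");

        // Naive: each question triggers its own expensive lookup,
        // so 3 questions x 3 files = 9 lookups.
        for (String f : files) {
            int size = expensiveSizeOf(f);           // question 1
            boolean tooBig = expensiveSizeOf(f) > 5; // question 2
            boolean empty = expensiveSizeOf(f) == 0; // question 3
        }
        System.out.println(lookups);

        // Batched: one gathering pass fills a cache; the same
        // questions are then answered for free -> 3 lookups total.
        lookups = 0;
        Map<String, Integer> sizes = new HashMap<>();
        for (String f : files) sizes.put(f, expensiveSizeOf(f));
        for (String f : files) {
            int size = sizes.get(f);
            boolean tooBig = sizes.get(f) > 5;
        }
        System.out.println(lookups);
    }
}
```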
As far as the "blame the library" comments, I bet most of the time you shouldn't blame the library. Instead, you should blame your own code that is re-sorting the same array a thousand times.
Yes, you are right. I can say more: bad programmers, bad design, and virtual machines lead to bad and slow code. By the way, I have more than 7 years of C/C++ experience, and it seems I have never used any kind of sorting algorithm in real projects.
What do you think about XML-RPC over HTTP? We forget that a PC is not a text processor: it is a binary device, and it handles binary protocols much more efficiently. Who is interested in reading HTTP headers? Nobody. So why are they plain text? This example is not just one case; it is a very common practice. I have seen many times that a different design approach can increase execution speed up to 10 times.
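One way to see the cost difference (a toy illustration, not an argument about HTTP specifically): the same 32-bit integer takes 10 bytes as decimal text and must be parsed back, but a fixed 4 bytes in a binary encoding that can be read directly.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class WireDemo {
    public static void main(String[] args) {
        int value = 1234567890;

        // Text encoding: one byte per decimal digit, variable length.
        byte[] asText = Integer.toString(value)
                               .getBytes(StandardCharsets.US_ASCII);
        // Binary encoding: a fixed 4-byte big-endian int.
        byte[] asBinary = ByteBuffer.allocate(4).putInt(value).array();

        System.out.println(asText.length);   // 10 bytes of text
        System.out.println(asBinary.length); // 4 bytes of binary

        // Decoding: parse the digits vs. read the int straight out.
        int fromText = Integer.parseInt(
                new String(asText, StandardCharsets.US_ASCII));
        int fromBinary = ByteBuffer.wrap(asBinary).getInt();
        System.out.println(fromText == value && fromBinary == value);
    }
}
```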
And about Java, C#, and other languages based on virtual machines: most Java programs start so slowly that you can drink a cup of coffee while waiting. So Moore's Law is not a law anymore; software won the battle. What did they achieve? Can bad developers write good programs in Java? No. I think the main complexity of modern programming lies in requirements and design.
In my opinion, C++ lacks a good cross-platform GUI library. If somebody invested in that one tenth of the money spent on Java or .NET, I would not see any competitors to C++.
P.S. It seems sooner or later I will be a Java or C# programmer, because that is 90% of all jobs. :(
Nice and interesting article, however I don't really agree with the following statement:
"For example, the search algorithms in the C++ Standard are O(log(N)) when used with random-access iterators, but O(N) when used with input iterators."
What I know about the C++ search algorithms is rather different:
A) std::search(): original complexity O(N*M). Implementations with better complexity have been presented, but not yet adopted by the most notable STL implementations. [7,8,9]
B) std::search_n(): original complexity O(N*m) to O(N). A search_n specialization for random access iterators was presented in 2004 and was soon adopted by some STL implementors. [7,8] Another STL implementor also adopted a similar search_n specialization a little later.
C) std::find_first_of(): original complexity O(N*M). A find_first_of specialization for one-byte integers was presented in 2007 and was soon adopted by one STL implementor.