Re: Where Did All the Beautiful Code Go?
Posted: Apr 4, 2006 3:46 AM
Well, if the tool magically "understands" the system, it can ignore paths that are not relevant.
Since the tool will try to compile every path, it can also estimate how long each path will take to execute. It does not have to solve the halting problem; all it has to do is find a minimum and a maximum value, which could then be compared to predefined limits.
Yes, but the effect this will have on real-time software may depend on what code is executing at the time of the interrupt. How does the tool handle this?
Since the system already knows how much time it takes for interrupt code to execute, I see no problem in checking whether execution of a specific piece of code can exceed a specific threshold.
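To make the min/max idea concrete, here is a rough Haskell sketch (the data type and names are invented for illustration; a real tool would work on the compiled control-flow graph and would need loop bounds to be declared):

-- A toy control-flow structure with known per-block cycle costs.
data Block = Instr Int            -- straight-line code with a known cycle cost
           | Seq Block Block      -- one block followed by another
           | Branch Block Block   -- if/else: either arm may execute
           | Loop Int Block       -- a loop with a fixed iteration count

-- (min cycles, max cycles) for a block; no halting problem involved.
bounds :: Block -> (Int, Int)
bounds (Instr c)    = (c, c)
bounds (Seq a b)    = let (la, ha) = bounds a
                          (lb, hb) = bounds b
                      in (la + lb, ha + hb)
bounds (Branch a b) = let (la, ha) = bounds a
                          (lb, hb) = bounds b
                      in (min la lb, max ha hb)
bounds (Loop n b)   = let (lo, hi) = bounds b
                      in (n * lo, n * hi)

-- Compare the worst case against a predefined limit.
withinLimit :: Int -> Block -> Bool
withinLimit limit b = snd (bounds b) <= limit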
Even in successful real-time systems we don't typically "know everything about the system". In any case, what we know can't always be communicated to a program in a standard, bounded way.
That is because we have never thought of putting all the pieces of information together.
Much of the code? As long as you don't specify what an encoded "requirement declaration" is, your definition remains content-free. It is essential that your "requirement declarations" do not constitute a Turing-complete language, because otherwise every algorithm is nothing but a "requirement declaration" and we do not advance.
A requirement declaration is, for example, that a certain program performs certain actions X, Y and Z. A compiler could check that the actions indeed take place.
A requirement declaration is that certain actions take place in a certain order. A compiler could check that the order is not violated.
A requirement declaration is that a value is within certain limits. A compiler could check that the value's limits are not violated.
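As a rough illustration of what compiler-checkable requirements could look like today, here is a minimal Haskell sketch (all names are invented for the example): a value-limit requirement enforced by a smart constructor, and an ordering requirement enforced by phantom types.

-- Limit requirement: a Temperature must stay within [-40, 125].
-- The smart constructor is the only way to build one, so the
-- limit cannot be violated anywhere in the program.
newtype Temperature = Temperature Int

mkTemperature :: Int -> Maybe Temperature
mkTemperature t
  | t >= -40 && t <= 125 = Just (Temperature t)
  | otherwise            = Nothing

-- Ordering requirement: 'open' before 'use' before 'close'.
-- The phantom state parameter encodes the order; code that
-- calls these out of order simply does not type-check.
data Opened
data Closed

newtype Handle s = Handle FilePath

open :: FilePath -> IO (Handle Opened)
open path = return (Handle path)            -- stub body for the sketch

use :: Handle Opened -> IO ()
use _ = return ()                           -- stub body for the sketch

close :: Handle Opened -> IO (Handle Closed)
close (Handle path) = return (Handle path)  -- stub body for the sketch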
Besides this, no line of code can guarantee that a programmer encodes requirements correctly, only specifications. Functional tests, implemented by an independent tester, will therefore still be necessary. But this surely has nothing to do with unit tests.
If the specifications are implemented correctly, then the requirements are implemented correctly. Functional testing is then needed to see whether the requirements themselves were formulated correctly.
In my professional life, I have worked with requirements documents (produced by a military contractor). Much of the text in those requirements, if not all of it, could be formalized in a language checkable by a tool against an implementation. The bugs we had were due to: a) conflicts between requirements, b) incomplete requirements, with some parts open to interpretation, c) implicit requirements, and d) a programming language (C++) in which there was no way to enforce the proper encoding of requirements, even when it was directly visible from the document that it could be done.
I see what your idea is, but I think you would have lots of problems trying to implement such a requirements-verification system. Suppose, for example, that you're writing a speech-recognition system. Now if you want to analyze the signal, you would need a serious definition language, considering all the details you would need to specify. You could use a domain-specific language, but then you would end up using lots of such languages, some of which might not be on a much higher level than, say, C. I just think that your idea would be applicable to a very narrow problem domain.
I have no idea how speech recognition works, but if I hypothesise about how it works and assume we know how to do it, it would involve steps like getting the signal, analysing it, comparing it with other information, extracting the text from it, etc. So as long as these actions are required of a speech-recognition program, there can be a verification language that "forces" the program to perform those actions.
If the end result is not speech recognition, though, that is not an error of the system, but an error in the definition of speech recognition: our solution is wrong.
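A purely hypothetical sketch of what I mean, in Haskell (all stage names and types are invented, and the bodies are stubs): giving each stage its own type means the compiler rejects any program that skips or reorders a required step.

newtype Signal   = Signal [Double]
newtype Features = Features [Double]
newtype Text     = Text String

getSignal :: IO Signal
getSignal = return (Signal [])       -- stub

analyse :: Signal -> Features
analyse (Signal xs) = Features xs    -- stub

extractText :: Features -> Text
extractText _ = Text ""              -- stub

-- The only way to produce a Text is to perform every required step.
recognise :: IO Text
recognise = do
  s <- getSignal
  return (extractText (analyse s))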
Also, why have a tool that verifies against a requirements definition instead of generating the code from such definitions?
If there were such a system, much of the written requirements would be the code. For example, all of a program's types are requirements.
But a requirements-verification system would also allow programmers to specify programs at a higher level, i.e. without implementation details (without the 'how'). Then a third party could come and fill in the details, and the system would prove that the implementation is correct.
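A minimal sketch of the idea in Haskell, with the specification written as ordinary runnable predicates rather than input to a prover (QuickCheck-style property testing does exactly this today; a proof system would discharge the same properties statically):

import Data.List (sort)

-- The 'what': a sort must return an ordered permutation of its input.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

isPermutationOf :: Ord a => [a] -> [a] -> Bool
isPermutationOf xs ys = sort xs == sort ys   -- uses a trusted sort as oracle

-- Any third-party implementation can be checked against the specification.
meetsSpec :: Ord a => ([a] -> [a]) -> [a] -> Bool
meetsSpec impl xs = isSorted (impl xs) && impl xs `isPermutationOf` xs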
Which is again a programming task: declarative programming in a specification style. From time to time I take a look at so-called "specification languages", but I notice that I wouldn't like to use them, because I find it hard to analyze and debug code written in them.
It is definitely a programming task. But you know why programs have bugs? Because this programming task is carried out only in people's minds: as long as the problem is simple enough, people have no trouble managing the information. But when the problem becomes complex, people can no longer handle it, and therefore bugs arise.
I know it is not as much fun as just opening the IDE and starting to code. Every time I have done that, my program has ended in failure. When I have tried to formulate my problem in a formal language, the inconsistencies became immediately visible, and then I had the hard task of finding solutions for those inconsistencies. It is indeed a hard task, but a necessary step, in my opinion, for better programs.
But I want modularity, locality and a reduction of dependencies, not a global system of facts that offers conclusions I don't understand.
Modularity, locality and low coupling are a function of code organization, which is a matter of separation of concerns. A verification system is irrelevant to that; its usefulness is in minimizing bugs and testing. It cannot organize the code for you.
I still believe that OO was a right step in the direction of supporting architectural modeling.
Having seen how much effort it takes to make an OO program "steady" (i.e. to approach, let's say, 90% correctness), I do not think OO has contributed anything to solving the issue of correctness, which is fundamentally a problem of mathematics. In my opinion OO actually makes programming more difficult than it ought to be, because it enforces cast-in-stone classifications that do not reflect real life; at least the current mainstream OO programming languages do.
That's where functional PLs like ML and Haskell shine. Together with algebraic data types and pattern matching, which provide additional flexibility, they reduce the overhead of type declarations.
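To illustrate the claim above with a standard textbook example (not anything from this thread): an expression type and its evaluator take a few lines of Haskell, and the compiler can warn when a pattern match misses a case.

-- An algebraic data type for arithmetic expressions.
data Expr = Num Int
          | Add Expr Expr
          | Mul Expr Expr

-- Pattern matching covers each constructor; a missing case
-- triggers an exhaustiveness warning at compile time.
eval :: Expr -> Int
eval (Num n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b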
I like Haskell and ML, and I think they have many good things in their favor; I definitely think that Java/C#/etc. are a dead end. But I am not entirely persuaded that getting rid of destructive updates is the way forward. Referential transparency seems good, but it imposes unnecessary complexity, especially when a program has a complex object model. I think there can be a calculus for Turing machines; there is already theoretical work (the pi-calculus, for example) that includes notions of storage, sequences of commands, threads, etc.
I have an application with a complex object model, with a deep hierarchy and many rules. With C++, Java or C#, it is easy to use the model-view-controller pattern, even if the object tree is very complex. But how do I translate this model to Haskell, using monads? I understand monads for simple things, but my brain cannot handle monadic transformations of a complex object model; I just cannot write such a monad. Which leaves me at a dead end: either spend all my time writing that monad (or monads) in a functional language (impossible for now), or spend my time chasing null pointers and out-of-bounds indices in Java, C# or C++ (and many other things in C++). There is also the problem of performance: for some tasks, functional languages leave a lot to be desired.
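For what it's worth, here is a minimal sketch of what a State-monad version might look like, using Control.Monad.State from the mtl library (the model types are invented for the example, and I make no claim that this scales to a real object tree):

import Control.Monad.State

-- Hypothetical nested model, standing in for a complex object tree.
data Address = Address { street :: String, city :: String }
data Person  = Person  { name :: String, address :: Address }
data Model   = Model   { owner :: Person, visits :: Int }

type App = State Model

-- Updating a nested field: the record-update syntax shows exactly
-- the nesting pain being described here.
moveOwner :: String -> App ()
moveOwner newCity = modify $ \m ->
  m { owner = (owner m) { address = (address (owner m)) { city = newCity } } }

countVisit :: App ()
countVisit = modify $ \m -> m { visits = visits m + 1 }

-- Run a sequence of updates against an initial model.
demo :: Model
demo = execState (moveOwner "Athens" >> countVisit) initial
  where initial = Model (Person "P" (Address "S" "C")) 0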