The Artima Developer Community
Weblogs Forum
Where Did All the Beautiful Code Go?

97 replies on 7 pages. Most recent reply: Apr 14, 2006 9:55 PM by Andy Dent

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Where Did All the Beautiful Code Go? Posted: Apr 2, 2006 6:11 PM
> There is no need for an algorithm like that. For example,
> if a specific piece of code should run within a specific
> amount of time, then the compiler can easily count how
> many clock ticks a bunch of instructions takes to execute,
> and thus fail the code if the requirements are not met.

It appears you have little experience in real-time systems.

First of all, it's not enough to know the number of clock ticks in a particular part of the code; you have to know the timing of the entire system.

Second, you have to check for real-time compliance in all paths the code can take.

Third, examining only the upper bound of clock ticks isn't sufficient, because in real-time systems being early is no better than being late.

Fourth, on many systems with caches, memory cycle stealing, and pipelined execution, there is no consistent number of clock ticks a particular section of code will take.

Fifth, there may be async hardware events that make the real-time behavior more complex.

For these reasons, I claim that embedding real-time requirements in a compiler is not practical.

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Where Did All the Beautiful Code Go? Posted: Apr 2, 2006 6:17 PM
> Buggy software needs more testing. :)
> Each test case is representative of set of test cases.
> When some test cases pass, we assume code operating
> correctly for extrapolated set of configurations. If code
> is consistent, its operation extrapolates more regularly.

Perhaps you've had a good experience using this extrapolation approach to testing. The clients I had when doing testing as a consultant would never accept such an approach.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 3, 2006 3:14 AM
> First of all, it's not enough to know the number of clock
> ticks in a particular part of the code; you have to know
> the timing of the entire system.

I only used clock ticks as an example. It is possible to know the timing of the entire system, since for hard real-time systems it is known beforehand which processes are going to run.

But the issue of correctness is not only about hard real time systems.

>
> Second, you have to check for real-time compliance in
> all paths the code can take.

You do not actually have to check each path of execution. You only care about the possibility of violating the constraints; therefore all you have to do is check that the set of possible values is a subset of the allowed values. That is useful not only for checking timing constraints, but also the logical correctness of values.
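
To make the "subset of the allowed values" idea concrete, here is a minimal sketch in C++ (the names Range and fits_within are invented for illustration, and integer intervals are only the simplest possible kind of value set, so this is nowhere near the checker being discussed):

#include <cstdio>

// A conservative over-approximation of the integer values a piece of code
// can produce or accept (a deliberately minimal abstraction).
struct Range { int lo; int hi; };

// The check described above: every value the producer can emit must also be
// a value the consumer allows.
bool fits_within(Range produced, Range allowed) {
    return produced.lo >= allowed.lo && produced.hi <= allowed.hi;
}

int main() {
    Range produced{0, 1023};   // e.g. an index derived from a 1024-byte buffer
    Range allowed{0, 255};     // e.g. what the callee's 256-entry table accepts
    std::puts(fits_within(produced, allowed) ? "ok" : "possible constraint violation");
    return 0;
}

A real tool would have to derive such sets from the code itself and handle far richer value domains, but the check at the boundary between producer and consumer is exactly this subset test.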

> Third, examining only the upper bound of clock ticks isn't
> sufficient, because in real-time systems being early is no
> better than being late.

> Fourth, on many systems with caches, memory cycle
> stealing, and pipelined execution, there is no consistent
> number of clock ticks a particular section of code will
> take.

Again, you have to make sure it is not possible for certain things to happen. If execution timings vary, all you have to do is make sure the thresholds cannot be violated.

> Fifth, there may be async hardware events that make the
> real-time behavior more complex.

Hardware events/interrupts would have to be taken into account. For hard real-time systems, the time it takes to service an interrupt is known.

>
> For these reasons, I claim that embedding real-time
> requirements in a compiler is not practical.

I see no reason why a compiler should not be used to check constraints, even if they are real-time constraints. Since in a real-time system we know everything about the system, it is quite feasible to do so.

Actually, in all computer systems we have full knowledge of all the possible constraints. The problems we have with computers arise because we have not developed tools that can process those constraints; instead, we do the job those tools are supposed to do.

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Where Did All the Beautiful Code Go? Posted: Apr 3, 2006 8:07 AM
> You do not actually have to check each path of execution.

Well, if the tool magically "understands" the system, it can ignore paths that are not relevant.


>
> Again, you have to make sure it is not possible for
> certain things to happen. If execution timings vary, all
> you have to do is make sure the thresholds cannot be
> violated.

The issue is not what the real-time design does about it; it's how this hypothetical tool of yours handles it.

>
> Hardware events/interrupts would have to be taken into
> account. For hard real-time systems, the time it takes to
> service an interrupt is known.
>

Yes, but the effect this will have on real-time software may depend on what code is executing at the time of the interrupt. How does the tool handle this?

> I see no reason why a compiler should not be used to check
> constraints, even if they are real-time constraints. Since
> in a real-time system we know everything about the system,
> it is quite feasible to do so.

Even in successful real-time systems we don't typically "know everything about the system". In any case, what we know can't always be communicated to a program in a standard, bounded way.

Remember, a tool only has value if it makes the job easier.

Vladimir Nesov

Posts: 27
Nickname: robotact
Registered: Aug, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 3, 2006 9:23 AM
> Perhaps you've had a good experience using this
> extrapolation approach to testing. The clients I had when
> doing testing as a consultant would never accept such an
> approach.
You can't test all possible input data sets; you can only test some sample values of the input data set, and for small objects this input data set is luckily small. Otherwise it's static analysis of the specification, which is what the ongoing discussion is about.

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 3, 2006 9:24 AM
> I don't understand how a PL can check that a programmer
> implemented requirements correctly. I guess it would not
> even help when the compiler scans the programmers' and
> customers' brains?

>
> A PL shall allow the programmer to declare the
> requirements, either as separate entities from the
> implementation, or as contracts. Much of the code in a
> program is requirements anyway. Then the compiler shall
> check that all of the requirements are implemented
> correctly.

Much of the code? As long as you don't specify what an encoded "requirement declaration" is, your definition remains content-free. It is essential that your "requirement declarations" do not constitute a Turing-complete language, because otherwise each algorithm is nothing but a "requirement declaration" and we do not advance. Besides, no amount of code can guarantee that a programmer has encoded the requirements correctly, only the specifications. That still makes functional tests necessary, implemented by an independent tester. But this surely has nothing to do with unit tests.

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 3, 2006 12:16 PM
> Also, how would you make sure the requirements definition
> is correct?
>
> You can not make sure the requirements are 100% correct,
> just like you can not make sure unit tests are 100%
> correct. But since requirements specify the 'what' of an
> application (the implementation specifies the 'how'), and
> requirements are declarative, it is much easier to see
> what is going on at the level of requirements rather than
> at implementation level. Furthermore, conflicting
> requirements will not compile.

I see what your idea is, but I think you would have lots of problems trying to implement such a requirements-verification system. Suppose, for example, that you're writing a speech-recognition system. If you want to analyze the signal, you would need a serious definition language, considering all the details you would need to specify. You could use a domain-specific language, but then you would end up using lots of such languages, some of which might not be on a much higher level than, say, C. I just think that your idea would be applicable to a very narrow problem domain. Also, why have a tool that verifies against a requirements definition instead of generating the code based on such definitions?

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 4, 2006 1:00 AM
> Also, why have a tool that
> verifies against a requirements definition instead of
> generating the code based on such definitions?

Which is again a programming task: declarative programming in a specification style. From time to time I take a look at so-called "specification languages", but I notice that I wouldn't like to use them because I find it hard to analyze and debug their own language code. Finding a solution to a problem using a magic master algorithm in the background is quite seductive, and in a sense this is what each high-level language interpreter actually does, but I want modularity, locality and reduction of dependencies, not a global system of facts that offers conclusions I don't understand. I still believe that OO was a right step in the direction of supporting architectural modeling.

By saying this I don't intend to dismiss Achilleas' goal of finding better languages that simplify writing more expressive type contracts than Java or C#, to put it less provocatively but more modestly. That's where functional PLs like ML and Haskell shine. Together with algebraic data types and pattern matching, which provide additional flexibility, they reduce the overhead of type declarations. New languages like Scala are trying to master the broad range of requirements on language design in a Java-like syntax and in a Java environment right now. I think it's time to take a closer look.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 4, 2006 3:46 AM

> Well, if the tool magically "understands" the system, it can ignore paths that are not relevant.


Since the tool will try to compile every path, it can also count how long it will take to execute. It does not have to solve the halting problem; all it has to do is find a min and max value, which could then be compared to predefined limits.
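
As a rough illustration of that min/max bookkeeping, here is a sketch that assumes every straight-line block has a fixed, known cycle cost (exactly the assumption disputed above for cached, pipelined hardware), so it illustrates the arithmetic only, not a workable worst-case-execution-time analysis:

#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical cost model: a straight-line block contributes a fixed cost,
// a sequence adds costs, and a branch spans the range of its alternatives.
struct Cost { long min_cycles; long max_cycles; };

Cost sequence(const std::vector<Cost> &parts) {
    Cost total{0, 0};
    for (const Cost &c : parts) {
        total.min_cycles += c.min_cycles;
        total.max_cycles += c.max_cycles;
    }
    return total;
}

Cost branch(const Cost &a, const Cost &b) {
    return {std::min(a.min_cycles, b.min_cycles),
            std::max(a.max_cycles, b.max_cycles)};
}

int main() {
    // A routine with 120 cycles of setup followed by either a 40- or a 300-cycle path.
    Cost f = sequence({{120, 120}, branch({40, 40}, {300, 300})});
    const long deadline = 400;
    std::printf("min=%ld max=%ld -> %s\n", f.min_cycles, f.max_cycles,
                f.max_cycles <= deadline ? "within deadline" : "deadline may be missed");
    return 0;
}

Running it reports min=160 max=420, so against a 400-cycle deadline the check would flag the long branch.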


> Yes, but the effect this will have on real-time software may depend on what code is executing at the time of the interrupt. How does the tool handle this?


Since the system already knows how much time it takes for the interrupt code to be executed, I see no problem in checking whether execution of a specific piece of code can go over a specific threshold.


> Even in successful real-time systems we don't typically "know everything about the system". In any case, what we know can't always be communicated to a program in a standard, bounded way.


That is because we never thought of putting all the pieces of information together.


> Much of the code? As long as you don't specify what an encoded "requirement declaration" is, your definition remains content-free. It is essential that your "requirement declarations" do not constitute a Turing-complete language, because otherwise each algorithm is nothing but a "requirement declaration" and we do not advance.


A requirement declaration is, for example, that a certain program performs certain actions X, Y and Z. A compiler could check that the actions indeed take place.

A requirement declaration is that certain actions take place in a certain order. A compiler could check that the order is not violated.

A requirement declaration is that a value is within certain limits. A compiler could check that a value's limits are not violated.
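
An ordering requirement like the second one above can, to a limited degree, already be rejected at compile time by encoding the order in ordinary types. The sketch below is plain C++ with invented class names, so it is only a crude stand-in for what a dedicated requirements language might check:

#include <cstdio>
#include <string>

// A handle that can only be obtained by opening, so "write before open"
// or "close before open" simply does not compile.
class OpenLog {
public:
    void write(const std::string &line) { std::puts(line.c_str()); }
    void close() { std::puts("closed"); }
};

class Log {
public:
    OpenLog open(const std::string &path) {
        std::printf("opening %s\n", path.c_str());
        return OpenLog{};
    }
};

int main() {
    Log log;
    // log.write("x");                    // rejected: Log has no write()
    OpenLog handle = log.open("log.txt");
    handle.write("hello");
    handle.close();
    return 0;
}

It does not stop a double close or a write after close; expressing those would need the kind of compiler-checked contracts being discussed here.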


> Besides, no amount of code can guarantee that a programmer has encoded the requirements correctly, only the specifications. That still makes functional tests necessary, implemented by an independent tester. But this surely has nothing to do with unit tests.


If specifications are implemented correctly, then the requirements are implemented correctly. Functional testing is then needed to see if requirements are formulated correctly.

In my professional life, I have worked with requirements documents (produced by a military contractor). Much of the text specified in the requirements, if not all of it, could be formalized in a language that is checkable by a tool against an implementation. The bugs we had were due to: a) conflicts between requirements, b) incomplete requirements with some parts open to interpretation, c) implicit requirements, and d) a programming language (C++) in which there was no way to enforce proper encoding of requirements, even when it was directly visible from the document that it could be done.


> I see what your idea is, but I think you would have lots of problems trying to implement such a requirements-verification system. Suppose, for example, that you're writing a speech-recognition system. If you want to analyze the signal, you would need a serious definition language, considering all the details you would need to specify. You could use a domain-specific language, but then you would end up using lots of such languages, some of which might not be on a much higher level than, say, C. I just think that your idea would be applicable to a very narrow problem domain.


I have no idea how speech recognition works, but if I hypothesise about how it works and assume we know how to do it, it would involve steps like getting the signal, analysing it, comparing it with other information, extracting the text from it, etc. So as long as these actions are required by a speech-recognition program, there can be a verification language that "forces" the program to perform those actions.

If the end result is not speech recognition, though, that is not an error of the verification system but an error in the definition of speech recognition: our solution is wrong.


> Also, why have a tool that verifies against a requirements definition instead of generating the code based on such definitions?


If there were such a system, much of the requirements written would be the code. For example, all of a program's types are requirements.

But a requirements-verification system would also allow programmers to specify programs at a higher level, i.e. without implementation details (without the 'how'). Then a third party could come and fill in the details, and the system would prove that the implementation was correct.


> Which is again a programming task: declarative programming in a specification style. From time to time I take a look at so-called "specification languages", but I notice that I wouldn't like to use them because I find it hard to analyze and debug their own language code.


It is definitely a programming task. But do you know why programs have bugs? Because this programming task is only carried out in people's minds: as long as it is simple enough, people do not have a problem managing the information. But when the problem becomes complex, people can no longer handle it, and therefore bugs arise.

I know it is not as much fun as just opening the IDE and starting to code. Every time I have done that, my program has ended in failure. When I have tried to formulate my problem in a formal language, the inconsistencies were immediately visible, and then I had the hard task of finding solutions for those inconsistencies. It is indeed a hard task, but a necessary step, in my opinion, for better programs.


> but I want modularity, locality and reduction of dependencies, not a global system of facts that offers conclusions I don't understand


Modularity, locality and low coupling are a function of code organization, which is a matter of separation of concerns. A verification system is orthogonal to that; its usefulness is in minimizing bugs and testing. It cannot organize the code for you.


> I still believe that OO was a right step in the direction of supporting architectural modeling.


Having seen how much effort it takes to make an OO program "steady" (i.e. to approach, let's say, 90% correctness), I do not think OO has contributed anything to solving the issue of correctness, which is fundamentally a problem of mathematics. In my opinion OO actually makes programming more difficult than it ought to be, because it enforces "cast-in-stone" classifications that do not reflect real life; at least the current mainstream OO programming languages do.


> That's where functional PLs like ML and Haskell shine. Together with algebraic data types and pattern matching, which provide additional flexibility, they reduce the overhead of type declarations.


I like Haskell and ML, and I think they have many good things in their favor; I definitely think that Java/C#/etc. are a dead end. But I am not entirely persuaded that getting rid of destructive updates is the way forward. Referential transparency seems good, but it imposes unnecessary complexity, especially when a program has a complex object model. I think there can be a calculus for Turing machines; there is already theoretical work (the pi-calculus, for example) that includes the notions of storage, sequences of commands, threads, etc.

I have an application with a complex object model, with a deep hierarchy and many rules. With C++, Java or C#, it is easy to use the model-view-controller pattern, even if the object tree is very complex. But how do I translate this model to Haskell, using monads? I understand monads, for simple things, but my brain cannot handle monadic transformations of a complex object model. I just cannot write such a monad. Which leaves me at a dead end: either spend all my time writing that monad, or those monads, in a functional language (impossible for now), or spend my time chasing null pointers and out-of-bounds indices in Java, C# or C++ (and many other things in C++). There is also the problem of performance: for some tasks, functional languages leave a lot to be desired.

piglet

Posts: 63
Nickname: piglet
Registered: Dec, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 4, 2006 6:55 AM
"Since the tool will try to compile every path, it can also count how long will it take to execute. It does not have to solve the halting problem; all it has to do is find a min and max value, which could then be compared to predefined limits."

I beg your pardon, but this simply doesn't make any sense. The halting problem means you can't even be sure whether a given program will terminate or not; how then would you find a min and max value? Are you familiar at all with the theory concerning these questions? Why do you think proving program correctness is such a hard problem if it were only a question of "compiling every path" or "putting all the pieces of information together"? If it were so easy to do, even automatically, everybody would be doing it! Compilers can do only very limited static requirement checks. They cannot verify in advance the outcome of an algorithm!

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Where Did All the Beautiful Code Go? Posted: Apr 4, 2006 7:25 AM
Achilleas,

Yesterday you said:
"You do not actually have to check each path of execution."

Today you say:
"Since the tool will try to compile every path, it can also
count how long it will take to execute."

It's clear that you don't have a consistent approach and are just responding to the latest objections. Why don't you try to implement your ideas rather than discussing them hypothetically here?

If you're right, nothing will convince us better than a working tool.

Jeff Ratcliff

Posts: 242
Nickname: jr1
Registered: Feb, 2006

Re: Where Did All the Beautiful Code Go? Posted: Apr 4, 2006 7:35 AM
> You can't test all possible input data sets; you can only
> test some sample values of the input data set, and for small
> objects this input data set is luckily small.

Perhaps I misunderstood you. I thought you were saying that only a sample of code needed to be tested, not a sample of data. In other words, all code is tested but not with all possible values.

It is in that context that I don't see how the "cleanliness" of the code affects the testing effort. You supply inputs and measure outputs. The tests pass or they fail.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 5, 2006 5:20 AM

> I beg your pardon, but this simply doesn't make any sense. The halting problem means you can't even be sure whether a given program will terminate or not; how then would you find a min and max value? Are you familiar at all with the theory concerning these questions? Why do you think proving program correctness is such a hard problem if it were only a question of "compiling every path" or "putting all the pieces of information together"? If it were so easy to do, even automatically, everybody would be doing it! Compilers can do only very limited static requirement checks. They cannot verify in advance the outcome of an algorithm!


Although it may seem that I am not familiar with the theory, I would like to say that I am. The halting problem shows that there is no algorithm that can decide arbitrary properties of arbitrary programs. What I am proposing is not such an algorithm, but a way to catch many more bugs than previously thought possible.


> Yesterday you said:
> "You do not actually have to check each path of execution."
>
> Today you say:
> "Since the tool will try to compile every path, it can also
> count how long it will take to execute."
>
> It's clear that you don't have a consistent approach and are just responding to the latest objections. Why don't you try to implement your ideas rather than discussing them hypothetically here?
>
> If you're right, nothing will convince us better than a working tool.


What I meant was that you do not have to follow each path of execution as if the program had been executed: you do not have to care about the actual values a program will take, but about the possible values a program can have. A simple examination of the possible values that can be produced by a piece of code is more than enough to reveal inconsistencies in a program. As each block of code is compiled, the information about possible values can be extracted from each piece of code and stored along with the symbol information, and then compared with the input values required by another piece of code that uses the already compiled code.

One of the reasons there are so many bugs in programs written in modern mainstream programming languages is that these languages ignore the fact that values and operations are themselves typed. Typing goes much deeper than most programmers think. A pointer access operation, for example, is a typed operation that requires that the pointer is not null. An array access operation is another typed operation that requires that the index falls within the boundaries of the array. A for loop is an operation that requires that the iterator increases in specific steps within specific boundaries. An I/O operation requires that the I/O handle is valid, that the device is open and that there is no previous error. The operation of disabling a widget requires that the widget is enabled. The operation of submitting a form requires that certain fields have been filled with proper data. There are countless examples of operations that have a type, but programming languages silently ignore those types.
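
Two of those implicit "types" (the non-null pointer and the in-bounds index) can be made explicit today with small wrapper types. A minimal sketch with invented names; note that these checks happen at construction time, at run time, rather than being proven statically as proposed above:

#include <array>
#include <cassert>
#include <cstddef>
#include <cstdio>

// A pointer that is checked to be non-null when it is created,
// so every later dereference needs no check of its own.
template <typename T>
class NonNull {
    T *p_;
public:
    explicit NonNull(T *p) : p_(p) { assert(p_ != nullptr); }
    T &operator*() const { return *p_; }
};

// An index that is checked against the bound N when it is created.
template <std::size_t N>
class Index {
    std::size_t i_;
public:
    explicit Index(std::size_t i) : i_(i) { assert(i_ < N); }
    std::size_t value() const { return i_; }
};

int main() {
    int x = 42;
    NonNull<int> p(&x);
    std::printf("%d\n", *p);

    std::array<int, 4> a{10, 20, 30, 40};
    Index<4> i(2);
    std::printf("%d\n", a[i.value()]);
    return 0;
}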

Each piece of code has two sets of values: a) the input set of possible values (i.e. values allowed by the code), and b) the output set of possible values. If a piece of code B is executed after a piece of code A, then the output set of values of A must be a subset of the input set of values of B, otherwise the program may enter an illegal state. My approach is this: as the code compiles, the compiler extracts the above information and performs the check described above; if the check fails, the compiler reports an error.

The approach described above is not troubled by code branches: a branch simply adds one or more possible values to the set of values of the outer block. Therefore the halting problem is not an issue for this approach: we may not know whether an algorithm terminates, but we know that it cannot create an inconsistent state in our program.

Here is an example of a piece of code that most programmers happily write, yet it has cost billions of dollars:


//function that checks if a path is good.
bool isGoodPath(char pathName[256]);

//function that receives requests
void processRequest(SOCKET sock) {
    char buffer[1024];
    recv(sock, buffer, sizeof(buffer), 0);
    char path[1024];
    extractPath(buffer, path);
    if (isGoodPath(path)) doRequest(buffer);
}


The above piece of code contains a critical error: the possibility of a buffer overflow. The path may be 1024 bytes long, but the function isGoodPath accepts a path of 256 characters. Yet C/C++ compilers will happily accept the code. The inconsistency can be introduced in various ways:

1) the two functions were written by two different programmers that made different assumptions.
2) the programmer that wrote the two functions forgot what the proper path length was.
3) the requirements changed: the initial length of the path was 256, then it changed to 1024, but not all pieces of code were updated.
4) other.

With the approach proposed above, a compiler could detect the inconsistency by simply 'writing down' the possible input and output sets of the functions used and comparing those sets. Here is the same code annotated with the relevant information; it is pretty obvious that there is an error:


//input: {char[256]}
//output: {false, true}
bool isGoodPath(char pathName[256]);

//input: {socket}
void processRequest(SOCKET sock) {
    char buffer[1024];

    //input: {socket -> socket, char[1024] -> char*}
    //output: {char*}
    recv(sock, buffer, sizeof(buffer), 0);

    char path[1024];

    //input: {char[1024] -> char[1024]}
    //output: {char[1024]}
    extractPath(buffer, path);

    [b]//input: {char[1024] -> char[256]}[/b]
    if (isGoodPath(path)) doRequest(buffer);
}


The arrow above is used to show the binding between input and output. The part in bold is where the error occurs. In fact, the code above has other inconsistencies too which I have not shown.
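
For this particular mismatch, even today's C++ type system can be made to reject the call if the buffer sizes are carried in the types rather than in comments. A small sketch; it assumes the path really is a fixed-size buffer, something the original C-style signature never actually expressed, since a char pathName[256] parameter decays to a plain pointer:

#include <array>

// The length is now part of the type, not a comment.
bool isGoodPath(const std::array<char, 256> &pathName);

void processRequest(/* SOCKET */ int sock) {
    std::array<char, 1024> buffer{};
    std::array<char, 1024> path{};
    // ... receive into buffer and extract the path as before ...

    // isGoodPath(path);   // error: cannot convert array<char, 1024>
    //                     // to array<char, 256> -- the mismatch is caught.
    (void)sock;
}

int main() { processRequest(0); return 0; }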

Here is another example which happens quite often; it is about an invalid value, which happens to be something critical like a null pointer, a closed file handle etc:


#include <string>
#include <iostream>
#include <fstream>
using namespace std;

fstream file;

int main() {
    while (1) {
        cout << "input 1 to open the file, 2 to input string, 3 to close the file; press enter to exit\n";
        string s;
        getline(cin, s); // read a whole line, so a bare Enter yields an empty string
        if (s.empty()) break;
        if (s == "1") {
            file.open("out.txt", ios::out);
            if (file.is_open()) {
                cout << "file opened\n";
            }
            else {
                cout << "can not open file\n";
                break;
            }
        }
        else if (s == "2") {
            getline(cin, s);
            file << s;
            cout << "written: " << s << "\n";
        }
        else if (s == "3") {
            file.close();
            cout << "file closed\n";
        }
        else {
            cout << "wrong option\n";
        }
    }
    file.close();
    return 0;
}


The above program does a very simple thing: if the user inputs "1", then it opens a file; if the user inputs "2", then the program waits for the user to enter a string to write to the file; if the user inputs "3", then the file is closed. Finally if the user simply presses enter, the program exits.

The program above has inconsistencies:

1) it is possible to close the file without it being open.
2) it is possible to write to the file without it being open.
3) the file can be closed twice.

The reason these inconsistencies exist is that the contract file open -> file write -> file close can be violated. If we do an analysis of the inputs/outputs, we get:


int main() {

    //input: {file(opened or closed)}
    //output: {file(opened or closed)}
    while (1) {
        cout << "input 1 to open the file, 2 to input string, 3 to close the file; press enter to exit\n";

        string s;
        getline(cin, s);

        if (s.empty()) break;

        if (s == "1") {

            [b]//input: {file(opened or closed) -> file(closed)}[/b]
            //output: {file(opened or closed)}
            file.open("out.txt", ios::out);

            [b]//input: {file(opened or closed)}[/b]
            //output: {file(opened)}
            if (file.is_open()) {
                cout << "file opened\n";
            }
            else {
                cout << "can not open file\n";
                break;
            }
        }

        else if (s == "2") {
            getline(cin, s);

            [b]//input: {file(opened or closed) -> file(opened)}[/b]
            //output: {file(opened)}
            file << s;
            cout << "written: " << s << "\n";
        }

        else if (s == "3") {
            [b]//input: {file(opened or closed) -> file(opened)}[/b]
            //output: {file(closed)}
            file.close();
            cout << "file closed\n";
        }

        else {
            cout << "wrong option\n";
        }
    }

    [b]//input: {file(opened or closed) -> file(closed)}[/b]
    //output: {file(closed)}
    file.close();

    return 0;
}


In the above analysis, the bold parts show the inconsistencies.

This approach extends and generalizes work done with Cyclone, Scala and others.

I am optimistic that this approach can be extended to all kinds of operations in order to ensure correctness. Let me state again that this approach cannot prove that a program does what we intended. But at least it can prove that our implementation does not violate implicit or explicit logical constraints.

A language could support such functionality either by user-defined contracts or by language-supplied contracts. An example of a language-supplied contract is that "this pointer cannot be null" or that "the index of the array is always valid" or that two programs cannot be executed concurrently under specific circumstances (for hard real-time apps). User-defined contracts can be expressions that are statically proven to be true by the analysis performed above.

For example, the 'File' class could be like this:


class File {
    FILE *m_handle = null;

    bool open(const char *file) {
        [b]assume m_handle == null;[/b]
        //bla bla open file
    }

    bool close() {
        [b]assume m_handle != null;[/b]
        //bla bla close file
    }
}


The part in bold is a simple way to declare a static contract.
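
No mainstream compiler checks such an assume statically today; the closest readily available approximation in C++ is a runtime assertion, which catches a contract violation when the program runs rather than when it compiles. A minimal sketch of that weaker form:

#include <cassert>
#include <cstdio>

class File {
    std::FILE *m_handle = nullptr;
public:
    bool open(const char *path) {
        assert(m_handle == nullptr && "contract: file must not already be open");
        m_handle = std::fopen(path, "w");
        return m_handle != nullptr;
    }
    void close() {
        assert(m_handle != nullptr && "contract: file must be open");
        std::fclose(m_handle);
        m_handle = nullptr;
    }
};

int main() {
    File f;
    // f.close();              // would trip the contract at run time
    if (f.open("out.txt")) f.close();
    return 0;
}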

Static contracts can be used to declare lots of specifications, including state machines. But as I have said above, it would be good if requirements could be declared in a more abstract form, and some third party came and provided the implementation. This can happen by having one part of a language provide the abstraction and another part provide a declarative way to "honour" the abstraction with the help of static contracts.

For example, let's consider an address-book application: the user can input one or more contacts and delete one or more contacts. A requirements language could be used to declare these requirements:


requirement {
    record Contact {
        firstName : String;
        lastName : String;
        email : EMailAddress;
        phone : PhoneNumber;
    }
    variable contacts : {Contact};
    operation addContact: {contacts, Contact} -> {contacts + Contact};
    operation deleteContact: {contacts, Contact} -> {contacts - Contact};
}


Curly brackets are used to denote sets; the arrow is used to denote transformations.

The above types (records, variables, operations) are "abstract" types that must be implemented by a third party. An implementation can be like this:


class Contact [b]implements requirement Contact[/b] {
    String firstName [b]implements requirement Contact.firstName[/b];
    String lastName [b]implements requirement Contact.lastName[/b];
    EMailAddress email [b]implements requirement Contact.email[/b];
    PhoneNumber phone [b]implements requirement Contact.phone[/b];
}

List<Contact> contacts [b]implements requirement contacts[/b];

class App {
    public static void main(string[] args) {
        //bla bla build the gui

        //add a callback which implements the 'addContact' operation
        newContactMenuItem.setClickEvent(new ClickEvent () {
            contacts.add(new Contact) [b]implements requirement addContact[/b];
        });

        //add a callback which implements the 'deleteContact' operation
        deleteContactMenuItem.setClickEvent(new ClickEvent () {
            Contact contact = getCurrentContact();
            contacts.remove(contact) [b]implements requirement deleteContact[/b];
        });
    }
}


The parts in bold are where requirements are declared to be implemented. The static contracts in class List<T> make sure requirements are implemented correctly.

I apologise for this long post and I am grateful if you were patient enough to read this far.

piglet

Posts: 63
Nickname: piglet
Registered: Dec, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 5, 2006 8:44 AM
I wouldn't dismiss that *some* requirements checking can be done at compile time. Type checking is the most obvious example, which is nothing else but a way of specifying a set of legal input values that can be checked at compile time. The Nice language also offers null pointer checking at compile time. However, I think the class of problems that can be handled by this approach is restricted to relatively trivial cases. If you try to go further, you will start running into several problems:

- The requirements mechanism itself becomes complex and error-prone.
- Because of the halting problem and related problems, complex input conditions cannot in general be verified. Take as a simple example an index bound check. The index will be the result of some computation, so in order to prevent out of bound errors, you would have to prove that the computation in question cannot, under any circumstances, yield a wrong result.
- Even if it were practically possible, the compiler would have to be far too restrictive for most practical purposes. It would have to flag as an error all indexes that might *possibly* become out of bound, even if they never would.

As a simple example, consider a method like
 
MyObject getSomething() {
    return (some condition) ? new MyObject() : null;
}


Assume that some caller calls getSomething() and passes the result to another method, which has the specification that the input may not be null. The compiler probably can't foresee how the condition will evaluate, so in order to enforce the requirement, it would have to reject this code. However, the code may be correct. A human programmer may look at the code and say, "why, it's obvious that the method won't return null in the case in question. That stupid compiler is preventing me from getting my work done!"

In fact, some programmers prefer dynamic languages for that very reason: they feel that static checks are too restrictive. A language which pushed compile-time checks even further, along the lines you suggest, would hardly become very popular. This isn't in itself a reason to dismiss the concept but the point is you have to make a credible case not only that it is possible, but also that it is worthwhile. No static verification, indeed no methodology in the world, can guarantee correctness, so the question is how to get the best result with respect to the resources available.

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: Where Did All the Beautiful Code Go? Posted: Apr 5, 2006 9:56 AM
> I wouldn't dismiss that *some* requirements checking can
> be done at compile time. Type checking is the most obvious
> example, which is nothing else but a way of specifying a
> set of legal input values that can be checked at compile
> time. The Nice language also offers null pointer checking
> at compile time. However, I think the class of problems
> that can be handled by this approach is restricted to
> relatively trivial cases. If you try to go further, you
> will start running into several problems:
>
> - The requirements mechanism itself becomes complex and
> error-prone.
> - Because of the halting problem and related problems,
> complex input conditions cannot in general be verified.
> Take as a simple example an index bound check. The index
> will be the result of some computation, so in order to
> prevent out of bound errors, you would have to prove that
> the computation in question cannot, under any
> circumstances, yield a wrong result.
> - Even if it were practically possible, the compiler would
> have to be far too restrictive for most practical
> purposes. It would have to flag as an error all indexes
> that might *possibly* become out of bound, even if they
> never would.
>
> As a simple example, consider a method like
>
 
> MyObject getSomething() {
>     return (some condition) ? new MyObject() : null;
> }

>
> Assume that some caller calls getSomething() and passes
> the result to another method, which has the specification
> that the input may not be null. The compiler probably
> can't foresee how the condition will evaluate, so in order
> to enforce the requirement, it would have to reject this
> code. However, the code may be correct. A human programmer
> may look at the code and say, "why, it's obvious that the
> method won't return null in the case in question. That
> stupid compiler is preventing me from getting my work
> done!"
>
> In fact, some programmers prefer dynamic languages for
> that very reason: they feel that static checks are too
> restrictive. A language which pushed compile-time checks
> even further, along the lines you suggest, would hardly
> become very popular. This isn't in itself a reason to
> dismiss the concept but the point is you have to make a
> credible case not only that it is possible, but also that
> it is worthwhile. No static verification, indeed no
> methodology in the world, can guarantee correctness, so
> the question is how to get the best result with respect to
> the resources available.

I agree. The idea of static verification like this is very attractive, but I don't think it's generally applicable. I'm sure there has been lots of research in this area.

