The Artima Developer Community

C++ Community News Forum
John Ousterhout on What Limits Software Growth

26 replies on 2 pages. Most recent reply: Oct 8, 2006 9:56 AM by Cleo Saulnier

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

John Ousterhout on What Limits Software Growth Posted: Sep 21, 2006 12:48 PM
Summary
In a recent interview with Artima, John Ousterhout shares his views on software evolution, and what the current bottlenecks in building large-scale software systems are.

John Ousterhout is no stranger to developers: while a professor at UC Berkeley, he did pioneering work on distributed systems, including distributed file systems, and invented the Tcl scripting language. As a distinguished engineer at Sun Microsystems, he authored an influential paper on the importance of scripting languages. He later founded Scriptics, Inc., a company that made a Tcl IDE, and most recently is founder and CEO of Electric Cloud, a company that sells a tool that reduces the build time of C++ projects by parallelizing software builds.

In an interview, Ousterhout told us that while people want to build increasingly capable systems, they constantly bump against bottlenecks. Bottlenecks occur because more capable software is also more complex, and current tools and techniques can manage software complexity only up to a point. As one set of bottlenecks is solved, software is free to become more complex, but soon new bottlenecks emerge. In Ousterhout's view, the evolution of software tools and techniques mirrors the desire to tame software complexity. The following are excerpts from Ousterhout's interview:

There's an insatiable demand for features—whether underlying characteristics, such as security or reliability, or actual feature sets or Web pages, or screens, or editing tools. There's just no limit to the set of things people would like to do in their software.

To me, one of the most interesting things about software is that you're limited not by what you want to do, but by what you can do. We can conceive of so many things we'd like to have in our products, [but] we either don't have the time or the resources, or we just can't manage what it would take to build those things.

Unfortunately, everything in software leads to more complexity. There are various laws of physics people have discovered, and there are corresponding laws of software. The first law of software is that software systems tend towards increasing states of complexity. It's almost a perfect mirror of the First Law of Thermodynamics in physics...

The trend in software development tools is to give developers tools to build even bigger and more complex things. The only thing that limits how big a software project gets, is that it gets so big that people can't manage it any more. As soon as you come up with tools that make things more manageable, boom, software is going to get bigger again... It's really a question of what the next bottleneck is, and then find a way around that bottleneck... Of course, as soon as you do that, everything picks up speed, and soon some other bottleneck appears.

If you look back in time, in the '80s there were a lot of bottlenecks on the front end of software development, coding and managing source code revisions, and so on. That led to the growth of a bunch of powerful editing tools, [and] revision control systems like ClearCase and Perforce. That allowed people to produce a lot of software, but they couldn't test it very effectively, so testing became a big bottleneck. In the 1990s, there was a lot of work around testing, with the rise of companies like Mercury Interactive... Those things then allowed software projects to scale up a lot, because those parts got better. Now, build technology didn't improve much over the last twenty years. That has now become the bottleneck. That's why we [Electric Cloud] are out, hunting the build bottleneck...

I think extreme programming is a great idea. There are a lot of important things to be learned from that, and it will allow us in many cases to reduce complexity - temporarily. The techniques in extreme programming will allow us to do things that we've never done before, that we couldn't do in the past. And that makes even bigger, more complex projects feasible. What's immediately going to happen, as soon as people get the current stuff totally under control, easily manageable, [is that] their ambitions ... are going to go up dramatically. And they're going to build even bigger things. We will never go back to a simpler day, I'm afraid. We just find better ways to manage complex things.

What would be next, after that? Personally, I think there is a bottleneck around the development of Web-based applications. Web applications have a very different development style than traditional software [does]... It has something to do with the fact that there are so many different technologies that have to be mixed together to do Web application development. You end up using Java, JavaScript, and Perl, HTML, CSS, and on, and on. Each of those pieces is pretty good by itself, but when you try to combine all those together, projects become very difficult to manage. I think there is opportunity for someone to come up with a paradigm, or a toolkit, to make it dramatically simpler to develop really powerful applications over the Web. Over the next five or ten years, something is going to happen there. I can't tell you what it is.

What do you see as the most limiting bottleneck in your current project?


Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 22, 2006 9:46 AM
This is a very serious discussion and there are a lot of points raised in the short article above.

> There's an insatiable demand for features—whether underlying
> characteristics, such as security or reliability, or actual feature sets
> or Web pages, or screens, or editing tools. There's just no limit to the
> set of things people would like to do in their software.

And that's the beauty of software: there is no limit to what one can do; the sky is the limit. Computers are able to create whole universes.

> To me, one of the most interesting things about software is that you're
> limited not by what you want to do, but by what you can do. We can
> conceive of so many things we'd like to have in our products, [but] we
> either don't have the time or the resources, or we just can't manage
> what it would take to build those things.

Or our tools are wrong and are more of an obstacle than a help.

> Unfortunately, everything in software leads to more complexity. There
> are various laws of physics people have discovered, and there are
> corresponding laws of software. The first law of software is that
> software systems tend towards increasing states of complexity. It's
> almost a perfect mirror of the First Law of Thermodynamics in physics...

The reason there is complexity, though, is because we have not yet captured the essence of computing. Very few people have actually understood what computers should be like and what they should do.

I will develop my view at the end, so please stay with me.

> I think extreme programming is a great idea. There are a lot of
> important things to be learned from that, and it will allow us in many
> cases to reduce complexity - temporarily. The techniques in extreme
> programming will allow us to do things that we've never done before,
> that we couldn't do in the past. And that makes even bigger, more
> complex projects feasible. What's immediately going to happen, as soon
> as people get the current stuff totally under control, easily
> manageable, [is that] their ambitions ... are going to go up
> dramatically. And they're going to build even bigger things. We will
> never go back to a simpler day, I'm afraid. We just find better ways to
> manage complex things.

If we are talking about the extreme programming technique as is widely known, then I have serious doubts that it can offer anything new. There is a certain amount of planning required for every task, which extreme programming does not offer.

> What would be next, after that? Personally, I think there is a
> bottleneck around the development of Web-based applications. Web
> applications have a very different development style than traditional
> software [does]... It has something to do with the fact that there are
> so many different technologies that have to be mixed together to do Web
> application development. You end up using Java, JavaScript, and Perl,
> HTML, CSS, and on and on. Each of those pieces is pretty good by itself,
> but when you try to combine all those together, projects become very
> difficult to manage. I think there is opportunity for someone to come up
> with a paradigm, or a toolkit, to make it dramatically simpler to
> develop really powerful applications over the Web. Over the next five or
> ten years, something is going to happen there. I can't tell you what it
> is.
>
> What do you see as the most limiting bottleneck in your current project?

The real problem with computers is that they don't manage information, they manage bytes. We humans are interested in information, and not in bytes. We have built all sorts of operating systems, but none of them manages information; they all manage bytes.

The concept of process/driver/filesystem/library is totally wrong...and that's where the problem is: we deal too much with the technical details of computers; 95% of our programs consist of code that deals with the technical details of our system instead of the actual computation needed.

There are many things that are wrong:

1) operating systems are completely unaware of the structure of information. Only applications know the format of data. If there is a need to manage the information produced by an application, then one has to modify the application. Applications need to open and close files, open and close databases, open and close printers, open and close graphics devices. Data and code are not appropriately separated.

2) programs are fixed black boxes of computations with no way to communicate with the outside world. The only official interface of an executable has been the stdin/stdout mechanism. Of course each operating system provides its own communication mechanism, but there is no way to reuse a useful computation contained in a program...and updating a program is not as simple as installing a new function.

3) programs are built in a monolithic way: heaps of source code must be compiled before we even see a single dot on the screen illuminated by our program. Of course there are interactive environments, but this interactivity does not go through the whole system: at some point, the interpreted language will invoke system code, and nobody knows what happens in there.

4) our computers speak a thousand languages, all incompatible with one another! We have tons of different shells, each with its own language, tons of different interfaces (CORBA, COM, RPC, SOAP, HTML, FTP, Windows messages, the X Window System protocol, OpenGL), and tons of different languages and protocols (and protocols are languages). Yet there is no single way for programs to co-operate, exchange data, discover capabilities, communicate.

5) our computers can't talk to each other; they are perfect strangers only capable of exchanging hand signals. The same thing that happens to programs happens to computers, only much worse: computers can't really communicate, they can only have limited communication through network filesystems.

6) the computing environment is not the programming environment. In order to enter programming mode, I have to go through a sequence of boring and unnecessary steps: a) fire up the IDE, b) go through the API docs, c) write the code, d) check the code, e) compile the code, f) figure out what went wrong. For example, if I want to change the color of the active window frame, I cannot simply enter 'ActiveFrameColor = Orange' or whatever and have the frame color of all windows change, because I) there is no persistent system-wide 'ActiveFrameColor' variable and II) there is no global event model so that windows can listen for changes to 'ActiveFrameColor'.
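The 'ActiveFrameColor' wish in point 6 can be sketched in a few lines of Python. This is a toy illustration of a system-wide observable settings store, not a real windowing API; every name in it (SettingsStore, ActiveFrameColor) is invented:

```python
# Toy sketch: a system-wide settings store where any change to a key
# notifies every registered listener, as the post imagines.

class SettingsStore:
    def __init__(self):
        self._values = {}
        self._listeners = {}   # key -> list of callbacks

    def listen(self, key, callback):
        """Register a callback fired whenever `key` changes."""
        self._listeners.setdefault(key, []).append(callback)

    def __setitem__(self, key, value):
        self._values[key] = value
        for cb in self._listeners.get(key, []):
            cb(value)

    def __getitem__(self, key):
        return self._values[key]

settings = SettingsStore()

# Two 'windows' subscribe to the frame-color setting.
repainted = []
settings.listen("ActiveFrameColor", lambda c: repainted.append(("win1", c)))
settings.listen("ActiveFrameColor", lambda c: repainted.append(("win2", c)))

# A single assignment updates every listener at once.
settings["ActiveFrameColor"] = "Orange"
print(repainted)   # [('win1', 'Orange'), ('win2', 'Orange')]
```

With a store like this shared by the whole system, the single assignment really would recolor every window frame, which is the point of the example.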

How should the computing environment be? Well, the system should be responsible for:

1) global definition of datatypes, including arrays, lists, tree maps, hashmaps, sorted maps, indexed arrays...not only at the semantic level but also at the binary level.

2) automatic persistence. It's the year 2006, we have had computers for over 50 years, and yet we still have to 'open file'. The system should be responsible for using the system's RAM as a cache for the hard disk.

3) system-wide persistent map of functions. A program should be a function. When a function is replaced, all programs are automatically updated. The difference with the current system is that the code should be distributed as source code (or in an intermediate form), and each function should be a different 'file'. The O/S should be responsible for compiling and caching the code, as well as checking for security.

4) support for reactive programming. Any state change should be an event to which listeners can be attached. Reactive programming makes GUI programming very easy.

5) support for versioning. New definitions of data types and functions should be automatically versioned.

Programming should be as simple as opening a new source code file and running it...the various functions and structure of the program should be automatically stored by the system. By incrementally building systems, development would be much better and faster.
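The system-wide function map described in point (3) can be sketched with ordinary late binding; the publish/call names below are invented for illustration, and a real system would add compilation, caching, and security checks:

```python
# Toy sketch: programs hold a *name*, not a compiled blob, so replacing the
# stored function 'updates' every caller at once.

function_store = {}

def publish(name, fn):
    """Install (or replace) a function under a system-wide name."""
    function_store[name] = fn

def call(name, *args):
    # Late binding: the lookup happens on every call, so callers always
    # see the newest published version.
    return function_store[name](*args)

publish("greet", lambda who: f"hello {who}")
print(call("greet", "world"))        # hello world

# 'Updating a program' becomes a single replacement...
publish("greet", lambda who: f"HELLO {who.upper()}")
print(call("greet", "world"))        # HELLO WORLD
```

The design choice being illustrated is that no caller is recompiled or reinstalled when "greet" changes; the indirection through the store is what makes incremental replacement possible.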

Most of the applications and mechanisms invented so far are solutions to the same problem:

1) versioning systems are information management systems which deal with source code.
2) document management systems are information management systems which deal with documents.
3) filesystems are information management systems which manage bytes.
4) the Windows registry is an information management system which manages system settings.
5) the 'etc' directory in Unix is an information management system which manages system settings.
6) databases are information management systems where triggers play the role of the reactive system.
7) help files are information management systems.
8) hypertext is an information management system.
9) an O/S kernel maintains various pieces of information in its memory.

And there are many others...

So the bottleneck exists because the same problem is solved using many different solutions; if this bottleneck goes away, scalability will skyrocket and creativity will be unlocked...

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 22, 2006 12:00 PM
> The reason there is complexity though is because we have
> not yet captured the essense of computing. Very few people
> actually have understood how computers should be like and
> what should they do.


... and that is?


> If we are talking about the extreme programming technique
> as is widely known, then I have serious doubts that it
> can offer anything new. There is a certain amount of
> planning required for every task, which extreme
> programming does not offer.


That's what I hear about XP, too.


> The real problem with computers is that they don't manage
> information, they manage bytes. We humans are interested
> in information, and not in bytes. We have built all sorts
> of operating systems, but none of them manages
> information; they all manage bytes.


What's wrong with having applications manage information? For example, if you use a set of sensors to analyze an earthquake, those sensors send binary data to a computer, and an application can then be written to interpret that data. How would an OS know how to interpret such data?


> The concept of process/driver/filesystem/library is
> totally wrong...and that's were the problem is: we deal
> too much with the technical details of computers; 95% of
> our programs consist of code that deals with the technical
> details of our system instead of the actual computation
> needed.


What if that computation depends on technical details (e.g., performance or space/time complexity)? Prolog and LISP are examples of not having to deal with details.


> 2) programs are fixed black boxes of computations with no way to
> communicate with the outside world. The only official interface of an
> executable has been the stdin/stdout mechanism. Of course each operating
> system provides its own communication mechanism, but there is no way to
> reuse a useful computation contained in a program...and updating a
> program is not as simple as installing a new function.


If the software is not written such that its components can be reused then you're right, the OS can't help you.


> So the bottleneck exists because the same problem is
> solved using many different solutions; if this bottleneck
> goes away, scalability will skyrocket and creativity will
> be unlocked...

Are you implying that the bottleneck would go away if we used just one solution to a problem? To me this hardly seems possible.

Joao Pedrosa

Posts: 114
Nickname: dewd
Registered: Dec, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 22, 2006 12:55 PM
Events have their own bottlenecks as well. Too many events make it harder to know which code is being triggered from where. Also, if you have many events, they probably need some kind of order in which they execute. If several listeners listen to the same event so they can change some data, for example, the order may be important.
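The ordering concern can be made concrete with a small Python sketch; the Event class and its priority scheme are invented for illustration, but they show how two listeners mutating the same data give different results depending on dispatch order:

```python
# Toy event bus with explicit listener priorities. Lower priority value
# runs first; Python's stable sort keeps insertion order for ties.

class Event:
    def __init__(self):
        self._listeners = []   # list of (priority, callback)

    def listen(self, callback, priority=0):
        self._listeners.append((priority, callback))
        self._listeners.sort(key=lambda pair: pair[0])

    def fire(self, data):
        for _, cb in self._listeners:
            cb(data)

event = Event()
data = {"value": 10}

# Registered second, but priority 0 makes the '+1' listener run first.
event.listen(lambda d: d.__setitem__("value", d["value"] * 2), priority=1)
event.listen(lambda d: d.__setitem__("value", d["value"] + 1), priority=0)

event.fire(data)
# (10 + 1) * 2 = 22; with the opposite order it would be 10 * 2 + 1 = 21.
print(data["value"])   # 22
```

Without the explicit priority, the result would silently depend on registration order, which is exactly the kind of hidden coupling the post is worried about.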

I like this discussion. Thanks for this interview, Artima. :-)

Though I don't think a silver bullet is around the corner...

Vincent O'Sullivan

Posts: 724
Nickname: vincent
Registered: Nov, 2002

Re: John Ousterhout on What Limits Software Growth Posted: Sep 22, 2006 3:15 PM
> The techniques in extreme programming will
> allow us to do things that we've never done before, that
> we couldn't do in the past.

What's your basis for this assertion?

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 23, 2006 3:46 AM
> > The reason there is complexity, though, is because we have not yet
> > captured the essence of computing. Very few people have actually
> > understood what computers should be like and what they should do.
>
> ... and that is?

The essence of computing is that an O/S is one function...so the bottleneck is the barriers between the various parts of an O/S, either explicit or implicit (i.e. the lack of interfacing).

> > The real problem with computers is that they don't manage
> > information, they manage bytes. We humans are interested in
> > information, and not in bytes. We have built all sorts of operating
> > systems, but none of them manages information; they all manage bytes.
>
> What's wrong with having applications manage information?

A third party cannot reuse the information unless the application provides an interface.

> For example, if you use a set of sensors to analyze an
> earthquake then those sensors will send the binary
> information to a computer and then an application can be
> written and used to interpret the binary data. How would
> an OS know how to interpret such data?

It seems you have not understood what I said. I am not saying that an O/S should know how to interpret the data, but that an O/S should provide the mechanisms for open use of the data.

> > The concept of process/driver/filesystem/library is totally
> > wrong...and that's where the problem is: we deal too much with the
> > technical details of computers; 95% of our programs consist of code
> > that deals with the technical details of our system instead of the
> > actual computation needed.
>
> What if that computation depends on technical details (e.g. performance
> or space/time complexity)? Prolog and LISP are examples of not having
> to deal with details.

For most tasks the technical details are irrelevant. When I say 'technical details' I mean, for example, where the data are stored, how they are fetched, how they are cleaned up, etc.

Of course if a computation cares about such details, then it should be possible to specify them.

> > 2) programs are fixed black boxes of computations with no way to
> > communicate with the outside world. The only official interface of an
> > executable has been the stdin/stdout mechanism. Of course each
> > operating system provides its own communication mechanism, but there
> > is no way to reuse a useful computation contained in a program...and
> > updating a program is not as simple as installing a new function.
>
> If the software is not written such that its components can be reused
> then you're right, the OS can't help you.

But even if components can be reused, they can only be reused within the same application and programming language platform. No outsider can use those components.

> > So the bottleneck exists because the same problem is solved using
> > many different solutions; if this bottleneck goes away, scalability
> > will skyrocket and creativity will be unlocked...
>
> Are you implying that the bottleneck would go away if we used just one
> solution to a problem? To me this hardly seems possible.

Yes. I am not talking about a single programming language, but a computing environment where a) data are independent of code, b) persistence is automatic, c) the simplest execution unit is the function and not the executable, d) networking is transparent, e) the O/S is an interpreter/compiler itself.

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 23, 2006 12:33 PM
> > Are you implying that the bottleneck would go away if we used just
> > one solution to a problem? To me this hardly seems possible.
>
> Yes. I am not talking about a single programming language,
> but a computing environment where a) data are independent
> of code, b) persistence is automatic, c) the simplest
> execution unit is the function and not the executable, d)
> networking is transparent, e) the O/S is an
> interpreter/compiler itself.

If I understood you right, it seems to me that this would require a complete set of standards - from hardware up to the application layer. This way one OS or application could talk to another (perhaps on a different type of computer), reuse each other's components and other resources, etc. It's just like two people needing to speak the same language (or use a translator) in order to understand each other. I think this would be possible, but the problem is that you can't satisfy everyone. Sooner or later someone would need something special and outside the standard, create an application for it, and then you would have the same problems. Your views seem to be a bit idealistic (not that there's anything wrong with that), but, unfortunately, the real world is far from ideal.

Roland Pibinger

Posts: 93
Nickname: rp123
Registered: Jan, 2006

Re: John Ousterhout on What Limits Software Growth Posted: Sep 23, 2006 2:40 PM
> > The techniques in extreme programming will
> > allow us to do things that we've never done before,
> > that we couldn't do in the past.
>
> What's your basis for this assertion?

Maybe he should read http://www.softwarereality.com/ExtremeProgramming.jsp

Roland Pibinger

Posts: 93
Nickname: rp123
Registered: Jan, 2006

Re: John Ousterhout on What Limits Software Growth Posted: Sep 23, 2006 2:53 PM
> Yes. I am not talking about a single programming language,
> but a computing environment where a) data are independent
> of code, b) persistence is automatic, c) the simplest
> execution unit is the function and not the executable, d)
> networking is transparent, e) the O/S is an
> interpreter/compiler itself.

I guess we had that already (for the most part). It was called 'distributed objects' and it was not successful. It turned out that the "simplistic" internet was the much better distributed environment.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 24, 2006 3:52 AM
> If I understood you right, it seems to me that this would
> require a complete set of standards - from hardware up to
> the application layer.

No, no hardware changes would be required. I am speaking only about software.

> This way one OS or application
> could talk to another (perhaps on a different type of
> computer), reuse each others components and other
> resources, etc. It's just like two people need to speak
> the same language (or use a translator) in order to
> understand each other.

Exactly!

> I think this would be possible, but
> the problem is that you can't satisfy everyone. Sooner or
> later someone would need something special and out of the
> standard, create an application for it, and then you would
> have the same problems. Your views seem to be a bit
> idealistic (not that there's anything wrong with that),
> but, unfortunately, the real world is far from ideal.

But the standard should be simple enough to satisfy (almost) anyone, i.e. it should not make any assumptions whatsoever. To put it another way, the Turing machine model should be offered in a more flexible manner than it is today.

Here is a realistic approach:

a) an imperative C-like language which allows down-to-the-metal programming (with pointers and such), but with enough restrictions to disallow unsafe programming (Cyclone, for example)

b) with enough meta-programming capabilities to allow the compiler to be extended with new concepts, thus allowing the mapping of other concepts on to it (object oriented, functional, table-driven programming)

c) delivered as an interpreter, i.e. only typechecking and meta-programming are performed at compile time; the O/S kernel takes care of compiling and caching the code to native code as it sees fit.

d) programs are developed incrementally: the programmer writes a small piece of code, the code runs, and then the programmer inspects the output, changes the code, etc. There is no single 'executable', but each function is stored by the O/S as a separate entity.

e) data are independent from the code; they exist as separate entities accessible from all pieces of code.

f) the O/S keeps a store of the available data types and functions. The relevant information is available to all programs, just like the rest of the data.
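Points (e) and (f) above, data living outside any one program plus a system-wide store of type descriptions, can be sketched as follows. This is a toy model, and every name in it (define_type, store, describe) is invented for illustration:

```python
# Toy sketch: types and data are registered in shared stores, so any
# program can discover and read them, not just the one that wrote them.

types = {}   # type name -> list of field names
data = {}    # entity name -> (type name, field values)

def define_type(name, fields):
    types[name] = fields

def store(entity, type_name, **values):
    # The store rejects data that doesn't match its declared type.
    assert set(values) == set(types[type_name]), "values must match the type"
    data[entity] = (type_name, values)

def describe(entity):
    """Any program can introspect stored data via its type description."""
    type_name, values = data[entity]
    return type_name, types[type_name], values

define_type("Point", ["x", "y"])
store("origin", "Point", x=0, y=0)

print(describe("origin"))
# ('Point', ['x', 'y'], {'x': 0, 'y': 0})
```

Because the type description lives in the shared store rather than inside an application binary, a third party can interpret "origin" without the original program's help, which is the reuse the post is after.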

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 24, 2006 4:05 AM
> > Yes. I am not talking about a single programming language, but a
> > computing environment where a) data are independent of code, b)
> > persistence is automatic, c) the simplest execution unit is the
> > function and not the executable, d) networking is transparent, e) the
> > O/S is an interpreter/compiler itself.
>
> I guess we had that already (for the most part). It was called
> 'distributed objects' and it was not successful.

But what was the reason distributed objects were not successful? I believe it was not the idea of distribution but its implementation, which was cumbersome. The system lacked the information to properly support distribution of data, and it required the programmer to fill in that information.

To give you a practical example from Java: in order to develop a distributed object, you had to: a) write the class, b) write the proxy class, c) write the meta-information to register the class... All these steps except the first are redundant and are only there because the system was not smart enough to extract the proper information from a simple class description. The same example applies to CORBA.
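The claim that the proxy step is mechanical can be illustrated with a small Python sketch that derives a forwarding proxy from a plain class by introspection. The transport here is a plain function standing in for a real network, and all names (Calculator, make_proxy) are invented; this is not how Java RMI or CORBA actually work, only a demonstration that the proxy can be generated rather than hand-written:

```python
import inspect

class Calculator:
    """An ordinary class; no proxy or registration code written by hand."""
    def add(self, a, b):
        return a + b

def make_proxy(cls, transport):
    """Build a stand-in object whose methods forward calls via `transport`."""
    class Proxy:
        pass
    proxy = Proxy()
    for name, _ in inspect.getmembers(cls, predicate=inspect.isfunction):
        if name.startswith("_"):
            continue   # skip dunder/private methods
        def forward(*args, _name=name):
            return transport(_name, args)
        setattr(proxy, name, forward)
    return proxy

# 'Server side': dispatch incoming calls to a real instance.
server_object = Calculator()
def transport(method, args):
    return getattr(server_object, method)(*args)

calc = make_proxy(Calculator, transport)
print(calc.add(2, 3))   # 5
```

Everything beyond the class itself was derived automatically from the class description, which is exactly the step the post says the old systems forced programmers to do by hand.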

> It turned out that the "simplistic" internet was the much
> better distributed environment.

But in the end the "simplistic" internet has become the bottleneck! In order to write even the simplest web application, you have to literally 'jump through hoops', spending an effort that is three or more times higher than for a simple desktop application, and the result has much less functionality than a desktop application.

If you think about it, what happens inside a single machine is the same as what happens on the network: information flows on a bus (or more buses) between a data store and a processing unit, with the O/S playing the role of the traffic police. The difference between a single machine and the network is the type of the bus...but that difference is not big enough to require such drastically different solutions.

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 24, 2006 10:13 AM
> d) programs are developed incrementally: the programmer
> writes a small piece of code, the code runs, and then the
> programmer inspects the output, changes the code, etc.
> There is no single 'executable', but each function is
> stored by the O/S as a separate entity.
>
> f) The O/S keeps a store of the data types and the
> functions available. The relevant information are
> available to all programs, just like the rest of the data.

Interesting thing, I actually experimented with something that sounds similar to this. I created a simple language, its compiler and the run-time system that allows you to create independent components that communicate with each other through input/output ports and connectors. For example, you can specify a component that calculates a square of a number like this:

sync component Square(io int^ N) { // N is an in/out port
    int result = 0;

    receive N {
        result := ^N * ^N; // ^N means 'dereference'
    } send {
        ^N := result;
    }
}

Since a component can have multiple ports, the 'receive' blocks specify what the component does when an input arrives at a given port (like port N above). Similarly, at the end of the 'send' block the component will send the result out through the specified port (also port N above). Now another component can send a request out and receive the response back:

sync component Test(io int^ V) {
    int r = 0;
    send V {
        ^V := 5;     // set the value for the port and send
    } receive {      // receive the response
        println(^V); // prints 25 _IF_ connected to Square
    }
}

Since these two components don't know about each other you need to specify a connector to connect them together:

// connect Square and Test together
connector C(Square io int^ SP, Test io int^ TP) {
    receive TP {          // expect request from Test
        int r := ^TP;     // read the value Test sent out
        send SP {         // sending to Square ...
            ^SP := r;     // set the value from Test and send
        } receive {
            r := ^SP;     // get the result from Square
        }
    } send {              // sending back to Test ...
        ^TP := r;         // set the result ...
    }                     // ... and send back
}

Ports can be input, output, or input/output. Components can be 'sync' or 'async'; if a component is 'async', the runtime system will automatically create a clone of it if it's currently busy (responding to a previous request) and run the clone in a separate thread. If a 'sync' component is busy, the runtime system queues all requests for it and serves them in arrival order whenever the component gets into a 'ready' state. A component that sends a request to an 'async' component does not need to wait for the result, so _from_this_perspective_ the system has built-in multi-threading (a component can send multiple requests through multiple ports, which can trigger many components). Other kinds of multithreading and synchronization between threads are also possible inside one component (like semaphores and monitors). Also, components can be stored on a different machine, and the runtime system takes care of transparently managing them across the network. How components are connected and where they are located is specified in connector definitions (which can be modified while the system is running).
Anyway, this is just an experiment that can be simulated in a "normal" language (like Python, for example).

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 24, 2006 2:35 PM
> Interesting thing, I actually experimented with something
> that sounds similar to this. I created a simple language,
> its compiler and the run-time system that allows you to
> create independent components that communicate with each
> other through input/output ports and connectors. For
> example, you can specify a component that calculates a
> square of a number like this:

What you propose is called component-based programming or signal-based programming.

In the following link to an LtU discussion about component-based programming, I have posted a link to CompC++, a C++ implementation of component-based programming:

http://lambda-the-ultimate.org/node/945#comment-20742

Some people say that language support for signals is redundant; I disagree. The TR1 syntax for callbacks is simply ugly.

Alex Stojan

Posts: 95
Nickname: alexstojan
Registered: Jun, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 24, 2006 3:45 PM
> What you propose is called component-based programming or
> signal-based programming.
>
> In the following link to an LtU discussion about
> component-based programming, I have posted a link to
> CompC++, a C++ implementation of component-based
> programming:
>
> http://lambda-the-ultimate.org/node/945#comment-20742
>
> Some people say that language support for signals is
> reduntant...I disagree. The TR1 syntax for callbacks is
> simply ugly.

Thanks for the link! The CompC++ approach does look very similar, although it lacks adequate run-time support (at least in the form I described above). For example, a CompC++ application has to be compiled together with its connection definitions, so you can't change connections after the system has been built; the system is effectively hard-wired. That could also be a problem if you want components distributed across a network (just as you connect a bunch of computers in a network and can later reconnect them differently if you need to). That's why I tried using 'connectors' that know where components are and how to communicate with them, all of which is handled by the runtime system.
Anyway, the component-oriented approach seems interesting, but there doesn't seem to be much work in this area (AFAIK).

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: John Ousterhout on What Limits Software Growth Posted: Sep 25, 2006 2:56 AM
> Thanks for the link! It looks very similar indeed (the
> CompC++ approach), although it lacks an adequate run-time
> support (at least in the way I described it above). For
> example, the application in CompC++ needs to be compiled
> with connection definition, so you can't change
> connections after the system has been built. This makes
> the system kind of hard-wired. This might also be a
> problem if you want to have components distributed accros
> a network (just like you connect bunch of computers in a
> network, and later you can reconnect them in a different
> way if you need to). That's why I tried using 'connectors'
> that know where components are and how to communicate with
> them, all of which is done by the runtime system.
> Anyway, component-oriented approach seems interesting, but
> there doesn't seem to be much work in this area (AFAIK).

I agree it seems interesting. In my opinion it solves many problems of the traditional object-oriented approach, because it unifies virtual methods, message passing, signals and slots, and callbacks into one mechanism that is easy to understand and use.
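
A minimal signal/slot sketch in Python (hypothetical names, not tied to CompC++ or TR1) shows how one "connect" mechanism can stand in for callbacks and message passing alike:

```python
# Hypothetical minimal signal/slot mechanism: one connect/emit pair
# replaces ad-hoc callback registration.

class Signal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        # Register any callable as a receiver for this signal.
        self._slots.append(slot)

    def emit(self, *args):
        # Message passing: every connected slot receives the arguments.
        for slot in self._slots:
            slot(*args)

clicked = Signal()
clicked.connect(lambda x: print("handler A got", x))
clicked.connect(lambda x: print("handler B got", x))
clicked.emit(42)  # both handlers run, in connection order
```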

But to get back on topic: the software bottleneck comes from the way software and programming are delivered, not from programming languages.

