The Artima Developer Community

Weblogs Forum
Programming in the Mid-Future

88 replies on 6 pages. Most recent reply: Apr 11, 2010 8:47 PM by Charles McKnight

Eivind Eklund

Posts: 49
Nickname: eeklund2
Registered: Jan, 2006

Re: Programming in the Mid-Future Posted: Mar 18, 2010 12:53 PM
> > Here's a problem that I regularly want to solve and that works perfectly well in a fully run-time-checked language and don't work in any statically typed language I've come across (and more or less by definition can't work): Start executing a program with missing methods and invalid "types."
>
> If types are invalid and you know it, and the compiler can know it, there is nothing to test.

For a normal statically typed language, that's correct: there's nothing to test, because the code won't compile. But remove the type declarations and run code with the same structure through a dynamically checked language, and there *is* something to test. The type problems can occur on a different code path that I am not interested in testing right then, for various reasons (e.g., type problems due to concurrent editing of code, where the changes come in from a version control sync, or a type change I'm making to a type with a lot of derived classes, where I want to fix one of them fully before going on to the next, instead of doing a sweep over all of them to fix the type issue and only afterwards fixing the tests).
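As a concrete illustration of deferring a type problem on an untaken path, here is a minimal Python sketch (the `Report`/`render` names are hypothetical, chosen only for the example):

```python
class Report:
    def summary(self):
        return "ok"

def render(report, detailed):
    if detailed:
        # Report has no details() method yet. A static checker would reject
        # the whole file; a dynamic language only fails if this branch runs.
        return report.summary() + "\n" + report.details()
    return report.summary()

print(render(Report(), detailed=False))  # prints "ok"
```

The broken branch can be left alone until you actually need it, which is exactly the "don't deal with that aspect right now" workflow being described.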

> Wow, this discussion has been turned to a 'dynamic vs
> static' debate!

There are some people who always want to switch to grandiose claims about static typing, so anything that even mentions anything dynamic turns into a static typing vs. dynamic checking debate.

Most of this discussion happens at an inane level: static typing advocates with little experience of non-static languages pretend that static typing has no cost, while dynamic typing advocates claim that everything should be dynamic and that this has no drawbacks to speak of. Both solutions have drawbacks; the question is which drawback is smaller in a specific situation (including the specific skills of a particular person, the specific aspects of the particular problems to be solved, and the languages to be used).


Anyway, to bring this more back on topic: what would be the prediction for the future of static typing and dynamic checking? How will this be better in 25 years?

I think we'll get the ability to deal with this mess better. In my ideal world, which I believe we are likely to have in 25 years, I'll start out with dynamic checking, allowing easy exploratory programming, quick response times, easy hacks to get things working, and so on. This works well for programming at the scale of a single module up to a medium-sized program (with a single person or a small, static team).

When working in a larger environment, I'd like to be able to infer types once my module has become somewhat stable, and place these as declarations at the edges of the small module, giving definition to an API and giving people something to relate to that is more abstract than the actual code (and more direct than documentation). I'd expect those types to be highly expressive and flexible as well - something like full dependent types, supporting much more complete proofs of my program than the presently common crop of statically typed languages.
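Gradual typing tools already gesture at the first half of this workflow. A minimal Python sketch (the `mean` name is illustrative; a checker such as mypy is assumed for the annotated version):

```python
# Exploratory phase: no annotations, everything dynamically checked.
def mean(xs):
    return sum(xs) / len(xs)

# Stabilized phase: types declared at the module edge. A static checker
# can now verify callers against this signature; the body is unchanged.
def mean_typed(xs: list[float]) -> float:
    return sum(xs) / len(xs)

print(mean_typed([1.0, 2.0, 3.0]))  # 2.0
```

The annotations act as the "declarations at the edges" described above: more abstract than the code, more direct than documentation.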


I expect simple transitions between code, tests, types and proofs for the easy cases. I expect to be able to start at any end, and whatever is easy to automatically derive from what I have provided will (if I ask) be derived from the part I provide. Not anywhere near a 100% solution, but an 80% one for the simple cases. If I write some tests for simple code, the system will (often) be able to derive code that passes those tests. If I write some untyped code, the system will be able to derive types that fit the code (though the types may be too wide), and usually tests that make sure that (some) important aspects of the code still work. If I write some tests, the system will be able to use those to generate some part of the proof skeleton, while I help the system with the complicated parts. I expect the use of code databases to resolve some of this - the system looks up the parts you've provided against some sort of template database built from other people's code, says "Hmm, this foo looks very much like that bar", and then offers me some simple analogy to the tests or proofs for that bar (or, if it is a more or less direct replacement, just lets me use bar).


That's where I expect us to get: The benefits of dynamic typing when that is most beneficial. The benefits of deep static typing when that is most beneficial. And the ability to maneuver between these in a relatively easy fashion, with the computer doing as much as it can of the lifting.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Programming in the Mid-Future Posted: Mar 18, 2010 1:39 PM
> For a normal statically typed language, it's correct:
> There's nothing to test, because the code won't compile.

The code will compile if you remove the non-existent types or abstract methods.

> But, remove the type declarations and run the same structure code through a dynamically checked language, and there *is* something to test. The type problems can occur on a different code path I am not interested in testing right then, for various reasons (e.g, type problems due to concurrent editing of code where the changes come in from a version control sync,

It doesn't happen in reality. In properly set-up environments, the main repository gets code that compiles correctly, whereas local repositories accept changes made by developers for testing purposes.

> or a type change that I'm doing at a type where there's a lot of derived classes and I want to fix one of them fully before going on to the next, instead of doing a sweep over all to fix the type issue and then afterwards to fix the tests).

You can always isolate that part of code by creating another project and including the specific classes you are interested in.

>
> > Wow, this discussion has been turned to a 'dynamic vs static' debate!
>
> There are some people that always want to switch to grandiose claims about static typing; so anything that even mention anything dynamic turns into static typing vs dynamic checking.
>
> Most of this discussion happens at an inane level: Static typing advocates with little experience with non-static languages pretends that static typing has no cost, dynamic typing advocates claiming that everything should be dynamic and that has no drawbacks to speak of. Both solutions have drawbacks; the question is which is least in a specific situation (including the specific skills of a particular person and the specific aspects of the particular problems to be solved and languages to be used.)

There is no drawback for the static typing case.

>
>
> Anyway, to bring this more back on topic: What would the
> prediction for the future of static typing and dynamic
> checking? How would this be better in 25 years?

Source code will have a dynamic system but the underlying JIT system will convert it to the static version automatically.

>
> I think we'll get the ability to deal with this mess
> better.

There is no mess.

> In my ideal world, which I believe we are likely to have in 25 years, I'll start out with dynamic checking, allowing easy exploratory programming, quick response times, easy to do hacks to get things working, and so on. This works good for programming at a single module to medium size program (with a single person or a small, static team).
>
> When working for a larger environment, I'd like to be able to infer types when my module has become somewhat stable, and place these as declarations at the edges of the small module, giving definition to an API and giving people something to relate to that is more abstract that then actual code (and more direct than documentation).

Static typing easily allows exploratory programming.

> I'd expect those types to be highly expressive and flexible as well - something like full dependent types, supporting much more complete proofs of my program than the presently common crop of statically typed languages.

You can't go further than Haskell in proofs. It's math, and there are physical limitations to what can be done.

> I expect simple transition between code, tests, types and proofs for the easy cases. I expect to be able to start on any end, and whatever is easy to automatically derive from what I have provided will (if I ask) be derived from the part I provide. Not anywhere near a 100% solution, but an 80% for the simple cases. If I write some tests for simple code, the system will (often) be able to derive code that pass those tests. If I write some untyped code, and the system will be able to derive types that fit the code (though the types may be too wide), and usually tests that make sure that (some) important aspects of the code still works. If I write some tests, the system will be able to use those to generate some part of the proof skeleton, while needing to help the system the complicated parts. I expect the use of code databases to resolve some of this - where the system look up the parts you've provided against some sort of template database from other people's code, and say "Hmm, this foo looks very much like that bar" and then offers me some simple analogy to the tests or proofs for that bar (or, if it is a more or less direct replacement, to just let me use bar).

It will never work, because two pieces of code need to be identical in order to have the same proof. Remember, code is math.

> That's where I expect us to get: The benefits of dynamic typing when that is most beneficial. The benefits of deep static typing when that is most beneficial. And the ability to maneuver between these in a relatively easy fashion, with the computer doing as much as it can of the lifting.

There are no benefits in dynamic typing that static typing doesn't have.

Working with an interpreter for a statically typed language has the same benefits as working with a dynamic language: you can alter types on the fly, you don't have to exit the current test session, etc. You could do this with C++ and Haskell, if you wished.

Timothy Brownawell

Posts: 25
Nickname: tbrownaw
Registered: Mar, 2009

Re: Programming in the Mid-Future Posted: Mar 18, 2010 3:10 PM
> There is no drawback for the static typing case.

There must be *some* drawback, or there'd be no need for Object, dynamic_cast, reinterpret_cast, static_cast, etc, to work around it.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Programming in the Mid-Future Posted: Mar 19, 2010 7:58 AM
> > There is no drawback for the static typing case.
>
> There must be *some* drawback, or there'd be no need for
> Object, dynamic_cast, reinterpret_cast, static_cast, etc,
> to work around it.

A dynamic message passing system can easily be coded in C++, but it's not really useful in practice.

Eivind Eklund

Posts: 49
Nickname: eeklund2
Registered: Jan, 2006

Re: Programming in the Mid-Future Posted: Mar 19, 2010 9:52 AM
> > For a normal statically typed language, it's correct: There's nothing to test, because the code won't compile.
>
> The code will compile, if you remove the non-existent types or abstract methods.

So the code won't compile. I'm glad we agree. *Different* code would compile, and in some languages you could play around with a ton of crappy casts etc. to sort of simulate how it would be with a dynamically checked language, but that's not the same code, and it is explicitly in the way of what I want: to not have to deal with that aspect of the codebase at that time, and instead focus on the aspect that I am interested in.


>
> > But, remove the type declarations and run the same structure code through a dynamically checked language, and there *is* something to test. The type problems can occur on a different code path I am not interested in testing right then, for various reasons (e.g, type problems due to concurrent editing of code where the changes come in from a version control sync,
>
> It doesn't happen in reality. In properly setup
> environments, the main repository gets the code that
> compiles correctly,

... and your local changes can include type changes that make the other developers' changes create type conflicts. This has happened to me several times over the last weeks; I've been working on a cross-cutting concern that changed a core type that lots of other things inherited from, including new classes introduced by other developers while I was working on this.

Putting your fingers in your ears and saying "That don't happen" about things that regularly happen to me with static types isn't a convincing argument. At *best* it shows that you've learned to work around the problems of static types fairly effectively.

You know what? I hardly ever get problems from types when programming in a dynamically typed language. Over the last 15 years, I've only twice seen a type error lead to extended debugging: once in a Python system somebody else had written, where the fact that a string is also a sequence of strings led to problems when somebody meant to pass a list containing a single string and passed the bare string instead, and once where auto-vivification in Perl (Perl makes up variable content when it is used) escalated a typo into a larger problem.
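The Python pitfall described is easy to reproduce; a minimal sketch (the `upper_all` helper is hypothetical, standing in for the function in that system):

```python
def upper_all(names):
    # Intended to take a list of strings, but a bare string also iterates -
    # character by character - so the mistake passes silently.
    return [n.upper() for n in names]

print(upper_all(["alice"]))  # ['ALICE'] - intended call
print(upper_all("alice"))    # ['A', 'L', 'I', 'C', 'E'] - silently wrong
```

No exception is raised anywhere, which is why the bug led to extended debugging rather than an immediate failure.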

But I don't propose that this means "there are no advantages to having the checking" - it just means that I've worked so much with dynamic languages that I easily work around the challenges that are there.

I've also worked with static languages, though not as much - and I only mention problems that I actually encounter when working with static languages.

>
> > or a type change that I'm doing at a type where there's a lot of derived classes and I want to fix one of them fully before going on to the next, instead of doing a sweep over all to fix the type issue and then afterwards to fix the tests).
>
> You can always isolate that part of code by creating
> another project and including the specific classes you are
> interested in.

Then you've already significantly interrupted the work I'm doing. It was this interruption we were trying to get rid of; it is *not having the interruption* that is the advantage. Spending the time to set up a new project and then modifying it for each class is going to be as much of an interruption as taking the dual-phase approach.

> > > Wow, this discussion has been turned to a 'dynamic vs static' debate!
> >
> > There are some people that always want to switch to grandiose claims about static typing; so anything that even mention anything dynamic turns into static typing vs dynamic checking.
> >
> > Most of this discussion happens at an inane level: Static typing advocates with little experience with non-static languages pretends that static typing has no cost, dynamic typing advocates claiming that everything should be dynamic and that has no drawbacks to speak of. Both solutions have drawbacks; the question is which is least in a specific situation (including the specific skills of a particular person and the specific aspects of the particular problems to be solved and languages to be used.)
>
> There is no drawback for the static typing case.

The first part below assumes that we're discussing static typing at the programming language level. Before writing it, I didn't notice that you mentioned JIT compilation and being dynamic at the source level; from my perspective, type inference at the JIT level is an implementation detail that doesn't usually affect anything but performance, and so isn't relevant when I talk about static typing vs dynamic checking.

On the topic of programmer-level static vs dynamic typing, and of static typing having some drawbacks, we can approach this from two sides: the theoretical or the practical.

From the theoretical side, we can use the fact that any sound type system will reject some programs that would execute correctly. This can be demonstrated simply: consider `if p() then a op b`, where p() is a function that may or may not terminate and `a op b` is type-inconsistent. If p() terminates (returning true), the program reaches the inconsistent expression and is invalid; if p() never terminates, the program is valid; deciding which case holds for arbitrary p would solve the halting problem. It is, at least to me, clearly a drawback for a system to reject a valid program. (And in practice, fully static systems reject many valid programs.)
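The argument can be transcribed almost literally into Python (an illustrative sketch, not a formal proof; `p` stands in for the arbitrary predicate):

```python
def p():
    # Stands in for an arbitrarily complicated predicate. A sound static
    # checker cannot, in general, decide what this returns or whether it halts.
    return False

def f():
    if p():
        return 1 + "one"   # ill-typed expression, but never evaluated here
    return 0

print(f())  # 0 - the program executes correctly despite the ill-typed branch
```

A static checker must either reject `f` outright or prove a fact about `p` that is undecidable in general, which is exactly the trade-off described above.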


From the practical side, it depends on the people and actual languages involved.

I would be willing to postulate that you, with your specific skills, may have no drawback to the static typing case.

Would you be willing to postulate that I, with my specific skills, work less efficiently for some problems in common variants of the static typing case (e.g, Java) than I do in common cases of dynamic typing (e.g, Python, Ruby)?


Are you willing to postulate that the natural way *for me* to think about some problems is through heterogeneous collections, and that these are simple and natural to implement in the dynamic languages I usually use (Perl, Python and Ruby) and hard and unnatural to implement in Java (you need to do a ton of casts)? (As far as I know, it is impossible to implement them directly in Haskell due to the lack of casts; you'd have to restructure the problem, which is obviously possible but misses the point.)
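A minimal sketch of the kind of heterogeneous collection meant here, in Python (the tiny stack-machine `program`/`run` names are illustrative):

```python
# A heterogeneous list of tuples with different shapes, used as a tiny DSL.
program = [
    ("push", 3),
    ("push", 4),
    ("add",),
]

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

print(run(program))  # 7
```

In Python this needs no declarations at all; in Java the same list would be a `List<Object>` with a cast at every use site.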


Would you be willing to postulate that *for me* it is an advantage to be able to take different aspects of correctness in the order I choose, instead of having one particular aspect forced to be dealt with first and then have the rest dealt with afterwards?


If so, we agree that there are disadvantages, the question goes down to what the disadvantages are and when they're larger in either environment.

>
> > Anyway, to bring this more back on topic: What would the prediction for the future of static typing and dynamic checking? How would this be better in 25 years?
>
> Source code will have a dynamic system but the underlying JIT system will convert it to the static version automatically.


> > I think we'll get the ability to deal with this mess better.
>
> There is no mess.
>
> > In my ideal world, which I believe we are likely to have in 25 years, I'll start out with dynamic checking, allowing easy exploratory programming, quick response times, easy to do hacks to get things working, and so on. This works good for programming at a single module to medium size program (with a single person or a small, static team).
> >
> > When working for a larger environment, I'd like to be able to infer types when my module has become somewhat stable, and place these as declarations at the edges of the small module, giving definition to an API and giving people something to relate to that is more abstract that then actual code (and more direct than documentation).
>
> Static typing easily allows exploratory programming.

... and it also easily gets in the way of exploratory programming, at least in some of the ways I like to do it (using heterogeneous collections to define a sort of DSL, for instance).

>
> > I'd expect those types to be highly expressive and flexible as well - something like full dependent types, supporting much more complete proofs of my program than the presently common crop of statically typed languages.
>
> You can't go further than Haskell in proofs. It's math, and there are physical limitations to what can be done.

"Common crop" doesn't really include Haskell, but I'll also say that my impression is that there's a lot of work being done on dependent types etc. that makes it possible to go further than Haskell does. You can sort of fake it in Haskell, but (as far as I understand) it's not easy or natural for an average Haskell programmer. (I'm just a beginning Haskell programmer, so I may be misunderstanding.)


> > I expect simple transition between code, tests, types and proofs for the easy cases. I expect to be able to start on any end, and whatever is easy to automatically derive from what I have provided will (if I ask) be derived from the part I provide. Not anywhere near a 100% solution, but an 80% for the simple cases. If I write some tests for simple code, the system will (often) be able to derive code that pass those tests. If I write some untyped code, and the system will be able to derive types that fit the code (though the types may be too wide), and usually tests that make sure that (some) important aspects of the code still works. If I write some tests, the system will be able to use those to generate some part of the proof skeleton, while needing to help the system the complicated parts. I expect the use of code databases to resolve some of this - where the system look up the parts you've provided against some sort of template database from other people's code, and say "Hmm, this foo looks very much like that bar" and then offers me some simple analogy to the tests or proofs for that bar (or, if it is a more or less direct replacement, to just let me use bar).
>
> It will never work, because two pieces of code need to be
> identical in order to have the same proof. Remember, code
> is math.

Large parts of the code we write are very close to identical; you are much less of a unique snowflake than you think.

Given a piece of code, I can write a reasonable set of tests for it, or make a reasonable derivation of the set of properties I would like to prove for it, and cook up a proof. I see no reason that a sufficiently powerful computer can't do the same. Coq already does part of this when you construct a proof; you can have it do an exhaustive search for how to complete a portion of the proof. I expect this to improve significantly given a 1000x faster computer, better algorithms, and mountains of data to use for heuristics.


>
> > That's where I expect us to get: The benefits of dynamic typing when that is most beneficial. The benefits of deep static typing when that is most beneficial. And the ability to maneuver between these in a relatively easy fashion, with the computer doing as much as it can of the lifting.
>
> There are no benefits in dynamic typing that static typing
> doesn't have.

Let's just agree that there are no benefits to dynamic typing that you have the skills to exploit.



>
> Working with a statically typed language interpreter has
> the same benefits as working with a dynamic language: you
> can alter the types on the fly, don't exit the current
> test session, etc. You can do this with C++ and Haskell,
> if you wished.


It doesn't have the same benefits, as shown both by the proof above and by the simple question "Do you or the computer get to choose in what order to do the different aspects of your task?"

Working with a REPL for a statically typed language has *some* of the same benefits, but not all of them.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Programming in the Mid-Future Posted: Mar 19, 2010 11:31 AM
> So the code won't compile. I'm glad we agree. *Different* code would compile, and in some languages you could play around with a ton of crappy casts etc to sort of simulate how it would be with a dynamically checked language, but that's not the same code, and it is explicitly in the way of what I want: To not have to deal with that aspect of the codebase at that time, instead focusing on the aspect that I am interested in.

The code could always be run in an interpreter that does the type checks as the code runs.

> ... and your local changes can include type changes that
> make the other developers changes create type conflicts.

They are local changes. How are they supposed to create conflicts with the local changes of other developers?

> This has happened to me several times over the last weeks; I've been working on a cross-cutting concern that changed a core type that lots of other things inherited from, including new classes introduced by other developers while I was working on this.

Then the mistake is on your part: you should have worked on a version that did not include the changes of the others. The usual approach for such fundamental changes is to do them on a different branch, isolated from all the others.

>
> Putting your fingers in your ears and saying "That don't
> happen" about the things that regularly happen to me with
> static types isn't a convincing argument. At *best* it
> shows that you've learned to work around the problems with
> static types fairly effectively.

They don't happen if you follow the correct approach.

>
> You know what? I hardly ever get problems from types when
> programming in a dynamically typed language. Over the
> last 15 years, I've only twice seen a type error lead to
> extended debugging: Once in a Python system somebody else
> had written, where the fact that a string is also a list
> of strings had led to problems when somebody meant to pass
> a list containing a single string and passed the string,
> another where auto-vivification in Perl (perl makes up
> variable content when it is used) escalated a typo into a
> larger problem.

Good for you. What about the millions of other developers?

>
> But I don't propose that this means "There are no
> advantages to having the checking" - it just means that
> I've worked so much with dynamic languages that I so
> easily work around the challenges that are there.

It's good that you admit that.

>
> I've also worked with static languages, though not as much
> - and I only mention problems that I actually encounter
> when working with static languages.

You have problems because you want to apply the dynamic language approach to static languages. It won't work.

>
> Then you've already significantly interrupted the work I'm
> doing. It was this interruption that we were trying to
> get rid of, it was *not having the interruption* that is
> the advantage. Spending the time to set up a new project
> and then modifying it for each class is going to be as
> much of an interruption as taking the dual phase
> approach.

How about running the code in an interpreter that doesn't do type checking until the code runs?

> ... and also easily gets in the way of exploratory
> programming, at least some of the ways I like to deal with
> it (using heterogeneous collections to define a sort of
> DSL, for instance.)

In any language with generics/templates/subtypes you can have heterogeneous collections. That's practically all modern mainstream languages.
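For contrast, a statically checkable version of such a mixed collection typically restructures the mix into an explicit union (sum) type. A sketch in Python with type annotations (hypothetical `Push`/`Add`/`run` names; a checker such as mypy is assumed to verify it):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Push:
    value: int

@dataclass
class Add:
    pass

# The union plays the role of the "heterogeneous" element type.
Instr = Union[Push, Add]

def run(program: list[Instr]) -> int:
    stack: list[int] = []
    for instr in program:
        if isinstance(instr, Push):
            stack.append(instr.value)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

print(run([Push(3), Push(4), Add()]))  # 7
```

This is the "restructure the problem" route mentioned earlier in the thread: the heterogeneity is still there, but every admissible shape must be declared up front.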

> Large parts of the code we write is very close to
> identical; you are much less of a unique snowflake than
> you think.

"Very close" doesn't work. In math, even the slightest variation between two systems can bring chaotic differences in the end.

>
> Given a piece of code, I can write a reasonable set of
> tests for it, or have a reasonable derivation of what sets
> of properties I would like to prove for it, and cook up a
> proof. I see no reason that a sufficiently powerful
> computer can't do the same. Coq already does parts of
> this when you construct a proof; you can have it do an
> exhaustive search for how to complete a portion of the
> proof. I expect this to significantly improve with a
> 1000x faster computer to do it, better algorithms and
> mounts of data to use for heuristics.

That's not proof though. It's evidence.

> Let's just agree that there are no benefits to dynamic
> typing that you have the skills to exploit.

There are truly no benefits to dynamic typing over static typing. None has ever been demonstrated in any discussion, either on this site or others. And this is an old, very old discussion, with millions of words written about it.

>
> Doesn't have the same benefits, as shown both as a proof
> above and from the simple logical proposition "Do you or
> the computer get to choose what order to do different
> aspects of your task?"

That's the wrong question to ask; it results from trying to apply the dynamic language development approach to static language development.

>
> Working with a REPL for a statically typed language has
> *some* of the same benefits, but not all of them.

I did not propose to work with a REPL for a statically typed language. I did propose to work with an interpreter that does type checking only when the code runs, i.e. treating the code as dynamically type-checked.

David Rozenberg

Posts: 15
Nickname: drozenbe
Registered: Nov, 2009

Re: Programming in the Mid-Future Posted: Mar 21, 2010 7:42 AM
I am surprised to see that this blog went away from its original topic. My guess is that several statements in the original posting by Bruce Eckel were made from the standpoint of a software trainer and not of a practitioner, especially the statements related to 'parallel'. We have lived with matrix computers for over three decades already, and there has not been much progress in what we can do with them or how. It looks like we are stuck with the primitive, low-level way of programming such parallel computations, and there are very few areas where those are really necessary. About a year ago Intel's chief scientist asked the software community to come up with tasks that could be solved by, or that require, parallel computing - at that time Intel was working on a prototype with 70 CPUs on the crystal. Sorry, but I haven't heard of any new problems besides those that have existed for decades already.
Another statement that puzzled me was the one claiming that the qualifications of those who will be using computers as software professionals will be lower. We have this tendency already - most universities teach only one or two high-level languages. Nobody prepares software developers who can develop operating systems, compilers, IDEs, databases, etc. Not to mention that in a few years we'll be facing problems with legacy systems written over 30 years ago for mainframes at financial and insurance companies.
None of the above can be addressed with such forecasts and trends in computer science education.

David Snyder

Posts: 1
Nickname: matheme
Registered: Jan, 2010

Re: Programming in the Mid-Future Posted: Mar 26, 2010 8:18 PM
> Actually I think programming 25 years from now will not be
> very different from what we see now. Argument: not much
> has changed in the last 25 years and there is even less
> reason to suspect a change in the future, as the field
> mature.


But natural languages change as what we think about with them changes. The latter is changing rapidly; I think this will drive the need for a more efficient and scalable programming paradigm. But some people are resistant to change (those with an interest in the status quo), even when it is better for the whole. The end result will be a reconciliation of those forces.

em vee

Posts: 2
Nickname: emvee
Registered: Apr, 2009

Re: Programming in the Mid-Future Posted: Mar 26, 2010 9:03 PM
I clearly remember 25 years ago, and while we have fancy IDEs now, most of the same fundamental issues in software development haven't changed much.

There's still a disconnect between programmers and bosses; there's still a reluctance on all sides to take requirements analysis seriously. There's still a large gap between the best programmers and the average, and sloppy work to meet deadlines is still rewarded while quality suffers; buggy software is still, sadly, the norm.

The vision of ubiquitous diskless persistent storage: when SSDs become as cheap and fast as DRAM, some newer type of DRAM will be faster still, on-chip caches will be bigger, etc.; there will always be performance and cost differences. Ditto for superfast networks (not from my ISP!); local storage will also keep getting faster.

Perhaps the above differences will indeed become minuscule compared to today's, but just as we attempt more ambitious kinds of software today than 25 years ago, and still push the envelope of the hardware, so it will be in the future.

Predictions about transparency between local and remote data are also nothing new; I clearly remember this idea from the 1980's. Fact is, there will always be a difference between local and remote: even if blindingly fast, networks have certain fundamental characteristics that are different, including the possibility of disconnection, and then there are 'trust' issues.


Then there's the holy grail of reusability: this too has been predicted over and over again for decades. Fact is, it's the same dysfunctional business dynamics that reward bad work delivered on schedule that perpetuate not only the general lack of reusability, but the common lack of usability at all.


But, hey, it is fun to peer into the crystal ball.....

Oliver Goodman

Posts: 1
Nickname: oag
Registered: Aug, 2006

Re: Programming in the Mid-Future Posted: Mar 26, 2010 10:02 PM
> <h1><a id="security-via-suspicious-systems"
> name="security-via-suspicious-systems">Security via
> suspicious systems</a></h1>
> <p>I initially wrote about this idea in a science fiction
> story. A robot receives ideas visually, by signals passing
> through the eyes (no physical contact). The ideas -- new
> code, basically -- pass into a kind of limbo where they
> are analyzed for suspicious content. I think not only
> swarm testing would come into play, but logical testing,
> checking whether an object does what it says it will do.
> Only after it is thoroughly tested will it be incorporated
> into the program.</p>
> <p>Note that this not only helps in system integration for
> your own code, but allows much greater use of
> off-the-shelf components as well.</p>

The object capability model and its adherents have solutions to a lot of security problems that are, by and large, currently unaddressed. By solving these problems (for example, by allowing us to safely and usefully run code of unknown provenance) it does not just rather boringly 'solve security problems'; it becomes a true enabling technology. There's a little uptake now (e.g. Caja), but I think we can expect to see a lot more.
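The core idea can be sketched in a few lines of Python (names like `ReadOnlyFile` and `untrusted_plugin` are invented for illustration; this is not Caja, just the shape of the pattern): untrusted code receives no ambient authority, only explicit capability objects for the resources it may touch.

```python
import tempfile

class ReadOnlyFile:
    """A capability: grants read access to one file, and nothing more."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_plugin(file_cap):
    # The plugin holds no ambient authority: it cannot open arbitrary
    # paths or reach the network, only use the capability it was handed.
    return len(file_cap.read())

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

print(untrusted_plugin(ReadOnlyFile(path)))  # -> 5
```

Revoking or narrowing access then becomes a matter of which objects you hand out, rather than of sandboxing machinery bolted on afterwards.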

Alexei Kaigorodov

Posts: 3
Nickname: rfq
Registered: Jan, 2008

Re: Programming in the Mid-Future Posted: Mar 27, 2010 12:17 AM
Yes, many of these ideas have been around for a long time. Rather than present them one more time, let's think about why they have been failing so far.

As for:
Stupidly parallel objects
Persistent diskless environment
Transparency between local and cloud
Effortless data stores
Effortless System Integration

these goals cannot be reached if we still base our design on the good old notion of a variable (with an object considered a complex variable).
A variable is declared in the source program and is mapped onto a physical unit of memory. To get, say, transparency between local and cloud, we have to map one variable onto multiple chunks of memory and, as a result, synchronize them to preserve the semantics of a variable. And here we fail.

The way out is to keep the local variables of procedures, to rethink and somewhat change variables as object fields, and to completely discard variables as files or remote objects.
Functional programming is a step in the right direction, but only a first step, addressing computation; we need a second step, addressing storage.
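The contrast the post draws can be sketched in Python (a toy illustration with invented names): a mutable variable replicated across two stores forces synchronization on every write to preserve variable semantics, while an immutable value can be copied anywhere without coordination, because "updates" just produce new values.

```python
import threading
from collections import namedtuple

# A mutable "variable" replicated across two stores: every write must
# update both copies under a lock, or their contents diverge.
local_copy = {"balance": 100}
remote_copy = {"balance": 100}
lock = threading.Lock()

def write(key, value):
    with lock:                      # synchronization is unavoidable
        local_copy[key] = value
        remote_copy[key] = value

write("balance", 150)

# An immutable value, by contrast, never changes: every replica stays
# valid forever, and an "update" simply creates a new value.
Account = namedtuple("Account", "balance")
v1 = Account(balance=100)
v2 = v1._replace(balance=150)       # new value; v1 is untouched
print(v1.balance, v2.balance)       # -> 100 150
```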

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: Programming in the Mid-Future Posted: Mar 27, 2010 4:56 AM
> Fact is, it's
> the same dysfunctional business dynamics that reward bad
> work on schedule that perpetuate not only the general lack
> of reusability, but the common lack of usabilty at all.
>
>
> But, hey, it is fun to peer into the crystal ball.....

Making predictions is a mechanism modern societies use to reinforce their morals and inform themselves about what shall be done and what shall be left undone. On the level of collective behavior, the Old Testament "Thou shalt ..." is replaced with "There will be ..." as a force acting on everyone who wants to be part of a progressive movement and doesn't want to fall behind. Those are not few.

Saying that reusability, modularization, thorough testing, and abstraction of this and that will dominate the future is actually making value statements. Progress will turn all those good things into reality by magic, and you'll be part of this bright future - so why don't you care and start right now?

Just look at the replies here. Some participants took Bruce's list of predictions as a requirements list and started fleshing them out ...

Gervase Gallant

Posts: 6
Nickname: gervase
Registered: Dec, 2002

Re: Programming in the Mid-Future Posted: Mar 27, 2010 11:28 AM
One thing this article doesn't talk much about is the existence of legacy code and strategies we will develop to deal with it.

There's an ever-increasing amount of great code out there that doesn't need to be rewritten... maybe ever!

However, we will need to understand how it behaves and we might need to occasionally refactor it. For example, a web server is coded on the assumption that it will need to read from a file system. We wouldn't need to throw out great existing web server apps.

But we might need some patterns and understanding of anti-patterns that would allow us to write to one of these new-fangled diskless storage devices.

I suspect a lot of programmers will spend a lot of time working with "retro" programs, in much the same way that we use and appreciate well-built antiques from an older age.

Charles McKnight

Posts: 3
Nickname: cmcknight
Registered: Dec, 2005

Re: Programming in the Mid-Future Posted: Apr 11, 2010 8:47 PM
Bruce,

A lot of what you're describing as the future is already happening at the functional end of things. For example, Erlang makes it possible to create "stupidly parallel" things (I hesitate to call them objects, but you can if you want) while providing scalability, fault tolerance, great cross-platform portability, pretty effortless evolvability, etc. I'd hesitate to claim to be an Erlang evangelist (I'm not the religious type), but what I see occurring is that OO seems to have peaked as a paradigm, with dynamic scripting languages and functional programming languages coming into their own as a new paradigm.
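Erlang itself isn't shown here, but the shared-nothing, message-passing style it popularized can be roughly approximated in Python (a toy sketch with invented names, using a thread and queues in place of real Erlang processes): each "process" owns its state and communicates only via messages, which is what lets many of them run without locks.

```python
import threading
import queue

def actor(inbox, outbox):
    # An Erlang-style process: shares no state, reacts to messages
    # from its inbox, and replies through its outbox.
    while True:
        msg = inbox.get()
        if msg is None:     # sentinel: shut the actor down
            break
        outbox.put(msg * msg)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=actor, args=(inbox, outbox))
t.start()

for n in (2, 3, 4):
    inbox.put(n)
inbox.put(None)
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # -> [4, 9, 16]
```

In real Erlang the processes are far cheaper than OS threads and supervised for fault tolerance, which is where the "stupidly parallel" scalability actually comes from; this sketch only shows the messaging discipline.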

But your mileage may vary..... ;-)

Best,

Chuck
