The Artima Developer Community

Weblogs Forum
Software Development Has Stalled

164 replies on 11 pages. Most recent reply: Mar 28, 2010 10:20 AM by Florin Jurcovici

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: Software Development Has Stalled Posted: Feb 23, 2010 7:14 AM
> The real issue is that data can be used in a myriad of ways, but the strict
> and formal representation of data as tables and relations in an RDBMS is an
> obstacle to reusing the data in interesting ways.

That last assertion requires substantiation; relational data can, through well-documented algorithms, be transformed into any bloated alternative. Leaving aside blobs of all kinds, Codd and Date and others have proved, with relatively simple set theory, that the relational model (which means it is not just syntax a la xml), in BCNF form, is the most parsimonious data structure; at least an order of magnitude less than xml variants, for example. The relational model subsumes its predecessors, the hierarchical (now xml) and network (some try this with xml, to their pain). The world really is relational, not hierarchic, despite what some people assert. The xml folk have been trying to fake relations ever since introducing idref. Stuffing all related data, somehow, into each document is massively redundant. The silliness of binary and compressed xml is backpedaling away from the bloat.


> When I say 'undisciplined data' I mean data not related to a particular schema, not data not going through a transaction manager.

There must always be a schema; you just might not recognize it, and the transaction manager (whichever and wherever it is) must have access to it. With xml, for example, meta-data is mixed in with the data (a problem recognized early on, and by now largely ignored). Others put it in dtd and xsd. Data without "schema" is not data, just noise. Those who claim that xml is "self describing" don't understand that this "description" is fully dependent on common human language, a commonly understood (assumed) semantic context, and so on.

Take this:

<sdklfj>
sdfkjskkjsldfkj
</sdklfj>

Tell me what it means. Of course, you can't, since the meaning of the tags is unknown to your brain. I could have used Swahili, and you still wouldn't know. The meaning of the data is unknown for the same reason. xml is just syntax and text; if it were truly self-describing, automatons could write the application code for processing xml files, and we all know that's not possible. While parsers can turn the xml text into code entities, the coder must write explicit code to handle each xml element's text, one by one.

Also, and most important, data without a schema will fail when handed to a transaction manager. Some historical context is in order. Back before the relational database, IBM had files, with structure (not unlike xml, by the way) stored externally to the file, but available; thus was born VSAM. COBOL and VSAM procreated happily in multitudes of (separated) nuclear families, and then, one day, many disk drives appeared. And it became clear that these many families wished to procreate together, much as Mormons do, but it was quickly seen that chaos, and many deficient children, ensued. So IBM brought down from the mount, on gold tablets, CICS; the longest-lived transaction manager in computerdom.

CICS allowed those many COBOL programs to intermarry with those many VSAM files in such a way as to not birth inbred retards. CICS did this by reading the schema information of the VSAM files, and only allowing one COBOL program to cleave to one VSAM file at a time. Thus was discovered the conflict serializable schedule; although not yet known by that name.

And CICS begat IMS, which is the hierarchical database implementation, inferior to, but not beholden to, the network model (IBM didn't control the network model, CODASYL did, which meant IBM just had to have something that wasn't standardized). All transaction managers require "schema" data; you can't have transactions without it.

One cannot have transactions on undisciplined data (your definition) precisely because the meta-data is needed by the transaction manager. Now, one can gin up a half-hearted attempt at transaction management in application code; COBOL programmers have been doing that since your grandfather's time, and many still do. But all requests for update, and even read, depending on how strict one wishes to be, on a given datastore must go through one transaction manager (or one application) for that datastore. This is an algorithmic requirement, not an implementation choice. One could attempt to embed transaction management in a webserver, but doing so will simply repeat what IBM, Oracle, and M$ have done over the last 4 decades (more or less).

The lock-in to application controlled update, as distinguished from transaction manager controlled, is what today's database ignorant coders crave. They may get their way, but getting one's way doesn't always mean doing what's best.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Feb 23, 2010 7:45 AM
Robert,

A schema is required in the context of a transaction only if you have a relational DBMS. The meaning of transaction is that all the operations in it succeed or none succeeds.

For example, let's say we have 3 files that we want to update. First, we read the modification timestamps of the files; then we open each file and compare its modification timestamp with the original. If it is the same as the original, we proceed with changing the file. If we find that a file's timestamp has changed, we abort the operation and restore the files to their previous state. This is a transaction that does not involve a relational schema.
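A minimal Python sketch of that scheme (the rollback-via-backup detail, function names, and the `apply_changes` callback are my own illustration, not a production transaction manager, and it is still racy between the check and the write):

```python
import os
import shutil
import tempfile

def optimistic_update(paths, apply_changes):
    """Update several files only if none changed since we read them.

    Snapshots each file's modification time, keeps backup copies, applies
    the caller-supplied changes, and rolls every file back if anything
    fails.  Returns True on commit, False on abort.
    """
    # Snapshot the modification timestamps before doing anything.
    original_mtimes = {p: os.path.getmtime(p) for p in paths}

    # Keep backup copies so we can restore the previous state on abort.
    backups = {}
    for p in paths:
        fd, backup = tempfile.mkstemp()
        os.close(fd)
        shutil.copy2(p, backup)
        backups[p] = backup

    try:
        # Re-check: if any file was modified after our snapshot, abort.
        for p in paths:
            if os.path.getmtime(p) != original_mtimes[p]:
                return False
        apply_changes(paths)   # the caller-supplied modifications
        return True
    except Exception:
        # Any failure: restore every file from its backup (all-or-nothing).
        for p, backup in backups.items():
            shutil.copy2(backup, p)
        return False
    finally:
        for backup in backups.values():
            os.remove(backup)
```

Note the all-or-nothing shape: either every file is changed, or every file is restored.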

Another example: transactional memory. With the help of atomic operations, an atomic word in memory is updated by reading its value, then writing the new value only if the current value still equals the one read; that's a transaction. We can build a whole lot of thread-safe operations like this, and hence transactional memory.
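The compare-and-swap retry loop behind that idea looks roughly like this; CPython exposes no hardware CAS instruction, so this sketch simulates the atomic word with a lock (the class and function names are my own), but the calling pattern is the real thing:

```python
import threading

class AtomicWord:
    """A single word updated only via compare-and-swap.

    Real transactional memory uses a hardware CAS instruction; here
    atomicity is simulated with a lock, but callers use it the same way.
    """
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, expected, new):
        """Install `new` only if the word still equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def atomic_increment(word):
    """The classic CAS retry loop: read, compute, attempt, retry on conflict."""
    while True:
        old = word.load()
        if word.compare_and_swap(old, old + 1):
            return old + 1
```

Each failed `compare_and_swap` is an aborted micro-transaction that is simply retried.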

The problem I referred to in my previous post regarding databases is that information is organized in tables and that there are predefined relations between the tables. This is a rigid structure that prohibits any manipulation outside of the context that it was created for. This is a real problem for software development! It's one of the reasons software development has stalled, in my opinion.

A much better approach would be to forget tables, columns, rows and relations. Information should be stored in key-value pairs; it should be the computer's task to identify redundant information inside the key-value pair store; it should also be the computer's task to identify relations inside the information store. This is way more flexible than any RDBMS and much more future-proof, if you ask me.

From a technical point of view, the key-value database system should be free to create indexes, tables and relationships, depending on criteria defined at run time, i.e. size of data, content of queries, frequency of queries, etc. Most of the options in an RDBMS are for optimization anyway; for example, primary keys, indexes and relationships exist in order to have optimized searching and avoid data redundancy. I think computers today are powerful enough to do this work themselves and not put the burden on the developer.
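As a toy illustration of "indexes created from the queries", here is a key-value store that builds an index on a field the first time someone queries on it; the store layout and method names are invented, and it carries none of an RDBMS's guarantees:

```python
class LazyIndexStore:
    """Key-value store of dict records that indexes a field on first query.

    Illustrates indexing that follows observed query patterns instead of
    being declared up front in a schema.
    """
    def __init__(self):
        self._records = {}   # key -> record (a dict of fields)
        self._indexes = {}   # field name -> {value -> set of keys}

    def put(self, key, record):
        self._records[key] = record
        # Keep any already-built indexes up to date.
        for field, index in self._indexes.items():
            if field in record:
                index.setdefault(record[field], set()).add(key)

    def find(self, field, value):
        # First query on this field: build the index, then reuse it.
        if field not in self._indexes:
            index = {}
            for key, record in self._records.items():
                if field in record:
                    index.setdefault(record[field], set()).add(key)
            self._indexes[field] = index
        return sorted(self._indexes[field].get(value, set()))
```

The first `find` on a field pays the full scan; later `find`s on that field are dictionary lookups.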

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: Software Development Has Stalled Posted: Feb 23, 2010 8:37 AM
> Robert,
>
> A schema is required in the context of a transaction only
> if you have a relational DBMS. The meaning of transaction
> is that all the operations in it succeed or none succeeds.
>
Schema is just the name used by (some) RDBMS vendors and users. It is also called catalog, among other terms. It is, whatever it is called, the meta-data. In most cases, this meta-data is more extensive than that provided by alternate datastores; which is partly a function of the designer. RDBMS's don't require that the designer/specifier follow 3NF or BCNF or any normal form. One can, and java and php and COBOL coders typically do, define a set of tables which look just like flat files in the operating system. MySql (vanilla) is an example of a SQL parser fronting the file system. Some refer to it as an RDBMS, but it isn't. It's just a SQL parser, used as a primitive file interface. Looking at such schemas, one will not see the point of the RDBMS, just because the point has been ignored.

>
> For example, let's say we have 3 files that we want to
> update. At first, we read the modification timestamp of
> the files; then we open each file, and we compare the
> modification timestamp of each file with the original. If
> it the same with the original, then we proceed the with
> changing the file. If we find that a file timestamp has
> changed, then we abort the operation and we restore the
> files to their previous state. This is a transaction that
> does not involve a relational schema.

Who controls locking of those files? Your application? Do you ask the OS for a lock? Do you set a flag your application (or language) understands? Do you lock the entire file(s)? What if you need to update one small part of one file? Do you lock all the other data in that file and all the data in the others while you fiddle the bits? How do you know which parts of the 3 files are related? Do you update the files in place, or do you write out a new copy of each file, before and after update? Where do you log updates (including adds and deletes)? It gets really messy really, really fast.

If you want the updates to not trash your data, then your application code *must* know the meta-data of the data it is updating. You can embed that meta-data and logic in each and every application program; COBOL and java coders like to do that. Or you can rely on the database engine to ensure that any application, even a data editor, correctly updates the data. You really need to review G&R and W&V; they describe why such a path is doomed. Remember, it's been 50 years (yes, that long) since CICS attempted to do that. IBM is still fixing it; it ain't an easy task.

> The problem I referred to in my previous post regarding
> databases is that information is organized in tables and
> that there are predefined relations between the tables.
> This is a rigid structure that prohibits any manipulation
> outside of the context that it was created for.

You say that very glibly, but that doesn't change the fact that the hierarchic structure of IMS and xml is far more rigid. This is the main reason that Dr. Codd devised the relational model of data: the hierarchic structure (it wasn't and isn't a model) is fully rigid. That's just the way it happened.

> This is a
> real problem for software development! It's one of the
> reasons software development has stalled, in my opinion.
>
> A much better approach would be to forget tables, columns,
> rows and relations. Information should be stored in
> key-value pairs; it should be the computer's task to
> identify redundant information inside the key-value pair
> store; it should also be the computer's task to identify
> relations inside the information store. This is way more
> flexible than any RDBMS and much more future proof, if you
> ask me.
>
> From a technical point of view, the key-value database
> system should be free to create indexes, tables and
> relationships,

Creating relationships on the fly (by application code, presumably) alters the semantics of the data for all applications that use the data. *If* your application is the only one using the data, then it doesn't matter, except for the fact that all of your programs must be refactored to the altered semantics each time you do this.

> Most of the options in RDBMS are for
> optimization anyway;

That's just wrong. You need to review. The bits and pieces are there to ensure data integrity.

> for example, primary keys, indexes
> and relationships exist in order to have optimized
> searching and avoid data redundancy.

No, absolutely not; search optimization is supported by secondary indices, sometimes. They're there to enforce data integrity. Avoiding data redundancy, yes; which is the whole point of the relational model. Some of us think that's a goal worth pursuing. It's not just avoiding bloat, but mostly about avoiding logic errors.

> I think computers
> today are powerful enough to do this work themselves and
> not put it on the burden of the developer.

And how can the computers do this without the meta-data? As I said last time: if, by way of example, xml were truly self-describing (including dtd and xsd) then one could write an automaton to generate the application language source to manage arbitrary xml. Hasn't, and won't, happen.

In a nutshell, no multi-user datastore can exist without a transaction manager. Whether you choose to write one yourself, or leverage 40 years of research, development, and implementation is up to you. Just be aware that if you think you have a method of doing so which hasn't been tried (and found wanting), you're wrong. These other methods have all been tried, and they failed. The reason: transactions are math, and that has been explored continually since CICS.

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Feb 24, 2010 3:52 AM
> > For example, let's say we have 3 files that we want to
> > update. First, we read the modification timestamps of
> > the files; then we open each file and compare its
> > modification timestamp with the original. If it is the
> > same as the original, we proceed with changing the file.
> > If we find that a file's timestamp has changed, we abort
> > the operation and restore the files to their previous
> > state. This is a transaction that does not involve a
> > relational schema.
>
> Who controls locking of those files? Your application?
> Do you ask the OS for a lock? Do you set a flag your
> application (or language) understands? Do you lock the
> entire file(s)? What if you need to update one small part
> of one file? Do you lock all the other data in that file
> and all the data in the others while you fiddle the bits?
> How do you know which parts of the 3 files are related?
> Do you update the files in place, or do you write out a
> new copy of each file, before and after update? Where do
> you log updates (including adds and deletes)? It gets
> really messy really, really fast.

It was just an example to show you that transactions and schemas are unrelated things. In my example, the O/S provides the mechanism for locking the files and the applications use this mechanism. I am not saying that this is how information stores should work.

>
> If you want the updates to not trash your data, then your
> application code *must* know the meta-data of the data it
> is updating. You can embed that meta-data and logic in
> each and every application program; COBOL and java coders
> like to do that. Or you can rely on the database engine
> to ensure that any application, even a data editor,
> correctly updates the data. You really need to review G&R
> and W&V; they describe why such a path is doomed.
> Remember, it's been 50 years (yes, that long) since CICS
> attempted to do that. IBM is still fixing it; it ain't
> an easy task.

I am not saying that meta-data are not useful. I am saying that meta-data are not related to transactions in any way. Please don't look at the term 'transaction' only in the context of an RDBMS.

>
> > The problem I referred to in my previous post regarding
> > databases is that information is organized in tables and
> > that there are predefined relations between the tables.
> > This is a rigid structure that prohibits any manipulation
> > outside of the context that it was created for.
>
> You say that very glibly, but that doesn't change the fact
> that the hierarchic structure of IMS and xml is far more
> rigid. This is the main reason that Dr. Codd devised the
> relational model of data: the hierarchic structure (it
> wasn't and isn't a model) is fully rigid. That's just the
> way it happened.

Yes, it's far more rigid, and so far more problematic. But I am not sure why you are mentioning it. The point of this discussion is what to do to solve the problem of software development having stalled, and part of that process is to identify the problem.

>
> > This is a
> > real problem for software development! It's one of the
> > reasons software development has stalled, in my opinion.
> >
> > A much better approach would be to forget tables, columns,
> > rows and relations. Information should be stored in
> > key-value pairs; it should be the computer's task to
> > identify redundant information inside the key-value pair
> > store; it should also be the computer's task to identify
> > relations inside the information store. This is way more
> > flexible than any RDBMS and much more future-proof, if
> > you ask me.
> >
> > From a technical point of view, the key-value database
> > system should be free to create indexes, tables and
> > relationships,
>
> Creating relationships on the fly (by application code,
> presumably) alters the semantics of the data for all
> applications that use the data. *If* your application is
> the only one using the data, then it doesn't matter,
> except for the fact that all of your programs must be
> refactored to the altered semantics each time you do
> this.

No, relationships should not be created by the application code. The computer itself will discover the relationships. The relationships would then be available to all.

>
> > Most of the options in RDBMS are for
> > optimization anyway;
>
> That's just wrong. You need to review. The bits and
> pieces are there to ensure data integrity.

Suppose you had a computer with infinite speed, and a bunch of data not normalized. You could easily find the normalized form of the data each time you requested some of it, by brute-forcing your way, comparing each and every bit of data with every other bit of data.

Please understand that I am not saying that the strong typing of data should be abolished; what I am saying is that the recognition of data relationships should be left to the system.

>
> > for example, primary keys, indexes
> > and relationships exist in order to have optimized
> > searching and avoid data redundancy.
>
> No, absolutely not; search optimization is supported by
> secondary indices, sometimes. They're there to enforce
> data integrity. Avoiding data redundancy, yes; which is
> the whole point of the relational model. Some of us think
> that's a goal worth pursuing. It's not just avoiding
> bloat, but mostly about avoiding logic errors.

Primary keys are there to use as record identifiers. If you don't have records (and rows and columns), then you don't need primary keys. The only reason primary keys exist is as record identifiers; i.e. instead of searching the whole table each time you want to locate a record, you use the identifier to refer to the record directly.

Again, if you had an infinitely fast computer, you wouldn't use primary keys; you would simply brute force your way through the database and compare records to records.

So, primary keys are an optimization.

In a key-value system, you don't need primary keys, because the key is the identifier.

Indexes are not related to data integrity; their only role is to increase speed of sorting and searching. Again, if you had an infinitely fast computer, you wouldn't need indexes.

Since indexing requirements depend on what queries will be done to the system, it's better to construct the indexes after the queries; which means, it's the system that should take care of indexing.

Relationships between tables exist in order to avoid data redundancy; again, in an infinitely fast computer, there would be no redundancy, because each time new data entered the system, they would be compared to all the existing data and they would not be stored twice in it if they were found to already exist in the system. So, relationships are again, an optimization.

A key-value system can easily take care of most data integrity needs, by selecting the appropriate key. All the other integrity requirements that cannot be expressed via a key-value system fall into the non-easily-provable category, and so they need code. RDBMS systems provide stored procedures for exactly this reason.

>
> > I think computers
> > today are powerful enough to do this work themselves
> > and not put the burden on the developer.
>
> And how can the computers do this without the meta-data?
> As I said last time: if, by way of example, xml were
> truly self-describing (including dtd and xsd) then one
> could write an automaton to generate the application
> language source to manage arbitrary xml. Hasn't, and
> won't, happen.

I did not say that meta-data are not needed at all. Obviously, some form of meta-data is required. What I am saying is that RDBMS is one of the reasons for software development having stalled.

>
> In a nutshell, no multi-user datastore can exist without a
> transaction manager. Whether you choose to write one
> yourself, or leverage 40 years of research, development,
> and implementation is up to you. Just be aware that if
> you think you have a method of doing so which hasn't been
> tried (and found wanting), you're wrong. These other
> methods have all been tried, and they failed. The reason:
> transactions are math, and that has been explored
> continually since CICS.

I do not disagree with you at all in this. In fact, I support it in more ways than you can imagine. What I am saying though is that the computer can do that math itself without assistance from the programmer, if we choose another representation of data.

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: Software Development Has Stalled Posted: Feb 24, 2010 5:19 AM
> I am not saying that meta-data are not useful. I am saying
> that meta-data are not related to transactions in any way.
> Please don't look at the term 'transaction' only in the
> context of an RDBMS.

The only fact which prevents writing "robert" to an integer entity is the meta-data.
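A bare-bones sketch of that point: it is the declared meta-data, and nothing else, that lets code reject "robert" as a value for an integer column. The `validate` helper and the schema layout are hypothetical, for illustration only:

```python
def validate(row, schema):
    """Reject a row whose values do not match the declared column types.

    `schema` maps column name to the required Python type; this mapping
    is the meta-data.  Without it there is no basis for refusing a
    string where an integer belongs.
    """
    for column, required_type in schema.items():
        if not isinstance(row.get(column), required_type):
            raise TypeError(
                f"{column!r} must be {required_type.__name__}, "
                f"got {row.get(column)!r}")
    return row
```

Delete the `schema` argument and the function has nothing left to check; that is the sense in which meta-data is the only guard.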


> No, relationships should not be created by the application
> code. The computer itself will discover the relationships.
> The relationships would then be available to all.

By what logic?

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Feb 26, 2010 4:19 AM
> The only fact which prevents writing "robert" to an
> integer entity is the meta-data.

That is about type safety, not about transaction management.

>
>
> > No, relationships should not be created by the
> > application code. The computer itself will discover
> > the relationships. The relationships would then be
> > available to all.
>
> By what logic?

By query logic. When a query requests information that is combined over a criterion, then a relationship is formed.

For example, if the query is "all products bought by client John Smith", then the computer should make a connection between the client John Smith and the products he has bought.

A traditional RDBMS approach is to design a table of products, a table of customers and a table of purchases that contains foreign keys to the tables of products and customers, as well as to build the appropriate indexes and primary keys for any possible query that may be required.

Another approach would be to simply store the product, customer and purchase data as simple key/value pairs, and when the query is done, the computer can build any tables or indexes of the requested information as required.
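A sketch of that query-time approach: the customer-to-product relation is derived from flat key/value records at the moment of the query, rather than declared up front as foreign keys. The record layout (a "kind" tag on each record) is invented for illustration:

```python
def products_bought_by(store, customer_name):
    """Answer "all products bought by <customer>" over flat records.

    `store` maps keys to dicts tagged with a "kind" field.  The join
    between customers, purchases and products is computed here, at
    query time; nothing in the store declares it.
    """
    # Step 1: find the keys of customers with the requested name.
    customer_keys = {
        k for k, r in store.items()
        if r.get("kind") == "customer" and r.get("name") == customer_name}
    # Step 2: collect product keys from purchases by those customers.
    product_keys = {
        r["product"] for r in store.values()
        if r.get("kind") == "purchase" and r.get("customer") in customer_keys}
    # Step 3: resolve product keys to product names.
    return sorted(store[k]["name"] for k in product_keys)
```

Every query pays a full scan, which is exactly the cost robert's side of the argument says indexes and declared relationships exist to avoid.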

Florin Jurcovici

Posts: 66
Nickname: a0flj0
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Mar 4, 2010 1:16 AM
> Organising people into teams productively is the oldest
> problem around. There is still plenty of scope to
> improve, but it mostly depends on the internal
> characteristics of the people involved, and is thus not
> amenable to systematic solutions.

I disagree. I think a good team leader versus a bad one, for example, can make a huge difference in terms of productivity, even when in charge of the same team; a much larger difference than swapping out an individual team member. And proper leadership skills can be acquired, with the right training.

Florin Jurcovici

Posts: 66
Nickname: a0flj0
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Mar 4, 2010 1:31 AM
I dunno, I think there's still some potential in tools too. For instance, debuggers for network apps are still pretty bad.

On the other hand, it all depends on what you see as a development tool. Is the assembly line a tool for car makers, the same way an electric screwdriver is? If so, things like google wave are also tools for programmers. And then there's still huge potential for development.

What I'd like to see is tools for pair debugging. For instance, I work on the client part for a web app, written mostly using JavaScript, and just calling some web services on the server. The person in charge of the web services and I get together, each with his own machine, start a common debugging session, and have an integrated view on both the client and the server, as we step through the application. Since more and more apps, both on the web and on intranets, tend to become web apps, I think such a tool would be wonderful. There's no project I know in this direction.

You may be right for part of the programming world. Until now, most programmers had to struggle with low level details, in many cases hardware- or compiler-bound. I do agree that this part of the programming world tends to become rather stable. But until a very few years ago we didn't even have a decent programming framework/model/platform for web applications. So I think there still is a lot of space for innovation.

The most popular IDE of the moment, Eclipse, has become bloatware. I don't think it's because of the stupidity of the good people working on the eclipse project, I think it's because there's no decent solution to the complexity problem in the current world of programming. Tackling complexity at such levels still has to be addressed. Another open challenge, in which I think there will be advances quite soon, given the need for it.

Bill Pyne

Posts: 165
Nickname: billpyne
Registered: Jan, 2007

Re: Software Development Has Stalled Posted: Mar 4, 2010 12:42 PM
You may find this interesting.

http://www.edge.org/3rd_culture/gelernter/gelernter_index.html

Gabriel Belingueres

Posts: 1
Nickname: nanobomber
Registered: Jan, 2007

Re: Software Development Has Stalled Posted: Mar 11, 2010 7:35 PM
Maybe it is just my way of thinking, but I tend to think that every scientific achievement is related to the LAW of supply and demand. LAW with uppercase, because there is NOTHING that can escape from it: it is just like a black hole in our society.

Now, consider that computer science is a relatively young field, a science from the 20th century, compared to older sciences like math or chemistry (aka alchemy?). Now we can ask ourselves: how many CS, math and chemistry professionals does our society need, meaning the least number that fulfills the current and near-future demand for CS/math/chemistry work? I guess that the CS numbers would be higher, and if you think this way, it means that there are more busy CS people than math people; ergo math people have more opportunities to advance their science than we have.

Now CS people are busy (making money or just surviving), but there is not much room for innovation if it is not driven by customer demand. In Java technology I think industry is more innovative than academia, by far, and this means people are busy innovating to make money, not for the sake of improving science.

What do people try to do at work? They try to finish it and get to the next job, and there is room for improvement only when you have some free time to invest in it. If you are busy you just take the next job assignment and move on.

What I'm trying to say is: if software development has stalled, it is because it is good enough to keep people happy just the way it is.

Bottom line is: if you have some free time you can (or are forced to) innovate (or learn something new, just to eat), because there is no demand, or you just suck.
If you have no free time you walk the path of least resistance to get your work done: do it the way you already know, because innovating has its risks, and because you know the old way works. That means you are not innovating, but your investment is in fact returning.

Gabriel

Martin Blackwell

Posts: 1
Nickname: blackwellm
Registered: Mar, 2010

Re: Software Development Has Stalled Posted: Mar 25, 2010 4:02 AM
Coming from an 80x24 background (Apple II onwards), I believe the web has set application development back to before this period. HTML is fundamentally 'display' technology, so we have gone from a view that is obvious - the flashing cursor - to one where INPUT is to be guessed at by the user. We have given individuals with no grounding in IT the ability to generate ANY interface they like, and so they do. Some will call this 'freedom'. The result is the chaos we now see. We will not get out of this position for a long time.

From an application development perspective the choice is so wide and chaotic that I believe the majority are choosing the comfort zone (.Net or Java). These two technologies of course compete and are rarely complementary - try using any Java toolset to parse a complex WSDL from a web service. A very small number of people have realised that smart client + standard design patterns + well designed metadata = most applications you will need, but they are too busy making money to share the process. I have yet to come across a toolset/framework/IDE/SaaS/PaaS that works well on the web that is also stable, predictable, well-documented and supported (for my sins, I have investigated more than I can remember, so please do not suggest your favourite).

I am saddened and astonished by the constant re-invention of the wheel. Perhaps it is in the nature of the beast to like shiny and new.

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: Software Development Has Stalled Posted: Mar 25, 2010 4:38 PM
> Perhaps it is in the nature of the beast to
> like shiny and new.

It is the nature of youth to ignore the lessons of history. In most professions, youth are ignored for just that reason. In "computer science" (which, by the way, is a curriculum invented to placate undergraduates too stupid to do EE, then the course of study for computer engineering) youth re-invent stuff found wanting by their elders years or even decades earlier. Jonathan Swift probably had Gulliver visit a country with such behaviour.

Florin Jurcovici

Posts: 66
Nickname: a0flj0
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Mar 26, 2010 3:13 AM
> (...) I believe the web has set application development back to
> before this period.

I believe your view of what the web has brought with it is shallow. It's not the display technology that the web is about; it's a fundamentally different deployment model, IMO. A developer who has grasped the beauty of loosely coupled components, written as lightweight components running inside a browser, talking to various server-side services and exchanging data over a standardized protocol, will never want to go back to rich clients or old-age client-server applications. This doesn't mean that smart clients, design patterns and well-designed meta-data are less important now than they were before web 2.0.

I agree there's a chaos of tools, frameworks and toolkits out there, especially in the Java world. I also agree that 90% of them are unpleasant to work with. But I wouldn't say that working directly with CORBA or DCOM calls was any more pleasant. And enterprise applications - which are what take most of the development effort - are not about smart clients, but about the network. And there are definitely good choices available for building any part of a modern web application.

As for .Net and Java not being complementary: they were never intended to complement each other. They fight fiercely to displace each other. Nevertheless, it is perfectly possible to write a Java client that consumes .Net web services and vice versa. Provided you know what you're doing.
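A minimal sketch of why that cross-stack interop works, shown here in Python for brevity (the GetQuote operation, service namespace and field names are all invented for illustration): both sides only need to produce and consume the standard SOAP envelope, so neither implementation stack ever sees the other.

```python
# Hedged sketch: a client on any stack (Java, .Net, Python) can consume a
# SOAP service on any other stack, because the wire format is standardized.
# The service namespace and operation names below are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockservice"  # hypothetical service namespace

def build_request(symbol):
    """Build a SOAP 1.1 request envelope for a hypothetical GetQuote operation."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(env, encoding="unicode")

def parse_response(xml_text):
    """Extract the result element from a SOAP response, whichever stack sent it."""
    root = ET.fromstring(xml_text)
    result = root.find(f".//{{{SVC_NS}}}GetQuoteResult")
    return result.text if result is not None else None

# A canned response such as a .Net service might return over HTTP:
response = (
    f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
    f'<soap:Body><GetQuoteResponse xmlns="{SVC_NS}">'
    f'<GetQuoteResult>42.50</GetQuoteResult>'
    f'</GetQuoteResponse></soap:Body></soap:Envelope>'
)
print(parse_response(response))  # -> 42.50
```

The point is that the envelope, not the stack, is the contract; "knowing what you're doing" mostly means matching namespaces and element names exactly.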

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: Software Development Has Stalled Posted: Mar 26, 2010 5:40 AM
> talking to various server-side
> services,

You miss the point. This is, generally, prohibited: a browser application may talk only to the single server it was loaded from. Unless and until the security of http(s) is true security, no client/browser app will be allowed to roam freely. Its server may talk to other servers, but that doesn't change the semantics of the browser app.

The browser app is, semantically, exactly the same as a 3270/370 app from 1970: a disconnected, locally edited program talking to a single application program running on big hardware at the other end of a wire. Just pretty pixels rather than characters.

Florin Jurcovici

Posts: 66
Nickname: a0flj0
Registered: Feb, 2005

Re: Software Development Has Stalled Posted: Mar 28, 2010 10:20 AM
> > talking to various server-side
> > services,
>
> You miss the point. This is, generally, prohibited.

Not quite. A web app can talk to as many services as it wants, as long as they are all located on the server from which the app was loaded. In most deployment scenarios, many families of services are deployed on the same machine and are ideally used by many applications, be they web apps or not. Even if this weren't so, there are established ways to have one web application talk to more than one server.
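The most common of those ways can be sketched in a few lines: the browser keeps talking only to its own origin, and the origin server forwards each request to the service that really handles it (a reverse-proxy pattern). The backend hostnames below are hypothetical.

```python
# Hedged sketch of a server-side routing table: the browser sees one origin,
# while the origin server fans requests out to several internal services.
# All hostnames and path prefixes are made up for illustration.
BACKENDS = {
    "/api/orders": "http://orders.internal:8080",
    "/api/billing": "http://billing.internal:8080",
}

def route(path):
    """Map an incoming same-origin path to the backend URL that serves it."""
    for prefix, backend in BACKENDS.items():
        if path.startswith(prefix):
            return backend + path[len(prefix):]
    return None  # unknown path: serve locally (static content) or 404

print(route("/api/orders/123"))  # -> http://orders.internal:8080/123
```

In a real deployment this table would live in a proxy or application server configuration rather than application code, but the semantics are the same: the single-origin restriction applies to the browser, not to the topology behind the server.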

Servers can chat with each other too, also using web services instead of some heavyweight protocol that is difficult to program against, isn't widely used, isn't used in a standardized way by applications, and with which the basic task of serializing and deserializing objects can be tedious.
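As a rough illustration of how lightweight text-based serialization can be compared with IDL-generated stubs (the record and its field names are invented, not taken from any real service):

```python
# Hedged sketch: a flat record serialized to XML and back in a few lines,
# with no generated stubs or binary marshalling layer involved.
import xml.etree.ElementTree as ET

def to_xml(order):
    """Serialize a flat dict of string fields to a small XML document."""
    root = ET.Element("order")
    for key, value in order.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    """Deserialize the same document back into a dict."""
    return {child.tag: child.text for child in ET.fromstring(text)}

doc = to_xml({"id": "17", "total": "99.90"})
print(from_xml(doc))  # -> {'id': '17', 'total': '99.90'}
```

Real services add schemas and nesting on top of this, but the round trip stays inspectable text, which is exactly what made debugging CORBA/DCOM marshalling feel tedious by comparison.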

I agree, however, that https is essentially insecure. That's why smart companies run everything over http inside a VPN instead. IMO, this solves the problem. Not only does it solve the problem, but if the company is really smart, it solves it in a very nice way, employing SSO to authenticate against services with the same credentials used for the VPN itself - although not many companies do this.

Let's try a thought experiment. Imagine you have to write a packaged application consisting of a client and a server. Both the server and the client need to be deployed on various OSes. You need to be able to work with various databases. The business problem you are trying to solve involves frequent updates to the business logic (for instance, driven by changes in legislation).

IMO, it's several times more costly to write a custom server and client application in portable C++, compile, test and debug it on all potential target platforms, and manually or automatically update every installation whenever the application changes, than it is to create a simple, services-based web application. Such an application runs off an application server, is easily moved from one application server to another thanks to proper use of highly standardized technologies, and updating it, in most cases, means just replacing some text files on the server. More than that, it allows you to replace parts of the implementation without other parts even noticing. IMO, the newer technology is clearly better than the old one, since it allows one to deliver the same business value with less effort in less time, and to provide a less costly update process. So there must have been some advance from the older technology to the newer one.

IMO, the essential technological advance is a new paradigm. 20 years ago, you wrote programs to compute something. Nowadays it is widely recognized that computation results are worthless as long as they aren't communicated, and it is usually difficult or impossible for a single, central node inside one enterprise to compute anything meaningful. Which is why programming effort is nowadays focused on the network and on collaboration, rather than on solving problems in isolation. Of course, some basic skills and knowledge are as valid and useful today as they were 20 years ago, but there's much more you need to know today than there was then. This makes it harder to be a good programmer today, so there are proportionally more mediocre programmers. But IMO that's precisely because the technology is more advanced today than it was 20 years ago, not because technology has returned to where it was 20 years ago.

It is this shift in paradigm, IMO, which opens the way for the most exciting development in tools and technology of the next several years.


Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use