The Artima Developer Community

Weblogs Forum
The Holistic Approach to Software Engineering as a Way to Handle Complexity

74 replies on 5 pages. Most recent reply: Dec 9, 2010 9:28 AM by Mike Swaim



Posts: 18
Nickname: evarga
Registered: Feb, 2006

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 29, 2010 12:04 PM
>
> No one, and I repeat
> NO ONE, has found a class of client/standalone problems
> which benefit from these processors. Until such is
> accomplished, this will be a fools journey.
>

An excellent article by David Patterson comes to mind (read at http://spectrum.ieee.org/computing/software/the-trouble-with-multicore) about how "forced" moves transform into hype.

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 29, 2010 12:49 PM
> Would you please elaborate on what you would consider
> dealing with data in a more systemic way?

Sure. At the moment, SQL (and the RM to the extent SQL embodies it) provides a machine/OS/language-agnostic data model, complete with constraints. Not all development deals with such "commercial" types of data, and I make no assertion that the RM is appropriate in all cases, although SQLite seems to end up in lots of places one might not view as obvious. SQL, often abused, does provide a common syntax for manipulating flatfile data; by being lax with schema specifiers, though, SQL databases actually encourage lousy schema design.

(Is the SQL database the best answer? In the abstract, no. But not because the RM is faulty, but because Chamberlin didn't know what he was doing when he specified SQL outside of Codd's control. He was an IMS coder, of all things!! Too many equate SQL with Codd and the RM, and that's an historical falsehood.)

However, for those applications/systems where data outweighs code (and I'll assert that the opposite is conventionally referred to as scientific/engineering computing; e.g. the genetic simulations mentioned a couple of posts above), we've been going down the wrong path. The path we should be following is a better (better meaning greater model ACID qualities; I do not believe such exists, but will grant those who wish to the rope to continue) agnostic datastore. Any application language should be able to interact with the datastore, assured that this application/client code can't munge the data. The language should be a followed standard, unlike SQL, and based on a spec derived from the RM. Such languages do exist, but haven't managed to gain "standard" status and thus displace SQL. Fact is, SQL isn't as bad as some hysterics claim; said hysterics typically are coders who see a threat to the data hegemony they've built into their application/design and grab any straw that floats by.

Such a datastore, by definition, is independent of any of the applications (or the source languages) that interact with it; the datastore exposes its API, kind of like SQL. This is the sticking point with coders: they live by the rubric "all the data be mine, and you must go through my bespoke gateway to touch the data"; few actually understand the RM or even grant its right to exist (kind of like Israel :) ). We have a data crisis (not necessarily separate from the software crisis) in large part because coders do so love their application silos. Dr. Codd found a way to get rid of the silos. The farmers weren't (and still aren't) happy to live that way.
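The idea of a datastore that guards its own integrity, independent of any client language, can be sketched with SQLite (mentioned above). The table, columns, and CHECK rule here are invented purely for illustration:

```python
# Minimal sketch: the datastore, not the application code, enforces integrity.
# Table/column names and the constraint are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )""")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")

# No client code, in any language, can "munge" the data past the declared rule:
try:
    conn.execute("UPDATE account SET balance = -50 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # the engine itself rejects the invalid state; the data stays intact
```

Any application hitting this store through SQL gets the same rejection; the rule lives with the data rather than inside one application's bespoke gateway.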

If we want to solve either of the crises, we have to define the problem accurately. For problems where code does overwhelm data, scientific/engineering/games/etc., then perhaps some as yet undefined language will not "suck". But for the rest of application development, the RM is still the only data model which was conceived first then implemented to its specification (well, except for SQL, but that wasn't Codd's doing). Both the network and hierarchical datastores were developed ad hoc, and a semblance of "model" hand waving followed.

It's often said, at least in my hearing, that industrial-strength databases cost too much, so let's just use files (VSAM, xml, and such) and not worry about that ACID stuff. The development inevitably spends an order of magnitude or two more than the database fee on all that bespoke coding (and years of maintenance, too) for the file I/O, and then has no way, without spending still more money, of doing even basic analysis of the data.

Here's an interesting link (merging R with PostgreSQL): http://www.postgresonline.com/journal/archives/188-Quick-Intro-to-R-and-PLR-Part-1.html

Alex Stojan

Posts: 4
Nickname: staksi
Registered: Sep, 2010

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 29, 2010 7:59 PM
There's nothing wrong with programming languages; they just need to provide appropriate primitives so that the programmer can create new abstractions. So, for example, if you need list comprehensions, you create an abstraction for them. If you need some data/knowledge-aware constructs, you build abstractions for those too, and the same goes for many other things. There's no need for languages to support those things directly. You can do it in C++ using templates, in Clojure using macros, ...
The fundamental thing is having the right primitives and the right basic constructs for building new abstractions!
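A small sketch of that point (in Python rather than C++ templates or Clojure macros, purely for brevity): a list-comprehension-like construct built as a library abstraction from ordinary function primitives. The name `comprehend` is invented for illustration:

```python
# Build a "list comprehension" from plain primitives (functions), rather than
# relying on the language to provide the construct directly.
def comprehend(transform, source, keep=lambda x: True):
    """Equivalent to: [transform(x) for x in source if keep(x)]"""
    return list(map(transform, filter(keep, source)))

squares_of_evens = comprehend(lambda x: x * x, range(10),
                              keep=lambda x: x % 2 == 0)
# same result as the built-in form: [x * x for x in range(10) if x % 2 == 0]
```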

Kay Schluehr

Posts: 302
Nickname: schluehk
Registered: Jan, 2005

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 12:37 AM
> >
> > No one, and I repeat
> > NO ONE, has found a class of client/standalone problems
> > which benefit from these processors. Until such is
> > accomplished, this will be a fools journey.
> >
>
> Come to my mind an excellent article written by David
> Patterson (read at
> http://spectrum.ieee.org/computing/software/the-trouble-with-multicore)
> about how "forced" moves transform into hype.

The whole point of the "multicore revolution" is that the "free lunch is over": when you buy a new computer, your programs will not automatically run faster than they did two years before just by the divine gift of Moore's law. So the class of problems these chips are dedicated to solving is exactly the same as before, and this is what causes all the head scratching. Some favour radical solutions and new languages, whereas others would like to transform legacy technology more carefully.

On the other hand, software engineers can largely ignore this, because when the hardware industry fails to sell new computers because people don't see much benefit in buying them, it is the hardware vendors which suffer, not the programmers. The geek in me looks at it from another angle though: replacing an old generation of technology with a new one is a matter of fun, and when companies spend money on this I both have a job and can practise my hobby.

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 5:45 AM
> > > No one, and I repeat
> > > NO ONE, has found a class of client/standalone problems
> > > which benefit from these processors. Until such is
> > > accomplished, this will be a fools journey.
> >
> > Come to my mind an excellent article written by David
> > Patterson (read at
> > http://spectrum.ieee.org/computing/software/the-trouble-with-multicore)
> > about how "forced" moves transform into hype.
>
> The whole point of the "multicore revolution" is that the
> "free lunch is over" i.e. when you buy a new computer your
> programs will not automatically run faster than they did
> two years before just by the divine gift of Moore's law.

But it's not Moore's Law which allowed M$ to make Office ever more convoluted and slow on current processors, depending on the next generation to be faster; it was the clock increase that was The Gift. A credible argument is that it was *Moore's Law* which taketh away The Gift. The argument goes thus: with the increased density of features (which *does* continue), the stability of the chip no longer permitted the clock increases possible at lower densities. Single-threaded code doesn't now run faster on new chips. Ack!

> So the class of problems they are dedicated to solve is
> exactly the same as before and this is what causes
> all the head scratching. Some favour radical solutions and
> new languages whereas others would like to transform
> legacy technology more carefully.

Again, Amdahl's Law constrains absolutely; until repealed. The Law is independent of processor architecture and language. So far. And likely to stay that way, since it operates at the logical level, not the physical. That's not to deny that some small (relative to cpu shipment counts) number of problems will be natural fits for multi-core cpu's, only that the applications used most of the time by most of the users (Office, webbing, etc.) don't fit. Smarter OS's can benefit, of course, and if so, these "most users" will see a benefit if they do more multi-tasking. But that's got nothing to do with the language/technique used to write "most user" applications.

>
> On the other hand software engineers can largely ignore
> this because when the hardware industry fails to sell new
> computers because people don't see much benefit in buying
> new ones it is the hardware vendors which suffer, not the
> programmers.

It hurts the likes of M$, and has already. Their post-XP OS's haven't been taken up as upgrades all that much, and even the Fortune X00 CIO/CTO suits are finally recognizing that Office 2020 doesn't really provide much bang for the buck. Office is what keeps M$ afloat. Keeping the Fortune X00 suits in perpetual upgrade is vital; multi-core with little or no clock upgrade (modulo some breakthrough with Amdahl, or complete reworking of usage pattern) is very bad for M$.

> The geek in me looks at it from another angle
> though: replacing an old generation of technology by a new
> one is a matter of fun and when companies spent money for
> this I have both a job and can practise my hobby.



Posts: 18
Nickname: evarga
Registered: Feb, 2006

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 10:54 AM
>
> The
> argument goes thus: with the increased density (which
> *does* continue) of feature, the stability of the chip no
> longer permitted the clock increases possible at lower
> densities. Single threaded code doesn't now run faster on
> new chips. Ack!
>

It is interesting to note here that stability is quite subjective. It heavily depends on our own perception, or more precisely our expectation, of how computers should work. Up to now, we have looked at computers as perfect machines which were not allowed to make a single error (from the viewpoint of their execution). It seems that this will change in the near future. In order to sustain Moore's Law, computer science has started to change its view of computers' exactness. This comes in the form of stochastic processing (a.k.a. probabilistic computing), where errors are allowed and treated as a natural phenomenon (in the same way as we treat the fastidiousness of current computers as something normal). In that changed world, even error-prone components (with some tolerable error rate) may qualify as stable and acceptable.
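A toy illustration of that error-tolerance idea (the numbers, names, and "flaky" operation are all invented): if an individual operation errs with some tolerable probability, redundancy plus a majority vote can still yield an acceptable result.

```python
# Hypothetical sketch: an error-prone component with a tolerable error rate,
# made dependable enough through redundancy and voting.
import random

def flaky_add(a, b, error_rate=0.1, rng=random):
    # occasionally off by one, modeling a stochastic/error-prone component
    return a + b + (1 if rng.random() < error_rate else 0)

def voted_add(a, b, trials=5, rng=random):
    results = [flaky_add(a, b, rng=rng) for _ in range(trials)]
    return max(set(results), key=results.count)  # majority vote wins
```

The system as a whole "qualifies as stable" even though no single component is exact, which is the perceptual shift described above.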

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 11:51 AM
> > The
> > argument goes thus: with the increased density (which
> > *does* continue) of feature, the stability of the chip no
> > longer permitted the clock increases possible at lower
> > densities. Single threaded code doesn't now run faster on
> > new chips. Ack!
>
> It is interesting to note here that stability is
> quite subjective. It heavily depends on our own
> perception, or more precisely expectation, of how
> computers should work. Up to now, we looked at the
> computers as some perfect machines, which were not allowed
> to make a single error (from the viewpoint of their
> execution). It seems that this will change in the near
> future. In order to carry on the Moore's Law the computer
> science has started to change its view about computers'
> exactness. This comes in the form of stochastic
> processing (a.k.a. probabilistic computing),
> where errors are allowed, and treated as a natural
> phenomena (in the same way as we treat fastidiousness of
> current computers as something normal). In that changed
> world, even error-prone components (with some tolerable
> error rate) may be qualified as stable, and acceptable.

Two distinct issues.

Yes, some Real Engineers are exploring quantum computing on various kinds of hardware. And, yes, such cpu's could become useful for some problems.

No, the stability mentioned is not of that kind, but rather more basic: the gates simply fail at the increasing clocks that were attempted with currently available materials.

So, whether quantum (probabilistic) cpu's would reinstate the clock climb is not certain, AFAIK. Unless the clock climb returns, single-threaded code will continue to slow down as feature size diminishes, taking the clock with it.

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 12:02 PM
> > The
> > argument goes thus: with the increased density (which
> > *does* continue) of feature, the stability of the chip no
> > longer permitted the clock increases possible at lower
> > densities. Single threaded code doesn't now run faster on
> > new chips. Ack!
>
> It is interesting to note here that stability is
> quite subjective. It heavily depends on our own
> perception, or more precisely expectation, of how
> computers should work. Up to now, we looked at the
> computers as some perfect machines, which were not allowed
> to make a single error (from the viewpoint of their
> execution). It seems that this will change in the near
> future. In order to carry on the Moore's Law the computer
> science has started to change its view about computers'
> exactness. This comes in the form of stochastic
> processing (a.k.a. probabilistic computing),
> where errors are allowed, and treated as a natural
> phenomena (in the same way as we treat fastidiousness of
> current computers as something normal). In that changed
> world, even error-prone components (with some tolerable
> error rate) may be qualified as stable, and acceptable.


And here's a trenchant observation (from: http://ask.metafilter.com/78227/Why-did-CPUs-stop-getting-faster-about-5-years-ago )

FWIW, people say they get around the increase in clock speed by putting multiple cores in the same processor unit. But that's not really an even trade. Parallel programs are enormously difficult to create reliably. Pretty much only Microsoft will be able to take advantage of multiple cores, and that's because they don't care if their programs work. Everybody else will find parallel programming very difficult, frustrating and fraught with errors.
posted by vilcxjo_BLANKA at 12:41 PM on December 10, 2007

Mike Swaim

Posts: 13
Nickname: swami
Registered: Apr, 2004

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 2:00 PM
> NO ONE, has found a class of client/standalone problems
> which benefit from these processors. Until such is
> accomplished, this will be a fools journey.

Medical research. My division has a Cray cluster and we run Condor on our desktop computers for large jobs.

In a less esoteric example, games. A number of games will use multiple cores, and the number should grow as engines add threading support. (This makes sense because games tend to tax PC hardware more than most other general purpose apps, and they tend to be more parallelizable.)

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Nov 30, 2010 3:15 PM
> > NO ONE, has found a class of client/standalone problems
> > which benefit from these processors. Until such is
> > accomplished, this will be a fools journey.
>
> Medical research. My division has a Cray cluster and we
> run Condor on our desktop computers for large jobs.
>
> In a less esoteric example, games. A number of games will
> use multiple cores, and the number should grow as engines
> add threading support. (This makes sense because games
> tend to tax PC hardware more than most other general
> purpose apps, and they tend to be more parallelizable.)

I don't see many Intel cpu's being moved to the desktop to mimic Crays ("Mr. Cray, why are your computers so much faster than the others?" "Shorter wires."). A niche, sure, but Intel needs hundreds of millions of units out the door every year.

I don't know enough about how games run these days to disagree, but IIRC most of the load is carried by the GPU(s) and not the cpu/cores. And that the GPU code is already thread/core optimized. Yes? No?

Even so, I don't see the Fortune X00 CIO/CTO brigade getting all hot and bothered about games running faster on the next batch of PC's with quad core i11's (or whatever Intel calls it) headed to accounting.

This does raise the question: where is the future of PC computing, and by implication its coders? It started with home hobbyists, and it may drift back to home entertainment (broadly speaking) if that's the only sector which can leverage the new processor paradigm. The Fortune X00 may well drift back to where it was pre-PC: with centralized databases talking to terminals, now mid-power PC's. The niches for scientific/engineering could run on PC's that look like the Apollo machines of yesteryear. Could happen; Apple has done OK with that approach.



Posts: 18
Nickname: evarga
Registered: Feb, 2006

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Dec 1, 2010 12:17 AM
>
> Yes, some Real Engineers are exploring quantum computing
> on various kinds of hardware. And, yes, such cpu's could
> become useful for some problems.
>

I was not talking about quantum computers at all, although stochastic processing does embrace a lot of probability theory. Most probably the term "probabilistic computing" was a bit misleading. The issue is very nicely summarized in the following article: http://spectrum.ieee.org/semiconductors/processors/the-era-of-errortolerant-computing

P.S. You may also read about this topic in the article Scalable Stochastic Processors at http://portal.acm.org/citation.cfm?id=1871008

Achilleas Margaritis

Posts: 674
Nickname: achilleas
Registered: Feb, 2005

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Dec 1, 2010 5:26 AM
> In a less esoteric example, games. A number of games will
> use multiple cores, and the number should grow as engines
> add threading support. (This makes sense because games
> tend to tax PC hardware more than most other general
> purpose apps, and they tend to be more parallelizable.)

Raytracing is the best candidate for multicore in games. It's an extremely parallelisable problem, and it vastly simplifies game visuals programming, while offering superior quality.

Mike Swaim

Posts: 13
Nickname: swami
Registered: Apr, 2004

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Dec 1, 2010 7:02 AM
> I don't see many Intel cpu's being moved to the desktop to
> mimic Crays ("Mr. Cray, why are your computers so much
> faster than the others?" "Shorter wires."), A niche,
> sure, but Intel needs hundreds of millions of units out
> the door every year.

Our Cray cluster uses Intel chips. We also use Condor on our desktop machines as a second cluster. After a machine's been idle with no user interaction for 15 minutes, it starts working on the current job submitted to Condor. But once again, it's fairly esoteric, and involves jobs that can run for days before you get a result.

> I don't know enough about how games run these days to
> disagree, but IIRC most of the load is carried by the
> GPU(s) and not the cpu/cores. And that the GPU code is
> already thread/core optimized. Yes? No?

No. We've pretty much reached the point where most games are CPU bound, unless using Intel graphics chips. Typically, when games are made multithreaded, each subsystem (sound, graphics, AI, networking) gets its own thread. Graphics is usually broken down further into several threads.
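The per-subsystem threading described above can be sketched like this. The subsystem names and the "work" items are placeholders, not any real engine's API:

```python
# Sketch: one thread per game subsystem (sound, graphics, AI, networking),
# each fed through its own queue; results funnel into a shared queue.
import queue
import threading

def subsystem(name, inbox, results):
    while True:
        item = inbox.get()
        if item is None:               # sentinel: shut the thread down
            break
        results.put((name, item))      # stand-in for real per-subsystem work

names = ("sound", "graphics", "ai", "network")
inboxes = {n: queue.Queue() for n in names}
results = queue.Queue()
threads = [threading.Thread(target=subsystem, args=(n, inboxes[n], results))
           for n in names]
for t in threads:
    t.start()
for n in names:
    inboxes[n].put(n + "-frame-0")     # one frame's worth of work
    inboxes[n].put(None)               # then shut down
for t in threads:
    t.join()
```

The subsystems run concurrently and independently, which is why this decomposition maps naturally onto cores.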

> Even so, I don't see the Fortune X00 CIO/CTO brigade
> getting all hot and bothered about games running faster on
> the next batch of PC's with quad core i11's (or whatever
> Intel calls it) headed to accounting.

No, but threads work well in C/S batch applications as well. Back when I worked at a trading firm in the late '90s and early '00s, our portfolio calculators were either multithreaded or ran as parallel processes on a machine. (The standard trader computer had 2 CPUs and 2-3 screens, better than the hardware developers had.)

> This does raise the question: where is the future of PC
> computing, and by implication its coders? It started with
> home hobbyists, and it may drift back to home
> entertainment (broadly speaking) if that's the only sector
> which can leverage the new processor paradigm.

Both the 360 and PS3 are multicore, although the PS3 doesn't do SMT. (And you can access the cores/threads on a 360 using XNA, the public tool for developing for the 360.) We're also starting to see multicore ARM chips, so your next phone/tablet could be multicore as well.

> The
> Fortune X00 may well drift back to where it was pre-PC:
> with centralized databases talking to terminals, now
> mid-power PC's.

Sure. My team does web development which is essentially this model. However, the trend for web servers is more CPUs, rather than faster CPUs. Fortunately, web requests parallelize really well. We still have pages which spin off long running work into a background thread or worker process.

I've been writing multithreaded code for over a decade, off and on, and it's really not that hard, although it requires discipline. I've also come to the following conclusion: if your program is slow because it's processing a lot of data, or if your program is structured into a bunch of mostly independent modules, then writing multithreaded code can be a big win. If your application is slow because it's got a lot of code to execute, not so much.
(And it's not just multicore. I've seen massive performance increases in C/S desktop apps after they've gone multithreaded, even on single core machines when they have to deal with lots of data.)
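The "lots of data" case described above usually takes the same shape: split the data, process the chunks in parallel, merge the results. A minimal sketch (the per-record work here is a placeholder; note too that in CPython, CPU-bound chunks need a process pool rather than threads to get past the GIL, though the structure is identical):

```python
# Sketch: parallelize over independent data chunks and merge the results.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    return sum(x * x for x in chunk)   # stand-in for real per-record work

data = list(range(100_000))
chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(process_chunk, chunks))
# total matches the serial answer exactly; only the wall-clock time changes
```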

robert young

Posts: 361
Nickname: funbunny
Registered: Sep, 2003

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Dec 1, 2010 10:28 AM
> > I don't see many Intel cpu's being moved to the desktop to
> > mimic Crays ("Mr. Cray, why are your computers so much
> > faster than the others?" "Shorter wires.") A niche,
> > sure, but Intel needs hundreds of millions of units out
> > the door every year.
>
> Our Cray cluster uses intel chips. We also use Condor on
> our desktop machines as a second cluster. After a
> machine's been idle with no user interaction for 15
> minutes, it starts working on the current job submitted to
> Condor. But once again, it's fairly esoteric, and involves
> jobs that can run for days before you get a result.

No disagreement, beyond that such machines/software won't amount to a tear drop in the ocean of chips Intel needs to ship. That's the problem with finding parallel problems.

>
> > I don't know enough about how games run these days to
> > disagree, but IIRC most of the load is carried by the
> > GPU(s) and not the cpu/cores. And that the GPU code is
> > already thread/core optimized. Yes? No?
>
> No. We've pretty much reached the point where most games
> are CPU bound, unless using Intel graphics chips.
> Typically, when games are made multithreaded, each
> subsystem (sound, graphics, AI, networking) gets its own
> thread. Graphics is usually broken down further into
> several threads.

see below.


>
> Both the 360 and PS3 are multicore, although the PS3
> doesn't do SMT. (And you can access the cores/threads on a
> 360 using XNA, the public tool for developing for the
> 360.) We're also starting to see multicore ARM chips, so
> your next phone/tablet could be multicore as well.
>

360 and PS3 run on PPC chips (sort of), which is small potatoes for IBM, and none at all for Intel/AMD, which are the chip suppliers in need of justification.


> > The
> > Fortune X00 may well drift back to where it was pre-PC:
> > with centralized databases talking to terminals, now
> > mid-power PC's.
>
> Sure. My team does web development which is essentially
> this model. However, the trend for web servers is more
> CPUs, rather than faster CPUs. Fortunately, web requests
> parallelize really well. We still have pages which spin
> off long running work into a background thread or worker
> process.

Again, the server market is a fraction of the desktop/mobile markets, which is where the disconnect is happening. Mainframes/minis/servers have had multi-stuff capabilities for decades; how to make use of multi-stuff in a common desktop/client (and not behave slower to the user than the last generation machine) is the conundrum. For iStuff (and similar), where there isn't as much user history, it's probably less of an issue. Planned obsolescence still rules that sector, so there's less pressure. Everybody knows what Office does on their machine (often about 5% of what Office can do, naturally), so a new machine that doesn't do it any faster will be noticed.


>
> I've been writing multithreaded code for over a decade,
> off and on, and it's really not that hard, although it
> requires discipline. I've also come to the following
> conclusion, if your program is slow because it's
> processing a lot of data, or if your program is structured
> into a bunch of mostly independent modules, than writing
> multithreaded code can be a big win.

That old canard, the three main applications for the desktop: spreadsheets, word processing, spreadsheets. There are small opportunities for multi-threading, but I've seen no evidence that there's a major win there for multi-core. The archetypal desktop application is single threaded because the user has only one brain and two hands. The OS can benefit, but the number of OS writers is even smaller than game or database engine or web server writers.


> If your application
> is slow because it's got a lot of code to execute, not so
> much.
> (And it's not just multicore. I've seen massive
> performance increases in C/S desktop apps after they've
> gone multithreaded, even on single core machines when they
> have to deal with lots of data.)

"C/S desktop apps" is, I'd say, a synonym for Engineering Workstation. Important to those that do that sort of thing, but it won't keep the lights on at Intel or AMD (certainly not both).

Morgan Conrad

Posts: 307
Nickname: miata71
Registered: Mar, 2006

Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity Posted: Dec 1, 2010 11:08 AM
> I don't see many Intel cpu's being moved to the desktop to mimic Crays

Agreed. However, one can easily buy a graphics card with ~1000 GPU cores for ~$100. That's 10 cents a core, even less if the card is on sale at Fry's. :-)

The availability of such incredibly cheap computing power will drive demand for parallel processing, and applications will arrive. Whether languages and programmers can satisfy the demand is a good question.

Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use