The Artima Developer Community

Weblogs Forum
Architecture the Accelerator

59 replies on 4 pages. Most recent reply: Mar 5, 2005 8:49 AM by Isaac Gouy

Levi Cook

Posts: 8
Nickname: levicook
Registered: Feb, 2002

Re: Architecture the Accelerator Posted: Feb 25, 2005 5:58 AM
How sorry is it to reply to your own post? :)

I meant to pose a question to Bob, rather than guess at his response. Actually, I'm curious to hear anyone's answer to these questions. I'm hoping the responses are an indicator of what assumptions each of us makes about development.

If you frame programmers as investors:

Q1) Does your experience suggest short term investments are too irregular to bank on? (Can you share your experience?)

Q2) In your experience, can you make short term investments and capitalize on them more frequently by insisting on quality?

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Architecture the Accelerator Posted: Feb 25, 2005 11:21 AM
> 1) What does it mean for a method name to be good/bad?
>
> Perhaps it means that the name may be misinterpreted, it
> doesn't say what it does as well as it could.
>
My own subjective heuristics include things like: method names should be verbs, and method names should be relatively concise while still giving a pretty good idea of the semantics (i.e., what will happen when I invoke that method). These two goals are often in tension, and when they are, I tend to lean in the concise direction when I expect a method to be used often (e.g., Collection.iterator() instead of Collection.getIterator()), but in the clear direction when something is used less often (such as BeanContext.removeBeanContextMembershipListener()).
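A tiny, hypothetical sketch of that trade-off in Java (the class and the method names here are invented for illustration; only Collection.iterator() and removeBeanContextMembershipListener() come from the real APIs):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the concise-vs-descriptive naming trade-off.
public class NamingTradeoff {
    private final List<String> members = Arrays.asList("alice", "bob");

    // Called constantly: favor brevity, as java.util.Collection does
    // with iterator() rather than getIterator().
    public Iterator<String> iterator() {
        return members.iterator();
    }

    // Called rarely: spell the semantics out, the way BeanContext does with
    // removeBeanContextMembershipListener(). Readers see a rarely used method
    // too seldom to have internalized a terse name for it.
    public int numberOfRegisteredMembers() {
        return members.size();
    }
}
```

The frequently called method stays short because its readers amortize the cost of learning it; the rare one carries its meaning in the name.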

> Could we measure the descriptiveness of a method name by
> having 10 peers write down what they thought the method did
> from its name? (And accept method names which 7 out of 10
> developers figured out.)
>
Yes, one way to measure subjective quality is to ask a large enough sample of people what their subjective judgment is and then look at the statistics. Like any other measurement, though, it is only one data point, not really a solution to the philosophical problem of how to objectively measure something that is subjective. But I think design reviews, pair programming, and hallway conversations can less formally but very effectively build team awareness of what is valued by that team (a culture of quality).

> 2) We say that renaming methods costs time/money - and
> let's say we don't have IDE support for renaming, just to
> bump up the cost.
>
> What might not renaming the methods cost - what value do
> we give the expected benefits?
> Is the method name so bad that other developers are
> confused about what the method does; is it so bad that
> they will make mistakes?
>
> It's not just costs versus subjective quality; it's costs
> versus benefits.

Agreed. I'm talking about judging the cost of quality against its benefits, and using that to figure out how good is good enough. What I am hoping people would share here is real world experiences about when quality paid off and when it didn't, because that would leverage the intelligence and experience of the community to help us better make those judgments in the future.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: Architecture the Accelerator Posted: Feb 25, 2005 3:51 PM
> Yes, one way to measure subjective quality is to ask a large enough sample of people what their subjective judgment is and then look at the statistics. Like any other measurement, though, it is only one data point, not really a solution to the philosophical problem of how to objectively measure something that is subjective.

One of the lessons of discount usability is that you can find a lot of the problems with a couple of tests - we're doing sanity checks not peer-reviewed science.

I'm not trying to solve a philosophical problem; I'm just trying to get some baseline data for decision making.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Architecture the Accelerator Posted: Feb 25, 2005 8:12 PM
> One of the lessons of discount usability is that
> you can find a lot of the problems with a couple of tests
> - we're doing sanity checks not peer-reviewed science.
>
> I'm not trying to solve a philosophical problem; I'm just
> trying to get some baseline data for decision making.

I hadn't heard of discount usability, and found this from Jakob Nielsen:

Discount Usability for the Web

http://www.useit.com/papers/web_discount_usability.html

This is a longer article about the techniques:

Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier

http://www.useit.com/papers/guerrilla_hci.html

A few months back I was reading through a lot of books about web usability design, and I was convinced of the usefulness and affordability of doing some user testing, so yes, I agree it can definitely be useful.

Parag Shah

Posts: 24
Nickname: parags
Registered: Mar, 2003

Re: Architecture the Accelerator Posted: Feb 26, 2005 4:34 AM
> > After a point the Canoo test scripts
> > become really unwieldy because we had to test every

> It may be that had you separated the UI from the business
> rules, and tested the business rules without the UI, then
> changes to the UI would not have broken so many tests.

The business rules *were* very clearly separated from the UI; I am not sure what made you deduce otherwise. Canoo was used as a functional testing tool. The Canoo tests became unwieldy because the screens contained a lot of data, sometimes in complex tables. A change in the UI meant adding a row or sometimes even shuffling the contents. Since Canoo locates the data under test with XPath expressions, any change in table structure meant updating row numbers, column numbers, and sometimes table ids in those expressions. For complex screens this exercise could take a couple of hours even for minor changes.
The fact that we tried to test 100% of the functional specs, as well as all the data that was generated, had two *immediate* drawbacks:
1. Minor changes to the UI meant a lot of time invested in modifying the functional tests.
2. The build process took a lot longer, because every developer first ran all tests, including the Canoo tests, locally, and checked in code only if everything passed. The JUnit tests took about 5 minutes, while the Canoo tests took about 30, so every check-in was preceded by at least a 35-minute test round. Since we practiced frequent check-ins, it was normal to spend an hour every day running tests locally.
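For readers who haven't used Canoo WebTest, the brittleness described above can be reproduced with nothing but the JDK's own XPath support. This is a hypothetical illustration (the table contents are invented), not Canoo itself, but the failure mode is the same: a position-based XPath silently starts matching the wrong cell the moment a row is inserted.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class BrittleXPathDemo {
    // Page markup before and after a purely cosmetic UI change (a row added).
    static final String BEFORE =
        "<table><tr><td>Alice</td></tr><tr><td>Total: 10</td></tr></table>";
    static final String AFTER =
        "<table><tr><td>Alice</td></tr><tr><td>Bob</td></tr>"
        + "<tr><td>Total: 10</td></tr></table>";

    // Position-based locator of the kind a functional test might assert against.
    static final String TOTAL_CELL = "//table/tr[2]/td[1]";

    static String evaluate(String html, String xpath) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)));
            return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(evaluate(BEFORE, TOTAL_CELL)); // Total: 10
        System.out.println(evaluate(AFTER, TOTAL_CELL));  // Bob -- the assertion now fails
    }
}
```

Every test that hard-codes a row or column index has to be touched after the UI change, which is exactly where the "couple of hours for minor changes" goes.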

> It has long been known that separating the UI from the
> business rules is a good thing to do -- especially in a
> heavily tested environment.

I agree. In fact, that rule should hold true even in an environment which is not heavily tested.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: Architecture the Accelerator Posted: Feb 26, 2005 6:11 PM
> I hadn't heard of discount usability, and found this from
> Jakob Nielsen

IIRC, discount usability is one of Jakob Nielsen's ideas.

> A few months back I was reading through a lot of books
> about web usability design, and I was convinced of the
> usefulness and affordability of doing some user testing,
> so yes, I agree it can definitely be useful.

While off-topic let me mention one book
"Don't Make Me Think"
http://www.sensible.com/

and one design approach
Personas
http://www.cooper.com/content/insights/newsletters_personas.asp


Back on topic, my point was that we don't need "a large enough sample of people" to verify quality, we need a few people and a systematic approach.

We need to state what quality factors are important for the current project; how we will measure them; and for each factor, the quality level we plan to achieve.

Robert C. Martin

Posts: 111
Nickname: unclebob
Registered: Apr, 2003

Re: Architecture the Accelerator Posted: Feb 26, 2005 7:56 PM
> The business rules *were* very clearly separated from the
> UI. I am not sure what made you deduce otherwise. Canoo
> was used as a functional testing tool.

The reason I brought up the separation of business rules and GUI is that I want the functional tests run *below* the GUI. The only tests that I want going through the GUI are tests that test the GUI itself. And I don't want those tests testing the business rules.

Am I missing something?

Robert C. Martin

Posts: 111
Nickname: unclebob
Registered: Apr, 2003

Re: Architecture the Accelerator Posted: Feb 26, 2005 8:05 PM
> I want to set things straight. Nobody intentionally
> develops low quality code or architectures. It just
> happens.

I think this thread is about trading quality for time. If you believe that reducing quality makes you go faster, then you will intentionally reduce quality when you feel schedule pressure.

My position is that reducing quality does not make you go faster, it makes you go slower. The only way to go fast, is to go well.

> The improvements
> take time. And with limited time it may be a better
> choice to add functionality that is urgently required than
> to add these improvements.

Of course. That's a very different thing from intentionally reducing quality in the hopes of going faster. When the code base has problems, we try to make it better incrementally. That is intentionally *increasing* quality because we know it will make us go faster.

> Of course, such a compromise (like in the case of code
> duplication) incurs a high cost at a later time. But in
> the short term, improving quality is a cost in itself and
> that cost may not be affordable.

We may not be able to afford a big increase in quality, but we can always afford a small increase in quality. We should all take the attitude that we'll leave the code better than we found it.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Architecture the Accelerator Posted: Feb 27, 2005 1:12 AM
> While off-topic let me mention one book
> "Don't Make Me Think"
> http://www.sensible.com/
>
That's an excellent book, and the main one that convinced me to do poor-man's usability testing. The last three chapters of it are about testing. The first of those three, "Usability Testing on 10 Minutes a Day" suggests ways to do it cheaply, so you'll "do enough of it."

> Back on topic, my point was that we don't need "a large
> enough sample of people" to verify quality, we need a few
> people and a systematic approach.
>
I see. The author, Steve Krug, suggests 3 or 4 is the optimum number of people to test with in each round of web usability testing. I think that kind of testing helps you find specific problems with the UI, but not come up with a quality score. JavaWorld used to have a tiny survey at the end of each article that allowed readers to grade it one to five in two or three categories. I would like to eventually do that at Artima, but wouldn't feel it was a useful grade of the article's quality after only 3 or 4 people took the survey.

> We need to state what quality factors are important for
> the current project; how we will measure them; and for
> each factor, the quality level we plan to achieve.

I agree with this, and would add only that we could also use good ways to visualize these measurements. If I make a chair, I can stand back and look at the whole thing. But when we work on a large software project, we can only look at a little piece of it at a time through our IDEs.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Architecture the Accelerator Posted: Feb 27, 2005 1:37 AM
> We may not be able to afford a big increase in quality,
> but we can always afford a small increase in quality. We
> should all take the attitude that we'll leave the
> code better than we found it.

Thanks for clarifying. That sounds very reasonable. I think when you said that the only way to go fast is to write "the best code you can possibly write," I took you to mean we should always write the best code we could possibly write.

Michael Feathers talked about how, if you make incremental improvements each time you go in, you can amortize the cost of paying down technical debt over time without anyone feeling a single big hit. I think that makes sense, and I really like your call for leaving the code better in some way than we found it, because that seems inspiring but also very affordable.

The question I'd be curious to hear your thoughts on is: how do we know how far to go with our improvements? For example, I have promised some of the C++ Source guys one new JSP to serve as a column home page. They've been waiting a long time for it, so I'm well past my arbitrary deadline and want to get it done very soon. The easiest way is to simply copy archive.jsp over to column.jsp and start making changes. Were I to write the best code I could, I would first refactor archive.jsp into our poor man's Struts MVC design, because right now it is a monolithic JSP, with business logic mixed with presentation logic.

I have done that poor man's Struts refactor many times on other JSPs. It takes about an hour. Then I can write tests for the controller portion, which would take at least another hour. If the actual work only takes about an hour, then I am looking at 1 hour to just create a monolithic column.jsp, versus 3 hours to both refactor archive.jsp and get a higher quality, MVC-organized column.jsp. Even if I did that work, the code wouldn't yet be "the best code I could possibly write"; I'd need to invest even more time to get it that good. That's why I said that in the short term, higher quality can slow me down. Since I'm planning to in effect obsolete this code in the next year, the refactor doesn't seem worth it to me. I'll probably write column.jsp as a monolithic JSP with business logic mixed with presentation logic.

In general, how would you recommend developers decide how much to improve code, how much to refactor, how much to test?

Parag Shah

Posts: 24
Nickname: parags
Registered: Mar, 2003

Re: Architecture the Accelerator Posted: Feb 27, 2005 6:51 AM
> The reason I brought up the separation of business rules
> and GUI is that I want the functional tests run *below*
> the GUI. The only tests that I want going through the GUI
> are tests that test the GUI itself. And I don't want
> those tests testing the business rules.
>
> Am I missing something?
Hi Robert,
Perhaps we mean different things by *functional test*. I use the term to mean testing the software by simulating a real user. Canoo does this by making GET/POST requests to the application from scripts and comparing the HTML response to the expected response. Things like clicking links and filling in forms can be automated in the scripts.
I have always thought that is what functional testing meant... but I could be wrong.
Anyway, this discussion is veering off-topic, so I will end it here.

Regards
Parag

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: Architecture the Accelerator Posted: Feb 27, 2005 9:37 AM
> I see. The author, Steve Krug, suggests 3 or 4 is the
> optimum number of people to test with each round of web
> usability testing.

And just 2 (or even 1) will make a difference.

> I think that kind of testing helps to
> you find specific problems with the UI, but not to come up
> with a quality score.

I'm not trying to come up with a quality score; I'm just trying to get some specific data for decision making.

This situation isn't unique to software: assessing and making trade-offs across multiple inter-related factors is basic to product and service design - see Quality Function Deployment (QFD) and the House of Quality.


> JavaWorld used to have a tiny survey
> at the end of each article that allowed readers to grade
> it one to five in two or three categories. I would like to
> eventually do that at Artima, but wouldn't feel it was a
> useful grade of the article's quality after only 3 or 4
> people took the survey.

Humbly suggest that having only 2 or 3 "categories" is a bigger problem than the tiny sample size.


> > We need to state what quality factors are important for
> > the current project; how we will measure them; and for
> > each factor, the quality level we plan to achieve.
>
> I agree with this, and would add only that we could also
> use good ways to visualize these measurements.

Ummmm, charts?

> If I make a
> chair, I can stand back and look at the whole thing. But
> when we work on a large software project, we can only look
> at a little piece of it at a time through our IDEs.

And we can look at all of it (and look at it broken down by module) summarized by complexity metrics and whatever other metrics we have accumulated.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Architecture the Accelerator Posted: Mar 1, 2005 9:05 AM
> > > We need to state what quality factors are important for
> > > the current project; how we will measure them; and for
> > > each factor, the quality level we plan to achieve.
> >
> > I agree with this, and would add only that we could also
> > use good ways to visualize these measurements.
>
> Ummmm, charts?
>
Well, charts showing what? That's the difficult question. I've seen graphs showing test coverage, which are useful but not really a visualization of the design. The best kind of diagram I've seen for visualization is the kind that shows each package or class as a box with a name in it, and draws lines between each package or class showing the dependencies. It is pretty easy to notice high coupling when you look at such a diagram, and it encourages you to clean it up, because you want the diagram to look pretty.

I use UML class diagrams all the time to talk about design, but don't find UML diagrams help me that much in the way of visualizing the design so I can see ways it could be improved. I've also seen metrics on how many methods per class, classes and interfaces per package, statements per method. I think that's interesting, but once again I don't feel it helps me visualize the big picture very well. The main way I visualize Java designs currently is by looking at JavaDoc.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: Architecture the Accelerator Posted: Mar 1, 2005 11:35 AM
> Well, charts showing what? That's the difficult question.

The quality factors that we consider important for the current project... (I imagine the emphasis will vary from project to project.)

> I've seen graphs showing test coverage, which are useful
> but not really a visualization of the design. The best
> kind of diagram I've seen for visualization is the kind
> that shows each package or class as a box with a name in
> it, and draws lines between each package or class showing
> the dependencies. It is pretty easy to notice high
> coupling when you look at such a diagram, and it
> encourages you to clean it up, because you want the
> diagram to look pretty.

Wouldn't it be easier if we collected metrics (a coupling metric, say) for each package/class, dumped the results into a spreadsheet, and sorted to show which had the highest coupling?

(That way the computer does the work of figuring out the classes with high-coupling.)
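A minimal sketch of that idea in Java, with the dependency data hard-coded for illustration (in practice a metrics tool such as JDepend would extract it from the code base): compute each class's efferent coupling and sort highest first, so the worst offenders surface without any diagram.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CouplingReport {
    // Hypothetical dependency data: class -> classes it depends on.
    static final Map<String, Set<String>> DEPS = new LinkedHashMap<>();
    static {
        DEPS.put("OrderController",
                 new HashSet<>(Arrays.asList("OrderService", "Logger", "Mailer", "OrderDao")));
        DEPS.put("OrderService", new HashSet<>(Arrays.asList("OrderDao")));
        DEPS.put("Logger", new HashSet<String>());
    }

    // Classes sorted by efferent coupling (outgoing dependency count), highest first.
    static List<String> byCoupling(Map<String, Set<String>> deps) {
        List<String> names = new ArrayList<>(deps.keySet());
        names.sort((a, b) -> deps.get(b).size() - deps.get(a).size());
        return names;
    }

    public static void main(String[] args) {
        for (String cls : byCoupling(DEPS)) {
            System.out.println(cls + ": " + DEPS.get(cls).size());
        }
    }
}
```

The sorted list is the "spreadsheet view": the class at the top of the report is the first candidate for decoupling, no pretty diagram required.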


> I use UML class diagrams all the time to talk about
> design, but don't find UML diagrams help me that much in
> the way of visualizing the design so I can see ways it
> could be improved. I've also seen metrics on how many
> methods per class, classes and interfaces per package,
> statements per method. I think that's interesting, but
> once again I don't feel it helps me visualize the big
> picture very well. The main way I visualize Java designs
> currently is by looking at JavaDoc.

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: Architecture the Accelerator Posted: Mar 5, 2005 8:49 AM
> I don't have a good definition of quality, other than
> perhaps suggesting reading Zen and the Art of Motorcycle
> Maintenance. It is subjective, a "we know it when we see
> it" kind of thing.

Serendipitously, Amazon's excerpt from "Facts and Fallacies of Software Engineering" is "About Quality"
http://www.pearsoned.co.uk/Bookshop/detail.asp?item=308965

Fact 46. Quality IS: a collection of attributes.

Fact 47. Quality is NOT: user satisfaction, meeting requirements, achieving cost/schedule, or reliability.

http://www.amazon.com/gp/reader/0321117425/ref=sib_rdr_ex/104-9353244-8412752?%5Fencoding=UTF8&p=S00E#reader-page



Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use