
Weblogs Forum
The Weblog Skeptic

2 replies on 1 page. Most recent reply: Oct 7, 2008 7:47 AM by Scott McDaniel

Mark Johnson

Posts: 15
Nickname: mj
Registered: Mar, 2003

The Weblog Skeptic
Posted: Oct 6, 2008 7:26 AM
Summary
Usage logs can provide useful inputs to user interface and web site designs. But all too often, naive interpretations of log data produce poor (or, at least, unsupported) design decisions. Here are a few of my concerns about the question, "What do the logs say?"

When updating or redesigning a user interface, beware when someone in a design discussion asks, "How much is this used?"

Much of the time, the question has an insidious underlying assumption: that current usage of a page, feature, or element is a measure of its usefulness to the user.

I've noticed that when people in design discussions ask how often a page element or feature is used, the question almost always means one of the following:

  1. They're looking to defend some preconceived notion. You will notice that they always have an answer if the numbers don't swing their way.
  • "It isn't used, so it isn't useful."

    Or maybe:
    • there's a design problem hiding a killer feature.
    • the people who need it aren't getting to this page.
    • it's so poorly implemented that nobody can figure out how to use it, even though it's exactly what they need.
  • "It would be useful, but people can't find/don't see it."

    Or maybe:
    • it's a niche feature.
    • it's useless.
  • "It's the most used, so it's crucial/the most useful/whatever."

    Or maybe:
    • users click on it because they don't see what else they can do.
    • "most used" means "20% of the time", and so biasing a design toward that feature inconveniences the other 80% of users.
    • what they're clicking on is misleading, and doesn't offer what most users expect.
  2. They want to use the number to make a decision, regardless of whether the metric means anything. Using logs in this way lets designers punt on the hard questions.
  • "We should make this feature prominent because the logs say that's the most useful feature on the page!"

    No, the logs say it's the most heavily used feature on the page. Not the most useful. Don't confuse the two.

    And maybe:
    • only 3% of the people who use this page ever do anything with it, because the design is broken somehow.
    • the real useful content is hidden or poorly presented.
    • the metric doesn't really measure usefulness, so it's as valid as a coin toss.
  • "Only 3% of users click on that: it's useless!"

    Or maybe:
    • those 3% are crucial users.
    • nothing on the page gets more than 3%, so it's as good as or better than anything else (though your page may have too much on it).

It seems to me that the most useful information you can get from usage logs is contextual: usage of one thing relative to something else (though usage still does not equal usefulness: you may simply be measuring design artifacts), or changes in user behavior after changes in the interface (though the potential to mislead yourself there is even more pronounced). Case in point: a design change that exposes the links in a hidden menu. It's hard to construe a persistent 500% increase in the usage of those links, once exposed, as anything but improvement. That is, until you notice a 40% decrease in overall page usage, because the links now obscure something crucial, like a submit button.
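
To make that concrete, here is a minimal sketch of such a relative comparison in Python. The log format, file names, and element names are all hypothetical assumptions; the point is to normalize clicks by page views, so that a change in overall traffic doesn't masquerade as a change in feature usage:

    import csv
    from collections import Counter

    def usage_counts(path):
        """Tally clicks per element and total page views from a click log.

        Assumes a hypothetical CSV with a header row and columns
        timestamp, event, element, where event is 'pageview' or 'click'.
        """
        clicks = Counter()
        pageviews = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["event"] == "pageview":
                    pageviews += 1
                elif row["event"] == "click":
                    clicks[row["element"]] += 1
        return clicks, pageviews

    before_clicks, before_views = usage_counts("clicks_before.csv")
    after_clicks, after_views = usage_counts("clicks_after.csv")

    # Relative usage: clicks per page view, for the elements of interest.
    for element in ("menu_link", "submit"):
        before_rate = before_clicks[element] / before_views
        after_rate = after_clicks[element] / after_views
        print(f"{element}: {before_rate:.3f} -> {after_rate:.3f} clicks/view")

    # The confounding check from the example above: did overall page
    # usage fall after the change, even though link usage rose?
    print(f"page views: {before_views} -> {after_views}")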

My point is that log data are only one input to the design process, and they are data, not information. Interpretations of log data may be information, if your thinking is careful and you're looking diligently for confounding factors, as good scientists do.

Here are my questions:
  • How do you truly use activity logs to make informed user interface design decisions?
  • Do you have a favorite resource for user activity interpretation?
  • Or, conversely, what are your favorite weblog misconceptions and canards?


Carl Manaster

Posts: 24
Nickname: cmanaster
Registered: Jun, 2003

Re: The Weblog Skeptic Posted: Oct 7, 2008 6:55 AM
> It seems to me that the most useful information you can get
> from usage logs is contextual: usage of one thing
> *relative* to something else

Many, many years ago, when we were only given one modifier key, I had an application that was pushing the boundaries of the alphabet. I recorded, for each command, whether users were using the command-key equivalent or the dropdown menu; this let me know which keys could safely be remapped.
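
A minimal sketch of that kind of instrumentation in Python (the record() hook, command names, and source labels are hypothetical; the idea is just to tally each command by how it was invoked):

    from collections import Counter

    # Tally of (command, source) pairs; source is "shortcut" or "menu".
    invocations = Counter()

    def record(command, source):
        """Call this from the app's command dispatcher on every invocation."""
        invocations[(command, source)] += 1

    # After collecting data: shortcuts that nobody uses are candidates
    # for remapping to commands that need them.
    def remappable_shortcuts():
        commands = {cmd for cmd, _ in invocations}
        return sorted(cmd for cmd in commands
                      if invocations[(cmd, "shortcut")] == 0)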

Scott McDaniel

Posts: 1
Nickname: scottmcd
Registered: Oct, 2008

Re: The Weblog Skeptic Posted: Oct 7, 2008 7:47 AM
Since interpreting web logs requires context, the next logical question is "How do you establish that context?" Based on Whitney Quesenbery's 5 E's of usability, I'd say something like this:

Effectiveness (usefulness for a given purpose). You need to be sure the requirements reflect the users' work. You do that through interviews and contextual analysis. You document them through user models (personas), work models (scenarios and use cases), and, finally, the requirements. If you have confidence in those, you've established that the features in question are in fact important.

Ease of Learning. Now that the feature's usefulness has been established, can people find it and figure out how to use it? A summative usability test establishes this: one that measures time on task and error rates, or other quantifiable measures. If people can't find it or figure it out, then it's no use looking at web logs. If they can, you've established both the feature's usefulness and that people can find it and learn it.

Efficiency. Maybe users can find it and figure it out, but it just takes too much time or trouble to use. In this case, web logs could lend support to this hypothesis. You could get more evidence for the hypothesis with a formative (qualitative) usability test in which you ask people what they are thinking as they use the features. If it's a useful feature, you'll probably also have help desk complaints about it because people do want to use it but are frustrated.

Error Tolerant. Web logs can be useful here if you've established the usefulness and learnability of the feature. You can look to see how often error messages get served up. Harder to detect are mistakes that aren't necessarily software errors, like navigating to the wrong part of the web app. Usability tests (qualitative or summative) will help establish this aspect.
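
For the log side of that, here is a minimal sketch in Python that counts error responses per path, assuming a standard combined-format access log (the "access.log" file name is a hypothetical stand-in):

    import re
    from collections import Counter

    # Matches the request and status fields of a combined-format log line,
    # e.g. ... "GET /app/report HTTP/1.1" 500 ...
    LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) [^"]*" (?P<status>\d{3})')

    errors = Counter()
    with open("access.log") as f:
        for line in f:
            m = LINE.search(line)
            if m and m.group("status")[0] in "45":
                errors[m.group("path")] += 1

    # Paths serving the most 4xx/5xx responses: a starting hypothesis,
    # not proof, since logs can't show mistakes like navigating to the
    # wrong part of the app.
    for path, count in errors.most_common(10):
        print(f"{count:6d}  {path}")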

Engaging. If the graphic design makes a button look unclickable, or if things are just ugly and hard to figure out, a usability test will help show this. Web logs can again provide evidence to support a hypothesis.

When making design changes to an established system, you should form a hypothesis and collect data from several sources to make a decision. I place particular weight on direct observations of user behavior. Since it's not usually practical to hold dozens of small user tests for all the features that come up, I think it makes sense to look at the issues on the table, prioritize them, and then hold one test that hits the most important ones.
