1. Usability Evaluation
Usability is a quality attribute that assesses how easy user interfaces are to use. The word also refers to methods for improving ease of use during the design process. It is defined by five quality components – learnability, efficiency, memorability, errors and satisfaction. Utility (the design’s functionality) is also an important quality. Usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability differs from user satisfaction and user experience because usability also considers usefulness.
Any system designed for people should be easy to use, easy to learn, easy to remember, and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow three design principles – early focus on users and tasks, empirical measurement, and iterative design. Usability is so important because, on the web, it is a necessary condition for survival. Simply put, if a website is difficult to use, people leave. When users encounter any difficulty on your site, their first line of defense is to leave. If they can’t understand what your company or site is all about from the home page, if your e-commerce website doesn’t clearly describe the products you’re selling, or if users can’t easily find the products they’re looking for, people simply leave the site. If employees spend time pondering where to find information on the company’s website, that is productive time lost, and hence money spent paying people who are doing less or no work. Current best practice is to spend about 10% of a project’s design budget on usability. This has the effect of more than doubling a website’s desired quality metrics, and just under doubling an intranet’s quality metrics. Improving usability also leads to a marked reduction in training budgets.
There is a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. The most basic and useful is user testing, which has three components:
· Get hold of some representative users (customers for an e-commerce site, or company employees for an organization’s intranet)
· Ask the users to perform representative tasks with the design
· Observe the users’ actions, looking out for where they succeed or fail and where they have difficulties; let them do the talking, NOT you.
The ideal process is to test users individually and let them find their own problems and try to solve them, rather than redirecting their attention to possible solutions. Five users is a good sample size to test with. The best way to increase the quality of the user experience is through iterative design: the more versions and interface ideas you test with users, the better. Focus groups are not a good way to evaluate usability; you have to actually watch users doing the tasks, rather than listening to what they have to say about them.
Usability is
important in each stage of the design process. Formal usability studies in the
form of controlled experiments aim to advance the field's
understanding of how people use interfaces, to determine which design concepts
work well under what circumstances, and why. They can also be used to help
decide if a new feature or a change in approach improves the performance of an
existing interface, or to compare competing interfaces. The main
steps involved in fast and cheap formal individual studies for usability
testing are:
· Before beginning the new design, test the old design to identify what should be kept, improved, or eliminated
· Test your competitors’ designs, unless you’re working on an intranet
· Conduct a field study to evaluate users in their normal habitat
· Make paper prototypes of one or more new design ideas and test them
· Use multiple iterations to refine the design ideas
· Use established usability guidelines to refine the design
· Once the final design is implemented, re-test it
A higher-quality user experience can only be assured by starting user testing early in the design process and by continuing to test at every step of the way.
If usability testing is conducted at least once a week, it’s recommended to have a dedicated usability laboratory. Most companies simply use conference rooms and offices, which works as long as distractions can be prevented; the most important factor is being able to get hold of the users and sit them down while they use the design. All you need yourself is a pencil and a notepad.
Designing a
new usable search interface and convincingly assessing its usability can be
surprisingly difficult. Small details in the
design of the interface can have a strong effect on a participant's subjective
reaction to or objective success with the interface.
Traditional information retrieval research focuses on evaluating the proportion of relevant documents retrieved in response to a query as the measure for assessing a search interface. Three main aspects of usability are usually used to evaluate search interfaces – Effectiveness (the accuracy and completeness with which users achieve specified goals), Efficiency (the resources expended in relation to the accuracy and completeness with which users achieve goals), and Satisfaction (freedom from discomfort and a positive attitude towards the use of the product).
Evaluation of search systems is often equated with evaluation of ranking algorithms, and this evaluation is done in an automated fashion, without involving users. The
most common evaluation measures used for assessing ranking algorithms are
Precision, Recall, the F-measure, and Mean Average Precision (MAP). Precision is defined as the
number of relevant documents retrieved divided by the number of documents
retrieved, and so is the percentage of retrieved documents that are relevant.
Recall is the number of relevant documents retrieved divided by the number of
documents that are known to be relevant, and so is the percentage of all
relevant documents that are retrieved.
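To make these definitions concrete, here is a minimal Python sketch of precision, recall, the F-measure, and average precision (MAP is simply the mean of average precision over a set of queries). It assumes retrieved results and relevance judgments are given as collections of document identifiers; the function names are illustrative, not taken from any particular library.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)


def recall(retrieved, relevant):
    """Fraction of all known-relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(relevant)


def f_measure(retrieved, relevant):
    """Harmonic mean of precision and recall."""
    p, r = precision(retrieved, relevant), recall(retrieved, relevant)
    return 2 * p * r / (p + r) if (p + r) else 0.0


def average_precision(ranked, relevant):
    """Average of the precision values at each rank where a relevant document appears."""
    relevant = set(relevant)
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0


# Mean Average Precision (MAP) is the mean of average_precision over a set of queries.
```

For example, if three of the top ten retrieved documents are relevant and ten relevant documents are known in total, precision is 3/10 = 0.3 and recall is 3/10 = 0.3.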
The TREC evaluation method has been enormously valuable for comparing competing ranking algorithms, but it has also drawn considerable criticism: the evaluation does not require searchers to interact with the system, create the queries, judge the results, or reformulate their queries, and the ad hoc track does not allow for any user interface whatsoever.
It can be useful to adjust the measures of precision and recall when assessing interactive systems. One such adjustment is the measure of immediate accuracy, which captures relevance according to this kind of interactive behavior: it is the proportion of queries for which the participant has found at least one relevant document by the time they have looked at k documents selected from the result set.
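As an illustration only, the following Python sketch computes immediate accuracy, under the assumption that each session is recorded as a (query, list of examined documents) pair and that relevance judgments are available per query; both data structures are assumptions made for the example.

```python
def immediate_accuracy(sessions, relevant_by_query, k):
    """Proportion of queries for which the participant found at least one
    relevant document within the first k documents they examined."""
    if not sessions:
        return 0.0
    successes = 0
    for query, examined_docs in sessions:
        relevant = relevant_by_query.get(query, set())
        if any(doc in relevant for doc in examined_docs[:k]):
            successes += 1
    return successes / len(sessions)
```

With k = 1 this reduces to the proportion of queries where the very first document examined was relevant.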
Informal usability testing can also be carried out in various ways. There is no exact formula for producing a good user interface, but interface design indisputably requires the involvement of representative users. Before any design starts, prospective users should be interviewed, or observed in field studies doing the tasks that the interface must support. This is followed by a repeated cycle of design, assessment with potential users, analysis of the results, and subsequent re-design and re-assessment.
Involvement of members of the target user base is critical, and so this process
is often referred to as user-centered design. Potential users
who participate in the assessment of interfaces are usually referred to as participants. Showing designs to participants and
recording their responses to ferret out problems as well as identify positive
aspects of the design are referred to as informal usability testing. Informal
usability studies are typically used to test a particular instantiation of an
interface design, or to compare candidate designs, for a particular domain and
context. In the first rounds of evaluation, major problems can be identified
quickly, often with just a few participants. Although participants usually do
not volunteer good design alternatives, they can often accurately indicate
which of several design paths is best to follow. Quick informal usability tests
with a small number of participants are an example of what has been dubbed discount usability testing, as opposed to
full formal laboratory studies. Ideally, usability study participants are drawn from a pool of
potential users of the system under study; for instance, an interface for
entering medical record information should be tested by nurse practitioners.
Often the true users of such a system are too difficult to recruit for academic
studies, and so surrogates are found, such as interns who are training for a
given position or graduate students in a field. For academic HCI research,
participants are usually recruited via flyers on buildings, email
solicitations, as an (optional) part of a course or by a professional
recruiting firm.
To obtain a more accurate understanding of the value and usage
patterns of a search interface (in order to obtain what is called ecological
validity in the social
sciences literature), it is important to conduct studies in which the
participants use the interface in their daily environments and routines, and over
a significant period of time.
In order to carry out a successful usability test with a paper prototype, the following should be taken into account:
· Always remember to compensate your participants, so the test doesn’t feel like a chore
· Put the participant at ease, and give them control
· Ask questions that qualify the participant, such as their frame of reference (how often they go online, what websites they often visit, what the triggers and conditions for their activity are…)
· Start with open questions, then dig deeper if the user is brief
· Give users open-ended tasks instead of telling them what to do
· Ask users what they expect will happen if they take a particular action
· Use whatever medium is easiest to create
· With a little creativity, you can test complex interaction before investing in coding and design, since most of the time users are able to interact with paper prototypes as if they were the real thing, and the prototypes can easily accommodate unforeseen actions
· It helps to learn participants’ preferences, even if they are not exactly your demographic target
· End with a question asking whether there’s anything else you should talk about, or anything that would help improve the current state of things. Sometimes you get great information from this.
A longitudinal study tracks participant behavior while using a system over an extended period of time, as opposed to the first-time usage that is typically assessed in formal and informal
studies. This kind of study is especially useful for evaluating search user
interfaces, since it allows the evaluator to observe how usage changes as the
participant learns about the system and how usage varies over a wide range of
information needs. The longer time frame also allows potential users to get a
more realistic subjective assessment of how valuable they find the system to
be. This can be measured by questionnaires as well as by how often the
participant chooses to use the system versus alternatives.
Most Web search engines record information about
searchers' queries in their server
logs (also
called query logs).
This information includes the query itself, the date and time it was written,
and the IP address that the request came from. Some systems also record which
search results were clicked on for a given query. These logs, which
characterize millions of users and hundreds of millions of queries, are a
valuable resource for understanding the kinds of information needs that users
have, for improving ranking scores, for showing search history, and for
attempts to personalize information retrieval. They are also used to evaluate
search interfaces and algorithms. In query log analysis, an individual person is usually
associated with an IP address, although there are a number of problems with
this approach: some people search using multiple different IP addresses and the
same IP address can be used by multiple searchers. Nonetheless, the IP address
is a useful starting point for identifying individual searchers.
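As a hedged illustration of that starting point, the Python sketch below groups query-log records by IP address; the CSV column names (ip, timestamp, query, clicked_url) are assumptions about the log format rather than any standard schema.

```python
import csv
from collections import defaultdict


def sessions_by_ip(log_path):
    """Group query-log records by IP address as a rough proxy for individual searchers.

    Assumes a CSV log with columns: ip, timestamp, query, clicked_url
    (clicked_url may be empty when no result was clicked).
    """
    searchers = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            searchers[row["ip"]].append(
                (row["timestamp"], row["query"], row.get("clicked_url") or None)
            )
    return searchers
```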
An important form of usability testing that takes advantage of the huge numbers of visitors to some Web sites is large-scale log-based usability testing, also known as bucket testing. In
the days of shrink-wrapped software delivery, once an interface was coded, it
was physically mailed to customers in the form of CDs or DVDs and could not be
significantly changed until the next software version was released, usually
multiple years later. The Web has changed this paradigm so that many companies
release products in “beta,” or unfinished, status, with the tacit understanding
that there may be problems with the system, and the system will change before
being officially released. More recently, the assumptions have changed still
further. With some Web sites, especially those related to social media, the
assumption is that the system is a work-in-progress, and changes will
continually be made with little advance warning. The dynamic nature of Web
interfaces makes it acceptable for some organizations to experiment with
showing different versions of an interface to different groups of currently
active users. A major limitation of bucket testing is that the test can
run effectively only over the short term because the user pool shifts over time
and some users clear their cookies (which are used by the bucket tests to keep
track of user IDs). Additionally, it is commonly observed that when comparing a
new interface versus one that users are already familiar with, users nearly
always prefer the original one at first.
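One common way to keep a visitor in the same bucket across visits is to hash the cookie-based user ID and map it onto a variant, as in the minimal Python sketch below; the variant names and the choice of SHA-256 are illustrative assumptions, not a description of any particular site’s system.

```python
import hashlib


def assign_bucket(user_id, variants=("control", "new_design")):
    """Deterministically assign a (cookie-based) user ID to an interface variant.

    Hashing the ID means the same visitor keeps seeing the same version across
    visits, unless they clear their cookies, which is one of the limitations
    noted above.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```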
Several concerns have been raised with the
evaluation of search interfaces. Evaluating information-intensive applications
such as search is somewhat different from, and often more difficult than,
evaluating other types of user interfaces. Some pertinent issues are discussed
below, along with best practices for evaluation of search interfaces.
· Avoid experimental bias
· Encourage participant motivation
· Account for participants’ individual differences
· Account for the differences in tasks and queries
· Control test collection characteristics
· Account for differences in the timing response variable
· Compare against a strong baseline
2. Mobile Design
Designing for the Mobile Web has experienced a recent explosion due to user adoption of mobile devices, and this has greatly revolutionized the World Wide Web. Though designing for the Mobile Web follows similar principles to designing websites, there are still noticeable differences: current mobile networks don’t run at the same speed as broadband connections, and there is a myriad of ways our mobile web designs can be displayed, from touch screens to netbooks, which make even the smallest desktop monitors look like giants.
The way the mobile design is to be delivered is one of the first elements that needs to be considered. The ideal scenario would be that each device simply scales and adapts your existing website, and some devices, such as the iPhone, can do this because of their built-in web browsers. But with so many devices out there, a cross-device mobile design is difficult to achieve. Designing for mobile can also be very difficult since the designer has to deal with more than one markup language or platform (WML for older handsets, plus iOS for Apple devices and Android for Android devices), unlike desktop-based web design, which relies on a single markup language, HTML.
One option for bringing a site to the Mobile Web is to simply create or modify your existing code and design to work well on mobile devices, or to build from scratch with mobile devices in mind.
Another method for delivering a mobile design is to build a specially optimized layout for handheld devices. You can build this yourself or use a web service such as Mobify.
Whichever route you decide to take,
it’s important that:
· Visitors know that a mobile-friendly version of your site is available
· Visitors can choose between the mobile version and the normal version (one way to offer this choice is sketched below)
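As one possible way of providing both the detection and the choice, here is a hedged Python sketch using Flask; the mobile host m.example.com, the cookie name site_version, and the user-agent hints are all hypothetical.

```python
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

# Hypothetical user-agent hints; a production site would use a maintained list.
MOBILE_HINTS = ("iphone", "android", "blackberry", "opera mini", "windows phone")


@app.before_request
def offer_mobile_version():
    # Respect an explicit choice stored in a cookie, and never redirect the mobile host itself.
    if request.cookies.get("site_version") == "full" or request.host.startswith("m."):
        return None
    user_agent = (request.headers.get("User-Agent") or "").lower()
    if any(hint in user_agent for hint in MOBILE_HINTS):
        # Hypothetical mobile host; visitors still land on the page they asked for.
        return redirect("https://m.example.com" + request.path)
    return None


@app.route("/use-full-site")
def use_full_site():
    # Let visitors opt back into the normal version and remember the choice.
    response = make_response(redirect("/"))
    response.set_cookie("site_version", "full", max_age=30 * 24 * 3600)
    return response
```

The before_request hook redirects likely mobile visitors to the mobile host, while the /use-full-site route records their preference so the redirect is not forced on them again.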
The next consideration for mobile web design is the structure and code (markup and styles) that goes on behind the scenes. The following points should be carefully analyzed:
· Should you use WML or a mobile profile of HTML?
· Should you build separate apps for iPhones, Androids and Blackberries?
· What are the effects of mobile devices’ cost and speed constraints on your design?
· Should you consider modern standards like HTML5 and CSS3?
Choosing the right language for a mobile-friendly website is paramount; while older devices from before the smartphone revolution only support WML, the W3C has since produced a mobile-friendly version of XHTML. Ultimately, whichever language you choose, the primary considerations you need to think about are speed and user cost.
Designing layouts for mobile devices can also be a pain in the neck, for the following reasons:
· Mobile devices come in all shapes and sizes
· Mobile devices have different levels of quality and resolution
· Some mobile devices support zooming, while others only scroll content
· Scrolling on mobile devices is more difficult because of their small screens
The goal of a mobile web design’s layout is to place the least possible burden on the user’s ability to find (and quickly read) what they’re looking for. Simplicity is one of the main concepts behind an effective mobile web layout: the more information you pile into a small space, the harder it becomes to read and the more scrolling will be required.
Even though some mobile
devices like the iPhone and iPad have the ability to zoom web pages in and out
to avoid scrolling, not all do. We should try to limit scrolling as much as
possible during mobile web design.
The issue of navigation and clickable regions is another concern, predominantly with touchscreen mobile devices. Ensuring that your mobile layout has large, easy-to-press links and clickable objects is essential for streamlining the experience. Reducing the number of clicks required to achieve an action, which is good practice regardless of whether or not you’re designing a mobile site, is all the more important in mobile web design.
The most costly component of a website is the content, due to the cost of browsing and caps on data allowances. Knowing how to trim excess images, text and media can come in very handy and save users money. Of all the components of a site, none plays a more vital role than the text. When working with a small screen, large CSS background images or byte-heavy infographics can be problematic. On the modern web it is inevitable that audio and video will be needed. Even with the bandwidth issues that exist, you shouldn’t stop using these richer forms of content, as they can be great, especially on handheld mobile devices with excellent video/audio quality such as the iPhone or iPod Touch. But just like with everything else, moderation and smart usage are key.
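As a rough way of checking how byte-heavy a page’s assets are before trimming them, the following Python sketch totals the declared sizes of a list of asset URLs; it assumes the server answers HEAD requests and includes a Content-Length header, which is not guaranteed for every asset.

```python
from urllib.request import Request, urlopen


def page_weight_bytes(asset_urls):
    """Sum the declared byte sizes of a page's assets using HEAD requests.

    Handy for spotting byte-heavy images or media before shipping a mobile layout.
    """
    total = 0
    for url in asset_urls:
        with urlopen(Request(url, method="HEAD")) as response:
            size = response.headers.get("Content-Length")
            if size is not None:
                total += int(size)
    return total
```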
Even though the availability of web-based services is fantastic, I do worry that the dependence on a constant and reliable (always-on) web connection is very much going to be a problem for web apps given the current state of mobile device networks. While there have been moves towards local storage mechanisms, for now, web apps that rely on persistent internet connections can let mobile device users down because of the limitations of their networks.
With so much diversity in the mobile
device landscape, you should test your designs
on as many platforms as you can manage. A long list of emulators exists that will simulate certain devices so you can test your work.
For now, and
until mobile network infrastructure improves and connectivity is widely
available - simple, small and speedy are the three main principles we
should abide by.