Tuesday, May 13, 2014

Week 14 – Making the Case for Information Architecture


Making the case for Information Architecture (IA) has never been an easy task, even for the most seasoned Information Architect (also IA). In fact, this might be the single most important task an IA has to complete to secure employment, not only for themselves but for their entire team, and perhaps eventually for everyone in the organization or company, given the heavy returns on investment that a properly, professionally designed website or intranet can generate.

As an IA, selling your services and creating a business case for Information Architecture will involve several methodical steps, put together by you as a proposal showing how you can bring more value than what a development team and a graphic designer alone can offer. I, for one, would use the following methodology to present my case:

       I.          Make a short presentation of no more than 8 slides to each and every stakeholder, mentioning and discussing the following deliverables as the outcome of your efforts

                           i.          Site Scope
In your presentation of what the Site Scope brings to the table, you should identify the essential problem your site is going to solve or the opportunity it will afford users. You need to show your understanding of the needs of both the target audience and the site sponsors, and articulate a broad but justified rationale for how the site will meet those needs through its information architecture. This brings added value that can never be accomplished by simple graphic design combined with application development. During your discussion with each stakeholder, you should present a clear-cut methodology for how you intend to proceed with the site scope effort. Typically, your methodology should outline the following techniques:
·       Data collection – discuss the importance of data collection for the site design effort, using techniques such as user interviews and questionnaires, competitive benchmarking and stakeholder interviews
·       Data analysis – discuss and present how you intend to analyze the data collected in the previous step, and the various data analysis techniques you'll be employing in this effort, such as benchmarking analysis and user analysis.
·       Results presentation – you, as the IA, should give the stakeholders a foretaste of the delicious meal they'll be consuming in the future by explaining how you intend to present the results of the research and analysis above: how you will bring out the major problems with the current state of affairs, summarize the results of your benchmarking, and propose clear-cut solutions to the problems you identify.
You can even go ahead and tell them that if, after this initial Site Scope process, they do not see any value in the project, then it should be scrapped. Be very careful here, however, because you have just taken a very big risk, one which can either pay off enormously or lead to the death of your project.

                         ii.          Blueprint
Here, you want to mention and discuss with all the stakeholders how you'll present a clear pictorial, as well as literal, description of the structure of the site you'll be designing, something beyond the reach of a developer or graphic designer who doesn't understand how information should be structured on a website.
You should sell the value of the site's blueprint by elaborating on what it will deliver in the end, which is usually a representation of the information organization and navigation of the site, and perhaps mention which labelling techniques you intend to implement on the site.

                       iii.          Wireframes
The value of wireframes in selling your design can never be overlooked. Your initial discussions with the project stakeholders should definitely cover how you intend to create wireframes for the website or intranet to be developed (again, something outside the scope of a developer or plain graphic designer).
Here, you want to explain how you will represent the layout of the content and navigation for individual pages within the site in the form of a low-cost prototype. Discuss how you intend to use wireframes to highlight those pages considered complicated or unique, or which serve as templates for other pages.

Using the above methodology, together with good presentation skills, should go a long way toward selling your IA project and convincing stakeholders why you, as an IA, are needed to lead this effort, NOT developers and graphic designers. However, as an IA, you should do more than the above to sell your project. This is outlined in section II below.


     II.          Making your case and Selling your Information Architecture

You, as the IA, must be prepared to take the case forward for what you do. Be prepared to turn negative thinking into positive, since most people still don't know or understand the value of Information Architecture. You need to be ready for this, not just getting the point across initially, but being able to "sell" what you do on the ground. Hence, you need to be a salesperson at this point to be able to convince the stakeholders to kick off the project.

As a generalization, it's been found that business people typically fall into two groups: "by the numbers" folks and "gut reactionaries". The former require data to help make their decisions; they need figures to rationally consider return on investment (ROI) as the basis of their business decisions. The latter, on the other hand, do what feels right; they trust their instincts and often have plenty of good experience to draw on. As an IA, you'll encounter, and will have to deal with and sell to, both types of business people, so be prepared for both. If necessary, you should be able to run the numbers, present the various factors involved in your IA project as a function of cost, and convince your stakeholders with these figures.
It's generally possible to measure the value (and ROI) of some of an architecture's individual components. For example, we may be able to determine how well users navigate a broad and shallow hierarchy versus a narrow and deep one. Or we might measure how users respond to one way of presenting search results versus another. If necessary, you, the IA, should quantify these values and present them to your audience as justification for your project.

On the other side, the success of the case you present to gut reactionaries often depends on luck as much as anything else, but keep the saying "we make our own luck" at the back of your mind: as an IA, you should have the words that can tilt luck in your favor. One of the best ways to engage and educate such individuals is by telling first-hand "stories". You might be lucky enough that your gut reactionary doesn't have much experience in the subject matter; when you find this opening, fill it as fast and as perfectly as you can. Use this technique to put them in the shoes of a peer who faces a comparable situation, feel that person's pain, and help them see how information architecture helped the situation. An effective story should provide the listener with both a role and a situation to identify with. The role and the situation should set up a painful, problematic situation so that the listener feels the pain and can see how investing in IA can help make it go away.

Making your case as an IA, can and should involve most of the following case-making techniques:

i.                 User sensitivity "boot camp" sessions – get decision makers who aren't web-savvy in front of a web browser. Ask them to try to accomplish three or four basic and common tasks using their own website (or a competitor's), and have them think aloud while you take notes on their problems
ii.               Expert site evaluations – quickly identify 5 or 10 major IA problems in a site. This can make a huge impression in a written presentation or in the context of a sales call.
iii.             Strategy sessions – one- to two-day sessions geared toward bringing together decision makers and opinion leaders, providing them with a brief introduction to IA, and discussing the company's strategy and its issues with information overload.
iv.             Competitive analyses – already discussed above, a site’s IA issues can be riveting when the site is placed alongside its competitors. Always look for opportunities to compare architectural components and features to help prospects and clients see how they stack up. Present these analyses to the stakeholders.
v.               Comparative analyses – compare the existing site or intranet with comparable sites, comparing specific features, such as search interfaces or shopping carts and present your findings to stakeholders.
vi.             Be aggressive and be early – make sure the IA is included in the marketing and branding that comprise the firm’s public face, not to mention the list of services.

Whatever technique you use, consider these three pieces of advice:
·       Pain is your best friend – more than anything else, work hard to identify the source of a prospect or client’s pain
·       Articulation is half the battle – help your clients talk about their pain and issues, and be prepared to use the right words to sell them the solution to their pain and problems.
·       Get off your high horse – be ready to defuse the jargon with alternative, “real-language” descriptions of what IA really is and what problem it addresses.

Whatever technique you use to make the case for IA, and whether you're making a quantitative or qualitative case, there should be a checklist you can follow, relevant to your story, that answers all potential questions you might get from potential clients and stakeholders. As you prepare to make your case, review this checklist to make sure you're not missing any important point. Your typical checklist can be the following advantages and points you intend to present and defend to sell your IA project:
·       Reduces the cost of finding information
·       Reduces the cost of finding wrong information
·       Reduces the cost of not finding information at all
·       Provides a competitive advantage
·       Increases product awareness
·       Increases sales
·       Makes using a site a more enjoyable experience
·       Improves brand loyalty
·       Reduces reliance upon documentation
·       Reduces maintenance costs
·       Reduces training costs
·       Reduces staff turnover
·       Reduces organizational upheaval
·       Reduces organization politicking
·       Improves knowledge sharing
·       Reduces duplication of effort
·       Solidifies business strategy

As a final note, whichever points and approaches you use to make your case for IA, keep in mind how difficult this challenge is and be ready to tackle it, since IA is still seen as the new kid on the block and is generally a lot harder to sell than other goods and services out there. Hence, be ready to be that information salesperson.

On the other hand, problems associated with the information explosion (the unregulated, runaway growth of information stored in websites and intranets) are only going to get worse as a result of poor maintenance of content and data stores, hence the need for seasoned Information Architects.

Monday, April 28, 2014

Week 13 – IA Tools and Software

Several tools come in handy for the development and management of IA and web content. These tools, however, can also lead to chaos, given their varied nature and classification across various functions. For the IA, choosing the right tool or software can sometimes be a challenge because various factors come into play in determining the right choice(s). Some of the tools and categories available to the IA include software for automated categorization, search engines, thesaurus management tools, portals or enterprise knowledge platforms, content management systems, web analysis/tracking, diagramming software, prototyping tools, and user research and testing tools.
Automated categorization software is also known as automated classification, automated indexing, automated tagging or clustering software. Examples are Interwoven's MetaTagger and Vivisimo's Clustering Engine. These tools use human-defined rules or pattern-matching algorithms to automatically assign controlled vocabulary metadata to documents.
Search Engines provide full-text indexing and searching capabilities. Examples include Google Enterprise Solutions and Fast.
Thesaurus Management tools provide support for the development and management of controlled vocabularies and thesauri. Examples include Factiva Synaptica and WebChoir.
Portals or Enterprise Knowledge Platforms provide completely integrated enterprise portal solutions. Examples are MS SharePoint Portal Server, IBM’s WebSphere Portal and Oracle Portal.
Content management systems manage workflow from content authoring to editing and publishing. They make it easier and more efficient to create, edit and publish web content, and they range from small applications to huge enterprise-wide solutions. Examples include WordPress, Drupal and Documentum.
Analytics software analyzes the usage and statistical performance of websites, providing valuable metrics about user behavior and characteristics. Examples are Google Analytics and WebTrends.
Diagramming software comprises visual communication tools that IAs use to create diagrams, charts, blueprints and wireframes. Examples include MS Visio, PowerPoint and OmniGraffle.
Prototyping tools are web development software that enables IAs and web designers to create interactive wireframes and clickable prototypes. Examples include Dreamweaver, Visio and Flash.
Whatever categories or software/tools you, the IA, might choose to use, there's still a lot of research to do and questions to ask in order to make the right decision, balancing technology and price as well as appropriate functionality. The most important advice from experts is to know your needs, your process and the end users' abilities before making your choice. CMSWatch.com is a fee-based consulting service that publishes reports on CMSes and can help you select one. Be realistic about your needs, devote extra time to information architecture, and don't neglect the content in favor of flashy/sexier IA and technology.
Also, prior to choosing any particular tool, software suite or package from a particular vendor, always get an engineer from within the vendor's firm to answer the most probing questions about the tool: what it does well, what it does poorly, and what they wish it could do.

Sunday, April 20, 2014

Week 12 - Search Systems and Search Engine Optimization

Information retrieval through search and search engines is challenging, expensive and well-established. When search becomes a necessity, some sites or intranets incorporate search systems from services that let you search the entire web. There are three different ways of searching the web:
·        A search within your site or its sub-sites, e.g. a search within www.dice.com and its sub-sites
·        Search indexes of web pages, e.g. those of www.bing.com
·        Metasearch, which involves searching across multiple sites, e.g. www.clusty.com and www.dogpile.com

The website http://searchenginewatch.com/ is a great resource for the latest information on web searching. The IA has to decide whether their site needs to be searchable or not. They should be very careful not to make the typical assumption that a search engine alone will satisfy all users' information needs. There are users who forgo the search utility and prefer to browse the site to get a feel for things. Before the IA decides to add search functionality to their site, they should carefully answer the following questions:
·        Is there sufficient content in your site?
·       Does the company have sufficient resources to invest in this effort? Is the investment going to divert resources from more useful navigation systems?
·        Are the time and technical know-how available to invest in optimizing your search system?
·        Are there better alternatives to search?
·        Will your site’s users actually bother to use its search system?

Planning the capacity of your site or intranet can sometimes be very tricky, and it can determine whether to include a search system or not. When sites become very popular, they grow organically, and more and more functional features get piled on haphazardly, leading to a navigation nightmare. Certain signs can help the IA decide whether or not their site has reached the point of needing a search system:
·        Your site has too much information to browse
·        If the site has become fragmented, it can definitely use some help from a search system
·        Search can actually become a learning tool to help improve the site through the analysis of the search logs
·        Nowadays, search actually needs to be there because it has become a user expectation: most users typically expect to find a search box on every single website they visit
·        If your site has highly dynamic content, you should definitely add a search system to it.

The IA should make search-inclusion decisions based on the end users of the site; hence they should know their site's users. The decision whether or not to add search functionality to an intranet or a website is greatly influenced by how much the IA knows his or her site's users. This decision should be made solely with the users in mind, rather than based on the available technology. The search system interfaces directly with the site's users, hence the user should be king in influencing this decision.

A search system is usually a three-part configuration. At the center of this configuration is the search engine, which contains indexes built from indexed documents and processes the queries that searchers submit via the search interface. Matching indexes are returned as results to the queries supplied to the search engine. Documents, usually web pages, serve as the input to the search system. Indexing can be manual or automatic. Traditional manual systems for compiling indexes of documents make use of cards, such as library catalogue cards, but nowadays a good computerized personal reference system is preferred. For each document acquired, the bibliographic identification elements are written, or typed, on a card. Thus, for a journal article, the structure is: author's surname and forenames; article title; periodical title; volume number; part number; date of publication; pages. Keywords or descriptors of the contents should also be written up. Alternatively, a short abstract or summary can be included (you can often make use of abstracts written by the author). The use of a standardized reference format is recommended. In automatic indexing, spiders and robots crawl websites and index pages according to their own rules. As a result, they build large databases containing the indexes.
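
The automatic indexing described above can be sketched as a tiny inverted index in Python. The document names and contents here are made up for illustration; real engines add tokenization, stopword removal and ranking on top of this core idea:

```python
# Minimal sketch of automatic indexing: an inverted index mapping each
# term to the set of documents that contain it.
from collections import defaultdict

docs = {
    "page1": "information architecture for the web",
    "page2": "search systems and web indexing",
    "page3": "designing the search interface",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the documents whose index entry contains the term."""
    return sorted(index.get(term.lower(), set()))

print(search("search"))  # both page2 and page3 contain "search"
print(search("web"))     # page1 and page2
```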

Determining what to search can also be tricky. Whether to search the entire site or just specific pages or documents, whether to create search zones, and whether to index the entire site or just specific pages, documents or zones within it are all decisions to be made by the IA during search system design. Sometimes it becomes necessary to designate search zones to limit searches to a portion of the site or intranet. It might also be necessary to create a mini search site within the website itself; this search site can be either a sub-site or a document type. Some sites might call for incorporating web-wide search, which involves searching through multimedia and heterogeneous sites with diverse content. Search can also involve full-text searches of the information being requested, or just the metadata about what's being requested. The IA also has to decide what type of indexing to incorporate within the search engine for documents: either content words or just the important words found in the metadata fields. Indexing can also target specific audiences, or be organized by topic, recency of content, reading level, date of update, user task, etc.

Search algorithms find items with specified properties among a collection of items. The items may be stored individually as records in a database; or may be elements of a search space defined by a mathematical formula or procedure, such as the roots of an equation with integer variables; or a combination of the two, such as the Hamiltonian circuits of a graph. There are about 40 different retrieval algorithms that retrieve information in different ways. Most of these algorithms employ pattern matching, and their effectiveness is measured in terms of recall and precision.
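
Recall and precision, the two metrics just mentioned, are easy to compute once you know which retrieved documents are actually relevant. A small sketch, with hypothetical document-ID sets:

```python
# Precision: what fraction of the retrieved documents are relevant?
# Recall: what fraction of the relevant documents were retrieved?
def precision(retrieved, relevant):
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d5"}

print(precision(retrieved, relevant))  # 2 of the 4 retrieved are relevant -> 0.5
print(recall(retrieved, relevant))     # 2 of the 3 relevant were found -> ~0.67
```

Synonym rings (covered in Week 11) raise recall at the expense of precision; stricter matching does the opposite.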

Query builders affect the outcome of a search by souping up a query’s performance. They are usually invisible to users and common examples include:
·        Spell checks
·        Phonetic tools (the best-known of which is “Soundex”)
·        Stemming tools that allow users to enter a term and retrieve documents containing variants of that term
·        Natural language processing tools
·        Controlled vocabularies and thesauri
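
A toy suffix-stripping stemmer gives a feel for the stemming tools listed above. Real systems use algorithms such as Porter's; the suffix list here is illustrative, not exhaustive:

```python
# Strip a known suffix so that word variants collapse onto one stem,
# letting a query on one variant match documents containing the others.
SUFFIXES = ("ing", "ers", "er", "ies", "es", "s")

def stem(word):
    word = word.lower()
    for suffix in SUFFIXES:
        # Only strip when a reasonable stem (3+ characters) remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# "computer", "computers" and "computing" all reduce to the same stem.
print(stem("computers"))  # comput
print(stem("computing"))  # comput
```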

The IA will also need to determine afore-hand and make choices on how the results for the search engines are to be presented. Here, there are two main issues to consider:
·        Which content components to display for each retrieved document – display less information to users who know what they're looking for, and more information to users who aren't sure what they want; also decide how many results to show and how much information to show for each item.
·        How to list or group the search results – by category, alphabetically, chronologically, ranked by relevance, ranked by popularity, by users' or experts' ratings, or by pay-for-placement (different sites bid for the right to be ranked high, or higher, in users' result lists).
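
Two of the listing choices above, ranking by relevance versus sorting alphabetically, can be sketched over the same result set. The result records and scores here are hypothetical:

```python
# Each retrieved item carries a title and a relevance score; the same
# list can be presented in different orders depending on the design choice.
results = [
    {"title": "Site maps", "score": 0.61},
    {"title": "Wireframes", "score": 0.92},
    {"title": "Blueprints", "score": 0.75},
]

by_relevance = sorted(results, key=lambda r: r["score"], reverse=True)
alphabetical = sorted(results, key=lambda r: r["title"])

print([r["title"] for r in by_relevance])  # ['Wireframes', 'Blueprints', 'Site maps']
print([r["title"] for r in alphabetical])  # ['Blueprints', 'Site maps', 'Wireframes']
```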

Designing the search interface means putting together what to search, what to retrieve, and how to present the results in a single interface. With a varied user community and varied search-technology functions, there are also many different types of search interfaces. Designing the search interface will involve considering the following variables:
·        Level of searching expertise and motivation
·        Type of information need
·        Type of information being searched
·        Amount of information being searched

Monday, April 14, 2014

Week 11 - Thesauri, Controlled Vocabularies and Metadata

Websites and intranets, as the names suggest, involve nests, webs and inter/intra-connections of systems, data and information that interact with each other. Making sense of this information mumbo jumbo independently can be very tricky, sometimes impossible, even with the use of reductionism. Controlled vocabularies and metadata allow the IA to navigate the network of relationships between these systems. They provide a way to organize knowledge for subsequent retrieval, and are used in subject indexing schemes, subject headings, thesauri, taxonomies and other forms of knowledge organization systems.

A controlled vocabulary is any defined subset of natural language. It is a list of equivalent terms in the form of a synonym ring, or a list of preferred terms in the form of an authority file. Controlled vocabulary schemes mandate the use of predefined, authorized terms that have been preselected by the designer of the vocabulary, in contrast to natural language vocabularies, where there is no restriction on the vocabulary.

Synonym rings connect a set of words that are defined as equivalent for the purposes of retrieval. When a user enters a search term contained in a synonym ring, the results will include matches for all the words within the ring as well. Therefore these rings can dramatically improve search results by increasing the recall of the search.
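
The ring mechanism just described can be sketched in a few lines of Python. The terms and rings here are made up for illustration:

```python
# Each set is a synonym ring: any member expands the query to the
# whole ring, trading some precision for higher recall.
RINGS = [
    {"laptop", "notebook", "portable computer"},
    {"tv", "television", "telly"},
]

def expand(term):
    """Return the full synonym ring for a term, or just the term itself."""
    term = term.lower()
    for ring in RINGS:
        if term in ring:
            return sorted(ring)
    return [term]

print(expand("notebook"))  # ['laptop', 'notebook', 'portable computer']
print(expand("router"))    # ['router'] - no ring, term passes through unchanged
```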

Authority files are lists of preferred terms or accepted values. They help keep systems accurate and consistent by restricting the allowed terms for a given domain. They can include a synonym ring with one of the words selected as the preferred term. These files are useful for indexes, ensuring that information belonging to similar terms is categorized into only one category rather than spread over several. They can also be used to guide people into using the preferred term over others, for example when a variant term in an index is linked to its preferred term.
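
In code, an authority file is essentially a variant-to-preferred-term mapping, so that index entries collapse onto a single category. The entries below are illustrative:

```python
# Map each variant term to its single preferred term; anything not in
# the file passes through unchanged.
AUTHORITY = {
    "nyc": "New York City",
    "new york": "New York City",
    "big apple": "New York City",
}

def preferred(term):
    """Return the authority file's preferred term for a variant."""
    return AUTHORITY.get(term.lower(), term)

print(preferred("Big Apple"))  # New York City
print(preferred("Chicago"))    # Chicago - no entry, term is unchanged
```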

Classification schemes are hierarchical arrangements of preferred terms, also known as taxonomies. These schemes can be used on either the front end (such as the category listings on a Yahoo! or Google search results page) or the back end (such as the organizing and indexing tags used by IAs and authors). There are many schemes that can be used to classify the same information; the choice of scheme depends on its intended application.

Metadata is data about other data. It can be used with any sort of media to describe its contents and give it additional information. It is definitional data that provides information about, or documentation of, other data managed within an application, environment or system. Metadata is usually stored behind the scenes. Metadata tags are used to describe documents, pages, images, software, video and audio files, and other content objects for the purposes of improved navigation and retrieval. One example of metadata in use is within a web page's meta tags, where it can be used freely to add information describing the page's content; this data can help improve navigation and information retrieval on the page. Controlled vocabularies, as noted above, are basically a defined subset of a language, used to reduce the variability of expressions used to characterize an item; they can come in the form of an authority file or a list of equivalent terms.

Thesauri are collections of categorized concepts, denoted by words or phrases, that are related to each other through narrower-term, broader-term and related-term relations. A thesaurus is a book of synonyms, often including related and contrasting words and antonyms. Thesauri allow for synonym management by designating the preferred term among many variants, and use three kinds of semantic relationships: equivalence (like terms), hierarchical (subcategories) and associative (related terms). They come in three forms:
·        Classic – fully functional, supporting both indexing and searching
·        Indexing – supports indexes of preferred terms
·        Searching – used at the point of searching, not indexing, to manipulate the search performed. Users may be able to refine their search terms by going narrower or broader.

The IA will need to decide which of the above three forms to include in their site or intranet if they choose to use a thesaurus. This decision should be based on how you intend to use the thesaurus, and will definitely have major implications for design.

The thesaurus sets itself apart from the simpler controlled vocabularies in its rich array of semantic relationships. These relationships are of three types – Equivalence, Hierarchical and Associative. When a number of terms represent the same concept, the equivalence relationship clarifies which indexing term should be used. Hierarchical relationship indicates the superordination and subordination of each preferred term. This kind of relationship divides the information space into categories and subcategories, relating broader and narrower concepts through the familiar parent-child relationship. The associative relationship is a relationship between two concepts which do not belong to the same hierarchical structure, although they have semantic or contextual similarities. The relationship must be made explicit because it suggests to the indexer the use of other indexing terms with connected or similar meanings which could be used for indexing or searches. This relationship is often the trickiest, and by necessity is usually developed after the IA has made a good start on the other two relationship types. They are usually strongly implied semantic connections that aren’t captured within the equivalence or hierarchical relationships.
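
A single thesaurus entry with the three relationship types described above can be sketched as plain data. The vocabulary and field names here are illustrative, loosely following the conventional USE/UF, BT/NT and RT labels:

```python
# One thesaurus record: equivalence (variants that map here),
# hierarchical (broader/narrower terms) and associative (related terms).
thesaurus = {
    "automobile": {
        "use_for": ["car", "motorcar"],      # equivalence relationship
        "broader": ["vehicle"],              # hierarchical: parent term
        "narrower": ["sedan", "hatchback"],  # hierarchical: child terms
        "related": ["driving", "garage"],    # associative: see-also terms
    }
}

entry = thesaurus["automobile"]
print(entry["broader"])   # ['vehicle']
print(entry["related"])   # ['driving', 'garage']
```

A searching thesaurus would walk `broader`/`narrower` at query time to let users widen or narrow their search.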

Faceted classification is an analytic-synthetic classification scheme. It classifies objects using multiple taxonomies that express their different attributes, or facets, rather than classifying using a single taxonomy. A faceted classification system allows the assignment of an object to multiple taxonomies (sets of attributes), enabling the classification to be ordered in multiple ways rather than in a single, predetermined taxonomic order. A facet comprises "clearly defined, mutually exclusive, and collectively exhaustive aspects, properties or characteristics of a class or specific subject". For example, a collection of books might be classified using an author facet, a subject facet, a date facet, etc. Faceted classification is used in faceted search systems that enable a user to navigate information along multiple paths corresponding to different orderings of the facets. This contrasts with traditional taxonomies, in which the hierarchy of categories is fixed and unchanging. In other words, once information is categorized using multiple facets, it can also be retrieved using multiple facets. Thus, a user is not restricted to one identifying search term in order to retrieve an item; he or she can use a single term or link together multiple terms, which increases the chances of retrieving the exact information being sought. A real-life implementation can be seen at http://wine.com, in which the various wine facets are type (red – merlot, pinot noir, malbec; white – chardonnay, muscadet; sparkling, etc.), region of origin (South African, Argentinian, Californian, Spanish, French, etc.), winery/manufacturer (Clos du Bois, Blackstone, etc.), year (1968, 1996, 2002, 2014, etc.) and price ($5.99, $9.99, $39.99, $156, etc.). This type of classification provides power and flexibility. The interface can be tested and refined over time, while the faceted classification provides an enduring foundation.
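
The wine example above can be sketched as a faceted search in a few lines. Each bottle carries several independent facets, and a query can filter on any combination; the catalogue entries here are made up:

```python
# Each wine is described by several independent facets rather than a
# single fixed category.
wines = [
    {"type": "red", "region": "French", "year": 2002, "price": 39.99},
    {"type": "white", "region": "Californian", "year": 2014, "price": 9.99},
    {"type": "red", "region": "Argentinian", "year": 2014, "price": 5.99},
]

def facet_search(catalogue, **facets):
    """Keep only the items matching every requested facet value."""
    return [w for w in catalogue
            if all(w.get(name) == value for name, value in facets.items())]

print(facet_search(wines, type="red"))             # both red wines
print(facet_search(wines, type="red", year=2014))  # narrowed to one bottle
```

Because every facet is independent, the same bottle is reachable by type, by region, by year or by any combination, which is exactly what a single fixed hierarchy cannot offer.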

The guided navigation model encourages users to refine or narrow their searches based on metadata fields and values built atop faceted classifications. Guided navigation has become the de facto standard for e-commerce and product-related websites, from big box stores to product review sites. But e-commerce sites aren't the only ones joining the facets club: other content-heavy sites such as media publishers (e.g. The Financial Times), libraries (such as NCSU Libraries) and even non-profits (the Urban Land Institute) are tapping into faceted search to make their often broad range of content more findable. Essentially, guided navigation or faceted search has become so ubiquitous that users are not only getting used to it, they are coming to expect it.



Saturday, April 5, 2014

Week 10 - Usability Evaluation and Mobile Design

1.      Usability Evaluation

Usability is a quality attribute that assesses how easy interfaces are to use. It also measures ease of use during the design process. It is defined by five quality components: learnability, efficiency, memorability, errors and satisfaction. Utility (the design's functionality) is also an important counterpart to usability. Usability studies the elegance and clarity with which the interaction with a computer program or a website (web usability) is designed. Usability differs from user satisfaction and user experience because usability also considers usefulness.
Any system designed for people should be easy to use, easy to learn, easy to remember, and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow three design principles – early focus on users and tasks, empirical measurement and iterative design. Usability is so important because, on the web, it is a necessary condition for survival. Simply put, if a website is difficult to use, people leave. When users encounter any difficulty on your site, their first line of defense is to leave. If they can’t understand what your company or site is all about from the home page, or if your e-commerce website doesn’t clearly define the product you’re selling, or users can’t easily find the products they’re looking for, people simply leave the site. If employees spend time pondering where to find information on the company’s website, this is productive time lost, hence money spent paying them for doing less or no work. Current best practice is to spend 10% of a project’s design budget on usability. This will more than double a website’s desired quality metrics, and just under double an intranet’s quality metrics. Improving usability also leads to a marked reduction in training budgets.

There are a variety of usability study/evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. The most basic and useful is user testing, which has three components:
·        Get a sample of the user community (customers for an e-commerce site or company employees for an organization’s intranet)
·        Have users perform representative tasks with the design
·        Observe users’ actions, looking out for where they succeed or fail, and where they have difficulties; let them do the talking, NOT you.

The ideal process is to test users individually and let them find their own problems and try to solve them, rather than redirecting their attention to possible solutions. Five users is a good sample to test with. The best way to increase the quality of the user experience is through iterative design: the more versions and interface ideas you test with users, the better. Using focus groups is not a good way to evaluate usability design. You have to actually watch users doing things, rather than listen to what they have to say about it.
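The "five users" rule of thumb can be motivated by Nielsen and Landauer's problem-discovery model, in which each test user independently exposes a given usability problem with some probability L (roughly 0.31 in their published data). The sketch below assumes that model; it is an average across studies, not a guarantee for any particular product:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Expected share of usability problems exposed by n test users,
    per the model found(n) = 1 - (1 - L)^n."""
    return 1 - (1 - discovery_rate) ** n_users

# With L = 0.31, five users already expose roughly 84% of problems,
# which is why several small iterative tests beat one large one.
for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

The steep diminishing returns after five users are the quantitative argument for spending the budget on more iterations rather than on more participants per iteration.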

Usability is important in each stage of the design process. Formal usability studies in the form of controlled experiments aim to advance the field's understanding of how people use interfaces, to determine which design concepts work well under what circumstances, and why. They can also be used to help decide if a new feature or a change in approach improves the performance of an existing interface, or to compare competing interfaces. The main steps involved in fast and cheap formal individual studies for usability testing are:
·        Before beginning the new design, test the old design to identify what to eliminate and what to improve
·        Test your competitors’ designs, unless you’re working on an intranet
·        Conduct a field study to evaluate users in their normal habitat
·        Make paper prototypes of one or more new design ideas and test them
·        Use multiple iterations to refine design ideas
·        Use established usability guidelines to refine the design
·        Once the final design is implemented, re-test it

A higher-quality user experience can only be assured by starting user testing early in the design process and continuing to test every step of the way.

If usability testing is conducted at least once a week, it’s recommended to have a dedicated usability laboratory. Conference rooms and offices are what most companies usually use, and they work as long as distractions can be prevented; the most important factor is being able to get hold of users and sit them down while they use the design. All you need yourself is a pencil and a notepad.

Designing a new usable search interface and convincingly assessing its usability can be surprisingly difficult. Small details in the design of the interface can have a strong effect on a participant's subjective reaction to or objective success with the interface. 

Traditional information retrieval research focuses on evaluating the proportion of relevant documents retrieved in response to a query as a measure of assessing a search interface. Three main aspects of usability are usually used to evaluate search interfaces – Effectiveness (accuracy and completeness with which users achieve specified goals), Efficiency (resources expended in relation to the accuracy and completeness with which users achieve goals), and Satisfaction (freedom from discomfort and positive attitude towards the use of the product).

Evaluation of search systems is equivalent to evaluation of ranking algorithms, and this evaluation is done in an automated fashion, without involving users. The most common evaluation measures used for assessing ranking algorithms are Precision, Recall, the F-measure, and Mean Average Precision (MAP). Precision is defined as the number of relevant documents retrieved divided by the number of documents retrieved, and so is the percentage of retrieved documents that are relevant. Recall is the number of relevant documents retrieved divided by the number of documents that are known to be relevant, and so is the percentage of all relevant documents that are retrieved. 
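These measures can be stated compactly in code. The sketch below uses illustrative document IDs; `retrieved` is a ranked result list and `relevant` is the set of documents judged relevant:

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len([d for d in retrieved if d in relevant]) / len(retrieved)

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    return len([d for d in retrieved if d in relevant]) / len(relevant)

def f_measure(retrieved, relevant):
    """Harmonic mean of precision and recall."""
    p, r = precision(retrieved, relevant), recall(retrieved, relevant)
    return 2 * p * r / (p + r) if p + r else 0.0

def average_precision(retrieved, relevant):
    """Average of precision values at each rank where a relevant doc appears."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)

def mean_average_precision(runs):
    """MAP: average precision averaged over a set of queries."""
    return sum(average_precision(ret, rel) for ret, rel in runs) / len(runs)

# Illustrative example: two of four retrieved documents are relevant.
retrieved = ["d1", "d4", "d2", "d7"]
relevant = {"d1", "d2", "d3"}
```

Note the trade-off the two base measures encode: returning every document drives recall to 1 while destroying precision, which is why combined measures like F and MAP are used for comparing ranking algorithms.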

The TREC evaluation method has been enormously valuable for the comparison of competing ranking algorithms. It has, however, drawn a lot of criticism: the evaluation does not require searchers to interact with the system, create the queries, judge the results, or reformulate their queries, and the ad hoc track does not allow for any user interface whatsoever.
It can be useful to adjust the measures of precision and recall when assessing interactive systems. One such adjustment is the measure of immediate accuracy, which captures relevance according to this kind of interactive behavior: it is the proportion of queries for which the participant has found at least one relevant document by the time they have looked at k documents selected from the result set.
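Immediate accuracy can be sketched in a few lines of Python. The session data below is hypothetical: each session pairs the documents a participant examined, in order, with the set judged relevant for that query:

```python
def immediate_accuracy(sessions, k):
    """Share of queries with at least one relevant document among
    the first k documents the participant examined.
    sessions: list of (examined_docs_in_order, relevant_set) pairs."""
    hits = sum(1 for examined, relevant in sessions
               if any(d in relevant for d in examined[:k]))
    return hits / len(sessions)

# Illustrative sessions, one per query.
sessions = [
    (["d1", "d5", "d2"], {"d2"}),   # relevant doc examined third
    (["d9", "d8", "d7"], {"d4"}),   # no relevant doc examined
    (["d3", "d6"], {"d3"}),         # relevant doc examined first
]
```

Varying k shows how quickly participants succeed: here only one of three queries succeeds by the first document examined, but two of three by the third.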
Informal usability testing can also be achieved through various methods. Even though there is no exact formula for producing a good user interface, interface design indisputably requires the involvement of representative users. Before any design starts, prospective users should be interviewed or observed in field studies doing the tasks which the interface must support. This is followed by a repeated cycle of design, assessment with potential users, analysis of the results, and subsequent re-design and re-assessment. Involvement of members of the target user base is critical, and so this process is often referred to as user-centered design. Potential users who participate in the assessment of interfaces are usually referred to as participants.
Showing designs to participants and recording their responses to ferret out problems, as well as to identify positive aspects of the design, is referred to as informal usability testing. Informal usability studies are typically used to test a particular instantiation of an interface design, or to compare candidate designs, for a particular domain and context. In the first rounds of evaluation, major problems can be identified quickly, often with just a few participants. Although participants usually do not volunteer good design alternatives, they can often accurately indicate which of several design paths is best to follow. Quick informal usability tests with a small number of participants are an example of what has been dubbed discount usability testing, as opposed to full formal laboratory studies.
Ideally, usability study participants are drawn from a pool of potential users of the system under study; for instance, an interface for entering medical record information should be tested by nurse practitioners. Often the true users of such a system are too difficult to recruit for academic studies, and so surrogates are found, such as interns who are training for a given position or graduate students in a field.
For academic HCI research, participants are usually recruited via flyers on buildings, email solicitations, as an (optional) part of a course or by a professional recruiting firm. 
To obtain a more accurate understanding of the value and usage patterns of a search interface, (in order to obtain what is called ecological validity in the social sciences literature), it is important to conduct studies in which the participants use the interface in their daily environments and routines, and over a significant period of time.
In order to carry out a successful usability test with a paper prototype, the following should be taken into account:
·        Always remember to compensate your participants, so the test shouldn’t feel like a chore
·        Put the participant at ease, and give them control
·        Ask questions that qualify the participant, like their frame of reference (how often they go online, what websites they often visit, what are the triggers and conditions for their activity…)
·        Start with open questions, then dig deeper if the user is brief
·        Give users open-ended tasks instead of telling them what to do
·        Ask users what they expect will happen if they take a particular action
·        Use whatever medium is easiest to create
·        You can test complex interaction before investing in coding and design with a little creativity, since most of the time users are able to interact with paper prototypes as if it were the real thing, and can easily accommodate unforeseen actions
·        It helps to learn participants’ preferences, even if they aren’t in your target demographic.
·        End with a question asking if there’s anything else we should talk about or to help improve the current state of things. Sometimes, you get great information from this.
A longitudinal study tracks participant behavior while using a system over an extended period of time, as opposed to first-time usages which are what are typically assessed in formal and informal studies. This kind of study is especially useful for evaluating search user interfaces, since it allows the evaluator to observe how usage changes as the participant learns about the system and how usage varies over a wide range of information needs. The longer time frame also allows potential users to get a more realistic subjective assessment of how valuable they find the system to be. This can be measured by questionnaires as well as by how often the participant chooses to use the system versus alternatives.
Most Web search engines record information about searchers' queries in their server logs (also called query logs). This information includes the query itself, the date and time it was written, and the IP address that the request came from. Some systems also record which search results were clicked on for a given query. These logs, which characterize millions of users and hundreds of millions of queries, are a valuable resource for understanding the kinds of information needs that users have, for improving ranking scores, for showing search history, and for attempts to personalize information retrieval. They are also used to evaluate search interfaces and algorithms. In query log analysis, an individual person is usually associated with an IP address, although there are a number of problems with this approach: some people search using multiple different IP addresses, and the same IP address can be used by multiple searchers. Nonetheless, the IP address is a useful starting point for identifying individual searchers.
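A first pass at this kind of analysis is simply grouping log lines under their IP addresses. The sketch below assumes a hypothetical log format of timestamp, IP, then query text; real query logs vary by engine:

```python
from collections import defaultdict

# Illustrative log lines: "<timestamp> <ip> <query text>".
log_lines = [
    "2014-04-05T10:01:00 192.0.2.10 faceted classification",
    "2014-04-05T10:02:30 192.0.2.10 faceted search examples",
    "2014-04-05T10:03:10 198.51.100.7 usability testing",
]

def sessions_by_ip(lines):
    """Group (timestamp, query) pairs under each IP address,
    approximating one searcher per IP."""
    sessions = defaultdict(list)
    for line in lines:
        timestamp, ip, query = line.split(" ", 2)
        sessions[ip].append((timestamp, query))
    return sessions

by_ip = sessions_by_ip(log_lines)
```

The grouping inherits the caveats noted above: shared or rotating IP addresses mean one key per searcher is only an approximation.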
Large-scale log-based usability testing, or bucket testing, is an important form of usability testing that takes advantage of the huge numbers of visitors to some Web sites. In the days of shrink-wrapped software delivery, once an interface was coded, it was physically mailed to customers in the form of CDs or DVDs and could not be significantly changed until the next software version was released, usually multiple years later. The Web has changed this paradigm so that many companies release products in “beta,” or unfinished, status, with the tacit understanding that there may be problems with the system, and the system will change before being officially released. More recently, the assumptions have changed still further. With some Web sites, especially those related to social media, the assumption is that the system is a work-in-progress, and changes will continually be made with little advance warning. The dynamic nature of Web interfaces makes it acceptable for some organizations to experiment with showing different versions of an interface to different groups of currently active users. A major limitation of bucket testing is that the test can run effectively only over the short term because the user pool shifts over time and some users clear their cookies (which are used by the bucket tests to keep track of user IDs). Additionally, it is commonly observed that when comparing a new interface versus one that users are already familiar with, users nearly always prefer the original one at first.
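The core mechanism of bucket testing is a deterministic assignment from a stable user ID (typically stored in a cookie, as noted above) to an interface variant, so the same visitor always sees the same version. A minimal sketch, with bucket names and a 50/50 split chosen purely for illustration:

```python
import hashlib

def assign_bucket(user_id, buckets=("control", "new_design")):
    """Map a user ID to a bucket, stably across visits:
    hash the ID and take the digest modulo the number of buckets."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]
```

Because the assignment is a pure function of the ID, no server-side state is needed; but as the paragraph notes, a cleared cookie yields a fresh ID and possibly a different bucket, which is one reason such tests degrade over long periods.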
Several concerns have been raised with the evaluation of search interfaces. Evaluating information-intensive applications such as search is somewhat different from, and oftentimes more difficult than, evaluating other types of user interfaces. Some pertinent issues are discussed below, along with best practices for evaluation of search interfaces.
·        Avoid experimental bias
·        Encourage participant motivation
·        Account for participants’ individual differences
·        Account for the differences in tasks and queries
·        Control test collection characteristics
·        Account for differences in the timing response variable
·        Compare against a strong baseline

2.      Mobile Design
Designing for the Mobile Web has seen a recent explosion due to user adoption of mobile devices, and this has greatly revolutionized the World Wide Web. Though designing for the Mobile Web follows principles similar to designing websites, there are still noticeable differences – current mobile device networks don’t run at the same speed as broadband connections, and our mobile web designs are displayed in a myriad of ways, from touch screens to netbooks, which make even the smallest desktop monitors look like giants.
The way the mobile design is to be delivered is one of the early elements that needs to be considered. The ideal scenario would be for each device to simply scale and adapt your existing website, and some devices, such as the iPhone, can because of their built-in web browsers. But with so many devices out there, a cross-device mobile design is difficult to make. Designing for mobile can also be very difficult since the designer may have to deal with more than one markup language (such as WML for older devices and the mobile-friendly version of XHTML for newer ones), as well as separate native platforms for Apple’s iOS and Android devices, unlike the single language, HTML, for desktop-based web designs.
One option to pushing a site to the Mobile Web is to simply create or modify your existing code and design to work well on mobile devices, or building from scratch with mobile devices in mind.
Another method for delivering a mobile design is to build an especially optimized layout for handheld devices. You can build this yourself or use a web service such as Mobify.
Whichever route you decide to take, it’s important that:
·        Visitors know that a mobile-friendly version of your site is available
·        Visitors can have the choice between a mobile version or the normal version
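The two requirements above can be sketched as a small routing decision: detect a likely mobile browser from the User-Agent string, but let a visitor's explicit choice (for example, a "view full site" link that sets a preference) override it. The token list and parameter names are illustrative assumptions, not an exhaustive detection scheme:

```python
# Hypothetical substrings that suggest a mobile browser.
MOBILE_TOKENS = ("iPhone", "iPod", "Android", "BlackBerry", "Windows Phone")

def choose_version(user_agent, site_pref=None):
    """Return 'mobile' or 'full' for this request.
    site_pref is the visitor's saved choice, if any."""
    if site_pref in ("mobile", "full"):   # explicit visitor choice wins
        return site_pref
    if any(token in user_agent for token in MOBILE_TOKENS):
        return "mobile"
    return "full"
```

Keeping the override first is what preserves the visitor's choice between the mobile and normal versions, rather than forcing detection on them.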
The next consideration for mobile web design is the Structure and Code (markup and styles) that goes behind the scenes. The following points should be succinctly analyzed
·        Use of WML or HTML for mobile profiles?
·        Build separate apps for iPhones, Androids and Blackberries?
·        What are the effects of cost and speed with mobile devices on your design?
·        Consider modern standards like HTML5 and CSS3?
Choosing the right language for a mobile-friendly website is paramount; older devices from before the smartphone revolution only support WML, while the W3C has produced a mobile-friendly version of XHTML for newer ones.
Ultimately, whichever language you choose, the primary considerations you need to think about are speed and user cost.
Designing the layout of mobile devices can also be a pain in the neck for the following reasons:
·        Mobile devices come in all shapes and sizes
·        Mobile devices have different levels of quality and resolutions
·        Mobile devices may or may not support zooming, others scroll content
·        Scrolling in mobile devices is more difficult because of their small screen
The goal of a mobile web design’s layout is to place the least possible burden on the user’s ability to find (and quickly read) what they’re looking for.
Simplicity is one of the main concepts behind an effective mobile web layout. The more information you pile into a small space, the harder it becomes to read and the more scrolling will be required.
Even though some mobile devices like the iPhone and iPad have the ability to zoom web pages in and out to avoid scrolling, not all do. We should try to limit scrolling as much as possible during mobile web design.
The issue of navigation and clickable regions is another concern. This is predominantly a problem with touchscreen mobile devices. Ensuring that your mobile layout has large and easy-to-press links and clickable objects will be essential in streamlining the experience. Reducing the amount of clicks required to achieve an action - which is a good practice regardless of whether or not you’re designing a mobile site - is all the more important in mobile web designs.
The most costly component of a website is its content, due to the cost of browsing and caps on data allowances. Knowing how to reduce excess images, text and media can come in very handy and be cost-effective. Of all the components of a site, none plays a more vital role than the text. When working with a small screen, large CSS background images or byte-heavy infographics can be problematic. It’s inevitable on the modern web that audio and video will be needed. Even with the bandwidth issues that exist, you shouldn’t stop using these richer forms of content, as they can be great, especially on handheld mobile devices with excellent video/audio quality such as the iPhone or iPod Touch. But just like with everything else, moderation and smart usage are key.
Even though the availability of web-based services is fantastic, I do worry that the dependence on a constant and reliable (always-on) web connection is very much going to be a problem for web apps in the current state of mobile device networks. While there have been moves towards local storage mechanisms, for now, web apps that rely on persistent internet connections could hamper mobile device users due to the limitations of their networks.
With so much diversity in the mobile device landscape, you should test your designs on as many platforms as you can manage. A long list of emulators exists that will simulate particular devices for you to be able to test your work.
For now, and until mobile network infrastructure improves and connectivity is widely available - simple, small and speedy are the three main principles we should abide by.