Monday, April 14, 2014

Week 11 - Thesauri, Controlled Vocabularies and Metadata

Websites and intranets, as the names suggest, involve nests and webs of interconnected systems, data and information that interact with each other. Making sense of these systems and their information independently can be very tricky, sometimes impossible, even with the use of reductionism. Controlled vocabularies and metadata allow the IA to navigate the network of relationships between these systems. They provide a way to organize knowledge for subsequent retrieval, and are used in subject indexing schemes, subject headings, thesauri, taxonomies and other forms of knowledge organization systems.

A controlled vocabulary is any defined subset of natural language. It is a list of equivalent terms in the form of a synonym ring, or a list of preferred terms in the form of an authority file. Controlled vocabulary schemes mandate the use of predefined, authorized terms that have been preselected by the designer of the vocabulary, in contrast to natural language vocabularies, where there is no restriction on the vocabulary.

Synonym rings connect a set of words that are defined as equivalent for the purposes of retrieval. When a user enters a search term that is contained in a synonym ring, the results will include matches for all the words within the ring as well. These rings can therefore dramatically improve search results by increasing the recall of the search.
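The ring-based expansion described above can be sketched in a few lines of Python. This is a minimal illustration, not a production retrieval system; the rings and documents are invented examples.

```python
# A minimal sketch of query expansion with synonym rings.
# The rings and documents here are hypothetical examples.

SYNONYM_RINGS = [
    {"couch", "sofa", "settee"},
    {"laptop", "notebook"},
]

def expand_query(term):
    """Return the term plus every equivalent term in its ring."""
    for ring in SYNONYM_RINGS:
        if term in ring:
            return ring
    return {term}

def search(term, documents):
    """Match documents against the expanded set of equivalent terms."""
    terms = expand_query(term.lower())
    return [doc for doc in documents if terms & set(doc.lower().split())]

docs = ["Red leather sofa for sale", "Antique settee", "Gaming laptop"]
print(search("couch", docs))  # matches both the sofa and settee documents
```

A search for "couch" now recalls documents that never mention the word "couch" at all, which is exactly the recall boost the ring provides.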

Authority files are lists of preferred terms or accepted values. They help keep systems accurate and consistent by restricting the allowed terms for a given domain. An authority file can include a synonym ring with one of the words selected as the preferred term. These files are useful in indexes because they ensure that information belonging to several similar terms is categorized under only one of them, rather than spread over several. They can also be used to guide people toward the preferred term, for example when a variant term in an index is linked to its preferred term.
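An authority file is easy to model as a mapping from variants to the preferred term. The sketch below uses invented place-name variants purely for illustration.

```python
# A sketch of an authority file: variant terms map to a single
# preferred term, so an index collapses equivalents into one entry.
# The terms below are illustrative, not from any real vocabulary.

AUTHORITY_FILE = {
    "nyc": "New York City",
    "new york": "New York City",
    "big apple": "New York City",
    "la": "Los Angeles",
    "los angeles": "Los Angeles",
}

def preferred_term(term):
    """Resolve a variant to its preferred form; unknown terms pass through."""
    return AUTHORITY_FILE.get(term.lower(), term)

print(preferred_term("Big Apple"))  # -> New York City
print(preferred_term("Chicago"))   # no entry, so the term passes through
```

Indexing every document through `preferred_term` guarantees that "NYC", "New York" and "Big Apple" all land under one index entry instead of three.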

Classification schemes are hierarchical arrangements of preferred terms, also known as taxonomies. These schemes can be used on the front end (such as the category listings on a Yahoo or Google search results page) or on the back end (such as the organization and indexing tags used by IAs and authors). Many different schemes can be used to classify the same information; the choice of scheme depends on its intended application.

Metadata is data about other data. It can be attached to any sort of media to describe its contents and provide additional information; it is definitional data that documents other data managed within an application, environment or system, and it is usually stored behind the scenes. Metadata tags are used to describe documents, pages, images, software, video and audio files, and other content objects for the purposes of improved navigation and retrieval. One example of metadata in use is a web page's meta tags, which can be used freely to add information describing the page's content; this data can help improve navigation and information retrieval on the page.
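The idea of metadata-driven retrieval can be sketched as content objects carrying small records of descriptive fields. The field names and documents below are invented for illustration.

```python
# A sketch of metadata tagging: each content object carries a record of
# descriptive fields that a navigation or search system can query.
# Field names and values are illustrative assumptions.

documents = [
    {"title": "Q1 Sales Report", "author": "J. Doe",
     "format": "pdf", "topic": "sales", "created": "2014-01-15"},
    {"title": "Intranet Style Guide", "author": "Web Team",
     "format": "html", "topic": "design", "created": "2013-11-02"},
]

def find_by(field, value):
    """Retrieve content object titles by a metadata field value."""
    return [d["title"] for d in documents if d.get(field) == value]

print(find_by("topic", "design"))  # ['Intranet Style Guide']
```

Because the descriptive fields are separate from the content itself, the same collection can be navigated by topic, author, format or date without touching the documents.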

Thesauri are collections of categorized concepts, denoted by words or phrases, that are related to each other by narrower-term, broader-term and related-term relations. A thesaurus is a book of synonyms, often including related and contrasting words and antonyms. Thesauri allow for synonym management by designating the preferred term among many variants, and they use three kinds of semantic relationships: equivalence (like terms), hierarchical (subcategories), and associative (related terms). They come in three forms:
·        Classic – fully functional, supporting both indexing and searching
·        Indexing – supports indexing with preferred terms
·        Searching – used at the point of searching, not indexing, to manipulate the search performed; users may be able to refine their search terms by going narrower or broader.
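The three relationship types and the searching-thesaurus behavior can be sketched with a toy data structure. The terms and relations below are invented; the relation labels (UF, BT, NT, RT) follow common thesaurus convention.

```python
# A toy thesaurus capturing the three semantic relationship types:
# equivalence (UF, "used for"), hierarchy (BT/NT), and association (RT).
# Terms and relations are invented for illustration.

thesaurus = {
    "automobile": {
        "UF": ["car", "motorcar"],     # equivalence: variant terms
        "BT": ["vehicle"],             # broader term (parent)
        "NT": ["sedan", "hatchback"],  # narrower terms (children)
        "RT": ["driving", "highway"],  # related terms
    },
}

def broaden(term):
    """Searching thesaurus: move the query up the hierarchy."""
    return thesaurus.get(term, {}).get("BT", [])

def narrow(term):
    """Searching thesaurus: move the query down the hierarchy."""
    return thesaurus.get(term, {}).get("NT", [])

print(broaden("automobile"))  # ['vehicle']
print(narrow("automobile"))   # ['sedan', 'hatchback']
```

A searching thesaurus would call `broaden` or `narrow` at query time, letting the user widen or tighten the search without the collection having been indexed against the thesaurus at all.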

The IA will need to decide which of the above three forms to include in their site or intranet if they choose to use a thesaurus. This decision should be based on how the thesaurus is intended to be used, and will have major implications for design.

The thesaurus sets itself apart from simpler controlled vocabularies in its rich array of semantic relationships. These relationships are of three types – equivalence, hierarchical and associative. When a number of terms represent the same concept, the equivalence relationship clarifies which indexing term should be used. The hierarchical relationship indicates the superordination and subordination of each preferred term; it divides the information space into categories and subcategories, relating broader and narrower concepts through the familiar parent-child relationship. The associative relationship holds between two concepts that do not belong to the same hierarchical structure but have semantic or contextual similarities. It must be made explicit because it suggests to the indexer other indexing terms with connected or similar meanings that could be used for indexing or searching. This relationship is often the trickiest, and by necessity is usually developed after the IA has made a good start on the other two relationship types; associative relationships are usually strongly implied semantic connections that aren't captured by the equivalence or hierarchical relationships.

Faceted classification is an analytic-synthetic classification scheme. It classifies objects using multiple taxonomies that express their different attributes or facets, rather than using a single taxonomy. A faceted classification system allows the assignment of an object to multiple taxonomies (sets of attributes), enabling the classification to be ordered in multiple ways rather than in a single, predetermined, taxonomic order. A facet comprises "clearly defined, mutually exclusive, and collectively exhaustive aspects, properties or characteristics of a class or specific subject". For example, a collection of books might be classified using an author facet, a subject facet, a date facet, etc. Faceted classification is used in faceted search systems that enable a user to navigate information along multiple paths corresponding to different orderings of the facets. This contrasts with traditional taxonomies, in which the hierarchy of categories is fixed and unchanging. In other words, once information is categorized using multiple facets, it can also be retrieved using multiple facets: a user is not restricted to one identifying search term to retrieve an item, but can use a single term or link together multiple terms, increasing the chances of retrieving the exact information being sought. Another real-life implementation can be seen at http://wine.com, where the wine facets are type (red – merlot, pinot noir, malbec; white – chardonnay, muscadet; sparkling; etc.), region of origin (South African, Argentinian, Californian, Spanish, French, etc.), winery/manufacturer (Clos du Bois, Blackstone, etc.), year (1968, 1996, 2002, 2014, etc.) and price ($5.99, $9.99, $39.99, $156, etc.). This type of classification provides power and flexibility. The interface can be tested and refined over time, while the faceted classification provides an enduring foundation.
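Faceted retrieval in the spirit of the wine example can be sketched as filtering along any combination of attributes. The records and facet values below are invented for illustration, not real wine.com data.

```python
# A sketch of faceted retrieval: each record carries several facets,
# and the collection can be narrowed along any combination of them.
# The records and facet values are invented for illustration.

wines = [
    {"type": "red", "grape": "merlot", "region": "French",
     "year": 2002, "price": 9.99},
    {"type": "red", "grape": "malbec", "region": "Argentinian",
     "year": 2014, "price": 5.99},
    {"type": "white", "grape": "chardonnay", "region": "Californian",
     "year": 1996, "price": 39.99},
]

def facet_filter(items, **facets):
    """Keep only items matching every requested facet value."""
    return [item for item in items
            if all(item.get(f) == v for f, v in facets.items())]

# The same collection narrowed along different facet combinations:
print(facet_filter(wines, type="red"))
print(facet_filter(wines, type="red", region="French"))
```

Note that no facet order is privileged: filtering by region first and type second yields the same result set, which is exactly what distinguishes facets from a fixed hierarchy.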

The Guided Navigation model encourages users to refine or narrow their own searches based on metadata fields and values built atop faceted classifications. Guided navigation has become the de facto standard for e-commerce and product-related Web sites, from big box stores to product review sites. But e-commerce sites aren't the only ones joining the facets club. Other content-heavy sites such as media publishers (e.g. The Financial Times), libraries (such as NCSU Libraries) and even non-profits (the Urban Land Institute) are tapping into faceted search to make their often broad range of content more findable. Essentially, guided navigation or faceted search has become so ubiquitous that users are not only getting used to it, they are coming to expect it.



Saturday, April 5, 2014

Week 10 - Usability Evaluation and Mobile Design

1.      Usability Evaluation

Usability is a quality attribute that assesses how easy interfaces are to use. The word also refers to methods for improving ease of use during the design process. It is defined by five quality components – learnability, efficiency, memorability, errors and satisfaction. Utility (the design's functionality) is an equally important quality attribute. Usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed, and it differs from user satisfaction and user experience because it also considers usefulness.
Any system designed for people should be easy to use, easy to learn, easy to remember, and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow three design principles – early focus on users and tasks, empirical measurement, and iterative design. Usability is so important because, on the web, it is a necessary condition for survival. Simply put, if a website is difficult to use, people leave. When users encounter any difficulty on your site, their first line of defense is to leave. If they can't understand what your company or site is all about from the home page, or if your e-commerce website doesn't clearly describe the products you're selling, or users can't easily find the products they're looking for, they simply leave the site. If employees spend time pondering where to find information on the company's website, this is productive time lost, and hence money spent paying employees to do less or no work. A current best practice is to spend about 10% of a project's design budget on usability. This will typically more than double a website's desired quality metrics, and just under double an intranet's quality metrics. Improving usability also leads to a marked reduction in training budgets.

There are a variety of usability study and evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. The most basic and useful is user testing, which has three components:
·        Get a sample of representative users (customers for an e-commerce site, or company employees for an organization's intranet)
·        Have the users perform representative tasks with the design
·        Observe the users' actions, watching for where they succeed or fail and where they have difficulties; let them do the talking, NOT you.

The ideal process is to test users individually and let them find their own problems and try to solve them, rather than redirecting their attention to possible solutions. Five users is a good sample to test with. The best way to increase the quality of the user experience is through iterative design: the more versions and interface ideas you test with users, the better. Using focus groups is not a good way to evaluate usability; you have to actually watch users doing things, rather than listen to what they have to say about it.

Usability is important in each stage of the design process. Formal usability studies in the form of controlled experiments aim to advance the field's understanding of how people use interfaces, to determine which design concepts work well under what circumstances, and why. They can also be used to help decide if a new feature or a change in approach improves the performance of an existing interface, or to compare competing interfaces. The main steps involved in fast and cheap formal individual studies for usability testing are:
·        Before beginning the new design, test the old design to identify what works and what should be eliminated or improved
·        Test your competitors' designs, unless you're working on an intranet
·        Conduct a field study to observe users in their natural habitat
·        Make paper prototypes of one or more new design ideas and test them
·        Use multiple iterations to refine the design ideas
·        Use established usability guidelines to refine the design
·        Once the final design is implemented, re-test it

A high-quality user experience can only be assured by starting user testing early in the design process and continuing to test at every step of the way.

If usability testing is conducted at least once a week, a dedicated usability laboratory is recommended. Most companies simply use conference rooms or offices, which is fine as long as distractions can be prevented; the most important factor is being able to get hold of the users and sit with them while they use the design. All you need yourself is a pencil and a notepad.

Designing a new usable search interface and convincingly assessing its usability can be surprisingly difficult. Small details in the design of the interface can have a strong effect on a participant's subjective reaction to or objective success with the interface. 

Traditional information retrieval research focuses on the proportion of relevant documents retrieved in response to a query as its measure of quality. Three main aspects of usability are usually used to evaluate search interfaces – effectiveness (accuracy and completeness with which users achieve specified goals), efficiency (resources expended in relation to the accuracy and completeness with which users achieve goals), and satisfaction (freedom from discomfort and a positive attitude towards the use of the product).

Traditionally, evaluation of search systems has been treated as evaluation of ranking algorithms, done in an automated fashion without involving users. The most common evaluation measures used for assessing ranking algorithms are precision, recall, the F-measure, and mean average precision (MAP). Precision is defined as the number of relevant documents retrieved divided by the number of documents retrieved – the percentage of retrieved documents that are relevant. Recall is the number of relevant documents retrieved divided by the number of documents known to be relevant – the percentage of all relevant documents that are retrieved. The F-measure is the harmonic mean of precision and recall.
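The definitions above translate directly into code. The document sets below are a made-up example purely to show the arithmetic.

```python
# Precision, recall, and the F-measure exactly as defined above.
# The retrieved and relevant document sets are a made-up example.

def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

def f_measure(retrieved, relevant):
    """Harmonic mean of precision and recall."""
    p, r = precision(retrieved, relevant), recall(retrieved, relevant)
    return 2 * p * r / (p + r)

retrieved = {"d1", "d2", "d3", "d4"}  # documents the system returned
relevant = {"d1", "d2", "d5"}         # documents judged relevant

print(precision(retrieved, relevant))  # 2/4 = 0.5
print(recall(retrieved, relevant))     # 2/3, about 0.667
```

The two measures pull in opposite directions – retrieving everything maximizes recall but wrecks precision – which is why the F-measure combines them into a single number.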

The TREC evaluation method has been enormously valuable for comparing competing ranking algorithms, although it has attracted plenty of criticism. The evaluation does not require searchers to interact with the system, create the queries, judge the results, or reformulate their queries, and the ad hoc track does not allow for any user interface whatsoever.
It can be useful to adjust the measures of precision and recall when assessing interactive systems. One such adjustment is the measure of immediate accuracy, which captures relevance according to this kind of interactive behavior: it is the proportion of queries for which the participant has found at least one relevant document by the time they have looked at k documents selected from the result set.
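The immediate accuracy measure just described can be sketched as follows. The per-query result lists and relevance judgments are invented for illustration.

```python
# A sketch of the "immediate accuracy" measure: the proportion of
# queries for which at least one relevant document appears within the
# first k results examined. The data below are invented examples.

def immediate_accuracy(result_lists, relevant_sets, k):
    """Fraction of queries with a relevant hit in the top k results."""
    hits = sum(
        1 for results, relevant in zip(result_lists, relevant_sets)
        if any(doc in relevant for doc in results[:k])
    )
    return hits / len(result_lists)

results_per_query = [["d1", "d2", "d3"], ["d4", "d5", "d6"], ["d7", "d8", "d9"]]
relevant_per_query = [{"d2"}, {"d9"}, {"d7"}]

print(immediate_accuracy(results_per_query, relevant_per_query, k=2))  # 2/3
```

Unlike plain precision, the measure rewards getting *some* relevant answer in front of the participant quickly, which matches how people actually stop searching once a good-enough result appears.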
Informal usability testing can also be carried out through various methods. There is no exact formula for producing a good user interface, but interface design indisputably requires the involvement of representative users. Before any design starts, prospective users should be interviewed, or observed in field studies doing the tasks the interface must support. This is followed by a repeated cycle of design, assessment with potential users, analysis of the results, and subsequent re-design and re-assessment. Because involvement of members of the target user base is critical, this process is often referred to as user-centered design, and the potential users who take part in assessing interfaces are usually referred to as participants.

Showing designs to participants and recording their responses, to ferret out problems as well as identify positive aspects of the design, is referred to as informal usability testing. Informal usability studies are typically used to test a particular instantiation of an interface design, or to compare candidate designs, for a particular domain and context. In the first rounds of evaluation, major problems can be identified quickly, often with just a few participants. Although participants usually do not volunteer good design alternatives, they can often accurately indicate which of several design paths is best to follow. Quick informal usability tests with a small number of participants are an example of what has been dubbed discount usability testing, as opposed to full formal laboratory studies.

Ideally, usability study participants are drawn from a pool of potential users of the system under study; for instance, an interface for entering medical record information should be tested by nurse practitioners. Often the true users of such a system are too difficult to recruit for academic studies, so surrogates are found, such as interns training for a given position or graduate students in a field.
For academic HCI research, participants are usually recruited via flyers on buildings, email solicitations, as an (optional) part of a course or by a professional recruiting firm. 
To obtain a more accurate understanding of the value and usage patterns of a search interface, (in order to obtain what is called ecological validity in the social sciences literature), it is important to conduct studies in which the participants use the interface in their daily environments and routines, and over a significant period of time.
In order to carry out a successful usability test with a paper prototype, the following should be taken into account:
·        Always compensate your participants, so that the test doesn't feel like a chore
·        Put the participant at ease, and give them control
·        Ask questions that qualify the participant, like their frame of reference (how often they go online, what websites they often visit, what are the triggers and conditions for their activity…)
·        Start with open questions, then dig deeper if the user is brief
·        Give users open-ended tasks instead of telling them what to do
·        Ask users what they expect will happen if they take a particular action
·        Use whatever medium is easiest to create
·        With a little creativity, you can test complex interactions before investing in coding and design, since most of the time users are able to interact with a paper prototype as if it were the real thing, and a paper prototype can easily accommodate unforeseen actions
·        It helps to learn participants' preferences, even if they aren't in your target demographic
·        End by asking whether there's anything else you should talk about, or anything that would improve the current state of things; sometimes you get great information from this.
A longitudinal study tracks participant behavior while using a system over an extended period of time, as opposed to first-time usages which are what are typically assessed in formal and informal studies. This kind of study is especially useful for evaluating search user interfaces, since it allows the evaluator to observe how usage changes as the participant learns about the system and how usage varies over a wide range of information needs. The longer time frame also allows potential users to get a more realistic subjective assessment of how valuable they find the system to be. This can be measured by questionnaires as well as by how often the participant chooses to use the system versus alternatives.
Most Web search engines record information about searchers' queries in their server logs (also called query logs). This information includes the query itself, the date and time it was submitted, and the IP address the request came from. Some systems also record which search results were clicked on for a given query. These logs, which characterize millions of users and hundreds of millions of queries, are a valuable resource for understanding the kinds of information needs users have, improving ranking scores, showing search history, and attempting to personalize information retrieval; they are also used to evaluate search interfaces and algorithms. In query log analysis, an individual person is usually associated with an IP address, although there are problems with this approach: some people search from multiple IP addresses, and the same IP address can be used by multiple searchers. Nonetheless, the IP address is a useful starting point for identifying individual searchers.
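The IP-as-searcher heuristic described above can be sketched with a few lines of log analysis. The log format and entries below are invented; real query logs vary by engine.

```python
# A sketch of basic query log analysis: grouping queries by IP address
# as a rough proxy for individual searchers. The log lines and their
# format are invented for illustration.

from collections import defaultdict

log_lines = [
    "2014-04-14T09:12:03 203.0.113.7 usability testing",
    "2014-04-14T09:13:41 203.0.113.7 paper prototype",
    "2014-04-14T09:15:22 198.51.100.2 faceted search",
]

queries_by_ip = defaultdict(list)
for line in log_lines:
    # Split into timestamp, IP, and the query (which may contain spaces).
    timestamp, ip, query = line.split(" ", 2)
    queries_by_ip[ip].append(query)

for ip, queries in queries_by_ip.items():
    print(ip, queries)
```

Grouping by IP immediately exposes query reformulation sequences within one "searcher" – with the caveats noted above about shared and shifting IP addresses.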
An important form of usability testing that takes advantage of the huge numbers of visitors to some Web sites is large-scale log-based usability testing, or bucket testing. In the days of shrink-wrapped software delivery, once an interface was coded it was physically mailed to customers on CDs or DVDs and could not be significantly changed until the next software version was released, usually multiple years later. The Web changed this paradigm: many companies release products in "beta," or unfinished, status, with the tacit understanding that there may be problems with the system and that it will change before being officially released. More recently, the assumptions have shifted still further. On some Web sites, especially those related to social media, the assumption is that the system is a work in progress and changes will continually be made with little advance warning. The dynamic nature of Web interfaces makes it acceptable for some organizations to experiment by showing different versions of an interface to different groups of currently active users. A major limitation of bucket testing is that a test can run effectively only over the short term, because the user pool shifts over time and some users clear their cookies (which the bucket tests use to keep track of user IDs). Additionally, it is commonly observed that when a new interface is compared against one users are already familiar with, users nearly always prefer the original one at first.
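The core mechanic of bucket testing – splitting active users into stable groups – can be sketched as deterministic hashing of a user ID. The variant names and 50/50 split are assumptions for illustration.

```python
# A sketch of bucket-test assignment: each user ID hashes to a bucket
# deterministically, so the same user always sees the same interface
# variant. The variant names and even split are assumptions.

import hashlib

VARIANTS = ["control", "new_design"]

def assign_bucket(user_id):
    """Deterministically map a user ID to an interface variant."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same ID always lands in the same bucket:
print(assign_bucket("user-42") == assign_bucket("user-42"))  # True
```

In practice the user ID lives in a cookie, which is why cookie-clearing undermines long-running bucket tests, as noted above.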
Several concerns have been raised about the evaluation of search interfaces. Evaluating information-intensive applications such as search is somewhat different from, and often more difficult than, evaluating other types of user interfaces. Some pertinent issues are listed below, along with best practices for the evaluation of search interfaces.
·        Avoid experimental bias
·        Encourage participant motivation
·        Account for participants’ individual differences
·        Account for the differences in tasks and queries
·        Control test collection characteristics
·        Account for differences in the timing response variable
·        Compare against a strong baseline

2.      Mobile Design
Designing for the Mobile Web has exploded recently thanks to user adoption of mobile devices, and this has greatly changed the World Wide Web. Though designing for the Mobile Web follows similar principles to designing websites, there are still noticeable differences: current mobile networks don't run at the same speed as broadband connections, and there is a myriad of devices our mobile web designs are displayed on, from touch-screen phones to netbooks, which make even the smallest desktop monitors look like giants.
How the mobile design is to be delivered is one of the first elements to consider. The ideal scenario would be for each device to simply scale and adapt your existing website, and some devices, such as the iPhone, can do so thanks to their built-in web browsers. But with so many devices out there, a cross-device mobile design is difficult to make. Designing for mobile can also be difficult because the designer may have to deal with more than one markup language and platform (WML for older devices, plus native environments such as iOS for Apple devices and Android), unlike desktop-based web designs, which rely on a single language, HTML.
One option for bringing a site to the Mobile Web is to create or modify your existing code and design to work well on mobile devices, or to build from scratch with mobile devices in mind.
Another method for delivering a mobile design is to build an especially optimized layout for handheld devices. You can build this yourself or use a web service such as Mobify.
Whichever route you decide to take, it’s important that:
·        Visitors know that a mobile-friendly version of your site is available
·        Visitors can choose between the mobile version and the normal version
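The two points above can be sketched as a simple version chooser. The user-agent keywords are simplistic assumptions; real detection needs a maintained device database, and the visitor's explicit choice always wins.

```python
# A rough sketch of offering visitors the mobile version while keeping
# the full site reachable. The keyword list is a simplistic assumption;
# production systems use maintained device-detection databases.

MOBILE_KEYWORDS = ("iphone", "android", "blackberry", "windows phone")

def choose_version(user_agent, prefers_full_site=False):
    """Suggest the mobile layout for mobile agents, unless the
    visitor has explicitly chosen the normal version."""
    if prefers_full_site:
        return "desktop"
    ua = user_agent.lower()
    if any(keyword in ua for keyword in MOBILE_KEYWORDS):
        return "mobile"
    return "desktop"

print(choose_version("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1)"))       # mobile
print(choose_version("Mozilla/5.0 (iPhone)", prefers_full_site=True))  # desktop
```

Honoring `prefers_full_site` is what keeps the choice with the visitor rather than the detector, which is the second requirement above.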
The next consideration for mobile web design is the structure and code (markup and styles) that go on behind the scenes. The following points should be carefully analyzed:
·        Use of WML or HTML for mobile profiles?
·        Build separate apps for iPhones, Androids and Blackberries?
·        What are the effects of cost and speed with mobile devices on your design?
·        Should you consider modern standards like HTML5 and CSS3?
Choosing the right language for a mobile-friendly website is paramount; while older devices from before the smartphone revolution support only WML, the W3C produced a mobile-friendly version of XHTML.
Ultimately, whichever language you choose, the primary considerations you need to think about are speed and user cost.
Designing layouts for mobile devices can also be a pain in the neck, for the following reasons:
·        Mobile devices come in all shapes and sizes
·        Mobile devices have different levels of quality and resolutions
·        Some mobile devices support zooming, while others scroll content
·        Scrolling in mobile devices is more difficult because of their small screen
The goal of a mobile web design's layout is to place the least possible burden on the user's ability to find (and quickly read) what they're looking for.
Simplicity is one of the main concepts of an effective mobile web layout. The more information you pile into a small space, the harder it becomes to read and the more scrolling is required.
Even though some mobile devices, like the iPhone and iPad, can zoom web pages in and out to avoid scrolling, not all can, so we should try to limit scrolling as much as possible in mobile web design.
The issue of navigation and clickable regions is another concern. This is predominantly a problem with touchscreen mobile devices. Ensuring that your mobile layout has large and easy-to-press links and clickable objects will be essential in streamlining the experience. Reducing the amount of clicks required to achieve an action - which is a good practice regardless of whether or not you’re designing a mobile site - is all the more important in mobile web designs.
The most costly component of a website is its content, due to the cost of browsing and caps on data allowances. Knowing how to cut excess images, text and media can be very handy and cost effective. Of all the components of a site, none plays a more vital role than the text. When working with a small screen, large CSS background images or byte-heavy infographics can be problematic. In the modern web, using audio and video is inevitable; even with the bandwidth issues that exist, you shouldn't stop using these richer forms of content, as they can be great, especially on handheld devices with excellent video/audio quality such as the iPhone or iPod Touch. But as with everything else, moderation and smart usage are key.
Even though the availability of web-based services is fantastic, I do worry that the dependence on a constant and reliable (always-on) web connection is very much going to be a problem for web apps in the current state of mobile networks. While there have been moves towards local storage mechanisms, for now web apps that rely on persistent internet connections may frustrate mobile users because of the limits of their networks.
With so much diversity in the mobile device landscape, you should test your designs on as many platforms as you can manage. A long list of emulators exists that will simulate particular devices so you can test your work.
For now, and until mobile network infrastructure improves and connectivity is widely available - simple, small and speedy are the three main principles we should abide by.


Sunday, March 30, 2014

Week 9 - Information Architecture Strategy and Design

Information architects do not typically have the luxury of one or more years to complete their projects; it's usually a matter of weeks or months. Moving from research to strategy and then design follows tight schedules, with specific deliverables required at specific deadlines. The line between research and strategy is usually very blurry, and even though the process of moving from research to administration in the IA life cycle might look linear at a high level, it is highly iterative and interactive, with the IA switching back and forth between research and design while maintaining tight budget and schedule constraints.

Putting an IA strategy in place involves defining and realizing a high-level conceptual framework for structuring and organizing a web site or intranet. This gives a firm or organization the sense of direction and scope necessary to proceed into the design and implementation of the various phases of the IA life cycle. The IA strategy is typically detailed in an IA strategy report, communicated in a high-level strategy presentation, and made actionable through a project plan for information architecture design. It provides high-level recommendations regarding:
·         Information architecture administration
·         Technology integration
·         Top-down or bottom-up emphasis
·         Organization and labeling systems (top-down)
·         Document type identification (bottom-up)
·         Metadata field definition
·         Navigation system design

Putting an IA strategy in place can meet major setbacks within the firm. The absence of a defined business strategy or content can lead to conflicting discussions and questions from stakeholders that can easily derail the IA, such as why an IA strategy is needed when there isn't any business strategy or content in place. These questions shouldn't derail the IA, because business strategies, content collections and information architectures co-evolve in a highly interactive manner. In fact, developing an IA strategy usually exposes gaps in business strategies and content collections, which can lead to major changes in the organization's business strategy and content policy. In an ideal situation, the IA works directly with the business strategy and content policy teams, exploring and defining the relationships between these three critical areas.

Moving from research to strategy shouldn't be a clear-cut, formal or isolated step. Strategies for structuring and organizing the site should be considered by the IA before the research even begins. In fact, during the research phase – throughout the user interviews, content analysis and benchmarking studies – the IA should constantly test and refine hypotheses mentally against the steady stream of data being compiled. The point at which the IA realizes they're no longer learning anything new by asking the same questions in interviews, and are anxious to start fleshing out a couple of hierarchies and introducing their structures and labels to users, clients and colleagues, is the point at which they should move from research to strategy.

Developing the IA strategy marks the transition from process to product: work products and deliverables are created by applying methodology. Four steps, summarized by the acronym TACT, usually define the IA strategy development process:
·         Think – convert research data to creative ideas
·         Articulate – diagrams, metaphors, stories, scenarios, blueprints, wireframes
·         Communicate – present, react, brainstorm
·         Test – closed card sorts, prototypes. The test results might lead to a new thinking process, hence re-initiating the cycle.
The results of the above process are strategy phase deliverables:
·         The IA strategy report – detailed strategy, direction, scope
·         The IA strategy presentation – high-level strategy, direction, scope
·         The project plan for design – teams, deliverables, schedule, budget.

The IA strategy is usually brought to life through metaphors, scenarios, and conceptual designs. A metaphor can be a very powerful tool for communicating complex ideas and generating enthusiasm, and exploring metaphors can be a real stimulant when working with clients and colleagues. The three types of metaphor most often applied in the design of websites are:
·         Organizational metaphors – leverage familiarity with one system’s organization to convey quick understanding of a new system’s organization
·         Functional metaphors – make a connection between the tasks you can perform in a traditional environment and those you can perform in a new environment
·         Visual metaphors – leverage familiar graphic elements such as images, icons and colors to create a connection to the new elements.

Scenarios are great tools for helping people understand how the user will navigate and experience the site being designed. They also help the IA generate new ideas for the architecture and navigation system. Writing a few scenarios that depict how certain groups of people with specific needs would use the site can provide a multi-dimensional view that shows the true potential of the site.
Case studies and stories are another great tool for bringing concepts of information architecture to life. They usually help a diverse, non-technical audience of clients and colleagues get a clearer picture of the IA strategy by comparing and contrasting with real-life and past experiences.

Conceptual diagrams are usually pictorial representations of ideas and concepts.  IAs usually have to explain high-level concepts and systems, and conceptual diagrams come in very handy here. Their various concepts and ideas are put in the form of diagrams which can easily be visualized and understood by various audiences, including stakeholders.

The strategy report presents the most detailed, comprehensive articulation of the IA strategy, bringing the results, analyses, and ideas of the preceding phases together in a single document. This report is usually the largest, hardest, and most important deliverable for the IA team. It forces the team members to come together around a unified vision for the information architecture, and requires them to find ways to explain or illustrate that vision so that clients and non-IA colleagues can understand it without wading through jargon. Organizing the report is one of the hardest tasks to accomplish, since the IA strategy isn’t linear but a report forces a linear presentation. A typical IA strategy report contains the following major sections:
·         Executive Summary – a high-level outline of the goals and methodology, major problems and major recommendations
·         Audience & Mission/Vision for the site – restate the mission statement of the web site
·         Research – Includes lessons learned from Benchmarking, User interviews and Content Analysis
·         Architectural Strategies and Approach – defines the main focus of the strategy and how it’s going to work by outlining the various strategies put in place.
·         Content Management – provides a reality check by discussing how the IA recommendations will impact the content management infrastructure.
·         Recommendations – list of recommendations to be applied to the entire site.

The Project Plan for IA design should be created as a part of the strategy phase deliverables. This plan should address the following:
·         How to accomplish the various tasks
·         Time it’ll take to accomplish specific tasks
·         Responsible party/parties for each task
·         Task deliverables
·         Task dependencies
This plan forms the bridge between strategy and design and can be integrated with plans from other teams (interaction design, content authoring, or application development) toward a structured schedule for overall site design. Short-term plans usually define design changes that can and should be made immediately to improve the IA. The long-term plan presents a methodology for fleshing out the IA, noting interdependencies with other teams where appropriate.
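The task dependencies such a plan records can be checked and ordered programmatically. A minimal sketch in Python (the task names are hypothetical, and the dependency structure is only illustrative) using the standard library’s topological sorter:

```python
from graphlib import TopologicalSorter

# Hypothetical design-phase tasks mapped to the tasks they depend on.
tasks = {
    "content inventory": set(),
    "metadata schema": {"content inventory"},
    "blueprints": {"metadata schema"},
    "wireframes": {"blueprints"},
    "prototype": {"wireframes", "metadata schema"},
}

# static_order() yields a schedule in which every task
# appears after all of its dependencies.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

Running the sorter also surfaces circular dependencies (it raises `CycleError`), which is a cheap sanity check before the plan is merged with other teams’ schedules.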

Without any form of presentation and discussion, even the best recommendations may never “go live.” It’s often best practice to make one or more presentations to help stakeholders understand your recommendations. This might be a single presentation to the web site or intranet strategy team, or dozens of presentations to various departments to achieve organization-wide understanding and buy-in. The IA needs to approach these presentations from a sales perspective, since success is usually defined by the extent to which you can communicate and sell your ideas clearly and compellingly.

The landscape shifts dramatically when we cross the bridge from research and strategy into design. The emphasis moves from process to deliverables, since the IA is expected to move from thinking and talking to actually producing a clear, well-defined information architecture. Ideas must be committed to paper to shape the user experience. The work in this phase is strongly defined by context and influenced by tacit knowledge: the design decisions made and the deliverables produced are informed by the sum total of the IA’s experience. The IA paints on a vast, complex, ever-changing canvas. Although design focuses on deliverables, process is as important here as it is during research and strategy.

IAs should follow a set of guidelines for diagramming an information architecture. They rely on visual representations to communicate their work, whether to help sell the value of IA to a potential client or to explain a design to a colleague. Even though there’s limited guidance on how best to visually represent information architectures, there are a few good guidelines to follow as IAs document their architecture:
·         Provide multiple views of the Information Architecture
·         Develop those views for specific audiences and needs
·         Whenever possible, present IA diagrams in person, especially when the audience is unfamiliar with them.
·         Work with whomever you’re presenting your diagrams to – clients, managers, designers, programmers – to understand in advance what they will need from them.

Communicating visually is a very important component of an IA’s design job. The most frequently used diagrams are blueprints and wireframes, which focus more on the structure of a site’s content than on its semantic content. Diagrams communicate the two basic structural elements of an information system – content components and the connections between them. A variety of visual vocabularies, providing a clear set of terms and syntax for visually communicating components and their links, is now available to help IAs and other designers create diagrams; a good example is Jesse James Garrett’s. Visual vocabularies are at the heart of many templates used to develop blueprints and wireframes.

Blueprints, or site maps, show the relationships between pages and other content components, and can be used to portray organization, navigation, and labeling systems. High-level blueprints are often created by IAs as part of a top-down information architecture process. During the design phase, high-level blueprints are most useful for exploring primary organizational schemes and approaches. They map out the organization and labeling of major areas, usually beginning with a bird’s-eye view from the main page of the web site, and are great for stimulating discussions focused on the organization and management of content as well as on the desired access pathways for users. Detailed architecture blueprints communicate detailed organization, labeling, and navigation decisions to colleagues on the site development team. They map out the entire site so that the production team can implement the plans to the letter without requiring IA involvement during production. They must present the complete information hierarchy from the main page to the destination pages, and must detail the labeling and navigation systems to be implemented in each area of the site. Of course, they’ll vary from project to project, depending upon the scope.
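The hierarchy a high-level blueprint captures is, structurally, just a tree. As a rough sketch (the page labels here are hypothetical, not from any real site), the bird’s-eye view can be modeled as nested dictionaries and flattened into an indented outline:

```python
# A high-level blueprint as a nested dict: each key is a page label,
# each value maps child page labels to their own subtrees.
site_map = {
    "Home": {
        "Products": {"Product A": {}, "Product B": {}},
        "Support": {"FAQ": {}, "Contact": {}},
        "About": {},
    }
}

def outline(tree, depth=0, lines=None):
    """Flatten the hierarchy into an indented, top-down outline."""
    if lines is None:
        lines = []
    for label, children in tree.items():
        lines.append("  " * depth + label)
        outline(children, depth + 1, lines)
    return lines

print("\n".join(outline(site_map)))
```

A structure like this is obviously no substitute for a drawn blueprint, but it makes the hierarchy easy to validate, count, or export for the production team.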

Wireframes depict how an individual page or template should look from an architectural perspective. They stand at the intersection of the site’s information architecture and its visual and information design. They describe the content and information architecture to be included in the relatively confined two-dimensional space of a page, hence they themselves must be constrained in size. Developing wireframes also helps the IA decide how to group content components, how to order them, and which groups of components have priority. Wireframes are usually created for the site’s most important pages – main or home pages, major category pages, and the interfaces to search and other important applications. Because they present a degree of look and feel, they straddle the realms of visual design and interaction design. Several best practices are available for creating wireframes.

Content mapping and inventory bring another dish to the plate during design and production. Here, the IA completes the bottom-up process of collecting and analyzing content; content mapping is where top-down IA meets bottom-up. The process of detailed content mapping involves breaking down or combining existing content into content chunks that are useful for inclusion in your site. A content chunk is the most finely grained portion of content that merits or requires individual treatment. Since content is often drawn from multiple sources and in various formats, it must be mapped into the IA so that it will be clear what goes where during the production process. A byproduct of content mapping is a content inventory describing the available content and where it can be found. Depending upon the size and complexity of the web site and the process and technology in place for production, there are many ways to present this inventory.
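One simple way to present such an inventory is a spreadsheet recording each chunk’s source and destination. A minimal sketch (the IDs, titles, filenames, and URLs are all hypothetical) writing the mapping out as CSV:

```python
import csv
import io

# Hypothetical content chunks mapped from their source documents
# to their destination pages in the new architecture.
chunks = [
    {"id": "C001", "title": "Return policy",   "source": "policies.doc", "destination": "/support/returns"},
    {"id": "C002", "title": "Shipping rates",  "source": "rates.xls",    "destination": "/support/shipping"},
    {"id": "C003", "title": "Company history", "source": "about.html",   "destination": "/about"},
]

# Write the inventory as CSV so production staff can see what goes where.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "title", "source", "destination"])
writer.writeheader()
writer.writerows(chunks)
print(buf.getvalue())
```

Giving each chunk a stable ID is what lets the inventory survive production: blueprints and wireframes can reference `C001` regardless of where the source document lives.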

Content models are micro information architectures made up of small chunks of interconnected content. They support the missing piece in so many sites: contextual navigation that works deep within the site. They rely on consistent sets of objects and logical connections between them to work, and are as much an exercise as a deliverable. While the primary output is a useful IA deliverable that informs the design of contextual navigation deep within a site, the process also generates two invaluable secondary benefits: first, content modeling forces us to determine which content is most important to model; second, it forces us to choose which of the many metadata attributes will make the content model operational.
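The logical connections a content model relies on are typically driven by shared metadata. As an illustrative sketch (the object types and subject terms below are hypothetical, loosely in the spirit of a music site), chunks that share a subject attribute can power “related content” links deep within the site:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A content chunk with the metadata attributes that drive linking."""
    title: str
    content_type: str              # e.g. "album", "artist"
    subjects: set = field(default_factory=set)

chunks = [
    Chunk("Kind of Blue", "album", {"jazz", "miles-davis"}),
    Chunk("Miles Davis", "artist", {"jazz", "miles-davis"}),
    Chunk("A Love Supreme", "album", {"jazz", "coltrane"}),
]

def related(chunk, pool):
    """Contextual navigation: other chunks sharing at least one subject term."""
    return [c.title for c in pool if c is not chunk and chunk.subjects & c.subjects]

print(related(chunks[0], chunks))
```

This is the sense in which choosing the operational metadata attributes matters: the `subjects` field alone determines which links exist, so a poorly chosen attribute yields poor contextual navigation.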

The development of controlled vocabularies is associated with two primary types of work products – metadata matrices that facilitate discussion about the prioritization of vocabularies, and an application that enables the IA to manage the vocabulary terms and relationships. The IA’s job is to help define which vocabularies should be developed, considering priorities and time and budget constraints. A metadata matrix can help the IA walk clients and colleagues through the difficult decision-making process, weighing the value of each vocabulary to the user experience against the costs of development and administration.
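The term-management side of this work can be sketched quite directly: synonym rings of equivalent terms, with one preferred term per ring, support both query expansion (for recall) and consistent indexing. A minimal illustration in Python (the rings and terms are hypothetical):

```python
# Hypothetical synonym rings: each ring lists equivalent terms,
# with the first term treated as the preferred (authority) term.
rings = [
    ["notebook", "laptop", "portable computer"],
    ["cell phone", "mobile phone", "smartphone"],
]

# Look up any variant term to find its full ring.
variant_to_ring = {term: ring for ring in rings for term in ring}

def expand(query):
    """Expand a query term to all equivalent terms, increasing recall."""
    return variant_to_ring.get(query, [query])

def preferred(term):
    """Map a variant to its preferred term for consistent indexing."""
    return variant_to_ring.get(term, [term])[0]

print(expand("laptop"))
print(preferred("smartphone"))
```

A real vocabulary-management application would add hierarchical (broader/narrower) and associative relationships on top of this equivalence layer, but the lookup table above is the core of how a synonym ring improves search recall.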

Design collaboration brings together all parties involved in developing the site – IAs, visual designers, developers, content authors, and managers. Design sketches and web prototypes are two of the many tools used for merging different ideas.




Saturday, March 15, 2014

Week 7 – Research in Web Information Architecture

It might not seem obvious to some, but research plays an integral part in optimal web IA design. The design of complex websites requires an interdisciplinary team involving graphic designers, software developers, content managers, usability and database engineers, and many other experts. In such cases, integrating IA into the web development process is simply the norm. There needs to be effective collaboration between all parties involved, which in turn requires agreement on a structured development process. This process involves Research, Strategy, and Design at the earlier stages, followed by Implementation and Administration.

The research phase usually begins with kick-off meetings with the strategy team and a review of existing background materials, in a bid to garner a high-level understanding of the goals, business context, the existing IA, the content, and the intended audiences. Research then continues with a series of studies, employing a variety of frameworks and methods to explore the information ecology. When done, the research provides the contextual understanding that becomes the basis for the development of an IA strategy. Design is where you shape the high-level strategy into an information architecture, creating the detailed blueprints, wireframes, and metadata schemas that will be used by graphic designers, programmers, content authors, and the production team; this is where IAs are most involved. Implementation is when your designs are put to the test as the site is built, tested, and launched, involving tagging documents, testing, and troubleshooting. At the end of the project comes Administration, which involves the continuous evaluation and improvement of the site’s IA. It includes daily tasks like tagging new documents and weeding out old ones, monitoring site usage, and identifying opportunities to improve the site through major or minor redesigns.

Research in IA involves due diligence – seeking as much information as possible in the areas of Context, Content, and Users. A conceptual framework of the broader environment involving these three key entities is usually necessary to realize this phase of the IA web development process.

The Context of your IA research involves a thorough investigation of the business goals, funding and its various sources, organizational politics and culture, the existing technologies within the environment, and the various human resources that will be engaged in the effort. The IA researches the context first by getting buy-in from the stakeholders, then through background investigation, meetings (strategy, content management, and IT meetings), presentations, interviews with stakeholders, and an assessment of the technologies in place.

Content is what end-users see on and get from your web site: data, documents, applications, e-services, images, audio and video files, personal web pages, etc. Users need to be able to find content before they can use it – findability precedes usability. Researching the IA content involves finding out what types of content (listed above) will be included, and from what sources. This requires judicious content analysis (gathering and analyzing metadata and content), content mapping (what data, document, or image goes where), and benchmarking (both competitive and before-and-after). Heuristic evaluations are very effective for testing a website against a formal or informal set of guidelines, and come in very useful in content analysis. A heuristic evaluation will analyze the visibility of system status; the match between the system and the real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; how the system helps users recognize, diagnose, and recover from errors; as well as help and documentation.
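The results of such an evaluation are often recorded as severity ratings per heuristic so problems can be triaged. A small sketch (the 0–4 severity scale and the ratings themselves are hypothetical):

```python
# The ten heuristics scored on a hypothetical severity scale:
# 0 = no problem, 4 = usability catastrophe.
severity = {
    "Visibility of system status": 1,
    "Match between system and real world": 0,
    "User control and freedom": 3,
    "Consistency and standards": 2,
    "Error prevention": 4,
    "Recognition rather than recall": 1,
    "Flexibility and efficiency of use": 2,
    "Aesthetic and minimalist design": 0,
    "Help users recognize, diagnose, and recover from errors": 3,
    "Help and documentation": 1,
}

# Surface the heuristics whose ratings demand attention first.
urgent = sorted((score, name) for name, score in severity.items() if score >= 3)
for score, heuristic in reversed(urgent):
    print(score, heuristic)
```

Even this crude tally makes the evaluation actionable: the highest-severity findings become the first recommendations in the strategy report.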


Users are visitors to your sites – respondents, actors, employers, customers, etc. They are why you’re employed and why you’re building a web site in the first place, hence they’re the ultimate designers. That’s just how important they are to your web IA project. Carrying out end-user research involves finding out the very audiences we’ll be serving, the tasks they’ll be performing on the site, their needs and information-seeking behavior, their various experiences, and the vocabulary they use. This research can be done by analyzing usage statistics, search logs, and clickstreams (i.e., accessing and analyzing data from the web server logs), use cases and personas, and contextual inquiry, as well as through surveys, focus group meetings, face-to-face interviews, card sorting, questionnaires, and user testing. Google Analytics is a great tool for gathering web usage statistics. When gathering and analyzing usage statistics, the IA should focus on the characteristics of the visits (most popular pages visited, length of visits) and on who the users are (country or region of origin, platforms and operating systems used, browser choices, and screen resolution). Most, if not all, of this information is present in the web server logs and can be analyzed using Google Analytics.
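For a sense of what "analyzing the web server logs" looks like at its simplest, here is a sketch that counts the most popular pages from a few lines in Apache Common Log Format (the IP addresses, timestamps, and paths are hypothetical sample data):

```python
import re
from collections import Counter

# A few hypothetical lines in Apache Common Log Format.
log_lines = [
    '10.0.0.1 - - [15/Mar/2014:10:00:01 -0500] "GET /index.html HTTP/1.1" 200 5120',
    '10.0.0.2 - - [15/Mar/2014:10:00:05 -0500] "GET /products.html HTTP/1.1" 200 2048',
    '10.0.0.1 - - [15/Mar/2014:10:01:12 -0500] "GET /index.html HTTP/1.1" 200 5120',
]

# Pull the request path out of the quoted request field.
request = re.compile(r'"[A-Z]+ (\S+) HTTP/')

hits = Counter()
for line in log_lines:
    match = request.search(line)
    if match:
        hits[match.group(1)] += 1

# Most popular pages first.
print(hits.most_common())
```

Tools like Google Analytics do this (and much more, including session length and visitor geography) automatically, but a raw-log pass like this is a useful fallback when analytics tracking wasn’t in place.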