By Anna Mieczakowski and Christopher Wilkinson.

With the rapid development of the mobile and wearable technology industry over the past 5-10 years has come a massive increase in User Experience (UX) practice in the business context. UX practice, which concerns a “person’s perception and responses that result from the use and/or anticipated use of a product, system or service”, is a “multidimensional concept” that generally requires a certain time commitment and the use of a variety of methods to probe user behaviour and generate insights. UX practices are needed because user interaction with products and services is inherently dynamic, given the “ever-changing internal and emotional state of a person and differences in the circumstances during and after an interaction with a product” (International Standards Organisation 9241-210, 2010).

UX Challenges
Despite the availability of a plethora of free-to-use UX support tools with reliable estimates of people’s motor, cognitive and sensory capabilities, an abundance of software packages for supporting UX design sprints, and a large number of UX jobs, UX practice across the board continues to suffer from four challenges.

  1. Reliable and freely available sources of information about users’ motor, cognitive, sensory and affective capabilities are still largely underutilised. Part of the problem here is that there are many methods to choose from and that they are often difficult and/or time-consuming to use for accurately eliciting user behaviour.
  2. The common perception that generalisable user experience studies require a large number of participants and consequently carry a significant financial and time commitment, which is often unfeasible for smaller and mid-sized projects.
  3. The significant time investment required for a rigorous capture and analysis of findings from user studies. To save time, UX work typically focuses on eliciting high-level findings or personal experience, often bypassing the intricacies of human behaviour. This, in turn, can lead to outputs that are sometimes more useful for further investigation than for production.
  4. Difficulty with aligning the design process with development cycles, as the two have different objectives, timelines and deliverables. The goal is for these two processes to work in sync as part of an Agile process, in as efficient, effective and enjoyable a way as reasonably possible.

The sections below describe these four UX challenges in more detail.

1. Usage of existing information sources on users’ motor, cognitive, sensory and affective capabilities
The ability to make informed decisions in product design and development requires adequate user information. Such information is available, but testing it in use requires a time commitment that is not always feasible in a fast-paced industry context, which can rarely accommodate untested methods. Walters & Evans (2011) believe that: “Any research that relies on professional observation in the field for an extended period is likely to be costly, and therein lays the barrier to many firms’ engagement with user research” (p. 126). Over the years, hundreds or even thousands of UX methods have been developed, such as personas, affinity maps and eye tracking, to help predict the multidimensionality of user experience and behaviour in product design and development. For example, Vermeeren et al. (2010) identified some 96 UX methods, of which 70% originated from academia, approximately 20% from industry, and the remainder from combined academic and industrial efforts. However, many of these methods have been found to have further development needs, and there is no clear, universal protocol for how they can be applied in different contexts and without bias. Ultimately, UX practitioners use the various methods in ways that suit them best.

Moreover, UX methods and readily available sources of user information (e.g., the Cambridge Inclusive Design Toolkit or the Loughborough CAD-based anthropometric environments) are not always used to their full capacity due to hard, immovable business deadlines, limited budgets and a ‘do it yesterday’ mentality. There is also the problem observed by Leveson (2004) that “technology is changing faster than the engineering techniques to cope with the new technology are being created” (p. 237).

Overall, given the immense effort that various academic and commercial institutions have invested in developing better tools for expanding the boundaries of product usage in design and development, more has to be done to raise awareness of such tools. In addition, UX practitioners should also always be on the lookout for better and more informative tools that can provide reliable user data in situations where it is not possible to engage actual users.

2. Optimal number of participants required for user studies
How many users are enough? There are probably as many answers to this question as there are authors in the field of UX. For example, Sauro and Lewis (2012) believe that small numbers of test participants can ensure the discoverability of problems in product prototypes, provided that the following are in place: (1) expert test observers who can assess whether a problem observed once is likely to be a problem for other people and can determine the root cause of user problems; (2) multiple test observers and both novice and experienced test participants; (3) products with new rather than matured interfaces; and (4) coverage of both simple and complex tasks set along a pathway.

In addition, Sauro and Lewis (2012) help UX practitioners plan sample sizes with problem discoverability in mind. Here the required participant sample size is a function of the problem occurrence probability (p) and the desired likelihood of detecting the problem at least once, P(x≥1). For example, if a user study aims to uncover slightly harder-to-find problems (p=0.15) and wishes to be 85% sure of finding them, it would need a minimum of 12 participants. In this model, sample sizes grow quickly for the most difficult-to-find problems and the greatest certainty of finding them, as the calculation below illustrates. Others (e.g. Nielsen and Landauer, 1993, and Virzi, 1992) found that 80% of the usability problems in a test could be detected with as few as four or five participants.
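The relationship described above reduces to a simple binomial calculation: the chance of observing a problem with occurrence probability p at least once across n participants is P(x≥1) = 1 − (1 − p)^n. The short Python sketch below solves for the smallest n that meets a chosen detection goal; the function name and rounding convention are our own illustration rather than Sauro and Lewis’s notation.

```python
import math

def participants_needed(p: float, goal: float) -> int:
    """Smallest n such that 1 - (1 - p)**n >= goal, i.e. a problem with
    occurrence probability p is seen at least once with probability >= goal."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# Worked example from the text: slightly harder-to-find problems (p = 0.15),
# with an 85% chance of observing them at least once.
print(participants_needed(0.15, 0.85))  # -> 12

# Rarer problems escalate quickly: p = 0.05 at the same 85% goal needs 37 people.
print(participants_needed(0.05, 0.85))  # -> 37
```

The same arithmetic also explains the other end of the debate: for the common problems that dominate early prototypes, with an average occurrence probability in the region of 0.3, four or five participants are already expected to surface roughly 80% of them, which is broadly consistent with the Nielsen and Landauer (1993) and Virzi (1992) findings.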

Overall, testing with even one user is always preferable to testing with none. Moreover, a sample size of five participants can be sufficient for detecting a majority of problems in prototypes, provided that those participants have varied backgrounds and levels of competence.

3. Time investment required to rigorously capture and analyse user studies
The fidelity of UX methods used in industry varies. Lower-fidelity UX methods offer quick desk-based estimations and simulations of user ability to inform design; they include the Cambridge Inclusive Design Toolkit, the Loughborough CAD-based anthropometric environments and basic task analysis. The next level of methods comprises practical ‘quick and dirty’ techniques, where designers use basic materials, software such as Axure and Balsamiq, and guerrilla techniques to create low-fidelity, click-through prototypes of a product’s look, feel and underlying information architecture, permitting an early appraisal of how it functions in reality when put into the hands of various users.

Conversely, higher-fidelity UX methods directly involve real users through interviews, surveys, and more ethnographically-orientated studies. At the core of these methods is a belief that all people have something to offer at every stage of the design process (Sanders, 2002) and that this involvement can be fundamental to the generation of new ideas and the development of current thinking. Toward this aim, contextual interviews and estimations have increasingly gained momentum. Participatory design, for example, aims to develop technologies with the close involvement and observation of end-users through cycles of requirements gathering, prototype development, implementation and evaluation (Sharma et al., 2008).

No matter the fidelity of a UX method, rigour in applying it is always required to efficiently and effectively inform production of a given product. However, rigour is often sacrificed for the sake of releasing a new product or updated features quickly. Furthermore, the lack of rigour in applying existing UX methods in industry settings may also stem from the finding that a vast majority of UX methods (70%, as identified by Vermeeren et al., 2010) originate from academia, and they are therefore often impractical to apply in a fast-paced industry context under cost-reduction and time-to-market pressures.

4. Co-design and development – Aligning the design process with development cycles
The Dual-Stage Framework for accessible user-centric design and development (Wilkinson and De Angeli, 2014) is a useful example of the modern focus on collaborative design and development with UX. This six-part framework follows a dual-iteration process: in the first iteration, the research design (exploring project needs with stakeholders and planning primary research) is further informed by secondary research (into related products, the perceived market, and current and target users) and the development of user profiles. The second iteration is concerned with undertaking user observations, analysing the findings and triangulating them against other participant data, and contextualising the results in terms of new concepts for co-design and development. The framework embraces and iterates all design and development stages – investigation, design, review, production – as needed. In addition, it recognises that the perceived implications for product design and development can be tested through further observations of users interacting with prototypes; for example, additional follow-up tests might be run with as many participants as needed to confirm the fixes.

Over time, much has been written about the collaborative design and development of products, intended to deliver design guidance through ongoing, iterative tests in a timely manner to inform ongoing development. Despite that, the two areas have historically been divided into separate activities. Product development, being the domain of engineers, has long embraced the Agile methodology and its associated sprints, which aim to achieve (certain) product requirements in short bursts of work. Conversely, product design with UX, being a newly appreciated field, has yet to develop a formalised way of working. This is because product design with UX has previously been seen as a costly ‘add-on’, with organisations favouring a focus on development in order to deliver products quickly and as cheaply as reasonably possible. The latest trend in modern industry, however, is to join the two processes together and run design sprints (UX) alongside development sprints (engineering), keeping design at least one or two sprints ahead to ensure a steady flow of deliverables into development.

Ultimately, given that UX is fast becoming a major business imperative, more effort needs to be invested in quality work with optimally selected study participants from varied socio-economic backgrounds and competences, as well as with the available support methods.


References:

  1. International Standards Organisation (2010). ISO 9241-210: Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems. https://www.iso.org/standard/52075.html
  2. Nielsen, J. & Landauer, T.K. (1993). A mathematical model of the finding of usability problems. Proceedings of ACM INTERCHI’93 Conference (Amsterdam, The Netherlands, 24-29 April 1993), 206-213.
  3. Sanders, E. (2002). From user-centred to participatory design approaches. In J. Frascara (Ed.), Design and the social sciences (pp. 1-9). London: Taylor & Francis.
  4. Sauro, J. & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Waltham, MA: Morgan Kaufmann.
  5. Sharma V., Simpson, R., LoPresti E., Mostowy, C., Olson, J., Puhlman, J., Hayashi, S., & Cooper, R. (2008). Participatory design in the development of the wheelchair convoy system. Journal of NeuroEngineering and Rehabilitation 5, 1-10.
  6. Vermeeren, A., Law, E., Roto, V., Obrist, M., Hoonhout, J. & Vaananen-Vainio-Mattila, K. (2010). User experience evaluation methods: Current state and development needs. In A. Blandford, J. Gulliksen, E. T. Hvannberg, M. K. Larusdottir, E. L-C. Law, H. H. Vilhjalmsson (Eds.), Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (pp. 521-530). ACM.
  7. Virzi, R.A. (1992). Redefining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34, 457-468.
  8. Walters, A. & Evans, J. (2011). Developing a framework for accessible user centric design. Proceedings of the 18th International Product Development Management Conference, Delft, Netherlands.
  9. Wilkinson, C. R. & De Angeli, A. (2014). Applying user centred and participatory design approaches to commercial product development. Design Studies, 35, 614-631.