AI should be understandable (and design can help)

This blog post is a modified version of a talk given at Cogent’s AI Now event series.

With every innovation that our human imagination brings to life, we replace one way of doing things with a slightly different way of doing things. This type of technological change isn’t new; what is new is the pace of change and the speed and scale of disruption now possible.

This means the risk of designing products without considering the human impact is high. Getting it wrong doesn’t mean a product that is mostly right; it means alienating your audience and potentially damaging your brand.

Google Glass missed the mark

Product development involving tech that seems futuristic, such as AI, is particularly at risk, because expectations are less clear in unproven markets. Combine this with the hype surrounding AI, and the potential for product rejection is significant. We propose “making AI understandable” as a way to manage these risks.

Note that making AI understandable is not necessarily the same as making it explainable. We’re not saying that the technology’s implementation and inner workings need to be completely transparent. We believe that the reasons, usage, and impact of AI need to be understandable to all parties. If you can convey to users why something exists, how it is used and what impact it has, any surprises are much more likely to be pleasant.

This is why Design, and having designers on your AI product team, is important. Making something understandable is not a technical challenge; it’s a design challenge.

So how do you do that?

Articulate purpose clearly

You should ensure that users know the purpose of an AI feature, because trying to impress with tech alone is rarely effective. If users understand what the product does for them (suggests clothes that match their tastes), or why they should use it (it makes shopping easier), then you can improve engagement and reduce confusion. This is basic design thinking, and it becomes extra important when the interface and technology are new. New features that feel intuitive and obvious are much more easily accepted, so spend the effort to build well-thought-out CX flows that just happen to contain AI functionality.

Integrate with existing behaviours

Can we replace the entire experience? Do we need to include legacy behaviours? Automate a small part or replace the whole? This is new tech, but it’s being used in a familiar world. Think about how to integrate with existing behaviours to increase acceptance and comfort.

If the problem you are solving can have serious consequences, then it’s incredibly important to match users’ expectations to the system’s actual capability. Users may worry about some systems having too much control, yet are completely willing to give up control to others. For example, take the recent self-driving car accident that occurred because the safety operator, obliged to be able to take over at any time, was distracted – apparently texting.

Case study: Product Matching

Cogent works with a rapidly growing startup building a disruptive online marketplace. We helped the startup identify an opportunity to enhance the onboarding experience for new suppliers joining the platform, using onsite observation and interviews, and designed an AI solution that integrated with existing expectations and workflows. Read more here.

Don’t do creepy AI

“Creepy AI” is the experience of having a computer know too much about you. It can arise when a product knows things you haven’t explicitly told it. The information could be easily deducible, or common knowledge offline, but out of context the surprise can still be creepy.

For example: a person has a conversation with a friend about an upcoming trip to Berlin. The next day they receive ads for hotels, tours, and other businesses in Berlin. It’s creepy because we’re not yet used to the mental model of our phones acting as ‘always on’ listening devices. It’s too much magic. We want to understand how, and ideally what, our tech knows about us. Even if a product offers a helpful feature – suggesting offline friends, or auto-blocking that guy – some boundaries feel too personal. It’s worth noting that getting this wrong can affect a user’s perception of your brand’s other products or features, even if they aren’t related. Users are people, so design systems that respect their personal feelings, boundaries and privacy.

Avoid offensive outcomes

There is a joke about a woodsman meeting a fairy in the forest, who offers him a wish. The woodsman says “I wish my children never starve” and the fairy promptly kills all of his children. It’s a good (if extreme) example of how a lack of nuance and context can be a big problem.

Computers aren’t people. Models built from data lack the context beyond that data that humans take for granted. We need to consider how users will interact with a product in the real world. If users are set up to expect certain types of outputs from a product, we need to make sure the system delivers. We all know the frustration of a navigation system that suggests impossible routes. For another example, look at the problems that arise when facial recognition systems are trained on biased data sets. This means there’s a product imperative (and an ethical obligation) to put a lot of thought into training sets, to consider the context of use, and to involve a diverse variety of people in product testing. Product owners must share responsibility for identifying and avoiding unacceptable bias, so the burden doesn’t fall solely on engineers and data scientists.
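One lightweight habit that supports this is checking model performance for each user group separately, rather than only in aggregate, before shipping. The sketch below is a minimal, hypothetical illustration of that idea – the group names and results are made up, not drawn from any real project:

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Accuracy broken down by group.

    `records` is an iterable of (group, predicted, actual) tuples.
    A large gap between groups is a prompt to revisit the training
    data and the testing process before shipping.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation results for a face-matching feature.
results = [
    ("group_a", "match", "match"),
    ("group_a", "match", "no_match"),
    ("group_b", "no_match", "no_match"),
    ("group_b", "match", "match"),
]
print(per_group_accuracy(results))
# {'group_a': 0.5, 'group_b': 1.0}
```

A gap like the one above doesn’t tell you how to fix the model, but it surfaces the problem early enough for the whole team – not just the data scientists – to act on it.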

Make it auditable

Agile software development requires the build team to make daily decisions that can influence project and product outcomes. These can be in response to technical challenges, a change in product thinking, or a new round of user feedback. It’s part of the process. If the team doesn’t understand the product, they can’t effectively make those decisions.

Design can offer a clear purpose and suggest next steps in development. There will always be a level of opaqueness, but the more understandable your system structure is, the easier it is to audit it and make adjustments when the unexpected happens.
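As a simplified illustration of what “auditable” can mean in practice, one approach is to log every prediction together with its inputs and the model version that produced them, so surprising behaviour can be traced back to a cause. The sketch below assumes a generic model object with a predict method; all of the names are hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("prediction_audit")

class RuleOfThumbModel:
    """Stand-in for whatever model your system actually uses."""
    def predict(self, features):
        return "approve" if features.get("score", 0) > 0.5 else "review"

def predict_with_audit(model, features, model_version):
    """Make a prediction and record what went in and what came out.

    A structured trail of inputs, outputs and model version makes it
    possible to reconstruct why the system behaved the way it did
    when something unexpected happens.
    """
    prediction = model.predict(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }))
    return prediction

print(predict_with_audit(RuleOfThumbModel(), {"score": 0.7}, "v1.2.0"))
```

The exact mechanism matters less than the principle: every decision the system makes should leave a trail the team can read.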

Case study: Facebook

In 2017, Facebook was building a negotiating bot. They tested it by having two bots haggle with each other over the best way to apportion books, hats, and balls. During the test, the bots invented a unique shorthand because it was more efficient and staying in English hadn’t been prioritised as a requirement. End users wouldn’t be able to easily understand the negotiations, so the researchers decided to shut down the bots and make some changes.

While the tech worked, the solution wasn’t acceptable for end users because it wasn’t understandable. Having design principles and a clear purpose, along with an auditable system, meant the team had a path forward after the unexpected happened. A simple on-off switch doesn’t allow much room for learning, but auditability and purpose do.

At Cogent, we’ve been looking at ways to integrate AI into the systems we build. In the world of software engineering, we talk about principles that we’ve learned lead to good outcomes in the long run: agile development, testing first, writing reusable and maintainable code. With our AI systems, products and features, we believe we should be talking about understandable AI. Practical examples of understandable AI are a major topic of our AI Now event series.

If you’re interested in anything you’ve seen here and have questions about how to apply AI or machine learning to features or products effectively, reach out to us at Cogent.

Find out more about AI at Cogent here.