Why SkyNet isn’t the problem — how UX and AI can work together.

Artificial Intelligence isn’t coming to replace your role in User Experience, but we need to be wary of those who think it could.

Andrew Robert Burgess
7 min read · Apr 3, 2023
A Terminator style robot giving a presentation in front of a whiteboard covered in sticky notes and process diagrams.
Robots aren’t going to take over our UX jobs. Image created by Microsoft Bing/DALL-E image creator.

With the release of ChatGPT last November, tech sections and timelines have been awash with people presenting what they have done with it, giving their thoughts on how it’s going to shape the future, and of course the usual scare stories about how AI is going to take our jobs, robots will rule the future, and we’re just days away from SkyNet becoming sentient.

Seriously, please calm down. ChatGPT is not going to bring about Judgement Day.

The reactions to these developments in Artificial Intelligence and Machine Learning have been interesting from a sociological angle, with many scrambling to be the first to discover new ways to use it, and those of us warning that perhaps it’s better to work out why we should use it, rather than just how we can use it. Only last week, I saw a story on Medium that suggested that ChatGPT could be used to generate user personae and user journeys, instead of developing them from research with actual users. It concerns me to think about how many people will think this is a viable shortcut, when it could end up making products even worse, by assuming that the word of AI is a viable alternative to the needs of users.

This led me to a conversation with a former colleague and fellow UX thought leader, to consider the ways in which AI could be useful in the User Experience discipline. During our discussion, we ended up grouping the outcomes into several areas:

The tasks that AI could do for us

We imagined how, when given a set of rules and criteria, a computer could produce:

  • Styling and identity: Using a given set of ideas, images, designs, and concepts from a “mood board”, pull together the visual and the verbal to create an overall set of colours, fonts, and styles to be used in a product.
  • User Interface elements: Generate a set of components for a user interface, including text boxes, information panels, and buttons, basing them upon the styling it previously created.
  • Standard code: Write HTML, CSS, and JavaScript code to implement those UI elements into the product itself.

While this could make the process faster, we also identified a key problem with this approach: AI synthesises from existing work, and lacks virtuosity. When the Bootstrap framework came out in 2011, there was a noticeable trend of websites and products that began to resemble the styling within the framework, primarily because it was being used to put projects together quickly, and the standard styling was just assumed to be sufficient. This led to a lot of projects looking very much the same, and some instances where the Bootstrap styling didn't really work. Using Artificial Intelligence to generate UI elements could well create the same problem: it would produce standard elements that could be used in a variety of projects, but fail to provide the creative element that generates new ways of displaying information or interacting with it.

AI can help, but it might not be too original. Created by Microsoft Bing/DALL-E image creator.

Tasks that AI could help us with

With improved computational ability, Machine Learning could well help with:

  • Reviewing large data sets to find patterns and trends: a commonly referenced benefit for Machine Learning, and possibly a benefit to share in a product team with data and business analysis colleagues. Being able to understand the stories within large complex data sets—such as trading information, weather patterns, or results from scientific experiments—could well help UX practitioners to understand the context of a given project in more detail. This might even help them to interview users, stakeholders, and subject matter experts in a more informed way.
  • Finding and tagging themes within long interview transcripts: reviewing interview recordings, generating usable transcriptions, and extracting observations can be very time consuming. Machine Learning could help UX researchers to identify and flag recurrent statements within the transcript, correlating them between different users and user types to identify themes and patterns.
  • Correlating user analytics with user research: combining the two points above, Machine Learning could use computational processing to review user analytics data, and compare it with the themes identified within user research transcripts to find a correlation between what users say, and what they currently do. For example, if users are saying that they find a checkout process difficult, large sets of user data could be analysed to see at which points in the process people abandon their carts, and why.
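To illustrate how the theme-tagging idea above might look in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the transcripts are invented, and the themes are seeded by keywords a researcher might choose, rather than anything a real research tool does.

```python
from collections import Counter

# Hypothetical interview transcripts, one string per participant.
transcripts = {
    "user_1": "The checkout was confusing. I could not find the delivery options.",
    "user_2": "Checkout felt confusing and the delivery cost surprised me.",
    "user_3": "I liked the product pages but the search results were poor.",
}

# Hypothetical theme labels, each seeded with keywords by a researcher.
themes = {
    "checkout_confusion": {"checkout", "confusing"},
    "delivery_concerns": {"delivery"},
    "search_quality": {"search"},
}

def tag_transcript(text, themes):
    """Return the theme labels whose keywords all appear in the text."""
    words = set(text.lower().replace(".", "").replace(",", "").split())
    return {label for label, keywords in themes.items() if keywords <= words}

# Count how many participants mention each theme, surfacing recurring
# statements across different users.
theme_counts = Counter()
for user, text in transcripts.items():
    theme_counts.update(tag_transcript(text, themes))

print(theme_counts.most_common())
```

A real system would use language models rather than exact keyword matching, but the output is the same shape: recurrent themes, counted across users, ready for a researcher to review.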

As above, while these ideas might make the process faster, there are also caveats around relying upon the algorithm to work entirely on its own. Findings around the data, the themes in research transcripts and, ultimately, the correlation of the two would have to be checked by humans: for false positives, to ensure that the criteria genuinely aligned, and for false negatives, to see if anything had been discounted or missed on the wrong criteria. As this information would be used to inform the development of a product, it would require some degree of supervision, checking, and training to ensure that the machine had discovered all the right results.
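As a rough sketch of what that human check could look like, you could compare the machine's flagged findings against a human-reviewed sample of the same material and measure the overlap. The theme labels and figures below are invented purely for illustration.

```python
# Hypothetical: themes the algorithm flagged vs. themes a researcher
# confirmed when reviewing the same sample of transcripts.
machine_flagged = {"theme_1", "theme_2", "theme_3", "theme_5"}
human_confirmed = {"theme_1", "theme_2", "theme_4"}

# False positives: flagged by the machine, but rejected on human review.
false_positives = machine_flagged - human_confirmed
# False negatives: confirmed by humans, but missed by the machine.
false_negatives = human_confirmed - machine_flagged

agreed = machine_flagged & human_confirmed
precision = len(agreed) / len(machine_flagged)  # how many flags were right
recall = len(agreed) / len(human_confirmed)     # how many true themes were found

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Low precision means the criteria don't align and the machine is over-flagging; low recall means it is discounting things it shouldn't. Either way, a human needs to look.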

Always check your AI’s work. Created by Microsoft Bing/DALL-E image creator.

Tasks that we would have to do ourselves

We also identified that there are some elements of the UX process which Artificial Intelligence, in its current state, is unable to do:

  • Conducting user interviews and exploring what users want: while a computer could generate a standard set of user questions, it would lack the ability to adapt those questions based upon the responses of the user being interviewed. This ability is important for discovering new, previously unknown opportunities for improvement, and can even, at times, be a point of courtesy: if you're interviewing someone for whom a certain facet of your enquiry is irrelevant, the ability to adjust and revise your questioning can stop them from getting bored or irritable and losing focus on the interview.
Recently, The Mandalorian showed just how ineffectual robots can be at nuanced interviews. Lucasfilm Ltd.
  • Devising comprehensive solutions to a problem: in its current state, AI looks unable to take the findings from a programme of user research, the stated requirements from business analysis, and the technical limitations, and generate a solution that addresses all of those criteria. This is a key part of UX, bringing the discoveries together to define what's going to be built, so at least we can be sure that the robots are still going to need our help on this one.
A Venn diagram with overlapping User Needs, Business Goals and Technical Constraints, with a robot face in the middle overlap, crossed out
AI is currently unable to create solutions that address the needs of these three sets of criteria

In both of the cases above, Artificial Intelligence currently lacks the virtuosity required to work with those situations in a creative and adaptive way. This problem comes down to its ability to synthesise, but not create anew. This may well change in the future, but it seems like developments of this scale may be some way off.

Working well with AI. Created by Microsoft Bing/DALL-E image creator.

Conclusion: helpful robots, not Terminators

In summary, AI doesn’t currently have the capability to take over our roles in UX. In fact, it's the people who lack an understanding of the important parts of UX, and assume that everything we do can be done by AI, who pose the bigger threat to better practice in the discipline.

Of course, as with any comment on rapidly advancing technology, this piece is an opinion, a pin in time, and likely to become out of date before long. Artificial Intelligence will grow, gaining the ability to actually think and create for itself, and able to do all of the things I’ve said above that it can’t currently do. However, we’re not quite there yet. And, in the meantime, we may well find that computers continue to help us do our work better, rather than challenge us for it.

This is also a good thing, because in that time we can explore the implications of living with this technology: how we combine it with our current macro systems of working to earn a living, or how we develop new ideas to sustain everyone while the machines assist us. That, overall, has to be the biggest problem to overcome.



Andrew Robert Burgess

Design Leader (UX, UI, Strategy), DJ and music fiend (goth, industrial, metal, alternative), prognosticator and pontificator http://www.sableindustries.org