Tasks, Actions & Agents

Jimmy Tidey
2 min read · Aug 10, 2024


In a previous post, I looked at Google’s “Unsolved problems in cooperative AI” paper. It’s an incredibly expansive paper, drawing on all kinds of disciplines. This time I want to do the opposite: focus on one discipline, and think specifically about how AI agents could disrupt the conceptual landscape of HCI / UXR. Where my previous posts have been directly linked to academic papers, this post is just… a bit more stream of consciousness.

User Experience Research often draws on the notion of a ‘task’ — the user arrives at your interface with a task in mind, and carries out a series of actions with the goal of completing that task. The interface can be considered successful to the extent that it helps them complete their task. This leads to metrics like ‘task completion time’ for measuring the quality of your UI.

For something like a customer service chatbot, task completion seems like an applicable model. You arrive with a question about, say, how you return a garment, and you are successful once you have sufficient information to return the garment.

This is the paradigm used in Ryen White’s paper on search and AI agents. (White is a well-established researcher at Microsoft, and his work gives a picture of where the ‘copilot’ language has come from.)

But the ‘task’ model gets seriously destabilised when we start to imagine AIs as more sophisticated agents that can act autonomously. Such an AI can take actions on its own or decompose your task into subtasks. It might go on to act on your behalf indefinitely, dissolving the idea of ‘time to task completion’. It might suggest that your task should be modified, or help you define the task in the first place. In short, agentic AIs will complicate the task model.

Just to give a toy example, imagine that Photoshop includes an agent feature, where you can give instructions such as: “With every photo I put in the ‘headshots’ folder, remove the background, crop appropriately, fix the colour balance and save it to the ‘headshots_edited’ folder.”
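To make that concrete, here is a minimal Python sketch of what such a feature might do under the hood. Everything here is invented for illustration — the function names, the subtask list, and the idea that the agent plans a fixed pipeline — rather than a description of any real product:

```python
# Hypothetical sketch of an agentic photo-editing feature: one high-level
# instruction is decomposed into subtasks that run on every new photo,
# with no per-photo actions from the user. All names are invented.

def decompose(instruction: str) -> list[str]:
    # A real agent would plan these steps from the natural-language
    # instruction; they are hard-coded here for illustration.
    return [
        "remove_background",
        "crop",
        "fix_colour_balance",
        "save_to_headshots_edited",
    ]

def run_agent(photos: list[str]) -> list[tuple[str, str]]:
    """Apply every subtask to every photo; return a log of actions taken."""
    subtasks = decompose(
        "With every photo in 'headshots', remove the background, crop, "
        "fix the colour balance and save to 'headshots_edited'."
    )
    return [(photo, step) for photo in photos for step in subtasks]
```

Notice that ‘time to task completion’ has no obvious meaning in this setup: the agent keeps running as photos arrive, and the user’s only actions are writing the instruction and checking the results.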

There would be all kinds of challenges in user testing such a feature, but the one I was thinking about when I started writing was the shifting nature of task and action. The user can arrive at the interface with, compared with the non-AI world, an incredibly high-level task (eg. ‘make headshots for the People page on our website’), and nearly all of the actions, and even the subtasks, get pushed down into the AI. The focus of UI testing might move from ‘can the user complete a task?’ to ‘can the user understand what tasks the agent is capable of?’ and ‘how can they evaluate whether it has succeeded?’.

Usability testing might also fail to catch interacting agents — an agent that works fine in development might start interacting with other agents once it’s deployed in the wild, with unpredictable consequences.
