Negotiating Autonomy with AI Communication Technologies: A Conversation with Camille G. Endacott

Artificial intelligence communication technologies (AICTs), such as chatbots and virtual assistants, are becoming increasingly integrated into our everyday interactions. These technologies are redefining traditional methods of communication and prompting a reevaluation of how we manage interpersonal impressions in the workplace, as well as how we navigate the social and organizational complexities they introduce.

Upwork’s User Research team and the Upwork Research Institute recently spoke with Dr. Camille G. Endacott, Assistant Professor of Organizational Communication and Organizational Science at the University of North Carolina at Charlotte. Dr. Endacott’s work focuses on how people’s patterns of work unfold around digital technologies, and with what consequences for organizing.

Our conversation was part of Upwork’s Reimagining Work, a lecture series designed to provide a forum for expert practitioners and academics to foster the exchange of views on the present and future of work. In this conversation, we discuss Dr. Endacott’s studies on how users interact with and around AICTs, a process that involves constant negotiation of how users deploy AI’s autonomy (agency). We also ask Dr. Endacott for her point of view on how workplace communication and interactions will evolve due to these AICTs, especially the delegation of tasks to AI.

Allie Blaising, Senior User Researcher: What makes AICTs different from traditional communication platforms or channels?

AICTs differ from traditional communication technologies because users are no longer necessarily in full control of the messages sent on their behalf. While many digital communication technologies mediate communication, either directly as a channel (e.g., email) or as a platform for communication (e.g., social networking sites), AICTs go beyond mediating users’ own communication to making decisions about communication on users’ behalf.

AICTs can act as agents that can learn and make decisions about users’ communication, including what they communicate (especially as generative AI rapidly improves), in what modality, when, and with whom. So rather than only making it easier for one person to communicate with another person, AICTs can represent users to others and communicate for them.

In this way, AICTs are more like an executive assistant than a digital calendar—because they are agents that can themselves make decisions and communicate autonomously, even if the consequences of those decisions fall on the user. This means that AICTs can be involved in communication processes not only as mediators, but as actors themselves.

Dr. Ted Liu, Economist at the Upwork Research Institute: In your research, you have compared two types of AICTs for work scheduling: one more autonomous tool that can make decisions on behalf of users, and the other less autonomous. What stood out to you most about how users interacted with the AI’s autonomy and negotiated control with these tools in these two contexts?

Users of both types of AICTs were busy people who had a lot of calendar-related stress. However, they differed in how they worked to control the decisions made by their respective AICT.

With the AICT that was low in autonomy (i.e., AI decisions were offered as suggestions), users engaged in proactive customization: inputting preferences and approving decisions before they were sent to their communication partners.

With the AICT that was high in autonomy (i.e., AI decisions could be made and sent to others without the users’ approval), users could offer some initial prompts to guide the tool but it largely acted without user involvement. This allowed the tool to take over more of the cognitive load of scheduling for users, but also reduced the control they had over their calendar.

It was not uncommon for users to be scheduled for meetings at times when they had planned to do other work or pick their kids up from school. They then had to engage in reactive customization, blocking out time on their calendars to communicate their preferences to the tool after they had started using it.

I think these findings highlight a central tension that arises in the use of AICTs: users choose tools because they want AI to help them do their work, but they also want to retain control over how that work is structured.
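To make the two customization modes concrete, here is a minimal sketch in Python. It is not drawn from any tool in Dr. Endacott’s studies, and every class, method, and field name is a hypothetical stand-in: a low-autonomy assistant routes each decision through the user for approval (proactive customization), while a high-autonomy assistant books first and absorbs corrections afterward (reactive customization).

```python
from dataclasses import dataclass, field

@dataclass
class SchedulingAssistant:
    """Hypothetical AI scheduling assistant; all names are illustrative only."""
    autonomous: bool                                 # True = acts without user approval
    preferences: set = field(default_factory=set)    # slots the user has blocked, e.g. {"Mon 8am"}
    calendar: list = field(default_factory=list)     # booked meeting slots

    def handle_request(self, slot, approve):
        """Try to schedule a meeting, honoring any preferences codified so far."""
        if slot in self.preferences:
            return f"declined {slot} (conflicts with a stated preference)"
        if not self.autonomous and not approve(slot):
            # Proactive customization: the user reviews suggestions before anything is sent.
            return f"suggestion {slot} rejected by user"
        self.calendar.append(slot)
        return f"booked {slot}"

    def correct(self, slot):
        """Reactive customization: block a slot after an unwanted booking."""
        self.calendar.remove(slot)
        self.preferences.add(slot)

# Low autonomy: every decision passes through the user before it is sent.
low = SchedulingAssistant(autonomous=False, preferences={"Mon 8am"})
print(low.handle_request("Tue 9am", approve=lambda s: True))    # booked Tue 9am

# High autonomy: the tool books on its own; the user corrects it afterward.
high = SchedulingAssistant(autonomous=True)
print(high.handle_request("Mon 8am", approve=lambda s: False))  # booked anyway
high.correct("Mon 8am")                                         # user blocks the time after the fact
```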

Ted: What tradeoffs should we keep in mind when designing for reactive versus proactive customization among AICT users?

For users of AICTs, I think it’s important to think about how prepared you are to codify your preferences. My research suggests that users who are prepared to codify their preferences likely benefit from and prefer proactive customization, which allows them to specify their decision-making premises to AICTs.

For example, users who had already done a lot of thinking about how they wanted to spend their work weeks enjoyed using an AICT that operated with low autonomy and allowed them to pretty extensively customize how others interacted with their calendar. In this setup, the tool gave them a lot of control at the cost of not being able to operate on its own.

For users who are less prepared to codify their preferences, it’s probably more helpful to use a tool that is higher in autonomy and that is going to start making decisions with little oversight from the user. My study suggests that people learn more about their preferences as they use autonomous AICTs because they notice when they don’t agree with decisions made by the tool!

In the case of the AI scheduling technologies I studied, a user might notice that even though they are technically free at the times their tool scheduled meetings for them, they hate having morning meetings. They may then need to block out that time if they want to keep using an AI scheduling tool. The tradeoff here is that the AICT is taking over more of the task for the user, but the user has less control over the AICT.

For designers of AICTs, I think it’s important to know that users want some form of control over AI decisions, full stop. In that sense, developers might conclude it’s always worth offering proactive customization options. For example, a developer might design a tool so a user must always input additional criteria to guide decisions made by AI, or must opt in to certain features before they go live.
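As a hedged sketch of that opt-in pattern, and not a description of any product discussed here (all names below are assumptions), a developer might gate autonomous behavior behind explicit user-supplied criteria:

```python
class AutonomyGate:
    """Hypothetical opt-in gate: autonomous features stay off until the user
    has supplied decision criteria and explicitly enabled them."""

    def __init__(self):
        self.criteria = {}              # user-supplied decision premises
        self.autonomy_enabled = False

    def set_criteria(self, **criteria):
        self.criteria.update(criteria)

    def enable_autonomy(self):
        if not self.criteria:
            raise ValueError("Provide decision criteria before enabling autonomy.")
        self.autonomy_enabled = True

    def may_act_without_approval(self):
        return self.autonomy_enabled

gate = AutonomyGate()
gate.set_criteria(working_hours="9am-5pm", protected_blocks=["school pickup"])
gate.enable_autonomy()
print(gate.may_act_without_approval())   # True only after an explicit opt-in
```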

What’s lost, however, is the opportunity for AI technologies to learn from users’ actual data. Even though highly autonomous AICTs might threaten users’ control over their work, for these tools to learn, users have to try them out “in the wild.” Consequently, there are real gains to be had from designing AICTs that function autonomously because they can learn from user data, even as users engage in reactive customization to keep using them.

Developers should also recognize that users generate a great deal of value through their willingness to deploy AI technologies that are still learning, because they are helping these technologies improve. Personally, I believe it’s likely warranted to extend the time for which users can access a product for free in compensation for this labor (i.e., a longer ‘beta’ period).

Allie: Continuing on the impact of AICTs on interpersonal and organizational dynamics from your research, how can AICTs affect how users form impressions of their communication partners, or in many cases coworkers? And what do you think this means for how we design text- and audio-based AICTs?

Because AICTs are involved in communication, they are not experienced by the principal user alone – they also communicate with people with whom the user is communicating! We called these people “communication partners” in our study on this topic.

We found that AICTs that aren’t operating autonomously or in natural language don’t make a huge difference to how users are perceived by others. Communication partners may have good or bad experiences with the software, or they may think the user is being current by deploying a new tool, but their impressions of the user don’t change much.

However, when an AICT is operating autonomously and in natural language (for example, a personal AI assistant), it can really affect how users are perceived by others. Our research showed that users’ communication partners often transferred over their impressions of an AICT to users themselves—so if an AICT made a decision that communication partners didn’t like, users were blamed and left to clean up the mess.

However, this occurred most often with communication partners the user didn’t know well. When a communication partner did know a user well, they might have had a bad experience with the AICT but didn’t transfer it over to the user, as a single instance wasn’t strong enough to move the needle on their impressions.

What this all means is that users shouldn’t only consider how they perceive an AICT, they should also consider how their AICT is likely to be perceived by others, as it can reflect back on them. Further research that experimentally tested the relationships that we found in our qualitative research confirmed what we thought: the more people perceive a user and an AICT as one entity, or as a team (the user and the AICT share common goals and the user entrusts the AICT to act on their behalf), the more likely they are to transfer impressions of it to the user.

In terms of design implications, it’s worth remembering that designing an AICT to seem more anthropomorphic, or human-like, may make it seem more innovative but also likely encourages other people to transfer impressions of it over to the person who uses it. Using non-human names or even less human-like voices (for audio AICTs) might help remind people that an AICT is a machine and should not be evaluated or treated like a human assistant.

Relatedly, using a gender-neutral name for these assistants would be helpful in not perpetuating occupational stereotypes. For example, an AI assistant should not be given a feminine name because it risks invoking feminized stereotypes of secretarial work.

While our research focuses on transference that happens among individuals, it’s likely that the same holds true for organizations: people might transfer over negative perceptions of an AI customer service agent to the organization that deploys it.

Our research suggests that making it clear that a tool is AI and not a person can help because it avoids people feeling “tricked” by AI. For users, I would strongly encourage you to experiment with AICTs with people you know well first—these people are less likely to judge you based on any wonky decisions made by the AICT. After developing familiarity with how an AICT operates, then you might be able to use it with acquaintances or in more socially risky interactions.

Allie: How do you think we can best design AICTs so that they act in the best interest of users?

I think it depends on which users’ interests are being considered. Based on my conversations with AI developers in the course of this research, I think that many developers want to design tools that serve users well. What is difficult is that not all users want the same things when they choose to implement a tool. The question then becomes: whose interest should take priority?

For example, should an AICT be designed to optimize decisions for a message sender’s best interests? Or for a message receiver? One could imagine situations in which it would be really pragmatic to prioritize the message sender’s needs—for example, when a supervisor is communicating instructions to a subordinate. In other situations, it would be important to prioritize the message receiver (for example, a hopeful job candidate drafting an application).

Choosing which users’ interests to prioritize was a thorny issue for the companies I studied, and I think it will persist as a challenge in designing AICTs that support users’ desired work patterns. One possible solution is to strive to make the optimization criteria behind the decision-making process as clear as possible, an approach often referred to as “explainable AI.” It may not always be possible to offer users explanations, but by making the optimization criteria explicit, potential users can at least learn to whose interests the technology is designed to cater, and make an informed decision about adopting it.
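One way to picture this, offered as a hedged sketch rather than a description of any system from this research (the class and field names are assumptions), is a decision object that always reports whose interests it optimized and on which criteria:

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    """Hypothetical decision record that surfaces its optimization criteria."""
    outcome: str
    optimized_for: str    # whose interests were prioritized, e.g. "sender" or "receiver"
    criteria: list        # the premises the decision was based on

    def explanation(self):
        return (f"Chose '{self.outcome}' to favor the {self.optimized_for}, "
                f"based on: {', '.join(self.criteria)}.")

decision = ExplainedDecision(
    outcome="meeting at 4pm Friday",
    optimized_for="sender",
    criteria=["minimize the sender's context switching", "respect the sender's focus blocks"],
)
print(decision.explanation())
```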

Ted: What are the key future trends related to AICTs that work leaders should think about?

I’ll offer two considerations that I think will be really important to keep in mind as AI technology in general and generative AI in particular advance. First, while there has been a lot of talk on whether AI should automate or augment tasks (i.e., take over tasks completely or work in conjunction with human efforts), we think that delegation to AI will increasingly be a key configuration of human and AI decision making.

In delegation, a person may hand over a task to an AICT, but they retain responsibility for the task itself. As more advanced personal AI assistants are developed, we think that understanding how to best delegate tasks will be a crucial competency for work leaders, in the same way that learning how to delegate to human workers is. So rather than thinking about which tasks are for humans and which are for AI (the automation vs. augmentation debate), we should be thinking about how to communicate instructions to AICTs in effective ways.
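A minimal sketch of that delegation pattern, with entirely hypothetical function names, might separate the instructions a leader communicates to an AICT from the accountability the leader keeps: the AI performs the task, but a human still reviews the result.

```python
def delegate_task(task, instructions, ai_perform, human_review):
    """Hypothetical delegation loop: the AI performs the task under explicit
    instructions, but responsibility stays with the person who reviews it."""
    result = ai_perform(task, instructions)   # the AICT does the work
    approved = human_review(result)           # the delegator stays accountable
    return result if approved else None

# Example usage with stand-in callables (assumptions, not a real API).
outcome = delegate_task(
    task="schedule the quarterly review",
    instructions={"duration": "60 min", "avoid": "mornings"},
    ai_perform=lambda task, instr: f"{task}: Thursday 2pm ({instr['duration']})",
    human_review=lambda result: "morning" not in result,
)
print(outcome)   # the result stands only if the person who delegated approves it
```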

The second consideration to keep in mind is that work leaders should remember that it takes a lot of work to make AI work. Even the best AI technologies are still making decisions probabilistically and will need help to learn about the cases that diverge from their predictive models.

I think much of the popular rhetoric around AI assumes that AICTs will be able to work “off-the-shelf,” but in reality our research has shown that it takes a lot of extra labor to codify work processes, remediate AICTs’ errors, persuade people to use them, and apologize to others when things go awry. While this work can ultimately lead to more effective use in the long run, it’s important for work leaders to expect and plan for this extra work when making decisions about implementing AICTs.

This article originally appeared on Upwork.com and was syndicated by MediaFeed.org.
