Does the Golden Rule apply to AI?
Whether you treat AI as you would like to be treated may depend on whether it's a partner or a tool. But let's explore a third way to think about AI at work.
Hey there! Welcome to a work-culture newsletter that helps you do more than cope when work’s a lot. Grateful to have you join the convo! Please reply to this email and say what ideas this piece provokes for you! - craig
Malcolm Gladwell suggested the other night at No Small Endeavor—a variety show performed before a packed crowd at Calvin University—that a decline in religious instruction has also meant a decline in opportunities to teach good manners.
Maybe. In the workplace, I suspect rapid technological change hasn’t helped either. At work, the rise of AI, for example, has created a lot of uncertainty about how to comport ourselves. Should you be polite to Claude? Should you try to make Siri curse by asking for the French word for “seal”?
Some say “please” to ChatGPT. Others abuse their bots. How should we converse with these conversational agents at work?
Quick story about how (not) to talk to technology:
I remember one supper in my childhood household in the early 1980s. That night, the phone clanged after we had finished saying grace and were passing the casserole. My dad looked up and nodded for me to get it. You have to understand that I wasn’t usually allowed to touch the phone. Calls were expensive, and my parents kept a close eye on the monthly bills. Besides, the phone was a sort of mystical object, a means of transcendent connection. The slight hum on the line sounded to me as if it came from somewhere in the abyss of deep space. And now I’m supposed to answer it? I had a dim notion, from having watched television, that after picking up the receiver, I should say, “Hello—Mattson residence.” But that’s not what came out. I lifted the phone off the wall and said in a piping voice, “Our dear Heavenly Father….”
That story replays a persistent human habit: whenever we encounter new technology, our species engages it like my 9-year-old self did. We struggle to find the right greeting, the fitting interaction. (Ontogeny recapitulates phylogeny.) When the phone was first developed, people treated it the way we now treat radio, as a broadcast medium for building community. Early radio, on the other hand, felt to its first listeners like an intimate, point-to-point exchange, rather like a phone seems to us. (Want more on this subject? See the work of John Durham Peters, especially Speaking into the Air.)
As we chat with AI today, it’s not surprising that we’re unsure about manners. It’s not surprising that we wonder about saying “please” and “thank you” to ChatGPT. It’s also not surprising that people speak abusively to their bots.1
In this week’s work-culture newsletter, let’s mind our workplace manners with AI by exploring a way to frame the issue and a way to shift it, too.
A way to frame the question
You engage AI as you network on LinkedIn, design slide decks for clients, craft key images on Canva, check analytics and metrics, and consult digital coaches. I myself use it to filter out ambient HVAC noise in the background of the Mode/Switch Pod. (BTW, please follow our intergenerational team of podcasters on Apple or Spotify!)
So, yes, AI is everywhere. How should we talk to it? We could frame two basic approaches this way:
Treat it like a partner. Ethan Mollick at the Wharton School says we should speak to AI as we would to a collaborator. AI isn’t human, he says, but it behaves like a kind of alien intelligence. So if you want to engage it well, treat it not as a tool, but as a partner.
Treat it like a tool. Jaron Lanier, a technologist at Microsoft Research, disagrees. Making AI seem personal, he argues, wastes a lot of energy. Addressing it as if it were a person tempts us to turn a tool into a golden calf. As with my 9-year-old self on the phone, what we say to our tech can turn into a weird kind of prayer.
These two thinkers might actually agree with each other: Mollick is, after all, talking to AI users, while Lanier is talking to AI designers. But let’s use their apparent disagreement to frame two kinds of bad manners with AI.
On the one hand, we don’t want to treat AI like any other tool—say, like a Bic pen or a computer monitor. Mollick is right: AI is relational. It’s a network, a series of connections. We’re smart to make it a part of our relational network as well.
On the other hand, we don’t want to confuse it with a soul. Or a god. (A little unexpectedly, Lanier’s warnings echo John Calvin: the human heart is an idol factory.)
So, how should we engage AI for the good of our working community?
A way to shift it, too
Here’s my proposal: treat AI like a moveable staircase—you know, like a set of steps on castors, the sort of thing you see stagehands pushing around on a theatre set. In other words, AI can help you and your team stage your work each day. AI not only helps you reach what you couldn’t quite reach on your own; it also helps you connect different levels of your work.
The ladder metaphor has long described individual spiritual and philosophical ascent.2 But a moveable staircase is a humbler item, wide enough to be shared with coworkers. The applications are plentiful and flexible:
Use AI to take notes during meetings, a task rather like the bottom step of committee work, allowing your people to “move up” to tasks requiring more involved deliberation and evaluation.
Draft functional, logistical communications in emails with AI so that you and your team can spend less time clambering around the inbox and more time “stepping up” and showing up to meaningful conversation with team members.
Reschedule meetings with Todoist AI Assistant and Reclaim AI, connecting coworkers and administrators at various levels of your working community.
Give more employees access to AI-powered analytics, using platforms like Tableau. Anybody in the company can ask the platform to “explain the data,” then use the analytics to connect their work with organizational goals.
These use cases, instead of treating AI like a human soul or a flat-head screwdriver, imagine it as a flexible and provisional place to stand and to climb with others.
A final thought
Should you treat AI as you yourself would want to be treated? I suppose. Your approaches to technology are habit-forming. If you regularly savage Claude or cuss out Copilot, you’ll cultivate habits that eventually affect your work culture, too.
Maybe, though, it’s not the conversations we have with AI that matter as much as the ones we have about AI.
I get this idea from reading Angela Williams Gorrell’s book Always On, which notes that we often have a lot of “fruitless conversations” about new technology. She describes these conversations as nostalgic, enthusiastic, anxious, or apathetic. I confess I’ve felt those emotions in conversations about AI at work. Instead, she recommends “interested conversations.” I like the curiosity and the investment that phrase implies. But there’s also a religious vector. Robert Pirsig used to say “Buddha’s in the machine.” Gorrell adds that “God’s online.” The point is, we and our teams should stay alert for what’s life-giving in new technology. -craig
A small change in publishing schedule:
Because I’m starting work on my next book—this one about how to use communication to abso-friggin-lutely flourish in work culture—you can expect this newsletter every other Tuesday.
But it won’t feel like a bi-weekly, because on the “off” weeks—when I’m doing deeper book-writing—the Mode/Switch will feature an episode of our intergenerational podcast. You’ll hear a Boomer, an Xer, a Xennial, a Millennial, and a Gen Zer engaging experts on tactical questions and big-picture issues like how to keep your footing in career transition.
Your next book, you say? I didn’t know there was a previous book!
Please pick up a copy of Digital Overwhelm here or here.
1. According to Dr. Sheryl Brahnam, as much as half of our interactions with conversational agents are classifiable as abusive.
2. The image of a ladder you kick away is much older than Wittgenstein, who made it well known in modern times. He suggested that readers ascend his ideas until they no longer need them. He also wrote that “if the place I want to reach could only be climbed up to by a ladder, I would give up trying to get there. For the place to which I really have to go is one that I must actually be at already” (Culture and Value: Revised Edition, 1991). It’s interesting to think about how many ways we use AI to get where we already are.