Published on MediaPost, 24 February 2023
In the past few weeks, I’ve read approximately eleventy hundred essays and articles on ChatGPT, Bing, Large Language Models, and other AI-related topics. Like most people, my mind has been blown a fair few times — Kevin Roose’s conversation with Sydney, as reported in The New York Times, was a total trip. As I read it, I felt completely unsettled, at a visceral level. What might this “person” do? What are they capable of?
Most of the commentators take great pains to point out that they understand the technology, that they know it isn’t a thinking, feeling creature talking to them — but one leveraging vast troves of written content to string new troves of written content together in a way that seems meaningful.
“We’re not fools!” they seem to be saying. “We know how the magic trick works!”
But there are two points missing from this line of response. The first is obvious: We’re all just stringing new troves of content together in a way that seems meaningful. The second is perhaps less obvious: It doesn’t actually matter if it’s just a computer shuffling words around. The impact on us is the same.
In my courage-building courses, I have participants run through an exercise. They pick a difficult situation (but one they’re happy to talk about in public) and write down how they’re going to initiate a conversation about it: “My yogurt’s gone missing for the past three days. The story I’m making up is that you’ve been eating it, so I wanted to check in with you about it.” They then pair off with strangers and read their opening statements to each other.
It’s an intense experience for the folks practicing: writing it is always different than thinking it, and saying it is always different than writing it.
But it’s also an intense experience for the partners, who are inevitably surprised by their visceral reactions when the opening statements get read to them: “I felt so defensive, even though I’ve never even met you before, and I’m lactose-intolerant!”
Our systems — by which I mean our physical bodies, our emotional bodies, our thoughts, the totality that is us — do not distinguish between “real” and “fake” the way our frontal lobes do. I can know intellectually that you’re not accusing me of taking your yogurt, but my system is hard-wired to protect me, and it’s going to snap into gear without waiting for further instructions.
In other words, our systems respond how they respond even when we know the external situation isn’t “real.”
In other other words, our internal experience of a situation is distinct from the external “reality” of it.
Falling in love, becoming angry, feeling understood — as much as we think of these things as being about our relationships with others, they’re all experiences that happen within us. We like to say, “A computer can’t empathize,” but it doesn’t matter; a computer can provoke in us the feeling of being empathized with.
So we may know intellectually that Sydney’s just spitting out words in an order that seems consistent with all of the examples it’s digested. But our system responds exactly the same way as if it were a person writing to us.
AI may not be “sentient” — but that may not matter. In terms of the effect it can have on us, it’s already crossed the uncanny valley.
Any sufficiently advanced AI is indistinguishable from sentience. We’d be wise not to dismiss its impact.
Kaila Colbin, Certified Dare to Lead™ Facilitator
Founder and CEO, Boma