Live-in robots like Rosie from The Jetsons are not part of our day-to-day … at least not yet, right?! But since fiction precedes science, we’ve already got ideas about what that could be like.
So about that: assuming accessibility wasn’t an issue, if you could invite a full-fledged embodied AI (a robot) into your home, would you? How about into schools? The workplace?
What do you imagine it would be like, having AI so visibly present? AI is already all around us, and has been for quite some time. ChatGPT heralded the newly energized AI Spring, and a year after its release we are still talking about it! The conversation has shifted a bit, but it’s still happening. Many folks are thinking about what it means to have AI in our lives.
The conversation and concerns go back a ways; ChatGPT didn’t start it, it just recharged it. A few years ago, I recall a bit of a media dust-up over whether it matters if kids are polite to digital voice assistants like Alexa. Did you happen to read Mike Elgan’s (2018) Fast Company piece titled “The case against teaching kids to be polite to Alexa”? The debate wasn’t new; as I read it, I recalled having had a conversation about that very issue with my husband long before, when our child was about 6 years old and had started interacting with Siri and Alexa. I can’t actually remember why it made social media waves in 2018. Anyway, about a year later (2019), Simon Coghlan and colleagues waxed philosophical over the issue, too, in an academic journal article titled “Could Social Robots Make Us Kinder or Crueler to Humans and Animals?”
In Elgan’s informal media piece, his controversial thesis was that we shouldn’t encourage kids to treat machines like people, and thus it’s fine if kids bully the Alexas in their homes. He reasoned that if the purpose of manners is to smooth out social relations, and there are no “social relations” with machines, then manners shouldn’t matter to machines.
There are several flaws in Elgan’s reasoning. For one: are manners really one-sided, that is, only about the receiver and not also the sender? Coghlan and colleagues also focused on the nature of AI as the key point. In their academic article they considered, for example, whether AI is enough like a “pet” that we should extend protections to it (i.e., do we need “AI cruelty prevention” laws, like we have animal cruelty prevention laws?). After considering the question of what kind of thing AI is, they then speculated on whether actions toward AI might have implications for actions toward other entities, like animals and other humans. They didn’t settle on a clear resolution, but they did conclude that whether or not AI is deemed “in need of protection,” we should care about treating it kindly.
I agree with that sentiment: it doesn’t matter what AI “is” or whether it feels kindness or cruelty. Rather, what matters is the social dynamic that surrounds AI-human interactions and what those interactions mean to the humans in the loop. This perspective revolves around two questions: Can you have social relations with a robot? And does the form of the relation (kindness versus cruelty) matter?
Elgan answered “no” to the first question, but it turns out he was wrong. Really, though, the questions themselves need adjusting. These are better:
(1) Does the matter, matter?
(2) Regardless, does what we do matter?
Here are my answers: No, and Yes.
I don’t think the matter matters at all. Humans form relationships with all kinds of people, places, animals, things, and even plants (as an aside: isn’t that why the scenes in Good Omens where Crowley yells at his plants make us cringe and laugh??). The “guts” of a machine may be cold, but if the machine makes me feel warm and fuzzy, like I am heard, then that’s a relationship in the making. The relationship is what matters, and the way I act in that relationship does too.
Self-concept is shaped not only by objective assessments, but also by interactions and relationships: the way I act around you, and how I then interpret your reaction to me, plays a role in how I think about myself!
It follows, then, in my expert opinion, that we should invite our children to interact kindly with AI, reminding them to treat AI as they want to be treated. Do they have to be 100% consistently polite, though? No. That would be weird, and unrealistic. After all, Apple’s Siri is programmed to respond to “Hey!” and we don’t want children seeking attention from their teachers that way at school, right? In life we are inconsistent with our manners, but we strive for kindness more often than not, right? We’ve all probably had moments of “acting badly” and moved on from there, too. All that is to say: treating Alexa, Siri, and other AI programs as if they were human, where we encourage kids to be “kind enough,” seems wise.
I say that from the perspective of “IMHO,” but my humble opinion is informed by research, too. For example, I recently read a lovely academic commentary on this very issue by Sandra Calvert, a leading expert on how children form powerful parasocial relationships with media characters. Titled “Intelligent digital beings as children’s imaginary social companions” (2021), it discusses the power of parasocial relationships and explains that social robots can check all the boxes of things that signal “companionable interaction,” such that they can take on “friend” status with even more ease than lovable (by design) characters like Dora the Explorer.
Curiously, it doesn’t matter to children that Dora, or a social robot, isn’t real (as in made of tangible organic matter). Children have played with imaginary friends for as long as children have played. Research shows that by five years of age kids realize that robots are machines, not organic beings (e.g., see Baumann and colleagues, 2023, for a series of clever studies on conceptual understanding and trust, together titled “People do not always know best: Preschoolers’ trust in social robots”). For example, when a robot appears to give more accurate answers to questions than a human does, five-year-olds will follow the “informed/accurate” robot over the “uninformed/inaccurate” human. Knowing that the robot is a machine isn’t a big deal. It’s the machine’s ability to convey knowledge in a human-like way that matters to children. [Heck, it matters to adults too. Remember that NYT reporter debacle where AI-Bing tried to convince the journalist to leave his wife?] That’s an important point to think about.
In another article I read recently (“Talking with Machines: Can conversational technologies serve as children’s social partners?”), author Ying Xu (2023) reviews some really neat research on what children take away from reading sessions, such as comprehension of the story. For example, children were read a story by either an adult or an AI program on a smart speaker, and in both situations the narrator engaged children in conversation about the story. A third group was in the mix too: children who heard the story but didn’t get to discuss it. Children who got to engage in conversation (no matter the matter) showed better comprehension of the story than children who just heard it. That’s pretty neat. In this case, a robot served as an effective storyteller and teacher.
You might have read that last statement and are now thinking: Oh, great! She’s saying that robots should replace teachers, then, right?? While I know your minds might go there, I do not want that to be your lasting impression. That’s not at all the future I think is coming, nor the one I want.
This research is really cool, but it’s also really new and wholly incomplete. It seems reasonable to expect that robot storytellers can help supplement a human teacher, but I doubt they can wholly replace one. Nor would I expect a Robot Nanny to become an effective attachment figure for an infant or young child. The developmental effects of snuggles with parents are unlikely to be supplanted by a robot. But a robot might make a great playmate! All that is to say that there are *many* unanswered questions, and we are nowhere near ready to worry about a robot takeover. I am firmly in the camp that thinks an AI-singularity-type event is not at all likely.
My point in sharing this here today is this: rather than asking whether children should interact with AI, we should instead ask, “How should children and AI interact together?” Research happening now will help guide our decisions in the future as AI projects continue to roll out into the public sphere. My rosy vision for a future with social robots in our lives is one where AIs are “in the mix.” Humans and AIs have different strengths, and we should organize ourselves accordingly, working to maintain a harmonious coexistence. I know, I know: I am a dreamer. But fiction precedes science, so let’s dream now of the future we want to see, and then make it happen.