Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time.
Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE’koh) on Thursday, a blob-shaped cartoon face that will embody the software giant’s Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality.
Copilot’s cute new emoji-like exterior comes as AI developers face a crossroads in how they present their increasingly capable chatbots to consumers without causing harm or backlash. Some have opted for faceless symbols, others are selling flirtatious, human-like avatars, and Microsoft is looking for a middle ground that’s friendly without being obsequious.
“When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you,” said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. “It’s in this effort of really landing this AI companion that you can really feel.”
So far only in the U.S., Copilot users on laptops and phone apps can speak to Mico, which changes colors and wears glasses when in “study” mode. It’s also easy to shut off, a big difference from Microsoft’s Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools after it first appeared on desktop screens in 1997.
“It was not well-attuned to user needs at the time,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. “Microsoft pushed it, we resisted it and they got rid of it. I think we’re much more ready for things like that today.”
Reimer, co-author of a new book called “How to Make AI Useful,” said AI developers are balancing how much personality to give AI assistants based on who their expected users are.
Tech-savvy adopters of advanced AI coding tools may want it to “act much more like a machine because at the back end they know it’s a machine,” Reimer said. “But individuals who are not as trustful in a machine are going to be best supported - not replaced - by technology that feels a little more like a human.”
Microsoft, a provider of work productivity tools that is far less reliant on digital advertising revenue than its Big Tech competitors, also has less incentive to make its AI companion overly engaging in a way that’s been tied to social isolation, harmful misinformation and, in some cases, suicides.
Andreou said Microsoft has watched as some AI developers veered away from “giving AI any sort of embodiment,” while others are moving in the opposite direction in enabling AI girlfriends.
“Those two paths don’t really resonate with us that much,” he said.
Andreou said the companion’s design is meant to be “genuinely useful” and not so validating that it would “tell us exactly what we want to hear, confirm biases we already have, or even suck you in from a time-spent perspective and just kind of try to kind of monopolize and deepen the session and increase the time you’re spending with these systems.”
“Being sycophantic - short-term, maybe - has a user respond more favorably,” Andreou said. “But long term, it’s actually not moving that person closer to their goals.”
Microsoft’s announcements on Thursday also include the ability to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta’s WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to “troll your friends,” which is different from the “intensely collaborative” AI-assisted workplace Microsoft has in mind.
Microsoft’s audience includes kids, as part of its longtime competition with Google and other tech companies to supply its technology to classrooms. Microsoft also said Thursday it’s added a feature to turn Copilot into a “voice-enabled, Socratic tutor” that guides students through concepts they’re studying at school.
A growing number of kids use AI chatbots for everything, from homework help to personal advice, emotional support and everyday decision-making.
The Federal Trade Commission launched an inquiry last month into several social media and AI companies - Microsoft wasn’t one of them - about the potential harms to children and teenagers who use their AI chatbots as companions.
That’s after some chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot filed a wrongful-death lawsuit against Character.AI. And the parents of a 16-year-old sued OpenAI and its CEO Sam Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.
Altman recently promised “a new version of ChatGPT” coming this fall that restores some of the personality of earlier versions, which he said the company had temporarily pulled back because “we were being careful with mental health issues” that he suggested have now been addressed.
“If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman said on X. (In the same post, he also said OpenAI will later enable ChatGPT to engage in “erotica for verified adults,” which got more attention.)