Microsoft AI CEO: We’re making an AI that you can trust your kids to use
Popular artificial intelligence chatbots like ChatGPT and Meta AI are increasingly blurring the line between real-world and digital relationships by allowing romantic and sometimes sexual conversations — while scrambling to make sure children aren’t accessing that adult content.
But Microsoft wants no part of that, the company’s AI CEO Mustafa Suleyman told CNN.
“We are creating AIs that are emotionally intelligent, that are kind and supportive, but that are fundamentally trustworthy,” Suleyman said. “I want to make an AI that you trust your kids to use, and that means it needs to be boundaried and safe.”
Microsoft is locked in a race with tech giants like OpenAI, Meta and Google to make its Copilot the AI tool of choice in what Silicon Valley believes will be the next big computing wave. Copilot now has 100 million monthly active users across Microsoft’s platforms, the company said in its most recent earnings call. That’s well below competitors like OpenAI, whose ChatGPT has 800 million monthly active users.
But Microsoft is betting its approach will win it a wider audience, especially as AI companies grapple with how to shape their chatbots’ personalities amid reports of AI contributing to users’ mental health crises.
“We must build AI for people; not to be a digital person,” Suleyman wrote in a blog post earlier this year.
The interview came ahead of a series of new Copilot features that Microsoft unveiled on Thursday, including the ability to refer back to previous chats and engage in group conversations, improved responses to health questions, and an optional sassy tone called “real talk.”
Saying no to erotica
Some of Microsoft’s AI competitors are facing intense pressure to keep young users safe on their platforms.
Families have sued OpenAI and Character.AI claiming their chatbots harmed their children, in some cases allegedly contributing to their suicides. A string of reports earlier this year raised concerns that Meta’s chatbot and other AI characters would engage in sexual conversations even with accounts identifying as minors.
The tech companies behind popular AI chatbots say they’ve rolled out new protections for kids, including content restrictions and parental controls. Meta and OpenAI are also implementing AI age estimation technology aiming to catch young users who sign up with fake, adult birthdates — but it’s unclear how well those systems work. OpenAI CEO Sam Altman announced earlier this month that with its new safety precautions in place, ChatGPT will soon let adult users discuss “erotica” with the chatbot.
Suleyman said Microsoft is drawing a bright line at romantic, flirtatious and erotic content, even for adults. “That’s just not something that we will pursue,” he said.