Microsoft explains Bing’s bizarre AI chat behavior

Microsoft launched its Bing AI chat product for the Edge browser last week, and it’s been in the news ever since — but not always for the right reasons. Our initial impressions were strong, as it offered up workout routines, travel itineraries and more without a hitch. 

However, users started noticing that Bing's bot gave incorrect information, berated users for wasting its time and even exhibited "unhinged" behavior. In one bizarre conversation, it refused to give showtime listings for Avatar: The Way of Water, insisting the movie hadn't come out yet because it was still 2022. It then called the user "unreasonable and stubborn" (among other things) when they tried to tell Bing it was wrong.

Now, Microsoft has released a blog post explaining what’s been happening and how it’s addressing the issues. To start with, the company admitted that it didn’t envision Bing’s AI being used for “general discovery of the world and for social entertainment.”

Bing subreddit has quite a few examples of new Bing chat going out of control.

Open ended chat in search might prove to be a bad idea at this time!

Captured here as a reminder that there was a time when a major search engine showed this in its results. pic.twitter.com/LiE2HJCV2z

— Vlad (@vladquant) February 13, 2023

Microsoft also noted that "long, extended chat sessions of 15 or more questions" can send things off the rails. "Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone," the company said. That apparently occurs because question after question can cause the bot to "forget" what it was trying to answer in the first place. To fix that, Microsoft may add a tool that lets you reset the search context or start from scratch.
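To make the context problem concrete, here's a minimal sketch in Python of how a chat client typically accumulates history and why a reset helps. The `ask_model` function is a hypothetical stand-in, not a real Bing or Microsoft API:

```python
# Minimal sketch: long sessions drift because every prior turn is
# re-sent to the model, and a "reset" simply drops that history.
# `ask_model` is a hypothetical placeholder, not a real Bing API.

def ask_model(history: list[dict]) -> str:
    """Placeholder: send the full conversation, return a reply."""
    raise NotImplementedError

class ChatSession:
    def __init__(self) -> None:
        self.history: list[dict] = []  # all prior turns, re-sent each time

    def ask(self, question: str) -> str:
        self.history.append({"role": "user", "content": question})
        reply = ask_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def reset(self) -> None:
        # The "start from scratch" tool Microsoft describes would do
        # something like this: clear accumulated turns so early,
        # unrelated questions can no longer steer later answers.
        self.history.clear()
```

After 15+ turns, the `history` list carries every tangent the conversation has taken, which is why a single off-tone exchange early on can color everything that follows.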

The other issue is more complex and interesting: “The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft wrote. It takes a lot of prompting to get that to happen, but the engineers think they might be able to fix it by giving users more control. 

Despite those issues, testers have generally given Bing's AI good marks on citations and references for search, Microsoft said, though it needs to get better with "very timely data like live sports scores." The company is also looking to improve factual answers for things like financial reports by quadrupling the grounding data it sends to the model. Finally, Microsoft will be "adding a toggle that gives you more control on the precision vs. creativity of the answer to tailor to your query."
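Microsoft hasn't said how that toggle will work under the hood, but in most chat models a knob like this maps to sampling parameters such as temperature. A purely hypothetical illustration:

```python
# Hypothetical illustration only: Microsoft hasn't disclosed the
# toggle's implementation. In many LLM APIs, "precision vs. creativity"
# maps to a sampling temperature: low values make output more
# deterministic, higher values make it more varied.

def toggle_to_temperature(mode: str) -> float:
    return {
        "precise": 0.2,    # near-deterministic, favors grounded answers
        "balanced": 0.7,   # a common default trade-off
        "creative": 1.0,   # more varied, riskier output
    }[mode]
```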

The Bing team thanked users for the testing to date, saying it "helps us improve the product for everyone." At the same time, the team expressed surprise that people would spend up to two hours in chat sessions. Users will no doubt be just as diligent trying to break any new updates, so we could be in for an interesting ride in the weeks ahead.

 
