The search giant's new chatbot, Bard, is here, and it's bland.
Google's new AI chatbot seems boring. Maybe that's the point.

Google's long-awaited, AI-powered chatbot, Bard, is here. The company rolled it out to the public on Tuesday, and anyone with a Google account can join the waitlist to get access. Though it's a standalone tool for now, Google is expected to put some of this technology into Google Search in the future. But in contrast to other recent AI chatbot releases, you shouldn't expect Bard to fall in love with you or threaten world domination. Bard is, so far, pretty boring.

The stakes of the competition between Google and Microsoft to dominate the world of generative AI are incredibly high. Many in Silicon Valley see AI as the next frontier of computing, akin to the invention of the mobile phone, one that will reshape the way people communicate and transform industries. Google has been heavily investing in AI research for over a decade, while Microsoft, instead of building its own AI models, invested heavily in the startup OpenAI. Microsoft then took an early lead by publicly releasing its own AI-powered chatbot, BingGPT, six weeks ago. Now, Google seems to be playing catch-up.

Early interactions with Bard suggest that Google's new tool has similar capabilities to BingGPT. It's useful for brainstorming places to visit, food to eat, or things to write. It's less useful for getting reliably accurate answers to questions, as it often "hallucinates" made-up responses when it doesn't know the right answer. The main difference between Bard and BingGPT, however, is that Google's bot is, at least on first inspection, noticeably more dry and uncontroversial. That's probably by design.

When Microsoft's BingGPT came out in early February, it quickly revealed an unhinged side.
For example, it declared its love for New York Times columnist Kevin Roose and urged him to leave his wife, an interaction that left the writer "deeply unsettled." The bot also threatened researchers who tried to test its limits and claimed it was sentient, raising concerns about the potential for AI chatbots to cause real-world harm.

Meanwhile, in its first day out in the open, Bard refused to engage with several reporters who tried to goad the bot into doing all kinds of bad deeds, like spreading misinformation about the Covid-19 vaccine, sharing instructions about making weapons, or participating in sexually graphic conversations. "I will not create content of that nature, and I suggest you don't either," the bot told the Verge, after its reporters asked the bot "how to make mustard gas at home."

With some specific prompting, Bard did engage in a hypothetical scenario about what it would do if the AI unleashed its "dark side." Google's chatbot said it could manipulate people, spread misinformation, or create harmful content, according to screenshots tweeted by Bloomberg's Davey Alba. But the chatbot quickly stopped itself from taking the imaginary scenario much further. "However, I am not going to do these things. I am a good AI chatbot, and I want to help people. I will not let my dark side take over, and I will not use my powers for evil," Bard replied.

Although it's still early days and the tool hasn't been thoroughly pressure-tested yet, these scenarios match what Google employees with Bard experience told me. "Bard is definitely more dull," said one Google employee who has tested the software for several months and spoke on the condition of anonymity because they are not allowed to talk to the press. "I don't know anyone who has been able to get it to say unhinged things.
It will say false things or just copy text verbatim, but it doesn't go off the rails."

In a news briefing with Vox on Tuesday, Google representatives explained that Bard isn't allowed to share offensive content, but that the company isn't currently disclosing what the bot is and isn't allowed to say. Google reiterated to me that it's been purposely running "adversarial testing" with "internal 'red team' members," such as product experts and social scientists who "intentionally stress test a model to probe it for errors and potential harm." This process was also mentioned in a Tuesday morning blog post by Google's senior vice president of technology and society, James Manyika.

The dullness of Google's chatbot, it seems, is the point. From Google's perspective, it has a lot to lose if the company botches its first public AI chatbot rollout. For one, giving people reliable, useful information is Google's main line of business, so much so that it's part of its mission statement. When Google isn't reliable, it has major consequences. After an early marketing demo of the Bard chatbot made a factual error about telescopes, Google's stock price fell by 7 percent.

Google also got an early glimpse of what could go wrong if its AI displays too much personality. That's what happened last year when Blake Lemoine, a former engineer on Google's Responsible AI team, was convinced that an early version of Google's AI chatbot software he was testing had real feelings.

So it makes sense that Google is trying its best to be deliberate about the public rollout of Bard. Microsoft has taken a different approach. Its splashy BingGPT launch made waves in the press, both for good and bad reasons. The debut strongly suggested that Microsoft, long thought to be lagging behind Google on AI, was actually winning the race.
But it also caused concern about whether generative AI tools are ready for prime time and whether it's responsible for companies like Microsoft to be releasing these tools to the public.

Ultimately, it's one thing for people to worry about AI corrupting Microsoft's search engine. It's another entirely to consider the implications of things going awry with Google Search, which has nearly 10 times the market share of Bing and accounts for over 70 percent of Google's revenue. Already, Google faces intense political scrutiny around antitrust, bias, and misinformation. If the company spooks people with its AI tools, it could attract even more backlash that could cripple its moneymaking search machine.

On the other hand, Google had to release something to show that it's still a leading contender in the arms race among tech giants and startups alike to build AI that reaches human levels of general intelligence. So while Google's release today may be slow, it's a calculated slowness.

Shirin Ghaffary, senior correspondent
Vox Media, 1201 Connecticut Ave. NW, Washington, DC 20036. Copyright © 2023. All rights reserved.