Plus: Ceasefire chatter, warm winters, and more
February 28, 2024 [View in browser]

Good morning! Things got rather testy online when Google debuted its new AI image generator. Senior reporter Sigal Samuel is here to explain the drama and why it's so interesting, whether or not you care about creating AI images of popes. -Caroline Houck, senior editor of news

[A phone screen showing Gemini Advanced.] Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

Nobody knows what AI images should look like

Just last week, Google was forced to pump the brakes on its AI image generator, called Gemini, after critics complained that it was pushing bias ... against white people.

The controversy started with (you guessed it) a [viral post] on X. According to that post from the user @EndWokeness, when asked for an image of a Founding Father of America, Gemini showed a Black man, a Native American man, an Asian man, and a relatively dark-skinned man. Asked for a portrait of a pope, it showed a Black man and a woman of color. Nazis, too, were reportedly portrayed as [racially diverse].

After complaints from the likes of Elon Musk, who called Gemini's output ["racist"] and Google ["woke"], the company suspended the AI tool's ability to generate pictures of people. "It's clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive," Google Senior Vice President Prabhakar Raghavan [wrote], adding that Gemini does sometimes "overcompensate" in its quest to show diversity.

Raghavan gave a technical explanation for why the tool overcompensates: Google had taught Gemini to avoid falling into some of AI's classic traps, like [stereotypically portraying] all lawyers as men.
But, Raghavan wrote, "our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range."

This might all sound like just the latest iteration of the dreary culture war over "wokeness," and one that, at least this time, can be solved by quickly patching a technical problem. (Google plans to relaunch the tool in a few weeks.) But there's something deeper going on here. The problem with Gemini is not just a technical problem. It's a philosophical problem, one for which the AI world has no clear-cut solution.

[Screenshot of @EndWokeness's post on X with images they say were generated by Gemini]

What does bias mean?

Imagine that you work at Google. Your boss tells you to design an AI image generator. That's a piece of cake for you; you're a brilliant computer scientist! But one day, as you're testing the tool, you realize you've got a conundrum.

You ask the AI to generate an image of a CEO. Lo and behold, it's a man. On the one hand, you live in a world where the vast majority of CEOs are male, so maybe your tool should accurately reflect that, creating images of man after man after man. On the other hand, that may reinforce gender stereotypes that keep women out of the C-suite. And there's nothing in the definition of "CEO" that specifies a gender. So should you instead make a tool that shows a balanced mix, even if it's not a mix that reflects today's reality?

This comes down to how you understand bias. Computer scientists are used to thinking about "bias" in terms of its statistical meaning: A program for making predictions is biased if it's consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.)
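The statistical sense of bias can be captured in a few lines of code. Here's a minimal sketch, with made-up rain-forecast numbers purely for illustration:

```python
# "Bias" in the statistical sense: a predictor is biased if its errors
# consistently lean in one direction. The forecast numbers below are
# invented for illustration.

def statistical_bias(predictions, outcomes):
    """Mean signed error; a value near zero means statistically unbiased."""
    errors = [p - o for p, o in zip(predictions, outcomes)]
    return sum(errors) / len(errors)

# A weather app's predicted rain probabilities vs. what happened (1 = it rained).
predicted = [0.9, 0.8, 0.7, 0.9, 0.8]
actual = [1, 0, 1, 0, 1]

print(statistical_bias(predicted, actual))  # positive: the app overestimates rain
```

A consistently positive (or negative) mean error is what a computer scientist would flag as bias, regardless of whether any social group is involved.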
That's very clear, but it's also very different from the way most people use the word "bias," which is more like "prejudiced against a certain group." The problem is, if you design your image generator to make statistically unbiased predictions about the gender breakdown among CEOs, then it will be biased in the second sense of the word. And if you design it not to have its predictions correlate with gender, it will be biased in the statistical sense. So how should you resolve the trade-off?

"I don't think there can be a clear answer to these questions," Julia Stoyanovich, director of the NYU Center for Responsible AI, told me when I previously [reported] on this topic. "Because this is all based on values."

Embedded within any algorithm is a value judgment about what to prioritize, including when it comes to these competing notions of bias. So companies have to decide whether they want to be accurate in portraying what society currently looks like, or promote a vision of what they think society could or even should look like: a dream world.

[World of AI·magination exhibition at the Artechouse in New York] Fatih Aktas/Anadolu via Getty Images

How can tech companies do a better job navigating this tension?

The first thing we should expect companies to do is get explicit about what an algorithm is optimizing for: Which type of bias will it focus on reducing? Then companies have to figure out how to build that into the algorithm.

Part of that is predicting how people are likely to use an AI tool. They might try to create historical depictions of the world (think: white popes), but they might also try to create depictions of a dream world (female popes, bring it on!).

"In Gemini, they erred towards the 'dream world' approach, understanding that defaulting to the historic biases that the model learned would (minimally) result in massive public pushback," [wrote] Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.
Google might have used certain tricks "under the hood" to push Gemini to produce dream-world images, Mitchell [explained]. For example, it may have been appending diversity terms to users' prompts, turning "a pope" into "a pope who is female" or "a Founding Father" into "a Founding Father who is Black."

But instead of adopting only a dream-world approach, Google could have equipped Gemini to suss out which approach the user actually wants (say, by soliciting feedback about the user's preferences) and then generate that, assuming the user isn't asking for something off-limits.

What counts as off-limits comes down, once again, to values. Every company needs to explicitly define its values and then equip its AI tool to refuse requests that violate them. Otherwise, we end up with things like [Taylor Swift porn].

AI developers have the technical ability to do this. The question is whether they've got the philosophical ability to reckon with the value choices they're making, and the integrity to be transparent about them.

-Sigal Samuel, senior reporter

[Listen] The protest vote against Biden

Michigan's primary Tuesday tested President Biden's viability with Muslim voters amid the war in Gaza. [Listen now]

AROUND THE WORLD

- Can a temporary ceasefire deal be reached before Ramadan begins? US President Joe Biden has suggested some pretty optimistic timelines; Israel and Hamas, not so much. [Guardian]
- Also on Israel and Gaza, it's worth stating: Trump and Biden are not the same. There's a lot to dislike about Biden's Israel policy, Zack Beauchamp argues. But Trump's positions would be worse. [Vox]
- An attempt to take down the new face of the Russian opposition: The widespread, misogynistic online disinfo campaign against Alexei Navalny's widow, [Yulia Navalnaya], uncovered. [Wired]

[Yulia Navalnaya] Didier Lebrun/Photonews via Getty Images

CLIMATE AND ENVIRONMENT

- Whether you love it or hate it: It's hard not to notice the abnormally warm temperatures across much of the US. This January was the hottest January ever measured, and February looks set to follow. Here's why. [Vox]
- The real-life sci-fi epic happening underneath our feet: Ants are so damn fascinating. Did you know there are 20 quadrillion of them on our planet? Dive into their "strange and turbulent global society." [Aeon]

CULTURE

- Someone stop Andrew Tate: The trend of some teenage British boys' turn toward misogyny, explained. [Cosmopolitan]
How scientists are searching for aliens

They're not looking for UFOs or decoding government secrets. They're doing something much simpler. [Listen now]

Today, Explained and the Vox Media Podcast Network are coming to SXSW March 8-10! See Noel King live on the Vox Media Podcast Stage with Charlamagne Tha God and Angela Rye, plus other influential podcasting voices like Brené Brown, Esther Perel, Kara Swisher, Preet Bharara, and Trevor Noah. Learn more at [voxmedia.com/live].

Also: Are you enjoying the Today, Explained newsletter? Forward it to a friend; they can [sign up for it right here].

Today's edition was produced and edited by Caroline Houck. We'll see you tomorrow!
[Facebook] [Twitter] [YouTube] [Instagram] [TikTok] [WhatsApp]

This email was sent to {EMAIL}. Manage your [email preferences] or [unsubscribe]. If you value Vox's unique explanatory journalism, support our work with a one-time or recurring [contribution]. View our [Privacy Notice] and our [Terms of Service]. Vox Media, 1201 Connecticut Ave. NW, Floor 12, Washington, DC 20036.
Copyright © 2024. All rights reserved.