Zoom returns to the office and to its problematic privacy ways.
Is Zoom using your meetings to train its AI? The week isnât even half over and itâs already been a bad one for Zoom, the videoconferencing service that boomed during the pandemic. Itâs facing yet another privacy scandal, this time over its use of customer data to train artificial intelligence models. And its recent demand that its employees return to the office is a bad sign for the completely remote work life that Zoomâs eponymous product tried to help make possible. Yes, the company that became [synonymous with videoconferencing]( at a time when seemingly everyone was remote is now saying that maybe not everything can be done apart. Itâs not just Zoom thatâs doing this â there is a [larger trend]( of companies [calling their employees back]( to the office after months or years of working from home â but it seems particularly ironic in this case. Now, Zoomâs not making everyone come back all the time. Its [recent memo]( to employees says that everyone who lives within 50 miles of a Zoom office will have to work out of it at least twice a week. This âstructured hybrid approach,â the company said in a statement to Vox, âis most effective for Zoom.â "Weâll continue to leverage the entire Zoom platform to keep our employees and dispersed teams connected and working efficiently,â the company added. Itâs not the best look when a company that relies on people doing as many things remotely as possible wants its employees to do some things together. If even Zoom, the company that helped Make Remote Work Possible, doesnât want its employees to work remotely all the time, it might be time to [Zoom wave]( away your dreams of working from home every day. Lots of people are still using Zoom, of course. But the company has fallen back down to Earth as more people went outside and needed Zoom less. 
Its [stock price]( is back to roughly where it was before the pandemic; it expressed concern in its most recent [annual report]( that it will not be able to convert enough of its large free user base to paid subscribers to remain profitable. Like [many tech companies](, Zoom had a [round of layoffs]( in February, cutting 1,300 jobs, or 15 percent of its workforce. It has more competition from Google Meet, Microsoft Teams, and even Slack, all of which would surely love to lure Zoom's considerable user base away for good. But it remains profitable. Just not as profitable as it was, and for understandable and predictable reasons.

Even so, you'd think it wouldn't want to risk upsetting a user base that now has plenty of other options by sneaking a line into its terms of service that taps into a widespread fear: that generative AI will replace us, very much helped along by the data we've [unknowingly provided](. And yet, that's exactly what Zoom did.

The company released an updated and greatly expanded TOS [at the end of March](. Companies do this all the time, and almost no one takes the time to read them. But Alex Ivanovs, of Stack Diary, did take the time to read it. On Sunday, he [wrote about]( how Zoom had used the TOS update to give itself what appeared to be some pretty far-reaching rights over customers' data, including the right to train its machine learning and artificial intelligence services on that data. That, Ivanovs believed, could include training AI off of Zoom meetings, and there was no way to opt out of it.

Here's what the TOS says, emphasis ours:

"You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software.
You consent to Zoom's access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom's other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement."

You can see why Ivanovs thought that Zoom wanted to use customer data and content to train its AI models; that's exactly what it seems to be telling us. His article was picked up and [tweeted out](, which caused an understandable panic and backlash from people who feared that Zoom would be training its generative AI offerings on [private company meetings](, [telehealth visits](, [classes](, and [voice-over]( or [podcast]( recordings. The idea of Zoom watching and ingesting therapy sessions to create AI-generated images is a privacy violation in more ways than one.

That's probably not what Zoom is actually doing, however. The company responded with a small update to its TOS, adding: "Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent." It also put up a [blog post]( saying it was just trying to be more transparent with its users that it collects "service generated data," which it uses to improve its products. It gave a few examples of this that seem both innocuous and standard. It also promoted its [new generative AI features](, which it does train on customer content, but only after obtaining consent from the meeting's administrator.
But the fact remains that Zoom's initial TOS wording left it open to be interpreted in the creepiest way possible, and, after a [series of privacy and security missteps]( over the years, there's little reason to give Zoom the benefit of the doubt.

Quick summary: Zoom was [dinged by the FTC]( in 2020 for claiming that it offered end-to-end encryption, which it didn't, and for secretly installing software that bypassed Safari's security measures and made it hard for users to delete Zoom from their computers. It's under a consent order for the next 20 years for that. Zoom also paid out [$85 million]( to settle a class action lawsuit over [Zoombombing](, in which trolls join unsecured meetings and usually show sexually explicit, racist, or even illegal imagery to an unsuspecting audience. It was caught sending user data to [Meta]( and [LinkedIn](. Oh, and it [played fast and loose]( with its user numbers, too.

There's also still a question, even after Zoom tried to clear things up, of what counts as Customer Content and what counts as service generated data, which it's given itself permission to use. "By its terms, it's not immediately clear to me what is included or excluded," said Chris Hart, co-chair of the privacy and data security practice at law firm Foley Hoag. "For example, if a video call is not included in Customer Content that will be used for AI training, is the derivative transcript still fair game? The [whiteboard]( used during the meeting? The [polls](? Documents uploaded and shared with a team?" (Zoom did not respond to a request for comment on those questions.)

Ivanovs, the author of the blog post that brought all of this to light, wasn't satisfied with Zoom's explanation either, noting in an update to his post that "those adjustments ... [don't] do much in terms of privacy."

So, yeah, not a great few days for Zoom, although it remains to be seen just how damaging this is to the company in the long run.
The fact is, Zoom isn't the only company whose use of AI, and how it trains its models, stokes real fears. OpenAI's ChatGPT, which is trying to insert itself into as many business offerings as possible, was trained off of customer data obtained through its API until, OpenAI said, it realized that customers [really don't like that](. There are still concerns over what it does with what people put directly into ChatGPT, and many companies have [warned employees]( not to share sensitive data with the service because of this. And Google recently had its own brush with social media backlash over how it collects training data; you might have read about that [in this very newsletter]( just a few weeks ago.

"I do think the reaction to Zoom's terms changes reflects the concerns that people are generally having over the potential dangers to individual privacy given the increasing ubiquity of AI," Hart said. "But the changes to the terms themselves signal the increasing and likely universal business need to organically grow AI technologies." He added: "To do that, though, you need a lot of data."

- Sara Morrison, senior reporter
This email was sent to {EMAIL}. Manage your [email preferences](, or [unsubscribe](param=tech) to stop receiving emails from Vox Media. View our [Privacy Notice]( and our [Terms of Service](. Vox Media, 1201 Connecticut Ave. NW, Washington, DC 20036. Copyright © 2023. All rights reserved.