Newsletter Subject

Zoom is having a week

From

vox.com

Email Address

newsletter@vox.com

Sent On

Wed, Aug 9, 2023 10:55 AM

Email Preheader Text

Zoom returns to the office and to its problematic privacy ways. Is Zoom using your meetings to train

Zoom returns to the office and to its problematic privacy ways. Is Zoom using your meetings to train its AI?

The week isn’t even half over and it’s already been a bad one for Zoom, the videoconferencing service that boomed during the pandemic. It’s facing yet another privacy scandal, this time over its use of customer data to train artificial intelligence models. And its recent demand that its employees return to the office is a bad sign for the completely remote work life that Zoom’s eponymous product tried to help make possible.

Yes, the company that became synonymous with videoconferencing at a time when seemingly everyone was remote is now saying that maybe not everything can be done apart. It’s not just Zoom that’s doing this — there is a larger trend of companies calling their employees back to the office after months or years of working from home — but it seems particularly ironic in this case.

Now, Zoom’s not making everyone come back all the time. Its recent memo to employees says that everyone who lives within 50 miles of a Zoom office will have to work out of it at least twice a week. This “structured hybrid approach,” the company said in a statement to Vox, “is most effective for Zoom.” “We’ll continue to leverage the entire Zoom platform to keep our employees and dispersed teams connected and working efficiently,” the company added.

It’s not the best look when a company that relies on people doing as many things remotely as possible wants its employees to do some things together. If even Zoom, the company that helped Make Remote Work Possible, doesn’t want its employees to work remotely all the time, it might be time to Zoom wave away your dreams of working from home every day.

Lots of people are still using Zoom, of course. But the company has fallen back down to Earth as more people went outside and needed Zoom less. Its stock price is back to roughly where it was before the pandemic; it expressed concern in its most recent annual report that it will not be able to convert enough of its large free user base to paid subscribers to remain profitable. Like many tech companies, Zoom had a round of layoffs, cutting 1,300 jobs — 15 percent of its workforce — in February. It has more competition from Google Meet and Microsoft Teams and even Slack, which would all surely love to lure Zoom’s considerable user base away from it for good. But it remains profitable. Just not as profitable as it was, and for understandable and predictable reasons.

Even so, you’d think it wouldn’t want to risk upsetting a user base that now has plenty of other options by sneaking a line into its terms of service that taps into a widespread fear: that generative AI will replace us, very much helped along by the data we’ve unknowingly provided. And yet, that’s exactly what Zoom did.

The company released an updated and greatly expanded TOS at the end of March. Companies do this all the time, and almost no one takes the time to read them. But Alex Ivanovs, of Stack Diary, did take the time to read it. On Sunday, he wrote about how Zoom had used the TOS update to give itself what appeared to be some pretty far-reaching rights over customers’ data, and to train its machine learning and artificial intelligence services on that data. That, Ivanovs believed, could include training AI off of Zoom meetings — and there was no way to opt out of it.
Here’s what the TOS says: “You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement.”

You can see why Ivanovs thought that Zoom wanted to use customer data and content to train its AI models, as that’s exactly what it seems to be telling us. His article was picked up and tweeted out, which caused an understandable panic and backlash from people who feared that Zoom would be training its generative AI offerings on private company meetings, telehealth visits, classes, and voice-over or podcast recordings. The idea of Zoom watching and ingesting therapy sessions to create AI-generated images is a privacy violation in more ways than one.

That’s probably, however, not what Zoom is actually doing. The company responded with a small update to its TOS, adding: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.” It also put up a blog post that said it was just trying to be more transparent with its users that it collects “service generated data,” which it uses to improve its products. It gave a few examples of this that seem both innocuous and standard. It also promoted its new generative AI features, which it does train on customer content, but only after obtaining consent from the meeting’s administrator.

But the fact remains that Zoom’s initial TOS wording left it open to be interpreted in the creepiest way possible, and, after a series of privacy and security missteps over the years, there’s little reason to give Zoom the benefit of the doubt. Quick summary: Zoom was dinged by the FTC in 2020 for claiming that it offered end-to-end encryption, which it didn’t, and for secretly installing software that bypassed Safari’s security measures and made it hard for users to delete Zoom from their computers. It’s under a consent order for the next 20 years for that. Zoom also paid out $85 million to settle a class action lawsuit over Zoombombing, where trolls join unsecured meetings and usually show sexually explicit, racist, or even illegal imagery to an unsuspecting audience. It was caught sending user data to Meta and LinkedIn. Oh, and it played fast and loose with its user numbers, too.

There’s also still a question, even after Zoom tried to clear things up, of what counts as Customer Content and what counts as service generated data, which it’s given itself permission to use. “By its terms, it’s not immediately clear to me what is included or excluded,” Chris Hart, co-chair of the privacy and data security practice at law firm Foley Hoag, said. “For example, if a video call is not included in Customer Content that will be used for AI training, is the derivative transcript still fair game? The whiteboard used during the meeting? The polls? Documents uploaded and shared with a team?” (Zoom did not respond to a request for comment on those questions.)

Ivanovs, the author of the blog post that brought all of this to light, wasn’t satisfied with Zoom’s explanation either, noting in an update to his post that “those adjustments ... [don’t] do much in terms of privacy.”

So, yeah, not a great few days for Zoom, although it remains to be seen just how damaging this is to the company in the long run. The fact is, Zoom isn’t the only company that people have real fears about when it comes to its use of AI and how it trains its models. OpenAI’s ChatGPT, which is trying to insert itself into as many business offerings as possible, was trained off of customer data obtained through its API until, OpenAI said, it realized that customers really don’t like that. There are still concerns over what it does with what people put directly into ChatGPT, and many companies have warned employees not to share sensitive data with the service because of this. And Google recently had its own brush with social media backlash over how it collects training data; you might have read about that in this very newsletter just a few weeks ago.

“I do think the reaction to Zoom’s terms changes reflects the concerns that people are generally having over the potential dangers to individual privacy given the increasing ubiquity of AI,” Hart said. “But the changes to the terms themselves signal the increasing and likely universal business need to organically grow AI technologies.” He added: “To do that, though, you need a lot of data.”

—Sara Morrison, senior reporter

Why a “room-temperature superconductor” would be a huge deal
The superconductor frenzy, explained.

ChatGPT could make bioterrorism horrifyingly easy
Biological risks from artificial intelligence may be substantial and need to be monitored.

The future of cities, according to the experts
Cities aren’t going anywhere, but they do need to change.

The creator of Black Mirror is okay with tech. People, on the other hand ...
A chat with Charlie Brooker about AI, creativity, and why tech can be like growing an extra limb.

Why Meta’s move to make its new AI open source is more dangerous than you think
If AI really is risky, then opening it up could be a major mistake.

Support our work
Vox Technology is free for all, thanks in part to financial support from our readers. Will you join them by making a gift today?

Listen to This
Black Mirror’s creator says the problem isn’t tech, it’s us
Black Mirror isn’t just a hit TV show: It’s a window into the not-too-distant, dystopian tech future. Creator Charlie Brooker tells Peter Kafka that, despite what you might think, he doesn’t hate tech; his problem is with the humans who use it.
Listen to Apple Podcasts

This is cool
The $0 power bill that costs $93K

Vox Media, 1201 Connecticut Ave. NW, Washington, DC 20036. Copyright © 2023. All rights reserved.
