Newsletter Subject

The limits of AI legal protections

From

bloombergbusiness.com

Email Address

noreply@mail.bloombergbusiness.com

Sent On

Mon, Nov 13, 2023 12:04 PM

Email Preheader Text

Gen AI companies pledge to indemnify customers. Hi folks, it’s Brad. AI companies are pledging

Hi folks, it’s Brad. AI companies are pledging to defend their customers against intellectual property lawsuits. Those indemnities are narrower than the announcements suggest. But first, three things you need to know today:

• Meta is closing in on a deal to sell mixed-reality headsets in China
• A German startup is racing Google to develop a universal translator
• Apple is taking extra care with an “ambitious” iOS 18 update

The fine print

Last Monday, twelve minutes into his 45-minute keynote at OpenAI’s first developer conference, CEO Sam Altman unveiled a set of legal protections the company is calling “Copyright Shield.” The new policy “means that we will step in and defend our customers and pay the costs incurred if you face legal claims around copyright infringement,” he said.

Talking with journalists later that morning, Altman was asked why OpenAI had the confidence to legally protect customers, particularly with copyright lawsuits from writers, music labels and comedians raining down on Silicon Valley as tech companies use content from the web to train chatbots and image-generation services. “We’re very confident in our approach, but we want to share that confidence with developers,” Altman said. A reporter followed up: “So basically, you’ll pay for any copyright lawsuit?” Altman’s succinct response: “Yeah.”

Copyright Shield got less attention than OpenAI’s other major news last week, such as the option to customize its chatbot into tailored GPTs and the potential of its new features to undermine other AI startups. But with so much anxiety and confusion about the legal risks of using generative AI, it’s worth examining further.

OpenAI was actually late to the indemnification game. In June, Adobe announced it would protect customers against intellectual property lawsuits stemming from the use of its AI image generator, Firefly. Microsoft followed in September with its Copilot Copyright Commitment, pledging to pay settlements for customers who get sued for using or distributing material churned out by AI in software like Windows, Word, PowerPoint and its code generator, GitHub Copilot. Last month, Google joined the party with a flexing blog post unveiling legal protections for users of its AI services: “To put it plainly for you, our customers: if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

The predictable truth, of course, is that if you read the fine print, the protections offered are narrower than what’s suggested by the PR, and by Altman’s curt answer at the press conference. For starters, the policies apply only to commercial customers that are paying to use services like ChatGPT Enterprise and Firefly for Enterprise. These premium options can include an extra set of guardrails that prevent them from inadvertently using copyrighted material in the first place. On the other hand, if you’re using a free service like DALL-E and take artwork that includes images of a Mickey Mouse lookalike, slap it on a billboard and get sued by Disney, the protections don’t apply. Ditto for a ChatGPT user who gets the chatbot to create a nifty new trademark that happens to include the phrase “Just Do It.” OpenAI lawyers are not going to come to your rescue.
“I can’t just go in and say, ‘Give me a Banksy,’ have it generate something remarkable like a Banksy piece and then sell it,” says Brenda Leong, a partner at Luminos Law, a firm specializing in AI issues. The indemnities apply “to particular enterprise models and customers of specific versions these companies are selling. They have a lot of controls in place on those systems to prevent any protected information that might go in from coming out.”

It’s also worth noting that the current crop of lawsuits by writers and other creators is aimed at the large quantities of training data going into popular AI models, like copyrighted books, open-source code and web images. The indemnities rolled out by Adobe, Google, Microsoft and OpenAI are designed to protect against IP lawsuits involving the material coming out of AI models. To my knowledge, no one has yet sued a customer or a user of an AI service for an AI-induced copyright violation.

All that said, the new policies and the accompanying fanfare do serve a function: soothing nervous companies into embracing the new crop of enterprise-grade generative AI tools. The jumpy lawyers in their in-house legal departments no doubt see headlines about copyright lawsuits and worry that copyrighted code or other intellectual property might sneak into a product, creating massive headaches down the line. The same fears bedeviled the open-source movement in its early days, before companies like Red Hat rolled out protections similar to the ones AI firms are talking about today.

Leong says she’s already seen an impact from the indemnifications. One client, a consultant, was using Adobe’s Firefly to stimulate new ideas but had a strict prohibition against including the AI images in any work going to a customer. After Adobe made its policy announcement over the summer, however, the client dropped the ban. “It changed their willingness to use AI tools,” Leong says. “They feel like they have an added level of protection.”

Brad Stone (bstone12@bloomberg.net)

The big story

LockBit, one of the most prolific ransomware gangs of all time, has shaken the financial world with its hack of Industrial & Commercial Bank of China. The group is said to be run “like a business,” but little is known about how many people are involved or where they are based.

One to watch

Andrew Ng, co-founder of Coursera, founder of Google Brain and CEO of Landing AI, joins Caroline Hyde and Ed Ludlow to discuss his take on AI regulation and on making generative AI accessible to everyone.

Get fully charged

World of Warcraft developers plan to release expansions to the game more quickly.

One of China’s emerging AI startups stockpiled 18 months of Nvidia chips before the US imposed tighter restrictions.

Alphabet CEO Sundar Pichai is set to testify in a Google Play trial on Tuesday.

More from Bloomberg

Get Bloomberg Tech weeklies in your inbox:
• Cyber Bulletin, for coverage of the shadow world of hackers and cyber-espionage
• Game On, for reporting on the video game business
• Power On, for Apple scoops, consumer tech news and more
• Screentime, for a front-row seat to the collision of Hollywood and Silicon Valley
• Soundbite, for reporting on podcasting, the music industry and audio trends
• Q&AI, for answers to all your questions about AI

Like getting this newsletter? Subscribe to Bloomberg.com for unlimited access to trusted, data-driven journalism and subscriber-only insights.

