Newsletter Subject

Facebook’s trust issues

From

bloombergbusiness.com

Email Address

noreply@mail.bloombergbusiness.com

Sent On

Tue, May 14, 2019 11:02 AM

Email Preheader Text

Hi all, it's Shelly in Hong Kong. These days it's fashionable to hate on tech companies like Faceb…

Bloomberg | Fully Charged

Hi all, it's Shelly in Hong Kong. These days it's fashionable to hate on tech companies like Facebook, Google and Twitter for slurping up our data while making money off platforms that broadcast shootings, spread fake news, and spur violence. But it wasn't that long ago that these platforms were praised for helping activists connect and bring about political change during the Arab Spring and Occupy Central here in Hong Kong. So by allowing free and unfettered communication on their platforms, do these tech companies bring about good or evil? And is it safer for tech giants or governments to monitor and police what can and can't be said online?

These are tough questions that nations are grappling with in the wake of meddled-in elections and live-streamed terrorist attacks on social media sites that until now have mostly been left alone to regulate themselves. Prompted by the New Zealand shooting that unfolded live on Facebook, Prime Minister Jacinda Ardern heads to Paris Wednesday to ask world leaders and tech giants to sign a voluntary pledge promising to clean up violent and extremist content. While U.S. and U.K. lawmakers weigh more regulation, countries including Singapore, France, Germany, and Australia have already passed laws giving governments wide-ranging powers to police social media companies.

Take the "fake news" law that passed last week in Singapore, which requires online sites to show corrections or take down content the government deems to be false. Criminal sanctions include fines of up to $700,000 and 10 years in prison: a chilling prospect for users who might now be too afraid to post an unfavorable opinion of a lawmaker who can arbitrarily decide whether she thinks that opinion is true or false. The U.K. has also proposed appointing a social media regulator with broad powers to slap criminal charges on companies that don't speedily remove content the office decides is harmful.

China is a prime example of what happens when the government regulates social media companies. There are broad censorship rules, but the real power lies in the self-censorship by tech companies and citizens that keeps so much content off the internet out of fear it might run afoul of Beijing. Yes, there are probably fewer incidents of hate speech, violence and terrorism online in China, but at what cost?

Increased social media regulations are probably inevitable at this point, but there are ways to make them less arbitrary and more transparent.
Among them:

- A narrow focus on laws that protect children, prevent violence, and stop hate speech, as opposed to stopping the spread of content governments don't like
- Clear and precise definitions of what constitutes categories like hate speech, violent content, and fake news, which will limit the scope of lawmakers' actions and keep them narrowly tailored
- Requirements that governments and tech companies make the process public and transparent via some sort of searchable database that explains, in regular and timely reports, what content is being removed and why (see the sketch below)
- A specific, easy-to-navigate and transparent arbitration mechanism for appealing a takedown or other moderation request
- Holding tech companies accountable for enforcing their own stated terms and conditions by requiring them to issue regular reports on how many posts they removed, for what reasons, and what appeals have been lodged, similar to what many already do with government content-removal requests

The era of letting tech companies sing the praises of a free and open internet, while evading responsibility for all the crummy things that happen on their platforms, is over. Self-governance has pretty much failed. Even Facebook founder Mark Zuckerberg admitted that he doesn't want to be the internet's arbitrator. But there's a real danger that increasing content moderation and social media regulation without sufficient controls and transparency could push governments too far into the realm of censorship. There are dangers at both extremes.

And here's what you need to know in global technology news

The Uber blame game is now shifting its focus to Morgan Stanley, as the second-guessing intensifies after the ride-hailing giant's 18% share tumble.

China's largest companies report earnings Wednesday. Here's what to watch for.

SoftBank has now lost something like $16 billion of market value in three days. Its global lock on ride-hailing now looks less enviable than just a week ago.

WhatsApp is urging users to update its messaging service after a report that a software vulnerability allowed attackers to hack people's phones.

Sponsor Content by Darktrace

As organizations embrace cloud services, the attack surface is increasing. Meanwhile, cloud-based threats are fast and unpredictable, often outpacing existing defenses. But cyber AI is changing the game. Thousands of companies use AI to detect and respond to advanced attackers in the cloud, before they do damage. Learn what's missing in cloud security and how cyber AI can help.
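Purely as an illustration of the searchable takedown database suggested in the list above, and not anything proposed by Bloomberg or any regulator, here is a minimal Python sketch of what one record in such a database might contain. Every field name and value is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TakedownRecord:
    """One entry in a hypothetical public takedown-transparency database."""
    record_id: str           # stable identifier for the removal action
    platform: str            # service the content was removed from
    category: str            # e.g. "hate speech", "violent content", "fake news"
    legal_basis: str         # statute or terms-of-service clause invoked
    requested_by: str        # the platform itself or the government agency asking
    removed_on: date         # when the content was taken down
    public_explanation: str  # plain-language reason published in the report
    appeal_status: str = "none"  # e.g. "none", "pending", "upheld", "reversed"

# Purely illustrative example entry.
example = TakedownRecord(
    record_id="2019-05-000123",
    platform="ExamplePlatform",
    category="violent content",
    legal_basis="Community standard 3.2",
    requested_by="platform",
    removed_on=date(2019, 5, 14),
    public_explanation="Live-streamed footage of a violent attack.",
)
print(example)
```

A regular, machine-readable report built from records like this would let outside researchers audit both government requests and the platforms' own moderation decisions.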

Email Content Statistics

Subject Line Length

Data shows that subject lines with 6 to 10 words generate a 21 percent higher open rate.

Number of Words

The more words in the content, the more time the user will need to spend reading. Get straight to the point with catchy short phrases and interesting photos and graphics.

Number of Images

More images or large images might cause the email to load slower. Aim for a balance of words and images.

Time to Read

Longer reading time requires more attention and patience from users. Aim for short phrases and catchy keywords.

Predicted open rate

Spam Score

Spam score is determined by a large number of checks performed on the content of the email. For the best delivery results, it is advised to lower your spam score as much as possible.

Flesch reading score

Flesch reading score measures how complex a text is. The lower the score, the more difficult the text is to read. The Flesch readability score uses the average length of your sentences (measured by the number of words) and the average number of syllables per word in an equation to calculate the reading ease. Text with a very high Flesch reading ease score (about 100) is straightforward and easy to read, with short sentences and no words of more than two syllables. Usually, a reading ease score of 60-70 is considered acceptable/normal for web copy.
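As a rough illustration of the formula described above (not a tool SimilarMail publishes), here is a small Python sketch that approximates the Flesch reading-ease score from average sentence length and average syllables per word; the syllable counter is a simple vowel-group heuristic, so results are approximate.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch reading-ease score for a piece of text.

    Standard formula: 206.835 - 1.015 * (words / sentences)
                              - 84.6  * (syllables / words)
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))

    def syllables(word: str) -> int:
        # Count runs of vowels as syllables, with a minimum of one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syllable_count = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (word_count / sentences)
            - 84.6 * (syllable_count / word_count))

if __name__ == "__main__":
    sample = "Self-governance has pretty much failed. There are dangers at both extremes."
    print(round(flesch_reading_ease(sample), 1))
```

Higher scores indicate shorter sentences and simpler words; a score in the 60-70 range corresponds to the "acceptable for web copy" band mentioned above.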

Technologies

What powers this email? Every email we receive is parsed to determine the sending ESP and any additional email technologies used.

Email Size (not including images)

Font Used
