Bloomberg
Fully Charged
Hi all, it's Shelly in Hong Kong. These days it's fashionable to hate on tech companies like Facebook, Google and Twitter for slurping up our data while making money off platforms that broadcast shootings, spread fake news, and spur violence. But it wasn't that long ago that these platforms were praised for helping activists connect and bring about political change during the Arab Spring and Occupy Central here in Hong Kong.
So by allowing free and unfettered communication on their platforms, do these tech companies bring about good or evil? And is it safer for tech giants or governments to monitor and police what can and can't be said online?
These are tough questions that nations are grappling with in the wake of meddled-in elections and live-streamed terrorist attacks on social media sites that until now have mostly been left alone to regulate themselves. Prompted by the New Zealand shooting that unfolded live on Facebook, Prime Minister Jacinda Ardern heads to Paris Wednesday to ask world leaders and tech giants to sign a voluntary pledge promising to clean up violent and extremist content. While U.S. and U.K. lawmakers weigh more regulation, countries including Singapore, France, Germany, and Australia have already passed laws affording wide-ranging powers for governments to police social media companies.
Take the "fake news" law that passed last week in Singapore, which requires online sites to show corrections or take down content the government deems to be false. Criminal sanctions include fines of up to $700,000 and 10 years in prison, a chilling prospect for users who might now be too afraid to post an unfavorable opinion of a lawmaker who can arbitrarily decide whether that opinion is true or false. The U.K. has also proposed appointing a social media regulator with broad powers to slap criminal charges on companies that don't speedily remove content the office decides is harmful.
China is a prime example of what happens when the government regulates social media companies. There are broad censorship rules, but the real power lies in self-censorship by tech companies and citizens, which keeps vast amounts of content off the internet for fear it might run afoul of Beijing. Yes, there are probably fewer incidents of hate speech, violence and terrorism online in China, but at what cost?
Increased social media regulations are probably inevitable at this point but there are ways to make them less arbitrary and more transparent. Among them:
- A narrow focus on laws that protect children, prevent violence, and stop hate speech, rather than on stopping the spread of content governments don't like
- Clear and precise definitions of what constitutes categories like hate speech, violent content, and fake news, to limit the scope of lawmakers' actions and tailor them more narrowly
- Requirements that governments and tech companies make the process public and transparent via some sort of searchable database that explains, in regular and timely reports, what content is being removed and why
- A specific, easy-to-navigate, and transparent arbitration mechanism for appealing a takedown or other moderation request
- Holding tech companies accountable for enforcing their own stated terms and conditions by requiring them to issue regular reports on how many posts they removed, for what reasons, and what appeals have been lodged, similar to what many already do with government content-removal requests
The era of letting tech companies sing the praises of a free and open internet, while evading responsibility for all the crummy things that happen on their platforms, is over. Self-governance has pretty much failed. Even Facebook founder Mark Zuckerberg has admitted that he doesn't want to be the internet's arbitrator. But there's a real danger that increasing content moderation and social media regulation without sufficient controls and transparency could push governments too far into the realm of censorship. There are dangers at both extremes.
And here’s what you need to know in global technology news
The Uber blame game is now shifting its focus to Morgan Stanley, as the second-guessing intensifies after the ride-hailing giant's 18% share tumble.

China's largest companies report earnings Wednesday. Here's what to watch for.

SoftBank has now lost something like $16 billion of market value in three days. Its global lock on ride-hailing now looks less enviable than it did just a week ago.

WhatsApp is urging users to update its messaging service after a report that a software vulnerability allowed attackers to hack people's phones.
You received this message because you are subscribed to the Bloomberg Technology newsletter Fully Charged.
Bloomberg L.P. 731 Lexington, New York, NY, 10022