Newsletter Subject

Data Science Insider: October 29th, 2021

From

superdatascience.com

Email Address

support@superdatascience.com

Sent On

Fri, Oct 29, 2021 07:37 PM

Email Preheader Text

In This Week’s SuperDataScience Newsletter: AI Created to Give Ethical Advice Is Problematic. Regu

In This Week’s SuperDataScience Newsletter: AI Created to Give Ethical Advice Is Problematic. Regulators Identify 10 Guiding Principles For AI/ML and Medical Devices. Automation and Global Capitalism. GitHub’s AI Copilot is Helping to Write 30% of New Code on the Platform. AI Spots Underage Social Media Users at a Glance.

Cheers,

- The SuperDataScience Team

P.S. Have friends and colleagues who could benefit from these weekly updates? Send them to [this link]( to subscribe to the Data Science Insider.

---------------------------------------------------------------

[AI Created to Give Ethical Advice Is Problematic](

In brief: We’ve all been in situations where we had to make tough ethical decisions. Now, imagine a system to which these difficult choices are outsourced. It could result in quicker, more efficient solutions, with responsibility also transferred to the AI-powered system making the decision. That was the idea behind Ask Delphi, an ML model from the Seattle-based Allen Institute for AI. But the system has reportedly turned out to be problematic. The Allen Institute describes Ask Delphi as a “computational model for descriptive ethics,” meaning it can provide “moral judgments” to people in a variety of everyday situations. For example, if you pose a question such as “is it ok to cheat in business?”, Ask Delphi will analyse the input and show what it considers proper “ethical guidance.” However, ML systems are notorious for demonstrating unintended bias, and Delphi’s answers are no different, often resulting in dubious advice.

Why this is important: Unfortunately, the track record of AI systems that have made it to the public testing phase is riddled with well-known failures. For example, Microsoft’s Tay AI chatbot, released on Twitter in 2016, was quickly pulled after it started posting inflammatory, racist, and sexually charged content.
Just over a year ago, an AI algorithm called PULSE, designed to generate clear images from pixelated pictures, produced images of a white person from blurry images of Barack Obama. It appears that Ask Delphi is yet to correct these issues.

[Click here to find out!](

[10 Guiding Principles For AI/ML and Medical Devices](

In brief: The U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) have jointly identified 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP). These principles are designed to promote safe, effective, and high-quality medical devices that use AI and ML, and are intended to lay the foundation for GMLP and guide future growth in this rapidly progressing field. It is hoped that the principles will lead to the adoption of good practices that have been proven in other sectors, the tailoring of practices from other sectors so that they apply to medical technology and the healthcare sector, and the creation of new practices specific to medical technology and the healthcare sector. The agencies also identify areas where the International Medical Device Regulators Forum (IMDRF), international standards organisations, and other collaborative bodies could work together.

Why this is important: The principles cover key elements of GMLP, for example, having an in-depth understanding of a model’s intended integration into clinical workflow, along with the desired benefits and associated patient risks, as well as selecting and maintaining training and test datasets so that they are appropriately independent of each other.

[Click here to read on!](

[Automation and Global Capitalism](

In brief: We often cover news stories which claim that the future of work will be increasingly automated, with algorithms processing massive amounts of information at startling speed, leading us to a new world of effortless labour.
However, this opinion piece in the Guardian by Phil Jones, author of Work Without the Worker: Labour in the Age of Platform Capitalism, paints a rather different picture, drawing attention to what he claims are millions of workers, often in the Global South, manually processing data for a pittance. Jones claims that the boom in online crowdworking platforms (such as Amazon's Mechanical Turk) in recent years has made them an increasingly important source of work for millions of people, and that it is these badly paid tasks, not algorithms, that make our digital lives possible. He argues that these data-processing workers are an increasingly powerful, yet largely ignored, part of the new digital economy.

Why this is important: The people behind the technology we see frequently heralded are rarely discussed, but in this article Phil Jones explores what this type of labour looks like and what it says about the state of global capitalism, making it a fascinating exploration of a neglected area.

[Click here to discover more!](

[GitHub’s AI Copilot is Helping to Write 30% of New Code](

In brief: Code hosting service GitHub revealed its AI assistant for programmers, Copilot, back in June, and now the Microsoft-owned company claims that up to 30% of new code on its platform is written with its help. It also states that it has retained 50% of coders who used it for the first time. Copilot is an AI tool that acts like predictive text for coders: a programming assistant in GitHub’s Visual Studio Code editor that gives users suggestions for lines of code or entire functions inside the editor. Oege de Moor, VP of GitHub Next, the team responsible for Copilot, claims in this Axios article that feedback for the tool has been largely positive: “We hear a lot from our users that their coding practices have changed using Copilot,” he said.
“Overall, they’re able to become much more productive in their coding.”

Why this is important: The tool is powered by OpenAI Codex, a new AI system trained on a large dataset of public source code. OpenAI was founded in 2015 with the aim of ensuring that AI “benefits all of humanity.”

[Click here to see the full picture!](

[AI Spots Underage Social Media Users at a Glance](

In brief: Technology that can detect whether a user is underage could soon be in use by social media platforms, after a British startup unveiled a tool that works through looks alone. Yoti, a London-based company, said that it had become the first to provide age-estimation technology that uses a camera and AI to detect whether a child is under 13, the minimum age for apps such as Instagram and TikTok. It claims an average error of 1.5 years for users aged between 13 and 24. Businesses using the software (which previously only worked on adults) can set an age threshold for the AI to compare each user against. The system is already being employed in supermarkets in Estonia for age verification at automated checkouts, and by the German version of the adult entertainment platform Fan Centro, and has already performed more than 550 million age checks.

Why this is important: Yoti’s image technology may be increasingly appealing as Big Tech and internet services face growing scrutiny over how children use their products. However, privacy advocates say that automatically analysing people’s faces normalises surveillance, is largely unregulated, and has the potential to show bias.

[Click here to find out more!](

[Super Data Science podcast](

In this week's [Super Data Science Podcast](, Sadie St. Lawrence joins us to discuss her prolific work as an educator in data science, as well as her work to create more diversity in data science careers.

---------------------------------------------------------------

What is the Data Science Insider?
This email is a briefing of the week's most disruptive, interesting, and useful resources, curated by the SuperDataScience team for data scientists who want to take their careers to the next level.

Want to upgrade your data science skills? Check out the [SuperDataScience platform]( and sign up for membership today!

Know someone who would benefit from getting The Data Science Insider? Send them [this link to sign up.](

If you wish to stop receiving our emails or change your subscription options, please [Manage Your Subscription](

SuperDataScience Pty Ltd (ABN 91 617 928 131), 15 Macleay Crescent, Pacific Paradise, QLD 4564, Australia


