In this week’s Super Data Science newsletter:

- EU Outlines Wide-Ranging AI Regulation
- Adversarial ML and Data Poisoning
- Senate Hears Complaints Over Apple and Google’s Use of Data
- An AI Ethicist Speaks
- AI Unlocks Ancient Dead Sea Scrolls Mystery

Cheers,
- The SuperDataScience Team

P.S. Have friends and colleagues who could benefit from these weekly updates? Send them to [this link]( to subscribe to the Data Science Insider.

---------------------------------------------------------------

[EU Outlines Wide-Ranging AI Regulation](

In brief: The EU proposals for greater AI regulation that we discussed in last week’s SuperDataScience newsletter have been announced, and the details are now available to examine. The regulations cover a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrive at a time when countries around the world are struggling with the ethical ramifications of AI. As with the EU’s data privacy law, the GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6% of their global revenues. The European Commission said its rules would ban "AI systems considered a clear threat to the safety, livelihoods and rights of people". It is also proposing far stricter rules on the use of biometrics, such as facial recognition, whose use by law enforcement would be limited.

Why this is important: The proposals have been eagerly awaited, and critics have raised concerns about various aspects of the legislation. Some worry that it could stifle innovation, whilst others have focused on the exceptions to the ban on facial recognition software, stating that they are so numerous and arbitrary that the ban risks becoming useless.

[Click here to find out!](

[Adversarial ML and Data Poisoning](

In brief: This VentureBeat article looks at how AI developers have approached the challenge of adversarial attacks. Adversarial ML is a technique that attempts to fool models by supplying deceptive input, most commonly used to deliberately cause a malfunction in an ML model. The threat of these attacks has become a source of growing interest and concern for the ML community.
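As a toy illustration of data poisoning (our own sketch, not from the article; the classifier and data here are entirely hypothetical), the snippet below trains a minimal nearest-centroid classifier and shows how a handful of mislabelled training points can flip a prediction:

```python
import numpy as np

def nearest_centroid(train_pts, train_labels, query):
    """Classify `query` by the label of the nearest class centroid."""
    labels = sorted(set(train_labels))
    centroids = {
        lbl: np.mean([p for p, l in zip(train_pts, train_labels) if l == lbl], axis=0)
        for lbl in labels
    }
    return min(labels, key=lambda lbl: np.linalg.norm(np.asarray(query) - centroids[lbl]))

# Clean training data: class "A" clusters near (0, 0), class "B" near (4, 4).
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (4, 4), (4, 5), (5, 4), (5, 5)]
labels = ["A"] * 4 + ["B"] * 4

clean_pred = nearest_centroid(pts, labels, (3, 3))  # correctly "B"

# Poisoning: the attacker injects a few points in B's region mislabelled as "A",
# dragging A's centroid toward the target query.
poisoned_pts = pts + [(3, 3)] * 4
poisoned_labels = labels + ["A"] * 4

poisoned_pred = nearest_centroid(poisoned_pts, poisoned_labels, (3, 3))  # flips to "A"

print(clean_pred, poisoned_pred)
```

Real attacks against deep models are far subtler than this, but the mechanism is the same: corrupt training points shift the decision boundary without touching the model at inference time.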
But despite the growing body of research on adversarial ML, the numbers show that there has been little progress in tackling adversarial attacks in real-world applications. Computer vision has become one of the main research areas for those seeking to prevent attacks, with randomized smoothing the most proven method of protection, owing to its applicability to DL models. However, recent research has revealed data poisoning (where an attacker inserts corrupt data into the training dataset to compromise a target ML model) as a previously unrecognized threat to robust ML models.

Why this is important: The identification of data poisoning as a threat to the prevention of adversarial attacks highlights the importance of training-data quality in achieving high certified adversarial robustness.

[Click here to read on!](

[Senate Hears Complaints Over Apple and Google’s Use of Data](

In brief: Apple and Google “hold data hostage” from small apps and force competitors to pay high commissions, stifling their ability to compete, a number of companies said in a US Senate hearing. The hearing before the Senate antitrust committee offered a rare opportunity for smaller competitors – including Spotify, Tile, and Match – to air their grievances against the tech behemoths before lawmakers. Representatives for the companies spoke about their experiences within Google and Apple’s app stores, where they claim to be subjected to high fees and copycat behavior. Tile’s general counsel, Kirsten Daru, testified that Apple’s FindMy program is installed by default on Apple phones and cannot be deleted. “Apple has once again exploited its market power and dominance to condition our customers’ access to data on effectively breaking our user experience and directing our users to FindMy,” she said.
Why this is important: The hearing was the latest example of the growing scrutiny of Big Tech in the US, and of the increasing agreement among Democrats, Republicans, and smaller companies that the world’s biggest tech companies have become too powerful. It also raises questions about how these companies use data.

[Click here to discover more!](

[An AI Ethicist Speaks](

In brief: This Observer interview with AI ethicist Kate Darling raises interesting questions about the future of human-robot interaction. Darling is a Research Specialist at the MIT Media Lab and the author of The New Breed, in which she argues that humans would be better prepared for the future if we started thinking about robots and AI in a way more akin to how we see animals. Her main research interest is how technology intersects with society; she looks at the near-term effects of robotic technology, with a particular interest in legal, social, and ethical issues. In this interview, she argues that by thinking of robots in the same way that we do humans, and of AI in the same way as human intelligence, we are in fact limiting their possibilities. Instead, she argues that we should see robots as our partners, rather than mere creations.

Why this is important: As data scientists, we are always thinking about the next project we can create, but Darling’s point of view offers an interesting counterpoint and raises ethical questions that we should consider.

[Click here to see the full picture!](

[AI Unlocks Ancient Dead Sea Scrolls Mystery](

In brief: AI has helped to solve a long-standing mystery concerning the Dead Sea Scrolls. The technology confirms that one of the ancient manuscripts – the Great Isaiah Scroll – was penned by two scribes who wrote with very similar handwriting. The Dead Sea Scrolls are a set of ancient Hebrew manuscripts comprising Biblical and Jewish texts. The Great Isaiah Scroll is a copy of the Book of Isaiah, which appears in both the Hebrew Bible and the Old Testament.
Scholars had previously been unable to determine whether the Great Isaiah Scroll was the work of one scribe or several, because the handwriting is very similar throughout. By using AI to analyse digital images of the scroll – looking closely at variations in the shape and style of the letters that cannot easily be spotted by the human eye – researchers found that the scroll is divided into two halves, each written by a different scribe.

Why this is important: Each scribe’s ability to mimic the other was so good that, until now, modern scholars had not been able to distinguish between them. AI has now solved a mystery of interest to a great many parties and unlocked an ancient secret.

[Click here to find out more!](

[SuperDataScience podcast](

In this week's [SuperDataScience Podcast](, Matt Dancho joins us to discuss his work on Modeltime, a series of packages that helps practitioners tackle time series analysis.

---------------------------------------------------------------

What is the Data Science Insider? This email is a briefing of the week's most disruptive, interesting, and useful resources, curated by the SuperDataScience team for Data Scientists who want to take their careers to the next level.

Want more conversations like this? Are you a data professional or an executive trying to implement AI technologies in your organization? We’re sure you’re always exploring upskilling opportunities for yourself or your team. Please share your experience with corporate and self-education [right here](. All it takes is 10 minutes of your time to help us create unique programs that make you and your business grow. Each participant will receive a 30% coupon code for a BlueLife AI training program.

Know someone who would benefit from getting The Data Science Insider? Send them [this link to sign up.](

If you wish to stop receiving our emails or change your subscription options, please [Manage Your Subscription](
SuperDataScience Pty Ltd (ABN 91 617 928 131), 15 Macleay Crescent, Pacific Paradise, QLD 4564, Australia