In This Week’s SuperDataScience Newsletter: AI Eye Test Could Accurately Predict a Future Fatal Heart Attack. DeepMind Finds New Way to Multiply Numbers. Tesla’s Robot Reveal Draws Mixed Reviews. US Releases an ‘AI Bill of Rights.’ Leading Robot Makers Pledge to Not Weaponize Them.

Cheers,
- The SuperDataScience Team

P.S. Have friends and colleagues who could benefit from these weekly updates? Send them to [this link]( to subscribe to the Data Science Insider.

---------------------------------------------------------------

[AI Eye Test Could Predict a Future Fatal Heart Attack](

In brief: A new study by a team of British academics, led by St George's, University of London, has found that AI-enabled imaging of the retina’s veins and arteries can identify the specific risk of cardiovascular disease, cardiovascular death, and stroke. The researchers developed a fully automated, AI-enabled algorithm called QUARTZ to predict cardiovascular health and death from retinal imaging. They said: “AI-enabled vasculometry risk prediction is fully automated, low cost, non-invasive and has the potential for reaching a higher proportion of the population in the community because of ‘high street’ availability and because blood sampling or [blood pressure measurement] are not needed. The results strengthen the evidence from several similar studies that the retina can be a useful and potentially disruptive source of information for CVD risk in personalised medicine.” The breakthrough could enable high-street opticians to offer the non-invasive test in the near future.

Why this is important: According to the World Health Organization (WHO), heart disease is the leading global cause of death, with an estimated 17.9 million people dying from it each year. The WHO says that early detection of heart disease can give patients valuable time for treatment, preventing heart attacks and saving many lives.

[Click here to learn more!](

[DeepMind Finds New Way to Multiply Numbers](

In brief: Matrix multiplication may not be something you’ve heard of before, but it forms the basis of many computing tasks: two grids of numbers are multiplied together to produce a third. Researchers at DeepMind have now used an AI system, dubbed AlphaTensor, to discover improved multiplication techniques with the capacity to boost computation speeds by up to 20%. The research, published in the journal Nature, reveals how AI could be used to improve computer science itself. Pushmeet Kohli, head of AI for science at DeepMind, said: “It’s also a very intriguing, mind-boggling problem because matrix multiplication is something that we learn in high school. It’s an extremely basic operation, yet we don’t currently know the best way to actually multiply these two sets of numbers. So that’s extremely stimulating for us also as researchers to start to understand this better.”

Why this is important: AlphaTensor shows the scale of DeepMind’s ambition. The company’s latest AI has the potential to be as influential as its AlphaFold tool, which created waves with its ability to predict the structures of almost every known protein on Earth.

[Click here to read on!](
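For readers curious what a “better way to multiply” looks like in practice, the minimal Python sketch below (our illustration, not DeepMind’s code) shows the classic Strassen scheme, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8. AlphaTensor searches for recipes of this kind that use even fewer multiplications for larger block sizes, which is where the reported speed-ups come from.

```python
# Illustrative sketch only: the classic Strassen scheme, not AlphaTensor itself.
# Strassen multiplies two 2x2 matrices with 7 scalar multiplications rather than
# the naive 8; applied recursively to large matrices split into blocks, fewer
# multiplications per block is exactly the kind of saving AlphaTensor hunts for.

def naive_2x2(A, B):
    """Standard 2x2 matrix product: 8 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def strassen_2x2(A, B):
    """Strassen's 2x2 matrix product: 7 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert naive_2x2(A, B) == strassen_2x2(A, B) == [[19, 22], [43, 50]]
```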
[Tesla’s Robot Reveal Draws Mixed Reviews](

In brief: In last week’s SuperDataScience weekly newsletter we previewed Tesla’s annual AI Day and looked forward to Elon Musk’s unveiling of the humanoid robot, ‘Optimus’. At the event, Tesla’s CEO showcased the highly anticipated robot by demonstrating a prototype that walked on stage and waved to the seated audience. A video of the robot carrying a box, watering plants, and moving metal bars in the electric vehicle maker’s factory was also shown. However, critics were left largely underwhelmed by Optimus’ unsteady gait and unpolished appearance, and most suggested that Musk’s aspirations for the technology are wildly ambitious. He has previously made many promises about self-driving vehicles that have not yet become a reality, leading many to think that he over-promises and under-delivers. This article in Wired is even-handed in its evaluation of Tesla’s announcements, suggesting that despite the obvious shortcomings, the company is producing some incredible work.

Why this is important: Elon Musk has previously claimed that a Tesla robot business will be worth more than its cars, but Friday’s reveal suggests that this is still a long way off.

[Click here to discover more!](

[US Releases an ‘AI Bill of Rights’](

In brief: Joe Biden’s administration has announced a series of far-reaching guidelines aimed at preventing harm caused by the rise of AI. The package, called ‘The Blueprint for an AI Bill of Rights’, includes instructions on how to protect people’s personal data and limit surveillance. However, it does not set out specific enforcement actions, leading many to see it as a toothless call to action from the White House, encouraging the US government to safeguard digital and civil rights. The blueprint follows much criticism of the government over transparency, bias, and surveillance. Alondra Nelson, the deputy director of the White House Office of Science and Technology Policy, said: “Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies.”

Why this is important: In the US there are currently no federal laws specifically regulating AI or its applications, although some individual states do have their own legal requirements.

[Click here to see the full picture!](

[Leading Robot Makers Pledge to Not Weaponize Them](

In brief: Six leading tech firms (Boston Dynamics, Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics, and Unitree) have signed an open letter agreeing to never weaponize general-purpose robots. The companies acknowledge that advanced robots have the potential to bring huge benefits to our lives but could also be used for nefarious purposes. The letter states: “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.” In addition to pledging not to add weapons technology to their own robots, the companies have vowed not to support any others that do.

Why this is important: The Russian invasion of Ukraine has produced many circulating images of weaponized robots and created a fear that automated warfare may become a reality. The companies’ pledge reflects the fact that the general public has no appetite for these kinds of technologies.

[Click here to find out more!](

[Super Data Science Podcast](

In this week’s [Super Data Science Podcast](, Jon Krohn speaks with Nick Singh about tried and tested ways to land a data science job with a MAMAA company.

---------------------------------------------------------------

What is the Data Science Insider? This email is a briefing of the week’s most disruptive, interesting, and useful resources curated by the SuperDataScience team for Data Scientists who want to take their careers to the next level.
Want to take your data science skills to the next level? Check out the [SuperDataScience platform]( and sign up for membership today!

Know someone who would benefit from getting The Data Science Insider? Send them [this link to sign up.](

If you wish to stop receiving our emails or change your subscription options, please [Manage Your Subscription](
SuperDataScience Pty Ltd (ABN 91 617 928 131), 15 Macleay Crescent, Pacific Paradise, QLD 4564, Australia