Newsletter Subject

Hackaday Newsletter 0xF0

From

hackaday.com

Email Address

editor@hackaday.com

Sent On

Fri, Sep 25, 2020 04:02 PM

Email Preheader Text

Behind Twitter's "Racist" AI Gaffe Axe Hacks: New Sounds For Your Electric Guitar Beginning From What

Behind Twitter's "Racist" AI Gaffe [HACKADAY]

Axe Hacks: New Sounds For Your Electric Guitar Beginning From What Makes Them Tick [Read Article Now »]

Twitter: It's Not the Algorithm's Fault. It's Much Worse.

By Elliot Williams

Maybe you heard about the anger surrounding Twitter's automatic cropping of images. When users submit pictures that are too tall or too wide for the layout, Twitter automatically crops them to roughly a square. Instead of just picking, say, the largest square closest to the center of the image, it uses some "algorithm", likely a neural network, trained to find people's faces and make sure they're cropped in. The problem is that when a too-tall or too-wide image includes two or more people with different colored skin, the crop picks the lighter face. That's really offensive, and something's clearly wrong, but what?

A neural network is really just a mathematical equation, with the input variables in this case being convolutions over the pixels in the image, and training it essentially consists in picking the values for all the coefficients. You do this by applying inputs, seeing how wrong the outputs are, and updating the coefficients to make the answer a little more right. Do this a bazillion times, with a big enough model and dataset, and you can make a machine recognize different breeds of cat.

What went wrong at Twitter? Right now it's speculation, but my money says it lies with either the training dataset or the coefficient-update step. The problem of including people of all races in the training dataset is so blatantly obvious that we hope that's not the problem; although getting a representative dataset is hard, it's known to be hard, and they should be on top of that.

[Not a pipe.]

Which means that the issue might be coefficient fitting, and this is where math and culture collide. Imagine that your algorithm just misclassified a cat as an "airplane" or as a "lion".
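The apply-inputs, measure-wrongness, update-coefficients loop described above can be sketched in a few lines. This is a toy one-coefficient model with made-up data, purely to illustrate the mechanism, not anything resembling Twitter's actual cropping network:

```python
# Minimal sketch of the training loop described in the text: apply an
# input, see how wrong the output is, nudge the coefficient a little
# toward the right answer. Toy one-coefficient linear model.

def train(xs, ys, lr=0.01, steps=1000):
    w = 0.0  # the single "coefficient" we are fitting
    for _ in range(steps):
        for x, y in zip(xs, ys):
            pred = w * x
            error = pred - y      # how wrong is the output?
            w -= lr * error * x   # gradient of squared loss w.r.t. w
    return w

# Learn y = 3x from a few examples; w converges toward 3.0.
w = train([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
print(round(w, 3))
```

A real network does exactly this, just with millions of coefficients and the gradients computed by backpropagation instead of by hand.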
You need to modify the coefficients so that they move the answer away from this result a bit, and more toward "cat". Do you move them equally away from "airplane" and "lion", or is "airplane" somehow more wrong? To capture this notion of different wrongnesses, you use a loss function that numerically encapsulates exactly what you want the network to learn, and then you take bigger or smaller steps in the right direction depending on how bad the result was.

Let that sink in for a second. You need a mathematical equation that summarizes what you want the network to learn. (But not how you want it to learn it. That's the revolutionary quality of applied neural networks.)

Now imagine, as happened to Google, your algorithm fits "gorilla" to the image of a black person. That's wrong, but it's categorically differently wrong from simply fitting "airplane" to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally, you would want them never to happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did -- their "workaround" was to stop classifying "gorilla" entirely, because the loss incurred by misclassifying a person as a gorilla was so large.

This is a fundamental problem with neural networks -- they're only as good as the data and the loss function. These days, the data has become less of a problem, but getting the loss right is a multi-level game, as these neural network trainwrecks demonstrate. And it's not as easy as writing an equation that isn't "racist", whatever that would mean. The loss function is being asked to encapsulate human sensitivities: to navigate around them, quantify them, and eventually weigh the risk of making a particularly offensive misclassification against not recognizing certain animals at all.
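The idea of "different wrongnesses" can be made concrete by adding a penalty term to an ordinary cross-entropy loss. The class names and penalty values here are entirely hypothetical, chosen only to show how a loss function can charge more for one misclassification than another:

```python
# Sketch of a loss with "different wrongnesses": standard cross-entropy
# plus an extra charge that depends on WHICH wrong class the model
# leans toward. Classes and penalty values are illustrative only.
import math

CLASSES = ["cat", "lion", "airplane"]

# penalty[true][predicted]: how bad is probability mass on this class?
PENALTY = {
    "cat": {"cat": 0.0, "lion": 1.0, "airplane": 5.0},  # airplane is "more wrong"
}

def weighted_loss(true_label, probs):
    """Cross-entropy plus an expected penalty over the wrong classes."""
    base = -math.log(probs[true_label])            # standard cross-entropy
    extra = sum(PENALTY[true_label][c] * probs[c]  # asymmetric charge for
                for c in CLASSES)                  # each kind of mistake
    return base + extra

# A cat confidently called an "airplane" loses more than one called a "lion".
as_lion = weighted_loss("cat", {"cat": 0.1, "lion": 0.8, "airplane": 0.1})
as_plane = weighted_loss("cat", {"cat": 0.1, "lion": 0.1, "airplane": 0.8})
print(as_plane > as_lion)  # True
```

An "infinitely large loss" for an insulting label is the limiting case of cranking one of these penalty entries up without bound, which in practice collapses into Google's workaround: never emit that label at all.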
I'm not sure this problem is solvable, even with tremendously large datasets. (There are mathematical proofs that it is solved with infinitely large datasets, by driving classification error to zero. But how close are we to infinity? Are asymptotic proofs relevant?) Anyway, this problem is bigger than algorithms, or even their writers, being "racist". It may be a fundamental problem of machine learning, and we're definitely going to see further permutations of the Twitter fiasco in the future as machine classification is increasingly asked to respect human dignity.

From the Blog

Exploring the Clouds of Venus; It's Not Fantasy, But it Will Take Specialized Spacecraft
By Tom Nardi
We've just found a possible marker for life on Venus. It might be time to revisit our nearest neighbor. [Read more »]

In Praise Of The DT830, The Phenomenal Instrument You Probably Don't Recognise For What It Is
By Jenny List
The cheapest multimeter in your toolbox is surprisingly valuable. [Read more »]

Dynamic Soaring: 545 MPH RC Planes Have No Motor
By Elliot Williams
Do you know about the tremendous speeds, significant danger, and amazing engineering behind the fastest model airplanes? [Read more »]

Hackaday Podcast

Hackaday Podcast 086: News Overflow, Formula 1/3 Racer, Standing Up For Rubber Duckies, and Useless Machine Takes a Turn
By Hackaday Editors
What happened last week on Hackaday? Editors Mike Szczys and Elliot Williams get you up to speed. [Read more »]

If You Missed It

Wooden Disc Player Translates Binary Back Into Text
Plastic Prosthetics for Rubber Duckies
A Big Computer Needs a Big Keyboard
A Monotrack Bike With Only Basic Tools And Parts
Teleconferencing Like It's 1988: Connecting Vintage Hardware to Zoom
ESP32 Vulnerability Affects Older Chips

Hackaday.com · 61 S Fair Oaks Ave Ste 200 · Pasadena, CA 91105-2270 · USA
