Newsletter Subject

Can AI Solve Math’s Biggest Mysteries?

From

popularmechanics.com

Email Address

popularmechanics@newsletter.popularmechanics.com

Sent On

Sun, Mar 17, 2024 03:01 PM

Email Preheader Text

If you’re looking to rub elbows with the who’s who of mathematics before they hit the big

If you’re looking to rub elbows with the who’s who of mathematics before they hit the big time, look no further than the International Math Olympiad (IMO). Each year since 1959, high school math students from more than 100 countries have competed to solve a wide variety of math problems involving algebra, geometry, and number theory quickly and elegantly. Many IMO winners have gone on to secure prestigious math awards as adults, including the coveted Fields Medal. In essence, the IMO is a benchmark for students to see whether they have what it takes to succeed in the field of mathematics.

Now, artificial intelligence has aced the test (well, the geometry part, at least). In a paper published this January in Nature, a team of scientists from Google DeepMind introduced a new AI called AlphaGeometry that is capable of passing the geometry section of the IMO without relying on human examples.

“We’ve made a lot of progress with models like ChatGPT … but when it comes to mathematical problems, these [large language models] essentially score zero,” Thang Luong, Ph.D., a senior staff research scientist at Google DeepMind and a senior author of the AlphaGeometry paper, tells Popular Mechanics. “When you ask [math] questions, the model will give you what looks like an answer, but [it actually] doesn’t make sense.”

Math experts aren’t convinced that an AI built to solve high school-level math problems is ready to take off the training wheels and tackle more difficult subjects, such as advanced number theory or combinatorics, let alone boundary-pushing math research.

Leaf the Rake Behind and Reach for One of These Editor-Approved Cordless Leaf Blowers Instead
Clear the yard with cordless convenience.

Joe Biden’s Military Shopping List Just Dropped With a Bang
From more F-35s to fewer F/A-18s, the Pentagon’s 2025 budget paints a detailed picture of America’s future military capabilities.

The 9 Best Electric Lawn Mowers for a Trim Lawn in 2024
Most homeowners cutting front and backyards will get plenty of grass-cutting power from a battery-powered mower.

The Apple MacBook Air Just Went on Sale at Walmart
Grab a MacBook Air at Walmart for less than the cost of a refurbished one elsewhere.

Caught on Camera: A Ballistic Missile’s Wrath Wreaks Havoc
In broad daylight on a Ukrainian road, a sudden explosion may have just altered war strategies.

Need Assistance? Contact Us: pmpmembership@popularmechanics.com
Unsubscribe | Privacy Notice | CA Notice at Collection
Popular Mechanics is a publication of Hearst Magazines. ©2024 Hearst Magazine Media, Inc. All Rights Reserved. This email was sent by Hearst Magazines, 300 West 57th Street, New York, NY 10019-3779

Marketing emails from popularmechanics.com

Sent On

31/05/2024
30/05/2024
29/05/2024
28/05/2024
27/05/2024
26/05/2024

Email Content Statistics

Subject Line Length

Data shows that subject lines with 6 to 10 words generate a 21 percent higher open rate.

Number of Words

The more words in the content, the more time the user will need to spend reading. Get straight to the point with catchy short phrases and interesting photos and graphics.

Number of Images

More images or large images might cause the email to load slower. Aim for a balance of words and images.

Time to Read

Longer reading time requires more attention and patience from users. Aim for short phrases and catchy keywords.

Predicted open rate

Spam Score

Spam score is determined by a large number of checks performed on the content of the email. For the best delivery results, it is advised to lower your spam score as much as possible.

Flesch reading score

Flesch reading score measures how complex a text is. The lower the score, the more difficult the text is to read. The Flesch readability score uses the average length of your sentences (measured by the number of words) and the average number of syllables per word in an equation to calculate the reading ease. Text with a very high Flesch reading ease score (about 100) is straightforward and easy to read, with short sentences and no words of more than two syllables. Usually, a reading ease score of 60-70 is considered acceptable/normal for web copy.
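The standard Flesch Reading Ease formula combines those two averages (sentence length and syllables per word) with fixed coefficients. A minimal sketch follows; the vowel-group syllable counter is a naive heuristic of our own, which real readability tools replace with dictionary lookups:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels,
    # dropping a trailing silent "e". Not dictionary-accurate.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat."), 1))
```

A short, all-monosyllable sentence like the one above scores well over 100, while long sentences packed with polysyllabic words push the score down toward the "difficult" end of the scale.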

Technologies

What powers this email? Every email we receive is parsed to determine the sending ESP and any additional email technologies used.

Email Size (excluding images)

Font Used


Copyright © 2019–2024 SimilarMail.