Newsletter Subject

Google’s Long-Awaited AI Falls Short of Expectations

From

brownstoneresearch.com

Email Address

feedback@e.brownstoneresearch.com

Sent On

Thu, Dec 14, 2023 09:01 PM

Email Preheader Text

Google’s Long-Awaited AI Falls Short of Expectations Gemini is out. But it’s not all it’s made out to be.

[The Bleeding Edge]

Google’s Long-Awaited AI Falls Short of Expectations

Gemini is out. But it’s not all it’s made out to be.

On December 6, Google released its long-awaited AI model, Gemini. Rumors of Gemini have been swirling this year. By all accounts, it should have outperformed OpenAI’s GPT-4 model. According to Google, it did just that…

But be careful taking Google at face value on that. I’ve been digging into Google’s developer notes. And behind the impressive demo videos and stats is an ugly truth… Gemini is a disappointment.

I’ll get into those details in a moment. But the bigger question here is whether Google is failing… or whether AI development has hit a plateau.

Recommended Link

[Man Paid $1.5 Million by Google Reveals How to Profit From Their New AI Project]

[image]

See the document above? It’s a pay stub showing that Google paid tech expert Colin Tedards over $1.5 million in 2022 (through its holding company XXVI Holding Inc.). And now Colin says Google is entering a new era that could make you rich.

It’s all thanks to the return of Google’s billionaire founders, Sergey Brin and Larry Page. They retired three years ago… but now they’re back to help launch Google’s biggest artificial intelligence project yet. And this Google millionaire says he’s found a way for you to profit from this new AI project right from your brokerage account.

[Click here for the full story.]

--

Sleight of Hand

Google made three Gemini models available:

- Ultra
- Pro
- Nano

So far, we only have access to Pro. This mid-range model is a stripped-down version of Ultra. That means I can’t test Ultra on my own. I’ll have to take Google’s word on how it performs.

Google’s demo video of Ultra showed off some impressive capabilities. Here’s a clip of it recognizing a game of rock, paper, scissors…

Demo of Gemini (Source: Google)

But that’s not how it works in real time. Google’s developer notes show that the model was manually prompted to describe each hand gesture.
It wasn’t until it was asked a leading question about all three gestures in one slide that it could guess the game.

When I first watched the demo video, I was blown away. I thought Gemini was making sense of real-time video. Its responses were snappy and creative. But the developer notes show that Gemini could only make sense of still images paired with text prompts. Even then, the responses were bland and generic.

Google used similar tricks to show off Gemini’s strengths. Here’s a chart that makes Gemini out to be the strongest AI…

AI benchmark testing between Gemini and GPT-4 (Source: Google)

In this series of tests, human experts scored 89.8%. GPT-4 scored 86.4%. And Gemini bested them both with a score of 90%.

But just like the demo video, the devil is in the details. Google’s 90% figure came from a test setup that gave Gemini 6.4x as many attempts at each question as GPT-4 got: 32 chain-of-thought samples for Gemini versus a 5-shot test for GPT-4.

That’s not the only unfair comparison. Google tested Gemini against GPT-4. But OpenAI has an even better model, GPT-4 Turbo, which was released in November. Third-party testers put GPT-4 Turbo’s score at about 89%. That means Gemini Ultra is roughly on par with the latest GPT-4 model.

Don’t get me wrong. That’s impressive. But it’s a far cry from the predictions that Gemini would leave GPT-4 in the dust. After all, Google is a $1.6 trillion company with $119 billion in cash and more than 182,000 employees to throw at AI development. OpenAI was recently valued at $80 billion, with only about 770 employees and far less cash.

That makes Google’s inability to compete with – let alone outdo – OpenAI startling. But is this a Google problem… or a technology one?

Wolves at the Door

Outside of Google, we’re seeing smaller firms push the boundaries of what’s possible with AI. [I recently highlighted] breakthroughs in AI video generation from Pika. Here’s a demo video from Pika…

Pika 1.0 generates video based on text prompts. (Source: Pika.art)

These videos aren’t free of flaws. But they mark a huge leap in progress.
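Stepping back to the benchmark numbers above, it helps to see the gaps in percentage points rather than raw scores. A minimal sketch, using only the figures as reported in this issue (the GPT-4 Turbo number is the third-party estimate cited above, not an official score):

```python
# Benchmark scores as reported in this issue (percent).
# Illustrative only: these are the article's cited figures, not
# independently verified measurements.
scores = {
    "Human experts": 89.8,
    "GPT-4": 86.4,
    "GPT-4 Turbo": 89.0,  # third-party estimate cited above
    "Gemini Ultra": 90.0,
}

# Gemini Ultra's margin over each competitor, in percentage points.
margins = {
    name: round(scores["Gemini Ultra"] - score, 1)
    for name, score in scores.items()
    if name != "Gemini Ultra"
}

for name, margin in sorted(margins.items(), key=lambda kv: kv[1]):
    print(f"Gemini Ultra vs {name}: {margin:+.1f} points")
```

The roughly one-point margin over GPT-4 Turbo is what supports the "roughly on par" conclusion, versus the headline 3.6-point lead over the older GPT-4.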
Meanwhile, the small, Paris-based AI firm Mistral released its latest model, which outperforms GPT-3.5. That’s impressive for a company with only 22 employees.

And OpenAI’s Q* model is reported to be able to reliably do grade-school math. That may not seem very impressive… but it marks a major breakthrough in how AI models reason through problems.

The point of these examples is to show that big and small firms alike are making progress in AI. And that means the failings of Gemini are a Google problem… not a technological one.

Now, it may seem like I’m ragging on Google a lot. But there’s a good reason for this. You have to understand that I’m not going to blindly cheerlead companies during one of the most significant tech revolutions the world will ever see.

I like Google as a company. It has done great work with AlphaFold and AlphaMissense, which I’ve [covered previously]. But it needs to pick up the slack on AI development to stay competitive.

Google isn’t out of the race. But Gemini needs to thoroughly outperform GPT-4. Management can’t rely on fudged numbers and demos to win over the masses.

Gemini Ultra is still under development. Google estimates that it will launch in early 2024. Developers still have time to push the boundaries of what’s possible. I look forward to thoroughly testing Gemini Ultra against GPT-4 and other models upon its release.

Regards,

Colin Tedards
Editor, The Bleeding Edge

---------------------------------------------------------------

Like what you’re reading? Send your thoughts to feedback@brownstoneresearch.com.

IN CASE YOU MISSED IT…

[Billionaires Preparing for Final U.S. Dollar Collapse?]

If you have any money in U.S. dollars, [click here now]… because our U.S. Treasury Department has scheduled an event for this month that could have drastic implications for 260 million Americans.
The founder of the world’s largest hedge fund, Ray Dalio; legendary hedge fund manager John Paulson; and even the famous Rothschild family have started to prepare. [Click here to see the details or risk losing everything.]

Brownstone Research
55 NE 5th Avenue, Delray Beach, FL 33483
www.brownstoneresearch.com

To ensure our emails continue reaching your inbox, please add our email address to your address book.

This editorial email containing advertisements was sent to {EMAIL} because you subscribed to this service. To stop receiving these emails, click [here].

Brownstone Research welcomes your feedback and questions. But please note: the law prohibits us from giving personalized advice. To contact Customer Service, call toll-free (Domestic/International) 1-888-512-0726, Mon–Fri, 9am–7pm ET, or email us at memberservices@brownstoneresearch.com.

© 2023 Brownstone Research. All rights reserved. Any reproduction, copying, or redistribution of our content, in whole or in part, is prohibited without written permission from Brownstone Research.

[Privacy Policy] | [Terms of Use]
