Newsletter Subject

One Step Closer to the Moon and Mars...

From

brownstoneresearch.com

Email Address

feedback@e.brownstoneresearch.com

Sent On

Mon, Apr 24, 2023 08:53 PM

Email Preheader Text

- Twitter, X Corp., and X.ai? - A new open source entrant into large language models - The AI arms

[The Bleeding Edge]

- Twitter, X Corp., and X.ai…
- A new open source entrant into large language models
- The AI arms race is just getting started

---------------------------------------------------------------

Dear Reader,

“Everything after clearing the launch pad was icing on the cake.”

Those were the words of the SpaceX announcer during last Thursday’s historic launch of the SpaceX Starship from the Boca Chica spaceport in Texas. Long awaited, it was a remarkable launch to watch.

Prior to launch, Elon Musk set expectations that he was just hoping to get the integrated booster and Starship off the launch pad. After all, this was the very first test of the integrated system with 33 Raptor engines and an attempt to do something never done before.

Which is what made the test launch such a stunning, stunning success. Not only did Starship get off the launch pad, it reached Max Q – the point of maximum stress on the rocket. It managed to reach an apogee of 39 kilometers, and it did so even though six of the 33 Raptor engines weren’t functioning.

[Bear market expert makes new prediction]

Just under four minutes into the flight, around the time the first stage should have separated, Starship and the booster began to loop – a clear sign of trouble. What happened next was referred to as a “rapid unscheduled disassembly.” What a great engineering term for – BOOM! Which is exactly what is supposed to happen when a rocket starts to malfunction.

As usual, the reporting by the press was mostly negative and inaccurate. The launch was an incredible success by all measures, and yet even Bloomberg came out with an article titled “Starship Explosion Shows Just How Far SpaceX Is From the Moon.”

The article should have said the opposite. If SpaceX was able to reach 39 kilometers in altitude with six fewer operational rocket engines on a test launch, just imagine what will happen the next time around.

And that’s the point. SpaceX was able to transform the aerospace industry because it was willing to test and fail early. SpaceX did the same thing with the Falcon 9: it had its series of failures, then went on to demonstrate a string of successful launches unparalleled in the industry… and all for a fraction of the cost of any rockets that came before the Falcon 9.

Musk has already said that SpaceX plans to launch a second test Starship in a month or two. They’re ready to go. Each launch brings more valuable lessons. And that means SpaceX is one step closer to the Moon.

Recommended Link

[The One Ticker Retirement Plan]

Over the Shoulder Demo Now Available

[image]

Market Wizard Larry Benedict crushed the market in 2022. But he didn’t do it with a “traditional” method… For a limited time, he’s sharing a free over-the-shoulder “demo” of his strategy in action. It takes less than 10 seconds… [Watch it here.]

-- Get ready for the everything app…

Elon Musk is making some dramatic moves with Twitter right now. In the past few days, the corporate press has been fixated on the removal of “legacy blue checks” from accounts that were verified under the old model.

As a reminder, Twitter’s previous way of giving out blue checks was deeply corrupt. As we learned, many people paid Twitter employees to receive their blue checks, or they were given out to those who fit some “preferred” political narrative.
But while much of the world is distracted by this silly vanity, something far more interesting is going on just underneath the surface…

For starters, Musk announced his plans to create a generative artificial intelligence (AI) to compete with [OpenAI’s ChatGPT] and [Google’s Bard]. Musk’s goal is to train his AI to be objective rather than biased. He specifically pointed to the proven bias demonstrated by OpenAI and Google and how it is reflected in the outputs of their generative AIs.

We can view this as a form of censorship. The AIs don’t openly censor information. But they will tend to present information that’s biased toward a certain political agenda or narrative, depending on what data the AIs are trained on as well as any guardrails that are programmed into the system. So Musk wants to provide an alternative that’s completely data-driven.

The question is – where does this new generative AI fit into Musk’s umbrella of businesses? This is where Twitter comes back into the picture…

[Millionaire Trader Reveals: How to Make One “Backdoor” Currency Trade – Every Month – And Start Making All the Money You Need to Fund Your Retirement]

When Musk bought Twitter and took it private, he dissolved Twitter as a corporate entity. Now Twitter exists only as a branded platform. Twitter is now held by a company Musk set up years ago called X Corp. At one time he suggested that this company could build the “everything app.”

He envisioned the app doing a wide range of things – similar to WeChat in mainland China. WeChat is a messaging platform that also enables users to make video calls, make payments, play games, order food and groceries, and even book doctor’s appointments. It basically combines the functionality of 10 or 12 different apps here in the U.S.

That said, Musk did something telling just a few days ago. He registered a website called X.ai. No doubt this is where his generative AI will live. And it suggests that his new AI will be housed within X Corp. as well… just like Twitter.

We can compare this to the setup at Facebook/Meta. Meta is the company, and beneath it are a variety of different products: Facebook, Instagram, and WhatsApp, primarily.

So it appears that Musk’s plans for an “everything app” are now in motion. And Twitter is going to be at the heart of it.

This isn’t speculation, either. X Corp. and X.ai exist, and X Corp. has already ordered 10,000 GPUs, which are necessary to train the new generative AI. Given that much horsepower, it won’t be long before we learn more about how Musk plans to infuse Twitter with artificial intelligence. This will likely be one of the biggest stories in tech this year. Stay tuned…

Recommended Link

Millionaire Trader Reveals: [How to Make One “Backdoor” Currency Trade – Every Month – And Start Making All the Money You Need to Fund Your Retirement]

[image]

[Click here for the name of the currency trade.]

-- Stability AI steps into large language models…

Speaking of competition in generative AI, a company we know well just entered the ring. Regular readers will remember [Stability AI]. This is the company behind Stable Diffusion – the text-to-image generator that was a precursor to large language models like ChatGPT.

Well, Stability AI just released a suite of its own large language models. It’s called StableLM. And right now it consists of two separate generative AIs.

StableLM’s focus is on transparency. As such, Stability AI open-sourced the code.
Anyone who understands coding can view it to see exactly how StableLM works and what it was trained on. This is in stark contrast to ChatGPT (OpenAI) and Bard (Google). Both OpenAI and Google have kept their code proprietary… so nobody else knows what’s in them.

I see this as a very positive development for the industry. We want to have competing AIs to choose from. That way this powerful technology isn’t controlled by just one or two massive corporations.

I’m also excited to see that Stability AI is taking a very similar approach to Cerebras. As we may remember, Cerebras is the company that designed the [world’s largest AI-specific semiconductor]. And earlier this month the company [released seven new GPT-based language models].

What I love about this is that Cerebras trained each of them with progressively larger data sets. This allows each model to be optimized for specific applications. Not everything needs to be trained on the entire open internet as ChatGPT was.

Stability AI is taking the same approach. One of its models has 3 billion parameters, and the second has 7 billion. From there, Stability AI plans to release models with 15 billion and 65 billion parameters, respectively. For comparison, OpenAI’s ChatGPT has 175 billion parameters.

So Stability AI’s models are much smaller. And smaller language models don’t require as much computing horsepower to run. They don’t need to be hosted at a big data center. Instead, the smaller models can run on the edge of networks – on devices such as our phones and laptops.

So this is a big move by Stability AI. And because its StableLM suite is lightweight and open source, I suspect we’ll see a lot of companies adopt its technology as a backend for their own software applications.
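To put those parameter counts in perspective, here is a rough back-of-envelope sketch (not from the newsletter) of why a 3-billion or 7-billion parameter model can plausibly fit on consumer hardware while a 175-billion parameter model cannot, followed by a hypothetical example of loading a small open checkpoint with the Hugging Face transformers library. The checkpoint identifier stabilityai/stablelm-base-alpha-3b is an assumption based on Stability AI's announcement, so verify the exact name on the model hub before running; the first run also downloads several gigabytes of weights.

```python
# Back-of-envelope sizing for running a language model locally, plus a sketch
# of loading a small open checkpoint with Hugging Face transformers.
# Assumptions: the `transformers` and `torch` packages are installed, and the
# checkpoint name below is taken from Stability AI's StableLM announcement
# (verify the exact identifier before use).

def fp16_weight_footprint_gb(num_parameters: float) -> float:
    """Memory needed just to hold the weights at 16-bit precision."""
    bytes_per_param = 2  # fp16 / bf16
    return num_parameters * bytes_per_param / 1e9

for label, params in [("3B model", 3e9), ("7B model", 7e9), ("175B model", 175e9)]:
    print(f"{label}: ~{fp16_weight_footprint_gb(params):.0f} GB of weights")
# 3B   -> ~6 GB   (a strong laptop or one consumer GPU)
# 7B   -> ~14 GB  (a single workstation GPU)
# 175B -> ~350 GB (multiple data-center GPUs; not an edge device)

# Sketch: loading and prompting a small open model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-3b"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Smaller language models can run", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

At 16-bit precision the weights alone cost about two bytes per parameter, which is the main reason the smaller StableLM models are candidates for phones and laptops while ChatGPT-class models stay in the data center.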
Recommended Link

Top Expert on Seeking Alpha reveals: [The SWAN Retirement Blueprint]

[image]

How to make all the money you need for a comfortable retirement – in any market – with a small portfolio of unique stocks. [Click here for details – including the name and ticker of his #1 stock.]

-- China’s jumping on the generative AI bandwagon…

Not to be left behind, a host of Chinese companies announced over the last few weeks that they are working on their own generative AI models. Each will function much like ChatGPT.

Perhaps the most notable Chinese company getting in on the act is Alibaba, China’s version of Amazon. Alibaba has already launched its own ChatGPT-style chatbot. The company has made it available to select businesses on a trial basis. That will allow Alibaba to work out the kinks and release an optimized version to the general public in the months to come.

And it’s not just Alibaba. Baidu, China’s equivalent to Google, launched its own generative AI, also on a trial basis. It’s called Ernie Bot. And both Huawei and SenseTime announced that they are working on generative AIs of their own. Huawei is a major player in the smartphone and wireless technology space. And SenseTime is a Chinese company that specializes in facial recognition technology.

So four major Chinese companies are stepping into the ring in a rush not to be left behind. And I’m sure more will follow.

It’s very clear that the race is on. This competition isn’t just about corporate rivalry. It’s deeply tied to China’s national ambition to be the world’s leader in artificial intelligence by 2030.

But here’s the thing – all of these generative AIs will need to be trained on datasets approved by the Chinese government. That means the Chinese Communist Party will control what inputs go into the AI. Obviously this will impact the AI’s output. These models won’t be trained on all available knowledge. Instead, they will receive a curated dataset that contains only information the Chinese Communist Party deems appropriate.

And that means few parties outside of mainland China will adopt these models. They simply won’t be competitive with the U.S.-based alternatives.

So I’m curious – will these Chinese companies make two versions of their AI models? They could make one highly restrictive model for use in mainland China and a separate model trained more broadly for use outside the country. Is that the plan here? It’s going to be interesting to see how this all plays out.

What we should take away from this is that the rise of artificial intelligence clearly won’t be a U.S.-centric phenomenon. We’re heading into a world where several nations will build out their own versions of this technology in a bid for dominance. And that doesn’t mean just the largest countries. As a reminder, many large language models have already been open-sourced. And so much of the research in this area has been openly published. Training a generative AI is within the budget of just about any country.

We can think of what’s happening right now as an “AI arms race.” And it’s just getting started.

Regards,

Jeff Brown
Editor, The Bleeding Edge

---------------------------------------------------------------

Like what you’re reading? Send your thoughts to feedback@brownstoneresearch.com.

IN CASE YOU MISSED IT…

[Want to know how to predict economic trends?]

Critically acclaimed author Peter Zeihan comes recommended by Fareed Zakaria, Ian Bremmer, and former presidential candidate Mitt Romney. Find out why. Order his latest book, The End of the World Is Just the Beginning. [Get it here and receive a $100 Legacy Research credit.]

[image]

[Brownstone Research]

Brownstone Research
55 NE 5th Avenue, Delray Beach, FL 33483
[www.brownstoneresearch.com]

To ensure our emails continue reaching your inbox, please [add our email address] to your address book.

This editorial email containing advertisements was sent to {EMAIL} because you subscribed to this service. To stop receiving these emails, click [here].

Brownstone Research welcomes your feedback and questions. But please note: The law prohibits us from giving personalized advice. To contact Customer Service, call toll free Domestic/International: 1-888-512-0726, Mon–Fri, 9am–7pm ET, or email us [here](mailto:memberservices@brownstoneresearch.com).

© 2023 Brownstone Research. All rights reserved. Any reproduction, copying, or redistribution of our content, in whole or in part, is prohibited without written permission from Brownstone Research.

[Privacy Policy] | [Terms of Use]


