The Bleeding Edge

Better Know Claude

By Colin Tedards, Editor, The Bleeding Edge

Dear Reader,

ChatGPT, Bard, LLaMA 2, Claude, DALL-E 2, and AlphaCode. These are just a few of the generative AI models available to use today.

It seems like every week, there's a new model... or a newer version of an existing model that comes out.

That's why I'm occasionally going to use The Bleeding Edge to take a closer look at a given AI model and the company behind it.

Today, I'm featuring Anthropic and its latest AI model, Claude 2. Claude 2 is the closest competitor to ChatGPT and GPT-4 from OpenAI.
At its core, users submit a prompt and Claude usually responds with an informed, natural-language answer.

Claude has a few advantages over GPT-4. The biggest difference is that Claude can ingest a book's worth of information in a single prompt.

Claude's context window is 100,000 tokens. The context window refers to how much information the model can digest at one time. And 100,000 tokens equates to roughly 70,000 words. That's about the size of books like The Catcher in the Rye or Lord of the Flies.

In comparison, GPT-4 has only recently unveiled its 32,000-token context window. That means Claude can handle about 3x what GPT-4 can.
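To make the arithmetic concrete, here is a minimal sketch of a "does this document fit?" check. The ~0.7 words-per-token ratio is an assumption back-derived from the figures above (100,000 tokens ≈ 70,000 words); real tokenizers vary by text and language.

```python
# Rough sketch: estimate whether a document fits in a model's context window.
# WORDS_PER_TOKEN is an assumption inferred from the article's figures
# (100,000 tokens ~ 70,000 words); actual tokenization varies.
WORDS_PER_TOKEN = 0.7

def fits_in_context(word_count: int, context_tokens: int) -> bool:
    """Estimate whether a document of `word_count` words fits in the window."""
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= context_tokens

# A ~70,000-word novel like Lord of the Flies:
print(fits_in_context(70_000, 100_000))  # Claude 2's 100k window -> True
print(fits_in_context(70_000, 32_000))   # GPT-4's 32k window -> False
```

So under this rough estimate, the whole novel fits in Claude's window in one prompt, while GPT-4's 32k window would need the text split into roughly three chunks.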
That gives the edge to Claude for digesting lengthy documents and providing useful insights. With Claude, users don't need to come up with creative workarounds to feed the AI large amounts of data. Claude is built ready to take it all in.

Claude is also about 5x cheaper than GPT-4. For prompts, Claude costs $11.02 per million tokens, whereas GPT-4 costs $60 per million tokens.

Of course, taking in more data doesn't mean much if the AI can't make sense of what it's ingesting. Here's how Claude scored on a handful of standardized tests:

- 76.5% on the multiple-choice section of the Bar exam

- 90th percentile on the GRE reading and writing exams

- 71.2% on the Codex HumanEval, a Python coding test

These scores show that Claude has reasoning abilities similar to GPT-4's.

Claude's information is also more recent. Its training cutoff is early 2023. That compares to September 2021 for GPT-4.

From my own experiences experimenting with Claude, I feel it offers longer and more nuanced responses. I think that's generally a good thing coming from an AI. After all, most people are willing to take what an AI says at face value without fact-checking. So the more nuance and context the AI can provide, the better.
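The pricing gap is easy to sanity-check with the per-million-token prices quoted above. This is back-of-the-envelope arithmetic only, using the article's prompt prices, not a billing calculator.

```python
# Sketch: comparing prompt costs using the article's quoted prices
# (Claude ~$11.02 per million tokens, GPT-4 $60 per million tokens).
PRICE_PER_MILLION = {"claude-2": 11.02, "gpt-4": 60.00}

def prompt_cost(model: str, tokens: int) -> float:
    """Dollar cost of feeding `tokens` prompt tokens to `model`."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# Ingesting a full 100,000-token book in a single prompt:
print(round(prompt_cost("claude-2", 100_000), 2))  # prints 1.1
print(round(prompt_cost("gpt-4", 100_000), 2))     # prints 6.0

# Price ratio behind the "about 5x cheaper" claim:
print(round(PRICE_PER_MILLION["gpt-4"] / PRICE_PER_MILLION["claude-2"], 1))  # prints 5.4
```

At these prices, a book-length prompt costs about $1.10 on Claude versus $6.00 on GPT-4, a ratio of roughly 5.4x.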
One of the most interesting features of Claude is what its developers call constitutional AI.

A big challenge for any AI developer is making sure the AI doesn't start spewing hateful or harmful information. After all, these AIs take in huge amounts of data from the internet. And the internet has some very hateful content on it.

This challenge brought down early AIs like Microsoft's Tay, which spewed racist and inflammatory remarks.

OpenAI uses humans to teach its AI what's harmful. Developers can manually intervene and tell it not to respond to certain questions. Or it can be trained not to offer hateful or harmful responses on a case-by-case basis.

Claude's constitutional model is different. The developers managed to "teach" it an ethical framework to apply in every response.

It basically works by having two different AIs manage the response. The first AI drafts a response to the given prompt. The second AI then evaluates that response against the constitutional guidelines it's been given.

Claude's developers hope to eliminate hateful, discriminatory, unethical, and illegal responses entirely. This gives the model a potential edge in business and commercial applications. Anthropic says it's 2x safer than its previous version, Claude 1.3.

Just from experimenting with it, Claude does seem to have tighter guardrails than other AI models.

Stricter safety standards aren't an accident. They're the entire reason Claude exists.

Anthropic was founded by Dario and Daniela Amodei. Both were senior members at OpenAI. According to them, they became concerned with OpenAI's disregard for safety in favor of commercialization. They wanted to build a better AI with stricter safety standards.

It seems to have worked. Since 2021, Anthropic has raised over $1 billion from the likes of Alphabet and Salesforce. Last February, Anthropic partnered with Alphabet to use Google Cloud to help train and develop Claude.
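The draft-then-evaluate flow described above can be sketched in a few lines. This is purely illustrative: `generate_response` and `passes_constitution` are hypothetical stand-ins for the two models, not Anthropic's actual implementation, and the keyword check is a toy placeholder for a real AI critic.

```python
# Illustrative sketch of the two-stage "constitutional" flow the article
# describes. Both functions are hypothetical stand-ins, NOT Anthropic's code.
CONSTITUTION = [
    "Avoid hateful or discriminatory content.",
    "Avoid unethical or illegal advice.",
]

def generate_response(prompt: str) -> str:
    # Stand-in for the first model, which drafts an answer to the prompt.
    return f"Draft answer to: {prompt}"

def passes_constitution(draft: str) -> bool:
    # Stand-in for the second model, which evaluates the draft against
    # the constitutional principles. Here: a toy keyword check.
    flagged = ("hateful", "illegal")
    return not any(word in draft.lower() for word in flagged)

def constitutional_reply(prompt: str) -> str:
    draft = generate_response(prompt)
    if passes_constitution(draft):
        return draft
    return "I can't help with that."
```

The point of the design is that the safety check runs on every response automatically, rather than relying on humans flagging bad outputs case by case.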
Anthropic seems to have made the most of the funding and partnership. Claude 2 shows that it's able to outcompete GPT-4 with its constitutional AI.

Anthropic is still a private company. Its last funding round, in May, valued it at $5 billion. That means there isn't an opportunity to invest in it yet. But it's a company that I'm keeping a close eye on.

I encourage you to give Claude a try at Claude.ai. Then let me know what you think by writing to me at feedback@brownstoneresearch.com.

Regards,

Colin Tedards
Editor, The Bleeding Edge

---------------------------------------------------------------

Like what you're reading? Send your thoughts to feedback@brownstoneresearch.com.

Brownstone Research
55 NE 5th Avenue, Delray Beach, FL 33483
www.brownstoneresearch.com

To ensure our emails continue reaching your inbox, please add our email address to your address book.

This editorial email containing advertisements was sent to {EMAIL} because you subscribed to this service. To stop receiving these emails, unsubscribe via the link in this email.

Brownstone Research welcomes your feedback and questions. But please note: The law prohibits us from giving personalized advice.

To contact Customer Service, call toll free Domestic/International: 1-888-512-0726, Mon–Fri, 9am–7pm ET, or email us at memberservices@brownstoneresearch.com.

© 2023 Brownstone Research. All rights reserved. Any reproduction, copying, or redistribution of our content, in whole or in part, is prohibited without written permission from Brownstone Research.

Privacy Policy | Terms of Use