The Mimic

How will ChatGPT change everyday life? Sarah Myers West on artificial intelligence and human resilience.

Attracting more than a million users within days of its release in November, ChatGPT—the new artificial-intelligence application from the U.S. research lab OpenAI—has provoked intensive global media coverage in the month and a half since. Responses to the technology—which can generate sophisticated written replies to complex questions; produce essays, poems, and jokes; and even imitate the prose styles of famous authors—have ranged from bemused astonishment to real anxiety: The New York Times tech columnist Kevin Roose wrote that "ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public," while Elon Musk tweeted, "ChatGPT is scary good. We are not far from dangerously strong AI." Critics worry about the technology's potential to replace human work, enable student cheating, and disrupt the world in countless other ways. What to make of it?

Sarah Myers West is the managing director of New York University's AI Now Institute, which studies the social implications of artificial intelligence. As West explains, ChatGPT is built on more than 60 years of chatbot innovation, starting with the ELIZA program, created at the Massachusetts Institute of Technology in 1966, and including the popular SmarterChild bot on AOL Instant Messenger in the 2000s.
West expects this latest version to affect certain industries significantly—but, she says, as uncanny as ChatGPT's emulation of human writing is, it's still capable of only a very small subset of functions previously requiring human intelligence. This isn't to say it presents no meaningful problems, but to West, its future—and the future of artificial intelligence altogether—will be shaped less by the technology itself and more by what humanity does to determine its use.

Graham Vyse: How do you see ChatGPT in the history of artificial intelligence as a whole?

Sarah Myers West: Artificial intelligence has been around as a field for almost 80 years now, but its meaning has changed a lot along the way. In its early days, AI focused on what we'd call "expert systems"—technologies that would replicate human intelligence in certain ways. Now, what we refer to as AI is very different—largely, an array of data-centric technologies that, in order to work effectively, rely on a couple of things that didn't really exist before.

The first is massive amounts of data. This was enabled by the internet boom of the 2010s, when tech companies developed the capabilities to leverage the production of data on a huge scale—that is, to develop systems that could look for patterns in extremely large data sets. In this sense, when we talk about AI today, it's essentially what people were talking about as big data starting in the 1990s.

The second thing these systems rely on is massive amounts of computational power to process all this data. Overall, this means that AI, as a field, has become increasingly dependent on the resources of a small number of big tech companies that have built or acquired these two things: huge data sets and huge computational power. What it doesn't really mean, though, is any close replication of human intelligence. So although it's very effective at doing a small subset of tasks, what we refer to as AI is very different from what humans are able to do.
Vyse: What's new about ChatGPT in this history, then?

West: ChatGPT is more effective than anything before it at producing text responses that closely mimic human writing. But even there, I'd understand it as the newest version of older technology. If you go back to 1966, for example, Joseph Weizenbaum developed a natural-language-processing system called ELIZA, an early chatbot. ELIZA was designed specifically to mimic what it was like to be in a therapy session. So if you were to say, I had a really hard day at work, ELIZA would know to say, Tell me more about that.

When people interacted with ELIZA in 1966, they had much the same reaction as people are having to ChatGPT today—that it was this remarkable technology. In fact, Weizenbaum's secretary—because her conversations with ELIZA felt so intimate—would ask him to close the door when she was speaking to it, even though she knew exactly how the system worked.

Another example that's a little more recent: I remember, back in the 2000s, playing around with a chatbot called SmarterChild on AOL Instant Messenger that would offer essentially the same kind of interaction—you'd talk to the system about your day, and it would feel like a very intimate experience. ChatGPT builds on those precursors, but it does so using huge amounts of data, largely culled from the internet.

More from Sarah Myers West at The Signal:

"I don't see ChatGPT being used effectively in tasks that rely on deeper levels of intelligence. I do think we're likely to see it accelerate a general preexisting trend, though, that's been devaluing certain categories of work—or making their human elements more mundane.
For instance, where someone used to write text themselves, work that required real creativity and ingenuity, they might now be asked just to edit AI-generated text and make it incrementally more meaningful."

"There've already been extensive concerns about how ChatGPT can be used for cheating in education systems, manipulation in the media, new ways to con people into giving over their information, or fraud generally—all of which are legitimate, and all of which OpenAI has acknowledged. But there's also a deeper concern about the technology's use by a very few companies: that they'll develop its use without any clear avenue for broader public input as to how it will affect people and the public interest."

"As to how widespread the effects will be, I honestly don't know—and I don't think that even the companies behind the technology necessarily know yet. They certainly haven't discussed publicly how they intend to make it profitable. Which I think is the critical issue at this juncture: There just isn't very much transparency from those who're developing this technology, and those who'll be deploying it, about where it's heading."

[Continue reading ...](