
The year we learned to talk to ChatGPT and stopped being afraid of AI

In the past year, artificial intelligence became an everyday tool and reached hundreds of millions of users. In an instant, people acquired skills they never had: writing poems and code, and summarizing articles and books in seconds.

In a motion to shorten his three-year probation, Michael Cohen, once former President Donald Trump's closest adviser, apologized to the federal judge hearing his case. "As a nonlawyer, I have not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not," he wrote, trying to explain why he had attached three entirely made-up precedents to the motion. "Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online." Cohen was wrong: he hadn't missed a trend, he had missed a revolution.
2023 was the year artificial intelligence became an everyday tool. It had been embedded in large parts of our lives even before then, often without our noticing: in our phone cameras, in Amazon's shopping recommendations, in Netflix's viewing suggestions. But with the launch of ChatGPT about a year ago, the general public experienced the extraordinary usefulness of the technology for the first time.
ChatGPT (Credit: T.Schneider/Shutterstock)
The launch of ChatGPT at the end of 2022 was resounding, and within a short time hundreds of millions of people were using the free tool regularly. In an instant, people acquired skills they never had before. They wrote poems (albeit often bad ones), wrote code (even if it was frequently buggy), summarized articles and books in seconds, and drafted official letters and small legal claims. They sent the bot to try its hand at exams, and ChatGPT passed legal certification exams and even the medical licensing test. The American news site CNET began using AI to write articles. Much was written about the tectonic change the chatbot would supposedly bring to our lives in the near future, and within a few weeks it had been declared a doctor, a lawyer, a poet and a journalist.
The innovation was extremely impressive. Suddenly we witnessed something that hadn't happened in generations: collective corporate panic. The giants were caught off guard. While Microsoft poured another $10 billion into OpenAI in January, Google summoned its retired founders back in a kind of desperate call for help. Small private companies sprang up like mushrooms after the rain, some offering tools that complement chatbots, others offering chatbots of their own, and most attracting unusually large and fast funding rounds. The French startup Mistral embodied the spirit of the times, raising $113 million in a seed round just four weeks after being founded.
Public companies did not miss the hype either, adding AI to the mix and enjoying a surge in their shares just for being part of the sector. The chipmakers that supply the market's hardware saw the biggest jumps in their history: Nvidia's stock climbed 250%, to a valuation of more than a trillion dollars. The market went crazy.
In March, Bill Gates declared that "the age of artificial intelligence has begun," and days later analysts from Goldman Sachs delivered a harsh forecast: the current wave of innovation could wipe out 300 million jobs. On March 14, OpenAI launched an advanced version of ChatGPT (GPT-4), further strengthening its lead. Later that month, Google opened its competing Bard chatbot to the public for the first time, after one of the most embarrassing, error-filled debuts in its history: a botched demo had already knocked 9% off the company's stock in a single day, wiping some $100 billion from its value. Meta took many more months, and only in July did it open Llama 2, its language model, to developers. Amazon announced in April that it had entered into partnerships with several companies, including the Israeli startup AI21 Labs and Anthropic, maker of the chatbot Claude. Around the same time, Elon Musk incorporated his own new artificial intelligence company, xAI.
Artificial intelligence became a central preoccupation, one that demanded the collective learning of a whole world of concepts. We all got to know terms such as machine learning, neural networks, large language models and generative artificial intelligence. We became aware of the dangers of model bias and the problems of attributing human characteristics to computers. Computer scientist Gary Marcus helped popularize the idea of "hallucinations," the fundamental problem of bots making up facts and delivering them convincingly. Enthusiasm mixed with apprehension: in the same breath that artificial intelligence was declared democracy's savior, it was also declared its destroyer; the moment it was promised to free us into a four-day work week, it was also warned that it would enable greater exploitation.
Scientists, developers, ethicists and investors entered into a heated debate among themselves. The panic that gripped them was apparently not about how a chatbot would change our world, but the fear that artificial intelligence would soon surpass human intelligence, gain awareness and become an existential threat to the human race.
The panic was widespread, and at the end of March several hundred engineers published an urgent open letter calling for a six-month halt to development in the field, so that regulators could monitor it.
There was a grain of truth in their warning: plenty of research had already shown that artificial intelligence is used as a tool to systematically discriminate against certain populations, has helped steal original works, is the main technology for expanding government surveillance and violating citizens' privacy, and is the most effective means of exerting heavy-handed control over employees. Unfortunately, the hundreds of engineers were not interested in any of these dangers. No wonder, then, that their announcement did nothing to promote substantive discourse in the field. Elon Musk, who signed the letter, filed the papers to establish his own new artificial intelligence company, as mentioned, that very April.
The real backlash came not from the engineers, entrepreneurs and investors who kept talking about theoretical dangers without addressing the here and now, but from those already harmed by the tools: those whose original works were used to train the models, whose labor and skill, accumulated over years, were harvested without permission to build profit-generating tools that would then try to take the credit from them. It quickly became clear that creators, writers and others would not sit out this revolution as they had the rise of social networks, and that from now on content would not be given away for free.
The first to turn these concerns into action were Hollywood's screenwriters. In an unusual labor dispute with the major production companies, the writers demanded protection for their work against the future integration of artificial intelligence tools. At the end of a strike that lasted almost five months, the parties agreed that human screenwriters would continue to receive credit for their work, even if they use generative artificial intelligence tools like ChatGPT in the process. But this was only the beginning of the global legal saga these tools created. In July, the first copyright lawsuit was filed against OpenAI and Microsoft. It certainly won't be the last.
So far, dozens of lawsuits have been filed around the world against the massive data harvesting carried out by the companies operating the bots, OpenAI first among them. Writers, poets and publishers refuse to stand by while their original work is taken without permission or compensation. John Grisham, Sarah Silverman, Stephen King and George R.R. Martin are just a few of the famous creators who have joined class action lawsuits. They all demanded two things of the tech giants: that permission be obtained before their creations are used to train the models, and that they be compensated.
While everyone is left to protect themselves, workers their jobs and citizens their privacy, regulators and politicians continue to move slowly. No substantive legislation has been drafted, and no swift move has been made to offer users basic protection. The furthest along is the European Union, which is expected to complete the first comprehensive legislation governing how artificial intelligence may be deployed and used, how and from whom private information may be collected or data harvested from the internet, and what privacy and control obligations technology companies owe the general public. But this, too, has not yet been enacted.
In the United States, President Joe Biden issued an "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" in October. In November, OpenAI released a new set of tools and developments to the world, Google followed suit, and Musk launched his Grok chatbot on X, the social network he owns.
In a defensive move, and another sign that everyone is left to fend for themselves, OpenAI signed an unusual deal in December with Axel Springer, Europe's largest publisher. Under the arrangement, ChatGPT will present users with summaries of selected news content published by outlets in the Axel Springer group, such as Politico and Business Insider, and the bot will be able to answer questions by citing and linking to the group's content. Not a bad deal overall, and perhaps the only visible way for content creators to avoid (again) handing all their assets to the tech giants for free, as they did with the rise of social networks.
The deal, moreover, was signed just weeks after OpenAI itself had been shaken to its core: in November, the company's board of directors fired CEO Sam Altman in a move that came as a complete surprise. Altman, who had been with the company since its founding as a non-profit, had pushed it toward a for-profit model. Board members who felt he was moving too fast fired him, but within a week, in the face of fierce opposition from the company's employees, Altman was back and the board was out. His swift return marked the clear direction the field will take if left to its own devices: profits above all, speed as a guiding principle.
Altman's return to OpenAI was soon followed by another seismic event, befitting a fast-moving market: The New York Times filed a massive lawsuit against OpenAI and Microsoft for systematic infringement of the newspaper's copyrights. The suit came after negotiations, reportedly begun in April, failed to produce the kind of settlement OpenAI reached with Axel Springer, apparently because of the paper's more complex demands. Instead, The New York Times went to war to secure proper compensation for the great cost of producing quality, original news content, which the bots use for training and which constitutes a central part of their knowledge about the world.
Where will the field head next? It's not entirely clear. What is clear is that some of the enthusiasm for artificial intelligence that gripped the public this past year, and especially the technology sector, is directionless, pure hype. But other large parts of it are deeply rooted in the genuine innovation we saw this year. What can already be said is that if this really is a revolution that will touch every part of life, it should not be left to profit-seeking companies and entrepreneurs alone. We all need to take part in the discussion that defines not only what the technology can do, but what we want it to do for us.