ChatGPT’s Big Bang: We haven't yet internalized the implications of artificial intelligence

The combination of ChatGPT's ability to recognize patterns and make predictions based on huge amounts of data, and our ability to interact with it in natural language, has created the big bang in artificial intelligence we are now witnessing

The field of artificial intelligence is not new. The conceptual infrastructure for creating an artificial neural network was already laid in 1943. Seven years later, the mathematician Alan Turing developed the Turing Test, which tests a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. The term itself, artificial intelligence, was coined in 1956.
Over the years, researchers have developed systems that play checkers and win at chess and trivia games. Netflix's famous recommendation algorithm, one of the most well-known applications of AI, came into use for the first time in the late 1990s. It started as a simple algorithm that suggested movies based on the user's previous choices, and over time it became increasingly sophisticated, taking into account factors such as user preferences, viewing history and ratings.
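A recommendation system of the kind described above is commonly built with collaborative filtering: find users with similar tastes, then suggest what they liked. The sketch below is a minimal, hypothetical illustration of that idea in pure Python; the user names and ratings are invented, and real systems like Netflix's use far richer signals and models.

```python
# A minimal collaborative-filtering sketch: recommend unseen movies to a
# user by weighting other users' ratings by taste similarity.
# All data here is invented for illustration.
from math import sqrt

ratings = {
    "alice": {"Heat": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Heat": 4, "Alien": 5, "Amelie": 2},
    "carol": {"Heat": 1, "Amelie": 5, "Brooklyn": 4},
}

def similarity(a, b):
    """Cosine similarity over the movies two users both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    na = sqrt(sum(a[m] ** 2 for m in common))
    nb = sqrt(sum(b[m] ** 2 for m in common))
    return dot / (na * nb)

def recommend(user):
    """Rank movies the user hasn't seen, weighted by rater similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for movie, rating in their_ratings.items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # → ['Brooklyn']
```

Here "alice" has not rated "Brooklyn", so it is suggested on the strength of "carol" having liked it, scaled by how similar carol's tastes are to alice's.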
ChatGPT (Photo: REUTERS/Florence Lo)
During the 2000s, the field developed rapidly. Google introduced a virtual assistant that can understand queries in natural language, and Amazon revealed an artificial intelligence model it developed to run its product recommendation system.
And yet, it remained an area of only limited public interest. It's easy to see why: artificial intelligence was a language of machines talking to each other, or to a select and limited group of researchers.
Communication with the machine was reserved for mathematicians, programmers, geeks, science-fiction devotees, or those anxious about the possibility of the "rise of the machines".
All this has changed in recent years, after a new and exciting player entered the AI field: ChatGPT. The chatbot, released by OpenAI in late 2022 and built on the GPT language models the company first presented in 2018, marked a significant breakthrough in natural language processing and changed the way we interact with technology. The model uses deep learning to generate responses that read as if they were written by a human.
Its underlying model, GPT-3, is one of the largest language models ever created, with 175 billion parameters. It is this enormous scale that allows it to generate responses that sound far more natural than those of earlier language models.
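At its core, a language model of this kind repeatedly predicts a likely next word given what came before. The toy sketch below illustrates that predict-and-sample loop with a tiny bigram table; it is only an illustration of the mechanism, not how GPT models actually work, which involves deep neural networks trained on vast text corpora.

```python
# A toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely next word.
# Real language models replace the count table with a deep neural network.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, how often each word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Generate text by sampling the next word in proportion to its count."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no known continuation, stop early
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

Scaling this loop up, with a 175-billion-parameter network instead of a count table, is what lets ChatGPT produce fluent paragraphs rather than word salad.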
And so, communication with the machine changed considerably. From a language known only to true experts, it has become easy and natural for everyone, or at least for English speakers. Hebrew speakers can enjoy the wonder only partially, at least for now.
The combination of ChatGPT's ability to recognize patterns and make predictions based on huge amounts of data, alongside our ability to communicate with the machine in natural language, created the explosion we are now witnessing. The technology is more accessible than ever before, and this is expected to have significant and profound consequences for a wide range of industries, from customer service and marketing, through journalism and content creation, to the pharmaceutical industry and medical research.
This is a defining moment that raises many concerns about the ethical implications of AI. These include issues such as bias, invasion of privacy, abuse and the impact on employment. These concerns were expressed, among other ways, in an open letter from researchers and technology experts calling to suspend the "dangerous race" to develop artificial intelligence.
Whether through the intervention of regulators or through self-regulation, these concerns will have to be addressed. How and when this will happen is still too early to predict. But one thing is certain: from now on, nothing will be the same.