AI is fluent in language. Should we trust what it says?

But even as the fluency of GPT-3 has dazzled many observers, the large-language-model approach has also attracted significant criticism in recent years. Some skeptics argue that the software is capable only of blind mimicry: that it imitates the syntactic patterns of human language but cannot generate its own ideas or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, channeling research dollars and attention into what will ultimately prove to be a dead end and keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever be compromised by the biases, propaganda and misinformation in the data it has been trained on, which means that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they will not be deployed commercially in the coming years. And that raises the question of how they, and the other advances of AI, should be rolled out into the world. With the rise of Facebook and Google, we’ve seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of this scale and ambition, with so much promise and so much potential for abuse?

Or should we be building it at all?

The origins of OpenAI date back to July 2015, when a small group of tech luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the iconic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power, along with new advances in neural network design, had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long “AI winter,” the decades when the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a far higher level of accuracy than any neural network had previously achieved. Google quickly swooped in to hire AlexNet’s creators, while simultaneously acquiring DeepMind and launching an initiative of its own called Google Brain. The widespread adoption of smart assistants like Siri and Alexa showed that even relatively scripted agents could be consumer successes.

But during this same period, there was a seismic shift in public attitudes toward Big Tech, with previously popular companies such as Google and Facebook being criticized for their near-monopolistic power, their amplification of conspiracy theories and their relentless diversion of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing on op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book “Superintelligence,” laying out a range of scenarios in which advanced AI could deviate from the interests of humanity with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” It seemed as if the same cycle might be about to repeat itself, only this time the algorithms might not just sow polarization or sell our attention to the highest bidder; they might end up destroying humanity itself. And once again, all the evidence suggested that this power would be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: to find the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had plagued the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape, one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as organizational: if AI was to be released into the world in a safe and beneficial way, it would require innovation at the level of governance, incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling predictions of Bostrom and Hawking convinced them that AI’s achievement of humanlike intelligence would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control it.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman had signed on to become the organization’s chief executive, with Brockman overseeing the technology; another dinner attendee, the AlexNet co-creator Ilya Sutskever, was hired away from Google to head research. (Elon Musk, who was also present at the dinner, joined the board but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: “OpenAI is a non-profit artificial-intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

The founders of OpenAI would publish a public charter three years later, spelling out the core principles behind the new organization. The document was easily read as a not-so-subtle dig at Google’s onetime “Don’t Be Evil” slogan, an acknowledgment that maximizing the social benefits of new technology, and minimizing its harms, was not always a simple calculation. While Google and Facebook had achieved global dominance through closed-source algorithms and proprietary networks, the founders of OpenAI promised to go the other way, sharing new research and code freely with the world.
