For as much promise as artificial intelligence shows in making life better, OpenAI CEO Sam Altman is worried.
The tech leader who has done so much to develop AI and make it accessible to the public says the technology could have life-altering effects on nearly everything, particularly in the wrong hands.
There’s a possible world in which foreign adversaries could use AI to design a bioweapon, take down the power grid, or break into financial institutions and steal wealth from Americans, he said. It’s hard to imagine without superhuman intelligence, but with it, he said, it becomes “very possible.”
“Because we don’t have that, we can’t defend against it,” Altman said at a Federal Reserve conference this week in Washington, D.C.
“We continue to like, flash the warning lights on this. I think the world is not taking us seriously. I don’t know what else we can do there, but it’s like, this is a very big thing coming.”
Altman joined the conference Tuesday to speak about AI’s role in the financial sector, but also spoke about how it is changing the workforce and innovation. The growth of AI in the last five years has surprised even him, Altman said.
He acknowledged real fear that the technology has potential to grow beyond the capabilities that humans prompt it for, but said the time and productivity savings have been undeniable.
OpenAI’s most well-known product, ChatGPT, was released to the public in November 2022, and its models have evolved since; the current one is GPT-4o. Last week, the company had a model achieve “gold-level performance,” akin to operating as well as humans who are true experts in their field, Altman said.
Many have likened the introduction of AI to the invention of the internet, changing so much of our day-to-day lives and workplaces. But Altman instead compared it to the transistor, a foundational piece of hardware invented in the 1940s that allowed electricity to flow through devices.
“It changed what we were able to build. It became part of, kind of, everything pretty quickly,” Altman said. “And in the same way, I don’t think you’ll be talking about AI companies for very long, you will just expect products and services to use this technology.”
When prompted by the Federal Reserve’s Vice Chair for Supervision Michelle Bowman to predict how AI will continue to transform the workforce, Altman said he couldn’t make specific predictions.
“There are cases where entire classes of jobs will go away,” Altman said. “There are entirely new classes of jobs that will come and largely, I think, this will look somewhat like most of history, in that the tools people have to do their jobs will let them do more, achieve things in new ways.”
One of the unexpected upsides to the rollout of GPT has been how much it is used by small businesses, Altman said. He shared a story of an Uber driver who told him he was using ChatGPT for legal consultations, customer support, marketing decisions and more.
“It was not like he was taking jobs from other people. His business just would have failed,” Altman said. “He couldn’t pay for the lawyers. He couldn’t pay for the customer support people.”
Altman said he was surprised that the financial industry, which is highly regulated, was one of the first to begin integrating GPT models into its work, but some of OpenAI’s earliest enterprise partners have been financial institutions like Morgan Stanley. The company is now increasingly working with the government, which has its own standards and procurement process for AI, to roll out OpenAI services to government employees.
Altman acknowledged the risks AI poses in these regulated institutions, and with the models themselves.
The financial services industry is facing a fraud problem, and AI is only making it worse; it’s easier than ever to fake voice or likeness authentication, Altman said.
AI decisionmaking in financial and other industries presents data privacy concerns and potential for discrimination.
Altman said OpenAI’s models are “steerable,” in that a user can tell them not to consider factors like race or sex in making a decision, and that much of the bias in AI comes from humans themselves.
“I think AIs are dispassionate and unemotional,” Altman said. “And I think it’ll be possible for AI — correctly built — to be a significant de-biasing force in many industries, and I think that’s not what many people thought, including myself, with the way we used to do AI.”
As much as Altman touted GPT and other AI models’ ability to increase productivity and save humans time, he also spoke about his concerns.
He said AI hallucinations, instances in which a model produces inaccurate or made-up output, are still possible, though they have been greatly reduced in more recent models.
He also spoke of a newer concept called prompt injection, in which a model that has learned personal information can be tricked into revealing something to a user who shouldn’t know it.
In addition to the threat of foreign adversaries using AI for harm, Altman said he has two other major concerns for the evolution of AI. It feels very unlikely, he said, but “loss of control,” or the idea that AI overpowers humans, is possible.
What concerns him the most is the idea that models could get so integrated into society and get so smart that humans become reliant on them without realizing.
“And even without a drop of malevolence from anyone, society can just veer in a sort of strange direction,” he said.
There are mild cases of this happening, Altman said, like young people overrelying on ChatGPT to make emotional, life-altering decisions for them.
“We’re studying that. We’re trying to understand what to do about it,” Altman said. “Even if ChatGPT gives great advice, even if ChatGPT gives way better advice than any human therapist, something about kind of collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”
Excerpts from this article, originally published by Kansas Reflector, appear in this post. Republished, with permission, under a Creative Commons license.