Inflection.ai CEO Mustafa Suleyman explains how to catch a ride on the 'coming wave' of technology

(AP Illustration/Peter Hamlin)

If you have watched a telecast involving basketball superstar LeBron James during the past 20 years, you probably have heard an announcer declare: “You can't stop him, you can only hope to contain him.” That sentiment sums up how Inflection.ai CEO Mustafa Suleyman feels about artificial intelligence — a technology that he helped advance as a co-founder of DeepMind, which Google acquired in 2014.

After leaving Google last year, Suleyman started Inflection with LinkedIn co-founder Reid Hoffman in an effort to create artificial intelligence, or AI, that won't veer into racist, sexist or violent behavior. Inflection's first proof point is an AI-powered assistant named “Pi” that's touted as a safer alternative to better-known chatbots such as OpenAI's ChatGPT and Google's Bard.

Now Suleyman has co-written a book, “The Coming Wave,” focused on AI's promise and the need to limit its potential perils. He expanded upon his ideas in a recent interview with The Associated Press.

Q: How should we be thinking of artificial intelligence at this juncture?

A: I honestly believe we are approaching an era of radical abundance. We are about to distill the essence of what makes us capable — our intelligence — into a piece of software, which can get cheaper, easier to use, more widely available to everybody. As a result, everyone on the planet is going to get broadly equal access to intelligence, which is going to make us all smarter and more productive.

Q: Isn’t there also a risk that part of the human brain starts to atrophy and we collectively become dumber?

A: I think we are trending in the opposite direction. We are adding masses of new knowledge to the corpus of global knowledge. And that is making everyone, on average, way, way smarter and more discerning. These AIs are going to catch and develop your weaknesses. They are going to lift up your strengths. We are going to evolve with these new augmentations. We are going to invent new culture, new habits and new styles to adapt. If you look at the average American today, it’s actually pretty remarkable how different he or she would be from the average American a century ago.

Q: Your book talks a lot about the need to contain AI, but how do we do that?

A: We want to try to maximize the benefits while minimizing the harms. I think we have overcome this challenge many times before. Look at airline safety. It is unbelievably safe to get inside a tube moving 1,000 miles an hour at 40,000 feet. We have made so much progress on every one of these new technologies. I think we should be far more inspired and encouraged by the progress we have made and fixate less on the anxiety that everything is going to go wrong. It isn’t going to be easy, it is going to be strange and scary in many ways, but we have done it before and we can do it again.

Q: Should we be worried about the most popular forms of generative AI so far being controlled by Big Tech companies pursuing ever higher profits?

A: The commercial pressure is always going to be there so we have to navigate around it. This is the first time in many years that we have seen governments move so quickly and be so proactive. I also think that the commercial prize is going to be enormous. We have to learn the lessons from the social media age and make sure we move quickly when we start to see signs of some potential harms.

Q: Has Homo sapiens evolved into the so-called “Homo technologicus”?

A: We were always that, since we picked up a hand axe or a club, or we invented a pair of glasses or burned down a tree. We are a special species because we use tools. We should think of these new AIs as a set of tools that we control, that are accountable to us, that we can put boundaries around, that we can fundamentally contain. That’s the way they will remain safe and remain of service to us and the species.

Q: Should we be worried AI leads to the end of humans?

A: I worry about a lot of things, but the main things I am focused on are the near-term harms because we have a chance to really affect those and get those right. I think it’s too easy to imagine what might happen in 50 years. I think a lot of people have gotten caught up in the superintelligence framing of things. They are really thinking about things that may or may not happen that are way beyond my time horizon for prediction, especially in an era of climate change.
