
OpenAI Cofounder Says Scaling Compute Is Not Enough to Advance AI


OpenAI cofounder Ilya Sutskever believes the AI industry will have to shift back to a research phase.

On an episode of the “Dwarkesh Podcast” published Tuesday, Sutskever, who is widely seen as a pioneer of modern artificial intelligence, challenged the conventional wisdom that scaling is the key to AI’s progress.

Tech companies have poured hundreds of billions of dollars into acquiring GPUs and building data centers, all in the service of making their AI tools, whether LLMs or image-generation models, better.

The wisdom goes that the more compute and training data you have, the smarter your AI tool will be.

Sutskever said in the interview that, for roughly the past half-decade, this “recipe” has produced impactful results. The approach also appeals to companies because it offers a simple and “very low-risk way” of investing resources, compared with pouring money into research that could lead nowhere.

However, Sutskever, who now runs Safe Superintelligence Inc., believes that method is running out of runway; data is finite, and organizations already have access to a massive amount of compute, he said.

“Is the belief really: ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true,” Sutskever said. “So it’s back to the age of research again, just with big computers.”

Sutskever didn’t discount the need for compute, saying it remains necessary for research and can be one of the “big differentiators” in an industry where every major organization is operating on the same paradigm.

Research, however, will be critical to finding effective and productive ways to use all that acquired compute, he said.

One area that will require more research, according to Sutskever, is getting models to generalize as well as humans do, learning from only small amounts of information or a handful of examples.

“The thing, which I think is the most fundamental, is that these models somehow just generalize dramatically worse than people,” he said. “It’s super obvious. That seems like a very fundamental thing.”
