Meta’s hefty investment in artificial intelligence includes the development of an AI system designed to power Facebook’s entire video recommendation engine across all its platforms, a company executive said Wednesday.
Tom Alison, the head of Facebook, said part of Meta’s “technology roadmap that goes to 2026” involves developing an AI recommendation model that can power both the company’s TikTok-like Reels short-video service and its more traditional, longer videos.
To date, Meta has typically used a separate model for each of its products, like Reels, Groups and the core Facebook Feed, Alison said onstage at Morgan Stanley’s tech conference in San Francisco.
As part of Meta’s ambitious foray into AI, the company has been spending billions of dollars on Nvidia graphics processing units, or GPUs. They have become the primary chips used by AI researchers for training the kinds of large language models (LLMs) that power OpenAI’s popular ChatGPT chatbot and other generative AI models.
Alison said “phase 1” of Meta’s tech roadmap involved switching the company’s current recommendation systems from more conventional computer chips to GPUs, helping to improve the overall performance of its products.
As interest in LLMs exploded last year, Meta executives were struck by how these giant AI models could “handle lots of data and all kinds of very general-purpose types of activities like chatting,” Alison said. Meta came to see the potential of a giant recommendation model that could be used across products, and by last year had built “this kind of new model architecture,” Alison said, adding that the company tested it on Reels.
The new model architecture delivered “an 8% to 10% gain in Reels watch time” on the core Facebook app, which Alison said helped show that the model was “learning from the data much more efficiently than the previous generation.”
“We’ve really focused on kind of investing more in making sure that we can scale these models up with the right kind of hardware,” he said.
Meta is now in “phase 3” of the system’s re-architecture, which involves trying to validate the technology and push it across multiple products.
“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model?” Alison said. “If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”
Illustrating how it would work if successful, Alison said, “If you see something that you’re into in Reels, and then you go back to the Feed, we can kind of show you more similar content.”
Alison said Meta has amassed an enormous stockpile of GPUs that will be used to support its broader generative AI efforts, such as the development of digital assistants.
Some generative AI projects Meta is considering include incorporating more sophisticated chatting tools into its core Feed, so a person who sees a “recommended post about Taylor Swift” could perhaps “easily just click a button and say, ‘Hey Meta AI, tell me more about what I’m seeing with Taylor Swift right now.'”
Meta is also experimenting with integrating its AI chatting tool within Groups, so a member of a Facebook baking group could potentially ask a question about desserts and get an answer from a digital assistant.
“I think we have the opportunity to put generative AI in kind of a multiplayer kind of consumer environment,” Alison said.
WATCH: CNBC’s full interview with Meta’s Nick Clegg
Source: www.cnbc.com