Meta AI event in London: open-source AI, disinformation, and Llama 3


Meta AI’s London event this Tuesday saw Yann LeCun, Nick Clegg, and others discuss current topics in AI.

Clegg, the former UK Deputy Prime Minister and current President of Global Affairs at Meta, discussed the need for AI to break free from the “clammy hands” of Silicon Valley.

Clegg spoke of the importance of making AI tools widely and freely available, releasing them from the monopolistic grasp of a few large tech corporations in the US. 

This is very much in keeping with Meta AI’s ethos, which seeks to challenge proprietary AI R&D at Microsoft, Google, etc.

However, distinguishing Meta itself from ‘large US tech corporations’ is a tenuous position.

While Meta’s Llama language models aren’t entirely open-source (and the meaning of the term is hotly debated), they’re certainly more open than models from OpenAI, Google, etc. 

Meta’s chief AI scientist, Yann LeCun, one of the field’s best-known researchers and also present at the event, is an ardent supporter of open-source AI initiatives. 

“It’s crucial to democratize the technology so it’s not just kept in the clammy hands of a small number of very large and well-heeled companies in California,” Clegg stated, echoing LeCun, who remarked, “This cannot be done by a handful of companies on the West Coast of the US.”

Others, such as NVIDIA CEO Jensen Huang and former Stability AI CEO Emad Mostaque, have spoken about the need for countries to build their own sovereign AI and free the technology from centralized ownership. 

Speaking at an event earlier this year, Huang said, “[AI] codifies your culture, your society’s intelligence, your common sense, your history – you own your own data.”

Nick Clegg downplays AI’s threat to global democracy

Clegg went against the prevailing narrative when he pointed out that AI tools haven’t been systematically employed to disrupt or subvert major elections in countries like Taiwan, Pakistan, Bangladesh, and Indonesia so far this year.

At face value, it seems a bizarre comment, since Indonesia, Pakistan, and Bangladesh all suffered AI-related disinformation issues. In Bangladesh last year, deep fake videos aimed to discredit opposition figures, for example by showing them taking unpopular stances on sensitive issues like the Israel-Gaza conflict. 

In Indonesia, Erwin Aksa, the deputy chairman of Golkar, one of Indonesia’s major political parties, posted a deep fake of former dictator Suharto that amassed over 4.7 million views. It was designed to encourage people to vote.

In Pakistan, an AI avatar of ex-Prime Minister Imran Khan declared victory amid a chaotic vote count that remains contentious today. 

“It is right that we should be alert and we should be vigilant, but it is striking how little these tools have been used on a systematic basis to really try and subvert and disrupt the elections,” Clegg noted during the Meta AI Day event.

Granted, we can’t quantify the impact or harm. However, the tactics are there to be used, and evidence from several scientific disciplines shows that deep fakes do influence human decision-making, often with lasting effects.

Clegg advocates viewing AI as both a defensive and offensive tool, or, in his words, our “sword and shield” against disinformation.

Llama 3 is imminent

Meta also disclosed its near-term plans for the debut of Llama 3, the successor to Llama 2. Like its predecessor, it will be released under Meta’s license, making it free and open-source to some extent. 

Nick Clegg, Meta’s President of Global Affairs, announced, “Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3. There will be a number of different models with different capabilities, different versatilities [released] during the course of this year, starting really very soon.” 

Chris Cox, Meta’s Chief Product Officer, expanded on the company’s vision and described its intent to integrate Llama 3 across multiple Meta products.

This model will also apparently be more open in its ‘nature,’ with weaker or more flexible guardrails. Meta recently withheld Emu, its image generation tool, over considerations around latency, safety, and usability – so the company isn’t throwing caution to the wind entirely. 

Although specifics about Llama 3’s parameters were not disclosed, it is anticipated to have around 140 billion parameters, surpassing its predecessor, Llama 2, whose largest version had 70 billion.