
Facebook owner Meta stirs up controversy with new Llama 4 AI that has political leanings cleaned out, making it more like Musk's Grok

Source: Llama 4

Facebook, WhatsApp and Instagram parent company Meta has released the Llama 4 versions of its generative artificial intelligence (GenAI) models; if you want evidence that tech nowadays is very different from just a few years ago, look no further.

Reading Meta’s blog post announcing the Llama 4 models, I (and a bunch of other people) did a double-take on reading this:

“It’s well-known that all leading LLMs [large language models] have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet.”

The first thought here is that Meta’s AI engineers must hang out in different parts of the Internet than other people when they go data snarfing.

Although the public version of Llama 4 denied it had been trained on “ideological data” and the AI insisted it doesn’t have a specific political leaning, it qualified that with:

“That being said, my training data may include content from sources with left-leaning or progressive perspectives, such as:

  • News articles from outlets like The New York Times, The Guardian, or The Washington Post
  • Research papers from academic journals and institutions
  • Books and essays from authors with diverse backgrounds and perspectives.”

To reassure users who do not approve of dreadful liberal sources such as the above (and uh, aren't those news articles copyrighted?), Llama 4 added that its training data also includes content from conservative, libertarian and “other” perspectives.

How is Llama 4 better, then? This is what Meta says:

  • Reduced refusals: Llama 4 refuses to respond to debated topics only 2% of the time, a significant drop from Llama 3.3's 7%.
  • Balanced responses: the model now responds more evenly to prompts, with less than 1% of responses showing unequal refusal rates.
  • Minimised political lean: Llama 4's responses exhibit strong political lean at a rate comparable to Grok and half that of Llama 3.3.

Yes, that is what Meta spent goodness knows how many engineering hours and tens of thousands of expensive, power-hungry Nvidia graphics cards on.

To be fair, Meta didn't say which political leanings it has minimised. It could very well be that Meta succeeds in annoying both sides of the political spectrum with Llama 4.

However, the reference to Elon Musk's Grok AI, which is trained on X content that has become increasingly right-wing, and Meta's claim that LLMs have a problem with left-leaning biases, suggest Llama 4 might be the type of anti-woke AI Winston Peters would approve of.

Ironically enough, Grok disagreed with Llama 4’s premise that LLMs are universally leftist or intentionally lean in that direction. In fact, some LLMs amplify conservative talking points “when prompted on hot-button issues like immigration,” Grok suggested.

“But it’s not like there’s a grand conspiracy to make AI woke—sometimes it’s just a reflection of the messy, human world the data comes from,” Grok added.

How models respond in political terms depends more on how users prompt them and, sometimes, on reviewers’ fine-tuning of responses, Grok said.

Another irony in Meta's “anti-woke” tilt towards the right can be seen in Llama 4’s rather restrictive “community” licence, which is very much not a free and open source one.

For example, Meta is “complying” with the European Union’s AI Act by stating Llama 4 can’t be used in the trading and political bloc, with Switzerland captured by the restriction as well.

To be precise, it’s the multimodal Llama 4 models that can’t be used in the EU by individuals or companies directly; unless they’re end-users of products or services that incorporate said models. 

While people and organisations outside the EU don’t face the above restrictions, there are other bear traps in the Llama 4 licence that’ll make anyone think twice before using the AI.

Licensing issues apart, the DeepSeek-style mixture of experts (MoE) for Llama looks interesting. MoE is yet another opaque AI term, but it describes an architectural approach in machine learning. 

You have multiple specialised neural networks ("experts") that handle different aspects of a task, with a gating network determining which expert(s) should process a given input.

This means the AI can selectively activate a subset of its parameters (the internal values it has learnt during training) for each input it receives, instead of running the whole model, which requires masses of computing resources.
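The routing idea above can be sketched in a few lines of Python. This is an illustrative toy, not Meta's actual implementation: the expert matrices, gating weights, dimensions and the top-2 routing choice are all assumptions made up for the example.

```python
import numpy as np

# Toy mixture-of-experts: each "expert" is a tiny linear layer, and a
# gating network scores the input so only the top-k experts run. For any
# one input, only TOP_K of N_EXPERTS parameter sets are touched.
rng = np.random.default_rng(0)

D_IN, D_OUT = 8, 4        # made-up toy dimensions
N_EXPERTS, TOP_K = 16, 2  # 16 experts, as in Llama 4 Scout; top-2 routing assumed

experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D_IN, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x):
    """Route input x through the top-k scoring experts only."""
    scores = softmax(x @ gate_w)               # gating probabilities, one per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the chosen experts
    weights = scores[top] / scores[top].sum()  # renormalise over the chosen ones
    # Only the chosen expert matrices are multiplied; the other 14 sit idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=D_IN))
print(y.shape)  # (4,)
```

The saving is the point: here 2 of 16 expert weight matrices do work per input, which is how a model like Maverick can hold 400 billion parameters in total while only 17 billion are active for any given token.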

There’s a huge Llama 4 model that’s still training, Behemoth, that has a total of two trillion parameters; 288 billion are active, and the model has 16 experts. Behemoth will be used for distillation to teach other models.

The other two are Llama 4 Scout (17 billion active parameters, 16 experts and 109 billion total parameters) and Maverick (17 billion active parameters, 128 experts and 400 billion total parameters).

Scout has what appears to be a massive context window of 10 million tokens. That’s the maximum number of tokens (word fragments) a model can bear in mind during tasks; newer models usually have context windows of 32,000 or 128,000 tokens, limiting the length of conversations and, for example, the size of documents you can feed them.

Meta said Llama 4 is quicker on many tests than competing LLMs, but the company really flubbed the release of the AI in that respect. The company used a customised version of Maverick which isn’t publicly available, leading to accusations of benchmark gaming. Benchmarking AIs looks like a very random process anyway.

More to come as new AI models appear thick and fast. Google’s new Gemini 2.5 Pro looks pretty good for example, and all eyes are on China to see if they’ll pull off another DeepSeek shock with another capable yet cheap model.

To try out Llama 4 in a web browser, head over to Meta AI as usual; and you won't escape it in WhatsApp and other Meta properties either.
