Meta's Mark Zuckerberg says open source is the way forward for AI, with the company releasing new Llama 3.1 models
AI rendition of Mark Zuckerberg

Facebook parent Meta is not going down the proprietary software route with its artificial intelligence (AI) efforts, preferring instead to release the new Llama 3.1 large language models (LLMs) as open source.

This is unlike OpenAI, Microsoft, Anthropic, and Google, which are developing commercial AI products and services, often sold on a subscription basis.

Meta founder and chief executive Mark Zuckerberg wrote a detailed blog post on the launch of the new Llama models, drawing parallels with Linux, which is also open source and which has created a huge ecosystem ranging from personal devices to the cloud.

Zuckerberg took a dig at Apple: "One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms," he wrote.

The background here is that Apple is gearing up to launch AI products and services in earnest this year. One way to compete against Apple and other AI companies is to release what you have as free and open source.

Back to the new models: Llama 3.1 comes in three versions: one with 8 billion parameters, another with 70 billion, and the big dog with 405 billion. The context length has also been bumped up to 128,000 tokens, and the models are said to have stronger reasoning capabilities.

Parameters are the numerical values a model learns during training; broadly speaking, the more parameters, the more complex and nuanced the responses the model can produce.

There is some argument as to how open source and free to use Llama really is, but the models are multilingual and fairly fresh, with a training data cut-off of December 2023. They appear to perform well against competing models too.

OpenAI, Microsoft and Google probably don't want to see this, but there are few strings attached to running Llama 3.1.

That is, no monthly fees and no usage charges. Llama 3.1 is released under Meta's own Llama 3.1 Community License, which allows free use, modification, and distribution of the models. It is not quite open source in the strictest sense of the word, though: the licence comes with an acceptable use policy, and companies with more than 700 million monthly active users must request a separate licence from Meta.

To get started with Llama 3.1, users can simply download a model and run it on their own machine. One easy option is Ollama, a free tool that downloads and runs multiple LLMs, and which you talk to from the command line in a terminal emulator.
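As a minimal sketch of what that looks like in code, assuming Ollama is installed and running locally and the 8 billion parameter model has been fetched with "ollama pull llama3.1:8b", the official ollama Python client can query it like this:

    # Minimal sketch, assuming a local Ollama install (pip install ollama)
    # with the llama3.1:8b model already pulled.
    import ollama

    # Send one chat message to the locally running model.
    response = ollama.chat(
        model="llama3.1:8b",
        messages=[{"role": "user", "content": "In one sentence, what is a model parameter?"}],
    )
    print(response["message"]["content"])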

In terms of hardware requirements, Llama 3.1 is surprisingly flexible. While it can take advantage of high-end graphics cards (which are well-suited for AI work) and large amounts of memory, it can also run on more modest hardware. This makes it accessible to a wide range of users, from researchers and developers to hobbyists and enthusiasts.

That doesn't apply to the chunky 405 billion parameter version of Llama 3.1, though. Running that locally would be extremely challenging, if not impossible, for most users: it requires computational resources and memory far beyond what most personal computers or even high-end workstations can provide.

A model of that size needs multiple high-end graphics cards with tens of gigabytes of memory each, typically in a high-performance computing cluster or on cloud-based infrastructure. It also needs a lot of storage: at 16 bits per parameter, the weights alone come to roughly 800 gigabytes, before you account for working memory.
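The back-of-envelope arithmetic behind those numbers is simple: weight memory is roughly the parameter count multiplied by the bytes used per parameter. A quick sketch (these are lower bounds, since serving a model also needs memory for activations and the attention cache):

    # Rough weight-memory estimates for the three Llama 3.1 sizes.
    # These are lower bounds: serving also needs memory for activations
    # and the key-value cache.
    GIB = 1024 ** 3

    for name, params in [("8B", 8e9), ("70B", 70e9), ("405B", 405e9)]:
        for precision, bytes_per_param in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
            gib = params * bytes_per_param / GIB
            print(f"{name} at {precision}: ~{gib:,.0f} GiB")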

For most users, it's more practical to use a cloud-based service such as Meta's own meta.ai, or to access the model through an application programming interface (API) from one of the many hosting providers, which lets you tap into the power of large language models without any local infrastructure.
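Many hosting providers expose Llama 3.1 behind an OpenAI-compatible chat completions endpoint. As a hedged sketch, with the base URL, model name and API key below all placeholders for whatever your provider actually documents:

    # Hypothetical sketch: BASE_URL, MODEL and API_KEY are placeholders;
    # substitute the values your hosting provider documents.
    import requests

    BASE_URL = "https://api.example-llama-host.com/v1"  # placeholder
    MODEL = "llama-3.1-70b-instruct"                    # provider-specific name
    API_KEY = "your-api-key-here"                       # placeholder

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL,
            "messages": [
                {"role": "user", "content": "Summarise the Llama 3.1 release in two sentences."}
            ],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])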

There's the electricity to consider as well. If you want to read more on what it took to train the models, including the time, power consumption and CO2 greenhouse gas emissions (in tonnes!), Meta's GitHub has you covered.

So, what can you do with Llama 3.1? That depends on how much of a Deep Geek you are. The Llama 3.1 models can be prompted or fine-tuned for a variety of applications, from chatbots and virtual assistants to content generation and language translation. See Meta's use cases for more examples.
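As one last hedged sketch, again assuming a local Ollama install with the 8 billion parameter model pulled, a system prompt is enough to steer the stock model into a translation role, no fine-tuning required:

    # Sketch: steering the stock model toward one task with a system prompt.
    # Assumes a local Ollama install with llama3.1:8b already pulled.
    import ollama

    response = ollama.chat(
        model="llama3.1:8b",
        messages=[
            {"role": "system", "content": "You are a translator. Reply only with the French translation."},
            {"role": "user", "content": "The weather is lovely today."},
        ],
    )
    print(response["message"]["content"])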


3 Comments

Wow. I just ran llama3.1:8b on my machine, and it seems impressive. It knew the title, journal name and volume of a paper from 1911 that I queried it about... what the heck? Must have had that in the training data for some reason. Seems almost better than dolphin-mixtral 8x7b.


Because all journal articles from the last couple of centuries are already well recorded in public library catalogs. But I get it. You asked AI to do a Google search and are surprised it did not fail at it. This only goes to prove it is not worth much above a simple catalog search engine designed by any student. Come back next week when you are surprised it can string a sentence together and the human population has added much more toxic and power waste for no functional improvement. Much like how Google's AI is just repeating search results off chat blogs, or how Alexa was literally hundreds of millions down the drain for a fancy timer that served ads none of the users wanted.

It is even sadder that for all that effort spinning up Llama, you could have just used existing tech for less time and less effort. The AI was actually decreasing productivity and ability. It is more of a hindrance and a concern, as we have graduates, often in the arts, who are graded on home-produced AI garbage with no idea how to evaluate it or do it themselves, and who then go out into the world to perform roles in education, medical fields, design, management etc. The only saving grace we have is that STEM education actually involves more practical skills that need understanding to perform. But don't worry, scraping Stack Overflow answers and public repositories has already been encroaching on the quality of tech skills and understanding of basic procedures; just look at what CrowdStrike called a QA testing and deployment plan.


Hmmm, if you think that Google AI is simply repeating search results, you might want to read up a bit more about LLMs. And depending on the level of your technical background, I might have some book recommendations.

Don't fall for the MSM hysteria about the energy requirements of AI. This is just the latest source to fill their outrage quota.
