It's a safe bet that most people who have used a Windows computer over the past couple of decades are familiar with the name McAfee. It was attached to an anti-virus program written by British-born John McAfee, who developed the first commercial anti-virus software. Over time, McAfee AV ended up being bundled with countless PCs and bought by vast numbers of users.
That sounds well nerdy, but McAfee was a pretty wild character until his death in 2021 in a prison cell near Barcelona, where he awaited extradition to the United States to stand trial on tax evasion charges. What took place prior to his death is not a good story from any angle.
McAfee followed yours truly on Twitter for some reason, and to my surprise a tweet appeared from his account in my timeline a couple of days ago:
I'm back with AIntivirus. An AI version of myself. You didn't think I would miss this cycle did you? https://t.co/thL0LoSZ2l

Do not fall for scam tokens. This is the only official AIntivirus CA: BAezfVmia8UYLt4rst6PCU4dvL2i2qHzqn4wGhytpNJW @AIntivirus @theemrsmcafee pic.twitter.com/PPJ12X77aQ

— John McAfee (@officialmcafee) January 23, 2025
There are suggestions that the accounts of McAfee and his widow, Janice, have been hacked by scammers trying to pump a "shitcoin" cryptocurrency so as to rug pull people. Is it the work of "shitcoinsherpa"? Who knows?
The coin and a John McAfee chatbot are being spammed across multiple sites and apps like Telegram as well.
Do not go anywhere near that creepy stuff.
China has entered the AI chat
In early January I wrote about China's DeepSeek-V3 generative artificial intelligence (GenAI) model that took the geek world by surprise. DeepSeek is big, capable, comes with open weights and open research papers, and, importantly, its models are said to have very low training costs, somewhere in the US$5.5 million region.
The earlier model turned out to be just the beginning. Now the new DeepSeek-R1 models have been released, and the AI crowd have really noticed them.
I can't vouch for the accuracy of DeepSeek's benchmarking, but the company says its models are on par with OpenAI's o1, which has been hailed as a major breakthrough in AI.
DeepSeek-V3 is censored quite heavily to align with Chinese politics and is, as such, hobbled for Western users. The R1 model is apparently less censored, but the hosted version is still suspicious of innocuous questions, such as one about Winnie the Pooh.
Meanwhile, DeepSeek-R1-Zero is said to be uncensored. Both have 671 billion parameters (the variables models learn from training data), with 37 billion of them active for any given token, a 128k context length that could be a bit larger, and they're text-only.
And, did I mention they're freely available?
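For those who want to poke at the open weights themselves, the full 671-billion-parameter checkpoints need serious hardware, but the R1 release also includes much smaller distilled variants. Here's a rough Python sketch of loading one of those with the Hugging Face transformers library; the model ID and generation settings are my assumptions based on DeepSeek's published releases, not something I've benchmarked:

```python
# Rough sketch: run one of DeepSeek's small distilled R1 checkpoints locally.
# The model ID "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B" is an assumption
# (one of the distilled variants published alongside R1); you'll also need
# the accelerate package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# R1-style models emit their chain of thought before the final answer,
# so give them a generous token budget to "think" in.
messages = [{"role": "user", "content": "Explain what a mixture-of-experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```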
Cheap training costs, very big models available for free (or accessible via a very low-cost application programming interface, API), trained on goodness knows what data; it's probably true that American AI vendors are desperately trying to figure out how DeepSeek works, and whether the efficiency and cheap training claims are true.
Because if the claims are true, the biggest tech companies in the world look not a little extravagant in comparison. From that perspective, DeepSeek dropping the R1 a day after Trump's US$500 billion AI announcement is absolutely impeccable timing.
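If downloading weights isn't your thing, the hosted API is the dirt-cheap route mentioned above. A minimal Python sketch, assuming DeepSeek's endpoint is OpenAI-compatible; the base URL and the "deepseek-reasoner" model name are my assumptions, so check the documentation before relying on them:

```python
# Minimal sketch: query the hosted DeepSeek-R1 model through an
# OpenAI-compatible endpoint. The base_url and "deepseek-reasoner"
# model name are assumptions -- verify against DeepSeek's API docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # issued by DeepSeek, not OpenAI
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # assumed model name for R1
    messages=[{"role": "user", "content": "Who is Winnie the Pooh?"}],
)
print(response.choices[0].message.content)
```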