Meta warns of new malware threats using AI tools

Facebook parent company Meta has warned that it has discovered "around ten new malware families" that use AI chatbot tools to hack into users' accounts. In a new security report, Meta said that hackers are taking advantage of people's interest in OpenAI's ChatGPT to trick them into installing malware that pretends to provide AI functionality.

One of the more pressing schemes identified by Meta is the proliferation of malicious web browser extensions that appear to offer ChatGPT functionality. These extensions contain malware that can access a user's device once installed. According to Meta, it has discovered over 1,000 unique URLs that offer malware disguised as ChatGPT or other AI-related tools and has blocked them from being shared on Facebook, Instagram, and WhatsApp.

The report also highlights the constantly evolving tactics hackers use to get around security protocols. In one example, bad actors quickly automated the process of taking over business accounts and granting themselves advertising permissions.

Meta has reported the malicious links to the domain registrars and hosting providers used by these bad actors. The company's security researchers also dive into the more technical aspects of recent malware strains such as Ducktail and NodeStealer.

The report underlines the need for vigilance around AI tools, which are becoming increasingly popular and are now being weaponized by hackers.
