Fake OpenAI ‘Privacy Filter’ Repository Spreads Malware After Trending on Hugging Face

A fake repository impersonating OpenAI’s Privacy Filter tool briefly topped Hugging Face trends and reportedly attracted hundreds of thousands of downloads while secretly distributing information-stealing malware to users.

San Francisco | May 12, 2026 | Cybersecurity researchers have uncovered a malicious repository impersonating OpenAI’s Privacy Filter model on the AI platform Hugging Face, triggering concerns over rising supply-chain attacks targeting developers and artificial intelligence communities.

According to cybersecurity reports, the fake repository, named “Open-OSS/privacy-filter,” copied the description and appearance of OpenAI’s legitimate Privacy Filter project almost entirely to deceive users into downloading infected files. The malicious project reportedly climbed to the number one trending position on Hugging Face before being removed by the platform.

Researchers from HiddenLayer stated that the fraudulent repository contained a harmful “loader.py” script designed to install Rust-based infostealer malware on Windows systems. Once executed, the malware disabled SSL verification, downloaded additional malicious payloads, and attempted to evade security detection tools.
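Behaviors like the SSL-verification bypass described above are among the red flags defenders can scan for before running code from an unfamiliar repository. A minimal illustrative static check is sketched below; the pattern list is a hypothetical example for demonstration, not HiddenLayer’s actual detection logic.

```python
import re

# Illustrative patterns often associated with malicious loader scripts
# that disable TLS checks or fetch remote payloads. This list is an
# example only, not an exhaustive or official detection ruleset.
SUSPICIOUS_PATTERNS = [
    r"verify\s*=\s*False",               # requests calls with TLS checks off
    r"ssl\._create_unverified_context",  # stdlib TLS bypass
    r"urllib\.request\.urlopen\(",       # remote fetch inside a model-repo script
    r"subprocess\.(Popen|run|call)",     # spawning external processes
]

def flag_suspicious_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any suspicious pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line) for p in SUSPICIOUS_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# Hypothetical snippet resembling a risky downloader script:
sample = "import requests\nresp = requests.get(url, verify=False)\n"
print(flag_suspicious_lines(sample))  # → [(2, 'resp = requests.get(url, verify=False)')]
```

A pattern scan like this cannot prove code is safe, but it cheaply surfaces lines that deserve manual review before execution.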

The malware was reportedly capable of stealing browser credentials, Discord tokens, cryptocurrency wallet information, VPN and FTP credentials, screenshots and sensitive system data from infected devices. Security analysts warned that developers and AI researchers who downloaded or executed files from the repository could face severe data compromise risks.

Investigators also revealed that the attackers may have artificially boosted the repository’s popularity using fake downloads and automated “likes” to increase credibility and visibility on the platform. Reports suggested the repository received nearly 244,000 downloads and hundreds of likes within a short period before it was disabled.

Cybersecurity experts said the incident highlights growing threats within open-source AI ecosystems, where attackers exploit trusted platforms and trending repositories to distribute malware. Researchers further discovered several additional repositories linked to similar malicious activity, indicating a broader coordinated campaign.

The latest incident has renewed calls for stronger verification systems, improved repository moderation and greater caution among developers downloading open-source AI tools from online platforms.
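One concrete precaution implied by that advice is verifying a published checksum before executing any downloaded artifact. A minimal sketch follows; the file name and digest here are hypothetical, and in practice the expected digest should come from the project’s official page, not from the repository being verified.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: Path, expected_digest: str) -> bool:
    """Return True only if the file matches the publisher's digest."""
    return sha256_of(path) == expected_digest.lower()

# Hypothetical usage: simulate a downloaded artifact and check it
# against a digest obtained from a trusted, out-of-band source.
artifact = Path("model_weights.bin")
artifact.write_bytes(b"example payload")
expected = hashlib.sha256(b"example payload").hexdigest()
print(verify_download(artifact, expected))  # → True
```

A matching digest does not guarantee the publisher is trustworthy, only that the file was not swapped in transit, so it complements rather than replaces repository vetting.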
