DeepSeek left critical infrastructure unsecured and friendly hackers at security firm Wiz waltzed right into it, grinning from ear to ear (probably).
The cloud security specialist said that with some basic reconnaissance it found a publicly exposed DeepSeek database, accessible without any authentication at all and jam-packed with API keys, plaintext logs and more.
In a January 29 blog, Wiz security researcher Gal Nagli said the Chinese AI model provider had left a “ClickHouse” columnar database exposed.
He found it with a simple sniff around its subdomains, swiftly finding two unusual open ports (8123 and 9000) that led to the database.
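That kind of reconnaissance is not sophisticated. A check for those two ports (8123 is ClickHouse's HTTP interface, 9000 its native TCP protocol) can be sketched in a few lines of Python; the hostname below is a placeholder, not one of DeepSeek's actual subdomains:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike
        return False

# 8123: ClickHouse HTTP interface; 9000: ClickHouse native protocol
open_ports = [p for p in (8123, 9000) if is_port_open("db.example.com", p)]
```

Run against a list of discovered subdomains, a loop like this is all the "reconnaissance" needed to spot an exposed database.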
DeepSeek database exposed, disclosed...
Once in...
“Not only an attacker could retrieve sensitive logs and actual plain-text chat messages, but they could also potentially exfiltrate plaintext passwords and local files along [with] propriety [sic] information directly from the server using queries like: SELECT LOAD_FILE('{FileName}');”
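For context on why that matters: ClickHouse's HTTP interface (the service on port 8123) accepts SQL directly as a query-string parameter, so an unauthenticated endpoint lets anyone on the internet run arbitrary queries with a plain GET request. A minimal sketch of how such a request is formed, with a placeholder hostname:

```python
from urllib.parse import urlencode

def clickhouse_url(host: str, sql: str, port: int = 8123) -> str:
    """Build a URL for ClickHouse's HTTP interface, which executes
    the SQL passed in the 'query' parameter and returns the result."""
    return f"http://{host}:{port}/?{urlencode({'query': sql})}"

# Against an unauthenticated server, an ordinary GET on this URL
# would enumerate every table in the database
url = clickhouse_url("exposed-db.example.com", "SHOW TABLES")
```

No exploit code, no credential theft: the database simply answers whoever asks.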
The Wiz Research team “immediately and responsibly disclosed the issue to DeepSeek, which promptly secured the exposure,” it added today.
Microsoft did a similar whoopsie
Misconfigurations are one of the biggest causes of cloud security incidents and data breaches (the numbers vary depending on which vendor has put out research in any given week, but they are high…)
It’s not just Chinese companies either, of course.
In late 2023, researchers at (guess where?) Wiz found that Microsoft AI researchers had exposed 38 terabytes of data for three years, including employees’ personal computer backups, passwords to Microsoft services and secret keys; a major security blunder that could also have let a malicious attacker inject malicious code into exposed AI models.
Scanning just a small portion of the data, they found credentials for Git, MS Teams, Azure, Docker, Slack, as well as personal “@microsoft.com” passwords along with private SSH keys (used for authentication), private GPG keys (used for encryption), and private Azure Storage account keys.
More recently, HuggingFace had a security incident, whilst a trio of popular Python frameworks – Gradio by Hugging Face, Jupyter Server, and Streamlit from Snowflake – were found to be vulnerable to NTLMv2 hash disclosure; security firm Horizon3.ai said last summer that at least one of those vulnerabilities could be "exploited by unauthenticated attackers [and the vulnerabilities disclosed] have come up in real-world pentests."
See also: Hash, crack, and the data scientist: Trio of Python frameworks exposed
Of this week’s security blunder, Nagli wrote in Wiz’s blog: “The rapid adoption of AI services without corresponding security is inherently risky. This exposure underscores the fact that the immediate security risks for AI applications stem from the infrastructure and tools supporting them.
“While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like accidental external exposure of databases. These risks, which are fundamental to security, should remain a top priority for security teams,” he concluded.