Protect AI Releases 'Bug Bounty' Report On July Vulnerabilities

The vulnerabilities involve tools used to build machine learning models that fuel artificial intelligence applications.


Protect AI, a provider of artificial intelligence application security, has released its July vulnerability report.

The report was produced through Protect AI's AI/ML "bug bounty" program, huntr. According to the company, the program comprises more than 15,000 members who hunt for vulnerabilities across the "entire OSS AI/ML supply chain."

The vulnerabilities involve tools used to build ML models that fuel AI applications. The huntr community, along with Protect AI researchers, found these tools to be vulnerable to "unique security threats." These tools are open source, heavily downloaded and may come embedded with vulnerabilities, according to the company.

Here are highlights of the vulnerabilities huntr discovered in July:

Privilege Escalation (PE) in ZenML

Impact: Unauthorized users can escalate their privileges to the server account, potentially compromising the entire system. A vulnerability in ZenML allows users with normal privileges to escalate their privileges to the server account by sending a crafted HTTP request. This can be exploited by modifying the is_service_account parameter in the request payload.
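The flaw follows a common mass-assignment pattern: a handler copies client-supplied fields into the account record without checking whether the caller may set them. Below is a minimal Python sketch of that pattern and its fix; the function and field names are illustrative, not ZenML's actual code.

```python
# Hypothetical sketch of the vulnerable pattern behind the ZenML issue.
# An update handler that trusts every field in the request payload lets a
# normal user flip a privileged flag such as is_service_account.

def vulnerable_update_user(account: dict, payload: dict) -> dict:
    # BUG: blindly merges client-controlled fields, privileged ones included
    account.update(payload)
    return account

def patched_update_user(account: dict, payload: dict) -> dict:
    # Fix: strip privileged fields before merging (or require an admin check)
    safe = {k: v for k, v in payload.items() if k != "is_service_account"}
    account.update(safe)
    return account

user = {"name": "mallory", "is_service_account": False}
attacked = vulnerable_update_user(dict(user), {"is_service_account": True})
defended = patched_update_user(dict(user), {"is_service_account": True})
```

The general defense is an explicit allow-list of fields a non-admin caller may modify.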

Local File Inclusion (LFI) in lollms

Impact: Attackers can read or delete sensitive files on the server, potentially leading to data breaches or denial of service.

The sanitize_path_from_endpoint function in lollms does not properly sanitize Windows-style paths, making it vulnerable to directory traversal attacks. This allows attackers to access or delete sensitive files by sending specially crafted requests.
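The underlying mistake is easy to reproduce: a sanitizer that only looks for POSIX-style `../` segments lets a Windows-style `..\` path through. This is a hedged sketch of the pattern, not lollms's actual `sanitize_path_from_endpoint` code.

```python
# Illustrative sketch of a Windows-path sanitization bypass.

def naive_is_safe(path: str) -> bool:
    # Vulnerable pattern: only splits on "/", so "..\\" segments are missed
    return ".." not in path.split("/")

def robust_is_safe(path: str) -> bool:
    # Fix: normalize both separator styles before checking for traversal
    normalized = path.replace("\\", "/")
    return ".." not in normalized.split("/")

# A Windows-style traversal slips past the naive check but not the fixed one:
evil = "..\\..\\windows\\system32\\config"
print(naive_is_safe(evil), robust_is_safe(evil))
```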

Path Traversal in AnythingLLM

Impact: Attackers can read, delete or overwrite critical files, leading to data breaches, application compromise or denial of service.

A bypass in the normalizePath() function allows attackers to perform path traversal attacks. This can be exploited to read, delete or overwrite files in the storage directory, including the application's database and configuration files.
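Because normalization functions like this can be bypassed, the more robust defense is to resolve the final path and verify it still falls under the intended storage root. A minimal Python sketch of that containment check, with an assumed storage path:

```python
import os

STORAGE_ROOT = "/srv/anything-llm/storage"  # illustrative path, not the real one

def is_inside_storage(user_path: str) -> bool:
    # Normalizing alone (as a bypassed normalizePath() shows) is not enough;
    # the containment check against the root is what blocks "../" escapes.
    resolved = os.path.normpath(os.path.join(STORAGE_ROOT, user_path))
    return resolved == STORAGE_ROOT or resolved.startswith(STORAGE_ROOT + os.sep)
```

Checking the resolved path, rather than the raw input string, means the guard holds even if an attacker finds a new encoding trick the normalizer misses.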

Protect AI also released recommendations for fixing these vulnerabilities (and offers Sightline, a security feed of all found issues):

| CVE | Title | Severity | CVSS | Fixed | Recommendations |
|---|---|---|---|---|---|
| CVE-2024-5443 | Remote Code Execution via path traversal bypass of CVE-2024-4320 in lollms | Critical | 9.8 | Yes | Upgrade to version 9.8 |
| CVE-2024-5181 | Command Injection in localai | Critical | 9.8 | Yes | Upgrade to version 2.16.0 |
| CVE-2024-4315 | Lack of path sanitization for Windows leads to LFI in lollms | Critical | 9.1 | Yes | Upgrade to version 9.8 |
| CVE-2024-5211 | Path traversal to arbitrary file read/delete/overwrite, DoS attack and admin account takeover in anything-llm | Critical | 9.1 | Yes | Upgrade to latest version |
| CVE-2024-5711 | Stored XSS in chat in devika | High | 8.1 | Yes | Upgrade to latest version |
| CVE-2024-5549 | Data leak through CORS misconfiguration in devika | High | 8.1 | Yes | Upgrade to latest version |
| CVE-2024-5182 | Path Traversal in localai | High | 7.5 | Yes | Upgrade to version 2.16.0 |
| CVE-2024-5216 | Denial of Service in user management prevents admin from editing, suspending or deleting users in anything-llm | High | 7.5 | Yes | Upgrade to latest version |
| CVE-2024-5334 | Local file read in devika | High | 7.5 | Yes | Upgrade to latest version |
| CVE-2024-5548 | Directory traversal to steal any file from the system in devika | High | 7.5 | Yes | Upgrade to latest version |
| CVE-2024-5824 | Path traversal allowing override of config.yaml file leads to RCE in lollms | High | 7.4 | Yes | Upgrade to latest version |
| CVE-2024-5208 | Shutting down the server by sending an invalid upload request in anything-llm | Medium | 6.5 | Yes | Upgrade to latest version |
| CVE-2024-3651 | idna encode() quadratic complexity leading to denial of service in idna | Medium | 6.2 | Yes | Upgrade to version 3.7 |
| CVE-2024-5569 | Denial of Service (infinite loop) via crafted zip file in zipp | Medium | 6.2 | Yes | Upgrade to version 3.19.1 |
| CVE-2024-6095 | SSRF and partial LFI in the /models/apply endpoint in localai | Medium | 5.8 | Yes | Upgrade to version 2.17 |
| CVE-2024-5213 | Password hash of user returned in responses in anything-llm | Medium | 5.3 | Yes | Upgrade to latest version |
| CVE-2024-5062 | Reflected XSS through survey redirect parameter in zenml | Medium | 5.3 | Yes | Upgrade to version 0.58.0 |
| CVE-2024-4460 | DoS when adding a component in zenml | Medium | 4.3 | Yes | Upgrade to version 0.57.1 |
| CVE-2024-5616 | CSRF leading to deletion of installed models in localai | Medium | 4.3 | Yes | Upgrade to version 2.17 |
| N/A | Escalate regular user privileges to the service account in zenml | N/A | 0.0 | Yes | Upgrade to version 0.57.0 |

(Image courtesy Protect AI: Dan McInerney and Marcello Salvati)