Protect AI Releases 'Bug Bounty' Report On September Vulnerabilities

The vulnerabilities involve tools used to build machine learning models that fuel artificial intelligence applications.

Protect AI, a provider of artificial intelligence application security, has released its September vulnerability report.

The report was produced through huntr, Protect AI’s AI/ML “bug bounty” program. According to the company, the huntr community is made up of over 15,000 members who hunt for vulnerabilities across the “entire OSS AI/ML supply chain.”

The vulnerabilities involve tools used to build ML models that fuel AI applications. These tools are open source and are heavily downloaded to build enterprise AI solutions, Protect AI said in a news release.

This month, the huntr community, along with Protect AI researchers, discovered 20 vulnerabilities, some of which allow bad actors to perform complete system takeovers.

Here is a list of some of the major vulnerabilities huntr discovered in September (description quotes from Protect AI):

Remote Code Execution (RCE) in BerriAI/litellm:

“An attacker can execute arbitrary code on the server by injecting malicious environment variables. The vulnerability occurs in the litellm.get_secret() function, where untrusted data can be passed to the eval function without proper sanitization. This can be exploited by updating environment variables via the /config/update endpoint, allowing an attacker to inject malicious code.”
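The core danger here is passing attacker-influenced strings to `eval()`. The following minimal sketch (illustrative only; `get_secret_unsafe` and `get_secret_safe` are hypothetical functions, not litellm's actual code) shows why evaluating an environment variable's contents is exploitable, and the safe alternative of treating the value as an opaque string:

```python
import os

def get_secret_unsafe(name):
    # VULNERABLE pattern: eval() runs whatever string the variable
    # holds as Python code, so an attacker who can set the variable
    # (e.g. via a config-update endpoint) runs code on the server.
    value = os.environ.get(name, "")
    return eval(value)

def get_secret_safe(name):
    # Safe pattern: return the value verbatim; never evaluate it.
    return os.environ.get(name, "")

os.environ["DEMO_SECRET"] = "'hello' + ' world'"
print(get_secret_unsafe("DEMO_SECRET"))  # string is executed as code
print(get_secret_safe("DEMO_SECRET"))    # string is returned verbatim
```

In the unsafe version the stored expression actually executes; a real attacker would inject something far worse than string concatenation, such as a call that spawns a shell.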

Insecure Password Reset Token Handling in lunary-ai/lunary:

“An attacker can reuse a password reset token to change the victim’s password multiple times.

The vulnerability lies in the password reset functionality, where the token is not invalidated after the password is changed. This allows an attacker who has compromised the token to reuse it and change the password repeatedly.”
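The standard fix for this class of bug is to make reset tokens single-use. Below is a minimal sketch (hypothetical in-memory service, not lunary's code) where the token is deleted at the moment it is consumed, so a compromised token cannot be replayed:

```python
import secrets

class PasswordResetService:
    """Illustrative single-use password reset tokens."""

    def __init__(self):
        self._tokens = {}     # token -> user_id
        self._passwords = {}  # user_id -> password

    def issue_token(self, user_id):
        token = secrets.token_urlsafe(32)
        self._tokens[token] = user_id
        return token

    def reset_password(self, token, new_password):
        # pop() both looks up and invalidates the token in one step,
        # so a second use of the same token fails.
        user_id = self._tokens.pop(token, None)
        if user_id is None:
            raise ValueError("invalid or already-used token")
        self._passwords[user_id] = new_password

svc = PasswordResetService()
token = svc.issue_token("alice")
svc.reset_password(token, "first-new-password")      # succeeds once
try:
    svc.reset_password(token, "attacker-password")   # replay is rejected
except ValueError as exc:
    print(exc)
```

A production implementation would also expire tokens after a short window and invalidate all outstanding tokens for a user once any reset succeeds.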

Server-Side Request Forgery (SSRF) in gradio-app/gradio:

“An attacker can make unauthorized HTTP requests to internal services, potentially accessing sensitive information. The vulnerability is in the save_url_to_cache function, which does not properly validate the path parameter. This allows an attacker to supply a URL that the server will fetch, leading to SSRF.”
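A common mitigation for SSRF is to validate a URL before the server fetches it: allow only HTTP(S) schemes and refuse hostnames that resolve to private, loopback, or otherwise internal addresses. This sketch (an illustrative guard, not gradio's actual fix) shows the idea:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url):
    """Return False for URLs that could reach internal services."""
    parsed = urlparse(url)
    # Only plain web schemes; rejects file://, gopher://, etc.
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Block loopback, RFC 1918, link-local, and reserved ranges.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_url("http://127.0.0.1/admin"))  # blocked: loopback
print(is_safe_url("file:///etc/passwd"))      # blocked: scheme
```

Note that resolving-then-fetching still leaves a DNS rebinding window; robust deployments pin the resolved IP for the actual request or route outbound fetches through an egress proxy.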

Here is the full list of vulnerabilities. Click on the links for recommended fixes and more information on each:

CVE-2023-6016: Remote code execution via source POJO model import in h2o-3 (Critical)

CVE-2024-5386: https://sightline.protectai.com/vulnerabilities/8b23c69e-bb53-445d-8a40-e2c39759a3d4 (Critical)

CVE-2023-6038: LFI in the h2o-3 API in h2o-3 (Critical)

CVE-2024-5328: SSRF through backend endpoint auth API in lunary (High)

CVE-2024-4151: IDOR allows viewing/updating any prompts in any projects in lunary (High)

CVE-2024-4147: A user can delete prompts from other orgs in lunary (High)

CVE-2024-4148: ReDoS (Regular Expression Denial of Service) in lunary (High)

CVE-2024-6587: SSRF exposes OpenAI API keys in litellm (High)

CVE-2024-5478: XSS in SAML metadata endpoint in lunary (High)

CVE-2024-5714: A member can invite/change other users to someone else's project, or change another org's users to own/non-own projects, in lunary (High)

CVE-2024-6862: CSRF on endpoint for user signup in lunary (High)

CVE-2024-4154: Unprivileged user can rename a project in lunary (High)

CVE-2024-6582: Broken access control in lunary (Medium)

CVE-2024-5248: Prompt editor role has access to full list of org users in lunary (Medium)

CVE-2024-6087: Account takeover through the invite functionality for newly registered users in lunary (Medium)

CVE-2024-5389: A user can create/get/edit/delete prompt variations for datasets from other orgs in lunary (Medium)

CVE-2024-5755: Creating an account with the same email in lunary (Medium)

CVE-2024-6086: Any role can change the org's name in lunary (Medium)

CVE-2024-5998: Pickle deserialization vulnerability in langchain (Medium)

CVE-2024-6867: Run info leak without valid authorization in lunary (Medium)