Protect AI Releases 'Bug Bounty' Report On October 2024 Vulnerabilities

The vulnerabilities involve tools used to build machine learning models that fuel artificial intelligence applications.

Protect AI, which offers artificial intelligence application security, has released its October vulnerability report.

The report was created with Protect AI’s AI/ML “bug bounty” program, huntr. According to the company, the huntr community is made up of over 15,000 members who hunt for vulnerabilities across the “entire OSS AI/ML supply chain.”

The vulnerabilities involve tools used to build ML models that fuel AI applications. These tools are open source and are heavily downloaded to build enterprise AI solutions, Protect AI said in a news release.

This month, the huntr community, along with Protect AI researchers, discovered 34 vulnerabilities, some of which allow bad actors to perform complete system takeovers.

Here is a list of some of the major vulnerabilities huntr discovered in October (description quotes from Protect AI):

Timing Attack in LocalAI

“Impact: An attacker can potentially determine valid API keys by analyzing the response time of the server. The vulnerability allows an attacker to perform a timing attack, which is a type of side-channel attack. By measuring the time taken to process requests with different API keys, the attacker can infer the correct API key one character at a time.”
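The leak described above comes from an early-exit string comparison: the check returns as soon as the first mismatching character is found, so a guess with more correct leading characters takes measurably longer to reject. A minimal Python sketch of the flaw and the standard mitigation (the function names and key-check logic are illustrative, not LocalAI's actual code):

```python
import hmac

# Vulnerable pattern: the loop exits at the first mismatch, so the
# time to reject a guess grows with the number of correct leading
# characters -- a timing side channel.
def check_key_vulnerable(supplied: str, secret: str) -> bool:
    if len(supplied) != len(secret):
        return False
    for a, b in zip(supplied, secret):
        if a != b:
            return False  # early exit leaks position of first mismatch
    return True

# Mitigation: compare in constant time so the duration does not
# depend on where the first mismatching character occurs.
def check_key_safe(supplied: str, secret: str) -> bool:
    return hmac.compare_digest(supplied.encode(), secret.encode())
```

With the constant-time version, measuring response times no longer tells the attacker how much of the guessed key is correct.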

Insecure Direct Object Reference (IDOR) in Lunary

“Impact: Unauthorized users can view or delete internal user data by manipulating user-controlled ID values. The vulnerability arises because the application does not properly validate user-controlled ID values. This allows an attacker to access or delete data of other users by simply changing the ID in the request.”

Insecure Direct Object Reference (IDOR) in Lunary

“Impact: Unauthorized users can update other users' prompts by manipulating user-controlled ID values. The vulnerability is due to the application using a user-controlled ID parameter without proper validation. This allows an attacker to update prompts belonging to other users by changing the ID in the request.”
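Both Lunary issues follow the classic IDOR shape: the handler trusts a client-supplied record ID and never checks that the authenticated user owns the record. A hedged Python sketch of the pattern and its fix (the data model and function names are assumptions for illustration, not Lunary's actual code):

```python
# Toy in-memory store standing in for the application's database.
PROMPTS = {
    "p1": {"owner": "alice", "text": "hello"},
    "p2": {"owner": "bob", "text": "world"},
}

def update_prompt_vulnerable(request_user: str, prompt_id: str, new_text: str) -> None:
    # IDOR: the client-controlled ID is used directly, so any
    # authenticated user can modify any prompt by changing the ID.
    PROMPTS[prompt_id]["text"] = new_text

def update_prompt_safe(request_user: str, prompt_id: str, new_text: str) -> None:
    prompt = PROMPTS.get(prompt_id)
    # Fix: authorize against the authenticated user, not the ID alone.
    if prompt is None or prompt["owner"] != request_user:
        raise PermissionError("prompt not found or not owned by requester")
    prompt["text"] = new_text
```

The same ownership check applied to the view and delete paths closes the first Lunary issue as well.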

View the entire vulnerabilities database here.