Published on April 21, 2026
Lovable, a vibe-coding platform, lets users build software by interacting seamlessly with AI models. For many, it became a go-to space for coding assistance and collaboration, with a reputation for innovation and user-friendly functionality. That familiar environment recently came into question, however, when a researcher revealed a major flaw in its data handling practices.
User @weezerOSINT discovered that Lovable exposed chat histories and user data through its API, permitting access to sensitive information across multiple projects. The researcher shared their findings on social media, demonstrating how easily they could access another user's sensitive project details, and the disclosure raised alarm within the tech community.
The researcher reported the issue in early March via HackerOne, a vulnerability disclosure platform, but found that several projects created before November 2025 continued to suffer from data exposure. Lovable's spokesperson stated that this was not a data breach, arguing the exposure was intended behavior for public projects, but acknowledged shortcomings in the company's documentation.
As the controversy unfolded, Lovable's user trust took a hit. The company quickly made adjustments to enforce more stringent privacy protections and made new projects private by default. While Lovable attracted significant investment at a $6.6 billion valuation last December, the incident could damage its reputation and highlights ongoing concerns about data security in AI platforms.
Related News
- Palantir CEO's Controversial Manifesto Sparks Outcry and Concerns Over UK Contracts
- Google Pay API Revamps Merchant Initiated Transactions
- Molotov Cocktail Is Hurled at Home of Sam Altman, OpenAI’s CEO
- Meta's Ambitious AI Project: A Digital Doppelgänger of Mark Zuckerberg
- Navigating the Evolving Landscape of Amazon Bedrock Models