Published on April 21, 2026
Lovable, a vibe-coding platform, lets users build software by interacting seamlessly with AI models. For many, it became a go-to space for coding assistance and collaboration, with a reputation for innovation and user-friendly design. That familiarity was recently called into question when a researcher revealed a major flaw in the platform's data handling practices.
User @weezerOSINT discovered that Lovable's API exposed chat histories and user data, allowing access to sensitive information across multiple projects. The researcher shared the findings on social media, demonstrating how easily they could view another user's sensitive project details and raising alarm within the tech community.
The researcher reported the issue in early March via HackerOne, a vulnerability disclosure platform, but found that several projects created before November 2025 remained exposed. Lovable's spokesperson stated that this was not a data breach, arguing that the exposure was intended behavior for public projects, while acknowledging shortcomings in the company's documentation.
As the controversy unfolded, user trust in Lovable took a hit. The company quickly moved to tighten privacy protections and made new projects private by default. Although Lovable attracted significant investment at a $6.6 billion valuation last December, the incident could damage its reputation and underscores ongoing concerns about data security in AI platforms.