The Price of Going Mobile App Viral
The Tea app, a platform for women to rate and review dating experiences, recently went viral on the US App Store but suffered a severe data breach. Due to insecure Firebase storage and poor application security hygiene, over 72,000 private images, including ID cards and selfies, were exposed, with no access control or rate limiting in place. Metadata extraction further revealed user GPS locations, and sensitive credentials were leaked through hardcoded secrets in both the mobile and web applications. Jamieson O’Reilly, an offensive security consultant, independently validated these issues through static analysis and decompilation, identifying hardcoded Mapbox server keys and exposed bearer tokens in production assets. After reaching out to Tea’s CEO, O’Reilly was connected directly with the backend developer, who took full accountability and demonstrated genuine intent to remediate the issues. The post argues that Tea’s failure wasn’t in the concept, but in scaling without clearing the accrued security debt. It highlights how viral growth often precedes proper security architecture, leading to reputational damage and user risk under public scrutiny.
Posted: Wednesday, Jul 30


When you’re trying to grow the next big mobile app, going viral is one of the key signs of success. But going viral costs more than people first realise.

On the surface, that might make sense.

More users mean more costs; it’s just part of doing business, right?

Kind of.

But ask any company that’s had one of its apps go viral, and they’ll tell you that not all users are created equal.

While most users are well-intentioned, others probe your infrastructure with a level of scrutiny that your APIs, servers and apps were simply not designed for.

All of this happens while your app’s adoption increases, pushing more and more immature code into production and attracting attention from both researchers and adversaries (which, from a PR perspective, can have similar impacts on your brand).

In other words, the same momentum that pushes your app to number one on the App Store is what brings your internal mistakes into full public view.

The Spill

Tea – Dating Safety for Women reached number one on the US App Store this month. It promoted safety for women and let users privately rate and review men they had dated, and it allowed women to run background checks on potential dates.

To join, users had to upload a selfie and a photo of their ID. That verification process was central to how the platform operated.

On 25 July, those same images were found and dumped on 4chan. A thread went up with screenshots of driver’s licenses and selfies, followed shortly by a Python script that let anyone scrape the backend directly.

No login was required, and no rate limit existed to stop or slow the scraping. The files were hosted in an open Firebase bucket, and in the end almost 60 GB of photos were reportedly scraped.
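Neither missing control here is exotic. As a sketch of the second one (not Tea’s code, just the standard technique), a minimal per-client token-bucket rate limiter looks like this: each client gets a small burst allowance that refills over time, so a scraper pulling thousands of files per minute gets rejected almost immediately.

```python
import time

class TokenBucket:
    """Simple token bucket: allows `rate` requests per second with a burst of `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 10 rapid requests against a 1 req/s, burst-of-5 bucket:
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # the 5-token burst is allowed, the rest are rejected
```

In production you would keep one bucket per client key (IP, session, or device ID) in a shared store such as Redis, but the maths is exactly this.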

In addition to this, people online began extracting what they alleged to be GPS coordinates from image metadata and plotting them on a live Google Map of where they believed users had signed up from.
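For context on how “coordinates in metadata” becomes a pin on a map: EXIF stores GPS positions as degrees, minutes and seconds plus a hemisphere reference, and converting that to the decimal degrees a mapping tool expects is one line of arithmetic. The values below are hypothetical, not taken from any real upload.

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds plus hemisphere ref to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Hypothetical GPSLatitude/GPSLongitude values for illustration only.
lat = dms_to_decimal(40, 26, 46.05, "N")
lon = dms_to_decimal(79, 58, 55.93, "W")
print(lat, lon)
```

This is why stripping EXIF on upload matters: once a scraped image is in hand, recovering a plottable location from its metadata is trivial.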

In one instance, someone traced GPS coordinates to what appears to be a US military base, alongside what looks like a US Department of Defense ID card uploaded by a user of the app.

Tea later confirmed the incident publicly. Based on multiple news sources and online OSINT, it appears that at least 72,000 images were accessed, including 13,000 identity photos and 59,000 other uploads from comments, messages, and posts. At that point, I started digging a little deeper.

Reading the Tea Leaves

If you’ve followed me for a while, you’ve heard me talk about how humans build software the same way they live their lives – by habit. That’s why I always tell people, if you want to find security flaws, don’t just scan code. Watch for repeated behaviour.

One good example of this is when I hacked my friend levelsio, the godfather of vibe coding (https://www.linkedin.com/pulse/hackedin-vibe-coders-real-code-risk-hacks-jamieson-o-reilly-fsvmc/), which was mainly possible due to repetitive habits.

So, even before digging into Tea’s apps, I had a feeling the initial public reports of a data exposure wouldn’t be isolated to one individual issue; it was likely that other high-risk development practices existed elsewhere in the Tea app ecosystem.

Within 5 minutes of decompiling Tea’s Android app, I hit the first red flag.

Inside the string resources was a hardcoded Mapbox token, one starting with sk_. That prefix means it’s a secret key intended only for server-side use. You’re not supposed to expose it. Not in your code, and definitely not in your mobile app where decompilation takes seconds.
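Finding this class of issue doesn’t require anything sophisticated: after decompiling, you run the extracted strings through a set of secret-shaped regexes. The sketch below is illustrative only; real scanners such as gitleaks or truffleHog ship far larger rule sets, and the sample input is fabricated, not Tea’s actual resource file.

```python
import re

# Illustrative patterns only; production scanners use hundreds of tuned rules.
SECRET_PATTERNS = {
    "mapbox_secret_key": re.compile(r"sk[._][A-Za-z0-9._-]{20,}"),   # sk-prefixed secret keys
    "bearer_token": re.compile(r"(?i)bearer\s+[A-Za-z0-9._~+/-]{20,}=*"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_strings(text: str):
    """Return (rule_name, match) pairs for anything that looks like a hardcoded secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# A fabricated strings.xml-style snippet for demonstration.
sample = '<string name="mapbox_token">sk_abcdefghijklmnopqrstuvwx</string>'
print(scan_strings(sample))
```

Running a pass like this over your own release builds before shipping is cheap insurance; attackers run the equivalent within minutes of your app trending.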

Mapbox’s own docs make that clear. These keys are powerful: they’re used to manage location data, tilesets, uploads, billing and more.

Expose them, and you’re handing over backend access – or giving someone the ability to burn your quota or worse.

But it didn’t stop there. I moved to the admin portal to take a look.

The observations were consistent with what I’d seen in the mobile application.

I pulled the production JavaScript bundle and reviewed the static source. Inside it, I found bearer tokens hardcoded directly into the frontend. These tokens were wired into backend infrastructure calls with no visible environment segmentation, token proxying, or guardrails in place. This wasn’t buried in an edge case or hidden behind authentication. It was all sitting in the open.

I’m not making any assumptions about how these tokens are scoped internally. But in every real-world case I’ve worked – banks, fintech, casinos – this pattern creates risk.

Exposing bearer tokens in client-side static assets opens a wide surface for token reuse, impersonation, or privilege escalation. It invites automation. It invites replay. And it shifts control from the server to the browser.
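The standard fix is a token proxy: the browser calls your own backend, and the backend attaches the secret from its environment or secret manager, so the credential never appears in any client-side bundle. A minimal sketch of the server-side half, with a hypothetical upstream URL and environment variable name:

```python
import os
import urllib.request

UPSTREAM = "https://api.example.com"  # hypothetical upstream service

def build_upstream_request(path: str) -> urllib.request.Request:
    """Build the server-side request that carries the secret; the browser never sees it."""
    # The token lives only in the server environment / secret manager.
    token = os.environ["UPSTREAM_API_TOKEN"]
    return urllib.request.Request(
        UPSTREAM + path,
        headers={"Authorization": f"Bearer {token}"},
    )

# Stand-in value for illustration; in production this is injected at deploy time.
os.environ["UPSTREAM_API_TOKEN"] = "demo-token"
req = build_upstream_request("/v1/tiles")
print(req.get_header("Authorization"))
```

The client-facing endpoint that wraps this is also where you enforce authentication, scoping, and rate limits, which is exactly the control that shifts back from the browser to the server.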

These are the types of risks that get missed when a product goes viral before the security debt is cleared. I’ve seen it happen before. A concept becomes an app, an app becomes a company which then becomes a movement, and all the shortcuts taken to scale start to surface. Especially under scrutiny.

I’m not here to criticise what’s already built.

I’ve been on both sides of the firewall. I left school at 15. I spent years around some of the most creative criminals, learning how real hackers operate – at a time when banks and government systems were still running blind to it. Now I work with defence contractors, regulators, and security-critical industries to help teams catch up before they’re breached again or worse, discredited publicly.

Tea doesn’t need a rebrand. But it does need to show it’s listening, learning, and adapting in real-time. That’s where trust gets rebuilt.

The Human Response

After documenting what I’d found, I reached out to Tea’s CEO directly. Not to create drama. Not to chase headlines. But to give them the chance to get ahead of the story before it spiralled further into sensationalist media coverage.

The response was immediate and receptive.

Within hours, I was on a call with Tea’s lead backend developer. There were no lawyers, no PR handlers and no defensive deflections, just a dev who understood the technical reality, took full ownership of the issues, and genuinely wanted to fix them.

That matters more than you might think.

I’ve worked breach response for companies worth hundreds of millions, sitting in boardrooms where executives spend hours calculating legal liability while user risk becomes an afterthought. I’ve watched leadership teams systematically throw individual developers under the bus to protect corporate reputation. This wasn’t that.

The dev didn’t downplay the severity. He didn’t hide behind complexity or blame timelines.

He acknowledged that while the technical implementation was straightforward to exploit, the impact on user trust was real. That’s the kind of accountability you rarely see, especially from a team under public pressure.

I told him as f*cked up as the situation was, I’d rather work with someone who’s felt the weight of real struggle than someone who’s coasted through life on calm seas.

There’s an old saying about smooth seas never making skilled sailors. The teams that have been through genuine hardship, that have had to fight for everything they’ve built – those are the ones who understand what’s at stake when trust is on the line.

What a Privacy-Minded Platform Should Look Like (my advice to Tea and anyone else building apps)

When building applications that handle sensitive identity-linked data – especially platforms used to anonymously share intimate experiences – security isn’t a feature. It’s the operating model. Anything less is unacceptable.

This is exactly where OWASP’s mobile standards come in. The OWASP Mobile Application Security (MAS) project provides three interconnected frameworks that work together.

MASVS: The Mobile Application Security Verification Standard

MASVS is the industry standard for mobile app security. It provides platform-agnostic security requirements organised into eight critical control areas:

  • MASVS-STORAGE: Secure storage of sensitive data on devices (data-at-rest)
  • MASVS-CRYPTO: Cryptographic functionality used to protect sensitive data
  • MASVS-AUTH: Authentication and authorization mechanisms
  • MASVS-NETWORK: Secure network communication between mobile apps and remote endpoints (data-in-transit)
  • MASVS-PLATFORM: Secure interaction with the underlying mobile platform and other installed apps
  • MASVS-CODE: Security best practices for data processing and keeping apps up-to-date
  • MASVS-RESILIENCE: Resilience to reverse engineering and tampering attempts
  • MASVS-PRIVACY: Privacy controls to protect user privacy

Unlike generic security checklists, MASVS requirements are tailored specifically for mobile environments. They account for the unique risks of client-side code execution, device compromise, and the reality that mobile apps operate in hostile environments.

MASWE: The Mobile Application Security Weakness Enumeration

MASWE bridges the gap between high-level MASVS controls and detailed testing procedures.

It identifies specific weaknesses in mobile applications, similar to Common Weakness Enumerations (CWEs) but focused exclusively on mobile attack surfaces. MASWE acts as the translation layer – taking abstract security requirements and breaking them down into concrete, identifiable weaknesses that can be tested and remediated.

MASTG: The Mobile Application Security Testing Guide

MASTG is the operational manual. It provides detailed, hands-on testing procedures for validating whether applications meet MASVS requirements and are free from MASWE weaknesses. MASTG covers everything from mobile OS internals to advanced reverse engineering techniques, giving security teams the technical processes needed to systematically assess mobile app security.

For example:

  • MASVS-STORAGE-1 requires that “The app securely stores sensitive data”
  • MASWE identifies specific weaknesses like hardcoded secrets in client code
  • MASTG provides step-by-step testing procedures to detect and validate these weaknesses

Real-World Application for High-Trust Platforms

In Tea’s case, the issues we’ve seen – publicly readable Firebase buckets and databases, hardcoded tokens in static JavaScript – would all fail basic MASVS testing criteria. Specifically:

  • MASVS-STORAGE: Sensitive credentials stored in client-accessible static assets
  • MASVS-AUTH: No proper token scoping or rotation mechanisms
  • MASVS-NETWORK: Direct backend API exposure without proper proxying or access controls
  • MASVS-PRIVACY: Inadequate data minimisation and user control mechanisms

These aren’t theoretical controls. MASVS represents the baseline security model that any serious mobile platform handling sensitive data should meet. And privacy-by-design doesn’t just mean adding a toggle in settings, it means engineering the application to minimise what’s collected, contain what’s stored, and harden what’s exposed.
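One concrete example of “minimise what’s collected”: strip EXIF metadata, including GPS tags, from images at upload time, before anything is persisted. As a sketch of the underlying mechanics (most teams would reach for an image library rather than hand-rolling this), a JPEG is a sequence of marker segments, and EXIF lives in the APP1 segment, which can simply be dropped:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF) segments from a JPEG, keeping all other segments intact."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out.extend(jpeg[i:])  # entropy-coded data: copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xD9 or 0xD0 <= marker <= 0xD7:  # standalone markers, no length field
            out.extend(jpeg[i:i + 2])
            i += 2
            continue
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:  # 0xFFE1 = APP1, where EXIF (and its GPS tags) live
            out.extend(segment)
        i += 2 + length
        if marker == 0xDA:  # start of scan: everything after is image data
            out.extend(jpeg[i:])
            break
    return bytes(out)

# Synthetic JPEG: SOI + stub APP1 EXIF segment + EOI.
sample = (b"\xff\xd8"
          b"\xff\xe1\x00\x08Exif\x00\x00"
          b"\xff\xd9")
print(b"Exif" in strip_exif(sample))  # → False
```

If the metadata never reaches storage, a scraped image can’t leak a location, no matter what else goes wrong downstream.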

The framework exists because these kinds of failures happen too often. When you’re operating in a category that requires user trust to function, especially one that promises anonymity and accountability – then privacy-by-design isn’t optional.

This is the level Tea needs to operate at now. Because once scrutiny hits, everything is discoverable – from unrevoked keys to stale admin routes. The only way to be ready is to build like it will happen from day one.

MASVS, MASWE, and MASTG are more than just checklists. They’re how serious teams operationalise trust. Tea now has a rare opportunity to lead – not just recover.

Jamie O'Reilly
With over 12 years of experience in information security, Jamie specialises in application security, cryptography, secure design & secure application development. Jamie has worked collaboratively with international enterprise and government organisations including: Adobe, The RAND Corporation, Riot Games, Evernote, General Motors, Etsy, Firefox, CERN, Vidyo, Australian Signals Directorate and more to achieve business goals and evolve the way that these organisations approach security.