Default: your sample is never stored. Analysis runs, result is shown, file is destroyed. We only retain it if you explicitly opt in to help improve detection. Read full policy →
VOICE EXPOSURE CHECK

Check Your Voice Exposure Yourself.
No Upload Required.

After the Mercor leak, 40,000 contractors learned their voices were already in the wild. The three checks below tell you, in five minutes, whether yours is too. Nothing leaves your browser.

BLOCK 1

Where does my voice already exist publicly?

An attacker needs only 3 to 30 seconds of your clean voice to clone it. So the first question is: where is your voice already free to download?

  1. Go to YouTube. Type your full name in quotes ("Your Full Name"). Filter by Videos. Note every result that contains you talking.
  2. Search the same name on Listen Notes, Apple Podcasts, and Spotify. Even a 5-minute guest appearance on a friend's podcast leaves studio-quality voice in the open.
  3. Run two Google queries: "Your Name" interview and "Your Name" conference. Add filetype filters if you want PDFs and recordings of talks.
  4. Open your own LinkedIn profile. Scroll your Activity and Featured sections. Count the videos where your voice is audible. Most people forget about live streams, webinars, and podcast appearances.
If you find more than 3 public sources of your voice, an attacker can clone you. Move to Block 2.
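If you prefer to script the searches above, they can be built with a short Python snippet. The name is a placeholder you should replace; the URLs are the standard YouTube and Google search endpoints:

```python
from urllib.parse import quote_plus

# Replace with your own full name before running.
name = "Jane Doe"

# Quoted-phrase query, URL-encoded (spaces -> +, quotes -> %22).
q = quote_plus(f'"{name}"')

print(f"https://www.youtube.com/results?search_query={q}")
print(f"https://www.google.com/search?q={q}+interview")
print(f"https://www.google.com/search?q={q}+conference+filetype:pdf")
```

Open each printed URL in your browser and note every result where you are audible.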
BLOCK 2

Am I in a known data leak?

A leak is worse than a public recording: it usually pairs your voice with your ID and a selfie, all from the same onboarding session. That combo is what enables banking voice-bypass and insurance fraud.

  1. Mercor (2024-2025). If you worked as an AI contractor for Mercor, you are in the 4TB dump. File a deletion request. The breach was disclosed to the California Attorney General; use the official notice list as legal grounding.
  2. Open research recordings. If you ever volunteered your voice to a public research project, your sample may be legally public. Not a leak, but anyone with access can reuse it.
  3. Academic studies. Search "Your Name" speech dataset on Google Scholar. If you participated in a study, contact the principal investigator and ask where your file is stored today.
  4. Reverse voice search does not exist. There is no Google Images for voice yet. No "Have I Been Pwned" for biometrics. This is a real gap and we want to be honest about it.
If you confirm you are in any of these, the next step is suppression requests, not analysis.
BLOCK 3

I received a suspicious voice message. Is it real?

This is the urgent case. A voicemail from "your brother" asking for money. A call from "your boss" asking for a wire transfer. Before any tool, run this decision tree.

  1. Verify the channel. Does the message come from the person's real number or real account? Cross-check on a different channel: a text, a call to their known number, a video.
  2. Ask a private question. A reference only the real human knows. A specific shared memory. A pre-agreed code word set up before the crisis happened.
  3. Listen for artificial tells. Unnatural pauses, missing breath sounds, intonation that plateaus, constant background noise. Most current deepfakes still leak these signs to a careful human ear.
  4. Still in doubt after the first three? That's where forensic analysis helps. Click below to start an assisted analysis.
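The decision tree above can be sketched as code. Everything here is illustrative: the function and its inputs are invented for this example, and the real judgment calls stay with you.

```python
# Illustrative sketch of the verification decision tree above.
# The function name and fields are invented for this example.

def verify_voice_message(channel_confirmed: bool,
                         private_question_passed: bool,
                         artificial_tells_heard: bool) -> str:
    """Return a recommended next step for a suspicious voice message."""
    if not channel_confirmed:
        # Step 1: identity not confirmed on a second, known channel.
        return "do not act: cross-check on a different channel first"
    if not private_question_passed:
        # Step 2: failed a question only the real person could answer.
        return "do not act: the sender failed a private question"
    if artificial_tells_heard:
        # Step 3: unnatural pauses, missing breaths, flat intonation.
        return "treat as likely synthetic: escalate to forensic analysis"
    # Steps 1-3 passed; still confirm large requests out of band.
    return "low immediate risk: confirm any money request separately"
```

The ordering matters: channel verification and a private question are cheap and decisive, so they come before listening for artifacts, which is the least reliable step.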

Still Unsure or Facing a Complex Case?

The steps above cover most common exposure scenarios. If you are dealing with a targeted harassment campaign, a high-quality clone used in a sensitive context, or a file that sounds suspicious but you cannot verify, you may need a deeper forensic analysis. This is a separate, intensive process.

Proceed to the Secure Analysis Portal →

You are entering the Secure Analysis Portal

You are about to leave the self-audit guide. Read this before you upload anything.

Step 1: Free, here, no upload

The three blocks above cover roughly 90% of cases. If you haven't run them, do that first. Go back to the checklist →

Step 2: Assisted analysis

For complex cases: a suspicious file you received, a call recording you can't decide on, a video that might be deepfaked. You upload the suspect file (not your reference voice).

What we do with your file

In-memory only. Your audio is processed in RAM, the result is returned, the buffer is freed. Nothing is written to disk by default. Retention requires a separate opt-in checkbox and applies only to samples you explicitly mark for research use; you can revoke and request deletion at any time.
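As an illustration of what "in-memory only" means, here is a minimal sketch. The function name and placeholder verdict are invented for this example; a real pipeline runs an actual detector, but the shape is the same: the sample lives in a RAM buffer, a result is computed, and nothing touches disk.

```python
import io

def analyze_in_memory(upload: bytes) -> dict:
    """Illustrative in-memory-only pipeline: process the uploaded
    audio from a RAM buffer and free it without writing to disk.
    The verdict below is a placeholder, not a real detector."""
    buffer = io.BytesIO(upload)   # RAM only, never a file path
    sample = buffer.read()
    result = {
        "bytes_analyzed": len(sample),
        "verdict": "needs human review",  # placeholder verdict
    }
    buffer.close()   # buffer released; the sample is not persisted
    del sample
    return result
```

The key property is the absence of any `open(path, "wb")` or temp-file call: once the function returns, no copy of the sample remains.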

What we never do
  • No sharing with third parties without your consent.
  • No retention without your explicit opt-in checkbox below.
Continue to Secure Analysis Portal →

Upload, consent checkbox, and analysis are handled on the secure forensic page.

For the full retention and privacy policy, including logged fields, the deletion procedure, and our commitment against commercial training use, see the Privacy Policy.

See also: How we train our models