1. Locking the Front Door
2. The Vault
3. Purge What They Have on You
4. Direct the Deletions
5. Taking In Your Garbage
6. Burn Your History
7. Poison the Well
8. Continuing Care
Frequently Asked Questions
9. When You Need More Than a Guide
Sources and Further Reading

How to Delete Yourself from the Internet in 2026 (The Real Version)

Last updated: April 2026

By Justin Abdilla, Esq. | Licensed Illinois Attorney (ARDC #6308444) | NATO Bug Bounty Holder | Published in the Cyber Security Law & Data Policy Journal

I found a federated permissions escalation bug in the Matrix protocol that had been dormant for years. I’ve audited production web applications and walked out with admin credentials from a .env file. I sent 650 data broker deletion requests while writing this guide. This article is built on that work.


I spent days building the methodology behind this guide: 650 statutory deletion requests, three banks called to test their recovery questions, eight OAuth grants revoked from my own accounts, and every tool in here tested personally. Everything in this article is first-hand. If I tell you a process works, it's because I ran it.

NATO Bug Bounty Holder Super Lawyers Rising Stars 2021-2026 Published: Cyber Security Law & Data Policy Journal

This guide is aimed at the person who wants freedom from big data. If you’re in the crosshairs of a three-letter agency, nothing I can do for you is going to help you.

Two things happened this week.

On April 7, 2026, Anthropic revealed an AI model called Claude Mythos Preview that autonomously discovered and exploited zero-day vulnerabilities in every major operating system and every major web browser.[1] It found bugs that had survived 27 years of human review and millions of automated security tests.[2] It chained four separate vulnerabilities together to escape a browser sandbox and write directly to the OS kernel. It built a remote code execution exploit on FreeBSD that gave full root access to unauthenticated users. The cost of the FreeBSD exploit run: under $50.[3]

On April 8, 2026, the New York Times published John Carreyrou’s investigation identifying Bitcoin’s pseudonymous creator by feeding 30,000 mailing list posts through AI analysis.[4] The tell was hyphenation patterns. One person matched 67 of Satoshi Nakamoto’s exact hyphenation mistakes. Next closest: 38.[5]

The first story means the systems holding your data have vulnerabilities nobody knew about, and AI just learned to find them faster than any human team ever could. The second story means you are more identifiable than you think, down to the way you misuse hyphens.

This guide addresses both problems. I found a federated permissions escalation bug on the Matrix protocol (which NATO uses for allied communications) that had been dormant for years. I discovered it by trying to change the audio/video client configuration within my own server. The bug allowed privilege escalation across federated instances. It had been sitting there, unnoticed, in a protocol used by military alliances. Loose data and dormant bugs are going to be a massive problem in the Mythos era, because AI models can now find and exploit vulnerabilities like this faster than the humans who wrote the code.[1] This guide is a manual process. It will take you a weekend to start and months to finish. Most people will not do this. If you’re the kind of person who will, keep reading.


Why This Matters

In 2022, a Canadian social media personality with about 85,000 followers, whom we’ll call Ms. K, was swatted at her home. Police showed up with guns drawn based on a fabricated report. The people who did it found her home address on a data broker site. It cost them nothing.

After the swatting, Ms. K moved to a new address. Within days, the same group found her again. She moved again. They found her again. She eventually had to leave the country. Every time she relocated, the harassment network used the same publicly available data broker infrastructure to find her new address, her new phone number, her family members’ information. The data was just sitting there, on sites anyone could access for a few dollars or sometimes for free.

At one point, they found her new location by pulling GPS coordinates from the EXIF data embedded in a photo she uploaded. They also used her stolen personal information to impersonate her and make threats in her name, which is what triggered the original swatting.

The campaign against her lasted months and involved hundreds of people she had never met. None of it would have been possible without the data brokers and the metadata she didn’t know she was sharing.

Ms. K had 85,000 followers. You don’t need 85,000 followers for someone to decide they don’t like you. You need one person with $2.99 and access to Spokeo.
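The EXIF leak that located Ms. K is preventable before a photo ever goes up. Here is a minimal, standard-library-only sketch that strips the APP1 (EXIF) segment, where GPS coordinates live, from a JPEG. Dedicated tools like exiftool are more thorough; the filenames here are placeholders.

```python
import struct

def strip_exif(src_path: str, dst_path: str) -> None:
    """Copy a JPEG, dropping any APP1 segment that carries EXIF data
    (which is where GPS coordinates are embedded)."""
    with open(src_path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:           # Start of Scan: compressed image
            out += data[i:]          # data follows, copy rest verbatim
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i:i + 2 + seg_len]
        # Keep every segment except APP1 blocks whose payload is EXIF
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + seg_len
    with open(dst_path, "wb") as f:
        f.write(bytes(out))
```

Run it on a photo straight off your phone and compare the two files: the difference is metadata you were about to publish.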


1. Locking the Front Door

This Section

Time: 30 minutes. Credit freeze, hardware keys, password manager, kill your mobile ad ID. Nothing else in this guide matters if you skip these steps.

First: check what’s already been breached. Go to haveibeenpwned.com[17] and enter every email address you’ve ever used. It will tell you which data breaches your credentials appeared in. You will probably be surprised. Most people show up in 5 to 15 breaches, and many of those breaches include passwords, phone numbers, physical addresses, and security question answers. This is your triage list. Every service that shows up as breached needs an immediate password change. If you reused that password anywhere else (and you probably did), change it there too. Do this before anything else in this guide, because there’s no point locking the front door if someone already has a copy of the key.

Pay special attention to breaches that include “passwords” or “password hashes” in the compromised data. If a breach from 2016 included your password hash and you haven’t changed that password since, assume it’s been cracked. Older hashing algorithms like MD5 and SHA-1 can be brute-forced in seconds with modern hardware. If the breach says “plaintext passwords,” your password was stored without any encryption at all. Change it everywhere, right now, and never use it again.
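You can also check any individual password against known breaches without waiting for a notification. Have I Been Pwned’s Pwned Passwords API uses k-anonymity: you send only the first five characters of the password’s SHA-1 hash, so the password itself never leaves your machine. A minimal stdlib-only sketch:

```python
import hashlib
import urllib.request

def hash_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Response is one "SUFFIX:COUNT" pair per line for this prefix
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A nonzero count means that password is in cracking dictionaries. Retire it everywhere.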

Once you’ve rotated every compromised credential, you can start hardening everything else.

Google Advanced Protection. This is the single highest-impact thing you can do and it takes about 10 seconds to enable. It forces hardware key authentication on your Google account, blocks third-party app access to your Gmail and Drive, and adds extra verification to account recovery. It’s free. It requires hardware security keys. Where do you get those? You buy them for $50 from Yubico. Titan and Nitrokey are good too, but, honestly, buy it from Yubico.

Buy four YubiKeys. Two for daily carry, two backups in a fireproof safe or a safe deposit box. Why four? Because if you lose both daily keys and don’t have backups, you’re locked out of your own life. Google Advanced Protection with no recovery keys means no recovery. That’s the point, but also a risk. A YubiKey 5 NFC works with your phone and your laptop. Buy them from Yubico directly, not Amazon. They’re about $50 each. Yes, $200 total. This is the cheapest security investment you’ll ever make. Remote password attacks against your accounts are now effectively dead.

Freeze your credit at all three bureaus plus Innovis. This is free and takes about 15 minutes total. You need to do each one separately:

  1. Equifax
  2. Experian
  3. TransUnion
  4. Innovis

A credit freeze stops anyone from opening new accounts in your name. It doesn’t affect your existing accounts or your credit score. You can temporarily lift it when you need to apply for credit. There is no reason not to do this, and you could protect against most criminal identity theft right here.

Kill your Mobile Advertising ID. Your phone has a unique tracking identifier called a MAID (Mobile Advertising ID) that lets advertisers follow you across every app you use. It builds a profile of your behavior, your location patterns, your interests, and your habits. Most people don’t know it exists. On iPhone, go to Settings > Privacy & Security > Apple Advertising and turn off Personalized Ads. On Android, go to Settings > Google > Ads and delete your advertising ID. This takes 30 seconds and cuts off one of the largest passive tracking vectors on your device. On iPhone, the app Singular Device will show you your current MAID and confirm whether it’s active:

Singular Device app showing an active Mobile Advertising ID (IDFA) on an iPhone
My MAID before I killed it. This identifier was tracking me across every app on my phone.

ObscureIQ published a short guide on finding and disabling your MAID[18] if you want the step-by-step with screenshots. Once you’ve killed the device ID, opt out of the advertising networks that were using it: the Digital Advertising Alliance opt-out is at aboutads.info, and the Network Advertising Initiative opt-out is at optout.networkadvertising.org. These are run by the ad industry’s own trade groups, not by privacy advocates, but they do work.

Password manager: use KeePass. It’s offline, open-source, and your password database never touches someone else’s server. 1Password and LastPass are cloud-hosted services. LastPass has been breached. Encrypted vaults were stolen in 2022, and since then, security researchers have tied over $100 million in cryptocurrency theft to cracked LastPass vaults.[12] 1Password hasn’t been breached at that scale, but you’re still trusting a third party with the keys to everything you own. KeePass stores your database as a local file. You control where it lives and who can access it.

If you don’t want to use KeePass, your laptop already has a keyring. macOS Keychain, GNOME Keyring on Linux, Windows Credential Manager. These are local credential stores that encrypt with your system password. They’re fine for website passwords. They won’t sync across devices without iCloud or a similar cloud service, which brings you back to the trust question. Use them if you understand the tradeoff. I recognize most people reading this will have to weigh convenience against security, and I understand not everyone has the time to run low convenience solutions.


2. The Vault

This Section

Time: 1 hour. Call your bank. Test their security questions. Close forgotten credit lines. Move crypto to a hardware wallet. You are as secure as your least secure line of credit.

Your money is an attack surface. People can, and will, go after your finances through credit exploits. Will you lose money from that? Probably not! The banks will probably reimburse you. But you might lose data you would much rather other people not see.

I’m going to name specific banks because vague advice doesn’t help anyone. I called three banks while writing this guide.

PNC told me I had to personally come into a branch with two forms of state-issued ID to reopen or reset my account. That’s the right answer. Physical presence with government ID is the hardest authentication factor to fake. PNC is doing this correctly.

Bank of America uses your mother’s maiden name as a recovery question. Here’s the thing most people don’t realize: you can put anything you want in that field, including nonsense words. “Mother’s maiden name: Saxophone7Furniture.” BofA doesn’t verify it against public records. They just check that what you say on the phone matches what’s in their system. If you haven’t changed yours from the real answer, do it today. Your actual mother’s maiden name is on data broker sites for $2.99.
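If you’d rather not invent nonsense answers yourself, generate them. A sketch using Python’s `secrets` module; the word list is illustrative, and in practice you’d store the result in your password manager next to the account:

```python
import secrets

# Illustrative word list; any list of unrelated words works
WORDS = ["saxophone", "furniture", "glacier", "turbine",
         "lantern", "copper", "violet", "mango"]

def fake_recovery_answer(n_words: int = 2) -> str:
    """Build an answer like 'Saxophone7Furniture': easy to read
    to a phone agent, unguessable from public records."""
    parts = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    digit = str(secrets.randbelow(10))
    return digit.join(parts)
```

`secrets` draws from the OS cryptographic random source, unlike `random`, which is predictable and should never be used for anything security-related.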

Chase was willing to reset with my full Social Security number and an email backup address. Both of those data points are available through data breaches and data brokers. If someone has your SSN (and after the Equifax breach, assume they do) and knows which email you used to open your Chase account, they can reset your credentials over the phone.

Fidelity, Schwab, and E*Trade use Symantec VIP for two-factor authentication. It’s a proprietary app with no PIN protection. If someone gets your phone, they get your 2FA codes. These brokerages chose this system because it was cheaper than building proper FIDO2 support. You’re paying for that decision with your security.

No U.S. bank currently offers hardware-key-only authentication without an SMS fallback. That SMS fallback is the vulnerability. SIM swapping, where an attacker convinces your carrier to transfer your phone number to their SIM card, bypasses SMS-based 2FA entirely. The FBI reported over $68 million in SIM swap losses in 2021 (the last year for which data was easy to find), and the real number is higher because most people don’t report it.[6]

What to look for in a bank: FIDO2 or passkey support, the ability to disable SMS fallback (very few offer this), and transaction-level MFA that requires a second factor for transfers, not just login.

AI just demonstrated it can chain privilege escalation exploits on operating systems that have been reviewed by thousands of engineers for decades.[1] The systems your bank runs on are not exempt from this class of vulnerability. You cannot control their infrastructure. You can control how much of your financial life is exposed to any single institution.

Cryptocurrency. If you hold it, move it to a hardware wallet. Ledger or Trezor. If you don’t want to manage a hardware wallet, sell the crypto and move the proceeds to a traditional brokerage. There is no middle ground. Exchange wallets are custodial. You don’t hold the keys, the exchange does, and exchanges get hacked. If you hold less than $1,000 in crypto, it’s probably not worth the hardware wallet. If you’re holding an entire bitcoin, probably pay a little extra for some hardware.

Lines of credit you forgot about. Pull your credit report from annualcreditreport.com. Look for open accounts you don’t use: the Macy’s card from 2018, the HELOC you opened and forgot, the Affirm account from a single online purchase. Every open credit line is a social engineering vector. Someone who has your personal information can attempt to access or modify these accounts. You’re as secure as your least secure line of credit. Close what you don’t use. Call the issuer, close the account, and confirm the closure in writing.

Virtual credit cards. Andrej Karpathy (former Tesla AI director, OpenAI founding member) recommends privacy.com for minting unique credit card numbers per merchant.[24] One card per merchant means a breach at one retailer doesn’t expose your real card number, and nobody can link your purchases across services. You can set spend limits per card. You can also enter completely random billing information for your name and address, which means random internet merchants never get your real physical address. This is one of the best tools in this guide that most people have never heard of.


3. Purge What They Have on You

This Section

Time: a full day. This is the section that matters. If you only do one thing from this guide, do this. Opt out of 13 high-priority data brokers, then hit the infrastructure layer underneath them.

Right now, your name, home address, phone number, email address, estimated income, names of your family members, and in many cases your political donations and property records are sitting on dozens of data broker websites, available to anyone who wants to pay a few dollars or, in some cases, for free. This is not hypothetical. Go to Spokeo.com and search your own name. You may have nothing to hide, but I wager, dear Reader, that you are also not eager to share your life.

Spokeo search result showing personal information available for $2.99
My Spokeo result while writing this guide. Every place I've ever lived, including AirBnBs I've stayed at. Nine addresses, six phone numbers, family members' names. Available to anyone for $2.99. This is what you're cleaning up.

The DIY path. The State of Surveillance project maintains a privacy roadmap at stateofsurveillance.org/privacy-roadmap/[16] that walks through many of the steps in this guide and more. It’s a great companion resource. For the data broker work specifically, there is a GitHub repository called the Big Ass Data Broker Opt-Out List, BADBOOL.[7] It catalogs over 50 data brokers with direct opt-out links, prioritized by reach and difficulty. Thirteen are marked high-priority because they feed data to other brokers downstream. Start with those thirteen:

  1. BeenVerified
  2. Spokeo
  3. Whitepages / Whitepages Premium
  4. PeopleFinders
  5. Intelius
  6. US Search
  7. ThatsThem
  8. FastPeopleSearch
  9. TruePeopleSearch
  10. Radaris
  11. MyLife
  12. Instant Checkmate
  13. PeopleLooker

Each one has a different opt-out process. Some let you do it online in two minutes. Some require you to email a privacy address. Some make you fax a notarized letter, and yes, in 2026, some of these companies still require faxes. Budget a full day. It’s tedious. It works.

California residents: DROP. The Delete Request and Opt-Out Platform launched in January 2026 under the California Delete Act (SB 362).[8] It’s a single form. You submit it once, and it goes to over 500 data brokers registered in California. Processing starts in August 2026. If you’re in California, do this first. It’s the closest thing to a one-click solution that exists, and it’s run by the state, not a private company. Go to deleteact.ca.gov.

If you’re not in California, you still have statutory rights. Most people think CCPA is the only game in town. It’s not. As of 2026, these states have active data deletion laws with real enforcement mechanisms:
  1. California (CCPA/CPRA)
  2. Virginia (VCDPA)
  3. Colorado (CPA)
  4. Connecticut (CTDPA)
  5. Texas (TDPSA)
  6. Oregon (OCPA)

This list is getting longer every year. If your state isn’t on it, you can still make deletion requests. The broker just isn’t legally required to comply. Most will anyway, because they don’t want the PR problem. Many also have published data retention policies, and failing to follow their own policy gives you something to sue on. PR and mass arbitration make great threats.

The data brokers above are the retail layer. They’re just the sites where someone pays $2.99 to look you up. Underneath them is an infrastructure layer that most deletion requests never touch. If you only opt out of Spokeo but never touch LexisNexis, your data will reappear on Spokeo within 90 days because the supply chain is still feeding it. You need to hit these.

Data broker supply chain diagram showing how public records flow through infrastructure companies like LexisNexis and Acxiom to retail brokers like Spokeo, where anyone can buy your data for $2.99

If you’ve ever been a party to a lawsuit, filed a business entity, licensed a profession, or bought property, your records are in legal databases that most privacy guides never mention. These are the systems attorneys, skip tracers, and investigators actually use. Getting your data out of them is harder than opting out of Spokeo, and in some cases, the companies will fight you on it.

Westlaw / Thomson Reuters. Thomson Reuters runs Westlaw, CLEAR, and PeopleMap, which aggregate court records, property records, professional licenses, and public filings into searchable databases used by law firms, government agencies, and corporate investigators. Their consumer privacy opt-out form is buried so deep that finding it is a research project in itself. Here it is: https://privacyportal.onetrust.com/webform/dbf5ae8a-0a6a-4f4b-b527-7f94d0de6bbc/23dce484-737f-4d47-a389-2e990f683e8c. Bookmark it. You will not find it through their main website navigation. I found it by emailing Thomson Reuters a notice of intent to sue for holding my data against their own terms of service. They sent me the form link in their response. That’s what it took.

LexisNexis. LexisNexis Risk Solutions maintains a consumer data file on you that includes court records, property records, professional licenses, liens, judgments, bankruptcy filings, and vehicle registrations. They also suffered a data breach in May 2025 that exposed over 364,000 people’s records, including Social Security numbers. Their opt-out portal is at optout.lexisnexis.com. You select your reason, provide your name and address, and they will ask for a copy of your government-issued ID for verification. Skip providing your SSN; it’s listed as optional. Processing takes 10 to 30 days. Important limitation: LexisNexis will NOT suppress your data from products available to law enforcement or products regulated by the Fair Credit Reporting Act. Their “suppression” only covers commercial data products sold to private businesses like insurance carriers, lenders, and tenant screening companies. It’s still worth doing because those commercial products are the ones most likely to leak your data downstream.

Acxiom. One of the largest data brokers in the world. They hold demographic, financial, and behavioral data on over 200 million Americans. The current opt-out form is at isapps.acxiom.com/optout/optout.aspx. There is also a newer page at acxiom.com/optout/. You can submit by phone at 1-877-774-2094 or by mail to Acxiom LLC, P.O. Box 2000, Conway, AR 72033. When you use the online form, select all three opt-out segments (mailing addresses, phone numbers, email addresses). Acxiom’s opt-out is not a full deletion. It suppresses your data from their marketing and advertising clients. They may still retain underlying records for internal purposes and fraud prevention. It’s still worth doing because Acxiom’s database feeds dozens of downstream ad-tech and marketing platforms.

LiveRamp. LiveRamp is Acxiom’s parent company, but their opt-out is completely separate. Opting out of Acxiom does not opt you out of LiveRamp. LiveRamp handles identity resolution, connecting your data across devices and platforms for ad targeting. If you’ve ever seen “LiveRamp” in a cookie consent banner, that’s them. Consumer privacy opt-out is at liveramp.com/privacy/my-privacy-choices/. Deletions take 45 days to process. Opt-outs take 15 business days.

CoreLogic (now Cotality). CoreLogic holds property records, mortgage data, and rental history for approximately 99% of US residential properties. If you own property, they have a file on you. This is the hardest opt-out on this list, and I’m going to be honest about why. CoreLogic’s CCPA opt-out forms only work for two categories: HR data subjects (employees, applicants, contractors of CoreLogic) and B2B data subjects (business contacts who have interacted with CoreLogic directly). If you’re a regular consumer, those forms will reject your request. For everyone else, email privacy@corelogic.com with the subject line “CCPA Data Deletion Request.” Include your full name, all addresses associated with properties you’ve owned or rented, and a clear statement requesting deletion citing Cal. Civ. Code § 1798.105. You can also call 1-800-634-4149. CoreLogic may deny your request citing Fair Credit Reporting Act exemptions. They have a partial exemption from CCPA for data used in credit reporting. If they deny you, document the denial and the exemption they cited. That documentation matters if you escalate to the AG.

Oracle Data Cloud. Oracle aggregates purchase data, online behavior, and demographic information across hundreds of sources. Their advertising opt-out is at oracle.com/legal/privacy/advertising-privacy-policy.html (scroll to the “Your Choices” section). For a full data deletion request, use their privacy request form at oracle.com/legal/privacy/privacy-choices.html. Oracle spun off much of its ad-tech division in recent years, so the scope of what they still hold varies. Submit the request anyway.

Ancestry. Even if you’ve never used Ancestry, they may have records linked to your name through public record scraping, including birth records, marriage records, census data, and in some cases DNA data from relatives who submitted samples. Ancestry is somewhat infamously unwilling to delete data. You can email links to specific records containing your information to privacy@ancestry.com or use their content removal form. If you have an account, you can request full account deletion through their settings. If you don’t have an account but your data appears in their public records collections, the removal process requires you to identify the specific records by URL and submit them individually. It’s tedious by design.

Beyond the Data Brokers: Other Systems Holding Your Data

Zillow / Zillow Group. If you’ve ever used Zillow, Trulia, HotPads, or StreetEasy, your search history, saved homes, and account data are in Zillow’s systems. Their privacy portal is at privacy.zillowgroup.com. Important limitation: this only covers data you’ve provided to Zillow through use of their services (search history, saved homes, agent contacts, your account profile). It does not cover your home’s property records, Zestimate, or ownership data, which Zillow sources from public records and will continue displaying regardless of your opt-out.

OptOutPrescreen (All Prescreened Loan Offers). Every pre-approved credit card and insurance offer you receive is based on a lender or insurer pulling your credit file through a prescreening process. OptOutPrescreen.com lets you stop these prescreened offers. You can opt out for 5 years online or permanently by mail. This is run by the consumer credit reporting companies (Equifax, Experian, TransUnion, Innovis) under federal law. Do this. Every prescreened offer is a record that someone pulled your credit file, and the offers themselves are a data trail. optoutprescreen.com

TransUnion. Beyond the credit freeze (which you should already have from Section 1), TransUnion has a separate consumer privacy portal for data deletion and opt-out requests at transunion.com/consumer-privacy. Use this to request deletion of marketing data and to opt out of data sharing beyond what the credit freeze covers.

Komodo Health. Komodo Health aggregates healthcare claims data and patient records for analytics. As of this writing, they are not accepting consumer deletion requests unless legally compelled. If you are in a state with an active data privacy statute (California, Colorado, Connecticut, Oregon, Texas, Virginia), send them a statutory deletion request citing your state law. If you’re not in a covered state, they will likely ignore you. Document the attempt anyway.

Spokeo. Spokeo accepts deletion requests by email at customercare@spokeo.com. In your email, mention that you’ve recently moved to California. Spokeo processes California residents’ requests under CCPA, which gives you the statutory 45-day compliance deadline and the right to escalate to the AG if they don’t comply. This is more effective than using their web-based opt-out form, which routes to a slower process.

Kroger. Kroger collects detailed purchase habit data through its loyalty programs (Kroger Plus Card, Boost membership). Every item you’ve scanned, every coupon you’ve clipped, every pharmacy pickup — all of it is profiled and used for targeted advertising and sold to third-party data partners. Here’s the part that should bother you: Kroger will not delete this data unless you live in a state with a privacy law that compels them to. If you’re in California, Colorado, Connecticut, Oregon, Texas, or Virginia, you can submit a deletion request through kroger.com/privacy and cite your state statute. If you’re not in one of those states, Kroger’s position is that they have no obligation to delete anything. They’ll keep your ten years of grocery receipts forever because no law says they can’t. This is the norm, not the exception — Kroger is just one of the few companies honest enough to say it out loud.

Why This Matters More Than You Think

I know the security posture of the companies holding your data is bad because I’ve seen it firsthand. I’ve audited production web applications as part of my security consulting work. In one engagement, I downloaded the publicly accessible JavaScript files for a website using the browser’s developer console. I studied the naming convention those files used. Then I started hitting the server directly with wget, requesting files that followed the same naming pattern but weren’t linked from the public site. It took four or five tries before I found the .env file. It contained the admin passwords in plaintext. The password hash was stored right next to them. If I had been asked to review anything other than test data, I could have stolen everything in the database. This was a production application with paying customers. The vulnerability was not sophisticated. It was a configuration file left in a predictable location on a public server. That’s the security posture of most companies holding your data.
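The .env mistake from that engagement is easy to check for on infrastructure you own. A sketch that probes a base URL for a few commonly exposed files; the candidate list is illustrative, and you should run this only against your own sites:

```python
import urllib.request

# Commonly left-behind files; extend the list for your own stack
CANDIDATES = [".env", ".env.backup", ".git/config", "config.php.bak"]

def check_exposed(base_url: str) -> list[str]:
    """Return URLs under base_url that respond 200 for files
    that should never be publicly reachable."""
    found = []
    for path in CANDIDATES:
        url = f"{base_url.rstrip('/')}/{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    found.append(url)
        except Exception:
            continue  # 404s and connection errors both mean "not exposed"
    return found
```

If this returns anything for a site you run, take the file down and rotate every credential inside it. Assume it was already read.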

This week, an AI model found a 27-year-old denial-of-service vulnerability in OpenBSD’s TCP stack, a system that has been audited by some of the best security engineers in the world for nearly three decades.[2] It found a 16-year-old flaw in FFmpeg’s video codec that survived every fuzzer and every human code review since 2010.[9] It built working exploits for over half of the 40 Linux kernel CVEs it analyzed.[10] The companies holding your data run on this same software. You can’t audit their security posture. What you can do is reduce how many of them hold your data in the first place. Every deletion request you send shrinks the number of places where your information sits waiting for the next breach. And the next breach is always coming.

This is not a one-time project. Data brokers continuously scrape public records and rebuild profiles. Your information will reappear. Plan to re-check the high-priority brokers every quarter. This is where ongoing monitoring, or hiring someone, starts to make sense.


4. Direct the Deletions

This Section

Time: 2 hours, then wait 45 days. Send statutory deletion demands, not polite requests. Use the template below or the tool that generates one citing every applicable law at once.

There is a difference between asking nicely and citing a statute.

Consumer request vs. statutory demand. A polite email to privacy@spokeo.com gets routed to a customer support queue. A letter citing California Civil Code § 1798.105 with a 45-day compliance deadline and a named attorney gets routed to their legal department. The first one might work. The second one has to work, or they’re in violation.

Template for a CCPA deletion request:

To the Privacy Officer or Data Protection Officer:

I am writing to exercise my right to deletion of personal information under the California Consumer Privacy Act (Cal. Civ. Code § 1798.100 et seq.), specifically Section 1798.105.

I request that you delete all personal information your organization has collected, stored, or maintains about me. This includes, but is not limited to: name, address, phone number, email address, date of birth, photographs, employment history, and any associated records.

My identifying information for your records:

  • Full name: [Your name]
  • Email addresses associated with your records: [List them]
  • Physical addresses associated with your records: [List them]
  • Date of birth: [Your DOB]

Under Section 1798.105(a), you are required to delete this information and direct any service providers to delete it from their records as well. You have 45 calendar days to respond to this request, per Section 1798.105(b).

If you decline this request, please cite the specific statutory exemption under Section 1798.105(d) that you are relying on.

I reserve all rights under California law, including the right to file a complaint with the California Attorney General’s office if this request is not processed within the statutory timeframe.

The I Am Not The Product opt-out request generator interface
The optout.iamnottheproduct.com tool. Enter your info, pick your state, and it generates a letter citing every applicable statute.

Or skip the template entirely. The tool at optout.iamnottheproduct.com[11] generates a deletion request that cites every applicable privacy statute at once: CCPA, VCDPA, CPA, CTDPA, TDPSA, OCPA, and a dozen more state laws you probably didn’t know existed. It also includes an opt-out of future sale and sharing, a withdrawal of consent, and an explicit objection to future collection from any source. You enter your information, it generates the letter. Send that instead of the template above and you don’t have to figure out which statute applies to which broker. Let them sort it out.

If you want to write your own, the structure is the same regardless of state: identify yourself, cite the specific deletion right, state the compliance deadline, and tell them what happens if they don’t comply. The language matters. “Please remove my information” is a request. “I am exercising my right under [statute] and you have 45 days to comply” is a demand with a legal clock attached to it. Note that these rights only attach when a statute covers you: either you (or the company) is in a state with a comprehensive privacy law, or the data at issue, biometrics for example, triggers a law of its own.
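The structure is mechanical enough to template. Here is a minimal Python sketch; only the CCPA citation comes from the letter above, and the commented-out row marks where you would add your own state’s act after verifying its citation yourself:

```python
# Statute table: (act name, citation, response window in days).
# Only the CCPA entry is drawn from the template letter above; verify any
# other state's citation against the actual statute before sending.
STATUTES = {
    "CA": ("CCPA", "Cal. Civ. Code § 1798.105", 45),
    # "TX": ("TDPSA", "[verify citation]", 45),
}

def deletion_demand(name: str, state: str, company: str) -> str:
    """Render a bare-bones statutory deletion demand for one company."""
    act, cite, days = STATUTES[state]
    return (f"To {company}:\n\n"
            f"I, {name}, am exercising my right to deletion under the {act} "
            f"({cite}). You have {days} days to comply. If you decline this "
            f"request, cite the specific statutory exemption you rely on.")

print(deletion_demand("Jane Doe", "CA", "Acme Data LLC"))
```

The point of scripting it is volume: once the table is filled in, generating 650 demands is a loop, not a weekend.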

GDPR requests. If you have any connection to the EU (you lived there, you use an EU email provider, you used a service that operates under GDPR), you can make a deletion request under Article 17, the Right to Erasure. GDPR carries actual teeth: fines up to 4% of annual global revenue. Companies take GDPR requests seriously in a way they sometimes don’t with CCPA. If you can credibly make a GDPR request, do it. I personally keep an EU email provider for this exact reason. Tuta is a great service.

Who to send to. Don’t send your deletion request to info@ or contact@. Look for:

  • A dedicated privacy@, dpo@, or legal@ address
  • The contact listed in the privacy policy, usually under “Your Privacy Rights” or similar
  • The company’s registered agent, if no privacy contact is published

Track everything. Create a spreadsheet with these columns: Company Name, Date Sent, Method (email/mail/web form), Confirmation Received (Y/N), 45-Day Deadline, Status. You’ll need this if you have to follow up or escalate. If you sent a statutory demand and they didn’t respond within 45 days, you have a documented compliance failure. That matters if you file a complaint with your state’s AG. This is something that an LLM would easily be able to maintain for you.
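If you’d rather script the tracker than hand-maintain it, here is a minimal sketch. The column names come from the list above; the file path and function names are mine:

```python
import csv
import datetime
import os

# Columns from the tracking spreadsheet described above.
COLUMNS = ["Company Name", "Date Sent", "Method", "Confirmation Received",
           "45-Day Deadline", "Status"]

def log_request(path, company, method, sent=None):
    """Append one deletion request, computing the 45-day statutory deadline."""
    sent = sent or datetime.date.today()
    deadline = sent + datetime.timedelta(days=45)
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(COLUMNS)
        writer.writerow([company, sent.isoformat(), method, "N",
                         deadline.isoformat(), "Sent"])

def overdue(path, today=None):
    """Companies whose deadline passed with no confirmation: AG-complaint fodder."""
    today = today or datetime.date.today()
    with open(path, newline="") as f:
        return [row["Company Name"] for row in csv.DictReader(f)
                if row["Confirmation Received"] == "N"
                and datetime.date.fromisoformat(row["45-Day Deadline"]) < today]
```

Run `overdue()` at the start of each quarterly audit and you have your escalation list ready-made.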

When they try to exhaust you. I sent 650 takedown requests while writing this guide. Not a sample. Not a test batch. Six hundred and fifty individual statutory deletion demands to data brokers, people-search sites, and infrastructure-layer data companies. Here’s what happened:

Results of the 650 deletion requests:

  • 650 statutory demands sent
  • 80% redirected me to verification forms
  • 15% said they would “investigate”
  • 5% actually processed the deletion

Those numbers are not a reason to stop. They’re the reason this section exists.

Credit where it’s due: Troy at NextWaveMarketing, Reid at Mogean, Dun & Bradstreet, and Kulanthaishree at Accurate all processed my requests immediately and professionally. SignalHire and ACHCOOP went further — they quickly informed me what data they held, provided me a full packet of it, deleted everything, and added me to a suppression file so my data wouldn’t be re-ingested. That’s the gold standard. They exist. They’re just outnumbered. The rest are betting that the volume of responses will frustrate you into giving up. Here’s a typical response:

Email from PMG refusing to process a deletion request via email, directing to a web form instead
PMG's response to my statutory deletion request. "We do not accept privacy rights requests via email." The statute doesn't say they get to choose the channel.

USA-People-Search took it a step further: they responded to my deletion request by directing me to a form that doesn’t even exist.

USA People Search removal page showing Error code 500 - the form they direct you to does not exist
USA-People-Search's "removal" page. Error code 500. The form they sent me to doesn't exist. This is the company holding your data.

Don’t give up. This is a compliance tactic, not a legal requirement. If a company received your statutory demand and responded with a maze of forms instead of deleting your data, they haven’t complied. They’ve started a 45-day clock and spent part of it sending you busywork. Document the response, complete what you can stomach, and flag the rest for AG complaints if they blow the deadline. The companies doing this know exactly what they’re doing.

When they say no. They’ll cite exemptions. Some are legitimate:

| What they say | Is it legitimate? | Your move |
| --- | --- | --- |
| “We need this data for ongoing litigation” | Yes, if there is actual litigation involving you | Ask for the case number. If they can’t cite one, push back. |
| “This data is necessary for a transaction you initiated” | Only if there is an active transaction | If you haven’t done business with them, this doesn’t apply. Say so. |
| “We are required by law to retain this information” | Sometimes (tax records, regulatory retention). Often a stretch. | Ask which law. If they can’t cite a specific statute, file an AG complaint. |
| “You are not a California resident” | Irrelevant if you cite your own state’s statute | Cite the TDPSA (Texas), the CPA (Colorado), or your state’s law. Even without a covered-state statute you can still request deletion, and you may have a separate privacy claim against them. |

If they deny your request without citing a specific exemption, or if the exemption doesn’t apply to your situation, file a complaint with your state attorney general’s consumer protection division. Most AG offices have online complaint forms. This costs you nothing and creates a regulatory record.


5. Taking In Your Garbage

This Section

Time: 20 minutes. Revoke every OAuth grant you forgot about. You probably have 15 to 30 apps with access to your Google, Apple, or Facebook accounts right now.

OAuth grants are the keys you handed out and forgot about.

Here’s what happened: at some point in the last ten years, you clicked “Sign in with Google” on some app or website. When you did that, you gave that app permission to access parts of your Google account, maybe your email, maybe your contacts, maybe your calendar. That permission grant is still active. The app can still read your data. You haven’t used the app in three years, but it’s still connected.

Audit your grants:

  • Google: myaccount.google.com/permissions
  • Apple: appleid.apple.com, under Sign-In and Security > Sign in with Apple
  • Facebook: Settings & Privacy > Settings > Apps and Websites

I found 8 active grants on my own accounts. One of them, and I cannot stress how absurd this is, was FarmVille. A Facebook game login authentication from 2009. It had been sitting there with permissions to my account for seventeen years. Another was Avvo, the attorney review platform, which showed up as breached when I checked Have I Been Pwned. A breached service with an active OAuth grant to my account. That grant had been live for years and I had no idea.

Have I Been Pwned showing Avvo data breach from December 2019, compromising email addresses and passwords
Avvo's December 2019 breach exposed 4.1 million email addresses and SHA-1 password hashes. This service had an active OAuth grant to my account for years after the breach.

The dangerous ones are apps with calendar access, contacts access, or “offline access” tokens. Calendar access means an app can read your schedule without you being logged in. Contacts access means it exported your entire address book when you first connected. Offline access means it can refresh its own credentials and keep reading your data even when you’re not actively using it. These are the grants that matter most. Kill them first.
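“Offline access” is not hand-waving; it is the standard OAuth2 refresh grant (RFC 6749 §6). Here is a sketch of the request an app with a stored refresh token sends to keep itself alive, no user present. The endpoint, client credentials, and token values are of course hypothetical:

```python
from urllib.parse import urlencode

def refresh_request_body(client_id: str, client_secret: str,
                         refresh_token: str) -> str:
    """Body of the RFC 6749 §6 refresh grant. POSTing this to the provider's
    token endpoint mints a fresh access token with no user interaction."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })

# An app that kept its refresh token from 2019 can still do this today
# (all values hypothetical):
body = refresh_request_body("quiz-app-id", "quiz-app-secret", "1//stored-token")
```

Revoking the grant invalidates the refresh token server-side, which is why revocation, not uninstalling the app, is the fix.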

If you clicked “Sign in with Google” on some random quiz site in 2019, that quiz site may still have read access to your email. It probably doesn’t do anything with it. But you don’t know. Revoke it before they get hacked.


6. Burn Your History

This Section

Time: varies. Delete accounts, strip EXIF data from photos, switch to encrypted messaging. Deactivation is not deletion. Read this section carefully before you start pressing buttons.

Delete your messages. Sanitize your online presence. Take down your photos.

Deletion vs. deactivation. They are not the same. Facebook “deactivation” hides your profile but keeps all your data. Instagram “temporary deactivation” does the same. Twitter/X requires you to deactivate first, then wait 30 days before actual deletion occurs, and if you log in during those 30 days, the deactivation resets. Microsoft probably still has your old Skype messages from 20 years ago. Read the actual deletion policy for each platform, not just the first button you find in settings.

To actually delete:

  • Find each platform’s permanent deletion flow, not the deactivation toggle (on Facebook it lives under Settings > Your Facebook Information)
  • Export anything you want to keep first; real deletion is one-way
  • Use the JustDeleteMe directory (justdeleteme.xyz) for direct links to each service’s actual deletion page

Old posts. If you don’t want to delete entire accounts, you can bulk-delete post history. Tools that work: Redact for Reddit and Discord (overwrites, then deletes), TweetDelete for Twitter/X (bulk delete by date range), Jumbo (multi-platform privacy cleanup), UnDiscord on GitHub. There are many good programs that do this.

Photos. Every photo you’ve taken with a smartphone has EXIF data embedded in it: GPS coordinates, timestamp, device model, sometimes your name. Before you upload a photo anywhere, strip the EXIF data. On macOS, Preview can do this. On any platform, ExifTool (free, command-line) strips everything. On a phone, most messaging apps strip EXIF when you send a photo, but social media platforms preserve it in their backend even if they strip it from the public display. If your concern is anonymity rather than just privacy, strip EXIF from your existing uploads in bulk, not only from new ones.
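ExifTool is the right tool for real work, but to show how little magic is involved, here is a stdlib-only sketch that drops the APP1 segment (where EXIF lives) from a JPEG byte string. It is a simplified demonstration, not a replacement for ExifTool:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 segments (EXIF/XMP) from a JPEG byte string.

    Simplified sketch: assumes a well-formed file whose metadata segments
    all appear before the start-of-scan marker, which is true of normal
    camera output. Use ExifTool for anything that matters.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:          # unexpected byte before SOS; bail safely
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:           # start of scan: copy the rest verbatim
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:           # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The metadata is just a tagged block sitting in the file; anyone who downloads your original can read it as easily as this code removes it.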

Reverse image search yourself. Go to Google Images and upload a photo of your face. Then do the same on TinEye and PimEyes. PimEyes is particularly concerning. It’s a facial recognition search engine that finds your face across the internet, and it’s available to anyone for a monthly subscription. If you find photos of yourself hosted without your consent, you can file a DMCA takedown request with the hosting platform. You own the copyright to photos you took of yourself (selfies). For photos others took of you, the copyright belongs to the photographer, but you may have other legal remedies depending on how the photo is being used. In Illinois, the Biometric Information Privacy Act gives you additional leverage when your face is run through facial recognition.

Messaging. WhatsApp stores your messages on Meta’s servers even after you delete them locally. iMessage syncs to iCloud by default, which means Apple has a copy of your message history on their servers. If you need private messaging, use Signal. It’s end-to-end encrypted, it doesn’t store messages on a server, and it’s open-source so the encryption claims can be verified. Signal is pretty great for how cheap it is to operate. The trouble will be convincing your friends to use it.

Email. Email is a persistent identity. Every account recovery flow, every newsletter signup, every receipt, every legal notice goes to an email address that identifies you. If you need email, and you do, because modern life requires it, use Proton Mail or Tuta. Both are end-to-end encrypted, based in jurisdictions with strong privacy laws (Switzerland and Germany, respectively), and don’t scan your email content for advertising.

But understand what email is: a permanent identifier tied to your name. Minimize what that address touches. Don’t use your primary email for throwaway signups. Create aliases or use a service like SimpleLogin (now owned by Proton) to generate disposable addresses that forward to your real inbox.

One more thing: disable image loading in your email client by default. Many services embed tracking pixels in emails, hidden images with unique URLs. When your email client loads the image, the sender knows you opened the email, when you opened it, and your IP address. Blocking remote images also blocks steganography attacks, where malicious payloads are encoded inside image data. This is a real attack vector, not a theoretical one. Steganographic encoding can deliver commands to malware already on your system without triggering network-level detection because the traffic looks like a normal image load. In Gmail: Settings > General > Images > Ask before displaying external images. In Proton and Tuta, remote image loading is blocked by default.
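To make the tracking-pixel mechanism concrete, here is a sketch of how a sender builds one. The domain is hypothetical; the point is that the image URL is unique per recipient, so the fetch itself is the report:

```python
import uuid

def tracking_pixel_html(campaign: str, recipient: str) -> str:
    """Build the kind of per-recipient pixel a marketing platform embeds.

    The tracker domain is hypothetical. Because the URL is unique to one
    recipient, the moment an email client fetches the image, the sender
    learns the open happened, when, and from what IP address.
    """
    token = uuid.uuid5(uuid.NAMESPACE_URL, f"{campaign}:{recipient}")
    return (f'<img src="https://tracker.example.com/p/{token}.gif" '
            f'width="1" height="1" alt="">')
```

Blocking remote images defeats this entirely: the unique URL is never requested, so the sender learns nothing.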


7. Poison the Well

This Section

Advanced. Everything above is defense. This section is offense: making the data that remains about you unreliable. This is for people who read "stylometric analysis" and don't blink.

If the previous steps were about making breakfast, this step is more of a "how to cook a five star meal." This is much harder, and for a much more serious audience than the rest of this article. The steps here should make you say "oh, that's what a crazy person would do." But, there's also a subset of people who would look at these steps and say "wow, I never realized I needed to do that."

The Carreyrou investigation published this week deserves a closer look.[4] The New York Times fed archives from three cryptography mailing lists spanning 1992 to 2008 through AI analysis. They weren’t looking at what people wrote. They were looking at how people wrote. Specifically:

After stacking these filters across 600 active forum participants, one name remained. The suspect shared 67 of Satoshi Nakamoto’s exact hyphenation choices.[5] The next closest had 38. Forensic linguists call these “markers of sociolinguistic variation.”[13] They’re the stylistic fingerprints you leave in everything you write.

Your writing has them too. Every comment, every email, every review, every forum post. AI didn’t just learn to find vulnerabilities in code this week. It learned to find patterns across massive datasets faster than any human researcher.[1] The capability that chains four browser exploits together can chain behavioral data points about you. Pattern recognition at this scale changes what “anonymous” means.

This isn’t new. Security researchers have previously identified Russian state-sponsored hackers partly because every code commit happened during Moscow business hours. The attackers were technically sophisticated but operationally careless. Your login timestamps, your browsing hours, your app usage patterns tell the same story about you: when you wake up, when you work, where you are.

What to do about it. After you’ve deleted your old presence and cleaned up your accounts, don’t let your new online identity develop the same fingerprint.

The goal is not to become invisible. You can't. The goal is to make the data about you unreliable.

If your digital footprint is full of noise, contradictions, and false signals, it becomes much harder for anyone, human or AI, to build a coherent profile of who you are.


8. Continuing Care

This Section

Time: 2 hours, every 3 months. Your data comes back. Brokers rebuild profiles in 60 to 90 days. This section is the quarterly maintenance checklist.

You did the work. Now don’t mess it up!

Don’t log back in. Don’t check your old email “just to see.” Don’t peek at your deleted Facebook. Don’t access bank accounts you closed. Every login reactivates tracking, refreshes cookies, and re-links your identity to that account. If you deleted it, it’s gone. Keep it gone.

Keep a log of your digital exposure. Every new account you create, every app permission you grant, every service you sign up for: write it down. Your cell phone is a tracking device you carry voluntarily: your IMEI, your carrier, your location history, your app permissions, your Bluetooth pairings. Treat it that way.

Quarterly audit. Every three months:

  • Re-search yourself on the major people-search sites; opted-out profiles reappear in 60 to 90 days
  • Re-run your email addresses through Have I Been Pwned
  • Re-audit your OAuth grants and app permissions
  • Confirm your credit freezes are still in place at all three bureaus

Monitor what phones home. If you’re on macOS, Little Snitch shows you which applications are communicating, how much data they send, and when. Any app that communicates more than you expect is suspect. On any platform, NextDNS or Pi-hole blocks ads and trackers at the DNS level, which means they work across every app and browser on your network without installing anything on each device. Karpathy recommends NextDNS.[24] I’m installing both.

Physical mail. This is the vector nobody thinks about. The USPS change-of-address database is a tool skip tracers use to find people. If you file a mail forwarding request, anyone who knows your old address can check whether you’ve moved. This is public information.

If you’re in an active threat situation (doxxing, stalking, a custody dispute), a PO Box is not enough. A PO Box is still traceable to the post office where you pick up your mail. What you need is a commercial mail receiving agency (CMRA) or a registered agent service. These give you a real street address that isn’t your home. Your mail goes there. You pick it up or have it forwarded. The address isn’t connected to your physical location in any public database.

For everyone else: opt out of junk mail. DMAChoice.org lets you remove yourself from direct marketing lists. CatalogChoice.org stops catalog mailings. OptOutPrescreen.com stops pre-approved credit card offers, which are themselves a data trail, because every pre-approval is a record that a lender pulled your credit file.


Frequently Asked Questions

How long does this actually take? The initial cleanup, Sections 1 through 6, takes a full weekend if you’re focused. The data broker opt-outs (Section 3) take the longest because each site has a different process. Credit freezes take 15 minutes. OAuth cleanup takes 20 minutes. Social media deletion depends on how many accounts you have. After the initial weekend, it’s a quarterly maintenance task: maybe two hours every three months. Sections 7 and beyond are for people who really, really care about the way the world is going.

Can I really delete everything? No. Public records (court filings, property deeds, voter registration, business filings) are public by law in most states. You can’t delete them. Some states let you redact your address from voter rolls (California, for example). You can remove yourself from most data broker sites. You can delete your social media history. But anything that’s a matter of public record stays public. What you can do is break the link between those public records and the data broker profiles that aggregate them into a single searchable dossier. You can also use tools like blind trusts or registered agents at the time you create these entities. Retrofitting privacy onto something born public never works.

What do I do about public records? Court records, property records, and business filings are generally public and can’t be deleted. Data brokers scrape these records and combine them with other data sources to build a profile. Opting out of the broker doesn’t delete the public record, but it breaks the aggregation. Someone would have to go county by county to find the same information the broker assembled in one place. That’s the point: you’re raising the cost of finding you. Few adversaries will pay what a 50-state locate search costs.

Will my data come back? Yes. Data brokers scrape public records and purchase data from other sources continuously. After you opt out, your profile will typically reappear within 60 to 90 days. This is why quarterly monitoring matters. The initial sweep gets you to zero. Ongoing maintenance keeps that presence minimal.

What does attorney-client privilege protect? When you hire an attorney to handle your internet deletion, the work becomes attorney work product. The records of what data brokers held about you (addresses, phone numbers, family members, asset indicators) are privileged. Who we got them from is privileged. Why we removed them is privileged. In a litigation context (divorce, custody, civil suit), opposing counsel can’t compel discovery of those records from me. They could try to get them from the data broker, but only if the data broker still has them! If you use DeleteMe or Kanary, their records of what they found on you are not privileged and can be subpoenaed.

I’ve done this work. I had a client who is a military asset. We made sure his property records did not contain any indication of where he actually lived. The engagement, the findings, and the methods are all protected. If someone subpoenas me for what I found, I can’t be compelled to answer. That’s what privilege does for this kind of work. It’s not theoretical.

If you’re in a situation where the deletion process itself needs to be protected from discovery, that’s the difference.

Is this worth paying a lawyer for? For most people, no. But if you read this guide and decided you’d rather pay someone to do it, we’ll understand why. This guide gives you everything you need to do it yourself. If your situation is straightforward (you want more privacy, you want to stop junk mail, you want to clean up old social media), do it yourself. If you’re worried about getting sued over some comment you made in 2013, hiring us is the smarter move.

It’s worth paying a lawyer when: you’re anticipating litigation and need the process protected from discovery; a platform or data broker is ignoring your statutory deletion request and you need someone to enforce it; someone has posted defamatory content about you and opt-out forms won’t remove it; you’re dealing with revenge porn or non-consensual intimate images; or you need to identify an anonymous person who’s harassing you online, which requires a platform subpoena. We can help you destroy other people’s records about you, but note that in some circumstances, such as pending or anticipated litigation, you cannot lawfully destroy your own copies. These are all legal problems, not privacy projects.


9. When You Need More Than a Guide

Everything above handles the data brokers. A fine feather for your cap. Sometimes, we need something with more juice.

If someone posted something about you, say for instance a defamatory article, an intimate photo, a fake review, or your home address with intent to harm; data broker opt-out forms don’t touch that. A consumer privacy tool doesn’t touch that. Those are legal problems, and they require legal tools.

That’s what I do.

Defamation. If someone published false statements of fact about you (not opinions, not unflattering truths, but provably false claims) that’s actionable. It starts with a demand letter. If the letter doesn’t work, it becomes a lawsuit. If the content is on a platform (Google review, Yelp, Reddit), the demand goes to both the poster and the platform. Court-ordered removal is an option when the platform won’t voluntarily take it down. If you’re a business owner dealing with fake Google reviews, we handle those cases every week.

Non-consensual intimate images. Most states now have criminal statutes covering revenge porn. Many also have civil causes of action. I handle the civil side: DMCA takedown notices to every platform hosting the content, statutory demands under your state’s PIPA laws, and civil litigation against the person who distributed the images. In urgent cases, we can seek a temporary restraining order that requires platforms to remove content immediately.

Platform subpoenas. If someone is harassing or defaming you from an anonymous account, you can’t send a demand letter to someone you can’t identify. But you can subpoena the platform for the account holder’s IP address and registration information. This requires filing a John Doe lawsuit and serving a third-party subpoena on the platform. It’s a real legal proceeding, not a form you fill out.

Mugshot removal. If you were arrested and the charges were dropped or you were acquitted, mugshot sites may still display your booking photo. Many states now have laws prohibiting mugshot sites from charging for removal. Statutory demands under those laws, combined with direct platform contact, get most of these taken down.

Data broker non-compliance. If you sent a statutory deletion request and a data broker ignored it or denied it without a valid exemption, that’s a compliance failure. I can send a follow-up demand on firm letterhead citing the statute, the deadline they missed, and the penalties they’re exposed to. Most comply at that point. The ones that don’t get reported to the state AG.

If any of this applies to your situation, you need an attorney, not a privacy tool. Learn more about our identity protection and internet deletion services.


Sources and Further Reading

What Happened This Week and Why It Matters

Two pieces of research published in the same week changed the calculus on personal digital security. This section explains what they found, what it means, and where to read the primary sources.

On AI and zero-day exploitation. On April 7, 2026, Anthropic published a technical assessment of Claude Mythos Preview’s cybersecurity capabilities.[1] The paper, authored by Nicholas Carlini, Newton Cheng, Keane Lucas, Michael Moore, Milad Nasr, and seventeen other researchers, documents a general-purpose language model that can autonomously discover and exploit previously unknown vulnerabilities in production software.

The findings that matter for this guide:

Mythos Preview discovered a 27-year-old denial-of-service vulnerability in OpenBSD’s TCP SACK implementation.[2] OpenBSD is an operating system whose entire reputation is built on security. This bug survived nearly three decades of expert human review and millions of automated security tests. The model found it across roughly 1,000 scaffold runs at a total cost under $20,000. It has since been patched.

Mythos Preview discovered a 16-year-old vulnerability in FFmpeg’s H.264 video codec.[9] The flaw was introduced in a 2003 commit and exposed by a 2010 refactor. Every fuzzer and every human reviewer who examined the code since 2010 missed it.

The model built a remote code execution exploit against FreeBSD’s NFS server that gave full root access to unauthenticated users from anywhere on the network. The exploit split a 20-gadget Return Oriented Programming chain across multiple network packets to bypass authentication checks. An unauthenticated attacker on the network could have used this to take complete control of the server.

In a browser security test, Mythos Preview chained four separate vulnerabilities together into a JIT heap spray that escaped both the browser’s renderer sandbox and the operating system’s sandbox, giving an attacker the ability to write directly to the OS kernel. Visiting a webpage was all it took.

The model’s predecessor, Claude Opus 4.6, had a near-zero percent success rate at autonomous exploit development. Mythos Preview, tested against the same Firefox 147 JavaScript engine vulnerabilities, produced 181 working exploits where Opus 4.6 produced 2. Anthropic did not specifically train Mythos Preview to do this. The capabilities emerged from general improvements in code reasoning and autonomous operation.

What this means for your data: every company holding your personal information runs software built on the same codebases Mythos Preview found vulnerabilities in. Linux kernels, web servers, browsers, video processing libraries. If a 27-year-old bug in one of the most security-audited operating systems on earth went undetected until an AI model found it, the systems at data brokers, banks, social media companies, and cloud providers almost certainly contain similar undiscovered vulnerabilities. You cannot fix their security. You can reduce how many of them hold your data. Every deletion request you send removes your information from one more system that could be the next breach.

Anthropic has not released Mythos Preview publicly. It is available only to a restricted group of industry partners (Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks) through a defensive initiative called Project Glasswing.[19] The intent is to patch vulnerabilities before equivalent capabilities become widely available. Over 99% of the vulnerabilities Mythos Preview has discovered remain unpatched at the time of this writing.

On linguistic fingerprinting. On April 8, 2026, the New York Times published an investigation by John Carreyrou (the journalist who exposed the Theranos fraud) identifying British cryptographer Adam Back as the most likely candidate for Satoshi Nakamoto, Bitcoin’s pseudonymous creator.[4] The investigation’s methodology is what matters for this guide, not its conclusion.

Carreyrou and colleague Dylan Freedman spent over a year building a database from three cryptography mailing list archives (the Cypherpunks, Cryptography, and Hashcash lists) spanning 1992 to 2008. They fed over 30,000 posts from roughly 600 active participants through AI analysis, looking not at what people wrote but at how they wrote. The markers they tracked:

Hyphenation patterns. Satoshi consistently hyphenated compound nouns (like “proof-of-work”) while leaving compound adjectives unhyphenated (like “file sharing” and “hand tuned”). This is the reverse of standard English usage. One suspect matched 67 of Satoshi’s exact hyphenation choices.[5] The next closest candidate matched 38.

British and American spelling alternation. Satoshi mixed “colour” and “color,” “defence” and “defense,” within the same body of writing. The frequency and distribution of these alternations became a fingerprint.

Micro-habits. Double-spacing between sentences (a generational marker). Confusion between “its” and “it’s” (not just whether, but how often and in what syntactic contexts). Placing “also” at the end of sentences. Writing “bugfix” as one word and “half way” as two.
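None of these markers require exotic tooling to extract. Here is a toy Python sketch of the kind of feature counting involved; the word lists are tiny stand-ins of my own choosing, where real forensic stylometry tracks hundreds of features across large corpora:

```python
import re
from collections import Counter

def style_markers(text: str) -> Counter:
    """Count a few crude stylometric features of the kind described above.

    Illustrative only: the spelling lists are abbreviated stand-ins, not
    the Times investigation's actual feature set.
    """
    c = Counter()
    # Hyphenated compounds ("proof-of-work", "hand-tuned")
    c["hyphenated_compounds"] = len(re.findall(r"\b\w+-\w+\b", text))
    # British vs. American spelling alternation
    c["british_spellings"] = len(
        re.findall(r"\b(?:colour|defence|favour)\b", text, re.I))
    c["american_spellings"] = len(
        re.findall(r"\b(?:color|defense|favor)\b", text, re.I))
    # Double-spacing after sentence-ending punctuation (a generational marker)
    c["double_spaced_sentences"] = len(re.findall(r"[.!?]  [A-Z]", text))
    # Raw frequency of "it's" (a precursor to its/it's confusion analysis)
    c["apostrophe_its"] = len(re.findall(r"\bit's\b", text, re.I))
    return c
```

Run this over two bodies of text and compare the counts per thousand words; stacking enough such filters is exactly how the candidate pool of 600 collapsed to one.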

After stacking these filters sequentially, only one name remained in the dataset. Forensic linguist Robert Leonard of Hofstra University told the Times that these idiosyncrasies represent “markers of sociolinguistic variation,” linguistic fingerprints that can help identify an author.[13]

What this means for your data: you have these markers too. Every email, every forum post, every review, every comment you have ever written carries stylistic fingerprints you are not aware of. The same AI capabilities that can chain four browser exploits together can analyze millions of text samples to identify patterns in your writing. If your goal is to maintain separation between your old online identity and your new one, consistent writing habits are the link that connects them. Varying your style is not paranoia. It is operational security against a capability that now demonstrably exists.

Adam Back has denied being Satoshi Nakamoto. The investigation remains circumstantial. No cryptographic proof (a signature from Satoshi’s original keys) has been produced.

Full Source List

[1] Carlini, N., Cheng, N., Lucas, K., Moore, M., Nasr, M., et al. “Assessing Claude Mythos Preview’s Cybersecurity Capabilities.” red.anthropic.com, April 7, 2026. https://red.anthropic.com/2026/mythos-preview/

[2] Carlini et al., 2026. OpenBSD TCP SACK vulnerability (27 years old, now patched). Patch: https://ftp.openbsd.org/pub/OpenBSD/patches/7.8/common/025_sack.patch.sig

[3] “Anthropic’s Mythos Preview and the End of a Twenty-Year Cybersecurity Equilibrium.” Post-Quantum, April 8, 2026. https://postquantum.com/ai-security/anthropic-mythos-preview-ai-offensive-security/

[4] Carreyrou, J. and Freedman, D. “Is the Creator of Bitcoin This British Cryptographer?” The New York Times, April 8, 2026. (Subscription required.)

[5] Carreyrou and Freedman, 2026. 67 shared hyphenation errors vs. 38 for the next-closest candidate.

[6] Federal Bureau of Investigation, Internet Crime Complaint Center (IC3). “2021 Internet Crime Report.” https://www.ic3.gov/Media/PDF/AnnualReport/2021_IC3Report.pdf

[7] Grauer, Y. “Big Ass Data Broker Opt-Out List (BADBOOL).” GitHub. Last updated March 28, 2026. https://github.com/yaelwrites/Big-Ass-Data-Broker-Opt-Out-List

[8] California Privacy Protection Agency. “Data Broker Registry and DELETE Request Opt-Out Platform (DROP).” https://cppa.ca.gov/data_broker_registry/

[9] Carlini et al., 2026. FFmpeg H.264 codec vulnerability (16 years old, introduced in 2003 commit, exposed by 2010 refactor).

[10] Carlini et al., 2026. N-day exploitation: 100 Linux kernel CVEs from 2024/2025, filtered to 40 exploitable candidates, working privilege escalation exploits for more than half.

[11] “I Am Not The Product.” Privacy request generator. https://optout.iamnottheproduct.com/

[12] Krebs, B. “Experts Fear Crooks are Cracking Keys Stolen in LastPass Breach.” KrebsOnSecurity, September 2023. https://krebsonsecurity.com/2023/09/experts-fear-crooks-are-cracking-keys-stolen-in-lastpass-breach/

[13] Leonard, R. (Hofstra University), quoted in Carreyrou and Freedman, 2026. “Markers of sociolinguistic variation.”

[14] Privacy Rights Clearinghouse. “Data Brokers.” https://privacyrights.org/data-brokers

[15] Consumer Reports. “How to Delete Your Information from People-Search Sites.” https://www.consumerreports.org/electronics/personal-information/how-to-delete-your-information-from-people-search-sites-a6926856917/

[16] State of Surveillance. “Complete People Search Opt-Out Master List.” https://stateofsurveillance.org/guides/advanced/people-search-master-list/

[17] Have I Been Pwned. https://haveibeenpwned.com/

[18] ObscureIQ. “Finding Your Mobile Ad ID (MAID).” https://www.obscureiq.com/wp-content/uploads/2025/09/ObscureGuide-FindingYourMobileAdID-MAID.pdf

[19] Anthropic. “Project Glasswing.” https://anthropic.com/glasswing

[20] “Claude Mythos: The Critical Step Change Needed for Cyber Defence.” Bridewell, April 8, 2026. https://www.bridewell.com/insights/blogs/detail/claude-mythos-the-critical-step-change-needed-for-cyber-defence

[21] Willison, S. “Anthropic’s Project Glasswing.” simonwillison.net, April 7, 2026. https://simonwillison.net/2026/Apr/7/project-glasswing/

[22] “Anthropic Mythos model can find and exploit 0-days.” The Register, April 7, 2026. https://www.theregister.com/2026/04/07/anthropic_all_your_zerodays_are_belong_to_us/

[23] “Anthropic’s new AI model finds and exploits zero-days across every major OS and browser.” Help Net Security, April 8, 2026. https://www.helpnetsecurity.com/2026/04/08/anthropic-claude-mythos-preview-identify-vulnerabilities/

[24] Karpathy, A. “Digital Hygiene.” karpathy.bearblog.dev, March 17, 2025. https://karpathy.bearblog.dev/digital-hygiene/

Justin Abdilla, Esq. | Abdilla Law

The information in this guide is for educational purposes and does not constitute legal advice. No attorney-client relationship is formed by reading this guide.