How I Audit Old Blog Posts Without Losing My Mind or Rankings
Finding the Garbage: Actually Locating Old Posts That Need a Fix
Step zero is realizing how many of your posts are quietly rotting in the attic of your content folder. For one of my older AdSense-heavy tech blogs, I ran a quick export from Search Console and found over a hundred posts getting clicks but with absolutely miserable CTRs—like under one percent. Classic ghost posts that users bounce from instantly, even if they rank.
The sad part? Some had solid backlinks, but the content was a mid-2010s nightmare of keyword-stuffed nonsense and five H2s that just repeated the title. Google still indexed them, but humans bailed fast.
Here’s what I usually scrape together to build a punch-list:
- Search Console URL data filtered by CTR under 2%
- Analytics for average time on page under 30 seconds (usually bounce-fodder)
- Ahrefs or whatever to find articles with links worth salvaging
- AdSense revenue by URL — if it’s making more than $5 a month, I’ll prioritize even if it’s ugly
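The punch-list above is easy to script once you have a Search Console export. This is a minimal sketch, assuming a hypothetical Pages.csv export with "Top pages", "Clicks", "Impressions", "CTR", and "Position" columns (your column names may differ depending on how you export):

```python
import csv

def flag_ghost_posts(path, ctr_threshold=0.02, min_impressions=100):
    """Return URLs that get impressions but a miserable CTR,
    worst offenders (highest impressions) first."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            impressions = int(row["Impressions"].replace(",", ""))
            ctr = float(row["CTR"].rstrip("%")) / 100  # "0.8%" -> 0.008
            if impressions >= min_impressions and ctr < ctr_threshold:
                flagged.append((row["Top pages"], impressions, ctr))
    return sorted(flagged, key=lambda t: -t[1])
```

Cross-reference the output against your backlink and revenue data by hand; the script only finds candidates, it doesn't decide what's salvageable.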
Back up the raw HTML or markdown before you patch anything. I made edits to an old tutorial once, accidentally nuked the JSON-LD schema, and rankings vanished for three weeks. (It came back. But yeah, copy-paste backups are sacred.)
Audit Tool Overload: What’s Worth Running and What’s Just Noise
Let’s get this out first: almost any automated SEO audit tool will tell you the sky is falling. Screaming Frog, Ahrefs Site Audit, even Lighthouse—all useful, none gospel.
For content audits, I’ve found actual human review is more efficient unless you’ve scaled past 500 blog posts. That said, here’s the bare-minimum tech stack I cycle through:
- Lighthouse: Performance, accessibility, and front-end disasters like contrast errors
- Search Console: Actual impressions vs CTR punching you in the face
- Wayback Machine: Compare old cached versions of the post (especially useful if you can’t remember when or why something was changed)
- Manual read-through: Yes, out loud. If a sentence makes you cringe, users clicked away too
Watch out for Lighthouse returning a Core Web Vitals pass when the page is actually super annoying thanks to janky ad layout shifts. It doesn't always catch ads injected via JS that don't push layout correctly. You'll see a CLS under 0.1 and still have users bailing because a leaderboard shoves the article halfway through a paragraph. It's not a bug; it's just not real-world tested.
How I Rate What’s Salvageable Versus What to Nuke
I used to try to save every post, and that led to some hilarious fails. Like spending two hours rewriting something about “best free antivirus tools for Windows Vista” in 2023. Should’ve just deleted it immediately.
What I use now is a loose three-bucket system that saves hours of second-guessing:
- KEEP: Evergreen topic, ranked in top 20, some traffic, can be improved cleanly
- TRASH: Dead product reviews, obsolete tech (Flash tutorials…), or anything with zero rank/links/CTR
- REPURPOSE: Keyword loss post with solid links — strip it down, rebuild with new angle
One thing that tricks people: old listicles with numbers in the URL or headline, like /top-10-chrome-extensions-2017. Even if it ranks, people bounce because they instantly recognize it's dated. Don't just update the year in the title; rebuild the post or 301 it into a newer format. Otherwise, you're just duct-taping a corpse.
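Spotting those dated listicles is scriptable too. A minimal sketch, assuming you run it over a list of slugs or titles (the two-year horizon is my own arbitrary cutoff):

```python
import re
from datetime import date

YEAR_RE = re.compile(r"\b(19|20)\d{2}\b")

def looks_stale(slug_or_title, horizon=2):
    """Flag a slug or title carrying a year more than `horizon` years old.
    Anything with the current or recent year passes."""
    years = [int(m.group(0)) for m in YEAR_RE.finditer(slug_or_title)]
    return any(y < date.today().year - horizon for y in years)
```

Anything it flags goes into the TRASH or REPURPOSE bucket for a human decision; the year alone doesn't tell you whether the backlinks are worth keeping.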
Content Cannibalization Is Real and Easier to Miss Than You Think
There’s something subtle and evil about writing multiple posts on the same tool. I had three separate articles on “debugging AdSense in Chrome,” each with slightly different spins — one about iframe quirks, another on certain extensions, a third on Ad Block detection from JavaScript.
They each got traffic. But not predictable traffic. Turns out they were all fighting for the same couple of queries, and none of them ranked consistently. After merging them into one monster canonical post and 301-ing the others, impressions literally tripled. Clicks didn't follow right away, but within a few months the graph was a straight line up.
Telltale signs you’re eating your own search tail:
- Your CTRs are low across 2–4 thematically similar pages
- You have zero featured snippets despite being on page one
- Search Console shows impressions scattered strangely between similar pages
Search for the exact keywords you think you're targeting, then run each competing post through the URL Inspection tool. If Google's "selected canonical" disagrees with yours on two or three of them: collapse, baby, collapse.
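Finding the overlap by hand gets old fast. A minimal sketch, assuming you can get (query, page, impressions) rows, e.g. from a Search Console API export (the row format here is my own assumption):

```python
from collections import defaultdict

def find_cannibalized_queries(rows, min_pages=2):
    """rows: iterable of (query, page, impressions) tuples.
    Returns queries where multiple of your own pages compete,
    mapped to {page: impressions}."""
    pages_per_query = defaultdict(dict)
    for query, page, impressions in rows:
        pages_per_query[query][page] = (
            pages_per_query[query].get(page, 0) + impressions)
    return {q: pages for q, pages in pages_per_query.items()
            if len(pages) >= min_pages}
```

Every query it returns is a merge candidate; the page with the most impressions (or the best backlinks) is usually the one to keep as the canonical.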
When Internal Links Are Killing You Instead of Helping
Not joking, I once fixed an issue where the top two ranking blog posts had 60+ outgoing links to lower-performing junk — and those junk pages linked back with keyword-stuffed anchor text. Massive internal link gridlock. I thought crosslinking would boost everything. It did the opposite. After I yanked half of them and only linked up to 5 places max per post (with clean anchors), average position for both main articles improved within weeks.
This caught me off guard: Google didn’t devalue the pages themselves, but the linking structure made it impossible for it to decide which one was centerpiece content. This wasn’t a penalty. It was a resource allocation problem. Basically—shotgun linking dilutes your own authority.
Anchor choice matters more than people admit: I did a fresh audit and noticed internal links kept using "click here," "this guide," "covered here," even when linking to full tech documents. Rewriting those to match the page titles or keywords (without stuffing) clearly correlated with an uptick in average time on site. Could be user expectation alignment, could be Google parsing it better. Probably both.
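Both problems (link count and lazy anchors) can be flagged in one pass over a post's HTML. A minimal sketch using the stdlib parser; the generic-anchor list and the 5-link budget are my own assumptions, tune them to taste:

```python
from html.parser import HTMLParser

GENERIC = {"click here", "this guide", "covered here", "here", "read more"}

class LinkAudit(HTMLParser):
    """Collect internal links (href, anchor text) from one post's HTML."""
    def __init__(self, site_root="/"):
        super().__init__()
        self.site_root = site_root
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith(self.site_root):  # internal links only
                self._href = href
                self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def audit(html, max_links=5):
    parser = LinkAudit()
    parser.feed(html)
    generic = [link for link in parser.links if link[1].lower() in GENERIC]
    return {"count": len(parser.links),
            "over_budget": len(parser.links) > max_links,
            "generic_anchors": generic}
```

Run it across the whole content folder and sort by "count"; the gridlocked posts jump out immediately.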
Meta Title Edits That Tank Rankings — Yes, Even Good Ones
This was the weirdest one recently.
I updated the title tag of a high-performing post from “How To Fix Chrome AdSense Issues (2021)” to just “Chrome AdSense Debug Problems Solved.” Cleaner, right? Shorter, still accurate. Rankings dropped off a cliff within 48 hours.
Turns out it was showing as a featured result for a bunch of longtail keyphrases that matched the old date-based title exactly. By removing the year, I killed matches for “fix chrome adsense 2021,” “adsense not working on chrome 2021,” etc. Yes, people still searched with the year. Adding it back (no other change) brought traffic back a week later. Not as high, but clearly tied.
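If you keep before/after query exports around a title change, you can confirm this kind of damage instead of guessing. A minimal sketch, assuming hypothetical {query: clicks} dicts pulled from two Search Console date ranges (the 50% drop threshold is my own arbitrary cutoff):

```python
def queries_lost_after_title_change(before, after, token):
    """before/after: {query: clicks} dicts for comparable date ranges.
    Returns queries containing `token` (e.g. a removed year) whose
    clicks cratered, mapped to (clicks_before, clicks_after)."""
    lost = {}
    for query, clicks in before.items():
        if token in query and after.get(query, 0) < clicks * 0.5:
            lost[query] = (clicks, after.get(query, 0))
    return lost
```

If the lost queries cluster around the exact words you removed from the title, you have your answer.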
So yeah, even logical cleanups can break things. It’s not about your taste — it’s about user behavior, which often sucks.
Redirects That Lose Authority — And How I Learned to Spot Them
I had a mess of old tutorials that I renamed when switching CMSs — WordPress to Eleventy. Pretty URLs, sure. But my 301s went from:
/debug-adsense-chrome-troubleshooting → /chrome-adsense-fix
Nice short slugs, right? Except rankings dipped sharply. I manually dug through the diff and found the old page had backlinks whose anchor text used the full original title. And because the new page wasn't optimized for the same synonyms ("troubleshoot", "debug"), the link context got nuked.
Google will follow your redirect. But it doesn’t inherit the context of anchor text unless your target page matches intent really closely. Wild how I confirmed it: re-added the original page as a shell, canonical-tagged it to the new one, ranking rebounded within 3–4 days.
A few redirect rules I stick to now:
- Keep the same major keywords in the slug if the post is retaining backlinks
- Don’t redirect to a differently themed page (e.g., “debugging” → “setup”)
- If you MUST redirect, make sure the headers, H1, and first paragraph reinforce the original topic heavily
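The first rule is easy to sanity-check before you ship a redirect map. A minimal sketch comparing slug keywords; the stopword list is my own assumption:

```python
def shared_slug_keywords(old_slug, new_slug,
                         stopwords=frozenset({"the", "a", "and", "to", "of", "in"})):
    """Rough check that a 301 target keeps the old slug's major keywords.
    Returns (keywords kept, keywords lost)."""
    def tokens(slug):
        return {w for w in slug.strip("/").split("-")
                if w and w not in stopwords}
    old, new = tokens(old_slug), tokens(new_slug)
    return old & new, old - new
```

Anything showing up in the "lost" set (like "troubleshooting" in my case) is a word your backlink anchors might depend on, so either keep it in the slug or make sure the target page's H1 and first paragraph carry it.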
The Hidden Trap of Schema Carryover
Let’s talk about the lazy copy/paste crime you’ve definitely committed: reusing post templates with outdated or irrelevant schema.
I was wondering why a newer article wasn't showing rich snippets, even though it had FAQs and was structurally perfect. Turns out I'd copy/pasted schema from an old "review" post — and the main type was still "@type": "Product".
Even though the page was completely informational, Google apparently didn’t know how to categorize it. No rich snippet, no structured display.
Changed it to "@type": "Article" and resubmitted in Search Console — next crawl, boom, FAQ displays returned.
“Google will skip your schema if it doesn’t match page behavior — even if it’s technically valid.”
Undocumented behavior? Absolutely. Nothing on Schema.org or in Google's structured data docs tells you that mislabeling the type hides your enhancements — it just… doesn't work.