What Actually Happens When You Switch SSL Certificates
Googlebot and Expired Certificates: Crawling Through Smoke
One of the first times I swapped an SSL cert on a live site without notifying the crawl gods, Googlebot started acting like I’d locked it out. Indexing flatlined for a few days. No errors in Search Console, just… quiet. Like the site had ghosted itself.
Turns out, Googlebot doesn’t immediately freak out if your cert expires or changes, but the site won’t be crawled properly if the handshake fails, even briefly. Especially frustrating: Search Console doesn’t expose cert errors in the URL Inspection tool, so you’re stuck debugging with guesswork unless you catch it in the server logs.
If you’ve ever used a third-party cloud WAF that auto-renews certs (like Cloudflare’s Universal SSL), you can also get bitten when the chain updates. Valid trust isn’t just about the leaf cert: the whole chain gets checked on every connection, and if an intermediate CA doesn’t update cleanly, bots can silently fail the handshake and stop crawling.
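These days, when I suspect a chain problem, I pull the certificate chain exactly as a bot would see it and let OpenSSL do the verifying instead of trusting the browser padlock. A minimal sketch, with example.com standing in for your hostname:

```bash
# Dump the full chain as served plus the verification result ("Verify return code").
# A stale intermediate shows up here even when the leaf looks fine in a browser
# that already has the intermediate cached.
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null

# Just the leaf's subject, issuer, and validity dates for a quick expiry check.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

If the verify return code is anything other than 0 (ok), assume crawlers are seeing the same thing.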
“No response received — connection failed” felt like a polite way of saying ‘your certificate’s a dumpster fire.’
SEO Signals in TLS: Not a Ranking Factor, But Still Vital
You’ve probably read the 2014 memo that HTTPS is a “lightweight ranking signal.” Spoiler: that’s not the real reason to care. The big impact comes from what breaks when you mess it up — redirects, canonical chains, embeds. So while HTTPS itself might not make you rank higher, breaking it can tank your rankings overnight.
For example:
- Broken HTTPS on a preloaded domain means browsers refuse to fall back to HTTP, so the subdomain simply goes dark (hello, HSTS preload)
- Mixed content from insecure image URLs hurts Core Web Vitals and image discoverability
- Links auto-upgraded from http:// to https:// can land on expired or invalid-cert versions mid-crawl
That last one was brutal — we had an old CDN bucket served over HTTP, and Google started rewriting it thinking they were helping. Except the HTTPS version 403’d half the assets. Rankings dipped, and three weeks later we figured out what was happening. The fix? Proper 301s off the CDN domain and a death march of rewriting image URLs manually in the CMS.
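If you suspect the same auto-upgrade trap, a quick way to see whether the HTTPS versions of your HTTP asset URLs actually resolve is to batch them through curl and compare status codes. A rough sketch, where urls.txt is a hypothetical file of http:// asset URLs scraped from your pages:

```bash
# For each http:// asset URL, request the https:// equivalent and print the status code.
# Anything that isn't a 200 here is a URL Google may be "helpfully" upgrading into a dead end.
while read -r url; do
  https_url=$(printf '%s\n' "$url" | sed 's|^http://|https://|')
  code=$(curl -s -o /dev/null -w '%{http_code}' "$https_url")
  echo "$code  $https_url"
done < urls.txt | sort
```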
Let’s Talk Redirect Chains and Protocol Jumping
301 from HTTP to HTTPS? Cool. 301 from a secondary domain to canonical? Also fine. 301 to a version of HTTPS with a cert mismatch and then back again? Not so fine.
I once audited a client site where:
- http://site.com → https://site.com (cert valid)
- https://site.com → https://www.site.com (cert expired)
- https://www.site.com → https://www.site.com/home (cert valid again)
This looked fine to browsers. But Googlebot won’t gloss over an invalid cert in the middle of a chain. That expired cert effectively cut off indexing for every page below it. No, it doesn’t always trigger a crawl error. Sometimes it just… skips.
Lesson? Do a full redirect chain test from every base variant. HTTP/root and HTTPS/non-www are not guaranteed to take the same code path.
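My lazy version of that test is a loop over the four base variants that follows each hop by hand, so a mid-chain handshake failure shows up as a 000 instead of vanishing inside curl -L. A sketch, with site.com as the placeholder domain:

```bash
# Walk the redirect chain from every base variant, one hop at a time.
# A status of 000 means the request never completed: usually DNS or a failed TLS handshake.
for start in http://site.com https://site.com http://www.site.com https://www.site.com; do
  echo "== chain from $start"
  url="$start"
  for _ in 1 2 3 4 5; do
    out=$(curl -s -o /dev/null -w '%{http_code} %{redirect_url}' "$url" 2>/dev/null)
    code=${out%% *}
    next=${out#* }
    echo "   $code  $url"
    [ "$code" = "000" ] && { echo "   ^ handshake/connection failure on this hop"; break; }
    [ -z "$next" ] && break
    url="$next"
  done
done
```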
Cloudflare’s “Full” vs “Full (Strict)” Modes: A Stealth Trap
Okay, this one still gets people. So Cloudflare offers two SSL modes:
- Full: Encrypts the connection to the origin, but doesn’t care if the origin cert is self-signed, expired, or otherwise wacky.
- Full (Strict): Requires a valid cert chain all the way through.
The problem? If you’ve been running Full and then flip to Strict without installing a valid cert on the origin, the site still looks fully functional to you, because Cloudflare is serving roughly 95% of it from cache. But when Googlebot (or anything else) hits a cold resource that has to go back to the origin, the request fails silently. The TTLs mask the issue until you get buried three weeks later.
Why did that image file stop getting crawled in March? Because Strict was enabled 12 days earlier and your origin SSL was expired, that’s why.
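Before flipping a zone to Strict now, I check the origin cert directly and skip the Cloudflare edge entirely, which is roughly the validation Strict performs on every cache miss. A sketch, assuming 203.0.113.10 is your origin IP and site.com the hostname (both placeholders):

```bash
# Talk TLS straight to the origin with the right SNI, bypassing Cloudflare's edge.
# The verify result and dates here are what Full (Strict) will judge you on.
echo | openssl s_client -connect 203.0.113.10:443 -servername site.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates

# Or make curl resolve the hostname to the origin IP and do a full verified request.
curl -sv --resolve site.com:443:203.0.113.10 https://site.com/ -o /dev/null
```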
Canonical Tags and HTTPS Drift: Staying Pointed at the Right URL
Canonical tags don’t fix protocol mismatch automatically. I learned this the hard way when a Shopify install started injecting <link rel="canonical" href="http://..."> on 800+ collection pages. By the time we caught it, non-HTTPS copies of the pages were competing with the real ones, and even outranking them for some long-tail queries.
Cool twist: even after we fixed the canonical tag, Google refused to respect it, because the site had already been indexed both ways. We had to serve a robots.txt on the HTTP origin that disallowed everything, plus force HSTS, before things re-consolidated.
Canonical tags are trust-based; they’re not law. If you give bots bad input once, they might not forgive you quickly, especially if your SSL setup flips around during that period or the redirects bounce through shady IPs (seen that too: an AWS ALB went rogue).
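Since then I spot-check rendered pages for http:// canonicals before they pile up. A rough sketch, with urls.txt standing in for a handful of collection-page URLs:

```bash
# Fetch each page and flag any canonical tag that still points at http://.
# Assumes double-quoted attributes; loosen the patterns if your templates differ.
while read -r url; do
  hits=$(curl -s "$url" | grep -oE '<link[^>]*rel="canonical"[^>]*>' | grep -F 'href="http://')
  [ -n "$hits" ] && printf '%s\n%s\n\n' "$url" "$hits"
done < urls.txt
```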
What Search Console Won’t Tell You About TLS Breakage
There is no TLS error watcher in Search Console. Don’t wait for it to warn you. You’re better off checking:
- Your server logs for handshake timeouts or “SSL_ERROR_SYSCALL”-style failures
- Fetching the site yourself via curl with Googlebot’s UA string and header bundle
- Third-party tools that simulate Googlebot’s fetch and render from different locations
There’s a strange behavior I hit with a Chinese registrar-managed cert: it passed local validation, but failed for some subset of Google datacenters because of a missing SAN field. Not the CN; the Subject Alternative Name was simply absent, and newer TLS libraries reject such a cert on sight. Search Console never mentioned a thing. I only found out through curl -vL https://example.com --user-agent "Googlebot" and debugging the handshake failures in Wireshark.
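The check that finally caught it was dumping the SAN section straight off the served cert; if nothing comes back, modern clients will reject the cert regardless of what the CN says. A minimal sketch against a placeholder hostname:

```bash
# Print the Subject Alternative Name section of the cert actually being served.
# An empty result means no SAN extension at all, which newer TLS stacks treat as invalid.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```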
“I see nothing wrong,” said the whole GSC interface while half the index vanished.
Don’t Let Let’s Encrypt Screw Your AAAA Records
This sounds melodramatic, but I’ve lost track of how often Let’s Encrypt gets people nailed via IPv6. If an AAAA record exists, the ACME validation traffic prefers it, so when that record points at a host that doesn’t answer the challenge properly, validation fails and the cert won’t auto-renew, all while the A record alone would have passed.
So: your IPv6 path fails its cert challenges, some portion of your users starts seeing errors or certificate mismatches, Googlebot hits the A and AAAA records alternately depending on where it connects from, and you’re left wondering why the German index dropped to near-zero impressions for three weeks.
I now just temporarily disable AAAA records during cert renewals if I suspect anything weird with dual-stack hosting. DNS silently burns you renewal after renewal, and no part of Let’s Encrypt warns you about this; it assumes you either know or test via staging. (Fun: behind Cloudflare you won’t even see the AAAA answers unless you explicitly ask for them with dig or nslookup.)
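My dual-stack sanity check before a renewal is now just: hit the site over each protocol separately and make sure both answer cleanly. If -6 fails while -4 works, the AAAA record is your problem. A sketch with example.com as the placeholder:

```bash
# What does the outside world resolve for each record type?
dig +short A example.com
dig +short AAAA example.com

# Force IPv4, then IPv6. A handshake or HTTP failure on -6 with a clean -4
# is exactly the split that breaks ACME challenges while the site "looks fine" to you.
curl -4 -sS -o /dev/null -w 'IPv4: %{http_code}\n' https://example.com/
curl -6 -sS -o /dev/null -w 'IPv6: %{http_code}\n' https://example.com/
```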
Embedded Third-Party Tools That Inject Protocol Poison
Aha moment from two years ago: an earnings drop traced back to a single third-party review widget that linked its CSS and JS via http:// even on HTTPS pages. Chrome blocked them as mixed content. Googlebot couldn’t render the full page layout, so layout-shift (CLS) issues started showing up.
That dominoed straight into Core Web Vitals reports showing layout elements missing, click zones misaligned, and long interactivity delays — which then pushed the site out of the top 3 for its big revenue query.
Quick tip: DO grep your HTML output for any instance of http:// in script, link, and iframe sources, then filter it out or force-upgrade it in the rendering template. Cloud WAFs won’t always catch it; I had one that rewrote 90% of embedded links but missed one because of a malformed attribute (“src =” vs “src=”). That’s all it took for Google to choke.
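The grep I ended up keeping is deliberately sloppy about whitespace, precisely because the miss that burned us was the “src =” variant. A sketch against a single rendered page (example.com is a placeholder):

```bash
# Find http:// references in src/href attributes, tolerating spaces around '='
# (the "src =" form is the one automatic rewriters tend to miss).
curl -s https://example.com/ \
  | grep -oEi "(src|href)[[:space:]]*=[[:space:]]*[\"']http://[^\"' >]+"
```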
Unexpected HSTS + Preload Snags On Subdomain Migrations
If you published an HSTS preload header on www.site.com and then try to migrate to just site.com, welcome to the penalty box.
Chrome and Firefox force any insecure hit on a preloaded host up to HTTPS before the request even goes out, so if the host being hit doesn’t serve a valid cert over HTTPS, users (and bots) just see connection refused or an invalid cert; there’s no HTTP fallback to save you.
We had a media property go from www to the root domain during a big template refresh, not realizing that the preloaded HSTS policy was still active and flagged as includeSubDomains. All admin and staging tools broke. Bots saw protocol redirects buried five layers deep, and none of the hops passed cert validation. Indexed content fell, and no amount of redirect tinkering could patch it until we completely removed the preload via a 6-week delisting process on hstspreload.org.
There’s no visible error. No Search Console notification. Your domain just turns into a handshake dead-end.
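If you’re about to shuffle hosts around, it’s worth looking at the HSTS policy the old host is actually handing out before you touch DNS. A quick sketch, with www.site.com as the placeholder:

```bash
# Show the HSTS policy currently being served. 'includeSubDomains' plus 'preload'
# means the policy follows you everywhere and lives in the browsers' baked-in list
# until you go through the hstspreload.org removal process.
curl -sI https://www.site.com/ | grep -i '^strict-transport-security'
```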