Dealing with the Weirdness of Code Review Tools and QA Platforms
Trying to Get Inline Comments to Work Consistently in Bitbucket
Bitbucket’s pull request experience is just okay until you dare to expect consistent behavior. Inline comments, for example — sometimes they attach to the right line and persist, and sometimes they just disappear silently when you update the PR diff. I once spent a full half-hour chasing a missing comment thread that simply didn’t get re-linked after rebasing and squashing a three-commit set into one. No warning, no history — just gone.
Turns out, if your reviewer leaves an inline comment on a line that vanishes after a force-push, Bitbucket doesn’t bother preserving the context (unlike GitHub, which at least tries). You have to dig it up in the activity tab, or worse, ask the reviewer to re-comment. Lovely.
Eventually I found that if you freeze the commenting phase until your rebases are done, things are more stable. But short of that, expect things to vanish if you casually reorder functions mid-review like an overzealous linter-script devotee.
> If you can’t rewrite cleanly without confusing everyone, just don’t. Your comments won’t anchor if the lines vanish. — Me, after losing three entire threads.
GitHub Reviews Forget Who Commented If Emails Are Not Verified
This one’s subtle but dumb. If your GitHub account is using a public email address that isn’t verified anymore (say, you deleted the Gmail for a side project from 2018), then your comments in PRs will occasionally show as “ghost” — like they’re from a deleted account. It’s not consistent, but I’ve had teammates get confused more than once when a bunch of code review reminders were apparently left by some anonymous banshee instead of, say, Raman from QA.
The kicker? Even though GitHub clearly knows it’s your account (you’re logged in, name and avatar load fine), the email mismatch triggers a cascading identity failure if you’re commenting on clones/forks where corporate SSO is configured. It’s an ugly edge case of bad multi-tenancy behavior. The fix? Make sure your primary email in GitHub matches what the PR repo expects. Or just re-verify every once in a while if you swap laptops or get archived and re-added to the org.
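If you want to catch this before it bites, a quick sanity script helps. This is a hypothetical check, not anything GitHub ships — it assumes Node 18+ for global `fetch` and a `GITHUB_TOKEN` env var with the `user:email` scope:

```js
// check-email.js: warn if your local git email isn't a verified
// address on your GitHub account (the "ghost comment" trigger).
const { execSync } = require('node:child_process');

async function main() {
  const localEmail = execSync('git config user.email').toString().trim();
  // GET /user/emails is the documented REST endpoint for the
  // authenticated user's addresses; GITHUB_TOKEN is assumed to be set.
  const res = await fetch('https://api.github.com/user/emails', {
    headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
  });
  const emails = await res.json();
  const ok = emails.some((e) => e.email === localEmail && e.verified);
  if (!ok) {
    console.error(`${localEmail} is not a verified GitHub email; expect ghost comments.`);
    process.exit(1);
  }
  console.log(`${localEmail} is verified.`);
}

main();
```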
Phantom ESLint Errors in Reviewdog on Self-Hosted GitLab
I got sucked into this one for two days. Reviewdog was flagging ESLint errors on lines of a merge request that didn’t even exist. Like, the comment was showing on line 74, but the actual error was logged in a nested file somewhere that wasn’t even touched in the MR. Thought it was a misconfigured ESLint setup, but nah. Turns out GitLab’s diff generator for merge requests drops some whitespace context around renamed files, and Reviewdog grabs positions off the raw patch.
Debugging tip:
If you’re using Reviewdog in a CI pipeline and it’s reporting garbage, add this to your manual test run:
```sh
git diff origin/main...HEAD --no-prefix > patch.txt
# --no-prefix drops the a/ b/ path prefixes, so tell reviewdog not to strip one;
# -reporter=local prints to stdout for a dry run (CI on GitLab would use
# gitlab-mr-discussion instead)
reviewdog -f=diff -f.diff.strip=0 -reporter=local < patch.txt
```
This ensures you’re using the exact same diff format your CI will process. And if you’re triggering reviews off post-submit commits, make damn sure GitLab’s diff policies haven’t kicked in — it sometimes truncates the diff if the file is over their internal newline limit (I never found the actual threshold, but it’s probably under 20K).
After forcing Reviewdog to use unified diffs with `--diff=unified`, the false positives mostly vanished.
Why No One Understands Dangerfile Runtime Errors
Danger.js is amazing in theory — write review bots that fail PRs if someone forgets a changelog or tries to sneak in a `console.log`. But when your Dangerfile throws, holy hell, the error output is pure noise. I had a `TypeError: Cannot read property 'matched' of undefined` appear, and the CI job passed anyway. The Danger reporting hook had silently swallowed it because it only fails in `failOnErrors` mode *and* only if it’s the last hook block.
To figure this out, I had to wrap the entire Dangerfile in a try/catch and manually set the fail flag using `fail('Catch-all: Dangerfile exploded')`.
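For reference, the catch-all shape looks roughly like this. It's a minimal sketch, and the changelog check is just a stand-in for whatever rules your Dangerfile actually runs:

```js
// dangerfile.js: wrap every check so a thrown error becomes a visible
// failure instead of being swallowed by the reporting hook.
// `danger`, `warn`, and `fail` are globals that Danger.js injects.
try {
  const touchedChangelog = danger.git.modified_files.includes('CHANGELOG.md');
  if (!touchedChangelog) {
    warn('No CHANGELOG entry in this PR.');
  }
} catch (err) {
  // Without this, a TypeError inside a check can let the CI job pass.
  fail(`Catch-all: Dangerfile exploded: ${err.message}`);
}
```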
Even then, half the time your PR bot comments will post twice — once with errors visible, once with nothing because GitHub somehow de-dupes the comments but still shows both in Jenkins. No fix. Just learn to deal with rogue ghost-bots messing up your reviews.
Misleading Coverage Deltas in Codecov Due to Path Mapping
Codecov’s deltas are often just… fake. If you’re using webpack aliases, or some TypeScript baseUrl stuff, you may find that files show as uncovered even though they clearly have tests passing. I had `~/services/user.ts` showing zero coverage for weeks, even though all the logic was boilerplate and tested a dozen times over.
Undocumented problem? Path aliases trip up the coverage source maps unless your Jest config includes a `moduleNameMapper` that matches — *exactly*. Codecov has a GitHub Action that does basic processing, but it assumes file references are absolute, and if Babel or ts-node masks the original paths, the line numbers mismatch.
What finally solved it was adding this nugget to `jest.config.js`:
```js
// jest.config.js (excerpt)
moduleNameMapper: {
  // must mirror the webpack/tsconfig '~' alias exactly, <rootDir> included
  '^~/(.*)$': '<rootDir>/src/$1'
},
coveragePathIgnorePatterns: [
  '/node_modules/',
  '/src/interfaces/'
]
```
And running `npx jest --coverage --runInBand` to avoid concurrency races that cause incomplete coverage files.
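If you want to verify the mapping actually worked, one low-tech check is to confirm that every source path in the generated LCOV report exists on disk. A sketch, assuming Jest's default `coverage/lcov.info` output and that you run it from the repo root:

```js
// check-lcov-paths.js: catch alias/source-map mismatches before Codecov
// renders a bogus delta. Every 'SF:' line in lcov.info names a source file.
const fs = require('node:fs');

const lcov = fs.readFileSync('coverage/lcov.info', 'utf8');
const missing = lcov
  .split('\n')
  .filter((line) => line.startsWith('SF:'))
  .map((line) => line.slice(3).trim())
  .filter((p) => !fs.existsSync(p));

if (missing.length) {
  console.error('lcov.info points at files that do not exist:');
  for (const p of missing) console.error('  ' + p);
  process.exit(1);
}
console.log('All coverage paths resolve.');
```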
TeamCity’s Git Merge Behavior Will Haunt You
If you’re QA-ing a monorepo and your CI uses TeamCity, watch out for how it checks out repositories. It does a synthetic merge by default, but it chooses the target branch based on what’s set in the VCS Trigger configuration — which can silently fall back to `refs/heads/master` even if your PR is against `develop`.
That means your tests might pass in CI and fail once merged — because it tested against the wrong base. I lost all faith in TeamCity the day it ran unit tests for an Angular submodule against an outdated build of core UI components.
The workaround is buried under VCS Root settings: uncheck “Use tags as branches” and manually set the branch specification to `+:refs/pull/(*)/merge` for GitHub, or whatever applies to your host. Even better, add a sanity step to your CI that checks `git merge-base HEAD origin/<target>` and fails fast if you’re not actually testing against what you think you are, as sketched below.
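Here's the rough shape of that sanity step. It's a sketch: `TARGET_BRANCH` is a placeholder env var, and the check assumes CI is running on a synthetic merge commit (or a freshly rebased branch), so the merge-base should equal the target's current tip:

```js
// ci-sanity.js: fail fast when the checkout is not based on the current
// tip of the intended target branch.
const { execSync } = require('node:child_process');

const sh = (cmd) => execSync(cmd).toString().trim();
const target = process.env.TARGET_BRANCH || 'develop';

sh(`git fetch origin ${target}`);
const mergeBase = sh(`git merge-base HEAD origin/${target}`);
const targetTip = sh(`git rev-parse origin/${target}`);

if (mergeBase !== targetTip) {
  console.error(
    `Not testing against the tip of origin/${target}: ` +
    `merge-base is ${mergeBase.slice(0, 8)}, tip is ${targetTip.slice(0, 8)}.`
  );
  process.exit(1);
}
console.log(`Base check passed for origin/${target}.`);
```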
7 Survival Tips for Review Tools That Break in Weird Ways
- Don’t trust coverage numbers unless you can trace them back via line annotations.
- Log your PR bot’s timestamps and match them to Git SHA output to debug comment duplication.
- Never rebase mid-review unless your reviewers are robots or masochists.
- Use `git diff-tree` instead of `git diff` with CI tools that process commits individually (see the sketch after this list).
- Always test Dangerfile severity escalation by pushing semi-broken logic before enabling it in main branches.
- Test your code review environment with dead/removed accounts to simulate weird org transitions.
- Use a recognizable emoji in bot-posted comments; it helps distinguish auto checks from human reviews at a glance.
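On the `git diff-tree` tip: the point is that it reads the commit object itself, instead of whatever working-tree or HEAD state the CI runner happens to be in. A minimal sketch of how a per-commit bot might use it (file name and wrapper are hypothetical):

```js
// files-for-commit.js: list the files touched by exactly one commit.
// `git diff-tree --no-commit-id --name-only -r <sha>` is pinned to that
// commit, so stray working-tree state on the runner can't leak in.
const { execSync } = require('node:child_process');

const sha = process.argv[2] || 'HEAD';
const out = execSync(`git diff-tree --no-commit-id --name-only -r ${sha}`)
  .toString()
  .trim();

const files = out ? out.split('\n') : [];
console.log(files);
```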
Mock Backend Bugs That Never Show in Preview Builds
Ever tried testing an API auth flow using a mocked backend during PR QA, only to find everything breaks once live? Yeah — same. One of our features was shadow-approved on the strength of a staging environment that didn’t implement third-party cookie enforcement, so when we rolled it into prod, the login flow exploded in Safari.
The issue? Our mock backend didn’t enforce `SameSite=None; Secure` flags on Set-Cookie headers, and local previews subtly shimmed the behavior (somehow!) so developers had no idea it was going to break. That made it past BOTH automated checks and human review. Which wouldn’t have happened if we were testing via the public Cloudflare tunnel instead of our internal preview proxy.
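If your mock backend is hand-rolled, make it set the same cookie attributes production will. A minimal sketch (plain Node; the endpoint and cookie name are hypothetical, and note that browsers only store `Secure` cookies over HTTPS, so serve this through the tunnel, not bare http):

```js
// mock-auth.js: mock login endpoint that enforces production cookie flags.
const http = require('node:http');

http.createServer((req, res) => {
  if (req.url === '/login') {
    // Cross-site auth cookies need SameSite=None AND Secure, or Safari
    // (and modern Chrome) will silently drop them outside the mock setup.
    res.setHeader(
      'Set-Cookie',
      'session=deadbeef; SameSite=None; Secure; HttpOnly; Path=/'
    );
    res.end('ok');
    return;
  }
  res.statusCode = 404;
  res.end();
}).listen(8080);
```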
“It worked in review” is not enough. Always check cookies using `document.cookie.length` in production-like user agents for your CI systems.
Review Comment Anchoring Is Still a Mystery in GitHub Mobile
Don’t try to use GitHub’s mobile app to verify line comments in a pull request. It still can’t display change-context properly for large PRs. I left a fairly straightforward regex suggestion last month and viewed it later in the iOS app — except it wasn’t there. It scrolled to the wrong line entirely, and hitting “show more” just appended old versions.
Best guess? GitHub internally tracks diff positions as indexes rather than full line references, and the mobile app truncates the review blob for performance — chopping off historical state. No way to work around this unless you use desktop or open the PR code tab manually. Even iPad Pro in Safari has the same issue unless you force desktop mode and wait for the full diff to load.
> “I sent the comment, but I can’t find it anymore.” — everyone using GitHub Mobile from the train