Crawled - Currently Not Indexed? 7 Fixes (2026)
Fix the "Crawled - currently not indexed" status in Google Search Console with 7 proven solutions—starting with internal linking.
You publish a new page, submit it to Google Search Console, wait… and then you see the dreaded status: “Crawled - currently not indexed.”
That’s the worst kind of indexing limbo: Google found the page, visited it, read it — and then decided not to include it in the index. Which means the page is basically invisible in organic search (even if it’s live and accessible).
This guide breaks down what “Crawled - currently not indexed” means, why it happens, and the 7 fixes that work most often — with internal linking as Solution #1 (because it’s the fastest lever you fully control).
TL;DR: Fix “Crawled - currently not indexed” in 15 minutes
- Confirm the page is indexable: no noindex, not blocked by robots.txt, returns 200 OK, and the canonical is correct.
- Add 2–5 internal links from relevant, indexed pages that already get impressions/traffic (use descriptive anchor text).
- Make sure the URL is in your XML sitemap (and resubmit the sitemap if needed).
- Upgrade the page if it’s thin/duplicative: add depth, examples, and unique value.
- Request indexing in Google Search Console after the changes go live.
If you’re dealing with this issue across dozens (or hundreds) of URLs, the pattern is almost always the same: those pages don’t have enough internal authority and crawl paths. That’s why Linkbot’s Priority Indexer is built specifically to help.
What does “Crawled - currently not indexed” mean?
In plain English:
- Crawled = Googlebot visited the URL and fetched the page content.
- Currently not indexed = Google decided not to store that page in the index (so it won’t show in search results).
You’ll typically see this status in Google Search Console → Pages → Why pages aren’t indexed.
It’s also commonly grouped near related statuses like:
- Discovered - currently not indexed (Google knows the URL exists, but hasn’t crawled it yet),
- Duplicate / Google chose different canonical (Google thinks another URL is the “real” version),
- Alternate page with proper canonical (your page is intentionally not indexed), and
- Soft 404 (the page is technically “200 OK” but looks like a low-value or error page).
One frustrating part: Google usually won’t tell you a single, explicit reason for each URL in this bucket. Your job is to figure out which signals are weak — and strengthen them.
Why does Google crawl a page but not index it?
“Crawled - currently not indexed” is usually Google saying: “I saw it, but I’m not convinced it’s valuable enough (or unique enough) to store and show.”
The most common causes fall into four buckets:
- Low perceived value: thin content, boilerplate content, doorway-ish pages, or pages that don’t satisfy the query.
- Low internal authority: the page isn’t well linked internally, so it looks unimportant (or Google can’t consistently rediscover it).
- Duplication/canonical ambiguity: Google sees multiple near-identical URLs and chooses none (or chooses a different one).
- Technical blockers: noindex, robots blocks, soft 404 behavior, unstable rendering, or inconsistent status codes.
The fix is almost always a combination of: (1) internal linking, (2) content upgrades, and (3) technical cleanup.
Before you “fix” it: should this page be indexed?
Not every URL deserves a spot in Google’s index. If the page is:
- a thin tag/category page,
- an internal search results page,
- a pagination URL,
- a low-value template page, or
- near-duplicate content,
…then “Crawled - currently not indexed” may be Google doing you a favor. In that case, you may choose to leave it alone (or intentionally noindex it). A leaner index can actually help your important pages get more attention.
But if it’s a page you want ranking — a product page, a landing page, a comparison page, or a high-intent blog post — then the next section is your starting line.
Indexing triage checklist (fast)
Use this table to spot the most common root causes quickly.
| Check | What to look for | Fix |
|---|---|---|
| HTTP status | Should be 200 OK (not 3xx/4xx/5xx) | Fix redirects, soft-404 pages, server errors |
| Indexability | No noindex meta tag or X-Robots-Tag header | Remove noindex if the page should rank |
| Robots | Not blocked by robots.txt | Allow crawling for important pages |
| Canonical | Canonical points to the correct URL (often self-referencing) | Fix canonicalization and parameter duplicates |
| Content depth | Thin, duplicated, or generic content | Expand + improve usefulness (examples, steps, visuals) |
| Internal links | Few/zero internal links pointing to the page | Add contextual links from strong, relevant pages |
| Sitemap inclusion | URL is included in XML sitemap (and sitemap is submitted) | Add URL + resubmit sitemap if needed |
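The first four rows of this checklist can be automated. Here's a minimal sketch in Python (the `check_indexability` helper and its inputs are illustrative, not a real library; it assumes the common attribute order `name` before `content` in meta tags):

```python
import re

def check_indexability(status_code, headers, html):
    """Flag common indexing blockers from a fetched page's
    status code, response headers, and raw HTML."""
    issues = []

    # HTTP status: anything other than 200 blocks normal indexing.
    if status_code != 200:
        issues.append(f"non-200 status: {status_code}")

    # noindex delivered via the X-Robots-Tag response header.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        issues.append("noindex in X-Robots-Tag header")

    # Meta robots noindex in the HTML (assumes name comes before content).
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta and "noindex" in meta.group(1).lower():
        issues.append("noindex in meta robots tag")

    # Canonical tag: reported so you can verify it points at the right URL.
    canon = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    if canon:
        issues.append(f"canonical: {canon.group(1)}")

    return issues
```

In real use you'd pair this with an HTTP client that fetches the URL and passes the response through; keeping the checker offline makes it easy to test against saved pages.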
7 proven solutions to fix “Crawled - currently not indexed”
Not all “crawled but not indexed” URLs fail for the same reason. Start with Solution #1 and work downward.
Solution 1: Add strategic internal links (highest impact)
If you only do one thing, do this.
Internal links are one of the clearest signals you can send that a page matters. They:
- create crawl paths so Googlebot can revisit the page,
- transfer internal authority (“link equity”),
- connect the page to a topical cluster, and
- help Google understand what the page is about (via anchor text + surrounding context).
If you don’t already have a system for internal linking, start here: Internal Linking Strategy (Step-by-Step).
Manual workflow (fast and repeatable):
- In GSC, open Pages → Crawled - currently not indexed and export a list of URLs.
- Pick the highest-value URLs first (money pages, pillar content, pages with commercial intent).
- For each target URL, find 2–5 related source pages that already get impressions/traffic.
- Add a contextual internal link inside a relevant paragraph — not a random footer link.
- Use descriptive anchor text that reflects the destination page topic.
How to find the best source pages:
- Use GSC Performance data: pages with impressions and clicks are more likely to be crawled frequently.
- Search your site (or your CMS) for related topics and link from the most relevant page, not the most random page.
- Prioritize pages that are already “hubs” (they rank, they have backlinks, they sit high in your site architecture).
Quick rules that help:
- Link from pages that are already indexed and visited frequently (these are your “crawl highways”).
- Keep links topically relevant (don’t link from an unrelated page just because it has traffic).
- Aim for 2–5 links per target page, not 50.
- Use descriptive anchor text and avoid “click here” (see: Anchor Text Optimization).
- Make sure the destination page is actually worth indexing (useful, not thin).
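At scale, you can shortlist source pages from a GSC Performance export instead of hunting manually. A rough sketch (the column names `Top pages` and `Impressions` match a typical GSC pages export, but verify them against your own file; the topical match here is a deliberately naive URL-keyword check):

```python
import csv
import io

def find_source_pages(gsc_csv_text, topic_keywords, min_impressions=100):
    """Return candidate source pages: URLs that already earn
    impressions and look topically related to the target page."""
    reader = csv.DictReader(io.StringIO(gsc_csv_text))
    candidates = []
    for row in reader:
        url = row["Top pages"]  # column name assumed from a GSC export
        impressions = int(row["Impressions"].replace(",", ""))
        if impressions < min_impressions:
            continue  # too weak to act as a "crawl highway"
        if any(kw in url.lower() for kw in topic_keywords):
            candidates.append((url, impressions))
    # Strongest pages first.
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```

The output is a ranked shortlist; you still add the links by hand, inside a relevant paragraph, per the rules above.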
Example (what this looks like in real life):
- Unindexed page: “Robots.txt crawl budget optimization”
- Strong source page: a “complete robots.txt guide” that already gets impressions
- Internal link added: “If crawl budget is holding back indexing, use this robots.txt crawl budget checklist…” (linked to the unindexed page)
- Why it works: Google now sees (a) relevance, (b) a crawl path from an important page, and (c) anchor text context.
If you want to make this systematic, run an internal link audit to uncover orphan/under-linked pages: Internal Link Audit (2026).
How Linkbot helps (Priority Indexer): Linkbot’s Priority Indexer is built for this exact problem. It identifies pages that aren’t indexed and boosts their visibility by adding smart internal links from stronger pages, so you don’t have to manually hunt down link opportunities. Linkbot is used by 14,000+ websites and reports 47% more pages indexed (source).
Want a baseline before you start? Run the free internal linking grader to see where your site is leaking crawl paths and authority.
Solution 2: Improve content quality and depth
Google can’t index everything. If your page looks thin, duplicative, or “not helpful enough,” it may get crawled and then filtered out.
What “low quality” looks like in practice:
- a page that answers the question in one sentence and then stops,
- a page that’s mostly boilerplate or templated text,
- a page that repeats what you already have elsewhere on your site,
- a page with unclear purpose (no clear intent match), or
- a page that exists “for SEO” but doesn’t help a user complete a task.
Content upgrade checklist (high leverage):
- Answer-first opening: make the first 100–150 words useful.
- Depth: include steps, checklists, examples, and edge cases.
- Originality: add screenshots, templates, or a unique point of view.
- Proof: demonstrate experience (examples from real scenarios, not theory only).
- Clarity: short paragraphs, clear headings, and explicit next actions.
Action steps:
- Identify your weakest “crawled but not indexed” pages (thin, duplicative, or outdated).
- Expand them with unique sections (checklists, screenshots, comparisons, FAQs).
- Update the page substantially (not just a few words), then re-request indexing.
- Pair with Solution #1 (internal links) so Google sees both value and importance.
As a practical rule: if a page is under ~500 words and doesn’t provide something distinct, it’s a prime candidate for “Crawled - currently not indexed.”
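If your content lives in local Markdown files, you can flag thin pages in bulk before Google does. A rough sketch (the 500-word threshold mirrors the rule of thumb above; the tag stripping is deliberately crude, and the `.md` glob is an assumption about your setup):

```python
import re
from pathlib import Path

def word_count(text):
    """Approximate visible word count: strip HTML-ish tags, then split."""
    return len(re.sub(r"<[^>]+>", " ", text).split())

def find_thin_pages(content_dir, threshold=500):
    """Return (path, words) pairs for files under the word threshold,
    shortest first, so the weakest pages surface at the top."""
    thin = []
    for path in Path(content_dir).rglob("*.md"):
        words = word_count(path.read_text(encoding="utf-8"))
        if words < threshold:
            thin.append((str(path), words))
    return sorted(thin, key=lambda t: t[1])
```

Word count alone doesn't prove a page is thin; treat the output as a review queue, not a verdict.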
Solution 3: Request indexing in Google Search Console (after changes)
Once you’ve added internal links and/or improved the page, use the URL Inspection tool to request indexing:
- Open Google Search Console.
- Paste the full URL into the top search bar (URL Inspection).
- Review the result (especially canonical selection and indexability).
- Click Request indexing.
Important: this is a request — not a guarantee. If the underlying quality/authority signals don’t improve, Google may still decline to index the page (or it may fall out later).
Also: don’t panic if the “Crawled - currently not indexed” report doesn’t update immediately. GSC reports can lag behind real-world indexing changes.
Pro tip: If you have a batch of affected URLs, don’t waste your daily request quota on pages that still have weak signals. Make improvements first, then request indexing for your top-priority pages.
Solution 4: Fix technical indexing blockers (noindex, canonical, robots, soft 404)
Sometimes the page isn’t indexed because it can’t be indexed (or Google believes it shouldn’t be). Common technical causes include:
- Noindex directives (meta robots or X-Robots-Tag headers)
- Robots.txt blocks
- Canonical points elsewhere (Google treats your page as a duplicate)
- Soft 404 behavior (a “not found” page returning 200)
- Redirect chains or inconsistent URL versions (http/https, www/non-www)
Fast checks you can run today:
- Noindex: view source and look for a noindex directive like `<meta name="robots" content="noindex">` or an HTTP header like `X-Robots-Tag: noindex`. If you want to use noindex intentionally, read: How to use the noindex tag.
- Canonical: view source and find `rel="canonical"`. If it points somewhere else, Google may never index this URL.
- Robots: confirm robots.txt doesn’t disallow the path.
- Soft 404: if the page looks like “not found,” return an actual 404/410 or add real content.
Also make sure the page is mobile-friendly (Google indexes mobile-first). Background reading: Mobile-first indexing basics.
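The robots.txt check above can be scripted with Python's standard library (`urllib.robotparser`). This sketch parses rules you supply as a list of lines rather than fetching them, so it's easy to test; in production you'd point it at your live file instead:

```python
from urllib import robotparser

def is_crawlable(robots_txt_lines, url, agent="Googlebot"):
    """Return True if the robots.txt rules allow the given
    user agent to fetch the URL."""
    rp = robotparser.RobotFileParser()
    # In real use: rp.set_url("https://yoursite.com/robots.txt"); rp.read()
    rp.parse(robots_txt_lines)
    return rp.can_fetch(agent, url)
```

Run your important "crawled but not indexed" URLs through this before blaming quality signals; a stray `Disallow` rule is a two-minute fix.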
Solution 5: Build external links to your high-value unindexed pages
External backlinks can trigger discovery and indexing because they signal that a page has real-world value beyond your site. You don’t need hundreds — even a few relevant mentions can help.
Practical approaches:
- Share the page in relevant communities (where it’s genuinely helpful, not spam).
- Link to it from guest posts or partner content (if you have those channels).
- Do simple outreach: find pages that cover similar topics and offer your page as a better reference.
Avoid “instant indexing” gimmicks. If you build links, focus on relevance and quality.
Solution 6: Consolidate duplicates (and redirect the rest)
If you have multiple pages that are “basically the same,” Google may crawl them all and then choose to index only one (or none). Consolidation usually wins:
- Merge overlapping pages into one comprehensive resource.
- 301 redirect old URLs to the new canonical page.
- Update internal links to point to the consolidated URL.
Example: Five thin posts targeting slight variations of the same keyword usually lose to one strong guide. Combine them, redirect the rest, and concentrate authority.
Solution 7: Wait and monitor (for truly low-priority pages)
For low-value pages, Google may index them later as your site gains authority — or never. That’s normal.
If a page is important, don’t rely on waiting. Use Solutions #1–#4 first.
How to prevent “Crawled - currently not indexed” on new content
If you want new pages to index consistently, bake these into your publishing checklist:
- Add internal links before (or immediately after) publishing (see: Internal Linking Strategy).
- Use descriptive anchor text (see: Anchor Text Optimization).
- Publish with depth (don’t launch thin placeholder pages).
- Keep your sitemap clean (indexable URLs only). Background: How XML sitemaps influence indexing speed.
- Automate internal linking where possible (see: Automatic Internal Linking).
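Sitemap inclusion is one of the easiest items on this checklist to automate. A minimal sketch using Python's standard XML parser (fetch your real sitemap separately; here it's passed in as a string, and the namespace is the standard sitemaps.org one):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, per the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml):
    """Extract every <loc> URL from an XML sitemap string."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

def in_sitemap(sitemap_xml, url):
    """Check whether a specific URL appears in the sitemap."""
    return url in sitemap_urls(sitemap_xml)
```

Run every new URL through a check like this as part of your publish workflow, and you'll catch sitemap omissions before GSC does.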
How to monitor indexing status in Google Search Console
Use a simple weekly workflow:
- Go to Pages and review “Why pages aren’t indexed.”
- Click into Crawled - currently not indexed to see the affected URLs.
- Use URL Inspection on your highest-value URLs to check the latest status (canonical selection is especially important).
- After fixes, re-check in 7–14 days.
If you want a quick manual spot-check, you can also try a site: query in Google (not perfect, but directionally useful):
site:example.com/your-page/
Tracking template (copy/paste into a spreadsheet):
| URL | Status | Date flagged | Fix applied | Re-check date | Indexed? |
|---|---|---|---|---|---|
| https://example.com/page/ | Crawled - currently not indexed | YYYY-MM-DD | Added 3 internal links + expanded content | YYYY-MM-DD | [ ] |
FAQ: Crawled - currently not indexed
What’s the difference between “Discovered - currently not indexed” and “Crawled - currently not indexed”?
“Discovered” means Google knows the URL exists but hasn’t crawled it yet. “Crawled” means Google visited the URL and still chose not to index it. “Crawled” is usually more about quality/authority signals than discovery.
How long does it take for Google to index a page after fixes?
Often 7–14 days after meaningful changes (especially internal linking). Some pages index faster; some take longer depending on site authority and crawl prioritization.
Does “Request indexing” guarantee indexing?
No. It’s a request to prioritize crawling. Indexing still depends on Google’s evaluation of quality, uniqueness, and authority signals.
Can I force Google to index a page?
Not directly. You can increase the likelihood by improving internal linking, content usefulness, and technical indexability — then requesting indexing.
Why would Google index a page and then remove it later?
This can happen if the page loses internal links, becomes duplicative, gets outclassed by a better page, or quality signals drop. Re-run Solutions #1–#4 and make sure the page still deserves to be indexed.
Should I noindex pages that are “crawled but not indexed”?
Only if the page truly has no search value. If it’s important, fix the underlying signal problem (internal links + usefulness + technical). If it’s low-value or duplicative, noindexing can help keep your index clean.
How many pages is “normal” to have as crawled but not indexed?
It depends on site size. Large sites with lots of low-value URLs often see a meaningful chunk of pages in this bucket. The key is prioritization: focus on pages with real business value and traffic potential first.
Can internal links really make that big of a difference?
Yes — especially for orphan or under-linked pages. Internal links create crawl paths and signal importance. For a scalable system, pair an internal link audit with a repeatable internal linking process.
Next steps: make indexing repeatable
If your site has a large content library, manual “find pages, find sources, add links” gets expensive fast.
That’s exactly what Linkbot’s Priority Indexer is designed to do: identify pages that aren’t indexed and boost them with intelligent internal links.
Bottom line: “Crawled - currently not indexed” is usually a signal problem, not a mystery. Fix indexability, add internal links, improve content quality, and then request indexing. Repeat.