KNN - Keen News Now

Understanding and resolving ‘Discovered – currently not indexed’

by knn
February 7, 2023
in Marketing

If you see “Discovered – currently not indexed” in Google Search Console, it means Google is aware of the URL, but hasn’t crawled and indexed it yet. 

It doesn’t necessarily mean the page will never be processed. As Google’s documentation notes, it may come back to the URL later without any extra effort on your part. 

But other factors could be preventing Google from crawling and indexing the page, including:

  • Server issues and onsite technical issues restricting or preventing Google’s crawl capability.
  • Issues relating to the page itself, such as quality.

You can also use the Google Search Console URL Inspection API to query URLs for their coverageState status (as well as other useful data points) in bulk.
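As a minimal sketch, assuming you already have an OAuth access token with Search Console scope, each URL can be inspected with a POST to the published inspect endpoint. The token and property URL are placeholders you must supply; the endpoint and field names follow Google’s documented API.

```python
# Sketch: bulk-checking coverageState via the URL Inspection API.
# ACCESS_TOKEN and the site URL are placeholders, not real values.
import json
import urllib.request

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

def build_inspect_request(url, site_url):
    """Request body expected by the urlInspection.index.inspect method."""
    return {"inspectionUrl": url, "siteUrl": site_url}

def coverage_state(url, site_url, access_token):
    """POST one inspection request and return the coverageState string."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_inspect_request(url, site_url)).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # e.g. "Discovered - currently not indexed"
    return result["inspectionResult"]["indexStatusResult"]["coverageState"]
```

Loop this over your URL list (the API enforces daily quotas, so batch accordingly) and you can segment pages by coverage state without clicking through Search Console one URL at a time.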

Request indexing via Google Search Console

This is the obvious first resolution, and in the majority of cases it will fix the issue.

Sometimes, Google is simply slow to crawl new URLs – it happens. But other times, underlying issues are the culprit. 

When you request indexing, one of two things might happen:

  • URL becomes “Crawled – currently not indexed”
  • Temporary indexing

Both are symptoms of underlying issues. 

The second happens because requesting indexing sometimes gives your URL a temporary “freshness boost” which can take the URL above the requisite quality threshold and, in turn, lead to temporary indexing.


Page quality issues

This is where vocabulary can get confusing. I’ve been asked, “How can Google determine the page quality if it hasn’t been crawled yet?”

This is a good question, and the answer is that it can’t.

Google is making an assumption about the page’s quality based on other pages on the domain. Its classifications are likewise based on URL patterns and website architecture.

As a result, moving these pages from “awareness” to the crawl queue can be de-prioritized based on the lack of quality Google has found on similar pages. 

It’s possible that pages on similar URL patterns or those located in similar areas of the site architecture have a low-value proposition compared to other pieces of content targeting the same user intents and keywords.

Possible causes include:

  • The main content depth.
  • Presentation. 
  • Level of supporting content.
  • Uniqueness of the content and perspectives offered.
  • Or even more manipulative issues (e.g., the content is low quality and auto-generated, spun, or directly duplicates already established content).

Working on improving the content quality within the site cluster and the specific pages can have a positive impact on reigniting Google’s interest in crawling your content with greater purpose.

You can also noindex other pages on the website that you acknowledge aren’t of the highest quality to improve the ratio of good-quality pages to bad-quality pages on the site.
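For instance, a lower-quality page can be kept out of the index with a robots meta tag in its <head> (or the equivalent X-Robots-Tag HTTP header for non-HTML resources):

```html
<meta name="robots" content="noindex">
```

Note that Google must be able to crawl the page to see this tag, so don’t also block the same URL in robots.txt.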

Crawl budget and efficiency

Crawl budget is an often misunderstood mechanism in SEO. 

The majority of websites don’t need to worry about this. In fact, Google’s Gary Illyes has gone on the record claiming that probably 90% of websites don’t need to think about crawl budget. It is often regarded as a problem for enterprise websites.

Crawl efficiency, on the other hand, can affect websites of all sizes. If overlooked, it can lead to issues in how Google crawls and processes the website.

To illustrate, if your website: 

  • Duplicates URLs with parameters.
  • Resolves with and without trailing slashes.
  • Is available on HTTP and HTTPS.
  • Serves content from multiple subdomains (e.g., https://website.com and https://www.website.com).

…then you might have duplication issues that affect Google’s assumptions about crawl priority across the wider site.

You might be zapping Google’s crawl budget with unnecessary URLs and requests. Given that Googlebot crawls websites in portions, this can lead to Google’s resources not stretching far enough to discover all newly published URLs as fast as you would like.

You want to crawl your website regularly, and ensure that:

  • Pages resolve to a single subdomain (as desired).
  • Pages resolve to a single HTTP protocol.
  • URLs with parameters are canonicalized to the root (as desired).
  • Internal links don’t use redirects unnecessarily.
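The checklist above amounts to a single normalization rule per URL. This toy sketch assumes https, a www host, and that no query parameters are worth keeping; adjust those assumptions to whatever canonical form your site actually uses:

```python
# Toy canonicalizer illustrating the checklist: one protocol, one
# subdomain, no trailing slash, no stray parameters. The preferred
# host and parameter allow-list below are assumptions, not site facts.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

PREFERRED_HOST = "www.website.com"  # hypothetical canonical host
KEEP_PARAMS = set()                 # parameters worth keeping, if any

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    # Drop parameters not on the allow-list (e.g. tracking parameters).
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query) if k in KEEP_PARAMS])
    # Strip trailing slash, but keep the bare root path.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit(("https", PREFERRED_HOST, path, query, ""))
```

Running every internally linked URL through a function like this in a crawl export quickly surfaces links pointing at non-canonical variants.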

If your website utilizes parameters, such as ecommerce product filters, you can curb the crawling of these URI paths by disallowing them in the robots.txt file.
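As a sketch, a rule set like the following blocks crawling of filter-parameter URLs (the parameter names here are illustrative, not from any real site):

```
User-agent: *
# Block crawl of faceted/filter parameter URLs (example parameter names)
Disallow: /*?*colour=
Disallow: /*?*size=
Disallow: /*?*sort=
```

Remember that a robots.txt disallow prevents crawling, not indexing: URLs Google already knows about can remain indexed without their content being fetched.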

Your server can also be important in how Google allocates the budget to crawl your website.

If your server is overloaded or responding too slowly, crawling issues may arise. In this case, Googlebot won’t be able to access pages, and some of your content won’t get crawled. 

Consequently, Google will try to come back and crawl the website later, but this will no doubt delay the whole process.

Internal linking

When you have a website, it’s important to have internal links from one page to another. 

Google usually pays less attention to URLs that don’t have any or enough internal links – and may even exclude them from its index.

You can check the number of internal links to pages through crawlers like Screaming Frog and Sitebulb.
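If you just want a quick spot check without a full crawler, a minimal counter over static HTML can be sketched like this. It only sees <a href> anchors present in the source, so links injected by JavaScript will be missed (which is itself a useful signal, per the section on internal links below):

```python
# Minimal internal-link counter for a single page's static HTML.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit

class LinkCounter(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base = urlsplit(base_url)
        self.base_url = base_url
        self.internal = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return  # anchors without href are not links
        target = urlsplit(urljoin(self.base_url, href))
        if target.netloc == self.base.netloc:
            self.internal.append(target.path)

def count_internal_links(html: str, base_url: str) -> int:
    counter = LinkCounter(base_url)
    counter.feed(html)
    return len(counter.internal)
```

For site-wide numbers (unique inlinks per URL, crawl depth), a dedicated crawler like the ones named above remains the right tool.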

Having an organized and logical website structure with internal links is the best way to go when it comes to optimizing your website. 

But if you have trouble with this, one way to make sure all of your internal pages are connected is to “hack” the crawl depth using HTML sitemaps. 

These are designed for users, not machines. Although they may be seen as relics now, they can still be useful.

Additionally, if your website has many URLs, it’s wise to split them up among multiple pages. You don’t want them all linked from a single page.

Internal links also need to use the <a> anchor tag instead of relying on JavaScript functions such as onClick(). 

If you’re utilizing a Jamstack or JavaScript framework, investigate how it or any related libraries handle internal links. These must be rendered as <a> tags.
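To illustrate the difference (hypothetical markup):

```html
<!-- Crawlable: a real anchor element with an href attribute -->
<a href="/pricing">Pricing</a>

<!-- Not reliably crawlable: the destination exists only in JavaScript -->
<span onclick="window.location='/pricing'">Pricing</span>
```

Googlebot follows the first form when discovering URLs; the second gives it no href to extract, so pages linked only that way may never enter the crawl queue.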

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About the author

Dan Taylor

Dan Taylor is head of technical SEO at SALT.agency, a UK-based technical SEO specialist and winner of the 2022 Queen’s Award. Dan works with and oversees a team working with companies ranging from technology and SaaS companies to enterprise e-commerce.
