How to check whether a page is indexable by search engines?
-
Hi, I'm building a Chrome extension that should show me the indexability status of the page I'm on.
So, I need to know all the methods for checking whether a page can potentially be crawled and indexed by search engines. I've come up with a few:
- Check the URL against the robots.txt file (whether it's disallowed)
- Check the page's meta tags (whether there's a noindex directive; see the sketch after this list)
- Check whether the page is the same for unregistered users (for pages only available to registered users of the site)
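A minimal sketch of the meta-tag check, as it could look in a TypeScript content script (the function name is illustrative, and only the most common directives are handled):

```typescript
// Content-script sketch: detect a noindex directive in the page's meta tags.
function hasNoindexMeta(doc: Document = document): boolean {
  // Google honours both the generic "robots" and the "googlebot" meta names.
  const metas = doc.querySelectorAll<HTMLMetaElement>(
    'meta[name="robots" i], meta[name="googlebot" i]'
  );
  for (const meta of metas) {
    const directives = meta.content.toLowerCase().split(',').map(d => d.trim());
    // "none" is shorthand for "noindex, nofollow".
    if (directives.includes('noindex') || directives.includes('none')) {
      return true;
    }
  }
  return false;
}
```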
Are there any more methods for checking whether a particular page is indexable (or not blocked from indexing) by search engines?
Thanks in advance!
-
I understand the difference between what you're doing and what Google shows; I guess I'm just not sure when I'd want to know that something could technically be indexed but isn't.
I guess I'm not your target market!
Good luck with your tool.
-
With "site:site.com" you can only see if the page is indexED, but to know if it's indexABLE you need to dig deeper. That is why I've decided to automate this process.
As I already told, this gonna be a browser extension, once you got on any page, this ext. automatically checks the page, and show the status (with color, I guess), if this page indexed, if not - it shows if its indexABLE. When I'm looking for linkbuilding resources, this little tool should help a lot

-
Ah, gotcha. Personally, I use Google itself to find out if something is indexable: if it's my own site, I can use Fetch as Google and the robots.txt Tester; if it's another site, I search for "site:[URL]" to see whether Google has indexed it.
I think this tool could be really good if you keep it as an icon that glows or something when you've accidentally deindexed a page. Then it's helping you proactively.

Hope this helps!
Kristina
-
Actually I'm not. That's why I'm asking, so as not to miss the basic stuff, and I really appreciate your advice. Thank you!
If I understand your question correctly, you're asking what this extension is needed for?
Well, two main aims:
-
When I want to check any page on my own websites, I just visit the page and see whether everything is OK with the robots directives (or, if the page should be closed to robots, whether it really is).
-
For link-building purposes. When I come to a page, see a link from it to an external website, and know for sure that I can get the same link to my site, I ask myself whether it's worth getting a link from a page like this, i.e. whether it's going to be indexed. Why waste your time getting links from pages that are blocked from indexing?
-
Hello Peter,
First of all, thank you for the great ideas.
I don't think it's necessary to call the API, since the check targets only a single URL (so no aggressive crawling) and I need it done as fast as possible. But the Structured Data idea: bravo!
Thanks a lot!
-
You're probably already doing this, but make sure all of your tests use the Googlebot user agent! A different user agent could produce different results, especially for the robots.txt check.
A sense check: what is your plugin going to offer over Google Search Console's Fetch as Google and robots.txt Tester?
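To make the user-agent point concrete, here is a rough sketch of a robots.txt check that only honours groups covering Googlebot. It is deliberately simplified (no Allow precedence, no wildcard matching, and multi-line User-agent groups aren't handled), so treat it as an illustration rather than a compliant parser:

```typescript
// Sketch: fetch robots.txt and test whether a path is disallowed for Googlebot.
// Note: an extension needs host permissions to make cross-origin requests.
async function isDisallowedForGooglebot(origin: string, path: string): Promise<boolean> {
  const res = await fetch(`${origin}/robots.txt`);
  if (!res.ok) return false; // no robots.txt usually means crawling is allowed

  let groupApplies = false; // are we inside a group covering Googlebot or *?
  const disallowRules: string[] = [];
  for (const raw of (await res.text()).split('\n')) {
    const line = raw.split('#')[0].trim(); // strip comments
    const colon = line.indexOf(':');
    if (colon === -1) continue;
    const key = line.slice(0, colon).trim().toLowerCase();
    const value = line.slice(colon + 1).trim();
    if (key === 'user-agent') {
      groupApplies = value === '*' || /googlebot/i.test(value);
    } else if (key === 'disallow' && groupApplies && value !== '') {
      disallowRules.push(value);
    }
  }
  return disallowRules.some(rule => path.startsWith(rule));
}
```

For example, `isDisallowedForGooglebot('https://example.com', '/private/page')` resolves to true if a group for `*` or Googlebot contains `Disallow: /private/`.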
-
You can also check the HTTP response headers (the X-Robots-Tag header) for crawling directives:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
Some of Google's services can help here too, especially the PageSpeed API:
https://developers.google.com/speed/docs/insights/v2/reference/
Once you call this API, it returns JSON with a list of blocked resources. It's a little slower, but I've found it to be safe. Some hosts run IDSs (intrusion detection systems), and when someone crawls them a bit aggressively they block the whole IP or IP range. I know a few cases where a site was fine for users but blocked for Google's IPs. The webmasters weren't happy when they discovered this. They called the hosting company a few times and got "there are no issues on our side, we didn't block anything". Six hours later they got "it seems another department blocked this server for a few specific IPs".
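To illustrate the header check, something along these lines could work (a sketch: a real extension would more likely read response headers via the webRequest API than issue a second request, and some servers reject HEAD requests):

```typescript
// Sketch: look for noindex directives in the X-Robots-Tag response header.
async function hasNoindexHeader(url: string): Promise<boolean> {
  const res = await fetch(url, { method: 'HEAD' });
  // Several X-Robots-Tag headers arrive as one comma-joined string; a value
  // may also be scoped to a user agent, e.g. "googlebot: noindex".
  const tag = res.headers.get('x-robots-tag') ?? '';
  return /\b(noindex|none)\b/i.test(tag);
}
```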
As for checking logged-in vs. non-logged-in users: you can use the Structured Data Testing Tool. Alternatively, make one call to get the full HTTP response and then compare it with your result.
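And a crude sketch of the logged-in vs. anonymous comparison: re-request the page without cookies and compare it with the HTML the logged-in browser actually rendered (the 10% threshold is an arbitrary placeholder; a real comparison would normalise dynamic content first):

```typescript
// Sketch: does the page look roughly the same to an anonymous visitor?
async function looksSameLoggedOut(url: string, renderedHtml: string): Promise<boolean> {
  const res = await fetch(url, { credentials: 'omit', redirect: 'follow' });
  if (!res.ok) return false; // anonymous visitors are blocked outright
  const anonymousHtml = await res.text();
  // Compare visible-text lengths; within 10% counts as "the same" here.
  const strip = (html: string) => html.replace(/<[^>]*>/g, '').replace(/\s+/g, ' ');
  const a = strip(anonymousHtml).length;
  const b = strip(renderedHtml).length;
  return Math.abs(a - b) / Math.max(a, b, 1) < 0.1;
}
```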