Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content completely - many posts will remain viewable - we have locked both new posts and new replies. More details here.
Subpage with its own homepage and navigation: good or bad?
-
Hi everybody,
I have the following question. At the company I work for, we deliver several services: we help people buy the right second-hand car (technical inspections), but we also have an import service.
Because those services are so different, I want to split them on our website. The main website would be all about the technical inspections; then, when you click on "import", you go to www.example.com/import, a subpage with its own homepage and navigation, all about the import service.
It's like having an extra website on the same domain. Does anyone have experience with this in terms of SEO?
Thank you for your time!
Kind regards,
Robert
-
Thank you for your answer, Cesare.
I don't want Google to see and rate it as an independent website; it's just to make a clear separation between the two services.
The services have different pricing, different "How it works" pages, etc. So when you are in the import zone and click on "Pricing", you see a page with the pricing of our import service.
I'm sorry, I'm struggling a bit to explain it in English.
Looking forward to hearing from more people!
-
Hi Robert,
If you want an extra website on the same domain that Google also sees and rates as an independent website, you need to do it with a subdomain, i.e. import.example.com, not with a directory like you are suggesting (www.example.com/import).
That said, I'm not sure the import business should be built as a completely independent website with its own full navigation. Why not think of it as a subpage with its own sub-navigation, rather than a complete top-level menu?
Independently of that, if you do it as a directory, the new import section will benefit SEO-wise (e.g. in Domain Authority) from the second-hand pages. Link from one topic to the other, and vice versa, wherever it makes sense. Topic-wise, the two services match and add to one another in my opinion, so this shouldn't be a problem.
Hope this helps.
Cheers,
Cesare
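The directory approach discussed above amounts to one routing rule plus one navigation menu per section. A minimal sketch, assuming a hypothetical layout (the paths and menu entries are placeholders, not the actual site): everything under /import/ gets the import-section menu, and each menu carries one cross-link to the other service.

```python
# Hypothetical URL layout: one domain, two services, each with its own menu.
NAV = {
    "inspections": ["/pricing", "/how-it-works", "/import/"],    # main site menu
    "import": ["/import/pricing", "/import/how-it-works", "/"],  # import-section menu
}

def section_for(path: str) -> str:
    """Everything under /import/ belongs to the import section."""
    return "import" if path.startswith("/import") else "inspections"

def nav_for(path: str) -> list:
    """Each section shows its own navigation, with one cross-link
    back to the other service for the internal linking between topics."""
    return NAV[section_for(path)]

print(nav_for("/import/pricing"))
```

The point of the sketch: the "two sites" share a domain, so the split is purely a presentation decision, not a technical one.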
Related Questions
-
Good to use disallow or noindex for these?
Hello everyone, I am reaching out to seek your advice on a few technical SEO aspects of my website. Below are the specific areas I would like to discuss:
a. Double and triple filter pages: I have identified certain URLs on my website that have a canonical tag pointing to the main /quick-ship page. These URLs are as follows:
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black+fabric
Considering the need to optimize my crawl budget, would it be advisable to disallow or noindex these pages? My understanding is that by disallowing or noindexing these URLs, search engines can avoid wasting resources on crawling and indexing duplicate or filtered content.
b. Page URLs with parameters: I have noticed that some of my page URLs include parameters such as ?variant and ?limit. Although these URLs already have canonical tags in place, is it still recommended to disallow or noindex them to further conserve crawl budget? My understanding is that doing so prevents search engines from spending resources on redundant variations of the same content.
Additionally, I would welcome any suggestions regarding internal linking strategies tailored to my website's structure and content. Thank you in advance for your time and expertise. If you require any further information or clarification, please let me know. Cheers!
Technical SEO | williamhuynh
-
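For the crawl-budget part of the question above, Python's standard library can preview what a Disallow rule would do before it goes live. This is a sketch with a hypothetical robots.txt; note that the basic urllib.robotparser does not understand `*` wildcards, so the rule here is a plain path prefix that happens to cover both filter URLs.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a plain prefix rule covering both
# filter-combination URLs (urllib.robotparser matches prefixes only).
robots_lines = [
    "User-agent: *",
    "Disallow: /collections/lounge-chairs/quick-ship+black",
]

rp = RobotFileParser()
rp.parse(robots_lines)
rp.modified()  # mark the rules as "fetched" so can_fetch() gives answers

base = "https://www.interiorsecrets.com.au"
print(rp.can_fetch("*", base + "/collections/lounge-chairs/quick-ship+black"))         # False
print(rp.can_fetch("*", base + "/collections/lounge-chairs/quick-ship+black+fabric"))  # False
print(rp.can_fetch("*", base + "/collections/lounge-chairs"))                          # True
```

One caveat worth keeping in mind: Disallow only stops crawling, while noindex is what keeps a URL out of the index, and a page must be crawlable for its noindex to be seen - so the two should generally not be combined on the same URL.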
Homepage not indexed - seems to defy explanation
Hey folks, hoping to get some more eyes on a specific problem I am seeing with a client's site: http://www.ukjuicers.com. We have checked everything we can think of, and the usual suspects are not present:
- Canonical URL is in place
- Site is shown as indexed in Search Console
- No crawl, DNS, connectivity or server errors
- No robots.txt blocking - verified in Search Console
- No robots meta tags or directives
- Fetch as Google works; Fetch & Render works
- site: command returns all other pages; info: command does not return the homepage
- Homepage is cached, and the cache has been updated since this issue started: http://webcache.googleusercontent.com/search?q=cache:www.ukjuicers.com
- Homepage is indexed in Yahoo and Bing
- All variations redirect to the www.ukjuicers.com domain (.co.uk, .com, www, sans www, etc.)
The only issue I found after some extensive digging was that both the HTTP and HTTPS versions of the site were available, each specifying itself as the canonical version: the HTTP site used canonicals with HTTP, and the HTTPS site used canonicals with HTTPS - a conflict that exacerbates the very problem the canonical tag is there to solve. The HTTPS site is not indexed, though, and we have set it up in Webmaster Tools; the web developer has now set redirects so that all versions, even HTTPS, 301-redirect to http://www.ukjuicers.com, so these canonical issues have been ironed out. But... it's still not indexing the homepage.
The practical implications are quite scary: the site used to be somewhere between 1st and 4th for keywords like "juicers" and "juicer". They were jostling with the big boys (Amazon, Argos, John Lewis, etc.), but now they are at the bottom of page 1 or the top of page 2 with an internal page. It's a strange one - I have seen all manner of technical problems over the years, but this one seems to defy sensible explanation.
The next step is a full technical SEO audit of the site, but I am always of the opinion that with many eyes all bugs are shallow, so if anyone has any input or experience with odd indexation problems like this, I would love to hear it. Cheers
Marcus
Technical SEO | Marcus_Miller
-
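The HTTP/HTTPS canonical conflict described in the question above is easy to detect automatically: compare the scheme and host of a page's URL with those of its canonical tag. A minimal stdlib sketch (the HTML snippet is illustrative, not the site's actual markup):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

def canonical_conflict(page_url, html):
    """True when the canonical tag points at a different scheme or host
    than the page itself - the HTTP-vs-HTTPS self-canonical conflict."""
    finder = CanonicalFinder()
    finder.feed(html)
    if finder.canonical is None:
        return False
    page, canon = urlparse(page_url), urlparse(finder.canonical)
    return (page.scheme, page.netloc) != (canon.scheme, canon.netloc)

snippet = '<link rel="canonical" href="https://www.ukjuicers.com/">'
print(canonical_conflict("http://www.ukjuicers.com/", snippet))  # True: schemes differ
```

Running a check like this over both protocol versions of every template would have surfaced the self-referencing canonical mismatch quickly.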
Word mentioned twice in URL? Bad for SEO?
Is a URL like the one below going to hurt SEO for this page? /healthcare-solutions/healthcare-identity-solutions/laboratory-management.html I like to match the URL and H1s as closely as possible, but in this case it looks a bit funky.
Technical SEO | jsilapas
-
I have a GoDaddy website and have multiple homepages
I have GoDaddy Website Builder and a new website, http://ecuadorvisapros.com, and I noticed through your crawl test that there are 3 home pages: http://ecuadorvisapros.com with a 302 temporary redirect, http://www.ecuadorvisapros.com/ with no redirect, and http://www.ecuadorvisapros.com/home.html. GoDaddy says there is only one home page. Is this going to kill my chances of having a successful website, and can it be fixed? I actually went with the SEO version thinking it would be better, but it wants to auto-change the settings I worked so hard at with your site's help. Please keep it simple: I am a novice, although I have had websites in the past, so I know more about the whats than the hows of websites. Thanks,
Technical SEO | ScottR.
-
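The usual fix for a situation like the one above is to pick one preferred homepage URL and 301-redirect every other variant to it. A hypothetical sketch of that normalisation logic (the host comes from the question; choosing the http://www version as preferred is an assumption, and the actual redirects would live in the host's configuration, not in Python):

```python
from urllib.parse import urlparse, urlunparse

CANONICAL_HOST = "www.ecuadorvisapros.com"  # the ONE preferred version

def canonical_redirect(url):
    """Return the canonical homepage URL this variant should
    301-redirect to, or None if the URL is already canonical."""
    p = urlparse(url)
    # /home.html is the same document as the root, so fold it in.
    path = "/" if p.path in ("", "/home.html") else p.path
    target = urlunparse(("http", CANONICAL_HOST, path, "", "", ""))
    return None if url == target else target

for variant in ("http://ecuadorvisapros.com/",
                "http://www.ecuadorvisapros.com/home.html",
                "http://www.ecuadorvisapros.com/"):
    print(variant, "->", canonical_redirect(variant))
```

With every variant collapsing to one URL (via permanent 301s, not the 302 the crawl test found), search engines see a single homepage rather than three competing ones.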
Good alternatives to Xenu's Link Sleuth and AuditMyPc.com Sitemap Generator
I am working on scraping title tags from websites with 1-5 million pages. Xenu's Link Sleuth seems to be the best option for this at this point. Sitemap Generator from AuditMyPc.com seems to be working too, but it starts hanging up when the sitemap file the tool is working on becomes too large, so it looks like it won't be good for websites of this size. I know that Scrapebox can scrape title tags from a list of URLs, but this is not needed, since that comes with both of the tools mentioned above. I know about DeepCrawl.com too, but that one is paid and would be very expensive with this number of pages and websites (5 million URLs is $1,750 per month; I could get a better deal on multiple websites, but that obviously does not make sense to me - it needs to be free, more or less). SEO Spider from Screaming Frog is not good for large websites. So, in general, what is the best, time-efficient way to work on something like this? Are there any other options? Thanks.
Technical SEO | blrs12
-
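For the question above, the title-extraction step itself needs nothing more than Python's standard library; at 1-5 million pages, the hard parts are fetching, rate limiting, and the URL frontier, which this sketch deliberately leaves out:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Grab the text content of the first <title> tag."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title" and not self.title:
            self.in_title = True
    def handle_data(self, data):
        if self.in_title:
            self.title += data
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

def extract_title(html):
    p = TitleParser()
    p.feed(html)
    return p.title.strip()

print(extract_title("<html><head><title>Juicers | Example Shop</title></head></html>"))
```

In a real pipeline the HTML would come from urllib.request (or a concurrent fetcher), and millions of results would be streamed to disk rather than held in memory.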
2 Versions of Same Homepage
We want to show new and returning visitors different versions of our homepage (same URL). What, if anything, should we use as the markup to tell Google what we are doing? Any danger that Google will think we are cloaking? Thanks!
Technical SEO | theLotter
-
Templates for Meta Description, Good or Bad?
Hello, we have a website where users can browse photos of different categories. For each photo we are using a meta description template such as: "Are you looking for a nice and cool photo? [Photo name] is the photo which might be of interest to you." And in the keywords tag we are using: "[Photo name] photos, [Photo name] free photos, [Photo name] best photos". I'm wondering, is this a safe method? It's very difficult to write a manual description when you have 3,000+ photos in the database. Thanks!
Technical SEO | TheSEOGuy1
-
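The templating described in the question above is trivial to automate. A small sketch, using the template from the question and a hypothetical length cap (search snippets are typically truncated somewhere around 155-160 characters):

```python
DESCRIPTION_TEMPLATE = (
    "Are you looking for a nice and cool photo? "
    "{name} is the photo which might be of interest to you."
)

MAX_LEN = 160  # assumed cap; snippets beyond this tend to be truncated

def meta_description(photo_name):
    """Fill the template for one photo and cap the length."""
    return DESCRIPTION_TEMPLATE.format(name=photo_name)[:MAX_LEN]

print(meta_description("Sunset over the Andes"))
```

As an aside, the meta keywords tag has been ignored by the major search engines for years, so the per-photo effort is better spent on the description alone.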
How does a search engine bot navigate past a .PDF link?
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks. If these pages are being crawled and a bot follows the link to the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because the PDF doesn't contain any links back to the site and the "back" button doesn't work (the page opened in a new window)? If this situation effectively stops the bot in its tracks, what's the best way to fix it?
1. Add a rel="nofollow" attribute
2. Don't open the link in a new window so the back button remains functional
3. Both 1 and 2
4. Put the specs on the page instead of relying on a .pdf
Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - the technical spec .pdf is located under the "Downloads" tab (the content is all on one page in the source code; the tabs are just a design element). Thoughts and suggestions would be greatly appreciated. Dana
Technical SEO | danatanseo
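On the mechanics behind the question above: crawlers don't navigate with a back button. They keep a queue (the "frontier") of every URL discovered so far, so a linkless PDF is never a trap - the bot simply takes the next URL from the queue. New windows and back buttons are browser concepts that a bot never sees. A hypothetical filter from such a crawler, deciding which fetched resources are worth parsing for further links:

```python
import posixpath
from urllib.parse import urlparse

# Extensions we expect to contain parseable HTML with outgoing links.
# "" covers extensionless paths like most product-page URLs.
HTML_LIKE = {"", ".html", ".htm", ".php", ".asp", ".aspx"}

def should_parse_for_links(url):
    """True when the resource is likely HTML; a .pdf is still fetched
    and indexed, but the crawler won't look for links inside it."""
    ext = posixpath.splitext(urlparse(url).path)[1].lower()
    return ext in HTML_LIKE

print(should_parse_for_links(
    "http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer"))  # True
print(should_parse_for_links("http://www.ccisolutions.com/downloads/specs.pdf"))        # False
```

So of the four options listed, none is needed for crawlability; option 4 still has merit, but only because on-page specs add indexable content to the product page itself.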