    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.

    Robots.txt Syntax for Dynamic URLs

    Technical SEO
    • btreloar

      I want to Disallow certain dynamic pages in robots.txt and am unsure of the proper syntax. The pages I want to disallow all include the string ?Page=

      Which is the proper syntax?
      Disallow: ?Page=
      Disallow: ?Page=*
      Disallow: ?Page=
      Or something else?
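
      For reference, a minimal robots.txt sketch using the wildcard syntax that major crawlers such as Googlebot support. The * and $ characters are extensions to the original robots.txt standard, so not every crawler honours them, and the rule below is illustrative rather than something confirmed in this thread:

        User-agent: *
        # Block any URL whose path or query string contains "?Page=",
        # e.g. /page.html?Page=give or /?Page=2
        Disallow: /*?Page=

      A trailing asterisk (Disallow: /*?Page=*) is harmless but redundant, because robots.txt rules already match any URL that begins with the given pattern.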

      • btreloar @Alick300

        Thanks, Alick300. Unfortunately, the slash doesn't appear like that in the URLs on this site; they look like this:
        www.domain.com/page.html?Page=...

        Running all three versions from my original question through an online robots.txt tester, they all seem to work. Until proven otherwise, I'm using the first one because it's the simplest.
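
        If an online tester isn't handy, the wildcard matching can be reproduced in a few lines of Python. This is a rough sketch of Google-style rule matching (rules match from the start of the URL path, * stands for any run of characters, $ anchors the end); it is not an official implementation, and the sample paths are invented for illustration:

          import re

          def rule_matches(rule, path):
              # Translate a robots.txt Disallow pattern into a regex:
              # '*' -> '.*', '$' -> end anchor, everything else literal,
              # then match from the start of the path (prefix match).
              regex = "".join(
                  ".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
                  for ch in rule
              )
              return re.match(regex, path) is not None

          rules = ["/?Page=", "/*?Page="]
          paths = ["/?Page=give", "/page.html?Page=give", "/page.html"]
          for rule in rules:
              for path in paths:
                  verdict = "blocked" if rule_matches(rule, path) else "allowed"
                  print(f"{rule:12} {path:25} {verdict}")

        Run like this, /?Page= only blocks URLs where the query string sits on the root path, while /*?Page= also catches /page.html?Page=give, which lines up with the URL pattern described above.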

        • Alick300

          Hi Bill,

          Disallow: /?Page= will work

          Thanks

          • btreloar

            Hi, James. It's not pagination I'm trying to disallow. The site has URLs that include strings like "Page=give&...", which open a blank form, and those URLs are linked from scores of pages we do want crawled. Since the "give" page is an empty form, we're getting tons of duplicate content errors as a result.
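
            Since it's specifically the blank "give" form generating the duplicate content reports, a narrower rule might be worth testing. This is a sketch that assumes Page=give is the first parameter in those URLs' query strings, which is an assumption about the site rather than something stated in the thread:

              User-agent: *
              # Block only the blank "give" form, on any path,
              # while leaving other ?Page= URLs crawlable
              Disallow: /*?Page=give

            Also worth remembering that Disallow only stops crawling; URLs that are already indexed can linger, so a noindex meta tag or a canonical on the form page could be a complementary option for the duplicate content errors.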




