Common Mistakes of the Robots.txt File [Explained]

First, let’s start with a question: what if Google had access to all of your web pages and showed your personal data on the search engine results page?

That would be a horror scenario, but don’t worry: Google won’t do it without your permission.

But how do you grant or deny Google and the major search engines permission to access certain files on your website?

It’s robots.txt. Remember the name: robots.txt, not robot.txts.

Every active website should have a robots.txt file that tells Google’s crawler which pages should be crawled and indexed and which pages should not.

I will give you an example to help you understand better, but first, let’s understand the theory of the robots.txt file.

What is a Robots.txt file?

Robots.txt is a file that tells search engines which pages they may access and which pages they should not crawl or index.

It helps keep compliant search engine crawlers away from pages you don’t want crawled. Note, though, that it is a directive rather than a security mechanism, so don’t rely on it alone to protect sensitive data.

It’s a plain .txt file uploaded to the root folder of a website, and in it we write a few directives for search engines.


User-agent: *
Disallow: /checkout/

# Allow crawling of product pages and categories
Allow: /products/
Allow: /categories/

This is a sample robots.txt file; the exact format should be adapted differently for an eCommerce website and a blog site.
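As a quick sanity check, rules like the sample above can be tested with Python’s standard-library robots.txt parser; the example.com domain below is a placeholder, not a real site:

```python
from urllib.robotparser import RobotFileParser

# The sample rules from above; example.com is a placeholder domain.
SAMPLE = """\
User-agent: *
Disallow: /checkout/

# Allow crawling of product pages and categories
Allow: /products/
Allow: /categories/
"""

rp = RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Ask the parser whether a generic crawler may fetch each path.
for path in ("/checkout/", "/products/", "/categories/", "/about/"):
    allowed = rp.can_fetch("*", "https://example.com" + path)
    print(path, "->", "allowed" if allowed else "blocked")
```

Anything not matched by a Disallow rule is crawlable by default, which is why /about/ comes back as allowed even though no rule mentions it.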

Note: An inappropriate robots.txt file can harm your website; it might even get your pages de-indexed from the search engine results page.

Using a proper robots.txt file helps conserve your crawl budget and speeds up indexing of your website.

I will explain how you can easily use a proper robots.txt file for your eCommerce website or blog site.

How to Use Robots.txt File Properly?

It’s not complicated; there is no rocket science behind its mechanism.

It’s a simple file, and you can generate it with any WordPress SEO plugin such as Yoast SEO, Rank Math, or All In One SEO.

By default, you will get a simply structured robots.txt file that covers the initial requirements.

But you can adapt it to your needs and make it more refined, conserving crawl budget and speeding up crawling and indexing.

If you use WordPress to manage your website, you can easily generate this file with Rank Math or another SEO plugin. Here, I’m using the Rank Math plugin to generate a robots.txt file.

Rank Math Robots.txt File Edit:

Go to “General Settings”, click “Edit robots.txt”, and you will get the editing field.

Rank Math Robots.txt editor setting

Yoast SEO Robots.txt File Edit:

Install and activate the Yoast SEO plugin and go to “Tools”. You will find several editing options here.

Select “File Editor” and click the “Create robots.txt file” button to generate a basic file for your website.

Yoast SEO robots.txt file setting

You can read Yoast SEO’s full documentation on creating and editing the robots.txt file.

Tools - Yoast SEO robots.txt file editor

Let’s have a look at a common robots.txt file:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://akramhossain.com/sitemap_index.xml

User-agent: *
Disallow: /wp-content/uploads/wpo-plugins-tables-list.json

This is a common and simple way to tell search engines which parts of your website to index and which to skip.

Even a blog, a real estate site, or any other site with few pages can use this format.

It can be changed in certain cases depending on the website’s conditions.

Let’s explore the above file:

Line – 1: User-agent: *

Here, we address all user agents: Google, Bing, Baidu, Yandex, Yahoo, AOL, and every other crawler out there. The [*] means the rules apply to every user agent.

Line – 2: Disallow: /wp-admin/

Now, we disallow Google and the other major search engine crawlers from crawling the [/wp-admin/] directory, which holds the WordPress admin dashboard and other sensitive back-end files.

Line – 3: Allow: /wp-admin/admin-ajax.php

Here, we make an exception for [/wp-admin/admin-ajax.php], which the website’s front end relies on, so Google and other crawlers can still fetch it and render your pages correctly.
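One caveat worth knowing: Google resolves the conflict between the Disallow on line 2 and this Allow by picking the most specific (longest) matching rule, so admin-ajax.php stays crawlable. Simpler parsers, including Python’s standard-library one, instead apply the first matching rule in file order, as this sketch shows (example.com is a placeholder domain):

```python
from urllib.robotparser import RobotFileParser

RULES = """\
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

rp = RobotFileParser()
rp.parse(RULES.splitlines())

# The stdlib parser checks rules in file order (first match wins), so the
# earlier Disallow shadows the later Allow here; Google, by contrast,
# prefers the longest matching rule and would allow admin-ajax.php.
print(rp.can_fetch("*", "https://example.com/wp-admin/admin-ajax.php"))  # False
print(rp.can_fetch("*", "https://example.com/blog/"))                    # True
```

So a result from one tool is not always what Googlebot will do; when in doubt, verify with Google’s own tester.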

Line – 5: Sitemap:

Here we point to the website’s XML sitemap so that Google and other search engines can easily understand the site’s structure and find its indexable links.

An XML sitemap is a crawler-readable file where search engines can easily find all of the website’s URLs for indexing.

Line – 5 isn’t required, but it’s good practice to include it here.
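If you want to read the Sitemap line back programmatically, Python’s standard-library parser exposes it (since Python 3.8) via site_maps(); the sitemap URL below is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt with a Sitemap directive (placeholder URL).
RULES = """\
User-agent: *
Disallow: /wp-admin/

Sitemap: https://example.com/sitemap_index.xml
"""

rp = RobotFileParser()
rp.parse(RULES.splitlines())

# site_maps() returns every Sitemap URL declared in the file.
print(rp.site_maps())  # ['https://example.com/sitemap_index.xml']
```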

Lines 7 and 8 add a second rule block that keeps a plugin-generated JSON file from being crawled.

This simple, well-structured file suits most regular websites, but if you run an eCommerce website, the robots.txt structure should be different.

Let’s have a look:

Robots.txt File for eCommerce Website

First of all, let’s look at how a robots.txt file for an eCommerce website might look:

User-agent: *
Disallow: /checkout/
Disallow: /cart/
Disallow: /account/
Disallow: /wishlist/
Disallow: /search/
Disallow: /login/
Disallow: /admin/
Disallow: /private/
Disallow: /restricted/

# Allow crawling of product pages and categories
Allow: /products/
Allow: /categories/

Here you can see that everything is the same, except for some additional disallowed entries.

You can see that lines 2 through 10 are disallowed for obvious reasons: checkout, cart, account, wishlist, and similar private pages don’t need to be indexed.

These pages should not be exposed publicly, even to search engines. It can be harmful if you don’t tell crawlers to skip these URLs.

Leaving them crawlable wastes your crawl budget and can surface account, cart, and checkout pages in search results.

Disclosure: First, consider each URL: should it be indexed or not? A careless Disallow entry here can de-index your whole website from Google.

Be careful before adding any entry, and check each URL with Google’s robots.txt tester.

The best practice is to always check a URL before adding it to an Allow or Disallow list.

I hope you now understand the concept of the robots.txt file and can use it for your own website, whether it’s an eCommerce site, a blog, a news site, a magazine, or anything else.

Common Mistake of Robots.txt File

Every website is different, with its own page structure managed by its owners.

So, ultimately, one site’s robots.txt entries will differ from another’s, and you will see entries like these:

User-agent: *
Disallow: /checkout/
Disallow: /customer/
Disallow: /cart/
Disallow: /*index.scss
Disallow: /*reqwest/index
Disallow: /wangpu/
Disallow: /shop/*.html
Disallow: /catalog/
Disallow: /*from=
Disallow: /wow/gcp/
Disallow: /wow/camp/

Look at Line – 5, Line – 6, Line – 8 and Line – 10; you can see the extra [*] before or after [/].

[*] is a wildcard: the rule isn’t limited to one specific URL, and it can match many URLs that fit the pattern.

For example:

Line – 8: Disallow: /shop/*.html

This single rule can match many different URLs, such as:

  • https://akramhossain.com/shop/product1.html
  • https://akramhossain.com/shop/product2.html
  • https://akramhossain.com/shop/productX.html

But excluding every such URL one by one in the robots.txt file would be troublesome.

So, we declare that every .html URL under /shop/ will be excluded from search engines.

So, we use [*] to call multiple URLs with a single list.
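Python’s built-in urllib.robotparser treats [*] literally, so to see how Google-style wildcard rules behave you need a small matcher of your own. Here is a rough sketch under that assumption (the helper names are mine, not from any library): [*] matches any run of characters, and a trailing [$] anchors the end of the URL.

```python
import re

def rule_to_regex(pattern: str) -> re.Pattern:
    """Translate a robots.txt path pattern into a regex:
    '*' matches any sequence of characters, a trailing '$' anchors the end."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile(body + ("$" if anchored else ""))

def is_blocked(path: str, disallow_pattern: str) -> bool:
    # robots.txt rules match from the start of the path, hence re.match
    return rule_to_regex(disallow_pattern).match(path) is not None

for path in ("/shop/product1.html", "/shop/sale/productX.html", "/catalog/item.html"):
    print(path, "->", "blocked" if is_blocked(path, "/shop/*.html") else "allowed")
```

The same helper shows why a rule like Disallow: /*from= blocks any URL whose path or query contains "from=".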

Again, you should understand the theory and mechanism of the robots.txt file before you apply it to your website.

Otherwise, it can be a costly punishment for you.

Is it required to use robots.txt for every website?

No. You don’t need this file if you have a simple HTML site, or a custom-coded simple site, with no private areas to keep out of search results.

Basically, it’s a set of instructions telling search engines where they should and should not crawl. You can still use this file to discourage crawlers from fetching pages you’d rather keep out of search, but remember that it is not a true security control.

Final Line

A robots.txt file lets you tell search engines which URLs to crawl and index and which URLs to skip.

It helps save crawl budget, and search engines can prioritize indexing your important pages.

Always weigh the importance of every URL. If you think some of your URLs should not be indexed, add them to the disallow list.

A proper robots.txt file helps with faster indexing, and it signals to search engines that you take the technical side of your website seriously.

Lastly, before going live, don’t forget to test your URLs in Google’s robots.txt tester for extra safety.
