
Robots.txt Management for Advanced SEO

 

A robots.txt file is a text file that provides instructions to search engine crawlers on which parts of your website they can access. It helps control which pages are included or excluded from search engine results. 

By creating or modifying your robots.txt file, you can gain more control over how search engines crawl and index your website. This allows for greater flexibility in your SEO strategy.
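For example, a minimal robots.txt file might look like the following; the paths and sitemap URL shown here are placeholders, not recommendations for your store:

  User-agent: *
  Disallow: /cart
  Disallow: /checkout
  Sitemap: https://www.example.com/sitemap.xml

The User-agent line states which crawlers the rules apply to (* means all crawlers), and each Disallow line lists a path those crawlers should not request.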

 

In This Article

  • When to Edit Robots.txt
  • Editing the Robots.txt File

When to Edit Robots.txt

While the default robots.txt file works for most stores, you may need to edit it when you want to restrict Google or other search engine crawlers from accessing certain URLs. Common scenarios include the following (see the example after this list):

  • Allowing or disallowing the crawling of specific URLs
  • Adding crawl delay rules for specific crawlers
  • Incorporating additional sitemap URLs
  • Blocking certain crawlers
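As a rough sketch of what these scenarios can look like in the file itself, the snippet below uses a placeholder crawler name (BadBot), a placeholder path, and an example sitemap URL; adapt the values to your own store:

  # Block one crawler from the entire site
  User-agent: BadBot
  Disallow: /

  # Ask a specific crawler to wait between requests
  # (Crawl-delay is ignored by Google but honored by some other crawlers)
  User-agent: Bingbot
  Crawl-delay: 10

  # Allow all other crawlers, except on a placeholder path
  User-agent: *
  Disallow: /internal-search

  # Register an additional sitemap
  Sitemap: https://www.example.com/extra-sitemap.xml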

Editing the Robots.txt File

Editing the robots.txt file requires a basic understanding of coding and SEO principles. To access and edit the file, follow these steps:

  1. In your SHOPLINE admin panel, go to Channels > Online Store > Preferences.
  2. Locate Robots.txt Management and click Go to settings.
  3. In the Robots editor, click the link to your robots.txt file.
  4. Copy the current file content and paste it into the Robots editor as a starting point for your edits.
  5. Modify the Disallow and Allow directives as needed to control which pages search engines can crawl (see the example after these steps).
  6. Click Update to save the settings. Then, click the robots.txt file link again to verify the changes.
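
For example, to keep a section of your store out of search results while still letting crawlers reach one page inside it, you might add rules like these (the /private/ paths are placeholders):

  User-agent: *
  Disallow: /private/
  Allow: /private/public-page

After clicking Update, reopening the robots.txt file link should show these lines in the live file.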

 

For more detailed information on robots.txt, refer to the "Introduction to robots.txt" and "How to write and submit a robots.txt file" guides on Google Search Central.

Note: This feature is an optimized version of the previous Anti-crawler settings. If you've made changes to the Anti-crawler settings, the system will automatically append the corresponding disallow rules at the end of the robots.txt file.

 
