robots.txt Configuration

1. Overview

The system automatically generates a robots.txt file at the root of your tracking domain (e.g., https://subdomain.yourdomain.com/robots.txt). This file tells search engine crawlers which parts of the site they may crawl or index.
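
To confirm that the file is being served, you can fetch it over HTTPS. Below is a minimal sketch in Python, using the placeholder domain from the example above:

# Fetch and print the generated robots.txt (placeholder domain from the example above).
from urllib.request import urlopen

with urlopen("https://subdomain.yourdomain.com/robots.txt") as response:
    print(response.read().decode("utf-8"))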

2. Interface Configuration

The behavior of the robots.txt file can be managed directly through the administration console under the "Search engine robots" section.

You can select one of three options from the dropdown menu:

A. Limited access

This is the default configuration generated by the system. It allows crawlers to access essential technical resources (scripts and HTML pages) while blocking the rest of the domain.

  • Behavior: Explicitly allows .js and .html files (with or without query strings) but disallows all other paths.

  • File Content:


User-agent: *
Allow: /*.js$
Allow: /*.js?*$
Allow: /*/js$
Allow: /*/js?*$
Allow: /*.html$
Allow: /*.html?*$
Disallow: /
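
To see how these rules resolve for a given URL path, the sketch below converts the * and $ wildcards into regular expressions and applies the longest-match precedence used by major crawlers. It is an illustrative Python helper with made-up sample paths, not the implementation of any particular crawler:

import re

# Convert a robots.txt path pattern (with * and $ wildcards) into an anchored regex.
def pattern_to_regex(pattern):
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.compile(regex)

# Rules from the "Limited access" file above.
ALLOW = ["/*.js$", "/*.js?*$", "/*/js$", "/*/js?*$", "/*.html$", "/*.html?*$"]
DISALLOW = ["/"]

def is_allowed(path):
    # The longest matching rule wins; an Allow beats a Disallow of equal length.
    rules = [(r, True) for r in ALLOW] + [(r, False) for r in DISALLOW]
    best_len, verdict = -1, True
    for rule, allowed in rules:
        if pattern_to_regex(rule).match(path) and len(rule) > best_len:
            best_len, verdict = len(rule), allowed
    return verdict

for path in ["/tag.js", "/landing.html?id=42", "/admin/config"]:
    print(path, "->", "allowed" if is_allowed(path) else "disallowed")

With these rules, /tag.js and /landing.html?id=42 are allowed while /admin/config is disallowed.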

B. Block

This option strictly prevents any crawling or indexing of the tracking domain.

  • Behavior: Crawlers are forbidden from visiting any part of the domain.

  • File Content:

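A block-all robots.txt is conventionally a single Disallow directive covering the whole domain; the generated file is expected to take this form:

User-agent: *
Disallow: /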

C. Allow

This option removes all restrictions, making the entire domain accessible to crawlers.

  • Behavior: All content is open to crawling and indexing.

  • File Content:

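An allow-all robots.txt is conventionally written with an empty Disallow directive (an explicit Allow: / is an equivalent form); the generated file is expected to take this form:

User-agent: *
Disallow: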


3. Options Summary

Each UI option, the robot visibility it results in, and its recommended use case:

  • Limited access: partial visibility (scripts and HTML only). Standard setup; ensures technical functionality without unnecessary indexing.

  • Block: no visibility. High privacy requirements or restricted test environments.

  • Allow: full visibility. Specific indexing needs or full transparency requirements.

Changes made in the online interface are applied dynamically to the robots.txt file at the root of the subdomain.
