The system automatically generates a robots.txt file at the root of your tracking domain (e.g., https://subdomain.yourdomain.com/robots.txt). This file tells search engine crawlers which parts of the site they may crawl or index.
2. Interface Configuration
The behavior of the robots.txt file can be managed directly through the administration console under the "Search engine robots" section.
You can select one of three options from the dropdown menu:
A. Limited access (recommended)
This is the default configuration generated by the system. It allows access to the essential technical resources while protecting the rest of the server's directory structure.
Behavior: allows .js and .html files and disallows all other paths.
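As an illustration, a robots.txt matching this behavior could look like the sketch below. The exact directives generated by the system may differ; this example simply applies the described rule (allow .js and .html, block everything else) using standard wildcard syntax supported by major crawlers:

```
# Illustrative example only - the generated file may differ
User-agent: *
# Permit script and page resources needed by crawlers
Allow: /*.js$
Allow: /*.html$
# Block all other paths on the tracking domain
Disallow: /
```

Note that more specific `Allow` rules take precedence over the broader `Disallow: /` for crawlers that follow the standard precedence rules (e.g., Googlebot), which is what makes this combination work.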