robots.txt

A robots.txt file tells search engine crawlers which parts of your website they can and can't access, giving you control over how your site is crawled and, in turn, how it shows up in search.

Every website talks to search engines, whether you realise it or not. The robots.txt file is how your site communicates the ground rules: which pages should be crawled, and which should be left alone.

What it does

robots.txt is a plain text file that sits at the root of your website. When a search engine bot like Googlebot arrives, it checks this file before crawling anything else. Think of it as the bouncer at the door: polite but firm about who goes where. Bear in mind it's a convention rather than a lock; reputable crawlers honour it, but it won't stop bad actors, so don't rely on it to hide sensitive content.
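
For example, a minimal robots.txt might look like this. The domain and paths are placeholders; you'd adjust them to match your own site:

    # Rules for every crawler
    User-agent: *
    # Keep bots out of areas that shouldn't be crawled
    Disallow: /admin/
    Disallow: /staging/
    # Anything not disallowed may be crawled by default

    # Point crawlers at your sitemap (optional but helpful)
    Sitemap: https://www.example.com/sitemap.xml

One detail that trips people up: the file must live at the root, e.g. https://www.example.com/robots.txt. Crawlers won't look for it anywhere else.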

Why it matters for your business

Without a robots.txt file, crawlers will attempt to crawl everything they can reach on your site, including pages you might not want appearing in search results: admin panels, staging content, duplicate pages, or internal tools. This can dilute your SEO by spreading search engine attention across pages that don't serve your customers. One caveat worth knowing: robots.txt controls crawling, not indexing, so a blocked page can still appear in results if other sites link to it.
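
For pages that must never show up in search, the reliable tool is a noindex tag in the page itself. As a sketch, it's a single line in the page's HTML head (and the page has to stay crawlable, since a crawler can only obey a tag it's allowed to read):

    <!-- Asks search engines not to list this page in results -->
    <meta name="robots" content="noindex">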

A well-configured robots.txt file focuses search engines' crawl budget (the limited number of pages they'll fetch from your site in a given period) on the pages that actually matter: your services, your portfolio, your contact page, the content that brings in business.
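
As an illustration, a small business site might steer crawlers away from low-value URLs such as on-site search results and parameter variations. The paths below are hypothetical, and the * wildcard is supported by Google and Bing but not by every crawler:

    User-agent: *
    # On-site search results generate endless near-duplicate URLs
    Disallow: /search
    # Variants that differ only by a sort or tracking parameter
    Disallow: /*?sort=
    Disallow: /*?ref=
    # Everything else, including your key business pages, stays crawlable
    Sitemap: https://www.example.com/sitemap.xml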

What happens without it

Technically, your site still works. But you lose control over how search engines interact with it. You might find internal pages showing up in Google results, or crawlers wasting time on low-value pages while your important content gets crawled less often.

It’s a small file with an outsized impact on how your site performs in search.

Your digital presence should work as hard as you do.

Website, email, listings, branding — let's get it all working together for your business.