Your website's next visitor is an AI agent
AI agents are your website’s new visitors. People are already using them to gather information, do research, or even make purchases online. Whether we like it or not, it’s a new way of using the internet. Agents don’t care about modern design or fancy animations - they need an easy way to understand and interact with your content. If your website isn’t ready for them, you’re already losing traffic you don’t even know about. Let’s fix that.
What does “agent-friendly” even mean?
An agent-friendly website is one that AI agents can reliably navigate, understand, and interact with. There’s a growing set of standards and conventions that make this possible. Some already exist and just need to be adopted. Others are brand new.
In this article, we’ll cover five building blocks - from controlling agent access with robots.txt, through helping them discover your content with llms.txt and markdown, to letting them interact with your product via OpenAPI and WebMCP.
Let’s start simple - robots.txt
You already have a robots.txt. But have you looked at it recently?
Traditionally, robots.txt has controlled which crawlers can access your website.
It applies to AI agents too - make sure you’re not blocking them.
At this point, you may think - “hold on, I don’t want them to use my content for training!”
It’s a valid point, but luckily, it’s possible to distinguish between an AI agent acting on behalf of a human and a crawler grabbing content for training.
For example, OpenAI provides a list of user-agent strings they use. Other providers like Anthropic and Google do the same.
So you might allow OAI-SearchBot and ChatGPT-User (which act on behalf of users) while blocking GPTBot (which collects training data).
Example robots.txt:
User-agent: *
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /
llms.txt - a guide for AI agents
robots.txt tells agents what they can access. llms.txt tells them what they should access.
It’s a markdown file placed at the root of your website that provides an overview of your site. Think of it as a README for AI agents - short description, links to key pages, and guidance on what matters.
# My SaaS Product
> A tool that helps developers ship faster.
## Docs
- [Getting Started](/docs/getting-started)
- [API Reference](/docs/api)
- [Changelog](/changelog)
## Optional
- [Blog](/blog)
- [Pricing](/pricing)
Why markdown? Because it’s simple and token-efficient. No HTML boilerplate, no navigation noise - just pure content that won’t eat thousands of tokens.
Some sites also provide an llms-full.txt - a more detailed version with full content inlined, so agents don’t even need to follow links.
Markdown versions of your content
llms.txt can point agents to specific pages, which ideally should also be available in markdown.
Providing markdown versions of your content is a great way to make sure agents can access it efficiently.
There are a few ways to do this - support a .md extension on your URLs, accept a ?format=markdown query parameter, or use HTTP content negotiation (serving markdown when the Accept header asks for it).
If you’re running a documentation site, a blog, or any content-heavy platform, this is low-hanging fruit. Your content probably already exists in markdown before it gets rendered to HTML anyway. Just make the source accessible and agents will love you for it.
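As an illustration, here’s a minimal sketch of how a server might decide which format to serve. The function name and the exact triggers are assumptions for this example - adapt them to your own routing:

```javascript
// Decide whether a request should get markdown instead of HTML.
// Three triggers, matching the options above: a .md extension,
// a ?format=markdown query param, or an Accept header that
// includes text/markdown.
function wantsMarkdown(url, acceptHeader = "") {
  const { pathname, searchParams } = new URL(url, "http://localhost");
  if (pathname.endsWith(".md")) return true;
  if (searchParams.get("format") === "markdown") return true;
  // Check each media range in the Accept header, ignoring q-values.
  return acceptHeader
    .split(",")
    .some((part) => part.trim().split(";")[0] === "text/markdown");
}

wantsMarkdown("/docs/getting-started.md");                    // → true
wantsMarkdown("/docs/api?format=markdown");                   // → true
wantsMarkdown("/docs/api", "text/markdown, text/html;q=0.8"); // → true
wantsMarkdown("/docs/api", "text/html");                      // → false
```

In a real handler, a `true` result would mean returning the markdown source with a `Content-Type: text/markdown` header instead of the rendered HTML.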
OpenAPI - let agents use your product
Everything we’ve covered so far is about agents reading your content. Now let’s talk about letting them use your product.
If your product has an API, an OpenAPI spec is the most important thing you can offer to AI agents. It’s a standardized description of every endpoint, parameter, and response your API supports.
Agents can read an OpenAPI spec and immediately know how to interact with your product - no documentation crawling, no guessing, no hallucinated endpoints.
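If you’re writing one from scratch, a minimal spec looks something like this sketch - the endpoint, operation, and schema names here are placeholders, not part of any real API:

```yaml
openapi: 3.1.0
info:
  title: My SaaS Product API
  version: 1.0.0
paths:
  /v1/projects:
    get:
      summary: List the user's projects
      operationId: listProjects
      responses:
        "200":
          description: A list of projects
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Project"
components:
  schemas:
    Project:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
```

Even a small spec like this gives an agent the exact method, path, and response shape - everything it needs to make a correct call on the first try.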
If you already have an OpenAPI spec, make sure it’s discoverable. Link to it from your llms.txt and your docs.
WebMCP - agent-native integration
You’ve probably heard about MCP already, but in case you haven’t - it’s a standardized way to integrate AI agents with external services. MCP servers expose tools (actions the agent can take), resources (data it can read), and prompts (predefined interactions). You can host an MCP server for your product much like you host an API.
WebMCP takes a different route - it’s a JavaScript interface exposed by the page itself. Wait, JavaScript? Didn’t we want to skip all the HTML/CSS/JS noise and serve pure markdown? Yes, but consider it an alternative approach. Many SPAs (which are terrible for agents) can be adapted easily - you just expose your existing functions as MCP tools.

It also enables a hybrid experience where a user and an agent cooperate. Say you want to buy shoes - you browse the catalog and find a pair you like, while the agent looks for similar ones. Then you pick a pair, the agent fills in the shipping address, and you confirm the payment.
WebMCP is still very fresh and experimental - at the time of writing, it’s available in Chrome’s early preview program.
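To make the idea concrete, here’s a sketch of what exposing a page function as a tool could look like. Since WebMCP is still in flux, treat the `navigator.modelContext.registerTool` call and its shape as assumptions based on the early proposal - the exact names may change. The catalog data is made up for the shoe example above:

```javascript
// Hypothetical in-page catalog (stand-in data for the shoe example).
const catalog = [
  { id: 1, name: "Trail Runner", color: "blue", price: 89 },
  { id: 2, name: "City Sneaker", color: "white", price: 69 },
  { id: 3, name: "Trail Runner Pro", color: "blue", price: 119 },
];

// An MCP-style tool: a name, a description, a JSON-schema input,
// and a handler the agent can invoke. The handler just reuses logic
// the page already has.
const searchShoesTool = {
  name: "search_shoes",
  description: "Find shoes in the catalog by color and maximum price.",
  inputSchema: {
    type: "object",
    properties: {
      color: { type: "string" },
      maxPrice: { type: "number" },
    },
  },
  async execute({ color, maxPrice } = {}) {
    return catalog.filter(
      (shoe) =>
        (color === undefined || shoe.color === color) &&
        (maxPrice === undefined || shoe.price <= maxPrice)
    );
  },
};

// Register it with the browser if the experimental API is present.
// (Guarded, because WebMCP only exists behind a preview flag today.)
globalThis.navigator?.modelContext?.registerTool?.(searchShoesTool);
```

The key point is that the tool handler is plain JavaScript you already have in your SPA - WebMCP just gives the agent a structured way to call it.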
Your new users are coming
Great - now you know the key building blocks of an agent-friendly website. If you’d like to check how your own site holds up, take a look at AgentProbe, a simple free tool that I built. It checks various aspects of your website and suggests improvements.
Keep an eye out - things are changing fast. 2026 is shaping up to be the year of AI agents, and we’re still in the early stages of their development.