You turn a live website into a machine-readable knowledge graph (KG) and an interactive HTML explorer, then derive llms.txt and robots.txt directly from that graph.
The result is a single source of truth that works simultaneously for agentic AI grounding, human exploration, and crawler discovery, without manual duplication or SEO-era hacks.
Step 1: Load the website in ChatGPT Atlas
Open the page you want to convert and make sure the full page context is available.
Step 2: Generate a dense knowledge graph as interactive HTML
Use the Atlas prompt below to extract all semantically meaningful entities and relationships and output a single self-contained interactive HTML file with embedded KG JSON.
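The prompt doesn't pin down an exact schema for the embedded graph, so the shape of the KG JSON can vary from run to run. A minimal sketch of one plausible shape, which the later snippets in this post build on (every field name here is an assumption, not a spec):

```ts
// Sketch of the KG JSON shape the generated HTML might embed.
interface KGNode {
  id: string;     // stable identifier, e.g. a slug of the label
  type: string;   // "Person" | "Org" | "Project" | "Resource" | ...
  label: string;  // exact wording lifted from the page
  url?: string;   // preserved verbatim when the page links out
}

interface KGEdge {
  source: string;   // KGNode.id
  target: string;   // KGNode.id
  relation: string; // "workedOn" | "authored" | "offers" | "linksTo" | ...
}

interface KnowledgeGraph {
  nodes: KGNode[];
  edges: KGEdge[];
}
```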
Step 3: Export the KG JSON
Use the “Export JSON” button in the HTML to download the raw graph (nodes + edges) as structured data.
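Inside the generated HTML, the export button can be as simple as a client-side Blob download. A sketch, assuming the graph was already parsed into a `kg` object of the `KnowledgeGraph` shape above (the filename is an arbitrary choice):

```ts
// Serializes the embedded graph and triggers a browser download.
function exportKG(kg: KnowledgeGraph): void {
  const blob = new Blob([JSON.stringify(kg, null, 2)], {
    type: "application/json",
  });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = "knowledge-graph.json";
  a.click();
  URL.revokeObjectURL(url);
}
```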
Step 4: Generate llms.txt and robots.txt from the KG JSON
Feed the KG JSON back into ChatGPT (or Atlas) using the conversion prompt to generate:
- llms.txt for LLM/agent grounding and discovery
- robots.txt for crawler directives and discoverability hints
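The LLM handles this conversion in the workflow above, but if you want a deterministic fallback or a baseline to compare its output against, a minimal sketch is below. It assumes the node/edge shape from the earlier sketch and follows the llmstxt.org proposal's layout (H1 title, blockquote summary, H2 sections of links); the site name and summary are values you supply, not inferred data.

```ts
// Deterministic fallback for the LLM conversion: group linkable nodes by type
// into llms.txt sections and emit a permissive robots.txt.
function kgToLlmsTxt(kg: KnowledgeGraph, siteName: string, summary: string): string {
  const byType = new Map<string, KGNode[]>();
  for (const node of kg.nodes) {
    if (!node.url) continue; // only nodes with URLs become links
    byType.set(node.type, [...(byType.get(node.type) ?? []), node]);
  }
  const sections = [...byType.entries()].map(
    ([type, nodes]) =>
      `## ${type}\n\n` + nodes.map((n) => `- [${n.label}](${n.url})`).join("\n"),
  );
  return `# ${siteName}\n\n> ${summary}\n\n${sections.join("\n\n")}\n`;
}

function kgToRobotsTxt(siteUrl: string): string {
  // robots.txt has no standard llms.txt directive, so the pointer is a comment.
  return `User-agent: *\nAllow: /\n\n# LLM guidance: ${siteUrl}/llms.txt\n`;
}
```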
Step 5: Deploy to your site (Lovable)
Add llms.txt and robots.txt at the site root (or serve them as plain-text routes).
Optionally host the interactive KG HTML as a public artifact or internal tool.
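Lovable projects are typically Vite-based, where anything placed in public/ is served from the site root, so dropping the two files into public/ is usually enough; treat that as an assumption about your setup rather than a guarantee. If you self-host instead, a minimal Express sketch for the plain-text routes:

```ts
import express from "express";
import path from "node:path";

const app = express();

// Serve the two derived files as plain-text routes from ./public.
for (const file of ["llms.txt", "robots.txt"]) {
  app.get(`/${file}`, (_req, res) => {
    res.type("text/plain");
    res.sendFile(path.join(process.cwd(), "public", file));
  });
}

// Optionally expose the interactive KG explorer as a public artifact.
app.use("/kg", express.static(path.join(process.cwd(), "public", "kg")));

app.listen(3000);
```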
The Atlas prompt:
You are running inside ChatGPT Atlas with full access to the active page context.
Goal:
Extract all semantically meaningful information from the currently loaded webpage and generate a single self-contained interactive HTML file that visualizes the content as a knowledge graph suitable for agentic AI workflows.
Rules:
- Use ONLY the provided browser page context
- Do NOT infer or add information
- Preserve exact wording and URLs
- Prefer granular nodes over broad summaries
Requirements:
- Model the page as nodes (Person, Org, Project, Resource, Post, Topic, Service, etc.)
- Model explicit edges only (workedOn, authored, offers, guidedBy, linksTo, etc.)
- Embed the KG data directly in the HTML as JSON
- Include search, type filtering, node details, outbound links, JSON export
- No server-side code
Output:
Return ONLY the complete HTML file. No explanations.
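The prompt leaves the embedding mechanism open. One common pattern, shown here as an assumption rather than what Atlas will necessarily produce, is a `<script type="application/json">` tag that the page's own script parses at load time:

```ts
// Reads the graph from an embedded JSON <script> tag; the "kg-data" id is an
// assumption, since the prompt only requires the JSON to be embedded somehow.
function loadEmbeddedKG(): KnowledgeGraph {
  const tag = document.getElementById("kg-data");
  if (!tag?.textContent) throw new Error("embedded KG JSON not found");
  return JSON.parse(tag.textContent) as KnowledgeGraph;
}

// The required type filter, implemented over the parsed graph.
function filterByType(kg: KnowledgeGraph, type: string): KGNode[] {
  return kg.nodes.filter((n) => n.type === type);
}
```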
The conversion prompt:
You are given a Knowledge Graph JSON with `nodes` and `edges`.
Convert it into:
1) llms.txt
2) robots.txt
Rules:
- Do not infer missing data
- Preserve URLs exactly
- Use KG entities and relations as the source of truth
- Output only the two files, clearly labeled
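Because an LLM performs this conversion, it's worth spot-checking the "Preserve URLs exactly" rule. A small sketch that flags any markdown-link URL in the generated llms.txt that doesn't appear verbatim in the KG JSON (the regex is a simplification and ignores bare URLs):

```ts
// Returns llms.txt URLs that do not appear verbatim in the KG JSON.
function findUnknownUrls(llmsTxt: string, kg: KnowledgeGraph): string[] {
  const known = new Set(kg.nodes.flatMap((n) => (n.url ? [n.url] : [])));
  const inDoc = [...llmsTxt.matchAll(/\]\((https?:\/\/[^)\s]+)\)/g)].map(
    (m) => m[1],
  );
  return inDoc.filter((url) => !known.has(url));
}
```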
Example output, based on my personal website (dimadurah.com):