Skip to content

Update CLAUDE.md/AGENTS.md inline context to respect llms.txt#3163

Merged
sacOO7 merged 2 commits intomainfrom
fix/building-with-llms-inline-context
Feb 26, 2026
Merged

Update CLAUDE.md/AGENTS.md inline context to respect llms.txt#3163
sacOO7 merged 2 commits intomainfrom
fix/building-with-llms-inline-context

Conversation

@sacOO7
Copy link
Contributor

@sacOO7 sacOO7 commented Jan 29, 2026

Summary by CodeRabbit

  • Documentation
    • Clarified AI-LLM guidance with an explicit, ordered sequence for sourcing truth: always consult llms.txt first, treat the docs site as canonical and tiebreaker, and use web search only as a fallback. Improved prompt guidance to reflect this prioritized lookup flow.

@coderabbitai
Copy link

coderabbitai bot commented Jan 29, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Use the checkbox below for a quick retry:

  • 🔍 Trigger review

Walkthrough

Documentation updates reorder AI-LLM sourcing into an explicit, stepwise hierarchy: always fetch https://ably.com/llms.txt first; treat https://ably.com/docs as the canonical reference; use docs to resolve conflicts; and use web searches only as a fallback after consulting llms.txt.

Changes

Cohort / File(s) Summary
AI-LLM Source-of-Truth Guidance
src/pages/docs/platform/ai-llms/index.mdx
Converts prior multi-bullet guidance into an ordered sequence: require fetching https://ably.com/llms.txt first, designate https://ably.com/docs as canonical, allow docs to break ties, and demote web searches to fallback-only. Updates prompt guidance and source-of-truth block to match precedence.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 I nibble notes beneath the light,
First llms.txt must greet my sight,
Docs stand firm when questions rise,
Web searches wait — a last surprise.
Hooray for ordered truth tonight! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Update CLAUDE.md/AGENTS.md inline context to respect llms.txt' directly matches the PR's main objective of modifying inline context in these files to prioritize llms.txt as the primary source-of-truth for LLMs.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch fix/building-with-llms-inline-context

Tip

Try Coding Plans. Let us write the prompt for your AI agent so you can ship faster (with fewer bugs).
Share your feedback on Discord.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

## Source of truth: Ably Docs

For anything related to Ably (APIs, SDKs, limits, behavior, authentication, capabilities, feature availability, deprecations):
For anything related to Ably:
Copy link
Contributor Author

@sacOO7 sacOO7 Jan 29, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

the simplified For anything related to Ably: is much cleaner and all-encompassing, because an exhaustive list (APIs, SDKs etc) might make an LLM think if it's not in this list, maybe these rules don't apply, same behaviour was partially seen in https://github.com/ably-labs/chat-poc-kotlin/pull/12.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Also, extra tokens don't add any actionable information

- Use **https://ably.com/llms.txt** as the authoritative index of Ably documentation.
- Prefer to perform web searches for up to date information from the Ably Docs site.
- If your prior knowledge, assumptions, or training data conflict with the docs, **the docs always win**.
1. Always fetch **https://ably.com/llms.txt** first, this is the primary search index. Use it to identify latest documentation URLs before fetching specific pages. Do not skip this step or jump directly to web searches.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

search index - creates cognitive link: "use this instead of web search"
latest - addresses the freshness problem—LLMs shouldn't rely on stale training data for URL patterns.

Copy link
Contributor Author

@sacOO7 sacOO7 Jan 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

authoritative implies trust/correctness but doesn't convey "use first", also second point
2. Treat **https://ably.com/docs** as the canonical source of truth already highlights ably.com as trustworthy source of information.

@sacOO7
Copy link
Contributor Author

sacOO7 commented Jan 30, 2026

@coderabbitai review

@coderabbitai
Copy link

coderabbitai bot commented Jan 30, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@GregHolmes GregHolmes force-pushed the fix/building-with-llms-inline-context branch from ca578ee to 669c991 Compare February 2, 2026 12:12
@GregHolmes GregHolmes force-pushed the fix/building-with-llms-inline-context branch from 669c991 to 28ca5a6 Compare February 19, 2026 12:56
@sacOO7 sacOO7 marked this pull request as ready for review February 24, 2026 09:50
@sacOO7 sacOO7 requested a review from kennethkalmer February 24, 2026 09:50
@sacOO7
Copy link
Contributor Author

sacOO7 commented Feb 24, 2026

@coderabbitai review

@coderabbitai
Copy link

coderabbitai bot commented Feb 24, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pages/docs/platform/ai-llms/index.mdx (1)

239-239: ⚠️ Potential issue | 🟡 Minor

Tip #3 "Prefer web search" directly contradicts the updated prompt hierarchy.

The LLM configuration prompt (lines 53–56) now explicitly demotes web search to a last resort ("may be used as a fallback only after consulting llms.txt"), but the unchanged user-facing tip still tells developers to prefer web search as the freshness mechanism. A developer reading this page receives two opposing instructions: the prompt they paste into CLAUDE.md says "llms.txt first, web search last," while the tip says the opposite. This undermines the PR's core objective.

The tip should be updated to align with the new hierarchy — e.g., framing web search as an optional fallback and llms.txt as the primary index.

📝 Proposed fix
-3. **Prefer web search**: Allow your LLM to perform web searches so it can fetch the most current information from the Ably Docs site, as opposed to relying on potentially outdated training data.
+3. **Enable web search as a fallback**: Allow your LLM to perform web searches so it can supplement information not found via `llms.txt` or the Ably Docs pages, rather than relying on potentially outdated training data.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/pages/docs/platform/ai-llms/index.mdx` at line 239, Update the Tip `#3`
copy titled "Prefer web search" so it matches the new prompt hierarchy: make
llms.txt the primary source and present web search as an optional fallback.
Concretely, replace the sentence that recommends preferring web search with
wording that instructs developers to consult llms.txt first (or embed the Ably
Docs index) and only use web search if llms.txt doesn't contain the needed info,
and make sure the example prompt references (CLAUDE.md, llms.txt) remain
consistent with this hierarchy.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@src/pages/docs/platform/ai-llms/index.mdx`:
- Line 239: Update the Tip `#3` copy titled "Prefer web search" so it matches the
new prompt hierarchy: make llms.txt the primary source and present web search as
an optional fallback. Concretely, replace the sentence that recommends
preferring web search with wording that instructs developers to consult llms.txt
first (or embed the Ably Docs index) and only use web search if llms.txt doesn't
contain the needed info, and make sure the example prompt references (CLAUDE.md,
llms.txt) remain consistent with this hierarchy.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Jira integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ca578ee and 28ca5a6.

📒 Files selected for processing (1)
  • src/pages/docs/platform/ai-llms/index.mdx

@sacOO7 sacOO7 force-pushed the fix/building-with-llms-inline-context branch from 28ca5a6 to 657f597 Compare February 26, 2026 06:21
@sacOO7 sacOO7 force-pushed the fix/building-with-llms-inline-context branch from 657f597 to 3a81f87 Compare February 26, 2026 06:25
@sacOO7 sacOO7 merged commit 6d7776a into main Feb 26, 2026
7 checks passed
@sacOO7 sacOO7 deleted the fix/building-with-llms-inline-context branch February 26, 2026 06:35
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Development

Successfully merging this pull request may close these issues.

3 participants