doc: support markdown response for AI #1171
base: next
Conversation
…3 in package.json
…M text generation
- Added middleware to handle markdown rewrites for LLM documentation.
- Introduced new routes for serving LLM text from markdown files.
- Updated source configuration to include processed markdown in documentation.
- Created utility functions for fetching and formatting LLM text.
@aryasaatvik is attempting to deploy a commit to the 47ng Team on Vercel. A member of the Team first needs to authorize it.
Thanks!
The typecheck step of the docs fails with this version of Fumadocs (you can see the results locally by running `pnpm build --filter docs...`).
I see this uses a middleware, is there a way to do this completely statically? Middleware would run on every request and incur additional costs & latency.
Also is the /docs/llms.mdx/[...] a convention used by agents? Or could it be changed to something like /llms/docs/[...], the mdx "extension" seems a bit odd in the middle of the pathname.
Disclaimer: I don't know much about how these agents fetch content, but clean URL suggestions could be:
- `/llms.txt` for the root index (from https://llmstxt.org/)
- `/llms-full.txt` for the detailed index
- `/llms/docs/installation.md` for the markdown content equivalent of `/docs/installation`
What do you think?
packages/docs/src/middleware.ts (outdated)
    const { rewrite: rewriteLLM } = rewritePath("/docs/*path", "/docs/llms.mdx/*path");

    export function middleware(request: NextRequest) {
      if (isMarkdownPreferred(request)) {
question: What does this function do?
It checks the Accept header for a text/plain or text/markdown preference, taking into account the order and q values when both text/html and (text/plain or text/markdown) are present.
Using a static rewrite also works for Claude Code and opencode, since they don't rely on q values and ordering.
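For illustration, here is a minimal sketch of what such an Accept-header check could look like; the actual isMarkdownPreferred in this PR may weigh q values and ordering differently:

```ts
import { type NextRequest } from "next/server";

// Sketch only: parse the Accept header into media types with q values,
// then prefer markdown/plain over HTML based on q and position.
export function isMarkdownPreferred(request: NextRequest): boolean {
  const accept = request.headers.get("accept") ?? "";
  const entries = accept.split(",").map((part, index) => {
    const [type = "", ...params] = part.trim().split(";");
    const qParam = params.find((p) => p.trim().startsWith("q="));
    const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
    return { type: type.trim().toLowerCase(), q, index };
  });

  const score = (mediaType: string) => {
    const entry = entries.find((e) => e.type === mediaType);
    // Higher q wins; earlier position in the header breaks ties.
    return entry ? entry.q - entry.index * 1e-3 : -1;
  };

  const markdown = Math.max(score("text/markdown"), score("text/plain"));
  return markdown > 0 && markdown > score("text/html");
}
```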
- Replace middleware with static Next.js rewrites for better performance
- Implement clean URL structure: /llms/docs/ instead of /docs/llms.mdx/
- Add Accept header logic for AI agent detection
- Remove middleware.ts to eliminate runtime overhead
- Update route handler path and TypeScript types
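As a rough sketch of the static-rewrite approach (shown in TypeScript form; the repo uses next.config.mjs, and the exact header matcher in the PR may differ), a header-conditioned rewrite could look like:

```ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    return [
      {
        // Rewrite docs requests to the markdown route when the client asks for markdown.
        source: "/docs/:path*",
        destination: "/llms/docs/:path*",
        has: [
          {
            type: "header",
            key: "accept",
            // Rough regex: markdown or plain text requested, text/html absent.
            value: "(?!.*text/html).*text/(markdown|plain).*",
          },
        ],
      },
    ];
  },
};

export default nextConfig;
```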
All alerts resolved. This PR previously contained dependency changes with security issues that have been resolved, removed, or ignored.
…n documentation
- Introduced a new redirect rule to handle requests for markdown files in the documentation.
- This allows for cleaner URLs and better organization of LLM documentation resources.
- Replace custom cn.ts with existing @/src/lib/utils import
- Ensures proper clsx + twMerge combination for className merging
- Maintains consistency with rest of codebase
- Replace manual URLSearchParams with nuqs createSerializer
- Demonstrates dogfooding of nuqs library capabilities
- Maintains same functionality with cleaner, type-safe approach
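For context, this is roughly what createSerializer usage looks like in nuqs; the specific search params below are made up for illustration:

```ts
import { createSerializer, parseAsString } from "nuqs/server";

// Hypothetical search params, for illustration only.
const serialize = createSerializer({
  title: parseAsString,
  url: parseAsString,
});

// Type-safe equivalent of building a query string with URLSearchParams:
// "/chat?title=Docs&url=https%3A%2F%2Fexample.com"
const href = serialize("/chat", {
  title: "Docs",
  url: "https://example.com",
});

console.log(href);
```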
- Wrap LLMCopyButton and ViewOptions in Suspense with skeleton fallback
- Use buttonVariants for perfect styling consistency with real buttons
- Prevents layout shifts on slow connections
- Skeleton matches exact dimensions and styling of actual components
- Improves perceived performance and user experience
- Refactor PageActionsSkeleton into its own component for better modularity
- Import and use PageActionsSkeleton in the documentation page
- Enhances code organization and reusability across documentation pages
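A minimal sketch of the Suspense-plus-skeleton pattern described in these commits (import paths, sizing classes, and the children-based wrapper are assumptions; the PR's actual components may differ):

```tsx
import { Suspense, type ReactNode } from "react";
// Assumed paths: buttonVariants from the shadcn/ui button, cn from @/src/lib/utils.
import { buttonVariants } from "@/components/ui/button";
import { cn } from "@/src/lib/utils";

// Skeleton sized like the real buttons so nothing shifts once they load.
export function PageActionsSkeleton() {
  return (
    <div className="flex items-center gap-2">
      <div className={cn(buttonVariants({ variant: "secondary" }), "w-28 animate-pulse")} />
      <div className={cn(buttonVariants({ variant: "secondary" }), "w-28 animate-pulse")} />
    </div>
  );
}

// Wrap the async page actions (LLMCopyButton, ViewOptions) in Suspense.
export function PageActions({ children }: { children: ReactNode }) {
  return <Suspense fallback={<PageActionsSkeleton />}>{children}</Suspense>;
}
```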
Looking at the output of […]. Also, the output of llms.txt and llms-full.txt appears to be the same. I wonder if we could fence off human-only content with HTML comments, like:

content for both humans & llms
<!-- llms:off -->
human only content
<!-- llms:on -->

We could also have LLM-only content like so:

<!-- llms:only
this is not rendered for humans, but emitted in llms.txt and other markdown endpoints.
-->
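If that convention were adopted, a naive filter over the raw MDX could strip the fenced regions when emitting markdown for LLMs. A rough sketch (the marker names follow the suggestion above; nothing here is part of the PR):

```ts
// Strip human-only regions and unwrap LLM-only regions from raw MDX source.
export function filterForLLMs(mdx: string): string {
  return mdx
    // Drop anything between <!-- llms:off --> and <!-- llms:on -->
    .replace(/<!--\s*llms:off\s*-->[\s\S]*?<!--\s*llms:on\s*-->/g, "")
    // Unwrap <!-- llms:only ... --> so its body reaches llms.txt and friends
    .replace(/<!--\s*llms:only([\s\S]*?)-->/g, "$1");
}
```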
314e638 to 0027b7c
- Refactor the GET function to organize documentation pages by directory
- Create a map to group pages and format output with titles and descriptions
- Enhance readability of the generated documentation
- Update the GET function to exclude specific pages from LLM text processing.
- Remove PAGE_EXCLUSIONS from the getLLMText function to streamline logic.
Updated llms.txt to only include a table of contents, and added support for page exclusions for llms-full. I think fencing content for LLMs/humans in .mdx would require creating a Fumadocs loader plugin to transform the comments at the MDX compilation step and filter the content. I don't want to do this as part of this PR.
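For reference, a rough sketch of what such a table-of-contents route could look like with the Fumadocs source API (the import path, PAGE_EXCLUSIONS values, and grouping format are assumptions, not the PR's exact code):

```ts
// app/llms.txt/route.ts — sketch only.
import { source } from "@/lib/source";

// Assumed exclusions; the PR keeps its own list.
const PAGE_EXCLUSIONS = ["index"];

export async function GET() {
  const groups = new Map<string, string[]>();

  // Group pages by their parent directory, keeping title and description.
  for (const page of source.getPages()) {
    if (PAGE_EXCLUSIONS.includes(page.slugs.join("/"))) continue;
    const dir = page.slugs.slice(0, -1).join("/") || "root";
    const entry = `- [${page.data.title}](${page.url})${
      page.data.description ? `: ${page.data.description}` : ""
    }`;
    groups.set(dir, [...(groups.get(dir) ?? []), entry]);
  }

  const body = [...groups.entries()]
    .map(([dir, entries]) => `## ${dir}\n\n${entries.join("\n")}`)
    .join("\n\n");

  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```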
🤖 LLM-Friendly Documentation Support
This PR implements AI agent-friendly documentation features, addressing maintainer feedback for better performance and cleaner URLs.
✨ Key Features
Content Negotiation
- Serves markdown when the Accept header prefers `text/markdown` or `text/plain` without `text/html`
- Clean URLs: `/llms/docs/` instead of `/docs/llms.mdx/`

Multiple Endpoints
- `/llms-full.txt` - Complete documentation corpus for LLMs
- `/llms/docs/[...]` - Individual page markdown content
- `/docs/[...]` - Regular HTML documentation

Enhanced UX
🔧 Technical Changes
- Static rewrites in `next.config.mjs`
- `/llms/docs/[[...slug]]` for markdown content
- `/llms-full.txt` and `/llms.txt` for corpus access
- `includeProcessedMarkdown` for LLM text extraction

✅ Addresses Maintainer Feedback
- Clean `/llms/docs/` URL pattern

🧪 Testing
📚 References