Description
The current draft describes best practices for "web crawlers" and refers to the target audience generally as "automated clients". To ensure the best practices are applied correctly and to avoid ambiguity regarding scope (e.g., distinguishing between search crawlers and AI agents), we need a precise definition of what constitutes a "crawler" in this context.
Proposed Action:
Perhaps create a "Definitions" section that explicitly defines the target audience, or simply expand the introduction as we did in RFC 9309. We should investigate and potentially align with existing definitions from:
- WebBotAuth: The WebBotAuth Working Group focuses on the "Authentication of non-human users to human-oriented Web sites". We should determine if our definition of "crawler" is a subset of this or if it requires distinct characteristics (e.g., traversal vs. single-point access).
- Regulations: Review how existing regulations (e.g., the EU AI Act?) define crawlers, to ensure our terminology is at least ballpark-consistent with regulatory frameworks.
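To make the traversal vs. single-point distinction concrete, here is a minimal sketch of one way a definition could be operationalized. All names, fields, and the threshold are hypothetical illustrations, not anything from the draft or from WebBotAuth:

```python
from dataclasses import dataclass, field

# Hypothetical model of observed client behavior over some window.
# "followed_links" counts fetches triggered by links the client
# discovered in earlier responses (i.e., traversal).
@dataclass
class ClientActivity:
    user_agent: str
    fetched_uris: list[str] = field(default_factory=list)
    followed_links: int = 0

def is_crawler(activity: ClientActivity, min_traversal: int = 2) -> bool:
    """Classify a client as a 'crawler' if it traverses discovered
    links rather than fetching a single, known resource."""
    return activity.followed_links >= min_traversal

# A traversing bot vs. a single-point API client (illustrative only).
search_bot = ClientActivity("ExampleBot/1.0", ["/", "/a", "/b"], followed_links=2)
api_client = ClientActivity("ExampleFetcher/2.0", ["/api/item/42"], followed_links=0)

print(is_crawler(search_bot))  # True: follows discovered links
print(is_crawler(api_client))  # False: single-point access
```

This is only meant to show that "traversal" is a testable property; whether the draft's definition should hinge on it is exactly the open question.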
References:
https://datatracker.ietf.org/wg/webbotauth/about/