
Conversation

@slauriat (Contributor)

Updated pull request with changes to the Measurable Guidance opportunity as a result of the Silver TF discussion on 18 February, in response to #41.
See https://www.w3.org/2022/02/18-silver-minutes.html#ResolutionSummary
There are several areas for exploration in how conformance can work. These opportunities may or may not be incorporated. They need to work together, and that interplay will be governed by the design principles.

- **Measurable Guidance** (previous wording): Certain accessibility guidance is quite clear and measurable; other guidance is far less so. There are needs of people with disabilities, especially cognitive and low vision disabilities, that are not well served by guidance that can only be measured with a pass/fail statement. Multiple means of measurement, in addition to pass/fail statements, allow inclusion of more accessibility guidance.
- **Measurable Guidance** (revised wording): There are needs of people with disabilities, such as cognitive and low vision disabilities, that cannot be stated in a single, easy-enough-to-measure pass/fail statement. In addition, one area of exploration is to innovate in the area of ways to know whether or not you have followed guidance. By expressing how to implement and how to measure separately, as well as by allowing multiple means of measurement, we can allow inclusion of more accessibility guidance.
@detlevhfischer (Mar 8, 2022)

I think the bit "innovate in the area of ways to know whether or not you have followed guidance" is vague and confusing and not really an improvement on what we had before.
It is evident that there are criteria that are difficult to rate (say, whether language is sufficiently clear). Here, a graded range like excellent / good / fair / not so good / insufficient does more justice to such a criterion (and to the content it applies to). But whatever the means of measurement, the issue remains that different people will arrive at different ratings.

However, in terms of overall results (say, across a page sample), once aggregated, distortions and artifacts will be less pronounced in a graded rating scheme, and inter-tester consensus will likely be better. For example, one evaluator may rate some content as "fair" and another as "good"; that is still fairly close. In a pass/fail scheme, you would have to fall on either side of the pass/fail divide. When graded results are aggregated across a number of pages with content of varying quality, the averaged rating will more truly reflect the quality of the content overall.

If you then finally have to translate to pass/fail per outcome for the entire sample (or whatever the new unit of conformance is), it is a matter of where you set the cut-off point, and also whether you identify critical errors and allow them to fail an overall outcome result even where an averaged rating would otherwise translate to "pass" (per the defined cut-off point).
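To make the aggregation idea concrete, here is a minimal sketch in Python. The five-point scale, its numeric mapping, the cut-off of 2.0 ("fair"), and the `aggregate` helper are all illustrative assumptions, not anything defined by the Silver TF or this pull request.

```python
# Hypothetical 5-point scale mapped to numeric scores for averaging.
SCALE = {"insufficient": 0, "not so good": 1, "fair": 2, "good": 3, "excellent": 4}

# Assumed cut-off: an average of "fair" (2.0) or better translates to "pass".
CUTOFF = 2.0

def aggregate(ratings, critical_errors=0):
    """Average graded per-page ratings across a sample and map to pass/fail.

    ratings         -- list of scale labels, one per sampled page
    critical_errors -- count of critical errors found in the sample; any
                       critical error fails the outcome regardless of average
    """
    if critical_errors > 0:
        return "fail"
    average = sum(SCALE[r] for r in ratings) / len(ratings)
    return "pass" if average >= CUTOFF else "fail"

# Two evaluators disagreeing by one grade ("fair" vs. "good") barely shifts
# the aggregate, whereas in a pass/fail scheme they could land on opposite
# sides of the divide.
print(aggregate(["fair", "good", "excellent", "fair"]))      # pass (average 2.75)
print(aggregate(["fair", "good", "excellent", "fair"], 1))   # fail (critical error)
```

The sketch shows the two levers the comment identifies: where the cut-off point sits, and whether critical errors are allowed to override an otherwise passing average.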

@detlevhfischer left a comment

See comment.
