
Commit a11e36d — Update canvas.md
Author: Open Ethics
Parent: f2d04f0
1 file changed: 76 additions & 69 deletions

# The Open Ethics Canvas v1.0

modified: 2021-08-07

---

- Designed For
- Designed By
- Date
- Version

---

## Scope

- What is this product designed for?
- In which context does it operate?

## Users

- What types of users does this product have (customers, admins, etc.)?
- What are their roles?

## Training Data

- How was the training data collected?
- How do you ensure its representativeness?
- Does your training dataset contain personal data?
- Who annotates the data, and how is quality controlled?
- What is the data labeling process that you employ?

## Algorithms & Source Code

- Do you use open or proprietary sources? Why? Which?
- Who in the team sets the heuristics (rules) that influence the output?
- How do you ensure the quality of the third-party codebases you use?
- What is your process for making the key architectural choices?

## Decision Space

- What exactly does the product do?
- Can you provide the list of all possible outputs?
- How are incorrectly supplied inputs spotted?
- Is there anomaly detection in place?
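Input anomaly detection need not be elaborate to be useful. As a minimal sketch (the function names, sample values, and the 3-sigma threshold are illustrative assumptions, not part of the canvas), one can record the training distribution of a numeric input and flag values far outside it:

```python
import statistics

def fit_input_monitor(training_values):
    """Record the mean and standard deviation of a numeric input feature."""
    return statistics.mean(training_values), statistics.stdev(training_values)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag inputs more than z_threshold standard deviations from the training mean."""
    return abs(value - mean) > z_threshold * stdev

mean, stdev = fit_input_monitor([10.0, 11.5, 9.8, 10.2, 10.9])
print(is_anomalous(10.4, mean, stdev))  # False: within the training range
print(is_anomalous(55.0, mean, stdev))  # True: far outside the training range
```

A production system would monitor every input feature (and likely use a multivariate method), but even a check like this gives the canvas question "Is there anomaly detection in place?" a concrete answer.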

## Key Stakeholders

- Who are the key stakeholders?
- What influence do they have over the product?
- How do stakeholders interact with each other?
- How is the power distributed?

## Values & Interests

- What values do stakeholders/users have?
- Where can these values clash or create tensions?
- What is known at the moment, and how are assumptions tested?
- How can you align your technology with the values you want to support and that people desire?

## Personal Data Processing

- Which personal data is collected by the product?
- What is the purpose of collecting personal data?
- How is this data processed? Used? Stored? Deleted?

## Components & Subprocessing

- Which third parties are engaged by the product?
- How do you evaluate the potential impact of third-party APIs on the quality of your product’s output?
- How do you check the reliability of your data processing contractors?

## Failure Modes

- How are failures detected and monitored?
- What are the possible failures of the product?
- What actions are performed if the product fails?

## Explainability

- How is interpretability defined for the system?
- What interpretability methods are used?
- What metrics are used in result interpretation?
- How are interpretations of the output communicated?
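One common model-agnostic interpretability method is permutation feature importance: shuffle one feature's column and measure how much a performance metric drops. The sketch below is illustrative only (the toy model, data, and function names are assumptions, not part of the canvas), but it shows the idea behind answering "What interpretability methods are used?":

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, n_repeats=20, seed=0):
    """Average drop in the metric when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - metric(model, X_perm, y))
    return sum(drops) / n_repeats

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

# Toy classifier that only looks at feature 0; feature 1 is ignored.
model = lambda x: 1 if x[0] > 0 else 0
X = [[1, 9], [-1, 9], [2, -4], [-2, -4], [3, 0], [-3, 0], [4, 7], [-4, 7]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

imp_f0 = permutation_importance(model, X, y, feature_idx=0, metric=accuracy)
imp_f1 = permutation_importance(model, X, y, feature_idx=1, metric=accuracy)
print(imp_f0, imp_f1)  # the used feature scores higher than the ignored one
```

Because the toy model ignores feature 1, shuffling that column changes nothing and its importance is exactly zero, while feature 0 shows a positive drop.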

## Human in the Loop (HITL)

- What is the role of a human agent in the validation/verification of the outputs?
- What is the role of a human agent in refining the model performance?
- What decision-making power is assigned to human agents responsible for the quality of output?

## Model Performance Metrics

- Which metrics are used to evaluate the product performance?
- Which measures are used to re-evaluate accuracy, recall, precision, and F1 score?
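The four metrics named above all derive from the confusion matrix. A minimal sketch (the function name and example counts are illustrative assumptions):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of the positive predictions, how many were right
    recall = tp / (tp + fn)      # of the actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Example: 80 true positives, 10 false positives, 20 false negatives, 90 true negatives.
acc, prec, rec, f1 = classification_metrics(tp=80, fp=10, fn=20, tn=90)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

Re-evaluating these on fresh, post-deployment data (rather than the original test set) is what the canvas question is probing for.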

## Decision Feedback & Objection

- How does the product allow for structured feedback?
- How can the user challenge the application output?
- Which third parties are involved in claims/objection resolution?

## Impact Assessment

- What potential harms can your product cause (loss of opportunity, discrimination, economic loss, social stigma, detriment, emotional distress, etc.)?
- What are the risks of the product’s failure?
- What impact can the product cause if deployed at scale?
- How does the product influence the existing markets?

## Regulatory Landscape

- What is the regulatory context in which the product operates?
- Is the model portable to other market verticals?
- What are the regulatory risks involved?

## Mitigation

- How do you test for bias and fairness? What fairness definitions do you employ, and why?
- Does your team reflect a diversity of opinions, backgrounds, and thoughts?
- Do you have a process for redress if people are harmed by the outputs?
- How fast can you shut down your product in production if it behaves badly?
- Who should be informed, and how?

## Changes in Behavior

- Do the automated decisions have significant legal or similar effects on the users/stakeholders?
- How may users change their behavior after use?
- What are the potentials for power imbalance?

### Group Interactions

- What are potential changes in group behavior?
- How is the product addressing group interests?
- What new groups could be born due to the product's deployment at scale?

## Comments

---

The Open Ethics Canvas v1.0 © 2021 by Open Ethics contributors
Designed by Nikita Lukianets, Alice Pavaloiu, Vlad Nekrutenko
Licensed under Attribution-ShareAlike 4.0 International
