# The Open Ethics Canvas v1.0
modified: 2021-08-07

---

- Designed For
- Designed By
- Date
- Version

---

## Scope
- What is this product designed for?
- In which context does it operate?

## Users
- What types of users does this product have (customers, admins, etc.)?
- What are their roles?

## Training Data
- How was the training data collected?
- How do you ensure its representativeness?
- Does your training dataset contain personal data?
- Who annotates the data, and how is quality controlled?
- What data-labeling process do you employ?

## Algorithms & Source Code
- Do you use open or proprietary sources? Why? Which?
- Who on the team sets the heuristics (rules) that influence the output?
- How do you ensure the quality of the third-party codebases you use?
- What is your process for making key architectural choices?

## Decision Space
- What exactly does the product do?
- Can you provide a list of all possible outputs?
- How are incorrectly supplied inputs spotted?
- Is anomaly detection in place?

## Key Stakeholders
- Who are the key stakeholders?
- What influence do they have over the product?
- How do stakeholders interact with each other?
- How is power distributed?

## Values & Interests
- What values do stakeholders/users hold?
- Where can these values clash or create tensions?
- What is known at the moment, and how are assumptions tested?
- How can you align your technology with the values you want to support and the values people desire?

## Personal Data Processing
- Which personal data is collected by the product?
- What is the purpose of collecting personal data?
- How is this data processed? Used? Stored? Deleted?

## Components & Subprocessing
- Which third parties does the product engage?
- How do you evaluate the potential impact of third-party APIs on the quality of your product's output?
- How do you check the reliability of your data-processing contractors?

## Failure Modes
- How are failures detected and monitored?
- What are the possible failures of the product?
- What actions are taken if the product fails?

## Explainability
- How is interpretability defined for the system?
- What interpretability methods are used?
- What metrics are used in interpreting results?
- How are interpretations of the output communicated?

## Human in the Loop (HITL)
- What is the role of a human agent in validating/verifying the outputs?
- What is the role of a human agent in refining the model's performance?
- What decision-making power is assigned to the human agents responsible for output quality?

## Model Performance Metrics
- Which metrics are used to evaluate the product's performance?
- Which measures are used to re-evaluate Accuracy, Recall, Precision, and F1 score?

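The canvas prescribes no tooling for these metrics. As a minimal illustration only (a hypothetical sketch, not part of the canvas), the four measures named above can be derived from a binary confusion matrix in plain Python:

```python
def classification_metrics(y_true, y_pred):
    """Return Accuracy, Precision, Recall, and F1 for binary labels (0/1)."""
    # Count the four cells of the confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

metrics = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Re-evaluating these measures on fresh, representative data (rather than the original test split) is what the question above asks the team to plan for.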
## Decision Feedback & Objection
- How does the product allow for structured feedback?
- How can the user challenge the application's output?
- Which third parties are involved in resolving claims/objections?

## Impact Assessment
- What potential harms can your product cause (loss of opportunity, discrimination, economic loss, social stigma, detriment, emotional distress, etc.)?
- What are the risks of the product's failure?
- What impact can the product have if deployed at scale?
- How does the product influence existing markets?

## Regulatory Landscape
- What is the regulatory context in which the product operates?
- Is the model portable to other market verticals?
- What are the regulatory risks involved?

## Mitigation
- How do you test for bias and fairness? What fairness definitions do you employ, and why?
- Does your team reflect a diversity of opinions, backgrounds, and thoughts?
- Do you have a process for redress if people are harmed by the outputs?
- How fast can you shut down your product in production if it behaves badly?
- Who should be informed, and how?

## Changes in Behavior
- Do the automated decisions have significant legal or similar effects on the users/stakeholders?
- How might users change their behavior after use?
- What are the potential power imbalances?

## Group Interactions
- What are the potential changes in group behavior?
- How does the product address group interests?
- What new groups could emerge from the product's deployment at scale?

## Comments

---

The Open Ethics Canvas v1.0 © 2021 by Open Ethics contributors
Designed by Nikita Lukianets, Alice Pavaloiu, Vlad Nekrutenko
Licensed under Attribution-ShareAlike 4.0 International