sections/data-ethics.qmd
+8 −10 lines changed: 8 additions & 10 deletions
@@ -295,21 +295,19 @@ The concern for transparency in using personal data is an active space for debate
As new AI developments and applications rapidly emerge and transform everyday life, we need to pause and ensure these technologies are fair, sustainable, and transparent. We must acknowledge human responsibility in designing and implementing AI systems to use these novel tools fairly and with accountability. Finally, we acknowledge that the information covered here is a lightning introduction to AI's ethical considerations and implications. Whether you are a researcher interested in using AI for the first time or a seasoned ML practitioner, we urge you to dive into the necessary and ever-expanding AI ethics work to learn how to best incorporate these concepts into your work.

-### Bonus - ImageNet: A case study of ethics and bias in machine learning
+### Discussion Activity

-The stories that hit the news are often of privacy breaches or biases seeping into the training data. Bias can enter at any point of the research project, from preparing the training data, designing the algorithms, to collecting and interpreting the data. When working with sensitive data, a question to also consider is how to deanonymize, anonymized data. A unique aspect to machine learning is how personal bias can influence the analysis and outcomes. A great example of this is the case of ImageNet.
+Given the discussion of the CARE principles and the FAST principles, let's discuss what responsible AI considerations might exist in the context of Arctic research, particularly with respect to Indigenous peoples of the Arctic. Geospatial data spanning the Arctic typically includes the traditional lands and waters of Arctic Indigenous peoples, and often intersects with current local communities distributed throughout the Arctic. Recent work by projects like [Abundant Intelligences](https://www.indigenous-ai.net/abundant/) is starting to explore the intersection of Indigenous Knowledge systems and Artificial Intelligence models, and how to guide the "development of AI \[to support\] a more humane future".

-
-Image source: Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Training Sets for Machine Learning” (September 19, 2019).
-
-ImageNet is a great example of how personal bias can enter machine learning through the training data. ImageNet was a training data set of photos that was used to train image classifiers. The data set was initially created as a large collection of pictures, which were mainly used to identify objects, but some included images of people. The creators of the data set created labels to categorize the images, and through crowdsourcing, people from the internet labeled these images. (This example is from Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Training Sets for Machine Learning”, September 19, 2019).
+Let's take, for example, a researcher who wants to run a machine learning model to detect changes in environmental features at a large regional or Arctic scale. We've seen several of these so far, including 1) AI predictions of the distribution of permafrost ice wedges and retrogressive thaw slumps across the Arctic; 2) use of AI to detect changes in surface water extent and lake drainage events across the Arctic; and 3) use of AI in mechanistic process models that help us understand the global source/sink tradeoff of permafrost loss and its impact on climate.
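
To make the scoping question concrete, here is a minimal sketch of how such a researcher might check which Indigenous territories a model's study area intersects. It assumes geopandas is installed and that two hypothetical local files exist: a polygon layer of territories (for example, exported from a source such as Native Land Digital) and a study-area polygon. The file names and the `Name` attribute are illustrative assumptions, not a prescribed workflow.

```python
import geopandas as gpd

# Hypothetical inputs: a polygon layer of Indigenous territories
# (e.g., exported from Native Land Digital) and the model's study area.
territories = gpd.read_file("indigenous_territories.geojson")
study_area = gpd.read_file("study_area.geojson")

# Reproject to a shared coordinate reference system before intersecting.
territories = territories.to_crs(study_area.crs)

# Spatial join: keep only the territories the study area intersects.
overlap = gpd.sjoin(territories, study_area, how="inner", predicate="intersects")

# "Name" is a placeholder column; the real attribute depends on the source data.
print(overlap["Name"].unique())
```

Surfacing these overlaps before any training run makes the Collective Benefit and Authority to Control questions in CARE concrete: the researcher knows which communities a model footprint touches and can plan engagement accordingly.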

::: {.callout-tip}
-**Discussion:**
+## Discussion Questions
+
+Divide into groups, find a comfortable place to sit, and pick a large-scale AI application that is of interest to the group. Let's discuss some of the following questions:

-1. Where are the two areas bias could enter this scenario?
-2. Are there any ways that this bias could be avoided?
-3. While this example is specific to images, can you think of any room for bias in your research?
+1. Thinking of CARE, does that model provide Collective Benefit to Indigenous populations that it might impact?
+2. Thinking of FAST, what would researchers need to do to ensure that their research process could meet the goals of the four categories of Fairness (Outcome Fairness, Data Fairness, Design Fairness, and Implementation Fairness) for Indigenous people in their research area?