Open educational resource, workbook, and workshop.
AI tools increasingly support all stages of research projects. At the same time, their use is constrained by ethical principles, research-integrity standards, and governance (legal and regulatory) requirements. This workshop introduces these constraints, presents the risks of AI use in research, and provides practical strategies for mitigating them across the entire research lifecycle. Both unintentional harms (AI safety) and deliberate threats (AI security) will be addressed. Participants will learn how to use AI tools safely, securely, and in compliance with ethical, integrity, and governance expectations.
- INTRODUCTION
- PART 1: Theory
- 1.1 Freedom of research under ethical, integrity, and governance (legal and regulatory) constraints
- 1.2 AI risks, AI safety, and AI security
- 1.3 Risk management
- PART 2: Practice
- 2.1 Risk management framework adapted to the trinity of risks (ethical, integrity, and governance)
- 2.2 Hands-On: Use cases of AI and risk management across the research lifecycle. Plan & design, collect & create, analyze & collaborate, evaluate & archive, share & disseminate, access & reuse.
- 2.3 AI policies and checklists for research groups and research projects
- SUMMARY
Shigapov, R. (2025, December 15). Safe and secure use of AI in research projects. Zenodo. https://doi.org/10.5281/zenodo.17940943
To export the Jupyter Book to PDF, run `jupyter book build --pdf`. The resulting PDF will be saved as `book/safe_ai.pdf`.
This repository is licensed under the CC BY 4.0 license unless otherwise noted.