Merged
38 commits
6a5a37a: docs - fixed typos and made small adjustments (Jun 14, 2024)
876b39a: add my democracy-sim model, server version in run.py rudimentary work… (Jun 21, 2024)
ae43861: depiction of colors and count values of agents in cells work (Jun 24, 2024)
80fe2c0: Areas implemented (adjustable with borders being drawn if needed) - e… (Jul 5, 2024)
c7cdd93: improved the initial color distribution to make it less homogeneous a… (Jul 5, 2024)
f12c204: improved setup - color distribution is now not uniform and areas calc… (Jul 6, 2024)
9b20b0a: adapted the color distribution again, such that the randomness appears… (Jul 8, 2024)
fdea74b: implemented distance functions (kendall tau & spearman) with unit tes… (Jul 19, 2024)
0a947ff: started to implement the election but many things are needed for it -… (Jul 21, 2024)
0a9ca44: continued with adding relevant functionality for the 'conduct_electio… (Jul 26, 2024)
8ba6549: Added more unit-tests and improved comments and doc-strings (Aug 14, 2024)
51c03e7: added majority_rule but tie-breaking is not satisfactory yet (Aug 21, 2024)
c84807a: Completed implementing majority rule including unit tests (Aug 26, 2024)
b6094ae: Continued implementing 'conduct_election'. Split up unit-testing for … (Sep 15, 2024)
86968a9: conduct_election is implemented - needs to be tested thoroughly, star… (Sep 23, 2024)
3d686da: color_distribution_chart implemented (works only for four colors so far) (Sep 26, 2024)
6ef3e95: color_distribution_chart works now with all colors - set max colors t… (Sep 26, 2024)
a76aa7d: minor code fixes / typos and cleaning up (Sep 26, 2024)
8a0f660: implemented (global) Gini-Index statistics in charts (Sep 26, 2024)
9a4667f: implemented more stats, fixed typos, plan to merge the two schedulers… (Oct 9, 2024)
33d2db8: started major cleanup - removed schedulers in favor of a single custo… (Oct 16, 2024)
17f7ed2: fixed the way voting agents are added to the system + streamlined the… (Oct 16, 2024)
7da602a: changed grid type to SingleGrid and some minor changes (Oct 17, 2024)
700fc71: mutation of color cells according to election results implemented (Oct 21, 2024)
da4d574: Fixed unit tests, added global area, added feedback of election to ce… (Oct 28, 2024)
76b7114: introduced an election_impact_on_mutation factor that steers the impa… (Oct 31, 2024)
87383b0: introduced a mutation rate variable mu to the system (Nov 1, 2024)
75c1ea0: started to implement a normal-distribution among agent personalities (Nov 6, 2024)
caf72a6: Fix major confusion in ranking logic in social welfare functions AND … (Nov 28, 2024)
b4e092d: combined the creation of the two area-stats overviews into one render… (Nov 30, 2024)
546f83e: changed the area stats to be depicted side by side for each area for … (Nov 30, 2024)
7786c9a: added a plot to view personality-distributions per area, removed num_… (Dec 2, 2024)
471be43: changed the approval logic to get from std pref-rel to approval_votin… (Dec 9, 2024)
dd0c79d: added known_cells system var, implemented a randomized update of know… (Jan 10, 2025)
e9bf681: added concept description to docs (Jan 23, 2025)
c633a62: update docs further (Mar 14, 2025)
276fda6: worked on docs - major improve - not entirely finished (Mar 21, 2025)
9acdade: update README (Jun 15, 2025)
28 changes: 28 additions & 0 deletions .gitignore
@@ -0,0 +1,28 @@
.*
!.gitignore
/.idea
.DS_Store
__pycache__/
*.ipynb
/examples
/starter_model
/mesa
site/
sorted-out-tests
/benchmarks
/notes
/docs/work_in_progress_exclude
# short term:
Dockerfile
docker-compose.yml
Singularity.def
/app.py
/main.py
templates
/docs/images/CI-images
.coverage*
*.cache
ai_info.txt
convert_docstrings.py
TODO.txt
democracy_sim/simulation_output
81 changes: 30 additions & 51 deletions README.md
@@ -1,6 +1,6 @@
[![Pages](https://github.com/jurikane/DemocracySim/actions/workflows/ci.yml/badge.svg)](https://jurikane.github.io/DemocracySim/)
[![pytest main](https://github.com/jurikane/DemocracySim/actions/workflows/python-app.yml/badge.svg?branch=main)](https://github.com/jurikane/DemocracySim/actions/workflows/python-app.yml)
[![codecov](https://codecov.io/gh/jurikane/DemocracySim/graph/badge.svg?token=QVNSXWIGNE)](https://codecov.io/gh/jurikane/DemocracySim)
[![codecov](https://codecov.io/gh/jurikane/DemocracySim/branch/main/graph/badge.svg)](https://codecov.io/gh/jurikane/DemocracySim)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

[//]: # ([![pytest dev](https://github.com/jurikane/DemocracySim/actions/workflows/python-app.yml/badge.svg?branch=dev)](https://github.com/jurikane/DemocracySim/actions/workflows/python-app.yml))
@@ -18,53 +18,32 @@ This project is kindly supported by [OpenPetition](https://osd.foundation).

For details see the [documentation](https://jurikane.github.io/DemocracySim/) on GitHub-pages.

## Short overview (translated from German)

• Multi-agent simulation
- the subject of study is the agents' participation in elections
- the effect of different voting procedures on participation
- the evolution of participation over time (changes in the environment, changes in the wealth distribution, ...)

• Environment:
- grid structure without borders
- each square in the grid is a field; a set of (connected) fields is an area
- each field in the grid has one of four colors (r, g, b, w); this changes with a certain
"mutation rate", and the change depends on the results of the last group decision
over an area that includes the field (i.e. the environment reacts to the group decisions)

• Agents
- intelligence: top-down approach, i.e. the agents receive a simple AI through training
(based on decision trees so that their behavior remains interpretable)
- have a certain budget (motivation)
- decisions (agents can):
- explore surrounding fields ("educate themselves") - costs
- participate in elections - costs
- wait - low cost
- "agent personality": each agent holds two preference relations over the colors r, g, and b
- ⇨ 15 agent types (random and normally distributed)
- these affect the rewards derived from voting outcomes

• Elections
- the vote is on the frequency distribution of the 4 colors in the election area (the objective
truth is meant to simulate a "wise group decision")
- the reward is paid out to all agents in the election area:
- the closer (Kendall tau distance) the elected distribution (election result) lies to the true distribution,
the larger the reward
- one half of the reward goes to all agents in equal shares
- the second half is paid out according to "agent personality" (see above)

- ⇨ voting agents face a dilemma:
- vote for what the distribution presumably really is (a good decision for everyone, to the best of their knowledge)
- or vote rather selfishly, so that now and in the future as much as possible goes to the agent itself
(note: the result also influences the future distribution in the area)

Interesting questions:
- do the different voting procedures make a difference, and if so, which?
- how do agent types behave that are in the minority/majority?
- what are the long-term effects of (non-)participation?
- how does the distribution of wealth affect participation, and vice versa?
- which patterns emerge in the areas (local, regional, global)?
Also interesting:
- what happens if the group of voting agents is selected at random
("citizens' assemblies", i.e. free or compensated participation of x% of all agents)?
(do decisions become "better", how does wealth get distributed, ...)
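
The two-part reward rule described above (one half split equally, the other half by agent personality) can be illustrated with a minimal sketch; `distribute_reward` and the weighting scheme are hypothetical names for illustration, not part of the model code:

```python
import numpy as np

def distribute_reward(total_reward, personality_weights):
    """Hypothetical sketch: half split equally, half by personality weight."""
    w = np.asarray(personality_weights, dtype=float)
    n = w.size
    equal_share = np.full(n, total_reward / 2 / n)
    personality_share = (total_reward / 2) * w / w.sum()
    return equal_share + personality_share

payouts = distribute_reward(12.0, [1.0, 1.0, 2.0])
print(payouts)  # equal part: 2.0 each; weighted part: 1.5, 1.5, 3.0
```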
## Overview

**DemocracySim** is a multi-agent simulation framework designed to study democratic participation
and group decision-making in a dynamic, evolving environment.
Agents interact within a grid-based world, form beliefs about their surroundings,
and vote in elections that influence both their individual outcomes and the state of the system.

The environment consists of a toroidal grid of colored fields, where neighboring cells form territories.
Each territory holds regular elections in which agents vote on the observed color distribution.
The results of these elections not only influence how agents are rewarded
but also shape the environment itself through controlled mutation processes.

Agents have limited resources and face decisions about whether to participate in elections or remain inactive.
Each agent belongs to a personality type defined by preferences over the possible field colors,
with types distributed to create majority and minority dynamics.
During elections, agents face a strategic trade-off between voting for what benefits them personally
and voting for what they believe to be the most accurate representation of their territory—decisions
that impact both immediate rewards and the system’s future state.

The simulation tracks a range of metrics including participation rates, collective accuracy,
reward inequality (Gini index), and behavioral indicators such as altruism and diversity of expressed opinions.
**DemocracySim** also allows for the evaluation of group performance under different normative goals—utilitarian,
egalitarian, or Rawlsian—by comparing actual outcomes to theoretically optimal decisions.

By modeling participation dilemmas, reward mechanisms, and personality-driven behavior,
**DemocracySim** provides a controlled environment for investigating how democratic systems
respond to different institutional rules and individual incentives.
It is intended both as a research tool and as a foundation for future explorations into deliberation, representation,
and fairness in collective choice.
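
The Gini index mentioned above can be computed directly from the agents' rewards. A minimal sketch (the function name and the sorted cumulative-sum formula are my own choices, not taken from the model code):

```python
import numpy as np

def gini_index(values):
    """Gini index via the sorted cumulative-sum formula (0 = perfect equality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini_index([1, 1, 1, 1]))  # 0.0 (perfect equality)
print(gini_index([0, 0, 0, 1]))  # 0.75 (highly unequal)
```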
Empty file added democracy_sim/__init__.py
Empty file.
86 changes: 86 additions & 0 deletions democracy_sim/app.py
@@ -0,0 +1,86 @@
import mesa  # needed for mesa.visualization.CanvasGrid below
from mesa.experimental import JupyterViz, make_text, Slider
import solara
from model_setup import *
# Data visualization tools.
from matplotlib.figure import Figure


def get_agents_assets(model: ParticipationModel):
"""
Return a text listing of all agents' assets (wealth).
"""
all_assets = list()
# Collect each agent's assets
for agent in model.voting_agents:
all_assets.append(agent.assets)
return f"Agents wealth: {all_assets}"


def agent_portrayal(agent: VoteAgent):
# Construct and return the portrayal dictionary
portrayal = {
"size": agent.assets,
"color": "tab:orange",
}
return portrayal


def space_drawer(model, agent_portrayal):
fig = Figure(figsize=(8, 5), dpi=100)
ax = fig.subplots()

# Set plot limits and aspect
ax.set_xlim(0, model.grid.width)
ax.set_ylim(0, model.grid.height)
ax.set_aspect("equal")
ax.invert_yaxis() # Match grid's origin

fig.tight_layout()

return solara.FigureMatplotlib(fig)


model_params = {
"height": grid_rows,
"width": grid_cols,
"draw_borders": False,
"num_agents": Slider("# Agents", 200, 10, 9999999, 10),
"num_colors": Slider("# Colors", 4, 2, 100, 1),
"color_adj_steps": Slider("# Color adjustment steps", 5, 0, 9, 1),
"heterogeneity": Slider("Color-heterogeneity factor", color_heterogeneity, 0.0, 0.9, 0.1),
"num_areas": Slider("# Areas", num_areas, 4, min(grid_cols, grid_rows)//2, 1),
"av_area_height": Slider("Av. Area Height", area_height, 2, grid_rows//2, 1),
"av_area_width": Slider("Av. Area Width", area_width, 2, grid_cols//2, 1),
"area_size_variance": Slider("Area Size Variance", area_var, 0.0, 1.0, 0.1),
}


# The two portrayals below previously redefined (and shadowed) the
# agent_portrayal defined above; they are kept under distinct names instead.
def participation_portrayal(agent):
portrayal = participation_draw(agent)
if portrayal is None:
return {}
else:
return portrayal

def circle_portrayal(agent):
portrayal = {
"Shape": "circle",
"Color": "red",
"Filled": "true",
"Layer": 0,
"r": 0.5,
}
return portrayal

grid = mesa.visualization.CanvasGrid(circle_portrayal, 10, 10, 500, 500)


page = JupyterViz(
ParticipationModel,
model_params,
#measures=["wealth", make_text(get_agents_assets),],
agent_portrayal=agent_portrayal,
#agent_portrayal=participation_draw,
#space_drawer=space_drawer,
)
page # noqa
168 changes: 168 additions & 0 deletions democracy_sim/distance_functions.py
@@ -0,0 +1,168 @@
from math import comb
import numpy as np
from numpy.typing import NDArray
from typing import TypeAlias

FloatArray: TypeAlias = NDArray[np.float64]


def kendall_tau_on_ranks(rank_arr_1, rank_arr_2, search_pairs, color_vec):
"""
DON'T USE
(don't use this for orderings!)

This function calculates the Kendall tau distance between two rank vectors.
(The Kendall tau rank distance is a metric that counts the number
of pairwise disagreements between two ranking lists.
The larger the distance, the more dissimilar the two lists are.
Kendall tau distance is also called bubble-sort distance).
Rank vectors hold the rank of each option (option = index).
Not to be confused with an ordering (or sequence) where the vector
holds options and the index is the rank.

Args:
rank_arr_1: First (NumPy) array containing the ranks of each option
rank_arr_2: The second rank array
search_pairs: The pairs of indices (for efficiency)
color_vec: The vector of colors (for efficiency)

Returns:
The Kendall tau distance
"""
# Get the ordering (option names being 0 to length)
ordering_1 = np.argsort(rank_arr_1)
ordering_2 = np.argsort(rank_arr_2)
# print("Ord1:", list(ordering_1), " Ord2:", list(ordering_2))
# Create the mapping array
mapping_array = np.empty_like(ordering_1) # Empty array with same shape
mapping_array[ordering_1] = color_vec # Fill the mapping
# Use the mapping array to rename elements in ordering_2
renamed_arr_2 = mapping_array[ordering_2]  # Uses NumPy's advanced indexing
# print("Ren1:",list(range(len(color_vec))), " Ren2:", list(renamed_arr_2))
# Count inversions using precomputed pairs
kendall_distance = 0
# inversions = []
for i, j in search_pairs:
if renamed_arr_2[i] > renamed_arr_2[j]:
# inversions.append((renamed_arr_2[i], renamed_arr_2[j]))
kendall_distance += 1
# print("Inversions:\n", inversions)
return kendall_distance


def unnormalized_kendall_tau(ordering_1, ordering_2, search_pairs):
"""
This function calculates the Kendall tau distance of two orderings.
An ordering holds the option names in the order of their rank (rank=index).

Args:
ordering_1: First (NumPy) array containing ranked options
ordering_2: The second ordering array
search_pairs: Containing search pairs of indices (for efficiency)

Returns:
The unnormalized Kendall tau distance
"""
# Rename the elements to reduce the problem to counting inversions
mapping = {option: idx for idx, option in enumerate(ordering_1)}
renamed_arr_2 = np.array([mapping[option] for option in ordering_2])
# Count inversions using precomputed pairs
kendall_distance = 0
for i, j in search_pairs:
if renamed_arr_2[i] > renamed_arr_2[j]:
kendall_distance += 1
return kendall_distance


def kendall_tau(ordering_1, ordering_2, search_pairs):
"""
This calculates the normalized Kendall tau distance of two orderings.
The Kendall tau rank distance is a metric that counts the number
of pairwise disagreements between two ranking lists.
The larger the distance, the more dissimilar the two lists are.
Kendall tau distance is also called bubble-sort distance.
An ordering holds the option names in the order of their rank (rank=index).

Args:
ordering_1: First (NumPy) array containing ranked options
ordering_2: The second ordering array
search_pairs: Containing the pairs of indices (for efficiency)

Returns:
The normalized Kendall tau distance (in [0, 1])
"""
# TODO: remove these tests (comment out) on actual simulations to speed up
n = ordering_1.size
if n > 0:
expected_arr = np.arange(n)
assert (np.array_equal(np.sort(ordering_1), expected_arr)
and np.array_equal(np.sort(ordering_2), expected_arr)) , \
f"Error: Sequences {ordering_1}, {ordering_2} aren't comparable."

# Get the unnormalized Kendall tau distance
dist = unnormalized_kendall_tau(ordering_1, ordering_2, search_pairs)
# Maximum possible Kendall tau distance
max_distance = comb(n, 2) # This is n choose 2, or n(n-1)/2
# Normalize the distance
normalized_distance = dist / max_distance

return normalized_distance


def spearman_distance(rank_arr_1, rank_arr_2):
"""
Beware: don't use this for orderings!

This function calculates the Spearman distance between two rank vectors.
Spearman's foot rule is a measure of the distance between ranked lists.
It is given as the sum of the absolute differences between the ranks
of the two lists.
This function is meant to work with numeric values as well.
Hence, we only assume the rank values to be comparable (e.g. normalized).

Args:
rank_arr_1: First (NumPy) array containing the ranks of each option
rank_arr_2: The second rank array

Returns:
The Spearman distance
"""
# TODO: remove these tests (comment out) on actual simulations
assert rank_arr_1.size == rank_arr_2.size, \
"Rank arrays must have the same length"
if rank_arr_1.size > 0:
assert (rank_arr_1.min() == rank_arr_2.min()
and rank_arr_1.max() == rank_arr_2.max()), \
f"Error: Sequences {rank_arr_1}, {rank_arr_2} aren't comparable."
return np.sum(np.abs(rank_arr_1 - rank_arr_2))


def spearman(ordering_1, ordering_2, _search_pairs=None):
"""
This calculates the normalized Spearman distance between two orderings.
Spearman's foot rule is a measure of the distance between ranked lists.
It is given as the sum of the absolute differences between the ranks
of the two orderings (values from 0 to n-1 in any order).

Args:
ordering_1: The first (NumPy) array containing ranked options.
ordering_2: The second ordering array.
_search_pairs: Intentionally unused (kept for signature
compatibility with kendall_tau).

Returns:
The normalized Spearman footrule distance (in [0, 1])
"""
# TODO: remove these tests (comment out) on actual simulations to speed up
n = ordering_1.size
if n > 0:
expected_arr = np.arange(n)
assert (np.array_equal(np.sort(ordering_1), expected_arr)
and np.array_equal(np.sort(ordering_2), expected_arr)) , \
f"Error: Sequences {ordering_1}, {ordering_2} aren't comparable."
distance = np.sum(np.abs(ordering_1 - ordering_2))
# Normalize by the maximum possible footrule distance, floor(n^2 / 2)
if n % 2 == 0:  # Even number of elements
max_dist = n**2 / 2
else:  # Odd number of elements
max_dist = (n**2 - 1) / 2
return distance / max_dist
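
As a self-contained illustration of the inversion-counting idea used by `unnormalized_kendall_tau` and `kendall_tau` above (a sketch with a simplified signature that precomputes no search pairs, not the module's actual API):

```python
from itertools import combinations
import numpy as np

def normalized_kendall_tau(ordering_1, ordering_2):
    # Rename options so ordering_1 becomes the identity permutation,
    # then count inversions in the renamed second ordering.
    mapping = {option: idx for idx, option in enumerate(ordering_1)}
    renamed = np.array([mapping[option] for option in ordering_2])
    inversions = sum(
        1 for i, j in combinations(range(renamed.size), 2)
        if renamed[i] > renamed[j]
    )
    n = len(ordering_1)
    return inversions / (n * (n - 1) / 2)  # normalize by n choose 2

print(normalized_kendall_tau([0, 1, 2, 3], [3, 2, 1, 0]))  # 1.0 (full reversal)
print(normalized_kendall_tau([0, 1, 2, 3], [0, 1, 3, 2]))  # one adjacent swap
```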