A portal to data sources for Cosmograph

To install: `pip install cosmodata`
This repository contains datasets for various projects, each prepared for visualization and analysis using Cosmograph. The raw data consists of structured information from sources like academic publications, GitHub repositories, political debates, and Spotify playlists. The prepared datasets feature embeddings and 2D projections that enable scatter and force-directed graph visualizations.
- Raw Data: Academic publications metadata from the EuroVis conference, including titles, abstracts, authors, and awards.
- Prepared Data: `merged_artifacts.parquet` (5,599 rows, 18 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x`, `y`
  - Point Size: `n_tokens` (number of tokens in the abstract)
  - Color: Cluster labels (`cluster_05`, `cluster_08`, etc.)
  - Label: `title`

- Raw Data: Transcript of a political debate between Kamala Harris and Donald Trump.
- Prepared Data: `harris_vs_trump_debate_with_extras.parquet` (1,141 rows, 21 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `tsne__x`, `tsne__y`, `pca__x`, `pca__y`
  - Point Size: `certainty`
  - Color: `speaker_color`
  - Label: `text`

- Raw Data: Metadata on popular songs from various playlists, including holiday songs and the greatest 500 songs.
- Prepared Data: `holiday_songs_spotify_with_embeddings.parquet` (167 rows, 27 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `umap_x`, `umap_y`, `tsne_x`, `tsne_y`
  - Point Size: `popularity`
  - Color: `genre` (derived from playlist)
  - Label: `track_name`

- Raw Data: Collection of 1,638 famous quotes.
- Prepared Data: `micheleriva_1638_quotes_planar_embeddings.parquet` (1,638 rows, 3 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x`, `y`
  - Label: `quote`

- Raw Data: Data related to prompt injection attacks and defenses.
- Prepared Data: `prompt_injection_w_umap_embeddings.tsv` (662 rows, 6 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x`, `y`
  - Point Size: `size`
  - Color: `label`
  - Label: `text`

- Raw Data: Conversations from AI chat systems.
- Prepared Data: `lmsys_with_planar_embeddings_pca500.parquet` (2,835,490 rows, 38 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x_umap`, `y_umap`
  - Point Size: `num_of_tokens`
  - Color: `model`
  - Label: `content`
- Related code file: `lmsys_ai_conversations.py`
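This prepared file is large (about 2.8 million rows across 38 columns), so it usually pays to read only the columns you intend to visualize. A minimal sketch using pandas; the file path is a placeholder for wherever you obtained the parquet file:

```python
import pandas as pd

# Placeholder path; substitute the actual location of the prepared file.
path = "lmsys_with_planar_embeddings_pca500.parquet"

# Read only the columns suggested above instead of all 38.
cols = ["x_umap", "y_umap", "num_of_tokens", "model", "content"]
df = pd.read_parquet(path, columns=cols)
```
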
- Raw Data: Human Connectome Project (HCP) publications and citation networks.
- Prepared Data: `aggregate_titles_embeddings_umap_2d_with_info.parquet` (340,855 rows, 9 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x`, `y`
  - Point Size: `n_cits` (citation count)
  - Color: `main_field` (research domain)
  - Label: `title`
- Related code file: `hcp.py`

- Raw Data: GitHub repository metadata, including stars, forks, programming languages, and repository descriptions, sourced from a Kaggle dataset.
- Prepared Data: `github_repositories.parquet` (3,065,063 rows, 28 columns)
- Potential columns for visualization:
  - X & Y Coordinates: `x`, `y`
  - Point Size: `stars` (star count), `forks`
  - Color: `primaryLanguage`
  - Label: `nameWithOwner`
- Related code file: `github_repos.py`
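At over three million repositories, this dataset is usually easier to explore after filtering. A hypothetical sketch (the file path and the star threshold are placeholders, not part of the dataset's documentation):

```python
import pandas as pd

# Placeholder path; substitute the actual location of the prepared file.
df = pd.read_parquet("github_repositories.parquet")

# Keep only reasonably popular repositories to thin out the point cloud.
popular = df[df["stars"] >= 100]

# Columns suggested above for a scatter visualization.
points = popular[["x", "y", "stars", "primaryLanguage", "nameWithOwner"]]
print(points.head())
```
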
- Load the prepared `.parquet` files into a Pandas DataFrame.
- Use Cosmograph or another visualization tool to create scatter or force-directed plots (see the sketch below).
- Customize the x/y coordinates, size, color, and labels based on your analysis needs.
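For example, here is a minimal sketch of this workflow using the EuroVis dataset's suggested columns. The file path is a placeholder, and a plain matplotlib scatter stands in for Cosmograph:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder path; substitute wherever you obtained merged_artifacts.parquet.
df = pd.read_parquet("merged_artifacts.parquet")

# Map the suggested columns onto a scatter plot: x/y for coordinates,
# n_tokens for point size (scaled down arbitrarily), cluster labels for color.
colors = pd.factorize(df["cluster_05"])[0]
plt.scatter(df["x"], df["y"], s=df["n_tokens"] / 50, c=colors, cmap="tab20", alpha=0.6)
plt.title("EuroVis publications (2D projection)")
plt.show()
```
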
- The data has been curated and prepared by Thor Whalen and contributors.
- Data sources include Kaggle, Hugging Face, GitHub, and various public datasets.
For further details, please refer to the individual dataset documentation or the linked preparation scripts.
cosmodata also includes utilities to make working with data in notebooks (especially Colab) seamless. The sections below show quick, copy-pasteable examples, highlight key features such as caching and local/Colab auto-detection, walk through a typical notebook workflow, and cover cache management for power users.
`ensure_installed` installs packages only when needed, with smart local/Colab detection.

Note: most of the time you can just run `%pip install -q <packages>` in your notebook, but if you want to ask the user for permission first (which I like doing), or need to ensure installation from Python itself, this helper can be useful.
```python
from cosmodata import ensure_installed

# Simple: space-separated package names
ensure_installed('graze tabled pandas')

# With version requirements
ensure_installed('graze>=0.1.0 tabled pandas<2.0')

# Handle import/pip name mismatches
ensure_installed('PIL cv2', pip_names={'PIL': 'Pillow', 'cv2': 'opencv-python'})
```

Behavior:
- In Colab: Auto-installs missing packages silently
- Locally: Shows what will be installed and asks for confirmation (default: Yes)
- Smart: Only installs if a package is missing or its installed version doesn't satisfy the requirement
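The "only when needed" part boils down to comparing installed versions against the requested specifiers. Here is a rough sketch of that kind of check (not cosmodata's actual implementation), using `importlib.metadata` and `packaging`:

```python
from importlib.metadata import version, PackageNotFoundError
from packaging.requirements import Requirement

def needs_install(spec: str) -> bool:
    """Return True if the requirement is not satisfied in the current environment."""
    req = Requirement(spec)
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        return True  # not installed at all
    # An empty specifier (no version constraint) is satisfied by any installed version.
    return not req.specifier.contains(installed, prereleases=True)

print(needs_install("pandas>=1.0"))  # False if a recent pandas is installed
```
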
`acquire_data` loads data from URLs or files with automatic caching. It works seamlessly in Colab (Google Drive) and locally.
```python
import requests

from cosmodata import acquire_data

# Load CSV from URL (cached automatically)
df = acquire_data('https://example.com/data.csv')

# Custom getter for APIs
data = acquire_data(
    'https://api.example.com/endpoint',
    getter=lambda url: requests.get(url).json(),
    cache_key='api_data'
)

# Force refresh cached data
df = acquire_data(url, refresh=True)

# Custom cache location
df = acquire_data(url, cache_dir='/path/to/cache')
```

Features:
- Auto-caching:
  - Colab: Saves to Google Drive (`MyDrive/.colab_cache`) for persistence across sessions
  - Local: Saves to `~/.local/share/cosmodata/datasets`
- Smart getters: Auto-detects the appropriate loader (graze → tabled → requests)
- Refresh support: Bypass the cache with `refresh=True`
- Format support: Handles CSV, JSON, Excel, Parquet, etc. (via `tabled`)
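In practice, the first call to `acquire_data` downloads and caches the file, later calls (even in new sessions) are served from the cache, and `refresh=True` forces a re-download. A minimal sketch with a placeholder URL:

```python
from cosmodata import acquire_data

# Placeholder URL; replace with a real dataset location.
url = 'https://example.com/holiday_songs_spotify_with_embeddings.parquet'

df = acquire_data(url)                # first call: downloads and caches
df = acquire_data(url)                # subsequent calls: served from the cache
df = acquire_data(url, refresh=True)  # bypass the cache and re-download
```
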
Typical notebook workflow:
```python
# Cell 1: Setup
!pip install cosmodata
from cosmodata import ensure_installed, acquire_data
ensure_installed('graze tabled pandas')

# Cell 2: Load data (fast on subsequent runs)
df = acquire_data('https://example.com/large_dataset.csv', cache_key='my_dataset')

# Cell 3: Your analysis
df.head()
```

Cache management:
```python
# See where data is cached
import os
from pathlib import Path

# In Colab
cache_dir = Path('/content/drive/MyDrive/.colab_cache')

# Locally
cache_dir = Path('~/.local/share/cosmodata/datasets').expanduser()

# List cached files
list(cache_dir.glob('*.pkl'))

# Clear a specific cached dataset
os.remove(cache_dir / 'my_dataset.pkl')
```

Pro tip: Combine both utilities for the smoothest notebook experience:
```python
from cosmodata import ensure_installed, acquire_data

# One-time setup
ensure_installed('graze tabled requests')
import requests  # safe to import now that ensure_installed has run

# Now your data loading "just works" with caching
df1 = acquire_data('https://example.com/data.csv')
df2 = acquire_data('local_file.parquet')
api_data = acquire_data(
    'https://api.example.com/data',
    getter=lambda u: requests.get(u).json()
)
```