A conceptual gambit in collaboration with AI /// Pre-Production Release
Version: Dynamically fetched from GitHub Releases (current: v0.10.7)
Built from first principles and drawing on 30 years' experience scaling laboratory processes. Constructed with as few object-model shortcuts as I could manage (I believe these shortcuts are among the main reasons LIMS nearly universally disappoint). Supports both arbitrary and prescribed interacting objects. Intended for use by small- to factory-scale laboratories, in regulated environments, for both research and operations use cases. Bloom can handle the many areas LIS tend to touch: accessioning, lab processes, specimen/sample management, equipment, regulatory and compliance.
- Spoilers (Screenshots)
- Executive Summary
- Features
- Installation
- System Architecture
- Core Data Model
- Database Schema
- Object Hierarchy
- Template System
- Workflow Engine
- Action System
- File Management
- API Layer
- Web Interface
- External Integrations
- Configuration
- Deployment
- Testing
- Regulatory & Compliance
- Design Principles
- Dev Tools
- Support, Authors & License
bloom early peeks
and flexible whitelisting, etc...
bloom natively supports arbitrarily defined labware; a 96-well plate is just one example. Anything that nested arrays of arrays can describe can be configured as a type of labware with next to no effort (see the sketch below)!
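For instance, here is a minimal sketch (illustrative only, not Bloom's actual template loader; the helper name is made up) of describing plates of any geometry as nested arrays of well addresses:

from string import ascii_uppercase

def make_plate_layout(n_rows, n_cols):
    """Describe labware as an array of row arrays of well addresses."""
    return [
        [{"name": f"{ascii_uppercase[r]}{c + 1}", "row": ascii_uppercase[r], "col": str(c + 1)}
         for c in range(n_cols)]
        for r in range(n_rows)
    ]

layout_96 = make_plate_layout(8, 12)    # a 96-well plate
layout_384 = make_plate_layout(16, 24)  # a 384-well plate, same code path
print(layout_96[0][0])  # {'name': 'A1', 'row': 'A', 'col': '1'}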

Package receipt -> kit registration (multiple) -> specimen registration (multiple) -> requisition capture & association -> adding specimens to assay queues. FedEx tracking details are fetched, and barcode printing is available.
Managing all object relationships, tracking all details, printing labels, etc.
BLOOM (Bioinformatics Laboratory Operations and Object Management) is a Laboratory Information Management System (LIMS) designed for managing laboratory workflows, sample tracking, and data management. The system is built on a flexible, template-driven architecture that allows laboratories to define custom object types, workflows, and actions without code changes.
- Template-driven object creation: All laboratory objects (containers, samples, workflows) are created from JSON templates
- Hierarchical lineage tracking: Parent-child relationships between all objects with full audit trail
- Flexible workflow engine: Configurable multi-step workflows with queue management
- Action system: Extensible action framework for object state transitions and operations
- File management: S3-compatible file storage with metadata tracking
- Barcode/label printing: Integration with Zebra label printers via zebra_day
- FedEx tracking: Package tracking integration via fedex_tracking_day
- Multi-interface support: FastAPI REST API plus standard and admin user web pages
- Language: Python 3.12+
- Database: PostgreSQL 15+ (via SQLAlchemy ORM)
- Web Frameworks: FastAPI (primary)
- Storage: AWS S3 / Supabase Storage
- Authentication: Supabase Auth (OAuth2 with social providers)
- Label Printing: zebra_day library
- Package Tracking: fedex_tracking_day library
- Validation: Pydantic v2 with pydantic-settings
- Migrations: Alembic
- Domain-Driven Architecture: Clean separation into 8 domain modules (bloom_lims/domain/)
- Database Migrations: Alembic integration with baseline migration (bloom_lims/migrations/)
- Pydantic Schema Validation: 10 schema modules for comprehensive input validation (bloom_lims/schemas/)
- Structured Exception Handling: Typed exception hierarchy (bloom_lims/exceptions.py, bloom_lims/core/exceptions.py)
- Session Management: Context managers, _TransactionContext, proper rollback in BLOOMdb3
- API Versioning: /api/v1/ prefix structure with version negotiation (bloom_lims/api/versioning.py)
- Health Check Endpoints: Kubernetes-ready probes at /health, /health/live, /health/ready, /health/metrics
- Dynamic Version Management: Version pulled from GitHub releases (bloom_lims/_version.py)
- Template-Driven Object Creation: All objects created from JSON templates without code changes
- Hierarchical Lineage Tracking: Full parent-child relationships with comprehensive audit trail
- Multi-Step Workflow Engine: Configurable workflows with queue management
- Action System: Extensible framework for object state transitions and operations
- Operational Workflows: Accessioning → Plasma Isolation → DNA Extraction → Quant pipeline
- OAuth2/Supabase Authentication: Enterprise-grade auth with Google, GitHub, and social providers
- Domain Whitelisting: Flexible access control configuration
- JWT Token Validation: Secure API authentication
- S3-Compatible Storage: AWS S3 and Supabase Storage support
- File Sets: Grouping related files with metadata tracking
- Dewey File Manager: Organized file intake, storage, and retrieval system
- Zebra Label Printing: Full barcode printing via zebra_day library
- FedEx Tracking: Package tracking integration via fedex_tracking_day
- Graph Visualization: Cytoscape integration for complex relationship exploration
- Cross-Platform CI/CD: GitHub Actions for macOS, Ubuntu, CentOS
- Comprehensive Logging: Structured logging with rotation
- CLI Tools: bloom-backup and bloom command-line interfaces
- Interactive Shell: bloom_shell.py for development
- Caching Layer Integration: Redis/memcached distributed caching backend (bloom_lims/core/cache_backends.py)
- Async Operations: Non-blocking operations for high-throughput automation
- Rate Limiting: API request limiting middleware
- Batch Operations: Bulk processing API endpoints
- Read Replicas: Database scaling with read replicas (bloom_lims/core/read_replicas.py)
- Plugin Architecture: Custom extensions without core code changes
- Workflow Orchestration: Airflow/Prefect integration for automation
- Enhanced Reporting/Analytics: Built-in insights and dashboards
- Mobile/Tablet Optimization: Responsive lab-friendly interface
- GraphQL API: Flexible queries for complex many-to-many relationships
- Multi-Tenancy Support: Schema-per-tenant isolation
- Secrets Management: HashiCorp Vault / AWS Secrets Manager integration
- Observability Stack: OpenTelemetry, Prometheus metrics, distributed tracing
- Development Containers: devcontainer configuration for consistent environments
- Template Library Expansion: More out-of-box templates for common lab workflows
- User Documentation: Comprehensive guides and tutorials
- Contributor Guide: Documentation for community contributions
%%{init: {
"flowchart": {"defaultRenderer": "elk"}
}}%%
flowchart TB
subgraph BLOOM["BLOOM LIMS"]
subgraph Presentation["Presentation Layer"]
FastAPI["FastAPI API<br/>(Port 8000)"]
CLI["CLI Tools"]
end
subgraph Business["Business Logic Layer"]
BloomObj["BloomObj<br/>(bobjs.py)"]
BloomWF["BloomWorkflow<br/>Step"]
BloomFile["BloomFile<br/>Set"]
BloomEquip["BloomEquipment"]
end
subgraph DataAccess["Data Access Layer"]
BLOOMdb3["BLOOMdb3 (db.py)<br/>- SQLAlchemy Session Management<br/>- Connection Pooling<br/>- Transaction Management"]
end
subgraph ORM["ORM Models (bdb.py)"]
BloomObjModel["BloomObj Model"]
GenericLineage["GenericLineage"]
EquipmentInst["EquipmentInst"]
DataLineage["DataLineage"]
end
end
subgraph DB["PostgreSQL Database"]
bloom_obj["bloom_obj"]
generic_lineage["generic_instance_lineage"]
equipment["equipment_instance"]
data_lineage["data_lineage"]
end
FastAPI --> Business
CLI --> Business
Business --> DataAccess
DataAccess --> ORM
ORM --> DB
bloom_lims/
├── bdb.py # SQLAlchemy ORM models and base classes
├── db.py # Database connection and session management (BLOOMdb3)
├── bobjs.py # Business logic classes (BloomObj, BloomWorkflow, etc.)
├── bfile.py # File management (BloomFile, BloomFileSet)
├── bequip.py # Equipment management (BloomEquipment)
├── env.py # Environment configuration
├── config/ # Configuration files
│ ├── assay_config.yaml
│ └── fedex_config.yaml
└── templates/ # Jinja2 HTML templates for Flask UI
| Entry Point | File | Port | Purpose |
|---|---|---|---|
| Flask UI | bloom_lims/bkend/bkend.py | 5000 | Web-based user interface |
Every entity in BLOOM is a BloomObj. This includes:
- Templates: Blueprint definitions for creating instances
- Instances: Actual laboratory objects created from templates
- Containers: Tubes, plates, wells, boxes
- Content: Samples, specimens, reagents
- Workflows: Process definitions and instances
- Workflow Steps: Individual steps within workflows
- Equipment: Laboratory instruments and devices
- Files: Uploaded documents and data files
Objects are classified using a four-level hierarchy:
super_type / btype / b_sub_type / version
Examples:
- container/tube/tube-generic-10ml/1.0
- content/sample/blood-plasma/1.0
- workflow/assay/rare-mendelian/1.0
- workflow_step/queue/accessioning/1.0
- equipment/instrument/sequencer/1.0
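A classification code is just these four fields joined by slashes, so splitting one apart is trivial (a sketch, not a Bloom API):

code = "container/tube/tube-generic-10ml/1.0"
super_type, btype, b_sub_type, version = code.strip("/").split("/")
assert (super_type, btype, b_sub_type, version) == (
    "container", "tube", "tube-generic-10ml", "1.0"
)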
| Aspect | Template | Instance |
|---|---|---|
| is_template | True | False |
| template_uuid | NULL | Points to template |
| Purpose | Define structure | Represent real objects |
| json_addl | Contains instantiation_layouts | Contains properties, actions |
| Column | Type | Description |
|---|---|---|
| uuid | UUID | Primary key |
| euid | Text | Enterprise Unique Identifier (human-readable, variable length) |
| name | String(400) | Object name |
| super_type | String(100) | Top-level classification |
| btype | String(100) | Object type |
| b_sub_type | String(100) | Object subtype |
| version | String(100) | Version string |
| is_template | Boolean | True if this is a template |
| is_singleton | Boolean | True if only one instance allowed |
| template_uuid | UUID | Reference to template (for instances) |
| json_addl | JSONB | Flexible JSON storage for properties, actions, etc. |
| bstatus | String(100) | Object status (active, complete, destroyed, etc.) |
| bstate | String(100) | Object state |
| is_deleted | Boolean | Soft delete flag |
| created_dt | DateTime | Creation timestamp |
| modified_dt | DateTime | Last modification timestamp |
| created_by | String | Creator username |
| modified_by | String | Last modifier username |
| audit_comment | String | Audit trail comment |
| polymorphic_discriminator | String | For SQLAlchemy inheritance |
| Column | Type | Description |
|---|---|---|
| uuid | UUID | Primary key |
| parent_instance_uuid | UUID | Parent object UUID |
| child_instance_uuid | UUID | Child object UUID |
| relationship_type | String | Type of relationship |
| created_dt | DateTime | Creation timestamp |
| is_deleted | Boolean | Soft delete flag |
| polymorphic_discriminator | String | For inheritance |
| Column | Type | Description |
|---|---|---|
| uuid | UUID | Primary key |
| euid | Text | Enterprise Unique Identifier (human-readable) |
| name | String(400) | Equipment name |
| equipment_type | String(100) | Type of equipment |
| json_addl | JSONB | Equipment properties |
| bstatus | String(100) | Equipment status |
| is_deleted | Boolean | Soft delete flag |
| Column | Type | Description |
|---|---|---|
| uuid | UUID | Primary key |
| parent_data_uuid | UUID | Parent data UUID |
| child_data_uuid | UUID | Child data UUID |
| relationship_type | String | Type of data relationship |
The Enterprise Unique Identifier (EUID) is a human-readable identifier designed for laboratory operations:
Format: [PREFIX][SEQUENCE_NUMBER]
Examples: CX1, CX12, CX123, WX1000, CWX5, MRX42
Components:
- PREFIX: 2-3 uppercase letter code identifying object type
- SEQUENCE_NUMBER: Integer with NO leading zeros (critical LIMS design principle)
EUID Prefixes by Object Type:
| Prefix | Object Type | Description |
|---|---|---|
| GT | Template | Generic templates |
| GL | Lineage | Instance lineage records |
| CX | Container | Tubes, plates, racks, etc. |
| CWX | Well | Plate wells |
| MX | Content | Samples, specimens |
| MRX | Reagent | Reagent contents |
| MCX | Control | Control contents |
| WX | Workflow | Workflow instances |
| WSX | Workflow Step | Workflow step instances |
| QX | Queue | Queue instances |
| TRX | Test Requisition | Test requisitions |
| EX | Equipment | Equipment instances |
| DX | Data | Data instances |
| AY | Assay | Assay workflows |
| FI | File | File instances |
| FS | File Set | File set instances |
| GX | Generic | Generic instances |
Design Principles:
- EUIDs start with a prefix to make them human-readable for lab operations
- The numeric portion MUST NOT have leading zeros (e.g., CX1, not CX001)
- Variable length: the identifier grows with the sequence number
- Prefixes are defined in bloom_lims/config/{super_type}/metadata.json
- Generated by the PostgreSQL trigger function set_generic_instance_euid() in postgres_schema_v3.sql
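As a rough illustration of the documented format (EUIDs are actually generated in PostgreSQL by set_generic_instance_euid(); this parser is only a sketch):

import re

# Uppercase 2-3 letter prefix, then an integer with no leading zeros.
EUID_RE = re.compile(r"^([A-Z]{2,3})([1-9][0-9]*)$")

def parse_euid(euid):
    """Split an EUID into (prefix, sequence_number); raise on bad input."""
    m = EUID_RE.match(euid)
    if not m:
        raise ValueError(f"Not a valid EUID: {euid!r}")
    return m.group(1), int(m.group(2))

print(parse_euid("CX1234"))  # ('CX', 1234)
print(parse_euid("MRX42"))   # ('MRX', 42)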
| Super Type | Description | Examples |
|---|---|---|
| container | Physical containers | tubes, plates, wells, boxes |
| content | Material contents | samples, specimens, reagents |
| workflow | Process definitions | assays, accessioning workflows |
| workflow_step | Workflow components | queues, processing steps |
| equipment | Laboratory equipment | sequencers, thermocyclers |
| file | Digital files | data files, reports |
| file_set | File collections | result sets, batch uploads |
| data | Data records | measurements, results |
| control | Control samples | positive/negative controls |
| test_requisition | Test orders | clinical test requests |
container/
├── tube/
│ ├── tube-generic-10ml/1.0
│ ├── tube-cryovial/1.0
│ └── tube-blood-collection/1.0
├── plate/
│ ├── fixed-plate-24/1.0
│ ├── fixed-plate-96/1.0
│ └── fixed-plate-384/1.0
├── well/
│ └── well-standard/1.0
├── box/
│ ├── box-81-position/1.0
│ └── box-freezer/1.0
└── rack/
└── rack-tube/1.0
workflow (assay instance)
├── workflow_step (queue: accessioning)
│ └── workset (batch of samples)
│ └── containers/samples
├── workflow_step (queue: extraction)
│ └── workset
│ └── containers/samples
├── workflow_step (queue: library-prep)
│ └── workset
│ └── containers/samples
└── workflow_step (queue: sequencing)
└── workset
└── containers/samples
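A minimal sketch of walking this hierarchy via the parent_of_lineages relationship used elsewhere in this document (the walker itself is illustrative, not a Bloom API):

from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3

def print_tree(obj, depth=0):
    """Recursively print an object and its children via lineage records."""
    print("  " * depth + f"{obj.euid} ({obj.super_type}/{obj.btype})")
    for lineage in obj.parent_of_lineages:
        print_tree(lineage.child_instance, depth + 1)

# Usage, assuming an existing workflow EUID such as WX100:
# print_tree(BloomObj(BLOOMdb3()).get_by_euid("WX100"))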
Templates are stored in json_addl with the following structure:
{
"properties": {
"name": "Template Name",
"description": "Template description",
"lab_code": "LAB001"
},
"instantiation_layouts": [
{
"container/well/well-standard/1.0/": {
"json_addl": {
"cont_address": {
"name": "A1",
"row": "A",
"col": "1"
}
}
}
}
],
"actions": {},
"action_groups": {}
}

Templates are loaded from JSON files in the bloom_lims/templates/ directory:
# Load template from file
bobj = BloomObj(BLOOMdb3())
template = bobj.create_template_from_json_file("path/to/template.json")
# Or create from code string
template = bobj.create_template_by_code("container/plate/fixed-plate-96/1.0")

# Create instance from template EUID
bobj = BloomObj(BLOOMdb3())
instances = bobj.create_instances(template_euid)
# Returns: [[parent_instance], [child_instances...]]
# For a plate: [[plate], [well1, well2, ..., well96]]
# Create instance by code path
instance = bobj.create_instance_by_code(
"container/tube/tube-generic-10ml/1.0",
{"json_addl": {"properties": {"name": "My Tube"}}}
)

| Component | Description | Class |
|---|---|---|
| Workflow | Top-level process definition | BloomWorkflow |
| Workflow Step | Individual step/queue | BloomWorkflowStep |
| Workset | Batch of items in a queue | Part of workflow_step |
| Action | Operations on objects | Defined in json_addl |
stateDiagram-v2
[*] --> created
created --> in_progress: Start Work
in_progress --> complete: Finish Successfully
in_progress --> abandoned: Cancel/Stop
in_progress --> failed: Error Occurred
complete --> [*]
abandoned --> [*]
failed --> [*]
| Status | Description |
|---|---|
| created | Initial state after creation |
| in_progress | Work has started |
| complete | Successfully finished |
| abandoned | Cancelled/stopped |
| failed | Error occurred |
| destroyed | Object destroyed (containers) |
| active | Currently active |
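The state diagram above can be encoded as a simple transition table. This is a client-side sanity-check sketch, not how Bloom itself enforces status changes (those go through do_action_set_object_status):

# Allowed bstatus transitions, per the state diagram above.
ALLOWED_TRANSITIONS = {
    "created": {"in_progress"},
    "in_progress": {"complete", "abandoned", "failed"},
    "complete": set(),
    "abandoned": set(),
    "failed": set(),
}

def can_transition(current, target):
    """Return True if the state diagram permits current -> target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("created", "in_progress")
assert not can_transition("complete", "in_progress")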
class BloomWorkflow(BloomObj):
    """Manages workflow instances and their lifecycle."""

    def create_empty_workflow(self, template_euid):
        """Create a new workflow instance from template."""
        return self.create_instances(template_euid)

    def do_action(self, wf_euid, action, action_group, action_ds={}):
        """Execute an action on a workflow."""
        # Supported actions:
        # - do_action_create_and_link_child
        # - do_action_create_package_and_first_workflow_step
        # - do_action_destroy_specimen_containers

class BloomWorkflowStep(BloomObj):
    """Manages individual workflow steps and queues."""

    def do_action(self, wfs_euid, action, action_group, action_ds={}):
        """Execute an action on a workflow step."""
        # Supported actions:
        # - do_action_create_and_link_child
        # - do_action_create_input
        # - do_action_create_child_container_and_link_child_workflow_step
        # - do_action_create_test_req_and_link_child_workflow_step
        # - do_action_add_container_to_assay_q
        # - do_action_fill_plate_undirected
        # - do_action_fill_plate_directed
        # - do_action_link_tubes_auto
        # - do_action_cfdna_quant
        # - do_action_stamp_copy_plate
        # - do_action_log_temperature

Actions are defined in the json_addl field of objects:
{
"action_groups": {
"status_actions": {
"label": "Status Actions",
"actions": {
"set_in_progress": {
"label": "Start Work",
"action_enabled": "1",
"method_name": "do_action_set_object_status",
"captured_data": {
"object_status": "in_progress"
}
},
"set_complete": {
"label": "Mark Complete",
"action_enabled": "1",
"method_name": "do_action_set_object_status",
"captured_data": {
"object_status": "complete"
}
}
}
}
},
"actions": {
"print_label": {
"label": "Print Barcode Label",
"action_enabled": "1",
"method_name": "do_action_print_barcode_label",
"lab": "main_lab",
"printer_name": "zebra_1",
"label_style": "2x1_basic"
}
}
}

| Method | Description |
|---|---|
| do_action_set_object_status | Change object status |
| do_action_print_barcode_label | Print barcode label |
| do_action_destroy_specimen_containers | Mark containers as destroyed |
| do_action_create_package_and_first_workflow_step_assay | Create package workflow |
| do_action_move_workset_to_another_queue | Move workset between queues |
| do_stamp_plates_into_plate | Stamp multiple plates into one |
| do_action_download_file | Download file from storage |
| do_action_add_file_to_file_set | Add file to file set |
| do_action_remove_file_from_file_set | Remove file from file set |
| do_action_add_relationships | Create lineage relationships |
| Method | Description |
|---|---|
| do_action_create_and_link_child | Create child object and link |
| do_action_create_input | Create input object |
| do_action_create_child_container_and_link_child_workflow_step | Create container with workflow step |
| do_action_create_test_req_and_link_child_workflow_step | Create test requisition |
| do_action_add_container_to_assay_q | Add container to assay queue |
| do_action_fill_plate_undirected | Fill plate without position mapping |
| do_action_fill_plate_directed | Fill plate with position mapping |
| do_action_link_tubes_auto | Auto-link tubes |
| do_action_cfdna_quant | cfDNA quantification action |
| do_action_stamp_copy_plate | Create plate copy |
| do_action_log_temperature | Log temperature reading |
# 1. Get object and action definition
bobj = BloomObj(BLOOMdb3())
obj = bobj.get_by_euid(euid)
action_ds = obj.json_addl["action_groups"][action_group]["actions"][action]
# 2. Add captured data from user input
action_ds["captured_data"] = user_input_data
action_ds["curr_user"] = current_user
# 3. Execute action
result = bobj.do_action(euid, action, action_group, action_ds)
# 4. Action records execution in json_addl["action_log"]

Every action execution is logged:
{
"action_log": [
{
"action": "set_in_progress",
"action_group": "status_actions",
"timestamp": "2024-01-15T10:30:00",
"user": "lab_tech_1",
"captured_data": {
"object_status": "in_progress"
}
}
]
}

The BloomFile class (bfile.py) manages file uploads and downloads:
class BloomFile(BloomObj):
    """Manages file objects in BLOOM."""

    def upload_file(self, file_path, bucket="bloom-files", metadata=None):
        """Upload file to S3/Supabase storage."""
        # Creates BloomObj record
        # Uploads to storage bucket
        # Returns file EUID

    def download_file(self, euid, save_path="./", include_metadata=False):
        """Download file from storage."""
        # Retrieves file from storage
        # Optionally includes metadata JSON

    def get_file_metadata(self, euid):
        """Get file metadata without downloading."""

Groups related files together:
class BloomFileSet(BloomObj):
    """Manages collections of files."""

    def create_file_set(self, name, description=None):
        """Create a new file set."""

    def add_files_to_file_set(self, euid, file_euid):
        """Add files to an existing file set."""

    def remove_files_from_file_set(self, euid, file_euid):
        """Remove files from a file set."""

    def get_files_in_set(self, euid):
        """Get all files in a file set."""

Files are stored in S3-compatible storage (AWS S3 or Supabase Storage):
# Environment variables for storage
SUPABASE_URL = os.getenv("SUPABASE_URL")
SUPABASE_KEY = os.getenv("SUPABASE_KEY")
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
S3_BUCKET = os.getenv("S3_BUCKET", "bloom-files")

The FastAPI backend provides versioned REST API access at /api/v1/. All endpoints support pagination and filtering.
API modules are organized in bloom_lims/api/v1/:
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/objects/ | GET | List objects with filters (btype, b_sub_type, status, name_contains) |
| /api/v1/objects/{euid} | GET | Get object by EUID |
| /api/v1/objects/ | POST | Create new object |
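A hedged client sketch of the list endpoint using the third-party requests library; the base URL, token placeholder, and the response envelope (success/data, mirroring the instantiate example below) are assumptions:

import requests  # third-party; pip install requests

BASE = "http://localhost:8000"
headers = {"Authorization": "Bearer <jwt_token>"}

resp = requests.get(
    f"{BASE}/api/v1/objects/",
    params={"btype": "tube", "status": "active", "name_contains": "Sample"},
    headers=headers,
)
resp.raise_for_status()
for obj in resp.json().get("data", []):
    print(obj["euid"], obj.get("name"))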
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/containers/ | GET | List containers (filter by type, b_sub_type, status) |
| /api/v1/containers/{euid} | GET | Get container (optionally include_contents) |
| /api/v1/containers/ | POST | Create container from template |
| /api/v1/containers/{euid}/contents | POST | Add content to container |
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/content/ | GET | List content items |
| /api/v1/content/{euid} | GET | Get content by EUID |
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/workflows/ | GET | List workflows (filter by status, workflow_type) |
| /api/v1/workflows/{euid} | GET | Get workflow details |
| /api/v1/workflows/{euid}/advance | POST | Advance workflow to next step |
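Advancing a workflow is a bare POST; a quick sketch with a placeholder EUID and token (response handling assumed):

import requests

resp = requests.post(
    "http://localhost:8000/api/v1/workflows/WX100/advance",
    headers={"Authorization": "Bearer <jwt_token>"},
)
print(resp.status_code, resp.json())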
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/files/ | GET | List files (filter by file_type, status) |
| /api/v1/files/{euid} | GET | Get file metadata |
| /api/v1/files/ | POST | Create file record (with optional upload) |
| /api/v1/files/{file_euid}/link/{parent_euid} | POST | Link file to parent object |
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/equipment/ | GET | List equipment |
| /api/v1/equipment/{euid} | GET | Get equipment details |
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/auth/login | POST | Authenticate user |
| /api/v1/auth/refresh | POST | Refresh access token |
| /api/v1/auth/me | GET | Get current user info |
# Example: Create instance from template
POST /api/templates/{template_euid}/instantiate
Content-Type: application/json
{
"json_addl": {
"properties": {
"name": "Sample Tube 001",
"lab_code": "LAB001"
}
}
}
# Response
{
"success": true,
"data": {
"euid": "CX1234",
"uuid": "550e8400-e29b-41d4-a716-446655440000",
"name": "Sample Tube 001",
"super_type": "container",
"btype": "tube",
"b_sub_type": "tube-generic-10ml",
"version": "1.0",
"bstatus": "created"
}
}

API authentication uses Supabase Auth:
# JWT token in Authorization header
Authorization: Bearer <jwt_token>
# Token validation
from gotrue import SyncGoTrueClient
client = SyncGoTrueClient(url=SUPABASE_URL, headers={"apikey": SUPABASE_KEY})
user = client.get_user(token)

bloom_lims/bkend/
├── bkend.py # Main Flask application
├── templates/ # Jinja2 templates
│ ├── base.html
│ ├── index.html
│ ├── object_detail.html
│ ├── workflow_view.html
│ └── ...
└── static/ # Static assets
├── css/
├── js/
└── images/
| Route | Description |
|---|---|
| / | Home page / dashboard |
| /object/<euid> | Object detail view |
| /workflow/<euid> | Workflow view |
| /search | Search interface |
| /templates | Template browser |
| /action/<euid>/<action_group>/<action> | Action execution |
| /print/<euid> | Print barcode label |
@app.route('/object/<euid>')
def object_detail(euid):
    bobj = BloomObj(BLOOMdb3())
    obj = bobj.get_by_euid(euid)
    return render_template(
        'object_detail.html',
        obj=obj,
        lineages=obj.parent_of_lineages,
        actions=obj.json_addl.get('actions', {}),
        action_groups=obj.json_addl.get('action_groups', {})
    )

BLOOM integrates with Zebra label printers for barcode printing:
from zebra_day import ZebraDay
# Configuration in json_addl
{
"actions": {
"print_label": {
"method_name": "do_action_print_barcode_label",
"lab": "main_lab",
"printer_name": "zebra_zd420",
"label_style": "2x1_basic",
"alt_a": "", # Custom field A
"alt_b": "", # Custom field B
"alt_c": "", # Custom field C
}
}
}
# Printing execution
def print_label(self, lab, printer_name, label_zpl_style, euid, **kwargs):
    zd = ZebraDay()
    zd.print_label(
        printer=printer_name,
        template=label_zpl_style,
        data={
            "euid": euid,
            "barcode": euid,
            **kwargs
        }
    )

Package tracking integration for shipment management:
from fedex_tracking_day import FedexTracker
# Get tracking information
tracker = FedexTracker()
tracking_data = tracker.get_fedex_ops_meta_ds(tracking_number)
# Returns:
{
"tracking_number": "1234567890",
"status": "Delivered",
"Transit_Time_sec": 172800,
"delivery_date": "2024-01-15",
"events": [...]
}

BLOOM uses Supabase for:
- Authentication: User management and JWT tokens
- Storage: File storage (alternative to S3)
- Realtime: (Optional) Real-time updates
from supabase import create_client
supabase = create_client(
os.getenv("SUPABASE_URL"),
os.getenv("SUPABASE_KEY")
)
# File upload
supabase.storage.from_("bloom-files").upload(
path=f"files/{euid}/{filename}",
file=file_data
)
# Authentication
user = supabase.auth.sign_in_with_password({
"email": email,
"password": password
})

Alternative file storage using AWS S3:
import boto3
s3_client = boto3.client(
's3',
aws_access_key_id=os.getenv('AWS_ACCESS_KEY_ID'),
aws_secret_access_key=os.getenv('AWS_SECRET_ACCESS_KEY')
)
# Upload file
s3_client.upload_file(
local_path,
bucket_name,
f"bloom/{euid}/{filename}"
)
# Download file
s3_client.download_file(
bucket_name,
f"bloom/{euid}/{filename}",
local_path
)

Create a .env file with the following variables:
# Database
BLOOM_DB_HOST=localhost
BLOOM_DB_PORT=5432
BLOOM_DB_NAME=bloom_lims
BLOOM_DB_USER=bloom_user
BLOOM_DB_PASSWORD=secure_password
# Supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key
# AWS S3 (optional, alternative to Supabase storage)
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
S3_BUCKET=bloom-files
S3_REGION=us-east-1
# FedEx API
FEDEX_API_KEY=your-fedex-key
FEDEX_SECRET=your-fedex-secret
# Application
FLASK_SECRET_KEY=your-flask-secret
DEBUG=false
LOG_LEVEL=INFO

# Constructed from environment variables
DATABASE_URL = f"postgresql://{BLOOM_DB_USER}:{BLOOM_DB_PASSWORD}@{BLOOM_DB_HOST}:{BLOOM_DB_PORT}/{BLOOM_DB_NAME}"
# Or set directly
DATABASE_URL = os.getenv("DATABASE_URL")

Printer configuration is stored in YAML files:
# config/printers.yaml
labs:
main_lab:
printers:
zebra_zd420:
ip: 192.168.1.100
port: 9100
type: zpl
zebra_zd621:
ip: 192.168.1.101
port: 9100
type: zpl
label_styles:
2x1_basic:
width: 2
height: 1
template: |
^XA
^FO50,50^BY3
^BCN,100,Y,N,N
^FD{euid}^FS
^FO50,180^A0N,30,30^FD{alt_a}^FS
^XZ

# config/assay_config.yaml
assays:
rare-mendelian:
name: "Rare Mendelian Disease Panel"
version: "1.0"
steps:
- name: accessioning
queue: workflow_step/queue/accessioning/1.0
- name: extraction
queue: workflow_step/queue/extraction/1.0
- name: library_prep
queue: workflow_step/queue/library-prep/1.0
- name: sequencing
queue: workflow_step/queue/sequencing/1.0

# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000 8000 8080
CMD ["python", "-m", "bloom_lims.bkend.bkend"]# docker-compose.yml
version: '3.8'
services:
bloom-web:
build: .
ports:
- "5000:5000"
environment:
- DATABASE_URL=postgresql://bloom:password@db:5432/bloom_lims
depends_on:
- db
bloom-api:
build: .
command: ["uvicorn", "bloom_lims.bkend.fastapi_bkend:app", "--host", "0.0.0.0", "--port", "8000"]
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://bloom:password@db:5432/bloom_lims
depends_on:
- db
db:
image: postgres:15
environment:
- POSTGRES_USER=bloom
- POSTGRES_PASSWORD=password
- POSTGRES_DB=bloom_lims
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:

# Create database
createdb bloom_lims
# Initialize schema (SQLAlchemy creates tables)
python -c "from bloom_lims.db import BLOOMdb3; BLOOMdb3()"
# Load initial templates
python -c "
from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3
bobj = BloomObj(BLOOMdb3())
bobj.load_templates_from_directory('bloom_lims/templates/')
"# FastAPI (development)
uvicorn bloom_lims.bkend.fastapi_bkend:app --reload --port 8000
# Production with gunicorn
gunicorn -w 4 -b 0.0.0.0:5000 bloom_lims.bkend.bkend:app

See the build/test badges above for all supported platforms.
- Mac (14+)
  - brew install coreutils is required for the gtimeout command used by some rclone functionality. Run alias timeout=gtimeout to use gtimeout with zsh.
- Ubuntu 22+
- Centos 9
- Conda (you may swap in mamba if you prefer). Installing conda:
  - Be sure wget is available to you.
  - Linux (a pinned version: https://repo.anaconda.com/miniconda/Miniconda3-py312_24.5.0-0-Linux-x86_64.sh)
    - x86_64: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    - arm64: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
  - macOS
    - Intel: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
    - ARM: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
  - Then execute the Miniconda .sh script and follow the prompts. When installation completes, follow these last 2 steps:
    - Be sure to run ~/miniconda3/bin/conda init bash
    - Newly created shells should not auto-load the conda (base) env.

Assumes you have completed the prerequisites.
# Clone the repository
git clone git@github.com:Daylily-Informatics/bloom.git
cd bloom
# This will attempt to build the conda env, install postgres, the database, build the schema and start postgres
source bloom_lims/env/install_postgres.sh
# conda activate BLOOM if it has not happened already.
# Start the Bloom LIMS UI
source run_bloomui.sh

# RUN TESTS
pytest
# START THE UIs (on localhost:8080)
source bloom_lims/env/install_pgadmin.sh

conda activate BLOOM
pytest

- There is no reason bloom cannot be used in a CLIA-regulated environment.
- Bloom can satisfy all relevant CAP checklist items that apply to it. But, as it is software you will be running yourself, most checklist items will concern the environment you are running bloom in.
- If installed in an already HIPAA compliant environment, bloom should not need much or any work to be compliant.
- Using one UUID across multiple child objects for convenience will lead to a mess: knowing the details of each individual object becomes next to impossible once a UUID is assigned to multiple objects.
- Keeping metadata out of the UUID formula is a fundamental requirement in building flexible and scalable systems. FUNDAMENTAL.
- There are few/no compelling reasons to use CSVs over TSVs, and so many reasons not to use CSVs (see the sketch below).
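A two-line illustration of the core problem (naive splitting; real parsers quote fields, but quoting is exactly the complexity TSVs avoid):

row = "CX1234,Smith, Jane,blood-plasma"
print(row.split(","))  # 4 fields from 3 columns -> silent corruption
print("CX1234\tSmith, Jane\tblood-plasma".split("\t"))  # 3 clean fields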
- It is! Fully (though with some safeguards still not in place).
- soft deletes need to be reviewed more closely
- Requiring as few code changes as possible.
- Simple
- Scalable
- Secure
- Flexible & Extensible
- Open Source
- Operationally Robust
- Free
- Sustainable
Many-to-many is the foundational relationship: all other relationships are subsets of it, and designing parts of the LIMS which disallow many-to-many will result in an inflexible system.
Objects may all be: root (singleton, parent, and able to become a child at some point) or child (singleton, parent, and possibly terminal) of one another.
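Since the lineage table stores one row per edge, many-to-many falls out for free. A sketch linking the same child under two different parents (the EUIDs are placeholders; create_generic_instance_lineage_by_euids is the same call used in the examples later in this document):

from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3

bobj = BloomObj(BLOOMdb3())
bobj.create_generic_instance_lineage_by_euids("CX1", "MX7")   # container -> sample
bobj.create_generic_instance_lineage_by_euids("WSX3", "MX7")  # workflow step -> same sample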
note: all commands below are expected to be run from a shell with conda activated.
conda activate BLOOM
Drop The Entire Database (Lose All Data!) > Rebuild The Database / Re-seed With All Accessible JSON Templates
The steps are wrapped in a script, please see clear_and_rebuild_postgres.sh.
It is executed as follows:
source clear_and_rebuild_postgres.sh

source bloom_lims/bin/stop_bloom_db.sh
rm -rf bloom_lims/database/*
source bloom_lims/env/install_postgres.sh skip

The skip argument skips building the conda env. This will start pgsql in the env and build the schema.
Similar to pytest, but more extensive. Useful for development and smoke testing. Run the accessioning/extraction workflow generator:
python smoke_exams/accession_extract_qant.py <num_iterations> <assay_type>

- Example: python smoke_exams/accession_extract_qant.py 2 1 (runs 2 iterations with the HLA-typing assay)
source run_bloomui.sh
source bloom_lims/env/install_pgadmin.sh
python bloom_shell.py
echo "test" > test.log
echo "TEST" > TEST.LOG
more test.log
# OUTPUT: TEST
more TEST.log
# OUTPUT: TEST- This still shocks me & is worth a reminder.
echo "test" > test.log
echo "TEST" > TEST.LOG
more test.log
# OUTPUT: test
more TEST.LOG
# OUTPUT: TEST- Given we can not be certain where files will be reconstituted, we must assume that files might be created in a case insensitive file system when allowing download.
A widely adopted UUID spec (RFC 4122, the one Postgres uses) treats uppercase and lowercase as the same character. Bloom EUIDs contain only uppercase characters in the prefix, followed by integers.
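A small hedged helper (not part of Bloom) that flags filenames which would collide on a case-insensitive file system before anything is written:

def case_collisions(filenames):
    """Return pairs of names that differ only by case."""
    seen = {}
    collisions = set()
    for name in filenames:
        key = name.lower()
        if key in seen and seen[key] != name:
            collisions.add(frozenset({seen[key], name}))
        seen.setdefault(key, name)
    return collisions

print(case_collisions(["test.log", "TEST.LOG", "data.tsv"]))
# {frozenset({'test.log', 'TEST.LOG'})}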
No promises, please file issues to log a bug or request a feature.
- John Major (LinkedIn), aka iamh2o (GitHub)
- Josh Durham
- Adam Tracy
You may deploy bloom wherever it will run. This does mean you are responsible for all aspects of the deployment, including security, backups (AND recovery), performance optimization, monitoring, etc. This need not be daunting. I am available for consulting on these topics.
- MIT
- ChatGPT-4 for helping me build this.
- All the folks I've built systems for to date, who were patient with my tools and offered helpful feedback.
- snakemake :: inspiration.
- multiqc :: inspiration.
- GA4GH :: inspiration.
- the human genome project :: where I learned I dug LIS.
- cytoscape :: incredible graph visualization tools!
- The OSS world.
- Semantic Mediawiki :: inspiration.
- Datomic :: inspiration.
from bloom_lims.bobjs import BloomObj, BloomWorkflow
from bloom_lims.db import BLOOMdb3
# 1. Get workflow template
bobj = BloomObj(BLOOMdb3())
wf_template = bobj.query_template_by_component_v2(
"workflow", "assay", "rare-mendelian", "1.0"
)[0]
# 2. Create workflow instance
bwf = BloomWorkflow(BLOOMdb3())
workflow = bwf.create_empty_workflow(wf_template.euid)
# 3. Create sample container
tube_template = bobj.query_template_by_component_v2(
"container", "tube", "tube-generic-10ml", "1.0"
)[0]
tube = bobj.create_instances(tube_template.euid)[0][0]
# 4. Link sample to workflow step
first_step = workflow[0][0].parent_of_lineages[0].child_instance
bobj.create_generic_instance_lineage_by_euids(first_step.euid, tube.euid)

from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3
bobj = BloomObj(BLOOMdb3())
# Get object by EUID (format: PREFIX + sequence number)
obj = bobj.get_by_euid("CX1234")
# Prepare action data
action_ds = obj.json_addl["action_groups"]["status_actions"]["actions"]["set_complete"]
action_ds["captured_data"] = {"object_status": "complete"}
action_ds["curr_user"] = "lab_tech_1"
# Execute action
result = bobj.do_action(
obj.euid,
"set_complete",
"status_actions",
action_ds
)

from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3
bobj = BloomObj(BLOOMdb3())
# By EUID (format: PREFIX + sequence number, e.g., CX1234, WX100)
obj = bobj.get_by_euid("CX1234")
# By UUID
obj = bobj.get_by_uuid("550e8400-e29b-41d4-a716-446655440000")
# By type (templates)
templates = bobj.query_template_by_component_v2(
super_type="container",
btype="plate",
b_sub_type="fixed-plate-96",
version="1.0"
)
# By type (instances)
instances = bobj.query_instance_by_component_v2(
super_type="workflow",
btype="assay",
b_sub_type="rare-mendelian",
version="1.0"
)
# Search with filters
results = bobj.search_objects(
super_type="container",
bstatus="active",
name_contains="Sample"
)

| Term | Definition |
|---|---|
| EUID | Enterprise Unique Identifier - Prefix + sequence number (e.g., CX123, WX1000) |
| UUID | Universally Unique Identifier - Standard 128-bit identifier |
| Template | Blueprint for creating object instances |
| Instance | Actual object created from a template |
| Lineage | Parent-child relationship between objects |
| Workflow | Multi-step process definition |
| Workflow Step | Individual step/queue in a workflow |
| Workset | Batch of items being processed together |
| Action | Operation that can be performed on an object |
| Action Group | Collection of related actions |
| json_addl | JSON field for flexible object properties |
| super_type | Top-level object classification |
| btype | Object type within super_type |
| b_sub_type | Object subtype within btype |
| bstatus | Current status of an object |
Document Version: 2.0. Last Updated: 2024-12-24. BLOOM LIMS version is dynamically fetched from GitHub releases.







