louislichen/paradox_machine

Paradox Machine

A minimum viable product (MVP) for detecting paradoxes in viewpoints and proposals.

The core goal is to move an LLM out of its default agreement mode and into a strict logical-evaluation mode, then output a structured paradox report that surfaces self-contradictions, hidden fallacies, and costly trade-offs early.

What It Does

  • Structured multi-step analysis (S1 + Phase I/II/III)
  • Standardized Paradox Report output (human-readable text or JSON)
  • Model switching via YAML configuration
  • A direct Q&A mode (src/demo.py) for A/B comparison against the structured pipeline
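The report schema itself is not documented in this README; purely as an illustration of what a structured JSON report might carry (every field name below is hypothetical, not the tool's actual output), consider:

```python
# Hypothetical paradox-report shape. Field names are invented for
# illustration only; the real schema comes from `run.py --json`.
import json

sample_report = {
    "proposal": "Add unlimited cache layers and minimize consistency checks.",
    "paradoxes": [
        {
            "kind": "self-contradiction",
            "claim_a": "unlimited cache layers",
            "claim_b": "minimized consistency checks",
            "explanation": "More cache layers multiply the copies that "
                           "consistency checks must reconcile.",
        }
    ],
    "trade_offs": ["latency vs. consistency"],
}

print(json.dumps(sample_report, indent=2))
```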

Project Layout

.
├── run.py                    # Structured paradox detection CLI entry
├── src/
│   ├── agents.py             # Main agent pipeline + API client
│   ├── prompts.py            # All prompt templates
│   ├── apis.py               # YAML model config loader
│   └── demo.py               # Direct Q&A mode (for comparison)
├── assets/models/
│   ├── deepseek-chat.yaml
│   └── deepseek-reasoner.yaml
└── docs/
    ├── intro.md
    └── full_process.md

Quick Start

1) Install

python -m venv .venv
source .venv/bin/activate
pip install -e .

2) Configure Model API

Configure model settings in assets/models/*.yaml (DeepSeek examples are included):

provider: deepseek
model: deepseek-chat
base_url: https://api.deepseek.com
chat_completions_path: /chat/completions
api_key_env: DEEPSEEK_API_KEY
api_key: ""
timeout_seconds: 90
default_temperature: 0.2
headers:
  Content-Type: application/json

If assets/models/*.yaml is missing locally, create the file manually and fill in the fields above.
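The `api_key_env` / `api_key` pair suggests the loader prefers the environment variable and falls back to the literal field. A minimal sketch of that behavior (hypothetical helper name, assuming PyYAML; the project's real loader lives in src/apis.py and may differ):

```python
# Sketch of a YAML model-config loader (hypothetical; see src/apis.py
# for the project's actual implementation).
import os
import yaml  # PyYAML

def load_model_config(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    # Prefer the environment variable named by api_key_env; fall back
    # to the literal api_key field in the YAML file.
    env_name = cfg.get("api_key_env") or ""
    cfg["api_key"] = os.environ.get(env_name) or cfg.get("api_key") or ""
    return cfg
```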

Usage

A) Structured Paradox Detection

python run.py --config deepseek-chat "To reduce latency, we plan to add unlimited cache layers and minimize consistency checks."

JSON output:

python run.py --config deepseek-chat --json "To reduce latency, we plan to add unlimited cache layers and minimize consistency checks."

Switch model:

python run.py --config deepseek-reasoner "Your proposal here"
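The `--json` flag makes the structured mode easy to drive from other scripts. A small wrapper sketch (the helper name is hypothetical, and it assumes the JSON report is printed to stdout):

```python
# Hypothetical wrapper around the structured CLI; assumes
# `run.py --json` prints a single JSON object to stdout.
import json
import subprocess
import sys

def detect_paradoxes(proposal: str, config: str = "deepseek-chat",
                     entry: str = "run.py") -> dict:
    """Run the structured pipeline and parse its JSON report."""
    cmd = [sys.executable, entry, "--config", config, "--json", proposal]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```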

B) Direct Q&A (For A/B Comparison)

Single-turn Q&A:

python -m src.demo --config deepseek-chat "To reduce latency, we plan to add unlimited cache layers and minimize consistency checks."

Interactive mode:

python -m src.demo --config deepseek-chat --interactive

Suggested A/B comparison flow:

  1. Run python -m src.demo ... to observe a natural model response.
  2. Run python run.py ... to generate a structured paradox report.
  3. Compare rebuttal strength, structure quality, and actionable guidance.
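The flow above can also be scripted for repeated comparisons; a rough sketch (helper names are hypothetical, and it assumes both entry points print their answer to stdout):

```python
# Hypothetical A/B harness: run the direct Q&A mode and the structured
# pipeline on the same proposal, and return both outputs for review.
import subprocess
import sys

def run_cmd(args: list[str]) -> str:
    """Run a command and return its stdout."""
    return subprocess.run(args, capture_output=True, text=True,
                          check=True).stdout

def ab_compare(proposal: str, config: str = "deepseek-chat") -> tuple[str, str]:
    """Return (natural_answer, structured_report) for side-by-side review."""
    natural = run_cmd([sys.executable, "-m", "src.demo",
                       "--config", config, proposal])
    report = run_cmd([sys.executable, "run.py",
                      "--config", config, proposal])
    return natural, report
```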

References

  • Motivation: docs/intro.md
  • Method pipeline: docs/full_process.md
