CAR (Compare-Adjust-Record) is a novel computational architecture for property prediction through iterative unit interactions without gradient-based optimization. This system demonstrates that intelligent behavior can emerge from simple local interaction protocols between autonomous computational units.
The CAR system is fundamentally different from traditional neural networks:
- No Gradient Descent: Learning emerges from local interactions, not backpropagation
- Bounded Communication: Units communicate via tanh-bounded signals in (-1, 1)
- Simultaneous Learning-Prediction: No separation between training and testing phases
- Explicit Knowledge Storage: Patterns stored in a retrievable knowledge base
- White-Box Architecture: Every prediction is fully interpretable
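The bounded-communication property is simple to illustrate. The sketch below is an assumption about how such a bound could be enforced (the `emit_signal` helper is a hypothetical name, not part of the released API): tanh squashes any raw activation into the open interval (-1, 1).

```python
import math

def emit_signal(raw_activation: float) -> float:
    """Squash a unit's raw activation into (-1, 1) with tanh (hypothetical helper)."""
    return math.tanh(raw_activation)

# Any raw activation, however large, maps strictly inside (-1, 1).
for raw in (-15.0, -1.0, 0.0, 2.5, 15.0):
    s = emit_signal(raw)
    assert -1.0 < s < 1.0
```

Because tanh is smooth and saturating, no single unit can dominate the discussion with an unbounded signal.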
The CAR system consists of five core mechanisms working in concert:
1. Compare: Computational units analyze input features and compare them against stored knowledge patterns. Each unit maintains independent feature weights, enabling diverse perspectives on the same input.
2. Adjust: Based on comparison results, units adjust their internal states. The adjustment is guided by knowledge base matches, with the learning rate modulated by similarity strength and historical success rates.
3. Record: Successful prediction patterns are stored in the knowledge base for future use. The system maintains a dynamic balance between creating new patterns and merging similar ones.
4. Consensus: Multiple units participate in a distributed discussion to reach consensus. Unit contributions are weighted by their historical performance, enabling robust ensemble predictions.
5. Reflect: The system periodically reflects on recent performance and adapts its learning strategy, including adjusting learning rates based on error trends.
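The three namesake steps can be sketched in miniature. Everything below is an illustrative assumption, not the released implementation: the cosine-similarity comparison, the triple-based knowledge store, and the function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Knowledge base: (feature_pattern, target_value, success_rate) triples
knowledge_base = []

def compare(features, threshold=0.5):
    """Return the stored pattern most similar to the input, if any clears the threshold."""
    best, best_sim = None, threshold
    for pattern, target, success in knowledge_base:
        denom = np.linalg.norm(features) * np.linalg.norm(pattern) + 1e-12
        sim = float(features @ pattern) / denom
        if sim > best_sim:
            best, best_sim = (pattern, target, success), sim
    return best, best_sim

def adjust(state, match, similarity, lr=0.3):
    """Nudge the unit's state toward the matched target, scaled by similarity and success rate."""
    if match is None:
        return state
    _, target, success = match
    return state + lr * similarity * success * (target - state)

def record(features, prediction, target, tol=1.0):
    """Store the pattern only when the prediction counted as a success (error below tol)."""
    if abs(prediction - target) < tol:
        knowledge_base.append((features.copy(), float(target), 1.0))

# One learn-while-predicting step on a synthetic sample
features, target = rng.normal(size=8), 5.0
match, sim = compare(features)           # compare against stored knowledge
state = adjust(0.0, match, sim)          # adjust internal state
record(features, state, target)          # record the pattern if successful
```

Note that prediction and learning happen in the same pass: `record` runs immediately after `adjust`, so there is no separate training phase.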
The knowledge base query operates across multiple similarity thresholds to find relevant patterns:
- Coarse filtering identifies broadly similar cases
- Fine filtering refines to highly specific matches
- Medium thresholds provide balanced retrieval
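The banded retrieval above can be sketched as follows, assuming similarity scores are precomputed per stored pattern (the `multi_scale_query` helper is hypothetical, with the default thresholds taken from the parameter table below):

```python
def multi_scale_query(sim_scores, thresholds=(0.3, 0.5, 0.7)):
    """Bucket pattern indices by similarity band, from coarse to fine.

    sim_scores maps pattern index -> similarity in [0, 1]; each threshold
    yields the candidate set of patterns at least that similar to the query.
    """
    return {t: [i for i, s in sim_scores.items() if s >= t] for t in thresholds}

scores = {0: 0.35, 1: 0.55, 2: 0.75, 3: 0.10}
bands = multi_scale_query(scores)
# bands[0.3] is the coarse candidate set; bands[0.7] keeps only the closest matches
```

The coarse band guarantees recall when the knowledge base has no close match, while the fine band supplies high-precision evidence when one exists.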
Unit contributions to consensus are weighted by:
- Historical success rate
- Current confidence level
- Knowledge base influence
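One plausible reading of this weighting, assuming the three factors combine multiplicatively (the actual combination rule is not specified here, and `weighted_consensus` is a hypothetical helper):

```python
def weighted_consensus(predictions, success_rates, confidences, kb_influences):
    """Combine unit predictions, weighting each by success history, confidence, and KB influence."""
    weights = [s * c * k for s, c, k in zip(success_rates, confidences, kb_influences)]
    total = sum(weights)
    if total == 0:
        return sum(predictions) / len(predictions)  # fall back to a plain mean
    return sum(w * p for w, p in zip(weights, predictions)) / total

pred = weighted_consensus(
    predictions=[4.0, 6.0],
    success_rates=[0.9, 0.1],
    confidences=[1.0, 1.0],
    kb_influences=[1.0, 1.0],
)
# the historically more successful unit dominates, pulling the consensus toward 4.0
```

A multiplicative combination means a unit with a poor record on any one factor contributes little, which keeps unreliable units from skewing the ensemble.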
The system continuously adjusts its learning rate:
- Decreases when performance is good
- Increases when errors exceed threshold
- Maintains optimal adaptation speed
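A minimal sketch of this schedule (the decay/growth factors and the clamp bounds are illustrative assumptions; `adapt_learning_rate` is a hypothetical helper):

```python
def adapt_learning_rate(lr, recent_errors, error_threshold=1.0,
                        decay=0.95, growth=1.05, lr_min=0.01, lr_max=1.0):
    """Shrink the rate when recent errors are small, grow it when they exceed the threshold."""
    mean_error = sum(recent_errors) / len(recent_errors)
    lr = lr * growth if mean_error > error_threshold else lr * decay
    return min(max(lr, lr_min), lr_max)  # clamp to keep adaptation speed in a sane band

lr = 0.3
lr = adapt_learning_rate(lr, recent_errors=[0.2, 0.4])  # good performance -> smaller rate
```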
The knowledge base implements intelligent forgetting:
- Low-utility patterns are removed when capacity is exceeded
- Utility considers success rate, recency, and average error
- Ensures memory quality over quantity
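The utility scoring could be sketched as below. How the three factors are actually combined is an assumption here (success rate times a recency factor, discounted by average error), and `utility`/`forget` are hypothetical helper names:

```python
def utility(pattern, now):
    """Score a pattern by success rate, recency, and (inverse) average error."""
    recency = 1.0 / (1.0 + (now - pattern["last_used"]))
    return pattern["success_rate"] * recency / (1.0 + pattern["avg_error"])

def forget(kb, capacity, now):
    """Drop the lowest-utility patterns once the knowledge base exceeds capacity."""
    if len(kb) <= capacity:
        return kb
    return sorted(kb, key=lambda p: utility(p, now), reverse=True)[:capacity]

kb = [
    {"id": 0, "success_rate": 0.9, "avg_error": 0.2, "last_used": 95},
    {"id": 1, "success_rate": 0.2, "avg_error": 1.5, "last_used": 10},
    {"id": 2, "success_rate": 0.8, "avg_error": 0.5, "last_used": 90},
]
kb = forget(kb, capacity=2, now=100)
# the stale, low-success, high-error pattern (id 1) is the one forgotten
```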
| Parameter | Default | Description |
|---|---|---|
| num_units | 20 | Number of computational units |
| feature_dim | 71 | Dimensionality of input features |
| learning_rate | 0.3 | Initial learning rate |
| consensus_threshold | 0.6 | Minimum confidence for consensus |
| success_threshold | 1.0 eV | Error threshold for success |
| Parameter | Default | Description |
|---|---|---|
| kb_capacity | 500 | Maximum patterns in knowledge base |
| similarity_thresholds | [0.3, 0.5, 0.7] | Multi-scale retrieval thresholds |
| pattern_merge_threshold | 0.80 | Similarity threshold for merging |
| reflection_interval | 30 | Iterations between reflections |
The system updates its parameters using knowledge-driven gradient estimation, in which update directions are inferred from knowledge base matches rather than computed analytically. Hypotheses are generated from knowledge base matches, each weighted by a confidence score.
```python
from car_system import CARSystem

# Initialize CAR system
car = CARSystem(
    num_units=20,
    feature_dim=71,
    kb_capacity=500,
    learning_rate=0.3,
    consensus_threshold=0.6,
    similarity_thresholds=[0.3, 0.5, 0.7],
    pattern_merge_threshold=0.80,
    reflection_interval=30,
    success_threshold=1.0,
    exploration_value=7.5
)

# Process samples (with internal feedback learning)
# X, y: feature matrix and target values
for features, target in zip(X, y):
    result = car.infer(features, target)
    print(f"Prediction: {result['prediction']:.3f} eV")
    print(f"Confidence: {result['confidence']:.3f}")
    print(f"Knowledge Base Size: {result['knowledge_size']}")

# Get system statistics
stats = car.get_statistics()
```

```bash
python src/car_system.py
```

This runs the complete experiment pipeline with synthetic data and reports performance metrics.
```bash
python src/real_qm9_experiment.py
```

This runs the CAR system with authentic QM9 molecular data, demonstrating excellent performance:
- MAE: 1.08 eV, effectively matching the paper's 1.07 eV target
- Performance improvement: 96.0%
- Data unit verification: Correctly converted Hartree to eV
- Real QM9 data range: 2.41-11.70 eV (consistent with the range actually observed for real molecules)
- Uses real molecular properties from QM9 dataset
- Shows practical application in computational chemistry
```bash
python src/enhanced_car.py
```

This runs the enhanced version, which adds features such as special pattern storage and diversity mechanisms.
The CAR system eliminates all traditional AI training machinery:
- No gradient descent
- No loss function
- No backpropagation
- No weight updates through optimization
- No separate training phase
- No explicit target functions for optimization
```
car-complete-demo/
├── src/
│   ├── __init__.py
│   ├── car_system.py          # Basic CAR implementation
│   ├── enhanced_car.py        # Enhanced CAR with real QM9 support
│   └── real_qm9_experiment.py # Real QM9 data experiment (1.08 eV performance)
├── data/
│   ├── gdb9.sdf               # QM9 molecular structures
│   └── gdb9.sdf.csv           # QM9 properties
├── tests/
│   ├── test_system.py         # Basic functionality tests
│   └── test_real_qm9.py       # Real QM9 tests
├── requirements.txt           # Python dependencies
├── LICENSE                    # MIT License
└── README.md                  # This file
```
This project is licensed under the MIT License - see the LICENSE file for details.
- Email: wangwang228879@163.com