- System Architecture
- Core Components
- Kubernetes Integration
- Database Management
- Authentication Flow
- Sandbox Lifecycle
- API Design
- Security Implementation
- Performance Optimizations
- Troubleshooting Guide
FullStack Agent follows a microservices-inspired architecture deployed on Kubernetes, with clear separation between the control plane (Next.js application) and the data plane (sandbox environments).
┌──────────────────────────────────────────────────────────────────┐
│                          Control Plane                           │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│   │   Next.js   │───▶│   Prisma    │───▶│ PostgreSQL  │          │
│   │ App Router  │    │     ORM     │    │  Database   │          │
│   └─────────────┘    └─────────────┘    └─────────────┘          │
│          │                                                       │
│          ▼                                                       │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│   │  NextAuth   │───▶│   GitHub    │    │ Kubernetes  │          │
│   │     v5      │    │    OAuth    │    │   Service   │          │
│   └─────────────┘    └─────────────┘    └──────┬──────┘          │
│                                                │                 │
└────────────────────────────────────────────────┬─────────────────┘
                                                 │
┌────────────────────────────────────────────────▼─────────────────┐
│                            Data Plane                            │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │                     Kubernetes Cluster                     │  │
│  │                                                            │  │
│  │  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐      │  │
│  │  │  KubeBlocks  │  │   Sandbox    │  │   Ingress    │      │  │
│  │  │  PostgreSQL  │  │  Deployment  │  │  Controller  │      │  │
│  │  └──────────────┘  └──────────────┘  └──────────────┘      │  │
│  │                                                            │  │
│  └────────────────────────────────────────────────────────────┘  │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
The `KubernetesService` class is the heart of the platform's infrastructure management:
- Cluster Connection: Manages kubeconfig loading and API client initialization
- Resource Creation: Creates deployments, services, and ingresses
- Database Provisioning: Integrates with KubeBlocks for PostgreSQL management
- Status Monitoring: Tracks pod and deployment states
import * as fs from 'fs';
import * as path from 'path';
import * as k8s from '@kubernetes/client-node';

export class KubernetesService {
  private kc: k8s.KubeConfig;
  private k8sApi: k8s.CoreV1Api;
  private k8sAppsApi: k8s.AppsV1Api;
  private k8sNetworkingApi: k8s.NetworkingV1Api;

  constructor() {
    // Load kubeconfig with proper error handling
    this.kc = new k8s.KubeConfig();
    const kubeconfigPath = path.join(process.cwd(), '.secret', 'kubeconfig');
    if (fs.existsSync(kubeconfigPath)) {
      this.kc.loadFromFile(kubeconfigPath);
      // Verify correct server endpoint (not localhost)
      const cluster = this.kc.getCurrentCluster();
      if (!cluster?.server || cluster.server.includes('localhost')) {
        throw new Error(`Invalid server endpoint: ${cluster?.server}`);
      }
    }
    this.k8sApi = this.kc.makeApiClient(k8s.CoreV1Api);
    this.k8sAppsApi = this.kc.makeApiClient(k8s.AppsV1Api);
    this.k8sNetworkingApi = this.kc.makeApiClient(k8s.NetworkingV1Api);
  }
}

Problem: Kubernetes API responses have inconsistent structure
Solution: Handle both response.body.items and response.items patterns
// Before (causes errors):
const deployments = await this.k8sAppsApi.listNamespacedDeployment({ namespace });
const items = deployments.body?.items || [];
// After (fixed):
const deployments = await this.k8sAppsApi.listNamespacedDeployment({ namespace });
const items = deployments.body?.items || (deployments as any).items || [];

Uses KubeBlocks for managed PostgreSQL instances with automatic:
- High availability configuration
- Backup and restore capabilities
- Connection credential management
- Create ServiceAccount with proper labels
- Create Role with full permissions
- Create RoleBinding
- Create KubeBlocks Cluster resource
- Wait for cluster to be ready
- Retrieve connection credentials from generated secret
async createPostgreSQLDatabase(projectName: string, namespace?: string) {
  // 6-char random suffix keeps cluster names unique across deployments
  const randomSuffix = Math.random().toString(36).slice(2, 8);
  const clusterName = `${projectName}-agentruntime-${randomSuffix}`;

  // 1. Create RBAC resources
  await this.createServiceAccount(clusterName, namespace);
  await this.createRole(clusterName, namespace);
  await this.createRoleBinding(clusterName, namespace);

  // 2. Create KubeBlocks Cluster
  const cluster = {
    apiVersion: 'apps.kubeblocks.io/v1alpha1',
    kind: 'Cluster',
    metadata: {
      name: clusterName,
      namespace,
      labels: {
        'clusterdefinition.kubeblocks.io/name': 'postgresql',
        'clusterversion.kubeblocks.io/name': 'postgresql-14.8.0'
      }
    },
    spec: {
      clusterDefinitionRef: 'postgresql',
      clusterVersionRef: 'postgresql-14.8.0',
      componentSpecs: [{
        componentDefRef: 'postgresql',
        replicas: 1,
        resources: {
          limits: { cpu: '1000m', memory: '1024Mi' },
          requests: { cpu: '100m', memory: '102Mi' }
        },
        volumeClaimTemplates: [{
          spec: {
            accessModes: ['ReadWriteOnce'],
            resources: { requests: { storage: '3Gi' } },
            storageClassName: 'openebs-backup'
          }
        }]
      }]
    }
  };
  // ...apply the cluster manifest via the custom objects API...

  // 3. Wait for cluster ready and get credentials
  await this.waitForDatabaseReady(clusterName, namespace);
  return await this.getDatabaseCredentials(clusterName, namespace);
}

- Deployment: Runs the fullstack-web-runtime container
- Service: Internal networking for pod access
- Ingress: External HTTPS access with SSL termination
- Environment Variables: Injected Claude Code API credentials
image: fullstackagent/fullstack-web-runtime:latest
ports:
  - 3000  # Next.js application
  - 5000  # Python/Flask
  - 7681  # ttyd web terminal
  - 8080  # General HTTP
resources:
  requests:
    cpu: 20m
    memory: 25Mi
  limits:
    cpu: 200m
    memory: 256Mi

Special command override for ttyd compatibility:
command: ['/bin/sh']
args: [
  '-c',
  `ttyd --port 7681 --interface 0.0.0.0 --check-origin false /bin/bash`
]

export const authOptions = {
  providers: [
    GitHubProvider({
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
      authorization: {
        params: {
          scope: 'read:user user:email repo'
        }
      }
    })
  ],
  callbacks: {
    async jwt({ token, account, profile }) {
      if (account && profile) {
        // Store GitHub access token (encrypted)
        token.accessToken = account.access_token;
        token.githubId = profile.id;
      }
      return token;
    },
    async session({ session, token }) {
      // Attach user ID for database queries
      session.user.id = token.sub!;
      return session;
    }
  }
}

- Uses single namespace from kubeconfig: ns-ajno7yq7
- No namespace creation permissions required
- All resources tagged with project labels
[project-name]-agentruntime-[6-char-random]
- Ensures uniqueness across deployments
- Allows easy resource filtering
- Supports multiple sandboxes per project
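The naming scheme above can be sketched as a small helper. This is an illustrative sketch, not the platform's actual code; the sanitization rules are an assumption based on Kubernetes DNS-1123 label requirements.

```typescript
// Hypothetical helper for the [project-name]-agentruntime-[6-char-random] scheme.
// Sanitization rules assume Kubernetes DNS-1123 label constraints.
function generateSandboxName(projectName: string): string {
  // Kubernetes resource names must be lowercase alphanumerics and hyphens
  const sanitized = projectName
    .toLowerCase()
    .replace(/[^a-z0-9-]/g, '-')
    .replace(/^-+|-+$/g, '');
  // 6-character random suffix for uniqueness across deployments
  const suffix = Math.random().toString(36).slice(2, 8).padEnd(6, '0');
  return `${sanitized}-agentruntime-${suffix}`;
}
```

For example, `generateSandboxName('My App')` produces something like `my-app-agentruntime-` followed by six random characters.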
Critical labels for resource management:
labels:
  cloud.sealos.io/app-deploy-manager: [resource-name]
  project.fullstackagent.io/name: [project-name]
  app: [deployment-name]

Two ingresses per sandbox:
- Application Ingress: Port 3000
- Terminal Ingress: Port 7681 with WebSocket support
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "32m"
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  # For ttyd WebSocket support:
  nginx.ingress.kubernetes.io/proxy-set-headers: |
    Upgrade $http_upgrade
    Connection "upgrade"

postgresql://[username]:[password]@[host]:[port]/[database]?schema=public
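A sketch of assembling this connection string from its parts — a hypothetical helper, with parameter names mirroring the KubeBlocks connection-secret fields, not the platform's actual code:

```typescript
// Hypothetical helper showing how the DATABASE_URL format above is assembled.
// In practice the parts come from the KubeBlocks connection secret.
function buildDatabaseUrl(
  host: string,
  port: string,
  database: string,
  username: string,
  password: string
): string {
  // Credentials may contain reserved characters, so URL-encode them
  const user = encodeURIComponent(username);
  const pass = encodeURIComponent(password);
  return `postgresql://${user}:${pass}@${host}:${port}/${database}?schema=public`;
}
```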
KubeBlocks automatically creates secrets with naming pattern:
[cluster-name]-conn-credential
Secret structure:
data:
  host: [base64-encoded]
  port: [base64-encoded]
  database: [base64-encoded]
  username: [base64-encoded]
  password: [base64-encoded]

Problem: System assumed database exists if project.databaseUrl is set
Solution: Verify actual cluster existence before using existing database
if (project.databaseUrl) {
  try {
    // Try to get database info from Kubernetes
    const dbInfo = await k8sService.getDatabaseSecret(project.name, namespace);
    // Database exists, use it
  } catch (error) {
    // Database doesn't exist, create new one
    needCreateDatabase = true;
  }
}

1. Pre-flight Checks
   - Verify user authentication
   - Check project ownership
   - Delete existing terminated sandboxes
2. Database Provisioning
   - Check if database exists in Kubernetes
   - Create if needed (KubeBlocks cluster)
   - Wait for database ready state
   - Retrieve connection credentials
3. Sandbox Deployment
   - Create Kubernetes Deployment
   - Create Service for internal networking
   - Create Ingress for external access
   - Inject environment variables
4. Post-Creation
   - Update database records
   - Return public URLs
   - Monitor pod startup status
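The four phases above can be sketched as a single orchestration function. The stub names (`preflight`, `database`, and so on) are illustrative stand-ins for the real Kubernetes and database calls, not the platform's actual API:

```typescript
// Illustrative sketch of the creation flow; each step is a stub standing in
// for the real Kubernetes/database operations described above.
type Step = (projectId: string) => Promise<void>;

async function createSandbox(projectId: string): Promise<string[]> {
  const completed: string[] = [];
  const run = async (name: string, step: Step) => {
    await step(projectId);
    completed.push(name);
  };
  await run('preflight', async () => { /* auth, ownership, cleanup */ });
  await run('database', async () => { /* ensure KubeBlocks cluster, credentials */ });
  await run('deploy', async () => { /* Deployment, Service, Ingress, env vars */ });
  await run('post-creation', async () => { /* DB records, URLs, pod status */ });
  return completed;
}
```

Running the steps sequentially keeps failure handling simple: any rejected step aborts the flow before later phases touch the cluster.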
Five-stage progress indication:
- Database Creation: PostgreSQL provisioning
- Container Provisioning: Deploying runtime pod
- Network Configuration: Setting up services/ingress
- Terminal Initialization: Starting ttyd service
- Environment Ready: Sandbox operational
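A minimal sketch of how the five stages might be modeled for the progress indicator — the structure and percentage mapping here are assumptions, not the platform's actual UI code:

```typescript
// Hypothetical stage model for the five-step progress indicator above.
const SANDBOX_STAGES = [
  'Database Creation',
  'Container Provisioning',
  'Network Configuration',
  'Terminal Initialization',
  'Environment Ready',
] as const;

// Map a completed-stage count to a rough progress percentage
function progressPercent(completedStages: number): number {
  const clamped = Math.max(0, Math.min(completedStages, SANDBOX_STAGES.length));
  return Math.round((clamped / SANDBOX_STAGES.length) * 100);
}
```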
- Delete Kubernetes Deployment
- Delete Service
- Delete Ingress resources
- Update database status to "TERMINATED"
- Keep database for data persistence
// GET /api/sandbox/[projectId]
// Returns sandbox status and URLs
// POST /api/sandbox/[projectId]
// Creates or restarts sandbox
// Body: { envVars: Record<string, string> }
// DELETE /api/sandbox/[projectId]
// Terminates sandbox

Comprehensive error responses:
{
  "error": "Failed to create sandbox",
  "details": {
    "name": "Error",
    "message": "Detailed error message",
    "stack": "Stack trace (dev only)",
    "kubernetesStatus": "Pod status if available"
  },
  "projectId": "project-id",
  "timestamp": "2025-10-11T10:00:00.000Z"
}

- GitHub OAuth for user authentication
- Session-based authorization
- Encrypted access token storage
- ServiceAccount with scoped permissions
- Resource limits to prevent DoS
- Network isolation between sandboxes
- No privileged container access
// Environment variables injection
const containerEnv = {
  ...claudeEnvVars,    // From .secret/.env
  ...projectEnvVars,   // From database
  ...requestEnvVars,   // From API request
  DATABASE_URL: dbConnectionString,
  NODE_ENV: 'development'
};

- Project name sanitization for Kubernetes compatibility
- SQL injection prevention via Prisma ORM
- XSS protection in React components
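For the sanitization point above, a validation sketch assuming Kubernetes DNS-1123 label rules (lowercase alphanumerics and hyphens, at most 63 characters) — an illustration, not the platform's exact implementation:

```typescript
// Illustrative validator for Kubernetes-compatible project names, assuming
// DNS-1123 label rules: lowercase alphanumerics and hyphens, max 63 chars,
// starting and ending with an alphanumeric.
function isValidProjectName(name: string): boolean {
  return /^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$/.test(name);
}
```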
Conservative defaults to maximize density:
- CPU: 20m request, 200m limit
- Memory: 25Mi request, 256Mi limit
- Allows ~50 sandboxes per node (2 CPU, 4GB RAM)
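As a sanity check, the density a scheduler would allow by resource requests alone can be computed from the defaults above; the ~50 figure is presumably conservative, leaving headroom for system daemons and the ingress controller:

```typescript
// Rough bin-packing by requests on a 2-CPU / 4 GiB node, using the default
// requests above. Real schedulers also reserve capacity for system pods.
const nodeCpuMillicores = 2000;
const nodeMemoryMi = 4096;
const byCpu = Math.floor(nodeCpuMillicores / 20);  // 20m CPU request
const byMemory = Math.floor(nodeMemoryMi / 25);    // 25Mi memory request
const maxSandboxes = Math.min(byCpu, byMemory);    // CPU is the binding constraint
```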
Prisma connection management:
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL,
    },
  },
  log: ['error', 'warn'],
});

- KubeBlocks cluster list cached for getDatabaseSecret
- Deployment status cached for 5 seconds
- Static assets cached via Next.js
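The 5-second status cache could be implemented with a small TTL map like this sketch (illustrative only; not the platform's actual caching code):

```typescript
// Minimal TTL cache sketch for the 5-second deployment-status caching above.
// `now` is injectable so expiry can be tested deterministically.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) {
      this.entries.delete(key);  // drop stale entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

A status lookup would consult the cache first and fall back to the Kubernetes API on a miss, refreshing the entry afterwards.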
Parallel operations for efficiency:
// Parallel deletion of resources
await Promise.all([
  deleteDeployments(projectName),
  deleteServices(projectName),
  deleteIngresses(projectName)
]);

Symptoms: Sandbox creation fails immediately
Causes:
- Kubeconfig not properly loaded
- Kubernetes API unreachable
- Insufficient permissions
Solutions:
# Verify kubeconfig
cat .secret/kubeconfig | grep server
# Should show: https://usw.sealos.io:6443
# Test API connection
node test-k8s.mjs
# Check logs
npm run dev 2>&1 | grep -E "Error|Failed"

Symptoms: Sandbox created but no database
Causes:
- KubeBlocks CRDs not installed
- Project has stale databaseUrl
Solutions:
# Check existing clusters
node check-databases.mjs

-- Clear stale database URL in database
UPDATE projects SET "databaseUrl" = NULL WHERE name = 'project-name';

Symptoms: ttyd URL returns 404 or connection refused
Causes:
- ttyd not starting properly
- Ingress misconfiguration
- WebSocket headers missing
Solutions:
- Check pod logs for ttyd errors
- Verify ingress has WebSocket annotations
- Ensure port 7681 is exposed
Symptoms: "Cannot read property 'items' of undefined"
Fix Applied: Handle both response formats

const items = response.body?.items || (response as any).items || [];

# Check pod status
kubectl --kubeconfig=.secret/kubeconfig get pods -n ns-ajno7yq7
# View pod logs
kubectl --kubeconfig=.secret/kubeconfig logs [pod-name] -n ns-ajno7yq7
# Describe deployment
kubectl --kubeconfig=.secret/kubeconfig describe deployment [name] -n ns-ajno7yq7
# Check ingress
kubectl --kubeconfig=.secret/kubeconfig get ingress -n ns-ajno7yq7

- Next.js server: Console output
- Kubernetes events: kubectl get events
- Pod logs: kubectl logs [pod-name]
- Database logs: KubeBlocks cluster logs
- TypeScript strict mode enabled
- ESLint configuration for consistency
- Prettier for formatting
- Unit tests for utilities
- Integration tests for API endpoints
- E2E tests for critical flows
- Feature branches from main
- Pull request with review
- Squash merge to main
- Automatic deployment via CI/CD
.env.local # Local development
.env.production # Production settings
.secret/.env # Claude Code credentials
.secret/kubeconfig # Kubernetes config
- Multi-region Support: Deploy sandboxes across regions
- Custom Images: User-provided Docker images
- Persistent Storage: Volume mounts for data persistence
- Collaborative Editing: Real-time code collaboration
- CI/CD Integration: Automatic deployment pipelines
- Resource Scaling: Dynamic resource allocation
- Monitoring Dashboard: Resource usage visualization
- Backup/Restore: Project state snapshots
- Message Queue: Async sandbox creation
- WebSocket Updates: Real-time status updates
- Distributed Caching: Redis for session/data caching
- Metrics Collection: Prometheus integration
- Log Aggregation: Centralized logging system
FullStack Agent represents a modern approach to AI-assisted development, combining the power of Claude Code with Kubernetes orchestration. The architecture prioritizes:
- Security: Isolated environments with proper authentication
- Scalability: Kubernetes-based resource management
- Developer Experience: Seamless project creation and management
- Reliability: Comprehensive error handling and recovery
The platform continues to evolve with community feedback and contributions, aiming to make AI-powered development accessible to everyone.
Last Updated: 2025-10-11
Version: 1.0.0