diff --git a/content/en/building/interoperability/overview.md b/content/en/building/interoperability/overview.md index d5dbd23bb..d600632d7 100644 --- a/content/en/building/interoperability/overview.md +++ b/content/en/building/interoperability/overview.md @@ -65,4 +65,3 @@ The flexibility of mediators also means the CHT is future-proof and can be confi 8. [Communication](https://build.fhir.org/communication.html) You can find additional information and instructions for setting up [cht-interoperability](https://github.com/medic/cht-interoperability) in the [dedicated guidelines](/building/interoperability/openhim) . - diff --git a/content/en/hosting/_index.md b/content/en/hosting/_index.md index 973b2d35d..142ac2818 100644 --- a/content/en/hosting/_index.md +++ b/content/en/hosting/_index.md @@ -21,4 +21,5 @@ System administrators looking to deploy CHT into production should understand [t {{< card link="couch2pg/" title="couch2pg" subtitle="Guides for using couch2pg" icon="presentation-chart-line" tag="deprecated" tagIcon="warning" tagType="warning" >}} {{< card link="sso" title="SSO" subtitle="Setting up Single Sign On" icon="key" >}} {{< card link="medic" title="At Medic" subtitle="Guidelines internal to Medic-hosted CHT instances" icon="briefcase" tag="medic internal" tagIcon="key" >}} + {{< card link="interoperability" title="Interoperability Stack Hosting" subtitle="Guides for hosting CHT Interoperability components" icon="puzzle" >}} {{< /cards >}} diff --git a/content/en/hosting/interoperability/_index.md b/content/en/hosting/interoperability/_index.md new file mode 100644 index 000000000..f7f4d272f --- /dev/null +++ b/content/en/hosting/interoperability/_index.md @@ -0,0 +1,39 @@ +--- +title: "Interoperability Stack Kubernetes Deployment" +linkTitle: "Interoperability" +weight: 3 +description: > + Deploy the CHT Interoperability Stack to Kubernetes using Helm charts +--- + +## Overview + +The CHT Interoperability Stack (CHT + OpenHIM + FHIR) can be deployed 
to Kubernetes using Helm charts. This provides a production-ready, scalable deployment suitable for both local development and cloud environments. + +## What's Included + +The Helm chart deploys: + +- **CHT (Community Health Toolkit)**: API, Nginx, Sentinel, and CouchDB +- **OpenHIM**: Health information mediator with Core API, Router, Console UI, and MongoDB +- **HAPI FHIR**: FHIR R4 server with PostgreSQL database +- **Custom Services**: Configurator for initial setup and Mediator for integration logic + +## Deployment Options + +Choose your deployment target: + +{{< cards >}} +{{< card link="kind" title="Local Development (KIND)" subtitle="Deploy to your local machine using KIND for development and testing" icon="desktop-computer" >}} +{{< card link="eks" title="AWS EKS Deployment" subtitle="Deploy to AWS EKS with Application Load Balancer and SSL" icon="cloud" >}} +{{< /cards >}} + +## Prerequisites + +Both deployment options require: + +- Docker +- kubectl +- Helm 3+ + +Additional prerequisites are listed in each deployment guide. diff --git a/content/en/hosting/interoperability/eks.md b/content/en/hosting/interoperability/eks.md new file mode 100644 index 000000000..b58f76a87 --- /dev/null +++ b/content/en/hosting/interoperability/eks.md @@ -0,0 +1,406 @@ +--- +title: "AWS EKS Deployment" +linkTitle: "EKS Deployment" +weight: 2 +description: > + Deploy the interoperability stack to AWS EKS with Application Load Balancer +--- + +## Overview + +Deploy the CHT Interoperability Stack to Amazon Elastic Kubernetes Service (EKS) for a production-ready, scalable deployment with proper SSL termination and load balancing. 
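Before starting, it can save time to confirm the required command-line tools are installed. A minimal shell sketch (the tool list simply mirrors the prerequisites below):

```shell
# Report which of the required CLI tools are on PATH
for tool in aws kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as `MISSING` should be installed before continuing.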
+ +## Prerequisites + +### AWS Infrastructure + +- AWS account with appropriate permissions +- [AWS CLI](https://aws.amazon.com/cli/) configured +- Existing EKS cluster (or create one) +- [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) installed on the cluster +- Valid SSL certificate in AWS Certificate Manager (ACM) or IAM +- DNS domain for your services + +### Local Tools + +- kubectl configured for your EKS cluster +- Helm 3+ installed +- Docker for building images + +### Required AWS Permissions + +You'll need permissions for: +- ECR (create repositories, push images) +- EKS (describe cluster, update kubeconfig) +- EC2 (for load balancers and security groups) +- IAM (for service roles) +- Route53 (optional, for DNS) + +## Step 1: Push Images to ECR + +### Configure AWS +```bash +export AWS_REGION=eu-west-2 # Change to your region +export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +export ECR_REGISTRY=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com +``` + +### Create ECR Repositories +```bash +# Create repositories +aws ecr create-repository \ + --repository-name cht-interop/configurator \ + --region $AWS_REGION + +aws ecr create-repository \ + --repository-name cht-interop/mediator \ + --region $AWS_REGION +``` + +### Authenticate to ECR +```bash +aws ecr get-login-password --region $AWS_REGION | \ + docker login --username AWS --password-stdin $ECR_REGISTRY +``` + +### Build and Push Images +```bash +# Build images +docker build -f configurator/Dockerfile -t configurator:local . 
+docker build -t mediator:local ./mediator + +# Tag for ECR +docker tag configurator:local $ECR_REGISTRY/cht-interop/configurator:latest +docker tag mediator:local $ECR_REGISTRY/cht-interop/mediator:latest + +# Push to ECR +docker push $ECR_REGISTRY/cht-interop/configurator:latest +docker push $ECR_REGISTRY/cht-interop/mediator:latest +``` + +### Verify Images +```bash +aws ecr list-images \ + --repository-name cht-interop/configurator \ + --region $AWS_REGION + +aws ecr list-images \ + --repository-name cht-interop/mediator \ + --region $AWS_REGION +``` + +## Step 2: Configure values-eks.yaml + +Create or update `charts/values-eks.yaml` with your configuration: +```yaml +global: + namespace: your-namespace + +createNamespace: false # Create namespace manually with proper labels +cluster_type: "eks" + +persistence: + storageClass: gp2 # or gp3 if available + +configurator: + image: 123456789.dkr.ecr.eu-west-2.amazonaws.com/cht-interop/configurator:latest + imagePullPolicy: Always + +mediator: + image: 123456789.dkr.ecr.eu-west-2.amazonaws.com/cht-interop/mediator:latest + imagePullPolicy: Always + +ingress: + annotations: + groupname: "your-alb-group-name" + tags: "Environment=prod,Team=Platform" + certificate: "arn:aws:acm:eu-west-2:123456789:certificate/your-cert-id" + + chtHost: "cht.yourdomain.com" + openhimConsoleHost: "openhim-console.yourdomain.com" + openhimCoreHost: "openhim-api.yourdomain.com" + openhimRouterHost: "openhim-router.yourdomain.com" + +resources: + requests: + memory: "512Mi" + cpu: "250m" + limits: + memory: "2Gi" + cpu: "1000m" +``` + +### Configuration Options + +| Field | Description | Example | +|----------------|------------------------------------------------|---------------------------------------------| +| `namespace` | Kubernetes namespace | `cht-interop-prod` | +| `storageClass` | EBS storage class | `gp2` or `gp3` | +| `groupname` | ALB group name (must match existing ALB group) | `prod-alb` | +| `certificate` | ACM/IAM 
certificate ARN | `arn:aws:acm:region:account:certificate/id` | +| `*Host` | Domain names (must match certificate) | `*.yourdomain.com` | + +## Step 3: Deploy to EKS + +### Prepare Namespace +```bash +# Create namespace with Helm labels +kubectl create namespace your-namespace +kubectl label namespace your-namespace app.kubernetes.io/managed-by=Helm +kubectl annotate namespace your-namespace meta.helm.sh/release-name=cht-interop +kubectl annotate namespace your-namespace meta.helm.sh/release-namespace=your-namespace +``` + +### Install with Helm +```bash +helm install cht-interop ./charts \ + -n your-namespace \ + -f charts/values-eks.yaml +``` + +### Monitor Deployment +```bash +# Watch pods start +kubectl get pods -n your-namespace -w + +# Check all resources +kubectl get all -n your-namespace + +# Check PVCs are bound +kubectl get pvc -n your-namespace + +# Check ingress status +kubectl get ingress -n your-namespace +``` + +## Step 4: Configure DNS + +### Get Load Balancer Address +```bash +kubectl get ingress -n your-namespace -o wide +``` + +Look for the `ADDRESS` column, which will show your ALB hostname (e.g., `k8s-groupname-xxx.region.elb.amazonaws.com`). 
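The hostname can also be captured into a shell variable instead of read off manually, then used to print the CNAME records needed in the next step. A sketch, assuming a single ingress with an address already assigned (the hostname below is a hard-coded placeholder matching the example above):

```shell
# On a live cluster, fetch the ALB hostname with:
#   kubectl get ingress -n your-namespace \
#     -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
# Placeholder value used here for illustration:
ALB="k8s-groupname-xxx.region.elb.amazonaws.com"

# Print the CNAME records to create for each service host
for host in cht openhim-console openhim-api openhim-router; do
  echo "$host.yourdomain.com CNAME $ALB"
done
```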
+ +### Create DNS Records + +In your DNS provider (Route53, Cloudflare, etc.), create CNAME records: +``` +cht.yourdomain.com → k8s-groupname-xxx.region.elb.amazonaws.com +openhim-console.yourdomain.com → k8s-groupname-xxx.region.elb.amazonaws.com +openhim-api.yourdomain.com → k8s-groupname-xxx.region.elb.amazonaws.com +openhim-router.yourdomain.com → k8s-groupname-xxx.region.elb.amazonaws.com +``` + +### Wait for DNS Propagation +```bash +# Test DNS resolution +nslookup cht.yourdomain.com + +# Test HTTPS access +curl -I https://cht.yourdomain.com +``` + +## Accessing Services + +Once DNS is configured, access services at: + +| Service | URL | Purpose | +|----------------------|----------------------------------------|-------------------------------| +| **CHT** | https://cht.yourdomain.com | Main CHT application | +| **OpenHIM Console** | https://openhim-console.yourdomain.com | OpenHIM management UI | +| **OpenHIM Core API** | https://openhim-api.yourdomain.com | OpenHIM management API | +| **OpenHIM Router** | https://openhim-router.yourdomain.com | Health data exchange endpoint | + +Default credentials: +- OpenHIM: `root@openhim.org` / `openhim-password` +- CHT: `admin` / `password` + +## Common Operations + +### View Logs +```bash +# View logs for a specific service +kubectl logs deployment/openhim-core -n your-namespace + +# Follow logs in real-time +kubectl logs -f deployment/mediator -n your-namespace --tail=100 + +# View logs from all containers +kubectl logs deployment/api -n your-namespace --all-containers +``` + +### Scale Services +```bash +# Scale a deployment +kubectl scale deployment openhim-core --replicas=3 -n your-namespace + +# Or update Helm values and upgrade +``` + +### Upgrade Deployment + +After pushing new images or changing configuration: +```bash +# Push new images to ECR (if needed) +docker push $ECR_REGISTRY/cht-interop/configurator:latest +docker push $ECR_REGISTRY/cht-interop/mediator:latest + +# Upgrade Helm release +helm upgrade 
cht-interop ./charts \ + -n your-namespace \ + -f charts/values-eks.yaml + +# Monitor rollout +kubectl rollout status deployment/mediator -n your-namespace +``` + +### Restart a Service +```bash +kubectl rollout restart deployment/openhim-core -n your-namespace +``` + +## Troubleshooting + +### Pods Stuck in Pending (Storage Issues) + +Check storage class and PVC status: +```bash +# Check available storage classes +kubectl get storageclass + +# Check PVC status +kubectl get pvc -n your-namespace + +# Describe PVC to see errors +kubectl describe pvc couchdb-data -n your-namespace +``` + +If using `gp3` storage class but it doesn't exist, change to `gp2` in `values-eks.yaml`. + +### Ingress Not Creating Load Balancer + +Check AWS Load Balancer Controller: +```bash +# Verify controller is running +kubectl get deployment -n kube-system aws-load-balancer-controller + +# Check controller logs +kubectl logs -n kube-system deployment/aws-load-balancer-controller + +# Describe ingress to see events +kubectl describe ingress -n your-namespace +``` + +Common issues: +- Missing IAM permissions for the controller +- Incorrect subnet tags (`kubernetes.io/role/elb=1`) +- Security group issues + +### SSL/TLS Certificate Issues +```bash +# Verify certificate ARN +aws acm describe-certificate \ + --certificate-arn your-cert-arn \ + --region $AWS_REGION + +# Check if domains match +# Certificate must cover all ingress hostnames +``` + +### Image Pull Errors + +If pods show `ImagePullBackOff`: +```bash +# Check if images exist in ECR +aws ecr describe-images \ + --repository-name cht-interop/configurator \ + --region $AWS_REGION + +# Verify EKS node has ECR permissions +# Nodes need AmazonEC2ContainerRegistryReadOnly policy +``` + +### Database Connection Issues +```bash +# Check if databases are ready +kubectl get pods -n your-namespace -l app=mongo + +# Test connectivity from a pod +kubectl exec -it deployment/openhim-core -n your-namespace -- \ + nc -zv mongo 27017 + +# Check 
service endpoints
+kubectl get endpoints -n your-namespace
+```
+
+### 502 Bad Gateway Errors
+
+Check target health and service configuration:
+```bash
+# Check pod health
+kubectl get pods -n your-namespace
+
+# Check service endpoints
+kubectl describe svc nginx -n your-namespace
+
+# Check ingress configuration
+kubectl describe ingress -n your-namespace
+
+# View ALB target groups in AWS Console
+# Look for unhealthy targets
+```
+
+## Monitoring and Logging
+
+### Application Logs
+```bash
+# Export logs to local file
+kubectl logs deployment/mediator -n your-namespace > mediator.log
+
+# Search logs
+kubectl logs deployment/openhim-core -n your-namespace | grep ERROR
+```
+
+## Security Best Practices
+
+1. **Change default passwords** before deployment; the defaults are publicly documented
+2. **Use AWS Secrets Manager** or Kubernetes Secrets for sensitive data
+3. **Enable Pod Security Standards** for your namespace
+4. **Restrict network access** using Network Policies
+5. **Enable audit logging** on your EKS cluster
+6. **Regularly update** images and Helm charts
+7. **Use least-privilege IAM roles** for service accounts
+
+## Clean Up
+
+### Delete Helm Release
+```bash
+helm uninstall cht-interop -n your-namespace
+```
+
+### Delete PVCs
+```bash
+# List PVCs
+kubectl get pvc -n your-namespace
+
+# Delete all PVCs (this deletes data!)
+kubectl delete pvc --all -n your-namespace
+```
+
+### Delete ECR Images
+```bash
+aws ecr delete-repository \
+  --repository-name cht-interop/configurator \
+  --region $AWS_REGION \
+  --force
+
+aws ecr delete-repository \
+  --repository-name cht-interop/mediator \
+  --region $AWS_REGION \
+  --force
+```
+
+{{< callout type="warning" >}}
+Deleting PVCs will permanently delete all data, including databases. Make sure to back up data before cleaning up.
+{{< /callout >}} diff --git a/content/en/hosting/interoperability/kind.md b/content/en/hosting/interoperability/kind.md new file mode 100644 index 000000000..8302fda7d --- /dev/null +++ b/content/en/hosting/interoperability/kind.md @@ -0,0 +1,232 @@ +--- +title: "Local Development with KIND" +linkTitle: "Local KIND Development" +weight: 1 +description: > + Deploy the interoperability stack locally using KIND (Kubernetes in Docker) +--- + +## Overview + +KIND (Kubernetes in Docker) allows you to run a Kubernetes cluster on your local machine using Docker containers. This is perfect for development and testing. + +## Prerequisites + +- [Docker Desktop](https://www.docker.com/products/docker-desktop) or Docker Engine +- [KIND](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) installed +- [Helm 3+](https://helm.sh/docs/intro/install/) installed +- kubectl configured + +## Quick Start + +The fastest way to get started is using our automated setup script: +```bash +# Clone the repository +git clone git@github.com:medic/cht-interoperability.git +cd cht-interoperability + +# Run the setup script +chmod +x ./start_local_kubernetes.sh +./start_local_kubernetes.sh +``` + +The script will: +1. Create a KIND cluster named `cht-interop` +2. Build custom Docker images (configurator and mediator) +3. Load images into the KIND cluster +4. Deploy the Helm chart +5. Set up port forwarding automatically + +## Manual Setup + +If you prefer to run commands manually: + +### 1. Create KIND Cluster +```bash +kind create cluster --name cht-interop +``` + +### 2. Build Custom Images +```bash +# Build configurator +docker build -f configurator/Dockerfile -t configurator:local . + +# Build mediator +docker build -t mediator:local ./mediator +``` + +### 3. Load Images into KIND +```bash +kind load docker-image configurator:local --name cht-interop +kind load docker-image mediator:local --name cht-interop +``` + +### 4. 
Deploy with Helm +```bash +helm install cht-interop ./charts -n cht-interop --create-namespace +``` + +### 5. Wait for Pods to be Ready +```bash +# Watch pods start +kubectl get pods -n cht-interop -w + +# Or wait for specific services +kubectl wait --for=condition=ready pod -l app=openhim-core -n cht-interop --timeout=300s +``` + +### 6. Set Up Port Forwarding +```bash +# Use the provided script +chmod +x port-forward.sh +./port-forward.sh + +# Or manually forward ports +kubectl port-forward svc/openhim-console 9000:80 -n cht-interop & +kubectl port-forward svc/openhim-core 8080:8080 -n cht-interop & +kubectl port-forward svc/api 5988:5988 -n cht-interop & +``` + +## Accessing Services + +Once deployed, access services at these URLs: + +| Service | URL | Default Credentials | +|----------------------|------------------------|-------------------------------------| +| **OpenHIM Console** | http://localhost:9000 | root@openhim.org / openhim-password | +| **OpenHIM Core API** | https://localhost:8080 | root@openhim.org / openhim-password | +| **OpenHIM Router** | http://localhost:5001 | - | +| **CHT API** | http://localhost:5988 | admin / password | +| **Mediator** | http://localhost:6000 | - | + +{{< callout type="info" >}} +OpenHIM Core uses a self-signed certificate. Your browser will show a security warning - this is expected for local development. 
{{< /callout >}}
+
+## Configuration
+
+The deployment uses `charts/values.yaml` with these key settings:
+```yaml
+cluster_type: "kind"
+persistence:
+  storageClass: standard # KIND's default storage class
+configurator:
+  image: configurator:local
+  imagePullPolicy: Never # Don't pull, use local image
+mediator:
+  image: mediator:local
+  imagePullPolicy: Never
+```
+
+## Common Operations
+
+### View Logs
+```bash
+# View logs for a specific service
+kubectl logs deployment/openhim-core -n cht-interop
+
+# Follow logs in real-time
+kubectl logs -f deployment/mediator -n cht-interop
+
+# View logs for all containers in a pod
+kubectl logs deployment/api -n cht-interop --all-containers
+```
+
+### Restart a Service
+```bash
+kubectl rollout restart deployment/openhim-core -n cht-interop
+```
+
+### Upgrade Deployment
+
+After making changes to your code or configuration:
+```bash
+# Rebuild images
+docker build -f configurator/Dockerfile -t configurator:local .
+docker build -t mediator:local ./mediator
+
+# Reload into KIND
+kind load docker-image configurator:local --name cht-interop
+kind load docker-image mediator:local --name cht-interop
+
+# Upgrade Helm release
+helm upgrade cht-interop ./charts -n cht-interop
+```
+
+### Browse the Cluster with K9s
+```bash
+# Install K9s for a better CLI experience
+brew install k9s # macOS
+# or download from https://k9scli.io/
+
+# Run K9s
+k9s --context kind-cht-interop
+```
+
+## Troubleshooting
+
+### Pods Stuck in Pending
+
+Check if PVCs are bound:
+```bash
+kubectl get pvc -n cht-interop
+```
+
+If PVCs show `Pending`, check storage class:
+```bash
+kubectl get storageclass
+```
+
+KIND should have a `standard` storage class by default.
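If the default class is missing or named differently, the chart's storage class can be overridden at upgrade time. A sketch of a values override, assuming the chart reads `persistence.storageClass` as shown in the Configuration section above (`values-override.yaml` is a hypothetical filename):

```yaml
# values-override.yaml: point the chart at an existing storage class
persistence:
  storageClass: standard # replace with a class reported by `kubectl get storageclass`
```

Apply it with `helm upgrade cht-interop ./charts -n cht-interop -f values-override.yaml`.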
+ +### Port Already in Use + +If port forwarding fails with "port already in use": +```bash +# Kill existing port forwards +pkill -f 'kubectl port-forward' + +# Try again +./port-forward.sh +``` + +### Images Not Loading + +If pods show `ImagePullBackOff`: +```bash +# Verify images are loaded in KIND +docker exec -it cht-interop-control-plane crictl images | grep -E "configurator|mediator" + +# Reload images +kind load docker-image configurator:local --name cht-interop +kind load docker-image mediator:local --name cht-interop +``` + +### Database Connection Issues + +If services can't connect to databases: +```bash +# Check if databases are ready +kubectl get pods -n cht-interop -l app=mongo +kubectl get pods -n cht-interop -l app=couchdb + +# Test connectivity from a pod +kubectl exec -it deployment/openhim-core -n cht-interop -- nc -zv mongo 27017 +``` + +## Clean Up + +### Stop Port Forwarding +```bash +pkill -f 'kubectl port-forward' +``` + +### Delete Helm Release +```bash +helm uninstall cht-interop -n cht-interop +``` + +### Delete KIND Cluster +```bash +kind delete cluster --name cht-interop +```