A cloud-native microservice for continuous network compliance checking using NETCONF and Kubernetes.
This project demonstrates a complete DevOps workflow for network automation by implementing a Network Compliance Checker that periodically validates network device configurations against security policies. The system connects to network equipment via the NETCONF protocol to verify compliance with best practices such as disabling Telnet, enabling NTP synchronization, and enforcing hostname naming conventions. The project showcases the integration of network automation, container orchestration, and modern CI/CD practices.
```mermaid
graph LR
    A[GitHub Push] --> B[GitHub Actions CI/CD]
    B --> C[Build Docker Image]
    C --> D[GCP Artifact Registry]
    D --> E[Deploy to GKE Cluster]
    E --> F[Router Pod<br/>NETCONF Server]
    E --> G[Checker CronJob<br/>Compliance Validator]
    G --> H[Creates Job Pod Every 5min]
    H --> I[Connect to Router via NETCONF]
    I --> F
    I --> J[Check Compliance Rules]
    J --> K[Log Results: PASS/FAIL]
```
Component Roles:
- GitHub Actions: Automates the build, test, and deployment pipeline
- GCP Artifact Registry: Stores Docker images securely in the cloud
- GKE Cluster: Orchestrates containerized workloads in a Kubernetes environment
- Router Pod: Runs a NETCONF-enabled network device simulator (sysrepo/netopeer2)
- Checker CronJob: Schedules periodic compliance checks every 5 minutes
- Compliance Logic: Validates configuration against policies (NTP enabled, Telnet disabled, correct hostname)
For a deep dive into the architecture, including screenshots and detailed flow diagrams, see the Architecture Documentation.
The project is fully deployed and running on Google Kubernetes Engine:
Production Environment:
- Deployment: netconf-router-deployment (1/1 pods running)
- CronJob: netconf-checker-cronjob (executing every 5 minutes)
- Service: netconf-router-service (ClusterIP on port 830)
- Platform: Google Kubernetes Engine (GKE) in us-central1-a
To reproduce this project, you'll need to configure GCP:
- Create a GCP Project

  ```bash
  gcloud projects create YOUR_PROJECT_ID
  gcloud config set project YOUR_PROJECT_ID
  ```

- Enable Required APIs

  ```bash
  gcloud services enable container.googleapis.com
  gcloud services enable artifactregistry.googleapis.com
  gcloud services enable compute.googleapis.com
  ```

- Create a GKE Cluster

  ```bash
  gcloud container clusters create netconf-cluster \
    --zone=us-central1-a \
    --num-nodes=2 \
    --machine-type=e2-medium
  ```

- Create an Artifact Registry Repository

  ```bash
  gcloud artifacts repositories create netconf-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="Docker repository for netconf-k8s"
  ```

- Create a Service Account for GitHub Actions

  ```bash
  gcloud iam service-accounts create github-actions-sa \
    --display-name="GitHub Actions Service Account"
  gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:github-actions-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/container.developer"
  gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:github-actions-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/artifactregistry.writer"
  gcloud iam service-accounts keys create key.json \
    --iam-account=github-actions-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com
  ```

- Configure GitHub Secrets

  - Go to your GitHub repository settings
  - Navigate to Secrets and variables > Actions
  - Add the following secrets:
    - `GCP_PROJECT_ID`: Your GCP project ID
    - `GCP_SA_KEY`: Contents of the `key.json` file (base64 encoded)
- Code Push: When you push code to the `main` branch, GitHub Actions triggers automatically
- Build Phase:
  - The workflow checks out the code
  - Builds the Docker image using the multi-stage Dockerfile
  - Tags the image with the commit SHA
- Push Phase:
  - Authenticates to GCP using the service account key
  - Pushes the Docker image to GCP Artifact Registry
- Deploy Phase:
  - Connects to the GKE cluster
  - Applies the Kubernetes manifests (`k8s/*.yaml`)
  - Updates the CronJob with the new image
- Runtime:
  - The Router Deployment runs continuously, simulating a NETCONF-enabled network device
  - The Checker CronJob spawns a Pod every 5 minutes
  - Each Pod connects to the router, retrieves the configuration, and validates compliance (see the sketch below)
  - Results are logged and visible in the GKE console
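For illustration, the snippet below sketches how a checker Pod could retrieve the running configuration from the router over NETCONF. It assumes NETCONF 1.0 framing (`]]>]]>` end-of-message markers), password authentication with assumed default `netconf`/`netconf` credentials for the netopeer2 image, and a plain SSH subsystem via `golang.org/x/crypto/ssh`; the actual `cmd/main.go` may use a dedicated NETCONF client library and different settings.

```go
// Minimal sketch: fetch the running config from the router Pod over NETCONF.
// Assumes NETCONF 1.0 framing (]]>]]>) and assumed default netconf/netconf
// credentials on the netopeer2 container; the real cmd/main.go may differ.
package main

import (
	"bufio"
	"fmt"
	"log"
	"strings"

	"golang.org/x/crypto/ssh"
)

const eom = "]]>]]>" // NETCONF 1.0 end-of-message delimiter

func getRunningConfig(addr, user, pass string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.Password(pass)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // lab/demo use only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()

	stdin, _ := sess.StdinPipe() // error handling elided for brevity
	stdout, _ := sess.StdoutPipe()
	if err := sess.RequestSubsystem("netconf"); err != nil {
		return "", err
	}

	// Advertise base:1.0 only, then request the running datastore.
	hello := `<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><capabilities><capability>urn:ietf:params:netconf:base:1.0</capability></capabilities></hello>`
	rpc := `<rpc message-id="1" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get-config><source><running/></source></get-config></rpc>`
	fmt.Fprint(stdin, hello+eom, rpc+eom)

	// Read two frames: the server's <hello>, then the <rpc-reply>.
	reader := bufio.NewReader(stdout)
	var frames []string
	var buf strings.Builder
	for len(frames) < 2 {
		b, err := reader.ReadByte()
		if err != nil {
			return "", err
		}
		buf.WriteByte(b)
		if strings.HasSuffix(buf.String(), eom) {
			frames = append(frames, strings.TrimSuffix(buf.String(), eom))
			buf.Reset()
		}
	}
	return frames[1], nil
}

func main() {
	cfgXML, err := getRunningConfig("netconf-router-service:830", "netconf", "netconf")
	if err != nil {
		log.Fatalf("NETCONF request failed: %v", err)
	}
	fmt.Println(cfgXML)
}
```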
```
.
├── .github/
│   └── workflows/
│       └── ci-cd.yml             # GitHub Actions CI/CD pipeline
├── assets/
│   └── logo.png                  # Project logo
├── cmd/
│   └── main.go                   # Main Go application (NETCONF client)
├── docs/
│   └── architecture.md           # Detailed architecture documentation
├── k8s/
│   ├── checker-cronjob.yaml      # Kubernetes CronJob for compliance checks
│   └── router-deployment.yaml    # Kubernetes Deployment for NETCONF router
├── Dockerfile                    # Multi-stage Docker build
├── go.mod                        # Go module dependencies
├── go.sum                        # Go module checksums
└── README.md                     # This file
```
- Go 1.21+
- Docker
- kubectl
- minikube (optional, for local Kubernetes testing)
Create a docker-compose.yml file:
```yaml
version: '3.8'
services:
  router:
    image: sysrepo/sysrepo-netopeer2:latest
    ports:
      - "830:830"
  checker:
    build: .
    command: ["--router-address=router:830"]
    depends_on:
      - router
```

Run the stack:

```bash
docker-compose up
```

Alternatively, to test on a local Kubernetes cluster with minikube:

```bash
# Start minikube
minikube start

# Build the image
docker build -t netconf-k8s-inspector:local .

# Load image into minikube
minikube image load netconf-k8s-inspector:local

# Apply Kubernetes manifests
kubectl apply -f k8s/

# View logs
kubectl logs -l app=netconf-router
kubectl logs -l app=netconf-checker
```

To run the checker directly on your machine:

```bash
# Install dependencies
go mod download

# Run the compliance checker
go run cmd/main.go --router-address=localhost:830
```

The compliance checker validates the following rules (a simplified sketch of the checks follows the list):
- ✅ NTP Configuration: Ensures NTP is enabled for time synchronization
- ✅ Telnet Disabled: Verifies that insecure Telnet access is not enabled
- ✅ Hostname Compliance: Checks that the hostname follows naming conventions
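As a rough illustration of the rule logic, the sketch below applies the three policies to the XML returned by `<get-config>`. The element names, the sample config fragment, and the hostname pattern are assumptions made for the example; the real checks in `cmd/main.go` depend on the YANG models loaded on the router.

```go
// Illustrative sketch of the three compliance checks; element names and the
// hostname convention are assumptions, not the router's actual schema.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Result holds the outcome of a single policy check.
type Result struct {
	Rule   string
	Passed bool
}

// hostnameRe encodes a hypothetical convention: lowercase letters, digits,
// and hyphens, starting with a letter (e.g. "core-rtr-01").
var hostnameRe = regexp.MustCompile(`^[a-z][a-z0-9-]*$`)

// checkCompliance applies the three policies to the running-config XML.
func checkCompliance(configXML, hostname string) []Result {
	return []Result{
		{"NTP is enabled", strings.Contains(configXML, "<ntp>") &&
			strings.Contains(configXML, "<enabled>true</enabled>")},
		{"Telnet is disabled", !strings.Contains(configXML, "<telnet")},
		{"Hostname follows naming convention", hostnameRe.MatchString(hostname)},
	}
}

// report prints one [PASS]/[FAIL] line per rule and returns overall success.
func report(results []Result) bool {
	ok := true
	for _, r := range results {
		status := "PASS"
		if !r.Passed {
			status, ok = "FAIL", false
		}
		fmt.Printf("[%s] %s\n", status, r.Rule)
	}
	return ok
}

func main() {
	// Hypothetical config fragment standing in for a real <get-config> reply.
	cfg := "<system><ntp><enabled>true</enabled></ntp></system>"
	if !report(checkCompliance(cfg, "core-rtr-01")) {
		fmt.Println("[FAIL] Compliance check failed")
	}
}
```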
Example output:
```
[INFO] Connecting to NETCONF router at netconf-router-service:830
[INFO] Retrieving running configuration...
[PASS] ✓ NTP is enabled
[PASS] ✓ Telnet is disabled
[PASS] ✓ Hostname follows naming convention
[PASS] Compliance check successful!
```
This project is licensed under the MIT License. See the LICENSE file for details.
Made with ❤️ and Kubernetes
