A production-ready Kubernetes operator for automatic DNS synchronization with Gandi.net, built on the Kopf framework.
```bash
# Generate deployment manifests
./k8s/deploy.sh development

# Or for production
./k8s/deploy.sh production
```
- Base Configuration (`k8s/base/`)
- Environment Overlays (`k8s/overlays/`)

```
┌──────────────────────────────────────────────────────┐
│ Kubernetes Cluster (Any Cloud) │
├──────────────────────────────────────────────────────┤
│ │
│ CENTRALIZED IP DETECTION (Leader) │
│ ┌────────────────────────────────────────────────┐ │
│ │ esddns-operator (Single Instance) │ │
│ │ CentralizedIPDetector (detect_wan_ip) │ │
│ │ - Runs once every 5 minutes │ │
│ │ - Detects WAN IP from external services │ │
│ │ - Stores in ConfigMap: esddns-wan-ip │ │
│ └─────────────────────┬──────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ ConfigMap │ │
│ │ esddns-wan-ip │ │
│ │ - current_ip: X.X.X.X │ │
│ │ - timestamp: ... │ │
│ └───────────────────────┘ │
│ ▲ │
│ │ (Read cached IP) │
│ │ │
│ DISTRIBUTED DNS UPDATES (All Nodes) │
│ Node 1 Node 2 Node N │
│ ┌──────────┐ ┌──────────┐ │
│ │ Operator │ │ Operator │ ... DaemonSet │
│ │ Pod │ │ Pod │ (hostNetwork) │
│ │ NodeDNS │ │ NodeDNS │ Distributed │
│ │ Updater │ │ Updater │ │
│ └────┬─────┘ └────┬─────┘ │
│ │ │ │
│ │ (Update if IP changed) │
│ │ │ │
│ └──────────────────┴────────┐ │
│ │ │
│ ┌────────────────────────────────▼──┐ │
│ │ esddns-service (Deployment) │ │
│ │ Flask Web Service (Single Replica) │ │
│ └────────────────────────────────────┘ │
│ │ │
│ ┌──────────▼──────────────────┐ │
│ │ LoadBalancer Service │ │
│ │ (cloud provider ELB/LB) │ │
│ └──────────┬──────────────────┘ │
│ │ │
└─────────────┼────────────────────────────────────────┘
│ External IP
│
┌─────▼──────────┐
│ Gandi.net API │ ◄──────────────────────────┘
│ (DNS Updates) │
└────────────────┘
```
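In code, the leader's half of this flow can be sketched as below. This is an illustration only, not the operator's actual implementation: the ConfigMap name and data keys (`current_ip`, `detected_at`, `timestamp`) follow the diagram above, while the external IP service and helper names are assumptions.

```python
# Minimal sketch of centralized WAN IP detection (illustrative, not the real operator code).
import datetime

import requests
from kubernetes import client, config

NAMESPACE = "esddns"
CONFIGMAP_NAME = "esddns-wan-ip"

def detect_wan_ip() -> str:
    """Ask an external service for the cluster's public (WAN) IP; example endpoint assumed."""
    return requests.get("https://api.ipify.org", timeout=5).text.strip()

def store_wan_ip(ip: str) -> None:
    """Cache the detected IP in the shared ConfigMap that all node updaters read."""
    config.load_incluster_config()  # running inside the cluster
    api = client.CoreV1Api()
    now = datetime.datetime.utcnow().isoformat()
    data = {"current_ip": ip, "detected_at": now, "timestamp": now}
    try:
        api.patch_namespaced_config_map(CONFIGMAP_NAME, NAMESPACE, {"data": data})
    except client.exceptions.ApiException as exc:
        if exc.status != 404:
            raise
        cm = client.V1ConfigMap(metadata=client.V1ObjectMeta(name=CONFIGMAP_NAME), data=data)
        api.create_namespaced_config_map(NAMESPACE, cm)

if __name__ == "__main__":
    store_wan_ip(detect_wan_ip())
```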
```bash
# Setup AWS credentials
export AWS_PROFILE=my-profile

# Deploy
./k8s/deploy.sh production

# Get LoadBalancer IP
kubectl get svc -n esddns esddns-service
# AWS Classic LB or NLB automatically assigned

# Setup GCP credentials
gcloud auth application-default login

# Deploy
./k8s/deploy.sh production

# Get LoadBalancer IP
kubectl get svc -n esddns esddns-service
# GCP Network LB automatically assigned

# Setup Azure credentials
az login

# Deploy
./k8s/deploy.sh production

# Get LoadBalancer IP
kubectl get svc -n esddns esddns-service
# Azure LB automatically assigned
```
For bare-metal Kubernetes clusters, use MetalLB for LoadBalancer support:
```bash
# Automated deployment (installs MetalLB + ESDDNS)
./k8s/deploy-metallb.sh production 192.168.1.100-192.168.1.110

# Or manual deployment
# 1. Install MetalLB
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system --create-namespace

# 2. Create IP address pool
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: esddns-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.110
EOF

# 3. Create L2 advertisement
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: esddns-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - esddns-pool
EOF

# 4. Deploy ESDDNS
./k8s/deploy.sh production
```
For clusters without cloud provider or MetalLB, use NodePort:
```bash
# Change the Service type from LoadBalancer to NodePort
# (either edit service.yaml or patch in place)
kubectl patch svc esddns-service -n esddns \
  -p '{"spec":{"type":"NodePort"}}'

# Access via: <node-ip>:<node-port>
kubectl get svc -n esddns esddns-service
```
```
k8s/
├── esddns_operator.py          # Main Kopf operator implementation
├── Dockerfile                  # Container image definition
├── deploy.sh                   # Cloud deployment automation script
├── deploy-metallb.sh           # On-premises MetalLB + ESDDNS deployment
├── DEPLOYMENT.md               # Detailed deployment guide
├── README.md                   # This file
│
├── base/                       # Base Kubernetes manifests
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   ├── serviceaccount.yaml
│   ├── clusterrole.yaml
│   ├── clusterrolebinding.yaml
│   ├── daemon-deployment.yaml  # Kopf operator DaemonSet
│   ├── service-deployment.yaml # Flask service Deployment
│   ├── service.yaml            # LoadBalancer Service
│   ├── configmap.yaml          # Configuration and dns.ini
│   └── secrets.yaml            # API credentials placeholder
│
├── overlays/
│   ├── development/            # Development environment
│   │   ├── kustomization.yaml
│   │   ├── daemon-dev-patch.yaml
│   │   └── service-dev-patch.yaml
│   │
│   └── production/             # Production environment
│       ├── kustomization.yaml
│       ├── daemon-prod-patch.yaml
│       └── service-prod-patch.yaml
│
└── monitoring/                 # Prometheus monitoring
    ├── prometheus-servicemonitor.yaml
    └── prometheus-rules.yaml
```
See DEPLOYMENT.md for detailed step-by-step installation.
Quick summary:
```bash
# 1. Prepare domain and API key
export GANDI_API_KEY="your-api-key"

# 2. Generate and deploy
./k8s/deploy.sh production

# 3. Verify
kubectl get all -n esddns
kubectl get svc -n esddns esddns-service
```
```bash
# Operator logs
kubectl logs -n esddns -l app=esddns-operator -f

# Service logs
kubectl logs -n esddns -l app=esddns-service -f

# All logs with timestamps and pod prefixes
kubectl logs -n esddns -l 'app in (esddns-operator,esddns-service)' \
  --all-containers=true -f --prefix --timestamps
```
```bash
# View cached WAN IP
kubectl get configmap -n esddns esddns-wan-ip -o yaml

# Example output:
# data:
#   current_ip: 203.0.113.42
#   detected_at: 2025-11-13T15:30:45.123456
#   timestamp: 2025-11-13T15:30:45.123456
```
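A node-level updater could consume this cache roughly as follows. This is a sketch under assumptions; the real NodeDNSUpdater logic lives in `esddns_operator.py` and may differ.

```python
# Sketch: read the cached WAN IP and decide whether a DNS update is needed (illustrative).
from kubernetes import client, config

def cached_wan_ip(namespace: str = "esddns", name: str = "esddns-wan-ip") -> str | None:
    """Return the leader-detected WAN IP from the shared ConfigMap, if present."""
    config.load_incluster_config()
    cm = client.CoreV1Api().read_namespaced_config_map(name, namespace)
    return (cm.data or {}).get("current_ip")

def needs_update(current_dns_ip: str | None) -> bool:
    """Update DNS only when the cached WAN IP exists and differs from the DNS record."""
    wan_ip = cached_wan_ip()
    return wan_ip is not None and wan_ip != current_dns_ip
```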
```bash
# Port-forward to metrics
kubectl port-forward -n esddns daemonset/esddns-operator-daemon 8080:8080

# View metrics
curl http://localhost:8080/metrics
```
```bash
# Get external IP
EXTERNAL_IP=$(kubectl get svc -n esddns esddns-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# View current state
curl http://$EXTERNAL_IP/

# Get raw JSON
curl http://$EXTERNAL_IP/raw
```
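For orientation, the two endpoints queried above could be implemented roughly like the Flask sketch below. The route paths come from the curl examples; the state fields shown are assumptions, not the service's actual schema.

```python
# Sketch of the esddns-service web endpoints (illustrative only).
from flask import Flask, jsonify

app = Flask(__name__)

# In the real service this state comes from the operator / cached ConfigMap.
STATE = {"current_ip": "203.0.113.42", "in_sync": True}

@app.route("/")
def index():
    # Human-readable summary of the current synchronization state.
    return f"ESDDNS - WAN IP: {STATE['current_ip']}, in sync: {STATE['in_sync']}\n"

@app.route("/raw")
def raw():
    # Raw JSON state, as fetched with: curl http://$EXTERNAL_IP/raw
    return jsonify(STATE)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```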
```bash
# Edit ConfigMap
kubectl edit configmap esddns-config -n esddns

# Restart pods to pick up changes
kubectl rollout restart daemonset/esddns-operator-daemon -n esddns
kubectl rollout restart deployment/esddns-service -n esddns
```
```bash
# Delete old secret
kubectl delete secret esddns-gandi-credentials -n esddns

# Create new secret
kubectl create secret generic esddns-gandi-credentials \
  --from-literal=api-key=$NEW_API_KEY \
  -n esddns

# Restart pods
kubectl rollout restart daemonset/esddns-operator-daemon -n esddns
kubectl rollout restart deployment/esddns-service -n esddns
```
```bash
# Check events
kubectl describe daemonset -n esddns esddns-operator-daemon

# Check pod logs
kubectl logs -n esddns -l app=esddns-operator --previous

# Check operator logs
kubectl logs -n esddns -l app=esddns-operator | grep -i dns

# Verify API key
kubectl get secret -n esddns esddns-gandi-credentials -o jsonpath='{.data.api-key}' | base64 -d

# Check domain configuration
kubectl get configmap -n esddns esddns-config -o jsonpath='{.data.target-domain}'
```
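If records never change, reproducing the update call by hand can isolate the problem. The snippet below is a sketch of a Gandi LiveDNS v5 `PUT` on an A record; the endpoint shape and `Apikey` header follow Gandi's public API, but the domain, record name, and helper function are placeholders and this may not match exactly how the operator issues the call.

```python
# Sketch: manually update an A record via the Gandi LiveDNS v5 API (illustrative only).
import os

import requests

LIVEDNS = "https://api.gandi.net/v5/livedns"

def update_a_record(domain: str, rrset_name: str, ip: str, ttl: int = 300) -> None:
    """PUT the record; raises on HTTP errors so failures are visible."""
    url = f"{LIVEDNS}/domains/{domain}/records/{rrset_name}/A"
    headers = {"Authorization": f"Apikey {os.environ['GANDI_API_KEY']}"}
    resp = requests.put(url, json={"rrset_values": [ip], "rrset_ttl": ttl},
                        headers=headers, timeout=10)
    resp.raise_for_status()

# Example with placeholder values:
# update_a_record("example.com", "@", "203.0.113.42")
```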
If the LoadBalancer external IP stays pending, check your cloud provider's load balancer quota and status.
See DEPLOYMENT.md for more troubleshooting steps.
ServiceMonitor resources are configured automatically. Add the following to your Prometheus configuration:

```yaml
serviceMonitorSelector:
  matchLabels:
    app: esddns
```
Available metrics for dashboards:

- `esddns_dns_updates_total`
- `esddns_dns_update_failures_total`
- `esddns_last_dns_update_timestamp`
- `esddns_current_wan_ip_info`
- `esddns_state_in_sync`
- `esddns_service_health`

Alerting rules are provided as PrometheusRules in `k8s/monitoring/prometheus-rules.yaml`.
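As a rough illustration, these metrics could be declared with the standard `prometheus_client` package as sketched below; the operator's actual labels and registration may differ.

```python
# Sketch: Prometheus metrics matching the names listed above (illustrative only).
from prometheus_client import Counter, Gauge, Info, start_http_server

DNS_UPDATES = Counter("esddns_dns_updates_total", "Successful DNS updates")
DNS_UPDATE_FAILURES = Counter("esddns_dns_update_failures_total", "Failed DNS updates")
LAST_DNS_UPDATE = Gauge("esddns_last_dns_update_timestamp", "Unix time of the last DNS update")
CURRENT_WAN_IP = Info("esddns_current_wan_ip", "Currently detected WAN IP")  # exposed as *_info
STATE_IN_SYNC = Gauge("esddns_state_in_sync", "1 when DNS matches the WAN IP, else 0")
SERVICE_HEALTH = Gauge("esddns_service_health", "1 when the web service reports healthy")

if __name__ == "__main__":
    start_http_server(8080)  # same port as the port-forward example above
    CURRENT_WAN_IP.info({"ip": "203.0.113.42"})
    STATE_IN_SYNC.set(1)
```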
Minimal RBAC permissions are assigned to the operator via the ClusterRole in `k8s/base/clusterrole.yaml`.
Optional NetworkPolicy resources can restrict pod ingress and egress traffic.
Resource requests and limits can be adjusted in the environment overlays based on your needs.
Configure via the `DNS_SYNC_INTERVAL` environment variable (seconds). Default: 300 seconds (5 minutes). This controls how often the leader re-detects the WAN IP and the node updaters reconcile DNS records.
A second, shorter timeout (default: 5 seconds) can be adjusted in the deployment manifests for slow networks.
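A hedged sketch of how `DNS_SYNC_INTERVAL` could drive a periodic Kopf timer follows; illustrative only, as the actual handlers in `esddns_operator.py` may be structured differently.

```python
# Sketch: drive periodic reconciliation from DNS_SYNC_INTERVAL (illustrative only).
import os

import kopf

SYNC_INTERVAL = int(os.environ.get("DNS_SYNC_INTERVAL", "300"))  # seconds, default 5 minutes

@kopf.timer("", "v1", "configmaps", interval=SYNC_INTERVAL,
            when=lambda name, **_: name == "esddns-wan-ip")
def periodic_sync(body, logger, **_):
    # On every tick: compare the cached WAN IP with DNS and update if they differ.
    cached_ip = (body.get("data") or {}).get("current_ip")
    logger.info("sync tick: cached WAN IP is %s", cached_ip)
```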
```bash
# Remove everything
kubectl delete namespace esddns

# Or just the deployment
kubectl delete -f esddns-prod.yaml

# Remove monitoring
kubectl delete -f k8s/monitoring/
```
For issues, start with DEPLOYMENT.md and the troubleshooting steps above.
License: same as the ESDDNS project (MIT).