# Container Orchestration with Kubernetes: Best Practices

Kubernetes has become the industry standard for managing and orchestrating modern containerized applications. In this comprehensive guide, we take a detailed look at the best practices that are critical when running Kubernetes in production environments.
## Table of Contents

- [Kubernetes Fundamentals and Architecture](#kubernetes-temelleri)
- [Resource Management](#resource-management)
- [Security Best Practices](#guvenlik-best-practices)
- [Monitoring and Logging](#monitoring-logging)
- [High Availability Configuration](#high-availability)
- [CI/CD Integration](#cicd-entegrasyonu)
- [Troubleshooting and Debugging](#troubleshooting)
## Kubernetes Fundamentals and Architecture {#kubernetes-temelleri}

### Cluster Architecture

A Kubernetes cluster consists of two main groups of components:

**Control plane components:**

- **kube-apiserver**: the heart of the cluster; handles all API calls
- **etcd**: the distributed key-value store that holds cluster state
- **kube-scheduler**: assigns Pods to suitable nodes
- **kube-controller-manager**: runs the various controllers

**Node components:**

- **kubelet**: the primary agent running on every node
- **kube-proxy**: network proxy and load balancer
- **Container runtime**: Docker, containerd, or CRI-O
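For orientation, the components above can be inspected directly on a running cluster. Assuming `kubectl` is configured (and, on kubeadm-style clusters, that control plane components run as static pods in `kube-system`):

```shell
# Control plane components (apiserver, etcd, scheduler, controller-manager)
kubectl get pods -n kube-system -o wide

# Nodes with their kubelet version and container runtime
kubectl get nodes -o wide
```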
### Namespace Strategy
```yaml
# development-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    environment: dev
    team: backend
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "10"
```
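To apply the namespace and quota, and to confirm the limits are actually in force (assuming the manifest is saved as `development-namespace.yaml`):

```shell
kubectl apply -f development-namespace.yaml

# Shows used vs. hard limits for each quota'd resource
kubectl describe resourcequota dev-quota -n development
```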
## Resource Management {#resource-management}

### CPU and Memory Limits

Resource management is critical for keeping your Kubernetes cluster stable:
```yaml
# web-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myregistry/web-app:v1.2.3
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
```
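A quick sanity check of the probe endpoints before relying on them in production: port-forward to the Deployment and hit the same paths the probes use (`/health` and `/ready` as defined above):

```shell
kubectl port-forward deployment/web-app 8080:8080 -n production &

# -f makes curl exit non-zero on HTTP errors, mimicking a failed probe
curl -fsS http://localhost:8080/health
curl -fsS http://localhost:8080/ready
```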
### Horizontal Pod Autoscaler (HPA)
```yaml
# hpa-config.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
```
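Once applied, scaling decisions can be observed live. Note that the HPA needs the metrics-server (or another metrics source) installed, otherwise the targets show as `<unknown>`:

```shell
# Current vs. target utilization and replica count, updated continuously
kubectl get hpa web-app-hpa -n production --watch

# Scaling events and conditions
kubectl describe hpa web-app-hpa -n production
```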
## Security Best Practices {#guvenlik-best-practices}

### RBAC (Role-Based Access Control)
```yaml
# rbac-config.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: developer-role
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: development
subjects:
- kind: User
  name: jane.developer@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
```
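RBAC rules are easiest to verify with `kubectl auth can-i`, impersonating the bound user. Given the Role above, the first check should answer `yes` and the second `no` (nodes are cluster-scoped and not granted):

```shell
kubectl auth can-i create deployments -n development \
  --as jane.developer@company.com

kubectl auth can-i delete nodes \
  --as jane.developer@company.com
```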
### Pod Security Standards
```yaml
# pod-security-policy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: myregistry/secure-app:v1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
    volumeMounts:
    - name: tmp-volume
      mountPath: /tmp
    - name: cache-volume
      mountPath: /app/cache
  volumes:
  - name: tmp-volume
    emptyDir: {}
  - name: cache-volume
    emptyDir: {}
```
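After the Pod starts, the effective security context can be spot-checked from the inside (assuming the image ships a shell and coreutils):

```shell
# Should print 1000 (runAsUser)
kubectl exec secure-pod -n production -- id -u

# Should fail: the root filesystem is read-only; only /tmp and /app/cache are writable
kubectl exec secure-pod -n production -- touch /test-file
```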
### Network Policies
```yaml
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-system
    - podSelector:
        matchLabels:
          app: load-balancer
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  # DNS egress to any destination
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
```
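One way to verify the policy is a throwaway pod that matches none of the allowed `from` selectors; its connection attempt should time out rather than connect (assuming a `web-app` Service on port 8080 and the `nicolaka/netshoot` image):

```shell
# Expected to fail: the test pod is neither in ingress-system
# nor labeled app=load-balancer
kubectl run np-test --rm -it --image=nicolaka/netshoot -n production -- \
  nc -zv -w 3 web-app.production.svc.cluster.local 8080
```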
## Monitoring and Logging {#monitoring-logging}

### Prometheus Monitoring
```yaml
# monitoring-config.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-app-metrics
  namespace: production
  labels:
    app: web-app
spec:
  selector:
    matchLabels:
      app: web-app
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-metrics
  namespace: production
  labels:
    app: web-app
spec:
  ports:
  - name: metrics
    port: 9090
    targetPort: 9090
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: web-app
```

Note that `ServiceMonitor` is a Prometheus Operator custom resource (`monitoring.coreos.com/v1`), so the operator must be installed in the cluster.
### Centralized Logging
```yaml
# fluentd-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag kubernetes.*
      format json
      read_from_head true
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      index_name kubernetes
      type_name _doc
    </match>
```
## High Availability Configuration {#high-availability}

### Multi-Zone Deployment
```yaml
# ha-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-ha
  namespace: production
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-app
            topologyKey: "kubernetes.io/hostname"
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west-2a
                - us-west-2b
                - us-west-2c
      containers:
      - name: web-app
        image: myregistry/web-app:v1.2.3
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
```

The `topology.kubernetes.io/zone` label replaces the deprecated `failure-domain.beta.kubernetes.io/zone`; on clusters older than 1.17 the beta label may still be required.
### PodDisruptionBudget
```yaml
# pod-disruption-budget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-app-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web-app
```
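The budget's effect is visible in `kubectl`, and voluntary disruptions such as node drains respect it (`<node-name>` is a placeholder):

```shell
# ALLOWED DISRUPTIONS shows how many pods may be evicted right now
kubectl get pdb web-app-pdb -n production

# drain evicts pods but blocks if it would violate the PDB
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```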
## CI/CD Integration {#cicd-entegrasyonu}

### Deployment with GitOps
```yaml
# gitops-pipeline.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: deployment-config
  namespace: production
data:
  deploy.sh: |
    #!/bin/bash
    set -e

    # Read the image tag from an environment variable
    IMAGE_TAG=${GITHUB_SHA:-latest}

    # Update the deployment
    kubectl set image deployment/web-app \
      web-app=myregistry/web-app:${IMAGE_TAG} \
      --namespace=production

    # Wait for the rollout to complete
    kubectl rollout status deployment/web-app \
      --namespace=production \
      --timeout=300s

    # Health check
    kubectl get pods -l app=web-app \
      --namespace=production
```
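The deploy script above aborts on the first transient API error. As a hardening sketch (generic bash, not part of the original ConfigMap), a small retry helper with exponential backoff can wrap flaky `kubectl` calls:

```shell
#!/usr/bin/env bash
# retry: run a command up to N times, doubling the wait between attempts.
# Usage: retry <attempts> <command> [args...]
retry() {
  local attempts=$1; shift
  local delay=1 i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
  done
  return 1
}
```

In the script above, `kubectl rollout status ...` would then become `retry 3 kubectl rollout status ...`.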
### Canary Deployment
```yaml
# canary-deployment.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web-app-rollout
  namespace: production
spec:
  replicas: 5
  strategy:
    canary:
      canaryService: web-app-canary
      stableService: web-app-stable
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {duration: 10s}
      - setWeight: 60
      - pause: {duration: 10s}
      - setWeight: 80
      - pause: {duration: 10s}
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myregistry/web-app:v1.2.3
```
## Troubleshooting and Debugging {#troubleshooting}

### Basic Debug Commands
```bash
# Check pod status
kubectl get pods -o wide

# Follow pod logs
kubectl logs -f deployment/web-app

# Open a shell in a pod for debugging
kubectl exec -it pod/web-app-xxx -- /bin/bash

# Check cluster events
kubectl get events --sort-by=.metadata.creationTimestamp

# Check resource usage
kubectl top pods
kubectl top nodes

# Test network connectivity
kubectl run debug --image=nicolaka/netshoot -it --rm -- /bin/bash
```
### Performance Tuning
```yaml
# performance-config.yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-performance-app
spec:
  containers:
  - name: app
    image: myregistry/high-perf-app:v1.0
    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "2Gi"
        cpu: "1000m"
    env:
    - name: GOMAXPROCS
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```
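Whether the values landed as intended can be checked from inside the pod. Note that the downward API rounds `limits.cpu` up to whole cores with the default divisor, and exposes `limits.memory` as a plain byte count, which the Go runtime accepts for `GOMEMLIMIT`:

```shell
kubectl exec high-performance-app -- env | grep -E 'GOMAXPROCS|GOMEMLIMIT'
```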
## Conclusion

Building a successful container orchestration strategy with Kubernetes requires applying the best practices detailed above systematically. In particular:

- **Resource Management**: correct CPU/memory limits and HPA configuration
- **Security**: RBAC, Pod Security Standards, and Network Policies
- **Monitoring**: Prometheus and a centralized logging stack
- **High Availability**: multi-zone deployments and PDB configuration
- **CI/CD**: GitOps and canary deployment strategies

Kubernetes clusters configured along these principles provide a reliable, scalable, and manageable container orchestration platform for production environments.
As the TekTık Yazılım DevOps team, we offer professional consulting and technical support for your Kubernetes implementations. Visit our contact page for details.