AWS vs Azure vs GCP: A Cloud Provider Comparison
In the world of cloud computing, the three major players, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), each bring their own distinct advantages and capabilities. This comprehensive comparison analyzes all three platforms in depth from a DevOps perspective.
Contents
- Overview and Market Positions
- Compute Services Comparison
- Container and Kubernetes Solutions
- Serverless and Function Services
- Database and Storage Options
- DevOps and CI/CD Tools
- Networking and Security
- Pricing and Cost Management
- Enterprise Features
- Migration Strategies
Overview and Market Positions {#genel-bakis}
Market Share and Ecosystem
Amazon Web Services (AWS)
- Market leader: ~32% market share
- In service since 2006
- Broadest service portfolio (200+ services)
- Most mature and comprehensive ecosystem
- Largest third-party partner ecosystem
Microsoft Azure
- Second largest: ~23% market share
- Launched in 2010
- Strongest enterprise integration
- Leader in hybrid cloud solutions
- Deep integration with the Microsoft ecosystem
Google Cloud Platform (GCP)
- Third largest: ~10% market share
- Launched in 2011
- Strong in AI/ML and data analytics
- Birthplace of Kubernetes
- Google-scale technologies
Global Infrastructure Comparison
# Infrastructure Comparison
AWS:
regions: 33
availability_zones: 105
edge_locations: 400+
coverage: "Broadest global coverage"
Azure:
regions: 60+
availability_zones: 140+
edge_locations: 200+
coverage: "Highest number of regions"
GCP:
regions: 35
zones: 106
edge_locations: 146
coverage: "Premium network infrastructure"
Compute Services Comparison {#compute-services}
Virtual Machines
AWS EC2
# EC2 Instance Types
General_Purpose:
- t4g.nano: "ARM-based, burstable"
- m6i.large: "Intel-based balanced"
- m6a.xlarge: "AMD-based performance"
Compute_Optimized:
- c6i.xlarge: "High-performance CPU"
- c6gn.medium: "Network optimized"
Memory_Optimized:
- r6i.large: "High memory ratio"
- x1e.xlarge: "Extreme memory"
Storage_Optimized:
- i4i.large: "NVMe SSD"
- d3.xlarge: "Dense HDD storage"
Features:
- spot_instances: "Up to 90% cost savings"
- reserved_instances: "1-3 year commitments"
- dedicated_hosts: "Compliance requirements"
Azure Virtual Machines
# Azure VM Series
General_Purpose:
- B-series: "Burstable performance"
- Dv5: "Latest general purpose"
- Av2: "Entry-level economical"
Compute_Optimized:
- Fsv2: "High CPU-to-memory ratio"
- FX: "High frequency CPU"
Memory_Optimized:
- Ev5: "High memory-to-core ratio"
- Mv2: "Largest memory offerings"
GPU_Accelerated:
- NC-series: "NVIDIA Tesla"
- ND-series: "Deep learning"
Features:
- spot_vms: "Cost-effective workloads"
- reserved_instances: "1-3 year savings"
- dedicated_hosts: "Physical server isolation"
Google Compute Engine
# GCE Machine Types
General_Purpose:
- e2-standard: "Cost-optimized"
- n2-standard: "Balanced performance"
- n1-standard: "First generation"
Compute_Optimized:
- c2-standard: "Ultra-high performance"
- c2d-highcpu: "AMD EPYC processors"
Memory_Optimized:
- m2-ultramem: "Highest memory"
- m1-megamem: "Large memory workloads"
Accelerator_Optimized:
- a2-highgpu: "NVIDIA A100"
- a2-megagpu: "High GPU ratio"
Features:
- preemptible_instances: "Up to 80% cost reduction"
- committed_use_discounts: "1-3 year commitments"
- sole_tenant_nodes: "Physical isolation"
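The discount models listed for all three providers (spot/preemptible capacity and 1-3 year commitments) come down to simple arithmetic. A minimal sketch, using assumed hourly rates and discount percentages rather than current list prices:

```python
# Illustrative monthly cost comparison of the pricing models above.
# The hourly rate and discount percentages are assumptions for the
# sake of the example, not current list prices.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, discount=0.0):
    """Monthly cost of one always-on instance at the given discount."""
    return hourly_rate * (1 - discount) * HOURS_PER_MONTH

on_demand = 0.10  # assumed on-demand $/hour

scenarios = {
    'on-demand':         monthly_cost(on_demand),
    'spot/preemptible':  monthly_cost(on_demand, discount=0.80),
    '1-year commitment': monthly_cost(on_demand, discount=0.37),
    '3-year commitment': monthly_cost(on_demand, discount=0.55),
}

for name, cost in scenarios.items():
    print(f'{name:>20}: ${cost:,.2f}/month')
```

Spot and preemptible capacity is the cheapest but can be reclaimed at any time, so it suits interruptible workloads; commitments trade flexibility for a predictable discount.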
Performance Comparison
# Performance benchmark results (illustrative figures, not vendor benchmarks)
import matplotlib.pyplot as plt

providers = ['AWS', 'Azure', 'GCP']
compute_performance = [100, 95, 105]   # relative compute performance
network_latency = [1.2, 1.5, 0.8]      # average latency, ms
storage_iops = [16000, 20000, 25000]   # average IOPS

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].bar(providers, compute_performance)
axes[0].set_title('Relative compute')
axes[1].bar(providers, network_latency)
axes[1].set_title('Network latency (ms)')
axes[2].bar(providers, storage_iops)
axes[2].set_title('Storage IOPS')
plt.tight_layout()
plt.show()

# AWS: balanced, mature
# Azure: slightly lower compute, higher IOPS
# GCP: best network, highest compute performance
Container and Kubernetes Solutions {#container-kubernetes}
Managed Kubernetes Services
Amazon EKS
# EKS Configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: eks-comparison
data:
features: |
- Kubernetes versions: 1.24-1.28
- Control plane: Fully managed
- Node groups: Managed and self-managed
- Fargate: Serverless containers
- Add-ons: AWS Load Balancer Controller, EBS CSI
- Security: IAM integration, Pod Security Standards
- Monitoring: CloudWatch Container Insights
- Service mesh: AWS App Mesh integration
pricing: |
- Control plane: $0.10/hour per cluster
- Worker nodes: EC2 pricing
- Fargate: Per vCPU and GB-second
strengths: |
- Deep AWS service integration
- Excellent security model
- Mature ecosystem
- Strong enterprise features
limitations: |
- Higher learning curve
- More complex setup
- Vendor lock-in concerns
Azure Kubernetes Service (AKS)
# AKS Configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: aks-comparison
data:
features: |
- Kubernetes versions: 1.24-1.28
- Control plane: Free managed
- Node pools: System and user pools
- Virtual nodes: Serverless with ACI
- Add-ons: Application Gateway Ingress, Azure Policy
- Security: Azure AD integration, Azure RBAC
- Monitoring: Azure Monitor for containers
- Service mesh: Istio, Linkerd, Consul Connect
pricing: |
- Control plane: Free
- Worker nodes: VM pricing
- Virtual nodes: ACI pricing
strengths: |
- Free control plane
- Excellent enterprise integration
- Strong Windows container support
- Good developer experience
limitations: |
- Less service ecosystem than AWS
- Regional availability limitations
- Some features still in preview
Google Kubernetes Engine (GKE)
# GKE Configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: gke-comparison
data:
features: |
- Kubernetes versions: 1.24-1.29
- Control plane: Standard and Autopilot modes
- Node pools: Multiple pools with auto-scaling
- Serverless: Cloud Run for Anthos
- Add-ons: Istio, Knative, Config Connector
- Security: Binary Authorization, GKE Sandbox
- Monitoring: Google Cloud Monitoring
- Service mesh: Anthos Service Mesh (Istio)
pricing: |
- Standard: $0.10/hour per cluster
- Autopilot: Resource-based pricing
- Node pools: Compute Engine pricing
strengths: |
- Kubernetes innovation leader
- Best auto-scaling capabilities
- Excellent observability
- Advanced security features
- Autopilot mode simplicity
limitations: |
- Smaller ecosystem than AWS/Azure
- Less enterprise tooling
- Regional limitations in some areas
Container Registry Comparison
#!/bin/bash
# Container Registry Comparison
# AWS ECR
aws ecr create-repository --repository-name myapp
docker build -t myapp .
docker tag myapp:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
# Azure Container Registry
az acr create --name myregistry --resource-group mygroup --sku Standard
docker build -t myapp .
docker tag myapp:latest myregistry.azurecr.io/myapp:latest
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:latest
# Google Container Registry (gcr.io; superseded by Artifact Registry)
gcloud auth configure-docker
docker build -t myapp .
docker tag myapp:latest gcr.io/my-project/myapp:latest
docker push gcr.io/my-project/myapp:latest
# Feature Comparison
echo "ECR: Deep AWS integration, vulnerability scanning, lifecycle policies"
echo "ACR: Geo-replication, Helm charts, content trust"
echo "GCR: Fast global distribution, automatic base image updates"
Serverless and Function Services {#serverless-functions}
Function as a Service (FaaS)
AWS Lambda
# AWS Lambda Function
import json
import boto3
import os
def lambda_handler(event, context):
"""
AWS Lambda function example
Features:
- Multiple runtime support (Python, Node.js, Java, Go, .NET, Ruby)
- 15-minute maximum execution time
- 10GB memory limit
- VPC support
- Dead letter queues
- Layers for code sharing
"""
# Environment variables
table_name = os.environ['DYNAMODB_TABLE']
# AWS service integration
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(table_name)
try:
# Process event
response = table.put_item(Item=event['body'])
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
},
'body': json.dumps({
'message': 'Success',
'requestId': context.aws_request_id
})
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps({
'error': str(e)
})
}
# Pricing: Pay per request and compute time
# Free tier: 1M requests and 400,000 GB-seconds per month
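The pay-per-request-and-compute pricing noted above can be estimated with a short calculation. The rates below are illustrative us-east-1 figures and the free-tier thresholds come from the comments above; always check the current price list:

```python
# Back-of-the-envelope Lambda cost estimate for the pay-per-use model
# described above. Rates are illustrative and may have changed.

REQUEST_PRICE = 0.20 / 1_000_000   # $ per request (assumed)
GB_SECOND_PRICE = 0.0000166667     # $ per GB-second (assumed)

def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb,
                        free_requests=1_000_000, free_gb_s=400_000):
    """Estimate monthly cost after subtracting the free tier."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, invocations - free_requests)
    billable_gb_s = max(0, gb_seconds - free_gb_s)
    return billable_requests * REQUEST_PRICE + billable_gb_s * GB_SECOND_PRICE

# 10M invocations/month, 120 ms average duration, 512 MB memory
cost = lambda_monthly_cost(10_000_000, 120, 512)
print(f'Estimated monthly cost: ${cost:.2f}')
```

Note how memory allocation multiplies directly into GB-seconds: halving memory (if the function still fits) roughly halves the compute portion of the bill.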
Azure Functions
# Azure Functions example
import logging
import azure.functions as func
import os
import json
from datetime import datetime
from azure.cosmos import CosmosClient
def main(req: func.HttpRequest) -> func.HttpResponse:
"""
Azure Functions example
Features:
- Multiple hosting plans (Consumption, Premium, Dedicated)
- Durable Functions for stateful workflows
- 10-minute maximum timeout on Consumption plan (HTTP responses limited to 230 seconds)
- VNET integration
- Hybrid connections
- Visual Studio integration
"""
logging.info('Python HTTP trigger function processed a request.')
# Cosmos DB integration
cosmos_client = CosmosClient(
os.environ['COSMOS_ENDPOINT'],
os.environ['COSMOS_KEY']
)
try:
name = req.params.get('name')
if not name:
req_body = req.get_json()
name = req_body.get('name') if req_body else None
if name:
# Insert to Cosmos DB
database = cosmos_client.get_database_client('mydb')
container = database.get_container_client('items')
container.create_item({
'id': name,
'name': name,
'timestamp': datetime.utcnow().isoformat()
})
return func.HttpResponse(
json.dumps({'message': f'Hello, {name}!'}),
status_code=200,
mimetype='application/json'
)
else:
return func.HttpResponse(
"Please pass a name parameter",
status_code=400
)
except Exception as e:
logging.error(f'Error: {str(e)}')
return func.HttpResponse(
json.dumps({'error': str(e)}),
status_code=500,
mimetype='application/json'
)
# Pricing: Pay per execution and resource consumption
# Free grant: 1M executions and 400,000 GB-s per month
Google Cloud Functions
# Google Cloud Functions example
import functions_framework
import os
import json
from google.cloud import firestore
@functions_framework.http
def hello_http(request):
"""
Google Cloud Functions example
Features:
- Event-driven architecture
- 540-second maximum execution time (1st gen; up to 60 minutes with 2nd gen)
- Auto-scaling to zero
- Multiple trigger types
- Cloud Build integration
- Source-based deployment
"""
# Initialize Firestore client
db = firestore.Client()
try:
request_json = request.get_json(silent=True)
request_args = request.args
name = None
if request_json and 'name' in request_json:
name = request_json['name']
elif request_args and 'name' in request_args:
name = request_args['name']
if name:
# Add to Firestore
doc_ref = db.collection('users').document(name)
doc_ref.set({
'name': name,
'timestamp': firestore.SERVER_TIMESTAMP
})
return json.dumps({'message': f'Hello {name}!'})
else:
return 'Hello World!'
except Exception as e:
return json.dumps({'error': str(e)}), 500
# Pricing: Pay per invocation, compute time, and network
# Free tier: 2M invocations per month
Database and Storage Options {#database-storage}
Relational Database Services
Amazon RDS
# AWS RDS Comparison
RDS_Engines:
MySQL: "8.0, Multi-AZ, Read Replicas"
PostgreSQL: "15.x, Advanced features"
MariaDB: "10.6, Open source"
Oracle: "19c, Enterprise features"
SQL_Server: "2019, Windows licensing"
Amazon_Aurora: "MySQL/PostgreSQL compatible"
Aurora_Features:
- "5x MySQL, 3x PostgreSQL performance"
- "Multi-master configurations"
- "Global database replication"
- "Serverless v2 auto-scaling"
- "Backtrack for point-in-time recovery"
- "15 read replicas"
Backup_Recovery:
- "Automated backups (1-35 days)"
- "Manual snapshots"
- "Point-in-time recovery"
- "Cross-region replication"
Security:
- "VPC network isolation"
- "Encryption at rest and in transit"
- "IAM database authentication"
- "Parameter groups for configuration"
Azure SQL Database
# Azure SQL Services
SQL_Database:
- "PaaS offering"
- "vCore and DTU purchasing models"
- "Hyperscale tier for large databases"
- "Serverless compute tier"
- "Elastic pools for cost optimization"
SQL_Managed_Instance:
- "Near 100% SQL Server compatibility"
- "Instance-level features"
- "Advanced security features"
- "Hybrid scenarios support"
Features:
- "Automatic tuning and optimization"
- "Built-in intelligence"
- "Advanced threat protection"
- "Always Encrypted"
- "Temporal tables"
- "JSON support"
High_Availability:
- "99.99% SLA"
- "Active geo-replication"
- "Auto-failover groups"
- "Zone redundancy"
Google Cloud SQL
# Google Cloud SQL
Supported_Engines:
MySQL: "8.0, 5.7"
PostgreSQL: "15, 14, 13"
SQL_Server: "2019, 2017"
Features:
- "Automatic storage increase"
- "Point-in-time recovery"
- "High availability configurations"
- "Read replicas"
- "Private IP connectivity"
- "IAM database authentication"
Performance:
- "Up to 60,000 IOPS"
- "SSD persistent disks"
- "Memory up to 624 GB"
- "416 vCPUs maximum"
Security:
- "Data encryption at rest and in transit"
- "VPC native networking"
- "Private services access"
- "SQL audit logs"
NoSQL Database Comparison
// AWS DynamoDB
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();
const awsExample = {
features: [
'Serverless and fully managed',
'Single-digit millisecond latency',
'Global Tables for multi-region',
'DynamoDB Streams for change capture',
'On-demand and provisioned billing',
'ACID transactions',
'PartiQL query language'
],
usage: async () => {
// Put item
await dynamodb.put({
TableName: 'Users',
Item: {
userId: '123',
name: 'John Doe',
email: 'john@example.com',
createdAt: new Date().toISOString()
}
}).promise();
// Query with GSI
const result = await dynamodb.query({
TableName: 'Users',
IndexName: 'EmailIndex',
KeyConditionExpression: 'email = :email',
ExpressionAttributeValues: {
':email': 'john@example.com'
}
}).promise();
return result.Items;
}
};
// Azure Cosmos DB
const { CosmosClient } = require('@azure/cosmos');
const client = new CosmosClient({ endpoint, key });
const azureExample = {
features: [
'Multi-model database (SQL, MongoDB, Cassandra, Gremlin, Table)',
'Global distribution with 99.999% availability',
'Multiple consistency levels',
'Automatic and instant scaling',
'Serverless option available',
'Change feed for real-time processing',
'Built-in analytics with Azure Synapse Link'
],
usage: async () => {
const database = client.database('mydb');
const container = database.container('users');
// Create item
const { resource: item } = await container.items.create({
id: '123',
name: 'John Doe',
email: 'john@example.com',
partitionKey: 'users'
});
// Query items
const querySpec = {
query: 'SELECT * FROM c WHERE c.email = @email',
parameters: [{ name: '@email', value: 'john@example.com' }]
};
const { resources: results } = await container.items
.query(querySpec)
.fetchAll();
return results;
}
};
// Google Firestore
const { Firestore, FieldValue } = require('@google-cloud/firestore');
const firestore = new Firestore();
const gcpExample = {
features: [
'Real-time synchronization',
'Offline support with automatic sync',
'ACID transactions',
'Multi-region replication',
'Security rules for access control',
'Mobile and web SDK support',
'Automatic scaling'
],
usage: async () => {
// Add document
const docRef = firestore.collection('users').doc('123');
await docRef.set({
name: 'John Doe',
email: 'john@example.com',
createdAt: FieldValue.serverTimestamp()
});
// Query documents
const snapshot = await firestore
.collection('users')
.where('email', '==', 'john@example.com')
.get();
const results = [];
snapshot.forEach(doc => {
results.push({ id: doc.id, ...doc.data() });
});
return results;
}
};
DevOps and CI/CD Tools {#devops-cicd}
CI/CD Pipeline Comparison
AWS DevOps Stack
# AWS CodePipeline Configuration
CodeCommit: "Git repository service"
CodeBuild: "Build and test service"
CodeDeploy: "Deployment automation"
CodePipeline: "CI/CD orchestration"
CodeStar: "Project templates and dashboards"
CodeArtifact: "Package repository"
CodeGuru: "Code review and performance insights"
# Example Pipeline
AWSTemplateFormatVersion: '2010-09-09'
Resources:
BuildProject:
Type: AWS::CodeBuild::Project
Properties:
Name: MyAppBuild
ServiceRole: !Ref CodeBuildRole
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_MEDIUM
Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
Source:
Type: CODEPIPELINE
BuildSpec: |
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
- docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Strengths:
- "Deep AWS service integration"
- "Pay-per-use pricing"
- "Serverless build agents"
- "Cross-account deployments"
Limitations:
- "Limited to AWS ecosystem"
- "Less feature-rich than third-party tools"
- "Complex setup for advanced scenarios"
Azure DevOps
# Azure DevOps Pipeline
Azure_Repos: "Git repositories"
Azure_Pipelines: "CI/CD with YAML or visual designer"
Azure_Boards: "Agile project management"
Azure_Test_Plans: "Manual and exploratory testing"
Azure_Artifacts: "Package management"
# Example azure-pipelines.yml
trigger:
- main
pool:
vmImage: 'ubuntu-latest'
variables:
dockerRegistryServiceConnection: 'myRegistry'
imageRepository: 'myapp'
containerRegistry: 'myregistry.azurecr.io'
dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
tag: '$(Build.BuildId)'
stages:
- stage: Build
displayName: Build and push stage
jobs:
- job: Build
displayName: Build
steps:
- task: Docker@2
displayName: Build and push an image to container registry
inputs:
command: buildAndPush
repository: $(imageRepository)
dockerfile: $(dockerfilePath)
containerRegistry: $(dockerRegistryServiceConnection)
tags: |
$(tag)
latest
- stage: Deploy
displayName: Deploy to AKS
dependsOn: Build
jobs:
- deployment: Deploy
displayName: Deploy
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: KubernetesManifest@0
inputs:
action: deploy
manifests: |
k8s/deployment.yaml
k8s/service.yaml
containers: |
$(containerRegistry)/$(imageRepository):$(tag)
Strengths:
- "Complete DevOps platform"
- "Excellent Microsoft ecosystem integration"
- "Visual pipeline designer"
- "Comprehensive project management"
- "Free for small teams"
Limitations:
- "Can be overwhelming for simple needs"
- "Primarily Windows-focused historically"
- "Complex pricing for larger teams"
Google Cloud DevOps
# GCP DevOps Tools
Cloud_Source_Repositories: "Git repository hosting"
Cloud_Build: "Containerized CI/CD"
Cloud_Deploy: "Delivery pipeline management"
Binary_Authorization: "Deploy-time security controls"
Artifact_Registry: "Container and package registry"
# Example cloudbuild.yaml
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args:
- 'build'
- '-t'
- 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
- '-t'
- 'gcr.io/$PROJECT_ID/myapp:latest'
- '.'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args:
- 'push'
- '--all-tags'
- 'gcr.io/$PROJECT_ID/myapp'
# Deploy to GKE
- name: 'gcr.io/cloud-builders/gke-deploy'
args:
- 'run'
- '--filename=k8s/'
- '--image=gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA'
- '--location=us-central1-c'
- '--cluster=my-cluster'
# Trigger configuration
trigger:
github:
owner: myorg
name: myrepo
push:
branch: ^main$
substitutions:
_DEPLOY_REGION: us-central1
_GKE_CLUSTER: my-cluster
Strengths:
- "Fast build execution"
- "Native Kubernetes integration"
- "Powerful trigger configurations"
- "Good security controls"
- "Pay-per-minute billing"
Limitations:
- "Limited project management features"
- "Less ecosystem compared to others"
- "Requires external Git hosting for advanced features"
Networking and Security {#networking-security}
Network Architecture
AWS Networking
# AWS VPC Configuration
VPC_Features:
- "Virtual Private Cloud isolation"
- "Multiple Availability Zones"
- "Internet and NAT Gateways"
- "VPC Peering and Transit Gateway"
- "PrivateLink for service connectivity"
- "Direct Connect for hybrid connectivity"
Security_Groups:
- "Instance-level firewall"
- "Stateful rules"
- "Allow rules only"
- "Protocol, port, and source/destination based"
Network_ACLs:
- "Subnet-level firewall"
- "Stateless rules"
- "Allow and deny rules"
- "Numbered rule evaluation"
Load_Balancers:
- Application_Load_Balancer: "Layer 7, HTTP/HTTPS"
- Network_Load_Balancer: "Layer 4, Ultra-high performance"
- Gateway_Load_Balancer: "Third-party appliances"
- Classic_Load_Balancer: "Legacy, Layer 4/7"
CDN_WAF:
- CloudFront: "Global CDN with edge locations"
- AWS_WAF: "Web application firewall"
- AWS_Shield: "DDoS protection"
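The contrast between stateful security groups and the numbered, stateless rule evaluation of Network ACLs can be sketched as a tiny simulation (rule numbers and CIDRs are made up for illustration):

```python
# Minimal model of AWS Network ACL evaluation: stateless rules are
# checked in ascending rule-number order and the first match wins,
# with an implicit final deny. Rules below are made-up examples.

import ipaddress

nacl_rules = [
    # (rule_number, action, source_cidr, port)
    (100, 'allow', '10.0.0.0/16', 443),
    (200, 'deny',  '10.0.5.0/24', 443),  # never reached for 443: rule 100 matches first
    (300, 'allow', '0.0.0.0/0',   80),
]

def evaluate(source_ip, port):
    """Return 'allow' or 'deny' using first-match semantics."""
    ip = ipaddress.ip_address(source_ip)
    for _, action, cidr, rule_port in sorted(nacl_rules):
        if port == rule_port and ip in ipaddress.ip_network(cidr):
            return action
    return 'deny'  # the implicit '*' rule

print(evaluate('10.0.5.10', 443))   # allow: rule 100 matches before 200
print(evaluate('203.0.113.7', 80))  # allow: rule 300
print(evaluate('203.0.113.7', 22))  # deny: implicit default
```

This is why rule numbering matters in NACLs: a lower-numbered allow shadows a higher-numbered deny. Security groups avoid this by being stateful and allow-only, so return traffic is admitted automatically.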
Azure Networking
# Azure Virtual Network
VNet_Features:
- "Regional virtual network"
- "Subnet segmentation"
- "Service endpoints"
- "Private endpoints"
- "VNet peering and Virtual WAN"
- "ExpressRoute for hybrid"
Network_Security_Groups:
- "Subnet and NIC level"
- "Priority-based rules"
- "Allow and deny rules"
- "Service tags for Azure services"
Application_Security_Groups:
- "Application-centric security"
- "Grouping VMs by function"
- "Simplified rule management"
Load_Balancers:
- Azure_Load_Balancer: "Layer 4, Regional"
- Application_Gateway: "Layer 7, WAF integrated"
- Front_Door: "Global load balancer with CDN"
- Traffic_Manager: "DNS-based routing"
Security_Services:
- Azure_Firewall: "Managed firewall service"
- Azure_DDoS: "DDoS protection"
- Azure_CDN: "Content delivery network"