Hosting & Deployment
Understanding web hosting, deployment strategies and infrastructure options
Last updated: 8/15/2025
Learn how to take your applications from development to production by understanding the different hosting options, deployment strategies and infrastructure choices available.
What is Web Hosting?
The Fundamentals
Making your app accessible on the internet
Web hosting is like renting space for your website to live. Just as you need a physical location for a shop, your website needs a server connected to the internet where visitors can find it.
Real-world analogy: Think of hosting like real estate. You can:
- Rent a room (shared hosting)
- Rent an apartment (VPS)
- Rent a whole building (dedicated server)
- Build your own property (on-premise)
- Use flexible workspace (cloud hosting)
Types of Hosting
Shared Hosting
Multiple sites on one server
Like sharing an apartment - affordable but with neighbours who might be noisy (use too many resources).
Pros:
- Very affordable ($3-10/month)
- Managed by provider
- Good for beginners
Cons:
- Limited resources
- Performance affected by other sites
- Less control
Best for: Personal blogs, small business sites
VPS (Virtual Private Server)
Your own virtual space
Like having your own apartment in a building - dedicated resources but still sharing the physical building.
# Typical VPS specs
CPU: 2-4 vCPUs
RAM: 4-8 GB
Storage: 80-160 GB SSD
Bandwidth: 3-5 TB
Providers: DigitalOcean, Linode, Vultr
Dedicated Servers
Entire server for yourself
Like owning a whole building - complete control and all resources to yourself.
Use cases:
- High-traffic applications
- Resource-intensive processing
- Strict compliance requirements
- Gaming servers
Cloud Hosting
Scalable, distributed hosting
Resources from multiple servers work together, scaling up or down based on demand.
Major providers:
- AWS: Most comprehensive
- Google Cloud: Best for AI/ML
- Azure: Enterprise-friendly
- DigitalOcean: Developer-focused simplicity
Static Site Hosting
JAMstack Platforms
Optimised for static content
Perfect for sites built with frameworks like Next.js, Gatsby, or plain HTML/CSS/JS.
Vercel
// vercel.json configuration
{
  "builds": [
    { "src": "package.json", "use": "@vercel/next" }
  ],
  "routes": [
    { "src": "/(.*)", "dest": "/" }
  ]
}
Features:
- Automatic deployments from Git
- Global CDN
- Serverless functions
- Preview deployments
Netlify
# netlify.toml
[build]
  command = "npm run build"
  publish = "dist"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
GitHub Pages
Free hosting from your repository
# .github/workflows/deploy.yml
name: Deploy to GitHub Pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./dist
CDN (Content Delivery Network)
Global content distribution
CDNs cache your content in multiple locations worldwide, serving it from the nearest server to each user.
User in Sydney → Sydney CDN Edge → Your content
User in London → London CDN Edge → Same content
User in NYC → NYC CDN Edge → Same content
Popular CDNs:
- Cloudflare: Free tier, DDoS protection
- Fastly: Real-time purging
- AWS CloudFront: Deep AWS integration
- Akamai: Enterprise-grade
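The routing above can be sketched in a few lines: each user is served from whichever edge answers fastest. This is a simplified illustration, not how any particular CDN's anycast routing actually works, and the latency figures are made up.

```python
# Hypothetical round-trip latencies (ms) from one user to each edge.
EDGE_LATENCY_MS = {
    "sydney": 12,
    "london": 180,
    "nyc": 210,
}

def nearest_edge(latencies):
    """Return the edge location with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(nearest_edge(EDGE_LATENCY_MS))  # → sydney
```

In practice CDNs make this decision with anycast IP routing or DNS-based steering rather than client-side measurement, but the principle is the same: serve every request from the closest healthy edge.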
Platform as a Service (PaaS)
Understanding PaaS
Focus on code, not infrastructure
PaaS handles servers, networking and storage so you can focus on building your application.
Heroku
The original developer-friendly PaaS
# Deploy with Git
git push heroku main
# Scale dynos
heroku ps:scale web=3
# Add add-ons
heroku addons:create heroku-postgresql
Procfile:
web: node server.js
worker: node worker.js
Railway
Modern alternative to Heroku
# Deploy from CLI
railway up
# Link to GitHub
railway link
# Add database
railway add postgresql
Render
Unified cloud platform
Features automatic builds, deploys and scaling with simple configuration:
# render.yaml
services:
  - type: web
    name: myapp
    env: node
    buildCommand: npm install
    startCommand: npm start
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: mydb
          property: connectionString
databases:
  - name: mydb
    plan: starter
Container Platforms
Docker-based Hosting
Google Cloud Run
Serverless containers
# Deploy a container
gcloud run deploy myapp \
  --image gcr.io/myproject/myapp \
  --platform managed \
  --region us-central1
Benefits:
- Scale to zero
- Pay per request
- Automatic HTTPS
AWS Fargate
Serverless compute for containers
{
  "family": "myapp-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [{
    "name": "myapp",
    "image": "myapp:latest",
    "portMappings": [{
      "containerPort": 3000
    }]
  }]
}
Fly.io
Containers at the edge
# fly.toml
app = "myapp"

[build]
  image = "myapp:latest"

[[services]]
  internal_port = 3000
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443
Deployment Strategies
Blue-Green Deployment
Zero-downtime deployments
Current (Blue): v1.0 → Live traffic
New (Green): v2.0 → Deploy and test
Switch: Route traffic from Blue to Green
Rollback: Switch back if issues
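The steps above boil down to a single pointer flip: traffic targets one environment at a time, so rollback is just flipping the pointer back. A minimal sketch, with hypothetical environment names and versions:

```python
# Two identical environments; only one receives live traffic at a time.
environments = {"blue": "v1.0", "green": "v2.0"}
live = "blue"  # current production

def switch(current):
    """Route traffic to the other environment; return the new live one."""
    return "green" if current == "blue" else "blue"

live = switch(live)  # cut over: green (v2.0) now serves traffic
live = switch(live)  # rollback: blue (v1.0) serves traffic again
```

In a real setup the "pointer" is a load balancer target group, a DNS record, or a router rule; the deployment tooling flips it after the green environment passes health checks.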
Rolling Deployment
Gradual replacement
Instance 1: v1.0 → v2.0 ✓
Instance 2: v1.0 → v2.0 ✓
Instance 3: v1.0 → v2.0 ✓
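The gradual replacement above can be sketched as a loop that swaps one instance at a time and verifies health before moving on, so capacity never drops to zero. `health_check` here is a stand-in for a real readiness probe, and the host names are illustrative:

```python
def health_check(instance):
    """Placeholder for a real probe (HTTP health endpoint, readiness check)."""
    return True  # assume the new version comes up healthy

def rolling_deploy(instances, new_version):
    """Replace instances one at a time, halting if any fails its check."""
    for i, inst in enumerate(instances):
        instances[i] = {"host": inst["host"], "version": new_version}
        if not health_check(instances[i]):
            raise RuntimeError(f"deploy halted at {inst['host']}")
    return instances

fleet = [{"host": f"web{n}", "version": "v1.0"} for n in (1, 2, 3)]
rolling_deploy(fleet, "v2.0")
```

Orchestrators like Kubernetes implement the same idea with `maxUnavailable`/`maxSurge` settings, replacing pods in controlled batches rather than strictly one by one.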
Canary Deployment
Test with small traffic percentage
95% traffic → v1.0 (stable)
5% traffic → v2.0 (canary)
Monitor → If good, increase %
        → If bad, rollback
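The 95/5 split can be sketched as probabilistic routing: each request is independently assigned to the canary with a small probability. This is an illustration of the traffic split only, not any particular load balancer's implementation:

```python
import random

def route(canary_percent=5, rng=random):
    """Send roughly canary_percent of requests to the new version."""
    return "v2.0" if rng.uniform(0, 100) < canary_percent else "v1.0"

counts = {"v1.0": 0, "v2.0": 0}
for _ in range(10_000):
    counts[route()] += 1
# counts["v2.0"] should land near 500 of the 10,000 requests
```

Real canary systems usually add stickiness (a given user always hits the same version) and automated rollback triggers based on error rates or latency in the canary cohort.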
Feature Flags
Deploy code without activating features
if (featureFlag('new-checkout')) {
  return <NewCheckout />;
} else {
  return <OldCheckout />;
}
CI/CD Pipelines
Continuous Integration
Automated testing on every commit
# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - npm install
    - npm test

build:
  stage: build
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker push myapp:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/myapp myapp=myapp:$CI_COMMIT_SHA
  only:
    - main
Deployment Automation
GitHub Actions
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci
      - run: npm run build
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_KEY }}
          aws-region: us-east-1
      - run: aws s3 sync ./dist s3://my-bucket
Jenkins Pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Deploy') {
            steps {
                sh 'rsync -avz dist/ user@server:/var/www/html/'
            }
        }
    }
}
Infrastructure as Code
Terraform
Declarative infrastructure
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "WebServer"
  }
}

resource "aws_s3_bucket" "static" {
  bucket = "my-static-files"
  acl    = "public-read"

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}
AWS CloudFormation
AWS-native IaC
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0c55b159cbfafe1f0
      InstanceType: t2.micro
      SecurityGroups:
        - !Ref WebServerSecurityGroup
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable HTTP access
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
Ansible
Configuration management
---
- name: Configure web servers
  hosts: webservers
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Copy website files
      copy:
        src: ./dist/
        dest: /var/www/html/
    - name: Start nginx
      service:
        name: nginx
        state: started
        enabled: yes
Monitoring and Logging
Application Monitoring
New Relic:
// Track custom metrics
newrelic.recordMetric('Custom/OrderProcessing', processingTime);
Datadog:
from datadog import statsd
# Track metrics
statsd.increment('page.views')
statsd.histogram('database.query.time', query_time)
Uptime Monitoring
Services:
- UptimeRobot
- Pingdom
- StatusCake
- Better Uptime
Configuration example:
{
  "monitors": [
    {
      "url": "https://api.yourdomain.com/health",
      "interval": 60,
      "timeout": 30,
      "alerts": ["email", "slack"]
    }
  ]
}
Log Management
Centralised logging:
// Winston with multiple transports
const winston = require('winston');

const logger = winston.createLogger({
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' }),
    new winston.transports.Console()
  ]
});
Scaling Strategies
Vertical Scaling
Adding more power
# Upgrade server resources
CPU: 2 cores → 8 cores
RAM: 4GB → 16GB
Storage: 100GB → 500GB
Horizontal Scaling
Adding more servers
# Load balancer configuration
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
Auto-scaling
Dynamic resource adjustment
# Kubernetes HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
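The HPA's scaling decision follows a simple rule documented by Kubernetes: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. Worked against the manifest above (target 70% CPU, 2-10 replicas):

```python
import math

def desired_replicas(current, current_util, target_util, min_r=2, max_r=10):
    """Kubernetes HPA formula, clamped to the configured replica bounds."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

desired_replicas(4, 90, 70)   # CPU hot: ceil(4 * 90/70) = 6 replicas
desired_replicas(4, 30, 70)   # CPU idle: scales down, floored at minReplicas
```

This is why the target utilisation matters: a lower target gives more headroom for traffic spikes at the cost of running more replicas on average.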
Cost Optimisation
Reserved Instances
Commit for savings
- AWS: Up to 72% discount
- Azure: Up to 72% discount
- Google Cloud: Up to 57% discount
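The arithmetic behind those headline figures is straightforward. A rough sketch with a made-up on-demand rate; real discounts depend on the term length (1 or 3 years) and payment option (all, partial, or no upfront):

```python
on_demand_hourly = 0.10    # $/hour, hypothetical instance type
discount = 0.72            # "up to 72%" headline rate
hours_per_year = 24 * 365

on_demand_cost = on_demand_hourly * hours_per_year    # $876.00/year
reserved_cost = on_demand_cost * (1 - discount)       # ≈ $245.28/year
savings = on_demand_cost - reserved_cost              # ≈ $630.72/year
```

The catch: you pay for the reservation whether the instance runs or not, so reserved pricing only pays off for steady, predictable baseline load.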
Spot/Preemptible Instances
Use spare capacity
# AWS Spot Instance request
aws ec2 request-spot-instances \
  --instance-count 1 \
  --type "one-time" \
  --launch-specification file://specification.json
Resource Optimisation
Right-sizing:
- Monitor actual usage
- Downgrade overprovisioned resources
- Use auto-scaling
Storage optimisation:
- Archive old data to cold storage
- Compress large files
- Use lifecycle policies
Security Best Practices
Server Hardening
# Basic security setup
# Update system
apt update && apt upgrade -y
# Configure firewall
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
# Disable root login
sed -i 's/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
# Install fail2ban
apt install fail2ban -y
Web Application Firewall (WAF)
Cloudflare WAF Rules:
{
  "rules": [
    {
      "expression": "http.request.uri.path contains \"../\"",
      "action": "block",
      "description": "Block directory traversal"
    },
    {
      "expression": "http.request.uri.query contains \"<script\"",
      "action": "challenge",
      "description": "Challenge potential XSS"
    }
  ]
}
Backup Strategies
3-2-1 Rule:
- 3 copies of data
- 2 different storage types
- 1 offsite backup
#!/bin/bash
# Automated backup script
DATE=$(date +%Y%m%d)
mysqldump -u user -p database > backup_$DATE.sql
tar -czf backup_$DATE.tar.gz /var/www/html backup_$DATE.sql
aws s3 cp backup_$DATE.tar.gz s3://my-backups/
Disaster Recovery
RTO and RPO
Recovery objectives
- RTO (Recovery Time Objective): How long to restore
- RPO (Recovery Point Objective): How much data loss is acceptable
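These two objectives map directly onto operational numbers: with periodic backups, worst-case data loss equals the backup interval, and recovery time is bounded by how long a restore takes. A sketch with illustrative figures:

```python
def meets_objectives(backup_interval_h, restore_time_h, rpo_target_h, rto_target_h):
    """Check whether a backup schedule satisfies the stated RPO and RTO.

    Worst-case data loss = time since the last backup (the interval);
    time to recover = measured restore duration.
    """
    return backup_interval_h <= rpo_target_h and restore_time_h <= rto_target_h

# Backups every 6 hours, restores take 2 hours:
meets_objectives(6, 2, rpo_target_h=8, rto_target_h=4)   # objectives met
meets_objectives(6, 2, rpo_target_h=4, rto_target_h=4)   # RPO missed: back up more often
```

Tightening either target pushes you rightward through the DR strategies below, from plain backup-and-restore toward warm standby and active/active.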
DR Strategies
Backup and Restore:
- Lowest cost
- Highest RTO
Pilot Light:
- Minimal core infrastructure running
- Medium RTO and cost
Warm Standby:
- Scaled-down version running
- Lower RTO, higher cost
Multi-site Active/Active:
- Full redundancy
- Near-zero RTO, highest cost
Summary
Hosting and deployment have evolved from simple FTP uploads to sophisticated CI/CD pipelines and infrastructure as code. Modern deployment practices emphasise automation, scalability and reliability.
Key takeaways:
- Choose hosting based on your needs and budget
- Automate deployments with CI/CD
- Monitor everything
- Plan for growth and failure
- Security is not optional
- Start simple, evolve as needed
The hosting landscape continues to evolve with edge computing, serverless platforms and improved developer experiences making deployment easier than ever!