Growth is a good problem to have. When your application starts outgrowing its current VPS resources, you have two fundamental scaling strategies: vertical scaling (upgrading your existing server) and horizontal scaling (adding more servers behind a load balancer). This guide covers both approaches, helping you choose the right strategy and implement it on your SakuraHost infrastructure.

When to Scale: Monitor your VPS regularly. If CPU usage consistently exceeds 80%, RAM is frequently at capacity, disk I/O is saturated, or response times are degrading, it is time to scale. See our guide VPS Performance Monitoring: Tools and Best Practices for monitoring tools.
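As a quick first check, the thresholds above can be approximated from the shell. This is a rough sketch: comparing the 1-minute load average to the core count is a common heuristic, not a substitute for proper monitoring.

```shell
# Rough heuristic: sustained load above the core count suggests CPU pressure
cores=$(nproc)
load=$(cut -d ' ' -f1 /proc/loadavg)
echo "cores=$cores load1m=$load"

# awk handles the floating-point comparison
awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > c) }' \
  && echo "load exceeds core count - consider scaling" \
  || echo "load within capacity"
```

Pair this with `free -h` for memory and `iostat -x` (from the sysstat package) for disk I/O before deciding which resource to scale.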

1. Vertical Scaling (Scale Up)

Vertical scaling means upgrading your existing VPS to a larger plan with more CPU cores, RAM, and storage. This is the simplest scaling approach and works well up to a certain point.

Upgrading Your SakuraHost VPS Plan

Log in to the SakuraHost Client Area:

Visit billing.sakurahost.co.tz and navigate to your VPS service.

Request an upgrade:

Select "Upgrade/Downgrade" to view available plans. Choose a plan with the resources you need. Upgrades can typically be processed within minutes to a few hours.

Post-upgrade verification:
# Verify new CPU count
nproc

# Verify new RAM
free -h

# Verify new disk space
df -h

Advantages of Vertical Scaling

  • No application changes required - your code runs exactly the same
  • No need for load balancers or distributed architecture
  • Database remains on a single server (no replication complexity)
  • Simple to implement and manage

Limitations of Vertical Scaling

  • Physical limits - there is a maximum size for any single server
  • Single point of failure - if the server goes down, everything goes down
  • May require brief downtime for the upgrade
  • Cost increases can be non-linear at the highest tiers

2. Optimizing Before Scaling

Before spending on additional resources, ensure your current allocation is being used efficiently. Often, optimization eliminates the need to scale.

Application-Level Optimization

# Enable Nginx caching for static assets
location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
}

# Enable gzip compression
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript;

Database Optimization

# MySQL - Identify slow queries (requires the slow query log to be enabled)
sudo mysqldumpslow /var/log/mysql/slow.log

-- PostgreSQL - Find slow queries (run in psql; requires the pg_stat_statements extension)
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

Implement Caching Layers

Redis or Memcached can dramatically reduce database load by caching frequently accessed data:

# Install Redis
sudo apt install redis-server -y
sudo systemctl enable redis-server

# Verify Redis is running
redis-cli ping    # should return PONG

Configure your application to use Redis for session storage, query caching, and frequently accessed data. This single change can reduce database queries by 60-90% for read-heavy applications.
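The pattern behind that reduction is cache-aside: check Redis first, and only hit the database on a miss. A minimal sketch in shell, where the key name `user_count`, the 300-second TTL, the `myapp_production` database, and the query are all illustrative:

```shell
# Hypothetical cache-aside lookup; key name, TTL, database, and query
# are examples - adapt them to your application.
cached_query() {
    cached=$(redis-cli GET user_count)
    if [ -n "$cached" ]; then
        echo "$cached"                                            # cache hit
    else
        count=$(mysql -N -e 'SELECT COUNT(*) FROM users' myapp_production)
        redis-cli SETEX user_count 300 "$count" > /dev/null       # cache for 5 minutes
        echo "$count"
    fi
}
```

In application code the same logic lives in your framework's cache layer; the point is that repeated reads within the TTL never reach the database.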

3. Horizontal Scaling (Scale Out)

Horizontal scaling distributes your workload across multiple servers. This approach provides high availability, fault tolerance, and virtually unlimited scalability.

Architecture Overview

A typical horizontally scaled setup includes:

  • Load Balancer - Distributes incoming traffic across application servers
  • Application Servers (2+) - Run your web application code
  • Database Server - Dedicated server for MySQL/PostgreSQL
  • Cache Server - Shared Redis instance for sessions and caching
  • File Storage - Shared storage for uploads and media

Setting Up Nginx as a Load Balancer

Deploy a dedicated SakuraHost VPS as your load balancer with this Nginx configuration:

upstream app_servers {
    least_conn;
    server 10.0.1.10:80 weight=3;
    server 10.0.1.11:80 weight=3;
    server 10.0.1.12:80 weight=2 backup;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Passive health checks: retry the next upstream on failure
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
    }
}

Load Balancing Algorithms

  • Round Robin (default) - Distributes requests evenly in rotation
  • Least Connections (least_conn) - Sends traffic to the server with fewest active connections. Best for varying request processing times
  • IP Hash (ip_hash) - Routes each client to the same server consistently. Useful for session persistence without shared session storage
  • Weighted - Assign higher weights to more powerful servers so they receive proportionally more traffic
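For example, switching the upstream block to IP-hash routing only means replacing the balancing directive (IPs illustrative):

```nginx
# Sticky sessions via ip_hash: each client IP always reaches the same server
upstream app_servers {
    ip_hash;
    server 10.0.1.10:80;
    server 10.0.1.11:80;
}
```

Note that shared session storage (covered below) is generally more robust than ip_hash, since clients behind changing IPs or proxies can lose stickiness.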

Session Management in Distributed Environments

When running multiple application servers, sessions must be shared. Use Redis as a centralized session store:

# Install Redis on a dedicated server or the database server
sudo apt install redis-server -y

# Configure Redis to accept remote connections
sudo nano /etc/redis/redis.conf
#   Change: bind 10.0.1.20
#   Set:    requirepass YourRedisPassword

# Apply the changes
sudo systemctl restart redis-server

# From an application server, verify connectivity
redis-cli -h 10.0.1.20 -a YourRedisPassword ping    # should return PONG

Configure your application to use the Redis server for sessions instead of local file-based sessions.

4. Database Scaling

Read Replicas

For read-heavy applications, set up MySQL or PostgreSQL read replicas. Your application writes to the primary server and reads from replicas:

# On the primary MySQL server, enable binary logging
# /etc/mysql/mysql.conf.d/mysqld.cnf
server-id     = 1
log_bin       = /var/log/mysql/mysql-bin.log
binlog_do_db  = myapp_production
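The replica side needs a matching configuration. A sketch with illustrative values; the `server-id` just has to be unique, and `read_only` prevents accidental writes to the replica:

```ini
# On the replica server
# /etc/mysql/mysql.conf.d/mysqld.cnf
server-id  = 2
relay_log  = /var/log/mysql/mysql-relay-bin.log
read_only  = ON
```

After restarting both servers, replication still has to be initialized on the replica (creating a replication user on the primary and pointing the replica at it); consult the MySQL replication documentation for the exact statements for your version.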

Connection Pooling

Use connection poolers like PgBouncer (PostgreSQL) or ProxySQL (MySQL) to efficiently manage database connections from multiple application servers, reducing connection overhead significantly.
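As an illustration, a minimal PgBouncer setup might look like the following; the database name, host, and pool sizes are example values to tune for your workload:

```ini
; /etc/pgbouncer/pgbouncer.ini (illustrative values)
[databases]
myapp = host=10.0.1.30 port=5432 dbname=myapp_production

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

Application servers then connect to port 6432 instead of 5432, and PgBouncer multiplexes their hundreds of client connections onto a small pool of real PostgreSQL connections.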

5. Scaling Checklist

  • Monitor and identify the actual bottleneck (CPU, RAM, I/O, network)
  • Optimize your application and database queries first
  • Implement caching (Redis/Memcached) before adding servers
  • For vertical scaling: upgrade your SakuraHost VPS plan
  • For horizontal scaling: separate database, add application servers, configure load balancer
  • Centralize sessions with Redis
  • Set up health checks and monitoring across all nodes
  • Implement automated backups for every server in the cluster
Plan for Failure: In a multi-server setup, design every component to handle the failure of any single server. Use the backup directive in Nginx upstream blocks, configure database replication, and always maintain current backups.
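The health-check item on the checklist can start as simply as polling each node from cron or a monitoring host. A minimal sketch; the `/health` endpoint path is an assumption, so substitute whatever URL your application exposes:

```shell
# Report whether an app server's health endpoint responds.
# The /health path is hypothetical - use your application's own endpoint.
check_host() {
    if curl -fsS -m 5 "http://$1/health" > /dev/null 2>&1; then
        echo "$1 OK"
    else
        echo "$1 DOWN"
    fi
}
```

Run it across the cluster with something like `for h in 10.0.1.10 10.0.1.11; do check_host "$h"; done`, and alert on any `DOWN` line. Dedicated tools (Monit, Prometheus with Blackbox exporter) are the next step once the cluster grows.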
Need Help Scaling? The SakuraHost team can help you architect the right solution for your growth. Contact us at billing.sakurahost.co.tz or reach out via sms.sakuragroup.co.tz. For additional reading, explore the DigitalOcean Load Balancing Guide and Nginx Load Balancing Documentation.