For a long time, I ran everything on a single bare-metal server at OVH. It hosted my self-hosted GitLab, GitLab runners, Docker images, and all my production workloads. It was powerful, reliable, and predictable, but, over time, it became expensive and rigid. I wanted something more flexible, easier to scale, and cheaper to run. That’s when I started looking at Dokploy.
In this post, I’ll share how I moved from that single-server setup to a multi-VPS architecture powered by Dokploy. It’s now simpler, faster, and a lot cheaper, yet still fully self-hosted.
The Old Setup: One Server to Rule Them All
My old setup was classic: one beefy server did everything.
- GitLab runners built my Docker images.
- The runners then SSH’d into the same machine or another Docker host.
- Each deployment pulled the latest image and restarted the containers.
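The deploy step in that era was essentially a script along these lines (a reconstruction for illustration, not my exact script; the hostname and image name are placeholders):

```shell
#!/bin/sh
# Old-style deploy: SSH into the Docker host, pull the new image,
# and restart the container. Host and image are hypothetical.
set -eu

HOST="deploy@docker-host.example.com"
IMAGE="registry.example.com/myapp:latest"

ssh "$HOST" <<EOF
  docker pull $IMAGE
  docker stop myapp || true
  docker rm myapp || true
  docker run -d --name myapp -p 80:3000 $IMAGE
EOF
```

Note the stop-then-start gap: that window is exactly the downtime this kind of setup cannot avoid without extra tooling.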
It worked, but it came with several issues:
- Cost: Bare-metal servers at OVH are great, but overkill for small projects.
- Complexity: Maintaining runners, SSH keys, and manual image pulls added friction.
- Scalability: Everything was tightly coupled to one machine.
- Zero downtime: Hard to achieve with restart-based deployments.
Whenever I wanted to add a staging environment or test a new stack, I had to juggle ports, containers, and firewall rules manually.
Why Move Away from Bare Metal
The main motivation was flexibility and cost. I realized I didn’t need one large, powerful server for everything. Instead, I could run multiple smaller VPSs for specific roles:
- One VPS for GitLab
- One for staging
- One for production
Each VPS is cheaper, isolated, and easier to maintain. And since they’re all running Docker, I could still keep everything consistent.
Updating the kernel of a full bare-metal server would require renting a new one and, with luck, migrating everything within a month to avoid paying for both. With VPSs, it is much easier to move projects individually to new servers.
But I needed a way to orchestrate deployments across them easily. That’s where Dokploy came in.
Enter Dokploy
Dokploy is a self-hosted platform for deploying and managing Docker-based applications. It’s like a lightweight mix between CapRover, Coolify, and Portainer, but with a focus on simplicity and modern CI/CD.
What caught my attention was how easily it integrates with external build pipelines. Instead of having Dokploy build the image, I could let GitLab CI handle the build, push the image to my private registry, and then trigger a Dokploy webhook to deploy the new version.
That meant I could keep my existing GitLab setup while offloading the deployment logic to Dokploy.
The New Setup: One Dokploy to Manage Them All
In my new setup, I now have one VPS running Dokploy alongside some other tools like Grafana and n8n. Dokploy is responsible for deploying multiple environments:
- It deploys GitLab itself to another VPS.
- It deploys my staging environment to a second VPS.
- It deploys my production environment to a third VPS.
Those other VPSs do not run Dokploy themselves. The main Dokploy instance manages everything remotely through its deployment system.
Here’s the high-level workflow:
- GitLab builds the Docker image.
- The image is pushed to my private registry.
- GitLab triggers the Dokploy webhook.
- Dokploy pulls the new image and redeploys the corresponding service.
Here’s a simplified version of the .gitlab-ci.yml snippet:
deploy:
  stage: deploy
  script:
    - curl -X POST https://dokploy.example.com/webhook/PROJECT_ID?token=$DOKPLOY_TOKEN
  only:
    - main

No SSH scripts, no manual pulls, no downtime. Just one clean webhook trigger per environment.
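For context, the build-and-push stage that runs before this deploy job could look something like the following. This is a sketch using GitLab's predefined CI variables for its built-in container registry; if you push to a separate private registry, swap in its URL and credentials:

```yaml
build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Log in using GitLab's predefined registry variables.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Build and push the image that Dokploy will later pull.
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
  only:
    - main
```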
Staging with Basic Auth
One of the small but great Dokploy features is how easy it is to enable Basic Auth for a project. All you have to do is go to Advanced Settings > Security > Add Security > Create.
This keeps random visitors off the staging site and prevents crawlers from indexing it as a duplicate of production.
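Under the hood, Basic Auth is just an `Authorization` header carrying `base64("user:password")`. A quick way to see what the proxy will expect (credentials below are made up for illustration):

```shell
# Basic Auth encodes "user:password" as base64 in the Authorization header.
# These credentials are hypothetical.
CRED=$(printf 'staging:s3cret' | base64)
echo "Authorization: Basic $CRED"

# Once enabled in Dokploy, a protected staging site can be tested with:
#   curl -u staging:s3cret https://staging.example.com/
```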
Zero Downtime with Docker Swarm
Dokploy uses Docker Swarm under the hood, which means zero-downtime updates are just part of the workflow. When a new image is deployed, Dokploy triggers a rolling update: the old container is only stopped once the new one is healthy. There is no need for complex load balancer rules or manual coordination. For me, that was a big win compared to my old docker-compose restart scripts. To enable it, a few settings under Advanced > Cluster Settings are needed, along with a healthcheck route available on the server. The route is called locally and, as soon as it responds successfully, traffic is redirected to the new container and the old one is shut down. Here the route debest.fr/uptime is used for that.
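Outside of Dokploy's UI, the underlying Swarm behaviour can be approximated with plain `docker service` flags. This is only for illustration of what the rolling update does; Dokploy configures this for you, and the service and image names here are hypothetical:

```shell
# Rolling update that starts the new task before stopping the old one,
# gated on a healthcheck hitting the local /uptime route.
docker service update \
  --update-order start-first \
  --update-failure-action rollback \
  --health-cmd 'curl -fsS http://localhost/uptime || exit 1' \
  --health-interval 10s \
  --health-retries 3 \
  --image registry.example.com/myapp:latest \
  my-app
```

The `start-first` order is what closes the downtime window: the replacement container must pass its healthcheck before the old one is removed.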
You can also define resource limits, environment variables, and replicas directly from the UI, which makes managing multiple services more predictable.
Automated Backups with MinIO
Another highlight is how easy it is to schedule backups of both volumes and databases. In the Dokploy dashboard, I configured automatic periodic backups to my MinIO instance running on my local NAS.
Every project can have its own backup schedule, and Dokploy manages the compression and upload automatically. It gives me peace of mind knowing that I have full local control over my backups without depending on AWS.
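To sanity-check that backups actually land in the bucket, the MinIO client is handy. The alias, endpoint, bucket, and file names below are assumptions for illustration:

```shell
# Point the MinIO client at the NAS instance and list recent backups.
mc alias set nas http://nas.local:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
mc ls nas/dokploy-backups/

# Pull a specific dump back down for a restore test:
mc cp nas/dokploy-backups/db-2024-01-01.sql.gz .
```

Periodically restoring a backup like this is the only way to know the schedule is actually producing usable dumps.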
Monitoring and Alerts with Grafana and n8n
The same VPS that runs Dokploy also hosts Grafana. Grafana monitors each VPS and collects metrics like CPU, RAM, and disk usage. When any metric exceeds a certain threshold, Grafana triggers an alert on my Discord server.
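The post does not detail how metrics get into Grafana; one common approach (an assumption on my part, not necessarily this setup) is to run Prometheus node_exporter on each VPS and scrape it from the monitoring box:

```shell
# Expose host-level CPU/RAM/disk metrics on :9100 for Prometheus/Grafana,
# using the invocation from the node_exporter documentation.
docker run -d --name node-exporter \
  --net host --pid host \
  -v /:/host:ro,rslave \
  prom/node-exporter:latest \
  --path.rootfs=/host
```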
This lightweight setup lets me keep a real-time view of all my VPSs and get notified instantly if something’s wrong. Dokploy itself is configured to send Discord messages when a deployment succeeds or fails, and when volume or database backups complete.
I plan to cover this monitoring setup in more detail in a future post.
The Cost and Simplicity Payoff
Since moving from bare metal to VPSs, I’ve cut hosting costs significantly. Each VPS now serves a specific purpose, and if one fails, the others keep running.
Final Thoughts
Dokploy turned out to be a great middle ground between full manual Docker management and large PaaS platforms. It’s simple enough to understand in a day, but powerful enough to handle real deployments with zero downtime and automatic backups.
If you’re still running a big bare-metal setup, consider trying this approach: one Dokploy instance managing multiple small VPSs. Combine it with tools like Grafana for monitoring, and you’ll have a fully self-hosted, automated, and monitored environment with minimal cost.
Stay tuned for the next post, where I’ll dive deeper into how I use Grafana, n8n, and Discord alerts to monitor my infrastructure.
Dennis de Best