
Managed Databases vs Self-Hosted in 2026: The Real Performance Cost of Convenience
April 30, 2026

Once you have committed to a European cloud provider — Hetzner, OVHcloud, Scaleway, or similar — a second decision follows quickly: do you use their managed database offering, or do you provision compute and run PostgreSQL, MySQL, or MariaDB yourself?
The marketing pages make managed services sound obvious. Automated backups, point-in-time recovery, high availability out of the box. But those conveniences come with a performance cost that is rarely discussed in concrete terms. Below, we break down the numbers — IOPS, latency, throughput — so you can weigh the trade-off with your eyes open.
A Note on Methodology
The figures in this article reflect ranges commonly observed across standard European cloud configurations in 2026: 8–16 vCPU instances with NVMe local storage for self-hosted setups, and equivalent-tier managed database plans. We are comparing like-for-like price brackets, not a bare-metal dedicated server against the cheapest managed tier. Real-world results vary with workload shape, dataset size, and provider-specific implementation. Treat these as representative magnitudes, not guarantees.
IOPS: Transaction Volume Under Load
For OLTP workloads — the typical mix of inserts, updates, and point lookups that drives most web applications — raw IOPS is the headline metric.
Self-hosted PostgreSQL or MySQL on local NVMe storage routinely sustains 85,000–110,000 mixed read/write IOPS on a well-tuned instance. Dedicated bare-metal servers from Hetzner or OVHcloud push past 150,000 IOPS because there is zero virtualisation overhead between the database engine and the drive.
Managed database services from the same providers typically land between 10,000 and 15,000 IOPS on comparable plan tiers.
The gap is not a mystery. Managed services rely on network-attached block storage — Ceph clusters, in most cases — to decouple data from any single physical host. That architecture is what enables transparent failover and live resizing, but it also means every I/O operation crosses the network fabric before hitting a disk. Providers further cap per-volume IOPS to protect cluster stability and ensure fair sharing across tenants.
If your workload genuinely requires sustained throughput above 20,000–30,000 IOPS on a single node, managed services will hit a ceiling that no plan upgrade can lift. Running the database yourself on local storage removes that ceiling entirely.
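To make that ceiling concrete, here is a back-of-envelope sketch of how an IOPS cap bounds transaction throughput. The ceiling values, the I/O-per-transaction figure, and the cache hit ratio are illustrative assumptions, not provider guarantees:

```python
def max_tps(iops_ceiling: int, ios_per_txn: float, cache_hit_ratio: float = 0.9) -> float:
    """Rough upper bound on transactions/second given a storage IOPS ceiling.

    Only buffer-cache misses reach the disk, so effective I/O per
    transaction is scaled by (1 - cache_hit_ratio).
    """
    effective_ios = ios_per_txn * (1.0 - cache_hit_ratio)
    return iops_ceiling / effective_ios

# A managed-tier ceiling of ~12,000 IOPS vs local NVMe at ~100,000 IOPS,
# assuming ~20 logical I/Os per transaction and a 90% cache hit rate.
managed_tps = max_tps(12_000, ios_per_txn=20)   # ~6,000 TPS
local_tps = max_tps(100_000, ios_per_txn=20)    # ~50,000 TPS
print(f"managed: ~{managed_tps:,.0f} TPS, local NVMe: ~{local_tps:,.0f} TPS")
```

The exact numbers depend heavily on workload shape, but the structure of the bound holds: once the IOPS cap is the binding constraint, no amount of query tuning on the application side lifts it.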
Latency: The Milliseconds Your Users Feel
Average query latency translates directly into application response time. Even small differences compound across the dozens of queries a single page load or API call might trigger.
A self-hosted database reading from local NVMe responds to simple key-value lookups in roughly 0.3–0.5 ms. Enable synchronous replication to a standby node — which you should, unless you are comfortable losing the last few seconds of writes during a failure — and write latency rises to 1.2–1.8 ms as each commit waits for network acknowledgement.
Managed services typically report 2.5–4.0 ms average latency for equivalent operations. The extra time comes from two sources layered on top of each other: network-attached storage adds a round trip to every I/O call, and the built-in replication that guarantees durability adds another.
For a standard SaaS application making 20–40 queries per page, the difference between 1.5 ms and 3.5 ms per query amounts to roughly 40–80 ms of additional page load time. Most users will never notice. But for latency-sensitive workloads — real-time bidding, financial order matching, multiplayer game state — those milliseconds are the entire margin. In those scenarios, local storage is not an optimisation; it is a hard requirement.
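The arithmetic behind that 40–80 ms figure is simple enough to sketch. This assumes queries run sequentially, which is the worst case; applications that parallelise independent queries will see less compounding:

```python
def added_page_latency_ms(queries_per_page: int, self_hosted_ms: float, managed_ms: float) -> float:
    """Extra page latency when every query runs sequentially."""
    return queries_per_page * (managed_ms - self_hosted_ms)

# Using the per-query averages quoted above: 1.5 ms self-hosted vs 3.5 ms managed.
low = added_page_latency_ms(20, 1.5, 3.5)   # lighter page: 40 ms
high = added_page_latency_ms(40, 1.5, 3.5)  # heavier page: 80 ms
print(f"added latency: {low:.0f}-{high:.0f} ms per page")
```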
Throughput: Moving Data in Bulk
Sequential read and write throughput matters most for analytics queries scanning large tables, nightly ETL jobs, bulk imports, and full backup runs.
Self-hosted instances with direct NVMe access sustain sequential read throughput around 2,500–3,000 MB/s and sequential writes around 1,200–1,500 MB/s on current-generation drives.
Managed services cap out around 300–500 MB/s for reads and 200–300 MB/s for writes. Again, the bottleneck is network-attached storage: providers throttle per-volume bandwidth to prevent a single tenant from saturating shared storage infrastructure.
The practical impact is significant for batch-heavy workloads. A nightly analytics pipeline processing 50 GB of data completes in under 20 seconds on local NVMe versus several minutes through managed storage. A full logical backup of a 200 GB database that takes two minutes self-hosted might take ten minutes or more through a managed service’s storage layer.
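Those durations fall straight out of the throughput figures. The sketch below computes storage-bound lower bounds only; a real logical backup also pays CPU cost for serialisation, so actual times run longer:

```python
def duration_seconds(data_gb: float, throughput_mb_s: float) -> float:
    """Time to move data_gb sequentially at a given throughput (storage-bound lower bound)."""
    return data_gb * 1024 / throughput_mb_s

# 50 GB nightly pipeline, using representative mid-range figures from above.
print(f"pipeline on NVMe:    {duration_seconds(50, 2_750):.0f} s")   # ~19 s
print(f"pipeline on managed: {duration_seconds(50, 400):.0f} s")     # ~2 minutes
# 200 GB logical backup, bounded by sequential read throughput.
print(f"backup on NVMe:      {duration_seconds(200, 2_750) / 60:.1f} min")
print(f"backup on managed:   {duration_seconds(200, 400) / 60:.1f} min")  # ~8.5 min
```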
What the Numbers Do Not Tell You
The performance comparison above paints a clear picture: self-hosted wins on every raw metric by a wide margin. If this were the only consideration, the article would end here.
It is not.
Matching the operational reliability of a managed database service is a serious engineering undertaking. You need automated failover with connection rerouting, streaming replication monitoring, backup scheduling with offsite storage and regular restore testing, OS-level patching on a cadence that does not conflict with maintenance windows, and 24/7 alerting for replication lag, disk usage, and connection pool exhaustion. Under NIS2 and DORA — both now in enforcement across the EU — you also need documented recovery procedures and evidence that you actually test them.
None of this is impossible. But it requires either a dedicated database team or a very disciplined platform engineering practice. For a five-person startup shipping features at full speed, managing your own PostgreSQL cluster is time spent not building your product.
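To give a flavour of what "24/7 alerting for replication lag" means in practice, here is a minimal sketch of the threshold check at its core. The threshold and the sampled lag values are hypothetical; in a real deployment the lag figures would come from the database itself (for example `pg_stat_replication` on PostgreSQL or `SHOW REPLICA STATUS` on MySQL), not a hard-coded dictionary:

```python
def lag_alerts(lag_by_standby: dict[str, float], threshold_s: float = 10.0) -> list[str]:
    """Return the names of standbys whose replication lag exceeds the threshold."""
    return [name for name, lag in lag_by_standby.items() if lag > threshold_s]

# Hypothetical sampled values: seconds each standby trails the primary.
sampled = {"standby-a": 0.4, "standby-b": 37.2}
print(lag_alerts(sampled))  # standby-b has fallen behind
```

The check itself is trivial; the operational burden is everything around it: collecting the samples continuously, routing alerts to someone awake, and deciding what the on-call response actually is.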
When Self-Hosted Is the Right Call
Choose self-hosted when your application genuinely pushes the boundaries of what managed services offer. Concrete signals include sustained query volumes above 20,000 IOPS on a single node, p99 latency budgets below 2 ms for write paths, batch processing jobs where the managed storage throughput cap turns a 30-second task into a 10-minute bottleneck, or workloads with unpredictable I/O spikes that risk hitting provider rate limits.
If you recognise your system in any of those descriptions, self-hosting is not just a preference — it is the architecture that fits your requirements. If you are moving databases away from a hyperscaler, this is often the moment to make the switch.
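The signals above can be condensed into a rough decision sketch. The thresholds are this article's representative figures and the function is purely illustrative; treat them as starting points for your own analysis, not rules:

```python
def prefer_self_hosted(sustained_iops: int, p99_write_budget_ms: float, batch_mb_s_needed: float) -> bool:
    """True when any single signal pushes past a typical managed-service envelope."""
    return (
        sustained_iops > 20_000           # beyond common managed IOPS ceilings
        or p99_write_budget_ms < 2.0      # tighter than network-attached storage delivers
        or batch_mb_s_needed > 500        # above common managed throughput caps
    )

print(prefer_self_hosted(sustained_iops=35_000, p99_write_budget_ms=5.0, batch_mb_s_needed=100))  # True
print(prefer_self_hosted(sustained_iops=4_000, p99_write_budget_ms=8.0, batch_mb_s_needed=150))   # False
```

Note the `or`: a single hard requirement is enough, because no managed plan upgrade relaxes these caps.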
When Managed Databases Make More Sense
Choose managed services when your workload sits comfortably within their performance envelope. For the majority of web applications, internal tools, and standard SaaS platforms, 10,000–15,000 IOPS and 3 ms average latency are more than sufficient. You gain automated backups with tested restore paths, high availability without hand-rolled failover scripts, and compliance-friendly audit trails — all operated by a team whose sole job is keeping that infrastructure running.
The trade-off is real, but it is also rational: you exchange raw throughput for operational predictability and reclaim engineering hours for the work that differentiates your product. If managing the underlying infrastructure is not your core competency, a managed cloud hosting partner can handle the operational layer — whether you choose managed databases or self-hosted — so your team stays focused on shipping.
