Blog
Reviews, benchmarks, and provider deep-dives. We test what we recommend — every review uses the same playbook so the numbers are comparable.
RTX 3090 Cloud Pricing: Runpod, Vast.ai, Vultr Compared
We pitted three providers against each other for budget 3090 rentals, tracking costs, stability, and real-world performance for ML workloads.
Runpod Bare-Metal vs Serverless: Llama 3 8B Cost and Latency
We put Llama 3 8B through its paces on Runpod's bare-metal pods and its Serverless platform, measuring real costs, cold starts, and throughput.
A100 Cloud Pricing: Runpod, Vultr, Lambda, Vast.ai Battle for Your DL Dollars
We put four A100 providers through our standard LLM inference benchmark and tracked every dollar, queue, and cold-start in the weeks leading up to May 2026.
AMD MI300X vs H100: Cloud LLM Inference, Price-Per-Token
We pitted AMD's new challenger against Nvidia's incumbent for Llama 3 70B inference in the wild.
H200 Cloud Pricing: The Hunt for Nvidia's Newest GPU
We scoured Runpod, Lambda Labs, and Vultr for Nvidia's H200, comparing listed prices, actual availability, and the hidden costs that follow the hype.
Nvidia L40S: Does Vultr, Runpod, or Lambda Labs Justify the Cost?
We rented L40S instances from three providers for a week to see where your inference dollars go farthest.
RTX 4090 Cloud Rentals: SDXL Performance vs. Price
We put Runpod, Vast.ai, and Vultr's RTX 4090 instances through their paces with Stable Diffusion XL workloads.
Egress Fees Still Trap You in 2026: A Four-Provider Audit
We dug into AWS, Azure, GCP, and Cloudflare R2 to see if the data gravity problem has actually eased.
Runpod Serverless Cold Starts: A Thousand Invocations, Three Weeks Later
We measured cold start latency for a common PyTorch model across 1,000 invocations on Runpod Serverless.
Runpod review: bare-metal H100s without the enterprise tax
Six weeks on Runpod across Community Cloud, Secure Cloud, and Serverless. The benchmarks, the bills, and where it falls short of AWS and Lambda.
Hetzner AX52 vs OVH Rise-3: which dedicated box wins on $/perf?
Two of the cheapest mid-tier dedicated servers in Europe, head-to-head over a 30-day rental.
Vast.ai for hobbyist ML: when the marketplace beats Runpod
A field guide to renting GPUs from random people on the internet — and why it's sometimes the right call.
Lambda Labs review: pretty UI, predictable bill, painful queue
Lambda is the boring-good option for ML teams that hate surprises. Here's what we'd warn you about.
The actual cost of egress on AWS, Hetzner, OVH and Runpod
Egress is the hidden tax of the cloud. We modelled four real workloads against four providers.
Hetzner Cloud CCX13 review: when dedicated cores are worth it
Hetzner's smallest dedicated-CPU plan, on a one-month real workload. Spoiler: it's good.
Cold start times: Runpod Serverless vs Modal vs Replicate
Three serverless GPU platforms, 1,000 cold-start invocations each, the same Llama 3 8B container.
Setting up a Palworld dedicated server on Hetzner CX22 for £4/mo
Step-by-step: spin up a Hetzner Cloud box, install steamcmd, run Palworld in tmux, and survive.
Why we moved off DigitalOcean Droplets after 6 years
Six years of habit, ended by a series of small annoyances and one big one.
OVH Eco vs Hetzner Server Auction: where the real bargains live
Two markets for second-hand dedicated servers, both badly designed, both worth knowing.
Vultr Cloud GPU review: A100s on tap, but read the fine print
Vultr's GPU offering is finally GA. We rented an A100 for two weeks and ran the suite.
Linode dedicated CPU benchmark: Akamai era, six months on
We ran the same workload on Linode every quarter for two years. Here's what changed under Akamai.
How to actually benchmark a server: our standard playbook
The exact suite we run on every provider review. Steal it, run it on your own boxes, send us yours.
Paperspace Gradient review: Jupyter-first GPU rentals in 2026
DigitalOcean's GPU subsidiary, three years after acquisition. Still a good notebook tool, still expensive.
Tenstorrent's first Wormhole rentals: who's selling them?
Wormhole-based instances are starting to appear. We tracked down which providers actually have them.
Self-hosting Llama 3 70B: cheapest providers per million tokens
Eight providers, the same prompt distribution, ranked by cost per million output tokens.
Runpod Serverless deep-dive: cold starts, queueing, billing edges
We pushed Runpod Serverless to its limits over a 30-day production deployment. The good and the gotchas.
Are bare-metal H200s actually shipping yet? We asked nine providers
H200 SKUs are listed on a lot of websites. Far fewer providers actually have inventory.
Game server hosting in 2026: PufferPanel + Hetzner is hard to beat
The DIY stack for hosting Minecraft, Palworld, Valheim and friends without paying a managed-host markup.