For years, serverless was sold as the future.
No servers to manage. Just write a function, deploy it, and let the platform scale it for you. AWS Lambda kicked it off, and then everyone followed: Vercel, Cloudflare Workers, Deno Deploy.
But the dream is falling apart.
Last week, Deno quietly announced that they're scaling back Deploy's global footprint from 35 regions to just 6. And performance actually improved.
It's not a sign that serverless is dead, but it is a clear signal: trying to stretch Functions-as-a-Service (FaaS) into a general-purpose app platform hasn't worked.
What Serverless Promised
The original pitch was simple.
Write a small function. Don't worry about infrastructure. It scales when you need it. You only pay when it runs.
And it works well for the right use cases:
- Webhooks
- Scheduled tasks
- Background jobs
- Small APIs
- Bursty traffic
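For those cases, the unit of deployment really is just a function. A minimal sketch in the Workers/Fetch style (the route and response bodies here are made up, not from any real product):

```javascript
// A function in the FaaS sense: pure with respect to the request,
// no state kept between invocations, nothing to provision.
const handler = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/ping") {
      return new Response("pong", { status: 200 });
    }
    return new Response("not found", { status: 404 });
  },
};
```

The whole contract is "request in, response out", which is exactly what the platform can scale and bill per invocation.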
But once you start building full products, the cracks show fast.
Real Apps Aren't Stateless
Most real-world apps:
- Talk to a database
- Rely on fast and consistent latency
- Need sessions, auth, background processing
- Require long-running or multi-step logic
Trying to force that into a stateless function that spins up in a random region leads to cold starts, latency spikes, and awkward workarounds.
Your app isn't a webhook. You need control. You need state. You need things that serverless just doesn't do well out of the box.
Deno's Pivot: Back to the Basics
Deno just confirmed this.
They scaled back to fewer regions because edge compute wasn't helping most use cases. Almost every app needed to call a database, usually pinned to a single region. Cold regions caused latency spikes, and routing a request to a warm region farther away was often faster than hitting a cold one nearby.
So they're pivoting.
Instead of trying to be an everywhere-at-once function platform, Deno is moving toward a full app hosting platform. They're adding:
- Region pinning
- KV storage
- Durable Objects-style state+compute
- Background tasks
- Subprocesses
- Build pipelines
In other words, they're slowly recreating the things servers and containers have offered for years.
This Isn't a Serverless Problem, It's a Misuse Problem
Let's be clear: serverless isn't bad. Misusing it is.
In my own product, I use Cloudflare Workers for stateless compute. It works great for things like:
- Running logic close to the user
- Logic with no database access, or only a single upstream request
- High-burst traffic patterns
This is where serverless makes sense. It's lightweight, fast, and scales to zero. But that's also where the boundary should be.
Once you start layering on state, background jobs, and region awareness, you're not writing a function anymore; you're building an app. And the moment you try to build an app on top of a FaaS model, you run into limitations.
Serverless Platforms Are Quietly Rebuilding the Server
This isn't just Deno.
- Cloudflare has KV, R2, Durable Objects, and their own database (D1).
- Vercel recently introduced Fluid Compute, allowing multiple requests inside a single Lambda instance to avoid cold start overhead.
- Deno is going full app-hosting with long-lived processes, caching, background tasks, and region control.
They're all recreating the things that made servers and containers useful. But now it's buried behind layers of proprietary tooling.
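One concrete version of this is the warm-instance pattern that Fluid-style runtimes lean on: expensive setup is hoisted out of the handler so every request served by the same instance reuses it. A sketch with a stand-in connect() (hypothetical; any real client setup would go here):

```javascript
// Stand-in for a real client setup; in production this would be
// e.g. a connection pool or an HTTP client with keep-alive.
let connectCalls = 0;
async function connect() {
  connectCalls += 1;
  return { query: async (sql) => `rows for: ${sql}` };
}

// Module-level state survives across invocations within one warm
// instance, so setup runs once, not once per request.
let dbPromise = null;
function getDb() {
  dbPromise ??= connect();
  return dbPromise;
}

async function handler() {
  const db = await getDb();
  return db.query("select 1");
}
```

Notice how much of "no servers to manage" quietly depends on there being a long-lived server process underneath to keep that module state warm.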
The Lock-In Trap
Here's the other issue: vendor lock-in.
All these platforms are introducing proprietary services that only work on their own stack:
- Cloudflare's KV, Durable Objects, Queues
- Deno's upcoming persistent compute and storage
- Vercel's edge and now Fluid Compute
Most of these are closed or "open core" at best. You can't self-host them. Even if they're technically open source, they rely on infrastructure scale that only the platform can provide.
FaaS was supposed to free you from infrastructure. Instead, you're now tightly coupled to someone else's idea of how apps should work.
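If you do adopt one of these services, the standard hedge is to own the interface yourself and keep the vendor behind a thin adapter. A sketch: app code depends on this get/put shape, shown here with an in-memory implementation; a Cloudflare KV or Deno KV adapter would be a few lines wrapping the platform client:

```javascript
// The interface your app code sees. Any backend (memory, Redis,
// Cloudflare KV, Deno KV) is a small adapter implementing it.
class MemoryKV {
  #store = new Map();
  async get(key) {
    return this.#store.has(key) ? this.#store.get(key) : null;
  }
  async put(key, value) {
    this.#store.set(key, value);
  }
}

// App logic written against the interface, not a vendor SDK.
async function cachedLookup(kv, key, compute) {
  const cached = await kv.get(key);
  if (cached !== null) return cached;
  const value = await compute(key);
  await kv.put(key, value);
  return value;
}
```

It doesn't make migration free, but it keeps the vendor-specific surface area to one file instead of your whole codebase.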
Most Apps Don't Need to Be Global
The majority of apps are perfectly fine living in a single region.
Forcing global distribution adds complexity for very little gain. You're suddenly dealing with consistency, replication, latency trade-offs, and CAP theorem headaches without actually needing to.
We've seen attempts to make databases work seamlessly across multiple regions, but most of them come with rough edges. Transactions, failover, and consistency all get harder the moment you go global.
Unless you really need it, you're better off staying in one region and scaling out later if you have to. Most apps never reach that point.
Serverless Has Hard Limits
There are real limitations that don't show up until your app grows:
- Execution time limits (varies by platform, but always short)
- Memory limits per request
- Concurrency issues (shared state between invocations is tricky)
- Cold starts that affect performance on first request
- Function size limits and deploy quirks
- Incomplete Node.js compatibility (especially on Cloudflare Workers)
For example, on Cloudflare Workers, you might import a package in dev, deploy it, and find out it silently fails or throws weird errors. Then you're hunting for polyfills or rewriting logic just to make it work.
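A cheap defense is to probe the runtime instead of assuming Node: try the builtin, and fall back to the standard Web API when it isn't there. A hedged sketch for crypto (the catch branch is the path a Workers-like runtime without `node:crypto` would take):

```javascript
// Prefer the Node builtin; fall back to the Web Crypto global that
// Workers-style runtimes expose instead.
async function getRandomUUID() {
  try {
    const { randomUUID } = await import("node:crypto");
    return randomUUID();
  } catch {
    // No node:crypto in this runtime; Web Crypto is the standard surface.
    return globalThis.crypto.randomUUID();
  }
}
```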
These friction points add up. What starts as "less to manage" turns into "harder to debug."
The Takeaway
Serverless isn't dead. It's just shrinking back to the role it should've always had: a specialized tool.
Use it for:
- Quick APIs
- Background jobs
- Scheduled tasks
- Stateless edge compute
But for full products? Stick to what works:
- Containers
- App servers
- Single-region setups
- Predictable infrastructure
Trying to force every workload into a serverless model leads to complexity, tech debt, and a mess of proprietary services that lock you in.
One Last Thing
I'm building a feedback platform, and I use serverless where it makes sense: stateless logic at the edge, quick compute tasks, stuff that scales on burst.
But for the core product? I stick to servers and containers.
Serverless is a tool. Not a foundation.
Use it smartly.