When Heroku Stops Scaling
Heroku pioneered platform-as-a-service and remains excellent for prototyping and small applications. It stops scaling when dyno pricing, performance limitations, and the end of the free tier make it economically and architecturally uncompetitive with modern cloud platforms.
Dyno costs exceed equivalent AWS compute by 3-5x
Heroku's Standard and Performance dynos cost significantly more per compute unit than equivalent EC2 instances, ECS tasks, or Lambda invocations on AWS. When your Heroku compute bill alone crosses $1,000-2,000/month, the premium you are paying for Heroku's managed platform exceeds the operational cost of running the same workloads on AWS with modern deployment tools (Copilot, CDK, or Terraform). The gap widens as you scale: a workload costing $5,000/month on Heroku might cost $1,000-1,500/month on AWS.
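The gap is easy to estimate with back-of-envelope arithmetic. The sketch below compares an always-on fleet of Performance-M dynos against roughly equivalent Fargate tasks; the list prices are assumptions baked in for illustration, so check current pricing for both platforms before relying on the ratio.

```python
# Illustrative Heroku-vs-AWS compute cost comparison. All prices below are
# assumptions for this sketch, not authoritative quotes.

HEROKU_PERFORMANCE_M_MONTHLY = 250.0   # assumed $/month per Performance-M dyno
FARGATE_VCPU_HOURLY = 0.04048          # assumed us-east-1 $/vCPU-hour
FARGATE_GB_HOURLY = 0.004445           # assumed us-east-1 $/GB-hour
HOURS_PER_MONTH = 730

def fargate_monthly(vcpu: float, memory_gb: float) -> float:
    """Monthly cost of one always-on Fargate task at the assumed rates."""
    return (vcpu * FARGATE_VCPU_HOURLY + memory_gb * FARGATE_GB_HOURLY) * HOURS_PER_MONTH

# Ten Performance-M dynos vs. ten 1 vCPU / 2 GB Fargate tasks:
heroku_cost = 10 * HEROKU_PERFORMANCE_M_MONTHLY
aws_cost = 10 * fargate_monthly(vcpu=1, memory_gb=2)

print(f"Heroku: ${heroku_cost:,.0f}/mo  AWS: ${aws_cost:,.0f}/mo  "
      f"ratio: {heroku_cost / aws_cost:.1f}x")
```

The exact multiple depends on how closely you size the Fargate tasks to the dynos' real utilization, which is why the assessment step matters before committing to a migration.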
Heroku Postgres performance ceiling blocks application growth
Heroku Postgres plans have fixed connection limits, storage caps, and row count recommendations. When your application needs more than 500 connections (Standard-tier maximum), exceeds the row counts where Heroku's shared infrastructure maintains performance, or requires read replicas beyond what the plan supports, the managed database has become the bottleneck. AWS RDS or Aurora provide the same managed Postgres experience with significantly higher ceilings for connections, storage, IOPS, and replication.
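Connection pressure is usually dyno count times processes times pool size, so you can check headroom against the plan limit with simple arithmetic. A minimal sketch (the dyno counts and pool sizes are invented for illustration):

```python
def connections_needed(dynos: int, processes_per_dyno: int, pool_size: int) -> int:
    """Worst-case Postgres connections an app will hold open."""
    return dynos * processes_per_dyno * pool_size

STANDARD_TIER_LIMIT = 500  # the plan maximum cited above

# e.g. 20 web dynos, 4 worker processes each, 10 connections per pool:
needed = connections_needed(20, 4, 10)
print(needed, "needed vs", STANDARD_TIER_LIMIT, "available")  # 800 vs 500
```

In practice teams interpose a pooler such as PgBouncer to stretch the limit, but that only delays the ceiling rather than removing it.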
30-second request timeout kills long-running operations
Heroku's router enforces a 30-second timeout on all HTTP requests. When your application needs to handle file uploads, complex report generation, data exports, or AI inference that exceeds 30 seconds, you must architect around this hard limit with background workers and polling, adding complexity that would not exist on a platform without this constraint. AWS gives you far more headroom: ALB idle timeouts are configurable well beyond 30 seconds, Lambda functions can run for up to 15 minutes, and long-running work can move to ECS tasks with no request timeout at all.
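The workaround the 30-second limit forces is the enqueue-and-poll pattern: the request handler kicks off the slow work and returns a job id immediately, and the client polls a status endpoint. A minimal in-process sketch of that pattern (a production app would use a real queue such as Redis plus a worker dyno; all names here are illustrative):

```python
import threading
import time
import uuid

jobs: dict[str, dict] = {}  # job_id -> {"status": ..., "result": ...}

def submit_job(task, *args) -> str:
    """Start a slow task in the background and return a job id immediately,
    so the HTTP handler responds well inside the 30-second window."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "pending", "result": None}

    def run():
        jobs[job_id]["result"] = task(*args)
        jobs[job_id]["status"] = "done"

    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id: str) -> dict:
    """What a client-facing GET /jobs/<id> endpoint would return."""
    return jobs[job_id]

# Usage: a "report" that takes longer than one request should.
def slow_report(n: int) -> int:
    time.sleep(0.1)  # stand-in for minutes of real work
    return sum(range(n))

job_id = submit_job(slow_report, 100)
while poll_job(job_id)["status"] != "done":
    time.sleep(0.02)
print(poll_job(job_id)["result"])
```

On a platform with configurable timeouts, many of these operations could stay synchronous and this whole layer disappears.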
No infrastructure-as-code or environment parity
Heroku's deployment model uses Git push and a web dashboard. When your team needs infrastructure-as-code (Terraform, CloudFormation), environment parity between staging and production, custom networking (VPCs, private subnets), or fine-grained IAM controls, Heroku's simplified model becomes a limitation. The same simplicity that makes Heroku fast to start makes it increasingly difficult to implement enterprise operational practices as the team and application mature.
Salesforce acquisition has created platform uncertainty
Since Salesforce's acquisition of Heroku, the platform has seen reduced investment in new features, the elimination of the free tier, and organizational changes that have created uncertainty about Heroku's long-term roadmap. When platform stability and long-term investment trajectory are factors in infrastructure decisions, this uncertainty creates risk. AWS's scale and investment trajectory provide a more predictable platform foundation for multi-year application roadmaps.
What to do when Heroku hits these limits
If cost is the primary driver, evaluate AWS App Runner or ECS Fargate as the migration target — both provide managed container hosting with Heroku-like deployment simplicity at significantly lower cost. App Runner is the closest analogue to Heroku's push-to-deploy model. If you need more control, ECS Fargate with AWS Copilot CLI provides a similar developer experience with full AWS capability underneath.
If you are moving to AWS for the first time, start by migrating your most expensive Heroku dyno workloads first: this captures the largest cost savings immediately. Migrate databases last, as they carry the most risk. Note that Heroku's follower feature only creates read replicas within Heroku Postgres, not on external databases, so plan the database move as a dump-and-restore (via heroku pg:backups) or a logical-replication tool, with a brief cutover window.
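The "most expensive first, databases last" ordering is simple to make explicit. A toy sketch with invented app names and costs:

```python
# Monthly dyno spend per Heroku app ($/month, invented for illustration).
dyno_workloads = {"api": 3200, "worker": 1800, "admin": 250}

# Migrate highest-spend workloads first to capture savings earliest;
# database add-ons are scheduled separately, at the end.
order = sorted(dyno_workloads, key=dyno_workloads.get, reverse=True)
print(order)  # ['api', 'worker', 'admin']
```

In a real migration each entry would also carry dependencies (shared add-ons, inter-app APIs) that constrain the order, but cost-descending is the right default.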
Evaluate Your Migration Options
Get a free technical assessment and understand whether migration or optimization is the right path.
See Full Migration Process