Cloud Database Services: Managed Options on AWS, Azure, and GCP
Managed cloud database services from Amazon Web Services, Microsoft Azure, and Google Cloud Platform represent a structural shift in how organizations provision, operate, and scale database infrastructure. Rather than managing hardware, OS patching, replication configuration, and failover logic internally, organizations delegate those layers to a cloud provider under a shared-responsibility model. This page maps the service landscape across the three dominant providers, describes how managed database services function mechanically, identifies the professional and operational scenarios where each service category applies, and establishes the decision boundaries that separate appropriate from inappropriate use cases. For broader context on how database infrastructure fits into the overall technology stack, the Database Systems Authority reference covers the full scope of database system types and professional roles.
Definition and scope
Managed cloud database services — a subset of the broader Database as a Service (DBaaS) category — are cloud-hosted database systems in which the provider assumes operational responsibility for infrastructure provisioning, storage management, automated backups, software patching, and high-availability configuration. The subscribing organization retains control of schema design, query logic, user access, and application-layer configuration.
The three dominant providers each maintain distinct service portfolios:
Amazon Web Services (AWS) offers Amazon RDS (supporting MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server engines), Amazon Aurora (a cloud-native relational engine with MySQL- and PostgreSQL-compatible modes), Amazon DynamoDB (a fully managed key-value and document store), and Amazon Redshift (a columnar analytical warehouse). AWS also provides Amazon ElastiCache for in-memory database caching and Amazon Neptune for graph database workloads.
Microsoft Azure provides Azure SQL Database (a fully managed SQL Server–compatible service), Azure Cosmos DB (a multi-model distributed database supporting document, key-value, graph, and wide-column APIs), Azure Database for PostgreSQL and MySQL, and Azure Synapse Analytics for data warehousing and OLAP workloads.
Google Cloud Platform (GCP) offers Cloud SQL (managed MySQL, PostgreSQL, and SQL Server), Cloud Spanner (a globally distributed, strongly consistent relational database), Firestore (a document-oriented NoSQL store), Bigtable (a wide-column store derived from the architecture described in Google's 2006 Bigtable paper published in USENIX OSDI proceedings), and BigQuery (a serverless columnar analytical engine).
NIST SP 800-145 defines cloud deployment models and service categories that form the definitional baseline for understanding where managed database services sit within cloud infrastructure taxonomies (NIST SP 800-145).
How it works
Managed cloud database services operate through a layered abstraction model. The provider controls five infrastructure layers while exposing a management API and console to the subscriber:
- Hardware and compute provisioning — The provider allocates physical or virtualized compute instances from its own data center fleet. Subscribers select instance types (e.g., AWS db.r6g.xlarge) that determine CPU, RAM, and network throughput ceilings.
- Storage management — Underlying storage is abstracted from the subscriber. AWS RDS, for example, uses Amazon EBS volumes with configurable IOPS; Spanner uses Colossus, Google's distributed file system, with automatic storage scaling.
- Automated backup and point-in-time recovery — Providers snapshot the database at configurable intervals and maintain transaction logs sufficient for point-in-time restore (PITR). AWS RDS retains automated backups for a default of seven days, configurable up to a maximum of 35 days. Database backup and recovery procedures at the application layer remain the subscriber's responsibility.
- Replication and high availability — Managed services handle synchronous or asynchronous database replication internally. AWS Aurora maintains 6 copies of data across 3 Availability Zones by default. Azure SQL Database Business Critical tier uses an Always On availability group with 4 replicas. Database high availability SLAs are defined in each provider's published service agreements.
- Patching and version management — The provider applies OS-level and database engine patches during configurable maintenance windows. Major version upgrades may require subscriber-initiated action.
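Aurora's six-copies-across-three-AZs layout described above is a quorum system: per the published Aurora design, writes require acknowledgment from 4 of 6 copies and reads from 3 of 6. The invariants behind those numbers can be checked with a few lines of generic quorum algebra (this is illustrative arithmetic, not an AWS API):

```python
# Quorum invariants for a replicated storage layer, shown with Aurora's
# published parameters (V = 6 copies, write quorum 4, read quorum 3).
# The checks are generic quorum algebra, not provider-specific code.

def quorums_are_valid(v: int, v_write: int, v_read: int) -> bool:
    """A read quorum must overlap every write quorum, and any two write
    quorums must overlap (so there is a single write order)."""
    return (v_read + v_write > v) and (2 * v_write > v)

def tolerated_failures(v: int, quorum: int) -> int:
    """How many copies can be lost while the quorum is still reachable."""
    return v - quorum

assert quorums_are_valid(6, 4, 3)
print(tolerated_failures(6, 4))  # writes survive 2 lost copies (a full AZ)
print(tolerated_failures(6, 3))  # reads survive 3 lost copies (an AZ plus one node)
```

This is why Aurora can keep accepting writes after losing an entire Availability Zone, and keep serving reads after losing an AZ plus one additional copy.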
Database security and access control at the network perimeter (VPC configuration, security groups, private endpoints) and identity layer (IAM policies, database users, role grants) remain subscriber-managed. Database encryption at rest is enabled by default on all three platforms; encryption in transit uses TLS and is configurable at the connection level.
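Enforcing encryption in transit typically comes down to connection configuration on the client side. A minimal sketch of building a libpq-style PostgreSQL connection string that requires certificate verification — the hostname is a hypothetical placeholder, while the `sslmode` and `sslrootcert` keywords are standard libpq options:

```python
# Sketch: a libpq-style connection string that enforces TLS for a managed
# PostgreSQL instance. The endpoint below is a hypothetical placeholder.

def pg_dsn(host: str, dbname: str, user: str,
           sslmode: str = "verify-full", sslrootcert: str = "") -> str:
    parts = [f"host={host}", f"dbname={dbname}", f"user={user}",
             f"sslmode={sslmode}"]
    if sslrootcert:
        # The provider's CA bundle lets the client verify the server
        # certificate, not merely encrypt traffic.
        parts.append(f"sslrootcert={sslrootcert}")
    return " ".join(parts)

dsn = pg_dsn("example.abc123.us-east-1.rds.amazonaws.com", "appdb", "app",
             sslrootcert="global-bundle.pem")
print(dsn)
```

With `sslmode=verify-full`, connections fail rather than silently downgrade if the server certificate or hostname cannot be validated.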
Common scenarios
Managed cloud database services address four structurally distinct operational scenarios:
Transactional application backends — Web and mobile applications requiring low-latency reads and writes at variable scale use RDS, Cloud SQL, or Azure SQL Database. ACID-compliant transaction guarantees are preserved across all three platforms for relational engines. Database connection pooling tools such as PgBouncer or RDS Proxy are commonly layered in front of managed instances to prevent connection exhaustion under high concurrency. Database concurrency control remains a schema and application-level concern.
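The exhaustion problem that PgBouncer and RDS Proxy solve can be illustrated with a toy pool: callers borrow from a fixed set of connections rather than each opening a new one against the server. This is a minimal sketch of the bounding behavior only, not a substitute for a real pooler:

```python
# Toy connection pool demonstrating why pooling caps concurrent server
# connections. Strings stand in for real connections; production systems
# use PgBouncer, RDS Proxy, or a driver-level pool instead.
from queue import Empty, Queue

class ConnectionPool:
    def __init__(self, max_size: int):
        self._idle: Queue = Queue()
        for i in range(max_size):
            self._idle.put(f"conn-{i}")  # stand-in for a real connection

    def acquire(self, timeout: float = 0.1) -> str:
        try:
            return self._idle.get(timeout=timeout)
        except Empty:
            raise RuntimeError("pool exhausted; caller must wait or fail")

    def release(self, conn: str) -> None:
        self._idle.put(conn)

pool = ConnectionPool(max_size=2)
a, b = pool.acquire(), pool.acquire()
# A third acquire would block and then fail rather than opening a
# third server connection.
pool.release(a)
c = pool.acquire()  # reuses the released connection
print(c)  # → conn-0
```

The server never sees more than `max_size` connections regardless of application concurrency, which is exactly the guarantee a proxy-tier pooler provides at scale.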
Globally distributed workloads — Applications requiring low-latency reads across geographically dispersed users use Cloud Spanner (which Google describes as offering external consistency across global nodes) or Azure Cosmos DB's multi-region write capability. These services address CAP theorem tradeoffs by prioritizing consistency and partition tolerance at the cost of increased write latency.
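The write-latency cost mentioned above follows directly from how many cross-region acknowledgments a commit must wait for. A back-of-envelope model, using illustrative round-trip times rather than measured provider latencies:

```python
# Sketch of the synchronous-vs-asynchronous replication latency tradeoff.
# RTT figures are illustrative placeholders, not measured provider numbers.

def sync_commit_latency(local_ms: float, replica_rtts_ms: list,
                        acks_required: int) -> float:
    """A synchronous commit completes once the fastest `acks_required`
    replicas have acknowledged the write."""
    return local_ms + sorted(replica_rtts_ms)[acks_required - 1]

def async_commit_latency(local_ms: float) -> float:
    return local_ms  # replication happens after the commit returns

rtts = [12.0, 85.0, 140.0]  # hypothetical cross-region round trips
print(sync_commit_latency(2.0, rtts, acks_required=2))  # waits for 2nd-fastest ack
print(async_commit_latency(2.0))
```

Strongly consistent multi-region systems pay something like the first number on every write; eventually consistent systems pay the second and accept stale reads in exchange.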
Analytical and reporting workloads — BigQuery, Amazon Redshift, and Azure Synapse Analytics serve columnar database analytical patterns. These platforms are architecturally separated from transactional systems and optimized for full-table scans across billions of rows rather than point lookups. BigQuery's serverless model bills per terabyte scanned (Google Cloud BigQuery pricing documentation).
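Per-terabyte-scanned billing makes query cost a simple function of bytes read, which is why column pruning and partition filters matter so much on these platforms. A back-of-envelope cost model — the rate below is a placeholder, not Google's current list price:

```python
# Scan-cost arithmetic for per-terabyte billing. The $/TiB rate is a
# hypothetical placeholder; consult the provider's pricing page for
# actual figures.

TIB = 2 ** 40  # bytes in one tebibyte

def scan_cost_usd(bytes_scanned: int, usd_per_tib: float) -> float:
    return (bytes_scanned / TIB) * usd_per_tib

# A query scanning 500 GiB at an assumed $5.00/TiB on-demand rate:
print(round(scan_cost_usd(500 * 2**30, usd_per_tib=5.00), 2))  # → 2.44
```

Selecting two columns instead of twenty can cut `bytes_scanned` (and therefore cost) by an order of magnitude on a columnar engine, since unreferenced columns are never read.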
Flexible schema and document storage — DynamoDB, Firestore, and Cosmos DB's document API serve document database patterns where schema flexibility and horizontal scaling are prioritized over join-heavy relational modeling. These platforms apply database sharding transparently, distributing data across partitions without subscriber-managed shard configuration.
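Transparent sharding of this kind usually reduces to hashing a partition key to choose a partition, so the same key always routes to the same shard without the application tracking shard membership. A minimal sketch — `md5` here is just a stable stand-in for whatever hash function the provider uses internally:

```python
# Sketch of hash-based partition routing. Managed stores do this
# internally; md5 is only a deterministic stand-in for the provider's
# actual hash function.
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Same key always lands on the same partition:
assert partition_for("user#1001", 8) == partition_for("user#1001", 8)
# Different keys spread across the partition space:
print({k: partition_for(k, 8) for k in ("user#1001", "user#1002", "order#77")})
```

This is also why partition-key choice dominates performance on these platforms: a low-cardinality or skewed key funnels traffic to a few "hot" partitions that the service cannot split further.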
Decision boundaries
Choosing between managed cloud database providers and service tiers involves four structural decision factors:
Engine compatibility — Applications built on specific SQL dialects (T-SQL, PL/pgSQL, PL/SQL) constrain provider selection. Azure SQL Database offers the highest T-SQL compatibility; Aurora PostgreSQL and Cloud SQL PostgreSQL offer strong fidelity to open-source PostgreSQL behavior. SQL fundamentals and stored procedures and triggers written for one engine require validation before migration to another. Database migration tooling — AWS Database Migration Service, Azure Database Migration Service, and Google Database Migration Service — supports heterogeneous engine moves but introduces schema and code translation risk.
Consistency versus availability tradeoff — Under the CAP theorem framework, strongly consistent systems (Spanner; Aurora Global Database, which routes all writes through a single primary region) sacrifice availability under network partition; eventually consistent systems (DynamoDB default mode, Cosmos DB with relaxed consistency levels) sacrifice read consistency for availability and throughput. The correct choice depends on the application's tolerance for stale reads and the cost of conflicting writes.
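The consistency choice also shows up directly on the bill. In DynamoDB's provisioned-capacity model, a strongly consistent read of up to 4 KB consumes one read capacity unit while an eventually consistent read consumes half a unit, so relaxing consistency halves read capacity cost:

```python
# Read-capacity arithmetic for DynamoDB's provisioned model: reads are
# billed in 4 KB increments, and eventually consistent reads cost half
# as much as strongly consistent ones.
import math

def rcus_per_read(item_size_bytes: int, strongly_consistent: bool) -> float:
    units = math.ceil(item_size_bytes / 4096)  # billed per 4 KB increment
    return units if strongly_consistent else units / 2

print(rcus_per_read(6000, strongly_consistent=True))   # → 2
print(rcus_per_read(6000, strongly_consistent=False))  # → 1.0
```

For read-heavy workloads that tolerate slightly stale data, this halving compounds into a substantial capacity-cost difference at scale.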
Operational cost structure — Managed services eliminate DBA infrastructure management overhead but introduce per-hour instance costs, storage costs, I/O operation costs, and data egress fees. At high sustained workloads, reserved instance pricing (AWS Reserved Instances, Azure Reserved Capacity, GCP Committed Use Discounts) reduces per-hour rates by up to 72% compared to on-demand pricing, per AWS published rate tables (AWS RDS Pricing). Organizations evaluating total cost of ownership should consult database licensing and costs reference material.
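Whether a reserved commitment pays off is a simple break-even calculation against expected utilization, since the commitment is billed whether or not the instance runs. A sketch with hypothetical placeholder rates — substitute figures from the provider's rate tables:

```python
# Break-even sketch for reserved versus on-demand pricing. Hourly rates
# below are hypothetical placeholders, not published provider prices.

HOURS_PER_MONTH = 730  # common cloud-billing approximation

def monthly_cost(hourly_rate: float, utilization: float = 1.0) -> float:
    return hourly_rate * HOURS_PER_MONTH * utilization

def break_even_utilization(on_demand_rate: float, reserved_rate: float) -> float:
    """Fraction of the month an instance must run before the reserved
    commitment (billed regardless of use) beats on-demand pricing."""
    return reserved_rate / on_demand_rate

on_demand, reserved = 0.50, 0.20  # hypothetical $/hour
print(round(break_even_utilization(on_demand, reserved), 2))  # → 0.4
# At 30% utilization, on-demand remains cheaper than the full-month commitment:
print(monthly_cost(on_demand, utilization=0.3) > monthly_cost(reserved))  # → False
```

Below the break-even utilization, on-demand wins; steady 24/7 workloads sit well above it, which is why reserved pricing targets sustained production databases rather than intermittent development instances.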
Compliance and data residency requirements — Regulated industries must verify that managed database services satisfy applicable frameworks. NIST SP 800-53 Rev 5 provides the control baseline most commonly referenced in FedRAMP assessments (NIST SP 800-53 Rev 5). AWS GovCloud, Azure Government, and GCP's Assured Workloads configurations are the provider-specific mechanisms for hosting FedRAMP-authorized managed database workloads. Database auditing and compliance obligations — including audit logging, access controls, and retention policies — remain subscriber responsibilities regardless of provider.
A structured comparison of managed versus self-hosted database deployment patterns, alongside classifications across relational, NoSQL, and NewSQL database categories, is available through the popular database platforms compared reference.
References
- NIST SP 800-145: The NIST Definition of Cloud Computing
- NIST SP 800-53 Rev 5: Security and Privacy Controls for Information Systems and Organizations
- AWS RDS Pricing
- Google Cloud BigQuery Pricing
- FedRAMP Program — General Services Administration
- Google Bigtable: A Distributed Storage System for Structured Data — USENIX OSDI 2006