Bring Your Own Cloud

Apache Kafka-Compatible Data Streaming

10x Cheaper Than Kafka

The best of both self-hosted and cloud

WarpStream's BYOC deployment model gives you the security and data sovereignty benefits of self-hosting, but without any of the management hassle.
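Because WarpStream is Kafka protocol compatible, existing Kafka client libraries typically connect to the Agents without code changes. Here is a minimal sketch with the confluent-kafka Python client, assuming a hypothetical Agent endpoint inside your VPC (any TLS/SASL settings your deployment requires are omitted):

```python
from confluent_kafka import Producer

# Hypothetical Agent endpoint inside your VPC; TLS/SASL settings, if your
# deployment requires them, are omitted for brevity.
producer = Producer({"bootstrap.servers": "warpstream-agents.internal.example.com:9092"})

# Produce exactly as you would against any Kafka cluster.
producer.produce("clickstream", key="user-123", value=b'{"path": "/pricing"}')
producer.flush()
```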

Zero disks

Cloud disks are really expensive and tiered storage is not a solution – it results in more operational burden and no reduction in interzone networking costs.

WarpStream’s Zero Disk Architecture eliminates local disks entirely and reduces storage costs by more than 24x.

Zero interzone networking fees

More than 80% of Kafka costs are not hardware – they’re interzone networking fees. Because WarpStream runs on top of S3-compatible object storage and does not manually replicate data between zones, those fees are completely eliminated. Goodbye, gone forever – never to be seen again, no matter how much you use WarpStream.

Zero-ops auto-scaling

WarpStream's Zero Disk Architecture has more benefits than just eliminating interzone networking and expensive local disks. WarpStream replaces stateful Kafka brokers with stateless Agents, so your team can skip the weekly burden of partition rebalancing, scaling headaches, volume management, capacity planning, and more.
We call all this operational simplicity. We’ve made auto-scaling so easy (just add more containers) that nearly every WarpStream BYOC customer uses it by default. You can view our public-facing dashboard to see our live auto-scaling benchmark.
Compared with Apache Kafka®, WarpStream BYOC delivers:
Zero local disk and EBS volume management
Zero partition rebalancing
Zero broker rebalancing
Zero hot spots and disks
Zero snapshot replication issues
Zero over-provisioning for peak load
Zero networking headaches like VPC peering, NAT gateways, load balancers and private links
Zero-ops auto-scaling (no custom tooling, scripts or operators required)
Perfect provisioning (pay for what you use)
Use Agent Groups to isolate workloads
Deploy data pipelines with no custom code or third-party services

Get started for free

Accounts come pre-loaded with $400 in free credits.
No credit card required to start.

Zero access: secure by default

Make your security and infrastructure teams happy. You’re only responsible for the compute: scheduling and running WarpStream’s stateless containers or Agents. Zero cross-account IAM access or privileges are needed by WarpStream.

The WarpStream Agents run on your VMs, in your cloud account / VPC, and store data in your object storage buckets. Traffic flows seamlessly from producers to consumers without ever leaving your virtual private cloud (VPC). Raw data never leaves your environment.

WarpStream only hosts the cloud control plane, so all it ingests is metadata.
Other BYOC Products
Requires access to your VPC to deploy clusters.

Requires expansive cross-account IAM roles to manage the cluster remotely.

Requires expensive and complex VPC peering.

Remote access is required for support, and elevated permissions can raise security issues. "Break glass" procedures grant root access.
WarpStream's BYOC
The data / metadata split enables WarpStream's control plane to function with no access to your VPC or object storage buckets.

Only metadata is transferred between VPCs.

It is impossible for WarpStream to access your data under any circumstances.

Works in Every Cloud!

WarpStream has native support for AWS S3, GCP GCS and Azure Blob Storage, and works with any cloud or self-hosted environment that offers S3-compatible object storage.

Supported providers include AWS, Google Cloud, Azure, Vultr, IBM Cloud, MinIO, Fly.io, Oracle Cloud, DigitalOcean, and Cloudflare.

Create flex clusters with Agent Groups

Agent Groups are distinct sets of Agents that belong to the same logical cluster and use a shared object storage bucket as the storage and network layer.

Groups enable a single logical cluster to be split into many different "groups" that are isolated at the network / service discovery layer.

Leverage Agent Groups to:

Isolate analytical workloads from transactional ones.

Flex a single, logical cluster across multiple VPCs, regions or cloud providers without the need for complex VPC peering.

Split workloads at the hardware level, while still sharing data sets.
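As a rough sketch of the first two points, assuming each Agent Group exposes its own bootstrap endpoint (the hostnames below are hypothetical and any auth settings are omitted), a transactional producer and an analytical consumer can target different groups while reading and writing the same topics:

```python
from confluent_kafka import Producer, Consumer

# Hypothetical endpoints: each Agent Group has its own service-discovery
# endpoint, but both belong to the same logical cluster and share one bucket.
TXN_BOOTSTRAP = "txn-agents.internal.example.com:9092"
ANALYTICS_BOOTSTRAP = "analytics-agents.internal.example.com:9092"

# Transactional workload writes through the "txn" Agent Group.
producer = Producer({"bootstrap.servers": TXN_BOOTSTRAP})
producer.produce("orders", key="order-42", value=b'{"total_cents": 1999}')
producer.flush()

# Analytical workload reads the same topic through the "analytics" Agent Group,
# on separate hardware, with no cross-cluster replication involved.
consumer = Consumer({
    "bootstrap.servers": ANALYTICS_BOOTSTRAP,
    "group.id": "analytics-backfill",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```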

Pricing Info

Pay only for what you use – writes, cluster-minutes and storage. No hidden fees or charges based on the number of Agents, cores or partitions.

FAQs

Don't see an answer to your question? Check our docs, or contact us directly.

How is BYOC priced?

BYOC clusters are only charged for uncompressed writes, cluster-minutes, and storage. There are no network ingress or egress charges, and we don't bill for reads at all.

Normally, networking charges associated with running Kafka are accrued based on compressed network throughput. However, WarpStream charges for uncompressed data written because we want to offer predictable pricing, and we believe that your bill should not fluctuate based on which compression algorithm you choose to use in your client. This also aligns incentives so that we are encouraged to reduce your cloud infrastructure costs.

WarpStream has no per-Agent, per-node, or per-vCPU charges, and does not charge extra for replication. That's because there is no local data to replicate. WarpStream also has no per-partition charges, and unlike Kafka, there is no requirement to increase the number of WarpStream Agents when the number of partitions increases.

All pricing is transparent and you can learn more about it by visiting our pricing page.
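As a back-of-the-envelope illustration of those three billing dimensions (the rates below are placeholders, not WarpStream's actual prices):

```python
# Illustrative cost model only; plug in the real rates from the pricing page.
def estimated_monthly_bill(uncompressed_writes_gib: float,
                           cluster_minutes: float,
                           storage_gib_months: float,
                           write_rate: float,
                           minute_rate: float,
                           storage_rate: float) -> float:
    # Reads, replication, partition counts, and Agent/vCPU counts are absent
    # on purpose: none of them are billed.
    return (uncompressed_writes_gib * write_rate
            + cluster_minutes * minute_rate
            + storage_gib_months * storage_rate)
```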

What cloud providers does WarpStream support?

The WarpStream Agents have native support for AWS S3, GCP GCS and Azure Blob Storage built in, so they can run in all three of the major cloud providers, plus others like Oracle Cloud, Cloudflare, DigitalOcean, IBM Cloud, MinIO, Vultr and more. You can deploy in any cloud or region as long as you use an S3-compatible object store.

You can read more about object storage support in our documentation.

What data does WarpStream collect? How is access controlled?

Your data never leaves your account. WarpStream only ingests metadata, and WarpStream personnel have zero access to your data or the environment where you deploy the Agents. The metadata transferred to WarpStream cannot be used to access your data, and WarpStream is unable to take actions on your behalf.

To learn more about data isolation and the metadata transferred to WarpStream’s environment, see our documentation, or read our blog post that describes WarpStream’s Zero Access BYOC model in detail.

How does WarpStream ensure data quality? Does it offer a Schema Registry?

WarpStream supports Schema Validation via external schema registries and services like AWS Glue. It also has its own BYOC-native Schema Registry. 

Like all Schema Registries, the WarpStream BYOC Schema Registry ensures data compatibility and compliance by validating schemas during data production and consumption. This helps minimize downstream data issues and enables schemas to evolve without breaking consumers.

In addition, it has unique features that are only possible with WarpStream’s stateless, zero-disk architecture: native integration with the WarpStream Agents, data retrieval via object storage with no intermediate disks, easy scaling, and zone-aware routing that avoids interzone networking fees. There is also no need to wait for “leaders” to be elected, since consensus is handled by WarpStream's metadata store and any Agent can serve both reads and writes.

Learn more about WarpStream BYOC Schema Registry via our docs and announcement blog.
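For example, here is a minimal sketch of schema-validated production with the confluent-kafka Python client, assuming the BYOC Schema Registry is reachable over a Confluent-compatible REST endpoint (the URLs are hypothetical and credentials are omitted):

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

CLICK_SCHEMA = """
{
  "type": "record",
  "name": "Click",
  "fields": [
    {"name": "user_id", "type": "string"},
    {"name": "path", "type": "string"}
  ]
}
"""

# Hypothetical endpoints; point these at your Agents and schema registry.
registry = SchemaRegistryClient({"url": "https://schema-registry.internal.example.com:9094"})
serializer = AvroSerializer(registry, CLICK_SCHEMA)
producer = Producer({"bootstrap.servers": "warpstream-agents.internal.example.com:9092"})

# The serializer registers/validates the schema before the record is sent, so
# incompatible payloads fail at produce time instead of breaking consumers.
payload = serializer({"user_id": "user-123", "path": "/pricing"},
                     SerializationContext("clicks", MessageField.VALUE))
producer.produce("clicks", value=payload)
producer.flush()
```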

Explore More of WarpStream

Managed Data Pipelines
ETL and stream processing from within your WarpStream Agents and cloud account. No additional infrastructure needed. You own the data pipeline – end to end.
Orbit
Offset-preserving replication from any source Apache Kafka cluster. Replicate topics, consumer groups, offset gaps, ACLs and cluster configurations.

