Reliability Guarantees in Apache Kafka®
Speaker: Gwen Shapira, Product Manager, Confluent
In the financial industry, losing data is unacceptable, and financial firms are adopting Kafka for their critical applications. Kafka provides the low latency, high throughput, high availability, and scale that these applications require. But can it also provide complete reliability? As a system architect, when you are asked, “Can you guarantee that we will always get every transaction?”, you want to be able to say “yes” with total confidence.
In this session, we trace everything that happens to a message, from producer to consumer, and pinpoint every place where data can be lost if you are not careful. You will learn how developers and operations teams can work together to build a bulletproof data pipeline with Kafka. And if you need proof that you built a reliable system, we’ll show you how to build the system to prove it, too.
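As a preview of the producer-side settings the session covers, here is a minimal sketch of a durability-focused producer. The broker address and the “transactions” topic are placeholders, but the key settings (acks, retries, enable.idempotence) are standard Apache Kafka producer configurations:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ReliableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            // Wait for all in-sync replicas to acknowledge each write.
            props.put("acks", "all");
            // Retry transient failures instead of silently dropping the record.
            props.put("retries", Integer.MAX_VALUE);
            // Idempotence keeps retries from introducing duplicates (Kafka 0.11+).
            props.put("enable.idempotence", "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                    new ProducerRecord<>("transactions", "txn-42", "amount=100.00");
                // Use the callback: fire-and-forget send() is one of the places
                // where data can be lost without anyone noticing.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        // Surface the failure rather than swallowing it.
                        exception.printStackTrace();
                    }
                });
                producer.flush();
            }
        }
    }

The producer is only one link in the chain; the session pairs settings like these with broker-side and consumer-side choices, such as topic replication factor, min.insync.replicas, and committing offsets only after a record has actually been processed.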
This is part 2 of 5 in the Best Practices for Apache Kafka in Production Confluent Online Talk Series.