Links
Uber developed uForwarder, a consumer proxy for Apache Kafka, to address issues such as head-of-line blocking and poor hardware efficiency. The post details the challenges encountered in production and the solutions implemented, including context-aware routing and active head-of-line blocking resolution.
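To make the head-of-line blocking idea concrete, here is a minimal sketch of how a consumer proxy can keep a partition moving past a failing message by diverting it to a retry queue. The names (`process`, `RETRY_QUEUE`) and the in-memory list standing in for a retry topic are illustrative assumptions, not uForwarder's actual API.

```python
# Sketch: active head-of-line blocking resolution in a consumer proxy.
RETRY_QUEUE = []  # stand-in for a dedicated Kafka retry topic


def process(msg):
    """Pretend downstream handler: rejects messages flagged as poison."""
    if msg.get("poison"):
        raise ValueError(f"cannot process message {msg['id']}")
    return f"ok:{msg['id']}"


def consume_partition(messages):
    """A plain consumer would stall the whole partition on the first bad
    message; here the bad message is diverted to the retry queue so the
    committed offset can keep advancing past it."""
    delivered = []
    for msg in messages:
        try:
            delivered.append(process(msg))
        except ValueError:
            RETRY_QUEUE.append(msg)  # in-place retries/backoff elided
    return delivered


partition = [{"id": 1}, {"id": 2, "poison": True}, {"id": 3}]
print(consume_partition(partition))       # ['ok:1', 'ok:3']
print([m["id"] for m in RETRY_QUEUE])     # [2]
```

The key trade-off is that per-partition ordering is given up for the diverted message in exchange for throughput on the healthy ones.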
This article details LinkedIn's transition from ZooKeeper to a new scalable service discovery system designed to handle the demands of a growing number of microservices. The new system, which uses Kafka and a Service Discovery Observer, improves scalability, compatibility, and extensibility while supporting multiple programming languages.
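A rough sketch of the observer pattern the summary names: a component tails a discovery topic and materializes a service-to-endpoints map. The event shapes and the `DiscoveryObserver` class are assumptions for illustration; LinkedIn's actual system is not described at this level in the summary.

```python
# Sketch: a Kafka-backed service discovery observer.
class DiscoveryObserver:
    """Tails a discovery topic and materializes service -> endpoint sets."""

    def __init__(self):
        self.endpoints = {}  # service name -> set of "host:port"

    def apply(self, event):
        # each event announces an instance coming up or going down
        eps = self.endpoints.setdefault(event["service"], set())
        if event["status"] == "UP":
            eps.add(event["endpoint"])
        else:
            eps.discard(event["endpoint"])

    def resolve(self, service):
        return sorted(self.endpoints.get(service, set()))


# Replaying the topic from the beginning rebuilds the full view,
# which is what makes a log a natural fit for discovery state.
obs = DiscoveryObserver()
for ev in [
    {"service": "profile", "endpoint": "10.0.0.1:8443", "status": "UP"},
    {"service": "profile", "endpoint": "10.0.0.2:8443", "status": "UP"},
    {"service": "profile", "endpoint": "10.0.0.1:8443", "status": "DOWN"},
]:
    obs.apply(ev)
print(obs.resolve("profile"))  # ['10.0.0.2:8443']
```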
Eloelo's push notification architecture is designed to handle millions of personalized notifications in real-time, addressing challenges such as volume, latency, and reliability. The system employs an event-driven model with Kafka pipelines, dynamic template orchestration, and a resilient delivery mechanism that includes intelligent retries and fallback strategies to ensure effective communication with users.
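The retry-and-fallback delivery idea can be sketched as follows. The channel functions and the retry budget are illustrative assumptions, not Eloelo's actual implementation.

```python
# Sketch: bounded retries on the primary channel, then a fallback channel.
def deliver(notification, primary, fallback, max_attempts=3):
    """Try the primary channel a bounded number of times, then fall back
    (e.g. push -> in-app inbox) instead of dropping the notification."""
    for attempt in range(1, max_attempts + 1):
        try:
            return primary(notification)
        except ConnectionError:
            # exponential backoff between attempts elided in this sketch
            continue
    return fallback(notification)


def flaky_push(n):
    # provider that is down for the duration of this example
    raise ConnectionError("push gateway unreachable")


def inbox_fallback(n):
    return f"inbox:{n['user']}"


print(deliver({"user": "u42", "body": "hi"}, flaky_push, inbox_fallback))
# inbox:u42
```

Capping attempts before falling back is what keeps a degraded provider from turning into unbounded queue growth upstream.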
LinkedIn has introduced Northguard, a scalable log storage system designed to improve the operability and manageability of data as the platform grows. Northguard addresses the challenges faced with Kafka, including scalability, operability, availability, and consistency, by implementing advanced features such as log striping and a refined data model. Additionally, Xinfra serves as a virtualized Pub/Sub layer over Northguard to further enhance data processing capabilities.
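Log striping, in the abstract, means splitting one logical log into segments placed across storage nodes so no single broker owns the whole log. The segment size and round-robin placement below are invented for illustration and are not Northguard's actual design.

```python
# Sketch: stripe consecutive segments of a log across nodes round-robin.
def stripe(records, nodes, segment_size=2):
    """Assign each fixed-size segment of the log to a node in turn."""
    placement = {n: [] for n in nodes}
    for i in range(0, len(records), segment_size):
        node = nodes[(i // segment_size) % len(nodes)]
        placement[node].append(records[i:i + segment_size])
    return placement


log = ["r0", "r1", "r2", "r3", "r4", "r5", "r6"]
print(stripe(log, ["node-a", "node-b"]))
# {'node-a': [['r0', 'r1'], ['r4', 'r5']], 'node-b': [['r2', 'r3'], ['r6']]}
```

Contrast with Kafka, where a partition lives wholly on its replica set: striping spreads one log's load and lets rebalancing move small segments rather than entire partitions.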
The article discusses the concept of cross-cloud cluster linking, which enables organizations to connect and manage Kafka clusters across multiple cloud environments. This capability facilitates seamless data sharing and resilience in operations, helping businesses to optimize their data architecture. It highlights the benefits of such integrations for enhancing scalability and reliability in data streaming applications.
Klaviyo successfully migrated its event processing pipeline from RabbitMQ to a Kafka-based architecture, handling up to 170,000 events per second while ensuring zero data loss and minimal impact on ongoing operations. The new system enhances performance, scales for future growth, and improves operational efficiency, positioning Klaviyo to meet the demands of over 176,000 businesses worldwide. Key design principles focused on decoupling ingestion from processing, eliminating blocking issues, and ensuring reliability in the face of transient failures.
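The "decouple ingestion from processing" principle can be shown with a minimal sketch in which a plain list stands in for a Kafka topic: ingestion acks as soon as the event is appended, and a consumer drains at its own pace from its own offset, so slow processing never blocks intake. All names here are illustrative, not Klaviyo's code.

```python
# Sketch: ingestion and processing decoupled by a log.
TOPIC = []  # stand-in for a Kafka topic


def ingest(event):
    """Fast path: append and ack immediately, no processing inline."""
    TOPIC.append(event)
    return "acked"


def process_batch(offset, batch_size=2):
    """Independent consumer: reads from its own offset, returns the new one."""
    batch = TOPIC[offset:offset + batch_size]
    return offset + len(batch), [f"handled:{e}" for e in batch]


for e in ["signup", "click", "purchase"]:
    ingest(e)

offset = 0
offset, out = process_batch(offset)
print(out)  # ['handled:signup', 'handled:click']
offset, out = process_batch(offset)
print(out)  # ['handled:purchase']
```

Because the consumer tracks its own offset, a transient processing failure just means replaying from the last committed offset, which is how zero data loss is achieved without blocking ingestion.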