New tutorial: Kafka Streams - Aggregate APIs
It gives me great pleasure to announce that the third, and final, tutorial in the quick-start series is now live.
The Kafka Streams aggregate API tutorial builds upon the work done in the first Basic Kafka Streams tutorial, walking users through defining the API of an aggregate, wrapping parts of a system that don’t use Creek in an aggregate, and integrating one aggregate with another.
Combined, it’s hoped the quick-start tutorial series provides a great introduction to the power of Creek and how to use it to quickly build a tested, reliable microservice architecture.
I’m very happy to announce this tutorial because it completes the series, but mainly because it means I can stop working on documentation and tutorials for a moment and pivot to coding!
Next on the list of tasks is adding JSON support to Creek. This is a biggie in terms of both effort and impact: Creek’s not much use in a real-world situation until it’s done.
Once JSON support is complete, Creek will be close to moving from alpha to beta release status. Feel free to view the MVP project board to see what’s remaining.
It’s worth noting that, while it isn’t documented yet, the serialisation formats used by Creek Kafka are fully customisable. JSON support is first on the cards, but Avro, Protobuf, and others, including organisation-specific serialisation formats, are easily supportable.
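To give a feel for what a pluggable serialisation format involves, here is a minimal sketch of a hand-rolled JSON serde. The `Serializer` and `Deserializer` interfaces below mirror the shape of Kafka’s own `org.apache.kafka.common.serialization` contracts, but are declared inline so the example is self-contained; the `Order` record, class names, and naive JSON handling are purely illustrative and are not Creek APIs.

```java
import java.nio.charset.StandardCharsets;

// Inline stand-ins mirroring Kafka's Serializer/Deserializer contracts,
// declared here so the sketch compiles without the kafka-clients jar.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

interface Deserializer<T> {
    T deserialize(String topic, byte[] data);
}

// A hypothetical event type used only for this example.
record Order(String id, long amountCents) {}

// Hand-rolled JSON serialisation: fine for this fixed shape,
// a real implementation would delegate to a JSON library.
class OrderJsonSerializer implements Serializer<Order> {
    @Override
    public byte[] serialize(final String topic, final Order order) {
        final String json = "{\"id\":\"" + order.id()
                + "\",\"amountCents\":" + order.amountCents() + "}";
        return json.getBytes(StandardCharsets.UTF_8);
    }
}

class OrderJsonDeserializer implements Deserializer<Order> {
    @Override
    public Order deserialize(final String topic, final byte[] data) {
        final String json = new String(data, StandardCharsets.UTF_8);
        // Naive parsing, valid only for the fixed shape produced above:
        final String id = json.split("\"id\":\"")[1].split("\"")[0];
        final long amount =
                Long.parseLong(json.split("\"amountCents\":")[1].replace("}", ""));
        return new Order(id, amount);
    }
}

public class SerdeSketch {
    public static void main(final String[] args) {
        final Order order = new Order("o-1", 1250);
        final byte[] bytes = new OrderJsonSerializer().serialize("orders", order);
        final Order roundTripped =
                new OrderJsonDeserializer().deserialize("orders", bytes);
        System.out.println(roundTripped.equals(order)); // prints "true"
    }
}
```

Swapping JSON for Avro, Protobuf, or an in-house wire format is then a matter of providing a different serializer/deserializer pair behind the same contract.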
I’ll update you once JSON support is out…