What, how, where
Benefits of Qubinets for Apache Kafka® as-a-service
Automatic updates and upgrades. Zero stress.
99.99% uptime. 100% human support.
Super-transparent pricing. No networking costs.
Scale up or scale down as you need.
How Apache Kafka® is used by Qubinets customers
Using a range of supervised learning models, from decision trees to LSTMs, it is possible to train models that predict the next point in a series or classify sequences. When training data is insufficient, unsupervised approaches such as isolation forests, k-means, or DBSCAN can be used instead. Alternatively, autoencoders leverage self-supervision (using the same data as both input and output) to learn what normal data sequences look like. Note: because metric data is collected at fixed time intervals (usually minutes), it is of limited use for real-time observations.
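As a minimal, dependency-free stand-in for the unsupervised detectors mentioned above (isolation forests, k-means, DBSCAN), the sketch below flags metric points that deviate strongly from a trailing window; the `window` and `threshold` parameters are illustrative choices, not Qubinets defaults.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates from the preceding window's mean
    by more than `threshold` standard deviations.

    A simple, self-contained substitute for the unsupervised models above,
    just to illustrate anomaly detection on interval-collected metrics.
    """
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady metric readings with one spike injected at index 15.
metrics = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1, 10.0, 10.2,
           9.9, 10.1, 10.0, 10.2, 9.9, 50.0, 10.1, 10.0]
print(zscore_anomalies(metrics))  # → [15]
```

In production, a proper model (such as an isolation forest or autoencoder) would replace this function, but the interface stays the same: a stream of metric values in, a set of anomalous points out.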
Our experience and achievements range from visualizing data directly from Kafka streams while controlling Flink processors in real time for filtering, to using standard systems such as Grafana, Kibana, and Superset with data sources ranging from relational to OLAP databases.
Built on the Kafka message broker, data flows are set up so that scalable components take raw data; perform filtering, splitting, aggregation, routing, and batching; and finally serve the data to the ML model(s), while also maintaining a metadata store (PostgreSQL). We call this concept of plugging ML models (which satisfy both north- and south-bound interface requirements) into the pipeline the Marketplace.
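The pipeline stages named above can be sketched as plain functions over message records; in the real system these transformations would sit inside Kafka consumers and producers, but the Kafka wiring is omitted here and the stage names, record fields, and parameters are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

def filter_stage(messages, min_value=0.0):
    """Drop malformed or out-of-range raw records."""
    return [m for m in messages
            if "metric" in m and m.get("value", -1.0) >= min_value]

def route_stage(messages):
    """Split/route records into per-metric streams
    (conceptually, one Kafka topic per metric)."""
    routes = defaultdict(list)
    for m in messages:
        routes[m["metric"]].append(m)
    return dict(routes)

def aggregate_stage(stream):
    """Aggregate a routed stream into a single feature (mean value)."""
    return sum(m["value"] for m in stream) / len(stream)

def batch_stage(stream, size):
    """Group records into fixed-size batches for the ML model(s)."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

raw = [
    {"metric": "cpu", "value": 0.5},
    {"metric": "cpu", "value": 1.5},
    {"metric": "mem", "value": 0.25},
    {"value": -1.0},                      # malformed: no metric name
]
routed = route_stage(filter_stage(raw))
print(aggregate_stage(routed["cpu"]))     # → 1.0
print(batch_stage(routed["cpu"], 1))      # two single-record batches
```

A model registered in the Marketplace would consume the batched output on its south-bound interface and publish results north-bound; the metadata store (PostgreSQL in the text) would track which model consumed which batch.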