- Provide documentation for Karapace REST API
As a developer
I want to use a REST API against my Kafka instance
so that I can write simple scripts without using client libraries.
There doesn't seem to be comprehensive API documentation for the endpoints and functionality supported by the Karapace REST API. The website calls it a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will keep up with changes in the Confluent Kafka REST proxy, it's hard to rely on. Some users would simply prefer documentation for Karapace's own endpoints; a sketch of the kind of script this would enable follows below.
1 vote
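As context for this request, a minimal sketch of the kind of script a documented REST API would enable, assuming Karapace keeps the Confluent REST Proxy v2 produce endpoint and content type; the URL, topic, and credentials below are placeholders.

```python
import requests

REST_URL = "https://my-karapace-rest.example.com"  # hypothetical Karapace REST endpoint
TOPIC = "orders"

# Produce two JSON records via the REST API (Confluent REST Proxy v2-style payload).
resp = requests.post(
    f"{REST_URL}/topics/{TOPIC}",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"id": 1}}, {"key": "k2", "value": {"id": 2}}]},
    auth=("avnadmin", "example-password"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # per-record partition/offset information
```

Having the supported endpoints, content types, and error shapes documented for Karapace itself is what would make scripts like this safe to rely on.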
- Kafka fine-grained ACLs
As an ops engineer,
I want to be able to declare fine-grained ACLs
so that I avoid having to grant "admin" rights to users that only need "DeleteRecords" rights on specific topics.
The client uses Kafka Streams, which needs specific rights (https://docs.confluent.io/platform/current/streams/developer-guide/security.html#required-acl-setting-for-secure-ak-clusters) that are not covered by Aiven's predefined rights.
Currently, the "admin" role is too broad for such access (I don't want the user to be able to create topics); a sketch of the narrower grant follows below.
8 votes
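For illustration, a sketch of the narrower grant being asked for, expressed through the standard Kafka Admin API with the confluent-kafka Python client; the broker address, principal, and topic prefix are assumptions, and whether Aiven exposes this path at all is exactly the open question in this request.

```python
from confluent_kafka.admin import (
    AclBinding, AclOperation, AclPermissionType,
    AdminClient, ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder

# Allow only DeleteRecords (DELETE on the topic) for one user on a topic prefix,
# instead of handing out the much broader "admin" permission.
acl = AclBinding(
    ResourceType.TOPIC,
    "streams-app.",                 # hypothetical topic prefix
    ResourcePatternType.PREFIXED,
    "User:streams-app",             # hypothetical principal
    "*",
    AclOperation.DELETE,
    AclPermissionType.ALLOW,
)
for binding, future in admin.create_acls([acl]).items():
    future.result()  # raises if the broker rejected the ACL
```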
- Kafka tiered storage with external S3 bucket
As a developer
I want to have the ability to use our own S3 bucket for storing Kafka tiered storage
so that I can access the data from S3 and query some data for debugging (without streaming all the data to Kafka); see the sketch below.
3 votes
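A sketch of the debugging access being requested, assuming the tiered segments land in a customer-owned bucket; the bucket name and key layout below are purely hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# List tiered-storage objects for one topic; the prefix layout is an assumption.
resp = s3.list_objects_v2(Bucket="my-kafka-tiered-storage", Prefix="topics/clickstream/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```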
- Improve support for Debezium SQL Server use cases
As an architect, I want to bring business logic out of the database and into a decoupled stream processing / event-driven architecture. With SQL Server, I want to stream changes to Apache Kafka using Debezium (a configuration sketch follows below). This must support use cases where sensitivity classifications in SQL Server are used or required for things like PII.
2 votes
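A hedged sketch of the Debezium SQL Server source described above, registered through the Kafka Connect REST API; hostnames, credentials, and the table list are placeholders, and the property names follow Debezium 2.x (older releases use database.server.name instead of topic.prefix, for example).

```python
import requests

CONNECT_URL = "https://my-connect.example.com"  # placeholder Kafka Connect REST endpoint

connector = {
    "name": "sqlserver-cdc",
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "mssql.example.com",
        "database.port": "1433",
        "database.user": "cdc_user",
        "database.password": "********",
        "database.names": "inventory",
        "topic.prefix": "sqlserver",
        "table.include.list": "dbo.customers",  # example PII-bearing table
        "schema.history.internal.kafka.bootstrap.servers": "kafka.example.com:9092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
    },
}
requests.post(f"{CONNECT_URL}/connectors", json=connector, timeout=10).raise_for_status()
```

Handling SQL Server sensitivity classifications (for example, excluding or masking PII columns) would sit on top of a configuration like this, typically via column exclusion or masking options and single message transforms.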
- Custom Kafka Connectors on Aiven (Bring your own Connector)
As a developer, I want to use custom connectors on Aiven,
so that I can integrate our proprietary data systems and custom applications with Apache Kafka without having to manage the underlying infrastructure.
In addition, this will allow developers to concentrate on building business-critical applications instead of getting tied up with infrastructure tasks.
14 votes
Roadmapped · Michael Tansini (Product Manager, Data Streaming (Kafka and Flink), Aiven.io) responded
- Certificate file support for Kafka connectors
As a developer / security engineer,
I want to be able to authenticate my Apache Kafka connectors via mTLS,
so that I can connect to my external services in a secure way (see the mTLS sketch below).
24 votes
Roadmapped · Michael Tansini (Product Manager, Data Streaming (Kafka and Flink), Aiven.io) responded
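As context, mTLS for a plain Kafka client comes down to pointing the client at a CA certificate, a client certificate, and a private key; the librdkafka-style configuration below (placeholder broker and paths) shows the shape of it, and the request is for an equivalent way to hand such certificate files to managed connectors.

```python
from confluent_kafka import Producer

# Placeholder broker and certificate paths; the point is that the client
# presents its own certificate (mutual TLS), not just verifies the server's.
producer = Producer({
    "bootstrap.servers": "kafka.example.com:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/certs/ca.pem",
    "ssl.certificate.location": "/etc/certs/client.pem",
    "ssl.key.location": "/etc/certs/client.key",
})
producer.produce("healthcheck", b"mTLS works")
producer.flush()
```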
- Ability to choose the Apache Kafka Connect connector version
As a developer,
I want to choose which Apache Kafka Connect connector version to use,
so that I can control the connector version and make sure it is compatible with my applications (see the version-check sketch below).
6 votes
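As a starting point, the stock Kafka Connect REST API already reports which plugin versions a cluster ships; the sketch below (placeholder URL) shows the check that version pinning would make actionable.

```python
import requests

CONNECT_URL = "https://my-connect.example.com"  # placeholder Kafka Connect REST endpoint

# GET /connector-plugins lists each installed plugin with its class, type, and version.
for plugin in requests.get(f"{CONNECT_URL}/connector-plugins", timeout=10).json():
    print(plugin["class"], plugin.get("type"), plugin.get("version"))
```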
- Run Karapace as a dedicated managed service
As a developer,
I want to run Karapace as a fully managed, dedicated service,
so that I can use it with Apache Kafka running on and outside of Aiven.
In addition, I can use the same Karapace service against multiple Apache Kafka services (see the registry sketch below).
17 votes
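Because Karapace exposes the Schema Registry-compatible API, a standalone service could be shared by several Kafka clusters using ordinary registry calls like the one sketched below; the URL, subject, and schema are placeholders.

```python
import json
import requests

REGISTRY_URL = "https://my-karapace.example.com"  # hypothetical standalone Karapace
schema = {"type": "record", "name": "Click", "fields": [{"name": "url", "type": "string"}]}

# Register an Avro schema under a subject (standard Schema Registry API, which Karapace implements).
resp = requests.post(
    f"{REGISTRY_URL}/subjects/clicks-value/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(schema)},
    timeout=10,
)
resp.raise_for_status()
print("schema id:", resp.json()["id"])
```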
- Support ZooKeeper-less (KRaft) mode in Aiven for Apache Kafka
As an SRE,
I want to operate Apache Kafka without ZooKeeper,
so that I can have more resources available for Apache Kafka itself.
In addition, it will allow faster up- and down-scaling of my cluster and support more partitions per broker.
8 votes
- Azure Blob Storage Kafka connector
As a developer,
I want to read and write data to Azure Blob Storage,
so that I can use that data for stream processing and analytics, and back up / restore data in Apache Kafka.
17 votes
Roadmapped · Michael Tansini (Product Manager, Data Streaming (Kafka and Flink), Aiven.io) responded
- ClickHouse sink for Kafka Connect
As a developer,
I want to sink my data into ClickHouse,
so that I can store large volumes of data and run analytics on top of it (a sketch of what the sink would automate follows below).
10 votes
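A rough sketch of what such a sink would automate, done by hand with confluent-kafka and clickhouse-connect; the broker, topic, table, and record layout are assumptions.

```python
import json

import clickhouse_connect
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka.example.com:9092",  # placeholder
    "group.id": "clickhouse-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])
ch = clickhouse_connect.get_client(host="clickhouse.example.com")  # placeholder

# Poll a small batch of JSON events and insert them into a hypothetical events table.
rows = []
for _ in range(100):
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    rows.append([event["user_id"], event["url"]])
if rows:
    ch.insert("events", rows, column_names=["user_id", "url"])
consumer.close()
```

A managed sink connector would replace this hand-rolled loop with offset management, batching, retries, and delivery guarantees handled for you.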
- Support for AWS S3 Source Kafka Connector
As a developer / data engineer,
I want to be able to read data stored in an AWS S3 bucket,
so that I can transfer, process, and transform that data for other applications.
In addition, I can use data stored in an S3 bucket as a backup and rehydrate my Apache Kafka with it (a sketch of the replay flow follows below).
16 votes
Roadmapped · Michael Tansini (Product Manager, Data Streaming (Kafka and Flink), Aiven.io) responded
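A sketch of the rehydration flow the connector would manage, assuming line-delimited records in a placeholder bucket; a real connector would also track offsets so the replay can resume.

```python
import boto3
from confluent_kafka import Producer

s3 = boto3.client("s3")
producer = Producer({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder

# Replay every object under a prefix into a topic, one record per line.
BUCKET = "my-kafka-backup"  # hypothetical backup bucket
for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix="events/").get("Contents", []):
    body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"]
    for line in body.iter_lines():
        producer.produce("events", value=line)
producer.flush()
```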