- Certificate file support for Kafka connectors
As a developer / security engineer,
I want to be able to authenticate my Apache Kafka connectors via mTLS,
so that I can connect to my external services in a secure way.
24 votes
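For context, a minimal sketch of what mTLS authentication looks like for a plain Kafka client using confluent-kafka's SSL options; the request is for Kafka Connect connectors to accept certificate files the same way. The broker address, topic, and file paths are placeholders.

```python
# Minimal sketch: mTLS for a Kafka client via confluent-kafka (librdkafka).
# The feature request is for Kafka Connect connectors to accept the same
# certificate files. Broker address and file paths are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka.example.com:9093",        # placeholder broker
    "security.protocol": "SSL",                           # mTLS: both sides present certs
    "ssl.ca.location": "/etc/kafka/ca.pem",               # CA that signed the broker cert
    "ssl.certificate.location": "/etc/kafka/client.crt",  # client certificate
    "ssl.key.location": "/etc/kafka/client.key",          # client private key
})

producer.produce("example-topic", value=b"authenticated via mTLS")
producer.flush()
```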
- Run Karapace as a dedicated managed service
As a developer,
I want to run Karapace as a fully managed, dedicated service,
so that I can use it with Apache Kafka running both on and outside of Aiven.
In addition, I can use the same Karapace service against multiple Apache Kafka services.
17 votes
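To illustrate what a shared, dedicated Karapace instance would be used for: Karapace exposes a Confluent-compatible schema registry API, so one instance could serve several Kafka services. A sketch of registering a schema, with the URL and subject name as placeholders:

```python
# Sketch: registering an Avro schema with Karapace's Confluent-compatible
# schema registry API. URL and subject name are placeholders.
import json
import requests

KARAPACE_URL = "https://karapace.example.com:8081"  # placeholder

avro_schema = {
    "type": "record",
    "name": "Click",
    "fields": [{"name": "user_id", "type": "string"}],
}

resp = requests.post(
    f"{KARAPACE_URL}/subjects/clicks-value/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(avro_schema)}),
)
resp.raise_for_status()
print("registered schema id:", resp.json()["id"])
```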
- Support for AWS S3 Source Kafka Connector
As a developer / data engineer,
I want to be able to read data stored in an AWS S3 bucket,
so that I can transfer, process, and transform that data for other applications.
In addition, I can use the data stored in the S3 bucket as a backup and rehydrate my Apache Kafka cluster with it.
16 votes
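For illustration, this is roughly how such a connector would be registered through the standard Kafka Connect REST API. The connector class and its option names below are hypothetical, since the request is precisely that no such managed connector exists yet; only the REST endpoint shape is standard.

```python
# Sketch: registering a hypothetical S3 source connector via the standard
# Kafka Connect REST API. The connector class and its option names are
# made up for illustration; only the REST endpoint shape is standard.
import requests

resp = requests.post(
    "http://connect.example.com:8083/connectors",  # placeholder Connect host
    json={
        "name": "s3-source",
        "config": {
            "connector.class": "com.example.s3.S3SourceConnector",  # hypothetical
            "s3.bucket.name": "my-backup-bucket",                   # hypothetical key
            "s3.region": "eu-west-1",                               # hypothetical key
            "topics": "rehydrated-topic",
            "tasks.max": "1",
        },
    },
)
resp.raise_for_status()
```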
- Custom Kafka Connectors on Aiven (bring your own connector)
As a developer,
I want to run custom connectors on Aiven,
so that I can integrate our proprietary data systems and custom applications with Apache Kafka without having to manage the underlying infrastructure.
In addition, this lets developers concentrate on building business-critical applications instead of getting tied up with infrastructure tasks.
15 votes
- Kafka versioned updates
As a platform engineer,
I want service updates to be versioned and to be able to select a specific version (Kafka, Karapace, etc.) to update to,
so that I can promote changes correctly from non-prod to production (instead of being forced to always apply the latest), and so that a new version released while an update is in flight does not leave different versions running across the brokers (as currently happens, which in our experience can lead to incompatibilities).
Additionally, I would like it to be clear what version(s) is currently running in…
12 votes
This is partially being realised: users can now view which service version is available and what update will take place.
- ClickHouse sink for Kafka Connect
As a developer,
I want to sink my data into ClickHouse,
so that I can store large volumes of data and run analytics on top of it.
10 votes
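Until a managed sink exists, one workaround is a small consumer that batches records into ClickHouse over its HTTP interface. A sketch, with hosts, topic, and table as placeholders:

```python
# Sketch of a hand-rolled ClickHouse "sink": consume from Kafka and batch
# rows into ClickHouse over its HTTP interface. Hosts, topic, and table
# are placeholders; a managed connector would replace all of this.
import requests
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka.example.com:9092",  # placeholder
    "group.id": "clickhouse-sink",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

batch = []
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    batch.append(msg.value().decode("utf-8"))  # assumes JSON-encoded values
    if len(batch) >= 1000:
        # JSONEachRow: one JSON object per line of the request body.
        requests.post(
            "http://clickhouse.example.com:8123/",
            params={"query": "INSERT INTO events FORMAT JSONEachRow"},
            data="\n".join(batch).encode("utf-8"),
        ).raise_for_status()
        batch.clear()
        consumer.commit()
```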
Support for "Apache Iceberg" format while sinking CDC
As a developer / DevOps
I want to be able to Sink CDC data into Apache Iceberg format
so that I can analyze data using time travel feature of AWS Athena
In addition, we may find a way for the current "Aiven - Amazon AWS S3 Sink" connector to be able to produce "Apache Iceberg" in addition of "Parquet" format or we may provide a dedicated connector like the one from this repository : https://github.com/tabular-io/iceberg-kafka-connectYours faithfully,
LCDP9 votes -
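Going by the README of the linked repository, that connector registers like any other sink; the class and option names below are taken from that project and should be verified against its current documentation.

```python
# Sketch: registering the Iceberg sink from the linked tabular-io repo via
# the Kafka Connect REST API. Class and option names follow that project's
# README at the time of writing; verify against its current documentation.
import requests

resp = requests.post(
    "http://connect.example.com:8083/connectors",  # placeholder Connect host
    json={
        "name": "iceberg-cdc-sink",
        "config": {
            "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
            "topics": "cdc.events",
            "iceberg.tables": "analytics.cdc_events",
            # Catalog settings (warehouse, catalog implementation, credentials)
            # go in iceberg.catalog.* properties and depend on your setup.
        },
    },
)
resp.raise_for_status()
```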
- Kafka fine-grained ACLs
As an ops engineer,
I want to be able to declare fine-grained ACLs,
so that I avoid having to grant "admin" rights to users that only need "DeleteRecords" rights on specific topics.
The client uses Kafka Streams, which needs specific rights (https://docs.confluent.io/platform/current/streams/developer-guide/security.html#required-acl-setting-for-secure-ak-clusters) that are not covered by Aiven's predefined rights.
Currently, "admin" is too broad for such access (I don't want the user to be able to create topics).
8 votes
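In stock Kafka terms, the missing grant is a single DELETE operation on one topic (which is what the DeleteRecords API checks), rather than full admin. A sketch with the standard Kafka admin API; broker, topic, and principal are placeholders:

```python
# Sketch: the fine-grained grant being asked for, expressed with the stock
# Kafka admin API. DeleteRecords requires the DELETE operation on the topic,
# not admin rights. Broker, topic, and principal are placeholders.
from confluent_kafka.admin import (
    AdminClient, AclBinding, AclOperation, AclPermissionType,
    ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "kafka.example.com:9092"})

acl = AclBinding(
    ResourceType.TOPIC, "orders", ResourcePatternType.LITERAL,
    "User:stream-app", "*",       # principal and host
    AclOperation.DELETE,          # covers the DeleteRecords API
    AclPermissionType.ALLOW,
)

for binding, future in admin.create_acls([acl]).items():
    future.result()  # raises if the broker rejected the ACL
```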
- Support ZooKeeper-less (KRaft) mode in Aiven for Apache Kafka
As an SRE,
I want to operate Apache Kafka without ZooKeeper,
so that I can have more resources available for Apache Kafka itself.
In addition, it will allow faster up- and down-scaling of my cluster and will support more partitions per broker.
8 votes
- Ability to choose the Apache Kafka Connect connector version
As a developer,
I want to choose which Apache Kafka Connect connector version to use,
so that I can control the connector version and make sure it is compatible with my applications.
6 votes
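Today the Connect REST API at least reports which plugin versions are installed; version selection would build on that. A sketch listing them, with the host as a placeholder:

```python
# Sketch: listing installed connector plugins and their versions via the
# standard Kafka Connect REST API. Host is a placeholder. Being able to
# *choose* the version is the missing piece this idea asks for.
import requests

resp = requests.get("http://connect.example.com:8083/connector-plugins")
resp.raise_for_status()
for plugin in resp.json():
    print(plugin["class"], plugin.get("version", "unknown"))
```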
- Kafka rate limits and quotas configuration via Terraform
As a cloud platform engineer,
I need to give developers the ability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.
3 votes
- Create a backup to Azure Blob Storage for local-region restore (DR)
As an application owner,
I want to be able to store backups in Azure Blob Storage,
so that I can recover locally from an outage using those backups and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.
2 votes
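Absent a managed feature, a rough self-managed approach is to consume each topic and write its records to Blob Storage with the azure-storage-blob SDK. A sketch for a single topic; the connection string, container, topic, and blob naming are placeholders:

```python
# Sketch: a self-managed topic backup to Azure Blob Storage for one topic.
# Connection string, container, and topic are placeholders; a managed
# backup/restore feature would replace this.
import os
from azure.storage.blob import BlobServiceClient
from confluent_kafka import Consumer

blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = blob_service.get_container_client("kafka-backup")

consumer = Consumer({
    "bootstrap.servers": "kafka.example.com:9092",  # placeholder
    "group.id": "blob-backup",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # One blob per message keeps the sketch simple; real backups would batch.
    name = f"orders/{msg.partition()}/{msg.offset()}"
    container.upload_blob(name=name, data=msg.value(), overwrite=True)
```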
- Provide documentation for the Karapace REST API
As a developer,
I want to use a REST API against my Kafka instance,
so that I can write simple scripts without using client libraries.
There doesn't seem to be comprehensive API documentation for which endpoints and functionality the Karapace REST API supports. The website says it is a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with any changes in the Confluent Kafka REST API proxy, it is hard to rely on. Some users may find it preferable to simply have documentation for Karapace's own endpoints.
2 votes
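As a concrete example of what needs documenting: assuming Karapace really does mirror the Confluent REST Proxy v2 endpoints as advertised, producing without a client library looks like this (host and topic are placeholders):

```python
# Sketch: producing to a topic through Karapace's REST proxy, assuming it
# mirrors the Confluent REST Proxy v2 endpoints as advertised. Host and
# topic are placeholders; this is exactly what needs authoritative docs.
import requests

resp = requests.post(
    "https://karapace.example.com:8082/topics/example-topic",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"greeting": "hello"}}]},
)
resp.raise_for_status()
print(resp.json()["offsets"])
```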
- Improve support for Debezium SQL Server use cases
As an architect, I want to bring business logic out of the database and into a decoupled stream-processing / event-driven architecture framework. With SQL Server, I want to stream changes to Apache Kafka using Debezium. This must support use cases where sensitivity classifications in SQL Server are used/required for things like PII.
2 votes
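For reference, a Debezium SQL Server source registers roughly like the sketch below. The connection details are placeholders, exact property names vary across Debezium versions, and the column.mask.* option is the hook for the PII concern raised above.

```python
# Sketch: registering a Debezium SQL Server source via the Kafka Connect
# REST API. Connection details are placeholders and exact property names
# vary across Debezium versions; column masking addresses the PII concern.
import requests

resp = requests.post(
    "http://connect.example.com:8083/connectors",  # placeholder Connect host
    json={
        "name": "sqlserver-cdc",
        "config": {
            "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
            "database.hostname": "mssql.example.com",
            "database.port": "1433",
            "database.user": "debezium",
            "database.password": "********",  # placeholder secret
            "database.names": "inventory",
            "topic.prefix": "sqlserver",
            "table.include.list": "dbo.orders",
            # Debezium stores schema history in Kafka (names vary by version).
            "schema.history.internal.kafka.bootstrap.servers": "kafka.example.com:9092",
            "schema.history.internal.kafka.topic": "schema-changes.inventory",
            # Mask a sensitive column so PII never reaches Kafka in the clear.
            "column.mask.with.12.chars": "dbo.orders.customer_email",
        },
    },
)
resp.raise_for_status()
```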