Aiven Kafka Connect Debezium Exactly Once Support
As a software engineer
I want to be able to configure exactly once support on Aiven Kafka Connect
so that I can ensure each record from the source system is delivered exactly once (see the config sketch below).
1 vote
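For context, upstream Apache Kafka (3.3+, KIP-618) exposes exactly-once for source connectors through a worker setting plus a per-connector flag. A minimal sketch using those upstream property names, assuming Aiven would surface the same knobs:

    # Connect worker configuration (distributed mode), Apache Kafka 3.3+ / KIP-618:
    exactly.once.source.support=enabled

    # Per-connector configuration (e.g. a Debezium source): fail fast if the
    # worker cannot provide exactly-once delivery for this connector.
    exactly.once.support=required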
Allow partition reassignment via Kafka API
As an organisation (DevOps/Security/Vendor Manager) using Aiven Kafka, I want to be able to use the native Kafka partition management APIs.
Take the use case of monitoring: we want to produce to and consume from a test topic and ensure that every broker is the leader for at least one partition, to prove end-to-end functionality across all brokers; KMinion's end-to-end monitoring is one example (see the sketch below).
2 votes
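The native API in question could be Admin.alterPartitionReassignments (Apache Kafka 2.4+). A minimal sketch; the bootstrap address, topic name, and broker IDs are placeholders:

    import java.util.*;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.common.TopicPartition;

    public class ReassignDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092");
            try (Admin admin = Admin.create(props)) {
                // Move test-topic partition 0 onto brokers 3, 4, 5
                // (the first replica in the list is the preferred leader).
                Map<TopicPartition, Optional<NewPartitionReassignment>> plan = Map.of(
                    new TopicPartition("test-topic", 0),
                    Optional.of(new NewPartitionReassignment(List.of(3, 4, 5))));
                admin.alterPartitionReassignments(plan).all().get();
            }
        }
    }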
Cost Attribution or Cost Showback for Kafka
As a Central Data Platform Team
I want to be able to track who is using my Kafka cluster, and attribute cost to those Business Units.
so that I can either charge them for the usage, or justify my team's spend on cloud services.
In addition, it would be nice if I could define my own rules, e.g.
- Producers pay for their networking
- Producers pay for Storage
- Consumers Pay for their Networking
And if it could tie into Aiven governance tools.
4 votes
Expose limits for number of service users/ACLs in API/Aiven Console
As a platform engineer
I want to easily find out the current limits for number of service users/ACLs in a Kafka service
so that I can keep track of how close to the limit I am and avoid outages caused by not being able to create new service users/ACLs.
In addition, a self-service option for increasing the limit would reduce the need to contact support.
2 votes
Kafka GCS connector - give ability to set offset.flush.interval.ms per connector
We are using Kafka connectors for GCS: https://github.com/Aiven-Open/cloud-storage-connectors-for-apache-kafka
We set offset.flush.interval.ms to some value (https://kafka.apache.org/documentation/#connectconfigs). However, some of our topics carry more data than others, so we would like this value to be configurable per topic (https://kafka.apache.org/documentation/#topicconfigs).
For example, we would like 5 minutes for all topics, but 1 minute for some specific ones. Can this option be added to the connectors (see the sketch below)? Thanks.
3 votes
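For reference, offset.flush.interval.ms is currently a worker-level Kafka Connect setting, so a single value applies to every connector on the worker. The requested override might look like the following; the per-connector key is hypothetical, not an existing Kafka Connect option:

    # Worker configuration today - one flush interval for every connector:
    offset.flush.interval.ms=300000   # 5 minutes

    # Hypothetical per-connector override, submitted as part of an individual
    # connector's configuration (not currently supported upstream):
    # "offset.flush.interval.ms": 60000   # 1 minute for a high-volume topic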
Kafka consumer lag predictor in DataDog integration
As a Kafka operator
I want to understand consumer lag
so that I can know the potential impact on customer experience and latency, and whether I need to size up my cluster.
Currently, Aiven provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in DataDog, it would be nice to have this data available through DataDog as well. Today the options are to keep a separate dashboard using Prometheus/Grafana, or to deploy a DataDog agent somewhere that scrapes our Prometheus endpoint and sends the data to DataDog (sketched below).
2 votes
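The agent workaround mentioned above could be a standard DataDog OpenMetrics check pointed at the service's Prometheus endpoint. A minimal sketch of openmetrics.d/conf.yaml; the endpoint URL and the metric name pattern are placeholders:

    instances:
      - openmetrics_endpoint: "https://kafka-mycluster.aivencloud.com:9273/metrics"
        namespace: "aiven.kafka"
        metrics:
          - "kafka_consumer_lag.*"   # assumed name of the lag-predictor metrics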
Serverless option for Aiven for Kafka
As an architect
I want to have a message bus solution that is cost-effective while still maintaining the level of service for rare high-load scenarios
so that I can save money, simplify operations, and align value to usage.
1 vote
Emails should be valid Kafka usernames with OAuth
As a Data Platform Principal Engineer
I want to use emails as Kafka usernames when OAuth authentication is configured
so that I can use Databricks as an SSO provider to reduce the number of credentials that I need to manage and share with each user. This also improves security, because it automatically disables access when someone leaves the company.
In particular, when I use a Databricks Service Principal for authentication, it works as expected. The Databricks Service Principal is identified by a unique UUID. To make it work I have added a Kafka service user with that UUID as…
1 vote
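Upstream Apache Kafka (3.1+) can already take the SASL/OAUTHBEARER principal from a claim other than sub, which is essentially what this asks for. Whether Aiven exposes this setting is an assumption, and the listener name below is a placeholder:

    # Broker-side sketch: use the token's "email" claim as the Kafka principal
    # instead of the default "sub" claim.
    listener.name.client.oauthbearer.sasl.oauthbearer.sub.claim.name=email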
Create a Backup to Azure Blob Storage for Local Region Restore - DR
As an application owner,
I want to be able to store data in Blob Storage so that I can recover locally from an outage using the backups on Blob Storage, and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.
2 votes
ACL
As an SRE Engineer
I want to customize ACLs so that Kafka consumer operations can carry on even while the write lock is triggered when disk usage reaches the threshold of 95 or 97%. Given that Kafka consumers' offset commits are relatively small, this option would not be detrimental.
so that even when disk space reaches critical levels, consumer-side operations are not immediately impacted.
1 vote
Support Grouping of OAuth2/OIDC Users
As a platform engineer
I want to group multiple users based on their role (OAuth2/OIDC claim)
so that I can reduce the number of Kafka users and ACL entries that need to be managed.
Currently, every user/identity connecting via OAuth2/OIDC has a 1:1 mapping to a Kafka user (the username is taken from the sub claim). This is cumbersome and leads to significant overhead if, for example, multiple identities/users with the same permissions want to access the Kafka service: Kafka users and ACLs need to be created for every single identity, even though they share…
11 votes
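To illustrate the current mapping versus the request, consider two token payloads. The groups claim and any group-based principal are hypothetical here; today only sub is used:

    // Today: the principal is taken from "sub", so each token maps to its own
    // Kafka user and its own set of ACL entries.
    { "sub": "alice@example.com" }
    { "sub": "bob@example.com" }

    // Requested: derive the principal (and its ACLs) from a shared role/group
    // claim, so both identities resolve to one managed user/ACL set.
    { "sub": "alice@example.com", "groups": ["analytics-readers"] }
    { "sub": "bob@example.com",   "groups": ["analytics-readers"] }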
Consumer lag monitoring for Kafka not available out of the box
As a Solutions Architect
I want to have consumer lag for kafka consumers available out of the box
so that I can correctly monitor streaming applications without having to set up an external Prometheus. Consumer lag is the key metric for the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric (see the sketch below).
Background:
1. There is a consumer lag panel on the default metrics dashboard, but it does not work.
2. I contacted support and found that…
1 vote
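For reference, consumer lag is the gap between a group's committed offsets and the partition end offsets, which the stock Admin API can compute. A minimal sketch; the group ID and bootstrap address are placeholders:

    import java.util.*;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class LagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092");
            try (Admin admin = Admin.create(props)) {
                // Committed offsets for the group, per partition.
                Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("my-group")
                         .partitionsToOffsetAndMetadata().get();
                // Latest (end) offsets for the same partitions.
                Map<TopicPartition, OffsetSpec> request = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
                var ends = admin.listOffsets(request).all().get();
                // Lag = end offset minus committed offset, per partition.
                committed.forEach((tp, om) -> System.out.printf(
                    "%s lag=%d%n", tp, ends.get(tp).offset() - om.offset()));
            }
        }
    }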
Provide documentation for Karapace REST API
As a developer
I want to use a REST API against my Kafka instance
so that I can write simple scripts without using client libraries.
There doesn't seem to be comprehensive API documentation for the endpoints and functionality supported by the Karapace REST API. The website says it is a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with any changes in the Confluent Kafka REST API proxy, it is hard to rely on. Some users may find it preferable to simply have documentation for Karapace's own endpoints (example calls below).
2 votes
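Assuming Karapace does track the Confluent REST Proxy v2 endpoints as advertised, a simple script can list topics and produce a record over plain HTTP; the base URL and topic name are placeholders:

    import java.net.URI;
    import java.net.http.*;

    public class KarapaceRestDemo {
        public static void main(String[] args) throws Exception {
            String base = "https://karapace.example.aivencloud.com"; // placeholder
            HttpClient http = HttpClient.newHttpClient();

            // List topics (Confluent REST Proxy v2 endpoint, mirrored by Karapace).
            var topics = http.send(
                HttpRequest.newBuilder(URI.create(base + "/topics")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            System.out.println(topics.body());

            // Produce one JSON record to "demo-topic".
            var produce = http.send(
                HttpRequest.newBuilder(URI.create(base + "/topics/demo-topic"))
                    .header("Content-Type", "application/vnd.kafka.json.v2+json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}"))
                    .build(),
                HttpResponse.BodyHandlers.ofString());
            System.out.println(produce.body());
        }
    }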
Exactly-Once support in Storage Write API from our GBQ sink connector
Add support for exactly-once delivery via the Storage Write API in the GBQ sink connector.
1 vote
Enhance error messages and logs with schema names and versions
As a developer,
I want error messages and logs to contain schema names and versions,
so that I can quickly identify and troubleshoot issues related to specific schemas more efficiently.
In addition, this improvement is very important when dealing with issues in referenced schemas because it provides more context in error messages and logs, making it easier to diagnose and resolve problems. This can significantly reduce the time spent on debugging and improve overall system maintainability.
3 votes
Enforce schema compatibility check on level change
As a developer,
I want to have an enforced compatibility check on all existing schemas when the compatibility level is set to a more restrictive one (or for any change),
so that I can ensure all schemas comply with the new restrictive compatibility level and maintain consistency in the schema registry.
In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry (a client-side approximation is sketched below).
3 votes
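Today this can be approximated client-side with the standard Schema Registry endpoints that Karapace implements: tighten the level, then replay each subject's schemas through the compatibility endpoint. A rough sketch; the base URL, subject, and version are placeholders, and under the requested feature the registry would run this sweep itself:

    import java.net.URI;
    import java.net.http.*;

    public class CompatSweep {
        public static void main(String[] args) throws Exception {
            String base = "https://karapace.example.aivencloud.com"; // placeholder
            HttpClient http = HttpClient.newHttpClient();

            // 1. Tighten the level for one subject (PUT /config/{subject}).
            http.send(HttpRequest.newBuilder(URI.create(base + "/config/orders-value"))
                    .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                    .PUT(HttpRequest.BodyPublishers.ofString(
                        "{\"compatibility\": \"FULL\"}"))
                    .build(), HttpResponse.BodyHandlers.ofString());

            // 2. Re-test the latest schema against an earlier version
            //    (POST /compatibility/subjects/{subject}/versions/{version}).
            String latestSchema = "{\"schema\": \"...\"}"; // fetch via GET /subjects/{s}/versions/latest
            var res = http.send(HttpRequest.newBuilder(
                    URI.create(base + "/compatibility/subjects/orders-value/versions/1"))
                    .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                    .POST(HttpRequest.BodyPublishers.ofString(latestSchema))
                    .build(), HttpResponse.BodyHandlers.ofString());
            System.out.println(res.body()); // {"is_compatible": true|false}
        }
    }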
Untyped metrics from Prometheus endpoint
Using the Prometheus endpoint of our Kafka service, we need the Kafka metrics to carry a type. As the example below shows, the metrics currently have no type; the type should be set at the endpoint to ease the use of these metrics.
Example:
# TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped
# TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped
1 vote
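For comparison, a typed exposition for these two metrics would look like the following; counter and gauge are assumptions about the appropriate types, and the sample values are invented:

    # TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count counter
    kafka_server_group_coordinator_metrics_group_completed_rebalance_count 42
    # TYPE kafka_server_group_coordinator_metrics_offset_commit_rate gauge
    kafka_server_group_coordinator_metrics_offset_commit_rate 17.3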
"Last Used" field on Kafka Certificates in Console/API
As an organization (DevOps/Security/Vendor Manager) using Aiven Kafka,
we want to determine the last-connected status of Kafka user certificates,
so that we can know whether a Kafka user certificate has been successfully updated.
We automate certificate rolling to an extent with Terraform. Different teams of devs generally own their section of Terraform creating Kafka users. As of right now those certs expire every two years, and clicking "Yes I've updated" in the Aiven console just silences the alert and provides no real-time verification from the running Kafka that a certificate has been updated.
This means a user…
7 votes
Kafka Connect GCS Sink: Support using field values to define bucket name or file name prefix
As a data streaming architect
I want to be able to export records from Kafka to GCS and use values in the record to define the bucket or file name
so that I can organize data by those values to make them easier to find and process.
The use case is a multi-user/multi-tenant application where user info is a value in the record. We need to be able to organize the output in object storage by that value somehow (see the sketch below).
3 votes
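A sketch of what this might look like in the connector configuration. The {{topic}}/{{partition}}/{{start_offset}} template variables exist in the Aiven GCS sink connector as I understand it; the {{field:...}} variable is hypothetical and is exactly the feature being requested:

    # Existing behaviour: file names built from topic/partition/offset variables.
    "file.name.template": "{{topic}}/{{partition}}-{{start_offset}}.gz"

    # Requested (hypothetical syntax): pull a value out of each record, e.g. a
    # tenant or user id, and use it as the prefix component.
    "file.name.template": "{{field:tenant_id}}/{{topic}}/{{partition}}-{{start_offset}}.gz"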
Support for "Apache Iceberg" format while sinking CDC
As a developer / DevOps
I want to be able to Sink CDC data into Apache Iceberg format
so that I can analyze data using the time-travel feature of AWS Athena.
In addition, we could either extend the current "Aiven - Amazon AWS S3 Sink" connector to produce "Apache Iceberg" format in addition to "Parquet", or provide a dedicated connector like the one from this repository: https://github.com/tabular-io/iceberg-kafka-connect (a config sketch follows).
Yours faithfully,
LCDP
10 votes
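For reference, a minimal configuration for the linked tabular-io connector might look roughly like this; the property names reflect that project's README as best I recall and should be treated as assumptions, and the topic, table, and catalog values are placeholders:

    {
      "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
      "topics": "cdc-events",
      "iceberg.tables": "analytics.cdc_events",
      "iceberg.catalog.type": "rest",
      "iceberg.catalog.uri": "https://iceberg-catalog.example.com"
    }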