Cost Attribution or Cost Showback for Kafka
As a member of the Central Data Platform Team
I want to be able to track who is using my Kafka cluster and attribute cost to the relevant Business Units,
so that I can either charge them for the usage or justify my team's spend on cloud services.
In addition, it would be nice if I could define my own rules, e.g.
- Producers pay for their networking
- Producers pay for storage
- Consumers pay for their networking
It would also be nice if this could tie into Aiven governance tools.
3 votes
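The rules listed above could be thought of as a small user-defined cost-allocation policy. The sketch below is purely hypothetical and does not correspond to any existing Aiven API; it only illustrates the kind of rules the idea asks for.

    # Hypothetical showback policy: which party pays for each cost component.
    # These keys and values are illustrative assumptions, not an Aiven feature.
    cost_attribution_rules = {
        "producer_network_egress": "producer_business_unit",
        "consumer_network_egress": "consumer_business_unit",
        "storage": "producer_business_unit",
    }

    def attribute(cost_component: str, amount_eur: float) -> tuple[str, float]:
        """Return the business unit charged for a given cost component."""
        return cost_attribution_rules[cost_component], amount_eur

    print(attribute("storage", 42.0))  # ('producer_business_unit', 42.0)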
Expose limits for number of service users/ACLs in API/Aiven Console
As a platform engineer
I want to easily find out the current limits for the number of service users/ACLs in a Kafka service
so that I can keep track of how close to the limit I am and avoid outages caused by not being able to create new service users/ACLs.
In addition, a self-service option for increasing the limit would reduce the need to contact support.
2 votes
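As a rough sketch of the workaround available today for this request, you can count existing service users and ACLs via the Aiven API and compare them against a limit you maintain by hand, since the limit itself is not exposed; the endpoint paths and response fields below are assumptions and may differ from the actual API.

    import requests

    # Placeholder values: API token, project and service names are assumptions.
    API = "https://api.aiven.io/v1"
    HEADERS = {"Authorization": "Bearer <AIVEN_API_TOKEN>"}
    PROJECT, SERVICE = "my-project", "my-kafka"

    service = requests.get(f"{API}/project/{PROJECT}/service/{SERVICE}",
                           headers=HEADERS).json()["service"]
    acls = requests.get(f"{API}/project/{PROJECT}/service/{SERVICE}/acl",
                        headers=HEADERS).json()["acl"]

    # The limit itself is the missing piece this idea asks Aiven to expose;
    # today it has to be tracked manually (the value below is a placeholder).
    ASSUMED_USER_LIMIT = 1000

    print(f"service users: {len(service['users'])} / {ASSUMED_USER_LIMIT}")
    print(f"ACL entries:   {len(acls)}")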
Kafka GCS connector - give ability to set offset.flush.interval.ms per connector
We are using Kafka connectors for GCS: https://github.com/Aiven-Open/cloud-storage-connectors-for-apache-kafka
We set offset.flush.interval.ms to a single value (https://kafka.apache.org/documentation/#connectconfigs). However, some topics carry more data than others, so we would like this value to be configurable per topic: https://kafka.apache.org/documentation/#topicconfigs
For example, we would like 5 minutes for all topics, but 1 minute for some specific ones. Could this option be added to the connectors? Thanks.
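Today offset.flush.interval.ms is a Connect worker-level setting; the per-connector override shown below does not exist and is only a sketch of what this request asks for. The connector name and bucket are illustrative, and the Connect REST endpoint is assumed to be reachable.

    import json
    import requests

    CONNECT_URL = "http://localhost:8083"  # Kafka Connect REST API (placeholder)

    config = {
        "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
        "topics": "orders",
        "gcs.bucket.name": "my-bucket",  # illustrative bucket name
        # Hypothetical per-connector override requested in this idea; today
        # offset.flush.interval.ms can only be set on the Connect worker.
        "offset.flush.interval.ms": "60000",
    }

    resp = requests.put(f"{CONNECT_URL}/connectors/gcs-sink-orders/config",
                        headers={"Content-Type": "application/json"},
                        data=json.dumps(config))
    resp.raise_for_status()
    print(resp.json())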
3 votes
Kafka consumer lag predictor in DataDog integration
As a Kafka operator
I want to understand consumer lag
so that I can know the potential impact to customer experience and latency, and whether I need to size up my cluster.
Currently, Aiven provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in DataDog, it'd be nice to have this data available through DataDog. Currently, the options are to have a separate dashboard using Prometheus/Grafana or to deploy a DataDog agent somewhere that scrapes our Prometheus endpoint and sends the data to DataDog.
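As a stopgap, a small bridge like the sketch below can scrape the Prometheus endpoint and forward the lag-related series to DataDog; the endpoint URL, credentials and metric-name filter are assumptions, not the actual names used by Aiven's consumer lag predictor.

    import time
    import requests
    from prometheus_client.parser import text_string_to_metric_families
    from datadog import initialize, api

    # Placeholder endpoint and credentials for the service's Prometheus integration.
    PROMETHEUS_URL = "https://my-kafka.aivencloud.com:9273/metrics"
    resp = requests.get(PROMETHEUS_URL, auth=("prometheus-user", "prometheus-password"))
    resp.raise_for_status()

    initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

    now = time.time()
    metrics = []
    for family in text_string_to_metric_families(resp.text):
        # Forward only lag-related series; the name filter is an assumption.
        if "lag" not in family.name:
            continue
        for sample in family.samples:
            metrics.append({
                "metric": f"aiven.kafka.{sample.name}",
                "points": [(now, sample.value)],
                "tags": [f"{k}:{v}" for k, v in sample.labels.items()],
            })

    if metrics:
        api.Metric.send(metrics)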
2 votes
Serverless option for Aiven for Kafka
As an architect
I want to have a message bus solution that is cost-effective while still maintaining the level of service for rare high-load scenarios,
so that I can save money, simplify operations, and align value to usage.
1 vote
Emails should be valid Kafka usernames with OAuth
As a Data Platform Principal Engineer
I want to use emails as Kafka usernames when OAuth authentication is configured
so that I can use Databricks as the SSO provider to reduce the number of credentials that I need to manage and share with each user. This also improves security, because it automatically disables access when someone leaves the company.
In particular, when I am using a Databricks Service Principal for the authentication, it works as expected. The Databricks Service Principal is identified by a unique UUID. To make it work I have added a Kafka service user with that UUID as…
1 vote
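For context, a minimal client-side sketch of OAuth (OIDC) authentication with confluent-kafka for the idea above, where the principal presented to the broker comes from the token rather than from a certificate or password; the endpoint, client id/secret and scope are placeholders for whatever the identity provider (e.g. Databricks) issues.

    from confluent_kafka import Producer

    conf = {
        "bootstrap.servers": "my-kafka.aivencloud.com:9095",  # placeholder host/port
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "OAUTHBEARER",
        "sasl.oauthbearer.method": "oidc",
        "sasl.oauthbearer.token.endpoint.url": "https://login.example.com/oidc/v1/token",
        "sasl.oauthbearer.client.id": "<client-id>",
        "sasl.oauthbearer.client.secret": "<client-secret>",
        "sasl.oauthbearer.scope": "kafka",
    }

    # The broker maps the token's identity claim (a service principal UUID today,
    # ideally an email address) to a Kafka service user, which is exactly the
    # mapping this idea asks to support for email-formatted usernames.
    producer = Producer(conf)
    producer.produce("test-topic", b"hello")
    producer.flush()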
ACL
As an SRE Engineer
I want to customize ACLs to allow Kafka consumer operations to carry on even while the write lock is triggered when disk space reaches threshold limits of 95 or 97%. Given that the Kafka consumers' offset commits are relatively small, this option would not be detrimental.
so that even when disk space reaches critical levels, it does not immediately impact consumer-side operations
1 vote
Enhance error messages and logs with schema names and versions
As a developer,
I want error messages and logs to contain schema names and versions,
so that I can identify and troubleshoot issues related to specific schemas more efficiently.
In addition, this improvement is very important when dealing with issues in referenced schemas because it provides more context in error messages and logs, making it easier to diagnose and resolve problems. This can significantly reduce the time spent on debugging and improve overall system maintainability.
3 votes
Enforce schema compatibility check on level change
As a developer,
I want to have an enforced compatibility check on all existing schemas when the compatibility level is set to a more restrictive one (or for any change),
so that I can ensure all schemas comply with the new restrictive compatibility level and maintain consistency in the schema registry.
In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry.
3 votes
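Until such a check is enforced by the registry itself, it can be approximated manually: fetch every subject and verify its latest schema against each earlier version via the compatibility endpoint. Note that the endpoint evaluates against the currently configured level, whereas the requested feature would run this automatically against the new level at the moment it is changed; the registry URL and credentials below are placeholders.

    import requests

    REGISTRY = "https://my-registry.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "<password>")                     # placeholder

    incompatible = []
    for subject in requests.get(f"{REGISTRY}/subjects", auth=AUTH).json():
        versions = requests.get(f"{REGISTRY}/subjects/{subject}/versions", auth=AUTH).json()
        latest = requests.get(f"{REGISTRY}/subjects/{subject}/versions/latest", auth=AUTH).json()
        payload = {"schema": latest["schema"],
                   "schemaType": latest.get("schemaType", "AVRO")}
        # Check the latest schema against every earlier version of the subject.
        for version in versions[:-1]:
            check = requests.post(
                f"{REGISTRY}/compatibility/subjects/{subject}/versions/{version}",
                json=payload, auth=AUTH).json()
            if not check.get("is_compatible", False):
                incompatible.append((subject, version))

    print("Incompatible subject/version pairs:", incompatible)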
Untyped metrics from Prometheus endpoint
Using the Prometheus endpoint of our Kafka service, we need the Kafka metrics to have a type.
As shown below, the metrics currently have no type; the type should be set at the endpoint to ease the use of these metrics.
Example:
# TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped
# TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped
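For comparison, typed output would look something like the following; the concrete types are assumptions based on the metric names (a monotonically increasing count and a rate).
# TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count counter
# TYPE kafka_server_group_coordinator_metrics_offset_commit_rate gauge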
1 vote
"Last Used" field on Kafka Certificates in Console/API
As an organization (DevOps/Security/Vendor Manager) using Aiven Kafka,
we want to determine the last-connected status of Kafka user certificates,
so that we can know whether a Kafka user certificate has been successfully updated.
We automate certificate rolling to an extent with Terraform. Different teams of devs generally own their section of Terraform creating Kafka users. As of right now, those certs expire every two years, and clicking "Yes I've updated" in the Aiven console just silences the alert and provides no real-time verification from the running Kafka that a certificate has been updated.
This means a user…
7 votes
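Today, rollout automation for the idea above can read the certificate it generated (and its expiry) but cannot see whether the brokers have actually observed the new certificate in use. The sketch below shows where such information would fit: the service API call and response fields are assumptions based on the public API, and the last-used field is exactly the hypothetical addition this idea requests.

    import requests

    # Placeholder values: API token, project and service names are assumptions.
    API = "https://api.aiven.io/v1"
    HEADERS = {"Authorization": "Bearer <AIVEN_API_TOKEN>"}
    PROJECT, SERVICE = "my-project", "my-kafka"

    service = requests.get(f"{API}/project/{PROJECT}/service/{SERVICE}",
                           headers=HEADERS).json()["service"]

    for user in service.get("users", []):
        # "access_cert_last_used" does NOT exist today; it is the field this
        # idea asks for, so automation could verify that a rotated certificate
        # is actually being presented to the brokers.
        last_used = user.get("access_cert_last_used", "<not exposed>")
        print(f"{user['username']}: certificate last used = {last_used}")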