
Event Streaming

Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.


9 results found

  1. As an organization (DevOps/Security/Vendor Manager) using Aiven Kafka,
    we want to determine the last-connected certificate status of Kafka users,
    so that we can know whether a Kafka user certificate has been successfully updated.

    We automate certificate rolling to an extent with Terraform. Different dev teams generally own the section of Terraform that creates their Kafka users. As of right now those certs expire every two years, and clicking "Yes, I've updated" in the Aiven console just silences the alert; it provides no real-time verification from the running Kafka service that a certificate has actually been updated.

    This means a user…
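
    A client-side way to at least verify the rotated artifact (a minimal sketch, not an Aiven feature; it assumes the certificate PEM is on disk wherever your Terraform setup writes it, and uses the cryptography package, version 42+ for the *_utc accessors):

      import datetime
      from cryptography import x509

      CERT_PATH = "kafka-user.pem"  # hypothetical path written by Terraform

      with open(CERT_PATH, "rb") as f:
          cert = x509.load_pem_x509_certificate(f.read())

      remaining = cert.not_valid_after_utc - datetime.datetime.now(datetime.timezone.utc)
      print(f"serial={cert.serial_number:x}, expires {cert.not_valid_after_utc} ({remaining.days} days left)")
      if remaining.days < 90:
          raise SystemExit("looks like the certificate was NOT rotated")

    This only proves the file was regenerated; it cannot tell you which certificate the running Kafka last accepted, which is exactly the gap this idea asks Aiven to close.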

    7 votes

  2. As a Central Data Platform Team,
    I want to be able to track who is using my Kafka cluster and attribute cost to those business units,
    so that I can either charge them for the usage or justify my team's spend on cloud services.
    In addition, it would be nice if I could define my own rules, e.g.
    - Producers pay for their networking
    - Producers pay for storage
    - Consumers pay for their networking
    and if it could tie into Aiven governance tools.
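
    As a rough illustration of the rule engine meant here (a sketch with made-up usage numbers and prices; per-business-unit metering is not something Aiven exposes today, which is the point of this request):

      # Hypothetical usage records: (business_unit, topic, GB produced, GB consumed)
      records = [
          ("bu-sales", "orders", 120.0, 0.0),
          ("bu-web", "orders", 0.0, 300.0),
          ("bu-web", "clicks", 900.0, 900.0),
      ]
      stored_gb = {"orders": 50.0, "clicks": 10.0}      # retained data per topic
      producer_of = {"orders": "bu-sales", "clicks": "bu-web"}
      RATES = {"network_gb": 0.02, "storage_gb": 0.10}  # assumed $/GB

      bills: dict[str, float] = {}
      for bu, topic, produced, consumed in records:
          # Rules above: producers pay their networking, consumers pay theirs.
          bills[bu] = bills.get(bu, 0.0) + (produced + consumed) * RATES["network_gb"]
      for topic, gb in stored_gb.items():
          # ...and producers pay for storage.
          bu = producer_of[topic]
          bills[bu] = bills.get(bu, 0.0) + gb * RATES["storage_gb"]

      for bu, total in sorted(bills.items()):
          print(f"{bu}: ${total:.2f}")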

    3 votes

  3. We are using the Kafka connectors for GCS: https://github.com/Aiven-Open/cloud-storage-connectors-for-apache-kafka

    We set offset.flush.interval.ms to some value (https://kafka.apache.org/documentation/#connectconfigs). However, some topics carry more data and some less, so we'd like this value to be configurable per topic: https://kafka.apache.org/documentation/#topicconfigs

    For example, 5 minutes for all topics but 1 minute for a few specific ones. Can this option be added to the connectors? Thanks.
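
    A sketch of what the requested knob could look like on the connector config (the flush.interval.ms.* properties below are hypothetical and do not exist in the connectors today; the submission itself uses the standard Kafka Connect REST API, with placeholder names):

      import json
      import urllib.request

      config = {
          "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
          "topics": "logs,orders,clicks",
          "gcs.bucket.name": "my-bucket",                 # placeholder
          # Hypothetical per-topic override this idea asks for:
          "flush.interval.ms.default": "300000",          # 5 minutes for all topics
          "flush.interval.ms.per.topic": "orders:60000",  # 1 minute for one topic
      }
      req = urllib.request.Request(
          "http://localhost:8083/connectors/gcs-sink/config",  # placeholder worker URL
          data=json.dumps(config).encode(),
          headers={"Content-Type": "application/json"},
          method="PUT",
      )
      urllib.request.urlopen(req)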

    3 votes

  4. As a platform engineer
    I want to easily find out the current limits on the number of service users/ACLs in a Kafka service
    so that I can keep track of how close to the limit I am and avoid outages caused by not being able to create new service users/ACLs.
    In addition, a self-service option for increasing the limit would reduce the need to contact support.
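
    Until such a view exists, the current counts (though not the limits themselves) can be read from the service details (a sketch against the public Aiven REST API; the project and service names are placeholders, and the users/acl fields are assumed from the service response):

      import json
      import os
      import urllib.request

      PROJECT, SERVICE = "my-project", "my-kafka"  # placeholders
      req = urllib.request.Request(
          f"https://api.aiven.io/v1/project/{PROJECT}/service/{SERVICE}",
          headers={"Authorization": f"aivenv1 {os.environ['AIVEN_API_TOKEN']}"},
      )
      svc = json.load(urllib.request.urlopen(req))["service"]
      print("service users:", len(svc.get("users", [])))
      print("ACL entries:", len(svc.get("acl", [])))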

    2 votes

  5. As a Kafka operator
    I want to understand consumer lag
    so that I can know the potential impact on customer experience and latency, and whether I need to size up my cluster.

    Currently, Aiven provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in Datadog, it would be nice to have this data available there directly. Today the options are to keep a separate Prometheus/Grafana dashboard, or to deploy a Datadog agent somewhere that scrapes our Prometheus endpoint and sends the data on to Datadog.
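
    The agent-style workaround can be a very small forwarder (a sketch: it assumes the Prometheus integration is enabled on the service, a Datadog API key in DD_API_KEY, and an illustrative lag metric name):

      import json
      import os
      import time
      import urllib.request

      # Placeholder endpoint; the real one needs the integration's basic-auth credentials.
      PROM_URL = "https://my-kafka.aivencloud.com:9273/metrics"

      now = int(time.time())
      series = []
      for line in urllib.request.urlopen(PROM_URL).read().decode().splitlines():
          if line.startswith("kafka_consumer_group_lag"):  # illustrative metric name
              labels, value = line.rsplit(" ", 1)
              series.append({"metric": "aiven.kafka.consumer_lag",
                             "points": [[now, float(value)]],
                             "tags": [labels]})

      req = urllib.request.Request(
          f"https://api.datadoghq.com/api/v1/series?api_key={os.environ['DD_API_KEY']}",
          data=json.dumps({"series": series}).encode(),
          headers={"Content-Type": "application/json"},
      )
      urllib.request.urlopen(req)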

    2 votes

  6. As an architect
    I want a message bus solution that is cost-effective while still maintaining the level of service for rare high-load scenarios
    so that I can save money, simplify operations, and align value to usage.

    1 vote

  7. As a Data Platform Principal Engineer
    I want to use emails as Kafka usernames when OAuth authentication is configured
    so that I can use Databricks as an SSO provider and reduce the number of credentials I need to manage and share with each user. This also improves security, because access is automatically disabled when someone leaves the company.

    In particular, when I use a Databricks Service Principal for authentication, it works as expected. The Databricks Service Principal is identified by a unique UUID. To make it work I have added a Kafka service user with that UUID as…
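
    For reference, the working Service Principal flow looks roughly like this on the client side (a sketch using confluent-kafka's OIDC support; the broker address, endpoint, IDs, and secret are placeholders):

      from confluent_kafka import Producer

      producer = Producer({
          "bootstrap.servers": "my-kafka.aivencloud.com:9092",  # placeholder
          "security.protocol": "SASL_SSL",
          "sasl.mechanisms": "OAUTHBEARER",
          "sasl.oauthbearer.method": "oidc",
          # The Service Principal's UUID must today also be the Aiven Kafka
          # username; this idea asks to allow email addresses instead.
          "sasl.oauthbearer.client.id": "00000000-0000-0000-0000-000000000000",
          "sasl.oauthbearer.client.secret": "<secret>",  # placeholder
          "sasl.oauthbearer.token.endpoint.url":
              "https://<workspace>.cloud.databricks.com/oidc/v1/token",  # placeholder
      })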

    1 vote

  8. As an SRE Engineer

    I want to customize ACLs so that Kafka consumer operations can carry on even while the write lock is triggered when disk space reaches the threshold limits of 95 or 97%. Given that consumers' offset commits are relatively small, this option would not be detrimental.

    so that even when disk space reaches critical levels, consumer-side operations are not immediately impacted
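
    To illustrate the consumer-side effect (a sketch: with manual commits and error handling a consumer can keep reading while commits are being rejected, e.g. during the write lock, and commit again once writes resume; broker and topic are placeholders):

      from confluent_kafka import Consumer, KafkaException

      consumer = Consumer({
          "bootstrap.servers": "my-kafka.aivencloud.com:9092",  # placeholder
          "group.id": "my-group",
          "enable.auto.commit": False,  # commit manually so failures are visible
      })
      consumer.subscribe(["orders"])

      while True:
          msg = consumer.poll(1.0)
          if msg is None or msg.error():
              continue
          print(msg.value())  # stand-in for real processing
          try:
              consumer.commit(msg, asynchronous=False)
          except KafkaException as e:
              # Offset commit rejected (e.g. write lock at 95/97% disk):
              # keep consuming; a later commit succeeds once writes resume.
              print("commit failed, continuing:", e)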

    1 vote

  9. Using the Prometheus endpoint of our Kafka service, we need the Kafka metrics to be exposed with a type.

    As shown below, the metrics currently have no type; the type should be set at the endpoint to ease the use of these metrics. For example:

    # TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped

    # TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped
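
    For comparison, a typed exposition generated with the standard prometheus_client library (the metric here is a stand-in; the actual fix would be Aiven declaring the types at its endpoint):

      from prometheus_client import Counter, generate_latest

      # Stand-in for the broker-side metric, declared as a counter.
      rebalances = Counter(
          "kafka_server_group_coordinator_metrics_group_completed_rebalance",
          "Completed group rebalances",
      )
      rebalances.inc()
      print(generate_latest().decode())
      # Output now includes a proper type hint:
      #   # TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_total counter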

    1 vote
