
Event Streaming

Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.


42 results found

  1. As a developer
    I want the ability to use our own S3 bucket for Kafka tiered storage
    so that I can access the data directly from S3 and query some of it for debugging (without having to stream all the data out of Kafka)

    3 votes
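
    A minimal sketch of the kind of debugging access this idea describes, assuming the requested customer-owned bucket existed: it lists tiered-storage segment objects for one partition with boto3. The bucket name and the <topic>/<partition>/ key layout are illustrative assumptions, not Aiven's actual tiered-storage layout.

    ```python
    # Illustrative only: "my-kafka-tiered-storage" and the <topic>/<partition>/ key
    # layout are assumptions, not Aiven's actual tiered-storage layout.
    import boto3

    s3 = boto3.client("s3")

    def list_segments(bucket: str, topic: str, partition: int):
        """Yield (key, size) for tiered-storage segment objects of one partition."""
        prefix = f"{topic}/{partition}/"
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                yield obj["Key"], obj["Size"]

    for key, size in list_segments("my-kafka-tiered-storage", "clickstream", 0):
        print(f"{key}\t{size} bytes")
    ```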

  2. The Aiven platform allows custom key/value tags to be added to resources such as Kafka topics. It would be useful to have these exposed as additional labels on metrics so that alerts can be triggered based on this metadata.

    3 votes

  3. As an organisation (DevOps/Security/Vendor Manager) using Aiven Kafka, I want to be able to use the native Kafka partition management APIs.

    Take the monitoring use case: we want to produce to and consume from a test topic and ensure that every broker is the leader for at least one partition, to prove end-to-end functionality across all brokers. KMinion's end-to-end monitoring is one example.

    2 votes
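
    A rough sketch of the KMinion-style end-to-end check described above, using the Kafka AdminClient from confluent-kafka; the service address, credentials, and the test topic name "e2e-monitoring" are placeholders. The final check is where the requested native partition management APIs (reassignment / preferred-leader election) would be needed if some broker turns out to lead no partition.

    ```python
    # Sketch of a KMinion-style end-to-end check. The service URL, credentials and
    # the test topic name "e2e-monitoring" are placeholders.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({
        "bootstrap.servers": "my-kafka.example.aivencloud.com:12345",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-256",
        "sasl.username": "monitoring",
        "sasl.password": "********",
    })

    broker_ids = set(admin.list_topics(timeout=10).brokers)

    # Create a test topic with one partition per broker (errors such as
    # "topic already exists" are just printed).
    for topic, future in admin.create_topics(
        [NewTopic("e2e-monitoring", num_partitions=len(broker_ids), replication_factor=3)]
    ).items():
        try:
            future.result()
        except Exception as exc:
            print(f"create_topics({topic}): {exc}")

    # Check that every broker leads at least one partition of the test topic.
    metadata = admin.list_topics(topic="e2e-monitoring", timeout=10)
    leaders = {p.leader for p in metadata.topics["e2e-monitoring"].partitions.values()}
    missing = broker_ids - leaders
    print("all brokers lead a partition" if not missing else f"no leadership on brokers {missing}")
    ```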

  4. As a platform engineer
    I want to easily find out the current limits on the number of service users/ACLs in a Kafka service
    so that I can keep track of how close to the limit I am and avoid outages caused by not being able to create new service users/ACLs.
    In addition, a self-service option for increasing the limit would reduce the need to contact support.

    2 votes

  5. As a Kafka operator
    I want to understand consumer lag
    so that I can know potential impact to customer experience, latency, and if I need to size up my cluster

    Aiven currently provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in Datadog, it would be nice to have this data available there as well. Today the options are to maintain a separate dashboard using Prometheus/Grafana, or to deploy a Datadog agent somewhere that scrapes our Prometheus endpoint and sends the data to Datadog.

    2 votes
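
    A sketch of the workaround mentioned above: a small job that scrapes the Aiven Prometheus endpoint and forwards lag values to a local Datadog agent via DogStatsD. The endpoint URL, credentials, and the metric name kafka_consumer_group_lag are placeholders, not the names Aiven actually exposes.

    ```python
    # Workaround sketch: scrape the Prometheus endpoint and forward lag values to a
    # local Datadog agent via DogStatsD. URL, credentials and the metric name
    # "kafka_consumer_group_lag" are placeholders/assumptions.
    import requests
    from datadog import initialize, statsd

    initialize(statsd_host="127.0.0.1", statsd_port=8125)  # local Datadog agent

    resp = requests.get(
        "https://my-kafka.example.aivencloud.com:9273/metrics",  # placeholder endpoint
        auth=("prometheus-user", "********"),
        timeout=10,
    )
    resp.raise_for_status()

    for line in resp.text.splitlines():
        if line.startswith("#") or not line.startswith("kafka_consumer_group_lag"):
            continue
        value = line.rsplit(" ", 1)[1]  # label parsing omitted for brevity
        statsd.gauge("aiven.kafka.consumer_group_lag", float(value), tags=["source:prometheus"])
    ```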

  6. As an application owner,
    I want to be able to store data in blob storage so that I can recover locally from an outage using the backups in blob storage, and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.

    2 votes
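
    One possible interim approach is to continuously copy topics to object storage with an S3 sink connector registered through the Kafka Connect REST API, as in the sketch below. The connector class and config keys shown are assumptions for illustration only (check the connector's own documentation for the real names), and restoring a dropped topic would still mean replaying the stored files back into Kafka.

    ```python
    # Illustrative only: the connector class and config keys are assumptions; check
    # the S3 sink connector's documentation for the real names. URL and credentials
    # are placeholders.
    import requests

    CONNECT_URL = "https://my-connect.example.aivencloud.com:443"
    AUTH = ("avnadmin", "********")

    connector = {
        "name": "topics-to-s3-backup",
        "config": {
            "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",  # assumed
            "topics.regex": "orders\\..*",                # hundreds of topics via a regex
            "aws.s3.bucket.name": "kafka-topic-backups",  # assumed config keys
            "aws.s3.region": "eu-west-1",
        },
    }

    # Register the connector through the Kafka Connect REST API.
    resp = requests.post(f"{CONNECT_URL}/connectors", json=connector, auth=AUTH, timeout=30)
    resp.raise_for_status()
    print(resp.json())
    ```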

  7. As a cloud platform engineer, I need a view of Kafka cluster resource consumption across all client IDs. This information is needed to configure and manage quotas.

    2 votes

  8. As a Kafka user, I want to configure the threshold used to send CPU warning emails. Right now we are getting emails because the CPU exceeds 50%, which is irrelevant to us. We know we can configure the emails to go to another email address, but in that case we would lose all the other technical emails as well.
    It would also be good for us if we could turn off that single alert.

    2 votes

  9. As a developer / operator,
    I want to see Kafka topics grouped by type in the Aiven Console
    so that I can distinguish between topics created by Schema Registry, MirrorMaker2, Connect, and directly in Kafka itself.
    In addition, I want to apply filters and sorting by type.

    2 votes
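
    Until this exists in the Console, a rough client-side approximation is to group topics by the conventional default names of the internal topics, as sketched below; the prefixes used here are assumptions and vary between setups, and the bootstrap address is a placeholder.

    ```python
    # Group topics by conventional internal-topic naming; the prefixes below are
    # assumptions and vary between setups.
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "my-kafka.example.aivencloud.com:12345"})

    def topic_type(name: str) -> str:
        if name == "_schemas":
            return "schema-registry"
        if name.startswith("connect-") or name.startswith("__connect"):
            return "kafka-connect"
        if name.startswith("mm2-") or name.endswith(".internal") or name == "heartbeats":
            return "mirrormaker2"
        if name.startswith("__"):
            return "kafka-internal"
        return "user"

    groups: dict[str, list[str]] = {}
    for name in admin.list_topics(timeout=10).topics:
        groups.setdefault(topic_type(name), []).append(name)

    for kind, names in sorted(groups.items()):
        print(f"{kind}: {len(names)} topics")
    ```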

  10. As a Software Engineer at Wex
    I want to send additional metrics to Datadog
    so that I can send important metrics from Aiven's cluster, such as these:
    IsrShrinksPerSec
    IsrExpandsPerSec
    ActiveControllerCount
    OfflinePartitionsCount
    TotalTimeMs
    PurgatorySize
    RequestsPerSec
    Network bytes sent/received
    BytesInPerSec/BytesOutPerSec

    According to Datadog's documentation, these metrics are considered highly significant.
    https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/#kafka-emitted-metrics

    2 votes

  11. As a developer,
    I want to be able to select password length and difficulty when creating a service user,
    so that I can stay compliant with internal security and compliance rules.

    2 votes

  12. As an architect
    I want to have a message bus solution that is cost-effective while still maintaining the level of service for rare high-load scenarios
    so that I can save money, simplify operations, and align value to usage

    1 vote

  13. As a Data Platform Principal Engineer
    I want to use emails as Kafka usernames when OAuth authentication is configured
    so that I can use Databricks as an SSO provider to reduce the number of credentials that I need to manage and share with each user. This also improves security, because it automatically disables access when someone leaves the company.

    In particular, when I am using a Databricks Service Principal for the authentication, it works as expected. The Databricks Service Principal is identified by a unique UUID. To make it work, I have added a Kafka service user with that UUID as…

    1 vote

  14. As an SRE Engineer

    I want to customize ACLs to allow Kafka consumer operations to carry on even while the write lock is triggered when disk space reaches the threshold limits of 95% or 97%. Given that Kafka consumers' offset commits are relatively small, this option would not be detrimental.

    so that even when disk space reaches critical levels, it will not immediately impact consumer-side operations

    1 vote

  15. As a Solutions Architect
    I want consumer lag for Kafka consumers to be available out of the box
    so that I can correctly monitor streaming applications without having to set up an external Prometheus. Consumer lag is the key metric to monitor for the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric.

    Background
    1. There is a consumer lag metric on the default metrics dashboard, but it does not work.
    2. I contacted support and found that…

    1 vote

  16. Using the Prometheus endpoint of our Kafka service, we need the Kafka metrics to have a type.

    As shown below, the metrics currently have no type; the type should be set at the endpoint to ease the use of these metrics. For example:

    # TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped
    # TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped

    1 vote
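
    A quick way to quantify the issue is to fetch the metrics page and list every metric whose TYPE line says untyped, as in the sketch below; the URL and credentials are placeholders.

    ```python
    # List every metric the endpoint reports as untyped.
    import requests

    resp = requests.get(
        "https://my-kafka.example.aivencloud.com:9273/metrics",  # placeholder endpoint
        auth=("prometheus-user", "********"),
        timeout=10,
    )
    resp.raise_for_status()

    untyped = [
        line.split()[2]
        for line in resp.text.splitlines()
        if line.startswith("# TYPE ") and line.rstrip().endswith(" untyped")
    ]
    print(f"{len(untyped)} untyped metrics")
    for name in untyped:
        print(name)
    ```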

  17. As a managed Kafka administrator
    I want to have the ability to black-list a set of IP addresses that I believe are suspicious from my Kafka service, so that I can prevent my service from being disrupted by unexpected traffic.
    In addition, I would like to keep the current allow-list to be able to allow known IPs, and a way to resolve conflicts between the allow list and the block list, where the block list takes priority.

    1 vote

  18. As an OPS Engineer
    I want to be able to balance the cluster based on CPU usage rather than disk usage. The current algorithm focuses on disk usage, which is not optimal for our application.
    Can we have an option to apply partition rebalancing based on CPU usage?

    1 vote

  19. As a Kafka Administrator
    I want the ability to scale a Kafka cluster without an upgrade, which increases the time scaling takes.
    In production this can become critical and lead to downtime. For example, the last two scaling operations took 12 hours each.

    1 vote

  20. Our customer Jago wants to monitor the status of connectors and tasks but currently can't find relevant metrics to do so. They want to be able to monitor the status of connectors and tasks on a dashboard and also get notified whenever a connector has not been running for X minutes.

    Jago has a connector running but cannot find the metrics for kafka.connect:type=connector-metrics,connector=*.

    The specific metric they are looking for is the one related to the status of a connector. For example, in the customer's current self-managed Kafka Connect, they have the following metrics. This is convenient because they…

    1 vote
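
    In the meantime, connector and task state can be polled from the Kafka Connect REST API (GET /connectors and GET /connectors/<name>/status) and fed into a dashboard or alert. A minimal sketch, with placeholder URL and credentials:

    ```python
    # Poll connector and task state from the Kafka Connect REST API.
    import requests

    CONNECT_URL = "https://my-connect.example.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "********")

    for name in requests.get(f"{CONNECT_URL}/connectors", auth=AUTH, timeout=10).json():
        status = requests.get(f"{CONNECT_URL}/connectors/{name}/status", auth=AUTH, timeout=10).json()
        failed_tasks = [t["id"] for t in status["tasks"] if t["state"] != "RUNNING"]
        print(f"{name}: {status['connector']['state']}, failed tasks: {failed_tasks}")
    ```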
