Event Streaming

Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.

91 results found

  1. Support RocksDB as a persistent data store for Flink

    As an engineer, I need to be able to run Flink jobs with larger state in order to meet my data processing requirements.

    2 votes

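    For illustration, this is roughly what the request maps to in upstream Flink, shown as a minimal PyFlink sketch; it assumes Aiven for Apache Flink exposed such an option, and durable checkpoint storage would still need to be configured on the service side.

    ```python
    # Minimal sketch: enable the embedded RocksDB state backend in upstream
    # Flink via the PyFlink API. Assumes the managed service exposed this
    # option; checkpoint storage configuration is left to the cluster.
    from pyflink.datastream import StreamExecutionEnvironment, EmbeddedRocksDBStateBackend

    env = StreamExecutionEnvironment.get_execution_environment()

    # Keep large keyed state in RocksDB on local disk instead of on the JVM heap.
    env.set_state_backend(EmbeddedRocksDBStateBackend())

    # Checkpoint periodically; state.checkpoints.dir (or equivalent) would
    # still need to point at durable storage.
    env.enable_checkpointing(60_000)
    ```
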
  2. Debugging failed connectivity between Aiven for Apache MirrorMaker and an external Kafka cluster configured using an integration endpoint is very difficult today. No errors are surfaced in the integration endpoint configuration screen itself; we must wait for the replication flow to attempt to start and then dig cryptic errors out of the MM2 logs.

    Some kinds of errors (e.g. failure to build SSL keystores) are not even surfaced in MM2 logs and are only visible to Aiven operators.

    Please consider adding a basic connectivity check to allow for quicker troubleshooting and iteration. This check should ensure that the network path between Aiven…

    2 votes

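    For illustration, a basic check of the kind being requested could look like the sketch below: it only verifies that a TCP connection and TLS handshake to the external Kafka bootstrap address succeed. The hostname, port and certificate paths are placeholders.

    ```python
    # Sketch of a basic connectivity check: verify TCP reachability and a TLS
    # handshake against the external Kafka bootstrap address.
    # Hostname, port and certificate paths are placeholders.
    import socket
    import ssl

    BOOTSTRAP_HOST = "external-kafka.example.com"  # placeholder
    BOOTSTRAP_PORT = 9093                          # placeholder

    ctx = ssl.create_default_context(cafile="ca.pem")                    # placeholder paths
    ctx.load_cert_chain(certfile="service.cert", keyfile="service.key")

    try:
        with socket.create_connection((BOOTSTRAP_HOST, BOOTSTRAP_PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=BOOTSTRAP_HOST) as tls:
                print("TLS handshake OK, negotiated", tls.version())
    except OSError as exc:  # also covers ssl.SSLError
        print("Connectivity check failed:", exc)
    ```
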
  3. Our customer Jago wants to monitor the status of connectors and tasks but currently can't find relevant metrics to do so. They want to be able to monitor the status of connectors and tasks on a dashboard and also get notified whenever a connector has not been running for X minutes.

    Jago has a connector running but cannot find the metrics for kafka.connect:type=connector-metrics,connector=*.

    The specific metric they are looking for is the one related to the status of a connector. For example, in the customer's current self-managed Kafka Connect, they have the following metrics. This is convenient because they…

    1 vote

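    Until such metrics are exposed, one stopgap is polling the Kafka Connect REST API status endpoint, roughly as in the sketch below (service URL and credentials are placeholders; the alerting side is left out).

    ```python
    # Sketch: poll connector and task status from the Kafka Connect REST API
    # as a stand-in for the missing kafka.connect:type=connector-metrics
    # metrics. URL and credentials are placeholders.
    import requests

    CONNECT_URL = "https://my-connect.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "password")                        # placeholder

    for name in requests.get(f"{CONNECT_URL}/connectors", auth=AUTH).json():
        status = requests.get(f"{CONNECT_URL}/connectors/{name}/status", auth=AUTH).json()
        connector_state = status["connector"]["state"]      # e.g. RUNNING, PAUSED, FAILED
        task_states = [task["state"] for task in status["tasks"]]
        print(name, connector_state, task_states)
    ```
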
  4. As a developer, I should be able to authenticate with the same service user certificate for both Kafka and Schema Registry

    4 votes

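    A sketch of what the requested setup could look like with confluent-kafka-python, reusing one service user certificate for both clients; whether the Schema Registry endpoint accepts that mTLS identity is exactly what this idea asks for. URLs and file paths are placeholders.

    ```python
    # Sketch: reuse one service user certificate for the Kafka client and the
    # Schema Registry client. The registry accepting this mTLS identity is the
    # requested behaviour; URLs and paths are placeholders.
    from confluent_kafka import Producer
    from confluent_kafka.schema_registry import SchemaRegistryClient

    CERT, KEY, CA = "service.cert", "service.key", "ca.pem"    # placeholder paths

    producer = Producer({
        "bootstrap.servers": "my-kafka.aivencloud.com:9093",   # placeholder
        "security.protocol": "SSL",
        "ssl.ca.location": CA,
        "ssl.certificate.location": CERT,
        "ssl.key.location": KEY,
    })

    registry = SchemaRegistryClient({
        "url": "https://my-kafka.aivencloud.com:13044",        # placeholder
        "ssl.ca.location": CA,
        "ssl.certificate.location": CERT,   # same client certificate as above
        "ssl.key.location": KEY,
    })
    ```
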
  5. As a developer who uses Aiven's S3 sink connector,
    I want to be able to set offset.flush.interval.ms for my specific connector only, from the connector's configuration,
    so that I can avoid configuring it at the cluster level (for all connectors).

    1 vote

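    A sketch of the requested behaviour: passing offset.flush.interval.ms in the connector's own configuration when creating it through the Connect REST API. Today this is a worker-level setting, so the connector-level override shown here is the feature being asked for rather than something that works as-is; the connector class and keys follow Aiven's S3 sink connector docs, and the URL, credentials, topic and bucket are placeholders.

    ```python
    # Sketch of the REQUESTED behaviour (not currently supported): a
    # per-connector offset.flush.interval.ms override in the connector config.
    # URL, credentials, topic and bucket are placeholders.
    import requests

    CONNECT_URL = "https://my-connect.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "password")                        # placeholder

    connector = {
        "name": "my-s3-sink",
        "config": {
            "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
            "topics": "my-topic",                          # placeholder
            "aws.s3.bucket.name": "example-bucket",        # placeholder
            # Requested: override here, per connector, instead of the
            # cluster-wide offset.flush.interval.ms worker setting.
            "offset.flush.interval.ms": "10000",
        },
    }
    requests.post(f"{CONNECT_URL}/connectors", json=connector, auth=AUTH)
    ```
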
  6. As a non-technical user
    I want to be able to aggregate and join different streams of data
    without the need for developers

    1 vote

    Shelved  ·  Jonah Kowall responded

    This function will largely be replaced by ChatGPT or other LLMs which can generate clear code and instructions, making a visual builder unnecessary. 

  7. As a developer
    I want to know when a connector is paused or resumed
    so that I have timestamps and can tell whether anybody is doing something they are not supposed to do.

    1 vote

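    As a stopgap until such an audit trail exists, the transitions can be derived by polling the Connect REST API and recording timestamps, roughly as sketched below (URL, credentials, connector name and polling interval are placeholders).

    ```python
    # Sketch: poll connector status and log a timestamped line whenever the
    # state changes (e.g. RUNNING -> PAUSED). URL, credentials and connector
    # name are placeholders.
    import time
    from datetime import datetime, timezone

    import requests

    CONNECT_URL = "https://my-connect.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "password")                        # placeholder
    CONNECTOR = "my-connector"                             # placeholder

    last_state = None
    while True:
        status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status", auth=AUTH).json()
        state = status["connector"]["state"]
        if state != last_state:
            print(f"{datetime.now(timezone.utc).isoformat()} {CONNECTOR}: {last_state} -> {state}")
            last_state = state
        time.sleep(30)
    ```
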
  8. As an application developer
    I want to compress my Kafka messages but be able to decompress them using a transform before sinking them into a destination
    so that I can save on storage costs.
    In addition, I'd like to use ZSTD, but more common libraries might be enough.

    Note: Confluent has something similar:
    https://docs.confluent.io/platform/current/connect/transforms/gzipdecompress.html

    1 vote

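    For illustration only: Connect transforms (SMTs) are Java classes, but the sketch below shows in Python what the requested step would do to each record value on the sink side, using gzip from the standard library (zstd would work the same way via a zstd binding).

    ```python
    # Illustration of the requested data flow: the producer compresses the
    # payload to save storage, and a sink-side transform would decompress it
    # before the record reaches the destination. Real SMTs are written in Java.
    import gzip
    import json

    original = json.dumps({"event": "page_view", "user": 42}).encode("utf-8")

    # Producer side: compress the message value to cut Kafka storage costs.
    compressed_value = gzip.compress(original)

    # Desired sink-side transform: decompress before handing the record over.
    restored = gzip.decompress(compressed_value)
    assert restored == original
    ```
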
  9. As a data engineer
    I want to integrate my existing Datadog subscription with Flink
    so that I can store and monitor all metrics across my stack in a single location.
    In addition, this functionality is already available for other Aiven services.

    2 votes

  10. As a customer
    I want to be able to select Karapace version updates in Terraform code instead of the maintenance updates GUI
    so that I can control when Karapace updates are applied from my IaC definition.
    In addition, Karapace updates can introduce service interruptions for users because all consumer instances are terminated. Moving them out of maintenance updates means fewer user interruptions from the often more important maintenance updates.

    4 votes

  11. The Aiven platform allows custom key/value tags to be added to resources such as Kafka topics. It would be useful to have these exposed as additional labels on metrics so that alerts can be triggered based on this metadata.

    3 votes

  12. As a developer,
    I want to have an easy setup of my Kafka connectors when they are using internal Aiven services such as Postgres or OpenSearch
    so that I can save time, avoid mistakes, and take full advantage of the strengths of the platform.

    2 votes

  13. As a developer, I want to be able to read data from and write data to my S3 object storage, in order to integrate Flink easily into my existing data architecture. Using Flink to read data from S3, transform it, and then write to another S3 location allows easy consolidation and data quality management in a common reference data architecture.

    0 votes

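    A rough sketch of the requested flow using Flink's filesystem connector over S3 (PyFlink Table API, batch mode). It assumes the S3 filesystem plugin and credentials are available to the cluster; bucket paths, schemas and formats are placeholders.

    ```python
    # Sketch: read JSON files from S3, filter them, and write the result back
    # to another S3 location with Flink's filesystem connector.
    # Assumes the S3 filesystem plugin and credentials are configured;
    # paths and schemas are placeholders.
    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())

    t_env.execute_sql("""
        CREATE TABLE raw_events (
            user_id STRING,
            amount  DOUBLE
        ) WITH (
            'connector' = 'filesystem',
            'path' = 's3://example-bucket/raw/',
            'format' = 'json'
        )
    """)

    t_env.execute_sql("""
        CREATE TABLE curated_events (
            user_id STRING,
            amount  DOUBLE
        ) WITH (
            'connector' = 'filesystem',
            'path' = 's3://example-bucket/curated/',
            'format' = 'json'
        )
    """)

    # Transform on the way through, then write back to S3.
    t_env.execute_sql("""
        INSERT INTO curated_events
        SELECT user_id, amount FROM raw_events WHERE amount > 0
    """).wait()
    ```
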
  14. As an architect, I want to bring business logic out of the database and into a decoupled stream processing / event-driven architecture framework. With SQL Server, I want to stream changes to Apache Kafka using Debezium. This must support use cases where sensitivity classifications in SQL Server are used/required for things like PII.

    2 votes

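    For reference, a Debezium SQL Server source connector is typically submitted through the Connect REST API roughly as sketched below; exact property names vary between Debezium versions, and hosts, credentials and table names are placeholders, so treat this as an outline rather than a working config. How SQL Server sensitivity classifications would be honoured is the open part of this idea.

    ```python
    # Sketch (outline only): register a Debezium SQL Server source connector
    # via the Connect REST API. Property names vary by Debezium version;
    # hosts, credentials, database and table names are placeholders.
    import requests

    CONNECT_URL = "https://my-connect.aivencloud.com:443"  # placeholder
    AUTH = ("avnadmin", "password")                        # placeholder

    connector = {
        "name": "sqlserver-cdc",
        "config": {
            "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
            "database.hostname": "sqlserver.example.com",   # placeholder
            "database.port": "1433",
            "database.user": "cdc_user",                    # placeholder
            "database.password": "***",
            "database.names": "sales",                      # Debezium 2.x-style key
            "topic.prefix": "sqlserver",                    # Debezium 2.x-style key
            "table.include.list": "dbo.orders",             # placeholder
            "schema.history.internal.kafka.bootstrap.servers": "my-kafka.aivencloud.com:9093",
            "schema.history.internal.kafka.topic": "schema-changes.sales",
        },
    }
    requests.post(f"{CONNECT_URL}/connectors", json=connector, auth=AUTH)
    ```
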
  15. As a developer, I need to be able to read and write from Databricks so that I can complement my existing data architecture

    2 votes

  16. Kafka consumer clients on Aiven for Kafka should be able to use the client.rack configuration introduced in KIP-392 (fetch from the closest replica)

    3 votes

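    The client side of KIP-392 is a one-line configuration change, sketched below with confluent-kafka-python; the missing piece this idea asks for is the broker-side support (rack metadata on the brokers). Bootstrap address, certificate paths, group id and zone name are placeholders.

    ```python
    # Sketch: a consumer advertising its rack / availability zone so it can
    # fetch from the closest replica (KIP-392). Bootstrap address, certificate
    # paths, group id and zone are placeholders.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "my-kafka.aivencloud.com:9093",  # placeholder
        "security.protocol": "SSL",
        "ssl.ca.location": "ca.pem",
        "ssl.certificate.location": "service.cert",
        "ssl.key.location": "service.key",
        "group.id": "my-consumer-group",
        "client.rack": "eu-west-1a",   # should match the brokers' broker.rack / AZ
    })
    consumer.subscribe(["my-topic"])
    ```
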
  17. As a developer, I need Flink to be able to automatically increase compute resources up to a self-defined threshold so that it can seamlessly process significant increases in traffic and maintain performance. The increase in traffic could be either a permanently higher workload or last for a temporary, pre-defined amount of time; the scaling would be triggered by a customer request, with the corresponding additional billing impact.

    1 vote

  18. As a Cloud platform engineer, I need to provide developers with the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.

    2 votes

  19. As a Cloud platform engineer, I need an automated way to set up and update quota configurations on a cluster, taking into account changes in resource consumption patterns among producers and consumers.

    1 vote

  20. As a Cloud platform engineer, I need a view of Kafka cluster resource consumption across all client IDs. Such information is needed to configure quotas and to manage them.

    1 vote
