Event Streaming

  • Aiven for Apache Kafka® - Apache Kafka as a fully managed service, deployed in the cloud of your choice, with a full set of capabilities for building your streaming data pipelines. Find more info in our documentation or developer center.
  • Aiven for Apache Kafka® Connect - Seamlessly transport data between Kafka and external systems using Kafka Connect
  • Aiven for Apache Kafka® MirrorMaker 2 - Replicate data across clusters with MirrorMaker 2
  • Karapace - Easily manage schemas with Karapace
Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.

99 results found

  1. As an application engineer,
    I want a CSV connector that ingests data from flat files (CSV) and creates a stream of records that can be processed in Apache Kafka, similar to the Confluent Spool Dir CSV source connector (https://docs.confluent.io/kafka-connectors/spooldir/current/connectors/csv_source_connector.html). A sketch of such a connector follows at the end of this entry.

    4 votes

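    To make the request concrete, here is a minimal sketch of what the source task of such a connector could look like, assuming only the standard Kafka Connect API. The class name, the config keys (topic, csv.file.path) and the offset scheme are illustrative, not an existing Aiven connector.

        // Minimal Kafka Connect source task: tails a CSV file and emits one
        // record per row. A real connector would parse columns into a Struct;
        // this sketch forwards each raw line as a string value.
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;
        import java.util.Map;
        import org.apache.kafka.connect.data.Schema;
        import org.apache.kafka.connect.errors.ConnectException;
        import org.apache.kafka.connect.source.SourceRecord;
        import org.apache.kafka.connect.source.SourceTask;

        public class CsvSourceTask extends SourceTask {
            private BufferedReader reader;
            private String topic;
            private String file;
            private long lineNo;

            @Override
            public void start(Map<String, String> props) {
                topic = props.get("topic");            // target topic (illustrative key)
                file = props.get("csv.file.path");     // input file (illustrative key)
                try {
                    reader = Files.newBufferedReader(Path.of(file));
                } catch (IOException e) {
                    throw new ConnectException("cannot open CSV file " + file, e);
                }
            }

            @Override
            public List<SourceRecord> poll() throws InterruptedException {
                final String line;
                try {
                    line = reader.readLine();
                } catch (IOException e) {
                    throw new ConnectException(e);
                }
                if (line == null) {                    // no new rows yet; back off
                    Thread.sleep(1000);
                    return List.of();
                }
                lineNo++;
                // The partition/offset maps let Connect resume from the right
                // line after a restart.
                return List.of(new SourceRecord(
                    Map.of("file", file), Map.of("line", lineNo),
                    topic, Schema.STRING_SCHEMA, line));
            }

            @Override
            public void stop() {
                try { reader.close(); } catch (IOException ignored) { }
            }

            @Override
            public String version() { return "0.1"; }
        }
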
  2. As a customer
    I want to be able to select Karapace version updates in Terraform code instead of maintenance updates GUI
    so that I can control when Karapace updates are applied from my IaC definition.
    In addition, Karapace updates can interrupt service for users because all consumer instances are terminated. Moving them out of maintenance updates would mean fewer user-facing interruptions tied to the often more important maintenance updates.

    4 votes

  3. As a Cloud platform engineer, I need to give developers the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster. A sketch of the underlying Kafka admin call follows at the end of this entry.

    4 votes

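    For reference, a Terraform resource for this would presumably wrap the native Kafka quota API. Below is a minimal sketch of that underlying call using the standard Java Admin client; the bootstrap address, client id and rates are placeholders.

        // Set produce/consume byte-rate quotas for a client.id via the
        // standard Kafka Admin API (the call a Terraform resource would wrap).
        import java.util.List;
        import java.util.Map;
        import java.util.Properties;
        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.common.quota.ClientQuotaAlteration;
        import org.apache.kafka.common.quota.ClientQuotaEntity;

        public class SetQuota {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "my-kafka.example.com:9092"); // placeholder
                try (Admin admin = Admin.create(props)) {
                    // The quota entity: all requests made with this client.id.
                    ClientQuotaEntity entity = new ClientQuotaEntity(
                        Map.of(ClientQuotaEntity.CLIENT_ID, "billing-app"));
                    ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                        new ClientQuotaAlteration.Op("producer_byte_rate", 1_048_576.0),    // 1 MiB/s in
                        new ClientQuotaAlteration.Op("consumer_byte_rate", 2_097_152.0)));  // 2 MiB/s out
                    admin.alterClientQuotas(List.of(alteration)).all().get();
                }
            }
        }
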
  4. As a developer,
    I want to be able to pull data from SAP systems,
    so that I can build my data pipelines.

    4 votes

  5. As an Avro schema administrator
    I want to be able to quickly find an Avro schema subject in the schema registry
    so that I can view the schema, compare schema versions and change the compatibility level.
    In addition, we have 5-10k schema subjects in some of our projects, and there is no paging in the schema subject view - all subjects are loaded at once, which makes the view very slow.

    Possible solutions might be to add paging, or simply to limit the number of subjects listed. A client-side workaround sketch follows at the end of this entry.

    4 votes

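    Until the view gets paging, one workaround is to search for subjects directly against the registry. Below is a minimal sketch using the standard GET /subjects endpoint; the URL and search term are placeholders, and Jackson is assumed for JSON parsing.

        // List all subjects from the schema registry and filter locally.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;

        public class FindSubjects {
            public static void main(String[] args) throws Exception {
                HttpRequest req = HttpRequest.newBuilder(
                    URI.create("https://karapace.example.com/subjects")).GET().build();
                String body = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body();
                // The response is a JSON array of subject names.
                for (JsonNode subject : new ObjectMapper().readTree(body)) {
                    if (subject.asText().contains("orders")) {  // search term
                        System.out.println(subject.asText());
                    }
                }
            }
        }
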
  6. As a developer,
    I want error messages and logs to contain schema names and versions,
    so that I can quickly identify and troubleshoot issues related to specific schemas more efficiently.
    In addition, this improvement matters most when dealing with issues in referenced schemas, because the extra context in error messages and logs makes problems easier to diagnose and resolve. It can significantly reduce the time spent on debugging and improve overall system maintainability.

    3 votes

    Pending Review  ·  0 comments  ·  Karapace

  7. As a developer,
    I want an enforced compatibility check on all existing schemas whenever the compatibility level is changed to a more restrictive one (or for any change),
    so that I can ensure all schemas comply with the new, more restrictive compatibility level and maintain consistency in the schema registry.
    In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry. A sketch of such a sweep follows at the end of this entry.

    3 votes

    Pending Review  ·  0 comments  ·  Karapace

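    As a client-side approximation of the requested check, the sweep below re-verifies each subject's latest schema against its previous version through the standard endpoint POST /compatibility/subjects/{subject}/versions/{version}. The registry URL is a placeholder and Jackson is assumed for JSON handling.

        // Re-check every subject after a compatibility-level change.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;

        public class CompatSweep {
            static final String REGISTRY = "https://karapace.example.com"; // placeholder
            static final HttpClient HTTP = HttpClient.newHttpClient();
            static final ObjectMapper JSON = new ObjectMapper();

            static JsonNode get(String path) throws Exception {
                HttpRequest req = HttpRequest.newBuilder(URI.create(REGISTRY + path)).GET().build();
                return JSON.readTree(HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body());
            }

            public static void main(String[] args) throws Exception {
                for (JsonNode s : get("/subjects")) {
                    String subject = s.asText();
                    JsonNode versions = get("/subjects/" + subject + "/versions");
                    if (versions.size() < 2) continue;   // nothing to compare against
                    int previous = versions.get(versions.size() - 2).asInt();
                    String latestSchema = get("/subjects/" + subject + "/versions/latest")
                                              .get("schema").asText();
                    String payload = JSON.createObjectNode().put("schema", latestSchema).toString();
                    HttpRequest check = HttpRequest.newBuilder(URI.create(REGISTRY
                            + "/compatibility/subjects/" + subject + "/versions/" + previous))
                        .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                        .POST(HttpRequest.BodyPublishers.ofString(payload)).build();
                    // Response is {"is_compatible": true|false}.
                    System.out.println(subject + " v" + previous + " vs latest: "
                        + HTTP.send(check, HttpResponse.BodyHandlers.ofString()).body());
                }
            }
        }
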
  8. As a data streaming architect
    I want to be able to export records from Kafka to GCS and use values in the record to define the bucket or file name
    so that I can organize data by those values to make it easier to find and process.

    The use case is a multi-user/multi-tenant application where the user info is a value in the record; the output in object storage needs to be organized by that value somehow. A connector configuration sketch follows at the end of this entry.

    3 votes

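    For context, here is a sketch of a GCS sink connector definition posted to the Kafka Connect REST API. The connector class and config keys follow the Aiven GCS sink connector's documented names but should be treated as assumptions here; the record-value template variable in the comment does not exist today and marks exactly what this idea requests.

        // Create a GCS sink connector via POST /connectors on Kafka Connect.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Map;
        import com.fasterxml.jackson.databind.ObjectMapper;

        public class CreateGcsSink {
            public static void main(String[] args) throws Exception {
                Map<String, Object> connector = Map.of(
                    "name", "gcs-sink-per-tenant",
                    "config", Map.of(
                        "connector.class", "io.aiven.kafka.connect.gcs.GcsSinkConnector",
                        "topics", "events",
                        "gcs.bucket.name", "my-bucket",                  // placeholder
                        "gcs.credentials.path", "/run/secrets/gcs.json", // placeholder
                        // Supported today: grouping by topic/partition/offset.
                        // Requested (hypothetical, not implemented): a record-value
                        // variable such as {{value.tenant_id}}/{{topic}}-{{start_offset}}.
                        "file.name.template", "{{topic}}-{{partition}}-{{start_offset}}"));
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("http://connect.example.com:8083/connectors")) // placeholder
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                        new ObjectMapper().writeValueAsString(connector)))
                    .build();
                System.out.println(HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body());
            }
        }
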
  9. As an IT architect
    I want complete traceability within my microservice mesh, where two microservices communicate via the outbox pattern through a Debezium connector. To achieve this, the Debezium connector requires that some of the OpenTelemetry APIs be on the Kafka Connect classpath.
    I want this feature so that I can see the complete chain of interactions for a specific request, observe where time is spent inside a particular microservice, and find possible bottlenecks. A configuration sketch follows at the end of this entry.

    3 votes

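    A sketch of the Connect-side setup this depends on: a Debezium outbox connector with the tracing SMT enabled. Debezium's ActivateTracingSpan transform loads only when the OpenTelemetry APIs are on the Kafka Connect classpath, which is the capability requested here. Hostnames and database details are placeholders.

        // Create a Debezium outbox connector with tracing enabled,
        // via POST /connectors on Kafka Connect.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Map;
        import com.fasterxml.jackson.databind.ObjectMapper;

        public class CreateTracedOutbox {
            public static void main(String[] args) throws Exception {
                Map<String, Object> connector = Map.of(
                    "name", "orders-outbox",
                    "config", Map.of(
                        "connector.class", "io.debezium.connector.postgresql.PostgresConnector",
                        "database.hostname", "db.example.com",   // placeholder
                        "database.dbname", "orders",             // placeholder
                        "topic.prefix", "orders",
                        "table.include.list", "public.outbox",
                        // Route outbox rows to event topics, then open a tracing
                        // span per record so trace context propagates end to end.
                        "transforms", "outbox,tracing",
                        "transforms.outbox.type", "io.debezium.transforms.outbox.EventRouter",
                        "transforms.tracing.type", "io.debezium.transforms.tracing.ActivateTracingSpan"));
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("http://connect.example.com:8083/connectors")) // placeholder
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                        new ObjectMapper().writeValueAsString(connector)))
                    .build();
                System.out.println(HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body());
            }
        }
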
  10. As a developer
    I want to have the ability to use our own S3 bucket for storing Kafka tiered storage
    so that I can access the data from S3 and query some data for debugging (without streaming all the data to Kafka)

    3 votes

  11. As a data engineer
    I want to integrate my existing Datadog subscription with Flink
    so that I can store and monitor all metrics across my stack in a single location.
    In addition, this functionality is already available for other Aiven services.

    3 votes

    Shelved  ·  0 comments

  12. The Aiven platform allows custom key/value tags to be added to resources such as Kafka topics. It would be useful to have these exposed as additional labels on metrics so that alerts can be triggered based on this metadata.

    3 votes

  13. As a developer I would like to push the output of a Flink operation to an HTTP API sink.

    3 votes

    Shelved  ·  0 comments

  14. As a platform developer
    I want to rotate client certificates for service users without downtime
    so that I can maintain a proper security level and availability.

    To rotate to a new certificate while the old one is still valid, we need to create a new service user. Combining a predefined pattern for service user names with clever use of wildcards in ACLs makes this work, but it increases the number of active service users by 2-3x.
    Since Aiven services have a limit on the number of active service users, this 2-3x increase causes problems.

    When a certificate expires, a…

    2 votes

  15. As an architect
    I want to connect Aiven for MirrorMaker2 with my self-managed Kafka instance using AWS PrivateLink
    so that I can migrate from self-managed Kafka in AWS to Aiven for Kafka securely.

    2 votes

  16. As an organisation (DevOps/Security/Vendor Manager) using Aiven Kafka, I want to be able to use the native Kafka partition management APIs.

    Take the use case of monitoring, where we want to produce to and consume from a test topic and ensure every broker is the leader for at least one partition, proving end-to-end functionality across all brokers - KMinion's end-to-end monitoring is one example. A sketch of that check follows at the end of this entry.

    2 votes

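    The check itself only needs the public Java Admin API, as in the sketch below (the bootstrap address and topic name are placeholders); the missing piece on a managed service is the partition management API needed to act on the result, e.g. to reassign leadership.

        // Verify that every broker leads at least one partition of a test topic.
        import java.util.HashSet;
        import java.util.Properties;
        import java.util.Set;
        import java.util.stream.Collectors;
        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.clients.admin.TopicDescription;
        import org.apache.kafka.common.Node;

        public class LeadershipCheck {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "my-kafka.example.com:9092"); // placeholder
                try (Admin admin = Admin.create(props)) {
                    Set<Integer> brokers = admin.describeCluster().nodes().get()
                        .stream().map(Node::id).collect(Collectors.toSet());
                    TopicDescription topic = admin.describeTopics(Set.of("monitoring-canary"))
                        .allTopicNames().get().get("monitoring-canary");
                    Set<Integer> leaders = new HashSet<>();
                    topic.partitions().forEach(p -> leaders.add(p.leader().id()));
                    brokers.removeAll(leaders);
                    if (brokers.isEmpty()) {
                        System.out.println("OK: every broker leads at least one partition");
                    } else {
                        System.out.println("Brokers leading no partition: " + brokers);
                    }
                }
            }
        }
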
  17. As a platform engineer
    I want to easily find out the current limits for number of service users/ACLs in a Kafka service
    so that I can keep track of how close to the limit I am and avoid outages caused by not being able to create new service users/ACLs.
    In addition, a self-service option for increasing the limit would reduce the need to contact support.

    2 votes

  18. As a Kafka operator
    I want to understand consumer lag
    so that I can gauge the potential impact on customer experience and latency, and decide whether I need to size up my cluster.

    Currently, Aiven provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in Datadog, it would be nice to have this data available through Datadog as well. Today the options are a separate dashboard using Prometheus/Grafana, or deploying a Datadog agent somewhere that scrapes our Prometheus endpoint and sends the data on to Datadog. A lag-computation sketch follows at the end of this entry.

    2 votes

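    Until a native integration exists, lag can also be computed with the public Admin API and pushed to Datadog by a custom check, as in the minimal sketch below (the bootstrap address and group id are placeholders).

        // Compute consumer lag per partition: the group's committed offset
        // versus the partition's current end offset.
        import java.util.Map;
        import java.util.Properties;
        import java.util.stream.Collectors;
        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.clients.admin.ListOffsetsResult;
        import org.apache.kafka.clients.admin.OffsetSpec;
        import org.apache.kafka.clients.consumer.OffsetAndMetadata;
        import org.apache.kafka.common.TopicPartition;

        public class ConsumerLag {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "my-kafka.example.com:9092"); // placeholder
                try (Admin admin = Admin.create(props)) {
                    // Committed offsets for the consumer group...
                    Map<TopicPartition, OffsetAndMetadata> committed = admin
                        .listConsumerGroupOffsets("checkout-consumers")       // placeholder
                        .partitionsToOffsetAndMetadata().get();
                    // ...and the current end offset of each of those partitions.
                    Map<TopicPartition, OffsetSpec> query = committed.keySet().stream()
                        .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
                    Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                        admin.listOffsets(query).all().get();
                    committed.forEach((tp, om) -> System.out.println(
                        tp + " lag=" + (ends.get(tp).offset() - om.offset())));
                }
            }
        }
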
  19. As an application owner,
    I want to be able to store data in blob storage so that I can recover locally from an outage using the backups on blob storage, and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.

    2 votes

  20. As a developer
    I want to use a REST API against my Kafka instance
    so that I can write simple scripts without using client libraries.

    There doesn't seem to be comprehensive API documentation for the endpoints and functionality supported by the Karapace REST API. The website says it is a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with changes in the Confluent Kafka REST proxy, it is hard to rely on. Some users may simply prefer documentation of Karapace's own endpoints. A usage sketch follows at the end of this entry.

    2 votes

    Roadmapped  ·  1 comment  ·  Karapace

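    For illustration, this is the kind of script the REST API enables, using the produce endpoint POST /topics/{topic} from the Kafka REST API surface that Karapace implements. The URL and topic are placeholders, and authentication is omitted.

        // Produce a JSON record to a topic through the REST proxy.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class RestProduce {
            public static void main(String[] args) throws Exception {
                String body = "{\"records\": [{\"value\": {\"hello\": \"world\"}}]}";
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("https://rest-proxy.example.com/topics/test-topic"))
                    .header("Content-Type", "application/vnd.kafka.json.v2+json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
                HttpResponse<String> res = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
                System.out.println(res.statusCode() + " " + res.body()); // per-record offsets
            }
        }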