Event Streaming

Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.

13 results found

  1. As an application owner,
    I want to be able to store topic data in Blob Storage, so that I can recover locally from an outage using the backups in Blob Storage and also restore accidentally dropped topics. This backup could potentially include hundreds of topics.

    2 votes

  2. As a developer
    I want to use a REST API against my Kafka instance
    so that I can write simple scripts without using client libraries.

    There doesn't seem to be comprehensive API documentation for which endpoints and functionality the Karapace REST API supports. The website says it is a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with any changes in the Confluent Kafka REST API proxy, it is hard to trust that claim. Some users may find it preferable to simply have documentation for Karapace's own endpoints.

    2 votes

    Roadmapped  ·  1 comment  ·  Karapace
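
    A minimal sketch of the kind of script this would enable, assuming Karapace keeps exposing the Confluent-compatible v2 REST Proxy endpoints; the proxy URL, topic name and credentials below are placeholders, not Aiven-provided values:

        # Hedged sketch: produce a JSON record through a Confluent-compatible REST proxy.
        # Proxy URL, topic name and credentials are placeholders.
        import requests

        REST_PROXY_URL = "https://my-karapace-rest.example.com"  # placeholder
        topic = "orders"                                         # placeholder

        resp = requests.post(
            f"{REST_PROXY_URL}/topics/{topic}",
            headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
            json={"records": [{"value": {"order_id": 1, "status": "created"}}]},
            auth=("avnadmin", "service-password"),               # placeholder credentials
        )
        resp.raise_for_status()
        print(resp.json())  # offsets of the produced record(s)
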
  3. As a developer / DevOps engineer
    I want to be able to sink CDC data into Apache Iceberg format
    so that I can analyze the data using the time-travel feature of AWS Athena.
    In addition, we may find a way for the current "Aiven - Amazon AWS S3 Sink" connector to produce "Apache Iceberg" format in addition to "Parquet", or we may provide a dedicated connector like the one from this repository: https://github.com/tabular-io/iceberg-kafka-connect

    Yours faithfully,
    LCDP

    9 votes
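
    If a dedicated connector along these lines became available, registering it would presumably go through the standard Kafka Connect REST API. The connector class below is taken from the linked repository, and the topic and table settings are illustrative placeholders, not an Aiven-supported configuration:

        # Hedged sketch: register an Iceberg sink through the standard Kafka Connect REST API.
        # The connector class comes from the linked tabular-io repository; the topic and all
        # iceberg.* keys are illustrative placeholders, and catalog settings (for Athena/Glue)
        # would also be needed - check the connector's own documentation.
        import requests

        CONNECT_URL = "https://my-kafka-connect.example.com"  # placeholder
        config = {
            "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
            "topics": "cdc.public.orders",         # placeholder CDC topic
            "iceberg.tables": "analytics.orders",  # illustrative target table
        }
        resp = requests.put(
            f"{CONNECT_URL}/connectors/iceberg-sink/config",  # creates or updates the connector
            json=config,
            auth=("avnadmin", "service-password"),            # placeholder credentials
        )
        resp.raise_for_status()
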

  4. As a platform engineer
    I want service updates to be versioned and to be able to select a specific version (Kafka, Karapace, etc.) to update to, so that I can perform correct change promotion from non-prod to production (instead of being forced to always apply the latest), and so that a new version released while an update is in flight does not lead to different versions running across the brokers (as currently happens, which we have experienced can lead to incompatibilities). Additionally, I would like it to be clear what version(s) is currently running in…

    12 votes

  5. As an ops engineer,
    I want to be able to declare fine-grained ACLs
    so that I avoid having to grant "admin" rights to users that only need "DeleteRecords" rights on specific topics.
    The client uses Kafka Streams, which needs specific rights (https://docs.confluent.io/platform/current/streams/developer-guide/security.html#required-acl-setting-for-secure-ak-clusters) that are not covered by Aiven's predefined rights.
    Currently, the "admin" right is too broad for this access (I don't want the user to be able to create topics).

    8 votes
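
    For reference, the rights Kafka Streams needs look roughly like the following when expressed as native, prefixed Kafka ACLs. This is a sketch of what "fine-grained" would mean here, not something the current Aiven rights model exposes; the bootstrap server, principal and application.id are placeholders:

        # Hedged sketch: the kind of prefixed, per-application ACLs Kafka Streams requires,
        # expressed with the native Kafka Admin API rather than Aiven's predefined rights.
        # Bootstrap server, principal and application.id are placeholders.
        from confluent_kafka.admin import (
            AclBinding, AclOperation, AclPermissionType,
            AdminClient, ResourcePatternType, ResourceType,
        )

        admin = AdminClient({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder

        app_id = "my-streams-app"        # Kafka Streams application.id (placeholder)
        principal = "User:streams-user"  # placeholder principal

        acls = [
            # Internal repartition/changelog topics are prefixed with the application.id.
            AclBinding(ResourceType.TOPIC, app_id, ResourcePatternType.PREFIXED,
                       principal, "*", op, AclPermissionType.ALLOW)
            for op in (AclOperation.READ, AclOperation.WRITE, AclOperation.CREATE,
                       AclOperation.DELETE, AclOperation.DESCRIBE)
        ] + [
            # The consumer group is also named after the application.id.
            AclBinding(ResourceType.GROUP, app_id, ResourcePatternType.PREFIXED,
                       principal, "*", AclOperation.READ, AclPermissionType.ALLOW),
        ]

        for binding, future in admin.create_acls(acls).items():
            future.result()  # raises if the broker rejected the ACL
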

  6. As an architect, I want to bring business logic out of the database and into a decoupled stream-processing / event-driven architecture framework. With SQL Server, I want to stream changes to Apache Kafka using Debezium. This must support use cases where sensitivity classifications in SQL Server are used/required for things like PII.

    2 votes

  7. As a cloud platform engineer, I need to provide developers with the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.

    3 votes

  8. As a developer, I want to use custom connectors on Aiven,
    so that I can integrate our proprietary data systems and custom applications with Apache Kafka without having to manage the underlying infrastructure.

    In addition, this will allow developers to concentrate on building business-critical applications instead of getting tied up with infrastructure tasks.

    15 votes

  9. As a developer / security engineer,
    I want to be able to authenticate my Apache Kafka connectors via mTLS,
    so that I can connect to my external services in a secure way.

    24 votes

  10. As a developer,
    I want to choose which Apache Kafka Connect connector version to use,
    so that I can control the connector version and make sure it is compatible with my applications.

    6 votes

  11. As a developer,
    I want to run Karapace as a fully managed, dedicated service,
    so that I can use it with Apache Kafka running both on and outside of Aiven.
    In addition, I could use the same Karapace service against multiple Apache Kafka services.

    17 votes

    Roadmapped  ·  2 comments  ·  Karapace
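
    A standalone Karapace would presumably keep exposing the Confluent-compatible Schema Registry API, so the same registration call could be pointed at it from applications on any of those Kafka services; the URL, subject and credentials below are placeholders:

        # Hedged sketch: register an Avro schema against a Schema Registry-compatible endpoint.
        # The Karapace URL, subject name and credentials are placeholders.
        import json

        import requests

        KARAPACE_URL = "https://my-karapace.example.com"  # placeholder
        subject = "orders-value"                          # placeholder subject

        avro_schema = {
            "type": "record",
            "name": "Order",
            "fields": [{"name": "order_id", "type": "long"}],
        }

        resp = requests.post(
            f"{KARAPACE_URL}/subjects/{subject}/versions",
            headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
            json={"schema": json.dumps(avro_schema)},
            auth=("avnadmin", "service-password"),        # placeholder credentials
        )
        resp.raise_for_status()
        print(resp.json())  # e.g. {"id": <global schema id>}
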
  12. As an SRE,
    I want to operate Apache Kafka without ZooKeeper,
    so that I can have more resources available for Apache Kafka itself.
    In addition, it will allow faster up- and down-scaling of my cluster and will support more partitions per broker.

    8 votes

  13. As a developer / data engineer,
    I want to be able to read data stored in an AWS S3 bucket,
    so that I can transfer, process, and transform that data for other applications.
    In addition, I can use the data stored in the S3 bucket as a backup and rehydrate my Apache Kafka cluster with it.

    16 votes
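
    Until a managed S3 source connector exists, the manual version of this flow looks roughly like the sketch below; the bucket, prefix, topic and bootstrap server are placeholders, and the one-record-per-object assumption is only for illustration:

        # Hedged sketch: rehydrate a topic from S3 objects with plain clients, i.e. the
        # manual version of what a managed S3 source connector would do.
        # Bucket, prefix, topic and bootstrap server are placeholders.
        import boto3
        from confluent_kafka import Producer

        s3 = boto3.client("s3")
        producer = Producer({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder

        bucket, prefix, topic = "my-backup-bucket", "orders/", "orders"  # placeholders

        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                # Assumes one record per object; a real backup format would need parsing.
                producer.produce(topic, value=body)
        producer.flush()
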
