Event Streaming

Join our forum to discuss your ideas with the Aiven community, or check out our public roadmap.


94 results found

  1. As a platform engineer,
    I want service updates to be versioned, and to be able to select a specific version (Kafka, Karapace, etc.) to update to, so that I can perform correct change promotion from non-production to production (instead of being forced to always apply the latest), and so that a new version released while an update is in flight does not leave different versions running across the brokers (as is currently the case, which in our experience can lead to incompatibilities). Additionally, I would like it to be clear what version(s) is currently running in…

    12 votes

  2. As a developer / DevOps,
    I want to be able to sink CDC data into the Apache Iceberg format,
    so that I can analyze the data using the time-travel feature of AWS Athena.
    In addition, we could either extend the current "Aiven - Amazon AWS S3 Sink" connector to produce the Apache Iceberg format alongside Parquet, or provide a dedicated connector like the one from this repository: https://github.com/tabular-io/iceberg-kafka-connect

    Yours faithfully,
    LCDP
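    As a rough sketch, registering the linked tabular-io connector through the Kafka Connect REST API might look like the following. The connector class and property names are taken from that project's documentation and are assumptions here, not an Aiven-supported configuration; the endpoint and credentials are placeholders.

```python
import json
import requests

# Hypothetical sketch: register the tabular-io Iceberg sink
# (https://github.com/tabular-io/iceberg-kafka-connect) via the Kafka
# Connect REST API. Class and property names come from that project,
# not from an Aiven-provided connector; URL/credentials are placeholders.
config = {
    "name": "cdc-to-iceberg",
    "config": {
        "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
        "topics": "cdc.inventory.customers",
        "iceberg.tables": "analytics.customers",
        # Catalog settings are deployment-specific (REST, Hive, Glue, ...).
        "iceberg.catalog.type": "rest",
    },
}

resp = requests.post("https://my-connect:443/connectors", json=config,
                     auth=("avnadmin", "password"))
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```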

    8 votes

  3. As a developer / SRE,
    I want my Apache Kafka cluster to run across multiple regions and potentially even multiple clouds,
    so that I can ensure a highly available setup with near-zero RTO and RPO in a disaster-recovery scenario, i.e. a network, region, or provider failure.
    In addition, I want to benefit from the geo-distributed cluster setup and read/write from/to the geographically closest broker to optimise my network latency and cost.

    55 votes

  4. As an ops engineer,
    I want to be able to declare fine-grained ACLs,
    so that I avoid having to grant "admin" rights to users which only need "DeleteRecords" rights on specific topics.
    The client uses Kafka Streams, which needs specific rights (https://docs.confluent.io/platform/current/streams/developer-guide/security.html#required-acl-setting-for-secure-ak-clusters) that are not covered by Aiven's predefined rights.
    Currently, "admin" is too broad for such access (I don't want the user to be able to create topics).
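    For reference, the kind of grants the linked Confluent page describes can be expressed with open-source Kafka's ACL API; Aiven's predefined rights don't currently expose this granularity, which is the gap. A minimal sketch with confluent-kafka-python, using placeholder broker, principal, and resource names:

```python
from confluent_kafka.admin import (
    AdminClient, AclBinding, AclOperation, AclPermissionType,
    ResourcePatternType, ResourceType,
)

# Sketch: fine-grained grants for a Kafka Streams app instead of a
# blanket "admin" role. Names are placeholders; TLS/SASL settings are
# omitted for brevity.
admin = AdminClient({"bootstrap.servers": "my-kafka:9092"})
principal = "User:streams-app"

acls = [
    # Read input, write output.
    AclBinding(ResourceType.TOPIC, "input-topic", ResourcePatternType.LITERAL,
               principal, "*", AclOperation.READ, AclPermissionType.ALLOW),
    AclBinding(ResourceType.TOPIC, "output-topic", ResourcePatternType.LITERAL,
               principal, "*", AclOperation.WRITE, AclPermissionType.ALLOW),
    # Streams internal topics are prefixed with the application.id;
    # DELETE covers DeleteRecords without granting topic creation rights.
    AclBinding(ResourceType.TOPIC, "my-app-id", ResourcePatternType.PREFIXED,
               principal, "*", AclOperation.DELETE, AclPermissionType.ALLOW),
    AclBinding(ResourceType.GROUP, "my-app-id", ResourcePatternType.LITERAL,
               principal, "*", AclOperation.READ, AclPermissionType.ALLOW),
]

for future in admin.create_acls(acls).values():
    future.result()  # raises if the broker rejected the binding
```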

    8 votes

  5. As a developer,
    I want to be able to upload my custom code (JARs),
    so that I can use it to build sophisticated or non-standard use cases in Flink and Kafka Connect.
    In addition, I can keep custom implementations private.

    51 votes

  6. As an application owner,
    I want to be able to store data in blob storage, so that I can recover locally from an outage using the backups in blob storage and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.

    2 votes

  7. As a developer,
    I want error messages and logs to contain schema names and versions,
    so that I can quickly identify and troubleshoot issues related to specific schemas more efficiently.
    In addition, this improvement is very important when dealing with issues in referenced schemas because it provides more context in error messages and logs, making it easier to diagnose and resolve problems. This can significantly reduce the time spent on debugging and improve overall system maintainability.

    3 votes

    Pending Review · 0 comments · Karapace

  8. As a developer,
    I want to have an enforced compatibility check on all existing schemas when the compatibility level is set to a more restrictive one (or for any change),
    so that I can ensure all schemas comply with the new restrictive compatibility level and maintain consistency in the schema registry.
    In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry.
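    For context, the compatibility level is set per subject (or globally) through the schema-registry-compatible API that Karapace implements; today this changes the level without re-checking already-registered schemas against each other, which is what this idea asks for. A minimal sketch with a placeholder endpoint:

```python
import requests

# Sketch: tighten the compatibility level of one subject via the
# schema-registry-compatible API Karapace exposes. Endpoint and
# credentials are placeholders. Today only *future* registrations are
# checked; re-validating existing schemas is the requested behaviour.
registry = "https://my-karapace:443"
subject = "orders-value"

resp = requests.put(
    f"{registry}/config/{subject}",
    json={"compatibility": "FULL_TRANSITIVE"},
    auth=("avnadmin", "password"),
)
resp.raise_for_status()
print(resp.json())  # e.g. {"compatibility": "FULL_TRANSITIVE"}
```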

    3 votes

    Pending Review · 0 comments · Karapace

  9. As an application engineer,
    I want to develop a CSV connector where data is ingested from flat files (CSV) to create a stream of records that can be processed in Apache Kafka. It is similar to what Confluent provides here: https://docs.confluent.io/kafka-connectors/spooldir/current/connectors/csv_source_connector.html
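    For reference, this is roughly how the linked Confluent spooldir CSV source is configured; a sketch with placeholder paths and endpoint, using property names from the Confluent documentation (the connector is not in Aiven's supported list today, which is the point of the request):

```python
import requests

# Sketch: the Confluent spooldir CSV source the idea refers to,
# submitted through the Kafka Connect REST API. Paths, topic, and
# endpoint are placeholders; property names follow the spooldir docs.
config = {
    "name": "csv-source",
    "config": {
        "connector.class":
            "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "topic": "csv-records",
        "input.path": "/data/incoming",
        "finished.path": "/data/finished",
        "error.path": "/data/error",
        "input.file.pattern": ".*\\.csv",
        "csv.first.row.as.header": "true",
    },
}

requests.post("https://my-connect:443/connectors", json=config,
              auth=("avnadmin", "password")).raise_for_status()
```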

    4 votes

  10. As a Solutions Architect,
    I want consumer lag for Kafka consumers to be available out of the box,
    so that I can correctly monitor streaming applications without having to set up an external Prometheus. In addition, consumer lag is the key metric for the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric (a client-side workaround is sketched after the background notes).

    Background
    1. There is a consumer lag panel on the default metrics dashboard, but it does not work.
    2. I contacted support and found that…
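    Until lag is exposed out of the box, it can be computed client-side as (log end offset − committed offset) per partition; a minimal sketch with confluent-kafka-python and placeholder names:

```python
from confluent_kafka import Consumer, TopicPartition

# Sketch: per-partition consumer lag as (high watermark - committed
# offset). Broker, group id, topic, and partition count are
# placeholders; TLS/SASL settings are omitted for brevity.
consumer = Consumer({
    "bootstrap.servers": "my-kafka:9092",
    "group.id": "my-streaming-app",  # the group being monitored
})

partitions = [TopicPartition("orders", p) for p in range(3)]
for tp in consumer.committed(partitions, timeout=10):
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    committed = tp.offset if tp.offset >= 0 else low  # no commit yet
    print(f"{tp.topic}[{tp.partition}] lag={high - committed}")

consumer.close()
```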

    1 vote

  11. As a developer
    I want to use a REST API against my Kafka instance
    so that I can write simple scripts without using client libraries.

    There doesn't seem to be comprehensive API documentation for the endpoints and functionality supported by the Karapace REST API. The website says it's a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with any changes in the Confluent Kafka REST proxy, it's hard to trust. Some users may find it preferable to simply have documentation for Karapace's own endpoints.
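    For what it's worth, Karapace's REST proxy follows Confluent REST Proxy v2 conventions, so basic calls look like the sketch below (placeholder URL and credentials; not a statement of a documented, guaranteed surface, which is exactly what this idea asks to be pinned down):

```python
import requests

# Sketch: basic calls against Karapace's Confluent-compatible REST
# proxy. Service URL and credentials are placeholders.
proxy = "https://my-karapace-rest:443"
auth = ("avnadmin", "password")

# List topics.
print(requests.get(f"{proxy}/topics", auth=auth).json())

# Produce a JSON record (REST Proxy v2 content type).
resp = requests.post(
    f"{proxy}/topics/orders",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"id": 1, "status": "created"}}]},
    auth=auth,
)
resp.raise_for_status()
```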

    1 vote

    Roadmapped · 1 comment · Karapace

  12. Add support for exactly-once delivery via the Storage Write API in the Google BigQuery (GBQ) sink connector.

    1 vote

  13. As a data streaming architect,
    I want to be able to export records from Kafka to GCS and use values in the record to define the bucket or file name
    so that I can organize data by those values to make them easier to find and process.

    The use case is a multi-user/multi-tenant application where user info is a value in the record. We need to be able to organize the output in object storage by that value somehow.
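    To the best of my knowledge, the Aiven GCS sink's file.name.template currently supports only topic/partition/offset-style variables; the sketch below shows a config in the current style plus a hypothetical record-field variable of the kind this idea asks for:

```python
import requests

# Sketch of an Aiven GCS sink config. The {{topic}}/{{partition}}/
# {{start_offset}} variables reflect the connector's documented
# template; the "{{field:tenant_id}}" variable is HYPOTHETICAL and
# only illustrates the requested feature. Endpoint and bucket are
# placeholders, and gcs.credentials settings are omitted.
config = {
    "name": "gcs-sink",
    "config": {
        "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
        "topics": "events",
        "gcs.bucket.name": "my-bucket",
        # Supported today:
        "file.name.template": "{{topic}}-{{partition}}-{{start_offset}}",
        # Requested (hypothetical, does not exist):
        # "file.name.template": "{{field:tenant_id}}/{{topic}}-{{start_offset}}",
    },
}

requests.post("https://my-connect:443/connectors", json=config,
              auth=("avnadmin", "password")).raise_for_status()
```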

    3 votes

  14. As a developer,
    I want to use Aiven's S3 sink connector and have the ability to control the size of output files (keeping the size consistent even when traffic changes). In general the connector has no lag, and we want to flush the offset and write to file only when we have enough data. We use offset.flush.interval.ms for this, but when traffic increases, the amount of data arriving within the configured interval grows and can cause an OOM issue. In addition, when we pause the connector for a couple of minutes and accumulate a lag, it can also lead…
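    For context, the Aiven S3 sink already has a record-count-based rotation knob; the sketch below assumes file.max.records is the main size control available today, while byte-based rotation independent of traffic is what the idea asks for. Endpoint, bucket, and credentials are placeholders or omitted.

```python
import requests

# Sketch of an Aiven S3 sink config bounding output files by record
# count via file.max.records (assumed to be the main control available
# today; a byte-size-based setting would be the requested feature).
config = {
    "name": "s3-sink",
    "config": {
        "connector.class":
            "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
        "topics": "events",
        "aws.s3.bucket.name": "my-bucket",
        "aws.s3.region": "eu-west-1",
        # Roll a new file every 50k records, so a traffic burst does not
        # have to be buffered in memory for a whole flush interval.
        "file.max.records": "50000",
    },
}

requests.post("https://my-connect:443/connectors", json=config,
              auth=("avnadmin", "password")).raise_for_status()
```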

    5 votes

  15. As a developer / security engineer,
    I want to be able to authenticate my Apache Kafka connectors via mTLS,
    so that I can connect to my external services in a secure way.

    24 votes

  16. As an SRE engineer,

    I want to customize ACLs to allow Kafka consumer operations to carry on even while the write lock is triggered when disk space reaches the threshold limits of 95% or 97%. Given that Kafka consumers' offset commits are relatively small, this option would not be detrimental,

    so that even when disk space reaches critical levels, it does not immediately impact consumer-side operations.

    1 vote

  17. As a Kafka user
    I want to be able to have consumers fetch from the closest replica (KIP-392)
    so that I can reduce inter-AZ costs.
    In addition, this will also reduce the latency of consumer calls.
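    On the client side, KIP-392 only needs the consumer to advertise its rack; a minimal sketch with confluent-kafka-python and placeholder names (the broker-side replica selector is the service-level setting this idea asks Aiven to enable):

```python
from confluent_kafka import Consumer

# Sketch: a consumer opting into KIP-392 follower fetching by
# advertising its rack/AZ via client.rack. This only takes effect if
# the brokers run with replica.selector.class=
#   org.apache.kafka.common.replica.RackAwareReplicaSelector,
# which is the service-side switch the idea asks to be exposed.
consumer = Consumer({
    "bootstrap.servers": "my-kafka:9092",  # placeholder
    "group.id": "my-app",
    "client.rack": "eu-west-1a",  # set to the AZ this client runs in
})
consumer.subscribe(["events"])
msg = consumer.poll(1.0)  # fetches may now be served by a local replica
```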

    2 votes

  18. As a platform engineer
    I want to group multiple users based on their role (OAuth2/OIDC claim)
    so that I can reduce the number of required Kafka users and ACL entries that need to be managed.

    Currently, every user/identity connecting via OAuth2/OIDC has a 1:1 mapping to a Kafka user (the username is taken from the sub claim). This is cumbersome and leads to significant overhead if, for example, multiple identities/users with the same permissions want to access the Kafka service. Kafka users and ACLs need to be created for every single identity, even though they share…

    1 vote

  19. As a data engineer,
    I want Aiven Kafka Connect to offer the option of using the protocol buffer (protobuf) data format when serializing the events that it sends to a Kafka broker.
    In my specific case, I need this to be possible in a Debezium connector for PostgreSQL.
    Additionally, it would be good for the user to have the option of defining the protobuf schema used for serializing.
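    A sketch of what this might look like as connector config, assuming a schema-registry-backed protobuf converter. The converter class shown is Confluent's, used here purely as an assumption, since an equivalent being available on Aiven is precisely the request; hosts and credentials are placeholders.

```python
import requests

# Sketch: a Debezium PostgreSQL source serializing record values as
# protobuf through a schema-registry-backed converter. Whether/how the
# converter is available on Aiven is exactly what this idea requests.
config = {
    "name": "pg-cdc-protobuf",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "my-pg",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",
        "database.dbname": "shop",
        "topic.prefix": "shop",
        "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
        "value.converter.schema.registry.url": "https://my-karapace:443",
    },
}

requests.post("https://my-connect:443/connectors", json=config,
              auth=("avnadmin", "password")).raise_for_status()
```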

    5 votes

  20. As a developer,
    I want to make sure the schema is validated not just on the client but also on the broker side,
    so that I can make sure all messages in the topic correspond to the same schema and the topic does not contain any mixed schemas.
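    Today validation happens only in the serializer on the producer side; a minimal sketch of that client-side flow with confluent-kafka-python against Karapace, showing the layer the idea wants duplicated in the broker (endpoints are placeholders):

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

# Sketch: client-side schema validation as it works today. Only this
# producer checks the schema; a producer that skips the serializer can
# still write arbitrary bytes to the topic, which is the gap that
# broker-side validation would close.
registry = SchemaRegistryClient({"url": "https://my-karapace:443"})
schema_str = (
    '{"type": "record", "name": "Order",'
    ' "fields": [{"name": "id", "type": "long"}]}'
)
serializer = AvroSerializer(registry, schema_str)

producer = Producer({"bootstrap.servers": "my-kafka:9092"})
value = serializer({"id": 42}, SerializationContext("orders", MessageField.VALUE))
producer.produce("orders", value=value)
producer.flush()
```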

    18 votes
