-
Monitoring consumer lag for Kafka not available out of the box
As a Solutions Architect
I want to have consumer lag for Kafka consumers available out of the box
so that I can correctly monitor streaming applications without having to set up an external Prometheus. In addition, consumer lag is the key metric for the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric.
Background
1. There is a consumer lag chart on the default metrics dashboard, but it does not work.
2. I contacted support and found that…
1 vote
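Until this is available out of the box, lag can be measured externally. Below is a minimal sketch, assuming the kafka-python client; bootstrap servers, group id, and topic are placeholders. Lag per partition is the log-end offset minus the group's committed offset.

```python
# Minimal consumer-lag sketch using kafka-python (assumed client).
# Lag = log-end offset - committed offset, per partition.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",  # placeholder
    group_id="my-streaming-app",         # placeholder group to inspect
)

topic = "events"  # placeholder
partitions = [TopicPartition(topic, p) for p in consumer.partitions_for_topic(topic)]
end_offsets = consumer.end_offsets(partitions)  # current log-end offsets

for tp in partitions:
    committed = consumer.committed(tp) or 0  # None if the group never committed
    print(f"{tp.topic}[{tp.partition}] lag = {end_offsets[tp] - committed}")
```
-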
Exactly-Once support in Storage Write API for our GBQ sink connector
Add support for exactly-once delivery in the Storage Write API for the GBQ sink connector.
1 vote -
Untyped metrics from the Prometheus endpoint
Using the Prometheus endpoint of our Kafka service, we need the Kafka metrics to carry a type.
As shown below, the metrics currently have no type; this should be set at the endpoint to ease the use of these metrics.
Example:
# TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped
# TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped
1 vote
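For context, a hedged sketch of what a typed exposition of the second metric could look like, using the prometheus_client library; the metric value is made up:

```python
# Hypothetical sketch: declaring the metric as a Gauge makes the scrape
# output carry "# TYPE ... gauge" instead of "untyped".
from prometheus_client import Gauge, generate_latest

offset_commit_rate = Gauge(
    "kafka_server_group_coordinator_metrics_offset_commit_rate",
    "Offset commit rate from the group coordinator",  # placeholder help text
)
offset_commit_rate.set(0.42)  # placeholder value

print(generate_latest().decode())
# Output includes:
# # TYPE kafka_server_group_coordinator_metrics_offset_commit_rate gauge
# kafka_server_group_coordinator_metrics_offset_commit_rate 0.42
```
-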
Support for both allow and block lists in Kafka service
As a managed Kafka administrator
I want to have the ability to block-list a set of IP addresses I believe are suspicious from my Kafka service, so that I can prevent my service from being disrupted by unexpected traffic.
In addition, I would like to keep the current allow-list to be able to allow known IPs, and a way to resolve a conflict between the allow and block lists where the block list takes priority (see the sketch below).
1 vote
At the moment we recommend using our current networking whitelisting capabilities.
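A minimal sketch of the conflict rule described above, using only the standard library; the networks are placeholders. An address matched by both lists is rejected, i.e. the block list wins:

```python
# Block-list-takes-priority rule: reject if blocked, else require allow.
import ipaddress

ALLOW = [ipaddress.ip_network("10.0.0.0/8")]   # placeholder allow-list
BLOCK = [ipaddress.ip_network("10.1.2.0/24")]  # placeholder suspicious range

def is_permitted(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in BLOCK):
        return False                            # block list takes priority
    return any(ip in net for net in ALLOW)

print(is_permitted("10.1.2.3"))  # False: blocked despite matching the allow-list
print(is_permitted("10.9.9.9"))  # True: allowed and not blocked
```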
-
Cluster leader balancing CPU vs Disk - can we choose?
As an OPS Engineer
I want to be able to balance the cluster based on CPU usage rather than disk usage. The current algorithm focuses on disk usage, which is not optimal for our application.
Can we have an option to apply partition rebalancing based on CPU usage?
1 vote -
Ability to scale Kafka cluster without upgrade
As a Kafka Administrator
I want to have the ability to scale a Kafka cluster without an upgrade, which increases the time the scaling takes.
In production this can become critical and lead to downtime. For example, the last two times the scaling took 12 hours.
1 vote -
kafka_connect_connector_metrics availability over Prometheus
Our customer Jago wants to monitor the status of connectors and tasks but currently can't find relevant metrics to do so. They want to be able to monitor the status of connectors and tasks on a dashboard and also get notified whenever a connector has not been running for X minutes.
Jago has a connector running but cannot find the metrics for kafka.connect:type=connector-metrics,connector=*.
The specific metric they are looking for is the one related to the status of a connector. For example, in the customer's current self-managed Kafka Connect, they have the following metrics. This is convenient because they…
1 vote
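Until the connector-metrics MBeans are exposed over Prometheus, one workaround is polling the Kafka Connect REST API for connector and task state; a minimal sketch, with the Connect URL and connector name as placeholders:

```python
# Poll GET /connectors/<name>/status and inspect the reported states.
import requests

CONNECT_URL = "http://localhost:8083"  # placeholder Connect REST endpoint

def connector_states(name: str) -> dict:
    status = requests.get(f"{CONNECT_URL}/connectors/{name}/status").json()
    return {
        "connector": status["connector"]["state"],
        "tasks": [t["state"] for t in status["tasks"]],
    }

states = connector_states("my-sink")  # placeholder connector name
print(states)  # e.g. {'connector': 'RUNNING', 'tasks': ['RUNNING', 'FAILED']}
```
-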
Aiven's S3 sink connector - Configure `offset.flush.interval.ms` at the connector level
As a developer that uses Aiven's S3 sink connector,
I want to be able to set the offset.flush.interval.ms
only for my specific connector from the connector's configuration
so that I can avoid configuring it at the cluster level (for all connectors).
1 vote
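For illustration: offset.flush.interval.ms is today a worker-level property, and the request is for a per-connector override, roughly like the hypothetical config below. The override key's placement is an assumption, not a supported configuration; names are placeholders.

```python
# Hypothetical per-connector override of a worker-level property.
connector_config = {
    "name": "my-s3-sink",  # placeholder
    "config": {
        # class name per Aiven's open-source S3 connector; verify for your version
        "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
        "topics": "my-topic",                # placeholder
        "offset.flush.interval.ms": "5000",  # hypothetical connector-level override
    },
}
```
-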
A no-code solution for Flink to unlock the usage for non-technical users
As a non-technical user
I want to be able to aggregate and join different streams of data
without the need for developers.
1 vote
This function will largely be replaced by ChatGPT or other LLMs, which can generate clear code and instructions, making a visual builder unnecessary.
-
Event log should show when a connector is paused and resumed.
As a developer
I want to know when a connector is paused or resumed
so that I can have timestamps and know if anybody is doing what they are not supposed to do.
1 vote -
Decompression transform for Connectors
As an application developer
I want to compress my Kafka messages, but be able to decompress them using a transform before sinking them into a destination,
so that I can save on storage costs.
In addition, I'd like to use ZSTD, but more common libraries might be enough. A sketch of the idea follows below.
Note: Confluent has something similar:
https://docs.confluent.io/platform/current/connect/transforms/gzipdecompress.html
1 vote
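A minimal sketch of what such a transform would conceptually do to each record value, assuming the zstandard package:

```python
# Compress on the producer side, decompress before sinking.
import zstandard as zstd

value = b'{"event": "page_view", "user": 42}'  # placeholder message value

compressed = zstd.ZstdCompressor().compress(value)         # producer side
restored = zstd.ZstdDecompressor().decompress(compressed)  # transform side

assert restored == value
print(f"{len(value)} bytes raw, {len(compressed)} bytes compressed")
```
-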
Support S3 as a source and sink for Flink
As a developer I want to be able to read and write data to my S3 object storage, in order to easily integrate Flink into my existing data architecture. Using Flink to read data from S3, transform it, and then write to another S3 location allows easy consolidation and data quality management in a common reference data architecture.
1 vote
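A hedged sketch of the S3-to-S3 pattern with PyFlink's filesystem connector (bucket paths and the schema are placeholders; the S3 filesystem plugin must be on the Flink classpath):

```python
# Read JSON from one S3 path, filter, and write JSON to another.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE src (id STRING, amount DOUBLE) WITH (
        'connector' = 'filesystem',
        'path' = 's3://my-bucket/in/',   -- placeholder
        'format' = 'json'
    )
""")
t_env.execute_sql("""
    CREATE TABLE dst (id STRING, amount DOUBLE) WITH (
        'connector' = 'filesystem',
        'path' = 's3://my-bucket/out/',  -- placeholder
        'format' = 'json'
    )
""")
t_env.execute_sql("INSERT INTO dst SELECT id, amount FROM src WHERE amount > 0")
```
-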
Automated dynamic quota configuration
As a Cloud platform engineer, I need to have an automated way to set up and update quota configurations on a cluster, taking into account changes in resource consumption patterns amongst producers and consumers.
1 vote -
Field Level Encryption Support for Aiven Products
TL;DR:
As an Aiven customer
I want to be able to encrypt any form of sensitive data (PII or PCI) so that I can manage sensitive data in a legally compliant and privacy-respecting manner.
Detailed description of the proposal:
Hi Aiven!
Hope all is well with you. I have a feature/service suggestion which I believe will make your existing product portfolio even stronger!
It’s a thing I call “Aiven for Privacy FTW!” and it’s basically a standalone “field-level-encryption” service used for managing PII and PCI fields/properties in the event payload in a legally (e.g. GDPR and/or CCPA)…
1 vote
I will be closing this out, but the idea is valid. We are looking at building a proxy service for Kafka, and this could be part of the roadmap for that component, as you have described. We suggest that those who want to use encryption on Kafka do this on the producer and consumer sides, as the data would then be encrypted end to end.
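A minimal sketch of the producer/consumer-side approach suggested above, using the cryptography package's Fernet; the payload is a placeholder and key handling is deliberately naive (a real setup would fetch keys from a KMS):

```python
# Encrypt a PII field before producing, decrypt after consuming.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: retrieved from a KMS
f = Fernet(key)

event = {"order_id": 1001, "email": "jane@example.com"}       # placeholder
event["email"] = f.encrypt(event["email"].encode()).decode()  # producer side
wire = json.dumps(event)                                      # goes to Kafka

received = json.loads(wire)                                   # consumer side
received["email"] = f.decrypt(received["email"].encode()).decode()
assert received["email"] == "jane@example.com"
```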
-
io.debezium.transforms.partitions.PartitionRouting
It would be great if the
io.debezium.transforms.partitions.PartitionRouting
SMT were available for use when configuring a Kafka Debezium source connector on the Aiven platform.
1 vote
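For reference, a sketch of what enabling the SMT would look like in a source connector config if it were available; the option names follow the Debezium documentation, while the field and partition count are placeholders:

```python
# Hypothetical Debezium source connector config fragment.
smt_config = {
    "transforms": "partitionRouting",
    "transforms.partitionRouting.type": "io.debezium.transforms.partitions.PartitionRouting",
    "transforms.partitionRouting.partition.payload.fields": "change.name",  # placeholder
    "transforms.partitionRouting.partition.topic.num": "3",                 # placeholder
}
```
-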
Expose MirrorMaker 2 replication.policy.separator property
As a devops developer
I want to change the replication policy separator
so that I can use our existing topic separators without conflicting with MM2.
In addition, I find it extremely inconvenient to have to change our existing naming scheme because we cannot configure the replication.policy.separator in MM2 (see the illustration below).
1 vote
We can see this as a useful configuration to expose. At the same time, it will need to gather a bit more interest to be taken into development.
Thanks!
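A small illustration of why the separator matters: MM2's DefaultReplicationPolicy names remote topics as <source-alias><separator><topic>, with "." as the default separator, so source topics that already contain dots become ambiguous:

```python
# How DefaultReplicationPolicy derives remote topic names.
def remote_topic(source_alias: str, topic: str, separator: str = ".") -> str:
    return f"{source_alias}{separator}{topic}"

print(remote_topic("source", "orders.eu"))       # 'source.orders.eu' (ambiguous)
print(remote_topic("source", "orders.eu", "_"))  # 'source_orders.eu' (distinct)
```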
-
Support for schemaless JSON messages in the BigQuery sink connector
As a developer
I want to be able to publish schemaless JSON messages to BigQuery and have the BigQuery schema be updated to reflect those changes
so that I can evolve my message schema without breaking my pipeline.
1 vote
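For context, "schemaless" here means plain JSON without the Connect schema envelope, i.e. schemas.enable=false on the JSON converter. A sketch of the relevant settings; the connector class shown is the common open-source BigQuery sink and may differ on a managed platform:

```python
# Converter settings that make the sink receive schemaless JSON.
sink_config = {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "topics": "my-topic",  # placeholder
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",  # schemaless JSON values
}
```
-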
Rockset Sink connector for Kafka Connect
As a data engineer,
I want to write data to Rockset,
so that I can run my analytical workloads.
1 vote -
Apache NiFi Source and Sink Kafka connector
As a developer
I want to use Apache NiFi Kafka Connector
so that I can move data in and out of Apache NiFi to orchestrate data flow.
1 vote