-
A no-code solution for Flink to unlock usage for non-technical users
As a non-technical user
I want to be able to aggregate and join different streams of data
without the need for developers
1 vote
This functionality will largely be replaced by ChatGPT or other LLMs, which can generate clear code and instructions, making a visual builder unnecessary.
-
Event log should show when a connector is paused and resumed.
As a developer
I want to know when a connector is paused or resumed
so that I can have timestamps and know if anybody is doing what they are not supposed to do.
1 vote
-
Decompression transform for Connectors
As an application developer
I want to compress my Kafka messages, but be able to decompress them using a transform before sinking them into a destination
so that I can save on storage costs
In addition, I'd like to use ZSTD, but more common libraries might be enough.
Note: Confluent has something similar:
https://docs.confluent.io/platform/current/connect/transforms/gzipdecompress.html
1 vote
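Not an existing Aiven transform, but a minimal sketch of what such an SMT could look like for GZIP; class and package names are illustrative, and a ZSTD variant would swap in a zstd input stream:

```java
// Hypothetical SMT sketch: decompresses a GZIP-compressed byte[] record
// value before it reaches the sink. Not an existing Aiven artifact.
package com.example.kafka.transforms;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.zip.GZIPInputStream;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;
import org.apache.kafka.connect.transforms.Transformation;

public class GzipDecompressValue<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public R apply(R record) {
        if (!(record.value() instanceof byte[])) {
            return record; // pass through anything that is not raw bytes
        }
        try (GZIPInputStream gzip = new GZIPInputStream(
                     new ByteArrayInputStream((byte[]) record.value()));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[8192];
            for (int n; (n = gzip.read(buffer)) != -1; ) {
                out.write(buffer, 0, n);
            }
            // Re-emit the record with the decompressed payload as its value.
            return record.newRecord(record.topic(), record.kafkaPartition(),
                    record.keySchema(), record.key(),
                    Schema.OPTIONAL_BYTES_SCHEMA, out.toByteArray(),
                    record.timestamp());
        } catch (IOException e) {
            throw new DataException("Failed to decompress record value", e);
        }
    }

    @Override
    public ConfigDef config() { return new ConfigDef(); }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() {}
}
```
-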
Add Datadog integration to Flink
As a data engineer
I want to integrate my existing Datadog subscription with Flink
so that I can store and monitor all metrics across my stack in a single location.
In addition, this functionality is already available on other services in Aiven.
3 votes
-
Move Karapace updates out of maintenance updates into IaC definitions
As a customer
I want to be able to select Karapace version updates in Terraform code instead of the maintenance updates GUI
so that I can control when Karapace updates are applied from my IaC definition.
In addition, Karapace updates can introduce service interruptions for users because all consumer instances are terminated. Moving them out of maintenance updates means fewer user interruptions on the often more important maintenance updates.
4 votes
-
Include custom Kafka topic tags as Prometheus metric labels
The Aiven platform allows custom key/value tags to be added to resources such as Kafka topics. It would be useful to have these exposed as additional labels on metrics so that alerts can be triggered based on this metadata.
3 votes
-
Ease the setup of Kafka Connectors when using internal Aiven services
As a developer,
I want to have an easy setup of my Kafka connectors when they are using internal Aiven services such as Postgres or OpenSearch
so that I can save time, avoid mistakes, and leverage the strength of the platform.
2 votes
-
Support S3 as a source and sink for Flink
As a developer, I want to be able to read data from and write data to my S3 object storage, in order to integrate Flink simply into my existing data architecture. Using Flink to read data from S3, transform it, and then write to another S3 location allows easy consolidation and data quality management in a common reference data architecture.
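For illustration, a minimal sketch of this pattern using Flink's Table API and the filesystem connector (bucket names, schema, and formats are hypothetical; the cluster also needs the S3 filesystem plugin and credentials configured):

```java
// Sketch: read raw JSON from one S3 prefix, filter out invalid rows, and
// consolidate the result as Parquet in another S3 location.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class S3ReadTransformWrite {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Source: raw events landing in one S3 prefix (hypothetical bucket).
        env.executeSql(
                "CREATE TABLE raw_events (user_id STRING, amount DOUBLE) WITH ("
              + " 'connector' = 'filesystem',"
              + " 'path' = 's3://example-bucket/raw/',"
              + " 'format' = 'json')");

        // Sink: consolidated, quality-checked data in another S3 location.
        env.executeSql(
                "CREATE TABLE clean_events (user_id STRING, amount DOUBLE) WITH ("
              + " 'connector' = 'filesystem',"
              + " 'path' = 's3://example-bucket/clean/',"
              + " 'format' = 'parquet')");

        // Transform: drop invalid rows on the way through.
        env.executeSql(
                "INSERT INTO clean_events "
              + "SELECT user_id, amount FROM raw_events WHERE amount >= 0");
    }
}
```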
1 vote
-
Improve support for Debezium SQL Server use cases
As an architect, I want to bring business logic out of the database and into a decoupled stream processing / event driven architecture framework. With SQL Server, I want to stream changes to Apache Kafka, using Debezium. This must support use cases where sensitivity classifications in SQL Server are used/required for things like PII.
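The gap here is the sensitivity-classification handling rather than basic connectivity, but for context, a sketch of the underlying Debezium SQL Server pipeline created through the Kafka Connect REST API (hostnames, credentials, and table names are hypothetical; property names follow Debezium 2.x):

```java
// Sketch: create a Debezium SQL Server source connector that streams
// change events from a table into Kafka, via the Connect REST API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateDebeziumSqlServerConnector {
    public static void main(String[] args) throws Exception {
        String config = """
            {
              "name": "sqlserver-orders",
              "config": {
                "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
                "database.hostname": "mssql.example.com",
                "database.port": "1433",
                "database.user": "cdc_user",
                "database.password": "********",
                "database.names": "sales",
                "topic.prefix": "sqlserver",
                "table.include.list": "dbo.orders",
                "schema.history.internal.kafka.bootstrap.servers": "kafka.example.com:9092",
                "schema.history.internal.kafka.topic": "schema-history.sales"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://connect.example.com/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```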
2 votes
-
Add external Delta Lake connector for Apache Flink
As a developer, I need to be able to read from and write to Databricks so that I can complement my existing data architecture.
2 votes
-
Auto-Scaling
As a developer, I need Flink to be able to automatically increase compute resources up to a self-defined threshold, so that significant increases in traffic are processed seamlessly and performance is maintained. The increase in traffic could be either a permanently higher workload or a temporary, pre-defined period; scaling would be triggered by a customer request, with subsequent additional billing impacts.
2 votes
-
Kafka rate limits and quotas configuration via Terraform
As a Cloud platform engineer, I need to give developers the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.
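Terraform support here would presumably wrap Kafka's client-quota admin API; a minimal sketch of that underlying call using the Java AdminClient (broker address, client ID, and byte rates are hypothetical):

```java
// Sketch: set produce/consume byte-rate quotas for one client ID.
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class SetClientQuota {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092");

        try (Admin admin = Admin.create(props)) {
            // Quotas are keyed on an entity; here, a specific client ID.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                    Map.of(ClientQuotaEntity.CLIENT_ID, "orders-service"));

            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                    new ClientQuotaAlteration.Op("producer_byte_rate", 1_048_576.0),   // 1 MiB/s in
                    new ClientQuotaAlteration.Op("consumer_byte_rate", 2_097_152.0))); // 2 MiB/s out

            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}
```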
4 votes
-
Automated dynamic quota configuration
As a Cloud platform engineer, I need an automated way to set up and update quota configurations on a cluster, taking into account changes in resource consumption patterns amongst producers and consumers.
1 vote
-
Visualization of client IDs that interact with a Kafka cluster
As a Cloud platform engineer, I need a view of Kafka cluster resource consumption across all client IDs. Such information is needed to configure quotas and to manage them.
2 votes
-
Field Level Encryption Support for Aiven Products
TL;DR:
As an Aiven customer
I want to be able to encrypt any form of sensitive data (PII or PCI) so that I can manage sensitive data in a legally compliant and user-privacy-respecting manner.
Detailed description of the proposal:
Hi Aiven!
Hope all is well with you. I have a feature/service suggestion which I believe will make your existing product portfolio even stronger!
It’s a thing I call “Aiven for Privacy FTW!” and it’s basically a standalone “field-level-encryption” service used for managing PII and PCI fields/properties in the event payload in a legally (e.g. GDPR and/or CCPA)…
1 vote
I will be closing this out, but the idea is valid. We are looking at building a proxy service for Kafka, and this could be part of the roadmap for that component, as you have described. We suggest that those who want to use encryption on Kafka do this on the producer and consumer sides, so that the data is encrypted from end to end.
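A minimal sketch of the suggested producer-side approach: encrypting a single PII field with AES-GCM before the record is produced. Key handling is reduced to an in-memory key for brevity; a real setup would fetch and rotate keys via a KMS, and the field and payload shown are hypothetical.

```java
// Sketch: encrypt one sensitive field so only that field is opaque to
// anyone without the key; the rest of the payload stays readable.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldEncryption {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Returns base64(iv || ciphertext) so the consumer can decrypt.
    static String encryptField(String plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12]; // 96-bit nonce recommended for GCM
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(
                ByteBuffer.allocate(iv.length + ciphertext.length)
                        .put(iv).put(ciphertext).array());
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey key = gen.generateKey(); // in practice, fetched from a KMS

        // Only the PII field is encrypted before the record is produced.
        String email = encryptField("jane.doe@example.com", key);
        System.out.println("{\"user_id\": 42, \"email\": \"" + email + "\"}");
    }
}
```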
-
Configure CPU threshold for alert mails in Kafka
As a Kafka user, I want to configure the threshold used to send CPU warning emails. Right now we are getting emails because the CPU exceeds 50%, which is irrelevant. We know we can configure the emails to go to another mail address, but in that case we would lose all other technical mails as well.
It would also be good for us if we could turn off that single alert.
2 votes
-
MirrorMaker 2 connectivity for Kafka Brokers with private or internal hostnames
If an external Kafka service has brokers with hostnames that are not publicly resolvable, for example, where hostnames are registered in a corporate DNS server which is not accessible from outside, it is currently not possible for MM2 to successfully connect.
The workaround to use IP addresses is not sufficient, since after bootstrap the broker hostnames will be returned and subsequent connections will go there.
We will also likely see issues with TLS handshakes when brokers present certificates belonging to hostnames that the Aiven side does not understand.
The only current workaround is for the configuration of the external Kafka…
2 votes
-
Azure Data Lake Gen2 (ADLS) Kafka connector
Kafka Connector to allow for streaming data to Azure Data Lake Gen2 as a sink.
5 votes
-
Flink HTTP API sink
As a developer I would like to push the output of a Flink operation to an HTTP API sink.
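Pending a native connector, a minimal sketch of what a custom HTTP sink could look like on top of Flink's SinkFunction (the endpoint URL is hypothetical; production code would batch, retry, and authenticate):

```java
// Sketch: each record from a Flink stream is POSTed as JSON to an HTTP API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class HttpApiSink extends RichSinkFunction<String> {
    private transient HttpClient client;

    @Override
    public void open(Configuration parameters) {
        client = HttpClient.newHttpClient(); // created per task, not serialized
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/events")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(value))
                .build();
        client.send(request, HttpResponse.BodyHandlers.discarding());
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("{\"event\":\"signup\"}", "{\"event\":\"login\"}")
           .addSink(new HttpApiSink());
        env.execute("http-sink-demo");
    }
}
```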
3 votes
-
Schema references support for AVRO schema in Karapace
As a data engineer or developer
I want to be able to use schema references in AVRO schemas in Karapace
so that I can define and reuse complex data structures or types within other schemas.
10 votes
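For context, a sketch of how such a registration could look against the Confluent-compatible REST API that Karapace implements; the references array follows the Confluent Schema Registry format, and the base URL, subjects, and schemas are hypothetical:

```java
// Sketch: register a shared AVRO record under its own subject, then
// register a second schema that reuses it via a schema reference instead
// of inlining the record definition.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterAvroWithReference {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1) The shared Address type, registered under subject "address-value".
        String address = """
            {"schema": "{\\"type\\": \\"record\\", \\"name\\": \\"Address\\", \\"fields\\": [{\\"name\\": \\"city\\", \\"type\\": \\"string\\"}]}"}""";
        post(client, "https://karapace.example.com/subjects/address-value/versions", address);

        // 2) A Customer schema that refers to Address by name instead of
        //    repeating the record definition.
        String customer = """
            {"schema": "{\\"type\\": \\"record\\", \\"name\\": \\"Customer\\", \\"fields\\": [{\\"name\\": \\"home\\", \\"type\\": \\"Address\\"}]}",
             "references": [{"name": "Address", "subject": "address-value", "version": 1}]}""";
        post(client, "https://karapace.example.com/subjects/customer-value/versions", customer);
    }

    static void post(HttpClient client, String url, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(client.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```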