Monitoring consumer lag for Kafka is not available out of the box
As a Solutions Architect
I want to have consumer lag for kafka consumers available out of the box
so that I can correctly monitor streaming applications without having to set up an external Prometheus. Consumer lag is the key metric for monitoring the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric.

Background:
1. There is a consumer lag panel on the default metrics dashboard, but it does not work.
2. I contacted support and found that…

1 vote
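In the meantime, lag can be computed by hand: it is the gap between each partition's latest offset and the group's last committed offset. A minimal sketch with kafka-python, where the broker address, group id, and topic name are placeholders:

```python
# Minimal consumer-lag check; broker, group, and topic are placeholders.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="my-kafka.example.com:9092",  # placeholder broker
    group_id="my-streaming-app",                    # placeholder group
    enable_auto_commit=False,
)

partitions = [TopicPartition("events", p)
              for p in consumer.partitions_for_topic("events")]
end_offsets = consumer.end_offsets(partitions)  # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp) or 0     # group's last committed offset
    print(f"{tp.topic}[{tp.partition}] lag={end_offsets[tp] - committed}")
```

-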
Exactly-once support in the Storage Write API for our GBQ sink connector
Add support for exactly-once delivery via the Storage Write API in the GBQ sink connector.
1 vote -
Kafka tiered storage with external S3 bucket
As a developer
I want to have the ability to use our own S3 bucket for Kafka tiered storage
so that I can access the data directly from S3 and query some of it for debugging (without streaming all the data through Kafka).

3 votes
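For reference, Apache Kafka's tiered storage (KIP-405) is enabled per topic. A sketch of what that could look like with kafka-python, assuming the broker already has remote log storage enabled and pointed at the external bucket; how Aiven would expose the bucket choice is the open part of this request:

```python
# Sketch: create a topic that offloads closed segments to remote storage.
# Assumes broker-side remote log storage (KIP-405) is already configured.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="my-kafka.example.com:9092")  # placeholder

admin.create_topics([
    NewTopic(
        name="clickstream",  # illustrative topic
        num_partitions=6,
        replication_factor=3,
        topic_configs={
            "remote.storage.enable": "true",  # offload segments to the bucket
            "local.retention.ms": "3600000",  # keep only 1 hour on broker disk
            "retention.ms": "-1",             # retain indefinitely in S3
        },
    )
])
```

-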
Support for both allow and block lists in Kafka service
As a managed Kafka administrator
I want to have the ability to block-list IP addresses I believe are suspicious from my Kafka service, so that I can prevent my service from being disrupted by unexpected traffic.
In addition, I would like to keep the current allow-list for admitting known IPs, plus a way to resolve conflicts between the allow and block lists where the block list takes priority (see the sketch below).

1 vote

At the moment we recommend using our current network allow-listing capabilities.
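The requested precedence rule is easy to pin down. A minimal sketch, with illustrative networks:

```python
# Sketch of the requested rule: an IP is admitted only if it matches the
# allow-list and does NOT match the block-list (block takes priority).
from ipaddress import ip_address, ip_network

ALLOW = [ip_network("10.0.0.0/8"), ip_network("203.0.113.0/24")]
BLOCK = [ip_network("203.0.113.66/32")]  # suspicious host inside an allowed range

def is_admitted(ip: str) -> bool:
    addr = ip_address(ip)
    if any(addr in net for net in BLOCK):  # block-list wins on conflict
        return False
    return any(addr in net for net in ALLOW)

assert is_admitted("203.0.113.10")      # allowed
assert not is_admitted("203.0.113.66")  # blocked despite the allow-list match
```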
-
Add Datadog integration to Flink
As a data engineer
I want to integrate my existing Datadog subscription with Flink
so that I can store and monitor all metrics across my stack in a single location.
In addition, this functionality is already available on other Aiven services.

3 votes
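For context, open-source Flink ships a Datadog metrics reporter. A PyFlink sketch of wiring it up when the job brings up its own cluster; on a session cluster the same keys belong in flink-conf.yaml, and the API key is a placeholder:

```python
# Sketch: enable Flink's bundled Datadog HTTP reporter via job config.
# Effective when this job starts its own (mini) cluster; a managed
# service would set the same keys cluster-side.
from pyflink.common import Configuration
from pyflink.datastream import StreamExecutionEnvironment

conf = Configuration()
conf.set_string("metrics.reporter.dghttp.factory.class",
                "org.apache.flink.metrics.datadog.DatadogHttpReporterFactory")
conf.set_string("metrics.reporter.dghttp.apikey", "<DD_API_KEY>")  # placeholder
conf.set_string("metrics.reporter.dghttp.tags", "env:prod,service:flink")

env = StreamExecutionEnvironment.get_execution_environment(conf)
```

-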
ClickHouse Driver for Apache Kafka Connect JDBC Sink connector
As a developer,
I want to have ClickHouse driver support in JDBC sink connector,
so that I can write data from Apache Kafka to ClickHouse for further processing and analysis.

7 votes
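A sketch of what registering such a connector might look like once a ClickHouse driver ships with the JDBC sink; the hostnames, credentials, and Connect endpoint are placeholders:

```python
# Sketch: register a JDBC sink pointed at ClickHouse via the Kafka
# Connect REST API. URL format and hosts are illustrative.
import requests

connector = {
    "name": "clickhouse-sink",
    "config": {
        "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
        "connection.url": "jdbc:clickhouse://my-clickhouse.example.com:8443/analytics",
        "connection.user": "avnadmin",
        "connection.password": "***",
        "topics": "events",
        "insert.mode": "insert",
        "auto.create": "true",
    },
}

resp = requests.post("http://my-connect.example.com:8083/connectors", json=connector)
resp.raise_for_status()
```

-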
A no-code solution for Flink to unlock usage for non-technical users
As a non-technical user
I want to be able to aggregate and join different streams of data
without needing developers.

1 vote

This function will largely be replaced by ChatGPT or other LLMs, which can generate clear code and instructions, making a visual builder unnecessary.
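For illustration, the kind of Flink SQL an LLM could generate for such a user; here `orders` is declared over Flink's built-in datagen connector so the example is self-contained, whereas in practice it would be declared over a Kafka topic:

```python
# Sketch: an aggregation a no-code user would ask for, expressed as
# Flink SQL over a generated stream.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE orders (
        customer_id INT,
        amount      DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5'
    )
""")

t_env.execute_sql("""
    SELECT customer_id, COUNT(*) AS orders_cnt, SUM(amount) AS revenue
    FROM orders
    GROUP BY customer_id
""").print()
```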
-
Add Oracle JDBC support for Aiven for Kafka Connect
As a developer,
I want to be able to connect to my Oracle database from Aiven for Apache Kafka Connect,
so that I can read and write data from/to Oracle DB and Apache Kafka to enable my use cases.

9 votes
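By analogy with the ClickHouse sketch above, a hypothetical source configuration reading an Oracle table into Kafka, assuming an Oracle driver were bundled; the URL, table, and column names are illustrative:

```python
# Sketch: JDBC source config pulling new rows from an Oracle table,
# keyed on an incrementing column. All values are placeholders.
oracle_source = {
    "name": "oracle-source",
    "config": {
        "connector.class": "io.aiven.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:oracle:thin:@my-oracle.example.com:1521/ORCLPDB1",
        "connection.user": "app",
        "connection.password": "***",
        "table.whitelist": "ORDERS",
        "mode": "incrementing",
        "incrementing.column.name": "ORDER_ID",
        "topic.prefix": "oracle-",
    },
}
```

-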
Customer is interested in having PyFlink supported with Flink
As a developer, I want to use the PyFlink library with Aiven for Apache Flink, so that I can use it directly in my project.
7 votes
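A minimal self-managed PyFlink example of the kind of code this request would let run on Aiven:

```python
# Tiny PyFlink job: build a table from in-memory rows and print it.
from pyflink.table import EnvironmentSettings, TableEnvironment
from pyflink.table.expressions import col

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

table = t_env.from_elements([(1, "a"), (2, "b")], ["id", "label"])
table.select(col("id"), col("label")).execute().print()
```

-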
Flink HTTP API sink
As a developer, I would like to push the output of a Flink operation to an HTTP API sink.
3 votes
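Pending a native sink, a common workaround is to POST records from a map function. A PyFlink sketch with a placeholder endpoint; it has no batching or retries, so it is a stand-in for the requested sink rather than a replacement:

```python
# Sketch: push each record to an HTTP endpoint from a map function.
import requests
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

def post_record(record):
    # endpoint is a placeholder; real use needs batching, retries, auth
    requests.post("https://api.example.com/ingest",
                  json={"id": record[0], "value": record[1]}, timeout=5)
    return record

env.from_collection([(1, "a"), (2, "b")]).map(post_record).print()
env.execute("http-sink-sketch")
```

-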
Field Level Encryption Support for Aiven Products
TL;DR:
As an Aiven customer
I want to be able to encrypt any form of sensitive data (PII or PCI), so that I can manage sensitive data in a legally compliant and privacy-respecting manner.

Detailed description of the proposal:
Hi Aiven!
Hope all is well with you. I have a feature/service suggestion which I believe will make your existing product portfolio even stronger!

It’s a thing I call “Aiven for Privacy FTW!”, and it’s basically a standalone field-level-encryption service for managing PII and PCI fields/properties in the event payload in a legally (e.g. GDPR and/or CCPA)…

1 vote

I will be closing this out, but the idea is valid. We are looking at building a proxy service for Kafka, and this could be part of the roadmap for that component, as you have described. For those who want to use encryption on Kafka, we suggest doing this on the producer and consumer sides, so that the data is encrypted end to end.
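A sketch of the producer-side field-level encryption that response suggests, using the cryptography package's Fernet; the event fields are illustrative, and key management (rotation, per-subject keys for GDPR erasure) is the hard part a managed service would own:

```python
# Sketch: encrypt only the sensitive field before producing, so it stays
# encrypted end to end; the consumer decrypts with the same key.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a KMS, not made inline
f = Fernet(key)

event = {"order_id": 42, "email": "jane@example.com"}         # email is PII
event["email"] = f.encrypt(event["email"].encode()).decode()  # encrypt the field only

payload = json.dumps(event).encode()  # what the producer would send to Kafka
```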
-
Support S3 as a source and sink for Flink
As a developer, I want to be able to read and write data to my S3 object storage, in order to easily integrate Flink into my existing data architecture. Using Flink to read data from S3, transform it, and then write it to another S3 location allows easy consolidation and data-quality management in a common reference data architecture.

1 vote
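A sketch with Flink's filesystem connector, which already speaks s3:// paths once the S3 filesystem plugin and credentials are configured on the cluster; bucket paths and schema are illustrative:

```python
# Sketch: S3-to-S3 pipeline via Flink SQL's filesystem connector.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE raw_events (user_id STRING, amount DOUBLE) WITH (
        'connector' = 'filesystem',
        'path'      = 's3://my-bucket/raw/',
        'format'    = 'json'
    )
""")

t_env.execute_sql("""
    CREATE TABLE clean_events (user_id STRING, amount DOUBLE) WITH (
        'connector' = 'filesystem',
        'path'      = 's3://my-bucket/clean/',
        'format'    = 'parquet'
    )
""")

t_env.execute_sql(
    "INSERT INTO clean_events SELECT user_id, amount FROM raw_events WHERE amount > 0"
)
```

-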
Sending Apache Kafka metrics to Datadog
As a Software Engineer at Wex
I want to send additional metrics to Datadog
so that I can send important metrics from Aiven's cluster, such as:
IsrShrinksPerSec
IsrExpandsPerSec
ActiveControllerCount
OfflinePartitionsCount
TotalTimeMs
PurgatorySize
RequestsPerSec
Network bytes sent/received
BytesInPerSec/BytesOutPerSec

According to Datadog's documentation, these metrics are considered highly significant:
https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/#kafka-emitted-metrics

2 votes
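As a stopgap, any of these can be forwarded as custom metrics with the official datadog Python package; the metric name, sample value, and tags below are placeholders, with the value coming from wherever the broker stats are exposed:

```python
# Sketch: push one broker metric to Datadog as a custom metric.
import time
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")  # placeholders

api.Metric.send(
    metric="kafka.replication.isr_shrinks_per_sec",  # mirrors IsrShrinksPerSec
    points=[(time.time(), 0.0)],                     # placeholder sample
    tags=["service:my-kafka", "provider:aiven"],
)
```

-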
Rockset Sink connector for Kafka Connect
As a data engineer,
I want to write data to Rockset,
so that I can run my analytical workloads.
1 vote -
Apache NiFi Source and Sink Kafka connector
As a developer
I want to use an Apache NiFi Kafka connector
so that I can move data in and out of Apache NiFi to orchestrate data flows.
1 vote