Kafka tiered storage with external S3 bucket
As a developer
I want to have the ability to use our own S3 bucket for storing Kafka tiered storage
so that I can access the data from S3 and query some of it for debugging, without streaming all the data through Kafka (see the sketch below).
3 votes
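If such a bucket were exposed, inspecting the tiered segments for debugging could be done with standard S3 tooling. A minimal sketch, assuming a hypothetical bucket name and prefix layout (neither is defined by Aiven here):

```python
# Minimal sketch: list tiered-storage objects in a customer-owned S3 bucket.
# Bucket name and prefix are hypothetical placeholders, not an Aiven layout.
import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(
    Bucket="my-kafka-tiered-storage",   # assumption: customer-owned bucket
    Prefix="topics/clickstream-0/",     # assumption: per topic-partition prefix
)

for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```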
Aiven API support for Kafka Connect Java clients
As an Aiven developer user
I want to use the Java code generated from the Aiven API to communicate with an Aiven-managed Kafka Connect cluster
so that I can build microservices on the Java platform to manage the various connectors on the cluster.
In addition, this gives more security for Aiven users and data, while the standard REST API supports only BasicAuth.
2 votes
Schema references support for AVRO schema in Karapace
As a data engineer / developer,
I want to be able to use schema references in AVRO schemas in Karapace,
so that I can define complex data structures or types once and reuse them within other schemas (see the sketch below).
10 votes
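Karapace exposes a Confluent-compatible Schema Registry REST API, so the requested behaviour would presumably mirror how references are registered there. A hedged sketch, assuming hypothetical subject names, endpoint and credentials, and assuming Karapace accepted the `references` field for AVRO the way Confluent Schema Registry does:

```python
# Sketch: register an AVRO schema that references another subject.
# The "references" field is what this idea asks Karapace to support for AVRO.
import json
import requests

REGISTRY = "https://my-karapace.example.com"  # assumption: Karapace endpoint
AUTH = ("avnadmin", "password")               # assumption: basic auth user

# 1) Register the referenced type under its own subject.
address_schema = {
    "type": "record",
    "name": "Address",
    "namespace": "com.example",
    "fields": [{"name": "street", "type": "string"}],
}
requests.post(
    f"{REGISTRY}/subjects/com.example.Address/versions",
    auth=AUTH,
    json={"schemaType": "AVRO", "schema": json.dumps(address_schema)},
).raise_for_status()

# 2) Register a schema that reuses the type via a schema reference.
customer_schema = {
    "type": "record",
    "name": "Customer",
    "namespace": "com.example",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "address", "type": "com.example.Address"},  # resolved via the reference
    ],
}
resp = requests.post(
    f"{REGISTRY}/subjects/customer-value/versions",
    auth=AUTH,
    json={
        "schemaType": "AVRO",
        "schema": json.dumps(customer_schema),
        "references": [
            {"name": "com.example.Address", "subject": "com.example.Address", "version": 1}
        ],
    },
)
resp.raise_for_status()
print(resp.json())
```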
Karapace OAuth2 support
As a developer / security engineer,
I want to be able to authenticate (e.g. via Azure AD) and authorize my service users using OAuth2,
so that I can manage all users in a central place and don't need to worry about missing something during on- and off-boarding.
11 votes
Support for both allow and block lists in Kafka service
As a managed Kafka administrator
I want to have the ability to block-list a set of IP addresses I believe are suspicious on my Kafka service, so that I can prevent my service from being disrupted by unexpected traffic.
In addition, I would like to keep the current allow list so I can permit known IPs, and a way to resolve conflicts between the allow and block lists where the block list takes priority (see the sketch below).
1 vote
At the moment we recommend using our current networking whitelisting capabilities.
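The precedence rule the idea asks for is straightforward to state in code. A minimal sketch of the requested semantics; the data structures and function are illustrative, not an Aiven API:

```python
# Sketch of the requested allow/block precedence: the block list always wins.
import ipaddress

def is_client_allowed(client_ip: str, allow: list[str], block: list[str]) -> bool:
    ip = ipaddress.ip_address(client_ip)
    # Block list takes priority over the allow list.
    if any(ip in ipaddress.ip_network(cidr) for cidr in block):
        return False
    # Fall back to the existing allow-list behaviour.
    return any(ip in ipaddress.ip_network(cidr) for cidr in allow)

print(is_client_allowed("10.0.1.7", allow=["10.0.0.0/16"], block=["10.0.1.0/24"]))  # False
```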
-
Move Karapace updates out of maintenance updates into IaC definitions
As a customer
I want to be able to select Karapace version updates in Terraform code instead of in the maintenance updates GUI
so that I can control when Karapace updates are applied from my IaC definition.
In addition, Karapace updates can introduce service interruptions for users because all consumer instances are terminated; moving them out of maintenance updates means fewer user interruptions during the often more important maintenance updates.
4 votes
Configurable maximum validity for service user certificates
As a developer / security engineer,
I want to be able to define the lifetime of the certificates (e.g. 30-90 days) used to authenticate my service users,
so that I can have certificate rotation policies in place to ensure compliance and security best practices (see the sketch below).
11 votes
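Until lifetimes are configurable, a rotation policy can at least be monitored from the client side by inspecting the certificate's expiry. A minimal sketch using the `cryptography` package; the file path and the 30-day threshold are illustrative assumptions:

```python
# Sketch: warn when a service user certificate is close to expiry so it can
# be rotated in line with a (hypothetical) 30-day policy window.
from datetime import datetime
from cryptography import x509

with open("service-user.cert.pem", "rb") as f:   # assumption: local cert path
    cert = x509.load_pem_x509_certificate(f.read())

remaining = cert.not_valid_after - datetime.utcnow()
if remaining.days < 30:
    print(f"Rotate soon: certificate expires in {remaining.days} days")
else:
    print(f"OK: {remaining.days} days of validity left")
```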
Temporarily disable service user on Aiven for Apache Kafka
As a developer,
I want to temporarily disable a service user,
so that I can test whether the service user is still in use by any of the applications, or temporarily ban abusive clients.
12 votes
Capture and expose "last used" date for service users
As a developer / SRE,
I want to know when service users were last used,
so that I can tell whether a service user is still in use or has been abandoned and could be removed.
12 votes
Cluster leader balancing CPU vs Disk - can we choose?
As an OPS Engineer
I want to be able to balance the cluster based on CPU usage rather than disk usage. The current algorithm focuses on disk usage, which is not optimal for our application.
Can we have an option to apply partition rebalancing based on CPU usage?
1 vote
Azure Data Lake Gen2 (ADLS) Kafka connector
Kafka Connector to allow for streaming data to Azure Data Lake Gen2 as a sink.
6 votes
Support Databricks driver for Apache Kafka Connect JDBC sink connector
As a developer,
I want to have support for the Databricks driver in the Apache Kafka Connect JDBC sink connector,
so that I can write data from Aiven for Apache Kafka to Databricks Spark for further processing, analysis and consumption (see the sketch below).
11 votes
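If the driver were bundled, the sink would presumably be configured like any other JDBC sink through the Kafka Connect REST API. A hedged sketch: the connector class is Aiven's open-source JDBC connector, while the Connect URL, credentials and the Databricks JDBC connection string are assumptions made purely for illustration:

```python
# Sketch: create a JDBC sink connector pointing at Databricks.
# The connection.url format and token are illustrative assumptions; today the
# Databricks driver is not bundled, which is what this idea asks for.
import requests

CONNECT_URL = "https://my-kafka-connect.example.com"   # assumption
AUTH = ("avnadmin", "password")                        # assumption

connector = {
    "name": "databricks-jdbc-sink",
    "config": {
        "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
        "topics": "clickstream",
        "connection.url": (
            "jdbc:databricks://dbc-xxxx.cloud.databricks.com:443/default;"
            "transportMode=http;httpPath=/sql/1.0/warehouses/xxxx"
        ),
        "connection.user": "token",
        "connection.password": "<databricks-personal-access-token>",
        "auto.create": "true",
        "insert.mode": "insert",
    },
}

resp = requests.post(f"{CONNECT_URL}/connectors", auth=AUTH, json=connector)
resp.raise_for_status()
print(resp.json())
```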
Audit logging as self-service option
As a developer,
I want to enable audit logging for my service,
so that I can keep track of breadcrumbs left by all the changes made to a service to ensure compliance.
11 votes
Support for Apache Kafka Connect to be run standalone against a 3rd party Apache Kafka service
As a developer,
I want to run a standalone Apache Kafka Connect cluster against a 3rd-party Apache Kafka service,
so that I can benefit from the managed Apache Kafka Connect service and read/write data from/to an Apache Kafka service running outside Aiven.
11 votes
Support for RocksDB in Flink
Support RocksDB as a persistent data store for Flink
As an engineer, I need to be able to run Flink jobs with larger state in order to meet my data processing requirements.
2 votes
Enforcing naming convention for Apache Kafka topics
As an SRE / Apache Kafka operator / developer,
I want to make sure all my topics adhere to a naming convention,
so that I can ensure consistency of my topics.
In addition, I can use the naming convention to better identify, group, locate and categorise my topics (see the sketch below).
10 votes
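As a stop-gap, the convention can at least be checked at topic-creation time in whatever tooling provisions topics. A minimal sketch; the `<domain>.<dataset>.v<version>` pattern is only an example convention, not one Aiven prescribes:

```python
# Sketch: validate topic names against an example convention before creating
# them, e.g. "payments.transactions.v1" (<domain>.<dataset>.v<version>).
import re

TOPIC_NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]*\.[a-z][a-z0-9-]*\.v\d+$")

def validate_topic_name(name: str) -> None:
    if not TOPIC_NAME_PATTERN.fullmatch(name):
        raise ValueError(
            f"Topic '{name}' does not match <domain>.<dataset>.v<version>"
        )

validate_topic_name("payments.transactions.v1")   # ok
validate_topic_name("MyRandomTopic")              # raises ValueError
```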
Connectivity check for MirrorMaker with external Kafka integration
Debugging failed connectivity between Aiven for Apache MirrorMaker and an external Kafka configured using an integration endpoint is very difficult today. No errors are surfaced in the integration endpoint configuration screen itself; we must wait for the replication flow to attempt to start and then dig cryptic errors out of the MM2 logs.
Some kinds of errors (e.g. failure to build SSL keystores) are not even surfaced in MM2 logs and are only visible to Aiven operators.
Please consider adding a basic connectivity check to allow for quicker troubleshooting and iteration. This check should ensure that the network path between Aiven…
2 votes
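Until such a check exists in the product, a rough pre-flight test can be run from any host with the same network path: open a TCP connection to the external cluster's bootstrap address and attempt a TLS handshake. A minimal sketch using only the standard library; the host, port and CA path are placeholders:

```python
# Sketch: basic reachability + TLS handshake check against an external Kafka
# bootstrap server, the kind of pre-flight test this idea asks Aiven to run.
import socket
import ssl

HOST, PORT = "external-kafka.example.com", 9093   # assumption: bootstrap address
CA_FILE = "ca.pem"                                # assumption: cluster CA

ctx = ssl.create_default_context(cafile=CA_FILE)

try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TCP + TLS OK:", tls.version(), tls.getpeercert()["subject"])
except (OSError, ssl.SSLError) as exc:
    print("Connectivity check failed:", exc)
```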
kafka_connect_connector_metrics availability over Prometheus
Our customer Jago wants to monitor the status of connectors and tasks but currently can't find relevant metrics to do so. They want to be able to monitor the status of connectors and tasks on a dashboard and also get notified whenever a connector has not been running for X minutes.
Jago has a connector running but cannot find the metrics for kafka.connect:type=connector-metrics,connector=*.
The specific metric they are looking for is the one related to the status of a connector. For example, in the customer's current self-managed Kafka Connect they have the following metrics. This is convenient because they…
1 vote
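Until the JMX connector-metrics are exposed, connector and task status can be scraped from the Kafka Connect REST API and re-exported to Prometheus from a small sidecar. A hedged sketch using `requests` and `prometheus_client`; the Connect URL, credentials and gauge name are assumptions:

```python
# Sketch: poll connector status from the Kafka Connect REST API and expose it
# as a Prometheus gauge (1 = RUNNING, 0 = anything else) for alerting.
import time
import requests
from prometheus_client import Gauge, start_http_server

CONNECT_URL = "https://my-kafka-connect.example.com"  # assumption
AUTH = ("avnadmin", "password")                       # assumption

connector_up = Gauge(
    "kafka_connect_connector_running",
    "1 if the connector state is RUNNING, else 0",
    ["connector"],
)

def scrape() -> None:
    for name in requests.get(f"{CONNECT_URL}/connectors", auth=AUTH).json():
        status = requests.get(
            f"{CONNECT_URL}/connectors/{name}/status", auth=AUTH
        ).json()
        state = status.get("connector", {}).get("state", "UNKNOWN")
        connector_up.labels(connector=name).set(1 if state == "RUNNING" else 0)

if __name__ == "__main__":
    start_http_server(8000)   # Prometheus scrapes this sidecar on :8000
    while True:
        scrape()
        time.sleep(30)
```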
Add Datadog integration to Flink
As a data engineer
I want to integrate my existing Datadog subscription with Flink
so that I can store and monitor all metrics across my stack in a single location.
In addition, this functionality is already available for other Aiven services.
3 votes
Kafka rate limits and quotas configuration via Terraform
As a cloud platform engineer, I need to provide developers with the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.
4 votes