CSV Kafka connector
As an application engineer,
I want a CSV connector that ingests data from flat files (CSV) and creates a stream of records that can be processed in Apache Kafka. It would be similar to what Confluent provides here: https://docs.confluent.io/kafka-connectors/spooldir/current/connectors/csv_source_connector.html
4 votes
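For reference, registering such a connector through the Kafka Connect REST API might look roughly like the sketch below; it assumes the SpoolDir-style configuration keys from the linked Confluent page, and the Connect URL, paths, and topic name are placeholders.

```python
import requests

# Minimal sketch, assuming the SpoolDir-style configuration keys from the
# linked Confluent page; the Connect URL, paths, and topic are placeholders.
connect_url = "http://localhost:8083/connectors"

connector = {
    "name": "csv-source",
    "config": {
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "input.path": "/data/csv/incoming",      # directory watched for new CSV files
        "finished.path": "/data/csv/finished",   # successfully processed files move here
        "error.path": "/data/csv/error",         # unparsable files move here
        "input.file.pattern": ".*\\.csv",        # which file names to pick up
        "topic": "csv-records",                  # target Kafka topic
        "csv.first.row.as.header": "true",       # take field names from the header row
        "schema.generation.enabled": "true",     # infer key/value schemas from the data
    },
}

resp = requests.post(connect_url, json=connector, timeout=10)
resp.raise_for_status()
```
-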
Move Karapace updates out of maintenance updates into IaC definitions
As a customer
I want to be able to select Karapace version updates in Terraform code instead of the maintenance updates GUI
so that I can control when Karapace updates are applied from my IaC definition.
In addition, Karapace updates can introduce service interruptions for users because all consumer instances are terminated. Moving them out of maintenance updates means fewer user interruptions during the often more important maintenance updates.
4 votes
-
SAP Source connector for Aiven for Apache Kafka Connect
As a developer,
I want to be able to pull data from SAP systems,
so that I can build my data pipelines.
4 votes
-
Schema subjects view with massive amounts of subjects
As an Avro schema administrator
I want to be able to quickly find an Avro schema subject in schema registry
so that I can view the schema, compare schema versions and change compatibility level.
In addition, we have 5-10k schema subjects in some of our projects, and there is no paging in the schema subject view: all subjects are loaded, which makes the view very slow. Possible solutions might be to add paging, or simply to limit the number of subjects listed.
4 votes
-
Kafka GCS connector - give ability to set offset.flush.interval.ms per connector
We are using Kafka connectors for GCS: https://github.com/Aiven-Open/cloud-storage-connectors-for-apache-kafka
We set offset.flush.interval.ms to some value (https://kafka.apache.org/documentation/#connectconfigs). However, some topics carry more data than others, so we'd like this value to be configurable per topic: https://kafka.apache.org/documentation/#topicconfigs
For example, we'd like 5 minutes for all topics, but 1 minute for some specific ones. Can this option be added to the connectors? Thanks.
3 votes
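For illustration, offset.flush.interval.ms is today a worker-level setting; the sketch below shows what a per-connector override could look like, with the override key itself being hypothetical (it does not exist in Kafka Connect at the moment), and the URL, connector class, and names being placeholders.

```python
import requests

# offset.flush.interval.ms is currently a Connect *worker* setting applied to
# every connector on the worker. The key below sketches the requested
# per-connector override and is HYPOTHETICAL: it does not exist today.
connect_url = "http://localhost:8083/connectors/gcs-sink-fast/config"

config = {
    "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
    "topics": "high-volume-topic",
    "gcs.bucket.name": "my-bucket",
    # Hypothetical per-connector override: flush offsets every minute here,
    # while other connectors keep the worker-wide default.
    "offset.flush.interval.ms": "60000",
}

requests.put(connect_url, json=config, timeout=10).raise_for_status()
```
-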
Enhance error messages and logs with schema names and versions
As a developer,
I want error messages and logs to contain schema names and versions,
so that I can quickly identify and troubleshoot issues related to specific schemas more efficiently.
In addition, this improvement is very important when dealing with issues in referenced schemas because it provides more context in error messages and logs, making it easier to diagnose and resolve problems. This can significantly reduce the time spent on debugging and improve overall system maintainability.
3 votes
-
Enforce schema compatibility check on level change
As a developer,
I want to have an enforced compatibility check on all existing schemas when the compatibility level is set to a more restrictive one (or for any change),
so that I can ensure all schemas comply with the new restrictive compatibility level and maintain consistency in the schema registry.
In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry.
3 votes
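A rough sketch of what such an enforcement pass could do with the existing Schema Registry/Karapace endpoints, assuming a placeholder registry URL and target level:

```python
import requests

# Sketch only: re-validates every subject after tightening its level.
# The registry URL and target level are placeholders; the endpoints used
# (/subjects, /config, /compatibility) are the standard Schema Registry /
# Karapace API.
registry = "http://localhost:8081"
new_level = "FULL"

for subject in requests.get(f"{registry}/subjects", timeout=10).json():
    # Apply the stricter level to the subject.
    requests.put(
        f"{registry}/config/{subject}", json={"compatibility": new_level}, timeout=10
    ).raise_for_status()

    versions = requests.get(
        f"{registry}/subjects/{subject}/versions", timeout=10
    ).json()
    latest = requests.get(
        f"{registry}/subjects/{subject}/versions/{versions[-1]}", timeout=10
    ).json()

    # Check the latest schema against each earlier version under the
    # subject's (now stricter) configured level.
    for v in versions[:-1]:
        check = requests.post(
            f"{registry}/compatibility/subjects/{subject}/versions/{v}",
            json={"schema": latest["schema"]},
            timeout=10,
        ).json()
        if not check.get("is_compatible", False):
            print(f"{subject}: latest schema is not {new_level}-compatible with v{v}")
```
-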
Kafka Connect GCS Sink: Support using field values to define bucket name or file name prefix
As a data streaming architect
I want to be able to export records from Kafka to GCS and use values in the record to define the bucket or file name
so that I can organize data by those values, making them easier to find and process. The use case is a multi-user/multi-tenant application where user info is a value in the record; we need some way to organize the output in object storage by that value.
3 votes
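For illustration, the sketch below shows a hypothetical record-field placeholder along the lines of this request, layered on top of the connector's template-based file naming; the field-based syntax does not exist today, and the URL and names are placeholders.

```python
import requests

# Sketch only. "file.name.template" with {{topic}}/{{partition}}/{{start_offset}}
# reflects the template variables the Aiven cloud storage connectors support
# today; the {{field:...}} placeholder below is HYPOTHETICAL and illustrates
# the record-value-based naming requested above.
connect_url = "http://localhost:8083/connectors/gcs-sink/config"

config = {
    "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
    "topics": "events",
    "gcs.bucket.name": "tenant-data",
    # Hypothetical: prefix each file with a field taken from the record value
    # (e.g. tenant_id), so each tenant's data lands under its own prefix.
    "file.name.template": "{{field:name=tenant_id}}/{{topic}}-{{partition}}-{{start_offset}}",
}

requests.put(connect_url, json=config, timeout=10).raise_for_status()
```
-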
Kafka tiered storage with external S3 bucket
As a developer
I want to have the ability to use our own S3 bucket for storing Kafka tiered storage
so that I can access the data from S3 and query it for debugging (without streaming all the data through Kafka).
3 votes
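For illustration, with a customer-owned bucket a debugging session could start by listing segment objects directly; a minimal sketch using boto3, with the bucket name and prefix layout purely assumed:

```python
import boto3

# Hypothetical bucket name and prefix; the actual object layout of tiered
# segments is an implementation detail of the tiered-storage plugin.
s3 = boto3.client("s3")
bucket = "my-kafka-tiered-storage"
prefix = "topics/clickstream-0/"

# List the uploaded segment objects for one topic-partition.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# A specific segment could then be pulled down for offline inspection:
# s3.download_file(bucket, prefix + "00000000000000000000.log", "/tmp/segment.log")
```
-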
Add Datadog integration to Flink
As a data engineer
I want to integrate my existing Datadog subscription with Flink
so that I can store and monitor all metrics across my stack in a single location.
In addition, this functionality is already available for other Aiven services.
3 votes
-
Include custom Kafka topic tags as Prometheus metric labels
The Aiven platform allows custom key/value tags to be added to resources such as Kafka topics. It would be useful to have these exposed as additional labels on metrics so that alerts can be triggered based on this metadata.
3 votes
-
Kafka rate limits and quotas configuration via Terraform
As a Cloud platform engineer, I need to give developers the capability to set quotas via Terraform when they set up applications that will produce to or consume from a Kafka cluster.
3 votes
-
Flink HTTP API sink
As a developer I would like to push the output of a Flink operation to an HTTP API sink.
3 votes
-
Kafka consumer lag predictor in DataDog integration
As a Kafka operator
I want to understand consumer lag
so that I can know the potential impact on customer experience and latency, and whether I need to size up my cluster. Currently, Aiven provides a consumer lag predictor through Prometheus, which is really useful. However, for someone who wants all their metrics in Datadog, it would be nice to have this data available through Datadog as well. Currently, the options are to keep a separate dashboard using Prometheus/Grafana or to deploy a Datadog agent somewhere that scrapes our Prometheus endpoint and sends the data to Datadog.
2 votes
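Until a native integration exists, the workaround described above can be scripted; a rough sketch that scrapes a Prometheus endpoint and forwards the lag samples to Datadog's v1 series API, where the Prometheus URL and metric name are assumptions about the local setup:

```python
import time
import requests

# Assumed local details: the Prometheus URL exposed by the service and the
# exact lag metric name both depend on the deployment.
PROM_URL = "http://localhost:9273/metrics"
METRIC = "kafka_consumer_group_lag"
DD_API_KEY = "..."  # Datadog API key

# Crude scrape of the Prometheus text format: keep lines for our metric
# (comment lines start with '#', so the prefix match skips them).
samples = []
for line in requests.get(PROM_URL, timeout=10).text.splitlines():
    if line.startswith(METRIC):
        _, value = line.rsplit(" ", 1)
        samples.append(float(value))

now = int(time.time())
payload = {
    "series": [
        {
            "metric": "kafka.consumer.lag",  # name to show in Datadog
            "points": [[now, value]],
            "type": "gauge",
        }
        for value in samples
    ]
}

# Datadog v1 series intake endpoint; authentication via the DD-API-KEY header.
requests.post(
    "https://api.datadoghq.com/api/v1/series",
    headers={"DD-API-KEY": DD_API_KEY},
    json=payload,
    timeout=10,
).raise_for_status()
```
-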
Create a Backup to Azure Blob Storage for Local Region Restore - DR
As an application owner,
I want to be able to store backups in Azure Blob Storage, so that I can recover locally from an outage using those backups and also restore accidentally dropped topics. This backup would potentially include hundreds of topics.
2 votes
-
Provide documentation for Karapace REST API
As a developer
I want to use a REST API against my Kafka instance
so that I can write simple scripts without using client libraries. There doesn't seem to be comprehensive API documentation for the endpoints and functionality supported by the Karapace REST API. The website says it's a drop-in replacement for the Kafka REST API proxy, but unless that comes with a guarantee that it will stay up to date with any changes in the Confluent Kafka REST API proxy, it's hard to rely on. Some users may find it preferable to simply have documentation for Karapace's own endpoints.
2 votes
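For reference, Karapace's REST proxy follows the Confluent REST Proxy v2 conventions, so basic scripting looks roughly like the sketch below (proxy URL and topic are placeholders); exactly which endpoints are covered is what the documentation request above would settle.

```python
import requests

# Placeholder proxy URL and topic; the endpoints follow the Confluent
# REST Proxy v2 conventions that Karapace implements.
proxy = "http://localhost:8082"

# List topics.
print(requests.get(f"{proxy}/topics", timeout=10).json())

# Produce two JSON records; the v2 API expects this content type.
resp = requests.post(
    f"{proxy}/topics/my-topic",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json={"records": [{"value": {"id": 1}}, {"key": "k", "value": {"id": 2}}]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # partitions/offsets assigned to the produced records
```
-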
Add OpenTelemetry API and SDK to enable traceability in Debezium connector
As an IT architect
I want complete traceability within my microservice mesh, where two microservices communicate via the outbox pattern through a Debezium connector. To achieve this, the Debezium connector requires some of the OpenTelemetry APIs to be on the Kafka Connect classpath.
I want this feature so that I can see the complete chain of interactions for a specific request, observe what and where time is spent inside a particular microservice, and find possible bottlenecks.
2 votes
-
Aiven API support for Kafka Connect Java clients
As an Aiven developer
I want to use the Java code generated from the Aiven API to communicate with an Aiven-managed Kafka Connect cluster
so that I can build microservices on the Java platform to manage various connectors on the cluster.
In addition, this gives more security for Aiven users and data, since the standard REST API supports only BasicAuth.
2 votes
-
Support for RocksDB in Flink
Support RocksDB as a persistent state backend for Flink.
As an engineer, I need to be able to run Flink jobs with larger state in order to meet my data processing requirements.
2 votes
-
Connectivity check for MirrorMaker with external Kafka integration
Debugging failed connectivity between Aiven for Apache MirrorMaker and an external Kafka cluster configured via an integration endpoint is very difficult today. No errors are surfaced in the integration endpoint configuration screen itself; we must wait for the replication flow to attempt to start and then dig cryptic errors out of the MM2 logs.
Some kinds of errors (e.g. failure to build SSL keystores) are not even surfaced in MM2 logs and are only visible to Aiven operators.
Please consider adding a basic connectivity check to allow for quicker troubleshooting and iteration. This check should ensure that the network path between Aiven…
2 votes
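As an illustration of what even a minimal check could cover, the sketch below attempts a TCP connection and a TLS handshake against a broker address; the host, port, and CA file are placeholders.

```python
import socket
import ssl

# Placeholder broker address and CA bundle for the external Kafka cluster.
host, port = "external-kafka.example.com", 9093
cafile = "/path/to/ca.pem"

ctx = ssl.create_default_context(cafile=cafile)
try:
    # Check TCP reachability first, then the TLS handshake; these are two
    # failure modes that are hard to tell apart from MM2 logs alone.
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(f"OK: {tls.version()} established with {host}:{port}")
except ssl.SSLError as exc:
    print(f"FAIL: TCP connect succeeded but TLS handshake failed: {exc}")
except OSError as exc:
    print(f"FAIL: cannot reach {host}:{port}: {exc}")
```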