- Support for "Apache Iceberg" format while sinking CDC
As a developer / DevOps engineer,
I want to be able to sink CDC data into the Apache Iceberg format,
so that I can analyze data using the time travel feature of AWS Athena.
In addition, we may find a way for the current "Aiven - Amazon AWS S3 Sink" connector to produce "Apache Iceberg" in addition to the "Parquet" format, or we may provide a dedicated connector like the one from this repository: https://github.com/tabular-io/iceberg-kafka-connect
Yours faithfully,
LCDP
9 votes
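A minimal sketch of registering such a dedicated sink through the Kafka Connect REST API, assuming the connector from the linked repository is installed. The connector class and the "iceberg.*" keys follow that repository's README and may differ by version; the Connect URL, topic, bucket, catalog, and table names are placeholders.

```python
# Hypothetical sketch: registering the tabular-io Iceberg sink via the
# Kafka Connect REST API. Config keys follow the linked repository's README
# and may differ by version; all names and URLs are placeholders.
import requests

config = {
    "connector.class": "io.tabular.iceberg.connect.IcebergSinkConnector",
    "topics": "cdc.inventory.orders",
    "iceberg.tables": "analytics.orders",
    # AWS Glue catalog so Athena (and its time travel queries) can see the table
    "iceberg.catalog.catalog-impl": "org.apache.iceberg.aws.glue.GlueCatalog",
    "iceberg.catalog.warehouse": "s3://my-bucket/warehouse",
}

resp = requests.put(
    "https://my-connect:8443/connectors/iceberg-orders-sink/config",
    json=config,
)
resp.raise_for_status()
```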
- Add Oracle JDBC support for Aiven for Kafka Connect
As a developer,
I want to be able to connect to my Oracle database from Aiven for Apache Kafka Connect,
so that I can read and write data from/to Oracle DB and/or Apache Kafka to enable my use cases.
9 votes
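A sketch of what an Oracle-backed sink could look like once an Oracle JDBC driver ships with Aiven for Apache Kafka Connect, using Aiven's existing JDBC connector class; host, service name, credentials, and topic are placeholders.

```python
# Sketch, assuming an Oracle JDBC driver is available on the Connect workers.
# The jdbc:oracle:thin URL format is standard; everything else is a placeholder.
import requests

config = {
    "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1",
    "connection.user": "app_user",
    "connection.password": "app_password",
    "topics": "orders",
    "auto.create": "true",  # create the target table from the record schema
}

requests.put(
    "https://my-connect:8443/connectors/oracle-orders-sink/config",
    json=config,
).raise_for_status()
```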
- Kafka fine-grained ACLs
As an ops engineer,
I want to be able to declare fine-grained ACLs
so that I avoid having to grant "admin" rights to users who only need "DeleteRecords" rights on specific topics.
The client uses Kafka Streams, which needs specific rights (https://docs.confluent.io/platform/current/streams/developer-guide/security.html#required-acl-setting-for-secure-ak-clusters) that are not covered by Aiven's predefined rights.
Currently, "admin" is too broad for such access (I don't want the user to be able to create topics).
8 votes
- Schema references support for AVRO schemas in Karapace
As a data engineer or developer,
I want to be able to use schema references in AVRO schemas in Karapace,
so that I can define and reuse complex data structures or types within other schemas.
8 votes
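For context, a sketch of the Confluent-compatible "references" payload this request asks Karapace to accept: a Customer schema reusing an Address schema registered under another subject. Registry URL and subject names are placeholders.

```python
# Sketch of the Schema Registry "references" API shape Karapace would support.
import json
import requests

REGISTRY = "https://karapace:8081"

address = {
    "type": "record", "name": "Address", "namespace": "com.example",
    "fields": [{"name": "street", "type": "string"}],
}
requests.post(
    f"{REGISTRY}/subjects/address-value/versions",
    json={"schema": json.dumps(address), "schemaType": "AVRO"},
).raise_for_status()

customer = {
    "type": "record", "name": "Customer", "namespace": "com.example",
    "fields": [{"name": "name", "type": "string"},
               {"name": "address", "type": "com.example.Address"}],
}
requests.post(
    f"{REGISTRY}/subjects/customer-value/versions",
    json={"schema": json.dumps(customer), "schemaType": "AVRO",
          "references": [{"name": "com.example.Address",
                          "subject": "address-value", "version": 1}]},
).raise_for_status()
```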
- Support Zookeeper-less (KRaft) mode in Aiven for Apache Kafka
As an SRE,
I want to operate Apache Kafka without Zookeeper,
so that I can have more resources available for Apache Kafka itself.
In addition, it will allow faster up- and down-scaling of my cluster, and it will support more partitions per broker.
8 votes
- Support for Protobuf serialization of Events (and Keys) in Aiven Kafka Connect
As a data engineer
I want Aiven Kafka Connect to offer the option to use the Protocol Buffers data format when serializing the events that it sends to a Kafka broker.
In my specific case, I need this to be possible in a Debezium connector for PostgreSQL.
Additionally, it would be good to have the option, as a user, to define the Protobuf schema used for serialization myself.
7 votes
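A sketch of a Debezium PostgreSQL source configured for Protobuf serialization via Confluent's ProtobufConverter, which is the sort of setup this request asks Aiven to offer. Hostnames and credentials are placeholders; the converter class would need to be available on the Connect workers, and "topic.prefix" is the Debezium 2.x name (1.x used "database.server.name").

```python
# Sketch: Debezium PostgreSQL CDC with Protobuf keys and values.
import requests

config = {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "pg-host",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "appdb",
    "topic.prefix": "appdb",
    "key.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "key.converter.schema.registry.url": "https://karapace:8081",
    "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "value.converter.schema.registry.url": "https://karapace:8081",
}

requests.put(
    "https://my-connect:8443/connectors/pg-cdc-protobuf/config",
    json=config,
).raise_for_status()
```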
- ClickHouse Driver for Apache Kafka Connect JDBC Sink connector
As a developer,
I want to have ClickHouse driver support in the JDBC sink connector,
so that I can write data from Apache Kafka to ClickHouse for further processing and analysis.
7 votes
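A sketch of what a ClickHouse target could look like in the JDBC sink connector if the driver were bundled. The jdbc:clickhouse URL format follows the official ClickHouse JDBC driver; host, database, credentials, and topic are placeholders.

```python
# Sketch, assuming the ClickHouse JDBC driver is on the Connect workers.
import requests

config = {
    "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:clickhouse://clickhouse-host:8443/analytics?ssl=true",
    "connection.user": "ingest",
    "connection.password": "secret",
    "topics": "pageviews",
}

requests.put(
    "https://my-connect:8443/connectors/clickhouse-pageviews-sink/config",
    json=config,
).raise_for_status()
```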
- Tagging functionality for Apache Kafka ACLs
As a developer,
I want to tag Apache Kafka ACLs,
so that I can group, search, and identify what my ACLs are used for.
7 votes
- Customer is interested in having PyFlink supported with Flink
As a developer,
I want to use the PyFlink library with Aiven for Apache Flink,
so that I can use it directly in my project.
7 votes
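A sketch of the kind of PyFlink Table API job this would enable. The table options follow Flink's standard Kafka SQL connector (its jar must be on the classpath); broker address and topic are placeholders.

```python
# Sketch: a minimal PyFlink streaming job reading from Kafka.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id STRING,
        url     STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'clicks',
        'properties.bootstrap.servers' = 'kafka:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Continuously print per-user click counts as a changelog stream.
t_env.execute_sql(
    "SELECT user_id, COUNT(*) AS clicks FROM clicks GROUP BY user_id"
).print()
```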
"Last Used" field on Kafka Certificates in Console/API
As an organization (DevOps/Security/Vendor Manager) using Aiven Kafka,
we want to determine the last-connected certificate status of Kafka users,
so that we can know whether a Kafka user certificate has been successfully updated.
We automate certificate rolling to an extent with Terraform. Different teams of devs generally own their section of Terraform creating Kafka users. As of right now, those certs expire every two years, and clicking "Yes I've updated" in the Aiven console just silences the alert and provides no real-time verification from the running Kafka that a certificate has been updated.
This means a user…
6 votes
- Add MaxTimestampSpec data to the topic/partition details from the Aiven API
As a developer,
I want to know the latest offset and the max timestamp for a given topic/partition,
so that I can identify when a topic/partition was last used and, if it is no longer in use, clean up stale topics/partitions.
6 votes
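For reference, the same data can be read client-side with the Kafka AdminClient's MaxTimestampSpec (KIP-734); a sketch assuming confluent-kafka >= 2.x and brokers on Apache Kafka 3.0+, with broker, topic, and partition as placeholders.

```python
# Sketch: query the offset and timestamp of the newest record in a partition.
from datetime import datetime, timezone

from confluent_kafka import TopicPartition
from confluent_kafka.admin import AdminClient, OffsetSpec

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
tp = TopicPartition("orders", 0)

futures = admin.list_offsets({tp: OffsetSpec.max_timestamp()})
info = futures[tp].result()

print(f"offset={info.offset}, last record at "
      f"{datetime.fromtimestamp(info.timestamp / 1000, tz=timezone.utc)}")
```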
- Support for cross-project and cross-VPC service integration for all services in the streaming domain
As a developer,
I want to integrate my services in the streaming domain (e.g. Apache Kafka, Apache Kafka Connect, Apache MirrorMaker2) with other services in any project and/or VPC,
so that I can build data streaming pipelines and benefit from the use of the aforementioned services.
6 votes
- Ability to choose the Apache Kafka Connect connector version
As a developer,
I want to choose which Apache Kafka Connect connector version to use,
so that I can control the connector version and make sure it is compatible with my applications.
6 votes
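Today the Connect REST API only lets you inspect which plugin versions are deployed; pinning a version is what this request adds. A sketch, with the Connect URL as a placeholder.

```python
# Sketch: list the connector plugins and versions a Connect cluster exposes.
import requests

resp = requests.get("https://my-connect:8443/connector-plugins")
resp.raise_for_status()

for plugin in resp.json():
    print(f"{plugin['class']}  version={plugin.get('version', 'unknown')}")
```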
- Salesforce Kafka Connector
As a developer,
I want to connect my Apache Kafka to Salesforce
so that I can read and write data from/to SFDC.
6 votes
- Azure Cosmos DB Kafka Connector
As a developer,
I want to connect to Azure Cosmos DB from Apache Kafka,
so that I can access my data stored in Cosmos DB and use it in my stream processing pipeline.
6 votes
- Aiven's S3 sink connector - Support configuring a max offset flush size
As a developer,
I want to use Aiven's S3 sink connector and have the ability to control the size of the output files (keeping the size constant even if traffic changes). In general, the connector has no lag, and we want to flush the offset and write to a file only when we have enough data. We use offset.flush.interval.ms for this, but when traffic increases, the amount of data arriving in the configured interval grows and can cause an OOM issue. In addition, when we pause the connector for a couple of minutes and accumulate a lag, it can also lead…
5 votes
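For contrast with the requested size-based flush, a sketch of the closest existing knob: the record-count cap described in the README of aiven/s3-connector-for-apache-kafka. Names and values are illustrative, credentials are omitted, and offset.flush.interval.ms itself is a worker-level property rather than part of the connector config.

```python
# Sketch: capping output files by record count (not bytes) in the Aiven S3 sink.
import requests

config = {
    "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
    "topics": "events",
    "aws.s3.bucket.name": "my-bucket",
    "file.max.records": "100000",  # caps records per file, not bytes
}

requests.put(
    "https://my-connect:8443/connectors/s3-events-sink/config",
    json=config,
).raise_for_status()
```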
- Karapace Schema Registry certificate authentication
As a developer,
I want to be able to authenticate with the same service user certificate for both Kafka and the Schema Registry.
5 votes
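A sketch of what the requested flow would look like: reusing the Kafka service user's mutual-TLS keypair when calling the Karapace Schema Registry. Paths and URL are placeholders, and Karapace would need to accept client certificates for this to work.

```python
# Sketch: client-certificate authentication against the Schema Registry.
import requests

resp = requests.get(
    "https://karapace:8081/subjects",
    cert=("service.cert", "service.key"),  # same keypair used for Kafka mTLS
    verify="ca.pem",
)
resp.raise_for_status()
print(resp.json())
```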
- Azure Data Lake Gen2 (ADLS) Kafka connector
A Kafka connector that allows streaming data to Azure Data Lake Gen2 as a sink.
5 votes
- Aiven for Apache MirrorMaker2 should replicate ACLs
As a developer,
I want to replicate ACLs when migrating or duplicating my Apache Kafka clusters,
so that I can make sure my applications have the same rights and permissions on several clusters, and I don't need to replicate them manually.
5 votes
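A sketch of the manual copy this request would make unnecessary: read every ACL from the source cluster with the AdminClient and recreate it on the target. Broker addresses are placeholders.

```python
# Sketch: one-shot ACL replication between two clusters.
from confluent_kafka.admin import (
    AdminClient, AclBindingFilter, AclOperation, AclPermissionType,
    ResourcePatternType, ResourceType,
)

source = AdminClient({"bootstrap.servers": "source-kafka:9092"})
target = AdminClient({"bootstrap.servers": "target-kafka:9092"})

match_all = AclBindingFilter(
    ResourceType.ANY, None, ResourcePatternType.ANY,
    None, None, AclOperation.ANY, AclPermissionType.ANY,
)

acls = source.describe_acls(match_all).result()
if acls:
    for acl, future in target.create_acls(list(acls)).items():
        future.result()  # raises if the target broker rejected an ACL
```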
- Apache Camel connector for Kafka Connect
As a software / data science engineer,
I want to be able to connect Apache Kafka to Apache Camel,
so that I can define my rules, routing, and ETL pipelines to process and route data from and to different systems.
5 votes