Enhance error messages and logs with schema names and versions
As a developer,
I want error messages and logs to contain schema names and versions,
so that I can quickly identify and troubleshoot issues related to specific schemas more efficiently.
In addition, this improvement is especially important when dealing with issues in referenced schemas, because the extra context in error messages and logs makes it easier to diagnose and resolve problems. This can significantly reduce the time spent on debugging and improve overall system maintainability.
3 votes
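Until the platform surfaces this, application code can recover the same context itself when messages use the schema-registry wire format (a zero magic byte followed by a 4-byte schema id): the id can be resolved to subject/version pairs through the registry's REST API. A minimal sketch, assuming a hypothetical registry URL:

```python
import struct

import requests

REGISTRY_URL = "https://registry.example.com"  # hypothetical endpoint

def describe_message_schema(raw: bytes) -> str:
    """Return a human-readable schema reference for a serialized payload."""
    if len(raw) < 5 or raw[0] != 0:
        return "not in schema-registry wire format"
    # Bytes 1-4 hold the schema id, big-endian, per the wire format.
    schema_id = struct.unpack(">I", raw[1:5])[0]
    resp = requests.get(f"{REGISTRY_URL}/schemas/ids/{schema_id}/versions")
    resp.raise_for_status()
    return ", ".join(f"{v['subject']} v{v['version']}" for v in resp.json())

# e.g. log describe_message_schema(msg.value()) next to a deserialization error
```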
Enforce schema compatibility check on level change
As a developer,
I want to have an enforced compatibility check on all existing schemas when the compatibility level is set to a more restrictive one (or for any change),
so that I can ensure all schemas comply with the new restrictive compatibility level and maintain consistency in the schema registry.
In addition, this improvement is important because it prevents potential issues when new schemas are registered or existing ones are updated, thereby increasing the reliability of the schema registry.
3 votes
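Until such a check is enforced server-side, a pre-flight script can approximate it by replaying each subject's latest schema against the earlier versions through the registry's compatibility endpoint. A rough sketch, assuming a hypothetical registry URL and credentials; note the endpoint applies the currently configured level, so the level would have to be switched before (or verified after) running it:

```python
import requests

REGISTRY_URL = "https://registry.example.com"  # hypothetical endpoint
AUTH = ("avnadmin", "secret")                  # hypothetical credentials

subjects = requests.get(f"{REGISTRY_URL}/subjects", auth=AUTH).json()
for subject in subjects:
    versions = requests.get(
        f"{REGISTRY_URL}/subjects/{subject}/versions", auth=AUTH
    ).json()
    latest = requests.get(
        f"{REGISTRY_URL}/subjects/{subject}/versions/latest", auth=AUTH
    ).json()
    # Test the latest schema against every earlier version of the subject.
    for version in versions[:-1]:
        result = requests.post(
            f"{REGISTRY_URL}/compatibility/subjects/{subject}/versions/{version}",
            json={
                "schema": latest["schema"],
                "schemaType": latest.get("schemaType", "AVRO"),
            },
            auth=AUTH,
        ).json()
        if not result.get("is_compatible", False):
            print(f"{subject} v{version}: INCOMPATIBLE")
```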
Emails should be valid Kafka usernames with OAuth
As a Data Platform Principal Engineer
I want to use emails as Kafka usernames when OAuth authentication is configured
so that I can use Databricks as the SSO provider to reduce the number of credentials that I need to manage and share with each user. This also improves security, because access is automatically disabled when someone leaves the company.
In particular, when I am using a Databricks Service Principal for authentication, it works as expected. The Databricks Service Principal is identified by a unique UUID. To make it work I have added a Kafka service user with that UUID as…
1 vote
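For reference, a client attaches to an OIDC-based setup roughly like this with confluent-kafka (librdkafka); the token endpoint and credentials below are hypothetical placeholders, and the principal the broker sees is derived from the token, which is where the email-vs-UUID mismatch described above appears:

```python
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "kafka.example.com:9092",  # hypothetical
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "OAUTHBEARER",
    "sasl.oauthbearer.method": "oidc",
    # Hypothetical Databricks OIDC token endpoint and client credentials.
    "sasl.oauthbearer.token.endpoint.url": "https://example.databricks.com/oidc/v1/token",
    "sasl.oauthbearer.client.id": "2f9e4a7c-…",  # service principal UUID
    "sasl.oauthbearer.client.secret": "secret",
}
producer = Producer(conf)
```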
CSV kafka connector
As an application engineer,
I want to develop a CSV connector where data is ingested from flat files (CSV) to create a stream of records that can be processed in Apache Kafka. It would be similar to what Confluent provides here (https://docs.confluent.io/kafka-connectors/spooldir/current/connectors/csv_source_connector.html).
4 votes
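If a spooldir-style connector were available on the platform, registering it through the Kafka Connect REST API might look roughly like the sketch below; the URL, paths, and topic are hypothetical, and the config keys follow the Confluent spooldir CSV source linked above:

```python
import requests

CONNECT_URL = "https://connect.example.com"  # hypothetical Connect REST endpoint

payload = {
    "name": "csv-source",
    "config": {
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "topic": "csv-records",             # hypothetical target topic
        "input.path": "/data/incoming",     # directory watched for new CSV files
        "finished.path": "/data/finished",  # processed files are moved here
        "error.path": "/data/error",        # unparseable files land here
        "input.file.pattern": ".*\\.csv",
        "csv.first.row.as.header": "true",  # derive field names from the header
    },
}
requests.post(f"{CONNECT_URL}/connectors", json=payload).raise_for_status()
```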
Monitoring consumer lag for Kafka not available out of the box
As a Solutions Architect
I want to have consumer lag for Kafka consumers available out of the box
so that I can correctly monitor streaming applications without having to set up an external Prometheus. Consumer lag is the key metric for the end-to-end health of streaming apps (i.e. to ensure they are keeping up with demand); you cannot put a streaming app into production without correct monitoring and alerting on this metric.
Background:
1. There is a consumer lag panel on the default metrics dashboard, but it does not work.
2. I contacted support and found that…
1 vote
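As a stopgap, lag can be computed client-side by comparing each partition's committed offset with its high watermark. A minimal sketch with confluent-kafka, assuming hypothetical broker, group, and topic names:

```python
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "kafka.example.com:9092",  # hypothetical
    "group.id": "my-streaming-app",                 # the group to inspect
    "enable.auto.commit": False,
})

# Hypothetical topic with three partitions.
partitions = [TopicPartition("orders", p) for p in range(3)]
for tp in consumer.committed(partitions, timeout=10):
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    # A negative offset means nothing was committed yet for this partition.
    lag = high - tp.offset if tp.offset >= 0 else high - low
    print(f"{tp.topic}[{tp.partition}] lag={lag}")
consumer.close()
```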
Exactly-Once support in Storage Write API from our GBQ sink connector
Add support for exactly-once delivery in Storage Write API for GBQ sink connector.
1 vote
Support for Protobuf serialization of Events (and Keys) in Aiven Kafka Connect
As a data engineer
I want Aiven Kafka Connect to offer the option to use the Protocol Buffers data format when serializing the events that it sends to a Kafka broker.
In my specific case, I need it to be possible in a Debezium connector for PostgreSQL.
Additionally, it would be good for the user to have the option to define the Protobuf schema used for serialization themselves.
7 votes
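For comparison, on platforms where a Protobuf converter is available, enabling it for a Debezium PostgreSQL connector is mostly converter configuration. A sketch using Confluent's converter class purely as an illustration; the URLs and names are hypothetical:

```python
import requests

CONNECT_URL = "https://connect.example.com"  # hypothetical

payload = {
    "name": "pg-outbox-protobuf",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "pg.example.com",  # hypothetical
        "database.dbname": "appdb",             # hypothetical
        # Serialize change-event values as Protobuf via the schema registry.
        "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
        "value.converter.schema.registry.url": "https://registry.example.com",
    },
}
requests.post(f"{CONNECT_URL}/connectors", json=payload).raise_for_status()
```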
Kafka Connect GCS Sink: Support using field values to define bucket name or file name prefix
As a data streaming architect
I want to be able to export records from Kafka to GCS and use values in the record to define the bucket or file name
so that I can organize data by those values to make them easier to find and process.
The use case is a multi-user/multi-tenant application where user info is a value in the record; I need to be able to organize the output in object storage by that value somehow.
3 votes
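One way this could surface is an extension of the sink's existing file-name templating to record fields. The {{field:…}} variable below does not exist today; it is shown only to illustrate the requested behaviour:

```python
# Hypothetical sketch: a GCS sink config whose file names are prefixed by the
# value of a record field ("tenant" here). {{topic}}, {{partition}} and
# {{start_offset}} are real template variables; {{field:tenant}} is not.
connector_config = {
    "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
    "gcs.bucket.name": "my-bucket",  # hypothetical bucket
    "file.name.template": "{{field:tenant}}/{{topic}}-{{partition}}-{{start_offset}}",
}
```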
Aiven's S3 sink connector - Support configuring offset flush max size
As a Developer
I want to use Aiven's S3 sink connector and have the ability to control the size of output files (keeping the same size even if the traffic changes). In general, the connector has no lag and we want to flush the offset and write to a file only when we have enough data. We use offset.flush.interval.ms for this, but when the traffic increases, the amount of data arriving in the configured interval grows and can cause an OOM issue. In addition, when we pause the connector for a couple of minutes and accumulate a lag, it can also lead…
5 votes
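A partial workaround that exists in Aiven's S3 sink today is file.max.records, which caps the record count (though not the byte size) per output file, independent of the worker-level flush interval. A sketch, with hypothetical names:

```python
# Connector-level: cap each output file at a fixed record count so file sizes
# stay roughly stable as traffic changes (bounds records, not bytes).
connector_config = {
    "connector.class": "io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector",
    "aws.s3.bucket.name": "my-bucket",  # hypothetical
    "file.max.records": "100000",       # flush a file once it holds this many records
}

# Worker-level: offset.flush.interval.ms still controls how often Connect
# commits offsets, regardless of accumulated data volume.
worker_config = {"offset.flush.interval.ms": "60000"}
```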
Certificate file support for Kafka connectors
As a developer / security engineer,
I want to be able to authenticate my Apache Kafka connectors via mTLS,
so that I can connect to my external services in a secure way.
24 votes
Programmatic Apache Kafka Consumer (group) management
As a developer,
I want to programmatically manage my consumers and consumer groups,
so that I can see their status, perform CRUD operations, show the members of a group, reset offsets, and so on.
In addition, I want to be able to do the same in the Aiven Console.
26 votes
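Parts of this are already reachable programmatically through the Kafka Admin API. A minimal sketch with confluent-kafka's AdminClient, assuming a hypothetical bootstrap address and group name; offset resets are similarly available via alter_consumer_group_offsets:

```python
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka.example.com:9092"})  # hypothetical

# List every consumer group on the cluster with its current state.
listing = admin.list_consumer_groups().result()
for group in listing.valid:
    print(group.group_id, group.state)

# Describe a specific group to inspect its members and their assignments.
for group_id, future in admin.describe_consumer_groups(["my-streaming-app"]).items():
    description = future.result()
    for member in description.members:
        print(member.client_id, member.assignment.topic_partitions)
```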
ACL
As an SRE Engineer
I want to customize ACLs to allow Kafka consumer operations to carry on even while the write lock is triggered when disk space reaches threshold limits of 95% or 97%. Given that Kafka consumers' offset commits are relatively small, this option will not be detrimental,
so that even when disk space reaches critical levels, it will not immediately impact consumer-side operations.
1 vote
Schema Validation on Apache Kafka broker side
As a developer,
I want to make sure the schema is validated not just on the client side but also on the broker side,
so that I can make sure all messages in the topic correspond to the same schema and the topic does not contain any mixed schemas.
22 votes
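Today that validation happens only in well-behaved clients: a producer that serializes through the schema registry fails locally on a non-conforming record, while a plain producer can still write arbitrary bytes to the same topic, which is exactly the gap a broker-side check would close. A sketch of the client-side status quo, with hypothetical URLs and topic:

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = '{"type": "record", "name": "User", "fields": [{"name": "name", "type": "string"}]}'
registry = SchemaRegistryClient({"url": "https://registry.example.com"})  # hypothetical
serializer = AvroSerializer(registry, schema_str)

producer = Producer({"bootstrap.servers": "kafka.example.com:9092"})  # hypothetical
# Serialization fails here if the record does not match the schema, but only
# because this client opted in; the broker itself never checks.
value = serializer({"name": "alice"}, SerializationContext("users", MessageField.VALUE))
producer.produce("users", value=value)
producer.flush()
```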
Custom Kafka Connectors on Aiven (Bring your own Connector)
As Developer, I want to utilize Custom Connectors on Aiven,
so that I can integrate our proprietary data systems and custom applications with Apache Kafka without having to manage the underlying infrastructure.
In addition, this will allow developers to concentrate more on building business-critical applications instead of getting tied up with infrastructure tasks.
18 votes
Add OpenTelemetry API and SDK to enable traceability in Debezium connector
As an IT Architect
I want to provide complete traceability within my microservice mesh, where two microservices communicate via the Outbox pattern through a Debezium connector. To achieve this, the Debezium connector requires that some of the OpenTelemetry APIs be on the Kafka Connect classpath.
I want to have this feature so that I can see the complete chain of interactions for a specific request, observe what and where time is spent inside a particular microservice, and find possible bottlenecks.
3 votes
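Concretely, Debezium ships an ActivateTracingSpan single message transform for this; once the OpenTelemetry API jars are on the Connect classpath, enabling it is a small config change. A sketch (connector name and URL hypothetical, remaining connector config elided):

```python
import requests

CONNECT_URL = "https://connect.example.com"  # hypothetical

config = {
    # ... the connector's existing Debezium config goes here ...
    # Resume the trace context propagated by the outbox writer and open a
    # span around each change event.
    "transforms": "tracing",
    "transforms.tracing.type": "io.debezium.transforms.tracing.ActivateTracingSpan",
}
requests.put(f"{CONNECT_URL}/connectors/pg-outbox/config", json=config).raise_for_status()
```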
Karapace Schema Registry certificate authentication
As a developer, I should be able to authenticate with the same service user certificate for both Kafka and the Schema Registry.
5 votes
Run Karapace as dedicated managed service
As a developer,
I want to run Karapace as a fully managed, dedicated service,
so that I can use it with Apache Kafka running on and outside of Aiven.
In addition, I can use the same Karapace service against multiple Apache Kafka services.
17 votes
Support for public CA for SASL for Aiven for Apache Kafka
As a developer,
I want to use a public CA (e.g. Let's Encrypt),
so that I can connect to my Aiven for Apache Kafka service without installing any additional certificates.
In addition, I can trust the public authority issuing the certificates instead of validating third-party certificates.
17 votes
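For contrast, here is what this changes on the client side: with Aiven's project CA every client must ship ca.pem, while a publicly trusted chain would let librdkafka fall back to the system trust store. A sketch with hypothetical addresses and credentials:

```python
from confluent_kafka import Producer

# Today: the per-project CA certificate must be distributed to every client.
conf = {
    "bootstrap.servers": "kafka.example.com:13045",  # hypothetical
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-256",
    "sasl.username": "avnadmin",  # hypothetical
    "sasl.password": "secret",
    "ssl.ca.location": "ca.pem",  # downloaded project CA
}

# With a public CA (e.g. Let's Encrypt): drop ssl.ca.location and rely on the
# system trust store that librdkafka uses by default.
public_ca_conf = {k: v for k, v in conf.items() if k != "ssl.ca.location"}
producer = Producer(public_ca_conf)
```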
Untyped metrics from Prometheus endpoint
Using the Prometheus endpoint on our Kafka service, we need the Kafka metrics to carry a type.
As the example below shows, the metrics currently have no type; this should be set at the endpoint to ease the use of these metrics.
Example:
# TYPE kafka_server_group_coordinator_metrics_group_completed_rebalance_count untyped
# TYPE kafka_server_group_coordinator_metrics_offset_commit_rate untyped
1 vote
Kafka tiered storage with external S3 bucket
As a developer
I want to have the ability to use our own S3 bucket for storing Kafka tiered storage
so that I can access the data from S3 and query some data for debugging (without streaming all the data to Kafka).
3 votes