Datadog Database Monitoring Explain Insights for PostgreSQL
As a database admin or a developer,
I would like explain plans for parameterized queries to be enabled and the agent user's search_path configured to include all schemas in the Datadog agent configuration,
so that I can fully utilize the Explain Insights module of the Datadog Database Monitoring feature with my PostgreSQL databases.
21 votes -
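For the Explain Insights idea above, a minimal sketch of the database-side half, assuming the Datadog agent connects as a role named datadog and the application uses schemas beyond public (role, database, and schema names are illustrative; explain_parameterized_queries itself is an agent-side setting rather than SQL):
```sql
-- Hypothetical example: widen the search_path of the role the Datadog agent
-- uses, so EXPLAIN can resolve tables that live outside the public schema.
ALTER ROLE datadog IN DATABASE defaultdb
    SET search_path = "$user", public, app_schema, reporting;

-- Confirm what the agent's role will pick up on its next connection.
SELECT r.rolname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_roles r ON r.oid = s.setrole
WHERE r.rolname = 'datadog';
```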
PG replication status through API
As a DBA
I want to monitor the PG replication status through the API
so that I can know the replication status/log before promoting the replica
In addition, we have a DRP process that will promote all our replicas to master through a GitHub Action, and we would like to know the status before promoting the replica.
7 votes -
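The API surface would be new, but the signals it would expose already exist in Postgres; a rough sketch of what a pre-promotion check could query (the first two statements on the replica, the last on the primary):
```sql
-- On the replica: is it still in recovery, and how far behind is replay?
SELECT pg_is_in_recovery();

SELECT pg_last_wal_receive_lsn()               AS received_lsn,
       pg_last_wal_replay_lsn()                AS replayed_lsn,
       now() - pg_last_xact_replay_timestamp() AS replay_delay;

-- On the primary: per-replica streaming state and lag.
SELECT application_name, state, sent_lsn, replay_lsn, replay_lag
FROM pg_stat_replication;
```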
Support restoring PostgreSQL to a point before a major in-place upgrade
As a Database Engineer
I want to be able to restore to a time before a major in-place upgrade
so that I can restore data in certain tables for our customers.
In addition, we communicate to customers that we can restore data back 30 days and an in-place upgrade does not allow that. Doing a fork and an upgrade causes additional downtime and complicates automation that makes use of naming standards.
5 votes -
Almost zero downtime major release upgrades with logical replication
Based on a mechanism similar to aivendbmigrate, I would like to have a major release upgrade mechanism based on logical replication.
Steps:
1. create an empty target instance with the new major release
2. create a full logical replication setup (like aivendbmigrate does)
   2.1. inform users about limitations (LOBs, schema changes during that period, etc.)
   2.2. inform users about testing and rollback capabilities
3. perform the usual migration, transferring everything, and switch to a CDC state until the next steps are defined
4. provide specific cutover scenarios
   4.1. DNS switch if possible?
   4.2. some pgbouncer magic to prevent forced reconnections of…
4 votes -
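A rough sketch of the replication core behind steps 2–3, assuming a source service on the old major version and an empty target on the new one (connection details are illustrative; sequences, large objects, and DDL changes still need separate handling):
```sql
-- On the source (old major version): publish every table.
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the target (new major version), after the schema has been restored:
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary.example.com port=5432 dbname=defaultdb user=replicator'
    PUBLICATION upgrade_pub;

-- Watch the initial copy and CDC catch-up before planning the cutover (step 4).
SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time
FROM pg_stat_subscription;
```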
pg_duckdb
DuckDB has just launched a new pg_duckdb extension and I want to use it in my PostgreSQL database to improve performance for analytics.
pg_duckdb is a Postgres extension that embeds DuckDB's columnar-vectorized analytics engine and features into Postgres.
5 votes -
Support pg_jsonschema as a PostgreSQL Extension
As a Software Engineer or DBA, I want to be able to enforce schemas on JSON/JSONB fields in a convenient manner, so that I can ensure bad data doesn't get written to my database. https://github.com/supabase/pg_jsonschema seems to be the most popular extension for doing this.
8 votes -
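A minimal sketch of the constraint pattern this extension enables, using the json_matches_schema function shown in the pg_jsonschema README (table and schema contents are illustrative):
```sql
CREATE EXTENSION pg_jsonschema;

-- Reject writes whose metadata does not conform to the declared JSON Schema.
CREATE TABLE customer (
    id       serial PRIMARY KEY,
    metadata json,
    CHECK (
        json_matches_schema(
            '{"type": "object", "properties": {"tags": {"type": "array", "items": {"type": "string"}}}}',
            metadata
        )
    )
);

-- Succeeds; an insert with e.g. numeric tags would fail the CHECK instead.
INSERT INTO customer (metadata) VALUES ('{"tags": ["vip", "beta"]}');
```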
Support Certificate Authentication on Postgres
As a developer,
I want to connect to my Aiven for Postgres services without providing credentials, using only certificate authentication as offered by Postgres (https://www.postgresql.org/docs/current/auth-cert.html),
so that I can be compliant with my internal security policies.
18 votes -
Logs in JSON format
As an Operations Engineer
I want to have logs in JSON format
so that I can process them in third-party log management and analytics systems without having to write custom parsing rules.
Related PostgreSQL documentation: https://www.postgresql.org/docs/15/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-JSONLOG
15 votes -
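For reference, the PostgreSQL 15+ parameter the linked documentation describes; on a managed service this would presumably be exposed as a service-level configuration option rather than set by hand (a sketch of the underlying knob, not the Aiven interface):
```sql
-- jsonlog is available from PostgreSQL 15 and requires the logging collector.
ALTER SYSTEM SET logging_collector = on;       -- takes effect after a restart
ALTER SYSTEM SET log_destination = 'jsonlog';
SELECT pg_reload_conf();

-- Each log entry then arrives as one JSON object per line, with keys such as
-- "timestamp", "error_severity" and "message", ready for downstream ingestion.
```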
Failover ready scaled read replica
As a database administrator with a large footprint of applications being served,
I want to have a read-only replica cluster in another region that is sized appropriately, with its own read replica nodes provisioned,
so that I can fail over and that read replica can quickly handle the production workload in the event of a disaster in one region/cloud vendor, minimising any disruption to service.
41 votes -
PGBouncer: Multiple processes
As a DBA I want to activate the "so_reuseport" option so that PGBouncer can use multiple CPUs.
In the context of a very highly concurrent database using PGBouncer, the single process can reach 100% CPU usage. Having multiple PGBouncer processes on the same instance would distribute the load between several processes.
2 votes -
Support Citus as a PostgreSQL Extension
As a developer or database administrator,
I want to scale and distribute my PostgreSQL databases,
so that I can handle large-scale data processing and analytics workloads efficiently.
The Citus extension offers the benefits of scalability, improved performance, fault tolerance, high availability, efficient data distribution, and seamless integration with PostgreSQL, enabling users to effectively manage and process large-scale data workloads.
58 votes -
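A minimal sketch of how a table would be distributed once the Citus extension is available (table and column names are illustrative):
```sql
CREATE EXTENSION citus;

CREATE TABLE events (
    tenant_id  bigint NOT NULL,
    event_id   bigint NOT NULL,
    payload    jsonb,
    created_at timestamptz DEFAULT now(),
    PRIMARY KEY (tenant_id, event_id)
);

-- Shard the table across worker nodes by tenant_id.
SELECT create_distributed_table('events', 'tenant_id');

-- Single-tenant queries are routed to one shard; analytical queries fan out
-- across all workers in parallel.
SELECT count(*) FROM events WHERE tenant_id = 42;
```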
Support pgai as a PostgreSQL Extension
As a developer, I would like to simplify the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL so that embedding and generation AI models are closer to the database, enabling me to create embeddings, retrieve LLM chat completions, generate responses, and reason over data directly within SQL queries.
3 votes -
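A minimal sketch of the kind of in-SQL call pgai documents; the extension name, function name, and model identifier are taken from the pgai README at the time of writing and should be treated as assumptions (an OpenAI API key also has to be configured for the session):
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;

-- Create an embedding for a piece of text without leaving SQL
-- (ai.openai_embed per the pgai README; verify against the current docs).
SELECT ai.openai_embed('text-embedding-3-small', 'How do I reset my password?');
```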
Support postgresql_anonymizer as a PostgreSQL Extension
As a data privacy officer, database administrator, data analyst/researcher, or compliance officer, I would like the ability to anonymize sensitive data in the database by masking or replacing personally identifiable information (PII) or commercially sensitive data so that I can protect individuals' privacy and ensure compliance with data protection regulations such as HIPAA and GDPR.
26 votes -
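A short sketch of the extension's declarative masking style (it installs as anon; the table, columns, and masking functions chosen here are illustrative):
```sql
CREATE EXTENSION anon CASCADE;
SELECT anon.init();

-- Declare masking rules as security labels on the sensitive columns.
SECURITY LABEL FOR anon ON COLUMN customer.email
    IS 'MASKED WITH FUNCTION anon.fake_email()';
SECURITY LABEL FOR anon ON COLUMN customer.last_name
    IS 'MASKED WITH FUNCTION anon.fake_last_name()';

-- Either rewrite the data in place...
SELECT anon.anonymize_table('customer');

-- ...or keep it intact and mask it dynamically for selected roles.
SELECT anon.start_dynamic_masking();
SECURITY LABEL FOR anon ON ROLE analyst IS 'MASKED';
```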
Support postgres extension ip4r
As a developer,
I want to use the ip4r extension because the built-in IP address types in Postgres work okay for many applications, but are lacking when it comes to applications that work heavily with IPs, like ours. ip4r adds several IP types with expanded functionality that enable us to perform far more efficient "IP in range" lookups, as well as track IP ranges that don't fall neatly on CIDR boundaries, plus general performance improvements. Very useful for our malicious IP threat feed service, abuseipdb.com, as you can imagine.
2 votes -
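A minimal sketch of the range-lookup pattern described above, which ip4r handles more efficiently than the built-in inet/cidr types (table contents are illustrative):
```sql
CREATE EXTENSION ip4r;

-- Ranges do not have to fall on CIDR boundaries.
CREATE TABLE threat_ranges (
    ip_range ip4r NOT NULL,
    label    text
);
INSERT INTO threat_ranges VALUES ('192.0.2.10-192.0.2.77', 'scanner pool');

-- A GiST index keeps "which ranges contain this IP" lookups cheap.
CREATE INDEX ON threat_ranges USING gist (ip_range);
SELECT label FROM threat_ranges WHERE ip_range >>= '192.0.2.42'::ip4;
```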
Expose MySQL slow query logs in service logs
Currently, MySQL slow query logs are stored in the mysql.slow_log table. Adding an option to redirect those logs into service logs would allow log integrations, like OpenSearch, to pick those up and give customers greater value and flexibility.
It would implicitly solve the following:
https://ideas.aiven.io/forums/951280-operational-databases/suggestions/46266829-allow-mysql-slow-query-log-on-read-only-replicas
4 votes -
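For context, the table the slow query log is written to today; a sketch of the data the requested service-log stream would carry (columns per MySQL's mysql.slow_log definition):
```sql
-- Slow queries currently land in a table rather than a log file,
-- which log integrations such as OpenSearch cannot pick up directly.
SELECT start_time,
       user_host,
       query_time,
       rows_sent,
       rows_examined,
       sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;
```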
Support pgvectorscale as a PostgreSQL Extension
As a developer,
I want to be able to perform efficient vector similarity search and embedding operations directly in PostgreSQL, so that I can handle high-dimensional vector data within the database efficiently for tasks like similarity search and machine learning.
Pgvectorscale is an open-source PostgreSQL extension that builds on pgvector, enabling greater performance and scalability.
https://www.timescale.com/blog/pgvector-is-now-as-fast-as-pinecone-at-75-less-cost/
https://github.com/timescale/pgvectorscale
2 votes -
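A rough sketch of what pgvectorscale layers on top of pgvector, following its README; the extension name (vectorscale), index method (diskann), and operator class are assumptions to verify against the current docs:
```sql
CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;  -- pulls in pgvector

CREATE TABLE document_embedding (
    id        bigserial PRIMARY KEY,
    embedding vector(3)          -- tiny dimension just for the sketch
);

-- StreamingDiskANN index from pgvectorscale instead of pgvector's HNSW/IVFFlat.
CREATE INDEX ON document_embedding USING diskann (embedding vector_cosine_ops);

-- Nearest-neighbour search uses the usual pgvector operators.
SELECT id
FROM document_embedding
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
LIMIT 5;
```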
Support oracle_fdw as a PostgreSQL Extension
As a database administrator or developer, I would like to integrate PostgreSQL with Oracle databases using the oracle_fdw extension so that I can seamlessly access data in Oracle databases from within PostgreSQL using standard SQL queries. This feature simplifies data integration and allows for efficient cross-database operations without the need for complex data migration processes.
3 votes -
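A minimal sketch of the oracle_fdw wiring (connection string, credentials, and table layout are illustrative):
```sql
CREATE EXTENSION oracle_fdw;

-- Point Postgres at the Oracle instance.
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle.example.com:1521/ORCL');

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
    OPTIONS (user 'scott', password 'tiger');

-- Expose an Oracle table as a local foreign table.
CREATE FOREIGN TABLE oracle_emp (
    empno integer,
    ename text,
    sal   numeric
)
SERVER oradb OPTIONS (schema 'SCOTT', table 'EMP');

-- Query it with ordinary SQL; oracle_fdw pushes down what it can.
SELECT ename, sal FROM oracle_emp WHERE sal > 2000;
```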
Tiered Storage on Redis
As a devops engineer
I want to store extensive amounts of data in Redis but only need part of it to be in memory
so that I can still utilize the low-latency nature of Redis but also retain as much data as I can without paying the cost for unnecessary memory.
In addition, I also want to be able to move data between different tiers.
12 votes -
Support pgvecto.rs as a PostgreSQL Extension
As a developer,
I want to be able to use the pgvecto.rs extension (https://docs.pgvecto.rs/),
so that I can have scalable vector search in PostgreSQL.
1 vote -
Active Active Cross region replication - Redis
As a devops engineer/admin,
I want to run both read and write requests against the Redis cluster located in the geographically closest region,
so that I can reduce latency and improve the experience of my end users.
20 votes