---
title: Grafana Cloud integration
subtitle: Send metrics and logs from Neon Postgres to Grafana Cloud
enableTableOfContents: true
updatedOn: '2025-08-18T12:11:47.174Z'
---

What you'll learn:

- How to set up the Grafana Cloud integration
- How to configure log forwarding
- The full list of externally-available metrics

External docs:

- Grafana Cloud OTLP Documentation
- Grafana Cloud Authentication and Permissions

The Grafana Cloud integration lets you monitor Neon database performance, resource utilization, and system health directly from Grafana Cloud. The integration requires [OTEL support](https://neon.com/docs/guides/opentelemetry), which is available with Neon's Scale plan.

## How it works

The integration uses Grafana Cloud's native OTLP endpoint to securely transmit Neon metrics and Postgres logs. By configuring the integration with your Grafana Cloud OTLP endpoint and authentication token, Neon automatically sends data from your project to your Grafana Cloud stack, where it's routed to the appropriate storage backends (Mimir for metrics, Loki for logs, and Tempo for traces).

Data is sent for all computes in your Neon project. For example, if you have multiple branches, each with an attached compute, both metrics and logs will be collected and sent for each compute.

### Neon metrics

The integration exports [a comprehensive set of metrics](#available-metrics), including:

- **Connection counts** — Tracks active and idle database connections.
- **Database size** — Monitors the total size of all databases in bytes.
- **Replication delay** — Measures replication lag in bytes and seconds.
- **Compute metrics** — Includes CPU and memory usage statistics for your compute.

### Postgres logs

With the Grafana Cloud integration, you can forward Postgres logs to your Grafana Cloud stack. These logs provide visibility into database activity, errors, and performance. See [Export Postgres logs](#export-postgres-logs) for details.

## Prerequisites

Before getting started, ensure the following:

- You have a Neon account and project. If not, see [Sign up for a Neon account](/docs/get-started/signing-up).
- You have a Grafana Cloud account and access to the Grafana Cloud Portal.

## Set up the integration

1. **Get your Grafana Cloud OTLP configuration**

   1. Sign in to the [Grafana Cloud Portal](https://grafana.com/orgs/).
   1. Click the **OpenTelemetry** card.
   1. Copy your OTLP endpoint URL and authentication credentials from the configuration details. The authentication key should follow the format `<instance ID>:<token>`.

2. **Configure the Neon OpenTelemetry integration**

   1. In the Neon Console, navigate to the **Integrations** page in your Neon project.
   1. Locate the **OpenTelemetry** card and click **Add**.
   1. Select **HTTP** as the connection protocol (recommended).
   1. Enter your Grafana Cloud OTLP endpoint URL.
   1. Choose **Bearer** authentication and paste your Grafana Cloud authentication token.
   1. Configure the `service.name` resource attribute (e.g., `neon-postgres-production`).
   1. Select what you want to export:
      - **Metrics**: System metrics and database statistics (CPU, memory, connections, etc.)
      - **Postgres logs**: Error messages, warnings, connection events, and system notifications
   1. Click **Add** to complete the integration.

You can change these settings later by editing your integration configuration from the **Integrations** page.

Once the integration is set up, Neon will start sending metrics and logs to your Grafana Cloud stack, where they'll be automatically stored in Mimir (metrics) and Loki (logs).
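To confirm that data is arriving, you can run a quick query in Grafana's **Explore** view against your Prometheus (Mimir) data source. The sketch below uses the `neon_connection_counts` metric and labels referenced elsewhere in this guide; the `project_id` value is a placeholder you should replace with your own Neon project ID.

```promql
# Confirm Neon metrics are arriving: returns one series per database and connection state
neon_connection_counts{project_id="your-project-id"}

# See which compute endpoints are currently reporting metrics
sum by (endpoint_id) (neon_connection_counts)
```

If these queries return no series, check that the affected compute is active and that the integration's endpoint URL and token are correct.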
Neon computes only send logs and metrics when they are active. If the [Scale to Zero](/docs/introduction/scale-to-zero) feature is enabled and a compute is suspended due to inactivity, no logs or metrics will be sent during the suspension. This may result in gaps in your data. If you notice missing data, check if your compute is suspended. You can verify a compute's status as `Idle` or `Active` on the **Branches** page in the Neon Console, and review **Suspend compute** events on the **System operations** tab of the **Monitoring** page.

Additionally, if you are setting up the Grafana Cloud integration for a project with an inactive compute, you'll need to activate the compute before it can send data. To activate it, simply run a query from the [Neon SQL Editor](/docs/get-started/query-with-neon-sql-editor) or any connected client.

## Example usage

Once integrated, you can explore your Neon metrics and logs in Grafana Cloud using the Explore feature. Navigate to **Explore** in your Grafana Cloud instance and query metrics like `neon_connection_counts`, `neon_db_total_size`, and `host_cpu_seconds_total` using your Prometheus data source. You can also create custom dashboards and set alerts based on threshold values for critical metrics.

## Import the Neon dashboard

Import the provided dashboard JSON configuration to get started with pre-built visualizations:

1. In your Grafana Cloud stack, navigate to **Dashboards** → **New** → **Import**.
2. Copy and paste the [dashboard JSON below](#dashboard-json).
3. Click **Load** and configure the dashboard settings.
4. The dashboard will automatically detect your Neon metrics and display key performance indicators.

If any of the computes in your project are active, you should start seeing data in the resulting dashboard right away. By default, the dashboard shows metrics for all active endpoints in your project. You can filter results to one or more selected endpoints using the `endpoint_id` variable dropdown selector.
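If you prefer to build your own panels or run ad-hoc queries in addition to the imported dashboard, the expressions below are a reasonable starting point. They mirror queries used in the dashboard JSON that follows, aggregated by the `endpoint_id` label rather than filtered by the dashboard variables; adjust the label filters to match your project.

```promql
# Approximate CPU utilization per compute endpoint (percent)
100 - (avg by (endpoint_id) (rate(host_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Overall local file cache (LFC) hit rate per compute endpoint (percent)
sum by (endpoint_id) (neon_lfc_hits) / (sum by (endpoint_id) (neon_lfc_hits) + sum by (endpoint_id) (neon_lfc_misses)) * 100
```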
### Dashboard JSON

```json shouldWrap
{
  "id": null,
  "uid": "neon-complete-monitoring",
  "title": "Neon PostgreSQL",
  "description": "Comprehensive monitoring dashboard for Neon PostgreSQL with metrics and logs",
  "tags": ["neon", "postgresql", "database", "monitoring"],
  "timezone": "browser",
  "editable": true,
  "graphTooltip": 1,
  "refresh": "30s",
  "schemaVersion": 39,
  "version": 1,
  "time": { "from": "now-1h", "to": "now" },
  "timepicker": {
    "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"],
    "time_options": ["5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d"]
  },
  "panels": [
    {
      "id": 1,
      "title": "Database Overview",
      "type": "stat",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "sum(neon_connection_counts{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"})", "legendFormat": "Total Connections", "refId": "A" },
        { "expr": "neon_db_total_size{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / 1024 / 1024 / 1024", "legendFormat": "Database Size (GB)", "refId": "B" },
        { "expr": "neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / (neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} + neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}) * 100", "legendFormat": "Cache Hit Rate %", "refId": "C" }
      ],
      "fieldConfig": {
        "defaults": { "unit": "short", "min": 0 },
        "overrides": [
          { "matcher": { "id": "byName", "options": "Cache Hit Rate %" }, "properties": [{ "id": "unit", "value": "percent" }, { "id": "max", "value": 100 }] },
          { "matcher": { "id": "byName", "options": "Database Size (GB)" }, "properties": [{ "id": "unit", "value": "decbytes" }] }
        ]
      },
      "gridPos": { "h": 6, "w": 24, "x": 0, "y": 0 }
    },
    {
      "id": 2,
      "title": "Connection Activity",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "neon_connection_counts{state=\"active\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Active - {{datname}}", "refId": "A" },
        { "expr": "neon_connection_counts{state=\"idle\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Idle - {{datname}}", "refId": "B" }
      ],
      "fieldConfig": { "defaults": { "unit": "short", "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 6 }
    },
    {
      "id": 3,
      "title": "Database Size Growth",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "neon_pg_stats_userdb{kind=\"db_size\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "{{datname}} Size", "refId": "A" },
        { "expr": "neon_db_total_size{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Total Size", "refId": "B" }
      ],
      "fieldConfig": { "defaults": { "unit": "bytes", "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 6 }
    },
    {
      "id": 4,
      "title": "CPU Usage",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "100 - (avg(rate(host_cpu_seconds_total{mode=\"idle\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])) * 100)", "legendFormat": "CPU Usage %", "refId": "A" },
        { "expr": "rate(host_cpu_seconds_total{mode=\"system\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m]) * 100", "legendFormat": "System CPU %", "refId": "B" },
        { "expr": "rate(host_cpu_seconds_total{mode=\"user\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m]) * 100", "legendFormat": "User CPU %", "refId": "C" }
      ],
      "fieldConfig": { "defaults": { "unit": "percent", "max": 100, "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 14 }
    },
    {
      "id": 5,
      "title": "Memory Usage",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "host_memory_total_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Total Memory", "refId": "A" },
        { "expr": "host_memory_available_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Available Memory", "refId": "B" },
        { "expr": "host_memory_cached_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Cached Memory", "refId": "C" },
        { "expr": "host_memory_total_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} - host_memory_available_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Used Memory", "refId": "D" }
      ],
      "fieldConfig": { "defaults": { "unit": "bytes", "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 14 }
    },
    {
      "id": 6,
      "title": "Database Activity Rates",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "rate(neon_pg_stats_userdb{kind=\"inserted\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Inserts/sec - {{datname}}", "refId": "A" },
        { "expr": "rate(neon_pg_stats_userdb{kind=\"updated\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Updates/sec - {{datname}}", "refId": "B" },
        { "expr": "rate(neon_pg_stats_userdb{kind=\"deleted\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Deletes/sec - {{datname}}", "refId": "C" }
      ],
      "fieldConfig": { "defaults": { "unit": "rps", "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 22 }
    },
    {
      "id": 7,
      "title": "Cache Performance",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} / (neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} + neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}) * 100", "legendFormat": "Cache Hit Rate %", "refId": "A" },
        { "expr": "rate(neon_lfc_hits{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Cache Hits/sec", "refId": "B" },
        { "expr": "rate(neon_lfc_misses{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Cache Misses/sec", "refId": "C" }
      ],
      "fieldConfig": {
        "defaults": { "unit": "short", "min": 0 },
        "overrides": [
          { "matcher": { "id": "byName", "options": "Cache Hit Rate %" }, "properties": [{ "id": "unit", "value": "percent" }, { "id": "max", "value": 100 }] }
        ]
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 22 }
    },
    {
      "id": 8,
      "title": "Replication Status",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "neon_replication_delay_bytes{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Replication Delay (Bytes)", "refId": "A" },
        { "expr": "neon_replication_delay_seconds{endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "legendFormat": "Replication Delay (Seconds)", "refId": "B" }
      ],
      "fieldConfig": {
        "defaults": { "unit": "short", "min": 0 },
        "overrides": [
          { "matcher": { "id": "byName", "options": "Replication Delay (Bytes)" }, "properties": [{ "id": "unit", "value": "bytes" }] },
          { "matcher": { "id": "byName", "options": "Replication Delay (Seconds)" }, "properties": [{ "id": "unit", "value": "s" }] }
        ]
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 30 }
    },
    {
      "id": 9,
      "title": "Deadlocks & Errors",
      "type": "timeseries",
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "targets": [
        { "expr": "increase(neon_pg_stats_userdb{kind=\"deadlocks\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}[5m])", "legendFormat": "Deadlocks - {{datname}}", "refId": "A" }
      ],
      "fieldConfig": { "defaults": { "unit": "short", "min": 0 } },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 30 }
    },
    {
      "id": 10,
      "title": "PostgreSQL Error Logs",
      "type": "logs",
      "datasource": { "type": "loki", "uid": "${DS_LOKI}" },
      "targets": [
        { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)error|fatal|panic\"", "refId": "A" }
      ],
      "options": {
        "showTime": true,
        "showLabels": true,
        "showCommonLabels": false,
        "wrapLogMessage": true,
        "prettifyLogMessage": false,
        "enableLogDetails": true,
        "dedupStrategy": "none",
        "sortOrder": "Descending"
      },
      "gridPos": { "h": 10, "w": 24, "x": 0, "y": 38 }
    },
    {
      "id": 11,
      "title": "Connection Events",
      "type": "logs",
      "datasource": { "type": "loki", "uid": "${DS_LOKI}" },
      "targets": [
        { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)connection|connect|disconnect\"", "refId": "A" }
      ],
      "options": {
        "showTime": true,
        "showLabels": false,
        "showCommonLabels": false,
        "wrapLogMessage": true,
        "enableLogDetails": true,
        "sortOrder": "Descending"
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 48 }
    },
    {
      "id": 12,
      "title": "Query Performance Logs",
      "type": "logs",
      "datasource": { "type": "loki", "uid": "${DS_LOKI}" },
      "targets": [
        { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"} |~ \"(?i)slow|duration|statement|query\" | logfmt", "refId": "A" }
      ],
      "options": {
        "showTime": true,
        "showLabels": false,
        "showCommonLabels": false,
        "wrapLogMessage": true,
        "enableLogDetails": true,
        "sortOrder": "Descending"
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 48 }
    },
    {
      "id": 13,
      "title": "Recent Log Activity",
      "type": "logs",
      "datasource": { "type": "loki", "uid": "${DS_LOKI}" },
      "targets": [
        { "expr": "{service_name=\"$service_name\", endpoint_id=~\"$endpoint_id\", project_id=~\"$project_id\"}", "refId": "A" }
      ],
      "options": {
        "showTime": true,
        "showLabels": false,
        "showCommonLabels": false,
        "wrapLogMessage": true,
        "enableLogDetails": true,
        "sortOrder": "Descending"
      },
      "maxDataPoints": 1000,
      "gridPos": { "h": 10, "w": 24, "x": 0, "y": 56 }
    }
  ],
  "templating": {
    "list": [
      {
        "name": "DS_PROMETHEUS",
        "label": "Prometheus Datasource",
        "type": "datasource",
        "query": "prometheus",
        "hide": 0,
        "refresh": 1,
        "current": { "selected": false, "text": "Prometheus", "value": "prometheus" }
      },
      {
        "name": "DS_LOKI",
        "label": "Loki Datasource",
        "type": "datasource",
        "query": "loki",
        "hide": 0,
        "refresh": 1,
        "current": { "selected": false, "text": "Loki", "value": "loki" }
      },
      {
        "name": "endpoint_id",
        "label": "Endpoint ID",
        "type": "query",
        "query": { "query": "label_values(neon_connection_counts, endpoint_id)", "refId": "StandardVariableQuery" },
        "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
        "refresh": 2,
        "multi": true,
        "includeAll": true,
        "allValue": ".*",
        "current": { "selected": false, "text": "All", "value": "$__all" }
      },
      {
        "name": "project_id",
        "label": "Project ID",
        "type": "query",
        "query": { "query": "label_values(neon_connection_counts, project_id)", "refId": "StandardVariableQuery" },
        "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
        "refresh": 2,
        "multi": true,
        "includeAll": true,
        "allValue": ".*",
        "current": { "selected": false, "text": "All", "value": "$__all" }
      },
      {
        "name": "service_name",
        "label": "Service Name",
        "type": "query",
        "query": { "query": "label_values({__name__=~\".+\"}, service_name)", "refId": "StandardVariableQuery" },
        "datasource": { "type": "loki", "uid": "${DS_LOKI}" },
        "refresh": 2,
        "multi": false,
        "includeAll": false,
        "current": { "selected": false, "text": "", "value": "" }
      }
    ]
  },
  "annotations": {
    "list": [
      {
        "name": "High CPU",
        "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
        "expr": "100 - (avg(rate(host_cpu_seconds_total{mode=\"idle\"}[5m])) * 100) > 80",
        "titleFormat": "High CPU Usage",
        "textFormat": "CPU usage is above 80%",
        "iconColor": "red"
      },
      {
        "name": "Low Cache Hit Rate",
        "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
        "expr": "neon_lfc_hits / (neon_lfc_hits + neon_lfc_misses) * 100 < 90",
        "titleFormat": "Low Cache Hit Rate",
        "textFormat": "Cache hit rate dropped below 90%",
        "iconColor": "yellow"
      }
    ]
  },
  "links": [
    { "title": "Neon Console", "url": "https://console.neon.tech", "type": "link", "icon": "external link" },
    { "title": "Metrics Reference", "url": "https://neon.com/docs/reference/metrics-logs", "type": "link", "icon": "doc" }
  ]
}
```

## Available metrics

Neon exports a comprehensive set of metrics including connection counts, database size, replication delay, and compute metrics (CPU and memory usage). For a complete list of all available metrics with detailed descriptions, see the [Metrics and logs reference](/docs/reference/metrics-logs).

## Export Postgres logs

You can export your Postgres logs from your Neon compute to your Grafana Cloud stack. These logs provide visibility into database activity, errors, and performance. The logs are automatically sent to Grafana Cloud Loki and can be queried using LogQL.

### Performance impact

Enabling this feature may result in:

- An increase in compute resource usage for log processing
- Additional network egress for log transmission, which is billed after 100 GB on paid plans
- Associated costs based on log volume in Grafana Cloud

### Querying logs in Grafana Cloud

Once logs are flowing, you can query them in Grafana's Explore view using LogQL:

```logql
# View all logs from your Neon service
{service_name="your-service-name"}

# Filter for errors only
{service_name="your-service-name"} |= "ERROR"

# View connection events
{service_name="your-service-name"} |= "connection"
```

## Set up alerts

Create alerts for key metrics to monitor your database health:

1. **High CPU Usage**: Alert when CPU usage exceeds 80%

   ```promql
   rate(host_cpu_seconds_total{mode!="idle"}[5m]) > 0.8
   ```

2. **Low Cache Hit Rate**: Alert when cache hit rate drops below 90%

   ```promql
   neon_lfc_hits / (neon_lfc_hits + neon_lfc_misses) < 0.9
   ```

3. **High Connection Count**: Alert when connections exceed your threshold

   ```promql
   sum(neon_connection_counts) > 100
   ```

## Feedback and future improvements

We're always looking to improve! If you have feature requests or feedback, please let us know via the [Feedback form](https://console.neon.tech/app/projects?modal=feedback) in the Neon Console or on our [Discord channel](https://discord.com/channels/1176467419317940276/1176788564890112042).