Grafana Data Sources
Data sources are connections to where your observability data lives. Grafana supports 50+ data sources including Prometheus, Loki, Elasticsearch, and many others. This guide covers adding, configuring, and using data sources in Grafana.
What are Data Sources?
Data sources are connections to external systems that store your data:
- Metrics - Prometheus, InfluxDB, Graphite
- Logs - Loki, Elasticsearch, CloudWatch
- Traces - Tempo, Jaeger, Zipkin
- Databases - MySQL, PostgreSQL, MongoDB
Adding a Data Source
Basic Workflow
- Go to Configuration → Data Sources
- Click Add data source
- Select data source type
- Configure connection settings
- Test connection
- Click Save & Test
Configuration Steps
Step 1: Select Type
- Choose from available data sources
- Popular choices: Prometheus, Loki, Tempo
Step 2: Configure Connection
- URL: Data source endpoint
- Access: Server (proxy) or Browser (direct)
- Authentication: If required
Step 3: Test Connection
- Click Save & Test
- Verify “Data source is working” message
Step 4: Set as Default (Optional)
- Check “Set as default” for primary data source
Common Data Sources
Prometheus
Metrics and time-series data:
Configuration:

```
Name: Prometheus
Type: Prometheus
URL: http://prometheus.monitoring.svc.cluster.local:9090
Access: Server (default)
```
Settings:
- URL: Prometheus server URL
- Access: Server (recommended for Kubernetes)
- Scrape interval: set to match Prometheus's configured scrape interval (used to compute the query step)
- Query timeout: maximum time to wait for a query, in seconds
Advanced Settings:
- HTTP Method: GET or POST
- Custom query parameters: Additional URL parameters
- Timeout: Request timeout
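When Prometheus is provisioned from a file rather than the UI, these settings map onto the `jsonData` block; a hedged sketch following Grafana's provisioning schema (the URL and parameter values are examples, not required values):

```yaml
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.monitoring.svc.cluster.local:9090
    jsonData:
      httpMethod: POST                  # HTTP Method setting
      timeInterval: "30s"               # Scrape interval
      queryTimeout: "60s"               # Query timeout
      customQueryParameters: "max_source_resolution=5m"  # Custom query parameters
```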
Loki
Log aggregation:
Configuration:

```
Name: Loki
Type: Loki
URL: http://loki.monitoring.svc.cluster.local:3100
Access: Server (default)
```
Settings:
- URL: Loki server URL
- Access: Server (recommended)
- Maximum lines: cap on log lines returned per query (default 1000)
Derived Fields:
- Extract fields from logs
- Link to other data sources (e.g., Tempo for traces)
Tempo
Distributed tracing:
Configuration:

```
Name: Tempo
Type: Tempo
URL: http://tempo.monitoring.svc.cluster.local:3200
Access: Server (default)
```
Settings:
- URL: Tempo server URL
- Access: Server
- Basic auth: If required
Trace to Logs:
- Link traces to logs in Loki
- Configure Loki data source
- Configure trace ID field
Elasticsearch
Logs and search:
Configuration:

```
Name: Elasticsearch
Type: Elasticsearch
URL: http://elasticsearch.logging.svc.cluster.local:9200
Access: Server (default)
```
Settings:
- URL: Elasticsearch URL
- Index name: Default index pattern
- Time field: Time field name
- Access: Server or Direct
Authentication:
- Basic auth: Username/password
- With credentials: Send credentials such as cookies and auth headers with the request
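Provisioned as code, basic auth splits across `basicAuth`/`basicAuthUser` and the encrypted `secureJsonData` block; a sketch (the URL, username, and time field are placeholder assumptions):

```yaml
datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch.logging.svc.cluster.local:9200
    basicAuth: true
    basicAuthUser: grafana
    secureJsonData:
      basicAuthPassword: "<password>"   # stored encrypted by Grafana
    jsonData:
      timeField: "@timestamp"
```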
Connection Settings
Access Modes
Server (Proxy):
- Grafana proxies requests to data source
- Recommended for Kubernetes
- Better security (no CORS issues)
- Uses Grafana’s network context
Browser (Direct):
- Browser connects directly to data source
- Requires CORS configuration
- Better for public data sources
- Direct client connection
URL Configuration
Kubernetes Services:

```
http://<service-name>.<namespace>.svc.cluster.local:<port>
```

Examples:

```
# Prometheus in monitoring namespace
http://prometheus.monitoring.svc.cluster.local:9090

# Loki in logging namespace
http://loki.logging.svc.cluster.local:3100
```

External URLs:

```
https://prometheus.example.com
```
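A tiny shell helper for composing these in-cluster URLs; the function name `svc_url` is my own, not a Grafana or kubectl command:

```shell
# Build an in-cluster service URL from service, namespace, and port.
svc_url() {
  printf 'http://%s.%s.svc.cluster.local:%s\n' "$1" "$2" "$3"
}

svc_url prometheus monitoring 9090
# http://prometheus.monitoring.svc.cluster.local:9090
svc_url loki logging 3100
# http://loki.logging.svc.cluster.local:3100
```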
Authentication
No Auth:
- No authentication required
- For internal services
Basic Auth:
- Username and password
- Configure in data source settings
With Credentials:
- Sends credentials such as cookies and auth headers with cross-site requests
- Mainly relevant in Browser (direct) access mode
Header Auth:
- Custom HTTP headers
- For API key authentication
TLS/SSL:
- Skip TLS verification (development only)
- TLS client certificate
- Custom CA certificate
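These options also map onto provisioning fields; a sketch of header auth combined with a custom CA (the data source name, URL, header value, and certificate are placeholders):

```yaml
datasources:
  - name: Secured-Prometheus
    type: prometheus
    access: proxy
    url: https://prometheus.example.com
    jsonData:
      httpHeaderName1: "Authorization"   # Header auth: header name
      tlsAuthWithCACert: true            # use a custom CA certificate
    secureJsonData:
      httpHeaderValue1: "Bearer <api-token>"  # header value, stored encrypted
      tlsCACert: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
```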
Testing Data Source Connectivity
Test Connection
After configuration:
- Click Save & Test
- Verify success message: “Data source is working”
Manual Testing
Prometheus:

```bash
# Test from the Grafana pod (quote the URL so the shell
# does not interpret ? and & in the query string)
kubectl exec -it <grafana-pod> -n monitoring -- \
  wget -qO- 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=up'
```

Loki:

```bash
# Test from the Grafana pod
kubectl exec -it <grafana-pod> -n monitoring -- \
  wget -qO- http://loki.monitoring.svc.cluster.local:3100/ready
```
Data Source Variables
Use variables to make data sources dynamic:
Creating a Data Source Variable
- Open the dashboard and go to Dashboard settings → Variables
- Click Add variable
- Set:
  - Name: `datasource`
  - Type: `Data source`
  - Data source type: `Prometheus` (or another specific type)
Using in Queries
The query itself does not change; the variable only selects which data source runs it:

```promql
rate(http_requests_total[$__rate_interval])
```
In panel data source selection:
- Select $datasource from data source dropdown
Multiple Data Sources
Using Multiple Sources in One Dashboard
Different Panels:
- Each panel can use a different data source
- Select the data source per panel
Same Panel:
- Select the built-in Mixed data source to combine queries from different sources in one panel
- Configure each query's data source individually
Example: Correlating Metrics and Logs
Panel 1: Metrics (Prometheus)

```promql
rate(http_requests_total[5m])
```

Panel 2: Logs (Loki)

```logql
{container="my-app"} |= "error"
```
Correlation:
- Link panels using time range
- Use trace IDs to link traces and logs
Data Source Configuration Examples
Prometheus in Kubernetes
```yaml
# ConfigMap for Grafana data source provisioning
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.monitoring.svc.cluster.local:9090
        isDefault: true
        jsonData:
          timeInterval: "30s"
```
Loki with Derived Fields
```yaml
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki.logging.svc.cluster.local:3100
    jsonData:
      derivedFields:
        - datasourceUid: tempo
          matcherRegex: "traceID=(\\w+)"
          name: TraceID
          url: '$${__value.raw}'
```
Tempo with Trace to Logs
```yaml
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo.monitoring.svc.cluster.local:3200
    jsonData:
      tracesToLogs:
        datasourceUid: loki
        tags: ['job', 'instance', 'pod', 'namespace']
        mappedTags: [{ key: 'service.name', value: 'service' }]
        mapTagNamesEnabled: false
        spanStartTimeShift: '1h'
        spanEndTimeShift: '1h'
        filterByTraceID: false
        filterBySpanID: false
```
Dashboard Provisioning
Provisioning Data Sources
Store data source configurations as YAML:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.monitoring.svc.cluster.local:9090
        isDefault: true
      - name: Loki
        type: loki
        access: proxy
        url: http://loki.logging.svc.cluster.local:3100
      - name: Tempo
        type: tempo
        access: proxy
        url: http://tempo.monitoring.svc.cluster.local:3200
```
Mount in Grafana:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  template:
    spec:
      containers:
        - name: grafana
          volumeMounts:
            - name: datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: datasources
          configMap:
            name: grafana-datasources
```
Troubleshooting
Connection Failed
Problem: “Data source is not working”
Solutions:

```bash
# Check the service exists
kubectl get svc -n monitoring prometheus

# Test connectivity from the Grafana pod
kubectl exec -it <grafana-pod> -n monitoring -- \
  wget -qO- http://prometheus.monitoring.svc.cluster.local:9090/api/v1/status/config

# Check network policies
kubectl get networkpolicies -n monitoring

# Verify DNS resolution
kubectl exec -it <grafana-pod> -n monitoring -- \
  nslookup prometheus.monitoring.svc.cluster.local
```
Authentication Issues
Problem: “Unauthorized” or “Forbidden”
Solutions:
- Verify credentials
- Check RBAC policies
- Verify authentication method
- Check TLS certificates
No Data Returned
Problem: Queries return no data
Solutions:
- Verify query syntax
- Check time range
- Verify data exists in data source
- Check label matching
Performance Issues
Problem: Slow queries or timeouts
Solutions:
- Increase query timeout
- Optimize queries
- Check data source performance
- Use recording rules
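For the recording-rules suggestion, precompute expensive expressions on the Prometheus side so Grafana queries the cheap, pre-aggregated series instead; a minimal Prometheus rule-file sketch (group, interval, and metric names are examples):

```yaml
# prometheus-rules.yaml — evaluated by Prometheus, not Grafana
groups:
  - name: http-aggregations
    interval: 30s
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```

Dashboards can then query `job:http_requests:rate5m` directly rather than re-evaluating the raw rate over every series on each refresh.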
Best Practices
1. Use Server Access Mode
Prefer Server (Proxy) access mode:
- Better security
- No CORS issues
- Uses Kubernetes service DNS
2. Use Kubernetes Service URLs
Use internal service URLs:

```
http://<service>.<namespace>.svc.cluster.local:<port>
```
3. Set Default Data Source
Set your primary data source as default:
- Simplifies panel creation
- Faster workflow
4. Test Connections
Always test connections after configuration:
- Click “Save & Test”
- Verify success message
5. Provision Data Sources
Store configurations as code:
- Use ConfigMaps
- Version control
- Automated deployment
6. Use Variables
Create data source variables for flexibility:
- Switch between environments
- A/B testing
- Multi-cluster setups
7. Configure Timeouts
Set appropriate timeouts:
- Default: 30 seconds
- Adjust based on query complexity
8. Monitor Data Sources
Monitor data source health:
- Check connectivity
- Monitor query performance
- Track errors
See Also
- Dashboards - Creating dashboards with data sources
- Prometheus - Prometheus configuration
- Loki - Loki configuration