mirror of https://github.com/grafana/grafana
Postgres Data Source (#9475)
* add postgresql datasource
* add rest of files for postgres datasource
* fix timeseries query, remove unused code
* consistent naming, refactoring
* s/mysql/postgres/
* s/mysql/postgres/
* couple more tests
* tests for more datatypes
* fix macros for postgres
* add __timeSec macro
* add frontend for postgres datasource
* adjust documentation
* fix formatting
* add proper plugin description
* merge editor changes from mysql
* port changes from mysql datasource
* set proper defaultQuery for postgres
* add time_sec to timeseries query, accept int for value for timeseries query
* revert allowing time_sec and handle int or float values as unix timestamp for "time" column
* fix tslint error
* handle decimal values in timeseries query
* allow setting sslmode for postgres datasource
* use type switch for handling data types
* fix value for timeseries query
* refactor timeseries queries to make them more flexible
* remove debug statement from inner loop in type conversion
* use plain for loop in getTypedRowData
* fix timeseries queries
* adjust postgres datasource to tsdb refactoring
* adjust postgres datasource to frontend changes
* update lib/pq to latest version
* move type conversion to getTypedRowData
* handle address types cidr, inet and macaddr
* adjust response parser and docs for annotations
* convert unknown types to string
* add documentation for postgres datasource
* add another example query with metric column
* set more helpful default query
* update help text in query editor
* handle NULL in value column of timeseries query
* add __timeGroup macro
* add test for __timeGroup macro
* document __timeGroup and set proper default query for annotations
* fix typos in docs
* add postgres to list of datasources
* add postgres to builtInPlugins
* mysql: refactoring as prep for merging postgres. Refactors out the initialization of the xorm engine and the query logic for an sql data source.
* mysql: rename refactoring + test update
* postgres: refactor to use SqlEngine (same as mysql). Refactored to use a common base class with the MySql data source. Other changes from the original PR:
  - Changed time column to be time_sec to allow other time units in the future and to be the same as MySQL
  - Changed integration test to test the main Query method rather than the private transformToTable method
  - Changed the __timeSec macro name to __timeEpoch
  - Renamed PostgresExecutor to PostgresQueryEndpoint
  Fixes #9209 (the original PR)
* postgres: encrypt password on config page. With some other cosmetic changes to the config page:
  - placeholder texts
  - reset button for the password after it has been encrypted
  - default value for the sslmode field
* postgres: change back col name to time from time_sec
* postgres mysql: remove annotation title. Title has been removed from annotations.
* postgres: fix images for docs page
* postgres mysql: fix specs

pull/9497/head
parent 630e6f5da6
commit d1c9760fa8
@ -0,0 +1,186 @@
+++
title = "Using PostgreSQL in Grafana"
description = "Guide for using PostgreSQL in Grafana"
keywords = ["grafana", "postgresql", "guide"]
type = "docs"
[menu.docs]
name = "PostgreSQL"
parent = "datasources"
weight = 7
+++

# Using PostgreSQL in Grafana

Grafana ships with a built-in PostgreSQL data source plugin that allows you to query and visualize data from a PostgreSQL-compatible database.

## Adding the data source

1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *PostgreSQL* from the *Type* dropdown.
### Database User Permissions (Important!)

The database user you specify when you add the data source should only be granted SELECT permissions on
the specific database and tables you want to query. Grafana does not validate that a query is safe. A query
could include any SQL statement. For example, statements like `DELETE FROM user;` and `DROP TABLE user;` would be
executed. To protect against this we **highly** recommend you create a specific PostgreSQL user with restricted permissions.

Example:

```sql
CREATE USER grafanareader WITH PASSWORD 'password';
GRANT USAGE ON SCHEMA schema TO grafanareader;
GRANT SELECT ON schema.table TO grafanareader;
```

Make sure the user does not get any unwanted privileges from the public role.
## Macros

To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.

Macro example | Description
------------ | -------------
*$__time(dateColumn)* | Will be replaced by an expression to rename the column to `time`. For example, *dateColumn as time*
*$__timeEpoch(dateColumn)* | Will be replaced by an expression to rename the column to `time` and convert the value to a unix timestamp. For example, *extract(epoch from dateColumn) as time*
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *dateColumn > to_timestamp(1494410783) AND dateColumn < to_timestamp(1494497183)*
*$__timeFrom()* | Will be replaced by the start of the currently active time selection. For example, *to_timestamp(1494410783)*
*$__timeTo()* | Will be replaced by the end of the currently active time selection. For example, *to_timestamp(1494497183)*
*$__timeGroup(dateColumn,'5m')* | Will be replaced by an expression usable in a GROUP BY clause. For example, *(extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int*
*$__unixEpochFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name with times represented as a unix timestamp. For example, *dateColumn > 1494410783 AND dateColumn < 1494497183*
*$__unixEpochFrom()* | Will be replaced by the start of the currently active time selection as a unix timestamp. For example, *1494410783*
*$__unixEpochTo()* | Will be replaced by the end of the currently active time selection as a unix timestamp. For example, *1494497183*

We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.

The query editor has a link named `Generated SQL` that shows up after a query has been executed, while in panel edit mode. Click on it and it will expand and show the raw interpolated SQL string that was executed.
## Table queries

If the `Format as` query option is set to `Table` then you can do any type of SQL query. The table panel will automatically show the results of whatever columns and rows your query returns.

Query editor with example query:



The query:

```sql
SELECT
  title as "Title",
  "user".login as "Created By",
  dashboard.created as "Created On"
FROM dashboard
INNER JOIN "user" on "user".id = dashboard.created_by
WHERE $__timeFilter(dashboard.created)
```

You can control the names of the Table panel columns by using regular `as` SQL column alias syntax.

The resulting table panel:


### Time series queries

If you set `Format as` to `Time series`, for use in a Graph panel for example, then the query must return a column named `time` containing either an SQL datetime or any numeric datatype representing a unix epoch in seconds.
Any column except `time` and `metric` is treated as a value column.
You may return a column named `metric` that is used as the metric name for the value column.

Example with `metric` column:

```sql
SELECT
  min(time_date_time) as time,
  min(value_double),
  'min' as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY metric1, (extract(epoch from time_date_time)/extract(epoch from $__interval::interval))::int
ORDER BY time asc
```

Example with multiple columns:

```sql
SELECT
  min(time_date_time) as time,
  min(value_double) as min_value,
  max(value_double) as max_value
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY metric1, (extract(epoch from time_date_time)/extract(epoch from $__interval::interval))::int
ORDER BY time asc
```
## Templating

Instead of hard-coding things like server, application, and sensor names in your metric queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.

Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different types of template variables.

### Query Variable

If you add a template variable of the type `Query`, you can write a PostgreSQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.

For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting.

```sql
SELECT hostname FROM host
```

A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.

```sql
SELECT host.hostname, other_host.hostname2 FROM host JOIN other_host ON host.city = other_host.city
```

Another option is a query that can create a key/value variable. The query should return two columns named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allow you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:

```sql
SELECT hostname AS __text, id AS __value FROM host
```

You can also create nested variables. For example, if you had another variable named `region`, you could have
the hosts variable only show hosts from the currently selected region with a query like this (if `region` is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values):

```sql
SELECT hostname FROM host WHERE region IN($region)
```
### Using Variables in Queries

Template variables are quoted automatically, so if a variable holds a string value do not wrap it in quotes in WHERE clauses. If the variable is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values.

There are two syntaxes:

`$<varname>` Example with a template variable named `hostname`:

```sql
SELECT
  atimestamp as time,
  aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```

`[[varname]]` Example with a template variable named `hostname`:

```sql
SELECT
  atimestamp as time,
  aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp ASC
```

## Alerting

Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.
@ -0,0 +1,99 @@
package postgres

import (
	"fmt"
	"regexp"
	"strings"

	"github.com/grafana/grafana/pkg/tsdb"
)

//const rsString = `(?:"([^"]*)")`;
const rsIdentifier = `([_a-zA-Z0-9]+)`
const sExpr = `\$` + rsIdentifier + `\(([^\)]*)\)`

type PostgresMacroEngine struct {
	TimeRange *tsdb.TimeRange
}

func NewPostgresMacroEngine() tsdb.SqlMacroEngine {
	return &PostgresMacroEngine{}
}

func (m *PostgresMacroEngine) Interpolate(timeRange *tsdb.TimeRange, sql string) (string, error) {
	m.TimeRange = timeRange
	rExp, _ := regexp.Compile(sExpr)
	var macroError error

	sql = replaceAllStringSubmatchFunc(rExp, sql, func(groups []string) string {
		res, err := m.evaluateMacro(groups[1], strings.Split(groups[2], ","))
		if err != nil && macroError == nil {
			macroError = err
			return "macro_error()"
		}
		return res
	})

	if macroError != nil {
		return "", macroError
	}

	return sql, nil
}

func replaceAllStringSubmatchFunc(re *regexp.Regexp, str string, repl func([]string) string) string {
	result := ""
	lastIndex := 0

	for _, v := range re.FindAllSubmatchIndex([]byte(str), -1) {
		groups := []string{}
		for i := 0; i < len(v); i += 2 {
			groups = append(groups, str[v[i]:v[i+1]])
		}

		result += str[lastIndex:v[0]] + repl(groups)
		lastIndex = v[1]
	}

	return result + str[lastIndex:]
}

func (m *PostgresMacroEngine) evaluateMacro(name string, args []string) (string, error) {
	switch name {
	case "__time":
		if len(args) == 0 {
			return "", fmt.Errorf("missing time column argument for macro %v", name)
		}
		return fmt.Sprintf("%s AS \"time\"", args[0]), nil
	case "__timeEpoch":
		if len(args) == 0 {
			return "", fmt.Errorf("missing time column argument for macro %v", name)
		}
		return fmt.Sprintf("extract(epoch from %s) as \"time\"", args[0]), nil
	case "__timeFilter":
		if len(args) == 0 {
			return "", fmt.Errorf("missing time column argument for macro %v", name)
		}
		return fmt.Sprintf("%s >= to_timestamp(%d) AND %s <= to_timestamp(%d)", args[0], uint64(m.TimeRange.GetFromAsMsEpoch()/1000), args[0], uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
	case "__timeFrom":
		return fmt.Sprintf("to_timestamp(%d)", uint64(m.TimeRange.GetFromAsMsEpoch()/1000)), nil
	case "__timeTo":
		return fmt.Sprintf("to_timestamp(%d)", uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
	case "__timeGroup":
		if len(args) < 2 {
			return "", fmt.Errorf("macro %v needs time column and interval", name)
		}
		return fmt.Sprintf("(extract(epoch from \"%s\")/extract(epoch from %s::interval))::int", args[0], args[1]), nil
	case "__unixEpochFilter":
		if len(args) == 0 {
			return "", fmt.Errorf("missing time column argument for macro %v", name)
		}
		return fmt.Sprintf("%s >= %d AND %s <= %d", args[0], uint64(m.TimeRange.GetFromAsMsEpoch()/1000), args[0], uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
	case "__unixEpochFrom":
		return fmt.Sprintf("%d", uint64(m.TimeRange.GetFromAsMsEpoch()/1000)), nil
	case "__unixEpochTo":
		return fmt.Sprintf("%d", uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
	default:
		return "", fmt.Errorf("Unknown macro %v", name)
	}
}
@ -0,0 +1,80 @@
package postgres

import (
	"testing"

	"github.com/grafana/grafana/pkg/tsdb"
	. "github.com/smartystreets/goconvey/convey"
)

func TestMacroEngine(t *testing.T) {
	Convey("MacroEngine", t, func() {
		engine := &PostgresMacroEngine{}
		timeRange := &tsdb.TimeRange{From: "5m", To: "now"}

		Convey("interpolate __time function", func() {
			sql, err := engine.Interpolate(nil, "select $__time(time_column)")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select time_column AS \"time\"")
		})

		Convey("interpolate __time function wrapped in aggregation", func() {
			sql, err := engine.Interpolate(nil, "select min($__time(time_column))")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select min(time_column AS \"time\")")
		})

		Convey("interpolate __timeFilter function", func() {
			sql, err := engine.Interpolate(timeRange, "WHERE $__timeFilter(time_column)")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "WHERE time_column >= to_timestamp(18446744066914186738) AND time_column <= to_timestamp(18446744066914187038)")
		})

		Convey("interpolate __timeFrom function", func() {
			sql, err := engine.Interpolate(timeRange, "select $__timeFrom(time_column)")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select to_timestamp(18446744066914186738)")
		})

		Convey("interpolate __timeGroup function", func() {
			sql, err := engine.Interpolate(timeRange, "GROUP BY $__timeGroup(time_column,'5m')")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "GROUP BY (extract(epoch from \"time_column\")/extract(epoch from '5m'::interval))::int")
		})

		Convey("interpolate __timeTo function", func() {
			sql, err := engine.Interpolate(timeRange, "select $__timeTo(time_column)")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select to_timestamp(18446744066914187038)")
		})

		Convey("interpolate __unixEpochFilter function", func() {
			sql, err := engine.Interpolate(timeRange, "select $__unixEpochFilter(18446744066914186738)")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select 18446744066914186738 >= 18446744066914186738 AND 18446744066914186738 <= 18446744066914187038")
		})

		Convey("interpolate __unixEpochFrom function", func() {
			sql, err := engine.Interpolate(timeRange, "select $__unixEpochFrom()")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select 18446744066914186738")
		})

		Convey("interpolate __unixEpochTo function", func() {
			sql, err := engine.Interpolate(timeRange, "select $__unixEpochTo()")
			So(err, ShouldBeNil)

			So(sql, ShouldEqual, "select 18446744066914187038")
		})
	})
}
@ -0,0 +1,245 @@
package postgres

import (
	"container/list"
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/go-xorm/core"
	"github.com/grafana/grafana/pkg/components/null"
	"github.com/grafana/grafana/pkg/log"
	"github.com/grafana/grafana/pkg/models"
	"github.com/grafana/grafana/pkg/tsdb"
)

type PostgresQueryEndpoint struct {
	sqlEngine tsdb.SqlEngine
	log       log.Logger
}

func init() {
	tsdb.RegisterTsdbQueryEndpoint("postgres", NewPostgresQueryEndpoint)
}

func NewPostgresQueryEndpoint(datasource *models.DataSource) (tsdb.TsdbQueryEndpoint, error) {
	endpoint := &PostgresQueryEndpoint{
		log: log.New("tsdb.postgres"),
	}

	endpoint.sqlEngine = &tsdb.DefaultSqlEngine{
		MacroEngine: NewPostgresMacroEngine(),
	}

	cnnstr := generateConnectionString(datasource)
	endpoint.log.Debug("getEngine", "connection", cnnstr)

	if err := endpoint.sqlEngine.InitEngine("postgres", datasource, cnnstr); err != nil {
		return nil, err
	}

	return endpoint, nil
}

func generateConnectionString(datasource *models.DataSource) string {
	password := ""
	for key, value := range datasource.SecureJsonData.Decrypt() {
		if key == "password" {
			password = value
			break
		}
	}

	sslmode := datasource.JsonData.Get("sslmode").MustString("require")
	return fmt.Sprintf("postgres://%s:%s@%s/%s?sslmode=%s", datasource.User, password, datasource.Url, datasource.Database, sslmode)
}
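The connection string above follows the libpq URL format. Below is a standalone sketch (with hypothetical values) that builds the same shape; note that it additionally escapes user and password with `net/url`, which the generator above does not do, and which would be needed if credentials contain characters such as `@` or `/`:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildConnStr assembles a libpq-style connection URL of the same shape as
// the generator above. User and password are URL-escaped here so that
// special characters cannot break the URL structure.
func buildConnStr(user, password, host, database, sslmode string) string {
	return fmt.Sprintf("postgres://%s:%s@%s/%s?sslmode=%s",
		url.QueryEscape(user), url.QueryEscape(password), host, database, sslmode)
}

func main() {
	// Hypothetical credentials; '@' and '/' in the password get escaped.
	fmt.Println(buildConnStr("grafanareader", "p@ss/word", "localhost:5432", "grafanatest", "require"))
	// postgres://grafanareader:p%40ss%2Fword@localhost:5432/grafanatest?sslmode=require
}
```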

func (e *PostgresQueryEndpoint) Query(ctx context.Context, dsInfo *models.DataSource, tsdbQuery *tsdb.TsdbQuery) (*tsdb.Response, error) {
	return e.sqlEngine.Query(ctx, dsInfo, tsdbQuery, e.transformToTimeSeries, e.transformToTable)
}

func (e PostgresQueryEndpoint) transformToTable(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
	columnNames, err := rows.Columns()
	if err != nil {
		return err
	}

	table := &tsdb.Table{
		Columns: make([]tsdb.TableColumn, len(columnNames)),
		Rows:    make([]tsdb.RowValues, 0),
	}

	for i, name := range columnNames {
		table.Columns[i].Text = name
	}

	rowLimit := 1000000
	rowCount := 0

	for ; rows.Next(); rowCount++ {
		if rowCount > rowLimit {
			return fmt.Errorf("PostgreSQL query row limit exceeded, limit %d", rowLimit)
		}

		values, err := e.getTypedRowData(rows)
		if err != nil {
			return err
		}

		table.Rows = append(table.Rows, values)
	}

	result.Tables = append(result.Tables, table)
	result.Meta.Set("rowCount", rowCount)
	return nil
}

func (e PostgresQueryEndpoint) getTypedRowData(rows *core.Rows) (tsdb.RowValues, error) {
	types, err := rows.ColumnTypes()
	if err != nil {
		return nil, err
	}

	values := make([]interface{}, len(types))
	valuePtrs := make([]interface{}, len(types))

	for i := 0; i < len(types); i++ {
		valuePtrs[i] = &values[i]
	}

	if err := rows.Scan(valuePtrs...); err != nil {
		return nil, err
	}

	// convert types not handled by lib/pq
	// unhandled types are returned as []byte
	for i := 0; i < len(types); i++ {
		if value, ok := values[i].([]byte); ok {
			switch types[i].DatabaseTypeName() {
			case "NUMERIC":
				if v, err := strconv.ParseFloat(string(value), 64); err == nil {
					values[i] = v
				} else {
					e.log.Debug("Rows", "Error converting numeric to float", value)
				}
			case "UNKNOWN", "CIDR", "INET", "MACADDR":
				// char literals have type UNKNOWN
				values[i] = string(value)
			default:
				e.log.Debug("Rows", "Unknown database type", types[i].DatabaseTypeName(), "value", value)
				values[i] = string(value)
			}
		}
	}

	return values, nil
}
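lib/pq hands NUMERIC (and several other) columns back as `[]byte`; the conversion above boils down to a `strconv.ParseFloat` with a string fallback. Sketched in isolation (the helper name is ours, not Grafana's):

```go
package main

import (
	"fmt"
	"strconv"
)

// toFloatOrString mirrors the NUMERIC branch above: parse the raw bytes as a
// float64, falling back to the string form when parsing fails (as the
// default branch does for unknown types).
func toFloatOrString(raw []byte) interface{} {
	if v, err := strconv.ParseFloat(string(raw), 64); err == nil {
		return v
	}
	return string(raw)
}

func main() {
	fmt.Println(toFloatOrString([]byte("1.25")))         // 1.25
	fmt.Println(toFloatOrString([]byte("10.10.0.0/16"))) // 10.10.0.0/16 (e.g. a CIDR value)
}
```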

func (e PostgresQueryEndpoint) transformToTimeSeries(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
	pointsBySeries := make(map[string]*tsdb.TimeSeries)
	seriesByQueryOrder := list.New()

	columnNames, err := rows.Columns()
	if err != nil {
		return err
	}

	rowLimit := 1000000
	rowCount := 0
	timeIndex := -1
	metricIndex := -1

	// check columns of resultset
	for i, col := range columnNames {
		switch col {
		case "time":
			timeIndex = i
		case "metric":
			metricIndex = i
		}
	}

	if timeIndex == -1 {
		return fmt.Errorf("Found no column named time")
	}

	for rows.Next() {
		var timestamp float64
		var value null.Float
		var metric string

		if rowCount > rowLimit {
			return fmt.Errorf("PostgreSQL query row limit exceeded, limit %d", rowLimit)
		}

		values, err := e.getTypedRowData(rows)
		if err != nil {
			return err
		}

		switch columnValue := values[timeIndex].(type) {
		case int64:
			timestamp = float64(columnValue * 1000)
		case float64:
			timestamp = columnValue * 1000
		case time.Time:
			timestamp = float64(columnValue.Unix() * 1000)
		default:
			return fmt.Errorf("Invalid type for column time, must be of type timestamp or unix timestamp")
		}

		if metricIndex >= 0 {
			if columnValue, ok := values[metricIndex].(string); ok {
				metric = columnValue
			} else {
				return fmt.Errorf("Column metric must be of type char, varchar or text")
			}
		}

		for i, col := range columnNames {
			if i == timeIndex || i == metricIndex {
				continue
			}

			switch columnValue := values[i].(type) {
			case int64:
				value = null.FloatFrom(float64(columnValue))
			case float64:
				value = null.FloatFrom(columnValue)
			case nil:
				value.Valid = false
			default:
				return fmt.Errorf("Value column must have numeric datatype, column: %s type: %T value: %v", col, columnValue, columnValue)
			}
			if metricIndex == -1 {
				metric = col
			}
			e.appendTimePoint(pointsBySeries, seriesByQueryOrder, metric, timestamp, value)
			rowCount++
		}
	}

	for elem := seriesByQueryOrder.Front(); elem != nil; elem = elem.Next() {
		key := elem.Value.(string)
		result.Series = append(result.Series, pointsBySeries[key])
	}

	result.Meta.Set("rowCount", rowCount)
	return nil
}
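The switch over the `time` column above accepts three concrete types and normalizes them all to a millisecond epoch. The same logic in isolation (the helper name is ours):

```go
package main

import (
	"fmt"
	"time"
)

// toMsEpoch mirrors the time-column switch: int64 and float64 are treated as
// unix seconds, time.Time is converted via Unix(); anything else is an error.
func toMsEpoch(v interface{}) (float64, error) {
	switch t := v.(type) {
	case int64:
		return float64(t * 1000), nil
	case float64:
		return t * 1000, nil
	case time.Time:
		return float64(t.Unix() * 1000), nil
	default:
		return 0, fmt.Errorf("invalid type %T for column time", v)
	}
}

func main() {
	ms, err := toMsEpoch(int64(1494410783))
	fmt.Println(ms, err)
}
```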

func (e PostgresQueryEndpoint) appendTimePoint(pointsBySeries map[string]*tsdb.TimeSeries, seriesByQueryOrder *list.List, metric string, timestamp float64, value null.Float) {
	if series, exist := pointsBySeries[metric]; exist {
		series.Points = append(series.Points, tsdb.TimePoint{value, null.FloatFrom(timestamp)})
	} else {
		series := &tsdb.TimeSeries{Name: metric}
		series.Points = append(series.Points, tsdb.TimePoint{value, null.FloatFrom(timestamp)})
		pointsBySeries[metric] = series
		seriesByQueryOrder.PushBack(metric)
	}
	e.log.Debug("Rows", "metric", metric, "time", timestamp, "value", value)
}
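appendTimePoint pairs a map (for O(1) series lookup) with a `container/list` (to preserve the order in which series first appear in the resultset). The same pattern in isolation, with plain float slices standing in for tsdb.TimeSeries:

```go
package main

import (
	"container/list"
	"fmt"
)

// seriesSet keeps points per metric plus the order in which metrics were
// first seen, mirroring pointsBySeries + seriesByQueryOrder above.
type seriesSet struct {
	points map[string][]float64
	order  *list.List
}

func newSeriesSet() *seriesSet {
	return &seriesSet{points: map[string][]float64{}, order: list.New()}
}

func (s *seriesSet) add(metric string, v float64) {
	if _, ok := s.points[metric]; !ok {
		s.order.PushBack(metric) // remember first-seen order
	}
	s.points[metric] = append(s.points[metric], v)
}

// names returns the metrics in first-seen order.
func (s *seriesSet) names() []string {
	out := []string{}
	for e := s.order.Front(); e != nil; e = e.Next() {
		out = append(out, e.Value.(string))
	}
	return out
}

func main() {
	s := newSeriesSet()
	s.add("b", 1)
	s.add("a", 2)
	s.add("b", 3)
	fmt.Println(s.names()) // [b a]
}
```

A plain map alone would not work here because Go map iteration order is randomized; the list is what keeps the output series in query order.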
@ -0,0 +1,125 @@
package postgres

import (
	"testing"
	"time"

	"github.com/go-xorm/xorm"
	"github.com/grafana/grafana/pkg/components/simplejson"
	"github.com/grafana/grafana/pkg/log"
	"github.com/grafana/grafana/pkg/services/sqlstore/sqlutil"
	"github.com/grafana/grafana/pkg/tsdb"
	_ "github.com/lib/pq"
	. "github.com/smartystreets/goconvey/convey"
)

// To run this test, remove the Skip from SkipConvey
// and set up a PostgreSQL db named grafanatest and a user/password grafanatest/grafanatest
func TestPostgres(t *testing.T) {
	SkipConvey("PostgreSQL", t, func() {
		x := InitPostgresTestDB(t)

		endpoint := &PostgresQueryEndpoint{
			sqlEngine: &tsdb.DefaultSqlEngine{
				MacroEngine: NewPostgresMacroEngine(),
				XormEngine:  x,
			},
			log: log.New("tsdb.postgres"),
		}

		sess := x.NewSession()
		defer sess.Close()

		sql := `
			CREATE TABLE postgres_types(
				c00_smallint smallint,
				c01_integer integer,
				c02_bigint bigint,

				c03_real real,
				c04_double double precision,
				c05_decimal decimal(10,2),
				c06_numeric numeric(10,2),

				c07_char char(10),
				c08_varchar varchar(10),
				c09_text text,

				c10_timestamp timestamp without time zone,
				c11_timestamptz timestamp with time zone,
				c12_date date,
				c13_time time without time zone,
				c14_timetz time with time zone,
				c15_interval interval
			);
		`
		_, err := sess.Exec(sql)
		So(err, ShouldBeNil)

		sql = `
			INSERT INTO postgres_types VALUES(
				1,2,3,
				4.5,6.7,1.1,1.2,
				'char10','varchar10','text',

				now(),now(),now(),now(),now(),'15m'::interval
			);
		`
		_, err = sess.Exec(sql)
		So(err, ShouldBeNil)

		Convey("Query with Table format should map PostgreSQL column types to Go types", func() {
			query := &tsdb.TsdbQuery{
				Queries: []*tsdb.Query{
					{
						Model: simplejson.NewFromAny(map[string]interface{}{
							"rawSql": "SELECT * FROM postgres_types",
							"format": "table",
						}),
						RefId: "A",
					},
				},
			}

			resp, err := endpoint.Query(nil, nil, query)
			So(err, ShouldBeNil)
			queryResult := resp.Results["A"]

			column := queryResult.Tables[0].Rows[0]
			So(column[0].(int64), ShouldEqual, 1)
			So(column[1].(int64), ShouldEqual, 2)
			So(column[2].(int64), ShouldEqual, 3)
			So(column[3].(float64), ShouldEqual, 4.5)
			So(column[4].(float64), ShouldEqual, 6.7)

			// lib/pq doesn't properly convert decimal, numeric and char to Go types but returns []uint8 instead
			// So(column[5].(float64), ShouldEqual, 1.1)
			// So(column[6].(float64), ShouldEqual, 1.2)
			// So(column[7].(string), ShouldEqual, "char")

			So(column[8].(string), ShouldEqual, "varchar10")
			So(column[9].(string), ShouldEqual, "text")

			So(column[10].(time.Time), ShouldHaveSameTypeAs, time.Now())
			So(column[11].(time.Time), ShouldHaveSameTypeAs, time.Now())
			So(column[12].(time.Time), ShouldHaveSameTypeAs, time.Now())
			So(column[13].(time.Time), ShouldHaveSameTypeAs, time.Now())
			So(column[14].(time.Time), ShouldHaveSameTypeAs, time.Now())

			// lib/pq doesn't properly convert interval to Go types but returns []uint8 instead
			// So(column[15].(time.Time), ShouldHaveSameTypeAs, time.Now())
		})
	})
}

func InitPostgresTestDB(t *testing.T) *xorm.Engine {
	x, err := xorm.NewEngine(sqlutil.TestDB_Postgres.DriverName, sqlutil.TestDB_Postgres.ConnStr)
	if err != nil {
		t.Fatalf("Failed to init postgres db %v", err)
	}

	// x.ShowSQL()

	sqlutil.CleanDB(x)

	return x
}
@ -0,0 +1,134 @@
package tsdb

import (
	"context"
	"sync"

	"github.com/go-xorm/core"
	"github.com/go-xorm/xorm"
	"github.com/grafana/grafana/pkg/components/simplejson"
	"github.com/grafana/grafana/pkg/models"
)

// SqlEngine is a wrapper class around xorm for relational database data sources.
type SqlEngine interface {
	InitEngine(driverName string, dsInfo *models.DataSource, cnnstr string) error
	Query(
		ctx context.Context,
		ds *models.DataSource,
		query *TsdbQuery,
		transformToTimeSeries func(query *Query, rows *core.Rows, result *QueryResult) error,
		transformToTable func(query *Query, rows *core.Rows, result *QueryResult) error,
	) (*Response, error)
}

// SqlMacroEngine interpolates macros into sql. It takes in the timeRange to be able to
// generate queries that use from and to.
type SqlMacroEngine interface {
	Interpolate(timeRange *TimeRange, sql string) (string, error)
}

type DefaultSqlEngine struct {
	MacroEngine SqlMacroEngine
	XormEngine  *xorm.Engine
}

type engineCacheType struct {
	cache    map[int64]*xorm.Engine
	versions map[int64]int
	sync.Mutex
}

var engineCache = engineCacheType{
	cache:    make(map[int64]*xorm.Engine),
	versions: make(map[int64]int),
}
||||
// InitEngine creates the db connection and inits the xorm engine or loads it from the engine cache
|
||||
func (e *DefaultSqlEngine) InitEngine(driverName string, dsInfo *models.DataSource, cnnstr string) error { |
||||
engineCache.Lock() |
||||
defer engineCache.Unlock() |
||||
|
||||
if engine, present := engineCache.cache[dsInfo.Id]; present { |
||||
if version, _ := engineCache.versions[dsInfo.Id]; version == dsInfo.Version { |
||||
e.XormEngine = engine |
||||
return nil |
||||
} |
||||
} |
||||
|
||||
engine, err := xorm.NewEngine(driverName, cnnstr) |
||||
engine.SetMaxOpenConns(10) |
||||
engine.SetMaxIdleConns(10) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
|
||||
engineCache.cache[dsInfo.Id] = engine |
||||
e.XormEngine = engine |
||||
|
||||
return nil |
||||
} |

// Query is a default implementation of the Query method for an SQL data source.
// The caller of this function must implement transformToTimeSeries and transformToTable and
// pass them in as parameters.
func (e *DefaultSqlEngine) Query(
	ctx context.Context,
	dsInfo *models.DataSource,
	tsdbQuery *TsdbQuery,
	transformToTimeSeries func(query *Query, rows *core.Rows, result *QueryResult) error,
	transformToTable func(query *Query, rows *core.Rows, result *QueryResult) error,
) (*Response, error) {
	result := &Response{
		Results: make(map[string]*QueryResult),
	}

	session := e.XormEngine.NewSession()
	defer session.Close()
	db := session.DB()

	for _, query := range tsdbQuery.Queries {
		rawSql := query.Model.Get("rawSql").MustString()
		if rawSql == "" {
			continue
		}

		queryResult := &QueryResult{Meta: simplejson.New(), RefId: query.RefId}
		result.Results[query.RefId] = queryResult

		rawSql, err := e.MacroEngine.Interpolate(tsdbQuery.TimeRange, rawSql)
		if err != nil {
			queryResult.Error = err
			continue
		}

		queryResult.Meta.Set("sql", rawSql)

		rows, err := db.Query(rawSql)
		if err != nil {
			queryResult.Error = err
			continue
		}

		defer rows.Close()

		format := query.Model.Get("format").MustString("time_series")

		switch format {
		case "time_series":
			err := transformToTimeSeries(query, rows, queryResult)
			if err != nil {
				queryResult.Error = err
				continue
			}
		case "table":
			err := transformToTable(query, rows, queryResult)
			if err != nil {
				queryResult.Error = err
				continue
			}
		}
	}

	return result, nil
}
@@ -0,0 +1,3 @@
# Grafana PostgreSQL Datasource - Native Plugin

This is the built-in PostgreSQL data source that is used to connect to PostgreSQL databases.
@@ -0,0 +1,132 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';
import ResponseParser from './response_parser';

export class PostgresDatasource {
  id: any;
  name: any;
  responseParser: ResponseParser;

  /** @ngInject */
  constructor(instanceSettings, private backendSrv, private $q, private templateSrv) {
    this.name = instanceSettings.name;
    this.id = instanceSettings.id;
    this.responseParser = new ResponseParser(this.$q);
  }

  interpolateVariable(value) {
    if (typeof value === 'string') {
      return '\'' + value + '\'';
    }

    var quotedValues = _.map(value, function(val) {
      return '\'' + val + '\'';
    });
    return quotedValues.join(',');
  }
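The quoting done by `interpolateVariable` can be sketched stand-alone. This is an illustrative reduction, not the plugin code: it uses plain `Array.prototype.map` instead of lodash's `_.map`, so it assumes the multi-value case is a string array.

```typescript
// Illustrative stand-alone version of the template-variable quoting:
// a single value becomes 'abc'; an array becomes 'a','b' (joined for IN (...)).
// Note: values are wrapped in quotes but not escaped.
function interpolateVariable(value: string | string[]): string {
  if (typeof value === 'string') {
    return "'" + value + "'";
  }
  return value.map(val => "'" + val + "'").join(',');
}
```

A multi-value template variable such as `['a', 'b']` therefore interpolates to `'a','b'`, which fits an SQL `IN (...)` clause.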

  query(options) {
    var queries = _.filter(options.targets, item => {
      return item.hide !== true;
    }).map(item => {
      return {
        refId: item.refId,
        intervalMs: options.intervalMs,
        maxDataPoints: options.maxDataPoints,
        datasourceId: this.id,
        rawSql: this.templateSrv.replace(item.rawSql, options.scopedVars, this.interpolateVariable),
        format: item.format,
      };
    });

    if (queries.length === 0) {
      return this.$q.when({data: []});
    }

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: options.range.from.valueOf().toString(),
        to: options.range.to.valueOf().toString(),
        queries: queries,
      }
    }).then(this.responseParser.processQueryResult);
  }

  annotationQuery(options) {
    if (!options.annotation.rawQuery) {
      return this.$q.reject({message: 'Query missing in annotation definition'});
    }

    const query = {
      refId: options.annotation.name,
      datasourceId: this.id,
      rawSql: this.templateSrv.replace(options.annotation.rawQuery, options.scopedVars, this.interpolateVariable),
      format: 'table',
    };

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: options.range.from.valueOf().toString(),
        to: options.range.to.valueOf().toString(),
        queries: [query],
      }
    }).then(data => this.responseParser.transformAnnotationResponse(options, data));
  }

  metricFindQuery(query, optionalOptions) {
    let refId = 'tempvar';
    if (optionalOptions && optionalOptions.variable && optionalOptions.variable.name) {
      refId = optionalOptions.variable.name;
    }

    const interpolatedQuery = {
      refId: refId,
      datasourceId: this.id,
      rawSql: this.templateSrv.replace(query, {}, this.interpolateVariable),
      format: 'table',
    };

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        queries: [interpolatedQuery],
      }
    }).then(data => this.responseParser.parseMetricFindQueryResult(refId, data));
  }

  testDatasource() {
    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: '5m',
        to: 'now',
        queries: [{
          refId: 'A',
          intervalMs: 1,
          maxDataPoints: 1,
          datasourceId: this.id,
          rawSql: "SELECT 1",
          format: 'table',
        }],
      }
    }).then(res => {
      return { status: "success", message: "Database Connection OK"};
    }).catch(err => {
      console.log(err);
      if (err.data && err.data.message) {
        return { status: "error", message: err.data.message };
      } else {
        return { status: "error", message: err.status };
      }
    });
  }
}

(binary file added: plugin logo image, 4.4 KiB)
@@ -0,0 +1,103 @@
// jshint ignore: start
// jscs: disable

ace.define("ace/mode/sql_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";

var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;

var SqlHighlightRules = function() {

  var keywords = (
    "select|insert|update|delete|from|where|and|or|group|by|order|limit|offset|having|as|case|" +
    "when|else|end|type|left|right|join|on|outer|desc|asc|union|create|table|primary|key|if|" +
    "foreign|not|references|default|null|inner|cross|natural|database|drop|grant"
  );

  var builtinConstants = (
    "true|false"
  );

  var builtinFunctions = (
    "avg|count|first|last|max|min|sum|upper|lower|substring|char_length|round|rank|now|" +
    "coalesce"
  );

  var dataTypes = (
    "int|int2|int4|int8|numeric|decimal|date|varchar|char|bigint|float|bool|bytea|text|timestamp|" +
    "time|money|real|integer"
  );

  var keywordMapper = this.createKeywordMapper({
    "support.function": builtinFunctions,
    "keyword": keywords,
    "constant.language": builtinConstants,
    "storage.type": dataTypes
  }, "identifier", true);

  this.$rules = {
    "start" : [ {
      token : "comment",
      regex : "--.*$"
    }, {
      token : "comment",
      start : "/\\*",
      end : "\\*/"
    }, {
      token : "string", // " string
      regex : '".*?"'
    }, {
      token : "string", // ' string
      regex : "'.*?'"
    }, {
      token : "constant.numeric", // float
      regex : "[+-]?\\d+(?:(?:\\.\\d*)?(?:[eE][+-]?\\d+)?)?\\b"
    }, {
      token : keywordMapper,
      regex : "[a-zA-Z_$][a-zA-Z0-9_$]*\\b"
    }, {
      token : "keyword.operator",
      regex : "\\+|\\-|\\/|\\/\\/|%|<@>|@>|<@|&|\\^|~|<|>|<=|=>|==|!=|<>|="
    }, {
      token : "paren.lparen",
      regex : "[\\(]"
    }, {
      token : "paren.rparen",
      regex : "[\\)]"
    }, {
      token : "text",
      regex : "\\s+"
    } ]
  };
  this.normalizeRules();
};

oop.inherits(SqlHighlightRules, TextHighlightRules);

exports.SqlHighlightRules = SqlHighlightRules;
});

ace.define("ace/mode/sql",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/sql_highlight_rules"], function(require, exports, module) {
"use strict";

var oop = require("../lib/oop");
var TextMode = require("./text").Mode;
var SqlHighlightRules = require("./sql_highlight_rules").SqlHighlightRules;

var Mode = function() {
  this.HighlightRules = SqlHighlightRules;
  this.$behaviour = this.$defaultBehaviour;
};
oop.inherits(Mode, TextMode);

(function() {

  this.lineCommentStart = "--";

  this.$id = "ace/mode/sql";
}).call(Mode.prototype);

exports.Mode = Mode;

});
@@ -0,0 +1,43 @@
///<reference path="../../../headers/common.d.ts" />

import {PostgresDatasource} from './datasource';
import {PostgresQueryCtrl} from './query_ctrl';

class PostgresConfigCtrl {
  static templateUrl = 'partials/config.html';

  current: any;
  constructor($scope) {
    this.current.jsonData.sslmode = this.current.jsonData.sslmode || 'require';
  }
}

const defaultQuery = `SELECT
  extract(epoch from time_column) AS time,
  title_column as title,
  description_column as text
FROM
  metric_table
WHERE
  $__timeFilter(time_column)
`;

class PostgresAnnotationsQueryCtrl {
  static templateUrl = 'partials/annotations.editor.html';

  annotation: any;

  /** @ngInject */
  constructor() {
    this.annotation.rawQuery = this.annotation.rawQuery || defaultQuery;
  }
}

export {
  PostgresDatasource,
  PostgresDatasource as Datasource,
  PostgresQueryCtrl as QueryCtrl,
  PostgresConfigCtrl as ConfigCtrl,
  PostgresAnnotationsQueryCtrl as AnnotationsQueryCtrl,
};
@@ -0,0 +1,41 @@
<div class="gf-form-group">
  <div class="gf-form-inline">
    <div class="gf-form gf-form--grow">
      <textarea rows="10" class="gf-form-input" ng-model="ctrl.annotation.rawQuery" spellcheck="false" placeholder="query expression" data-min-length=0 data-items=100 ng-model-onblur ng-change="ctrl.panelCtrl.refresh()"></textarea>
    </div>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showHelp = !ctrl.showHelp">
        Show Help
        <i class="fa fa-caret-down" ng-show="ctrl.showHelp"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showHelp"></i>
      </label>
    </div>
  </div>

  <div class="gf-form" ng-show="ctrl.showHelp">
    <pre class="gf-form-pre alert alert-info"><h6>Annotation Query Format</h6>
An annotation is an event that is overlaid on top of graphs. The query can have up to four columns per row; the time column is mandatory. Annotation rendering is expensive, so it is important to limit the number of rows returned.

- column with alias: <b>time</b> for the annotation event time. Format is UTC epoch in seconds; use extract(epoch from column) as "time"
- column with alias: <b>title</b> for the annotation title
- column with alias: <b>text</b> for the annotation text
- column with alias: <b>tags</b> for annotation tags. This is a comma-separated string of tags, e.g. 'tag1,tag2'


Macros:
- $__time(column) -> column as "time"
- $__timeFilter(column) -> column >= to_timestamp(1492750877) AND column <= to_timestamp(1492750877)
- $__unixEpochFilter(column) -> column > 1492750877 AND column < 1492750877

Or build your own conditionals using these macros, which just return the values:
- $__timeFrom() -> to_timestamp(1492750877)
- $__timeTo() -> to_timestamp(1492750877)
- $__unixEpochFrom() -> 1492750877
- $__unixEpochTo() -> 1492750877
</pre>
  </div>
</div>
@@ -0,0 +1,52 @@
<h3 class="page-heading">PostgreSQL Connection</h3>

<div class="gf-form-group">
  <div class="gf-form max-width-30">
    <span class="gf-form-label width-7">Host</span>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.url' placeholder="localhost:5432" bs-typeahead="{{['localhost:5432', 'localhost:5433']}}" required>
  </div>

  <div class="gf-form max-width-30">
    <span class="gf-form-label width-7">Database</span>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.database' placeholder="database name" required>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form max-width-15">
      <span class="gf-form-label width-7">User</span>
      <input type="text" class="gf-form-input" ng-model='ctrl.current.user' placeholder="user">
    </div>
    <div class="gf-form max-width-15" ng-if="!ctrl.current.secureJsonFields.password">
      <span class="gf-form-label width-7">Password</span>
      <input type="password" class="gf-form-input" ng-model='ctrl.current.secureJsonData.password' placeholder="password">
    </div>
    <div class="gf-form max-width-19" ng-if="ctrl.current.secureJsonFields.password">
      <span class="gf-form-label width-7">Password</span>
      <input type="text" class="gf-form-input" disabled="disabled" value="configured">
      <a class="btn btn-secondary gf-form-btn" href="#" ng-click="ctrl.current.secureJsonFields.password = false">reset</a>
    </div>
  </div>
  <div class="gf-form">
    <label class="gf-form-label width-7">SSL Mode</label>
    <div class="gf-form-select-wrapper max-width-15 gf-form-select-wrapper--has-help-icon">
      <select class="gf-form-input" ng-model="ctrl.current.jsonData.sslmode" ng-options="mode for mode in ['disable', 'require', 'verify-ca', 'verify-full']" ng-init="ctrl.current.jsonData.sslmode"></select>
      <info-popover mode="right-absolute">
        This option determines whether, or with what priority, a secure SSL TCP/IP connection will be negotiated with the server.
      </info-popover>
    </div>
  </div>
</div>

<div class="gf-form-group">
  <div class="grafana-info-box">
    <h5>User Permission</h5>
    <p>
      The database user should only be granted SELECT permissions on the specified database and tables you want to query.
      Grafana does not validate that queries are safe, so queries can contain any SQL statement. For example, statements
      like <code>DELETE FROM user;</code> and <code>DROP TABLE user;</code> would be executed. To protect against this, we
      <strong>highly</strong> recommend you create a specific PostgreSQL user with restricted permissions.
    </p>
  </div>
</div>
@@ -0,0 +1,79 @@
<query-editor-row query-ctrl="ctrl" can-collapse="false">
  <div class="gf-form-inline">
    <div class="gf-form gf-form--grow">
      <code-editor content="ctrl.target.rawSql" datasource="ctrl.datasource" on-change="ctrl.panelCtrl.refresh()" data-mode="sql">
      </code-editor>
    </div>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form">
      <label class="gf-form-label query-keyword">Format as</label>
      <div class="gf-form-select-wrapper">
        <select class="gf-form-input gf-size-auto" ng-model="ctrl.target.format" ng-options="f.value as f.text for f in ctrl.formats" ng-change="ctrl.refresh()"></select>
      </div>
    </div>
    <div class="gf-form">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showHelp = !ctrl.showHelp">
        Show Help
        <i class="fa fa-caret-down" ng-show="ctrl.showHelp"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showHelp"></i>
      </label>
    </div>
    <div class="gf-form" ng-show="ctrl.lastQueryMeta">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showLastQuerySQL = !ctrl.showLastQuerySQL">
        Generated SQL
        <i class="fa fa-caret-down" ng-show="ctrl.showLastQuerySQL"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showLastQuerySQL"></i>
      </label>
    </div>
    <div class="gf-form gf-form--grow">
      <div class="gf-form-label gf-form-label--grow"></div>
    </div>
  </div>

  <div class="gf-form" ng-show="ctrl.showLastQuerySQL">
    <pre class="gf-form-pre">{{ctrl.lastQueryMeta.sql}}</pre>
  </div>

  <div class="gf-form" ng-show="ctrl.showHelp">
    <pre class="gf-form-pre alert alert-info">Time series:
- return column named <i>time</i> (UTC epoch in seconds or timestamp)
- return column(s) with numeric datatype as values
- (Optional: return column named <i>metric</i> to represent the series name. If no column named metric is found, the column name of the value column is used as the series name)

Table:
- return any set of columns

Macros:
- $__time(column) -> column as "time"
- $__timeEpoch(column) -> extract(epoch from column) as "time"
- $__timeFilter(column) -> column >= to_timestamp(1492750877) AND column <= to_timestamp(1492750877)
- $__unixEpochFilter(column) -> column > 1492750877 AND column < 1492750877

To group by time use $__timeGroup(column, 'interval'):
-> (extract(epoch from column)/extract(epoch from 'interval'::interval))::int

Example of group by and order by with $__timeGroup:
SELECT
  min(date_time_col) AS time,
  sum(value_double) as value
FROM yourtable
group by $__timeGroup(date_time_col, '1h')
order by $__timeGroup(date_time_col, '1h') ASC

Or build your own conditionals using these macros, which just return the values:
- $__timeFrom() -> to_timestamp(1492750877)
- $__timeTo() -> to_timestamp(1492750877)
- $__unixEpochFrom() -> 1492750877
- $__unixEpochTo() -> 1492750877
</pre>
  </div>

<div class="gf-form" ng-show="ctrl.lastQueryError">
  <pre class="gf-form-pre alert alert-error">{{ctrl.lastQueryError}}</pre>
</div>

</query-editor-row>
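As a rough mental model of the `$__timeGroup` expansion in the help text above: dividing the epoch seconds by the interval length and truncating yields one bucket index per interval. A hedged TypeScript sketch (the SQL `::int` cast rounds rather than floors, so this floor-based version is an approximation; the function name is illustrative):

```typescript
// Approximate client-side model of $__timeGroup bucketing: epoch seconds
// reduced to a bucket index of width intervalSeconds.
function timeGroupBucket(epochSeconds: number, intervalSeconds: number): number {
  return Math.floor(epochSeconds / intervalSeconds);
}
```

Rows whose timestamps land in the same bucket (e.g. the same hour for `'1h'`) share an index, which is why grouping and ordering by the same macro expression works.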
@@ -0,0 +1,20 @@
{
  "type": "datasource",
  "name": "PostgreSQL",
  "id": "postgres",

  "info": {
    "author": {
      "name": "Grafana Project",
      "url": "https://grafana.com"
    },
    "logos": {
      "small": "img/postgresql_logo.svg",
      "large": "img/postgresql_logo.svg"
    }
  },

  "alerting": true,
  "annotations": true,
  "metrics": true
}
@@ -0,0 +1,84 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';
import {QueryCtrl} from 'app/plugins/sdk';

export interface PostgresQuery {
  refId: string;
  format: string;
  alias: string;
  rawSql: string;
}

export interface QueryMeta {
  sql: string;
}

const defaultQuery = `SELECT
  $__time(time_column),
  value1
FROM
  metric_table
WHERE
  $__timeFilter(time_column)
`;

export class PostgresQueryCtrl extends QueryCtrl {
  static templateUrl = 'partials/query.editor.html';

  showLastQuerySQL: boolean;
  formats: any[];
  target: PostgresQuery;
  lastQueryMeta: QueryMeta;
  lastQueryError: string;
  showHelp: boolean;

  /** @ngInject */
  constructor($scope, $injector) {
    super($scope, $injector);

    this.target.format = this.target.format || 'time_series';
    this.target.alias = "";
    this.formats = [
      {text: 'Time series', value: 'time_series'},
      {text: 'Table', value: 'table'},
    ];

    if (!this.target.rawSql) {
      // special handling when in table panel
      if (this.panelCtrl.panel.type === 'table') {
        this.target.format = 'table';
        this.target.rawSql = "SELECT 1";
      } else {
        this.target.rawSql = defaultQuery;
      }
    }

    this.panelCtrl.events.on('data-received', this.onDataReceived.bind(this), $scope);
    this.panelCtrl.events.on('data-error', this.onDataError.bind(this), $scope);
  }

  onDataReceived(dataList) {
    this.lastQueryMeta = null;
    this.lastQueryError = null;

    let anySeriesFromQuery = _.find(dataList, {refId: this.target.refId});
    if (anySeriesFromQuery) {
      this.lastQueryMeta = anySeriesFromQuery.meta;
    }
  }

  onDataError(err) {
    if (err.data && err.data.results) {
      let queryRes = err.data.results[this.target.refId];
      if (queryRes) {
        this.lastQueryMeta = queryRes.meta;
        this.lastQueryError = queryRes.error;
      }
    }
  }
}
@@ -0,0 +1,141 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';

export default class ResponseParser {
  constructor(private $q) {}

  processQueryResult(res) {
    var data = [];

    if (!res.data.results) {
      return {data: data};
    }

    for (let key in res.data.results) {
      let queryRes = res.data.results[key];

      if (queryRes.series) {
        for (let series of queryRes.series) {
          data.push({
            target: series.name,
            datapoints: series.points,
            refId: queryRes.refId,
            meta: queryRes.meta,
          });
        }
      }

      if (queryRes.tables) {
        for (let table of queryRes.tables) {
          table.type = 'table';
          table.refId = queryRes.refId;
          table.meta = queryRes.meta;
          data.push(table);
        }
      }
    }

    return {data: data};
  }

  parseMetricFindQueryResult(refId, results) {
    if (!results || results.data.length === 0 || results.data.results[refId].meta.rowCount === 0) { return []; }

    const columns = results.data.results[refId].tables[0].columns;
    const rows = results.data.results[refId].tables[0].rows;
    const textColIndex = this.findColIndex(columns, '__text');
    const valueColIndex = this.findColIndex(columns, '__value');

    if (columns.length === 2 && textColIndex !== -1 && valueColIndex !== -1) {
      return this.transformToKeyValueList(rows, textColIndex, valueColIndex);
    }

    return this.transformToSimpleList(rows);
  }
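The `__text`/`__value` handling can be condensed into a self-contained sketch (types simplified, function name illustrative): rows become `{text, value}` pairs, deduplicated on the text column with the first value seen winning, mirroring `transformToKeyValueList` plus `containsKey`.

```typescript
// Self-contained sketch of the key/value transform used for template
// variables: dedup on text, first occurrence wins.
function toKeyValueList(rows: any[][], textIdx: number, valueIdx: number) {
  const res: Array<{text: any, value: any}> = [];
  for (const row of rows) {
    if (!res.some(item => item.text === row[textIdx])) {
      res.push({text: row[textIdx], value: row[valueIdx]});
    }
  }
  return res;
}
```

This matches the duplicate-key spec below: three rows sharing the text `'aTitle'` collapse to a single entry carrying the first value.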

  transformToKeyValueList(rows, textColIndex, valueColIndex) {
    const res = [];

    for (let i = 0; i < rows.length; i++) {
      if (!this.containsKey(res, rows[i][textColIndex])) {
        res.push({text: rows[i][textColIndex], value: rows[i][valueColIndex]});
      }
    }

    return res;
  }

  transformToSimpleList(rows) {
    const res = [];

    for (let i = 0; i < rows.length; i++) {
      for (let j = 0; j < rows[i].length; j++) {
        const value = rows[i][j];
        if (res.indexOf(value) === -1) {
          res.push(value);
        }
      }
    }

    return _.map(res, value => {
      return { text: value };
    });
  }

  findColIndex(columns, colName) {
    for (let i = 0; i < columns.length; i++) {
      if (columns[i].text === colName) {
        return i;
      }
    }

    return -1;
  }

  containsKey(res, key) {
    for (let i = 0; i < res.length; i++) {
      if (res[i].text === key) {
        return true;
      }
    }
    return false;
  }

  transformAnnotationResponse(options, data) {
    const table = data.data.results[options.annotation.name].tables[0];

    let timeColumnIndex = -1;
    let titleColumnIndex = -1;
    let textColumnIndex = -1;
    let tagsColumnIndex = -1;

    for (let i = 0; i < table.columns.length; i++) {
      if (table.columns[i].text === 'time') {
        timeColumnIndex = i;
      } else if (table.columns[i].text === 'text') {
        textColumnIndex = i;
      } else if (table.columns[i].text === 'tags') {
        tagsColumnIndex = i;
      }
    }

    if (timeColumnIndex === -1) {
      return this.$q.reject({message: 'Missing mandatory time column in annotation query.'});
    }

    const list = [];
    for (let i = 0; i < table.rows.length; i++) {
      const row = table.rows[i];
      list.push({
        annotation: options.annotation,
        time: Math.floor(row[timeColumnIndex]) * 1000,
        title: row[titleColumnIndex],
        text: row[textColumnIndex],
        tags: row[tagsColumnIndex] ? row[tagsColumnIndex].trim().split(/\s*,\s*/) : []
      });
    }

    return list;
  }
}
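The tags handling in `transformAnnotationResponse` relies on the `/\s*,\s*/` split. Isolated, it behaves like this (the function name is illustrative, not part of the plugin):

```typescript
// Sketch of the annotation tags parsing: trim the string, then split on
// commas with optional surrounding whitespace; a missing tags cell yields [].
function parseAnnotationTags(raw?: string): string[] {
  return raw ? raw.trim().split(/\s*,\s*/) : [];
}
```

So a sloppy value like `' TagB , TagC'` still produces clean tags, which is exactly what the annotation spec below asserts.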
@@ -0,0 +1,196 @@
import {describe, beforeEach, it, expect, angularMocks} from 'test/lib/common';
import moment from 'moment';
import helpers from 'test/specs/helpers';
import {PostgresDatasource} from '../datasource';

describe('PostgreSQLDatasource', function() {
  var ctx = new helpers.ServiceTestContext();
  var instanceSettings = {name: 'postgresql'};

  beforeEach(angularMocks.module('grafana.core'));
  beforeEach(angularMocks.module('grafana.services'));
  beforeEach(ctx.providePhase(['backendSrv']));

  beforeEach(angularMocks.inject(function($q, $rootScope, $httpBackend, $injector) {
    ctx.$q = $q;
    ctx.$httpBackend = $httpBackend;
    ctx.$rootScope = $rootScope;
    ctx.ds = $injector.instantiate(PostgresDatasource, {instanceSettings: instanceSettings});
    $httpBackend.when('GET', /\.html$/).respond('');
  }));

  describe('When performing annotationQuery', function() {
    let results;

    const annotationName = 'MyAnno';

    const options = {
      annotation: {
        name: annotationName,
        rawQuery: 'select time, title, text, tags from table;'
      },
      range: {
        from: moment(1432288354),
        to: moment(1432288401)
      }
    };

    const response = {
      results: {
        MyAnno: {
          refId: annotationName,
          tables: [
            {
              columns: [{text: 'time'}, {text: 'text'}, {text: 'tags'}],
              rows: [
                [1432288355, 'some text', 'TagA,TagB'],
                [1432288390, 'some text2', ' TagB , TagC'],
                [1432288400, 'some text3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.annotationQuery(options).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return annotation list', function() {
      expect(results.length).to.be(3);

      expect(results[0].text).to.be('some text');
      expect(results[0].tags[0]).to.be('TagA');
      expect(results[0].tags[1]).to.be('TagB');

      expect(results[1].tags[0]).to.be('TagB');
      expect(results[1].tags[1]).to.be('TagC');

      expect(results[2].tags.length).to.be(0);
    });
  });

  describe('When performing metricFindQuery', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: 'title'}, {text: 'text'}],
              rows: [
                ['aTitle', 'some text'],
                ['aTitle2', 'some text2'],
                ['aTitle3', 'some text3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list of all column values', function() {
      expect(results.length).to.be(6);
      expect(results[0].text).to.be('aTitle');
      expect(results[5].text).to.be('some text3');
    });
  });

  describe('When performing metricFindQuery with key, value columns', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: '__value'}, {text: '__text'}],
              rows: [
                ['value1', 'aTitle'],
                ['value2', 'aTitle2'],
                ['value3', 'aTitle3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list as text, value', function() {
      expect(results.length).to.be(3);
      expect(results[0].text).to.be('aTitle');
      expect(results[0].value).to.be('value1');
      expect(results[2].text).to.be('aTitle3');
      expect(results[2].value).to.be('value3');
    });
  });

  describe('When performing metricFindQuery with key, value columns and with duplicate keys', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: '__text'}, {text: '__value'}],
              rows: [
                ['aTitle', 'same'],
                ['aTitle', 'same'],
                ['aTitle', 'diff']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list of unique keys', function() {
      expect(results.length).to.be(1);
      expect(results[0].text).to.be('aTitle');
      expect(results[0].value).to.be('same');
    });
  });
});
@@ -1,147 +0,0 @@
package hstore

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/lib/pq"
)

type Fatalistic interface {
	Fatal(args ...interface{})
}

func openTestConn(t Fatalistic) *sql.DB {
	datname := os.Getenv("PGDATABASE")
	sslmode := os.Getenv("PGSSLMODE")

	if datname == "" {
		os.Setenv("PGDATABASE", "pqgotest")
	}

	if sslmode == "" {
		os.Setenv("PGSSLMODE", "disable")
	}

	conn, err := sql.Open("postgres", "")
	if err != nil {
		t.Fatal(err)
	}

	return conn
}

func TestHstore(t *testing.T) {
	db := openTestConn(t)
	defer db.Close()

	// quietly create the hstore extension if it doesn't exist
	_, err := db.Exec("CREATE EXTENSION IF NOT EXISTS hstore")
	if err != nil {
		t.Skipf("Skipping hstore tests - hstore extension create failed: %s", err.Error())
	}

	hs := Hstore{}

	// test for null-valued hstores
	err = db.QueryRow("SELECT NULL::hstore").Scan(&hs)
	if err != nil {
		t.Fatal(err)
	}
	if hs.Map != nil {
		t.Fatalf("expected null map")
	}

	err = db.QueryRow("SELECT $1::hstore", hs).Scan(&hs)
	if err != nil {
		t.Fatalf("re-query null map failed: %s", err.Error())
	}
	if hs.Map != nil {
		t.Fatalf("expected null map")
	}

	// test for empty hstores
	err = db.QueryRow("SELECT ''::hstore").Scan(&hs)
	if err != nil {
		t.Fatal(err)
	}
	if hs.Map == nil {
		t.Fatalf("expected empty map, got null map")
	}
	if len(hs.Map) != 0 {
		t.Fatalf("expected empty map, got len(map)=%d", len(hs.Map))
	}

	err = db.QueryRow("SELECT $1::hstore", hs).Scan(&hs)
	if err != nil {
		t.Fatalf("re-query empty map failed: %s", err.Error())
	}
	if hs.Map == nil {
		t.Fatalf("expected empty map, got null map")
	}
	if len(hs.Map) != 0 {
		t.Fatalf("expected empty map, got len(map)=%d", len(hs.Map))
	}

	// a few example maps to test out
	hsOnePair := Hstore{
		Map: map[string]sql.NullString{
			"key1": {"value1", true},
		},
	}

	hsThreePairs := Hstore{
		Map: map[string]sql.NullString{
			"key1": {"value1", true},
			"key2": {"value2", true},
			"key3": {"value3", true},
		},
	}

	hsSmorgasbord := Hstore{
		Map: map[string]sql.NullString{
			"nullstring":             {"NULL", true},
			"actuallynull":           {"", false},
			"NULL":                   {"NULL string key", true},
			"withbracket":            {"value>42", true},
			"withequal":              {"value=42", true},
			`"withquotes1"`:          {`this "should" be fine`, true},
			`"withquotes"2"`:         {`this "should\" also be fine`, true},
			"embedded1":              {"value1=>x1", true},
			"embedded2":              {`"value2"=>x2`, true},
			"withnewlines":           {"\n\nvalue\t=>2", true},
			"<<all sorts of crazy>>": {`this, "should,\" also, => be fine`, true},
		},
	}

	// test encoding in query params, then decoding during Scan
	testBidirectional := func(h Hstore) {
		err = db.QueryRow("SELECT $1::hstore", h).Scan(&hs)
		if err != nil {
			t.Fatalf("re-query %d-pair map failed: %s", len(h.Map), err.Error())
		}
		if hs.Map == nil {
			t.Fatalf("expected %d-pair map, got null map", len(h.Map))
		}
		if len(hs.Map) != len(h.Map) {
			t.Fatalf("expected %d-pair map, got len(map)=%d", len(h.Map), len(hs.Map))
		}

		for key, val := range hs.Map {
			otherval, found := h.Map[key]
			if !found {
				t.Fatalf("  key '%v' not found in %d-pair map", key, len(h.Map))
			}
			if otherval.Valid != val.Valid {
				t.Fatalf("  value %v <> %v in %d-pair map", otherval, val, len(h.Map))
			}
			if otherval.String != val.String {
				t.Fatalf("  value '%v' <> '%v' in %d-pair map", otherval.String, val.String, len(h.Map))
			}
		}
	}

	testBidirectional(hsOnePair)
	testBidirectional(hsThreePairs)
	testBidirectional(hsSmorgasbord)
}
@@ -1,74 +0,0 @@
// +build ignore

// Generate the table of OID values.
// Run with 'go run gen.go'.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"

	"database/sql"
	_ "github.com/lib/pq"
)

func main() {
	datname := os.Getenv("PGDATABASE")
	sslmode := os.Getenv("PGSSLMODE")

	if datname == "" {
		os.Setenv("PGDATABASE", "pqgotest")
	}

	if sslmode == "" {
		os.Setenv("PGSSLMODE", "disable")
	}

	db, err := sql.Open("postgres", "")
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("gofmt")
	cmd.Stderr = os.Stderr
	w, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Create("types.go")
	if err != nil {
		log.Fatal(err)
	}
	cmd.Stdout = f
	err = cmd.Start()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(w, "// generated by 'go run gen.go'; do not edit")
	fmt.Fprintln(w, "\npackage oid")
	fmt.Fprintln(w, "const (")
	rows, err := db.Query(`
		SELECT typname, oid
		FROM pg_type WHERE oid < 10000
		ORDER BY oid;
	`)
	if err != nil {
		log.Fatal(err)
	}
	var name string
	var oid int
	for rows.Next() {
		err = rows.Scan(&name, &oid)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Fprintf(w, "T_%s Oid = %d\n", name, oid)
	}
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(w, ")")
	w.Close()
	cmd.Wait()
}
@@ -0,0 +1,93 @@
package pq

import (
	"math"
	"reflect"
	"time"

	"github.com/lib/pq/oid"
)

const headerSize = 4

type fieldDesc struct {
	// The object ID of the data type.
	OID oid.Oid
	// The data type size (see pg_type.typlen).
	// Note that negative values denote variable-width types.
	Len int
	// The type modifier (see pg_attribute.atttypmod).
	// The meaning of the modifier is type-specific.
	Mod int
}

func (fd fieldDesc) Type() reflect.Type {
	switch fd.OID {
	case oid.T_int8:
		return reflect.TypeOf(int64(0))
	case oid.T_int4:
		return reflect.TypeOf(int32(0))
	case oid.T_int2:
		return reflect.TypeOf(int16(0))
	case oid.T_varchar, oid.T_text:
		return reflect.TypeOf("")
	case oid.T_bool:
		return reflect.TypeOf(false)
	case oid.T_date, oid.T_time, oid.T_timetz, oid.T_timestamp, oid.T_timestamptz:
		return reflect.TypeOf(time.Time{})
	case oid.T_bytea:
		return reflect.TypeOf([]byte(nil))
	default:
		return reflect.TypeOf(new(interface{})).Elem()
	}
}

func (fd fieldDesc) Name() string {
	return oid.TypeName[fd.OID]
}

func (fd fieldDesc) Length() (length int64, ok bool) {
	switch fd.OID {
	case oid.T_text, oid.T_bytea:
		return math.MaxInt64, true
	case oid.T_varchar, oid.T_bpchar:
		return int64(fd.Mod - headerSize), true
	default:
		return 0, false
	}
}

func (fd fieldDesc) PrecisionScale() (precision, scale int64, ok bool) {
	switch fd.OID {
	case oid.T_numeric, oid.T__numeric:
		mod := fd.Mod - headerSize
		precision = int64((mod >> 16) & 0xffff)
		scale = int64(mod & 0xffff)
		return precision, scale, true
	default:
		return 0, 0, false
	}
}

// ColumnTypeScanType returns the value type that can be used to scan types into.
func (rs *rows) ColumnTypeScanType(index int) reflect.Type {
	return rs.colTyps[index].Type()
}

// ColumnTypeDatabaseTypeName returns the database system type name.
func (rs *rows) ColumnTypeDatabaseTypeName(index int) string {
	return rs.colTyps[index].Name()
}

// ColumnTypeLength returns the length of the column type if the column is a
// variable-length type. If the column is not a variable-length type, ok
// should return false.
func (rs *rows) ColumnTypeLength(index int) (length int64, ok bool) {
	return rs.colTyps[index].Length()
}

// ColumnTypePrecisionScale should return the precision and scale for decimal
// types. If not applicable, ok should be false.
func (rs *rows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) {
	return rs.colTyps[index].PrecisionScale()
}