## Introduction
CARTO's SQL API allows you to interact with your tables and data inside CARTO, as if you were running SQL statements against a normal database. The database behind CARTO is PostgreSQL, so if you need help with specific SQL statements or you want to learn more about it, visit the [official documentation](http://www.postgresql.org/docs/9.1/static/sql.html).
There are two main situations in which you would want to use the SQL API:
- You want to **insert, update** or **delete** data. For example, you would like to insert a new column with latitude and longitude data.
- You want to **select** data from public tables in order to use it on your website or in your app. For example, you need to find the 10 closest records to a particular location.
Remember that in order to access, read or modify data in private tables, you will need to authenticate your requests. When a table is public, you can do non-authenticated queries that read data, but you cannot write or modify data without authentication.

## Authentication
For all access to private tables, and for write access to public tables, CARTO enforces secure API access that requires you to authorize your queries. In order to authorize queries, you can use an API Key or a Consumer Key.
### API Key
The API Key offers the simplest way to access private data, or perform writes and updates to your public data. Remember that your API Key protects access to your data, so keep it confidential and only share it if you want others to have this access. If necessary, you can reset your API Key from your CARTO dashboard.
**Tip:** For details about how to access, or reset, your API Key, see [Your Account](http://docs.carto.com/carto-editor/your-account/#api-key) details.
To use your API Key, pass it as a parameter in a URL call to the CARTO API. For example, to perform an insert into your table, you would use the following URL structure.
##### Example
```bash
https://{username}.carto.com/api/v2/sql?q={SQL statement}&api_key={api_key}
```
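The following is a minimal, hedged sketch of such a call using cURL, assuming a hypothetical table named `test_table`; `--data-urlencode` takes care of URL-encoding the SQL statement, and `-G` keeps the request as a GET URL call.
```bash
# Hypothetical example: insert a row into test_table using your API Key.
# {username} and {api_key} are placeholders for your own credentials.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=INSERT INTO test_table (name) VALUES ('Manhattan')" \
  --data-urlencode "api_key={api_key}"
```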

## Batch Queries
A Batch Query enables you to request queries with long-running CPU processing times. Typically, these kinds of requests raise timeout errors when using the SQL API. In order to avoid timeouts, you can use Batch Queries to [create](#create-a-job), [read](#read-a-job) and [cancel](#cancel-a-job) queries. You can also run a [chained batch query](#chaining-batch-queries) to chain several SQL queries into one job. A Batch Query schedules the incoming jobs and allows you to request the job status for each query.
_Batch Queries are not intended to be used for large query payloads that contain over 16384 characters (16kb). For instance, if you are inserting a large number of rows into your table, you still need to use the [Import API](https://carto.com/docs/carto-engine/import-api/) or [SQL API]({{ site.sqlapi_docs }}/guides/) for this type of data management. Batch Queries are specific to queries and CPU usage._
**Note:** In order to use Batch Queries, you **must** be [authenticated]({{ site.sqlapi_docs }}/guides/authentication/) using API keys.
### Authentication
An API Key is required to manage your jobs. The following error message appears if you are not [authenticated]({{ site.sqlapi_docs }}/guides/authentication/):
```bash
{
  "error": [
    "permission denied"
  ]
}
```
To get full access, you must use your API Key.
Using the cURL tool:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": "{query}"
}' "http://{username}.carto.com/api/v2/sql/job?api_key={api_key}"
```
Using the Node.js request client:
```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "http://{username}.carto.com/api/v2/sql/job",
  qs: {
    "api_key": "{api_key}"
  },
  headers: {
    "content-type": "application/json"
  },
  body: {
    query: "{query}"
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
### Batch Queries Job Schema
A Batch Query request to your CARTO account includes the following job schema elements. _Only the `query` element can be modified._ All other elements of the job schema are defined by the Batch Query and are read-only.
Name | Description
--- | ---
`job_id` | a universally unique identifier (uuid).
`user` | user identifier, as displayed by the username.
`status` | displays the result of the long-running query. The possible status results are:
|_ `pending` | job waiting to be executed.
|_ `running` | indicates that the job is currently running.
|_ `done` | job executed successfully.
|_ `failed` | job executed but failed, with errors.
|_ `canceled` | job canceled by user request.
|_ `unknown` | appears when it is not possible to determine what exactly happened with the job.
`query` | the SQL statement to be executed in a database. _You can modify the select SQL statement to be used in the job schema._<br/><br/>**Tip:** In some scenarios, you may need to retrieve the query results from a finished job. See [Fetching Job Results](#fetching-job-results) for details.
`created_at` | the date and time when the job schema was created.
`updated_at` | the date and time of when the job schema was last updated, or modified.
`failed_reason` | displays the database error message, if something went wrong.
##### Example
```bash
HEADERS: 201 CREATED; application/json
BODY: {
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "cartofante",
  "query": "UPDATE airports SET type = 'international'",
  "status": "pending",
  "created_at": "2015-12-15T07:36:25Z",
  "updated_at": "2015-12-15T07:36:25Z"
}
```
#### Create a Job
To create a Batch Query job, make a POST request with the following parameters:
```bash
HEADERS: POST /api/v2/sql/job
BODY: {
  "query": "UPDATE airports SET type = 'international'"
}
```
##### Response
```bash
HEADERS: 201 CREATED; application/json
BODY: {
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "cartofante",
  "query": "UPDATE airports SET type = 'international'",
  "status": "pending",
  "created_at": "2015-12-15T07:36:25Z",
  "updated_at": "2015-12-15T07:36:25Z"
}
```
###### POST Examples
If you are using the Batch Query create operation for a cURL POST request, use the following code:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)"
}' "http://{username}.carto.com/api/v2/sql/job"
```
If you are using the Batch Query create operation for a Node.js client POST request, use the following code:
```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "http://{username}.carto.com/api/v2/sql/job",
  headers: { "content-type": "application/json" },
  body: {
    query: "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)"
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
#### Read a Job
To read a Batch Query job, make a GET request with the following parameters.
```bash
HEADERS: GET /api/v2/sql/job/de305d54-75b4-431b-adb2-eb6b9e546014
BODY: {}
```
##### Response
```bash
HEADERS: 200 OK; application/json
BODY: {
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "cartofante",
  "query": "UPDATE airports SET type = 'international'",
  "status": "pending",
  "created_at": "2015-12-15T07:36:25Z",
  "updated_at": "2015-12-15T07:36:25Z"
}
```
###### GET Examples
If you are using the Batch Query read operation for a cURL GET request, use the following code:
```bash
curl -X GET "http://{username}.carto.com/api/v2/sql/job/{job_id}"
```
If you are using the Batch Query read operation for a Node.js client GET request, use the following code:
```javascript
var request = require("request");

var options = {
  method: "GET",
  url: "http://{username}.carto.com/api/v2/sql/job/{job_id}"
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
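Because a job runs asynchronously, a common pattern is to poll the read endpoint until the job reaches a terminal status. The following is a hedged sketch (not part of the API itself) that assumes the `jq` command-line JSON tool is installed; `{username}`, `{job_id}` and `{api_key}` are placeholders.
```bash
# Hedged sketch: poll the job every 5 seconds until it is no longer
# "pending" or "running".
while true; do
  STATUS=$(curl -s "http://{username}.carto.com/api/v2/sql/job/{job_id}?api_key={api_key}" | jq -r '.status')
  echo "status: $STATUS"
  if [ "$STATUS" != "pending" ] && [ "$STATUS" != "running" ]; then
    break
  fi
  sleep 5
done
```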
#### Cancel a Job
To cancel a Batch Query, make a DELETE request with the following parameters.
```bash
HEADERS: DELETE /api/v2/sql/job/de305d54-75b4-431b-adb2-eb6b9e546014
BODY: {}
```
**Note:** Be mindful when canceling a job whose status is `pending` or `running`:
- If the job is `pending`, it will never be executed
- If the job is `running`, it will be terminated immediately
##### Response
```bash
HEADERS: 200 OK; application/json
BODY: {
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "cartofante",
  "query": "UPDATE airports SET type = 'international'",
  "status": "cancelled",
  "created_at": "2015-12-15T07:36:25Z",
  "updated_at": "2015-12-17T06:22:42Z"
}
```
**Note:** Jobs can only be canceled while their status is `running` or `pending`; otherwise, the cancel operation is not allowed. You will receive an error if the job status is anything other than "running" or "pending".
```bash
errors: [
"The job status is done, cancel is not allowed"
]
```
###### DELETE Examples
If you are using the Batch Query cancel operation for a cURL DELETE request, use the following code:
```bash
curl -X DELETE "http://{username}.carto.com/api/v2/sql/job/{job_id}"
```
If you are using the Batch Query cancel operation for a Node.js client DELETE request, use the following code:
```javascript
var request = require("request");

var options = {
  method: "DELETE",
  url: "http://{username}.carto.com/api/v2/sql/job/{job_id}"
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
#### Chaining Batch Queries
In some cases, you may need to chain queries into one job. The Chaining Batch Query option enables you to run an array of SQL statements, and to define the order in which the queries are executed. You can use any of the operations (create, read, list, update, cancel) for the queries in a chained batch query.
```bash
HEADERS: POST /api/v2/sql/job
BODY: {
  "query": [
    "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
    "DROP TABLE airports",
    "ALTER TABLE world_airports RENAME TO airport"
  ]
}
```
##### Response
```bash
HEADERS: 201 CREATED; application/json
BODY: {
  "job_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "user": "cartofante",
  "query": [{
    "query": "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
    "status": "pending"
  }, {
    "query": "DROP TABLE airports",
    "status": "pending"
  }, {
    "query": "ALTER TABLE world_airports RENAME TO airport",
    "status": "pending"
  }],
  "status": "pending",
  "created_at": "2015-12-15T07:36:25Z",
  "updated_at": "2015-12-15T07:36:25Z"
}
```
**Note:** The Batch Query returns a job status for both the parent Chained Batch Query request, and for each child query within the request. The order in which each query is executed is guaranteed. Here are the possible status results for Chained Batch Queries:
- If one query within the Chained Batch Query fails, the `"status": "failed"` is returned for both the job and the query, and any "pending" queries will not be processed
- If you cancel the Chained Batch Query job, the job status changes to `"status": "cancelled"`. Any running queries within the job will be stopped and changed to `"status": "pending"`, and will not be processed
- Suppose the first query job status is `"status": "done"`, the second query is `"status": "running"`, and the third query `"status": "pending"`. If the second query fails for some reason, the job status changes to `"status": "failed"` and the last query will not be processed. It is indicated which query failed in the Chained Batch Query job
- Creating several jobs does not guarantee that jobs are going to be executed in the same order that they were created. If you need to run queries in a specific order, you may want to use [Chaining Batch Queries](#chaining-batch-queries).
###### POST Examples
If you are using the Chained Batch Query operation for a cURL POST request, use the following code:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": [
"CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
"DROP TABLE airports",
"ALTER TABLE world_airports RENAME TO airport"
]
}' "http://{username}.carto.com/api/v2/sql/job"
```
If you are using the Chained Batch Query operation for a Node.js client POST request, use the following code:
```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "http://{username}.carto.com/api/v2/sql/job",
  headers: { "content-type": "application/json" },
  body: {
    "query": [
      "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
      "DROP TABLE airports",
      "ALTER TABLE world_airports RENAME TO airport"
    ]
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
###### PUT Examples
If you are using the Chained Batch Query operation for a cURL PUT request, use the following code:
```bash
curl -X PUT -H "Content-Type: application/json" -d '{
"query": [
"CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
"DROP TABLE airports",
"ALTER TABLE world_airports RENAME TO airport",
"UPDATE airports SET airport = upper(airport)"
]
}' "http://{username}.carto.com/api/v2/sql/job/{job_id}"
```
If you are using the Chained Batch Query operation for a Node.js client PUT request, use the following code:
```javascript
var request = require("request");

var options = {
  method: "PUT",
  url: "http://{username}.carto.com/api/v2/sql/job/{job_id}",
  headers: { "content-type": "application/json" },
  body: {
    query: [
      "CREATE TABLE world_airports AS SELECT a.cartodb_id, a.the_geom, a.the_geom_webmercator, a.name airport, b.name country FROM world_borders b JOIN airports a ON ST_Contains(b.the_geom, a.the_geom)",
      "DROP TABLE airports",
      "ALTER TABLE world_airports RENAME TO airport",
      "UPDATE airports SET airport = upper(airport)"
    ]
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```
### Chaining Batch Queries with fallbacks
When you need to run an extra query based on how a chained query finished, Batch Queries enable you to define `onerror` and `onsuccess` fallbacks. This powerful feature opens a huge range of possibilities, for instance:
- You can create jobs periodically in order to get updated data and create a new table where you can check the status of your tables.
For this example, you can create the following job:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
  "query": {
    "query": [{
      "query": "UPDATE nasdaq SET price = '$100.00' WHERE company = 'CARTO'",
      "onsuccess": "UPDATE market_status SET status = 'updated', updated_at = NOW() WHERE table_name = 'nasdaq'",
      "onerror": "UPDATE market_status SET status = 'outdated' WHERE table_name = 'nasdaq'"
    }]
  }
}' "http://{username}.carto.com/api/v2/sql/job"
```
If the query finishes successfully, the `onsuccess` fallback is fired. Otherwise, the `onerror` fallback is fired. You can define fallbacks per query:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": {
"query": [{
"query": "UPDATE nasdaq SET price = '$101.00' WHERE company = 'CARTO'",
"onsuccess": "UPDATE market_status SET status = 'updated', updated_at = NOW() WHERE table_name = 'nasdaq'",
"onerror": "UPDATE market_status SET status = 'outdated' WHERE table_name = 'nasdaq'"
}, {
"query": "UPDATE down_jones SET price = '$100.00' WHERE company = 'Esri'",
"onsuccess": "UPDATE market_status SET status = 'updated', updated_at = NOW() WHERE table_name = 'down_jones'",
"onerror": "UPDATE market_status SET status = 'outdated' WHERE table_name = 'down_jones'"
}]
}
}' "http://{username}.carto.com/api/v2/sql/job"
```
...or at the job level:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": {
"query": [{
"query": "UPDATE nasdaq SET price = '$101.00' WHERE company = 'CARTO'"
}, {
"query": "UPDATE down_jones SET price = '$100.00' WHERE company = 'Esri'"
}],
"onsuccess": "UPDATE market_status SET status = 'updated', updated_at = NOW()",
"onerror": "UPDATE market_status SET status = 'outdated'"
}
}' "http://{username}.carto.com/api/v2/sql/job"
```
If a query in a job fails (and `onerror` fallbacks for that query and for the job are defined), Batch Queries runs the fallback for that query first. The job fallback runs next and marks the job as failed. Remaining queries are not executed. Conversely, Batch Queries runs the `onsuccess` fallback at the job level if, and only if, every query has finished successfully.
#### Templates
Batch Queries provide a simple way to get the error message and the job identifier to be used in your fallbacks, by using the following templates:
- `<%= error_message %>`: will be replaced by the error message raised by the database.
- `<%= job_id %>`: will be replaced by the job identifier that Batch Queries provides.
This is helpful when you want to save error messages into a table:
```bash
curl -X POST -H "Content-Type: application/json" -d '{
"query": {
"query": [{
"query": "UPDATE wrong_table SET price = '$100.00' WHERE company = 'CARTO'"
}],
"onerror": "INSERT INTO errors_log (job_id, error_message, date) VALUES ('<%= job_id %>', '<%= error_message %>', NOW())"
}
}' "http://{username}.carto.com/api/v2/sql/job"
```
More templates are coming soon.
### Fetching Job Results
In some scenarios, you may need to fetch the output of a job. If that is the case, wrap the query with `SELECT * INTO`, or `CREATE TABLE AS`. The output is stored in a new table in your database. For example, if using the query `SELECT * FROM airports`:
1. Wrap the query `SELECT * INTO job_result FROM (SELECT * FROM airports) AS job`
2. [Create a job](#create-a-job), as described previously
3. Once the job is done, fetch the results through the [CARTO SQL API]({{ site.sqlapi_docs }}/guides/) with `SELECT * FROM job_result` (see the example requests below)
**Note:** If you need to create a map or analysis with the new table, use the [CDB_CartodbfyTable function](https://github.com/CartoDB/cartodb-postgresql/blob/master/doc/cartodbfy-requirements.rst).
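The following is a hedged sketch of that flow using the hypothetical `airports` table from the steps above; `{username}`, `{api_key}` and the `job_result` table name are placeholders.
```bash
# 1. Create the job; the wrapped query stores its output in job_result.
curl -X POST -H "Content-Type: application/json" -d '{
  "query": "SELECT * INTO job_result FROM (SELECT * FROM airports) AS job"
}' "http://{username}.carto.com/api/v2/sql/job?api_key={api_key}"

# 2. Once the job status is "done", fetch the stored results through the SQL API.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT * FROM job_result" \
  --data-urlencode "api_key={api_key}"
```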
### Best Practices
For best practices, follow these recommended usage notes when using Batch Queries:
- Batch Queries are recommended for INSERT, UPDATE, and CREATE queries that manipulate and create new data, such as creating expensive indexes, applying updates over large tables, and creating tables from complex queries. Batch queries have no effect for SELECT queries that retrieve data but do not store the results in a table. For example, running a batch query using `SELECT * from my_dataset` will not produce any results.
- Batch Queries are not intended for large query payloads (e.g: inserting thousands of rows), use the [Import API](https://carto.com/docs/carto-engine/import-api/) for this type of data management.
- There is a limit of 16kb per job. The following error message appears if your job exceeds this size:
`Your payload is too large. Max size allowed is 16384 (16kb)`
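Since this limit applies to the whole JSON payload, a quick local check before creating the job can save a round trip. The snippet below is only a sketch; the payload content is hypothetical.
```bash
# Hedged sketch: measure the JSON payload size locally before creating a job.
# Jobs over 16384 bytes (16kb) are rejected by Batch Queries.
PAYLOAD='{"query": "UPDATE airports SET type = '\''international'\''"}'
echo -n "$PAYLOAD" | wc -c
```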

## Creating Tables with the SQL API
[Writing data to your CARTO account]({{ site.sqlapi_docs }}/guides/making-calls#write-data-to-your-carto-account) enables you to manage data through SQL queries; however, it does not automatically connect tables as datasets to _Your datasets_ dashboard in CARTO.
You must apply the `CDB_CartodbfyTable` function to a target table in order to create and display connected datasets in your account. This additional step of "CARTOfying" data is the process of converting an arbitrary PostgreSQL table into a valid CARTO table, and registering it into the system so that it can be used in the graphical user interface, and by the CARTO Engine, to generate maps and analysis.
### Create Tables
To create a visible table in CARTO, run the following SQL query with the SQL API:
```bash
CREATE TABLE {table_name}
(
  {column1} {data type},
  {column2} {data type},
  {column3} {data type},
  ...
);
```
While this begins the process of creating the structure for the table, it is still not visible in your dashboard. Run the following request to make the table visible.
```bash
SELECT cdb_cartodbfytable('{table_name}');
```
**Tip:** If you are a developer using an Enterprise account, you must also include the organization username as part of the request. For example:
```bash
SELECT cdb_cartodbfytable('{org_username}', '{table_name}');
```
The table is created and added as a connected dataset in _Your datasets_ dashboard. Refresh your browser to ensure that you can visualize it in your account. Once a table is connected to _Your datasets_ dashboard in CARTO, any modifications that you apply to your data through the SQL API are automatically updated.
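As a hedged end-to-end sketch, the two statements above can be sent through the SQL API with cURL; the table and column names here are hypothetical, and `--data-urlencode` handles the URL encoding.
```bash
# 1. Create a plain PostgreSQL table (hypothetical name and columns).
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=CREATE TABLE my_places (name text, latitude numeric, longitude numeric)" \
  --data-urlencode "api_key={api_key}"

# 2. "CARTOfy" it so it shows up as a connected dataset in Your datasets.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT cdb_cartodbfytable('my_places')" \
  --data-urlencode "api_key={api_key}"
```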
### Rename Tables
To rename a connected dataset in _Your datasets_ dashboard, run the following SQL query with the SQL API:
```bash
ALTER TABLE {table_name} RENAME TO {renamed_table_name};
```
It may take a few seconds for the connected table to appear renamed. Refresh your browser to ensure that you can visualize the changes in _Your datasets_ dashboard.
### Remove a Table
If you remove a table, **any maps using the connected dataset will be affected**. The deleted dataset cannot be recovered. Even if you create a new table with the same name as a removed table, CARTO still internalizes it as a different table.
To remove a connected dataset from _Your datasets_ dashboard, run the following SQL query with the SQL API:
```bash
DROP TABLE {table_name};
```
This removes the connected table from _Your datasets_ dashboard. Refresh your browser to ensure that the connected dataset was removed.

## Handling Geospatial Data
Handling geospatial data through the SQL API is easy. By default, *the_geom* is returned straight from the database, in a format called Well-Known Binary. There are a handful of ways you can transform your geometries into more useful formats.
The first is to use the format=GeoJSON method described above. Others can be handled through your SQL statements directly. For example, enclosing your the_geom in a function called [ST_AsGeoJSON](http://www.postgis.org/documentation/manual-svn/ST_AsGeoJSON.html) will allow you to use JSON for your data, but a GeoJSON string for your geometry column only. Alternatively, using the [ST_AsText](http://www.postgis.org/docs/ST_AsText.html) function will return your geometry as Well-Known Text.
#### ST_AsGeoJSON
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT cartodb_id,ST_AsGeoJSON(the_geom) as the_geom FROM {table_name} LIMIT 1
```
##### Result
```javascript
{
  time: 0.003,
  total_rows: 1,
  rows: [
    {
      cartodb_id: 1,
      the_geom: "{"type":"Point","coordinates":[-97.3349,35.4979]}"
    }
  ]
}
```
#### ST_AsText
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT cartodb_id,ST_AsText(the_geom) FROM {table_name} LIMIT 1
```
##### Result
```javascript
{
  time: 0.003,
  total_rows: 1,
  rows: [
    {
      cartodb_id: 1,
      the_geom: "POINT(-74.0004162 40.6920918)"
    }
  ]
}
```
More advanced methods exist in the PostGIS library to extract meaningful data from your geometry. Explore the PostGIS documentation and get familiar with functions such as [ST_XMin](http://www.postgis.org/docs/ST_XMin.html), [ST_XMax](http://www.postgis.org/docs/ST_XMax.html), [ST_AsText](http://www.postgis.org/docs/ST_AsText.html), and so on.
All data returned from *the_geom* column is in WGS 84 (EPSG:4326). You can change this quickly on the fly by using SQL. For example, if you prefer geometries using the Hanoi 1972 (EPSG:4147) projection, use [ST_Transform](http://www.postgis.org/docs/ST_Transform.html):
#### ST_Transform
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT ST_Transform(the_geom,4147) FROM {table_name} LIMIT 1
```
CARTO also stores a second geometry column, *the_geom_webmercator*. We use this internally to build your map tiles as fast as we can. It is hidden in the user interface, but it is available for use in your queries. In this column, we store a reprojected version of all your geometries using Web Mercator (EPSG:3857).

## Metrics
CartoDB-SQL-API metrics:
### Timers
- **sqlapi.query**: time to return a query result set from the API, split into:
    + **sqlapi.query.init**: time to prepare params from the request
    + **sqlapi.query.getDBParams**: time to retrieve the database connection params
    + **sqlapi.query.authenticate**: time to determine if the request is authenticated
    + **sqlapi.query.setDBAuth**: time to set the authenticated connection params
    + **sqlapi.query.queryExplain**: time to retrieve affected tables from the query
    + **sqlapi.query.eventedQuery**: (pg) time to prepare and execute the query
    + **sqlapi.query.beforeSink**: time to start sending the response
    + **sqlapi.query.gotRows**: time until all rows in the result set have been processed
    + **sqlapi.query.generate**: time to prepare and generate a response from ogr
    + **sqlapi.query.finish**: time to handle an exception
### Counters
- **sqlapi.query.success**: number of successful queries
- **sqlapi.query.error**: number of failed queries

## Query Optimizations
There are some tricks to consider when using the SQL API that might make your application a little faster.
* Only request the fields you need. Selecting all columns will return a full version of your geometry in *the_geom*, as well as a reprojected version in *the_geom_webmercator*
* Use PostGIS functions to simplify and filter out unneeded geometries when possible. One very handy function is [ST_Simplify](http://www.postgis.org/docs/ST_Simplify.html); see the combined example after this list
* Remember to build indexes that will speed up some of your more common queries. For details, see [Creating Indexes](#creating-indexes)
* Use *cartodb_id* to retrieve specific rows of your data; this is the unique key column added to every CARTO table. For a sample use case, view the [Faster data updates with CARTO](https://carto.com/blog/faster-data-updates-with-cartodb/) blogpost
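The following is a hedged sketch that combines the first two tips, assuming a hypothetical table with a `name` column; it requests only the needed fields and simplifies the geometry server-side (the 0.01 tolerance is arbitrary).
```bash
# Hypothetical example: fetch only the columns you need and simplify the
# geometry before it leaves the database.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT cartodb_id, name, ST_AsGeoJSON(ST_Simplify(the_geom, 0.01)) AS the_geom FROM {table_name}"
```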
### Creating Indexes
To improve map performance, advanced users can use the SQL API to add custom indexes to their data. Creating indexes is useful if you have a large dataset with filtered data. By indexing selected data, you improve the performance of the map and generate results faster. The index functionality is useful in the following scenarios:
- If you are filtering a dataset by values in one or more columns
- If you are regularly querying data through the SQL API, and filtering by one or more columns
- If you are creating Torque maps on very large datasets. Since Torque maps are based on time-sensitive data (i.e. a date or numeric column), creating an index on the time data is optimal
Indexed data is typically a single column representing filtered data. To create a single column index, apply this SQL query to your dataset:
{% highlight bash %}
CREATE INDEX idx_{DATASET_NAME}_{COLUMN_NAME} ON {DATASET_NAME} ({COLUMN_NAME})
{% endhighlight %}
**Tip:** You can also apply more advanced, multi-column indexes. Please review the full documentation about [PostgreSQL Indexes](http://www.postgresql.org/docs/9.1/static/sql-createindex.html) before proceeding.
**Note:** Indexes count toward the amount of data storage associated with your account, so be mindful when creating custom indexes. Indexes automatically generated by CARTO, such as those on the `the_geom` and `cartodb_id` columns, are _not_ counted against your quota; these are used to index geometries for your dataset and are not associated with storage.
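As a hedged illustration of the multi-column case mentioned in the tip above, an index can cover several columns that are filtered together; the column names here are hypothetical placeholders.
```bash
-- Hypothetical multi-column index for queries that filter by date and category together
CREATE INDEX idx_{DATASET_NAME}_date_category ON {DATASET_NAME} (date_column, category_column)
```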

## Making Calls to the SQL API
CARTO is based on the rock solid PostgreSQL database. All of your tables reside in a single database, which means you can perform complex queries joining tables, or carrying out geospatial operations. The best place to learn about PostgreSQL's SQL language is the [official documentation](http://www.postgresql.org/docs/9.1/static/).
CARTO is also based on PostGIS, so you can view the [official PostGIS reference](http://postgis.refractions.net/docs/) to know what functionality we support in terms of geospatial operations. All of our tables include a column called *the_geom,* which is a geometry field that indexes geometries in the EPSG:4326 (WGS 1984) coordinate system. All tables also have an automatically generated and updated column called *the_geom_webmercator*. We use the column internally to quickly create tiles for maps.
### URL Endpoints
All SQL API requests to your CARTO account should follow this general pattern:
##### SQL Query Example
```bash
https://{username}.carto.com/api/v2/sql?q={SQL statement}
```
If you encounter errors, double-check that you are using the correct account name, and that your SQL statement is valid. A simple example of this pattern is conducting a count of all the records in your table:
##### Count Example
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT count(*) FROM {table_name}
```
##### Result
```javascript
{
  time: 0.007,
  total_rows: 1,
  rows: [
    {
      count: 4994
    }
  ]
}
```
Finally, remember that in order to use the SQL API, either your table must be public, or you must be [authenticated](http://docs.carto.com/carto-engine/sql-api/authentication/#authentication) using API Keys.
### POST and GET
The CARTO SQL API is set up to handle both GET and POST requests. You can test the GET method directly in your browser. Below is an example of a jQuery SQL API request to CARTO:
#### jQuery
##### Call
```javascript
$.getJSON('https://{username}.carto.com/api/v2/sql/?q='+sql_statement, function(data) {
  $.each(data.rows, function(key, val) {
    // do something!
  });
});
```
By default, GET requests work from anywhere. In CARTO, POST requests work from any website as well. We achieve this by hosting a cross-domain policy file at the root of all of our servers. This allows you the greatest level of flexibility when developing your application.
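The following is a hedged cURL sketch of the same kind of query sent as a POST request; the table name is a placeholder, and `--data-urlencode` both encodes the statement and makes the request a POST.
```bash
# Hypothetical example: send the SQL statement in the POST body instead of the URL.
curl "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT count(*) FROM {table_name}"
```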
### Response Formats
The SQL API accepts many output formats that can be useful to export data, such as:
- GPKG
- CSV
- SHP
- SVG
- KML
- SpatiaLite
- GeoJSON
The most common response format used is JSON. For example, if you are building a web-application, the lightweight JSON format allows you to quickly integrate data from the SQL API. This section focuses on the call and response functions for generating the JSON output format.
#### JSON
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT * FROM {table_name} LIMIT 1
```
##### Result
```javascript
{
  time: 0.006,
  total_rows: 1,
  rows: [
    {
      year: " 2011",
      month: 10,
      day: "11",
      the_geom: "0101000020E610...",
      cartodb_id: 1,
      the_geom_webmercator: "0101000020110F000..."
    }
  ]
}
```
Alternatively, you can use the [GeoJSON specification](http://www.geojson.org/geojson-spec.html) to return data from the API. To do so, simply supply the `format` parameter as GeoJSON:
#### GeoJSON
##### Call
```bash
https://{username}.carto.com/api/v2/sql?format=GeoJSON&q=SELECT * FROM {table_name} LIMIT 1
```
##### Result
```javascript
{
  type: "FeatureCollection",
  features: [
    {
      type: "Feature",
      properties: {
        year: " 2011",
        month: 10,
        day: "11",
        cartodb_id: 1
      },
      geometry: {
        type: "Point",
        coordinates: [
          -97.335,
          35.498
        ]
      }
    }
  ]
}
```
### Output Filename
To customize the output filename, add the `filename` parameter to your URL:
##### Call
```bash
https://{username}.carto.com/api/v2/sql?filename={custom_filename}&q=SELECT * FROM {table_name} LIMIT 1
```
### Getting Table Information
Currently, there is no public method to access your table schemas. The simplest way to retrieve the table structure is to access the first row of the data:
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT * FROM {table_name} LIMIT 1
```
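Because the SQL API runs regular PostgreSQL, a hedged alternative is to query the standard `information_schema` catalog for the column names and types of a table; the table name below is a placeholder.
```bash
# Hypothetical example: list column names and data types for one table.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT column_name, data_type FROM information_schema.columns WHERE table_name = '{table_name}'"
```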
### Response Errors
To help you debug your SQL queries, the CARTO SQL API returns the full error provided by PostgreSQL as part of the JSON response. Error responses appear in the following format:
##### Result
```javascript
{
  error: [
    "syntax error at or near "LIMIT""
  ]
}
```
You can use these errors to help understand your SQL. If you encounter errors executing SQL, either through CARTO Builder, or through the SQL API, it is suggested to Google search the error for independent troubleshooting.
### Write Data to your CARTO Account
When writing data to your CARTO account, you are executing SQL queries to manage data in a table. Performing inserts or updates on your data is achieved by using your [API Key]({{ site.sqlapi_docs }}/guides/authentication/). Simply supply a well-formatted SQL [INSERT](http://www.postgresql.org/docs/9.1/static/sql-insert.html) or [UPDATE](http://www.postgresql.org/docs/9.1/static/sql-update.html) statement for your table, along with the api_key parameter for your account.
**Tip:** All requests should be private, as anyone with your API Key will be able to modify your tables.
A well-formatted SQL insert statement means that all of the columns that you want to insert into your data already exist in your table, and all the values for those columns are the correct type (quoted string, unquoted string for geoms and dates, or numbers).
#### Insert
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=INSERT INTO test_table (column_name, column_name_2, the_geom) VALUES ('this is a string', 11, ST_SetSRID(ST_Point(-110, 43),4326))&api_key={api_key}
```
Updates are just as simple. The following example displays how to update a row based on the value of the `cartodb_id` column.
#### Update
##### Call
```bash
https://{username}.carto.com/api/v2/sql?q=UPDATE test_table SET column_name = 'my new string value' WHERE cartodb_id = 1&api_key={api_key}
```

## API Version Number
Ensure that you are using the [latest version](https://github.com/CartoDB/CartoDB-SQL-API) of our SQL API. For example, you can check that you are using **Version 2** by looking at your request URLs. They should all contain **/v2/**, as follows: `https://{username}.carto.com/api/v2/sql`

## Libraries in Different Languages
To make things easier for developers, we provide client libraries for different programming languages and caching functionalities.
**Note:** These libraries are externally developed and maintained. Use caution when using libraries in different languages, as some of these resources may be out-of-date.
- **R**
To help more researchers use CARTO to drive their geospatial data, we have released the R client library. [Fork it on GitHub!](https://github.com/Vizzuality/cartodb-r)
- **NODE.js**
This demo app authenticates with your CARTO account and shows how to perform read and write queries using the SQL API. [Fork it on GitHub!](https://github.com/Vizzuality/cartodb-nodejs)
- **PHP**
The PHP library provides a wrapper around the SQL API to get PHP objects straight from SQL calls to CARTO. [Fork it on GitHub!](https://github.com/Vizzuality/cartodbclient-php)
- **PYTHON**
Provides API Key access to SQL API. [Fork it on GitHub!](https://github.com/vizzuality/cartodb-python)
- **JAVA**
A very basic example of how to access the CARTO SQL API. [Fork it on GitHub!](https://github.com/cartodb/cartodb-java-client)
- **.NET**
.NET library for authenticating with CARTO using an API Key, based on work started by [The Data Republic](http://www.thedatarepublic.com/). [Fork it on GitHub!](https://github.com/thedatarepublic/CartoDBClientDotNET)
- **Clojure**
Clojure library for authenticating with CARTO, maintained by [REDD Metrics](http://www.reddmetrics.com/). [Fork it on GitHub!](https://github.com/reddmetrics/cartodb-clj)
- **iOS**
Objective-C library for interacting with CARTO in native iOS applications. [Fork it on GitHub!](https://github.com/jmnavarro/cartodb-objectivec-client)
- **Golang**
A Go client for the CARTO SQL API that supports authentication using an API key. [Fork it on GitHub!](https://github.com/agonzalezro/cartodb_go)

## Other Tips and Questions
### What does CARTO do to prevent SQL injection?
CARTO uses the database access mechanism for security. Every writable connection is verified by an API Key. If you have the correct API Key, you have write access to the database. If you do not have the correct API Key, your client is "logged in" as a low-privilege user, and you have read-only access to the database (if the database allows you to read).
SQL injection works by tricking a database user so that running a query retrieves database-wide results, even though the database is protected.
Because CARTO enforces roles and access at the database level, the idea of a "SQL injection attack" is not possible with CARTO. Injection is possible, but clients will still run into our security wall at the database level. The SQL API already lets you _attempt_ to run any query you want. The database will reject your SQL API request if it finds your user/role does not have the requisite permissions. In other words, you can ask any question of the database you like; the CARTO Engine does not guarantee it will be answered.
If a user's API Key found its way out into the wild, that could be a problem, but it is not something CARTO can prevent. _This is why it is very important for all CARTO users to secure their API Keys_. In the event a user's API Key is compromised, the user (or the CARTO Enterprise administrator), can regenerate the API Key in their account settings.
**Note:** While the SQL API is SQL injection secure, if you build additional layers to allow another person to run queries (i.e., building a proxy so that others can indirectly perform authenticated queries through the SQL API), the security of those newly added layers are the responsibility of the creator.
### What levels of database access can roles/users have?
There are three levels of access with CARTO:
1. __API Key level:__ Do whatever you want in your account on the tables you own (or have been shared with you in Enterprise/multi-user accounts).
2. __"publicuser" level:__ Do whatever has been granted to you. The publicuser level is normally read-only, but you could GRANT INSERT/UPDATE/DELETE permissions to publicuser if needed for some reason - for API Key-less write operations. Use with caution.
3. __postgres superadmin level:__ This third access level, the actual PostgreSQL system user, is only accessible from a direct database connection via the command line, which is only available currently via [CARTO On-Premises](https://carto.com/on-premises/).
### If a user has write access and makes a `DROP TABLE` query, is that data gone?
Yes. Grant write access with caution and keep backups of your data elsewhere / as duplicate CARTO tables.
### Is there a permission available where a user can write but not `DROP` or `DELETE`?
Yes. Create the table, and GRANT INSERT/UPDATE to the user.
### Is there an actual PostgreSQL account for each CARTO login/username?
Yes, there is. Unfortunately, the names are different, though there is a way to determine the name of the PostgreSQL user account. Every CARTO user gets their own PostgreSQL database, but there is a system database too, with the name mappings in the `username` and `database_name` columns. `database_name` is the name of the database that the user belongs to; it will be `cartodb_user_ID`, where `id` holds a long hash key. The `database_name` is derived from this ID hash too, but in the case of an Enterprise/multi-user account it will come from the user ID of the owner of the organization, and `database_name` will hold the same value for every user in an Enterprise/multi-user account.
You can also run `SELECT user` through the SQL API to determine the name of the corresponding PostgreSQL user: without an API Key it returns the publicuser name, and with an API Key it returns the CARTO user's PostgreSQL user name.
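A hedged example of that check with cURL follows; drop the `api_key` parameter to see the publicuser role instead.
```bash
# Hypothetical example: return the PostgreSQL role the request runs as.
curl -G "https://{username}.carto.com/api/v2/sql" \
  --data-urlencode "q=SELECT user" \
  --data-urlencode "api_key={api_key}"
```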
### Can I configure my CARTO database permissions exactly the same way I do on my own PostgreSQL instance?
Yes, by issuing GRANT statements through the SQL API. There are a few caveats to be aware of, including the aforementioned naming differences. Also, you will be limited to the permissions a user has over their own tables. Users do not have PostgreSQL superuser privileges, so they cannot create languages, C functions, or anything else that requires superuser or CREATEUSER privileges.
### How can I export CARTO datasets with the SQL API?
You can use the SQL API to run any query and export the results in different formats, such as a CSV or GeoPackage. This is helpful for accessing your datasets offline.
**Note:** View the [response formats]({{ site.sqlapi_docs }}/guides/making-calls/#response-formats) that are available with the SQL API and ensure that your dataset does not exceed the maximum file size for [SQL API exports](https://carto.com/docs/faqs/carto-engine-usage-limits/#sql-api-limits).
#### Export Datasets as a GeoPackage
You can easily export CARTO datasets using the [GeoPackage](http://www.geopackage.org/) file format, which is an "open, standards-based, platform-independent, portable, self-describing, compact format for transferring geospatial information". A .gpkg file is itself a [type](http://www.geopackage.org/spec/#table_column_data_types) of database, more complex than a plain file.
_**Tip:** GeoPackage is the recommended format since it exports your dataset in smaller pieces, typically avoiding error messages that might appear due to long file names and/or large datasets._ If exporting a map with the SQL API, the `GPKG` format does not include any visualization or styling, which helps reduce the file size during the export process.
```bash
https://{username}.carto.com/api/v2/sql?q=SELECT * FROM {table_name}&format=gpkg&filename={file_name}.gpkg
```
The response is a `{file_name}.gpkg` file that you can download for use offline.
##### Example of `GPKG` File
- From your Internet browser, copy and paste the following link into a new tab and press Enter.
```bash
https://builder-demo.carto.com/api/v2/sql?q=SELECT+*+FROM+san_francisco_airbnbs&format=gpkg&filename=san_francisco_airbnbs.gpkg
```
A `.gpkg` file is downloaded by your web browser and is available for use offline.
#### Download Datasets as a URL
You can use a request URL to run a query and download the results in different formats. For example, the following sample code shows the CSV export format for an SQL API request.
```bash
https://{username}.carto.com/api/v2/sql?format=csv&q=SELECT+*+FROM+tm_world_borders_sim
```
The response creates a direct dataset URL that you can download for use offline.
#### Why can't I see my created tables in my CARTO account?
The SQL API automatically displays tables in CARTO if you follow these steps:
- Create a table that has been "cartodbfied", which prepares your table to be compatible with CARTO. View [Creating Tables with the SQL API]({{site.sqlapi_docs}}/guides/creating-tables/#creating-tables-with-the-sql-api) to learn about this function.
- After creating your "cartodbfied" table, you must login to your CARTO account and open _Your datasets_ dashboard. Logging in initiates a check between the database and your account.
**Note:** There is an expected refresh time while the database is checking your account and your tables may not appear at this time, especially if there are a lot of tables or tables with a large amount of rows.
- Once the database updates, CARTO will display your created or changed tables as connected datasets!
#### What happens if I remove a table that is used in an existing map?
If you [drop]({{ site.sqlapi_docs }}/guides/creating-tables/#remove-a-table) a table using the SQL API, be mindful that there is no warning that the table may be used in an existing map. This is by design. Any maps using a removed table will be missing data and thus will be deleted.
If you are unsure about which tables are connected to maps, it is suggested to remove tables from _Your datasets_ dashboard in CARTO, which automatically notifies you of any connected maps in use.