Copy queries allow you to use the [PostgreSQL copy command](https://www.postgresql.org/docs/10/static/sql-copy.html) for efficient streaming of data to and from CARTO.
The support for copy is split across two API end points:
* `http://{username}.carto.com/api/v2/copyfrom` for uploading data to CARTO
* `http://{username}.carto.com/api/v2/copyto` for exporting data out of CARTO
## Copy From
The PostgreSQL `COPY` command is extremely fast, but requires very precise inputs:
* A `COPY` command that describes the table and columns of the upload file, and the format of the file.
* An upload file that exactly matches the `COPY` command.
"Copy from" copies data "from" your file, "to" CARTO. "Copy from" uses [multipart/form-data](https://stackoverflow.com/questions/8659808/how-does-http-file-upload-work) to stream an upload file to the server. This avoids limitations around file size and any need for temporary storage: the data travels from your file straight into the database.
The `copyfrom` end point expects three parameters:

* `api_key` provided in the request URL parameters.
* `sql` provided either in the request URL parameters or in a multipart form variable.
* `file` provided as a multipart form variable; this is the actual file content, not a filename.
Composing a multipart form data upload is moderately complicated, so almost all developers will use a tool or scripting language to upload data to CARTO via "copy from".
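To see what those tools are doing under the hood, here is an illustrative sketch (not part of any CARTO library) of how a `multipart/form-data` body is assembled by hand, with the `sql` field placed before the `file` field:

```python
import io
import uuid

def build_copyfrom_body(sql, file_name, file_bytes):
    # Illustrative helper: assembles a multipart/form-data body by hand,
    # emitting the "sql" field before the "file" field.
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(b"--" + boundary.encode() + b"\r\n")
    body.write(b'Content-Disposition: form-data; name="sql"\r\n\r\n')
    body.write(sql.encode() + b"\r\n")
    body.write(b"--" + boundary.encode() + b"\r\n")
    body.write(b'Content-Disposition: form-data; name="file"; filename="'
               + file_name.encode() + b'"\r\n')
    body.write(b"Content-Type: application/octet-stream\r\n\r\n")
    body.write(file_bytes + b"\r\n")
    body.write(b"--" + boundary.encode() + b"--\r\n")
    return boundary, body.getvalue()

boundary, body = build_copyfrom_body(
    "COPY upload_example (the_geom, name, age) FROM STDIN WITH (FORMAT csv, HEADER true)",
    "upload_example.csv",
    b"the_geom,name,age\n",
)
```

A real client would POST `body` with a `Content-Type: multipart/form-data; boundary=...` header; in practice an HTTP library such as `requests` handles all of this for you.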
### Example
For a table to be readable by CARTO, it must have a minimum of three columns with specific data types:
* `cartodb_id`, a `serial` primary key
* `the_geom`, a `geometry` in the EPSG:4326 projection (aka long/lat)
* `the_geom_webmercator`, a `geometry` in the EPSG:3857 projection (aka web mercator)
The `COPY` command to upload this file needs to specify the file format (CSV), the fact that there is a header line before the actual data begins, and to enumerate the columns that are in the file so they can be matched to the table columns.
The `FROM STDIN` option tells the database that the input will come from a data stream, and the SQL API will read our uploaded file and use it to feed the stream.
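For instance, an upload file matching a command like `COPY upload_example (the_geom, name, age) FROM STDIN WITH (FORMAT csv, HEADER true)` could be generated like this (the rows are made up; the `the_geom` values use EWKT with an explicit SRID, which PostGIS parses directly into geometry values):

```python
import csv

# Hypothetical rows; the_geom values are EWKT strings with an explicit SRID.
rows = [
    ("SRID=4326;POINT(-126 54)", "North West", 89),
    ("SRID=4326;POINT(-96 34)", "South Central", 99),
]

with open("upload_example.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header line first, to match the HEADER true option of the COPY command.
    writer.writerow(["the_geom", "name", "age"])
    writer.writerows(rows)
```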
To actually run the upload, you will need a tool or script that can generate a `multipart/form-data` POST, so here are a few examples in different languages.
The [curl](https://curl.haxx.se/) utility makes it easy to run web requests from the command-line, and supports multi-part file upload, so it can feed the `copyfrom` end point.
**Important:** When supplying the "sql" parameter as a form field, it must come **before** the "file" parameter, or the upload will fail. Alternatively, you can supply the "sql" parameter on the URL line.
### Python Example
The [Requests](http://docs.python-requests.org/en/master/user/quickstart/) library for HTTP makes doing a file upload relatively terse.
```python
import requests

api_key = {api_key}
username = {username}
upload_file = 'upload_example.csv'
sql = "COPY upload_example (the_geom, name, age) FROM STDIN WITH (FORMAT csv, HEADER true)"

url = "http://%s.carto.com/api/v2/copyfrom" % username
with open(upload_file, 'rb') as f:
    r = requests.post(url, params={'api_key': api_key, 'sql': sql}, files={'file': f})
print(r.text)
```
## Copy To

Using the `copyto` end point to extract data bypasses the usual JSON formatting applied by the SQL API, so it can dump more data, faster. However, it is restricted to the output formats supported by the PostgreSQL `COPY` command: text, CSV, and binary.
The Python to "copy to" is very simple, because the HTTP call is a simple GET. The only complexity in this example is at the end, where the result is streamed back block-by-block, to avoid pulling the entire download into memory before writing to a file.
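A minimal sketch of that pattern, assuming the same `upload_example` table as above (the placeholder credentials, block size, and file name are arbitrary):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed placeholders: substitute your own account and key.
username = "example_user"
api_key = "example_key"

# COPY ... TO stdout mirrors the upload command, streaming table rows out as CSV.
q = "COPY upload_example (the_geom, name, age) TO stdout WITH (FORMAT csv, HEADER true)"
url = "http://%s.carto.com/api/v2/copyto?%s" % (
    username,
    urlencode({"api_key": api_key, "q": q}),
)

def download(url, path, block_size=8192):
    # Stream the response to disk block-by-block, so the whole
    # download never has to fit in memory at once.
    with urlopen(url) as resp, open(path, "wb") as out:
        while True:
            block = resp.read(block_size)
            if not block:
                break
            out.write(block)

# download(url, "download_example.csv")
```

Uncommenting the final line performs the actual request; against a live account, the same `download` helper writes the response straight to `download_example.csv`.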