f0b5b7e79f
The previous version of this file was enough to cache requests for the SQL API, but no traffic was ever reaching Varnish to be cached: Nginx was proxying directly to the SQL API port, while Varnish was listening on 6081, so it never saw those requests. I updated the Nginx proxy config to point at 6081 for both SQL API and Windshaft requests, so Varnish now receives the traffic. To let Varnish know which backend each request belongs to, I added a custom HTTP header in the Nginx proxy pass; that header is read in the `vcl_recv` Varnish subroutine and used to switch between backends. I've also added logic controlling which hosts can issue an HTTP PURGE command, in this case just localhost, since everything runs on a single image. The purges will typically come from a Postgres trigger. For an overview of the purge-related changes, see the Varnish docs here: https://varnish-cache.org/docs/3.0/tutorial/purging.html#http-purges
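
The Nginx side of the change looks roughly like the sketch below: both proxied locations now point at Varnish on 6081 and tag the request so the VCL can route it. The header name (`X-Carto-Service`) and the location paths are illustrative assumptions, not copied from `cartodb.nginx.proxy.conf`.

```nginx
server {
    listen 80;

    # SQL API requests go through Varnish, tagged for the SQL API backend.
    location /api/v2/sql {
        proxy_set_header Host $host;
        proxy_set_header X-Carto-Service "sqlapi";     # routing header (assumed name)
        proxy_pass http://127.0.0.1:6081;              # Varnish, not the SQL API port
    }

    # Windshaft (tiler) requests go through Varnish as well.
    location /tiles {
        proxy_set_header Host $host;
        proxy_set_header X-Carto-Service "windshaft";  # routing header (assumed name)
        proxy_pass http://127.0.0.1:6081;
    }
}
```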
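
On the Varnish side, a minimal sketch of the backend switching and the localhost-only PURGE handling, using VCL 3.0 syntax to match the linked docs. Backend names, ports, and the `X-Carto-Service` header are assumptions for illustration, not copied from `varnish.vcl`.

```vcl
backend sqlapi {
    .host = "127.0.0.1";
    .port = "8080";      # assumed SQL API port
}

backend windshaft {
    .host = "127.0.0.1";
    .port = "8181";      # assumed Windshaft port
}

# Only localhost may purge, since everything runs on one image.
acl purge {
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    # Reject PURGE from anywhere but localhost; otherwise look up the object.
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    # Route based on the header set in the Nginx proxy pass.
    if (req.http.X-Carto-Service == "sqlapi") {
        set req.backend = sqlapi;
    } else {
        set req.backend = windshaft;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
```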
Files changed:

- app_config.yml
- CartoDB-dev.js
- cartodb.nginx.proxy.conf
- database.yml
- varnish.vcl
- WS-dev.js