The absence of a foreign CDB_TableMetadata entry means that we cannot
really tell when a remote table was last modified.
Therefore we use NULL with the meaning of "I don't know when it
was last modified".
This should be taken into account in other caching layers, which can
adjust their headers accordingly.
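As a purely hypothetical sketch of how a consumer could react to that
NULL (only the convention "NULL = unknown" comes from this change; the
header values and the CTE are made up for illustration):
```
-- Illustrative only: decide cache headers from a possibly-NULL
-- last-modification time, where NULL means "unknown".
WITH last_modified(updated_at) AS (
  VALUES (NULL::timestamptz)   -- what we get for a foreign table with no metadata
)
SELECT CASE
         WHEN updated_at IS NULL THEN 'no-cache'   -- unknown: be conservative
         ELSE 'max-age=86400'                      -- known: cache normally
       END AS cache_control
FROM last_modified;
```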
The final order of the columns of a cartodbfied table wasn't uniquely
specified, so it could vary across PostgreSQL versions.
This was a problem in particular for getting deterministic test results.
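For reference, a query like the following (plain information_schema,
nothing CARTO-specific; 'my_table' is a placeholder) can be used in
tests to check that the resulting column order is now stable:
```
-- Inspect the column order of a cartodbfied table.
SELECT column_name, ordinal_position
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'my_table'
ORDER BY ordinal_position;
```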
This may happen with non-carto DBs, when checking the updated_at
times and not finding the corresponding remote.cdb_tablemetadata
imported from the foreign non-carto DB.
Instead of failing, return a NOW() timestamp, so that caching logic
simply assumes there may have been changes.
This makes it work today, and leaves open the possibility of adding
the required carto metadata for homogeneous caching in the future.
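A minimal sketch of that fallback, assuming the metadata is looked up
in the imported remote.cdb_tablemetadata table (column names and the
cast are assumptions, not the actual implementation):
```
-- Sketch: MAX(updated_at) over zero rows is NULL, so COALESCE falls back
-- to NOW() and callers assume the foreign table may have changed.
SELECT COALESCE(
         (SELECT MAX(updated_at)
            FROM remote.cdb_tablemetadata
           WHERE tabname::text = 'my_foreign_table'),
         NOW()
       ) AS updated_at;
```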
For Geocoding (and LDS use cases in general) it may come in
handy to exclude geometry columns from the list of columns to
synchronize. Otherwise they may be lost, overwritten with NULL values.
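For example, assuming the columns to skip are passed as a final array
argument (treat the exact parameter shape as an assumption, not the
documented signature):
```
-- Sketch: synchronize everything except the geometry columns, so any
-- geocoded geometries already present in dest1 are left untouched.
SELECT cartodb.CDB_SyncTable(
  'source1', 'public', 'dest1',
  '{the_geom,the_geom_webmercator}'   -- columns excluded from the sync (assumed parameter)
);
```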
Generate more unique temp table names when the CDB_SyncTable function
is executed multiple times within the same transaction.
When executed in isolation, there is always an implicit
surrounding transaction.
But when executed several times within the same transaction, the call
can fail with `ERROR: relation "src_sync_718794" already exists`.
E.g.:
```
BEGIN;
SELECT cartodb.CDB_SyncTable('source1', 'public', 'dest1');
SELECT cartodb.CDB_SyncTable('source2', 'public', 'dest2');
COMMIT;
```
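One way to get collision-free names (a sketch, not necessarily the
actual implementation) is to mix in something that changes between
calls inside the same transaction, for example:
```
-- Sketch: a name based only on txid_current() repeats within a transaction;
-- adding a random component (or clock_timestamp()) makes repeated calls safe.
SELECT format('src_sync_%s_%s', txid_current(), (random() * 1e9)::bigint) AS temp_name;
```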
With javitonino's help, greatly reduce the processing time by using
EXCEPT instead of NOT IN, which lets the planner use a `HashSetOp Except`
plan on the subqueries rather than a `Seq Scan` over `Materialize`'d
subtables.
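Schematically (src and dst stand in for the staging tables used during
the sync), the change is from a NOT IN anti-join to an EXCEPT set
operation:
```
-- Before: forces a Seq Scan over a Materialize'd subquery for every row.
SELECT cartodb_id FROM src
WHERE cartodb_id NOT IN (SELECT cartodb_id FROM dst);

-- After: planned as a HashSetOp Except over the two subqueries.
SELECT cartodb_id FROM src
EXCEPT
SELECT cartodb_id FROM dst;
```
For a NOT NULL key like cartodb_id the two forms return the same rows,
which is what makes the swap safe.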
It assumes there is a cartodb_id column in both source and target. It
does not perform unnecessary actions: only rows that differ between the
source and the target are touched. It respects augmented columns in the
target table, if they exist. It is meant to be efficient.
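A hypothetical end-to-end example (table names made up) showing the
cartodb_id requirement and an augmented column on the target:
```
-- Both tables need a cartodb_id; 'notes' exists only on the destination
-- ("augmented" column) and is expected to be preserved by the sync.
CREATE TABLE source1 (cartodb_id bigint PRIMARY KEY, name text);
CREATE TABLE dest1   (cartodb_id bigint PRIMARY KEY, name text, notes text);

INSERT INTO source1 VALUES (1, 'first'), (2, 'second');

SELECT cartodb.CDB_SyncTable('source1', 'public', 'dest1');
```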