sql
This commit is contained in:
parent f164cccbbf
commit 8bfa9f5dc7
9 lib/sql/.gitignore vendored Normal file
@@ -0,0 +1,9 @@
cartodb--*.sql
cartodb_version.sql
cartodb.control
results/
regression.*
expected/test
sql/test
.idea/*
*.swp
38 lib/sql/.travis.yml Normal file
@@ -0,0 +1,38 @@
dist: xenial
language: c
sudo: required

env:
  global:
    - PGUSER=postgres
    - PGDATABASE=postgres
    - PGOPTIONS='-c client_min_messages=NOTICE'
    - PGPORT=5432
    - POSTGIS_VERSION="2.5"

  matrix:
    - POSTGRESQL_VERSION="9.6"
    - POSTGRESQL_VERSION="10"
    - POSTGRESQL_VERSION="11"


before_install:
  - sudo service postgresql stop;
  - sudo apt-get remove postgresql* -y
  - sudo apt-get install -y --allow-unauthenticated --no-install-recommends --no-install-suggests postgresql-$POSTGRESQL_VERSION postgresql-client-$POSTGRESQL_VERSION postgresql-server-dev-$POSTGRESQL_VERSION postgresql-common
  - if [[ $POSTGRESQL_VERSION == '9.6' ]]; then sudo apt-get install -y postgresql-contrib-9.6; fi;
  - sudo apt-get install -y --allow-unauthenticated postgresql-$POSTGRESQL_VERSION-postgis-$POSTGIS_VERSION postgresql-$POSTGRESQL_VERSION-postgis-$POSTGIS_VERSION-scripts postgis postgresql-plpython-$POSTGRESQL_VERSION
  - sudo pg_dropcluster --stop $POSTGRESQL_VERSION main
  - sudo rm -rf /etc/postgresql/$POSTGRESQL_VERSION /var/lib/postgresql/$POSTGRESQL_VERSION
  - sudo pg_createcluster -u postgres $POSTGRESQL_VERSION main -- -A trust
  - sudo /etc/init.d/postgresql start $POSTGRESQL_VERSION || sudo journalctl -xe
  - sudo pip install redis==2.4.9

script:
  - make
  - sudo make install
  - make installcheck

after_failure:
  - pg_lsclusters
  - cat regression.out
  - cat regression.diffs
73 lib/sql/CONTRIBUTING.md Normal file
@@ -0,0 +1,73 @@
The development tracker for cartodb-postgresql is on github:
http://github.com/cartodb/cartodb-postgresql/

Bug fixes are best reported as pull requests over there.
Features are best discussed on the mailing list:
https://groups.google.com/d/forum/cartodb

Adding features to the extension
--------------------------------

Extension features are coded in scripts found under the
"scripts-available" directory. A feature can be a single function
or a group of functions with a specific scope.

The "scripts-enabled" directory contains symlinks to the scripts
in "scripts-available". Any symlink in that directory is automatically
included in the extension. Numbering can be used to enforce the order
in which those scripts are loaded.

Scripts are best written so that they are usable both for creation
and upgrade of the objects. This means using CREATE OR REPLACE for
the functions, and whatever it takes to check the existence of any previous
version of objects in other cases.

When adding a new function or modifying an existing one, make sure that the
[VOLATILITY](https://www.postgresql.org/docs/current/static/xfunc-volatility.html) and [PARALLEL](https://www.postgresql.org/docs/9.6/static/parallel-safety.html) categories are updated accordingly.


Although the extension will usually be installed in the "cartodb" schema, please
use @extschema@ to fully-qualify internal calls to avoid name clashes.
When you use postgis functions or types, please fully-qualify them by using
@postgisschema@ (it's changed to "public" by the install script) to avoid
pg_upgrade issues.
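
For instance, a minimal sketch of a schema-qualified function (the function name and body are hypothetical, only to illustrate the @extschema@/@postgisschema@ placeholders):

```sql
-- Hypothetical example: both the internal call and the PostGIS call are schema-qualified.
CREATE OR REPLACE FUNCTION @extschema@._CDB_Example_Buffer(g geometry, d double precision)
RETURNS geometry AS $$
  SELECT @postgisschema@.ST_Buffer(@extschema@.CDB_TransformToWebmercator(g), d);
$$ LANGUAGE SQL IMMUTABLE PARALLEL SAFE;
```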

Every new feature (as well as bugfixes) should come with a test case,
see the 'Writing testcases' section.

Writing testcases
-----------------

Tests reside in the test/ directory.
You can find information about how to write tests in test/README

Testing changes live
--------------------

Testing changes made during development requires upgrading
the extension into your test database.

During development the cartodb extension version doesn't change with
every commit, so testing the latest change requires cheating with PostgreSQL
to force the scripts to reload. To help with the cheating, "make install"
also installs migration scripts to go from "V" to "V"next and from "V"next
to "V". Example to upgrade a 0.2.0dev version:

```sql
ALTER EXTENSION cartodb UPDATE TO '0.2.0next';
ALTER EXTENSION cartodb UPDATE TO '0.2.0dev';
```

Starting with 0.2.0, the in-place reload can be done with an ad-hoc function:

```sql
SELECT cartodb.cdb_extension_reload();
```

A useful query:
```sql
SELECT * FROM pg_extension_update_paths('cartodb') WHERE path IS NOT NULL AND source = cdb_version();
```

## Submitting Contributions

* You will need to sign a Contributor License Agreement (CLA) before making a submission. [Learn more here](https://carto.com/contributions).
27 lib/sql/LICENSE Normal file
@@ -0,0 +1,27 @@
Copyright (c) 2014, Vizzuality
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
186 lib/sql/Makefile Normal file
@@ -0,0 +1,186 @@
# cartodb/Makefile

EXTENSION = cartodb
EXTVERSION = 0.28.1

SED = sed
AWK = awk

CDBSCRIPTS = \
	scripts-enabled/*.sql \
	scripts-available/CDB_SearchPath.sql \
	scripts-available/CDB_ExtensionPost.sql \
	scripts-available/CDB_ExtensionUtils.sql \
	scripts-available/CDB_Helper.sql \
	$(END)

UPGRADABLE = \
	unpackaged \
	0.1.0 \
	0.1.1 \
	0.2.0 \
	0.2.1 \
	0.3.0 \
	0.3.0dev \
	0.3.1 \
	0.3.2 \
	0.3.3 \
	0.3.4 \
	0.3.5 \
	0.3.6 \
	0.4.0 \
	0.4.1 \
	0.5.0 \
	0.5.1 \
	0.5.2 \
	0.5.3 \
	0.6.0 \
	0.7.0 \
	0.7.1 \
	0.7.2 \
	0.7.3 \
	0.7.4 \
	0.8.0 \
	0.8.1 \
	0.8.2 \
	0.9.0 \
	0.9.1 \
	0.9.2 \
	0.9.3 \
	0.9.4 \
	0.10.0 \
	0.10.1 \
	0.10.2 \
	0.11.0 \
	0.11.1 \
	0.11.2 \
	0.11.3 \
	0.11.4 \
	0.11.5 \
	0.12.0 \
	0.13.0 \
	0.13.1 \
	0.14.0 \
	0.14.1 \
	0.14.2 \
	0.14.3 \
	0.14.4 \
	0.15.0 \
	0.15.1 \
	0.16.0 \
	0.16.1 \
	0.16.2 \
	0.16.3 \
	0.16.4 \
	0.17.0 \
	0.17.1 \
	0.18.0 \
	0.18.1 \
	0.18.2 \
	0.18.3 \
	0.18.4 \
	0.18.5 \
	0.19.0 \
	0.19.1 \
	0.19.2 \
	0.20.0 \
	0.21.0 \
	0.22.0 \
	0.22.1 \
	0.22.2 \
	0.23.0 \
	0.23.1 \
	0.23.2 \
	0.24.0 \
	0.24.1 \
	0.25.0 \
	0.26.0 \
	0.26.1 \
	0.27.0 \
	0.27.1 \
	0.27.2 \
	0.28.0 \
	0.28.1 \
	$(EXTVERSION)dev \
	$(EXTVERSION)next \
	$(END)

UPGRADES = \
	$(shell echo $(UPGRADABLE) | \
	$(SED) 's/^/$(EXTENSION)--/' | \
	$(SED) 's/$$/--$(EXTVERSION).sql/' | \
	$(SED) 's/ /--$(EXTVERSION).sql $(EXTENSION)--/g')

GITDIR=$(shell test -d .git && echo '.git' || cat .git | $(SED) 's/^gitdir: //')

DATA_built = \
	$(EXTENSION)--$(EXTVERSION).sql \
	$(EXTENSION)--$(EXTVERSION)--$(EXTVERSION)next.sql \
	$(UPGRADES) \
	$(EXTENSION).control

EXTRA_CLEAN = cartodb_version.sql

DOCS = README.md
REGRESS_OLD = $(wildcard test/*.sql)
REGRESS_LEGACY = $(REGRESS_OLD:.sql=)
REGRESS = test_setup $(REGRESS_LEGACY)

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

$(EXTENSION)--$(EXTVERSION).sql: $(CDBSCRIPTS) cartodb_version.sql Makefile
	echo '\echo Use "CREATE EXTENSION $(EXTENSION)" to load this file. \quit' > $@
	cat $(CDBSCRIPTS) | \
		$(SED) -e 's/@extschema@/cartodb/g' \
		-e "s/@postgisschema@/public/g" >> $@
	echo "GRANT USAGE ON SCHEMA cartodb TO public;" >> $@
	cat cartodb_version.sql >> $@

$(EXTENSION)--unpackaged--$(EXTVERSION).sql: $(EXTENSION)--$(EXTVERSION).sql util/create_from_unpackaged.sh Makefile
	./util/create_from_unpackaged.sh $(EXTVERSION)

$(EXTENSION)--%--$(EXTVERSION).sql: $(EXTENSION)--$(EXTVERSION).sql
	cp $< $@

$(EXTENSION)--$(EXTVERSION)--$(EXTVERSION)next.sql: $(EXTENSION)--$(EXTVERSION).sql
	cp $< $@

$(EXTENSION).control: $(EXTENSION).control.in Makefile
	$(SED) -e 's/@@VERSION@@/$(EXTVERSION)/' $< > $@

cartodb_version.sql: cartodb_version.sql.in Makefile $(GITDIR)/index
	$(SED) -e 's/@@VERSION@@/$(EXTVERSION)/' -e 's/@extschema@/cartodb/g' -e "s/@postgisschema@/public/g" $< > $@

# Needed for consistent `echo` results with backslashes
SHELL = bash

legacy_regress: $(REGRESS_OLD) Makefile
	mkdir -p sql/test/
	mkdir -p expected/test/
	mkdir -p results/test/
	for f in $(REGRESS_OLD); do \
		tn=`basename $${f} .sql`; \
		of=sql/test/$${tn}.sql; \
		echo '\set ECHO none' > $${of}; \
		echo '\a' >> $${of}; \
		echo '\t' >> $${of}; \
		echo '\set QUIET off' >> $${of}; \
		cat $${f} | \
		$(SED) -e 's/@@VERSION@@/$(EXTVERSION)/' -e 's/@extschema@/cartodb/g' -e "s/@postgisschema@/public/g" >> $${of}; \
		exp=expected/test/$${tn}.out; \
		echo '\set ECHO none' > $${exp}; \
		cat test/$${tn}_expect >> $${exp}; \
	done

test_organization:
	bash test/organization/test.sh

test_extension_new:
	bash test/extension/test.sh

legacy_tests: legacy_regress

installcheck: legacy_tests test_extension_new test_organization
469 lib/sql/NEWS.md Normal file
@@ -0,0 +1,469 @@
|
||||
0.28.1 (2019-07-04)
|
||||
* Avoid temporary tables creation in CDB_SyncTable (#366)
|
||||
* Make CDB_Get_Foreign_Updated_At robust to missing CDB_TableMetadata (#362)
|
||||
|
||||
0.28.0 (2019-07-01)
|
||||
* New function CDB_SyncTable (#355)
|
||||
|
||||
0.27.2 (2019-06-21)
|
||||
* Improvements and fixes in Ghost tables functions (#360)
|
||||
|
||||
0.27.1 (2019-06-03)
|
||||
* Add some qualifications that were left in the previous release.
|
||||
|
||||
0.27.0 (2019-06-03)
|
||||
* Fully qualify function calls
|
||||
* Several improvements to bash tests.
|
||||
* Avoid dropping publicuser in tests.
|
||||
* Raise minimum requirement to PostgreSQL 9.6.
|
||||
|
||||
0.26.1 (2019-03-19)
|
||||
* Remove default TIS values from Ghost tables functions
|
||||
|
||||
0.26.0 (2019-03-11)
|
||||
* Use `ST_ShiftLongitude` instead of `ST_Shift_Longitude`.
|
||||
* Add Ghost tables functions to install triggers and enqueue the linking process
|
||||
|
||||
0.25.0 (2019-02-22)
|
||||
* Add `CDB_Username` to get the cartodb username from the current PostgreSQL user
|
||||
|
||||
0.24.1 (2019-01-02)
|
||||
* Drop functions removed in 0.12 (#341)
|
||||
* Travis: Test with PostgreSQL 9.5, 10 and 11.
|
||||
|
||||
0.24.0 (2018-09-13)
|
||||
* Travis: Test with PostgreSQL 9.5 and 10.
|
||||
* _cdb_estimated_extent: Fix bug with ST_EstimatedExtent interaction.
|
||||
* Improvements in `CDB_JenksBins`.
|
||||
* Now it ignores NULLs.
|
||||
* No longer puts the same value in multiple categories.
|
||||
* Removes all limits related to size.
|
||||
* If not set, the number of iterations done is based now on the size of the array.
|
||||
* Fixed multiple bugs.
|
||||
* The internal function `CDB_JenksBinsIteration` has changed its signature.
|
||||
|
||||
0.23.2 (2018-07-19)
|
||||
* Fix `CDB_QueryTablesText` with parenthesized queries (#335)
|
||||
|
||||
0.23.1 (2018-07-19)
|
||||
* Fix `CDB_EstimateRowCount` parallelizability #333
|
||||
|
||||
0.23.0 (2018-07-03)
|
||||
* Add a new helper function `_CDB_Table_Exists(table_name_with_optional_schema TEXT)` #332
|
||||
|
||||
0.22.2 (2018-05-29)
|
||||
* Fix: Fix hyphenated usernames in 0.22.1 fix (#331)
|
||||
|
||||
0.22.1 (2018-05-29)
|
||||
* Fix: Correctly grant permission to all sequences related with table (#330)
|
||||
|
||||
0.22.0 (2018-03-22)
|
||||
* Fix: allow older ogr2ogr to work in -append mode (#319,#325)
|
||||
* Refactors CDB_QuantileBins to rely on PostgreSQL function `percentile_disc` #316
|
||||
|
||||
0.21.0 (2018-02-15)
|
||||
* Add optional parameter to limit the number of cells in grid-generation functions #322
|
||||
* Fix: grant usage on cartodb_id sequence when sharing read write #323
|
||||
* Fix: Change sed in-place for tmpfiles 524319
|
||||
|
||||
0.20.0 (2017-11-08)
|
||||
* Added VOLATILITY and PARALLEL categories to all functions
|
||||
|
||||
0.19.2 (2017-06-30)
|
||||
* Improved functions to generate unique identifiers #305
|
||||
|
||||
0.19.1 (2017-06-05)
|
||||
|
||||
* Fixed a deadlock problem when trying to regenerate overviews #302
|
||||
|
||||
0.19.0 (2017-04-11)
|
||||
|
||||
* Add new function `CDB_EstimateRowCount` #295
|
||||
|
||||
0.18.5 (2016-11-30)
|
||||
|
||||
* Add two new overview creation strategies #290
|
||||
* Fix tests: race condition with publicuser #157
|
||||
* Fix: CDB_Stats divisions by zero #181
|
||||
* Better implementation of `CDB_EqualIntervalBins` #244
|
||||
* New tests for binning functions #249
|
||||
|
||||
0.18.4 (2016-11-04)
|
||||
|
||||
* No functional changes; fixes the migration from previous versions #288
|
||||
|
||||
0.18.3 (2016-11-03)
|
||||
|
||||
* Exclude analysis cache tables from the quota #281
|
||||
|
||||
0.18.2 (2016-10-20)
|
||||
-------------------
|
||||
|
||||
* Fix: cleanup inconsistent position of `username` column in analysis catalog after upgrades
|
||||
[#285](https://github.com/cartodb/cartodb-postgresql/pull/285)
|
||||
|
||||
0.18.1 (2016-10-19)
|
||||
-------------------
|
||||
|
||||
* Increase analysis limit factor to 2 [#284](https://github.com/CartoDB/cartodb-postgresql/pull/284)
|
||||
|
||||
0.18.0 (2016-10-17)
|
||||
-------------------
|
||||
|
||||
* Fix: exclude NULL geometries when creating Overviews #269
|
||||
* Function to check analysis tables limits #279
|
||||
|
||||
0.17.1 (2016-08-16)
|
||||
-------------------
|
||||
|
||||
* Add cache_tables column to cdb_analysis_catalog table #274.
|
||||
|
||||
|
||||
0.17.0 (2016-07-04)
|
||||
-------------------
|
||||
|
||||
* Add export config for cdb_analysis_catalog table #268.
|
||||
* Add some extra fields to cdb_analysis_catalog table. Track user, error_message for failures, and last entity modifying the node #267.
|
||||
* Exclude overviews from user data size #262.
|
||||
|
||||
|
||||
0.16.4 (2016-05-27)
|
||||
-------------------
|
||||
|
||||
* Change CDB_ZoomFromScale() to use a formula and raise
|
||||
maximum overview level from 23 to 29.
|
||||
[#259](https://github.com/CartoDB/cartodb-postgresql/pull/259)
|
||||
|
||||
* Fix bug in overview creating causing it to fail when `x` or
|
||||
`y` columns exist with non-integer type. Prevent also
|
||||
potential integer overflows limiting maximum overview level
|
||||
to 23.
|
||||
[#258](https://github.com/CartoDB/cartodb-postgresql/pull/258)
|
||||
|
||||
|
||||
0.16.3 (2016-05-09)
|
||||
-------------------
|
||||
|
||||
* Fix overview creation problem for organization users
|
||||
with names that require quoting:
|
||||
[#253](https://github.com/CartoDB/cartodb-postgresql/pull/253)
|
||||
|
||||
0.16.2 (2016-04-27)
|
||||
-------------------
|
||||
|
||||
* Use the mode to aggregate category columns in overviews
|
||||
[#246](https://github.com/CartoDB/cartodb-postgresql/pull/246)
|
||||
|
||||
0.16.1 (2016-04-25)
|
||||
-------------------
|
||||
|
||||
* Optimize column information functions performance
|
||||
[#238](https://github.com/CartoDB/cartodb-postgresql/pull/238)
|
||||
|
||||
* Adjust overview points to pixel CDB_EqualIntervalBins
|
||||
[#242](https://github.com/CartoDB/cartodb-postgresql/pull/242)
|
||||
|
||||
* Compute webmercator resolution using full numeric precision
|
||||
[#243](https://github.com/CartoDB/cartodb-postgresql/pull/243)
|
||||
|
||||
|
||||
0.16.0 (2016-04-15)
|
||||
-------------------
|
||||
* Adds table for storing camshaft analysis nodes
|
||||
[#237](https://github.com/CartoDB/cartodb-postgresql/pull/237)
|
||||
|
||||
0.15.1 (2016-04-15)
|
||||
-------------------
|
||||
* Fix problems with org users in overviews functions
|
||||
[#224](https://github.com/CartoDB/cartodb-postgresql/pull/224)
|
||||
* Add `_feature_count` to overviews
|
||||
[#227](https://github.com/CartoDB/cartodb-postgresql/pull/227)
|
||||
* Change point clustering behaviour of overviews
|
||||
[#228](https://github.com/CartoDB/cartodb-postgresql/pull/228)
|
||||
* Change default tolerance of overviews
|
||||
[#230](https://github.com/CartoDB/cartodb-postgresql/pull/230)
|
||||
* Fix problem with aggregated numerical fields in overviews
|
||||
[#233](https://github.com/CartoDB/cartodb-postgresql/pull/233)
|
||||
* Enhance aggregation of text fields in overviews
|
||||
[#234](https://github.com/CartoDB/cartodb-postgresql/pull/234)
|
||||
|
||||
0.15.0 (2016-04-05)
|
||||
-------------------
|
||||
* New function CDB_CreateOverviewsWithToleranceInPixels that adds tolerance parameter for overview creation
|
||||
[#221](https://github.com/CartoDB/cartodb-postgresql/pull/221)
|
||||
* New default value for the overviews tolerance in pixels is 2 (used to be 7.5) (also in #221)
|
||||
* The feature density limit used to choose the reference Z level now depends on the tolerance in pixels (also in #221)
|
||||
* Tables that require an explicit schema can now be passed to overview functions
|
||||
[#220](https://github.com/CartoDB/cartodb-postgresql/pull/220)
|
||||
|
||||
0.14.4 (2016-03-29)
|
||||
-------------------
|
||||
* Fix creating overviews for tables with boolean columns
|
||||
[#214](https://github.com/CartoDB/cartodb-postgresql/pull/214)
|
||||
* Fix tests for some systems [#215](https://github.com/CartoDB/cartodb-postgresql/pull/215)
|
||||
|
||||
0.14.3 (2016-03-17)
|
||||
-------------------
|
||||
* Fix for `cartodb_id` bigint casting hardcoded in 0.14.2 to support `cartodb_id` text columns [#210](https://github.com/CartoDB/cartodb-postgresql/pull/210)
|
||||
|
||||
0.14.2 (2016-03-15)
|
||||
-------------------
|
||||
* Support text `cartodb_id` columns in `_CDB_Has_Usable_Primary_ID` [#202](https://github.com/CartoDB/cartodb-postgresql/pull/202)
|
||||
|
||||
0.14.1 (2016-03-07)
|
||||
-------------------
|
||||
* Fully qualify table names in cache cdb_invalidate_varnish calls [#198](https://github.com/CartoDB/cartodb-postgresql/issues/198)
|
||||
|
||||
0.14.0 (2016-02-14)
|
||||
-------------------
|
||||
* Add CDB_ForeignTable.sql to support FDW's [#199](https://github.com/CartoDB/cartodb-postgresql/pull/199)
|
||||
|
||||
0.13.1 (2016-02-01)
|
||||
-------------------
|
||||
* Fix migration from unpackaged. [193](https://github.com/CartoDB/cartodb-postgresql/pull/193)
|
||||
|
||||
0.13.0 (2016-01-29)
|
||||
-------------------
|
||||
* Add CDB_CreateOverviews, CDB_DropOverviews and CDB_Overviews for vector overviews support. [185](https://github.com/CartoDB/cartodb-postgresql/pull/185)
|
||||
* Convert some simple functions from plpgsql to sql. [188](https://github.com/CartoDB/cartodb-postgresql/pull/188)
|
||||
|
||||
0.12.0 (2016-01-27)
|
||||
-------------------
|
||||
* Remove schema_triggers extension dependency, to ensure compatibility with PostgreSQL 9.5. [#190](https://github.com/CartoDB/cartodb-postgresql/pull/190)
|
||||
* Remove DDL trigger functions (unused by CartoDB).
|
||||
|
||||
0.11.5 (2015-11-27)
|
||||
-------------------
|
||||
* Disable log invalidation time [#178](https://github.com/CartoDB/cartodb-postgresql/pull/178)
|
||||
|
||||
0.11.4 (2015-11-24)
|
||||
-------------------
|
||||
* Fix for existing PK cartodb_id problem [#174](https://github.com/CartoDB/cartodb-postgresql/issues/174)
|
||||
* Add cartodbfication support for column names with embedded points to fix [#6114](https://github.com/CartoDB/cartodb/issues/6114)
|
||||
* Add CDB_GreatCircle for creating great circle routes between two points [#171](https://github.com/CartoDB/cartodb-postgresql/pull/171)
|
||||
* Fix to prevent cartodbfication problems [#155](https://github.com/CartoDB/cartodb-postgresql/issues/155)
|
||||
|
||||
0.11.3 (2015-10-27)
|
||||
-------------------
|
||||
* Added CDB_Helper.sql [#173](https://github.com/CartoDB/cartodb-postgresql/pull/173)
|
||||
* Added `_CDB_Unique_Identifier` for creating UTF8 aware unique identifiers
|
||||
* Added `_CDB_Unique_Column_Identifier` for creating UTF8 aware unique identifiers for columns
|
||||
* Added `_CDB_Octet_Truncate` that truncates text to a certain amount of octets.
|
||||
|
||||
0.11.2 (2015-10-19)
|
||||
-------------------
|
||||
* Fix schema not being specified on pg_get_serial_sequence [#170](https://github.com/CartoDB/cartodb-postgresql/pull/170)
|
||||
* Log invalidation function call duration in seconds [#163](https://github.com/CartoDB/cartodb-postgresql/pull/163)
|
||||
|
||||
0.11.1 (2015-10-06)
|
||||
-------------------
|
||||
* Added CDB_DateToNumber(timestamp with time zone) [#169](https://github.com/CartoDB/cartodb-postgresql/pull/169)
|
||||
* cartodbfy now discards cartodb_id candidates that contain nulls [#148](https://github.com/CartoDB/cartodb-postgresql/issues/148)
|
||||
|
||||
0.11.0 (2015-09-dd)
|
||||
-------------------
|
||||
* Groups API
|
||||
|
||||
0.10.2 (2015-09-24)
|
||||
-------------------
|
||||
* Add back the `DROP FUNCTION IF EXISTS CDB_UserTables(text);` to be able to upgrade from `0.7.3` upward [#160](https://github.com/CartoDB/cartodb-postgresql/issues/160)
|
||||
|
||||
0.10.1 (2015-09-16)
|
||||
-------------------
|
||||
* Get back the `update_updated_at` function (still used by old tables) [#143](https://github.com/CartoDB/cartodb-postgresql/pull/143)
|
||||
* Fix for CDB_StatsTest.sql test failing randomly [#144](https://github.com/CartoDB/cartodb-postgresql/issues/144)
|
||||
* Fix for table cartodbfy'ed without default seq value [#138](https://github.com/CartoDB/cartodb-postgresql/issues/138)
|
||||
* Fix for cartodbfy error column `the_geom` already exists [#141](https://github.com/CartoDB/cartodb-postgresql/issues/141)
|
||||
* Fix for columns with geometry cartodbfy'ed without SRID [#154](https://github.com/CartoDB/cartodb-postgresql/issues/154)
|
||||
|
||||
0.10.0 (2015-09-07)
|
||||
-----------------
|
||||
* Quote schema and table names returned by CDB_QueryTables [#134](https://github.com/CartoDB/cartodb-postgresql/pull/134). Use quote_ident to quote schema and table names when necessary.
|
||||
* Fixed CDB_ColumnNames [#122](https://github.com/CartoDB/cartodb-postgresql/issues/122) and CDB_ColumnType [#130](https://github.com/CartoDB/cartodb-postgresql/issues/130) should honor regclass, returning columns for just the table in the schema and not in any other one [#131](https://github.com/CartoDB/cartodb-postgresql/pull/131).
|
||||
* Add kurtosis and skewness [#124](https://github.com/CartoDB/cartodb-postgresql/pull/124).
|
||||
* Removed `DROP FUNCTION IF EXISTS cdb_usertables(text);` [#129](https://github.com/CartoDB/cartodb-postgresql/pull/129). This was needed for upgrading between 0.7.4 to 0.8.0 but is no longer needed.
|
||||
|
||||
0.9.4 (2015-08-28)
|
||||
------------------
|
||||
* Fixed issue with indices when renaming tables [#123](https://github.com/CartoDB/cartodb-postgresql/issues/123)
|
||||
|
||||
0.9.3 (2015-08-27)
|
||||
------------------
|
||||
* Modify sampling of quota trigger [#126](https://github.com/CartoDB/cartodb-postgresql/issues/126)
|
||||
|
||||
0.9.2 (2015-08-24)
|
||||
------------------
|
||||
* Fix for `the_geom` column present but not SRID (EWKT) and other corner cases [#121](https://github.com/CartoDB/cartodb-postgresql/pull/121)
|
||||
|
||||
0.9.1 (2015-08-19)
|
||||
------------------
|
||||
* Fix for transformation to webmercator in corner cases [#116](https://github.com/CartoDB/cartodb-postgresql/issues/116)
|
||||
|
||||
0.9.0 (2015-08-19)
|
||||
------------------
|
||||
* Re-implementation of `CDB_CartodbfyTable` functions
|
||||
- The signature of the main function changes to
|
||||
```
|
||||
FUNCTION CDB_CartodbfyTable(destschema TEXT, reloid REGCLASS)
|
||||
RETURNS REGCLASS
|
||||
```
|
||||
- The `destschema` does not need to match the origin schema of `reloid`
|
||||
- It returns the `regclass` of the cartodbfy'ed table, if it needs to be rewritten.
|
||||
- There are many optimizations
|
||||
- The columns `created_at` and `updated_at` will no longer be added
|
||||
* Fix for CDB_UserDataSize failing due to `ERROR: relation "*" does not exist.` #110
|
||||
* Review test to validate permissions in public tables [#112](https://github.com/CartoDB/cartodb-postgresql/pull/112)
|
||||
|
||||
0.8.3 (2015-08-14)
|
||||
------------------
|
||||
* Fixes CDB_UserDataSize failing due to `ERROR: relation "*" does not exist.` [#108](https://github.com/CartoDB/cartodb-postgresql/issues/108)
|
||||
|
||||
0.8.2 (2015-07-27)
|
||||
------------------
|
||||
* Fix for CDB_UserTables returning wrong listings when publicuser is used
|
||||
|
||||
0.8.1 (2015-06-30)
|
||||
------------------
|
||||
* Fix for [#95](https://github.com/CartoDB/cartodb-postgresql/issues/95) *cdb_usertables should return public tables when the user is publicuser*
|
||||
|
||||
0.8.0 (2015-06-30)
|
||||
------------------
|
||||
* Adds new function CDB_QueryTablesText that can deal with "schema.table_name"
|
||||
longer than 63 chars.
|
||||
* Adds a set of statistical functions:
|
||||
- CDB_DistType
|
||||
- CDB_DistinctMeasure
|
||||
- CDB_EqualIntervalBins
|
||||
* Fix for CDB_UserTables returns 0 entries for multiuser accounts [#64](https://github.com/CartoDB/cartodb-postgresql/issues/64)
|
||||
|
||||
0.7.4 (2015-06-29)
|
||||
------------------
|
||||
Dummy transitional version.
|
||||
|
||||
0.7.3 (2015-03-03)
|
||||
------------------
|
||||
* Fix upgrade of CDB_StringToDate function
|
||||
* Add a test for to validate CDB_TableMetadataTouch usage with OID
|
||||
|
||||
0.7.2 (2015-03-03)
|
||||
------------------
|
||||
* Fix conversion of strings to datetime
|
||||
|
||||
0.7.1 (2015-02-27)
|
||||
------------------
|
||||
* Revert quota checks to `pg_total_relation_size`
|
||||
|
||||
0.7.0 (2015-02-19)
|
||||
------------------
|
||||
* Adds CDB_ZoomFromScale function
|
||||
|
||||
0.6.0 (2015-02-19)
|
||||
------------------
|
||||
* Select permission in CDB_TableMetadata no longer granted to public
|
||||
* New function to upsert the updated_at in CDB_TableMetadata for a regclass
|
||||
|
||||
0.5.3 (2015-02-17)
|
||||
------------------
|
||||
* Fixed security problem related with system tables
|
||||
* Changed quota checks to use `pg_relation_size` instead of `pg_total_relation_size`
|
||||
|
||||
0.5.2 (2015-01-29)
|
||||
------------------
|
||||
* Improvement: make CDB_UserDataSize functions much faster.
|
||||
|
||||
0.5.1 (2014-11-21)
|
||||
------------------
|
||||
* Bugfix: Quota check and some organization permissions functions were not properly escaping table name.
|
||||
|
||||
0.5.0 (2014-11-03)
|
||||
------------------
|
||||
* Support of raster tables for cartodbfication
|
||||
* Modified quota functions: vector tables stay the same, raster tables count as full size (as they have no
the_geom + the_geom_webmercator combo) and raster overviews are not counted
|
||||
|
||||
0.4.1 (2014-09-21)
|
||||
------------------
|
||||
* Bugfix for Cartodbfication: Set primary key of the table if not already present (e.g. tables created from SQL API)
|
||||
|
||||
0.4.0 (2014-08-27)
|
||||
------------------
|
||||
Added CDB_Math_Mode function
|
||||
Changes in versioning: no revision is attached so it no longer uses `git describe` for the version.
|
||||
|
||||
0.3.6 (2014-08-11)
|
||||
------------------
|
||||
Dummy release to solve some issues with cdb branch/tag
|
||||
|
||||
0.3.5 (2014-08-11)
|
||||
------------------
|
||||
Inverting priority of CDB_CheckQuota qmax so it gives more priority to the existing user quota function over the parameter value.
|
||||
|
||||
0.3.4 (2014-08-01)
|
||||
------------------
|
||||
Fixes issue with schemas in CDB_QueryTables
|
||||
|
||||
0.3.3 (2014-07-30)
|
||||
------------------
|
||||
* Splitting of CartodbfyTable method in subfunctions to be able to call in fragments and evade timeouts on hot zones
|
||||
|
||||
0.3.2 (2014-07-28)
|
||||
------------------
|
||||
* Make 0.3.0dev version upgradeable
|
||||
|
||||
0.3.1 (2014-07-22)
|
||||
------------------
|
||||
* Dummy version. We start using semantic versioning
|
||||
|
||||
0.3.0 (2014-07-15)
|
||||
------------------
|
||||
* Permission management functions
|
||||
* Adapt functions to use schemas
|
||||
|
||||
0.2.1 - 2014-06-11
|
||||
------------------
|
||||
|
||||
Enhancements:
|
||||
|
||||
- Do not force re-cartodbfication on CREATE FROM unpackaged
|
||||
- Drop useless DEFAULT specification in plpgsql variable declarations
|
||||
- List plpythonu requirement first, to get pg_catalog scanned before public
|
||||
|
||||
Bug fixes:
|
||||
|
||||
- Do not add unique index on cartodb_id if already a primary key (#38)
|
||||
|
||||
0.2.0 - 2014-06-09
|
||||
------------------
|
||||
|
||||
Important changes:
|
||||
|
||||
- This release adds dependency on "plpythonu" extension
|
||||
- Roles are not created anymore, previously private functions
|
||||
for table information extraction (CDB_UserTables, CDB_TableIndexes,
|
||||
CDB_ColumnNames, CDB_ColumnType) will now be callable by anyone while
|
||||
only returning information about tables over which the calling user
|
||||
has SELECT privilege (#36)
|
||||
|
||||
Bug fixes:
|
||||
|
||||
- Fix recursive trigger on create table (#32)
|
||||
- Ensure cartodb_id uses an associated sequence (#33)
|
||||
- Fully qualify call to cdb_disable_ddl_hooks from cdb_enable_ddl_hooks
|
||||
- Fully qualify call to CDB_UserDataSize from quota trigger
|
||||
- Fully qualify call to CDB_TransformToWebmercator from CDB_CartodbfyTable
|
||||
- Fix potential infinite loop in CDB_CartodbfyTable
|
||||
- Fix potential infinite loop in CDB_QueryStatements
|
||||
|
||||
Enhancements:
|
||||
|
||||
- Include revision info in cdb_version() output (#34)
|
||||
|
||||
New features:
|
||||
|
||||
- Add a cdb_extension_reload() function
|
||||
|
||||
|
||||
0.1.0 - 2014-05-23
|
||||
------------------
|
||||
|
||||
Initial release
|
96 lib/sql/README.md Normal file
@@ -0,0 +1,96 @@
cartodb-postgresql
==================

[![Build Status](http://api.travis-ci.org/CartoDB/cartodb-postgresql.svg?branch=master)](http://travis-ci.org/CartoDB/cartodb-postgresql)

PostgreSQL extension for CartoDB

See [the cartodb-postgresql wiki](https://github.com/CartoDB/cartodb-postgresql/wiki).

Dependencies
------------

* PostgreSQL 9.6+ (with plpythonu extension and xml support)
* [PostGIS extension](http://postgis.net)
* Python with [Redis module](https://pypi.org/project/redis/)

Install
-------

```sh
make all install
```

Test installation
-----------------

```sh
make installcheck
```

NOTE: you need to run the installcheck as a superuser; use the PGUSER
env variable if needed, like: PGUSER=postgres make installcheck

NOTE: the tests need to run against a **clean postgres instance**; if you have some roles already created, the tests will likely fail due to `publicuser` not being dropped.

Enable database
---------------

In a database that needs to be turned into a "cartodb" user database, run:

```sql
CREATE EXTENSION postgis;
CREATE EXTENSION cartodb;
```

Migrate existing cartodb database
---------------------------------

When upgrading an existing cartodb user database, the cartodb extension
can be migrated from the "unpackaged" version. The procedure will copy
the data from ``public.CDB_TableMetadata`` to ``cartodb.CDB_TableMetadata``,
re-cartodbfy all tables using old functions in triggers and drop the
cartodb functions from the 'public' schema. All new cartodb objects will
be in the "cartodb" schema.

```sql
CREATE EXTENSION postgis FROM unpackaged;
CREATE EXTENSION cartodb FROM unpackaged;
```

Update cartodb extension
------------------------

Updating the version of the cartodb extension installed in a database
is done using ALTER EXTENSION.

```sql
ALTER EXTENSION cartodb UPDATE TO '0.1.1';
```

The target version needs to be installed on the system first
(see Install section).

If the "TO 'x.y.z'" part is omitted, the extension will be updated to the
latest installed version, which you can find with the following command:

```sh
grep default_version `pg_config --sharedir`/extension/cartodb.control
```
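
In other words, updating to the latest installed version boils down to (a usage sketch):

```sql
ALTER EXTENSION cartodb UPDATE;
```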

Updates are performed by PostgreSQL by loading one or more migration scripts
as needed to go from the installed version S to the target version T.
All migration scripts are in the "extension" directory of PostgreSQL:

```sh
ls `pg_config --sharedir`/extension/cartodb*
```

During development the cartodb extension version doesn't change with
every commit, so testing the latest change requires special steps documented
in the CONTRIBUTING document, under "Testing changes live".

Limitations
-----------

- The main schema of an organization user must have only one owner (the user).
11 lib/sql/carto-package.json Normal file
@@ -0,0 +1,11 @@
{
  "name": "carto_postgresql_ext",
  "current_version": {
    "requires": {
      "postgresql": ">=10.0.0",
      "postgis": ">=2.4.0.0"
    },
    "works_with": {
    }
  }
}
6 lib/sql/cartodb.control.in Normal file
@@ -0,0 +1,6 @@
default_version = '@@VERSION@@'
comment = 'Turn a database into a cartodb user database.'
superuser = true
relocatable = false
schema = cartodb
requires = 'plpythonu, postgis'
7 lib/sql/cartodb_version.sql.in Normal file
@@ -0,0 +1,7 @@
DO $$ BEGIN IF EXISTS (SELECT * FROM pg_proc p, pg_namespace n WHERE p.proname = 'cdb_transformtowebmercator' AND p.pronamespace = n.oid AND n.nspname = 'public') THEN RAISE EXCEPTION 'Use CREATE EXTENSION cartodb FROM unpackaged'; END IF; END; $$ LANGUAGE 'plpgsql'; -- forbid duplicated extension

CREATE OR REPLACE FUNCTION @extschema@.CDB_version()
RETURNS text AS $$
  SELECT '@@VERSION@@'::text;
$$ language 'sql' IMMUTABLE STRICT;
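Once the extension is built and installed, the rendered function reports the deployed version (a usage sketch):

```sql
SELECT cartodb.CDB_version();
-- e.g. '0.28.1'
```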
14 lib/sql/doc/CDB_ColumnNames.md Normal file
@@ -0,0 +1,14 @@
Retrieve all column names in a particular table

#### Using the function

```sql
SELECT CDB_ColumnNames('table_name')
--- Returns a set of rows with column names
```

#### Arguments

CDB_ColumnNames(table_name)

* **table_name** text
15 lib/sql/doc/CDB_ColumnType.md Normal file
@@ -0,0 +1,15 @@
Returns a column type for any column in a table

#### Using the function

```sql
SELECT CDB_ColumnType('column_name','table_name')
--- Returns a set of rows with column types
```

#### Arguments

CDB_ColumnType(column_name, table_name)

* **column_name** text
* **table_name** text
25 lib/sql/doc/CDB_EstimateRowCount.md Normal file
@@ -0,0 +1,25 @@
Estimate the number of rows of a query.

#### Using the function

```sql
SELECT CDB_EstimateRowCount($$
  UPDATE addresses SET the_geom = cdb_geocode_street_point(addr, city, state, 'US');
$$) AS row_count;
```

Result:

```
 row_count
-----------
         5
(1 row)
```

#### Arguments

CDB_EstimateRowCount(query)

* **query** text: the SQL query to estimate the row count for.
16 lib/sql/doc/CDB_GreatCircle.md Normal file
@@ -0,0 +1,16 @@
Based on Paul Ramsey's [blog post](http://blog.cartodb.com/jets-and-datelines/).

#### Using the function

Creates a great circle line.

```sql
SELECT CDB_GreatCircle(start_point, end_point) FROM table_name
-- Returns a line representing the great circle between the two points
```
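
A self-contained call with literal points also works; the coordinates below are arbitrary and only for illustration, assuming lon/lat (WGS84) point inputs:

```sql
SELECT CDB_GreatCircle(
  ST_SetSRID(ST_MakePoint(-73.78, 40.64), 4326),  -- illustrative start point (lon, lat)
  ST_SetSRID(ST_MakePoint(2.55, 49.01), 4326)     -- illustrative end point (lon, lat)
) AS great_circle_line;
```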

#### Arguments

CDB_GreatCircle(start_point, end_point)

* **start_point** ST_Point indicating the start of the line.
* **end_point** ST_Point indicating the end of the line.
21 lib/sql/doc/CDB_HeadsTailsBins.md Normal file
@@ -0,0 +1,21 @@
Find the breaks for N categories in a numerical column based on the [Heads/Tails optimization](http://arxiv.org/pdf/1209.2801v1.pdf). Below, Heads/Tails is used to color based on the area of the polygons.

![headtails](https://f.cloud.github.com/assets/370259/140655/6eebb918-7228-11e2-89fa-149745f25d34.png)

#### Using the function

We can determine the 7 most optimal breaks in a column of numerical data as follows,

```sql
SELECT CDB_HeadsTailsBins(array_agg(numeric_column), 7) FROM table_name
-- Results in an ordered array like, {7824,23492,52696,233857,666089,1001709,1638094}
-- Each break runs up to, and includes, the listed number:
-- (bin1 is less than or equal to 7824, bin2 is less than or equal to 23492, etc.)
```

#### Arguments

CDB_HeadsTailsBins(in_array, breaks)

* **in_array** numeric[]. A NUMERIC array of values.
* **breaks** int. The number of categories you want to create.
43 lib/sql/doc/CDB_HexagonGrid.md Normal file
@@ -0,0 +1,43 @@
Fill a given extent with a hexagonal coverage

#### Using the function

Create a hexagonal grid from a polygon geometry. For example, take the geometry

```sql
ST_SetSRID(
  ST_Envelope(
    ST_Collect(
      ST_MakePoint(10000000,-10000000),
      ST_MakePoint(-10000000,10000000)
    )
  ),
  3857)
```

We can create a grid as follows,

```sql
SELECT CDB_HexagonGrid(
  ST_SetSRID(
    ST_Envelope(
      ST_Collect(
        ST_MakePoint(10000000,-10000000),
        ST_MakePoint(-10000000,10000000)
      )
    ),
    3857),
  1000000) the_geom_webmercator
```

Which will look something like this,

![grid tile](http://i.imgur.com/4rZXGMb.png)

#### Arguments

CDB_HexagonGrid(ext, side, origin)

* **ext** geometry. Extent to fill. Only hexagons with center point falling inside the extent (or at the lower or leftmost edge) will be emitted. The returned hexagons will have the same SRID as this extent.
* **side** float. Side measure for the hexagon. Maximum diameter will be 2 * side. Measure is in the same projection as **ext**
* **origin** OPTIONAL geometry. Optional origin to allow for exact tiling. If omitted the origin will be 0,0. The parameter is checked for having the same SRID as the extent.
23 lib/sql/doc/CDB_JenksBins.md Normal file
@@ -0,0 +1,23 @@
Find the breaks for N categories in a numerical column based on the [Jenks optimization](http://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization). Below, Jenks is used to color based on the area of the polygons.

![Jenks](https://f.cloud.github.com/assets/370259/140093/b64a9382-7210-11e2-81a4-c65cce3c885e.png)

#### Using the function

We can determine the 7 most optimal breaks in a column of numerical data as follows,

```sql
SELECT CDB_JenksBins(array_agg(numeric_column), 7) FROM table_name
-- Results in an ordered array like, {0,73,2568,9408,29411,768230,1638094}
-- Each break runs up to, and includes, the listed number:
-- (bin1 is less than or equal to 0, bin2 is less than or equal to 73, etc.)
```

#### Arguments

CDB_JenksBins(in_array, breaks, iterations, invert)

* **in_array** numeric[]. A NUMERIC array of values.
* **breaks** int. The number of categories you want to create.
* **iterations** OPTIONAL int. The number of iterations used for calculating breaks.
* **invert** OPTIONAL boolean. Flips whether you receive top down breaks or bottom up breaks. Default is top down (so, <=). Bottom up would give you values that define the lower-end start of a bin (so >=).
21 lib/sql/doc/CDB_MakeHexagon.md Normal file
@@ -0,0 +1,21 @@
Return a hexagon with given center and side (or maximal radius)

#### Using the function

Running the following SQL

```sql
SELECT CDB_MakeHexagon(ST_MakePoint(0,0),10000000)
```

Would give you back a single hexagon geometry,

![hexagon](http://i.imgur.com/6jeGStb.png)


#### Arguments

CDB_MakeHexagon(center, radius)

* **center** geometry
* **radius** float. Radius of hexagon measured in same projection as **center**
123 lib/sql/doc/CDB_Overviews.md Normal file
@@ -0,0 +1,123 @@
Overviews are tables that represent a *reduced* version of a dataset intended
for efficient rendering at certain zoom levels while preserving the
general visual appearance of the complete dataset.

The *reduction* consists in having a smaller number of records
(while each overview record may represent an aggregation of multiple records)
and/or simplified record geometries.

Overviews are created through the `CDB_CreateOverviews` function.
The statement timeout may need to be adjusted before using this function,
as overview creation for large tables is a time-consuming operation.

The `CDB_Overviews` function can be used to determine what overview tables
exist for a given dataset table and which zoom levels correspond to them.

The `CDB_DropOverviews` function removes a dataset's existing overviews.

To know if overview tables exist for some base table, and to obtain
a list of which overview tables are appropriate for which zoom levels,
the `CDB_Overviews` function can be used.

The zoom levels we refer to here are those used
by the tiler: http://wiki.openstreetmap.org/wiki/Zoom_levels

### CDB_CreateOverviews

Create overviews for a vector dataset.

#### Using the function

The table for which overviews will be generated should be
a Cartodbfied dataset with vector geometry.

```sql
SELECT CDB_CreateOverviews('table_name');
--- Generates overview tables for the dataset
```

#### Arguments

CDB_CreateOverviews(table_name, ref_z_strategy, reduction_strategy)

* **table_name** regclass, table for which overviews will be generated
* **ref_z_strategy** regproc, optional function that provides
  the Z-scale strategy.
  It returns the base Z level for the dataset.
  It should have these arguments:
  - **table_name** regclass, table to compute the reference Z scale for
* **reduction_strategy** regproc, optional function that provides
  the reduction strategy to generate an overview table from a table
  for a smaller scale (higher Z number).
  It returns the name of the generated table.
  It should have these arguments:
  - **base_table_name** regclass, base table to be reduced.
  - **base_z** integer, base Z level assigned to the base table.
  - **overview_z** integer, Z level for which to generate the overview.

#### Tolerance / level of detail

The level of detail to be representable by each overview layer can
be specified as a tolerance in pixels (if different from the default of 1 pixel)
with the function `CDB_CreateOverviewsWithToleranceInPixels`,
which takes the desired tolerance as an additional second argument (see the sketch below).

This tolerance defines the maximum deviation in pixels of the overview
geometries with respect to the original geometries when overview tables
are used for their intended zoom level.
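
A usage sketch with an explicit tolerance (the value 2.0 is only an example):

```sql
SELECT CDB_CreateOverviewsWithToleranceInPixels('table_name', 2.0);
--- Generates overview tables allowing up to 2 pixels of deviation
```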

### CDB_Overviews

Obtain overview metadata for a given table (existing overviews).
The returned relation will be empty if the table has no overviews.

The function can be applied to a single table:

```sql
SELECT CDB_Overviews('table_name');
--- Return existing overview Z levels and corresponding tables
```

Or to multiple tables passed as an array; this can be used
to obtain the overviews that can be applied to a query by
combining it with `CDB_QueryTablesText`:

```sql
SELECT CDB_Overviews(CDB_QueryTablesText('SELECT * FROM table1, table2'));
--- Return existing overview Z levels and corresponding tables
```

The result of `CDB_Overviews` has three columns:

| base_table | z | overview_table |
| ---------- | - | -------------- |
| table1     | 1 | table1_ov1     |
| table1     | 2 | table1_ov2     |
| table1     | 4 | table1_ov4     |
| table2     | 1 | table1_ov1     |
| table2     | 2 | table1_ov2     |

#### Arguments

CDB_Overviews(table_name)

* **table_name** regclass, oid of table to obtain existing overviews for

CDB_Overviews(table_names)

* **table_names** regclass[], array of table oids


### CDB_DropOverviews

Remove the overviews of a table, if present.

```sql
SELECT CDB_DropOverviews('table_name');
```

#### Arguments

CDB_DropOverviews(table_name)

* **table_name** regclass, table for which to drop existing overviews.
21 lib/sql/doc/CDB_QuantileBins.md Normal file
@@ -0,0 +1,21 @@
Find the breaks for N categories in a numerical column based on quantile bins. Below, the quantile method is used to determine color based on the area of the polygons.

![quantile](https://f.cloud.github.com/assets/370259/140714/932ed0e6-722b-11e2-9807-ffbd0fddb9ac.png)

#### Using the function

We can determine the 7 most optimal breaks in a column of numerical data as follows,

```sql
SELECT CDB_QuantileBins(array_agg(numeric_column), 7) FROM table_name
-- Results in an ordered array like, {80,2281,7162,17652,39730,91077,1638094}
-- Each break runs up to, and includes, the listed number:
-- (bin1 is less than or equal to 80, bin2 is less than or equal to 2281, etc.)
```
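
To label each row with its bucket, the returned breaks can be matched against the values; a sketch, assuming the same `numeric_column` and `table_name` as above:

```sql
WITH breaks AS (
  SELECT CDB_QuantileBins(array_agg(numeric_column), 7) AS b
  FROM table_name
)
SELECT numeric_column,
       -- index of the first break that the value does not exceed (1-based bucket number)
       (SELECT min(i)
          FROM breaks, generate_subscripts(breaks.b, 1) AS i
         WHERE numeric_column <= breaks.b[i]) AS bucket
FROM table_name;
```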

#### Arguments

CDB_QuantileBins(in_array, breaks)

* **in_array** numeric[]. A NUMERIC array of values.
* **breaks** int. The number of categories you want to create.
46 lib/sql/doc/CDB_RectangleGrid.md Normal file
@@ -0,0 +1,46 @@
Fill a given extent with a rectangular coverage

#### Using the function

Create a rectangular grid from a polygon geometry. For example, take the geometry

```sql
ST_SetSRID(
  ST_Envelope(
    ST_Collect(
      ST_MakePoint(10000000,-10000000),
      ST_MakePoint(-10000000,10000000)
    )
  ),
  3857)
```

We can create a grid as follows,

```sql
SELECT CDB_RectangleGrid(
  ST_SetSRID(
    ST_Envelope(
      ST_Collect(
        ST_MakePoint(10000000,-10000000),
        ST_MakePoint(-10000000,10000000)
      )
    ),
    3857),
  1000000,
  1000000
) the_geom_webmercator
```

Which will look something like this,

![rect grid](http://i.imgur.com/HuhOJRs.png)

#### Arguments

CDB_RectangleGrid(ext, width, height, origin)

* **ext** geometry. Extent to fill. Only rectangles with center point falling inside the extent (or at the lower or leftmost edge) will be emitted. The returned rectangles will have the same SRID as this extent.
* **width** float. Width of each rectangle. Measure is in the same projection as **ext**
* **height** float. Height of each rectangle. Measure is in the same projection as **ext**
* **origin** OPTIONAL geometry. Optional origin to allow for exact tiling. If omitted the origin will be 0,0. The parameter is checked for having the same SRID as the extent.
11 lib/sql/doc/CDB_SetUserQuotaInBytes.md Normal file
@@ -0,0 +1,11 @@
Sets user quota in bytes (superuser only)

#### Using the function

```sql
SELECT CDB_SetUserQuotaInBytes(10485760);
--- Returns the previously set quota.
--- Use 0 to disable quota.
```

REF: https://github.com/CartoDB/cartodb-postgresql/blob/master/scripts-available/CDB_Quota.sql
56 lib/sql/doc/CDB_SyncTable.md Normal file
@@ -0,0 +1,56 @@
Synchronize two tables. This function will synchronize a *destination* table with a *source* table.
The idea is that the *destination* is a replica of *source* and *source* has been subject to
modifications that are to be applied to *destination*.

This will be achieved by deleting the rows in the destination not present
in the source, inserting rows of the source not in the destination and updating modified rows.
If the destination table does not exist it will be created and all the rows of the source inserted into it.

Both tables must have a consistent `cartodb_id` primary key column which will be used to match
the source and destination rows.

Note that both tables do not necessarily become identical after the synchronization, since additional columns
may have been added to the destination; those columns will not be altered by the synchronization.

In addition some source columns may be skipped by listing them in the optional last argument; such columns
will not be updated in the destination, so if they are present in it their values won't be altered.


#### Using the function

Import some data using COPY FROM into a temporary table, then synchronize a table with the data and
finally delete the temporary table. This could be used to import and update some data periodically while
allowing columns to be added to the data that will be preserved across updates.

```sql
CREATE TABLE tmp_pois(cartodb_id int, name text, type text, longitude double precision, latitude double precision, rank int);
COPY tmp_pois FROM '/tmp/pois.csv';
SELECT CDB_SyncTable('tmp_pois', 'public', 'pois');
DROP TABLE tmp_pois;
```

Now we could perform some changes to the `pois` table to maintain our own ranking:

```sql
UPDATE pois SET rank = random()*4 + 1;
```

Then, if the source were updated at `/tmp/pois.csv`, we could synchronize with it while preserving our `rank` values with:

```sql
CREATE TABLE tmp_pois(cartodb_id int, name text, type text, longitude double precision, latitude double precision, rank int);
COPY tmp_pois FROM '/tmp/pois.csv';
SELECT CDB_SyncTable('tmp_pois', 'public', 'pois', '{rank}');
DROP TABLE tmp_pois;
```

#### Arguments

```
CDB_SyncTable(src_table, dst_schema, dst_table, skip_cols)
```

* **src_table** REGCLASS the source data for the synchronization
* **dst_schema** REGNAMESPACE the destination schema
* **dst_table** NAME the destination table to be updated
* **skip_cols** NAME[] an array of column names, empty by default, which will be skipped
44
lib/sql/doc/CDB_TransformToWebmercator.md
Normal file
44
lib/sql/doc/CDB_TransformToWebmercator.md
Normal file
@ -0,0 +1,44 @@
|
||||
Function to "safely" transform to webmercator. This function is most useful for rendering custom geometries using the CartoDB tiler. Often, transforming a projection like WGS84 can cause issues with extents beyond what are actually valid in webmercator, this attempts to fix those issues.
|
||||
|
||||
#### Using the function
|
||||
|
||||
Using a box that is nearly the full globe,
|
||||
|
||||
```sql
|
||||
ST_SetSRID(
|
||||
ST_Envelope(
|
||||
ST_Collect(
|
||||
ST_MakePoint(-180,60),
|
||||
ST_MakePoint(180,-60)
|
||||
)
|
||||
),
|
||||
4326
|
||||
)
|
||||
```
|
||||
|
||||
We can then convert it to a renderable webmercator geometry.
|
||||
|
||||
```sql
|
||||
SELECT CDB_TransformToWebmercator(
|
||||
ST_SetSRID(
|
||||
ST_Envelope(
|
||||
ST_Collect(
|
||||
ST_MakePoint(-10,60),
|
||||
ST_MakePoint(300,-60)
|
||||
)
|
||||
),
|
||||
4326
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
Would give you back a single valid rectangle in webmercator. Since a longitude of 300 would convert to an unallowed webmercator coordinate, it gets clipped first. Valid extent is WGS84 (-180, -89, 180, 89)
|
||||
|
||||
![valid geom](http://i.imgur.com/EFdXiqt.png)
|
||||
|
||||
|
||||
#### Arguments
|
||||
|
||||
CDB_TransformToWebmercator(geom)
|
||||
|
||||
* **geom** geometry
|
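
For intuition about why the clipping matters, compare with what a raw `ST_Transform` does for a latitude outside that valid extent (a quick illustration, not part of the extension):

```sql
-- A point at latitude 89.5 projects to a Y value of over 34,000 km,
-- far beyond the ~20,037 km half-width of the square webmercator plane.
SELECT ST_Y(ST_Transform(ST_SetSRID(ST_MakePoint(0, 89.5), 4326), 3857));
```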
15
lib/sql/doc/CDB_UserTables.md
Normal file
15
lib/sql/doc/CDB_UserTables.md
Normal file
@ -0,0 +1,15 @@
|
||||
List the names of the available tables (only the usable ones).

#### Using the function

```sql
--- Returns a row with the table name for each table on which you have the given permission.
--- It also returns tables from other users if you have permission to see them. For example, consider the following scenario:
--- User X and User Y are in account C.
--- User X has a public table T.
--- User Y will see table T.
--- Currently accepted permissions are: 'public', 'private' or 'all'
SELECT CDB_UserTables(perms)
```

REF: https://github.com/CartoDB/cartodb-postgresql/blob/master/scripts-available/CDB_UserTables.sql
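
For example, to list only the tables that are readable by everybody:

```sql
SELECT CDB_UserTables('public');
```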
22
lib/sql/doc/CDB_XYZ_Extent.md
Normal file
22
lib/sql/doc/CDB_XYZ_Extent.md
Normal file
@ -0,0 +1,22 @@
|
||||
Determine the spatial extent of a tile based on the tile's XYZ coordinate.

#### Using the function

Take a common tile with coordinates x=3, y=2, z=2,

![2/3/2](https://viz2.cartodb.com/tiles/quantile_breaks/2/3/2.png)

To determine its extent you would run,

```sql
SELECT CDB_XYZ_Extent(3,2,2)
--- Returns a WKB polygon in Webmercator (SRID 3857)
```

#### Arguments

CDB_XYZ_Extent(x,y,z)

* **x** integer
* **y** integer
* **z** integer
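
For reference, the same extent can be reproduced with plain PostGIS arithmetic. This is only a sketch of the standard XYZ tiling of the webmercator plane (the `20037508.34…` half-width constant is an assumption here), not necessarily the extension's exact implementation:

```sql
WITH params AS (
  SELECT 3 AS x, 2 AS y, 2 AS z, 20037508.342789244 AS half_world
), tile AS (
  -- side length of one tile at zoom z, in webmercator units
  SELECT x, y, half_world, (2 * half_world) / (2 ^ z) AS tile_size FROM params
)
SELECT ST_MakeEnvelope(
  -half_world + x * tile_size,        -- xmin
  half_world - (y + 1) * tile_size,   -- ymin
  -half_world + (x + 1) * tile_size,  -- xmax
  half_world - y * tile_size,         -- ymax
  3857
) FROM tile;
```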
20
lib/sql/doc/CDB_XYZ_Resolution.md
Normal file
20
lib/sql/doc/CDB_XYZ_Resolution.md
Normal file
@ -0,0 +1,20 @@
|
||||
Return the pixel resolution of tiles at a given zoom level.

#### Using the function

Take a common tile with zoom z=2,

![2/3/2](https://viz2.cartodb.com/tiles/quantile_breaks/2/3/2.png)

To determine the resolution of its pixels,

```sql
SELECT CDB_XYZ_Resolution(2)
--- Returns a float, 39135.7587890625
```

#### Arguments

CDB_XYZ_Resolution(z)

* **z** integer
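
The value above is consistent with 256×256 pixel tiles covering a webmercator plane roughly 40075017 units wide, i.e. `width / (256 * 2^z)`. A quick sanity check (the width constant is inferred from the documented result, so treat it as an approximation of the extension's formula rather than its source):

```sql
SELECT 40075017.0::float8 / (256 * 2 ^ 2);
--- 39135.7587890625, the documented value for z=2
```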
38
lib/sql/doc/CartoDB-PLpgSQL.md
Normal file
38
lib/sql/doc/CartoDB-PLpgSQL.md
Normal file
@ -0,0 +1,38 @@
|
||||
INTRODUCTION
============

CartoDB uses a number of custom [PLpgSQL](http://www.postgresql.org/docs/8.3/static/plpgsql.html) functions to perform a few magical things. Those functions are accessible to users on CartoDB as well, so we would like to document what they are and what they do here.

## Spatial functions

[CDB_HexagonGrid](CDB_HexagonGrid) - create hexagonal grid from extent and size

[CDB_MakeHexagon](CDB_MakeHexagon) - make a hexagon with given center and side

[CDB_RectangleGrid](CDB_RectangleGrid) - fill given extent with a rectangular coverage

##### Tile based

[CDB_XYZ_Extent](CDB_XYZ_Extent) - Find the extent of a tile by XYZ

[CDB_XYZ_Resolution](CDB_XYZ_Resolution) - Find the pixel resolution of tiles

[CDB_TransformToWebmercator](CDB_TransformToWebmercator) - Convert a geometry to valid webmercator

## Statistical functions

[CDB_JenksBins](CDB_JenksBins) - Find breaks in an array of numbers using Jenks method

[CDB_HeadsTailsBins](CDB_HeadsTailsBins) - Find breaks in an array of numbers using Heads/Tails method

[CDB_QuantileBins](CDB_QuantileBins) - Find quantile breaks in an array of numbers

## System functions

[CDB_UserTables](CDB_UserTables) - Get a list of all tables in your account

[[CDB_SetUserQuotaInBytes]] - Set maximum user quota in bytes

column names - now returned in JSON response

column types - now returned in JSON response
68
lib/sql/doc/CartoDB-user-table.rst
Normal file
68
lib/sql/doc/CartoDB-user-table.rst
Normal file
@ -0,0 +1,68 @@
|
||||
CartoDB User Table
==================

Introduction
------------

A CartoDB user table is a table with a well-known set of columns and a well-known set of triggers attached to it.

Columns
-------

The required columns of a CartoDB table are:

- ``cartodb_id``

  - This column is used as the primary key of the table and has a sequence as its default value
  - Its values must be integer, non-zero, non-null and unique
  - B-Tree indexed

- ``the_geom``

  - This column stores the main geometric features of a table
  - The type of the column in the Postgres database is ``geometry(Geometry,4326)``
  - GiST indexed and constrained (see below)

- ``the_geom_webmercator``

  - This column stores the geometries used for rendering purposes
  - The type of the column in the Postgres database is ``geometry(Geometry,3857)``
  - GiST indexed
  - This column is automatically updated by the system when the ``the_geom`` column is updated or when a new row is inserted into the table (see triggers below)

The values of ``the_geom`` and ``the_geom_webmercator`` must be two-dimensional Points, MultiLineStrings or MultiPolygons. Mixing different geometric types in a CartoDB table is not supported.

Described table example
^^^^^^^^^^^^^^^^^^^^^^^

::

           Column         |          Type           |                        Modifiers
    ----------------------+-------------------------+--------------------------------------------------------
     cartodb_id           | bigint                  | not null default nextval('t_cartodb_id_seq'::regclass)
     the_geom             | geometry(Geometry,4326) |
     the_geom_webmercator | geometry(Geometry,3857) |
    Indexes:
        "table_name_pkey" PRIMARY KEY, btree (cartodb_id)
        "table_name_the_geom_idx" gist (the_geom)
        "table_name_the_geom_webmercator_idx" gist (the_geom_webmercator)

Triggers
--------

The triggers generated in each CartoDB table are:

- ``track_updates`` after a modifying statement, updates ``cdb_tablemetadata``
- ``test_quota`` before a changing statement, forbids the change if over quota
- ``test_quota_per_row`` before insert or update of a row, forbids the change if over quota (checked on a probabilistic basis)
- ``update_the_geom_webmercator`` before insert or update of a row, keeps ``the_geom_webmercator`` updated with the contents of ``the_geom``

Described triggers example
^^^^^^^^^^^^^^^^^^^^^^^^^^

::

    test_quota BEFORE INSERT OR UPDATE ON t FOR EACH STATEMENT EXECUTE PROCEDURE cdb_checkquota('0.1', '-1', 'public')
    test_quota_per_row BEFORE INSERT OR UPDATE ON t FOR EACH ROW EXECUTE PROCEDURE cdb_checkquota('0.001', '-1', 'public')
    track_updates AFTER INSERT OR DELETE OR UPDATE OR TRUNCATE ON t FOR EACH STATEMENT EXECUTE PROCEDURE cdb_tablemetadata_trigger()
    update_the_geom_webmercator_trigger BEFORE INSERT OR UPDATE OF the_geom ON t FOR EACH ROW EXECUTE PROCEDURE _cdb_update_the_geom_webmercator()
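
The ``the_geom``/``the_geom_webmercator`` synchronization is conceptually just a reprojection to EPSG:3857 performed by a row-level trigger. A minimal, hypothetical sketch of such a trigger follows; the extension's actual procedure is ``_cdb_update_the_geom_webmercator`` (shown above), which is expected to use the safer ``CDB_TransformToWebmercator`` conversion rather than a raw ``ST_Transform``::

    CREATE FUNCTION sketch_update_the_geom_webmercator()
    RETURNS trigger AS $$
    BEGIN
      NEW.the_geom_webmercator := ST_Transform(NEW.the_geom, 3857);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER update_the_geom_webmercator_trigger
    BEFORE INSERT OR UPDATE OF the_geom ON t
    FOR EACH ROW EXECUTE PROCEDURE sketch_update_the_geom_webmercator();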

Further details
---------------

Some conversions will be attempted upon cartodbfication when certain fields appear:

- ``cartodb_id``: if a column of this name is found with type TEXT, a cast to integer will be attempted. If the values are not castable, an error will be raised.
- ``the_geom``: if a column of this name is found with type TEXT, a cast to ``geometry(Geometry,4326)`` will be attempted.
23
lib/sql/doc/README.md
Normal file
23
lib/sql/doc/README.md
Normal file
@ -0,0 +1,23 @@
|
||||
# Contents

* [CartoDB-user-table](CartoDB-user-table.md)
* [CartoDB-PLpgSQL](CartoDB-PLpgSQL.md)
* [CDB_ColumnNames](CDB_ColumnNames.md)
* [CDB_ColumnType](CDB_ColumnType.md)
* [CDB_HeadsTailsBins](CDB_HeadsTailsBins.md)
* [CDB_HexagonGrid](CDB_HexagonGrid.md)
* [CDB_JenksBins](CDB_JenksBins.md)
* [CDB_MakeHexagon](CDB_MakeHexagon.md)
* [CDB_QuantileBins](CDB_QuantileBins.md)
* [CDB_RectangleGrid](CDB_RectangleGrid.md)
* [CDB_SetUserQuotaInBytes](CDB_SetUserQuotaInBytes.md)
* [CDB_TransformToWebmercator](CDB_TransformToWebmercator.md)
* [CDB_UserTables](CDB_UserTables.md)
* [CDB_XYZ_Extent](CDB_XYZ_Extent.md)
* [CDB_XYZ_Resolution](CDB_XYZ_Resolution.md)

The CartoDB PostgreSQL extension is a module to load into each CartoDB user database to perform CartoDB-specific security and functionality checks.

# Checks

User tables need to match certain structure criteria (see [[CartoDB-user-table]]), so the extension should provide a means to enforce such structure every time an attempt to change the structure is encountered.
63
lib/sql/doc/cartodbfy-requirements.rst
Normal file
63
lib/sql/doc/cartodbfy-requirements.rst
Normal file
@ -0,0 +1,63 @@
|
||||
CartoDBfy Requirements
======================

Introduction
------------

This document aims at describing what the CartoDBfication is and what its formal requirements are, with the following goals in mind:

- Clarify what the expectations of the "cartodbfication process" are.
- Define an important part of what should be a stable, public API.
- Allow for better testing, which should in turn...
- ...ease modifications and increase the quality of the code.


What is the CartoDBfication
---------------------------

The CartoDBfication is the process of converting an arbitrary postgres table into a valid CartoDB table, and registering it in the system so that it can be used in the CartoDB editor and platform to generate maps and analyses.

It is performed by running the function ``CDB_CartodbfyTable(reloid REGCLASS)`` over a target table.

Valid CartoDB tables
--------------------

A valid CartoDB table shall meet the following conditions:

- Have a ``cartodb_id`` column with integer, unique, non-zero and non-null values as primary key, with a sequence as its default value
- Have a ``the_geom`` column of type ``Geometry`` with SRID 4326
- Have a ``the_geom_webmercator`` column of type ``Geometry`` with SRID 3857
- The columns ``the_geom`` and ``the_geom_webmercator`` shall be in sync (task of the ``update_the_geom_webmercator`` trigger)

Additionally, a CartoDB table can contain other columns.

See the `CartoDB User Table documentation`_ for further information.

.. _CartoDB User Table documentation: https://github.com/CartoDB/cartodb-postgresql/blob/master/doc/CartoDB-user-table.rst

High level requirements
-----------------------

Here is a list of high level requirements for the public function ``CDB_CartodbfyTable()``:

- A call to the function shall modify/rewrite the table and produce a valid CartoDB table with the same name.
- A call to the function shall cause the registration of the table into the platform.
- It shall be idempotent, meaning that successive calls to the function shall not produce any additional visible effect in the system.
- If there's a column containing a geometry, it shall be used to generate the ``the_geom`` and ``the_geom_webmercator`` columns.
- Exporting and re-importing the same table in CartoDB shall produce equivalent tables, with the same features associated with the same ``cartodb_id`` values.

Note that there should be only one geometry per row in the source table. If there's more than one, which one is used for the ``the_geom`` and ``the_geom_webmercator`` fields is not determined.


Low-level requirements
----------------------

- If the original table contains a valid (integer, unique, non-zero and not null) ``cartodb_id`` column, it shall be used.
- If the original table contains a ``the_geom`` or a ``the_geom_webmercator`` geometric column in the expected projection (EPSG 4326 and EPSG 3857, respectively), they shall be used.
- A modification of a cartodbfy'ed table shall insert or update a row in ``CDB_TableMetadata``.
- A cartodbfy'ed table shall have a ``btree`` index on ``cartodb_id``.
- A cartodbfy'ed table shall have ``gist`` indices on ``the_geom`` and ``the_geom_webmercator``.
- Cartodbfy shall deal with text columns for imports, regarding the CartoDB columns (``cartodb_id``, ``the_geom``, ``the_geom_webmercator``).
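
Finally, as a usage sketch (``districts`` is an arbitrary example table, and the single-argument call follows the signature given above)::

    CREATE TABLE districts (name text, geom geometry(MultiPolygon, 4326));
    -- ... load data ...
    SELECT CDB_CartodbfyTable('districts'::regclass);
    -- districts now has cartodb_id, the_geom and the_geom_webmercator,
    -- plus the indexes and triggers required above.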
9
lib/sql/expected/test_setup.out
Normal file
9
lib/sql/expected/test_setup.out
Normal file
@ -0,0 +1,9 @@
|
||||
CREATE EXTENSION postgis;
CREATE EXTENSION plpythonu;
CREATE EXTENSION cartodb;
CREATE FUNCTION public.cdb_invalidate_varnish(table_name text)
RETURNS void AS $$
BEGIN
  RAISE NOTICE 'cdb_invalidate_varnish(%) called', table_name;
END;
$$ LANGUAGE 'plpgsql';
95
lib/sql/scripts-available/CDB_AnalysisCatalog.sql
Normal file
95
lib/sql/scripts-available/CDB_AnalysisCatalog.sql
Normal file
@ -0,0 +1,95 @@
|
||||
-- Table to register analysis nodes from https://github.com/cartodb/camshaft
|
||||
CREATE TABLE IF NOT EXISTS
|
||||
@extschema@.cdb_analysis_catalog (
|
||||
-- md5 hex hash
|
||||
node_id char(40) CONSTRAINT cdb_analysis_catalog_pkey PRIMARY KEY,
|
||||
-- being json allows queries like analysis_def->>'type' = 'buffer'
|
||||
analysis_def json NOT NULL,
|
||||
-- can reference other nodes in this very same table, allowing recursive queries
|
||||
input_nodes char(40) ARRAY NOT NULL DEFAULT '{}',
|
||||
status TEXT NOT NULL DEFAULT 'pending',
|
||||
CONSTRAINT valid_status CHECK (
|
||||
status IN ( 'pending', 'waiting', 'running', 'canceled', 'failed', 'ready' )
|
||||
),
|
||||
created_at timestamp with time zone NOT NULL DEFAULT now(),
|
||||
-- should be updated when some operation was performed in the node
|
||||
-- and anything associated to it might have changed
|
||||
updated_at timestamp with time zone DEFAULT NULL,
|
||||
-- should register last time the node was used
|
||||
used_at timestamp with time zone NOT NULL DEFAULT now(),
|
||||
-- should register the number of times the node was used
|
||||
hits NUMERIC DEFAULT 0,
|
||||
-- should register the last node that used the current node
|
||||
last_used_from char(40),
|
||||
-- last job modifying the node
|
||||
last_modified_by uuid,
|
||||
-- store error message for failures
|
||||
last_error_message text,
|
||||
-- cached tables involved in the analysis
|
||||
cache_tables regclass[] NOT NULL DEFAULT '{}',
|
||||
-- useful for multi account deployments
|
||||
username text
|
||||
);
|
||||
|
||||
-- This can only be called from an SQL script executed by CREATE EXTENSION
|
||||
DO LANGUAGE 'plpgsql' $$
|
||||
BEGIN
|
||||
PERFORM pg_catalog.pg_extension_config_dump('@extschema@.cdb_analysis_catalog', '');
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Migrations to add new columns from old versions.
|
||||
-- IMPORTANT: Those columns will be added in order of creation. To be consistent
|
||||
-- in column order, ensure that new columns are added at the end and in the same order.
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
BEGIN
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog ADD COLUMN last_modified_by uuid;
|
||||
EXCEPTION
|
||||
WHEN duplicate_column THEN END;
|
||||
END;
|
||||
$$;
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
BEGIN
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog ADD COLUMN last_error_message text;
|
||||
EXCEPTION
|
||||
WHEN duplicate_column THEN END;
|
||||
END;
|
||||
$$;
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
BEGIN
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog ADD COLUMN cache_tables regclass[] NOT NULL DEFAULT '{}';
|
||||
EXCEPTION
|
||||
WHEN duplicate_column THEN END;
|
||||
END;
|
||||
$$;
|
||||
|
||||
DO $$
|
||||
BEGIN
|
||||
BEGIN
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog ADD COLUMN username text;
|
||||
EXCEPTION
|
||||
WHEN duplicate_column THEN END;
|
||||
END;
|
||||
$$;
|
||||
|
||||
-- We want the "username" column to be moved to the last position if it was on a position from other versions
|
||||
-- see https://github.com/CartoDB/cartodb-postgresql/issues/276
|
||||
DO LANGUAGE 'plpgsql' $$
|
||||
DECLARE
|
||||
column_index int;
|
||||
BEGIN
|
||||
SELECT ordinal_position FROM information_schema.columns WHERE table_name='cdb_analysis_catalog' AND table_schema='@extschema@' AND column_name='username' INTO column_index;
|
||||
IF column_index = 1 OR column_index = 10 THEN
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog ADD COLUMN username_final text;
|
||||
UPDATE @extschema@.cdb_analysis_catalog SET username_final = username;
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog DROP COLUMN username;
|
||||
ALTER TABLE @extschema@.cdb_analysis_catalog RENAME COLUMN username_final TO username;
|
||||
END IF;
|
||||
END;
|
||||
$$;
|
62
lib/sql/scripts-available/CDB_AnalysisCheck.sql
Normal file
62
lib/sql/scripts-available/CDB_AnalysisCheck.sql
Normal file
@ -0,0 +1,62 @@
|
||||
-- Read configuration parameter analysis_quota_factor, making it
|
||||
-- accessible to regular users (which don't have access to cdb_conf)
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_GetConfAnalysisQuotaFactor()
|
||||
RETURNS float8 AS
|
||||
$$
|
||||
BEGIN
|
||||
RETURN @extschema@.CDB_Conf_GetConf('analysis_quota_factor')::text::float8;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' STABLE PARALLEL SAFE SECURITY DEFINER;
|
||||
|
||||
|
||||
-- Get the factor (fraction of the quota) for Camshaft cached analysis tables
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_AnalysisQuotaFactor()
|
||||
RETURNS float8 AS
|
||||
$$
|
||||
DECLARE
|
||||
factor float8;
|
||||
BEGIN
|
||||
-- We use a floating point cdb_conf parameter
|
||||
factor := @extschema@._CDB_GetConfAnalysisQuotaFactor();
|
||||
-- With a default value
|
||||
IF factor IS NULL THEN
|
||||
factor := 2;
|
||||
END IF;
|
||||
RETURN factor;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' STABLE PARALLEL SAFE;
|
||||
|
||||
-- This checks the space used up by Camshaft cached analysis tables.
|
||||
-- An exception will be raised if the limits are exceeded.
|
||||
-- The name of an analysis table is passed; this, in addition to the
|
||||
-- db role that executes this function, is used to determine which
|
||||
-- analysis tables will be considered.
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_CheckAnalysisQuota(table_name TEXT)
|
||||
RETURNS void AS
|
||||
$$
|
||||
DECLARE
|
||||
schema_name TEXT;
|
||||
user_name TEXT;
|
||||
nominal_quota int8;
|
||||
cache_size float8;
|
||||
BEGIN
|
||||
-- We rely on the search_path to determine the user's schema and
|
||||
-- check for all analysis tables in that schema.
|
||||
-- An alternative would be to use cdb_analysis_catalog to
|
||||
-- select analysis tables (cache_tables) from the same user, analysis or node.
|
||||
-- For example:
|
||||
-- SELECT unnest(cache_tables) FROM cdb_analysis_catalog
|
||||
-- WHERE username IN (SELECT username FROM cdb_analysis_catalog
|
||||
-- WHERE table_name::regclass = ANY (cache_tables));
|
||||
-- At the moment we're not using the provided table_name.
|
||||
|
||||
SELECT current_schema() INTO schema_name;
|
||||
EXECUTE FORMAT('SELECT %I._CDB_UserQuotaInBytes();', schema_name) INTO nominal_quota;
|
||||
IF nominal_quota * @extschema@._CDB_AnalysisQuotaFactor() < @extschema@._CDB_AnalysisDataSize(schema_name) THEN
|
||||
-- The limit is defined by a factor applied to the total space quota for the user
|
||||
RAISE EXCEPTION 'Analysis cache space limits exceeded';
|
||||
END IF;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
55
lib/sql/scripts-available/CDB_AnalysisSupport.sql
Normal file
55
lib/sql/scripts-available/CDB_AnalysisSupport.sql
Normal file
@ -0,0 +1,55 @@
|
||||
-- Internal auxiliary functions to deal with [Camshaft](https://github.com/cartodb/camshaft) cached analysis tables.
|
||||
|
||||
-- This function returns TRUE if a given table name corresponds to a Camshaft cached analysis table
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_IsAnalysisTableName(table_name TEXT)
|
||||
RETURNS BOOLEAN
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN table_name SIMILAR TO '\Aanalysis_[0-9a-f]{10}_[0-9a-f]{40}\Z';
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- This function returns a relation of Camshaft cached analysis tables in the given schema.
|
||||
-- If the schema name parameter is NULL, then tables from all schemas
|
||||
-- that may contain user tables are returned.
|
||||
-- For each table, the regclass, schema name and table name are returned.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_AnalysisTablesInSchema(schema_name text DEFAULT NULL)
|
||||
RETURNS TABLE(table_regclass REGCLASS, schema_name TEXT, table_name TEXT)
|
||||
AS $$
|
||||
SELECT * FROM @extschema@._CDB_UserTablesInSchema(schema_name) WHERE @extschema@._CDB_IsAnalysisTableName(table_name);
|
||||
$$ LANGUAGE 'sql' STABLE PARALLEL SAFE;
|
||||
|
||||
-- This function returns a relation of user tables, excluding analysis tables
|
||||
-- If the schema name parameter is NULL, then tables from all schemas
|
||||
-- that may contain user tables are returned.
|
||||
-- For each table, the regclass, schema name and table name are returned.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_NonAnalysisTablesInSchema(schema_name text DEFAULT NULL)
|
||||
RETURNS TABLE(table_regclass REGCLASS, schema_name TEXT, table_name TEXT)
|
||||
AS $$
|
||||
SELECT * FROM @extschema@._CDB_UserTablesInSchema(schema_name) WHERE Not @extschema@._CDB_IsAnalysisTableName(table_name);
|
||||
$$ LANGUAGE 'sql' STABLE PARALLEL SAFE;
|
||||
|
||||
-- Total space used up by Camshaft cached analysis tables in the given schema.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_AnalysisDataSize(schema_name TEXT DEFAULT NULL)
|
||||
RETURNS bigint AS
|
||||
$$
|
||||
DECLARE
|
||||
total_size bigint;
|
||||
BEGIN
|
||||
WITH analysis_tables AS (
|
||||
SELECT t.schema_name, t.table_name FROM @extschema@._CDB_AnalysisTablesInSchema(schema_name) t
|
||||
)
|
||||
SELECT COALESCE(INT8(SUM(@extschema@._CDB_total_relation_size(analysis_tables.schema_name, analysis_tables.table_name))))::int8
|
||||
INTO total_size FROM analysis_tables;
|
||||
IF total_size IS NOT NULL THEN
|
||||
RETURN total_size;
|
||||
ELSE
|
||||
RETURN 0;
|
||||
END IF;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE PARALLEL UNSAFE;
|
1319
lib/sql/scripts-available/CDB_CartodbfyTable.sql
Normal file
1319
lib/sql/scripts-available/CDB_CartodbfyTable.sql
Normal file
File diff suppressed because it is too large
Load Diff
16
lib/sql/scripts-available/CDB_ColumnNames.sql
Normal file
16
lib/sql/scripts-available/CDB_ColumnNames.sql
Normal file
@ -0,0 +1,16 @@
|
||||
-- Function returning the column names of a table
CREATE OR REPLACE FUNCTION @extschema@.CDB_ColumnNames(REGCLASS)
RETURNS SETOF information_schema.sql_identifier
AS $$
  SELECT
    a.attname::information_schema.sql_identifier column_name
  FROM pg_class c
  LEFT JOIN pg_attribute a ON a.attrelid = c.oid
  WHERE c.oid = $1::oid
    AND a.attstattarget < 0 -- exclude system columns
  ORDER BY a.attnum;
$$ LANGUAGE SQL STABLE PARALLEL SAFE;

-- This is to migrate from pre-0.2.0 version
-- See http://github.com/CartoDB/cartodb-postgresql/issues/36
GRANT EXECUTE ON FUNCTION @extschema@.CDB_ColumnNames(REGCLASS) TO PUBLIC;
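-- Usage sketch ('my_table' is a placeholder for any existing table):
--   SELECT * FROM @extschema@.CDB_ColumnNames('my_table'::regclass);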
16
lib/sql/scripts-available/CDB_ColumnType.sql
Normal file
16
lib/sql/scripts-available/CDB_ColumnType.sql
Normal file
@ -0,0 +1,16 @@
|
||||
-- Function returning the type of a column
CREATE OR REPLACE FUNCTION @extschema@.CDB_ColumnType(REGCLASS, TEXT)
RETURNS information_schema.character_data
AS $$
  SELECT
    format_type(a.atttypid, NULL)::information_schema.character_data data_type
  FROM pg_class c
  LEFT JOIN pg_attribute a ON a.attrelid = c.oid
  WHERE c.oid = $1::oid
    AND a.attname = $2
    AND a.attstattarget < 0; -- exclude system columns
$$ LANGUAGE SQL STABLE PARALLEL SAFE;

-- This is to migrate from pre-0.2.0 version
-- See http://github.com/CartoDB/cartodb-postgresql/issues/36
GRANT EXECUTE ON FUNCTION @extschema@.CDB_ColumnType(REGCLASS, TEXT) TO public;
48
lib/sql/scripts-available/CDB_Conf.sql
Normal file
48
lib/sql/scripts-available/CDB_Conf.sql
Normal file
@ -0,0 +1,48 @@
|
||||
----------------------------------
|
||||
-- CONF MANAGEMENT FUNCTIONS
|
||||
--
|
||||
-- Meant to be used by superadmin user.
|
||||
-- Functions that need to read the configuration should use SECURITY DEFINER.
|
||||
----------------------------------
|
||||
|
||||
-- This will trigger NOTICE if @extschema@.CDB_CONF already exists
|
||||
DO LANGUAGE 'plpgsql' $$
|
||||
BEGIN
|
||||
CREATE TABLE IF NOT EXISTS @extschema@.CDB_CONF ( KEY TEXT PRIMARY KEY, VALUE JSON NOT NULL );
|
||||
END
|
||||
$$;
|
||||
|
||||
-- This can only be called from an SQL script executed by CREATE EXTENSION
|
||||
DO LANGUAGE 'plpgsql' $$
|
||||
BEGIN
|
||||
PERFORM pg_catalog.pg_extension_config_dump('@extschema@.CDB_CONF', '');
|
||||
END
|
||||
$$;
|
||||
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Conf_SetConf(key text, value JSON)
|
||||
RETURNS void AS $$
|
||||
BEGIN
|
||||
PERFORM @extschema@.CDB_Conf_RemoveConf(key);
|
||||
EXECUTE 'INSERT INTO @extschema@.CDB_CONF (KEY, VALUE) VALUES ($1, $2);' USING key, value;
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Conf_RemoveConf(key text)
|
||||
RETURNS void AS $$
|
||||
BEGIN
|
||||
EXECUTE 'DELETE FROM @extschema@.CDB_CONF WHERE KEY = $1;' USING key;
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Conf_GetConf(key text)
|
||||
RETURNS JSON AS $$
|
||||
DECLARE
|
||||
value JSON;
|
||||
BEGIN
|
||||
EXECUTE 'SELECT VALUE FROM @extschema@.CDB_CONF WHERE KEY = $1;' INTO value USING key;
|
||||
RETURN value;
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL STABLE PARALLEL SAFE;
|
14
lib/sql/scripts-available/CDB_DDLTriggers.sql
Normal file
14
lib/sql/scripts-available/CDB_DDLTriggers.sql
Normal file
@ -0,0 +1,14 @@
|
||||
--
-- Legacy file
-- Introduced again to make sure that updates do not leave dangling functions
--

DROP FUNCTION IF EXISTS @extschema@.cdb_handle_create_table();
DROP FUNCTION IF EXISTS @extschema@.cdb_handle_drop_table();
DROP FUNCTION IF EXISTS @extschema@.cdb_handle_alter_column();
DROP FUNCTION IF EXISTS @extschema@.cdb_handle_drop_column();
DROP FUNCTION IF EXISTS @extschema@.cdb_handle_add_column();
DROP FUNCTION IF EXISTS @extschema@.cdb_disable_ddl_hooks();
DROP FUNCTION IF EXISTS @extschema@.cdb_enable_ddl_hooks();
31
lib/sql/scripts-available/CDB_DateToNumber.sql
Normal file
31
lib/sql/scripts-available/CDB_DateToNumber.sql
Normal file
@ -0,0 +1,31 @@
|
||||
-- Convert timestamp to double precision
--
CREATE OR REPLACE FUNCTION @extschema@.CDB_DateToNumber(input timestamp)
RETURNS double precision AS $$
DECLARE output double precision;
BEGIN
  BEGIN
    SELECT extract (EPOCH FROM input) INTO output;
  EXCEPTION WHEN OTHERS THEN
    RETURN NULL;
  END;
  RETURN output;
END;
$$
LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL UNSAFE;

-- Convert timestamp with time zone to double precision
--
CREATE OR REPLACE FUNCTION @extschema@.CDB_DateToNumber(input timestamp with time zone)
RETURNS double precision AS $$
DECLARE output double precision;
BEGIN
  BEGIN
    SELECT extract (EPOCH FROM input) INTO output;
  EXCEPTION WHEN OTHERS THEN
    RETURN NULL;
  END;
  RETURN output;
END;
$$
LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL UNSAFE;
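-- Usage sketch (arbitrary input value):
--   SELECT @extschema@.CDB_DateToNumber('2019-01-01'::timestamp);
--   -- returns the UNIX epoch of the input as double precision, 1546300800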
53
lib/sql/scripts-available/CDB_DigitSeparator.sql
Normal file
53
lib/sql/scripts-available/CDB_DigitSeparator.sql
Normal file
@ -0,0 +1,53 @@
|
||||
-- Find thousand and decimal digits separators
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_DigitSeparator (rel REGCLASS, fld TEXT, OUT t CHAR, OUT d CHAR)
|
||||
as $$
|
||||
DECLARE
|
||||
sql TEXT;
|
||||
rec RECORD;
|
||||
BEGIN
|
||||
|
||||
-- We're only interested in rows with either "," or '.'
|
||||
sql := 'SELECT ' || quote_ident(fld) || ' as f FROM ' || rel::text
|
||||
|| ' WHERE ' || quote_ident(fld) || ' ~ ''[,.]''';
|
||||
|
||||
FOR rec IN EXECUTE sql
|
||||
LOOP
|
||||
-- Any separator appearing more than once
|
||||
-- will be assumed to be thousand separator
|
||||
IF rec.f ~ ',.*,' THEN
|
||||
t := ','; d := '.';
|
||||
RETURN;
|
||||
ELSIF rec.f ~ '\..*\.' THEN
|
||||
t := '.'; d := ',';
|
||||
RETURN;
|
||||
END IF;
|
||||
|
||||
-- If both separators are present, the rightmost
|
||||
-- will be assumed to be decimal separator
|
||||
IF rec.f ~ '\.' AND rec.f ~ ',' THEN
|
||||
rec.f = reverse(rec.f);
|
||||
IF strpos(rec.f, ',') < strpos(rec.f, '.') THEN
|
||||
t := '.'; d := ',';
|
||||
ELSE
|
||||
t := ','; d := '.';
|
||||
END IF;
|
||||
RETURN;
|
||||
END IF;
|
||||
|
||||
-- A separator NOT followed by 3 digits
|
||||
-- will be assumed to be decimal separator
|
||||
IF rec.f ~ ',' AND rec.f !~ '(,[0-9]{3}$)|(,[0-9]{3}[,.])' THEN
|
||||
t := '.'; d := ',';
|
||||
RETURN;
|
||||
ELSIF rec.f ~ '\.' AND rec.f !~ '(\.[0-9]{3}$)|(\.[0-9]{3}[,.])' THEN
|
||||
t := ','; d := '.';
|
||||
RETURN;
|
||||
END IF;
|
||||
|
||||
-- Otherwise continue looking
|
||||
|
||||
END LOOP;
|
||||
|
||||
END
|
||||
$$
|
||||
LANGUAGE 'plpgsql' STABLE STRICT PARALLEL SAFE;
|
122
lib/sql/scripts-available/CDB_DistType.sql
Normal file
122
lib/sql/scripts-available/CDB_DistType.sql
Normal file
@ -0,0 +1,122 @@
|
||||
--
|
||||
-- CDB_DistType classifies the histograms of a column into
|
||||
-- one of the basic types listed by Galtung: http://druedin.com/2012/12/08/galtungs-ajus-system/
|
||||
--
|
||||
-- Future improvements:
|
||||
-- variable number of bins (7 is baked in right now)
|
||||
-- catch the number of items to ensure that the sample is large enough
|
||||
--
|
||||
-- Refs:
|
||||
-- 1. width_bucket/histograms: http://tapoueh.org/blog/2014/02/21-PostgreSQL-histogram
|
||||
-- 2. R implementation: https://github.com/cran/agrmt
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_DistType ( in_array NUMERIC[] ) RETURNS text as $$
|
||||
DECLARE
|
||||
element_count INT4;
|
||||
minv numeric;
|
||||
maxv numeric;
|
||||
bins numeric[];
|
||||
freqs numeric[];
|
||||
ajus INT[];
|
||||
freq INT4;
|
||||
signature text;
|
||||
i INT := 1;
|
||||
BEGIN
|
||||
SELECT min(e), max(e), count(e) INTO minv, maxv, element_count FROM ( SELECT unnest(in_array) e ) x;
|
||||
|
||||
IF abs(maxv - minv) < 1e-7 THEN -- if max and min are nearly equal, call it 'F' (make relative to maxv?)
|
||||
signature = 'F';
|
||||
ELSE
|
||||
-- Calculate bins and count in bins
|
||||
EXECUTE 'WITH stats as (
|
||||
SELECT min(e) as minv,
|
||||
max(e) as maxv,
|
||||
count(e) as total
|
||||
FROM (SELECT unnest($1) e) x
|
||||
WHERE e is not null
|
||||
),
|
||||
hist as (
|
||||
SELECT width_bucket(e, s.minv, s.maxv, 7) bucket,
|
||||
count(*) freq
|
||||
FROM (SELECT unnest($1) e) x, stats s
|
||||
WHERE e is not null
|
||||
GROUP BY 1
|
||||
ORDER BY 1
|
||||
)
|
||||
SELECT array_agg(round(100.0 * hist.freq::numeric / stats.total::numeric,1)) freqs,
|
||||
array_agg(hist.bucket) buckets
|
||||
FROM hist, stats'
|
||||
INTO freqs, bins
|
||||
USING in_array;
|
||||
|
||||
LOOP
|
||||
IF i < 7 THEN
|
||||
ajus[i] = CASE WHEN freqs[i] > freqs[i+1] THEN -1
|
||||
WHEN abs(freqs[i] - freqs[i+1]) <= 0.05 THEN 0
|
||||
ELSE 1 END;
|
||||
ELSE
|
||||
EXIT;
|
||||
END IF;
|
||||
i := i + 1;
|
||||
END LOOP;
|
||||
|
||||
signature = @extschema@._CDB_DistTypeClassify(ajus);
|
||||
END IF;
|
||||
|
||||
RETURN signature;
|
||||
END;
|
||||
$$ language plpgsql IMMUTABLE STRICT PARALLEL SAFE;
|
||||
|
||||
-- Classify data into AJUSFL
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_DistTypeClassify ( in_array INT[] ) RETURNS text as $$
|
||||
DECLARE
|
||||
element_count INT4;
|
||||
maxv numeric;
|
||||
minv numeric;
|
||||
uniques INT[];
|
||||
type text;
|
||||
BEGIN
|
||||
SELECT max(e), min(e) INTO maxv, minv FROM ( SELECT unnest(in_array) e ) x;
|
||||
|
||||
IF (maxv = 0 AND minv = 0) THEN
|
||||
type = 'F';
|
||||
ELSIF maxv < 1 THEN
|
||||
type = 'L';
|
||||
ELSIF minv > -1 THEN
|
||||
type = 'J';
|
||||
ELSE
|
||||
-- Get distinct elements ordered by original position
|
||||
EXECUTE 'WITH b AS (
|
||||
SELECT a
|
||||
FROM (SELECT unnest($1) a) x
|
||||
),
|
||||
c AS (
|
||||
SELECT a, row_number() OVER () r
|
||||
FROM b
|
||||
),
|
||||
d AS (
|
||||
SELECT DISTINCT a
|
||||
FROM c
|
||||
),
|
||||
e AS (
|
||||
SELECT a FROM d ORDER BY (
|
||||
SELECT r FROM c WHERE d.a = c.a ORDER BY r ASC LIMIT 1
|
||||
) ASC)
|
||||
SELECT array_agg(a) FROM e'
|
||||
INTO uniques
|
||||
USING in_array;
|
||||
|
||||
-- Decide if it's an A, U, or other
|
||||
IF (uniques = ARRAY[1,-1] OR uniques = ARRAY[1,0,-1] OR uniques = ARRAY[1,-1,0] OR uniques = ARRAY[0,1,-1]) THEN
|
||||
type = 'A';
|
||||
ELSIF (uniques = ARRAY[-1,1] OR uniques = ARRAY[-1,0,1] OR uniques = ARRAY[-1,1,0] OR uniques = ARRAY[0,-1,1]) THEN
|
||||
type = 'U';
|
||||
ELSE
|
||||
type = 'S';
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
RETURN type;
|
||||
END;
|
||||
$$ language plpgsql IMMUTABLE STRICT PARALLEL SAFE;
|
46
lib/sql/scripts-available/CDB_DistinctMeasure.sql
Normal file
46
lib/sql/scripts-available/CDB_DistinctMeasure.sql
Normal file
@ -0,0 +1,46 @@
|
||||
--
|
||||
-- CDB_DistinctMeasure
|
||||
-- calculates the fraction of rows in the 10 most common distinct categories
|
||||
-- when a threshold is given, returns 1 if the fraction of rows in these 10 categories is >= threshold (otherwise returns the fraction itself)
|
||||
--
|
||||
--
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_DistinctMeasure ( in_array text[], threshold numeric DEFAULT null ) RETURNS numeric as $$
|
||||
DECLARE
|
||||
element_count INT4;
|
||||
maxval numeric;
|
||||
passes numeric;
|
||||
BEGIN
|
||||
SELECT count(e) INTO element_count FROM ( SELECT unnest(in_array) e ) x;
|
||||
|
||||
-- count number of occurrences per bin
|
||||
-- calculate the normalized cumulative sum
|
||||
-- return the max value, which corresponds to the nth entry
|
||||
-- for n <= 10 depending on # of distinct values
|
||||
EXECUTE 'WITH a As (
|
||||
SELECT
|
||||
count(*) cnt
|
||||
FROM
|
||||
(SELECT * FROM unnest($2) e ) x
|
||||
WHERE e is not null
|
||||
GROUP BY e
|
||||
ORDER BY cnt DESC
|
||||
),
|
||||
b As (
|
||||
SELECT
|
||||
sum(cnt) OVER (ORDER BY cnt DESC) / $1 As cumsum
|
||||
FROM a
|
||||
LIMIT 10
|
||||
)
|
||||
SELECT max(cumsum) maxval FROM b'
|
||||
INTO maxval
|
||||
USING element_count, in_array;
|
||||
IF threshold is null THEN
|
||||
passes = maxval;
|
||||
ELSE
|
||||
passes = CASE WHEN (maxval >= threshold) THEN 1 ELSE 0 END;
|
||||
END IF;
|
||||
|
||||
RETURN passes;
|
||||
END;
|
||||
$$ language plpgsql IMMUTABLE PARALLEL SAFE;
|
24
lib/sql/scripts-available/CDB_EqualIntervalBins.sql
Normal file
24
lib/sql/scripts-available/CDB_EqualIntervalBins.sql
Normal file
@ -0,0 +1,24 @@
|
||||
--
-- Calculate the equal interval bins for a given column
--
-- @param in_array An array of numbers to determine the best
--            bin boundary
--
-- @param breaks The number of bins you want to find.
--
-- Returns: upper edges of bins
--

CREATE OR REPLACE FUNCTION @extschema@.CDB_EqualIntervalBins ( in_array anyarray, breaks INT ) RETURNS anyarray as $$
  WITH stats AS (
    SELECT min(e), (max(e)-min(e))/breaks AS del
    FROM (SELECT unnest(in_array) e) AS p)
  SELECT array_agg(bins)
  FROM (
    SELECT min + generate_series(1,breaks)*del AS bins
    FROM stats) q;
$$ LANGUAGE SQL IMMUTABLE PARALLEL SAFE;

DROP FUNCTION IF EXISTS @extschema@.CDB_EqualIntervalBins( numeric[], integer);
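-- Usage sketch (arbitrary numbers):
--   SELECT @extschema@.CDB_EqualIntervalBins(ARRAY[1,2,3,4,5,6,7,8,9,10]::numeric[], 5);
--   -- upper bin edges 2.8, 4.6, 6.4, 8.2 and 10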
31
lib/sql/scripts-available/CDB_EstimateRowCount.sql
Normal file
31
lib/sql/scripts-available/CDB_EstimateRowCount.sql
Normal file
@ -0,0 +1,31 @@
|
||||
-- Internal function to generate stats for a table if they don't exist
CREATE OR REPLACE FUNCTION @extschema@._CDB_GenerateStats(reloid REGCLASS)
RETURNS VOID
AS $$
DECLARE
  has_stats BOOLEAN;
BEGIN
  SELECT EXISTS (
    SELECT * FROM pg_catalog.pg_statistic WHERE starelid = reloid
  ) INTO has_stats;
  IF NOT has_stats THEN
    EXECUTE Format('ANALYZE %s;', reloid);
  END IF;
END
$$ LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE SECURITY DEFINER;

-- Return a row count estimate of the result of a query using statistics
CREATE OR REPLACE FUNCTION @extschema@.CDB_EstimateRowCount(query text)
RETURNS Numeric
AS $$
DECLARE
  plan JSON;
BEGIN
  -- Make sure statistics exist for all the tables of the query
  PERFORM @extschema@._CDB_GenerateStats(tabname) FROM unnest(@extschema@.CDB_QueryTablesText(query)) AS tabname;

  -- Use the query planner to obtain an estimate of the number of result rows
  EXECUTE 'EXPLAIN (FORMAT JSON) ' || query INTO STRICT plan;
  RETURN plan->0->'Plan'->'Plan Rows';
END
$$ LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
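-- Usage sketch ('my_table' is a placeholder for any existing table):
--   SELECT @extschema@.CDB_EstimateRowCount('SELECT * FROM my_table WHERE the_geom IS NOT NULL');
--   -- returns the planner's estimate, not an exact count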
2
lib/sql/scripts-available/CDB_ExtensionPost.sql
Normal file
2
lib/sql/scripts-available/CDB_ExtensionPost.sql
Normal file
@ -0,0 +1,2 @@
|
||||
SELECT pg_catalog.pg_extension_config_dump('@extschema@.cdb_tablemetadata','');
20
lib/sql/scripts-available/CDB_ExtensionUtils.sql
Normal file
20
lib/sql/scripts-available/CDB_ExtensionUtils.sql
Normal file
@ -0,0 +1,20 @@
|
||||
CREATE OR REPLACE FUNCTION @extschema@.cdb_extension_reload() RETURNS void
AS $$
DECLARE
  ver TEXT;
  sql TEXT;
BEGIN
  ver := split_part(@extschema@.cdb_version(), ' ', 1);
  sql := 'ALTER EXTENSION cartodb UPDATE TO ''' || ver || 'next''';
  EXECUTE sql;
  sql := 'ALTER EXTENSION cartodb UPDATE TO ''' || ver || '''';
  EXECUTE sql;
END;
$$ language 'plpgsql' VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE FUNCTION @extschema@.schema_exists(schema_name text)
RETURNS boolean AS
$$
  SELECT EXISTS(SELECT 1 FROM pg_namespace WHERE nspname = schema_name::text);
$$
language sql STABLE PARALLEL SAFE;
206
lib/sql/scripts-available/CDB_ForeignTable.sql
Normal file
206
lib/sql/scripts-available/CDB_ForeignTable.sql
Normal file
@ -0,0 +1,206 @@
|
||||
---------------------------
|
||||
-- FDW MANAGEMENT FUNCTIONS
|
||||
--
|
||||
-- All the FDW settings are read from the `cdb_conf.fdws` entry json file.
|
||||
---------------------------
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_Setup_FDW(fdw_name text, config json)
|
||||
RETURNS void
|
||||
AS $$
|
||||
DECLARE
|
||||
row record;
|
||||
option record;
|
||||
org_role text;
|
||||
BEGIN
|
||||
-- This function tries to be as idempotent as possible, by not creating anything more than once
|
||||
-- (not even using IF NOT EXIST to avoid throwing warnings)
|
||||
IF NOT EXISTS ( SELECT * FROM pg_extension WHERE extname = 'postgres_fdw') THEN
|
||||
CREATE EXTENSION postgres_fdw;
|
||||
END IF;
|
||||
-- Create FDW first if it does not exist
|
||||
IF NOT EXISTS ( SELECT * FROM pg_foreign_server WHERE srvname = fdw_name)
|
||||
THEN
|
||||
EXECUTE FORMAT('CREATE SERVER %I FOREIGN DATA WRAPPER postgres_fdw', fdw_name);
|
||||
END IF;
|
||||
|
||||
-- Set FDW settings
|
||||
FOR row IN SELECT p.key, p.value from lateral json_each_text(config->'server') p
|
||||
LOOP
|
||||
IF NOT EXISTS (WITH a AS (select split_part(unnest(srvoptions), '=', 1) as options from pg_foreign_server where srvname=fdw_name) SELECT * from a where options = row.key)
|
||||
THEN
|
||||
EXECUTE FORMAT('ALTER SERVER %I OPTIONS (ADD %I %L)', fdw_name, row.key, row.value);
|
||||
ELSE
|
||||
EXECUTE FORMAT('ALTER SERVER %I OPTIONS (SET %I %L)', fdw_name, row.key, row.value);
|
||||
END IF;
|
||||
END LOOP;
|
||||
|
||||
-- Create user mappings
|
||||
FOR row IN SELECT p.key, p.value from lateral json_each(config->'users') p LOOP
|
||||
-- Check if entry on pg_user_mappings exists
|
||||
|
||||
IF NOT EXISTS ( SELECT * FROM pg_user_mappings WHERE srvname = fdw_name AND usename = row.key ) THEN
|
||||
EXECUTE FORMAT ('CREATE USER MAPPING FOR %I SERVER %I', row.key, fdw_name);
|
||||
END IF;
|
||||
|
||||
-- Update user mapping settings
|
||||
FOR option IN SELECT o.key, o.value from lateral json_each_text(row.value) o LOOP
|
||||
IF NOT EXISTS (WITH a AS (select split_part(unnest(umoptions), '=', 1) as options from pg_user_mappings WHERE srvname = fdw_name AND usename = row.key) SELECT * from a where options = option.key) THEN
|
||||
EXECUTE FORMAT('ALTER USER MAPPING FOR %I SERVER %I OPTIONS (ADD %I %L)', row.key, fdw_name, option.key, option.value);
|
||||
ELSE
|
||||
EXECUTE FORMAT('ALTER USER MAPPING FOR %I SERVER %I OPTIONS (SET %I %L)', row.key, fdw_name, option.key, option.value);
|
||||
END IF;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
|
||||
-- Create schema if it does not exist.
|
||||
IF NOT EXISTS ( SELECT * from pg_namespace WHERE nspname=fdw_name) THEN
|
||||
EXECUTE FORMAT ('CREATE SCHEMA %I', fdw_name);
|
||||
END IF;
|
||||
|
||||
-- Give the organization role usage permissions over the schema
|
||||
SELECT @extschema@.CDB_Organization_Member_Group_Role_Member_Name() INTO org_role;
|
||||
EXECUTE FORMAT ('GRANT USAGE ON SCHEMA %I TO %I', fdw_name, org_role);
|
||||
|
||||
-- Bring here the remote cdb_tablemetadata
|
||||
IF NOT EXISTS ( SELECT * FROM PG_CLASS WHERE relnamespace = (SELECT oid FROM pg_namespace WHERE nspname=fdw_name) and relname='cdb_tablemetadata') THEN
|
||||
EXECUTE FORMAT ('CREATE FOREIGN TABLE %I.cdb_tablemetadata (tabname text, updated_at timestamp with time zone) SERVER %I OPTIONS (table_name ''cdb_tablemetadata_text'', schema_name ''@extschema@'', updatable ''false'')', fdw_name, fdw_name);
|
||||
END IF;
|
||||
EXECUTE FORMAT ('GRANT SELECT ON %I.cdb_tablemetadata TO %I', fdw_name, org_role);
|
||||
|
||||
END
|
||||
$$
|
||||
LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_Setup_FDWS()
|
||||
RETURNS VOID AS
|
||||
$$
|
||||
DECLARE
|
||||
row record;
|
||||
BEGIN
|
||||
FOR row IN SELECT p.key, p.value from lateral json_each(@extschema@.CDB_Conf_GetConf('fdws')) p LOOP
|
||||
EXECUTE 'SELECT @extschema@._CDB_Setup_FDW($1, $2)' USING row.key, row.value;
|
||||
END LOOP;
|
||||
END
|
||||
$$
|
||||
LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_Setup_FDW(fdw_name text)
|
||||
RETURNS void AS
|
||||
$BODY$
|
||||
DECLARE
|
||||
config json;
|
||||
BEGIN
|
||||
SELECT p.value FROM LATERAL json_each(@extschema@.CDB_Conf_GetConf('fdws')) p WHERE p.key = fdw_name INTO config;
|
||||
EXECUTE 'SELECT @extschema@._CDB_Setup_FDW($1, $2)' USING fdw_name, config;
|
||||
END
|
||||
$BODY$
|
||||
LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Add_Remote_Table(source text, table_name text)
|
||||
RETURNS void AS
|
||||
$$
|
||||
BEGIN
|
||||
PERFORM @extschema@._CDB_Setup_FDW(source);
|
||||
EXECUTE FORMAT ('IMPORT FOREIGN SCHEMA %I LIMIT TO (%I) FROM SERVER %I INTO %I;', source, table_name, source, source);
|
||||
--- Grant SELECT to publicuser
|
||||
EXECUTE FORMAT ('GRANT SELECT ON %I.%I TO publicuser;', source, table_name);
|
||||
END
|
||||
$$
|
||||
LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Get_Foreign_Updated_At(foreign_table regclass)
|
||||
RETURNS timestamp with time zone AS
|
||||
$$
|
||||
DECLARE
|
||||
remote_table_name text;
|
||||
fdw_schema_name text;
|
||||
time timestamp with time zone;
|
||||
BEGIN
|
||||
-- This will turn a local foreign table (referenced as regclass) to its fully qualified text remote table reference.
|
||||
WITH a AS (SELECT ftoptions FROM pg_foreign_table WHERE ftrelid=foreign_table LIMIT 1),
|
||||
b as (SELECT (pg_options_to_table(ftoptions)).* FROM a)
|
||||
SELECT FORMAT('%I.%I', (SELECT option_value FROM b WHERE option_name='schema_name'), (SELECT option_value FROM b WHERE option_name='table_name'))
|
||||
INTO remote_table_name;
|
||||
|
||||
-- We assume that the remote cdb_tablemetadata is called cdb_tablemetadata and is on the same schema as the queried table.
|
||||
SELECT nspname FROM pg_class c, pg_namespace n WHERE c.oid=foreign_table AND c.relnamespace = n.oid INTO fdw_schema_name;
|
||||
BEGIN
|
||||
EXECUTE FORMAT('SELECT updated_at FROM %I.cdb_tablemetadata WHERE tabname=%L ORDER BY updated_at DESC LIMIT 1', fdw_schema_name, remote_table_name) INTO time;
|
||||
EXCEPTION
|
||||
WHEN undefined_table THEN
|
||||
-- If you add a GET STACKED DIAGNOSTICS text_var = RETURNED_SQLSTATE
|
||||
-- you get a code 42P01 which corresponds to undefined_table
|
||||
RAISE NOTICE 'CDB_Get_Foreign_Updated_At: could not find %.cdb_tablemetadata while checking % updated_at, returning NULL timestamp', fdw_schema_name, foreign_table;
|
||||
END;
|
||||
RETURN time;
|
||||
END
|
||||
$$
|
||||
LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._cdb_dbname_of_foreign_table(reloid oid)
|
||||
RETURNS TEXT AS $$
|
||||
SELECT option_value FROM pg_options_to_table((
|
||||
|
||||
SELECT fs.srvoptions
|
||||
FROM pg_foreign_table ft
|
||||
LEFT JOIN pg_foreign_server fs ON ft.ftserver = fs.oid
|
||||
WHERE ft.ftrelid = reloid
|
||||
|
||||
)) WHERE option_name='dbname';
|
||||
$$ LANGUAGE SQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
-- Return a set of (dbname, schema_name, table_name, updated_at)
|
||||
-- It is aware of foreign tables
|
||||
-- It assumes the local (schema_name, table_name) map to the remote ones with the same name
|
||||
-- Note: dbname is never quoted whereas schema and table names are when needed.
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_QueryTables_Updated_At(query text)
|
||||
RETURNS TABLE(dbname text, schema_name text, table_name text, updated_at timestamptz)
|
||||
AS $$
|
||||
WITH query_tables AS (
|
||||
SELECT unnest(@extschema@.CDB_QueryTablesText(query)) schema_table_name
|
||||
), query_tables_oid AS (
|
||||
SELECT schema_table_name, schema_table_name::regclass::oid AS reloid
|
||||
FROM query_tables
|
||||
),
|
||||
fqtn AS (
|
||||
SELECT
|
||||
(CASE WHEN c.relkind = 'f' THEN @extschema@._cdb_dbname_of_foreign_table(query_tables_oid.reloid)
|
||||
ELSE current_database()
|
||||
END)::text AS dbname,
|
||||
quote_ident(n.nspname::text) schema_name,
|
||||
quote_ident(c.relname::text) table_name,
|
||||
c.relkind,
|
||||
query_tables_oid.reloid
|
||||
FROM query_tables_oid, pg_catalog.pg_class c
|
||||
LEFT JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
|
||||
WHERE c.oid = query_tables_oid.reloid
|
||||
)
|
||||
SELECT fqtn.dbname, fqtn.schema_name, fqtn.table_name,
|
||||
(CASE WHEN relkind = 'f' THEN @extschema@.CDB_Get_Foreign_Updated_At(reloid)
|
||||
ELSE (SELECT md.updated_at FROM @extschema@.CDB_TableMetadata md WHERE md.tabname = reloid)
|
||||
END) AS updated_at
|
||||
FROM fqtn;
|
||||
$$ LANGUAGE SQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
-- Return the last updated time of a set of tables
|
||||
-- It is aware of foreign tables
|
||||
-- It assumes the local (schema_name, table_name) map to the remote ones with the same name
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Last_Updated_Time(tables text[])
|
||||
RETURNS timestamptz AS $$
|
||||
WITH t AS (
|
||||
SELECT unnest(tables) AS schema_table_name
|
||||
), t_oid AS (
|
||||
SELECT (t.schema_table_name)::regclass::oid as reloid FROM t
|
||||
), t_updated_at AS (
|
||||
SELECT
|
||||
(CASE WHEN relkind = 'f' THEN @extschema@.CDB_Get_Foreign_Updated_At(reloid)
|
||||
ELSE (SELECT md.updated_at FROM @extschema@.CDB_TableMetadata md WHERE md.tabname = reloid)
|
||||
END) AS updated_at
|
||||
FROM t_oid
|
||||
LEFT JOIN pg_catalog.pg_class c ON c.oid = reloid
|
||||
) SELECT max(updated_at) FROM t_updated_at;
|
||||
$$ LANGUAGE SQL VOLATILE PARALLEL UNSAFE;
|
123
lib/sql/scripts-available/CDB_GhostTables.sql
Normal file
123
lib/sql/scripts-available/CDB_GhostTables.sql
Normal file
@ -0,0 +1,123 @@
|
||||
-- Enqueues a job to run Ghost tables linking process for the provided username
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_LinkGhostTables(username text, db_name text, event_name text)
|
||||
RETURNS void
|
||||
AS $$
|
||||
if not username:
|
||||
return
|
||||
|
||||
if 'json' not in GD:
|
||||
import json
|
||||
GD['json'] = json
|
||||
else:
|
||||
json = GD['json']
|
||||
|
||||
tis_config = plpy.execute("select @extschema@.CDB_Conf_GetConf('invalidation_service');")[0]['cdb_conf_getconf']
|
||||
if not tis_config:
|
||||
plpy.warning('Invalidation service configuration not found. Skipping Ghost Tables linking.')
|
||||
return
|
||||
|
||||
tis_config_dict = json.loads(tis_config)
|
||||
tis_host = tis_config_dict.get('host')
|
||||
tis_port = tis_config_dict.get('port')
|
||||
tis_timeout = tis_config_dict.get('timeout', 5)
|
||||
tis_retry = tis_config_dict.get('retry', 5)
|
||||
|
||||
client = GD.get('invalidation', None)
|
||||
|
||||
while True:
|
||||
|
||||
if not client:
|
||||
try:
|
||||
import redis
|
||||
client = redis.Redis(host=tis_host, port=tis_port, socket_timeout=tis_timeout)
|
||||
GD['invalidation'] = client
|
||||
except Exception as err:
|
||||
error = "client_error - %s" % str(err)
|
||||
# NOTE: no retries on connection error
|
||||
plpy.warning('Error trying to connect to Invalidation Service to link Ghost Tables: ' + str(err))
|
||||
break
|
||||
|
||||
try:
|
||||
client.execute_command('DBSCH', db_name, username, event_name)
|
||||
break
|
||||
except Exception as err:
|
||||
error = "request_error - %s" % str(err)
|
||||
client = GD['invalidation'] = None # force reconnect
|
||||
if not tis_retry:
|
||||
plpy.warning('Error calling Invalidation Service to link Ghost Tables: ' + str(err))
|
||||
break
|
||||
tis_retry -= 1 # try reconnecting
|
||||
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Enqueues a job to run Ghost tables linking process for the current user
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_LinkGhostTables(event_name text DEFAULT 'USER')
|
||||
RETURNS void
|
||||
AS $$
|
||||
DECLARE
|
||||
username TEXT;
|
||||
db_name TEXT;
|
||||
BEGIN
|
||||
EXECUTE 'SELECT @extschema@.CDB_Username();' INTO username;
|
||||
EXECUTE 'SELECT current_database();' INTO db_name;
|
||||
|
||||
PERFORM @extschema@._CDB_LinkGhostTables(username, db_name, event_name);
|
||||
RAISE NOTICE '_CDB_LinkGhostTables() called with username=%, event_name=%', username, event_name;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE SECURITY DEFINER;
|
||||
|
||||
-- Trigger function to call CDB_LinkGhostTables()
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_LinkGhostTablesTrigger()
|
||||
RETURNS trigger
|
||||
AS $$
|
||||
DECLARE
|
||||
ddl_tag TEXT;
|
||||
BEGIN
|
||||
EXECUTE 'DELETE FROM @extschema@.cdb_ddl_execution WHERE txid = txid_current() RETURNING tag;' INTO ddl_tag;
|
||||
PERFORM @extschema@.CDB_LinkGhostTables(ddl_tag);
|
||||
RETURN NULL;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE SECURITY DEFINER;
|
||||
|
||||
-- Event trigger to save the current transaction in @extschema@.cdb_ddl_execution
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_SaveDDLTransaction()
|
||||
RETURNS event_trigger
|
||||
AS $$
|
||||
BEGIN
|
||||
INSERT INTO @extschema@.cdb_ddl_execution VALUES (txid_current(), tg_tag) ON CONFLICT ON CONSTRAINT cdb_ddl_execution_pkey DO NOTHING;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE SECURITY DEFINER;
|
||||
|
||||
-- Creates the trigger on DDL events to link ghost tables
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_EnableGhostTablesTrigger()
|
||||
RETURNS void
|
||||
AS $$
|
||||
BEGIN
|
||||
DROP EVENT TRIGGER IF EXISTS link_ghost_tables;
|
||||
DROP TRIGGER IF EXISTS check_ddl_update ON @extschema@.cdb_ddl_execution;
|
||||
|
||||
-- Table to store the transaction id from DDL events to avoid multiple executions
|
||||
CREATE TABLE IF NOT EXISTS @extschema@.cdb_ddl_execution(txid bigint PRIMARY KEY, tag text);
|
||||
|
||||
CREATE CONSTRAINT TRIGGER check_ddl_update
|
||||
AFTER INSERT ON @extschema@.cdb_ddl_execution
|
||||
INITIALLY DEFERRED
|
||||
FOR EACH ROW
|
||||
EXECUTE PROCEDURE @extschema@._CDB_LinkGhostTablesTrigger();
|
||||
|
||||
CREATE EVENT TRIGGER link_ghost_tables
|
||||
ON ddl_command_end
|
||||
WHEN TAG IN ('CREATE TABLE', 'SELECT INTO', 'DROP TABLE', 'ALTER TABLE', 'CREATE TRIGGER', 'DROP TRIGGER', 'CREATE VIEW', 'DROP VIEW', 'ALTER VIEW', 'CREATE FOREIGN TABLE', 'ALTER FOREIGN TABLE', 'DROP FOREIGN TABLE')
|
||||
EXECUTE PROCEDURE @extschema@.CDB_SaveDDLTransaction();
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Drops the trigger on DDL events to link ghost tables
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_DisableGhostTablesTrigger()
|
||||
RETURNS void
|
||||
AS $$
|
||||
BEGIN
|
||||
DROP EVENT TRIGGER IF EXISTS link_ghost_tables;
|
||||
DROP TRIGGER IF EXISTS check_ddl_update ON @extschema@.cdb_ddl_execution;
|
||||
DROP TABLE IF EXISTS @extschema@.cdb_ddl_execution;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
26
lib/sql/scripts-available/CDB_GreatCircle.sql
Normal file
26
lib/sql/scripts-available/CDB_GreatCircle.sql
Normal file
@ -0,0 +1,26 @@
|
||||
-- Great circle point-to-point routes, based on:
-- http://blog.cartodb.com/jets-and-datelines/
--
CREATE OR REPLACE FUNCTION @extschema@.CDB_GreatCircle(start_point @postgisschema@.geometry, end_point @postgisschema@.geometry, max_segment_length NUMERIC DEFAULT 100000)
RETURNS @postgisschema@.geometry AS $$
DECLARE
  line @postgisschema@.geometry;
BEGIN
  line = @postgisschema@.ST_Segmentize(
    @postgisschema@.ST_Makeline(
      start_point,
      end_point
    )::geography,
    max_segment_length
  )::geometry;

  IF @postgisschema@.ST_XMax(line) - @postgisschema@.ST_XMin(line) > 180 THEN
    line = @postgisschema@.ST_Difference(
      @postgisschema@.ST_ShiftLongitude(line),
      @postgisschema@.ST_Buffer(@postgisschema@.ST_GeomFromText('LINESTRING(180 90, 180 -90)', 4326), 0.00001)
    );
  END IF;
  RETURN line;
END;
$$
LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL SAFE;
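-- Usage sketch (arbitrary coordinates, roughly New York to Tokyo, EPSG:4326):
--   SELECT @extschema@.CDB_GreatCircle(
--     ST_SetSRID(ST_MakePoint(-73.98, 40.76), 4326),
--     ST_SetSRID(ST_MakePoint(139.69, 35.68), 4326)
--   );
--   -- returns a densified line, split at the antimeridian when needed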
252
lib/sql/scripts-available/CDB_Groups.sql
Normal file
252
lib/sql/scripts-available/CDB_Groups.sql
Normal file
@ -0,0 +1,252 @@
|
||||
----------------------------------
|
||||
-- GROUP MANAGEMENT FUNCTIONS
|
||||
--
|
||||
-- Meant to be used by org admin. See CDB_Organization_AddAdmin.
|
||||
----------------------------------
|
||||
|
||||
-- Creates a new group
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Group_CreateGroup(group_name text)
|
||||
RETURNS VOID AS $$
|
||||
DECLARE
|
||||
group_role TEXT;
|
||||
BEGIN
|
||||
group_role := @extschema@._CDB_Group_GroupRole(group_name);
|
||||
EXECUTE format('CREATE ROLE %I NOLOGIN;', group_role);
|
||||
PERFORM @extschema@._CDB_Group_CreateGroup_API(group_name, group_role);
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Drops group and everything that role owns
|
||||
-- TODO: LIMITATION: in order to drop a role all its owned objects must be dropped before.
|
||||
-- Right now this is done with DROP OWNED, which can only be done by a superadmin.
|
||||
-- Not even the role creator can drop the role and the objects it owns.
|
||||
-- All objects owned by the group are permissions.
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Group_DropGroup(group_name text)
|
||||
RETURNS VOID AS $$
|
||||
DECLARE
|
||||
group_role TEXT;
|
||||
BEGIN
|
||||
group_role := @extschema@._CDB_Group_GroupRole(group_name);
|
||||
EXECUTE format('DROP OWNED BY %I', group_role);
|
||||
EXECUTE format('DROP ROLE IF EXISTS %I', group_role);
|
||||
PERFORM @extschema@._CDB_Group_DropGroup_API(group_name);
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Renames a group
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Group_RenameGroup(old_group_name text, new_group_name text)
|
||||
RETURNS VOID AS $$
|
||||
DECLARE
|
||||
old_group_role TEXT;
|
||||
new_group_role TEXT;
|
||||
BEGIN
|
||||
old_group_role = @extschema@._CDB_Group_GroupRole(old_group_name);
|
||||
new_group_role = @extschema@._CDB_Group_GroupRole(new_group_name);
|
||||
EXECUTE format('ALTER ROLE %I RENAME TO %I', old_group_role, new_group_role);
|
||||
PERFORM @extschema@._CDB_Group_RenameGroup_API(old_group_name, new_group_name, new_group_role);
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Adds users to a group
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Group_AddUsers(group_name text, usernames text[])
|
||||
RETURNS VOID AS $$
|
||||
DECLARE
|
||||
group_role TEXT;
|
||||
user_role TEXT;
|
||||
username TEXT;
|
||||
BEGIN
|
||||
group_role := @extschema@._CDB_Group_GroupRole(group_name);
|
||||
foreach username in array usernames
|
||||
loop
|
||||
user_role := @extschema@._CDB_User_RoleFromUsername(username);
|
||||
IF(group_role IS NULL OR user_role IS NULL)
|
||||
THEN
|
||||
RAISE EXCEPTION 'Group role (%) and user role (%) must be already existing', group_role, user_role;
|
||||
END IF;
|
||||
EXECUTE format('GRANT %I TO %I', group_role, user_role);
|
||||
end loop;
|
||||
PERFORM @extschema@._CDB_Group_AddUsers_API(group_name, usernames);
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Removes users from a group
|
||||
CREATE OR REPLACE
|
||||
FUNCTION @extschema@.CDB_Group_RemoveUsers(group_name text, usernames text[])
|
||||
RETURNS VOID AS $$
|
||||
DECLARE
|
||||
group_role TEXT;
|
||||
user_role TEXT;
|
||||
username TEXT;
|
||||
BEGIN
|
||||
group_role := @extschema@._CDB_Group_GroupRole(group_name);
|
||||
foreach username in array usernames
|
||||
loop
|
||||
user_role := @extschema@._CDB_User_RoleFromUsername(username);
|
||||
EXECUTE format('REVOKE %I FROM %I', group_role, user_role);
|
||||
end loop;
|
||||
PERFORM @extschema@._CDB_Group_RemoveUsers_API(group_name, usernames);
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
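An illustrative flow for the functions above (a sketch only, not part of the diff): it assumes the extension is installed in the "cartodb" schema, the 'groups_api' configuration is in place so the editor sync calls succeed, and the group and user names are made up.

SELECT cartodb.CDB_Group_CreateGroup('analysts');                      -- creates a NOLOGIN group role
SELECT cartodb.CDB_Group_AddUsers('analysts', ARRAY['alice', 'bob']);  -- grants the group role to both user roles
SELECT cartodb.CDB_Group_RenameGroup('analysts', 'data_team');
SELECT cartodb.CDB_Group_DropGroup('data_team');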
----------------------------------
-- TABLE MANAGEMENT FUNCTIONS
--
-- Meant to be used by table owners.
----------------------------------

-- Grants table read permission to a group
CREATE OR REPLACE
FUNCTION @extschema@.CDB_Group_Table_GrantRead(group_name text, username text, table_name text)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    PERFORM @extschema@._CDB_Group_Table_GrantRead(group_name, username, table_name, true);
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_Table_GrantRead(group_name text, username text, table_name text, sync boolean)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    group_role := @extschema@._CDB_Group_GroupRole(group_name);
    EXECUTE format('GRANT USAGE ON SCHEMA %I TO %I', username, group_role);
    EXECUTE format('GRANT SELECT ON TABLE %I.%I TO %I', username, table_name, group_role);
    IF(sync) THEN
        PERFORM @extschema@._CDB_Group_Table_GrantPermission_API(group_name, username, table_name, 'r');
    END IF;
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-- Grants table write permission to a group
CREATE OR REPLACE
FUNCTION @extschema@.CDB_Group_Table_GrantReadWrite(group_name text, username text, table_name text)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    PERFORM @extschema@._CDB_Group_Table_GrantReadWrite(group_name, username, table_name, true);
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_Table_GrantReadWrite(group_name text, username text, table_name text, sync boolean)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    group_role := @extschema@._CDB_Group_GroupRole(group_name);
    EXECUTE format('GRANT USAGE ON SCHEMA %I TO %I', username, group_role);
    EXECUTE format('GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE %I.%I TO %I', username, table_name, group_role);
    PERFORM @extschema@._CDB_Group_TableSequences_Permission(group_name, username, table_name, true);
    IF(sync) THEN
        PERFORM @extschema@._CDB_Group_Table_GrantPermission_API(group_name, username, table_name, 'w');
    END IF;
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-- Granting and revoking permissions on sequences
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_TableSequences_Permission(group_name text, username text, table_name text, do_grant bool)
    RETURNS VOID AS $$
DECLARE
    column_name TEXT;
    sequence_name TEXT;
    group_role TEXT;
BEGIN
    group_role := @extschema@._CDB_Group_GroupRole(group_name);
    FOR column_name IN EXECUTE 'SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_CATALOG = current_database() AND TABLE_SCHEMA = $1 AND TABLE_NAME = $2 AND COLUMN_DEFAULT LIKE ''nextval%''' USING username, table_name
    LOOP
        EXECUTE format('SELECT PG_GET_SERIAL_SEQUENCE(''%I.%I'', ''%I'')', username, table_name, column_name) INTO sequence_name;
        IF sequence_name IS NOT NULL THEN
            IF do_grant THEN
                -- Here %s is needed since sequence_name has quotes
                EXECUTE format('GRANT USAGE, SELECT, UPDATE ON SEQUENCE %s TO %I', sequence_name, group_role);
            ELSE
                EXECUTE format('REVOKE ALL ON SEQUENCE %s FROM %I', sequence_name, group_role);
            END IF;
        END IF;
    END LOOP;
    RETURN;
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-- Revokes all permissions on a table from a group
CREATE OR REPLACE
FUNCTION @extschema@.CDB_Group_Table_RevokeAll(group_name text, username text, table_name text)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    PERFORM @extschema@._CDB_Group_Table_RevokeAll(group_name, username, table_name, true);
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_Table_RevokeAll(group_name text, username text, table_name text, sync boolean)
    RETURNS VOID AS $$
DECLARE
    group_role TEXT;
BEGIN
    group_role := @extschema@._CDB_Group_GroupRole(group_name);
    EXECUTE format('REVOKE ALL ON TABLE %I.%I FROM %I', username, table_name, group_role);
    PERFORM @extschema@._CDB_Group_TableSequences_Permission(group_name, username, table_name, false);
    IF(sync) THEN
        PERFORM @extschema@._CDB_Group_Table_RevokeAllPermission_API(group_name, username, table_name);
    END IF;
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-----------------------
-- Helper functions
-----------------------
-- Given a group name returns a role. group_name must be a valid PostgreSQL identifier. See http://www.postgresql.org/docs/9.2/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_GroupRole(group_name text)
    RETURNS TEXT AS $$
DECLARE
    group_role TEXT;
    prefix TEXT;
    max_length constant INTEGER := 63;
BEGIN
    prefix = format('%s_g_', @extschema@._CDB_Group_ShortDatabaseName());
    group_role := format('%s%s', prefix, group_name);
    IF LENGTH(group_role) > max_length
    THEN
        RAISE EXCEPTION 'Group name must be shorter. It can''t have more than % characters, but it is longer (%): %', max_length - LENGTH(prefix), length(group_name), group_name;
    END IF;
    RETURN group_role;
END
$$ LANGUAGE PLPGSQL STABLE PARALLEL SAFE;

-- Returns the first owner of the schema matching username. Organization user schemas must have exactly one owner.
CREATE OR REPLACE
FUNCTION @extschema@._CDB_User_RoleFromUsername(username text)
    RETURNS TEXT AS $$
DECLARE
    user_role TEXT;
BEGIN
    -- This was preferred, but non-superadmins won't get results
    -- SELECT SCHEMA_OWNER FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = $1 LIMIT 1'
    SELECT pg_get_userbyid(nspowner) FROM pg_namespace WHERE nspname = username INTO user_role;
    RETURN user_role;
END
$$ LANGUAGE PLPGSQL STABLE PARALLEL SAFE;

-- Database names are too long; we need a shorter version for composing role names
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_ShortDatabaseName()
    RETURNS TEXT AS $$
DECLARE
    short_database_name TEXT;
BEGIN
    SELECT md5(current_database()) INTO short_database_name;
    RETURN short_database_name;
END
$$ LANGUAGE PLPGSQL STABLE PARALLEL SAFE;
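A quick sketch of the naming scheme these helpers implement (illustrative only, assuming the extension schema is "cartodb"): the group role is the md5 of the current database name, a '_g_' separator, then the group name, and the total must fit in 63 characters.

-- For a database whose md5() hash is 'abc123...', 'analysts' maps to 'abc123..._g_analysts'
SELECT cartodb._CDB_Group_GroupRole('analysts');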
195
lib/sql/scripts-available/CDB_Groups_API.sql
Normal file
195
lib/sql/scripts-available/CDB_Groups_API.sql
Normal file
@ -0,0 +1,195 @@
----------------------------------
-- GROUP METADATA API FUNCTIONS
--
-- Meant to be used by CDB_Group_* functions to sync data with the editor.
-- Requires configuration parameter. Example: SELECT @extschema@.CDB_Conf_SetConf('groups_api', '{ "host": "127.0.0.1", "port": 3000, "timeout": 10, "username": "extension", "password": "elephant" }');
----------------------------------

-- TODO: delete this development cleanup before final merge
DROP FUNCTION IF EXISTS @extschema@.CDB_Group_AddMember(group_name text, username text);
DROP FUNCTION IF EXISTS @extschema@.CDB_Group_RemoveMember(group_name text, username text);
DROP FUNCTION IF EXISTS @extschema@._CDB_Group_AddMember_API(group_name text, username text);
DROP FUNCTION IF EXISTS @extschema@._CDB_Group_RemoveMember_API(group_name text, username text);

-- Sends the create group request
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_CreateGroup_API(group_name text, group_role text)
    RETURNS VOID AS
$$
  import string

  url = '/api/v1/databases/{0}/groups'
  body = '{ "name": "%s", "database_role": "%s" }' % (group_name, group_role)
  query = "select @extschema@._CDB_Group_API_Request('POST', '%s', '%s', '{200, 409}') as response_status" % (url, body)
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_DropGroup_API(group_name text)
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s' % (urllib.pathname2url(group_name))

  query = "select @extschema@._CDB_Group_API_Request('DELETE', '%s', '', '{204, 404}') as response_status" % url
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_RenameGroup_API(old_group_name text, new_group_name text, new_group_role text)
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s' % (urllib.pathname2url(old_group_name))
  body = '{ "name": "%s", "database_role": "%s" }' % (new_group_name, new_group_role)
  query = "select @extschema@._CDB_Group_API_Request('PUT', '%s', '%s', '{200, 409}') as response_status" % (url, body)
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_AddUsers_API(group_name text, usernames text[])
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s/users' % (urllib.pathname2url(group_name))
  body = "{ \"users\": [\"%s\"] }" % "\",\"".join(usernames)
  query = "select @extschema@._CDB_Group_API_Request('POST', '%s', '%s', '{200, 409}') as response_status" % (url, body)
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE SECURITY DEFINER;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_RemoveUsers_API(group_name text, usernames text[])
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s/users' % (urllib.pathname2url(group_name))
  body = "{ \"users\": [\"%s\"] }" % "\",\"".join(usernames)
  query = "select @extschema@._CDB_Group_API_Request('DELETE', '%s', '%s', '{200, 404}') as response_status" % (url, body)
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

DO LANGUAGE 'plpgsql' $$
BEGIN
    -- Needed for dropping type
    DROP FUNCTION IF EXISTS @extschema@._CDB_Group_API_Conf();
    DROP TYPE IF EXISTS @extschema@._CDB_Group_API_Params;
END
$$;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_Table_GrantPermission_API(group_name text, username text, table_name text, access text)
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s/permission/%s/tables/%s' % (urllib.pathname2url(group_name), username, table_name)
  body = '{ "access": "%s" }' % access
  query = "select @extschema@._CDB_Group_API_Request('PUT', '%s', '%s', '{200, 409}') as response_status" % (url, body)
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

DO LANGUAGE 'plpgsql' $$
BEGIN
    -- Needed for dropping type
    DROP FUNCTION IF EXISTS @extschema@._CDB_Group_API_Conf();
    DROP TYPE IF EXISTS @extschema@._CDB_Group_API_Params;
END
$$;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_Table_RevokeAllPermission_API(group_name text, username text, table_name text)
    RETURNS VOID AS
$$
  import string
  import urllib

  url = '/api/v1/databases/{0}/groups/%s/permission/%s/tables/%s' % (urllib.pathname2url(group_name), username, table_name)
  query = "select @extschema@._CDB_Group_API_Request('DELETE', '%s', '', '{200, 404}') as response_status" % url
  plpy.execute(query)
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE SECURITY DEFINER;

DO LANGUAGE 'plpgsql' $$
BEGIN
    -- Needed for dropping type
    DROP FUNCTION IF EXISTS @extschema@._CDB_Group_API_Conf();
    DROP TYPE IF EXISTS @extschema@._CDB_Group_API_Params;
END
$$;

CREATE TYPE @extschema@._CDB_Group_API_Params AS (
    host text,
    port int,
    timeout int,
    auth text
);

-- This must be explicitly extracted because "composite types are currently not supported".
-- See http://www.postgresql.org/docs/9.3/static/plpython-database.html.
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_API_Conf()
    RETURNS @extschema@._CDB_Group_API_Params AS
$$
  conf = plpy.execute("SELECT @extschema@.CDB_Conf_GetConf('groups_api') conf")[0]['conf']
  if conf is None:
    return None
  else:
    import json
    params = json.loads(conf)
    auth = 'Basic %s' % plpy.execute("SELECT @extschema@._CDB_Group_API_Auth('%s', '%s') as auth" % (params['username'], params['password']))[0]['auth']
    return { "host": params['host'], "port": params['port'], 'timeout': params['timeout'], 'auth': auth }
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_API_Auth(username text, password text)
    RETURNS TEXT AS
$$
  import base64
  return base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE;

-- url must contain a '%s' placeholder that will be replaced by current_database, for security reasons.
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Group_API_Request(method text, url text, body text, valid_return_codes int[])
    RETURNS int AS
$$
  import httplib

  params = plpy.execute("select c.host, c.port, c.timeout, c.auth from @extschema@._CDB_Group_API_Conf() c;")[0]
  if params['host'] is None:
    return None

  headers = { 'Authorization': params['auth'], 'Content-Type': 'application/json', 'X-Forwarded-Proto': 'https' }

  retry = 3

  last_err = None
  while retry > 0:
    try:
      client = SD['groups_api_client'] = httplib.HTTPConnection(params['host'], params['port'], False, params['timeout'])
      database_name = plpy.execute("select current_database();")[0]['current_database']
      client.request(method, url.format(database_name), body, headers)
      response = client.getresponse()
      assert response.status in valid_return_codes
      return response.status
    except Exception as err:
      retry -= 1
      last_err = err
      plpy.warning('Retrying after: ' + str(err))
      client = SD['groups_api_client'] = None

  if last_err is not None:
    plpy.error('Fatal Group API error: ' + str(last_err))
    raise last_err

  return None
$$ LANGUAGE 'plpythonu' VOLATILE PARALLEL UNSAFE;
revoke all on function @extschema@._CDB_Group_API_Request(text, text, text, int[]) from public;
46
lib/sql/scripts-available/CDB_HeadsTailsBins.sql
Normal file
46
lib/sql/scripts-available/CDB_HeadsTailsBins.sql
Normal file
@ -0,0 +1,46 @@
--
-- Determine the Heads/Tails classifications from a numeric array
--
-- @param in_array A numeric array of numbers to determine the best
--            bins based on the Heads/Tails method.
--
-- @param breaks The number of bins you want to find.
--
--

CREATE OR REPLACE FUNCTION @extschema@.CDB_HeadsTailsBins ( in_array NUMERIC[], breaks INT) RETURNS NUMERIC[] as $$
DECLARE
  element_count INT4;
  arr_mean numeric;
  i INT := 2;
  reply numeric[];
BEGIN
  -- get the total size of our row
  element_count := array_upper(in_array, 1) - array_lower(in_array, 1);
  -- ensure the ordering of in_array
  SELECT array_agg(e) INTO in_array FROM (SELECT unnest(in_array) e ORDER BY e) x;
  -- stop if no rows
  IF element_count IS NULL THEN
    RETURN NULL;
  END IF;
  -- stop if our breaks are more than our input array size
  IF element_count < breaks THEN
    RETURN in_array;
  END IF;

  -- get our mean value
  SELECT avg(v) INTO arr_mean FROM ( SELECT unnest(in_array) as v ) x;

  reply = Array[arr_mean];
  -- slice our bread
  LOOP
    IF i > breaks THEN EXIT; END IF;
    SELECT avg(e) INTO arr_mean FROM ( SELECT unnest(in_array) e) x WHERE e > reply[i-1];
    IF arr_mean IS NOT NULL THEN
      reply = array_append(reply, arr_mean);
    END IF;
    i := i+1;
  END LOOP;
  RETURN reply;
END;
$$ language plpgsql IMMUTABLE PARALLEL SAFE;
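A usage sketch (assuming the extension schema is "cartodb"; the data is made up): three Heads/Tails breaks over a heavily skewed array.

SELECT cartodb.CDB_HeadsTailsBins(
  ARRAY[1, 2, 3, 4, 5, 10, 50, 100, 1000]::numeric[],  -- skewed input values
  3                                                    -- number of breaks
);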
177
lib/sql/scripts-available/CDB_Helper.sql
Normal file
177
lib/sql/scripts-available/CDB_Helper.sql
Normal file
@ -0,0 +1,177 @@
-- Create a sequence that belongs to the schema of the extension.
-- It will be used to generate unique identifiers within the


-- UTF8 safe and length aware. Find a unique identifier with a given prefix
-- and/or suffix and within a schema. If a schema is not specified, the identifier
-- is guaranteed to be unique for all schemas.
CREATE OR REPLACE FUNCTION @extschema@._CDB_Unique_Identifier(prefix TEXT, relname TEXT, suffix TEXT, schema TEXT DEFAULT NULL)
RETURNS TEXT
AS $$
DECLARE
  maxlen CONSTANT INTEGER := 63;

  rec RECORD;
  usedspace INTEGER;
  ident TEXT;
  origident TEXT;
  candrelname TEXT;

  i INTEGER;
BEGIN
  -- Accounts for the XXXX incremental suffix in case the identifier is taken
  usedspace := 4;
  usedspace := usedspace + coalesce(octet_length(prefix), 0);
  usedspace := usedspace + coalesce(octet_length(suffix), 0);

  candrelname := @extschema@._CDB_Octet_Truncate(relname, maxlen - usedspace);

  IF candrelname = '' THEN
    PERFORM @extschema@._CDB_Error('prefixes are too long to generate a valid identifier', '_CDB_Unique_Identifier');
  END IF;

  ident := coalesce(prefix, '') || candrelname || coalesce(suffix, '');

  i := 0;
  origident := ident;

  WHILE i < 10000 LOOP
    IF schema IS NOT NULL THEN
      SELECT c.relname, n.nspname
      INTO rec
      FROM pg_class c
      JOIN pg_namespace n ON c.relnamespace = n.oid
      WHERE c.relname = ident
      AND n.nspname = schema;
    ELSE
      SELECT c.relname, n.nspname
      INTO rec
      FROM pg_class c
      JOIN pg_namespace n ON c.relnamespace = n.oid
      WHERE c.relname = ident;
    END IF;

    IF NOT FOUND THEN
      RETURN ident;
    END IF;

    ident := origident || i;
    i := i + 1;
  END LOOP;

  PERFORM @extschema@._CDB_Error('looping too far', '_CDB_Unique_Identifier');
END;
$$ LANGUAGE 'plpgsql' VOLATILE PARALLEL UNSAFE;


-- UTF8 safe and length aware. Find a unique identifier for a column with a given prefix
-- and/or suffix based on colname and within a relation specified via reloid.
CREATE OR REPLACE FUNCTION @extschema@._CDB_Unique_Column_Identifier(prefix TEXT, colname TEXT, suffix TEXT, reloid REGCLASS)
RETURNS TEXT
AS $$
DECLARE
  maxlen CONSTANT INTEGER := 63;

  rec RECORD;
  candcolname TEXT;
  usedspace INTEGER;
  ident TEXT;
  origident TEXT;

  i INTEGER;
BEGIN
  -- Accounts for the XXXX incremental suffix in case the identifier is taken
  usedspace := 4;
  usedspace := usedspace + coalesce(octet_length(prefix), 0);
  usedspace := usedspace + coalesce(octet_length(suffix), 0);

  candcolname := @extschema@._CDB_Octet_Truncate(colname, maxlen - usedspace);

  IF candcolname = '' THEN
    PERFORM @extschema@._CDB_Error('prefixes are too long to generate a valid identifier', '_CDB_Unique_Column_Identifier');
  END IF;

  ident := coalesce(prefix, '') || candcolname || coalesce(suffix, '');

  i := 0;
  origident := ident;

  WHILE i < 10000 LOOP
    SELECT a.attname
    INTO rec
    FROM pg_class c
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE NOT a.attisdropped
    AND a.attnum > 0
    AND c.oid = reloid
    AND a.attname = ident;

    IF NOT FOUND THEN
      RETURN ident;
    END IF;

    ident := origident || i;
    i := i + 1;
  END LOOP;

  PERFORM @extschema@._CDB_Error('looping too far', '_CDB_Unique_Column_Identifier');
END;
$$ LANGUAGE 'plpgsql' VOLATILE PARALLEL SAFE;


-- Truncates a given string to a max_octets octets taking care
-- not to leave characters in half. UTF8 safe.
CREATE OR REPLACE FUNCTION @extschema@._CDB_Octet_Truncate(string TEXT, max_octets INTEGER)
RETURNS TEXT
AS $$
DECLARE
  extcharlen CONSTANT INTEGER := octet_length('ñ');

  expected INTEGER;
  examined INTEGER;
  strlen INTEGER;

  i INTEGER;
BEGIN

  IF max_octets <= 0 THEN
    RETURN '';
  ELSIF max_octets >= octet_length(string) THEN
    RETURN string;
  END IF;

  strlen := char_length(string);

  expected := char_length(string);
  examined := octet_length(string);

  IF expected = examined THEN
    RETURN left(string, max_octets);
  END IF;

  i := max_octets / extcharlen;

  WHILE octet_length(left(string, i)) <= max_octets LOOP
    i := i + 1;
  END LOOP;

  RETURN left(string, (i - 1));
END;
$$ LANGUAGE 'plpgsql' IMMUTABLE PARALLEL SAFE;


-- Checks if a given text representing a qualified or unqualified table name (relation)
-- actually exists in the database. It is meant to be used as a guard for other function/queries.
CREATE OR REPLACE FUNCTION @extschema@._CDB_Table_Exists(table_name_with_optional_schema TEXT)
RETURNS bool
AS $$
DECLARE
  table_exists bool := false;
BEGIN
  table_exists := EXISTS(SELECT * FROM pg_class WHERE table_name_with_optional_schema::regclass::oid = oid AND relkind = 'r');
  RETURN table_exists;
EXCEPTION
  WHEN invalid_schema_name OR undefined_table THEN
    RETURN false;
END;
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
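Two illustrative calls for the helpers above (assuming the extension schema is "cartodb"; the relation and schema names are hypothetical): generating a collision-free relation name, and truncating a UTF8 string by octets without splitting a character.

SELECT cartodb._CDB_Unique_Identifier(NULL, 'very_long_imported_table_name', NULL, 'my_schema');
SELECT cartodb._CDB_Octet_Truncate('ñandú', 3);  -- returns 'ña' (cuts at a character boundary, not mid-byte)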
148
lib/sql/scripts-available/CDB_Hexagon.sql
Normal file
148
lib/sql/scripts-available/CDB_Hexagon.sql
Normal file
@ -0,0 +1,148 @@
-- Return a Hexagon with given center and side (or maximal radius)
CREATE OR REPLACE FUNCTION @extschema@.CDB_MakeHexagon(center GEOMETRY, radius FLOAT8)
RETURNS GEOMETRY
AS $$
  SELECT @postgisschema@.ST_MakePolygon(@postgisschema@.ST_MakeLine(geom))
  FROM
  (
    SELECT (@postgisschema@.ST_DumpPoints(@postgisschema@.ST_ExteriorRing(@postgisschema@.ST_Buffer($1, $2, 3)))).*
  ) as points
  WHERE path[1] % 2 != 0
$$ LANGUAGE 'sql' IMMUTABLE STRICT PARALLEL SAFE;


-- In older versions of the extension, CDB_HexagonGrid had a different signature
DROP FUNCTION IF EXISTS @extschema@.CDB_HexagonGrid(GEOMETRY, FLOAT8, GEOMETRY);

--
-- Fill the given extent with a hexagonal coverage
--
-- @param ext Extent to fill. Only hexagons with center point falling
--            inside the extent (or at the lower or leftmost edge) will
--            be emitted. The returned hexagons will have the same SRID
--            as this extent.
--
-- @param side Side measure for the hexagon.
--             Maximum diameter will be 2 * side.
--
-- @param origin Optional origin to allow for exact tiling.
--               If omitted the origin will be 0,0.
--               The parameter is checked for having the same SRID
--               as the extent.
--
-- @param maxcells Optional maximum number of grid cells to generate;
--                 if the grid requires more cells to cover the extent
--                 an exception will occur.
----
-- DROP FUNCTION IF EXISTS CDB_HexagonGrid(ext GEOMETRY, side FLOAT8);
CREATE OR REPLACE FUNCTION @extschema@.CDB_HexagonGrid(ext GEOMETRY, side FLOAT8, origin GEOMETRY DEFAULT NULL, maxcells INTEGER DEFAULT 512*512)
RETURNS SETOF GEOMETRY
AS $$
DECLARE
  h GEOMETRY; -- hexagon
  c GEOMETRY; -- center point
  rec RECORD;
  hstep FLOAT8; -- horizontal step
  vstep FLOAT8; -- vertical step
  vstart FLOAT8;
  vstartary FLOAT8[];
  vstartidx INTEGER;
  hskip BIGINT;
  hstart FLOAT8;
  hend FLOAT8;
  vend FLOAT8;
  xoff FLOAT8;
  yoff FLOAT8;
  xgrd FLOAT8;
  ygrd FLOAT8;
  srid INTEGER;
BEGIN

  -- (diagram placeholder: the original comment sketched the hexagon lattice,
  --  with hstep as the horizontal distance and vstep as the vertical distance
  --  between hexagon centers)
  RAISE DEBUG 'Side: %', side;

  vstep := side * sqrt(3); -- x 2 ?
  hstep := side * 1.5;

  RAISE DEBUG 'vstep: %', vstep;
  RAISE DEBUG 'hstep: %', hstep;

  srid := ST_SRID(ext);

  xoff := 0;
  yoff := 0;

  IF origin IS NOT NULL THEN
    IF @postgisschema@.ST_SRID(origin) != srid THEN
      RAISE EXCEPTION 'SRID mismatch between extent (%) and origin (%)', srid, ST_SRID(origin);
    END IF;
    xoff := @postgisschema@.ST_X(origin);
    yoff := @postgisschema@.ST_Y(origin);
  END IF;

  RAISE DEBUG 'X offset: %', xoff;
  RAISE DEBUG 'Y offset: %', yoff;

  xgrd := side * 0.5;
  ygrd := ( side * sqrt(3) ) / 2.0;
  RAISE DEBUG 'X grid size: %', xgrd;
  RAISE DEBUG 'Y grid size: %', ygrd;

  -- Tweak horizontal start on hstep*2 grid from origin
  hskip := ceil((@postgisschema@.ST_XMin(ext)-xoff)/hstep);
  RAISE DEBUG 'hskip: %', hskip;
  hstart := xoff + hskip*hstep;
  RAISE DEBUG 'hstart: %', hstart;

  -- Tweak vertical start on hstep grid from origin
  vstart := yoff + ceil((@postgisschema@.ST_Ymin(ext)-yoff)/vstep)*vstep;
  RAISE DEBUG 'vstart: %', vstart;

  hend := @postgisschema@.ST_XMax(ext);
  vend := @postgisschema@.ST_YMax(ext);

  IF vstart - (vstep/2.0) < @postgisschema@.ST_YMin(ext) THEN
    vstartary := ARRAY[ vstart + (vstep/2.0), vstart ];
  ELSE
    vstartary := ARRAY[ vstart - (vstep/2.0), vstart ];
  END IF;

  IF maxcells IS NOT NULL AND maxcells > 0 THEN
    IF CEIL((CEIL((vend-vstart)/(vstep/2.0)) * CEIL((hend-hstart)/(hstep*2.0/3.0)))/3.0)::integer > maxcells THEN
      RAISE EXCEPTION 'The requested grid is too big to be rendered';
    END IF;
  END IF;

  vstartidx := abs(hskip)%2;

  RAISE DEBUG 'vstartary: % : %', vstartary[1], vstartary[2];
  RAISE DEBUG 'vstartidx: %', vstartidx;

  c := @postgisschema@.ST_SetSRID(@postgisschema@.ST_MakePoint(hstart, vstartary[vstartidx+1]), srid);
  h := @postgisschema@.ST_SnapToGrid(@extschema@.CDB_MakeHexagon(c, side), xoff, yoff, xgrd, ygrd);
  vstartidx := (vstartidx + 1) % 2;
  WHILE @postgisschema@.ST_X(c) < hend LOOP -- over X
    --RAISE DEBUG 'X loop starts, center point: %', ST_AsText(c);
    WHILE @postgisschema@.ST_Y(c) < vend LOOP -- over Y
      --RAISE DEBUG 'Center: %', ST_AsText(c);
      --h := ST_SnapToGrid(CDB_MakeHexagon(c, side), xoff, yoff, xgrd, ygrd);
      RETURN NEXT h;
      h := @postgisschema@.ST_SnapToGrid(ST_Translate(h, 0, vstep), xoff, yoff, xgrd, ygrd);
      c := @postgisschema@.ST_Translate(c, 0, vstep); -- TODO: drop ?
    END LOOP;
    -- TODO: translate h directly ...
    c := @postgisschema@.ST_SetSRID(@postgisschema@.ST_MakePoint(ST_X(c)+hstep, vstartary[vstartidx+1]), srid);
    h := @postgisschema@.ST_SnapToGrid(@extschema@.CDB_MakeHexagon(c, side), xoff, yoff, xgrd, ygrd);
    vstartidx := (vstartidx + 1) % 2;
  END LOOP;

  RETURN;
END
$$ LANGUAGE 'plpgsql' IMMUTABLE PARALLEL SAFE;
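A usage sketch (assuming the extension schema is "cartodb" and PostGIS is on the search path; the extent is arbitrary): counting the hexagons of side 1000 that cover a small web-mercator extent.

SELECT count(*)
FROM cartodb.CDB_HexagonGrid(
  ST_MakeEnvelope(0, 0, 20000, 20000, 3857),  -- extent to fill
  1000                                        -- hexagon side
) AS hex;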
346
lib/sql/scripts-available/CDB_JenksBins.sql
Normal file
346
lib/sql/scripts-available/CDB_JenksBins.sql
Normal file
@ -0,0 +1,346 @@
--
-- Determine the Jenks classifications from a numeric array
--
-- @param in_array A numeric array of numbers to determine the best
--            bins based on the Jenks method.
--
-- @param breaks The number of bins you want to find.
--
-- @param iterations The number of different starting positions to test.
--
-- @param invert Optional whether to return the top of each bin (default)
--               or the bottom. BOOLEAN, default=FALSE.
--
--

CREATE OR REPLACE FUNCTION @extschema@.CDB_JenksBins(in_array NUMERIC[], breaks INT, iterations INT DEFAULT 0, invert BOOLEAN DEFAULT FALSE)
RETURNS NUMERIC[] as
$$
DECLARE
    in_matrix NUMERIC[][];
    in_unique_count BIGINT;

    shuffles INT;
    arr_mean NUMERIC;
    sdam NUMERIC;

    i INT;
    bot INT;
    top INT;

    tops INT[];
    classes INT[][];
    j INT := 1;
    curr_result NUMERIC[];
    best_result NUMERIC[];
    seedtarget TEXT;

BEGIN
    -- We clean the input array (remove NULLs) and create 2 arrays
    -- [1] contains the unique values in in_array
    -- [2] contains the number of appearances of those unique values
    SELECT ARRAY[array_agg(value), array_agg(count)] FROM
    (
        SELECT value, count(1)::numeric as count
        FROM unnest(in_array) AS value
        WHERE value is NOT NULL
        GROUP BY value
        ORDER BY value
    ) __clean_array_q INTO in_matrix;

    -- Get the number of unique values
    in_unique_count := array_length(in_matrix[1:1], 2);

    IF in_unique_count IS NULL THEN
        RETURN NULL;
    END IF;

    IF in_unique_count <= breaks THEN
        -- There aren't enough distinct values for the requested breaks
        RETURN ARRAY(Select unnest(in_matrix[1:1])) _a;
    END IF;

    -- If not declared explicitly we iterate based on the length of the array
    IF iterations < 1 THEN
        -- This is based on a 'looks fine' heuristic
        iterations := log(in_unique_count)::integer + 1;
    END IF;

    -- We set the number of shuffles per iteration as the number of unique values but
    -- this is just another 'looks fine' heuristic
    shuffles := in_unique_count;

    -- Get the mean value of the whole vector (already ignores NULLs)
    SELECT avg(v) INTO arr_mean FROM ( SELECT unnest(in_array) as v ) x;

    -- Calculate the sum of squared deviations from the array mean (SDAM).
    SELECT sum(((arr_mean - v)^2) * w) INTO sdam FROM (
        SELECT unnest(in_matrix[1:1]) as v, unnest(in_matrix[2:2]) as w
    ) x;

    -- To start, we create ranges with approximately the same amount of different values
    top := 0;
    i := 1;
    LOOP
        bot := top + 1;
        top := ROUND(i * in_unique_count::numeric / breaks::NUMERIC);

        IF i = 1 THEN
            classes = ARRAY[ARRAY[bot,top]];
        ELSE
            classes = ARRAY_CAT(classes, ARRAY[bot,top]);
        END IF;

        i := i + 1;
        IF i > breaks THEN EXIT; END IF;
    END LOOP;

    best_result = @extschema@.CDB_JenksBinsIteration(in_matrix, breaks, classes, invert, sdam, shuffles);

    --set the seed so we can ensure the same results
    SELECT setseed(0.4567) INTO seedtarget;
    --loop through random starting positions
    LOOP
        IF j > iterations-1 THEN EXIT; END IF;
        i = 1;
        tops = ARRAY[in_unique_count];
        LOOP
            IF i = breaks THEN EXIT; END IF;
            SELECT array_agg(distinct e) INTO tops FROM (
                SELECT unnest(array_cat(tops, ARRAY[trunc(random() * in_unique_count::float8)::int + 1])) as e ORDER BY e
            ) x;
            i = array_length(tops, 1);
        END LOOP;
        top := 0;
        i = 1;
        LOOP
            bot := top + 1;
            top = tops[i];
            IF i = 1 THEN
                classes = ARRAY[ARRAY[bot,top]];
            ELSE
                classes = ARRAY_CAT(classes, ARRAY[bot,top]);
            END IF;

            i := i+1;
            IF i > breaks THEN EXIT; END IF;
        END LOOP;

        curr_result = @extschema@.CDB_JenksBinsIteration(in_matrix, breaks, classes, invert, sdam, shuffles);

        IF curr_result[1] > best_result[1] THEN
            best_result = curr_result;
        END IF;

        j = j+1;
    END LOOP;

    RETURN (best_result)[2:array_upper(best_result, 1)];
END;
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL RESTRICTED;


--
-- Perform a single iteration of the Jenks classification
--
-- Returns an array with:
--  - First element: gvf
--  - Second to 2+n: Category limits
DROP FUNCTION IF EXISTS @extschema@.CDB_JenksBinsIteration ( in_matrix NUMERIC[], breaks INT, classes INT[], invert BOOLEAN, element_count INT4, arr_mean NUMERIC, max_search INT); -- Old signature

CREATE OR REPLACE FUNCTION @extschema@.CDB_JenksBinsIteration ( in_matrix NUMERIC[], breaks INT, classes INT[], invert BOOLEAN, sdam NUMERIC, max_search INT DEFAULT 50) RETURNS NUMERIC[] as $$
DECLARE
    i INT;
    iterations INT = 0;

    side INT := 2;

    gvf numeric := 0.0;
    new_gvf numeric;
    arr_gvf numeric[];
    arr_avg numeric[];
    class_avg numeric;
    class_dev numeric;

    class_max_i INT = 0;
    class_min_i INT = 0;
    dev_max numeric;
    dev_min numeric;

    best_classes INT[] = classes;
    best_gvf numeric[];
    best_avg numeric[];
    move_elements INT = 1;

    reply numeric[];

BEGIN

    -- We fill the arrays with the initial values
    i = 0;
    LOOP
        IF i = breaks THEN EXIT; END IF;
        i = i + 1;

        -- Get class mean
        SELECT (sum(v * w) / sum(w)) INTO class_avg FROM (
            SELECT unnest(in_matrix[1:1][classes[i][1]:classes[i][2]]) as v,
                   unnest(in_matrix[2:2][classes[i][1]:classes[i][2]]) as w
        ) x;

        -- Get class deviation
        SELECT sum((class_avg - v)^2 * w) INTO class_dev FROM (
            SELECT unnest(in_matrix[1:1][classes[i][1]:classes[i][2]]) as v,
                   unnest(in_matrix[2:2][classes[i][1]:classes[i][2]]) as w
        ) x;


        IF i = 1 THEN
            arr_avg = ARRAY[class_avg];
            arr_gvf = ARRAY[class_dev];
        ELSE
            arr_avg = array_append(arr_avg, class_avg);
            arr_gvf = array_append(arr_gvf, class_dev);
        END IF;
    END LOOP;

    -- We copy the values to avoid recalculation when a failure happens
    best_avg = arr_avg;
    best_gvf = arr_gvf;

    iterations = 0;
    LOOP
        IF iterations = max_search THEN EXIT; END IF;
        iterations = iterations + 1;

        -- calculate our new GVF
        SELECT sdam - sum(e) INTO new_gvf FROM ( SELECT unnest(arr_gvf) as e ) x;

        -- Check if any improvement was made
        IF new_gvf <= gvf THEN
            -- If we were moving too many elements, go back and move less
            IF move_elements <= 2 OR class_max_i = class_min_i THEN
                EXIT;
            END IF;

            move_elements = GREATEST(move_elements / 8, 1);

            -- Rollback from saved statuses
            classes = best_classes;
            new_gvf = gvf;

            i = class_min_i;
            LOOP
                arr_avg[i] = best_avg[i];
                arr_gvf[i] = best_gvf[i];

                IF i = class_max_i THEN EXIT; END IF;
                i = i + 1;
            END LOOP;
        END IF;

        -- We search for the classes with the min and max deviation
        i = 1;
        class_min_i = 1;
        class_max_i = 1;
        dev_max = arr_gvf[1];
        dev_min = arr_gvf[1];
        LOOP
            IF i = breaks THEN EXIT; END IF;
            i = i + 1;

            IF arr_gvf[i] < dev_min THEN
                dev_min = arr_gvf[i];
                class_min_i = i;
            ELSE
                IF arr_gvf[i] > dev_max THEN
                    dev_max = arr_gvf[i];
                    class_max_i = i;
                END IF;
            END IF;
        END LOOP;


        -- Save best values for comparison and output
        gvf = new_gvf;
        best_classes = classes;

        -- Limit the moved elements as to not remove everything from class_max_i
        move_elements = LEAST(move_elements, classes[class_max_i][2] - classes[class_max_i][1]);

        -- Move `move_elements` from class_max_i to class_min_i
        IF class_min_i < class_max_i THEN
            i := class_min_i;
            LOOP
                IF i = class_max_i THEN EXIT; END IF;
                classes[i][2] = classes[i][2] + move_elements;
                i := i + 1;
            END LOOP;

            i := class_max_i;
            LOOP
                IF i = class_min_i THEN EXIT; END IF;
                classes[i][1] = classes[i][1] + move_elements;
                i := i - 1;
            END LOOP;
        ELSE
            i := class_min_i;
            LOOP
                IF i = class_max_i THEN EXIT; END IF;
                classes[i][1] = classes[i][1] - move_elements;
                i := i - 1;
            END LOOP;

            i := class_max_i;
            LOOP
                IF i = class_min_i THEN EXIT; END IF;
                classes[i][2] = classes[i][2] - move_elements;
                i := i + 1;
            END LOOP;
        END IF;

        -- Recalculate avg and deviation ONLY for the affected classes
        i = LEAST(class_min_i, class_max_i);
        class_max_i = GREATEST(class_min_i, class_max_i);
        class_min_i = i;
        LOOP
            SELECT (sum(v * w) / sum(w)) INTO class_avg FROM (
                SELECT unnest(in_matrix[1:1][classes[i][1]:classes[i][2]]) as v,
                       unnest(in_matrix[2:2][classes[i][1]:classes[i][2]]) as w
            ) x;

            SELECT sum((class_avg - v)^2 * w) INTO class_dev FROM (
                SELECT unnest(in_matrix[1:1][classes[i][1]:classes[i][2]]) as v,
                       unnest(in_matrix[2:2][classes[i][1]:classes[i][2]]) as w
            ) x;

            -- Save status (in case it's needed for rollback) and store the new one
            best_avg[i] = arr_avg[i];
            arr_avg[i] = class_avg;

            best_gvf[i] = arr_gvf[i];
            arr_gvf[i] = class_dev;

            IF i = class_max_i THEN EXIT; END IF;
            i = i + 1;
        END LOOP;

        move_elements = move_elements * 2;

    END LOOP;

    i = 1;
    LOOP
        IF invert = TRUE THEN
            side = 1; --default returns bottom side of breaks, invert returns top side
        END IF;
        reply = array_append(reply, unnest(in_matrix[1:1][best_classes[i][side]:best_classes[i][side]]));
        i = i+1;
        IF i > breaks THEN EXIT; END IF;
    END LOOP;

    reply = array_prepend(gvf, reply);
    RETURN reply;

END;
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
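A usage sketch (assuming the extension schema is "cartodb"; sample data made up): three Jenks breaks over a small array, with the default iteration count and default (top-of-bin) output.

SELECT cartodb.CDB_JenksBins(
  ARRAY[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]::numeric[],
  3
);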
17
lib/sql/scripts-available/CDB_LatLng.sql
Normal file
17
lib/sql/scripts-available/CDB_LatLng.sql
Normal file
@ -0,0 +1,17 @@
--
-- Create a valid GEOMETRY in 4326 from a lat/lng pair
--
-- @param lat A numeric latitude value.
--
-- @param lng A numeric longitude value.
--
--

CREATE OR REPLACE FUNCTION @extschema@.CDB_LatLng (lat NUMERIC, lng NUMERIC) RETURNS @postgisschema@.geometry as $$
  SELECT @postgisschema@.ST_SetSRID(@postgisschema@.ST_MakePoint(lng,lat), 4326);
$$ language SQL IMMUTABLE PARALLEL SAFE;

CREATE OR REPLACE FUNCTION @extschema@.CDB_LatLng (lat FLOAT8, lng FLOAT8) RETURNS @postgisschema@.geometry as $$
  SELECT @postgisschema@.ST_SetSRID(@postgisschema@.ST_MakePoint(lng,lat), 4326);
$$ language SQL IMMUTABLE PARALLEL SAFE;
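A usage sketch (assumed "cartodb" schema): note the argument order is latitude first, longitude second, and the result is an EPSG:4326 point.

SELECT ST_AsText(cartodb.CDB_LatLng(40.4168, -3.7038));  -- POINT(-3.7038 40.4168)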
27
lib/sql/scripts-available/CDB_Math.sql
Normal file
27
lib/sql/scripts-available/CDB_Math.sql
Normal file
@ -0,0 +1,27 @@
-- CartoDB Math SQL functions


-- Mode
-- https://wiki.postgresql.org/wiki/Aggregate_Mode

CREATE OR REPLACE FUNCTION @extschema@._CDB_Math_final_mode(anyarray)
  RETURNS anyelement AS
$BODY$
    SELECT a
    FROM unnest($1) a
    GROUP BY 1
    ORDER BY COUNT(1) DESC, 1
    LIMIT 1;
$BODY$
LANGUAGE 'sql' IMMUTABLE PARALLEL SAFE;

DROP AGGREGATE IF EXISTS @extschema@.CDB_Math_Mode(anyelement);

CREATE AGGREGATE @extschema@.CDB_Math_Mode(anyelement) (
  SFUNC=array_append,
  STYPE=anyarray,
  FINALFUNC=@extschema@._CDB_Math_final_mode,
  PARALLEL = SAFE,
  INITCOND='{}'
);
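A usage sketch of the aggregate (assumed "cartodb" schema; toy data): the most frequent value wins, with ties broken by the ORDER BY in the final function.

SELECT cartodb.CDB_Math_Mode(v)
FROM (VALUES (1), (2), (2), (3)) AS t(v);  -- returns 2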
171
lib/sql/scripts-available/CDB_Organizations.sql
Normal file
171
lib/sql/scripts-available/CDB_Organizations.sql
Normal file
@ -0,0 +1,171 @@
CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Member_Group_Role_Member_Name()
    RETURNS TEXT
AS $$
    SELECT 'cdb_org_member'::text || '_' || md5(current_database());
$$
LANGUAGE SQL STABLE PARALLEL SAFE;

DO LANGUAGE 'plpgsql' $$
DECLARE
    cdb_org_member_role_name TEXT;
BEGIN
    cdb_org_member_role_name := @extschema@.CDB_Organization_Member_Group_Role_Member_Name();
    IF NOT EXISTS ( SELECT * FROM pg_roles WHERE rolname= cdb_org_member_role_name )
    THEN
        EXECUTE 'CREATE ROLE "' || cdb_org_member_role_name || '" NOLOGIN;';
    END IF;
END
$$;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Create_Member(role_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'GRANT "' || @extschema@.CDB_Organization_Member_Group_Role_Member_Name() || '" TO "' || role_name || '"';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-------------------------------------------------------------------------------
-- Administrator
-------------------------------------------------------------------------------
CREATE OR REPLACE
FUNCTION @extschema@._CDB_Organization_Admin_Role_Name()
    RETURNS TEXT
AS $$
    SELECT current_database() || '_a'::text;
$$
LANGUAGE SQL STABLE PARALLEL SAFE;

-- Administrator role creation on extension install
DO LANGUAGE 'plpgsql' $$
DECLARE
    cdb_org_admin_role_name TEXT;
BEGIN
    cdb_org_admin_role_name := @extschema@._CDB_Organization_Admin_Role_Name();
    IF NOT EXISTS ( SELECT * FROM pg_roles WHERE rolname= cdb_org_admin_role_name )
    THEN
        EXECUTE format('CREATE ROLE %I CREATEROLE NOLOGIN;', cdb_org_admin_role_name);
    END IF;
END
$$;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_AddAdmin(username text)
    RETURNS void
AS $$
DECLARE
    cdb_user_role TEXT;
    cdb_admin_role TEXT;
BEGIN
    cdb_admin_role := @extschema@._CDB_Organization_Admin_Role_Name();
    cdb_user_role := @extschema@._CDB_User_RoleFromUsername(username);
    EXECUTE format('GRANT %I TO %I WITH ADMIN OPTION', cdb_admin_role, cdb_user_role);
    -- CREATEROLE is not inherited, and is needed for user creation
    EXECUTE format('ALTER ROLE %I CREATEROLE', cdb_user_role);
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_RemoveAdmin(username text)
    RETURNS void
AS $$
DECLARE
    cdb_user_role TEXT;
    cdb_admin_role TEXT;
BEGIN
    cdb_admin_role := @extschema@._CDB_Organization_Admin_Role_Name();
    cdb_user_role := @extschema@._CDB_User_RoleFromUsername(username);
    EXECUTE format('ALTER ROLE %I NOCREATEROLE', cdb_user_role);
    EXECUTE format('REVOKE %I FROM %I', cdb_admin_role, cdb_user_role);
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

-------------------------------------------------------------------------------
-- Sharing tables
-------------------------------------------------------------------------------
CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Add_Table_Read_Permission(from_schema text, table_name text, to_role_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'GRANT USAGE ON SCHEMA "' || from_schema || '" TO "' || to_role_name || '"';
    EXECUTE 'GRANT SELECT ON "' || from_schema || '"."' || table_name || '" TO "' || to_role_name || '"';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Add_Table_Organization_Read_Permission(from_schema text, table_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'SELECT @extschema@.CDB_Organization_Add_Table_Read_Permission(''' || from_schema || ''', ''' || table_name || ''', ''' || @extschema@.CDB_Organization_Member_Group_Role_Member_Name() || ''');';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@._CDB_Organization_Get_Table_Sequences(from_schema text, table_name text)
    RETURNS SETOF TEXT
AS $$
BEGIN
    RETURN QUERY EXECUTE 'SELECT
        quote_ident(n.nspname) || ''.'' || quote_ident(c.relname)
    FROM
        pg_depend d
        JOIN pg_class c ON d.objid = c.oid
        JOIN pg_namespace n ON c.relnamespace = n.oid
    WHERE
        d.refobjsubid > 0 AND
        d.classid = ''pg_class''::regclass AND
        c.relkind = ''S''::"char" AND
        d.refobjid = (''' || quote_ident(from_schema) || '.' || quote_ident(table_name) ||''')::regclass';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Add_Table_Read_Write_Permission(from_schema text, table_name text, to_role_name text)
    RETURNS void
AS $$
DECLARE
    sequence_name TEXT;
BEGIN
    EXECUTE 'GRANT USAGE ON SCHEMA "' || from_schema || '" TO "' || to_role_name || '"';
    EXECUTE 'GRANT SELECT, INSERT, UPDATE, DELETE ON "' || from_schema || '"."' || table_name || '" TO "' || to_role_name || '"';

    FOR sequence_name IN SELECT * FROM @extschema@._CDB_Organization_Get_Table_Sequences(from_schema, table_name) LOOP
        EXECUTE 'GRANT USAGE, SELECT ON SEQUENCE ' || sequence_name || ' TO "' || to_role_name || '"';
    END LOOP;
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Add_Table_Organization_Read_Write_Permission(from_schema text, table_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'SELECT @extschema@.CDB_Organization_Add_Table_Read_Write_Permission(''' || from_schema || ''', ''' || table_name || ''', ''' || @extschema@.CDB_Organization_Member_Group_Role_Member_Name() || ''');';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;


CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Remove_Access_Permission(from_schema text, table_name text, to_role_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'REVOKE ALL PRIVILEGES ON TABLE "' || from_schema || '"."' || table_name || '" FROM "' || to_role_name || '"';
    -- EXECUTE 'REVOKE USAGE ON SCHEMA ' || from_schema || ' FROM "' || to_role_name || '"';
    -- We need to revoke usage on schema only if we are revoking privileges from the last table where to_role_name has
    -- any permission granted within the schema from_schema
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;

CREATE OR REPLACE
FUNCTION @extschema@.CDB_Organization_Remove_Organization_Access_Permission(from_schema text, table_name text)
    RETURNS void
AS $$
BEGIN
    EXECUTE 'SELECT @extschema@.CDB_Organization_Remove_Access_Permission(''' || from_schema || ''', ''' || table_name || ''', ''' || @extschema@.CDB_Organization_Member_Group_Role_Member_Name() || ''');';
END
$$ LANGUAGE PLPGSQL VOLATILE PARALLEL UNSAFE;
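An illustrative sharing flow (assumed "cartodb" schema; the user schema, table, and role names are hypothetical and require an organization database set up with these functions): a table owner shares a table with the whole organization, grants write access to one member role, then revokes it.

SELECT cartodb.CDB_Organization_Add_Table_Organization_Read_Permission('alice', 'places');
SELECT cartodb.CDB_Organization_Add_Table_Read_Write_Permission('alice', 'places', 'bob_role');
SELECT cartodb.CDB_Organization_Remove_Access_Permission('alice', 'places', 'bob_role');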
1070
lib/sql/scripts-available/CDB_Overviews.sql
Normal file
1070
lib/sql/scripts-available/CDB_Overviews.sql
Normal file
File diff suppressed because it is too large
173
lib/sql/scripts-available/CDB_OverviewsSupport.sql
Normal file
173
lib/sql/scripts-available/CDB_OverviewsSupport.sql
Normal file
@ -0,0 +1,173 @@
|
||||
-- Auxiliary overviews FUNCTIONS
|
||||
|
||||
-- Maximum zoom level for which overviews may be created
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_MaxOverviewLevel()
|
||||
RETURNS INTEGER
|
||||
AS $$
|
||||
BEGIN
|
||||
-- Zoom level will be limited so that both tile coordinates
|
||||
-- and gridding coordinates within a tile up to 1px
|
||||
-- (i.e. tile coordinates / 256)
|
||||
-- can be stored in a 32-bit signed integer.
|
||||
-- We have 31 bits por positive numbers
|
||||
-- For zoom level Z coordinates range from 0 to 2^Z-1, so they
|
||||
-- need Z bits, and need 8 bits more to address pixels within a tile
|
||||
-- (gridding), so we'll limit Z to a maximum of 31 - 8
|
||||
RETURN 23;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Maximum zoom level usable with integer coordinates
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_MaxZoomLevel()
|
||||
RETURNS INTEGER
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN 31;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Information about tables in a schema.
|
||||
-- If the schema name parameter is NULL, then tables from all schemas
|
||||
-- that may contain user tables are returned.
|
||||
-- For each table, the regclass, schema name and table name are returned.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_UserTablesInSchema(schema_name text DEFAULT NULL)
|
||||
RETURNS TABLE(table_regclass REGCLASS, schema_name TEXT, table_name TEXT)
|
||||
AS $$
|
||||
SELECT
|
||||
c.oid::regclass AS table_regclass,
|
||||
n.nspname::text AS schema_name,
|
||||
c.relname::text AS table_relname
|
||||
FROM pg_class c
|
||||
JOIN pg_namespace n ON n.oid = c.relnamespace
|
||||
WHERE c.relkind = 'r'
|
||||
AND c.relname NOT IN ('cdb_tablemetadata', 'cdb_analysis_catalog', 'cdb_conf', 'spatial_ref_sys')
|
||||
AND CASE WHEN schema_name IS NULL
|
||||
THEN n.nspname NOT IN ('pg_catalog', 'information_schema', 'topology', '@extschema@')
|
||||
ELSE n.nspname = schema_name
|
||||
END;
|
||||
$$ LANGUAGE 'sql' STABLE PARALLEL SAFE;
|
||||
|
||||
-- Pattern that can be used to detect overview tables and Extract
|
||||
-- the intended zoom level from the table name.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewTableDiscriminator()
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN '\A_vovw_(\d+)_';
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
-- substring(tablename from _CDB_OverviewTableDiscriminator())
|
||||
|
||||
|
||||
-- Pattern matched by the overview tables of a given base table name.
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewTablePattern(base_table TEXT)
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN @extschema@._CDB_OverviewTableDiscriminator() || base_table;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
-- tablename SIMILAR TO _CDB_OverviewTablePattern(base_table)
|
||||
|
||||
-- Name of an overview table, given the base table name and the Z level
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewTableName(base_table TEXT, z INTEGER)
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN '_vovw_' || z::text || '_' || base_table;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Condition to check if a tabla is an overview table of some base table
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_IsOverviewTableOf(base_table TEXT, otable TEXT)
|
||||
RETURNS BOOLEAN
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN otable SIMILAR TO @extschema@._CDB_OverviewTablePattern(base_table);
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Extract the Z level from an overview table name
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewTableZ(otable TEXT)
|
||||
RETURNS INTEGER
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN substring(otable from @extschema@._CDB_OverviewTableDiscriminator())::integer;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Name of the base table corresponding to an overview table
|
||||
-- Scope: private.
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewBaseTableName(overview_table TEXT)
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
BEGIN
|
||||
IF @extschema@._CDB_OverviewTableZ(overview_table) IS NULL THEN
|
||||
RETURN overview_table;
|
||||
ELSE
|
||||
RETURN regexp_replace(overview_table, @extschema@._CDB_OverviewTableDiscriminator(), '');
|
||||
END IF;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_OverviewBaseTable(overview_table REGCLASS)
|
||||
RETURNS REGCLASS
|
||||
AS $$
|
||||
DECLARE
|
||||
table_name TEXT;
|
||||
schema_name TEXT;
|
||||
base_name TEXT;
|
||||
base_table REGCLASS;
|
||||
BEGIN
|
||||
SELECT * FROM @extschema@._cdb_split_table_name(overview_table) INTO schema_name, table_name;
|
||||
base_name := @extschema@._CDB_OverviewBaseTableName(table_name);
|
||||
IF base_name != table_name THEN
|
||||
base_table := Format('%I.%I', schema_name, base_name)::regclass;
|
||||
ELSE
|
||||
base_table := overview_table;
|
||||
END IF;
|
||||
RETURN base_table;
|
||||
END;
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
-- Schema and relation names of a table given its reloid
|
||||
-- Scope: private.
|
||||
-- Parameters
|
||||
-- reloid: oid of the table.
|
||||
-- Return (schema_name, table_name)
|
||||
-- note that returned names will be quoted if necessary
|
||||
CREATE OR REPLACE FUNCTION @extschema@._cdb_split_table_name(reloid REGCLASS, OUT schema_name TEXT, OUT table_name TEXT)
|
||||
AS $$
|
||||
BEGIN
|
||||
SELECT n.nspname, c.relname
|
||||
INTO STRICT schema_name, table_name
|
||||
FROM pg_class c JOIN pg_namespace n ON c.relnamespace = n.oid
|
||||
WHERE c.oid = reloid;
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
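-- Example usage (any visible table's regclass works; pg_catalog.pg_class is used
-- here only because it always exists):
--   SELECT * FROM @extschema@._cdb_split_table_name('pg_catalog.pg_class'::regclass);
--   (returns schema_name = 'pg_catalog', table_name = 'pg_class')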
|
||||
|
||||
-- Schema and relation names of a table given its reloid
|
||||
-- Scope: private.
|
||||
-- Parameters
|
||||
-- reloid: oid of the table.
|
||||
-- Return (schema_name, table_name)
|
||||
-- note that returned names will be quoted if necessary
|
||||
CREATE OR REPLACE FUNCTION @extschema@._cdb_schema_name(reloid REGCLASS)
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
DECLARE
|
||||
schema_name TEXT;
|
||||
BEGIN
|
||||
SELECT n.nspname
|
||||
INTO STRICT schema_name
|
||||
FROM pg_class c JOIN pg_namespace n ON c.relnamespace = n.oid
|
||||
WHERE c.oid = reloid;
|
||||
RETURN schema_name;
|
||||
END
|
||||
$$ LANGUAGE PLPGSQL IMMUTABLE PARALLEL SAFE;
|
18
lib/sql/scripts-available/CDB_QuantileBins.sql
Normal file
@ -0,0 +1,18 @@
|
||||
--
|
||||
-- Determine the Quantile classifications from a numeric array
|
||||
--
|
||||
-- @param in_array A numeric array of numbers to determine the best
|
||||
-- bins based on the Quantile method.
|
||||
--
|
||||
-- @param breaks The number of bins you want to find.
|
||||
--
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_QuantileBins(in_array numeric[], breaks int)
|
||||
RETURNS numeric[]
|
||||
AS $$
|
||||
SELECT
|
||||
percentile_disc(Array(SELECT generate_series(1, breaks) / breaks::numeric))
|
||||
WITHIN GROUP (ORDER BY x ASC) AS p
|
||||
FROM
|
||||
unnest(in_array) AS x;
|
||||
$$ language SQL IMMUTABLE STRICT PARALLEL SAFE;
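-- Example usage (literal array; in practice the array usually comes from
-- array_agg over a numeric column):
--   SELECT @extschema@.CDB_QuantileBins(ARRAY[1,2,3,4,5,6,7,8,9,10]::numeric[], 4);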
|
14
lib/sql/scripts-available/CDB_QueryStatements.sql
Normal file
@ -0,0 +1,14 @@
|
||||
-- Return an array of statements found in the given query text
|
||||
--
|
||||
-- Regexp courtesy of Hubert Lubaczewski (depesz)
|
||||
-- Implemented in plpython for performance reasons
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_QueryStatements(query text)
|
||||
RETURNS SETOF TEXT AS $$
|
||||
import re
|
||||
pat = re.compile( r'''((?:[^'"$;]+|"[^"]*"|'[^']*'|(\$[^$]*\$).*?\2)+)''', re.DOTALL )
|
||||
for match in pat.findall(query):
|
||||
cleaned = match[0].strip()
|
||||
if ( cleaned ):
|
||||
yield cleaned
|
||||
$$ language 'plpythonu' IMMUTABLE STRICT PARALLEL SAFE;
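-- Example usage (returns two rows, one per statement):
--   SELECT @extschema@.CDB_QueryStatements('SELECT 1; SELECT 2;');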
|
75
lib/sql/scripts-available/CDB_QueryTables.sql
Normal file
@ -0,0 +1,75 @@
|
||||
-- Return an array of table names scanned by a given query
|
||||
--
|
||||
-- Requires PostgreSQL 9.x+
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_QueryTablesText(query text)
|
||||
RETURNS text[]
|
||||
AS $$
|
||||
DECLARE
|
||||
exp XML;
|
||||
tables text[];
|
||||
rec RECORD;
|
||||
rec2 RECORD;
|
||||
BEGIN
|
||||
|
||||
tables := '{}';
|
||||
|
||||
FOR rec IN SELECT @extschema@.CDB_QueryStatements(query) q LOOP
|
||||
BEGIN
|
||||
EXECUTE 'EXPLAIN (FORMAT XML, VERBOSE) ' || rec.q INTO STRICT exp;
|
||||
EXCEPTION WHEN syntax_error THEN
|
||||
-- We can get a syntax error if the user tries to EXPLAIN a DDL statement
|
||||
CONTINUE;
|
||||
WHEN others THEN
|
||||
-- TODO: if error is 'relation "xxxxxx" does not exist', take xxxxxx as
|
||||
-- the affected table ?
|
||||
RAISE WARNING 'CDB_QueryTables cannot explain query: % (%: %)', rec.q, SQLSTATE, SQLERRM;
|
||||
RAISE EXCEPTION '%', SQLERRM;
|
||||
CONTINUE;
|
||||
END;
|
||||
|
||||
-- Now need to extract all values of <Relation-Name>
|
||||
|
||||
-- RAISE DEBUG 'Explain: %', exp;
|
||||
|
||||
FOR rec2 IN WITH
|
||||
inp AS (
|
||||
SELECT
|
||||
xpath('//x:Relation-Name/text()', exp, ARRAY[ARRAY['x', 'http://www.postgresql.org/2009/explain']]) as x,
|
||||
xpath('//x:Relation-Name/../x:Schema/text()', exp, ARRAY[ARRAY['x', 'http://www.postgresql.org/2009/explain']]) as s
|
||||
)
|
||||
SELECT unnest(x)::text as p, unnest(s)::text as sc from inp
|
||||
LOOP
|
||||
-- RAISE DEBUG 'tab: %', rec2.p;
|
||||
-- RAISE DEBUG 'sc: %', rec2.sc;
|
||||
tables := array_append(tables, format('%s.%s', quote_ident(rec2.sc), quote_ident(rec2.p)));
|
||||
END LOOP;
|
||||
|
||||
-- RAISE DEBUG 'Tables: %', tables;
|
||||
|
||||
END LOOP;
|
||||
|
||||
-- RAISE DEBUG 'Tables: %', tables;
|
||||
|
||||
-- Remove duplicates and sort by name
|
||||
IF array_upper(tables, 1) > 0 THEN
|
||||
WITH dist as ( SELECT DISTINCT unnest(tables)::text as p ORDER BY p )
|
||||
SELECT array_agg(p) from dist into tables;
|
||||
END IF;
|
||||
|
||||
--RAISE DEBUG 'Tables: %', tables;
|
||||
|
||||
return tables;
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
|
||||
|
||||
|
||||
-- Keep CDB_QueryTables with same signature for backwards compatibility.
|
||||
-- It should probably be removed in the future.
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_QueryTables(query text)
|
||||
RETURNS name[]
|
||||
AS $$
|
||||
BEGIN
|
||||
RETURN @extschema@.CDB_QueryTablesText(query)::name[];
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
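-- Example usage (pg_class is used here only because it always exists; table
-- names are returned schema-qualified):
--   SELECT @extschema@.CDB_QueryTablesText('SELECT * FROM pg_class');
--   --> {pg_catalog.pg_class}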
|
155
lib/sql/scripts-available/CDB_Quota.sql
Normal file
@ -0,0 +1,155 @@
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_total_relation_size(_schema_name TEXT, _table_name TEXT)
|
||||
RETURNS bigint AS
|
||||
$$
|
||||
DECLARE relation_size bigint := 0;
|
||||
BEGIN
|
||||
BEGIN
|
||||
SELECT pg_total_relation_size(format('"%s"."%s"', _schema_name, _table_name)) INTO relation_size;
|
||||
EXCEPTION
|
||||
WHEN undefined_table OR OTHERS THEN
|
||||
RAISE NOTICE '@extschema@._CDB_total_relation_size(''%'', ''%'') caught error: % (%)', _schema_name, _table_name, SQLERRM, SQLSTATE;
|
||||
END;
|
||||
RETURN relation_size;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Return the estimated size of user data. Used for quota checking.
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_UserDataSize(schema_name TEXT)
|
||||
RETURNS bigint AS
|
||||
$$
|
||||
DECLARE
|
||||
total_size INT8;
|
||||
BEGIN
|
||||
WITH raster_tables AS (
|
||||
SELECT o_table_name, r_table_name FROM raster_overviews
|
||||
WHERE o_table_schema = schema_name AND o_table_catalog = current_database()
|
||||
),
|
||||
user_tables AS (
|
||||
SELECT table_name FROM @extschema@._CDB_NonAnalysisTablesInSchema(schema_name)
|
||||
),
|
||||
table_cat AS (
|
||||
SELECT
|
||||
table_name,
|
||||
(
|
||||
EXISTS(select * from raster_tables where o_table_name = table_name)
|
||||
OR table_name SIMILAR TO @extschema@._CDB_OverviewTableDiscriminator() || '[\w\d]*'
|
||||
) AS is_overview,
|
||||
EXISTS(SELECT * FROM raster_tables WHERE r_table_name = table_name) AS is_raster
|
||||
FROM user_tables
|
||||
),
|
||||
sizes AS (
|
||||
SELECT COALESCE(INT8(SUM(@extschema@._CDB_total_relation_size(schema_name, table_name))), 0) table_size,
|
||||
CASE
|
||||
WHEN is_overview THEN 0
|
||||
WHEN is_raster THEN 1
|
||||
ELSE 0.5 -- Division by 2 is for not counting the_geom_webmercator
|
||||
END AS multiplier FROM table_cat GROUP BY is_overview, is_raster
|
||||
)
|
||||
SELECT sum(table_size*multiplier)::int8 INTO total_size FROM sizes;
|
||||
|
||||
IF total_size IS NOT NULL THEN
|
||||
RETURN total_size;
|
||||
ELSE
|
||||
RETURN 0;
|
||||
END IF;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
-- Return the estimated size of user data. Used for quota checking.
|
||||
-- Implicit schema version for backwards compatibility
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_UserDataSize()
|
||||
RETURNS bigint AS
|
||||
$$
|
||||
SELECT @extschema@.CDB_UserDataSize('public');
|
||||
$$
|
||||
LANGUAGE 'sql' VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
-- Triggers cannot have declared arguments: pbfact float8, qmax int8, schema_name text
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_CheckQuota()
|
||||
RETURNS trigger AS
|
||||
$$
|
||||
DECLARE
|
||||
pbfact float8;
|
||||
qmax int8;
|
||||
schema_name text;
|
||||
dice float8;
|
||||
quota float8;
|
||||
BEGIN
|
||||
IF TG_NARGS = 3 THEN
|
||||
schema_name := TG_ARGV[2];
|
||||
IF @extschema@.schema_exists(schema_name) = false THEN
|
||||
RAISE EXCEPTION 'Invalid schema name "%"', schema_name;
|
||||
END IF;
|
||||
ELSE
|
||||
schema_name := 'public';
|
||||
END IF;
|
||||
|
||||
-- By default try to use quota function, and if not present then rely on the one specified by params
|
||||
BEGIN
|
||||
EXECUTE FORMAT('SELECT %I._CDB_UserQuotaInBytes();', schema_name) INTO qmax;
|
||||
EXCEPTION WHEN undefined_function THEN
|
||||
BEGIN
|
||||
IF TG_NARGS >= 2 AND TG_ARGV[1] <> '-1' THEN
|
||||
qmax := TG_ARGV[1];
|
||||
ELSE
|
||||
RAISE EXCEPTION 'Missing "%"._CDB_UserQuotaInBytes()', schema_name;
|
||||
END IF;
|
||||
END;
|
||||
END;
|
||||
|
||||
pbfact := TG_ARGV[0];
|
||||
|
||||
dice := random();
|
||||
|
||||
IF dice < pbfact THEN
|
||||
RAISE DEBUG 'Checking quota on table % (dice:%, needed:<%)', TG_RELID::text, dice, pbfact;
|
||||
|
||||
IF qmax = 0 THEN
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
SELECT @extschema@.CDB_UserDataSize(schema_name) INTO quota;
|
||||
IF quota > qmax THEN
|
||||
RAISE EXCEPTION 'Quota exceeded by %KB', (quota-qmax)/1024;
|
||||
ELSE RAISE DEBUG 'User quota in bytes: % < % (max allowed)', quota, qmax;
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_SetUserQuotaInBytes(schema_name text, bytes int8)
|
||||
RETURNS int8 AS
|
||||
$$
|
||||
DECLARE
|
||||
sql text;
|
||||
BEGIN
|
||||
IF @extschema@.schema_exists(schema_name::text) = false THEN
|
||||
RAISE EXCEPTION 'Invalid schema name "%"', schema_name::text;
|
||||
END IF;
|
||||
|
||||
sql := 'CREATE OR REPLACE FUNCTION "' || schema_name::text || '"._CDB_UserQuotaInBytes() '
|
||||
|| 'RETURNS int8 AS $X$ SELECT ' || bytes
|
||||
|| '::int8 $X$ LANGUAGE sql IMMUTABLE';
|
||||
EXECUTE sql;
|
||||
|
||||
return bytes;
|
||||
END
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
|
||||
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_SetUserQuotaInBytes(bytes int8)
|
||||
RETURNS int8 AS
|
||||
$$
|
||||
BEGIN
|
||||
return @extschema@.CDB_SetUserQuotaInBytes('public', bytes);
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
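-- Example usage (hypothetical 100 MB quota for the 'public' schema; a quota of 0
-- disables the check in CDB_CheckQuota):
--   SELECT @extschema@.CDB_SetUserQuotaInBytes('public', 104857600);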
|
69
lib/sql/scripts-available/CDB_RandomTids.sql
Normal file
@ -0,0 +1,69 @@
|
||||
|
||||
-- {
|
||||
--
|
||||
-- Return random TIDs in a table.
|
||||
--
|
||||
-- You can use like this:
|
||||
--
|
||||
-- SELECT * FROM lots_of_points WHERE ctid = ANY (
|
||||
-- ARRAY[ (SELECT CDB_RandomTids('lots_of_points', 100000)) ]
|
||||
-- );
|
||||
--
|
||||
-- NOTE:
|
||||
-- It currently doesn't really sample at random, but in an
-- equally-distributed way among all tuples.
|
||||
--
|
||||
--
|
||||
-- }{
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_RandomTids(in_table regclass, in_nsamples integer)
|
||||
RETURNS tid[]
|
||||
AS $$
|
||||
DECLARE
|
||||
class_info RECORD;
|
||||
tuples_per_page INTEGER;
|
||||
needed_pages INTEGER;
|
||||
skip_pages INTEGER;
|
||||
tidlist TID[];
|
||||
pnrec RECORD;
|
||||
BEGIN
|
||||
|
||||
-- (#) estimate pages and tuples-per-page
|
||||
-- HINT: pg_class.relpages, pg_class.reltuples
|
||||
SELECT relpages, reltuples
|
||||
FROM pg_class WHERE oid = in_table
|
||||
INTO class_info;
|
||||
|
||||
RAISE DEBUG 'Table % has % pages and % tuples',
|
||||
in_table::text, class_info.relpages, class_info.reltuples;
|
||||
|
||||
IF in_nsamples > class_info.reltuples THEN
|
||||
RAISE WARNING 'Table has fewer tuples than requested';
|
||||
-- should just perform a sequential scan here...
|
||||
END IF;
|
||||
|
||||
tuples_per_page := floor(class_info.reltuples/class_info.relpages);
|
||||
needed_pages := ceil(in_nsamples::real/tuples_per_page);
|
||||
|
||||
RAISE DEBUG '% tuples per page, we need % pages for % tuples',
|
||||
tuples_per_page, needed_pages, in_nsamples;
|
||||
|
||||
-- (#) select random pages
|
||||
-- TODO: see how good this is first
|
||||
|
||||
skip_pages := floor( (class_info.relpages-needed_pages)/(needed_pages+1) );
|
||||
|
||||
RAISE DEBUG 'we are going to skip % pages at each iteration',
|
||||
skip_pages;
|
||||
|
||||
SELECT array_agg(t) FROM (
|
||||
SELECT '(' || pn || ',' || tn || ')' as t
|
||||
FROM generate_series(1, tuples_per_page) x(tn),
|
||||
generate_series(skip_pages+1, class_info.relpages, skip_pages) y(pn) ) f
|
||||
INTO tidlist;
|
||||
|
||||
RETURN tidlist;
|
||||
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' STABLE STRICT PARALLEL SAFE;
|
||||
-- }
|
||||
|
108
lib/sql/scripts-available/CDB_RectangleGrid.sql
Normal file
@ -0,0 +1,108 @@
|
||||
-- In older versions of the extension, CDB_RectangleGrid had a different signature
|
||||
DROP FUNCTION IF EXISTS @extschema@.CDB_RectangleGrid(GEOMETRY, FLOAT8, FLOAT8, GEOMETRY);
|
||||
|
||||
--
|
||||
-- Fill given extent with a rectangular coverage
|
||||
--
|
||||
-- @param ext Extent to fill. Only rectangles with center point falling
|
||||
-- inside the extent (or at the lower or leftmost edge) will
|
||||
-- be emitted. The returned hexagons will have the same SRID
|
||||
-- as this extent.
|
||||
--
|
||||
-- @param width Width of each rectangle
|
||||
--
|
||||
-- @param height Height of each rectangle
|
||||
--
|
||||
-- @param origin Optional origin to allow for exact tiling.
|
||||
-- If omitted the origin will be 0,0.
|
||||
-- The parameter is checked for having the same SRID
|
||||
-- as the extent.
|
||||
--
|
||||
-- @param maxcells Optional maximum number of grid cells to generate;
|
||||
-- if the grid requires more cells to cover the extent
|
||||
-- an exception will occur.
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_RectangleGrid(ext GEOMETRY, width FLOAT8, height FLOAT8, origin GEOMETRY DEFAULT NULL, maxcells INTEGER DEFAULT 512*512)
|
||||
RETURNS SETOF GEOMETRY
|
||||
AS $$
|
||||
DECLARE
|
||||
h GEOMETRY; -- rectangle cell
|
||||
hstep FLOAT8; -- horizontal step
|
||||
vstep FLOAT8; -- vertical step
|
||||
hw FLOAT8; -- half width
|
||||
hh FLOAT8; -- half height
|
||||
vstart FLOAT8;
|
||||
hstart FLOAT8;
|
||||
hend FLOAT8;
|
||||
vend FLOAT8;
|
||||
xoff FLOAT8;
|
||||
yoff FLOAT8;
|
||||
xgrd FLOAT8;
|
||||
ygrd FLOAT8;
|
||||
x FLOAT8;
|
||||
y FLOAT8;
|
||||
srid INTEGER;
|
||||
BEGIN
|
||||
|
||||
srid := @postgisschema@.ST_SRID(ext);
|
||||
|
||||
xoff := 0;
|
||||
yoff := 0;
|
||||
|
||||
IF origin IS NOT NULL THEN
|
||||
IF @postgisschema@.ST_SRID(origin) != srid THEN
|
||||
RAISE EXCEPTION 'SRID mismatch between extent (%) and origin (%)', srid, ST_SRID(origin);
|
||||
END IF;
|
||||
xoff := @postgisschema@.ST_X(origin);
|
||||
yoff := @postgisschema@.ST_Y(origin);
|
||||
END IF;
|
||||
|
||||
--RAISE DEBUG 'X offset: %', xoff;
|
||||
--RAISE DEBUG 'Y offset: %', yoff;
|
||||
|
||||
hw := width/2.0;
|
||||
hh := height/2.0;
|
||||
|
||||
xgrd := hw;
|
||||
ygrd := hh;
|
||||
--RAISE DEBUG 'X grid size: %', xgrd;
|
||||
--RAISE DEBUG 'Y grid size: %', ygrd;
|
||||
|
||||
hstep := width;
|
||||
vstep := height;
|
||||
|
||||
-- Tweak horizontal start on hstep grid from origin
|
||||
hstart := xoff + ceil((@postgisschema@.ST_XMin(ext)-xoff)/hstep)*hstep;
|
||||
--RAISE DEBUG 'hstart: %', hstart;
|
||||
|
||||
-- Tweak vertical start on vstep grid from origin
|
||||
vstart := yoff + ceil((@postgisschema@.ST_Ymin(ext)-yoff)/vstep)*vstep;
|
||||
--RAISE DEBUG 'vstart: %', vstart;
|
||||
|
||||
hend := ST_XMax(ext);
|
||||
vend := ST_YMax(ext);
|
||||
|
||||
--RAISE DEBUG 'hend: %', hend;
|
||||
--RAISE DEBUG 'vend: %', vend;
|
||||
|
||||
IF maxcells IS NOT NULL AND maxcells > 0 THEN
|
||||
IF ((hend - hstart)/hstep * (vend - vstart)/vstep)::integer > maxcells THEN
|
||||
RAISE EXCEPTION 'The requested grid is too big to be rendered';
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
x := hstart;
|
||||
WHILE x < hend LOOP -- over X
|
||||
y := vstart;
|
||||
h := @postgisschema@.ST_MakeEnvelope(x-hw, y-hh, x+hw, y+hh, srid);
|
||||
WHILE y < vend LOOP -- over Y
|
||||
RETURN NEXT h;
|
||||
h := @postgisschema@.ST_Translate(h, 0, vstep);
|
||||
y := yoff + round(((y + vstep)-yoff)/ygrd)*ygrd; -- round to grid
|
||||
END LOOP;
|
||||
x := xoff + round(((x + hstep)-xoff)/xgrd)*xgrd; -- round to grid
|
||||
END LOOP;
|
||||
|
||||
RETURN;
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' IMMUTABLE PARALLEL SAFE;
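-- Example usage (10x10 unit cells over a 100x100 extent in webmercator; origin
-- and maxcells are left at their defaults):
--   SELECT @postgisschema@.ST_AsText(cell)
--   FROM @extschema@.CDB_RectangleGrid(
--          @postgisschema@.ST_MakeEnvelope(0, 0, 100, 100, 3857), 10, 10) AS cell;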
|
24
lib/sql/scripts-available/CDB_SearchPath.sql
Normal file
@ -0,0 +1,24 @@
|
||||
---- Make sure '@extschema@' is in database search path
|
||||
DO
|
||||
$$
|
||||
DECLARE
|
||||
var_result text;
|
||||
var_cur_search_path text;
|
||||
BEGIN
|
||||
SELECT reset_val INTO var_cur_search_path
|
||||
FROM pg_settings WHERE name = 'search_path';
|
||||
|
||||
IF var_cur_search_path LIKE '%@extschema@%' THEN
|
||||
RAISE DEBUG '"@extschema@" already in database search_path';
|
||||
ELSE
|
||||
var_cur_search_path := var_cur_search_path || ', "@extschema@"';
|
||||
EXECUTE 'ALTER DATABASE ' || quote_ident(current_database()) ||
|
||||
' SET search_path = ' || var_cur_search_path;
|
||||
RAISE DEBUG '"@extschema@" has been added to end of database search_path';
|
||||
END IF;
|
||||
|
||||
-- Reset search_path
|
||||
EXECUTE 'SET search_path = ' || var_cur_search_path;
|
||||
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql';
|
53
lib/sql/scripts-available/CDB_Stats.sql
Normal file
@ -0,0 +1,53 @@
|
||||
--
|
||||
-- Calculate basic statistics of a given dataset
|
||||
--
|
||||
-- @param in_array A numeric array of numbers
|
||||
--
|
||||
-- Returns: statistical quantity chosen
|
||||
--
|
||||
-- References: http://www.itl.nist.gov/div898/handbook/eda/section3/eda35b.htm
|
||||
--
|
||||
|
||||
-- Calculate kurtosis
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Kurtosis ( in_array NUMERIC[] ) RETURNS NUMERIC as $$
|
||||
DECLARE
|
||||
a numeric;
|
||||
c numeric;
|
||||
k numeric;
|
||||
BEGIN
|
||||
SELECT AVG(e), COUNT(e)::numeric * power(stddev(e),4) INTO a, c FROM ( SELECT unnest(in_array) e ) x;
|
||||
|
||||
IF c=0 THEN
|
||||
RETURN 0;
|
||||
ELSE
|
||||
|
||||
EXECUTE 'SELECT sum(power($1 - e, 4)) / ($2 ) - 3
|
||||
FROM (SELECT unnest($3) e ) x'
|
||||
INTO k
|
||||
USING a, c, in_array;
|
||||
|
||||
RETURN k;
|
||||
END IF;
|
||||
END;
|
||||
$$ language plpgsql IMMUTABLE STRICT PARALLEL SAFE;
|
||||
|
||||
-- Calculate skewness
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Skewness ( in_array NUMERIC[] ) RETURNS NUMERIC as $$
|
||||
DECLARE
|
||||
a numeric;
|
||||
c numeric;
|
||||
sk numeric;
|
||||
BEGIN
|
||||
SELECT AVG(e), COUNT(e)::numeric * power(stddev(e),3) INTO a, c FROM ( SELECT unnest(in_array) e ) x;
|
||||
IF c=0 THEN
|
||||
RETURN 0;
|
||||
ELSE
|
||||
EXECUTE 'SELECT sum(power($1 - e, 3)) / ( $2 )
|
||||
FROM (SELECT unnest($3) e ) x'
|
||||
INTO sk
|
||||
USING a, c, in_array;
|
||||
|
||||
RETURN sk;
|
||||
END IF;
|
||||
END;
|
||||
$$ language plpgsql IMMUTABLE STRICT PARALLEL SAFE;
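-- Example usage (literal sample; both functions take the same numeric array):
--   SELECT @extschema@.CDB_Skewness(ARRAY[1,2,2,3,3,3,4,4,5]::numeric[]) AS skewness,
--          @extschema@.CDB_Kurtosis(ARRAY[1,2,2,3,3,3,4,4,5]::numeric[]) AS kurtosis;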
|
20
lib/sql/scripts-available/CDB_StringToDate.sql
Normal file
@ -0,0 +1,20 @@
|
||||
-- Convert string to date
|
||||
--
|
||||
DROP FUNCTION IF EXISTS @extschema@.CDB_StringToDate(character varying);
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_StringToDate(input character varying)
|
||||
RETURNS TIMESTAMP AS $$
|
||||
DECLARE output TIMESTAMP;
|
||||
BEGIN
|
||||
BEGIN
|
||||
output := input::date;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
BEGIN
|
||||
SELECT to_timestamp(input::integer) INTO output;
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NULL;
|
||||
END;
|
||||
END;
|
||||
RETURN output;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL UNSAFE;
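-- Example usage (an ISO date string and a unix epoch passed as text; unparsable
-- input yields NULL):
--   SELECT @extschema@.CDB_StringToDate('2019-01-01'),
--          @extschema@.CDB_StringToDate('1546300800');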
|
167
lib/sql/scripts-available/CDB_SyncTable.sql
Normal file
@ -0,0 +1,167 @@
|
||||
/*
|
||||
Gets the column names of a given table.
|
||||
|
||||
Sample usage:
|
||||
|
||||
SELECT @extschema@._CDB_GetColumns('public.films');
|
||||
*/
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_GetColumns(src_table REGCLASS)
|
||||
RETURNS SETOF NAME
|
||||
AS $$
|
||||
SELECT
|
||||
a.attname as "colname"
|
||||
FROM
|
||||
pg_catalog.pg_attribute a
|
||||
WHERE
|
||||
a.attnum > 0
|
||||
AND NOT a.attisdropped
|
||||
AND a.attrelid = (
|
||||
SELECT c.oid
|
||||
FROM pg_catalog.pg_class c
|
||||
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
|
||||
WHERE c.oid = src_table::oid
|
||||
AND pg_catalog.pg_table_is_visible(c.oid)
|
||||
)
|
||||
ORDER BY a.attnum;
|
||||
$$ LANGUAGE sql STABLE PARALLEL UNSAFE;
|
||||
|
||||
|
||||
/*
|
||||
Given an array of quoted column names, it generates an UPDATE SET
|
||||
clause with the following form:
|
||||
|
||||
the_geom = changed.the_geom,
|
||||
id = changed.id,
|
||||
elevation = changed.elevation
|
||||
|
||||
Example of usage:
|
||||
|
||||
SELECT @extschema@.__CDB_GetUpdateSetClause('{the_geom, id, elevation}', 'changed');
|
||||
*/
|
||||
CREATE OR REPLACE FUNCTION @extschema@.__CDB_GetUpdateSetClause(colnames TEXT[], update_source TEXT)
|
||||
RETURNS TEXT
|
||||
AS $$
|
||||
DECLARE
|
||||
set_clause_list TEXT[];
|
||||
col TEXT;
|
||||
BEGIN
|
||||
FOREACH col IN ARRAY colnames
|
||||
LOOP
|
||||
set_clause_list := array_append(set_clause_list, format('%1$s = %2$s.%1$s', col, update_source));
|
||||
END LOOP;
|
||||
RETURN array_to_string(set_clause_list, ', ');
|
||||
END;
|
||||
$$ LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
|
||||
/*
|
||||
Given a prefix, generate a safe unique NAME for a temp table.
|
||||
|
||||
Example of usage:
|
||||
|
||||
SELECT @extschema@.__CDB_GenerateUniqueName('src_sync'); --> src_sync_718794_120106
|
||||
|
||||
*/
|
||||
CREATE OR REPLACE FUNCTION @extschema@.__CDB_GenerateUniqueName(prefix TEXT)
|
||||
RETURNS NAME
|
||||
AS $$
|
||||
SELECT format('%s_%s_%s', prefix, txid_current(), (random()*1000000)::int)::NAME;
|
||||
$$ LANGUAGE sql VOLATILE PARALLEL UNSAFE;
|
||||
|
||||
/*
|
||||
Given a table name and an array of column names,
|
||||
return array of column names qualified with the table name and quoted when necessary
|
||||
tablename and colnames should be properly quoted, and for this reason the type NAME is not
|
||||
used for them (with quotes they could exceed the maximum identifier length)
|
||||
|
||||
Example of usage:
|
||||
|
||||
SELECT @extschema@.__CDB_QualifyColumns('t', ARRAY['a','"b-1"']); --> ARRAY['t.a','t."b-1"']
|
||||
|
||||
*/
|
||||
CREATE OR REPLACE FUNCTION @extschema@.__CDB_QualifyColumns(tablename NAME, colnames NAME[]) RETURNS TEXT[] AS
|
||||
$$
|
||||
SELECT array_agg(tablename || '.' || _colname) from unnest(colnames) _colname;
|
||||
$$ LANGUAGE sql IMMUTABLE PARALLEL SAFE;
|
||||
|
||||
/*
|
||||
A Table Syncer
|
||||
|
||||
Assumptions:
|
||||
- Both tables contain a consistent cartodb_id column
|
||||
- Destination table has all columns of the source or does not exist
|
||||
|
||||
Sample usage:
|
||||
|
||||
SELECT CDB_SyncTable('radar_stations', 'public', 'syncdest');
|
||||
SELECT CDB_SyncTable('test_sync_source', 'public', 'test_sync_dest', '{the_geom, the_geom_webmercator}');
|
||||
|
||||
*/
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_SyncTable(src_table REGCLASS, dst_schema REGNAMESPACE, dst_table NAME, skip_cols NAME[] = '{}')
|
||||
RETURNS void
|
||||
AS $$
|
||||
DECLARE
|
||||
fq_dest_table TEXT;
|
||||
|
||||
colnames TEXT[];
|
||||
dst_colnames TEXT;
|
||||
src_colnames TEXT;
|
||||
|
||||
update_set_clause TEXT;
|
||||
|
||||
num_rows BIGINT;
|
||||
err_context text;
|
||||
|
||||
t timestamptz;
|
||||
BEGIN
|
||||
-- If the destination table does not exist, just copy the source table
|
||||
fq_dest_table := format('%s.%I', dst_schema, dst_table);
|
||||
EXECUTE format('CREATE TABLE IF NOT EXISTS %s as TABLE %s', fq_dest_table, src_table);
|
||||
GET DIAGNOSTICS num_rows = ROW_COUNT;
|
||||
IF num_rows > 0 THEN
|
||||
RAISE NOTICE 'INSERTED % row(s)', num_rows;
|
||||
RETURN;
|
||||
END IF;
|
||||
|
||||
skip_cols := skip_cols || '{cartodb_id}';
|
||||
|
||||
-- Get the list of columns from the source table, excluding skip_cols
|
||||
SELECT ARRAY(SELECT quote_ident(c) FROM @extschema@._CDB_GetColumns(src_table) as c EXCEPT SELECT unnest(skip_cols)) INTO colnames;
|
||||
|
||||
-- Deal with deleted rows: ids in dest but not in source
|
||||
t := clock_timestamp();
|
||||
EXECUTE format(
|
||||
'DELETE FROM %1$s _dst WHERE NOT EXISTS (SELECT * FROM %2$s _src WHERE _src.cartodb_id=_dst.cartodb_id)',
|
||||
fq_dest_table, src_table);
|
||||
GET DIAGNOSTICS num_rows = ROW_COUNT;
|
||||
RAISE NOTICE 'DELETED % row(s)', num_rows;
|
||||
RAISE DEBUG 'DELETE time (s): %', clock_timestamp() - t;
|
||||
|
||||
-- Deal with inserted rows: ids in source but not in dest
|
||||
t := clock_timestamp();
|
||||
EXECUTE format('
|
||||
INSERT INTO %1$s(cartodb_id, %2$s)
|
||||
SELECT cartodb_id, %2$s FROM %3$s _src WHERE NOT EXISTS (SELECT * FROM %1$s _dst WHERE _src.cartodb_id=_dst.cartodb_id)
|
||||
', fq_dest_table, array_to_string(colnames, ','), src_table);
|
||||
GET DIAGNOSTICS num_rows = ROW_COUNT;
|
||||
RAISE NOTICE 'INSERTED % row(s)', num_rows;
|
||||
RAISE DEBUG 'INSERT time (s): %', clock_timestamp() - t;
|
||||
|
||||
-- Deal with modified rows: ids in source and dest but different hashes
|
||||
t := clock_timestamp();
|
||||
update_set_clause := @extschema@.__CDB_GetUpdateSetClause(colnames, '_changed');
|
||||
dst_colnames := array_to_string(@extschema@.__CDB_QualifyColumns('_dst', colnames), ',');
|
||||
src_colnames := array_to_string(@extschema@.__CDB_QualifyColumns('_src', colnames), ',');
|
||||
EXECUTE format('
|
||||
UPDATE %1$s _update SET %2$s
|
||||
FROM (
|
||||
SELECT _src.* FROM %3$s _src JOIN %1$s _dst ON (_dst.cartodb_id = _src.cartodb_id)
|
||||
WHERE md5(ROW(%4$s)::text) <> md5(ROW(%5$s)::text)
|
||||
) _changed
|
||||
WHERE _update.cartodb_id = _changed.cartodb_id;
|
||||
', fq_dest_table, update_set_clause, src_table, dst_colnames, src_colnames);
|
||||
GET DIAGNOSTICS num_rows = ROW_COUNT;
|
||||
RAISE NOTICE 'MODIFIED % row(s)', num_rows;
|
||||
RAISE DEBUG 'UPDATE time (s): %', clock_timestamp() - t;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE;
|
27
lib/sql/scripts-available/CDB_TableIndexes.sql
Normal file
@ -0,0 +1,27 @@
|
||||
-- Function returning indexes for a table
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_TableIndexes(REGCLASS)
|
||||
RETURNS TABLE(index_name name, index_unique bool, index_primary bool, index_keys text array)
|
||||
AS $$
|
||||
|
||||
SELECT pg_class.relname as index_name,
|
||||
idx.indisunique as index_unique,
|
||||
idx.indisprimary as index_primary,
|
||||
ARRAY(
|
||||
SELECT pg_get_indexdef(idx.indexrelid, k + 1, true)
|
||||
FROM generate_subscripts(idx.indkey, 1) as k
|
||||
ORDER BY k
|
||||
) as index_keys
|
||||
FROM pg_indexes,
|
||||
pg_index as idx
|
||||
JOIN pg_class
|
||||
ON pg_class.oid = idx.indexrelid
|
||||
WHERE pg_indexes.tablename = '' || $1 || ''
|
||||
AND '' || $1 || '' IN (SELECT CDB_UserTables())
|
||||
AND pg_class.relname=pg_indexes.indexname
|
||||
;
|
||||
|
||||
$$ LANGUAGE SQL STABLE PARALLEL SAFE;
|
||||
|
||||
-- This is to migrate from pre-0.2.0 version
|
||||
-- See http://github.com/CartoDB/cartodb-postgresql/issues/36
|
||||
GRANT EXECUTE ON FUNCTION @extschema@.CDB_TableIndexes(REGCLASS) TO public;
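-- Example usage ('mytable' is a hypothetical cartodb user table):
--   SELECT * FROM @extschema@.CDB_TableIndexes('mytable');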
|
146
lib/sql/scripts-available/CDB_TableMetadata.sql
Normal file
@ -0,0 +1,146 @@
|
||||
|
||||
CREATE TABLE IF NOT EXISTS
|
||||
@extschema@.CDB_TableMetadata (
|
||||
tabname regclass not null primary key,
|
||||
updated_at timestamp with time zone not null default now()
|
||||
);
|
||||
|
||||
CREATE OR REPLACE VIEW @extschema@.CDB_TableMetadata_Text AS
|
||||
SELECT FORMAT('%I.%I', n.nspname::text, c.relname::text) tabname, updated_at
|
||||
FROM @extschema@.CDB_TableMetadata m JOIN pg_catalog.pg_class c ON m.tabname::oid = c.oid
|
||||
LEFT JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid;
|
||||
|
||||
-- No one can see this
|
||||
-- Updates are only possible through the security definer trigger
|
||||
-- GRANT SELECT ON @extschema@.CDB_TableMetadata TO public;
|
||||
|
||||
--
|
||||
-- Trigger logging updated_at in the CDB_TableMetadata
|
||||
-- and notifying cdb_tabledata_update with table name as payload.
|
||||
--
|
||||
-- Attach to tables like this:
|
||||
--
|
||||
-- CREATE trigger track_updates
|
||||
-- AFTER INSERT OR UPDATE OR TRUNCATE OR DELETE ON <tablename>
|
||||
-- FOR EACH STATEMENT
|
||||
-- EXECUTE PROCEDURE cdb_tablemetadata_trigger();
|
||||
--
|
||||
-- NOTE: _never_ attach to CDB_TableMetadata ...
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_TableMetadata_Trigger()
|
||||
RETURNS trigger AS
|
||||
$$
|
||||
BEGIN
|
||||
-- Guard against infinite loop
|
||||
IF TG_RELID = '@extschema@.CDB_TableMetadata'::regclass::oid THEN
|
||||
RETURN NULL;
|
||||
END IF;
|
||||
|
||||
-- Cleanup stale entries
|
||||
DELETE FROM @extschema@.CDB_TableMetadata
|
||||
WHERE NOT EXISTS (
|
||||
SELECT oid FROM pg_class WHERE oid = tabname
|
||||
);
|
||||
|
||||
WITH nv as (
|
||||
SELECT TG_RELID as tabname, NOW() as t
|
||||
), updated as (
|
||||
UPDATE @extschema@.CDB_TableMetadata x SET updated_at = nv.t
|
||||
FROM nv WHERE x.tabname = nv.tabname
|
||||
RETURNING x.tabname
|
||||
)
|
||||
INSERT INTO @extschema@.CDB_TableMetadata SELECT nv.*
|
||||
FROM nv LEFT JOIN updated USING(tabname)
|
||||
WHERE updated.tabname IS NULL;
|
||||
|
||||
RETURN NULL;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE SECURITY DEFINER;
|
||||
|
||||
--
|
||||
-- Trigger invalidating varnish whenever CDB_TableMetadata
|
||||
-- record change.
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_TableMetadata_Updated()
|
||||
RETURNS trigger AS
|
||||
$$
|
||||
DECLARE
|
||||
tabname regclass;
|
||||
rec RECORD;
|
||||
found BOOL;
|
||||
schema_name TEXT;
|
||||
table_name TEXT;
|
||||
BEGIN
|
||||
|
||||
IF TG_OP = 'UPDATE' or TG_OP = 'INSERT' THEN
|
||||
tabname = NEW.tabname;
|
||||
ELSE
|
||||
tabname = OLD.tabname;
|
||||
END IF;
|
||||
|
||||
-- Notify table data update
|
||||
-- This needs a little bit more of research regarding security issues
|
||||
-- see https://github.com/CartoDB/cartodb/pull/241
|
||||
-- PERFORM pg_notify('cdb_tabledata_update', tabname);
|
||||
|
||||
--RAISE NOTICE 'Table % was updated', tabname;
|
||||
|
||||
-- This will be needed until we'll have someone listening
|
||||
-- on the event we just broadcasted:
|
||||
--
|
||||
-- LISTEN cdb_tabledata_update;
|
||||
--
|
||||
|
||||
-- Call the first varnish invalidation function owned
|
||||
-- by a superuser found in @extschema@ or public schema
|
||||
-- (in that order)
|
||||
found := false;
|
||||
FOR rec IN SELECT u.usesuper, u.usename, n.nspname, p.proname
|
||||
FROM pg_proc p, pg_namespace n, pg_user u
|
||||
WHERE p.proname = 'cdb_invalidate_varnish'
|
||||
AND p.pronamespace = n.oid
|
||||
AND n.nspname IN ('public', '@extschema@')
|
||||
AND u.usesysid = p.proowner
|
||||
AND u.usesuper
|
||||
ORDER BY n.nspname
|
||||
LOOP
|
||||
SELECT n.nspname, c.relname FROM pg_class c, pg_namespace n WHERE c.oid=tabname AND c.relnamespace = n.oid INTO schema_name, table_name;
|
||||
EXECUTE 'SELECT ' || quote_ident(rec.nspname) || '.'
|
||||
|| quote_ident(rec.proname)
|
||||
|| '(' || quote_literal(quote_ident(schema_name) || '.' || quote_ident(table_name)) || ')';
|
||||
found := true;
|
||||
EXIT;
|
||||
END LOOP;
|
||||
IF NOT found THEN RAISE WARNING 'Missing cdb_invalidate_varnish()'; END IF;
|
||||
|
||||
RETURN NULL;
|
||||
END;
|
||||
$$
|
||||
LANGUAGE plpgsql VOLATILE PARALLEL UNSAFE SECURITY DEFINER;
|
||||
|
||||
DROP TRIGGER IF EXISTS table_modified ON @extschema@.CDB_TableMetadata;
|
||||
-- NOTE: on DELETE we would be unable to convert the table
|
||||
-- oid (regclass) to its name
|
||||
CREATE TRIGGER table_modified AFTER INSERT OR UPDATE
|
||||
ON @extschema@.CDB_TableMetadata FOR EACH ROW EXECUTE PROCEDURE
|
||||
@extschema@._CDB_TableMetadata_Updated();
|
||||
|
||||
|
||||
-- similar to TOUCH(1) in unix filesystems but for a table in cdb_tablemetadata
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_TableMetadataTouch(tablename regclass)
|
||||
RETURNS void AS
|
||||
$$
|
||||
BEGIN
|
||||
WITH upsert AS (
|
||||
UPDATE @extschema@.cdb_tablemetadata
|
||||
SET updated_at = NOW()
|
||||
WHERE tabname = tablename
|
||||
RETURNING *
|
||||
)
|
||||
INSERT INTO @extschema@.cdb_tablemetadata (tabname, updated_at)
|
||||
SELECT tablename, NOW()
|
||||
WHERE NOT EXISTS (SELECT * FROM upsert);
|
||||
END;
|
||||
$$
|
||||
LANGUAGE 'plpgsql' VOLATILE STRICT PARALLEL UNSAFE;
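-- Example usage ('mytable' is a hypothetical table; its row in cdb_tablemetadata
-- is inserted or its updated_at refreshed):
--   SELECT @extschema@.CDB_TableMetadataTouch('mytable'::regclass);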
|
82
lib/sql/scripts-available/CDB_TransformToWebmercator.sql
Normal file
@ -0,0 +1,82 @@
|
||||
--
|
||||
-- Function to "safely" transform to webmercator
|
||||
--
|
||||
-- This function works around the existence of a valid range
|
||||
-- for web mercator by "clipping" anything outside to the valid
|
||||
-- range.
|
||||
--
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_TransformToWebmercator(geom @postgisschema@.geometry)
|
||||
RETURNS @postgisschema@.geometry
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
valid_extent @postgisschema@.GEOMETRY;
|
||||
latlon_input @postgisschema@.GEOMETRY;
|
||||
clipped_input @postgisschema@.GEOMETRY;
|
||||
to_webmercator @postgisschema@.GEOMETRY;
|
||||
ret @postgisschema@.GEOMETRY;
|
||||
BEGIN
|
||||
|
||||
IF @postgisschema@.ST_Srid(geom) = 3857 THEN
|
||||
RETURN geom;
|
||||
END IF;
|
||||
|
||||
-- This is the valid web mercator extent
|
||||
--
|
||||
-- NOTE: some sources set the valid latitude range
|
||||
-- to -85.0511 to 85.0511 but as long as proj
|
||||
-- does not complain we are happy
|
||||
--
|
||||
valid_extent := @postgisschema@.ST_MakeEnvelope(-180, -89, 180, 89, 4326);
|
||||
|
||||
-- Then we transform to WGS84 latlon, which is
|
||||
-- where we have known coordinates for the clipping
|
||||
--
|
||||
latlon_input := @postgisschema@.ST_Transform(geom, 4326);
|
||||
|
||||
-- Don't bother clipping if the geometry boundary doesn't
|
||||
-- go outside the valid extent.
|
||||
IF latlon_input @ valid_extent THEN
|
||||
BEGIN
|
||||
RETURN @postgisschema@.ST_Transform(latlon_input, 3857);
|
||||
EXCEPTION WHEN OTHERS THEN
|
||||
RETURN NULL;
|
||||
END;
|
||||
END IF;
|
||||
|
||||
-- Since we're going to use ST_Intersection on input
|
||||
-- we'd better ensure the input is valid
|
||||
-- TODO: only do this if the first ST_Intersection fails ?
|
||||
IF @postgisschema@.ST_Dimension(geom) != 0 AND
|
||||
-- See http://trac.osgeo.org/postgis/ticket/1719
|
||||
@postgisschema@.GeometryType(geom) != 'GEOMETRYCOLLECTION'
|
||||
THEN
|
||||
BEGIN
|
||||
latlon_input := @postgisschema@.ST_MakeValid(latlon_input);
|
||||
EXCEPTION
|
||||
WHEN OTHERS THEN
|
||||
-- See http://github.com/Vizzuality/cartodb/issues/931
|
||||
RAISE WARNING 'Could not clean input geometry: %', SQLERRM;
|
||||
RETURN NULL;
|
||||
END;
|
||||
latlon_input := @postgisschema@.ST_CollectionExtract(latlon_input, ST_Dimension(geom)+1);
|
||||
END IF;
|
||||
|
||||
-- Then we clip, trying to retain the input type
|
||||
-- TODO: catch exceptions here too ?
|
||||
clipped_input := @postgisschema@.ST_Intersection(latlon_input, valid_extent);
|
||||
|
||||
-- We transform to web mercator
|
||||
to_webmercator := @postgisschema@.ST_Transform(clipped_input, 3857);
|
||||
|
||||
-- Finally we convert EMPTY to NULL
|
||||
-- See https://github.com/Vizzuality/cartodb/issues/706
|
||||
-- And retain "multi" status
|
||||
ret := CASE WHEN @postgisschema@.ST_IsEmpty(to_webmercator) THEN NULL::@postgisschema@.geometry
|
||||
WHEN @postgisschema@.GeometryType(geom) LIKE 'MULTI%' THEN @postgisschema@.ST_Multi(to_webmercator)
|
||||
ELSE to_webmercator
|
||||
END;
|
||||
|
||||
RETURN ret;
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL UNSAFE;
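-- Example usage (a WGS84 point is reprojected; geometries outside the valid
-- latitude range are clipped instead of failing):
--   SELECT @postgisschema@.ST_AsText(
--     @extschema@.CDB_TransformToWebmercator(
--       @postgisschema@.ST_SetSRID(@postgisschema@.ST_MakePoint(-3.7, 40.4), 4326)));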
|
28
lib/sql/scripts-available/CDB_UserTables.sql
Normal file
@ -0,0 +1,28 @@
|
||||
-- Function returning list of cartodb user tables
|
||||
--
|
||||
-- The optional argument restricts the result to tables
|
||||
-- of the specified access type.
|
||||
--
|
||||
-- Currently accepted permissions are: 'public', 'private' or 'all'
|
||||
--
|
||||
DROP FUNCTION IF EXISTS @extschema@.CDB_UserTables(text);
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_UserTables(perm text DEFAULT 'all')
|
||||
RETURNS SETOF name
|
||||
AS $$
|
||||
|
||||
SELECT c.relname
|
||||
FROM pg_class c
|
||||
JOIN pg_namespace n ON n.oid = c.relnamespace
|
||||
WHERE c.relkind = 'r'
|
||||
AND c.relname NOT IN ('cdb_tablemetadata', 'cdb_analysis_catalog', 'cdb_conf', 'spatial_ref_sys')
|
||||
AND n.nspname NOT IN ('pg_catalog', 'information_schema', 'topology', '@extschema@')
|
||||
AND CASE WHEN perm = 'public' THEN has_table_privilege('publicuser', c.oid, 'SELECT')
|
||||
WHEN perm = 'private' THEN has_table_privilege(current_user, c.oid, 'SELECT') AND NOT has_table_privilege('publicuser', c.oid, 'SELECT')
|
||||
WHEN perm = 'all' THEN has_table_privilege(current_user, c.oid, 'SELECT') OR has_table_privilege('publicuser', c.oid, 'SELECT')
|
||||
ELSE false END;
|
||||
|
||||
$$ LANGUAGE 'sql' STABLE PARALLEL SAFE;
|
||||
|
||||
-- This is to migrate from pre-0.2.0 version
|
||||
-- See http://github.com/CartoDB/cartodb-postgresql/issues/36
|
||||
GRANT EXECUTE ON FUNCTION @extschema@.CDB_UserTables(text) TO public;
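-- Example usage (tables readable by 'publicuser' only):
--   SELECT * FROM @extschema@.CDB_UserTables('public');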
|
6
lib/sql/scripts-available/CDB_Username.sql
Normal file
@ -0,0 +1,6 @@
|
||||
-- Returns the cartodb username of the current PostgreSQL session
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_Username()
|
||||
RETURNS text
|
||||
AS $$
|
||||
SELECT @extschema@.CDB_Conf_GetConf(CONCAT('api_keys_', session_user))->>'username';
|
||||
$$ LANGUAGE SQL STABLE PARALLEL SAFE SECURITY DEFINER;
|
62
lib/sql/scripts-available/CDB_XYZ.sql
Normal file
@ -0,0 +1,62 @@
|
||||
-- {
|
||||
-- Return pixel resolution at the given zoom level
|
||||
-- }{
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_XYZ_Resolution(z INTEGER)
|
||||
RETURNS FLOAT8
|
||||
AS $$
|
||||
-- circumference divided by 256 is z0 resolution, then divide by 2^z
|
||||
SELECT 6378137.0*2.0*pi() / 256.0 / power(2.0, z);
|
||||
$$ LANGUAGE SQL IMMUTABLE PARALLEL SAFE STRICT;
|
||||
-- }
|
||||
|
||||
-- {
|
||||
-- Returns a polygon representing the bounding box of a given XYZ tile
|
||||
--
|
||||
-- SRID of the returned polygon is forcibly 3857
|
||||
--
|
||||
-- }{
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_XYZ_Extent(x INTEGER, y INTEGER, z INTEGER)
|
||||
RETURNS GEOMETRY
|
||||
AS $$
|
||||
DECLARE
|
||||
origin_shift FLOAT8;
|
||||
initial_resolution FLOAT8;
|
||||
tile_geo_size FLOAT8;
|
||||
pixres FLOAT8;
|
||||
xmin FLOAT8;
|
||||
ymin FLOAT8;
|
||||
xmax FLOAT8;
|
||||
ymax FLOAT8;
|
||||
earth_circumference FLOAT8;
|
||||
tile_size INTEGER;
|
||||
BEGIN
|
||||
|
||||
-- Size of each tile in pixels (1:1 aspect ratio)
|
||||
tile_size := 256;
|
||||
|
||||
initial_resolution := @extschema@.CDB_XYZ_Resolution(0);
|
||||
--RAISE DEBUG 'Initial resolution: %', initial_resolution;
|
||||
|
||||
origin_shift := (initial_resolution * tile_size) / 2.0;
|
||||
-- RAISE DEBUG 'Origin shift (after): %', origin_shift;
|
||||
|
||||
pixres := initial_resolution / (power(2,z));
|
||||
--RAISE DEBUG 'Pixel resolution: %', pixres;
|
||||
|
||||
tile_geo_size = tile_size * pixres;
|
||||
--RAISE DEBUG 'Tile_geo_size: %', tile_geo_size;
|
||||
|
||||
xmin := -origin_shift + x*tile_geo_size;
|
||||
xmax := -origin_shift + (x+1)*tile_geo_size;
|
||||
--RAISE DEBUG 'xmin: %', xmin;
|
||||
--RAISE DEBUG 'xmax: %', xmax;
|
||||
|
||||
ymin := origin_shift - y*tile_geo_size;
|
||||
ymax := origin_shift - (y+1)*tile_geo_size;
|
||||
--RAISE DEBUG 'ymin: %', ymin;
|
||||
--RAISE DEBUG 'ymax: %', ymax;
|
||||
|
||||
RETURN @postgisschema@.ST_MakeEnvelope(xmin, ymin, xmax, ymax, 3857);
|
||||
END
|
||||
$$ LANGUAGE 'plpgsql' IMMUTABLE STRICT PARALLEL SAFE;
|
||||
-- }
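-- Example usage (extent of the single zoom-0 tile and the pixel resolution at zoom 4):
--   SELECT @postgisschema@.ST_AsText(@extschema@.CDB_XYZ_Extent(0, 0, 0)),
--          @extschema@.CDB_XYZ_Resolution(4);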
|
36
lib/sql/scripts-available/CDB_ZoomFromScale.sql
Normal file
@ -0,0 +1,36 @@
|
||||
-- Maximum supported zoom level
|
||||
CREATE OR REPLACE FUNCTION @extschema@._CDB_MaxSupportedZoom()
|
||||
RETURNS int
|
||||
LANGUAGE SQL
|
||||
IMMUTABLE PARALLEL SAFE
|
||||
AS $$
|
||||
-- The maximum zoom level has to be limited for various reasons,
|
||||
-- e.g. zoom levels greater than 31 would require tile coordinates
|
||||
-- that would not fit in an INTEGER (which is signed, 32 bits long).
|
||||
-- We'll choose 29 as a limit, which is safe also when the JavaScript shift
|
||||
-- operator (<<) is used for computing powers of two.
|
||||
SELECT 29;
|
||||
$$;
|
||||
|
||||
CREATE OR REPLACE FUNCTION @extschema@.CDB_ZoomFromScale(scaleDenominator numeric)
|
||||
RETURNS int
|
||||
LANGUAGE SQL
|
||||
IMMUTABLE PARALLEL SAFE
|
||||
AS $$
|
||||
SELECT
|
||||
CASE
|
||||
WHEN scaleDenominator > 600000000 THEN
|
||||
-- Scale is smaller than zoom level 0
|
||||
NULL
|
||||
WHEN scaleDenominator = 0 THEN
|
||||
-- Actual zoom level would be infinite
|
||||
@extschema@._CDB_MaxSupportedZoom()
|
||||
ELSE
|
||||
CAST (
|
||||
LEAST(
|
||||
ROUND(LOG(2, 559082264.028/scaleDenominator)),
|
||||
@extschema@._CDB_MaxSupportedZoom()
|
||||
)
|
||||
AS INTEGER)
|
||||
END;
|
||||
$$;
|
1
lib/sql/scripts-enabled/000-CDB_DateToNumber.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_DateToNumber.sql
1
lib/sql/scripts-enabled/010-CDB_DigitSeparator.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_DigitSeparator.sql
1
lib/sql/scripts-enabled/020-CDB_HeadsTailsBins.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_HeadsTailsBins.sql
1
lib/sql/scripts-enabled/030-CDB_Hexagon.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_Hexagon.sql
1
lib/sql/scripts-enabled/040-CDB_JenksBins.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_JenksBins.sql
1
lib/sql/scripts-enabled/050-CDB_LatLng.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_LatLng.sql
1
lib/sql/scripts-enabled/060-CDB_QuantileBins.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_QuantileBins.sql
1
lib/sql/scripts-enabled/070-CDB_QueryStatements.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_QueryStatements.sql
1
lib/sql/scripts-enabled/080-CDB_QueryTables.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_QueryTables.sql
1
lib/sql/scripts-enabled/085-CDB_OverviewsSupport.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_OverviewsSupport.sql
1
lib/sql/scripts-enabled/090-CDB_Quota.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_Quota.sql
1
lib/sql/scripts-enabled/100-CDB_RandomTids.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_RandomTids.sql
1
lib/sql/scripts-enabled/110-CDB_RectangleGrid.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_RectangleGrid.sql
1
lib/sql/scripts-enabled/120-CDB_StringToDate.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_StringToDate.sql
1
lib/sql/scripts-enabled/130-CDB_TableMetadata.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_TableMetadata.sql
1
lib/sql/scripts-enabled/140-CDB_TransformToWebmercator.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_TransformToWebmercator.sql
1
lib/sql/scripts-enabled/150-CDB_UserTables.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_UserTables.sql
1
lib/sql/scripts-enabled/160-CDB_XYZ.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_XYZ.sql
1
lib/sql/scripts-enabled/170-CDB_ColumnNames.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_ColumnNames.sql
1
lib/sql/scripts-enabled/180-CDB_ColumnType.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_ColumnType.sql
1
lib/sql/scripts-enabled/190-CDB_CartodbfyTable.sql
Symbolic link
@ -0,0 +1 @@
../scripts-available/CDB_CartodbfyTable.sql