Looker
Prerequisites
Follow the installation instructions here
Get credentials with permissions to call the Looker API
Run extraction script
Once the package has been installed, you should be able to run the following command in your terminal:
castor-extract-looker [arguments]
The script will run and display logs as follows:
INFO - Extracting users from Looker API
INFO - POST(https://cloud.looker.com/api/4.0/login)
INFO - GET(https://cloud.looker.com/api/4.0/users/search)
INFO - Fetched page 1 / 7 results
INFO - GET(https://catalog.cloud.looker.com/api/4.0/users/search)
INFO - Fetched page 2 / 0 results
...
INFO - Wrote output file: /tmp/catalog/1649079699-projects.json
INFO - Wrote output file: /tmp/catalog/1649079699-summary.json
Credentials
-c, --client-id: Looker API Client ID
-s, --client-secret: Looker API Client Secret
Other arguments
-b, --base-url: Looker base url
-o, --output: Target folder to store the extracted files
-t, --timeout: Timeout (in s) parameter for Looker API
--log-to-stdout: Will write all log outputs to stdout instead of stderr
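Putting the arguments above together, a full invocation might look like the following sketch. The base URL, output path, and timeout values are illustrative placeholders, and the credentials are assumed to be held in shell variables you have set yourself:

```shell
# Extract Looker metadata with explicit arguments (all values are placeholders)
castor-extract-looker \
  --base-url https://cloud.looker.com \
  --client-id "$LOOKER_CLIENT_ID" \
  --client-secret "$LOOKER_CLIENT_SECRET" \
  --output /tmp/catalog \
  --timeout 120 \
  --log-to-stdout
```

The short flags (`-b`, `-c`, `-o`, `-t`) can be used interchangeably with the long forms shown here.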
Specific exports methods for looks and dashboards
--search-per-folder: Will export looks and dashboards per folder using multithreading (see the argument below)
--thread-pool-size: Number of parallel threads, defaults to 20
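For instance, to export looks and dashboards per folder with a smaller pool of parallel threads (the pool size here is illustrative):

```shell
# Per-folder export with 10 parallel threads instead of the default 20
castor-extract-looker --search-per-folder --thread-pool-size 10
```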
Use ENV variables
If you don't want to specify arguments every time, you can set the following ENV variables in your .bashrc:
CASTOR_LOOKER_BASE_URL
CASTOR_LOOKER_CLIENT_ID
CASTOR_LOOKER_CLIENT_SECRET
CASTOR_OUTPUT_DIRECTORY
CASTOR_LOOKER_TIMEOUT_SECOND
# To log outputs to `stdout` instead of `stderr`
CASTOR_LOOKER_LOG_TO_STDOUT=TRUE
# To use search_per_folder multithreading
CASTOR_LOOKER_SEARCH_PER_FOLDER=TRUE
CASTOR_LOOKER_THREAD_POOL_SIZE=20
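In a .bashrc, these are set with `export` statements; the values below are placeholders to adapt to your own instance:

```shell
# Example .bashrc entries for castor-extract-looker (placeholder values)
export CASTOR_LOOKER_BASE_URL="https://cloud.looker.com"
export CASTOR_LOOKER_CLIENT_ID="your-client-id"
export CASTOR_LOOKER_CLIENT_SECRET="your-client-secret"
export CASTOR_OUTPUT_DIRECTORY="/tmp/catalog"
export CASTOR_LOOKER_TIMEOUT_SECOND=120
```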
Then the script can be executed without any arguments:
castor-extract-looker
It can also be executed with partial arguments (the script looks in your ENV as a fallback):
castor-extract-looker --output /tmp/catalog