wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s

Query the Data Delivery Network

The easiest way to query any data on Splitgraph is via the "Data Delivery Network" (DDN). The DDN is a single endpoint that speaks the PostgreSQL wire protocol. Any Splitgraph user can connect to it at data.splitgraph.com:5432 and query any version of over 40,000 datasets that are hosted or proxied by Splitgraph.
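Assuming you have psql installed, connecting is a one-liner. The ddn database name and the API key / secret credential pair follow Splitgraph's standard connection string; the placeholders below are illustrative, so substitute the values from your own account settings:

psql "postgresql://YOUR_API_KEY:YOUR_API_SECRET@data.splitgraph.com:5432/ddn"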

For example, you can query the wdfwcreel_analysis_effort_count table in this repository by referencing it like:

"wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest"."wdfwcreel_analysis_effort_count"

or in a full query, like:

SELECT
    ":id", -- Socrata column ID
    "direct_census_bank", -- The assumed spatial expansion of boat anglers in a particular section used when census count data are unavailable 
    "creel_event_id", -- Primary key id value for record in creel_event table
    "water_body", -- Waterbody where effort counts were conducted
    "created_datetime", -- Date record was created
    "count_quantity", -- Quantity of how many of specified type were counted
    "survey_type", -- Defines if the location is used as an index or census
    "surveyor_num", -- Survey number assigned to each creel survey location 
    "p_census_boat", -- The asssumed proportion of bank anglers in a particular section that are surveyed during a census effort count 
    "section_num", -- Section number assigned to survey location 
    "location_season_name", -- Optional name for location specicic to the fishery 
    "tie_in_indicator", -- True / false indicator if the counts were part of a tie-in or census survey
    "location_id", -- Effort location id value 
    "effort_end_time", -- Time the count ended
    "effort_start_time", -- Time the count started
    "indirect_census_bank", -- The assumed spatial expansion of bank anglers in a particular section used when census count data are unavailable 
    "location", -- Effort location text description
    "project_name", -- Data collection project name
    "fishery_name", -- Name of fishery being monitored 
    "comments", -- Comments pertaining to the effort count
    "effort_event_id", -- Primary key id value for record in effort_event table
    "event_date", -- Date effort counts were conducted
    "modified_datetime", -- Date record was modified
    "count_sequence", -- Defines the sequential order when a location is visited multiple times in a day
    "no_count_reason", -- Reason no count was conducted if applicable
    "location_type", -- Defines if the location is a discrete site or a broader section 
    "p_census_bank", -- The asssumed proportion of bank anglers in a particular section that are surveyed during a census effort count 
    "count_type" -- Description of what is being counted
FROM
    "wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest"."wdfwcreel_analysis_effort_count"
LIMIT 100;
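Beyond a raw preview, you can run aggregations directly on the DDN. As an illustrative sketch (assuming count_quantity is numeric, per its description above), this query totals counted units per waterbody:

SELECT
    "water_body",
    sum("count_quantity") AS total_counted
FROM
    "wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest"."wdfwcreel_analysis_effort_count"
GROUP BY
    "water_body"
ORDER BY
    total_counted DESC
LIMIT 10;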

Connecting to the DDN is easy: all you need is an existing SQL client that can connect to Postgres. With a client ready, you can query wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s with SQL in under 60 seconds.

Query Your Local Engine

Install Splitgraph Locally
bash -c "$(curl -sL https://github.com/splitgraph/splitgraph/releases/latest/download/install.sh)"
 

Read the installation docs.

Splitgraph Cloud is built around Splitgraph Core (GitHub), which includes a local Splitgraph Engine packaged as a Docker image. Splitgraph Cloud is basically a scaled-up version of that local Engine. When you query the Data Delivery Network or the REST API, we mount the relevant datasets in an Engine on our servers and execute your query on it.

It's possible to run this engine locally. You'll need a Mac, Windows or Linux system to install sgr, and a Docker installation to run the engine. You don't need to know how to actually use Docker; sgr can manage the image, container and volume for you.
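For instance, once sgr is installed, a few commands manage the engine's lifecycle (exact flags can vary between sgr versions; see sgr engine --help):

sgr engine add     # pull the engine image and create a container
sgr engine stop    # stop the container
sgr engine start   # start it again later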

There are a few ways to ingest data into the local engine.

For external repositories, the Splitgraph Engine can "mount" upstream data sources by using sgr mount. This feature is built around Postgres Foreign Data Wrappers (FDW). You can write custom "mount handlers" for any upstream data source. For an example, we blogged about making a custom mount handler for HackerNews stories.
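As a sketch of what a mount might look like for a Socrata source (the repository name suggests an upstream of data.wa.gov with four-by-four ID h9a6-g38s, but treat the domain, ID and target schema here as illustrative assumptions):

sgr mount socrata wa_gov -o '{"domain": "data.wa.gov", "tables": {"effort_count": "h9a6-g38s"}}'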

For hosted datasets (like this repository), where the author has pushed Splitgraph Images to the repository, you can "clone" and/or "checkout" the data using sgr clone and sgr checkout.

Cloning Data

Because wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest is a Splitgraph Image, you can clone the data from Splitgraph Cloud to your local engine, where you can query it like any other Postgres database, using any of your existing tools.

First, install Splitgraph if you haven't already.

Clone the metadata with sgr clone

This will be quick, and does not download the actual data.

sgr clone wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s

Checkout the data

Once you've cloned the data, you need to "checkout" the tag that you want. For example, to check out the latest tag:

sgr checkout wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest

This will download all the objects for the latest tag of wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s and load them into the Splitgraph Engine. Depending on your connection speed and the size of the data, you will need to wait for the checkout to complete. Once it's complete, you will be able to query the data like you would any other Postgres database.
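One quick way to verify the checkout is sgr's built-in sql command, which runs a statement against the local engine. After a checkout, the tables live in a schema named after the repository; the count query here is just an illustrative sanity check:

sgr sql 'SELECT count(*) FROM "wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s"."wdfwcreel_analysis_effort_count"'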

Alternatively, use "layered checkout" to avoid downloading all the data

If the data in wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest is too big to download all at once, or you only need to query a subset of it, you can use a layered checkout:

sgr checkout --layered wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s:latest

This will not download all the data, but instead create a schema of foreign tables that you can query as you would any other data. Splitgraph will lazily download the required objects as you query them. In some cases, this can be faster or more efficient than a regular checkout.
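For example, a selective query like the following sketch (assuming event_date is a date column, per its description above) may only touch a fraction of the dataset's objects:

SELECT
    "water_body",
    "event_date",
    "count_quantity"
FROM
    "wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s"."wdfwcreel_analysis_effort_count"
WHERE
    "event_date" >= '2023-01-01'
LIMIT 100;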

Read the layered querying documentation to learn about when and why you might want to use layered queries.

Query the data with your existing tools

Once you've loaded the data into your local Splitgraph Engine, you can query it with any of your existing tools. As far as they're concerned, wa-gov/wdfwcreel-analysis-effort-count-h9a6-g38s is just another Postgres schema.
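For example, assuming the default local engine settings (the sgr user, the splitgraph database and port 5432; your actual password is configured when you add the engine), a psql session looks like any other Postgres connection:

psql "postgresql://sgr:YOUR_ENGINE_PASSWORD@localhost:5432/splitgraph"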
