Phenocam
In [1]:
from springtime.datasets.insitu.phenocam import PhenocamrSite, PhenocamrBoundingBox, list_sites, list_rois
Retrieve metadata for the "harvard" site.
In [2]:
sites = list_sites()
sites[sites.site == "harvard"]
Out[2]:
| | site | elev | contact1 | contact2 | date_start | date_end | nimage | tzoffset | active | infrared | ... | MAP_worldclim | dominant_species | primary_veg_type | secondary_veg_type | koeppen_geiger | ecoregion | wwf_biome | landcover_igbp | site_acknowledgements | geometry |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 207 | harvard | 340 | Andrew Richardson <andrew DOT richardson AT na... | Bill Munger <jwmunger AT seas DOT harvard DOT ... | 2008-04-04 | 2023-04-12 | 180576 | -5 | True | N | ... | 1139.0 | Quercus rubra, Acer rubrum, Pinus strobus | DB | EN | Dfb | 5 | 4 | 5 | The Harvard EMS site is supported is an AmeriF... | POINT (-72.17150 42.53780) |
1 rows × 34 columns
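Since `list_sites()` returns a dataframe with a geometry column, you can filter on any of its attributes. A minimal sketch, assuming the `active` and `primary_veg_type` columns behave as shown above:

```python
# Sketch: select active deciduous-broadleaf (DB) sites.
# Column names are taken from the sites table shown above.
active_db_sites = sites[(sites.active) & (sites.primary_veg_type == "DB")]
active_db_sites[["site", "date_start", "date_end", "geometry"]].head()
```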
List the ROIs (regions of interest) of the harvard site.
In [3]:
rois = list_rois()
rois[rois.site == "harvard"]
Out[3]:
| | site | veg_type | roi_id_number | description | first_date | last_date | site_years | missing_data_pct | geometry |
|---|---|---|---|---|---|---|---|---|---|
| 284 | harvard | DB | 1 | Deciduous trees in foreground | 2008-04-04 | 2023-04-11 | 15.0 | 0 | POINT (-72.17150 42.53780) |
| 285 | harvard | DB | 1000 | Deciduous trees in foreground | 2008-04-04 | 2023-04-11 | 15.0 | 0 | POINT (-72.17150 42.53780) |
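The ROI table can be filtered the same way, for example to keep only long-running ROIs with little missing data. A sketch, assuming the `site_years` and `missing_data_pct` columns behave as shown above:

```python
# Sketch: keep deciduous-broadleaf (DB) ROIs with at least 10 site-years
# and no missing data. Column names come from the ROI table above.
good_rois = rois[
    (rois.veg_type == "DB")
    & (rois.site_years >= 10)
    & (rois.missing_data_pct == 0)
]
good_rois.head()
```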
In [4]:
# Use $ in site name to get an exact match
dataset = PhenocamrSite(site='harvard$', years=(2010,2015))
dataset
Out[4]:
PhenocamrSite(dataset='phenocam', years=YearRange(start=2010, end=2015), site='harvard$', veg_type=None, frequency='3', rois=None)
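The repr above shows the optional fields of `PhenocamrSite`. A hedged sketch of a more constrained query; the values passed to `veg_type`, `frequency`, and `rois` here are assumptions inferred from the defaults in the repr and from the ROI table, not confirmed API documentation:

```python
# Sketch: restrict the query to one vegetation type, the 3-day product,
# and a single ROI. Parameter values are assumptions (see lead-in).
dataset_db = PhenocamrSite(
    site='harvard$',
    years=(2010, 2015),
    veg_type='DB',    # deciduous broadleaf, as listed in the ROI table
    frequency='3',    # 3-day product, the default shown in the repr
    rois=[1000],      # ROI id number from list_rois()
)
```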
In [5]:
# Download the data
dataset.download()
In [6]:
# Load the downloaded dataset as a dataframe
df = dataset.load()
df.head()
Out[6]:
| | site | roi_id_number | veg_type | date | year | doy | image_count | midday_filename | midday_r | midday_g | ... | smooth_ci_gcc_mean | smooth_ci_gcc_50 | smooth_ci_gcc_75 | smooth_ci_gcc_90 | smooth_ci_rcc_mean | smooth_ci_rcc_50 | smooth_ci_rcc_75 | smooth_ci_rcc_90 | int_flag | geometry |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | harvard | 0001 | DB | 2010-01-01 | 2010 | 1 | NaN | NaN | NaN | NaN | ... | 0.00307 | 0.00305 | 0.00320 | 0.00372 | 0.00735 | 0.00812 | 0.00710 | 0.00670 | NaN | POINT (-72.17150 42.53780) |
| 1 | harvard | 0001 | DB | 2010-01-02 | 2010 | 2 | 26.0 | harvard_2010_01_02_120137.jpg | 52.19172 | 71.35788 | ... | 0.00304 | 0.00302 | 0.00316 | 0.00368 | 0.00728 | 0.00804 | 0.00703 | 0.00664 | NaN | POINT (-72.17150 42.53780) |
| 2 | harvard | 0001 | DB | 2010-01-03 | 2010 | 3 | NaN | NaN | NaN | NaN | ... | 0.00304 | 0.00302 | 0.00317 | 0.00368 | 0.00728 | 0.00804 | 0.00704 | 0.00664 | NaN | POINT (-72.17150 42.53780) |
| 3 | harvard | 0001 | DB | 2010-01-04 | 2010 | 4 | NaN | NaN | NaN | NaN | ... | 0.00308 | 0.00306 | 0.00320 | 0.00372 | 0.00737 | 0.00814 | 0.00712 | 0.00672 | NaN | POINT (-72.17150 42.53780) |
| 4 | harvard | 0001 | DB | 2010-01-05 | 2010 | 5 | 39.0 | harvard_2010_01_05_120139.jpg | 54.79733 | 68.07477 | ... | 0.00310 | 0.00309 | 0.00323 | 0.00376 | 0.00743 | 0.00821 | 0.00718 | 0.00677 | NaN | POINT (-72.17150 42.53780) |
5 rows × 53 columns
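With the data in a dataframe, a quick plot of one of the smoothed columns shows the seasonal cycle. A minimal sketch using only columns visible in the table above; a full phenocamr frame typically also contains gcc_90/smooth_gcc_90 columns, but that is an assumption here:

```python
import matplotlib.pyplot as plt

# Sketch: plot one of the smoothed columns shown above against date.
df.plot(x="date", y="smooth_ci_gcc_90", figsize=(10, 4), legend=False)
plt.ylabel("smooth_ci_gcc_90")
plt.title("harvard, 2010-2015")
plt.show()
```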
Data from sites around harvard
In [7]:
dataset = PhenocamrBoundingBox(area={'name': 'harvard', 'bbox': [-73, 42, -72, 43]}, years=[2019, 2020])
dataset
Out[7]:
PhenocamrBoundingBox(dataset='phenocambbox', years=YearRange(start=2019, end=2020), area=NamedArea(name='harvard', bbox=BoundingBox(xmin=-73.0, ymin=42.0, xmax=-72.0, ymax=43.0)), veg_type=None, frequency='3')
In [8]:
# Download the data
dataset.download()
R[write to console]: Downloading: bbc1_DB_1000_3day.csv
R[write to console]: -- Flagging outliers!
R[write to console]: -- Smoothing time series!
R[write to console]: Downloading: bbc2_DB_1000_3day.csv
R[write to console]: -- Flagging outliers!
R[write to console]: -- Smoothing time series!
...
R[write to console]: Downloading: witnesstree_DB_1000_3day.csv
R[write to console]: -- Flagging outliers!
R[write to console]: -- Smoothing time series!
(the same three messages repeat for every ROI time series in the bounding box, covering the harvardbarn, harvardblo, harvardems2, harvardfarm, harvardgarden, harvardhemlock, harvardlph, macleish, NEON.D01.HARV, and springfieldma ROIs)
In [9]:
# Load the downloaded dataset as a dataframe
df = dataset.load()
df.head()
Out[9]:
| | site | roi_id_number | veg_type | date | year | doy | image_count | midday_filename | midday_r | midday_g | ... | smooth_ci_gcc_mean | smooth_ci_gcc_50 | smooth_ci_gcc_75 | smooth_ci_gcc_90 | smooth_ci_rcc_mean | smooth_ci_rcc_50 | smooth_ci_rcc_75 | smooth_ci_rcc_90 | int_flag | geometry |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | bbc1 | 1000 | DB | 2019-01-01 | 2019 | 1 | NaN | NaN | NaN | NaN | ... | 0.00296 | 0.00310 | 0.00318 | 0.00323 | 0.00934 | 0.00959 | 0.00974 | 0.00942 | NaN | POINT (-72.17436 42.53508) |
| 1 | bbc1 | 1000 | DB | 2019-01-02 | 2019 | 2 | 47.0 | bbc1_2019_01_02_120002.jpg | 84.86659 | 85.59192 | ... | 0.00287 | 0.00301 | 0.00309 | 0.00313 | 0.00906 | 0.00930 | 0.00944 | 0.00913 | NaN | POINT (-72.17436 42.53508) |
| 2 | bbc1 | 1000 | DB | 2019-01-03 | 2019 | 3 | NaN | NaN | NaN | NaN | ... | 0.00288 | 0.00302 | 0.00310 | 0.00314 | 0.00911 | 0.00935 | 0.00949 | 0.00918 | NaN | POINT (-72.17436 42.53508) |
| 3 | bbc1 | 1000 | DB | 2019-01-04 | 2019 | 4 | NaN | NaN | NaN | NaN | ... | 0.00300 | 0.00314 | 0.00322 | 0.00327 | 0.00946 | 0.00971 | 0.00986 | 0.00954 | NaN | POINT (-72.17436 42.53508) |
| 4 | bbc1 | 1000 | DB | 2019-01-05 | 2019 | 5 | 81.0 | bbc1_2019_01_05_120004.jpg | 60.30106 | 67.32647 | ... | 0.00308 | 0.00322 | 0.00331 | 0.00336 | 0.00972 | 0.00998 | 0.01013 | 0.00980 | NaN | POINT (-72.17436 42.53508) |
5 rows × 53 columns
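The bounding-box result mixes many sites and vegetation types; a quick groupby gives an overview of what was retrieved. A sketch using only columns visible in the table above:

```python
# Sketch: count observations per site / vegetation type / ROI in the
# bounding-box result.
df.groupby(["site", "veg_type", "roi_id_number"]).size()
```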