7 changes: 7 additions & 0 deletions util/csvresponse.py
@@ -137,6 +137,13 @@ def _create_csv(self, dataset, filehandle):
         # filter the dataset
         dataset = dataset.drop(drop.intersection(dataset))
 
+        # rename the parameter without the stream_name prefix (12544 AC1)
Contributor:
For CSV and JSON, the response does not carry the per-variable metadata that NetCDF has to identify the actual stream a parameter came from. NetCDF can get away with removing the stream name from the variable because it has additional metadata attributes that identify the true source of the data.

Removing the stream name changes the interface of the data, which may break user-developed m2m tools. Before implementing this change for CSV and JSON, we should probably get feedback from the user community on whether this is what they want.

Contributor Author:

Agreed. Will pull the name changes and update the ticket to request input/direction from the user community.

Contributor Author:

I think this will also correct the test failure.

+        for extern in self.stream_request.external_includes:
+            for param in self.stream_request.external_includes[extern]:
+                long_name = '-'.join((extern.stream_name, param.name))
+                if long_name in dataset.data_vars:
+                    dataset = dataset.rename({long_name: param.name})
+
         # write the CSV -- note to_csv() returns None
         dataset.to_dataframe().to_csv(path_or_buf=filehandle, sep=self.delimiter)
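Outside the diff context, the rename-before-export step above can be exercised in isolation. This is only an illustrative sketch: it assumes an xarray `Dataset`, and the variable name `ctdpf_ckl_wfp_instrument-pressure` and its values are hypothetical.

```python
import io

import xarray as xr

# Hypothetical miniature dataset: one external parameter whose variable name
# still carries a stream_name prefix, as described in the review discussion.
ds = xr.Dataset({'ctdpf_ckl_wfp_instrument-pressure': ('obs', [10.1, 10.2])})

long_name = 'ctdpf_ckl_wfp_instrument-pressure'
if long_name in ds.data_vars:
    # strip the stream_name prefix so the CSV column is just the parameter name
    ds = ds.rename({long_name: 'pressure'})

buf = io.StringIO()
ds.to_dataframe().to_csv(path_or_buf=buf, sep=',')
print(buf.getvalue().splitlines()[0])  # obs,pressure
```

The CSV header now shows the short parameter name, which is exactly the interface change the reviewer flags as potentially breaking m2m tools that expect the prefixed name.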

10 changes: 7 additions & 3 deletions util/jsonresponse.py
@@ -130,14 +130,18 @@ def _deployment_particles(self, ds, stream_key, parameters, external_includes):
         if lat_data is not None and lon_data is not None:
             data['lat'] = lat_data
             data['lon'] = lon_data
-            params.extend(('lat', 'lon'))
+        # always include lat/lon
+        params.extend(('lat', 'lon'))
Contributor (@danmergens, Jul 29, 2018):
Is it correct to include lat/lon when we do not have values for them?

Contributor Author:

Lines 127-132 apply to gliders only and essentially rename GPS-based lat/lon to simply lat and lon. Some non-glider data does have lat/lon, e.g. GI01SUMO_RID16_02_FLORTD000, which was not being reported. At this point we are actually saying we want to include lat/lon; if they are not available, they are reported as missing in lines 161-163 and removed from the requested param list in line 165.

I think it is more efficient to do this than to attempt to determine ahead of time whether lat/lon are available.
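The report-missing-then-drop behavior described in this reply can be sketched standalone. Everything here is hypothetical: the helper name, the fill value, and the data are stand-ins, not the actual implementation in jsonresponse.py.

```python
FILL_VALUE = -9999999  # hypothetical fill value for unavailable data

def report_and_drop_missing(data, params):
    # Parameters that were requested but have no values are reported with a
    # fill value, then removed from the requested param list.
    missing = [p for p in params if p not in data]
    for p in missing:
        data[p] = FILL_VALUE
    return [p for p in params if p not in missing]

data = {'time': [1, 2]}              # no lat/lon values for this instrument
params = ['time', 'lat', 'lon']      # lat/lon are always requested up front
params = report_and_drop_missing(data, params)
print(params)  # ['time']
```

Requesting lat/lon unconditionally and cleaning up afterward avoids a second pass over the dataset just to decide whether they are available.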

 
-        # remaining externals
+        # remaining externals. rename the parameter without the stream_name prefix (12544 AC1)
         for sk in external_includes:
             for param in external_includes[sk]:
                 name = '-'.join((sk.stream_name, param.name))
                 if name in data:
-                    params.append(name)
+                    extern_data = data.pop(name)
+                    data[param.name] = extern_data
+                    params.append(param.name)
 
         if self.stream_request.include_provenance:
             params.append('provenance')
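As a standalone illustration of the pop-and-rename step in this hunk, with minimal hypothetical stubs standing in for the real stream-key and parameter objects:

```python
# Hypothetical minimal stand-ins for the real stream key / parameter objects.
class StreamKey(object):
    def __init__(self, stream_name):
        self.stream_name = stream_name

class Param(object):
    def __init__(self, name):
        self.name = name

sk = StreamKey('glider_gps_position')
external_includes = {sk: [Param('lat'), Param('lon')]}
data = {'glider_gps_position-lat': [50.0], 'glider_gps_position-lon': [-40.0]}
params = []

for key in external_includes:
    for param in external_includes[key]:
        name = '-'.join((key.stream_name, param.name))
        if name in data:
            # move the values to the unprefixed name and record that name
            data[param.name] = data.pop(name)
            params.append(param.name)

print(sorted(data), params)  # ['lat', 'lon'] ['lat', 'lon']
```

After the loop, only the short parameter names remain in `data`, matching the CSV-side rename and raising the same interface-change concern discussed above.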