Detailed Description
It currently takes ~6 seconds per GSP to read the forecast value table when looking at an X hour forecast.
The query at the moment is:
SELECT DISTINCT ON (forecast_value.target_time)
       forecast_value.created_utc AS forecast_value_created_utc,
       forecast_value.uuid AS forecast_value_uuid,
       forecast_value.target_time AS forecast_value_target_time,
       forecast_value.expected_power_generation_megawatts AS forecast_value_expected_power_generation_megawatts,
       forecast_value.forecast_id AS forecast_value_forecast_id
FROM forecast_value
JOIN forecast ON forecast.id = forecast_value.forecast_id
JOIN location ON location.id = forecast.location_id
WHERE location.gsp_id = 105
  AND forecast_value.target_time >= '2022-10-30'
  AND forecast_value.created_utc >= '2022-10-29'
  AND forecast_value.target_time - forecast_value.created_utc >= interval '240 minute'
  AND forecast_value.created_utc < '2022-10-31 13:01'
  AND forecast.created_utc >= '2022-10-30'
ORDER BY forecast_value.target_time, forecast_value.created_utc DESC;
We have already partitioned this table, but there are about 36 million rows per month, so by the end of the month queries become slow: ~6 seconds each. Note that running the same query again is quicker, since the database caches recently read pages.
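As a first diagnostic (not part of the original issue), running the query under `EXPLAIN (ANALYZE, BUFFERS)` would show whether the ~6 seconds goes into scanning partitions, the joins, or the `DISTINCT ON` sort. The table and column names below match the query above; the rest is a sketch, abbreviated to a few output columns.

```sql
-- Sketch: wrap the slow query in EXPLAIN to see where the time goes.
-- BUFFERS reports how much is read from disk vs. shared buffers, which
-- would also confirm why a repeated query runs faster (caching).
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (forecast_value.target_time)
       forecast_value.target_time,
       forecast_value.expected_power_generation_megawatts
FROM forecast_value
JOIN forecast ON forecast.id = forecast_value.forecast_id
JOIN location ON location.id = forecast.location_id
WHERE location.gsp_id = 105
  AND forecast_value.target_time >= '2022-10-30'
  AND forecast_value.created_utc >= '2022-10-29'
  AND forecast_value.created_utc < '2022-10-31 13:01'
ORDER BY forecast_value.target_time, forecast_value.created_utc DESC;
```

If the plan shows a sequential scan over many partitions, that points at the filter columns rather than the join as the bottleneck.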
Context
- The FE app is slow due to this slow read
Possible Implementation
- Add a datetime_interval join to the forecast_value table. This means we would filter on these values and there would be no duplicate datetimes. This is fairly common practice in databases.
- To test if this works, we could manually:
1. add a column to a partition table: ALTER TABLE forecast_value ADD datetime_interval_id int;
2. update and add datetime_intervals
3. then run the updated query to see if it speeds up. If it does, we can then add it in.
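The three manual test steps above might look roughly like this in SQL. The shape of the datetime_interval table (an id plus start/end datetime columns) is an assumption, since the issue does not define it; all column names on that table are hypothetical.

```sql
-- Step 1 (from the issue): add the new column to the partition table.
ALTER TABLE forecast_value ADD datetime_interval_id int;

-- Step 2 (sketch, assumed schema): point each forecast_value row at the
-- datetime_interval its target_time falls into. start_datetime_utc and
-- end_datetime_utc are assumed column names.
UPDATE forecast_value fv
SET datetime_interval_id = di.id
FROM datetime_interval di
WHERE fv.target_time >= di.start_datetime_utc
  AND fv.target_time <  di.end_datetime_utc;

-- Step 3 (sketch): rewrite the read query to filter via the new join
-- instead of on target_time directly, then compare timings against the
-- original query.
SELECT DISTINCT ON (di.start_datetime_utc)
       fv.expected_power_generation_megawatts,
       di.start_datetime_utc
FROM forecast_value fv
JOIN datetime_interval di ON di.id = fv.datetime_interval_id
JOIN forecast f ON f.id = fv.forecast_id
JOIN location l ON l.id = f.location_id
WHERE l.gsp_id = 105
  AND di.start_datetime_utc >= '2022-10-30'
ORDER BY di.start_datetime_utc, fv.created_utc DESC;
```

An index on forecast_value.datetime_interval_id would likely be needed for the joined version to be faster than the original.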