Commercial post-processing steps in PostgreSQL are cumbersome, limited #124

@simularis

Commercial post-processing steps are required to get unit energy consumption or savings for BldgType = Com (representing a sector-weighted average value), but the current set of scripts provided with the repository is slow to work with and has some limitations.

  1. Learning curve and software setup. The PostgreSQL scripted workflow has a high learning curve and requires the user to install PostgreSQL and configure a database. This workflow is a legacy of a time when the results were managed in a central database and many measures and lots of data were processed at once.
  2. End-use breakdown. Predefined script outputs for BldgType = Com are limited to the fields defined in the P2-P5 SQL scripts, which makes it burdensome to add new fields such as multiple end-use categories for heating, cooling, DHW, and lighting.
  3. Helper script can help only so much. The helper script 'postgres-template-com.psql' can speed up the commercial post-processing workflow, but it is out of date and needs attention. Some debugging and additional changes in this and related files would help.
  4. Inefficient way to get UEC and UES for testing and iterating on measure setup. For building types other than Com, my personal experience is that I try to avoid the post-processing scripts. There are several steps before even getting to the point of running the database scripts where things can break down (modelkit rake results may be slow with a large number of models, and so may Com.py).

Background

The commercial post-processing steps in PostgreSQL outlined in the README file are performed manually.

### Post-processing steps for commercial measures:
1. Open the Python script Com.py in the **data transformation** directory.
2. In line 27, or the line defining “measure_name = ..”, specify the corresponding measure folder. In the example code, it is specified as “**SWXX111-00 Example_SEER_AC**”. This should be the same name as the folder name under the directory “commercial measures”. A corresponding measure record with matching cohort/case file names should be present in the workbook ***DEER_EnergyPlus_Modelkit_Measure_list_working.xlsx*** under the same directory.

   NOTE: the example directory **SWXX111-00 Example_SEER_AC** and all its subdirectories are only used to illustrate the workflow for post-processing. The case files and their parameters in this example directory do not reflect how an actual measure should be set up; they are only the most basic setup for a modelkit run.

   Update on 3/22/2024: the example directory **SWXX111-00 Example_SEER_AC** reflects the vintage consolidation update (with only Ex and New vintages).
3. If the table "wts_com_bldg.csv" is not consistent with the DEER weights table (DEER.BldgWts), run DEER_weights_extraction.py to extract the most up-to-date weights table needed for post-processing. Use the most up-to-date tables during the PostgreSQL steps.
4. Run the Python script to generate three CSV files: 'current_msr_mat.csv', 'sim_annual.csv', and 'sim_hourly_wb.csv'.
5. Load these three CSV files into the PostgreSQL database management software (see the residential section of the README, step 6).
6. Load the commercial support tables 'peakperspec.csv' and 'wts_com_bldg.csv' into the PostgreSQL database management software, in the same SQL schema as the three CSV tables (current_msr_mat, sim_annual, sim_hourly_wb) above. The support tables only need to be imported once.
7. Run the post-processing SQL queries P1 to P5 for commercial, located in 'scripts/energy savings/commercial/'.
8. Export 'meas_impacts_2024_com' as the output.
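The sector-weighted averaging that these steps ultimately produce for BldgType = Com can be sketched in plain Python. This is only an illustration of the weighting concept; the building-type codes and weight values below are hypothetical, and the real weights come from wts_com_bldg.csv (DEER.BldgWts) via the P1-P5 scripts:

```python
# Sketch of the sector weighting behind BldgType = Com results.
# Building types and weights below are hypothetical placeholders;
# the actual values come from wts_com_bldg.csv (DEER.BldgWts).

def sector_weighted_average(values, weights):
    """Weighted average of per-building-type results, e.g. UEC in kWh."""
    total_weight = sum(weights[b] for b in values)
    return sum(values[b] * weights[b] for b in values) / total_weight

# Example: unit energy consumption (kWh) by commercial building type
uec_kwh = {"OfL": 12000.0, "RtS": 8000.0, "Asm": 15000.0}
weights = {"OfL": 0.5, "RtS": 0.3, "Asm": 0.2}

com_uec = sector_weighted_average(uec_kwh, weights)
print(com_uec)  # 0.5*12000 + 0.3*8000 + 0.2*15000 = 11400.0
```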

Typical output fields

| Field name | Field description |
| --- | --- |
| APreWBkWh | Unit energy savings, Above Pre-existing case, Whole Building, electric (kWh) |
| APreWBkW | Unit energy savings, Above Pre-existing case, Whole Building, DEER peak period demand (kW) |
| APreWBtherm | Unit energy savings, Above Pre-existing case, Whole Building, natural gas (therm) |
| AStdWBkWh | Unit energy savings, Above Standard case, Whole Building, electric (kWh) |
| AStdWBkW | Unit energy savings, Above Standard case, Whole Building, DEER peak period demand (kW) |
| AStdWBtherm | Unit energy savings, Above Standard case, Whole Building, natural gas (therm) |
| APreUseWBkWh | Unit energy consumption (usage), Pre-existing case, Whole Building, electric (kWh) |
| APreUseWBtherm | Unit energy consumption (usage), Pre-existing case, Whole Building, natural gas (therm) |
| AStdUseWBkWh | Unit energy consumption (usage), Standard case, Whole Building, electric (kWh) |
| AStdUseWBtherm | Unit energy consumption (usage), Standard case, Whole Building, natural gas (therm) |
| AMsrUseWBkWh | Unit energy consumption (usage), Measure case, Whole Building, electric (kWh) |
| AMsrUseWBtherm | Unit energy consumption (usage), Measure case, Whole Building, natural gas (therm) |

To get another output field, the user must modify each of the scripts P2-P5, following the examples already present, and generate a compatible input file. This is not impossible, but it is time-consuming and prone to introducing new errors.
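To make the burden concrete, the kind of aggregation those script edits must add, such as an end-use breakdown, is conceptually simple. A minimal sketch in Python, with hypothetical column names standing in for the actual sim_annual schema:

```python
# Illustrative sketch only: totaling an end-use breakdown from
# sim_annual-style rows. The column names (case, enduse, kwh) are
# hypothetical stand-ins, not the actual table schema.
from collections import defaultdict

rows = [
    {"case": "Msr", "enduse": "Heating", "kwh": 500.0},
    {"case": "Msr", "enduse": "Cooling", "kwh": 900.0},
    {"case": "Msr", "enduse": "Lighting", "kwh": 1200.0},
    {"case": "Msr", "enduse": "Cooling", "kwh": 300.0},
]

by_enduse = defaultdict(float)
for row in rows:
    by_enduse[row["enduse"]] += row["kwh"]

print(dict(by_enduse))  # Cooling totals 1200.0 across its two rows
```

The tedious part is not the aggregation itself but repeating an equivalent change consistently across P2-P5 and keeping the input files compatible.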

Workflow helper script

For the database steps in the README excerpt above (loading the CSV files through exporting the output), we have a helper script written for psql (the command-line interface that comes with PostgreSQL) that doesn't require any additional software, but it is out of date following the commercial vintage consolidation and requires some debugging.

Here is the header of the psql script, which was copied from prior work with residential models but has not been used much or maintained since:

```
/*
postgres-template-com.psql
Nicholas Fette, created 2023-01-05, modified 2024-04-22
Automate workflow in PostgreSQL:
Set up support tables, import results data, and run all the post-processing scripts.
Warning: deletes the results data currently in the designated database schema.

The user must customize the script to specify the folder where the script can find output files from Com.py.

Then the user runs the script on a command line:

Usage on command line (substitute your measure folder for SWXX000):
cd "C:/DEER-Prototypes-EnergyPlus/commercial measures/SWXX000"
psql -d "postgres" -U "postgres" -f "postgres-com-SWXX000.psql"
*/
```

Next steps

I did some debugging of the workflow helper script, but I think the repository would benefit from a proposal for an alternative that bypasses the PostgreSQL software requirement, consolidates the many steps for processing results, and reduces the learning curve for new users. Thoughts?
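One possible direction, assuming the P-script SQL could be ported, is to run the same joins and aggregations in SQLite via Python's built-in sqlite3 module, which needs no server or database configuration. A minimal sketch with hypothetical table and column names standing in for sim_annual and wts_com_bldg:

```python
# Sketch: the kind of weighted join the P-scripts perform, run in
# SQLite (Python stdlib) instead of a PostgreSQL server. Table and
# column names are hypothetical stand-ins, not the actual schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sim_annual (bldgtype TEXT, kwh REAL);
CREATE TABLE wts_com_bldg (bldgtype TEXT, wt REAL);
INSERT INTO sim_annual VALUES ('OfL', 12000.0), ('RtS', 8000.0);
INSERT INTO wts_com_bldg VALUES ('OfL', 0.6), ('RtS', 0.4);
""")

# Sector-weighted UEC, analogous to the BldgType = Com aggregation
(row,) = con.execute("""
    SELECT SUM(s.kwh * w.wt) / SUM(w.wt)
    FROM sim_annual s JOIN wts_com_bldg w USING (bldgtype)
""").fetchall()
print(row[0])  # 0.6*12000 + 0.4*8000 = 10400.0
```

This would remove the install-and-configure step entirely; the open question is how much of the P1-P5 SQL relies on PostgreSQL-specific features.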
