Project Simulation Service
Building Models with the generation_models schema
The models used by the Tyba Python Client come from the generation_models package, which is installed as a dependency of the client (it’s always a good idea to make sure you have the latest version). The package provides high-level classes for representing power generation (specifically PV solar) plants, grid-connected and behind-the-meter (BTM) energy storage assets, and hybrid systems. Each instance of these classes contains all of the information needed to model a renewable asset over some period of time, and can be passed to the client to schedule a model run and generate results.
The models have a nested class structure: each attribute of a class is either a direct input or an object that contains inputs to some submodel. A PVStorageModel instance might look like:

```python
from generation_models import *

model = PVStorageModel(
    energy_prices=DARTPrices(...),  # price data inputs the BESS optimizes against
    time_interval_mins=30,  # time interval represented by the energy_prices data
    storage_inputs=MultiStorageInputs(  # BESS-related inputs incl. physical specs and behavioral specs
        batteries=[
            BatteryParams(
                ...,
                capacity_degradation_model=TableCapDegradationModel(...)
            )
        ]
    ),
    pv_inputs=PVGenerationModel(  # PV system related specs incl. physical specs and irradiance inputs
        solar_resource=SolarResource(...),
        inverter=Inverter(...),
        pv_module=PVModule(...),
        system_design=PVSystemDesign(
            ...,
            tracking=SingleAxisTracking(...)
        ),
    ),
    storage_coupling=StorageCoupling.dc,  # specify how the PV and storage tie together
)
```
where ... represents model inputs that aren't nested classes. Submodels are also modular, so different models (with different inputs) can be used interchangeably. For example, the Tyba Client can model inverters using the OND model (via the ONDInverter class) or the CEC model (via the Inverter class), and instances of either class can be used anywhere an inverter attribute is required. Where applicable, "convenience" inputs are provided to streamline the modeling process, e.g. we can set PVGenerationModel.solar_resource to a SolarResource instance, or to a tuple of (latitude, longitude). In the latter case, the Tyba API automatically queries the NSRDB for the relevant solar resource information so we don't have to provide it.
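To make this concrete, here is a minimal sketch of swapping the inverter submodel and using the (latitude, longitude) convenience input. It only uses class and attribute names shown above; the latitude/longitude values are hypothetical and all other inputs are elided.

```python
from generation_models import *

# either inverter representation can be passed to the `inverter` attribute
cec_inverter = Inverter(...)     # CEC model
ond_inverter = ONDInverter(...)  # OND model

model = PVGenerationModel(
    solar_resource=(35.05, -106.54),  # hypothetical lat/lon; Tyba queries the NSRDB for us
    inverter=ond_inverter,            # or cec_inverter; the two are interchangeable
    pv_module=PVModule(...),
    system_design=PVSystemDesign(...),
)
```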
Every model starts with one of five high-level classes:

- PVGenerationModel: For modeling a PV generation asset
- ACExternalGenerationModel: For modeling a PV (or other generation) asset when you already have power time series data at the medium-voltage AC bus
- DCExternalGenerationModel: Like the above, but for power time series data at the inverter DC MPPT inputs
- StandaloneStorageModel: For modeling standalone BESS assets
- PVStorageModel: For modeling hybrid assets that couple a generation asset with a BESS
The first three classes are for modeling generation assets without storage (e.g. for baseline comparisons). These "generation classes" are also used to model the generation side of hybrid assets via PVStorageModel.pv_inputs. This pattern allows a high degree of complexity (if desired) for both the generation and storage sides of the model. Once you've identified
which kind of model you’d like to run, you can use the Model Schema page of the
generation_models documentation to learn about and specify each of the required
attributes. In each attribute description, the submodel class names hyperlink to their own documentation, allowing you
to drill down into the submodels, sub-submodels, etc. as far as you desire. You can also use the interactive Jupyter
Example Notebooks detailed at the end of this page as a starting point for your modeling efforts.
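As an illustration of the pv_inputs pattern described above, here is a minimal sketch (reusing the class names from the earlier example, with all other inputs elided) of running the same generation inputs both standalone and inside a hybrid model:

```python
# the same PVGenerationModel can serve as a solar-only baseline...
pv = PVGenerationModel(
    solar_resource=SolarResource(...),
    inverter=Inverter(...),
    pv_module=PVModule(...),
    system_design=PVSystemDesign(...),
)

# ...and as the generation side of a hybrid model
hybrid = PVStorageModel(
    energy_prices=DARTPrices(...),
    time_interval_mins=30,
    storage_inputs=MultiStorageInputs(batteries=[BatteryParams(...)]),
    pv_inputs=pv,  # identical generation inputs to the baseline
    storage_coupling=StorageCoupling.dc,
)
```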
The generation_models
package includes built-in validation to ensure that the inputs will result in a successful
simulation. This validation occurs when model objects are instantiated (before the simulation is submitted), and will
raise an error if the model inputs are incorrect, inconsistent, etc. The error message should explain the issue,
allowing you to correct the inputs and retry object instantiation, but always feel free to reach out to Tyba if you are
having any issues.
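For example, a minimal sketch of surfacing a validation error at instantiation time (the invalid value is hypothetical, and the exact exception type depends on the package version, so we catch broadly here):

```python
try:
    model = PVStorageModel(
        time_interval_mins=-30,  # hypothetical invalid value used to trigger validation
        # ... remaining inputs as shown above
    )
except Exception as err:  # exact exception type depends on the package version
    print(err)  # the message explains which input failed validation and why
    raise
```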
Importing Data from Local Files
We often need to incorporate data from files (e.g. solar resource, prices, equipment parameters) into our models. For tabular data like prices we might use pandas (see the sketch after the example below), while the generation_models.utils module provides functions for handling OND and PAN equipment files and PSM-formatted solar resource CSVs. For example,
```python
from generation_models import *
from generation_models.utils.psm_readers import solar_resource_from_psm_csv

solar_resource = solar_resource_from_psm_csv("my_weather.csv", typical=True)

model = PVGenerationModel(
    solar_resource=solar_resource,
    # ... remaining model inputs
)
```
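For price data, a minimal pandas-based sketch might look like the following. The CSV layout and the DARTPrices field names (rtm and dam) are assumptions here; check the Model Schema docs for the field names your version actually expects.

```python
import pandas as pd
from generation_models import DARTPrices

# hypothetical CSV with "rtm" and "dam" price columns, one row per interval
prices = pd.read_csv("my_prices.csv")

energy_prices = DARTPrices(
    rtm=prices["rtm"].tolist(),  # assumed field name for real-time prices
    dam=prices["dam"].tolist(),  # assumed field name for day-ahead prices
)
```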
Scheduling and Receiving Responses
Because of the computational intensity of model runs, Tyba’s API follows a “Schedule & Receive” process.
Scheduling a Model Run
Once we’ve successfully built our model, we can use the client.schedule
method to send the model to the Tyba servers and request that it be scheduled for completion.
```python
resp = client.schedule(model)
```
We can assess how our request went by inspecting the
requests.Response
that is returned by the schedule
method.
```python
resp.raise_for_status()  # will ONLY raise an error if we got a bad response status code
print(resp.json())  # resp.json() is the response message as a dict
# returns {"id": "d08b1e8e-6c68-40cf-b29c-0b6da18cd777"}
id_ = resp.json()["id"]  # we'll use this id to check the run's status
```
If our request was successful, the response message will contain an id for the scheduled run. If not, the
raise_for_status
method will raise an error and the response message will contain the particular
HTTP status code and more information on the issue.
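A minimal sketch of catching and inspecting a failed scheduling request (requests.HTTPError is the exception raised by raise_for_status):

```python
import requests

try:
    resp.raise_for_status()
except requests.HTTPError:
    # the response body carries the HTTP status code plus more detail on what went wrong
    print(resp.status_code, resp.text)
    raise
```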
Receiving and Analyzing Results
To receive the simulation results, we pass the scheduled run’s id to the
client.wait_on_result
method. This
method checks the status of the model run at some interval (customizable by the wait_time
argument) and returns a
results dictionary once the run is complete.
```python
results = client.wait_on_result(id_)
```

Note: depending on the simulation requested, it might take quite a while for this line of code to execute.
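To poll less often, we can pass the wait_time argument mentioned below; a minimal sketch (the value's units aren't spelled out here, so check the client documentation):

```python
# hypothetical longer polling interval; confirm the expected units in the client docs
results = client.wait_on_result(id_, wait_time=30)
```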
The structure of the results
dict will depend on the model that was run, and the documentation for each of the five high-level classes links to its respective output schema. The results schema docs
fully explain all the generated data. For example, a PVStorageModel
run
will generate results with the PVStorageModelResults
schema:
```python
print(results.keys())
# returns
# dict_keys(['solar_only', 'solar_storage', 'solar_storage_waterfall', 'optimizer_outputs', 'market_awards'])
```
In this schema, the "solar_storage" dictionary contains power flow data useful for understanding asset operation, whereas the time series in "market_awards" and "optimizer_outputs" are useful for accurate revenue calculation. Below the top level, most of the results are time series that can be converted into pandas DataFrames for in-depth analysis, plotting, and easy export. Continuing the example above:
```python
import pandas as pd

df = pd.DataFrame(results["solar_storage"])
# with the dataframe we can analyze, summarize, plot etc.
# we can also export our results
df.to_csv("my-results.csv")
```
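A quick, non-exhaustive sketch of exploring the resulting DataFrame (column names depend on the PVStorageModelResults schema, so we just inspect them here):

```python
print(df.columns)     # see which power flow series were returned
print(df.describe())  # summary statistics for each series
ax = df.plot()        # requires matplotlib; quick visual sanity check
```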
Checking the Status of a Scheduled Run
After a simulation run has been scheduled, instead of using client.wait_on_result
, we can check on the status
ourselves by passing the scheduled run’s id to the client.get_status
method. Similar to client.schedule
, a requests.Response object is returned
and the response message will contain a "status"
field that we can inspect:
```python
status = client.get_status(id_).json()["status"]
print(status)
```
Until a run is complete, its status will be "scheduled", and we must keep checking until the status has changed to "complete". To facilitate this, we can run our status check in a loop:
```python
import time

complete = False
while not complete:
    res = client.get_status(id_).json()
    if res["status"] == "complete":
        complete = True
        print(res.keys())
    else:
        time.sleep(2)
```
This is, in fact, what client.wait_on_result
accomplishes for us. Our status check can also return an
"error"
status. This means an error has occurred during the model run itself. Similar to scheduling, we can inspect the full response body to get more insight into what is going on, but it may not always be clear. Tyba is alerted
when run errors occur and will typically reach out to resolve the issue. Of course, please reach out for any
time-sensitive issues. A table of Tyba Model Run Status Codes with summaries is provided below.
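A minimal sketch of extending the polling loop to surface an "error" status rather than waiting indefinitely:

```python
import time

while True:
    res = client.get_status(id_).json()
    if res["status"] == "complete":
        print(res.keys())  # the completed run's payload
        break
    if res["status"] == "error":
        # include the full response body so it can be shared with Tyba if needed
        raise RuntimeError(f"model run {id_} failed: {res}")
    time.sleep(2)
```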
Rate Limits
To protect Tyba’s servers from being overwhelmed (such that they degrade service for other users), we impose rate limits on API requests. We monitor performance closely and seek to deliver as fast and reliable computing as we can. These rate limits will never be used to limit reasonable requests, only to protect against runaway or excessive requests. Below we provide an example of designing your code around Tyba’s rate limits, but please reach out to us if you would like additional assistance in that area.
Model Units
The unit for Tyba's rate limits is a single design-year. For example, a single-year PV run would be one unit, whereas a 10-year run would be 10 units.
Limit by Model Type
Subject to change:

- PV-Only runs: 5 units per second
- PV+Storage or Standalone Storage runs: 10 units per minute
One unit typically takes 10-15 seconds to run, so this should not be prohibitive to batched runs.
Example Rate Limit Backoff
```python
import time

def is_ratelimited(response):
    return response.status_code == 429

def is_unprocessable(response):
    return response.status_code == 422

def portfolio_schedule(models, save_name):
    ids = {}
    for project, model in models.items():
        print(f"{project} scheduling")
        tries = 8
        backoff = 2
        init = 2
        while tries > 0:
            res = client.schedule_pv_storage(model)
            if is_unprocessable(res):
                print("ERROR: could not schedule")
                print(res.text)
                raise Exception(f"could not process {project}")
            if is_ratelimited(res):
                print("Call ratelimited...retrying")
                time.sleep(init)  # wait, then back off exponentially
                init *= backoff
                tries -= 1
            else:
                print(res.text)  # this will show the id for that run
                run_id = res.json()["id"]
                ids[project] = run_id
                break
        else:
            # while-else: runs only if we exhausted our tries without a successful schedule
            print(f"{project} exceeded tries and has failed.")
    return ids
```
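A hypothetical usage of the helper above, where models maps project names to already-built PVStorageModel instances:

```python
models = {"project-a": model_a, "project-b": model_b}  # hypothetical models built as shown earlier
ids = portfolio_schedule(models, save_name="portfolio-run")

# collect results once every project has been scheduled
results = {project: client.wait_on_result(run_id) for project, run_id in ids.items()}
```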
HTTP Response Codes
| HTTP Status Code | Summary |
|---|---|
| 200 - Ok | Request worked as expected. |
| 400 - Bad Request | The request was not accepted. Usually due to a missing required parameter or an incorrect parameter type. |
| 401 - Unauthorized | An invalid or missing TYBA_PAT was provided. |
| 402 - Request Failed | The request failed despite valid parameters. Check to make sure input parameters are the appropriate scale and would not lead to an infeasible solution. If the problem persists, contact Tyba. |
| 403 - Forbidden | The TYBA_PAT does not have sufficient permissions to perform this request. |
| 404 - Not Found | The requested endpoint does not exist. Check your request. |
| 422 - Unprocessable Entity | The request was accepted but the instructions could not be processed. For example, when requesting pricing data, if a pricing year is requested that is not in the database, this error will be raised. |
| 429 - Rate Limited | Too many requests were scheduled with the API too quickly. Please implement an exponential backoff of your requests. For more information, see the Rate Limits section above. |
| 500-504 - Tyba Server Error | An error occurred with Tyba's servers. Please try again and contact Tyba if the issue persists. |
Tyba Model Run Status Codes
| Status Code | Summary |
|---|---|
| scheduled | The request has been sent to Tyba's servers and the model is running. |
| completed | The request has finished. |
| unknown | The requested run id is not currently scheduled. Check your run id and try again. |
| error | The simulation encountered an error: the model was successfully scheduled and the status request succeeded, but something went wrong with the model run itself. Try scheduling the simulation again, and if the issue persists, contact Tyba. |