
API Documentation 🔧

Welcome to the TM2PY API reference. This documentation provides comprehensive details about all classes, functions, and configuration options available in the tm2py transportation modeling framework.

Getting Started

New to tm2py? Start with the User Guide and Installation Instructions before diving into the API details.

Output Data Formats

For detailed specifications of model output file formats, field definitions, and data structures, see the CTRAMP Output File Specifications.


🚀 Model Setup & Execution

Core classes for setting up and running transportation models.

Model Setup

The primary interface for configuring and initializing tm2py models.

tm2py.SetupModel

SetupModel(config_file: Path, model_dir: Path)

Main operational interface for the model setup process.

Initializes an instance of the SetupModel class.

Parameters:

Name Type Description Default
config_file Path

The TOML file with the model setup attributes.

required
model_dir Path

The directory in which to set up a TM2 model run.

required
Source code in tm2py/setup_model/setup.py
def __init__(self, config_file: pathlib.Path, model_dir: pathlib.Path):
    """Initializes an instance of the SetupModel class.

    Args:
        config_file (pathlib.Path): The TOML file with the model setup attributes.
        model_dir (pathlib.Path): The directory which to setup for a TM2 model run.
    """
    self.config_file = config_file
    self.setup_config = SetupConfig(dict())
    self.model_dir = model_dir
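
A minimal usage sketch, assuming SetupModel is importable from the tm2py package as shown in the heading above; the paths are hypothetical placeholders:

    from pathlib import Path
    from tm2py import SetupModel

    # hypothetical paths; point these at your own setup TOML and target run directory
    setup = SetupModel(
        config_file=Path("setupmodel_config.toml"),
        model_dir=Path("E:/model_runs/tm2_test_run"),
    )
    setup.run_setup()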
Functions
run_setup
run_setup()

Does the work of setting up the model.

This step will do the following within the model directory.

  1. Initialize logging to write to setup.log
  2. Copy the setup config file to setupmodel_config.toml
  3. Create the required folder structure
  4. Copy the input from the locations specified:
     a. hwy and trn networks
     b. popsyn and landuse inputs
     c. nonres inputs
     d. warmstart demand matrices
     e. warmstart skims
  5. Copy the Emme template project and Emme network databases (based on the EMME version in sys.path)
  6. Download the travel model CTRAMP core code (runtime, uec) from the travel-model-two repository
  7. Update the IP address in the CTRAMP runtime properties files
  8. Create RunModel.py for running the model

Raises:

Type Description
FileExistsError

If the model directory to setup already exists.

Source code in tm2py/setup_model/setup.py
def run_setup(self):
    """
    Does the work of setting up the model.  

    This step will do the following within the model directory.

    1. Intialize logging to write to `setup.log`
    2. Copy the setup config file to `setupmodel_config.toml`
    3. Create the required folder structure
    4. Copy the input from the locations specified:
       a. hwy and trn networks
       b. popsyn and landuse inputs
       c. nonres inputs
       d. warmstart demand matrices
       e. warmstart skims
    5. Copy the Emme template project and Emme network databases 
       (based on the EMME version in sys.path)
    6. Download the travel model CTRAMP core code (runtime, uec) from the 
       [travel-model-two repository](https://github.com/BayAreaMetro/travel-model-two)
    7. Updates the IP address in the CTRAMP runtime properties files
    8. Creates `RunModel.py` for running the model

    Raises:
        FileExistsError: If the model directory to setup already exists.
    """
    # Read setup setup_config
    config_dict = self._load_toml()
    self.setup_config = SetupConfig(config_dict)
    self.setup_config.validate()

    # if the directory already exists - error and quit
    if self.model_dir.exists():
        raise FileExistsError(f"{self.model_dir.resolve()} already exists! Setup terminated.")
    else:
        self.model_dir.mkdir()

    # Initialize logging
    log_file = self.model_dir / "setup.log"
    self._setup_logging(log_file)

    self.logger.info(f"Starting process to setup MTC model in directory: {self.model_dir.resolve()}")

    # Save setup config into model dir as setupmodel_config.toml
    shutil.copy(self.config_file, self.model_dir / "setupmodel_config.toml")
    self.logger.info(f"Copied {self.config_file} to {self.model_dir / 'setupmodel_config.toml'}")

    # List of folders to create
    folders_to_create = [
        "acceptance",
        "CTRAMP",
        "ctramp_output",
        "demand_matrices",
        "demand_matrices/highway",
        "demand_matrices/highway/air_passenger",
        "demand_matrices/highway/household",
        "demand_matrices/highway/maz_demand",
        "demand_matrices/highway/internal_external",
        "demand_matrices/highway/commercial",
        "demand_matrices/transit",
        "emme_project",
        "inputs",
        "logs",
        "notebooks",
        "output_summaries",
        "skim_matrices",
        "skim_matrices/highway",
        "skim_matrices/transit",
        "skim_matrices/non_motorized",
    ]

    # Create folder structure
    self._create_folder_structure(folders_to_create)

    # Copy model inputs
    self._copy_model_inputs()

    # Copy emme project and database
    self._copy_emme_project_and_database()

    # Download toml SetupConfig files from GitHub
    config_files_list = [
        "observed_data.toml",
        "canonical_crosswalk.toml",
        "model_config.toml",
        "scenario_config.toml",
    ]
    acceptance_config_files_list = [
        "observed_data.toml",
        "canonical_crosswalk.toml",
    ]

    for file in config_files_list:
        github_url = self.setup_config.CONFIGS_GITHUB_PATH + "/" + file

        local_file = self.model_dir / file

        self._download_file_from_github(github_url, local_file)

    # Fetch required folders from travel model two github release (zip file)
    org = "BayAreaMetro"
    repo = "travel-model-two"
    tag = self.setup_config.TRAVEL_MODEL_TWO_RELEASE_TAG
    folders_to_extract = ["runtime", "uec"]

    self._download_github_release(
        org,
        repo,
        tag,
        folders_to_extract,
        self.model_dir / "CTRAMP"
    )

    # Rename 'uec' folder to 'model'
    old_path = self.model_dir / "CTRAMP" / "uec"
    old_path.rename(self.model_dir / "CTRAMP" / "model")

    self._create_run_model_batch()

    # update IP addresses in config files
    ips_here = socket.gethostbyname_ex(socket.gethostname())[-1]
    self.logger.info(f"Found the following IPs for this server: {ips_here}; using the first one: {ips_here[0]}")

    # add IP address to mtctm2.properties
    self._replace_in_file(
        self.model_dir / 'CTRAMP' / 'runtime' / 'mtctm2.properties', {
            "(\nRunModel.MatrixServerAddress[ \t]*=[ \t]*)(\S*)": f"\g<1>{ips_here[0]}",
            "(\nRunModel.HouseholdServerAddress[ \t]*=[ \t]*)(\S*)": f"\g<1>{ips_here[0]}"
        }
    )
    # add IP address to logsum.properties
    self._replace_in_file(
        self.model_dir / 'CTRAMP' / 'runtime' / 'logsum.properties', {
            "(\nRunModel.MatrixServerAddress[ \t]*=[ \t]*)(\S*)": f"\g<1>{ips_here[0]}",
            "(\nRunModel.HouseholdServerAddress[ \t]*=[ \t]*)(\S*)": f"\g<1>{ips_here[0]}"
        }
    )
    self.logger.info(f"Setup process completed successfully!")

    # Close logging
    logging.shutdown()
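
The attributes CONFIGS_GITHUB_PATH and TRAVEL_MODEL_TWO_RELEASE_TAG referenced above come from the setup config TOML. A hypothetical fragment is sketched below; the values (and any other required keys) are placeholders, not defaults:

    # placeholder values only; substitute the real configs location and release tag
    CONFIGS_GITHUB_PATH = "<base URL of the configs folder on GitHub>"
    TRAVEL_MODEL_TWO_RELEASE_TAG = "<travel-model-two release tag>"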

Controller

Central orchestration and workflow management for model execution.

tm2py.controller

RunController - model operation controller.

Main interface to start a TM2PY model run. Provide one or more configuration files in .toml format (by convention a scenario.toml and a model.toml)

Typical usage example:

    from tm2py.controller import RunController
    controller = RunController(["scenario.toml", "model.toml"])
    controller.run()

Or from the command-line:

    python <path>/tm2py/tm2py/controller.py -s scenario.toml -m model.toml

Classes
RunController
RunController(config_file: Union[Collection[Union[str, Path]], str, Path] = None, run_dir: Union[Path, str] = None, run_components: Collection[str] = component_cls_map.keys())

Main operational interface for model runs.

Provide one or more config files in TOML (*.toml) format, and a run directory. If the run directory is not provided the root directory of the first config_file is used.

Properties
Internal properties

Constructor for RunController class.

Parameters:

Name Type Description Default
config_file Union[Collection[Union[str, Path]], str, Path]

Single or list of config file locations as strings or Path objects. Defaults to None.

None
run_dir Union[Path, str]

Model run directory as a Path object or string. If not provided, defaults to the directory of the first config_file.

None
run_components Collection[str]

List of component names to run. Defaults to all components.

component_cls_map.keys()
Source code in tm2py/controller.py
def __init__(
    self,
    config_file: Union[Collection[Union[str, Path]], str, Path] = None,
    run_dir: Union[Path, str] = None,
    run_components: Collection[str] = component_cls_map.keys(),
):
    """Constructor for RunController class.

    Args:
        config_file: Single or list of config file locations as strings or Path objects.
            Defaults to None.
        run_dir: Model run directory as a Path object or string. If not provided, defaults
            to the directory of the first config_file.
        run_components: List of component names to run. Defaults to all components.
    """
    if run_dir is None:
        run_dir = Path(os.path.abspath(os.path.dirname(config_file[0])))

    self._run_dir = Path(run_dir)

    self.config = Configuration.load_toml(config_file)
    self.has_emme: bool = emme_context()
    self.top_sheet = None
    self.trace = None
    self.completed_components = []

    self._validated_components = set()
    self._emme_manager = None
    self._iteration = None
    self._component = None
    self._component_name = None
    self._queued_components = deque()

    self._maz_data = None
    self._node_seq_id_xwalk = None

    # create logger before creating components so we can log if issues arise in the component creation
    self.logger = Logger(self)
    print(f"initialize_log({self.runtime_log_file, self.runtime_log_headers, self.runtime_log_col_width})")
    initialize_log(
        self.runtime_log_file, self.runtime_log_headers, self.runtime_log_col_width
    )

    # mapping from defined names referenced in config to Component objects
    self._component_map = {
        k: v(self) for k, v in component_cls_map.items() if k in run_components
    }

    self.logger.set_emme_manager(self.emme_manager)
    self._queue_components(run_components=run_components)
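
A sketch of constructing a controller with an explicit subset of components; the component names shown are illustrative placeholders, not a verified list from component_cls_map:

    from tm2py.controller import RunController

    controller = RunController(
        ["scenario.toml", "model.toml"],
        run_dir="E:/model_runs/tm2_test_run",   # optional; defaults to the first config file's directory
        run_components=["highway", "transit"],  # hypothetical component names
    )
    controller.run()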
Attributes
run_dir property
run_dir: Path

The root run directory of the model run.

run_iterations property
run_iterations: List[int]

List of iterations for this model run.

time_period_names property
time_period_names: List[str]

Return the list of time period names for this model run.

Implemented here for easy access for all components.

Returns: list of uppercased string names of time periods

time_period_durations property
time_period_durations: dict

Return mapping of time periods to durations in hours.

congested_transit_assn_max_iteration property
congested_transit_assn_max_iteration: dict

Return mapping of time periods to max iteration in congested transit assignment.

iteration property
iteration: int

Current iteration of model run.

component_name property
component_name: str

Name of current component of model run.

iter_component property
iter_component: Tuple[int, str]

Tuple of the current iteration and component name.

emme_manager property
emme_manager: EmmeManager

Cached Emme Manager object.

Functions
component
component() -> Component

Current component of model.

Source code in tm2py/controller.py
def component(self) -> Component:
    """Current component of model."""
    return self._component
get_abs_path
get_abs_path(rel_path: Union[Path, str]) -> Path

Get the absolute path from the root run directory given a relative path.

Source code in tm2py/controller.py
def get_abs_path(self, rel_path: Union[Path, str]) -> Path:
    """Get the absolute path from the root run directory given a relative path."""
    if not isinstance(rel_path, Path):
        rel_path = Path(rel_path)
    return Path(os.path.join(self.run_dir, rel_path))
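
For example, with a hypothetical relative path:

    skims_dir = controller.get_abs_path("skim_matrices/highway")
    # -> <run_dir>/skim_matrices/highway as an absolute Path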
run
run()

Main interface to run model.

Iterates through the self._queued_components and runs them.

Source code in tm2py/controller.py
def run(self):
    """Main interface to run model.

    Iterates through the self._queued_components and runs them.
    """
    self._iteration = None

    while self._queued_components:
        self.run_next()
run_next
run_next()

Run next component in the queue.

Source code in tm2py/controller.py
def run_next(self):
    """Run next component in the queue."""
    if not self._queued_components:
        raise ValueError("No components in queue")
    iteration, name, component = self._queued_components.popleft()
    if self._iteration != iteration:
        self.logger.log(f"Start iteration {iteration}")
    self._iteration = iteration

    self.logger.debug(f" Running iteration {iteration} component {name}")
    component_start_time = datetime.now()

    # check wamrstart files exist
    if iteration == 0:
        if self.config.warmstart.warmstart:
            if self.config.warmstart.use_warmstart_demand:
                for source in [
                    "household",
                    "truck",
                    "air_passenger",
                    "internal_external",
                ]:
                    highway_demand_file = str(
                        self.get_abs_path(self.config[source].highway_demand_file)
                    )
                    for time in self.config["time_periods"]:
                        path = highway_demand_file.format(
                            period=time.name, iter=iteration
                        )
                        assert os.path.isfile(
                            path
                        ), f"{path} required as warmstart demand does not exist"
            elif self.config.warmstart.use_warmstart_skim:
                highway_skim_file = str(
                    self.get_abs_path(
                        self.config["highway"].output_skim_path
                        / self.config["highway"].output_skim_filename_tmpl
                    )
                )
                for time in self.config["time_periods"]:
                    path = highway_skim_file.format(time_period=time.name)
                    assert os.path.isfile(
                        path
                    ), f"{path} required as warmstart skim does not exist"
                transit_skim_file = str(
                    self.get_abs_path(
                        self.config["transit"].output_skim_path
                        / self.config["transit"].output_skim_filename_tmpl
                    )
                )
                for time in self.config["time_periods"]:
                    for tclass in self.config["transit"]["classes"]:
                        path = transit_skim_file.format(
                            time_period=time.name, tclass=tclass.name
                        )
                        assert os.path.isfile(
                            path
                        ), f"{path} required as warmstart skim does not exist"

    self._component = component
    try:
        component.run()
        component_end_time = datetime.now()
        add_run_log(
            iteration,
            name,
            component_start_time,
            component_end_time,
            self.runtime_log_file,
            self.runtime_log_col_width,
        )
    except:
        # re-insert failed component on error
        self._queued_components.insert(0, (iteration, name, component))
        raise
    self.completed_components.append((iteration, name, component))
Functions

⚙️ Configuration Management

Configuration classes that define model parameters, settings, and behavioral options.

Configuration Pattern

Each component has an associated configuration class that defines its parameters. Configurations are typically loaded from TOML files and validated at runtime.

tm2py.config

Config implementation and schema.

Classes

ConfigItem

Base class to add partial dict-like interface to tm2py model configuration.

Allows use of .items(), ["X"], .get("X"), and .to_dict() on configuration objects.

Not to be constructed directly. To be used as a mixin for dataclasses representing the config schema. Do not use "get", "to_dict", or "items" as key names.
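
A sketch of the dict-like access this mixin provides, assuming config is a loaded configuration object; the key names are illustrative:

    highway = config.get("highway")        # None if the key is absent
    run_cfg = config["run"]                # bracket access
    for name, sub_config in config.items():
        print(name, type(sub_config))
    as_dict = config.to_dict()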

Functions
items
items()

The sub-config objects in config.

Source code in tm2py/config.py
def items(self):
    """The sub-config objects in config."""
    return self.__dict__.items()
get
get(key, default=None)

Return the value for key if key is in the dictionary, else default.

Source code in tm2py/config.py
def get(self, key, default=None):
    """Return the value for key if key is in the dictionary, else default."""
    return self.__dict__.get(key, default)
ScenarioConfig

Scenario related parameters.

Properties
WarmStartConfig

Warm start parameters.

Note that the components will be executed in the order listed.

Properties
Functions
check_warmstart_method
check_warmstart_method(value, values)

When warmstart is enabled, exactly one of use_warmstart_skim or use_warmstart_demand should be true.

Source code in tm2py/config.py
@validator("warmstart", allow_reuse=True)
def check_warmstart_method(cls, value, values):
    """When warmstart, either skim or demand should be true."""
    if values.get("warmstart"):
        assert (
            values.use_warmstart_skim != values.use_warmstart_demand
        ), f"'warmstart is on, only one of' {values.use_warmstart_skim} and {values.use_warmstart_demand} can be true"
    return value
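
A hypothetical TOML fragment that satisfies this validator; the key names are taken from the fields referenced elsewhere on this page, and their placement under a [warmstart] table is an assumption:

    [warmstart]
    warmstart = true
    use_warmstart_demand = true
    use_warmstart_skim = false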
RunConfig

Model run parameters.

Note that the components will be executed in the order listed.

Properties
Functions
end_iteration_gt_start
end_iteration_gt_start(value, values)

Validate end_iteration greater than start_iteration.

Source code in tm2py/config.py
@validator("end_iteration", allow_reuse=True)
def end_iteration_gt_start(cls, value, values):
    """Validate end_iteration greater than start_iteration."""
    if values.get("start_iteration"):
        assert (
            value >= values["start_iteration"]
        ), f"'end_iteration' ({value}) must be greater than 'start_iteration'\
            ({values['start_iteration']})"
    return value
start_component_used
start_component_used(value, values)

Validate start_component is listed in *_components.

Source code in tm2py/config.py
@validator("start_component", allow_reuse=True)
def start_component_used(cls, value, values):
    """Validate start_component is listed in *_components."""
    if not values.get("start_component") or not value:
        return value

    if "start_iteration" in values:
        if values.get("start_iteration") == 0:
            assert value in values.get(
                "initial_components", [value]
            ), f"'start_component' ({value}) must be one of the components listed in\
                initial_components if 'start_iteration = 0'"
        else:
            assert value in values.get(
                "global_iteration_components", [values]
            ), f"'start_component' ({value}) must be one of the components listed in\
                global_iteration_components if 'start_iteration > 0'"
    return value
LoggingConfig

Logging parameters. TODO.

Properties
TimePeriodConfig

Time period entry.

Properties
TimeSplitConfig

Split matrix i and j.

i.e. for time of day splits.

TimeOfDayClassConfig

Configuration for a class of time of day model.

TimeOfDayConfig

Configuration for time of day model.

HouseholdModeAgg

household trip mode aggregation input parameters.

Properties
HouseholdConfig

Household (residents) model parameters.

AirPassengerDemandAggregationConfig

Air passenger demand aggregation input parameters.

Properties
AirPassengerConfig

Air passenger model parameters.

Properties

highway_demand_file: output OMX file
input_demand_folder: location to find the input demand CSVs
input_demand_filename_tmpl: filename template for input demand. Should have {year}, {direction} and {airport} variables and end in '.csv'
reference_start_year: base start year for input demand tables, used to calculate the linear interpolation as well as in the file name template {year}{direction}{airport}.csv
reference_end_year: end year for input demand tables, used to calculate the linear interpolation as well as in the file name template {year}{direction}{airport}.csv
airport_names: list of one or more airport names / codes as used in the input file names
demand_aggregation: specification of aggregation of by-access-mode demand to highway class demand
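
A hypothetical fragment illustrating these properties; all values and types are placeholders, and the filename template must contain {year}, {direction} and {airport} and end in '.csv':

    [air_passenger]
    highway_demand_file = "demand_matrices/highway/air_passenger/air_passenger_demand.omx"
    input_demand_folder = "inputs/nonres"
    input_demand_filename_tmpl = "{year}_{direction}{airport}.csv"
    reference_start_year = "2007"
    reference_end_year = "2035"
    airport_names = ["SFO", "OAK", "SJC"]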

Functions
valid_input_demand_filename_tmpl
valid_input_demand_filename_tmpl(value)

Validate skim matrix template has correct {}.

Source code in tm2py/config.py
@validator("input_demand_filename_tmpl")
def valid_input_demand_filename_tmpl(cls, value):
    """Validate skim matrix template has correct {}."""

    assert (
        "{year}" in value
    ), "-> 'output_skim_matrixname_tmpl must have {year}, found {value}."
    assert (
        "{direction}" in value
    ), "-> 'output_skim_matrixname_tmpl must have {direction}, found {value}."
    assert (
        "{airport}" in value
    ), "-> 'output_skim_matrixname_tmpl must have {airport}, found {value}."
    return value
MatrixFactorConfig

Mapping of zone or list of zones to factor value.

Functions
valid_zone_index
valid_zone_index(value)

Validate zone index and turn it into a list if it isn't one.

Source code in tm2py/config.py
@validator("zone_index", allow_reuse=True)
def valid_zone_index(value):
    """Validate zone index and turn to list if isn't one."""
    if isinstance(value, str):
        value = int(value)
    if isinstance(value, int):
        value = [value]
    assert all([x >= 0 for x in value]), "Zone_index must be greater or equal to 0"
    return value
CoefficientConfig

Coefficient and properties to be used in utility or regression.

ChoiceClassConfig

Choice class parameters.

Properties

The end value in the utility equation for class c and property p is:

utility[p].coeff * classes[c].property_factor[p] * sum(skim(classes[c].skim_mode,skim_p) for skim_p in property_to_skim[p])
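
For instance (purely illustrative numbers), with utility[p].coeff = -0.025, classes[c].property_factor[p] = 1.5, and a blended skim sum of 20 minutes, the contribution to the class utility would be -0.025 * 1.5 * 20 = -0.75.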

TollChoiceConfig

Toll choice parameters.

Properties
InternalExternalConfig

Internal <-> External model parameters.

TripGenerationFormulaConfig

TripProductionConfig.

Trip productions or attractions for a zone are the constant plus the sum of the rates * values in land use file for that zone.

TripGenerationClassConfig

Trip Generation parameters.

TripGenerationConfig

Trip Generation parameters.

TripDistributionClassConfig

Trip Distribution parameters.

Properties
TruckClassConfig

Truck class parameters.

ImpedanceConfig

Blended skims used for accessibility/friction calculations.

Properties:
    name: name to store it as, referred to in the TripDistribution config
    skim_mode: name of the mode to use for the blended skim
    time_blend: blend of time periods to use, mapped to the factors (which should sum to 1)
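
A hypothetical entry illustrating these fields; the values are placeholders and the enclosing TOML table name is omitted because it is not shown here:

    name = "da_blended_time"
    skim_mode = "da"
    time_blend = { am = 0.5, md = 0.5 }  # factors must sum to 1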

Functions
sums_to_one
sums_to_one(value)

Validate that the time_blend factors sum to 1.

Source code in tm2py/config.py
@validator("time_blend", allow_reuse=True)
def sums_to_one(value):
    """Validate highway.maz_to_maz.skim_period refers to a valid period."""
    assert sum(value.values()) - 1 < 0.0001, "blend factors must sum to 1"
    return value
TripDistributionConfig

Trip Distribution parameters.

TruckConfig

Truck model parameters.

Attributes
highway_demand_file instance-attribute
highway_demand_file: str

@validator("classes")
def class_consistency(cls, v, values):
    # TODO: can't get to work right now
    _class_names = [c.name for c in v]
    _gen_classes = [c.name for c in values["trip_gen"]]
    _dist_classes = [c.name for c in values["trip_dist"]]
    _time_classes = [c.name for c in values["time_split"]]
    _toll_classes = [c.name for c in values["toll_choice"]]

    assert (
        _class_names == _gen_classes
    ), "truck.classes ({_class_names}) doesn't equal class names in truck.trip_gen ({_gen_classes})."
    assert (
        _class_names == _dist_classes
    ), "truck.classes ({_class_names}) doesn't equal class names in truck.trip_dist ({_dist_classes})."
    assert (
        _class_names == _time_classes
    ), "truck.classes ({_class_names}) doesn't equal class names in truck.time_split ({_time_classes})."
    assert (
        _class_names == _toll_classes
    ), "truck.classes ({_class_names}) doesn't equal class names in truck.toll_choice ({_toll_classes})."

    return v
ActiveModeShortestPathSkimConfig

Active mode skim entry.

ActiveModesConfig

Active Mode skim parameters.

HighwayCapClassConfig

Highway link capacity and speed (‘capclass’) index entry.

Properties
ClassDemandConfig

Input source for demand for highway or transit assignment class.

Used to specify where to find related demand file for this highway or transit class.

Properties
HighwayRelativeGapConfig

Highway assignment relative gap parameters.

Properties
HighwayClassConfig

Highway assignment class definition.

Note that excluded_links, skims and toll attribute names include vehicle groups ("{vehicle}") which reference the list of highway.toll.dst_vehicle_group_names (see HighwayTollsConfig). The default example model config uses: "da", "sr2", "sr3", "vsm", "sml", "med", "lrg"

Example single class config

    name = "da"
    description = "drive alone"
    mode_code = "d"
    [[highway.classes.demand]]
    source = "household"
    name = "SOV_GP_{period}"
    [[highway.classes.demand]]
    source = "air_passenger"
    name = "da"
    [[highway.classes.demand]]
    source = "internal_external"
    name = "da"
    excluded_links = ["is_toll_da", "is_sr2"]
    value_of_time = 18.93  # $ / hr
    operating_cost_per_mile = 17.23  # cents / mile
    toll = ["@bridgetoll_da"]
    skims = ["time", "dist", "freeflowtime", "bridgetoll_da"]

Properties
HighwayTollsConfig

Highway assignment and skim input tolls and related parameters.

Properties
Functions
dst_vehicle_group_names_length
dst_vehicle_group_names_length(value, values)

Validate dst_vehicle_group_names has same length as src_vehicle_group_names.

Source code in tm2py/config.py
@validator("dst_vehicle_group_names", always=True)
def dst_vehicle_group_names_length(cls, value, values):
    """Validate dst_vehicle_group_names has same length as src_vehicle_group_names."""
    if "src_vehicle_group_names" in values:
        assert len(value) == len(
            values["src_vehicle_group_names"]
        ), "dst_vehicle_group_names must be same length as src_vehicle_group_names"
        assert all(
            [len(v) <= 4 for v in value]
        ), "dst_vehicle_group_names must be 4 characters or less"
    return value
DemandCountyGroupConfig

Grouping of counties for assignment and demand files.

Properties
HighwayMazToMazConfig

Highway MAZ to MAZ shortest path assignment and skim parameters.

Properties
Functions
unique_group_numbers
unique_group_numbers(value)

Validate list of demand_county_groups has unique .number values.

Source code in tm2py/config.py
@validator("demand_county_groups")
def unique_group_numbers(cls, value):
    """Validate list of demand_county_groups has unique .number values."""
    group_ids = [group.number for group in value]
    assert len(group_ids) == len(set(group_ids)), "-> number value must be unique"
    return value
HighwayConfig

Highway assignment and skims parameters.

Properties
Functions
valid_skim_template
valid_skim_template(value)

Validate skim template has correct {} and extension.

Source code in tm2py/config.py
@validator("output_skim_filename_tmpl")
def valid_skim_template(value):
    """Validate skim template has correct {} and extension."""
    assert (
        "{time_period" in value
    ), f"-> output_skim_filename_tmpl must have {{time_period}}', found {value}."
    assert (
        value[-4:].lower() == ".omx"
    ), f"-> 'output_skim_filename_tmpl must end in '.omx', found {value[-4:].lower() }"
    return value
valid_skim_matrix_name_template
valid_skim_matrix_name_template(value)

Validate skim matrix template has correct {}.

Source code in tm2py/config.py
@validator("output_skim_matrixname_tmpl")
def valid_skim_matrix_name_template(value):
    """Validate skim matrix template has correct {}."""
    assert (
        "{time_period" in value
    ), "-> 'output_skim_matrixname_tmpl must have {time_period}, found {value}."
    assert (
        "{property" in value
    ), "-> 'output_skim_matrixname_tmpl must have {property}, found {value}."
    assert (
        "{mode" in value
    ), "-> 'output_skim_matrixname_tmpl must have {mode}, found {value}."
    return value
unique_capclass_numbers
unique_capclass_numbers(value)

Validate list of capclass_lookup has unique .capclass values.

Source code in tm2py/config.py
@validator("capclass_lookup")
def unique_capclass_numbers(cls, value):
    """Validate list of capclass_lookup has unique .capclass values."""
    capclass_ids = [i.capclass for i in value]
    error_msg = "-> capclass value must be unique in list"
    assert len(capclass_ids) == len(set(capclass_ids)), error_msg
    return value
unique_class_names
unique_class_names(value)

Validate list of classes has unique .name values.

Source code in tm2py/config.py
@validator("classes", pre=True)
def unique_class_names(cls, value):
    """Validate list of classes has unique .name values."""
    class_names = [highway_class["name"] for highway_class in value]
    error_msg = "-> name value must be unique in list"
    assert len(class_names) == len(set(class_names)), error_msg
    return value
validate_class_mode_excluded_links
validate_class_mode_excluded_links(value, values)

Validate list of classes has unique .mode_code or .excluded_links match.

Source code in tm2py/config.py
@validator("classes")
def validate_class_mode_excluded_links(cls, value, values):
    """Validate list of classes has unique .mode_code or .excluded_links match."""
    # validate if any mode IDs are used twice, that they have the same excluded links sets
    mode_excluded_links = {}
    for i, highway_class in enumerate(value):
        # maz_to_maz.mode_code must be unique
        if "maz_to_maz" in values:
            assert (
                highway_class["mode_code"] != values["maz_to_maz"]["mode_code"]
            ), f"-> {i} -> mode_code: cannot be the same as the highway.maz_to_maz.mode_code"
        # make sure that if any mode IDs are used twice, they have the same excluded links sets
        if highway_class.mode_code in mode_excluded_links:
            ex_links1 = highway_class["excluded_links"]
            ex_links2 = mode_excluded_links[highway_class["mode_code"]]
            error_msg = (
                f"-> {i}: duplicated mode codes ('{highway_class['mode_code']}') "
                f"with different excluded links: {ex_links1} and {ex_links2}"
            )
            assert ex_links1 == ex_links2, error_msg
        mode_excluded_links[highway_class.mode_code] = highway_class.excluded_links
    return value
validate_class_keyword_lists
validate_class_keyword_lists(value, values)

Validate classes .skims, .toll, and .excluded_links values.

Source code in tm2py/config.py
@validator("classes")
def validate_class_keyword_lists(cls, value, values):
    """Validate classes .skims, .toll, and .excluded_links values."""
    if "tolls" not in values:
        return value
    avail_skims = [
        "time",
        "dist",
        "hovdist",
        "tolldist",
        "freeflowtime",
        "rlbty",
        "autotime",
    ]
    available_link_sets = ["is_sr", "is_sr2", "is_sr3", "is_auto_only"]
    avail_toll_attrs = []
    for name in values["tolls"].dst_vehicle_group_names:
        toll_types = [f"bridgetoll_{name}", f"valuetoll_{name}"]
        avail_skims.extend(toll_types)
        avail_toll_attrs.extend(["@" + name for name in toll_types])
        available_link_sets.append(f"is_toll_{name}")

    # validate class skim name list and toll attribute against toll setup
    def check_keywords(class_num, key, val, available):
        extra_keys = set(val) - set(available)
        error_msg = (
            f" -> {class_num} -> {key}: unrecognized {key} name(s): "
            f"{','.join(extra_keys)}.  Available names are: {', '.join(available)}"
        )
        assert not extra_keys, error_msg

    for i, highway_class in enumerate(value):
        check_keywords(i, "skim", highway_class["skims"], avail_skims)
        check_keywords(i, "toll", highway_class["toll"], avail_toll_attrs)
        check_keywords(
            i,
            "excluded_links",
            highway_class["excluded_links"],
            available_link_sets,
        )
    return value
TransitModeConfig

Transit mode definition (see also mode in the Emme API).

Functions
in_vehicle_perception_factor_valid
in_vehicle_perception_factor_valid(value, values)

Validate in_vehicle_perception_factor exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("in_vehicle_perception_factor", always=True)
def in_vehicle_perception_factor_valid(cls, value, values):
    """Validate in_vehicle_perception_factor exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
speed_or_time_factor_valid
speed_or_time_factor_valid(value, values)

Validate speed_or_time_factor exists if assign_type is AUX_TRANSIT.

Source code in tm2py/config.py
@validator("speed_or_time_factor", always=True)
def speed_or_time_factor_valid(cls, value, values):
    """Validate speed_or_time_factor exists if assign_type is AUX_TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "AUX_TRANSIT":
        assert value is not None, "must be specified when assign_type==AUX_TRANSIT"
    return value
initial_boarding_penalty_valid
initial_boarding_penalty_valid(value, values)

Validate initial_boarding_penalty exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("initial_boarding_penalty", always=True)
def initial_boarding_penalty_valid(value, values):
    """Validate initial_boarding_penalty exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
transfer_boarding_penalty_valid
transfer_boarding_penalty_valid(value, values)

Validate transfer_boarding_penalty exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("transfer_boarding_penalty", always=True)
def transfer_boarding_penalty_valid(value, values):
    """Validate transfer_boarding_penalty exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
headway_fraction_valid
headway_fraction_valid(value, values)

Validate headway_fraction exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("headway_fraction", always=True)
def headway_fraction_valid(value, values):
    """Validate headway_fraction exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
transfer_wait_perception_factor_valid
transfer_wait_perception_factor_valid(value, values)

Validate transfer_wait_perception_factor exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("transfer_wait_perception_factor", always=True)
def transfer_wait_perception_factor_valid(value, values):
    """Validate transfer_wait_perception_factor exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
mode_id_valid classmethod
mode_id_valid(value)

Validate mode_id.

Source code in tm2py/config.py
@classmethod
@validator("mode_id")
def mode_id_valid(cls, value):
    """Validate mode_id."""
    assert len(value) == 1, "mode_id must be one character"
    return value
TransitVehicleConfig

Transit vehicle definition (see also transit vehicle in the Emme API).

TransitClassConfig

Transit demand class definition.

ManualJourneyLevelsConfig

Manual Journey Level Specification

TransitJourneyLevelsConfig

Transit manual journey levels structure.

Attributes
use_algorithm class-attribute instance-attribute
use_algorithm: bool = False

The original translation from Cube to Emme used an algorithm to, as faithfully as possible, reflect transfer fares via journey levels. The algorithm examines fare costs and proximity of transit services to create a set of journey levels that reflects transfer costs. While this algorithm works well, the Bay Area's complex fare system results in numerous journey levels specific to operators with low ridership, so the resulting assignment expends substantial compute on these operators. Set this parameter to True to use the algorithm. Exactly one of use_algorithm or specify_manually must be True.

specify_manually class-attribute instance-attribute
specify_manually: bool = False

An alternative to using an algorithm to specify the journey levels is to specify them manually. If this option is set to True, the manual parameter can be used to assign fare systems to faresystem groups (or journey levels). Consider, for example, the following three journey levels: 0 - has yet to board transit; 1 - has boarded SF Muni; 2 - has boarded all other transit systems. To specify this configuration, a single manual entry identifying the SF Muni fare systems is needed. The other faresystem group is automatically generated in the code with the rest of the faresystems which are not specified in any of the groups. See the manual entry for an example.

manual class-attribute instance-attribute
manual: Optional[Tuple[ManualJourneyLevelsConfig, ...]] = (ManualJourneyLevelsConfig(level_id=1, group_fare_systems=(25,)),)

If 'specify_manually' is set to True, there should be at least one faresystem group specified here. The format includes two entries: level_id, which is the serial number of the group specified, and group_fare_systems, which is a list of all faresystems belonging to that group. For example, to specify MUNI as one faresystem group, the right configuration would be:

    [[transit.journey_levels.manual]]
    level_id = 1
    group_fare_systems = [25]

If multiple groups need to be specified, for example MUNI in one group and Caltrain in the other, this can be achieved by adding another manual entry:

    [[transit.journey_levels.manual]]
    level_id = 1
    group_fare_systems = [25]

    [[transit.journey_levels.manual]]
    level_id = 2
    group_fare_systems = [12, 14]

Functions
check_exclusivity
check_exclusivity(v, values)

Validates that exactly one of specify_manually and use_algorithm is True.

Source code in tm2py/config.py
@validator("specify_manually")
def check_exclusivity(cls, v, values):
    """Valdiates that exactly one of specify_manually and use_algorithm is True"""
    use_algorithm = values.get("use_algorithm")
    assert (
        use_algorithm != v
    ), 'Exactly one of "use_algorithm" or "specify_manually" must be True.'
    return v
AssignmentStoppingCriteriaConfig

Assignment stop configuration parameters.

CcrWeightsConfig

Weights for CCR Configuration.

CongestedWeightsConfig

Weights for Congested Transit Assignment Configuration.

EawtWeightsConfig

Weights for calculating extra added wait time Configuration.

CongestedTransitMaxIteration

Congested transit assignment time period specific max iteration parameters.

Properties
CongestedTransitStopCriteria

Congested transit assignment stopping criteria parameters.

Properties
CongestedAssnConfig

Congested transit assignment Configuration.

TransitConfig

Transit assignment parameters.

HighwayDistribution

Highway distribution run configuration. Use to enable distributing the assignment (running time periods in parallel).

Properties
NetworkSummaryConfig

Network Summary Configuration.

Properties
PostProcessorConfig

Post Processor Configuration.

EmmeConfig

Emme-specific parameters.

Properties
Configuration
Functions
load_toml classmethod
load_toml(toml_path: Union[List[Union[str, Path]], str, Path]) -> Configuration

Load configuration from .toml file(s).

Normally the config is split into a scenario_config.toml file and a model_config.toml file.

Parameters:

Name Type Description Default
toml_path Union[List[Union[str, Path]], str, Path]

A valid system path string or Path object to a TOML-format config file, or a list of such paths/Path objects to a set of TOML files.

required

Returns:

Type Description
Configuration

A Configuration object

Source code in tm2py/config.py
@classmethod
def load_toml(
    cls,
    toml_path: Union[List[Union[str, pathlib.Path]], str, pathlib.Path],
) -> "Configuration":
    """Load configuration from .toml files(s).

    Normally the config is split into a scenario_config.toml file and a
    model_config.toml file.

    Args:
        toml_path: a valid system path string or Path object to a TOML format config file or
            list of paths of path objects to a set of TOML files.

    Returns:
        A Configuration object
    """
    if not isinstance(toml_path, List):
        toml_path = [toml_path]
    toml_path = list(map(pathlib.Path, toml_path))

    print(f"DEBUG: Config files to load: {[str(p.resolve()) for p in toml_path]}")
    data = _load_toml(toml_path[0])
    print(f"DEBUG: Loaded base config from: {toml_path[0].resolve()}")
    for path_item in toml_path[1:]:
        print(f"DEBUG: Merging config from: {path_item.resolve()}")
        _merge_dicts(data, _load_toml(path_item))
    # Debug print for transit.modes[1].speed_or_time_factor
    try:
        modes = data.get('transit', {}).get('modes', [])
        if len(modes) > 1 and 'speed_or_time_factor' in modes[1]:
            val = modes[1]['speed_or_time_factor']
            print(f"DEBUG: transit.modes[1].speed_or_time_factor = {val!r} (type: {type(val)})")
    except Exception as e:
        print(f"DEBUG: Error inspecting transit.modes[1].speed_or_time_factor: {e}")
    return cls(**data)
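
A minimal usage sketch, following the convention noted above of splitting the config into scenario and model files:

    from tm2py.config import Configuration

    config = Configuration.load_toml(["scenario_config.toml", "model_config.toml"])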
maz_skim_period_exists
maz_skim_period_exists(value, values)

Validate highway.maz_to_maz.skim_period refers to a valid period.

Source code in tm2py/config.py
@validator("highway")
def maz_skim_period_exists(cls, value, values):
    """Validate highway.maz_to_maz.skim_period refers to a valid period."""
    if "time_periods" in values:
        time_period_names = set(time.name for time in values["time_periods"])
        assert (
            value.maz_to_maz.skim_period in time_period_names
        ), "maz_to_maz -> skim_period -> name not found in time_periods list"
    return value
relative_gap_length
relative_gap_length(value, values)

Validate highway.relative_gaps is a list of length greater or equal to global iterations.

Source code in tm2py/config.py
@validator("highway", always=True)
def relative_gap_length(cls, value, values):
    """Validate highway.relative_gaps is a list of length greater or equal to global iterations."""
    if "run" in values:
        assert len(value.relative_gaps) >= (
            values["run"]["end_iteration"] + 1
        ), f"'highway.relative_gaps must be the same or greater length as end_iteration+1,\
            that includes global iteration 0 to {values['run']['end_iteration']}'"
    return value
transit_stop_criteria_length
transit_stop_criteria_length(value, values)

Validate transit.congested.stop_criteria is a list of length greater or equal to global iterations.

Source code in tm2py/config.py
@validator("transit", always=True)
def transit_stop_criteria_length(cls, value, values):
    """Validate transit.congested.stop_criteria is a list of length greater or equal to global iterations."""
    if ("run" in values) & (value.congested_transit_assignment):
        assert len(value.congested.stop_criteria) >= (
            values["run"]["end_iteration"]
        ), f"'transit.stop_criteria must be the same or greater length as end_iteration,\
            that includes global iteration 1 to {values['run']['end_iteration']}'"
    return value
sample_rate_length
sample_rate_length(value, values)

Validate highway.sample_rate_by_iteration is a list of length greater or equal to global iterations.

Source code in tm2py/config.py
@validator("household", always=True)
def sample_rate_length(cls, value, values):
    """Validate highway.sample_rate_by_iteration is a list of length greater or equal to global iterations."""
    if "run" in values:
        assert len(value.sample_rate_by_iteration) >= (
            values["run"]["end_iteration"]
        ), f"'highway.sample_rate_by_iteration must be the same or greater length as end_iteration'"
    return value

Component-specific configurations are documented with their respective components below.


🏗️ Model Components

The core building blocks of the transportation modeling system, organized by functional area.

Base Component Framework

Abstract base classes and shared functionality for all model components.

tm2py.components.component

Root component ABC.

Classes
FileFormatError
FileFormatError(f, *args)

Bases: Exception

Exception raised when a file is not in the expected format.

Exception for invalid file formats.

Source code in tm2py/components/component.py
def __init__(self, f, *args):
    """Exception for invalid file formats."""
    super().__init__(args)
    self.f = f
Component
Component(controller: RunController)

Bases: ABC

Template for Component class with several built-in methods.

A component is a piece of the model that can be run independently (of other components) given the required input data and configuration. It communicates information to other components via disk I/O (including the emmebank).

Note: if the component needs data that is not written to disk, it would be considered a subcomponent.

Abstract Methods – each component class must have the following methods:

    __init__: constructor, which associates the RunController with the instantiated object
    run: run the component without any arguments
    validate_inputs: validate the inputs to the component
    report_progress: report progress to the user
    verify: verify the component's output
    write_top_sheet: write outputs to topsheet
    test_component: test the component

Template Class methods - component classes inherit
Template Class Properties - component classes inherit

class MyComponent(Component):

    def __init__(self, controller):
        super().__init__(controller)
        self._parameter = None

    def run(self):
        self._step1()
        self._step2()

    def _step1(self):
        pass

    def _step2(self):
        pass

Model component template/abstract base class.

Parameters:

Name Type Description Default
controller RunController

Reference to the run controller object.

required
Source code in tm2py/components/component.py
def __init__(self, controller: RunController):
    """Model component template/abstract base class.

    Args:
        controller (RunController): Reference to the run controller object.
    """
    self._controller = controller
    self._trace = None

    self._controller.logger.detail(f"Initializing component {type(self).__qualname__}")
Attributes
controller property
controller

Parent controller.

time_period_names property
time_period_names: List[str]

Return the list of time period names for this model run.

Implemented here for easy access for all components.

Returns: list of uppercased string names of time periods

time_period_durations property
time_period_durations: dict

Return mapping of time periods to durations in hours.

congested_transit_assn_max_iteration property
congested_transit_assn_max_iteration: dict

Return mapping of time periods to max iteration in congested transit assignment.

top_sheet property
top_sheet

Reference to top sheet.

logger property
logger

Reference to logger.

trace property
trace

Reference to trace.

Functions
get_abs_path
get_abs_path(path: Union[Path, str]) -> str

Convenience method to get the absolute path from the run directory.

Source code in tm2py/components/component.py
def get_abs_path(self, path: Union[Path, str]) -> str:
    """Convenince method to get absolute path from run directory."""
    if not os.path.isabs(path):
        return self.controller.get_abs_path(path).__str__()
    else:
        return path
validate_inputs abstractmethod
validate_inputs()

Validate inputs are correct at model initiation, raise on error.

Source code in tm2py/components/component.py
@abstractmethod
def validate_inputs(self):
    """Validate inputs are correct at model initiation, raise on error."""
run abstractmethod
run()

Run model component.

Source code in tm2py/components/component.py
@abstractmethod
def run(self):
    """Run model component."""
report_progress
report_progress()

Write progress to log file.

Source code in tm2py/components/component.py
def report_progress(self):
    """Write progress to log file."""
verify
verify()

Verify component outputs / results.

Source code in tm2py/components/component.py
def verify(self):
    """Verify component outputs / results."""
write_top_sheet
write_top_sheet()

Write key outputs to the model top sheet.

Source code in tm2py/components/component.py
def write_top_sheet(self):
    """Write key outputs to the model top sheet."""
Subcomponent
Subcomponent(controller: RunController, component: Component)

Bases: Component

Template for sub-component class.

A sub-component is a more loosely defined component that allows input into the run() method. It is used to break up larger processes into smaller chunks which can be: (1) re-used across components (i.e., toll choice), (2) updated or subbed into a parent component's run method based on the expected API, (3) easier to test, understand and debug, and (4) more consistent with the algorithms we understand from transportation planning 101. A minimal sketch follows.
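
A minimal sketch of a sub-component; the class name and the run() signature shown are hypothetical:

    class TollChoice(Subcomponent):
        """Hypothetical sub-component; takes demand as an input to run()."""

        def validate_inputs(self):
            pass

        def run(self, demand):
            # unlike a Component, a Subcomponent may accept arguments
            return demand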

Constructor for model sub-component abstract base class.

Only calls the super class constructor.

Parameters:

Name Type Description Default
controller RunController

Reference to the run controller object.

required
component Component

Reference to the parent component object.

required
Source code in tm2py/components/component.py
def __init__(self, controller: RunController, component: Component):
    """Constructor for model sub-component abstract base class.

    Only calls the super class constructor.

    Args:
        controller (RunController): Reference to the run controller object.
        component (Component): Reference to the parent component object.
    """
    super().__init__(controller)
    self.component = component
Functions
run abstractmethod
run(*args, **kwargs)

Run sub-component, allowing for multiple inputs.

Allowing for inputs to the run() method is what differentiates a sub-component from a component.

Source code in tm2py/components/component.py
@abstractmethod
def run(self, *args, **kwargs):
    """Run sub-component, allowing for multiple inputs.

    Allowing for inputs to the run() method is what differentiates a sub-component from
    a component.
    """

🚗 Demand Modeling Components

Components responsible for generating travel demand from various sources.

Demand Preparation

Foundational demand processing and preparation utilities.

tm2py.components.demand.prepare_demand

Demand loading from OMX to Emme database.

Classes
EmmeDemand
EmmeDemand(controller: RunController)

Abstract base class to import and average demand.

Constructor for PrepareDemand class.

Parameters:

Name Type Description Default
controller RunController

Run controller for the current run.

required
Source code in tm2py/components/demand/prepare_demand.py
def __init__(self, controller: RunController):
    """Constructor for PrepareDemand class.

    Args:
        controller (RunController): Run controller for the current run.
    """
    self.controller = controller
    self._emmebank = None
    self._scenario = None
    self._source_ref_key = None
Attributes
logger property
logger

Reference to logger.

PrepareHighwayDemand
PrepareHighwayDemand(controller: RunController)

Bases: EmmeDemand

Import and average highway demand.

Demand is imported from OMX files based on reference file paths and OMX matrix names in the highway assignment config (highway.classes). The demand is averaged using MSA with the current demand matrices (in the Emmebank) if controller.iteration > 1.

Parameters:

Name Type Description Default
controller RunController

parent RunController object

required

Constructor for PrepareHighwayDemand.

Parameters:

Name Type Description Default
controller RunController

Reference to run controller object.

required
Source code in tm2py/components/demand/prepare_demand.py
def __init__(self, controller: RunController):
    """Constructor for PrepareHighwayDemand.

    Args:
        controller (RunController): Reference to run controller object.
    """
    super().__init__(controller)
    self.controller = controller
    self.config = self.controller.config.highway
    self._highway_emmebank = None
Functions
run
run()

Open combined demand OMX files from demand models and prepare for assignment.

Source code in tm2py/components/demand/prepare_demand.py
def run(self):
    """Open combined demand OMX files from demand models and prepare for assignment."""

    self.highway_emmebank.create_zero_matrix()
    for time in self.controller.time_period_names:
        for klass in self.config.classes:
            self._prepare_demand(klass.name, klass.description, klass.demand, time)
prepare_household_demand
prepare_household_demand()

Prepares highway and transit household demand matrices from trip lists produced by CT-RAMP.

Source code in tm2py/components/demand/prepare_demand.py
@LogStartEnd("Prepare household demand matrices.")
def prepare_household_demand(self):
    """Prepares highway and transit household demand matrices from trip lists produced by CT-RAMP."""
    iteration = self.controller.iteration

    # Create folders if they don't exist
    pathlib.Path(
        self.controller.get_abs_path(
            self.controller.config.household.highway_demand_file
        )
    ).parents[0].mkdir(parents=True, exist_ok=True)
    pathlib.Path(
        self.controller.get_abs_path(
            self.controller.config.household.transit_demand_file
        )
    ).parents[0].mkdir(parents=True, exist_ok=True)
    #    pathlib.Path(self.controller.get_abs_path(self.controller.config.household.active_demand_file)).parents[0].mkdir(parents=True, exist_ok=True)

    indiv_trip_file = (
        self.controller.config.household.ctramp_indiv_trip_file.format(
            iteration=iteration
        )
    )
    joint_trip_file = (
        self.controller.config.household.ctramp_joint_trip_file.format(
            iteration=iteration
        )
    )
    it_full, jt_full = pd.read_csv(indiv_trip_file), pd.read_csv(joint_trip_file)

    # Add time period, expanded count
    time_period_start = dict(
        zip(
            [c.name.upper() for c in self.controller.config.time_periods],
            [c.start_period for c in self.controller.config.time_periods],
        )
    )
    # the last time period needs to be filled in because the first period may or may not start at midnight
    time_periods_sorted = sorted(
        time_period_start, key=lambda x: time_period_start[x]
    )  # in upper case
    first_period = time_periods_sorted[0]
    periods_except_last = time_periods_sorted[:-1]
    breakpoints = [time_period_start[tp] for tp in time_periods_sorted]
    it_full["time_period"] = (
        pd.cut(
            it_full.stop_period,
            breakpoints,
            right=False,
            labels=periods_except_last,
        )
        .cat.add_categories(time_periods_sorted[-1])
        .fillna(time_periods_sorted[-1])
        .astype(str)
    )
    jt_full["time_period"] = (
        pd.cut(
            jt_full.stop_period,
            breakpoints,
            right=False,
            labels=periods_except_last,
        )
        .cat.add_categories(time_periods_sorted[-1])
        .fillna(time_periods_sorted[-1])
        .astype(str)
    )
    it_full["eq_cnt"] = 1 / it_full.sampleRate
    it_full["eq_cnt"] = np.where(
        it_full["trip_mode"].isin([3, 4, 5]),
        0.5 * it_full["eq_cnt"],
        np.where(
            it_full["trip_mode"].isin([6, 7, 8]),
            0.35 * it_full["eq_cnt"],
            it_full["eq_cnt"],
        ),
    )
    jt_full["eq_cnt"] = jt_full.num_participants / jt_full.sampleRate
    zp_cav = self.controller.config.household.OwnedAV_ZPV_factor
    zp_tnc = self.controller.config.household.TNC_ZPV_factor

    maz_taz_df = self.controller.maz_data
    maz_taz_df = maz_taz_df[["MAZ", "TAZ"]]
    it_full = it_full.merge(
        maz_taz_df, left_on="orig_mgra", right_on="MAZ", how="left"
    ).rename(columns={"TAZ": "orig_taz"})
    it_full = it_full.merge(
        maz_taz_df, left_on="dest_mgra", right_on="MAZ", how="left"
    ).rename(columns={"TAZ": "dest_taz"})
    jt_full = jt_full.merge(
        maz_taz_df, left_on="orig_mgra", right_on="MAZ", how="left"
    ).rename(columns={"TAZ": "orig_taz"})
    jt_full = jt_full.merge(
        maz_taz_df, left_on="dest_mgra", right_on="MAZ", how="left"
    ).rename(columns={"TAZ": "dest_taz"})
    it_full["trip_mode"] = np.where(
        it_full["trip_mode"] == 14, 13, it_full["trip_mode"]
    )
    jt_full["trip_mode"] = np.where(
        jt_full["trip_mode"] == 14, 13, jt_full["trip_mode"]
    )

    num_zones = self.num_internal_zones
    OD_full_index = pd.MultiIndex.from_product(
        [range(1, num_zones + 1), range(1, num_zones + 1)]
    )

    def combine_trip_lists(it, jt, trip_mode):
        # combines individual trip list and joint trip list
        combined_trips = pd.concat(
            [it[(it["trip_mode"] == trip_mode)], jt[(jt["trip_mode"] == trip_mode)]]
        )
        combined_sum = combined_trips.groupby(["orig_taz", "dest_taz"])[
            "eq_cnt"
        ].sum()
        return combined_sum.reindex(OD_full_index, fill_value=0).unstack().values

    def create_zero_passenger_trips(
        trips, deadheading_factor, trip_modes=[1, 2, 3]
    ):
        zpv_trips = trips.loc[
            (trips["avAvailable"] == 1) & (trips["trip_mode"].isin(trip_modes))
        ]
        zpv_trips["eq_cnt"] = zpv_trips["eq_cnt"] * deadheading_factor
        zpv_trips = zpv_trips.rename(
            columns={"dest_taz": "orig_taz", "orig_taz": "dest_taz"}
        )
        return zpv_trips

    # create zero passenger trips for auto modes
    if it_full["avAvailable"].sum() > 0:
        it_zpav_trp = create_zero_passenger_trips(
            it_full, zp_cav, trip_modes=[1, 2, 3]
        )
        it_zptnc_trp = create_zero_passenger_trips(it_full, zp_tnc, trip_modes=[9])
        # Combining zero passenger trips to trip files
        it_full = pd.concat(
            [it_full, it_zpav_trp, it_zptnc_trp], ignore_index=True
        ).reset_index(drop=True)

    if jt_full["avAvailable"].sum() > 0:
        jt_zpav_trp = create_zero_passenger_trips(
            jt_full, zp_cav, trip_modes=[1, 2, 3]
        )
        jt_zptnc_trp = create_zero_passenger_trips(jt_full, zp_tnc, trip_modes=[9])
        # Combining zero passenger trips to trip files
        jt_full = pd.concat(
            [jt_full, jt_zpav_trp, jt_zptnc_trp], ignore_index=True
        ).reset_index(drop=True)

    # read properties from config

    mode_name_dict = self.controller.config.household.ctramp_mode_names
    income_segment_config = self.controller.config.household.income_segment

    if income_segment_config["enabled"]:
        # This only affects highway trip tables.

        hh_file = self.controller.config.household.ctramp_hh_file.format(
            iteration=iteration
        )
        hh = pd.read_csv(hh_file, usecols=["hh_id", "income"])
        it_full = it_full.merge(hh, on="hh_id", how="left")
        jt_full = jt_full.merge(hh, on="hh_id", how="left")

        suffixes = income_segment_config["segment_suffixes"]

        it_full["income_seg"] = pd.cut(
            it_full["income"],
            right=False,
            bins=income_segment_config["cutoffs"] + [float("inf")],
            labels=suffixes,
        ).astype(str)

        jt_full["income_seg"] = pd.cut(
            jt_full["income"],
            right=False,
            bins=income_segment_config["cutoffs"] + [float("inf")],
            labels=suffixes,
        ).astype(str)
    else:
        it_full["income_seg"] = ""
        jt_full["income_seg"] = ""
        suffixes = [""]

    # groupby objects for combinations of time period - income segmentation, used for highway modes only
    it_grp = it_full.groupby(["time_period", "income_seg"])
    jt_grp = jt_full.groupby(["time_period", "income_seg"])

    for time_period in time_periods_sorted:
        self.logger.debug(
            f"Producing household demand matrices for period {time_period}"
        )

        highway_out_file = OMXManager(
            self.controller.get_abs_path(
                self.controller.config.household.highway_demand_file
            )
            .__str__()
            .format(period=time_period, iter=self.controller.iteration),
            "w",
        )
        transit_out_file = OMXManager(
            self.controller.get_abs_path(
                self.controller.config.household.transit_demand_file
            )
            .__str__()
            .format(period=time_period, iter=self.controller.iteration),
            "w",
        )
        # active_out_file = OMXManager(
        #    self.controller.get_abs_path(self.controller.config.household.active_demand_file).__str__().format(period=time_period), 'w')

        # hsr_trips_file = _omx.open_file(
        #    self.controller.get_abs_path(self.controller.config.household.hsr_demand_file).format(year=self.controller.config.scenario.year, period=time_period))

        # interregional_trips_file = _omx.open_file(
        #   self.controller.get_abs_path(self.controller.config.household.interregional_demand_file).format(year=self.controller.config.scenario.year, period=time_period))

        highway_out_file.open()
        transit_out_file.open()
        # active_out_file.open()

        # Transit and active modes: one matrix per time period per mode
        it = it_full[it_full.time_period == time_period]
        jt = jt_full[jt_full.time_period == time_period]

        for trip_mode in mode_name_dict:
            #                if trip_mode in [9,10]:
            #                    matrix_name =  mode_name_dict[trip_mode]
            #                    self.logger.debug(f"Writing out mode {mode_name_dict[trip_mode]}")
            #                    active_out_file.write_array(numpy_array=combine_trip_lists(it,jt, trip_mode), name = matrix_name)

            if trip_mode == 11:
                matrix_name = "WLK_TRN_WLK"
                self.logger.debug(f"Writing out mode WLK_TRN_WLK")
                # other_trn_trips = np.array(hsr_trips_file[matrix_name])+np.array(interregional_trips_file[matrix_name])
                transit_out_file.write_array(
                    numpy_array=(combine_trip_lists(it, jt, trip_mode)),
                    name=matrix_name,
                )

            elif trip_mode in [12, 13]:
                it_outbound, it_inbound = it[it.inbound == 0], it[it.inbound == 1]
                jt_outbound, jt_inbound = jt[jt.inbound == 0], jt[jt.inbound == 1]

                matrix_name = f"{mode_name_dict[trip_mode].upper()}_TRN_WLK"
                # other_trn_trips = np.array(hsr_trips_file[matrix_name])+np.array(interregional_trips_file[matrix_name])
                self.logger.debug(
                    f"Writing out mode {mode_name_dict[trip_mode].upper() + '_TRN_WLK'}"
                )
                transit_out_file.write_array(
                    numpy_array=(
                        combine_trip_lists(it_outbound, jt_outbound, trip_mode)
                    ),
                    name=matrix_name,
                )

                matrix_name = f"WLK_TRN_{mode_name_dict[trip_mode].upper()}"
                # other_trn_trips = np.array(hsr_trips_file[matrix_name])+np.array(interregional_trips_file[matrix_name])
                self.logger.debug(
                    f"Writing out mode {'WLK_TRN_' + mode_name_dict[trip_mode].upper()}"
                )
                transit_out_file.write_array(
                    numpy_array=(
                        combine_trip_lists(it_inbound, jt_inbound, trip_mode)
                    ),
                    name=matrix_name,
                )

        # Highway modes: one matrix per suffix (income class) per time period per mode
        for suffix in suffixes:
            highway_cache = {}

            if (time_period, suffix) in it_grp.groups.keys():
                it = it_grp.get_group((time_period, suffix))
            else:
                it = pd.DataFrame(None, columns=it_full.columns)

            if (time_period, suffix) in jt_grp.groups.keys():
                jt = jt_grp.get_group((time_period, suffix))
            else:
                jt = pd.DataFrame(None, columns=jt_full.columns)

            for trip_mode in sorted(mode_name_dict):
                # Python preserves keys in the order they are inserted but
                # mode_name_dict originates from TOML, which does not guarantee
                # that the ordering of keys is preserved.  See
                # https://github.com/toml-lang/toml/issues/162

                if trip_mode in [
                    1,
                    2,
                    3,
                    4,
                    5,
                    6,
                    7,
                    8,
                    9,
                    10,
                    15,
                    16,
                    17,
                ]:  # currently hard-coded based on Travel Mode trip mode codes
                    highway_cache[mode_name_dict[trip_mode]] = combine_trip_lists(
                        it, jt, trip_mode
                    )
                    out_mode = f"{mode_name_dict[trip_mode].upper()}"
                    matrix_name = (
                        f"{out_mode}_{suffix}_{time_period.upper()}"
                        if suffix
                        else f"{out_mode}_{time_period.upper()}"
                    )
                    highway_out_file.write_array(
                        numpy_array=highway_cache[mode_name_dict[trip_mode]],
                        name=matrix_name,
                    )

                elif trip_mode in [15, 16]:
                    # identify the correct mode split factors for da, sr2, sr3
                    self.logger.debug(
                        f"Splitting ridehail trips into shared ride trips"
                    )
                    ridehail_split_factors = defaultdict(float)
                    splits = self.controller.config.household.rideshare_mode_split
                    for key in splits:
                        out_mode_split = self.controller.config.household.__dict__[
                            f"{key}_split"
                        ]
                        for out_mode in out_mode_split:
                            ridehail_split_factors[out_mode] += (
                                out_mode_split[out_mode] * splits[key]
                            )

                    ridehail_trips = combine_trip_lists(it, jt, trip_mode)
                    for out_mode in ridehail_split_factors:
                        matrix_name = f"{out_mode}_{suffix}" if suffix else out_mode
                        self.logger.debug(f"Writing out mode {out_mode}")
                        highway_cache[out_mode] += (
                            (ridehail_trips * ridehail_split_factors[out_mode])
                            .astype(float)
                            .round(2)
                        )
                        highway_out_file.write_array(
                            numpy_array=highway_cache[out_mode], name=matrix_name
                        )

        highway_out_file.close()
        transit_out_file.close()
PrepareTransitDemand
PrepareTransitDemand(controller: 'RunController')

Bases: EmmeDemand

Import transit demand.

Demand is imported from OMX files based on reference file paths and OMX matrix names in the transit assignment config (transit.classes). The demand is averaged using MSA with the current demand matrices (in the Emmebank) if transit.apply_msa_demand is true and controller.iteration > 1.

Constructor for PrepareTransitDemand.

Parameters:

Name Type Description Default
controller 'RunController'

RunController object.

required
Source code in tm2py/components/demand/prepare_demand.py
def __init__(self, controller: "RunController"):
    """Constructor for PrepareTransitDemand.

    Args:
        controller: RunController object.
    """
    super().__init__(controller)
    self.controller = controller
    self.config = self.controller.config.transit
    self._transit_emmebank = None
Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/prepare_demand.py
def validate_inputs(self):
    """Validate the inputs."""
run
run()

Open combined demand OMX files from demand models and prepare for assignment.

Source code in tm2py/components/demand/prepare_demand.py
@LogStartEnd("Prepare transit demand")
def run(self):
    """Open combined demand OMX files from demand models and prepare for assignment."""
    self._source_ref_key = "transit_demand_file"
    self.transit_emmebank.create_zero_matrix()
    _time_period_tclass = itertools.product(
        self.controller.time_period_names, self.config.classes
    )
    for _time_period, _tclass in _time_period_tclass:
        self._prepare_demand(
            _tclass.skim_set_id, _tclass.description, _tclass.demand, _time_period
        )
Functions
avg_matrix_msa
avg_matrix_msa(prev_avg_matrix: NumpyArray, this_iter_matrix: NumpyArray, msa_iteration: int) -> NumpyArray

Average matrices based on Method of Successive Averages (MSA).

Parameters:

Name Type Description Default
prev_avg_matrix NumpyArray

Previously averaged matrix

required
this_iter_matrix NumpyArray

Matrix for this iteration

required
msa_iteration int

MSA iteration

required

Returns:

Name Type Description
NumpyArray NumpyArray

MSA Averaged matrix for this iteration.

Source code in tm2py/components/demand/prepare_demand.py
def avg_matrix_msa(
    prev_avg_matrix: NumpyArray, this_iter_matrix: NumpyArray, msa_iteration: int
) -> NumpyArray:
    """Average matrices based on Method of Successive Averages (MSA).

    Args:
        prev_avg_matrix (NumpyArray): Previously averaged matrix
        this_iter_matrix (NumpyArray): Matrix for this iteration
        msa_iteration (int): MSA iteration

    Returns:
        NumpyArray: MSA Averaged matrix for this iteration.
    """
    if msa_iteration < 1:
        return this_iter_matrix
    result_matrix = prev_avg_matrix + (1.0 / msa_iteration) * (
        this_iter_matrix - prev_avg_matrix
    )
    return result_matrix
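
A minimal usage sketch of avg_matrix_msa with illustrative arrays:

```
import numpy as np

from tm2py.components.demand.prepare_demand import avg_matrix_msa

prev_avg = np.full((3, 3), 10.0)   # previously averaged demand
this_iter = np.full((3, 3), 20.0)  # demand from the current iteration

# On iteration 2 the running average moves halfway toward the new matrix.
print(avg_matrix_msa(prev_avg, this_iter, msa_iteration=2))  # all values 15.0
# Below iteration 1 the new matrix is returned unchanged.
print(avg_matrix_msa(prev_avg, this_iter, msa_iteration=0))  # returns this_iter
```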

Household Travel Demand

Personal travel demand generated by household members, including work, school, and discretionary trips.

Key Features

  • Activity-based modeling approach
  • Tour and trip generation
  • Mode choice modeling
  • Time-of-day distribution

tm2py.components.demand.household

Placeholder docstring for CT-RAMP related components for household residents’ model.

Classes
HouseholdModel
HouseholdModel(controller: RunController)

Bases: Component

Run household resident model.

Source code in tm2py/components/component.py
def __init__(self, controller: RunController):
    """Model component template/abstract base class.

    Args:
        controller (RunController): Reference to the run controller object.
    """
    self._controller = controller
    self._trace = None

    self._controller.logger.detail(f"Initializing component {type(self).__qualname__}")
Functions
validate_inputs
validate_inputs()

Validates inputs for component.

Source code in tm2py/components/demand/household.py
def validate_inputs(self):
    """Validates inputs for component."""
    pass
run
run()

Run the household resident travel demand model.

Steps
  1. Starts household manager.
  2. Starts matrix manager.
  3. Starts resident travel model (CTRAMP).
  4. Cleans up CTRAMP java.
Source code in tm2py/components/demand/household.py
@LogStartEnd()
def run(self):
    """Run the the household resident travel demand model.

    Steps:
        1. Starts household manager.
        2. Starts matrix manager.
        3. Starts resident travel model (CTRAMP).
        4. Cleans up CTRAMP java.
    """
    self.config = self.controller.config.household
    self._start_household_manager()
    self._start_matrix_manager()
    self._run_resident_model()
    self._stop_java()
    # consume ctramp person trip list and create trip tables for assignment
    self._prepare_demand_for_assignment()
    self._copy_auto_maz_demand()
Functions

Configuration:

tm2py.config.HouseholdConfig

Household (residents) model parameters.

Air Passenger Demand

Airport access and egress travel demand modeling.

tm2py.components.demand.air_passenger

Module containing the AirPassenger class which builds the airport trip matrices.

Classes
AirPassenger
AirPassenger(controller: RunController)

Bases: Component

Builds the airport trip matrices.

Input: nonres/{year}_{tofrom}{airport}.csv
Output: five time-of-day-specific OMX files with matrices DA, SR2, SR3

Notes: These are independent of level-of-service.

Note that the reference names, years, file paths and other key details are controlled via the config, air_passenger section. See the AirPassengerConfig doc for details on specifying these inputs.

The following details are based on the default config values.

Creates air passenger vehicle trip tables for the Bay Area’s three major airports, namely SFO, OAK, and SJC. Geoff Gosling, a consultant, created vehicle trip tables segmented by time of day, travel mode, and access/egress direction (i.e. to the airport or from the airport) for years 2007 and 2035. The tables are based on a 2006 Air Passenger survey, which was conducted at SFO and OAK (but not SJC).

The travel modes are as follows

(a) escort (drive alone, shared ride 2, and shared ride 3+); (b) park (da, sr2, & sr3+); (c) rental car (da, sr2, & sr3+); (d) taxi (da, sr2, & sr3+); (e) limo (da, sr2, & sr3+); (f) shared ride van (all assumed to be sr3); (g) hotel shuttle (all assumed to be sr3); and, (h) charter bus (all assumed to be sr3).

The shared ride van, hotel shuttle, and charter bus modes are assumed to have no deadhead travel. The return escort trip is included, as are the deadhead limo and taxi trips.

The script reads in CSV files adapted from Mr. Gosling’s Excel files and creates a highway-assignment-ready OMX matrix file for each time-of-day interval.

Assumes that no air passengers use HOT lanes (probably not exactly true in certain future year scenarios, but the assumption is made here as a simplification). Simple linear interpolations are used to estimate vehicle demand in years other than 2007 and 2035, including 2015, 2020, 2025, 2030, and 2040.

Transit travel to the airports is not included in these vehicle trip tables.

Input

Year-, access/egress-, and airport-specific database file with 90 columns of data for each TAZ. There are 18 columns for each time-of-day interval as follows: (1) Escort, drive alone (2) Escort, shared ride 2 (3) Escort, shared ride 3+ (4) Park, drive alone (5) Park, shared ride 2 (6) Park, shared ride 3+ (7) Rental car, drive alone (8) Rental car, shared ride 2 (9) Rental car, shared ride 3+ (10) Taxi, drive alone (11) Taxi, shared ride 2 (12) Taxi, shared ride 3+ (13) Limo, drive alone (14) Limo, shared ride 2 (15) Limo, shared ride 3+ (16) Shared ride van, shared ride 3+ (17) Hotel shuttle, shared ride 3+ (18) Charter bus, shared ride 3+

Five time-of-day-specific tables, each containing origin/destination vehicle matrices for the following modes: (1) drive alone (DA) (2) shared ride 2 (SR2) (3) shared ride 3+ (SR3)

Internal properties

_start_year, _end_year, _mode_groups, _out_names

Build the airport trip matrices.

Parameters:

Name Type Description Default
controller RunController

parent Controller object

required
Source code in tm2py/components/demand/air_passenger.py
def __init__(self, controller: RunController):
    """Build the airport trip matrices.

    Args:
        controller: parent Controller object
    """
    super().__init__(controller)

    self.config = self.controller.config.air_passenger

    self.start_year = self.config.reference_start_year
    self.end_year = self.config.reference_end_year
    self.scenario_year = self.controller.config.scenario.year

    self.airports = self.controller.config.air_passenger.airport_names

    self._demand_classes = None
    self._access_mode_groups = None
    self._class_modes = None
Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/air_passenger.py
def validate_inputs(self):
    """Validate the inputs."""
    # TODO
    pass
run
run()

Run the Air Passenger Demand model to generate the demand matrices.

Steps
  1. Load the demand data from the CSV files.
  2. Aggregate the demand data into the assignable classes.
  3. Create the demand matrices by interpolating the demand data.
  4. Write the demand matrices to OMX files.
Source code in tm2py/components/demand/air_passenger.py
@LogStartEnd()
def run(self):
    """Run the Air Passenger Demand model to generate the demand matrices.

    Steps:
        1. Load the demand data from the CSV files.
        2. Aggregate the demand data into the assignable classes.
        3. Create the demand matrices be interpolating the demand data.
        4. Write the demand matrices to OMX files.
    """

    input_demand = self._load_air_pax_demand()
    aggr_demand = self._aggregate_demand(input_demand)

    demand = interpolate_dfs(
        aggr_demand,
        [self.start_year, self.end_year],
        self.scenario_year,
    )
    self._export_result(demand)
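
A minimal sketch of the straight-line interpolation idea behind step 3, using plain NumPy with illustrative values rather than the component's interpolate_dfs helper:

```
import numpy as np

start_year, end_year, scenario_year = 2007, 2035, 2020
demand_start = np.array([[100.0, 50.0], [40.0, 10.0]])  # illustrative reference-year tables
demand_end = np.array([[180.0, 90.0], [60.0, 30.0]])

# Linear interpolation between the two reference years for the scenario year.
weight = (scenario_year - start_year) / (end_year - start_year)
demand_scenario = demand_start + weight * (demand_end - demand_start)
```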
Functions

Configuration:

tm2py.config.AirPassengerDemandAggregationConfig

Air passenger demand aggregation input parameters.

Properties

Commercial Vehicle Demand

Freight and commercial vehicle trip generation and distribution.

Commercial Vehicle Types

  • Light commercial vehicles
  • Medium trucks
  • Heavy trucks
  • Delivery vehicles

tm2py.components.demand.commercial

Commercial vehicle / truck model module.

Classes
CommercialVehicleModel
CommercialVehicleModel(controller: RunController)

Bases: Component

Commercial Vehicle demand model.

Generates truck demand matrices from
  • land use
  • highway network impedances
  • parameters
Segmented into four truck types

(1) very small trucks (two-axle, four-tire), (2) small trucks (two-axle, six-tire), (3) medium trucks (three-axle), (4) large or combination (four or more axle) trucks.

(1) MAZ csv data file with the employment and household counts.

(2) Skims (3) K-Factors (4)

Notes: (1) Based on the BAYCAST truck model, no significant updates. (2) Combined Chuck’s calibration adjustments into the NAICS-based model coefficients.

Constructor for the CommercialVehicleTripGeneration component.

Parameters:

Name Type Description Default
controller RunController

Run controller for model run.

required
Source code in tm2py/components/demand/commercial.py
def __init__(self, controller: RunController):
    """Constructor for the CommercialVehicleTripGeneration component.

    Args:
        controller (RunController): Run controller for model run.
    """
    super().__init__(controller)

    self.config = self.controller.config.truck
    self.sub_components = {
        "trip generation": CommercialVehicleTripGeneration(controller, self),
        "trip distribution": CommercialVehicleTripDistribution(controller, self),
        "time of day": CommercialVehicleTimeOfDay(controller, self),
        "toll choice": CommercialVehicleTollChoice(controller, self),
    }

    self.trk_impedances = {imp.name: imp for imp in self.config.impedances}

    # Emme matrix management (lazily evaluated)
    self._matrix_cache = None

    # Interim Results
    self.total_tripends_df = None
    self.daily_demand_dict = None
    self.trkclass_tp_demand_dict = None
    self.trkclass_tp_toll_demand_dict = None
Attributes
emmebank property
emmebank

Reference to highway assignment Emmebank.

TODO This should really be in the controller? Or part of network.skims?

emme_scenario property
emme_scenario

Return emme scenario from emmebank.

Use first valid scenario for reference Zone IDs.

TODO This should really be in the controller? Or part of network.skims?

matrix_cache property
matrix_cache

Access to MatrixCache to Emmebank for given emme_scenario.

Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/commercial.py
def validate_inputs(self):
    """Validate the inputs."""
run
run()

Run commercial vehicle model.

Source code in tm2py/components/demand/commercial.py
@LogStartEnd()
def run(self):
    """Run commercial vehicle model."""
    self.total_tripends_df = self.sub_components["trip generation"].run()
    self.daily_demand_dict = self.sub_components["trip distribution"].run(
        self.total_tripends_df
    )
    self.trkclass_tp_demand_dict = self.sub_components["time of day"].run(
        self.daily_demand_dict
    )
    self.trkclass_tp_toll_demand_dict = self.sub_components["toll choice"].run(
        self.trkclass_tp_demand_dict
    )
    self._export_results_as_omx(self.trkclass_tp_toll_demand_dict)
CommercialVehicleTripGeneration
CommercialVehicleTripGeneration(controller: RunController, component: Component)

Bases: Subcomponent

Commercial vehicle (truck) Trip Generation for 4 sizes of truck.

The four truck types are

(1) very small trucks (two-axle, four-tire), (2) small trucks (two-axle, six-tire), (3) medium trucks (three-axle), (4) large or combination (four or more axle) trucks.

Trip generation

Use linear regression models to generate trip ends, balancing attractions to productions. Based on BAYCAST truck model.

The truck trip generation models for small trucks (two-axle, six tire), medium trucks (three-axle), and large or combination (four or more axle) trucks are taken directly from the study: “I-880 Intermodal Corridor Study: Truck Travel in the San Francisco Bay Area”, prepared by Barton Aschman in December 1992. The coefficients are on page 223 of this report.

The very small truck generation model is based on the Phoenix four-tire truck model documented in the TMIP Quick Response Freight Manual.

Note that certain production models previously used SIC-based employment categories. To both maintain consistency with the BAYCAST truck model and update the model to use NAICS-based employment categories, new regression models were estimated relating the NAICS-based employment data with the SIC-based-predicted trips. The goal here is not to create a new truck model, but to mimic the old model with the available data. Please see the excel spreadsheet TruckModel.xlsx for details. The NAICS-based model results replicate the SIC-based model results quite well.

Constructor for the CommercialVehicleTripGeneration component.

Parameters:

Name Type Description Default
controller RunController

Run controller for model run.

required
component Component

Parent component of sub-component

required
Source code in tm2py/components/demand/commercial.py
def __init__(self, controller: RunController, component: Component):
    """Constructor for the CommercialVehicleTripGeneration component.

    Args:
        controller (RunController): Run controller for model run.
        component (Component): Parent component of sub-component
    """
    super().__init__(controller, component)
    self.config = self.component.config.trip_gen
Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/commercial.py
def validate_inputs(self):
    """Validate the inputs."""
    # TODO
    pass
run
run()

Run commercial vehicle trip distribution.

Source code in tm2py/components/demand/commercial.py
@LogStartEnd()
def run(self):
    """Run commercial vehicle trip distribution."""
    _landuse_df = self._aggregate_landuse()
    _unbalanced_tripends_df = self._generate_trip_ends(_landuse_df)
    _balanced_tripends_df = self._balance_pa(_unbalanced_tripends_df)
    total_tripends_df = self._aggregate_by_class(_balanced_tripends_df)
    return total_tripends_df
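
As a rough illustration of the balancing step described above, the sketch below scales zone attractions so their regional total matches the production total. Column names and values are hypothetical; the component's own balancing logic may differ in detail.

```
import pandas as pd

# Hypothetical unbalanced trip ends by zone.
tripends = pd.DataFrame(
    {"prod": [120.0, 80.0, 40.0], "attr": [100.0, 90.0, 20.0]}, index=[1, 2, 3]
)

# Scale attractions so their total equals the production total.
tripends["attr_balanced"] = (
    tripends["attr"] * tripends["prod"].sum() / tripends["attr"].sum()
)
assert abs(tripends["attr_balanced"].sum() - tripends["prod"].sum()) < 1e-9
```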
CommercialVehicleTripDistribution
CommercialVehicleTripDistribution(controller: RunController, component: Component)

Bases: Subcomponent

Commercial vehicle (truck) Trip Distribution for 4 sizes of truck.

The four truck types are

(1) very small trucks (two-axle, four-tire), (2) small trucks (two-axle, six-tire), (3) medium trucks (three-axle), (4) large or combination (four or more axle) trucks.

(1) Trips by 4 truck sizes

(2) highway skims for truck, time, distance, bridgetoll and value toll
(3) friction factors lookup table
(4) k-factors matrix

A simple gravity model is used to distribute the truck trips, with separate friction factors used for each class of truck.

A blended travel time is used as the impedance measure, specifically the weighted average of the AM travel time (one-third weight) and the midday travel time (two-thirds weight).
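
A minimal sketch of that blending, assuming illustrative AM and midday time skims:

```
import numpy as np

am_time = np.array([[0.0, 12.0], [14.0, 0.0]])  # illustrative AM peak times (minutes)
md_time = np.array([[0.0, 9.0], [10.0, 0.0]])   # illustrative midday times (minutes)

# One-third weight on AM, two-thirds weight on midday.
blended_time = (1.0 / 3.0) * am_time + (2.0 / 3.0) * md_time
```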

Input

Level-of-service matrices for the AM peak period (6 am to 10 am) and midday period (10 am to 3 pm) which contain truck-class specific estimates of congested travel time (in minutes)

A matrix of k-factors, as calibrated by Chuck Purvis. Note the very small truck model does not use k-factors; the small, medium, and large trucks use the same k-factors.

A table of friction factors in text format with the following fields, space separated: - impedance measure (blended travel time); - friction factors for very small trucks; - friction factors for small trucks; - friction factors for medium trucks; and, - friction factors for large trucks.

Notes on distribution steps

  • Load nonres/truck_kfactors_taz.csv and nonres/truckFF.dat.
  • Apply friction factors and K-factors to produce the balancing matrix.
  • Apply the gravity models using friction factors from nonres/truckFF.dat (note: the very small trucks do not use the K-factors).
  • Emme matrix balancing can be used for this step; important note: reference matrices by name and ensure names are unique.
  • Trips are rounded to 0.01, which causes some instability in results.

A plain-NumPy sketch of this balancing idea follows below.
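
The sketch shows the gravity/balancing idea with simple iterative proportional fitting; the actual component uses Emme matrix balancing, and the friction values, productions, and attractions here are made up.

```
import numpy as np

friction = np.array([[1.0, 0.4, 0.2], [0.4, 1.0, 0.5], [0.2, 0.5, 1.0]])  # seed
productions = np.array([100.0, 60.0, 40.0])
attractions = np.array([90.0, 70.0, 40.0])

trips = friction.copy()
for _ in range(50):  # alternate row and column scaling until (near) convergence
    trips *= (productions / trips.sum(axis=1))[:, np.newaxis]
    trips *= attractions / trips.sum(axis=0)
```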

Notes: (1) Based on the BAYCAST truck model, no significant updates. (2) Combined Chuck’s calibration adjustments into the NAICS-based model coefficients.

Constructor for the CommercialVehicleTripDistribution component.

Parameters:

Name Type Description Default
controller RunController

Run controller for model run.

required
component Component

Parent component of sub-component

required
Source code in tm2py/components/demand/commercial.py
def __init__(self, controller: RunController, component: Component):
    """Constructor for the CommercialVehicleTripDistribution component.

    Args:
        controller (RunController): Run controller for model run.
        component (Component): Parent component of sub-component
    """
    super().__init__(controller, component)

    self.config = self.component.config.trip_dist
    self._k_factors = None
    self._blended_skims = {}
    self._friction_factors = None
    self._friction_factor_matrices = {}

    self._class_config = None
Attributes
k_factors property
k_factors

Zone-to-zone values of truck K factors.

Returns:

Name Type Description
NumpyArray

Zone-to-zone values of truck K factors.

friction_factors property
friction_factors

Table of friction factors for each time band by truck class.

Returns:

Type Description

pd.DataFrame: DataFrame of friction factors read from disk.

Functions
blended_skims
blended_skims(mode: str)

Get blended skim. Creates it if it doesn’t already exist.

Parameters:

Name Type Description Default
mode str

Mode for skim

required

Returns:

Name Type Description
_type_

description

Source code in tm2py/components/demand/commercial.py
def blended_skims(self, mode: str):
    """Get blended skim. Creates it if doesn't already exist.

    Args:
        mode (str): Mode for skim

    Returns:
        _type_: _description_
    """
    if mode not in self._blended_skims:
        self._blended_skims[mode] = get_blended_skim(
            self.controller,
            mode=mode,
            blend=self.component.trk_impedances[mode]["time_blend"],
        )
    return self._blended_skims[mode]
friction_factor_matrices
friction_factor_matrices(trk_class: str, k_factors: Union[None, NumpyArray] = None) -> NumpyArray

Zone to zone NumpyArray of impedances for a given truck class.

Parameters:

Name Type Description Default
trk_class str

Truck class abbreviated name

required
k_factors Union[None, NumpyArray]

If not None, gives a zone-by-zone array of k-factors: additive impedances to be added on top of friction factors. Defaults to None.

None

Returns:

Name Type Description
NumpyArray NumpyArray

Zone-by-zone matrix of friction factors

Source code in tm2py/components/demand/commercial.py
def friction_factor_matrices(
    self, trk_class: str, k_factors: Union[None, NumpyArray] = None
) -> NumpyArray:
    """Zone to zone NumpyArray of impedances for a given truck class.

    Args:
        trk_class (str): Truck class abbreviated name
        k_factors (Union[None,NumpyArray]): If not None, gives an zone-by-zone array of
            k-factors--additive impedances to be added on top of friciton factors.
            Defaults to None.

    Returns:
        NumpyArray: Zone-by-zone matrix of friction factors
    """
    if trk_class not in self._friction_factor_matrices.keys():
        self._friction_factor_matrices[
            trk_class
        ] = self._calculate_friction_factor_matrix(
            trk_class,
            self.class_config[trk_class].impedance,
            self.k_factors,
            self.class_config[trk_class].use_k_factors,
        )

    return self._friction_factor_matrices[trk_class]
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/commercial.py
def validate_inputs(self):
    """Validate the inputs."""
    # TODO
    pass
run
run(tripends_df) -> Dict[str, NumpyArray]

Run commercial vehicle trip distribution.

Source code in tm2py/components/demand/commercial.py
@LogStartEnd()
def run(self, tripends_df) -> Dict[str, NumpyArray]:
    """Run commercial vehicle trip distribution."""
    daily_demand_dict = {
        tc: self._distribute_ods(tripends_df, tc) for tc in self.component.classes
    }

    return daily_demand_dict
CommercialVehicleTimeOfDay
CommercialVehicleTimeOfDay(controller: RunController, component: Component)

Bases: Subcomponent

Commercial vehicle (truck) Time of Day Split for 4 sizes of truck.

Input: trip origin and destination matrices by 4 truck sizes. Output: 20 trip origin and destination matrices (4 truck sizes by 5 time periods).

Note

The diurnal factors are taken from the BAYCAST-90 model, with adjustments made during calibration to the very small truck values to better match counts.

Constructor for the CommercialVehicleTimeOfDay component.

Parameters:

Name Type Description Default
controller RunController

Run controller for model run.

required
component Component

Parent component of sub-component

required
Source code in tm2py/components/demand/commercial.py
def __init__(self, controller: RunController, component: Component):
    """Constructor for the CommercialVehicleTimeOfDay component.

    Args:
        controller (RunController): Run controller for model run.
        component (Component): Parent component of sub-component
    """
    super().__init__(controller, component)

    self.config = self.component.config.time_of_day

    self.split_factor = "od"
    self._class_configs = None
    self._class_period_splits = None
Attributes
class_period_splits property
class_period_splits

Returns split fraction dictionary mapped to [time period class][time period].

Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/commercial.py
def validate_inputs(self):
    """Validate the inputs."""
    # TODO
    pass
run
run(daily_demand: Dict[str, NumpyArray]) -> Dict[str, Dict[str, NumpyArray]]

Splits the daily demand by time of day based on factors in the config.

Uses self.config.truck.classes.{class_name}.time_of_day_split to split the daily demand.

TODO use TimePeriodSplit

Args: daily_demand: dictionary of truck type name to numpy array of truck type daily demand

Returns:

Type Description
Dict[str, Dict[str, NumpyArray]]

Nested dictionary of truck class: time period name => numpy array of demand

Source code in tm2py/components/demand/commercial.py
@LogStartEnd()
def run(
    self, daily_demand: Dict[str, NumpyArray]
) -> Dict[str, Dict[str, NumpyArray]]:
    """Splits the daily demand by time of day based on factors in the config.

    Uses self.config.truck.classes.{class_name}.time_of_day_split to split the daily demand.

    #TODO use TimePeriodSplit
    Args:
        daily_demand: dictionary of truck type name to numpy array of
            truck type daily demand

    Returns:
         Nested dictionary of truck class: time period name => numpy array of demand
    """
    trkclass_tp_demand_dict = defaultdict(dict)

    _class_timeperiod = itertools.product(self.classes, self.time_period_names)

    for _t_class, _tp in _class_timeperiod:
        trkclass_tp_demand_dict[_t_class][_tp] = np.around(
            self.class_period_splits[_t_class][_tp.lower()][self.split_factor]
            * daily_demand[_t_class],
            decimals=2,
        )

    return trkclass_tp_demand_dict
CommercialVehicleTollChoice
CommercialVehicleTollChoice(controller, component)

Bases: Subcomponent

Commercial vehicle (truck) toll choice.

A binomial choice model for very small, small, medium, and large trucks. A separate value toll paying versus no value toll paying path choice model is applied to each of the twenty time period and vehicle type combinations.
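
As an illustration of the binomial split, the sketch below applies a binary logit to toll versus non-toll paths. The coefficients, skim values, and demand are made up; the real utility specification comes from the toll_choice config via TollChoiceCalculator.

```
import numpy as np

time_nontoll = np.array([[30.0, 22.0], [25.0, 18.0]])    # minutes (illustrative)
time_toll = np.array([[24.0, 19.0], [21.0, 15.0]])
value_toll = np.array([[300.0, 250.0], [275.0, 200.0]])  # year-2000 cents (illustrative)

ivt_coeff = -0.03    # in-vehicle time coefficient (illustrative)
cost_coeff = -0.003  # cost coefficient (illustrative)

# Utility difference of the toll path relative to the non-toll path.
utility_diff = ivt_coeff * (time_toll - time_nontoll) + cost_coeff * value_toll
prob_toll = 1.0 / (1.0 + np.exp(-utility_diff))

demand = np.array([[500.0, 200.0], [300.0, 100.0]])
toll_demand = demand * prob_toll
nontoll_demand = demand - toll_demand
```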

(1) Trip tables by time of day and truck class

(2) Skims providing the time and cost for value toll and non-value toll paths for each; the matrix names in the OMX files are: “{period}_{cls_name}_time”, “{period}_{cls_name}_dist”, “{period}_{cls_name}_bridgetoll_{grp_name}”, “{period}_{cls_name}toll_time”, “{period}_{cls_name}toll_dist”, “{period}_{cls_name}toll_bridgetoll_{grp_name}”, “{period}_{cls_name}toll_valuetoll_{grp_name}”. Where period is the assignment period, cls_name is the truck assignment class name (as very small, small and medium trucks are assigned as the same class) and grp_name is the truck type name (as the tolls are calculated separately for very small, small and medium).

(1) TOLLCLASS is a code; 1 through 10 are reserved for bridges, and 11 and up is reserved for value toll facilities.
(2) All costs should be coded in year 2000 cents.
(3) The 2-axle fee is used for very small trucks.
(4) The 2-axle fee is used for small trucks.
(5) The 3-axle fee is used for medium trucks.
(6) The average of the 5-axle and 6-axle fee is used for large trucks (about the midpoint of the fee schedule).
(7) The in-vehicle time coefficient is from the work trip mode choice model.

Constructor for Commercial Vehicle Toll Choice.

Also calls Subclass init().

Parameters:

Name Type Description Default
controller

model run controller

required
component

parent component

required
Source code in tm2py/components/demand/commercial.py
def __init__(self, controller, component):
    """Constructor for Commercial Vehicle Toll Choice.

    Also calls Subclass __init__().

    Args:
        controller: model run controller
        component: parent component
    """
    super().__init__(controller, component)

    self.config = self.component.config.toll_choice

    self.sub_components = {
        "toll choice calculator": TollChoiceCalculator(
            controller,
            self,
            self.config,
        ),
    }

    # shortcut
    self._toll_choice = self.sub_components["toll choice calculator"]
    self._toll_choice.toll_skim_suffix = "trk"
Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/demand/commercial.py
def validate_inputs(self):
    """Validate the inputs."""
    # TODO
    pass
run
run(trkclass_tp_demand_dict)

Split per-period truck demands into nontoll and toll classes.

Uses OMX skims output from highway assignment: traffic_skims_{period}.omx

Source code in tm2py/components/demand/commercial.py
@LogStartEnd()
def run(self, trkclass_tp_demand_dict):
    """Split per-period truck demands into nontoll and toll classes.

    Uses OMX skims output from highway assignment: traffic_skims_{period}.omx"""

    _tclass_time_combos = itertools.product(
        self.time_period_names, self.config.classes
    )

    class_demands = defaultdict(dict)
    for _time_period, _tclass in _tclass_time_combos:
        _split_demand = self._toll_choice.run(
            trkclass_tp_demand_dict[_tclass.name][_time_period],
            _tclass.name,
            _time_period,
        )

        class_demands[_time_period][_tclass.name] = _split_demand["non toll"]
        class_demands[_time_period][f"{_tclass.name}toll"] = _split_demand["toll"]
    return class_demands
Functions

Configuration:

tm2py.config.TruckConfig

Truck model parameters.

Attributes
highway_demand_file instance-attribute
highway_demand_file: str

@validator("classes")
def class_consistency(cls, v, values):
    # TODO Can't get to work right now
    _class_names = [c.name for c in v]
    _gen_classes = [c.name for c in values["trip_gen"]]
    _dist_classes = [c.name for c in values["trip_dist"]]
    _time_classes = [c.name for c in values["time_split"]]
    _toll_classes = [c.name for c in values["toll_choice"]]

assert (
    _class_names == _gen_classes
), "truck.classes ({_class_names}) doesn't equal            class names in truck.trip_gen ({_gen_classes})."
assert (
    _class_names == _dist_classes
), "truck.classes ({_class_names}) doesn't  equal            class names in truck.trip_dist ({_dist_classes})."
assert (
    _class_names == _time_classes
), "truck.classes ({_class_names}) doesn't  equal            class names in truck.time_split ({_time_classes})."
assert (
    _class_names == _toll_classes
), "truck.classes ({_class_names}) doesn't equal            class names in truck.toll_choice ({_toll_classes})."

return v

Inter-regional Demand

External travel demand entering and leaving the model region.

tm2py.components.demand.internal_external

Module containing Internal <-> External trip model.

Classes
InternalExternal
InternalExternal(controller: 'RunController')

Bases: Component

Develop Internal <-> External trip tables from land use and impedances.

  1. Grow demand from base year using static rates ::ExternalDemand
  2. Split by time of day using static factors ::TimePeriodSplit
  3. Apply basic toll binomial choice model: ::ExternalTollChoice
Governed by InternalExternalConfig
Source code in tm2py/components/demand/internal_external.py
def __init__(self, controller: "RunController"):
    super().__init__(controller)
    self.config = self.controller.config.internal_external

    self.sub_components = {
        "demand forecast": ExternalDemand(controller, self),
        "time of day": TimePeriodSplit(
            controller, self, self.config.time_of_day.classes[0].time_period_split
        ),
        "toll choice": ExternalTollChoice(controller, self),
    }
Functions
validate_inputs
validate_inputs()

Validate inputs to component.

Source code in tm2py/components/demand/internal_external.py
def validate_inputs(self):
    """Validate inputs to component."""
    ## TODO
    pass
run
run()

Run internal/external travel demand component.

Source code in tm2py/components/demand/internal_external.py
@LogStartEnd()
def run(self):
    """Run internal/external travel demand component."""

    daily_demand = self.sub_components["demand forecast"].run()
    period_demand = self.sub_components["time of day"].run(daily_demand)
    class_demands = self.sub_components["toll choice"].run(period_demand)
    self._export_results(class_demands)
ExternalDemand
ExternalDemand(controller, component)

Bases: Subcomponent

Forecast of daily internal<->external demand based on growth from a base year.

Create a daily matrix that includes internal/external, external/internal, and external/external passenger vehicle travel (based on Census 2000 journey-to-work flows). These trip tables are based on total traffic counts, which include trucks, but trucks are not explicitly segmented from passenger vehicles. This shortcoming is a holdover from BAYCAST and will be addressed in the next model update.

The row and column totals are taken from count station data provided by Caltrans. The BAYCAST 2006 IX matrix is used as the base matrix and scaled to match forecast year growth assumptions. The script generates estimates for the model forecast year; the growth rates were discussed with neighboring MPOs as part of the SB 375 target setting process.
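
A rough sketch of the factoring applied to the base matrix: a one-off adjustment for selected gateways followed by a gateway-specific annual growth rate compounded over the years since the reference year. The zones, rates, and years below are illustrative only.

```
import numpy as np

reference_year, scenario_year = 2015, 2035  # illustrative years
base_demand = np.full((3, 3), 100.0)        # illustrative base internal/external matrix

adjust = np.ones((3, 3))
adjust[0, :] *= 1.10  # special one-off adjustment for one gateway (row/column 0)
adjust[:, 0] *= 1.10

annual_growth = np.full((3, 3), 1.005)  # 0.5% growth per year everywhere (illustrative)
adjust *= annual_growth ** (scenario_year - reference_year)

forecast_demand = base_demand * adjust
```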

Input: (1) Station-specific assumed growth rates for each forecast year (the lack of external/external movements through the region allows simple factoring of cells without re-balancing); (2) An input base matrix derived from the Census journey-to-work data.

Output: (1) Four-table, forecast-year specific trip tables containing internal/external, external/internal, and external/external vehicle (xxx or person xxx) travel.

Governed by class DemandGrowth Config:

    highway_demand_file:
    input_demand_file:
    input_demand_matrixname_tmpl:
    modes:
    reference_year:
    annual_growth_rate:
    special_gateway_adjust:

Source code in tm2py/components/demand/internal_external.py
def __init__(self, controller, component):
    super().__init__(controller, component)
    self.config = self.component.config.demand
    # Loaded lazily
    self._base_demand = None
Functions
run
run(base_demand: Dict[str, NumpyArray] = None) -> Dict[str, NumpyArray]

Calculate adjusted demand based on scenario year and growth rates.

Steps:
- 1.1 apply special factors to certain gateways based on ID
- 1.2 apply gateway-specific annual growth rates to the results of step 1 to generate a year-specific forecast

Parameters:

Name Type Description Default
demand

dictionary of input daily demand matrices (numpy arrays)

required

Returns:

Type Description
Dict[str, NumpyArray]

Dictionary of Numpy matrices of daily PA by class mode

Source code in tm2py/components/demand/internal_external.py
def run(self, base_demand: Dict[str, NumpyArray] = None) -> Dict[str, NumpyArray]:
    """Calculate adjusted demand based on scenario year and growth rates.

    Steps:
    - 1.1 apply special factors to certain gateways based on ID
    - 1.2 apply gateway-specific annual growth rates to results of step 1
       to generate year specific forecast

    Args:
        demand: dictionary of input daily demand matrices (numpy arrays)

    Returns:
         Dictionary of Numpy matrices of daily PA by class mode
    """
    # Build adjustment matrix to be applied to all input matrices
    # special gateway adjustments based on zone index
    if base_demand is None:
        base_demand = self.base_demand
    _num_years = self.year - self.config.reference_year
    _adj_matrix = np.ones(base_demand["da"].shape)

    _adj_matrix = create_matrix_factors(
        default_matrix=_adj_matrix,
        matrix_factors=self.config.special_gateway_adjust,
    )

    _adj_matrix = create_matrix_factors(
        default_matrix=_adj_matrix,
        matrix_factors=self.config.annual_growth_rate,
        periods=_num_years,
    )

    daily_prod_attract = dict(
        (_mode, _demand * _adj_matrix) for _mode, _demand in base_demand.items()
    )
    return daily_prod_attract
ExternalTollChoice
ExternalTollChoice(controller, component)

Bases: Subcomponent

Toll choice

Apply a binomial choice model for drive alone, shared ride 2, and shared ride 3 internal/external personal vehicle travel.

(1) Time-period-specific origin/destination matrices of drive alone, shared ride 2, and shared ride 3+ internal/external trip tables.

    (2) Skims providing the time and cost for value toll and non-value toll paths for each

        traffic_skims_{period}.omx, where {period} is the time period ID,
        {class} is the class name da, sr2, sr2, with the following matrix names
          Non-value-toll paying time: {period}_{class}_time,
          Non-value-toll distance: {period}_{class}_dist,
          Non-value-toll bridge toll is: {period}_{class}_bridgetoll_{class},
          Value-toll paying time is: {period}_{class}toll_time,
          Value-toll paying distance is: {period}_{class}toll_dist,
          Value-toll bridge toll is: {period}_{class}toll_bridgetoll_{class},
          Value-toll value toll is: {period}_{class}toll_valuetoll_{class},

Output: Five, six-table trip matrices, one for each time period. Two tables for each vehicle class representing value-toll paying path trips and non-value-toll paying path trips

Governed by TollClassConfig:

```
classes:
value_of_time:
operating_cost_per_mile:
property_to_skim_toll:
property_to_skim_notoll:
utility:
```
Source code in tm2py/components/demand/internal_external.py
239
240
241
242
243
244
245
246
247
248
249
250
251
252
def __init__(self, controller, component):
    super().__init__(controller, component)

    self.config = self.component.config.toll_choice

    self.sub_components = {
        "toll choice calculator": TollChoiceCalculator(
            controller, component, self.config
        ),
    }

    # shortcut
    self._toll_choice = self.sub_components["toll choice calculator"]
    self._toll_choice.toll_skim_suffix = "trk"
Functions
run
run(period_demand: Dict[str, Dict[str, NumpyArray]]) -> Dict[str, Dict[str, NumpyArray]]

Binary toll / non-toll choice model by class.

Input: result of _ix_time_of_day. Skims: traffic_skims_{period}.omx, where {period} is the time period ID and {class} is the class name (da, sr2, sr3), with the following matrix names: non-value-toll paying time: {period}_{class}_time; non-value-toll distance: {period}_{class}_dist; non-value-toll bridge toll: {period}_{class}_bridgetoll_{class}; value-toll paying time: {period}_{class}toll_time; value-toll paying distance: {period}_{class}toll_dist; value-toll bridge toll: {period}_{class}toll_bridgetoll_{class}; value-toll value toll: {period}_{class}toll_valuetoll_{class}.

Steps (3.1): For each time of day, for each of da, sr2, and sr3, calculate:
- utility of toll and nontoll
- probability of toll / nontoll
- split demand into toll and nontoll matrices

Source code in tm2py/components/demand/internal_external.py
@LogStartEnd()
def run(
    self, period_demand: Dict[str, Dict[str, NumpyArray]]
) -> Dict[str, Dict[str, NumpyArray]]:
    """Binary toll / non-toll choice model by class.

    input: result of _ix_time_of_day
    skims:
        traffic_skims_{period}.omx, where {period} is the time period ID,
        {class} is the class name da, sr2, sr2, with the following matrix names
          Non-value-toll paying time: {period}_{class}_time,
          Non-value-toll distance: {period}_{class}_dist,
          Non-value-toll bridge toll is: {period}_{class}_bridgetoll_{class},
          Value-toll paying time is: {period}_{class}toll_time,
          Value-toll paying distance is: {period}_{class}toll_dist,
          Value-toll bridge toll is: {period}_{class}toll_bridgetoll_{class},
          Value-toll value toll is: {period}_{class}toll_valuetoll_{class},

    STEPS:
    3.1: For each time of day, for each da, sr2, sr3, calculate
         - utility of toll and nontoll
         - probability of toll / nontoll
         - split demand into toll and nontoll matrices

    """

    _time_class_combos = itertools.product(
        self.time_period_names, self.component.classes
    )

    class_demands = defaultdict(dict)
    for _time_period, _class in _time_class_combos:
        if _time_period in period_demand.keys():
            None
        elif _time_period.lower() in period_demand.keys():
            _time_period = _time_period.lower()
        elif _time_period.upper() in period_demand.keys():
            _time_period = _time_period.upper()
        else:
            raise ValueError(
                f"Period {_time_period} not an available time period.\
                Available periods are:  {period_demand.keys()}"
            )

        _split_demand = self._toll_choice.run(
            period_demand[_time_period][_class], _class, _time_period
        )

        class_demands[_time_period][_class] = _split_demand["non toll"]
        class_demands[_time_period][f"{_class}toll"] = _split_demand["toll"]
    return class_demands

Configuration:

tm2py.config.InternalExternalConfig

Internal <-> External model parameters.

Visitor Demand

Tourist and visitor travel patterns within the region.

tm2py.components.demand.visitor

Visitor module.


🛣️ Highway Network Components

Components for highway network modeling, assignment, and analysis.

Highway Modeling Workflow

  1. Network Building: Load and process highway network data
  2. MAZ Connectivity: Connect micro-zones to highway access points
  3. Traffic Assignment: Assign vehicle trips to network links
  4. Performance Analysis: Calculate travel times, speeds, and congestion

Highway Network Management

Core highway network data structures and utilities.

tm2py.components.network.highway.highway_network

Module for highway network preparation steps.

Creates required attributes and populates input values needed for highway assignments. The toll values, VDFs, per-class cost (tolls+operating costs), modes and skim link attributes are calculated.

The following keys and tables are used from the config:

  • highway.tolls.file_path: relative path to the input toll file
  • highway.tolls.src_vehicle_group_names: names used in the tolls file for toll class values
  • highway.tolls.dst_vehicle_group_names: corresponding names used in the network attribute toll classes
  • highway.tolls.valuetoll_start_tollbooth_code: index to split point bridge tolls (< this value) from distance value tolls (>= this value)
  • highway.classes: the list of assignment classes; see the notes under highway_assign for a detailed explanation
  • highway.capclass_lookup: the lookup table mapping the link @capclass setting to capacity (@capacity), free_flow_speed (@free_flow_speed) and critical_speed (used to calculate @ja for Akcelik-type functions)
  • highway.generic_highway_mode_code: unique (with other mode_codes) single character used to label the entire auto network in Emme
  • highway.maz_to_maz.mode_code: unique (with other mode_codes) single character used to label the MAZ local auto network, including connectors

The following link attributes are created (overwritten) and are subsequently used in the highway assignments:
  • “@flow_XX”: link PCE flows per class, where XX is the class name in the config
  • “@maz_flow”: assigned MAZ-to-MAZ flow

The following attributes are calculated
  • vdf: volume delay function to use
  • “@capacity”: total link capacity
  • “@ja”: akcelik delay parameter
  • “@hov_length”: length with HOV lanes
  • “@toll_length”: length with tolls
  • “@bridgetoll_YY”: the bridge toll for class subgroup YY
  • “@valuetoll_YY”: the “value”, non-bridge toll for class subgroup YY
  • “@cost_YY”: total cost for class YY
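As a rough illustration of the per-class cost attribute above, the following sketch adds tolls to distance-based operating cost; the units (cents, miles) and the helper name are assumptions, not the exact highway_network implementation.

```
def calc_link_class_cost(length_miles, toll_cents, operating_cost_per_mile):
    """Illustrative per-class link cost: distance-based operating cost plus tolls, in cents (assumed units)."""
    return operating_cost_per_mile * length_miles + toll_cents

# hypothetical link: 2.5 miles, 17.23 cents/mile operating cost, 75 cent bridge toll
example_cost = calc_link_class_cost(2.5, 75.0, 17.23)
```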
Classes
PrepareNetwork
PrepareNetwork(controller: RunController)

Bases: Component

Highway network preparation.

Constructor for PrepareNetwork.

Parameters:

Name Type Description Default
controller RunController

Reference to run controller object.

required
Source code in tm2py/components/network/highway/highway_network.py
66
67
68
69
70
71
72
73
74
75
76
def __init__(self, controller: "RunController"):
    """Constructor for PPrepareNetwork.

    Args:
        controller (RunController): Reference to run controller object.
    """
    super().__init__(controller)
    self.config = self.controller.config.highway
    self._emme_manager = self.controller.emme_manager
    self._highway_emmebank = None
    self._highway_scenarios = None
Functions
run
run()

Run network preparation step.

Source code in tm2py/components/network/highway/highway_network.py
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
@LogStartEnd("Prepare network attributes and modes")
def run(self):
    """Run network preparation step."""
    for time in self.time_period_names:
        with self.controller.emme_manager.logbook_trace(
            f"prepare for highway assignment {time}"
        ):
            scenario = self.highway_emmebank.scenario(time)
            self._create_class_attributes(scenario, time)
            network = scenario.get_network()
            self._set_tolls(network, time)
            self._set_vdf_attributes(network, time)
            self._set_link_modes(network)
            self._calc_link_skim_lengths(network)
            self._calc_link_class_costs(network)
            self._calc_interchange_distance(network)
            self._calc_link_static_reliability(network)
            scenario.publish_network(network)
validate_inputs
validate_inputs()

Validate inputs files are correct, raise if an error is found.

Source code in tm2py/components/network/highway/highway_network.py
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
def validate_inputs(self):
    """Validate inputs files are correct, raise if an error is found."""
    toll_file_path = self.get_abs_path(self.config.tolls.file_path)
    if not os.path.exists(toll_file_path):
        self.logger.log(
            f"Tolls file (config.highway.tolls.file_path) does not exist: {toll_file_path}",
            level="ERROR",
        )
        raise FileNotFoundError(f"Tolls file does not exist: {toll_file_path}")
    src_veh_groups = self.config.tolls.src_vehicle_group_names
    columns = ["fac_index"]
    for time in self.controller.config.time_periods:
        for vehicle in src_veh_groups:
            columns.append(f"toll{time.name.lower()}_{vehicle}")
    with open(toll_file_path, "r", encoding="UTF8") as toll_file:
        header = set(h.strip() for h in next(toll_file).split(","))
        missing = []
        for column in columns:
            if column not in header:
                missing.append(column)
                self.logger.log(
                    f"Tolls file missing column: {column}", level="ERROR"
                )
    if missing:
        raise FileFormatError(
            f"Tolls file missing {len(missing)} columns: {', '.join(missing)}"
        )

Highway Traffic Assignment

Traffic assignment algorithms and congestion modeling.

tm2py.components.network.highway.highway_assign

Highway assignment and skim component.

Performs equilibrium traffic assignment and generates the resulting skims. The assignment is configured using the “highway” table in the source config. See the config documentation for details. The traffic assignment runs according to the list of assignment classes under highway.classes.

Other relevant parameters from the config are:
  • emme.num_processors: number of processors as an integer, or “MAX” or “MAX-N”
  • time_periods[].emme_scenario_id: Emme scenario number to use for each period
  • time_periods[].highway_capacity_factor
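For illustration, the “MAX” / “MAX-N” convention could be resolved to an integer core count as in the hypothetical helper below (not part of the tm2py API):

```
import multiprocessing

def resolve_num_processors(value):
    """Resolve "MAX", "MAX-N", or an integer to a processor count (hypothetical helper)."""
    if isinstance(value, str) and value.upper().startswith("MAX"):
        n_cores = multiprocessing.cpu_count()
        offset = int(value[3:]) if len(value) > 3 else 0  # e.g. "MAX-2" -> -2
        return max(1, n_cores + offset)
    return int(value)
```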

The Emme network must have the following attributes available:

Link attributes:
  • “length” in feet
  • “vdf”: volume delay function (volume delay functions must also be set up)
  • “@useclass”: vehicle-class restrictions classification (auto-only, HOV only)
  • “@free_flow_time”: the free flow time (in minutes)
  • “@tollXX_YY”: the toll for period XX and class subgroup (see truck class) named YY, used together with @tollbooth to generate @bridgetoll_YY and @valuetoll_YY
  • “@maz_flow”: the background traffic MAZ-to-MAZ SP assigned flow from highway_maz, if controller.iteration > 0
  • modes: must be set on links and match the specified mode codes in the traffic config

Network result attributes:
  • @flow_XX: link PCE flows per class, where XX is the class name in the config
  • timau: auto travel time
  • volau: total assigned flow in PCE

Notes:
  • Output matrices are in miles, minutes, and cents (2010 dollars) and are stored as real values.
  • Intrazonal distance/time is one half the distance/time to the nearest neighbor.
  • Intrazonal bridge and value tolls are assumed to be zero.
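A minimal sketch of the intrazonal rule above, assuming a square NumPy skim matrix (the component's actual implementation may differ):

```
import numpy as np

def set_intrazonal_half_nearest_neighbor(skim):
    """Set each diagonal cell to half the smallest off-diagonal value in its row."""
    result = skim.astype(float)
    off_diagonal = result.copy()
    np.fill_diagonal(off_diagonal, np.inf)  # ignore self when finding the nearest neighbor
    np.fill_diagonal(result, 0.5 * off_diagonal.min(axis=1))
    return result
```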

Classes
HighwayAssignment
HighwayAssignment(controller: 'RunController')

Bases: Component

Highway assignment and skims. Args: controller: parent RunController object

Constructor for HighwayAssignment components.

Parameters:

Name Type Description Default
controller RunController

Reference to current run controller.

required
Source code in tm2py/components/network/highway/highway_assign.py
76
77
78
79
80
81
82
83
84
85
86
def __init__(self, controller: "RunController"):
    """Constructor for HighwayAssignment components.

    Args:
        controller (RunController): Reference to current run controller.
    """
    super().__init__(controller)

    self.config = self.controller.config.highway
    self._highway_emmebank = None
    self._class_config = None
Attributes
highway_emmebank property
highway_emmebank

The ProxyEmmebank object connected to the EMMEBANK for the highway assignment.

classes property
classes

The list of class names in the highway config.

class_config property
class_config

Mapping of class names to config objects in the highway config.

Functions
validate_inputs
validate_inputs()

Validate inputs files are correct, raise if an error is found.

Source code in tm2py/components/network/highway/highway_assign.py
108
109
def validate_inputs(self):
    """Validate inputs files are correct, raise if an error is found."""
run
run()

Run highway assignment.

Source code in tm2py/components/network/highway/highway_assign.py
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
@LogStartEnd("Highway assignment and skims", level="STATUS")
def run(self):
    """Run highway assignment."""
    demand = PrepareHighwayDemand(self.controller)
    if self.controller.iteration == 0:
        self.highway_emmebank.create_zero_matrix()
        if self.controller.config.warmstart.warmstart:
            if self.controller.config.warmstart.use_warmstart_demand:
                demand.run()
    else:
        demand.run()

    distribution = self.controller.config.emme.highway_distribution
    if distribution:
        launchers = self.setup_process_launchers(distribution[:-1])
        self.start_proccesses(launchers)
        # Run last configuration in process
        in_process_times = distribution[-1].time_periods
        num_processors = distribution[-1].num_processors
        self.run_in_process(in_process_times, num_processors)
        self.wait_for_processes(launchers)
    else:
        num_processors = self.controller.emme_manager.num_processors
        self.run_in_process(self.time_period_names, num_processors)
run_in_process
run_in_process(times: List[str], num_processors: Union[int, str])

Start highway assignments in the same process.

Source code in tm2py/components/network/highway/highway_assign.py
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
def run_in_process(self, times: List[str], num_processors: Union[int, str]):
    "Start highway assignments in same process"
    self.logger.status(
        f"Running highway assignments in process: {', '.join(times)}"
    )
    iteration = self.controller.iteration
    for time in times:
        project_path = self.emme_manager.project_path
        emmebank_path = self.highway_emmebank.path
        params = self._get_assign_params(time, num_processors)
        runner = AssignmentRunner(
            project_path,
            emmebank_path,
            iteration=iteration,
            logger=self.logger,
            **params,
        )
        runner.run()
setup_process_launchers
setup_process_launchers(distribution)

Set up (copy data) databases for running assignments in separate processes.

Source code in tm2py/components/network/highway/highway_assign.py
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
def setup_process_launchers(self, distribution):
    "Setup (copy data) databases for running assignments in separate processes"
    self.logger.status(
        f"Running highway assignments in {len(distribution)} separate processes"
    )
    iteration = self.controller.iteration
    launchers = []
    time_params = {}
    for config in distribution:
        assign_launcher = AssignmentLauncher(
            self.highway_emmebank.emmebank, iteration
        )
        launchers.append(assign_launcher)
        for time in config.time_periods:
            params = self._get_assign_params(time, config.num_processors)
            assign_launcher.add_run(**params)
            time_params[time] = params

    # initialize all skim matrices - complete all periods in order
    for time in self.time_period_names:
        params = time_params.get(time)
        if params:
            for matrix_name in params["skim_matrices"]:
                self.highway_emmebank.create_matrix(matrix_name, "FULL")

    return launchers
start_proccesses
start_proccesses(launchers)

Start separate processes for running assignments

Source code in tm2py/components/network/highway/highway_assign.py
182
183
184
185
186
187
188
189
def start_proccesses(self, launchers):
    "Start separate processes for running assignments"
    for i, assign_launcher in enumerate(launchers):
        self.logger.status(
            f"Starting highway assignment process {i} {', '.join(assign_launcher.times)}"
        )
        assign_launcher.setup()
        assign_launcher.run()
AssignmentLauncher
AssignmentLauncher(emmebank: Emmebank, iteration: int)

Bases: BaseAssignmentLauncher

Manages Emme-related data (matrices and scenarios) for multiple time periods and kicks off assignment in a subprocess.

Source code in tm2py/emme/manager.py
551
552
553
554
555
556
557
558
559
560
561
562
def __init__(self, emmebank: Emmebank, iteration: int):
    self._primary_emmebank = emmebank
    self._iteration = iteration

    self._times = []
    self._scenarios = []
    self._assign_specs = []
    self._demand_matrices = []
    self._skim_matrices = []
    self._omx_file_paths = []

    self._process = None
AssignmentRunner
AssignmentRunner(project_path: str, emmebank_path: str, scenario_id: Union[str, int], time: str, iteration: int, assign_spec: Dict, demand_matrices: List[str], skim_matrices: List[str], omx_file_path: str, logger=None)

Constructor to run the highway assignment for the specified time period.

Parameters:

Name Type Description Default
project_path str

path to existing EMME project (*.emp file)

required
emmebank_path str

path to existing EMME database (emmebank) file

required
scenario_id str

existing scenario ID for assignment run

required
time str

time period ID (only used for logging messages)

required
iteration int

global iteration number

required
assign_spec Dict

EMME SOLA assignment specification

required
skim_matrices List[str]

list of skim matrix IDs.

required
omx_file_path str

path to resulting output of skim matrices to OMX

required
logger Logger

optional logger object if running in process. If not specified a new logger reference is created.

None
Source code in tm2py/components/network/highway/highway_assign.py
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
def __init__(
    self,
    project_path: str,
    emmebank_path: str,
    scenario_id: Union[str, int],
    time: str,
    iteration: int,
    assign_spec: Dict,
    demand_matrices: List[str],
    skim_matrices: List[str],
    omx_file_path: str,
    logger=None,
):
    """
    Constructor to run the highway assignment for the specified time period.

    Args:
        project_path (str): path to existing EMME project (*.emp file)
        emmebank_path (str): path to existing EMME databsae (emmebank) file
        scenario_id (str): existing scenario ID for assignment run
        time (str): time period ID (only used for logging messages)
        iteration (List[str]): global iteration number
        assign_spec (Dict): EMME SOLA assignment specification
        skim_matrices (List[str]): list of skim matrix ID.
        omx_file_path (str): path to resulting output of skim matrices to OMX
        logger (Logger): optional logger object if running in process.
            If not specified a new logger reference is created.
    """
    self.emme_manager = EmmeManagerLight(project_path, emmebank_path)
    self.emmebank = Emmebank(emmebank_path)
    self.scenario = self.emmebank.scenario(scenario_id)

    self.time = time
    self.iteration = iteration
    self.assign_spec = assign_spec
    self.skim_matrix_ids = skim_matrices
    self.demand_matrix_ids = demand_matrices
    self.omx_file_path = omx_file_path

    self._matrix_cache = None
    self._network_calculator = None
    self._skim_matrix_objs = []
    if logger:
        self.logger = logger
    else:
        root = os.path.dirname(os.path.dirname(project_path))
        name = f"run_highway_{time}_{iteration}"
        run_log_file_path = os.path.join(root, f"{name}.log")
        log_on_error_file_path = os.path.join(root, f"{name}_error.log")
        self.logger = ProcessLogger(
            run_log_file_path, log_on_error_file_path, self.emme_manager
        )
Attributes
assign_spec_no_analysis property
assign_spec_no_analysis

Return modified SOLA assignment specification with no analyses.

Functions
run
run()

Run time period highway assignment

Source code in tm2py/components/network/highway/highway_assign.py
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
def run(self):
    "Run time period highway assignment"
    with self._setup():
        if self.iteration > 0:
            self._copy_maz_flow()
        else:
            self._reset_background_traffic()
        for matrix_name in self.demand_matrix_ids:
            if not self.emmebank.matrix(matrix_name):
                raise Exception(f"demand matrix {matrix_name} does not exist")

        self._create_skim_matrices()
        with self.logger._skip_emme_logging():
            self.logger.log_dict(self.assign_spec, level="DEBUG")
        with self.logger.log_start_end(
            "Run SOLA assignment (no path analyses)", level="INFO"
        ):
            assign = self.emme_manager.tool(
                "inro.emme.traffic_assignment.sola_traffic_assignment"
            )
            assign(
                self.assign_spec_no_analysis, self.scenario, chart_log_interval=1
            )

        with self.logger.log_start_end(
            "Calculates link level LOS based reliability", level="DETAIL"
        ):
            exf_pars = self.scenario.emmebank.extra_function_parameters
            vdfs = [
                f for f in self.emmebank.functions() if f.type == "VOLUME_DELAY"
            ]
            net_calc = self._network_calculator
            for function in vdfs:
                expression = function.expression
                for el in ["el1", "el2", "el3", "el4"]:
                    expression = expression.replace(el, getattr(exf_pars, el))
                if "@static_rel" in expression:
                    # split function into time component and reliability component
                    time_expr, reliability_expr = expression.split(
                        "*(1+@static_rel+"
                    )
                    net_calc.add_calc(
                        "@auto_time",
                        time_expr,
                        {"link": f"vdf={function.id[2:]}"},
                    )
                    net_calc.add_calc(
                        "@reliability",
                        f"(@static_rel+{reliability_expr}",
                        {"link": f"vdf={function.id[2:]}"},
                    )
            net_calc.add_calc("@reliability_sq", "@reliability**2")
            net_calc.run()

        with self.logger.log_start_end(
            "Run SOLA assignment with path analyses and highway reliability",
            level="INFO",
        ):
            assign(self.assign_spec, self.scenario, chart_log_interval=1)

        # Subtract non-time costs from gen cost to get the raw travel time
        self._calc_time_skims()
        # Set intra-zonal for time and dist to be 1/2 nearest neighbour
        self._set_intrazonal_values()
        self._export_skims()

Highway-MAZ Connectivity

Micro-zone access point connections to the highway network.

tm2py.components.network.highway.highway_maz

Assigns and skims MAZ-to-MAZ demand along shortest generalized cost path.

MAZ to MAZ demand is read in from separate OMX matrices as defined under the config table highway.maz_to_maz.demand_county_groups.

The demand is expected to be short distance (e.g. <0.5 miles), or within the same TAZ. The demand is grouped into bins of origin -> all destinations, by distance (straight-line) to furthest destination. This limits the size of the shortest path calculated to the minimum required. The bin edges have been predefined after testing as (in miles): [0.0, 0.9, 1.2, 1.8, 2.5, 5.0, 10.0, max_dist]
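For illustration, binning an origin by the straight-line distance to its furthest destination could look like the sketch below; the helper name and the use of np.digitize are assumptions, only the bin edges come from the text above.

```
import numpy as np

BIN_EDGES = [0.0, 0.9, 1.2, 1.8, 2.5, 5.0, 10.0]  # miles; the final bin extends to max_dist

def origin_distance_bin(dist_to_furthest_destination):
    """Return the index of the distance bin for an origin (0 = shortest-distance bin)."""
    return int(np.digitize(dist_to_furthest_destination, BIN_EDGES)) - 1
```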

Input: Emme network with:
  • Link attributes: a time attribute, either timau (resulting VDF congested time) or @free_flow_time
  • Node attributes: @maz_id, x, y, and #node_county
  • Demand matrices under highway.maz_to_maz.demand_file, which can include placeholders, e.g. auto_{period}_MAZ_AUTO_{number}_{period}.omx

Output: The resulting MAZ-MAZ flows are saved in link @maz_flow which is used as background traffic in the equilibrium Highway assignment.

Classes
AssignMAZSPDemand
AssignMAZSPDemand(controller: RunController)

Bases: Component

MAZ-to-MAZ shortest-path highway assignment.

Calculates shortest path between MAZs with demand in the Emme network and assigns flow.

MAZ-to-MAZ shortest-path highway assignment.

Parameters:

Name Type Description Default
controller RunController

parent Controller object

required
Source code in tm2py/components/network/highway/highway_maz.py
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
def __init__(self, controller: RunController):
    """MAZ-to-MAZ shortest-path highway assignment.

    Args:
        controller: parent Controller object
    """

    super().__init__(controller)
    self.config = self.controller.config.highway.maz_to_maz
    self._debug = False

    # bins: performance parameter: crow-fly distance bins
    #       to limit shortest path calculation by origin to furthest destination
    #       semi-exposed for performance testing
    self._bin_edges = _default_bin_edges

    # Lazily-loaded Emme Properties
    self._highway_emmebank = None
    self._eb_dir = None

    # Internal attributes to track data through the sequence of steps
    self._scenario = None
    self._mazs = None
    self._demand = _defaultdict(lambda: [])
    self._max_dist = 0
    self._network = None
    self._root_index = None
    self._leaf_index = None
Functions
validate_inputs
validate_inputs()

Validate inputs files are correct, raise if an error is found.

Source code in tm2py/components/network/highway/highway_maz.py
110
111
112
113
def validate_inputs(self):
    """Validate inputs files are correct, raise if an error is found."""
    # TODO
    pass
run
run()

Run MAZ-to-MAZ shortest path assignment.

Source code in tm2py/components/network/highway/highway_maz.py
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
@LogStartEnd()
def run(self):
    """Run MAZ-to-MAZ shortest path assignment."""

    county_groups = {}
    for group in self.config.demand_county_groups:
        county_groups[group.number] = group.counties
    for time in self.time_period_names:
        self._scenario = self.highway_emmebank.scenario(time)
        with self._setup(time):
            self._prepare_network()
            for i, names in county_groups.items():
                maz_ids = self._get_county_mazs(names)
                if len(maz_ids) == 0:
                    self.logger.log(
                        f"warning: no mazs for counties {', '.join(names)}"
                    )
                    continue
                self._process_demand(time, i, maz_ids)
            demand_bins = self._group_demand()
            for i, demand_group in enumerate(demand_bins):
                self._find_roots_and_leaves(demand_group["demand"])
                self._set_link_cost_maz()
                self._run_shortest_path(time, i, demand_group["dist"])
                self._assign_flow(time, i, demand_group["demand"])
SkimMAZCosts
SkimMAZCosts(controller: RunController)

Bases: Component

MAZ-to-MAZ shortest-path skim of time, distance and toll.

MAZ-to-MAZ shortest-path skim of time, distance and toll.

Parameters:

Name Type Description Default
controller RunController

parent RunController object

required
Source code in tm2py/components/network/highway/highway_maz.py
670
671
672
673
674
675
676
677
678
679
680
681
def __init__(self, controller: RunController):
    """MAZ-to-MAZ shortest-path skim of time, distance and toll.

    Args:
        controller: parent RunController object
    """
    super().__init__(controller)
    self.config = self.controller.config.highway.maz_to_maz
    # TODO add config requirement that most be a valid time period
    self._scenario = None
    self._network = None
    self._highway_emmebank = None
Functions
validate_inputs
validate_inputs()

Validate inputs files are correct, raise if an error is found.

Source code in tm2py/components/network/highway/highway_maz.py
695
696
697
698
def validate_inputs(self):
    """Validate inputs files are correct, raise if an error is found."""
    # TODO
    pass
run
run()

Run shortest path skims for all available MAZ-to-MAZ O-D pairs.

Runs a shortest path builder for each county, using a maz_skim_cost to limit the search. The valid gen cost (time + cost), distance and toll (drive alone) are written to CSV at the output_skim_file path: FROM_ZONE, TO_ZONE, COST, DISTANCE, BRIDGETOLL

The following config inputs are used directly in this component. Note also that the network mode_code is prepared in the highway_network component using the excluded_links.

config.highway.maz_to_maz:
  • skim_period: name of the period used for the skim; must match one of the defined config.time_periods
  • demand_county_groups: used for the list of counties; creates a list out of all listed counties under [].counties
  • output_skim_file: relative path to save the skims
  • value_of_time: value of time used to convert tolls and auto operating cost
  • operating_cost_per_mile: auto operating cost
  • max_skim_cost: max cost value used to limit the shortest path search
  • mode_code:
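For reference, the resulting CSV can be read back with pandas; the file path below is a hypothetical example, the real location is set by output_skim_file.

```
import pandas as pd

# hypothetical path; the real location comes from config.highway.maz_to_maz.output_skim_file
maz_skims = pd.read_csv("skims/maz_to_maz_costs.csv", skipinitialspace=True)
print(maz_skims[["FROM_ZONE", "TO_ZONE", "COST", "DISTANCE", "BRIDGETOLL"]].head())
```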

Source code in tm2py/components/network/highway/highway_maz.py
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
@LogStartEnd()
def run(self):
    """Run shortest path skims for all available MAZ-to-MAZ O-D pairs.

    Runs a shortest path builder for each county, using a maz_skim_cost
    to limit the search. The valid gen cost (time + cost), distance and toll (drive alone)
    are written to CSV at the output_skim_file path:
    FROM_ZONE, TO_ZONE, COST, DISTANCE, BRIDGETOLL

    The following config inputs are used directly in this component. Note also
    that the network mode_code is prepared in the highway_network component
    using the excluded_links.

    config.highway.maz_to_maz:
        skim_period: name of the period used for the skim, must match one the
            defined config.time_periods
        demand_county_groups: used for the list of counties, creates a list out
            of all listed counties under [].counties
        output_skim_file: relative path to save the skims
        value_of_time: value of time used to convert tolls and auto operating cost
        operating_cost_per_mile: auto operating cost
        max_skim_cost: max cost value used to limit the shortest path search
        mode_code:
    """

    # prepare output file and write header
    output = self.get_abs_path(self.config.output_skim_file)
    os.makedirs(os.path.dirname(output), exist_ok=True)
    with open(output, "w", encoding="utf8") as output_file:
        output_file.write("FROM_ZONE, TO_ZONE, COST, DISTANCE, BRIDGETOLL\n")
    counties = []
    for group in self.config.demand_county_groups:
        counties.extend(group.counties)
    with self._setup():
        self._prepare_network()
        for county in counties:
            num_roots = self._mark_roots(county)
            if num_roots == 0:
                continue
            sp_values = self._run_shortest_path()
            self._export_results(sp_values)

Highway Configuration Classes

tm2py.config.HighwayConfig

Highway assignment and skims parameters.

Properties
Functions
valid_skim_template
valid_skim_template(value)

Validate skim template has correct {} and extension.
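For example, a value such as the following would satisfy this validator, since it contains {time_period} and ends in .omx (the exact string is illustrative):

```
output_skim_filename_tmpl = "traffic_skims_{time_period}.omx"
```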

Source code in tm2py/config.py
983
984
985
986
987
988
989
990
991
992
@validator("output_skim_filename_tmpl")
def valid_skim_template(value):
    """Validate skim template has correct {} and extension."""
    assert (
        "{time_period" in value
    ), f"-> output_skim_filename_tmpl must have {{time_period}}', found {value}."
    assert (
        value[-4:].lower() == ".omx"
    ), f"-> 'output_skim_filename_tmpl must end in '.omx', found {value[-4:].lower() }"
    return value
valid_skim_matrix_name_template
valid_skim_matrix_name_template(value)

Validate skim matrix template has correct {}.
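Similarly, an illustrative value satisfying this validator (it contains {time_period}, {property}, and {mode}):

```
output_skim_matrixname_tmpl = "{time_period}_{mode}_{property}"
```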

Source code in tm2py/config.py
 994
 995
 996
 997
 998
 999
1000
1001
1002
1003
1004
1005
1006
@validator("output_skim_matrixname_tmpl")
def valid_skim_matrix_name_template(value):
    """Validate skim matrix template has correct {}."""
    assert (
        "{time_period" in value
    ), "-> 'output_skim_matrixname_tmpl must have {time_period}, found {value}."
    assert (
        "{property" in value
    ), "-> 'output_skim_matrixname_tmpl must have {property}, found {value}."
    assert (
        "{mode" in value
    ), "-> 'output_skim_matrixname_tmpl must have {mode}, found {value}."
    return value
unique_capclass_numbers
unique_capclass_numbers(value)

Validate list of capclass_lookup has unique .capclass values.

Source code in tm2py/config.py
1008
1009
1010
1011
1012
1013
1014
@validator("capclass_lookup")
def unique_capclass_numbers(cls, value):
    """Validate list of capclass_lookup has unique .capclass values."""
    capclass_ids = [i.capclass for i in value]
    error_msg = "-> capclass value must be unique in list"
    assert len(capclass_ids) == len(set(capclass_ids)), error_msg
    return value
unique_class_names
unique_class_names(value)

Validate list of classes has unique .name values.

Source code in tm2py/config.py
1016
1017
1018
1019
1020
1021
1022
@validator("classes", pre=True)
def unique_class_names(cls, value):
    """Validate list of classes has unique .name values."""
    class_names = [highway_class["name"] for highway_class in value]
    error_msg = "-> name value must be unique in list"
    assert len(class_names) == len(set(class_names)), error_msg
    return value
validate_class_mode_excluded_links
validate_class_mode_excluded_links(value, values)

Validate list of classes has unique .mode_code or .excluded_links match.

Source code in tm2py/config.py
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
@validator("classes")
def validate_class_mode_excluded_links(cls, value, values):
    """Validate list of classes has unique .mode_code or .excluded_links match."""
    # validate if any mode IDs are used twice, that they have the same excluded links sets
    mode_excluded_links = {}
    for i, highway_class in enumerate(value):
        # maz_to_maz.mode_code must be unique
        if "maz_to_maz" in values:
            assert (
                highway_class["mode_code"] != values["maz_to_maz"]["mode_code"]
            ), f"-> {i} -> mode_code: cannot be the same as the highway.maz_to_maz.mode_code"
        # make sure that if any mode IDs are used twice, they have the same excluded links sets
        if highway_class.mode_code in mode_excluded_links:
            ex_links1 = highway_class["excluded_links"]
            ex_links2 = mode_excluded_links[highway_class["mode_code"]]
            error_msg = (
                f"-> {i}: duplicated mode codes ('{highway_class['mode_code']}') "
                f"with different excluded links: {ex_links1} and {ex_links2}"
            )
            assert ex_links1 == ex_links2, error_msg
        mode_excluded_links[highway_class.mode_code] = highway_class.excluded_links
    return value
validate_class_keyword_lists
validate_class_keyword_lists(value, values)

Validate classes .skims, .toll, and .excluded_links values.

Source code in tm2py/config.py
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
@validator("classes")
def validate_class_keyword_lists(cls, value, values):
    """Validate classes .skims, .toll, and .excluded_links values."""
    if "tolls" not in values:
        return value
    avail_skims = [
        "time",
        "dist",
        "hovdist",
        "tolldist",
        "freeflowtime",
        "rlbty",
        "autotime",
    ]
    available_link_sets = ["is_sr", "is_sr2", "is_sr3", "is_auto_only"]
    avail_toll_attrs = []
    for name in values["tolls"].dst_vehicle_group_names:
        toll_types = [f"bridgetoll_{name}", f"valuetoll_{name}"]
        avail_skims.extend(toll_types)
        avail_toll_attrs.extend(["@" + name for name in toll_types])
        available_link_sets.append(f"is_toll_{name}")

    # validate class skim name list and toll attribute against toll setup
    def check_keywords(class_num, key, val, available):
        extra_keys = set(val) - set(available)
        error_msg = (
            f" -> {class_num} -> {key}: unrecognized {key} name(s): "
            f"{','.join(extra_keys)}.  Available names are: {', '.join(available)}"
        )
        assert not extra_keys, error_msg

    for i, highway_class in enumerate(value):
        check_keywords(i, "skim", highway_class["skims"], avail_skims)
        check_keywords(i, "toll", highway_class["toll"], avail_toll_attrs)
        check_keywords(
            i,
            "excluded_links",
            highway_class["excluded_links"],
            available_link_sets,
        )
    return value

tm2py.config.HighwayClassConfig

Highway assignment class definition.

Note that excluded_links, skims and toll attribute names include vehicle groups (“{vehicle}”) which reference the list of highway.tolls.dst_vehicle_group_names (see HighwayTollsConfig). The default example model config uses: “da”, “sr2”, “sr3”, “vsm”, “sml”, “med”, “lrg”.

Example single class config

name = "da"
description = "drive alone"
mode_code = "d"
[[highway.classes.demand]]
source = "household"
name = "SOV_GP_{period}"
[[highway.classes.demand]]
source = "air_passenger"
name = "da"
[[highway.classes.demand]]
source = "internal_external"
name = "da"
excluded_links = ["is_toll_da", "is_sr2"],
value_of_time = 18.93, # $ / hr
operating_cost_per_mile = 17.23, # cents / mile
toll = ["@bridgetoll_da"]
skims = ["time", "dist", "freeflowtime", "bridgetoll_da"],

Properties

tm2py.config.HighwayTollsConfig

Highway assignment and skim input tolls and related parameters.

Properties
Functions
dst_vehicle_group_names_length
dst_vehicle_group_names_length(value, values)

Validate dst_vehicle_group_names has same length as src_vehicle_group_names.

Source code in tm2py/config.py
841
842
843
844
845
846
847
848
849
850
851
@validator("dst_vehicle_group_names", always=True)
def dst_vehicle_group_names_length(cls, value, values):
    """Validate dst_vehicle_group_names has same length as src_vehicle_group_names."""
    if "src_vehicle_group_names" in values:
        assert len(value) == len(
            values["src_vehicle_group_names"]
        ), "dst_vehicle_group_names must be same length as src_vehicle_group_names"
        assert all(
            [len(v) <= 4 for v in value]
        ), "dst_vehicle_group_names must be 4 characters or less"
    return value

tm2py.config.DemandCountyGroupConfig

Grouping of counties for assignment and demand files.

Properties

tm2py.config.HighwayMazToMazConfig

Highway MAZ to MAZ shortest path assignment and skim parameters.

Properties
Functions
unique_group_numbers
unique_group_numbers(value)

Validate list of demand_county_groups has unique .number values.

Source code in tm2py/config.py
918
919
920
921
922
923
@validator("demand_county_groups")
def unique_group_numbers(cls, value):
    """Validate list of demand_county_groups has unique .number values."""
    group_ids = [group.number for group in value]
    assert len(group_ids) == len(set(group_ids)), "-> number value must be unique"
    return value

🚌 Transit Network Components

Public transit system modeling including bus, rail, ferry, and other transit modes.

Transit Modeling Workflow

  1. Network Definition: Transit lines, stops, and schedules
  2. Service Patterns: Frequency, capacity, and routing
  3. Path Building: Transit path enumeration and choice
  4. Assignment: Passenger flow assignment to transit services

Transit Assignment

Transit passenger assignment and capacity analysis.

tm2py.components.network.transit.transit_assign

Transit assignment module.

Classes
TransitAssignment
TransitAssignment(controller: 'RunController')

Bases: Component

Run transit assignment.

Constructor for TransitAssignment.

Parameters:

Name Type Description Default
controller 'RunController'

RunController object.

required
Source code in tm2py/components/network/transit/transit_assign.py
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
def __init__(self, controller: "RunController"):
    """Constructor for TransitAssignment.

    Args:
        controller: RunController object.
    """
    super().__init__(controller)
    self.config = self.controller.config.transit
    self.sub_components = {
        "prepare transit demand": PrepareTransitDemand(controller),
    }
    self.transit_network = PrepareTransitNetwork(controller)
    self._demand_matrix = None  # FIXME
    self._num_processors = self.controller.emme_manager.num_processors
    self._time_period = None
    self._scenario = None
    self._transit_emmebank = None
Functions
validate_inputs
validate_inputs()

Validate the inputs.

Source code in tm2py/components/network/transit/transit_assign.py
422
423
def validate_inputs(self):
    """Validate the inputs."""
run
run()

Run transit assignments.

Source code in tm2py/components/network/transit/transit_assign.py
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
@LogStartEnd("Transit assignments")
def run(self):
    """Run transit assignments."""

    if self.controller.iteration == 0:
        self.transit_emmebank.create_zero_matrix()
        if self.controller.config.warmstart.warmstart:
            if self.controller.config.warmstart.use_warmstart_demand:
                self.sub_components["prepare transit demand"].run()
        else:
            # give error message to user about not warmstarting transit
            raise Exception(
                f"ERROR: transit has to be warmstarted, please either specify use_warmstart_skim or use_warmstart_demand"
            )
    else:
        self.sub_components["prepare transit demand"].run()

    for time_period in self.time_period_names:
        # update auto times
        print("updating auto time in transit network")
        self.transit_network.update_auto_times(time_period)

        if self.controller.iteration == 0:
            # iteration = 0 : run uncongested transit assignment
            use_ccr = False
            congested_transit_assignment = False
            print("running uncongested transit assignment with warmstart demand")
            self.run_transit_assign(
                time_period, use_ccr, congested_transit_assignment
            )
        elif (self.controller.iteration == 1) & (self.controller.config.warmstart.use_warmstart_skim):
            # iteration = 1 and use_warmstart_skim = True : run uncongested transit assignment
            use_ccr = False
            congested_transit_assignment = False
            self.run_transit_assign(
                time_period, use_ccr, congested_transit_assignment
            )               
        else:
            # iteration >= 1 and use_warmstart_skim = False : run congested transit assignment
            use_ccr = self.config.use_ccr
            if time_period in ["EA", "EV", "MD"]:
                congested_transit_assignment = False
            else:
                congested_transit_assignment = (
                    self.config.congested_transit_assignment
                )

            self.run_transit_assign(
                time_period, use_ccr, congested_transit_assignment
            )

        # output_summaries
        if self.config.output_stop_usage_path is not None:
            network, class_stop_attrs = self._calc_connector_flows(time_period)
            self._export_connector_flows(network, class_stop_attrs, time_period)
        if self.controller.iteration == self.controller.config.run.end_iteration:
            if self.config.output_transit_boardings_path is not None:
                self._export_boardings_by_line(time_period)
            if self.config.output_transit_segment_path is not None:
                self._export_transit_segment(time_period)
            if self.config.output_station_to_station_flow_path is not None:
                self._export_boardings_by_station(time_period)
            if self.config.output_transfer_at_station_path is not None:
                self._export_transfer_at_stops(time_period)
TransitAssignmentClass
TransitAssignmentClass(tclass_config: TransitClassConfig, config: TransitConfig, time_period: str, iteration: int, num_processors: int, fare_modes: Dict[str, Set[str]], spec_dir: str)

Transit assignment class, represents data from config and conversion to Emme specs.

Internal properties

Assignment class constructor.

Parameters:

Name Type Description Default
tclass_config TransitClassConfig

the transit class config (TransitClassConfig)

required
config TransitConfig

the root transit assignment config (TransitConfig)

required
time_period str

the time period name

required
iteration int

the current iteration

required
num_processors int

the number of processors to use, loaded from config

required
fare_modes Dict[str, Set[str]]

the mapping from the generated fare mode ID to the original source mode ID

required
spec_dir str

directory to find the generated journey levels tables from the apply fares step

required
Source code in tm2py/components/network/transit/transit_assign.py
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
def __init__(
    self,
    tclass_config: TransitClassConfig,
    config: TransitConfig,
    time_period: str,
    iteration: int,
    num_processors: int,
    fare_modes: Dict[str, Set[str]],
    spec_dir: str,
):
    """Assignment class constructor.

    Args:
        tclass_config: the transit class config (TransitClassConfig)
        config: the root transit assignment config (TransitConfig)
        time_period: the time period name
        iteration: the current iteration
        num_processors: the number of processors to use, loaded from config
        fare_modes: the mapping from the generated fare mode ID to the original
            source mode ID
        spec_dir: directory to find the generated journey levels tables from
            the apply fares step
    """
    self._name = tclass_config.name
    self._class_config = tclass_config
    self._config = config
    self._time_period = time_period
    self._iteration = iteration
    self._num_processors = num_processors
    self._fare_modes = fare_modes
    self._spec_dir = spec_dir
Attributes
name property
name: str

The class name.

emme_transit_spec property
emme_transit_spec: EmmeTransitSpec

Return Emme Extended transit assignment specification.

Converted from the input config (transit.classes, with some parameters from the transit table); see also the Emme Help for the Extended transit assignment for specification details.

Functions
time_period_capacity
time_period_capacity(vehicle_capacity: float, headway: float, time_period_duration: float) -> float

Calculate the transit vehicle capacity for the whole time period.

Parameters:

Name Type Description Default
vehicle_capacity float

Vehicle capacity per hour. For vehicles with multiple cars (i.e. trainsets), should be the capacity of all of them that are traveling together.

required
headway float

Vehicle (or train sets) per hour.

required
time_period_duration float

duration of the time period in minutes

required

Returns:

Name Type Description
float float

capacity for the whole time period

Source code in tm2py/components/network/transit/transit_assign.py
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
def time_period_capacity(
    vehicle_capacity: float, headway: float, time_period_duration: float
) -> float:
    """_summary_

    Args:
        vehicle_capacity (float): Vehicle capacity per hour. For vehicles with multiple cars
            (i.e. trainsets), should be the capacity of all of them that are traveling together.
        headway (float): Vehicle (or train sets) per hour.
        time_period_duration (float): duration of the time period in minutes

    Returns:
        float: capacity for the whole time period
    """
    return vehicle_capacity * time_period_duration * 60 / headway
func_returns_crowded_segment_cost
func_returns_crowded_segment_cost(time_period_duration, weights: CcrWeightsConfig)

Returns (as a string) the source of the calc_segment_cost function for the Emme assignment, with parameters pre-filled. This acts like functools.partial, which Emme does not accept.
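A minimal sketch of this pattern, returning formatted source text instead of a functools.partial; the function names and cost expression below are illustrative only.

```
import inspect
import textwrap

def func_returns_scaled_cost(factor):
    """Return the source text of a cost function with `factor` baked in (illustrative)."""

    def calc_cost(transit_volume, capacity, segment):
        # the factor placeholder below is filled in by str.format, because Emme
        # consumes function source text and cannot use functools.partial
        return {factor} * transit_volume / capacity

    return textwrap.dedent(inspect.getsource(calc_cost)).format(factor=factor)

print(func_returns_scaled_cost(2.5))  # prints the function source with 2.5 substituted
```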

Source code in tm2py/components/network/transit/transit_assign.py
 53
 54
 55
 56
 57
 58
 59
 60
 61
 62
 63
 64
 65
 66
 67
 68
 69
 70
 71
 72
 73
 74
 75
 76
 77
 78
 79
 80
 81
 82
 83
 84
 85
 86
 87
 88
 89
 90
 91
 92
 93
 94
 95
 96
 97
 98
 99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
def func_returns_crowded_segment_cost(time_period_duration, weights: CcrWeightsConfig):
    """
    function that returns the calc_segment_cost function for emme assignment, with partial preloaded parameters
    acts like partial as emme does not take partial
    """

    def calc_segment_cost(transit_volume: float, capacity, segment) -> float:
        """Calculates crowding factor for a segment.

        Toronto implementation limited factor between 1.0 and 10.0.
        For use with Emme Capacitated assignment normalize by subtracting 1

        Args:
            time_period_duration(float): time period duration in minutes
            weights (_type_): transit capacity weights
            segment_pax (float): transit passengers for the segment for the time period
            segment: emme line segment

        Returns:
            float: crowding factor for a segment
        """

        from tm2py.config import (
            CcrWeightsConfig,
            EawtWeightsConfig,
            TransitClassConfig,
            TransitConfig,
            TransitModeConfig,
        )

        if transit_volume == 0:
            return 0.0

        line = segment.line

        seated_capacity = (
            line.vehicle.seated_capacity * {time_period_duration} * 60 / line.headway
        )

        seated_pax = min(transit_volume, seated_capacity)
        standing_pax = max(transit_volume - seated_pax, 0)

        seated_cost = {weights}.min_seat + ({weights}.max_seat - {weights}.min_seat) * (
            transit_volume / capacity
        ) ** {weights}.power_seat

        standing_cost = {weights}.min_stand + (
            {weights}.max_stand - {weights}.min_stand
        ) * (transit_volume / capacity) ** {weights}.power_stand

        crowded_cost = (seated_cost * seated_pax + standing_cost * standing_pax) / (
            transit_volume + 0.01
        )

        normalized_crowded_cost = max(crowded_cost - 1, 0)

        return normalized_crowded_cost

    return textwrap.dedent(inspect.getsource(calc_segment_cost)).format(
        time_period_duration=time_period_duration, weights=weights
    )
func_returns_segment_congestion
func_returns_segment_congestion(time_period_duration, scenario, weights: CongestedWeightsConfig, use_fares: bool = False)

Returns (as a string) the source of the calc_segment_cost function for the Emme assignment, with parameters pre-filled. This acts like functools.partial, which Emme does not accept.

Source code in tm2py/components/network/transit/transit_assign.py
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
def func_returns_segment_congestion(
    time_period_duration,
    scenario,
    weights: CongestedWeightsConfig,
    use_fares: bool = False,
):
    """
    function that returns the calc_segment_cost function for emme assignment, with partial preloaded parameters
    acts like partial as emme does not take partial
    """
    if use_fares:
        values = scenario.get_attribute_values("TRANSIT_LINE", ["#src_mode"])
        scenario.set_attribute_values("TRANSIT_LINE", ["#src_mode"], values)

    def calc_segment_cost(transit_volume: float, capacity, segment) -> float:
        """Calculates crowding factor for a segment.

        Toronto implementation limited factor between 1.0 and 10.0.
        For use with Emme Capacitated assignment normalize by subtracting 1

        Args:
            time_period_duration(float): time period duration in minutes
            weights (_type_): transit capacity weights
            segment: emme line segment

        Returns:
            float: crowding factor for a segment
        """

        from tm2py.config import (
            CongestedWeightsConfig,
            TransitClassConfig,
            TransitConfig,
            TransitModeConfig,
        )

        if transit_volume <= 0:
            return 0.0

        line = segment.line

        if {use_fares}:
            mode_char = line["#src_mode"]
        else:
            mode_char = line.mode.id

        if mode_char in ["p"]:
            congestion = 0.25 * ((transit_volume / capacity) ** 10)
        else:
            seated_capacity = (
                line.vehicle.seated_capacity
                * {time_period_duration}
                * 60
                / line.headway
            )

            seated_pax = min(transit_volume, seated_capacity)
            standing_pax = max(transit_volume - seated_pax, 0)

            seated_cost = {weights}.min_seat + (
                {weights}.max_seat - {weights}.min_seat
            ) * (transit_volume / capacity) ** {weights}.power_seat

            standing_cost = {weights}.min_stand + (
                {weights}.max_stand - {weights}.min_stand
            ) * (transit_volume / capacity) ** {weights}.power_stand

            crowded_cost = (seated_cost * seated_pax + standing_cost * standing_pax) / (
                transit_volume
            )

            congestion = max(crowded_cost, 1) - 1.0

        return congestion

    return textwrap.dedent(inspect.getsource(calc_segment_cost)).format(
        time_period_duration=time_period_duration, weights=weights, use_fares=use_fares
    )
calc_total_offs
calc_total_offs(line) -> float

Calculate total alightings for a line.

Parameters:

Name Type Description Default
line _type_

description

required
Source code in tm2py/components/network/transit/transit_assign.py
196
197
198
199
200
201
202
203
204
205
206
207
208
def calc_total_offs(line) -> float:
    """Calculate total alightings for a line.

    Args:
        line (_type_): _description_
    """
    # NOTE This was done previously using:
    # total_offs += prev_seg.transit_volume - seg.transit_volume + seg.transit_boardings
    # but offs should equal ons for a whole line, so this seems simpler
    offs = [seg.transit_boardings for seg in line.segments(True)]
    total_offs = sum(offs)
    # added lambda due to divide by zero error
    return total_offs if total_offs >= 0.001 else 9999
calc_offs_thru_segment
calc_offs_thru_segment(segment) -> float

Calculate total alightings on a line up to and including the given segment.

Parameters:

Name Type Description Default
segment _type_

description

required

Returns:

Name Type Description
float float

description

Source code in tm2py/components/network/transit/transit_assign.py
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
def calc_offs_thru_segment(segment) -> float:
    """_summary_

    Args:
        segment (_type_): _description_

    Returns:
        float: _description_
    """
    # SIJIA TODO check that it should be [:segment.number+1] . Not sure if 0-indexed in emme or 1-indexed?
    segments_thru_this_segment = [seg for seg in iter(segment.line.segments(True))][
        : segment.number + 1
    ]
    offs_thru_this_seg = [
        prev_seg.transit_volume - this_seg.transit_volume + this_seg.transit_boardings
        for prev_seg, this_seg in zip(
            segments_thru_this_segment[:-1], segments_thru_this_segment[1:]
        )
    ]
    total_offs_thru_this_seg = sum(offs_thru_this_seg)
    return total_offs_thru_this_seg
calc_extra_wait_time
calc_extra_wait_time(segment, segment_capacity: float, eawt_weights, mode_config: dict, use_fares: bool = False)

Calculate extra added wait time based on…

TODO document fully.

Parameters:

Name Type Description Default
segment _type_

Emme transit segment object.

required
segment_capacity float

description

required
eawt_weights

extra added wait time weights

required
mode_config dict

mode character to mode config

required
use_fares bool

description. Defaults to False.

False

Returns:

Name Type Description
_type_

description

Source code in tm2py/components/network/transit/transit_assign.py
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
def calc_extra_wait_time(
    segment,
    segment_capacity: float,
    eawt_weights,
    mode_config: dict,
    use_fares: bool = False,
):
    """Calculate extra added wait time based on...

    # TODO document fully.

    Args:
        segment (_type_): Emme transit segment object.
        segment_capacity (float): _description_
        eawt_weights: extra added wait time weights
        mode_config: mode character to mode config
        use_fares (bool, optional): _description_. Defaults to False.

    Returns:
        _type_: _description_
    """
    _transit_volume = segment.transit_volume
    _headway = segment.line.headway if segment.line.headway >= 0.1 else 9999
    _total_offs = calc_total_offs(segment.line)
    _offs_thru_segment = calc_offs_thru_segment(segment)

    # TODO Document and add params to config. Have no idea what source is here.
    eawt = (
        eawt_weights.constant
        + eawt_weights.weight_inverse_headway * (1 / _headway)
        + eawt_weights.vcr * (_transit_volume / segment_capacity)
        + eawt_weights.exit_proportion * (_offs_thru_segment / _total_offs)
    )

    if use_fares:
        eawt_factor = (
            1
            if segment.line["#src_mode"] == ""
            else mode_config[segment.line["#src_mode"]]["eawt_factor"]
        )
    else:
        eawt_factor = (
            1
            if segment.line.mode.id == ""
            else mode_config[segment.line.mode.id]["eawt_factor"]
        )

    return eawt * eawt_factor
calc_adjusted_headway
calc_adjusted_headway(segment, segment_capacity: float) -> float

Adjust a segment's headway based on boardings relative to remaining capacity.

TODO: add documentation about source and theory behind this.

Parameters:

Name Type Description Default
segment

Emme transit segment object

required
segment_capacity float

Transit segment capacity.

required

Returns:

Name Type Description
float float

Adjusted headway

Source code in tm2py/components/network/transit/transit_assign.py
def calc_adjusted_headway(segment, segment_capacity: float) -> float:
    """Headway adjusted based on ....?

    TODO: add documentation about source and theory behind this.

    Args:
        segment: Emme transit segment object
        segment_capacity (float): _description_

    Returns:
        float: Adjusted headway
    """
    # TODO add to params
    max_hdwy_growth = 1.5
    max_headway = 999.98
    # QUESTION FOR INRO: what is the difference between segment["@phdwy"] and line.headway?
    # is one the perceived headway?
    _transit_volume = segment.transit_volume
    _transit_boardings = segment.transit_boardings
    _previous_headway = segment["@phdwy"]
    _current_headway = segment.line.headway
    _available_capacity = max(
        segment_capacity - _transit_volume + _transit_boardings, 0
    )

    adjusted_headway = min(
        max_headway,
        _previous_headway
        * min((_transit_boardings + 1) / (_available_capacity + 1), 1.5),
    )
    adjusted_headway = max(_current_headway, adjusted_headway)

    return adjusted_headway
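
A worked numeric example of the adjusted-headway formula above; the input values are illustrative assumptions, not model outputs:

```python
# Illustrative numbers only (not from any model run).
segment_capacity = 100.0
transit_volume = 120.0       # demand exceeds capacity on this segment
transit_boardings = 40.0
previous_headway = 10.0      # segment["@phdwy"]
current_headway = 10.0       # line.headway
max_headway = 999.98

available_capacity = max(segment_capacity - transit_volume + transit_boardings, 0)  # 20.0
growth = min((transit_boardings + 1) / (available_capacity + 1), 1.5)               # capped at 1.5
adjusted_headway = max(current_headway, min(max_headway, previous_headway * growth))
print(adjusted_headway)  # 15.0 -- perceived headway grows by the capped 50% because volume exceeds capacity
```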
func_returns_calc_updated_perceived_headway
func_returns_calc_updated_perceived_headway(time_period_duration, eawt_weights, mode_config, use_fares)

Returns the source of a calc_headway function for Emme assignment with its parameters pre-substituted into the text; this acts like functools.partial, which Emme does not accept.

Source code in tm2py/components/network/transit/transit_assign.py
def func_returns_calc_updated_perceived_headway(
    time_period_duration, eawt_weights, mode_config, use_fares
):
    """Return the source of a calc_headway function for Emme assignment with parameters pre-substituted.

    Acts like functools.partial, which Emme does not accept.
    """

    def calc_headway(transit_volume, transit_boardings, headway, capacity, segment):
        """Calculate the perceived headway, adjusted for crowding plus extra added wait time.

        # TODO Document more fully.

        Args:
            transit_volume: transit passenger volume on the segment
            transit_boardings: passenger boardings on the segment
            headway: line headway (minutes)
            capacity: segment capacity
            segment: Emme transit segment object

        Returns:
            float: adjusted headway plus extra added wait time
        """
        # QUESTION FOR INRO: Kevin separately put segment.line.headway and headway as an arg.
        # Would they be different? Why?
        # TODO: Either can we label the headways so it is clear what is diff about them or just use single value?

        from tm2py.config import (
            CcrWeightsConfig,
            EawtWeightsConfig,
            TransitClassConfig,
            TransitConfig,
            TransitModeConfig,
        )

        _segment_capacity = capacity

        vcr = transit_volume / _segment_capacity

        _extra_added_wait_time = calc_extra_wait_time(
            segment,
            _segment_capacity,
            {eawt_weights},
            {mode_config},
            {use_fares},
        )

        _adjusted_headway = calc_adjusted_headway(
            segment,
            _segment_capacity,
        )

        return _adjusted_headway + _extra_added_wait_time

    return textwrap.dedent(inspect.getsource(calc_headway)).format(
        time_period_duration=time_period_duration,
        eawt_weights=eawt_weights,
        mode_config=mode_config,
        use_fares=use_fares,
    )
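
The inspect.getsource + str.format pattern above (also used for calc_segment_cost) can be illustrated with a toy, hypothetical function; the snippet assumes it is run from a saved module so inspect can read the file:

```python
import inspect
import textwrap


def make_scaled_cost(scale):
    # Template function: its body is never executed here. The {scale} placeholder is
    # substituted into the *source text*, producing a plain function definition that
    # Emme can accept, since Emme will not take a functools.partial object.
    def calc_cost(volume):
        return volume * {scale}  # placeholder filled in below

    return textwrap.dedent(inspect.getsource(calc_cost)).format(scale=scale)


print(make_scaled_cost(2.5))
# def calc_cost(volume):
#     return volume * 2.5  # placeholder filled in below
```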

Transit Skimming

Travel time and cost matrix generation for transit services.

tm2py.components.network.transit.transit_skim

Transit skims module.

Classes
TransitSkim
TransitSkim(controller: 'RunController')

Bases: Component

Transit skim calculation methods.

Constructor for TransitSkim class.

Parameters:

Name Type Description Default
controller 'RunController'

The RunController instance.

required
Source code in tm2py/components/network/transit/transit_skim.py
def __init__(self, controller: "RunController"):
    """Constructor for TransitSkim class.

    Args:
        controller: The RunController instance.
    """
    super().__init__(controller)
    self.config = self.controller.config.transit
    self._emmebank = None
    self._num_processors = self.controller.emme_manager.num_processors_transit_skim
    self._networks = None
    self._scenarios = None
    self._matrix_cache = None
    self._skim_properties = None
    self._skim_matrices = {
        k: None
        for k in itertools.product(
            self.time_period_names,
            self.config.classes,
            self.skim_properties,
        )
    }
    self._skim_outputs = None
Attributes
skim_properties property
skim_properties

List of Skim Property named tuples: name, description.

TODO put these in config.

skim_outputs property
skim_outputs

List of Skim Property named tuples: name, description.

TODO put these in config.

Functions
validate_inputs
validate_inputs()

Validate inputs.

Source code in tm2py/components/network/transit/transit_skim.py
def validate_inputs(self):
    """Validate inputs."""
    # TODO add input validation
    pass
run
run()

Run transit skims.

Source code in tm2py/components/network/transit/transit_skim.py
@LogStartEnd("Transit skims")
def run(self):
    """Run transit skims."""
    self.emmebank_skim_matrices(
        self.time_period_names, self.config.classes, self.skim_properties
    )
    with self.logger.log_start_end(f"period transit skims"):
        for _time_period in self.time_period_names:
            with self.controller.emme_manager.logbook_trace(
                f"Transit skims for period {_time_period}"
            ):
                for _transit_class in self.config.classes:
                    self.run_skim_set(_time_period, _transit_class)
                    self._export_skims(_time_period, _transit_class)
                if self.logger.debug_enabled:
                    self._log_debug_report(_time_period)
emmebank_skim_matrices
emmebank_skim_matrices(time_periods: List[str] = None, transit_classes=None, skim_properties: Skimproperty = None) -> dict

Gets skim matrices from emmebank, or lazily creates them if they don’t already exist.

Source code in tm2py/components/network/transit/transit_skim.py
def emmebank_skim_matrices(
    self,
    time_periods: List[str] = None,
    transit_classes=None,
    skim_properties: Skimproperty = None,
) -> dict:
    """Gets skim matrices from emmebank, or lazily creates them if they don't already exist."""
    create_matrix = self.controller.emme_manager.tool(
        "inro.emme.data.matrix.create_matrix"
    )
    if time_periods is None:
        time_periods = self.time_period_names
    if not set(time_periods).issubset(set(self.time_period_names)):
        raise ValueError(
            f"time_periods ({time_periods}) must be subset of time_period_names ({self.time_period_names})."
        )

    if transit_classes is None:
        transit_classes = self.config.classes
    if not set(transit_classes).issubset(set(self.config.classes)):
        raise ValueError(
            f"transit_classes ({transit_classes}) must be subset of config classes ({self.config.classes})."
        )

    if skim_properties is None:
        skim_properties = self.skim_properties
    if not set(skim_properties).issubset(set(self.skim_properties)):
        raise ValueError(
            f"skim_properties ({skim_properties}) must be subset of available skim_properties ({self.skim_properties})."
        )

    _tp_tclass_skprop = itertools.product(
        time_periods, transit_classes, skim_properties
    )
    _tp_tclass_skprop_list = []

    for _tp, _tclass, _skprop in _tp_tclass_skprop:
        _name = f"{_tp}_{_tclass.name}_{_skprop.name}"
        _desc = f"{_tp} {_tclass.description}: {_skprop.desc}"
        _matrix = self.scenarios[_tp].emmebank.matrix(f'mf"{_name}"')
        if not _matrix:
            _matrix = create_matrix(
                "mf", _name, _desc, scenario=self.scenarios[_tp], overwrite=True
            )
        else:
            _matrix.description = _desc

        self._skim_matrices[_name] = _matrix
        _tp_tclass_skprop_list.append(_name)

    skim_matrices = {
        k: v
        for k, v in self._skim_matrices.items()
        if k in list(_tp_tclass_skprop_list)
    }
    return skim_matrices
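
For illustration, the skim matrices created above are named by time period, transit class, and skim property; the period abbreviation below is an assumption, while the class and property names are taken from elsewhere in this module:

```python
# Hypothetical example of the naming convention used by emmebank_skim_matrices.
time_period = "AM"                  # assumed time period abbreviation
transit_class_name = "WLK_TRN_WLK"  # class name referenced in skim_drive_walk below
skim_property_name = "IWAIT"        # first wait time, as skimmed in skim_walk_wait_boards_fares

matrix_name = f"{time_period}_{transit_class_name}_{skim_property_name}"
print(matrix_name)  # AM_WLK_TRN_WLK_IWAIT, referenced in Emme specs as mf"AM_WLK_TRN_WLK_IWAIT"
```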
run_skim_set
run_skim_set(time_period: str, transit_class: str)

Run the transit skim calculations for a given time period and assignment class.

Results are stored in transit emmebank.

Steps
  1. determine if using transit capacity constraint
  2. skim walk, wait time, boardings, and fares
  3. skim in vehicle time by mode
  4. mask transfers above max amount
  5. mask if doesn’t have required modes
Source code in tm2py/components/network/transit/transit_skim.py
def run_skim_set(self, time_period: str, transit_class: str):
    """Run the transit skim calculations for a given time period and assignment class.

    Results are stored in transit emmebank.

    Steps:
        1. determine if using transit capacity constraint
        2. skim walk, wait time, boardings, and fares
        3. skim in vehicle time by mode
        4. mask transfers above max amount
        5. mask if doesn't have required modes
    """
    use_ccr = False
    congested_transit_assignment = self.config.congested_transit_assignment
    if self.controller.iteration >= 1:
        use_ccr = self.config.use_ccr
    with self.controller.emme_manager.logbook_trace(
        "First and total wait time, number of boardings, "
        "fares, and total and transfer walk time"
    ):
        self.skim_walk_wait_boards_fares(time_period, transit_class)
    with self.controller.emme_manager.logbook_trace("In-vehicle time by mode"):
        self.skim_invehicle_time_by_mode(time_period, transit_class, use_ccr)
    with self.controller.emme_manager.logbook_trace(
        "Drive distance and time",
        "Walk auxiliary time, walk access time and walk egress time",
    ):
        self.skim_drive_walk(time_period, transit_class)
    with self.controller.emme_manager.logbook_trace("Calculate crowding"):
        self.skim_crowding(time_period, transit_class)
    if use_ccr:
        with self.controller.emme_manager.logbook_trace("CCR related skims"):
            self.skim_reliability_crowding_capacity(time_period, transit_class)
skim_walk_wait_boards_fares
skim_walk_wait_boards_fares(time_period: str, transit_class: str)

Skim wait, walk, board, and fares for a given time period and transit assignment class.

Skim the first and total wait time, number of boardings, (transfers + 1) fares, total walk time, total in-vehicle time.

Source code in tm2py/components/network/transit/transit_skim.py
def skim_walk_wait_boards_fares(self, time_period: str, transit_class: str):
    """Skim wait, walk, board, and fares for a given time period and transit assignment class.

    Skim the first and total wait time, number of boardings, (transfers + 1)
    fares, total walk time, total in-vehicle time.
    """
    _tp_tclass = f"{time_period}_{transit_class.name}"
    _network = self.networks[time_period]
    _transit_mode_ids = [
        m.id for m in _network.modes() if m.type in ["TRANSIT", "AUX_TRANSIT"]
    ]
    spec = {
        "type": "EXTENDED_TRANSIT_MATRIX_RESULTS",
        "actual_first_waiting_times": f'mf"{_tp_tclass}_IWAIT"',
        "actual_total_waiting_times": f'mf"{_tp_tclass}_WAIT"',
        "by_mode_subset": {
            "modes": _transit_mode_ids,
            "avg_boardings": f'mf"{_tp_tclass}_BOARDS"',
        },
    }
    if self.config.use_fares:
        spec["by_mode_subset"].update(
            {
                "actual_in_vehicle_costs": f'mf"{_tp_tclass}_IN_VEHICLE_COST"',
                "actual_total_boarding_costs": f'mf"{_tp_tclass}_FARE"',
            }
        )

    self.controller.emme_manager.matrix_results(
        spec,
        class_name=transit_class.name,
        scenario=self.scenarios[time_period],
        num_processors=self._num_processors,
    )

    self._calc_xfer_wait(time_period, transit_class.name)
    self._calc_boardings(time_period, transit_class.name)
    if self.config.use_fares:
        self._calc_fares(time_period, transit_class.name)
skim_invehicle_time_by_mode
skim_invehicle_time_by_mode(time_period: str, transit_class: str, use_ccr: bool = False) -> None

Skim in-vehicle by mode for a time period and transit class and store results in Emmebank.

Parameters:

Name Type Description Default
time_period str

time period abbreviation

required
transit_class str

transit class name

required
use_ccr bool

if True, will use crowding, capacity, and reliability (ccr). Defaults to False

False
Source code in tm2py/components/network/transit/transit_skim.py
def skim_invehicle_time_by_mode(
    self, time_period: str, transit_class: str, use_ccr: bool = False
) -> None:
    """Skim in-vehicle by mode for a time period and transit class and store results in Emmebank.

    Args:
        time_period (str): time period abbreviation
        transit_class (str): transit class name
        use_ccr (bool): if True, will use crowding, capacity, and reliability (ccr).
            Defaults to False

    """
    mode_combinations = self._get_emme_mode_ids(transit_class, time_period)
    if use_ccr:
        total_ivtt_expr = self._invehicle_time_by_mode_ccr(
            time_period, transit_class, mode_combinations
        )
    else:
        total_ivtt_expr = self._invehicle_time_by_mode_no_ccr(
            time_period, transit_class, mode_combinations
        )
    # sum total ivtt across all modes
    self._calc_total_ivt(time_period, transit_class, total_ivtt_expr)
skim_drive_walk
skim_drive_walk(time_period: str, transit_class: str) -> None

Skim drive time and distance plus walk access, egress, and auxiliary times for a time period and transit class.

Source code in tm2py/components/network/transit/transit_skim.py
def skim_drive_walk(self, time_period: str, transit_class: str) -> None:
    """Skim drive time/distance and walk access, egress, and auxiliary times for a time period and transit class."""
    _tp_tclass = f"{time_period}_{transit_class.name}"
    # _network = self.networks[time_period]

    # drive time here is perception factor*(drive time + toll penalty),
    # will calculate the actual drive time and subtract the toll penalty in the following steps
    spec1 = {
        "type": "EXTENDED_TRANSIT_MATRIX_RESULTS",
        "by_mode_subset": {
            "modes": ["D"],
            "actual_aux_transit_times": f'mf"{_tp_tclass}_DTIME"',
            "distance": f'mf"{_tp_tclass}_DDIST"',
        },
    }
    # skim walk distance in walk time matrices first,
    # will calculate the actual walk time and overwrite the distance in the following steps
    spec2 = {
        "type": "EXTENDED_TRANSIT_MATRIX_RESULTS",
        "by_mode_subset": {
            "modes": ["w"],
            "distance": f'mf"{_tp_tclass}_WAUX"',
        },
    }
    spec3 = {
        "type": "EXTENDED_TRANSIT_MATRIX_RESULTS",
        "by_mode_subset": {
            "modes": ["a"],
            "distance": f'mf"{_tp_tclass}_WACC"',
        },
    }
    spec4 = {
        "type": "EXTENDED_TRANSIT_MATRIX_RESULTS",
        "by_mode_subset": {
            "modes": ["e"],
            "distance": f'mf"{_tp_tclass}_WEGR"',
        },
    }
    if transit_class.name not in ['WLK_TRN_WLK']:
        self.controller.emme_manager.matrix_results(
            spec1,
            class_name=transit_class.name,
            scenario=self.scenarios[time_period],
            num_processors=self._num_processors,
        )
    self.controller.emme_manager.matrix_results(
        spec2,
        class_name=transit_class.name,
        scenario=self.scenarios[time_period],
        num_processors=self._num_processors,
    )
    if transit_class.name not in ['PNR_TRN_WLK','KNR_TRN_WLK']:
        self.controller.emme_manager.matrix_results(
            spec3,
            class_name=transit_class.name,
            scenario=self.scenarios[time_period],
            num_processors=self._num_processors,
        )
    if transit_class.name not in ['WLK_TRN_PNR','WLK_TRN_KNR']:
        self.controller.emme_manager.matrix_results(
            spec4,
            class_name=transit_class.name,
            scenario=self.scenarios[time_period],
            num_processors=self._num_processors,
        )


    drive_perception_factor = self.config.drive_perception_factor
    walk_speed = self.config.walk_speed
    vot = self.config.value_of_time
    # divide drive time by mode specific perception factor to get the actual time
    # for walk time, use walk distance/walk speed
    # because the mode specific perception factors are hardcoded in the mode definition
    spec_list = [
        {
            "type": "MATRIX_CALCULATION",
            "constraint": None,
            "result": f'mf"{_tp_tclass}_DTIME"',
            "expression": f'mf"{_tp_tclass}_DTIME"/{drive_perception_factor}',
        },
        {
            "type": "MATRIX_CALCULATION",
            "constraint": None,
            "result": f'mf"{_tp_tclass}_DTIME"',
            "expression": f'mf"{_tp_tclass}_DTIME"',
        },
        {
            "type": "MATRIX_CALCULATION",
            "constraint": None,
            "result": f'mf"{_tp_tclass}_WAUX"',
            "expression": f'mf"{_tp_tclass}_WAUX"/({walk_speed}/60)',
        },
        {
            "type": "MATRIX_CALCULATION",
            "constraint": None,
            "result": f'mf"{_tp_tclass}_WACC"',
            "expression": f'mf"{_tp_tclass}_WACC"/({walk_speed}/60)',
        },
        {
            "type": "MATRIX_CALCULATION",
            "constraint": None,
            "result": f'mf"{_tp_tclass}_WEGR"',
            "expression": f'mf"{_tp_tclass}_WEGR"/({walk_speed}/60)',
        },
    ]
    self.controller.emme_manager.matrix_calculator(
        spec_list,
        scenario=self.scenarios[time_period],
        num_processors=self._num_processors,
    )
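
The last matrix calculations above convert the skimmed walk distances to times by dividing by walk speed; a minimal sketch of that conversion, assuming distances in miles and config.walk_speed in miles per hour:

```python
# Sketch of the distance-to-time conversion used in the WAUX/WACC/WEGR calculations above.
# Units are assumptions: walk distance in miles, config.walk_speed in miles per hour.
walk_distance_miles = 0.5
walk_speed_mph = 3.0  # assumed value of self.config.walk_speed

walk_time_minutes = walk_distance_miles / (walk_speed_mph / 60)
print(walk_time_minutes)  # 10.0 minutes
```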
skim_penalty_toll
skim_penalty_toll(time_period: str, transit_class: str) -> None

Skim transfer boarding-time penalties and drive tolls for a time period and transit class.

Source code in tm2py/components/network/transit/transit_skim.py
def skim_penalty_toll(self, time_period: str, transit_class: str) -> None:
    """Skim transfer boarding-time penalties and drive tolls for a time period and transit class."""
    # transfer boarding time penalty
    self._run_strategy_analysis(
        time_period, transit_class, {"boarding": "@xboard_nodepen"}, "XBOATIME"
    )

    _tp_tclass = f"{time_period}_{transit_class.name}"
    if ("PNR_TRN_WLK" in _tp_tclass) or ("WLK_TRN_PNR" in _tp_tclass):
        spec = {  # subtract PNR boarding from total transfer boarding time penalty
            "type": "MATRIX_CALCULATION",
            "constraint": {
                "by_value": {
                    "od_values": f'mf"{_tp_tclass}_XBOATIME"',
                    "interval_min": 0,
                    "interval_max": 9999999,
                    "condition": "INCLUDE",
                }
            },
            "result": f'mf"{_tp_tclass}_XBOATIME"',
            "expression": f'(mf"{_tp_tclass}_XBOATIME" - 1).max.0',
        }

        self.controller.emme_manager.matrix_calculator(
            spec,
            scenario=self.scenarios[time_period],
            num_processors=self._num_processors,
        )

    # drive toll
    if ("PNR_TRN_WLK" in _tp_tclass) or ("KNR_TRN_WLK" in _tp_tclass):
        self._run_path_analysis(
            time_period,
            transit_class,
            "ORIGIN_TO_INITIAL_BOARDING",
            {"aux_transit": "@drive_toll"},
            "DTOLL",
        )
    elif ("WLK_TRN_PNR" in _tp_tclass) or ("WLK_TRN_KNR" in _tp_tclass):
        self._run_path_analysis(
            time_period,
            transit_class,
            "FINAL_ALIGHTING_TO_DESTINATION",
            {"aux_transit": "@drive_toll"},
            "DTOLL",
        )
skim_reliability_crowding_capacity
skim_reliability_crowding_capacity(time_period: str, transit_class) -> None

Generate skim results for the CCR assignment and store them in the Emmebank.

Generates the following:

  1. Link Unreliability: LINKREL
  2. Crowding penalty: CROWD
  3. Extra added wait time: EAWT
  4. Capacity penalty: CAPPEN

Parameters:

Name Type Description Default
time_period str

time period abbreviation

required
transit_class

transit class

required
Source code in tm2py/components/network/transit/transit_skim.py
def skim_reliability_crowding_capacity(
    self, time_period: str, transit_class
) -> None:
    """Generate skim results for CCR assignment and stores results in Emmebank.

    Generates the following:
    1. Link Unreliability: LINKREL
    2. Crowding penalty: CROWD
    3. Extra added wait time: EAWT
    4. Capacity penalty: CAPPEN

    Args:
        time_period (str): time period abbreviation
        transit_class: transit class
    """

    # Link unreliability
    self._run_strategy_analysis(
        time_period, transit_class, {"in_vehicle": "ul1"}, "LINKREL"
    )
    # Crowding penalty
    self._run_strategy_analysis(
        time_period, transit_class, {"in_vehicle": "@ccost"}, "CROWD"
    )
    # skim node reliability, extra added wait time (EAWT)
    self._run_strategy_analysis(
        time_period, transit_class, {"boarding": "@eawt"}, "EAWT"
    )
    # skim capacity penalty
    self._run_strategy_analysis(
        time_period, transit_class, {"boarding": "@capacity_penalty"}, "CAPPEN"
    )
skim_crowding
skim_crowding(time_period: str, transit_class) -> None

Skim the transit crowding penalty for a time period and transit class.

Source code in tm2py/components/network/transit/transit_skim.py
def skim_crowding(self, time_period: str, transit_class) -> None:
    """Skim the transit crowding penalty for a time period and transit class."""
    # Crowding penalty
    self._run_strategy_analysis(
        time_period, transit_class, {"in_vehicle": "@ccost"}, "CROWD"
    )
mask_if_not_required_modes
mask_if_not_required_modes(time_period: str, transit_class) -> None

Enforce the required_mode_combo parameter by setting IVTs to 0 for OD pairs that do not include all required modes.

Parameters:

Name Type Description Default
time_period str

Time period name abbreviation

required
transit_class _type_

Transit class config object.

required
Source code in tm2py/components/network/transit/transit_skim.py
def mask_if_not_required_modes(self, time_period: str, transit_class) -> None:
    """
    Enforce the `required_mode_combo` parameter by setting IVTs to 0 if don't have required modes.

    Args:
        time_period (str): Time period name abbreviation
        transit_class (_type_): _description_
    """
    if not transit_class.required_mode_combo:
        return

    _ivt_skims = {}
    for mode in transit_class.required_mode_combo:
        transit_modes = [m for m in self.config.modes if m.type == mode]
        for transit_mode in transit_modes:
            if mode not in _ivt_skims.keys():
                _ivt_skims[mode] = self.matrix_cache[time_period].get_data(
                    f'mf"{time_period}_{transit_class.name}_{transit_mode.name}IVTT"'
                )
            else:
                _ivt_skims[mode] += self.matrix_cache[time_period].get_data(
                    f'mf"{time_period}_{transit_class.name}_{transit_mode.name}IVTT"'
                )

    # multiply all IVT skims together and see if they are greater than zero
    has_all = None
    for key, value in _ivt_skims.items():
        if has_all is not None:
            has_all = np.multiply(has_all, value)
        else:
            has_all = value

    self._mask_skim_set(time_period, transit_class, has_all)
mask_above_max_transfers
mask_above_max_transfers(time_period: str, transit_class)

Reset skims to 0 if number of transfers is greater than max_transfers.

Parameters:

Name Type Description Default
time_period str

Time period name abbreviation

required
transit_class _type_

Transit class config object.

required
Source code in tm2py/components/network/transit/transit_skim.py
def mask_above_max_transfers(self, time_period: str, transit_class):
    """Reset skims to 0 if number of transfers is greater than max_transfers.

    Args:
        time_period (str): Time period name abbreviation
        transit_class (_type_): _description_
    """
    max_transfers = self.config.max_transfers
    xfers = self.matrix_cache[time_period].get_data(
        f'mf"{time_period}_{transit_class.name}_XFERS"'
    )
    xfer_mask = np.less_equal(xfers, max_transfers)
    self._mask_skim_set(time_period, transit_class, xfer_mask)

Transit Configuration Classes

tm2py.config.TransitModeConfig

Transit mode definition (see also mode in the Emme API).
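
As a hedged sketch of what the validators below enforce, these hypothetical mode entries include only the fields those validators check; values are illustrative and real mode configs carry additional fields:

```python
# Hypothetical mode entries (values are illustrative, not defaults).
aux_walk_mode = {
    "mode_id": "w",                 # must be exactly one character
    "assign_type": "AUX_TRANSIT",
    "speed_or_time_factor": 3.0,    # required because assign_type == AUX_TRANSIT
}

local_bus_mode = {
    "mode_id": "b",
    "assign_type": "TRANSIT",
    # each of the following is required because assign_type == TRANSIT
    "in_vehicle_perception_factor": 1.0,
    "initial_boarding_penalty": 10.0,
    "transfer_boarding_penalty": 10.0,
    "headway_fraction": 0.5,
    "transfer_wait_perception_factor": 2.0,
}
```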

Functions
in_vehicle_perception_factor_valid
in_vehicle_perception_factor_valid(value, values)

Validate in_vehicle_perception_factor exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("in_vehicle_perception_factor", always=True)
def in_vehicle_perception_factor_valid(cls, value, values):
    """Validate in_vehicle_perception_factor exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
speed_or_time_factor_valid
speed_or_time_factor_valid(value, values)

Validate speed_or_time_factor exists if assign_type is AUX_TRANSIT.

Source code in tm2py/config.py
@validator("speed_or_time_factor", always=True)
def speed_or_time_factor_valid(cls, value, values):
    """Validate speed_or_time_factor exists if assign_type is AUX_TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "AUX_TRANSIT":
        assert value is not None, "must be specified when assign_type==AUX_TRANSIT"
    return value
initial_boarding_penalty_valid
initial_boarding_penalty_valid(value, values)

Validate initial_boarding_penalty exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("initial_boarding_penalty", always=True)
def initial_boarding_penalty_valid(value, values):
    """Validate initial_boarding_penalty exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
transfer_boarding_penalty_valid
transfer_boarding_penalty_valid(value, values)

Validate transfer_boarding_penalty exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("transfer_boarding_penalty", always=True)
def transfer_boarding_penalty_valid(value, values):
    """Validate transfer_boarding_penalty exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
headway_fraction_valid
headway_fraction_valid(value, values)

Validate headway_fraction exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("headway_fraction", always=True)
def headway_fraction_valid(value, values):
    """Validate headway_fraction exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
transfer_wait_perception_factor_valid
transfer_wait_perception_factor_valid(value, values)

Validate transfer_wait_perception_factor exists if assign_type is TRANSIT.

Source code in tm2py/config.py
@validator("transfer_wait_perception_factor", always=True)
def transfer_wait_perception_factor_valid(value, values):
    """Validate transfer_wait_perception_factor exists if assign_type is TRANSIT."""
    if "assign_type" in values and values["assign_type"] == "TRANSIT":
        assert value is not None, "must be specified when assign_type==TRANSIT"
    return value
mode_id_valid classmethod
mode_id_valid(value)

Validate mode_id.

Source code in tm2py/config.py
@classmethod
@validator("mode_id")
def mode_id_valid(cls, value):
    """Validate mode_id."""
    assert len(value) == 1, "mode_id must be one character"
    return value

tm2py.config.TransitConfig

Transit assignment parameters.


🚶 Active Transportation Components

Development Status

Active transportation components (walking, cycling) are currently under development. Documentation will be added as these components are implemented.


🔌 EMME Integration

Interface layers for integrating with EMME transportation planning software.

EMME Software

EMME is a comprehensive transportation planning software package. tm2py provides Python wrappers for EMME functionality.

tm2py.emme

Emme components module.

Configuration:

tm2py.config.EmmeConfig

Emme-specific parameters.

Properties


📊 Data Models & Validation

Data validation models ensuring input file integrity and consistency.

Data Validation Philosophy

tm2py uses Pandera for robust data validation. Data models define expected schemas, data types, and validation rules for all input files.
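
As a minimal, generic sketch of this pattern (a hypothetical schema, not the actual tm2py data model), a Pandera DataFrameModel declares column types and constraints and validates a DataFrame against them:

```python
import pandas as pd
import pandera as pa
from pandera.typing import Series


class ExampleZoneData(pa.DataFrameModel):
    """Hypothetical schema: zone IDs must be unique, counts non-negative."""

    zone_id: Series[int] = pa.Field(unique=True)
    households: Series[int] = pa.Field(ge=0)
    employment: Series[int] = pa.Field(ge=0)


df = pd.DataFrame({"zone_id": [1, 2], "households": [100, 250], "employment": [40, 300]})
validated = ExampleZoneData.validate(df, lazy=True)  # raises SchemaErrors on failure
```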

Input File Documentation

For detailed information about input file formats and requirements, see Input Files Documentation 📁

MAZ Data Model

Micro-Analysis Zone land use data validation and management.

MAZ Input File Format

For information about the mazData.csv file structure and field requirements, see MAZ Data Input Documentation 🗂️

tm2py.data_models.maz_data

MAZ Data Model for TM2.0 Transportation Modeling

This module provides data validation and management for Micro-Analysis Zone (MAZ) data, which forms the foundation of land use inputs for the TM2.0 transportation model.

Overview

MAZ (Micro-Analysis Zone) data represents fine-grained geographic units that contain detailed land use, demographic, and employment information. This data is crucial for:

  • Trip generation modeling based on land use characteristics
  • Accessibility calculations for transportation modes
  • Economic and demographic analysis at a granular geographic level
  • Integration with larger Traffic Analysis Zones (TAZ) for model hierarchy
Key Components

MAZData : pandera.model.DataFrameModel
    Primary data validation class containing 60+ attributes for land use characteristics including employment by sector, demographic data, parking supply, and density measures.

NodeIDCrosswalk : pandera.model.DataFrameModel
    Manages the mapping between model node IDs and sequential IDs for MAZ, TAZ, and external zones to ensure consistent geographic referencing.

Data Structure

The MAZ data follows a hierarchical structure where:

  • Multiple MAZs can belong to a single TAZ (Traffic Analysis Zone)
  • Each MAZ has unique identifiers (original and sequential)
  • Land use data is categorized by employment sectors, housing types, and amenities
  • Validation ensures data consistency and completeness for modeling

Usage

This module is typically used during the data preparation phase of transportation modeling to validate and standardize land use inputs before they are consumed by trip generation and other demand modeling components.

Example
from pathlib import Path
from tm2py.data_models.maz_data import load_maz_data, create_sequential_index

# Create node ID crosswalk from Lasso network build output
xwalk_file = Path('model_to_emme_node_id.csv')
crosswalk = create_sequential_index(xwalk_file)

# Load and validate MAZ data
maz_file = Path('maz_land_use_data.csv')
maz_data = load_maz_data(maz_file, crosswalk)
Classes
MAZData

Micro-Analysis Zone (MAZ) Land Use Data Validation Model.

This class validates MAZ-level land use data used in TM2.0 transportation modeling. MAZs represent the finest geographic resolution for land use data, containing detailed information about employment by sector, demographics, parking supply, and accessibility measures. This data drives trip generation and other demand modeling components.

The validation ensures data consistency, proper data types, and logical constraints across all land use attributes before they are consumed by the transportation model.

Geographic Hierarchy
  • MAZ (Micro-Analysis Zone): Finest geographic unit
  • TAZ (Traffic Analysis Zone): Aggregates multiple MAZs
  • District/County: Higher-level geographic groupings
Data Categories
  1. Geographic Identifiers: MAZ/TAZ IDs, coordinates, district/county information
  2. Demographics: Households, population, school enrollment by type
  3. Employment by Sector: 21 detailed employment categories (retail, manufacturing, services, etc.)
  4. Parking Supply: Hourly, daily, and monthly parking by destination type
  5. Density Measures: Employment, population, and household densities within ½ mile
  6. Accessibility: Intersection counts and density classifications
Employment Categories

The model includes detailed employment data across major sectors:

  • Primary: Agriculture (ag), Natural Resources (natres)
  • Manufacturing: Bio (man_bio), Light (man_lgt), Heavy (man_hvy), Tech (man_tech)
  • Services: Professional (prof), Business (serv_bus), Personal (serv_pers), Social (serv_soc)
  • Retail: Local (ret_loc), Regional (ret_reg)
  • Education: K-12 (ed_k12), Higher Ed (ed_high), Other (ed_oth)
  • Other: Government (gov), Health, Construction (constr), Transportation (transp), etc.

Parking Data Structure

Parking supply is categorized by:

  • Duration: Hourly (h), Daily (d), Monthly (m)
  • Destination: Same MAZ (sam) vs Other MAZs (oth)
  • Costs: Average hourly, daily, and monthly parking costs

Density Classifications

Several attributes use binned density measures (1-3 scale):

  • IntDenBin: Intersection density (walkability proxy)
  • EmpDenBin: Employment density (job accessibility)
  • DUDenBin: Household density (residential intensity)

Validation Rules
  • All geographic IDs must be unique and non-null
  • Employment and demographic counts must be non-negative integers
  • Parking costs and areas must be non-negative floats
  • Density measures include both raw values and binned classifications
Example
import pandas as pd
from tm2py.data_models.maz_data import MAZData

# Validate MAZ data
maz_df = pd.read_csv('maz_land_use.csv')
validated_data = MAZData.validate(maz_df)

# Access employment totals
total_jobs = validated_data['emp_total'].sum()
retail_jobs = validated_data['ret_loc'].sum() + validated_data['ret_reg'].sum()

# Analyze density patterns
high_density_mazs = validated_data[validated_data['EmpDenBin'] == 3]
walkable_areas = validated_data[validated_data['IntDenBin'] >= 2]
NodeIDCrosswalk

Node ID to Sequential ID Mapping for Transportation Model Geography.

This class validates the crosswalk table that maps original model node IDs to sequential zone identifiers used throughout the TM2.0 transportation model. It ensures consistent geographic referencing across MAZ, TAZ, and external zones.

Purpose

The transportation model requires sequential zone IDs (starting from 1) for efficient matrix operations and memory management, while the underlying network model uses arbitrary node IDs. This crosswalk maintains the mapping between these two ID systems.

Geographic Types
  • MAZ (Micro-Analysis Zone): Finest resolution zones for land use data
  • TAZ (Traffic Analysis Zone): Aggregated zones for trip matrices
  • EXT (External Zone): Special zones for external traffic flows
ID System Design
  • Original model_node_id: Arbitrary integers from network model (can have gaps)
  • Sequential IDs: Continuous 1-based indexing for each zone type
  • Zero values: Indicate the node doesn’t belong to that zone type
Usage in Model

This crosswalk is used to:

  1. Convert between original and sequential IDs during data loading
  2. Validate that MAZ/TAZ relationships are consistent
  3. Ensure all required zones have proper sequential numbering
  4. Support matrix operations that require continuous indexing

Data Validation
  • All model_node_id values must be unique and non-null
  • Sequential IDs must be non-negative integers
  • Zero values allowed to indicate non-membership in zone type
  • Total count of non-zero sequential IDs should match expected zone counts
Attributes

model_node_id : int
    Original node identifier from the transportation network model. Must be unique across all geographic zone types.
MAZSEQ : int
    Sequential MAZ identifier (1-based). Zero if node is not a MAZ. Used for MAZ-level land use data indexing and trip generation.
TAZSEQ : int
    Sequential TAZ identifier (1-based). Zero if node is not a TAZ. Used for trip matrix indexing and zone-to-zone travel calculations.
EXTSEQ : int
    Sequential external zone identifier (1-based). Zero if node is not external. Used for modeling trips entering/exiting the model region.

Example
import pandas as pd
from pathlib import Path
from tm2py.data_models.maz_data import NodeIDCrosswalk, create_sequential_index

# Create crosswalk from the node ID file produced by the network build
xwalk_file = Path('model_to_emme_node_id.csv')
crosswalk = create_sequential_index(xwalk_file)

# Validate the crosswalk
validated = NodeIDCrosswalk.validate(crosswalk)

# Use for ID conversion
maz_sequential = validated.set_index('model_node_id')['MAZSEQ']
original_to_seq = dict(zip(validated['model_node_id'], validated['MAZSEQ']))
Functions
create_sequential_index
create_sequential_index(model_to_emme_node_id_xwalk: Path) -> DataFrame[NodeIDCrosswalk]

Create stable sequential IDs for MAZ, TAZ, and external zones.

This function generates a crosswalk table that maps original model node IDs to sequential zone identifiers (1-based indexing) for efficient matrix operations in the transportation model. It uses predefined node lists to categorize zones into MAZ, TAZ, and external types.

The function handles:

  • TAZ nodes: Traffic Analysis Zones for trip matrix operations
  • MAZ nodes: Micro-Analysis Zones including disconnected zones
  • External nodes: Boundary zones for external trips

Sequential ID Assignment
  • TAZ: Sequential numbering based on sort order of node IDs
  • MAZ: Includes both connected network nodes and disconnected zones
  • EXT: External zones for trips entering/exiting the model region
  • Zero values indicate the node doesn’t belong to that zone type
Node List Sources

The function uses module-level constants:

  • taz_N_list: Predefined TAZ node ID ranges
  • maz_N_list: Predefined MAZ node ID ranges
  • external_N_list: External zone node ID range (900001-999999)
  • disconnected_maz_N_list: Special disconnected MAZ nodes

Parameters

model_to_emme_node_id_xwalk : pathlib.Path
    Path to CSV file containing the crosswalk between model node IDs and Emme node IDs, created during the network build process. Must contain columns: 'emme_node_id', 'model_node_id'

Returns

DataFrame[NodeIDCrosswalk]
    Validated crosswalk with columns:
    - model_node_id: Original network node identifier
    - MAZSEQ: Sequential MAZ ID (0 if not a MAZ)
    - TAZSEQ: Sequential TAZ ID (0 if not a TAZ)
    - EXTSEQ: Sequential external zone ID (0 if not external)

Raises

ValueError
    If required columns are missing from the input crosswalk file

Example
from tm2py.data_models.maz_data import create_sequential_index
from pathlib import Path

# Create crosswalk from Lasso network build output
xwalk_file = Path('model_to_emme_node_id.csv')
crosswalk = create_sequential_index(xwalk_file)

# Use crosswalk for ID conversion
maz_lookup = crosswalk.set_index('model_node_id')['MAZSEQ']
sequential_maz_id = maz_lookup[original_node_id]
See Also

NodeIDCrosswalk : The validation schema for the output crosswalk
validate_sequential_id : Function to validate MAZ data against this crosswalk

Source code in tm2py/data_models/maz_data.py
def create_sequential_index(
    model_to_emme_node_id_xwalk: pathlib.Path
) -> DataFrame[NodeIDCrosswalk]:
    """Create stable sequential IDs for MAZ, TAZ, and external zones.

    This function generates a crosswalk table that maps original model node IDs 
    to sequential zone identifiers (1-based indexing) for efficient matrix operations
    in the transportation model. It uses predefined node lists to categorize zones
    into MAZ, TAZ, and external types.

    The function handles:
    - TAZ nodes: Traffic Analysis Zones for trip matrix operations
    - MAZ nodes: Micro-Analysis Zones including disconnected zones
    - External nodes: Boundary zones for external trips

    Sequential ID Assignment
    ------------------------
    - TAZ: Sequential numbering based on sort order of node IDs
    - MAZ: Includes both connected network nodes and disconnected zones
    - EXT: External zones for trips entering/exiting the model region
    - Zero values indicate the node doesn't belong to that zone type

    Node List Sources
    -----------------
    The function uses module-level constants:
    - taz_N_list: Predefined TAZ node ID ranges
    - maz_N_list: Predefined MAZ node ID ranges  
    - external_N_list: External zone node ID range (900001-999999)
    - disconnected_maz_N_list: Special disconnected MAZ nodes

    Parameters
    ----------
    model_to_emme_node_id_xwalk : pathlib.Path
        Path to CSV file containing the crosswalk between model node IDs
        and Emme node IDs, created during the network build process.
        Must contain columns: 'emme_node_id', 'model_node_id'

    Returns
    -------
    DataFrame[NodeIDCrosswalk]
        Validated crosswalk with columns:
        - model_node_id: Original network node identifier
        - MAZSEQ: Sequential MAZ ID (0 if not a MAZ)
        - TAZSEQ: Sequential TAZ ID (0 if not a TAZ) 
        - EXTSEQ: Sequential external zone ID (0 if not external)

    Raises
    ------
    ValueError
        If required columns are missing from the input crosswalk file

    Example
    -------
    ```python
    from tm2py.data_models.maz_data import create_sequential_index
    from pathlib import Path

    # Create crosswalk from Lasso network build output
    xwalk_file = Path('model_to_emme_node_id.csv')
    crosswalk = create_sequential_index(xwalk_file)

    # Use crosswalk for ID conversion
    maz_lookup = crosswalk.set_index('model_node_id')['MAZSEQ']
    sequential_maz_id = maz_lookup[original_node_id]
    ```

    See Also
    --------
    NodeIDCrosswalk : The validation schema for the output crosswalk
    validate_sequential_id : Function to validate MAZ data against this crosswalk
    """
    node_id_df = pd.read_csv(model_to_emme_node_id_xwalk)
    required_cols = ["emme_node_id", "model_node_id"]
    missing_cols = [c for c in required_cols if c not in node_id_df.columns]
    if missing_cols:
        raise ValueError(f"Missing columns in model_to_emme_node_id_xwalk: {missing_cols}")
    # taz node
    taz_node_id_df = (
        node_id_df[node_id_df["model_node_id"].isin(taz_N_list)]
        .copy()
        .rename(columns={"emme_node_id":"TAZSEQ"})
    )
    # external taz node
    ext_node_id_df = (
        node_id_df[node_id_df["model_node_id"].isin(external_N_list)]
        .copy()
        .rename(columns={"emme_node_id":"EXTSEQ"})
    )
    # maz node, including the five disconnected mazs
    maz_node_id_df = (
        node_id_df[node_id_df["model_node_id"].isin(maz_N_list)]
        .copy()
        .rename(columns={"emme_node_id":"MAZSEQ"})
    )
    maz_node_id_df = pd.concat(
        [maz_node_id_df[["model_node_id"]],
        pd.DataFrame({"model_node_id":disconnected_maz_N_list})]
    )
    maz_node_id_df = (
        maz_node_id_df
        .sort_values(by="model_node_id")
        .reset_index(drop=True)
    )
    maz_node_id_df["MAZSEQ"] = maz_node_id_df.index + 1

    out = (
        taz_node_id_df.merge(maz_node_id_df, on="model_node_id", how="outer")
        .merge(ext_node_id_df, on="model_node_id", how="outer")
        .fillna(0)
        .astype(int)
    )
    out = out[["model_node_id"] + [c for c in out.columns if c!="model_node_id"]]

    return NodeIDCrosswalk.validate(out, lazy=True)
validate_sequential_id
validate_sequential_id(maz_data_df: DataFrame, node_seq_id_xwalk: DataFrame[NodeIDCrosswalk]) -> None

Validate consistency between MAZ data and node ID crosswalk.

This function ensures that the sequential MAZ and TAZ IDs in the land use data file match the expected values from the node ID crosswalk. This validation is critical for maintaining geographic consistency across model components.

The validation checks that:

  • Each MAZ_ORIGINAL in the data maps to the correct MAZ sequential ID
  • Each TAZ_ORIGINAL in the data maps to the correct TAZ sequential ID
  • No mismatches exist that would cause geographic referencing errors

Validation Process
  1. Create lookup from original node IDs to sequential IDs
  2. Map MAZ_ORIGINAL and TAZ_ORIGINAL to expected sequential values
  3. Compare with actual MAZ and TAZ columns in the data
  4. Report any mismatches that indicate data inconsistency
Use Case

This function is essential when loading MAZ data from external sources to ensure the geographic identifiers are properly aligned with the transportation model’s internal numbering system.

Parameters

maz_data_df : pd.DataFrame
    MAZ land use data containing columns:
    - MAZ: Sequential MAZ identifier
    - TAZ: Sequential TAZ identifier
    - MAZ_ORIGINAL: Original MAZ node ID
    - TAZ_ORIGINAL: Original TAZ node ID
node_seq_id_xwalk : DataFrame[NodeIDCrosswalk]
    Validated crosswalk mapping original node IDs to sequential IDs. Created by the create_sequential_index function.

Returns

None
    Function validates in place and raises an exception on failure

Raises

ValueError
    If any MAZ or TAZ sequential IDs don’t match the crosswalk expectations. The error message includes the count of mismatched zones for debugging.

Example
import pandas as pd
from tm2py.data_models.maz_data import validate_sequential_id

# Load data and crosswalk
maz_df = pd.read_csv('maz_land_use.csv')
crosswalk = create_sequential_index(node_xwalk_file)

# Validate consistency  
try:
    validate_sequential_id(maz_df, crosswalk)
    print("MAZ data geographic IDs validated successfully")
except ValueError as e:
    print(f"Geographic ID mismatch: {e}")
See Also

create_sequential_index : Creates the required crosswalk
load_maz_data : Higher-level function that includes this validation
NodeIDCrosswalk : Schema for the crosswalk data

Source code in tm2py/data_models/maz_data.py
def validate_sequential_id(
    maz_data_df: pd.DataFrame,
    node_seq_id_xwalk: DataFrame[NodeIDCrosswalk]
) -> None:
    """Validate consistency between MAZ data and node ID crosswalk.

    This function ensures that the sequential MAZ and TAZ IDs in the land use
    data file match the expected values from the node ID crosswalk. This validation
    is critical for maintaining geographic consistency across model components.

    The validation checks that:
    - Each MAZ_ORIGINAL in the data maps to the correct MAZ sequential ID
    - Each TAZ_ORIGINAL in the data maps to the correct TAZ sequential ID  
    - No mismatches exist that would cause geographic referencing errors

    Validation Process
    ------------------
    1. Create lookup from original node IDs to sequential IDs
    2. Map MAZ_ORIGINAL and TAZ_ORIGINAL to expected sequential values
    3. Compare with actual MAZ and TAZ columns in the data
    4. Report any mismatches that indicate data inconsistency

    Use Case
    --------
    This function is essential when loading MAZ data from external sources
    to ensure the geographic identifiers are properly aligned with the 
    transportation model's internal numbering system.

    Parameters
    ----------
    maz_data_df : pd.DataFrame
        MAZ land use data containing columns:
        - MAZ: Sequential MAZ identifier  
        - TAZ: Sequential TAZ identifier
        - MAZ_ORIGINAL: Original MAZ node ID
        - TAZ_ORIGINAL: Original TAZ node ID
    node_seq_id_xwalk : DataFrame[NodeIDCrosswalk]
        Validated crosswalk mapping original node IDs to sequential IDs.
        Created by create_sequential_index function.

    Returns
    -------
    None
        Function validates in-place and raises exception on failure

    Raises  
    ------
    ValueError
        If any MAZ or TAZ sequential IDs don't match the crosswalk expectations.
        Error message includes count of mismatched zones for debugging.

    Example
    -------
    ```python
    import pandas as pd
    from tm2py.data_models.maz_data import validate_sequential_id

    # Load data and crosswalk
    maz_df = pd.read_csv('maz_land_use.csv')
    crosswalk = create_sequential_index(node_xwalk_file)

    # Validate consistency  
    try:
        validate_sequential_id(maz_df, crosswalk)
        print("MAZ data geographic IDs validated successfully")
    except ValueError as e:
        print(f"Geographic ID mismatch: {e}")
    ```

    See Also
    --------
    create_sequential_index : Creates the required crosswalk 
    load_maz_data : Higher-level function that includes this validation
    NodeIDCrosswalk : Schema for the crosswalk data
    """
    xwalk = node_seq_id_xwalk.set_index("model_node_id")
    maz = maz_data_df["MAZ_ORIGINAL"].map(xwalk["MAZSEQ"])
    taz = maz_data_df["TAZ_ORIGINAL"].map(xwalk["TAZSEQ"])

    bad_maz = maz_data_df.index[maz_data_df["MAZ"]!=maz]
    bad_taz = maz_data_df.index[maz_data_df["TAZ"]!=taz]

    if len(bad_maz)>0 or len(bad_taz)>0:
        raise ValueError(
            f"Node ID crosswalk mismatch: {len(bad_maz)} MAZ, {len(bad_taz)} TAZ"
        )
load_maz_data
load_maz_data(maz_data_file: Path, node_seq_id_xwalk: DataFrame[NodeIDCrosswalk]) -> DataFrame[MAZData]

Load and validate MAZ land use data for transportation modeling.

This is the main function for loading MAZ (Micro-Analysis Zone) land use data into the TM2.0 transportation model. It performs comprehensive validation to ensure data quality and geographic consistency before the data is used in trip generation and other modeling components.

The function performs two levels of validation:

  1. Geographic ID validation against the node crosswalk
  2. Schema validation against the MAZData model specification

Validation Steps
  1. Load CSV data from the specified file path
  2. Validate MAZ/TAZ sequential IDs match the crosswalk expectations
  3. Validate all data fields against MAZData schema constraints
  4. Return validated DataFrame ready for modeling use
Data Requirements

The input CSV must contain all required MAZData columns including:

  • Geographic identifiers (MAZ, TAZ, original node IDs)
  • Employment by sector (21 detailed categories)
  • Demographics (households, population, school enrollment)
  • Parking supply (hourly, daily, monthly by destination type)
  • Density measures and accessibility indicators

Error Handling

The function will raise descriptive errors for common data issues:

  • Missing or malformed CSV files
  • Geographic ID mismatches with the crosswalk
  • Schema violations (wrong data types, negative values, etc.)
  • Missing required columns or invalid data ranges

Parameters

maz_data_file : pathlib.Path
    Path to CSV file containing MAZ land use data. Must include all required columns as defined in the MAZData schema.
node_seq_id_xwalk : DataFrame[NodeIDCrosswalk]
    Validated crosswalk mapping original node IDs to sequential zone IDs. Created by the create_sequential_index function.

Returns

DataFrame[MAZData]
    Validated MAZ land use data conforming to the MAZData schema. All geographic IDs verified against the crosswalk and data types validated. Ready for use in trip generation and accessibility calculations.

Raises

FileNotFoundError
    If the specified maz_data_file does not exist
ValueError
    If geographic IDs don’t match the crosswalk or schema validation fails
pd.errors.ParserError
    If the CSV file is malformed or unreadable

Example
from pathlib import Path
from tm2py.data_models.maz_data import create_sequential_index, load_maz_data

# Create crosswalk and load data
xwalk_file = Path('model_to_emme_node_id.csv')
maz_file = Path('maz_land_use_data.csv')

crosswalk = create_sequential_index(xwalk_file) 
maz_data = load_maz_data(maz_file, crosswalk)

# Use validated data
total_population = maz_data['POP'].sum()
employment_by_maz = maz_data['emp_total']
See Also

MAZData : The validation schema applied to the loaded data
create_sequential_index : Function to create the required crosswalk
validate_sequential_id : Geographic ID validation performed internally

Source code in tm2py/data_models/maz_data.py
def load_maz_data(
    maz_data_file: pathlib.Path, 
    node_seq_id_xwalk: DataFrame[NodeIDCrosswalk]
) -> DataFrame[MAZData]:
    """Load and validate MAZ land use data for transportation modeling.

    This is the main function for loading MAZ (Micro-Analysis Zone) land use data
    into the TM2.0 transportation model. It performs comprehensive validation to
    ensure data quality and geographic consistency before the data is used in
    trip generation and other modeling components.

    The function performs two levels of validation:
    1. Geographic ID validation against the node crosswalk
    2. Schema validation against the MAZData model specification

    Validation Steps
    ----------------
    1. Load CSV data from the specified file path
    2. Validate MAZ/TAZ sequential IDs match the crosswalk expectations
    3. Validate all data fields against MAZData schema constraints
    4. Return validated DataFrame ready for modeling use

    Data Requirements
    -----------------
    The input CSV must contain all required MAZData columns including:
    - Geographic identifiers (MAZ, TAZ, original node IDs)  
    - Employment by sector (21 detailed categories)
    - Demographics (households, population, school enrollment)
    - Parking supply (hourly, daily, monthly by destination type)
    - Density measures and accessibility indicators

    Error Handling
    --------------
    The function will raise descriptive errors for common data issues:
    - Missing or malformed CSV files
    - Geographic ID mismatches with the crosswalk
    - Schema violations (wrong data types, negative values, etc.)
    - Missing required columns or invalid data ranges

    Parameters
    ----------
    maz_data_file : pathlib.Path
        Path to CSV file containing MAZ land use data.
        Must include all required columns as defined in MAZData schema.
    node_seq_id_xwalk : DataFrame[NodeIDCrosswalk]  
        Validated crosswalk mapping original node IDs to sequential zone IDs.
        Created by create_sequential_index function.

    Returns
    -------
    DataFrame[MAZData]
        Validated MAZ land use data conforming to the MAZData schema.
        All geographic IDs verified against crosswalk and data types validated.
        Ready for use in trip generation and accessibility calculations.

    Raises
    ------
    FileNotFoundError
        If the specified maz_data_file does not exist
    ValueError
        If geographic IDs don't match the crosswalk or schema validation fails
    pd.errors.ParserError
        If the CSV file is malformed or unreadable

    Example
    -------
    ```python
    from pathlib import Path
    from tm2py.data_models.maz_data import create_sequential_index, load_maz_data

    # Create crosswalk and load data
    xwalk_file = Path('model_to_emme_node_id.csv')
    maz_file = Path('maz_land_use_data.csv')

    crosswalk = create_sequential_index(xwalk_file) 
    maz_data = load_maz_data(maz_file, crosswalk)

    # Use validated data
    total_population = maz_data['POP'].sum()
    employment_by_maz = maz_data['emp_total']
    ```

    See Also
    --------
    MAZData : The validation schema applied to the loaded data
    create_sequential_index : Function to create the required crosswalk
    validate_sequential_id : Geographic ID validation performed internally
    """

    maz_data_df = pd.read_csv(maz_data_file)
    validate_sequential_id(maz_data_df, node_seq_id_xwalk)

    return MAZData.validate(maz_data_df, lazy=True)

🛠️ Utilities & Tools

Supporting utilities for logging, file management, and common operations.

Logging System

Comprehensive logging infrastructure for model run monitoring and debugging.

Log Configuration

Logging behavior is configurable through the model configuration files. See the Configuration Guide for details on setting log levels and output destinations.

tm2py.logger

Logging module.

Note the general definition of logging levels as used in tm2py:

  • highly detailed level information which would rarely be of interest except for detailed debugging by a developer

Classes
BaseLogger
BaseLogger(log_formatters, log_cache_file)

Base class for logging. Not to be constructed directly.

Source code in tm2py/logger.py
def __init__(self, log_formatters, log_cache_file):
    self._indentation = 0
    self._log_cache = LogCache(log_cache_file)
    self._log_formatters = log_formatters + [self._log_cache]

    # these will be set later via set_emme_manager()
    self._emme_manager = None
    self._use_emme_logbook = False

    for log_formatter in self._log_formatters:
        if hasattr(log_formatter, "open"):
            log_formatter.open()
Attributes
debug_enabled property
debug_enabled: bool

Returns True if DEBUG is currently filtered for display or print to file.

Can be used to enable / disable debug logging which may have a performance impact.

trace_enabled property
trace_enabled: bool

Returns True if TRACE is currently filtered for display or print to file.

Can be used to enable / disable trace logging which may have a performance impact.
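For example, an expensive diagnostic report can be guarded behind debug_enabled so it is only built when DEBUG messages will actually be emitted. A minimal sketch, assuming a Logger has already been initialized for the run; the summary_stats values are purely illustrative:

from tm2py.logger import Logger

logger = Logger.get_logger()
summary_stats = {"total_trips": 1234567, "vmt_per_capita": 21.4}  # hypothetical values for illustration

if logger.debug_enabled:
    # only assemble the (potentially slow) report when DEBUG is filtered in
    report = "\n".join(f"{name}: {value}" for name, value in summary_stats.items())
    logger.debug(report)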

Functions
log
log(text: str, level: LogLevel = 'INFO', indent: bool = True)

Log text to file and display depending upon log level and config.

Parameters:

Name Type Description Default
text str

text to log

required
level str

logging level

'INFO'
indent bool

if true indent text based on the number of open contexts

True
Source code in tm2py/logger.py
def log(self, text: str, level: LogLevel = "INFO", indent: bool = True):
    """Log text to file and display depending upon log level and config.

    Args:
        text (str): text to log
        level (str): logging level
        indent (bool): if true indent text based on the number of open contexts
    """
    timestamp = datetime.now().strftime("%d-%b-%Y (%H:%M:%S) ")
    for log_formatter in self._log_formatters:
        log_formatter.log(text, LEVELS_STR_TO_INT[level], indent, timestamp)
    if self._use_emme_logbook:
        self._emme_manager.logbook_write(text)
trace
trace(text: str, indent: bool = False)

Log text with level=TRACE.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def trace(self, text: str, indent: bool = False):
    """Log text with level=TRACE.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "TRACE", indent)
debug
debug(text: str, indent: bool = False)

Log text with level=DEBUG.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def debug(self, text: str, indent: bool = False):
    """Log text with level=DEBUG.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "DEBUG", indent)
detail
detail(text: str, indent: bool = False)

Log text with level=DETAIL.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def detail(self, text: str, indent: bool = False):
    """Log text with level=DETAIL.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "DETAIL", indent)
info
info(text: str, indent: bool = False)

Log text with level=INFO.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def info(self, text: str, indent: bool = False):
    """Log text with level=INFO.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "INFO", indent)
status
status(text: str, indent: bool = False)

Log text with level=STATUS.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def status(self, text: str, indent: bool = False):
    """Log text with level=STATUS.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "STATUS", indent)
warn
warn(text: str, indent: bool = False)

Log text with level=WARN.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def warn(self, text: str, indent: bool = False):
    """Log text with level=WARN.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "WARN", indent)
error
error(text: str, indent: bool = False)

Log text with level=ERROR.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def error(self, text: str, indent: bool = False):
    """Log text with level=ERROR.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "ERROR", indent)
fatal
fatal(text: str, indent: bool = False)

Log text with level=FATAL.

Parameters:

Name Type Description Default
text str

text to log

required
indent bool

if true indent text based on the number of open contexts

False
Source code in tm2py/logger.py
def fatal(self, text: str, indent: bool = False):
    """Log text with level=FATAL.

    Args:
        text (str): text to log
        indent (bool): if true indent text based on the number of open contexts
    """
    self.log(text, "FATAL", indent)
log_time
log_time(text: str, level=1, indent=False)

Log message with timestamp

Source code in tm2py/logger.py
def log_time(self, text: str, level=1, indent=False):
    """Log message with timestamp"""
    timestamp = datetime.now().strftime("%d-%b-%Y (%H:%M:%S)")
    if indent:
        indent = "  " * self._indentation
        self.log(f"{timestamp}: {indent}{text}", level)
    else:
        self.log(f"{timestamp}: {text}", level)
log_start_end
log_start_end(text: str, level: LogLevel = 'STATUS')

Use with ‘with’ statement to log the start and end time with message.

If using the Emme logbook (config.logging.use_emme_logbook is True), will also create a logbook nest in the tree view using logbook_trace.

Parameters:

Name Type Description Default
text str

message text

required
level str

logging level

'STATUS'
Source code in tm2py/logger.py
@_context
def log_start_end(self, text: str, level: LogLevel = "STATUS"):
    """Use with 'with' statement to log the start and end time with message.

    If using the Emme logbook (config.logging.use_emme_logbook is True), will
    also create a logbook nest in the tree view using logbook_trace.

    Args:
        text (str): message text
        level (str): logging level
    """
    with self._skip_emme_logging():
        self._log_start(text, level)
    if self._use_emme_logbook:
        with self._emme_manager.logbook_trace(text):
            yield
    else:
        yield
    with self._skip_emme_logging():
        self._log_end(text, level)
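A minimal usage sketch of the context manager described above (the step names are illustrative, not a prescribed workflow):

from tm2py.logger import Logger

logger = Logger.get_logger()
with logger.log_start_end("Highway assignment", level="STATUS"):
    # messages inside the block are indented one level and, if the Emme logbook
    # is enabled, nested under a logbook trace named "Highway assignment"
    logger.log("building demand matrices")
    logger.log("running assignment")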
log_dict
log_dict(mapping: dict, level: LogLevel = 'DEBUG')

Format dictionary to string and log as text.

Source code in tm2py/logger.py
def log_dict(self, mapping: dict, level: LogLevel = "DEBUG"):
    """Format dictionary to string and log as text."""
    self.log(pformat(mapping, indent=1, width=120), level)
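For example, a settings mapping can be dumped at DEBUG level (the keys shown are illustrative):

from tm2py.logger import Logger

logger = Logger.get_logger()
# pformat renders the mapping as indented text before it is logged
logger.log_dict({"iteration": 2, "component": "highway", "relative_gap": 0.0005}, level="DEBUG")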
clear_msg_cache
clear_msg_cache()

Clear all log messages from cache.

Source code in tm2py/logger.py
def clear_msg_cache(self):
    """Clear all log messages from cache."""
    self._log_cache.clear()
Logger
Logger(controller: RunController)

Bases: BaseLogger

Logging of message text for display, text file, and the Emme logbook, as well as notification to Slack.

The log message level can be one of: TRACE, DEBUG, DETAIL, INFO, STATUS, WARN, ERROR, FATAL, which will filter all messages of that severity and higher. See the module note on the use of descriptive level names.

    logger.log("a message")
    with logger.log_start_end("Running a set of steps"):
        logger.log("Message with timestamp")
        logger.log("A debug message", level="DEBUG")
        # equivalently, use the .debug:
        logger.debug("Another debug message")
        if logger.debug_enabled:
            # only generate this report if logging DEBUG
            logger.log("A debug report that takes time to produce", level="DEBUG")
    logger.notify_slack("A slack message")

Methods can also be decorated with LogStartEnd (see class for more).

Note that the Logger should only be initialized once per model run. In places where the controller is not available, the last Logger initialized can be obtained from the class method get_logger::

logger = Logger.get_logger()
Internal properties

Constructor for Logger object.

Parameters:

Name Type Description Default
controller RunController

Associated RunController instance.

required
Source code in tm2py/logger.py
def __init__(self, controller: RunController):
    """Constructor for Logger object.

    Args:
        controller (RunController): Associated RunController instance.
    """
    self.controller = controller

    log_config = controller.config.logging
    iter_component_level = log_config.iter_component_level or []
    iter_component_level = dict(
        ((i, c), LEVELS_STR_TO_INT[l]) for i, c, l in iter_component_level
    )
    display_logger = LogDisplay(LEVELS_STR_TO_INT[log_config.display_level])
    run_log_formatter = LogFile(
        LEVELS_STR_TO_INT[log_config.run_file_level],
        os.path.join(controller.run_dir, log_config.run_file_path),
    )
    standard_log_formatter = LogFileLevelOverride(
        LEVELS_STR_TO_INT[log_config.log_file_level],
        os.path.join(controller.run_dir, log_config.log_file_path),
        iter_component_level,
        controller,
    )
    log_formatters = [display_logger, run_log_formatter, standard_log_formatter]
    log_cache_file = os.path.join(
        controller.run_dir, log_config.log_on_error_file_path
    )
    # set this later via set_emme_manager()
    emme_manager = None
    super().__init__(log_formatters, log_cache_file)

    self._slack_notifier = SlackNotifier(self)
Functions
get_logger classmethod
get_logger()

Return the last initialized logger object.

Source code in tm2py/logger.py
@classmethod
def get_logger(cls):
    """Return the last initialized logger object."""
    return cls._instance
notify_slack
notify_slack(text: str)

Send message to slack if enabled by config.

Parameters:

Name Type Description Default
text str

text to send to slack

required
Source code in tm2py/logger.py
def notify_slack(self, text: str):
    """Send message to slack if enabled by config.

    Args:
        text (str): text to send to slack
    """
    if self.controller.config.logging.notify_slack:
        self._slack_notifier.post_message(text)
ProcessLogger
ProcessLogger(run_log_file_path, log_on_error_file_path, emme_manager)

Bases: BaseLogger

Logger for running in separate process with no RunController.

Constructor for Logger object.

Parameters:

Name Type Description Default
run_log_file_path
required
log_on_error_file_path
required
emme_manager
required
Source code in tm2py/logger.py
def __init__(self, run_log_file_path, log_on_error_file_path, emme_manager):
    """Constructor for Logger object.

    Args:
        run_log_file_path ():
        log_on_error_file_path ():
        emme_manager ():
    """
    run_log_formatter = LogFile(LEVELS_STR_TO_INT["INFO"], run_log_file_path)
    log_formatters = [run_log_formatter]
    super().__init__(log_formatters, log_on_error_file_path, emme_manager)
LogFormatter
LogFormatter(level: int)

Base class for recording text to log.

Properties

Constructor for LogFormatter.

Parameters:

Name Type Description Default
level int

log filter level (as an int)

required
Source code in tm2py/logger.py
def __init__(self, level: int):
    """Constructor for LogFormatter.

    Args:
        level (int): log filter level (as an int)
    """
    self._level = level
    self.indent = 0
Attributes
level property
level

The current filter level for the LogFormatter.

Functions
increase_indent
increase_indent(level: int)

Increase current indent if the log level is filtered in.

Source code in tm2py/logger.py
def increase_indent(self, level: int):
    """Increase current indent if the log level is filtered in."""
    if level >= self.level:
        self.indent += 1
decrease_indent
decrease_indent(level: int)

Decrease current indent if the log level is filtered in.

Source code in tm2py/logger.py
def decrease_indent(self, level: int):
    """Decrease current indent if the log level is filtered in."""
    if level >= self.level:
        self.indent -= 1
log abstractmethod
log(text: str, level: int, indent: bool, timestamp: Union[str, None])

Format and log message text.

Parameters:

Name Type Description Default
text str

text to log

required
level int

logging level

required
indent bool

if true indent text based on the number of open contexts

required
timestamp str

formatted datetime as a string or None

required
Source code in tm2py/logger.py
@abstractmethod
def log(
    self,
    text: str,
    level: int,
    indent: bool,
    timestamp: Union[str, None],
):
    """Format and log message text.

    Args:
        text (str): text to log
        level (int): logging level
        indent (bool): if true indent text based on the number of open contexts
        timestamp (str): formatted datetime as a string or None
    """
LogFile
LogFile(level: int, file_path: str)

Bases: LogFormatter

Format and write log text to file.

Properties
  • level: the log level as an int
  • file_path: the absolute file path to write to

Constructor for LogFile object.

Parameters:

Name Type Description Default
level int

the log level as an int.

required
file_path str

the absolute file path to write to.

required
Source code in tm2py/logger.py
def __init__(self, level: int, file_path: str):
    """Constructor for LogFile object.

    Args:
        level (int): the log level as an int.
        file_path (str): the absolute file path to write to.
    """
    super().__init__(level)
    self.file_path = file_path
    self.log_file = None
Functions
open
open()

Open the log file for writing.

Source code in tm2py/logger.py
def open(self):
    """Open the log file for writing."""
    self.log_file = open(self.file_path, "w", encoding="utf8")
log
log(text: str, level: int, indent: bool, timestamp: Union[str, None])

Log text to file and display depending upon log level and config.

Note that log will not write to file until opened with a context.

Parameters:

Name Type Description Default
text str

text to log

required
level int

logging level

required
indent bool

if true indent text based on the number of open contexts

required
timestamp str

formatted datetime as a string or None for timestamp

required
Source code in tm2py/logger.py
def log(self, text: str, level: int, indent: bool, timestamp: Union[str, None]):
    """Log text to file and display depending upon log level and config.

    Note that log will not write to file until opened with a context.

    Args:
        text (str): text to log
        level (int): logging level
        indent (bool): if true indent text based on the number of open contexts
        timestamp (str): formatted datetime as a string or None for timestamp
    """
    if level >= self.level and self.log_file is not None:
        text = self._format_text(text, level, indent, timestamp)
        self.log_file.write(f"{text}\n")
        self.log_file.flush()
close
close()

Close the open log file.

Source code in tm2py/logger.py
def close(self):
    """Close the open log file."""
    self.log_file.close()
    self.log_file = None
LogFileLevelOverride
LogFileLevelOverride(level, file_path, iter_component_level, controller)

Bases: LogFile

Format and write log text to file.

Properties
  • level: the log level as an int
  • file_path: the absolute file path to write to
  • iter_component_level: TODO
  • controller: TODO

Constructor for LogFileLevelOverride object.

Parameters:

Name Type Description Default
level _type_

TODO

required
file_path _type_

TODO

required
iter_component_level _type_

TODO

required
controller _type_

TODO

required
Source code in tm2py/logger.py
def __init__(self, level, file_path, iter_component_level, controller):
    """Constructor for LogFileLevelOverride object.

    Args:
        level (_type_): TODO
        file_path (_type_): TODO
        iter_component_level (_type_): TODO
        controller (_type_): TODO
    """
    super().__init__(level, file_path)
    self.iter_component_level = iter_component_level
    self.controller = controller
Attributes
level property
level

Current log level with iter_component_level config override.

LogDisplay
LogDisplay(level: int)

Bases: LogFormatter

Format and print log text to console / Notebook.

Properties
  • level: the log level as an int
Source code in tm2py/logger.py
def __init__(self, level: int):
    """Constructor for LogFormatter.

    Args:
        level (int): log filter level (as an int)
    """
    self._level = level
    self.indent = 0
Functions
log
log(text: str, level: int, indent: bool, timestamp: Union[str, None])

Format and display text on screen (print).

Parameters:

Name Type Description Default
text str

text to log

required
level int

logging level

required
indent bool

if true indent text based on the number of open contexts

required
timestamp str

formatted datetime as a string or None

required
Source code in tm2py/logger.py
def log(self, text: str, level: int, indent: bool, timestamp: Union[str, None]):
    """Format and display text on screen (print).

    Args:
        text (str): text to log
        level (int): logging level
        indent (bool): if true indent text based on the number of open contexts
        timestamp (str): formatted datetime as a string or None
    """
    if level >= self.level:
        print(self._format_text(text, level, indent, timestamp))
LogCache
LogCache(file_path: str)

Bases: LogFormatter

Caches all messages for later recording in on error logfile.

Properties
  • file_path: the absolute file path to write to

Constructor for LogCache object.

Parameters:

Name Type Description Default
file_path str

the absolute file path to write to.

required
Source code in tm2py/logger.py
def __init__(self, file_path: str):
    """Constructor for LogCache object.

    Args:
        file_path (str): the absolute file path to write to.
    """
    super().__init__(level=0)
    self.file_path = file_path
    self._msg_cache = []
Functions
open
open()

Initialize log file (remove).

Source code in tm2py/logger.py
def open(self):
    """Initialize log file (remove)."""
    if os.path.exists(self.file_path):
        os.remove(self.file_path)
log
log(text: str, level: int, indent: bool, timestamp: Union[str, None])

Format and store text for later recording.

Parameters:

Name Type Description Default
text str

text to log

required
level int

logging level

required
indent bool

if true indent text based on the number of open contexts

required
timestamp str

formatted datetime as a string or None

required
Source code in tm2py/logger.py
def log(self, text: str, level: int, indent: bool, timestamp: Union[str, None]):
    """Format and store text for later recording.

    Args:
        text (str): text to log
        level (int): logging level
        indent (bool): if true indent text based on the number of open contexts
        timestamp (str): formatted datetime as a string or None
    """
    self._msg_cache.append(
        (level, self._format_text(text, level, indent, timestamp))
    )
write_cache
write_cache()

Write all cached messages.

Source code in tm2py/logger.py
def write_cache(self):
    """Write all cached messages."""
    with open(self.file_path, "w", encoding="utf8") as file:
        for level, text in self._msg_cache:
            file.write(f"{LEVELS_INT_TO_STR[level]:6} {text}\n")
    self.clear()
clear
clear()

Clear message cache.

Source code in tm2py/logger.py
def clear(self):
    """Clear message cache."""
    self._msg_cache = []
LogStartEnd
LogStartEnd(text: str = None, level: str = 'INFO')

Log the start and end time with optional message.

Used as a Component method decorator. If msg is not provided a default message is generated with the object class and method name.

Example::

    @LogStartEnd("Highway assignment and skims", level="STATUS")
    def run(self):
        pass

Properties

text (str): message text to use in the start and end record.
level (str): logging level as a string.

Constructor for LogStartEnd object.

Parameters:

Name Type Description Default
text str

message text to use in the start and end record. Defaults to None.

None
level str

logging level as a string. Defaults to “INFO”.

'INFO'
Source code in tm2py/logger.py
def __init__(self, text: str = None, level: str = "INFO"):
    """Constructor for LogStartEnd object.

    Args:
        text (str, optional): message text to use in the start and end record.
            Defaults to None.
        level (str, optional): logging level as a string. Defaults to "INFO".
    """
    self.text = text
    self.level = level
SlackNotifier
SlackNotifier(logger: Logger, slack_webhook_url: str = None)

Notify slack of model run status.

The slack channel can be input directly, or is configured via a text file found at "M:\Software\Slack\TravelModel_SlackWebhook.txt" (if on the MTC server) or "C:\Software\Slack\TravelModel_SlackWebhook.txt" (if local).

Properties
  • logger (Logger): object for logging of trace messages
  • slack_webhook_url (str): optional, url to use for sending the message to slack

Constructor for SlackNotifier object.

Parameters:

Name Type Description Default
logger Logger

logger instance.

required
slack_webhook_url str

Defaults to None, which is replaced by either:
  • r"M:\Software\Slack\TravelModel_SlackWebhook.txt" (if on MTC server)
  • r"C:\Software\Slack\TravelModel_SlackWebhook.txt" (otherwise)

None
Source code in tm2py/logger.py
def __init__(self, logger: Logger, slack_webhook_url: str = None):
    r"""Constructor for SlackNotifier object.

    Args:
        logger (Logger): logger instance.
        slack_webhook_url (str, optional): . Defaults to None, which is replaced by either:
            - r"M:\Software\Slack\TravelModel_SlackWebhook.txt" (if on MTC server)
            - r"C:\Software\Slack\TravelModel_SlackWebhook.txt" (otherwise)
    """
    self.logger = logger
    if not logger.controller.config.logging.notify_slack:
        self._slack_webhook_url = None
        return
    if slack_webhook_url is None:
        hostname = socket.getfqdn()
        if hostname.endswith(".mtc.ca.gov"):
            slack_webhook_url_file = (
                r"M:\Software\Slack\TravelModel_SlackWebhook.txt"
            )
            self.logger.log(
                f"SlackNotifier running on mtc host; using {slack_webhook_url_file}",
                level="TRACE",
            )
        else:
            slack_webhook_url_file = (
                r"C:\Software\Slack\TravelModel_SlackWebhook.txt"
            )
            self.logger.log(
                f"SlackNotifier running on non-mtc host; using {slack_webhook_url_file}",
                level="TRACE",
            )
        if os.path.isfile(slack_webhook_url_file):
            with open(slack_webhook_url_file, "r", encoding="utf8") as url_file:
                self._slack_webhook_url = url_file.read()
        else:
            self._slack_webhook_url = None
    else:
        self._slack_webhook_url = slack_webhook_url
    self.logger.log(
        f"SlackNotifier using slack webhook url {self._slack_webhook_url}",
        level="TRACE",
    )
Functions
post_message
post_message(text)

Posts text to the slack channel via the webhook if slack_webhook_url is found.

Parameters:

Name Type Description Default
text

text message to send to slack

required
Source code in tm2py/logger.py
def post_message(self, text):
    """Posts text to the slack channel via the webhook if slack_webhook_url is found.

    Args:
       text: text message to send to slack
    """
    if self._slack_webhook_url is None:
        return
    headers = {"Content-type": "application/json"}
    data = {"text": text}
    self.logger.log(f"Sending message to slack: {text}", level="TRACE")
    response = requests.post(self._slack_webhook_url, headers=headers, json=data, timeout=10)
    self.logger.log(f"Receiving response: {response}", level="TRACE")

General Utilities

Common utility functions for file operations, data manipulation, and system integration.

tm2py.tools

Tools module for common resources / shared code and “utilities” in the tm2py package.

Classes
SpatialGridIndex
SpatialGridIndex(size: float)

Simple spatial grid hash for fast (enough) nearest neighbor / within distance searches of points.

Parameters:

Name Type Description Default
size float

the size of the grid to use for the index, relative to the point coordinates

required
Source code in tm2py/tools.py
def __init__(self, size: float):
    """
    Args:
        size: the size of the grid to use for the index, relative to the point coordinates
    """
    self._size = float(size)
    self._grid_index = _defaultdict(lambda: [])
Functions
insert
insert(obj: Any, x: float, y: float)

Add new obj with coordinates x and y.

Args:
    obj: any python object, will be returned from search methods "nearest" and "within_distance"
    x: x-coordinate
    y: y-coordinate

Source code in tm2py/tools.py
def insert(self, obj: Any, x: float, y: float):
    """
    Add new obj with coordinates x and y.
    Args:
       obj: any python object, will be returned from search methods "nearest" and "within_distance"
       x: x-coordinate
       y: y-coordinate
    """
    grid_x, grid_y = round(x / self._size), round(y / self._size)
    self._grid_index[(grid_x, grid_y)].append((obj, x, y))
nearest
nearest(x: float, y: float)

Return the closest object in the index to the specified coordinates.

Args:
    x: x-coordinate
    y: y-coordinate

Source code in tm2py/tools.py
def nearest(self, x: float, y: float):
    """Return the closest object in index to the specified coordinates
    Args:
        x: x-coordinate
        y: y-coordinate
    """
    if len(self._grid_index) == 0:
        raise Exception("SpatialGrid is empty.")

    def calc_dist(x1, y1, x2, y2):
        return sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

    grid_x, grid_y = round(x / self._size), round(y / self._size)
    step = 0
    done = False
    found_items = []
    while not done:
        search_offsets = list(range(-1 * step, step + 1))
        search_offsets = _product(search_offsets, search_offsets)
        items = []
        for x_offset, y_offset in search_offsets:
            if abs(x_offset) != step and abs(y_offset) != step:
                continue  # already checked this grid tile
            items.extend(self._grid_index[grid_x + x_offset, grid_y + y_offset])
        if found_items:
            done = True
        found_items.extend(items)
        step += 1
    min_dist = 1e400
    closest = None
    for i, xi, yi in found_items:
        dist = calc_dist(x, y, xi, yi)
        if dist < min_dist:
            closest = i
            min_dist = dist
    return closest
within_distance
within_distance(x: float, y: float, distance: float)

Return all objects in the index within the given distance of the specified coordinates.

Args:
    x: x-coordinate
    y: y-coordinate
    distance: distance to search in point coordinate units

Source code in tm2py/tools.py
def within_distance(self, x: float, y: float, distance: float):
    """Return all objects in index within the distance of the specified coordinates
    Args:
        x: x-coordinate
        y: y-coordinate
        distance: distance to search in point coordinate units
    """

    def point_in_circle(x1, y1, x2, y2, dist):
        return sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2) <= dist

    return self._get_items_on_grid(x, y, distance, point_in_circle)
within_square
within_square(x: float, y: float, distance: float)

Return all objects in the index within a square box distance of the specified coordinates.

Args:
    x: x-coordinate
    y: y-coordinate
    distance: distance to search in point coordinate units

Source code in tm2py/tools.py
def within_square(self, x: float, y: float, distance: float):
    """Return all objects in index within a square box distance of the specified coordinates.
    Args:
        x: x-coordinate
        y: y-coordinate
        distance: distance to search in point coordinate units
    """

    def point_in_box(x1, y1, x2, y2, dist):
        return abs(x1 - x2) <= dist and abs(y1 - y2) <= dist

    return self._get_items_on_grid(x, y, distance, point_in_box)
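A small usage sketch of the grid index (coordinates, grid size, and object labels are arbitrary illustrative values):

from tm2py.tools import SpatialGridIndex

grid = SpatialGridIndex(size=100.0)        # grid cells of 100 coordinate units
grid.insert("stop_a", x=10.0, y=20.0)
grid.insert("stop_b", x=500.0, y=480.0)

closest = grid.nearest(0.0, 0.0)           # returns "stop_a"
nearby = grid.within_distance(505.0, 485.0, distance=50.0)  # objects within 50 units of the point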
Functions
download_unzip
download_unzip(url: str, out_base_dir: str, target_dir: str, zip_filename: str = 'test_data.zip') -> None

Downloads and unzips a file from a URL. The zip file is removed after extraction.

Parameters:

Name Type Description Default
url str

Full URL to download from.

required
out_base_dir str

Where to unzip the file.

required
target_dir str

Directory to unzip the file into.

required
zip_filename str

Filename to store zip file as. Defaults to “test_data.zip”.

'test_data.zip'
Source code in tm2py/tools.py
def download_unzip(
    url: str, out_base_dir: str, target_dir: str, zip_filename: str = "test_data.zip"
) -> None:
    """Download and unzips a file from a URL. The zip file is removed after extraction.

    Args:
        url (str): Full URL to download from.
        out_base_dir (str): Where to unzip the file.
        target_dir (str): What to unzip the file as.
        zip_filename (str, optional): Filename to store zip file as. Defaults to "test_data.zip".
    """
    target_zip = os.path.join(out_base_dir, zip_filename)
    if not os.path.isdir(out_base_dir):
        os.makedirs(out_base_dir)
    urllib.request.Request(url)
    _download(url, target_zip)
    _unzip(target_zip, target_dir)
    os.remove(target_zip)
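For example (the URL and directory names below are placeholders, not real resources):

from tm2py.tools import download_unzip

download_unzip(
    url="https://example.com/path/to/inputs.zip",  # placeholder URL
    out_base_dir="downloads",                      # the zip is written here and removed after extraction
    target_dir="downloads/inputs",                 # directory the archive is extracted to
)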
temp_file
temp_file(mode: str = 'w+', prefix: str = '', suffix: str = '')

Temp file wrapper to return open file handle and named path.

A named temporary file (using mkstemp) with specified prefix and suffix is created and opened with the specified mode. The file handle and path are returned. The file is closed and deleted on exit.

Parameters:

Name Type Description Default
mode str

mode to open file, [rw][+][b]

'w+'
prefix str

optional text to start temp file name

''
suffix str

optional text to end temp file name

''
Source code in tm2py/tools.py
@_context
def temp_file(mode: str = "w+", prefix: str = "", suffix: str = ""):
    """Temp file wrapper to return open file handle and named path.

    A named temporary file (using mkstemp) with specified prefix and
    suffix is created and opened with the specified mode. The file
    handle and path are returned. The file is closed and deleted on exit.

    Args:
        mode: mode to open file, [rw][+][b]
        prefix: optional text to start temp file name
        suffix: optional text to end temp file name
    """
    file_ref, file_path = tempfile.mkstemp(prefix=prefix, suffix=suffix)
    file = os.fdopen(file_ref, mode=mode)
    try:
        yield file, file_path
    finally:
        if not file.closed:
            file.close()
        os.remove(file_path)
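A minimal sketch of the temp_file context manager; the file handle and its path are both available inside the block, and the file is deleted on exit:

from tm2py.tools import temp_file

with temp_file(mode="w+", prefix="demo_", suffix=".csv") as (file, path):
    file.write("ORIG,DEST,VALUE\n1,1,0.5\n")
    file.flush()
    print(f"temporary file written to {path}")
# file is closed and removed once the with-block exits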
run_process
run_process(commands: Collection[str], name: str = '')

Run system level commands as blocking process and log output and error messages.

Parameters:

Name Type Description Default
commands Collection[str]

list of one or more commands to execute

required
name str

optional name to use for the temp bat file

''
Source code in tm2py/tools.py
def run_process(commands: Collection[str], name: str = ""):
    """Run system level commands as blocking process and log output and error messages.

    Args:
        commands: list of one or more commands to execute
        name: optional name to use for the temp bat file
    """
    # when merged with develop_logging branch can use get_logger
    # logger = Logger.get_logger
    logger = None
    with temp_file("w", prefix=name, suffix=".bat") as (bat_file, bat_file_path):
        bat_file.write("\n".join(commands))
        bat_file.close()
        if logger:
            # temporary file to capture output error messages generated by Java
            # Note: temp file created in the current working directory
            with temp_file(mode="w+", suffix="_error.log") as (err_file, _):
                try:
                    output = _subprocess.check_output(
                        bat_file_path, stderr=err_file, shell=True
                    )
                    logger.log(output.decode("utf-8"))
                except _subprocess.CalledProcessError as error:
                    logger.log(error.output)
                    raise
                finally:
                    err_file.seek(0)
                    error_msg = err_file.read()
                    if error_msg:
                        logger.log(error_msg)
        else:
            _subprocess.check_call(bat_file_path, shell=True)
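For example, a short list of shell commands can be executed as a single blocking batch file (Windows-style commands, since a .bat file is generated; the commands shown are placeholders):

from tm2py.tools import run_process

# each list entry becomes one line of the temporary .bat file
run_process(["echo starting demo step", "echo demo step finished"], name="demo_step")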
interpolate_dfs
interpolate_dfs(df: DataFrame, ref_points: Collection[Union[float, int]], target_point: Union[float, int], ref_col_name: str = 'ends_with') -> pd.DataFrame

Interpolate for the model year assuming linear growth between the reference years.

Parameters:

Name Type Description Default
df DataFrame

dataframe to interpolate on, with ref points contained in column name per ref_col_name.

required
ref_points Collection[Union[float, int]]

reference years to interpolate between

required
target_point Union[float, int]

target year

required
ref_col_name str

column name to use for reference years. Defaults to “ends_with”.

'ends_with'
Source code in tm2py/tools.py
def interpolate_dfs(
    df: pd.DataFrame,
    ref_points: Collection[Union[float, int]],
    target_point: Union[float, int],
    ref_col_name: str = "ends_with",
) -> pd.DataFrame:
    """Interpolate for the model year assuming linear growth between the reference years.

    Args:
        df (pd.DataFrame): dataframe to interpolate on, with ref points contained in column
            name per ref_col_name.
        ref_points (Collection[Union[float,int]]): reference years to interpolate between
        target_point (Union[float,int]): target year
        ref_col_name (str, optional): column name to use for reference years.
            Defaults to "ends_with".
    """
    if ref_col_name not in ["ends_with"]:
        raise NotImplementedError(f"{ref_col_name} not implemented")
    if len(ref_points) != 2:
        raise NotImplementedError(f"{ref_points} reference points not implemented")

    _ref_points = list(map(int, ref_points))
    _target_point = int(target_point)

    _ref_points.sort()
    _start_point, _end_point = _ref_points
    if not _start_point <= _target_point <= _end_point:
        raise ValueError(
            f"Target Point: {_target_point} not within range of \
            Reference Points: {_ref_points}"
        )

    _start_ref_df = df[[c for c in df.columns if c.endswith(f"{_start_point}")]].copy()
    _end_ref_df = df[[c for c in df.columns if c.endswith(f"{_end_point}")]].copy()

    if len(_start_ref_df.columns) != len(_end_ref_df.columns):
        raise ValueError(
            f"{_start_point} and {_end_point} have different number of columns:\n\
           {_start_point} Columns: {_start_ref_df.columns}\n\
           {_end_point} Columns: {_end_ref_df.columns}\
        "
        )

    _start_ref_df.rename(
        columns=lambda x: x.replace(f"_{_start_point}", ""), inplace=True
    )
    _end_ref_df.rename(columns=lambda x: x.replace(f"_{_end_point}", ""), inplace=True)
    _scale_factor = float(target_point - _start_point) / (_end_point - _start_point)

    interpolated_df = (1 - _scale_factor) * _start_ref_df + _scale_factor * _end_ref_df

    return interpolated_df
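A worked sketch with hypothetical household and employment columns: for reference years 2015 and 2050 and a target year of 2035, the scale factor is (2035 - 2015) / (2050 - 2015) ≈ 0.571, so each interpolated value is 0.429 × the 2015 value + 0.571 × the 2050 value.

import pandas as pd
from tm2py.tools import interpolate_dfs

landuse = pd.DataFrame(
    {
        "hh_2015": [100, 200],   # hypothetical household counts by zone
        "hh_2050": [170, 270],
        "emp_2015": [50, 80],    # hypothetical employment by zone
        "emp_2050": [85, 150],
    }
)

# linearly interpolate the year-suffixed columns to a 2035 model year;
# the returned columns drop the year suffix, e.g. "hh" and "emp"
landuse_2035 = interpolate_dfs(landuse, ref_points=[2015, 2050], target_point=2035)

For the first zone, the interpolated household value is 0.429 × 100 + 0.571 × 170 = 140.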
zonal_csv_to_matrices
zonal_csv_to_matrices(csv_file: str, i_column: str = 'ORIG', j_column: str = 'DEST', value_columns: str = ['VALUE'], default_value: float = 0.0, fill_zones: bool = False, max_zone: int = None, delimiter: str = ',') -> Mapping[str, pd.DataFrame]

Read a CSV file with zonal data and into dataframes.

Input CSV file should have a header row specifying the I, J, and Value column names.

Parameters:

Name Type Description Default
csv_file str

description

required
i_column str

Name of the I (origin) zone column. Defaults to "ORIG".

'ORIG'
j_column str

Name of the J (destination) zone column. Defaults to "DEST".

'DEST'
value_columns str

List of columns to turn into matrices. Defaults to [“VALUE”].

['VALUE']
default_value float

Value to fill empty cells with. Defaults to 0.0.

0.0
fill_zones bool

If true, will fill zones without values to max zone with default value. Defaults to False.

False
max_zone int

If fill_zones is True, used to determine matrix size. Defaults to max(I, J).

None
delimiter str

Input file delimiter. Defaults to ",".

','

Returns:

Name Type Description
dict Mapping[str, DataFrame]

Dictionary of Pandas dataframes with matrix names as keys.

Source code in tm2py/tools.py
def zonal_csv_to_matrices(
    csv_file: str,
    i_column: str = "ORIG",
    j_column: str = "DEST",
    value_columns: str = ["VALUE"],
    default_value: float = 0.0,
    fill_zones: bool = False,
    max_zone: int = None,
    delimiter: str = ",",
) -> Mapping[str, pd.DataFrame]:
    """Read a CSV file with zonal data and into dataframes.

    Input CSV file should have a header row specifying the I, J, and Value column names.

    Args:
        csv_file (str): _description_
        i_column (str, optional): Name of the I (origin) zone column. Defaults to "ORIG".
        j_column (str, optional): Name of the J (destination) zone column. Defaults to "DEST".
        value_columns (str, optional): List of columns to turn into matrices.
            Defaults to ["VALUE"].
        default_value (float, optional): Value to fill empty cells with. Defaults to 0.0.
        fill_zones (bool, optional): If true, will fill zones without values to max zone with
            default value. Defaults to False.
        max_zone (int, optional): If fill_zones is True, used to determine matrix size.
            Defaults to max(I, J).
        delimiter (str, optional): Input file delimiter. Defaults to ",".

    Returns:
        dict: Dictionary of Pandas dataframes with matrix names as keys.
    """
    # TODO Create a test
    _df = pd.read_csv(csv_file, delimiter=delimiter)
    _df_idx = _df.set_index([i_column, j_column])

    _dfs_dict = {v: _df_idx[v] for v in value_columns}
    if not fill_zones:
        return _dfs_dict

    if max_zone is None:
        max_zone = _df[[i_column, j_column]].max().max()

    _zone_list = list(range(1, max_zone + 1))
    for v, _df in _dfs_dict.items():
        _df[v].reindex(index=_zone_list, columns=_zone_list, fill_value=default_value)
    return _dfs_dict
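A small usage sketch, assuming a hypothetical demand.csv with header ORIG,DEST,TRIPS:

from tm2py.tools import zonal_csv_to_matrices

matrices = zonal_csv_to_matrices("demand.csv", value_columns=["TRIPS"])
trips = matrices["TRIPS"]    # zonal values indexed by (ORIG, DEST)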
mocked_inro_context
mocked_inro_context()

Mocking of modules which need to be mocked for tests.

Source code in tm2py/tools.py
def mocked_inro_context():
    """Mocking of modules which need to be mocked for tests."""
    import sys
    from unittest.mock import MagicMock

    sys.modules["inro.emme.database.emmebank"] = MagicMock()
    sys.modules["inro.emme.database.emmebank.path"] = MagicMock(return_value=".")
    sys.modules["inro.emme.network.link"] = MagicMock()
    sys.modules["inro.emme.network.mode"] = MagicMock()
    sys.modules["inro.emme.network.node"] = MagicMock()
    sys.modules["inro.emme.network"] = MagicMock()
    sys.modules["inro.emme.database.scenario"] = MagicMock()
    sys.modules["inro.emme.database.matrix"] = MagicMock()
    sys.modules["inro.emme.network.node"] = MagicMock()
    sys.modules["inro.emme.desktop.app"] = MagicMock()
    sys.modules["inro"] = MagicMock()
    sys.modules["inro.modeller"] = MagicMock()
    sys.modules["tm2py.emme.manager.EmmeManager.project"] = MagicMock()
    sys.modules["tm2py.emme.manager.EmmeManager.emmebank"] = MagicMock()
    sys.modules["tm2py.emme.manager"] = MagicMock()
emme_context
emme_context()

Return True if Emme is installed.

Source code in tm2py/tools.py
def emme_context():
    """Return True if Emme is installed."""
    import pkg_resources

    _inro_package = "inro-emme"
    _avail_packages = [pkg.key for pkg in pkg_resources.working_set]

    if _inro_package not in _avail_packages:
        print("Inro not found. Skipping inro setup.")
        mocked_inro_context()
        return False
    else:
        import inro

        if "MagicMock" in str(type(inro)):
            return False

    return True
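In tests this is typically called once up front; a minimal sketch:

from tm2py.tools import emme_context

# when the inro-emme package is not installed, mocked_inro_context() is applied
# and False is returned, so tests can skip Emme-dependent assertions
if emme_context():
    print("Emme available: running against the real Emme API")
else:
    print("Emme not available: inro modules are mocked")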

Example Workflows

Pre-built example configurations and workflows for common modeling scenarios.

Available Examples

The examples module provides ready-to-use configurations for:

  • Base year model setup
  • Scenario analysis workflows
  • Sensitivity testing procedures
  • Custom component integration

tm2py.examples

Download and unzip examples for tm2py, used in tests.

Functions
get_example
get_example(example_name: str = _DEFAULT_EXAMPLE_NAME, example_subdir: str = _DEFAULT_EXAMPLE_SUBDIR, root_dir: str = _ROOT_DIR, retrieval_url: str = _DEFAULT_EXAMPLE_URL) -> str

Returns example directory; downloads if necessary from retrieval URL.

Parameters:

Name Type Description Default
example_name str

Used to retrieve the sub-folder, or to create it if it doesn't exist. Defaults to _DEFAULT_EXAMPLE_NAME.

_DEFAULT_EXAMPLE_NAME
example_subdir str

Where to find examples within root dir. Defaults to _DEFAULT_EXAMPLE_SUBDIR.

_DEFAULT_EXAMPLE_SUBDIR
root_dir str

Root dir of project. Defaults to _ROOT_DIR.

_ROOT_DIR
retrieval_url str

URL to retrieve example data zip from. Defaults to _DEFAULT_EXAMPLE_URL.

_DEFAULT_EXAMPLE_URL

Raises:

Type Description
FileNotFoundError

If the files can't be found after trying to download them.

Returns:

Name Type Description
str str

Path to example data.

Source code in tm2py/examples.py
def get_example(
    example_name: str = _DEFAULT_EXAMPLE_NAME,
    example_subdir: str = _DEFAULT_EXAMPLE_SUBDIR,
    root_dir: str = _ROOT_DIR,
    retrieval_url: str = _DEFAULT_EXAMPLE_URL,
) -> str:
    """Returns example directory; downloads if necessary from retrieval URL.

    Args:
        example_name (str, optional): Used to retrieve sub-folder or create it if doesn't exist.
            Defaults to _DEFAULT_EXAMPLE_NAME.
        example_subdir (str, optional): Where to find examples within root dir. Defaults
            to _DEFAULT_EXAMPLE_SUBDIR.
        root_dir (str, optional): Root dir of project. Defaults to _ROOT_DIR.
        retrieval_url (str, optional): URL to retrieve example data zip from. Defaults
            to _DEFAULT_EXAMPLE_URL.

    Raises:
        FileNotFoundError: If can't find the files after trying to download it.

    Returns:
        str: Path to example data.
    """
    _example_dir = os.path.join(root_dir, example_subdir)
    _this_example_dir = os.path.join(_example_dir, example_name)
    if os.path.isdir(_this_example_dir):
        return _this_example_dir

    download_unzip(retrieval_url, _example_dir, _this_example_dir)
    if not os.path.isdir(_this_example_dir):
        raise FileNotFoundError(f"example {_this_example_dir} not found")

    return _this_example_dir
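For example, the default example data set can be retrieved (and downloaded on first use) with:

from tm2py.examples import get_example

example_dir = get_example()   # returns the example directory, downloading and unzipping it if needed
print(example_dir)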

📚 Additional Resources

Related Documentation

User Guides & Tutorials

Input & Output Reference

Technical Documentation

Need Help?