Land Use Data 🏙️ ¶
Overview ¶
Land use data provides the spatial distribution of households, population, employment, and other activities that generate and attract travel. The travel model uses two levels of spatial detail: Traffic Analysis Zones (TAZ) for regional analysis and Micro Analysis Zones (MAZ) for detailed local accessibility and mode choice modeling.
Creating Land Use Files
For detailed instructions on how to prepare land use data files for the base year, see Creating Base Year Inputs 🏙️
File Structure ¶
Land use data consists of two main files located in the landuse\ directory:
- mazData.csv - Micro Analysis Zone level data (detailed land use characteristics)
- tazData.csv - Traffic Analysis Zone level data (regional characteristics)
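Both files are plain CSVs and can be inspected directly with pandas. A minimal sketch, assuming the landuse\ directory sits under the model run folder as described above:

```python
from pathlib import Path

import pandas as pd

# Assumed layout: both land use files live in the run's landuse/ folder.
landuse_dir = Path("landuse")

# Micro Analysis Zone data: detailed land use characteristics.
maz_df = pd.read_csv(landuse_dir / "mazData.csv")

# Traffic Analysis Zone data: regional characteristics.
taz_df = pd.read_csv(landuse_dir / "tazData.csv")

print(f"{len(maz_df)} MAZs, {len(taz_df)} TAZs")
```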
Micro Analysis Zones (MAZ Data) ¶
The mazData.csv file contains detailed land use characteristics at the micro-zone level, providing the fine-grained spatial detail needed for accessibility calculations and local travel modeling.
Data Model Validation
The MAZ data file structure is validated using Pandera data models to ensure data quality and consistency. See the complete field specifications and validation rules below.
Cross Reference
For detailed API documentation and programmatic access to MAZ data validation, see MAZ Data Model API Reference 📖
MAZ Data Model Specification ¶
The following fields are required in the mazData.csv file. All field names are case-sensitive and must match exactly:
tm2py.data_models.maz_data.MAZData ¶
Micro-Analysis Zone (MAZ) Land Use Data Validation Model.
This class validates MAZ-level land use data used in TM2.0 transportation modeling. MAZs represent the finest geographic resolution for land use data, containing detailed information about employment by sector, demographics, parking supply, and accessibility measures. This data drives trip generation and other demand modeling components.
The validation ensures data consistency, proper data types, and logical constraints across all land use attributes before they are consumed by the transportation model.
Geographic Hierarchy ¶
- MAZ (Micro-Analysis Zone): Finest geographic unit
- TAZ (Traffic Analysis Zone): Aggregates multiple MAZs
- District/County: Higher-level geographic groupings
Data Categories ¶
- Geographic Identifiers: MAZ/TAZ IDs, coordinates, district/county information
- Demographics: Households, population, school enrollment by type
- Employment by Sector: 21 detailed employment categories (retail, manufacturing, services, etc.)
- Parking Supply: Hourly, daily, and monthly parking by destination type
- Density Measures: Employment, population, and household densities within ½ mile
- Accessibility: Intersection counts and density classifications
Employment Categories ¶
The model includes detailed employment data across major sectors:
- Primary: Agriculture (ag), Natural Resources (natres)
- Manufacturing: Bio (man_bio), Light (man_lgt), Heavy (man_hvy), Tech (man_tech)
- Services: Professional (prof), Business (serv_bus), Personal (serv_pers), Social (serv_soc)
- Retail: Local (ret_loc), Regional (ret_reg)
- Education: K-12 (ed_k12), Higher Ed (ed_high), Other (ed_oth)
- Other: Government (gov), Health, Construction (constr), Transportation (transp), etc.
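The detailed sector columns can be rolled up into broader groups for reporting. A minimal sketch, assuming the column names follow the abbreviations in the list above exactly as they appear in mazData.csv:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")

# Roll detailed sector columns up into broader groups.
# Column names follow the sector list above; adjust if your file differs.
sector_groups = {
    "primary": ["ag", "natres"],
    "manufacturing": ["man_bio", "man_lgt", "man_hvy", "man_tech"],
    "retail": ["ret_loc", "ret_reg"],
    "education": ["ed_k12", "ed_high", "ed_oth"],
}

group_totals = {
    group: maz_df[cols].sum().sum() for group, cols in sector_groups.items()
}
print(group_totals)
```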
Parking Data Structure ¶
Parking supply is categorized by:
- Duration: Hourly (h), Daily (d), Monthly (m)
- Destination: Same MAZ (sam) vs. Other MAZs (oth)
- Costs: Average hourly, daily, and monthly parking costs
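The stall and cost fields defined in the attribute table below combine naturally into summary measures. A minimal sketch using the hstalls*/dstalls*/mstalls* and *parkcost columns:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")

# Total stalls by duration = stalls serving the same MAZ + stalls serving other MAZs.
maz_df["hourly_stalls"] = maz_df["hstallssam"] + maz_df["hstallsoth"]
maz_df["daily_stalls"] = maz_df["dstallssam"] + maz_df["dstallsoth"]
maz_df["monthly_stalls"] = maz_df["mstallssam"] + maz_df["mstallsoth"]

# MAZs with priced hourly parking.
priced = maz_df[maz_df["hparkcost"] > 0]
print(priced[["MAZ_ORIGINAL", "hourly_stalls", "hparkcost", "numfreehrs"]].head())
```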
Density Classifications ¶
Several attributes use binned density measures (1-3 scale):
- IntDenBin: Intersection density (walkability proxy)
- EmpDenBin: Employment density (job accessibility)
- DUDenBin: Household density (residential intensity)
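Binned measures of this kind can be derived from the raw densities with pandas. The sketch below uses purely illustrative thresholds on the RetEmpDen column; the actual breakpoints used by the model are not shown here:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")

# Illustrative 1-3 binning of retail employment density; the real
# thresholds are model-specific and will differ from the ones used here.
maz_df["RetEmpDenBin_example"] = pd.cut(
    maz_df["RetEmpDen"],
    bins=[-float("inf"), 5.0, 20.0, float("inf")],
    labels=[1, 2, 3],
).astype(int)
print(maz_df["RetEmpDenBin_example"].value_counts())
```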
Validation Rules ¶
- All geographic IDs must be unique and non-null
- Employment and demographic counts must be non-negative integers
- Parking costs and areas must be non-negative floats
- Density measures include both raw values and binned classifications
Example ¶
```python
import pandas as pd

from tm2py.data_models.maz_data import MAZData

# Validate MAZ data
maz_df = pd.read_csv('maz_land_use.csv')
validated_data = MAZData.validate(maz_df)

# Access employment totals
total_jobs = validated_data['emp_total'].sum()
retail_jobs = validated_data['ret_loc'].sum() + validated_data['ret_reg'].sum()

# Analyze density patterns
high_density_mazs = validated_data[validated_data['EmpDenBin'] == 3]
walkable_areas = validated_data[validated_data['IntDenBin'] >= 2]
```
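If the file violates any of the rules above (for example, a negative employment count or a duplicate MAZ_ORIGINAL), standard Pandera behavior is to raise a SchemaError. A minimal sketch of catching and reporting it:

```python
import pandas as pd
import pandera as pa

from tm2py.data_models.maz_data import MAZData

maz_df = pd.read_csv("landuse/mazData.csv")

try:
    validated = MAZData.validate(maz_df)
except pa.errors.SchemaError as err:
    # err.failure_cases lists the offending columns and values.
    print("MAZ data failed validation:")
    print(err.failure_cases)
```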
Attributes ¶
Each attribute is declared as a Pandera Series with the type and constraints listed below.

| Field | Type | Constraints |
|---|---|---|
| MAZ_ORIGINAL | int | non-null, unique |
| publicEnrollGradeKto8 | int | non-null, ≥ 0 |
| privateEnrollGradeKto8 | int | non-null, ≥ 0 |
| publicEnrollGrade9to12 | int | non-null, ≥ 0 |
| privateEnrollGrade9to12 | int | non-null, ≥ 0 |
| comm_coll_enroll | int | non-null, ≥ 0 |
| EnrollGradeKto8 | int | non-null, ≥ 0 |
| EnrollGrade9to12 | float | non-null, ≥ 0 |
| collegeEnroll | float | non-null, ≥ 0 |
| otherCollegeEnroll | float | non-null, ≥ 0 |
| AdultSchEnrl | int | non-null, ≥ 0 |
| hstallsoth | float | non-null, ≥ 0 |
| hstallssam | float | non-null, ≥ 0 |
| dstallsoth | float | non-null, ≥ 0 |
| dstallssam | float | non-null, ≥ 0 |
| mstallsoth | float | non-null, ≥ 0 |
| mstallssam | float | non-null, ≥ 0 |
| park_area | float | non-null, ≥ 0 |
| hparkcost | float | non-null, ≥ 0 |
| numfreehrs | float | non-null, ≥ 0 |
| dparkcost | float | non-null, ≥ 0 |
| mparkcost | float | non-null, ≥ 0 |
| RetEmpDen | float | non-null, ≥ 0 |
| PopEmpDenPerMi | float | non-null, ≥ 0 |
Traffic Analysis Zones (TAZ Data) ¶
The tazData.csv file contains zone-level data used for specific model components, particularly the transponder ownership model.
Required Fields ¶
| Column Name | Description | Used by | Source |
|---|---|---|---|
| TAZ_ORIGINAL | Original TAZ number (renumbered during model run) | Zone system definition | |
| AVGTTS | Average travel time savings for transponder ownership | TazDataManager | Highway network analysis |
| DIST | Distance for transponder ownership model | TazDataManager | Highway network analysis |
| PCTDETOUR | Percent detour for transponder ownership model | TazDataManager | Highway network analysis |
| TERMINALTIME | Terminal time | TazDataManager | Highway network analysis |
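A quick completeness check against the required columns above can catch problems before a model run. A minimal sketch:

```python
import pandas as pd

taz_df = pd.read_csv("landuse/tazData.csv")

# Required columns from the table above.
required = ["TAZ_ORIGINAL", "AVGTTS", "DIST", "PCTDETOUR", "TERMINALTIME"]
missing = [col for col in required if col not in taz_df.columns]
if missing:
    raise ValueError(f"tazData.csv is missing required columns: {missing}")

# TAZ_ORIGINAL should uniquely identify each zone.
assert taz_df["TAZ_ORIGINAL"].is_unique, "Duplicate TAZ_ORIGINAL values found"
```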
Data Integration and Processing ¶
Zone System Coordination ¶
- MAZ to TAZ Mapping: Each MAZ must be assigned to exactly one TAZ
- Numbering Convention: Original numbers preserved, but model renumbers zones during execution
- Consistency Checks: Population and employment totals must be consistent between MAZ and TAZ levels
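A minimal consistency sketch for these checks; it assumes the MAZ file carries a TAZ_ORIGINAL column identifying the parent TAZ and a POP population column, both of which are illustrative names here:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")
taz_df = pd.read_csv("landuse/tazData.csv")

# Each MAZ should reference a TAZ that exists in the TAZ file.
unmatched = ~maz_df["TAZ_ORIGINAL"].isin(taz_df["TAZ_ORIGINAL"])
if unmatched.any():
    print(f"{unmatched.sum()} MAZs reference a TAZ not present in tazData.csv")

# Aggregate an illustrative population column from MAZ to TAZ for comparison
# against any TAZ-level control totals maintained alongside the land use data.
pop_by_taz = maz_df.groupby("TAZ_ORIGINAL")["POP"].sum()
print(pop_by_taz.head())
```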
Employment Allocation ¶
- Industry Classification: Employment data classified by detailed NAICS codes
- Spatial Distribution: Employment allocated to MAZ level for accessibility calculations
- Validation: Total employment should match regional control totals
- Special Generators: Major employers (airports, universities) require special treatment
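A sketch of checking MAZ employment against a regional control total; the emp_total column follows the earlier example, and the control value is a placeholder to be replaced with the adopted regional forecast:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")

# Placeholder regional control total; substitute the adopted regional figure.
REGIONAL_EMP_CONTROL = 4_000_000

modeled_total = maz_df["emp_total"].sum()
pct_diff = 100.0 * (modeled_total - REGIONAL_EMP_CONTROL) / REGIONAL_EMP_CONTROL
print(f"Modeled employment: {modeled_total:,.0f} ({pct_diff:+.2f}% vs. control)")
```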
Density Calculations ¶
Density measures are calculated using a method that is still to be determined (TBD):
- Dwelling Unit Density: Households per acre
- Employment Density: Jobs per acre
- Population Density: Persons per acre
- Intersection Density: Total intersections (walkability measure)
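Because the exact calculation method is still TBD, the sketch below shows only the simplest per-acre form; the HH, POP, and ACRES column names are illustrative, and the documented densities are measured within a half-mile buffer rather than within the zone itself:

```python
import pandas as pd

maz_df = pd.read_csv("landuse/mazData.csv")

# Illustrative per-acre densities; replace with the adopted buffering method
# once it is defined.
maz_df["du_density"] = maz_df["HH"] / maz_df["ACRES"]
maz_df["emp_density"] = maz_df["emp_total"] / maz_df["ACRES"]
maz_df["pop_density"] = maz_df["POP"] / maz_df["ACRES"]
print(maz_df[["du_density", "emp_density", "pop_density"]].describe())
```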
Model Applications ¶
Accessibility Calculations ¶
Land use data drives accessibility calculations used throughout the model:
- Employment Accessibility: By industry sector for location choice
- Population Accessibility: For service and retail accessibility
- Education Accessibility: For school location choice
- Mixed-Use Measures: Combined residential/commercial accessibility
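For intuition about how land use feeds these measures, a common generic form is a distance-decayed sum of opportunities. The sketch below is illustrative only; neither the decay parameter nor the functional form is taken from the CT-RAMP implementation:

```python
import numpy as np
import pandas as pd

# Illustrative inputs: employment by destination zone and a zone-to-zone
# distance matrix (miles). Values are made up for the example.
emp = pd.Series([500.0, 1200.0, 300.0], index=[1, 2, 3], name="emp_total")
dist = pd.DataFrame(
    [[0.5, 2.0, 4.0], [2.0, 0.5, 3.0], [4.0, 3.0, 0.5]],
    index=[1, 2, 3], columns=[1, 2, 3],
)

beta = 0.5  # illustrative distance-decay parameter
access = (np.exp(-beta * dist) * emp).sum(axis=1)
print(access)  # employment accessibility by origin zone
```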
Mode Choice Integration ¶
- Parking Supply: Available spaces by type and duration
- Parking Costs: Hourly, daily, and monthly rates
- Built Environment: Density measures for walk/bike mode choice
- Activity Density: Combined employment and population measures
Location Choice Models ¶
- Work Location: Industry-specific employment accessibility
- School Location: Enrollment and capacity by education level
- Non-Mandatory Activities: Retail, service, and recreational accessibility
Data Quality Requirements ¶
Validation Checks ¶
- Completeness: No missing values in required fields
- Consistency: Employment totals match across classification levels
- Geographic Integrity: All MAZ assigned to valid TAZ
- Logical Relationships: Enrollment consistent with education employment
- Density Calculations: Consistent with zone area measurements
Common Issues ¶
- Missing Employment: Zones with population but no employment data
- Inconsistent Totals: MAZ totals not matching TAZ aggregations
- Parking Data Gaps: Missing parking supply or cost information
- Enrollment Mismatches: School enrollment not aligned with capacity
- Density Anomalies: Unrealistic density calculations
Update Procedures ¶
- Base Year Preparation: Align with most recent Census/survey data
- Forecast Year Development: Apply land use forecasts and development scenarios
- Validation Process: Compare against observed patterns and trends
- Sensitivity Testing: Verify model response to land use changes
- Documentation: Maintain metadata and processing documentation
This comprehensive land use data structure supports detailed spatial analysis and realistic travel behavior modeling in the CT-RAMP framework.