Military maritime load planning instances for prioritized two-dimensional orthogonal packing
Data files (Dec 30, 2025 version, 165.51 KB total):
- instances.zip (120.67 KB)
- Project.toml (589 B)
- README.md (23.18 KB)
- run_all.jl (6.30 KB)
- src.zip (14.77 KB)
Abstract
This dataset supports the computational validation of prioritized two-dimensional orthogonal packing algorithms applied to military combat loading scenarios, as detailed in "Enhancing Military Load Planning: A Prioritized 2-D Orthogonal Packing Approach". The repository comprises 70 problem instances derived from authoritative U.S. Army equipment databases, specifically the Joint Equipment Characteristic Database (JECD) and the Modified Table of Organizational Equipment (MTOE) for a representative Armored Brigade Combat Team (ABCT). The data represents approximately two brigades of equipment filtered for roll-on/roll-off capability—including tracked combat vehicles, self-propelled artillery, and heavy wheeled vehicles—to simulate realistic amphibious embarkation requirements. Instances are provided in JSON format and categorize items by Unit Identification Code (UIC) and Paragraph Number (PARNO) to enforce hierarchical packing priorities that balance access-point proximity with unit cohesion. The dataset spans six representative vessel classes (Whidbey Island, Wasp, Harpers Ferry, Besson, America, and Runnymede) with target space utilization levels ranging from 65% to 85%. Supplementary Julia scripts utilizing the Gurobi optimizer are included to reproduce computational experiments across three solution methods: a monolithic Mixed-Integer Linear Program (MILP), a standard sliding-window matheuristic, and an in-stride balancing variant. These scripts evaluate algorithmic performance against strict center-of-gravity deviation tolerances (δ∈{0.01,0.05,0.10,0.15}), enabling the assessment of trade-offs between load balancing feasibility, solution quality, and computational efficiency.
1. Overview
This repository contains the data, instances, and supplementary materials for our paper "Enhancing Military Load Planning: A Prioritized 2-D Orthogonal Packing Approach." The repository includes:
- 70 problem instances derived from U.S. Army equipment databases
- Detailed data generation methodology
- Julia scripts for reproducing experiments
2. Data Generation Methodology
This section provides comprehensive details on how the 70 test instances were generated from military equipment databases. The methodology ensures a realistic representation of combat loading scenarios while maintaining reproducibility.
2.1 Equipment Data Sources and Filtering
Data Sources
Our equipment data derives from two authoritative U.S. military sources:
- Joint Equipment Characteristic Database (JECD): Provides comprehensive technical specifications including:
- Physical dimensions (length, width, height)
- Weight specifications
- Equipment classification codes
- Operational characteristics
- Modified Table of Organizational Equipment (MTOE): Defines the equipment authorization for a representative U.S. Army Armored Brigade Combat Team (ABCT), including:
- Unit organizational structure
- Equipment quantities by unit
- Hierarchical relationships between units
Equipment Filtering Criteria
To focus on equipment suitable for roll-on/roll-off combat loading operations, we applied the following filtering criteria:
EQUIP CODE Filtering: We retained only items with the following equipment codes:
- Codes 1-9: Wheeled vehicles of various weight classes
- Code 0: Special purpose wheeled vehicles
- Code C: Tracked combat vehicles
- Code D: Tracked support vehicles
- Code E: Self-propelled artillery systems
- Code F: Engineer equipment (self-propelled)
- Code H: Towable equipment and trailers
Rationale for Filtering:
- These codes represent self-propelled and towable vehicles that can be driven or towed onto vessels
- Excludes equipment requiring material handling equipment (MHE) for loading
- Focuses on items that can rapidly off-load in contested environments without external assistance
- Represents the core combat power of an ABCT that would be prioritized in amphibious operations
Sample Equipment Entries (anonymized):
Tracked Combat Vehicle (Code C): 360" × 144" × 96", 140,000 lbs
Tracked Support Vehicle (Code D): 264" × 143" × 114", 61,218 lbs
Self-Propelled Artillery (Code E): 384" × 123" × 120", 78,000 lbs
Heavy Wheeled Vehicle (Code 2): 396" × 96" × 113", 42,500 lbs
2.2 Group Formation and Priority Assignment
Organizational Structure
Items are organized into functional groups based on military organizational hierarchy using two key identifiers:
Unit Identification Code (UIC):
- Anonymized identifier code for specific units
- Typically represents battalion-level organizations (300-800 personnel)
- Example: "UIC_001" represents a battalion-level organization
Paragraph Number (PARNO):
- Three-digit code indicating sub-unit allocation within a UIC
- Follows standard military organizational patterns:
- 100-series: Battalion headquarters and headquarters company
- 101: Company headquarters
- 102: Battalion command section
- 103: Medical platoon
- 104: Maintenance platoon
- 105: Scout platoon
- 200-series: First line company (e.g., Alpha Company)
- 201: Company headquarters
- 202: First platoon
- 203: Second platoon
- 204: Third platoon
- 300-series: Second line company (e.g., Bravo Company)
- 400-series: Third line company (e.g., Charlie Company)
- 500-series: Fourth line company (e.g., Delta Company)
Dataset Statistics
Our filtered ABCT dataset contains:
- 299 unique UIC/PARNO combinations: Representing distinct organizational elements
- 2,294 total equipment entries: Individual pieces of equipment across all units
- Approximately 2 brigades worth of equipment: Sufficient for comprehensive testing
Priority Assignment Methodology
Group-Level Priority:
- Groups assigned sequential priorities (1 through n) based on MTOE appearance order
- MTOE documents structured to list units in doctrinally meaningful sequences
- Reflects operational employment order (e.g., reconnaissance before main body)
- Maintains consistency with military planning doctrine
Item-Level Priority Within Groups:
- Items within each group receive randomized priorities (1 through m, where m = group size)
- Randomization simulates variety of possible loading sequences based on:
- Mission-specific requirements
- Equipment readiness status
- Tactical considerations at embarkation time
- Commander's guidance for specific operations
Rationale for Within-Group Randomization:
This randomization acknowledges that while group-level priorities often follow established military doctrine (reflected in MTOE ordering), the specific sequencing of equipment within a unit may vary considerably based on operational factors at the time of embarkation. Unlike group-level priorities which follow doctrinal relationships, item-level priorities are more flexible and mission-dependent, allowing tactical planners to adjust loading sequences based on immediate operational needs, current equipment readiness, or specific mission requirements.
Example Priority Structure (anonymized):
Group 1 (PARATITLE_001) - Priority 1
- Vehicle #1 (MOSLIN_001) - Item Priority 3
- Vehicle #2 (MOSLIN_001) - Item Priority 1
- Vehicle #3 (MOSLIN_002) - Item Priority 2
Group 2 (PARATITLE_002) - Priority 2
- Vehicle #1 (MOSLIN_003) - Item Priority 2
- Vehicle #2 (MOSLIN_003) - Item Priority 4
- Vehicle #3 (MOSLIN_003) - Item Priority 1
- Vehicle #4 (MOSLIN_003) - Item Priority 3
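The assignment logic above (sequential group priorities, randomized item priorities within each group) can be sketched as follows. This is an illustrative Python sketch with names of our own choosing; the repository's own scripts are Julia.

```python
import random

def assign_priorities(groups, seed=None):
    """Sketch: groups keep their MTOE appearance order as priority 1..n,
    while items inside each group receive a random permutation 1..m."""
    rng = random.Random(seed)
    plan = []
    for g_prio, items in enumerate(groups, start=1):
        perm = list(range(1, len(items) + 1))
        rng.shuffle(perm)  # randomized within-group loading sequence
        plan.append({"group_priority": g_prio,
                     "items": [{"item": it, "item_priority": p}
                               for it, p in zip(items, perm)]})
    return plan

plan = assign_priorities([["veh_1", "veh_2", "veh_3"],
                          ["veh_4", "veh_5", "veh_6", "veh_7"]], seed=1)
```

Fixing the seed makes a given randomization reproducible, which is how the shipped instances can encode one concrete within-group ordering.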
2.3 Vessel Specifications and Sources
Vessel Dimension Derivation
We model six representative vessel types with dimensions derived from official sources:
Whidbey Island-class (LSD-41):
- Source: U.S. Navy Fact File (official specifications)
- Well deck: 440' × 50' = 5280" × 600"
- Dimensions extracted directly from published well deck measurements
- Largest amphibious well deck in U.S. inventory
- Well deck area specifically designed for vehicle loading
Wasp-class (LHD):
- Source: U.S. Navy Fact File (official specifications)
- Well deck: 266' × 50' = 3192" × 600"
- Dimensions extracted directly from published well deck measurements
- Primary amphibious assault platform
Harpers Ferry-class (LSD-49):
- Source: U.S. Navy Fact File (official specifications)
- Cargo deck area: 220' × 50' = 2640" × 600"
- Dimensions from cargo deck measurements (not well deck)
- Modified design emphasizing cargo over well deck space
General Frank S. Besson-class (LSV):
- Source: U.S. Army ATP 4-15 (official Army doctrine)
- Main deck cargo area: 190.9' × 55' = 2291" × 660"
- Dimensions from main cargo deck area specifications
- Army logistics support vessel for intra-theater transport
America-class Landing Helicopter Assault (LHA):
- Source: Derived from Landing Craft Air Cushion (LCAC) capacity specifications
- Well deck designed to accommodate 2 LCACs side-by-side
- Estimated dimensions: 160' × 50' = 1920" × 600"
- Derivation methodology: Based on documented capacity to accommodate two LCAC vehicles simultaneously
- Note: Early America-class vessels (LHA-6, LHA-7) built without well decks for increased aviation capacity; dimensions calculated for LHA-8+ design which reintroduces well deck capability
Runnymede-class Landing Craft Utility (LCU 2000):
- Source: U.S. Army ATP 4-15 (official Army doctrine)
- Main cargo deck: 150' × 35' = 1800" × 420"
- Dimensions from main cargo deck area specifications
- Landing craft for tactical intra-theater movement and beach landings
Center of Gravity Parameters
Target CG Location:
- Set at geometric center: (B_optx = L/2, B_opty = W/2)
- Standard assumption absent vessel-specific stability curves
- Represents neutral loading condition
CG Deviation Tolerances:
Tolerances (δ_x, δ_y) defined as percentages of vessel dimensions:
| Tolerance Level | X-Direction (% of Length) | Y-Direction (% of Width) |
|---|---|---|
| 1% | ±0.01 × L | ±0.01 × W |
| 5% | ±0.05 × L | ±0.05 × W |
| 10% | ±0.10 × L | ±0.10 × W |
| 15% | ±0.15 × L | ±0.15 × W |
Detailed Example: Runnymede-class LCU (L=1800", W=420"):
For a vessel with length L = 1800 inches and width W = 420 inches:
- Geometric center target: (B_optx = 900", B_opty = 210")
1% CG Deviation:
- δ_x = 0.01 × 1800" = ±18"
- δ_y = 0.01 × 420" = ±4.2"
- Allowable CG x-coordinate range: [882", 918"] (centered at 900")
- If cargo CG falls outside this range, load is unbalanced in longitudinal direction
- Allowable CG y-coordinate range: [205.8", 214.2"] (centered at 210")
- If cargo CG falls outside this range, load is unbalanced in lateral direction
- Interpretation: A 1% CG deviation allows the cargo center of gravity x-coordinate to vary within ±18 inches of the target (i.e., between 882 and 918 inches if centered at 900 inches). This represents an extremely tight constraint requiring precise load distribution.
5% CG Deviation:
- δ_x = 0.05 × 1800" = ±90"
- δ_y = 0.05 × 420" = ±21"
- Allowable CG x-range: [810", 990"]
- Allowable CG y-range: [189", 231"]
10% CG Deviation:
- δ_x = 0.10 × 1800" = ±180"
- δ_y = 0.10 × 420" = ±42"
- Allowable CG x-range: [720", 1080"]
- Allowable CG y-range: [168", 252"]
15% CG Deviation:
- δ_x = 0.15 × 1800" = ±270"
- δ_y = 0.15 × 420" = ±63"
- Allowable CG x-range: [630", 1170"]
- Allowable CG y-range: [147", 273"]
Scaling Rationale:
This percentage-based approach scales tolerances appropriately with vessel size, reflecting that larger vessels can typically accommodate greater absolute deviations while maintaining stability. For instance, a 1% deviation on the Whidbey Island-class (L = 5280", W = 600") yields δ_x = ±52.8" and δ_y = ±6": larger absolute values than for the LCU, but the same relative constraint on load distribution.
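The tolerance arithmetic above is straightforward to script. The repository's own code is Julia; the following Python sketch (function name ours) reproduces the Runnymede-class worked example:

```python
def cg_bounds(length, width, delta):
    """Allowable CG ranges for a deck of the given dimensions (inches)
    and a fractional tolerance delta (e.g., 0.01 for 1%)."""
    cx, cy = length / 2, width / 2          # geometric-center target
    dx, dy = delta * length, delta * width  # absolute tolerances
    return (cx - dx, cx + dx), (cy - dy, cy + dy)

# Runnymede-class LCU at the 1% tolerance level
x_range, y_range = cg_bounds(1800, 420, 0.01)
print(x_range)                               # (882.0, 918.0)
print(tuple(round(v, 1) for v in y_range))   # (205.8, 214.2)
```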
2.4 Instance Generation Algorithm
Two-Phase Assignment Approach
The instance generation process uses a two-phase approach to create realistic loading scenarios:
Phase 1: Remainder Item Processing
- For each vessel-target combination, the algorithm first attempts to place any remaining items from previous assignments that could not fit in earlier vessels
- If the remaining items list is empty or items don't fit in the current vessel, the algorithm moves to Phase 2
- This simulates realistic cross-loading where equipment from a tactical unit may need to be distributed across multiple vessels
Phase 2: Sequential Group Assignment
- If Phase 1 doesn't fill the vessel to target capacity, the algorithm moves to the next equipment group in sequence
- Within each group, items are evaluated individually for placement
- Items that fit within the remaining target area are added to the current assignment
- Items that would exceed the target utilization are placed in a queue (remainder list) for future vessel assignments
- This process continues until the vessel reaches its target utilization or all groups have been processed
Detailed Algorithm Mechanics:
- Vessel-Target Pair Creation: The algorithm begins by creating vessel-target pairs from five utilization levels (65%, 70%, 75%, 80%, 85%) across all six vessel types, yielding 30 possible combinations.
- Cyclic Assignment: The algorithm cycles through these pairs systematically, ensuring even distribution across utilization levels (targeting 14 instances per level for 70 total instances).
- Target Area Calculation: For each vessel-target combination, the process begins with an empty assignment and calculates the target area as the vessel's deck area multiplied by the desired utilization percentage. For example, for a Runnymede-class LCU (1800" × 420" = 756,000 sq in) at 70% utilization, the target area is 529,200 sq in.
- Item-by-Item Evaluation: Equipment is not assigned as complete groups when capacity constraints exist. Instead, individual items within a group are evaluated sequentially. If an item fits, it's added; if not, it's queued for the next vessel. This creates the realistic scenario of partial group loading.
- Utilization Constraints:
- Historical Precedent: The 65% lower bound represents worst-case scenarios where available vessels may not be ideally suited for the cargo mix
- Feasibility Constraints: Prior testing at 90% utilization showed that a majority of instances became infeasible due to geometric packing constraints and load balancing requirements
- The 65-85% range provides a realistic spectrum from conservative to aggressive loading
- Cross-Loading Realism: When a group's equipment exceeds the target utilization for a vessel, items are split across multiple instances, with excess items potentially forming partial groups in subsequent assignments. This splitting behavior reflects realistic tactical scenarios where units may be divided across vessels due to capacity constraints, adding operational realism to the problem instances.
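A condensed sketch of this loop (in Python, with illustrative field names; the actual generator shipped in src.zip is Julia) is:

```python
from itertools import cycle

def generate_assignments(vessels, utilizations, groups, n_instances):
    """Simplified sketch of the two-phase generator. Cycles over
    (vessel, utilization) pairs, fills each instance from the remainder
    queue first (Phase 1), then from the next groups in MTOE order
    (Phase 2), splitting groups whose items would exceed the target."""
    pairs = cycle([(v, u) for u in utilizations for v in vessels])
    group_iter = iter(groups)
    remainder, instances = [], []
    for _ in range(n_instances):
        (L, W), util = next(pairs)
        target, used, load = L * W * util, 0.0, []
        # Phase 1: place previously unassigned items if they fit
        remainder, queue = [], remainder
        for item in queue:
            if used + item["length"] * item["width"] <= target:
                load.append(item)
                used += item["length"] * item["width"]
            else:
                remainder.append(item)
        # Phase 2: draw the next groups, evaluating items one by one
        while used < target:
            group = next(group_iter, None)
            if group is None:
                break
            for item in group:
                if used + item["length"] * item["width"] <= target:
                    load.append(item)
                    used += item["length"] * item["width"]
                else:
                    remainder.append(item)  # queued for a later vessel
        instances.append({"vessel": (L, W), "target_area": target,
                          "items": load})
    return instances
```

Note how a group that does not fit in full is split: the items that fit stay in the current instance, and the excess is cross-loaded into a later one via the remainder queue.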
Key Implementation Details
Equipment Cycling:
- Ensures even distribution across vessel types and utilization levels
- Creates 14 instances per utilization level (70 total ÷ 5 levels)
- Natural variation in group composition due to cycling pattern
Group Splitting Behavior:
- When group cannot fit entirely, items split based on area constraints
- Maintains group integrity when possible
- Reflects realistic cross-loading in capacity-constrained operations
Instance Naming Convention:
instance_XX_VESSEL_groups_YY_items_ZZ.json
Where:
XX = Instance number (01-70)
VESSEL = Vessel class abbreviation
YY = Number of groups in instance
ZZ = Number of items in instance
Example: instance_42_LHD_groups_09_items_44.json
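For scripted workflows, the convention can be parsed with a regular expression. This is a Python sketch; the pattern is written directly from the template above.

```python
import re

# Named groups mirror the XX / VESSEL / YY / ZZ fields of the convention
NAME_RE = re.compile(
    r"instance_(?P<num>\d+)_(?P<vessel>[A-Za-z]+)"
    r"_groups_(?P<groups>\d+)_items_(?P<items>\d+)\.json")

m = NAME_RE.match("instance_42_LHD_groups_09_items_44.json")
info = {k: (int(v) if v.isdigit() else v) for k, v in m.groupdict().items()}
print(info)  # {'num': 42, 'vessel': 'LHD', 'groups': 9, 'items': 44}
```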
3. Instance Files
Instance File Format
Each instance is stored as a JSON file with the following structure:
{
"bin_dimensions": {
"class": "Runnymede Class LCU",
"length": 1800,
"width": 420,
"target_utilization": 70,
"B_opt_X": 900.0,
"B_opt_Y": 210.0,
"delta_x": 0.01,
"delta_y": 0.01
},
"groups": [
{
"id": 1,
"priority": 1,
"UIC": "UIC_001",
"PARNO": "PARNO_001",
"PARATITLE": "PARATITLE_001",
"item_ids": [1, 2, 3, 4, 5, 6],
"size_category": "medium",
"heterogeneity": "strong",
"split": true
},
...
],
"items": [
{
"id": 1,
"group_id": 1,
"priority": 6,
"length": 321,
"width": 152,
"height": 115,
"weight": 110510,
"MOSLIN": "MOSLIN_001",
"SHIP CODE": "B",
"AREA_TO_WEIGHT_RATIO": 0.441516605
},
...
],
"num_groups": 2,
"num_items": 11,
"heterogeneity_pattern": "ss",
"group_size_pattern": "ml"
}
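A minimal consumer of this format (a Python sketch; the repository's solvers read these files in Julia) can load the fields and sanity-check the derived AREA_TO_WEIGHT_RATIO, which is the item footprint in square inches divided by its weight in pounds:

```python
import json

# Trimmed-down instance with the same schema as the structure above
instance_json = """
{
  "bin_dimensions": {"class": "Runnymede Class LCU", "length": 1800,
                     "width": 420, "target_utilization": 70,
                     "B_opt_X": 900.0, "B_opt_Y": 210.0,
                     "delta_x": 0.01, "delta_y": 0.01},
  "groups": [{"id": 1, "priority": 1, "item_ids": [1]}],
  "items": [{"id": 1, "group_id": 1, "priority": 1,
             "length": 321, "width": 152, "height": 115,
             "weight": 110510}],
  "num_groups": 1, "num_items": 1
}
"""

inst = json.loads(instance_json)
bin_ = inst["bin_dimensions"]
deck_area = bin_["length"] * bin_["width"]   # 756,000 sq in
item = inst["items"][0]
footprint = item["length"] * item["width"]   # 48,792 sq in
ratio = footprint / item["weight"]           # matches AREA_TO_WEIGHT_RATIO
print(round(ratio, 4))                       # 0.4415
```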
4. Solution Methods
This repository implements three solution methods for the prioritized 2D rectangular packing problem with load balancing constraints:
Method 1: Single MILP Approach
Solves the entire instance at once using a mixed-integer linear program (MILP) with full load balancing constraints. This is the exact method that provides optimal solutions but may be computationally expensive for large instances.
Key Parameters:
- delta_vals: CG deviation tolerances (e.g., 0.01, 0.05, 0.10, 0.15)
- time_limit: Maximum solve time in seconds (default: 3600)
- nThreads: Gurobi solver threads
Method 2: Standard Sliding Window Approach
A two-stage decomposition approach:
- Stage 1: Solve using sliding window decomposition without load balancing constraints
- Stage 2: Iteratively unfix items to achieve load balance feasibility
Key Parameters:
- window_size: Number of items in the sliding window (default: 7)
- time_limit_per_window: Time limit per window iteration (default: 5 seconds)
- time_limit_stage2: Time limit per Stage 2 iteration (default: 5 seconds)
- unfixing_strategies: reverse_priority, decreasing_xr, mass_weighted_xr, greedy_cg_impact
Method 3: In-Stride Load Balancing Approach
A two-stage approach with load balancing penalties applied during Stage 1:
- Stage 1: Windowed decomposition with lambda penalty terms for CG deviation
- Stage 2: Iterative unfixing adjustment (same as Method 2)
Variants:
- standard: Y-direction penalty only
- in_out: Both X and Y penalties with linear interpolation inside/outside the tolerance
Lambda Modes:
- constant: Fixed lambda factor throughout all iterations
- dynamic: Exponentially increasing lambda schedule
Key Parameters:
- lambda_factor: Penalty weight for constant mode (e.g., 0.05, 0.25)
- curve_shape: Controls steepness of the dynamic schedule (e.g., 1.0, 2.5, 5.0, 10.0)
- lambda_max: Maximum lambda value for dynamic mode (e.g., 0.25, 0.5)
- lambda_inside_factor: Penalty reduction inside the tolerance zone (e.g., 0.05, 0.10, 0.20)
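The exact dynamic schedule is defined in the Julia sources in src.zip. Purely as a hedged sketch (this formula is our assumption, not the repository's verified schedule), an exponentially increasing ramp shaped by curve_shape and capped at lambda_max could look like:

```python
import math

def dynamic_lambda(iteration, n_iterations, lambda_max, curve_shape):
    """Hypothetical exponential ramp: the penalty starts near zero and
    reaches lambda_max at the final iteration; larger curve_shape keeps
    lambda low for longer before ramping steeply."""
    t = iteration / n_iterations  # progress in [0, 1]
    return lambda_max * math.expm1(curve_shape * t) / math.expm1(curve_shape)
```

With curve_shape = 1.0 the ramp is nearly linear; at 10.0 almost all of the penalty weight arrives in the last few iterations, which matches the intuition of letting early windows pack freely and enforcing balance late.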
5. Repository Structure
data_repo/
├── README.md # This file
├── run_all.jl # Parallel experiment runner
├── src.zip/
│ ├── vessel_loading_solver.jl # Main entry point
│ ├── solvers.jl # Core solver functions
│ └── experiment_runners.jl # Experiment runner functions
├── instances.zip/ # 70 JSON instance files
│ └── instance_*.json
└── results/ # Created when running experiments
└── <instance_name>/
└── *.json result files
6. Reproduction Instructions
Prerequisites
- Julia: Install Julia 1.11.6 or later from https://julialang.org/downloads/
- Gurobi: Install Gurobi 12.0.2 with a valid license from https://www.gurobi.com/
- Clone or download this repository, then extract instances.zip and src.zip so that the instances/ and src/ paths used below exist
Package Installation
# Start Julia in the repository directory
julia --project=.
# Install required packages (first time only)
using Pkg
Pkg.instantiate()
Quick Start: Single Instance Examples
# Load the solver
cd("path/to/data_repo")
include("src/vessel_loading_solver.jl")
# Method 1: Single MILP approach
run_single_solve_on_vessel_instance(
"instances/instance_1_11g_37it_70SU_AC-LHA.json", 8;
delta_vals=[0.05],
time_limit=3600,
results_dir="results"
)
# Method 2: Standard windowed + Stage 2
run_windowed_then_iterative_balance(
"instances/instance_1_11g_37it_70SU_AC-LHA.json",
7, 5, 5, 8; # window_size, time_per_window, time_stage2, threads
delta_vals=[0.05],
offload_labels=["CL"],
unfixing_strategies=["reverse_priority"],
results_dir="results"
)
# Method 3: In-stride + Stage 2
run_windowed_in_stride_then_iterative_balance(
"instances/instance_1_11g_37it_70SU_AC-LHA.json",
7, 5, 5, 8;
delta_vals=[0.05],
variants=["in_out"],
lambda_modes=["dynamic"],
lambda_curve_shapes=[2.5],
lambda_max_values=[0.25],
lambda_inside_factors=[0.10],
results_dir="results"
)
Full Reproduction: All 70 Instances
# Run all experiments in parallel (8 threads)
julia --project=. -t 8 run_all.jl
# Quick test on single instance
julia --project=. -t 8 run_all.jl --quick
# Run only specific method
julia --project=. -t 8 run_all.jl --method=1 # Single MILP only
julia --project=. -t 8 run_all.jl --method=2 # Windowed only
julia --project=. -t 8 run_all.jl --method=3 # In-stride only
Estimated Computation Time:
- Method 1 (Single MILP): ~1-60 minutes per instance depending on size
- Method 2 (Windowed): ~2-10 minutes per instance
- Method 3 (In-stride): ~2-10 minutes per instance
- Full reproduction: Several days with parallel execution
Output Format
Results are saved as JSON files in the results/ directory with the following structure:
{
"method": "two_stage_windowed_iterative_balance",
"instance": "instance_1_11g_37it_70SU_AC-LHA",
"num_items": 37,
"delta_value": 0.05,
"stage1_objective": 12345.67,
"stage2_load_balancing_feasible": true,
"final_objective": 12400.89,
"total_solve_time": 45.2,
"timestamp": "2025-01-15 10:30:00"
}
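Result files accumulate quickly across 70 instances and many parameter combinations; a small Python helper (illustrative only; the field name is taken from the example structure above) can aggregate them:

```python
import json
import pathlib

def summarize_results(results_dir):
    """Scan result JSON files under results_dir and report how many
    runs achieved load-balancing feasibility in Stage 2."""
    runs = [json.loads(p.read_text())
            for p in pathlib.Path(results_dir).rglob("*.json")]
    feasible = sum(bool(r.get("stage2_load_balancing_feasible"))
                   for r in runs)
    return {"runs": len(runs), "feasible": feasible}
```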
7. Key Parameters from Experiments
| Parameter | Value(s) |
|---|---|
| delta_vals (CG deviation) | 0.01, 0.05, 0.10, 0.15 |
| window_size | 7 |
| time_limit_per_window | 5 seconds |
| time_limit_stage2 | 5 seconds |
| initial_free_items | 6 |
| offload_labels | ["CL"] (center-left only) |
| variants | ["standard", "in_out"] |
| lambda_modes | ["constant", "dynamic"] |
| lambda_factors | [0.05, 0.25] |
| lambda_curve_shapes | [1.0, 2.5, 5.0, 10.0] |
| lambda_max_values | [0.25, 0.5] |
| lambda_inside_factors | [0.05, 0.10, 0.20] |
| unfixing_strategies | [reverse_priority, decreasing_xr, mass_weighted_xr, greedy_cg_impact] |
8. Citation
If you use this code or these instances in your research, please consider citing this repository (https://doi.org/10.5061/dryad.vt4b8gv5z) and the associated paper on prioritized 2D orthogonal packing (https://doi.org/10.31224/6041).
9. License
This code is provided as supplemental material for academic research purposes. Both the code and the data are released under the CC0 license and are in the public domain.
