ModelGrouping
=============

Trial-to-model mapping for CrossSegmentationExplorer-style comparison.

Maps optimization trials to "models" for side-by-side comparison, similar to
how CrossSegmentationExplorer groups AI segmentations. See ADR-018 for design
rationale.

Classes
-------

.. py:class:: ComparisonModel

   A model for comparison, containing one or more trials.

   In CrossSegmentationExplorer, a "model" is a group of segmentations from
   the same AI model. For optimization, we map this to groupings like
   "all watershed trials" or "best trial per algorithm".

   Attributes:
       name: Display name for this model (e.g., "watershed", "Gold Standard").
       trials: List of trials belonging to this model.
       color: Optional display color (R, G, B) for visualization.
       metadata: Additional metadata about this model.

   **Methods:**

   .. py:method:: best_trial()

      Get the best performing trial in this model.

   .. py:method:: best_score()

      Get the best Dice score in this model.

   .. py:method:: trial_count()

      Get the number of trials in this model.

.. py:class:: TrialModelMapper

   Map optimization trials to comparison models.

   Provides various grouping strategies for organizing trials into
   CrossSegmentationExplorer-compatible "models".

   Usage::

       mapper = TrialModelMapper()

       # Group by algorithm
       models = mapper.group_by_algorithm(trials)
       # Returns: {"watershed": ComparisonModel, "geodesic": ComparisonModel, ...}

       # Get top N per algorithm
       models = mapper.get_top_n_per_algorithm(trials, n=1)
       # Returns: {"watershed": ComparisonModel(trials=[best_watershed]), ...}

       # Group by Dice score range
       ranges = [(0.95, 1.0, "excellent"), (0.90, 0.95, "good"), (0.0, 0.90, "poor")]
       models = mapper.group_by_dice_range(trials, ranges)

   **Methods:**

   .. py:method:: __init__()

      Initialize the trial-to-model mapper.

   .. py:method:: group_by_algorithm()

      Group trials by algorithm parameter.

   .. py:method:: get_top_n_per_algorithm()

      Get the top N trials by Dice score for each algorithm.

   .. py:method:: group_by_dice_range()

      Group trials by Dice score ranges.

   .. py:method:: group_by_trial_numbers()

      Select specific trials by number for comparison.

   .. py:method:: create_gold_standard_model()

      Create a placeholder model for the gold standard segmentation.

   .. py:method:: get_best_overall()

      Get a model containing only the single best trial.

   .. py:method:: filter_by_algorithm()

      Filter trials to include only specified algorithms.

   .. py:method:: filter_by_min_dice()

      Filter trials to include only those above a minimum Dice score.

Functions
---------

.. py:function:: group_by_algorithm()

   Group trials by algorithm parameter.

   Args:
       trials: List of TrialData objects to group.
       include_empty: If True, include algorithms with no trials.

   Returns:
       Dictionary mapping algorithm name to ComparisonModel.

.. py:function:: get_top_n_per_algorithm()

   Get the top N trials by Dice score for each algorithm.

   Args:
       trials: List of TrialData objects.
       n: Number of top trials to include per algorithm.

   Returns:
       Dictionary mapping algorithm name to ComparisonModel with the top N trials.

.. py:function:: group_by_dice_range()

   Group trials by Dice score ranges.

   Args:
       trials: List of TrialData objects.
       ranges: List of (min, max, name) tuples defining score ranges.
           Example: [(0.95, 1.0, "excellent"), (0.90, 0.95, "good")]

   Returns:
       Dictionary mapping range name to ComparisonModel.
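The grouping helpers all follow the same pattern: partition a flat list of
trials, then wrap each partition in a ComparisonModel. The sketch below
illustrates that pattern for Dice-range bucketing. It is illustrative only:
the TrialData stand-in (with a dice_score attribute), the local
ComparisonModel dataclass, and the boundary handling are assumptions, not this
module's confirmed implementation::

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TrialData:                    # assumed shape, for illustration only
        number: int
        dice_score: float
        params: dict = field(default_factory=dict)

    @dataclass
    class ComparisonModel:              # mirrors the attributes documented above
        name: str
        trials: list = field(default_factory=list)
        color: Optional[tuple] = None
        metadata: dict = field(default_factory=dict)

    def group_by_dice_range_sketch(trials, ranges):
        """Bucket trials into named (min, max) Dice ranges.

        ranges is a list of (min, max, name) tuples, as in the usage example
        above. Upper bounds are treated as exclusive, except that a perfect
        score of 1.0 falls into the top range.
        """
        models = {name: ComparisonModel(name=name) for _, _, name in ranges}
        for trial in trials:
            for lo, hi, name in ranges:
                if lo <= trial.dice_score < hi or (hi == 1.0 and trial.dice_score == 1.0):
                    models[name].trials.append(trial)
                    break
        return models

    trials = [TrialData(0, 0.97), TrialData(1, 0.92), TrialData(2, 0.81)]
    ranges = [(0.95, 1.0, "excellent"), (0.90, 0.95, "good"), (0.0, 0.90, "poor")]
    grouped = group_by_dice_range_sketch(trials, ranges)
    print({name: len(model.trials) for name, model in grouped.items()})
    # {'excellent': 1, 'good': 1, 'poor': 1}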
.. py:function:: group_by_trial_numbers()

   Select specific trials by number for comparison.

   Args:
       trials: List of TrialData objects.
       trial_numbers: List of trial numbers to include.

   Returns:
       Dictionary mapping "trial_N" to ComparisonModel.

.. py:function:: create_gold_standard_model()

   Create a placeholder model for gold standard segmentation.

   The gold standard is treated as a special "model" in comparison views.

   Args:
       gold_name: Display name for the gold standard.

   Returns:
       ComparisonModel for the gold standard (the trials list will be empty;
       the segmentation is loaded separately).

.. py:function:: get_best_overall()

   Get a model containing only the single best trial.

   Args:
       trials: List of TrialData objects.

   Returns:
       ComparisonModel with the single best trial.

.. py:function:: filter_by_algorithm()

   Filter trials to include only specified algorithms.

   Args:
       trials: List of TrialData objects.
       algorithms: List of algorithm names to include.

   Returns:
       Filtered list of trials.

.. py:function:: filter_by_min_dice()

   Filter trials to include only those above a minimum Dice score.

   Args:
       trials: List of TrialData objects.
       min_dice: Minimum Dice score threshold.

   Returns:
       Filtered list of trials.

.. py:function:: quick_compare_algorithms()

   Quick comparison: best trial per algorithm.

   Args:
       trials: List of TrialData objects.

   Returns:
       Dictionary of algorithm name to ComparisonModel with the best trial.

.. py:function:: quick_compare_top_trials()

   Quick comparison: top N trials overall.

   Args:
       trials: List of TrialData objects.
       n: Number of top trials to include.

   Returns:
       Dictionary of trial identifiers to ComparisonModel.
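A short end-to-end sketch of how the module-level helpers might be combined to
build a comparison set: drop weak trials, keep the best trial per algorithm,
and add the gold standard as its own "model". The import path and the
load_trials() helper are hypothetical placeholders; only the functions and
methods documented on this page are assumed to exist::

    from model_grouping import (          # import path assumed from the page title
        create_gold_standard_model,
        filter_by_min_dice,
        quick_compare_algorithms,
    )

    trials = load_trials("results/study_001")   # hypothetical loader, not part of this module

    # Keep only trials above a minimum Dice score, then pick the best trial
    # for each algorithm.
    usable = filter_by_min_dice(trials, min_dice=0.80)
    models = quick_compare_algorithms(usable)

    # The gold standard participates as a special, trial-less model; its
    # segmentation is loaded separately.
    models["Gold Standard"] = create_gold_standard_model(gold_name="Gold Standard")

    for name, model in models.items():
        if model.trial_count() > 0:
            print(f"{name}: best Dice {model.best_score():.3f}")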