
Core API Reference

This page documents the core NullStrike modules and functions for programmatic use. The API provides fine-grained control over the analysis process and enables integration into custom workflows.

Main Analysis Function

main() - Complete Analysis

nullstrike.cli.complete_analysis.main()

Main function to run complete analysis.

Source code in src/nullstrike/cli/complete_analysis.py
def main():
    """Main function to run complete analysis."""

    # Parse command line arguments
    model_name = sys.argv[1] if len(sys.argv) > 1 and not sys.argv[1].startswith('--') else 'calibration_single'

    # Determine options file: specific options file > default options file
    if len(sys.argv) > 2 and not sys.argv[2].startswith('--'):
        # User specified options file explicitly
        options_file = sys.argv[2]
    else:
        # Check if model-specific options file exists
        model_specific_options = f'options_{model_name}'
        try:
            # Try to import the model-specific options to see if it exists
            __import__(f'custom_options.{model_specific_options}')
            options_file = model_specific_options
        except ImportError:
            # Fall back to default options file
            options_file = 'options_default'

    # Check for scope option
    analysis_scope = 'full'  # default
    if '--parameters-only' in sys.argv or '-p' in sys.argv:
        analysis_scope = 'parameters'

    print(f"Analysis scope: {analysis_scope}")

    print("="*80)
    print("COMPLETE IDENTIFIABILITY ANALYSIS")
    print("StrikePy + Nullspace Analysis")
    print("="*80)

    try:


        print("\nMETHOD 1: Integrated Analysis")

        print("-" * 40)
        from nullstrike.analysis.integrated_analysis import run_integrated_analysis
        results = run_integrated_analysis(model_name, options_file, analysis_scope)

        print(f"\n✓ Analysis completed successfully for model: {results['model_name']}")

    except Exception as e:
        print(f"Method 1 failed: {e}")
        print("\nTrying Method 2: Step-by-step analysis...")

        try:
            # Method 2: Step-by-step (fallback)
            print("\nMETHOD 2: Step-by-Step Analysis") 
            print("-" * 40)

            results = run_step_by_step_analysis(model_name, options_file)

        except Exception as e2:
            print(f"Method 2 also failed: {e2}")
            print("\nTrying Method 3: Analyze existing results...")

            # Method 3: Just analyze existing StrikePy results
            if model_name:
                from nullstrike.analysis.enhanced_subspace import analyze_strikepy_results
                results = analyze_strikepy_results(model_name)
            else:
                print("Please provide a model name or run StrikePy first.")
                return

    # Display final summary
    display_final_summary(results)

The primary entry point for running complete NullStrike analysis programmatically.

Basic Usage

As the source above shows, main() reads its arguments from sys.argv rather than from function parameters, so set sys.argv before calling it:

import sys
from nullstrike.cli.complete_analysis import main

# Run analysis with default options
sys.argv = ['nullstrike', 'C2M']
main()

# Run with a custom options file
sys.argv = ['nullstrike', 'my_model', 'options_my_model']
main()

# Run a parameters-only analysis
sys.argv = ['nullstrike', 'my_model', '--parameters-only']
main()

Return Value

main() prints a summary via display_final_summary() and does not return the results dictionary itself; the results dictionary assembled internally (by run_integrated_analysis()) contains:

  • 'success': Boolean indicating if analysis completed successfully
  • 'model_name': Name of the analyzed model
  • 'options_file': Options file used
  • 'results_path': Path to results directory
  • 'computation_time': Total analysis time in seconds
  • 'identifiable_parameters': List of identifiable parameter names
  • 'unidentifiable_parameters': List of unidentifiable parameter names
  • 'nullspace_dimension': Dimension of the nullspace
  • 'rank': Rank of the identifiability matrix

Example with Error Handling

import sys
from nullstrike.cli.complete_analysis import main

try:
    sys.argv = ['nullstrike', 'my_model']
    main()
    print("Analysis complete; see the results directory for outputs.")
except Exception as e:
    print(f"Error during analysis: {e}")

Core Analysis Modules

STRIKE-GOLDD Implementation

nullstrike.core.strike_goldd

STRIKE-GOLDD (Structural Identifiability taken as Extended-Generalized Observability with Lie Derivatives)

  • A Python implementation of the MATLAB STRIKE-GOLDD toolbox for structural identifiability and observability analysis of nonlinear models, by David Rey Rostro (davidreyrostro@gmail.com)
  • Based on the MATLAB STRIKE-GOLDD version by Alejandro Fernandez Villaverde (afvillaverde@uvigo.gal)
  • User input can be specified in the file 'options.py' or by creating an options file in the custom_options folder

The core STRIKE-GOLDD algorithm implementation for structural identifiability analysis.

Key Functions

strike_goldd_analysis(model, options)

Performs the complete STRIKE-GOLDD analysis.

Parameters:

  • model: Model object containing symbolic definitions
  • options: Configuration object with analysis parameters

Returns:

  • Dictionary with observability matrix, rank, and identifiability results

compute_lie_derivatives(h, f, x, max_order)

Computes Lie derivatives up to specified order.

Parameters:

  • h: Output function (SymPy expression)
  • f: System dynamics (SymPy Matrix)
  • x: State variables (SymPy Matrix)
  • max_order: Maximum derivative order

Returns:

  • List of Lie derivatives

build_observability_matrix(lie_derivatives)

Constructs the observability matrix from Lie derivatives.

Parameters:

  • lie_derivatives: List of SymPy expressions

Returns:

  • SymPy Matrix representing observability matrix

Usage Example

from nullstrike.core.strike_goldd import strike_goldd_analysis
import sympy as sym

# Define simple model
x1, x2 = sym.symbols('x1 x2')
p1, p2 = sym.symbols('p1 p2')

model = {
    'x': sym.Matrix([[x1], [x2]]),
    'p': sym.Matrix([[p1], [p2]]),
    'f': sym.Matrix([[-p1*x1], [p1*x1 - p2*x2]]),
    'h': sym.Matrix([x1])
}

# Run STRIKE-GOLDD analysis ('options' is a loaded configuration object, e.g. an imported options module)
results = strike_goldd_analysis(model, options)
print(f"Matrix rank: {results['rank']}")

Nullspace Analysis

nullstrike.analysis.enhanced_subspace

analyze_identifiable_combinations(O_matrix, param_symbols, state_symbols, input_symbols=None, analysis_scope='full')

Enhanced version that handles the specific structure of StrikePy's augmented state vector.

Parameters:

  • analysis_scope : str
    'full' - analyze all variables (states + parameters + inputs) [DEFAULT]
    'parameters' - analyze only parameters

Source code in src/nullstrike/analysis/enhanced_subspace.py
def analyze_identifiable_combinations(O_matrix, param_symbols, state_symbols, input_symbols=None, analysis_scope='full'):
    """
    Enhanced version that handles the specific structure of StrikePy's augmented state vector.

    Parameters:
    -----------
    analysis_scope : str
        'full' - analyze all variables (states + parameters + inputs) [DEFAULT]
        'parameters' - analyze only parameters
    """

    if input_symbols is None:
        input_symbols = []

    n_states = len(state_symbols)
    n_params = len(param_symbols) 
    n_inputs = len(input_symbols)

    print("\n" + "="*60)
    print("ENHANCED NULLSPACE ANALYSIS")
    print("="*60)

    print(f"Augmented state vector structure:")
    print(f"  States (indices 0-{n_states-1}): {state_symbols}")
    print(f"  Parameters (indices {n_states}-{n_states+n_params-1}): {param_symbols}")
    if n_inputs > 0:
        print(f"  Unknown inputs (indices {n_states+n_params}-{n_states+n_params+n_inputs-1}): {input_symbols}")

    # Compute nullspace
    print("\nComputing nullspace...")
    nullspace_basis = O_matrix.nullspace()

    if len(nullspace_basis) == 0:
        print("✓ The nullspace is empty - system is fully observable and identifiable!")
        return {
            "fully_identifiable": True,
            "matrix_rank": O_matrix.rank(),
            "expected_rank": O_matrix.cols
        }

    print(f"Nullspace dimension: {len(nullspace_basis)}")
    print(f"Matrix rank: {O_matrix.rank()}/{O_matrix.cols}")
    print(f"Rank deficiency: {O_matrix.cols - O_matrix.rank()}")

    # Analyze each nullspace vector
    unidentifiable_patterns = []

    for i, null_vec in enumerate(nullspace_basis):
        print(f"\n--- Nullspace Vector {i+1} ---")
        print(f"Full vector: {null_vec}")

        # Extract components
        state_part = null_vec[:n_states] if n_states > 0 else []
        param_part = null_vec[n_states:n_states+n_params] if n_params > 0 else []
        input_part = null_vec[n_states+n_params:] if n_inputs > 0 else []

        print(f"State components: {state_part}")
        print(f"Parameter components: {param_part}")
        if input_part:
            print(f"Input components: {input_part}")

        # Analyze state observability
        unobs_states = []
        for j, coeff in enumerate(state_part):
            if coeff != 0:
                unobs_states.append((state_symbols[j], coeff))

        # Analyze parameter identifiability  
        param_combo = []
        for j, coeff in enumerate(param_part):
            if coeff != 0:
                param_combo.append((param_symbols[j], coeff))

        # Analyze input observability
        input_combo = []
        for j, coeff in enumerate(input_part):
            if coeff != 0:
                input_combo.append((input_symbols[j], coeff))

        # Create human-readable relationship
        relationship = build_relationship_string(param_combo, input_combo, unobs_states)

        pattern = {
            'vector_index': i,
            'nullspace_vector': null_vec,  # Store the original vector!
            'unobservable_states': unobs_states,
            'parameter_combination': param_combo,
            'input_combination': input_combo,
            'relationship': relationship,
            'interpretation': interpret_pattern(param_combo, input_combo, unobs_states)
        }

        unidentifiable_patterns.append(pattern)

        print(f"Relationship: {relationship}")
        print(f"Interpretation: {pattern['interpretation']}")

    identifiable_info = find_identifiable_directions(
        O_matrix, param_symbols, state_symbols, input_symbols, 
        nullspace_basis, unidentifiable_patterns, analysis_scope  # Add this parameter!
    )

    return {
        "fully_identifiable": False,
        "matrix_rank": O_matrix.rank(),
        "expected_rank": O_matrix.cols,
        "nullspace_dimension": len(nullspace_basis),
        "nullspace_basis": nullspace_basis,
        "unidentifiable_patterns": unidentifiable_patterns,
        "identifiable_info": identifiable_info
    }
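
A minimal usage sketch with a hypothetical 2x3 matrix over the augmented ordering [x1, p1, p2], where the outputs depend on x1 and only on the sum p1 + p2:

import sympy as sym
from nullstrike.analysis.enhanced_subspace import analyze_identifiable_combinations

x1, p1, p2 = sym.symbols('x1 p1 p2')

# Rows are gradients of outputs/Lie derivatives with respect to [x1, p1, p2]
O = sym.Matrix([
    [1, 0, 0],   # output sensitive to x1 only
    [0, 1, 1],   # output sensitive only to p1 + p2
])

results = analyze_identifiable_combinations(O, [p1, p2], [x1], analysis_scope='full')
print(results['nullspace_dimension'])                          # 1
print(results['unidentifiable_patterns'][0]['relationship'])   # (-p1 + p2) is unidentifiable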

analyze_strikepy_matrix(onx_matrix, model, analysis_scope='full')

Analyze StrikePy's observability-identifiability matrix for parameter combinations.

Parameters:

  • onx_matrix : sympy.Matrix - The observability-identifiability matrix from StrikePy
  • model : module - The model module containing x, p, h, f definitions
  • analysis_scope : str
    'full' - analyze all variables (default)
    'parameters' - analyze only parameters

Returns:

  • dict : Complete analysis results

Source code in src/nullstrike/analysis/enhanced_subspace.py
def analyze_strikepy_matrix(onx_matrix, model, analysis_scope='full'):
    """
    Analyze StrikePy's observability-identifiability matrix for parameter combinations.

    Parameters:
    -----------
    onx_matrix : sympy.Matrix
        The observability-identifiability matrix from StrikePy
    model : module
        The model module containing x, p, h, f definitions
    analysis_scope : str
        'full' - analyze all variables (default)
        'parameters' - analyze only parameters
        Returns:
    --------
    dict : Complete analysis results
    """


    # Extract model components
    state_symbols = extract_flat_symbols(model.x)
    param_symbols = extract_flat_symbols(model.p)

    # Handle unknown inputs if they exist
    input_symbols = []
    if hasattr(model, 'w') and model.w:
        input_symbols = extract_flat_symbols(model.w)

    print(f"Analyzing matrix of size {onx_matrix.shape}")
    print(f"States ({len(state_symbols)}): {state_symbols}")
    print(f"Parameters ({len(param_symbols)}): {param_symbols}")
    if input_symbols:
        print(f"Unknown inputs ({len(input_symbols)}): {input_symbols}")

    # Perform nullspace analysis
    # results = analyze_identifiable_combinations(
    #     onx_matrix, 
    #     param_symbols, 
    #     state_symbols,
    #     input_symbols
    # )
    results = analyze_identifiable_combinations(
        onx_matrix, 
        param_symbols, 
        state_symbols,
        input_symbols,
        analysis_scope)

    # Add model-specific interpretations
    results['model_info'] = {
        'name': getattr(model, '__name__', 'unknown'),
        'states': state_symbols,
        'parameters': param_symbols,
        'inputs': input_symbols,
        'outputs': extract_flat_symbols(model.h) if hasattr(model, 'h') else []
    }

    return results

analyze_strikepy_results(model_name='C2M', analysis_scope='full')

Complete analysis of StrikePy results for a given model.

Parameters:

  • analysis_scope : str
    'full' - analyze all variables (default)
    'parameters' - analyze only parameters

Source code in src/nullstrike/analysis/enhanced_subspace.py
def analyze_strikepy_results(model_name='C2M', analysis_scope='full'):
    """
    Complete analysis of StrikePy results for a given model.

    Parameters:
    -----------
    analysis_scope : str
        'full' - analyze all variables (default)
        'parameters' - analyze only parameters
    """
    # Import model
    import importlib
    try:
        model = importlib.import_module(f'custom_models.{model_name}')
    except ImportError:
        # Try without the models prefix
        model = importlib.import_module(model_name)

    # Load most recent OIC matrix
    from pathlib import Path
    import ast

    results_dir = get_results_dir()
    oic_files = list(results_dir.glob(f'obs_ident_matrix_{model_name}_*_Lie_deriv.txt'))

    if not oic_files:
        raise FileNotFoundError(f"No OIC matrix found for {model_name}. Run StrikePy first!")

    latest_file = max(oic_files)  # pick the lexicographically greatest matching filename

    onx_matrix = load_oic_matrix_with_symbols(latest_file, model_name)

    print(f"Loaded OIC matrix from: {latest_file}")

    # Perform analysis
    results = analyze_strikepy_matrix(onx_matrix, model, analysis_scope)

    # Generate report
    generate_summary_report(results)

    return results
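
A short usage sketch; it assumes StrikePy has already been run for the model, so that a matching obs_ident_matrix_*_Lie_deriv.txt file exists in the results directory:

from nullstrike.analysis.enhanced_subspace import analyze_strikepy_results

results = analyze_strikepy_results('C2M', analysis_scope='parameters')
if not results['fully_identifiable']:
    print(results['identifiable_info']['identifiable_combinations'])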

build_relationship_string(param_combo, input_combo, state_combo)

Build a human-readable string describing the unidentifiable combination.

Source code in src/nullstrike/analysis/enhanced_subspace.py
def build_relationship_string(param_combo, input_combo, state_combo):
    """Build a human-readable string describing the unidentifiable combination."""

    terms = []

    # Add parameter terms
    for param, coeff in param_combo:
        if coeff == 1:
            terms.append(str(param))
        elif coeff == -1:
            terms.append(f"-{param}")
        else:
            terms.append(f"{coeff}*{param}")

    # Add input terms  
    for inp, coeff in input_combo:
        if coeff == 1:
            terms.append(str(inp))
        elif coeff == -1:
            terms.append(f"-{inp}")
        else:
            terms.append(f"{coeff}*{inp}")

    # Add state terms
    for state, coeff in state_combo:
        if coeff == 1:
            terms.append(str(state))
        elif coeff == -1:
            terms.append(f"-{state}")
        else:
            terms.append(f"{coeff}*{state}")

    if not terms:
        return "No relationship found"

    relationship = " + ".join(terms).replace("+ -", "- ")
    return f"({relationship}) is unidentifiable"
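
For example, with coefficients taken from a hypothetical nullspace vector:

import sympy as sym
from nullstrike.analysis.enhanced_subspace import build_relationship_string

p1, p2 = sym.symbols('p1 p2')
print(build_relationship_string([(p1, 1), (p2, -1)], [], []))
# (p1 - p2) is unidentifiable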

extract_flat_symbols(nested_list)

Extract symbols from potentially nested lists (StrikePy format).

Source code in src/nullstrike/analysis/enhanced_subspace.py
def extract_flat_symbols(nested_list):
    """Extract symbols from potentially nested lists (StrikePy format)."""
    symbols = []
    if not nested_list:
        return symbols

    for item in nested_list:
        if isinstance(item, list):
            symbols.extend(extract_flat_symbols(item))
        elif hasattr(item, 'is_Symbol') and item.is_Symbol:
            symbols.append(item)
        else:
            # Try to convert to symbol if it's a string
            try:
                symbols.append(sym.Symbol(str(item)))
            except:
                pass

    return symbols
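
For example:

import sympy as sym
from nullstrike.analysis.enhanced_subspace import extract_flat_symbols

x1, x2 = sym.symbols('x1 x2')
print(extract_flat_symbols([[x1], [x2], ['p1']]))   # [x1, x2, p1]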

find_identifiable_directions(O_matrix, param_symbols, state_symbols, input_symbols, nullspace_vectors, patterns, analysis_scope='full')

Find which parameter/state/input combinations ARE identifiable.

Parameters:

  • O_matrix : sympy.Matrix - The observability-identifiability matrix
  • param_symbols : list - Parameter symbols
  • state_symbols : list - State symbols
  • input_symbols : list - Input symbols
  • nullspace_vectors : list - The actual nullspace vectors from O_matrix.nullspace()
  • patterns : list - Unidentifiable patterns (for interpretation only)
  • analysis_scope : str
    'full' - analyze all variables (states + parameters + inputs)
    'parameters' - analyze only parameters

Returns:

  • dict : Identifiable combinations and analysis results

Source code in src/nullstrike/analysis/enhanced_subspace.py
def find_identifiable_directions(O_matrix, param_symbols, state_symbols, input_symbols, 
                               nullspace_vectors, patterns, analysis_scope='full'):
    """
    Find which parameter/state/input combinations ARE identifiable.

    Parameters:
    -----------
    O_matrix : sympy.Matrix
        The observability-identifiability matrix
    param_symbols : list
        Parameter symbols
    state_symbols : list  
        State symbols
    input_symbols : list
        Input symbols
    nullspace_vectors : list
        The actual nullspace vectors from O_matrix.nullspace()
    patterns : list
        Unidentifiable patterns (for interpretation only)
    analysis_scope : str
        'full' - analyze all variables (states + parameters + inputs)
        'parameters' - analyze only parameters

    Returns:
    --------
    dict : Identifiable combinations and analysis results
    """

    n_states = len(state_symbols)
    n_params = len(param_symbols)
    n_inputs = len(input_symbols)

    if analysis_scope == 'parameters':
        print("Finding identifiable PARAMETER directions...")
        working_symbols = param_symbols
        # Extract only parameter components from nullspace vectors
        param_nullspace_vectors = []
        for vec in nullspace_vectors:
            param_part = vec[n_states:n_states+n_params]
            param_nullspace_vectors.append([coeff for coeff in param_part])
        working_nullspace_vectors = param_nullspace_vectors

    else:  # 'full'
        print("Finding identifiable directions for ALL variables...")
        working_symbols = state_symbols + param_symbols + input_symbols
        # Use full nullspace vectors
        working_nullspace_vectors = [[coeff for coeff in vec] for vec in nullspace_vectors]

    print(f"Analysis scope: {analysis_scope}")
    print(f"Working with {len(working_symbols)} variables: {[str(s) for s in working_symbols]}")

    if not working_nullspace_vectors:
        return {
            "all_vars_identifiable": True,
            "identifiable_combinations": [str(s) for s in working_symbols],
            "analysis_scope": analysis_scope
        }

    # Create nullspace matrix (rows = nullspace vectors) - no reconstruction needed!
    N = Matrix(working_nullspace_vectors)
    print(f"Nullspace matrix N (shape {N.shape}):\n{N}")
    print(f"Rank of N: {N.rank()}")
    print(f"Expected identifiable dimension: {len(working_symbols) - N.rank()}")

    # Method 1: Use row space of observability matrix directly
    identifiable_combinations = []

    try:
        if analysis_scope == 'full':
            # Get row space of the full observability matrix
            row_space_basis = O_matrix.T.columnspace()
            print(f"Method 1: Found {len(row_space_basis)} identifiable directions from O_matrix row space")
        else:
            # For parameters only, extract parameter columns
            param_start = n_states
            param_end = n_states + n_params
            O_param = O_matrix[:, param_start:param_end]
            row_space_basis = O_param.T.columnspace()
            print(f"Method 1: Found {len(row_space_basis)} identifiable parameter directions")

        for i, vec in enumerate(row_space_basis):
            combo_terms = []
            for j, coeff in enumerate(vec):
                if coeff != 0:
                    if coeff == 1:
                        combo_terms.append(str(working_symbols[j]))
                    elif coeff == -1:
                        combo_terms.append(f"-{working_symbols[j]}")
                    else:
                        combo_terms.append(f"({coeff})*{working_symbols[j]}")

            if combo_terms:
                combo_expr = " + ".join(combo_terms).replace("+ -", "- ")
                identifiable_combinations.append(combo_expr)
                print(f"\nIdentifiable direction {i+1}: {combo_expr}")

    except Exception as e:
        print(f"Method 1 (row space) failed: {e}")
        identifiable_combinations = []

    # Method 2: Nullspace of nullspace matrix
    if not identifiable_combinations:
        try:
            print("Method 2: Computing nullspace of nullspace matrix...")

            # Find vectors orthogonal to all nullspace vectors
            orthogonal_vectors = N.nullspace()

            print(f"Method 2: Found {len(orthogonal_vectors)} identifiable directions")

            for i, vec in enumerate(orthogonal_vectors):
                combo_terms = []
                for j, coeff in enumerate(vec):
                    if coeff != 0:
                        if coeff == 1:
                            combo_terms.append(str(working_symbols[j]))
                        elif coeff == -1:
                            combo_terms.append(f"-{working_symbols[j]}")
                        else:
                            combo_terms.append(f"({coeff})*{working_symbols[j]}")

                if combo_terms:
                    combo_expr = " + ".join(combo_terms).replace("+ -", "- ")
                    identifiable_combinations.append(combo_expr)
                    print(f"\nIdentifiable combination {i+1}: {combo_expr}")

        except Exception as e:
            print(f"Method 2 (nullspace) also failed: {e}")

    # Build results
    results = {
        "all_vars_identifiable": False,
        "n_identifiable_combinations": len(identifiable_combinations),
        "identifiable_combinations": identifiable_combinations,
        "nullspace_dimension": len(working_nullspace_vectors),
        "expected_identifiable_dimension": len(working_symbols) - len(working_nullspace_vectors),
        "analysis_scope": analysis_scope
    }

    # Classify combinations by variable type (only for full analysis)
    if analysis_scope == 'full':
        param_combinations = []
        state_combinations = []
        input_combinations = []
        mixed_combinations = []

        for combo in identifiable_combinations:
            has_param = any(str(p) in combo for p in param_symbols)
            has_state = any(str(s) in combo for s in state_symbols)
            has_input = any(str(i) in combo for i in input_symbols)

            if has_param and not has_state and not has_input:
                param_combinations.append(combo)
            elif has_state and not has_param and not has_input:
                state_combinations.append(combo)
            elif has_input and not has_param and not has_state:
                input_combinations.append(combo)
            else:
                mixed_combinations.append(combo)

        results.update({
            "parameter_combinations": param_combinations,
            "state_combinations": state_combinations,
            "input_combinations": input_combinations,
            "mixed_combinations": mixed_combinations
        })

    return results
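
A minimal sketch of calling it directly, reusing the hypothetical 2x3 matrix from above and restricting the analysis to parameters (the patterns argument is only used for interpretation, so an empty list suffices here):

import sympy as sym
from nullstrike.analysis.enhanced_subspace import find_identifiable_directions

x1, p1, p2 = sym.symbols('x1 p1 p2')
O = sym.Matrix([[1, 0, 0], [0, 1, 1]])   # columns ordered [x1, p1, p2]
nullspace = O.nullspace()                # one vector: [0, -1, 1]

info = find_identifiable_directions(O, [p1, p2], [x1], [], nullspace,
                                    patterns=[], analysis_scope='parameters')
print(info['identifiable_combinations'])   # ['p1 + p2']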

generate_summary_report(results)

Generate a comprehensive summary report.

Source code in src/nullstrike/analysis/enhanced_subspace.py
def generate_summary_report(results):
    """Generate a comprehensive summary report."""

    print("\n" + "="*70)
    print("COMPREHENSIVE IDENTIFIABILITY SUMMARY")
    print("="*70)

    if results['fully_identifiable']:
        print("\n✓ EXCELLENT: System is fully observable and identifiable!")
        print("  All parameters can be uniquely determined from the available measurements.")
        return

    print(f"\nMatrix Analysis:")
    print(f"  Rank: {results['matrix_rank']}/{results['expected_rank']}")
    print(f"  Rank deficiency: {results['expected_rank'] - results['matrix_rank']}")
    print(f"  Nullspace dimension: {results['nullspace_dimension']}")

    print(f"\nUnidentifiable Patterns ({len(results['unidentifiable_patterns'])}):")
    for i, pattern in enumerate(results['unidentifiable_patterns']):
        print(f"  {i+1}. {pattern['relationship']}")
        print(f"     → {pattern['interpretation']}")

    identifiable_info = results['identifiable_info']
    if identifiable_info['all_vars_identifiable']:
        print(f"\n✓ All parameters are individually identifiable!")
    else:
        n_id = identifiable_info['n_identifiable_combinations']
        print(f"\nIdentifiable Information ({n_id} combinations):")
        for i, combo in enumerate(identifiable_info['identifiable_combinations']):
            print(f"  {i+1}. {combo}")

    print(f"\nRecommendations:")
    if results['nullspace_dimension'] > 0:
        print(f"  • Fix {results['nullspace_dimension']} parameters to known values, OR")
        print(f"  • Add {results['nullspace_dimension']} additional independent measurements, OR")
        print(f"  • Reparameterize using the identifiable combinations shown above")

interpret_pattern(param_combo, input_combo, state_combo)

Provide physical interpretation of unidentifiable patterns.

Source code in src/nullstrike/analysis/enhanced_subspace.py
def interpret_pattern(param_combo, input_combo, state_combo):
    """Provide physical interpretation of unidentifiable patterns."""

    interpretations = []

    # Common patterns in compartmental models
    if len(param_combo) == 2:
        p1, c1 = param_combo[0]
        p2, c2 = param_combo[1]

        if c1 == 1 and c2 == -1:
            interpretations.append(f"Only the difference ({p1} - {p2}) matters")
        elif c1 == -1 and c2 == 1:
            interpretations.append(f"Only the difference ({p2} - {p1}) matters")
        elif c1 == c2:
            interpretations.append(f"Parameters {p1} and {p2} are perfectly correlated")

    if len(param_combo) > 2 and all(c == param_combo[0][1] for _, c in param_combo):
        params = [str(p) for p, _ in param_combo]
        interpretations.append(f"Parameters {params} can only be scaled together (time-scale ambiguity)")

    if param_combo and input_combo:
        interpretations.append("Input-parameter tradeoff detected")

    if param_combo and state_combo:
        interpretations.append("Parameter-initial condition tradeoff detected")

    if not interpretations:
        interpretations.append("Complex parameter relationship - see mathematical expression")

    return " | ".join(interpretations)
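
For example:

import sympy as sym
from nullstrike.analysis.enhanced_subspace import interpret_pattern

p1, p2 = sym.symbols('p1 p2')
print(interpret_pattern([(p1, 1), (p2, -1)], [], []))
# Only the difference (p1 - p2) matters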

load_oic_matrix_with_symbols(filepath, model_name)

Load matrix with proper symbol context.

Source code in src/nullstrike/analysis/enhanced_subspace.py
def load_oic_matrix_with_symbols(filepath, model_name):
    """Load matrix with proper symbol context."""
    with open(filepath, 'r') as f:
        content = f.read()

    # Extract the matrix data (it's stored as "onx = [...]")
    matrix_str = content.split('onx = ')[1]

    # Import the model to get symbols
    import importlib
    import sympy as sym

    try:
        model = importlib.import_module(f'custom_models.{model_name}')
        model_vars = model.variables_locales if hasattr(model, 'variables_locales') else vars(model)
    except Exception as e:
        print(f"Warning: Could not import model {model_name}: {e}")
        model_vars = {}

    # Create execution namespace with all symbols
    exec_namespace = {**sym.__dict__, **model_vars}

    local_vars = {}
    exec(f"result = {matrix_str}", {"__builtins__": {}, **exec_namespace}, local_vars)

    return Matrix(local_vars['result'])
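
A usage sketch; the file path below is hypothetical and would normally come from globbing the results directory, as in analyze_strikepy_results():

from pathlib import Path
from nullstrike.analysis.enhanced_subspace import load_oic_matrix_with_symbols

# Hypothetical path to a StrikePy output file
matrix_file = Path('results') / 'obs_ident_matrix_C2M_4_Lie_deriv.txt'
onx = load_oic_matrix_with_symbols(matrix_file, 'C2M')
print(onx.shape)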

Enhanced nullspace analysis for finding identifiable parameter combinations.

Key Functions

compute_nullspace_analysis(identifiability_matrix)

Computes complete nullspace analysis.

Parameters:

  • identifiability_matrix: SymPy Matrix (Jacobian of observability matrix)

Returns:

  • Dictionary containing:
      • 'nullspace_basis': Basis vectors for nullspace
      • 'identifiable_basis': Basis vectors for identifiable subspace
      • 'nullspace_dimension': Dimension of nullspace
      • 'constraints': List of parameter constraint equations

find_identifiable_combinations(nullspace_basis, parameter_names)

Finds identifiable parameter combinations.

Parameters:

  • nullspace_basis: SymPy Matrix with nullspace basis vectors
  • parameter_names: List of parameter names

Returns:

  • List of identifiable parameter combination strings

Usage Example

from nullstrike.analysis.enhanced_subspace import compute_nullspace_analysis
import sympy as sym

# Compute Jacobian (identifiability matrix)
J = observability_matrix.jacobian(parameters)

# Perform nullspace analysis
nullspace_results = compute_nullspace_analysis(J)

print(f"Nullspace dimension: {nullspace_results['nullspace_dimension']}")
print(f"Parameter constraints: {nullspace_results['constraints']}")

Visualization

nullstrike.visualization.manifolds

3D manifold visualization for nullspace analysis.

plot_2d_constraint_line(symbols, coeffs, vec_idx, model_name, timestamp, plot_cfg=None)

Plot 2D implicit curve F(x,y)=0 where F = sum(c_i * x_i) with symbolic coefficients.

  • Preserves symbolic structure (no collapsing coefficients to 1.0 for plotted vars).
  • Substitutes only non-plotted symbols with defaults (here 1.0; swap in model defaults if available).
  • Masks singularities from rational denominators.
  • Overlays the singular set (denominator == 0) as dashed contours for interpretability.
Source code in src/nullstrike/visualization/manifolds.py
def plot_2d_constraint_line(symbols, coeffs, vec_idx, model_name, timestamp, plot_cfg=None):
    """Plot 2D implicit curve F(x,y)=0 where F = sum(c_i * x_i) with symbolic coefficients.

    - Preserves symbolic structure (no collapsing coefficients to 1.0 for plotted vars).
    - Substitutes only *non-plotted* symbols with defaults (here 1.0; swap in model defaults if available).
    - Masks singularities from rational denominators.
    - Overlays the singular set (denominator == 0) as dashed contours for interpretability.
    """
    import os
    import numpy as np
    import matplotlib.pyplot as plt
    import sympy as sym
    plot_cfg = plot_cfg or {}

    if len(symbols) != 2 or len(coeffs) != 2:
        return

    x_sym, y_sym = symbols

    print(f"  Plotting 2D implicit curve for {symbols}")

    # Full symbolic constraint
    constraint_expr = sym.simplify(sum(c * s for c, s in zip(coeffs, symbols)))
    print(f"  Constraint: {constraint_expr} = 0")

    # Fix *other* symbols (parameters/other states/inputs); keep the plotted vars free
    free_syms = set(constraint_expr.free_symbols)
    keep = {x_sym, y_sym}

    params = sorted(free_syms - keep, key=str)
    param_values = {p: _value_from_cfg(p, plot_cfg) for p in params}
    expr_sub = sym.simplify(constraint_expr.subs(param_values))


    expr_simpl = sym.simplify(expr_sub)

    # Degenerate cases
    if expr_simpl == 0:
        print("    Constraint simplifies to 0=0; manifold is entire R^2 for chosen params. Skipping plot.")
        return
    if not (set(expr_simpl.free_symbols) & keep):
        # No dependence on x,y; if constant != 0 then no solutions
        if sym.simplify(expr_simpl) != 0:
            print("    Constraint is constant and nonzero for chosen params; no real solutions. Skipping plot.")
            return

    # Extract denominator to mask singularities
    try:
        num, den = sym.together(expr_sub).as_numer_denom()
        den = sym.simplify(den)
        Dxy = None if den == 1 else sym.lambdify((x_sym, y_sym), den, 'numpy')
    except Exception:
        Dxy = None

    # Numeric evaluator for F(x,y)
    Fxy = sym.lambdify((x_sym, y_sym), expr_sub, 'numpy')

    # Grid
    Xv = _range_from_cfg(x_sym, plot_cfg)
    Yv = _range_from_cfg(y_sym, plot_cfg)
    X, Y = np.meshgrid(Xv, Yv)

    # Evaluate safely and mask invalids/singularities
    with np.errstate(divide='ignore', invalid='ignore', over='ignore'):
        try:
            Z = Fxy(X, Y)
            if Dxy is not None:
                den_vals = Dxy(X, Y)
                mask = np.isfinite(den_vals) & (np.abs(den_vals) > 1e-12)
                Z = np.where(mask, Z, np.nan)
            if np.iscomplexobj(Z):
                Z = np.where(np.abs(np.imag(Z)) < 1e-9, np.real(Z), np.nan)
            Z = np.where(np.isfinite(Z), Z, np.nan)
        except Exception as e:
            print(f"    Could not evaluate implicit curve: {e}")
            return

    # Plot
    fig, ax = plt.subplots(figsize=(8, 6))
    CS = ax.contour(X, Y, Z, levels=[0], linewidths=2)
    ax.clabel(CS, inline=1, fontsize=8, fmt={0: 'F=0'})

    # Overlay singular set (denominator == 0), if present
    if Dxy is not None:
        try:
            den_vals = Dxy(X, Y)
            den_vals = np.where(np.isfinite(den_vals), den_vals, np.nan)
            ax.contour(X, Y, den_vals, levels=[0], linewidths=1, linestyles='dashed')
        except Exception:
            pass

    ax.set_xlabel(_get_axis_label(x_sym, plot_cfg))
    ax.set_ylabel(_get_axis_label(y_sym, plot_cfg))

    # ax.set_xlabel(str(x_sym))
    # ax.set_ylabel(str(y_sym))
    ax.set_aspect('equal', adjustable='box')
    ax.set_title(f'Unidentifiable Curve: {sym.sstr(expr_sub)} = 0')

    ax.grid(True, alpha=0.3)

    results_base = get_results_dir() / model_name / timestamp
    filename = results_base / "manifolds_2d" / f"manifold_2d_vec{vec_idx+1}.png"    

    os.makedirs(os.path.dirname(filename), exist_ok=True)
    plt.savefig(filename, dpi=300, bbox_inches='tight', pad_inches=1)
    print(f"    2D manifold saved: {filename}")

    # Also save as vector format
    svg_filename = results_base / "manifolds_2d" / f"manifold_2d_vec{vec_idx+1}.svg"
    plt.savefig(svg_filename, dpi=300, bbox_inches='tight', pad_inches=1, format='svg')
    print(f"    2D manifold saved: {svg_filename}")

    plt.close(fig)
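
A usage sketch for a hypothetical constraint p1 - p2 = 0; model_name and timestamp only determine where the output files are written under the results directory, and plot ranges fall back to the module defaults when plot_cfg is not given:

import sympy as sym
from nullstrike.visualization.manifolds import plot_2d_constraint_line

p1, p2 = sym.symbols('p1 p2')
plot_2d_constraint_line([p1, p2], [1, -1], vec_idx=0,
                        model_name='C2M', timestamp='20240101_000000')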

plot_3d_constraint_surface(symbols, coeffs, full_null_vec, involved_indices, vec_idx, combo_idx, model_name, timestamp, plot_cfg=None)

Plot the 3D constraint surface for three variables.

Source code in src/nullstrike/visualization/manifolds.py
def plot_3d_constraint_surface(symbols, coeffs, full_null_vec, involved_indices, 
                               vec_idx, combo_idx, model_name, timestamp,
                               plot_cfg=None):
    """
    Plot the 3D constraint surface for three variables.

    """
    plot_cfg = plot_cfg or {}
    if len(symbols) != 3 or len(coeffs) != 3:
        return

    # Create symbolic constraint equation
    constraint_expr = sum(c * s for c, s in zip(coeffs, symbols))   
    print(f"  Plotting 3D surface for {symbols}")
    print(f"  Constraint: {constraint_expr} = 0")

    free_syms = set(constraint_expr.free_symbols)
    param_syms = sorted(free_syms - set(symbols), key=str)
    param_values = {p: _value_from_cfg(p, plot_cfg) for p in param_syms}
    expr_sub = sym.simplify(constraint_expr.subs(param_values))


    expr_simpl = sym.simplify(expr_sub)
    if expr_simpl == 0:
        print("    Constraint simplifies to 0=0; manifold is entire R^3 for chosen params. Skipping plot.")
        return
    if not (set(expr_simpl.free_symbols) & set(symbols)):
        # No dependence on the plotted variables; constant ≠ 0 means no solutions
        if sym.simplify(expr_simpl) != 0:
            print("    Constraint is constant and nonzero for chosen params; no real solutions. Skipping plot.")
            return


    # Try to solve for one variable to get a parametric surface Z(X,Y)
    vars3 = list(symbols)
    solutions = None
    solve_idx = None
    for idx_try in [2, 1, 0]:  # prefer solving for the last symbol; fall back if needed
        try:
            sols = sym.solve(Eq(expr_sub, 0), vars3[idx_try], dict=True)
            if sols:
                solutions = sols
                solve_idx = idx_try
                break
        except Exception:
            pass

# -------------------- # 
    # Plot settings - create 2x2 subplot for 4 orientations
    fig = plt.figure(figsize=(16, 12))

    # Define 4 different viewing angles
    view_angles = [
        (30, 45),   # default
        (60, 135),  # rotated
        (45, 225),  # back view
        (75, 315)   # top-angled
    ]

    view_names = ['Default', 'Side', 'Back', 'Top']

    def mask_invalid(A):
        return np.where(np.isfinite(A), A, np.nan)

    for subplot_idx, ((elev, azim), view_name) in enumerate(zip(view_angles, view_names)):
        ax = fig.add_subplot(2, 2, subplot_idx + 1, projection='3d')

        ax.set_xlabel(_get_axis_label(symbols[0], plot_cfg))
        ax.set_ylabel(_get_axis_label(symbols[1], plot_cfg))
        ax.set_zlabel(_get_axis_label(symbols[2], plot_cfg))

        # Set the viewing angle
        ax.view_init(elev=elev, azim=azim)

        # Plot the constraint for this view: a parametric surface when a
        # closed-form solution exists, otherwise implicit z-slice contours (below).

        if solutions:            
            idx_other = [i for i in [0,1,2] if i != solve_idx]
            u_sym, v_sym = vars3[idx_other[0]], vars3[idx_other[1]]

            for sol in solutions:
                sol_expr = sym.simplify(sol[vars3[solve_idx]])
                f = sym.lambdify((u_sym, v_sym), sol_expr, 'numpy')

                # mask singularities in the solved expression
                try:
                    num_p, den_p = sym.together(sol_expr).as_numer_denom()
                    den_p = sym.simplify(den_p)
                    Duv = None if den_p == 1 else sym.lambdify((u_sym, v_sym), den_p, 'numpy')
                except Exception:
                    Duv = None

                var_range_u = _range_from_cfg(u_sym, plot_cfg)
                var_range_v = _range_from_cfg(v_sym, plot_cfg)
                U, V = np.meshgrid(var_range_u, var_range_v)

                try:
                    W = f(U, V)
                    if Duv is not None:
                        den_vals = Duv(U, V)
                        mask = np.isfinite(den_vals) & (np.abs(den_vals) > 1e-12)
                        W = np.where(mask, W, np.nan) 

                    if np.iscomplexobj(W):
                        W = np.where(np.abs(np.imag(W)) < 1e-9, np.real(W), np.nan)
                    W = mask_invalid(W)
                except Exception:
                    continue

                # Place (U,V,W) onto the correct axes based on which var was solved
                if solve_idx == 2:
                    X, Y, Z = U, V, W
                elif solve_idx == 1:
                    X, Y, Z = U, W, V
                else:
                    X, Y, Z = W, U, V

                surf = ax.plot_surface(X, Y, Z, alpha=0.7, cmap='viridis', linewidth=0)
        else:

            # Fallback: implicit contours by z-slices (draw contour lines where F(x,y,z0)=0 at multiple z0)
            # Choose z as the 3rd symbol for slicing
            x_sym, y_sym, z_sym = symbols
            Fxyz = sym.lambdify((x_sym, y_sym, z_sym), expr_sub, 'numpy')

            Xv = _range_from_cfg(x_sym, plot_cfg)
            Yv = _range_from_cfg(y_sym, plot_cfg)
            X, Y = np.meshgrid(Xv, Yv)

            z_slices = _z_slices_from_cfg(z_sym, plot_cfg)
            ax.set_zlim(float(np.min(z_slices)), float(np.max(z_slices)))


            try:
                num, den = sym.together(expr_sub).as_numer_denom()
                den = sym.simplify(den)
                Dxyz = None if den == 1 else sym.lambdify((x_sym, y_sym, z_sym), den, 'numpy')
            except Exception:
                Dxyz = None


            for z0 in z_slices:
                try:
                    F_vals = Fxyz(X, Y, z0)
                    if Dxyz is not None:
                        den_vals = Dxyz(X, Y, z0)
                        F_vals = np.where(np.isfinite(den_vals) & (np.abs(den_vals) > 1e-12), F_vals, np.nan)

                    F_vals = mask_invalid(F_vals)
                    # Contour at level 0, positioned at z=z0
                    # ax.contour(X, Y, z0*np.ones_like(X), F_vals, levels=[0], zdir='z')
                    ax.contour(X, Y, F_vals, levels=[0], zdir='z', offset=z0)
                except Exception:
                    continue

            if Dxyz is not None:
                for z0 in z_slices:
                    try:
                        den_vals = Dxyz(X, Y, z0)
                        den_vals = np.where(np.isfinite(den_vals), den_vals, np.nan)
                        ax.contour(X, Y, den_vals, levels=[0], zdir='z', offset=z0, linewidths=1)
                    except Exception:
                        pass


        ax.set_title(f'{view_name} View')

        try:
            ax.set_box_aspect((1,1,1))
        except Exception:
            pass

    # Main title for the entire figure
    constraint_str = sym.sstr(expr_sub)
    fig.suptitle(f'Unidentifiable Manifold: {constraint_str} = 0\n'
                f'Nullspace Vector {vec_idx + 1}, Combination {combo_idx + 1}',
                fontsize=14)

    plt.tight_layout()

    constrained_layout=True

    # Get the base directory from the calling function
    results_base = get_results_dir() / model_name / timestamp
    filename = results_base / "manifolds_3d" / f"manifold_3d_vec{vec_idx+1}_combo{combo_idx+1}.png"

    os.makedirs(os.path.dirname(filename), exist_ok=True)

    plt.savefig(filename, dpi=300, bbox_inches='tight', pad_inches=1)
    print(f"    3D manifold saved: {filename}")

    # Also save as vector format
    svg_filename = results_base / "manifolds_3d" / f"manifold_3d_vec{vec_idx+1}_combo{combo_idx+1}.svg"
    plt.savefig(svg_filename, dpi=300, bbox_inches='tight', pad_inches=1, format='svg')
    print(f"    3D manifold saved: {svg_filename}")

    plt.close()

visualize_nullspace_manifolds(nullspace_results, state_symbols, param_symbols, symbol_universe, plot_cfg=None, input_symbols=None, model_name='Model', shared_timestamp=None)

Visualize the unidentifiable manifolds as 3D surfaces.

For each nullspace vector, find all combinations of 3 variables that are involved, and plot the constraint surface in 3D.

Source code in src/nullstrike/visualization/manifolds.py
def visualize_nullspace_manifolds(
    nullspace_results,
    state_symbols,
    param_symbols,
    symbol_universe,
    plot_cfg=None,
    input_symbols=None,
    model_name="Model",
    shared_timestamp=None):
    """
    Visualize the unidentifiable manifolds as 3D surfaces.

    For each nullspace vector, find all combinations of 3 variables that are involved,
    and plot the constraint surface in 3D.
    """
    import matplotlib.pyplot as plt

    import numpy as np
    from datetime import datetime
    plot_cfg = plot_cfg or {}

    def _create_results_structure(model_name, timestamp):
        """Create organized directory structure for results."""
        import os

        base_dir = get_results_dir() / model_name / timestamp
        base_dir.mkdir(parents=True, exist_ok=True)

        # Create subdirectories
        (base_dir / "manifolds_3d").mkdir(exist_ok=True)
        (base_dir / "manifolds_2d").mkdir(exist_ok=True)
        (base_dir / "graphs").mkdir(exist_ok=True)

        return base_dir

    # Resolve extra triplets/pairs by name to symbols
    extras_3d = []
    for t in plot_cfg.get("extra_triplets_3d", []):
        a,b,c = t
        extras_3d.append((_resolve_symbol(a, symbol_universe),
                          _resolve_symbol(b, symbol_universe),
                          _resolve_symbol(c, symbol_universe)))

    extras_2d = []
    for p in plot_cfg.get("extra_pairs_2d", []):
        a,b = p
        extras_2d.append((_resolve_symbol(a, symbol_universe),
                          _resolve_symbol(b, symbol_universe)))

    # Map every symbol to its global index in all_symbols (to pull the correct coeff)
    if input_symbols is None:
        input_symbols = []
    all_symbols = state_symbols + param_symbols + input_symbols
    sym_to_idx_global = {str(s): i for i, s in enumerate(all_symbols)}

    # Combine + dedup for the multiplets to plot (preserve order)
    def _uniq(seq):
        seen = set()
        out = []
        for item in seq:
            key = tuple(map(str, item))
            if key not in seen:
                seen.add(key)
                out.append(item)
        return out



    if shared_timestamp is None:
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    else:
        timestamp = shared_timestamp
    results_base = _create_results_structure(model_name, timestamp)

    # Configurable caps (None => no cap)
    max_triplets_3d = plot_cfg.get("max_triplets_3d", None)
    max_pairs_2d    = plot_cfg.get("max_pairs_2d", None)

    print(f"\nVisualizing nullspace manifolds for {model_name}...")

    plot_count = 0
    max_plots = 10  # Limit total number of plots

    for vec_idx, null_vec in enumerate(nullspace_results['nullspace_basis']):
        print(f"\n--- Nullspace Vector {vec_idx + 1} ---")

        # Find variables involved in this nullspace vector
        involved_indices = [i for i in range(len(null_vec)) if null_vec[i] != 0]
        involved_symbols = [all_symbols[i] for i in involved_indices if i < len(all_symbols)]
        involved_coeffs = [null_vec[i] for i in involved_indices if i < len(all_symbols)]

        print(f"Involved variables: {involved_symbols}")
        print(f"Coefficients: {involved_coeffs}")

        from itertools import combinations

        # Build a dict from symbol -> coeff for this vector
        sym_to_coeff = {str(s): c for s, c in zip(involved_symbols, involved_coeffs)}

        # --- 3D TRIPLETS ---
        triples_auto = [tuple(involved_symbols[i] for i in idxs)
                        for idxs in combinations(range(len(involved_symbols)), 3)]

        # Filter extras_3d to those fully present in this vector
        triples_extras = []
        for (a, b, c) in extras_3d:
            names = {str(a), str(b), str(c)}
            if names.issubset({str(s) for s in involved_symbols}):
                # use the existing SymPy symbols from involved_symbols for exact identity
                remapped = tuple(next(s for s in involved_symbols if str(s) == n) for n in [str(a), str(b), str(c)])
                triples_extras.append(remapped)

        # De-duplicate while preserving order
        triples_merged = _uniq(triples_auto + triples_extras)

        # Apply cap if configured
        if isinstance(max_triplets_3d, int):
            triples_merged = triples_merged[:max_triplets_3d]

        # Plot each triple
        for combo_idx, triple_symbols in enumerate(triples_merged):
            triple_coeffs = [sym_to_coeff[str(s)] for s in triple_symbols]
            try:
                plot_3d_constraint_surface(
                    triple_symbols, triple_coeffs, null_vec, involved_indices,
                    vec_idx, combo_idx, model_name, timestamp,
                    plot_cfg=plot_cfg
                )
            except Exception as e:
                print(f"Could not plot {triple_symbols}: {e}")

        # --- 2D PAIRS ---
        pairs_auto = [tuple(involved_symbols[i] for i in idxs)
                    for idxs in combinations(range(len(involved_symbols)), 2)]

        pairs_extras = []
        for (a, b) in extras_2d:
            names = {str(a), str(b)}
            if names.issubset({str(s) for s in involved_symbols}):
                remapped = tuple(next(s for s in involved_symbols if str(s) == n) for n in [str(a), str(b)])
                pairs_extras.append(remapped)

        pairs_merged = _uniq(pairs_auto + pairs_extras)

        if isinstance(max_pairs_2d, int):
            pairs_merged = pairs_merged[:max_pairs_2d]

        for pair_symbols in pairs_merged:
            pair_coeffs = [sym_to_coeff[str(s)] for s in pair_symbols]
            try:
                plot_2d_constraint_line(
                    pair_symbols, pair_coeffs,
                    vec_idx, model_name, timestamp,
                    plot_cfg=plot_cfg
                )
            except Exception as e:
                print(f"Could not plot 2D constraint for {pair_symbols}: {e}")

3D manifold visualization for parameter constraint surfaces.

nullstrike.visualization.graphs

Graph visualization for parameter identifiability relationships.

build_identifiability_graph(used_symbols, nullspace_vectors, symbol_types=None, index_range=None)

Construct a graph where nodes are variables (states/parameters/inputs). Edges connect variables that are part of the same unidentifiable combination. Isolated nodes are only identifiable if they don't appear in ANY nullspace vector.

Parameters:

  • used_symbols : list of sympy.Symbol - Variables in the model.
  • nullspace_vectors : list - Each vector encodes one unidentifiable direction.
  • symbol_types : list of str, optional - Type of each symbol ('state', 'param', 'input') for coloring
  • index_range : tuple (start, end) or None - Only look at indices [start:end] in the nullspace vectors; if None, use the full vector

Source code in src/nullstrike/visualization/graphs.py
def build_identifiability_graph(used_symbols, nullspace_vectors, symbol_types=None, index_range=None):
    """
    Construct a graph where nodes are variables (states/parameters/inputs).
    Edges connect variables that are part of the same unidentifiable combination.
    Isolated nodes are only identifiable if they don't appear in ANY nullspace vector.
    Parameters
    ----------
    used_symbols : list of sympy.Symbol
        Variables in the model.
    nullspace_vectors : list 
        Each vector encodes one unidentifiable direction.
    symbol_types : list of str, optional
        Type of each symbol ('state', 'param', 'input') for coloring
    index_range : tuple (start, end) or None
        Only look at indices [start:end] in the nullspace vectors
        If None, use the full vector
    """
    G = nx.Graph()

    # Add all nodes
    for i, s in enumerate(used_symbols):
        node_type = symbol_types[i] if symbol_types else 'unknown'
        G.add_node(str(s), node_type=node_type)

    variables_in_nullspace = set()

    for vec in nullspace_vectors:
        if index_range is None:
            # Use full vector - symbol index i maps to vector index i
            involved = [i for i in range(min(len(used_symbols), len(vec))) if vec[i] != 0] 
            # range(min(len(used_symbols), len(vec))) 
        else:
            # Use specified range - vector index (start_idx + i) maps to symbol index i
            start_idx, end_idx = index_range
            involved = []
            for i in range(len(used_symbols)):
                vec_idx = start_idx + i
                if vec_idx < end_idx and vec_idx < len(vec) and vec[vec_idx] != 0:
                    involved.append(i)  # i is the symbol index

        # Mark all involved variables as unidentifiable
        for symbol_idx in involved:
            if symbol_idx < len(used_symbols):
                variables_in_nullspace.add(str(used_symbols[symbol_idx]))

        # Add edges between all pairs in this nullspace vector
        for i in range(len(involved)):
            for j in range(i + 1, len(involved)):
                s1 = str(used_symbols[involved[i]])
                s2 = str(used_symbols[involved[j]])
                G.add_edge(s1, s2)

    # Mark nodes as identifiable or unidentifiable
    for node in G.nodes():
        if node in variables_in_nullspace:
            G.nodes[node]['identifiable'] = False
        else:
            G.nodes[node]['identifiable'] = True

    return G
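
For example, with one hypothetical nullspace vector over the augmented ordering [x1, p1, p2] in which p1 and p2 trade off against each other:

import sympy as sym
from nullstrike.visualization.graphs import build_identifiability_graph

x1, p1, p2 = sym.symbols('x1 p1 p2')
null_vecs = [sym.Matrix([0, 1, -1])]   # x1 observable; p1 and p2 coupled

G = build_identifiability_graph([x1, p1, p2], null_vecs,
                                symbol_types=['state', 'param', 'param'])
print(list(G.edges()))   # [('p1', 'p2')]
print(G.nodes(data=True))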

visualize_identifiability_graph(G, title='Variable Identifiability Graph', save_path=None)

Visualize the identifiability graph with nodes colored by variable type and identifiability.

Source code in src/nullstrike/visualization/graphs.py
def visualize_identifiability_graph(G, title="Variable Identifiability Graph", save_path=None):
    """
    Visualize the identifiability graph with nodes colored by variable type and identifiability.
    """
    plt.figure(figsize=(12, 9))
    pos = nx.spring_layout(G, seed=42, k=2, iterations=50)

    # Separate nodes by identifiability
    identifiable_nodes = [n for n in G.nodes() if G.nodes[n].get('identifiable', False)]
    unidentifiable_nodes = [n for n in G.nodes() if not G.nodes[n].get('identifiable', True)]

    # Color by variable type
    def get_node_color(node, base_color):
        node_type = G.nodes[node].get('node_type', 'unknown')
        if node_type == 'state':
            return 'lightblue' if base_color == 'light' else 'darkblue'
        elif node_type == 'param':
            return 'lightcoral' if base_color == 'light' else 'darkred'
        elif node_type == 'input':
            return 'lightgreen' if base_color == 'light' else 'darkgreen'
        else:
            return 'lightgray' if base_color == 'light' else 'darkgray'

    # # Draw identifiable nodes (if any)
    # if identifiable_nodes:
    #     identifiable_colors = [get_node_color(n, 'light') for n in identifiable_nodes]
    #     nx.draw_networkx_nodes(G, pos, nodelist=identifiable_nodes, 
    #                           node_color=identifiable_colors, node_size=1000, 
    #                           edgecolors='green', linewidths=3, alpha=0.8)

    # # Draw unidentifiable nodes
    # if unidentifiable_nodes:
    #     unidentifiable_colors = [get_node_color(n, 'dark') for n in unidentifiable_nodes]
    #     nx.draw_networkx_nodes(G, pos, nodelist=unidentifiable_nodes, 
    #                           node_color=unidentifiable_colors, node_size=1000, 
    #                           edgecolors='red', linewidths=2, alpha=0.8)
    # Draw identifiable nodes (if any) - use same colors as unidentifiable for fill
    if identifiable_nodes:
        identifiable_colors = [get_node_color(n, 'light') for n in identifiable_nodes]
        nx.draw_networkx_nodes(G, pos, nodelist=identifiable_nodes, 
                            node_color=identifiable_colors, node_size=1000, 
                            edgecolors='green', linewidths=3, alpha=0.8)

    # Draw unidentifiable nodes - use same light colors for fill to match identifiable
    if unidentifiable_nodes:
        unidentifiable_colors = [get_node_color(n, 'light') for n in unidentifiable_nodes]  # Changed from 'dark' to 'light'
        nx.draw_networkx_nodes(G, pos, nodelist=unidentifiable_nodes, 
                            node_color=unidentifiable_colors, node_size=1000, 
                            edgecolors='red', linewidths=2, alpha=0.8)

    # Draw edges
    nx.draw_networkx_edges(G, pos, alpha=0.6, width=2)

    # Place labels slightly outside each node to reduce overlap with the markers
    label_pos = {}
    for node, (x, y) in pos.items():
        # Small offset pushing the label away from the origin, in the node's quadrant
        offset_x = 0.04 if x >= 0 else -0.04
        offset_y = 0.04 if y >= 0 else -0.04
        label_pos[node] = (x + offset_x, y + offset_y)

    # Draw labels at the offset positions
    nx.draw_networkx_labels(G, label_pos, font_size=10, font_weight='bold', 
                        bbox=dict(boxstyle="round,pad=0.1", facecolor="white", alpha=0.8))

    # Create legend
    from matplotlib.patches import Patch
    legend_elements = [
        Patch(facecolor='lightblue', label='States', edgecolor='black'),
        Patch(facecolor='lightcoral', label='Parameters', edgecolor='black'), 
        Patch(facecolor='lightgreen', label='Inputs', edgecolor='black'),
        Patch(facecolor='white', edgecolor='green', linewidth=3, label='Identifiable/\nObservable'),
        Patch(facecolor='white', edgecolor='red', linewidth=2, label='Unidentifiable/\nUnobservable')
    ]
    plt.legend(handles=legend_elements, loc='upper left', bbox_to_anchor=(1, 1))

    # Update title to include Observable/Unobservable
    updated_title = title.replace("Identifiability", "Identifiability/Observability")
    plt.title(updated_title, fontsize=14, fontweight='bold')
    plt.axis('off')
    plt.tight_layout()

    if save_path:
        # Save with larger margins to prevent label cutoff
        plt.savefig(save_path, dpi=300, bbox_inches='tight', pad_inches=1)
        print(f"Graph saved to: {save_path}")

        # Also save as vector format
        svg_path = save_path.with_suffix('.svg')
        plt.savefig(svg_path, dpi=300, bbox_inches='tight', pad_inches=1, format='svg')
        print(f"Graph saved to: {svg_path}")

    plt.close()

Parameter dependency graph visualization.

Key Functions

create_manifold_plots(nullspace_basis, options)

Creates 3D manifold plots showing parameter constraints.

Parameters:
- nullspace_basis: Nullspace basis vectors
- options: Plotting configuration

Returns:
- Dictionary of matplotlib Figure objects

create_parameter_graph(identifiability_results)

Creates network graph of parameter dependencies.

Parameters:
- identifiability_results: Results from identifiability analysis

Returns:
- NetworkX graph object and matplotlib Figure

Usage Example

from nullstrike.visualization.manifolds import create_manifold_plots
from nullstrike.visualization.graphs import create_parameter_graph

# Create 3D manifold visualizations
manifold_plots = create_manifold_plots(nullspace_basis, plot_options)

# Create parameter dependency graph
graph, fig = create_parameter_graph(identifiability_results)

# Save plots
for name, plot in manifold_plots.items():
    plot.savefig(f'manifold_{name}.png')

Configuration and Model Loading

Model Utilities

The nullstrike.models module provides utilities for loading and validating model definitions from the custom_models/ directory.

Key Functions

load_model(model_name)

Loads a model from the custom_models directory.

Parameters:
- model_name: String name of model file (without .py extension)

Returns:
- Model object with symbolic definitions

Available Models:
- Bolie - Bolie's model for glucose-insulin dynamics
- C2M - Two-compartment pharmacokinetic model
- calibration_single - Single-parameter calibration model
- 1A_integral, 1B_prop_integral, 1C_nonlinear - Control system examples

Usage Example

from nullstrike.models import load_model

# Load model
model = load_model('C2M')

# Access model components
print(f"State variables: {model.x}")
print(f"Parameters: {model.p}")
print(f"Outputs: {model.h}")
print(f"Dynamics: {model.f}")

Dynamic Model Access

Models can also be accessed as attributes:

from nullstrike.models import Bolie, C2M

# Direct access to models
bolie_model = Bolie
c2m_model = C2M

Configuration Management

nullstrike.configs.default_options

Default configuration options and utilities.

Key Functions

load_options(options_file)

Loads configuration from options file.

Parameters:
- options_file: String name of options file

Returns:
- Configuration object

get_default_options()

Returns default configuration options.

Returns:
- Default configuration dictionary
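
Usage Example

No example is given above for the configuration helpers, so the following is a minimal sketch. It assumes load_options and get_default_options are importable from nullstrike.configs.default_options as documented; 'options_C2M' is a hypothetical options file name, and the attribute names follow those used by compute_model_hash below.

from nullstrike.configs.default_options import load_options, get_default_options

# Load a named options file from custom_options/ ('options_C2M' is hypothetical)
options = load_options('options_C2M')
print(options.modelname, options.maxLietime)

# Or start from the packaged defaults
defaults = get_default_options()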

Checkpointing System

nullstrike.analysis.checkpointing

compute_model_hash(model, options)

Compute hash of model and options for checkpoint validation.

Source code in src/nullstrike/analysis/checkpointing.py
def compute_model_hash(model, options):
    """Compute hash of model and options for checkpoint validation."""
    # Get model content
    model_content = {
        'x': str(model.x),
        'p': str(model.p), 
        'h': str(model.h),
        'f': str(model.f),
        'u': str(getattr(model, 'u', [])),
        'w': str(getattr(model, 'w', []))
    }

    # Get relevant options (exclude plotting config)
    options_content = {
        'modelname': options.modelname,
        'checkObser': options.checkObser,
        'maxLietime': options.maxLietime,
        'nnzDerU': options.nnzDerU,
        'nnzDerW': options.nnzDerW,
        'prev_ident_pars': str(options.prev_ident_pars)
    }

    combined = json.dumps([model_content, options_content], sort_keys=True)
    return hashlib.md5(combined.encode()).hexdigest()

load_checkpoint(model_name, options_file, expected_hash)

Load checkpoint if it exists and hash matches.

Source code in src/nullstrike/analysis/checkpointing.py
def load_checkpoint(model_name, options_file, expected_hash):
    """Load checkpoint if it exists and hash matches."""
    checkpoint_dir = get_checkpoints_dir()
    checkpoint_name = f"{model_name}_{options_file or 'default'}.pkl"
    checkpoint_path = checkpoint_dir / checkpoint_name

    if not checkpoint_path.exists():
        return None

    try:
        with open(checkpoint_path, 'rb') as f:
            checkpoint_data = pickle.load(f)

        if checkpoint_data['model_hash'] == expected_hash:
            print(f"✓ Checkpoint loaded: {checkpoint_path}")
            return checkpoint_data
        else:
            print(f"⚠ Checkpoint hash mismatch - model/options changed")
            return None

    except Exception as e:
        print(f"Error loading checkpoint: {e}")
        return None

save_checkpoint(model_name, options_file, model_hash, oic_matrix, nullspace_results, identifiable_results)

Save checkpoint data.

Source code in src/nullstrike/analysis/checkpointing.py
def save_checkpoint(model_name, options_file, model_hash, oic_matrix, nullspace_results, identifiable_results):
    """Save checkpoint data."""
    checkpoint_dir = get_checkpoints_dir()
    checkpoint_dir.mkdir(exist_ok=True)

    checkpoint_name = f"{model_name}_{options_file or 'default'}.pkl"
    checkpoint_path = checkpoint_dir / checkpoint_name

    checkpoint_data = {
        'model_hash': model_hash,
        'oic_matrix': oic_matrix,
        'nullspace_results': nullspace_results,
        'identifiable_results': identifiable_results,
        'model_name': model_name,
        'options_file': options_file
    }

    with open(checkpoint_path, 'wb') as f:
        pickle.dump(checkpoint_data, f)

    print(f"Checkpoint saved: {checkpoint_path}")

Intelligent caching system for expensive computations.

Key Functions

load_checkpoint(model_name, options_file, expected_hash)

Loads cached results if the checkpoint file exists and its stored hash matches expected_hash.

Parameters:
- model_name: String name of model
- options_file: String name of options file (or None for the default)
- expected_hash: Hash from compute_model_hash(model, options)

Returns:
- Checkpoint dictionary, or None if no valid checkpoint is available

save_checkpoint(model_name, options_file, model_hash, oic_matrix, nullspace_results, identifiable_results)

Saves analysis results for future use.

Parameters:
- model_name: String name of model
- options_file: String name of options file (or None for the default)
- model_hash: Hash from compute_model_hash(model, options)
- oic_matrix: Observability-identifiability matrix to cache
- nullspace_results: Nullspace analysis results to cache
- identifiable_results: Identifiable parameter combination results to cache

Usage Example

from nullstrike.analysis.checkpointing import (
    compute_model_hash, load_checkpoint, save_checkpoint
)

# Hash the current model and options so stale checkpoints are detected
model_hash = compute_model_hash(model, options)

# Try to load cached results
checkpoint = load_checkpoint('my_model', 'options_my_model', model_hash)

if checkpoint is None:
    # Run analysis (run_analysis stands in for your own analysis call)
    oic_matrix, nullspace_results, identifiable_results = run_analysis(model, options)

    # Save for future use
    save_checkpoint('my_model', 'options_my_model', model_hash,
                    oic_matrix, nullspace_results, identifiable_results)
else:
    print("Using cached results")
    nullspace_results = checkpoint['nullspace_results']
    identifiable_results = checkpoint['identifiable_results']

Because each checkpoint stores the hash of the model and options it was computed from, editing the model file or the relevant options automatically invalidates the cached results.

Utility Functions

Mathematical Utilities

nullstrike.utils

Utility functions for NullStrike package.

get_checkpoints_dir()

Get the checkpoints directory path.

Source code in src/nullstrike/utils.py
def get_checkpoints_dir():
    """Get the checkpoints directory path."""
    return get_project_root() / "checkpoints"

get_project_root()

Get the project root directory.

Source code in src/nullstrike/utils.py
def get_project_root():
    """Get the project root directory."""
    # This file is at src/nullstrike/utils.py, so go up 3 levels to get to project root
    return Path(__file__).parent.parent.parent

get_results_dir()

Get the results directory path.

Source code in src/nullstrike/utils.py
def get_results_dir():
    """Get the results directory path."""
    return get_project_root() / "results"

General utility functions for symbolic computation and analysis.

Key Functions

compute_model_hash(model, options)

Computes hash for model and options combination.

Parameters:
- model: Model object
- options: Options object

Returns:
- String hash for caching purposes

simplify_expressions(expr_list)

Simplifies list of symbolic expressions.

Parameters:
- expr_list: List of SymPy expressions

Returns:
- List of simplified expressions
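
Usage Example

The path helpers documented above can be combined directly; a short sketch:

from nullstrike.utils import get_project_root, get_checkpoints_dir, get_results_dir

print(f"Project root: {get_project_root()}")
print(f"Checkpoints:  {get_checkpoints_dir()}")
print(f"Results:      {get_results_dir()}")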

Error Handling

Exception Classes

class NullStrikeError(Exception):
    """Base exception for NullStrike errors."""
    pass

class ModelLoadError(NullStrikeError):
    """Raised when model loading fails."""
    pass

class AnalysisError(NullStrikeError):
    """Raised when analysis computation fails."""
    pass

class VisualizationError(NullStrikeError):
    """Raised when visualization generation fails."""
    pass

Exception Handling Example

from nullstrike.cli.complete_analysis import main
from nullstrike.core.exceptions import ModelLoadError, AnalysisError

try:
    result = main('my_model')
except ModelLoadError as e:
    print(f"Failed to load model: {e}")
except AnalysisError as e:
    print(f"Analysis failed: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

Advanced Usage Patterns

Custom Analysis Pipeline

from nullstrike.core.strike_goldd import strike_goldd_analysis
from nullstrike.analysis.enhanced_subspace import compute_nullspace_analysis
from nullstrike.visualization.manifolds import create_manifold_plots

def custom_analysis_pipeline(model_name, custom_options):
    """Custom analysis with specific modifications."""

    # Load the model with custom preprocessing
    # (load_and_preprocess_model is a user-supplied helper, not a NullStrike API)
    model = load_and_preprocess_model(model_name)

    # Run core analysis
    strike_results = strike_goldd_analysis(model, custom_options)

    # Enhanced nullspace analysis
    nullspace_results = compute_nullspace_analysis(
        strike_results['identifiability_matrix']
    )

    # Custom visualization
    plots = create_manifold_plots(
        nullspace_results['nullspace_basis'], 
        custom_options
    )

    return {
        'strike_goldd': strike_results,
        'nullspace': nullspace_results,
        'visualizations': plots
    }
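
A call to the pipeline might look like the sketch below. custom_analysis_pipeline and load_and_preprocess_model come from the snippet above, and reusing get_default_options() as the configuration object is just one convenient assumption.

from nullstrike.configs.default_options import get_default_options

custom_options = get_default_options()
results = custom_analysis_pipeline('C2M', custom_options)
manifold_figs = results['visualizations']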

Batch Processing

def batch_analysis(model_list, base_options):
    """Analyze multiple models with consistent options."""

    results = {}

    for model_name in model_list:
        try:
            print(f"Analyzing {model_name}...")
            result = main(model_name, base_options)
            results[model_name] = result
        except Exception as e:
            print(f"Failed to analyze {model_name}: {e}")
            results[model_name] = {'error': str(e)}

    return results

# Usage
models = ['C2M', 'Bolie', 'calibration_single']
batch_results = batch_analysis(models, 'options_default')
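
A quick way to summarise which models failed, relying only on the 'error' key set in the fallback branch above:

failures = [name for name, res in batch_results.items() if 'error' in res]
print(f"{len(failures)} of {len(batch_results)} models failed: {failures}")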

Integration with Parameter Estimation

def integrate_with_fitting(model_name, data, initial_guess):
    """Use identifiability results to guide parameter estimation."""

    # Run identifiability analysis
    id_results = main(model_name, parameters_only=True)

    # Extract identifiable combinations
    identifiable_params = id_results['identifiable_parameters']

    # Set up constrained optimization
    def objective(params):
        # Only fit identifiable combinations
        return compute_fit_error(params, data, identifiable_params)

    # Run optimization
    fitted_params = optimize(objective, initial_guess)

    return fitted_params, id_results
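
Note that compute_fit_error and optimize above are stand-ins for your own estimation code, not NullStrike functions. A minimal sketch of an optimize backend built on scipy.optimize.minimize (an assumption about your tooling, not part of NullStrike) could be:

import numpy as np
from scipy.optimize import minimize

def optimize(objective, initial_guess):
    """Hypothetical optimizer backend wrapping scipy.optimize.minimize."""
    result = minimize(objective, np.asarray(initial_guess, dtype=float),
                      method='Nelder-Mead')
    if not result.success:
        print(f"Warning: optimizer did not converge ({result.message})")
    return result.x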

Development and Testing

Model Development Utilities

from nullstrike.models import load_model
from nullstrike.cli.complete_analysis import main

def debug_model(model_name):
    """Debug model definition issues."""

    try:
        model = load_model(model_name)
        # validate_model is assumed to be provided by your model utilities
        is_valid, messages = validate_model(model)

        if is_valid:
            print("Model definition is valid")
        else:
            print("ERROR: Model validation failed:")
            for msg in messages:
                print(f"  - {msg}")

        # Test basic analysis
        result = main(model_name, parameters_only=True)
        print(f"Basic analysis successful: {result['rank']} identifiable combinations")

    except Exception as e:
        print(f"ERROR: {e}")

Performance Profiling

import time
from nullstrike.cli.complete_analysis import main

def profile_analysis(model_name):
    """Profile analysis performance."""

    start_time = time.time()

    # Parameters-only timing
    params_start = time.time()
    result = main(model_name, parameters_only=True)
    params_time = time.time() - params_start

    # Full analysis timing
    full_start = time.time()
    result = main(model_name)
    full_time = time.time() - full_start

    total_time = time.time() - start_time

    print(f"Performance profile for {model_name}:")
    print(f"  Parameters-only: {params_time:.2f}s")
    print(f"  Full analysis: {full_time:.2f}s")
    print(f"  Total time: {total_time:.2f}s")

    return {
        'parameters_time': params_time,
        'full_time': full_time,
        'total_time': total_time
    }
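
A typical invocation against one of the bundled example models:

profile = profile_analysis('C2M')
print(f"Full analysis took {profile['full_time']:.1f}s")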

Further Reading

API Stability

The NullStrike API is designed for stability, but some advanced features may change in future versions. Pin to specific versions for production use.