Metrics
This guide covers the quality assessment metrics available in the Quantum Data Embedding Suite for evaluating quantum embeddings and kernels.
Overview
Quantum embedding metrics help you evaluate the quality and effectiveness of your quantum data embeddings. These metrics provide insights into:
- Expressibility: How well the embedding explores the available Hilbert space
- Trainability: How effectively gradients can be computed for optimization
- Barren Plateau Susceptibility: Whether the embedding suffers from vanishing gradients
- Effective Dimension: The intrinsic dimensionality of the embedded data
- Quantum Advantage Potential: Likelihood of achieving quantum advantage
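A typical workflow computes several of these metrics on the same embedding and data sample and inspects them together. The sketch below uses the functions documented in the sections that follow; embedding and X_sample are assumed to be an existing embedding instance and a NumPy data array.
from quantum_data_embedding_suite.metrics import (
    expressibility,
    trainability,
    effective_dimension,
)
# Evaluate the core metrics on one embedding / data sample
scores = {
    "expressibility": expressibility(embedding=embedding, n_samples=1000),
    "trainability": trainability(embedding=embedding, X=X_sample, observable="Z"),
    "effective_dimension": effective_dimension(embedding=embedding, X=X_sample),
}
for name, value in scores.items():
    print(f"{name}: {value:.4f}")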
Core Metrics
Expressibility
Expressibility measures how uniformly an embedding covers the available quantum state space.
Definition
Expressibility quantifies the difference between the distribution of quantum states generated by the embedding and the uniform (Haar-random) distribution. A common formulation compares the distribution of pairwise state fidelities against the analytical Haar fidelity distribution via the Kullback-Leibler divergence:
\[
\mathrm{Expr} = D_{KL}\!\left(\hat{P}_{\mathrm{emb}}(F)\,\Big\|\,P_{\mathrm{Haar}}(F)\right),
\qquad
P_{\mathrm{Haar}}(F) = (N-1)(1-F)^{N-2},
\]
where \(F\) is the fidelity between states prepared from randomly sampled inputs and \(N\) is the Hilbert-space dimension. A small divergence means the embedding covers the state space nearly uniformly; the score reported by the suite is oriented so that values near 1 indicate near-uniform coverage (see Interpretation below).
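As a library-independent illustration of the fidelity-sampling idea, the sketch below estimates this divergence directly in NumPy. The embed_state callable is a hypothetical stand-in for whatever maps a data point to a normalized statevector; it is not part of the suite's API.
import numpy as np
def fidelity_kl_to_haar(embed_state, X, n_pairs=1000, n_bins=50, seed=0):
    """Estimate D_KL between the embedding's fidelity distribution and the Haar distribution."""
    rng = np.random.default_rng(seed)
    dim = len(embed_state(X[0]))                     # Hilbert-space dimension N = 2**n_qubits
    fids = []
    for _ in range(n_pairs):
        i, j = rng.integers(len(X), size=2)          # random pair of data points
        psi, phi = embed_state(X[i]), embed_state(X[j])
        fids.append(np.abs(np.vdot(psi, phi)) ** 2)  # state fidelity |<psi|phi>|^2
    # Empirical fidelity density vs. analytical Haar density P(F) = (N-1)(1-F)^(N-2)
    hist, edges = np.histogram(fids, bins=n_bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    haar = (dim - 1) * (1.0 - centers) ** (dim - 2)
    # KL divergence over non-empty bins; smaller means closer to Haar-uniform coverage
    mask = (hist > 0) & (haar > 0)
    width = edges[1] - edges[0]
    return float(np.sum(hist[mask] * width * np.log(hist[mask] / haar[mask])))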
Implementation
from quantum_data_embedding_suite.metrics import expressibility
import numpy as np
# Compute expressibility
expr_score = expressibility(
embedding=embedding,
n_samples=1000,
backend="qiskit",
method="fidelity_sampling"
)
print(f"Expressibility: {expr_score:.4f}")
Parameters
embedding
: The quantum embedding to evaluate

n_samples
: Number of random samples to generate

backend
: Quantum backend for computation

method
: Computation method ("fidelity_sampling", "trace_distance")

haar_samples
: Number of Haar random states for comparison
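For example, the trace-distance estimator can be selected together with an explicit number of Haar reference states, using the parameters listed above:
# Alternative estimator using the documented parameters
expr_td = expressibility(
    embedding=embedding,
    n_samples=1000,
    method="trace_distance",
    haar_samples=1000
)
print(f"Expressibility (trace distance): {expr_td:.4f}")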
Advanced Usage
# Detailed expressibility analysis
result = expressibility(
embedding=embedding,
n_samples=2000,
return_details=True,
confidence_interval=0.95
)
print(f"Expressibility: {result['value']:.4f}")
print(f"Confidence interval: [{result['ci_lower']:.4f}, {result['ci_upper']:.4f}]")
print(f"Standard error: {result['std_error']:.4f}")
Interpretation
- High expressibility (close to 1): Embedding explores state space uniformly
- Low expressibility (close to 0): Embedding is concentrated in a small region
- Typical values: 0.6-0.9 for good embeddings
When to Use
- Comparing different embedding types
- Optimizing embedding parameters
- Ensuring adequate state space coverage
- Debugging poor performance
Trainability
Trainability measures the magnitude of gradients in the embedding's parameter space.
Definition
Trainability is quantified by the variance of the gradients of an observable's expectation value with respect to the embedding parameters:
\[
\mathrm{Trainability} = \mathrm{Var}_{\theta}\!\left[\nabla_\theta \langle O \rangle_\theta\right],
\]
where \(\nabla_\theta\) represents gradients with respect to the embedding parameters \(\theta\) and \(O\) is the chosen observable.
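Conceptually, this can be estimated by sampling random parameter settings and computing parameter-shift gradients of the expectation value. The sketch below is library-independent; expectation_fn(theta) is a hypothetical callable returning \(\langle O \rangle\) for a given parameter vector.
import numpy as np
def trainability_sketch(expectation_fn, n_params, n_samples=50, seed=0):
    """Variance of parameter-shift gradients of <O> over random parameter settings."""
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0 * np.pi, n_params)
        i = rng.integers(n_params)                   # pick one parameter per sample
        shift = np.zeros(n_params)
        shift[i] = np.pi / 2
        # Parameter-shift rule: d<O>/d(theta_i) = (<O>(theta + s) - <O>(theta - s)) / 2
        grads.append(0.5 * (expectation_fn(theta + shift) - expectation_fn(theta - shift)))
    return float(np.var(grads))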
Implementation
from quantum_data_embedding_suite.metrics import trainability
# Compute trainability
train_score = trainability(
embedding=embedding,
X=X_sample,
n_parameters=None, # Auto-detect parameters
observable="Z",
n_shots=1024
)
print(f"Trainability: {train_score:.4f}")
Parameters
embedding
: The quantum embedding to evaluate

X
: Sample data points

n_parameters
: Number of trainable parameters

observable
: Observable for gradient computation

gradient_method
: Method for gradient computation ("parameter_shift", "finite_diff")
Observable Options
# Different observables for trainability analysis
observables = {
"single_qubit": ["X", "Y", "Z"],
"multi_qubit": ["ZZ", "XX", "YY"],
"custom": custom_observable_matrix
}
for obs_name, obs in observables.items():
    score = trainability(embedding, X, observable=obs)
    print(f"Trainability ({obs_name}): {score:.4f}")
Interpretation
- High trainability: Large, useful gradients for optimization
- Low trainability: Small gradients, potential barren plateau
- Typical values: > 0.01 for trainable embeddings
Barren Plateau Detection
def detect_barren_plateau(embedding, X, threshold=1e-4):
    """Detect barren plateau in embedding"""
    train_score = trainability(embedding, X)
    if train_score < threshold:
        return {
            "barren_plateau": True,
            "severity": "high" if train_score < threshold / 10 else "moderate",
            "recommendation": "Consider different embedding or initialization"
        }
    return {"barren_plateau": False}
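For example, applied to the sample data used above:
report = detect_barren_plateau(embedding, X_sample)
print(report)
if report["barren_plateau"]:
    print(f"Severity: {report['severity']} ({report['recommendation']})")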
Gradient Variance
Gradient variance analyzes the distribution of gradients across the parameter space.
Implementation
from quantum_data_embedding_suite.metrics import gradient_variance
# Compute gradient variance
grad_var = gradient_variance(
embedding=embedding,
X=X_sample,
n_parameters=10,
n_samples=100,
observable="Z"
)
print(f"Gradient variance: {grad_var:.6f}")
Advanced Analysis
# Detailed gradient analysis
grad_analysis = gradient_variance(
embedding=embedding,
X=X_sample,
return_details=True,
per_parameter=True
)
print("Per-parameter gradient variance:")
for i, var in enumerate(grad_analysis['per_parameter']):
    print(f"Parameter {i}: {var:.6f}")
print(f"Mean gradient magnitude: {grad_analysis['mean_magnitude']:.6f}")
print(f"Gradient norm: {grad_analysis['gradient_norm']:.6f}")
Interpretation
- High variance: Good gradient signal, trainable
- Low variance: Poor gradient signal, optimization challenges
- Zero variance: Barren plateau region
Effective Dimension
Effective dimension measures the intrinsic dimensionality of the embedded quantum states.
Implementation
from quantum_data_embedding_suite.metrics import effective_dimension
# Compute effective dimension
eff_dim = effective_dimension(
embedding=embedding,
X=X_sample,
method="eigenvalue_decay",
threshold=0.95
)
print(f"Effective dimension: {eff_dim}")
Methods
Eigenvalue Decay
eff_dim = effective_dimension(
embedding=embedding,
X=X_sample,
method="eigenvalue_decay",
threshold=0.95 # Capture 95% of variance
)
Participation Ratio
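The participation ratio estimates dimensionality from the eigenvalue spectrum \(\{\lambda_i\}\) of the embedded data's covariance (or kernel) matrix, \(\mathrm{PR} = \left(\sum_i \lambda_i\right)^2 / \sum_i \lambda_i^2\). A minimal call mirroring the other methods, assuming a "participation_ratio" method name is supported:
eff_dim = effective_dimension(
    embedding=embedding,
    X=X_sample,
    method="participation_ratio"  # assumed method name, mirroring the other options
)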
Information Dimension
eff_dim = effective_dimension(
embedding=embedding,
X=X_sample,
method="information_dimension",
n_bins=50
)
Interpretation
- High effective dimension: Rich representation space
- Low effective dimension: Compressed representation
- Compare to: Classical PCA effective dimension (see the sketch below)
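For context, a classical baseline can be computed with PCA using the same 95% variance threshold as the eigenvalue-decay method above; the sketch assumes X_sample is a NumPy array.
import numpy as np
from sklearn.decomposition import PCA
# Classical effective dimension: number of principal components
# needed to capture 95% of the variance in the raw data
pca = PCA().fit(X_sample)
explained = np.cumsum(pca.explained_variance_ratio_)
classical_dim = int(np.searchsorted(explained, 0.95) + 1)
print(f"Classical (PCA) effective dimension: {classical_dim}")
print(f"Quantum effective dimension: {eff_dim}")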
Composite Metrics
Quantum Advantage Score
Combines multiple metrics to estimate quantum advantage potential.
Implementation
from quantum_data_embedding_suite.metrics import quantum_advantage_score
# Compute quantum advantage score
qa_score = quantum_advantage_score(
embedding=embedding,
X=X_sample,
y=y_sample,
classical_baseline="rbf_svm",
metrics=["expressibility", "trainability", "effective_dimension"]
)
print(f"Quantum Advantage Score: {qa_score:.4f}")
Components
# Detailed quantum advantage analysis
qa_analysis = quantum_advantage_score(
embedding=embedding,
X=X_sample,
y=y_sample,
return_components=True
)
print("Component scores:")
for component, score in qa_analysis['components'].items():
    print(f"{component}: {score:.4f}")
print(f"Overall score: {qa_analysis['total_score']:.4f}")
print(f"Confidence: {qa_analysis['confidence']:.4f}")
Embedding Quality Index
Overall quality measure combining all core metrics.
Implementation
from quantum_data_embedding_suite.metrics import embedding_quality_index
# Compute embedding quality index
eqi = embedding_quality_index(
embedding=embedding,
X=X_sample,
weights={
"expressibility": 0.3,
"trainability": 0.4,
"effective_dimension": 0.2,
"stability": 0.1
}
)
print(f"Embedding Quality Index: {eqi:.4f}")
Kernel-Specific Metrics
Kernel Alignment
Measures similarity between quantum and classical kernels.
Implementation
from quantum_data_embedding_suite.metrics import kernel_alignment
from sklearn.metrics.pairwise import rbf_kernel
# Compute quantum kernel
K_quantum = quantum_kernel.compute_kernel(X)
# Compute classical kernel
K_classical = rbf_kernel(X)
# Compute alignment
alignment = kernel_alignment(K_quantum, K_classical)
print(f"Kernel alignment: {alignment:.4f}")
Interpretation
- High alignment (close to 1): Similar to classical kernel
- Low alignment (close to 0): Different from classical kernel
- Optimal range: 0.3-0.7 for potential quantum advantage
Kernel Expressivity
Measures the diversity of kernel values.
Implementation
from quantum_data_embedding_suite.metrics import kernel_expressivity
# Compute kernel expressivity
k_expr = kernel_expressivity(
kernel=quantum_kernel,
X=X_sample,
method="entropy",
n_bins=50
)
print(f"Kernel expressivity: {k_expr:.4f}")
Kernel Stability
Measures sensitivity to noise and perturbations.
Implementation
from quantum_data_embedding_suite.metrics import kernel_stability
# Compute kernel stability
stability = kernel_stability(
kernel=quantum_kernel,
X=X_sample,
noise_levels=[0.01, 0.05, 0.1],
n_trials=50
)
print(f"Kernel stability: {stability:.4f}")
Performance Metrics
Computational Efficiency
Analyzes computational requirements and scaling.
Implementation
from quantum_data_embedding_suite.metrics import computational_efficiency
# Analyze computational efficiency
efficiency = computational_efficiency(
embedding=embedding,
X_sizes=[10, 50, 100, 200],
n_trials=5
)
print("Efficiency analysis:")
print(f"Circuit depth scaling: {efficiency['depth_scaling']}")
print(f"Time complexity: {efficiency['time_complexity']}")
print(f"Memory usage: {efficiency['memory_usage']}")
Hardware Compatibility
Evaluates compatibility with quantum hardware.
Implementation
from quantum_data_embedding_suite.metrics import hardware_compatibility
# Check hardware compatibility
compatibility = hardware_compatibility(
embedding=embedding,
hardware_specs={
"n_qubits": 127,
"connectivity": "heavy_hex",
"gate_fidelity": 0.999,
"coherence_time": 100e-6
}
)
print(f"Hardware compatibility score: {compatibility['score']:.4f}")
print(f"Bottlenecks: {compatibility['bottlenecks']}")
Comparative Analysis
Embedding Comparison
Compare multiple embeddings systematically.
Implementation
from quantum_data_embedding_suite.metrics import compare_embeddings
# Compare different embeddings
embeddings = {
"angle": AngleEmbedding(n_qubits=4),
"amplitude": AmplitudeEmbedding(n_qubits=4),
"iqp": IQPEmbedding(n_qubits=4, depth=2)
}
comparison = compare_embeddings(
embeddings=embeddings,
X=X_sample,
metrics=["expressibility", "trainability", "effective_dimension"],
n_trials=10
)
# Display results
import pandas as pd
df = pd.DataFrame(comparison)
print(df)
Hyperparameter Sensitivity
Analyze sensitivity to hyperparameter changes.
Implementation
from quantum_data_embedding_suite.metrics import hyperparameter_sensitivity
# Analyze sensitivity
sensitivity = hyperparameter_sensitivity(
embedding_class=AngleEmbedding,
hyperparameters={
"n_qubits": [2, 4, 6, 8],
"entangling_layers": [1, 2, 3],
"rotation_axis": ["X", "Y", "Z"]
},
X=X_sample,
metric="expressibility",
n_trials=5
)
print("Hyperparameter sensitivity analysis:")
for param, sens in sensitivity.items():
    print(f"{param}: {sens:.4f}")
Visualization
Metric Dashboard
Create comprehensive metric visualization.
Implementation
from quantum_data_embedding_suite.visualization import create_metric_dashboard
# Create dashboard
dashboard = create_metric_dashboard(
embedding=embedding,
X=X_sample,
y=y_sample,
metrics=["expressibility", "trainability", "effective_dimension"],
save_path="metric_dashboard.html"
)
dashboard.show()
Metric Evolution
Track metrics during optimization.
Implementation
from quantum_data_embedding_suite.visualization import plot_metric_evolution
# Track metrics during training
metric_history = []
for epoch in range(100):
    # Train embedding (pseudo-code)
    train_embedding(embedding, X, y)
    # Compute metrics
    metrics = {
        "expressibility": expressibility(embedding, X),
        "trainability": trainability(embedding, X),
        "loss": compute_loss(embedding, X, y)
    }
    metric_history.append(metrics)
# Plot evolution
plot_metric_evolution(
metric_history,
save_path="metric_evolution.png"
)
Advanced Analysis
Statistical Significance
Test statistical significance of metric differences.
Implementation
from quantum_data_embedding_suite.metrics import metric_significance_test
# Compare two embeddings statistically
p_value = metric_significance_test(
embedding1=angle_embedding,
embedding2=iqp_embedding,
X=X_sample,
metric="expressibility",
n_trials=100,
test="mann_whitney"
)
print(f"p-value: {p_value:.4f}")
print(f"Significant difference: {p_value < 0.05}")
Correlation Analysis
Analyze correlations between metrics.
Implementation
from quantum_data_embedding_suite.metrics import metric_correlation_analysis
# Analyze metric correlations
correlations = metric_correlation_analysis(
embeddings=embeddings_list,
X=X_sample,
metrics=["expressibility", "trainability", "effective_dimension"],
method="pearson"
)
print("Metric correlations:")
print(correlations)
Best Practices
Metric Selection
- Start with core metrics: Expressibility, trainability, effective dimension
- Consider your goal: Classification vs. regression vs. unsupervised
- Match hardware: Include stability metrics for real devices
- Compare baselines: Always include classical comparisons
Computation Guidelines
- Sample size: Use at least 1000 samples for reliable metrics
- Statistical testing: Report confidence intervals
- Multiple trials: Average over multiple random seeds (see the sketch after this list)
- Computational budget: Balance accuracy vs. computation time
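For example, a metric that relies on random sampling can be repeated and summarized with a mean and standard error before comparing embeddings; a minimal sketch using the expressibility call shown earlier:
import numpy as np
# Repeat the stochastic metric and report mean +/- standard error
scores = np.array([
    expressibility(embedding=embedding, n_samples=1000)
    for _ in range(10)
])
mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(len(scores))
print(f"Expressibility: {mean:.4f} +/- {sem:.4f}")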
Interpretation Guidelines
- Context matters: Compare to relevant baselines
- Combined analysis: Don't rely on single metrics
- Domain knowledge: Consider problem-specific requirements
- Validation: Verify metrics correlate with actual performance
Troubleshooting
Common Issues
Inconsistent Results
Problem: Metrics vary significantly between runs

Solutions:
- Increase sample size
- Use multiple random seeds
- Check for numerical instability
- Verify implementation
Poor Metric Values
Problem: All metrics show poor performance

Solutions:
- Check data preprocessing
- Verify embedding implementation
- Try different hyperparameters
- Consider simpler embeddings
Computational Issues
Problem: Metrics take too long to compute

Solutions:
- Reduce sample size for initial exploration
- Use approximation methods
- Enable parallel computation
- Cache intermediate results
Debugging Tools
from quantum_data_embedding_suite.diagnostics import diagnose_metrics
# Comprehensive metric diagnosis
diagnosis = diagnose_metrics(
embedding=embedding,
X=X_sample,
y=y_sample
)
print(diagnosis.summary())
if diagnosis.has_issues():
    print("Recommendations:")
    for rec in diagnosis.recommendations:
        print(f"- {rec}")
Integration with Optimization
Metric-Guided Optimization
Use metrics to guide embedding optimization.
Implementation
from quantum_data_embedding_suite.optimization import MetricGuidedOptimizer
# Optimize embedding using metrics
optimizer = MetricGuidedOptimizer(
embedding_class=DataReuploadingEmbedding,
objective_metrics=["expressibility", "trainability"],
weights=[0.6, 0.4]
)
best_embedding = optimizer.optimize(
X=X_train,
y=y_train,
n_trials=100,
validation_split=0.2
)
Multi-Objective Optimization
Optimize multiple metrics simultaneously.
Implementation
from quantum_data_embedding_suite.optimization import MultiObjectiveOptimizer
# Multi-objective optimization
optimizer = MultiObjectiveOptimizer(
embedding_class=IQPEmbedding,
objectives=["expressibility", "trainability", "hardware_compatibility"]
)
pareto_front = optimizer.optimize(
X=X_train,
n_trials=200,
algorithm="nsga2"
)
# Select from Pareto front
best_embedding = optimizer.select_best(
pareto_front,
preference_weights=[0.4, 0.4, 0.2]
)