Commits (21)
f559b35  Add tqdm as a proper dependency (basnijholt, Sep 18, 2025)
ff14dd5  Update requirements.txt (rileyjmurray, Sep 18, 2025)
1107ee3  Merge pull request #653 from basnijholt/patch-1 (rileyjmurray, Sep 18, 2025)
eaa1e7f  Revert "Add `tqdm` as a proper dependency" (rileyjmurray, Sep 18, 2025)
5bd509f  Merge pull request #654 from sandialabs/revert-653-patch-1 (rileyjmurray, Sep 18, 2025)
c50127c  Update README and CHANGELOG for 0.9.14.2 (Nov 8, 2025)
67045dc  Merge remote-tracking branch 'origin/bugfix' (Nov 8, 2025)
1706895  Modified GST run to take list of optimizers for each iteration (juangmendoza19, Nov 24, 2025)
2a9f5ed  bugfixes. First working example of optimizers with GST (juangmendoza19, Nov 24, 2025)
b3b59f7  one more bugfix (juangmendoza19, Nov 24, 2025)
d0504ee  bugfix, unit tests passing (juangmendoza19, Nov 24, 2025)
b5dbcde  Created unit test for new optimizer list feature for iterative_gst_ge… (juangmendoza19, Nov 25, 2025)
d7c5257  added unit tests for all the new optimizer changes. (juangmendoza19, Nov 25, 2025)
f9cc02f  Added type annotations (juangmendoza19, Nov 25, 2025)
6389e3a  Small type annotation bugfix (juangmendoza19, Nov 25, 2025)
d62e886  Added tests suggested by @nkoskelo (juangmendoza19, Nov 25, 2025)
d6d0500  Deleted temp files (juangmendoza19, Nov 25, 2025)
4e3fa03  Removed temp files pt 2 (juangmendoza19, Nov 25, 2025)
88440f0  Fix some typos in comments and improve readability. (nkoskelo, Nov 29, 2025)
b1d269b  Fixed some typos. (nkoskelo, Nov 30, 2025)
85fc08a  Changes suggested by @nkoskelo in the PR (juangmendoza19, Dec 1, 2025)
12 changes: 12 additions & 0 deletions CHANGELOG
@@ -1,5 +1,17 @@
# CHANGELOG

## [0.9.14.2] - 2025-11-08

This is a bugfix release patching the following issues:

* Bugfixes for diamond-distance wildcard budget computation in the context of leakage-aware analysis as reported in #652. (#671)
* Changes to default behavior of leakage-aware gauge optimization suite to address the manner in which relational leakage errors are attributed to each gate. Additional correctness checks/unit tests for leakage GST modeling. (#671)
* Bugfix for the issue reported in #644, where gauge-optimizing models with instruments using fidelity or trace distance as cost functions silently ignored the instrument parameters. It was originally thought that this applied to the Frobenius distance as well, but that case turned out to be working correctly. (#672)
* Fix for issue #600, which found the `Model.is_equivalent` method did not work for 'full TP' models. (#657)
* `tqdm` added as a proper dependency (#653, #656)
* Shared memory bugs in objective function computation when parallelizing across parameters. (#660, #674)
* Fixes a bug in DataSet count retrieval and adds unit tests. (#663)

## [0.9.14.1] - 2025-08-30

This is a bugfix release patching two issues:
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
********************************************************************************
pyGSTi 0.9.14.1
pyGSTi 0.9.14.2
********************************************************************************

[![master build](https://img.shields.io/github/actions/workflow/status/sandialabs/pyGSTi/beta-master.yml?branch=master&label=master)](https://github.com/sandialabs/pyGSTi/actions/workflows/beta-master.yml)
42 changes: 31 additions & 11 deletions pygsti/algorithms/core.py
@@ -15,6 +15,7 @@
import time as _time
import copy as _copy
import warnings as _warnings
from typing import Union

import numpy as _np
import scipy.optimize as _spo
@@ -785,7 +786,8 @@ def run_iterative_gst(dataset, start_model, circuit_lists,
return models, optimums, final_objfn, mdc_store_list

def iterative_gst_generator(dataset, start_model, circuit_lists,
optimizer, iteration_objfn_builders, final_objfn_builders,
optimizer: Union[_SimplerLMOptimizer, dict, list[_SimplerLMOptimizer], list[dict]],
iteration_objfn_builders, final_objfn_builders,
resource_alloc, starting_index=0, verbosity=0):
"""
Performs Iterative Gate Set Tomography on the dataset.
@@ -808,10 +810,13 @@ def iterative_gst_generator(dataset, start_model, circuit_lists,
either a Circuit object or as a tuple of operation labels (but all must be specified
using the same type).
e.g. [ [ (), ('Gx',) ], [ (), ('Gx',), ('Gy',) ], [ (), ('Gx',), ('Gy',), ('Gx','Gy') ] ]

optimizer : Optimizer or dict
optimizer : Optimizer, or dict, or list of Optimizer, or list of dict
The optimizer to use, or a dictionary of optimizer parameters
from which a default optimizer can be built. If a list, its length
must be either 1 or equal to the number of iterations. If length 1,
the single optimizer is used for every iteration; otherwise each
optimizer is used for its corresponding iteration.
each optimizer is used for its corresponding iteration.

iteration_objfn_builders : list
List of ObjectiveFunctionBuilder objects defining which objective functions
@@ -847,7 +852,22 @@ def iterative_gst_generator(dataset, start_model, circuit_lists,
(an "evaluated" model-dataset-circuits store).
"""
resource_alloc = _ResourceAllocation.cast(resource_alloc)
optimizer = optimizer if isinstance(optimizer, _Optimizer) else _SimplerLMOptimizer.cast(optimizer)
if optimizer is None:
    optimizer = _SimplerLMOptimizer.cast(None)
if isinstance(optimizer, (_Optimizer, dict)):
    optimizers = [optimizer] * len(circuit_lists)
elif not isinstance(optimizer, list):
    raise ValueError(f'Invalid argument for optimizer of type {type(optimizer)}; supported types are list, Optimizer, or dict.')
else:
    optimizers = optimizer

assert len(optimizers) == 1 or len(optimizers) == len(circuit_lists), \
    f'Optimizers must be length 1 or length {len(circuit_lists)=}'
if len(optimizers) == 1:  # a length-1 list means "use this optimizer for every iteration"
    optimizers = optimizers * len(circuit_lists)

# Cast any dict entries into Optimizer instances.
optimizers = [opt if isinstance(opt, _Optimizer) else _SimplerLMOptimizer.cast(opt)
              for opt in optimizers]
comm = resource_alloc.comm
profiler = resource_alloc.profiler
printer = VerbosityPrinter.create_printer(verbosity, comm)
@@ -872,8 +892,8 @@ def _max_array_types(artypes_list): # get the maximum number of each array type

#These lines were previously in the loop below, but we should be able to move it out from there so we can use it
#in precomputing layouts:
method_names = optimizer.called_objective_methods
array_types = optimizer.array_types + \
method_names = optimizers[0].called_objective_methods
array_types = optimizers[0].array_types + \
_max_array_types([builder.compute_array_types(method_names, mdl.sim)
for builder in iteration_objfn_builders + final_objfn_builders])

@@ -929,11 +949,11 @@ def _max_array_types(artypes_list): # get the maximum number of each array type
for j, obj_fn_builder in enumerate(iteration_objfn_builders):
tNxt = _time.time()
if i == 0 and j == 0: # special case: in first optimization run, use "first_fditer"
first_iter_optimizer = _copy.deepcopy(optimizer) # use a separate copy of optimizer, as it
first_iter_optimizer.fditer = optimizer.first_fditer # is a persistent object (so don't modify!)
first_iter_optimizer = _copy.deepcopy(optimizers[i]) # use a separate copy of optimizer, as it
first_iter_optimizer.fditer = optimizers[i].first_fditer # is a persistent object (so don't modify!)
opt_result, mdc_store = run_gst_fit(mdc_store, first_iter_optimizer, obj_fn_builder, printer - 1)
else:
opt_result, mdc_store = run_gst_fit(mdc_store, optimizer, obj_fn_builder, printer - 1)
opt_result, mdc_store = run_gst_fit(mdc_store, optimizers[i], obj_fn_builder, printer - 1)
profiler.add_time('run_iterative_gst: iter %d %s-opt' % (i + 1, obj_fn_builder.name), tNxt)

tNxt = _time.time()
@@ -946,7 +966,7 @@ def _max_array_types(artypes_list): # get the maximum number of each array type
for j, obj_fn_builder in enumerate(final_objfn_builders):
tNxt = _time.time()
mdl.basis = start_model.basis
opt_result, mdc_store = run_gst_fit(mdc_store, optimizer, obj_fn_builder, printer - 1)
opt_result, mdc_store = run_gst_fit(mdc_store, optimizers[i], obj_fn_builder, printer - 1)
profiler.add_time('run_iterative_gst: final %s opt' % obj_fn_builder.name, tNxt)
tNxt = _time.time()
printer.log("Final optimization took %.1fs\n" % (tNxt - tRef), 2)
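A minimal usage sketch of the new per-iteration optimizer list accepted by `iterative_gst_generator`, adapted from the unit tests later in this PR. The names `ds`, `start_model`, and `circuit_lists` are placeholders for a DataSet, a seed Model, and the list of circuit lists:

    from pygsti.algorithms import core

    # One optimizer dict per GST iteration: a loose, cheap fit for the first
    # (shortest) circuit list and a tight fit for the last one.
    per_iter_optimizers = [
        {'tol': 1e-2, 'maxiter': 20},   # iteration 1: coarse
        {'tol': 1e-8, 'maxiter': 150},  # iteration 2: fine
    ]

    gen = core.iterative_gst_generator(
        ds, start_model, circuit_lists,     # placeholder data, seed model, circuits
        optimizer=per_iter_optimizers,      # also accepts a single Optimizer or dict
        iteration_objfn_builders=['chi2'],
        final_objfn_builders=['logl'],
        resource_alloc=None, verbosity=1)

    # The first element of each yielded tuple is that iteration's model estimate.
    models = [next(gen)[0] for _ in range(len(circuit_lists))]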
56 changes: 51 additions & 5 deletions pygsti/protocols/gst.py
@@ -21,7 +21,7 @@

import numpy as _np
from scipy.stats import chi2 as _chi2
from typing import Optional
from typing import Optional, Union, Any

from pygsti.baseobjs.profiler import DummyProfiler as _DummyProfiler
from pygsti.baseobjs.nicelyserializable import NicelySerializable as _NicelySerializable
@@ -1314,7 +1314,8 @@ def __init__(self, initial_model=None, gaugeopt_suite='stdgaugeopt',
self.unreliable_ops = ('Gcnot', 'Gcphase', 'Gms', 'Gcn', 'Gcx', 'Gcz')

def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=None, disable_checkpointing=False,
simulator: Optional[ForwardSimulator.Castable]=None):
simulator: Optional[ForwardSimulator.Castable]=None,
optimizers: Optional[Union[_opt.Optimizer, dict, list[_opt.Optimizer], list[dict]]] = None):
"""
Run this protocol on `data`.

@@ -1351,11 +1352,19 @@ def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=N
Ignored if None. If not None, then we call
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.

optimizers : Optimizer, or dict, or list of Optimizer, or list of dict (default None)
The optimizer to use, or a dictionary of optimizer parameters
from which a default optimizer can be built. If a list, its length
must be either 1 or equal to the number of iterations. If length 1,
the single optimizer is used for every iteration; otherwise each
optimizer is used for its corresponding iteration.

Returns
-------
ModelEstimateResults
"""
from pygsti.forwardsims.matrixforwardsim import MatrixForwardSimulator as _MatrixFSim
tref = _time.time()

profile = self.profile
@@ -1388,6 +1397,35 @@ def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=N
data.dataset, comm)
if simulator is not None:
mdl_start.sim = simulator

if optimizers is None:
[Review comment - Contributor] This logic is repeated from the algorithms/core.iterative_gst_generator function. I think we should refactor to call the same utility helper function in both cases. That way if we update the allowed types to pass in we will only need to update the logic in one location.

[Reply - Contributor] I'll take this as an opportunity to soap box against DRY in this particular instance, and instead advocate for LoB (https://htmx.org/essays/locality-of-behaviour/).

optimizers = [self.optimizer]*len(circuit_lists)

else:
    if isinstance(optimizers, (_opt.Optimizer, dict)):
        optimizers = [optimizers] * len(circuit_lists)
    elif not isinstance(optimizers, list):
        raise ValueError(f'Invalid argument for optimizers of type {type(optimizers)}; supported types are list, Optimizer, or dict.')
    temp_optimizers = []
    default_first_fditer = 1 if mdl_start and isinstance(mdl_start.sim, _MatrixFSim) else 0
    for optimizer in optimizers:
        if isinstance(optimizer, _opt.Optimizer):
            temp_optimizer = _copy.deepcopy(optimizer)  # don't mess with caller's optimizer
            if hasattr(optimizer, 'first_fditer') and optimizer.first_fditer is None:
                # special behavior: can set optimizer's first_fditer to `None` to mean "fill with default"
                temp_optimizer.first_fditer = default_first_fditer
        else:
            if optimizer is None:
                temp_optimizer = {}
            else:
                temp_optimizer = _copy.deepcopy(optimizer)  # don't mess with caller's optimizer
            if 'first_fditer' not in temp_optimizer:  # then add the default first_fditer value
                temp_optimizer['first_fditer'] = default_first_fditer
        temp_optimizers.append(_opt.SimplerLMOptimizer.cast(temp_optimizer))
    optimizers = temp_optimizers

if disable_checkpointing:
seed_model = mdl_start.copy()
@@ -1447,7 +1485,7 @@ def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=N
#Run Long-sequence GST on data
#Use the generator based version and query each of the intermediate results.
gst_iter_generator = _alg.iterative_gst_generator(
ds, seed_model, bulk_circuit_lists, self.optimizer,
ds, seed_model, bulk_circuit_lists, optimizers,
self.objfn_builders.iteration_builders, self.objfn_builders.final_builders,
resource_alloc, starting_idx, printer)

@@ -1816,7 +1854,8 @@ def __init__(self, modes=('full TP','CPTPLND','Target'), gaugeopt_suite='stdgaug
self.starting_point = {} # a dict whose keys are modes

def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=None,
disable_checkpointing=False, simulator: Optional[ForwardSimulator.Castable]=None):
disable_checkpointing=False, simulator: Optional[ForwardSimulator.Castable]=None,
optimizers: Optional[Union[_opt.Optimizer, dict, list[_opt.Optimizer], list[dict]]] = None):
"""
Run this protocol on `data`.

@@ -1854,6 +1893,13 @@ def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=N
fwdsim = ForwardSimulator.cast(simulator),
and we set the .sim attribute of every Model we encounter to fwdsim.

optimizers : Optimizer, or dict, or list of Optimizer, or list of dict (default None)
The optimizer to use, or a dictionary of optimizer parameters
from which a default optimizer can be built. If a list, its length
must be either 1 or equal to the number of iterations. If length 1,
the single optimizer is used for every iteration; otherwise each
optimizer is used for its corresponding iteration.

Returns
-------
ProtocolResults
@@ -1977,7 +2023,7 @@ def run(self, data, memlimit=None, comm=None, checkpoint=None, checkpoint_path=N
result = gst.run(data, memlimit, comm,
disable_checkpointing=disable_checkpointing,
checkpoint=child_checkpoint,
checkpoint_path=checkpoint_path)
checkpoint_path=checkpoint_path, optimizers=optimizers)
ret.add_estimates(result)

return ret
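A hedged sketch of the corresponding protocol-level `optimizers` argument, mirroring the unit tests below. Here `data` is a placeholder for a ProtocolData object:

    from pygsti.modelpacks import smq1Q_XYI
    from pygsti.protocols import gst

    proto = gst.GateSetTomography(smq1Q_XYI.target_model("CPTPLND"),
                                  'stdgaugeopt', name="testGST")

    # A single dict of optimizer parameters applies to every iteration...
    results = proto.run(data, optimizers={'tol': 1e-5})

    # ...or pass one optimizer per circuit list (i.e., per GST iteration).
    n_iters = len(data.edesign.circuit_lists)
    results = proto.run(data, optimizers=[{'tol': 1e-5}] * n_iters)

    final_model = results.estimates["testGST"].models['stdgaugeopt']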
43 changes: 43 additions & 0 deletions test/unit/algorithms/test_core.py
@@ -451,3 +451,46 @@ def test_iterative_gst_generator_starting_index(self):

#Make sure we get the same result in both cases.
self.assertArraysAlmostEqual(models[-1].to_vector(), models1[-1].to_vector())

def test_iterative_gst_generator_optimizers_list(self):
#Test that passing a different optimizer per iteration works as intended
tols = [1e1, 1e-8]
maxiters = [10, 150]
assert len(self.lsgstStrings) == len(tols), f'If you change {self.lsgstStrings=}, this unit test must be modified to account for it'

#First create substantially different optimizers, one per iteration
optimizers = [{'tol': tols[i], 'maxiter': maxiters[i]} for i in range(len(self.lsgstStrings))]

generator_optimizers = core.iterative_gst_generator(
self.ds, self.mdl_clgst, self.lsgstStrings,
optimizer=optimizers,
iteration_objfn_builders=['chi2'],
final_objfn_builders=['logl'],
resource_alloc=None, verbosity=4
)

models1 = []
models0 = []
#loop over all iterations
for j in range(len(self.lsgstStrings)):
models0.append(next(generator_optimizers)[0])

#create a gst generator for the iteration that we are in,
#to be compared with generator
generator_step = core.iterative_gst_generator(
self.ds, self.mdl_clgst, self.lsgstStrings,
optimizer={'tol': tols[j], 'maxiter': maxiters[j]},
iteration_objfn_builders=['chi2'],
final_objfn_builders=['logl'],
resource_alloc=None, verbosity=4,
starting_index=j
)

models1.append(next(generator_step)[0])

self.assertArraysAlmostEqual(models0[-1].to_vector(), models1[-1].to_vector())

46 changes: 44 additions & 2 deletions test/unit/protocols/test_gst.py
@@ -13,6 +13,7 @@
from pygsti.tools import two_delta_logl
from ..util import BaseCase
import pytest
import numpy as _np


class GSTUtilTester(BaseCase):
@@ -232,7 +233,7 @@ def _bulk_fill_probs_atom(self, array_to_fill, layout_atom, resource_alloc):
super(MapForwardSimulatorWrapper, self)._bulk_fill_probs_atom(array_to_fill, layout_atom, resource_alloc)


class TestGateSetTomography(BaseProtocolData):
class GateSetTomographyTester(BaseProtocolData):
"""
Tests for methods in the GateSetTomography class.

@@ -248,6 +249,27 @@ def test_run(self):
twoDLogL = two_delta_logl(mdl_result, self.gst_data.dataset)
assert twoDLogL <= 1.0 # should be near 0 for perfect data

def test_optimizer_list_run(self):
self.setUpClass()

optimizer = {'tol': 1e-5}

proto1 = gst.GateSetTomography(smq1Q_XYI.target_model("CPTPLND"), 'stdgaugeopt', name="testGST", optimizer=optimizer)
results1 = proto1.run(self.gst_data)
results2 = proto1.run(self.gst_data, optimizers=optimizer)

mdl_result1 = results1.estimates["testGST"].models['stdgaugeopt']
mdl_result2 = results2.estimates["testGST"].models['stdgaugeopt']

assert _np.allclose(mdl_result1.to_vector(), mdl_result2.to_vector())

#Test that we can pass a list
optimizers = [optimizer]*len(self.gst_data.edesign.circuit_lists)
results3 = proto1.run(self.gst_data, optimizers=optimizers)
mdl_result3 = results3.estimates["testGST"].models['stdgaugeopt']
assert _np.allclose(mdl_result3.to_vector(), mdl_result1.to_vector())


def test_run_custom_sim(self, capfd: pytest.LogCaptureFixture):
self.setUpClass()
proto = gst.GateSetTomography(smq1Q_XYI.target_model("CPTPLND"), 'stdgaugeopt', name="testGST")
@@ -317,7 +339,7 @@ def test_write_and_read_to_dir(self):
assert proto_read.name == proto.name
assert proto_read.badfit_options.actions == proto.badfit_options.actions

class TestStandardGST(BaseProtocolData):
class StandardGSTTester(BaseProtocolData):
"""
Tests for methods in the StandardGST class.

@@ -357,6 +379,26 @@ def _test_run_custom_sim(self, mode, parent_capfd, check_output):
for model in estimate.models.values():
assert isinstance(model, MapForwardSimulatorWrapper)
pass

def test_optimizer_list_run(self):
self.setUpClass()

optimizer = {'tol': 1e-5}

proto1 = gst.StandardGST(modes=["full TP","CPTPLND","Target"])
results1 = proto1.run(self.gst_data)
results2 = proto1.run(self.gst_data, optimizers=optimizer)
#Test that we can pass a list
optimizers = [optimizer]*len(self.gst_data.edesign.circuit_lists)
results3 = proto1.run(self.gst_data, optimizers=optimizers)
for mode in ["full TP","CPTPLND","Target"]:
mdl_result1 = results1.estimates[mode].models['stdgaugeopt']
mdl_result2 = results2.estimates[mode].models['stdgaugeopt']
mdl_result3 = results3.estimates[mode].models['stdgaugeopt']

assert _np.allclose(mdl_result1.to_vector(), mdl_result2.to_vector())
assert _np.allclose(mdl_result3.to_vector(), mdl_result1.to_vector())

[Review comment - Contributor] Did you also want to test if we can have different optimizer settings for each iteration of the GST experiment?

[Reply - Contributor Author (@juangmendoza19), Dec 1, 2025] I can add a test to check that it runs, but I don't see an easy way to test that it's valid. At that point it might not have any more value than the

    results3 = proto1.run(self.gst_data, optimizers=optimizers)

test.

def test_write_and_read_to_dir(self):
#integration test to at least confirm we are writing and reading
@@ -0,0 +1,17 @@
{
"module": "pygsti.protocols.gst",
"class": "GSTBadFitOptions",
"version": 0,
"threshold": 2.0,
"actions": [],
"wildcard": {
"budget_includes_spam": true,
"L1_weights": null,
"primitive_op_labels": null,
"initial_budget": null,
"methods": [
"neldermead"
],
"indadmissable_action": "print"
}
}
@@ -0,0 +1,10 @@
{
"module": "pygsti.protocols.gst",
"class": "GSTGaugeOptSuite",
"version": 0,
"gaugeopt_suite_names": [
"stdgaugeopt"
],
"gaugeopt_argument_dicts": {},
"gaugeopt_target": null
}