Identifying Hardware Changes


You've run your circuit with Google's Quantum Computing Service and you're getting results that unexpectedly differ from those you saw when you ran your experiment last week. What's the cause of this and what can you do about it?

Your experience may be due to changes in the device that have occurred since the most recent maintenance Calibration. Every few days, the QCS devices are calibrated for the highest performance across all of their available qubits and operations. However, in the hours or days since the most recent maintenance calibration, the performance of the device hardware may have changed significantly, affecting your circuit's results.

The rest of this tutorial describes these hardware changes, demonstrates how to collect error metrics for identifying whether changes have occurred, and provides some examples of how you can compare your metric results to select the most performant qubits for your circuit. For further reading on qubit picking methodology, see the Best Practices guide and the Qubit Picking with Loschmidt Echoes tutorial. The method presented in the Loschmidt Echoes tutorial is an alternative way to identify hardware changes.

Hardware Changes

Hardware changes occur both in the qubits themselves and in the control electronics used to drive gates and measure the state of the qubits. As analog devices, both the qubits and control electronics are subject to interactions with their environment that manifest as meaningful changes to the qubits' gate or readout fidelity.

Quantum processors based on frequency tunable superconducting qubits use a direct current (DC) bias current to set the frequency of the qubits' \(|0\rangle\) state to \(|1\rangle\) state transition. These DC biases are generated by classical analog control electronics, where resistors and other components can be affected by environmental temperature changes in an interaction called thermal drift. Uncompensated thermal drift results in a change in the qubit's transition frequency, which can cause unintended state transitions in the qubits during circuit execution or incorrect readout of the qubits' state. These manifest as changes to the error rates associated with gate and readout operations.
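
As a rough illustration of why even small frequency drift matters, a detuning \(\Delta f\) causes the \(|1\rangle\) state to accumulate phase \(2\pi \Delta f \, t\) relative to \(|0\rangle\). The numbers below are hypothetical, chosen only to show the scale of the effect, and do not reflect any particular device.

```python
import math

# Hypothetical numbers for illustration only; real drift magnitudes vary by device.
detuning_hz = 100e3  # 100 kHz of uncompensated frequency drift
idle_time_s = 25e-9  # ~25 ns, on the scale of a two-qubit gate duration

# Phase (in radians) accumulated by the |1> state relative to |0> while idling.
phase_error = 2 * math.pi * detuning_hz * idle_time_s
print(f"accumulated phase error: {phase_error:.4f} rad")
```

Even this small detuning produces a nonzero coherent phase error per gate, and such errors compound over the depth of a circuit.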

Additionally, the qubits may unexpectedly couple to other local energy systems and exchange energy with or lose energy to them. Because a qubit is only able to identify the presence of two levels in the parasitic local system, these interacting states are often referred to as two-level systems (TLS). While the exact physical origin of these states is unknown, defects in the hardware materials are a plausible explanation. It has been observed that interactions with these TLS can result in coherence fluctuations in time and frequency, again causing unintended state transitions or incorrect readouts, affecting error rates.

For more information on DC Bias and TLS and how they affect the devices, see arXiv:1809.01043.

Qubit Error Metrics

There are many Calibration Metrics available to measure gate and readout error rates and see if they have changed. The Visualizing Calibration Metrics tutorial demonstrates how to collect and visualize each of these available metrics. You can apply the comparison methods presented in this tutorial to any such metric, but the examples below focus on the two following metrics:

  • two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle: This metric captures the estimated probability for the quantum state on two neighboring qubits to depolarize (as if a Pauli gate were applied to either or both qubits) after applying a \(\sqrt{i\mathrm{SWAP} }\) gate. This metric includes some coherent error, such as error introduced by the control hardware. It is computed using Cross Entropy Benchmarking (XEB), both during maintenance calibration and in this tutorial.
  • parallel_p11_error: This metric estimates the probability that a readout register fails to measure a \(|1\rangle\) state on a qubit that was prepared in the \(|1\rangle\) state. The Simultaneous Readout experiment used to collect this metric evaluates all of the qubits in parallel.
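
To make the p11 metric concrete, here is a minimal offline sketch of how such an error rate can be estimated from repeated measurements. The bit-flip probability and the sampling loop are stand-ins for the real hardware experiment, not the Cirq API itself.

```python
import numpy as np

rng = np.random.default_rng(0)

true_p11_error = 0.04  # hypothetical probability of misreading a prepared |1> as 0
repetitions = 20_000

# Prepare |1> each repetition; a readout error flips the recorded bit to 0.
read_one = rng.random(repetitions) >= true_p11_error  # True means correctly read 1

# The p11 error estimate is the fraction of repetitions that read 0 instead.
estimated_p11_error = 1.0 - read_one.mean()
print(f"estimated p11 error: {estimated_p11_error:.4f}")
```

With enough repetitions, the estimate converges on the underlying error rate; the statistical uncertainty shrinks as \(1/\sqrt{\text{repetitions}}\).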

Disclaimer: The data shown in this tutorial is an example and not representative of the QCS in production.

Data Collection

Setup

First, install Cirq and import the necessary packages.

try:
    import cirq
except ImportError:
    !pip install --quiet cirq --pre
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

import cirq
import cirq_google as cg

Next, authenticate to use the Quantum Computing Service with a project_id and processor_id, and get a sampler to run your experiments. Set the number of repetitions you'll use for all experiments.

from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook

# Set key variables
project_id = "your_project_id_here"
processor_id = "your_processor_id_here"
repetitions = 2000

# Get device sampler
qcs_objects = get_qcs_objects_for_notebook(project_id=project_id, processor_id=processor_id)

device = qcs_objects.device
sampler = qcs_objects.sampler

# Get qubit set
qubits = device.qubit_set()

# Limit device qubits to only those before row/column `device_limit`
device_limit = 10
qubits = {qb for qb in qubits if qb.row < device_limit and qb.col < device_limit}

# Visualize the qubits on a grid by putting them in a throwaway device object used only for this print statement
print(cg.devices.XmonDevice(0,0,0,qubits))
Getting OAuth2 credentials.
Press enter after entering the verification code.
Go to the following link in your browser:

 Your link will appear here

Enter verification code: There will be a box to enter your code here
Authentication complete.

                  (3, 2)
                  │
                  │
         (4, 1)───(4, 2)───(4, 3)
         │        │        │
         │        │        │
(5, 0)───(5, 1)───(5, 2)───(5, 3)───(5, 4)
         │        │        │        │
         │        │        │        │
         (6, 1)───(6, 2)───(6, 3)───(6, 4)───(6, 5)
                  │        │        │        │
                  │        │        │        │
                  (7, 2)───(7, 3)───(7, 4)───(7, 5)───(7, 6)
                           │        │        │
                           │        │        │
                           (8, 3)───(8, 4)───(8, 5)
                                    │
                                    │
                                    (9, 4)

Maintenance Calibration Data

Query for the calibration data with cirq_google.get_engine_calibration, select the two metrics by name from the calibration object, and visualize them with its plot() method.

# Retrieve maintenance calibration data.
calibration = cg.get_engine_calibration(processor_id=processor_id)

# Heatmap the two metrics.
two_qubit_gate_metric = "two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle"
readout_metric = "parallel_p11_error"

# Plot heatmaps with integrated histogram
calibration.plot(two_qubit_gate_metric, fig=plt.figure(figsize=(22, 10)))
calibration.plot(readout_metric, fig=plt.figure(figsize=(22, 10)))
(<Figure size 1584x720 with 3 Axes>,
 array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f9d52136410>,
        <matplotlib.axes._subplots.AxesSubplot object at 0x7f9d520e98d0>],
       dtype=object))

(Output images: heatmaps with integrated histograms for the two-qubit XEB Pauli error and parallel p11 error metrics.)

You may have already seen this existing maintenance calibration data when you did qubit selection in the first place. Next, you'll run device characterization experiments to collect the same data metrics from the device, to see if their values have changed since the previous calibration.

Current Two-Qubit Metric Data with XEB

This section is a shortened version of the Parallel XEB tutorial, which runs characterization experiments to collect data for the two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle metric. First, generate a library of two-qubit circuits using the \(\sqrt{i\mathrm{SWAP} }\) gate. These circuits will be run in parallel in larger circuits according to combinations_by_layer.

"""Setup for parallel XEB experiment."""
from cirq.experiments import random_quantum_circuit_generation as rqcg
from itertools import combinations

random_seed = 52

# Generate library of two-qubit XEB circuits.
circuit_library = rqcg.generate_library_of_2q_circuits(
    n_library_circuits=20, 
    two_qubit_gate=cirq.SQRT_ISWAP,
    random_state=random_seed,
)

device_graph = nx.Graph((q1,q2) for (q1,q2) in combinations(qubits, 2) if q1.is_adjacent(q2))

# Generate different possible pairs of qubits, and randomly assign circuits (by index) to them, n_combinations times.
combinations_by_layer = rqcg.get_random_combinations_for_device(
    n_library_circuits=len(circuit_library),
    n_combinations=10,
    device_graph=device_graph,
    random_state=random_seed,
)
# Prepare the circuit depths the circuits will be truncated to. 
cycle_depths = np.arange(3, 100, 20)

Then, run the circuits on the device, combining them into larger circuits and truncating the circuits by length, with cirq.experiments.xeb_sampling.sample_2q_xeb_circuits.

Afterwards, run the same circuits on a noiseless simulator and compare them to the sampled results. Finally, fit the collected data to an exponential decay curve to estimate the error rate per application of each two-qubit \(\sqrt{i\mathrm{SWAP} }\) gate.

"""Collect all data by executing circuits."""
from cirq.experiments.xeb_sampling import sample_2q_xeb_circuits
from cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities, fit_exponential_decays

# Run XEB circuits on the processor.
sampled_df = sample_2q_xeb_circuits(
    sampler=sampler,
    circuits=circuit_library,
    cycle_depths=cycle_depths,
    combinations_by_layer=combinations_by_layer,
    shuffle=np.random.RandomState(random_seed),
    repetitions=repetitions,
)

# Run XEB circuits on a simulator and fit exponential decays to get fidelities.
fidelity_data = benchmark_2q_xeb_fidelities(
    sampled_df=sampled_df,
    circuits=circuit_library,
    cycle_depths=cycle_depths,
)
fidelities = fit_exponential_decays(fidelity_data)

# Grab (pair, sqrt_iswap_pauli_error_per_cycle) data for all qubit pairs.
pxeb_results = {
    pair: (1.0 - fidelity) / (4 / 3)  # Scale factor to convert to Pauli error
    for (_, _, pair), fidelity in fidelities.layer_fid.items()
}
100%|██████████| 207/207 [06:47<00:00,  1.97s/it]
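
The exponential-decay fit at the heart of this estimate can be illustrated offline: given per-cycle fidelities of the form \(f(d) = a \cdot p^d\), fitting recovers the decay constant \(p\), from which an error per cycle follows. This sketch uses scipy's curve_fit on made-up numbers, not device data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(52)

# Synthetic "XEB fidelity vs cycle depth" data: f(d) = a * p**d plus a little noise.
true_a, true_p = 1.0, 0.99
cycle_depths = np.arange(3, 100, 20)
fidelities = true_a * true_p**cycle_depths + rng.normal(0, 1e-3, cycle_depths.size)

def decay(d, a, p):
    return a * p**d

(a_fit, p_fit), _ = curve_fit(decay, cycle_depths, fidelities, p0=(1.0, 0.99))

# Error per cycle is the per-cycle loss of fidelity implied by the fit.
error_per_cycle = 1.0 - p_fit
print(f"fitted p = {p_fit:.4f}, error per cycle = {error_per_cycle:.4f}")
```

Because the decay is exponential, a handful of well-spaced cycle depths is enough to pin down \(p\) accurately, which is why the tutorial samples depths from 3 to 100 in steps of 20.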

Current Readout Metric Data with Simultaneous Readout

To evaluate performance changes in the readout registers, collect the Parallel P11 error data for each qubit with the Simultaneous Readout experiment, accessible with cirq.estimate_parallel_single_qubit_readout_errors. This function runs the experiment to estimate P00 and P11 errors for each qubit (as opposed to querying for the most recent calibration data). The experiment prepares each qubit in the \(|0\rangle\) and \(|1\rangle\) states, measures them, and evaluates how often the qubits are measured in the expected state.

# Run experiment
sq_result = cirq.estimate_parallel_single_qubit_readout_errors(sampler, qubits=qubits, repetitions=repetitions)

# Use P11 errors
p11_results = sq_result.one_state_errors

Heatmap Comparisons

For each metric, plot the calibration and collected characterization data side by side, on the same scale. Also plot the difference between the two datasets (on a different scale).

Two-Qubit Metric Heatmap Comparison

from matplotlib.colors import LogNorm

# Plot options. You may need to change these if your data shows a lot of the same colors.
vmin = 5e-3
vmax = 3e-2
options = {"norm": LogNorm()}
format = "0.3f"

fig, (ax1,ax2,ax3) = plt.subplots(ncols=3, figsize=(30, 9))

# Calibration two qubit data
calibration.heatmap(two_qubit_gate_metric).plot(
  ax=ax1, title="Calibration", vmin=vmin, vmax=vmax, 
  collection_options=options, annotation_format=format,
)
# Current two qubit data
cirq.TwoQubitInteractionHeatmap(pxeb_results).plot(
  ax=ax2, title="Current", vmin=vmin, vmax=vmax, 
  collection_options=options, annotation_format=format,
)

# Calculate difference in two-qubit metric
twoq_diffs = {}
for pair,calibration_err in calibration[two_qubit_gate_metric].items():
    # The order of the qubits in the result dictionary keys is sometimes swapped. Eg: (Q1,Q2):0.04 vs (Q2,Q1):0.06 
    if pair in pxeb_results:
        characterization_err = pxeb_results[pair]
    else:
        characterization_err = pxeb_results[tuple(reversed(pair))]
    twoq_diffs[pair] = characterization_err - calibration_err[0]

# Two qubit difference data
cirq.TwoQubitInteractionHeatmap(twoq_diffs).plot(
    ax=ax3, title='Difference in Two Qubit Metrics',
    annotation_format=format,
)

# Add titles
plt.figtext(0.5,0.97, two_qubit_gate_metric.replace("_"," ").title(), ha="center", va="top", fontsize=14)
Text(0.5, 0.97, 'Two Qubit Parallel Sqrt Iswap Gate Xeb Pauli Error Per Cycle')

(Output image: side-by-side heatmaps of the calibration and current two-qubit metric data, plus their difference.)

The large number of zero or negative values (green and darker colors) in the difference heatmap indicates that the device's two-qubit \(\sqrt{i\mathrm{SWAP} }\) gates have improved noticeably across the device. In fact, only a couple of qubit pairs towards the bottom of the device have worsened since the previous calibration.

You should try to make use of the qubit pairs \((Q(5,2),Q(5,3))\) and \((Q(5,1),Q(6,1))\), which were previously average but have become the most reliable \(\sqrt{i\mathrm{SWAP} }\) gates in the device.

Qubit pairs \((Q(6,2),Q(7,2))\), \((Q(7,2),Q(7,3))\) and especially \((Q(6,4),Q(7,4))\), were the worst qubit pairs on the device, but have improved so significantly that they are within an acceptable range of \(0.010\) to \(0.016\) Pauli error. You may not need to avoid them now, if you were previously.

It's important to note that, if you have the option to use a consistently high-reliability qubit or qubit pair instead of one that demonstrates inconsistent performance, you should do so. For example, qubit pairs \((Q(5,1),Q(5,2))\) and \((Q(5,2),Q(6,2))\) have not changed much, are still around \(0.010\) Pauli error, and happen to be near the other two good qubit pairs mentioned earlier, making them good candidates for inclusion.
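
One way to fold both criteria, low current error and small drift since calibration, into a single ranking is a simple weighted score. The weighting below is arbitrary and the error values are invented stand-ins for pxeb_results and the calibration data (real keys would be tuples of cirq.GridQubit); adjust the weight to your own tolerance for inconsistency.

```python
# Hypothetical (pair -> error) dictionaries standing in for the current
# characterization results and the maintenance calibration values.
current_errors = {"A-B": 0.008, "B-C": 0.012, "C-D": 0.010}
calibration_errors = {"A-B": 0.020, "B-C": 0.011, "C-D": 0.010}

drift_weight = 0.5  # arbitrary: how much inconsistency counts against a pair

def score(pair):
    current = current_errors[pair]
    drift = abs(current - calibration_errors[pair])
    return current + drift_weight * drift

# Lower score is better: low current error, penalized for large drift.
ranked = sorted(current_errors, key=score)
print(ranked)
```

Here the pair with the lowest current error but large drift ranks last, reflecting the preference for consistent performers described above.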

Readout Metric Heatmap Comparisons

# Plot options, with different vmin and vmax for readout data.
vmin = 3e-2
vmax = 1.1e-1
options = {"norm": LogNorm()}
format = "0.3f"

fig, (ax1,ax2,ax3) = plt.subplots(ncols=3, figsize=(30, 9))

# Calibration readout data
calibration.heatmap(readout_metric).plot(
  ax=ax1, title="Calibration", vmin=vmin, vmax=vmax, 
  collection_options=options, annotation_format=format,
)

# Current readout data
cirq.Heatmap(p11_results).plot(
  ax=ax2, title="Current", vmin=vmin, vmax=vmax, 
  collection_options=options, annotation_format=format,
)

# Collect difference in readout metrics
readout_diffs = {q[0]:p11_results[q[0]] - err[0] for q,err in calibration[readout_metric].items()}

# Readout difference data
cirq.Heatmap(readout_diffs).plot(
    ax=ax3, title='Difference in Readout Metrics',
    annotation_format=format,
)

# Add title
plt.figtext(0.5,0.97, readout_metric.replace("_"," ").title(), ha="center", va="top", fontsize=14)
Text(0.5, 0.97, 'Parallel P11 Error')

(Output image: side-by-side heatmaps of the calibration and current readout metric data, plus their difference.)

The readout data shows more varied results than the two-qubit data. Many of the qubits have not changed significantly, but a few have, by a large margin.

Qubit \(Q(5,0)\) has improved massively, but is still among the least reliable qubits on the device for readouts. Qubit \(Q(7,2)\) has also improved, but is still quite high in Pauli error. Qubit \(Q(5,4)\) was previously one of the best qubits to perform readout on, but has since deteriorated to be the second worst since the most recent maintenance calibration. Qubits \(Q(7,3)\) and \(Q(9,4)\) have improved meaningfully, becoming some of the best readout qubits available.

Again, it is valuable to find reliable qubits that didn't demonstrate significant change. In this case qubits \(Q(4,1)\), \(Q(4,2)\), and \(Q(5,1)\) have not changed much but remain among the better qubits available for readout.
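
A programmatic version of this qubit triage could threshold on both the current error and the change since calibration. Everything below is a toy stand-in for p11_results and the calibration values, with invented numbers and string labels in place of cirq.GridQubit keys; the thresholds are assumptions you would tune to your circuit's error budget.

```python
# Toy stand-ins for p11_results and the calibration values, keyed by qubit label.
current_p11 = {"Q(4,1)": 0.03, "Q(5,0)": 0.12, "Q(5,4)": 0.10, "Q(9,4)": 0.02}
calibrated_p11 = {"Q(4,1)": 0.032, "Q(5,0)": 0.25, "Q(5,4)": 0.03, "Q(9,4)": 0.05}

max_error = 0.05  # reject qubits whose current readout error is too high
max_drift = 0.04  # reject qubits that changed a lot, even if currently good

good_qubits = sorted(
    q
    for q, err in current_p11.items()
    if err <= max_error and abs(err - calibrated_p11[q]) <= max_drift
)
print(good_qubits)
```

Note that a qubit like the stand-in for \(Q(5,0)\) is rejected even though it improved, because its current error is still above threshold, matching the reasoning in the prose above.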

What's Next?

You've selected better candidate qubits for your circuit, based on updated information about the device. What else can you do for further improvements?

  • You need to map your actual circuit's logical qubits to your selected hardware qubits. This is in general a difficult problem, and the best solution can depend on the specific structure of the circuit to be run. Take a look at the Qubit Picking with Loschmidt Echoes tutorial, which estimates the error rates of gates for your specific circuit. Also, consider Best Practices#qubit picking for additional advice on this.
  • The Optimization, Alignment, and Spin Echoes tutorial provides resources on how you can improve the reliability of your circuit by: optimizing away redundant or low-impact gates, aligning gates into moments with others of the same type, and preventing decay on idle qubits by adding spin echoes.
  • Beyond qubit picking, you can also use calibration for error compensation. The Coherent vs incoherent noise with XEB, XEB Calibration Example, Parallel XEB, and Isolated XEB tutorials demonstrate how to run a classical optimizer on collected two-qubit gate characterization data, identify the true unitary matrix implemented by each gate, and add virtual Pauli Z gates to compensate for the identified error, improving the reliability of your circuit.
  • You can also use the characterization data to improve the performance of large batches of experiment circuits. In this case, prepare your characterization ahead of running all your circuits, and use the data to compensate each circuit right before running it. See the Calibration FAQ for more information.