GPU-based quantum simulation on Google Cloud

In this tutorial, you configure and test a virtual machine (VM) to run GPU-based quantum simulations on Google Cloud. The instructions for compiling qsim with GPU support are also relevant if you are interested in running GPU simulations locally or on a different cloud platform.

Before starting this tutorial, we recommend reading the Choosing hardware guide to decide which type of GPU you would like to use and how many GPUs you will need. As discussed there, you have three options:

  1. using the native qsim GPU backend,
  2. using NVIDIA's cuQuantum as a backend for the latest version of qsim, or
  3. using cuQuantum Appliance, which runs in a Docker container and includes a modified version of qsim.

If you plan to run multi-GPU simulations, you will need to pick option 3. The steps that follow depend on which option you pick; the header for each step notes which of the three options it applies to.
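The simulator settings that each option implies (used in the verification step at the end of this tutorial) can be summarized as a small helper. This is illustrative only; `gpu_simulator_kwargs` is not part of qsimcirq, and the keyword names come from the `QSimOptions` usage shown later in this tutorial:

```python
# Map each option to the qsimcirq.QSimOptions arguments it implies.
# Hypothetical helper for illustration; not part of qsimcirq.
def gpu_simulator_kwargs(option, num_gpus=1):
    if option == 1:  # native qsim GPU backend
        return {"use_gpu": True, "gpu_mode": 0, "max_fused_gate_size": 4}
    if option == 2:  # cuQuantum (cuStateVec) backend
        return {"use_gpu": True, "gpu_mode": 1, "max_fused_gate_size": 4}
    if option == 3:  # cuQuantum Appliance (modified qsim)
        return {"disable_gpu": False, "gpu_mode": num_gpus, "max_fused_gate_size": 4}
    raise ValueError("option must be 1, 2, or 3")
```

You would then construct the simulator with `qsimcirq.QSimOptions(**gpu_simulator_kwargs(option))`.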

1. Create a virtual machine (Options 1, 2, and 3)

Follow the instructions in the Quickstart using a Linux VM guide to create a VM. In addition to the guidance specified in the Create a Linux VM instance section, ensure that your VM has the following properties:

  • In the Machine Configuration section:
    1. Select the tab for the GPU machine family.
    2. Select the GPU Type and Number of GPUs that you would like to use.
  • In the Boot disk section, click the Change button:

    1. In the Operating System option, choose Ubuntu.
    2. In the Version option, choose 20.04 LTS.
    3. In the Size field, enter 40 (minimum).

    Alternatively, you can click the "Switch Image" button and use the image with CUDA pre-installed, which lets you skip step 3. It has been verified that this works with cuQuantum Appliance (option 3).

  • The instructions above override steps 3 through 5 in the Create a Linux VM instance Quickstart.

  • In the Firewall section, ensure that both the Allow HTTP traffic and Allow HTTPS traffic checkboxes are selected.

When Google Cloud finishes creating the VM, you can see your VM listed in the Compute Instances dashboard for your project.


2. Prepare your computer (Options 1, 2, and 3)

Use SSH in the gcloud tool to communicate with your VM.

  1. Install the gcloud command line tool. Follow the instructions in the Installing Cloud SDK documentation.
  2. After installation, run the gcloud init command to initialize the Google Cloud environment. You need to provide the gcloud tool with details about your VM, such as the project name and the region where your VM is located.
    1. You can verify your environment by using the gcloud config list command.
  3. Connect to your VM by using SSH. Replace [YOUR_INSTANCE_NAME] with the name of your VM.

    gcloud compute ssh [YOUR_INSTANCE_NAME]

When the command completes successfully, your prompt changes from your local machine to your virtual machine.

3. Enable your virtual machine to use the GPU (Options 1, 2, and 3)

  1. Install the GPU driver. Complete the steps provided in the Installing GPU drivers guide.
  2. If missing, install the CUDA toolkit. (It may already have been installed along with the driver; you can check by verifying that the CUDA toolkit directory exists, as described in step 3.)

    sudo apt install -y nvidia-cuda-toolkit
  3. Add your CUDA toolkit to the environment search path (Options 1 and 2 only)

    1. Discover the directory of the CUDA toolkit that you installed.

      ls /usr/local

      The toolkit directory is the one with the highest version number matching the pattern cuda-XX.Y. The output of the command should resemble the following:

      bin cuda cuda-11 cuda-11.4 etc games include lib man sbin share src

      In this case, the directory is cuda-11.4.

    2. Add the CUDA toolkit path to your environment. You can run the following command to append the path to your ~/.bashrc file. Replace [DIR] with the CUDA directory that you discovered in the previous step.

      echo 'export PATH=/usr/local/[DIR]/bin${PATH:+:${PATH}}' >> ~/.bashrc
    3. Run source ~/.bashrc to activate the new environment search path.
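If you want to script the directory-discovery step above, the "highest cuda-XX.Y" rule can be sketched in Python. This is a sketch that only inspects directory names; in practice you would pass it `os.listdir("/usr/local")`:

```python
import re

def latest_cuda_dir(names):
    """Return the cuda-XX.Y entry with the highest version, or None."""
    best = None
    for name in names:
        m = re.fullmatch(r"cuda-(\d+)\.(\d+)", name)
        if m:
            version = (int(m.group(1)), int(m.group(2)))
            if best is None or version > best[0]:
                best = (version, name)
    return best[1] if best else None

# Example with the listing shown above:
names = ["bin", "cuda", "cuda-11", "cuda-11.4", "etc", "games", "include"]
print(latest_cuda_dir(names))  # prints "cuda-11.4"
```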

4. Install build tools (Options 1 and 2)

Install the tools required to build qsim. This step might take a few minutes to complete.

sudo apt install -y cmake python3-pip && pip3 install pybind11

5. Install cuQuantum SDK/cuStateVec (Option 2 only)

Reboot the VM. Then follow the instructions here to install cuQuantum; specifically, use the appropriate installer for your platform. Reboot the VM again. Then set the CUQUANTUM_DIR and CUQUANTUM_ROOT environment variables,

export CUQUANTUM_DIR=/opt/nvidia/cuquantum/
export CUQUANTUM_ROOT=/opt/nvidia/cuquantum/

modifying the above if cuQuantum was installed to a different directory.
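As a quick sanity check (a sketch under the assumption that both variables should point at the same install prefix), you can confirm the two variables are set and agree before building against cuQuantum:

```python
import os

def check_cuquantum_env(env):
    """Return the shared cuQuantum prefix, or raise if the vars are unset or disagree."""
    d = env.get("CUQUANTUM_DIR")
    r = env.get("CUQUANTUM_ROOT")
    if not d or not r:
        raise RuntimeError("CUQUANTUM_DIR and CUQUANTUM_ROOT must both be set")
    if os.path.normpath(d) != os.path.normpath(r):
        raise RuntimeError("CUQUANTUM_DIR and CUQUANTUM_ROOT disagree")
    return os.path.normpath(d)

# Against the real environment, you would call check_cuquantum_env(os.environ).
```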

6. Create a GPU-enabled version of qsim (Options 1 and 2 only)

  1. Reboot the VM (option 1).
  2. Clone the qsim repository.

    git clone https://github.com/quantumlib/qsim.git
  3. Run cd qsim to change your working directory to qsim.

  4. Run make to compile qsim. If make detects the CUDA toolkit during compilation, it automatically builds the GPU version of qsim.

  5. Run pip install . to install your local version of qsimcirq.

  6. Verify your qsim installation.

    python3 -c "import qsimcirq; print(qsimcirq.qsim_gpu)"

    If the installation completed successfully, the output from the command should resemble the following:

    <module 'qsimcirq.qsim_cuda' from '/home/user_org_com/qsim/qsimcirq/'>

7. Install Docker Engine (Option 3 only)

If you are setting up cuQuantum Appliance, follow these instructions to install Docker Engine.

8. Install NVIDIA Container Toolkit (Option 3 only)

If you are setting up cuQuantum Appliance, follow the instructions here to set up NVIDIA Container Toolkit.

9. Install NVIDIA cuQuantum Appliance (Option 3 only)

Follow the instructions here to set up cuQuantum Appliance. You may need to use sudo for the Docker commands.

10. Verify your installation (Options 1, 2, and 3)

You can use the following code to verify that qsim uses your GPU. You can paste the code directly into the REPL, or paste it into a file. See the documentation here for Options 1 and 2, or here for Option 3. Make sure to set mode to 0 for Option 1 or 1 for Option 2; for Option 3, set num_gpus to the number of GPUs you are using. For Option 3, note that QSimOptions has a disable_gpu flag instead of a use_gpu flag.

# Import Cirq and qsim
import cirq
import qsimcirq

# Instantiate qubits and create a circuit
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1))

# Instantiate a simulator that uses the GPU.
# Use ONE of the following two lines:
# Option 1 (mode=0) or Option 2 (mode=1)
gpu_options = qsimcirq.QSimOptions(use_gpu=True, gpu_mode=mode, max_fused_gate_size=4)
# Option 3 (number of GPUs = `num_gpus`); uncomment and use instead of the line above
# gpu_options = qsimcirq.QSimOptions(disable_gpu=False, gpu_mode=num_gpus, max_fused_gate_size=4)
qsim_simulator = qsimcirq.QSimSimulator(qsim_options=gpu_options)

# Run the simulation
print("Running simulation for the following circuit:")
print(circuit)

qsim_results = qsim_simulator.compute_amplitudes(
    circuit, bitstrings=[0b00, 0b01])

print("qsim results:")
print(qsim_results)

After a moment, you should see a result that looks similar to the following.

[(0.7071067690849304+0j), 0j]
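The expected output follows from the circuit itself: H on q0 followed by CX produces the Bell state (|00⟩ + |11⟩)/√2, so the amplitude of |00⟩ is 1/√2 ≈ 0.7071 and the amplitude of |01⟩ is 0. A minimal statevector sketch in plain Python (no GPU, no qsim) reproduces this:

```python
import math

# Statevector over 2 qubits, basis order |00>, |01>, |10>, |11>
# (q0 is the most significant bit, matching the bitstrings above)
state = [1.0, 0.0, 0.0, 0.0]

# Apply H to qubit 0: mixes amplitudes that differ in the high bit
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]),
         h * (state[1] + state[3]),
         h * (state[0] - state[2]),
         h * (state[1] - state[3])]

# Apply CX with qubit 0 as control, qubit 1 as target: swaps |10> and |11>
state[2], state[3] = state[3], state[2]

print(state[0b00], state[0b01])  # ~0.7071 and 0.0, matching the qsim output
```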

Next steps

After you finish, don't forget to stop or delete your VM on the Compute Instances dashboard to prevent further billing.

You are now ready to run your own large simulations on Google Cloud. For sample code of a large circuit, see the Simulate a large circuit tutorial.