API Reference: qbraid.runtime.native
This page covers how to submit quantum jobs through the QbraidProvider — from single one-off submissions to grouped workflows that span multiple devices and providers. For provider setup and authentication, see QbraidProvider - Usage.

Single Job Submission

Submit a quantum program to any qBraid-supported device and retrieve results:
```python
from qbraid.runtime import QbraidProvider

provider = QbraidProvider()
device = provider.get_device("qbraid:qbraid:sim:qir-sv")

# Define a quantum program (QASM, Qiskit, Cirq, Braket, etc.)
bell = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;
"""

# Submit and wait for results
job = device.run(bell, shots=100)
result = job.result()
print(result.data.get_counts())
# {'00': 52, '11': 48}
```
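The counts above can be sanity-checked without re-running the job. A small post-processing helper (not part of the qBraid API, just an illustrative sketch) that verifies a counts dictionary looks like a Bell state, i.e. only the correlated outcomes appear and they are roughly balanced:

```python
def looks_like_bell(counts: dict[str, int], tolerance: float = 0.2) -> bool:
    """Return True if `counts` contains only the correlated Bell outcomes
    ('00' and '11') and the two are roughly balanced."""
    total = sum(counts.values())
    if total == 0 or set(counts) - {"00", "11"}:
        return False
    # Each correlated outcome should hold roughly half the shots.
    return abs(counts.get("00", 0) / total - 0.5) <= tolerance

print(looks_like_bell({"00": 52, "11": 48}))  # True
print(looks_like_bell({"01": 100}))           # False
```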
You can also submit multiple programs at once — each is executed as a separate job:
```python
programs = [bell_circuit, ghz_circuit, qft_circuit]
jobs = device.run(programs, shots=100)
results = [job.result() for job in jobs]
```

Group Jobs

Why Group?

When running related quantum experiments — parameter sweeps, algorithm comparisons, cross-device benchmarks — you often end up with many independent jobs that are logically part of the same workflow. Without grouping, these jobs are scattered across your job history with no way to track or retrieve them together. GroupJobSession solves this by grouping any number of jobs under a single group ID. Key benefits:
  • Cross-device, cross-provider: submit jobs to different backends (AWS SV1, IonQ, qBraid simulators) within the same group
  • Unified tracking: all jobs share a group QRN visible in the qBraid dashboard
  • Aggregated results: retrieve all results at once with group.results()
  • Lifecycle management: auto-close with TTL, cancellation, completion callbacks

Context Manager

The context manager is the simplest way to use group jobs. All jobs submitted inside the with block are automatically tagged with the group ID, and the group closes when the block exits.
```python
from qbraid.runtime import GroupJobSession, QbraidProvider

provider = QbraidProvider()

bell = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;
"""

# All jobs inside this block belong to the same group
with GroupJobSession(name="Bell State Sweep") as group:
    # Submit to different devices within the same group
    sv1 = provider.get_device("aws:aws:sim:sv1")
    job1 = sv1.run(bell, shots=100)

    tn1 = provider.get_device("aws:aws:sim:tn1")
    job2 = tn1.run(bell, shots=100)

    print(f"Group ID: {group.group_id}")
    print(f"Jobs in group: {len(group.jobs)}")

# Group ID: group:abc-123456
# Jobs in group: 2
```

Manual Open / Close

For interactive workflows like Jupyter notebooks, you can open and close the group manually across multiple cells.
```python
from qbraid.runtime import GroupJobSession, QbraidProvider

provider = QbraidProvider()

# Cell 1: create and open the group manually
group = GroupJobSession(name="Notebook Experiment")
group.open()

# Cell 2: submit jobs (can be in separate notebook cells)
device = provider.get_device("qbraid:qbraid:sim:qir-sv")
job1 = device.run(circuit_1, shots=100)
job2 = device.run(circuit_2, shots=100)

# Cell 3: close when done submitting
group.close()
```

Auto-Close with TTL

Set a time-to-live so the group automatically closes after the given duration, even if you forget to call close() or your kernel crashes.
```python
# Group auto-closes after 60 seconds (default: 3600s / 1 hour)
with GroupJobSession(name="Quick Sweep", max_ttl=60) as group:
    device = provider.get_device("aws:aws:sim:sv1")
    job = device.run(bell, shots=10)
    print(f"TTL: {group.max_ttl}s")
```
The max_ttl parameter accepts values from 1 to 86400 seconds (24 hours). If not specified, the backend defaults to 3600 seconds (1 hour).
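If the TTL is computed dynamically, a client-side range check based on the bounds above can catch bad values before the session opens. A local helper sketch (not part of the qBraid API):

```python
MAX_TTL_SECONDS = 86_400  # 24 hours, the documented upper bound

def validate_ttl(ttl: int) -> int:
    """Raise ValueError unless ttl is within the documented 1-86400 s range."""
    if not 1 <= ttl <= MAX_TTL_SECONDS:
        raise ValueError(
            f"max_ttl must be in [1, {MAX_TTL_SECONDS}] seconds, got {ttl}"
        )
    return ttl

print(validate_ttl(60))  # 60
```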

Retrieving Results

After closing a group, retrieve all job results at once. group.results() blocks until every job reaches a terminal state (completed, failed, or cancelled).
```python
with GroupJobSession(name="Result Demo") as group:
    device = provider.get_device("qbraid:qbraid:sim:qir-sv")
    job1 = device.run(bell, shots=100)
    job2 = device.run(bell, shots=100)

# Wait for all jobs and collect results
results = group.results(timeout=300)
print(results)

for job_id, result in results.results.items():
    print(f"{job_id}: {result.data.get_counts()}")
```
You can also filter results by outcome:
```python
# Only successful results
successful = results.successful()

# Only failed results
failed = results.failed()
```
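When the same circuit runs across the whole group, as in the sweep above, a common post-processing step is to merge the per-job counts into one aggregate histogram. A small helper, assuming each result exposes get_counts() returning a dict as in the examples above:

```python
from collections import Counter

def merge_counts(counts_list):
    """Sum a list of counts dictionaries into one aggregate histogram."""
    total = Counter()
    for counts in counts_list:
        total.update(counts)
    return dict(total)

print(merge_counts([{"00": 52, "11": 48}, {"00": 49, "11": 51}]))
# {'00': 101, '11': 99}
```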

Completion Callback

Register a callback that fires automatically when all jobs complete. The callback runs at context exit, after the group is closed.
```python
def analyze(results):
    """Process results when all jobs finish."""
    for job_id, result in results.items():
        counts = result.data.get_counts()
        print(f"{job_id}: {counts}")

with GroupJobSession(name="With Callback") as group:
    device = provider.get_device("qbraid:qbraid:sim:qir-sv")
    device.run(bell, shots=100)
    device.run(bell, shots=100)

    # Callback fires automatically when the context exits
    group.on_all_complete(analyze, timeout=600)
```
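Callbacks are a natural place for lightweight post-processing. For example, a generic helper (not part of the qBraid API) that normalizes raw counts to probabilities, which a callback like analyze could apply before logging or plotting:

```python
def to_probabilities(counts: dict[str, int]) -> dict[str, float]:
    """Normalize a counts dictionary so the values sum to 1.0."""
    total = sum(counts.values())
    if total == 0:
        return {}
    return {outcome: n / total for outcome, n in counts.items()}

print(to_probabilities({"00": 52, "11": 48}))
# {'00': 0.52, '11': 0.48}
```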

Cancellation

Cancel a group and all its non-terminal jobs:
```python
group = GroupJobSession(name="Cancellable")
group.open()

device = provider.get_device("aws:aws:sim:sv1")
job1 = device.run(bell, shots=1000)
job2 = device.run(bell, shots=1000)

# Cancel the entire group and reset the session
group.cancel()

print(group.status())
# GroupStatus.CANCELLED
```
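With manual open/close, an exception raised between open() and close() would otherwise leave the group open until its TTL expires, so it can be useful to cancel on failure. A runnable sketch of the control flow, using a stand-in session object so the pattern executes without credentials (the real GroupJobSession methods have the same names, per the sections above):

```python
class StubSession:
    """Minimal stand-in for GroupJobSession, tracking lifecycle state only."""
    def __init__(self):
        self.state = "new"
    def open(self):
        self.state = "open"
    def close(self):
        self.state = "closed"
    def cancel(self):
        self.state = "cancelled"

def run_guarded(session, submit_jobs):
    """Open the session, submit, and always end in close() or cancel()."""
    session.open()
    try:
        submit_jobs()
    except Exception:
        session.cancel()  # failure path: cancel the group and pending jobs
        raise
    session.close()       # success path: close so results can be collected

ok = StubSession()
run_guarded(ok, lambda: None)
print(ok.state)  # closed
```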

Coming Soon: Circuit Batching

Circuit batching will let you submit multiple circuits as a single job to one device, unlike group jobs, which coordinate independent jobs across devices. It will be available on backends that support batched execution.