This page covers how to submit quantum jobs through the QbraidProvider, from single
one-off submissions to grouped workflows that span multiple devices and providers. For provider setup and authentication, see QbraidProvider - Usage.
When running related quantum experiments (parameter sweeps, algorithm comparisons,
cross-device benchmarks), you often end up with many independent jobs that are logically
part of the same workflow. Without grouping, these jobs are scattered across your job history
with no way to track or retrieve them together. GroupJobSession solves this by grouping any number of jobs under a single group ID.
Key benefits:
- Cross-device, cross-provider: submit jobs to different backends (AWS SV1, IonQ, qBraid simulators) within the same group
- Unified tracking: all jobs share a group QRN visible in the qBraid dashboard
- Aggregated results: retrieve all results at once with group.results()
- Lifecycle management: auto-close with TTL, cancellation, completion callbacks
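The core idea behind unified tracking is simple: every job submitted inside a session is tagged with one shared group ID. The toy class below is an illustrative sketch of that idea only; ToyGroupSession, its submit method, and the dict-shaped jobs are hypothetical stand-ins, not the qbraid API.

```python
import uuid

class ToyGroupSession:
    """Illustrative stand-in (NOT the qbraid API): tags every submitted
    job with one shared group ID so related jobs can be found together."""

    def __init__(self, name):
        self.name = name
        self.group_id = f"group:{uuid.uuid4().hex[:6]}"
        self.jobs = []

    def submit(self, device, payload):
        # Each job record carries the session's group ID
        job = {"device": device, "payload": payload, "group_id": self.group_id}
        self.jobs.append(job)
        return job

session = ToyGroupSession("Bell State Sweep")
j1 = session.submit("aws:aws:sim:sv1", "bell")
j2 = session.submit("aws:aws:sim:tn1", "bell")

# Jobs on different devices still share the same group ID
assert j1["group_id"] == j2["group_id"] == session.group_id
```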
A context manager is the simplest way to use group jobs: all jobs submitted inside the
with block are automatically tagged with the group ID, and the group closes when the block exits.
```python
from qbraid.runtime import GroupJobSession, QbraidProvider

provider = QbraidProvider()

bell = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c = measure q;
"""

# All jobs inside this block belong to the same group
with GroupJobSession(name="Bell State Sweep") as group:
    # Submit to different devices within the same group
    sv1 = provider.get_device("aws:aws:sim:sv1")
    job1 = sv1.run(bell, shots=100)

    tn1 = provider.get_device("aws:aws:sim:tn1")
    job2 = tn1.run(bell, shots=100)

    print(f"Group ID: {group.group_id}")
    print(f"Jobs in group: {len(group.jobs)}")

# Group ID: group:abc-123456
# Jobs in group: 2
```
For interactive workflows like Jupyter notebooks, you can open and close the group
manually across multiple cells.
```python
from qbraid.runtime import GroupJobSession, QbraidProvider

provider = QbraidProvider()

# Create and open the group manually
group = GroupJobSession(name="Notebook Experiment")
group.open()

# Cell 2: submit jobs (can be in separate notebook cells)
device = provider.get_device("qbraid:qbraid:sim:qir-sv")
job1 = device.run(circuit_1, shots=100)
job2 = device.run(circuit_2, shots=100)

# Cell 3: close when done submitting
group.close()
```
Set a time-to-live so the group automatically closes after the given duration, even if you
forget to call close() or your kernel crashes.
```python
# Group auto-closes after 60 seconds (default: 3600s / 1 hour)
with GroupJobSession(name="Quick Sweep", max_ttl=60) as group:
    device = provider.get_device("aws:aws:sim:sv1")
    job = device.run(bell, shots=10)
    print(f"TTL: {group.max_ttl}s")
The max_ttl parameter accepts values from 1 to 86400 seconds (24 hours). If
not specified, the backend defaults to 3600 seconds (1 hour).
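The TTL rules above (default 3600s, valid range 1 to 86400s) can be expressed as a small validation helper. This is an illustrative sketch of the documented constraints, not a function in the qbraid package, and the exact error the backend raises for out-of-range values is an assumption.

```python
def validate_max_ttl(max_ttl=None):
    """Hypothetical helper mirroring the documented max_ttl rules:
    None -> backend default of 3600s; otherwise must be 1..86400."""
    if max_ttl is None:
        return 3600  # backend default: 1 hour
    if not 1 <= max_ttl <= 86400:
        # Assumed error type; the real backend may report this differently
        raise ValueError("max_ttl must be between 1 and 86400 seconds")
    return max_ttl

validate_max_ttl()        # falls back to the 3600s default
validate_max_ttl(60)      # accepted: within 1..86400
```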
After closing a group, retrieve all job results at once. group.results() blocks until
every job reaches a terminal state (completed, failed, or cancelled).
```python
with GroupJobSession(name="Result Demo") as group:
    device = provider.get_device("qbraid:qbraid:sim:qir-sv")
    job1 = device.run(bell, shots=100)
    job2 = device.run(bell, shots=100)

# Wait for all jobs and collect results
results = group.results(timeout=300)
print(results)

for job_id, result in results.results.items():
    print(f"{job_id}: {result.data.get_counts()}")
```
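The blocking behavior of results() can be pictured as a polling loop that waits until every job is terminal or a deadline passes. The sketch below is a toy model of that semantics under stated assumptions; wait_for_group, fake_status, and the status strings are illustrative, not the qbraid implementation.

```python
import time

TERMINAL = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_group(status_fn, job_ids, timeout=300.0, poll_interval=0.01):
    """Toy sketch of blocking result collection: poll each job's status
    until all are terminal, raising TimeoutError past the deadline."""
    deadline = time.monotonic() + timeout
    pending = set(job_ids)
    while pending:
        pending = {j for j in pending if status_fn(j) not in TERMINAL}
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"jobs still running: {sorted(pending)}")
        time.sleep(poll_interval)
    return {j: status_fn(j) for j in job_ids}

# Fake status source: job-2 only completes after a few polls
calls = {"job-2": 0}
def fake_status(job_id):
    if job_id == "job-1":
        return "COMPLETED"
    calls["job-2"] += 1
    return "RUNNING" if calls["job-2"] < 3 else "COMPLETED"

statuses = wait_for_group(fake_status, ["job-1", "job-2"], timeout=5)
```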
You can also filter results by outcome:
```python
# Only successful results
successful = results.successful()

# Only failed results
failed = results.failed()
```
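Once you have the per-job counts, a common next step in a sweep is merging them into combined totals. This is a plain-Python aggregation sketch; the dict-of-counts input shape is an assumption for illustration, not a qbraid return type.

```python
from collections import Counter

def combined_counts(counts_by_job):
    """Illustrative aggregation: merge per-job measurement counts
    (bitstring -> shots) into one combined histogram."""
    total = Counter()
    for counts in counts_by_job.values():
        total.update(counts)
    return dict(total)

# Hypothetical counts from two 100-shot Bell-state jobs
per_job = {
    "job-1": {"00": 52, "11": 48},
    "job-2": {"00": 47, "11": 53},
}
print(combined_counts(per_job))  # {'00': 99, '11': 101}
```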
Register a callback that fires automatically when all jobs complete. The callback
runs at context exit, after the group is closed.
```python
def analyze(results):
    """Process results when all jobs finish."""
    for job_id, result in results.items():
        counts = result.data.get_counts()
        print(f"{job_id}: {counts}")

with GroupJobSession(name="With Callback") as group:
    device = provider.get_device("qbraid:qbraid:sim:qir-sv")
    device.run(bell, shots=100)
    device.run(bell, shots=100)

    # Callback fires automatically when the context exits
    group.on_all_complete(analyze, timeout=600)
```
To abort a workflow mid-flight, cancel the entire group at once:

```python
group = GroupJobSession(name="Cancellable")
group.open()

device = provider.get_device("aws:aws:sim:sv1")
job1 = device.run(bell, shots=1000)
job2 = device.run(bell, shots=1000)

# Cancel the entire group and reset the session
group.cancel()
print(group.status())
# GroupStatus.CANCELLED
```
Circuit batching will let you submit multiple circuits as a single job to
one device, unlike group jobs, which coordinate independent jobs across
devices. It is available on backends that support batched execution.