Changes from all commits (159 commits)
fe70198
first try on compressed contraction
danlkv Jan 13, 2023
22199b8
add complexity estimation based on hypergraph
danlkv Jan 14, 2023
f5b1ee1
update gitignore (wily, pre-commit)
danlkv Jan 14, 2023
f258e68
more correct memory estimation with compression
danlkv Jan 14, 2023
9540aae
add compression_ratio to arguments
danlkv Jan 14, 2023
0062a50
adapt to use slicing
danlkv Jan 21, 2023
5708555
Add __getitem__ to CompressedTensor, generalize compressed contraction
danlkv Jan 21, 2023
8098507
Added cuSZx APIs to Compressor class
mkshah5 Jan 24, 2023
cb05f9f
first try: reverse order
danlkv Jan 23, 2023
5482c68
merge summation into last einsum in torch backend
danlkv Jan 25, 2023
3ad55ac
update submodule
danlkv Jan 25, 2023
405d5ab
fix dtype for torch_gpu
danlkv Jan 25, 2023
5c90eb6
Add ordering algo as parameter to run.py. Fix merged backend in torch
danlkv Jan 27, 2023
ff40e70
Merge branch 'dev' into compression
danlkv Jan 27, 2023
1298fda
update compression cost test
danlkv Jan 28, 2023
e9cf799
fix bug in profiler summary
danlkv Jan 28, 2023
cdd724b
fix torch backend bug
danlkv Jan 28, 2023
2e7831b
Merge branch 'dev' into reverse_ix_order
danlkv Feb 1, 2023
aa351f8
Merge branch 'reverse_ix_order' into compression
danlkv Feb 10, 2023
737beab
add minimal CUSZCompressor
danlkv Feb 23, 2023
72d1031
fix cusz pointer conversion
danlkv Feb 24, 2023
e9f3981
Fixed CUDA misaligned address error
Feb 25, 2023
62f0911
remove unused szx/src/Dynamic* files; minor tweak intest_cost_estimation
danlkv Feb 26, 2023
67cb1ff
rename merged_indices/ to contraction_algos to host variants of bucke…
danlkv Feb 27, 2023
976f699
add test for reversed order backend
danlkv Feb 27, 2023
701adc4
Updated device compress to use R2R error and threshold
Feb 27, 2023
b873f6d
Reset cuszx library path in wrapper
Feb 27, 2023
b2ffd09
add better slicing tools, integrate reverse contraction
danlkv Feb 27, 2023
a0e00e0
rename to is_reverse_order_backend
danlkv Feb 27, 2023
e14cf37
rename to is_reverse_order_backend
danlkv Feb 27, 2023
7b46ec6
Merge branch 'reverse_ix_order' into compression
danlkv Feb 27, 2023
8126f30
fix dtype issue with test_compresssor
danlkv Feb 27, 2023
fd3888e
first try on compressed backend
danlkv Feb 28, 2023
ecd18cf
don't test complex128 or float64. This is not supported
danlkv Feb 28, 2023
77c27df
reverse cupy order
danlkv Feb 28, 2023
2daa0dc
Merge branch 'reverse_ix_order' into compression
danlkv Feb 28, 2023
0a7c43f
reverse cupy tensor index ordering
danlkv Feb 28, 2023
a6ee295
add compressor backend and tests for it; add cbe/common.py
danlkv Mar 1, 2023
9faae95
fix the tests
danlkv Mar 1, 2023
514a54b
use lazy_import for cupy
danlkv Mar 1, 2023
091bc79
change _get_ordering_ints to get_ordering_ints
danlkv Mar 2, 2023
7a1ef39
make adaptive optimizer compatibile with slicing
danlkv Mar 2, 2023
1902897
make common `update_peo_after_slice`
danlkv Mar 2, 2023
8dd69fc
add source for qc_simulation bench
danlkv Mar 2, 2023
cdde852
modify slicing algo to change update_peo strategy; support slice_ext_…
danlkv Mar 2, 2023
e43c2a1
move qtensor profile bench to subfolder
danlkv Mar 2, 2023
e4113c3
move compression tests
danlkv Mar 2, 2023
10ed7df
move Compressor to a separate file
danlkv Mar 2, 2023
9b787d1
add memory leak test
danlkv Mar 3, 2023
7a260d4
Merge branch 'bench' into compression
danlkv Mar 3, 2023
e86e303
maybe fix the memory leak problem; update memory leak test
danlkv Mar 3, 2023
7d5818c
update compression test_memory_leak
danlkv Mar 3, 2023
7756340
Fixed incorrectly initialized variable; test_memory_leak returning co…
Mar 3, 2023
75066a5
add memory cleanup operations and memory profile compressor/backend
danlkv Mar 3, 2023
13b2c44
update submodule
danlkv Mar 3, 2023
53d1f88
Updated post proc for compression
mkshah5 Mar 5, 2023
e4184b7
Complete merge with fixed for cuszx_entry.cu
mkshah5 Mar 5, 2023
a8c9e70
Compilation error fixes
mkshah5 Mar 5, 2023
50be22f
Compilation error fixes, integral type error
mkshah5 Mar 5, 2023
12dd9c0
Updated post_proc to faster kernels
Mar 5, 2023
df8ca2e
Merge branch 'bench' of github.com:DaniloZZZ/Qensor into bench
Mar 7, 2023
43adb66
add memory prof and fix reversed backend
danlkv Mar 7, 2023
708bc8d
Fixed median value array bug
mkshah5 Mar 7, 2023
fe8a7d4
Merge branch 'compression' into bench
Mar 9, 2023
5307d8e
add instructions on how to use main.py
Mar 9, 2023
d89181c
fix link in readme
Mar 9, 2023
1692096
Updated compress pipeline for throughput improvement
Mar 17, 2023
3c33739
Merge branch 'compression' of https://github.com/DaniloZZZ/QTensor in…
Mar 17, 2023
a091ae3
add new preprocess data for tests
Mar 17, 2023
2642bae
Updated decompression for faster pre- and post-processing
Mar 19, 2023
68dd02a
Merge branch 'compression' of https://github.com/DaniloZZZ/QTensor in…
Mar 19, 2023
0af7bc6
Improved compression throughput further
mkshah5 Mar 23, 2023
1ade10a
add small bench analysis script
Mar 23, 2023
14f6d47
add usage of simple simulation analysis to README.md
Mar 23, 2023
7072f36
Added definitions of blocks and threads for kernel launches
mkshah5 Mar 23, 2023
a0e8630
add simple analysis file, add nvmem monitor
Mar 26, 2023
a7e7818
remove some old prints
Mar 28, 2023
bd0564f
improve compression cusz wrapper. add details in perf
Apr 3, 2023
aae1de3
Merge branch 'compression' into bench
Apr 4, 2023
8d5111e
threshold real value
Apr 4, 2023
9e1ea59
Merge branch 'compression' into bench
Apr 4, 2023
fdd3181
small refactor in bench simulation
Apr 4, 2023
7f2d6d4
reduce verbosity, mpi is functional
Apr 4, 2023
95c0abf
minor fix with mpi bench sim
Apr 4, 2023
98b6fa1
add polaris scripts
Apr 4, 2023
5f1e786
upload preprocess file
Apr 4, 2023
80c8edf
minor fix in run script
Apr 4, 2023
c8d262c
minor fix in mpi run script
Apr 4, 2023
d2a2d4d
Reduce output compressed buffer size
mkshah5 Apr 5, 2023
310e74a
update submit script
Apr 5, 2023
b9a2c25
Merge branch 'compression' into bench
Apr 5, 2023
9664af8
Updated outsize and compressed buffer to reflect accurate value
mkshah5 Apr 5, 2023
be16ded
Merge branch 'compression' into bench
Apr 5, 2023
820f8a3
Modifiable data block size
mkshah5 Apr 20, 2023
24c6bcb
Merge branch 'compression' into bench
Apr 21, 2023
145810d
adjust slice count in qtensor bench estimation
danlkv May 5, 2023
714fa67
Added threshold and grouping code outside CUDA kernel
mkshah5 May 8, 2023
f53e6ed
Added cuSZp as compressor
mkshah5 May 19, 2023
7bf786b
Merge branch 'bench' of github.com:DaniloZZZ/Qensor into bench
May 19, 2023
2ea9d88
Merge branch 'compression' into bench
May 19, 2023
f2a4305
Revert to SZx compression
mkshah5 May 22, 2023
1252c2b
Fix lib paths for Compressor
mkshah5 May 22, 2023
22e8bc8
Merge branch 'compression' into bench
May 24, 2023
3894220
Added cuSZ base compressor with only threshold
mkshah5 Jun 6, 2023
a60556c
Fixed merge conflicts
mkshah5 Jun 6, 2023
87184a9
Added PyTorch based lossy compressor
mkshah5 Jun 7, 2023
6a5d472
torch transpose changes
danlkv Jun 12, 2023
79fe72e
Merge remote-tracking branch 'origin/bench' into bench
Jun 16, 2023
af8c59e
fix DOS newline characters
Jun 16, 2023
7f7a17d
Merge branch 'compression' into bench
Jun 16, 2023
a0591f2
Updated scale and zero point for quantization
mkshah5 Jun 16, 2023
a1ee144
Merge branch 'compression' into bench
Jun 16, 2023
e8ebe8c
Updated to add threshold+grouping
mkshah5 Jun 23, 2023
c13ee3a
Added packbits call to compress bitmap
mkshah5 Jun 23, 2023
888b622
Merge branch 'compression' into bench
Jun 29, 2023
ecc66bb
Merge branch 'dev' into bench
Jun 30, 2023
311211f
Updated zero point and grouping criteria
mkshah5 Jul 3, 2023
b808285
Merge branch 'compression' into bench
Jul 3, 2023
4712784
Quantize per channel
mkshah5 Jul 7, 2023
46b6994
Added grouping to perchannel quantization
mkshah5 Jul 7, 2023
b55bc8a
Merge branch 'compression' into bench
Jul 7, 2023
afe92ff
Added all compressors to Compressor.py
mkshah5 Jul 10, 2023
ec6aee8
Can change compressor with flag
mkshah5 Jul 11, 2023
df1f8c1
Bug fixes for freeing pointers
mkshah5 Jul 11, 2023
d34e0d7
add qaoa parameters config to circuit genm
Jul 18, 2023
9e10b77
Merge branch 'compression' into bench
Jul 21, 2023
27d44a2
Added new compressor: combines quantization with lossless compression…
mkshah5 Jul 24, 2023
fac16fd
Updated lib paths in newsz_wrapper.py
mkshah5 Jul 24, 2023
75f57dd
Merge branch 'compression' into bench
Aug 4, 2023
1e728c5
add WriteToDiskCompressor
Aug 4, 2023
cdd9c35
Minor cleanup in Compressor.py
Aug 11, 2023
ac03eb5
Change pynauty to pynauty-nice
danlkv Aug 11, 2023
957ddb5
add energy simulation to compression bench
Nov 10, 2023
5f5a15c
update in slicing history shape
Dec 1, 2023
547d05b
add more info for compression profiling
danlkv Dec 1, 2023
4ce7d0c
compressed contraction memory leak testing
danlkv Dec 14, 2023
547c5f5
fix test_leak for cusz. only complex64 works
danlkv Dec 15, 2023
12e21f8
add test test for leak in contraction
danlkv Dec 15, 2023
d66c90b
fix memory leak problems with cuszx
danlkv Dec 16, 2023
b4a112f
fix line endings
danlkv Dec 16, 2023
354b4e0
replace crlf with lf
danlkv Dec 16, 2023
055aef5
replace crlf with lf
danlkv Dec 16, 2023
c644f8e
replace crlf with lf
danlkv Dec 16, 2023
6ed7f85
Merge branch 'compression' into bench
danlkv Dec 16, 2023
c667539
minor torch compressor refactor
danlkv Dec 16, 2023
405314b
Merge branch 'compression' into bench
danlkv Dec 16, 2023
b52d487
torch compressor fix
danlkv Dec 16, 2023
a600c99
Added cuSZp compressor
mkshah5 Feb 9, 2024
f4bad2b
Added cuszp, fixed merge conflicts
mkshah5 Feb 9, 2024
9bc3d9c
Remove empty directory
mkshah5 Feb 15, 2024
92bf98b
Added cuSZp src code
mkshah5 Feb 15, 2024
019b506
small fix to Compressor api
Feb 15, 2024
9108d80
Merge remote-tracking branch 'origin/compression' into compression
Feb 15, 2024
a61a658
replace lineend characters
danlkv Mar 15, 2024
c8447ad
trying to make cuSZp work
Mar 22, 2024
447fc32
use cuszp module in compressor; add compressors to init.py
Apr 26, 2024
6137d2c
add test for compressed energy exp calculation
Apr 26, 2024
1506748
fix cuszp implementation
May 8, 2024
82fb586
cuszp compressor import optional
danlkv May 9, 2024
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
.pre-commit-config.yaml
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
1 change: 1 addition & 0 deletions bench/qc_simulation/.gitignore
@@ -0,0 +1 @@
data/
67 changes: 67 additions & 0 deletions bench/qc_simulation/README.md
@@ -0,0 +1,67 @@

## Examples

1. Generate or download circuits:

* As a tar archive: `./main.py echo github://danlkv:GRCS@/inst/bristlecone/cz_v2/bris_11.tar.gz data/circuits/bris11/\{in_file\}.circ` (needs unzipping afterwards)
* Using HTTP and [unzip on the fly](./scripts/http_unzip_on_the_fly.sh)
* Generate locally: `./main.py generate data/circuits/qaoa/maxcut_regular_N{N}_p{p} --type=qaoa_maxcut --N=8,12,16,24,32,48,64 --p=1,2,3,4,5 --d=3`

2. Preprocess using both the `greedy` and `rgreedy` ordering algorithms:
   `./main.py preprocess data/circuits/qaoa/maxcut_regular\* data/preprocess/maxcut/\{in_file\}_oalgo{O}.circ --O=greedy,rgreedy --sim=qtensor`
3. Simulate: `./main.py simulate ./data/preprocess/maxcut/maxcut_regular\* data/simulations/maxcut/{in_file}_comp_m{M} --sim qtensor -M 25 --backend=cupy --compress=szx`

### Easily manage simulation and estimation results

After running `preprocess`, one can estimate the runtime and compare it to the actual simulation time:
```bash
# Assume 1GFlop (low-end cpu number)
./main.py estimate preprocess/bris/bris_\*.txt_oalgogreedy.circ estimations/bris/cpu --sim qtensor -M 27 -F 1e9
./main.py estimate preprocess/bris/bris_\*.txt_oalgorgreedy.circ estimations/bris/cpu --sim qtensor -M 27 -F 1e9

rm -r simulations/bris/*
# Simulate Greedy
./main.py simulate preprocess/bris/bris_\*.txt_oalgogreedy.circ simulations/bris --sim qtensor -M 27
# Simulate RGreedy
./main.py simulate preprocess/bris/bris_\*.txt_oalgorgreedy.circ simulations/bris --sim qtensor -M 27
cat simulations/bris/*rgreedy*
cat estimations/bris/cpu/*rgreedy*
cat simulations/bris/*greedy*
cat estimations/bris/cpu/*greedy*
```

This shows how UNIX utilities are used to filter and present data. In SQL this would be something like
`SELECT * FROM simulations WHERE ordering_algo="greedy"`.
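The per-run metadata that `update_index` in `main.py` writes to `index.csv` makes this kind of filtering possible with pandas as well (a sketch with synthetic rows; the real columns beyond `input`/`output` depend on the flags passed to each run):

```python
import io
import pandas as pd

# Synthetic index.csv in the shape update_index produces: input, output, plus flag columns
csv = io.StringIO(
    "input,output,O\n"
    "circuits/bris_11.txt,simulations/bris/bris_11_greedy,greedy\n"
    "circuits/bris_11.txt,simulations/bris/bris_11_rgreedy,rgreedy\n"
)
df = pd.read_csv(csv)
# Equivalent of: SELECT * FROM simulations WHERE ordering_algo="greedy"
greedy_runs = df[df["O"] == "greedy"]
```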

## Filetypes

- `.txt` - gate sequence as in GRCS
- `.qasm` - openqasm file
- `.jsonterms` - json file of QAOA terms (`src/circuit_gen/qaoa.py`)

## Advanced usage

It is possible to glob over inputs and vectorize over outputs. Globbing also works for remote files:

```
main.py process \
gh://example.com/data/*/*.element \
results/{X}/{in_file}_y{y}.r \
-X=1,2 --Y=foo,bar
```

The parent directory for each output file is created automatically.

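The glob-and-vectorize expansion can be sketched as a cartesian product over the list-valued flags (`expand_outputs` is a hypothetical helper for illustration, not part of `main.py`):

```python
import itertools

def expand_outputs(template, in_files, **flags):
    """Expand an output template over input files and list-valued flags.

    Scalar flags stay fixed; list-valued flags are expanded combinatorially,
    mirroring the behavior described above (illustrative only).
    """
    list_flags = {k: v for k, v in flags.items() if isinstance(v, (list, tuple))}
    fixed = {k: v for k, v in flags.items() if k not in list_flags}
    for in_file in in_files:
        for combo in itertools.product(*list_flags.values()):
            params = dict(fixed, in_file=in_file, **dict(zip(list_flags, combo)))
            yield template.format(**params)

# Two inputs x two values of X -> four output paths
paths = list(expand_outputs(
    "results/{X}/{in_file}_y{y}.r",
    ["a.element", "b.element"], X=[1, 2], y="foo"))
```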

## Analysis

Simple simulation analysis script: `analysis/compression_scaling_analysis.py`.
It accepts a glob pattern matching simulation output files.

Usage:

```
python analysis/compression_scaling_analysis.py ./data/simulations/maxcut/file\*
```
46 changes: 46 additions & 0 deletions bench/qc_simulation/analysis/compression_scaling_analysis.py
@@ -0,0 +1,46 @@
import glob
import json
import sys

import numpy as np
import pandas as pd

def fmt_unit(x, unit):
    return str(np.round(x, 2)) + " " + unit

def main():
    glob_pat = sys.argv[1]
    filenames = sorted(glob.glob(glob_pat))

    for file in filenames:
        with open(file) as f:
            data = json.load(f)
        stats = {}
        for atr in ["compress", "decompress"]:
            items = data["compression"][atr]
            if len(items) == 0:
                continue
            df = pd.DataFrame(items)
            df["CR"] = df["size_in"] / df["size_out"]
            df["T"] = df["size_in"] / df["time"]
            stats["mean " + atr + " CR"] = df["CR"].mean()
            stats["mean " + atr + " Throughput"] = fmt_unit(df["T"].mean() / 1e9, "GB/s")
            stats[atr + " Count"] = len(df)

        _res = data["result"]
        stats["result"] = (_res["Re"], _res["Im"])
        stats["Time"] = fmt_unit(data["time"], 's')
        stats["Memory"] = str(data["memory"] / 1024 / 1024) + " MB"
        if data.get('nvmemory'):
            stats["NVMemory"] = str(data["nvmemory"] / 1024 / 1024) + " MB"
        print(file)
        _prefix = " "
        last = lambda i: i == len(stats) - 1
        char = lambda i: "⎬ " if not last(i) else "┕ "
        print("\n".join([
            _prefix + char(i) + " = ".join(map(str, items))
            for i, items in enumerate(stats.items())
        ]))


if __name__ == "__main__":
    main()
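The script assumes its input files have roughly the shape below (inferred from the field accesses, not a documented schema); a synthetic record reproduces the compression-ratio statistic:

```python
import json
import os
import tempfile

import pandas as pd

# Synthetic simulation record matching the fields the script reads
record = {
    "compression": {
        "compress": [
            {"size_in": 1e6, "size_out": 2e5, "time": 1e-3},  # CR = 5.0
            {"size_in": 1e6, "size_out": 4e5, "time": 2e-3},  # CR = 2.5
        ],
        "decompress": [],
    },
    "result": {"Re": 0.5, "Im": 0.0},
    "time": 1.25,
    "memory": 512 * 1024 * 1024,
}
path = os.path.join(tempfile.mkdtemp(), "sim.json")
with open(path, "w") as f:
    json.dump(record, f)

df = pd.DataFrame(record["compression"]["compress"])
mean_cr = (df["size_in"] / df["size_out"]).mean()  # (5.0 + 2.5) / 2 = 3.75
```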
36 changes: 36 additions & 0 deletions bench/qc_simulation/analysis/simple_compression_report.py
@@ -0,0 +1,36 @@
import json
import sys

import pandas as pd

def main():
    file = sys.argv[1]
    with open(file) as f:
        data = json.load(f)
    rows = []
    for item in data['compression']['compress']:
        k = item.copy()
        k['type'] = 'compress'
        rows.append(k)

    for item in data['compression']['decompress']:
        k = item.copy()
        k['type'] = 'decompress'
        rows.append(k)

    if len(rows) == 0:
        print("Rows:\n", rows)
        return
    df = pd.DataFrame(rows)
    # .copy() so the column assignments below don't hit SettingWithCopyWarning
    dfc = df[df['type'] == 'compress'].copy()
    dfd = df[df['type'] == 'decompress'].copy()

    for d in [dfc, dfd]:
        d['Throughput'] = d['size_in'] / d['time']
        d['CR'] = d['size_in'] / d['size_out']

    print("Compression:")
    print(dfc.describe([0.5]))
    print("Decompression:")
    print(dfd.describe([0.5]))

if __name__ == "__main__":
    main()
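A minimal check of the per-type throughput and compression-ratio computation above, on synthetic rows (`.copy()` avoids pandas' `SettingWithCopyWarning` when adding columns to a filtered slice):

```python
import pandas as pd

rows = [
    {"type": "compress", "size_in": 8e6, "size_out": 1e6, "time": 2e-3},
    {"type": "decompress", "size_in": 8e6, "size_out": 1e6, "time": 1e-3},
]
df = pd.DataFrame(rows)
# Filter one type and compute derived columns on an independent copy
dfc = df[df["type"] == "compress"].copy()
dfc["Throughput"] = dfc["size_in"] / dfc["time"]  # bytes per second
dfc["CR"] = dfc["size_in"] / dfc["size_out"]
```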
Binary file not shown.
Binary file not shown.
Binary file not shown.
195 changes: 195 additions & 0 deletions bench/qc_simulation/main.py
@@ -0,0 +1,195 @@
#!/usr/bin/env python3
import sys
from pathlib import Path
from functools import wraps
import fire

def log(*args):
    print("[main.py]", *args, file=sys.stderr, flush=True)

# -- Utils

import pandas as pd
import fsspec
import itertools
from dataclasses import dataclass
import io

@dataclass
class File:
    path: Path
    f: io.IOBase

def general_glob(urlpath, **kwargs):
    """General glob function to handle local and remote paths."""
    filelist = fsspec.open_files(urlpath, **kwargs)
    for file in filelist:
        yield file

def is_sequence(x):
    if isinstance(x, str):
        return False
    try:
        iter(x)
        return True
    except TypeError:
        return False

def dict_vector_iter(**d):
    """
    For each value that is a list in dict d, iterate over all possible
    combinations of values.
    """
    keys = d.keys()
    vals = d.values()
    vector_keys = [k for k, v in zip(keys, vals) if is_sequence(v)]
    vector_vals = [v for v in vals if is_sequence(v)]
    for instance in itertools.product(*vector_vals):
        p = dict(d)
        p.update(zip(vector_keys, instance))
        yield p

def general_indexed(in_path, out_path, func, fsspec_kwargs={}, **kwargs):
    """
    Arguments:
        in_path: a glob-like urlpath to pass to fsspec.open_files
        out_path: a string to store the output into. Optionally,
            can provide formatting arguments.
            If no formatting arguments are provided, it will be treated as a
            directory, i.e. `<out_path>/{in_file}`;
            otherwise, it will be treated as a file, i.e. `<out_path>.format(**kwargs)`.
            For many input files, the {in_file} argument will be provided.
            This will be passed as the second argument to the function.
        func: a function that takes two arguments, the first being the input
            file object, and the second being the output file.
        fsspec_kwargs: kwargs to pass to fsspec.open_files
    """
    # If no formatting arguments provided, treat as directory
    if "{" not in out_path:
        out_pattern = f"{out_path}/{{in_file}}"
    else:
        out_pattern = out_path

    def unit(kwargs):
        in_file = kwargs.pop("in_file")
        in_path = Path(in_file.path)
        out_file = out_pattern.format(
            in_path=in_path,
            in_file=in_path.name,
            **kwargs)
        out_path = Path(out_file)
        # make parent dir
        out_path.parent.mkdir(parents=True, exist_ok=True)
        with in_file.open() as f:
            fl = File(in_path, f)
            changed_out = func(fl, out_file, **kwargs)

        log(f"{in_file.path} -> [{func.__name__}] -> {changed_out}")
        index_file = Path(changed_out).parent / "index.csv"
        update_index(index_file, input=in_file.path, output=changed_out, **kwargs)
        return changed_out


    in_path = in_path.format(**kwargs)
    files = iter(general_glob(in_path, **fsspec_kwargs))
    combinations = iter(dict_vector_iter(in_file=files, **kwargs))
    return list(map(unit, combinations))

def update_index(index_file, **kwargs):
    df = pd.DataFrame(kwargs, index=[0])
    # check if index file exists
    if not (file := Path(index_file)).exists():
        # create directories if needed
        file.parent.mkdir(parents=True, exist_ok=True)

        print("Creating index file")
        df.to_csv(index_file, header=True, index=False)
    else:
        df_exist = pd.read_csv(index_file, nrows=2)
        if isinstance(df_exist, pd.DataFrame):
            if df_exist.columns.tolist() != df.columns.tolist():
                raise ValueError("Index file already exists but has different columns")
        # append to csv
        print(f"Appending to index file {index_file}")
        df.to_csv(index_file, mode="a", header=False, index=False)
# --

from src.simulators.qtensor import preprocess as qtensor_preprocess
from src.simulators.qtensor import estimate as qtensor_estimate
from src.simulators.qtensor import simulate as qtensor_simulate
from src.simulators.qtensor_energy import simulate as qtensor_simulate_energy
from src.simulators.qtensor_energy import preprocess as qtensor_preprocess_energy
from src.circuit_gen.qaoa import generate_maxcut

# -- Main
sim_preprocessors = {
    'qtensor': qtensor_preprocess,
    'qtensor_energy': qtensor_preprocess_energy
}

sim_estimators = {
    'qtensor': qtensor_estimate
}

sim_simulators = {
    'qtensor': qtensor_simulate,
    'qtensor_energy': qtensor_simulate_energy
}

circ_generators = {
    'qaoa_maxcut': generate_maxcut
}

class Main:

    def echo(self, in_path, out_dir, **kwargs):
        """
        Simple mapper that just echoes stuff
        """
        @wraps(self.echo)
        def unit(in_file, out_file, **kwargs):
            with open(out_file, "wb") as f:
                f.write(in_file.f.read())
            return out_file
        general_indexed(in_path, out_dir, unit, **kwargs)

    def generate(self, out_dir, type, **kwargs):
        @wraps(self.generate)
        def unit(in_file, out_file, type, **kwargs):
            circ_generators[type](out_file, **kwargs)
            return out_file
        general_indexed('/dev/null', out_dir, unit, type=type, **kwargs)

    def preprocess(self, in_path, out_dir, sim='qtensor', **kwargs):
        @wraps(self.preprocess)
        def unit(in_file, out_file, sim, **kwargs):
            sim_preprocessors[sim](in_file, out_file, **kwargs)
            return out_file
        general_indexed(in_path, out_dir, unit, sim=sim, **kwargs)

    def estimate(self, in_path, out_dir, sim='qtensor', **kwargs):
        """
        Estimate the parameters of a simulator
        """
        @wraps(self.estimate)
        def unit(in_file, out_file, sim, **kwargs):
            sim_estimators[sim](in_file, out_file, **kwargs)
            return out_file
        general_indexed(in_path, out_dir, unit, sim=sim, **kwargs)

    if estimate.__doc__:
        # Modify doc to include info about additional parameters
        estimate.__doc__ += f"\n{qtensor_estimate.__doc__.replace('Arguments:', 'Additional:')}"

    def simulate(self, in_path, out_dir, sim='qtensor', **kwargs):
        """
        Simulate the quantum circuit
        """
        @wraps(self.simulate)
        def unit(in_file, out_file, sim, **kwargs):
            sim_simulators[sim](in_file, out_file, **kwargs)
            return out_file
        general_indexed(in_path, out_dir, unit, sim=sim, **kwargs)


if __name__ == "__main__":
    fire.core.Display = lambda lines, out: print(*lines, file=out)
    fire.Fire(Main)
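The cartesian-product behavior of `dict_vector_iter` can be verified standalone (the two helpers are restated here so the sketch runs without `main.py`'s other dependencies):

```python
import itertools

# Restated from main.py for a self-contained demo
def is_sequence(x):
    if isinstance(x, str):
        return False
    try:
        iter(x)
        return True
    except TypeError:
        return False

def dict_vector_iter(**d):
    keys, vals = d.keys(), d.values()
    vector_keys = [k for k, v in zip(keys, vals) if is_sequence(v)]
    vector_vals = [v for v in vals if is_sequence(v)]
    for instance in itertools.product(*vector_vals):
        p = dict(d)
        p.update(zip(vector_keys, instance))
        yield p

# Strings stay fixed; list values are expanded combinatorially
combos = list(dict_vector_iter(N=[8, 12], p=[1, 2], sim="qtensor"))
# 2 x 2 = 4 combinations, each carrying sim="qtensor"
```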
20 changes: 0 additions & 20 deletions bench/qc_simulation/qtensor/test_circuits.py

This file was deleted.

6 changes: 6 additions & 0 deletions bench/qc_simulation/requirements.txt
@@ -0,0 +1,6 @@
fire
fsspec
pandas
qiskit
aiohttp
cupy
Loading