OpenGL 2.0 based deep learning inference engine for restricted environments, such as the web or OpenCL-disabled ARM devices.
OGLE is mainly for deep learning inference. It has two parts:
- Model Converter
- OpenGL based graph runtime

You need to convert the model you trained to ONNX format first, then use the converter to turn it from ONNX into DLX format; only DLX models can be used by the graph runtime.
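How you produce the ONNX file depends on your training framework; the repository does not prescribe one. As an illustration only, assuming the model was trained with PyTorch, an export could look like the sketch below (`SimpleNet` and the input shape are made-up placeholders, not part of OGLE):

```python
# Hedged example: exporting a trained PyTorch model to ONNX.
# "SimpleNet" and the 1x3x300x300 input shape are placeholders, not part of OGLE.
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = SimpleNet().eval()
dummy_input = torch.randn(1, 3, 300, 300)  # example NCHW input

# torch.onnx.export traces the model and writes an ONNX graph to model.onnx,
# which can then be fed to the OGLE converter.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```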
Dependencies:
- OpenGL libraries
- common utility libraries such as glog, gtest, and gflags
- OpenCV and protobuf (please build these from source)
# build converter
```shell
cd converter
mkdir build && cd build && cmake .. && make -j`nproc`
```

# build runtime
```shell
cd ..
cd ./ogl_runtime/opengl/nn
# generate kernel files
python make_shaders.py glsl all_shaders.h all_shaders.cc
cd -
mkdir build && cd build && cmake .. && make -j`nproc`
```

- convert the model first:
```shell
cd converter/build
export SRC=model.onnx
export DST=demo.dlx
./Converter ${SRC} ${DST}
# then you can find demo.dlx in the current directory
```
If the conversion fails, see the ONNX sanity check sketch below.

- to use the OpenGL based graph runtime, please refer to `ogl_runtime/opengl/examples/ssd/main.cc` for an example.
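If `./Converter` rejects a model, it can help to first confirm that the ONNX file itself is well formed. The following is only a generic sanity check using the standard `onnx` Python package; it is not an OGLE tool:

```python
# Hedged example: a generic sanity check on the exported ONNX file.
# Uses the standard onnx package; not part of OGLE.
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid

# Print the opset version and graph inputs/outputs for a quick look.
print("opset:", model.opset_import[0].version)
print("inputs:", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])
```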
# demo
```shell
cd ogl_runtime/build
./opengl/examples/ssd_detector
```

- trained model
Download from Google Drive or baiduyunpan (passwd: ef7i), then put `demo.dlx` into the build directory. You can use the person detector model to run detection on a camera stream or an image. The following image is used for demonstration.
| framework | model | latency | platform | memory |
|---|---|---|---|---|
| OGLE | mobilenet ssd | 8 ms | 1080 Ti | 187 MB |
| MNN | mobilenet ssd | 9-10 ms | 1080 Ti | 174 MB |
- add new op
Please read `ogl_runtime/opengl/nn/glsl` and `ogl_runtime/opengl/nn/kernels` for reference, then rerun `python make_shaders.py glsl all_shaders.h all_shaders.cc` to regenerate the embedded shader sources.
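The regeneration step above presumably embeds each `.glsl` file as a C++ string constant in `all_shaders.h`/`all_shaders.cc`. The actual `make_shaders.py` may work differently; the sketch below only illustrates that idea (file layout and identifier naming are assumptions):

```python
# Hedged sketch of a shader-embedding generator, assuming make_shaders.py
# turns each .glsl file into a C++ string constant. Not the actual script.
import os
import sys

def embed_shaders(glsl_dir, header_path, source_path):
    names = sorted(f for f in os.listdir(glsl_dir) if f.endswith(".glsl"))
    with open(header_path, "w") as h, open(source_path, "w") as cc:
        h.write("#pragma once\n")
        cc.write('#include "%s"\n' % os.path.basename(header_path))
        for name in names:
            ident = name[:-len(".glsl")] + "_glsl"
            with open(os.path.join(glsl_dir, name)) as f:
                src = f.read()
            h.write("extern const char* %s;\n" % ident)
            # Embed the shader source as a C++ raw string literal.
            cc.write('const char* %s = R"(%s)";\n' % (ident, src))

if __name__ == "__main__":
    embed_shaders(sys.argv[1], sys.argv[2], sys.argv[3])
```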