Abstract: To enable the inference of high-precision deep neural networks (DNNs) on resource-constrained devices, DNN offloading has been widely explored in recent years. Some works have also ...