FPGA-based neural network accelerator outperforms GPUs
Benchmarked running the GoogLeNet Inception-v1 CNN at 8-bit resolution, it achieved a performance of 16.8 tera operations per second (TOPS) and can process more than 5,300 images per second on a Xilinx Virtex UltraScale+ XCVU9P-3 FPGA. The fully scalable design makes it suitable for object detection and video-processing applications at the edge and in the cloud, said Fawcett, as well as for deployment in data centres and smart cameras.
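As a rough sanity check on these figures (a sketch, not vendor data), dividing the quoted peak throughput by the image rate gives the compute budget available per frame:

```python
# Back-of-the-envelope check of the reported figures, assuming the
# quoted peak throughput is fully sustained during inference.

PEAK_TOPS = 16.8          # reported peak, tera-operations per second
IMAGES_PER_SEC = 5_300    # reported GoogLeNet Inception-v1 throughput

ops_per_image = PEAK_TOPS * 1e12 / IMAGES_PER_SEC
print(f"{ops_per_image / 1e9:.2f} G-ops per image")  # ≈ 3.17 G-ops
```

That works out to roughly 3.2 giga-operations per image, which is in line with GoogLeNet Inception-v1's commonly cited cost of about 1.5 giga multiply-accumulates (≈3 G-ops) per inference, so the two headline numbers are mutually consistent.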
The DPU can be configured to deliver optimal compute performance across a range of machine-learning neural network topologies, reusing the same DSP layout, memory arrangement and core interconnect for different algorithms.
The DPU achieves 50% higher performance than any competing CNN implementation and outperforms GPUs for a given power or cost budget, the company claims. "The FPGA is the world's most flexible compute platform, highly adaptable for future-proofing and capable of outperforming GPUs in AI, with lower latency," Fawcett added.
The company also announced that it is sponsoring a DPhil (PhD) at the University of Oxford to research implementation techniques for accelerating machine learning. This work will run alongside Omnitek's own research into AI engines and algorithms.
