Hundreds of AI chips are coming to market in the near future: the latest figures indicate that 34 IC and IP vendors will provide various AI chips and deep learning accelerator (DLA) ASICs in 2018. This trend reflects the urgent need for an open compiler that supports diverse AI chips. Anticipating this need, we developed the compiler ONNC. Open Neural Network Exchange (ONNX) is a standard for representing deep learning models that enables models to be transferred between frameworks. Built on ONNX, ONNC provides an efficient way to connect all current AI chips, especially DLA ASICs, with the ONNX format and with mainstream AI frameworks such as Caffe and TensorFlow. ONNC's dominant advantage over current AI frameworks is its direct support for DLA ASICs, achieved through its ability to describe variants of hardware performance cost models and through its general optimization passes. A DLA ASIC vendor can reuse these optimization passes simply by describing its chip's performance cost model in ONNC. Together, ONNX and ONNC help DLA ASIC vendors support various AI frameworks within a short time, improve DLA performance, and shorten development schedules. ONNX aims to guarantee `interoperability` across all tools; ONNC further guarantees `executability` on all devices.