Application: AI Inference Device

  An AI model generates a large number of feature maps during inference, and this becomes a problem when the model is deployed on an edge device: repeatedly accessing memory to fetch feature maps for convolution and other operations creates bandwidth, memory-capacity, power-consumption, and heat issues.

  To solve these issues, TITC proposes feature map compression. TITC provides both lossy and lossless modes: the feature map is compressed before it is written to memory and decompressed after it is read back.
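The lossless write/read path can be sketched as follows. TITC's actual codec is not described here, so this sketch uses `zlib` purely as a stand-in, and the function names and the example feature map are assumptions for illustration:

```python
import zlib

def compress_feature_map(fmap_bytes: bytes) -> bytes:
    """Lossless compression before the feature map is written to memory.
    zlib is a placeholder for the real hardware codec."""
    return zlib.compress(fmap_bytes, level=6)

def decompress_feature_map(blob: bytes) -> bytes:
    """Decompression after the feature map is read back from memory."""
    return zlib.decompress(blob)

# Hypothetical 8-bit feature map: mostly zeros (typical after ReLU),
# so it compresses well and the memory traffic shrinks accordingly.
fmap = bytes([0] * 900 + list(range(100)))
blob = compress_feature_map(fmap)
restored = decompress_feature_map(blob)

assert restored == fmap  # lossless: bit-exact round trip
print(len(fmap), "bytes ->", len(blob), "bytes stored")
```

Because post-activation feature maps are typically sparse, even a generic entropy coder already cuts the bytes moved per memory access, which is the bandwidth and power saving the scheme targets.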

We place our codec after the convolution layers of the model, and we use MobileNet v2 and tiny-YOLO v2 to evaluate the codec's performance.
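The placement of the codec between a convolution layer and the memory write can be sketched as below. This is a minimal illustration, not TITC's implementation: the uniform 8-bit quantizer stands in for the lossy mode, `zlib` for the entropy coder, and `conv_layer` is a dummy placeholder for a real convolution:

```python
import zlib

SCALE = 0.05  # assumed quantization step for the lossy mode

def quantize(fmap, scale=SCALE):
    """Lossy step: map non-negative floats (post-ReLU) to 8-bit codes."""
    return bytes(min(255, round(v / scale)) for v in fmap)

def dequantize(codes, scale=SCALE):
    return [c * scale for c in codes]

def conv_layer(fmap):
    """Placeholder for a real convolution; any per-element transform works here."""
    return [max(0.0, v - 0.1) for v in fmap]

# codec sits between the conv output and the memory write
fmap = [0.0, 0.25, 0.5, 1.2, 0.0, 3.1]
out = conv_layer(fmap)
stored = zlib.compress(quantize(out))                   # compress before writing
restored = dequantize(zlib.decompress(stored))          # decompress after reading
# restored matches out to within half a quantization step (SCALE / 2)
```

Inserting the codec at this point means only compressed data crosses the memory bus, while the next layer still sees a feature map whose error is bounded by the quantizer step.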

MobileNet v2 is a classification model; tiny-YOLO v2 is an object detection model.