Field Application

Fast inference
in any environment

Rapidly deploy inspection models optimized
for various manufacturing environments.

Solution

Fast inspection speed and rapid on-site deployment

Fast inspection speed

Rapid on-site deployment

Key features 1

Optimization options for high-speed inspection

Achieve target inspection speed with deep learning models optimized for fast inference.

Auto Deep Learning optimization

Achieve fast inspection speed with model optimization for lightweight devices.

Quantization

Maintain high model performance while improving inspection speed with simplified computations.
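By way of illustration only (this is not the product's actual implementation), a minimal NumPy sketch of symmetric int8 quantization shows the core trade behind simplified computations: float32 weights shrink to 8-bit integers plus a single scale factor, while reconstruction error stays within half a quantization step.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    scale = max(float(np.abs(w).max()), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
# Rounding error is bounded by half a quantization step (scale / 2).
err = float(np.abs(w - dequantize(q, scale)).max())
```

Integer arithmetic on the quantized weights is what enables faster inference on lightweight devices; production toolchains additionally calibrate scales per channel and fuse operations.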

Benefits

Inference speed in record time: As fast as 1.7ms

Inference speed
for each model

● Image size: 512 × 512
● GPU: GeForce RTX 4090
● Unit: millisecond

Classification: 1.7 ms

Segmentation: 2.3 ms

OCR: 7.6 ms
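For context, these latencies translate into single-stream throughput ceilings via a simple reciprocal calculation (ignoring I/O, preprocessing, and batching):

```python
# Latencies from the figures above (512 x 512 input, GeForce RTX 4090).
latencies_ms = {"Classification": 1.7, "Segmentation": 2.3, "OCR": 7.6}

# Upper bound on single-stream throughput: 1000 ms / latency per image.
throughput_fps = {model: round(1000.0 / ms) for model, ms in latencies_ms.items()}
# Classification ≈ 588 images/s, Segmentation ≈ 435 images/s, OCR ≈ 132 images/s
```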

Key features 2

Fast on-site model deployment

Speed up deep learning vision inspection model deployment with seamless integration.

Supports a wide range of processors and hardware

Deploy models on-site immediately without requiring developers to separately optimize models.

Call multiple models with a single API

Apply multiple models on-site by calling them with a single API instead of loading them individually.
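The single-API pattern can be sketched as follows; the class and method names here are hypothetical illustrations, not the product's actual API. Several models are registered once, and a single call fans an input image out to all of them:

```python
from typing import Any, Callable, Dict

class ModelRunner:
    """Minimal sketch: register several models, invoke them via one call."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, model: Callable[[Any], Any]) -> None:
        """Load a model once under a given name."""
        self._models[name] = model

    def infer(self, image: Any) -> Dict[str, Any]:
        """One call runs every registered model on the same image."""
        return {name: model(image) for name, model in self._models.items()}

runner = ModelRunner()
# Stand-in callables; in practice these would be loaded inspection models.
runner.register("classification", lambda img: "OK")
runner.register("segmentation", lambda img: [[0, 1], [1, 0]])
results = runner.infer(image=None)
```

The caller never loads or invokes models individually; adding a model to the pipeline is one `register` call, and the inference entry point stays unchanged.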

Product introduction

Looking for fast real-time inference?

Runtime Library for On-site Deployment

Runtime library for real-time, on-site deep learning inspection model operation
