Supports the OpenVINO™ toolkit; AI edge computing ready device.
FPGAs can be optimized for different deep learning tasks.
Intel® FPGAs support multiple floating-point precisions and inference workloads
The OpenVINO™ toolkit is based on convolutional neural networks (CNNs); it extends workloads across Intel® hardware and maximizes performance.
It can optimize a pre-trained deep learning model from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR) binary file, then execute it with the Inference Engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
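The convert-then-deploy workflow above can be sketched as two commands (a minimal sketch assuming a legacy OpenVINO™ installation; the model file, image file, and output paths are illustrative):

```shell
# Convert a pre-trained Caffe model into the IR format with the Model Optimizer
# (mo.py ships with the Intel® Deep Learning Deployment Toolkit).
python3 mo.py \
    --input_model squeezenet1.1.caffemodel \
    --data_type FP16 \
    --output_dir ./ir

# Run the resulting IR through the Inference Engine classification sample on the
# FPGA, falling back to the CPU for unsupported layers via the HETERO plugin.
./classification_sample \
    -m ./ir/squeezenet1.1.xml \
    -i cat.jpg \
    -d HETERO:FPGA,CPU
```

The `HETERO:FPGA,CPU` device string is what lets a single IR execute heterogeneously: layers the FPGA bitstream supports run on the card, and the rest run on the host CPU.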
Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows® 10 support planned for the end of 2018; more operating systems coming soon)
Intel® Deep Learning Deployment Toolkit
– Model Optimizer
– Inference Engine
Optimized computer vision libraries
Intel® Media SDK
*OpenCL™ graphics drivers and runtimes.
Currently supported topologies: AlexNet, GoogLeNet, Tiny YOLO, LeNet, SqueezeNet, VGG16, ResNet (more variants are coming soon)
Intel® FPGA Deep Learning Acceleration Suite
High flexibility: the Mustang-F100-A10 is developed on the OpenVINO™ toolkit structure, which allows models trained in frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to the optimized IR format.
*OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.
Intel® Arria® 10 GX1150 FPGA
Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows® 10 support planned for the end of 2018; more operating systems coming soon)
Voltage Regulator and Power Supply
Intel® Enpirion® Power Solutions
8 GB on-board DDR4
PCI Express x8
Compliant with PCI Express Specification V3.0
5°C~60°C (ambient temperature)
Standard Half-Height, Half-Length, Double-Slot
5% ~ 90% relative humidity
*Reserved PCIe 6-pin 12 V external power connector
DIP switch/LED indicator
Identifies the card number
*A standard PCIe slot provides 75 W of power; this connector is reserved for users whose system configuration requires additional power.