Getting Started

Bring Up

Follow these simple steps to bring up the Fast Sense X Robotics AI Platform (all required accessories are included).

  1. Connect all required devices:
    • monitor via miniDP-to-DP adapter;
    • keyboard via micro USB B 2.0 to USB A 2.0 cable;
    • mouse via micro USB B 2.0 to USB A 2.0 cable;
    • USB Wi-Fi stick via micro USB B 3.0 to USB A 3.0 adapter;
    • power supply via adapter cable. The power supply should be rated for more than 30 W (for example, 12 V 3 A). Input voltages from 7 V to 35 V are supported.
  2. Wait a couple of minutes for the system to boot.
  3. Log in to the system. Default credentials are robot/fastsense (login/password).
  4. Find your Wi-Fi network among the available networks and connect to it.
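As a quick sanity check for the power-supply requirement above: the rating is simply voltage times current, so the example 12 V / 3 A adapter delivers 36 W, above the 30 W minimum. A trivial sketch (the helper name is ours):

```python
def power_watts(volts, amps):
    """Power rating of a supply: P = V * I."""
    return volts * amps

# The example 12 V, 3 A supply comfortably exceeds the 30 W requirement
print(power_watts(12, 3))  # 36
```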

You are ready to continue!

Docker setup

By default, Docker is already installed on the operating system.

It is highly recommended to run everything inside a Docker container using our image from Docker Hub:

docker pull fastsense/ros_ai

The container can be started with the following command:

docker run -it -v /dev/:/dev/ -v /home/robot/docker_workspace:/home/user/workspace --net=host --privileged fastsense/ros_ai /bin/bash

Run your first demo

Update NNIO:

pip3 install -U nnio
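To confirm that the upgrade took effect, you can query the installed package version from Python's standard library. This is a sketch using only `importlib.metadata`; the `installed_version` helper is our own, not part of nnio:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version of a package, or None if it is not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Prints the nnio version string after the upgrade, or None if nnio is absent
print(installed_version('nnio'))
```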

To get started, download the test image:

wget -O ~/workspace/input.jpeg

Create a simple Python script:

import cv2
import nnio

# Load image
img = cv2.imread('/home/user/workspace/input.jpeg')

# Load model (For EdgeTPU)
model = nnio.zoo.edgetpu.detection.SSDMobileNet(device='TPU')

# Preprocess your numpy image
preproc = model.get_preprocessing()
img_prep = preproc(img)

# Make prediction
boxes = model(img_prep)

# Report and draw the detected boxes
# (attribute names x_1, y_1, x_2, y_2 with relative [0, 1] coordinates
# are assumed here; check nnio's DetectionBox documentation)
h, w = img.shape[:2]
for box in boxes:
    print('"%s" detected!' % box.label)
    cv2.rectangle(
        img,
        (int(box.x_1 * w), int(box.y_1 * h)),
        (int(box.x_2 * w), int(box.y_2 * h)),
        (0, 255, 0), 2,
    )

# Save output image with the boxes drawn on it
cv2.imwrite('/home/user/workspace/output.jpeg', img)

Now you can see the output.jpeg image with the bounding boxes drawn on it, and the terminal prints the objects found in the input image.
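In practice you may want to report only confident detections. The sketch below uses a stand-in `Box` class with `label` and `score` fields for illustration (nnio's detection boxes expose similar attributes, but this class and the `confident` helper are our own, not part of nnio):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Stand-in for a detection result (label and score fields assumed)."""
    label: str
    score: float

def confident(boxes, threshold=0.5):
    """Keep only detections whose score reaches the threshold."""
    return [b for b in boxes if b.score >= threshold]

detections = [Box('person', 0.92), Box('cat', 0.31), Box('dog', 0.77)]
print([b.label for b in confident(detections)])  # ['person', 'dog']
```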

If you want to run inference on another device, replace the model initialization line:

  • model = nnio.zoo.openvino.detection.SSDMobileNetV2(device='MYRIAD') for the OpenVINO framework;
  • model = nnio.zoo.onnx.detection.SSDMobileNetV1() for the ONNX framework.
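If you switch between devices often, it can be convenient to wrap the three initialization lines above in one helper. This is a sketch, not part of nnio: `make_detector` and its backend names are our own, and nnio is imported lazily so the dispatch logic itself has no dependencies:

```python
SUPPORTED_BACKENDS = ('edgetpu', 'openvino', 'onnx')

def make_detector(backend):
    """Create an SSD MobileNet detector from the nnio model zoo for a backend."""
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError('unknown backend: %r' % (backend,))
    import nnio  # deferred import: only needed when a model is actually built
    if backend == 'edgetpu':
        return nnio.zoo.edgetpu.detection.SSDMobileNet(device='TPU')
    if backend == 'openvino':
        return nnio.zoo.openvino.detection.SSDMobileNetV2(device='MYRIAD')
    return nnio.zoo.onnx.detection.SSDMobileNetV1()
```

Unknown backend names fail fast with a ValueError before any framework is loaded.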

What's next?

Try running our Edge AI Demo, in which five neural networks are launched in parallel in ROS to process the video streams from two cameras.

Some popular models are already built into nnio. Browse the networks in the NNIO Model Zoo that are ready to use on edge devices.

Finally, you can convert your own models to run on these devices by following the Converting Models guide.