Perform a few easy steps to run the Fast Sense X Robotics AI Platform (all required accessories are included):
- Connect all required devices:
- monitor via miniDP-to-DP adapter;
- keyboard via micro USB B 2.0 to USB A 2.0 jack cable;
- mouse via micro USB B 2.0 to USB A 2.0 jack cable;
- USB Wi-Fi stick via micro USB B 3.0 to USB A 3.0 jack adapter;
- power supply via adapter cable. The power supply should be rated for more than 30 W (for example, 12 V 3 A); input voltages from 7 V to 35 V are supported.
- Wait a couple of minutes for the system to boot.
- Log in to the system. Default credentials are robot/fastsense (login/password).
- Find the WiFi network you are interested in among the available ones and connect to it.
You are ready to continue!
Docker is already installed on the operating system by default, so you only need to pull the container image:
docker pull fastsense/ros_ai
The container can be started with the following command:
docker run -it -v /dev/:/dev/ -v /home/robot/docker_workspace:/home/user/workspace --net=host --privileged fastsense/ros_ai /bin/bash
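Here /dev/ and the host directory /home/robot/docker_workspace are mounted into the container, --net=host shares the host network stack, and --privileged gives the container access to the connected hardware. The trailing /bin/bash drops you into a shell inside the container, where the following steps are executed.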
Run your first demo
First, install the nnio library, which is used for neural network inference in the examples below:
pip3 install -U nnio
To get started, download the test image:
wget https://habrastorage.org/webt/bs/26/rf/bs26rf28a9ze_noyyw5jlcylas8.jpeg -O ~/workspace/input.jpeg
Create a simple Python script:
import cv2
import nnio

# Load image
img = cv2.imread('/home/user/workspace/input.jpeg')

# Load model (for EdgeTPU)
model = nnio.zoo.edgetpu.detection.SSDMobileNet(device='TPU')

# Preprocess your numpy image
preproc = model.get_preprocessing()
img_prep = preproc(img)

# Make prediction
boxes = model(img_prep)
for box in boxes:
    box.draw(img)
    print('"%s" detected!' % box.label)

# Save output image
cv2.imwrite('/home/user/workspace/output.jpeg', img)
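Save the script in the mounted workspace, for example as /home/user/workspace/detect.py (the file name is arbitrary), and run it inside the container with python3.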
Now you can open the image output.jpeg with the bounding boxes drawn on it, and see the names of the objects found in the input image printed in the terminal.
If you want to run inference on another device, replace the model initialization line:
- for the OpenVINO framework: model = nnio.zoo.openvino.detection.SSDMobileNetV2(device='MYRIAD');
- for the ONNX framework: model = nnio.zoo.onnx.detection.SSDMobileNetV1().
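If you plan to switch between accelerators often, the same choice can be wrapped in a small helper that picks one of the three model-zoo classes shown above. The sketch below is only an illustration; load_detector is a hypothetical convenience function, not part of nnio.

import nnio

def load_detector(backend):
    # Hypothetical helper: picks an SSD MobileNet detector from the nnio model zoo
    # by backend name ('edgetpu', 'openvino' or 'onnx').
    if backend == 'edgetpu':
        # Google Coral Edge TPU accelerator
        return nnio.zoo.edgetpu.detection.SSDMobileNet(device='TPU')
    if backend == 'openvino':
        # Intel Myriad X VPU through the OpenVINO framework
        return nnio.zoo.openvino.detection.SSDMobileNetV2(device='MYRIAD')
    if backend == 'onnx':
        # CPU inference through the ONNX framework
        return nnio.zoo.onnx.detection.SSDMobileNetV1()
    raise ValueError('Unknown backend: %s' % backend)

model = load_detector('onnx')  # the rest of the demo script stays unchanged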
Finally, you can convert your own models to run on these devices using the Converting models guide.