【Unboxing and Review】--- ROSMASTER A1 AI Large Model ROS2 Robot with Ackermann Steering Chassis

Yahboom's newly launched ROSMASTER A1 is an embodied intelligent robot car platform designed specifically for ROS education and artificial intelligence research. Built on an Ackermann steering chassis, the ROSMASTER A1 accurately replicates the steering model of a real smart car. It integrates peripherals such as a 3D depth camera or 2MP HD camera PTZ (depending on the kit), T-MINI Plus LiDAR, an AI intelligent voice module, and a multi-function ROS robot expansion board, giving the robot stereoscopic vision comparable to the human eye and precise environmental perception. Users can choose a Raspberry Pi 5, Jetson NANO 4GB, or Jetson ORIN NANO 8GB as the main control board. Developed in the ROS2 Humble environment, it innovatively employs a dual-model inference architecture that keeps a clear division of labor between decision-making and execution. It seamlessly integrates visual, voice, and text information, enabling not only precise SLAM mapping and navigation and AI visual recognition, but also human-like interaction with free dialogue interruption and dynamic feedback reasoning, meeting the needs of diverse scenarios.

Features

  • High-quality hardware configuration enables precise perception:

Integrated with a 3D depth camera (Superior kit) and a 2MP HD camera PTZ (Standard kit), T-MINI Plus LiDAR, and an AI large model voice module. Combined with high-torque metal 520 motors, the ROS expansion board, and a large-capacity lithium battery, these provide the robot with powerful, long-lasting movement capabilities as well as the ability to accurately perceive the 3D world.

  • Professional Ackermann motion platform:

Replicates the actual steering structure of intelligent vehicles, providing a highly reliable mobile chassis foundation for the research and verification of autonomous driving algorithms (such as path planning and tracking control).

  • Cutting-edge AI core, intelligent interactive experience:

Innovative dual-model inference architecture separates decision-making and execution, ensuring clear logic and efficient operation. Deeply integrated with AI large model voice technology, it supports free conversation interruption, continuous questioning, and dynamic feedback.

  • Flexible and open master control solution:

Seamlessly supports the most popular development platforms: Raspberry Pi 5 (8GB), Jetson NANO 4GB, or Jetson ORIN NANO 8GB. Users can freely choose the most cost-effective solution and build the development environment they are most familiar with, based on the project's requirements for computing power and AI performance.

  • Deeply developed with ROS2 for interesting functions:

Developed with ROS2 Humble and compatible with mainstream simulation and visualization tools such as Gazebo and RViz. In addition to intelligent interaction, it supports SLAM mapping and navigation, intelligent obstacle avoidance, tracking cruise control, AI visual recognition, object tracking, and more. Users can learn the basics in the classroom or build advanced functions in research and maker projects, meeting diverse needs from beginners to innovators.

Unboxing & Shipping List

As shown below, these are all the parts included with the ROSMASTER A1.

There are three options available: the Standard version, the Superior version, and the Ultimate version.

There are three main control boards to choose from: Raspberry Pi 5, NVIDIA Jetson NANO B01, and NVIDIA Jetson ORIN NANO 8GB Super.

The course materials, product features, and control software for each main control board are essentially the same; the choice only affects the performance of the A1.

Car body (Standard) *1

Basic car body (Pre-installed with OLED and robot control board) *1

USB wireless gamepad + AAA batteries *1

Velcro strap *1

XH2.54 cable (15cm) *1

Black cable ties (100mm) *3

Crystal screwdriver  *1

USB HUB expansion board  *1

12.6V charger (2A, DC4017)  *1

Battery pack (12.6V, 6000mAh)  *1

Right-angle (up) USB-to-USB cable (30cm) *1

Orange screwdriver  *1

Accessory package

AI large model voice module (Standard) *1

AI large model voice module  *1

Speaker  *1

Right-angle (side) Type-C cable (25cm) *1

Accessory package  *1

Note: If you choose the Standard version, you will also get the following items.

2DOF PTZ  *1, PTZ package

T-MINI PLUS LiDAR *1, Accessory package

Note: If you choose the Superior version, you will also get the following items.

Nuwa-HP60C depth camera *1, Camera bracket (pre-installed) *1, Right-angle (side) Type-C cable (30cm) *1, Accessory package

T-MINI PLUS LiDAR *1, Accessory package

Note: If you choose the Ultimate version, you will also get the following items.

Nuwa-HP60C depth camera *1, Camera bracket (pre-installed) *1, Right-angle (side) Type-C cable (30cm) *1, Accessory package

SLAM C1 LiDAR  *1, Accessory package + Velcro

Product Parameter Details

About Structure Design and Hardware

2MP HD camera PTZ (For Standard Version)

The 2MP HD camera PTZ is equipped with two HQ digital servos, supporting 100° vertical rotation and 180° horizontal rotation. With its 2MP HD camera, it enables AI vision functions such as facial recognition, color recognition, and human detection, and also supports PTZ tracking for an enhanced dynamic interactive experience.
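For readers who want to experiment with the PTZ themselves, below is a minimal proportional tracking sketch in Python. It only illustrates the idea of nudging the two servos so a detected target drifts toward the image center; the set_pan_angle / set_tilt_angle functions are stubs standing in for Yahboom's actual servo driver, which is not covered in this review, and the gains and angle limits are placeholder values.

# Minimal proportional PTZ-tracking sketch. The servo functions are stubs
# standing in for the real driver API.
FRAME_W, FRAME_H = 640, 480        # assumed camera resolution
PAN_RANGE = (0.0, 180.0)           # 180 deg horizontal rotation
TILT_RANGE = (40.0, 140.0)         # ~100 deg vertical rotation
KP = 0.04                          # gain: degrees per pixel of error

pan_deg, tilt_deg = 90.0, 90.0     # start centered

def set_pan_angle(deg):            # stub: replace with the real servo call
    print(f"pan -> {deg:.1f} deg")

def set_tilt_angle(deg):           # stub: replace with the real servo call
    print(f"tilt -> {deg:.1f} deg")

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def track(cx, cy):
    """Nudge the PTZ so the target pixel (cx, cy) moves toward the image center."""
    global pan_deg, tilt_deg
    err_x = cx - FRAME_W / 2       # >0: target is right of center
    err_y = cy - FRAME_H / 2       # >0: target is below center
    pan_deg = clamp(pan_deg - KP * err_x, *PAN_RANGE)
    tilt_deg = clamp(tilt_deg - KP * err_y, *TILT_RANGE)
    set_pan_angle(pan_deg)
    set_tilt_angle(tilt_deg)

track(480, 200)                    # example: target right of and above center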

3D Structured Light Depth Camera [Superior Kit]

The 3D depth camera uses structured-light 3D imaging, offers a working distance of 0.2-4m and a horizontal field of view of up to 73.8°, supports pitch adjustment, and is compatible with the HD camera's AI functions. It also enables advanced applications such as 3D depth data processing, 3D mapping, and navigation.

Integrates multiple mainstream image processing algorithms, supports frameworks such as OpenCV and MediaPipe, and can efficiently identify a variety of target objects, helping developers quickly build high-performance computer vision applications. 

Combining visual algorithms with robot motion control enables efficient target tracking. After the camera PTZ detects the target, the robot moves and follows it synchronously. The Superior kit (which includes a depth camera) can achieve 3D following with distance perception.
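As a rough illustration of what "3D following" involves, here is a hedged Python/rclpy sketch that turns a target's horizontal pixel offset and its depth-camera distance into a geometry_msgs/Twist command, so the car both steers toward the target and holds a set following distance. The cmd_vel topic name, the gains, and the detection/depth inputs are assumptions for illustration, not the A1's documented interface.

# Hedged "3D following" sketch: center the target and hold a set distance.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class Follow3D(Node):
    def __init__(self):
        super().__init__('follow_3d_sketch')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.target_dist = 0.8          # desired following distance (m)
        self.frame_width = 640

    def update(self, cx, distance_m):
        """cx: target center x-pixel; distance_m: depth-camera distance to it."""
        cmd = Twist()
        # Steer toward the target (proportional on pixel offset).
        cmd.angular.z = -0.003 * (cx - self.frame_width / 2)
        # Drive forward/backward to hold the following distance.
        cmd.linear.x = 0.6 * (distance_m - self.target_dist)
        self.pub.publish(cmd)

def main():
    rclpy.init()
    node = Follow3D()
    node.update(cx=400, distance_m=1.5)  # example: target right of center, 1.5 m away
    rclpy.shutdown()

if __name__ == '__main__':
    main()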

TOF LiDAR

Standard/Superior Kit: Uses the T-MINI Plus, which is compact and highly precise, making it suitable for entry-level learning.

Ultimate Kit: Equipped with the SLAM C1, which offers a higher sampling rate and better system stability, suiting it to complex application scenarios.

Both LiDARs can perform various functions, including mapping, navigation, and obstacle avoidance.

Equipped with a high-precision TOF LiDAR and fusing encoder and IMU gyroscope data, the A1 enables high-precision mapping and navigation. It supports multiple mapping algorithms and offers single-point and multi-point navigation, which can be controlled via the app. Specially optimized relocalization and navigation technology significantly reduces positioning drift during operation, improving navigation stability and reliability.
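Since the A1 runs ROS2 Humble, goals like these can in principle be sent with Nav2's standard nav2_simple_commander API. The sketch below sends a single navigation goal; the frame name and coordinates are placeholders, and the A1's own launch files and map setup are not shown in this review.

# Hedged sketch: a single navigation goal via Nav2's nav2_simple_commander.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def main():
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()            # wait for the Nav2 stack to come up

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = nav.get_clock().now().to_msg()
    goal.pose.position.x = 1.5           # placeholder: 1.5 m ahead on the map
    goal.pose.orientation.w = 1.0

    nav.goToPose(goal)
    while not nav.isTaskComplete():
        pass                             # could inspect nav.getFeedback() here
    print('Result:', nav.getResult())
    rclpy.shutdown()

if __name__ == '__main__':
    main()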

AI large model voice module

The AI large model voice module is the core hub connecting user voice input with intelligent model decision-making. It is equipped with a high-sensitivity MEMS microphone and a cavity speaker, which can clearly pick up speech and provides far-field pickup, echo cancellation, voice broadcast, and environmental noise reduction.

ROS Robot Control Board

Designed specifically for ROS robot car development, it can control and drive various robot chassis, including Mecanum-wheel, Ackermann, four-wheel differential, two-wheel differential, omnidirectional, and tracked types. It supports the Raspberry Pi 5 power supply protocol and meets the power requirements of the various ROS main control boards.
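To make the Ackermann part concrete, here is a small bicycle-model sketch of the conversion such a driver performs: a (v, ω) velocity command becomes a steering angle and drive speed, and the steering angle splits into different inner and outer wheel angles. The wheelbase and track values are placeholders, not measured A1 dimensions.

# Hedged Ackermann (bicycle-model) kinematics sketch.
import math

WHEELBASE = 0.20   # front-to-rear axle distance (m), placeholder
TRACK = 0.16       # left-to-right wheel distance (m), placeholder

def twist_to_ackermann(v, omega):
    """Return (steering_angle_rad, speed_m_s) for a bicycle-model chassis."""
    if abs(v) < 1e-6:                        # an Ackermann car cannot turn in place
        return 0.0, 0.0
    steering = math.atan(omega * WHEELBASE / v)
    return steering, v

def inner_outer_angles(steering):
    """Per-wheel Ackermann angles: the inner wheel steers more than the outer."""
    if abs(steering) < 1e-6:
        return 0.0, 0.0
    r = WHEELBASE / math.tan(abs(steering))  # turn radius of the virtual center wheel
    inner = math.atan(WHEELBASE / (r - TRACK / 2))
    outer = math.atan(WHEELBASE / (r + TRACK / 2))
    sign = math.copysign(1.0, steering)
    return sign * inner, sign * outer

print(twist_to_ackermann(0.5, 0.8))          # e.g. 0.5 m/s with 0.8 rad/s yaw
print(inner_outer_angles(0.3))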

Interesting Functions Introduction

Scene understanding

Through the large visual model, the A1 can understand the scene information within its field of view, recognize object names and spatial relationships, and respond in real time through the large voice model.
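Conceptually, this loop is "capture a frame, ask a vision-language model, speak the answer." The sketch below shows the capture-and-ask half using a generic OpenAI-style multimodal endpoint; the URL, key, and model name are placeholders, since the A1's actual cloud backend is not disclosed in this review.

# Hedged scene-understanding sketch with a placeholder multimodal endpoint.
import base64
import cv2
import requests

API_URL = "https://api.example.com/v1/chat/completions"   # placeholder endpoint
API_KEY = "YOUR_KEY"                                        # placeholder key

def describe_scene():
    cap = cv2.VideoCapture(0)                # default camera
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    jpeg = cv2.imencode(".jpg", frame)[1].tobytes()
    image_b64 = base64.b64encode(jpeg).decode()

    payload = {
        "model": "vision-model-placeholder",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the objects and their spatial layout."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(describe_scene())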

Visual Tracking/Following

Leveraging the powerful analytical capabilities of the large visual model, the A1 can automatically identify and lock onto target objects in complex environments, tracking them with the camera PTZ or intelligently following them with the robot chassis.

Autonomous cruising

Through deep analysis by the large visual model, the A1 can accurately identify and follow lines of different colors in real time.
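For comparison, the same line-following task can also be done with classical OpenCV color thresholding rather than a large model. The sketch below finds the line centroid in the lower part of the image and converts the offset into a steering value; the HSV range and gain are placeholders to tune for your line color and lighting.

# Classical color-line-following sketch (not the A1's large-model pipeline).
import cv2
import numpy as np

LOWER_HSV = np.array([20, 100, 100])   # example: yellow line lower bound
UPPER_HSV = np.array([35, 255, 255])   # example: yellow line upper bound

def line_steering(frame, kp=0.005):
    """Return a steering value from the line centroid in the lower image strip."""
    h, w = frame.shape[:2]
    roi = frame[int(0.7 * h):, :]                      # look only near the robot
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None                                    # line lost
    cx = m["m10"] / m["m00"]                           # line centroid x-pixel
    return -kp * (cx - w / 2)                          # steer toward the line

if __name__ == "__main__":
    test = np.zeros((480, 640, 3), dtype=np.uint8)
    test[400:, 300:340] = (0, 220, 220)                # fake yellow stripe
    print(line_steering(test))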


Depth distance Q&A [Only for Superior Kit]

By combining the large visual model and the depth camera, the A1 possesses environmental understanding and distance perception capabilities, and it can combine visual recognition with distance data for intelligent Q&A.
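The distance half of this feature boils down to reading the depth image at the detected object's location. The sketch below samples a small patch around the bounding-box center and takes the median; it assumes a depth image aligned to the color image with values in millimeters, which is typical of structured-light cameras but not confirmed for the Nuwa-HP60C here.

# Hedged sketch: object distance from an aligned depth image.
import numpy as np

def object_distance_m(depth_image, bbox):
    """bbox = (x1, y1, x2, y2) in pixels; returns distance in meters or None."""
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    patch = depth_image[max(cy - 2, 0):cy + 3, max(cx - 2, 0):cx + 3]
    valid = patch[patch > 0]                    # 0 usually means "no reading"
    if valid.size == 0:
        return None
    return float(np.median(valid)) / 1000.0     # assumed mm -> m

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 1500, dtype=np.uint16)     # flat wall at 1.5 m
    print(object_distance_m(fake_depth, (300, 200, 340, 260)))  # -> 1.5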


SLAM mapping environment perception

Through large visual model analysis, the A1 can deeply understand the objects and spatial layout within different areas of the map. 

SLAM intelligent multi-point navigation

The A1 can transmit environmental data to the large visual model in real time for in-depth analysis, plan dynamic paths based on user voice commands, and autonomously navigate to one or more designated areas, achieving intelligent navigation.
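Multi-point navigation of this kind maps naturally onto Nav2's followWaypoints API from nav2_simple_commander. In the sketch below, the waypoint coordinates are simple placeholders standing in for the areas the voice and visual models would actually select.

# Hedged multi-point navigation sketch with Nav2 waypoint following.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def make_pose(nav, x, y):
    p = PoseStamped()
    p.header.frame_id = 'map'
    p.header.stamp = nav.get_clock().now().to_msg()
    p.pose.position.x = x
    p.pose.position.y = y
    p.pose.orientation.w = 1.0
    return p

def main():
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()
    waypoints = [make_pose(nav, 1.0, 0.0),     # placeholder area 1
                 make_pose(nav, 1.0, 1.5),     # placeholder area 2
                 make_pose(nav, 0.0, 0.0)]     # back to start
    nav.followWaypoints(waypoints)
    while not nav.isTaskComplete():
        pass
    print('Done:', nav.getResult())
    rclpy.shutdown()

if __name__ == '__main__':
    main()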

SLAM map object search

Through large voice model inference and large visual model analysis, the A1 can accurately recognize and analyze user voice commands, deeply understand their meaning, and autonomously plan and complete the corresponding tasks.

Embodied Intelligence for complex and long-term task processing

By integrating visual understanding, voice intent recognition, and SLAM dynamic path planning, A1 can decompose complex user commands, perceive environmental changes in real time, and complete a series of coherent operations including recognition, tracking, navigation, and Q&A.

Large model intent understanding and planning | Context-aware response

By expanding the RAG knowledge base for user intent recognition and environmental context analysis, the robot can understand the user's underlying needs, independently plan tasks, and respond dynamically without the user issuing detailed instructions.
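In RAG terms, this means retrieving the most relevant knowledge-base snippets for the user's request and handing them to the language model as context. The sketch below uses a toy embed() stub and made-up snippets purely for illustration; the A1's actual knowledge-base format and embedding model are not documented in this review.

# Minimal RAG-retrieval sketch with a placeholder embedding function.
import numpy as np

def embed(text):
    """Placeholder embedding (bag-of-characters). A real system would call an
    embedding model here."""
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

KNOWLEDGE_BASE = [
    "The charging dock is next to the lab door.",
    "Red bins are for recyclable waste.",
    "The meeting room is the second area on the saved map.",
]

def retrieve(query, k=2):
    q = embed(query)
    scores = [float(np.dot(q, embed(doc))) for doc in KNOWLEDGE_BASE]
    best = np.argsort(scores)[::-1][:k]
    return [KNOWLEDGE_BASE[i] for i in best]

if __name__ == "__main__":
    context = retrieve("Where should I take the recycling?")
    print(context)   # retrieved snippets would be prepended to the LLM prompt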

Summary

After this in-depth experience, the Yahboom ROSMASTER A1 not only supports common ROS functions and AI visual interaction, but also extends them with many additional capabilities. All versions are equipped with the AI large model voice module, supporting large model functions and significantly improving overall cost-effectiveness. The Ackermann chassis faithfully replicates the steering characteristics of a real vehicle, making it an ideal platform for autonomous driving algorithm validation and research, and well suited to university laboratory teaching, autonomous driving experiments, and other application scenarios.

