【Unboxing and Reviewing】--- ROSMASTER M3 AI Large Model ROS2 Robot with Mecanum Wheel

About product

Yahboom ROSMASTER M3 is a high-performance ROS2 AI large model robot car engineered for the Jetson Orin Nano, Jetson Orin NX, Raspberry Pi 5, and RDK X5. It seamlessly integrates AI large language, vision, and voice models to perceive, understand, and dynamically interact with the world around it, transforming complex commands into intelligent action. Key features include dual T-mini Plus LiDARs (optional) for robust 360° SLAM, a unique pendulum-style independent suspension system for superior shock absorption on complex terrain, and upgraded nylon Mecanum wheels for enhanced durability and smooth omnidirectional movement. It ships with Ubuntu, ROS 2, and AI demos pre-installed, along with advanced visual algorithms for the DABAI DCW2 depth camera and multi-sensor fusion applications, right out of the box.

  • Complete ROS2 & AI Development Platform
ROSMASTER M3 robot car is a hardware-ready foundation for advanced robotics. It comes pre-assembled with Ubuntu and ROS 2 (Humble) installed, allowing you to focus on coding from day one. It's fully compatible with Jetson Orin Nano SUPER/Orin NX SUPER, Raspberry Pi 5 and RDK X5, offering flexibility for various project needs and skill levels.
  • Advanced Perception with Dual LiDAR & Depth Camera
Equipped with dual T-mini Plus LiDARs and the high-performance DABAI DCW2 Binocular depth camera, the ROSMASTER M3 delivers exceptional environmental awareness. The dual LiDAR setup enables stable, high-precision 360° mapping and navigation, while the depth camera provides rich data for 3D vision, object recognition and AI model training, forming a complete sensor suite for autonomous tasks.
  • Engineered for Performance & Durability
Built for real-world testing, the ROSMASTER M3 features a unique pendulum-style independent suspension system that significantly improves shock absorption and stability on uneven surfaces. The upgraded Nylon Mecanum wheels offer greater wear resistance and smoother omnidirectional movement compared to standard plastic wheels, ensuring reliable and agile operation.
  • Enhanced Expandability & Cool Visual Design
The included latest ROS expansion board is a hub for connectivity, supporting the simultaneous connection of dual LiDARs, multiple cameras, and various other sensors (like IMU) for complex multi-sensor fusion projects. A dynamic colorful RGB LED strip on the front adds a cool visual effect to the ROSMASTER M3 robot car.

Unboxing & Shipping List

As shown below, these are all the parts for the ROSMASTER M3.
Three options are available: Standard version, Superior version, Ultimate version.

Car body (Standard) 

Basic car body (Pre-installed with OLED and robot control board) *1

USB HUB expansion board *1

USB wireless gamepad + AAA batteries

12.6V charger *1

Battery pack (12.6V, 6000mAh)  *1

XH2.54 cable (10cm) *1

Side elbow Type-C data cable (30cm)  *1

Screwdriver *1

Small screwdriver *1

Accessory package

Velcro strap *1

AI large model voice module (Standard) 

AI large model voice module  *1

Speaker  *1

Side elbow Type-C cable (25cm)  *1

Accessory package  

DABAI DCW2 depth camera (Standard) 

DABAI DCW2 depth camera *1

7-inch screen package (Superior and Ultimate versions only)

7-inch screen *1

7-inch screen brackets *2

Accessory package

Single LiDAR package (Standard and Superior versions only)

Tmini-Plus LiDAR *1

LiDAR line (15cm) *1

Accessory package

Dual LiDAR package (Ultimate version only)

Tmini-Plus LiDAR *2

LiDAR line (30cm) *1

LiDAR line (15cm) *1

LiDAR holder *2

L-shaped bracket *2

Accessory package

About main controller

Four main controller options are available: Raspberry Pi 5, RDK X5, NVIDIA Jetson Orin Nano, and NVIDIA Jetson Orin NX.

Note: The course materials, product features, and control software are essentially the same for each main controller; the choice only affects the performance of the M3.

Product Parameter Details

About Structure Design and Hardware

Binocular Structured Light Depth Camera

Equipped with the Orbbec DABAI DCW2 binocular structured light 3D depth camera, the M3 can accurately measure the distance, shape, height, volume, and other properties of objects, enabling AI projects such as gripping, sorting, and handling in 3D space.

TOF LiDAR

The T-mini Plus LiDAR uses the Time-of-Flight (TOF) ranging principle, with a ranging range of 0.05m to 12m and a sampling frequency of up to 4,000 times/second. It supports optional single or dual LiDAR configurations. The dual-LiDAR version uses a diagonally staggered layout combined with a multi-LiDAR data fusion and filtering algorithm, effectively improving the robot's mapping and navigation accuracy and operational efficiency in complex environments.

Single TOF LiDAR

The single LiDAR adopts a top-mounted external layout, supporting 360° continuous scanning for all-around environmental perception. Combined with SLAM algorithms and fused data from multiple sensors such as the IMU, it supports the construction of high-precision environmental maps, significantly improving the robot's navigation stability in complex and dynamic environments.

Dual TOF LiDAR

The M3 adopts a diagonally offset dual-LiDAR layout for 360° all-around environmental perception. The front-right LiDAR precisely scans the driving path, while the rear-left LiDAR simultaneously captures dynamic environmental information behind the robot, making the setup well suited to scenarios with frequent turns.

Through timestamp alignment, point cloud registration, and IMU fusion, it reduces distortion during high-speed motion, improving mapping and navigation accuracy and enabling one-step path planning. Using a wired microROS solution, the two LiDARs and the control board occupy only one port on the ROS main controller, saving port resources.
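To make the dual-LiDAR idea concrete, here is a minimal, self-contained sketch of merging two planar scans into one robot-frame point set. This is illustrative only, not Yahboom's actual fusion pipeline: the mounting poses are made up, and real fusion would also align timestamps and filter overlapping returns as described above.

```python
import math

def scan_to_points(ranges, angle_min, angle_inc, mount_x, mount_y, mount_yaw):
    """Convert one LiDAR's polar ranges into (x, y) points in the robot frame,
    given the sensor's mounting pose (position + yaw) on the chassis."""
    points = []
    for i, r in enumerate(ranges):
        if not (0.05 <= r <= 12.0):  # T-mini Plus valid range per the spec above
            continue
        a = angle_min + i * angle_inc + mount_yaw
        points.append((mount_x + r * math.cos(a), mount_y + r * math.sin(a)))
    return points

# Hypothetical diagonal mounting: front-right sensor facing forward,
# rear-left sensor facing backward (yaw = pi).
front = scan_to_points([1.0, 2.0], 0.0, math.pi / 2, 0.10, -0.08, 0.0)
rear = scan_to_points([1.5], 0.0, math.pi / 2, -0.10, 0.08, math.pi)
merged = front + rear  # naive union; real fusion also aligns timestamps
```

Because both scans are expressed in the same robot frame before merging, downstream SLAM sees a single 360° scan even though each sensor individually has blind spots behind the chassis.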

AI large model voice module

The AI large model voice module is the core hub connecting user voice input and intelligent model decision-making. The module is equipped with a high-sensitivity MEMS microphone and a cavity speaker, which can clearly pick up voice and provides functions such as far-field pickup, echo cancellation, voice broadcast, and environmental noise reduction.

ROS Robot Control Board

The M3 Pro robot control board is equipped with a high-performance STM32 main control chip and a 9-axis IMU sensor. It uses the microROS solution to drive two T-mini Plus LiDARs, reducing port usage on the ROS main controller. It supports 4-channel encoder motors and a 6DOF robotic arm drive, meets the power supply requirements of multiple main controllers, and realizes an efficient and stable intelligent control system.

12.6V 6000mAh Li-ion Battery Pack

Equipped with a 12.6V 6000mAh lithium-ion battery pack, the M3 features over-charge, over-discharge, short-circuit, over-current, low-voltage, and over-voltage protection. This ensures safe and reliable cells, large battery capacity, and long-lasting battery life.

Interesting Functions: Multimodal visual model creative applications

Scene understanding

Through the large vision model, the M3 can understand the scene information within its field of view, recognize object names and spatial relationships, and respond in real time using the large voice model.

Visual following

With the powerful analysis ability of the large vision model, the M3 can automatically identify and lock onto target objects in complex environments, achieving three-dimensional following with spatial distance perception.

Depth distance Q&A

By combining the large vision model and the depth camera, the M3 gains environmental understanding and distance perception capabilities, combining visual recognition with distance data for intelligent Q&A.

Embodied intelligent SLAM navigation

AI large model + SLAM mapping environment perception

Through large-scale visual model analysis, the M3 can deeply understand the objects and spatial layout within different areas of the map.

AI large model + SLAM intelligent multi-point navigation

The M3 can transmit environmental data to the large vision model in real time for in-depth analysis, plan dynamic paths based on different user voice commands, and autonomously navigate to single or multiple designated areas, achieving intelligent navigation.
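The multi-point navigation above needs some ordering of the goal areas before goals are sent to the navigator. A minimal stand-in for that step is a greedy nearest-neighbor route over named map areas; the area names and coordinates below are invented for illustration, and the real M3 software may order goals differently.

```python
import math

def order_waypoints(start, goals):
    """Greedy nearest-neighbor ordering of navigation goals: repeatedly visit
    the closest remaining goal. A simple sketch of multi-point route planning."""
    remaining = list(goals)
    route, here = [], start
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(here, g))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

# Hypothetical areas a vision model might have labeled on the SLAM map.
areas = {"kitchen": (4.0, 1.0), "door": (1.0, 0.5), "sofa": (2.0, 3.0)}
route = order_waypoints((0.0, 0.0), list(areas.values()))
```

Each point in `route` would then be dispatched to the navigation stack as a goal pose, one after another, as the robot reaches each area.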

AI large model + SLAM track map navigation

Leveraging the powerful analytical capabilities of large-scale vision models, the M3 can identify road signs in autonomous-driving sandbox maps and execute corresponding actions based on the road sign instructions. (Track map sold separately.)

More Common Functions

LiDAR Functions

Equipped with a high-precision TOF LiDAR, fusing encoder and IMU gyroscope data, it enables high-precision mapping and navigation. Supports multiple mapping algorithms and Archive Mapping, features single-point/multi-point navigation, and can be operated via the APP. Specially optimized relocation navigation technology significantly reduces positioning drift during operation, improving navigation stability and reliability.

Depth Camera Functions

The 3D structured light depth camera generates depth images and point cloud data, accurately acquiring depth information of target objects and enabling precise distance and volume calculations. Combined with LiDAR data, it can construct high-precision 3D color maps, supporting more accurate environmental perception and intelligent navigation.
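How a depth image becomes a point cloud can be shown in a few lines: each pixel's depth is back-projected through the pinhole camera model. The intrinsics below are placeholders; real values come from the camera's factory calibration, not from this sketch.

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project one depth pixel (u, v) into a 3D camera-frame point using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    z = depth_m
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Made-up intrinsics for illustration (focal lengths fx, fy and principal
# point cx, cy); a real camera publishes its own calibration.
fx = fy = 460.0
cx, cy = 320.0, 240.0
point = deproject(400, 240, 1.5, fx, fy, cx, cy)  # pixel 80 px right of center
```

Applying this to every valid pixel of a depth frame yields the point cloud; pairing each point with the color pixel at the same coordinates gives the colored cloud used for 3D maps.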

YOLOv11 Model Detection

The built-in YOLOv11 deep learning model supports image segmentation, pose estimation, image classification, and oriented object detection, giving the robot more powerful environmental perception and decision-making capabilities. Complete model training and deployment tutorials are provided to help developers quickly customize their own vision applications.
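The network itself is beyond a short sketch, but one post-processing step shared by YOLO-style detectors, IoU-based non-maximum suppression, fits in a few lines. The boxes and scores below are invented test data, not output from the M3's model.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(detections, thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections.
    detections: list of (box, score) pairs."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k) < thresh for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((10, 10, 50, 50), 0.9), ((12, 12, 52, 52), 0.8),
        ((100, 100, 140, 140), 0.7)]
picks = nms(dets)  # the two near-duplicate boxes collapse into one
```

Without this step the detector would report the same object several times from overlapping anchor predictions; with it, each object yields one box.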

AI Visual Recognition Functions

Integrates multiple mainstream image processing algorithms, supports frameworks such as OpenCV and MediaPipe, and can efficiently recognize various target objects, helping developers quickly build high-performance computer vision applications.

AI Visual Interaction Functions

Combines visual algorithms with vehicle motion control to achieve efficient target
tracking. The PTZ locks onto the target, and the vehicle moves synchronously to
follow; the depth camera version enables distance-aware stereo tracking.

Multi-Robot Formation and Interconnection Control

Supports multi-robot navigation and dynamic obstacle avoidance on the same map.
Multiple robots can be simultaneously controlled by a single host, enabling
multi-robot synchronous control and formation performance.
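A small piece of the formation-control idea can be sketched directly: given a leader pose, each follower's goal is its body-frame offset rotated by the leader's heading. The wedge offsets below are an assumption for illustration, not the product's actual formation parameters.

```python
import math

def formation_goals(leader_xy, leader_yaw, offsets):
    """Rotate each follower's body-frame offset (dx, dy) by the leader's yaw
    and translate by the leader's position, giving world-frame goal points."""
    c, s = math.cos(leader_yaw), math.sin(leader_yaw)
    return [(leader_xy[0] + c * dx - s * dy,
             leader_xy[1] + s * dx + c * dy) for dx, dy in offsets]

# Hypothetical wedge: two followers behind-left and behind-right of the leader,
# which is at (2, 1) facing +y (yaw = pi/2).
goals = formation_goals((2.0, 1.0), math.pi / 2, [(-0.5, 0.5), (-0.5, -0.5)])
```

Recomputing these goals each control cycle and feeding them to each robot's navigator keeps the formation shape as the leader moves and turns.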

Summary

ROSMASTER M3 is a high-end ROS2 AI large model robot platform designed specifically for high-performance controllers such as the Jetson Orin series, Raspberry Pi 5, and RDK X5, demonstrating a higher level of hardware configuration and software ecosystem.
Its core advantage lies in its powerful perception and motion capabilities: the optional dual T-mini Plus LiDARs, combined with the DABAI DCW2 depth camera, achieve robust 360° SLAM mapping and multi-sensor fusion, while the combination of powerful encoder motors and upgraded nylon Mecanum wheels significantly improves the passability, shock absorption, smoothness, and durability of omnidirectional movement on complex terrain. The body is built on a metal chassis, pre-installed and tested at the factory, saving users assembly time.

In terms of intelligence, the M3 deeply integrates AI large language, vision, and speech models. User tests have shown that complex natural language instructions, such as "look at the door" or "bypass the water cup on the table", are understood and executed with lower delays than expected, making the platform very suitable for rapid upper-level development in multimodal interaction research. It comes pre-installed with Ubuntu, ROS 2, and AI demo programs at the factory, providing developers with a complete out-of-the-box environment.
