[Unboxing and Review] ROSMASTER M1 AI Large Model ROS2 Robot with Mecanum Wheels

The newly developed ROSMASTER M1 by Yahboom is an omnidirectional mobile embodied intelligent robot designed for robotics education, ROS research, and AI multimodal interaction experiments. It adopts a Mecanum wheel chassis and supports precise omnidirectional movement, enabling complex trajectory control such as lateral movement, diagonal movement, and in-situ rotation, providing an excellent experimental platform for path planning and motion control algorithm research.

ROSMASTER M1 can be equipped with various peripherals, including a 3D depth camera / 2MP HD camera PTZ (optional), LiDAR, an AI voice module, and a ROS robot expansion board, building human-level 3D visual perception and environmental understanding capabilities. It supports Raspberry Pi 5, RDK X5, Jetson Nano 4GB, and Jetson Orin Nano 8G, and is fully compatible with ROS2 HUMBLE, integrating deeply with mainstream AI frameworks. Employing an innovative multimodal dual-model collaborative reasoning architecture, it efficiently fuses visual, voice, and text information, possessing human-like capabilities such as continuous dialogue, instant interruption, dynamic scene reasoning, and intent inference.

Whether conducting SLAM mapping and navigation, AI visual recognition, path planning research, or carrying out multimodal human-computer interaction experiments, this robot car can meet all your needs.

Features

Mecanum omnidirectional drive chassis, High-torque 520 encoder motor

ROSMASTER M1 adopts a high-performance four-wheel Mecanum structure, enabling omnidirectional movements such as lateral movement, diagonal movement, in-situ rotation, and precise edge following. It meets the teaching and research needs of robot path planning, omnidirectional obstacle avoidance, and motion control algorithms, providing a powerful motion foundation for high-difficulty mobile robot experiments.
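The omnidirectional movements above follow from standard Mecanum inverse kinematics. As an illustrative sketch (not Yahboom's actual driver code; the chassis half-length and half-width values are assumed), the four wheel speeds can be computed from a desired body velocity:

```python
# Illustrative Mecanum inverse kinematics (not the vendor's driver code).
# Wheel order: front-left, front-right, rear-left, rear-right.
# lx, ly are assumed half-distances from the chassis center to the wheel axes.

def mecanum_wheel_speeds(vx, vy, wz, lx=0.10, ly=0.10):
    """Map a body velocity (vx forward m/s, vy left m/s, wz yaw rad/s)
    to the four wheel linear speeds in m/s."""
    k = lx + ly
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr

# Pure lateral motion: diagonal wheel pairs spin in opposite senses.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))  # (-0.3, 0.3, 0.3, -0.3)
```

Setting only `vy` yields sideways motion, only `wz` yields in-situ rotation, and mixing the three components produces diagonal or curved trajectories, which is what makes the chassis attractive for path-planning experiments.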

Multi-master platform compatibility, Meeting needs from beginner to research

Supports multiple computing platforms including Raspberry Pi 5, RDK X5, Jetson Nano 4GB, and Jetson Orin Nano 8G. Compatible with ROS2 HUMBLE, it adapts to multi-level needs such as school teaching, laboratory work, and AI research, providing users with extremely high scalability and sustainability.

Multi-sensor fusion, Building human-like 3D perception

Equipped with a 3D depth camera, 2MP HD camera PTZ, LiDAR, AI voice module, and other peripherals, the M1 forms a multimodal environmental perception system, supporting advanced applications such as visual recognition, SLAM mapping, and environmental understanding.

Multimodal human-like Interaction with superior AI capabilities

Utilizing an innovative multimodal dual-model collaborative reasoning architecture, it deeply integrates visual, voice, and text information. It possesses cutting-edge human-like interaction capabilities, including continuous dialogue, real-time interruption, dynamic scene reasoning, and intent inference.

Highly scalable for research purposes, Meeting diverse experimental needs

Suitable for various advanced scenarios such as SLAM navigation, AI visual recognition, path planning, multimodal interaction, and embodied intelligence research. Supports a wealth of ROS teaching examples and open-source resources, making it easy to use in classroom teaching and research projects.

Unboxing & Shipping List

As shown below, these are all the parts for the ROSMASTER M1.

There are two options available: the standard version and the superior version.

Three main control boards are available: Raspberry Pi 5, JETSON NANO B01, and NVIDIA JETSON ORIN NANO 8GB SUPER.

The course materials, product features, and control software for each main control board are essentially the same; the choice of board only affects the performance of the M1.

Car body (Standard) *1

Basic car body (Pre-installed with OLED and robot control board) *1

USB wireless gamepad + AAA batteries *1

Velcro strap *1

XH2.54 cable (15cm) *1

Black cable ties (100mm) *3

Crystal screwdriver  *1

USB HUB expansion board  *1

12.6V charger (2A, DC4017)  *1

Battery pack (12.6V, 6000mAh)  *1

Upper elbow USB to USB cable (30cm)  *1

Orange screwdriver  *1

Accessory package

AI large model voice module (Standard) *1

AI large model voice module  *1

Speaker  *1

Side elbow Type-C cable (25cm)  *1

Accessory package  *1

Note: If you choose the standard version, you will also get the following items.

2DOF PTZ  *1, PTZ package

T-MINI PLUS LiDAR *1, Accessory package

Note: If you choose the superior version, you will also get the following items.

Nuwa-HP60C depth camera *1, Camera bracket (Pre-installed) *1, Side elbow Type-C cable (30cm) *1, Accessory package

T-MINI PLUS LiDAR *1, Accessory package

Product Parameter Details

About Structure Design and Hardware

2MP HD camera PTZ (For Standard Version)

The 2MP HD camera PTZ is equipped with two HQ digital servos, supporting 100° vertical rotation and 180° horizontal rotation. Fitted with a 2MP HD camera, it enables AI vision functions such as facial recognition, color recognition, and human detection. It also supports PTZ tracking for an enhanced dynamic interactive experience.

3D Structured Light Depth Camera [Superior Kit]

The 3D depth camera utilizes structured-light 3D imaging technology, offering a working distance of 0.2–4m and a horizontal field of view of up to 73.8°. It supports pitch adjustment and is compatible with the HD camera's AI functions, enabling advanced applications such as 3D depth data processing, 3D mapping, and navigation.

Integrates multiple mainstream image processing algorithms, supports frameworks such as OpenCV and MediaPipe, and can efficiently identify a variety of target objects, helping developers quickly build high-performance computer vision applications. 

Combining visual algorithms with robot motion control enables efficient target tracking: once the camera PTZ locks onto a target, the robot moves synchronously to follow it. The superior kit (which includes the depth camera) can achieve distance-aware 3D following.
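The distance-aware following behavior can be approximated with a simple proportional controller on the measured depth. The sketch below is illustrative only (the 1.0 m target distance, gain, and speed limit are assumptions, not values from Yahboom's software):

```python
# Minimal proportional controller for distance-keeping follow behavior.
# Assumed parameters (not from the vendor): target distance 1.0 m,
# gain 0.8, forward speed clamped to +/-0.5 m/s.

def follow_speed(measured_depth_m, target_m=1.0, kp=0.8, v_max=0.5):
    """Return a forward velocity command that closes the gap between
    the measured target depth and the desired following distance."""
    error = measured_depth_m - target_m   # positive -> target too far ahead
    v = kp * error
    return max(-v_max, min(v_max, v))     # clamp to safe speed range

print(follow_speed(2.0))   # target 1 m too far -> drive forward (clamped to 0.5)
print(follow_speed(1.0))   # at the target distance -> stop (0.0)
print(follow_speed(0.8))   # slightly too close -> back up gently
```

In practice the depth value would come from the camera's per-pixel depth map at the tracked target's location, and the resulting velocity would be sent to the chassis as a motion command.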

TOF LiDAR

Standard/Superior Kit: Utilizes the T-MINI PLUS LiDAR, which is compact and highly precise, making it suitable for entry-level learning.

Equipped with a high-precision TOF LiDAR and fusing encoder and IMU gyroscope data, the M1 enables high-precision mapping and navigation. It supports multiple mapping algorithms and offers single-point and multi-point navigation, which can be controlled via the app. Specially optimized relocalization and navigation technology significantly reduces positioning drift during operation, improving navigation stability and reliability.
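Encoder/IMU fusion of the kind mentioned above is often done with a complementary filter: the gyro gives a smooth but drifting heading, while the encoder-odometry heading is noisier but bounds the drift. A minimal illustrative sketch (the blend factor, time step, and sensor interface are assumptions, not the actual firmware):

```python
# Illustrative complementary filter fusing a gyro yaw rate with an
# encoder-odometry heading estimate. alpha and dt are assumed values.

def fuse_heading(prev_heading, gyro_rate, odom_heading, dt=0.02, alpha=0.98):
    """Blend the integrated gyro heading (smooth, but drifts with bias)
    with the odometry heading (noisy, but does not accumulate gyro bias)."""
    gyro_heading = prev_heading + gyro_rate * dt   # integrate yaw rate
    return alpha * gyro_heading + (1.0 - alpha) * odom_heading

# Stationary robot with a small constant gyro bias of 0.01 rad/s:
# pure integration would drift by 0.01 * 0.02 * 500 = 0.1 rad,
# but the odometry term keeps the fused estimate bounded.
h = 0.0
for _ in range(500):
    h = fuse_heading(h, gyro_rate=0.01, odom_heading=0.0)
print(round(h, 4))  # ~0.0098, an order of magnitude less drift
```

Real systems typically use a Kalman or extended Kalman filter for this fusion, but the complementary filter shows the core idea in a few lines.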

AI large model voice module

AI voice large model module is the core hub connecting user voice input and intelligent model decision-making. The module is equipped with a high-sensitivity MEMS microphone and a cavity speaker, which can clearly pick up voice and has functions such as far-field pickup, echo cancellation, voice broadcast, and environmental noise reduction.

ROS Robot Control Board

Designed specifically for ROS robot car development, it can control and drive various robot chassis, including Mecanum wheel, Ackermann, four-wheel differential, two-wheel differential, omnidirectional, and tracked. It supports the Raspberry Pi 5 power supply protocol and meets the power requirements of various ROS main control boards.

Interesting Functions Introduction

Scene understanding

Through the large visual model, the M1 can understand scene information within its field of view, recognize object names and spatial relationships, and respond in real time using the voice large model.

Visual Tracking/Following

Leveraging the powerful analytical capabilities of the large visual model, the M1 can automatically identify and lock onto target objects in complex environments, tracking them with the camera PTZ system or intelligently following them with the robot chassis.

Autonomous cruising

Through deep analysis by the large visual model, the M1 can accurately identify and follow lines of different colors in real time.


Deep distance Q&A [Only for Superior Kit]

By combining the large visual model with the depth camera, the M1 gains environmental understanding and distance perception, fusing visual recognition with depth data for intelligent Q&A.

SLAM mapping environment perception

Through large visual model analysis, the M1 can deeply understand the objects and spatial layout within different areas of the map. 

SLAM intelligent multi-point navigation

The M1 can transmit environmental data to the visual large model in real time for in-depth analysis, plan dynamic paths based on user voice commands, and autonomously navigate to single or multiple designated areas, achieving intelligent navigation.

SLAM map object search

Through voice large model inference and visual large model analysis, the M1 can accurately recognize user voice commands, deeply understand their meaning, and autonomously plan and complete the corresponding search tasks on the map.

Embodied Intelligence for complex and long-term task processing

By integrating visual understanding, voice intent recognition, and SLAM dynamic path planning, M1 can decompose complex user commands, perceive environmental changes in real time, and complete a series of coherent operations including recognition, tracking, navigation, and Q&A.

Large model intention understanding planning | Context-aware response

By expanding the RAG knowledge base for user intent recognition and environmental context analysis, the robot can understand the user's latent needs, independently plan tasks, and respond dynamically without being given detailed instructions.

Summary

Yahboom ROSMASTER M1 not only continues the mature application of traditional robots within the ROS ecosystem, but also achieves significant functional expansion.

It not only supports large model interactive functions, but also integrates autonomous driving functions.

It innovatively integrates AI large models with a traditional autonomous driving stack and ROS2 road network planning, significantly enhancing the robot's intelligence.

With its omnidirectional Mecanum wheel chassis, the M1 possesses exceptional flexibility and responsiveness, making it suitable for a variety of application scenarios, such as university lab teaching, autonomous driving algorithm verification, and large model application development.
