In the autonomous Leo Rover robotics project in Manchester, I was primarily responsible for navigation, mapping, final code integration, and logic design,
while also assisting with the robotic arm grasping tasks. I used SLAM for real-time map construction, fusing LiDAR, IMU, and odometry data with an EKF to improve localization accuracy.
I implemented A* for global path planning and TEB for local obstacle avoidance, optimizing the robot's autonomous movement capabilities. This experience strengthened my practical skills in Gazebo,
ROS2, SLAM, and path planning, improved my simulation abilities, and deepened my understanding of autonomous navigation systems.
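As an illustration of the EKF-based fusion used for localization, below is a minimal Python sketch that combines a wheel-odometry prediction step with an IMU yaw correction for a 2D pose; the motion model, noise covariances, and sample values are assumptions for demonstration, not the project's actual configuration.

import numpy as np

# Minimal EKF sketch: fuse wheel odometry (prediction) with an IMU yaw
# measurement (update) for a 2D pose. All noise values are illustrative
# assumptions, not the rover's tuned parameters.
class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)                    # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1                # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])    # process noise (assumed)
        self.R_yaw = np.array([[0.005]])        # IMU yaw noise (assumed)

    def predict(self, v, w, dt):
        """Propagate the pose with a unicycle odometry model."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + w * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -v * np.sin(yaw) * dt],
                      [0, 1,  v * np.cos(yaw) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_yaw(self, yaw_meas):
        """Correct the heading with an absolute yaw measurement from the IMU."""
        H = np.array([[0.0, 0.0, 1.0]])
        innovation = np.array([yaw_meas - self.x[2]])
        S = H @ self.P @ H.T + self.R_yaw
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ innovation).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PoseEKF()
ekf.predict(v=0.4, w=0.1, dt=0.05)   # odometry step
ekf.update_yaw(0.06)                 # IMU correction
print(ekf.x)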
In developing the Adaptive A* path planning algorithm, I optimized search efficiency across different grid sizes by implementing three adaptive data-structure selection mechanisms: dictionaries,
2D arrays, and 1D memory-optimized arrays. The algorithm supports efficient four-directional movement, using a Euclidean heuristic and a six-layer error-checking mechanism to enhance path continuity and robustness.
Through this project, I deepened my understanding of A* optimization strategies, strengthened my application of data structures in performance and memory management, and improved my ability to design intelligent navigation in complex environments.
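As a minimal sketch of the dictionary-based variant, the following Python snippet shows four-directional A* with a Euclidean heuristic on a binary occupancy grid; the grid, start, and goal values are illustrative placeholders rather than project data.

import heapq
import math

# Sketch of the dictionary-based A* variant: four-directional moves with a
# Euclidean heuristic. Grid, start, and goal below are illustrative only.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    open_heap = [(heuristic(start, goal), start)]
    g_cost = {start: 0.0}          # cheapest known cost to each cell
    parent = {start: None}         # back-pointers for path reconstruction

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = []
            while current is not None:            # walk the back-pointers
                path.append(current)
                current = parent[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected
            nr, nc = current[0] + dr, current[1] + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                continue                           # out of bounds or occupied
            tentative = g_cost[current] + 1.0
            if tentative < g_cost.get((nr, nc), float("inf")):
                g_cost[(nr, nc)] = tentative
                parent[(nr, nc)] = current
                f = tentative + heuristic((nr, nc), goal)
                heapq.heappush(open_heap, (f, (nr, nc)))
    return None                                    # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))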
Using common household waste items, we designed a web-controlled robot that can be driven remotely, walk autonomously,
and avoid obstacles it encounters. Planned future improvements include an automatic tracing function,
voice control, and interactive feedback (via voice, vibration, etc.).
In the SLAMTEC RPLIDAR A2M12 performance evaluation project, I conducted experimental assessments of LiDAR accuracy, resolution, and detection range,
analyzing its impact on the Leo Rover's SLAM and obstacle avoidance capabilities. I identified key limitations, including distance miscalculations,
environmental noise interference, and vertical blind spots, and proposed multi-sensor fusion and algorithmic optimizations as improvement strategies.
This experience enhanced my skills in robot perception, SLAM, sensor data analysis, and experimental evaluation, deepening my understanding of LiDAR-based mapping
and autonomous navigation systems.
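A simplified sketch of the kind of range-accuracy analysis performed in this evaluation is shown below; the target distance and scan readings are hypothetical placeholders, not measured data.

import numpy as np

# Sketch of a range-accuracy check: compare measured LiDAR ranges against a
# known target distance and summarize the error. Values are placeholders.
true_range_m = 2.000
measured_m = np.array([1.987, 2.012, 1.994, 2.021, 1.979, 2.008])  # hypothetical scans

error = measured_m - true_range_m
print(f"mean error:  {error.mean() * 1000:+.1f} mm")
print(f"std dev:     {error.std(ddof=1) * 1000:.1f} mm")
print(f"max |error|: {np.abs(error).max() * 1000:.1f} mm")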
As a core developer of this project, I was responsible for designing and testing the multi-axis PID-based drone position control algorithm.
By incorporating derivative filtering, integral clamping, and fifth-order trajectory planning,
I reduced the steady-state error to 0.12 m and cut overshoot by 42%. I am currently proposing future optimizations and continuous improvements,
including: (1) Developing a Model Predictive Control (MPC) module to enhance adaptability in dynamic environments; (2) Implementing reinforcement learning-based parameter auto-tuning
to reduce manual tuning efforts; (3) Building wind disturbance and sensor noise models in a simulation environment to validate algorithm robustness. Through this project,
I further enhanced my expertise in drone dynamics modeling, multivariable control optimization, and Python-based real-time system development.
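The sketch below illustrates a single-axis PID loop with a low-pass-filtered derivative term and integral clamping (anti-windup) of the kind described above; the gains, filter time constant, and clamp bound are illustrative assumptions, not the tuned flight parameters.

import numpy as np

# Minimal single-axis PID sketch with derivative filtering and integral
# clamping. Gains and limits are illustrative assumptions only.
class FilteredPID:
    def __init__(self, kp, ki, kd, dt, tau=0.05, i_limit=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.alpha = dt / (tau + dt)      # first-order low-pass coefficient
        self.i_limit = i_limit            # integral clamp bound
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        # Integral term with clamping to prevent windup
        self.integral = np.clip(self.integral + error * self.dt,
                                -self.i_limit, self.i_limit)
        # Filtered derivative to suppress measurement noise
        d_raw = (error - self.prev_error) / self.dt
        self.d_filtered += self.alpha * (d_raw - self.d_filtered)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * self.d_filtered)

pid = FilteredPID(kp=1.8, ki=0.4, kd=0.6, dt=0.01)
u = pid.step(setpoint=1.0, measurement=0.85)   # one control update
print(u)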
An educational video on the contribution of robotic systems to addressing sustainability challenges and the ethical perspectives surrounding them.
TECHNICAL SKILLS
Robotics Systems
Manipulation Systems
MoveIt! · URDF · ROS Control
Designing and implementing robotic manipulation systems using ROS tools for task automation.
Autonomous Navigation
ROS2 Nav2 · A* · D* Lite
Developing algorithms for autonomous navigation and path planning in dynamic environments.
Sensor Fusion
EKF · IMU · LiDAR
Combining multiple sensor data streams to improve robot perception and localization accuracy.
Development Stack
Core Tools
VSCode · Jupyter · Git · Python
Using modern development tools to write, test, and version-control robotic code and algorithms.
Environment Management
Conda
Managing project dependencies and ensuring reproducibility across different platforms and teams.
Scientific Computing
NumPy · SciPy · Matplotlib
Applying scientific computing libraries to perform data analysis, modeling, and visualization in robotics.
Visualization and Simulation Tools
RViz · Gazebo · TF · Behaviour Trees
Utilizing powerful simulation and visualization tools for robot modeling, testing, and debugging.
Intelligent Perception
Computer Vision
OpenCV · YOLOv8
Leveraging computer vision techniques for real-time object detection and image processing.
3D Reconstruction
Multi-view · SLAM
Reconstructing 3D environments using stereo vision, Structure-from-Motion (SfM), and SLAM.
Deep Learning
TensorFlow · PyTorch
Building and training deep neural networks for tasks such as classification, regression, and segmentation.
Extended Capabilities
Hardware Integration
Raspberry Pi · Mechanical Design · LeoRover Robot Assembly
Designing and integrating hardware systems for robotics, including the use of Raspberry Pi and mechanical components for building robots like the LeoRover.
Prototyping Tools
3D Printing · Laser Cutting
Using prototyping tools to create custom parts and models, with experience in 3D printing, laser cutting, and CNC milling for rapid prototyping.
Technical Documentation
LaTeX · GitHub · Jupyter
Writing comprehensive technical documentation using LaTeX for high-quality reports and papers, and managing projects on GitHub for version control. Also proficient in creating interactive Jupyter notebooks for analysis and presentation.
Select Awards & Honors
National Computer Rank Examination (C Language Programming)
Issued by: Ministry of Education of China
Competition Achievement
4th Place - Faculty Innovation Planning Competition
Oct-Nov 2016 | Project Management & Coordination
Freshman Culture & Sports Scholarship
Sophomore Self-Improvement Scholarship
Junior Academic Excellence Award
Get In Touch
Ready to start a conversation? I would love to hear from you. Feel free to reach out for collaboration, inquiries, or just to say hello!