Portfolio

Ideas that move. Literally.

Project Directory

Autonomous Car

Jetson-based full-stack system with real-time navigation & vision.

Robotic Arm (Dobot CR3)

ROS-powered smart manipulator with dynamic object handling.

Infotainment System

Embedded GUI + vehicle simulation with custom UI and sensors.

Stroke Counter

Real-time interactive tracker designed for in-store engagement.

Smart Anti-Theft System

Facial recognition-based vehicle immobilizer with cloud integration.

Delivery Robot 

Low-voltage autonomous cart built for obstacle detection trials.

Autonomous Car

NVIDIA Jetson Nano

ROS

YOLOv5

OpenCV

Python

Gazebo

RViz

Computer Vision

Odometry

Real-Time Inference

A full-stack autonomous system developed alongside NVIDIA and Manchester Robotics.

A modular robotics system designed for real-time autonomous navigation in a controlled indoor track environment. The platform executes lane keeping, traffic sign and traffic light detection, path switching, and behavior transitions. The system runs on a compact embedded stack based on the NVIDIA Jetson Nano and a custom Hackerboard.

 

ROS was used as the middleware backbone, handling sensor streams, message passing, task coordination, and modular testing. Each subsystem (vision, inference, control, and odometry) was implemented from scratch and optimized to run concurrently on resource-constrained hardware.

Perception Systems

Vision System: Lane detection was achieved through a minimalist, compute-efficient vision pipeline. A vertical Sobel operator combined with a tightly defined region of interest allowed the system to isolate and track lane lines without relying on fragile edge-detection methods like Canny or Hough transforms. Binary thresholding and morphological operations further stabilized output. This approach produced accurate lane centroids in real time, even under variable lighting and slight occlusion, without overwhelming the Jetson Nano’s limited resources.

Object Detection: Traffic sign recognition was handled using a custom-trained YOLOv5 model, selected after early experiments with TensorFlow proved unstable at runtime. Over 7,000 annotated images were collected and labeled to build a task-specific dataset. The model’s accuracy was enhanced through careful preprocessing, including color correction, confidence thresholds, and bounding box size filters. The result was a detection system capable of recognizing turn signs, stops, and intersections with over 95% precision and minimal latency, allowing the robot to react in real time during autonomous runs.
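For a sense of how compact this pipeline is, here is a minimal sketch of the Sobel-plus-ROI approach; the thresholds, kernel size, and ROI bounds are placeholder values rather than the tuned parameters used on the robot.

```python
import cv2
import numpy as np

def detect_lane_offset(frame_bgr):
    """Approximate the lane centroid offset from a single camera frame.

    Illustrative sketch only: ROI bounds, threshold, and kernel size
    are placeholders, not the calibrated values from the project.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape

    # Keep only the lower part of the image, where lane lines appear.
    roi = gray[int(0.6 * h):, :]

    # A vertical Sobel operator emphasizes near-vertical lane edges.
    sobel_x = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    sobel_x = np.uint8(np.clip(np.absolute(sobel_x), 0, 255))

    # Binary threshold plus a morphological close stabilizes the mask.
    _, mask = cv2.threshold(sobel_x, 50, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    # The centroid of the remaining edge pixels drives the steering error.
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # no lane pixels found in this frame
    cx = int(m["m10"] / m["m00"])
    return cx - w // 2  # signed offset from the image center
```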

Autonomy Logic & Embedded Integration

Control Logic: A state-driven controller handled all behavior transitions, ensuring that task execution was clean, predictable, and free from conflict. Task flags managed the system’s internal logic, while odometry-based distance tracking prevented re-triggering or overlap during maneuvers. Behaviors such as stopping, turning, or lane recovery were executed only once conditions were fully met and previous states had completed. This approach removed the need for manual resets or hard-coded delays, improving the system’s reliability and responsiveness.

Platform Architecture: All subsystems ran concurrently on the Jetson Nano, integrated with a custom Manchester Robotics Hackerboard. The robot’s wide-angle RPi camera streamed directly to the vision module, while encoder data fed into ROS topics for movement tracking. ROS nodes were containerized for isolation and ease of debugging. Heat, power draw, and CPU usage were closely monitored to keep the system within embedded constraints while still running inference, image processing, and control loops in real time.
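A simplified sketch of that flag-based state logic; the states, the trigger, and the maneuver distance are illustrative only, not the full behavior set used on the robot.

```python
from enum import Enum, auto

class State(Enum):
    LANE_FOLLOW = auto()
    TURNING = auto()

class BehaviorController:
    """Toy version of the state-driven controller with task flags."""

    TURN_DISTANCE_M = 0.6  # placeholder maneuver length

    def __init__(self):
        self.state = State.LANE_FOLLOW
        self.maneuver_start = 0.0
        self.sign_handled = False  # task flag: prevents re-triggering

    def update(self, detection, odom_distance):
        """Advance the state machine from the latest detection and odometry."""
        if self.state == State.LANE_FOLLOW:
            if detection == "turn_right" and not self.sign_handled:
                self.sign_handled = True
                self.maneuver_start = odom_distance
                self.state = State.TURNING
        elif self.state == State.TURNING:
            # The turn completes only after covering the required distance,
            # so the same sign cannot re-trigger the maneuver mid-execution.
            if odom_distance - self.maneuver_start >= self.TURN_DISTANCE_M:
                self.sign_handled = False
                self.state = State.LANE_FOLLOW
        return self.state
```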

Phase II: Advanced Perception & Sensor Fusion Enhancements

The project evolved beyond its initial deployment to incorporate advanced perception and localization capabilities. A 360° LIDAR sensor was added, enabling precise mapping of the surroundings and significantly improving obstacle avoidance. To enhance spatial awareness, ArUco markers were distributed across the arena, providing reliable visual anchors for external pose estimation.
 
These sensor streams (LIDAR, wheel encoders, and visual markers) were fused through an Extended Kalman Filter (EKF), reducing drift and improving localization accuracy in real time. All modules were seamlessly integrated into the existing ROS Melodic framework and rigorously tested through real-world trials and Gazebo simulations.
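The ArUco markers supply the absolute pose corrections that keep the EKF from drifting. Below is a minimal sketch of that measurement step, assuming the legacy cv2.aruco API shipped with ROS Melodic-era OpenCV; the dictionary, marker size, and calibration inputs are placeholders.

```python
import cv2

# Placeholder dictionary and marker size; the real arena markers and
# camera calibration would be substituted here.
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
MARKER_SIZE_M = 0.10

def marker_pose_measurement(gray, camera_matrix, dist_coeffs):
    """Return (marker_id, rvec, tvec) for the first visible marker, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return None

    poses = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    rvecs, tvecs = poses[0], poses[1]

    # Each marker has a known pose in the arena, so the camera-to-marker
    # transform becomes an absolute pose measurement for the EKF update step.
    return int(ids[0][0]), rvecs[0], tvecs[0]
```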

Color-Sorting with the Dobot CR3

Dobot CR3

ROS

URDF

MoveIt!

OpenCV

Python

TCP/IP

Gazebo

RViz

Computer Vision

Kinematics

Industrial Automation

Precision pick-and-place automation with the Dobot CR3, integrating Kinect 3D vision and ROS Noetic for real-time object classification and control.

A full-stack object sorting solution leveraging a Dobot CR3 collaborative robotic arm, integrated with a Kinect v1 RGB-D sensor, and orchestrated via a custom ROS Noetic architecture. The system was capable of detecting objects based on color and geometry, computing valid grasp positions, and autonomously executing pick-and-place actions in a shared tabletop workspace.
 

Designed for continuous operation, the robot could respond in real time to changes in the scene: blocks could be added, removed, or repositioned mid-execution without requiring the system to pause or reset. This level of adaptability was key to demonstrating robust object tracking and task-level autonomy under dynamic conditions.

Perception Pipeline

Real-time perception relied on a combination of color filtering and depth data analysis. Using OpenCV with ROS image transport, each frame was converted into HSV space to isolate specific color ranges. Depth data from the Kinect was used to filter out background noise and validate the position and height of detected objects. A custom point cloud filtering routine allowed the system to localize objects in 3D space with millimeter-level accuracy. Because perception ran continuously in parallel with the control system, the robot could respond immediately to new or moved blocks, recalculating trajectories on the fly without human intervention.
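A minimal sketch of the color-plus-depth filtering step for a single color; the HSV bounds and depth window are placeholders, not the calibrated ranges used in the final system.

```python
import cv2
import numpy as np

RED_LOWER = np.array([0, 120, 70])     # placeholder HSV bounds
RED_UPPER = np.array([10, 255, 255])
DEPTH_MIN_M, DEPTH_MAX_M = 0.5, 1.2    # assumed tabletop range from the Kinect

def locate_red_block(bgr_frame, depth_m):
    """Return (u, v, depth) for the largest red blob on the table, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, RED_LOWER, RED_UPPER)

    # Reject pixels whose depth falls outside the table region (background noise).
    depth_mask = ((depth_m > DEPTH_MIN_M) & (depth_m < DEPTH_MAX_M)).astype(np.uint8) * 255
    mask = cv2.bitwise_and(color_mask, depth_mask)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    u, v = x + w // 2, y + h // 2
    return u, v, float(depth_m[v, u])
```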

Simulation & Modeling

The complete physical environment, including the robot, camera, workspace table, and objects, was modeled in URDF, using precise link configurations and TF relationships. Simulation was run in Gazebo for dynamics testing and RViz for real-time trajectory visualization. The robot’s joint limits, gripper range, and mounted sensor frame were calibrated to match real-world specifications, ensuring that the simulated motions and grasps would perform identically in physical execution.

Manipulation & Control

Pick-and-place operations were coordinated through a modular ROS node architecture. The motion planner used MoveIt to generate collision-free trajectories based on the detected object’s location and height. Execution was handled by ROS action servers interfacing with a TCP/IP controller for the DH Robotics AG95 gripper. The grasping pipeline accounted for approach vectors, end-effector constraints, and object-specific handling sequences. Post-placement retraction and reset positions ensured safe, repeatable behavior.
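A simplified sketch of the approach-then-descend motion through the Python moveit_commander interface; the planning group name, grasp orientation, and approach height are assumptions rather than the CR3’s actual MoveIt configuration.

```python
#!/usr/bin/env python3
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

def pick_at(x, y, z, approach_height=0.10):
    """Move above the detected object, then descend to the grasp height."""
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("pick_demo", anonymous=True)
    arm = moveit_commander.MoveGroupCommander("arm")  # assumed group name

    target = Pose()
    target.orientation.w = 1.0          # assumed top-down grasp orientation
    target.position.x, target.position.y = x, y

    # Approach from above to avoid sweeping through the object.
    target.position.z = z + approach_height
    arm.set_pose_target(target)
    arm.go(wait=True)

    # Descend to the grasp pose computed from the Kinect depth data.
    target.position.z = z
    arm.set_pose_target(target)
    arm.go(wait=True)

    arm.stop()
    arm.clear_pose_targets()
```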

Communication & Coordination

A master control node handled inter-node synchronization using publisher-subscriber patterns and ROS services. The Kinect, manipulator, and gripper were each interfaced through independent nodes to allow asynchronous communication and modular debugging. TF broadcasts maintained alignment between the sensor frame and world frame, enabling consistent object localization even as the arm repositioned or adjusted its field of view.
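As an illustration of the TF side, a minimal static broadcaster for the camera-to-world transform; the frame names and mounting offset are placeholders, and the deployed system kept these frames aligned continuously rather than publishing them once.

```python
#!/usr/bin/env python3
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

def broadcast_camera_frame():
    rospy.init_node("camera_tf_broadcaster")
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "world"
    t.child_frame_id = "kinect_link"   # assumed sensor frame name
    t.transform.translation.x = 0.50   # placeholder mounting offset (m)
    t.transform.translation.z = 0.80
    t.transform.rotation.w = 1.0       # identity rotation for the sketch
    broadcaster.sendTransform(t)
    rospy.spin()

if __name__ == "__main__":
    broadcast_camera_frame()
```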

Car Infotainment System

Obstacle Detection

HMI

MP3

Python

Tkinter

Arduino

UART

Raspberry Pi 4

UI/UX

Embedded Systems

A next-gen in-car experience prototyped with Intel, combining touchscreen UX, safety sensors, and microcontroller-driven automation, modeled inside a custom-built Cybertruck.

Developed in collaboration with Intel, this project reimagines the in-car infotainment experience through a fully integrated dashboard system designed for an internally modeled, Tesla Cybertruck-inspired prototype. Built using a Raspberry Pi 4, an Arduino Mega and UNO, and a touchscreen interface, the system brings together real-time controls, multimedia, safety features, and smart automation, all while simulating a complete automotive environment.
 

Inspired by the Cybertruck's minimalist aesthetic and digital-first approach, the interface delivers intuitive control over core driving functionalities, lighting, and navigation, while backend microcontrollers ensure real-time responsiveness and hardware-level automation.

Key Features

Minimalist GUI (Python, Tkinter): A custom-designed user interface tailored for intuitive control, inspired by clean design principles and responsive layout logic.
MP3 Playback: Local music player with favorites, shuffle mode, and dynamic metadata display using the TinyTag and Mutagen libraries (metadata handling sketched below).
Real-Time Navigation: Lightweight embedded map interface powered by tkintermapview with dynamic centering and zoom controls.
Safety System Integration: HC-SR04 ultrasonic sensor with real-time obstacle detection. Variable-frequency buzzer for distance-based alerts. Auto-stop logic when reversing toward nearby objects.
Vehicle Controls: Touchscreen-based light and emergency signal toggles. Door and trunk actuation via servo motors and an L293D IC. LCD speed display with live motor feedback.
Call Shortcuts: Tap-to-dial interface with a customizable contact list (via Twilio or a placeholder system).
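As a small example of the metadata handling behind the MP3 player, a sketch using TinyTag; the filename fallback is illustrative, not the exact player code.

```python
from tinytag import TinyTag

def track_metadata(path):
    """Read title, artist, and duration for the playlist view; tags may be None."""
    tag = TinyTag.get(path)
    title = tag.title or path.rsplit("/", 1)[-1]  # fall back to the filename
    return title, tag.artist, tag.duration
```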

Architecture

Core Stack: Raspberry Pi 4 (Python 3.10.4, Tkinter GUI, serial communication). Arduino UNO (UI-triggered mechanical functions). Arduino Mega (motor logic, LCD display, sensor data).
Communication Pipeline: UART over USB for serial Pi-Arduino messaging; secondary signaling handled through GPIO for event synchronization (see the sketch below).
Cybertruck Prototype: Scale vehicle model designed for internal hardware integration. Wiring and mount points structured to reflect a real automotive system layout.
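A minimal sketch of the Pi-to-Arduino UART link; the serial port, baud rate, and one-byte command protocol are illustrative assumptions, not the exact message format used in the prototype.

```python
import serial

# Assumed port and baud rate for the UNO handling UI-triggered functions.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

COMMANDS = {                      # hypothetical one-byte command protocol
    "lights_on": b"L",
    "lights_off": b"l",
    "open_trunk": b"T",
    "hazards": b"H",
}

def send_command(name):
    """Send a command byte when the corresponding GUI button is pressed."""
    arduino.write(COMMANDS[name])

def read_speed():
    """Read the latest speed value the Mega reports over serial, if any."""
    line = arduino.readline().decode(errors="ignore").strip()
    return int(line) if line.isdigit() else None
```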

Interactive Stroke Counter

Embedded Systems

OpenCV

Python

Skeleton Tracing

Pose Estimation

Retail Technology

Retail-ready stroke detection system for Decathlon: an interactive kiosk blending MediaPipe-based motion tracking and a custom touchscreen interface.

This project was commissioned by Decathlon to drive in-store engagement through interactive activity stations. The system uses real-time human pose estimation and a friendly touchscreen interface to track arm strokes and provide immediate feedback with sound and visuals.
 

Installed in physical stores, this system enabled customers to test their endurance and coordination in a gamified experience while simultaneously promoting related products. Its lightweight design and fully offline performance allowed for easy deployment on standard hardware, with no cloud dependency.

Key Features

Real-Time Pose Detection: Leveraging MediaPipe Pose, the system accurately tracks arm positions to recognize complete and valid stroke motions, filtering out incorrect or partial movements through custom joint-angle logic.
Angle-Based Stroke Validation: Uses calculated shoulder-elbow-wrist angles to distinguish intentional movements, reducing noise and accidental triggers (sketched below).
Touchscreen GUI (Tkinter): A responsive and visually clean interface allows users to start sessions, view counts, and interact with results, all on a single screen.
Audio Feedback (Pygame): Integrates sound effects for countdowns and confirmed strokes, enhancing engagement and guiding users through the activity.
Automatic Session Handling: Timer-based logic starts and ends sessions with no manual intervention, perfect for hands-free, store-floor operation.
Custom Asset Integration: Brand-consistent visuals, sound files, and interaction flows tailored to match Decathlon's aesthetic and experience goals.
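A minimal sketch of the angle-based validation logic; the landmark coordinates would come from MediaPipe, while the bend and extension thresholds shown here are placeholders rather than the tuned in-store values.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) given three (x, y) landmark points."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

BENT_DEG, EXTENDED_DEG = 70, 160  # placeholder thresholds

class StrokeCounter:
    """Counts a stroke only when the elbow sweeps from bent to extended."""

    def __init__(self):
        self.count = 0
        self.was_bent = False

    def update(self, shoulder, elbow, wrist):
        angle = joint_angle(shoulder, elbow, wrist)
        if angle < BENT_DEG:
            self.was_bent = True
        elif angle > EXTENDED_DEG and self.was_bent:
            self.count += 1        # full bend-to-extension cycle = one stroke
            self.was_bent = False
        return self.count
```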

Architecture & Tools

Libraries Used: MediaPipe, OpenCV, Tkinter, Pygame, NumPy, PIL.
Tracking Model: Google’s Pose Estimation model from MediaPipe, running in real-time streaming mode.
Angle Analysis: Custom function for computing joint angles based on landmarks.
GUI Framework: Tkinter with custom image assets and button styling.
Multithreading: Ensures responsive video feed and audio feedback concurrently.

Smart Anti-Theft System with Real-Time Facial Recognition

Facial Recognition

CAN Bus

IoT

OpenCV

Real-time Tracking

Arduino

Raspberry Pi 4

Mobile App Integration

Pyrebase

SIM800C

Python

Cloud Integration

Embedded Systems

Face-authenticated vehicle access system developed alongside B&W Soluciones Integrales, integrating Raspberry Pi, CAN Bus, and cloud-based tracking.

This security-focused project was developed in collaboration with B&W Soluciones Integrales, addressing Mexico’s high auto theft rates with a real-time, cloud-connected vehicle access and tracking solution. The system uses facial recognition for driver authentication, allowing or blocking vehicle ignition based on identity.

Built on a Raspberry Pi 4 with CAN Bus integration and a native iOS app, the platform enables remote control, location monitoring, and intrusion detection, all synced through Firebase infrastructure.

Core Features

Facial Recognition Engine using a pre-trained face detection library, OpenCV, and Pyrebase for identifying authorized users or flagging unknown faces (sketched below).
Live Image Upload every 30 seconds to Firestore Storage with categorization into registered users, intruders, and system users.
Cloud-Connected Mobile App built in Xcode (iOS 15+), displaying real-time vehicle location, user images, and interactive vehicle commands including Valet Mode, Remote Shutdown, and Guest Access Authorization.
CAN Bus Control to physically disable the vehicle’s ignition via dual Arduino MCP2515 modules in case of unauthorized access.
Real-Time Communication with Firestore to send and receive live updates between the vehicle, Raspberry Pi, and the mobile application.
Raspberry Pi 4 as Central Node handling face recognition, CAN Bus signaling, and remote data transmission over mobile networks via SIM800C.
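A simplified sketch of the authorize-or-flag flow; the face_recognition package stands in here for the pre-trained face library mentioned above, and the Firebase configuration values and storage paths are placeholders.

```python
import face_recognition
import pyrebase

# Placeholder Firebase project settings.
firebase = pyrebase.initialize_app({
    "apiKey": "YOUR_API_KEY",
    "authDomain": "your-project.firebaseapp.com",
    "databaseURL": "https://your-project.firebaseio.com",
    "storageBucket": "your-project.appspot.com",
})
storage = firebase.storage()

def check_driver(capture_path, known_encodings):
    """Return True if the captured face matches an authorized driver."""
    image = face_recognition.load_image_file(capture_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return False  # no face found: treat as unauthorized

    match = any(face_recognition.compare_faces(known_encodings, encodings[0]))

    # Every capture is uploaded and categorized, mirroring the 30-second cycle above.
    folder = "registered" if match else "intruders"
    storage.child(f"{folder}/{capture_path}").put(capture_path)
    return match
```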

System Architecture Highlights

Raspberry Pi 4 + YoBuyBuy camera module for onboard image processing.
CAN protocol implemented for secure ignition disablement (illustrated below).
Modular Firebase (Firestore + Storage) schema for images, user flags, and geolocation.
Python-powered backend interfacing with Firebase and CAN controllers.
Secure cloud-to-vehicle handshake via Firestore permissions.
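For illustration, the ignition-disable command expressed as a single CAN frame with python-can; in the deployed system this signaling ran through the Arduino MCP2515 modules, and the interface, arbitration ID, and payload here are hypothetical.

```python
import can

def disable_ignition():
    """Send a hypothetical immobilize frame on the vehicle bus."""
    bus = can.interface.Bus(channel="can0", bustype="socketcan")  # assumed SocketCAN setup
    frame = can.Message(
        arbitration_id=0x1A0,   # hypothetical immobilizer ID
        data=[0x01],            # hypothetical "block ignition" payload
        is_extended_id=False,
    )
    bus.send(frame)
    bus.shutdown()
```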

Autonomous Delivery Cart Prototype

DC Motors

Obstacle Detection

Analog Electronics

Motor Control

Energy Optimization

Circuit Simulation

Obstacle-detecting miniature delivery robot with infrared-guided traction system and current-boosted motor control.

This proof-of-concept project presents the design and implementation of a small-scale autonomous delivery robot, simulating future last-mile logistics systems. Designed on a compact 10×10 cm wooden chassis, the system integrates infrared sensors for obstacle detection, a dual DC motor-based traction system, and a Darlington-transistor power stage to ensure motor performance under variable current demands.

 

The robot continuously monitors its path, halting immediately upon detecting an obstacle within a safe threshold. The design was rigorously optimized through a complete power supply overhaul and enhanced transistor protection measures, ensuring reliable operation on a strict 4.5V power budget.

Key Features

Chassis & Power: Constructed on a compact 10×10 cm wooden base. Powered by three 1.5V batteries, optimized for torque and efficiency.
Control & Sensing: Implemented two analog infrared sensors for line-following behavior. Used a Darlington pair configuration to amplify motor current (see the quick check below). Integrated 100Ω and 1kΩ resistors to protect transistors under load.
Motor System: Equipped with two gear motors acting as traction drivers. Final drive stage refined through circuit simulation and iterative physical testing.
Simulation & Validation: Simulated sensor input using potentiometers to model real-world conditions. Conducted current draw and transistor switching tests to ensure stable operation. Validated sensor responsiveness and circuit robustness under real loads.
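A quick back-of-the-envelope check of why the Darlington stage works on a 4.5V budget; the transistor gains and motor current below are assumed example values, not measured figures.

```python
# Effective gain of a Darlington pair is roughly the product of the two gains.
beta_1, beta_2 = 100, 100                        # assumed current gains
beta_total = beta_1 * beta_2 + beta_1 + beta_2   # ~10,200 for these values

motor_current_a = 0.25                           # assumed per-motor draw
base_current_a = motor_current_a / beta_total    # ~25 uA of base drive needed

# A 1 kOhm base resistor from a 4.5 V rail (minus ~1.4 V of base-emitter drops)
# supplies roughly 3 mA, far above the required base current.
available_base_a = (4.5 - 1.4) / 1_000
print(f"needed: {base_current_a * 1e6:.0f} uA, available: {available_base_a * 1e3:.1f} mA")
```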

Still Curious?

There's always more beneath the surface.

Details, decisions, experiments that didn’t make it into the showcase.

If something caught your eye or left you wondering, I’d love to talk.
