Project Overview: Mirror Master

Table of Contents

  • Introduction
  • Reasoning
  • Features
  • General Robot Facts
  • Installation
  • Configuration
  • System Requirements

Introduction

  • Project Mirror Master was designed and created by Andrew Dillon, Andrew Hiser, Miguel Garcia Garspa, Daniel McVay, and Conor Rosenberger, students in the Applied Robotics class (ROB 421) at Oregon State University. It takes a completely assembled and functioning Social Animated Mechanical Interlocutor (aka SAMI) robot, currently used for research at Oregon State, and lets the user install some modifications and a camera so the robot can mirror movements performed right in front of it
  • This repository contains all relevant documentation and images needed to transform the SAMI robot into Project: Mirror Master

Reasoning:

  • We decided to attempt this project because we had already added a camera to our SAMI robot and thought it would be a difficult and interesting challenge to have the robot copy movements, something none of us had ever done before. Our goal was to copy motion in two, or even three, dimensions; however, the final Project: Mirror Master only functions in a two-dimensional space.

initial_test_videos

Completed_Videos

Features

Supplemental Artifacts:

  • Miguel Garcia's Artifacts:
    • Gimbal tracking of the head to keep the user in view if they step out of frame (a sketch of the tracking loop follows the video below)
PXL_20250605_200903145.mp4
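A minimal sketch of the gimbal tracking idea, assuming a face detector (e.g. MediaPipe) supplies the face center as a normalized x coordinate in [0, 1]; the set_pan_angle() stub, servo limits, and gain here are hypothetical stand-ins, not the project's actual code.

```python
# Illustrative proportional head tracking; set_pan_angle() is a stand-in
# for the robot's real servo interface, and all constants are made up.
PAN_MIN, PAN_MAX = -90.0, 90.0   # assumed pan servo limits (degrees)
GAIN = 30.0                      # proportional gain, tuned by hand

pan_angle = 0.0

def set_pan_angle(angle):
    """Placeholder for the real servo command."""
    print(f"pan -> {angle:.1f} deg")

def track_face(face_center_x):
    """Nudge the head toward the face so the user stays in view.

    face_center_x is the detected face center in [0, 1]; 0.5 means the
    face is centered. The sign of the correction depends on how the
    camera is mounted.
    """
    global pan_angle
    error = face_center_x - 0.5
    pan_angle = max(PAN_MIN, min(PAN_MAX, pan_angle - GAIN * error))
    set_pan_angle(pan_angle)

track_face(0.9)  # face near the frame edge: head turns to follow it
```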

  • Daniel McVay's Artifacts:
    • Create collision detection for the system to determine whether a motion is safe to mirror
    • Artifact 1) Created code that identifies both shoulders and elbows to initialize the system
    • Artifact 2) Created code that simulates the arm positions and checks for collisions (images below)
    • Artifact 3) Collision detection checks the left arm and right arm individually. NOTES: Overall the code works to test and simulate positions using the Trimesh library; however, because we could not get 3D detection to work, implementing full 3D collision detection would have been a waste of time for minor effect. Instead, we bounded motions to prevent collisions (see the sketch below).
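A minimal sketch of the per-arm check described in Artifact 3, assuming each arm segment is approximated as a cylinder between joint positions; all coordinates, radii, and the torso box are illustrative values, not the project's model, and Trimesh's collision queries additionally require the python-fcl package.

```python
# Illustrative per-arm collision check with Trimesh; joint positions,
# radii, and the torso box are made-up values, not the project's model.
import numpy as np
import trimesh

def segment_mesh(start, end, radius=0.03):
    """Approximate an arm segment as a cylinder between two joints."""
    return trimesh.creation.cylinder(radius=radius,
                                     segment=np.vstack([start, end]))

def arm_collides(shoulder, elbow, wrist, torso):
    """Check one arm (upper arm + forearm) against the torso mesh."""
    manager = trimesh.collision.CollisionManager()
    manager.add_object("upper_arm", segment_mesh(shoulder, elbow))
    manager.add_object("forearm", segment_mesh(elbow, wrist))
    return manager.in_collision_single(torso)

torso = trimesh.creation.box(extents=[0.30, 0.20, 0.50])
# Left and right arms are checked independently, as in Artifact 3.
left = arm_collides([0.18, 0, 0.20], [0.05, 0, 0.10], [0.00, 0, 0.00], torso)
right = arm_collides([-0.18, 0, 0.20], [-0.35, 0, 0.10], [-0.45, 0, 0.00], torso)
print("left arm collides:", left, "| right arm collides:", right)
```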

General Robot Facts

  • Uses one or two cameras to pose-match and copy shown movement in all three coordinate directions, compared to the standard two
  • Uses DeepFace facial recognition to isolate and recognize authorized users, copying the movements of only those users (see the first sketch below)
  • Uses MediaPipe to capture the skeleton for pose matching
  • Calculates servo angles from the skeleton to match the target's movements and sends them to the robot to replicate (see the second sketch below)
  • Utilizes up to 21 servos to replicate human joints and allow the robot to mimic human motion
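Two minimal sketches of this pipeline. First, the authorized-user gate with DeepFace; both image paths are illustrative placeholders, not files from this repository.

```python
# Illustrative authorization gate with DeepFace.verify, which compares two
# face images and reports whether they show the same person.
from deepface import DeepFace

result = DeepFace.verify(img1_path="authorized_user.jpg",
                         img2_path="current_frame.jpg")
if result["verified"]:
    print("Authorized user recognized; mirroring enabled")
else:
    print("Unknown person; movements ignored")
```

Second, the skeleton-to-angle step with MediaPipe, assuming a single webcam at index 0 and one elbow joint; the final print stands in for the project's actual servo mapping and command step.

```python
# Illustrative MediaPipe pose capture plus one joint-angle calculation;
# mapping the angle onto a servo range is left as a stub (the print).
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by landmarks a-b-c in image space."""
    a, b, c = (np.array([p.x, p.y]) for p in (a, b, c))
    cosang = np.dot(a - b, c - b) / (np.linalg.norm(a - b) * np.linalg.norm(c - b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    ok, frame = cap.read()
    if ok:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            elbow = joint_angle(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                                lm[mp_pose.PoseLandmark.LEFT_ELBOW],
                                lm[mp_pose.PoseLandmark.LEFT_WRIST])
            print(f"left elbow: {elbow:.1f} deg")  # would map to a servo angle
cap.release()
```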

Installation

Assemble the Logitech C920e camera, combine it with the camera cover, and mount the assembly on top of the robot (see the ROB 421 camera cover model and the image of the robot with the camera cover).

Captions/Speakers/Video/Microphone

  • The current camera model for the project is a Logitech C920e
  • Sound and audio currently play through the connected computer's speakers

Configuration

  • 3D tracking
  • Accurate Angle and servo determination
  • More human-like bending of the arms, biceps, and shoulders; the robot should not bend in non-human ways when copying movement
  • Safe and reasonable servo movement
  • Collision prediction and prevention

System Requirements

  • Must be run on Python 3.11
    • one of the few versions compatible with both MediaPipe and DeepFace (a quick startup check is sketched below)
  • Python module versions:
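A quick startup sanity check, reflecting the Python 3.11 requirement above; the exact module pins should come from the repository's own list, so only the interpreter version and imports are verified here.

```python
# Verify the interpreter version and that the two key libraries import.
import sys

if sys.version_info[:2] != (3, 11):
    raise SystemExit(f"Python 3.11 required, found {sys.version.split()[0]}")

import mediapipe          # noqa: E402
import deepface           # noqa: E402  (import check only)

print("mediapipe", mediapipe.__version__, "imported OK")
```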
