AapseMatlb/Pickasso-Complete

🤖 Pickasso Unified Robotic System

📦 Project Overview

A fully autonomous trash-picking robot with integrated:

  • 📸 Vision Inference (YOLO + MediaPipe)
  • 📖 Ethical Decision-Making Engine
  • 📡 Navigation (ROS)
  • 🤖 Manipulation Simulation
  • 🎛️ HMI Dashboard (React + Convex)
  • 🗣️ Speech Interface (Recognition + TTS)

🚀 Deployment Steps

  1. Clone the repository:
     git clone https://github.com/AapseMatlb/pickasso_unified.git
     cd pickasso_unified
  2. Install dependencies:
     pip install -r requirements.txt
     cd hmi/frontend && npm install
  3. Start the complete system:
     chmod +x startup/start_simulation.sh
     ./startup/start_simulation.sh

🧩 Subsystem Details

| Subsystem | Access/Command |
| --- | --- |
| HMI Dashboard | http://localhost:3000 |
| Vision Inference | Runs automatically |
| Decision Engine | Runs automatically |
| Navigation & Manipulation | Simulated using ROS |
| Speech Interface | Accepts simple voice commands |

📖 For detailed configuration and subsystem usage, refer to the respective directories.


👨‍💻 Developed by: Yashashwani Kashyap

📚 Supporting Repositories


🛠️ Hardware Requirements (For Real Robot Deployment)

  • Intel RealSense D435i Camera
  • OpenManipulator-X Arm
  • TurtleBot3 Waffle Pi Base

⚙️ Environment Setup

  • Python 3.8+
  • ROS Noetic (Navigation & Manipulation)
  • Node.js 16+ (HMI Dashboard)
  • Convex Backend Account
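The requirements above can be sanity-checked before launching the system. The sketch below is a hypothetical helper (not part of the repository); the tool names `node`, `npm`, and `roscore` are assumptions based on the stack listed above (Node.js for the HMI, ROS Noetic for navigation and manipulation):

```python
import shutil
import sys

def check_prerequisites(min_python=(3, 8), tools=("node", "npm", "roscore")):
    """Return a list of human-readable problems with the local environment.

    The default tool names are assumptions drawn from the environment list
    above; adjust them to match your actual deployment.
    """
    problems = []
    if sys.version_info[:2] < min_python:
        problems.append(
            f"Python {min_python[0]}.{min_python[1]}+ required, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    for tool in tools:
        if shutil.which(tool) is None:
            problems.append(f"'{tool}' not found on PATH")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("WARNING:", problem)
```

Running this before the startup script gives an early, readable failure instead of a mid-launch crash.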

📦 Pre-trained Models

Ensure the YOLOv8 model weights are downloaded and placed under vision_inference/models/.
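A quick way to confirm the weights are in place is to scan the directory named above for `.pt` files (the standard YOLOv8 weight extension). This is a minimal illustrative sketch, not repository code, and the exact filenames your deployment expects are an assumption:

```python
from pathlib import Path

# Directory named in the README; adjust the pattern if your weights
# use different filenames or extensions.
MODELS_DIR = Path("vision_inference/models")

def find_model_weights(models_dir=MODELS_DIR, pattern="*.pt"):
    """Return all YOLOv8 weight files matching `pattern` under models_dir."""
    if not models_dir.is_dir():
        return []
    return sorted(models_dir.glob(pattern))

if __name__ == "__main__":
    weights = find_model_weights()
    if not weights:
        print(f"No model weights found under {MODELS_DIR} — download them first.")
    else:
        for w in weights:
            print("Found:", w)
```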


🌐 Convex Backend Configuration

Update CONVEX_API_URL in all relevant scripts with your deployed Convex backend URL.
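One way to avoid editing CONVEX_API_URL in every script is to resolve it once from the environment. This is a suggested pattern, not how the repository's scripts currently work, and the fallback URL below is a placeholder:

```python
import os

# Placeholder default — replace with your deployed Convex backend URL.
DEFAULT_CONVEX_API_URL = "https://your-deployment.convex.cloud"

def get_convex_api_url():
    """Return the Convex backend URL, preferring the CONVEX_API_URL env var.

    Scripts that import this helper pick up a new deployment URL from the
    environment without any source edits.
    """
    return os.environ.get("CONVEX_API_URL", DEFAULT_CONVEX_API_URL)
```

With this in place, switching backends is a one-line change: `export CONVEX_API_URL=https://...` before launching the system.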

To run the HMI frontend:

  1. Navigate to hmi/frontend and run:
     npm install
     npm start

To run the Convex backend:

  1. Navigate to the convex directory and run:
     npx convex dev

About

ROS-based autonomous robot with YOLOv8 perception, navigation, manipulation, and multi-modal HRI, built at NUS
