Quickly integrate face, hand, and/or pose tracking to your frontend projects in a snap βœ¨πŸ‘Œ

CreativeInquiry/handsfree


2022-02-28: This project has been archived

It's with a very heavy heart that I'm moving this project into archive. I started this project in 2018 while I was homeless in order to help a friend at the shelter I was staying at who was recovering from a stroke. I didn't even have my own computer until engineers from Google visited me at the shelter and bought me one.

The problem is that although I eventually made it out of homelessness, I was only able to raise just enough support to barely stay afloat (in fact, people have sent me food boxes, have helped pay my bills, and have helped me purchase things like webcams and tools to explore). I actually became homeless again last summer after I was assaulted and hospitalized in a random act of violence from a crazy neighbor. I was afraid to live in my apartment and was homeless while I transferred the lease to a new place as I wasn't able to raise enough support.

It's been 4 years and I get panic attacks when notifications come in on this project, because it's a very heavy feeling to have put so much work into this and related tools without enough support. I don't have a car or even a bicycle, I am on food stamps, occasionally I shoplift essentials like soap and toothpaste, and I've spent months inpatient at mental health centers from the burnout of trying to raise support for this project.

All we can do is try our best, and I can definitely say I gave it my best. I've rewritten the library from scratch 8 times (including the interactive documentation), I've created integrations for various platforms and libraries, I've created chrome extension starter kits with well researched security/privacy features (which I even consulted Mozilla on), and I've created dozens of examples, tutorials, and streams.

I plan on writing a summary of my experiences developing this library and of the people with disabilities whom I've helped through it, and will eventually publish it on my site at ozramos.com. I'm archiving this project because I would like to try and find a job and use it in my portfolio, but I don't want people to use it without realizing that there is no more support. Please feel free to fork and continue it however you like.

I need to archive this project now and try to work on something that will generate money. I'm sorry.

  • Oz







handsfree.js.org










πŸ’» Project Documentation

I'm still experimenting with various ways to create documentation, so the docs are currently in flux. Sorry for the confusion! I'll likely be settling on Notion but am still trying to find the best setup. Thanks!








Contents

This repo is broken into 3 main parts: the library itself in /src/, its documentation in /docs/, and the Handsfree Browser Extension in /extension/.








Quickstart

Installing from CDN

Note: models loaded from the CDN may load slower on the initial page load, but should load much faster once cached by the browser.

This option is great if you don't have or need a server, or if you're prototyping on a site like CodePen. You can also just download this repo and work with one of the boilerplates in /boilerplate/.

<head>
  <!-- Include Handsfree.js -->
  <link rel="stylesheet" href="https://unpkg.com/[email protected]/build/lib/assets/handsfree.css" />
  <script src="https://unpkg.com/[email protected]/build/lib/handsfree.js"></script>
</head>

<body>
  <!-- Your code must be inside body as it applies classes to it -->
  <script>
    // Let's use handtracking and show the webcam feed with wireframes
    const handsfree = new Handsfree({showDebug: true, hands: true})
    handsfree.start()

    // Create a plugin named "logger" to show data on every frame
    handsfree.use('logger', data => {
      console.log(data.hands)
    })
  </script>
</body>

Installing from NPM

# From your project's root
npm i handsfree

// Inside your app
import Handsfree from 'handsfree'

// Let's use handtracking and enable the plugins tagged with "browser"
const handsfree = new Handsfree({showDebug: true, hands: true})
handsfree.enablePlugins('browser')
handsfree.start()

Hosting the models yourself

The above will load the models, some over 10 MB, from the unpkg CDN. If you'd rather host them yourself (for example, to use offline) then you can eject the models from the npm package into your project's public folder:

# Move the models into your project's public directory
# - change PUBLIC below to where you keep your project's assets

# ON WINDOWS
xcopy /e node_modules\handsfree\build\lib PUBLIC
# EVERYWHERE ELSE
cp -r node_modules/handsfree/build/lib/* PUBLIC

// Inside your app
import Handsfree from 'handsfree'

const handsfree = new Handsfree({
  hands: true,
  // Set this to the path where you moved the models
  assetsPath: '/PUBLIC/assets',
})
handsfree.enablePlugins('browser')
handsfree.start()







Example Workflow

The following aims to give you a quick overview of how things work. The key takeaway is that everything is centered around hooks/plugins: named callbacks that run on every frame and can be toggled on and off.
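The hook/plugin mechanism can be sketched in a few lines of plain JavaScript. This is not the real Handsfree.js internals, just a minimal model of the idea: named callbacks kept in a map, run once per frame, and toggled on and off by name.

```javascript
// Minimal sketch of a hook/plugin loop (NOT the real Handsfree.js
// internals): named callbacks stored in a map, run on every frame,
// and toggled by name.
class PluginLoop {
  constructor() {
    this.plugin = {}
  }

  // Register a named callback, enabled by default
  use(name, onFrame) {
    this.plugin[name] = { name, enabled: true, onFrame }
    return this.plugin[name]
  }

  enable(name) { this.plugin[name].enabled = true }
  disable(name) { this.plugin[name].enabled = false }

  // Called once per frame with the current tracking data
  runFrame(data) {
    Object.values(this.plugin).forEach(plugin => {
      if (plugin.enabled) plugin.onFrame.call(plugin, data)
    })
  }
}

// Usage: a "logger" plugin that can be switched off between frames
const loop = new PluginLoop()
const seen = []
loop.use('logger', data => seen.push(data.frame))

loop.runFrame({ frame: 1 })
loop.disable('logger')
loop.runFrame({ frame: 2 }) // logger is skipped this frame
loop.enable('logger')
loop.runFrame({ frame: 3 })

console.log(seen) // → [ 1, 3 ]
```

In the real library the frame data comes from the active tracking models, but the enable/disable flow is the same idea.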

Quickstart Workflow

The following workflow demonstrates how to use all features of Handsfree.js. Check out the Guides and References to dive deeper, and feel free to post on the Google Groups or Discord if you get stuck!

// Let's enable face tracking with the default Face Pointer
const handsfree = new Handsfree({weboji: true})
handsfree.enablePlugins('browser')

// Now let's start things up
handsfree.start()

// Let's create a plugin called "logger"
// - Plugins run on every frame and are how you "plug in" to the main loop
// - Pass a regular function (not an arrow function) so that "this" is the
//   plugin itself, in this case handsfree.plugin.logger
handsfree.use('logger', function (data) {
  console.log(data.weboji.morphs, data.weboji.rotation, data.weboji.pointer, data, this)
})

// Let's switch to hand tracking now. To demonstrate that you can do this live,
// let's create a plugin that switches to hand tracking when both eyebrows go up
handsfree.use('handTrackingSwitcher', function ({weboji}) {
  if (weboji.state.browsUp) {
    // Disable this plugin
    // Same as handsfree.plugin.handTrackingSwitcher.disable()
    this.disable()

    // Turn off face tracking and enable hand tracking
    handsfree.update({
      weboji: false,
      hands: true
    })
  }
})

// You can enable and disable any combination of models and plugins
handsfree.update({
  // Disable weboji which is currently running
  weboji: false,
  // Start the pose model
  pose: true,

  // This is also how you configure (or pre-configure) a bunch of plugins at once
  plugin: {
    fingerPointer: {enabled: false},
    faceScroll: {
      vertScroll: {
        scrollSpeed: 0.01
      }
    }
  }
})

// Disable all plugins
handsfree.disablePlugins()
// Enable only the plugins for making music (not actually implemented yet)
handsfree.enablePlugins('music')

// Overwrite our logger to display the original model APIs
handsfree.plugin.logger.onFrame = (data) => {
  console.log(handsfree.model.weboji?.api, handsfree.model.hands?.api, handsfree.model.pose?.api)
}







Examples

Face Tracking Examples

Face Pointers

Motion Parallax Display

Puppeteering Industrial Robots

Playing desktop games with face clicks


Hand Tracking Examples

Hand Pointers

Use with Three.js

Playing desktop games with pinch clicks

Laser pointers but with your finger


Pose Estimation Examples

Flappy Pose - Flappy Bird but where you have to flap your arms








Local Development

If you'd like to contribute to the library or documentation then the following will get you going:

  • Install NodeJS and git
  • Clone this repository: git clone https://github.com/handsfreejs/handsfree
  • Install dependencies by running npm i in a terminal from the project's root
  • Start development on localhost:8080 by running npm start
  • Hit CTRL+C from the terminal to close the server

Once you've run the above, you only need npm start for subsequent sessions. If you pull the latest code, remember to run npm i to pick up any new dependencies (this shouldn't happen often).

Command line scripts

# Start local development on localhost:8080
npm start 

# Builds the library, documentation, and extension
npm run build

# Build only the library /dist/lib/
npm run build:lib

# Build only the documentation at /dist/docs/
npm run build:docs

# Build only the extension at /dist/extension
npm run build:extension

# Publish library to NPM
npm login
npm publish

# Deploy documentation to handsfree.js.org
deploy.sh

Dev Notes

  • See vuepress-component-font-awesome for adding new icons to the documentation. Remember to run npm run fa:build when adding new font icons so that they are copied over into the docs/.vuepress/components/FA folder
  • You may occasionally need to restart the server when adding new files to /docs; this is true when changing /docs/.vuepress/config.js as well







The Handsfree Browser Extension

The Browser Extension is designed to help you browse the web handsfree through face and/or hand gestures. The goal is to develop a "Userscript Manager" like Tampermonkey, but for handsfree-ifying web pages, games, apps, WebXR, and really any other type of content found on the web.

How it works

  • When you first install the extension, /src/background/handsfree.js checks if you've approved the webcam. If not, then it'll open the options page in src/options/stream-capture.html
  • The popup panel has a "Start/Stop Webcam" button that communicates with the background script to start the webcam: /src/popup/index.html
  • The background page is where the models are stored and run. This keeps everything isolated and only asks for webcam permission once (vs on every domain): /src/background/handsfree.js
  • The background page also uses the "Picture in Picture" API to "pop the webcam" out of the browser. It renders the webcam feed and debug canvases into a single canvas, and uses that as the srcObject of a separate video element, which is then PiP'ed
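The "pop out" step in the last bullet can be sketched with standard browser APIs. This is a hedged illustration rather than the actual extension code (which lives in /src/background/handsfree.js); the function name popOutDebugView and the frame rate are assumptions.

```javascript
// Sketch of popping a composite canvas out via Picture in Picture.
// Browser-only APIs: HTMLCanvasElement.captureStream() and
// HTMLVideoElement.requestPictureInPicture().
function popOutDebugView(compositeCanvas) {
  // Turn the canvas (webcam feed + debug wireframes drawn together)
  // into a live MediaStream
  const stream = compositeCanvas.captureStream(30) // 30 fps (assumed)

  // Feed that stream into an off-screen <video> element
  const video = document.createElement('video')
  video.srcObject = stream
  video.muted = true

  // Once metadata is available, play the video and request PiP.
  // Note: requestPictureInPicture() usually requires a user gesture.
  video.addEventListener('loadedmetadata', async () => {
    await video.play()
    await video.requestPictureInPicture()
  })

  return video
}
```

The key design point is that only one video element is PiP'ed, no matter how many debug canvases exist, because everything is composited onto a single canvas first.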

How to install

Google Chrome

Install it as an unpacked Chrome extension:

  1. Visit chrome://extensions
  2. Enable Developer Mode on the top right
  3. Then click Load unpacked and select this project's root folder

Handsfree Browsing

By default, each page will get a "Face Pointer" or a set of "Palm Pointers" for you to browse pages handsfree.

A person controlling a virtual mouse pointer by tilting their head around. A person scrolling a page by pinching their index and thumb together and raising or lowering their pinched hand.

However, in addition to the pointers you can add custom handsfree interactions. For example, here's a demonstration of handsfree-ifying different things:

Creating generative art with hand gestures. A person pinching the air to "pinch" a blob; moving a pinched blob causes it to sing at a different pitch.








Explore the interactive docs at: Handsfree.js.org

Or try it right away with the serverless boilerplates in /boilerplate/!








License & Attributions

License: Apache 2.0

The Handsfree.js core is available for free and commercial use under Apache 2.0. Each of the models is also available for free and commercial use under Apache 2.0.

Attributions

I'd like to also thank the following people and projects:








Special Thanks

  • @Golan and The STUDIO for Creative Inquiry for hosting me for a residency during 2019 and for helping me approach projects in a more expressive way. Also for inviting me back for a multi-month residency in Spring 2021!
  • @AnilDash for supporting the project during Winter 2018 out of the blue and the opportunities to share my project on Glitch.com
  • The School of AI for the 2018 Fellowship in support of this project
  • @jessscon and Google PAIR for the very early support that made starting this project possible
  • Everyone who's previously supported the project through GitHub Sponsors, Patreon, GoFundMe, and through Twitter and everywhere else over the years
