Join us for an interactive event on creating custom Zoom filters with Python and OpenCV that let you join any meeting as an animal of your choice!
This event is hosted by Fred Lu. (Email: dl3957@nyu.edu, GitHub: http://github.com/noodleHam/, Slack: find me on tech@nyu community slack channel)
Install Python 3.x from https://www.python.org/
Then, install the required packages with pip by typing the following in command prompt or terminal. If you have a Mac or Linux machine, replace "pip" with "pip3":
pip(3) install --upgrade opencv-python
pip(3) install --upgrade opencv-contrib-python
pip(3) install --upgrade matplotlib
(If typing "pip" causes an error in terminal, or the pip it finds belongs to a different Python installation, you may have to locate pip manually: type "python -m site" and find pip.exe in python3x/Scripts. Alternatively, running "python -m pip install ..." guarantees that pip matches the Python interpreter you invoke.)
You can optionally install OBS-Studio from https://obsproject.com/ to create a virtual webcam, but I'll mainly be showing how to get the filter working; you can then use OBS to route it to your actual webcam later.
In terminal or command prompt, type the following:
python (or "python3" on macOS and Linux)
>>> import cv2
>>> import matplotlib
If you see no errors, then you're all set to run and tweak what I'll be demoing in a bit.
I'll be explaining how I built the face filter step-by-step.
To follow along, clone my repo by typing
git clone https://github.com/NoodleHam/Python-Zoom-Face-Filters.git
or download the zip by visiting https://github.com/NoodleHam/Python-Zoom-Face-Filters
First, I imported code written by @Daniel Otulagun that visualizes facial landmarks using functions provided by OpenCV. The code simply draws circles at the detected landmark points on your face. To see this, run (press 'q' to exit):
python3 codesss/1-facial-landmark-original.py
Here's what it looks like for me:
It looks good, but there's an immediate problem: we don't know which dots belong to which region! For example, we can't tell the mouth from the eyes. It turns out that these dots are ordered, and each of them corresponds to a specific facial region; we'll need another piece of code to make use of this information.
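For reference, the standard 68-point landmark annotation (used by dlib and imutils, which the demo code builds on) assigns each region a fixed, contiguous slice of the 68 ordered points. A minimal sketch of that mapping (the index ranges come from the standard annotation, not from the repo's code):

```python
# Index ranges of the standard 68-point facial landmark annotation,
# as used by dlib and imutils. The points are ordered, so each region
# owns a fixed, contiguous slice.
FACIAL_LANDMARK_REGIONS = {
    "jaw": (0, 17),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "nose": (27, 36),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "mouth": (48, 68),
}

def points_for(region, landmarks):
    """Return the landmark points belonging to one facial region."""
    start, end = FACIAL_LANDMARK_REGIONS[region]
    return landmarks[start:end]
```

This is why the dot order matters: once you know a point's index, you know which region it belongs to.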
I used a piece of code from Adrian Rosebrock's imutils repo to segment and visualize these landmarks on a face. To see this, run:
python3 codesss/2-facial-landmark-segmentation.py
This is what it looks like for me:
I'll spare the details here. Basically, I had to apply a blurring effect to the masks around the mouth and the eyes so that the crop isn't too jagged around the edges and looks more natural. Once I'd done this, I pasted my eyes and mouth onto a cat. This looks VERY uncanny (and funny too 😂), so I decided to change my code to use cartoon/anime eyes instead.
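The blurred-mask paste amounts to alpha compositing: blurring turns the mask's hard 0/1 edge into a soft ramp, and each output pixel becomes a weighted mix of the crop and the background. A minimal NumPy sketch of that idea, with a crude repeated box blur standing in for whatever blur the actual code applies (function names here are mine, not the repo's):

```python
import numpy as np

def feather_mask(mask, passes=3):
    """Soften a binary mask by repeatedly averaging each pixel with
    its 3x3 neighborhood (a simple stand-in for the blurring step)."""
    soft = mask.astype(float)
    h, w = mask.shape
    for _ in range(passes):
        padded = np.pad(soft, 1, mode="edge")
        # average of the 3x3 neighborhood around every pixel
        soft = sum(
            padded[dy:dy + h, dx:dx + w]
            for dy in range(3) for dx in range(3)
        ) / 9.0
    return soft

def paste(crop, background, mask):
    """Alpha-composite `crop` onto `background` with a feathered mask,
    so the seam fades instead of cutting off abruptly."""
    alpha = feather_mask(mask)
    return alpha * crop + (1.0 - alpha) * background
```

With a hard mask the paste boundary is a visible step; the feathered alpha makes it a gradual fade, which is why the crop looks less jagged.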
Now, we want to crop the eyes and detect the direction of the gaze so we can animate our cartoon eyes. I achieved this using convolution + truncation. To see what it looks like, run:
python3 codesss/4-visualize-gaze.py
Note that in the top-left window, the location of the pupil is accurately estimated and marked with a blue circle.
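I won't reproduce the repo's exact kernel here, but the convolution + truncation idea can be sketched as: truncate (threshold) the grayscale eye crop so only dark pixels keep a score (the pupil is the darkest region), convolve that score map with a small box kernel so the window covering the densest dark blob wins, and take the argmax as the pupil center. A hypothetical NumPy version (parameter values are my guesses, not the repo's):

```python
import numpy as np

def find_pupil(eye_gray, dark_thresh=60, kernel=5):
    """Estimate the pupil center (x, y) in a grayscale eye crop.

    1. Truncation: keep a darkness score only where a pixel is darker
       than `dark_thresh`; everything brighter scores zero.
    2. Convolution: sum the score over a kernel x kernel window, so the
       window containing the most dark pixels scores highest.
    """
    dark = np.where(eye_gray < dark_thresh,
                    dark_thresh - eye_gray.astype(int), 0)
    h, w = dark.shape
    padded = np.pad(dark, kernel // 2)
    # naive box convolution: score[y, x] = sum of the window at (y, x)
    score = np.zeros((h, w), dtype=int)
    for dy in range(kernel):
        for dx in range(kernel):
            score += padded[dy:dy + h, dx:dx + w]
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return x, y
```

The convolution step is what makes this robust: a single stray dark pixel can't beat a solid dark blob the size of the kernel.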
Next, I computed the location of the pupil relative to the center of the eye to represent the gaze direction. I then pasted cartoon eyes onto a cartoon cat and used this information to animate them. To see this, run:
python3 codesss/5-cartoon-eyes.py
This is what it looks like:
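The gaze representation from this step can be thought of as a normalized offset: pupil position minus eye center, divided by the eye's half-size, giving a pair in roughly [-1, 1] that can drive the cartoon pupil directly. A hypothetical sketch of that mapping (names and signatures are mine, not the repo's):

```python
def gaze_direction(pupil, eye_box):
    """Normalized gaze offset of a pupil inside its eye bounding box.

    pupil:   (x, y) pixel position of the detected pupil
    eye_box: (x, y, w, h) bounding box of the eye crop
    Returns (dx, dy) in roughly [-1, 1]; (0, 0) means looking straight.
    """
    ex, ey, w, h = eye_box
    cx, cy = ex + w / 2, ey + h / 2
    return ((pupil[0] - cx) / (w / 2), (pupil[1] - cy) / (h / 2))

def cartoon_pupil_pos(center, radius, gaze):
    """Place the cartoon pupil inside a cartoon eye of the given
    radius, displaced from its center along the measured gaze."""
    return (center[0] + gaze[0] * radius, center[1] + gaze[1] * radius)
```

Because the offset is normalized, the same gaze value works for cartoon eyes of any size: the cartoon pupil just moves proportionally within its own eye.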
To use your own filter images, run
python3 mark_eyes.py
This will show all pictures stored in the filter_imgs folder. Use your mouse to find the locations of the eyes and the mouth in each picture, record these x-y coordinates, and enter them at line 25 of codesss/5-cartoon-eyes.py:
Then, change the filter background image at line 82 to use your new virtual background!