Applying a lens to video nearly frame-by-frame with Snap Camera

I’ve been working on applying lenses to videos frame-by-frame with Snap Camera. I have a script that mostly works, and I’m sharing it in case anyone else finds it useful.

Lenses with particles, chain physics, or anything else that updates every frame won’t look right, because Snap Camera runs at 30 fps while the script feeds frames in much more slowly. Other than that, it produces fairly good results.
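The poster’s script isn’t shown here, but a pipeline like this might pace itself by pushing each source frame to a virtual webcam several times, giving the lens time to settle before the output is grabbed. This is a hypothetical sketch, not the poster’s actual script: `frames_to_hold`, `feed_video`, and the 0.2 s settle time are all assumptions, and the real pipeline requires `opencv-python`, `pyvirtualcam`, and Snap Camera reading the virtual webcam.

```python
import time

FEED_FPS = 30  # Snap Camera's internal rate, per the post above


def frames_to_hold(settle_seconds: float, fps: int = FEED_FPS) -> int:
    """How many copies of each source frame to send so the lens
    has time to settle before the processed output is captured."""
    return max(1, round(settle_seconds * fps))


def feed_video(path: str, settle_seconds: float = 0.2) -> None:
    """Hypothetical loop: push each frame to a virtual webcam that
    Snap Camera is reading. Requires opencv-python and pyvirtualcam."""
    import cv2           # imported lazily: only needed for the live pipeline
    import pyvirtualcam

    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        return
    height, width = frame.shape[:2]
    with pyvirtualcam.Camera(width=width, height=height, fps=FEED_FPS) as cam:
        while ok:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            # Hold the frame long enough for the lens to stabilize.
            for _ in range(frames_to_hold(settle_seconds)):
                cam.send(rgb)
                cam.sleep_until_next_frame()
            # ...grab Snap Camera's processed output here (e.g. from its
            # own virtual camera device), then advance to the next frame.
            ok, frame = cap.read()
    cap.release()
```

With a 0.2 s settle time, each frame is sent 6 times at 30 fps, which is why the effective feed rate is far below real time.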

Let me know if you run into any issues!

Woah, this is super cool! I’ve done something similar with Spark to capture face-tracking data.

I’m curious what your use case is. Why do the processing frame-by-frame? Couldn’t you use a virtual webcam to send the video to Snap Camera and just make a screen recording?

Whenever there’s a jump cut in the video, it takes Snap Camera a few frames to find the face again. Ideally you’d process all the raw video before editing, but if the edit is already done, this method helps reduce the number of frames without the lens applied.
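One way to anticipate where tracking will drop is to locate the jump cuts first, since those are the frames where Snap Camera has to reacquire the face. A minimal sketch, not from the thread: treat each frame as a flat list of grayscale pixel values and flag indices where consecutive frames differ sharply (the `find_cuts` helper and the threshold of 30 are illustrative assumptions).

```python
def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two same-size frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)


def find_cuts(frames, threshold=30.0):
    """Indices where consecutive frames differ sharply -- likely jump cuts,
    where the lens will need a few frames to find the face again."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i], frames[i - 1]) > threshold]
```

Knowing the cut positions up front, you could hold the first frame after each cut a little longer before grabbing output, so fewer frames end up without the lens applied.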