Invisible Body

I’m wondering if anyone knows how they project the background so smoothly onto the body segmentation?

Any tricks?



The simplest trick is to use a fake background :slight_smile:

The hard way is to paint the background in over time. You can remove the user from the camera feed and use that to paint in the missing parts. It’s pretty messy, but it works well enough. There’s a walkthrough in one of Dan Moller’s quick tips videos, around v93.
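The “paint the background over time” idea can be sketched in plain JavaScript over pixel arrays. This is only an illustration of the logic, not a Spark AR or Lens Studio API; `accumulateBackground`, `composite`, the 0/1 mask convention, and the flat array layout are all assumptions of mine:

```javascript
// Background inpainting over time: wherever the person mask is empty,
// remember the camera pixel; where the person stands, keep whatever we
// remembered earlier. Illustrative names, not a real engine API.
function accumulateBackground(frame, personMask, bg, bgFilled) {
  for (let i = 0; i < frame.length; i++) {
    if (personMask[i] === 0) {
      bg[i] = frame[i];   // background is visible here: update the plate
      bgFilled[i] = true; // mark this pixel as known
    }
    // where the mask is 1 we keep the previously remembered pixel
  }
}

// Compositing: show the remembered background over the person, falling
// back to the live frame where we have no data yet.
function composite(frame, personMask, bg, bgFilled) {
  return frame.map((px, i) =>
    personMask[i] === 1 && bgFilled[i] ? bg[i] : px
  );
}
```

As the user moves around, more of the plate gets filled, which is exactly why the effect is messy at first and improves the longer it runs.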


Yeah, I know this technique for Spark. But I guess in this Snapchat lens they use some ML, because if you try the lens it works even if you don’t show the full background first, and it fills things in piece by piece, like ML would. It seems impossible to create this ML for Lens Studio for now, until they add it to the library :slightly_frowning_face:

I don’t know this technique even for Spark :grin:
Where can I find some explanations, man?


Oh, I haven’t seen the lens studio invisibility! Seems like all of their really nice effects lately have been 100% ML. Hopefully they will release some new models soon, but maybe you could get one of these models to work in LS.


Ohhh, I guess it’s exactly what I need. Will dive in, thank you, masta :)


Sadly, they closed access to the beta =(

Hah) Lens Studio released erase ML in the new 4.1 version :partying_face:


Aha that’s perfect! Have you tried it out yet? Please keep us posted on your progress :slight_smile:


Just tried it, works well. Now I need to create a transition with an alpha cut :smiley:

OK, well…
I figured out how to move the alpha and cut out the texture, but I can’t understand how to connect my camera texture with the person-cutout alpha.
It turns the whole screen black.

Okay! It works now, but I have a little issue with the segmentation edges.


It actually doesn’t use ML; it’s all material based, so if anyone can decipher exactly what they are doing, the same technique might be portable to Spark AR. It looks like they are using some custom materials, but they aren’t compatible with the graph editor, so I’m not exactly sure what makes it all work.


Yeah! Exactly.
It’s all done with downsampling :slight_smile:

// initial buffer: merge the camera input with the segmentation mask
var downSamplePass = createPass(script.downSample, 2 * procSizeX, 2 * procSizeY);
script.downSample.mainPass.baseTex = script.inputTexture;
script.downSample.mainPass.maskTex = script.maskTexture;

// chain of erase passes, halving the resolution at each step
var renderTexture = [];
renderTexture[0] = createErasePass(script.step2, downSamplePass, procSizeX, procSizeY);
renderTexture[1] = createErasePass(script.step2.clone(), renderTexture[0], procSizeX / 2, procSizeY / 2);
renderTexture[2] = createErasePass(script.step2.clone(), renderTexture[1], procSizeX / 4, procSizeY / 4);
renderTexture[3] = createErasePass(script.step2.clone(), renderTexture[2], procSizeX / 8, procSizeY / 8);
renderTexture[4] = createErasePass(script.eraseall, renderTexture[3], procSizeX / 16, procSizeY / 16);
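For anyone trying to follow what a chain like this accomplishes: each downsample step averages neighboring pixels while ignoring the masked (erased) region, so holes shrink as resolution drops, and coarser levels can fill in what the mask removed. Here is a crude 1-D sketch of one such step in plain JavaScript, where `null` marks a masked pixel (my own toy model, not the actual render-pass code above):

```javascript
// One "pull" step of a downsampling inpainting pyramid: average each
// 2-pixel pair, skipping masked (null) pixels, so a half-resolution
// level still has valid color where the full-resolution level had holes.
function downsampleStep(level) {
  const out = [];
  for (let i = 0; i < level.length; i += 2) {
    const vals = [level[i], level[i + 1]].filter(v => v !== null && v !== undefined);
    out.push(vals.length ? vals.reduce((a, b) => a + b, 0) / vals.length : null);
  }
  return out;
}
```

Repeating this four or five times (as the pass chain above does with `procSizeX / 2` down to `procSizeX / 16`) eventually yields a level with no holes at all, which is plausibly what the final `eraseall` pass reads back from.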