Mouth Catching

Does anyone have any examples or recommendations on how to create an effect where you catch things in your mouth?
I started with https://medium.com/@Julien_He/game-in-spark-ar-part-1-1ddefdfa9838
but I'm not sure this is the best way to go about it,
so I abandoned that approach.
Now I have 3D objects pinned to the upper lip, lower lip, left corner, and right corner. I'm not sure what to do next, whether I should translate those positions to a camera canvas or something else. Any ideas?

I have made a few of these games and my approach is just to get the center of the mouth and use distance and Less Than to determine if an object was hit. It works well enough because the mouth is pretty small on the screen. No need to track all of those other points.
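In script form, that check could look roughly like this. Just a sketch: the object names ('mouthCenter' as a child of the face tracker, 'fallingObject') and the threshold are made up, and you can wire the same thing with Distance and Less Than patches instead.

const Scene = require('Scene');
const Reactive = require('Reactive');
const Patches = require('Patches');

(async () => {
    const [mouth, item] = await Promise.all([
        Scene.root.findFirst('mouthCenter'),
        Scene.root.findFirst('fallingObject'),
    ]);
    // Compare world positions so both points are in the same space
    const dist = Reactive.distance(mouth.worldTransform.position, item.worldTransform.position);
    // "Less Than" against a small threshold -- tune it to your scene's scale
    Patches.inputs.setBoolean('itemCaught', dist.lt(0.02));
})();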

There's another thread here that might help you with more complex hit-testing.

Maybe you are right and I am overthinking it with all the tracking points. It's a silly game, so it doesn't need to be 100 percent accurate.

@josh_beckwith is there a way I can send you my file? I feel like it is close, but I seem to be missing something.

Can you describe your approach and project setup? Maybe with some annotated screenshots. Might be hard to dissect your project without some context, but if you want to post it here, I can take a look.

I think the simple solution is to make 2 versions of each object: 1 moving, 1 caught.
The moving version moves around waiting for a collision to be detected by the “sensor/detector”, and the caught version is simply an object that always stays stuck to the mouth position.

So when a collision occurs, simply send a signal to turn off the visibility of the moving object and turn on the visibility of the caught version.
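A minimal sketch of that swap in script, assuming placeholder objects 'itemMoving' and 'itemCaught' and a boolean 'isHit' sent over from the collision check in the patch editor:

const Scene = require('Scene');
const Patches = require('Patches');

(async () => {
    const [moving, caught, isHit] = await Promise.all([
        Scene.root.findFirst('itemMoving'),
        Scene.root.findFirst('itemCaught'),
        Patches.outputs.getBoolean('isHit'), // collision signal from the patch editor
    ]);
    moving.hidden = isHit;       // hide the flying version once it's caught
    caught.hidden = isHit.not(); // show the version pinned to the mouth
})();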

Check out this video by RokkoEffe for a brief explanation of how collision works in Spark AR:

and go to his GitHub to read more detailed information about it.
There's also a project file in there for you to try and analyze.
Here is the GitHub link: https://github.com/RokkoEffe/Creating-2D-3D-colliders-with-Scripting-Spark-AR/blob/main/README.md

I hope you find it helpful.

So in my script I have

const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
const Patches = require('Patches');

Scene.root.findFirst('CenterCube').then(CenterCube => {
    const center = FaceTracking.face(0).mouth.center;
    CenterCube.transform.x = center.x;
    CenterCube.transform.y = center.y;
    CenterCube.transform.z = center.z;
    Patches.inputs.setScalar('CenterCubeX', CenterCube.worldTransform.x);
    Patches.inputs.setScalar('CenterCubeY', CenterCube.worldTransform.y);
});

Then in my patch, you can see I am comparing the XY of the falling object and the mouth. I believe what might be happening is that the XY values are changing based on how close (on the Z axis) the mouth is to the object.
Here is a video.

You can see in the patch editor that the value is the X axis of the mouth. The values when the mouth is at the far left or far right are not the same, and they change based on how close/far you are from the camera. How would you suggest I normalize this?
It works great when the objects fall down the center, because the center is always 0. But if the object is on the right (say at 3, just for an example), it might look like your mouth is right on it while the mouth could be registering as 2, 5, or some other number, because you are not in the same space on the Z axis as the falling object.

The approach I took was similar to @monogon's, where the objects are all projected to the screen or focal plane, so they can be hit-tested using 2D methods.

Hey @kdarius,
the issue you're having is one we were fighting with for some time as well. The repo of mine that Josh posted contains a function called projectFacePointsToFocalPlane(); that is the one you want. It'll take your face points from "face space" (say, mouth.center) and project them onto the focal plane as if with a ray from the camera eye.
I’ll write some comments to explain what each line of the function does:

const F = require('FaceTracking');
const S = require('Scene');

/** @param { PointSignal[] } points
 *  @return { PointSignal[] } */
function projectFacePointsToFocalPlane(points) {
    return points
        // When you grab the points from the face(0), they are in "face-space", relative to the face 
        // that contains them. So while the mouth and its features are moving all the time, their local position
        // and rotation actually remains (0,0,0). The points/features just inherit the movement from their parent,
        // the tracked face. This first line turns these relative points into "absolute" world coordinates,
        // that then actually contain the global xyz position of the mouth-features.
        .map(point => F.face(0).cameraTransform.applyToPoint(point))
        // This global point can then be projected onto the focal plane - BUT the result is a 2D-point
        // in "focal-plane space" D: That is to say the x and y coordinates are pixelcoordinates!
        .map(point => S.projectToScreen(point))
        // I didn't know how to work with that stuff, I just wanted xyz coordinates on the focal plane to
        // compare to my falling objects... thankfully, there is ANOTHER built-in function that will
        // turn that 2D-pixel-point into world coordinates again:
        .map(point => S.unprojectToFocalPlane(point))
        // For some reason that I was too tired to understand, the coordinates then need to be negated
        // to be in the correct spot ...just do it :P
        .map(point => point.neg())
}
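To give an idea of the call site (using the F and S requires from above; the object name and threshold are just placeholders, not something from the repo), the catch check could then look something like this:

const R = require('Reactive');

async function setupCatchSignal() {
    // 'fallingObject' is assumed to live in focal-plane space (e.g. a child of the
    // Focal Distance), so its local position is directly comparable
    const item = await S.root.findFirst('fallingObject');
    const [mouth] = projectFacePointsToFocalPlane([F.face(0).mouth.center]);
    // Both points are now in the same space, so a plain distance check works
    return R.distance(mouth, item.transform.position).lt(0.03); // tune the threshold
}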

Hope this helps you out a little… good luck!

Edit: Corrected the script imports to require ‘Scene’ (instead of ‘Patches’)… Sorry about that

Wow, thanks for sharing. That’s incredibly helpful! Bless you, Sensei @monogon :pray:t5: :pray:t5: :pray:t5:

So incredibly helpful.