Check if object is visible on screen

I’m trying to find a simple and reliable way to detect whether an object is visible on screen. I have tried the World to Canvas patches and Josh Beckwith’s World2Screen patch, but I can’t get either of them to work. I’ve done my research, tried a bunch of things, and come up empty.

My use case is that I’m only using device motion and not any tracker. Though I haven’t had success with that either. I just want to know if an object is visible on screen.

@josh_beckwith - can you provide an example of how to use your World2Screen patch? I don’t know how to use a shader signal and I can’t find any useful documentation on what it is or how to use it. I would like to do this all in the patch editor if possible.

To detect whether an object is visible on screen, you can use its bounding box: grab the value of every corner point, convert each global position transform from world space to canvas or screen space, and then compare it against the canvas or screen coordinates, depending on what you need. It’s the same principle as collision detection.
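To make that concrete, here is a minimal plain-JavaScript sketch (outside of Spark AR) of the corner-vs-screen comparison, assuming the box corners have already been converted to normalized screen coordinates (0..1 on both axes); all names are illustrative, not Spark AR APIs.

```javascript
// A point is on screen if its normalized coordinates fall inside 0..1.
function isPointOnScreen(p) {
  return p.x >= 0 && p.x <= 1 && p.y >= 0 && p.y <= 1
}

// The object counts as visible if any bounding-box corner lands inside the
// screen rect, same as a simple collision/overlap test.
function isBoxVisible(corners) {
  return corners.some(isPointOnScreen)
}

const corners = [
  { x: -0.2, y: 0.5 }, // off the left edge
  { x: 0.3, y: 0.5 },  // on screen
]
console.log(isBoxVisible(corners)) // true: at least one corner is on screen
```

In the patch editor you would build the same comparison with Less Than / Greater Than and Or patches per corner.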

Afaik the World2Screen patch generates a vec2, which is a 2D point in the generic UV shader format. I find this a bit buggy/confusing because the orbit from device motion causes the value to shift around; at least that’s my experience from using it in a project that had both a plane tracker and a face tracker. I used it to grab the 2D position of the face in screen space as a pivot/origin, but I ended up using scripted 2D face tracking instead.

If you want the detection output as a boolean, you can’t use a shader (generic) signal as the trigger; you would need a scalar or vector to compare in the patch editor.

That leaves two main options:
If you want to use shader signal processing, the detection method would be something like a mask, which you can then refine with step, smoothstep, and mix.

If you just want a boolean, you would need to create tracking points from the bounding box and apply the same principle as collision detection, except here the boundary is the projection of those points into 2D space, i.e. screen space or normalized screen space. To calculate the projection you can use a matrix. learn more here.
*This process is super tedious to do manually in the patch editor tho, so maybe you can try @Keeator’s transform patches in the Spark AR library to ease the pain a lil bit. :rofl:
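For reference, the matrix projection step can be sketched in plain JavaScript using a standard OpenGL-style perspective projection. The fov, aspect, and clip-plane values below are assumptions for illustration, not Spark AR specifics, and the point is assumed to already be in camera space with the camera looking down -z.

```javascript
// Apply a standard perspective projection to a camera-space point and
// return normalized device coordinates (-1..1 on each axis).
function projectToNdc(point, fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2)
  const clipX = (f / aspect) * point.x
  const clipY = f * point.y
  const clipZ = ((far + near) / (near - far)) * point.z +
                (2 * far * near) / (near - far)
  const clipW = -point.z // camera looks down -z, so w = -z
  return { x: clipX / clipW, y: clipY / clipW, z: clipZ / clipW }
}

// Convert NDC (-1..1) to normalized screen coordinates (0..1),
// flipping y so the origin is at the top-left.
function ndcToScreen(ndc) {
  return { x: (ndc.x + 1) / 2, y: (1 - ndc.y) / 2 }
}

const ndc = projectToNdc({ x: 0, y: 0, z: -5 }, Math.PI / 3, 16 / 9, 0.1, 100)
console.log(ndcToScreen(ndc)) // a point on the camera axis maps to the screen center
```

Once a point is in 0..1 screen space, the on-screen test is just a range comparison per axis.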


Thanks for your answer. It doesn’t really clarify the situation for me. The world2screen patch outputs a shader signal which I have no idea how to use. Was hoping for some clarification from @josh_beckwith on that.

I think much of the issue is one of those “crossing coordinate systems” problems. The object I’m trying to detect is completely outside of any trackers.

I have managed to get something working, though it’s a bit of a hack. I’m using the World2Canvas2 patch, which after some fiddling gives me a rough coordinate on screen. However, it gives a false positive when the object is behind the camera, so I’m doing some z-value detection to fix that.
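The z-value check can be sketched like this in plain JavaScript, assuming the object’s position is available in camera space with the camera looking down -z (a common convention; the exact convention in Spark AR may differ, in which case the sign flips):

```javascript
// A camera-space point is in front of the camera when z is negative
// (looking-down-negative-z convention).
function isInFrontOfCamera(cameraSpacePos) {
  return cameraSpacePos.z < 0
}

function isVisible(cameraSpacePos, screenPos) {
  const onScreen =
    screenPos.x >= 0 && screenPos.x <= 1 &&
    screenPos.y >= 0 && screenPos.y <= 1
  // A point behind the camera can still project inside the screen rect,
  // which is the false positive described above; the z test filters it out.
  return onScreen && isInFrontOfCamera(cameraSpacePos)
}

console.log(isVisible({ x: 0, y: 0, z: -2 }, { x: 0.5, y: 0.5 })) // true
console.log(isVisible({ x: 0, y: 0, z: 2 }, { x: 0.5, y: 0.5 }))  // false: behind the camera
```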

Would still love to get an example from Josh on how to properly use his World2Screen patch.

Oh, you bring up some good points. Thanks for sharing about your use-case because I was only testing this on the face before. I am using it to detect the screen position of a specific facial landmark.

FWIW you can convert it to “reactive” by using screen size divided by screen scale instead of render target size. I don’t think it will help in your case since this patch doesn’t take the camera rotation into consideration.

I have done some quick tests but I’m a little stuck because we can’t apply the camera projection in patches… so maybe the solution would be to make a block for this since blocks support scripts now. I’ll keep working on it and report back soon.

So, to get stuff in actual world space, I think we definitely need some kind of camera projection. Fortunately it looks like there’s a script method specifically for this. I hope they’ll expose it in patches soon, but for now we can do something like this with minimal script involved.

const Scene = require('Scene')
const {log} = require('Diagnostics')
const Patches = require('Patches')

const init = (async function () {
  const objectName = await Patches.outputs.getString('objectName')
  objectName.monitor({fireOnInitialValue: true}).take(1).subscribe(async (val) => {
    // log(val)
    const plane = await Scene.root.findFirst(val.newValue)
    const screenPos = Scene.projectToScreen(plane.worldTransform.position)
    Patches.inputs.setPoint2D('screenPos', screenPos)
  })
})()
This is just checking the center point, but you might want to do more than that if your object is large.

Here’s a demo project:
spark-screen-projection.arprojpkg (14.9 KB)


Thanks Josh! I appreciate you looking into it.

I tried out the project and it has the same issue that I was running into with the false positive when the object is directly behind the camera.

This is the patch-based solution I was using. I did some Z testing to fix the behind-the-camera issue. It’s not perfect, but it works well enough, and I was able to get it into a usable patch.

I still don’t understand how to use your World2Screen patch even if it’s not applicable in this case. You might want to consider adding an example. The input part is fine but how to use the output is a complete mystery.

onScreenWorks02.arprojpkg (73.9 KB)


Unfortunately my world2screen patch won’t actually work for objects positioned in world space. It’s a bummer because I was looking for a patch-only solution to this problem, but it’s just not possible yet. I hope they will make a patch for getting screen position of an object, or maybe expose some kind of script patch thing so we can make it more portable than the current way of using inputs and outputs in script.

I’m going to leave it up for now since I plan to update it when I find a better solution. For now, it will only work on face/hand/body tracked objects since that stuff is independent of world space.
