To detect whether an object is visible on screen, you can take its bounding box, grab the value of each corner point, convert each point's global position from world space to canvas or screen space, and then compare it against the canvas or screen coordinates, depending on what you need. It's the same principle as collision detection.
Afaik, the world2screen patch generates a vec2, which is a 2D point in the UV shader-generic format. I find this a bit buggy/confusing because orbiting from device motion causes the value to shift around; at least that's my experience from using it in a project that had both a plane tracker and a face tracker. I used it to grab the 2D position of the face in screen space as a pivot/origin, but I ended up using scripted 2D face tracking instead.
If you want the detection output as a boolean, you can't use a shader-generic signal as the trigger; you need a scalar or vector to compare in the patch editor.
That leaves us with two main options:
If you want to use shader signal processing, the detection method would be building something like a mask, which you can then process with step, smoothstep, mix, and so on.
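To make the mask idea concrete, here's a rough scalar sketch (plain JavaScript, not actual shader or patch code) of what step, smoothstep, and mix do to a value; in the patch editor these operate on signals per pixel, but the math is the same:

```javascript
// step: 0 below the edge, 1 at or above it — a hard mask.
function step(edge, x) {
  return x < edge ? 0 : 1;
}

// smoothstep: Hermite-interpolated ramp from 0 to 1 between two edges,
// matching the GLSL definition — a soft mask.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// mix: blend two values by the mask; mask = 0 keeps a, mask = 1 gives b.
function mix(a, b, mask) {
  return a * (1 - mask) + b * mask;
}
```

So the "detection" in this route is never a boolean — it's a 0-to-1 mask value you keep feeding into further shader math.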
If you just want a boolean, you need to create tracking points from the bounding box and apply the same principle as collision detection, except in this case the boundary is just the projection of those points into 2D space, i.e. screen space or normalized screen space. To calculate the projection you can use a matrix. Learn more here.
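As a rough illustration of that projection math (plain JavaScript, not the Spark AR API; the matrix layout and [0,1] screen range are assumptions for the sketch), you multiply each bounding-box corner by a view-projection matrix, do the perspective divide, and then the "collision" test is just checking whether any corner lands inside the normalized screen rectangle:

```javascript
// Project a 3D world point through a 4x4 row-major view-projection
// matrix into normalized screen space ([0,1] on both axes).
function projectToScreen(viewProj, point) {
  const [x, y, z] = point;
  // Multiply the column vector [x, y, z, 1] by the matrix.
  const clip = [0, 1, 2, 3].map(
    (row) =>
      viewProj[row * 4 + 0] * x +
      viewProj[row * 4 + 1] * y +
      viewProj[row * 4 + 2] * z +
      viewProj[row * 4 + 3]
  );
  const w = clip[3];
  // Perspective divide gives NDC in [-1,1]; remap to [0,1].
  return [(clip[0] / w + 1) / 2, (clip[1] / w + 1) / 2];
}

// The object counts as on screen if any projected corner falls
// inside the [0,1] x [0,1] screen rectangle — same idea as a
// point-in-box collision test.
function isOnScreen(viewProj, corners) {
  return corners.some((p) => {
    const [u, v] = projectToScreen(viewProj, p);
    return u >= 0 && u <= 1 && v >= 0 && v <= 1;
  });
}
```

In the patch editor you'd wire up the same comparisons with greater-than/less-than and And/Or patches per corner, which is exactly why it gets tedious.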
*This process is super tedious to do manually in the patch editor tho, so maybe try @Keeator's transform patches in the Spark AR library to ease the pain a lil bit.