Optical Flow

Hi, so I’ve been experimenting with the Optical Flow patch from the Sugar Asset Store.
The output texture from the patch is red and green, with zero blue and a black background. Compared to the normal flow project, there is an Add 0.5 patch before it goes through the raw OF sender, which makes the background grey and adds blue to the detected movement. If I change the Add to a Vector 4 with red 0.5, green 0.5, blue 1, and alpha 0, it looks normal, and if I set blue to 0 it becomes a flow map.

My questions:

  1. Is the output color from the optical flow already matched to the screen texture coordinates, where black is (0, 0) at the top left, red is (1, 0), green is (0, 1), and yellow is (1, 1)? Or is it simply outputting changes in luminosity value?

What I want to do is use the color output from the optical flow to drive the next processing step with a flowmap.
But the output I get is only significant for left and right movement, where moving right is green/cyan-ish and moving left is red/yellow-ish. There are no obvious changes when movement occurs on the y axis, and it also seems to mostly detect edges.

  2. Should I double the delay frame to get an even larger time difference so it covers more area? Because if I only change the offset, it doesn’t really behave the way I expected.

  3. If I want to get only the output texture vector (making the black background transparent), which is the correct way: add red and green and feed that to the alpha, or add red and green first, divide by 2 to average them, and feed that to the alpha?

They are matched in the sense that the coordinates are the same, but the actual flowmap output is relative.
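To make “relative” a bit more concrete, here’s a rough sketch of the encoding (plain Python just for illustration; the exact scale of the raw values is an assumption, but the 0.5 center matches the Add 0.5 patch you described):

```python
# Minimal sketch of how a flow vector ends up in the R and G channels.
# Assumes signed flow centered at 0, shifted by 0.5 so "no motion" lands
# on mid-grey -- this matches the Add 0.5 patch described above, but the
# exact scale of the raw values is an assumption.

def encode_flow(dx, dy):
    """Pack a signed flow vector (roughly -0.5..0.5) into an RG color."""
    return dx + 0.5, dy + 0.5

def decode_flow(r, g):
    """Recover the signed flow vector from an RG color."""
    return r - 0.5, g - 0.5

print(encode_flow(0.0, 0.0))   # (0.5, 0.5) -> mid-grey, no motion
print(encode_flow(0.3, -0.1))  # R well above 0.5: horizontal motion dominates
```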

You should be seeing vertical and horizontal movement. If your y offset is set to zero, then you won’t see vertical movement.

This gave me an idea to add a pan gesture to the demo project. This will let you see more clearly how the colors behave.

This optical flow algorithm is using some convolutions, similar to edge detection, so that is expected.

You can try that, but given it’s doing convolutions to get the result, I don’t think it will do what you want. Instead, try using a render pass to downsample the camera texture, and play with multiplying the texture size. That should let you fine-tune the behavior a bit more.
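If it helps to picture what the downsampling does, here’s a rough sketch of the idea (NumPy, not the actual render pass; the block size is just an illustrative parameter):

```python
import numpy as np

def downsample(texture, factor=4):
    """Average factor x factor blocks of pixels into one.

    Roughly what a lower-resolution render pass gives you: each output
    texel summarizes a larger patch of the camera image, so the optical
    flow sees coarser, less edge-dominated detail.
    """
    h, w = texture.shape[:2]
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    t = texture[:h, :w]
    return t.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# Example: a fake 720x1280 RGB camera frame downsampled 4x -> 180x320
frame = np.random.rand(720, 1280, 3)
print(downsample(frame, factor=4).shape)  # (180, 320, 3)
```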

The alpha channel isn’t used in the flowmap, but if you want to do something to determine alpha, you could get the distance of the RG color from (.5, .5), which would give you the flow vector length.
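In other words, it’s just a distance calculation. A quick sketch of the math (the (.5, .5) center is the same offset discussed above):

```python
import math

def flow_strength(r, g):
    """Length of the flow vector encoded in the RG channels.

    (0.5, 0.5) means "no motion", so the distance from that point works
    as an alpha: 0 where nothing moves, larger where motion is strong.
    """
    return math.hypot(r - 0.5, g - 0.5)

print(flow_strength(0.5, 0.5))  # 0.0 -> fully transparent background
print(flow_strength(0.9, 0.5))  # 0.4 -> strong horizontal motion
```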


Thank you so much for the answer!

They are matched in the sense that the coordinates are the same, but the actual flowmap output is relative.

I see… so I can’t just use the output RG to drive the flowmap directly. Maybe I’ll try smearing it with some blur or downsampling, plus some averaging combined with a range or smoothstep patch, to get a smoother feed to drive the flowmap distortion. The effect I want is an extended movement based on the vector produced by the optical flow. For now I only get a slow-motion interpolation; I still need to figure out how to add some exponential acceleration, mask it with the distance for the alpha, and then invert that mask so the exact moving pixels are untouched while the area outside the movement is what gets distorted. I think that would be a cool shader for a dance routine where the camera is static. But I still get some trouble around the person blending into the background, especially with drastic speed changes in the movement. I tried @DanMoller’s face fill-in using a delay frame, like in the Spark AR quick tips, but it still shows some weird, noticeable artifacts. Do you know how long the time difference is between the input and output of a delay frame? Is it just a one-frame difference? How many milliseconds is that?
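Something like this is what I have in mind for the mask (just a sketch of the math; the smoothstep thresholds are placeholders):

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def masks(r, g, lo=0.05, hi=0.3):
    """Motion mask from the flow strength, plus its inverse.

    The inverse is what I'd use so the moving pixels themselves stay
    untouched while the area around the movement gets distorted.
    """
    strength = ((r - 0.5) ** 2 + (g - 0.5) ** 2) ** 0.5
    motion = smoothstep(lo, hi, strength)
    return motion, 1.0 - motion

print(masks(0.5, 0.5))   # (0.0, 1.0) -> static background, inverse fully on
print(masks(0.85, 0.6))  # strong motion -> mask saturates to 1, inverse to 0
```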

You should be seeing vertical and horizontal movement. If your y offset is set to zero, then you won’t see vertical movement.

Cool, I’ll try this setup to monitor it. Thanks!

Instead, try using a render pass to downsample the camera texture, and play with multiplying the texture size. That should let you fine-tune the behavior a bit more.

Do you think the Screen Scale output from the Device patch should also play a role here? I still have no idea when to use it in an effect. Is it for displaying the same resolution on different devices for consistency, or something else? I don’t get it. lol

The alpha channel isn’t used in the flowmap, but if you want to do something to determine alpha, you could get the distance of the RG color from (.5, .5), which would give you the flow vector length.

How did I not think of that?! Omg, thanks!

It’s always one frame difference, but FPS can vary wildly depending on lighting or GPU load.
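So the delay in milliseconds depends entirely on the current frame rate. A quick back-of-the-envelope sketch:

```python
def delay_ms(fps):
    """One-frame delay expressed in milliseconds at a given frame rate."""
    return 1000.0 / fps

print(delay_ms(30))  # ~33.3 ms at a steady 30 fps
print(delay_ms(15))  # ~66.7 ms if the camera drops to 15 fps in low light
```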

As for the extended movement, you could use the NormalDistortion patch instead of FlowMap. In the Flowkit bundle, there are some examples of how to use OpticalFlow + NormalDistortion patches to get this kind of continuous movement.

It’s always one frame difference, but FPS can vary wildly depending on lighting or GPU load.

Ah… okay… got it.

you could use the NormalDistortion patch instead of FlowMap.

Yes, I just tried some experiments with it. It’s pretty cool. I’m still trying to figure out how the vector behaves so I can manipulate the movement by… I don’t know, maybe funneling it through a loop or something.

Can this optical flow vector drive some billboard particles?

That would be super cool, but I think you can’t use shader signals to drive that stuff.

In the optical flow patch, you use a luminance calculation to get the black and white. Can you tell me why you don’t use the Color Space (HSV) patch and swizzle the Value out of it, and use the Luminance patch instead? What’s the thought process behind that?

HSV is more expensive, and we really only care about the luminance value. Luminance is pretty cheap to compute from RGB - it’s just a dot product with a constant weight vector. If you want to see what’s involved in RGB/HSV conversion, check this answer:
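For reference, the luminance computation amounts to just this (a sketch; I’m assuming Rec. 709 weights here, and the actual patch may use slightly different coefficients):

```python
def luminance(r, g, b):
    """Luma as a single dot product with constant weights (Rec. 709 assumed)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance(1.0, 1.0, 1.0))  # 1.0 for pure white
print(luminance(0.0, 1.0, 0.0))  # ~0.72 -- green dominates perceived brightness
```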

Ahhh, I see… I thought HSV would be cheaper just because it’s a built-in patch. Thanks Josh!