How does Premiere Pro's Program Monitor zoom work? Are pixels scaled or cropped?

Question: How does zooming in and out inside Premiere Pro's Program Monitor work, and how does it affect the underlying pixels of your display monitor?

If your display resolution is 1360x768 and you open a sequence in Premiere Pro with the same resolution, then play that clip in the Program Monitor at 100% zoom in full screen (occupying every pixel of your display), then yes, each pixel of your display will reflect exactly one pixel of your video frame.

But suppose we've zoomed the video frame out to 50%, so that it now occupies only half of the display monitor (or, equivalently, we've shrunk the Program Monitor panel so the actual video frame occupies exactly half of the display's real estate). In this case, 680 horizontal pixels of the display must now show 1360 horizontal pixels of the video frame.

Just a reference screenshot:

Now remember, no display device can create more pixels than its manufacturing originally allowed. A display's pixels can effectively be made bigger (we can choose to show a smaller number of pixels on the same screen), but they can never be made smaller, and their count can never increase. Our video rendering engine, here Adobe Premiere Pro, however, can go both ways: it can scale its own output resolution down or up.

Let me shrink our example down to just two pixels, so that it becomes easy to explain and understand. Imagine your Premiere Pro sequence's horizontal resolution is just 2 pixels, and your display is also just 2 pixels wide. Now suppose we've zoomed out / reduced the Program Monitor to the right half of the display, i.e. only one display pixel is now assigned to the Program Monitor / video frame. The rendering engine then has no option but to do some heavy lifting: after (much :grinning:) processing, it decides it has to combine its 2 pixels into one.

You can think of 'combine' as either blending pixels together or discarding some of them; the end result is much the same. If one pixel was yellow and the other red, only one orange pixel will show (the average of the two colors).

Remember that a single pixel is too small to be seen individually by us unless there are hundreds of similar pixels doing the same thing, so zooming out our video frame isn't noticed by our eyes, because our eyes can see less and less detail as the picture gets smaller and smaller. Had the display pixels not been so small, our eyes could clearly tell that there used to be a red and a yellow pixel across the full monitor screen, and now there is only one orange pixel in half the screen.

Summary: When we zoom out our video display (say, reduce the size of the Program Monitor) to 50%, Premiere Pro combines pixels down into a smaller number of display pixels by blending them. Zooming out never crops the visual frame.
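The blending idea above can be sketched in a few lines of Python. This is only an illustration of the principle (Premiere Pro's actual resampling filters are more sophisticated), and the function name is my own invention, not anything from the application:

```python
# Minimal sketch of 2:1 horizontal downscaling by averaging neighbouring
# pixels -- the "combine two pixels into one" idea from the text.

def downscale_2to1(row):
    """Average each pair of (R, G, B) pixels in a row into one pixel."""
    out = []
    for i in range(0, len(row) - 1, 2):
        (r1, g1, b1), (r2, g2, b2) = row[i], row[i + 1]
        out.append(((r1 + r2) // 2, (g1 + g2) // 2, (b1 + b2) // 2))
    return out

# Our two-pixel example: one red pixel and one yellow pixel
# blend into a single orange pixel.
red, yellow = (255, 0, 0), (255, 255, 0)
print(downscale_2to1([red, yellow]))  # -> [(255, 127, 0)]
```

Note that no pixel data falls outside the frame here; information is merged, not cropped.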


Contrary to the above, what if we ZOOM IN?

Next: what if we zoom our frame to 200% in Premiere Pro (while the Program Monitor occupies the full display)? Or, equivalently, keep the video frame at 100% zoom but shrink the Program Monitor to half size; both amount to the same thing.

Back to our 2-pixel analogy: instead of zooming out to 50% of the monitor's real estate, we've zoomed in and have thus demanded double the number of pixels. In other words, in the earlier case, the video frame's 2 pixels were trying to fit on one display pixel (so they had to be blended into one); now one pixel of the video frame needs 2 pixels of the display monitor, and since our display device can't produce more pixels (not even by making its pixels smaller), nothing can happen except that half of the video frame's pixels fall outside the visible area. And thus cropping WOULD occur!!
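The zoom-in case can be sketched the same way. This is a simplified illustration, assuming nearest-neighbour pixel doubling (real scalers interpolate), with a function name I've made up for the example:

```python
# Minimal sketch of 200% zoom on a fixed-width display: each source pixel
# is duplicated, then everything that no longer fits in the viewport is
# cropped away -- the display cannot grow extra pixels.

def zoom_200(row, display_width):
    doubled = [p for pixel in row for p in (pixel, pixel)]  # 1 -> 2 pixels
    return doubled[:display_width]                          # crop to viewport

red, yellow = (255, 0, 0), (255, 255, 0)
# A 2-pixel frame shown at 200% on a 2-pixel display:
# only the red pixel survives; the yellow one is cropped out.
print(zoom_200([red, yellow], display_width=2))  # -> [(255, 0, 0), (255, 0, 0)]
```

So zooming out blends pixels without losing any part of the frame, while zooming in necessarily pushes part of the frame out of view.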


Next (and last) is the transparency problem:

This seems to be an app bug. When I turn on the Transparency Grid from the Program Monitor's wrench menu, everything is fine while the video is set to 'Fit'. But if I resize the frame by even 1%, the transparency grid is shown over a much larger area than the 1% I've reduced. So it's not clear whether this transparency grid is meant to show only in those areas where the sequence frame exists but contains no picture, or also around the sequence frame, within the Program Monitor's bounds.

Also, if I reduce my video frame's zoom to 10%, then apart from a small picture in the center, the whole space of the Program Monitor (i.e. not only within the bounds of the sequence frame, but also beyond it, within the Program Monitor) is shown as the black-and-white checkerboard of the Transparency Grid. See screenshot:

But when I zoom to 25% or 50% and there is still some blank area left around the video frame, that Transparency Grid disappears. See screenshot:

To clear up confusion on the above topic, one thing must be understood:
Resolution and pixel density, though related, are different things.

Resolution is an absolute pixel count: for example, a 1200 by 900 image contains 1,080,000 pixels (about 1 megapixel). Pixel density is the number of pixels per linear inch (PPI): for example, most printers (can) print at 300 pixels per inch.

So two webcams can both record HD video at 1280 by 720 (the same number of pixels per video frame) while being advertised with very different megapixel counts, e.g. the Logitech C270 at 2 MP versus the C525 at 8 MP; those figures refer to still-photo (sensor) capture, not the video frame.

So sometimes the terms get confused with each other.

An 8-megapixel camera produces an image of about 2448 x 3264 pixels (pixels, not PPI, which is a density unit). At a maximum print quality of 300 PPI, the print dimensions come to roughly 8" x 11" (2448/300 and 3264/300).
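The arithmetic above can be checked in a couple of lines. A quick worked example in Python (the variable names are just for illustration):

```python
# Worked arithmetic: an ~8 MP image (2448 x 3264 pixels) printed at 300 PPI.

width_px, height_px = 2448, 3264
ppi = 300  # print density in pixels per linear inch

megapixels = width_px * height_px / 1_000_000
print(f"{megapixels:.2f} MP")                              # -> 7.99 MP
print(f"{width_px / ppi:.2f} in x {height_px / ppi:.2f} in")  # -> 8.16 in x 10.88 in
```

Note how the same pixel count yields a smaller print at higher density: at 600 PPI the print would be only about 4" x 5.4".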