YUV12 AVI Capture
  • YUV12 AVI Capture

    Hi, we have an application which currently captures video from an ATI card using the YUV12 Color Space/Compression. We are looking at switching to Matrox cards to do this and the Matrox Tech. Support folks tell me that I need a hack in order to capture using YUV12. Does anyone know how to do this or where I can find more information on how?

    Thanks,
    Jesse

  • #2
    Moved to Desktop Video
    "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

    "Always do good. It will gratify some and astonish the rest." ~Mark Twain



    • #3
      Marvel cards capture uncompressed 4:2:2 YUY2 (similar to UYVY) format, and they do it nicely.
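      For anyone unfamiliar with the format, 4:2:2 YUY2 packs two pixels into four bytes, with the pair sharing one U/V chroma sample. A minimal sketch (the function name is mine):

      ```python
      def unpack_yuy2(macropixel):
          """Split one 4-byte YUY2 macropixel (Y0 U Y1 V) into two
          (Y, U, V) pixels; 4:2:2 means the pair shares a U/V sample,
          so YUY2 averages 2 bytes per pixel."""
          y0, u, y1, v = macropixel
          return (y0, u, v), (y1, u, v)

      # two neutral-grey pixels with luma 16 and 32
      pair = unpack_yuy2(b"\x10\x80\x20\x80")
      ```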

      The rubs:

      1. The current driver set (1.54) doesn't do YUY2 right. You have to degrade to 1.52.

      2. You need a patch program to activate this ability. This patch is called the "Flying Dutchman's YUY2 enabling utility". The word is that this may only be necessary in the short term; future drivers will include YUY2 functionality from the get-go. You can find the YUY2 enabler at the bottom of this page:

      http://idiots-guide.matroxusers.com/DTVW_downloads.htm

      The good parts:

      1. You lose little in degrading to 1.52 from 1.54. The difference is minimal other than YUY2 works in 1.52.

      2. The resultant HuffYUV *.avi's encode into very nice MPEGs, especially when encoded with the freeware TMPGEnc 12a MPEG-1/2 encoder. Get TMPGEnc here:

      http://www.jamsoft.com/tmpgenc/

      3. The Marvels are DualHead cards, like most of the G400 and G450 display cards. This means you can activate a mode called "DualHead/DVDMax", which allows you to output almost any *.avi, *.mpg or *.rm file to the video output for recording.

      Using Matrox DualHead cards I've exported to tape everything from the normal MJPeg to DV, MPEG-1, MPEG-2, MPEG-4, Cinepak, Indeo and most software codecs I've been able to lay my hands on.

      The main limitation seems to be with Matrox MJPeg, MPEG-4 (and DivX) which require a frame size divisible by 16. Example: a 704 width works, 720 doesn't.

      4. The upcoming Marvel eTV MPEG-2 card will likely not need the enabler.

      5. Even the G400/450 DualHead display cards without capture capability can perform the DualHead magic.

      YUY2 works best when used in conjunction with two other programs: the lossless HuffYUV compressor and the AVI_IO capture program.

      HuffYUV cuts full-frame data rates to about 10 MB/s. Get it here:

      http://www.math.berkeley.edu/~benrg/huffyuv.html

      The AVI_IO capture program allows for the capture of long sequences using multiple 2 GB (or 4 GB) files. Get it here:

      http://www.nct.ch/multimedia/avi_io/index.html
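      As a sanity check on the figures above, here's the back-of-envelope arithmetic (the 704x480 NTSC frame size and the assumption that HuffYUV roughly halves the raw rate are mine):

      ```python
      # uncompressed 4:2:2 YUY2 stores 2 bytes per pixel
      width, height, fps = 704, 480, 29.97     # assumed full-frame NTSC capture
      raw_bps = width * height * 2 * fps       # bytes per second, uncompressed
      huffyuv_bps = raw_bps / 2                # HuffYUV losslessly halves it, roughly
      secs_per_2gb = 2 * 1024**3 / huffyuv_bps # length of one 2 GB AVI_IO segment

      print(raw_bps / 1e6)      # ~20 MB/s raw
      print(huffyuv_bps / 1e6)  # ~10 MB/s, matching the figure above
      print(secs_per_2gb / 60)  # ~3.5 minutes per 2 GB file
      ```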

      Dr. Mordrid


      [This message has been edited by Dr Mordrid (edited 11 January 2001).]



      • #4
        YUY2 is currently much worse than MJPEG on Matrox cards.

        http://forums.murc.ws/ubb/Forum2/HTML/004744.html


        http://www.geocities.com/mgu222/yuy2.jpg

        http://www.geocities.com/mgu222/mjpeg.jpg



        • #5
          Hi,

          Originally posted by Dr Mordrid:
          "The main limitation seems to be with Matrox MJPeg, MPEG-4 (and DivX) which require a frame size divisible by 16. Example: a 704 width works, 720 doesn't."
          How do you mean, 720 doesn't work...? 720/16 should be 45...

          TIA
          Roman



          • #6
            My mistake. That should have read "evenly divisible by 32". This means the result has to be an integer. 704 gives 22. 720 gives 22.5.
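            In code form, the check is just a modulus (the function name is mine):

            ```python
            def marvel_width_ok(width):
                """Frame width must divide evenly by 32 for these codecs."""
                return width % 32 == 0

            print(marvel_width_ok(704))  # True:  704 / 32 = 22
            print(marvel_width_ok(720))  # False: 720 / 32 = 22.5
            ```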

            Dr. Mordrid



            • #7
              This reply is specifically for mgu2:

              If indeed YUY2 were not "properly implemented" you wouldn't get decent MJPeg either. Why? Because YUY2 is the original signal used to create MJPeg in the first place. A quality increase after encoding the signal to a lossy format would seem to violate the laws of entropy.

              Perhaps part of the difference you're seeing is a shortening of the contrast scale due to quantization of the data in the encoding process.

              I also noticed in previous threads that most of those complaining of problems use PAL. I use NTSC and see nothing of the kind. Perhaps another part of the problem is that in PAL, on every other scanline, some of the color information is phase-inverted 180 degrees?

              (PAL = Phase Alternation Line)

              This phase alternation is done in PAL to help the eye in cancelling color distortions in alternating scan lines brought on by the fading of those scanlines that are drawn first. This fading can cause a color change.

              Viewing this phase altered information in a bitmap, which is non-interlaced, may give a false reading as to its viewability on an interlaced display.

              Dr. Mordrid



              [This message has been edited by Dr Mordrid (edited 12 January 2001).]



              • #8



                Originally posted by Dr Mordrid:
                "If indeed YUY2 were not 'properly implemented' you wouldn't get decent MJPeg either."

                YUY2 is native for the hardware, but it is the software that is responsible for controlling the card's settings. The Marvel G400TV defocuses (blurs) the on-screen overlay image. As suggested by other members of this forum, it serves as deinterlacing. I don't know if the NTSC picture is deinterlaced by the card in this way, but PAL is.

                The drivers can enable/disable the defocusing effect. When the card is not capturing, the overlay image is defocused. When you start an MJPEG capture, the overlay image gets much sharper - the capture drivers turn off the deinterlacing. That's why the resulting MJPEG AVI is not deinterlaced.
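                If the defocus really is deinterlacing, one plausible guess at what the driver does is a "blend" pass that averages each scanline with the one below it. This sketch is only an illustration of that effect, not Matrox's actual code:

                ```python
                import numpy as np

                def blend_deinterlace(frame):
                    """Average each scanline with the next one down - a simple
                    'blend' deinterlacer. Interline flicker goes away, but so
                    does vertical detail, which shows up as a blur."""
                    work = frame.astype(np.uint16)           # avoid uint8 overflow
                    work[:-1] = (work[:-1] + work[1:]) // 2  # RHS evaluated before assignment
                    return work.astype(np.uint8)

                # worst-case interlace flicker: alternating black/white lines
                frame = np.array([[0], [255], [0], [255]], dtype=np.uint8)
                blended = blend_deinterlace(frame)  # lines collapse toward mid-grey
                ```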

                Using VirtualDub I was able to capture MJPEG showing the same defocusing as a YUY2 capture. I caused the overlay window to redraw its contents during capture (by moving another window over it) and the overlay image was deinterlaced again. Then I replayed the captured MJPEG AVI, and exactly at the point where the overlay got blurred on screen, the captured video did too. This is actually a bug in the drivers. PCVCR is not affected, though.

                The problem with YUY2 is that the defocus is never disabled before capture. See the link above for the result. MJPEG capture is properly implemented in that it disables the deinterlacing; YUY2 capture is not, as it leaves it enabled.

                Originally posted by Dr Mordrid:
                "Perhaps part of the difference you're seeing is a shortening of the contrast scale due to quantization of the data in the encoding process."

                Which encoding process do you mean?

                Originally posted by Dr Mordrid:
                "I noticed that most of those complaining of problems use PAL. I use NTSC and see nothing of the kind. Perhaps the problem is that in PAL every other scanline some of the color information is phase inverted 180 degrees?"

                If the problem is PAL, then how would you explain that MJPEG is not showing the blur effect? I am sure phase inverting has nothing to do with this problem.



                • #9
                  Doc, a small correction:
                  PAL is not used to aid the eye to cancel colour distortions. In PAL, the colour signal of each line is stored in turn in a delay line with a time constant exactly equal to one scan line. What is presented on screen is the mean value of the signal coming in and the signal coming out of this delay line. All this is done even before the colour signal is decoded, so the distortion cancellation is entirely done electronically before the picture is presented to the eye. Therefore it has nothing to do with the blurring that PAL users are complaining about. Another idea: whereas the US has traded picture resolution (525 scan lines versus PAL's 625) in favour of a higher refresh rate (30 fps versus 25), Europe has done exactly the opposite, so maybe the deinterlacing algorithm is better suited to NTSC than to PAL (and/or SECAM, of course).
                  Michka
                  I am watching the TV and it's worthless.
                  If I switch it on it is even worse.



                  • #10

                    Michel Carleer said:

                    "Doc, a small correction:
                    PAL is not used to aid the eye to cancel colour distortions."

                    Let's test that out...

                    Mark Sauerwald,
                    National Semiconductor Corporation
                    Interface Products Group:

                    "In PAL, the Phase of the V component is reversed every other line. The result of this phase reversal is that any color subcarrier phase errors create complementary color errors on alternating lines of the picture, giving more precise hue."

                    St. Andrews School of Physics and Astronomy:

                    "The PAL system — as the name Phase Alternate Line implies — alternates the phase relationship from one line to the next...

                    The effect of this alternation is to ensure that any colour error produced by oscillator drift has opposite effects on adjacent lines. If one line has too much red at its right hand end the next will have too little, then too much, etc."

                    Sounds like a technique for mitigating color distortions to me. It's also why PAL doesn't really need a hue control.

                    As for my statement about the shortening of the contrast scale after encoding: I'm talking about when the YUY2 signal is encoded into MJPeg.

                    MJPeg and other DCT codecs quantize the colors when each frame is encoded, effectively reducing the number of available hues. When you reduce the number of available hues one side effect is that the contrast scale shortens in the output image (short scale = higher contrast).

                    This can, and often does, give the appearance of more sharpness in encoded images and therefore a perception of higher image quality. This is because the human eye likes short scale contrast, within reason.

                    The literature on JPEG (and MJPeg) compression words it this way:

                    "JPEG also has a contrast enhancing effect (when) subtle variations in shades of colour are lost, as the DCT transform coefficients whilst not zero (no change at all) are very near zero, and will be treated as zero for the purposes of compression. The result is that the received picture will have less subtle colour and image transitions, giving a subjectively crisper and sharper image."

                    Upshot: an image with a shorter contrast scale will not only appear sharper to the eye but will also make its lower-contrast source image appear blurred by comparison.
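                    The quantization effect described above is easy to see numerically: near-zero DCT coefficients (the subtle shade variations) snap to exactly zero, while the large terms survive. The step value below is an arbitrary illustration, not a real JPEG table entry:

                    ```python
                    import numpy as np

                    step = 16.0                                       # example quantizer step
                    coeffs = np.array([100.0, 7.0, -5.0, 3.0, -2.0])  # one big DC term + small AC terms

                    # JPEG-style quantize then dequantize: divide, round, multiply back
                    dequantized = np.round(coeffs / step) * step
                    print(dequantized)  # the small AC terms are now exactly zero
                    ```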

                    Apply that to this discussion as you see fit.

                    Dr. Mordrid


                    [This message has been edited by Dr Mordrid (edited 13 January 2001).]



                    • #11
                      Sorry Doc,
                      I suppose I wasn't clear enough: the way you phrased your explanation of phase alternation could be interpreted as a way to FOOL THE EYE into thinking it is seeing the correct colour. Remember what you wrote:
                      "is done in PAL to help the eye in cancelling color distortions ".
                      This is not correct. The eye has nothing to do with it. Proper colour recovery is all done in the receiving electronics and the recalculated correct (supposedly) colour is shown on screen.
                      In other words, if you look at the screen close enough, you won't see one line with, say, too much red and then the next one with too little. They will both have the (supposedly once again) correct amount of red.
                      This being said, PAL is indeed a way to correct the phase errors suffered by the colour subcarrier signal during transmission. And it does its job pretty well, except in conditions where the phase rotation changes faster than the line frequency.
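                      Michka's delay-line description can be checked with a little complex arithmetic: treat chroma as a phasor U + jV, invert V on alternate lines, give both lines the same subcarrier phase error, and average. The hue comes back exactly, at the cost of a slight saturation loss. This is only a sketch of the principle, not a broadcast-accurate model:

                      ```python
                      import cmath

                      def pal_decode_pair(u, v, phase_err):
                          """Recover chroma from one normal line and one V-inverted
                          line, both received with the same subcarrier phase error
                          (in radians)."""
                          rot = cmath.exp(1j * phase_err)
                          even_rx = complex(u, v) * rot      # normal line: hue rotated by +err
                          odd_rx = complex(u, -v) * rot      # PAL line: V transmitted inverted
                          odd_decoded = odd_rx.conjugate()   # receiver re-inverts V: hue now -err
                          return (even_rx + odd_decoded) / 2 # errors cancel; magnitude * cos(err)

                      chroma = pal_decode_pair(0.3, 0.4, 0.2)
                      # phase (hue) equals that of 0.3 + 0.4j; magnitude shrinks by cos(0.2)
                      ```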
                      Michka


                      [This message has been edited by Michel Carleer (edited 13 January 2001).]
                      I am watching the TV and it's worthless.
                      If I switch it on it is even worse.



                      • #12



                        Originally posted by Dr Mordrid:
                        "As for my statement about the shortening of the contrast scale after encoding: I'm talking about when the YUY2 signal is encoded into MJPeg."

                        Well, if you noticed, the YUY2 sample
                        http://www.geocities.com/mgu222/yuy2.jpg
                        is encoded in JPEG. Even if you applied greater compression you would just get a more "blocky" looking picture. There is no way JPEG could sharpen the image to look like
                        http://www.geocities.com/mgu222/mjpeg.jpg

                        As for PAL, the blurred text in both pictures is white. The Y component of the PAL signal is not phase-inverted in any way.

                        I can enable/disable the blur at will by starting/stopping an MJPEG capture and see the picture on screen change between blurred and sharp states. The overlay image is displayed before any encoding takes place. The blur shown on screen is identical to the blur found in the YUY2 captured frames.



                        • #13
                          Hey guys, this is all interesting stuff but forgive my ignorance... I don't see what it has to do with my wanting to capture YUV12.

                          What am I missing here?

                          Jesse



                          • #14
                            What you're missing isn't much. We often stray into technical discussions that leave new users wondering what's going on.

                            Dr. Mordrid



                            • #15
                              Cool, now... is there actually a reply to my question in there somewhere?

                              Thanks,
                              Jesse
