Toshiba's 3CCD mini HD cam head


  • Toshiba's 3CCD mini HD cam head



    Link....

    Toshiba PDF

    World’s smallest hi-def camera head from Toshiba

    April 7, 2008: Toshiba Imaging has launched the world’s smallest HD camera head, measuring just 1.6 inches and weighing only 2.3 oz. It is envisaged that the mini IK-HD1 will be used for scientific imaging and diagnostics, specialty broadcasting, homeland security, and industrial video and inspection applications.

    The ultra-compact control unit makes the IK-HD1 ideal for broadcasting and other imaging tasks where space is limited, as it is not much bigger than an ice cube. The camera has 1080i output, and Toshiba’s HD 3CCD prism block technology helps deliver sharp, clear, true-color imagery. High-definition 1920 (H) x 1080 (V) resolution at 30 frames per second makes the camera an attractive option for reality TV and sports broadcasters, and it has already been used for filming on New American Gladiators.

    The camera comes standard with a C-mount lens flange, an RS232C serial interface, and multiple outputs for HD-SDI (SMPTE 292M), analog RGB, or Y/Pb/Pr. Accessories for the HDTV camera system include a 4mm or 15mm lens and camera cables in 3-, 6-, 10- or 30-meter lengths.
    Dr. Mordrid
    ----------------------------
    An elephant is a mouse built to government specifications.

    I carry a gun because I can't throw a rock 1,250 fps

  • #2
    Maybe a noob question, but why make this and only do i rather than p?
    FT.



    • #3
      Originally posted by Fat Tone View Post
      Maybe a noob question, but why make this and only do i rather than p?

      Most likely a cost issue. You only need 540 lines of vertical resolution to do 1080i, sampling at 1/60 of a second. It is much more difficult (expensive) to manufacture an imager with 1080 lines of vertical resolution than one with 540.
      - Mark

      Core 2 Duo E6400 o/c 3.2GHz - Asus P5B Deluxe - 2048MB Corsair Twinx 6400C4 - ATI AIW X1900 - Seagate 7200.10 SATA 320GB primary - Western Digital SE16 SATA 320GB secondary - Samsung SATA Lightscribe DVD/CDRW- Midiland 4100 Speakers - Presonus Firepod - Dell FP2001 20" LCD - Windows XP Home



      • #4
        Doesn't that mean you have to move the sensor/optics between frames to register a different part of the scene?
        FT.



        • #5
          Originally posted by Fat Tone View Post
          Doesn't that mean you have to move the sensor/optics between frames to register a different part of the scene?

          No. Imagine the imager "snaps" two shots 1/60 of a second apart in time. The resolution of each shot is, say, 540x1920.

          The first image, let's call it field A, is mapped to the odd lines by the hardware/software in the camera; that is, its 540 scan lines are numbered 1, 3, 5, 7, ...

          The second image, field B, is mapped to the even lines: 2, 4, 6, ...

          Now the camera interlaces, or combines, all of those scan lines into one interlaced image or frame of video. If there was movement between the two "shots" then you will see the zig-zags we often see in interlaced video. If there was no movement then you will see a perfect 1080x1920 image.

          If you looked at either field by itself, the image would be squashed to half height, with everything shorter and fatter than it should be.

          It's kind of hard to explain without pictures. Using interlacing allows for much lower vertical resolution in the imager while still approximating higher resolution. It works very well for static images, but resolution starts to break down for dynamic video. There are people who claim interlaced video is actually better for video with motion, since the higher field rate may show smoother, yet less resolved, motion. I'm not going to get into that, except to say that for that to happen you must be viewing your interlaced video on an interlaced display, showing fields at 60Hz, and not just slapping fields together into frames and displaying them at 30Hz.
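
          Here's a minimal sketch of that weave in Python (an 8x8 toy frame standing in for 1920x1080; nothing camera-specific, just the bookkeeping):

          ```python
          # Weave two half-height fields, snapped 1/60 s apart, into one frame.
          import numpy as np

          H, W = 8, 8                         # toy stand-ins for 1080 and 1920
          field_a = np.full((H // 2, W), 1)   # "odd" field, snapped at t
          field_b = np.full((H // 2, W), 2)   # "even" field, snapped at t + 1/60 s

          frame = np.empty((H, W), dtype=int)
          frame[0::2] = field_a               # field A becomes scan lines 1, 3, 5, ...
          frame[1::2] = field_b               # field B becomes scan lines 2, 4, 6, ...

          print(frame[:4, 0])                 # -> [1 2 1 2]; when the scene moves
                                              #    between fields, this alternation
                                              #    is the comb/zig-zag artifact
          ```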

          If you've read my previous posts on this then you know that I'm not a fan of interlacing. That's not to say it doesn't have its uses; I should say I'm not a fan of it for a video camera that I want to buy. I don't want to deal with fields and deinterlacing in my video. That being said, the interlacing artifacts with HD video are much less noticeable than with SD video, since you are dealing with twice as many lines, 1080 vs 540. There are no interlaced 720-line video cameras.
          - Mark

          Core 2 Duo E6400 o/c 3.2GHz - Asus P5B Deluxe - 2048MB Corsair Twinx 6400C4 - ATI AIW X1900 - Seagate 7200.10 SATA 320GB primary - Western Digital SE16 SATA 320GB secondary - Samsung SATA Lightscribe DVD/CDRW- Midiland 4100 Speakers - Presonus Firepod - Dell FP2001 20" LCD - Windows XP Home



          • #6
            Hi Mark

            I'm familiar with the traditional meaning of interlacing; my degree and background are in electronics. However, I don't see how that description can result in a perfect 1080 image.
            Surely all it is doing is line-doubling. If the scene is perfectly still and the camera takes two identical images

            A
            B
            C
            D etc


            and

            A
            B
            C
            D etc

            Then the reconstruction as you describe it would be

            A
            A
            B
            B
            C
            C
            D
            D etc

            I think the key word here is approximation. I know, for example, that in the PAL broadcast system the vertical colour resolution is halved by the phase alternation PAL uses to keep colour consistency, yet this matters very little to the perceived image quality.
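
            A quick toy check of that in Python, with rows 0-3 standing in for A-D:

            ```python
            # If both fields are the same 540-line capture, the weave just
            # doubles each line rather than adding vertical detail.
            import numpy as np

            lines = np.arange(4)        # A=0, B=1, C=2, D=3 off the sensor
            frame = np.empty(8, dtype=int)
            frame[0::2] = lines         # "field A" on the odd scan lines
            frame[1::2] = lines         # identical "field B" on the even lines

            print(frame)                # -> [0 0 1 1 2 2 3 3], i.e. A A B B C C D D
            ```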
            FT.



            • #7
              You are absolutely right. I don't know what the heck I was thinking. That's what I get for typing while I'm on the phone....

              One reason many cameras use interlaced formats is legacy concerns. When video signals are stored on tape formats like HDV or DV they are field based (each helical scan lays down a field), so for the sake of compatibility many manufacturers have so far been unwilling to move to fully progressive storage, even though with hard disk and SS storage there is no reason to stay with the old field-based storage. This is why you will see progressive streams from cameras like the Canons using a 24PsF mode: progressive video comes off the sensor, then 2:3:3:2 pulldown is applied for storage in a 60i wrapper, and *hopefully* reverse pulldown in the NLE recovers a perfect 24p stream.

              Note that 2:3:3:2 pulldown is an editing/storage format, while 2:3 pulldown is a playing/viewing pulldown. 2:3:3:2 has only one mixed frame of fields, which can easily be discarded.
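
              To make the two cadences concrete, here's a toy sketch (my own illustration, not any camera's actual firmware):

              ```python
              # Map 4 film frames (A-D) onto 10 fields, then pair fields
              # into interlaced video frames.
              def pulldown(cadence, frames="ABCD"):
                  # each film frame contributes 2 or 3 fields in turn...
                  fields = [f for f, n in zip(frames, cadence) for _ in range(n)]
                  # ...and consecutive fields are paired into video frames
                  return [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]

              print(pulldown((2, 3, 2, 3)))  # 2:3     -> ['AA','BB','BC','CD','DD']
              print(pulldown((2, 3, 3, 2)))  # 2:3:3:2 -> ['AA','BB','BC','CC','DD']
              # Classic 2:3 leaves two mixed frames (BC, CD); with 2:3:3:2 only
              # the single BC frame must be dropped to recover A, B, C, D intact.
              ```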

              So legacy storage in the 60i wrapper might be one reason. The other may have to do with readout rates from the imager being easier to design/implement with interlacing. I'm not sure about that, though.

              I was confusing the imaging technology with the pixel binning the Panasonic HVX3000 uses. That camera has a natively progressive 1920x1080 sensor. When recording in 1080i mode it can pixel bin in the vertical axis to double sensitivity, according to Panasonic. This would imply that only 540 pixels of vertical resolution ARE needed for 1080i video?

              But that would mean my initial description is somehow correct?

              Now I'm confused too!
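
              If I had to guess at how binning could still yield two distinct fields, it might be something like this; pure speculation on my part, not Panasonic's documented scheme:

              ```python
              # Speculative: bin vertically 2:1, alternating the pairing offset
              # between fields so each field samples different row positions.
              import numpy as np

              rows = np.arange(8.0)                # toy stand-in for 1080 sensor rows

              field_a = rows[0::2] + rows[1::2]    # at t:      bin rows (0,1), (2,3), ...
              field_b = rows[1:-1:2] + rows[2::2]  # at t+1/60: bin rows (1,2), (3,4), ...
                                                   # (one line short at the frame edge)

              # Each binned line sums two photosites (the doubled sensitivity),
              # yet the fields are centred on different vertical positions, so
              # weaving them is closer to real 1080i than to plain line doubling.
              print(field_a, field_b)              # [ 1. 5. 9. 13.] [ 3. 7. 11.]
              ```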
              - Mark

              Core 2 Duo E6400 o/c 3.2GHz - Asus P5B Deluxe - 2048MB Corsair Twinx 6400C4 - ATI AIW X1900 - Seagate 7200.10 SATA 320GB primary - Western Digital SE16 SATA 320GB secondary - Samsung SATA Lightscribe DVD/CDRW- Midiland 4100 Speakers - Presonus Firepod - Dell FP2001 20" LCD - Windows XP Home



              • #8
                I would say that you are forgetting the fundamental reason behind interlacing: the reduction of bandwidth. AFAIK, Baird was the first to suggest it for the BBC's first experiments using AM medium-wave broadcasting in 1932, limited to about 10 kHz on each sideband. If my memory is correct, he used a Nipkow disk with 28 + 28 holes, plus 1 hole outside the picture area for sync. I don't remember the frame rate, but it was fairly low, maybe 12 fps. I never saw such TV, but I understand it was NOT good: a flickering orange image little bigger than a postage stamp. When RCA (Zworykin) in the US and EMI in the UK simultaneously (1935) came up with the iconoscope/Emitron, unknown to each other, the former chose 525 lines and the latter 405 lines, and they both interlaced so as to keep the bandwidth down to ~3.5 MHz for broadcasting in the 45-60 MHz region, using reduced upper sidebands, as this was about the limit of the technology at the time.

                Thus interlacing was born to keep the bandwidth down to manageable limits and this is still the case today, only it's called bitrate in digital terminology.

                As for low resolution in chroma, I've been aware since 1950, when I started working on colour TV, that it does not matter two hoots that the chroma is well out, provided the luma is fine.

                I borrowed my prof's prewar Leica (with a fantastic Elmar f2.8 lens) and persuaded Ilford to give me two rolls of film, one Ilfochrome and the other a panchromatic reversible mono. (Neither was available on the market at that time.) I set up a bowl of fruit as the subject and made three slides: one an ordinary colour photo and one an ordinary mono photo, both with the same lighting (2 photofloods at 45 deg, one closer than t'other). The third was a colour image with frontal lighting, overexposed by one stop, so that it was essentially chroma with little luma. If I projected the mono and then superimposed the chroma from a second projector, it looked pretty much identical to the ordinary colour photo, perhaps slightly washed out in colour but not unacceptably so. If I then turned the focus ring on the chroma projector, nothing happened: the pic on the screen still looked the same until it was wildly out of focus.

                I concluded the chroma receptors in the retina are much more sparsely spaced than the luma ones at the centre of vision. This was important, and my whole dissertation used this fact to argue that colour TV did not need much more bandwidth than mono TV, with practical experiments to support it. At the time, I thought it was original work, but I found out, just before crunch time, that two others had published similar conclusions. That is why we do not need 4:4:4 colour space. In fact, 4:1:1 is adequate, as NTSC DV shows.
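
                The digital equivalent of that experiment looks roughly like this (a simple BT.601-style split with sample-and-hold 4:1:1; real codecs filter the chroma rather than merely repeating samples):

                ```python
                # Split a toy image into luma and chroma, then throw away
                # three of every four chroma samples horizontally (4:1:1).
                import numpy as np

                rgb = np.random.rand(8, 16, 3)              # toy 8x16 image
                y  = rgb @ np.array([0.299, 0.587, 0.114])  # luma, full resolution
                cb = (rgb[..., 2] - y) / 1.772              # blue-difference chroma
                cr = (rgb[..., 0] - y) / 1.402              # red-difference chroma

                cb411 = np.repeat(cb[:, ::4], 4, axis=1)    # one chroma sample per
                cr411 = np.repeat(cr[:, ::4], 4, axis=1)    # four luma samples

                # Luma is untouched; only the colour detail got coarser, which is
                # the degradation the defocused chroma projector was simulating.
                print("max chroma error:", float(np.abs(cb - cb411).max()))
                ```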
                Brian (the devil incarnate)



                • #9
                  For a final output image this is very true, but the rub comes when you try to edit or add effects to a chroma-reduced image like that provided by NTSC DV. That can really cause the bed bugs to bite.
                  Dr. Mordrid
                  ----------------------------
                  An elephant is a mouse built to government specifications.

                  I carry a gun because I can't throw a rock 1,250 fps



                  • #10
                    Brian,

                    You certainly have a lot of experience in this area!

                    I think I understand the reason for interlacing in TV: I remember reading that the fundamental problem was that the electronics of the day could not avoid flicker without using interlacing.

                    What I don't understand is how an interlaced digital signal would have less bandwidth than a progressive digital signal with the same parameters.

                    For example,
                    1080p, 1920x1080 at 30 frames per second
                    and
                    1080i, 1920x1080 at 60 fields per second
                    look to me to contain the same amount of information per unit time.
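
                    A quick sanity check of the raw numbers (assuming 8-bit 4:2:0 sampling, i.e. 12 bits per pixel, for both formats):

                    ```python
                    # Compare raw bitrates: 30 full frames/s vs 60 half-height fields/s.
                    width, height, bits_per_px = 1920, 1080, 12

                    p30 = width * height * bits_per_px * 30          # 1080p30
                    i60 = width * (height // 2) * bits_per_px * 60   # 1080i60

                    print(p30 == i60)            # True: identical raw bitrate
                    print(p30 / 1e6, "Mbit/s")   # 746.496
                    ```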

                    I have also been told by various engineers that MPEG-2 encoding does not do well with interlacing and for equal quality there is approximately a 30% increase in bandwidth.

                    Seems like whenever we go down this interlacing rabbit hole we end up with more questions than answers!
                    - Mark

                    Core 2 Duo E6400 o/c 3.2GHz - Asus P5B Deluxe - 2048MB Corsair Twinx 6400C4 - ATI AIW X1900 - Seagate 7200.10 SATA 320GB primary - Western Digital SE16 SATA 320GB secondary - Samsung SATA Lightscribe DVD/CDRW- Midiland 4100 Speakers - Presonus Firepod - Dell FP2001 20" LCD - Windows XP Home



                    • #11
                      Simply because 30 fps on a 1080p system would be less pleasant to watch than 1080i: for comfort, the eye requires the image to be renewed every 1/50th of a second or less. This is why a 24 fps film projector has a three-bladed shutter, to fool the eye into thinking it is seeing 72 images per second. To get the full advantage out of 1080p (or any progressive system), you should have the same frame rate as the field rate in a 1080i system.

                      Sorry, badly explained. Imagine Federer v Nadal. One of them hits the ball so that a TV camera has the ball moving horizontally across its image. With 60-field interlacing, each "half ball" overlaps the next "half ball", so that, to a normal viewer, the movement appears fluid, albeit blurred. With progressive at 30 fps, each frame may show an image of the ball that does not overlap the previous or the next; this is less agreeable to the eye. With progressive at 60 fps, the overlap would reduce the disagreeable effect.
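
                      Some rough numbers to illustrate, all assumed purely for the sake of the example:

                      ```python
                      # A 6.7 cm ball in a 10 m wide shot, 1920 pixels across,
                      # crossing the image at 3 m/s.
                      ball_px    = 0.067 / 10 * 1920   # ball diameter on screen, ~13 px
                      speed_px_s = 3 / 10 * 1920       # 576 px/s across the image

                      for rate in (30, 60):            # image updates per second
                          step = speed_px_s / rate     # movement between successive images
                          print(f"{rate} Hz: {step:.0f} px per step, overlaps: {step < ball_px}")
                      # 30 Hz: 19 px per step, overlaps: False
                      # 60 Hz: 10 px per step, overlaps: True
                      ```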
                      Brian (the devil incarnate)



                      • #12
                        It is interesting that major studios, or should I say producers, are still opting to shoot 24p. When shooting digital they can shoot (basically) any frame rate they want. While researching my latest HD project I got a chance to speak with engineers at some of the big camera manufacturers, such as RED and Panavision. I asked them if we'll be seeing major motion pictures start to come out at frame rates higher than 24, since these high-end cameras can do those rates. The pretty much universal reply, in my admittedly small sample, was no.

                        The general feeling is that 24fps is a frame rate slower than reality, and thus gives the video a surreal feeling that is part of the motion picture experience for the viewer. It allows the viewer to escape from reality into the world of the film for an hour or two. As you know, testing was done in the early '80s with 65mm film at 60fps and audiences hated the result. I can't remember the name of the guy that did this at the moment. It looks like we'll perhaps see documentaries, nature shows, and other programming that communicates reality at higher frame rates, but 24p will possibly be around for a long time due to the creative reasons for using it.
                        - Mark

                        Core 2 Duo E6400 o/c 3.2GHz - Asus P5B Deluxe - 2048MB Corsair Twinx 6400C4 - ATI AIW X1900 - Seagate 7200.10 SATA 320GB primary - Western Digital SE16 SATA 320GB secondary - Samsung SATA Lightscribe DVD/CDRW- Midiland 4100 Speakers - Presonus Firepod - Dell FP2001 20" LCD - Windows XP Home

