<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Panasonic on Fran Kuo | R&amp;D Leadership</title>
    <link>https://chenfu.ai/en/tags/panasonic/</link>
    <description>Recent content in Panasonic on Fran Kuo | R&amp;D Leadership</description>
    <generator>Hugo -- 0.157.0</generator>
    <language>en</language>
    <lastBuildDate>Mon, 25 Feb 2013 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://chenfu.ai/en/tags/panasonic/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Panasonic&#39;s 3D CMOS Image Sensor</title>
      <link>https://chenfu.ai/en/posts/panasonics-3d-cmos-image-sensor/</link>
      <pubDate>Mon, 25 Feb 2013 00:00:00 +0000</pubDate>
      <guid>https://chenfu.ai/en/posts/panasonics-3d-cmos-image-sensor/</guid>
      <description>&lt;p&gt;3D imagery and video have seen massive growth in recent years, driven largely by the movie and gaming industries. This has spurred a wave of 3D-capable devices like cameras and TVs. In the early stages, capturing 3D content typically required bulky &amp;ldquo;two-camera&amp;rdquo; setups where two separate lenses and sensors were bonded together, with the resulting photos processed into a single 3D file.&lt;/p&gt;
&lt;p&gt;Companies like GoPro followed this modular path, requiring users to buy two cameras and a dedicated housing to achieve 3D effects. Some higher-end devices, like Sony&amp;rsquo;s 3D camcorders, began embedding two lenses and sensors into a single integrated chassis.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
