Saturday, March 17, 2018

ST Announces 4m Range ToF Sensor

The VL53L1X ToF sensor extends the detection range of ST's FlightSense technology to four meters, bringing high-accuracy, low-power distance measurement and proximity detection to an even wider variety of applications. The fully integrated VL53L1X measures only 4.9mm x 2.5mm x 1.56mm, allowing use even where space is very limited. It is also pin-compatible with its predecessor, the VL53L0X, allowing easy upgrading of existing products. The compact package contains the laser driver and emitter as well as the SPAD array light receiver that gives ST’s FlightSense sensors their ranging speed and reliability. Furthermore, the 940nm emitter, operating in the non-visible spectrum, eliminates distracting light emission and can be hidden behind a protective window without impairing measurement performance.
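As a reminder of the underlying principle, a direct ToF sensor converts the measured photon round-trip time into distance. A minimal sketch in Python (illustrative numbers only, not ST's firmware math):

```python
# Direct time-of-flight ranging: distance is half the photon round-trip
# time multiplied by the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured photon round-trip time to target distance in meters."""
    return C * round_trip_s / 2.0

# A 4 m target implies a ~26.7 ns round trip, which is why SPAD receivers
# with sub-nanosecond timing resolution are needed.
print(tof_distance_m(26.7e-9))  # ~4.0 m
```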

ST publishes quite a detailed datasheet with the performance data:

GM 4th Gen Self-Driving Car Roof Module

GM has started production of a roof rack for its fourth generation Cruise AV featuring 5 Velodyne LiDARs and, at least, 7 cameras:

Friday, March 16, 2018

MEMSDrive OIS Technology Presentation

MEMSDrive kindly sent me a presentation on its OIS technology:

Pictures from Image Sensors Europe 2018

A few assorted pictures from the Image Sensors Europe conference currently underway in London, UK.

From Ron's (Vision Markets) Twitter:

Image Sensors Twitter:

From X-Fab presentation:

Thursday, March 15, 2018

Rumor: Mantis Vision 3D Camera to Appear in Samsung Galaxy S10 Phone

Korean newspaper The Investor quotes local media reports that Mantis Vision and camera module maker Namuga are developing a 3-D sensing camera for Samsung's next-generation Galaxy S smartphone, tentatively called the Galaxy S10. Namuga is also providing 3-D sensing modules for Intel’s RealSense AR cameras.

TechInsights: Samsung Galaxy S9+ Cameras Cost 12.7% of BOM

TechInsights' Samsung Galaxy S9+ cost table estimates the cameras' cost at $48 out of a $379 total. The previous-generation S8 camera was estimated at $25.50, or 7.8% of the total BOM.
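The figures are TechInsights'; the arithmetic below is just a cross-check of the headline share:

```python
# Camera share of the Galaxy S9+ bill of materials, per TechInsights' numbers.
s9_camera_cost, s9_total_bom = 48.00, 379.00
camera_share_pct = 100.0 * s9_camera_cost / s9_total_bom
print(round(camera_share_pct, 1))  # 12.7
```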

TechInsights publishes a cost comparison of this year's and last year's flagship phones. The Galaxy S9+ appears to have the largest investment in camera and imaging hardware:

ICFO Graphene Image Sensors

ICFO food analyzer demo at MWC in Barcelona in February 2018:

UV graphene sensors:

Samsung CIS Production Capacity to Beat Sony

ETNews reports that Samsung is to convert its 300mm DRAM 13 line in Hwasung to CMOS sensor production. Since last year, the company has also been working to convert its DRAM 11 line in Hwasung into an image sensor production line (named the S4 line). Conversion of the S4 line will be done by the end of this year. Right after that, Samsung is going to convert its 300mm 13 line, which can produce about 100,000 DRAM wafers per month. Because image sensors have more manufacturing steps than DRAM, production capacity is said to drop by about 50% after conversion.

“At the end of last year, production capacity of image sensors from the 300mm plant based on wafer input was about 45,000 units,” said an ETNews source. “Because production capacities of image sensors that will be added from the 11 line and 13 line will exceed 70,000 units per month, Samsung Electronics will have a production capacity of 120,000 units of image sensors after these conversion processes are over.”

Sony's CIS capacity is about 100,000 wafers per month. Even when Sony's capacity extension plans are accounted for, Samsung should be able to match or exceed Sony's production capacity.

While increasing the production capacity of its 300mm CIS lines for 13MP and larger sensors, Samsung plans to slowly decrease the output of its 200mm line located in Giheung.

Samsung's capacity expansion demonstrates its confidence in the market. Samsung believes that its image sensor capabilities approach those of Sony. The company now has more than 10 outside CIS customers.

Wednesday, March 14, 2018

ULIS Video

ULIS publishes a promotional video about its capabilities and products:

Vivo Announces SuperHDR

One of the largest smartphone makers in China, Vivo, announces its AI-powered Super HDR, which follows the same principles as regular multi-frame HDR but merges more frames.

The Super HDR’s DR is said to reach up to 14 EV. With a single press of the shutter, Super HDR captures up to 12 frames, significantly more than former HDR schemes. AI algorithms are used to adapt to different scenarios. The moment the shutter is pressed, the AI will detect the scene to determine the ideal exposure strategy and accordingly select the frames for merging.
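For readers unfamiliar with multi-frame HDR, a minimal merging sketch in Python follows. The mid-tone weighting function and frame parameters are illustrative assumptions; Vivo's actual AI-driven frame selection is not public:

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Weighted merge of differently exposed frames into one HDR estimate.

    frames: list of float images normalized to [0, 1]
    exposures: matching list of relative exposure times
    """
    acc = np.zeros_like(frames[0])
    wsum = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = np.exp(-4.0 * (img - 0.5) ** 2)  # favor well-exposed mid-tones
        acc += w * (img / t)                 # divide by exposure -> linear radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Merging more frames, as Super HDR does, extends this scheme by covering a wider span of exposure times so that both deep shadows and bright highlights land in some frame's well-exposed range.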

Alex Feng, SVP at Vivo says “Vivo continues to push the boundaries and provide the ultimate camera experience for consumers. This goes beyond just adding powerful functions, but to developing innovations that our users can immediately enjoy. Today’s showcase of Super HDR is an example of our continued commitment to mobile photography, to enable our consumers to shoot professional quality photos at the touch of a button. Using intelligent AI, Super HDR can capture more detail under any conditions, without additional demands on the user.”

Tuesday, March 13, 2018

Prophesee Expands Event Driven Concept to LiDARs

EETimes publishes an article on event-driven image sensors such as Prophesee's (formerly Chronocam) Asynchronous Time-Based Image Sensor (ATIS) chip.

The company's CEO, Luca Verre, disclosed to EETimes that Prophesee is exploring the possibility that its event-driven approach can apply to other sensors such as LiDARs and radars. Verre asked: “What if we can steer lidars to capture data focused on only what’s relevant and just the region of interest?” If it can be done, it will not only speed up data acquisition but also reduce the data volume that needs processing.

Prophesee is currently “evaluating” the idea, said Verre, cautioning that it will take “some months” before the company can reach a conclusion. But he added, “We’re quite confident that we can pull it off.”

Asked about Prophesee’s new idea — to extend the event-driven approach to other sensors — Yole Développement’s analyst Cambou told us, “Merging the advantages of an event-based camera with a lidar (which offers the “Z” information) is extremely interesting.”

Noting that problems with traditional lidars are tied to limited resolution — “relatively less than typical high-end industrial cameras” — and the speed of analysis, Cambou said that the event-driven approach can help improve lidars, “especially for fast and close-by events, such as a pedestrian appearing in front of an autonomous car.”
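The data-reduction argument behind event-driven sensing can be illustrated with a toy model (a simplification, not Prophesee's actual ATIS pixel design): a pixel reports an event only when its log-intensity change crosses a contrast threshold, so static parts of the scene generate no data at all:

```python
import numpy as np

def events(prev, curr, thresh=0.15):
    """Return (row, col, polarity) for pixels whose log-intensity change
    exceeds the contrast threshold -- the essence of an event camera."""
    d = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    ys, xs = np.nonzero(np.abs(d) > thresh)
    return [(int(y), int(x), 1 if d[y, x] > 0 else -1) for y, x in zip(ys, xs)]

prev = np.full((4, 4), 100.0)
curr = prev.copy()
curr[1, 2] = 150.0                 # a single pixel brightens
print(events(prev, curr))          # [(1, 2, 1)] -- one event, not 16 pixel values
```

Steering a LiDAR by such events would concentrate laser shots on the few locations that changed, which is the data-acquisition saving Verre describes.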

Samsung Galaxy S9+ Cameras

TechInsights publishes an article on Galaxy S9+ reverse engineering including its 4 cameras - a dual rear camera, a front camera and an iris recognition sensor:

"We are excited to analyze Samsung's new 3-stack ISOCELL Fast 2L3 and we'll be publishing updates as our labs capture more camera details.

Samsung is not first to market with variable mechanical apertures or 3-layer stacked image sensors; however, the integration of both elements in the S-series is a bold move to differentiate from other flagship phones.

The S9 wide-angle camera system, which integrates a 2 Gbit LPDDR4 DRAM, offers similar slo-mo video functionality with 0.2 s of video expanded to 6 s of slo-mo captured at 960 fps. Samsung promotes the memory buffer as beneficial to still photography mode where higher speed readout can reduce motion artifacts and facilitate multi-frame noise reduction.”
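The slo-mo numbers are self-consistent: 0.2 s captured at 960 fps fills the DRAM buffer with 192 frames, which stretched over 6 s implies a 32 fps playback rate:

```python
# Cross-check of the Galaxy S9+ super-slow-motion figures.
capture_fps, capture_s = 960, 0.2
buffered_frames = capture_fps * capture_s   # frames held in the 2 Gbit DRAM
playback_s = 6.0
playback_fps = buffered_frames / playback_s
print(buffered_frames, playback_fps)        # 192 frames played back at 32 fps
```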

iFixit's reverse engineering report includes nice pictures showing the changing aperture on the wide-angle rear camera:

Monday, March 12, 2018

3DInCites Awards

Phil Garrou's IFTLE 374 reviews 3DInCites Award winners. Two of them are related to image sensors:

Device of the Year: OS05A20 Image Sensor with Nyxel Technology, OmniVision:

"OmniVision’s OS05A20 Image Sensor was nominated for being the first of its image sensors to be built with Nyxel ™ Technology. This approach to near-infrared (NIR) imaging combines thick-silicon pixel architectures with careful management of wafer surface texture to improve quantum efficiency (QE), and extended deep trench isolation to help retain modulation transfer function without affecting the sensor’s dark current. As a result, this image sensor sees better and farther under low- and no-light conditions than previous generations."

Engineer of the Year: Gill Fountain, Xperi:

"Known as Xperi’s guru on Ziptronix’ technologies, Gill was nominated for his most recent contribution, expanding the chemical mechanical polishing process window for Cu damascene from relatively fine features. His team developed a process that delivers uniform, smooth Cu/Ta/Oxide surfaces with a controlled Cu recess with very small variance across wafer sizes. He has been an integral part of Xperi’s technical team and his work allows the electronics industry to apply direct bond interconnect (DBI) for high-volume wafer-to-wafer applications."

Interview with Steven Sasson

IEEE publishes an interview with Steven J. Sasson who invented the first digital camera in 1975 while working at Eastman Kodak, in Rochester, N.Y. A notable Q&A:

Q: What tech advance in recent years has surprised you the most?

A: Cameras are everywhere! I would have never anticipated how ubiquitous the imaging of everything would become. Photos have become the universal form of casual conversation. And cameras are present in almost every type of environment, including in our own homes. I grossly underestimated how little time it would take for us to get here.

Beer Identification with Hamamatsu Micro-spectrometer

Hamamatsu publishes a beer identification article showing it as an application for its micro-spectrometers:

Forza Silicon Applies Machine Learning to Production Yield Improvement

BusinessWire: Forza Silicon CTO, Daniel Van Blerkom, is to present a paper titled “Accelerated Image Sensor Production Using Machine Learning and Data Analytics” at Image Sensors Europe 2018 in London on March 15, 2018.

Machine learning has been applied to sensor data sets to identify and measure critical yield-limiting defects. “Image sensors offer the unique opportunity to image the yield limiting defect mechanisms in silicon,” said Daniel Van Blerkom. “By applying machine learning to image sensor test procedures we’re able to quickly and easily classify sensor defects, identify root-cause and feed back the results to improve the process, manufacturing flow and sensor design for our clients.”

ON Semi Announces X-Class CMOS Image Sensor Platform

BusinessWire: ON Semiconductor announces the X-Class image sensor platform, which allows a single camera design to support multiple sensors across the platform. The first devices in the new platform are the 12MP XGS 12000 and the 4K/UHD-resolution XGS 8000 sensors for machine vision, intelligent transportation systems, and broadcast imaging applications.

The X-Class image sensor platform supports multiple CMOS pixel architectures within the same image sensor frame. This allows a single camera design to support multiple product resolutions and different pixel functionality, such as larger pixels that trade resolution at a given optical format for higher imaging sensitivity, designs optimized for low noise operation to increase DR, and more. By supporting these different pixel architectures through a common high bandwidth, low power interface, camera manufacturers can leverage existing parts inventory and accelerate time to market for new camera designs.

The initial devices in the X-Class family, the XGS 12000 and XGS 8000, are based on the first pixel architecture to be deployed in this platform – a 3.2 µm global shutter CMOS pixel. The XGS 12000 12 MP device is planned to be available in two speed grades – one that fully utilizes 10GigE interfaces by providing full resolution speeds up to 90 fps, and a lower price version providing 27 fps at full resolution that aligns with the bandwidth available from USB 3.0 computer interfaces. The XGS 8000 is also planned to be available in two speed grades (130 and 75 fps) for broadcast applications.
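A rough link-budget check shows why the two speed grades line up with 10GigE and USB 3.0. The calculation assumes roughly 10 bits per pixel of raw output and ignores protocol overhead, neither of which ON Semi specifies here:

```python
def link_gbps(mpix: float, fps: float, bits_per_px: int = 10) -> float:
    """Raw sensor output data rate in Gb/s (protocol overhead ignored)."""
    return mpix * 1e6 * fps * bits_per_px / 1e9

print(link_gbps(12, 90))  # 10.8 -> fully utilizes a 10GigE link
print(link_gbps(12, 27))  # 3.24 -> fits within USB 3.0's ~5 Gb/s
```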

“As the needs of industrial imaging applications such as machine vision inspection and industrial automation continue to advance, the design and performance of the image sensors targeting this growing market must continue to evolve,” said Herb Erhardt, VP and GM, Industrial Solutions Division, Image Sensor Group at ON Semiconductor. “With the X-Class platform and devices based on the new XGS pixel, end users have access to the performance and imaging capabilities they need for these applications, while camera manufacturers have the flexibility they require to develop next-generation camera designs for their customers both today and in the future.”

The XGS 12000 and XGS 8000 will begin sampling in 2Q2018, with production availability scheduled for 3Q2018. Additional devices based on the 3.2 µm XGS pixel, as well as products based on other pixel architectures, are planned for the X-Class family in the future.

ON Semiconductor also announces a fully AEC-Q100 qualified version of its circa-2016 2.1 MP CMOS sensor, the AR0237, for the OEM-fitted dash cam or before-market in-car DVR market.

The AR0237AT is a cost-optimized, automotive qualified version of the same sensor that can operate across the full automotive operating temperature range of -40°C to +105°C and deliver the right performance at the right price point. The low-light performance of the AR0237AT is improved when it is coupled to a Clarity+ enabled DVR processor. ON Semiconductor’s Clarity+ technology employs filtering to optimize the SNR of automotive imaging solutions, which can deliver an additional 2X increase in light capture.

Sunday, March 11, 2018

Adafruit Publishes ST FlightSense Performance Data

Adafruit publishes a datasheet for its distance sensor based on ST's SPAD-based ToF chip, the VL53L0X.

Update: Upon a closer look, the official ST VL53L0X datasheet has all these tables with the performance data.

Saturday, March 10, 2018

ToF Sensor Used for 3D Photometric Imaging

MDPI Sensors publishes a paper from a group of Japanese universities, "The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor" by Takuya Yoda, Hajime Nagahara, Rin-ichiro Taniguchi, Keiichiro Kagawa, Keita Yasutomi, and Shoji Kawahito. The paper proposes using a 4-tap ToF sensor developed at Shizuoka University for 3D imaging in a different way:

"The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes."
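The classical photometric stereo step the paper builds on can be sketched in a few lines of Python: given k ≥ 3 images under known light directions, the scaled normal at each pixel is recovered by least squares. This is the textbook method on synthetic inputs, not the paper's multi-tap implementation:

```python
import numpy as np

def surface_normals(images, lights):
    """images: (k, h, w) intensities; lights: (k, 3) unit light directions.
    Solves I = L @ (albedo * n) per pixel in the least-squares sense."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-8)                 # unit normals
    return n.reshape(3, h, w), albedo.reshape(h, w)
```

The multi-tap sensor's contribution is capturing the k differently-lit input images at nearly the same instant, which is what makes this solve valid for moving scenes.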

Friday, March 09, 2018

TechInsights Publishes Samsung 0.9um Tetracell Pixel Analysis

TechInsights publishes a reverse engineering report of the Samsung 0.9um Tetracell pixel sensor:

There are many reasons we are excited about the Samsung S5K2X7SP 0.9µm Image Sensor, including Samsung’s claims about it:
  • “Slim 2X7 with Tetracell technology” (0.9µm, 24MP)
  • The first 0.9µm-generation pixels in mass production
  • Targeting both front and rear cameras
As well as its noted technology features:

Improved ISOCELL technology with deeper deep trench isolation (DTI)
  • Reduced color crosstalk
  • Expands the full-well capacity to hold more light information
  • At 0.9µm, allows a 24MP image sensor to fit in a thinner camera module
Tetracell Technology
  • Merges four neighboring pixels to work as one for better light sensitivity in low-light situations
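The Tetracell merging step amounts to 2x2 binning. A simplified monochrome sketch (ignoring the color filter array handling of the real sensor):

```python
import numpy as np

def bin2x2(img):
    """Sum each 2x2 block of pixels into one output pixel."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.ones((4, 4))       # uniform illumination
print(bin2x2(img))          # each binned pixel collects 4x the signal
```

This is why a 24MP Tetracell sensor behaves like a 6MP sensor with 4x the light per pixel in low light.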

Broadcom Enters ToF Sensing Business

Broadcom-Avago unveils its first ToF sensor product. The AFBR-S50MV85G has quite a nice spec:

"The AFBR-S50 is Broadcom's multipixel distance and motion measurement sensor based on the optical time-of-flight principle. It supports up to 3000 frames per second with up to 16 illuminated pixels.

This sensor has been developed with a special focus on industrial sensing applications and gesture sensing with the need for high speed, small size and very low power consumption. Through its best-in-class ambient light suppression of up to 200k Lux, its use in outside environments is no problem.

The technology has been optimized to measure distances up to 10m (black target) with an accuracy of < 1 percent on a wide variety of surfaces. It works equally well on white, black, colored and metallic reflective surfaces.

The module has an integrated 850nm laser light source and uses a single 5V voltage supply; the data is transferred via a digital SPI interface.”

Broadcom presented the new ToF sensor in February 2018 at Embedded World trade fair in Nuremberg, Germany:

Thursday, March 08, 2018

NIT Presents Affordable HDR Sensor for Machine Vision Applications

New Imaging Technologies presents an affordable 12-bit CMOS sensor, the HV2061, with native HDR capability (140dB intra-scene and inter-scene), offering three operating modes: rolling, global, and differential (subtraction of two frames in-pixel). Its performance is supposed to allow users to capture local illumination in real time for many computer vision applications such as biometrics, gesture detection, sense & avoid, etc.:
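For scale, 140dB of dynamic range corresponds to a ten-million-to-one intensity ratio within a single scene:

```python
# DR_dB = 20 * log10(max_signal / noise_floor), so the linear ratio is:
dr_db = 140.0
ratio = 10.0 ** (dr_db / 20.0)
print(f"{ratio:.0e}")  # 1e+07 -> a 10,000,000:1 intra-scene contrast
```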

Chronocam Presentation at AutoSens 2017

AutoSens publishes Chronocam-Prophesee CTO and Co-Founder Christoph Posch presentation "Event-based vs conventional cameras for ADAS and autonomous driving applications:"

Image Sensor Performance Improvements over Time

Multianalytics Blog publishes nice videos of image sensor performance progress over the years:

Aeye Adaptive Scanning LiDAR Patents Granted

BusinessWire: AEye announces it has been awarded foundational patents for its solid-state MEMS-based LiDAR. These include 71 claims covering AEye inventions, ranging from an approach to dynamic scan and shot patterns for LiDAR transmissions, to the ability to control and shape each laser pulse, to methods for interrogating each voxel within a point cloud. These inventions are said to contribute significant performance improvements to the iDAR perception system: improving range by 400%, increasing speed by 20x, and boosting object classification accuracy while reducing laser interference.

"AEye's groundbreaking iDAR system is the first to use intelligent data capture to enable rapid perception and path planning,” said Elliot Garbus, former VP of Transportation Solutions at Intel. “Most LiDAR systems function at only 10Hz, while the human visual cortex processes at 27Hz. Autonomous vehicles need perception systems that work at least as fast as humans. iDAR is the first and only perception system to consistently deliver performance of at least 30-50Hz. Better quality information, faster. This is a game changer for the autonomous vehicle market.”

“Leveraging the inventions covered by our patents, we created the world’s first intelligent agile LiDAR – enabling us to interrogate a point cloud as individual voxels and control each one using multiple levers,” said Allan Steinhardt, Chief Scientist at AEye. “Traditional systems only adapt on frame size or placement. In addition to frame size and placement, Agile LiDAR – a core feature of the iDAR perception system – allows us to dynamically control frame pattern, pulse tuning, pulse shaping, pulse energy and other critical dimensions that enable embedded AI.”

AEye’s first iDAR-based product, the AE100 artificial perception system, will be available this summer to OEMs and Tier 1s launching autonomous vehicle initiatives.

The granted patents are probably US9885778 and US9897689 proposing the adaptive scanning so that the laser energy is spent in a more economical way only on "interesting spots", for the most part:

Wednesday, March 07, 2018

TrendForce Predicts Adoption Rate of 3D Sensing in Smartphones at 13.1% in 2018

TrendForce publishes its analysis of 3D sensing in smartphones:

"According to Peter Huang, analyst at TrendForce, there are three major technical barriers in producing 3D sensing modules at present. First, it is not easy to manufacture high-efficiency VCSELs, and the current electrical-to-optical power conversion efficiency is only about 30% on average. Second, the production of diffractive optical elements (DOE), a necessary component of Structured Light technology, and CMOS image sensor (CIS) in infrared cameras, require sophisticated technology. Third, the issue of thermal expansion also needs to be taken into consideration, making 3D sensing module assembly even more challenging. In sum, all these factors contribute to low yield of 3D sensing modules.

Therefore, it is estimated that only up to two Android phone vendors, most likely Huawei and Xiaomi, would adopt 3D sensing modules in 2018 with very limited shipments. Thus, Apple will remain the major smartphone company that adopts 3D sensing this year. It is estimated that the production volume of smartphones equipped with 3D sensing modules will reach 197 million units by the end of 2018, of which 165 million units will be iPhones. In addition, the market value of 3D sensing modules in 2018 is estimated to be about US$5.12 billion, with iPhones alone accounting for 84.5% of the entire value. By 2020, the market value is estimated to reach US$10.85 billion, and the CAGR will be 45.6% from 2018 to 2020.”
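The quoted CAGR checks out against the two market-value estimates:

```python
# Compound annual growth rate from TrendForce's 2018 and 2020 estimates.
v2018, v2020 = 5.12, 10.85          # market value, US$ billions
years = 2
cagr = (v2020 / v2018) ** (1 / years) - 1
print(f"{cagr:.1%}")                # 45.6%
```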

Avianization vs Dinosaurization in Image Sensor Industry

Wiley Strategic Entrepreneurship Journal publishes a paper "When dinosaurs fly: The role of firm capabilities in the ‘avianization’ of incumbents during disruptive technological change" by Raja Roy, Curba Morris Lampert, and Irina Stoyneva.

"Research Summary: We investigate the image sensor industry in which the emergence of CMOS sensors challenged the manufacturers of CCD sensors. Although this disruptive technological change led to the demise of CCD technology, it also led to avianization — or strategic renewal — for some incumbents, similar to how some dinosaurs survived the mass Cretaceous-Tertiary extinction by evolving into birds. We find that CCD manufacturers that did avianize were preadapted to the disruptive CMOS technology in that they possessed relevant complementary technologies and access to in-house users that allowed them to strategically renew themselves.

Managerial Summary: We investigate the transition from CCD to CMOS image sensors in the digital image sensor industry. Although the emergence of CMOS sensors was disruptive to CCD sensors, we find that CCD sensor manufacturers such as Sony and Sharp successfully transitioned to manufacturing CMOS sensors. Contrary to popular press and prior academic research characterizing disruptive change as being a source of failure for large firms, our research reveals that firms that possess relevant complementary technologies and have access to in-house users are able to strategically renew themselves in the face of a disruptive threat."

While the main paper is behind a paywall, the supplementary material is openly available.

The complementary technologies (CTs) are said to have enabled the CCD companies to win a place in the CMOS sensor market:
  • Global or electronic shuttering
  • Microlenses
  • CDS
  • Lightpipe or light shield
  • Hole Accumulation Diode (HAD)

Another key condition for a successful transition to CMOS technology is access to in-house users. This is used to explain Kodak's demise:

"The lack of access to in-house users at Kodak was consistent with its corporate strategy. According to George Fisher, ex-CEO, Eastman Kodak was a ‘horizontal firm because in a digital world, it is much more important to pick out horizontal layers where you have distinctive capabilities. In the computer world, one company specializes in microprocessors, one in monitors, and another in disk drives’ (Galaza and Fisher, 1999: 46). Chinon was eventually acquired by Kodak in 2004 (Eastman Kodak Company, 2004a) and continued to design and manufacture the point-and-shoot cameras."

Reticon/EG&G, Tektronix, and Ford Aeronutronic used to have access to in-house users but lacked relevant CTs. "We find that Reticon/EG&G, Tektronix, and Aeronutronic Ford failed to avianize themselves during the disruptive change to CMOS sensors from CCD sensors."

Tuesday, March 06, 2018

Active Sensing in Automotive Applications

AutoSens publishes Panasonic Soeren Molander presentation "Active sensing technologies for automotive applications:"

ON Semi Announces 43MP Full Frame CCD

BusinessWire: ON Semiconductor introduces a 43MP CCD in 35mm optical format, said to be the highest CCD resolution in full-frame format. The KAI-43140 is aimed at applications such as end-of-line inspection of HD and UHD flat-panel displays and aerial photography.

The KAI-43140 utilizes a new 4.5 µm Interline Transfer CCD (ITCCD) pixel that increases resolution by 50% compared to the prior 5.5 µm design while preserving critical imaging performance. Featuring a true electronic global shutter, the device supports full-resolution frame rates up to 4 fps through the use of a flexible 1-, 2-, or 4-output readout architecture. The KAI-43140 shares the same package and pin definitions as the popular 29 MP KAI-29050 and KAI-29052 image sensors, allowing it to be incorporated into existing camera designs with only minor electrical changes.
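The 50% resolution claim follows directly from the pixel-pitch shrink, since pixel count in a fixed optical format scales with the inverse square of the pitch:

```python
# Resolution gain from shrinking the pixel pitch at constant optical format.
old_pitch_um, new_pitch_um = 5.5, 4.5
resolution_gain = (old_pitch_um / new_pitch_um) ** 2   # pixels per unit area
print(f"{resolution_gain:.2f}x")                       # 1.49x, i.e. ~50% more pixels
```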

“Many industrial imaging applications demand the image uniformity currently only available from CCD technology, while needing the resolution increases that require continued pixel development,” said Herb Erhardt, VP and GM, Industrial Solutions Division, Image Sensor Group at ON Semiconductor. “With the KAI-43140, camera manufacturers and end customers can continue to push the boundaries of high-resolution image capture without sacrificing the image quality their applications require.”

Engineering grade versions of the KAI-43140 are now available, with production versions planned for early 3Q18.

Monday, March 05, 2018

Pixart Q4 2017 Report

Pixart keeps diversifying its image sensor portfolio with some degree of success:

Sunday, March 04, 2018

Himax on 3D Sensing Strategy

SeekingAlpha publishes the Himax Q4 2017 earnings call transcript. A few quotes on the 3D sensing and CMOS sensor business:

"At present, our total market is primarily the Android based smartphone. SLiM, our total – our structure light-based 3D sensing total solutions which we announced jointly with Qualcomm last August, brings together Qualcomm’s industry leading 3D algorithm with Himax’s cutting-edge design and manufacturing capabilities in optics and NIR sensors as well as our unique know-how in 3D sensing system integration.

The majority of the key technologies inside the SLiM total solution is developed and supplied by Himax ourselves. These critical technologies include, on the projector end, DOE and collimator utilizing our world leading WLO technology, a tailor-made laser driver IC, and high precision active alignment for the projector assembly; and on the receiver end, a high efficiency near-infrared CMOS image sensor. Last but not least, Himax also developed an ASIC by incorporating Qualcomm’s algorithm for 3D depth map generation. The fact that all of these critical components are developed in-house puts us in a unique leading position. It represents a very high barrier of entry for any potential competition and a much higher ASP and profit margin for us.

The Qualcomm/Himax solution is by far the highest quality 3D sensing total solution available for the Android market right now. It has the industry’s best performance in all of the dimension, 3D depth accuracy, indoor/outdoor sensitivity and power consumption. It passes the toughest eye safety standards with a proprietary glass broken detection mechanism to safeguard the user from any potential harm. Furthermore, we have the only solution to offer face recognition for secure online payment, a must-have feature for high end smartphones of the future. We are working with multiple tier-1 smartphone makers, aiming to launch 3D sensing on their premium smartphones starting the first half of 2018.

Our SLiM solution will be ready for mass production and shipment by the end of the first quarter, 2018 with an initial capacity of 2 million units per month, following some waiting period. The initial capacity is part of our Phase I expansion of $80 million. We have already achieved pretty satisfactory production yields in our internal pilot production. Given that SLiM is a highly integrated solution with ASPs much higher than those of individual components, by the time we started making shipment, it will be a major growth contributor to our top and bottom lines.

In an attempt to accelerate the adoption of 3D sensing for Android phones, in addition to SLiM, we’re also working on stereoscopic type 3D sensing as a lower costs alternative. Unlike SLiM which utilizes structure light to generate 3D, stereoscopic type uses two cameras to replicate 3D vision in nature, augmented by coded light for image depth enhancement. Both types of solutions offered by Himax operate on active NIR light source with high sensitivity NIR sensors, thus working very well even under extreme brightness or total darkness.

For 3D sensing purposes, structure light approach offers better depth precision than stereoscopic type but the cost is also higher. By introducing stereoscopic 3D sensing, we aim to bring down the cost of 3D sensing so that it can be afforded by mass market smartphone models. We are pleased to report that development of stereoscopic 3D sensing total solution for face recognition and 3D features has been under way. We are aiming to be mass production and shipment ready by Q4 of this year. Similar to our experience in SLiM, we are working with some of the most prominent ecosystem partners in developing our stereoscopic 3D total solution.

We will update on our progress in due course. Despite its lower cost compared to structured light, stereoscopic 3D still represents a much higher ASP and better gross margin potential for us. Last but not least, at this year's CES many of our customers and partners demonstrated 3D sensing applications in IoT or promoted AR/VR and robotics-related products with Himax SLiM inside, and received very positive feedback. As I mentioned before, 3D sensing can have a broad range of applications that go beyond smartphones. We are very excited about the growth prospects it represents and believe 3D sensing will be our biggest long-term growth engine.”

A slide from the company presentation:

Friday, March 02, 2018

Noise in Charge Domain Sampling Readouts

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes the Delft University paper "Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors" by Xiaoliang Ge and Albert J. P. Theuwissen.

"In order to address the trade-off between the low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such readout topology, however, operates in a non-stationary large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied for Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristic of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models."
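As background, the correlated double sampling that the paper extends to the charge domain works by subtracting a reset sample from the signal sample, so that noise correlated between the two samples (such as kTC reset noise) cancels while uncorrelated read noise remains. A toy numerical illustration (not the paper's Gm-cell circuit; noise magnitudes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # simulated pixel reads

def read_noise():
    """Fresh, uncorrelated read noise for each sample."""
    return rng.normal(0.0, 1.0, n)

ktc = rng.normal(0.0, 5.0, n)                # reset (kTC) noise, shared by both samples
signal = 100.0

reset_sample = ktc + read_noise()
signal_sample = signal + ktc + read_noise()
cds_output = signal_sample - reset_sample    # the correlated kTC term cancels exactly
print(cds_output.std())                      # ~1.41 (sqrt(2) x read noise), not ~5
```

The residual is sqrt(2) times the per-sample read noise, far below the 5-unit reset noise that would dominate without CDS.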