HomeSide 720° – A helmet mounted panoramic camera
Introducing the River View Web Player & Other News from River Studies
Subpixel Registration and Distortion Measurement
File:Text slides.pdf
uploaded "[[File:Text slides.pdf]]" Elphel presentation at Russian Open Source Hardware, mini-conference, October 2010
Elphel camera parts 0353-70 (edited by Olga, revision as of 20:29, 11 June 2013)
0353-70-63 - Lens, 5Mpix format 1/1.8 E3Z4518CS-MPIR:
Added section: === 0353-70-63 - 4.5-13.2mm F1.8 Manual Iris Vari-Focal CS-Mount, Day/Night, IR, 5 Megapixel (HD) E3Z4518CS-MPIR ===
Elphel camera parts 0353-01 (edited by Olga, revision as of 21:23, 10 June 2013)
0353-01-46 - SATA to SATA Cable Assembly 500mm, p/n 5602-44-0142A-500:
Added section: === 0353-01-47 - Micro SATA 1.8" combo 5V & 3.3V Power & SATA cable, p/n MS16P15PMWVD7S ===
Elphel new camera calibration facility
Fig.1. Elphel new calibration pattern
Elphel moved to a new calibration facility in May 2013. The new office is designed with the calibration room as its most important space, expandable when needed to the size of the whole office by opening a wide garage door. The back wall of the new calibration room is covered with a large, 7m x 3m pattern illuminated with bright fluorescent lights. The length of the room allows positioning the calibration machine 7.5 meters away from the pattern. The long space and large pattern make it possible to calibrate Eyesis4π far enough from the pattern to be within the depth of field of its lenses focused at infinity, while still keeping the wide angular size preferred for measurement accuracy.
We had already hit the precision limits with the previous, smaller 2.7m x 3.0m pattern. While the software was designed to accommodate a pattern where each node has an individually corrected position (deviating from the flat uniform grid), the process assumed that the 3d coordinates of the nodes do not change between measurements.
The main problem with the old pattern was that the material it was printed on was attached to the wall only along the top edge, so it still had some freedom to move perpendicular to the wall. We noticed this while combining measurements made at different times, as most of our cameras need to be calibrated at several “stations” – positions relative to the target (rotation around 2 axes is performed automatically). We ran calibration at night to reduce variations caused by vibrations in the building, so measurements at the next station were performed on different dates. The modified software was able to deal with variations in Z (perpendicular to the surface) between station measurements (that actually did help the overall adjustment of variables), but the shape of the target pattern could still change if the temperature in the building changed between measurements. The PVC material has high thermal expansion, and a small expansion in the X,Y directions can cause much larger variations perpendicular to the surface when the target is attached at multiple points to a wall with a lower thermal expansion coefficient.
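To illustrate that parametrization (with hypothetical names and structure, not Elphel's actual code), one can think of each grid node's 3-d position as a nominal flat-grid point plus a static per-node correction, with an extra per-station displacement along Z:

```python
import numpy as np

def node_position(col, row, pitch_mm, node_corr_mm, station_dz_mm):
    """Illustrative model of a calibration-pattern node position.

    node_corr_mm  - per-node 3-d correction from the ideal flat grid, shape (rows, cols, 3)
    station_dz_mm - per-station map of Z displacements (perpendicular to the wall),
                    shape (rows, cols); hypothetical, only to show how station-to-station
                    Z variations can be kept separate from the static per-node corrections
    """
    nominal = np.array([col * pitch_mm, row * pitch_mm, 0.0])   # ideal flat uniform grid
    z_shift = np.array([0.0, 0.0, station_dz_mm[row, col]])     # station-dependent Z term
    return nominal + node_corr_mm[row, col] + z_shift
```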
Fig. 2. Floor plan
Calibration Room
The new space is designed to accommodate various camera calibration procedures.
- First of all, we made the pattern as large as possible – it is 7.01m x 3.07m – we even raised the ceiling near the target.
- The target itself is now printed on film attached to the wall like wallpaper, so there is no movement relative to the wall, and thermal expansion is defined by the lower coefficient of the drywall. We also provided air channels inside the wall to make it possible to implement thermal stabilization of the wall.
- The calibration room allows moving the camera under test up to 7.5m away from the pattern; the room is separated from the rest of the facility by the wide “garage” door, so changing lighting conditions outside the room do not influence calibration.
- The other rooms are designed in such a way that the camera can be moved up to 24 meters from the target (with the garage door open) and still have an unobstructed view of virtually the full pattern – that may be needed for long focal length lenses.
Fig. 3. Pattern wall during construction
Preparing the wall for the target pattern
During construction of the new facility we were carefully watching the progress, as our temporary space was located just on the next floor, and we were mostly concerned about the quality of the target wall. Yes, software can compensate for the non-flatness of the wall, but it is better to start with good “hardware”: to achieve subpixel precision the software averages correlation over rather large areas of the image (currently 64×64 pixels), so sharp variations will produce different measurements from different distances or viewing angles. When we first measured the wall flatness we noticed large steps between the gypsum board panels, so the construction crew promised to bring it to a level 5 finish and flatten the surface. They put “mud” all over the wall and sanded it, which removed all of the sharp discontinuities on the target surface but still left some smooth ones of up to ±3mm, as we measured later with the camera.
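The correlation averaging can be done in different ways; the sketch below shows one common technique (phase correlation of a 64×64 tile followed by parabolic interpolation of the peak) only to illustrate how sub-pixel shifts can be extracted from such windows. It is a simplified stand-in, not our actual calibration code.

```python
import numpy as np

def subpixel_shift(tile_a, tile_b):
    """Estimate the (dy, dx) shift between two same-size tiles (e.g. 64x64)
    with sub-pixel precision: phase correlation + parabolic peak fit.
    Illustrative sketch only; assumes the peak is not on the tile border."""
    w = np.outer(np.hanning(tile_a.shape[0]), np.hanning(tile_a.shape[1]))
    fa = np.fft.fft2(tile_a * w)
    fb = np.fft.fft2(tile_b * w)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12                    # normalized cross-power spectrum
    corr = np.fft.fftshift(np.real(np.fft.ifft2(cross)))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2

    def frac(c_minus, c_0, c_plus):                   # sub-pixel offset of a parabola apex
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = (py - cy) + frac(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = (px - cx) + frac(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return dy, dx
```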
When the wall was made flat it had to be prepared for the application of the self-adhesive vinyl film, so that the wall finish would not make it bubble later. Ideally we wanted it to be able to withstand peeling the film off if we ever have to do that. When we searched the Internet about applying vinyl film to a painted wall, we found that most fresh paint needs some 60(!) days to cure before the film can be applied. So we decided to go with a two-component epoxy paint that requires only one week before the film can be applied. When we inspected the epoxy-painted wall (the paint was applied with regular rollers), it did not look flat. Well, it was just a roller-painted wall, so it had those small bumps, and we were concerned that the vinyl film would conform to them – and if it did, the positional “noise” would be higher than what the cameras can resolve. So we got more epoxy paint and started a long process of wet-sanding and applying new paint coats. We have compressed air (used for blowing dust off during optical and mechanical assembly), so we thought we would just spray the paint instead of rolling it, to avoid the bumps that were left even after professional work. Unfortunately, without the needed experience in spray-painting, we set the pressure too high, and probably as much as half of our first coat ended up somewhere other than the sprayed wall – the paint droplets were too small. The next coat was better, and in several days we had a wall that seemed to be covered with hard plastic laminate, not just painted.
Installing the pattern
Our next concern was how to install the vinyl film. We wanted a very good match between the individual panels, as it is not possible to print the target on a single piece – the maximum width available is just over 1.5m. We hesitated to order professional installation because for regular applications (like vehicle wraps) such sub-millimeter precision is not required. For a truly seamless result (compared to the precision of the calibration) we would need a better than 0.1mm match, but it is possible to simply mask out the grid nodes around the seams and disregard them during calibration data processing, so we planned for about a 0.5mm match.
Fig.4. Pattern Z-deviations (perpendicular to the target plane)
Fig. 5. Pattern deviations in X,Y plane
Fig. 6. Pattern deviation from the "ideal" grid (horizontal profile)
We knew people do this, but it still seemed very difficult to apply 1.5m wide by 3m long “stickers” without wrinkles and bubbles. A web search provided multiple recommendations, but the main thing was to use the “wet” method, which none of us knew before. It involves spraying the wall (and the adhesive side of the film) with “application fluid” (basically water with a small addition of soap and alcohol). When the sticky film is applied to the wet surface, the adhesive is temporarily inhibited and it is possible to reposition (slide) the film to achieve the required match. Then the water is squeezed out with squeegee tools, and if done properly, there should be no bubbles left.
Geometric properties of the pattern
The Z-deviations in Fig. 4 show the wall non-flatness; the gypsum panel borders are still visible (even with the “level 5” finish), and the horizontal discontinuity near the top is where the wall was extended to accommodate the increased ceiling height. The positive Z direction is away from the camera, so lighter areas are concave areas of the wall and darker ones are bumps extending out from the wall.
Fig. 5 illustrates the mismatch and stretching of the applied vinyl panels. The red/green color difference corresponds to the horizontal shift, while blue/green corresponds to the vertical one.
Figure 6 contains a horizontal profile at half-height and provides numerical values of the deviations. The Diff. Error plot indicates the areas around the panel boundaries that should be avoided when minimizing reprojection errors and when measuring point spread functions (PSF) for aberration correction.
Illuminating the target pattern
We use the same pattern for different parts of the camera calibration. Correction of aberrations and distortions does not impose strict requirements on the illumination of the pattern, but we use the same images to measure (and compensate) lens vignetting and the color variations of camera sensitivity caused, among other reasons, by the multilayer infrared cutoff filter and the angular variation of pixel color sensitivity. This method works for the low-frequency part of the flat-field correction and does not deal with pixel fixed-pattern noise, which, if present, should be corrected by other means.
Fig. 7. pattern brightness for station 2 view 0 (top channels)
Fig. 8. pattern brightness for station 2 view 0 (top), specular component
Fig. 9. pattern brightness for station 2 view 1 (bottom channels)
Fig. 10. pattern brightness for station 2 view 1 (bottom channels), specular
By acquiring thousands of images made by the different channels of the camera, all capturing the same target, it is possible to perform a simultaneous relative photometric calibration of the pattern and the sensors, provided that each element of the pattern has the same brightness in every image where it is captured. This may hold when the target is observed from a single point, but when we calibrate the Eyesis4π camera, which has 2 sensors mounted far from the others, those sensors travel significantly while capturing the target and the assumption does not hold. The same pattern element has a different brightness depending on the lens position at the moment the image is acquired, because even a matte pattern material is not perfectly diffusive – there is some specular (reflective) component.
In the earlier setup we used photographic lamps with large umbrellas, but even these umbrellas were effectively small when placed far enough away to stay out of the camera view. The specular component was still visible after the diffuse part was subtracted. When designing the new calibration target we decided to use bright linear fluorescent lamps along the floor and the ceiling and keep them spatially compact, without any diffusers or umbrellas; we only used mirrors behind the lamps to effectively double their output. Such a light source was expected to produce specular reflections on the target, but these reflections occupy just a small portion of the target surface, while the rest of it is close to purely diffusive. That allowed us to locate the positions of the specular reflections for each camera station/viewpoint by subtracting the average (over all stations/viewpoints) pattern brightness from each individual station/view of the pattern, and then to mask out those areas of the pattern during the flat-field calculations.
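A minimal sketch of that masking step (the array layout and the 5% threshold are assumptions for illustration, not values from our software), assuming the per-station/view brightness maps of the pattern have already been reprojected onto a common grid:

```python
import numpy as np

def specular_masks(brightness, threshold=0.05):
    """brightness: array of pattern-brightness maps, shape (n_views, H, W),
    all mapped onto the same pattern grid.  Returns boolean masks flagging
    probable specular highlights in each view; masked nodes would then be
    excluded from the flat-field (vignetting) fit.  Illustrative only."""
    diffuse = brightness.mean(axis=0)      # average over all stations/views ~ diffuse component
    excess = brightness - diffuse          # per-view deviation from the diffuse estimate
    return excess > threshold * diffuse    # True where a view is noticeably brighter than average
```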
The images in Fig. 7-10 were made for camera station 2 – 3.3m from the target and 1.55m to the right of the target center, which caused the lamp reflections to be shifted to the left. View 0 (Fig. 7-8) corresponds to the camera head, which is the center of rotation. View 1 (Fig. 9-10) is captured by the camera's 2 bottom sensors mounted 820 mm below the camera head, so they moved significantly between the images – that caused the visible curvature of the top lamps' reflection.
Virtual tour of Elphel calibration facility
You may walk through our calibration facility using our WebGL viewer/editor. The images were captured with a newly calibrated Eyesis4π camera; there is no 3-d parallax correction – these are just raw panoramas stitched for infinity, and most close objects are out of the depth of field of the lenses. We hope you will still enjoy this snapshot of the new facility where we plan to develop and precisely calibrate many new cameras.
[elphel353-8.0] By dzhimiev: 1. added eyesis_ide.php
- Modified src.list rev1.104 - added 4 lines, removed none
[elphel353-8.0] By dzhimiev: 1. Eyesis4Pi support
- Modified ide.sh rev1.3 - added 35 lines, removed 26 lines
[elphel353-8.0] By dzhimiev: 1. eyesis_ide.php included
- Modified Makefile rev1.7 - added 4 lines, removed one line
[elphel353-8.0] By dzhimiev: 1. 103697 (IDE Multiplexer Board) configuration for Eyesis4Pi
- Modified eyesis_ide.php rev1.1 - added none, removed none
[elphel353-8.0] By dzhimiev: 1. sync instead of unmount at stop
- Modified logger_launcher.php rev1.6 - added none, removed none
[elphel353-8.0] By dzhimiev: 1. fixed list-command
- Modified camogm_interface.php rev1.23 - added 4 lines, removed one line
[elphel353-8.0] By dzhimiev: 1. sync instead of unmount
- Modified logger_launcher.php rev1.5 - added 5 lines, removed one line
[elphel353-8.0] By dzhimiev: 1. minor changes to avoid php hanging when not communicating with camogm
- Modified camogm_interface.php rev1.22 - added 165 lines, removed 99 lines
Sensor+Lens Tool
It might help to figure out what lens is needed for a particular application, where certain parameters can be important (a rough calculation sketch follows the list below), e.g.:
- Field of view for a lens of the given format
- Depth of field at a fixed distance and f-stop
- Aperture size (f-stop) at which the resolution starts to degrade due to diffraction limiting
- Different sensor formats (also compared to the “full frame” format)
- Circle of confusion formulae (affects hyperfocal distance and depth of field):
- 1px – for machine vision applications
- d/1730 & d/1000 – “Zeiss formula” for photography
- Distance to in-focus plane
- Lens focal length
- Field of view
- Diffraction limit for aperture size (calculated for red light of 690nm, Airy disk size equal to 1px)
- Depth of field
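For reference, the listed quantities can be estimated with the standard textbook thin-lens formulas. The sketch below is only a rough stand-in for the tool (its exact formulas and defaults may differ), and the example numbers at the end are assumptions, not measured values.

```python
import math

def lens_calc(focal_mm, f_number, distance_mm,
              sensor_w_mm, sensor_h_mm, pixel_um, coc_mm=None):
    """Textbook thin-lens estimates of the quantities the Sensor+Lens Tool reports.
    A sketch only: these are the standard photographic formulas, not necessarily
    the exact ones used by the tool."""
    if coc_mm is None:                       # "1 px" circle of confusion (machine vision)
        coc_mm = pixel_um / 1000.0
    # Field of view (degrees), horizontal and vertical
    fov_h = 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_mm)))
    fov_v = 2 * math.degrees(math.atan(sensor_h_mm / (2 * focal_mm)))
    # Hyperfocal distance and near/far limits of the depth of field
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * distance_mm / (hyperfocal + (distance_mm - focal_mm))
    far = (math.inf if distance_mm >= hyperfocal
           else hyperfocal * distance_mm / (hyperfocal - (distance_mm - focal_mm)))
    # Diffraction: f-number at which the Airy disk diameter (2.44 * lambda * N)
    # reaches one pixel, for red light of 690 nm
    diffraction_f = (pixel_um * 1000.0) / (2.44 * 690.0)
    return dict(fov_h_deg=fov_h, fov_v_deg=fov_v, hyperfocal_mm=hyperfocal,
                dof_near_mm=near, dof_far_mm=far, diffraction_f_number=diffraction_f)

# Example (assumed numbers): 1/2.5" 5 Mpix sensor with 2.2 um pixels,
# 4.5 mm lens at f/4, focused 3 m away
print(lens_calc(4.5, 4.0, 3000.0, 5.70, 4.28, 2.2))
```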
Links
[elphel353-8.0] By dzhimiev: error in header
- Modified setparameters_demo.php rev1.3 - added 2 lines, removed one line
Heptaclops camera and the 393
For the last few years we have been working on multi-sensor cameras and on the optical parts of our cameras. It all started as a temporary diversion from the development of the model 373 cameras that we planned to use instead of our current model 353 cameras based on the discontinued Axis CPU. The problem with the 373 design was that while the prototype was assembled and successfully tested (together with two new add-on boards), I did not like the bandwidth between the FPGA and the CPU – even though I used as many connection channels between them as possible. So while the Texas Instruments DaVinci processor was a significant upgrade in camera CPU power, the design did not seem to me capable of staying current for the next 3-5 years or of accommodating new emerging (not yet available) sensors with increased resolution and frame rate. This is why we decided to put that design on hold, ready to start production if the number of our stored Axis CPUs fell dangerously low, and meanwhile to wait for better CPU/FPGA integration options to appear and to focus on developing the other parts of the system that are really important.
Now that wait for the processor is nearly over, and it seems to be just in time – we still have enough stock to provide NC353 cameras until the replacement is ready. I'll get to this later in the post, and first tell where we got during these 3 years.
Up until 2009 we did not really bother with the optics of the cameras we made – the cameras have a standard CS-mount that can accommodate C- and CS-mount lenses available from many suppliers. We provided the electronics and software, but it was up to our users to deal with the rest. Yes, we did offer cameras with color and monochrome sensors, with or without IR cutoff filters, and stocked some basic varifocal lenses – but that was virtually all. When we started to develop panoramic cameras ourselves, we quickly recognized that the lenses we needed simply did not exist. The C/CS-mount format lenses are too big for a compact camera layout (not only does the camera itself become big, but the large distance between the lenses causes large parallax that makes panorama stitching more difficult). The smaller M12 mount lenses (also called “S-mount” or “board lenses”) are mostly designed for small security cameras and, being cost-sensitive, are not usually designed for top performance.
We also realized that putting together multiple individual cameras to cover a panorama is not enough. All camera lenses have their best resolution in the center, while it degrades closer to the corners. In many lenses, especially small ones, the corners are also substantially darker due to vignetting. And while we got used to this when making photographs – in many cases it could even be considered a useful feature, focusing attention on the object in the center while blurring and fading out the periphery – in stitched panoramas it is a disaster, as the peripheral areas of the individual lenses are mapped to the middle areas of the composite panorama image.
Not being lens manufacturers ourselves, we went down the path of correcting lens aberrations by software post-processing (“Zoom in. Now… enhance.” and later posts) – that allowed us to effectively double the number of lens “megapixels”. Later we used the same pattern we had developed for aberration correction to precisely correct the lens distortions. This process of camera calibration for the spherical view camera is described in my previous posts (such as Building and Calibrating Eyesis4π) – we started doing it for precise panorama stitching but later worked on making it suitable for stereo photogrammetry and 3d reconstruction.
So now we have what we believe is the highest performance camera of its kind – the one we demonstrated at SIGGRAPH-2012. We also now have a precise, thermally compensated sensor front end that can be used in other applications – in an individual camera or in multi-camera setups.
One such application is
Shallow depth of field and cinema cameras
For many years now Elphel has been cooperating with a group of enthusiasts who tried to adapt our cameras for cinema applications – and that fits very well into our vision: take our cameras and use them as clay to form something you (not us) envision. But eventually they got tired of waiting for our next model 373 camera (which they needed for a higher frame rate and a larger image sensor), so they decided to develop a new camera themselves.
One of the main camera features they (and others interested in cinematographic applications) needed was a physically large sensor. Such sensors allow capturing images with a “shallow” depth of field (DoF) and can be used to shoot video where some objects are in focus while others (farther or closer) need to be blurred. With single-lens systems the range of distances where you can play with DoF depends on the physical size of the sensor, and the small sensors we use (like those used in camera phones) are approximately 5 times (linearly) smaller than the 35mm film frame. So what you can achieve with a 35mm camera in the 5-10 meter range is only possible in the 1-2 meter range with a small 1/2.5″ (~7mm diagonal) sensor – so instead of human actors you would have to make animation with dolls. There are even special optical adapters that use a 35mm format lens to focus the image onto a diffusing screen (made of wax, or even a fast-rotating disk to make the diffusing grains smaller) and then transfer the image from that screen to the small-format sensor of an inexpensive camcorder. But such a system still has limited resolution and loses a lot of light, dramatically reducing the camera sensitivity.
Three-camera setup for controlled depth of field capturing
DoF first appeared as a feature inherent to the physical camera – the process of capturing the three-dimensional world on a two-dimensional medium (film or image sensor). But in the artist's hands it became a tool to focus the viewer's attention on the intended objects and also to show the 3-d nature of the actual world. With modern computer animation there are no physical cameras or lenses involved, but the depth of field is still present (as in this Sintel gallery). That means that “shallow” DoF can be synthesized when 3d information about the scene is present, and such information can be captured by other means – not only with a large-format sensor – with the resulting image then rendered with synthetic depth of field. In some cases even a stereo-camera setup (a pair of synchronized cameras) can be used. Such a setup is generally sufficient if the in-focus objects are in the foreground and there is nothing closer to the camera that occludes the target. But if such a system is used to capture, say, an image of a person behind tree branches, then a single horizontal branch can block the view of the person's eye from both camera lenses. So regardless of how you blur the foreground objects (tree branches in this case), you will not be able to reconstruct a sharp image of the face – the information about the color of the eye is completely missing from both camera images. Using more cameras in the setup helps to provide more information about the objects – in this case a third camera, shifted vertically from the first two, will have the information about the eye that was missing from the images of the first two cameras.
Building a 3-d model of the scene from multiple images is not an easy task. The precision of depth measurements is much lower than that of measurements in the directions orthogonal to the line of view. And often portions of the scene have no fine details, so there is nothing to match in order to find the distance to that object. On the other hand, when the 3-d reconstruction is needed just for synthesizing DoF, the distance precision needed to simulate the DoF of a real lens is about the same as what you can get from lenses separated by that large lens diameter. And the areas that have no details, where it is impossible to measure distance, will look the same on the final image even if you blur them with the wrong sigma (or do not blur them at all).
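To make the last point concrete, here is a deliberately crude sketch of how synthetic DoF could be rendered once a per-pixel distance map is available: the scene is split into depth layers and each layer is blurred with a sigma derived from the thin-lens circle of confusion. This is just one possible approach with assumed parameter names, not our processing pipeline; it ignores occlusions and realistic bokeh.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth_mm, focus_mm, focal_mm, f_number, pixel_mm, n_layers=16):
    """Crude synthetic depth-of-field on an (H, W, 3) image with an (H, W) depth map.
    Each depth layer is blurred by the thin-lens circle of confusion.  Illustrative only."""
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros(image.shape[:2], dtype=np.float64)
    aperture = focal_mm / f_number                         # aperture diameter, mm
    edges = np.linspace(depth_mm.min(), depth_mm.max(), n_layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth_mm >= lo) & (depth_mm <= hi)
        if not mask.any():
            continue
        d = 0.5 * (lo + hi)                                # representative layer distance
        # Circle of confusion (mm on the sensor) for an object at distance d
        coc = aperture * focal_mm * abs(d - focus_mm) / (d * (focus_mm - focal_mm))
        sigma = max(coc / pixel_mm / 2.0, 0.01)            # convert to pixels
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
        out += blurred * mask[..., None]
        weight += mask
    return (out / np.maximum(weight, 1)[..., None]).astype(image.dtype)
```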
HTML5 demo
A section of the screenshot – click on the image to open the actual HTML5 demo page
We do not yet have a seven-camera setup, or “heptaclops”; we used a smaller “triclops” configuration. When we had built and calibrated the new camera (using the target pattern data measured earlier with Eyesis), we looked for a way to demonstrate it. First the images were processed with the known calibration, and each of the raw images was mapped to a common projection plane – each pixel with ~0.15 pix accuracy – a process that compensates for the lens distortions and the misalignment of the individual sub-cameras. These images can be used as the input data for the 3-d reconstruction. We do not have finished 3-d processing software yet, so Oleg Dzhimiev made a small HTML5 application that illustrates the information from the camera triplet.
This web application overlays the triplet of corrected images acquired simultaneously by the 3 sub-cameras and applies transparency to two of them, so that the visible superposition gives equal weight to each image of the set. Each image is then shifted by the disparity value that matches the distance from the camera to the selected image plane – the amount of disparity is controlled by a slider or by rotating the mouse scroll wheel. Objects in or near the selected image plane coincide in all three images, while objects closer to or farther from the camera are shifted relative to each other. When the shift is small it looks like a blur, but larger shifts look like what they actually are – individual images. While these separate images spoil the illusion of out-of-focus blurring (though they still look more realistic than the double images in old rangefinder cameras), they illustrate the raw data. Using more parallel cameras would improve the illusion of focusing in such a quick demo and would provide more data for the actual reconstruction, reducing ambiguity when finding the disparity (and so the distance) at each pixel. Additionally, combining the data from multiple individual sensors would increase the signal-to-noise ratio of the resulting image, and so the dynamic range, even when used with the same exposure/gain settings. It is also possible to program some channels with different exposures and run the whole system in HDR mode.
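Oleg's demo is an HTML5/JavaScript page; the sketch below only restates its core idea (equal-weight blending of rectified images, each shifted in proportion to its baseline) in Python, with made-up function and parameter names.

```python
import numpy as np

def overlay_at_disparity(images, baselines_px, disparity):
    """Blend rectified (H, W, 3) images with equal weights after shifting each one by
    disparity * (its baseline relative to the reference camera).  Objects in the plane
    matching 'disparity' align; everything else doubles or triples up.
    A sketch of the demo's idea, not the actual HTML5/JS code."""
    h, w = images[0].shape[:2]
    acc = np.zeros((h, w, 3), dtype=np.float64)
    for img, (bx, by) in zip(images, baselines_px):
        dx = int(round(disparity * bx))
        dy = int(round(disparity * by))
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        acc += shifted / len(images)          # equal weight for each sub-camera
    return acc.astype(images[0].dtype)

# Example (assumed geometry): three sub-cameras, baselines normalized to the
# horizontal pair, third camera shifted vertically; disparity in pixels.
# blended = overlay_at_disparity([img0, img1, img2], [(0, 0), (1, 0), (0, 1)], 12.5)
```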
The same application can be useful with 3-d processing too. Instead of the 3 images that are just aberration- and distortion-corrected originals acquired from the different sensors, we can generate multiple close views and feed them to the same program – just shifting multiple images (or videos) is much less computationally demanding than a correct 3-d rendering of the scene with the selected image plane and DoF, so such an application can be used as a preview, letting the artist dynamically adjust those parameters (distance and DoF) before running the final rendering (where it is possible to add the desired bokeh too).
Back to the Model NC373 camera status
Model NC393 CAD rendering with M12 (S-mount) lens and thermally compensated sensor front end
We decided to drop the idea of building the already designed and prototyped model NC373 camera. While the next camera will share some parts with the 373, the changes are too big to call it just a revision “C” of the 10373 system board, so it will be model NC393. The camera system board will have a Xilinx Zynq that combines an FPGA and a dual-core ARM processor on the same chip, so my main concern about the FPGA-CPU bandwidth does not apply here.
When information about the new Xilinx device was announced, I thought it was a good candidate for the next camera design. In the spring of last year we had a Xilinx seminar in Salt Lake City, where I was told that these new devices would be supported by the zero-cost development software.
Model NC393 CAD rendering with a C/CS-mount lens
That feature is very important for us, because while the cost of the tools is not high for a manufacturer, it is higher than the cost of a camera. We strive to make our products highly customizable by the users; each camera contains the source code needed to compile the executables (including the FPGA code). Making our customers pay a high price to be able to modify even a single line of the FPGA code is not acceptable to us, so we use only those FPGA devices in our designs that are supported by software our users can download at zero cost. Of course, ideally we would love to use free (FLOSS) development tools (like the ones we use for FPGA functional simulation), not just zero-cost ones, but in the real world that is not yet possible, so we develop and share our free (GNU GPLv3 licensed) code using the non-free, closed-source tools.
The news that came from the Xilinx reps later last year was really disappointing – none of the Zynq devices (not even the smallest one) would be supported by the zero-price software tools. Only this year did it finally become official that 3 of the 4 devices are going to be supported, so we can use them. Xilinx did have some production delays and the availability schedule slipped to later dates, but I'm crossing my fingers that the needed part/package combination will actually be in production by the end of Q1 2013.
Model NC393 CAD rendering with three M12 lenses and thermally compensated sensor front ends
While the NC393 design is far from being finished, some features are already settled and are likely to remain unchanged in the final product.
- The camera will be compatible with both parallel-output sensors (such as the Aptina MT9P001/MT9P031/MT9P006 that we currently use) and multi-lane serial sensors (such as those with MIPI). The connectors will not change, and the sensors used with the NC353 will fit directly into the NC393 camera
- The camera system board is being designed for multi-sensor operation. It will accommodate three sensors without the need for multiplexer boards (like the 10359 needed for the NC353). Multiplexer boards will likely still be used in some cases, but the system board itself will have 3 identical sensor board connectors
- Physical dimensions of the camera and the mounting holes location on the system board will remain the same as on the previous camera models
- Camera will have a single GigE port as a main communication channel
- One serial console port with an internal USB converter, so a microUSB cable will be sufficient to use the system console for software development.
- Firmware installation and update will be done by booting from a microSD card accessible without opening the camera. It will be possible to use the same card slot for data storage during normal operation.
- 512MB NAND flash as the main storage for firmware and the boot source for normal camera operation.
- 1GB of the system memory made of the two 256×16 DDR3 chips.
- 512MB of dedicated video memory (not shared with the CPU) – one 256×16 DDR3 chip, the same as the ones used for the system memory.
- USB2 (host): one external micro-USB and 2 internal flex cable connectors with USB, additional 3.3VDC power, I2C and FPGA general purpose I/O, compatible with the add-on boards for the NC353
- 30-pin board-to-board connector with 12 differential /24 single-ended FPGA I/O for add-on boards.
- A pair of 2.5mm audio connectors on the back panel for camera synchronization – from an external trigger and/or from other cameras
- 2-port SATA controller based on a free (GNU/GPL) implementation. The camera will have an eSATA/USB external connector (so it is capable of running an external SATA device without an additional power supply) and an internal mSATA SSD that fits inside the camera.
Model NC393 CAD rendering with three M12 lenses for capturing panorama images
The NC393 camera will have significantly higher performance than the 353, and it will inherit the openness and flexibility of its predecessors. Elphel does not take orders for custom designs; rather, we try to do our best to make sure our users can do the customization themselves. The same policy will remain for the NC393 – we will offer some camera options and add-ons, and in most cases it will be up to the camera users to build the camera of their dreams.
Elphel will use the new system board in the Eyesis cameras. It will allow us to make the overall design more compact by reducing the number of boards inside, to increase the network bandwidth as well as the SSD bandwidth, and to increase the frame rate. We also plan to increase the camera resolution by switching to a sensor of the same format but with smaller pixels while reusing the same optical-mechanical design – that would definitely be too much for the current system, which is limited to the currently used sensors.
And of course we will continue to build “small” cameras based on the new design – with the universal C/CS-mount and with the M12 one, including precisely calibrated fixed-lens systems. And as the camera is designed for multi-sensor operation, we will offer several typical configurations for robotic (parallel sensors for stereo vision) and panoramic applications, as shown in the images above.
All the camera hardware documentation (circuit diagrams, parts lists, PCB layout and mechanical CAD files) will be released under the CERN OHL license when the design is finished and we start the actual production of the cameras (add-on documentation will be released when it becomes available). All the firmware and FPGA code will, as usual, be released under the GNU GPL and maintained in the Sourceforge repositories.
[elphel353-8.0] By dzhimiev: 1. removed a warning if the file doesn't exist
- Modified store.php rev1.2 - added one line, removed one line
[elphel353-8.0] By dzhimiev: 1. < show_source.inc
- Modified target.list rev1.98 - added 2 lines, removed 3 lines