Archive

Archive for the ‘Prototype4’ Category

Example 3D Model Using the 3D-360

February 28th, 2011 1 comment
This 3D model includes alignment errors, and we know how to fix them. Our objective is to develop an automatic 3D model creation system, and we know from experience that the errors will shrink as our calibration process is refined. Below is a description of how this model was made using images from Proto-4F of our 8-camera 3D-360 scanner.

A 3D model requires images from multiple perspectives, so for this model we scanned from 4 different locations: two scans from a high perspective with the scanner cameras at 6 feet, and two low scans with the scanner 3 feet above the floor. Once the scans were completed (all of the pictures taken and downloaded), the images from the 4 scans were processed with our automatic 3D reconstruction software. This processing produced 4 "point clouds" of 3D data: one point cloud for each scan. Next the 4 point clouds were aligned with each other to create a single point cloud of, in this case, 20 million points.

Point clouds are a precise but inefficient way to format and store 3D data; they can be compared to the BMP format for 2D images. Just as compressed JPEGs are about 10x more efficient than uncompressed BMPs for storing 2D images, triangular meshes are a more efficient way to store 3D data than uncompressed point clouds. Meshes are efficient because a group of 3 points defining a single triangle can replace thousands (or millions) of points that lie in a plane. Decades of work from people around the world have produced mature procedures for generating meshes from point clouds. Our current meshing routine turned the 400 MB point cloud of 20,000,000 points into a 20 MB mesh of 24,000 triangles. In the future we will use more efficient meshing procedures that produce better meshes with even fewer triangles. After meshing we have a 3D model of the area that was scanned, but at this point the mesh is not photorealistic.
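As a rough sanity check on those sizes, the point-cloud-versus-mesh storage gap falls out of back-of-the-envelope arithmetic. The bytes-per-point and bytes-per-triangle figures below are illustrative assumptions, not our actual file formats:

```python
# Back-of-the-envelope storage comparison; the per-element sizes are
# illustrative assumptions, not our actual file formats.
BYTES_PER_POINT = 20            # x, y, z as 4-byte floats plus RGB color + padding
POINTS = 20_000_000

cloud_mb = POINTS * BYTES_PER_POINT / 1e6
print(f"point cloud: {cloud_mb:.0f} MB")            # ~400 MB, as in the scan above

BYTES_PER_TRIANGLE = 36         # three unshared 12-byte vertices
TRIANGLES = 24_000

mesh_geometry_mb = TRIANGLES * BYTES_PER_TRIANGLE / 1e6
print(f"mesh geometry: {mesh_geometry_mb:.2f} MB")  # under 1 MB of geometry
```

Note that the triangle geometry itself is tiny; most of a textured mesh file's size (the 20 MB figure above) comes from the projected image textures, not the triangles.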
We make the model photorealistic by "projecting" the original color images taken during the scanning process onto the mesh. This automatic process is called "texture projection," and when it is done well it results in a photorealistic 3D model. Texture projection works very well when everything is correctly aligned and registered, but alignment errors can rapidly compound and produce errors that make a model look bad. The alignment errors in this process come from several sources in the calibration/scanning/processing pipeline:

- Lens distortion correction errors inside each camera
- Alignment errors between the left and right camera in each of the 4 pairs of cameras
- Alignment errors between each of the 4 pairs of cameras
- Alignment errors between the 4 scans

These are all well-defined problems that we are working on. We could proceed slowly and reduce the errors by recalibrating the existing Proto-4F 3D-360 camera system. This approach would take weeks, and while it could cut the errors in half a few times, it cannot correct the built-in limitations of our current lenses and calibration facility. Another option is to build on our two-plus years of experience with the Proto-4x family and design a new Proto-5x series. The new design will have more lenses, higher resolution sensors, faster processors (ARM/AMD Fusion/Tegra/FPGA/other?), and it will be calibrated with a 10x larger "calibration bunker." I am currently working on Proto-5x designs, and a key change may be to increase the number of cameras from the current 8 to 32, or even as many as 100. A large array of inexpensive lenses can cost less and outperform a small number of expensive lenses. The trick is to design a manufacturable and inexpensive array of sensors, lenses, and processors.
While a design with up to 100 cameras may sound extravagant, remember that a fly's eyes have over 1,000 lenses. Because Proto-5x will require the design, layout, fabrication and testing of a new camera/processor board, this approach will take at least four months. Software porting, calibration, and testing could add another 4 to 8 months to the process. Depending on the final design, the Proto-5x family could reduce the errors by a factor of 10 or more.
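At its core, the texture projection described earlier maps each point of the mesh into each calibrated camera image, and any calibration error shifts where the projected color lands. A minimal pinhole-camera sketch of that mapping (the pose and intrinsics here are made-up illustrative values, not our scanner's calibration data):

```python
import numpy as np

def project_point(world_pt, pose, K):
    """Project a 3D world point into pixel coordinates.

    pose: 3x4 [R|t] world-to-camera transform
    K:    3x3 camera intrinsics matrix
    Both are illustrative stand-ins for real calibration data.
    """
    p_cam = pose @ np.append(world_pt, 1.0)   # world frame -> camera frame
    uv = K @ p_cam                            # camera frame -> image plane
    return uv[:2] / uv[2]                     # perspective divide -> pixels

# Example: camera at the origin looking down +Z, 100-pixel focal length,
# principal point at (50, 50).
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
pose = np.hstack([np.eye(3), np.zeros((3, 1))])
print(project_point(np.array([0.0, 0.0, 2.0]), pose, K))  # -> [50. 50.]
```

A point on the camera's optical axis lands exactly on the principal point; a small error in `pose` or `K` shifts every projected pixel, which is exactly how the alignment errors listed above become visible seams in the textured model.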

First Low Resolution 3D Point Cloud from Proto-4F

October 25th, 2010 Comments off
The cameras are finally calibrated, and the communications and power systems are installed and working. Now I can finally begin producing scans to test and fine-tune the software.

Today I scanned part of the lab, and the animated GIF illustrates the 3D nature of the scan. When producing a 3D model, multiple perspectives must be captured to fill in occlusions (blind spots). For this model, three scans from different locations were merged to produce a point cloud. The GIF consists of 7 different screenshots of the point cloud. While there are still occlusions, many have been filled. For example, notice that you can see both above and below the table.

The original 32-bit software that we use to turn pictures into 3D models is almost 5 years old, and it runs on 32-bit Windows XP. The old software often crashes when processing high resolution images because the 2GB memory limit isn't enough for the gigabytes of data that our scanner can quickly produce. Today's scan was made on a computer running 64-bit Windows 7, and we are currently replacing the old 32-bit software with more advanced 64-bit code. The new software runs much faster in 64-bit mode because it can keep temporary files in RAM instead of writing them to and reading them from a slow disk; even a Solid State Drive (SSD) adds minutes of unnecessary processing time.

COMING UP: Much better scans processed by SketchUp & posted into Google Earth.

Web Promotion with Google Earth

June 21st, 2010 Comments off

Four buildings that we have recently uploaded

We can now create 3D buildings and place them into Google Earth. The results can be viewed with a normal web browser, and we are exploring how to take advantage of this new low-cost form of web promotion. Creating 3D models for Google Earth is a useful capability that we intend to continue refining, and we will begin offering the service of modeling local buildings and placing them into Google Earth. Click here to go to Purcellville, Virginia and see one of our 3D models on Google Maps. Use the left, right, and center mouse buttons for navigation.

Here is an overview of how to put models onto Google Earth. The program Google SketchUp can be used to create photorealistic 3D models of real buildings. Once a model is built, it can be uploaded to Google for possible placement into Google Earth. A model is integrated into Google Earth only after a reviewer decides that it satisfies all of Google's acceptance criteria. Our early efforts were rejected for reasons like being too big, being too complex, or exhibiting "Z-fighting." At times we have experienced delays of over four weeks during the review process, but after nearly three months of effort we have learned how to efficiently make models that Google will accept and place into Google Earth.

While creating 3D content for Google Earth is interesting, our main objective is to build a practical 3D-360 photorealistic scanner. Prototype-4E is in the final stages of assembly, and this July we plan to begin using it to create photorealistic interiors for our 3D models. When the results are good enough, we plan to use SketchUp and Google Earth to demonstrate the ability to create models that you can walk around and also into. Once inside, you will see detailed photorealistic interiors that are too complex to model with most traditional techniques.

3D-360 Camera vs Canon 5D

October 27th, 2009 Comments off
The Prototype-4.x family of 3D-360s is based on a camera that we have been developing for over a year. While several areas of enhancement are still left to be implemented, the new camera is ready to be compared against the Canon 5D. Prototype-3 used eight Canon 5Ds, and the new camera in Prototype-4 needs to meet or exceed the 5D's performance.

One significant difference between our camera and the Canon 5D is that the 5D (like all other color cameras) uses tiny color filters arranged in a Bayer pattern on top of the individual pixels inside the camera. While the 5D has 12 million pixels, only 3 million are RED, 6 million are GREEN, and 3 million are BLUE. Our camera is arguably a 15 million pixel sensor because it cycles three large filters over the 5 million pixel monochrome sensor to produce 5 million RED pixels, 5 million GREEN pixels, and 5 million BLUE pixels. Our camera is immune to color artifacts caused by the Bayer pattern, but taking a picture takes three times longer because the filters must be rotated into place between shots. Fortunately our system automatically changes between filters in less than one second. In the future we may want to add filters for other parts of the spectrum, including infrared (IR) and ultraviolet (UV).

The purpose of this test is to compare the color reproduction, noise, and Bayer pattern artifacts of the two cameras. The 5D has a 14mm Canon lens, and the FOV is similar to our custom lens. Here is the test procedure:

1) Take a picture with each camera in RAW mode
2) Use minimal automatic processing on each image. For the 3D-360, Photoshop was used for color balance and sharpening; for the Canon 5D, the image was processed with DxO
3) Compare the cropped images at actual size and zoomed to 600%

Here are the results:

[Image: crop from the Prototype-4 camera]

Above is the shot from the Prototype-4 camera, and below is the shot from the Canon 5D.
[Image: crop from the Canon 5D]

The two shots show that our camera compares well to the Canon 5D. A slight BLUE halo is visible to the left of some objects, but this may be caused by a dirty or warped Wratten filter. Below is a zoomed comparison of the areas inside the GREEN circles.

[Image: zoomed comparison of the two crops]

Close inspection shows that the 3D-360 camera has less noise and fewer Bayer pattern artifacts, but the 5D seems a little sharper. The difference in sharpness could be related to the dynamic range of the two images. The raw 3D-360 image covers a linear range of 24 bits, but the 5D covers a smaller range of only 12 bits. We use a combination of linear and logarithmic curves to squeeze the 24 bits per color channel per pixel down to 16 bits per channel. To improve contrast we may reduce our range from 24 bits to 22 bits.

I am pleased with this early test, and we are currently implementing upgrades that should make the difference even more dramatic.
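The Bayer-pattern pixel accounting mentioned above (3 million red, 6 million green, and 3 million blue samples out of 12 million pixels) falls directly out of the 2x2 RGGB tile, as this small sketch shows. The 4000 x 3000 sensor layout is just an illustrative assumption:

```python
def bayer_channel_counts(height, width):
    """Count the R, G, B samples an RGGB Bayer mosaic actually records.

    Each 2x2 tile holds one red, two green, and one blue filter, so each
    color channel is sampled at only 1/4 (R, B) or 1/2 (G) of the pixel
    sites; the remaining values must be interpolated (demosaiced).
    """
    tiles = (height // 2) * (width // 2)
    return tiles, 2 * tiles, tiles  # (red, green, blue)

# A 12-megapixel Bayer sensor (illustrative 4000 x 3000 layout):
r, g, b = bayer_channel_counts(3000, 4000)
print(r, g, b)  # 3000000 6000000 3000000
```

By contrast, our filter-wheel camera records every channel at every one of its 5 million pixel sites, which is why it avoids demosaicing artifacts entirely.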

Converting 16 bit Images to 8-bit Images

June 21st, 2009 Comments off
We spent the last year designing and building a camera and software that can capture images with pixels that are 16 bits deep. It isn't easy to view these images since most tools expect 8-bit images, so the following routine is used to squeeze the 65,536 values of a 16-bit image down to the 256 values of an 8-bit image. There are thousands of ways to compress a 16-bit image, and this approach is tailored to our machine vision/stereoscopic needs.

This approach to compressing pixel intensities is based on the octave relationship, and it is similar to the way a piano's keys represent a wide range of frequencies. Each "octave" in this case is a range of light intensity that is either twice as bright or half as bright as its neighboring octave. Each octave of light intensity is broken into 20 steps, similar to the 12 keys (steps) in each octave of a piano keyboard. Below is a table and chart that illustrate the conversion from 16-bit images to 8 bits. Each red dot in the chart represents an octave, and there are 20 steps inside each octave. The approach outlined here allows an 8-bit image to evenly cover 12 octaves: almost the full dynamic range of a 16-bit image.

[Chart: the octave conversion curve]

This curve will probably be modified many times with different numbers of divisions per octave, but the basic approach will stay the same. Below is an example of an original 16-bit linear image and an 8-bit version of the same image after application of the above logarithmic curve. The pictures are not pretty, but they illustrate how details can be pulled from the shadows. The 16-bit linear image is on the left, and the curve-adjusted 8-bit image is on the right.

[Image: side-by-side comparison of the 16-bit and 8-bit images]

The image on the right lets you see the details in the shadows (notice the wires in the upper right) as well as details in the bright areas.
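The octave mapping described above (20 steps per octave, 12 octaves, so 240 of the 256 available 8-bit codes) can be sketched as a logarithmic curve. The intensity floor of 16 is an assumption chosen so that the range 16 to 65,535 spans exactly 12 octaves; our actual lookup table differs in its details:

```python
import numpy as np

def compress_16_to_8(img16, steps_per_octave=20, floor=16):
    """Map 16-bit pixel values to 8 bits on a logarithmic (octave) curve.

    Values from `floor` to 65535 span log2(65535/16) ~= 12 octaves, and
    each octave gets `steps_per_octave` output levels, so 12 * 20 = 240
    of the 256 8-bit codes are used. `floor` is an illustrative choice,
    not the exact curve from our table.
    """
    v = np.clip(np.asarray(img16, dtype=np.float64), floor, 65535)
    out = steps_per_octave * np.log2(v / floor)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Each doubling of intensity moves the output up by one octave (20 steps):
print(compress_16_to_8([16, 32, 64, 128]))  # -> [ 0 20 40 60]
```

Because each doubling of brightness gets the same number of output steps, deep shadows receive as many distinct levels as the highlights, which is why details can be pulled from the dark areas of the example images.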
An image editing program could be used to manually adjust brightness and extract details from the 16-bit image, but the curve described here can do a good job automatically. Next post: Rectification.