Here is an early 3D scan from our latest prototype scanner. EARLY is a key word here, because our scanner has only been operational for a couple of weeks. During the last year my team completely upgraded the scanner hardware, including the cameras, lenses, chassis, and calibration tools. We have also ported our software from Windows to Linux and from CUDA to OpenCL. Weeks of fine-tuning and calibration still need to be done, but we feel the early results are worth posting. The new system is called Proto-5A, and below are the most significant improvements over its predecessor, Proto-4F:
Produces higher resolution 3D scans
Scans 10x to 20x faster
Uses less power
Scans with nearly 100% reliability
Below is a first test example of Proto-5A’s output. I’ll upload better versions as they are produced.
This was a risky upgrade of both hardware & software. While it was our 13th hardware iteration, it was our first major change in nearly 5 years. Our plan was sound, and we managed to avoid the Second System Effect that can kill an overly ambitious project. Instead of adding unnecessary bells & whistles, we refined proven features and eliminated the compromises and inefficiencies that had worked their way into this 12-year project. The result of our efforts is a clean, fast, and efficient scanner.
We have proven the Proto-5A approach is viable, and we are motivated to begin making plans for Proto-5B: a smaller and lighter version. Proto-5B will be low risk because it will use 90% of Proto-5A’s software. Most of the effort will go into designing a new circuit board that will integrate all of our current off-the-shelf boards. This new board will improve performance and reduce system size, weight and cost. We have the necessary skills to design and build the board, but collaborating with a more established team would also be attractive.
Now that the bulk of the 3D-360 R&D has been completed, we have begun the search for a partner who is interested in adapting the system for a specific market (or markets). Potential 3D applications include photorealistic architecture scanning, insurance or forensic scanning, content creation for training or video games, content creation for head-mounted VR such as Oculus, and robotic vision & navigation. VR & robotics are exciting options, but we will pursue whatever market makes sense.
Below are our objectives for the rest of 2015:
Find a partner interested in moving the Proto-5x concept forward. The ideal partner could support our development plan, or we could jointly develop a new plan. The 3D-360 IP is valuable, and we need a partner willing to help defend our international 3D-360 patents.
Expand the capabilities of our 3D scanning process. Our scanner can be configured to produce high-resolution photorealistic scans in about 5 minutes, or it can produce high-speed low-resolution scans at 10 to 60 Hz (we haven’t benchmarked high-speed mode yet). We will spend the summer developing routines to enable & refine these features.
Collect feedback from potential users/customers by participating in online 3D communities. We will solicit market feedback by posting downloadable 3D models. The market feedback will help shape our future development.
Stereo reconstruction works by identifying similar features within two images, so we will use any technique that enhances small features. As a first step in our stereo reconstruction pipeline we currently use bilinear interpolation to rectify/dewarp images. While bilinear interpolation is easy to code and does a good job, there are many other types of interpolation worth considering. The two images below were resampled with bicubic and bilinear interpolation, respectively. The results confirm that bicubic is sharper, so we will eventually migrate to bicubic interpolation.
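For anyone who wants to experiment with the difference, here is a minimal sketch in Python/OpenCV of rectifying an image with each interpolation mode. The file names, camera matrix, and distortion coefficients are placeholders for illustration, not our actual calibration data:

```python
import cv2
import numpy as np

# Placeholder input; real maps would come from our stereo calibration.
img = cv2.imread("left_raw.png")
h, w = img.shape[:2]

# Made-up intrinsics and radial distortion, just to build a dewarp map.
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.25, 0.1, 0.0, 0.0, 0.0])
map_x, map_y = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)

# Same rectification map, two interpolation modes.
bilinear = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
bicubic = cv2.remap(img, map_x, map_y, cv2.INTER_CUBIC)  # sharper small features
cv2.imwrite("rect_bilinear.png", bilinear)
cv2.imwrite("rect_bicubic.png", bicubic)
```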
We spent the last year designing and building a camera and software that can capture images with pixels that are 16-bits deep. It isn’t easy to view these images since most tools expect 8-bit images, so the following routine is used to squeeze the 65,536 values in the 16-bit image down to the 256 values of an 8-bit image. There are thousands of ways to compress a 16-bit image, and this approach is specifically for our machine vision/stereoscopic needs.
This approach to compressing pixel intensities is based on the octave relationship, and it is similar to the way a piano’s keys represent a wide range of frequencies. Each “octave” in this case is a light intensity that is either twice as bright or half as bright as its neighboring octave. Each octave of light intensity is broken into 20 steps, similar to the 12 keys (steps) in each octave of a piano keyboard. Below is a table and chart that illustrate the conversion from 16 bits to 8 bits. Each red dot in the chart represents an octave, and there are 20 steps inside each octave. The approach outlined here allows an 8-bit image to evenly cover 12 octaves: almost the full dynamic range of a 16-bit image.
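For reference, here is one way to code the curve just described: 20 steps per octave across 12 octaves, using 240 of the 256 available 8-bit codes. The exact floor value and the rounding below are one reasonable reading of the table, and the real lookup values may differ slightly:

```python
import numpy as np

def compress_16_to_8(img16, steps_per_octave=20, octaves=12):
    """Squeeze a 16-bit image into 8 bits with an octave-based log curve."""
    img = img16.astype(np.float64)
    # The darkest level that still gets its own output code; everything
    # below the bottom of the 12 covered octaves is clipped to it.
    floor = 65535.0 / (2.0 ** octaves)
    img = np.clip(img, floor, 65535.0)
    # log2(img / floor) runs from 0 to `octaves`; each octave gets
    # `steps_per_octave` codes, so 12 * 20 = 240 of the 256 8-bit values.
    out = steps_per_octave * np.log2(img / floor)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```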
This curve will probably be modified many times with different numbers of divisions per octave, but the basic approach will stay the same. Below is an example of an original 16-bit linear image, and an 8-bit version of the same image after application of the above logarithmic curve. The pictures are not pretty, but they illustrate how details can be pulled from the shadows. The 16-bit linear image is on the left, and the curve-adjusted 8-bit image is on the right.
The image on the right lets you see the details in the shadows (notice the wires in the upper right) as well as details in the bright areas. An image editing program could be used to manually adjust brightness and extract details from the 16-bit image, but the curve described here does a good job automatically.
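As a usage sketch (assuming the compress_16_to_8 function above; the file names are placeholders), OpenCV can load the 16-bit image unchanged and write out the curve-adjusted 8-bit version:

```python
import cv2

raw16 = cv2.imread("scan_16bit.png", cv2.IMREAD_UNCHANGED)  # keep 16-bit depth
view8 = compress_16_to_8(raw16)  # curve function defined above
cv2.imwrite("scan_8bit_curve.png", view8)
```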
From June 11 to 13, Google hosted a 3-day training called SketchUp Basecamp. About 400 people from all over the world gathered to learn about the 3D visualization techniques of SketchUp and how to integrate the 3D models into Google Earth. We spent most of our time on this patio and in the buildings that you see.
Inspired by Google’s global perspective, I decided to make a Google-centric version of a “Google Earth.” I stitched 18 images together to make a spherical panorama, and then warped the image to form a globe. This is a first draft, and I plan to post a better HDR version without the tripod and shadow once I get back home.
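The globe warp itself is essentially a polar remap of the panorama. Below is a rough sketch of the idea in Python/NumPy; it uses a simple polar projection rather than a true stereographic one, and the output size and orientation are assumptions:

```python
import numpy as np

def little_planet(pano, size=1024):
    """Warp an equirectangular panorama into a globe-like disc.

    pano: H x W (x 3) array covering 360 degrees across and 180 degrees
    down, with the ground (tripod point) on the bottom row.
    """
    h, w = pano.shape[:2]
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - size / 2.0, ys - size / 2.0
    r = np.sqrt(dx ** 2 + dy ** 2) / (size / 2.0)  # 0 at center, 1 at edge
    theta = np.arctan2(dy, dx)                     # angle around the globe
    u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)  # pano column
    v = ((1.0 - np.clip(r, 0, 1)) * (h - 1)).astype(int)       # ground -> center
    return pano[v, u]
```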
The stitching errors have been removed, and HDR & tone mapping have improved the details. There will probably be a V6 with a few more tweaks. A little more sky would be nice, and there are some HDR artifacts in the lower left.