We have steadily improved our scanning results over the last 6 weeks by modifying hardware, writing new software, and tuning over a dozen variables. The video below demonstrates the effect of our enhanced noise reduction:
Low noise in 3D models is important for two reasons:
- Low-noise 3D models look better.
- Low-noise 3D models are easier to compress & display. In many cases smoothing should allow us to reduce a scan to less than 1% of its original size.
Noise reduction & smoothing have been around for decades, but there is a delicate balance between appropriate smoothing and over-smoothing, which can make objects look like jelly beans. Our past experience with generic smoothing routines has been disappointing because they often round edges & eliminate important details.
Why Our Smoothing Is Better Than Other Options
Instead of applying generic smoothing filters to our data after the 3D data has been created, we apply smoothing during the creation of 3D data. We can achieve an optimal level of smoothness because our smoothing software has intimate knowledge of the scanner hardware and configuration. Stereo scanners like ours can be accurate to a fraction of a millimeter up close, but precision falls off as the distance from the scanner increases. Our smoothing routines use this fact to smooth our 3D data with more finesse.
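To make the distance-dependent idea concrete: in a stereo system, depth uncertainty grows roughly with the square of distance, so smoothing strength can be scaled to the expected noise at each point. Below is a minimal 1-D sketch of that idea; the focal length, baseline, disparity error, and blending scale are illustrative placeholder values, not our actual scanner configuration.

```python
import numpy as np

def expected_depth_noise(z, focal_px=1400.0, baseline_m=0.30, disp_err_px=0.25):
    """Stereo depth uncertainty grows roughly with the square of distance:
    sigma_z ~= z^2 * sigma_d / (f * B). All parameter values are illustrative."""
    return (z ** 2) * disp_err_px / (focal_px * baseline_m)

def adaptive_smooth(depths, focal_px=1400.0, baseline_m=0.30, disp_err_px=0.25):
    """Blend each depth sample toward its neighborhood mean, weighting the
    blend by the expected noise at that distance: distant (noisy) samples
    are smoothed hard, nearby (precise) samples are left nearly untouched."""
    depths = np.asarray(depths, dtype=float)
    sigma = expected_depth_noise(depths, focal_px, baseline_m, disp_err_px)
    # Simple 1-D neighborhood mean; a real implementation would use a
    # 3-D neighborhood drawn from the point cloud.
    kernel = np.ones(5) / 5.0
    local_mean = np.convolve(depths, kernel, mode="same")
    # Map expected noise to a 0..1 blend factor; the 0.01 m scale is an assumption.
    alpha = np.clip(sigma / 0.01, 0.0, 1.0)
    return (1 - alpha) * depths + alpha * local_mean
```

The key point is that the smoothing weight comes from the scanner geometry itself rather than from a one-size-fits-all filter applied after the fact.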
Here is an early 3D scan from our latest prototype scanner. EARLY is a key word here, because our scanner has only been operational for a couple of weeks. During the last year my team has completely upgraded the scanner hardware including cameras, lenses, chassis, and calibration tools. We have also ported our software from Windows to Linux and from CUDA to OpenCL. We still have weeks of fine-tuning and calibration ahead of us, but we feel that the early results are worth posting. The new system is called Proto-5A, and below are its most significant improvements over its predecessor, Proto-4F:
- Produces higher resolution 3D scans
- Scans 10x to 20x faster
- Uses less power
- Scans with nearly 100% reliability
Below is a first test example of Proto-5A’s output. I’ll upload better versions as they are produced.
This was a risky upgrade of both hardware & software. While it was our 13th hardware iteration, it was our first major change in nearly 5 years. Our plan was sound, and we managed to avoid the Second System Effect that can kill a project that is too aggressive. Instead of adding unnecessary bells & whistles, we refined proven features and eliminated compromises and inefficiencies that had worked their way into this 12-year project. The result of our efforts is a clean, fast, and efficient scanner.
We have proven the Proto-5A approach is viable, and we are motivated to begin making plans for Proto-5B: a smaller and lighter version. Proto-5B will be low risk because it will use 90% of Proto-5A’s software. Most of the effort will go into designing a new circuit board that will integrate all of our current off-the-shelf boards. This new board will improve performance and reduce system size, weight and cost. We have the necessary skills to design and build the board, but collaborating with a more established team would also be attractive.
Now that the bulk of the 3D-360 R&D has been completed, we have begun the search for a partner who is interested in adapting the system for a specific market (or markets). Potential 3D applications include photorealistic architecture scanning, insurance or forensic scanning, content creation for training or video games, content creation for head-mounted VR such as Oculus, and robotic vision & navigation. VR & robotics are exciting options, but we will pursue whatever market makes sense.
Below are our objectives for the rest of 2015:
- Find a partner interested in moving the Proto-5x concept forward. The ideal partner could support our development plan, or we could jointly develop a new plan. The 3D-360 IP is valuable, and we need a partner willing to help defend our international 3D-360 patents.
- Expand the capabilities of our 3D Scanning process. Our scanner can be configured to produce high resolution photorealistic scans in about 5 minutes, or it can produce high-speed low-resolution scans at 10 to 60 Hz (we haven’t benchmarked high-speed yet). We will spend the summer developing routines to enable & refine these features.
- Collect feedback from potential users/customers by participating in online 3D communities. We will solicit market feedback by posting downloadable 3D models. The market feedback will help shape our future development.
This year’s enhancements to the image processing routines in our stereo scanning software have improved processing speed and 3D model accuracy. Comparisons between our current results and those from 9 months ago show that we have reduced the magnitude of one type of geometric error in our 3D scans by a factor of 2 to 4, and we project that future software and hardware enhancements will allow us to cut the noise in half at least 5 more times. Finding/developing a benchmark to clearly reflect these results has been tricky.
In the previous post we compared scans by superimposing them on each other and then comparing the non-linearity of flat surfaces. Because each surface should be flat, any deviation from a straight line represents a scanning error. We used standard deviation analysis to determine that our improvements had cut the error in half for this specific test, but that one number doesn’t tell the whole story. What other metrics and ratios should we use to judge the quality of the 3D scans that our scanner produces?
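The flatness test described above can be sketched in a few lines: fit a straight line to the sampled points and report the standard deviation of the residuals, so that lower means flatter. This is a minimal sketch, and the data below is synthetic for illustration, not real scan data.

```python
import numpy as np

def flatness_error(points):
    """Fit a straight line to sampled points from a nominally flat surface
    and return the standard deviation of the residuals (lower = flatter).
    `points` is an (N, 2) array of (position_along_line, height) samples."""
    points = np.asarray(points, dtype=float)
    x, y = points[:, 0], points[:, 1]
    slope, intercept = np.polyfit(x, y, 1)   # best-fit line
    residuals = y - (slope * x + intercept)  # deviation from flat
    return residuals.std()

# Synthetic check: a perfectly flat (but tilted) surface scores ~0,
# while added noise raises the score.
x = np.linspace(0, 10, 750)
flat = np.column_stack([x, 0.5 * x + 2.0])
rng = np.random.default_rng(0)
noisy = np.column_stack([x, 0.5 * x + 2.0 + rng.normal(0, 1.7, x.size)])
```

One design note: fitting the line first means a tilted-but-flat surface scores near zero, so the metric measures flatness rather than orientation.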
Until we come up with a more useful metric to quantify the relative quality, we will use human perception to evaluate the quality of scans. The video below shows the results of our last 9 months of software enhancement.
This post compares our latest 3D results with results from November 2012 (3 months ago).
Our 3D models are generated by processing pairs of 2D images, and the same 2D images that were processed in the November post have been processed again. The only difference between the two 3D models is that the new version was created using more sophisticated sub-pixel processing routines.
To compare the models we use the 3D program Scanalyze from Stanford. The models can be viewed with realistic coloring, but it is easier to compare them if they are given “false colors.” In the video below, Scanalyze is used to display the latest 3D model (GREEN) and the older 3D model (RED). For the comparison we zoom into a part of the model that should be flat, and then we study the points in each model associated with a line across this flat region. If the line is flat then the model is accurate, but any deviations from a straight line represent errors.
To evaluate the relative error of the two approaches we calculate the Standard Deviation (STDEV) of 750 points in each model that should define a straight line. The results below show that the errors in the new model have a STDEV of 0.75, and this is less than half of the November results with a STDEV of 1.7.
It is nice to see that the GREEN line is over 2x better (flatter) than the RED line, but we were hoping for an even larger improvement. Unfortunately we must accept the fact that better software can help reduce errors, but software cannot completely overcome the small errors that our current calibration “bakes” into the 2D images. The correct way to fix the problem is to bring sub-pixel precision to the rectification process of the original 2D images. We expect a much larger error reduction after implementing the new calibration/rectification process.
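For readers curious what sub-pixel precision means here, below is a minimal 1-D sketch: an integer shift can only move whole pixels, while interpolation lets a row of samples be moved by a fraction of a pixel, which is what rectifying to better than 1/2-pixel accuracy requires. A real rectifier works in 2-D (e.g. with OpenCV’s remap functions); this shows only the core idea.

```python
import numpy as np

def subpixel_vertical_shift(column, shift_px):
    """Resample a 1-D column of pixel values with a fractional shift using
    linear interpolation. `shift_px` may be non-integer; edge values are
    replicated. This is the 1-D analog of the per-row resampling that
    full 2-D rectification performs."""
    column = np.asarray(column, dtype=float)
    idx = np.arange(column.size)
    src = idx + shift_px  # where each output sample reads from
    return np.interp(src, idx, column)
```

For example, shifting a vertical gradient by half a pixel lands each output sample exactly between two input rows, something no whole-pixel shift can achieve.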
We spent the summer of 2012 enhancing our 3D scanner. The 3D scan below with 3.5 million points shows that the system can now produce high-resolution 3D models. Some improvement was the result of integrating code from the open source projects Point Cloud Library & OpenCV, but the largest improvement came from camera recalibration.
(direct YouTube link)
Why did we need to recalibrate?
Over the last year, 4 of the 8 image sensor boards in our prototype had vertically shifted by up to 5 pixels since the last calibration. Because stereo cameras need to be calibrated to within 1/2 pixel, a 5-pixel error is completely unacceptable. The 5-pixel shift represents a very small mechanical change. The pixels on our image sensor are 2.2 microns per side, so a 5-pixel error is a shift of only 11 microns: less than the diameter of one human hair!
The quality of our 3D models improved significantly once we corrected the problem by shifting our images up or down the appropriate number of pixels.
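The correction itself is simple once the shift is measured: move each affected image back by the measured number of whole pixels. Below is a minimal sketch; the edge-replication padding policy is an assumption for illustration, while the 2.2-micron pixel pitch comes from the numbers above.

```python
import numpy as np

PIXEL_PITCH_UM = 2.2  # sensor pixel size, per the post

def shift_in_microns(shift_px):
    """Convert a measured pixel shift to physical sensor motion in microns."""
    return shift_px * PIXEL_PITCH_UM

def correct_vertical_shift(image, shift_px):
    """Shift an image up (positive) or down (negative) by whole pixels,
    replicating the edge row so the frame size is unchanged.
    (Padding by replication is an illustrative choice.)"""
    image = np.asarray(image)
    if shift_px == 0:
        return image.copy()
    if shift_px > 0:  # move content up: drop top rows, pad bottom
        body = image[shift_px:]
        pad = np.repeat(image[-1:], shift_px, axis=0)
        return np.concatenate([body, pad], axis=0)
    # move content down: drop bottom rows, pad top
    body = image[:shift_px]
    pad = np.repeat(image[:1], -shift_px, axis=0)
    return np.concatenate([pad, body], axis=0)
```

A quick sanity check on the numbers from above: a 5-pixel shift works out to 5 × 2.2 = 11 microns of physical motion.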
Why wasn’t this shift discovered sooner?
Our stereoscopic camera system had been producing good results, so we incorrectly assumed that the cameras were still calibrated. Because we trusted the calibration, we spent the summer carefully reviewing everything else in the system. During our search we optimized the code to improve 3D reconstruction speed and quality, but certain problems remained. It wasn’t until this September that we identified and fixed the calibration problem.
This experience has demonstrated the robustness of our 3D scanning approach which uses both passive pixel matching and pattern projection. Before we fixed the calibration errors, the passive pixel-matching part of our scanning process was effectively disabled. Our robust pattern projection is the only reason that we were able to produce usable 3D models from such a poor calibration. Now that we have both good calibration and solid pattern projection, our results are the best ever.
There are still some loose ends from this summer’s work that we want to tie up by the end of the year. These last few tweaks will improve 3D accuracy and reduce or eliminate the distortion in surfaces that should be flat.
Finally, we have also gained a valuable insight for the next design. The new system will be designed to maintain camera rigidity/stability to within about 1/10 micron. This is about 100x better than the current prototype. We plan to finalize the new system design and begin construction in 2013.