

Archive for the ‘Camera Calibration’ Category

Scanning Results Keep Getting Better

May 20th, 2015

We have steadily improved our scanning results over the last 6 weeks by modifying hardware, writing new software, and tuning over a dozen variables. The video below demonstrates the effect of our enhanced noise reduction:

Low noise in 3D models is important for two reasons:

  1. Low-noise 3D looks better.
  2. Low-noise 3D models are easier to compress & display. In many cases smoothing should allow us to reduce a scan to less than 1% of the original size (one way to do this is sketched just below).
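We haven't settled on an exact compression pipeline, but as a rough illustration of why smoothness matters: on a low-noise mesh, quadric decimation can collapse smooth regions into far fewer triangles. The sketch below uses the open-source Open3D library (not our scanner software), and the file names are placeholders.

```python
# Illustration only: decimating a smooth mesh with Open3D.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.ply")  # placeholder file name
print(f"original: {len(mesh.triangles)} triangles")

# Quadric decimation merges nearly-coplanar triangles, so a well-smoothed
# scan can survive a ~100x reduction with little visible change.
target = max(len(mesh.triangles) // 100, 4)
small = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
print(f"decimated: {len(small.triangles)} triangles")

o3d.io.write_triangle_mesh("scan_small.ply", small)
```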

Noise reduction & smoothing have been around for decades, but there is a delicate balance between appropriate smoothing and over-smoothing, which can make objects look like jelly beans. Our past experience with generic smoothing routines has been disappointing because they often round edges & eliminate important details.

Why Our Smoothing Is Better Than Other Options

Instead of applying generic smoothing filters after the 3D data has been created, we apply smoothing during the creation of the 3D data. We can achieve an optimal level of smoothness because our smoothing software has intimate knowledge of the scanner hardware and configuration. Stereo scanners like ours can be accurate to a fraction of a millimeter up close, but precision falls off as the distance from the scanner increases. Our smoothing routines use this fact to smooth our 3D data with more finesse.
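Our actual routines aren't published here, but the principle can be shown in a small, hypothetical sketch: estimate the expected stereo error at each depth (stereo depth error grows roughly with the square of the distance) and blend far-away, noisy points toward a local average more aggressively than precise nearby points. The camera parameters and blending constants below are made up for illustration.

```python
# Hypothetical depth-aware smoothing: smooth harder where stereo is noisier.
import numpy as np

def smooth_depth(depth_mm, sigma_px=0.1, focal_px=2000.0, baseline_mm=100.0):
    """Blend each depth toward a 3x3 local average, weighted by the
    expected stereo error at that depth (all parameters are placeholders)."""
    # Expected stereo depth error: dZ ~ Z^2 * sigma / (focal * baseline)
    expected_err = depth_mm ** 2 * sigma_px / (focal_px * baseline_mm)

    # 3x3 box average as the smoothing target.
    padded = np.pad(depth_mm, 1, mode="edge")
    h, w = depth_mm.shape
    local_avg = sum(padded[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)) / 9.0

    # Near points (small expected error) keep their raw values;
    # far points (large expected error) are pulled toward the average.
    weight = np.clip(expected_err / (expected_err + 0.1), 0.0, 0.95)
    return (1.0 - weight) * depth_mm + weight * local_avg
```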

9 Months of Software Enhancements Have Cut Errors in Half Again

November 11th, 2013

This year’s enhancements to the image processing routines in our stereo scanning software have improved processing speed and 3D model accuracy. Comparisons between our current results and those from 9 months ago show that we have reduced the magnitude of one type of geometric error in our 3D scans by a factor of 2 to 4, and we project that future software and hardware enhancements will allow us to cut the noise in half at least 5 more times. Finding or developing a benchmark that clearly reflects these results has been tricky.

In the previous post we compared scans by superimposing them on each other and then comparing the non-linearity of flat surfaces. Because each surface should be flat, any deviation from a straight line represents a scanning error. We used standard deviation analysis to determine that our improvements had cut the error in half for this specific test, but that one number doesn’t tell the whole story. What other metrics and ratios should we use to judge the quality of the 3D scans that our scanner produces?

Until we come up with a more useful metric, we will rely on human perception to judge the quality of our scans. The video below shows the results of our last 9 months of software enhancement.

Progress Report: Sub-Pixel Upgrade Cuts Errors in Half

February 8th, 2013

This post compares our latest 3D results with results from November 2012 (3 months ago).

Our 3D models are generated by processing pairs of 2D images, and the same 2D images that were processed in the November post have been processed again. The only difference between the two 3D models is that the new version was created using more sophisticated sub-pixel processing routines.
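For readers curious what sub-pixel processing involves, here is one common refinement, shown only as an example of the general idea (not necessarily our exact routines): fit a parabola through the matching costs at the best integer disparity and its two neighbors, and take the parabola's vertex as the refined disparity.

```python
# A standard sub-pixel disparity refinement: parabolic interpolation
# over matching costs (illustrative; not necessarily our exact routine).
import numpy as np

def subpixel_disparity(costs, d_best):
    """costs: matching cost per integer disparity; d_best: index of the
    minimum cost. Returns a disparity refined to a fraction of a pixel."""
    if d_best == 0 or d_best == len(costs) - 1:
        return float(d_best)  # no neighbor on one side; keep integer value
    c_l, c_0, c_r = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c_l - 2.0 * c_0 + c_r
    if denom <= 0.0:
        return float(d_best)  # flat or inverted parabola; keep integer value
    return d_best + 0.5 * (c_l - c_r) / denom

costs = np.array([9.0, 4.0, 1.0, 2.5, 8.0])
print(subpixel_disparity(costs, int(np.argmin(costs))))  # ~2.17, not just 2
```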

To compare the models we use the 3D program Scanalyze from Stanford. The models can be viewed with realistic coloring, but it is easier to compare them if they are given “false colors”. In the video below, Scanalyze is used to display the latest 3D model (GREEN) and the older 3D model (RED). For the comparison we zoom into a part of the model that should be flat, and then we study the points in each model associated with a line across this flat region. If the points form a straight line, the model is accurate; any deviation from a straight line represents an error.

[Video: https://www.youtube.com/watch?v=yumkXHgAniA]

To evaluate the relative error of the two approaches we calculate the Standard Deviation (STDEV) of 750 points in each model that should define a straight line. The results below show that the errors in the new model have a STDEV of 0.75, and this is less than half of the November results with a STDEV of 1.7.

[Figure: 2013-01 STDEV comparison]
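For illustration, this flatness metric takes only a few lines of NumPy. The samples below are synthetic stand-ins, not points from our scans.

```python
# Flatness error: standard deviation of residuals from a least-squares line.
import numpy as np

def flatness_error(x, z):
    """x: positions along the sampled line; z: measured heights."""
    slope, intercept = np.polyfit(x, z, 1)
    return (z - (slope * x + intercept)).std()

# Synthetic check with 750 points on a nearly flat surface.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 750)
z = 0.02 * x + rng.normal(scale=0.75, size=750)
print(flatness_error(x, z))  # ~0.75, like the new model above
```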

It is nice to see that the GREEN line is over 2x better (flatter) than the RED line, but we were hoping for an even larger improvement. Unfortunately we must accept the fact that better software can help reduce errors, but software cannot completely overcome the small errors that our current calibration “bakes” into the 2D images. The correct way to fix the problem is to bring sub-pixel precision to the rectification process of the original 2D images. We expect a much larger error reduction after implementing the new calibration/rectification process.
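For context, the stock OpenCV rectification pipeline already resamples with sub-pixel interpolation: the float-valued maps from initUndistortRectifyMap carry fractional source coordinates, and remap interpolates between pixels. The sketch below is that standard API with placeholder calibration values, not our new calibration/rectification process.

```python
# Standard OpenCV stereo rectification (placeholder calibration values).
import cv2
import numpy as np

image_size = (1280, 960)  # (width, height); placeholder
K1 = K2 = np.array([[1000.0, 0.0, 640.0],
                    [0.0, 1000.0, 480.0],
                    [0.0, 0.0, 1.0]])     # placeholder intrinsics
D1 = D2 = np.zeros(5)                     # placeholder distortion
R = np.eye(3)                             # rotation between the two cameras
T = np.array([[100.0], [0.0], [0.0]])     # placeholder 100 mm baseline

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T)

# CV_32FC1 maps hold fractional pixel coordinates, so rectification does
# not have to round to whole pixels before matching.
map1x, map1y = cv2.initUndistortRectifyMap(
    K1, D1, R1, P1, image_size, cv2.CV_32FC1)

left = np.zeros((960, 1280), np.uint8)    # stand-in for a real capture
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
```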

Results from Summer 2012

November 14th, 2012

We spent the summer of 2012 enhancing our 3D scanner. The 3D scan below with 3.5 million points shows that the system can now produce high-resolution 3D models. Some improvement was the result of integrating code from the open source projects Point Cloud Library & OpenCV, but the largest improvement came from camera recalibration.

Why did we need to recalibrate?
Over the last year, 4 of the 8 image sensor boards in our prototype appear to have shifted vertically by up to 5 pixels since the last calibration. Because stereo cameras need to be calibrated to within 1/2 pixel, a 5-pixel error is completely unacceptable. Yet the 5-pixel shift represents a very small mechanical change: the pixels on our image sensor are 2.2 microns per side, so a 5-pixel error is a shift of only 11 microns, less than the diameter of one human hair!

The quality of our 3D models improved significantly once we corrected the problem by shifting our images up or down the appropriate number of pixels.
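The fix itself can be very simple; the sketch below shows a whole-pixel vertical shift of the kind described above (the edge handling here is a guess, not our exact code).

```python
# Illustrative whole-pixel vertical shift correction for one camera's image.
import numpy as np

def shift_rows(img, shift_px):
    """Shift an image down (positive) or up (negative) by whole pixels,
    filling the exposed rows by replicating the nearest original row."""
    out = np.roll(img, shift_px, axis=0)
    if shift_px > 0:
        out[:shift_px] = img[0]   # fill the top with the original first row
    elif shift_px < 0:
        out[shift_px:] = img[-1]  # fill the bottom with the original last row
    return out

img = np.arange(20, dtype=np.uint8).reshape(5, 4)
print(shift_rows(img, 2))  # every row moved down two pixels
```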

Why wasn’t this shift discovered sooner?
Our stereoscopic camera system had been producing good results, so we incorrectly assumed that the cameras were still calibrated. Because we trusted the calibration, we spent the summer carefully reviewing everything else in the system. During our search we optimized the code to improve 3D reconstruction speed and quality, but certain problems remained. It wasn’t until this September that we identified and fixed the calibration problem.

This experience has demonstrated the robustness of our 3D scanning approach, which uses both passive pixel matching and pattern projection. Before we fixed the calibration errors, the passive pixel-matching part of our scanning process was effectively disabled. Our robust pattern projection is the only reason we were able to produce usable 3D models from such a poor calibration. Now that we have both good calibration and solid pattern projection, our results are the best ever.

Next Steps
There are still some loose ends from this summer’s work that we want to tie up by the end of the year. These last few tweaks will improve 3D accuracy and reduce or eliminate the distortion in surfaces that should be flat.

Finally, we have also gained a valuable insight for the next design. The new system will be designed to maintain camera rigidity/stability to within about 1/10 micron. This is about 100x better than the current prototype. We plan to finalize the new system design and begin construction in 2013.