

Posts Tagged ‘3D Reconstruction’

We Are #1 in the ETH 3D Software Benchmark

April 1st, 2022

The Steuart Systems photorealistic 3D scanner uses hardware & software to produce 3D models, but how good is our software? We used the ETH 3D benchmark to evaluate it. We were hoping to be in the top 10% of the 116 entries from around the world, and the March 31, 2022 benchmark result put us at #1. Some other group will claim the top spot eventually, but for now the ranking proves that our approach is world-class.

[Screenshot: ETH 3D benchmark ranking with Steuart Systems at #1]
Above is a screenshot from the day that we were #1. Here is the current ranking of the ETH 3D benchmark.

Good software is nice, but camera hardware is our main strength. Over the years our tests have shown that high-quality, low-noise HDR images from our camera hardware amplify the power of whatever software we use. Low-noise 3D content looks better, and it is easier to compress, distribute, and view on the web.

Stay tuned. Over the next few weeks we will post results as we tune our software and our array of 32 cameras.

Scanning Results Keep Getting Better

May 20th, 2015

We have steadily improved our scanning results over the last 6 weeks by modifying hardware, writing new software, and tuning over a dozen variables. The video below demonstrates the effect of our enhanced noise reduction:

Low noise in 3D models is important for two reasons:

  1. Low-noise 3D looks better.
  2. Low-noise 3D models are easier to compress & display. In many cases smoothing should allow us to reduce a scan to less than 1% of the original size (see the sketch after this list).
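
As a rough illustration of that second point, here is a Python sketch that decimates a smoothed mesh to about 1% of its triangle count. The Open3D library, the file names, and the 1% target are assumptions made for this example, not part of our scanning pipeline; the point is simply that smooth, low-noise surfaces survive aggressive simplification with little visible error.

```python
# Hypothetical sketch: shrinking a smoothed scan to roughly 1% of its
# original triangle count with quadric-edge-collapse decimation.
# Open3D and the file names are assumptions, not our actual pipeline.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("smoothed_scan.ply")  # hypothetical input file
original = len(mesh.triangles)

# Flat, low-noise regions collapse into a few large triangles with little
# visible error, which is why smoothing makes this reduction practical.
target = max(original // 100, 4)  # aim for ~1% of the original triangle count
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=target)

print(f"{original} triangles -> {len(decimated.triangles)} triangles")
o3d.io.write_triangle_mesh("smoothed_scan_1pct.ply", decimated)
```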

Noise reduction & smoothing have been around for decades, but there is a delicate balance between appropriate smoothing and over-smoothing, which can make objects look like jelly beans. Our past experience with generic smoothing routines has been disappointing because they often round edges & eliminate important details.

Why Our Smoothing Is Better Than Other Options

Instead of applying generic smoothing filters after the 3D data has been created, we apply smoothing while the 3D data is being created. We can achieve an optimal level of smoothness because our smoothing software has intimate knowledge of the scanner hardware and configuration. Stereo scanners like ours can be accurate to a fraction of a millimeter up close, but precision falls off as the distance from the scanner increases. Our smoothing routines use this fact to smooth our 3D data with more finesse.
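
Here is a minimal Python sketch of that idea, using the textbook stereo error model in which depth error grows with the square of distance. Every parameter value, helper name, and the simple blending rule are illustrative assumptions, not our production code; the sketch only shows how knowledge of the scanner geometry can modulate smoothing strength.

```python
# Minimal sketch of depth-aware smoothing for a stereo scanner.
# The error model and all parameter values are illustrative assumptions.
import numpy as np

def expected_depth_error(z, focal_px=2400.0, baseline_m=0.30, disparity_noise_px=0.25):
    """Approximate 1-sigma stereo depth error; it grows with the square of distance."""
    return (z ** 2) * disparity_noise_px / (focal_px * baseline_m)

def smooth_depth_row(depths):
    """Blend each depth sample with its neighbours, smoothing harder where the
    expected stereo error is large (far away) and gently where it is small."""
    smoothed = depths.copy()
    for i in range(1, len(depths) - 1):
        sigma = expected_depth_error(depths[i])
        # Map expected error to a blend weight in [0, 0.5]; the 1 cm
        # reference scale is an assumed tuning constant.
        w = min(sigma / 0.01, 0.5)
        neighbour_mean = 0.5 * (depths[i - 1] + depths[i + 1])
        smoothed[i] = (1.0 - w) * depths[i] + w * neighbour_mean
    return smoothed

# Example: a noisy row of depth samples from 1 m to 10 m.
rng = np.random.default_rng(0)
z = np.linspace(1.0, 10.0, 200) + rng.normal(0.0, 0.01, 200)
print(smooth_depth_row(z)[:5])
```

Near points keep almost exactly their measured values, while distant points, where stereo is inherently noisier, are averaged more heavily with their neighbours.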

9 Months of Software Enhancements Have Cut Errors in Half Again

November 11th, 2013

This year’s enhancements to the image processing routines in our stereo scanning software have improved processing speed and 3D model accuracy. Comparisons between our current results and those from 9 months ago show that we have reduced the magnitude of one type of geometric error in our 3D scans by a factor of 2 to 4, and we project that future software and hardware enhancements will allow us to cut the noise in half at least 5 more times, a further reduction of at least 32×. Finding or developing a benchmark that clearly reflects these results has been tricky.

In the previous post we compared scans by superimposing them on each other and then comparing the non-linearity of flat surfaces. Because each surface should be flat, any deviation from a straight line represents a scanning error. We used standard deviation analysis to determine that our improvements had cut the error in half for this specific test, but that one number doesn’t tell the whole story. What other metrics and ratios should we use to judge the quality of the 3D scans that our scanner produces?
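
For readers who want the flatness metric spelled out, here is a small Python sketch: fit a plane to points scanned from a nominally flat surface and report the standard deviation of the point-to-plane residuals. The synthetic "wall" data and helper names are ours for illustration, not output from the scanner.

```python
# Sketch of the flat-surface error metric: fit a plane to the scanned points,
# then report the standard deviation of the point-to-plane residuals.
import numpy as np

def flatness_error(points):
    """points: (N, 3) array sampled from a nominally flat surface.
    Returns the standard deviation of signed distances to the best-fit plane."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal          # signed point-to-plane distances
    return residuals.std()

# Synthetic "scan" of a flat wall with 0.5 mm of Gaussian noise (units: metres).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(5000, 2))
z = rng.normal(0.0, 0.0005, size=5000)
wall = np.column_stack([xy, z])
print(f"flatness error: {flatness_error(wall) * 1000:.3f} mm")  # ~0.5 mm
```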

Until we come up with a more useful metric to quantify the relative quality, we will use human perception to evaluate the quality of scans. The video below shows the results of our last 9 months of software enhancement.

Making Good Photorealistic 3D Models from 2D Pictures

March 25th, 2011

Making 3D models is time-consuming. Recent programs like Google’s SketchUp (it’s free) have simplified the process of making digital 3D models, but SketchUp is definitely not automatic.

[Image: example of a photorealistic SketchUp model created manually and placed into Google Earth]

To make a 3D model look photorealistic, real-world pictures can be “projected” onto a SketchUp model. While this technique can add realism, SketchUp is still a manual approach that can take hours, weeks, or even months to produce good results.

 

Many in the 3D and animation world would like an automatic process that can produce 3D models from a series of 2D pictures. Our goal is to create a system that automatically produces photorealistic digital 3D models that can be processed in existing 3D programs like 3D Studio Max, GeoMagic, or SketchUp.

The Microsoft Photosynth project can automatically create 3D-like effects (some call it 2.5D) by processing tens to hundreds of 2D images. While this process is automatic, it does not produce a 3D model that can be used by other programs.

Garbage in… garbage out.

A challenge for Photosynth and other automatic stitching/panoramic approaches is that they often use regular uncalibrated cameras. While this is convenient, it forces the programs to analyze each camera image to determine the field of view and other essential lens/camera characteristics: the cameras are essentially calibrated during processing. Precisely calibrating a camera is challenging in a lab setting, so it is reasonable to expect that on-the-fly calibration results will not be very precise. Any errors in the camera calibration step will build on each other and cause problems later in the process. While calibration problems cause annoying alignment errors in panoramic 2D & 2.5D images, they cause unacceptable distortion in 3D models. Here is a list of variables that must be determined before using a 2D image to create an accurate 3D model:

Camera Variables That Must Be Determined for Precise Stereoscopic 3D Reconstruction
– The exact center of the image sensor behind the lens: sensors are normally a few pixels off-center
– Camera horizontal & vertical field of view to within 1/100 of a degree
– Camera lens distortion correction variables: pincushion, barrel, radial
– Camera horizontal orientation (0.00 to 360.00 degrees) to within 1/100 of a degree
– Camera vertical orientation (tilt, roll) to within 1/100 of a degree
– Camera location for each shot: X, Y, and Z coordinates to within one millimeter
– Camera dynamic range and gamma
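
As one concrete illustration of the first few variables in this list (sensor center, field of view, and lens distortion), here is a standard chessboard-based intrinsic calibration sketch using OpenCV. This is not our calibration facility; the image folder, board dimensions, and square size are assumptions for the example, and the per-shot position and orientation variables would still require a separate extrinsic calibration.

```python
# Illustrative only: a standard chessboard-based intrinsic calibration with
# OpenCV, covering the principal point, focal length / field of view, and
# lens distortion. The image folder and board geometry are assumptions.
import glob
import cv2
import numpy as np

board_cols, board_rows = 9, 6          # inner corners of the chessboard
square_size = 0.025                    # metres, assumed board geometry

# 3D coordinates of the board corners in the board's own frame (z = 0).
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_size

objpoints, imgpoints = [], []
image_size = None
for path in glob.glob("calib_images/*.png"):   # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        objpoints.append(objp)
        imgpoints.append(corners)

assert image_size is not None, "no calibration images found"

# mtx holds the focal length and principal point; dist holds the radial and
# tangential distortion coefficients; rvecs/tvecs give the board pose per shot.
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, None, None)
print("RMS reprojection error (px):", rms)
print("camera matrix:\n", mtx)
print("distortion coefficients:", dist.ravel())
```

The RMS reprojection error printed at the end gives a quick sanity check on how well the recovered intrinsics explain the observed corner positions.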

The quality of a 3D model is limited by the quality of the 2D pictures used to make it. Here is our two-part approach to camera calibration:

1) Design and build a calibration routine/facility to determine the key camera variables.
2) Design and build a system of cameras that can be easily calibrated.

The important point is that the camera system and the calibration system need to be built for each other: they fit together like a lock and key. As we see it, a calibrated system produces “clean” images that simplify and speed up the 3D reconstruction process. Our current 8-camera system (Proto-4F) has been designed to produce sets of calibrated images, and these images are used to automatically produce 3D models.