

Archive for the ‘Uncategorized’ Category

ETH Benchmark Results on the Way

May 28th, 2022

How can you evaluate photogrammetry software and the 3D models it creates? ETH Zurich, the Swiss Federal Institute of Technology (some rankings place ETH 4th in the world for engineering and technology), has developed a 25-part benchmark for evaluating photogrammetry software like ours. We have reason to believe that our results will be very strong.

The purpose of this post is to dust off our website and say "get ready." Our software's ETH benchmark results should be posted this week. Soon after that, we will release version 1.0 of our open-source software, MeshroomCL.


We Are #1 in the ETH 3D Software Benchmark

April 1st, 2022

The Steuart Systems photorealistic 3D scanner uses both hardware and software to produce 3D models, but how good is our software? We used the ETH 3D benchmark to evaluate it. We were hoping to place in the top 10% of the 116 entries from around the world, and the March 31, 2022 benchmark results put us at #1. Some other group will eventually claim the top spot, but for now the ranking shows that our approach is world-class.

Above is a screenshot from the day that we were #1. Here is the current ranking of the ETH 3D benchmark.

Good software is nice, but camera hardware is our main strength. Over the years, our tests have shown that the high-quality, low-noise HDR images from our camera hardware amplify the power of whatever software we use. Low-noise 3D content looks better, and it is easier to compress, distribute, and view on the web.

Stay tuned. Over the next few weeks we will post results as we tune our software and our array of 32 cameras.

Precise Calibration Is Essential for Good 3D Models

July 6th, 2015

Garbage in
Garbage out

It is amazing that 3D models can be created from regular (uncalibrated) pictures. This link shows some nice examples of how uncalibrated images can be used to make 3D models.

Quality in
Quality out

Our approach is harder, but our results are better. We precisely calibrate our cameras, and we use the calibrated pictures to make 3D models. As our calibration improves, our 3D models improve. The combination of better calibration and sub-pixel processing has allowed us to create models that are accurate to within 1 or 2 millimeters at 10 feet. Our new color-processing routine also adjusts lighting and allows the colors in different 3D scans to blend better. More accurate scan geometry and better color control result in better-looking 3D models. The video below demonstrates our latest improvements. The complete 3D model in the video consists of 13 scans from 13 different locations.
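To put that accuracy figure in perspective, it can be expressed as a fraction of the scanning range. A quick back-of-the-envelope check (using the standard conversion 1 foot = 304.8 mm):

```python
FEET_TO_MM = 304.8
distance_mm = 10 * FEET_TO_MM           # 10 feet is about 3048 mm

for error_mm in (1, 2):
    relative = error_mm / distance_mm   # error as a fraction of the range
    print(f"{error_mm} mm at 10 ft -> {relative:.4%} of range")
```

In other words, the error is only a few hundredths of a percent of the working distance.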

I believe that we have tackled the hardest part of the problem: calibration.  Now that our geometry is correct, we can focus on making our 3D models attractive and easy to work with.  We will continue improving the system in the following three ways:

We are finalizing the Proto-5B design. The current system takes about 7 minutes to create a single scan, and the next iteration of our scanner will capture the imagery at least 20 times faster.  ETA for this system is April 2016.

Post-processing imagery is a normal step. Photoshop is often used to clean up 2D pictures, and many 3D programs can clean up our 3D data. We are evaluating several programs and will make our data work with the best solution(s).

We will make it easier to move our data to other software such as Unity, MeshLab, SketchUp, and possibly Matterport. Being compatible with Unity will make us compatible with headsets like the Oculus, and that will allow a photorealistic 3D VR experience.


Results from Summer 2012

November 14th, 2012

We spent the summer of 2012 enhancing our 3D scanner. The 3D scan below, with 3.5 million points, shows that the system can now produce high-resolution 3D models. Some of the improvement came from integrating code from the open-source projects Point Cloud Library and OpenCV, but the largest improvement came from camera recalibration.

Why did we need to recalibrate?
Over the last year, it seems that 4 of the 8 image sensor boards in our prototype vertically shifted by up to 5 pixels since the last calibration. Because stereo cameras need to be calibrated to within half a pixel, a 5-pixel error is completely unacceptable. Yet the 5-pixel shift represents a very small mechanical change: the pixels on our image sensor are 2.2 microns per side, so a 5-pixel error is a shift of only 11 microns, less than the diameter of one human hair!
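The arithmetic behind those numbers is easy to verify (the pixel pitch and shift values are taken from the paragraph above):

```python
PIXEL_PITCH_UM = 2.2   # image sensor pixel size, microns per side
shift_px = 5           # worst-case vertical shift we observed

# Physical size of the drift on the sensor surface.
shift_um = shift_px * PIXEL_PITCH_UM      # roughly 11 microns

# Stereo matching needs calibration good to about half a pixel,
# so the tolerable mechanical drift is only:
tolerance_um = 0.5 * PIXEL_PITCH_UM       # roughly 1.1 microns

print(round(shift_um, 1), round(tolerance_um, 1))
```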

The quality of our 3D models improved significantly once we corrected the problem by shifting our images up or down by the appropriate number of pixels.
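As a rough illustration of that kind of correction, a whole-pixel vertical shift can be applied with a simple array roll. This is a minimal NumPy sketch with hypothetical values, not our production code:

```python
import numpy as np

def shift_image_vertically(image, shift_px):
    """Shift an image down (positive) or up (negative) by whole pixels,
    filling the vacated rows with zeros."""
    shifted = np.roll(image, shift_px, axis=0)
    if shift_px > 0:
        shifted[:shift_px, :] = 0   # zero the rows that wrapped from the bottom
    elif shift_px < 0:
        shifted[shift_px:, :] = 0   # zero the rows that wrapped from the top
    return shifted

# Example: undo a 5-pixel upward drift by shifting the image back down.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
corrected = shift_image_vertically(img, 5)
```

Sub-pixel corrections would need interpolation (e.g. an affine warp) rather than a plain roll, but for whole-pixel drift this is all the fix requires.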

Why wasn’t this shift discovered sooner?
Our stereoscopic camera system had been producing good results, so we incorrectly assumed that the cameras were still calibrated. Because we trusted the calibration, we spent the summer carefully reviewing everything else in the system. During our search we optimized the code to improve 3D reconstruction speed and quality, but certain problems remained. It wasn’t until this September that we identified and fixed the calibration problem.

This experience has demonstrated the robustness of our 3D scanning approach, which uses both passive pixel matching and pattern projection. Before we fixed the calibration errors, the passive pixel-matching part of our scanning process was effectively disabled. Our robust pattern projection is the only reason we were able to produce usable 3D models from such a poor calibration. Now that we have both good calibration and solid pattern projection, our results are the best ever.

Next Steps
There are still some loose ends from this summer’s work that we want to tie up by the end of the year. These last few tweaks will improve 3D accuracy and reduce or eliminate the distortion in surfaces that should be flat.

Finally, we have also gained a valuable insight for the next design. The new system will be designed to maintain camera rigidity/stability to within about 1/10 micron. This is about 100x better than the current prototype. We plan to finalize the new system design and begin construction in 2013.

First Low Resolution 3D Point Cloud from Proto-4F

October 25th, 2010

The cameras are finally calibrated, and the communications and power systems are installed and working. Now I can finally begin producing scans to test and fine tune the software.

Today I scanned part of the lab, and the animated GIF illustrates the 3D nature of the scan. When producing a 3D model, multiple perspectives must be captured to fill in occlusions (blind spots). For this model, three scans from different locations were merged to produce a single point cloud. The GIF consists of 7 different screenshots of the point cloud. While some occlusions remain, many have been filled; for example, notice that you can see both above and below the table.
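Merging scans from multiple viewpoints boils down to transforming each scan's points into a common coordinate frame and concatenating them. Here is a minimal NumPy sketch of the idea; the 4x4 pose matrices are hypothetical placeholders, not our scanner's actual poses:

```python
import numpy as np

def merge_scans(scans, poses):
    """Transform each Nx3 point cloud by its 4x4 rigid pose, then concatenate."""
    merged = []
    for points, pose in zip(scans, poses):
        # Homogeneous coordinates: append a column of ones, apply the pose.
        homog = np.hstack([points, np.ones((len(points), 1))])
        merged.append((homog @ pose.T)[:, :3])
    return np.vstack(merged)

# Toy example: two one-point "scans"; the second viewpoint is offset 1 m in x.
scan_a = np.array([[0.0, 0.0, 0.0]])
scan_b = np.array([[0.0, 0.0, 0.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 1.0  # translation along x
cloud = merge_scans([scan_a, scan_b], [pose_a, pose_b])
print(cloud.shape)  # (2, 3)
```

In practice the per-scan poses come from registration (finding where each scan was taken), which is the hard part; the merge itself is just this transform-and-concatenate step.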

The original 32-bit software that we use to turn pictures into 3D models is almost 5 years old and runs on 32-bit Windows XP. The old software often crashes when processing high-resolution images because its 2 GB memory limit isn't enough for the gigabytes of data that our scanner can quickly produce. Today's scan was made on a computer running 64-bit Windows 7, and we are currently replacing the old 32-bit software with more advanced 64-bit code. The new software runs much faster in 64-bit mode because it can keep temporary files in RAM instead of writing them to and reading them from a slow disk. Even with a solid-state drive (SSD), that disk traffic wastes minutes of unnecessary processing time.

COMING UP: Much better scans processed by SketchUp & posted into Google Earth.