▲ | jgord 2 days ago |
Short answer: yes. I tried a _lot_ of approaches, and many worked partially. I think I linked to a YouTube screencast showing edges of planes that my algorithm had detected in a sample point cloud?

Efficient re-meshing is important, and it's worth improving on the current algorithms to get crisper breaklines etc., but you really want to go a step further and do what humans now do manually when they make a CAD model from a point cloud, i.e. convert it to its most efficient / compressed / simplest useful format, where a wall face is recognized as a simple plane. Even remeshing and flat-triangle tessellation can be improved a lot by ML techniques.

As with point clouds, likewise with photogrammetry, where you reconstruct a 3D scene from hundreds of photos, or from 360 panoramas or stereo photos. I think that within the next 18 months ML will be able to reconstruct an efficient 3D model from a streetview scene or a 360 panorama tour of a building. An optimized mesh is good for visualization in a web browser, but it's even more useful to have a CAD-style model where walls are flat quads, edges are sharp, and a door is tagged as a door, etc.

Perhaps the points I'm trying to make are :
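As an illustration of the kind of plane recognition described above, here is a minimal RANSAC plane-segmentation sketch in Python/NumPy. This is a generic textbook approach, not the algorithm from the screencast; the iteration count and inlier tolerance are arbitrary assumptions:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_tol=0.02, rng=None):
    """Fit a single dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_n = best_d = best_mask = None
    best_count = 0
    for _ in range(n_iters):
        # Sample 3 distinct points and form the plane through them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -np.dot(n, p0)
        # Points within inlier_tol of the plane vote for this hypothesis.
        dist = np.abs(points @ n + d)
        mask = dist < inlier_tol
        count = int(mask.sum())
        if count > best_count:
            best_count = count
            best_n, best_d, best_mask = n, d, mask
    return best_n, best_d, best_mask
```

In practice you would run this repeatedly, removing each detected plane's inliers, to peel off walls, floors, and ceilings one at a time; a least-squares refit over the final inlier set tightens the estimate.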
▲ | MITSardine a day ago |
I'm not sure I'm sold on the necessity of detecting lines and planes specifically. My issue is: suppose you could do that perfectly, then what of the rest of the geometry? If you're aiming for a CAD model in the end (BREP), you'll want to fit the whole thing, not only the planes and lines, and it seems to me an approach specialized for lines and planes is helpless at fitting general surfaces and curves. In my mind, a general approach that incidentally also finds straight lines and planes would be better (necessary, even). Note that if you can fit a BREP, it's fairly trivial to check whether a curve is close enough to a straight line to simply stipulate that it is one (same for a plane).

Have you looked into NURBS fitting through point clouds? I understand those can be noisy and oversampled. A colleague got away with sorting point clouds by a Hilbert curve (or another space-filling curve) and then keeping 1/N points (just by index): a simple but elegant way to remove N-1 of every N points while keeping the general distribution mostly intact (you could also use an octree). Though I recall that in some cases the distribution of points was not uniformly too dense, but e.g. dense along scanning lines and sparse between those lines.

Once it's tractable to triangulate the point cloud, you have two important pieces of information at your disposal: local connectivity (prior to that, it would have been n log(n) at best to find nearby points) and, after some basic processing, a notion of topology (e.g. detecting ridges to make out surface patches). With the former, you could do things like smooth the surface to do away with noisiness (say your points sit a small random distance away from the plane, but any shape really) for better NURBS fitting, normal estimation, etc., and with the latter you could split the domain into faces and curves for your BREP. At the very least, you'd get much cleaner data to feed an ML algorithm than raw point clouds.
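The colleague's decimation trick can be sketched as follows. This version uses a Morton (Z-order) key rather than a Hilbert key, since Morton codes are straightforward to compute by bit interleaving and are also a space-filling curve; the function names and the `keep_every` parameter are assumptions for illustration:

```python
import numpy as np

def morton_key(coords, bits=16):
    """Interleave the bits of quantized 3D integer coords into a Morton (Z-order) key."""
    key = np.zeros(len(coords), dtype=np.uint64)
    for b in range(bits):
        for axis in range(3):
            bit = (coords[:, axis].astype(np.uint64) >> np.uint64(b)) & np.uint64(1)
            key |= bit << np.uint64(3 * b + axis)
    return key

def decimate_by_curve(points, keep_every=10, bits=16):
    """Sort (N, 3) points along a Z-order curve, then keep 1 in every `keep_every`."""
    lo = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - lo, 1e-12)
    # Quantize each coordinate to `bits` bits before interleaving.
    q = ((points - lo) / span * (2**bits - 1)).astype(np.uint64)
    order = np.argsort(morton_key(q, bits))
    return points[order[::keep_every]]
```

Because consecutive keys map to nearby points in space, striding over the sorted order thins the cloud roughly uniformly, which is exactly the "keep 1/N by index" behavior described above.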
I just find it strange to tackle the raw data head-on when there are so many methods for dealing with geometry already, at the very least to clean things up and make some sense of the data. Have you looked at existing products that do point cloud -> CAD? What are they lacking?
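The earlier point about stipulating that a near-straight fitted curve simply is a straight line can be sketched as a total-least-squares line fit plus a residual check; the tolerance here is an arbitrary assumption, and a real BREP pipeline would sample the fitted curve rather than raw points:

```python
import numpy as np

def is_effectively_straight(samples, tol=1e-3):
    """Decide whether (N, 3) sampled curve points deviate from their
    best-fit line by less than `tol` (absolute distance).

    The best-fit line passes through the centroid along the principal axis.
    """
    centered = samples - samples.mean(axis=0)
    # Principal direction via SVD = total-least-squares line fit.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Residual = component of each point orthogonal to the line.
    proj = np.outer(centered @ direction, direction)
    residual = np.linalg.norm(centered - proj, axis=1)
    return bool(residual.max() < tol)
```

The same centroid-plus-SVD idea extends to planes (check the residual along the smallest singular vector instead), which is the "same for a plane" case mentioned above.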