MITSardine 2 days ago

Have you tried "traditional" approaches like a Delaunay triangulation on the point cloud, and how does your method compare to that? Or did you encounter difficulties with that?

Regarding what you say of planes and compression, you can look into metric-based surface remeshing. Essentially, you estimate surface curvature (second derivatives) and use that to distort length computations, remeshing your surface to unit edge length in that distorted space, which then yields an optimal trade-off between DoFs and surface approximation error. A plane (or straight line) has zero curvature, so target edge lengths there are unbounded (hence minimal final DoFs there). There's software to do that already, though I'm not sure it's robust to your use case, because it's been developed for scientific computing with meshes generated from CAD (presumably smoother than your point cloud meshes).
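For intuition, the curvature-to-size relation described above can be sketched with the standard chordal-error bound; this is an illustrative formula with made-up numbers, not any particular remesher's implementation:

```python
import numpy as np

def target_edge_length(curvature, eps, h_max=1e3):
    """Chordal-error bound: a chord of length h on a circle of radius
    r = 1/curvature sags by roughly h^2 * curvature / 8, so keeping that
    sag below eps gives h = sqrt(8 * eps / curvature). Zero curvature
    (a plane) hits the cap h_max -> minimal DoFs there."""
    curvature = np.maximum(np.abs(curvature), 1e-12)  # avoid division by zero
    return np.minimum(np.sqrt(8.0 * eps / curvature), h_max)

# Flat region vs. tightly curved region, with a 1e-3 tolerance:
print(target_edge_length(np.array([0.0, 10.0]), eps=1e-3))
```

Metric-based remeshers generalize this scalar size to a tensor so that the element can be long in the flat direction and short in the curved one.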

I'd be really curious to know more about the type of workflow you're interested in, i.e. what does your input look like (do you use some open data sets as well?) and what you hope for in the end (mesh, CAD).

jgord 2 days ago

Short answer: yes .. I tried a _lot_ of approaches, and many worked partially. I think I linked to a YT video screencast showing edges of planes that my algo had detected in a sample pointcloud?

Efficient re-meshings are important, and it's worth improving on the current algorithms to get crisper breaklines etc., but you really want to go a step further and do what humans do manually now when they make a CAD model from a pointcloud - i.e. convert it to its most efficient / compressed / simple useful format, where a wall face is recognized as a simple plane. Even remeshing and flat-triangle tessellation can be improved a lot by ML techniques.
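As a classical (non-ML) baseline for the "wall face recognized as a simple plane" step, RANSAC plane fitting is the usual starting point. A minimal numpy sketch - this is not jgord's algorithm, and the iteration count and inlier threshold are arbitrary assumptions:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """Fit a plane n.x = d to a point cloud by RANSAC.
    Returns (normal, d, inlier_mask); tol is the inlier distance threshold."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ a
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Noisy z = 0 "wall" plus scattered outliers:
rng = np.random.default_rng(1)
wall = np.c_[rng.uniform(0, 5, (500, 2)), rng.normal(0, 0.002, 500)]
junk = rng.uniform(-2, 2, (50, 3))
n, d, mask = ransac_plane(np.vstack([wall, junk]))
print(mask.sum())  # most of the 500 wall points are recovered as inliers
```

Real scan data is of course harder (multiple planes, clutter, anisotropic noise), which is presumably where the ML angle comes in.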

As with pointclouds, likewise with 'photogrammetry', where you reconstruct a 3D scene from hundreds of photos, or from 360 panoramas or stereo photos. I think in the next 18 months ML will be able to reconstruct an efficient 3D model from a streetview scene, or a 360 panorama tour of a building. An optimized mesh is good for visualization in a web browser, but it's even more useful to have a CAD-style model where walls are flat quads, edges are sharp, and a door is tagged as a door, etc.

Perhaps the points I'm trying to make are:

  - the normal techniques are useful but not quite enough [ heuristics, classical CV algorithms, colmap/SfM ]
  - NeRFs and gaussian splats are amazing innovations, but don't quite get us there
  - to solve 3D reconstruction, from pointclouds or photos, we need ML to go beyond our normal heuristics : 3D reality is complicated
  - ML, particularly RL, will likely solve 3D reconstruction quite soon, for useful things like buildings
  - this will unlock a lot of value across many domains - AEC / construction, robotics, VR / AR
  - there is low-hanging fruit, such as my algo detecting planes and pipes in a pointcloud
  - given the progress and the promise, we should be seeing more investment in this area [ 2Mn of investment could potentially unlock 10Bn/yr in value ]
MITSardine a day ago

I'm not sure I'm sold on the necessity of detecting lines and planes specifically. My issue is: suppose you could do that perfectly, then what of the rest of the geometry? If you're aiming for a CAD model in the end (BREP), you'll want to fit the whole thing, not only the planes and lines. And it seems to me an approach specialized for lines and planes is helpless at fitting general surfaces and curves. In my mind, a general approach that incidentally also finds straight lines and planes would be better (indeed necessary).

Note that if you can fit a BREP, it's fairly trivial to check whether a curve is close enough to a straight line that you can just stipulate it's a straight line (same for a plane).
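That straightness check can be as simple as measuring the maximum deviation of sampled curve points from their principal axis; a sketch, where the tolerance is an arbitrary assumption:

```python
import numpy as np

def is_straight(points, tol=1e-3):
    """Fit a line through the centroid along the principal direction (SVD)
    and report whether every sample lies within tol of that line."""
    p = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(p, full_matrices=False)
    direction = vt[0]                                  # dominant axis
    residual = p - np.outer(p @ direction, direction)  # off-line component
    return np.linalg.norm(residual, axis=1).max() < tol

t = np.linspace(0, 1, 50)
line = np.c_[t, 2 * t, -t]                   # exactly straight
arc = np.c_[t, np.sin(t), np.zeros_like(t)]  # gently curved
print(is_straight(line), is_straight(arc))
```

The same idea in one dimension higher (plane through the centroid spanned by the top two singular vectors) handles the plane case.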

Have you looked into NURBS fitting through point clouds? I understand those can be noisy and oversampled. A colleague got away with sorting point clouds by a Hilbert curve (or another space-filling curve) and then keeping 1/N of the points (just by index), a simple but elegant way to remove N-1 out of every N points while keeping the general distribution mostly intact (you could also use an octree). Though I recall in some cases the distribution of points was not uniformly too dense, but e.g. dense along scanning lines and sparse between those lines.
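A sketch of that decimation idea, using a Morton (Z-order) key as a stand-in for the Hilbert curve (both are space-filling curves that keep nearby points adjacent in sort order; the bit depth and N here are arbitrary):

```python
import numpy as np

def morton_key(points, bits=10):
    """Z-order (Morton) index: quantize each axis to `bits` bits and
    interleave the bits across axes. Sorting by this key groups
    spatially nearby points together."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)
    q = ((points - lo) / scale * ((1 << bits) - 1)).astype(np.uint64)
    keys = np.zeros(len(points), dtype=np.uint64)
    dim = points.shape[1]
    for b in range(bits):
        for axis in range(dim):
            keys |= ((q[:, axis] >> b) & 1) << np.uint64(b * dim + axis)
    return keys

def decimate(points, n=4):
    """Keep 1 of every n points in space-filling-curve order, thinning
    density roughly uniformly instead of by raw acquisition order."""
    order = np.argsort(morton_key(points))
    return points[order[::n]]

cloud = np.random.default_rng(0).random((10000, 3))
print(decimate(cloud, n=4).shape)  # (2500, 3)
```

A Hilbert key has better locality than Morton (no large jumps between quadrants), but the sort-and-slice structure is identical.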

Once it's tractable to triangulate the point cloud, you have two important pieces of information at your disposal: local connectivity (prior to that, finding nearby points would have been O(n log n) at best) and, after some basic processing (e.g. detecting ridges to make out surface patches), a notion of topology. With the former, you could do things like smooth the surface to do away with noisiness (say your points are randomly a small distance away from the plane, but any shape really) for better NURBS fitting, estimate normals, etc., and with the latter you could split the domain into faces and curves for your BREP.
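The smoothing step could be plain Laplacian smoothing over the triangulation's adjacency; a sketch on a synthetic grid mesh, where the grid connectivity and noise model are stand-ins for a real triangulated scan:

```python
import numpy as np

def grid_triangles(nx, ny):
    """Two triangles per cell of an nx-by-ny vertex grid
    (a stand-in for a real surface triangulation)."""
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            v = j * nx + i
            tris += [(v, v + 1, v + nx), (v + 1, v + nx + 1, v + nx)]
    return np.array(tris)

def laplacian_smooth(points, triangles, n_iters=10, lam=0.5):
    """Pull each vertex a fraction lam toward the mean of its mesh
    neighbors. Connectivity comes free with the triangulation, so no
    nearest-neighbor search is needed."""
    neighbors = [set() for _ in range(len(points))]
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbors[u].add(v)
            neighbors[v].add(u)
    neighbors = [np.fromiter(s, int) for s in neighbors]
    p = points.astype(float).copy()
    for _ in range(n_iters):
        means = np.array([p[nb].mean(axis=0) for nb in neighbors])
        p += lam * (means - p)
    return p

# Noisy samples of a flat plane: smoothing shrinks the z scatter.
nx = ny = 20
xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
z = np.random.default_rng(0).normal(0, 0.05, nx * ny)
pts = np.c_[xs.ravel(), ys.ravel(), z]
smoothed = laplacian_smooth(pts, grid_triangles(nx, ny))
print(z.std(), smoothed[:, 2].std())
```

In practice you'd use a feature-preserving variant (e.g. Taubin or bilateral smoothing) so the ridges you want for patch detection don't get rounded off.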

At the very least, you'd get much cleaner data to feed an ML algo than raw point clouds. I just find it strange to tackle the raw data head-on when there are so many methods for dealing with geometry already, at the very least to clean things up and make some sense of the data.

Have you looked at existing products that do point cloud -> CAD? What are they lacking in?