Apple's Cubify Anything: Scaling Indoor 3D Object Detection (github.com)
183 points by Tycho87 4 days ago | 22 comments
pablogancharov a day ago
In case anyone is interested in rendering USDZ scans in Three.js, I created a demo: https://usdz-threejs-viewer.vercel.app/
| ||||||||||||||||||||||||||||||||
Carrok 4 hours ago
I really want an app where I can scan my whole house with the camera/LiDAR combo on my phone and export it into Blender, where I can then rearrange furniture and stuff. Apps like Scaniverse get you pretty close, but everything is one mesh; it would be great to be able to slide the couch around the space without having to manually cut it out of the mesh.
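That per-object separation is essentially what per-object 3D detection enables: given an axis-aligned bounding box per detected object, you can split a single scan mesh into submeshes. A toy Python sketch, with made-up coordinates (a real mesh would also need its faces reassigned, not just vertices):

```python
# Hypothetical scan vertices (x, y, z) and an axis-aligned box for the couch;
# all coordinates here are invented for illustration.
vertices = [
    (0.5, 0.2, 0.5),
    (3.0, 0.1, 2.0),
    (1.0, 0.4, 0.8),
]
box_min = (0.0, 0.0, 0.0)
box_max = (2.0, 1.0, 1.0)

def inside_box(v, lo, hi):
    """True if point v lies within the axis-aligned box [lo, hi]."""
    return all(lo[i] <= v[i] <= hi[i] for i in range(3))

# Vertices falling inside the detected box form the object's submesh.
couch_vertices = [v for v in vertices if inside_box(v, box_min, box_max)]
print(len(couch_vertices))  # 2 of the 3 sample vertices fall inside
```

Real pipelines would use oriented (not axis-aligned) boxes and handle faces that straddle the box boundary, but the core partitioning step looks like this.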
| ||||||||||||||||||||||||||||||||
pzo 13 hours ago
They overcomplicate things by using three or four different (sub)licenses in one project. The README says:

Licenses
- The sample code is released under the Apple Sample Code License.
- The data is released under CC-BY-NC-ND.
- The models are released under the Apple ML Research Model Terms of Use.

Acknowledgements: "We use and acknowledge contributions from multiple open-source projects in ACKNOWLEDGEMENTS."

Then the GitHub license button shows "Copyright (C) 2025 Apple Inc. All Rights Reserved.", and the repo has both a LICENSE and a LICENSE_MODEL file. Why make it so confusing and elaborate? It's useless for third-party devs who want to make apps and release them on Apple's platform. Then just make it one license with the strictest restrictions you can: AGPL and/or CC-BY-NC-ND.
| ||||||||||||||||||||||||||||||||
desertmonad a day ago
Looks promising, but the license, Attribution-NonCommercial-NoDerivatives, is pretty limiting.
| ||||||||||||||||||||||||||||||||
callumprentice 17 hours ago
I keep meaning to get back to my suite of equirectangular image functions - viewers, editors, authoring, etc. - and this reminded me to resurrect the viewer: https://equinaut.surge.sh/?eqr=https://raw.githubusercontent... Not quite right, I think, because the source image isn't a 2x1 aspect ratio. They can look really nice, both in the real world - https://equinaut.surge.sh/?eqr=https://upload.wikimedia.org/... - and in the virtual world: https://equinaut.surge.sh/?eqr=https://live.staticflickr.com...
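For context, a full equirectangular panorama spans 360° horizontally and 180° vertically, which is why the source image needs a 2:1 width-to-height ratio; anything else stretches or distorts in the viewer. A trivial sanity check (function name and tolerance are my own):

```python
def is_equirectangular_aspect(width: int, height: int, tol: float = 0.01) -> bool:
    """True if the image is close to the 2:1 aspect a full 360x180 panorama needs."""
    return abs(width / height - 2.0) <= tol

print(is_equirectangular_aspect(4096, 2048))  # True: exactly 2:1
print(is_equirectangular_aspect(4096, 2730))  # False: roughly 3:2, will distort
```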
syntaxing a day ago
Surprised this isn't in Core ML. Seems useful for the Vision Pro or something.
| ||||||||||||||||||||||||||||||||
fidotron a day ago
The accuracy of the results doesn't seem that great. For example, look at the pictures on the wall in their sample, or the beams in the ceiling. It's possible it's some artifact of the processing resolution, but I think most people who have worked with NNs for AR input will be surprised if this is not considered disappointing.
| ||||||||||||||||||||||||||||||||
totetsu 12 hours ago
Is this so your smart speaker can better report what's in your house back to Apple?
Svip 16 hours ago
Will it work on a picture of a Power Mac G4 Cube[0]? Whenever I see "cube" and "apple" together (which, in fairness, is rare), I think of the Cube.