| ▲ | jandrewrogers 2 days ago |
| I'm not aware of any convenient literature, but it is relatively obvious once someone explains it to you (as it was explained to me). At its root it is a cutting problem, like graph cutting but much more general, because it includes things like non-trivial geometric types and relationships. Solving the cutting problem is necessary to efficiently shard/parallelize operations over the data models.
|
| For classic scalar data models, representations that preserve the relationships have the same dimensionality as the underlying data model. A set of points in 2 dimensions can always be represented in 2 dimensions such that it satisfies the cutting problem (e.g. a quadtree-like representation). For non-scalar types like rectangles, operations like equality and intersection are distinct, and there is an unbounded number of relationships that must be preserved, touching on concepts like size and aspect ratio, to satisfy the cutting requirements. The only way to expose these additional relationships to cutting algorithms is to encode and embed them in a (much) higher-dimensional space and then cut that space instead.
|
| The mathematically general case isn't computable, but real-world data models don't need it to be. Several decades ago it was determined that if you constrain the properties of the data model tightly enough, it should be possible to systematically construct a finite high-dimensional embedding for that data model such that it satisfies the cutting problem. Unfortunately, "should be possible" understates the difficulty. There is no computer science literature on how one might go about constructing these cuttable embeddings, not even for a narrow subset of practical cases. The activity is also primarily one of designing data structures and algorithms that can represent complex relationships among objects with shape and size in dimensions much greater than three, which is cognitively difficult. Many smart people have tried and failed over the years. It has a lot of subtlety, and you need practical implementations to have good properties as software.
|
| About 20 years ago, long before "big data", the iPhone, or any current software fashion, this and several related problems were the subject of an ambitious government research program. It was technically successful, demonstrably. That program was killed in the early 2010s for unrelated reasons and much of that research was semi-lost. It was so far ahead of its time that few people saw the utility of it. There are still people around who were either directly involved or learned the computer science second-hand from someone who was, but there aren't that many left. |
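| A minimal sketch of the scalar case described above, assuming a unit-square world and an illustrative name (quad_key); this is not code from any system discussed, just an illustration of why 2D points are the "easy" case: shard assignment is a pure function of a point's coordinates, so nearby points share key prefixes and cutting on prefixes preserves the 2D relationships.

```python
# Sketch: 2D points partitioned by a quadtree-style decomposition.
# The key depends only on the point's coordinates, never on insertion
# order, so the resulting cut is stable and locality-preserving.

def quad_key(x, y, depth, bounds=(0.0, 0.0, 1.0, 1.0)):
    """Return a quadtree cell key (a string of quadrant digits)
    for a point, computed top-down to a fixed depth."""
    x0, y0, x1, y1 = bounds
    key = []
    for _ in range(depth):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        quadrant = (1 if x >= mx else 0) | (2 if y >= my else 0)
        key.append(str(quadrant))
        x0, x1 = (mx, x1) if x >= mx else (x0, mx)
        y0, y1 = (my, y1) if y >= my else (y0, my)
    return "".join(key)

points = [(0.1, 0.2), (0.12, 0.22), (0.9, 0.8)]
for p in points:
    print(p, "->", quad_key(*p, depth=3))
# Nearby points share key prefixes ("002", "002", "331" here), so
# cutting on prefixes yields shards that respect 2D neighborhoods.
```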
|
| ▲ | calf 2 days ago | parent | next [-] |
| But then that sounds more like that person explained it wrong. They didn't explain why it is necessary to reduce to GRAPHCUT; it seems to me to beg the question. We should not assume this is true based on some vague anthropomorphic appeal to spatial locality, surely? |
| |
| ▲ | jandrewrogers 2 days ago | parent [-] | | It isn’t a graph cutting problem; graph cutting is just a simpler, special case of this more general cutting problem (h/t IBM Research). If you can solve the general problem, you effectively get efficient graph cutting for free. This is obviously attractive to the extent that you can do both complex spatial and graph computation at scale on the same data structure instead of specializing for one or the other. The challenge with cutting e.g. rectangles into uniform subsets is that logical shard assignment must be identical regardless of insertion order and in the absence of an ordering function, with O(1) space complexity and without loss of selectivity. Arbitrary sets of rectangles overlap, sometimes heavily, which is the source of most of the difficulty. Of course, with practical implementations write scalability matters and incremental construction is desirable. | | |
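| To see what those constraints rule in and out, here is a naive stateless baseline, assuming a hierarchical grid and made-up names (rect_shard, world, max_level); it is not the scheme being described. The shard is a pure function of a rectangle's own geometry, so assignment is identical regardless of insertion order with O(1) state, but it loses selectivity exactly where the comment locates the difficulty: rectangles that overlap heavily or straddle cell boundaries.

```python
# Naive baseline: map each rectangle to the coarsest grid level whose
# cells are at least as large as the rectangle, then to the cell
# containing its lower-left corner. Deterministic and order-free,
# but overlapping or boundary-straddling rectangles force queries to
# probe many cells, the selectivity loss a real solution must avoid.
import math

def rect_shard(xmin, ymin, xmax, ymax, world=1.0, max_level=16):
    extent = max(xmax - xmin, ymax - ymin, 1e-12)
    level = min(max_level, max(0, int(math.log2(world / extent))))
    cell = world / (1 << level)
    return (level, int(xmin // cell), int(ymin // cell))

# Same answer no matter what order rectangles arrive in:
print(rect_shard(0.10, 0.10, 0.20, 0.18))  # -> (3, 0, 0)
print(rect_shard(0.40, 0.40, 0.41, 0.41))  # small rect, fine level
print(rect_shard(0.05, 0.05, 0.90, 0.90))  # large rect, coarse level
```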
| ▲ | calf a day ago | parent [-] | | Well, previously you said that it (presumably "it" broadly refers to spatial reasoning AI) is a "high dimensional complex type cutting problem". You said this is obvious once explained. I don't see it as obvious; rather, I see it as begging the question: the research program you were secretly involved in wanted to parallelize the engineering of it, so obviously they needed some fancy "cutting algorithm" to make it possible. The problem is that this conflates the scientific statement of what "spatial reasoning" is. There's no obvious explanation for why spatial reasoning should intuitively be some kind of cutting problem, however you wish to define or generalize a cutting problem. That's not how good CS research is done or explained. In fact I could (mimicking your broad assertions) go so far as to claim that the project was doomed to fail because they weren't really trying to understand something; they wanted to make something without understanding it as the priority. So they were constrained by the parallel technology they had at the time, and when the computational power available didn't pan out, they reached a natural dead end. |
|
|
|
| ▲ | andoando 2 days ago | parent | prev | next [-] |
| I've spent years trying to tackle spatial representations on my own, so I'm extremely curious here. How does the cutting problem relate to intelligence in the first place? |
| |
| ▲ | jandrewrogers 2 days ago | parent [-] | | Indexing is a special case of AI. At the limit, optimal cutting and learning are equivalent problems. Non-trivial spatial representations push these two things much closer together than is normally desirable for e.g. indexing algorithms. Tractability becomes a real issue. Practically, scalable indexing of complex spatial relationships requires what is essentially a type of learned indexing, albeit not neural network based. | | |
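| As a rough illustration of learned indexing without a neural network (in the spirit of published learned-index structures, not the approach referenced above; LinearLearnedIndex is a made-up name): model the CDF of the sorted keys with a plain linear fit, predict a position, and correct with a search bounded by the measured worst-case error.

```python
# Minimal non-neural learned index: one linear model of the key CDF
# predicts a position; a binary search over the model's worst-case
# error window makes every lookup exact.
import bisect

class LinearLearnedIndex:
    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        lo, hi = sorted_keys[0], sorted_keys[-1]
        self.lo = lo
        self.slope = (n - 1) / (hi - lo) if hi > lo else 0.0
        # Measure the worst-case prediction error once, so lookups
        # only ever search a small, provably sufficient window.
        self.eps = max(abs(self._predict(k) - i)
                       for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(self.slope * (key - self.lo))

    def lookup(self, key):
        """Predict a position, then search only the error window."""
        p = self._predict(key)
        a = max(0, p - self.eps)
        b = min(len(self.keys), p + self.eps + 1)
        i = bisect.bisect_left(self.keys, key, a, b)
        return i if i < len(self.keys) and self.keys[i] == key else None

keys = sorted({(x * x) % 9973 for x in range(2000)})
idx = LinearLearnedIndex(keys)
assert all(keys[idx.lookup(k)] == k for k in keys)
```

| The same idea generalizes to piecewise or hierarchical models when a single line fits the key distribution poorly; the point is only that the "learning" here is plain function fitting, not a neural net.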
| ▲ | RaftPeople 5 hours ago | parent [-] | | > is essentially a type of learned indexing, albeit not neural network based.
| NN is just function approximation; why do you think it could not be a valuable part of the solution? It seems like a dynamically adjusted/learned function approximator is a good general tool for most of these hard problems. |
|
|
|
| ▲ | tehjoker 2 days ago | parent | prev | next [-] |
| Did that research program have a public code name? |
| |
| ▲ | mindcrime 2 days ago | parent | next [-] | | Looking through some old DARPA budget docs[1], it seems like there's a chance that what's being discussed here falls under DARPA's "PE 0602702E TACTICAL TECHNOLOGY" initiative, project TT-06. Some other possibilities might include:
- "PE 0602304E COGNITIVE COMPUTING SYSTEMS", project COG-02
- "PE 0602716E ELECTRONICS TECHNOLOGY", project ELT-01
- "PE 0603760E COMMAND, CONTROL AND COMMUNICATIONS SYSTEMS", project CCC-02
- "PE 0603766E NETWORK-CENTRIC WARFARE TECHNOLOGY", project NET-01
- "PE 0603767E SENSOR TECHNOLOGY", project SEN-02
Or maybe it's nothing to do with this at all. But in either case, this looks like some interesting stuff to explore in its own right. :-)
[1]: https://web.archive.org/web/20181001000000/https://www.darpa... | |
| ▲ | jandrewrogers 2 days ago | parent | prev | next [-] | | Not that I know of. If I drop the program director’s name, people that know, know. That is all the handshake you usually need. | |
| ▲ | sho 2 days ago | parent | prev [-] | | Sounds like Genoa/Topsail |
|
|
| ▲ | jedharris 2 days ago | parent | prev [-] |
| some pointers to the research program please? |
| |
| ▲ | jandrewrogers 2 days ago | parent [-] | | It was a national security program with no public face. I was recruited into it because I solved a fundamental computer science problem they were deeply interested in. I did not get my extensive supercomputing experience in academia. It was a great experience if you just wanted to do hardcore computer science research, which at the time I did. There are several VCs with knowledge of the program. It is obscure, but it has cred with people who know about it. I’ve raised millions of dollars off the back of my involvement. A lot of really cool computer science research has happened inside the government. I think there is a bit less of it these days, but people still underestimate it. | | |
| ▲ | DennisP a day ago | parent [-] | | I'm not surprised that the government does great research, but I wonder how much good that research does if it's unpublished and disappears after budget cuts. |
|
|