mindcrime | 5 days ago
Today (Sunday) I've spent the day studying Analogical Reasoning. Specifically, reading chapters from The Analogical Mind by Gentner, Holyoak, and Kokinov (eds), and Similarity and Analogical Reasoning by Vosniadou and Ortony (eds).

Beyond that, I've spent most of the weekend working on some "test harness" code for doing AI research. You all may have seen me mention XMPP a few times over the last year or so, and if so, you may have rightly wondered "What does XMPP have to do with anything?" Good question. The short answer is "nothing, in and of itself." That is, there's nothing in particular about XMPP that has anything to do with AI. I'm just using XMPP as a convenient interface for interacting with my AI experiments.

The thing is, most of this code was written in very much an "exploratory programming" style (e.g., "vibe coding before vibe coding was a thing, just without an LLM"). As such, the architecture and structure of the code is kinda crap, and it's hard to extend, reuse, modify, etc. There's too much "XMPP stuff" tightly coupled to my "Blackboard" system[1], and nothing was written to use dependency injection and so on.

Soooooo... I've spent a bunch of time over the weekend re-working that stuff to make my test harness much more useful. Now, all the "XMPP stuff" is contained in a single deployable unit, and the Blackboard stuff is likewise properly designed, allowing all the components to be Spring managed beans wired together in a Spring Boot application. That in turn exposes its interface as a simple REST API. One thing I'm debating now is whether I want to try to coerce this into fitting the OpenAI API model, and then adopt the OpenAI API for my backends[2]. Still debating with myself on that point.

Anyway, with this stuff done, it will be easy to swap out the AI backend components, run parallel tests, and do other nifty things.
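To give a sense of the decoupling, here's a minimal plain-Java sketch of the idea: the Blackboard core depends only on a transport-agnostic interface, so the XMPP (or REST, or anything else) front end can be injected from outside. All the names here (MessageSink, Blackboard, Demo) are illustrative, not the actual classes in my project, and in the real Spring Boot app the wiring would be done via @Bean definitions rather than by hand:

```java
import java.util.ArrayList;
import java.util.List;

// Transport-agnostic output port. The Blackboard never sees XMPP directly;
// an XMPP-backed (or console, or REST) implementation is injected instead.
interface MessageSink {
    void deliver(String message);
}

// Hypothetical stand-in for the Blackboard core: it takes its sink via
// constructor injection, which is exactly what Spring would do for us.
class Blackboard {
    private final List<String> entries = new ArrayList<>();
    private final MessageSink sink;

    Blackboard(MessageSink sink) {
        this.sink = sink;
    }

    void post(String entry) {
        entries.add(entry);
        sink.deliver("posted: " + entry);
    }
}

public class Demo {
    public static void main(String[] args) {
        // In Spring Boot these would be managed beans; here we wire by hand
        // with a console sink standing in for the XMPP deployable unit.
        MessageSink consoleSink = System.out::println;
        Blackboard bb = new Blackboard(consoleSink);
        bb.post("hypothesis-1");
    }
}
```

The point is just that swapping the front end (or running parallel tests against different sinks) becomes a matter of passing a different implementation in, rather than untangling XMPP calls from the core.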
One thing I'll probably do is integrate Apache Camel into the XMPP receiver component to support complex message routing logic where desired.

I also finally created a Dockerized build for all of this stuff and a docker compose file, so now I can just run "docker compose up" and have a running system in a few seconds. And since everything is built as a Docker image now, if I want to move this to K8S or something in the future, that becomes less of a slog.

All in all, I have gotten quite a bit done the last couple of days. I attribute a lot of this to the success of my eye procedure on Thursday. Now that I can see again, and am not experiencing near-constant severe eye strain and fatigue, it's a LOT easier to get stuff done!

[1]: https://en.wikipedia.org/wiki/Blackboard_system

[2]: As an aside, I say "coerce" because what I'm doing is not fundamentally based on LLMs or GenAI in general. Most of this work is either purely symbolic AI, or neuro-symbolic hybrid stuff at present. That said, I do allow for the possibility of using an LLM in places, especially for the "language" part. That is, if my system does a bunch of computation and creates an "answer" as a bunch of RDF triples or something, I can then take that and feed it to an LLM and say "translate this into conventional English prose" or whatever. I'm not an absolutist about any particular approach.
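For the curious, the kind of content-based routing I'd lean on Camel for can be sketched in plain Java. This is NOT Camel's actual RouteBuilder DSL (Camel routes look quite different), just an illustration of the routing concept with made-up names, assuming messages arrive as plain strings:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical content-based router: each incoming message is tested
// against predicates in registration order and handed to the first
// matching handler, with an optional fallback. Camel's choice()/when()/
// otherwise() DSL expresses the same pattern with far more machinery
// (endpoints, transforms, error handling) behind it.
class MessageRouter {
    private final Map<Predicate<String>, Consumer<String>> routes = new LinkedHashMap<>();
    private Consumer<String> fallback = m -> { };

    MessageRouter route(Predicate<String> when, Consumer<String> then) {
        routes.put(when, then);
        return this;
    }

    MessageRouter otherwise(Consumer<String> then) {
        fallback = then;
        return this;
    }

    void dispatch(String msg) {
        for (var e : routes.entrySet()) {
            if (e.getKey().test(msg)) {
                e.getValue().accept(msg);
                return;
            }
        }
        fallback.accept(msg);
    }
}
```

Usage would be something like routing "cmd:" messages to a command handler and everything else to the default chat path; the win with Camel proper is that the same routes can also bridge protocols (XMPP in, REST or queue out) without hand-rolled glue.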