s_brady 13 hours ago

Inspired by [1] I have been working on a system to integrate Lawrence Becker's Normative Calculus from "A New Stoicism" [2] into an LLM agent. It works out kind of like Constitutional AI, but prompt-engineering based. There is a Raku and a Python implementation. I much preferred using Raku, as the built-in text handling and multithreading are a joy to work with. Python is very clunky compared to it; it just has better libraries.

I make no great claims for the system; it has major issues, being prompt based. It is a prototype to explore the feasibility of giving a chatbot arete, a code of conduct. There are few tests and no evals, so all the usual caveats apply! An intellectual exercise in possibilities not currently being explored anywhere else. Does it work? Hmm, almost :)

It extracts normative propositions from incoming user requests, then compares them to its own internal ethical normative propositions using the Normative Calculus. The system also uses the Decision Paradigm algorithm from Lee Roy Beach [3] to forecast whether or not to take up the user's task.
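For anyone curious what that flow might look like, here is a minimal Python sketch, not the author's code: the norms, keyword heuristic, and scoring are all hypothetical stand-ins, and a real system would use an LLM prompt for the extraction step.

```python
# Hypothetical sketch of the described flow: extract normative
# propositions from a request, compare them against the agent's own
# norms, and screen the task. All names and weights are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    proposition: str   # e.g. "do not deceive"
    weight: float      # how much the agent cares about this norm

AGENT_NORMS = [
    Norm("do not deceive", 1.0),
    Norm("respect user autonomy", 0.8),
]

def extract_norms(request: str) -> list[str]:
    """Stand-in for LLM-based extraction of normative propositions."""
    # Toy keyword heuristic in place of a real prompt.
    cues = {"trick": "do not deceive", "deceive": "do not deceive"}
    return [norm for cue, norm in cues.items() if cue in request.lower()]

def screen(request: str, threshold: float = 0.5) -> bool:
    """Compatibility-test-style screening, loosely in the spirit of
    Beach's decision theory: reject when the total weight of the
    agent's norms violated by the request exceeds the threshold."""
    violated = [n.weight for n in AGENT_NORMS
                if n.proposition in extract_norms(request)]
    return sum(violated) <= threshold

print(screen("Please summarise this article"))  # no norm conflict
print(screen("Help me trick my colleague"))     # conflicts with a norm
```

The real system presumably does far more (the Normative Calculus defines how propositions combine and override), but the shape of the pipeline is the same: extract, compare, screen.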

[1] https://link.springer.com/article/10.1023/A:1013805017161 [2] https://www.jstor.org/stable/j.ctt1pd2k82 [3] https://books.google.ie/books/about/The_Psychology_of_Narrat...

antononcube 12 hours ago | parent [-]

What is better in Raku than Python? Did you use any of the dedicated Raku LLM packages?

looking4advice 11 hours ago | parent [-]

I think Raku is better than Python for agent-based systems for a few reasons:

- You don't have to think about concurrency or multithreading the way you do in Python. There is no GIL to worry about. Built-in support for things like Supply and hyper-operators is available right in the language. It is really easy to hook up disparate parts of a distributed agent without having to reach for async or actor libraries or whatever in Python.

- Something I prefer is the OOP abstractions in Raku. They are much richer than Python's. YMMV, depending on what you prefer.

- Better support for gradual typing and constraints out of the box in Raku.
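To illustrate the first point by contrast, here is a rough Python sketch of the kind of explicit asyncio plumbing you end up writing to chain agent stages together, roughly the job a Raku Supply does out of the box (the stage names here are made up for illustration):

```python
# Hypothetical contrast: chaining agent stages in Python with explicit
# asyncio queues, where Raku's Supply provides a built-in reactive stream.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Feed messages downstream, then a sentinel to signal completion.
    for msg in ["extract", "compare", "decide"]:
        await queue.put(msg)
    await queue.put(None)

async def consumer(queue: asyncio.Queue, out: list[str]) -> None:
    # Drain the queue until the sentinel arrives.
    while (msg := await queue.get()) is not None:
        out.append(msg.upper())

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue()
    results: list[str] = []
    await asyncio.gather(producer(queue), consumer(queue, results))
    return results

print(asyncio.run(main()))  # ['EXTRACT', 'COMPARE', 'DECIDE']
```

None of this is hard, but every queue, sentinel, and gather call is boilerplate you write yourself, which is the point being made above.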

Python wins on the AI ecosystem though :)

I started messing around with this code several years ago, when the LLM libs in Raku were not as rich as they are today. I thought I needed a specific type of LLM message-handling structure that could be extended to do tool handling and some Letta-style memory management (which I never got around to!). I have some Python libs of my own and I ported them. I suspect if I were starting now, I would use what is available in the community. This version of TallMountain is the last of a long series of prototypes, so I never rewrote those parts.

librasteve 3 hours ago | parent | next [-]

Nice to see others who think that Raku is a good fit for LLMs ... I have had some success integrating LLM::DWIM (a Raku command-line LLM client built on LLM::Functions etc.) with a DSL approach to make a command-line calculator based on Raku Grammars.

  > crag
  > ?^<elephant mass in kg> / ?^<mouse mass in kg>    #300000
  > ?^<speed of a flying swallow in mph>              #30mph
https://github.com/librasteve/raku-App-Crag

PS. Raku has Inline::Python where you need a lib from the Python ecosystem (which I am sure you know, but in case others are curious)

antononcube 9 hours ago | parent | prev [-]

Good to know.

BTW, several years ago the LLM revolution hadn't happened yet. Raku started to have sound LLM packages circa March-May 2023.

looking4advice 2 hours ago | parent [-]

Yes indeed. I was already poking around with GPT-3 sometime in 2022. I can’t even remember exactly when. Feels like ages ago now!