markanton | 18 hours ago
Nice project, but there are several dozen "AI/LLM gateways" now, all kind of doing the same thing. Kong AI Gateway [1] was maybe the first to attack LLM traffic governance and is far ahead in both features and adoption. Trying to understand the value add and differentiator here, since it's a problem that's kinda solved already.
honorable_coder | 16 hours ago | parent
There are a few critical differences. First, archgw is designed as a data plane for agents: it handles and processes ingress and egress (prompt) traffic to and from them. Unlike frameworks or libraries, it runs as a single process that bundles edge functionality with task-specific LLMs, tightly integrated to reduce latency and complexity.

Second, it's about where the project is headed. Because archgw is built as a proxy server for agents, it's designed to support emerging low-level protocols like A2A and MCP in a consistent, unified way, so developers can focus purely on high-level agent logic. This borrows from the same design decision that made Envoy successful for microservices: offload infrastructure concerns to a specialized layer and keep application code clean.

In our next big release, you'll be able to run archgw as a sidecar proxy for improved orchestration and observability of agents, something other projects just won't be able to do. Kong was designed for APIs. Envoy was built for microservices. Arch is built for agents.
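The division of labor being described can be sketched in a few lines. This is a hypothetical illustration of the pattern, not archgw's documented API: the gateway URL, endpoint path, and the "model" placeholder are my assumptions. The point is that the agent builds a plain, provider-agnostic request and hands it to one local proxy process, which owns routing, guardrails, retries, and observability.

```python
# Hypothetical sketch of the gateway/data-plane pattern: application code
# sends an OpenAI-style chat request to a single local proxy process and
# never embeds provider keys, retry logic, or upstream routing decisions.
# The address, path, and "model" value below are illustrative assumptions.
import json

GATEWAY_URL = "http://localhost:12000/v1/chat/completions"  # hypothetical sidecar address

def build_request(prompt: str) -> dict:
    """Build a provider-agnostic chat payload; which upstream model
    actually serves it is the proxy's decision, not the agent's."""
    return {
        "model": "gateway-routed",  # placeholder: upstream selection happens in the gateway
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("summarize this ticket")
print(json.dumps(payload))
# In a real app you would POST this JSON to GATEWAY_URL with any HTTP
# client; the agent code stays free of infrastructure concerns.
```

This is the same shape Envoy gave microservices: the application speaks one stable local interface, and the specialized layer in front of it absorbs the churn of upstream providers and protocols.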