I believe the person you're replying to meant local inference, i.e. running the model on-device. The tool you shared, like most (all?) LLM utilities, is a wrapper around remote API calls:
https://github.com/theJayTea/WritingTools/blob/main/Windows_...