Show HN: Roundtable MCP, Orchestrate Claude Code, Cursor, Gemini and Codex (github.com)
1 point by mahdiyar 6 hours ago | 2 comments

Hey HN! Last week I spent 40 minutes debugging a production issue that should have taken 5. Not because the bug was complex, but because I kept switching between Claude Code, Cursor, Codex, and Gemini - copying context, losing the thread, starting over.

  The workflow was painful:
  1. Claude Code couldn't reproduce a React rendering bug
  2. Copy-pasted 200 lines to Cursor - different answer, still wrong
  3. Tried Codex - needed to re-explain the database schema
  4. Finally Gemini spotted it, but I'd lost the original error logs

  This context-switching tax happens weekly. So I built Roundtable AI MCP Server.



What makes it different: unlike existing multi-agent tools that require custom APIs or complex setup, Roundtable works with your existing AI CLI tools through the Model Context Protocol. Zero configuration: it auto-discovers what's installed and just works.

Architecture: Your IDE → MCP Server → Multiple AI CLIs (parallel execution). It runs the CLI coding agents in headless mode and shares the results with the LLM of your choice.

Real examples I use daily:

  Example 1 - Parallel Code Review:
  Claude Code > Run Gemini, Codex, Cursor and Claude Code Subagent in parallel and task them to review my landing page at '@frontend/src/app/roundtable/page.tsx'

  → Gemini: React performance, component architecture, UX patterns
  → Codex: Code quality, TypeScript usage, best practices
  → Cursor: Accessibility, SEO optimization, modern web standards
  → Claude: Business logic, user flow, conversion optimization

  Save their review in {subagent_name}_review.md then aggregate their feedback

  Example 2 - Sequential Task Delegation:
  First: Assign Gemini Subagent to summarize the logic of '@server.py'
  Then: Send summary to Codex Subagent to implement Feature X from 'feature_x_spec.md'
  Finally: I run the code and provide feedback to Codex until all tests in 'test_cases.py' pass
  (Tests hidden from Codex to avoid overfitting)

  Example 3 - Specialized Debugging:
  Assign Cursor with GPT-5 and Cursor with Claude-4-thinking to debug issues in 'server.py'
  Here's the production log: [memory leak stacktrace]
  Create comprehensive fix plan with root cause analysis

  All run in parallel with shared project context. Takes 2-5 minutes vs 20+ minutes of manual copy-paste coordination.

Try it:

  pip install roundtable-ai
  roundtable-ai --check  # Shows which AI tools you have

I'd love feedback on:
  1. Which AI combinations work best for your debugging workflows?
  2. Any IDE integration pain points?
  3. Team adoption blockers I should address?
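If your MCP client uses the common `mcpServers` JSON convention (the Claude Desktop / Claude Code style), registering the server might look like the snippet below. The command name matches the pip entry point above, but check the repo README for the authoritative config:

```json
{
  "mcpServers": {
    "roundtable": {
      "command": "roundtable-ai",
      "args": []
    }
  }
}
```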

  GitHub: https://github.com/askbudi/roundtable
  Website: https://askbudi.ai/roundtable
mahdiyar 5 hours ago | parent | next [-]

Example use case:

Prompt:

```
The user dashboard is randomly slow for enterprise customers.

Use Gemini SubAgent to analyze frontend performance issues in the React components, especially expensive re-renders and inefficient data fetching.

Use Codex SubAgent to examine the backend API endpoint for N+1 queries and database bottlenecks.

Use Claude SubAgent to review the infrastructure logs and identify memory/CPU pressure during peak hours.
```
