TheAceOfHearts 6 hours ago:
This comment is a bit confusing and surprising to me, because I tried Antigravity three weeks ago and it was very undercooked. Claude was actually able to identify bugs and grasp the bigger picture of the project, while Gemini 3 with Antigravity often kept focusing on unimportant details. My default everyday model is still Gemini 3 in AI Studio, even for programming-related problems. But for agentic work, Antigravity felt like very early-stage beta-ware when I tried it. I will say that Gemini 3 is at least usually able to converge on a correct solution after a few iterations. I tried Grok on a medium-complexity task and it quickly got stuck changing minor details without being able to get itself out. Do you have any advice on how to use Antigravity more effectively? I'm open to trying it again.
paxys 5 hours ago:
Ask it to verify stuff in the browser. It can open a special Chrome instance, browse URLs, click and scroll around, inspect the DOM, and generally do whatever it takes to verify that the problem is actually solved; if it isn't, it will go back and iterate more. That feedback loop IMO makes it very powerful for client-side or client-server development.
Analemma_ 4 hours ago:
I've mentioned this before, but I think Gemini is the smartest raw model for answering programming questions in chatbot mode. These CC/Codex/gemini-cli tools need more than just the model, though: the harness has to be architected intelligently, and I think that's where Google is behind for the moment.