mentos 3 hours ago
Given that Python tends to produce fewer hallucinations when generated by LLMs, I wonder if former Django developers using AI tools are secretly having a blast right now.
tirpen 3 hours ago
I think another ace up Django's sleeve is that it has had a remarkably stable API for a long time with very few breaking changes, so almost all blog posts about Django that the LLM has gobbled up will still be mostly correct whether they are a year or a decade old. I get remarkably good and correct LLM output for Django projects compared to what I get in projects with faster-moving, frequently API-breaking frameworks.
Genego 2 hours ago
Whenever I saw people complain about LLMs writing code, I never really understood why they were so adamant that it just didn't work at all for them. The moment I did try to use LLMs outside of Django, it became clear that some frameworks are just much easier for LLMs to work with than others. I immediately understood their frustrations.
m_ke an hour ago
What a lot of people don't know is that SWE-bench is over 50% Django code, so all of the top labs hyper-optimize to perform well on it.
boxed 2 hours ago
If Python produces fewer hallucinations, it's not because of the syntax; it's because there's so much training data.