| ▲ | Crash0v3rid3 11 hours ago | |
Mind sharing some examples of bad results you've seen vs other LLMs? | ||
| ▲ | citizenpaul 3 hours ago | parent [-] | |
1. Seems to forget its context about 20/80 of results now. It used to be decent but now I may make only two prompts and it forgets the previous one noticeably more. 2. Results are noticeably worse, much more prone to "cheating" outcomes like generating some logic then = true to all results so it always finishes regardless of conditions. | ||