▲ | 42lux a day ago
Because it's alchemy and everyone believes they have an edge on turning lead into gold. | ||||||||||||||||||||||||||
▲ | elcritch a day ago | parent | next [-]
I've been thinking for a couple of months now that prompt engineering, and therefore CoT, is going to become the "secret sauce" companies want to hold onto. If anything, that's where the day-to-day pragmatic engineering gets done.

Like with early chemistry, we didn't need to precisely understand chemical theory to run mass industrial processes: a good-enough working model, some statistical parameters, and good ole practical experience got us there. People figured out steel making and black powder with alchemy. The only debate now is whether prompt engineering is currently closer to alchemy or to modern chemistry. I'd say we're at advanced alchemy with some hints of rudimentary chemistry.

Also, unrelated, but with CERN turning lead into gold, doesn't that mean the alchemists were correct, just fundamentally unprepared for the scale of the task? ;)
| ||||||||||||||||||||||||||
▲ | viraptor 14 hours ago | parent | prev [-]
We won't know without an official answer leaking, but a simple explanation could be: people spend too much time trying to analyse the thinking steps without understanding the details. There was a lot of talk on HN about the thinking steps second-guessing and contradicting themselves. But in practice, that behaviour is partly trained in, by explicitly injecting "however", "but", and similar words into the traces, and the model does more processing than simply interpreting the thinking part as text the way we read it. If the content is commonly misunderstood, why show it?
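For concreteness, here's a minimal sketch of the kind of trace augmentation I mean. Everything in it (function name, connective list, probabilities) is an invented illustration, not any lab's actual pipeline; the point is just that "however"/"but" style second-guessing can be injected into training traces mechanically rather than emerging from genuine reconsideration:

    import random

    # Illustrative list of second-guessing connectives (assumed, not from any real dataset).
    CONNECTIVES = ["However,", "But wait,", "Hmm, actually,"]

    def inject_self_correction(trace: str, p: float = 0.3, seed: int = 0) -> str:
        """Randomly prefix some sentences of a reasoning trace with a
        backtracking connective, mimicking trained-in self-correction."""
        rng = random.Random(seed)
        sentences = trace.split(". ")
        out = []
        for s in sentences:
            # Never touch the opening sentence; lowercase the original start
            # of any sentence that gets a connective prepended.
            if out and s and rng.random() < p:
                out.append(rng.choice(CONNECTIVES) + " " + s[0].lower() + s[1:])
            else:
                out.append(s)
        return ". ".join(out)

    print(inject_self_correction(
        "The total is 42. That satisfies the constraint. The answer is 42.", p=0.9))

A trace run through something like this reads exactly like the "contradicting itself" steps people dissect on HN, which is the point: reading the thinking block as a faithful transcript of deliberation is already a misunderstanding.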