simonw · 2 hours ago
Sure, but the problem is when you take that half hour of work and share it with other people without making clear how much effort has gone into it. Software is valuable if it has been tested and exercised properly by other people. I don't care if you vibe coded it, provided you then put in the real work to verify that it actually works correctly - and then include the proof that you've done that when you start widely sharing it with the world. Right now it's impossible to tell which of these projects implementing the paper are worth spending time with.
kristjansson · an hour ago
> without making clear how much effort has gone into it

I'm increasingly convinced this is the critical context for sharing LLM outputs with other people. The robots can inflate any old thought into dozens of pages of docs or thousands of lines of a merge request. That might be great! But it completely severs the connection between the form of a work and the author's assessment of, investment in, and belief in it. That's something one's audience might like to know!
dalemhurley · an hour ago
Isn't the point of an MVP to be an MVP? The OP put together a POC and shared it, showing novel concepts used together. They are not some large R&D lab. The purity tests being asked for are in contradiction to the Show HN guidelines.