It actually makes a lot more sense to share the LLM prompt you used than the output: in most cases the prompt is less data, and anyone can try the same prompt in other LLMs.