Another strategy, especially for more technical prompts (for example, with a Deep Research model when you want to be very methodical about the format of the report), is to iterate on your prompt using AI - in essence, use AI to use AI better. There is a limit to how long and detailed a prompt should be; there is a sweet spot between too short and too long. And the formatting observation definitely resonates. I am curious whether all of these PhD-level questions had a clear, defined answer. A few months ago I put one of the physics questions from "Humanity's Last Exam" into DeepSeek two different times - both times it took about 20 minutes to solve. The first time it said the answer was 10; the second time it said the answer was 8. So clearly something is going on.
"Use AI to use AI better" - I agree wholeheartedly. Recently I've pushed myself to end contextualized prompts with "Ask me questions about _____ to ensure we are aligned." The questions that it asks not only help AI understand me (and my project) better, but they help ME understand me better - if that makes sense. There are tricks such as these that increase cognition while also increasing AI's accuracy, utility, and support of our human work.
Thoughtful and useful as usual.
Thanks Jim!