Many thanks for the thought-provoking article. Interesting observation about human and LLM reasoning ... Both are maps of unseen and unseeable terrains of processes.
I have been wondering how reasoning LLMs can be useful for students. I have also thought about using LLM reasoning as a counterpoint or comparison with learners' own reasoning on problems like the geometry type alluded to in the post ... regardless of how faithful the CoT actually is.
But letting students know that there is a difference between the map and the territory, and perhaps speculating on how they differ, might be an interesting meta-exercise ... presumably for higher-level learners.
Especially if we have some concrete examples like the ones Anthropic recently shared in one of their research articles. https://www.anthropic.com/research/tracing-thoughts-language-model
I agree with many, like Yann LeCun and the 4E group, that LLMs are not the path to human-like thinking or AGI, if a model of human thinking is the end goal.
Thanks, much appreciated! Yes, that Anthropic article is highly relevant to this discussion. It is also not the first time AI researchers have observed this phenomenon. I recently wrote an article over at my Substack where I talked about some of the emergent behaviors we observe in LLMs, alignment faking being one of them. It is a fascinating topic.
Thanks for the opportunity to contribute to AI EduPathways! It was a lot of fun to write this article.