2 Comments

I think this is a good way to look at it. The new model's ability to explain itself in words is neat, and uncovers more potential as an educational or cultural tool, but the assumption that it is like what happens in the human mind is... well, it's a bad analogy.

Based on what I have seen so far, this model is better at the one thing LLMs do well: language games. In this case it is better at word puzzles and slightly better at standardized tests. Since it uses complex math to solve these problems, it is clearly doing something quite different from the human brain.


The thinking algorithm is not transparent. If you ask o1 to explain its thinking, you get an OpenAI-revised summary. What this means is that we users will never really engage with this thinking machine in a meaningful way. Only as consumers, really. We won’t get to reason with it. This is probably the most concerning aspect of the business model.
