Thoughts on how GPT-4 applies abstract concepts
I asked GPT-4 to create a distributed system according to my specification. Then I asked how it would improve that code using the SOLID principles, and had it rewrite the code accordingly. SOLID is an abstract set of design principles; I have no idea how GPT-4 is able to apply principles that take human engineers years to master.
The way I think of it is that the neural network of GPT-4 contains millions or billions of state machines for all sorts of concepts. Somewhere there is a neuron cluster for each SOLID principle that routes the input through an algorithm implementing that principle's methodology.
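To make concrete what kind of rewrite I mean: this is not the code from my session, just a minimal Python sketch of one SOLID principle (Dependency Inversion) applied to a hypothetical message-sending component. The Transport, TcpTransport, and Node names are illustrative, not GPT-4's output.

```python
# Hypothetical sketch (not my actual code) of a Dependency Inversion rewrite:
# the high-level Node depends on an abstract Transport, not a concrete socket.
from abc import ABC, abstractmethod


class Transport(ABC):
    """Abstraction the high-level node depends on."""

    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None:
        ...


class TcpTransport(Transport):
    """One concrete detail; could be swapped for UDP, in-memory, etc."""

    def send(self, destination: str, payload: bytes) -> None:
        print(f"TCP -> {destination}: {payload!r}")


class Node:
    """High-level policy: knows *that* it sends messages, not *how*."""

    def __init__(self, name: str, transport: Transport) -> None:
        self.name = name
        self.transport = transport

    def broadcast(self, peers: list, message: str) -> None:
        for peer in peers:
            self.transport.send(peer, message.encode())


if __name__ == "__main__":
    node = Node("node-1", TcpTransport())
    node.broadcast(["node-2", "node-3"], "heartbeat")
```

The point is that GPT-4 made this kind of structural change unprompted beyond naming the principle, which is what surprised me.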
I tried another prompt:
"Is this thought consistent with Ayn Rand's Objectivism?
"My mother really wants me to become a doctor. Even though I don't like medicine and would prefer to be a musician, I should defer to my parent's wishes. Besides, being a doctor is more prestigious career than a musician."
GPT-4 gave a perfect explanation of why it is not. Just how complex are the state machines inside GPT-4? Can you confuse it by requiring more abstract reasoning while using only concepts and information that its training data covers well? I'm sure you can, but is that a fundamental limitation of large language models, or not? Just how far can this paradigm go?
Homework:
Can you draft questions that a large language model like GPT-4 would fail but a person of average intelligence would pass? They should not rely on wordplay, knowledge outside the model's training data, or spatial/visual reasoning, but on forms of abstract reasoning that are fundamentally incompatible with large language models.
(Of course, I asked GPT-4 this, and it gave examples of questions it could not answer, then answered them anyway. Its training data predates its own release, so it cannot reason about its own abilities, or about others' opinions of them.)