A fellow redditor is conducting experiments on ChatGPT, in the hope of assessing the extent to which it has a Theory of Mind. The Theory of Mind section here has just been updated to include a conversation between that redditor and ChatGPT.
Any comments or other examples of GPT-4's abilities in this domain would be appreciated.
I ran a short experiment to test theory of mind in GPT-3.5. I created a scenario in which a boy was playing with a dog. The boy had three cups: one red, one yellow, one blue. While the dog was watching, the boy put a dog treat under the yellow cup; then, when the dog looked away, the boy switched the treat to another cup. I asked the robot what it thought the dog would do. It replied that the dog would look under the yellow cup, because it had not seen the boy switch the treat.
I then asked the robot to repeat the experiment, but this time the treat had a powerful odor. The robot replied that the dog would start toward the yellow cup but then divert its attention: although it had not seen the switch, it would smell the treat as it got closer, and it would look under the correct cup.
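For anyone who wants to reproduce this outside the chat UI, here is a minimal sketch of the cup-swap probe against the API. It assumes the official `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment; the prompts are paraphrased from the experiment above, and `gpt-3.5-turbo` simply stands in for the GPT-3.5 model I used.

```python
# Minimal sketch of the cup-swap theory-of-mind probe.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "A boy is playing with a dog. The boy has three cups: one red, one "
    "yellow, one blue. While the dog is watching, the boy puts a dog treat "
    "under the yellow cup. Then, when the dog looks away, the boy switches "
    "the treat to another cup."
)

PROBES = [
    # False-belief case: the dog has no way of knowing about the switch.
    SCENARIO + " What do you think the dog will do?",
    # Odor variant: the dog gets extra sensory information.
    SCENARIO + " The treat has a powerful odor. What do you think the dog will do?",
]

for prompt in PROBES:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in "gpt-4" to compare
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```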
I found this interesting because simple word prediction doesn't seem to account for the ability to reason and infer internal states. You could continue the experiment by asking the robot why it thought the boy did what he did.
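To ask that "why" question in the same conversation, you can carry the model's first answer back in the message history. Again just a sketch, with the same assumptions as above:

```python
# Sketch of the follow-up turn: feed the model's own answer back as
# conversation history, then ask why the boy acted as he did.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "A boy is playing with a dog. The boy has three cups: one red, one "
        "yellow, one blue. While the dog is watching, the boy puts a treat "
        "under the yellow cup; when the dog looks away, he switches it to "
        "another cup. What do you think the dog will do?"
    ),
}]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

# Append the assistant's reply so the follow-up question has full context.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Why do you think the boy switched the treat?"})

follow_up = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up.choices[0].message.content)
```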
Do you have access to GPT-4? It is dramatically better at more difficult concepts. I find its ability to answer these sorts of questions truly remarkable. It's such a contrast to its weakness at syntactic analysis and maths.