A Discord friend, Will Petillo, has been musing on AI sentience, and he proposes that we might have the first traces of it already.
I have my own ideas on this issue, and personally I don’t think we have AI sentience yet. In fact, I don’t think “sentience” is well defined, so the question of whether AI is sentient is definitionally confused – which is a problem for everyone trying to address these issues.
On the other hand, I also think there is no fundamental reason that artificial neural networks cannot be conscious, and I agree with Will that sentience is not a binary concept. We will approach it by degrees, and the line of technological development that will eventually lead to artificial consciousness has already started to blur the boundaries between machines and minds.
But I will let his words speak for themselves… Visit his blog to read more.
I will post my own thoughts on this question soon.
TL;DR, super-simple summary:
Gradient descent (the optimization method used to train AI) is good at finding effective strategies for achieving goals; being conscious seems like a good strategy for mimicking consciousness; and language prediction is hard enough to merit such strategies.
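For readers unfamiliar with the term, here is a toy sketch (not from Will's post, just a generic illustration) of what gradient descent does: it repeatedly nudges a parameter in whatever direction reduces a loss, and over many steps it converges on whatever "strategy" minimizes that loss.

```python
# Toy gradient descent: repeatedly step downhill along the gradient
# of a loss function until the parameter settles near a minimum.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient `grad`, starting at x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite the gradient (downhill)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The loop converges toward the minimum at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Training a language model is this same loop scaled up to billions of parameters, with the loss being "how badly did you predict the next word" – the point of the summary is that the loop doesn't care *what* internal strategy minimizes that loss.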