DeepMind's new machine learning chatbot scores 91% as 'sensible' in conversation as the average human.
I wonder if there is a limit to how human a chatbot can seem without expressing other human-like characteristics. For example, does a chatbot need complex and genuine opinions about a film before it can have a 100% 'human' conversation about it? I'd guess yes, because if it were simply generating typical responses, its opinion would break down at some point (e.g. begin contradicting itself).
@voltist Wouldn't a human also begin to contradict itself after a while, if you just push it far enough?
@eviloatmeal Yeah, contradiction is definitely something to be found in anyone's opinions, so maybe it's not the best example. What I'm trying to say is that surely the most effective way to sound like you have an opinion is to actually have one, so a machine learning system optimizing for human-like conversation would have to develop personal opinions in order to improve beyond a certain point. Of course, this doesn't just apply to opinions, but to goals and emotions as well.
Linux geeks doing what Linux geeks do...