Thanks for the link, appreciate it.
I found this interesting.
Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.
This seems to indicate that the Turing Test is not especially relevant to the social implications of conversational net bots.
As an example, all it takes for net forums to enter a new era is for the average person to be unsure whether they are talking to a person or a bot. Or for them to be talking to bots without realizing it. Or for them to want to talk to bots, because the bots meet their needs better than humans do.
Honestly, if we were to examine many forums objectively, without assuming the posters are human, we could conclude that many of them are bots already, given the low quality of so many posts.