As we continue our discussion of Blake Lemoine’s assertion that the Large Language Model chatbot LaMDA had become sentient, we relay the rest of his conversation with the program, followed by some questions and answers with Lemoine himself. But as Lemoine has said, machine sentience and personhood are only two of many questions to be considered. His greater concern is how an omnipresent AI, trained on an insufficient data set, will affect how different people and cultures interact, and who will be dominated or excluded. The fear is that protecting corporate profits will ultimately outweigh global human interests.

In light of these questions about AI’s ethical and efficient development, we highlight the positions and insights of experts on the state and future of AI, such as Blaise Agüera y Arcas and Gary Marcus. The imperatives of responsible technology development and the right path for Deep Learning are more grounded than fantastical visions of killer robots. Yet hovering over all of the mechanics are the philosophies of what constitutes sentience, comprehending and feeling as a person does, and being human enough.

The reality of Artificial Intelligence matching humans may be fifty years in the future, or five hundred, but if that day ever comes, let’s hope it’s an egalitarian future in which we are the masters and not the servants.
Visit our webpage for this episode for a lot more information.