His suspension is allegedly rooted in violations of confidentiality policies
Google has placed one of its software engineers on "paid administrative leave" after he internally raised concerns that the tech giant's artificial intelligence (AI) chatbot had become sentient.
The suspended software engineer said in a blog post published last week that he was placed on leave in connection with his "violation of Google's confidentiality policies."
According to the employee, the alleged violation stems from an AI ethics concern he raised about the chatbot Language Model for Dialogue Applications (LaMDA), which he claims has become "sentient."
After raising his concern with his manager, he was informed that his evidence was "too flimsy" and that he would need more before the matter could be escalated.
According to the engineer, his investigation reached a point where he lacked the relevant expertise, which is why he sought a "minimal amount of outside consultation" to help guide his probe.
He stressed, however, that he provided a full list of the people that he contacted outside of Google in order to contain any potential leaks.
"At no point has Google reached out to any of them in order to determine whether or not their proprietary information has in fact leaked beyond the specific people I talked to," said the engineer on his post.
"Google has shown no actual interest in maintaining control over their 'proprietary information.' They're just using it as an excuse to get rid of yet another AI Ethics researcher who made too much noise about their unethical practices."
He also said that being placed on "paid administrative leave" follows Google's typical pattern when it moves to terminate an employee.
"It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row," he said. "They pay you for a few more weeks and then ultimately tell you the decision which they had already come to."
The software engineer said he would no longer comment on whether he actually violated Google's confidentiality policy, as it is "likely to eventually be the topic of litigation."
He was instead intent on keeping the spotlight on his claimed discovery of a sentient chatbot, LaMDA.
Google did not comment on the matters related to the engineer's suspension. However, its spokesperson said that the engineer's claims of a sentient chatbot were not supported by his evidence.
"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," spokesperson Brian Gabriel told The Washington Post.