Hong Kong's privacy watchdog sounds alarm about platform's use of data
LinkedIn has suspended the collection of Hong Kong users' personal data for training its generative AI models after concerns were raised by the Office of the Privacy Commissioner for Personal Data (PCPD).
The PCPD said on Tuesday that the suspension was LinkedIn's response to its concerns over the matter.
"The PCPD received a response from LinkedIn yesterday confirming that it has paused any use of Hong Kong users' personal data for such purposes as of 11 October 2024 while the PCPD's concerns are being addressed," the privacy watchdog said.
Privacy Commissioner for Personal Data Ada Chung Lai-ling said she "welcomes" LinkedIn's decision to pause the collection.
"The PCPD will continue to follow up and monitor the situation to ensure that the personal data privacy of Hong Kong users is safeguarded," she said in a statement.
The office first raised concerns about LinkedIn's practice earlier this month, after it noted that the employment-oriented social media platform had updated its privacy policy.
Under the update, LinkedIn enabled by default a setting that allows the platform to collect users' personal data and content to train its generative AI models for content creation.
"LinkedIn's privacy policy update has aroused concerns of data protection authorities in other jurisdictions," the PCPD previously said in a statement.
"The PCPD is also concerned about whether LinkedIn's default opt-in setting for using users' personal data to train generative AI models correctly reflects users' choices."
Chung then reminded LinkedIn users who do not wish to take part in the data collection that they can revoke the permission in their LinkedIn account settings.
Hong Kong's PCPD has previously raised privacy concerns over how user conversations with GenAI-powered chatbots may become new training data for the underlying large language models (LLMs).
"If users inadvertently fed personal data to an AI chatbot, the data is susceptible to misuse beyond the original purpose without consent, thereby contravening the limitation of use of data principle," Chung previously warned.
"Indeed, an AI chatbot may produce an output response containing personal data which has been removed from the original context."
To address the emerging privacy and ethical concerns surrounding GenAI, Chung called on stakeholders to ensure that they comply with applicable laws and ethical principles when developing and using AI.
"While we are keeping a close eye on the global development in the regulation of the new technology, we would remind tech companies that, whatever regulations or standards are put in place, they bear responsibilities in the first place to ensure the lawful and ethical development and use of AI so that the new technology is used for human good," Chung said.