LinkedIn, the Microsoft-owned business and employment-focused social platform, has begun training its AI on user data without explicit user consent. In a recent blog post, LinkedIn’s SVP and General Counsel, Blake Lawit, confirmed that the company is using user data to power its generative AI features.
The updated user agreement, which takes effect in November, includes details about content recommendations, moderation practices, and the new generative AI features. LinkedIn has also introduced a new privacy policy to clarify how user information is used in the development of its products and services, including AI-generated content.
The policy specifies that LinkedIn collects, processes, and uses posts, articles, language preferences, and any feedback users have previously provided. The company says it is working to minimize the amount of personal information in its AI training data through the use of “privacy enhancing” technologies.
Users can opt out of this enabled-by-default data collection by navigating to their account settings and switching off the new “Data for Generative AI Improvement” option. European users will be exempt from automatic data collection for AI training “until further notice,” according to the company.
LinkedIn’s actions have raised concerns among users who rely on the network for professional connections and job opportunities. Those affected can review the updated user agreement and privacy policy to understand how their data is being used, and opt out if they wish.
LinkedIn’s use of user data to train generative AI models has sparked debate about the balance between innovation and user privacy. As the technology landscape continues to evolve, it is essential for companies like LinkedIn to prioritize transparency and user consent.