LinkedIn is now training in-house artificial intelligence (AI) tools on user data. The process, first reported by technology-focused online publication 404 Media, primarily affects US accounts and seemingly began before any changes had been made to the social media platform’s user agreement and privacy policy. Data belonging to LinkedIn users in the European Economic Area (EEA) and Switzerland has not been scraped, according to the firm’s FAQ on its AI services, likely due to stringent privacy regulations in those jurisdictions.

LinkedIn has since updated its user agreement and privacy policy, allowing customers to opt out of the data use via a toggle button in their profile settings. “In our Privacy Policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (‘generative AI’) and through security and safety measures,” wrote the social media platform in a blog post explaining the changes. “When it comes to using members’ data for generative AI training, we offer an opt-out setting.”

The user agreement has also been updated to include additional details on content recommendation and content moderation practices. It now includes provisions relating to the generative AI features the platform offers, along with licence updates to benefit the creators of such services.

According to LinkedIn’s updated privacy policy, the platform may use personal data to develop and provide its products and services. The data can also be used to train AI models and to gain insights that personalise the platform’s services and make them more relevant to users.
