Many companies are now using AI to record and summarise meetings or to create action points without transcribing the meeting entirely. Using such tools can certainly be beneficial for businesses, as AI documents discussions efficiently and improves recordkeeping. However, AI also has significant implications for data privacy.
There must be an appropriate lawful basis for data processing, so businesses must consider what legal basis they can rely on when using AI in meetings. It is necessary to consider whether the express consent of participants is required and, if so, at what point. Alternatively, can a business rely on legitimate interests? Businesses will need to undertake thorough assessments to understand the risks and the appropriate mitigation. All personal data must be processed fairly and in a transparent manner in relation to the data subject.
Where AI uses material that is itself AI-generated as its source, it can be difficult to determine who is the controller of the personal data involved. Generative AI can also be inaccurate: transcriptions may not capture the context or tone of discussions, potentially resulting in misinterpretation.
Recording entire meetings creates a vast quantity of data which must be stored securely. This matters because data subjects can make a data subject access request (DSAR) to access their personal data held by a business. Businesses will need to locate the personal data they hold about the data subject and assess whether it should be disclosed. Once a controller receives a DSAR, data cannot usually be deleted except as part of routine business procedures, and the intentional destruction of material that is the subject of a DSAR is a criminal offence under section 173 of the Data Protection Act 2018. Businesses will also need to consider how long they will retain the data.
Transcripts created by AI tools may capture informal or impulsive comments, which attendees might not wish to be recorded. Using such technology may make attendees less willing to speak frankly, and this may, in turn, frustrate business development or the resolution of difficult issues where open conversation about strategy, risk, or blame is required.
Additionally, businesses must consider whether transcriptions are likely to capture special category data, which requires extra protection, and whether the AI carries a risk of introducing bias or discrimination. Businesses may also have further obligations if they wish to use AI to process data concerning children.
Businesses should also consider whether creating a transcript may breach the terms of an NDA they have entered into, by creating copies of and circulating other parties' confidential information. This may also raise copyright issues.
Consideration should also be given to how personal data will be shared and whether this may result in international transfers, for example where the AI provider is based overseas.
Businesses will need to:
- carefully document the decision to use AI, based on an evidenced assessment that AI is appropriate;
- ensure compliance with relevant data protection laws;
- analyse the risks that using AI transcription tools may bring, including data breaches or misuse, and how these can be mitigated; and
- ensure their policies, procedures, and privacy notice address the use of AI.
If you would like further advice, please get in touch with our Corporate & Commercial team by emailing online.enquiries@la-law.com or calling 01202 786188.
Ruth Chornolutskyy
Associate in the Corporate and Commercial team at Lester Aldridge.