Safeguarding Data in the AI Era: Guidelines for Skylite Associates and Candidates

The adoption of Generative AI tools presents an exciting opportunity for consultancies like Skylite Associates to enhance efficiency, but it also introduces critical data protection responsibilities. As UK data protection advisory sources highlight, the core principles of the UK GDPR—such as lawfulness, fairness, and data minimisation—must be strictly applied to AI usage.

Over the last few months, as the Data Protection Made Easy podcast has reported, there are growing privacy concerns around the use of AI. Those developing AI systems rely on vast amounts of personal data to make them function, but whether this data is used appropriately is a further cause for concern. AI privacy threats are already apparent, including data breaches, biased algorithms, deep fakes and cyber attacks. Taking a risk-based approach when adopting AI will ensure you have the appropriate measures in place to mitigate these risks and stay compliant.

In this blog, we’ll outline the critical considerations for businesses adopting AI and how to maintain GDPR compliance when using it, referencing the expert guidance of UK data protection specialists like Data Protection People.


Key GDPR Compliance Considerations for AI Usage


As a data controller, Skylite Associates is accountable for ensuring all processing, including that which involves third-party AI, is compliant. The guidance from Data Protection People outlines six critical steps for using AI and personal data lawfully:

  1. Assess Business Use of AI Systems: You first need to assess how you are (or will be) using AI across your business. Are you using AI to streamline repetitive tasks or to make better decisions? Whatever your reason, you’re still processing personal data, so you must have a lawful basis for doing so.

  2. Conduct a Data Protection Impact Assessment (DPIA): The UK GDPR requires a DPIA wherever processing is likely to result in a high risk to individuals. As AI is considered a high-risk technology, carrying out a DPIA to identify and mitigate your AI’s privacy risks is strongly recommended.

  3. Respect Data Subject Rights: If you’re processing personal data, you must comply with the eight individual rights set out in the UK GDPR, including the right to erasure and the rights related to automated decision-making and profiling.

  4. Collect & Process Only the Data You Need: Known as the data minimisation principle, the UK GDPR says you should only collect the minimum amount of personal data needed. This principle is directly addressed in our guidelines below.

  5. Identify Bias and Discrimination Risks Early On: AI often operates as a black box. If a system is trained on skewed or biased data, its algorithms can perpetuate those biases. Address these risks early to avoid discriminatory outcomes.

  6. Seek External Support for Using AI: When using third-party AI systems, you remain the data controller. You must ensure the service provider (processor) can assist you in meeting your GDPR obligations.


Skylite Associates and Candidate Guidelines for Generative AI Use


To fully adhere to the data minimisation principle (Point 4 above) and mitigate the risk of data breaches and unlawful processing via third-party AI systems, Skylite Associates has established the following non-negotiable guidelines for all employees and candidates.

I. Data Redaction and Anonymity: The Golden Rule


Never, under any circumstances, input personal details or proprietary organisational information into a Generative AI tool.

  • Personal Data Redaction: Before using any document or text (e.g., CVs, project notes, reports) to provide context or structure for a generative AI tool, all personal identifying details must be fully redacted and anonymised. This includes, but is not limited to:

    • Names, Addresses, and Telephone Numbers

    • Personal Email Addresses

    • Specific Dates of Birth

    • Candidate/Employee IDs that link back to a specific individual

  • Organisational Data Redaction: All client-specific, financial, or proprietary details that could identify a client, a project, or Skylite Associates itself must be removed or generalised.

    • Replace specific company names with generic placeholders like “[Client A]” or “[Large Financial Institution]”.

    • Redact sensitive financial or strategic figures.

  • Principle: All documents or text used to “restructure contents and provide frameworks” must be made generic and anonymous so that no personal or organisational details are stored or processed by the AI provider, protecting both our clients and our candidates.
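To illustrate what redaction before AI use can look like in practice, here is a minimal Python sketch. It is an illustrative aid, not a complete PII detector: the regular expressions are simplified assumptions, the `CAND-` candidate-ID format is hypothetical, and names or free-text identifiers still require manual (or NER-based) review before any text leaves our systems.

```python
import re

# Simplified, illustrative patterns only — not an exhaustive PII detector.
# Names and context-dependent identifiers still need human review.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?:\+44\s?\d{3,4}|\(?0\d{3,4}\)?)\s?\d{3}\s?\d{3,4}\b"),
    "[DOB]":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[ID]":    re.compile(r"\bCAND-\d+\b"),  # hypothetical candidate-ID format
}

def redact(text: str) -> str:
    """Replace matched personal identifiers with generic placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Reach her at jane.doe@example.com or 0207 946 0123; DOB 14/02/1990; ref CAND-48291."
print(redact(note))
```

A script like this can serve as a first pass over CVs or project notes, but the redacted output must always be eyeballed before it is pasted into any approved AI tool.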

II. AI Tool Configuration and Privacy


As the data controller, we must ensure appropriate security measures are in place to protect any data, including anonymised data, as part of our accountability obligation.

  • Disable Training Data Storage: All Generative AI tools approved for use within Skylite Associates must have their privacy settings configured to explicitly prohibit the storage or use of input data for model training purposes.

  • Confirmation of Settings: Every user is responsible for ensuring the “History” or “Data Storage” setting is disabled within their approved AI environment before each session begins.

  • Approved Tools Only: Only AI tools where a rigorous Data Protection Impact Assessment (DPIA) has been conducted and approved by the Data Protection Lead may be used for professional purposes. This ensures the third-party processor contract is compliant and the system offers the necessary safeguards, as advised by the ICO’s guidance on using third-party AI.

By adhering to these stringent guidelines, Skylite Associates demonstrates its commitment to the highest standards of data protection, allowing us to leverage the power of AI responsibly while safeguarding the privacy of our candidates and clients.
