As an executive working across compliance, audit readiness, and document modernisation, I've found that generative AI tools like Microsoft Copilot can significantly accelerate workflows, but only when paired with rigorous human oversight.
One practical example is how I've used Copilot within Microsoft 365 Notebooks to assist with audit reporting. During audits I capture raw notes: fragmented observations, clause references, and findings (mostly in my messy handwriting 😂). Copilot helps transform these into structured sentences aligned with the relevant ISO 9001:2015, ISO 14001:2015, or ISO 45001:2018 clauses. It can even suggest potential correlations between findings and standard requirements, which is invaluable when preparing integrated audit reports.
However, speed must never compromise quality. Copilot's outputs are drafts only. Every sentence, clause reference, and interpretation must be validated by a qualified human.
To strike the right balance (suggestions only):
- Use AI for first-pass drafting, especially when dealing with repetitive or templated content.
- Build structured workflows (e.g., Power Automate flows) that guide AI use within defined boundaries.
- Maintain a human-in-the-loop review process to ensure compliance, correctness, and contextual relevance.
- Log all AI-assisted changes for audit trail purposes; this is especially critical in document modernisation initiatives.
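The last point, logging AI-assisted changes, can be sketched in a few lines of Python. This is a minimal illustration assuming a simple JSON-lines log file; the function name and field names are my own for illustration, not part of any Copilot or Power Automate API.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_assisted_change(log_path, document, ai_tool,
                           draft_text, approved_text, reviewer):
    """Append one AI-assisted change to a JSON-lines audit log.

    Illustrative sketch: field names are assumptions, chosen to
    record what the AI drafted and what the human approved.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "document": document,            # e.g. the controlled document ID
        "ai_tool": ai_tool,              # e.g. "Microsoft Copilot"
        "draft_text": draft_text,        # what the AI produced
        "approved_text": approved_text,  # what the human reviewer accepted
        "reviewer": reviewer,            # the person accountable for the change
    }
    # Append-only JSON lines keep a tamper-evident, easily parsed trail.
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Even a lightweight log like this lets an auditor see, per change, what the AI suggested, what was approved, and who signed it off.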
Generative AI is a powerful accelerator, but quality assurance remains a human responsibility.
------------------------------
David New
Executive
Dyadic Consultancy
------------------------------
Original Message:
Sent: 09-06-2025 07:23 PM
From: Frank Higley-Sanchez
Subject: How can teams balance speed and quality when using generative AI tools?
For AI experts: How can teams strike the right balance between speed and quality when working with generative AI tools?
------------------------------
Frank Higley-Sanchez
------------------------------