

12 Best Practices for Scaling Gen AI: What Most Companies Miss
We at turian see the same pattern across mid‑sized companies in Europe: teams integrate generative AI into their workflows, often starting with a promising demo, but value stalls before real impact is achieved. The issue is rarely the technology itself; more often, the organisation hasn’t put the right adoption infrastructure in place. That is not a technology failure. It is an operating‑model gap.
Recent McKinsey research captures this reality: fewer than one in three organisations follow effective adoption and scaling practices for generative AI, and fewer than one in five track KPIs for their implementation. Larger organisations are more likely to be using these practices, but even there the work is far from done. The same research also found that tracking KPIs and laying out a clear road map correlate most strongly with bottom‑line impact, while CEO‑level oversight and workflow redesign matter greatly for value realisation.
McKinsey also reminds us that the honeymoon is over: scaling gen AI now depends on doing fewer things better, with disciplined cost management and reuse across use cases.
In this article, we turn those research insights into a practical playbook your team can implement.
The 12 best practices most companies overlook
McKinsey’s latest State of AI research isolates twelve adoption and scaling practices that correlate with bottom‑line impact. The findings are stark: fewer than one‑third of organisations follow most of them, and fewer than one in five track KPIs for gen‑AI solutions. Larger organisations are more likely to adopt more of these practices, but there is headroom everywhere.
Below, we restate each practice in plain terms and show how to apply it in real workflows.

1) Build a dedicated adoption engine
Most pilots fail not because the model is weak but because nobody owns the last mile from demo to daily work. A dedicated adoption engine gives you a cross‑functional team with explicit decision rights to coordinate IT, risk, frontline users, and more. Think of it as a delivery office for gen‑AI outcomes, not an R&D lab.
How to put it into practice: form a team that runs intake and prioritisation, coordinates rollouts, manages cross-functional collaboration, and governs releases. Consider setting up domain pods with business users, analysts, and prompt specialists.
Mini checklist
- Charter a cross‑functional AI adoption office with clear decision rights.
- Name accountable owners for each domain: procurement, sales, service, finance, compliance.
- Define intake, prioritisation, and exit criteria for pilots.
2) Communicate value internally, early and often
Teams may be sceptical or have concerns about how AI is used. Regular, credible communication builds trust and accelerates adoption. This is about showing the organisation that AI is making work faster and more reliable, not broadcasting hype.
How to put it into practice: share before‑and‑after metrics (for example, cycle time or exception rates), stories from the frontline, and a shared view of what’s next. Tie them to real workflows, and summarise progress in an easily digestible format, e.g. three simple graphs everyone understands.
Mini checklist
- Establish a regular cadence for sharing wins and metrics with the team, and define the format (e.g. a bi-monthly all-team meeting where graphs showcase AI adoption wins).
- Determine who is in charge of this communication.
3) Make leadership ownership visible
Sponsorship is not enough. Teams change behaviour when leaders and executives actively model the new way of working and remove cross‑functional blockers. McKinsey highlights senior‑leader engagement and role‑modelling as a practice associated with higher AI adoption. The most effective change often starts when a CEO or VP uses the AI agent live and sets the expectation that this is now how work gets done.
How to put it into practice: Assign a clear executive owner. Make gen-AI progress part of regular leadership reviews. Let leaders demo real tasks, approve rollouts, and unlock integration or security hurdles directly. Align mandates with governance to keep things moving.
Mini checklist
- Assign a named executive for AI governance.
- Require product demos in leadership meetings.
4) Embed gen AI into frontline workflows
Embedding AI into the system of work is the difference between a neat demo and a real process change. Implementing AI in your processes is not simply using tools like ChatGPT to read a document; it means integrating AI agents with email, ERP, and CRM systems to achieve real workflow automation. McKinsey flags effective embedding of gen‑AI solutions into business processes as one of the core practices.
How to put it into practice: integrate AI agents with your existing tools and systems and let them operate where the work happens: in the mailbox where your team receives order confirmations, drafting replies to suppliers or customers, extracting data from documents, and updating the ERP record in context. Include human‑in‑the‑loop checkpoints so people review the critical parts of the process (a minimal sketch follows the checklist below).
Mini checklist
- Integrate AI directly into ERP and CRM systems and shared mailboxes.
- Use AI for daily, common workflows such as data entry and extraction.
- Design your human-in-the-loop mechanism: determine where human approval is needed, how exceptions are handled, and so on, then put it into place.
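To make that human-in-the-loop checkpoint concrete, here is a minimal sketch in Python. It assumes a hypothetical extraction step that returns field values with confidence scores; anything below a threshold is routed to a human review queue instead of being written straight to the ERP. The function and field names (process_order_confirmation, update_erp, queue_for_review) are illustrative placeholders, not a real API.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human reviews the record first


@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # model confidence for this value, 0.0 to 1.0


def queue_for_review(fields, uncertain):
    # Placeholder: in practice this would open a task in your review tool.
    print(f"Queued for human review; uncertain fields: {uncertain}")


def update_erp(record):
    # Placeholder: in practice this would call your ERP integration.
    print(f"ERP updated: {record}")


def process_order_confirmation(fields):
    """Human-in-the-loop gate: auto-post confident records, route the rest."""
    uncertain = [f.name for f in fields if f.confidence < CONFIDENCE_THRESHOLD]
    if uncertain:
        queue_for_review(fields, uncertain)  # a person approves before anything changes
    else:
        update_erp({f.name: f.value for f in fields})


process_order_confirmation([
    ExtractedField("po_number", "PO-4711", 0.99),
    ExtractedField("delivery_date", "2025-08-01", 0.72),  # triggers review
])
```

Where you set the threshold is a governance decision: start conservative, then raise it as measured accuracy improves.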
5) Run role‑based enablement for every function
Generic “AI 101” training rarely changes daily work. McKinsey singles out role‑based capability training so employees know how to use gen‑AI capabilities appropriately. We see the best adoption when enablement is compact, scenario‑driven, and rooted in the team’s actual use cases.
How to put it into practice: split training by role. Run a 45‑minute enablement for procurement officers focused on order confirmations, a separate session for the team covering sales order intake, and another for service agents on complaint handling. Each ends with a live run and clear escalation paths. Make it iterative: training once won’t be enough as AI evolves and integrates with more of your workflows.
Mini checklist
- Build persona-specific guides for different departments: procurement, sales, finance, etc.
- Decide whether to bring in external trainers or put a small dedicated internal team in charge.
- Ensure training is scenario-based (aligned with your company’s workflows), not feature-based.
6) Foster employee trust in the tech
Adoption follows trust. That requires transparency about what the agent can and cannot do, where humans remain in control, and how errors are caught. This way, employees can feel confident about how AI is used and how it operates in their day-to-day workflows.
How to put it into practice: configure approval gates for sensitive fields, define clear thresholds for human review, and make the rationale behind agent actions visible (a sketch of such logging follows the checklist). Explain limitations transparently. In addition, teams should be able to see how their feedback or corrections lead to system changes.
Mini checklist
- Create a capability guide explaining what the agent does, what it doesn’t, and which decisions remain human.
- Define escalation paths, including who gets notified for exceptions.
- Show the reasoning by logging a short rationale for every automated action.
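As a minimal sketch of that last checklist point, the snippet below appends a one-line rationale alongside every automated action. The JSONL file name and entry fields are assumptions for illustration, not a prescribed schema.

```python
import json
from datetime import datetime, timezone


def log_agent_action(action: str, rationale: str, actor: str = "ai-agent") -> None:
    """Record an automated action together with a short, reviewable rationale."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "rationale": rationale,  # one sentence a reviewer can check at a glance
    }
    with open("agent_actions.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_agent_action(
    action="updated delivery date on PO-4711",
    rationale="Supplier email of 12 May confirms the new delivery date 2025-08-01.",
)
```

A log like this also doubles as the audit trail you will want for the customer-trust and compliance practices in point 12.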
7) Close the loop with feedback and iteration
Pilots often gather praise but no structured feedback. McKinsey’s best practices list includes having a mechanism to incorporate feedback on gen‑AI performance and improve over time. The mechanism matters: capture edits at the point of work, triage weekly, and ship small fixes fast.
How to put it into practice: every agent draft gets an Approve, Edit, or Reject signal with a reason code (especially during the pilot stage); a minimal sketch of such a signal follows the checklist. After a successful implementation, a weekly triage group can review error patterns and update prompts, templates, or process steps. This feedback is then shared with the team in charge of implementation so they can make the necessary changes.
Mini checklist
- Create simple options to rate or provide feedback on AI outputs.
- Run regular surveys to check what’s working and what isn’t (e.g. via a dedicated Slack/Teams channel where team members can drop their feedback).
- Designate a person to review the feedback on a regular basis and create action points (fixes, improvements, revisions, etc.).
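Here is a minimal sketch of such a feedback signal, assuming illustrative reason codes; the point is that every Edit or Reject carries structured data the triage group can aggregate later.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    EDIT = "edit"
    REJECT = "reject"


# Illustrative reason codes; define your own per use case.
REASON_CODES = {"wrong_price", "wrong_date", "missing_field", "tone", "other"}

feedback_log = []


@dataclass
class Feedback:
    draft_id: str
    verdict: Verdict
    reason_code: str | None = None  # required for EDIT and REJECT


def record_feedback(fb: Feedback) -> None:
    if fb.verdict is not Verdict.APPROVE and fb.reason_code not in REASON_CODES:
        raise ValueError("Edit/Reject feedback needs a valid reason code")
    feedback_log.append(fb)


def weekly_triage() -> Counter:
    """Aggregate reason codes so the triage group sees error patterns."""
    return Counter(fb.reason_code for fb in feedback_log if fb.reason_code)


record_feedback(Feedback("draft-001", Verdict.APPROVE))
record_feedback(Feedback("draft-002", Verdict.EDIT, "wrong_date"))
print(weekly_triage())  # Counter({'wrong_date': 1})
```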
8) Publish a clear adoption road map
You need a living plan that sequences rollouts by value, risk, and readiness. McKinsey finds that a clearly defined road map is one of the practices with the greatest impact on the bottom line, especially in larger organisations.
How to put it into practice: maintain a living roadmap that sequences gen AI adoption based on business impact, complexity, and team readiness. It should define the first wave of high-value, low-risk use cases, set clear criteria for scaling, and outline supporting enablers like integrations or training. The roadmap should also set expectations: what will be live in 30, 60, and 90 days, and when a use case is ready to move from pilot to production.
Mini checklist
- Prioritize use cases by business value, risk, and team readiness.
- Define 3 to 5 initial “pattern” use cases to start with.
- Set clear success criteria to move from pilot to production.
- Create a 30/60/90-day timeline with milestones.
- Map out ownership: who drives, who supports, etc.
9) Tell a compelling change story
If your people do not know why you are deploying gen AI, they will fill the gap with anxiety. McKinsey lists a “compelling change story” among the recommended practices, tying transparent communication to workforce trust and adoption. The story should explain how AI helps jobs evolve, not end, and where the saved time will be reinvested.
How to put it into practice: provide your team with a real story showing how gen AI is improving daily work in, for example, sales. Show a sales team member who moved from manual keying to managing exceptions and helping the field team. Make it concrete. Recognise teams that retire manual steps and achieve quality targets with the agent in the loop.
Ensure it’s a consistent narrative, reinforced in meetings, onboarding, and progress updates.
Mini checklist
- Ensure HR and leadership communicate the message consistently: this is about job evolution, not job or people replacement. Team members remain essential.
- Collect a real before/after success story from a frontline team.
- Share success stories through different channels: onboarding, team meetings, internal updates.
10) Track results with defined KPIs
This is the single strongest lever in McKinsey’s analysis. Tracking well‑defined KPIs for gen‑AI solutions is most correlated with bottom‑line impact. It is also the practice least widely adopted. Define baselines before pilots, instrument at launch, and publish results on a predictable rhythm.
How to put it into practice: start with a core set of KPIs that reflect both usage and value. That typically includes activity metrics (e.g. tasks automated, time saved), quality metrics (e.g. first-pass accuracy, error rates), and business outcomes (e.g. cycle time, cost to serve, or DSO). What matters is consistency: define baselines before rollout, collect data from day one, and share results regularly (a minimal tracking sketch follows the checklist).
Mini checklist
- Define baseline metrics before the pilot begins.
- Select 1-2 KPIs in each category:
  - Activity (e.g. tasks automated, time saved)
  - Quality (e.g. error rate, first-pass accuracy)
  - Outcomes (e.g. cycle time, cost to serve)
- Instrument data collection from day one.
- Set a regular reporting cadence (biweekly or monthly).
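As a minimal sketch, assuming you can export task-level data from your systems, the snippet below compares live KPIs against their pre-pilot baselines; the metric names and values are placeholders.

```python
# Baselines captured before the pilot begins (placeholder values).
BASELINES = {
    "avg_cycle_time_hours": 26.0,     # outcome metric
    "first_pass_accuracy": 0.82,      # quality metric
    "tasks_automated_per_week": 0.0,  # activity metric
}


def report(current: dict) -> None:
    """Print each KPI next to its baseline and the relative change."""
    for name, baseline in BASELINES.items():
        value = current[name]
        if baseline:
            change = (value - baseline) / baseline * 100
            print(f"{name}: {baseline} -> {value} ({change:+.1f}%)")
        else:
            print(f"{name}: {baseline} -> {value}")


# Example biweekly snapshot
report({
    "avg_cycle_time_hours": 9.5,
    "first_pass_accuracy": 0.91,
    "tasks_automated_per_week": 140.0,
})
```

Publishing the same few numbers on a predictable rhythm matters more than dashboard sophistication.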
11) Reinforce adoption with incentives
People do what they are measured on. McKinsey includes employee incentives that reinforce gen‑AI adoption as one of the twelve practices, yet it is among the least used. We recommend tying incentives to business outcomes rather than raw usage.
How to put it into practice: set targets linked to real outcomes (reduced cycle time, fewer exceptions, or higher straight-through processing) and recognise teams that simplify workflows or consistently hit quality benchmarks with AI agents in the process.
Mini checklist
- Define clear, measurable targets (e.g. reduced cycle time) and reward teams that achieve them.
- Define incentive structures: keep them fair and aligned with actual impact.
- Celebrate wins publicly to build momentum (e.g. share them with the whole team via email or on an all-hands call).
12) Earn customer trust by focusing on compliance
Trust is not only an internal matter. McKinsey lists fostering trust among customers as a practice correlated with better outcomes. That starts with transparency and solid compliance practices. For EU‑exposed processes, map your flows to CSDDD, CBAM, REACH, and RoHS where applicable.
How to put it into practice: Use a human-in-the-loop approach for customer-facing tasks until agent accuracy is proven. Be clear about which steps are automated, log key actions, and maintain an audit trail for escalations or audits. Show customers you’re in control, not just compliant.
Mini checklist
- Create and implement a human-in-the-loop system for all sensitive or external-facing workflows.
- Be transparent with customers about what is automated (e.g. let them know when they are chatting with an AI bot or receiving an automated email).
- Map your AI workflows to relevant compliance regulations.
- Regularly check on compliance regulations updates.
Where to start: a phased, pragmatic maturity model
You do not need to adopt all twelve practices at once. In our experience, momentum comes from sequencing the right steps, not from doing everything simultaneously. Start by laying the groundwork with a clear roadmap, defined ownership, and a simple, high-impact use case. Then scale gradually, layering in additional practices as you build confidence.
Here’s our recommended maturity path for mid‑sized enterprises:
Pilot
Pick one or two high‑volume, rules‑based processes, such as procurement order confirmations or sales order intake. Define KPIs. Deploy the AI agent in the shared inbox and ERP, or wherever the work happens, and always include a human‑in‑the‑loop approach.
Operationalise
Once the pilot runs smoothly, deepen adoption by embedding the agent into daily tools, refining prompts, updating Standard Operating Procedures (SOPs), and rolling out tailored training per role. Start tracking performance and share early results to build momentum internally.
Scale
Expand to adjacent processes: supplier communication, invoice verification, quality or compliance cases. Introduce domain pods to reuse components, templates, and playbooks.
Optimise
Tune for cost-efficiency and quality. Remove redundant tools, raise straight-through processing thresholds, and monitor runtime costs vs build costs.
A sample 6‑ to 12‑month roadmap

- Weeks 1‑4: Charter adoption office. Capture baselines. Select two use cases. Draft and share your change story.
- Weeks 5‑12: Go live with human‑in‑the‑loop checks in place. Deliver role‑based training. Keep the team on board by sharing benefits and expectations.
- Months 4‑6: Embed AI into core workflows. Adjust SOPs. Launch the first KPI dashboard. Triage feedback weekly.
- Months 7‑9: Scale to two more domains. Introduce pods by domain. Consolidate toolset where possible.
- Months 10‑12: Optimise costs and quality. Raise straight‑through thresholds where justified. Add external transparency steps for customer‑facing flows.
Adapt your pace based on regulatory complexity, data readiness, and internal alignment. McKinsey notes that while large firms often check more boxes, mid-sized “hidden champions” move faster by focusing on real workflows, not just frameworks.
Pitfalls and risks to watch (and how to avoid them)
Over‑reliance on tech without cultural change
Tools don’t change habits; people do. Without role-based enablement and a clear change narrative, even the best AI won’t stick. Start with training and storytelling that connect the tech to real daily work. As McKinsey points out, what moves the needle is end-to-end integration, not tool hoarding.
Lack of ownership or diffusion of responsibility
If no one owns adoption, progress slows. Gen AI adoption needs visible leadership and a dedicated team to drive it forward. Assign a person in charge, clarify who makes decisions, and give that team the mandate to prioritize, unblock, and track outcomes. Your roadmap will also help keep everyone aligned on what’s next and how success is defined.
Poor data quality or outdated systems
Poor data quality and legacy infrastructure can quietly derail even the most promising gen AI initiatives. Success starts with a reliable data foundation and secure operating patterns. Focus on the specific data needed for your first three use cases: is it accessible, accurate, and structured enough for automation? Address gaps pragmatically. For instance, if PDF purchase orders arrive in five different formats, consider starting with the most common one and expanding from there. Incremental upgrades to systems and data pipelines are more effective (and realistic) than chasing a perfect architecture from day one.
Neglecting ethics and governance
Ethics and governance shouldn’t be seen as just legal checkboxes; they’re strategic foundations for sustainable AI use. Build controls directly into your workflows: establish clear policies on data privacy, make AI decisions auditable, and ensure users understand what’s automated versus reviewed by a human. For customer-facing processes, transparency is critical. Deloitte recommends evaluating risk across four dimensions: to your enterprise (e.g. reputational damage), to the AI system (e.g. model drift or bias), from adversarial use (e.g. prompt attacks), and from shifts in regulation or public trust. Proactively addressing these areas protects both performance and credibility.
Overpromising AI Benefits
AI is powerful, but it’s not magic. Overselling benefits too early can create disillusionment and stall momentum. Start with clear, measurable goals in areas like cycle time reduction, first-pass accuracy, or cost-to-serve, especially in back-office workflows where repetitive tasks are ripe for automation. Track those gains rigorously and celebrate progress without hype. Reuse is the real compounding factor: patterns learned in one process can often be applied to others with minimal adaptation.
Next steps
If your organisation is stuck in pilot purgatory, you’re not alone, and there’s a clear way forward. Audit yourself against the 12 adoption practices. Choose 3 to 5 areas to improve first. Focus on trust, measurable value, and reusability. Then apply what works to new domains.
At turian, we’ve built our AI agents with this journey in mind. They integrate into your existing ecosystem to automate full workflows in procurement, sales, compliance, and more, with human oversight built in. The result? Hybrid teams where people focus on the work that matters most, and AI handles the rest.
Get started today and take your gen AI integration to the next level.
