The following is an anonymized and paraphrased client inquiry I received recently, along with my response.
Given the general nature of my answers-based on the 100+ client engagements we've had at GSD at Work LLC, helping organizations across both the public and private sectors meaningfully adopt AI-I thought it might be helpful to share them more broadly.
Happy to dig into any of these further in the comments.
Prospect inquiry [paraphrased]:
...a few questions:
How might leaders manage their employees' educational journeys, including inviting them to think through the ways in which they might incorporate general tools like Copilot or ChatGPT into their day-to-day workflows?
What are good practices for prioritizing, implementing, and even shutting down specific/niche gen AI tools?
How do you strike a balance between bottom-up innovation and prudent top-down oversight and governance?
Who is the ideal Gen AI user? And how do you think work will change?
My response:
All this makes sense and resonates with the challenges (and latent opportunities!) that we've seen with other clients-and ultimately helped them overcome-from PE-backed companies to VC-backed startups to the US Federal Government.
[key outstanding questions from my / our perspective in bold below]
In terms of managing the education journey, what we've found works really well (contra one-size-fits-all self-serve LMS training or "prompting courses") is a combined top-down and bottom-up approach that's particular to [organization]. Meaning: invest in 1-2 really high-value workflows (which, in practice, means some degree of business process re-engineering that would help you accelerate progress toward your 3-, 6-, or 12-month goals independent of AI - do you have committed/formalized targets, e.g., KPIs or OKRs, for the next 3, 6, or 12 months?), and empower employees with best-of-breed, general-purpose tools (e.g., ChatGPT Enterprise, Claude Enterprise, WisprFlow for dictation - NOT Microsoft Copilot) that they can use to experiment - with formal approval and executive enthusiasm - closest to the customer or the problem space they work in. Do [organization] employees have access to either ChatGPT Enterprise or Claude Enterprise? And how many employees do you have who use computers on a daily basis?
(To get clear on those workflows worth strategically investing in-and align large, cross-functional teams-we sometimes lead executive teams through an AI-assisted interactive data analysis process via our AI Oracle / 🔮 offering.)
You can then look at telemetry analytics - via an admin portal - to see who your most engaged users are for those general tools. Engagement will almost always follow a power-law curve, and you should invest extra time and attention in working with those top users: formalize what they're doing, empower them, and encode their knowledge as a custom GPT or a more sophisticated automation - even custom software - with the goal of democratizing power-user know-how among non-power users; we do this via AI Action Workshops.
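As a minimal sketch of that power-law analysis, assuming your admin portal can export per-user message counts as CSV (the column names and sample data here are hypothetical, purely for illustration):

```python
import csv
import io

# Hypothetical admin-portal export: one row per user with a 30-day
# message count. Column names are assumptions for illustration.
SAMPLE_CSV = """user,messages_30d
alice,412
bob,18
carol,7
dan,1310
erin,95
frank,3
grace,240
"""

def power_users(csv_text, top_share=0.2):
    """Return the top `top_share` of users by usage, plus the fraction
    of total usage they account for (the power-law concentration)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r["messages_30d"]), reverse=True)
    k = max(1, round(len(rows) * top_share))
    top = rows[:k]
    total = sum(int(r["messages_30d"]) for r in rows)
    top_total = sum(int(r["messages_30d"]) for r in top)
    return [r["user"] for r in top], top_total / total

users, share = power_users(SAMPLE_CSV)
print(users, f"{share:.0%}")  # top 20% of users and their usage share
```

If the share reported for the top slice is well over half of all usage, you're looking at the power-law curve described above - and the names in that slice are your candidates for Action Workshops.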
The compelling facts (so to speak) created in private via these hands-on sessions demonstrate that workflows like:
contract review,
data analysis,
prospect research and proposal creation,
marketing campaign activation,
software product development,
HR pulse survey data collection,
and more...
can be completed 10x faster with AI (e.g., tasks that normally take 2 days can be done in 2 hours - or even fully automated ~5% of the time) and often yield higher-quality work product.
Once one of these four-minute-mile moments happens, it's critical to capture it and incentivize the power users to spread the word, which you do by inviting them to do a show-and-tell session at optional, public, recorded bi-weekly office hours. (We facilitate these for clients as part of our AAA Transformation package.) Word spreads - incontrovertible success is incredibly charismatic - and even skeptical employees become curious and start experimenting when they see peers they trust (not just some outside consultants) demonstrating that this technology is good and real and works if you use it in ways that are very particular to your business and workflows. You should expect real transformation to start happening in ~3 months, and this shows up in:
engagement metrics with the general tools (aim for 50% WAUs [or DAUs if you're feeling extra ambitious])
an inventory of 5-10 "10x workflow transformation" cases with straight lines to financial impact [ideally EBITDA, with formal sign-off by the CFO]
measures of satisfaction (at the individual Action Workshop level) and of how dissatisfied employees would be, writ large, if they were to lose access to the tools
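To make the first of those metrics concrete, here's a minimal sketch of computing a weekly-active-user rate against total headcount, assuming you can export usage events as (user, date) pairs - the event data and names here are illustrative, not from any real portal:

```python
from datetime import date, timedelta

# Hypothetical usage-event export: (user, day_of_use). In practice this
# would come from the enterprise admin portal's analytics export.
events = [
    ("alice", date(2025, 3, 3)),
    ("alice", date(2025, 3, 5)),
    ("bob",   date(2025, 3, 4)),
    ("carol", date(2025, 2, 20)),  # outside the 7-day window
]

def wau_rate(events, headcount, as_of):
    """Fraction of headcount active in the 7 days ending at `as_of`."""
    window_start = as_of - timedelta(days=6)
    active = {user for user, day in events if window_start <= day <= as_of}
    return len(active) / headcount

rate = wau_rate(events, headcount=10, as_of=date(2025, 3, 7))
print(f"{rate:.0%}")  # 2 of 10 employees active this week -> 20%
```

Track this weekly against the 50% WAU target; switching the window to a single day gives you the more ambitious DAU variant.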
Regarding your second and third bullet points, I need more context, e.g., a specific case that you have in mind - would you share more?
If I'm reading between the lines here, there are lots of different tools that you might use, and the question is, "Which ones to use or adopt or let employees experiment with?" I'm of two minds about this:
Generally, the most powerful general-purpose tools are sufficient - again, I'm leaning on ChatGPT Enterprise here, especially with Deep Research and connectors to your internal data (beyond official connectors, you can also build and host custom MCP servers, which internal development or IT teams should be able to build and support, especially since agentic coding tools like Claude Code make this easier than ever). Related: I'm deeply skeptical of these VC-backed vertical SaaS companies (without naming names). It's not uncommon that you could save five or even six figures on some shiny enterprise AI SaaS tool supposedly built for a particular function by instead simply training the employees in that function to use dictation and the most powerful model available through your general enterprise license. Many such cases.
On the flip side, there are specific, niche tools that operators in a particular role may benefit from using; the challenge then becomes letting them experiment prudently while also ensuring that their use complies with security and information-handling policies. I don't have a silver bullet for this (although a best practice, certainly, is formalizing a policy document that tells employees explicitly not to share sensitive information with unapproved tooling - and that they can put everything, including sensitive information, in one such approved general-purpose tool, e.g., ChatGPT Enterprise). But I will say that at many (but not all!) of the organizations I've worked with, there is a very deep tension brewing between centralized IT organizations and people who want to - and, frankly, should be able to - rapidly experiment with niche tech that helps them get their work done faster. My recommendation is that the procurement, security, legal, and IT teams be directed by the CEO to figure out how to reduce cycle times from request to approval by 10x or even 100x; if the process becomes a bottleneck, employees will not be able to experiment enough to generate the data needed to understand where to make additional deep and narrow investments.
Regarding your last question, it's an excellent one; I've found that the people who tend to pick up the tools and run with them are often not who you'd expect.
I'd say, in general, they're people with:
at least 10 years of professional experience (so they know what good looks like in their particular area of expertise), and
good management skills, meaning they can clearly articulate strategic intent verbally and provide sufficient context such that a highly intelligent direct report could accomplish the task at hand.
For this reason, I've found that many (but not all) highly technical ICs-who one might assume are early AI adopters-actually struggle tremendously with making a transition to this new way of working because:
They don't want to give up direct control of, for example, writing lines of code, and
They've never managed anyone before, so they haven't built muscle around teaching delegates, sufficiently scoping and formalizing success criteria in English, or thinking about system design (e.g., implementing QA/QC processes).
I absolutely agree that the way we have been working will need to change and evolve. In the abstract, this means that knowledge work will become more asynchronous, more outcome-oriented, more AI-mediated, and more fractional; and labor pools will become more liquid. Traditional roles and departments will likely need to change as much of the specialized bureaucratic work bundled into those roles (all of which emerged to support the scaling of 20th-century institutions) is necessarily delegated to word-crunching computers that can perform some percentage of those tasks 100x faster and, possibly, deliver higher-quality work product; "UX researchers" and "scrum masters" (per se) are good examples.
To figure out what that means for [organization], you need to run the experiment and pay very close attention to the results, per the recommendations I've made above-and only then, re-form roles and departments, prudently.
To share some more concrete particulars about how you should expect the way work gets done to change:
Significantly fewer meetings
Use of dictation instead of typing
Commitment to scoping work in terms of time-bound outcomes (not hours, or even outputs) and giving people many more degrees of freedom in how they accomplish the work. Because AI is such a general-purpose technology, that extra freedom is beneficial - and frankly necessary - to realize the upside.
Ready to discuss further?
I'm happy to double-click into any of these, either via this thread or live - I believe my assistant, @Arlo, has been in touch; please book a time for us to chat here, or coordinate with Arlo to find a time.
Best,
Christian

