In today's rapidly evolving business landscape, AI isn't just a buzzword—it's a game-changer. As an executive, you're likely facing the challenge of creating an AI strategy that drives real value for your organization. Maybe you’re at the stage where many different departments are piloting AI projects, and you’re wondering how they all add up. Or maybe some pilots have yielded results, and you’re wondering if there’s any infrastructure you could put in place to accelerate AI’s impact.
At Hop Labs, we've guided numerous clients through this process, and we've identified two levels of questions that can serve as the pillars of a robust AI strategy:
Level #1: Organization - Across all your AI projects, examine whether you have the foundations in place to support any individual initiative. We’ll share four questions you should consider at this level.
Level #2: Use-Case - For each application of AI that you consider, there are three questions you should explore more deeply.
The diagram below shows how the organization level and use-case level complement each other during the creation of your AI strategy.
A Brief Aside on AI Readiness
While not necessarily part of the strategy framework we’re discussing in this article, it’s also important to assess your organization’s technological readiness for AI projects. The extent of your computing power, your ability to take pilot projects to production, the state of your data, your security and privacy measures, and so on, will both inform your strategic answers and evolve with your use-cases. We’ll dive more into AI readiness in a future post.
Back to Our Strategic Questions
First, some good news: If you’re even considering the organization level of an AI strategy, you’re already ahead of the curve. We’ve found that many clients rush to answer questions about individual use-cases and overlook the cross-cutting foundations needed to enable and accelerate multiple use-cases.
Organization Level: Building Your AI Foundation
Before diving into specific use-cases, it's critical to establish a solid organizational foundation for AI. We've identified four key areas that can make or break your AI initiatives:
AI Project Governance: How do we decide what to work on?
Talent Strategy: How do we find people to work on it?
Success Metrics: How do we track whether they’re doing a good job?
Organizational Chart: How do they fit into our team?
You can ask the four questions above in any order, but it's important to answer them intentionally and deliberately. An intentional process ensures you have thoughtful, strategic answers rather than accidental, organic ones that may conflict with one another and send projects off track. For example, if project governance isn't aligned with how you hire, you may approve initiatives you can't actually staff. And if you hire people without a clear idea of how you'll evaluate them, or if their idea of success differs from yours, friction will follow. Answering these four questions deliberately will encourage innovation in the right places and ensure your AI teams are set up for success.
Question #1: AI Project Governance, or “How do we decide what to work on?”
Think through questions like:
Where do ideas come from?
How do you evaluate which ones to pursue?
And, most importantly, how do you decide when an idea isn’t working and should be cut off?
There are many right answers along the spectrum from centralized to decentralized idea generation – some of our clients’ leadership teams decide which strategic uses of AI to pursue, while others invite innovative ideas from anywhere in their organizations. Any point on the spectrum is fine, as long as you pick one deliberately for the sake of clarity across the org.
Once ideas are chosen, many folks naturally focus on when to begin the project, but knowing from the start how you’ll judge whether it’s working and when to cut it off is just as important. The success metrics you’ll identify in question #3 below can help with this.
Question #2: Talent Strategy, or “How do we find people to work on it?”
Think through questions like:
How do you hire?
What is the minimum viable AI team?
Can you develop career paths to attract the right people?
What partnerships with external vendors or academic collaborators do you need to round out your skill sets?
AI talent is rare and extremely expensive. Hiring for these roles is not something you can do lightly, especially if you're not a tech-first organization. Strong candidates have plenty of opportunities, and in most of those they're the top dog. If you're looking to hire them for a role that is secondary to, say, your nuclear physicists or clinical doctors, hiring will be hard even if that positioning is appropriate for your organization. You'll need to think through how to close this skills gap if you're not an attractive destination.
Anyone who is considering joining your organization will be looking at the career path ahead: Will they build a team? Will success in this role be apparent to future employers? And so on. You’ll need to have answers to such questions.
We’ve found it’s common for executives getting started with AI to think their first hire should have a PhD in machine learning. In practice, a PhD is often one of the last people to hire: they have a very specific skill set and typically need surrounding infrastructure in place to leverage it. More likely, the first person you should hire is someone fluent in your domain who is also conversant in AI. We have a whole blog post on this topic here.
Question #3: Success Metrics, or “How do we track whether they’re doing a good job?”
Think through questions like:
How do you manage risk?
How do you track progress, given that negative results are common?
It is surprisingly difficult to measure the success of an AI project during the R&D phase. Real business outcomes and positive results may be elusive, but that doesn’t mean things aren’t moving forward. Keep in mind that negative results are still progress, because they tell you what doesn’t work – you’ll need to have the right tolerance for that.
So if you can’t rely on results, then what? Publications work as a measure of success only if you have a novel approach to a well-understood task – but you may not have a well-understood task, and you may not care whether your approach is novel. Perhaps a hacked-together, pragmatic approach is just fine for your needs. Think through what’s appropriate for your organization, and then hire, reward, and promote in a way that's aligned with those metrics.
Question #4: Organizational Chart, or “How do they fit into our team?”
Think through questions like:
Where in our org chart does this role or team best fit?
Who will be ultimately responsible for them?
What does accountability for this role/team look like?
Again, there isn't a single right answer to these questions. The two most common scenarios we’ve seen are either AI folks embedded in a product-oriented team, or a centralized AI team.
If you don't have the capacity to embed a whole AI team in each of your product lines, it may make sense to have a centralized team, so that people can bounce ideas off each other, invest in shared infrastructure, and generally collaborate. However, a centralized team sits a bit further from the product use-cases, making it harder for it to be as effective as an embedded, product-focused team would be. So there's a balance to consider here.
Use-Case Level: Evaluating Specific AI Applications
We won’t cover the next level of questions in this article, but it's important to note that each AI use-case requires its own set of considerations. While the four organization-level questions outlined above require thorough deliberation, they need to be answered only once. Questions around business value, operations, and guardrails must then be answered separately for each potential use-case.
Developing a comprehensive AI strategy is a complex but crucial task for today's business leaders. By diving into these four topics – AI Project Governance, Talent Strategy, Success Metrics, and Organizational Chart – you'll begin to build a solid foundation for success with AI in your organization.
What questions do you think are important to consider as you create an AI strategy? Reach out – we’re always curious to hear how others are thinking through this.
— Ankur Kalra, CEO/Founder & Chetan Jhaveri, Strategy Consultant @ Hop