Practical steps to getting started with Gen AI for marketing
March 27, 2024
A common question from every executive team these days: “What’s our AI strategy?”
It’s a fair question. When every podcast and article and blog has AI in the title (yes, this one too), leaders want to know how their teams are going to use this wonderful new tool.
My advice: start small, start now.
It’s much better to get the ball rolling by testing generative AI use cases that are lower lift and lower risk than to build a more comprehensive plan at the outset. That way your teams can adjust and learn as you find the best approach for getting value from this evolving technology.
Here are some straightforward tips for getting started.
Start small and move quickly
Generative AI for personalization is still new, so there’s no definitive playbook. Instead, teams should be experimenting to find out what works and how to build from there.
You’ll learn a lot by trying out a single use case end to end. It’s important that this be something that isn’t a huge lift or an outsized risk. Risk can be reduced by including a human in the loop (more on that below). Some good ones to start with:
- Email subject lines
- Email copy
- Website copy
- CTAs for paid media
- Images for email
- Images for website
- Images for paid media
And what sorts of things should you be looking to learn?
- How your developers interact with the generative AI models and tools
- What privacy considerations affect the workflow
- Whether the right information is available to pass to the AI model
- Feedback and approval processes for gen AI content
Experimenting with that first use case will help break the ice and accelerate your strategy. It’s easier to get answers to the questions above by doing than by spending a lot of time planning upfront.
It’s worth keeping in mind that most decisions around which specific AI model to use are two-way doors, so if it doesn’t work you can try something different. Prioritize selecting an achievable use case — even if it’s not the most impactful one in absolute terms — and go build it. Don’t worry about making optimal technical decisions upfront.
Start with a human-in-the-loop use case
Marketing personalization with generative AI becomes a lot harder (and riskier) when there’s no human to review the results. AI-powered chatbots are a prime example: you don’t want them serving up inaccurate information to customers.
A good way to think about it: AI can help you scale output, but it can’t replace quality assurance.
By starting with a human-in-the-loop use case, like any of the content optimization ones listed above, your team can move quickly to establish a routine for working with generative AI while minimizing risk.
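To make that concrete, here’s a minimal sketch of what a human-in-the-loop flow could look like for one of those starter use cases. The `call_model` function is a placeholder for whichever gen AI API your team uses, and the review queue is just an in-memory list for illustration; nothing here is a prescribed implementation.

```python
# Minimal sketch of a human-in-the-loop flow for email subject lines.
# `call_model` is a placeholder for whichever gen AI API your team uses;
# the review queue is just an in-memory list for illustration.

def call_model(prompt: str) -> str:
    """Stand-in for a call to your generative AI provider."""
    raise NotImplementedError("Wire this up to the model your team chooses.")

def generate_subject_line_drafts(campaign_brief: str, n_drafts: int = 3) -> list[str]:
    prompt = (
        "Write an email subject line for the campaign described below. "
        "Keep it under 60 characters and match our brand voice.\n\n"
        f"Brief: {campaign_brief}"
    )
    return [call_model(prompt) for _ in range(n_drafts)]

def queue_for_review(drafts: list[str], review_queue: list[dict]) -> None:
    # Nothing is sent to customers from here -- a marketer approves or edits
    # each draft before it goes anywhere near the email platform.
    for draft in drafts:
        review_queue.append({"draft": draft, "status": "pending_review"})
```

The point of the sketch is the shape of the workflow: the AI only produces drafts, and a person decides what actually ships.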
So when does it make sense to move beyond use cases that have a person involved at every step?
- Once your initial use case experimentation has answered the questions in the previous section and you feel your team has working operating guidelines for gen AI
- You’ve worked through the low-hanging fruit use cases. This is the classic tech prioritization question of time vs. value — automated use cases with no human in the loop will be resource intensive and harder to get right, so do the other ones first.
- You have monitoring and validation plans in place to manage risk. The chances of AI outputting a problematic response won’t be zero, but solid monitoring of output can reduce the risk to a level that is far outweighed by the value AI generates (a simple example of automated output checks is sketched below).
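As an example of what lightweight output validation could look like, here’s a sketch of automated checks that gate AI-generated copy before it’s published. The specific rules (a character limit and a flagged-phrase list) are hypothetical illustrations, not a complete safety program.

```python
# Sketch of automated checks that gate AI-generated copy before it's published.
# The rules below are illustrative; real checks would come from your brand
# and compliance guidelines.

BANNED_PHRASES = {"guaranteed results", "risk-free"}  # hypothetical examples
MAX_SUBJECT_LENGTH = 60

def validate_copy(text: str) -> list[str]:
    """Return a list of issues; an empty list means the copy passes."""
    issues = []
    if len(text) > MAX_SUBJECT_LENGTH:
        issues.append(f"Too long: {len(text)} chars (limit {MAX_SUBJECT_LENGTH})")
    lowered = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            issues.append(f"Contains flagged phrase: '{phrase}'")
    return issues

def publish_or_escalate(text: str) -> bool:
    issues = validate_copy(text)
    if issues:
        # Route to a human reviewer instead of publishing automatically.
        print("Flagged for review:", issues)
        return False
    return True
```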
Make sure you have the right information to pass to the model
The success of gen AI depends on the context (fancy way of saying information) that you supply to the model.
This means you need the right information available to make the model successful. For example, in the email content use case above, you probably want to create different messages for different customer segments. Information like demographics, channel preferences, and product preferences would be key inputs for the gen AI. It’s also a question of data quality: how accurate is the customer information?
A good litmus test is asking if the information you’re providing would allow a smart human to complete the task that you’re asking gen AI to complete — if it wouldn’t be enough for them, it won’t be enough for the AI.
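Here’s a minimal sketch of what assembling that context might look like for the segmented email example above. The customer fields are hypothetical; substitute the attributes your own customer data actually has, and send the finished prompt to whichever model you use.

```python
# Sketch of assembling customer context into a prompt for segmented email copy.
# The customer fields below are hypothetical; substitute the attributes your
# own customer data actually has, and send the prompt to your model of choice.

def build_email_prompt(customer: dict, campaign_goal: str) -> str:
    # The litmus test: would this context be enough for a smart human copywriter?
    return (
        f"Write a short promotional email for this goal: {campaign_goal}.\n"
        f"Customer segment: {customer.get('segment', 'unknown')}\n"
        f"Preferred channel: {customer.get('channel_preference', 'email')}\n"
        f"Recent product interests: {', '.join(customer.get('product_interests', []))}\n"
        "Match our brand voice and keep it under 120 words."
    )

example_customer = {
    "segment": "loyal repeat buyers",
    "channel_preference": "email",
    "product_interests": ["running shoes", "trail gear"],
}
prompt = build_email_prompt(example_customer, "announce the spring sale")
# draft = call_model(prompt)  # hypothetical model call; a marketer reviews before sending
```

If a field in that prompt is missing or stale, the AI will write around the gap just like a human would, which is why data quality matters as much as the prompt itself.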
Establish clear KPIs and get feedback after release
Once you have your initial use case online, you’ll want to evaluate whether it was successful so you can figure out how to iterate. Is this use case worth doubling down on? What could be improved, and which investments are likely to move the needle?
Some areas that might need to be improved:
- Integration with workflows – is the gen AI initiative removing roadblocks or creating them?
- Training for your team – is the team feeling comfortable with the new tools?
- Context quality – is the information you’re passing to the models sufficiently accurate and up-to-date?
- Trust in outputs – does the output maintain brand standards?
Get started, things are moving fast
If it feels like AI news keeps coming at dizzying speed, that’s because the state of the art is advancing every day. It might be tempting to wait until the dust settles, but that will just result in standing still while others are moving forward. Generative AI isn’t a fad that’s going to blow over. The time to jump in is right now.
That’s why the best way to start is with one use case that you can test and iterate on. Beyond the resources and risk a larger-scale plan demands, by the time you put one together there may be new developments to account for. Better to evolve with the field than to run to catch up.
If you’re thinking about the AI output that customers see, you’ll also want to check out our other blog post on the internal-facing AI that improves the quality of customer-facing AI and makes it easier for your teams to work with customer data for personalization.