Artificial intelligence has been busy working away in the background of our everyday work and personal lives for years. Think social media, display advertising, tools like Grammarly or Clearscope...
But with the launch of ChatGPT over a year ago, generative AI was thrust into the limelight. This refers to deep-learning models that can generate new text, images, music and other content based on the data they were trained on.
Since then, we've entered an AI boom and gone from asking 'how do we feel about AI?' to 'how can we use AI?', specifically:
To add value to our work
To drive efficiency for our clients
Without risk
The use of AI to create content or ideas, like any new skill or service line, needs careful planning, consideration and collaboration with our partners. As an agency, we want to reassure our clients, existing and future, that we’re open and responsible, yet proactive and ambitious in a continually evolving tech landscape.
Here we share how we're responsibly adopting AI into our offering as a global agency in the health and pharmaceutical sector.
First and foremost: we’re transparent…
We’re never going to produce work by AI and call it our own. We will only ever use AI in a meaningful way if we’ve planned and agreed it with our clients. If there's an opportunity or need to use AI, we'll be open about how we’re using it and the role our experience and expertise plays in getting the most out of generative AI systems.
Our work is the creation of a team of real people, with specific skills we select for each client or project need.
We are long-term strategic partners for our clients: making you shine helps us shine, and we see no benefit in behaving any other way.
…and we’re careful.
We know that for many marketers, AI is a new field with uncertainties around security, authenticity and compliance. We aim to deliver impact and ROI without risk, and we'll work with clients keen to explore AI opportunities to understand and mitigate those risks.
We make sure we fully understand the limitations of AI systems, and our experienced team will always oversee, review and edit any AI-assisted work to an agreed level to ensure accuracy, compliance and authenticity. As yet, generative AI can't produce high-quality work without a lot of careful input, editing and, above all, fact-checking.
We never share any sensitive information without explicit written agreement from our clients, and only when that data is in a secure environment and not used to train the AI model. We follow our clients' AI policies where they exist, and we have our own policy in place too.
Our team has had AI training including:
Transparency
Tool selection
Accountability
Bias
Privacy
Compliance
Ethics
Considered curiosity
Our culture of learning and discovery at Wallace Health encourages everyone to keep up to date with the latest best practices, build new skills and develop personally and in their careers. Regular learning workshops spark individual curiosity – allowing our team to share new findings and ways of working and responsibly explore new approaches.
We plan and run pilot schemes to find out how we can responsibly and skilfully use new tools or techniques to get the best results for our clients’ objectives. Once we’ve identified a particular use case where we suspect AI could have an impact (for example, something that’s causing friction or slowing a process down), we enter a test-and-learn phase to evaluate capabilities and the feasibility of adoption.
Not everything we try lives up to the hype or promise, but failures are sometimes where we learn the most. And we share what we’ve learned with our clients so we can evolve our understanding together.
Think about perception, not just production
While the focus seems to be on how well AI can create content, we also need to consider how well AI-generated content will be perceived, now and in future.
A study called "Human Favoritism, Not AI Aversion" showed that people rated AI-generated content well but still preferred content made by real people. Platforms like Meta have indicated they'll be identifying and labelling AI-generated content, and no doubt others (e.g. Google and YouTube) will follow. What impact might this have on people, patients or HCPs and their perception of your content, your product and your brand?
Perception is an important consideration when weighing up whether generative AI could help solve a particular challenge, and a word of warning against poor planning and bad execution for short-term gains.
Case by case
We will take each use of generative AI on a case-by-case basis for our clients. So, while we’ve piloted a growing number of solutions to the challenges our clients face, we need to understand their goals, needs and constraints – now and in the future – to allow us to explore and use AI in the right way for each need. This means tailored processes, with more or less AI involvement depending on the desired outcomes.
As we look ahead to 2024 and beyond, we'll continue to keep our fingers on the pulse of AI development to help our clients stay ahead of the game and achieve more. We anticipate many new AI tools and data services that will help us reach new levels of insight, personalisation and unobtrusive targeting.
We look forward to integrating innovative solutions with clients who are able and willing to take that next step in content marketing.
If you’d like to chat to us about AI and exploring new opportunities, please get in touch.