Generative Artificial Intelligence (GenAI) is all the rage, but let's not get ahead of ourselves just yet. Since ChatGPT launched, GenAI has dominated the media, and whilst it is set to transform the marketing and digital industry as we know it, it is still very much in its infancy. We've created a handy guide to help you make sense of GenAI as the industry grapples with yet another big change.
What exactly is GenAI?
In the spirit of this article's topic, we used ChatGPT to help us define Generative AI. It offers the following definition:
Generative AI is a subset of AI that has the ability to create new content, such as images, videos, and text. It uses deep learning algorithms to learn patterns from existing data and then generates new data based on those patterns. This technology has significant implications for marketing, as it enables businesses to create personalized content on a large scale, and engage with their customers in a more meaningful way.
What are the different types of generative AI? Break down the market for me so it is easy to understand.
Whilst ChatGPT has become a household name, a host of providers are already playing in the GenAI space, supporting text generation (copywriting, etc.), image generation, code generation, audio generation and more.
How accurate is the information produced by text-based AI solutions and what should I be aware of?
Generative AI uses machine learning to infer information, and this often leads to inaccuracy. Pre-trained models like ChatGPT are also not dynamic in terms of keeping up with new information. A study from Stanford in 2022 found that most Generative AI models are only accurate 25% of the time. ChatGPT has been caught red-handed by scientists fabricating entire science articles, complete with made-up bibliographies, presenting a distorted picture of reality with little means for the user to realise it.
Model hallucination is a phenomenon discussed within AI communities that helps describe some of the pitfalls of GenAI accuracy. Model hallucination is where an AI model confidently delivers credible-sounding information to a user, yet the information is unjustified by the AI's training data. Examples include omitting relevant details or inventing information the model simply does not have. Due to the way models are created, they also tend to stick to their own responses even when corrected by humans. It's for this reason, among others, that GenAI tools should be used by businesses to assist content development and other activities rather than to replace people and tasks. Today GenAI is not able to replace human knowledge, but it can certainly help people become more efficient and effective.
What are some of the ethical issues surrounding GenAI that brands should be aware of?
The ethical concerns over Generative AI are many, and whilst the upside of GenAI is obvious in terms of productivity gains, there are a host of drawbacks which brands and leaders should understand. Without the right controls and governance in place, organisations could find themselves in hot water. Accuracy is one key issue as highlighted above, but what are some of the others?
Deepfakes - Deepfakes are images, video and other content that have been synthetically generated or altered in ways that make it hard to distinguish real from fake. Such media may spread misinformation, manipulate public opinion, or even harass or defame individuals.
Copyright Ambiguities - Another ethical issue around generative AI relates to the ambiguity over authorship and copyright of AI-generated content. It remains unclear who owns the rights to these creative works and how they can be used.
Biases - Large language models enable human-like speech and text. However, recent evidence suggests that larger and more sophisticated systems are often more likely to absorb underlying social biases from their training data. These biases can include sexist, racist, or ableist language picked up from online communities.
Is GenAI ready to replace humans in the workforce?
Brands need to remember that AI is a tool, like a camera or a paintbrush, not a replacement for humans – particularly in areas like creativity.
To use Generative AI effectively, you still need human involvement at both the beginning and the end of the process, and these people need to be trained to derive the best outcomes from GenAI. This is because a human must enter a prompt into a generative model in order for it to create content that meets the need. An 82-page book of DALL-E 2 image prompts, for example, has been published to help those using the platform generate the best creative outcomes. We have also seen the emergence of prompt marketplaces, which enable users to buy other users' prompts. Whilst the technology and models will mature over time, GenAI is still in its infancy. Leaders need to think about Generative AI as a tool to assist teams rather than to replace team members.
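For teams experimenting hands-on, the prompt-in, draft-out workflow described above can be sketched in a few lines of Python. This is purely illustrative: the `draft_product_copy` helper and its template wording are our own invention, and the commented-out request shows, under the assumption of OpenAI's current `openai` client library, roughly where a provider SDK would slot in.

```python
# Illustrative sketch only: a human-authored prompt template drives the model's
# output, and the quality of the draft depends heavily on the prompt's detail.

def draft_product_copy(product: str, audience: str, tone: str) -> str:
    """Build a structured prompt for a text-generation model (hypothetical helper)."""
    return (
        f"Write three short marketing taglines for {product}. "
        f"Target audience: {audience}. Tone: {tone}. "
        "Keep each tagline under ten words."
    )

prompt = draft_product_copy("a reusable coffee cup", "commuters", "playful")

# With a provider SDK, the prompt would then be sent to a model, e.g. (assumed,
# untested):
#
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# draft = response.choices[0].message.content  # a human still reviews this draft

print(prompt)
```

The point is the division of labour the article describes: a person crafts the prompt at the start and reviews the output at the end; the model only fills in the middle.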
What are some examples of Generative AI being leveraged by brands?
As GenAI spans voice, text, image creation, audio and speech and more, the use cases it solves for are varied and go well beyond copywriting. Mattel and Instacart are two brands that have pushed beyond gimmicky applications to apply GenAI to solve business and customer challenges.
Product development
In 2022, Mattel shared that they sought inspiration from DALL·E 2 to develop new prototypes for Hot Wheels. The tool creates custom images and art based on what people describe in plainspoken language. Using it, designers generated a number of prototypes, with each iteration sparking and refining ideas that could help design and flesh out a new Hot Wheels model car.
“It’s about going, ‘Oh, I didn’t think about that!’” said Carrie Buse, director of product design at Mattel Future Lab in El Segundo, California. She sees the AI technology as a tool to help designers generate more ideas.
Solving the dilemma of dinner
In March 2023, Instacart began rolling out a new Instacart plugin for ChatGPT. According to Instacart, "the plugin allows ChatGPT users to turn the ever-present 'dinner dilemma' into instant inspiration and, ultimately, instant gratification with ingredients delivered to their door in as fast as an hour so they can get cooking." Over time, the plan is for Instacart to roll out new capabilities, such as the ability to help people shop for recipes or ingredients that are on sale or in season.
How can we safeguard our organisation in this early phase of experimentation and testing?
Whilst GenAI comes with a host of benefits, brands looking to experiment with it need to do so with their eyes wide open. Here are some practical considerations for brands as they embark on leveraging Generative AI:
Define clear guardrails - Companies need to define clear guardrails that guide and govern how employees use Generative AI safely and in line with the organisation's values and ethical standards.
Train employees - As GenAI is quite easy to use, employees may feel overconfident in their ability to complete a task for which they lack the requisite background or skills. It is important that team members understand how Generative AI can assist, the role these tools play, and their limitations.
Be aware of security risks - Leaders should define what data and information can and can't be shared via public chatbots, giving special consideration to the degree to which sensitive information can be shared. Information typed into public generative AI tools may be stored and used to continue training the model; even Microsoft, which has made significant investments in Generative AI, has warned its employees not to share sensitive data with ChatGPT.