Content and Creative

Will AI Kill Content Creation or Improve It?

AI is a double-edged sword—we love its convenience, but can it produce the same quality of content as the professionals?

Did artificial intelligence (AI) officially conquer the content creation profession on November 30, 2022, when ChatGPT was released for public use? 

Judging from the mountain of posts about it since its release, you might think so. A lot of people are trying to uncover all the potential applications (and implications) of the latest creation from OpenAI, a conversational bot called ChatGPT — which CEO Sam Altman said earned its first 1 million users in the span of just 5 days. 

ChatGPT uses AI to answer questions and create content ranging from poetry to blog posts to code, depending on how you choose to use it. ChatGPT is not the first AI application of its kind, but its ability to quickly generate well-articulated, polished content in response to user prompts has shaken content creators to the core.  

But how shook should we really be?  

Here’s what I’ll say: ChatGPT (and tools like it) aren’t here to take our jobs. But they will change the way we do our jobs. This article aims to explore some of the ways we can use ChatGPT and other generative AI tools today, and some very important limitations and considerations to keep in mind now and in the future as these tools continue to evolve. 

Rise of the machines 

ChatGPT is an example of generative AI, a type of AI that uses unsupervised and self-supervised learning algorithms to create new digital images, video, songs, text or code. In short, when AI generates its own content, it’s generative AI. 

More rudimentary forms of generative AI have been around for a while, attracting some passing interest and adoption. But the nature of AI is to train itself to get better — quickly. And as generative AI tools have improved, they’ve seen broader adoption. For example: 

  • Tencent Music’s streaming services now host more than 1,000 songs with AI-generated vocals (and counting). 
  • Ad agencies are using image generation tools such as DALL-E 2 and Stable Diffusion in their work. 
  • In the writing world, content creation tools such as Jasper, Writesonic and Copy.ai have already been packaged and sold for use by copywriters in fields ranging from blogging to advertising. 

The uptake of these tools has gradually ushered in some serious debates, ranging from the ethics of using AI to create deepfakes to the future relevance of term paper assignments as a form of learning and testing in higher education. 

At the same time, there is no question that they’ve also sparked an equally informative and fascinating dialogue about the role of generative AI in making some of the more time-consuming and tedious aspects of content research, creation, editing and sourcing more productive. I particularly appreciated this roundup of ways to explore ChatGPT from AI entrepreneur Allie K. Miller on LinkedIn. 

Experimenting with ChatGPT 

Specifically, and with some big caveats that we’ll explore below, you can start experimenting with ChatGPT right now to: 

  • Summarize. Feed ChatGPT a lengthy article or report and ask it to provide you with a summary of the content. You can prompt it to summarize for different reading levels or at different lengths, and more. 
  • Plan. That plan you’ve been dreading breaking down into achievable pieces? Give ChatGPT the parameters and ask it to map out a plan that accounts for each deadline, deliverable, budget or contingency you can provide. Tweak via further conversation with the bot, or take the output and adjust it yourself. 
  • Get creative. That email, blog post or brainstorming session you’ve been putting off? With just a few details, ChatGPT may help you get unstuck. Give it a prompt like “Give me 20 article ideas based on x keyword or y audience” or “Write an email with the following updates and action items” and use what it gives back as a jumping-off point. (We tested this recently to generate blog post ideas based on a core keyword set, with great results. There was still plenty of keyword and other research we had to do to validate that the ideas were worth writing, but it gave us a robust list of promising ideas to start from in a fraction of the time it would have taken us to generate the same number of ideas.) 
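If you run this kind of brainstorming across many keywords or audiences, it can help to assemble the prompts programmatically. The sketch below is purely illustrative: the function name and prompt template are our own invention, not part of ChatGPT or any OpenAI API, and the resulting string would simply be pasted into (or sent to) ChatGPT.

```python
# A minimal, hypothetical prompt-builder for the ideation workflow
# described above. Nothing here calls an API; it only assembles the
# text of a brainstorming prompt.

def make_idea_prompt(keyword: str, audience: str, count: int = 20) -> str:
    """Build a blog-ideation prompt from a core keyword and a target audience."""
    return (
        f"Give me {count} blog post ideas about '{keyword}' "
        f"written for {audience}. "
        "Return each idea as a one-line headline."
    )

# Example: generate a prompt for one keyword/audience pair.
prompt = make_idea_prompt("headless CMS", "marketing directors")
print(prompt)
```

Looping the same function over a list of keywords gives you a consistent batch of prompts, so the only per-topic effort left is the human validation step described above.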

As noted above, at IDX, we have been experimenting with ChatGPT in our content and creative ideation workflows, albeit with a hefty grain of salt, refinement and critical thinking along the way. We, along with many other professional content creators, are engaging in robust discussions about the magnificent speed, breadth and power we’re experiencing with it, but also the hilarious failures and scary implications. 

Reactions to ChatGPT in the industry 

This dialogue about the yin and yang of generative AI escalated with the proliferation of DALL-E 2 and then exploded in just a matter of days as ChatGPT took hold. The topic was trending all over the internet, as journalists, bloggers, content creators and businesspeople experimented with it. 

The reactions ranged from cautious to astonished to farcical to critical — with more reactions and applications flooding in each day. 

As New York Times technology columnist Kevin Roose wrote, “For most of the past decade, A.I. chatbots have been terrible — impressive only if you cherry-pick the bot’s best responses and throw out the rest… But ChatGPT feels different. Smarter. Weirder. More flexible. It can write jokes (some of which are debatably funny), working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.” 

He and other writers immersed in technology have also noted its potential for writing code and doing smarter searches. Although ChatGPT cannot consult the internet to find answers to deeper questions as a search engine can (nor can it provide the same level of sourcing or attribution), the attraction, delight and latent potential of its user interface have already triggered speculation that ChatGPT could destroy Google. I would argue, though, that ChatGPT’s issues make such an outcome highly unlikely anytime soon, even if Google could learn a thing or two from ChatGPT’s UX. 

Which brings us to risk. 

Business risks of using generative AI 

Generative AI is scary. Exciting. Concerning. And a lot more. Agencies are not the only businesses experimenting with generative AI. It’s safe to say just about every business is wondering when and how it should adopt these tools, looking to maximize profit and savings by finding less expensive ways to produce content faster. 

And OpenAI will sooner or later figure out how to monetize and commercialize tools like ChatGPT as the tool improves, especially given all the user data and feedback it’s collecting by opening the tool up for everyone to use for free. After all, developing a tool like this costs money. As Altman tweeted, “we will have to monetize it somehow at some point; the compute costs are eye-watering.” 

Not only are the compute costs eye-watering — so are the potential problems that businesses face if they rush headlong into adopting generative AI too quickly. For instance: 

  • Plagiarism and copyright. We’re already seeing deep concerns about generative AI scraping other people’s data to learn from and, ultimately, to create images, video, text, music and other forms of content. Generative AI sources and generates content from wide swaths of available data, not necessarily with consent and without the ability to cite sources for its eventual outputs. In fact, Adobe recently announced a new policy that requires releases from identifiable people in created images and prohibits “submissions based on third-party content — including text prompts referring to people, places, property, or an artist’s style — without proper authorization.” It remains to be seen how easy that and similar policies will be to enforce. Simply put, there is too much data to trace back to a single source, and indeed, the intent of generative AI is to create something new. ChatGPT and other generative AI models will need to determine how to source or refine their outputs ethically (and legally). 

  • Accuracy and authenticity. OpenAI acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers,” a phenomenon known as hallucination. In fact, ChatGPT was recently banned (albeit “temporarily”) from Stack Overflow because of a massive influx of seemingly right, but actually wrong, answers it generated, which users were adding to the site faster than the moderators could effectively vet them. So, here’s the inherent problem: When an answer sounds plausible, how do you know it’s wrong? If something is presented as authored by a human, but it’s actually been produced by a machine — how can you know? Multiple researchers (both public and private) are working on these issues with solutions like digital watermarking to identify AI-generated content and moderation tooling to trigger warnings of potentially unsafe content. For its part, OpenAI acknowledges many of ChatGPT’s limitations. But for now, I recommend a “don’t trust, always verify” approach to any outputs from ChatGPT or similar tools. 

  • Bias. Generative AI has an ugly history of creating biased content. The problem is that generative AI is trained on data produced by people and developed by people, and people are inherently biased. For example, a few years ago, Microsoft created a chatbot named Tay that was supposed to get better by learning from people on Twitter. Microsoft shut it down within 24 hours because Tay was influenced by a user community that taught it to be racist and misogynistic. Perhaps more disturbing than this explicit manipulation of AI, however, are the implicit biases we may not even realize exist in a model until we start testing it. For example, GPT-3 (a newer version of which underpins ChatGPT) showed clear signals of sexism when a prompt was designed to expose it, and ChatGPT is already being scrutinized for producing racist responses. While OpenAI and other providers are working on the problem, there is still plenty of work to be done. 

  • Privacy and safety. The rise of deepfakes — realistic recreations of people, including celebrities and public figures, usually in video and images — raises questions about transparency, privacy and safety. To be sure, creators are having a lot of fun constructing deepfakes of famous people to amuse the internet viewing public. Deepfakes of famous people are also appearing in advertisements. Looking beyond the privacy and copyright concerns of celebrities, a more dystopian picture emerges when you consider how anyone could use generative AI to create deepfakes of people they know — ex-partners or crushes, bullies, employers, relatives, friends or enemies. Combine the power of generative AI with its learned bias problems and you get the disturbing results one user found when an AI persistently sexualized her in images but did not do the same to male colleagues. How can we protect against the real damage a fake image could do to the people AI makes both its muses and its subjects? 

The issues above just scratch the surface of the risks that businesses and content creators face by moving too quickly with these tools. Here’s the thing: ChatGPT was launched November 30, 2022. It’s going to make more mistakes. It will continue to evolve, as it is itself an evolution of the versions before it.  

Anyone who puts a tool like ChatGPT or DALL-E into everyday use now incurs untold reputational risks, not to mention legal repercussions, by adopting generative AI too soon or without careful consideration and mitigation of its real problems with bias, accuracy, copyright infringement, privacy and safety. 

A new role for content creators 

Marketers cannot afford to be behind the curve. We are a community of early adopters. And while I’d caution against full-on adoption given all the potential risks, now is certainly the time for ethical, careful and critical experimentation with generative AI. Try the tools. What red flags do they raise? What outcomes are you producing? How might you use this data?  

As for our original question: Will ChatGPT replace writers? Well, I asked ChatGPT that very question. Here was its reply: 

I agree! I touched on this earlier, but generative AI’s best use — for now and likely for a long time to come — is in the early stages of, well, generation. Use it to fill a blank page with a first draft or to come up with a hundred concepts that through sheer volume can lead you to the perfect new idea. Then, refine, validate, edit, distill, improve and personalize. Make it your own.

I’ve always been a fan of remixing and even took a course on plagiarism-as-creation in college that specifically examined new forms of content generation in the digital age. (The course was aptly called Uncreative Writing and its professor, Kenneth Goldsmith, is a poet and critic who was truly ahead of his time.)

I’ve been considering these questions since that class, and here’s where I stand on the subject: We as content creators should view generative AI tools as assistants, not replacements.

As Seth Godin wrote, “If your work isn’t more useful or insightful or urgent than GPT can create in 12 seconds, don’t interrupt people with it. Technology begins by making old work easier, but then it requires that new work be better.” It’s our job to sift the gold from the muck, and it always has been — whether the rocks we’re sifting are ones we uncovered or not. If ChatGPT can fill the sieves for us, removing the tedious work of generating ideas and first drafts, that leaves a lot more time for sifting, sorting and polishing up of the real gold nuggets.

Yes, AI is scary, and its potential applications are even dangerous. But it's also exciting. When used carefully and critically, its power just might make a content creator’s job both easier and more fulfilling.

So, use it wisely — and never stop questioning. Critical thinking is the one thing we humans can do that machines still can’t, and it may save more than our jobs in the end.

Contact IDX

At IDX, we create content and the strategies to make your content sing—from the platforms it's on to the people who engage with it to the business results it drives. We do all this with a pulse on the tech that’s changing the demands on content, like ChatGPT, omnichannel marketing, headless CMS and more. Contact us to learn more about the future of content and how we can help you prepare.