
[Editor’s Note: Even experienced PR professionals periodically need a refresher on the basics, as well as insight into newer concepts. Whether it’s how to become a better writer or a review of PR ethics, we aim to provide content on a variety of topics and issues. Hence, our Explainer series.
Previous posts looked at the Barcelona Principles, the Metaverse, employee resource groups (ERGs), NIL, social conversation platforms, bounce rate, off the record and sonic branding, among others. Today we review the basics of large language models (LLMs), sometimes called language learning models, and how they influence artificial intelligence.
If there are topics you’d like to see discussed in this series, please let us know.]
What Is a Large Language Model (LLM)?
To be a responsible AI user today, it’s important to know the basics of what AI is made of. If you use tools like ChatGPT, Claude, Google Gemini or Microsoft Copilot, you’ve already encountered a large language model, sometimes called a language learning model, or LLM.
Simply put, an LLM is a machine learning model that can understand, interpret and generate human language. LLMs do this by ingesting enormous amounts of data (articles, research, chats, pretty much anything ever published or uploaded to the model) and analyzing that data to respond to user prompts.
According to IBM, these generative capabilities can include:
- Text generation
- Content summarization
- Sentiment analysis
- Language translation
- and more…
Why It Matters to Communicators
Generative AI can do amazing things for communicators’ productivity and efficiency. However, it is not without flaws.
The biggest problems with LLMs and generative AI are factual accuracy and bias. A recent UNESCO study found that LLMs tend to “produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men…and were frequently associated with words like ‘home’, ‘family’ and ‘children’, while male names were linked to ‘business’, ‘executive’, ‘salary’, and ‘career’.” These biases stem from the training data itself: millions of pieces of human-created content that can skew one way or another.
So what does this mean for those using these tools? Always fact-check. Always edit, reread and check for bias. Have others on staff review the generated results to ensure the information is inclusive and correct. It could be your brand or organization that comes out on the wrong end of an AI-generated social post that didn’t get a second look.
More resources on AI, prompts and large language models:
Nicole Schuman is Managing Editor at PRNEWS.