Building AI literacy for the future


We’re all on different paths to AI adoption. As we’ve been on this journey, we’ve noted there are people at every stage.

Understanding the world of AI is complicated, but exciting. Here’s a bit of a primer from our perspective.

What is AI and how is it going to be integrated into the workplace?

Let's dive into the world of AI and its integration into the workplace. The hype and promise of AI are just beginning to be recognized (much less understood) in the larger business context.

First, let's start with understanding what AI is and how it works.

AI refers to the ability of machines to simulate human intelligence, such as learning, reasoning, and self-correction. 

AI was conceived in 1956 at Dartmouth College, at the Dartmouth Summer Research Project on Artificial Intelligence, and has been under serious development for the past 20 years. In fact, most of us have been using AI in our daily lives whenever we use a smartwatch to measure our walking distance, say "hey Siri" to get directions, use a chat function on a website, or accept an autocorrect in a text or document.

But the hype cycle around the use of AI in business really took off in November of 2022, when OpenAI released ChatGPT, a generative Large Language Model (LLM) that produces text, to the public. Within months, ChatGPT had garnered 100 million users, an unprecedented feat of trial and adoption. Microsoft has put $10 billion into OpenAI and ChatGPT, promising to add AI capabilities to its Office Suite and more. (Copilot recently launched to do just that.)

AI Large Language Models in Action

Large Language Models (LLMs), such as Google's Gemini and OpenAI's ChatGPT, use complex algorithms to analyze and understand natural language input. They can generate content within seconds, and much of it is truly amazing.

For example, say you're looking for a summary of a financial book (or a PDF you've just received). You can prompt ChatGPT or Claude from Anthropic (for PDFs) to do so, and within about 20 seconds you'd have a usable summary, saving (potentially) hours of reading.
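For the curious, here is a minimal sketch of that "summarize this document" workflow using OpenAI's Python SDK. The file name and model name are assumptions for illustration (Anthropic's SDK for Claude follows a similar pattern), and the call needs an OPENAI_API_KEY environment variable.

```python
# Minimal sketch: send a document's text to an LLM and ask for a summary.
# Assumptions: "financial_book_excerpt.txt" is a hypothetical local file and
# "gpt-4o" is a placeholder model name; substitute whatever your account offers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
document_text = open("financial_book_excerpt.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user",
         "content": f"Summarize the key points of this text in a few bullet points:\n\n{document_text}"},
    ],
)

print(response.choices[0].message.content)  # the generated summary
```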

AI Image Models Mature Quickly

AI image models, such as OpenAI's DALL-E and MidJourney, generate images based on text input. For example, in the photo that follows, the people pictured do not exist in real life. The results can be beautiful in the right hands.

Our very own Dru Martin, Moto Interactive, developed this image using DALL-E:

Developed by Dru Martin with AI augmentation from DALL-E

Dru used the following prompt:

PROMPT: “racoon, rabbit, squirrel, mouse, bluejay, crow moving through the forest along a winding trail, rhododendrons, mount hood, layers, in the style of bold graphic woodcut illustrations, 32k uhd, golden age illustrations, black and bronze, freestyle paintings, panoramic 6:1”

And photorealistic images are also amazing. The following scene was imagined by AI user Nicole Leffer, who prompted MidJourney v5 with this context: 

PROMPT: very happy Balinese girl looking straight at the camera, wearing bright colors and holding a soccer ball, standing in the middle of a street in a small local village, eyes are wide and shining with joy, shot on Nikon D6 f8.0 

Imagine, for a moment, how difficult it will be to tell what's real and what's not in pictures and video anymore, and how easy it will be to spoof images - of children, of politicians, of organizational leaders in any context.

In fact, AI-generated fake biometric images are already so good (as of early 2024) that, according to researchers at Gartner, within two years many firms won't accept facial recognition alone for identity verification and authentication.

It doesn't take much time to sense the good, the bad, and even the illegal that can result from innocent use of AI tools widely available to anyone willing to play or pay today.

How Large Language Models (LLMs) and Image Models are Trained

AI image models and LLMs are types of machine learning (ML) models that learn to perform tasks by analyzing data. Training these models requires very large datasets. One issue with the way these models are developed is that only 12% of all machine learning research professionals are women.

For AI image models, the training dataset typically consists of millions of labeled images. The goal of training is to teach the model to correctly recognize the objects or scenes in these images. During training, the model is presented with an image and must output the correct label for that image. The model's performance is evaluated by comparing its output to the true label of the image.
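As a rough illustration only (not any production system), the toy PyTorch sketch below walks through one step of that loop: show the model a batch of images, take its predicted labels, compare them to the true labels, and nudge the model's parameters to reduce the error. The images and labels here are random stand-ins.

```python
# Toy sketch of supervised image training: predict a label, compare it to the
# true label, and adjust parameters via backpropagation. All data is fake.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))  # tiny stand-in "image model"
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 32, 32)         # a batch of 8 fake 32x32 RGB images
true_labels = torch.randint(0, 10, (8,))   # the "correct" label (0-9) for each image

predictions = model(images)                # the model outputs a score for every label
loss = loss_fn(predictions, true_labels)   # how far off was it from the true labels?
loss.backward()                            # backpropagation: compute how to adjust parameters
optimizer.step()                           # update the model's internal parameters
```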

For LLMs, the dataset consists of large amounts of text data, such as books and the Common Crawl dataset, which contains billions of web pages, along with Wikipedia (only 9% written by women). The goal of training is to teach the model to understand and use language more effectively.

During training, the model is presented with a sequence of words and must predict the next word in the sequence. The model's performance is evaluated by comparing its output to the true next word in the sequence. The model adjusts its internal parameters using backpropagation, similar to the process used for AI image models.
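The LLM version of the loop looks much the same, just with words instead of images. The toy sketch below (again PyTorch, with a made-up five-word vocabulary) predicts the next word at each position, compares it to the true next word, and backpropagates; real LLMs differ mainly in scale.

```python
# Toy sketch of next-word prediction. The vocabulary and "training text" are made up;
# real LLMs use billions of parameters and web-scale text, but the loop is the same idea.
import torch
import torch.nn as nn

vocab = ["the", "model", "predicts", "next", "word"]
token_ids = torch.tensor([[0, 1, 2, 3]])    # input:  "the model predicts next"
target_ids = torch.tensor([[1, 2, 3, 4]])   # target: the word that actually comes next

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

logits = model(token_ids)                                          # a score for every vocab word at each position
loss = loss_fn(logits.view(-1, len(vocab)), target_ids.view(-1))   # compare to the true next words
loss.backward()                                                    # backpropagation
optimizer.step()                                                   # adjust internal parameters
```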

How a Neural Network translates English to French - source: Atmosera, Understanding ChatGPT

While the training processes for AI image models and LLMs are different, the core principle is the same: the model learns by adjusting its internal parameters based on feedback. In both cases, the goal is to maximize the model's accuracy on a validation set. Once trained, these models can be used to perform a wide range of tasks, such as image recognition or language translation.

Search engines such as Google and Bing implement measures to ensure the credibility of content displayed on their platforms. However, it is possible to manipulate these measures to skew results in favor of unreliable sources.

Additionally, search engine results don’t always reflect the entirety of available online (or offline) content. Google's algorithm favors websites that employ contemporary web features such as encryption, mobile optimization, and schema markup. Consequently, high-quality websites with valuable content may be overlooked in search results.

On March 23, 2023, ChatGPT was connected to the Internet. (At its launch, ChatGPT's knowledge was limited to dates, events and people prior to around September 2021.) There are plug-ins for external sites, including Expedia, Kayak, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier, among others.

Plugins could also increase safety challenges by taking harmful or unintended actions, so OpenAI implemented several safeguards. For example, publishers can add information to their robots.txt file to prevent ChatGPT from accessing their websites.
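As a concrete example of that safeguard: OpenAI documents crawler user agents such as "GPTBot" (and "ChatGPT-User" for browsing) that publishers can disallow. The sketch below uses a hypothetical site URL and shows both the robots.txt lines a publisher might add and a quick Python check of whether the site permits OpenAI's crawler.

```python
# A publisher who wants to opt out could add lines like these to robots.txt:
#
#   User-agent: GPTBot
#   Disallow: /
#
# Quick check (hypothetical site URL) of whether a site's robots.txt allows GPTBot:
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

print(rp.can_fetch("GPTBot", "https://example.com/"))  # False if the site blocks OpenAI's crawler
```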

The New York Times has sued Microsoft and OpenAI for copyright infringement over how ChatGPT was trained - using thousands of NYT articles. The whole notion of "Fair Use" is currently under legal review.

Known Issues with Large Language Models: Garbage In, Garbage Out

As large language models such as Claude, ChatGPT, Gemini and more continue to gain popularity and become more widely adopted in various industries, there are several known issues that businesses should be aware of. Some of the major concerns are:

Bias:

Language models can inadvertently perpetuate and amplify biases present in the training data they were trained on, which can have real-world consequences. Businesses must be vigilant about detecting and addressing biases in their models, especially when they are being used to make decisions that can affect people's lives.

Bias in images is especially evident, has been widely circulated in the press, and is encapsulated in the linked article in the MIT Technology Review. In an early example, researchers sampling images from AI found that DALL-E 2 generated images of white men 97% of the time when prompted for images of "CEO" or "director." Bias is becoming a real problem as AI models are widely adopted and produce more realistic images for business use.

From MIT Technology Review: "Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, an assistant professor at the University of Washington who studies bias in AI systems and was not involved in this research."

Interpretability:

It can be difficult to understand how a language model arrived at a particular prediction or recommendation. This can be problematic for organizations when trying to explain the reasoning behind a decision made by the model to their customers or stakeholders.

Data privacy and security:

Large language models require vast amounts of data to be trained, which can include sensitive or personal information. Organizations must take precautions to ensure that this data is kept secure and private, especially if they are working with sensitive information.

OpenAI, the organization behind ChatGPT, disclosed it had to take ChatGPT offline on March 20, 2023 to fix a bug that allowed some users to see the subject lines from other users’ chat history. That bug, which has been fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI noted in a blog post.

And just last week, Google updated its support pages to reveal that it retains Gemini chatbot conversations for up to three years, and that human annotators routinely read, label and process those conversations. In other words:

Don’t type anything into Gemini, Google’s family of GenAI apps, that’s incriminating — or that you wouldn’t want someone else to see.

Generalization:

Large language models are trained on a vast amount of data, which makes them very good at predicting what comes next in a sequence of text (as seen above). However, they may not always understand the context or underlying meaning of what they are processing. This can result in errors or inaccurate results in some cases.

In a recent paper published in the Journal of Free Speech Law, "Large Libel Models? Liability for AI Output," Eugene Volokh noted:

“a libel lawsuit against OpenAI has already been filed, based on a claim that ChatGPT falsely summarized a complaint in a different case as alleging embezzlement by a particular person; that complaint actually had nothing to do with that person, or with embezzlement. Likewise, a libel lawsuit against Bing has been filed, based on a claim that Bing (which uses GPT-4 technology) responded to a query about “Jeffery Battle” with the following output: 

This output apparently mixes information about the technology expert Jeffery Battle with information about the convicted terrorist Jeffrey Battle”

Even small differences can confuse these LLMs into big mistakes. So be wary. Never wholeheartedly believe what these tools tell you.

Dependence:

Organizations and their employees who rely on large language models for their operations may become dependent on them. This can pose a risk if the model is suddenly unavailable or if there is a significant change in its performance or accuracy (as we've seen in the short time they've been widely on the market), or if the power goes out for a significant period of time!

These systems are still being developed and tuned by their engineers and fact checkers. It’s early days, and the ecosystem is being developed before our eyes.

Maintenance:

Large language models require ongoing maintenance and updates to stay current and accurate. Businesses that use these models need to be prepared to invest in that upkeep to ensure their models keep performing optimally.

Overall, organizations should be aware of these potential issues and take steps to mitigate them where possible.

With proper use and management, large language models can be a powerful tool for organizations to leverage for a range of applications.

Resources from AIGG on your AI Journey of Understanding and Literacy

We can help. Check out our Resources section where you’ll find free checklists covering the adoption of AI tools and identifying legal and operational risks, along with drop-in HR Handbook policies for your team to review, augment and approve.

Need training or specific support in building AI Literacy? We’re a little different. We’re not approaching AI from a tech perspective. We’re approaching it from a safe, ethical and responsible use perspective. Because AI technology is here to stay, and can work brilliantly for your organization.

We have attorneys, anthropologists and business leaders to support you as you develop the Strategic AI Use Statements that can guide your organization's use of the tools available to you. And we have bespoke educational workshops available as you explore and build your playbooks and develop your guidelines and guardrails, while your adoption (and potential risk management) options grow.

Connect with us for more information, to get your free AI Tools Adoption Checklist, Legal and Operational Issues List, HR Handbook policy, or to schedule a workshop to learn more about how to make AI work safely for you. We are here for you.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
