The Ethical Use of AI: Insights from Leading Experts


The Society for Information Management (SIM) Portland chapter held an important discussion around the ethics of AI, and I was fortunate to attend.

I was invited to the presentation on AI Ethics by a member of the SIMPDX group. Legal expert Martin Mederios of Mederios Law Group and Andrea Bollinger, Vice Provost and CIO at Oregon State University, shared their insights on the ethics of AI with a group of 55 CIOs and CISOs from around the Portland, OR area (and me). Here are some observations.

About our Brains - “We like to monkey with the truth.”

Martin Mederios, Esq. launched the meeting with a discussion of how humans use our brains, and how we naturally relate to data. When we see data out of context, our limbic systems kick in and apply an emotional filter to the information: they retain what seems important, find patterns, link related memories, drive motivated behaviors, and do their best to regulate emotional arousal.

And in today’s world, we need to move the context of data into our prefrontal cortex, which is the slower, more logical part of our brains. While the limbic system reacts instinctively with emotion, motivation, and memory formation, the prefrontal cortex handles data in a more calculated, analytical, and goal-oriented way to support complex cognition and executive function. It essentially overrides more primitive limbic responses in favor of higher-order thinking and behavior control when needed. In other words, he said, “We like to monkey with the truth.”

(In an aside, he noted that psychopaths have very little activity in their prefrontal cortex.) A raucous start, in my opinion. I was captivated - hello limbic system!

Four Areas of Transformation

He then went on to describe what he thought were the four areas of transformation brought on by the use of AI:

1) Transactional: This is the area where generative AI can create content that could infringe on copyrights without the creators’ consent, as a result of the way these LLMs were trained.

2) Intellectual Property: In an amazing cautionary tale, Mederios noted that TikTok will take all your data plus the data from your visitors, create “personas” of you and your audience, and sell these personas to your “competitors.” TikTok (see their terms of service) owns all the IP rights to your voice and face on the site. Period. Non-negotiable. (Facebook’s terms of service are much the same, but they’re more willing to negotiate.)

3) Privacy: AI can now easily re-identify "anonymous" data (even “platinum” level anonymized data) by cross-referencing information from multiple sources; a minimal sketch of this kind of linkage attack appears after this list. Stronger privacy laws like GDPR are essential as AI capabilities advance. And there was quite the foofaraw in August when the news broke that Zoom’s terms of service granted a global, perpetual license to use customer data for things like “product and service development,” machine learning, and artificial intelligence.

4) Disputes: He ended on a note of promise: AI can be a support in many legal disputes. Humans, it seems, are impulsive when it comes to splitting up our shared interests, whether through divorce, dissolution, or departing an organization. (See Brain, above.) In fact, he said, people who go through a divorce are likely to lose 50% of their shared assets in the process (those pesky legal fees, etc.). Generative AI can dispassionately find ways to split assets, because it can see data without limbic-type emotional connections; a toy split example follows the privacy sketch below.
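To make the privacy point in (3) concrete, here is a minimal, hypothetical sketch of a linkage attack: joining an “anonymized” dataset to a public, named one on shared quasi-identifiers. Every name, field, and value below is invented for illustration; real re-identification work uses far richer auxiliary data.

```python
# Hypothetical linkage attack: an "anonymized" release (names stripped)
# is joined to a public, named dataset on shared quasi-identifiers
# (ZIP code, birth year, sex). All data here is invented.

anonymized_records = [  # names removed before "release"
    {"zip": "97201", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"zip": "97230", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

public_records = [  # e.g., a voter-roll-style dataset where names are public
    {"name": "Alice Smith", "zip": "97201", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Jones", "zip": "97230", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Link 'anonymous' rows back to names via shared quasi-identifiers."""
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        matches = [p["name"] for p in public_rows
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match is a re-identification
            print(f"{matches[0]} -> {anon['diagnosis']}")

reidentify(anonymized_records, public_records)
```

The well-known real-world version of this is Latanya Sweeney’s finding that ZIP code, full birth date, and sex alone are enough to uniquely identify the large majority of Americans.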
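And as a toy illustration of the dispute point in (4): the sketch below is deliberately not generative AI, just a brute-force partition over hypothetical asset values that minimizes the dollar gap between two parties. It stands in for the kind of emotion-free optimization Mederios described.

```python
# Toy "dispassionate split": brute-force the two-way division of shared
# assets that minimizes the value gap between the parties. All asset
# names and values are hypothetical.

from itertools import combinations

assets = {"house_equity": 180_000, "car": 22_000, "savings": 45_000,
          "retirement": 130_000, "boat": 18_000}

def fairest_split(items):
    """Return (gap, side_a) for the split with the smallest value gap."""
    names, total = list(items), sum(items.values())
    best_gap, best_side = float("inf"), set()
    for r in range(len(names) + 1):
        for side_a in combinations(names, r):
            value_a = sum(items[n] for n in side_a)
            gap = abs(total - 2 * value_a)  # |value_a - value_b|
            if gap < best_gap:
                best_gap, best_side = gap, set(side_a)
    return best_gap, best_side

gap, side_a = fairest_split(assets)
print(f"Party A gets {sorted(side_a)}; remaining gap: ${gap:,}")
```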

While AI can streamline legal work like research and drafting, current regulations around copyrights and IP haven't kept pace. Updated IP policies and protections are critically needed, now.

Privacy regulations vary state by state. Right now, many are looking to California to develop very EU-like regulations around AI. Where I live in Oregon, there are currently no state data privacy regulations.

Mederios projects that federal protections are “20 years out.”

“The Horse has Left the Barn”

Andrea Bollinger then spoke passionately about the promise of AI, especially in educating students. Given AI’s ability to look dispassionately at data, she believes that AI can support students’ responsible, equitable, and personalized education as it ingests data around grades, educational intent, and historical performance.

She noted that now, more than ever, universities are preparing students for professions that don’t yet exist. “The horse has left the barn,” she said, as she recommended all attendees encourage their employees to become familiar with AI tools by educating themselves about them, and by using them, often.

Don’t Shy Away and Do No Harm

Bollinger is a big AI advocate. And yet, her concerns are around transparency and the ability to explain, in very simple terms, how we use data. She encouraged attendees to share information liberally with employees and others so they understand the use of (and how to use) AI. And by all means, to reinforce that use - especially from a “do no harm” perspective - beyond simple bylaws and handbooks.

McKinsey has written often about AI and has published an excellent framework of Responsible AI Principles that will support any organization as it explores the use of AI tools.

AI Requires Ongoing Monitoring & Transparency 

Generative AI can unintentionally perpetuate biases if not properly monitored. Bollinger emphasized that people using AI systems should screen content continuously to look for ongoing examples of harmful bias or stereotypes.

(In fact, just this week there was a discussion on LinkedIn about prompting language, where the notion to “write like a woman” came very close to reinforcing and perpetuating bias. The link no longer works, or I’d share it here. h/t to Julia Alena for sharing at the time.)
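As a minimal sketch of what that continuous screening could look like in practice: the pattern list and the generate() stub below are hypothetical placeholders, not any real product’s API; a keyword filter alone is far too crude for production, where real monitoring pairs trained classifiers with human review.

```python
# Hypothetical continuous screening loop for generated content.
# FLAG_PATTERNS and generate() are illustrative stand-ins only.

import re

FLAG_PATTERNS = [
    r"\bwrite like a (woman|man)\b",        # gendered style framing
    r"\b(women|men) are (bad|good) at\b",   # group-ability stereotypes
]

def screen(text: str) -> list[str]:
    """Return any flagged patterns, for routing to human review."""
    return [p for p in FLAG_PATTERNS if re.search(p, text, re.IGNORECASE)]

def generate(prompt: str) -> str:
    """Stand-in for a real model call; swap in your provider's API."""
    return f"(model output for: {prompt})"

for prompt in ["Summarize Q3 results", "Write like a woman about sales"]:
    hits = screen(prompt + " " + generate(prompt))
    if hits:
        print(f"REVIEW NEEDED: {prompt!r} matched {hits}")
```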

It's also crucial that AI be transparent and its results explainable in plain language. Bollinger suggested looking at the City of Tempe, AZ and its “no harm-based” Ethical AI Policy statement as an excellent example of a policy that calls out ethics in very simple language. Unafraid to use the word “ethics” in a policy statement, the city uses this type of language:

The City of Tempe is committed to designing, developing, and deploying AI technologies in a responsible and ethical manner. We recognize that AI has the potential to significantly impact society and drive innovation, and we believe that it is our duty to ensure that its development, adoption, and use align with the principles of fairness, transparency, accountability, and respect for human rights.

Guiding AI Use with Clear Policies

For businesses deploying AI, clear guidelines are essential - even simple bylaws requiring ethical AI use beyond what regulations mandate. As Bollinger noted, top-down AI rules often conflict with employee comfort levels. Companies should engage workers to understand how AI is being used today, and to support leaders as they develop transparent, do-no-harm policies, together.

Vigilant governance is crucial to ensure ethical, transparent AI use that protects privacy and prevents bias and other harms. With collaborative policies and oversight, AI can be harnessed safely and responsibly today. This tech transformation is under way. Leverage it intelligently. As Bollinger said,

“Human brains still win.”

AIGG Can Help

Find drop-in HR policies for your organization’s review and use in our Resources section. They were written by our team; we recommend you have your attorneys review them, and have your HR resources ensure they support your culture and your brand. Your people will thank you for it.

Critical thinking, education, guidelines, guardrails, strategic AI statements, and supportive frameworks for AI adoption. Connect with the AI Governance Group. We’re here for you.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
