65% of companies are using AI in their marketing and customer engagement today, yet marketers are ‘concerned’ about revealing that fact to their prospects and clients because they’re not sure how they’ll be perceived.

That’s a recipe for disaster.

Marketers - listen up. You’re blowing it already if you’re not forthcoming.

A recent study of more than 300 digital marketers conducted by SOCi (a marketing platform provider) revealed a significant hesitation to disclose the use of generative AI in marketing, because marketers can’t predict how the public perceives AI (and, by extension, their organizations) at this point.

And so hiding the fact that they’re using AI is the answer? Oh, heck no. I’ve been a marketer through three huge tech transformations: the advent of the Internet, social media, and now generative AI (especially since the launch of ChatGPT).

And three basic tenets have become increasingly important over time:

Truth | Trust | Transparency

As marketers (and salespeople), it’s our duty to lead with our organizations’ values. And while the values of Truth | Trust | Transparency may not be explicitly spelled out in our orgs’ handbooks, they are underlying principles that are especially critical where innovative AI crosses paths with customer interactions.

How to lead with Truth?

  • Avoid misleading your customers and prospects. (And your fellow employees!)

    • Fully disclose when you’re using technology to answer people’s questions, comments or reviews.

    • Let your employees know when you’re deploying AI systems to listen to their Slack or Teams-based conversations.

    • Make sure your leadership team and your legal and HR teams fully understand how you’re using AI tools.

  • Fully attribute your work, especially when you use AI for support in drafting public material.

How to engender Trust?

  • Protect the data in your care.

    • That means following not only CAN-SPAM but also GDPR guidelines. (I’m too often surprised by digital marketers willing to play “just a little fast and loose” with email best practices in this day and age.)

    • And watch how state regulations play out with regard to AI, which will likely happen well before the federal government regulates with full statutory authority (as opposed to Executive Orders).

  • Protect customer and employee privacy.

    • As software tools integrate more AI functionality (for summarizing conversations, meeting insights, intentions, highlights, etc.), be careful about what you share, how much, and with whom.

    • Protect privacy in screen shots, reports, and other content that you might send to your manager, or cross-functional teammates. When in doubt, don’t.

  • Educate, then include your employees in the process.

    • These are early days for the widespread use of AI tools that have come into people’s hands since ChatGPT burst onto the scene. So spend some time learning about the possibilities and pitfalls, and SHARE them with your organization. Marketers, you’re the communicators. Step up here.

    • People tend to support what they help to create. Include your teams as you develop or leverage playbooks around the use of the new tools at your disposal.

  • Create feedback loops and adjustment mechanisms so you can learn from mistakes and avoid repeating them.

    • Ensure your HR and legal teams are fully briefed on your tech stack and what it can do in conducting everyday business.

    • Your CIO or CISO should also be fully aware of what you and your teams are using (marketers like me have been guilty of building shadow IT for years. That has to stop.)

    • Your Leadership team should be well aware of exactly how, why and when AI tools are being deployed, and who is responsible for monitoring and managing them.

Just be Transparent

  • About exactly what data is being collected

    • And importantly how it’s being used

    • Where it’s being stored

    • How it’s being shared

  • And the same goes for your Marketing, Sales (and HR and Operational…) AI tools

    • State, very plainly, which tools are being deployed

    • Internally, your leadership team should know who’s managing each tool and who’s ultimately responsible for the proper use and governance of them

    • Plan for how you’ll protect your organization if something happens that exposes you to risk (it’s early days)

Minimize your risks as you maximize your productivity

There are so many excellent reasons to deploy these new and innovative tools. Your productivity can be boosted. Your understanding of your data can be enhanced. You can see patterns you’ve never imagined for better segmentation, product-market fit, and more.

Protect your brand. Protect your reputation. Protect your employees. Protect your intellectual property. Protect your organization from risk. Be Truthful, engender Trust, and simply be Transparent. You’ll be much safer in doing so.

Oh, and by the way… more on how the public actually DOES view AI and technology innovation soon. The Edelman Trust Barometer was just published this week, and we’re digging in.

Resources from AIGG on your AI Journey of Understanding and Literacy

We can help. Check out our Resources section where you’ll find free checklists covering the adoption of AI tools and identifying legal and operational risks, along with drop-in HR Handbook policies for your team to review, augment and approve.

Need training or specific support in building AI Literacy? We’re a little different. We’re not approaching AI from a tech perspective. We’re approaching it from a safe, ethical and responsible use perspective. Because AI technology is here to stay, and can work brilliantly for your organization.

We have attorneys, anthropologists and business leaders to support you as you develop your Strategic AI Use Statements that can guide your organization’s use of the tools available to you. And we have bespoke educational workshops available to you as you explore and build your playbooks, develop your guidelines and guardrails as your adoption (and potential risk management) options grow.

Connect with us for more information, to get your free AI Tools Adoption Checklist, Legal and Operational Issues List, HR Handbook policy, or to schedule a workshop to learn more about how to make AI work safely for you. We are here for you.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
