Industry will take everything it can in developing Artificial Intelligence (AI) systems.

We will get used to it.

This will be done for our benefit.

“Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly… No matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good.” — Woodrow Hartzog

That abstract, from Woodrow Hartzog’s (Boston University School of Law; Stanford Law School Center for Internet and Society) recent essay, "Two AI Truths and a Lie," raises critical concerns that we share.

In the piece (freely available and a highly recommended read) he thoughtfully presents a critical examination of the development and deployment of Artificial Intelligence (AI) systems, emphasizing the need for lawmakers to address the underlying dynamics of AI to prevent its misuse. His bottom line:

Lawmakers must change their usual approach to regulating technology

In the article, Hartzog identifies three key dynamics—dangerous extraction, harmful normalization, and adversarial self-dealing—and proposes a regulatory approach centered on duties, design rules, defaults, and data dead ends to mitigate potential harms.  

He speaks to the rapid development and deployment of Artificial Intelligence (AI) systems, emphasizing the urgent need for strong governance and regulatory frameworks. He argues that current procedural approaches, which focus largely on tech-centric “transparency” and user consent, are inadequate to address these challenges. (We heartily agree!)

JLJ: But what do bullets have to do with AI Truths and a Lie? My brain made the connection right away… please do read on.

Four concerns to explore - (Each worth an entire post! Watch this space for more.) 

Industry exploitation and data collection (dangerous extraction):

  • AI systems are voracious for personal data, which is essential for training models.

  • Companies exploit the narrative that human information is a raw resource for economic production, creating a "biopolitical public domain" where personal data is free for exploitation.

  • Examples include targeted ads based on comprehensive data mining, showing the extensive reach of data collection even without direct eavesdropping.

"Companies cannot create AI without data, and the race to collect information about literally every aspect of our lives is more intense than ever."

Normalization of invasive technologies (harmful normalization):

  • Society tends to acclimate to new forms of surveillance and data collection, gradually accepting them as normal.

  • This desensitization process, termed "techno-social engineering creep," leads to widespread deployment of AI tools in various aspects of life, from workplaces to schools, often under the guise of optimization.

    • JLJ: In the case of the bullet vending machines, (sorry) “smart retail automated ammo dispensers,” the benefit is that one wouldn’t have to wait in line and could conveniently buy bullets at any time of day or night.

    • “According to American Rounds, the main objective is convenience. Its machines are accessible ‘24/7,’ its website reads, ‘ensuring that you can buy ammunition on your own schedule, free from the constraints of store hours and long lines.’”

"After initial protests about new forms of data collection and exploitation, we will become accustomed to these new invasions, or at least develop a begrudging and fatalistic acceptance of them."

Misleading narratives of benefit (adversarial self-dealing):

  • Companies often claim AI technologies are deployed for public benefit, but these claims are mostly pretexts for further market expansion and profit maximization.

  • The benefits of AI, such as enhanced safety or productivity, often mask deeper motives of data extraction and control.

"Organizations will say the deployment of facial and emotion recognition in schools is motivated by the desire to keep students focused and edified. Employers will say that the deployment of neurotechnology in the workplace is to keep employees safe and engaged."

Regulatory recommendations – Hartzog’s “Four D’s”:

  • Duties: Imposing obligations on AI developers to prioritize public welfare over profit.

  • Design: Ensuring AI systems are designed with inherent safeguards to prevent misuse.

  • Defaults: Setting default settings in AI systems that protect users’ rights and privacy.

  • Data Dead Ends: Creating mechanisms to limit data retention and prevent the continuous exploitation of personal information.

"A better approach involves duties, design rules, defaults, and data dead ends. This layered approach will more squarely address dangerous extraction, harmful normalization, and adversarial self-dealing to better ensure that deployments of AI advance the public good."

Let’s get moving to that better approach - VOTE

I’ve been advocating the simple act of VOTING to ensure that AI principles are shaped properly. It may seem counterintuitive in today’s legislative environment, but by electing officials who are aware of, and responsibly planning for, a future infused with AI technologies, we can put in place the lawmakers who can better ensure that AI contributes positively to society rather than exacerbating existing problems.

Only then can we better navigate the complex landscape of AI development and deployment, ensuring it serves the public interest rather than merely corporate or governmental power. 

With gratitude for the thoughtful piece, thank you Woodrow Hartzog. Two AI Truths and a Lie (May 24, 2024). 26 Yale Journal of Law and Technology (forthcoming 2024). And we got additional support on this post from our friend ChatGPT.

Resources from AIGG on your AI Journey

Need training or specific support in building AI Literacy or AI Regulations for your organization? We’re a little different. We’re not approaching AI from a tech perspective, though we have techies on staff. We’re approaching it from a safe, ethical, and responsible use perspective because we’ve been through technology and business transformations before.

Whether you’re a government agency, school, district, or business looking to add AI to your tech toolkit, we can guide the way in a responsible manner. AIGG is here to support you in navigating ethics, governance, and strategy setting.

We have attorneys, anthropologists, data scientists, and business leaders to support you as you develop your Strategic AI Use Statements, which can guide your organization’s use of the tools available to you. We also offer bespoke educational workshops to help you explore and build your playbooks, guidelines, and guardrails as your adoption (and potential risk management) options grow.

Connect with us for more information, to get your free AI Tools Adoption Checklist, Legal and Operational Issues List, HR Handbook policy, or to schedule a workshop to learn more about how to make AI work safely for you. We are here for you.

Reach out for more information and to begin the journey towards making AI work safely and advantageously for your organization.

Let’s invite AI in on our own terms.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
