Creeping in: “Two AI Truths and a Lie” harmful normalization

"Industry will take everything it can in developing Artificial Intelligence (AI) systems.

We will get used to it.

This will be done for our benefit.

Two of these things are true and one of them is a lie. It is critical that lawmakers identify them correctly. … No matter how AI systems develop, if lawmakers do not address the dynamics of dangerous extraction, harmful normalization, and adversarial self-dealing, then AI systems will likely be used to do more harm than good." - Woodrow Hartzog

"Two AI Truths and a Lie": the insidiousness of harmful normalization

In my earlier post about the Woodrow Hartzog piece (freely available and a highly recommended read), I noted that each concept we focused on deserved a deeper dive. Here is the second step of that exploration. If you’d like to read the first post, digging into dangerous extractions, it’s here.

Normalization of invasive technologies (harmful normalization):

  • Society tends to acclimate to new forms of surveillance and data collection, gradually accepting them as normal.

  • This desensitization process, termed "techno-social engineering creep," leads to widespread deployment of AI tools in various aspects of life, from workplaces to schools, often under the guise of optimization.

"After initial protests about new forms of data collection and exploitation, we will become accustomed to these new invasions, or at least develop a begrudging and fatalistic acceptance of them."

Society's Acclimation to New Forms of Surveillance

Let’s face it. We’ve become complacent and compliant as our communications and information channels have turned “always on.” Our tech-centric ‘normal’ is just the way it is. I know I wouldn’t want to go too long without the internet. I get twitchy when meetings go long and I know I have email piling up. And it truly unnerves me to leave my phone at home.

Society tends to acclimate to new forms of surveillance and data collection, gradually accepting them as normal. This phenomenon of desensitization has been well documented in the widespread acceptance of CCTV cameras in public spaces. Initially, the introduction of these cameras sparked concerns about privacy and surveillance.

After 9/11, for example, Oakland, California, used federal funds intended for port security to create a citywide surveillance system, raising questions about the scope and justification for such extensive monitoring. Yet, over time, as the presence of cameras became more commonplace, people grew accustomed to them and accepted them as a part of everyday life.

 "We will get used to it. After initial protests about new forms of data collection and exploitation, we will become accustomed to these new invasions, or at least develop a begrudging and fatalistic acceptance of them. Our current rules have no backstop against total exposure." (Hartzog, 2024, p. 12) 

Techno-Social Engineering Creep

The desensitization process, or "techno-social engineering creep," leads to the widespread deployment of AI tools in various aspects of life, often under the guise of optimization. Techno-social engineering creep describes how incremental changes in technology and its applications can gradually normalize invasive practices. For instance, what begins as simple data collection for service improvement can evolve into comprehensive surveillance systems that monitor behavior and personal data on an unprecedented scale. (Think: Facebook, smart TVs, and soon, real-time Spotify.)

 "Brett Frischmann and Evan Selinger have called this 'techno-social engineering creep,' and once you learn to recognize it, you see it everywhere. IoT doorbells were first designed to provide a simple video feed of the area right in front of the door. Now, they are being outfitted with AI-powered facial recognition and anomaly-recognition technologies and have a range of 1.5 miles." (Hartzog, 2024, p. 14)

AI Tools in Workplaces and Schools

AI tools are increasingly deployed in workplaces and schools under the pretext of optimizing efficiency and enhancing security. In workplaces, AI-powered surveillance can monitor employees' productivity, track their movements, and even analyze their facial expressions to gauge their emotional states. And some systems are being installed to monitor the newer “digital watercoolers” like Slack, Teams, Zoom meetings and more. (In the name of data protection, of course.)

Similarly, in schools, AI technologies are used to monitor students' attentiveness, track their participation, and ensure their safety. While these tools are often promoted as beneficial, they raise significant privacy concerns and contribute to the normalization of surveillance. Not to mention the prevalence of data leakage incidents in schools.

We wrote here about a recent study showing that nearly all applications used in educational settings share children's personal information with third parties, with 78% of those instances involving advertising and monetization entities, often without the knowledge or consent of users or the schools.

"Companies increasingly deploy AI to micromanage as many aspects of our work as the technology will allow, including how long we take for bathroom breaks and whether our attention is completely focused on our task." (Hartzog, 2024, p. 15)

Public and Personal Spaces

The pervasive deployment of AI extends to both public and personal spaces. Public areas, such as streets and parks, are equipped with AI-enhanced surveillance cameras that can recognize faces and track movements over large distances. In personal spaces, smart home devices collect data on inhabitants' behaviors, preferences, and even health metrics. Just take a look at the technologies Amazon has been buying up that give them amazing insights into our homes. (The Roomba deal was called off earlier this year due to regulatory concerns - remember to VOTE!)

These technologies, marketed as conveniences or safety measures, contribute to an environment where continuous surveillance becomes the norm.

“AI surveillance is becoming ubiquitous, extending its reach from public spaces to private domains. In public, AI-powered cameras track individuals' movements and activities, while in private spaces, smart devices continuously monitor personal behaviors and preferences, often under the guise of enhancing convenience or security." (Hartzog, 2024, p. 13)

Once again, these examples underscore the critical need for robust regulatory frameworks to ensure AI systems are developed and deployed in ways that prioritize user privacy and societal benefit over unchecked and truly dangerous extractions.

With gratitude for the thoughtful piece, thank you Woodrow Hartzog: Two AI Truths and a Lie (May 24, 2024), 26 Yale Journal of Law and Technology (forthcoming 2024). We also got additional support on this post from our friend ChatGPT.

Resources from AIGG on your AI Journey

Need training or specific support in building AI Literacy or protecting privacy for your organization? We’re a little different. We’re not approaching AI from a tech perspective, though we have techies on staff. We’re approaching it from a safe, ethical, and responsible use perspective because we’ve been through technology and business transformations before.

Whether you’re a government agency, school, district, or business looking to add AI to your tech toolkit, we can guide the way in a responsible manner. AiGg is here to support you in navigating ethics, governance, and strategy setting.

We have attorneys, anthropologists, data scientists, and business leaders to support you as you develop your Strategic AI Use Statements, which can guide your organization’s use of the tools available to you. We also offer bespoke educational workshops to help you explore and build your playbooks, guidelines, and guardrails as your adoption (and potential risk management) options grow.

Connect with us for more information, to get your free AI Tools Adoption Checklist, Legal and Operational Issues List, HR Handbook policy, or to schedule a workshop to learn more about how to make AI work safely for you. We are here for you.

Reach out for more information and to begin the journey towards making AI work safely and advantageously for your organization.

Let’s invite AI in on our own terms.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
