Misused and Abused: “Two AI Truths and a Lie” adversarial self-dealing
"Two AI Truths and a Lie" adversarial, self-dealing abuse
In my earlier post about the Woodrow Hartzog piece (freely available and a highly recommended read), I noted that each concept we focused on deserved a deeper dive. Here is the third step of that exploration. If you’d like to read the first post digging into dangerous extractions, it’s here. And we crept insidiously into harmful normalization here.
Misleading narratives of benefits (adversarial self-dealing):
Companies often claim AI technologies are deployed for public benefit, but these claims are mostly pretexts for further market expansion and profit maximization.
The benefits of AI, such as enhanced safety or productivity, often mask deeper motives of data extraction and control.
"Organizations will say the deployment of facial and emotion recognition in schools is motivated by the desire to keep students focused and edified. Employers will say that the deployment of neurotechnology in the workplace is to keep employees safe and engaged."Society's Acclimation to New Forms of Surveillance
Profit-Driven Motivations:
Tech companies - especially those leading the generative AI transformation - have repeatedly come under fire for prioritizing profit over ethical considerations, leading to decisions that serve their interests but may harm the public.
For instance, OpenAI reportedly used its "Whisper" speech-to-text model to transcribe more than a million hours of YouTube videos as training data for its models, arguably in violation of YouTube’s terms of service. More broadly, companies collect and exploit vast amounts of user data to enhance targeted advertising, manipulate consumer behavior, and reinforce their market dominance.
This self-serving behavior often contradicts a company’s own policies and data governance rules; it certainly undermines trust and can lead to significant societal harm.
"Companies are going to seek to profit from AI and will take advantage of narratives to block rules that interfere with their business models." (Hartzog, 2024, p. 9)
Manipulation and Exploitation:
Many AI systems are designed to manipulate user behavior to maximize engagement and revenue. This can be done through persuasive technologies that encourage addictive behaviors - like purposefully sending people down rabbit holes full of misinformation, engineered to extend time-on-site (and ad revenue) - or through subtler methods like targeted advertising based on extensive data mining.
In his excellent book The Chaos Machine, Max Fisher explored the effects of social media algorithms in rewiring our brains. From a NY Times review:
The enjoyment of moral outrage is one of the key sentiments Fisher sees being exploited by algorithms devised by Google (for YouTube) and Meta (for Facebook, Instagram and WhatsApp), which discovered they could monetize this impulse by having their algorithms promote hyperpartisanship. Divisiveness drives engagement, which in turn drives advertising revenues.
Companies exploit user data without transparent disclosure or proper consent, prioritizing financial gain over user rights and privacy. And in many cases, the companies behind social media are the very same companies behind generative AI.
"Companies use generative AI, biometric surveillance, predictive analytics, and automated decision-making for power and profit. The benefits of AI systems are often pretexts for market expansion into the increasingly few spaces in our lives that are not captured, turned into data, and exploited for profit." (Hartzog, 2024, p. 8)
Resistance to Regulation:
It’s also very well known that these same tech organizations vehemently resist regulatory efforts that could limit their power or profitability. They lobby against new laws, exploit legal shields (like Section 230 of the Communications Decency Act), and engage in regulatory capture (where a regulator is co-opted to serve the interests of a small group over the general public's interests) to maintain their profitable positions.
This resistance hampers the implementation of safeguards needed to protect public interests and ensure fair market practices. It also slows the adoption of regulation at the federal level. (Reminder, please VOTE!) In the meantime, organizations of all kinds must track local, state, and even county AI regulations in the US right now.
"The governments that want powerful AI tools won’t stand in the way." (Hartzog, 2024, p. 9)
Just this week, a new United Nations report showed that China dominates the global race in generative AI patents, filing more than 38,000 from 2014 to 2023. That’s six times more than were filed by U.S.-based inventors, according to the UN World Intellectual Property Organization.
Reinforcement of Existing Inequities:
As we’ve seen with social media and online advertising, AI systems designed and deployed with profit as the primary motive can exacerbate existing social and economic inequalities.
Because of how they’ve been trained, these LLM systems often reflect and reinforce the biases present in their data, leading to discriminatory outcomes. Additionally, the lack of diversity in AI development teams can perpetuate these issues, as the perspectives and experiences of entire hemispheres and marginalized communities are too often overlooked.
"Facial and emotion recognition technologies are deployed in schools and workplaces under the pretense of enhancing productivity and safety, but these tools often reinforce existing inequities and biases." (Hartzog, 2024, p. 12)
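To see mechanically how a system can "reflect and reinforce the biases present in its data," here is a deliberately tiny, hypothetical sketch in Python. The data is fabricated for illustration; no real hiring system or dataset is represented.

```python
from collections import defaultdict

# Fabricated "historical" hiring outcomes: group A was hired 80% of the
# time, group B only 20% -- a bias baked into the record itself.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def train(data):
    # A minimal "model": the observed hire rate per group. Real models are
    # far more complex, but they fit patterns in the data the same way.
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in data:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)

# Scoring new, equally qualified candidates reproduces the historical skew:
for group in ("A", "B"):
    print(f"Candidate from group {group}: predicted hire score {model[group]:.2f}")
# Candidate from group A: predicted hire score 0.80
# Candidate from group B: predicted hire score 0.20
```

Fitting the data faithfully is exactly what reproduces the inequity; and if the model’s decisions feed back into future hiring records, the skew compounds. That feedback loop is the reinforcement Hartzog’s critique points to.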
By addressing these dynamics of adversarial self-dealing, lawmakers can develop more effective regulations to ensure AI systems serve the public good rather than just the interests of powerful entities.
Once again, these examples underscore the critical need for robust regulatory frameworks that prioritize user privacy and societal benefit over unchecked adversarial self-dealing by tech companies.
With gratitude for the thoughtful piece, thank you, Woodrow Hartzog: Two AI Truths and a Lie (May 24, 2024), 26 Yale Journal of Law and Technology (forthcoming 2024). And we got additional support on this post from our friend ChatGPT.
Resources from AIGG on your AI Journey
Need training or specific support in building AI Literacy or protecting privacy for your organization? We’re a little different. We’re not approaching AI from a tech perspective, though we have techies on staff. We’re approaching it from a safe, ethical, and responsible use perspective because we’ve been through technology and business transformations before.
Whether you’re a government agency, school, district, or business looking to add AI to your tech toolkit, we can guide the way in a responsible manner. AIGG is here to support you in navigating ethics, governance, and strategy setting.
We have attorneys, anthropologists, data scientists, and business leaders to support you as you develop your Strategic AI Use Statements, which can guide your organization’s use of the tools available to you. We also offer bespoke educational workshops to help you explore and build your playbooks, guidelines, and guardrails as your adoption (and potential risk management) options grow.
Connect with us for more information, to get your free AI Tools Adoption Checklist, Legal and Operational Issues List, or HR Handbook policy, or to schedule a workshop to learn more about how to make AI work safely for you. We are here for you.
Reach out for more information and to begin the journey towards making AI work safely and advantageously for your organization.