AI has always been driven by technical expertise and progress. The reason is simple: like most technologies, AI research was shaped by wartime developments. Early work drew on cybernetics and pioneers like Alan Turing (famously portrayed by Benedict Cumberbatch in “The Imitation Game”), focusing on machines that could simulate human intelligence. After World War II, the field was spurred by technological advances and the return of scientists to academia. The 1956 Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, marked the formal birth of AI.
I don’t want to overshadow this great article, but I need to explain why I chose to reflect on this paper. As a founder with over six years of experience in AI and sociology, I’ve been contemplating AI development: how, why, for what purpose, and for whose benefit we pursue it. In business, we often lack the ethical boundaries established by philosophical debate. Investor incentives tend to be our primary concern. If an investor cares about ethics, that’s great. But are they really willing to burn millions to keep a product ethical?
While academia may be different, AI development, especially for AI-powered products, is mostly driven by people with little grounding in the social sciences. Today, product efficiency is prioritized over potential consequences. Engineers are like heavy, fast trains that can destroy everything in their path; that is their job. The focus is on speed and efficiency, often at the expense of the broader impact on society. This lack of interdisciplinary understanding can lead to unintended and potentially harmful outcomes, underscoring the need for a more holistic approach to AI development.
A technologist himself, Agre argues in his article for a transformative approach to AI research, one that incorporates critical reflection and interdisciplinary insights. This shift is essential not only for the advancement of the field but also for addressing its broader social and ethical implications.
The Necessity of Interdisciplinary Engagement
Agre believes that AI development often lacks ethical boundaries. He is largely right; such topics are usually raised only after something goes wrong. There is no pre-planning for these issues because most tech people are not well educated in them. One of Agre’s central points is the importance of integrating perspectives from philosophy, the social sciences, and literary theory into AI research. Once built, AI is no longer just zeros and ones. The products we create affect everyone’s lives: poor, rich, strong, weak, women, men, and everyone in between.
The pace of development is also relentless. New models emerge every day, and no one stops to reflect on the potential harm. It is not an easy subject to address, but it remains a significant problem. Put more formally, Agre points out that prioritizing product efficiency over potential consequences can lead to ethical oversights.
He writes, “AI has never had much of a reflexive critical practice, any more than any other technical field. Criticisms of the field, no matter how sophisticated and scholarly they might be, are certain to be met with the assertion that the author simply fails to understand a basic point.” By bringing in insights from other disciplines, AI researchers can challenge their own assumptions and methodologies, leading to more robust and ethically sound systems.
The Role of Critical Reflection
Agre’s personal journey from AI researcher to social scientist exemplifies the challenges and rewards of adopting a critical perspective. He emphasizes the importance of questioning the foundational assumptions of AI, stating, “A critical technical practice will, at least for the foreseeable future, require a split identity — one foot planted in the craft work of design and the other foot planted in the reflexive work of critique.” This dual approach allows researchers to innovate while remaining mindful of the broader impacts of their work.
Moving Beyond Traditional AI
The traditional AI approach often relies heavily on technical formalization, sometimes at the expense of understanding the complexities of human behavior and social contexts. Agre critiques this, noting, “The field’s most prominent members tended to treat their research as the heir of virtually the whole of intellectual history. I have often heard AI people portray philosophy, for example, as a failed project, and describe the social sciences as intellectually sterile.” By acknowledging and addressing these complexities, AI can evolve to better meet real-world needs.
Establishing a Critical Technical Practice
Agre calls for the establishment of a critical technical practice that balances innovation with reflection. He explains, “Faced with a technical proposal whose substantive claims about human nature seem mistaken, the first step is to figure out what deleterious consequences those mistakes should have in practice.” This approach encourages researchers to rigorously test their assumptions and consider the broader implications of their work.
This is easier said than done. I am not a researcher, and it must be genuinely difficult to weigh the further implications of something that works as well as today’s LLMs. History shows that no one questions something that works, at least for a period (usually a bloody one).
What about modern humans, though? Ethical reflection is nothing new, but we are not all talk and no action. Thanks to modern technology, we can cooperate far better than our ancestors could. We can regulate and shape the AI that we create.
All I do here is make noise by insisting that we consider what kind of monster we are creating. But being on the right side of history matters. A broad movement around AI ethics may be possible in the near future. For now, all we can do is manage our own actions responsibly.
Conclusion
Philip E. Agre’s paper is a compelling call to action for the AI community. By embracing interdisciplinary engagement and critical reflection, AI researchers can create more ethical and effective technologies. Agre’s vision is one where innovation and critique go hand in hand, leading to a more thoughtful and impactful AI field.
In Agre’s words, “The constructive path is much harder to follow, but more rewarding. Its essence is to evaluate a research project not by its correspondence to one’s own substantive beliefs but by the rigor and insight with which it struggles against the patterns of difficulty that are inherent in its design.” By following this path, AI can truly fulfill its potential as a transformative force for good.
For more insights, see our CRO’s blog for the full article: https://agisocieties.com/2024/07/31/transformative-approach-to-ai-research-philip-e-agres-vision/
References:
Agre, Philip E. “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.” In Geof Bowker, Les Gasser, Leigh Star, and Bill Turner, eds., Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work. Erlbaum, 1997.
Agre, Philip E. “The dynamic structure of everyday life.” PhD dissertation, Department of Electrical Engineering and Computer Science, MIT, 1988.