W.3
Uncanny Alienation and the Surprisingly Boring Apocalypse: Teaching the Weird and Weirdly Uninspired Future of Artificial Intelligence
Design South (CDS) – 126
Friday 27th, 8:30 am – 10:00 am
Gaymon Bennett (Arizona State University)
Erica O’Neil (Arizona State University)
This workshop invites participants into a series of conversations, micro-presentations, interactive games, and exercises to collectively craft elements for an imagined course on the future of Artificial Intelligence – elements which (following Erik Davis’ timely provocations) take account, at once, of AI’s unsettling weirdness and weirdly uninspired modalities, all while striving to reimagine the teaching of AI as something smart, radical, and, well, a bit more fun. The workshop is designed to shift attention away from the usual commonplaces connected to the now-widespread hand-wringing about AI and the looming prospect of job losses among the white-collar classes. Equally, conversations and presentations will willfully skirt the warnings trumpeted by prophets of Light, who herald AI as an imminent and inevitable realization of familiar science-fiction narratives about (alternately) robotic singularity, bodily redundancy, and the ghost in the literal machine. Instead, attention and activities will be keyed to the banality of how AI tools are actually operating and being integrated in practice. In contrast to how these tools are being sold (“innovative, transformative, disruptive, world-changing”), we will take stock of the still-clunky ways in which AIs today (despite the wonder of chatbots!) are being used as boring, enterprise-level tools of optimization and replication.

At the same time, we will take a deep dive into the Derridean labyrinth-of-language that constitutes AI’s startling reification. We will let ourselves be spun into dizziness and disorientation by the can’t-quite-think-it fact that the play of language in these systems does indeed go all the way down (there is no bottom) and all the way in (there is no heart). We will sit together in the troubling uncertainty that arises from acknowledging that these dark waters, which reveal so little about what is going on technically beneath the surface, might offer back a disconcerting gift: the possibility that the uncanny unfamiliarity of AI may turn out to be a mirror-reflection of our all-too-familiar inability to understand ourselves. Letting ourselves steep in the Big Question of Being Human, we will take up a number of playful exercises designed to help participants collectively specify what we think counts, what we find interesting, and – vitally – how we think these tangled things can best be taught, recognizing all the way through that we are unlikely to make meaningful progress on the theoretical and political purchase of our pedagogical content if we’re not willing to actively experiment with our pedagogical modes.

Today, accessible AI tools with intuitive user interfaces have allowed the public (including our students) to enter an alien and sometimes alienating world of large datasets and machine learning, generating a visual and textual spectacle. OpenAI, Google AI, Midjourney, and others releasing these tools are carnival barkers, welcoming the public into a three-ring beta-test, all while collecting data on how these tools are deployed. The alien products that such Machine Vision and Large Language Models create, in their facsimile of human artistic and intellectual creation, activate AI’s seductive appeal. Our hope is that by connecting the uncanny shadow sides of AI with its quotidian, market-driven dangers, we can yet help our students – and ourselves – fashion more creative, enterprising, and critical responses to the prosaic if sometimes alluring future of artificial intelligence.