SCI-6513
AI and the Physical Imaginary
“We are interested in — if not fascinated by — the two-way relationship between humans and technology. Humans create inspiring and empowering technologies but also are influenced, augmented, manipulated, and even imprisoned by technology, depending on the situation and the interpreter.”
– Pertti Hurme and Jukka Jouhki, “We Shape Our Tools, and Thereafter Our Tools Shape Us,” Human Technology 13(2).
Generative AI is reshaping architectural and design workflows, offering new possibilities for ideation, visualization, and formal exploration. Builders of these tools promise to accelerate creative processes, democratize design, and expand the space of formal possibilities, yet this promise obscures several major issues. Many current generative AI systems produce forms that misunderstand gravity and the physicality of materials. They lack the embodied, tacit knowledge through which designers actually work. And, more than prior computational tools such as CAD or parametric modeling, AI systems act with a degree of independence that challenges designers’ agency, process, and the nature of design knowledge.
Architecture and design are embodied practices. Designers think through making, learn through material experimentation, and generate knowledge through haptic feedback and proprioceptive awareness. As Polanyi observed, “we know more than we can tell”: much of design expertise remains tacit knowledge, resistant to explicit formalization. How, then, can we design with AI in a way that preserves the material intelligence, tacit knowledge, and embodied awareness central to architectural and design practice?
This course interrogates two questions: (1) How do we move AI processes off the 2D screen and into the 3D physical world: the realm of bodies, materials, and fabrication? (2) How do we integrate information about physical reality and tacit/tactile knowledge into design workflows using AI? We will explore these questions through three modules:
Module 1: AI and Physical Making — Students fabricate architectural models and 3D artifacts that demonstrate meaningful integration of human creative process and AI capabilities, moving beyond prompt-based image generation.
Module 2: Interaction Paradigms — Exploration of human-AI interaction models (one-click, real-time, incremental, non-linear, and more) through experimentation. Students develop their own methods of interaction.
Module 3: Multimodal, Human and Tacit Intelligence — Investigation of how tacit knowledge and embodied intelligence can be integrated into AI workflows through methods such as material sensing, gesture, touch, and spatial positioning. Students apply these methods in their own design work.
We will discuss topics such as computational design thinking, embodied cognition, comparisons between human-AI interaction and human-robot interaction, participatory design, ethical and critical AI humanities approaches, and meta-design (designing the design process). Workshops introduce students to generative AI tools, creative coding with ml5.js/TouchDesigner, and LLM-assisted development of custom AI tools. Projects will involve physical prototyping, digital fabrication, and model making with emerging AI technologies. We will ask: What roles should AI play in architecture and design? What types of interaction enable meaningful human control? How might we incorporate physicality, material feedback, and evolving intentions into human-AI interaction?