Smart cities, search engines, autonomous vehicles: The pairing of massive data sets and self-learning algorithms is transforming the world around us in ways that are not always easy to grasp. The strange ways computers “think” are hidden within opaque proprietary code. It has been called the “end of theory.” There is a danger, says Paola Sturla, Lecturer in Landscape Architecture at the Harvard Graduate School of Design, that human agency will be nudged out of the picture. Sturla, who is trained as an architect and landscape architect, has called on designers to renew the tradition of humanism. The logical step-by-step processing of our machines must be framed within the open-ended thinking of human-oriented design. In this respect, design has tricks to teach computer science: Working at the messy interface between human users and technological tools is what we do best. Sturla reminds us that humanist-designers were instrumental in the initial development of artificial intelligence (AI) decades ago—and she calls on us to renew this legacy.
What does it mean to be a humanist? Are AI tools and computer simulation compatible with humanism?
My research investigates the agency of AI in the design process in a theoretical way. I say that because, as I acquire more technical knowledge, I confirm for myself how sophisticated these skills are. I am not a data scientist; that would be a different career. Humanism, according to Umberto Eco and Linda Gasperoni, is the ability to tackle open-ended problems in a multidisciplinary way. I am not the technician—I am the questioner, the user, the person who interacts with the technology and therefore sets the requirements, which is part of the design process. That’s why my research looks so different from typical AI research, which is about testing and simulating. We need to question what we do in order to understand what AI can do for us. We don’t want to be blown away by the tool. We want to control the tool, make choices, and be accountable, while being surprised by its generative output.
Do we need technologies to be able to engage with infrastructures and landscapes, to be able to bring them to a human scale?
No. We have always experienced the world around us without technologies, or with other technologies. We can perceive with only our senses, but technology enhances them. That’s what technology has always done. The reticula that painters once used to frame and subdivide their perspective views were a kind of technology for mediating perception. Technologies today are not totally new; they are an evolution.
AI could be just one additional layer. It can sense things we cannot sense and reveal things we cannot see with our eyes, because this data is outside the scope of our perception. It can therefore give us information we would not otherwise have. Then there is the generative part of AI. We use many tools to have ideas—people do the craziest things to be stimulated as designers, so why not use an algorithm? AI can provide forms. But we should not rely on that. We should realize that humans are still part of the process.
Is there currently a shift in the complexity of the problems approached by designers? Should the design disciplines change their ways of working to account for this?
I think there is a shift. I’m not sure if it’s related to technology or if it’s a consequence of technology. It’s obviously a complex phenomenon, rooted in economics, in the social aspects of globalization, and so on. The design process is now international. If I had to point out the most important thing enabled by technology, it would not be AI; it would be email making possible mega-teams around the world sharing BIM models. Communication is changing the landscape of what we do. Economic contingencies have resulted in increased complexity. And certainly technology can help. But this is my point: We need to make sure the technology doesn’t backfire on us. If it just adds a layer of complexity, then it’s a problem. If it helps us manage complex sites then it can be very useful.
We have been dealing with complexity forever. Imagine when Brunelleschi was building Santa Maria del Fiore—he came to an existing context, part of the building was already there, he had to invent a solution. This sort of complex context is not new; it is the bottom line of what we do. The aim should be for technology to help us manage this.
What do you see as the root of your approach? Which designers or thinkers are the heroes of working with complex systems?
What I did in my own work was to start at the beginning—to go back to the 1950s, to Alan Turing. From there I went historically, stepping forward about every 10 years, asking, Who are the key people? There was John McCarthy, who coined the term “artificial intelligence.” (Interestingly, it was in a grant proposal—it was branding to get the money.) There was Herbert Simon, the economist turned AI pioneer. Then there was the research at MIT: Marvin Minsky and Nicholas Negroponte at the Media Lab. It’s fascinating to learn that many of these people were architects—to find the humanist-designer at the root of all this disruptive innovation. The statistics came from mathematicians, sure, but the idea of applying it to technology for civil use came from architects, as Molly Wright Steenson has shown in her recent book, Architectural Intelligence.
It is a landscape of people who have been dealing with complexity in a variety of fields. All the people who founded the Santa Fe Institute, for example. Many of the heroes are from around the 1950s and 60s. The questions they were asking are still relevant, even if we have now advanced the technology. For example: How can we compute the ambiguous? How can we compute the “maybe,” as Marvin Minsky (and Stephen Ervin) would put it?
Finally, there are the designers—like landscape architect Lawrence Halprin and his creativity feedback loop, the RSVP Cycle, to name just one.
That’s a good question. How can we have an idea that is not necessarily rooted in a data set?
I don’t have an answer to Minsky’s question, but it’s probably a matter of a feedback loop between the human and the machine. Instead of imagining a machine that takes over, it’s more interesting to think about how to make the interface of that technology, to interact in a way that can be generative. This is what I talk about with computer scientists. There is a lot of research going on in that field, especially in the case of, say, emergency response. If there is an emergency, how can algorithms coordinate the rescue process? How can they interact with the human rescuers? The interface problem is key right now in computer science research.
The problem is a language problem. The machine thinks in linear terms, and the algorithm is a linear process: from A to B, from B to C, based on some parameters. Humans think in a nonlinear way, or we can think in both ways. (Or, really, we don’t know how we think; neuroscientists are clear about that.) An algorithm is a machine that can handle a very specific process. That is the opposite of what we do: tackle open-ended processes. I don’t want to open up the Pandora’s box of the critique of modernism, but if the algorithm is a super-linear way of expressing a problem, then it is the king of the modernist approach. And if we have never been modern (quoting Bruno Latour), then the tendency to try to act in a linear way was never really a coherent idea. My critique, then, is to ask how we can make sure the implementation of algorithms doesn’t make us lose that pre-modern or anti-modern or just human part of the discourse. My intuition, based on what I see happening in computer science, is that it is the interface and the feedback loop between human and machine that is crucial. So it’s not about getting rid of the human, but understanding how the two work together.