The perpetuum mobile of past and future seems to form the present, in which the thought of thoughts takes shape. Hidden meanings in the past are often keys that unlock the doors of the future.
And many questions take us through worlds in which we want to imagine the probable and improbable answers, like Dr. Robert Ford from Westworld…

Humans are fascinated by AI and at the same time afraid of it; it is an eternal love-hate relationship, in which we as humans question our creation. Or do we?
Are we questioning what humans will do with machines? Or what a corporate agenda will do once it owns a superior intelligence?
If other generations debated feminism, human rights, voting, and education, the Big Questions for our generation will be around technology, and definitely AI. But all these debates around AI… Are they really about AI, or should we rather discuss our values, ethics, and responsibilities as human beings, and our inability to imagine societies that don’t harm, but bring benefits for the many?
The debate “Machine Yearning” was a captivating exchange of views between scientists and artists about the zigzagging crossroads of AI and humans, and the solutions and opportunities we have. It was part of a conversation series that ran at the Sundance Film Festival, supported by the Alfred P. Sloan Foundation, which also announced the winners of its annual award – read HERE about the winners.

The debate was moderated by Gadi Schwartz (journalist, NBC News).
Featuring: Cynthia Breazeal (Associate Director, MIT Media Lab), Ashley Llorens (Vice President & Managing Director, Microsoft Research Outreach), and two directors who envisioned different futures for AIs in their films (or told our joint history with AIs in different times?): Lisa Joy and Kogonada.
Lisa Joy is a screenwriter, director and producer known for Westworld – a dystopian science fiction series set in a tech amusement park. The park caters to high-paying guests who may indulge their wildest fantasies without fear of retaliation from the android hosts, who are prevented by their programming from harming humans.
Another vision of our future with AIs comes from director and writer Kogonada in his film After Yang (2021). When his young daughter’s beloved companion, an android named Yang, malfunctions, Jake (Colin Farrell) searches for a way to repair him. In the process, Jake discovers the life that has been passing in front of him, reconnecting with his wife (Jodie Turner-Smith) and daughter across a distance he didn’t know was there.
After Yang will be in theaters and streaming on Showtime on March 4.
Three main topics and questions were intriguing:
- When we talk about AI, are we talking about technology or humanity?
- Why and how is AI dangerous?
- What are the possible paths: collaboration, AI-literacy, regulation, the involvement of art?
Discussions around AI take up a lot of space, raising fear and intimidation as well as admiration and fascination. But when we talk about AI, what are we talking about? A simple, or more complex but faceless, algorithm, such as those on social media?
Human-looking robots, such as Sophia? NPCs from games? Virtual beings? A generative art piece consisting of a few lines of code? Are we talking about robots, or about robots that have feelings or consciousness?
Whatever the answer is, there is always the AI and/or the Human… But WHY?
Lisa Joy considered that the reason so much fiction anthropomorphizes AI, giving it a human body, a face, and affection, is that we are really examining human nature itself. “Any meditation on AI is a meditation on the forefathers of AI, which are humans. And often these are cautionary tales of what can possibly be wrong or short-sighted about humanity in the design of these robots.”
Kogonada admits that he struggles as a human being: “I have had a lot of existential angst since I was a child, and I still have not come up with any real answers.” Maybe when the Other Being enters our scenarios, and we have to live with the Others, not necessarily in an oppositional way, that makes us feel uncomfortable? And Kogonada’s answer is a kind of yes: “I think humans are uncomfortable with the idea that they are not the central being in this world… The thing that always comforts us (like in the Star Trek scenario) is this idea that humans will always triumph because we have THAT THING, we have that instinct, something that will always put us ahead of Others.”
Lisa Joy warns that we have to examine ourselves and human nature: “Why aren’t we as sophisticated as we like to believe? How hackable are we as a society or as individuals? And I think the answer is: pretty hackable.
So, in some ways, like in all the myths about Gods, where humans go wrong is hubris – we think that we are stronger, wiser, less corruptible, and less influenceable than we are, but we haven’t really looked at all the externalities involved and all the ways in which launching something like this will set off a sequence of events that very quickly outpaces what your own mind is able to fathom.”
Lisa Joy noted that AI is probably more insidious in its dangers than some fiction lets us believe. And we should not talk about AI in the future tense, but in the present, because:
“The park of Westworld is here, it is just not like a Disney world, it is an online space, it is a digital space and we all go onto it, and that’s how we meet our friends and hear about news and completely obscure and probably aggrandize the details of our life.
The fabrications of self-storytelling and the stories that we receive all have these narratives now: you don’t need to go to some place like Westworld to be told a yarn, you just need to log on.”
The reason AI feels much more threatening than ever is that we created something that surprises us in the ways it strategizes; it is not just data processing, it is a kind of intelligence that we are still trying to process, observed Kogonada.
We might ask whether AI is challenging us, human beings, as the primary beings. And Kogonada considered that we have to ask what AI means for that relationship:
“It is fascinating, we have been the primary beings of this world. There are other animals that are intelligent, but we constantly imposed our power, and this idea that we created some other entity that is testing us in ways that is not just data computation… “
AI is sophisticated, and Lisa Joy sees it as a kind of Deity. Once machine learning is involved, we can give the algorithm our prompts and it will return an answer that might be the right answer, but we have no idea how it got there. As such, “your actions when relying on an AI become faith-based, much like people looked at oracles and Gods. Basically we created our own Deity.” We, meaning corporations.
The goals of those deities will be set by specific drives – capitalistic market forces, some kind of propaganda – warns Lisa Joy.
Love it or hate it, AI has been with us for years, and we will see more of it in the years to come.
AI systems can help us, hack us, or destroy us – the question is: WHAT TO DO?
Nowadays, every researcher recognizes that there are unintended side effects and consequences of the way systems are designed – for example, biases can be incorporated into ML. Read more about biases in AI (racial and indigenous biases, but also the art practices that address them) that were discussed during the MIT conference Unfolding Intelligences.
Dr. Cynthia Breazeal (Associate Director, MIT Media Lab) sees the fact that concepts such as Responsible AI and Ethical AI are gaining steam as a sign of a maturing ecosystem, but warns that “we have to be mindful of how we design and create these technologies”.
So, what are some possible paths? From the discussion we extracted four themes:
- Collaborate with AI
- Involve policy makers
- AI-literacy
- Involvement of art
3.1 AI – Collaborators?
As we design these systems, we have to ask whether they will become our friends, with whom we can collaborate, or our foes, as we see ourselves replaced and displaced by machines. The panelists seemed more inclined toward collaborative visions of AI.
Ashley Llorens (Vice President & Managing Director, Microsoft) says clearly that “creating a human-like machine is not a goal, but creating a collaborator is. For example, humans can make irrational choices when faced with risks; we tend to overweight or place emphasis on rare occurrences”.
Dr. Cynthia Breazeal (Associate Director, MIT Media Lab) admits that there are fascinating complementarities and differences between what machines can do and what humans can do. “And the opportunity is really about: how can we, as a team, as collaborators, do more together, achieve more together than we could either as just a machine or just as people. It is kind of exploring that relationship that is fascinating.”
Ashley Llorens (Vice President & Managing Director, Microsoft) thinks that AI only exists to augment and extend human intelligence. He envisions intelligence as the ability to set and establish goals for individuals, society, and the planet. One of the challenging questions he addresses is “how do we specify those goals, how do we come together as groups or as a human species, and really start to understand what we want to get out of AI”.
3.2 Tech + Policy
Ashley Llorens highlighted that the intersection between tech and policy communities is crucial.
“As technologists, we really need to understand how to translate some of these conundrums and how to illustrate what is possible. And in the policy community, we really all have a shared interest in that community making choices.”
Lisa Joy underlined that, especially during the quarantine, when we don’t go out that much, the idea of the world is constantly fed to us by different technologies.
“It’s scary, because it can start to change the very nature of how we relate to each other, how we view ourselves (…) it can start to change our kindness, our generosity, our value system of what matters to us… (…) the world presented to us is a big hallucination, it is tearing apart the fabric of objective truth. I think we should probably regulate it…”
Lisa also calls for regulation at an international governmental level. She argues that a tech-savvy Silicon Valley culture can easily disrupt a less tech-savvy culture of regulators. But can you regulate things that people are designing in their basements? How do you do that? Should there be an AI monitoring or policing entity? And who designs that, and who polices that? These are some of the uncomfortable questions that need to be answered.
3.3 AI-Literacy
Dr. Cynthia Breazeal tries to build robots and systems that help individuals achieve deeply personal goals, like learning, being healthier, or being more emotionally resilient. But she admits that AI “is a double-edged sword – we can be persuaded, we can be shaped by these technologies in all kinds of ways”.
Another solution is AI-literacy: Cynthia Breazeal tries to educate people to understand what these technologies are, how they work, and what influence they can have on us. She does a lot of work with kids, teaching them about AI, social media, and how GANs work.
She stresses that it is important that people understand, “so everyone can participate in that conversation about how we want to live with these technologies”, because right now it is mostly the people who build and design AI, often controlled by corporations, while there is a need for much more diverse communities to be designing these systems.
3.4 The challenge of Art
Kogonada sees the role of art in questioning things about society and humanity – that is why Art exists. Unfortunately, he notices that the drive for technology is often not about what is Good for Humanity; so much of tech is driven by industrial or military interests, so the technology being advanced is often about Control, Power, and Profit. And then the rest of the world tries to figure out how to utilize it. So there are a lot of questions to be asked, and “I think that Art can then question those things…”
Dr. Cynthia Breazeal (Associate Director, MIT Media Lab) agrees with the role of art as questioning: “I think the challenge of art is to continue to challenge us to think differently, in ways that are important and timely to the way we’re living now, the way we think about the past and the way we envision our future”.


The Sundance Film Festival®
The Sundance Film Festival has introduced global audiences to some of the most groundbreaking films of the past three decades, including Flee, CODA, Passing, Summer Of Soul (…or, When the Revolution Could Not Be Televised), Clemency, Never Rarely Sometimes Always, Zola, On The Record, Boys State, The Farewell, Honeyland, One Child Nation, The Souvenir, The Infiltrators, Sorry to Bother You, Won’t You Be My Neighbor?, Hereditary, Call Me By Your Name, Get Out, The Big Sick, Mudbound, Fruitvale Station, Whiplash, Brooklyn, Precious, The Cove, Little Miss Sunshine, An Inconvenient Truth, Napoleon Dynamite, Hedwig and the Angry Inch, Reservoir Dogs and sex, lies, and videotape.
The Festival is a program of the non-profit Sundance Institute. 2022 Festival sponsors include: Presenting Sponsors – Acura, AMC+, Chase Sapphire, Adobe; Leadership Sponsors – Amazon Studios, DIRECTV, DoorDash, Dropbox, Netflix, Omnicom Group, WarnerMedia, XRM Media; Sustaining Sponsors – Aflac, Audible, Canada Goose, Canon U.S.A., Inc., Dell Technologies, IMDbPro, Michelob ULTRA Pure Gold, Rabbit Hole Bourbon & Rye, Unity Technologies, University of Utah Health, White Claw Hard Seltzer; Media Sponsors – The Atlantic, IndieWire, Los Angeles Times, NPR, Shadow and Act, Variety, Vulture. Sundance Institute recognizes critical support from the State of Utah as Festival Host State. The support of these organizations helps offset the Festival’s costs and sustain the Institute’s year-round programs for independent artists. Festival.Sundance.org
Sundance Institute
As a champion and curator of independent stories for the stage and screen, Sundance Institute provides and preserves the space for artists in film, theatre, film composing, and digital media to create and thrive.
Founded in 1981 by Robert Redford, the Institute’s signature Labs, granting, and mentorship programs, dedicated to developing new work, take place throughout the year in the U.S. and internationally. Sundance Collab, a digital community platform, brings artists together to learn from each other and Sundance advisors and connect in a creative space, developing and sharing works in progress. The Sundance Film Festival and other public programs connect audiences and artists to ignite new ideas, discover original voices, and build a community dedicated to independent storytelling. Sundance Institute has supported such projects as Clemency, Never Rarely Sometimes Always, Zola, On The Record, Boys State, The Farewell, Honeyland, One Child Nation, The Souvenir, The Infiltrators, Sorry to Bother You, Won’t You Be My Neighbor?, Hereditary, Call Me By Your Name, Get Out, The Big Sick, Mudbound, Fruitvale Station, City So Real, Top of the Lake, Between the World & Me, Wild Goose Dreams and Fun Home. Join the Sundance Institute on Facebook, Instagram, Twitter and YouTube.
*All photos are copyrighted and may be used by press only for the purpose of news or editorial coverage*