At the beginning of April, TechvangArt announced with enthusiasm the third series of MIT CAST symposia that bring together artists, scientists, engineers, and humanists from a variety of disciplines to address topics of common concern in areas of rapidly evolving research and urgent social relevance.
It was enchanting to see so many participants from the scientific and artistic communities follow the “Unfolding Intelligence: The Art and Science of Contemporary Computation” event with us on social media.
The virtual gathering was organized by a team of colleagues from the MIT Center for Advanced Virtuality, the MIT Transmedia Storytelling Initiative, and the MIT Trope Tank. The online exhibitions featured The Invisible College, a multi-platform artwork developed by CAST Visiting Artist Matthew Ritchie, and an online exhibition of generative software artworks curated by Nick Montfort. The List Visual Arts Center presented a related event, the Wasserman Forum.
It was a great symposium with many interesting debates and discussions. In case you did not have time to take part, here are our four favorite takeaways.
Bias in AI is a defining debate of our time. The AI revolution is here, but AI inherits the deep biases its creators have developed across history, and some of the biases that get encoded can perpetuate social oppression. That is why thinking critically about bias is a must.
In addition, we have to imagine new ways to design AI systems that are not only ethically sound but that also serve the needs of human empowerment.
Behnaz Farahi is a designer, creative technologist, and critical maker working at the intersection of fashion, architecture, and interactive design.
Through her work, Behnaz Farahi asks whether AI can be deployed as a means of exposing human biases, and how technology might expand our sensory experience and influence our social interactions.
She also suggests potential strategies that AI can deploy to actually overcome biases.
Using computer vision technology, she analyzes how different strategies of gaze can undermine power structures and promote resistance. Her emotive wearables – smart, soft robotic garments that recognize and respond to facial expressions – could potentially benefit people with autism who have difficulty recognizing facial expressions.
Iridescence is a 3D-printed emotive collar equipped with a facial-tracking camera and rotating quills that respond to movement and facial expression, allowing the wearer to sense the location and emotions of others even with their eyes closed. The piece asks: what if our clothing could sense the movement and emotions of those around us?
Caress of the Gaze is a project that engages with broader social issues, such as the male gaze on women’s bodies.
The artist used computer vision technology to allow women to know when onlookers are staring at them.
The garment’s facial-tracking technology detects the age, gaze, and gender of onlookers, while the smart-material fabric moves in response to the viewer’s gaze – if you are the wearer, you know which part of your body is being looked at, and if you are an onlooker, you know your actions have been noticed.
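As a rough sketch of the kind of logic involved (a toy model of our own, not Farahi’s actual implementation), a face detector could report where an onlooker’s gaze lands on the body, and the garment would actuate the fabric region nearest that point. Assuming hypothetical region names and detection output:

```python
# Toy sketch of mapping a detected gaze point to a garment actuator region.
# The region layout and detection output are hypothetical, for illustration only.

# Garment divided into named regions, each covering a vertical band
# (fractions of body height: 0.0 = shoulders, 1.0 = waist).
REGIONS = {
    "collar": (0.0, 0.25),
    "chest": (0.25, 0.55),
    "waist": (0.55, 1.0),
}

def region_for_gaze(gaze_y):
    """Return which fabric region a gaze point (vertical fraction) falls in."""
    for name, (top, bottom) in REGIONS.items():
        if top <= gaze_y < bottom:
            return name
    return "waist"  # clamp out-of-range values to the lowest band

def actuate(gaze_y):
    """Pick the fabric region to move in response to an onlooker's gaze."""
    region = region_for_gaze(gaze_y)
    return f"actuating {region}"

print(actuate(0.3))  # actuating chest
```

The real garment of course works from camera input rather than a precomputed coordinate, but the wearer-facing effect is the same: the fabric nearest the observed gaze responds.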
On a broader level, the project explores how technology can be used to undermine the patriarchal system and develop forms of resistance.
Behnaz Farahi’s recent work, “Can the Subaltern Speak?”, was also part of the MIT exhibition “Generative Unfoldings”, curated by Nick Montfort.
It is inspired by the intriguing historical masks worn by Bandari women from southern Iran. Legend has it that these masks were developed during Portuguese colonial rule as a way of protecting the wearer from the gaze of slave masters looking for pretty women.
Viewed from a contemporary perspective, can they be seen as a means of protecting women from patriarchal colonial oppression, the artist asks. As she notes, the project also draws on the seminal article “Can the Subaltern Speak?” by feminist theorist Gayatri Spivak, which asks whether it might be possible for the colonized – the subaltern – to have a voice in the face of colonial oppression.
Behnaz Farahi asked herself how we might reframe this same question in the context of contemporary digital culture. In this project, two masks develop their own language to communicate with each other, blinking their eyelashes in rapid succession using AI-generated Morse code.
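The blinking eyelashes carry text as light pulses in the same way classic Morse signaling does. As a toy illustration of the encoding step (not the artwork’s actual code), a message can be turned into dot/dash blink patterns like this:

```python
# Toy sketch: encode a message as eyelash "blinks" in Morse code.
# Illustrative only; not the system behind "Can the Subaltern Speak?".
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..", " ": "/",
}

def to_blinks(message):
    """Encode a message as Morse, where '.' is a short blink and '-' a long one."""
    return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

print(to_blinks("SOS"))  # ... --- ...
```

In the artwork, the interesting twist is that the “messages” themselves are AI-generated, so the masks appear to converse in a language of their own.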
Ruha Benjamin, Professor of African American Studies at Princeton University, challenges the point of view through which technological development is too often falsely equated with social progress. Sometimes tech development reinforces and entrenches social inequality. Technology and data have also been used to harm groups in the past: the deadly data with which IBM supported the Holocaust, supplying punch-card systems used to surveil and track the populations that were exterminated.
But in our contemporary society, data is sometimes presented as “do-gooding data”, where the framing and the motive are to help.
For example, in Minnesota, data was collected by different agencies under a community innovation project in order to help people at risk.
The do-gooding aspect was that agencies were supposed to intervene early. But the local community argued that those agencies did not have a good track record of supporting the well-being of their communities.
Ruha Benjamin warns that when it comes to tech “such as deep learning, machine learning, without the historical and sociological depth what we are producing is actually superficial learning”.
We have to situate our goals and design in a social-historical context so that we are not reproducing social and racial inequalities.
There are plenty of studies across different areas highlighting how racial discrimination works. For example, in the healthcare system, professionals underestimate the pain of Black patients and under-prescribe the medication they need.
(More info: Kelly M. Hoffman, Sophie Trawalter, Jordan R. Axt, and M. Norman Oliver, “Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites”, https://www.pnas.org/content/113/16/4296)
So, what will happen when we outsource these decisions to an AI? There is plenty of evidence that AI trained on datasets that are already discriminatory leads to discriminatory AI.
For example, psychologists have found that AI considers white-sounding names to be more pleasant, a bias well documented among recruiters (https://www.vice.com/en/article/z43qka/its-our-fault-that-ai-thinks-white-names-are-more-pleasant-than-black-names). Amazon’s recruiting algorithm, too, turned out to discriminate against women.
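The “pleasant names” finding comes from measuring associations in word embeddings: a word vector that sits closer to pleasant words than to unpleasant ones scores a positive association. A minimal sketch of that measurement, using made-up toy vectors rather than real trained embeddings such as GloVe or word2vec:

```python
# Toy sketch of how association bias is measured in word embeddings
# (WEAT-style). All vectors below are invented for illustration only.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant ones."""
    p = sum(cosine(word_vec, w) for w in pleasant) / len(pleasant)
    u = sum(cosine(word_vec, w) for w in unpleasant) / len(unpleasant)
    return p - u

# Hypothetical 2-D vectors (NOT real embedding values).
pleasant = [[0.9, 0.1], [0.8, 0.2]]
unpleasant = [[0.1, 0.9], [0.2, 0.8]]
name_a = [0.85, 0.15]  # toy name vector near the "pleasant" cluster
name_b = [0.15, 0.85]  # toy name vector near the "unpleasant" cluster

print(association(name_a, pleasant, unpleasant) > 0)  # True
print(association(name_b, pleasant, unpleasant) < 0)  # True
```

The bias is not programmed in anywhere: it emerges because the embeddings are trained on human-written text, which is exactly Ruha Benjamin’s point about superficial learning.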
Ruha Benjamin presents the concept of the “New Jim Code” to explore a range of discriminatory designs that encode inequity: by explicitly amplifying racial hierarchies, by ignoring but thereby replicating social divisions, or by aiming to fix racial bias but ultimately doing quite the opposite.
The “New Jim Code” is the combination of coded bias and imagined objectivity: innovation that enables social containment while appearing fairer than the discriminatory practices of a previous era.
Technology can hide social domination, and present everything under the guise of progress, and anti-blackness can get encoded and exercised through automated systems.
Given that AI is expected to dominate many markets – recruitment, healthcare, and so on – non-biased algorithms are essential.
“If only there was a way to slay centuries of racism and sexism with a social justice bot!” reflects Ruha Benjamin.
REPORT: ”Advancing Racial Literacy in Tech” handbook
STUDY: “How Eugenics Shaped Statistics” by Aubrey Clayton (2020)
Another interesting presentation was given by Jason Edward Lewis, Professor of Computation Arts and University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University, and founder of the Obx Laboratory for Experimental Media. Jason Edward Lewis is a creative technologist, digital media theorist, poet, and software designer whose work has been featured at Ars Electronica, MobileFest, and Elektra.
Jason Edward Lewis explored artificial intelligence as kin, presenting two of his most influential works: he is the lead author of the award-winning essay “Making Kin with the Machines” and editor of the groundbreaking Indigenous Protocol and Artificial Intelligence Position Paper.
“We undertake this project not to “diversify” the conversation. We do it because we believe that Indigenous epistemologies are much better at respectfully accommodating the non-human. We retain a sense of community that is articulated through complex kin networks anchored in specific territories, genealogies, and protocols. Ultimately, our goal is that we, as a species, figure out how to treat these new non-human kin respectfully and reciprocally—and not as mere tools, or worse, slaves to their creators”
When confronted with new entities, these cultures first ask “How are you related to me?” rather than “What is that?” This is true for animals, stones, oceans, and more. The question sees the world through a relational lens, even before essence or utility. For this, cultures developed protocols. Protocols are important because they help us understand how we live our lives. In Indigenous contexts, protocol can generally be understood as guidelines for initiating, maintaining, and evolving relationships. Protocol also refers to specific methods for properly conducting oneself in any activity; cultural protocol refers to the customs, lore, and codes of behavior of a particular cultural group and its way of conducting business.
The position paper on Indigenous Protocol and Artificial Intelligence (IP AI), published in mid-2020, is for those who want to design and create AI from an ethical position that centers Indigenous concerns. The aim is to articulate a multiplicity of Indigenous knowledge systems and technological practices that can and should be brought to bear on the ‘question of AI.’
The position paper is a collection of texts spanning artistic interventions, poetry, and tech prototypes. Among the interesting works is “How to Build everything Ethically?” by Suzanne Kite, who examines the protocol her people, the Lakota, used to develop sweat lodges in order to derive a methodology for computational systems, while Michele Brown writes about how to harness AI to support her people’s relationship with the ocean. For more interesting reflections and possible solutions, the book awaits to be read 🙂
Last, but not least, we can’t fail to mention The Invisible College: Color Confinement, a multi-part transmedia artwork created by Matthew Ritchie, who was part of the Dasha Zhukova Distinguished Visiting Artist Program at MIT.
The project premiered at the beginning of the symposium.
Matthew Ritchie investigated the interactions, discussions, and thought processes that take place outside the formal structure of the Institute, even positing the existence of links between concepts and facts that no human has explicitly documented or communicated.
He then compiled scientific diagrams and recorded discussions with colleagues, and had Generative Adversarial Networks (GANs), a class of machine learning systems, reinterpret those inputs. The artist paused or influenced the machine’s processes, isolating visual forms and narrative passages that gesture toward the conversations and ideas at work in the invisible college.
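At the heart of any GAN is an adversarial loop: a generator tries to produce outputs that look like the training data, while a discriminator tries to tell real from generated, and each improves against the other. A deliberately tiny 1-D sketch of that loop (a toy of our own, nothing like the image models Ritchie worked with):

```python
# Minimal 1-D sketch of the generator-vs-discriminator loop behind a GAN.
# Toy illustration only; not the model used for The Invisible College.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b tries to mimic "real" data drawn from N(4, 1).
# Discriminator d(x) = sigmoid(w*x + c) tries to tell real from generated.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(500):
    z = rng.standard_normal(32)          # generator input noise
    real = 4.0 + rng.standard_normal(32) # samples of the "real" distribution
    fake = a * z + b                     # generator's current samples

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log d(fake), pushing fakes toward "real".
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)
```

Ritchie’s intervention, pausing and steering this loop mid-training, treats the half-converged states as artistic material rather than as failures on the way to a finished model.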
The real story of The Invisible College: Color Confinement is charming and parable-like.