
    Artificial Visionaries: Exploring the intersections of machine vision, computation, and our aural and visual cultures

    Room 511, UQ Brisbane City
    Brisbane, Australia

    Event description

    Artificial Visionaries

    “Artificial Visionaries” is a two-day symposium that brings together scholars exploring the intersections between computation and creativity across a broad range of aural and visual cultures.

    As artificial intelligence and generative technologies become entangled with our day-to-day creative practices and with industrial forms of cultural production, they prompt critical reflection on the affordances, differences, and points of connection between human perception and machine vision, human labour and machine labour, and human creativity and computational creativity. How are generative technologies being incorporated into our creative practices? How are data and algorithms influencing the way we make, exhibit, distribute, perceive, or consume art?

    ChatGPT suggested we call this event "artificial visionaries" — so we did. But who are the visionaries? The hallucinations of the machines, or the creative visions (and hallucinations) of the humans who use them? Whilst the phrase may bring to mind questions of authenticity, authorship, or aesthetic judgement for some cultural studies scholars, we're sure it will prompt very different ideas for a computational scientist. We feel that the polysemy of a machine-generated term such as this is also representative of the many different approaches scholars are taking toward digital cultural research.

    This event has been organised by Meg Herrmann with the support of the Centre for Digital Cultures & Societies at UQ and the ARC Centre of Excellence for Automated Decision-Making and Society.

    Keynotes 

    Degenerative Music: Listening with and against algorithmic aberrations

    Explore acoustic chicago blues algorave. Make a song that feels how you feel. Write a songbook about automatic music generation. Prompt: choir, replication, disquiet, clone, drone, decompose, female vocalist, rhythmic, LLM poetry, DIY, heavy, absurd. Enter custom mode. Perform live.

    “Suno is building a future where anyone can make great music. Whether you're a shower singer or a charting artist, we break barriers between you and the song you dream of making. No instrument needed, just imagination. From your mind to music.” 

    “Udio builds AI tools to enable the next generation of music creators. We believe AI has the potential to expand musical horizons and enable anyone to create extraordinary music. With Udio, anyone with a tune, some lyrics, or a funny idea can now express themselves in music.”

    Generative AI platforms like Suno and Udio promise a future where "anyone can make great music", regardless of skill, experience or knowledge, simply by using a prompt interface. While this notion radically redefines what it means to create music in a conventional sense, it aligns, weirdly and perhaps unintentionally, with certain avant-garde and experimental music traditions, which foreground de-skilling (no instrument needed...) and conceptual purity (...just imagination).

    Further, when we listen to AI-generated music in 2024, despite promises to the contrary, we don’t hear seamless genre replication or polished production. Instead, what stands out are aberrations—glitches, artifacts, and strange affectations—what we might call sonic disaggregations or degenerations. These imperfections are not merely flaws; they are the defining features of AI music.

    Rather than focusing on AI’s ability to faithfully replicate musical conventions, this talk proposes that the medium specificity of AI music lies in its errors and mutations, its absence of human intentionality, and the ‘lack of shame’ that often accompanies creative choices. While these qualities preclude (at least for now) AI-generated music from being seen as "authentic" popular music, they fulfil long-held avant-garde desires to replace aesthetic choices with automated processes, structures, mechanisations and prompts.

    Dr Joel Stern (RMIT) 

    Dr Joel Stern is a researcher, curator, and artist living in Naarm/Melbourne, Australia. He holds the position of Vice-Chancellor’s Postdoctoral Fellow at the School of Media and Communication, RMIT University. Informed by a background in experimental music and sonic art, Stern’s work focusses on how practices of sound and listening inform and shape our contemporary worlds.

    In 2020, with fellow artist-researchers Sean Dockray and James Parker, Joel founded Machine Listening, a platform for collaborative research and artistic experimentation, focused on the political and aesthetic dimensions of the computation of sound and speech. Machine Listening emerged out of Stern’s previous work with James Parker on Eavesdropping, a multifaceted project staged at Ian Potter Museum of Art (University of Melbourne) and City Gallery (Wellington) addressing the capture and control of our sonic worlds, alongside strategies of resistance.

    Between 2013 and 2022 Stern was Artistic Director of the pioneering Australian organisation Liquid Architecture, helping establish it as one of the world’s leading forums for sonic art. In this capacity he curated and produced numerous festivals, exhibitions, concerts and publications in Australia and internationally, while developing artistic research investigations including disorganising, Polyphonic Social, Why Listen?, Instrument Builders Project, and Ritual Community Music.

    Weird by Design: Generative AI and the aesthetics and visual culture of weirdness 

    In 2024, new generative AI models for image and video are released every few weeks, and each one seems to promise improved accuracy and unprecedented user control. Often, though, if we consider AI-generated videos such as “Will Smith Eating Spaghetti” (2023), made by Reddit user chaindrop using Hugging Face’s ModelScope text2video, it is the inaccuracy and chaos of AI-generated works that comprises their viral attraction. This is a rarely examined aesthetic quality we tend to call weird. In one sense – but not all – AI-generated weirdness is related to what Carolyn Kane has called “the aesthetics of failure” (2019): associated with technological artefacts that are part of development cycles, but slowly disappearing with the training of each new model. It is possible that weirdness is merely a temporary characteristic of AI aesthetics – one that is leant into or emphasized in vernacular and artistic uses of these applications. But weirdness may also be a more persistent feature of generative AI. For, as I argue here, it operates alongside, underneath, and in relation to generative AI’s developmental trajectories, and their corporate framing and branding. This talk is a brief exploration of the manifestation, experience, and functions of AI weirdness, and how and why weirdness – at least for now – is a significant part of the shifting aesthetic and cultural frameworks through which we understand, share, categorize, and experience emerging AI applications and the text, images, and video they produce.

    Dr Lisa Bode (UQ)

    Lisa Bode lectures in Film and Television Studies at the University of Queensland. She is the author of Making Believe: Screen Performance and Special Effects in Popular Cinema (Rutgers University Press, 2017), which historicizes screen performance within the context of visual and special effects cinema and technological change in Hollywood filmmaking, through the silent, early sound, and current digital eras, in order to shed light on the ways that digital filmmaking processes such as motion capture, digital face-replacement, and green-screen acting are impacting screen acting and stardom. She has published work in edited collections and journals on the implications of digital filmmaking technologies for synthetic media, screen acting and stardom, the cultural reception of the synthespian, mock documentary performance, and the processes through which dead Hollywood stars are remembered, forgotten, or re-animated. She co-edited the August 2021 special issue of Convergence on Digital Faces and Deepfakes on screen, and is currently writing a monograph for Rutgers University Press called Deepfakes and Digital Bodies.

    She is on the editorial boards of the series Animation: Key Films / Filmmakers (Bloomsbury Academic) and of Animation Studies, the open-access peer-reviewed journal of the Society for Animation Studies. In 2020 she co-founded the Visual Effects Research Network with Associate Professor Leon Gurevitch.

    Workshops

    A Gentle Introduction to Stable Diffusion

    This workshop has been modelled on a module from the Gen AI explainer series recently launched by the GenAI Lab at the Queensland University of Technology: "Unboxing GenAI: Building capacities for public understanding of Generative AI."

    Our Stable Diffusion mini-series covers the most prevalent approach to text-to-image generation currently on the market: latent diffusion models. This series breaks down each component of an example open-source model (Stable Diffusion v1.4), explains the reasoning behind each component’s inclusion, and openly reconstructs the model’s algorithm in an approachable, non-technical and interactive format.
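
    For readers who want to poke at the model itself before the workshop, the short Python sketch below shows one way to load and run the same open-source checkpoint (Stable Diffusion v1.4) with the Hugging Face diffusers library. It is a minimal illustration only; the prompt, sampler settings, and file name are our own placeholders rather than workshop material.

        # Minimal sketch: text-to-image with the open-source Stable Diffusion v1.4
        # checkpoint via the Hugging Face diffusers library. Prompt and settings
        # are illustrative placeholders, not taken from the workshop itself.
        import torch
        from diffusers import StableDiffusionPipeline

        device = "cuda" if torch.cuda.is_available() else "cpu"
        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4",
            torch_dtype=torch.float16 if device == "cuda" else torch.float32,
        ).to(device)

        image = pipe(
            "an artificial visionary, oil on canvas",  # text prompt
            num_inference_steps=50,   # denoising steps carried out in latent space
            guidance_scale=7.5,       # classifier-free guidance strength
        ).images[0]

        image.save("artificial_visionary.png")

    The components of Stable Diffusion v1.4 that such a breakdown covers (the CLIP text encoder, the U-Net that denoises in latent space, and the VAE decoder) all sit behind that single pipe(...) call.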

    William He (QUT)

    William He is a Machine Learning Engineer at QUT's GenAI Lab, specializing in Large Language Model interpretability, model transparency, and emergent model abilities. His present work is a collaboration with Dr Aaron Snoswell, Dr Jean Burgess, and Dr Damiano Spina on an LLM-powered social media re-ranking algorithm, which aims to reduce toxicity by reordering posts based on "bridging-ness". Their algorithm is currently a finalist in the Center for Human-Compatible AI's Prosocial Ranking Challenge. Previously, William worked in the government sector, where he mostly trained and deployed Large Language Models in production environments and built proof-of-concept retrieval-augmented LLMs and semantic search engines. William is also an independent filmmaker, with works featured in previous editions of the Canberra Short Film Festival, St Kilda Film Festival, and Melbourne WebFest, among others. He is currently in post-production on a feature-length documentary.
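
    As a rough picture of what bridging-based re-ranking means in practice, the sketch below simply reorders a feed by a scoring function. The bridging_score stand-in and the example posts are hypothetical and are not taken from the team's actual system or their Prosocial Ranking Challenge entry.

        # Hypothetical sketch of re-ranking a feed by "bridging-ness": posts that an
        # LLM (or any classifier) rates as more bridging are surfaced first.
        # bridging_score is a placeholder heuristic, not the real scoring model.

        def bridging_score(post: str) -> float:
            # Stand-in: a real system would ask an LLM to rate how well the post
            # bridges across opposing groups; here we just reward a marker phrase.
            return 1.0 if "common ground" in post.lower() else 0.0

        def rerank(posts: list[str]) -> list[str]:
            # Stable sort, so ties keep their original (e.g. chronological) order.
            return sorted(posts, key=bridging_score, reverse=True)

        feed = [
            "You people are always wrong about everything.",
            "I think both sides actually share some common ground here.",
        ]
        print(rerank(feed))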


    Re-Wilding AI: From the Banal to the Bonkers

    As AI-generated content is rapidly integrated into creative practices, we often see its outputs as polished, efficient, and precise. But what happens when we stop trying to avoid errors, glitches, and hallucinations? What if we embrace the limitations, anomalies, and ‘wild’ possibilities of these tools? This lectorial workshop invites participants to hack a path through the overgrown wilderness beyond boring algorithmic perfection and to work together in re-wilding AI imagery. With guidance from Workshop Wayfinder Daniel Binns, you will navigate creative exercises that push AI media generators to breaking point. Together, we will challenge assumptions about AI’s role in creativity and rediscover the unexpected, emergent potential that lies within its systems.

    Dr Daniel Binns (RMIT)

    Daniel Binns is a theorist of media and screen cultures, currently researching the intersection of materiality, computation, and entertainment. Daniel has published work on the Netflix house style, drones and cinematography, video game engines as filmmaking tools, film genre, and superhero movies and TV. He is the author of The Hollywood War Film: Critical Observations from World War I to Iraq (2017) and Material Media-Making in the Digital Age (2021). His work has been cited in the Journal of Popular Culture, Black Camera, Pop Matters, and the Journal of International Relations and Development, and he has presented his research at conferences in Australia, New Zealand, the UK, Greece, and the Czech Republic.

    Daniel has also worked as a screenwriter, director, producer and production manager on corporate films, television documentaries, multi-sensory experiences and short-form works. He has produced work for Seven Network Australia, National Geographic and Fox Sports. His film work has been selected for over a dozen international festivals and streaming services.


    Thinking by Prompting: Generative AI and Creative Practice

    This workshop presentation will survey how generative AI can be integrated into the creative processes of a visual artist. By playfully engaging with AI tools, artists can initiate a reflective and reflexive creative dialogue in their practice, using AI to facilitate speculative and imaginative thinking. Dr Daniel McKewen will share artistic outcomes and works-in-progress from his exhibition practice as well as current collaborations with other humanities researchers, demonstrating and discussing the creative considerations and contextual conditions emerging in artists’ uses of generative AI.

    Dr Daniel McKewen (QUT)

    Dr Daniel McKewen is a Senior Lecturer in Contemporary Art at QUT, where he teaches Studio Practice and Time-based Art units. His practice as a visual artist and educator considers the intersections of contemporary art, screen culture, economics, and politics. His research ranges across numerous artforms and fields of visual culture, exploring how institutions, systems and structures of power inform our individual and collective imaginations. His artworks reflect on how we are shaped and challenged by ideological and aesthetic conventions, and how these allow us to make sense of our social experiences.

    In 2013 Daniel was awarded his Doctor of Philosophy by the Queensland University of Technology for his thesis The Art of Being a Fan: Complicity and Criticality in Contemporary Art and Fandom. His artwork is held in public and private collections and has been exhibited nationally and internationally, including in the 2023 and 2021 Ramsay Art Prize exhibitions at the Art Gallery of South Australia; Conflict in my Outlook at the University of Queensland Art Museum; Currency at New Media Gallery, Vancouver; Art Mixtape at HOTA, Gold Coast; NEW14 at the Australian Centre for Contemporary Art; and You Imagine What You Desire at the 19th Biennale of Sydney. Daniel was a founding member of the artist-run initiative Boxcopy, and from 2016 to 2023 was a board member of Metro Arts. Daniel's art practice is represented by Milani Gallery, Brisbane.


    Panels

    Machine Vision vs Human Perception

    • Hallucinations and Realism: the ‘hidden language’ of AI – Emilie K. Sunde (Uni Melb)
    • Provocation: Machines scope, they do not see, they do not sense, they do not…? – Kathryn Brimblecombe-Fox (UQ)
    • Uneasy familiarity: Colonial uncanny and the picturesque in generative AI images – Charu Maithani
    • Cultural Machines: How MetaCLIP Codifies Culture – Luke Munn (UQ) and Adarsh Badri (UQ)

    Authorship, Intentionality, and Appreciation

    • Authorship Adr1ft? Engaging Cinema’s Automatisms in Harmony Korine’s Aggro Dr1ft – Joel Fantini (UQ)
    • Artificial Intelligence and the Appreciation of Art – Tace McNamara (Monash)

    Short Video Platforms and Cultural Production

    • Behind the AI-Generated Content Flood: Grassroots Creators, Lucrative Ventures, and the Informal Industry – Jiaru Tang (QUT)
    • Chinese rural women’s cycle of bitterness on short-video platforms – Bingxi Huang (UQ)
    • The Multi-Sided Product: Media Production in the Context of Platformization – Meg Herrmann (UQ)

    Case Studies: Artificiality and Human-Machine Interaction

    • Pigeon Fool. Hybrid assemblages and the relationship between the live AI bot and the recorded human performer – Abbie Trott (UQ)
    • Can Humans Empathise with Electric Characters? The Case of Spike Jonze’s Her (2013) – Matthew Cipa (UQ)
    • The Metamodern Tension of The Rehearsal (2022– ) – [Video essay] Keya Makar (UQ)


    Interstate applicants: ADM+S research training has earmarked a limited number of travel bursaries to enable our interstate ADM+S students and ECR members to participate in person. These bursaries are intended to contribute toward return economy airfares and accommodation. Please email m.thomas@uq.edu.au and sally.storey@rmit.edu.au if you would like to apply for a travel bursary to attend.
