How to prevent an AI and robot apocalypse: A presentation and conversation between Distinguished Professor Phil Morgan and Vice Chancellor Bill Shorten on implications of technology in society and health ecosystems

University of Canberra, Building 1 Level A Room 21 (1A21), near Mizzuna Cafe
Bruce, Australia

Thu, 13 Mar, 5:30pm - 7:30pm AEDT

Event description

You are invited to a public seminar, "How to prevent an AI and robot apocalypse: Designing and deploying AI, robots and other autonomous systems responsibly, safely, securely and ethically", delivered at the University of Canberra by Visiting Distinguished Professor Philip Morgan.

Professor Morgan will then be joined on stage by UC’s Vice Chancellor and President Bill Shorten for a discussion focusing on the implications of technology in society and health ecosystems, moderated by Professor Chris Wallace.

Agenda:

  • 5:30 pm - Canapés and Networking
  • 5:45 pm - Welcome and Acknowledgement of Country (MC Professor Chris Wallace), Opening Remarks (Faculty of Science and Technology and Faculty of Health)
  • 6:00 pm - Guest presentation by Distinguished Professor Philip Morgan
  • 6:30 pm - Discussion between Prof Morgan and Vice Chancellor Bill Shorten moderated by Prof Chris Wallace
  • 7:00 pm - Q&A session and Networking
  • 7:30 pm - Conclusion

Abstract:
Over the past 15-20 years, we have seen rapid technological developments in AI, robotic and autonomous systems, such that they are fast becoming ubiquitous within many workplace domains (e.g. healthcare, logistics, manufacturing, transport) and ever present within domestic and social contexts. Self-driving cars, industrial and domestic robots, augmented reality, and smart AI agents are no longer the stuff of fiction. Such technologies can, for example: increase productivity; complete repetitive tasks; streamline operations; reduce errors, incidents and accidents typically caused by humans; and, in a growing number of cases, support decision making. However, they are not flawless, yet they are being developed and deployed at a rapid pace. Lisanne Bainbridge (1983) warned of the 'ironies of automation'; Raja Parasuraman and Victor Riley (1997) of the 'misuse, disuse, and abuse of automation'; John Lee and Katrina See (2004) of 'designing automation for appropriate reliance'; and Alexandra Kaplan and colleagues (2023) of 'factors that have no bearing on AI performance impacting trust in AI'. Are we then risking an AI and robot apocalypse? A judgement day? Quite possibly! Unless such technologies are designed, developed, and tested responsibly, safely, securely, and ethically by humans, and crucially with end-users.

I will present example research findings, recommendations, notes of caution and many tales of hope from projects spanning a 20+ year career (to date) in Human Factors Psychology and Cognitive Science, across application domains including aerospace, defence, emergency services, environmental intelligence, healthcare, and transportation. Furthermore, almost all of these technologies are at risk of cyber attack, due to us, humans, often being the weakest link. I will discuss how we can better understand and measure our cyber vulnerabilities in order to fight back and achieve a state of seamless security and privacy in symbiosis with the AI, robotic and autonomous systems with which we increasingly share the world.

About the speaker: 

Philip Morgan – Professor of Human Factors and Cognitive Science
School of Psychology, Cardiff University, UK

Prof Phil Morgan holds a Personal Chair (as a Senior Professor) within the School of Psychology at Cardiff University. He is Director of the Cardiff University Human Factors Excellence (HuFEx) Group, Director of Research within the Centre for AI, Robotics, and Human-Machine Systems (IROHMS), Transportation and Human Factors and Cognitive Science Lead within the Digital Transformation Innovation Institute (DTII), Director of the Airbus – Cardiff University Academic Centre of Excellence in Human-Centric Cyber Security (H2CS), and Co-Academic Lead of a partnership between Airbus and Cardiff University. Prof Morgan is also Visiting Professor at Luleå University of Technology - Psychology, Division of Health, Medicine & Rehabilitation, Sweden, and Distinguished Visiting Fellow within the Faculty of Education, Science, Technology and Mathematics at the University of Canberra, Australia.

Formally trained as a Cognitive Experimental Psychologist, Prof Morgan is an international expert in human aspects of AI and automation, trust in new/disruptive technologies, Cyberpsychology, transportation human factors, HMI design, HCI, interruption and distraction effects, and adaptive cognition, and has published extensively (>130 outputs) across these areas. With >50 grants (~£40 million; funders include Airbus, CREST, ERDF, ESRC, EPSRC, HSSRC, IUK, NCSC, SOS Alarm, Wellcome), often as Principal Investigator / Institution Lead, he has significant project management experience. He supervises PhD students (with many past completions) in areas including human aspects of AI, automation, cyber security, transportation and robotics.

Prof Morgan was a Human Factors lead on the IUK (~£5m, 2015-18) Venturer Autonomous Vehicles for UK Roads project, Co-I and Human Factors lead on the IUK (~£5.5m, 2016-19) Flourish Connected Autonomous Vehicles project, and PI on an ESRC-JST (~750k, 2020-2023, with universities in Japan, e.g. Kyoto and Osaka) project, Rule of Law in the Age of AI: Distributive Liability for Multi-Agent Societies, focussing on factors such as trust, blame and implications for standards and legislation in the event of accidents involving autonomous vehicles. Amongst other current projects, Prof Morgan is Co-Leading a cross-cutting Human-Centred Design Work Package within an EPSRC (~£12m, 2024-2029) AI for Collective Intelligence (AI4CI) hub (https://ai4ci.ac.uk/).

Recently, Prof Morgan established HumaniFAI Ltd, a research and consultancy company focussed on human-centred, assured, ethical, responsible, and safe design and use of AI, robotic and autonomous systems.

The event is brought to you by the Collaborative Robotics Lab at the University of Canberra and the ACM SIGCHI Chapter for Canberra. The University of Canberra Visiting Distinguished Fellow Scheme has funded Professor Phil Morgan's visit.

