
    Keeping Children Safe in the Age of AI: Examining the risks and benefits of AI implementation for CSE

    This event has passed

    Event description

    AI is a hot topic of conversation across many aspects of life – business, education, online safety, law enforcement and countless other domains. It has been propelled into public debate by the global public release of Large Language Model (LLM) platforms such as ChatGPT.

    While AI offers many benefits in our daily lives, it also raises many questions. Will it be safe? How secure is it? How can we be sure that it will have a positive impact on protecting children while serving the business interests of an organisation?

    Lawmakers in Europe and beyond are looking at regulating the design, development and implementation of AI that directly affects the daily lives of society, balancing technological progress with the fundamental rights of individuals, including children. The Artificial Intelligence Act – the first regulatory and legal framework for this technology – is before the European Parliament. This framework is widely expected to become a global blueprint and is due to be fully enforceable by 2026. In Australia, the Albanese Government is currently undertaking an eight-week consultation process on the subject before considering its next steps in relation to safeguarding the community.

    As with the General Data Protection Regulation (GDPR), the Budapest Convention on Cybercrime and human rights laws, frameworks developed in Europe have had international impacts on businesses, law enforcement and other organisations.

    This presentation will focus on AI and its full development cycle. Using capAI, an AI implementation and testing framework developed by Oxford University, we will seek to show how AI can be safely and successfully integrated into business practices. This includes early warning of activity pertinent to Suspicious Matter Reports (SMRs), harmful online activity and targeted enforcement practices.

    Throughout the presentation, we will identify the key players in the implementation of AI, along with the benefits and the unacceptable risks involved, and highlight capability developed by Rigr AI to assist with victim identification in child exploitation cases.

    Your presenter

    Colm Gannon is an AI/Machine Learning expert working as a Product Manager with Rigr AI in New Zealand. He has over 20 years’ experience in law enforcement, having been involved in national and international investigations and prosecutions relating to online harms, child sexual abuse and exploitation, violent extremism, and harmful online communications.

    Over his career, Colm has worked with multi-disciplinary teams to shape cross-government strategies on cybercrime and child protection, and has developed initiatives to combat online harms through digital and social campaigns.

    Colm holds a Master of Science (MSc) in Forensic Computing and Cybercrime Investigations from University College Dublin, Ireland. He is working with Rigr AI and partners on a European Commission-funded project to develop AI and research-led software to assess risk indicators for child abuse, bringing a truly global perspective to this international issue for our Australian stakeholders.

