On 27 February 2020, a fully autonomous Australian robot, with no human on board, completed its mission by following a pre-programmed route under remote supervision. Australia also won the silver medal at the 2021 "Robot Olympics". In that event, the Australian robot used AI to autonomously explore, map, and discover models representing lost or injured people, suspicious backpacks, and phones while navigating harsh conditions. Clearly, Australia is at the forefront of robotics, autonomous systems, and AI research and development. Australia intends to use AI for security and defence applications and is also developing governance structures in line with Australian values, standards, and ethical and legal frameworks.
Kate Devitt and Damian Copeland discuss this in their research paper "Australia's Approach to AI Governance in Security and Defence", which forms the basis of the following text.
Importance of this Research
AI has enormous potential in security and defence applications, but it could also threaten human rights in warfare and poses ethical and legal risks. This research paper outlines Australia's approach to the use of AI in security and defence.
The paper notes that Australia's Department of Defence recognises AI as a priority for future development. It also defines AI and explains why artificial intelligence is needed for Australian defence.
Use of AI in Defense
According to the Australian government, Australia is focusing on AI to build capability in robotics and autonomous systems, precision-guided munitions, hypersonic weapons, integrated air and missile defence systems, space and information warfare, and cyber capabilities.
AI Governance in Australia
AI Ethics Principles
The research paper also discusses in detail:
- Standards Australia's Artificial Intelligence Standards Roadmap: Making Australia's Voice Heard, human rights, and Australia's AI Action Plan,
- AI governance in Defence and ethical AI statements across the Australian Navy, Army, and Air Force,
- Human governance in Defence,
- Ethics in Australian cybersecurity and intelligence, and the Framework for Ethical AI in Defence, and
- The Defence Data Strategy.
Concerning the use of AI, it is helpful to establish:
- Responsibility: who is responsible for AI?
- Governance: how is AI controlled?
- Trust: how can AI be trusted?
- Law: how can AI be used lawfully?
- Traceability: how are the actions of AI recorded?
AI has tremendous potential for security and defence applications. Nations can use it to counter terrorism, manage border conflicts, and save the lives of armed-forces personnel. Australia has identified how critical AI is to defence and has begun building a framework around its application.
In the words of the researchers:
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines and frameworks in Security and Defence (Department of Defence 2021a; Attorney-General’s Department 2020). Australia is committed to OECD’s values-based principles for the responsible stewardship of trustworthy AI as well as adopting a set of National AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published ‘A Method for Ethical AI in Defence’ (MEAID) technical report which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI.
Source: Kate Devitt & Damian Copeland’s “Australia’s Approach to AI Governance in Security and Defence” https://arxiv.org/pdf/2112.01252.pdf