Queensland AI Traffic Cameras Face Ethical Risk Review
A recent report has found that Queensland's transport department is not effectively identifying the ethical risks associated with the artificial intelligence used in its mobile phone and seatbelt detection cameras.
Several key issues were highlighted, including significant privacy concerns, a lack of adequate human oversight to ensure fair decisions, potential for inaccurate image recognition, and problems with how photos are handled and stored.
The Department of Transport and Main Roads (TMR) currently deploys AI in two main areas: its traffic cameras designed to catch mobile phone and seatbelt offences, and QChat, which is a virtual assistant for government employees. A new Queensland Audit Office (QAO) report has strongly urged TMR to conduct comprehensive, department-wide ethical risk assessments for both of these technologies.
AI's Role in Traffic Enforcement
The AI image recognition technology used in the mobile phone and seatbelt cameras acts as a filter, automatically discarding photos that are unlikely to show an offence. This process dramatically reduces the workload for human reviewers.
In 2024 alone, the AI system made 208.4 million assessments, which ultimately resulted in around 114,000 fines being issued. The report found that the AI reduced the volume of images needing review by an external vendor by 98.7 per cent, down to 2.7 million. Following this external review, the Queensland Revenue Office conducted a final review of 137,000 potential offences.
While the mobile phone and seatbelt technology (MPST) program does include some risk mitigation strategies, such as human review, the report states the department must assess the "completeness and effectiveness of these arrangements." The QAO noted that without a more thorough review, the department cannot be certain that all ethical risks are being properly identified and managed.
Beyond Traffic Cams: The QChat Problem
The audit also raised concerns about QChat, the internal virtual assistant. Risks include users interacting with the tool in inappropriate ways, potentially breaching ethical or legislative rules. There is also a danger that users could mistakenly upload protected information or be given misleading or inaccurate information by the AI.
To address these issues, the report recommended that TMR establish better monitoring controls and implement a more structured approach to staff training.
The Path Forward: TMR Responds
Overall, the QAO concluded that the department needs to improve its handling of AI risks. "It has taken initial steps, but lacks full visibility over AI systems in use," the report stated. The key recommendations were for the department to strengthen its oversight of ethical risks, update its governance arrangements, and implement proper assurance frameworks.
In a formal response, TMR Director-General Sally Stannard confirmed that the department had accepted all recommendations and was already working to implement them. "While TMR has implemented a range of controls to mitigate the ethical risks, we will ensure current processes are assessed against the requirements of the AI governance policy," she said.