What is AI?
Artificial intelligence (AI) tools are software applications that have been trained on data to produce outputs based on new inputs (text, numbers, audio, images, etc.). Some of these tools are designed to generate natural-language responses to a given prompt or question. Large language models (LLMs), such as early forms of ChatGPT, Claude and Microsoft Copilot, can interpret natural language and context, which allows them to generate outputs of their own. Large reasoning models (LRMs), such as DeepSeek-R1, extend LLMs with feedback loops that mimic logical thought processes. Small language models (SLMs) are compressed versions of LLMs and LRMs that can run locally on a desktop computer or handheld device.
Other common AI tools include:
- image and video generators/editors
- automated transcription, translation and captioning
- recommendation algorithms
- data matching and cleaning systems
- automated compliance and classification systems
Our AI Position
To guide our use of AI within Scott Bayley Evaluation Services, we have developed a position statement that aligns with our vision, mission, and principles:
"We assess, and where appropriate apply, AI in supporting human-led delivery of robust and ethical oversight for evaluators and social researchers." This position reinforces that our ethical services are human-led, and that any use of AI must be judged by the value it adds in supporting our clients and their stakeholders, and applied responsibly and ethically.
Principles
To support this position, we have overarching principles that guide the assessment and application of AI in our practices:
1. We are transparent in our responsible usage of AI. That includes publishing, updating, and communicating this policy to stakeholders.
2. We uphold the rights to privacy and confidentiality of client and project information provided as part of applications and amendments. Specifically:
a. We do not use client-confidential information (including the contents of applications or submissions) with tools that use that information to train AI models, or that retain or store any data outside of Australia (regardless of encryption).
b. The use of any AI tool must be approved by the Managing Director and be in accordance with this policy and related procedures for assessing, auditing, and monitoring tool usage.
3. We do not use AI tools as a substitute for human decision-making in ethical review processes. This position is consistent with the NHMRC's Policy on Use of Generative Artificial Intelligence in Grant Applications and Peer Review (2023), which forbids the use of generative AI (including but not limited to LLMs, LRMs, and SLMs) to assist peer reviewers in assessing applications.
This AI policy has been designed with reference to the Australian Government’s Voluntary AI Safety Standard, ISO/IEC 42001:2023, and the Australian Privacy Principles. It also ensures that our actions are consistent with relevant professional codes, including the Australian Evaluation Society Code of Ethical Conduct and the NHMRC National Statement on Ethical Conduct in Human Research.