Last Updated: May 2026 · By Ehtisham Saeed, RTO Marketing Specialist
“AI absolutely cannot be used to make assessment decisions; absolutely cannot be used to complete validation where qualified people are required.” That was ASQA at its March 2026 sector update. The marketing extension is implied and arrives in writing this year.
Here is the deal: AI in RTO marketing is no longer a future risk. It is the current risk. ASQA named it as one of its 2025-26 Risk Priorities, ran sector workshops across all eight capital cities in March and April 2026, and confirmed at the March 2026 update that revised Practice Guides arriving mid-2026 will explicitly cover non-compliant AI use. See also: What Is ASQA Marketing Compliance Monitoring? Continuous Self-Assurance Under the 2025 Standards.
If you are using AI to write course pages, generate testimonials, create student photos, target ads, or automate any decision about how prospective students experience your RTO, you are operating in an active regulatory hotspot. The rules are tightening. The penalties are real. The audits are coming. See also: What Is the RTO Student Journey? The 7 Stages From Awareness to Enrolment.
Bottom line: AI use does not lower the compliance bar. It raises it. The Information and Transparency Practice Guide, the Australian Consumer Law, the Privacy Act 1988, and the Australian Framework for Generative AI in Schools all apply simultaneously. RTOs that treat AI as an efficiency tool without governance attract risk across every one of these frameworks.
I’ll be direct about this. Most of the AI use I see in Australian RTO marketing right now is undocumented, ungoverned, and undefended under the 2025 Standards. ChatGPT writes the course page. The marketing team publishes. No human reviews the claims against the training package. No file note records why the AI output was approved. When ASQA asks how the RTO ensures information is accurate, the answer is “we wrote it” – which is now a half-truth at best. See also: RTO Marketing Compliance: The Information and Transparency Practice Guide Made Practical (Pillar 5).
Let us get into it. This post gives you the complete compliance framework for AI in RTO marketing under the 2025 Standards. The three categories of AI use, the specific risks under each, ASQA’s current position, the Australian Consumer Law overlay, the practical workflow that keeps you compliant, and what to expect from the revised Practice Guides arriving mid-2026. This is the forward-looking piece in the RTO marketing compliance cluster.
What Does AI Mean for RTO Marketing Compliance in 2026?
AI in RTO marketing covers any use of artificial intelligence tools in producing, targeting, distributing, or making decisions about marketing content. The category is wider than most RTO owners think.
Common examples in 2026 RTO operations. ChatGPT or Claude generating course page copy, blog posts, social posts, or email sequences. Midjourney or DALL-E creating student or facility imagery. Canva AI generating brochure layouts and ad creative. Meta Ads or Google Ads using AI-powered audience targeting. CRM tools using AI to score lead quality. SMS platforms using AI to write enrolment messages. Website builders generating course pages and SEO content automatically. See also: What Is an ASQA-Compliant RTO Website? Copy, Structure, and the 75-Plus Phrases to Avoid.
All of these fall under the AI compliance umbrella. All of them are subject to the same Standards, the same consumer law, and the same privacy obligations as content produced any other way. The 2025 framework cares about the outcome, not the production method.
What ASQA Has Said Publicly About AI
ASQA’s position has crystallised over the past twelve months. The regulator published its own AI Transparency Statement committing to human-in-the-loop decision-making, protection of personal and sensitive data, clear ethical standards, and annual public reporting on AI use. The implicit signal: if the regulator governs its own AI use this way, providers are expected to do the same.
The March 2026 ASQA Sector Workshop confirmed AI is a current performance assessment focus. The statement that “AI absolutely cannot be used to make assessment decisions; absolutely cannot be used to complete validation where qualified people are required” set the strongest tone yet. The revised Practice Guides v2 arriving mid-2026 will translate this into specific marketing and training expectations.
ASQA’s Corporate Plan 2025-26 reinforces this. The plan emphasises cracking down on fraudulent practices including non-genuine assessment evidence and non-authentic student work, and maintaining integrity as a core regulatory priority. The fraud frame matters for marketing because misleading marketing is a form of non-authentic representation.
What the Standards for RTOs 2025 Already Require
The Standards do not name AI specifically. They do not need to. The outcome-based framework already covers AI use through expectations that apply regardless of production method.
Outcome Standard 2 requires accurate, sufficient, and current information for prospective students before enrolment. AI-generated content that is inaccurate, insufficient, or outdated breaches this regardless of speed advantage or production cost. The Information and Transparency Practice Guide explicitly names eleven risks RTO marketing must mitigate. AI use can amplify every one of them.
Outcome Standard 4 covers governance. The 2025 framework expects leadership to actively oversee how the RTO operates, including technology choices. An RTO that has integrated AI into marketing without policy, oversight, or documented reasoning is not exercising the governance the Standards require.
The Compliance Standards Instrument 2025 sets specific obligations around marketing material accuracy, third-party arrangements, and consent. Every one of these obligations applies to AI output. The fact that AI produced the content does not change what the content must meet.
What External Frameworks Add
The Australian Consumer Law applies. The ACCC has prosecuted RTOs for misleading employment outcome marketing under section 29. Maximum penalties for corporations now reach $50 million per breach. AI-generated marketing content that overstates outcomes, fabricates testimonials, or implies guarantees attracts the same ACL exposure as human-written content. AI does not function as a legal shield.
The Privacy Act 1988 applies. From June 2025, individuals can sue an RTO directly in court for privacy breaches without proving financial damage. Serious privacy breaches now attract penalties up to $50 million, or 30 percent of annual turnover, or three times the benefit obtained, whichever is greater. AI tools that train on input data – including student information uploaded into consumer AI platforms – create direct privacy exposure.
The Australian Framework for Generative AI in Schools, endorsed by Education Ministers in June 2025, sets six principles directly transferable to VET: teaching and learning, human and social wellbeing, transparency, fairness, accountability, and privacy and security. RTOs that adopt these principles proactively will be well-positioned for the revised Practice Guides arriving mid-2026.
Why AI Use Is an ASQA 2025-26 Risk Priority
ASQA does not name risk priorities lightly. The 2025-26 priorities reflect where the regulator sees the most concentrated patterns of non-compliance, the highest student harm potential, and the largest gap between current practice and the Standards expectation. AI use sits in all three.
Concentrated Non-Compliance Patterns
The performance assessment data is informing the priority. From July 2025 to 30 January 2026, ASQA undertook 89 performance reviews with a 62 percent compliance rate. AI-related concerns surfaced in a meaningful share of those reviews, often as secondary findings to primary issues around assessment integrity, marketing accuracy, or third-party arrangements. See also: What Is RTO Reputation Management? Reviews, Outcomes, and Social Proof Under the 2025 Standards.
The pattern is consistent. RTOs adopt AI tools to solve operational pressure (write content faster, run more campaigns, automate enrolment flows) without governance catching up. The AI tool produces output. The output ships. The compliance issue surfaces later – sometimes in an ASQA assessment, sometimes in an ACCC investigation, sometimes in a Google review that goes viral.
High Student Harm Potential
AI-generated marketing content can mislead at scale faster than human-generated content. A human marketer producing one misleading course page reaches the audience for one page. An AI tool producing fifty course pages reaches the audience for fifty pages, often within hours.
The harm is compounded when AI is used to personalise. An AI-driven Facebook ad campaign can put a misleading employment outcome claim in front of thousands of targeted prospective students before any human review identifies the issue. The 2025 framework recognises this scale problem. The priority status reflects it.
Gap Between Practice and Expectation
ASQA’s March 2026 workshops revealed how wide the gap currently is. Many RTOs in the room had no AI policy. No documented oversight. No record of which tools were in use across the organisation. No file notes on AI-assisted decisions. Some had not considered that AI use needed any governance at all beyond a casual “we use ChatGPT to write some copy”.
The Standards expect documented reasoning behind every operational choice. The framework I covered in the RTO marketing compliance decision framework applies here directly. AI use is a decision. The decision needs documented reasoning. Most RTOs do not yet have that documentation.
What Being on the Priority List Means Practically
Three things change when an area is named a 2025-26 Risk Priority.
First, performance assessments target it specifically. Assessors will ask about AI use early in the assessment. They will request the AI policy. They will ask for examples of AI-assisted marketing decisions and the documented reasoning behind them. The questions are not theoretical.
Second, complaints in this area get faster regulatory attention. ASQA receives student complaints about misleading marketing. Complaints involving AI-generated content (especially fabricated testimonials or AI imagery of facilities) move to the top of the queue.
Third, enforcement action is more likely. ASQA’s enforcement team had 212 serious matters under investigation as of March 2026. Cases involving AI-generated fraud, fabricated evidence, or systematic misleading conduct attract enforcement attention faster than they did under the 2015 framework.
The Three Categories of AI Use in RTO Marketing
Not all AI use carries the same compliance profile. The framework divides AI in RTO marketing into three categories. Each has different risks, different obligations, and different documentation requirements.
Category 1: AI Generation
AI generation covers any use of AI to create marketing content. Course page copy, blog posts, social posts, ad creative, email sequences, brochure designs, FAQ content, student or facility imagery, testimonial scripts, and SEO content all fall under generation.
The risk profile is medium-to-high depending on what is generated and how it is reviewed. A blog post discussing general industry topics carries lower risk than a course page making specific qualification or fee claims. An image of a generic learning environment carries lower risk than an AI-generated image presented as your actual facility.
The core compliance question: does the AI output accurately represent the RTO and its training? If the AI-generated course page describes units that are not on your scope, the page is inaccurate regardless of how it was produced. If the AI-generated facility image shows equipment your RTO does not have, the image is misleading. AI is not a defence. The Information and Transparency Practice Guide applies to the output, not the process.
Category 2: AI Personalisation
AI personalisation covers any use of AI to target marketing at specific individuals or groups based on their personal data. Programmatic advertising, audience targeting, retargeting based on browsing behaviour, AI-driven email personalisation, chatbot interactions, and lookalike audience generation all fall under personalisation.
The risk profile is high. Personalisation always involves personal information processing. The Privacy Act 1988 applies. The Australian Privacy Principles apply. If the AI tool trains on input data, additional consent obligations apply. If the data is processed outside Australia, cross-border data flow provisions apply.
The core compliance question: do you have lawful authority to process the personal data the AI tool uses? Most RTOs running Facebook Ads or Google Ads have not formally documented this. The platforms handle some of the compliance through their own policies. The RTO remains responsible for the lawful basis under Australian law.
Category 3: AI Automation
AI automation covers any use of AI to make decisions without human review. Auto-approving leads as enrolment-ready, AI-scoring assessment evidence, automated content publication, chatbots handling student enquiries autonomously, and AI systems generating personalised pricing all fall under automation.
The risk profile is high to very high. Automation removes the human-in-the-loop control that ASQA’s own AI Transparency Statement commits to. If AI is making decisions that affect a prospective student’s enrolment, fees, or course information, those decisions are subject to the Standards even though no human signed off on each one.
The core compliance question: who is accountable for the decision the AI made? Under the 2025 framework, accountability cannot be delegated to a tool. The CEO and the governance structure remain accountable. Automated decisions need documented oversight: the rules the AI follows, the human checkpoints in the workflow, the audit trail of decisions made, and the review cycle for tuning the system.
How the Categories Overlap in Real Operations
Most RTO AI deployments mix categories. A Facebook Ad campaign that uses AI-generated copy (Category 1), AI audience targeting (Category 2), and AI-automated bid management (Category 3) attracts compliance risk from all three.
The framework forces you to map each category before signing off the campaign. What was generated, by which tool, with what review. What personal data was used, on what lawful basis, with what consent. What decisions were automated, with what oversight, against what rules. Three questions, every campaign. The discipline is the protection.
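To make the three-question discipline operational, some teams capture the answers in a structured record at campaign briefing. A minimal sketch in Python, assuming one record per campaign; the field names and the sign-off rule are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class CampaignAIRecord:
    """One campaign mapped against the three AI categories before sign-off."""
    campaign: str
    # Category 1 - generation: asset -> (tool used, named human reviewer)
    generated_assets: dict = field(default_factory=dict)
    # Category 2 - personalisation: personal data item -> documented lawful basis
    personal_data_basis: dict = field(default_factory=dict)
    # Category 3 - automation: automated decision -> named human oversight point
    automated_decisions: dict = field(default_factory=dict)

def sign_off_gaps(record: CampaignAIRecord) -> list[str]:
    """Return every undocumented answer; an empty list means the three
    questions are answered and the campaign can go to sign-off."""
    gaps = []
    for asset, (tool, reviewer) in record.generated_assets.items():
        if not reviewer:
            gaps.append(f"generated asset '{asset}' ({tool}): no named reviewer")
    for item, basis in record.personal_data_basis.items():
        if not basis:
            gaps.append(f"personal data '{item}': no documented lawful basis")
    for decision, oversight in record.automated_decisions.items():
        if not oversight:
            gaps.append(f"automated decision '{decision}': no oversight point")
    return gaps
```

The tooling is not the point. A spreadsheet with the same fields does the same job; what matters is that no campaign ships with a blank answer.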
AI Generation Risks: When Speed Becomes a Compliance Hazard
AI generation is where most RTOs first adopt AI in marketing. It is also where the most common compliance issues originate. Speed is real. Accuracy is harder. The compliance gap is where the regulator looks.
The Hallucination Problem in RTO Context
Large language models hallucinate. They confidently produce content that is plausible-sounding but inaccurate. In RTO marketing, hallucination shows up as:
Course descriptions referencing units of competency that do not exist or are not on your scope. Qualification durations that do not match your Training and Assessment Strategy. Pathway claims about further study or employment that overstate realistic outcomes. Statistical claims (graduation rates, employer satisfaction, employment outcomes) that the AI made up based on plausible patterns. Funding eligibility statements that do not match the current state of Smart and Skilled, Skills First, or other state programs.
The AI does not flag these as uncertain. It presents them confidently. A marketing team that publishes AI output without expert review propagates the hallucinations across the website. Each instance is a compliance breach under the Information and Transparency Practice Guide.
The Practice Guide is explicit about the risks of inaccurate information. The eleven risks I covered in the Practice Guide explainer apply directly to AI-generated content. Risk one (incorrect course details), risk two (inaccurate fee information), risk five (overstating employment outcomes), and risk six (misrepresenting facilities or resources) are particular hallucination targets.
The Currency Problem
AI models have training data cut-off dates. The model does not know what changed last quarter. It does not know which qualifications were superseded or removed. It does not know which funding programs ended. It does not know which units changed their assessment requirements.
An RTO that publishes AI-generated content without currency review can find itself describing qualifications that no longer exist, advertising funding that has ended, or claiming entry pathways that are no longer valid. The 2025 Standards require ongoing review of marketing accuracy. AI generation makes the currency requirement harder, not easier, because the production speed outpaces the review cycle.
The fix is structural. Every piece of AI-generated content runs through a currency check before publication. Training products against the National Register at training.gov.au. Funding programs against the current contract or program guidelines. Statistical claims against current data sources. The check is part of the workflow, not optional.
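The training-product half of that check can be partly automated. A minimal sketch, assuming the RTO keeps a plain-text export of its current scope (one training product code per line, refreshed from training.gov.au); the file name and the deliberately loose code pattern are illustrative assumptions.

```python
import re
from pathlib import Path

# Training product codes look like "CHC33021" or "BSBWHS332X".
# This is a loose, illustrative pattern, not an official specification.
CODE_PATTERN = re.compile(r"\b[A-Z]{3,7}\d{3,5}[A-Z]?\b")

def currency_check(page_text: str, scope_file: str = "current_scope.txt") -> list[str]:
    """Flag training product codes cited on a page that are not in the
    RTO's current scope export. Every flag needs human review pre-publication."""
    on_scope = set(Path(scope_file).read_text().split())
    cited = set(CODE_PATTERN.findall(page_text))
    return sorted(cited - on_scope)
```

Funding program claims and statistical claims still need the human check. The script only narrows where the human looks first.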
The Voice and Specificity Problem
AI-generated content has a generic quality that experienced readers detect quickly. Marketing copy that could describe any RTO. Course pages that read like every other course page. Blog posts that recycle the same framings.
The compliance issue is not the genericness itself. The compliance issue is what gets papered over by generic language. Specific entry requirements get smoothed into “good English skills required” (which the 2025 Standards now consider insufficient pre-enrolment disclosure under the LLN disclosure expectations). Specific fee structures get simplified into “affordable pricing” (which fails the transparency expectation). Specific work placement requirements disappear into “industry-relevant practical experience”.
The fix is enrichment. AI-generated content gets a specificity review before publication. Generic claims either get replaced with specific facts or removed entirely. The 2025 framework rewards specificity. AI generation often produces the opposite.
The Image Generation Problem
AI-generated imagery is a separate risk category. Midjourney, DALL-E, Stable Diffusion, and similar tools can produce photorealistic images of training environments, students, or workplace settings that do not exist.
The Practice Guide is explicit. Risk six names “using images of facilities or resources which do not accurately depict those used by your RTO” as a compliance risk. An AI-generated image of a state-of-the-art simulation lab presented as your training environment, when your actual training environment is a community hall with folding tables, breaches the expectation regardless of how convincing the image is.
AI-generated images of students raise additional risks. A fabricated student photo presented as a real student testimonial breaches Australian Consumer Law section 29 (false representations about endorsements). An AI image presented as a real graduate breaches the genuineness expectation under section 18 (misleading or deceptive conduct).
The rule is straightforward. AI-generated imagery used in RTO marketing must represent something the RTO actually has, does, or has done. Generic stock-style AI images for decoration may be acceptable. Specific claims attached to AI imagery are not. RTO Scanner can flag where your course pages combine AI-generated imagery with specific facility or outcome claims, surfacing the highest-risk combinations for review first.
AI Personalisation Risks: Privacy, Consent, and the $50M Penalty
AI personalisation operates in privacy law territory. The Privacy Act 1988 reforms that took effect in June 2025 transformed the penalty landscape. RTOs running AI-personalised marketing without a privacy framework face the highest dollar exposure of any AI use category.
The New Privacy Penalty Regime
From June 2025, the Privacy Act 1988 reforms introduced direct rights of action for individuals. Previously, the Office of the Australian Information Commissioner (OAIC) was the primary enforcement path. Now individuals can sue an RTO directly in court for privacy breaches without proving financial damage.
Serious privacy breaches attract penalties up to $50 million per incident, or 30 percent of annual turnover, or three times the benefit obtained from the breach, whichever is greater. The penalty regime applies to organisations of all sizes. A small RTO faces the same maximum exposure as a large one.
AI personalisation triggers privacy obligations because it always involves personal information processing. Targeting an ad to a 35-year-old woman in Brisbane interested in aged care training requires processing personal information about that person’s age, gender, location, and interests. The legal question is whether the RTO has lawful authority to process this information.
The Australian Privacy Principles Applied to AI Marketing
Thirteen Australian Privacy Principles apply. The most relevant for AI personalisation in RTO marketing:
APP 1 (open and transparent management of personal information). Your RTO needs a privacy policy that explicitly describes AI use in marketing. Most RTO privacy policies do not currently address this. The full APP guidance from the Office of the Australian Information Commissioner is at oaic.gov.au.
APP 3 (collection of solicited personal information). You can only collect personal information that is reasonably necessary for your functions. AI personalisation that collects extensive data for fine-tuned targeting may exceed the necessity threshold.
APP 6 (use or disclosure of personal information). Personal information can only be used for the primary purpose of collection or a permitted secondary purpose. Using enrolment enquiry data to train AI models for marketing is a secondary purpose that often requires additional consent.
APP 8 (cross-border disclosure of personal information). If your AI tool processes data outside Australia (most consumer AI tools do), you remain accountable for the overseas recipient’s handling of the data. The legal liability does not transfer to the AI vendor.
APP 11 (security of personal information). Personal information uploaded to AI tools must be protected. Uploading student records into ChatGPT to draft personalised emails creates a security breach if the AI tool retains or trains on the data.
The Consumer AI Tools Problem
Most RTOs using AI in marketing are using consumer-grade tools: ChatGPT (the free or Plus version), Claude (basic version), DALL-E, Midjourney. These tools have different data handling profiles than enterprise versions.
Consumer AI tools typically retain input data and may use it for model training. Uploading student information, internal compliance data, or commercially sensitive marketing strategy to a consumer AI tool creates direct exposure. The data leaves your control. You cannot guarantee how it will be used.
The 2026 Smart and Safe RTO guidance is clear. Generic prompts using anonymised inputs are acceptable: “write a professional email template for course enquiry follow-up”. Specific prompts using personal information are not: “write a follow-up to Sarah Thompson, mobile 0412345678, who enquired about CHC33021 on 5 May”.
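A redaction pass can enforce part of that guidance before a prompt leaves your systems. A minimal sketch; the regex patterns are illustrative and catch only phone numbers and email addresses, so names and other identifiers still require a human check or, better, templated prompts with no real data at all.

```python
import re

# Illustrative patterns only: Australian mobile numbers and email addresses.
PHONE = re.compile(r"(?:\+?61\s?|0)4\d{2}[ -]?\d{3}[ -]?\d{3}")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def sanitise_prompt(prompt: str) -> str:
    """Redact phone numbers and email addresses before a prompt reaches a
    consumer AI tool. Names still leak: template the prompt instead."""
    prompt = PHONE.sub("[PHONE REDACTED]", prompt)
    return EMAIL.sub("[EMAIL REDACTED]", prompt)

# The compliant pattern remains an anonymised template, not redacted PII:
safe = sanitise_prompt("write a follow-up email template for a CHC33021 enquiry")
```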
Enterprise-licensed AI tools (Microsoft Copilot with enterprise data protection, Grammarly Business, dedicated educational AI platforms) typically provide contractual data protections including no model training on customer data and Australian or controlled data residency. The compliance posture is significantly stronger.
The Disclosure Question
The Australian Framework for Generative AI in Schools includes transparency as one of its six principles. RTOs adopting equivalent principles need to consider when AI use should be disclosed to prospective students.
The current regulatory position is that disclosure is not yet mandatory for AI-generated marketing content. The Practice Guide revisions arriving mid-2026 may change this. The current best practice for RTOs serious about 2026 compliance:
Disclose AI use where consumers would expect to see human authorship. Testimonials are an obvious case: if a testimonial is AI-generated rather than from a real student, this needs to be disclosed or the testimonial removed entirely. Student photos in marketing are another case: if an image is AI-generated rather than a real graduate, this needs to be transparent.
Do not disclose AI use where consumers would not reasonably expect specific human authorship. Generic blog posts, FAQ content, and template emails do not need AI disclosure under current Australian law, provided the content is accurate.
AI Automation Risks: Decisions Without Human Review
AI automation is the third category and the highest risk profile. Automation removes the human checkpoint that the 2025 Standards expect. The compliance question shifts from “is the output accurate” to “who is accountable for the decision”.
The Accountability Question Under the 2025 Standards
The 2025 Standards put accountability on the CEO and the governance structure. The framework I covered in the decision framework post applies here. Every operational decision needs documented reasoning. The CEO test asks whether the decision can be defended in fifteen seconds without consulting notes.
Automated decisions are still decisions. The fact that an AI system made the call does not transfer accountability to the system. The CEO and the governance structure remain accountable. The Standards do not recognise “the AI decided” as a defence.
This creates a specific governance requirement. Every AI automation in marketing needs a documented decision logic, a documented review cycle, and a documented human accountability point. If you cannot point to who would defend the system’s decisions in an ASQA assessment, the system is undefended.
Common AI Automation in RTO Marketing
Programmatic advertising platforms (Google Ads automated bidding, Meta Ads campaign optimisation, LinkedIn automated audience expansion) make ongoing decisions about which audiences see which messages at which prices. The RTO sets initial parameters. The AI system optimises within those parameters in ways the RTO does not directly observe.
CRM lead scoring (HubSpot AI, Salesforce Einstein, similar tools) makes automated decisions about which leads to prioritise, which to follow up first, which to mark as low-priority. The decisions affect who gets a fast response and who waits.
Automated content publication (some website platforms now auto-publish AI-generated SEO content, FAQ answers, and category pages) makes decisions about what appears on the RTO’s website without human review of each piece.
Chatbot enrolment support (AI-driven chatbots that answer prospective student questions about course fit, eligibility, and enrolment) makes decisions about what information the prospective student receives. Inaccurate chatbot responses become Information and Transparency breaches the same way inaccurate course pages do.
What Human Oversight Actually Looks Like
Human-in-the-loop is the principle the ASQA AI Transparency Statement adopts. The practical translation for marketing automation has four components, with a minimal logging sketch after them. See also: What Is RTO Marketing? 9 Components Explained for 2026 (Standards Update).
First, the rules the AI follows are documented and reviewed. Bidding strategies, lead scoring criteria, publication thresholds, chatbot response logic – all documented in policy, reviewed quarterly, signed off by leadership.
Second, sample decisions are monitored. Not every automated decision needs review. A statistically meaningful sample needs review on a regular cadence. The reviewer is named. The findings are documented. Corrective actions follow.
Third, exception handling has a human path. High-value, high-risk, or unusual decisions get escalated for human review. The AI system flags exceptions. A named person reviews them. The decision and reasoning are documented.
Fourth, system performance is evaluated. The AI’s overall performance against the Standards outcomes is reviewed quarterly. If the system is producing patterns that drift from the outcomes the Standards require, the system gets tuned or retired.
RTOs that pass 2025 performance assessments while running AI automation have all four. RTOs that struggle usually have none of them documented even when the operations are happening informally.
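For components two and three, here is a minimal sketch of what the audit trail can look like in practice, assuming a JSON-lines log file; the file name, fields, and sample size are illustrative assumptions, not a mandated format.

```python
import json, random
from datetime import datetime, timezone

LOG_FILE = "automated_decisions.jsonl"  # illustrative location

def log_decision(system: str, decision: str, rule_version: str,
                 exception: bool = False) -> None:
    """Append one automated decision to the audit trail. Routine entries
    feed the sample review; exceptions go to a named human reviewer."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,              # e.g. "lead scoring"
        "decision": decision,          # what the AI decided
        "rule_version": rule_version,  # which documented rules applied
        "exception": exception,        # True -> escalate for human review
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def monthly_sample(n: int = 20) -> list[dict]:
    """Pull a random sample of logged decisions for the named reviewer."""
    with open(LOG_FILE) as f:
        entries = [json.loads(line) for line in f]
    return random.sample(entries, min(n, len(entries)))
```

The named reviewer works through the sample, documents findings, and reviews every entry flagged as an exception individually.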
ASQA’s Current Position on AI (March 2026 Update)
The March 2026 ASQA Sector Workshops were the clearest signal yet of the regulator’s evolving position. Workshops ran in all eight capital cities with multiple sessions selling out. The content focused on responsible AI use in VET delivery and compliance with the 2025 Standards.
The Hard Lines ASQA Has Drawn
Three statements at the March 2026 update established hard lines RTOs cannot cross.
Statement one. “AI absolutely cannot be used to make assessment decisions.” Assessment decisions require qualified human assessors. AI-generated competency judgements are not compliant under the 2025 Standards.
Statement two. “AI absolutely cannot be used to complete validation where qualified people are required.” Validation requires qualified validators meeting credential and independence requirements. AI cannot substitute for these humans.
Statement three. Revised Practice Guides version 2 arriving mid-2026 “will include non-compliant use of AI against the requirements”. Marketing compliance is one of the most exposed areas because the Practice Guide for Information and Transparency is among those being revised.
What ASQA Has Not Yet Stated Definitively
Several questions remain open pending the revised Practice Guides.
Whether AI-generated marketing content requires disclosure to prospective students. The current position is no formal requirement. The revised guides may introduce one.
Whether RTOs need a formal AI policy as a Compliance Standard requirement. The current position is policy is strongly recommended but not mandated. The revised guides may make it mandatory.
Whether specific AI tools are prohibited (consumer AI for student data, for example). The current position relies on Privacy Act 1988 obligations. The revised guides may add VET-specific prohibitions.
Whether AI use must be flagged in Training and Assessment Strategies. The current position is implicit through the broader TAS expectations. The revised guides may make AI-related disclosures explicit.
The Regulatory Direction Is Clear Even Where the Detail Is Not
The pattern across ASQA’s communications is consistent. AI use is acceptable where governance is structured, transparency is maintained, human oversight is preserved, and outcomes meet the Standards. AI use is non-compliant where governance is absent, transparency is missing, humans are removed from the loop, or outcomes fall below the Standards.
RTOs that build AI compliance around these principles now will be well-positioned regardless of what specific obligations the revised Practice Guides introduce. RTOs that wait for explicit rules will be playing catch-up when the guides are published.
The direction also aligns with broader regulatory trends. The Australian Government’s Policy for the Responsible Use of AI in Government emphasises transparency, human oversight, and accountability. The Australian Framework for Generative AI in Schools (June 2025) translates these principles into educational settings. The Privacy Act 1988 reforms (June 2025) reinforce the data protection foundation. The 2025 Standards for RTOs provide the VET-specific framework. Together, they form a coherent compliance posture for AI use in RTO marketing.
The ACL Question: AI Does Not Excuse Misleading Conduct
The Australian Consumer Law applies to AI-generated marketing content the same way it applies to human-written content. AI is not a legal shield. AI is not a defence. Misleading content is misleading regardless of how it was produced.
The Two ACL Provisions That Matter Most
Section 18 of the Australian Consumer Law prohibits misleading or deceptive conduct in trade or commerce. The provision is technology-neutral. AI-generated marketing that creates a misleading impression breaches section 18 regardless of the production method.
Section 29 of the Australian Consumer Law prohibits false or misleading representations about goods or services. The provision covers specific claim categories: testimonials, characteristics, quality, sponsorship or approval, price, and standards. AI-generated content that makes false representations in any of these categories breaches section 29.
The ACCC has prosecuted RTOs under both sections. Penalties can reach $50 million per breach for a corporation, or three times the benefit obtained, or 30 percent of annual turnover, whichever is greater. Section 18 in particular has the broadest scope because it does not require intent to mislead. The conduct itself is the breach.
AI-Specific ACL Risks
The disclosure question. AI use does not need to be disclosed in most marketing contexts. But non-disclosure must not itself create a misleading impression. If a testimonial is AI-generated and presented in a way that creates the impression a real student authored it, the missing disclosure becomes part of the misleading conduct.
The fabricated testimonial issue. AI-generated testimonials present as endorsements when no real endorser exists. Section 29(1)(e) explicitly prohibits false representations about endorsements. Fabricated AI testimonials are a clear breach. Removing them is the only safe path.
The synthesised graduate problem. AI-generated images of graduates presented as evidence of student outcomes create a false impression of an authentic graduate community. Section 29(1)(b) prohibits false representations about the standard or quality of services. Implying a graduate base that does not exist is a false representation about quality.
The deepfake risk. AI tools can now generate realistic video and voice content. RTOs that use deepfake-style imagery (a “happy graduate” video where the graduate is AI-generated) attract the same misleading conduct risks as fabricated testimonials. The technology is more sophisticated. The compliance principle is the same.
The Consumer Perception Test
The ACL test is consumer perception. Would a reasonable consumer be misled by the AI-generated content? The question is not whether the content is technically true. The question is whether the content creates an impression that turns out to be false.
AI image showing a state-of-the-art lab when the actual lab is basic: misleading impression about facilities. AI testimonial reading “great course, got a job in two weeks” when no such student exists: misleading impression about employment outcomes. AI-generated case study describing a fictional career success: misleading impression about typical results.
The framework recommendation: every AI-generated marketing asset gets reviewed against the consumer perception test. What impression does this create? Is the impression accurate? Can the impression be supported with evidence? If any of the three answers is uncertain, the asset is unsafe.
How to Build an AI-Compliant Marketing Workflow This Quarter
The framework is clear. Implementation is the test. Here is a realistic plan for a typical RTO to build an AI-compliant marketing workflow in twelve weeks, starting from wherever you are now.
Weeks 1-2: Audit Current State
Map every current AI tool in use across your RTO. Marketing tools. CRM tools. Website builders. Email tools. Social platforms. Ad platforms. The map needs to include the consumer versions of general AI tools your team uses informally: ChatGPT, Claude, Gemini, Copilot, Midjourney.
For each tool, document: what category of AI use (generation, personalisation, automation), what data inputs are involved, what data outputs are produced, whether the tool trains on customer data, where the data is processed (Australia or offshore), and which staff have access.
The map alone surfaces compliance issues. Most RTOs find tools in use that nobody at executive level knew about. Most find data flows that fall outside the privacy policy. Most find generated content with no review trail.
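One way to hold the per-tool record described above in a form that survives an audit request. A sketch, assuming one record per tool; the field names mirror the documentation list and are not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    tool: str                      # e.g. "ChatGPT Plus"
    category: str                  # "generation" | "personalisation" | "automation"
    data_inputs: str               # what goes in (prompts, lead data, ...)
    data_outputs: str              # what comes out (copy, scores, ...)
    trains_on_customer_data: bool  # per the vendor's current terms
    processed_in_australia: bool   # data residency per the vendor's terms
    staff_with_access: list[str]   # named users, not "the marketing team"
```

A spreadsheet with the same columns works equally well. Completeness of the fields matters more than the format.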
Weeks 3-4: Build the AI Marketing Policy
Write the AI policy. Keep it short – three to five pages, not thirty. The policy needs to cover:
Approved tools: which specific AI tools are approved for marketing use, in which versions (enterprise vs consumer), for which use cases.
Prohibited uses: what AI must never be used for in marketing. Generally: assessment decisions, fabricated testimonials, fabricated graduate imagery presented as real, processing student personal data on consumer tools.
Approval workflow: how new AI use cases get approved, who approves, what evidence is required.
Review obligations: what review every piece of AI-generated content goes through before publication, by whom, against what checklist.
Documentation requirements: what evidence is captured for each AI-assisted decision, how it is stored, how long it is retained.
Incident response: what happens when an AI-related compliance issue surfaces. Who is notified, what review is triggered, what corrective action follows.
The policy is signed by the CEO. The policy is reviewed annually. The policy is the foundation of every other compliance step.
Weeks 5-6: Update the Marketing Materials Register
The marketing materials register (which the RTO marketing checklist covers in detail) now needs AI tagging. For every active marketing material, capture:
Was AI involved in production. Which AI tool. Which category (generation, personalisation, automation). What review the AI output received. Who signed off. What date.
The register update is operational, not optional. ASQA assessments under the revised Practice Guides will ask for it. RTOs that maintain the AI flag in their materials register will be able to demonstrate the governance the Standards expect.
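A minimal sketch of the AI tagging applied to a CSV-backed register; the column names follow the capture list above and the example row is hypothetical, not ASQA’s required fields.

```python
import csv

AI_COLUMNS = ["material", "ai_involved", "ai_tool", "ai_category",
              "review_performed", "signed_off_by", "sign_off_date"]

def tag_material(path: str, row: dict) -> None:
    """Append one AI-tagged entry to the marketing materials register."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=AI_COLUMNS)
        if f.tell() == 0:  # new file: write the header first
            writer.writeheader()
        writer.writerow(row)

tag_material("materials_register.csv", {
    "material": "CHC33021 course page",
    "ai_involved": "yes",
    "ai_tool": "Claude (enterprise)",
    "ai_category": "generation",
    "review_performed": "accuracy and currency check against TAS and scope",
    "signed_off_by": "Marketing Manager",
    "sign_off_date": "2026-05-12",
})
```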
Weeks 7-8: Train the Team
The AI policy means nothing if the team does not know it exists. The training covers:
What the policy says. What tools are approved. What data must never be uploaded to AI tools. What review is required before publication. What to do when something feels off.
The training is recorded. Attendance is documented. New starters complete the training within their first month. Refresher training runs every six months.
The 2026 ASQA workshops emphasised this point directly: “AI compliance starts with your people, not your tools.”
Weeks 9-10: Run the First AI Compliance Audit
Pull a sample of recent marketing materials produced or processed with AI. Apply the AI policy to each. Check:
Was the AI use within approved tools and approved cases. Was the review workflow followed. Is the documentation complete. Does the output meet the Standards (accuracy, currency, transparency, outcome alignment).
The audit produces findings. Some materials will need correction. Some workflows will need tightening. Some training gaps will surface. All of it gets documented in the audit report.
The audit report is signed by the CEO. The corrective actions are tracked to completion. The next audit is diaried.
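A minimal sketch of the sample pull and the four checks, assuming the CSV register from the earlier step; the findings format is illustrative, and the answers come from a human auditor, not the script.

```python
import csv, random

CHECKS = [
    "Was the AI use within approved tools and approved cases?",
    "Was the review workflow followed?",
    "Is the documentation complete?",
    "Does the output meet the Standards (accuracy, currency, transparency)?",
]

def run_audit(register_path: str, sample_size: int = 10) -> list[dict]:
    """Sample AI-tagged materials from the register and produce one blank
    finding per check, for a human auditor to complete and sign off."""
    with open(register_path, newline="") as f:
        ai_rows = [r for r in csv.DictReader(f) if r.get("ai_involved") == "yes"]
    sample = random.sample(ai_rows, min(sample_size, len(ai_rows)))
    return [
        {"material": row["material"], "check": question, "finding": ""}
        for row in sample
        for question in CHECKS
    ]
```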
Weeks 11-12: Establish the Ongoing Cycle
The AI compliance work is not a project. It is a cycle. The final two weeks of the implementation establish the recurring rhythm.
Monthly: AI tool inventory review (any new tools, any decommissioned tools, any changed data handling).
Quarterly: AI compliance audit (sample-based review of AI-assisted marketing, against the policy).
Quarterly: Materials register review (AI flags current, signoffs current).
Annually: AI policy review (does the policy still reflect current operations and current regulatory expectations).
Event-triggered: new ASQA guidance, new ACCC enforcement action, new tool introduction, complaint or incident involving AI-generated content.
The cycle is what produces ongoing compliance. The project produces the starting state. The cycle keeps it running.
What to Expect from the Revised Practice Guides V2
ASQA confirmed at the March 2026 update that revised Practice Guides version 2 are arriving mid-2026 and will include explicit treatment of non-compliant AI use. Based on what ASQA has signalled publicly and the broader Australian AI regulatory direction, here is what RTOs can reasonably expect.
Specific AI Use Cases Likely to Be Named Non-Compliant
AI used to make assessment decisions. Already confirmed at the March update. Will likely be formalised in the validation and assessment Practice Guides with specific language.
AI used to complete validation where qualified humans are required. Already confirmed. Will likely be formalised in the validation Practice Guide with specific obligations.
Fabricated testimonials or student photos presented as real. Implicit under the current Information and Transparency Practice Guide. Likely to be made explicit with specific examples.
Processing of student personal information on consumer AI tools. Implicit under the Privacy Act 1988. May be made explicit in the VET context through cross-referencing to APP obligations.
New Documentation Obligations Likely to Appear
AI policy as a Compliance Standard requirement. Currently strongly recommended. The revisions may make a documented AI policy mandatory for RTOs using AI in any non-trivial way.
AI use disclosure in Training and Assessment Strategies. Currently implicit. The revisions may require RTOs to declare AI involvement in training delivery and assessment within their TAS documents.
AI-assisted decisions in the marketing materials register. Currently a best practice under the materials register expectation. The revisions may make AI tagging mandatory.
Human oversight evidence for AI automation. Currently implicit under governance obligations. The revisions may specify what human-in-the-loop evidence is required for AI-driven systems.
New Risk Areas Likely to Be Named
AI-generated content disclosure to prospective students. Currently not required. The revisions may introduce a disclosure threshold for content that would reasonably be presented as authored by humans.
AI use in international student recruitment. Currently regulated through general international student protections. The revisions may add specific AI-related protections given the higher vulnerability of international student cohorts to misleading marketing.
AI-driven automation in admissions and enrolment. Currently regulated through general enrolment process obligations. The revisions may add specific accountability requirements for automated decisions affecting prospective students.
How to Prepare Without Over-Committing
The revisions are not yet published. Over-engineering for predicted requirements wastes effort. Under-engineering leaves you exposed when the guides arrive. The pragmatic position:
Build the foundations now: AI policy, marketing materials register with AI flags, AI compliance audit cycle. These will be required regardless of the specific language in the revised guides.
Watch ASQA communications carefully through mid-2026. The first draft of the revised guides will likely circulate for sector consultation before final publication. Read the drafts. Submit feedback where appropriate. Track the changes from draft to final.
Update the AI policy when the guides are published. The foundation does not need to change. The specific obligations and prohibited cases will need to be reflected. A one-page policy update typically covers the changes.
RTOs that have the foundations in place by mid-2026 will adapt to the revised guides in weeks. RTOs that have not will be scrambling to build governance from scratch under regulatory pressure. The foundation work this quarter is the protection.
Frequently Asked Questions About AI in RTO Marketing Compliance
What does AI mean for RTO marketing compliance in 2026?
AI in RTO marketing is now an explicit ASQA 2025-26 Risk Priority. The regulator has signalled that revised Practice Guides arriving mid-2026 will include non-compliant AI use against the Standards for RTOs 2025. The Australian Consumer Law, the Privacy Act 1988, and the Australian Framework for Generative AI in Schools all apply simultaneously. AI use does not lower the compliance bar. It raises it because production speed outpaces traditional review cycles and creates new privacy and accountability exposures.
Can RTOs use AI to write course pages and marketing copy?
Yes, with structured governance. AI-generated marketing content is governed by the same accuracy, transparency, and currency requirements as human-written content. The content must be reviewed against the training package, the current scope, and the funding programs the RTO offers before publication. The decision to publish AI-generated content needs documented reasoning. The Information and Transparency Practice Guide expectations apply to the output, not the production method.
What AI uses has ASQA explicitly prohibited?
At the March 2026 sector update, ASQA stated that AI absolutely cannot be used to make assessment decisions and absolutely cannot be used to complete validation where qualified people are required. Both prohibitions are confirmed and will be formalised in the revised Practice Guides arriving mid-2026. The marketing extension is implied through the Information and Transparency Practice Guide and through Australian Consumer Law obligations against misleading conduct.
Can RTOs use AI-generated images of students or facilities?
Only where the imagery represents what the RTO actually has, does, or has done. The Information and Transparency Practice Guide names “using images of facilities or resources which do not accurately depict those used by your RTO” as a compliance risk. AI-generated images of fictional state-of-the-art facilities, AI-generated student photos presented as real graduates, and AI imagery implying capabilities the RTO does not have all breach the expectation. Generic decorative AI imagery for blog posts may be acceptable.
Do testimonials generated by AI need to be disclosed?
AI-generated testimonials should not be published in any form that suggests they are from real students. Section 29 of the Australian Consumer Law prohibits false representations about endorsements. A fabricated AI testimonial is a false representation regardless of disclosure. The compliant path is to remove AI-generated testimonials entirely and replace them with consent-documented testimonials from actual students. Disclosure does not cure the underlying misleading conduct.
What is the privacy penalty for misusing AI with student data?
Serious privacy breaches under the Privacy Act 1988 now attract penalties up to $50 million per incident, or 30 percent of annual turnover, or three times the benefit obtained, whichever is greater. From June 2025, individuals can also sue an RTO directly in court without proving financial damage. Uploading identifiable student records into consumer AI tools that train on input data is a clear breach risk. Use enterprise AI tools with contractual data protections, and never input personal student information into general consumer AI platforms.
Does the RTO need a formal AI policy?
Strongly recommended now, likely to be required when revised Practice Guides arrive mid-2026. A formal AI policy covers approved tools, prohibited uses, approval workflows, review obligations, documentation requirements, and incident response. Keep it short, three to five pages. The CEO signs it. The policy is reviewed annually. Without a policy, you cannot demonstrate the governance the 2025 Standards expect for any operational area, AI included.
How should RTOs handle AI-powered ad targeting on Facebook or Google?
AI-powered ad targeting (programmatic advertising, automated bidding, lookalike audiences) is AI automation. It needs documented governance: the rules the platform follows on your behalf, the human oversight in your campaign management, the review cycle for performance against the Standards outcomes, and the privacy compliance for the personal data the platforms process. The platforms handle some of the compliance through their own policies. Your RTO remains accountable under Australian law.
What is the difference between consumer AI tools and enterprise AI tools for RTOs?
Consumer AI tools (free ChatGPT, basic Claude, public DALL-E) typically retain input data and may use it for model training. Uploading student information or commercially sensitive data creates direct privacy and confidentiality exposure. Enterprise AI tools (Microsoft Copilot with enterprise data protection, Grammarly Business, dedicated educational platforms) provide contractual data protections including no model training on customer data. For any AI use involving student data or commercially sensitive material, enterprise tools are the only compliant option.
How quickly should RTOs build AI marketing compliance?
This quarter. The revised Practice Guides arriving mid-2026 will introduce specific AI compliance obligations. ASQA performance assessments are already asking about AI use in marketing. RTOs that wait for explicit rules will be scrambling under regulatory pressure. The twelve-week implementation plan covers AI tool audit, AI policy development, materials register updates, team training, first compliance audit, and the ongoing cycle. The foundation work now is the protection when the revised guides arrive.
Where to Go From Here
You now have the complete compliance framework for AI in RTO marketing under the Standards for RTOs 2025. The three categories of AI use, the specific risks under each, ASQA’s current position, the Australian Consumer Law overlay, the twelve-week implementation plan, and what to expect from the revised Practice Guides arriving mid-2026.
Three steps to take this week.
First, audit your current AI use. Walk through every marketing tool, every CRM, every ad platform, every general AI tool your team uses informally. Map what is in use, who uses it, what data flows through it. The map alone surfaces issues you did not know existed.
Second, run a free RTO Scanner audit on your current website. The scanner identifies prohibited phrases, missing RTO codes, and training product accuracy issues regardless of whether the content was AI-generated or human-written. The output tells you where AI-generated content has produced compliance gaps.
Third, start drafting the AI policy. Three to five pages. Approved tools, prohibited uses, approval workflow, review obligations, documentation, incident response. The CEO signs it. Without the policy, every other AI compliance step floats.
Read the Rest of the Compliance Cluster
This post sits at the forward edge of the RTO marketing compliance cluster. The other posts cover the foundations.
The Information and Transparency Practice Guide explainer covers the source document and its eleven risks (every one of which AI can amplify). The prohibited phrases guide covers the seven categories of language ASQA flags (every one of which AI can produce automatically without review). The third-party arrangements guide covers partner content (including partners using AI on your behalf). The 27-point marketing checklist covers the operational review (now requiring AI tagging). The decision framework covers the strategic posture (AI decisions are decisions and need framework treatment).
Want Help Building AI Compliance?
The RTO marketing strategy service applies this framework to your specific operations. We audit current AI tool use, draft the AI policy, update the marketing materials register, train the team, and run the first AI compliance audit. By the end of the engagement, your RTO has the governance the revised Practice Guides will require, and your CEO can defend AI-related decisions at performance assessments.
Contact via ehtishamsaeed.com/contact.
AI compliance is not optional. The regulator named it a priority. The penalties are real. The revised guides are coming. The work to do now is the work that protects you when they arrive.
