
A note before I start. I’m a lawyer by training, not a trainer. Evidence is the thing I find genuinely interesting, and the more time I’ve spent around RPL the more I’ve realised how much there is to learn from the people who do this work every day. This post is one outsider’s attempt to think through some of what I’ve been seeing. The judgement calls belong to the trainers and assessors who actually know their candidates, their sectors, and their qualifications. I’m just sharing what’s been catching my attention.
Two qualifications. Both responsible for the safety of others. Both with formal hierarchies, codified procedures, and real consequences when something goes wrong.
You’d expect similar evidence at RPL. From what I’ve seen, there isn’t. The artefacts a Cert III in Early Childhood Education and Care candidate brings look almost nothing like what a Cert IV in Public Safety candidate brings. Different documents, different voices, different judgement calls for the assessor.
This is the second in a series I’m writing about evidence variation across training packages. The first looked at four qualifications side by side. This one compares two.
Why a lawyer cares about RPL
Coming at this from a legal background, what struck me first is that RPL is essentially evidence theory applied to human capability rather than to facts about the world. The same questions a court asks of evidence (is this real, does it prove what it claims, would a reasonable person be persuaded) are the questions a good RPL assessor is asking. The four rules used in VET (valid, sufficient, authentic, current) are close cousins of what a lawyer would call admissibility, weight, provenance, and currency.
What is different, and what I’m still learning, is that the evidence itself takes radically different shapes depending on the kind of work being recognised. That’s what this post is trying to think through.
Continuous work and episodic work
Some work happens continuously. The educator is with children over hours, days, months. Competence shows up in hundreds of small moments. Noticing. Documenting. Adjusting. Talking to a parent at pickup. Sitting on the floor for a block-play session.
Other work happens episodically. The incident leader spends most of their time training and waiting. Then an incident lands and they have hours or days to lead a response. Competence shows up in what they did during it. The brief they delivered. The decisions they made at 0830 and 1100. The debrief afterwards.
That distinction seems to shape a lot about the evidence each qualification needs.

What ECEC actually involves
CHC30125 requires 160 hours of work in a children’s education and care service regulated by ACECQA. Educators work inside the EYLF and the seven Quality Areas of the National Quality Standard.
Day to day, that means noticing a two year old who used to grab toys now offering them to peers, and writing it up as a learning story for the family at pickup. Adjusting tomorrow’s program because two children have started a sustained block-play project. Sending a note in the communication book. Writing a reflection at the end of the week about something you got wrong.
CHCECE038 is called Observe children to inform practice. CHCPRP003 requires reflection on and improvement of own practice. Quality Area 6 is collaborative partnerships with families and communities. Read the units and none of these look like skills you demonstrate once.
What ECEC evidence tends to look like
Learning stories the candidate has written. From what I’ve seen, twenty across different children and learning domains tends to land more compellingly than one or two.
Reflective journal entries. “Today I noticed I was rushing the morning routine and one of the children seemed unsettled. Next week I’m going to start the pack-away ten minutes earlier.” Two months of weekly entries seems to read more authentically than three polished pieces.
Family communication artefacts. Photos of the communication book with redactions. Email threads about transitions. Notes from a family conference where the candidate co-developed a plan for a child with an emerging speech delay.
Program planning documents the candidate contributed to, annotated to show where they suggested changes based on what they were seeing.
Photos and videos with consent. ECEC consent rules are stricter than those in aged care. Faces blurred. Parental consent required even where the centre approves.
A portfolio I’d expect to land thinly: a resume, two training certificates, a generic supervisor reference, a self-assessment. No reflective writing. No observations. The candidate may have worked in centres for years, but the evidence doesn’t yet show what the qualification asks for.
The other failure mode I keep hearing about is volume. Three years of program plans. Fifty learning stories. Every staff meeting agenda. Without mapping to the units, this seems to drown the assessment. The trainers I’ve spoken to talk about helping the candidate pick twenty documents rather than submit two hundred.

What Public Safety actually involves
PUA40422 Cert IV in Public Safety (Biosecurity Emergency Response Leadership) is a different kind of qualification. Core units like PUAOPE015 Conduct briefings and debriefings and PUAOPE020 Lead a crew read like a command structure.
The role leads responses to biosecurity emergencies. Livestock disease outbreaks. New pests in a region. Aquatic threats at a port. The leader supervises field teams, runs operations centres, contributes to the incident action plan, manages logistics, and signs off on the team’s work.
Most of the time it’s training and readiness. Then an incident lands and the leader is in charge under real pressure.
What Public Safety evidence tends to look like
Incident action plans the candidate authored or co-authored. Three IAPs across different incidents seems to cover significant ground.
Briefing and debriefing logs. The candidate stood in front of a crew, gave objectives, allocated resources, confirmed safety protocols, ran the debrief. Signed, with attendance lists and time stamps. As far as I can tell, that’s PUAOPE015 in evidence form.
After action reviews. These look superficially like ECEC reflection but they read differently. Collective rather than personal. Structured rather than narrative. Focused on operational improvement, not relational adjustment.
Crew leader logs. Time-stamped decisions. What was happening at 0830, what was decided, who was tasked, what changed by 1100.
Third party reports from higher in the hierarchy. Not a peer or a generic supervisor, but an incident controller or operations officer who observed the candidate leading. On letterhead. Specific examples.
A portfolio that would seem thin to me: long reflective writing about leadership philosophy, generic references, photos of training exercises, assurances about experience. The writing might be excellent. It just doesn’t seem to prove the candidate can lead a Level 2 incident.
One complication I find genuinely interesting. Many of the strongest Public Safety artefacts are sensitive. Some are classified. Some are commercially sensitive about affected industries. The candidate often genuinely cannot share them. From the trainers I’ve spoken to, the workarounds are redacted documents, formal attestations from senior officers, or workplace observation during a training exercise. This is the kind of thing experienced assessors mention easily but newer ones haven’t necessarily encountered yet.

Many ways to skin a cat
Worth saying clearly. There’s no single correct evidence package, even within one qualification. Different candidates bring different strengths. Different competencies surface differently in different sectors and different workplaces. A learning story that works for one candidate might not be the right anchor for another. A briefing log that proves PUAOPE015 for one candidate might not be the right document for someone whose role sits more on the planning side than the field side.
What an experienced trainer brings to RPL is the judgement to see what matters for this candidate, in this context, against these units. That judgement is qualification specific, sector specific, and candidate specific. It’s not something I have. It’s something the trainers I’ve met spend years developing.
The only thing I’d say with any confidence, watching from the outside, is that not all evidence is equal, and that the work of figuring out what counts as strong evidence for a particular candidate in a particular qualification is real work that deserves more support than it often gets.
Same rules, different evidence
Valid. Sufficient. Authentic. Current. Non-negotiable, written into the standards.
What seems to change is what satisfies them.
ECEC sufficiency seems to mean a portfolio showing consistent practice over weeks or months. Public Safety sufficiency seems to mean breadth across incident types and command roles. ECEC currency is recent application in a regulated service. Public Safety currency is recent application as a leader in actual incidents.
ECEC authenticity is the candidate’s own observations, reflections, and contributions. Voice seems to matter. A reflective journal that doesn’t sound like the candidate is something experienced assessors flag. Public Safety authenticity is the candidate’s own decisions, command, and debriefs. The signature and the time stamp matter.
An assessor who is expert in one of these qualifications isn’t necessarily ready to assess the other. The four rules translate. The evidence patterns don’t.
So what
If you’re an RTO running RPL across human services and emergency services, one evidence template seems unlikely to serve both well. One assessor capability framework seems unlikely to either. And a platform that assumes all qualifications have the same evidence shape would be missing what actually varies.
That’s the design problem we keep running into with VelvetPath. Different training packages have different evidence shapes. The platform’s job is to support assessors in seeing what’s robust and what’s missing for the specific qualification in front of them. Not to replace assessor judgement. To help assessors see patterns they haven’t yet had time to learn.
Build it with us
I’m building VelvetPath at Red Velvet AI with a small team. The aim is a structured, auditable RPL workflow for the trainers actually doing the work. If you’re new to RPL, the platform helps you do it well from the start. If you’ve been at it for years, it gives you the audit trail and consistency the existing tools never quite delivered.
We’re running working groups with trainers across the qualifications we cover. CHC, BSB, CPC, PUA. If you’d like to shape how the platform handles the evidence patterns specific to your training package, get in touch at partners@theredvelvet.ai.
A good RPL decision is a bit like a good red velvet cake. The surface looks simple. The layers are what hold it up.