The discipline that makes RPL defensible

A lot of RPL decisions, in a lot of RTOs, get made on a kind of professional gut feel. The trainer reads the folder, has a chat with the candidate, and forms an impression. The impression might be right. The candidate might be perfectly competent. But “I had a feeling about this one” is not something you can put in front of an auditor, and it’s not something the candidate can rely on either.

A quick note before I go further. I am not a VET trainer. I cofounded Red Velvet AI, which is building a platform for RPL workflows, and that’s what brought me to this question. If you read part 1, you’ll already know the foundations: RPL is assessment, evidence comes in kinds, and a thick folder is not the same as strong evidence. This piece is about the move that turns sorted evidence into a decision that holds up.

Triangulation, in plain language

Here’s the trick that ties all this together. It’s called triangulation, and once you have it, you’ll use it on every RPL decision you ever make. It’s also the thing that makes your judgement defensible if anyone questions it later.

Triangulation is a simple idea. Don’t trust one piece of evidence on its own. Confirm the same fact from more than one angle. If three different things, coming from three different places, all point the same way, your decision holds up. If only one thing points that way, it doesn’t.

This is exactly the discipline RPL needs. Gut feel, however experienced, can't be put in front of an auditor, and the candidate can't rely on it either. Triangulation gives you something better. When you can say "I confirmed this competency across source, method, and time, and here's how the evidence lined up," your decision becomes auditable. It can be reviewed. It can be repeated by another assessor. It can be defended. That's the difference between an opinion and an assessment.

There are three angles you can confirm a claim from.

Source. Who says it’s true? The candidate is one source. A supervisor is another. A client is a third. A licence record is a fourth. One source is a story. Two independent sources make it verifiable. Three make it solid.

Method. How is it shown? A work sample is one method. A direct observation is another. A competency conversation is a third. A third-party report is a fourth. A practical demonstration is a fifth. The same skill, shown two or three different ways, is much harder to fake and much easier to defend.

Time. When was it shown? A project from last year tells you about then. A pattern of work across five years tells you about consistency. A demonstration today tells you the skill is still in their hands.

You don’t need all three angles for every single claim. Two is usually enough. But if the only angle you’ve got is one source, told one way, from one moment in time, you should be uneasy.

Here’s what this looks like in practice. Say you’ve got a hospitality candidate applying for RPL in SITXFSA005 Use hygienic practices for food safety. The unit asks them to demonstrate hygienic food handling on at least three occasions. They give you their food safety supervisor certificate (supplementary). Their boss writes you a third-party report (indirect). They give you dated kitchen audit records they led (direct, historical). And you sit them down for a competency conversation about how they actually handle a contamination scare (current).

None of those four things on their own is enough. All four together, from different sources, captured different ways, drawn from different points in time, gives you a defensible decision. The judgement holds because it’s held up by independent points.
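To make the rule concrete, here's a minimal sketch in Python. The `Evidence` record and the labels are my own invention for illustration, not any real platform's data model; the check simply counts how many independent values each angle has.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    source: str   # who says it's true
    method: str   # how it's shown
    period: str   # when it was shown

def triangulated(items, min_angles=2):
    """A claim holds when at least `min_angles` of the three angles
    (source, method, time) each have two or more independent values."""
    angles = {
        "source": {e.source for e in items},
        "method": {e.method for e in items},
        "time":   {e.period for e in items},
    }
    covered = [name for name, values in angles.items() if len(values) >= 2]
    return len(covered) >= min_angles, covered

# The SITXFSA005 folder from the example above, with made-up labels:
folder = [
    Evidence("food safety supervisor certificate", "registry",   "document",     "historical"),
    Evidence("third-party report",                 "supervisor", "report",       "recent"),
    Evidence("dated kitchen audit records",        "candidate",  "work sample",  "historical"),
    Evidence("competency conversation",            "candidate",  "conversation", "current"),
]

ok, covered = triangulated(folder)
# ok is True: all three angles are covered by independent points.
```

A single certificate, run through the same check, covers no angle at all, which is the unease the text describes: one source, one method, one moment in time.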

Triangulation does three things at once. It protects the candidate, because no single weak document can sink a real claim. It protects you, because your decision is sitting on more than one leg if anyone questions it later. And it protects the qualification, because the certificate at the end actually means something.

Verification: how to actually check

So you’ve got the folder. You’ve read the unit. You’ve sorted the evidence. Now what?

For every important claim the candidate is making, ask yourself: what’s my second source? Then ask: what’s my second method, or my second moment in time? If you can answer those questions, you’re triangulating without needing a textbook.

The strongest single piece of evidence you can collect is direct observation: you, watching the candidate do the task, under conditions you’ve set. In other professional sectors that have been doing competency assessment for decades, like medical and nursing education, direct observation has long been treated as the closest thing you can get to actually seeing the skill in action. There’s a good reason for that. What people say they do and what they actually do are often different things. Observation closes that gap.

If your RTO has the right facilities (a workshop, a simulated kitchen, a children’s services training space, a clinical lab), the best move is usually to bring the candidate in and run a structured assessment against the unit. You see the work. You control the conditions. Almost every “is this really their work?” question disappears in a single session. Where this is available, use it.

But here’s where it gets real. Most RPL candidates can’t easily get to your RTO. They live remote, they work shifts, they can’t take the time off, or your RTO doesn’t have the right facility for the unit they’re going for. You have to be flexible without lowering the bar.

The next-best option is workplace observation. You go to them, or you set up a video call to their actual workplace and watch them work, with their employer or supervisor on hand. This keeps the directness of the observation and adds the realism of their actual work environment, which is often a strength rather than a compromise.

If even that’s not possible, you don’t fall back on paperwork. You set the candidate a structured challenge task. They can record themselves doing it, in their own garage, on their own site, in their own kitchen, against a brief you’ve designed. They can be live-streamed doing it with you watching in real time. Either way, you follow it up with a competency conversation: you ask them to walk you through what they did, why they did it that way, and what they would do differently in a different situation.

This last option is not a workaround. It’s a legitimate form of direct evidence collected through a different method. It’s exactly what flexibility in the principles of assessment was designed for. Recent Australian research on workplace-based assessment in vocational training found that when face-to-face observation was compared with video and remote modalities, the educational value was comparable across modalities, as long as the feedback and structure were strong. What really drove the quality of the assessment was the feedback being focused, specific, and easy to act on, more than the format itself.

So the hierarchy, in plain order:

  1. Workshop observation, with you running a structured task.
  2. Workplace observation, in the candidate’s own environment.
  3. Recorded or live-streamed demonstration with a structured brief, followed by a competency conversation.

Each step down asks more from your verification process and more from the triangulation around the evidence. None of them is paperwork. All of them are real assessment.

What about AI?

You may have noticed that polished documents, reflective statements, and even believable supervisor letters are easier to produce than they used to be. AI has closed that gap fast. A faked or AI-assisted document might pass a quick read.

The answer isn’t to become a forensic detective. You’re an assessor, not an investigator. The answer is triangulation. A faked document can fool one read. It can’t survive a second source, a different method, and a current demonstration that all have to line up with it. The candidate who’s actually competent will breeze through verification. The candidate who isn’t, won’t. You don’t need to catch anyone out. You just need to ask for evidence from more than one angle.

When the candidate is great but the paperwork is poor

This happens often, and it’s where being a good assessor actually matters most. You’ll have candidates who are clearly competent, with twenty years of practice behind them, who can’t produce a tidy folder because they’ve never had to.

Take a candidate applying for CHCAGE011 Provide support to people living with dementia. Twenty years in aged care. The unit asks them to provide support to two different people living with dementia, using person-centred care, tailored communication, activities suited to the person’s needs, and strategies to handle changed behaviour. They probably can’t give you a folder. They never wrote one.

That doesn’t mean they fail. It means you build the picture differently. A workplace observation. A third-party report from a clinical supervisor. A competency conversation focused on real changed-behaviour scenarios you put to them. A current demonstration. Together those build a triangulated picture that holds up, and the candidate didn’t have to produce a folder they were never going to have.

This is what flexibility actually means in the principles of assessment. It doesn’t mean lowering the bar. It means giving the candidate a real chance to clear it.

RPL is not paperwork. It is proof.

You can do this

If you’re new to RPL and you’ve read this far, here’s the takeaway. Read the unit first. Sort the evidence into kinds. Triangulate across source, method, and time. Use direct observation where you can, and the next-best method where you can’t. Trust your assessment instincts: you already know how to assess; RPL is just a different shape of the same work.

If you do those things, you can defend your decision to an auditor, to your candidate, and to yourself. That’s all anyone needs from RPL. Done well, it changes lives. Workers who’ve been doing the job for years finally get the qualification they’ve already earned. That’s worth doing properly.

Build it with us

I’m part of a team building a triangulated, auditable RPL workflow at Red Velvet AI, through a platform called VelvetPath. It’s designed for trainers who are doing exactly what you’re doing, often for the first time, and want a structured way to do it well. If you’d like to be part of designing how this works, especially if you’re an RTO or a trainer who’s been thrown into RPL and is figuring it out as you go, we’d love to talk.

If you’re interested in joining a working group with other trainers relevant to your training package, please reach out to partners@theredvelvet.ai. We read every email. After all, a good RPL decision is a lot like a good red velvet cake. It looks straightforward on the surface, but the layers are what make it hold up.
