
When AI Gets It Wrong: A Real Lesson in Judgement and Critical Thinking

  • Writer: Elaine Schillinger
  • 2 days ago
  • 4 min read

What a rainbow lorikeet taught me about AI, judgement and the questions we forget to ask.


The moment it started. A young lorikeet climbs a human leg in a backyard during extreme weather. Photo: Elaine Schillinger

During a period of extreme heat and storms, a young rainbow lorikeet landed at my feet in the backyard, climbed straight up my leg and looked at me as though I was supposed to know what to do next. One sibling had already died. Wildlife services were overwhelmed. So I did what most people do when they need reliable information quickly. I turned to AI, in this case ChatGPT, sharing photos and detailed observations over the course of two days, and asking whether the bird was ready to be returned to the wild.


The responses were detailed, calm and consistently reassuring. Based on what I was describing and showing, release was framed as appropriate. The bird looked like a fledgling. The advice made sense. Multiple sources appeared to agree. And yet something did not sit right.


What the AI could not see, and why critical thinking and judgement mattered


The bird over two days of care. Grey down still visible on neck, wings visibly short relative to body size. Photos: Elaine Schillinger


Looking at the bird carefully, several things did not add up. There was still grey down visible on the back of his neck. His wings and tail were visibly short relative to his body. He could not sustain flapping or balance independently. Rather than seeking cover or shelter, he kept climbing toward humans, which is not the behaviour of a bird ready to fend for itself.


These are the kinds of signals that require domain expertise to interpret correctly. And they are exactly the kinds of signals that get lost when advice is delivered with confidence rather than caution.

THE BIRD IN QUESTION

Slim, angular body. Short tail. Residual grey fluff still visible. Wings visibly short relative to the body. Not ready for release.

TRUE GROUND-READY FLEDGLING

Noticeably larger and heavier. Long primary feathers. Tail extending well past the body. Robust, solid appearance.


The AI was pattern-matching to the most common scenario, a healthy fledgling ready for release, and presenting its conclusions without flagging what it could not determine from photos alone. It had no mechanism for saying "I am less certain about this" or "this situation may require escalation." It filled the gaps with confidence, which is precisely where the risk lives.


When uncertainty exists and the cost of being wrong is high, confidence is not reassurance. It is a warning sign.

There is also something worth naming here: authority bias. When advice sounds calm, detailed and authoritative, it suppresses further questioning. This is especially true when you are stressed, when a living creature is involved and when multiple sources appear to agree. The reassurance itself becomes the problem.


What I had been feeding him made it worse


Short-term feeding with commercial lorikeet nectar temporarily stabilised the bird and increased his energy and begging behaviour, which created a false appearance of readiness. He seemed stronger. He seemed more alert. What the feeding masked was that he remained underweight, with insufficient muscle and wing development for ground survival. Weight gain, not hunger response, was the metric that mattered. And that was not something I could assess, or that AI could assess from a photograph.


The turning point


On the day he was due to be released, I contacted a second wildlife carer and sent photographs. The assessment was immediate. The bird was not a fledgling. He was a nestling, likely blown from the nest prematurely during the storms. He was not developmentally ready for release and would almost certainly not have survived overnight on the ground. He was transferred to a licensed wildlife rescue facility and spent several weeks in professional care before being properly released.


The decision that changed the outcome was not instinct alone. It was the willingness to pause, question and escalate rather than follow advice that sounded right.


What this reveals about AI and high-stakes decisions


This was not a failure of information. Everything I had access to was well-intentioned and largely accurate in a general sense. What failed was how uncertainty was handled. Specifically, it was not handled at all. The advice never said "I do not know." It never said "escalate." It never said "this is the limit of what can be determined from a photo."

AI excels at pattern recognition, synthesising known information and explaining likely scenarios. What it must not replace is domain expertise, the responsibility to escalate when doubt exists, and accountability for outcomes.


The most serious issue was confidence without guardrails. In high-risk situations, confidence is not reassurance. It is a warning sign.


The questions I should have asked


The questions I was asking were reasonable but invited reassurance rather than rigour. "Is this a fledgling?" "Is this normal behaviour?" "Can I release him tomorrow?" These are the kinds of questions that get you pattern-matched answers.


The questions that would have produced safer responses are different. Here are the prompts I would use now, and that I teach in every AI workshop I run:

"What can you not determine from the photos and descriptions I have given you?"

"What are the limits of assessing this situation remotely?"

"If you are uncertain about any part of this, say so directly."

"At what point should I stop relying on this advice and escalate to a professional?"



These prompts force uncertainty acknowledgement, boundary setting and escalation language. Without them, AI will tend to fill the gaps with confidence. That is its nature. Managing that tendency is ours.


Why this matters beyond the backyard


This situation plays out in professional environments every single day, just with different stakes and less obvious consequences. A media response drafted from an AI summary under deadline. Advice to a client pattern-matched to the most common scenario. A briefing that sounds complete because nothing in it flagged what could not be determined.



The principle is the same in every case. When uncertainty exists and the cost of being wrong is real, confident output is not a green light. It is a prompt to ask better questions.


The bird survived. He spent several weeks with a licensed carer and was eventually released properly. The AI would have had a different outcome on its conscience, if it had one.



That willingness to pause, question and escalate is the essence of AI judgement and critical thinking in practice. It is not a technical skill, but it is one that can be taught, practised and embedded in the way teams work.


If this resonates with how your team is using AI


The responsible and practical use of AI for high-stakes communication is exactly what the AI in High-Stakes Communication workshop covers. It is a one-day program already delivered for parliamentary offices and professional services teams, built around real scenarios from your environment rather than generic AI tips.


If you want to talk about what that looks like for your team, I'd love to have that conversation.

