The Doorman, the Data and the Human: How We Think About AI

[Image: The Praxxis Group team, creators of OPAL3, pictured together outdoors in New Zealand.]

TL;DR

Artificial intelligence is most useful when it supports human judgement rather than trying to replace it. In performance reporting, AI can help summarise information, translate technical language into plain English and reduce repetitive work. That frees leaders and teams to focus on interpreting results, asking better questions and making better decisions.

Artificial intelligence is everywhere right now.

Most of the conversation falls into one of two camps. On one side, AI is presented as a silver bullet that will transform everything overnight. On the other, it is treated as a cost-cutting tool designed to replace people.

At Praxxis Group (the creators of OPAL3), we think both views miss the point.

We see AI as a practical tool that can help organisations work more clearly, more consistently and with less effort. But its real value comes when it strengthens human judgement rather than trying to replace it.

The Doorman Fallacy

Behavioural economist Rory Sutherland uses the example of a hotel doorman.

If you are a hotel manager looking for cost savings, it is easy to replace the doorman with an automatic door and save money. But you also lose a range of customer experience benefits that are harder to measure: a sense of welcome, local knowledge, an extra set of eyes and a human connection.

On paper, the decision looks great: headcount reduced, salary and overheads saved. In practice, something valuable disappears. As Sutherland points out, it takes about five years for the hotel's rack rate to drop and for even the most loyal customers to abandon it. Short run: costs out. Long run: value destroyed.

The same logic applies to AI.

If organisations use AI purely to reduce headcount, they may remove the very things that create trust, insight and better decisions.

Organisations Are Living Systems

We have long believed that organisations are not machines to be optimised. They are living systems that need to adapt.

That belief shapes how we design and implement both OPAL3 and our consulting work.

OPAL3 was built to help leaders make sense of complexity. It brings together performance, risk and planning information into one place so people can see what is happening and decide what to do next.

AI features fit naturally into this philosophy.

Used well, AI helps people navigate information overload. It can reduce the effort involved in preparing reports, summarising commentary and identifying emerging themes. It does not replace judgement. It supports it.

What AI Does Well (and What It Doesn't)

AI is good at generating outputs that feel authoritative and coherent while also being prone to being (sometimes hilariously) wrong.

Coherent, yes. Cogent, not always.

AI is very good at:

  • Summarising large amounts of information
  • Identifying patterns and inconsistencies
  • Translating technical language into plain English
  • Producing first drafts quickly

Humans remain essential for:

  • Judgement and prioritisation
  • Understanding political and organisational context
  • Asking better questions
  • Building trust
  • Deciding what matters

The best results come when AI handles the repetitive work and people focus on interpretation, discussion and action.

AI in Practice: Performance Reporting at Scale

One of the most promising uses of AI is in performance reporting.

In large organisations, significant time is spent collecting updates, rewriting commentary and preparing reports for different audiences. The work is necessary, but often repetitive.

AI can help by:

  • Turning technical commentary into clear language for executives and elected members
  • Summarising information from multiple teams
  • Highlighting trends, risks and exceptions
  • Drafting narrative reports more quickly

This aligns closely with OPAL3's purpose: making complex reporting and risk management simpler and easier to manage.

The result is not just faster reporting. It is better conversations.

A Note of Caution: "AI Slop"

AI can produce content that looks polished but says very little.

We've all seen examples of generic, low-value output that creates more work rather than less. Someone still needs to check the facts, apply context and decide whether the result is actually useful.

That is why we treat AI as an assistant, not an authority.

The accountability remains with people.

Our Approach: Start Small and Learn Fast

We take the same approach to AI that we take to organisational change.

Start with a real problem.

Test a practical use case.

Keep what works.

Improve over time.

This might mean using AI to refine report commentary, draft meeting summaries or explore patterns in data. The goal is not to deploy AI for its own sake. The goal is to remove friction and help people focus on higher-value work.

Technology in Service of Better Judgement

Our ambition is straightforward.

We want to help organisations become healthier and more self-sufficient.

Sometimes that means implementing better reporting systems. Sometimes it means improving performance frameworks. Increasingly, it also means helping teams use AI in ways that are practical, responsible and genuinely useful.

The technology is important.

But the human judgement behind it matters most.

Further Reading