An algorithm decides whether a defendant gets parole. An automated system recommends denying a patient treatment. An autonomous vehicle must choose in a split second whom to endanger. These scenarios raise fundamental questions about AI decision-making and human oversight. As AI ethics in business becomes unavoidable, one question stands out: Do we want machines to make these decisions for us?
The temptation is real. Automated systems are faster than any human. They don’t get tired, aren’t emotional, and certainly aren’t biased – or so we assume. They can evaluate millions of data points in seconds while we humans are still reading the first page. So why not delegate more decisions to algorithms and AI?
“In the future, AI will handle this – it can do it better anyway.” We encounter this attitude, or rather this hope, more and more frequently. In efficiency projects, strategy meetings, everyday conversations. The thinking is understandable: if a machine can do something faster, cheaper, and seemingly more objectively – why should humans still bother? The answer does not lie in capability alone. It lies in responsibility.
Lessons from algorithmic decision-making
Before we discuss AI, it’s worth looking at algorithmic systems that have been preparing or making decisions for years. These systems don’t use neural networks or large language models – they’re classical statistical methods and rule-based algorithms. Yet they reveal the risks that emerge when machines act without adequate human oversight.
Justice – statistical risk assessment in the courtroom
In the United States, courts have used systems like COMPAS for years to calculate recidivism probabilities and recommend sentences or parole decisions. COMPAS is based on logistic regression – a classical statistical method that evaluates a questionnaire with approximately 137 variables and calculates a risk score. Importantly, this isn’t AI in today’s sense, but an algorithmic scoring system.
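To make the mechanism tangible: a logistic-regression score is a weighted sum of questionnaire answers passed through a logistic function. The sketch below is purely illustrative; the real COMPAS questionnaire and weights are proprietary, so the variable names and numbers here are invented.

```python
import math

# Purely illustrative: the real COMPAS inputs and weights are proprietary.
# A logistic-regression score turns weighted questionnaire answers into a probability.
WEIGHTS = {
    "prior_offences": 0.45,        # invented weight
    "age_at_first_arrest": -0.05,  # invented weight
    "unemployed": 0.30,            # invented weight
}
INTERCEPT = -1.0

def risk_score(answers: dict) -> float:
    """Return a recidivism 'probability' between 0 and 1 for one defendant."""
    z = INTERCEPT + sum(WEIGHTS[key] * value for key, value in answers.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# Example: three prior offences, first arrested at 19, currently unemployed.
print(round(risk_score({"prior_offences": 3, "age_at_first_arrest": 19, "unemployed": 1}), 2))
```

The point that connects to what follows: whatever biases are embedded in the historical data used to fit such weights flow directly into every score the system produces.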
However, a 2016 ProPublica investigation analysed COMPAS scores for more than 7,000 defendants in Florida. The finding: among defendants who did not go on to reoffend, Black defendants were nearly twice as likely as white defendants to have been classified as high risk. The system systematically reproduced the biases in the historical data it was trained on.
Who bears responsibility for such a judgement: the judge who trusted the algorithm, or the developers who programmed it?
Financial markets – rule-based algorithms out of control
On 6 May 2010, the US stock market lost nearly a trillion dollars in value within 36 minutes – the so-called “Flash Crash.” The Dow Jones fell by almost 1,000 points before largely recovering. What happened? High-frequency trading algorithms – essentially rule-based if-then logic executed in milliseconds – had driven each other into a downward spiral. These systems don’t “learn” and don’t “decide” in any real sense. They execute predefined rules extremely fast. When a large sell order hit the market, the algorithms reacted to the algorithms that reacted to the algorithms. Humans could only watch.
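How such a feedback loop plays out can be illustrated with a heavily simplified simulation: a handful of rule-based sellers, each programmed to dump positions once the price falls below its own trigger level, amplify a single initial shock. The prices, triggers and impact figures below are invented for illustration; real high-frequency strategies are vastly more complex.

```python
# Invented numbers; a toy illustration of threshold rules amplifying one another.
price = 100.0
triggers = [99.0, 97.5, 96.0, 94.0, 91.0]  # each "algorithm" sells below its trigger
SELL_IMPACT = 0.03                          # every forced sale pushes the price down 3%

price *= 0.98  # a single large sell order moves the market down 2%
print(f"initial shock, price {price:.2f}")

for level in triggers:
    if price < level:                # the rule fires ...
        price *= 1 - SELL_IMPACT     # ... and its sale pushes the price lower still,
        print(f"trigger at {level} fired, price now {price:.2f}")  # tripping the next rule
```

Five hard-coded rules are enough to turn a two per cent dip into a double-digit slide, and none of them "decided" anything.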
This wasn’t AI. But it demonstrates what happens when automated systems act without sufficient context and without human oversight.
The lesson: AI is only as good as its context
These historical examples reveal a fundamental problem that applies equally to modern AI: a system can only be as good as the context available for its decision-making.
COMPAS had no context for structural racism in the justice system – it reproduced it. The trading algorithms had no context for what was happening across the market – they reacted blindly to signals.
And AI? The risks haven’t been overcome; they’ve shifted:
Hallucinations: Large language models generate convincing-sounding statements that are factually wrong – without any indication of uncertainty.
Echo chambers: With each iteration, an AI can move deeper in the wrong direction. A flawed approach at the start poisons the entire process.
Missing context: AI without relevant context produces generic results at best, dangerous ones at worst. This applies to text generation as much as to strategic recommendations.
The technology improves. The fundamental problems remain.
From historical algorithms to modern AI – when it gets critical
Let’s examine areas where automated systems and AI already prepare decisions – or may make them in the future.
Defence technology – life or death on the battlefield
Autonomous weapons systems can identify and engage targets faster than any human soldier. The technology exists. The question is: Do we want a machine to decide over life and death? Without hesitation, without doubt, without moral weight?
Many countries are developing exactly such systems. The efficiency arguments are familiar: faster, more precise, no casualties on our side. But who bears responsibility when an algorithm makes a mistake? The programmer? The general? The defence minister? Or no one?
Medicine – who gets treated
AI systems already support diagnoses and surpass human doctors in accuracy in some areas. That’s progress. But what happens when AI doesn’t just diagnose but also decides?
Healthcare systems regularly face resource constraints. If an algorithm recommends denying an elderly patient an expensive treatment because the statistical probability of success is too low, is that objective efficiency or inhumane calculation? And who explains that to the patient and their family?
Autonomous driving – the trolley problem becomes real
A self-driving car recognises an unavoidable dangerous situation. It can swerve left and risk a collision with another vehicle. Or right, where children stand on the pavement. Or straight ahead, where an obstacle endangers the driver.
Philosophers have debated the trolley problem for decades as a thought experiment. For autonomous vehicles, it becomes a programming decision. Who determines which life is worth more? And by what criteria?
Personnel decisions
AI already filters job applications today. It evaluates CVs, analyses video interviews, predicts success probabilities. This saves time. But what if the algorithm – trained on historical data – reproduces patterns that discriminate against certain groups? And what about redundancies? When an algorithm recommends laying off certain employees because their productivity metrics fall below average – who looks at the circumstances the numbers don’t capture?
Back to everyday business – AI in the organisation
All these examples may seem extreme. Defence tech, triage, autonomous vehicles – that’s far removed from everyday AI use in an SME. Or is it?
If we can agree that automated systems and AI shouldn’t have the final word on life-or-death decisions – why should they have it on business decisions that have far-reaching consequences and can genuinely affect people’s lives?
A strategic decision can cost jobs. Affect families. Transform regions. That may sound less dramatic than autonomous weapons – but for those affected, it’s existential.
AI can support strategy development. It can run scenarios, analyse data, reveal blind spots. That’s valuable. But who ultimately decides whether the AI-generated strategy is “good enough”? Who takes responsibility for the consequences?
The problem is gradual. With every decision delegated to AI, a piece of responsibility diffuses. When no one truly decides anymore, no one can truly be responsible anymore. That’s convenient on one hand, but dangerous on the other. When people say “the AI recommended it” in the future, that’s not a justification. It’s an excuse.
Developing guidelines for AI-supported decisions
When decisions have major consequences – or directly affect people’s lives – humans must have the final say. Not because we’re nostalgic. Not because we fear technology. But because responsibility requires it.
Algorithms and AI are powerful tools. They can prepare decisions, present options, model consequences. They can help us be better informed before we decide. That’s their value and their role. But the decision itself – and the responsibility for it – must rest with humans. Especially when ethical questions are involved.
Concretely, this means establishing company-wide guidelines for how algorithmic systems and AI are used in the organisation.
The following can serve as a starting point:
Transparency: Is it clear to all affected parties when an algorithm or AI is involved in a decision?
Traceability: Can people understand how a recommendation came about? “Black box” is not an acceptable answer for decisions with far-reaching consequences.
Critical scrutiny: Are recommendations systematically reviewed? Algorithms and AI can make mistakes; questioning their results is essential.
Final decision: Does a human make the final decision and take responsibility for it? Not the algorithm. Not the machine. A human.
Right to appeal: Can affected parties challenge decisions – with a human, not a chatbot?
Every organisation must develop such guidelines individually. Ideally, they’re part of the AI strategy and reviewed regularly like any business process.
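To show what the “final decision” guideline can look like at the tooling level, here is a minimal sketch of an approval gate: the system may prepare a recommendation, but nothing is executed until a named person has signed off. The class and field names are our own invention, not a standard API, and every organisation will implement this differently.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only: class and field names are invented, not a standard API.
@dataclass
class Recommendation:
    summary: str                      # what the AI suggests
    rationale: str                    # traceability: how the recommendation came about
    approved_by: str | None = None    # the named human who takes responsibility
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record the human sign-off; without it, nothing gets executed."""
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

def execute(rec: Recommendation) -> None:
    if rec.approved_by is None:
        raise PermissionError("No human sign-off: the recommendation stays a recommendation.")
    print(f"Executing '{rec.summary}', approved by {rec.approved_by}")

# Usage: the AI prepares, a human decides.
rec = Recommendation(summary="Reprioritise the product portfolio",
                     rationale="Scenario analysis, market data, competitor benchmarks")
rec.approve("Head of Strategy")
execute(rec)
```

The technical part is trivial; the hard part is organisational: deciding who may sign off, and ensuring they genuinely review rather than rubber-stamp.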
A final thought – AI responsibility in everyday business
The question isn’t whether algorithms or AI can make certain decisions “better” than humans. In some dimensions, they undoubtedly can: faster, more consistent, more data-driven. The question is whether “better” is the right measure.
Decisions that affect people or have far-reaching consequences are often not pure optimisation problems. They reflect moral positions and standards. And moral actions require someone who takes ethical responsibility for them.
An algorithm cannot take responsibility. An AI cannot feel guilt. It cannot stand before someone affected and say: “I made this decision, and I stand by it.”
Only humans can do that. This too is why humans must remain the final authority. Not because we’re infallible. But because as humans, we can not only understand the ethical implications – we can feel them.
How do you shape AI-supported decision-making in your organisation?
Who ultimately bears responsibility when an algorithm makes a recommendation – and who answers for the consequences? These questions deserve more than a quick answer. If you’d like to think them through together, we’re the right sparring partner.


