Who Will Be the Boss of the Future?
AI is evolving from being just a tool to gradually becoming "agentic"—starting to make decisions. Will there be a limit to these decisions? For instance, are we ready for BossGPT?
A few months ago, my wife and I decided to watch a movie on Mubi. I’m not really a movie person, and art house films are no exception; if I can, I avoid them.
When Mubi came up, I started browsing the films before my wife arrived so that I wouldn’t end up with a screensaver consisting of beautiful mountains, seas, and forests; or the drama of a mother who migrated to England while her daughter is stuck in a war in the Middle East; or the internal anguish of a protagonist who fell into depression after losing their spouse.
When I saw the comedy category, I won’t lie, I cheered inside: “Oh, hell yes!” Then The Banshees of Inisherin, which was tagged as dark humor, came to mind. I got scared. My blood ran cold. I had absolutely no intention of watching that kind of “funny” movie again. I kept navigating the Mubi catalog anxiously. And then I came across Direktøren for det hele (The Boss of It All), whose title had been rendered in Turkish as “Emret Patronum” (“Order Me, Boss”).
The name Lars von Trier did scare me a little, but the film met the two basic criteria I generally use to identify entertaining movies:
The poster said it was funny.
The title was translated into Turkish very poorly.
In other words, someone thought this film would be shown in theaters and could appeal to a general audience. Great!
The plot of the film also seemed interesting:
Ravn, the owner of an IT company, invents an interesting lie when founding the company. To avoid making difficult decisions regarding his employees, he creates an imaginary boss figure living in America and attributes all tough decisions to this imaginary boss. When the sale of the company comes onto the agenda, this lie starts to create problems because the buying firm wants to meet the real boss. To solve this, Ravn hires a failed theater actor named Kristoffer. Kristoffer’s job is to portray this imaginary boss.
“An imaginary boss upon whom you can dump the responsibility for bad decisions”—I think that’s a sweet idea. Genuine comedy can come from this. Conflicts, situational humor, and so on. I believed in the film. I said, “This works.” I said, “It’ll make us laugh.” I said, “It’ll make us think while we laugh.”
And that’s exactly what happened. Although the first half flowed a bit slowly, the film made me smile and entertained me. It was thought-provoking, too. Outsourcing the job of being the bad guy is a fantastic idea. Humans struggle to take responsibility even for their own decisions; most people don’t want that responsibility at all. It feels easier to let someone else decide for them.
Work-related decisions can be even harder. You have to fire someone, but that person is a close friend you’ve worked with for five years. Telling an employee who has just given birth that overtime is required, giving a raise below expectations, promoting someone in a way that won’t make everyone happy—these are not easy decisions.
The difficulty, in my opinion, isn’t in making the decision. It’s in the responsibility of the decision. The decision is made one way or another. But the responsibility belongs to the person who makes it. Our protagonist, Ravn, has handed this responsibility over to an imaginary boss. A wonderful solution. Do whatever you want to get done, be ruthless, think only of your own selfish interests, but who is responsible? “That scumbag boss!”
This piece, of course, wasn’t written to recommend a movie to you. Unlike earlier technologies, artificial intelligence will be able to behave “agentically”—that is, to make decisions and execute them. In the business world, we will see AI gradually take control of decision-making processes. So is the business world ready for a “BossGPT” that supports bosses’ decision-making?
Wouldn’t a BossGPT—one that reads the boss’s (CEO’s) emails, analyzes all the company’s data, examines current sectoral and economic indicators, processes meeting notes and job-interview recordings, and decides on the basis of all of it—help the boss make the most accurate decision? Wouldn’t it make difficult decisions without batting an eye, in line with the company’s interests and success? Wouldn’t the company be managed more efficiently thanks to this rational AI boss-support system? Could a “magnificent” (!) future be awaiting us? The answer depends on what we mean by magnificent—and on the questions of for whom and for what.
Another important question is who will take responsibility for AI’s decisions. With such a decision-support system, bosses and managers get the chance to pin all their decisions on BossGPT with the claim that it is “objective” and “rational.” We are not far from hearing sentences like: “It wasn’t me, BossGPT decided this. I know your wife just gave birth, you bought a house two months ago, you have loan debt—but what can I do? BossGPT says we need to let you go.” Whether an AI model that makes decisions also takes responsibility for the wrong ones is another question.
Decisions that are right for the business but unethical? Rational but unfair? Can we pass the buck to AI in all these cases and get away with it?
The coming years may be full of moral tests in which we dump the responsibility for difficult decisions onto AI and choose to sleep soundly at night. Handing over responsibility along with the decision is a great comfort for managers and bosses. In fact, beyond comfort, it is a great ethical convenience. Just imagine: a perfect excuse that lets you always blame someone else, play the victim explaining there was nothing you could do, and shed tears saying, “I’m so sorry.”
Who would you want to make the decisions for your company? Rational and objective AI applications, or humans?