I wrote this short piece exploring how we could rethink the Man of the Match award using explainable AI (xAI). Instead of relying on gut feelings, what if we used tools like Shapley values and human-centric design principles to quantify each player's actual contribution?
Have you ever stopped to wonder how the Man of the Match is actually chosen in football games? Is there a scientific method behind it, or is it mostly vibes and flash?
A goal here, a brilliant save there, and suddenly someone’s being handed the trophy. But what if we told you that behind the excitement, the process is largely subjective, and maybe, just maybe, it's time for a rethink? 😀
Traditionally, Man of the Match awards are given based on what commentators, fans, or sponsors feel was the standout performance.
While goals and assists grab headlines, quieter but crucial contributions like a defender’s flawless marking or a midfielder’s game-controlling passes often go unnoticed.
What if we used AI to make this fairer? What if we made it explainable, data-driven, and factual?
Let’s imagine a Shapley-powered Man of the Match system. Shapley values, a concept from cooperative game theory now used in explainable AI (xAI), help fairly determine how much each player contributed to the team's success. It's like distributing credit in a group project: who actually did what?
Think of each player as someone bringing value to a group project. To figure out how much credit each one deserves, we look at all possible ways they could work together.
We then ask: how much better did the team do when this player was involved compared to when they weren’t? We repeat this for every combination of players and then average the results.
In other words, we simulate every possible version of the team with and without each player and see how the performance changes. If a player consistently makes the team better in different setups, they get more credit. It’s like testing different ingredients in a recipe to see which one adds the most flavor.
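To make that concrete, here is a minimal Python sketch of the idea. It assumes we can boil a line-up's performance down to a single number through some team_score function; that function is a stand-in here, and in practice it might come from expected-goals models, pass networks, or match simulations. The sketch simply averages each player's marginal contribution over every order in which the team could be assembled.

```python
from itertools import permutations

def shapley_values(players, team_score):
    """Average each player's marginal contribution over every order
    in which the team could be 'assembled'."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for ordering in orderings:
        on_pitch = frozenset()
        for player in ordering:
            # How much better does the team do once this player joins?
            before = team_score(on_pitch)
            on_pitch = on_pitch | {player}
            after = team_score(on_pitch)
            totals[player] += after - before
    # The average over all orderings is the player's fair share of credit.
    return {p: totals[p] / len(orderings) for p in players}

# Tiny made-up demo: the team only "clicks" when the midfielder
# and the winger are both on the pitch.
def demo_score(lineup):
    return 1.0 if {"midfielder", "winger"} <= lineup else 0.0

print(shapley_values(["midfielder", "winger", "keeper"], demo_score))
# -> {'midfielder': 0.5, 'winger': 0.5, 'keeper': 0.0}
```

If a player lifts the score no matter when they join, their average, their Shapley value, ends up large; if the team does just as well without them, it stays near zero.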
This ensures each player gets a fair share of recognition: not just the flashy goal scorer, but also the silent game-changer who made it all possible.
Say a team wins 3–1, and we want to assess four key players, among them Alex, who scored, and Ben, who provided the assists.
A Shapley value model would look at every possible combination of these players, measure how much each one improved the team's performance across different scenarios, and then fairly assign a contribution score to each. It might reveal that even though Alex scored, Ben's assists had the biggest overall impact on the game's outcome, and now we have the math to back it up.
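Here is roughly what that toy scenario could look like in code. The names Carlos and Dana, and every number in the team_score function, are invented purely for illustration; the only point is to show how the credit gets split once the math runs.

```python
from itertools import combinations
from math import factorial

# Four hypothetical players: Alex scored, Ben provided the assists;
# Carlos, Dana and every number below are invented for illustration.
PLAYERS = ["Alex", "Ben", "Carlos", "Dana"]
BASE = {"Alex": 2.0, "Ben": 2.5, "Carlos": 1.5, "Dana": 1.0}

def team_score(lineup):
    """Toy performance score for a given set of players."""
    score = sum(BASE[p] for p in lineup)
    # Toy synergy: Alex's goal only happens because Ben's assists find him.
    if "Alex" in lineup and "Ben" in lineup:
        score += 1.0
    return score

def shapley(player):
    """Exact Shapley value via the closed-form coalition weights."""
    n = len(PLAYERS)
    others = [p for p in PLAYERS if p != player]
    total = 0.0
    for k in range(n):                      # size of the supporting cast
        for group in combinations(others, k):
            cast = set(group)
            marginal = team_score(cast | {player}) - team_score(cast)
            # Probability that exactly this cast joins before the player
            # in a random ordering of the whole team.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * marginal
    return total

for p in PLAYERS:
    print(f"{p}: {shapley(p):.2f}")
# With these invented numbers Ben comes out on top (3.00 vs Alex's 2.50):
# his own contribution plus half of the Alex-Ben synergy outweighs the goal.
```

The exact numbers don't matter; what matters is that the split is reproducible, and anyone can inspect the scoring function and the coalitions behind it.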
This is the power of explainable AI: not just making decisions, but justifying them. In a world where AI is becoming part of everything from healthcare to hiring to sports analytics, this level of transparency is not just helpful; it's critical.
For fans, it builds trust. You don't have to agree with the result, but you can see why it happened.
For players, it's a game-changer. Imagine knowing exactly how your positioning, passes, and pressure impacted the match, not just whether you ended up on the scoreboard.
That's why designers, especially those working with AI, need to think deeply about xAI. As artificial intelligence becomes more mainstream, it shouldn't just be smart; it needs to blend with our social systems. It must earn our trust, explain itself clearly, and respect the nuances of human judgment.
So the next time you see someone walk away with a Man of the Match award, ask yourself: was that decision transparent, or just tradition? 😅
Because in the age of AI, we have the tools to make recognition not just exciting, but fair.