While I was covering the FIFA Women’s World Cup last summer, we started doing regular player ratings for every France game. But how did I decide which player deserved which mark?

LYON, FRANCE – JULY 07: Megan Rapinoe of the USA lifts the FIFA Women’s World Cup Trophy following her team’s victory in the 2019 FIFA Women’s World Cup France Final match between The United States of America and The Netherlands at Stade de Lyon on July 07, 2019 in Lyon, France. (Photo by Alex Grimm/Getty Images)

Here, I will try to explain how journalists tend to decide on ratings. It is the same process I use for the Arsenal women’s ratings.

I know players in men’s and women’s football in France often complain about their ratings, and you do wonder if the same happens in England. Some players from a certain French women’s club I won’t name have been known to protest when they feel their ratings are too low.

Fans also complain ratings are biased. I have seen the way other journos work and compile their ratings and can confirm it is quite similar to the way I do it.

There is no secret formula or magic biased ingredient.

Now, if we only have to do the France players’ ratings, it is easier to concentrate on watching 11 players rather than 22. I don’t think anyone can rate 22 players properly over 90 minutes because there are too many things to check.

I was also on Twitter duty for the France games, which made it a little harder to watch everything.

It is quite funny because I did make a mistake with one of the players’ names (autocorrect got me) and people were very quick to jump on me about it, especially as it was one of their favorite players.

During the game, I mainly watch what happens off the ball: the players’ movement, the team shape, the runs made and so on. These are the things you can only see live at a game. TV gives a very distorted view of what really happens on the pitch.

Regarding writers’ bias for and against certain players, I certainly believe it exists. Writers are friends with players, especially in men’s football, and they are known to put the ratings up in exchange for information.

For us, it was quite easy as we had three or four writers watching the game, so any bias or unusual rating had far less impact: the average of all the writers’ ratings was used as the final one.
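Since the final mark is just an arithmetic mean, it is easy to see how one biased score gets diluted. Here is a minimal sketch in Python; the writers and the individual marks are invented for illustration, only the averaging reflects what we actually did:

```python
# One generous outlier barely moves an averaged rating.
# Writer names and marks are invented for illustration.
ratings = {"Writer A": 6, "Writer B": 6, "Writer C": 6, "Writer D": 9}

final = sum(ratings.values()) / len(ratings)
print(final)  # 6.75 -- a lone 9 lifts the final mark from 6 to just 6.75
```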

I do the ratings in a very simple and analytical way. During the game, I keep a column with 11-14 players’ names, and every important play gets a + or a -.

Then, I choose the base rating for each player and add all those bits to end up with the final rating.
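For anyone curious, here is a minimal sketch of that tally in Python. The base mark of 6 and the half-point value of each + or - are my assumptions rather than exact weights from my notebook, and the player names are invented:

```python
# In-game tally: every important play earns a + or a -, and the final
# mark is a base rating adjusted by those bits.
BASE = 6     # assumed starting point on the 0-10 scale below
STEP = 0.5   # assumed value of each plus or minus

notebook = {
    "Player A": "++-",   # two good plays, one poor one
    "Player B": "+",
    "Player C": "---",
}

for player, marks in notebook.items():
    rating = BASE + STEP * (marks.count("+") - marks.count("-"))
    rating = max(0, min(10, rating))   # keep it on the 0-10 scale
    print(f"{player}: {rating:g}")
```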

The scale I use is as follows, and it’s basically the same one the papers use.

0 = (that one never happens, unless a player two-foots someone and then starts a fight)

1 = awful performance

2 = really bad performance

3 = bad performance

4 = poor performance

5 = average performance

6 = decent performance

7 = good performance

8 = very good performance

9 = fantastic performance

10 = exceptional performance (that happens probably once every three years)

So, the majority of players will get between 5 and 7 from me.

An 8 might pop up, or a 9, but rarely a 10. I have rarely seen L’Équipe give a 10, as they famously did for Oleg Salenko, the Russian who scored five in a single World Cup finals game.

It is quite funny to note that, more often than not, I was rating the players higher than the majority of my colleagues. I blame the English influence, where the papers over-rate all players on a regular basis.

The truth is, player ratings are subjective and hard to quantify. I am sure that if you asked twenty fans for their ratings after a game at the Emirates Stadium or Borehamwood, you would get the whole range from 1 to 10.