PARIS — Frances Tiafoe says he receives death threats via social media after he loses professional tennis matches. Jessica Pegula says the same. So does Donna Vekic — directed at her or her family.
“Everybody gets them after a loss,” said Tiafoe, a 25-year-old from Maryland who was scheduled to play in the French Open’s second round Thursday and was a semifinalist at last year’s US Open. “It’s just how society is today. I know how that affects people’s mental health. That’s very real.”
Sloane Stephens, the 2017 champion at Flushing Meadows and 2018 runner-up at Roland Garros, said she often deals with racist messages directed at her online and that some prompted the FBI to investigate.
“It’s obviously been a problem my entire career. It has never stopped,” said Stephens, who is Black. “If anything, it’s only gotten worse.”
In a bid to protect athletes from that sort of abuse at Roland Garros during the 15-day Grand Slam tournament that ends June 11, the French Tennis Federation is paying a company to provide players with software that uses artificial intelligence to block abusive comments.
Every player in every category — singles, doubles, juniors, wheelchair competitors and so on, for a total of around 700 to 800 — is allowed free access to Bodyguard.ai for use on Twitter, Instagram and Facebook. A few dozen players had signed up for the service as of the start of this week, according to Bodyguard.
“This is really important for us: for the players to be very comfortable and be able to focus on the competition. Tennis is mental. It’s really what you have in your mind that counts; you’re making 1,000 decisions during a match,” said FFT CEO Caroline Flaissier, who put the cost to the federation at somewhere between $30,000 and $50,000. “We know that there is a lot of cyberbullying. We have to address that major issue, so we thought let’s do a test.”
The effort also includes monitoring the social media accounts used by the FFT and the French Open. An FFT spokeswoman said Wednesday that 4,500 messages had been deleted out of the 79,000 received on those accounts since May 21.
Yann Guerin, head of sports for Nice-based Bodyguard, said the company’s software needs less than 100 milliseconds to analyze a comment and delete it if it is “hateful or undesirable.” Employees continually update the screening when they notice new words or emojis that should be caught. He cited the example of one player who participated in the French…
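For illustration only, here is a minimal Python sketch of the kind of screening pipeline the description above implies: a human-maintained list of blocked words and emojis, a per-comment check, and a keep-or-delete decision timed against a sub-100-millisecond budget. The blocklist contents, function names, and matching logic are hypothetical, and a plain keyword filter stands in for the artificial-intelligence analysis Bodyguard.ai actually performs.

```python
# Toy sketch of blocklist-based comment screening. NOT Bodyguard.ai's
# implementation; the blocklist and function names are hypothetical.
import re
import time

# Hypothetical blocklist that human moderators would keep updating as new
# abusive words and emojis appear, as described in the article.
BLOCKLIST = {"loser", "die", "🤮"}

# Pre-compile one pattern so each comment is checked in well under the
# 100-millisecond budget the article cites.
_PATTERN = re.compile(
    "|".join(re.escape(term) for term in BLOCKLIST), re.IGNORECASE
)

def is_hateful(comment: str) -> bool:
    """Return True if the comment contains any blocked word or emoji."""
    return bool(_PATTERN.search(comment))

def moderate(comment: str) -> str:
    """Decide whether to keep or delete a comment, timing the decision."""
    start = time.perf_counter()
    verdict = "delete" if is_hateful(comment) else "keep"
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{verdict} ({elapsed_ms:.3f} ms): {comment!r}")
    return verdict

if __name__ == "__main__":
    moderate("Great match today!")          # kept
    moderate("You're a loser, go die 🤮")   # deleted
```

A simple substring match like this runs in microseconds; the harder problem, which the hedged sketch does not attempt, is recognizing abuse phrased in ways no fixed list anticipates, which is where the constant human updates and AI analysis described above come in.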