Twitter has announced a new system that allows it to identify around 38% of the harmful content on its platform.
So far, however, the company has not reported the total number of harmful posts, nor how its removal of harmful content has changed over time.
This is the first time the social network has implemented an automated system for suspending and blocking accounts. Previously, the only mechanism was user reporting, during which harmful content continued to circulate on the network.
More than 100,000 accounts created to carry out illicit activities have already been blocked.
Although this algorithmic moderation has seen some success, the platform indicated that the system is still at an early stage and that the technology must be developed further before it can be deployed officially.
Other changes are also on the way, such as rules for identifying spam accounts based on follower counts and labels for inappropriate tweets posted by public figures.
However, the data the platform has published about this “cleanup” process has not made its benefits clear.
For their part, users have called for transparency reports that show the impact and scope of these actions.