On Thursday, 17th of May 2018, one of the providers for our facial similarity report began exhibiting inconsistent algorithmic behaviour. This affected face match accuracy from at least 13:21 BST until 22:29 BST for the standard variant, and until 00:32 BST the following day for the video variant.
In keeping with our commitment to transparency, the following is a report of the issue we encountered, the factors that contributed to that issue, and ultimately, what we’ve done and plan to do to ensure we don't find ourselves in this situation again.
On Thursday, 17th of May at 14:40 BST, the facial verification product manager was alerted to an unusual face match result (from a check carried out at 13:21 BST), in which two different individuals were matched when they should not have been.
At 17:10 BST during our daily automated test run, we saw an abnormal rise in scores from our face matching provider, resulting in failing tests and incorrect face match classifications. We began our investigation immediately and confirmed the earlier report was related to this issue.
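The kind of regression check that caught this can be sketched as a simple statistical guard: compare the latest batch of provider similarity scores for known non-matching pairs against a historical baseline, and fail the run on a significant upward drift. The function, data, and threshold below are illustrative assumptions, not our actual test pipeline.

```python
from statistics import mean, stdev

def scores_anomalous(baseline, latest, z_threshold=3.0):
    """Flag the run if the mean score for known non-matching pairs has
    drifted more than z_threshold standard deviations above baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return mean(latest) > mu + z_threshold * sigma

# Non-matching pairs should score low; a sudden jump fails the run.
baseline = [0.08, 0.12, 0.10, 0.09, 0.11, 0.10, 0.13, 0.09]
healthy = [0.10, 0.11, 0.09]
broken = [0.62, 0.71, 0.66]   # scores like these would wrongly match

assert not scores_anomalous(baseline, healthy)
assert scores_anomalous(baseline, broken)
```

A z-score guard like this is deliberately crude; its value is that it runs daily against fixed fixtures, so a provider-side change shows up as failing tests rather than as misclassified users.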
At 17:50 BST we made a first attempt to contact our provider via supplier relations. After receiving no response, at 18:33 BST we logged a formal bug report in the provider's issue tracking system.
At 22:29 BST we disabled the provider for the standard variant and replaced it with an equivalent in-house algorithm, restoring performance on the standard variant to normal levels.
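The switch-over relies on being able to reroute matching at runtime rather than redeploying. A minimal sketch of that kill-switch pattern is below; the function names, flag, and stand-in matchers are hypothetical, for illustration only.

```python
def external_match(a, b):
    # Stand-in for the degraded provider API.
    raise RuntimeError("provider degraded")

def in_house_match(a, b):
    # Stand-in for the fallback algorithm: toy exact comparison here.
    return 1.0 if a == b else 0.0

# Flipped via runtime configuration, so no redeploy is needed.
USE_IN_HOUSE = True

def face_match(a, b):
    """Route the request to whichever matcher the flag selects."""
    matcher = in_house_match if USE_IN_HOUSE else external_match
    return matcher(a, b)

assert face_match("alice", "alice") == 1.0
assert face_match("alice", "bob") == 0.0
```

Keeping an in-house implementation behind a flag like this is what made it possible to restore the standard variant without waiting on the supplier's fix.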
At 00:32 BST the following day, our supplier confirmed they had issued a hotfix, restoring performance on the video variant to normal levels.
To avoid this happening in the future, we intend to: