Paper Summary: Defeating Face Liveness Detection by Building Virtual Models From Your Public Photos

Attempting something different here. I’ll try to summarize research papers once in a while.
Caveats:
1. I’m not an expert on anything, so form your own conclusions.
2. These papers are often studies that have very narrow, clearly defined scopes. Don’t automatically apply the same conclusions over wider scopes.
3. I’m presenting my own point of view. Don’t mistake it for anything else.
Today’s Paper: Defeating Face Liveness Detection by Building Virtual Models From Your Public Photos by Yi Xu, True Price, Jan-Michael Frahm, and Fabian Monrose from the computer science department at the University of North Carolina at Chapel Hill.
The findings have a lot of relevance to Aadhaar’s Face Auth, but the paper is not about Aadhaar itself.
Key takeaways:
1. If possible, do not post high-resolution photos of your face online. Images taller than about 100 px are particularly susceptible.
2. Easy access to matching photos considerably increases the risk of fraud.
3. Some of the systems have a high rejection rate when capture is done in poorly lit conditions.
4. Liveness detection can be broken without too much trouble.
5. Using infrared cameras is a good countermeasure that addresses most of the problems above.
6. Hardware quality is a problem. Face detection using web/mobile camera output should be avoided.
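The resolution threshold in takeaway 1 can be illustrated with a trivial check. The ~100 px height figure is the paper’s finding; the helper function itself is purely a hypothetical sketch for deciding whether a photo is detailed enough to be usable for 3D facial reconstruction:

```python
# Illustrative sketch only: the 100 px threshold comes from the paper's
# finding that higher-resolution face images enable 3D model building.
RISKY_HEIGHT_PX = 100

def is_risky_to_post(width_px: int, height_px: int) -> bool:
    """Return True if a photo's face region is likely high-resolution
    enough to be usable for building a 3D facial model."""
    return height_px > RISKY_HEIGHT_PX

print(is_risky_to_post(640, 480))  # typical webcam shot -> True
print(is_risky_to_post(80, 64))    # small thumbnail -> False
```

In practice one would measure the height of the detected face region, not the whole image, but the idea is the same: small thumbnails are far less useful to an attacker than full-resolution uploads.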
Impact on Aadhaar:
1. There is no mention of IR in the Face Auth development as of now. I guess it will be worked in once the current version gets circumvented often enough.
2. It is extremely risky to ask the authenticating agencies to keep copies of the images with them. This is a terribly insecure practice.
3. To make Face Auth work broadly, they will have to rely on arbitrary webcams and other commodity hardware. This will ensure a lot of failures.

Never mind.