Face recognition at Fresno Airport

The sense and nonsense of biometrics

After September 11th, America is arming itself: wherever security precautions can be tightened, they are being tightened. Fresno Airport in California was the first American airport to install a face recognition system.

As the local newspaper The Mercury News reports, the first facial recognition system at an American airport has been installed in Fresno. Everyone passing through the security checkpoints there is compared against a database of 800 terrorism suspects. The alarm goes off a couple of times every day, so far always wrongly; there have been no arrests. Fresno is running a three-month test, and a similar trial is due to begin in January at Boston's airport, from which two of the aircraft involved in the September 11th attacks took off. Providence also wants to install the system. It is already in operation at the Icelandic airport Keflavík (advertised on its homepage as "one of the most ... secure airport [!] in the world") and at the Canadian airport in Toronto.

It goes without saying that the system has led to heated disputes between civil liberties advocates and proponents of stronger security measures. The debate is by no means absurd:

It cannot be pointed out often enough that the September 11th attackers entered America on their own passports and perfectly legal visas and were not previously known to the police. The most devastating attack of all time would not have been hindered in any way by face recognition. Islamic terrorism in particular seems to have a large pool of recruits: people who attracted no attention before suddenly blowing themselves up on a bus in Israel or flying planes into high-rise buildings in America. And the masterminds are unlikely to show up at an American airport. Anyone wanting to advocate face recognition can find good arguments for it (such as catching local fugitives), but citing the September 11th attacks is nonsense.

But the arguments of the other side are not particularly well thought out either. The ACLU (American Civil Liberties Union) has called for the system to be dismantled, arguing that installing this software in airports only gives people a false sense of security: it suggests that potential terrorists can be caught, even though the technology is simply not capable of that. In fact, the detection rates are not very convincing. As mentioned, there are several false alarms every day in Fresno (with 2,000 passengers a day), and tests have shown that a suspect was recognized in only about half of the cases when the database picture was a year old.
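
To see why every alarm so far has pointed at the wrong person, a short back-of-the-envelope calculation helps. The sketch below only reuses the figures quoted above (2,000 passengers and a couple of false alarms per day, roughly 50% recognition) plus one loudly hypothetical assumption, namely that one genuine suspect passes the checkpoint per year; it illustrates the base-rate problem and is not a claim about actual traffic at Fresno.

```python
# Back-of-the-envelope base-rate check, a minimal sketch using the figures
# quoted above plus one hypothetical assumption about suspect traffic.
passengers_per_day = 2000      # figure reported for Fresno
false_alarms_per_day = 2.5     # "a couple of times every day"
hit_rate = 0.5                 # ~50% recognition on year-old photos
suspects_per_year = 1          # hypothetical assumption, not from the article

false_alarms_per_year = false_alarms_per_day * 365   # roughly 900 wrong alarms
true_alarms_per_year = suspects_per_year * hit_rate  # roughly 0.5 correct alarms

# Probability that any given alarm points at a real suspect:
p_alarm_is_real = true_alarms_per_year / (true_alarms_per_year + false_alarms_per_year)
print(f"Roughly {p_alarm_is_real:.4%} of alarms would be genuine")  # about 0.05%
```

Even under these generous assumptions, far less than one percent of alarms would point at a real suspect, which matches the experience reported so far.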

And yet the ACLU argument is hollow: in the end, every feeling of security is deceptive, as was proven by attackers who traveled on their own passports and brought down the World Trade Center with box cutters. If installing face recognition in Fresno helps to calm the general panic, so that not every flight turns into an ordeal out of fear of an attack, then it has done its job. Another example: 250 face-recognition cameras have been watching over the rough London district of Newham since 1998. To date there has not been a single arrest, yet crime fell by an astonishing 30%: again a purely psychological victory, because potential criminals feel they are being watched. Many American casinos use face recognition to deter known cheats. It works well there, because casinos are well lit and, unlike at an airport gate, there is a little time to capture usable images.

Admittedly, the ACLU argument is not really meant seriously. Or does anyone believe that the ACLU, of all organizations, would have nothing against a truly efficient face recognition system? What the ACLU really cares about is the restriction of civil liberties, and the question to ask is therefore whether the possible benefit of face recognition outweighs the possible restriction of personal rights.

The answer is relatively simple: if the technology is not misused, that is, if every recording is irretrievably destroyed the moment it fails to match what is being searched for, then everything is fine. The authorities already have all kinds of dangerous instruments at their disposal: prisons, for example (which are only good as long as no innocent people sit in them), the monopoly on force (which is only good as long as it is not abused), DNA databases (which are only good as long as the data are kept confidential). So those who trust the rule of law in their own country can advocate face recognition. Those who do not must reject it, along with all the other instruments of state power.

But what is absolutely necessary is a clear law regulating the use of face recognition. At the Super Bowl in January 2001, for example, cameras were pointed at the spectators in order to find criminals. Apart from the discovery that 19 petty criminals were hiding among the 70,000 visitors, there was no success to report (there were no arrests). The public had not been informed about the system beforehand, and the operators allegedly used the cameras to zoom in on female visitors. The protest was as big as it was justified.

A provider of facial recognition software, Visionics of all companies, has made a very sensible proposal for such a law and already obliges its customers to adopt a corresponding policy: the people being scanned must be informed about the system in advance, there must be precise guidelines for the database of suspects, all non-matching images must be deleted immediately, and there must be severe penalties for anyone who violates these rules.
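
What the deletion requirement could look like in software can be sketched schematically. The snippet below is a hypothetical illustration of a capture, compare, discard loop, assuming a simple cosine similarity as the matching score; neither the function names nor the threshold come from Visionics.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Capture:
    image: bytes    # raw frame from the gate camera
    features: list  # feature vector extracted from the face

def cosine_similarity(a, b):
    """Toy matching score between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def process_capture(capture, watchlist, threshold=0.8):
    """Compare one capture against the watch list; discard it unless it matches."""
    best_id, best_score = None, 0.0
    for suspect_id, suspect_features in watchlist.items():
        score = cosine_similarity(capture.features, suspect_features)
        if score > best_score:
            best_id, best_score = suspect_id, score
    if best_score >= threshold:
        return best_id, best_score  # alarm: hand over to a human operator
    # Policy requirement: images that match nothing are destroyed at once,
    # never logged or archived.
    capture.image = b""
    capture.features = []
    return None, 0.0
```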

Speaking of Visionics: the company's technology is based on locating a good dozen points on a face and measuring the distances between them; recognition is then performed on this data. Visionics is in use in Tampa, Florida, and may even be used at the Olympics.
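
That description, locating a dozen or so points on the face and measuring the distances between them, can be illustrated with a toy sketch. The landmark names, the choice of pairwise Euclidean distances and the normalization by eye distance are assumptions made purely for illustration; the article says nothing about the actual Visionics algorithm beyond points and distances.

```python
from itertools import combinations
from math import dist  # Python 3.8+

# Hypothetical landmark coordinates (x, y) in image pixels; in a real system
# these would come from a face-detection and landmarking step.
landmarks = {
    "left_eye": (102, 88), "right_eye": (158, 90), "nose_tip": (130, 125),
    "mouth_left": (110, 160), "mouth_right": (150, 161), "chin": (131, 200),
}

def distance_signature(points):
    """Pairwise distances between landmarks, scaled by the eye distance so the
    signature is roughly independent of image size."""
    scale = dist(points["left_eye"], points["right_eye"])
    return [dist(points[a], points[b]) / scale
            for a, b in combinations(sorted(points), 2)]

signature = distance_signature(landmarks)
# Two faces would then be compared by the difference between their signatures.
```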

Competitor Viisage is more successful: this company supplies the airports and casinos mentioned above and was also responsible for the Super Bowl scandal (which may play a part in why competitor Visionics is calling for stricter guidelines). The Viisage system is based on 128 archetypal faces against which the images are compared; the stored data describe how a given face relates to these archetypes.
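
Describing a face by how it decomposes into a fixed set of 128 reference faces corresponds to what the research literature calls the eigenface approach. The sketch below is a generic illustration of that decomposition with a random stand-in basis, not Viisage's actual code; in practice the archetypes would come from a principal component analysis over many face images.

```python
import numpy as np

# Toy dimensions: flattened grayscale face images and 128 archetypes.
IMG_PIXELS, N_ARCHETYPES = 64 * 64, 128

rng = np.random.default_rng(0)
# Stand-in archetype basis and mean face; real systems learn these from data.
archetypes = rng.standard_normal((N_ARCHETYPES, IMG_PIXELS))
mean_face = rng.standard_normal(IMG_PIXELS)

def encode(face_pixels):
    """Describe a face by its 128 coefficients with respect to the archetypes."""
    return archetypes @ (face_pixels - mean_face)

def match_score(face_a, face_b):
    """Smaller distance between coefficient vectors means more similar faces."""
    return np.linalg.norm(encode(face_a) - encode(face_b))

probe = rng.standard_normal(IMG_PIXELS)      # image from the gate camera
reference = rng.standard_normal(IMG_PIXELS)  # image from the suspect database
print(match_score(probe, reference))
```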

The trio is completed by the Canadian competitor Imagis. Its system is based on 3D models of the head, which are then converted into data by algorithms; Imagis claims this is more effective than archetype-based systems. Imagis customers include the British police, who use the system to identify missing children in child pornography material, and the city and airport of Oakland, though the latter systems do not appear to have been installed yet.

In terms of customers and technology, the three may differ. But their stock charts have one thing in common: pathetic before September 11th, then more than impressive. (Peter Riedlberger)
