Flower footage puts computer vision to the test in Shinseungback Kimyonghun’s new video installation.
Have you ever wondered how you can look at a seemingly arbitrary line in a 19th century impressionist painting and recognize something like a sailboat, a woman in a red dress, or rays of light? Sight and recognition are some seriously complex human neurological processes—and the subject of Seoul-based artist duo Shinseungback Kimyonghun’s latest experimental video installation, Flower.
Both Shin Seung Back and Kim Yong Hun have spent the last few months testing the limits of Google’s computer vision software, Cloud Vision API. The image-analysis platform labels the content of an image, classifies images into categories like “book,” “dress,” and “boat,” and detects individual faces and objects. If you’ve ever uploaded pictures to iPhoto, or been asked to tag a friend in a picture on Facebook, chances are you've run into components of this technology.
In the Flower video installation, Shinseungback Kimyonghun takes stock images of flowers and distorts them to see how far a picture can be morphed while the computer vision algorithms still recognize it as a flower. The installation tests how human optic recognition stacks up against that of computers: by projecting the image as it is being stretched, the audience can gauge how much distortion their own eyes will tolerate, and compare that range with the software's.
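The feedback loop the installation relies on can be sketched in a few lines. This is a minimal illustration, not the artists' actual pipeline: the `classify` function here is a made-up stand-in for a real vision API call, with confidence falling off as distortion grows.

```python
def classify(distortion: float) -> float:
    """Hypothetical stand-in for a vision API's 'flower' confidence.

    In the real installation this would be a call to a service like
    Cloud Vision API on the distorted image; here confidence simply
    decays as the distortion level rises.
    """
    return max(0.0, 1.0 - 0.08 * distortion)


def max_recognizable_distortion(threshold: float = 0.9, steps: int = 20) -> int:
    """Increase distortion step by step and return the last level at
    which the classifier still reports 'flower' above the threshold."""
    last_ok = 0
    for level in range(steps + 1):
        if classify(level) >= threshold:
            last_ok = level
        else:
            break
    return last_ok
```

With this toy confidence curve, the classifier gives up after only a couple of steps; a human viewer watching the same stretching process would draw their own line elsewhere, which is the gap the piece makes visible.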
The duo tells The Creators Project, “The development of computer vision technology has accelerated rapidly over the past few years, and it now shows near-human level accuracy in object recognition (some people claim computers are better already).” After the software scans the picture, Cloud Vision API generates a label and a confidence score, a number that suggests how certain the computer is that the category it came up with is correct. “A picture of a daisy may produce a label of ‘flower: 0.978’, ‘daisy: 0.965,’” write the artists.
The stock images used in Flower were selected only when ‘flower’ was the first label returned and its confidence score was higher than 0.9—meaning the software is fairly certain of its answer. The artists have observed that some subjects tolerate far more distortion than others. In Flower, the software's range of acceptable distortion proved very wide, as did the gap between that range and what human eyes accept as recognizable.
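The selection criterion described above is simple enough to state as code. This is an illustrative sketch, assuming the API's output has been reduced to an ordered list of (label, score) pairs like the ones quoted by the artists; it is not the duo's actual tooling.

```python
def passes_selection(labels: list[tuple[str, float]], threshold: float = 0.9) -> bool:
    """Return True if 'flower' is the top-ranked label and its
    confidence score exceeds the threshold -- the criterion the
    artists used to pick stock images for the installation."""
    if not labels:
        return False
    top_label, score = labels[0]
    return top_label == "flower" and score > threshold


# Example, using the made-up scores from the article:
daisy = [("flower", 0.978), ("daisy", 0.965)]
print(passes_selection(daisy))  # prints True
```

An image whose first label is ‘daisy’, or whose ‘flower’ score falls at 0.9 or below, would be rejected under this rule.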
The artists have been exploring how computers see for years. In Cloud Face (2012), they tested similar technology by filming cloud formations in the sky and watching to see whether facial-recognition software detected faces in them. In CAPTCHA Tweet, they developed an app that allowed users to compose tweets in CAPTCHA, the familiar challenge-response test that determines whether you’re human, so that machines couldn't read them.
The artists claim that the distinction between computer vision and other optic technologies, like eyeglasses, telescopes, or cameras, is that, “not only can we see through it, but it also sees for us. It’s even getting to see better.” Is it possible that how we'll see in the future will be contingent on the trajectory of computer vision?
Say the artists, “It will become more and more difficult to oppose computer’s opinions as it develops further. We are here talking about if it is a flower or not, but discrepancies in judgement between humans and computers can happen in any areas where computers can play a role (e.g., whether to have a patient get an operation or not). If the judgements can be evaluated, we might just follow the better decision maker regardless if it’s a human or not. But how about the issues that have no definite answers such as ethical and philosophical questions? Are we going to listen to computers if they appear to be smarter or hold onto humanity anyhow?”
To learn more about Shinseungback Kimyonghun, click here.