Industry Insights
From Creation to Curation, Machine Vision Is Art’s Technical Patron
POSTED 03/09/2015 | By: Winn Hardin, Contributing Editor
Machine vision is “boring.” Everyone knows this. It’s an enabling technology that helps robots build cars and empowers Google to drive them. It saves Super Bowl parties by finding bad pixels on an 80-inch OLED flat-panel TV before the big game. It locates hidden cancers in medical patients, saving lives, and helps to protect soldiers from the evils of the night.
Computer vision, on the other hand, is exciting. It must be, because artists, video-game makers, and even museum curators all use “computer vision” techniques to create very cool stuff. It looks a lot like machine vision because these applications use digital cameras and image-processing software on a PC as part of an interactive art exposition, a video game, or a new tool for classifying lost masterpieces. But it’s not machine vision. It’s cooler.
Call it what you will, but machine vision and its cool cousin computer vision are invading every part of the human experience.
Painting With Light
Machine vision engineers don’t have to feel like newbies when it comes to making art. Videoplace, developed between 1969 and 1975, is among the first systems to use a camera, image-processing software, and a display to explore interactive art. Participants stood in front of a backlight, and their silhouettes were digitized and analyzed. The system allowed for up to 50 interactions, including multi-gesture drawing. Myron Krueger originally developed Videoplace to illustrate how people could interact with fledgling computer systems.
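At its core, the Videoplace pipeline reduces to isolating the participant’s silhouette from a bright backdrop and handing its outline to whatever interaction logic drives the display. Below is a minimal sketch of that first step, assuming OpenCV 4 in Python and any attached webcam; it illustrates the idea rather than Krueger’s original hardware.

```python
# Minimal silhouette extraction in the spirit of Videoplace (an illustration,
# not Krueger's implementation). Assumes OpenCV 4 and an attached webcam.
import cv2

cap = cv2.VideoCapture(0)                      # any available camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Against a backlight the participant appears dark, so a fixed threshold
    # separates the silhouette from the bright background. 100 is arbitrary.
    _, silhouette = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    # The silhouette's contours are what gesture logic would consume.
    contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print(f"{len(contours)} blob(s) found in the scene")
cap.release()
```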
Currently, there is no shortage of artists picking up the digital thread as art-focused programming environments such as Processing, OpenFrameworks, Max/MSP/Jitter, Quartz Composer, Cinder, and vvvv come online.
For example, Refik Anadol used vvvv software to develop Visions of America: Amériques, a multimedia installation recently shown at Frank Gehry’s Walt Disney Concert Hall in Los Angeles. It included a live orchestral performance of French-born composer Edgard Varèse’s Amériques. During the performance, software analyzed the sound and painted the inside of the auditorium with light and shapes based on different audible parameters. The projected shapes were chosen based on automated recognition of the conductor’s movements during the performance.
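Anadol’s vvvv patch itself is not public, but the general recipe described (analyze the live audio, then map spectral features onto visual parameters) can be sketched briefly. The following Python/NumPy snippet is a hedged illustration only; the band cutoffs, scaling constants, and parameter names are assumptions made for the example, not details of the installation.

```python
# Map a short mono audio buffer to a handful of visual parameters.
# All cutoffs and scale factors below are illustrative assumptions.
import numpy as np

def audio_to_visual_params(buffer, sample_rate=44100):
    windowed = buffer * np.hanning(len(buffer))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(buffer), 1.0 / sample_rate)
    low = spectrum[freqs < 250].sum()            # rumbling low end
    high = spectrum[freqs > 2000].sum()          # bright, percussive energy
    total = spectrum.sum() + 1e-9                # avoid division by zero
    return {
        "brightness": min(1.0, total / 1e4),     # louder -> brighter wash
        "shape_count": int(10 * high / total),   # more highs -> busier shapes
        "spread": float(low / total),            # more lows -> larger forms
    }

print(audio_to_visual_params(np.random.randn(2048)))
```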
Anadol’s work is one of many projects that have leveraged the low cost and broad software support of Microsoft’s Kinect, which combines a projected light pattern, a digital camera, and software to determine a subject’s pose (here, the conductor’s) in 3D space. The PlayStation 4’s new camera is also gaining traction thanks to its high frame rates and low cost ($60). This doesn’t mean that industrial cameras don’t have a place in artistic endeavors, however.
Industrial camera manufacturer Point Grey (Richmond, British Columbia, Canada) has been very effective in an area it calls “Prosumer & Entertainment.” Recently, Point Grey’s Ladybug multi-sensor spherical camera helped Brasil360 create interactive virtual experiences built around Brazil in advance of the FIFA World Cup. Point Grey’s monochrome Firefly camera has been named by the digital art blog Creative Applications Network as a good choice for artists looking for HD-quality video feeds for their projects. The blog also discusses the use of near-infrared, thermal, and time-of-flight (ToF) cameras as unique tools for developing interactive projects. Other resources, such as Jorge C.S. Cardoso’s presentation on how to use differencing, thresholding, color, and object-tracking algorithms, are becoming more prevalent as machine vision/computer vision gains traction in the artistic world.
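Two of the techniques Cardoso’s presentation names, frame differencing and thresholding, are straightforward to prototype. The sketch below, again assuming OpenCV 4 in Python and any attached webcam, turns inter-frame pixel changes into a binary motion mask and fires a crude trigger an installation could react to; the threshold values are illustrative assumptions, not values taken from the presentation.

```python
# Frame differencing + thresholding as a simple motion trigger.
# Thresholds (25, 500) are arbitrary illustrative choices.
import cv2

cap = cv2.VideoCapture(0)                        # any available camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

for _ in range(300):                             # look at ~300 frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Differencing: which pixels changed since the previous frame?
    diff = cv2.absdiff(gray, prev_gray)
    # Thresholding: keep only strong changes as a binary motion mask.
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(motion) > 500:           # crude "someone moved" trigger
        print("motion detected")
    prev_gray = gray

cap.release()
```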
Vision ‘Cures’ Museum Woes
More evidence of machine vision converging on the artistic world can be found in recent conference events devoted to automated imaging for art history and analysis. Last year, the European Conference on Computer Vision in Zurich hosted the workshop “Where Computer Vision Meets Art” (VISART), while Chicago hosted The Humanities and Technology Camp (THAT Camp) forum on digital art history. Both events brought together computer vision experts and art historians to discuss how image-processing techniques could aid art analysis and curation.
During VISART, presentations covered how computer vision can help researchers with 3D reconstruction from paintings, authentication, and forensics; computer vision for cultural heritage; and interactive 3D media and immersive environments.
At THAT Camp, John Resig, a JavaScript developer and author on the subject, discussed his recent work with the Frick Art Reference Library. By applying TinEye, a commercially available online image-matching engine, Resig was able to help Frick art historians identify relationships for up to 88% of the anonymous Italian artworks in their library. According to Resig, “It’s important to note that this particular archive is likely one of the most challenging cases for using computer vision techniques in general (other archives are likely to have a much higher rate of match). The fact that most of the images in this archive were black-and-white (lacking additional information about the colors of the work) was a major hindrance to improved matching. The less data the analysis engine has to work with, the harder it is to make a successful match. Additionally, many of the photographs in the set had drastically different lighting between shots, making it very hard to do comparisons. Presumably, another archive that had consistent lighting would also fare much better.”
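TinEye’s matching algorithm is proprietary, so the sketch below stands in with a simple average-hash comparison using Pillow; it is an analogy for image matching, not Resig’s or TinEye’s method, and the file names and match threshold are hypothetical. Like the Frick photographs, it works on grayscale data, and a hash this crude is also sensitive to the inconsistent lighting Resig describes, which hints at why such archives are hard cases.

```python
# Average-hash comparison of two archive photographs (illustrative only;
# TinEye's actual matching algorithm is proprietary). Requires Pillow.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to size x size grayscale, then hash each pixel vs. the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same artwork."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical file names for two reproductions of possibly the same work.
h1 = average_hash("anonymous_italian_001.jpg")
h2 = average_hash("candidate_match.jpg")
print("possible match" if hamming(h1, h2) <= 5 else "no match")
```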
If it’s true that a picture is worth a thousand words, then image-processing tools that can quantify both the truth and beauty of an artwork speak more than volumes; they are priceless.