Vision Captures the Next Stage in Smart City Development
By Holly O'Dell
Whether you’re talking about the joys of flying cars or a more dystopian view of decay, science fiction has influenced our ideas of how future cities should look. In reality, technologies such as embedded vision will enable future urban environments in the form of “smart cities” that manage traffic, parking, surveillance, and more. In fact, many municipalities are already well on their way toward this future. But even they are only on the cusp of capabilities that the smart city promises.
“In smart cities, we will not see standard off-the-shelf or traditional machine vision cameras to provide solutions,” says Tim Coggins, Head of Sales and FAE Embedded Imaging Module Business – Americas at Basler, Inc. “Rather, we will see embedded platforms that rely on massive networks of image and other sensor capture requiring real-time connectivity with smart processing on the edge. Smart city solutions rely heavily on software rather than hardware.”
Edge computing allows data produced by Internet of Things (IoT) devices to be processed or analyzed at or near its source before it is sent to the cloud or data center. Coggins uses facial recognition as an example.
“We may capture a human face at 5 MP resolution, but we can’t transmit that amount of data to the cloud because it would be cost prohibitive,” Coggins says. “So preprocessing needs to be done on the edge to get metadata, which is sent to the cloud and processed there with smaller packets of information, and we get our results based on that.”
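The bandwidth arithmetic behind Coggins's point can be sketched in a few lines. The pipeline below is purely illustrative (the function names and packet fields are assumptions, not Basler's API): rather than uploading an uncompressed 5 MP frame, the edge device transmits only a small metadata packet.

```python
import json

# Hypothetical edge pipeline: instead of uploading an uncompressed 5 MP
# frame, transmit only the metadata extracted on-device and let the cloud
# work on the small packet. Names and fields are illustrative.

def frame_size_bytes(width: int, height: int, channels: int = 3) -> int:
    """Approximate size of one uncompressed frame."""
    return width * height * channels

def to_metadata_packet(detection: dict) -> str:
    """Reduce an on-device detection to a compact packet for the cloud."""
    return json.dumps({
        "ts": detection["ts"],
        "label": detection["label"],           # e.g. "face"
        "bbox": detection["bbox"],             # pixel coordinates only
        "feature_id": detection["feature_id"]  # reference to a compact embedding
    })

raw = frame_size_bytes(2592, 1944)  # a 5 MP sensor: ~15.1 MB per frame
packet = to_metadata_packet({
    "ts": "2019-06-01T12:00:00Z",
    "label": "face",
    "bbox": [412, 220, 96, 96],
    "feature_id": "e4f1c2",
})
print(raw, len(packet.encode()))  # the packet is orders of magnitude smaller
```

The raw frame runs to roughly 15 MB; the metadata packet is under 200 bytes, which is why preprocessing at the edge makes cloud transmission economical.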
That’s the premise behind Boulder AI, whose area-scan DNNCam performs machine learning at the edge. “By running neural networks inside of the device, we can extract visual information that allows the end-user to explore their data without the need for video management server software,” says Darren Odom, CEO of Boulder AI. “This software performs some analysis but is expensive to maintain because it often requires an on-site IT department.”
Not only does this approach save money for the customer, but a single camera can perform multiple tasks. “Other smart cameras on the market usually perform a specific function, such as facial recognition or license plate reading,” Odom says. “With the DNNCam, a parks and rec department can count how many people are using a park, while a traffic management camera can count the number of scooters on the road.”
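The multi-task idea reduces to a simple pattern: one on-device detector, many consumers of its per-class counts. The sketch below assumes the detector emits class labels (the labels and counts are made up for illustration; this is not Boulder AI's actual output format).

```python
from collections import Counter

# Sketch: one camera, multiple tasks. Given per-frame class labels from an
# on-device neural network, different departments consume different counts
# from the same stream. Labels and values are illustrative.
detections = ["person", "person", "scooter", "car", "person", "scooter"]

counts = Counter(detections)
park_usage = counts["person"]        # parks & rec: people using the park
scooter_traffic = counts["scooter"]  # traffic management: scooters on the road
print(park_usage, scooter_traffic)   # 3 2
```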
The system can also go as granular as timing stoplights based on pedestrian traffic at different times of day, a prototype of which a major metropolitan area is currently testing. “Already this year, there have been 18 deaths between cars and people in the city,” Odom says. To eliminate those pedestrian deaths and injuries, the camera communicates with the signal controller to switch the light based on pedestrian presence. It also gives people who are disabled or use wheelchairs additional time to cross the street.
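The controller logic described above can be sketched as a small decision function. Everything here (phase durations, the extension for wheelchair users) is an illustrative assumption, not the deployed prototype's actual timing plan.

```python
# Illustrative signal-controller logic: hold or skip the walk phase based on
# pedestrian presence detected by the camera, and grant extra crossing time
# when a wheelchair user is detected. Durations are assumptions.

BASE_WALK_S = 20   # baseline walk-phase duration, seconds
EXTENSION_S = 10   # extra time for a detected wheelchair user

def walk_time(pedestrians_waiting: int, wheelchair_detected: bool) -> int:
    """Return the walk-phase duration in seconds for the next cycle."""
    if pedestrians_waiting == 0:
        return 0  # skip the walk phase entirely; keep vehicle flow moving
    duration = BASE_WALK_S
    if wheelchair_detected:
        duration += EXTENSION_S
    return duration

print(walk_time(4, False))  # 20
print(walk_time(2, True))   # 30
print(walk_time(0, False))  # 0
```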
The camera is mostly used in the visible range, but it does have a shortwave infrared (IR) filter that can be turned on and off remotely. With external IR illumination, the camera can see long distances at night, enabling applications such as high-speed license plate reading. Boulder AI is also developing a 300 MP camera that can capture much higher resolutions and connect with other sensors to extract data about how many people are inside of cars or to understand cell phone usage in a population, for example.
Staying Safe on the Road
Another project that puts smart traffic management into practice is the North Avenue Smart Corridor, a project launched by the City of Atlanta and the Georgia Institute of Technology that uses connected technology and a real-time simulation model to improve signal timing and reduce emissions.
The application employs six cameras mounted at intersections that detect, in one-minute increments, the number of cars on the road, their speed, the direction they are going, which turning lanes they are using, and more. The connected cameras capture and feed the data into a real-time model of the corridor developed by Georgia Tech. The simulation runs at a 0.10-second time step, allowing for a rapid exchange of information between the model and the road.
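A fixed-timestep loop of this kind can be sketched in miniature. The queue update rule and all numbers below are illustrative assumptions, not the actual Georgia Tech model; the point is only the structure of a 0.10-second step that ingests the latest camera counts and advances the state.

```python
# Minimal fixed-step loop in the spirit of the corridor's real-time model.
# The update rule and numbers are illustrative, not Georgia Tech's simulation.

DT = 0.10  # simulation time step, seconds

def advance(queue_len: float, arrivals: float, discharge_rate: float) -> float:
    """Advance a single-approach queue model by one 0.10 s step."""
    departures = min(queue_len, discharge_rate * DT)
    return queue_len + arrivals - departures

q = 10.0  # vehicles queued at the stop line
for arrivals in [0.5, 0.2, 0.0]:  # vehicles detected per step by the cameras
    q = advance(q, arrivals, discharge_rate=2.0)  # 2 vehicles/s saturation flow
print(round(q, 1))  # 10.1
```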
“Instead of pre-programming signal lights, we can update the status in real-time for drivers so they know what time they will have a green, yellow, or red light,” says Angshuman Guin, senior research engineer at Georgia Tech’s School of Civil and Environmental Engineering.
The near-instant updates can be communicated to the car’s computer, as well as roadside units. “Safety messages allow the vehicle to know the location of other vehicles around it,” Guin says. “This information is used to prevent crashes from happening. If a vehicle runs a red light or is about to hit another vehicle, an alert warns the driver of that vehicle as well as other vehicles.” Vehicles equipped with advanced driver-assistance systems or autonomous operation systems can even hit the brakes to prevent the potential crash.
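The safety-message idea amounts to each vehicle broadcasting its position and velocity, and each receiver flagging nearby vehicles that are closing fast. The sketch below is a hedged simplification (the message fields and thresholds are assumptions, not a real V2X message set such as SAE J2735).

```python
import math

# Simplified safety-message check: flag a potential conflict when another
# vehicle is within a distance threshold and closing above a speed threshold.
# Message fields and thresholds are illustrative assumptions.

def warn(own, other, dist_threshold_m=30.0, closing_speed_mps=5.0) -> bool:
    dx = other["x"] - own["x"]
    dy = other["y"] - own["y"]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True
    # relative speed projected onto the line between the vehicles
    # (positive means the gap is shrinking)
    rel_vx = other["vx"] - own["vx"]
    rel_vy = other["vy"] - own["vy"]
    closing = -(rel_vx * dx + rel_vy * dy) / dist
    return dist < dist_threshold_m and closing > closing_speed_mps

a = {"x": 0.0, "y": 0.0, "vx": 10.0, "vy": 0.0}   # heading east at 10 m/s
b = {"x": 20.0, "y": 0.0, "vx": -5.0, "vy": 0.0}  # 20 m ahead, heading west
print(warn(a, b))  # True: 20 m apart and closing at 15 m/s
```

A vehicle with driver assistance could treat a `True` result as the trigger for the automatic braking the article describes.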
In addition to improved travel times, the smart corridor project has seen a reduction in vehicle crashes. Georgia Tech is now calibrating a working version of the model and preparing to integrate it into the corridor to consider varying traffic conditions. The project will also introduce an autonomous shuttle this year.
It Takes a Village
One critical takeaway from the smart city concept is that embedded vision is not an island in the larger ecosystem. The connected system a smart city requires comprises many different sensors producing significant amounts of data, including infrastructure monitors, smart parking systems, and waste management devices, and it must efficiently capture, process, and share that data to avoid creating bottlenecks.
“The growth here is not just for sensor-based embedded vision solutions but the need for better overall connectivity between these devices through more effective data management, and more processing synergy between cloud and edge devices,” Coggins says. “Data management and the software itself will likely provide a much higher revenue stream for companies than the hardware.”
To put together a smart city, a municipality will need to rely on an alliance of companies, rather than individual component makers. “With the smart city, you are going to be retrofitting an entire city with new technology and components, and it is a very large undertaking,” Coggins says. “You want to make sure you have a long lifecycle built into such a daunting task. Any company coming into this space needs to very strongly understand complete systems, understand the weak link within those systems, and be available to support those systems.”
With 5.5 million residents and another 3 million expected to move to the state by 2050, Colorado is planning for the future with the Colorado Smart Cities Alliance, a multijurisdictional coalition of public, private, and research leaders committed to accelerating the adoption of smart city technologies that improve quality of life and economic opportunities.
Arrow Electronics, Inc., will act as a technology guide for the alliance, helping the various sectors select the right vendor for the application, such as automated parking or infrastructure and asset management. In addition to embedded vision, other smart city sensors include particle monitoring for air quality, infrastructure monitors that anticipate repairs, and microphones trained to detect gunshots.
Launching a smart city can be overwhelming, but Arrow addresses those complexities with its new Colorado Open Lab, an IoT engineering lab where the public and private sectors can develop and test smart technologies before deploying them.
“Cities need to start small to understand the biggest problem that will get the best payback in a certain amount of time, rather than developing a master plan and trying to execute everything,” says Ashish Parikh, Vice President, IoT Platforms and Solutions at Arrow. “We’ve engaged cities that launch small pilot deployments focusing on the architecture of where all the data comes together, how it’s going to be managed, how it’s treated with regard to privacy and security, and how that data will be analyzed. You need to have that understanding right up front, but you also need to make sure that you are learning from the data and integrating it to get to your end goal.”
Among all the challenges related to launching a smart city, protecting the privacy of those who appear in images remains the stickiest. “Basler has a very strict ethical code, and there are certain applications in which we will decline to participate,” Coggins says. “If your face is already affiliated with a credit card, for example, that data is already out there. There’s work that needs to be done in how we handle the transmission of data, not just the collection of it.”
Boulder AI follows the standard of the European Union’s General Data Protection Regulation. “You can extract visual information without sending something personally identifiable,” Odom says. Police may want to review footage of a fender bender for evidence, or perform a privacy-centric license plate search, meaning the department must have a specific plate number in mind when conducting a search rather than loading license plates en masse.
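One way such a privacy-centric search could be implemented is with salted hashes: the system stores only hashed tokens of observed plates, so it can answer "was this specific plate seen?" but the database cannot be dumped as readable plate numbers. This is an assumption for illustration, not Boulder AI's actual scheme.

```python
import hashlib

# Illustrative privacy-centric plate search (an assumption, not Boulder AI's
# actual design): store only salted hashes of observed plates, so the store
# supports exact-match queries for a known plate but holds no readable plates.

SALT = b"per-deployment-secret"  # illustrative; a real system would manage keys

def plate_token(plate: str) -> str:
    return hashlib.sha256(SALT + plate.upper().encode()).hexdigest()

observed = {plate_token(p) for p in ["ABC1234", "XYZ9876"]}

def seen(plate: str) -> bool:
    """Answer 'was this specific plate observed?' without storing plate text."""
    return plate_token(plate) in observed

print(seen("abc1234"), seen("QQQ0000"))  # True False
```

Because the query side must supply the exact plate, bulk enumeration of the stored plates is not possible from the tokens alone.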
As they sit on the edge of larger deployment, smart cities tend to generate more questions than answers at this point. Consider the nature of the city itself. “Why do people go to the city?” Coggins asks. “They go for tourism, to work, to shop, to attend a sporting event. And the biggest consideration for smart city infrastructure is, how do we manage the flow? How do we manage the people? What are we doing to make things safer or more environmentally friendly?”
Although vision will play an important role in providing answers, the city of the future will rely on an entire ecosystem of sensor-based technology.