How Embedded Vision Plays a Key Role in Smart City Development
New York is the “City that Never Sleeps”, and San Francisco is the “City by the Bay”. As technology continues on its current trajectory, someday soon we’ll have the “Smartest City in the Land”, and it will be thanks to embedded vision systems.
Currently, vision systems are used in cities around the world for various applications, from traffic control to law enforcement. One issue with these systems is their expense: they are specialized for limited applications and run as separate systems that require costly infrastructure. With embedded vision systems for smart city applications, the idea is to replace those stand-alone systems, embed vision sensors in everything from street lights to stop signs, and move the majority of the image processing to the sensor itself. With lower-cost sensors deployed in more locations, lower resolution is offset by increased coverage and shorter distances to the subject. Further, with processing performed at the sensor, less data is transmitted, allowing more sensors to communicate with less overall data traffic.
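The data-traffic argument can be made concrete with a back-of-the-envelope comparison. The sketch below contrasts a conventional camera streaming raw video against an embedded sensor that processes frames locally and transmits only per-event metadata; the resolution, frame rate, and event sizes are illustrative assumptions, not figures from any particular product.

```python
# Hypothetical comparison: raw video streaming vs. on-sensor processing
# that transmits only detection metadata. All numbers are assumptions
# chosen for illustration.

def raw_stream_bytes_per_hour(width, height, bytes_per_pixel, fps):
    """Bytes per hour for an uncompressed video stream."""
    return width * height * bytes_per_pixel * fps * 3600

def metadata_bytes_per_hour(events_per_hour, bytes_per_event):
    """Bytes per hour when only event metadata (object type,
    position, timestamp) is sent instead of full frames."""
    return events_per_hour * bytes_per_event

# Assumed sensor: 720p RGB at 15 fps vs. 600 detection events/hour, 64 B each
raw = raw_stream_bytes_per_hour(1280, 720, 3, 15)
meta = metadata_bytes_per_hour(600, 64)

print(f"raw stream: {raw / 1e9:.1f} GB/hour")
print(f"metadata:   {meta / 1e3:.1f} KB/hour")
print(f"reduction:  {raw / meta:,.0f}x")
```

Even before video compression is considered, moving the processing onto the sensor shrinks the transmitted data by several orders of magnitude, which is what makes dense sensor deployments feasible on modest network infrastructure.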
Embedded vision systems can provide a smart city with numerous functions, from controversial ones like facial recognition for law enforcement to mundane ones like finding open parking spots at the local supermarket or saving energy by dimming street lights when no one is around. If you’ve ever been to Downtown Disney, you’ve seen their “smart” garages, which use a sensor at each parking space to determine where open spaces are. With embedded vision sensors, those open-space sensors could be incorporated into the garage lights, allowing a single sensor to handle not only open-space detection but also energy conservation for the lights, personnel safety at night, and accident detection during operating hours.
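The one-sensor-many-functions idea described above can be sketched as a small dispatch hub. The detection format, class, and field names here are hypothetical, invented for illustration; a real embedded sensor would run an on-device detector and publish events of roughly this shape to each subscribing function.

```python
# Hypothetical sketch: one embedded vision sensor in a garage light
# feeding multiple functions (parking guidance and energy conservation)
# from the same detection stream. Names and fields are assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    space_id: int          # parking space in the sensor's field of view
    occupied: bool         # is a vehicle present in that space?
    person_present: bool   # is a pedestrian visible anywhere in frame?

class GarageSensorHub:
    """Routes one sensor's detections to multiple garage functions."""

    def __init__(self):
        self.open_spaces = set()
        self.lights_on = False

    def handle(self, d: Detection):
        # Function 1: open-space tracking for parking guidance
        if d.occupied:
            self.open_spaces.discard(d.space_id)
        else:
            self.open_spaces.add(d.space_id)
        # Function 2: energy conservation, light only when someone is near
        self.lights_on = d.person_present

hub = GarageSensorHub()
hub.handle(Detection(space_id=12, occupied=False, person_present=True))
hub.handle(Detection(space_id=14, occupied=True, person_present=False))
print(sorted(hub.open_spaces))
print(hub.lights_on)
```

Additional functions such as nighttime safety alerts or accident detection would subscribe to the same stream, which is the economic point: one sensor installation amortized across several services.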
Current vision systems would not work in these applications, as they are costly per unit and require a large amount of infrastructure to operate. Embedded vision systems work for two reasons: they use lower-cost sensors to reduce per-unit cost, and they perform image processing at the sensor to reduce data traffic. By utilizing edge computing, embedded sensors handle the majority of processing on the device itself, which cuts data transmission and infrastructure requirements and greatly reduces the cost of installation. With these two advantages, embedded vision systems allow for increased coverage and increased capabilities.
Smart cities will create an environment that allows citizens to perform everyday tasks efficiently and effectively, removing some of the annoyances of city living we experience today and improving our quality of life. Vision systems will be the backbone of these advancements, and embedded vision systems will be the motive force behind them.