How Embedded Vision Plays a Key Role in Smart City Development
New York is the “City that Never Sleeps”, and San Francisco is the “City by the Bay”. At the pace technology is moving, someday soon we’ll have the “Smartest City in the Land”, and embedded vision systems will be the reason.
Currently, vision systems are used in cities around the world for applications such as traffic control and law enforcement. One issue with these systems is their expense: they are specialized for narrow applications and run as stand-alone systems that require costly infrastructure. The idea behind embedded vision for smart cities is to replace those stand-alone systems by embedding vision sensors in everything from street lights to stop signs and moving the majority of the image processing to the sensor itself. With lower-cost sensors deployed in more locations, lower resolution is offset by broader coverage and shorter distances to the subject. Further, because processing happens at the sensor, far less data needs to be transmitted, allowing more sensors to communicate while generating less overall data traffic.
Embedded vision systems can provide a smart city with numerous functions, from controversial ones like facial recognition for law enforcement to mundane ones like finding open parking spots at the local supermarket or saving energy by dimming street lights when no one is around. If you’ve ever been to Downtown Disney, you’ve seen their “smart” garages, which use a sensor at each parking space to determine where open spaces are. With embedded vision, that per-space sensing could be built into the garage lights themselves, allowing a single sensor to handle not only open-space detection but also energy conservation for the lights, personnel safety at night, and accident detection during operating hours.
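To make the garage example concrete, here is a minimal sketch of how one embedded camera could serve both open-space detection and lighting control. It assumes an overhead camera feed, hand-measured parking-space rectangles, and illustrative thresholds; the coordinates, camera index, and space count below are hypothetical, not taken from any real deployment.

```python
# Minimal sketch: one embedded camera monitoring parking spaces and garage lighting.
# The parking-space rectangles, camera index, and thresholds are illustrative assumptions.
import cv2

# Hypothetical parking-space regions as (x, y, width, height) in pixels.
PARKING_SPACES = [(40, 120, 90, 180), (150, 120, 90, 180), (260, 120, 90, 180)]

def analyze_frame(frame, background_subtractor):
    """Return the indices of occupied spaces and whether any motion was seen."""
    mask = background_subtractor.apply(frame)
    occupied = []
    for index, (x, y, w, h) in enumerate(PARKING_SPACES):
        roi = mask[y:y + h, x:x + w]
        # Treat a space as occupied if enough foreground pixels fall inside it.
        if cv2.countNonZero(roi) > 0.2 * w * h:
            occupied.append(index)
    motion_detected = cv2.countNonZero(mask) > 500
    return occupied, motion_detected

def main():
    capture = cv2.VideoCapture(0)  # camera index is an assumption
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        occupied, motion = analyze_frame(frame, subtractor)
        # One sensor, several outputs: an open-space count for signage,
        # and a motion flag for dimming the garage lights when no one is around.
        open_spaces = len(PARKING_SPACES) - len(occupied)
        print(f"open spaces: {open_spaces}, lights: {'on' if motion else 'dimmed'}")
    capture.release()

if __name__ == "__main__":
    main()
```

The point of the sketch is that the same frame analysis feeds multiple city services, so the cost of the sensor is amortized across them.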
Current vision systems would not work in these applications, as they are costly per unit and require extensive infrastructure to operate. Embedded vision systems work for two reasons: they use lower-cost sensors to reduce per-unit cost, and they perform image processing at the sensor to reduce data traffic. By pushing the bulk of the processing to the edge, embedded sensors transmit far less data, which lowers infrastructure requirements and greatly reduces the cost of installation. Together, these two advantages allow for broader coverage and expanded capabilities.
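A rough back-of-the-envelope sketch shows why processing at the sensor cuts data traffic so sharply: instead of streaming raw frames to a central server, the sensor sends a compact event. The frame dimensions, frame rate, and event schema here are illustrative assumptions, not measurements from any specific system.

```python
# Minimal sketch comparing raw-frame streaming with an on-sensor summarized event.
# Frame dimensions, rates, and the event schema are illustrative assumptions.
import json

FRAME_WIDTH, FRAME_HEIGHT, BYTES_PER_PIXEL = 1280, 720, 3
FRAMES_PER_SECOND = 15

def raw_stream_bytes_per_second():
    """Bandwidth needed to ship every raw frame to a central vision system."""
    return FRAME_WIDTH * FRAME_HEIGHT * BYTES_PER_PIXEL * FRAMES_PER_SECOND

def edge_event_bytes(open_spaces, motion):
    """Size of a single summarized event produced at the sensor."""
    event = {
        "sensor_id": "garage-light-17",  # hypothetical identifier
        "open_spaces": open_spaces,
        "motion": motion,
    }
    return len(json.dumps(event).encode("utf-8"))

if __name__ == "__main__":
    raw = raw_stream_bytes_per_second()
    event = edge_event_bytes(open_spaces=12, motion=False)
    print(f"raw stream: ~{raw / 1e6:.1f} MB/s per camera, continuously")
    print(f"edge event: {event} bytes, sent only when the state changes")
```

Even with generous assumptions, a summarized event is a few dozen bytes sent occasionally, versus tens of megabytes per second of continuous video, which is why many more edge sensors can share the same network.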
Smart cities will create an environment that lets citizens perform everyday tasks efficiently and effectively, removing some of the annoyances of city living we experience today and improving our quality of life. Vision systems will be the backbone of these advancements, and embedded vision will be the motive force behind them.