
Content Filed Under:

Industry:
Consumer Goods/Appliances and Other

Application:
N/A

Why More Retailers Are Adding Computer Vision to Their Shopping Lists

POSTED 12/17/2019 | By: Dan McCarthy, Contributing Editor


With the approach of the holiday season, many of us hit our favorite retail outlets to shop for expressions of our goodwill toward others. From the perspective of those retail outlets, however, the holiday season is more of a Darwinian battle to the death.

This is particularly true among brick-and-mortar retailers in the U.S., which on average had to support 23 square feet of store space per consumer this year, according to the International Council of Shopping Centers. Contrast this with the retail footage per consumer in the United Kingdom (5 square feet), Spain (4 square feet), and Germany (2 square feet), and you begin to understand why the U.S. retail industry is facing a shakeout, despite strong consumer spending in 2019.

To remain competitive in this environment, retailers are beginning to explore computer vision as a way to reduce risk and operational costs and to optimize in-store traffic. According to a Retail Info Systems (RIS) technology study titled “Retail Accelerates,” only 3% of merchandisers have already leveraged computer vision technology this year, but 40% plan to start or finish implementing the technology within the next 12 months. 

Retailers are leveraging vision technology by enabling cashier-less stores, automating inventory operations, improving security, gathering valuable intelligence about consumer behavior, and offering new and unique customer experiences. 

Simpler Shopping via Complex Technology
E-commerce giant Amazon first entered the brick-and-mortar retail world in 2017 with its acquisition of Whole Foods. It started to reinvent the retail experience a year later when it launched its first cashier-less Amazon Go storefront in Seattle. The company has since opened 18 additional stores in Seattle, Chicago, San Francisco, and New York City, with plans to scale that number to 3,000 by 2021. 

The underlying concept of a Go store is simplicity. Customers entering the store scan the Go smartphone app at a turnstile to gain access. Once inside, they take what they want from the shelves and simply walk out again, without ever stopping at a register or scanning a product. 

Ceiling-mounted vision systems make this possible by identifying what products shoppers add to their bags and billing their accounts accordingly. But complicated challenges needed to be solved to enable such a simple experience. Among them was the need to aggregate signals from different cameras, which all had to be calibrated according to their precise locations. Additional challenges included distinguishing between similar products (such as different flavors of the same brand of ice cream), correctly interpreting each shopper’s actions (for example, taking an object or returning it to the shelf), and tracking people even in an occluded or “tangled” state (that is, beside one another).
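Amazon has not published its calibration pipeline, but the core idea of aggregating detections from calibrated cameras can be illustrated with a planar homography: once each camera's mapping to floor coordinates is known, detections from different cameras can be compared in a shared frame. The matrices, function names, and tolerance below are hypothetical.

```python
import numpy as np

def pixel_to_floor(H, u, v):
    """Project an image pixel (u, v) to floor-plane coordinates
    using a 3x3 homography H estimated during camera calibration."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # divide by the homogeneous coordinate

def same_shopper(H_a, det_a, H_b, det_b, tol=0.5):
    """Two cameras likely see the same person if their detections land
    within `tol` floor units of each other in the shared frame."""
    pa = pixel_to_floor(H_a, *det_a)
    pb = pixel_to_floor(H_b, *det_b)
    return np.linalg.norm(pa - pb) < tol

# Toy calibration: camera A is identity; camera B is shifted 100 px in u.
H_a = np.eye(3)
H_b = np.array([[1.0, 0.0, -100.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

print(same_shopper(H_a, (40.0, 60.0), H_b, (140.0, 60.0)))  # same floor point
```

In a real deployment each homography would come from a calibration step against known floor markers; the point is only that per-camera detections become comparable once expressed in one coordinate system.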

Amazon tracks the location and actions of shoppers through custom vision hardware that captures RGB video before segmenting each image into pixels, grouping pixels into blobs, and labeling blobs as persons or non-persons. As blobs labeled as people navigate the store, Amazon’s vision system is able to track them across multiple video frames with high confidence.
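Amazon's exact pipeline is proprietary, but the pixels-to-blobs-to-labels step it describes can be sketched as a connected-components pass over a foreground mask, with blob area standing in for the learned person/non-person classifier (the grid and threshold below are illustrative):

```python
from collections import deque

def label_blobs(mask, person_area=4):
    """Group foreground pixels (1s) into 4-connected blobs, then label
    each blob 'person' or 'non-person' by area -- a stand-in for a
    learned classifier."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected component.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                label = "person" if len(pixels) >= person_area else "non-person"
                blobs.append((label, pixels))
    return blobs

# A large blob (a shopper) and a single stray foreground pixel (noise).
frame = [[1, 1, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print([label for label, _ in label_blobs(frame)])  # ['person', 'non-person']
```

Tracking a labeled blob across frames then reduces to matching each frame's person blobs against the previous frame's, which is where the "high confidence" association work happens.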

When a customer nears a shelf, deep learning algorithms trained to model a shopper’s articulated limbs recognize when an arm extends toward a shelf. Training the deep learning software was not a simple task given the volume of possible poses that a camera might capture as individuals reach for items in a crowded aisle. Amazon relied on simulators of virtual customers to initially train its deep learning software. Its dataset of training images, however, will likely grow exponentially if the company opens new stores as rapidly as planned. In the meantime, Go stores rely on cameras and weight sensors to help Amazon’s system determine when items are taken from a shelf, restored to it, or even simply pushed into empty spaces in the back. 
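Amazon has not detailed how the camera and weight signals are combined, but a minimal sketch of that kind of fusion might gate a shelf's weight-delta reading on whether vision saw an arm extend toward it (the function name, units, and tolerance are illustrative assumptions):

```python
def classify_shelf_event(weight_delta_g, item_weight_g, arm_extended,
                         tolerance=0.2):
    """Fuse a shelf weight change with a vision-detected reach to decide
    what happened. A negative delta near one item's weight is a take; a
    matching positive delta is a return; anything else is ignored."""
    if not arm_extended:
        return "none"  # no reach seen: treat the weight change as noise
    ratio = weight_delta_g / item_weight_g
    if abs(ratio + 1.0) < tolerance:   # shelf lost ~one item's weight
        return "take"
    if abs(ratio - 1.0) < tolerance:   # shelf gained ~one item's weight
        return "return"
    return "none"

print(classify_shelf_event(-410.0, 400.0, arm_extended=True))   # take
print(classify_shelf_event(+395.0, 400.0, arm_extended=True))   # return
```

Requiring both signals to agree is what lets the system ignore, say, an item pushed toward the back of the shelf, which moves weight around without matching a clean one-item delta.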

Optimizing Customer Value
Most of the retail industry appears willing to cede first-mover advantage on cashier-less store technology to Amazon. Only 7% of retailers report having grab-and-go technology in place now, according to RIS, while another 15% are either already deploying cashier-less stores or plan to open one in the next 12 months.

Walmart—Amazon’s largest brick-and-mortar competitor—is in the latter category. Earlier this year in Dallas, it launched its own grab-and-go concept with the opening of Sam’s Club Now. Walmart’s use of vision technology is not as streamlined as Amazon’s, however. Shoppers at the new store currently rely on the company’s Scan & Go smartphone app to pay for items as they collect them. But last summer Walmart introduced a beta version of the app that leverages a smartphone’s camera to scan products by image.

Meanwhile, Walmart is applying computer vision in a broader pilot project at more than 1,000 of its other stores. There, embedded cameras monitor self-checkout kiosks and traditional registers to minimize theft and accidental mis-scans. The company is not alone. Vision technology and store monitoring go together like plastic and credit cards, and the applications and competitors are too numerous to mention here. 

Perhaps more important, vision is evolving beyond loss prevention into a new source of intelligence about consumer behavior and shopping patterns, enabling retailers to refine their store layouts, better target their promotions, and improve the customer experience.

RetailNext’s business intelligence platform, for example, allows stores to integrate data from either legacy analog and IP cameras or its own proprietary stereoscopic video device to gain insights about store traffic volumes or which displays attract more attention. The company’s software can even enable vision systems to provide full-path analysis of every shopper navigating a store over the course of a day. 

Facial recognition is becoming another source for business intelligence and unique customer experiences. Simpler applications can dynamically change in-store promotions to match the demographic of a nearby customer and/or gather emotional analytics from a customer’s response to an ad. With customer opt-in, facial recognition further allows special VIP or loyalty programs that automatically personalize the shopping experience. For example, when a member of Lolli & Pops’ loyalty program enters the chain’s candy stores, facial recognition algorithms alert store personnel, who can access the customer’s taste profiles and apply analytics software to generate personalized recommendations for them.
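Lolli & Pops has not published its matching approach; facial-recognition loyalty lookups are commonly built on embedding comparison, which can be sketched as a cosine-similarity search over opted-in members (the member names, vectors, and threshold below are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def match_member(face_embedding, enrolled, threshold=0.9):
    """Return the opted-in member whose stored embedding is most similar
    to the camera's face embedding, or None if nothing clears the bar."""
    best_name, best_score = None, threshold
    for name, emb in enrolled.items():
        score = cosine_similarity(face_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy enrolled members (real systems use 128- to 512-dimensional embeddings).
members = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.9, 0.1]}
print(match_member([0.88, 0.12, 0.01], members))  # alice
print(match_member([0.0, 0.0, 1.0], members))     # None: not enrolled
```

The threshold is the privacy lever: a non-member's face simply fails to match anything in the opt-in gallery and no profile is retrieved.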

Automating Inventory
Optimizing inventory management and costs might not wow customers, but for retailers it is vital to remaining competitive and even solvent. According to RIS’s “Retail Accelerates” report, 45% of retailers who invested in analytic technologies this year did so for inventory optimization. Vision-enabled robots are playing an increasing role in capturing the return on these investments.

Walmart is again part of this trend. In April it announced that it had deployed vision-enabled robots from Bossa Nova Robotics as a pilot project in 50 of its stores. The mobile systems employ smart cameras, high-brightness light sources, and precision optics to ensure that shelves remain full, tidy, and error-free by alerting workers to any problems.

Simbe’s Tally robot is another mobile inspection system gaining traction in stores. It captures image data from more than a dozen embedded high-resolution cameras and applies machine learning algorithms to alert staff about out-of-stock products, damaged packaging, or inaccurate pricing. 

Taking inventory of a large supermarket is labor-intensive enough that it is typically a once-a-week job. Simbe claims that Tally can do the work in no more than two hours, and it designed the robot to repeat the task up to three times per day with more than 97% accuracy. By monitoring inventory more quickly and efficiently than human workers and by taking over other mundane tasks, Tally robots can free up store workers to focus on restocking and serving customers.

Midwest grocer Schnuck Markets recently upped its deployment of Tally robots; after starting with three stores, it now uses them in nine of its 115 outlets. Giant Eagle, which operates 470 supermarkets from Indiana to Maryland, is sampling the bots at three of its locations. 

It is still too early to determine the potential return on investment that vision-enabled robots provide to retailers. But research from industry analyst IHL Group notes that out-of-stock items could be costing retailers nearly $1 trillion in annual sales, and consumers cite empty shelves and absent staff as the leading reasons for giving up on finding an item.

This makes a compelling case for retailers to stock up on vision-enabled robots to help lower the floor on operational costs. And vision technology is also helping to raise the ceiling for revenues by introducing cashier-less stores, improved business intelligence, and optimized customer experiences. As competition for consumer attention heats up in the industry, the case for vision technology will only grow among savvy retailers.