Industry Insights
As Machine Vision Weeds Out Costs, Expect It to Raise More Crops
POSTED 03/11/2020 | By: Dan McCarthy, Contributing Editor
For millennia, agriculture has helped spread human civilization with the practiced application of seeds, water, fertilizer, and grit. As the global population approaches 9 billion, however, the ancient practice faces the consequences of its success: Not only must today’s industry optimize yields to keep pace with the growing number of hungry mouths across the globe, it must also compete with a growing population for resources such as water and arable land. Also, despite the growth in human numbers, farmers in many regions face the ironic challenge of finding able or willing manual labor to help tend their crops.
As with any high-risk production enterprise with razor-thin and widely variable margins, farming breeds a mind-set that’s hesitant to try new tools and practices. Faced with global pressures, however, farmers have steadily integrated machine vision into their operations — albeit with one eye fixed on the bottom line. There continue to be challenges when trying to apply computational image analysis to an environment as variable as farming. But vision technology has demonstrated it can help improve yields, reduce risk and — with the introduction of machine learning techniques — preserve the practice of farming amid diminishing labor pools.
Low-Cost Flights
Remote sensing is among the more established applications of machine vision technology in agriculture, helping farmers to gather data about the health of their crops from visible, infrared, multispectral, or thermal sensors traditionally mounted onboard satellites or aircraft. More recently, low-flying drones have offered farmers a third option that provides greater operational control and highly targeted imaging of their fields.
Drones are often more economical than other platforms because they don’t require the fueling and licensing needed to put a survey plane in the air. Their low-altitude operation can also make better use of the microbolometer-based thermal cameras favored by agriculture.
In contrast to cryogenically cooled mercury cadmium telluride (MCT) imagers, uncooled microbolometer cameras offer a more compact form factor and lower cost. If they have a drawback, it is their low resolution relative to the visible-spectrum cameras they are often paired with. The sensor formats used most often in agricultural surveys typically capture 320 x 240 pixels or 640 x 512 pixels.
“Depending on what the foliage is, you need a certain resolution to ensure you have multiple pixels over certain features of the crop to not lose valuable information,” says Markus Tarin, President and CEO of MoviTHERM. “In the best-case scenario, microbolometers basically have VGA resolution. When flying at 1,500 feet, there’s not much you can do with that, and flying at lower altitudes means longer survey flights. However, using microbolometer cameras in agriculture is still useful for detecting larger issues, such as irrigation problems.”
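Tarin’s point about altitude can be made concrete with a back-of-the-envelope ground-sample-distance calculation. The 45-degree lens and the altitudes below are illustrative assumptions, not specs from any particular camera:

```python
import math

def ground_sample_distance(altitude_m, h_fov_deg, h_pixels):
    """Width of ground covered by a single pixel, in metres."""
    # Horizontal swath width on the ground, from the lens field of view
    swath_m = 2 * altitude_m * math.tan(math.radians(h_fov_deg / 2))
    return swath_m / h_pixels

# 640 x 512 microbolometer with an assumed 45-degree lens at 1,500 ft (457 m)
gsd_plane = ground_sample_distance(457.2, 45, 640)  # ~0.59 m per pixel
# The same camera on a drone at 120 m: each pixel covers far less ground
gsd_drone = ground_sample_distance(120.0, 45, 640)  # ~0.16 m per pixel
```

At survey-plane altitude, a single pixel spans more than half a metre of field, which is why a VGA-class thermal sensor can flag a dry irrigation zone but not individual plants.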
Longer flights translate into higher out-of-pocket costs to pay the pilot and fuel the aircraft — unless the pilot is you or the aircraft is your drone.
This cost-benefit analysis drives other vision applications in agriculture as well. Minimizing flight time also underlies the adoption of multispectral imaging systems, which gather image data within several wavelength ranges spanning the visible and infrared spectrum. If you’re going to contract a pilot to fly over your fields, you might as well have multiple cameras with different filters and spectral sensitivities onboard to gather as much useful information as possible in a single pass.
Hyperspectral imaging takes this a step further by using prisms or gratings to capture data from multiple spectral bands measuring only a few nanometers. Hyperspectral systems from companies such as Switzerland-based Gamaya are reported to deliver more than 40 spectral measurements to provide insights into weeds, pests, and plant diseases as well as moisture content and general crop health.
Another advantage of gathering data from multiple wavelengths, says Tarin, is that you can then overlay multispectral information to extrapolate what’s unfolding in a field of crops. “So, if the thermal imager tells you that you haven’t irrigated properly in a certain region, another camera operating in the visible spectrum can tell you how that’s affecting chlorophyll level. You get more information in terms of your action plan,” he says.
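The overlay Tarin describes can be sketched with a standard vegetation index such as NDVI, which compares near-infrared and red reflectance as a proxy for chlorophyll-driven vigor. The article doesn’t name the index Tarin has in mind, and the reflectance and temperature values below are invented for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel pair."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Toy 2 x 2 field: co-registered NIR, red, and thermal pixels (illustrative)
nir_band = [[0.60, 0.55], [0.20, 0.58]]
red_band = [[0.08, 0.10], [0.18, 0.09]]
canopy_c = [[24.0, 24.5], [31.0, 24.2]]  # canopy temperature, degrees C

flagged = []
for r in range(2):
    for c in range(2):
        vigor = ndvi(nir_band[r][c], red_band[r][c])
        # Hot AND low-vigor: likely irrigation failure, not just warm weather
        if canopy_c[r][c] > 28.0 and vigor < 0.5:
            flagged.append((r, c))
# flagged -> [(1, 0)]
```

Cross-referencing the bands this way turns two ambiguous maps into one actionable one: the hot, low-NDVI pixel is the spot worth sending a crew to.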
The addition of cloud technology to the agricultural tool kit is another extension of this economic dynamic. Traditionally, aircraft surveys captured image data within a day, but postprocessing, analysis, and delivery might add weeks before the farmer could act on it. Today, data is often digitized and uploaded almost immediately to the cloud in a format that allows farmers to access it on-demand in their fields.
From Silicon Valley to the Breadbasket
Traditional machine vision has flourished in industrial and logistics applications partly because the applications and working environment are more controllable, which makes it easier to design a system according to predictable thresholds. Not so with agriculture applications, given that changing weather conditions present an almost endless degree of variance for the system designer to overcome.
Even deep-learning tools, which are helping tackle tough vision challenges such as surface inspection, defect classification, and inspection of finished assemblies, can face a particularly tough row to hoe in agricultural settings.
Wheat, for example, is not just one crop. In the U.S. alone, there are about 1,000 commercially significant varieties, and more than 30,000 around the world. Training a deep-learning system to detect and identify even a fraction of these would also require the sample set to contend with the effects of different soil compositions, seasonal changes, and growth stages. And this applies only to wheat.
This level of variance hasn’t stopped the march of deep learning in agriculture. A crop of both public and homegrown data is slowly taking root. But another solution may come from another subset of artificial intelligence called deep reinforcement learning (DRL), which, like deep learning, enables a vision system to learn new predictive models autonomously. The difference is that, where deep learning builds on human-in-the-loop training, DRL algorithms allow the system to effectively work toward a desired goal through trial and error. It enables a computer to try different approaches, learn through feedback when an approach delivers a desired result, and then reinforce that approach.
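That feedback loop can be sketched with a toy tabular example. Everything here, the states, actions, and rewards, is invented for illustration, and it is a bandit-style simplification: real DRL systems learn from image input with deep networks, not a lookup table.

```python
import random

random.seed(0)
ACTIONS = ["spray", "skip"]
# The environment rewards treating weeds and leaving crop plants alone
REWARD = {("weed", "spray"): 1, ("weed", "skip"): -1,
          ("crop", "spray"): -1, ("crop", "skip"): 1}

q = {(s, a): 0.0 for s in ("weed", "crop") for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate; exploration probability

for _ in range(1000):
    state = random.choice(("weed", "crop"))
    # Trial: explore occasionally, otherwise exploit the best known action
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # Error signal: feedback nudges the estimate toward the observed reward
    q[(state, action)] += alpha * (REWARD[(state, action)] - q[(state, action)])
```

After enough trials, the learned values favor spraying weeds and skipping crop plants, with no human ever labeling the “right” answer up front.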
The technology underlies the so-called perception tools applied by Stout AgTech, an OEM of intelligent farm machines, to agricultural applications with a high level of variance that to date have made them resistant to machine vision. Stout mounts its DRL-driven vision module on a toolbar that can be towed behind a tractor with a common three-point hitch. The system can then feed instructions to an encoder that activates any number of conventional mechanical tools, depending on the application, be it thinning, weeding, spraying, or fluffing the crop below.
“Deep reinforcement learning allows us to solve an entirely new class of problems,” says Chris Laudando, Managing Director and Cofounder of Stout. “We’re not beholden to the traditional machine vision hardware and software tools or constrained by only having blob detection, concentricity, parallelism, and intersecting lines or pixel counts to infer what’s going on in the world. So that gives us huge flexibility in handling a world that is highly varied, which is the natural world.”
Stout is protective of the details underlying its vision technology, but Laudando claims the sophistication of DRL algorithms enables the use of low-cost CMOS sensors for input, which extends the financial feasibility of Stout’s vision technology in new agricultural applications. For instance, the company is helping to automate organic peanut farming in Georgia, which, because it eschews broad-spectrum chemical sprays, has been courting extinction in a very tight manual labor market.
“In the high plains of Texas where they have a ready supply of manual labor, they can bring in a weeding crew to reduce pressure on an organic crop and make $1,200 an acre,” says Laudando. “Georgia’s organic peanut farmer needs to control weeds manually, which means they only get about $400 an acre.”
Stout’s DRL-driven vision system was quickly able to distinguish viable peanut plants from weeds. In addition to enabling automation of the weeding process, it promises to restore a fading farm tradition in Georgia, he says.
Pitching the same technologies underlying Level 5 autonomous vehicles to the supremely practical agricultural community may seem like long odds. But as vision and machine learning technologies evolve to help automate processes where 10 to 20 people currently make subjective decisions, their financial feasibility on the farm will improve substantially.