It’s AI vs. weeds. How multispectral cameras are supplying intelligence to Intelligent Farming.


Precision agriculture, or intelligent farming, has seen exponential growth in recent years. Within this field, the removal of weeds, or wild plants, is of growing importance, because weeds act as competitors, and in some cases parasites, of crops. Unchecked weed growth can restrict the development of crops by taking up nutrients, water, and space in the soil.

Traditional weed removal methods apply herbicides on the assumption that weeds are uniformly distributed across a field. In reality, weed infestation in most agricultural fields is spatially variable to some degree, and detecting these randomly distributed weeds within a crop is a challenging task.

Of course, there have been developments in robotics and mechanical engineering whereby remote-controlled vehicles can now precisely pull out weeds using robotic arms. The disadvantage of such systems is that human monitoring is still required to control the vehicle, identify the weeds, and instruct the robot to pull them out of the crop.

Recent advancements in machine vision, deep learning, and artificial intelligence have created the possibility of replacing the human factor in weed removal systems. But this can only happen if both aspects of the “intelligence” process are addressed: capturing the intelligence data (images) and making decisions based on that data.

In order for the AI system to make a decision on whether or not to remove an object (a weed) from the ground, it first needs the ability not only to optically detect the object but also to classify it as a weed or a crop. Because weeds and crops typically have different spectral characteristics, this is where multispectral cameras can be critical to success, ensuring that the right type of image data is provided by the optical sensor to facilitate accurate classification.
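
To make the classification step concrete, the sketch below shows one possible form such a classifier could take: a small convolutional network in PyTorch that accepts a co-registered 4-channel (RGB + NIR) image patch and outputs crop/weed probabilities. The architecture, the 4-channel input, and the 64 x 64 patch size are illustrative assumptions, not a description of any specific vendor's pipeline.

```python
# Illustrative sketch only: a minimal 4-channel (RGB + NIR) patch classifier.
# Architecture, channel count, and patch size are assumptions, not a vendor pipeline.
import torch
import torch.nn as nn

class WeedCropClassifier(nn.Module):
    def __init__(self, in_channels: int = 4, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # logits for "crop" / "weed"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 co-registered 64x64 patches with 4 spectral channels.
model = WeedCropClassifier()
patches = torch.rand(8, 4, 64, 64)
probs = torch.softmax(model(patches), dim=1)       # per-patch crop/weed probabilities
```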

Machine vision cameras provide real-time images of the scene which can be used for machine learning. This is their biggest advantage over simplified, very low-cost optical sensors, which can provide very accurate spectral data but no full image data. The challenge for machine vision cameras, on the other hand, is to provide multispectral data in a limited number of bands that are relevant for a weeding application.

One possible multispectral approach is to place optical filters over multiple machine vision cameras to achieve the desired separation of light, but this type of system becomes complicated due to the complex optical alignment assembly, complex data handling, and escalating costs.

As weed removal is an outdoor application, the robustness of the optical system is critical. Using assemblies of multiple cameras that must stay optically well aligned in such a challenging environment is not very practical: any disturbance of the optical alignment between the various spectral images can lead to inaccurate machine learning, which in turn reduces the effectiveness of the artificial intelligence process.

The number of bands required for the weed removal process depends on the deep learning/AI methods used. An RGB image-based system uses relative color indices formed from the RGB gray levels; these indices can make the AI process less sensitive to leaf orientation, daylight effects, plant texture, shadowing, and canopy overlap. Combining RGB with NIR wavelengths is more useful still, enabling NDVI analysis, in which the normalized difference of NIR and red reflectance is used to differentiate between weeds and crops. The background reflectance from soil and dead plants can also be exploited for enhanced differentiation based on RGB-NIR wavelength combinations.
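
For reference, NDVI is computed per pixel as the normalized difference (NIR − Red) / (NIR + Red). The NumPy sketch below illustrates the calculation on two co-registered band images; the 8-bit input values and the 0.3 vegetation threshold are illustrative assumptions, not parameters from any specific system.

```python
# Illustrative NDVI sketch: per-pixel normalized difference of NIR and red reflectance.
# The 8-bit inputs and the 0.3 vegetation threshold are assumptions for illustration.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Compute NDVI = (NIR - Red) / (NIR + Red) for co-registered band images."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)   # small epsilon avoids division by zero

# Example with two 8-bit band images of the same scene (values 0-255).
red_band = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
nir_band = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

index = ndvi(nir_band, red_band)
vegetation_mask = index > 0.3   # living vegetation scores high; soil and dead plants score low
```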

A prism-based camera provides high-precision optical alignment between multiple sensors, so that one or two additional NIR bands can be selected alongside RGB. This enables the autonomous weed-removal vehicle to capture multispectral images through a single optical path, with every band viewing exactly the same scene.
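
Because the prism delivers band images that are already co-registered, the planes can simply be stacked into one multi-channel frame, with no warping or registration step. A minimal sketch, assuming four 8-bit planes of equal size delivered by the camera's SDK:

```python
# Illustrative sketch: co-registered prism outputs can be stacked without registration.
# The band names, image size, and zero-filled placeholder planes are assumptions;
# a real camera delivers these planes through its own SDK.
import numpy as np

height, width = 1080, 1920
red   = np.zeros((height, width), dtype=np.uint8)
green = np.zeros((height, width), dtype=np.uint8)
blue  = np.zeros((height, width), dtype=np.uint8)
nir   = np.zeros((height, width), dtype=np.uint8)

# Pixel (y, x) in every plane sees exactly the same point in the scene,
# so a simple stack yields an aligned (H, W, 4) multispectral frame.
multispectral_frame = np.dstack([red, green, blue, nir])
assert multispectral_frame.shape == (height, width, 4)
```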

The newest generation of prism-based multispectral cameras offers the robustness needed to handle the shock, vibration, and temperature conditions of intelligent farming systems, and features high-speed 10GigE interfaces that provide high throughput in terms of the amount of farm area that can be covered within a given time.
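
A rough bandwidth calculation shows why the interface matters. In the sketch below, the resolution, band count, bit depth, and frame rate are all assumed values, not specifications of any particular camera; under these assumptions the raw data stream already exceeds what a 1GigE link can carry but still fits within a 10GigE interface.

```python
# Back-of-the-envelope bandwidth check. Resolution, bit depth, band count, and
# frame rate are illustrative assumptions, not specifications of any camera.

width_px, height_px = 2048, 1536   # assumed sensor resolution per band
bands               = 4            # e.g. R, G, B plus one NIR band
bits_per_pixel      = 8            # assumed 8-bit output per band
frame_rate_fps      = 80           # assumed frame rate

bits_per_frame   = width_px * height_px * bands * bits_per_pixel
data_rate_gbit_s = bits_per_frame * frame_rate_fps / 1e9

print(f"raw data rate ≈ {data_rate_gbit_s:.2f} Gbit/s")
# ≈ 8.05 Gbit/s here: beyond a 1GigE link, but within the capacity of 10GigE.
```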

In addition, the high frame rates or line rates of these cameras play a very important role in the functioning of the artificial intelligence process. Neural networks used in machine learning are very sensitive to any change in the quality of the data being analyzed, and high-speed cameras reduce the effect of external light, such as sunlight or general daylight conditions, on overall image quality. On the backend, AI workflows typically run on an embedded processing unit (e.g., an NVIDIA Jetson Xavier), which can be connected to the camera via an interfacing board (such as PCIe Gen4 to 10GigE) in cases where the camera does not support a USB, MIPI CSI, or 1GigE interface.
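
As a rough illustration of how such a backend loop might be structured (not a description of any specific SDK or deployment), the sketch below pairs a hypothetical grab_frame() placeholder, standing in for whatever camera SDK or GenICam call delivers a co-registered multispectral frame, with a stand-in PyTorch classifier running on the embedded unit.

```python
# Generic acquisition-and-inference loop for an embedded unit (e.g. a Jetson-class device).
# grab_frame() is a hypothetical placeholder for the camera SDK call; the tiny model
# below is only a stand-in for a trained multispectral weed/crop classifier.
import numpy as np
import torch
import torch.nn as nn

def grab_frame() -> np.ndarray:
    """Hypothetical stand-in for a vendor SDK / GenICam acquisition call."""
    return np.random.randint(0, 256, (64, 64, 4), dtype=np.uint8)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
).to(device).eval()

with torch.no_grad():
    for _ in range(100):                              # process 100 frames as a demo
        frame = grab_frame().astype(np.float32) / 255.0
        tensor = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0).to(device)
        weed_prob = torch.softmax(model(tensor), dim=1)[0, 1].item()
        if weed_prob > 0.5:                           # illustrative decision threshold
            pass                                      # trigger the weeding actuator here
```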

Download the datasheet for JAI's 3-CMOS multispectral area scan cameras.

For more information on JAI's prism-based cameras for intelligent farming, please contact JAI.

Contact a JAI camera expert