In 2018, ERDC began examining how edge computing could optimize computation execution time and resource use, Garton says.
The research center has also looked at feeding information gathered via edge computing into synthetic training environments, enabling soldiers to train on base instead of traveling. These virtual training scenarios often integrate live environmental data, such as water conditions, terrain maps and drone imagery, to represent a situation more accurately.
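The pattern ERDC describes is essentially a data-merge step: feeds collected at the edge are folded into the scenario definition before the simulation renders it. Below is a minimal sketch of that idea in Python; the feed names and scenario structure are illustrative assumptions, not ERDC's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class TrainingScenario:
    """A simplified synthetic-training scenario definition (illustrative only)."""
    terrain_map: str = "baseline_terrain_v1"
    environment: Dict[str, float] = field(default_factory=dict)
    overlays: Dict[str, str] = field(default_factory=dict)


def fetch_edge_feeds() -> Dict[str, object]:
    """Stand-in for data gathered by edge nodes in the field
    (water conditions, updated terrain, drone imagery)."""
    return {
        "water_level_m": 2.7,
        "terrain_map": "surveyed_terrain_2024_06",
        "drone_imagery": "uav_pass_0412.tif",
    }


def apply_edge_data(scenario: TrainingScenario, feeds: Dict[str, object]) -> TrainingScenario:
    """Fold live edge-collected data into the scenario before it is rendered."""
    scenario.environment["water_level_m"] = float(feeds["water_level_m"])
    scenario.terrain_map = str(feeds["terrain_map"])
    scenario.overlays["drone_imagery"] = str(feeds["drone_imagery"])
    return scenario


if __name__ == "__main__":
    scenario = apply_edge_data(TrainingScenario(), fetch_edge_feeds())
    print(scenario)
```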
ERDC has found that the technology will likely require multiple connectivity options, because specific resources and needs will vary based on each application.
“One of the things we were looking at was sending information to a Microsoft HoloLens headset for display. In a situation like that, you’re actually out in the field,” Garton says. “In the synthetic training environment, they’ve also got different virtual reality headsets to give you a true visual look at everything.
“In those environments, you can’t use Wi-Fi, and 5G requires so many towers and endpoints to mesh together, so we really had to look at how to use military satellite communications.”
NASA is another agency that employs edge computing, using onboard infrastructure to process data from DNA sequencing performed aboard the International Space Station, which shrinks processing time from weeks to mere minutes.
The agency has also used edge data processing capabilities to enable AI-based communication between two instruments installed aboard the ISS: MAXI, an all-sky X-ray monitor that surveys the entire sky about every 90 minutes, and NICER, which studies neutron stars and black holes.
The instruments can now notify one another to take more detailed measurements in an area of the sky where a new object has been spotted. That job previously required the use of two ground-based astrophysics facilities in different time zones.
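Functionally, this is an event-driven handoff running on the station itself: when the survey instrument flags a new source, a local service tells the pointed instrument where to look, with no ground loop in between. The rough Python sketch below illustrates that pattern; the message fields and the in-process queue standing in for the onboard link are assumptions for illustration, not NASA's actual flight software.

```python
import queue
from dataclasses import dataclass


@dataclass
class TransientAlert:
    """A new X-ray source flagged by the all-sky survey (fields are illustrative)."""
    right_ascension_deg: float
    declination_deg: float
    brightness: float


# A simple in-process queue stands in for the local onboard link
# that connects the two instruments at the edge.
alert_bus: "queue.Queue[TransientAlert]" = queue.Queue()


def survey_detects(ra: float, dec: float, brightness: float) -> None:
    """Called when the sky survey spots something new during its ~90-minute pass."""
    alert_bus.put(TransientAlert(ra, dec, brightness))


def followup_instrument_loop(max_alerts: int) -> None:
    """The pointed instrument drains alerts and schedules detailed observations."""
    for _ in range(max_alerts):
        alert = alert_bus.get()
        # Real scheduling logic would live here; this sketch just logs the decision.
        print(f"Slewing to RA={alert.right_ascension_deg:.2f}, "
              f"Dec={alert.declination_deg:.2f} for a detailed measurement")


if __name__ == "__main__":
    survey_detects(ra=83.63, dec=22.01, brightness=4.2)
    followup_instrument_loop(max_alerts=1)
```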
How USPS Is Tracking Lost Packages with Edge Computing
At the U.S. Postal Service, an edge-based system tracks packages with damaged, misprinted or otherwise unreadable labels. Previously, package sorters could rely only on optical character recognition of the address area and a scan of the tracking barcode. The new system automatically captures package images and other identifying characteristics as items are scanned for sorting.
NVIDIA V100 Tensor Core GPUs within AI-enabled HPE Apollo 6500 servers, located at USPS processing centers, can process 20 terabytes of package images daily from more than 1,000 mail processing machines.
Due to the volume of items the agency handles, sending image data from each field processing site to a central location would not be logistically possible, according to Information Sciences Specialist Joanne Su.
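The logistics constraint Su describes is the classic argument for inference at the edge: classify each package image on the local GPUs and forward only a small result record, never the raw imagery. The sketch below illustrates that flow in Python, assuming a hypothetical classify_package() stand-in for the trained vision model; the actual USPS pipeline is not public.

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple


@dataclass
class PackageResult:
    """Compact record sent upstream instead of the raw image."""
    item_id: str
    predicted_label: str
    confidence: float


def classify_package(item_id: str, image_bytes: bytes) -> PackageResult:
    """Stand-in for the GPU model that recognizes labels, logos and indicia.
    A real deployment would run a trained vision model here."""
    return PackageResult(item_id=item_id, predicted_label="priority_mail_box",
                         confidence=0.97)


def process_locally(scans: List[Tuple[str, bytes]]) -> List[str]:
    """Run inference at the processing center; only compact JSON results leave the site."""
    results = []
    for item_id, image_bytes in scans:
        result = classify_package(item_id, image_bytes)
        results.append(json.dumps(asdict(result)))  # kilobytes, not the full image
    return results


if __name__ == "__main__":
    fake_scans = [("pkg-001", b"\x00" * 1024)]  # placeholder image payload
    for record in process_locally(fake_scans):
        print(record)
```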
Currently, a postal worker reviews package images and compares them with the AI model's output to confirm accuracy, but the process is otherwise largely automatic.
“The tracking barcode is still the primary information used to locate items,” Su says. “However, specialized graphics and labels are also used — packaging, logos, stamps, indicia and permits.
“The key elements here are the GPUs’ performing pattern classification and recognition, along with specially trained AI models. Our internal systems are used to distribute the results to other sites in the USPS network.”
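Su's description amounts to a fallback lookup: when the barcode cannot be read, the model's classification of other visual cues (packaging, logos, stamps, indicia) serves as a fingerprint that can be matched against known items, with the result then distributed across the network. The toy Python example below illustrates only that matching step; the fingerprint format and similarity threshold are chosen purely for illustration.

```python
from typing import Dict, Optional, Set

# Fingerprints of recently scanned items whose barcodes were readable
# (in practice these would come from the trained vision models).
KNOWN_ITEMS: Dict[str, Set[str]] = {
    "tracking-1111": {"priority_mail_box", "retailer_logo_a", "metered_indicia"},
    "tracking-2222": {"padded_envelope", "forever_stamp", "handwritten_label"},
}


def match_unreadable_package(observed_features: Set[str],
                             min_overlap: float = 0.6) -> Optional[str]:
    """Return the tracking ID whose visual fingerprint best matches, if any."""
    best_id, best_score = None, 0.0
    for tracking_id, features in KNOWN_ITEMS.items():
        overlap = len(observed_features & features) / max(len(features), 1)
        if overlap > best_score:
            best_id, best_score = tracking_id, overlap
    return best_id if best_score >= min_overlap else None


if __name__ == "__main__":
    seen = {"priority_mail_box", "retailer_logo_a", "torn_label"}
    print(match_unreadable_package(seen))  # -> "tracking-1111"
```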