Feature Story | 30-Apr-2025

Using AI to help find missing persons

National Center for Supercomputing Applications

When someone goes missing, time is of the essence. Those first 24 to 48 hours are critical to search and rescue efforts. The missing could be in grave danger, and finding them often requires a fresh trail with evidence that hasn’t been washed away by rain or disturbed by animals or people. A research team based out of California Polytechnic State University, San Luis Obispo (Cal Poly), is working with Delta, a National Center for Supercomputing Applications (NCSA) resource, to help shorten the time between when a person goes missing and when they’re found.

Most search and rescue teams are made up of volunteers. If the team is lucky, it will have one or two members with experience finding people. Many volunteers have no experience at all – they help the team by canvassing an area for clues to report back to the experts. A search team manager may have to coordinate a search that covers miles and miles. When a manager is working with people of varying tracking skills, communication becomes the lifeline that could lead to the missing.

What if you could use an AI-powered application, one that could take in all kinds of data about the search in real time and analyze it to help predict where someone might be? Experts could help evaluate and adjust the importance of each clue and, with the help of the application, direct searchers to more precise locations. Such an application could become a reality very soon.

Decades of Data

The Cal Poly research team knows all about search and rescue – they have two experts on the team who’ve worked with search and rescue teams for decades, Gary Bloom and Chris Young. “I’m a very long-term search and rescue volunteer,” said Bloom, a computer science alum of Cal Poly. “I’ve been doing it since between my freshman and sophomore year at Cal Poly, about 40 or 50 years ago.”

Young, a lecturer, instructor and instructor trainer, teaches his specialty, search management, all over the U.S. and Canada. “I’ve been in search and rescue for 40-plus years,” Young said. “I’ve written a couple of books on the subject, and I just earned a Ph.D. in criminology, criminal justice and missing person studies.”

Through his work with the university, Bloom connected with Franz Kurfess, a professor in the computer science & software engineering department. “We had this notion – search and rescue is largely a paper-driven process. And when we say search and rescue is paper-driven, what we mean is the management of a search.”

Even if several experts are on a team searching for someone, they can only be in one place at a time. Information is passed from volunteers to one of these experts who works as a coordinator or manager of the search. This information is usually written down and physically passed along the chain.

“This is for lost persons in the wilderness,” Bloom explained. “For instance, people with dementia, or walkaways, Alzheimer’s or mental illness-related issues. And the search might be in rural areas, or suburban areas, or both. Essentially, the way we’ve been managing searches is we have a standardized set of forms in Northern California, in the Bay Area. We run a lot of things through a copy machine and a command post, and information is flowing verbally between individuals. But if we started collecting the information electronically, we could then start applying AI technologies to try to help us figure out where we’re likely to locate somebody.”

Before any supercomputers were involved in the process, the research team had to create a database of expert search and rescue knowledge using data from past searches. What started as a digitization project, turning printed, typed or written information into PDFs, eventually evolved into a backend database capable of synchronizing with a mobile device that could access the data through a simple user interface.

“As searchers are discovering clues and finding information, all that information starts to become electronically transmitted and stored – and we also were able to access a number of historical data sets of search outcomes to help our predictive models,” said Bloom.

With their new database in hand, the team had a tool that was already making things faster. But there was still a catch – volunteers might find a piece of information in the field, but it still took an expert to analyze it and determine the next steps, and experts in search and rescue are few and far between.

Training an AI with Search and Rescue Expertise

As good as humans are at pattern recognition, AI can find patterns across far more data at once. Patterns are everywhere, even in the behaviors of missing people.

“Missing people have all kinds of patterns,” Bloom said. “Patterns based on the medications they’re on, their age, the weather, the terrain – I could list probably 20 or 30 different factors that affect where somebody goes. That’s a perfect kind of compute problem for AI technology – to take all these considerations, put them together with the current state of the search and help us try to find the person more quickly by predicting the outcome of where they’re going to be located.”

Bloom and Young have more than 80 years of experience between them – but there are just two of them. Creating an AI that could duplicate their shared expertise requires teaching a machine how to properly use compiled search and rescue data. The database the AI pulls knowledge from can also include expertise beyond Bloom and Young. For example, the team has included data from the International Search and Rescue Incident Database (ISRID). This database was compiled by Robert Koester, Ph.D., and is also the basis for his book, “Lost Person Behavior.”

“The book is based on statistical collections of now hundreds of thousands of missing person incidents that Koester has collected data on,” said Young. “He’s allowed us to use a large subset of that data, so we’re cleaning it up and processing it to include in the project.”
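As a rough illustration of what that kind of cleanup can involve, here is a minimal pandas sketch (the file and column names are invented for illustration, not ISRID’s actual schema):

```python
import pandas as pd

# Hypothetical file and column names; ISRID's real schema differs.
df = pd.read_csv("isrid_subset.csv")

# Drop records missing the fields a predictive model needs most.
df = df.dropna(subset=["subject_category", "find_lat", "find_lon"])

# Normalize free-text categories ("Hiker", "hiker ", "HIKER") to one form.
df["subject_category"] = df["subject_category"].str.strip().str.lower()

# Remove exact duplicate incident records before training.
df = df.drop_duplicates()
```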

This is just the collected historical data – each new missing person case also generates a slew of data that needs to be input. Such an ambitious project requires a lot of people to make it happen, which is why the Cal Poly research team is so large. The number of people who have worked on the project has grown to more than 125, with new student researchers taking over as older members of the team graduate or move on with their coursework. As the team grows, their goal remains the same – to connect all the compiled data to an AI search and rescue application that can help a human search and rescue expert, even one with decades of experience.

There are many moving pieces to a project of this size. Anna Makarewicz, a computer science student at Cal Poly, worked with her own team on a social network profiler as part of an Introduction to Artificial Intelligence class in the fall of 2024. “We’re working to compile information from different social networks about a missing person to be able to give a profile of the headspace they were in before they went missing,” said Makarewicz.

There’s a trail of information people leave on social networks that’s not unlike a trail they’d leave in the wilderness – clues that, when analyzed, paint a picture of someone’s state of mind. “We’re using sentiment analysis,” said Makarewicz, “which takes the posts a person has made and uses natural language processing to see what to associate those posts with – more negative or positive words. Then we can potentially answer questions like, ‘How was this person feeling before they went missing?’ Or, ‘Did everything seem normal?’”
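To give a sense of what that looks like in practice, here is a minimal sketch of lexicon-based sentiment scoring using NLTK’s VADER analyzer, which is designed for short social media text. The posts below are invented examples, and this is a generic illustration of the technique, not the team’s actual pipeline:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon
analyzer = SentimentIntensityAnalyzer()

# Hypothetical posts standing in for a missing person's social media history.
posts = [
    "Had a great hike with friends this weekend!",
    "I just can't deal with everything lately.",
]

for post in posts:
    # compound ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(post)["compound"]
    print(f"{score:+.2f}  {post}")
```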

An essential part of the project is creating an application that can take all this data and make accurate predictions based on its training – that’s the AI element that requires the use of a supercomputer like Delta. Charles O’Hanlon is one of the student research programmers working on this aspect of the project.

“We’re training a diffusion model to predict the location of lost people,” O’Hanlon said. “The diffusion model is conditioned multimodally, that is to say, on different kinds of data.”

To better understand what a diffusion model is, and why it’s paramount to get it just right in a project like this, imagine you need to drive to a particular location, a new place you’ve never been before. It’s a school, so you know there will be students there and probably yellow school buses when you arrive. You know roughly where you want to go, but on the way, there are a lot of distractions – road construction, a group of pedestrians on a tour. These distractions are noise – they have nothing to do with your task, but the yellow construction equipment might be the same yellow as the buses, and the tour might be of young children. Because you know the difference between the noise you see and the actual location you want to go to, you’re able to effectively navigate the city and not confuse these similar but unrelated elements with your goal.

This is similar to how diffusion modeling works. It trains an AI by showing it the goal, then adding lots of distractions, and then allowing the AI to learn how to make its way back to the correct goal. The AI has to learn what noise in the data looks like so it doesn’t accidentally think a yellow backhoe is the same as a school bus. An AI can’t make this logical leap on its own because it doesn’t have intuition, but it can be trained to make the distinction. You can probably imagine that in a typical search and rescue effort, there are a lot of false leads that turn out to be noise.
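In code, one training step of that process looks roughly like the PyTorch sketch below: noise is added to a known find location, and a network learns to predict that noise given conditioning data about the search. This is a simplified, generic sketch of diffusion training (the architecture, noise schedule and dummy data are placeholders, not the team’s actual model):

```python
import torch
import torch.nn as nn

# Toy denoiser: given a noised 2-D location, a noise level and a
# conditioning vector (terrain, weather, subject profile, etc.),
# predict the noise that was added. Placeholder architecture.
class Denoiser(nn.Module):
    def __init__(self, cond_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + 1 + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, noisy_xy, t, cond):
        return self.net(torch.cat([noisy_xy, t, cond], dim=-1))

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch; real inputs would be map-relative
# find coordinates plus multimodal conditioning features.
xy = torch.rand(32, 2)       # true find locations, normalized to [0, 1]
cond = torch.rand(32, 16)    # conditioning features (placeholder)
t = torch.rand(32, 1)        # noise level in (0, 1)
noise = torch.randn_like(xy)
noisy_xy = torch.sqrt(1 - t) * xy + torch.sqrt(t) * noise  # simplified schedule

loss = nn.functional.mse_loss(model(noisy_xy, t, cond), noise)
opt.zero_grad()
loss.backward()
opt.step()
```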

The team of research programmers used more than 1,000 GPU hours on Delta to train their diffusion models. That might not seem like much, but a modern GPU can perform trillions of operations per second, so even a single GPU hour represents an enormous amount of computation. They’re currently working on training neural networks to create heat maps to try to predict a missing person’s movement.

“I’ve been using ACCESS resources primarily to train very deep neural nets on the spatial inputs only,” said O’Hanlon. “Initially, we were trying to produce arbitrary distributions for heat map output. We’ve shifted to regressing over a point output. So the output of our model is a coordinate value within the spatial inputs.”

As Cameron Maloney, another student involved in the project, further explains, the team went through a couple of iterations of the model before settling on predictions focused around specific points on the map, rather than running a heat map over the entire area to flag possible points of interest. “We’re trying to predict specific points in an image with a high probability of finding someone,” said Maloney. “Once we have that point, we can say, ‘Okay, so the person is most likely to be found in a 10-meter radius, a little less likely to be found in a 50-meter radius, and so on and so forth.’”
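A minimal sketch of that last idea: if the model regresses a single coordinate and its positional error is assumed to be an isotropic Gaussian (an illustrative assumption, with a made-up error value), the probability that the person lies within a given radius of the predicted point follows the Rayleigh distribution:

```python
import math

def containment_probability(radius_m: float, sigma_m: float) -> float:
    """Probability the subject lies within radius_m of the predicted point,
    assuming isotropic Gaussian position error with standard deviation
    sigma_m (Rayleigh CDF; the error model is an illustrative assumption)."""
    return 1 - math.exp(-(radius_m ** 2) / (2 * sigma_m ** 2))

sigma = 40.0  # hypothetical model error, in meters
for r in (10, 50, 100, 200):
    print(f"within {r:>3} m: {containment_probability(r, sigma):.0%}")
```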

The team has come far enough to run a mock search and rescue effort to test everything before deploying it for a field test. “We’re actually in what I’d call somewhere between the alpha and the beta test,” said Bloom. “We’re working towards completing a full-scale beta test. There’s a whole process which we have to go through to get our app published on the Apple and Android App Store so that it can get broader usage.”

Kurfess, who has been one of the main faculty advisors on the project, has helped provide students with guidance for the compute research aspects of the work. He envisions the end product as a collection of tools that search and rescue teams can deploy.

“There will be a dashboard at the command post that people use to keep track of what’s going on out in the field – what clues are coming in on the map, displaying where the person is likely to be found, where the clues are located, and then also more specialized tools as they need them,” explained Kurfess. “Then we also have a mobile component that people actually out there in the field or in the woods will use, and they can use that to report what they are finding or get information relayed back from the command post.”

All of these updates make collecting clues easier – people can take photos and upload them to the app, an especially important upgrade because an expert or an AI would then be able to analyze what they see. All the fresh data from the field can be incorporated into the predictive model immediately as it’s found, further speeding up the search.
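As a hypothetical illustration of the kind of structured record such an app might relay from the field to the command post (the schema and values here are invented for this sketch, not the team’s actual format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ClueReport:
    """One field observation relayed to the command post (illustrative schema)."""
    searcher_id: str
    lat: float
    lon: float
    description: str
    photo_urls: list[str] = field(default_factory=list)
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example report a volunteer might submit from the app.
report = ClueReport(
    searcher_id="team-3",
    lat=35.3050,
    lon=-120.6625,  # placeholder coordinates near San Luis Obispo
    description="Blue jacket fragment at trail junction",
)
```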

Young also sees opportunities to help guide the collection of information, which is especially helpful for those managing the search.

“There’s a whole part of this project focused on the investigative end,” Young said. “We’ve applied a missing person questionnaire that I developed that prompts you to ask deeper questions when something sounds pertinent.”

The AI won’t be able to replace the intuition of the experts. But what it can do is help the limited number of experts in the field be more productive in every search and rescue operation.

“There will always be someone reviewing the suggestions by the AI,” said Bloom. “The AI will take all that data – weather, terrain, the mental state of the missing – to generate more clarity and more clues and narrow down the scope of where we should be looking. With a tool like that, we’re going to be finding individuals more rapidly.”

Access to Life-Saving Compute Resources

Cal Poly is part of the California State University system. Its focus is on undergraduate education with the motto “Learn By Doing.” While the institution has ample expertise to build impressive research teams, it lacks the compute power to get this project to the final stage. The ACCESS program and resources like Delta have been invaluable, something every member of the team attests to.

“Getting access to Delta, and more broadly the resources in the ACCESS program, was a godsend for us,” said Kurfess. “We’ve been cobbling together computing resources here and there, and none of our cobbled attempts worked very well. Charles, especially, spent a lot of time trying to figure out how we can use the best resources. To give you an example, we got a generous $15,000 cloud computing credit from a company. This is a lot of money, but once you translate it into powerful compute resources, it evaporates very quickly. And for a non-research university like ours, getting grants is more complicated. There are a lot of hurdles. On top of that, Cal Poly also does not have the other computing resources and the personnel support that we can get through NCSA.”

Kurfess has been a professor for some time, so he has plenty of experience writing grant proposals. One of the things he appreciated the most about his work with Delta through ACCESS was how easy everything was to set up. 

“This has been one of the easiest processes to get something this useful. Of course, getting cloud computing credits is not too difficult. But it’s not as easy as it used to be, and getting something useful out of it takes more work and time. With ACCESS, the proposal took me maybe an hour or so to write, and then I consulted with a few people. Getting approval took about a day or two, and not long after that, the resources were allocated and Charles was able to get to work.”

The quick turnaround gave the team a much-needed boost. Having the resources that fast was a boon, but having the technical support from NCSA experts was the cherry on top. “The support that we’re getting from NCSA personnel is excellent,” said Kurfess. “They respond within a very short time, much shorter than our own ITS people, and they also know much better what we need and how to resolve the issues that we have, which is not surprising because that’s what they do all day long, whereas our ITS people have a lot of other things to take care of.”

The team, particularly those who have dedicated their lives to search and rescue efforts, can’t speak highly enough about programs like ACCESS and resources like Delta – tools that make it possible for them to have such a huge positive impact on the work of search and rescue.

“When I can use technology to help aid us in finding someone and perhaps saving a life, I’ll do anything to make that happen,” said Young. “Gary calls it ‘The art of what’s possible,’ and that’s how this technology makes me feel today. Everybody I’ve talked to in many different countries is really excited about what we’re doing, and they want to know how to help with their data, and I tell them this is the plan. The technology is making it possible and I’m so excited about being a part of this. I can see this growing and growing on a global scale with these resources.”

“A lot of people in the world don’t know what it’s like to have somebody missing in your family,” said Bloom. “If it’s your family member that’s missing, the most important thing is finding an individual as rapidly as possible. I’ve been sponsoring this and working on this for a long time under the belief that we can find people more rapidly if we use technology. But it takes a combination of things to make that happen. It takes leadership on campus, which Franz has been providing. It takes the subject matter expertise that Chris and I provide, and it takes an army of people to do the work. And that army of people has to be technically brilliant. And that’s what we have here at Cal Poly. NCSA providing resources to us – if we didn’t have outside organizations helping with some of the resources, the cost of compute capacity, the accessibility of the capacity, would just be out of reach, and we’d be extremely limited in what we’d be able to do. What NCSA provides is an enabler for something that’s going to help find lost people more quickly and will save lives, and that should be celebrated.”


ABOUT DELTA AND DELTAAI
NCSA’s Delta and DeltaAI are part of the national cyberinfrastructure ecosystem through the U.S. National Science Foundation ACCESS program. Delta (OAC 2005572) is a powerful computing and data-analysis resource combining next-generation processor architectures and NVIDIA graphics processors with forward-looking user interfaces and file systems. The Delta project partners with the Science Gateways Community Institute to empower broad communities of researchers to easily access Delta and with the University of Illinois Division of Disability Resources & Educational Services and the School of Information Sciences to explore and reduce barriers to access. DeltaAI (OAC 2320345) maximizes the output of artificial intelligence and machine learning (AI/ML) research. Tripling NCSA’s AI-focused computing capacity and greatly expanding the capacity available within ACCESS, DeltaAI enables researchers to address the world’s most challenging problems by accelerating complex AI/ML and high-performance computing applications running on terabytes of data. Additional funding for DeltaAI comes from the State of Illinois.
