Responsible AI in Crisis
By Stephen Coles and Mike Gaviria
When natural disasters strike, every second matters to those on the ground. At Klarety, our mission is to shorten the time between data, understanding, and action for crisis responders. AI will revolutionize this decision-making process, from identifying damage to routing supplies and coordinating responders, but decision-makers need tools that are transparent and trustworthy before they deploy them in the field. Concerns about the 'black box' of AI are most pressing in the fields where AI could do the most good: fields like crisis response, where rapid analysis and synthesis of huge amounts of data can save lives. But organizations need to understand the algorithms before they can trust them to save lives.
AI will play a pivotal role across all disaster management phases: mitigation, preparedness, response, and recovery. Before a crisis strikes, AI will identify vulnerabilities, forecast damage, and recommend resource staging. During the response, AI will provide critical insights, identifying who needs support and where. Post-disaster, AI can guide rebuilding strategies and suggest long-term sustainability solutions. Ultimately, AI amplifies our analytical capacity, allowing decision-makers at every organizational level to identify trends and analyze large volumes of data.
Image: Damage identification after the 2023 earthquake in Turkey and Syria
Using AI in crisis response faces many challenges, from data privacy and copyright to legal implications, reliability and accuracy, and potential bias or discrimination. Google's effort to build an AI assistant for doctors provides a blueprint for accuracy and trustworthiness in a life-saving application. Not only was Google's Med-PaLM 2 built for safety and trust, but physicians also preferred its answers to detailed long-form questions over the responses of fellow physicians. The model improved upon its predecessor, Med-PaLM, which was the first generative AI tool to exceed a 'passing' score on US Medical Licensing Examination (USMLE)-style questions. Researchers started from a more sophisticated base model (PaLM 2), fine-tuned it on medical licensing questions, and applied advanced prompting strategies. As of July 2023, its score reached 90%, far beyond the passing score of roughly 60%.
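To make the prompting piece concrete, here is a minimal Python sketch of one strategy in that family: self-consistency chain-of-thought prompting, where several reasoning paths are sampled and the final answers are majority-voted. This is an illustration rather than Google's actual pipeline; the ask_model function is a hypothetical stub standing in for any LLM endpoint, and the exemplar question is invented.

```python
import random
from collections import Counter

# Few-shot exemplar showing the chain-of-thought format we want the
# model to imitate. Real prompts would include several worked examples.
FEW_SHOT = (
    "Q: A patient presents with crushing chest pain radiating to the "
    "left arm. Most likely diagnosis? (A) Reflux (B) Myocardial infarction\n"
    "A: Let's think step by step. Crushing pain radiating to the left "
    "arm is classic for cardiac ischemia. The answer is (B).\n\n"
)

def ask_model(prompt: str, temperature: float) -> str:
    """Hypothetical stub for a fine-tuned LLM endpoint.

    A real system would call the model API here; this stub merely
    simulates non-deterministic reasoning paths for illustration."""
    return random.choice(["(A)", "(B)", "(B)", "(B)"])

def self_consistency(question: str, n_samples: int = 11) -> str:
    """Sample several chain-of-thought completions at temperature > 0
    and return the majority-vote final answer."""
    prompt = FEW_SHOT + f"Q: {question}\nA: Let's think step by step."
    votes = [ask_model(prompt, temperature=0.7) for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    print(self_consistency("A 60-year-old with sudden chest pain..."))
```

Sampling multiple reasoning paths trades extra inference cost for more reliable answers, the same cost-for-trust tradeoff that crisis-response tooling must make.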
While validation in real-world settings is still necessary, the project is a testament to the power of fine-tuning and prompt engineering to achieve physician-level performance. It offers a valuable model for Klarety as we develop methodologies for crisis response. As a Google for Startups member, we are building on the success of Med-PaLM to create AI models for disaster relief that save lives.