
Dr. Sarah Erfani’s work addresses a critical challenge in modern technology: ensuring that artificial intelligence (AI) systems are safe, trustworthy, and resilient in the face of uncertainty and attack. AI is increasingly relied upon to detect diseases, defend against cyber threats, and stabilise essential services such as energy supply, yet these systems remain vulnerable to errors when confronted with novel situations or malicious interference. Such weaknesses threaten not only technical reliability but also public trust in AI technologies. Dr. Erfani’s research confronts this problem by integrating safety guardrails into the design of AI models, identifying hidden flaws, and developing rigorous methods to strengthen their robustness, interpretability, and security.
Dr. Erfani has developed pioneering approaches that embed mathematical, geometric, and topological methods into the design of AI systems to detect and address structural weaknesses before they cause harm. She has created new techniques for enhancing adversarial robustness, improving anomaly detection, and quantifying uncertainty, which have been successfully applied in safety-critical domains such as defence, healthcare, and energy. Her research has already produced patented technologies, operational AI systems adopted by government and industry, and influential contributions to national strategies on AI safety. Currently, Dr. Erfani is advancing this work by building unified frameworks that integrate safety guardrails directly into the training and deployment of large-scale AI models, ensuring that they remain reliable, interpretable, and secure even under unforeseen conditions or malicious attack.
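To make the adversarial vulnerability described above concrete, the sketch below shows the classic fast gradient sign method (FGSM) applied to a simple linear classifier. This is a standard textbook illustration of adversarial attacks in general, not a description of Dr. Erfani's own methods; all weights and inputs are hypothetical values chosen for the example.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier p = sigmoid(w.x + b).
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.6, 0.1])   # clean input
y = 1.0                    # true label (positive class)

p = sigmoid(w @ x + b)     # confident positive prediction on the clean input

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM: step the input in the sign of the gradient to increase the loss.
eps = 0.8
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)  # the small perturbation flips the prediction
```

Here a perturbation bounded by `eps` in each coordinate is enough to flip a confident prediction, which is exactly the kind of structural weakness that adversarial robustness research aims to detect and defend against.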
