Semi-Automatic Women Safety System Using Real-Time Facial Distress Detection with Mandatory User Confirmation and Emergency Alert Mechanism

Authors

  • Virendra Kumar Tiwari, Lakshmi Narain College of Technology (MCA), India
  • Jitendra Agrawal, Lakshmi Narain College of Technology (MCA), India
  • Sanjay Bajpai, Lakshmi Narain College of Technology (MCA), India
  • Kavita Kanathey, Lakshmi Narain College of Technology (MCA), India

DOI:

https://doi.org/10.64539/sjer.v1i4.2025.333

Keywords:

Facial Expression Recognition, Women Safety, Distress Detection, User-in-the-Loop, Emergency Alert, Deep Learning, IoT

Abstract

Women’s safety remains a critical global concern. Conventional panic applications and wearable devices require manual activation, which is often impossible when the victim is in shock, physically restrained, or under extreme stress. This paper proposes a semi-automatic women-safety mobile system that continuously monitors the user’s facial expressions using a lightweight Convolutional Neural Network (CNN). When a distress-related emotion (fear, anger, or sadness) is detected with high probability for three consecutive frames, the system instantly triggers strong haptic vibration and displays a large full-screen one-tap SOS confirmation button. Only if the user explicitly taps this button within 7 seconds does the system activate a loud deterrent siren and send the current GPS location, along with a pre-recorded emergency message, to pre-selected trusted contacts and, if the user opted in during setup, to local emergency services. Experimental results on a combined dataset of approximately 50,000 facial images show a seven-class emotion classification accuracy of 89%. Real-world field trials with 25 female volunteers in public environments recorded zero false or unintended emergency alerts, with an average end-to-end time of 6.4 seconds from first distress detection to alert transmission, including user confirmation. This is significantly faster than the 15–18 seconds required by traditional manual panic applications, while eliminating the erroneous alerts that a fully automatic system would risk. The proposed framework offers a practical, privacy-preserving, and ethically responsible solution that can be readily deployed on existing smartphones and wearable devices, contributing meaningfully to AI-driven personal safety technologies.
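The trigger-and-confirm logic described above reduces to a small debounce-and-gate loop. The Python sketch below is illustrative only: the three-consecutive-frame requirement and the 7-second confirmation window come from the abstract, while the 0.80 probability threshold and the callback names (classify, show_confirmation, vibrate, send_alert) are assumptions standing in for the paper's CNN and platform-specific haptics, UI, and messaging code.

```python
# Hypothetical sketch of the semi-automatic trigger flow described in the
# abstract. Only the three-frame debounce and the 7-second confirmation
# window are taken from the paper; names and threshold are assumptions.

DISTRESS_CLASSES = {"fear", "anger", "sadness"}  # distress-related emotions
PROB_THRESHOLD = 0.80      # assumed "high probability" cutoff
CONSECUTIVE_FRAMES = 3     # frames of sustained distress before prompting
CONFIRM_WINDOW_S = 7.0     # seconds the user has to tap the SOS button

def monitor(frames, classify, show_confirmation, vibrate, send_alert):
    """Debounce CNN distress predictions and gate the alert on a user tap.

    classify(frame) is assumed to return (label, probability); the other
    callbacks stand in for platform-specific haptics, UI, and alerting.
    """
    streak = 0
    for frame in frames:
        label, prob = classify(frame)
        # Count consecutive high-confidence distress frames; any other
        # frame resets the streak, so one-off misclassifications never
        # raise the confirmation screen.
        streak = streak + 1 if (label in DISTRESS_CLASSES
                                and prob >= PROB_THRESHOLD) else 0
        if streak >= CONSECUTIVE_FRAMES:
            vibrate()  # strong haptic cue that the SOS prompt is on screen
            if show_confirmation(timeout_s=CONFIRM_WINDOW_S):
                send_alert()  # siren + GPS location + pre-recorded message
            streak = 0        # resume monitoring whether or not confirmed
```

The explicit confirmation gate is what makes the design semi-automatic: a timeout without a tap simply discards the detection and monitoring resumes, which is consistent with the field trials recording zero unintended alerts despite continuous monitoring.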

Published

2025-12-12

How to Cite

Tiwari, V. K., Agrawal, J., Bajpai, S., & Kanathey, K. (2025). Semi-Automatic Women Safety System Using Real-Time Facial Distress Detection with Mandatory User Confirmation and Emergency Alert Mechanism. Scientific Journal of Engineering Research, 1(4), 260–272. https://doi.org/10.64539/sjer.v1i4.2025.333

Section

Articles