Seyed Sadegh Hosseini, Mohammad Reza Yamaghani,
Volume 27, Issue 4 (10-2024)
Abstract
Introduction: Nowadays, the use of artificial intelligence and machine learning has impacted all fields of study. Utilizing these methods for identifying individuals' emotions through integrating audio, text, and image data has shown higher accuracy than conventional methods, presenting various applications for psychologists and human-machine interaction. Identifying human emotions and individuals' reactions is crucial in psychology and psychotherapy. Emotional identification has traditionally been conducted individually and by analyzing facial expressions, speech patterns, or handwritten responses to stimuli and events. However, depending on the subject's conditions or the analyst's circumstances, this approach may lack the required accuracy. This paper aimed to achieve high-precision emotional recognition from audio, text, and image data using artificial intelligence and machine learning methods.
Methods: This research employs a correlation-based approach between emotions and the input data, using machine learning methods and regression analysis to predict a criterion variable from multiple predictor variables (the emotion category as the criterion variable and the audio, image, and text features as predictors). The statistical population of this study is the IEMOCAP dataset, and the research follows a mixed quantitative-qualitative design.
Results: The results indicated that combining audio, image, and text data for multimodal emotion recognition significantly outperformed recognition from any single modality, exhibiting a precision of 82.9% on the benchmark dataset.
Conclusions: The results demonstrate acceptably high precision in identifying human emotions through the integration of audio, text, and image data, compared with using each modality individually, when machine learning and artificial intelligence methods are applied.
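The multimodal combination described above can be illustrated with a minimal late-fusion sketch. This is not the authors' implementation; the function name, emotion labels, and fusion weights are hypothetical, and the per-modality probabilities stand in for the outputs of separately trained audio, text, and image classifiers.

```python
import numpy as np

# Hypothetical emotion categories (labels are illustrative, not from the paper).
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def late_fusion(p_audio, p_text, p_image, weights=(0.4, 0.35, 0.25)):
    """Weighted late fusion: combine per-modality class posteriors,
    then renormalize so the result is again a probability vector."""
    stacked = np.stack([p_audio, p_text, p_image])   # shape (3, n_classes)
    w = np.asarray(weights)[:, None]                 # per-modality weights
    fused = (w * stacked).sum(axis=0)
    return fused / fused.sum()

# Example: audio and text both lean toward "sad"; the image model is uncertain.
p_audio = np.array([0.10, 0.05, 0.25, 0.60])
p_text  = np.array([0.05, 0.10, 0.30, 0.55])
p_image = np.array([0.25, 0.25, 0.25, 0.25])

fused = late_fusion(p_audio, p_text, p_image)
print(EMOTIONS[int(np.argmax(fused))])  # prints "sad"
```

Because the fused vector pools evidence from all three modalities, a confident prediction in two of them can outweigh an uncertain third, which is the intuition behind the accuracy gain reported for the multimodal setting.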
Miramirhossein Seyednazari, Hamed Gholizad Gougjehyaran, Amin Sohaili, Amirmohammad Drosti, Rasul Asghari,
Volume 28, Issue 5 (12-2025)
Abstract
The rapid integration of Artificial Intelligence (AI) into medical sciences, while promising transformative breakthroughs in early diagnosis and personalized treatments (1), introduces a profound ethical and legal challenge: the management of the vast, sensitive, and unprecedented volume of health data and the preservation of patient privacy. The nature of this data, which includes clinical records, radiological images, genetic data, and even data from wearable health devices (2), extends beyond traditional identifiable information. It possesses the capability to reconstruct a comprehensive profile of an individual, rendering complete and permanent de-identification virtually impossible (1).
This massive volume of information has become the main fuel for deep learning algorithms, but any breach or disclosure could lead to serious discrimination in access to insurance, employment, and even judicial decision-making (2). The lack of transparency regarding how these data are processed and analyzed by the algorithms, which often function as a "black box," erodes the trust of both patients and physicians (3). Healthcare providers cannot understand the AI's decision-making process, which not only hinders clinical adoption but also creates a legal gray area concerning accountability in the event of diagnostic or therapeutic error (4).
The current legal challenge stems from the fact that existing privacy laws were not designed to address advanced algorithms and real-time data collection (1, 4). AI constantly outpaces existing legal frameworks by creating novel methods of knowledge extraction from raw data. Furthermore, due to their reliance on large data networks, AI tools are exposed to advanced cyberattacks, which could lead to the mass disclosure of confidential data (5). Consequently, in the absence of a robust and up-to-date data governance framework, AI's potential to improve public health is accompanied by the risk of undermining human dignity and violating fundamental patient rights (6).
To ensure that AI innovations advance with ethical and legal compliance, urgent measures must be taken to establish a comprehensive regulatory framework. This requires formulating a new, dynamic model of informed consent that goes beyond a one-time agreement, allowing patients continuous and informed control over how their data is used at different stages of AI training and deployment. Concurrently, developers must be mandated to embed privacy protection into the core design of every AI tool, which means utilizing advanced privacy-preserving techniques such as differential privacy and federated learning for on-premise data processing. Additionally, a multi-disciplinary oversight body composed of experts in ethics, law, computer science, and clinical practice must be established, ensuring that every AI tool undergoes a rigorous and transparent ethical and technical assessment and approval process before entering the clinical environment, thereby preventing potential biases and algorithmic errors. These measures will not only protect patients against misuse but also provide the necessary trust for the sustainable and safe advancement of this vital technology in society.
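As an illustration of one of the privacy-preserving techniques named above, the following is a minimal sketch of the Laplace mechanism for differential privacy. The function name, parameters, and example query are hypothetical; the point is only that calibrated noise lets an institution release an aggregate statistic without exposing any individual record.

```python
import numpy as np

def laplace_mechanism(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon. A counting query has
    sensitivity 1: adding or removing one patient changes it by at most 1."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: a hospital reports how many patients match a
# query, so the published figure cannot pin down any single record.
noisy = laplace_mechanism(true_count=128, epsilon=0.5,
                          rng=np.random.default_rng(7))
```

A smaller epsilon means more noise and stronger privacy; choosing it is precisely the kind of trade-off a multi-disciplinary oversight body would need to assess before deployment.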