U of C professor advancing online security through adapting biometric systems
By Andreea Timis, September 27 2024—
In a world dominated by deep learning systems, personal data becomes increasingly susceptible to being accessed and compromised. Biometrics such as fingerprints, iris scans and facial or voice pattern recognition are measurable physical and behavioural traits used to identify individuals. However, many of these systems overlook privacy, a gap that Dr. Marina Gavrilova aims to address.
Gavrilova is a professor in the Department of Computer Science, co-director of the Biometric Technologies lab and U of C Research Excellence Chair in Trustworthy and Explainable Artificial Intelligence. Her lab develops privacy-aware biometric systems and works to mitigate biases in data collection and decision-making.
“The main goal of the biometric system is to ensure public safety by identifying potential intruders or other adversary elements through biometrics that each person possesses. So most commonly it’s either a face or body or gait,” Gavrilova explained in an interview with the Gauntlet. “But with this technology, the privacy of individuals from whom biometric [data] are being collected or observed can be severely compromised.”
Gavrilova stated that the goal of the Biometric Technologies lab is to balance the need for public security with the need for privacy and protection of individuals. With this goal in mind, her lab develops systems that de-identify individuals, or that process and use only selective information from videos, voice recordings or gait.
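To illustrate what de-identification can look like in practice, consider a minimal sketch using the open-source OpenCV library. This is not the lab's actual pipeline; the detector choice, blur strength and file names are illustrative assumptions. The idea is to obscure faces before any downstream analysis, so the rest of the frame, such as gait or scene context, stays usable:

    # A minimal face de-identification sketch using OpenCV (illustrative only;
    # not the Biometric Technologies lab's actual pipeline).
    import cv2

    # Haar cascade face detector shipped with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def deidentify_frame(frame):
        """Blur every detected face so identity is obscured, while the
        rest of the frame remains usable for other analysis."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            face = frame[y:y + h, x:x + w]
            # A heavy Gaussian blur makes the face unrecognizable.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)
        return frame

    if __name__ == "__main__":
        image = cv2.imread("crowd.jpg")  # hypothetical input image
        cv2.imwrite("crowd_deidentified.jpg", deidentify_frame(image))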
“Within the framework of biometric identification systems, we also use other modalities. For instance, social communication. So we analyze the communication styles of different individuals as expressed on social media,” said Gavrilova. “And within this research, we mitigate bias that would be related to the gender of individuals, demographic factors, age or where they come from. Essentially, in our lab we pioneer the notion of social behaviour biometrics. And that allows us to perform fake news detection or psychological traits assessments simply by observing how people communicate online.”
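To give a rough sense of how communication style, rather than topic alone, can drive such classification, here is a generic text-classification sketch using scikit-learn. It is not the lab's social behaviour biometrics model; the toy posts and labels are invented for illustration:

    # A generic style-based text-classification sketch with scikit-learn
    # (illustrative only; real fake-news detection uses far richer features
    # and much larger labelled datasets).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set: posts labelled 1 (fake) or 0 (real).
    posts = [
        "BREAKING!!! miracle cure doctors don't want you to know",
        "City council approves new transit funding in 5-2 vote",
        "SHOCKING secret they are hiding from you, share now!!!",
        "University announces fall enrolment figures on Tuesday",
    ]
    labels = [1, 0, 1, 0]

    # Character n-grams capture writing style (punctuation, capitalization
    # habits) rather than subject matter alone.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(posts, labels)

    # Expected to predict 1 (fake) for this toy model.
    print(model.predict(["UNBELIEVABLE trick, click here now!!!"]))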
Gavrilova noted that the challenges of developing privacy-preserving and bias-mitigation systems are a global issue amid ongoing AI development.
“The recent AI revolution brought forward deep learning systems capable of natural language communication with humans, generating text, as well as processing images in a manner that surpasses the capacity of humans, in a fraction of the time,” she explained. “We as a society face the tremendous challenge right now of how we can mitigate this enormous power while preserving our identity, and separating what is real and what is not.”
As it currently stands, deep learning systems can generate text highly similar to research articles and news reports, as well as convincing deepfakes.
“This is simply dangerous for society because it spreads misinformation and fear, and can affect corporations, political campaigns, as well as specific targeted individuals,” Gavrilova explained.
Gavrilova highlighted multiple United Nations events that focus on trustworthy and ethical AI and its impacts on society: the Geneva Science and Diplomacy Anticipator (GESDA), the Digital Technology and Healthy City Conference (DTHC) and an annual UCalgary conference that she co-organizes.
“There is work that has started on [developing trustworthy and ethical AI] because major conferences that I currently attend in information security biometrics and image processing domains have at least one workshop dedicated specifically to how we can design trustworthy and ethical AI systems, and AI systems that will mitigate bias,” said Gavrilova.
“So the research on this is starting even within the U of C. And this is exactly what my position entails. I’m bringing talent across campus to tackle this problem, [which] requires a multidisciplinary approach from disciplines such as law, political science, ethics, computer science and engineering,” Gavrilova continued.
Gavrilova spoke about two new initiatives at the U of C at the graduate and postdoctoral levels. The first is that the Institute for Transdisciplinary Studies will introduce master's and other graduate programs focusing on societal issues, including trustworthy and explainable AI. The second is that the Graduate College at U of C is planning a series of events next year for all students highlighting trustworthy AI within the research domain of digital worlds.
The U of C also has the Information Security club within the Department of Computer Science, which hosts a variety of events to educate people about their rights and privacy, as well as the challenges related to data collection and ethical issues.
Lastly, Gavrilova added that in navigating a fast-changing world of deep learning, deepfakes and other AI, we should all exercise caution and stay aware of our individual rights.
“We should also be very aware of always using our own judgment when we receive any media news … because it becomes so easy to create fake content online,” she said. “And in my biometric technologies lab, one of the main focuses right now is to make sure we can separate truth from fake content.”
For more information, visit the Biometric Technologies lab website.