Pola's research bridges the gap between AI and ethics

Monday 07 Nov 22

Contact

Søren Hauberg
Professor
DTU Compute
+45 45 25 38 99

Dynamic fairness modelling

During Pola Elisabeth Schwöbel's PhD project, she and colleagues developed a method called dynamic fairness modelling that bridges the gap between ethics and mathematics/engineering.

(1) First establish the ethical goal: What do you want to achieve? What problem do you want to solve?

(2) Collect information that helps quantify decision-making procedures and outcomes and write it down in words. Then select a suitable AI model and adapt it – describe the problem with mathematics.

(3) Continuously evaluate the results and adjust the AI model.

Reference: "The long arc of fairness: formalisations and ethical discourse", Schwöbel & Remmers.
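The three steps above can be sketched as a small evaluation loop. This is a hypothetical illustration only, assuming a per-group accuracy metric and a tolerance threshold that are not part of the paper:

```python
# Hypothetical sketch of the dynamic-fairness-modelling loop described above.
# The metric, the tolerance, and the toy data are illustrative stand-ins,
# not the method from Schwöbel & Remmers.

def group_accuracy(predictions, labels, groups):
    """Accuracy per demographic group (step 2: quantify outcomes)."""
    stats = {}
    for p, y, g in zip(predictions, labels, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (p == y), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def fairness_gap(per_group):
    """Step 1 formalized: the ethical goal is a small accuracy gap."""
    return max(per_group.values()) - min(per_group.values())

# Step 3: continuously evaluate and decide whether the model needs adjusting.
preds  = [1, 0, 1, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 0]
groups = ["men", "men", "men", "women", "women", "women"]

per_group = group_accuracy(preds, labels, groups)
needs_adjustment = fairness_gap(per_group) > 0.1  # tolerance is an assumption
```

In a real project the "adjust" step would mean retraining on better data or changing the model, then re-running the evaluation, in line with step (3).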

 

The spread of Machine Learning makes it necessary to connect technology with ethics, says the researcher, who has developed a method for creating trustworthy artificial data.

Machine Learning (ML) has been a big part of Pola Elisabeth Schwöbel's life during her three years as a PhD student. Machine Learning has, for example, helped her translate between Danish, German, and English through various online translation services. At the same time, in her research at DTU Compute, she has looked into the mechanisms behind the mathematical models that are the 'brain' of artificial intelligence.

While it is hopefully just an annoyance if an ML model mistranslates everyday language, it can have severe implications if a Machine Learning model misinterprets medical images and overlooks diseases. This is why the use of AI is still limited to assisting doctors in their work: we rely more on human judgment than on that of a machine, just as we often require a machine to be more precise than a human. But the potential of AI is enormous.

Pola Elisabeth Schwöbel and her colleagues in the research section for Cognitive Systems are working on developing ethical standards for how to select a mathematical model and develop it to solve a specific task. Doing so in a fair way might mean, for example, ensuring that the model is not trained only on data from men, which would risk it overlooking disease in women. Because biased data - i.e. misleading data - leads to biased (misleading) models.

"There have been several examples of AI treating people differently because the models are trained on flawed data sets. This creates justified mistrust of AI, and we believe that you must think about ethics when you select a mathematical model for a given task", says Pola Elisabeth Schwöbel.

Together with colleagues, she is behind a proposal for a new way of working with AI - a kind of fairness model. We will return to that model, but first, we will look at data, the raw material of ML models.

Technical innovation

The quality and quantity of the data that the models are trained on are critical to how well ML models work. What counted as Big Data a few years ago is nowhere near the millions of data points that today's much more advanced artificial intelligence models need.

Therefore, AI researchers and developers use a well-known mathematical method called data augmentation: you take existing data and apply a simple transformation to generate more data. So far, however, this has been done without really knowing whether, for example, the synthetic medical data represented an actual human. Now Pola has developed an ML method that increases the amount of data in a way that ensures the synthetic data is realistic.
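As a rough illustration of classical data augmentation (not Pola's learned method), simple label-preserving transformations such as a flip or a shift can multiply a dataset:

```python
# Minimal illustration of classical data augmentation: generate extra
# training examples by applying simple, label-preserving transformations.
# A 2D list stands in for an image; real pipelines would use a library
# such as torchvision or albumentations.

def horizontal_flip(image):
    return [list(reversed(row)) for row in image]

def shift_right(image, fill=0):
    return [[fill] + row[:-1] for row in image]

def augment(dataset):
    """Return the original images plus two transformed copies of each."""
    out = []
    for image, label in dataset:
        out.append((image, label))
        out.append((horizontal_flip(image), label))  # label is preserved
        out.append((shift_right(image), label))
    return out

car = [[0, 1, 2],
       [3, 4, 5]]
augmented = augment([(car, "car")])  # three examples where there was one
```

The open question the article raises is exactly whether such mechanically generated examples remain realistic - trivially true for a flipped car photo, much less obvious for a transformed brain scan.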

" We believe that you should have a different systematic workflow when choosing a model. You must first decide on an ethical goal: What do you want to solve with mathematics?"
Pola Elisabeth Schwöbel, finished her PhD at DTU Compute in September 2022.

"In my work, we have looked at what constitutes good data augmentation. It is easy to imagine how to create synthetic data for natural images. For example, a picture of a car is still a realistic picture if we change the location of the car or the color. For other types of images, e.g. brain scans, it can be much more difficult. In my research, we show how we can create realistic artificial data by slightly modifying the images using our ML model,” says Pola.

"In that way, you can increase the amount of data, and it also helps create fairer data and AI, because we can make the dataset more balanced with the help of synthetic data," she says.
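The balancing idea she describes could be sketched like this; the grouping, the transformation, and the oversampling rule are all illustrative assumptions rather than her published method:

```python
# Illustrative sketch: balance a dataset by adding transformed (synthetic)
# copies of samples from underrepresented groups until every group is as
# common as the largest one.
from collections import Counter

def balance_with_synthetic(dataset, transform):
    counts = Counter(group for _, group in dataset)
    target = max(counts.values())
    balanced = list(dataset)
    for group, count in counts.items():
        originals = [x for x, g in dataset if g == group]
        i = 0
        while count < target:
            # Cycle through the group's samples, adding a transformed copy.
            balanced.append((transform(originals[i % len(originals)]), group))
            i += 1
            count += 1
    return balanced

# Toy dataset: three samples from men, one from women (x is a feature vector).
data = [([1.0], "men"), ([2.0], "men"), ([3.0], "men"), ([4.0], "women")]
balanced = balance_with_synthetic(data, transform=lambda x: [v + 0.1 for v in x])
```

The hard part, which the article emphasizes, is choosing a `transform` whose outputs are realistic enough that the balanced dataset still reflects actual people.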

Pola's innovative model is general, which means it could also be used to create more data within energy research and in connection with time series analysis, where you work with data sorted by when it was measured.

Philosophical basis for fairness in AI

Pola's research did not start with technical innovation but with a philosophical study of how to use synthetic data to make AI models fairer. In general, she does not think that technology can always solve ethical challenges; there is a need to build a bridge between ethics and technology.

"I thought about whether you could create a standard for a fair AI model. It is difficult - and there are many ways to measure whether a model is fair or not. It might be considered unfair if a model e.g. is based on health data from men, and therefore is better to detect diseases in men than in women. But on the other hand, it would not be fair if the models intentionally were developed to be worse at finding diseases in men in the effort to treat men and women equally," says Pola.

Together with her colleagues, she has written a scientific paper discussing the problem that people often choose an existing ML model from a catalog and optimize and improve it without thinking about what ethical considerations and standards are encoded in the model; i.e. how it is developed.

"We believe that you should have a different systematic workflow when choosing a model. You must first decide on an ethical goal: What do you want to solve with mathematics?” she says.

Should we choose AI as a shortcut?

Pola explains this with an example from DTU. If you would like a more equal distribution of female and male students, you could consider lowering the grade point average required for women to be admitted as part of the strategy. But this could mean that some female students would find it difficult to cope academically in their studies, and it would not be fair to the male students either. Instead, perhaps more could be done to improve the study environment and make it appeal more to women, in ongoing dialogue with the target group and university management.

"The same applies to ML models. You must continuously evaluate all parts of the decisions as to whether they will bring you closer to the goal. And it's not all places where AI is suitable either, even if you might be able to take shortcuts with AI and save time and money," says Pola.

"If you are concerned about ethics, my advice is to investigate how you can adapt the models so that they will work. If it is within the health sector, then talk with stakeholders, patient associations, legislators, and employees and hear about the problems and what causes them. When you have gathered a lot of knowledge and set an ethical goal, you write it down in words. And only then you try formalizing it - describing it with mathematical tools. Keep in mind that AI has potential for long-term intervention, and not for here-and-now changes," says Pola.

The important combination of technology and philosophy

Pola's supervisor at DTU Compute, Professor Søren Hauberg, was one of the first in the world to work methodically with data augmentation. And that inspired Pola in her PhD to work with ethical data augmentation.

Today, it is a growing field of research, with probably 10-20 laboratories worldwide working on this type of research.

"The development in AI is going fast. So fast that it can be hard to keep up. Therefore, we seldom have time to think ethically about the consequences of AI. Pola's work is therefore incredibly important, as it provides a practical working method that helps ensure that ethical considerations are given an integrated place in the development of AI systems. Here, the extra contribution is that Pola has a strong technical understanding via her PhD from DTU, while she also has a philosophical background. Overall, it gives her a completely unique opportunity to formulate ways of thinking and working processes that actually make a difference," says Professor Søren Hauberg.
