Policies

What support can data feminism provide in designing more equitable policies and countering threats to democratic principles? We discussed this with Lauren Klein, Professor at Emory University in Atlanta, Georgia, and co-author of the book Data Feminism (MIT Press, 2020), ahead of Trump's inauguration at the White House.

(Image credits: Unsplash/Maxim Berg)

As artificial intelligence systems trained on vast quantities of data become ubiquitous, it is vital to collect and use information that is increasingly free from gender stereotypes and biases in order to create fairer and more inclusive policies.

To raise awareness among institutions and the general public on this topic, in December 2024, the feminist association Period Think Tank, in collaboration with the non-profit organization The Good Lobby and Prime Minister – a school of politics for young women – organised Data Feminism in Action, a series of events held in the cities of Bologna, Naples, and Rome.

“Italy lacks half of the data needed to utilise the 72 gender indicators of the 2030 Agenda for Sustainable Development,” reads the press release issued by Period Think Tank, which has strongly emphasised how the lack of gender-disaggregated data contributes to fuelling inequalities. Thanks to the support of the United States Embassy in Italy, international experts including Thais Ruiz de Alda, Sarah Williams, and Lauren F. Klein participated in the events as keynote speakers.

Lauren F. Klein (Credits: Melissa Denae)

Lauren F. Klein is Winship Distinguished Research Professor and Associate Professor in the departments of Quantitative Theory & Methods and English at Emory University in Atlanta, Georgia. She directs the Emory Digital Humanities Lab and the Atlanta Interdisciplinary AI Network. Klein is one of the world's leading scholars in the field of data feminism: she is the author (with Catherine D'Ignazio) of the award-winning book Data Feminism (MIT Press, 2020). Currently being translated into Italian, the book is available online in open access. On the occasion of the conference held in Rome on December 7, 2024, as part of the Data Feminism in Action series, we asked her a few questions in light of Donald Trump's inauguration at the White House.

In your presentation, you highlighted the fact that there are two main problems with data: missing data and biased data. Do you think that the massive use of generative AI will help to solve these issues or, on the contrary, will it make them worse?

I think that it will make them worse, because we, as a society and as the public, are being led to believe that these models are the source of intelligence, meaning all intelligence, when the reality is that they have been trained on data that comes from a very narrow slice of the world's population.

You also said that in today's world, data is power; feminism – intersectional feminism in particular – is also about power, and about who has it and who doesn't. How can we make sure that women and marginalised groups have power over data?

I think this is connected to your previous question: if we want to continue to build power around women and marginalized groups, ensuring that we have access to data, then we need to be investing much more in public-interest technologies. So, to go back to your question about generative AI, right now we have a situation where only corporations are making these models. But if we have models being built in the public interest, that may make this access possible. We also need to support grassroots data collection and data literacy efforts, so that you don't need to feel like you know everything, or that you have all of the data, before you start your work. Many of the projects that we talk about in Data Feminism involve very small datasets, but they are very powerful.

As you emphasised, AIs are powerful not only because of the data they are trained on, but also because of the conditions and the context in which they are developed.

One is infrastructural: right now, the only places we are able to get these models from are corporations. There have been some industry-foundation collaborations that are trying to build more open and better-documented models, but I really do think that we need models developed outside of those systems, in order to ensure that we are starting from a basis that is not dependent upon these corporations.

How can we fix this?  

I think we need to be putting more emphasis, like I said, on public-interest infrastructures and academic infrastructures that can support the training of these models. Those are two ways in which I think we can begin to develop the infrastructural conditions through which this work can take place. The second issue has to do with the context in which the models are deployed. This is where I think regulation has a very important role to play: in ensuring that the contexts in which these models are deployed are not ones that are harmful or untested, or that somehow fall outside the bounds of current regulation because the models are perceived to be magical or mysterious, or simply a new technology that does not yet have laws on the books to regulate it. Because right now there are no rules, and the contexts in which these tools are being deployed are unrestricted and, almost universally, incredibly harmful. So that's one way. And then the other thing that I am finding interesting, in terms of these big models and what they can do with respect to context, is figuring out how to interpret their results.


Could you explain in more detail what this means?

Oftentimes what these models do is replicate or even amplify existing inequalities in the world. Right now we are being told that we should interpret the output of these models as the truth, or sort of the state of the world as it is and cannot be changed. But one of the things we can change is to recognize that we do not want these models to continue to predict the same thing. And if what they are showing is that if we do not change our actions, we will continue to get the status quo, then we can take their output and use it as evidence to advocate for changes, so that the output of these models in the future may be more representative of who people actually are and what they actually want.

One of the key actions to make AI more inclusive is to recognise that it is both a technical and a social system and that, therefore, its intrinsic problems cannot be fixed by technical solutions alone. From your perspective, do you think this is starting to change, including in public discourse, or is there still a lot of work to do?

The interesting thing to me is that I think this is changing much more in public discourse than it is in the companies themselves, because now that these systems are being incorporated into products, people are realizing through their own experience that the responses of these systems do not reflect themselves or their desires. They are recognizing that the systems are not accurate or useful, and that all of the things we were told in the abstract, maybe even just a year ago, turned out not to be true in real life. I think people are recognizing that these tools are insufficient. The problem is that, because of the infrastructure and the concentration of resources in big tech companies right now, those companies are not being forced to change anything: they do not really care what the impact is on people, they just care about their own bottom line.

What do you think the upcoming challenges might be?

I believe the next challenge, and I do think that this will be harder with the incoming administration, is to figure out how to get some of the power that has been accrued by these tech companies out of their hands. I would have put more faith in regulation and the breaking up of certain monopolies, including information monopolies, had Harris been elected; with Trump, I do not think that we will see it, so I think this struggle will be a longer one.

Do you believe that Trump's victory in the latest elections and Musk's increasingly important political role represent an actual threat to the work you have done in recent years on data feminism?

There is no question that Trump's victory represents a threat to real people. I think that from day one, as he has told us, and I actually believe it will happen, a lot of real people will be deeply and irreversibly harmed. We are not certain which groups these will be: it is likely to be immigrants, it is likely to be trans people. And even if some of the political changes that he makes may later be undone, I think that, very unfortunately, they will already have had the worst impact on people, and those lives will have been permanently changed. With that said, I do think that one of the things Trump's victory has shown is that work that elevates feminist activism, feminist organizing structures, and feminist principles is deeply needed, maybe even more than we thought. Some people, not everyone, but some people, presumed that these values and approaches were more widely shared than they were, and what Trump's victory showed us was that they were not.

What scenarios do you envision for the near future?

The good news is that we already know what these values and principles are, and I think hopefully there will be more people who now recognize the need to explicitly affiliate themselves or commit themselves to these principles. The bad news is that the threat has gotten greater and has been given a lot of power, and so we are just going to have to be that much stronger in community and across communities to support and sustain ourselves through this time.
