Translated's Research Center

The lenses of AI

Culture + Technology

Is AI biased? In this delightful chat with Carlos Munoz Novo, ethics professor and partner at the consultancy group The Way Over, we discuss inclusivity and the lenses through which we see life, the use of AI in hiring practices and how biases affect this and many other processes, and we address a possible solution.

Carlos Munoz Novo

Ethics professor and partner at The Way Over

After graduating in philosophy with a doctorate in philosophical sciences, Carlos lived for a few years in Latin America and Africa, where he worked in the field of interculturality. He is a partner at The Way Over and a professor of ethics and researcher at the University of Milan on topics concerning anthropology, ethics, and contemporary philosophy. He is also a business consultant on ethics, training, and intercultural issues and a consultant on medical ethics and ethical problem-solving.

Enrico Boscardi

Junior Content Creator

Enrico has an undergraduate degree in International Studies and a master’s degree in corporate communication and marketing. He loves to discover more about the world and the different cultures that inhabit it, which makes him an ideal contributor to Imminent.

Interview script

Enrico: I’m here with Carlos Muñoz Novo from The Way Over. Hi Carlos, how are you doing?

Carlos: Hi, good morning, everyone.

Enrico: Today we want to ask you some questions about inclusivity, AI, and the future of these two things. Carlos, in your life you’ve come across many things, and, as I told you, we want to ask you about what you found when you started studying what you call the “pair of glasses” that affect our processes, mostly in HR: interviewing and selecting people and talent. So please go ahead and tell us more about what you found out and what you think.

Carlos: Thank you very much for the question, Enrico. One of the most interesting things we talk about now is whether we can create a strategy, or even software, to universalize our criteria, to give a certain kind of objectivity to reality.

For example, take HR and hiring: how can I decide who is the best employee, or the best manager, for my company? For sure, in our experience we can learn a lot, and we have many interesting people, wonderful headhunters, to do it. But nowadays we want to create a perfect way to choose these people. And different companies, IKEA is one of them, but not only IKEA, I think also Amazon and, I suppose, Netflix, these huge companies are using different software to help people.

There are two reasons. The first is that they want to simplify the process, because they need to help people in a very standard way, something faster.

And the second reason, I think, is very interesting. The purpose is interesting: how can I choose the best person for my company? It’s about objectivity, so I don’t want to apply too many filters, no?

So I think: “How can I choose the best person for the company?” And I’m referring to companies that decided to use artificial intelligence. This software, or these chatbots, say: OK, you can upload the curricula vitae here, the software will screen all of them and give me the best recruit for my company. This is the process, and you can say: “OK, perfect.”

It’s about objectivity, because I don’t want my feelings to decide, only what I think. The idea is wonderful. The problem arrived with something supposedly fantastic.

Carlos: IKEA tried to do it. They did it, and the software gave them only men!

Enrico: So this problem relates to inclusivity in the workplace and, more generally, to all these processes created by AI, because the sector, it’s being reshaped now, but it was mostly male-dominated.


You’re telling me that somehow, this bias is affecting this system, right?

Carlos: For sure, because I write the code; Carlos writes the code. So if I write the code and I’m a man, a Caucasian man, a white man, a European man with long hair like mine, I think: OK, “man”. In my mind, “man” is Carlos. “Woman” in my mind is my mom, my wife. “Children”: my son, my daughter. They are “children”. So when I think “a man”, I think: white man. And most of the IT professionals writing artificial intelligence in Europe are white men. Not white women, white men. And most of them are men, not women. So it’s very interesting: when we say we can create something objective, something clear, something transparent, we are putting our blind spots into the machine. We are creating artificial intelligence, creating software with artificial intelligence, full of blind spots. Ours.
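The blind-spot problem Carlos describes can be shown in miniature with a toy sketch (purely illustrative, not any company’s actual system): a screening “model” trained on historical hiring data in which past hires skew male will simply reproduce that skew when it scores new candidates.

```python
from collections import Counter

# Toy illustration of bias inherited from training data.
# Hypothetical historical data: 90 past hires were men, 10 were women.
historical_hires = ["male"] * 90 + ["female"] * 10

def train(hires):
    """The 'model' is just the rate at which each group was hired before."""
    counts = Counter(hires)
    total = len(hires)
    return {group: n / total for group, n in counts.items()}

def score(model, candidate_gender):
    """New candidates inherit the historical preference as their score."""
    return model.get(candidate_gender, 0.0)

model = train(historical_hires)
print(score(model, "male"))    # 0.9 — the past skew becomes the future score
print(score(model, "female"))  # 0.1
```

Nothing in the code mentions merit; the “objective” score is nothing more than yesterday’s bias, which is exactly why reviewing what information the machine is taught matters.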

Enrico: So I think I see two ways of solving this problem. Either we go to the root of it and diversify our IT staff and technicians, which would be the “easier” way, in quotes, because it’s actually very difficult, but I think it has a bigger social impact, or we just adjust the machines to correct this mistake, right?

Carlos: Yeah, I think we have two different options. I’m not an IT expert, so I’m not an expert in these aspects, but talking about machine learning: maybe we need to decide the best way to teach things to the machine, what information we want to give it, and how it can find new information. That’s one thing. And the second thing is that maybe we need to create a group of people, a community, to examine the process throughout, at the beginning, middle, and end, for example.

So when we are preparing new software, maybe you need one person to write the code, but maybe you also need a commission, a group of people, to examine it, and for many reasons: gender and sexual differences, LGBT people, different ethnicities, different countries, different sensibilities, and of course different religions and beliefs.
So how can we do it? For me, the first thing is this: we are aware that we have a lot of blind spots, and we are aware that these blind spots can end up in the machine. So awareness is the first thing; maybe 90% awareness, 10% commission.

Being aware of that is wonderful. I think one of the best things we have now is the high level of self-awareness about the blind spots we are putting into artificial intelligence.

And maybe the next step is something easier: we need to create a framework to examine, check, and test the machine before we give it to somebody to use.

Enrico: So you think that, as you said, a community, and that in this case, obviously, collective intelligence is key, to…

Carlos: The only key, I think, in this case.

Enrico: Perfect. Yeah, I definitely agree and see your point about creating some kind of task force behind every process that involves the selection of individuals and talent.

Carlos, thank you very much for this nice and, I think, eye-opening interview about this issue we have with inclusivity and the solutions we can adopt to solve it. Thank you very much, Carlos. I hope to see you again soon.

Carlos: My pleasure. Bye.

The Limits of Language AI

By Kirti Vashee

AI technology is a source of constant debate in the translation industry. What kind of potential does it have right now, and could it replace humans? Kirti Vashee, Language Technology Evangelist at Translated, gives his take on current dynamics and on the future role of AI in translation.

Read it now