
Expert Directory

Showing results 1 – 2 of 2

Saffron Karlsen, PhD

Associate Professor in Sociology

University of Bristol

Discrimination, Equality, Ethnic Backgrounds, Identity, Prejudice, Race

Professor Saffron Karlsen is a Senior Lecturer in Social Research and a member of the Centre for the Study of Ethnicity and Citizenship. Her research concerns how people identify and define their own ethnicity, drawing on their family backgrounds, sense of community belonging, and life experiences, and the challenges they face. She has examined ethnic discrimination; ethnic inequalities in health, education, and society; social mobility; attitudes towards female genital mutilation (FGM); the use of ethnicity data in policy decisions; and the notion of being British.

Most recently, Professor Karlsen has been tracking the social impact of COVID-19 on ethnic groups, particularly in health provision and equality of access to support. She is part of an International Network on Female Genital Cutting and a member of the advisory board of a project funded by the Swedish Research Council for Health, Working Life and Welfare to explore approaches to FGM safeguarding. She is leading the creation of a Bristol Race Equality Network in partnership with Bristol City Council and the Black South West Network. Professor Karlsen also sits on the ONS Inclusive Data Taskforce.

Education
1995 - BSc Economic History with Population Studies, London School of Economics
1996 - MSc Medical Demography, London School of Hygiene and Tropical Medicine
2006 - PhD Sociology, University College London

Qihang Lin

Associate Professor of Business Analytics

University of Iowa Tippie College of Business

Artificial Intelligence (AI), Discrimination

Qihang Lin, associate professor of business analytics at the University of Iowa’s Tippie College of Business, studies artificial intelligence and discrimination with a National Science Foundation grant.

Based on his research, he believes an independent third-party organization must be created to regulate AI systems, specifically to ensure that the decisions algorithms make are fair and do not discriminate against disadvantaged groups.

He said that without safeguards in place, algorithms can easily become discriminatory if the people who design and program them are not careful.

Lin says the regulatory organization could be a government agency, much as the U.S. Department of Agriculture holds food producers to standards that ensure our food is safe to eat. Or it could be an industry group, similar to the way an independent third party awards ISO certification to businesses that meet certain management or operational standards. In fact, he said, ISO standards already exist that apply to AI in areas such as security, but none of them pertains to discrimination.

The body, whether a government agency or an industry group, would consult with AI researchers, legal experts, and domain experts to develop and standardize a procedure for assessing the fairness of a business's AI systems. Laws and policies would then require AI systems to be certified as fair and unbiased before deployment.

Lin’s proposed new body would have the authority to audit the data-driven decision-making systems used by businesses, governments, hospitals, and other organizations to ensure they are fair, transparent, and trustworthy.
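To make the idea of such an audit concrete, the following is a minimal, hypothetical sketch of one common fairness check an auditor might compute: per-group approval rates and the disparate-impact ratio (the "four-fifths rule" from U.S. employment-discrimination guidance). The function name, sample data, and 0.8 threshold are illustrative assumptions, not part of Lin's research or any existing certification standard.

# Illustrative sketch only: one fairness check an auditor might run on a
# decision system's outcomes. Data, names, and threshold are hypothetical.
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs; approved is a bool."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1

    # Approval rate per demographic group, then the ratio of the lowest
    # rate to the highest (the disparate-impact ratio).
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical audit sample: (demographic group, loan approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

rates, ratio, passes = disparate_impact(sample)
print(f"approval rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f} -> {'pass' if passes else 'flag for review'}")

In this toy sample the ratio is about 0.33, well below the 0.8 threshold, so an auditor would flag the system for closer review; a real certification procedure would use standardized metrics, protected-attribute definitions, and thresholds agreed with the kinds of experts Lin describes.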

He believes such a certification system would add credibility to AI products and reduce public concern that algorithmic decisions are being made unfairly. It would also help businesses by reducing the likelihood that their decisions are biased and in violation of state or federal discrimination laws.

Regulations and standards would also promote the adoption of AI applications and, ultimately, increase both the efficiency and fairness of our society.
