In recent months, headlines about artificial intelligence (AI) have ranged from concerning to downright dystopian. In Lockport, NY, the public school system rolled out a massive facial recognition system to surveil over 4,000 students, despite recent demographic studies showing that the technology frequently misidentifies children. Ongoing investigations into Clearview AI, the facial recognition startup built on photos illicitly scraped from the web, have revealed widespread unsupervised use not only by law enforcement but by banks, schools, department stores, and even wealthy investors and friends of the company.
The nascent coronavirus pandemic, too, has sparked frightening new applications of AI in China: location-tracking apps that restrict travel and share citizens’ data with the police, facial recognition technology that identifies individuals not wearing masks, and “smart helmets” that claim to detect nearby individuals with a fever.
But the recent Social Impact in AI Conference, hosted by the Harvard Center for Research on Computation and Society (CRCS), offered another path forward for the future of artificial intelligence.
In just two days, conference attendees shared their work applying artificial intelligence and machine learning to a strikingly wide range of domains, including homelessness services, wildlife trafficking, agriculture for low-literacy farmers, HIV prevention, adaptive interfaces for blind users and users with dementia, tuberculosis medication, climate modeling, social robotics, movement-building on social media, medical imaging, and education. These algorithms center not on surveillance and profit maximization, but on community empowerment and resource optimization.
What’s more, many of these young researchers are centering considerations of bias and equity, (re)structuring their designs and methodologies to minimize harm and maximize social benefit. Many voiced concerns about how their work interacts with privacy and technocratic values, not shying away from difficult questions about how to responsibly use personal data that is often collected by opaque methods.
The conference was designed around a central question: “What does it mean to create social impact with AI research?” Over the course of the event, at least one answer became clear: it means listening.
Indeed, a central takeaway from the conference was the importance of knowing what you don’t know, and of not being afraid to consult others when your expertise falls short. AI-based solutions will only ever solve problems on behalf of the many if they are designed in consultation with affected populations and relevant experts. No responsible discussion of the computational advances of new AI systems is complete without an integrated, interdisciplinary discussion of their social, economic, and political impacts. In this sense, the conference practiced what it preached: many attendees and speakers came from social work, biology, law, behavioral science, and public service.