AI News Weekly – Issue #180: When AI in healthcare goes wrong, who is responsible? – Sep 24th 2020

Original article was published on AI News Weekly

In the News

When AI in healthcare goes wrong, who is responsible?

Studies suggest AI outperforms human doctors in set tasks.
At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible.


Use the Orca Security 2020 State of Cloud Security Report to Benchmark Yourself Against Your Peers and Learn the 4 Things You Must Do Now to Avoid a Major Breach

For most organizations, cloud workload security depends on installing security agents across all assets, something that rarely happens, as this report shows. For example, 81% of organizations have at least one neglected internet-facing workload.


To Make Fairer AI, Physicists Peer Inside Its Black Box

After repurposing facial recognition and deepfake tech to study galaxies and the Higgs boson, physicists think they can help shape the responsible use of AI.

Watch a Robot AI Beat World-Class Curling Competitors

Artificial intelligence still needs to bridge the “sim-to-real” gap.
Deep-learning techniques that are all the rage in AI log superlative performances in mastering cerebral games, including chess and Go, both of which can be played on a computer.

Applied use cases

The Supply of Disinformation Will Soon Be Infinite

CounterPunch published a January 2018 postmortem detailing what its investigation had found: articles plagiarized from The New Yorker, the Saudi-based Arab News, and other sources; prolific “journalists” who filed as many as three or four stories a day, but whose bylines disappeared after inquiries…

The Cruel New Era of Data-Driven Deportation

The agency had long tapped into driver address records through law enforcement networks.
Eyeing the breadth of DMV databases, agents began to ask state officials to run face recognition searches on driver photos against the photos of undocumented people.

Have you read something written by GPT-3? Probably not, but it’s hard to be sure

Liam Porr, a computer science student at the University of California, Berkeley, used the new machine learning model to generate the post with the intention of fooling the public into believing it was the product of a human mind.


Why kids need special protection from AI’s influence

Algorithms are increasingly shaping children’s lives, but new guardrails could prevent children from getting hurt.

#5: Algorithmic Colonisation with Abeba Birhane

In this podcast episode, ETC Group speaks to Abeba Birhane, a PhD candidate in cognitive science at University College Dublin in the School of Computer Science.

AI Ethics #24: Science fiction to teach AI ethics, unnoticed cognitive biases, post-pandemic university, face-mask recognition, political databases, and more …

Machine translation for African languages, grassroot efforts to combat misinformation, AI regulations for children, NSCAI responsible AI principles and more from the world of AI Ethics!


AI Ethics: We Need to Walk the Walk, Not Just Talk

Along with the exponential increase in funding, the quarterly earnings calls of the Fortune 500 show an increasing preoccupation with the ethics of AI, autonomous systems and robotics.

ABB Debuts Next-Gen Robot With Multi-Industry Suite of Digital Robotics Automation

ABB has debuted a suite of new digital robotics automation products, solutions, and services designed to help customers more fluidly address key trends set to revolutionize the face of manufacturing at the 2020 China International Industry Fair (CIIF), according to a recent press release from…

Regina Barzilay wins $1M Association for the Advancement of Artificial Intelligence Squirrel AI award

In recognition of this, the world’s largest AI society — the Association for the Advancement of Artificial Intelligence (AAAI) — announced today the winner of their new Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, a $1 million award given to honor individuals whose work…


Intel’s Artificial Intelligence Podcast

Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence (BAIR) Lab.

Introducing KILT, a new unified benchmark for knowledge-intensive NLP tasks

What the research is: KILT (Knowledge Intensive Language Tasks) is a new unified benchmark to help AI researchers build models that are better able to leverage real-world knowledge to accomplish a broad range of tasks.

🚧 Simple considerations for simple people building fancy neural networks

At the same time, deep learning frameworks, tools, and specialized libraries democratize machine learning research by making state-of-the-art research easier to use than ever.
