Google has responded to the European Commission's white paper on Artificial Intelligence, offering feedback on how to regulate and accelerate the adoption of responsible and ethical AI.
In February this year, the European Commission launched a consultation on Artificial Intelligence and asked citizens and stakeholders to provide their feedback by June 14.
Google said on Friday that it supports the Commission’s plans to help businesses develop the AI skills they need to thrive in the new digital economy.
“Next month, we’ll contribute to those efforts by extending our check-up tool to 11 European countries to help small businesses implement AI and grow their businesses,” the tech giant said in a statement. Google Cloud already works closely with scores of businesses across Europe to help them innovate using AI.
Google has partnered with several European universities via its machine learning research hubs in Zurich, Amsterdam, Berlin, Paris and London, and many of their students go on to make important contributions to European businesses.
“We also support the Commission’s goal of building a framework for AI innovation that will create trust and guide ethical development and use of this widely applicable technology. We appreciate the Commission’s proportionate, risk-based approach,” said Kent Walker, SVP of Global Affairs at Google.
The ‘White Paper on Artificial Intelligence – A European Approach’ aims to foster a European ecosystem of excellence and trust in AI.
Google said that AI has a broad range of current and future applications, including some that involve significant benefits and risks.
“We think any future regulation would benefit from a more carefully nuanced definition of ‘high-risk’ applications of AI. We agree that some uses warrant extra scrutiny and safeguards to address genuine and complex challenges around safety, fairness, explainability, accountability, and human interactions,” the company explained.
While AI won’t always be perfect, it has great potential to improve on the performance of existing systems and processes.
“But the development process for AI must give people confidence that the AI system they’re using is reliable and safe,” said Google.
That’s true for applications like new medical diagnostic techniques, which potentially allow skilled medical practitioners to offer more accurate diagnoses, earlier interventions, and better patient outcomes.
The Commission’s proposal suggests “ex ante” assessment of AI applications (upfront assessment, based on forecasted rather than actual use cases).
“Our contribution recommends expanding established due diligence and regulatory review processes to include the assessment of AI applications,” said Google.
Responsible development of AI presents new challenges and critical questions for all of us.
In 2018, Google published its own AI Principles to help guide the ethical development and use of AI, and also established internal review processes to help avoid bias, test rigorously for safety, and design with privacy top of mind.
“AI is an important part of Google’s business and our aspirations for the future. We share a common goal with policymakers: a desire to build trust in AI through responsible innovation and thoughtful regulation,” Google noted.