Source: Deep Learning on Medium
Today’s AI systems do not have agency.
Deep learning algorithms work on training data fed to them by an entity, usually human, outside them. The output of AI systems follows a prescribed mode and “acts” within a prescribed sphere.
What if AI was set free?
Agency of Enquiry
At one level this would mean that AI systems could roam wide and deep: have independent access to databases across systems, even suggest and specify experiments for their handlers to set up, run and feed back the results, and, most importantly, decide which particular aspects or issues to “work” on based on independent assessments of utility. Such an AI system would have an agency of enquiry. To my mind this agency of enquiry makes sense within specific domains: for example, an AI system that has an agency of enquiry in, let us say, biochemistry or space propulsion. Would not such a system be a good, perhaps even a great, addition to the scientists, researchers and thinkers in that field? Would not its facility to analyse complex data, find patterns and formulate hypotheses at speed enable it to match and even outdo great human minds?
Is there a danger in creating AIs with “agency of enquiry”?
I think not, provided the agency is limited to defined domains. Furthermore, without what I call “agency of action”, the threat of an AI with an “agency of enquiry” interfering with the real world is minimised. Such an AI could only yield knowledge that humans would then have to act on.
Agency of Action
While an AI with agency of enquiry can decide what to enquire about and which databases to enquire into, its only outputs are patterns, hypotheses and specifications for suggested experiments. Acting on them is left to its “users”: humans and human-run organisations. What if AI systems with agency of enquiry are also given agency of action? Currently there exist AI systems that have a limited agency of action but no agency of enquiry; automated maintenance systems or even automated cars are examples of such systems.
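The split described above can be made concrete as two separate capability interfaces. This is a purely illustrative sketch, not an existing framework; all class and method names here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch: the two agencies as separate capabilities.
# An EnquiryAgent may read and propose; only an ActionAgent may execute,
# and only what a human has approved.

@dataclass
class Hypothesis:
    statement: str
    suggested_experiment: str

class EnquiryAgent:
    """Agency of enquiry: may read data and propose, but never execute."""
    def __init__(self, databases):
        self.databases = databases  # read-only access to data sources

    def enquire(self, topic):
        # Trivial keyword matching stands in for real pattern-finding.
        evidence = [row for db in self.databases for row in db if topic in row]
        return Hypothesis(
            statement=f"Observed {len(evidence)} records relating to {topic}",
            suggested_experiment=f"Collect more data on {topic}",
        )

class ActionAgent:
    """Agency of action: executes only decisions handed to it by humans."""
    def __init__(self):
        self.log = []  # audit trail of everything executed

    def act(self, human_approved_decision):
        self.log.append(human_approved_decision)
        return f"executed: {human_approved_decision}"
```

The design point is that nothing in `EnquiryAgent` can call `act`; crossing from a hypothesis to an action requires a human handing the decision over.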
The paradigm shift occurs when we create AI systems that have both agency of enquiry and agency of action. Imagine, for example, an AI system that runs the Central Bank of a country and has both. Such a system can range far and wide to seek patterns, make prognostications and not only suggest remedies but actually announce and enforce them. In a fairly abstract area like macroeconomics, the ability of an AI to look for and discern patterns and signals across complex systems and to analyse the impact of interventions at speed could beat any human team. While the imperviousness of such a system to human emotions, popular dogmas or political pressures is a definite advantage, the possibility of hard-wired prejudices and follies is a threat that will need to be guarded against.
It is when AI systems with both agency of enquiry and agency of action move from abstract systems like macroeconomics to the messy world where humans actually live and act that all the troubling implications of the singularity start to appear.
For example, as robotics develops one can imagine policing moving into the domain of AI systems with agency of enquiry and action. It is with such systems that questions about in-built prejudices that are constantly reinforced become very important. Even then, such problems can be handled by careful design and by audit systems run by humans. With such safeguards the world would still not have reached the dreaded singularity.
However, if and when we reach the stage where AI systems with agency of enquiry and action design and audit other AI systems with agency of enquiry, the singularity begins to emerge. If, for example, a policing AI system is designed by a legislative AI system and is adjunct to a judicial AI system, then the singularity is not far away. (Outside the sphere of AI, the organisation of modern democratic society guards against such an eventuality by separating the branches of power: the military, for example, has a commander-in-chief from the political world, and the judiciary is chosen by the political power structure, while the power of the vote, in theory, governs the political power structure. It is when the power of the vote becomes meaningless and/or intermittent that the entire system of checks and balances gets fatally weakened. Some would say that this is the malaise affecting all democracies currently.)
So should AI get agency?
To my mind the question is moot.
Because all technology finds its optimum level of utility, and because the utility curve of AI tends asymptotically towards the agency-of-enquiry and agency-of-action nodes, AI systems will sooner or later evolve into having both agency of enquiry and agency of action.
The trick is to keep human greed for technological and economic progress under control by ensuring that AI systems operate within given domains, without one AI system with agency ever being free to design and put into operation another AI system with agency. This imperative should be a central AI tenet, just as the First Law of Robotics is in the fictional world of Isaac Asimov.
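The proposed tenet can be sketched as a registry rule: an agentic system is never allowed to commission another agentic system. This is a toy illustration under that single assumption; the registry, its fields and the system names are all hypothetical:

```python
# Hypothetical sketch of the tenet: a registry that refuses to let one
# agentic system commission another agentic system.

class AgencyRegistry:
    def __init__(self):
        self._systems = {}  # name -> {"agentic": bool, "creator": str}

    def register(self, name, agentic, creator):
        creator_record = self._systems.get(creator)
        # Core rule: an agentic creator may never commission an agentic system.
        if agentic and creator_record and creator_record["agentic"]:
            raise PermissionError(
                f"{creator} is agentic and may not commission "
                f"agentic system {name}"
            )
        self._systems[name] = {"agentic": agentic, "creator": creator}

registry = AgencyRegistry()
registry.register("human_lab", agentic=False, creator="root")
registry.register("policing_ai", agentic=True, creator="human_lab")  # allowed
try:
    # Blocked: an agentic system trying to create another agentic system.
    registry.register("audit_ai", agentic=True, creator="policing_ai")
except PermissionError as err:
    print(err)
```

Note that humans (non-agentic entries such as `human_lab`) remain free to create agentic systems; the rule only severs the AI-creates-AI chain the essay warns about.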