Scientific Discovery and Its Logic — A Review

This article examines the arguments of two texts, by Thomas Nickles and Herbert A. Simon, on scientific discovery and on whether the discovery process is rational, logical, and method-based, or solely an outcome of creative intuition and the human mind’s unmatched efficiency at finding patterns. Simon’s paper begins by separating inductivism from the question at hand: taking either position does not commit us to the problem of induction. This is logically sound, because we are only asking whether there exists an algorithm that can encode, or normatively define, what scientists and researchers do, without touching on whether what they do, and how they do it, is correct. Having isolated the focus of discussion, we can move on to an assessment of the two papers.

Another important element of the problem definition, well explained in both texts, is the hierarchical structure of scientific discovery: first identifying a plausible hypothesis, then testing it to make verifiable predictions, i.e., the process of justification. Nickles adds further post-discovery stages in the later half of his text, backed by the example of Einstein and others later rediscovering and extending Planck’s result, underscoring his argument that a discovery process is never complete but keeps redefining, and in a sense rediscovering, itself.

Illustration from Harvard Magazine

(Although the analysis is self-contained, the reader is advised to refer to the texts it is based on, to better appreciate the arguments.)

Assessment

Simon’s effort to present the logic of method starts by defining a set of candidate processes/hypotheses P, constrained by a set of conditions C (the norms for testing laws), with G as the goal of discovering valid scientific laws. Here Simon assumes that the sets P and C are predefined and exhaustive, whereas scientists embarking on actual research might initially have a set of hypotheses (P) and ways to test them (C), but may redefine those sets as subsequent explorations and results unfold along the way. Thus, someone who wishes to program a discovery process may be unable to collect the complete information needed to define the sets required to start it. Further, games such as chess or tic-tac-toe are claimed to follow a normative set of rules for achieving the goal (winning the game). That is, the set P, if not C, would have to be constructed using an inductive (empirical) tool, whose problems are too well known to need discussion here. Hence the approach fails on the possibility of defining such sets with confidence.
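
To make Simon’s triple concrete, here is a minimal generate-and-test sketch of discovery as a search over P, under the test norms C, toward the goal G. The function name discover, the type annotations, and the toy data are my own illustration of the schema, not Simon’s formulation:

```python
from typing import Callable, Iterable, Optional, TypeVar

H = TypeVar("H")  # a candidate hypothesis/process drawn from P

def discover(P: Iterable[H],
             C: Callable[[H], bool],
             G: Callable[[H], bool]) -> Optional[H]:
    """Generate-and-test reading of Simon's schema: enumerate candidates
    from P, keep only those meeting the test norms C, and stop at the
    first one that satisfies the goal G (a valid law)."""
    for h in P:
        if C(h) and G(h):
            return h
    return None  # P was exhausted without reaching G

# Toy usage (my own example): 'discover' the exponent n in the law y = x^n.
data = [(1, 1), (2, 4), (3, 9)]
P = range(1, 5)                                  # candidate hypotheses
C = lambda n: n > 0                              # a (trivial) testing norm
G = lambda n: all(y == x ** n for x, y in data)  # goal: the law fits the data
print(discover(P, C, G))                         # -> 2
```

The essay’s objection is visible even in this sketch: the loop only runs once P, C, and G are fixed in advance, which is exactly what practicing scientists cannot always do.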

The paper further discusses the method and efficiency of discovering patterns in data, and describes a Heuristic Search Algorithm (HSA) that studies the data, draws information from it, and then proposes the hypothesis that most optimally explains the pattern. The text grounds this formulation with an example from the history of scientific discovery: Mendeleev’s Periodic Law. The discovery, it says, was the outcome of a pattern-matching exercise after arranging the elements by atomic weight, which Mendeleev knew, and an efficient computer program could probably have worked it out given the same information. The paper also argues that while the concepts form a finite set, infinitely many combinations of them exist, so a logic of discovery boils down to how efficiently these combinations can be applied to make discovery algorithmic. Because a finite basis is predefined, the author argues, the PROCESS of making a discovery is deductive, even if the discovery itself might not be (a problem common to all sources of discovery, human or machine). The author thus claims that every discovery, every novelty so to speak, can be deduced, leaving no room for irrationality or for original, unseen, entirely independent ideas. This implies that if we keep updating the set as new discoveries are made, then, given an efficient algorithm and enough computational power, a pattern-matching program could have done what Mendeleev did. But let us take a reverse walk through this process, dropping elements one by one from the finite set until we reach the empty set. How did it all start, how was the first discovery made, and how would the algorithm work with an empty set? The method identified to automate the discovery process thus seems to fail under backwards recursion.
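
As an illustration of what such a heuristic search over hypotheses can look like, here is a minimal sketch of data-driven law discovery roughly in the spirit of Simon’s work (his BACON program searched for invariant products and ratios of measured variables in a similar way). The candidate space of monomials x^a · y^b, the tolerance, and all names are my own assumptions for the example:

```python
import itertools

def nearly_constant(values, tol=0.01):
    """Heuristic test: is the spread of values small relative to their mean?"""
    mean = sum(values) / len(values)
    return mean != 0 and max(abs(v - mean) for v in values) / abs(mean) < tol

def discover_law(x, y, max_power=3):
    """Search simple monomial combinations x^a * y^b for one that is
    invariant across the data -- a toy heuristic search for a law."""
    for a, b in itertools.product(range(-max_power, max_power + 1), repeat=2):
        if a == 0 and b == 0:
            continue
        combo = [xi ** a * yi ** b for xi, yi in zip(x, y)]
        if nearly_constant(combo):
            return f"x^{a} * y^{b} = const"
    return None

# Illustrative data: orbital distance (AU) and period (years), Mercury..Mars.
distance = [0.387, 0.723, 1.0, 1.524]
period   = [0.241, 0.615, 1.0, 1.881]
print(discover_law(distance, period))  # -> "x^-3 * y^2 = const" (Kepler's 3rd law)
```

Run on known planetary data, the search recovers Kepler’s third law; the difficulty the essay presses on is how to choose the candidate space and the stopping criterion when the result is not already known.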

As for the pattern-matching examples quoted and the efficient algorithms proposed for them: these were cases where we already knew the results, which is precisely why an efficient algorithm could be designed. How do we make an algorithm efficient, i.e., make it pick up hypotheses and test them on data efficiently, when the goal is not clear, i.e., when an unknown phenomenon is being researched? The paper by Nickles tries to solve this problem by proposing a context-based algorithm that is specific to the domain of research, so that it explores only the relevant hypotheses, using only the relevant information. Even this scenario has flaws; consider the example of finding the pattern in a given sequence of characters. The claim that not every human will be equally efficient at finding that pattern looks agreeable: some humans are more efficient than others. Then we must also agree that some humans could not solve it efficiently even using a computer. One can make a computational process at most as efficient as oneself, because one can only encode the algorithm one used to solve the problem with one’s own mental faculties. Hence, how efficiently a problem gets solved depends on who is trying to solve it, by hand or computationally, and the human always remains superior to the computer they programmed. How do we then explain these differences in efficiency among humans, and the limitation (the human being the limit) of computers running the efficient processes coded into them? The requirement of making discoveries EFFICIENTLY itself brings in a non-excludable component of irrationality, or at least something beyond the epistemological scope, reaching into the psychological domain.
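
To see what "encoding one’s own strategy" means here, consider a toy solver for the kind of letter-series task the example gestures at. It knows exactly one hypothesis family (a repeating period), which is the point: the program can only be as good as the strategy its author thought to write down. All names and the example sequence are my own:

```python
def smallest_period(seq):
    """Return the smallest p such that seq repeats its first p characters
    (the single 'pattern' hypothesis this solver knows about)."""
    for p in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p
    return len(seq)

def continue_sequence(seq, n):
    """Extend seq by n characters under the repeating-period hypothesis."""
    p = smallest_period(seq)
    return seq + "".join(seq[i % p] for i in range(len(seq), len(seq) + n))

print(continue_sequence("ABMABMAB", 4))  # -> ABMABMABMABM
```

A sequence governed by a rule outside this hypothesis family (say, alphabetic skips) would defeat the program entirely, until a human notices the rule and codes it in.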

The text by Nickles very rationally presents the entire evolution of the debate over whether scientific discovery is rational. The author prefers the word ‘rational’ to ‘logical’ for the sake of greater irrefutability. The text introduces the consequentialist hypothetico-deductive (H-D) method, on which a hypothesis is justified by the confirmation of its consequences. It also recognizes that discoveries are indeed temporally structured experimental processes that do follow a method. But it expands the definition of discovery so far, even beyond ‘final justification’, that no algorithmic formulation of the discovery process could match it exactly. It notes that even expert scientists find it difficult to write down the method they will apply in their research, and sometimes they do not follow, or in fact violate, the initial set of rules (the finite set Simon talks about). Moreover, it deploys a recursive argument against the existence of any logic of discovery: if such logics existed, how were those logics themselves discovered? It discusses the closest existing approaches to putting the discovery process into an algorithmic framework, such as genetic algorithms, which implement blind-variation-plus-selective-retention (BV+SR) strategies and are very powerful. The author seems to conclude, though, that such logics can be efficient only if they are domain-specific, and that constructing a universal set of them would amount to making scientific discovery inflexible.
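
For concreteness, here is a minimal sketch of the BV+SR idea behind genetic algorithms: candidates are varied blindly (random mutation, with no foresight about what will work) and the fitter ones are selectively retained. The OneMax objective and all parameters are my own toy choices, not from Nickles’s text:

```python
import random

def bv_sr_search(fitness, length=20, generations=200,
                 pop_size=30, mutation_rate=0.05):
    """Blind variation + selective retention over bitstring 'hypotheses'."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Blind variation: flip each bit with small probability.
        variants = [[b ^ (random.random() < mutation_rate) for b in ind]
                    for ind in pop]
        # Selective retention: keep the fitter half of parents + variants.
        pool = sorted(pop + variants, key=fitness, reverse=True)
        pop = pool[:pop_size]
    return max(pop, key=fitness)

# Toy 'discovery' target: maximize the number of 1-bits (OneMax).
best = bv_sr_search(fitness=sum)
print(sum(best), best)
```

Note that the representation, the fitness function, and the mutation rate are all supplied by the designer; that reliance on problem-specific choices is precisely the domain-specificity the author points to.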

Conclusion

I mostly agree with the arguments by Nickles, which very nicely explain both the importance of studying the logic of discovery processes and the limitations such logics face. In my opinion, if one claims that a perfect logic of discovery actually exists, one thereby implies that the process can equivalently be performed on a computer. The whole of AI research works on this, trying to automate processes as far as possible. The aim of AI is not to attain complete automation, but to augment humans wherever a process can be feasibly formulated and performed efficiently by a computer, which restates the importance of studying the method of scientific discovery. If we say that scientific discoveries are absolutely logical (methodological), with no element of irrationality, intuition, or novelty of thought involved, then we imply that AI could someday perfectly match, and hence replace, humans everywhere. This is logically not possible: it might converge toward that point within some error, but never match it exactly.

Apart from that, the ethical issues raised by AI replacing humans also occupy considerable space in this debate. ‘Can AI have a personality?’ is a short analysis of whether an artificial intelligence can ever converge to an actual personality, or to human intelligence, along the physical, mental, emotional, social, and moral dimensions of personality. These dimensions of AI are worth studying, since a discovery, whether an invention or a social construction, is not merely the product of an intelligent combination of known theories and facts, but is intricately linked to social and moral aspects which, taken together, give scientists the power to innovate.