From the Greeks to the present day, passing through Darwin, the way in which we classify living beings has remained under constant discussion. Aristotle was one of the first to propose a classification system based on characters. Theophrastus, Pedanius Dioscorides, and Pliny continued to contribute in one way or another to the categorization of life (Manktelow, 2010). In the mid-18th century, Carl von Linné introduced the binomial classification system, giving the classification of life a more organized structure, but Linné was only interested in describing species and not in taxonomic ranks above the genus. Lamarck, de Jussieu, and Adanson proposed, in different ways, the use of characters for a hierarchical organization of species. Lamarck and de Jussieu also believed that the similarities among species corresponded to a continuity of form. Comparative anatomy then emerged with Cuvier and Leclerc, who went deeper into the importance of characters (Stevens, 2003).
These researchers, working before the theory of natural selection, had in common that they conceived of species as immutable categories: new species could not emerge, and existing forms simply occupied their places in the "scala naturae". With the advent of Darwin and Wallace's theory of natural selection, species began to be understood as groups of organisms whose form changes under various factors, the idea of the common ancestor gained strength, and the change of organisms over time began to be represented with phylogenies (Ruse, 2009), placing classification within an evolutionary perspective. The use of phylogenies represented as trees made it necessary to establish techniques for inferring those trees. The definitions of homology and homoplasy thus became important when choosing characters, but beyond the already famous dispute among the three schools of systematics, the philosophical background of the methods used in cladistics has also generated great controversy from the beginning.
Willi Hennig proposed the maximum parsimony method based on Ockham's razor and Popper's epistemology. Popper believed he had solved the problem of induction in science, that is, the problem of drawing conclusions a posteriori from repeated prior observations, since these are merely "historical records" and have no predictive power: they do not guarantee what will happen in the future. For Popper, the only thing that matters is the data at hand, and a hypothesis is valid only if it is falsifiable (Rieppel, 2003). Falsifiability implies that a theory can be rejected by empirical, verifiable evidence; anything else would be pseudoscience (Popper, 1981). After Hennig, various inference methods have emerged (contrary, it is argued, to Popperian ideology) that use either likelihood (maximum likelihood) or posterior probability (Bayesian analysis). These two methods, considered inductive by those who defend parsimony because they rely on information beyond the data themselves, are regarded as contradicting Popper's epistemological theory; however, some authors claim that likelihood can also be considered Popperian (Helfenbein & DeSalle, 2005).
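To make the contrast between these approaches concrete, here is a minimal sketch of the idea at the core of the parsimony criterion: for a fixed tree and the character states observed at its tips, count the minimum number of state changes the tree requires, and prefer the tree with the fewest changes. The sketch uses the Fitch counting procedure; the tree topologies, character states, and the function name fitch_score are my own toy assumptions, not taken from any of the cited works.

```python
# A toy version of the "small parsimony" count behind the parsimony
# criterion (via the Fitch algorithm): for a fixed tree, how many
# character-state changes are needed, at minimum, to explain the tips?
# Tree topologies and states below are invented toy data, not real taxa.

def fitch_score(tree, states):
    """Minimum number of state changes for one character on one tree.

    tree   -- nested 2-tuples of leaf names, e.g. (("A", "B"), ("C", "D"))
    states -- dict mapping each leaf name to its observed character state
    """
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):             # leaf: state set is its own state
            return {states[node]}
        left, right = (post_order(child) for child in node)
        if left & right:                      # children can agree: no change
            return left & right
        changes += 1                          # children disagree: one change
        return left | right

    post_order(tree)
    return changes

states = {"A": "G", "B": "G", "C": "T", "D": "T"}
tree_1 = (("A", "B"), ("C", "D"))   # groups the two G-bearing taxa together
tree_2 = (("A", "C"), ("B", "D"))   # splits them apart

print(fitch_score(tree_1, states))  # 1 change: preferred under parsimony
print(fitch_score(tree_2, states))  # 2 changes
```

Parsimony simply minimizes this count over candidate trees; maximum likelihood and Bayesian analysis replace the count with a probabilistic score, the probability of the data given the tree or of the tree given the data respectively, which is exactly the extra model information that parsimony advocates object to.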
In my opinion, any statement made in science must be supported by explicit empirical evidence and must remain subject to falsification, whether by newly available information or by a new interpretation of the data. This is a good way to avoid falling into dogmatism and to keep advancing the creation of knowledge.
Bibliography
- Helfenbein, K. G., & DeSalle, R. (2005). Falsifications and corroborations: Karl Popper's influence on systematics. Molecular Phylogenetics and Evolution, 35(1), 271-280.
- Manktelow, M. (2010). History of taxonomy. Lecture from Dept. of Systematic Biology, Uppsala University.
- Popper, K. (1981). Science, pseudo-science, and falsifiability. Scientific Thinking, 92-99.
- Rieppel, O. (2003). Popper and systematics. Systematic Biology, 52(2), 259-271.
- Ruse, M. (2009). Darwin y la filosofía. Teorema, 28(2).
- Stevens, P. F. (2003). History of taxonomy. eLS.