At every moment of our lives we are placed in the situation of choosing between two or more options. Perhaps for social reasons, in every sphere we are always told that we must make a decision: which religion to believe in, whether we are pro-Yankees or not, which political party we belong to. Even in Biology we must choose: whether we are botanists, primatologists, ornithologists, etc. In my Comparative Biology class my professor asked me to choose one philosophical current that I should use to frame and approach my questions. I could choose among four different philosophical currents: Bayesianism, Likelihoodism, Frequentism and Popperian Falsificationism.
Let us go through the first three of these currents. Although Bayesianism has a very old origin, only in recent years has it had a major resurgence in science. This current works with the probability that a hypothesis is true given certain observations, starting from prior probabilities (priors) that are changed or updated as evidence arrives. That probability is denoted Pr(H|O): the probability of the hypothesis given the data, observations or evidence (Sober, 2008; Kruschke, 2011). On the other side is Likelihoodism, which lacks priors and whose logic runs in the opposite direction to Bayesianism: it asks how well the hypothesis fits the data through the probability of the observations given the hypothesis, Pr(O|H) (Sober, 2008; Royall, 1997). Both branches are extremely powerful for making inferences and contrasting hypotheses; the main difference is the concept used for making comparisons (Sober, 2008; Kruschke, 2011). Likelihoodism uses the concept of favoring to express what the evidence says about the comparison of two hypotheses, while Bayesianism adopts the concept of confirmation to express what the evidence says about a hypothesis and its negation (Sober, 2008, p. 34). Finally, Frequentism, which has dominated the last century, is based on the frequency with which an event occurs over a set of experiments of size N (Johansson, 2011); this gives a ratio from which a P value is calculated and compared against a null model or null hypothesis (Sober, 2008; Johansson, 2011; Wagenmakers, 2007). The main difference (I think) lies in their philosophical perspective on the comparison of hypotheses and in the use of priors in Bayesianism (Sober, 2008); in other respects they are very similar. As a first approach to this claim, consider the following exercise:
Suppose we have our hypothesis (H) and data (T). When we do a Bayesian analysis we seek the probability of H given T, Pr(H|T); in contrast, with the likelihood we ask how well H fits our data T, that is, the probability of the data given the hypothesis, Pr(T|H). When we apply Bayes' theorem to our example we have Pr(T|H)Pr(H) = Pr(H|T)Pr(T), and therefore Pr(H|T) = [Pr(T|H)Pr(H)]/Pr(T). So from a likelihood value we can obtain the posterior; the example is somewhat crude and simplistic, but it conveys the idea. I do not intend to fill this post with formulas and derivations that even I cannot explain, but Branden Fitelson, starting on page 7 of his article "Likelihoodism, Bayesianism, and Relational Confirmation", shows some examples of how certain Bayesian measures are more likelihoodist than Bayesian and vice versa, for anyone who wants to go deeper into the topic.
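To make the relationship concrete, here is a minimal sketch in Python of Bayes' theorem applied to a hypothesis and its negation. The numerical values (a prior of 0.5 and likelihoods of 0.8 and 0.3) are invented purely for illustration and are not taken from any of the cited sources.

```python
# Sketch of Pr(H|T) = Pr(T|H) * Pr(H) / Pr(T), with hypothetical numbers.

def posterior(likelihood_h, prior_h, likelihood_not_h):
    """Posterior Pr(H|T) for a hypothesis H against its negation.

    likelihood_h     : Pr(T|H), how well H fits the data T
    prior_h          : Pr(H), prior probability of H
    likelihood_not_h : Pr(T|not H), how well the negation of H fits T
    """
    # Law of total probability: Pr(T) = Pr(T|H)Pr(H) + Pr(T|not H)Pr(not H)
    evidence = likelihood_h * prior_h + likelihood_not_h * (1.0 - prior_h)
    return likelihood_h * prior_h / evidence

# A likelihoodist would only compare 0.8 against 0.3 (the "favoring" relation);
# the Bayesian combines them with the prior to "confirm" H over its negation.
print(posterior(likelihood_h=0.8, prior_h=0.5, likelihood_not_h=0.3))  # ~0.73
```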
One of the main problems of Bayesianism and Frequentism is the limited objectivity with which the data can be handled (Ayacaguer, 2000). On one side are the priors of the Bayesian analysis, which have a strong influence on the analysis and whose values can be altered to benefit a particular hypothesis; this is one reason why many people argue that the method is unreliable when used in daily life or in government agencies, since anyone can manipulate the priors for their own benefit. 'Flat priors' have been used as a solution to this problem, which makes the entire analysis fall on the likelihood (as sketched below). But Frequentism is not far behind, because one can manipulate the P values, or the rates of false positives and false negatives (the famous alpha and beta), to favor some particular result; another criticism is the very use of a null hypothesis (Ayacaguer, 2000; Johansson, 2011).
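As an aside, this is roughly what "the analysis falling on the likelihood" means; a small sketch with invented likelihood values for three hypotheses:

```python
# With a flat (uniform) prior, the posterior is just the normalized likelihood.
# The likelihood values below are hypothetical.

likelihoods = {"H1": 0.70, "H2": 0.20, "H3": 0.10}   # Pr(T|Hi)
flat_prior = 1.0 / len(likelihoods)                   # same Pr(Hi) for every Hi

evidence = sum(lik * flat_prior for lik in likelihoods.values())  # Pr(T)
posteriors = {h: lik * flat_prior / evidence for h, lik in likelihoods.items()}

# The prior cancels out, so the ranking of hypotheses is decided by the
# likelihood alone.
print(posteriors)  # {'H1': 0.7, 'H2': 0.2, 'H3': 0.1} (up to rounding)
```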
Similarly, Frequentism has the N problem: P values are influenced by the sample size, so we can roughly know beforehand what the result will be depending on whether we use a small N or a large N, and the P value can therefore be subjectively influenced by the choice of N (Ayacaguer, 2000; Wagenmakers, 2007; Johansson, 2011). So if we are judging by subjectivity we have a winner: Likelihoodism! But let us not get too excited, because Likelihoodism also has its critics, and one of the criticisms is that it is restricted to certain cases (Sober, 2005).
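A quick way to see the N problem is to compute the same test at two sample sizes. This is a sketch with an invented example (an observed proportion of 55% against a fair-coin null of 50%), not an analysis from any of the cited papers:

```python
# The same observed proportion yields very different P values depending on N.
from math import comb

def one_sided_p_value(successes, n, p0=0.5):
    """Exact one-sided P value: Pr(X >= successes) when the null rate is p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

print(one_sided_p_value(55, 100))    # ~0.18   -> "nothing going on"
print(one_sided_p_value(550, 1000))  # ~0.0009 -> "highly significant"
```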
Given everything written above, it would appear that Likelihoodism is the best current and that I should therefore choose it, but no. It may sound crazy, but for me, after all this time exploring these currents, the conclusion is that one cannot choose any single one of them. I have to highlight that all of them have good and bad points, and for that reason I consider that they complement each other and that all of them can be used within a Bayesian analysis (obviously without declaring oneself a Bayesian). To understand this idea we must keep in mind the main components of a Bayesian analysis: the priors and the likelihood. I have already pointed out the relationship between likelihood and Bayes using the theorem. The priors, on the other hand, are where Frequentism can come in: we can take the results of a frequentist analysis as priors in a Bayesian analysis. Let me give you an example.
Suppose you arrive in a new city and want to know whether that month is rainy or not. Throughout the month you take notes on which days it rains and which it does not; let us assume it rains 25 out of 30 days. From that ratio, and calculating a P value, you would know whether this month was rainy or not, but what could you say from that assertion about the following months? Really nothing. From those observations, however, you could infer how likely it is to rain the next month, because throughout that month we noticed that, in general, before the rain comes the sky clouds over; so if the next day we see that the sky is cloudy (O), we know that the probability of rain (H) is going to be high, all thanks to the prior we obtained from our frequentist observations.
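Here is a small sketch of that reasoning. The prior comes from the frequentist count (25 rainy days out of 30); the two conditional probabilities of a cloudy sky are invented numbers, since the original example does not give them:

```python
# Frequentist counts as the prior, a cloudy sky as the observation.

prior_rain = 25 / 30          # observed: it rained 25 of 30 days last month
p_cloudy_given_rain = 0.9     # hypothetical: rain is usually preceded by clouds
p_cloudy_given_dry = 0.3      # hypothetical: clouds without rain are less common

# Bayes' theorem: Pr(rain | cloudy) = Pr(cloudy | rain) Pr(rain) / Pr(cloudy)
p_cloudy = (p_cloudy_given_rain * prior_rain
            + p_cloudy_given_dry * (1 - prior_rain))
p_rain_given_cloudy = p_cloudy_given_rain * prior_rain / p_cloudy

print(round(p_rain_given_cloudy, 3))  # ~0.938: rain is very likely tomorrow
```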
I would also like to give an example that occurred to me while playing Xbox, to explain my idea better. Suppose we are going to fight the final boss. At the beginning we know nothing about its attacks or its moves, and we have to spend one or more lives to defeat it; it is at that moment that our Bayesian, likelihoodist and frequentist analyses arise! At first we do not know how to approach the enemy (flat priors), and defeating it (H) with our initial strategy (T) is very unlikely (a low likelihood, Pr(T|H)), which ultimately leaves a low probability of passing the game (the Bayesian posterior, Pr(H|T)). As we move forward in the fight we notice that the enemy uses certain attacks with a particular frequency, so we learn the probability of it making a given attack; these estimates improve the more we fight and the more observations we make of the enemy's movements. This, I believe, is a properly frequentist analysis (we accumulate knowledge and sharpen our priors). Once we know these frequencies we change our strategy (T), the likelihood given the new strategy, Pr(T|H), goes up, and eventually the probability of passing the game, Pr(H|T), increases.
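The boss fight can be written as sequential updating: each observed move updates our belief about which attack pattern the boss follows. The two candidate patterns and the sequence of observations below are invented for the sake of the example:

```python
# Two hypotheses about the boss: it opens with a fire attack 80% of the time
# (H1) or only 20% of the time (H2). We start from flat priors and update
# after every round we watch.

p_fire = {"H1": 0.8, "H2": 0.2}        # Pr(opening move is fire | hypothesis)
belief = {"H1": 0.5, "H2": 0.5}        # flat priors: we know nothing yet

observations = ["fire", "fire", "claw", "fire", "fire"]   # what we saw each round

for move in observations:
    # Likelihood of this observation under each hypothesis
    lik = {h: (p_fire[h] if move == "fire" else 1 - p_fire[h]) for h in belief}
    # Bayes update: posterior proportional to likelihood x prior, renormalized
    unnorm = {h: lik[h] * belief[h] for h in belief}
    total = sum(unnorm.values())
    belief = {h: v / total for h, v in unnorm.items()}
    print(move, {h: round(p, 3) for h, p in belief.items()})

# After a few rounds the belief concentrates on H1, and we can switch to the
# strategy that counters the fire attack.
```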
So, you will ask, where is Popperian Falsificationism? Well, I think it is the least criticized of the four currents. Basically, Popper tells us that in science we must reject some theories and hypotheses in order to corroborate others (which does not mean that the corroborated ones are true), something like modus tollens: if a hypothesis implies a prediction and the prediction turns out to be false, the hypothesis is rejected; and even when the prediction holds, that does not make the hypothesis true. So, in this way, the three currents use Popper's logic: Likelihoodism 'favors' one hypothesis over another, Bayesianism 'confirms' one hypothesis with respect to its own negation, and Frequentism compares one hypothesis against the null hypothesis, but we never corroborate that the hypotheses are true.
I think that choosing among these three currents is like claiming that choosing just one phylogenetic search method is better; all three have good and bad points, and often what really matters is the data, not the method. I find more appropriate, for example, what Morrone and Crisci do with the two methods of historical biogeography (panbiogeography and cladistic biogeography): they show how the methods complement each other and are necessary steps for a good biogeographic analysis (Morrone & Crisci, 1995; Morrone, 2001). This, I think, lets us look at the problem from multiple angles and find multiple good solutions while avoiding bias. I have always been told that extremes are not good, so why not avoid the extremes, find an intersection between them, and take the best of each one? Just imagine how the world would change if religious zealots could find a middle point. In the end these are only methods, and what seems to me more crucial and critical is the objectivity with which the researcher analyzes the results.
___________________________________________________________________________________
References.
- Branden Fitelson. Likelihoodism, Bayesianism, and Relational Confirmation. Synthese (2007).
- Tobias Johansson. Hail the impossible: p-values, evidence, and likelihood. Scandinavian Journal of Psychology (2011).
- L.C. Silva Ayacaguer & A. Muñoz Villegas. Debate sobre métodos frecuentistas vs bayesianos. Gac Sanit (2000).
- Eric-Jan Wagenmakers. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review (2007).
- Silvio Pinto. El Bayesianismo y la Justificación de la inducción. Principia (2002).
- Royall, R. Statistical Evidence: A Likelihood Paradigm. Boca Raton, Fla.: Chapman and Hall (1997).
- Elliott Sober. Evidence and Evolution: The Logic Behind the Science. Cambridge University Press, United States of America (2008).
- Kruschke, J.K. Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Amsterdam: Elsevier (2011).
- Juan J. Morrone. Homology, biogeography and areas of endemism. Diversity and Distributions (2001).
- Sober, E.: 2005, ‘Is Drift a Serious Alternative to Natural Selection as an Explanation of Complex Adaptive Traits?’. In: A. O’Hear (ed.): Philosophy, Biology and Life. Cambridge: Cambridge University Press.
1 comment:
The text itself is quite enjoyable, and it provides some interesting examples. However, at the end it seems there is no conclusion about which current you support and why you support it. On the other hand, there are a few statements that are inaccurate, such as "On the other hand is the Likelihoodism, which lacks of priors and the logic is contrary to the Bayesianism, looks for the probability how well the hypothesis fits to the data Pr(O|H)". According to Likelihoodism, what one really searches for is how fit the data is, given the hypothesis,- I am only putting the expression you used (Pr(O|H))into words-. On the third paragraph you wrote "One of the main problems in Bayesianism and Frecuentism is little objectivity when the data are managed (Ayacaguer, 2000)", since you have not mentioned anything about Frequentism before, I wonder what Frequentism is about, then. If you considered that evidence is a matter of objectivity, what would put you against being a strict Likelihoodist? why would you need an explanation about Popperian falsificationism afterwards?