Tuesday, February 27, 2007

Realism and the Historicity of Knowledge


Paul Feyerabend, 1989

First assumption: Theories, facts and procedures are the results of idiosyncratic historical developments (they are bound to temporal variables). Cultural and historical factors, not empirical adequacy, determined the survival of one statement (theory) and the disappearance of another. Each experimental fact depends on compromises between different groups with different experiences, different philosophies, and different bits of high theory to support their positions.

Second assumption, or separability assumption: the results obtained, or the statements found, exist independently of the circumstances of their discovery (as with the discovery of America). The phenomena happen without any need for us to describe them; they stand apart from our observations and interpretations.

The power of science: science cannot explain all phenomena, and it can be bound up with subjectivity.

Ways of analyzing reality:

Dogmatism: continuing to describe the world in accordance with one's own pet metaphysics.
Instrumentalism: drops the second assumption (separability), though not absolutely.
Relativism: atoms exist given the conceptual framework that projects them.

Scientists are sculptors of reality, but they do not merely act causally upon the world; they also create semantic conditions engendering strong inferences from known effects to novel projections and, conversely, from the projections to testable effects.

Sunday, February 25, 2007

Commensurability, Comparability, Communicability

Thomas S. Kuhn, 1982

The term incommensurable means 'no common measure'; in this context it becomes 'no common language', meaning that if two theories are incommensurable, there is no language, neutral or otherwise, into which both theories, conceived as sets of statements, can be translated without residue or loss. They must be stated in mutually untranslatable languages; and if there is no way in which the two can be stated in a single language, then they cannot be compared, and no arguments from evidence can be relevant to the choice between them.

In so far as incommensurability was a claim about language, about meaning change, this form is called local incommensurability. Most of the terms common to two theories function the same way in both; their meanings, whatever those may be, are preserved, and their translation is simply homophonic. Only for a small subgroup of (usually interdefined) terms, and for sentences containing them, do problems of translatability arise. It is simply implausible that some terms should change meaning when transferred to a new theory without infecting the terms transferred with them. This means that when such an interdefined term (word-sign) is part of two different theories to be compared or translated, those theories are incommensurable.

Some critics sketch the technique of interpretation, describing its outcome as a translation schema, and conclude that its success is incompatible with even local incommensurability. The problem with this argument is its equation of interpretation with translation. Kuhn states that interpretation is not the same as translation; the confusion is easy because actual translation often, or perhaps always, involves at least a small interpretative component.

Translation: done by someone who knows two languages; a substitution of words (not necessarily one for one).

Interpretation: done by someone who may initially command only a single language. What the interpreter does in the first instance is learn a new language, and whether that language can be translated into the one with which the interpreter began is an open question. The interpreter can then attempt to describe in English (or whatever language is being translated into) the referents of the terms to be translated. If the description is successful, no issue of incommensurability arises. On the other hand, the interpreter may have learned to recognize distinguishing features unknown to English speakers and for which English supplies no descriptive terminology; this is the kind of circumstance for which the term incommensurability is reserved.

Two people may speak the same language and nevertheless use different criteria in picking out the referents of its terms. That is why translation must preserve not only reference but also sense or intension; this is the essential role of sets of terms that must be learned together by those raised inside a culture, scientific or other, and which foreigners encountering that culture must consider together during interpretation. If different speakers using different criteria succeed in picking out the same referents for the same terms, contrast sets must have played a role in determining the criteria each associates with individual terms.

This is where the invariants of translation are to be sought. Unlike two members of the same language community, speakers of mutually translatable languages need not share terms, but the referring expressions of one language must be matchable to co-referential expressions in the other, and the lexical structures employed by speakers of the languages must be the same, not only within each language but also from one language to the other.

Proofs and Refutations (I)

Imre Lakatos
The British Journal for the Philosophy of Science, 1963


A conjecture (as an outcome after much trial and error) which has passed many different tests (attempts to falsify it) suggests that it could be proved. In mathematics, a proof is a demonstration that, assuming certain axioms, some statement is necessarily true. Lakatos proposes to use the word 'proof' for a thought-experiment which leads to a decomposition of the original conjecture into sub-conjectures (or lemmas), instead of using it in the sense of a 'guarantee of certain truth'.

Lakatos argues that informal mathematics grows through the incessant improvement of guesses by speculation and criticism, by the logic of proofs and refutations. The proposed concept of proof therefore deploys the conjecture on a wider front, so that the criticism has more targets and more opportunities for counterexamples. Lakatos distinguishes two kinds of counterexamples: (1) local counterexamples, which refute sub-conjectures, and (2) global counterexamples, which refute the main conjecture. While a local counterexample is a criticism of the proof, but not of the conjecture, a global counterexample is a criticism of the conjecture, but not necessarily of the proof.
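
As a concrete illustration, here is a minimal Python sketch (my own, not quoted from the paper; the cube and hollow-cube figures are the standard numbers from Lakatos's running example, Euler's conjecture V - E + F = 2):

```python
# Lakatos's running example: Euler's conjecture that V - E + F = 2
# holds for all polyhedra.

def euler_characteristic(vertices: int, edges: int, faces: int) -> int:
    """Compute V - E + F for a polyhedron."""
    return vertices - edges + faces

# Ordinary cube: the conjecture holds.
print(euler_characteristic(8, 12, 6))    # 2

# Hollow cube (a cube with a cubic cavity): V=16, E=24, F=12.
# V - E + F = 4, a global counterexample refuting the main conjecture
# (while leaving each step of the proof open to separate, local criticism).
print(euler_characteristic(16, 24, 12))  # 4
```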

When a local counterexample emerges, we do not have to scrap the proof; it is better to improve it, replacing the false sub-conjecture with a slightly modified one that will stand up to the criticism. Through such improvement we might obtain implausible conjectures, matured in criticism, that might hit on the truth.

On the other hand, refutations by counterexamples depend on the meaning of the terms in question. If counterexamples are to be an objective criticism, we have to agree on the meaning of the terms. One can eliminate any counterexample by an ad hoc redefinition; for that reason such redefinitions are frequently proposed and argued over when counterexamples emerge.

Finally, given the concept of proof presented here, we are not perturbed at finding a counterexample to a proved conjecture; we may even set out to prove a false conjecture.

'Prove all things; hold fast that which is good.'



Monday, February 19, 2007

The Propensity Interpretation of Probability

Karl R. Popper
The British Journal for the Philosophy of Science, 1959


By an interpretation of probability Popper means an interpretation of a statement such as 'the probability of a given b is equal to r':
p(a, b) = r

The frequency interpretation sees that formula as a statement that can, in principle, be objectively tested by means of statistical tests. Here, r describes the relative frequency with which the outcome a is estimated to occur in any sufficiently long sequence of experiments characterized by the experimental conditions b.

Popper takes issue with the frequency interpretation, especially as it concerns singular events (on that view, the probability of a singular event can be nothing but the relative frequency within the sequence in question), and proposes that the propensity interpretation of probability helps us to explain, and to predict, the statistical properties of certain sequences. Propensities are defined as possibilities which are endowed with tendencies or dispositions to realize themselves, and which are taken to be responsible for the statistical frequencies with which they will in fact realize themselves in long sequences of repetitions of an experiment.

The propensity interpretation says that probability is a property of the generating conditions (the experimental arrangement) and is therefore considered as depending upon these conditions. Here the conditions are endowed with a tendency to produce sequences whose frequencies are equal to the probabilities. In this way, a singular event may have a probability even though it may occur only once, for its probability is a property of its generating conditions. The justification of this new idea is made simply by an appeal to its usefulness for physical theory.
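
A minimal simulation sketch (my own illustration, not from Popper's paper): the generating conditions are modeled as a loaded die whose bias is a fixed property of the experimental arrangement, so the long-run relative frequency approaches the probability fixed by those conditions:

```python
import random

def loaded_die(bias: float = 0.25) -> int:
    """One trial under fixed generating conditions: face 6 has propensity `bias`."""
    if random.random() < bias:
        return 6
    return random.randint(1, 5)

# The probability of a 6 is a property of the arrangement (bias = 0.25),
# whether we run one trial or a hundred thousand; long sequences merely
# display it as a stable relative frequency.
trials = 100_000
freq_six = sum(loaded_die() == 6 for _ in range(trials)) / trials
print(f"relative frequency of 6 after {trials} trials: {freq_six:.3f}")  # ~0.250
```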

Saturday, February 17, 2007

A Note on Verisimilitude

Karl Popper, 1976


Based on Stefan Mazurkiewicz's paper (1974) and his definition of the stochastic or probabilistic distance between two deductive systems,

d(a, b) = p(ab' v a'b) = p(ab') + p(a'b),

Popper introduces the definition of distance from the truth (T is the set of true statements of L; t is the strongest true theory):

dT(a) = d(a, t) = p(at') + p(a't)
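
A small sketch (my own illustration, using a hypothetical uniform measure over eight 'possible worlds'): since p(ab') + p(a'b) is the probability of the symmetric difference of a and b viewed as sets of models, the distance can be computed directly:

```python
from fractions import Fraction

WORLDS = frozenset(range(8))  # hypothetical model space

def p(statement: frozenset) -> Fraction:
    """Probability of a statement = fraction of worlds where it holds."""
    return Fraction(len(statement), len(WORLDS))

def d(a: frozenset, b: frozenset) -> Fraction:
    """Stochastic distance: p(ab') + p(a'b), i.e. the symmetric difference."""
    return p(a - b) + p(b - a)

a = frozenset({0, 1, 2, 3})
t = frozenset({0, 1})    # hypothetical strongest true theory
print(d(a, t))           # dT(a) = 2/8 + 0 = 1/4
print(d(WORLDS - a, t))  # dT(a') = 4/8 + 2/8 = 3/4 = 1 - dT(a)
```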

The nearness function (truthlikeness, verisimilitude) is also introduced:

nT(a) = 1 - dT(a) = dT(a')
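
The second equality can be checked directly from the definitions (a short step not spelled out in the note): since p(at') + p(a't') = p(t') and p(a't) + p(at) = p(t),

dT(a) + dT(a') = p(at') + p(a't) + p(a't') + p(at) = p(t') + p(t) = 1,

so that 1 - dT(a) = dT(a').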

t' is the negation of the strongest true statement; it is the weakest of all false theories (an irreducible theory). If we identify t with the set of true statements T, its complement is the set of false statements F. The nearness to the truth (verisimilitude) of a is represented by the shaded parts of Fig. 3. These two shaded areas, read from right to left, represent the truth content CtT(a) of the theory a, and what remains of the set F of false statements once the falsity content CtF(a) of the theory a is removed; thus:

Vs(a) = CtT(a) + (F - CtF(a))

Popper thinks that T is too big, and that we should admit into our universe of statements only those we conjecture to be relevant; by confining ourselves to those conjectures we solve the problem of how to avoid strengthening a theory by inserting just any stray irrelevant conjunct.

One problem is that of comparing the verisimilitude of false theories. Popper was not able to show that we can approach truth through better and better approximations, that is, through false theories which come nearer and nearer to the truth.

David Miller thought that this approach implies that we can always define new constants such that, with respect to these, the accuracy of the theories is reversed. He seems to think that in taking a problem P1 and the parameters belonging to it as fundamental, Popper somehow has to reject another problem, say P3, with its own set of parameters. Popper rejects neither P3 nor any other problem that may arise out of the critical discussion of the theory; instead he recalls an old schema (where P stands for problem, TT for tentative theory, and EE for error elimination):


P1 --> TT1 --> EE1 --> P2 --> TT2 --> EE2 --> P3, etc.


If P3 is such a problem, then TT3 will have to be better not only with respect to P3 but also with respect to all preceding problems: this is demanded by the principles ensuring the rationality of the growth of knowledge. This shows that the way of approaching truth will partly depend upon the succession of problems, that is to say, upon the history of thought. But this is no more backward-looking than forward-looking: two historically isolated and different chains of problems, with their solutions, may become comparable with respect to verisimilitude only after the two chains have merged, that is, after we have found theories that solve the problems of both chains better than all their predecessors.

A statement like 'the theory a is nearer to the truth than the competing theory b' is never demonstrable, but may be asserted as a conjecture, strongly arguable for or against on the basis of (1) a comparison of the logical strength of the two theories and (2) a comparison of the state of their critical discussion, including the severity of the tests which they have passed or failed (a comparison of their degrees of corroboration). On this basis, Popper can give good support to the conjecture that Einstein's theory of gravitation is nearer to the truth than Newton's.

Thursday, February 15, 2007

Degree of Confirmation

The degree to which a statement x is confirmed by a statement y is denoted C(x, y), where x is the hypothesis and y is the evidence. The evidence y can confirm x, disconfirm it, or be independent of it.
Some people say that the relative probability of x given y, P(x, y), amounts to the same thing as C(x, y), i.e., that C(x, y) = P(x, y).
From the point of view of confirmation or corroboration, there are two extreme situations, plus an independent case:
§ C(x, y) = 1: the evidence fully supports the hypothesis.
§ C(x, y) = -1: the evidence fully undermines the hypothesis.
§ The evidence is independent of the hypothesis: it neither confirms nor refutes it.

The general formula is the following:

C(h, e, b) = [p(e, hb) - p(e, b)] / [p(e, hb) - p(eh, b) + p(e, b)]
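
A small numerical sketch (hypothetical probabilities of my own choosing, reading p(x, y) as the relative probability of x given y):

```python
def corroboration(p_e_hb: float, p_e_b: float, p_eh_b: float) -> float:
    """C(h,e,b) = [p(e,hb) - p(e,b)] / [p(e,hb) - p(eh,b) + p(e,b)]."""
    return (p_e_hb - p_e_b) / (p_e_hb - p_eh_b + p_e_b)

# Hypothetical model: p(h,b) = 0.5, p(e,hb) = 0.9, p(e,h'b) = 0.3.
p_h_b = 0.5
p_e_hb = 0.9
p_e_b = p_h_b * p_e_hb + (1 - p_h_b) * 0.3  # total probability: 0.6
p_eh_b = p_e_hb * p_h_b                     # 0.45

print(round(corroboration(p_e_hb, p_e_b, p_eh_b), 3))  # 0.286: partial support
```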

However, there will be intermediate cases:
§ Partial support.
§ Partial undermining.

Why is corroboration thought of as probability?

Because the degree of corroboration was used as a new name for logical probability (inductive logic). But C is a measure of the increase or decrease in the acceptability of a statement, while probability measures the acceptability itself; they are related roughly as acceleration is to velocity. C(x, y) lets us accept or choose a certain hypothesis according to its degree of corroboration, while logical probability cannot do this. A hypothesis may have a high probability but not a high degree of corroboration; therefore probability (the calculus of probability) is not the same as corroboration.
Confirmability is equal to refutability or testability.
E(x, y, z) is the explanatory power of x with respect to y, in the presence of z.
For any given y, C(x, y) increases with the power of x to explain y.




Tuesday, February 13, 2007

What is Dialectic?

Karl Popper
Mind, New Series, Vol. 49, No. 196 (Oct. 1940), pp. 403-426

Dialectic is a theory which maintains that human thought develops in a way characterised by the so-called dialectic triad: thesis, antithesis, and synthesis. The synthesis aims to resolve the struggle between the thesis and the antithesis, either yielding a new dialectic triad or stopping once a particular synthesis is reached. For the dialecticians, when a theory under consideration has been refuted, there will probably be something in it worth preserving, and this feature will be enriched by the adherents of the antithesis. Thus a satisfactory solution of the struggle will only be a synthesis, i.e., a theory in which the best points of both thesis and antithesis are preserved.

According to the author, we have to be very careful not to concede too much to the dialectic viewpoint. For example, it says that the thesis 'produces' its antithesis; actually, it is only our critical attitude which produces the antithesis. Similarly, we have to be careful not to think that a struggle between a thesis and its antithesis will always 'produce' a synthesis; the synthesis will usually be much more than a construction built merely of material supplied by thesis and antithesis. Another point to be chary about is the way in which dialecticians speak of contradictions. For them there is only one way of criticising a given theory: showing that either it is self-contradictory or it is contradicted by some other accepted statements. But a theory which involves a contradiction is entirely useless, because it does not convey any sort of information.

For Karl Popper, the method used in the development of human thought in general can be described as a certain kind of trial-and-error method. We can take a position for or against a theory (holding or rejecting it). Only when a certain theory or system is dogmatically maintained throughout some longer period does this fail to occur. If the method of trial and error is developed more and more consciously, it begins to take on the characteristic features of the scientific method, which can briefly be described thus:
· Problem
· A tentative sort of solution (a theory)
· Criticism (looking for vulnerable points, with an examination as severe as possible) and testing of the theory in question

If the outcome of a test shows that the theory is erroneous, it is eliminated. The method's success depends mainly on three conditions, namely, that sufficiently many and sufficiently different theories are offered, and that sufficiently severe tests are made. When a theory is rejected, this induces us to look out for a new standpoint, which is finally what drives the development of knowledge.


Friday, February 9, 2007

QUESTIONS

1. An example of hard induction.
2. Hard induction, naive induction.
3. What is the justification of inductive inferences?
4. Can general rules be formulated without frequency distributions?
5. Induction and Logic
6. How does biology fit into inductive reasoning?
7. The demarcation problem
8. Is law nature or is nature law?

Thursday, February 8, 2007

On the Justification of Induction

Hans Reichenbach
The Journal of Philosophy, Vol. 37, No. 4. (Feb. 15, 1940), pp. 97-103.



This paper is an answer to a paper by Isabel P. Creed.
The question of justification is raised relative to a previously chosen aim (A) or objective; it concerns the appropriateness of a certain means (M) or procedure for attaining it.

Different kinds of justification arise depending on what we know about the attainability of the aim. The author defines three cases, as follows:

1. We know something about the objective possibility of reaching A by applying the means M:
a) we know that by applying M we shall certainly reach A;
b) we know the probability p that A will occur if M is applied;
c) we know at least that p > 0;
d) we know that although p = 0, A is possible.
2. We do not know whether or not application of the means M will lead to A.
3. We know that by applying M we shall never reach A.

M is not justifiable in case 3; Reichenbach holds that all the others are justifiable. Miss Creed disagrees with cases 1d and 2; instead she proposes a new case (b) which she thinks justifiable in terms of belief. Reichenbach answers by criticizing the use of the concept of belief (how can a belief lead to a justification of induction?) and states that her case is just his case 2. Here the author also criticizes David Hume and his defence of inductive belief as a habit, in relation to the title of Miss Creed's paper.

He then goes on to explain the terms objective possibility and epistemic possibility. The first gives two options: something is possible or impossible. The latter admits more interpretations; the one that treats epistemic 'possibility' as the same as epistemic 'non-impossibility', including the case of indeterminacy in the term 'possible', is the one that would be used to demonstrate the possibility of success in a justification, making the inductive procedure possible.
Justification of the means M is conditional in so far as it refers to a certain aim A. If somebody does not want the aim A, an application of the means M by him would not be justified. There are two types of condition, c and c', c being the less demanding (just wanting the aim in general) and c' wanting it in the face of the more or less problematic chance of success.

The next part is a critique of a kind of analysis proposed by Miss Creed; Reichenbach disagrees that this analysis is necessary for a justification. The mathematical-expectation method is introduced; it determines whether or not a bet is acceptable. Where a is the pleasure, m the displeasure, and p the probability of A if M is applied, the expression p*a - m gives an acceptable value if it is greater than 0. This condition is extended to cases in which a and m are emotional values; if there were a possibility of measuring them, this would be the perfect justification of induction for Miss Creed. However, if we do not know p it is impossible to determine this value, and it is in any case irrelevant because it has a subjective element; even if such a calculation could be given, it would not necessarily mean an obligation to act (in the case of a bet), and in other cases the positive value is easily asserted, making the calculation unnecessary. Reichenbach continues: 'at this point we have left the logical field and are concerned with the psychological motives of our actions.'
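
A minimal sketch of the acceptability criterion (hypothetical numbers of my own choosing):

```python
def bet_is_acceptable(p: float, a: float, m: float) -> bool:
    """Accept the bet when the mathematical expectation p*a - m is positive."""
    return p * a - m > 0

print(bet_is_acceptable(0.4, 10.0, 3.0))  # True:  0.4*10 - 3 = 1.0 > 0
print(bet_is_acceptable(0.2, 10.0, 3.0))  # False: 0.2*10 - 3 = -1.0
```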


The problem of the justification of induction includes both the question of the decision to attempt predictions and the question of the choice of the best means of making them.
If the decision to make the attempt is justified whenever the aim is not proved to be unattainable, and if it is proved that the means are the best we have, then the problem of justification is solved.