Getting Started with RA: What are Claims? How do you work with them?

The Research Analysis platform is centred around the concept of the scientific claim. The goal of a claim is to take a hypothesis that has been argued in the literature and state it in a formalised language that allows for easier comparison, searching and analysis.

While there are several types of scientific claim, Research Analysis is currently focused on Cause-Effect type scientific claims eg. Statins Decrease Coronary Artery Disease in Humans. We have broken up the components of Cause-Effect claims and formalised their language and structure to make them easier to compare, search and analyse. The easiest way to see how they work is to look through some examples in the View Claims table, but here we provide an overview of the model.


Figure 1. Research Analysis Knowledge Management Model

  • Claim Elements:
    1. Treatment/Cause: This is the drug, environmental, genetic or other cause that is applied in the model system. eg. Statin treatment
    2. Effect: This is the type of effect that results from the Treatment/Cause. Currently we offer only three options: increases, decreases or not significant.
    3. Molecule/Disease: This describes the molecule or disease that is affected by the cause. eg. Cholesterol
    4. Organ Model: This describes the organ of the animal or the in vitro system where the cause-effect relationship was observed. eg. Blood
    5. Genetic Model: This describes the genetic model in which the cause-effect relationship was observed. eg. ApoE -/-, Wild Type.
    6. Species: This describes the species in which the cause-effect relationship was observed. eg. Human.
  • Standard Terms: Wherever possible, claims should use standardised language for the elements of the claims. Research Analysis currently requires that all terms should be sourced from one of the following, in order:
    • Medical Subject Headings (MeSH) standard – Elements of claims should either exist in MeSH as Subject Headings or any of the Entry Terms for a heading or the Tree Number/Unique ID. For example, Humans is a Subject Heading in MeSH, but you can also use the Entry Terms which include Human, Homo sapiens, Man.
    • NCBI Protein Database – The protein name, including synonyms, or the unique code (eg. the Accession) for the relevant database can be used in Research Analysis.
    • NCBI PubChem Database – The chemical name including synonyms or the PubChem CID code can be used in Research Analysis.

We do allow for new terms to be added where there is no satisfactory term in the above databases, however the use of standardised terms allows for much more powerful search and analysis. For example, a scientist in one field may use a different term to those in another field, but using standard terms we can link these two claims together and provide cross-field visibility. In future we also hope to provide searching and analysis across different levels of the term tree eg. Hominids or Mammals would consolidate claims for species further out on the tree.

  • Supporting Quotes: Research Analysis requires that there be a quoted statement taken from the literature that supports each scientific claim in the database. While the whole research paper is required to fully support a claim, the quoted statements provide some context to the claim in the words of the scientist. It also makes it easier for other scientists to identify the section of the paper that was used to create the formalised claim.
  • Referencing: Along with the Supporting Quote, Research Analysis requires that the PMID is provided for the paper that the Supporting Quote was taken from – This allows users to quickly dive into the paper for more information or context by clicking on the PMID link in Research Analysis.
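The claim model described above can be sketched as a simple data structure. This is a hypothetical illustration only; the class, field names and example values (including the quote and the placeholder PMID) are invented for the sketch and are not the platform's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Effect(Enum):
    # The three effect types currently offered by the model
    INCREASES = "increases"
    DECREASES = "decreases"
    NOT_SIGNIFICANT = "not significant"

@dataclass
class Claim:
    treatment_cause: str   # e.g. a MeSH, NCBI Protein or PubChem term
    effect: Effect         # one of the three allowed effect types
    molecule_disease: str  # molecule or disease affected by the cause
    organ_model: str       # organ or in vitro system observed
    genetic_model: str     # e.g. "ApoE -/-", "Wild Type"
    species: str           # e.g. "Humans" (a MeSH Subject Heading)
    supporting_quote: str  # quoted statement from the source paper
    pmid: str              # PubMed ID of the source paper

claim = Claim(
    treatment_cause="Statins",
    effect=Effect.DECREASES,
    molecule_disease="Coronary Artery Disease",
    organ_model="Blood",
    genetic_model="Wild Type",
    species="Humans",
    supporting_quote="(quote from the paper would go here)",
    pmid="00000000",  # placeholder, not a real PMID
)
```

Representing the effect as a closed set of three values mirrors the model's restriction on effect types and makes claims directly comparable and searchable.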

How do you capture claims using Research Analysis?

  • Claims to Capture: A research paper may introduce one new claim, but generally it presents a collection of claims that support a central claim of the paper. The user can choose to add just the highest-level claim of the paper or drill down and capture the supporting claims too. Claims should be entered only for papers that provide new evidence for, or verification of, the claim, not for papers that simply refer to the findings of other papers; ideally you would drill down into the paper referenced and enter the claim with a quote from that paper. For this reason we prefer that claims are entered from original research papers or papers that verify previous work, rather than from review papers.
  • Adding Claims: You can add claims to the database via two means:
  1. Add Claim on View Claims page: To enter a claim on this page you simply need to enter the values into each of the boxes at the top of the table and then press the “Add Claim” button. The nice thing about this method is that it simultaneously searches the database to see if the claim already exists and you’ll see related claims as you enter each of the terms.
  2. Upload a Spreadsheet of Claims: To load a batch of claims you can go to the Upload Claims menu option and select and upload a spreadsheet containing claims. The spreadsheet will need to be in the right format and you can learn more in this article.
  • Edit/Delete via the Detailed Claim View: To edit, delete or see more details relating to a claim, click the View link in the View Claims table to go to the claim page. Where a claim is supported by several Quoted Statements and papers, you can see a listing of these on the claim page. You can also get a unique link to the claim page that makes it easy to share the claim with other users of Research Analysis.

We hope that this helps you get started with Research Analysis, but please feel free to contact us if you have further questions.

Causation in Medical Scientific Statements

The concept of causation is a very difficult philosophical topic and doesn’t have a simple, settled definition. The causation implied by the Research Analysis model has been of interest to me for some time, but the inspiration for this article came from the Handbook of Analytic Philosophy of Medicine by Kazem Sadegh-Zadeh [1]. Sadegh-Zadeh provides a great overview of the philosophy of causation and, in particular, Etiology, the science of clinical causation. “Etiology, from the Greek term αιτια (aitia) meaning “the culprit” and “cause”, is the inquiry into clinical causation or causality including the causes of pathological processes and maladies” [1].

In this article, I will explore some of the tools provided in the handbook (You should refer to section 7.5 of the handbook for a more thorough introduction and overview) and discuss how they can be applied to the Research Analysis model. I will explore the probabilistic interpretations of claims, the concept of causation in relation to claims and the causal relevance of competing claims.

Probabilistic dependence

Let’s take an example claim from Research Analysis:

(1)         Statins decrease coronary events in normal humans

This claim can be found in Research Analysis here and its semantics are analysed in this previous article here. The semantic analysis resulted in the following logical description of the claim:

(2)         ∀e (DECREASE(statin, myocardial infarction, e) & IN(e, normal human))

Where e represents an event. In my previous article, I discussed that the word case may be more natural than event for this model, where “the concept of a case suggests that the time period and medical process will be appropriate to the specific disease treatment paradigm”.

A lot of the logical formalisation relates to the event or case. If one specific case is considered then we have the simpler claim:

(3)         DECREASE(statin, myocardial infarction)

This claim suggests that there is a decreasing relationship between:

  • The administration of statins, and
  • Myocardial infarction.

At this point probability theory can be introduced. Sadegh-Zadeh [1, p253] shows using probability theory that an event B is probabilistically independent of another event A if and only if (iff):

(4)         p(B|A) = p(B)

Where p(B|A) is the probability of the event B conditional on the fact that the event A has occurred. If events A and B are independent, then the fact that event A has occurred has no impact on the probability of event B occurring, and thus the conditional probability simply equals the probability of B occurring independently of A.

Dependence of two events is then simply represented by the opposite probability relationship:

(5)         p(B|A) ≠ p(B)

That is, two events are dependent if the conditional probability of B given A is not equal to the probability of event B occurring independently – the fact that A occurred affects the probability that B will occur.
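Definitions (4) and (5) can be checked with elementary arithmetic. A minimal sketch; the probability values are invented for illustration:

```python
def conditional(p_a_and_b, p_a):
    """p(B|A) = p(A and B) / p(A)."""
    return p_a_and_b / p_a

# Invented example numbers
p_a = 0.5
p_b = 0.2

# Independent case: p(A and B) = p(A) * p(B), so p(B|A) = p(B), as in (4)
p_ab_indep = p_a * p_b
assert conditional(p_ab_indep, p_a) == p_b

# Dependent case: A makes B less likely, so p(B|A) != p(B), as in (5)
p_ab_dep = 0.05  # gives p(B|A) = 0.1 < p(B) = 0.2
assert conditional(p_ab_dep, p_a) < p_b
```

The dependent case here is a negative correlation of the kind written out in (8) below.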

Coming back to the simple example, the claim “The administration of statins decreases myocardial infarction” implies that there is a conditional dependence between the two events and that:

(6)         p(Myocardial infarction | The administration of statins) ≠ p(Myocardial infarction)

That is, the probability of myocardial infarction given that statins have been administered to the patient is not equal to the probability of myocardial infarction in general. There is a probabilistic dependence between the event of statin administration and myocardial infarction.

Note that this probabilistic “dependence” does not imply that there is any causal interaction between the two events A and B, or between myocardial infarction and statins, only that there is a probabilistic correlation between the events. The correlation may run in either of two directions in particular cases, giving:

(7)         Positive correlation:       p(B|A) > p(B)

(8)         Negative correlation:     p(B|A) < p(B)

In our example, “decrease” is intended to mean that there is a negative correlation between the events and using the probabilistic terminology we have:

(9)         p(Myocardial infarction | The administration of statins) < p(Myocardial infarction)

This statement says that the administration of statins decreases the probability of myocardial infarction when compared to the general unconditional population, or that there is a negative correlation between myocardial infarction and the administration of statins. As discussed in our article relating to Popper’s philosophy of science (find it here), declarative statements like (1,2) found in Research Analysis are scientific statements in a form that can be logically falsified. But in the real world there is never certainty, and thus experience only ever supports probabilistic correlations like that in (9) (actually, even probabilities are technically not available according to Popper, but probability has use beyond its risks in many fields of science).

Does the statement (9) imply that there is a causal link between statins and myocardial infarction? To answer this question it is necessary to introduce some further concepts.

Probabilistic relevance or irrelevance

The Research Analysis model has always highlighted the importance of the reference population or model for each claim by requiring the specification of the reference species, disease model and whether it is a whole animal or organ model. Most medical research begins in cell culture or animal models, but has the goal of moving into human applications. It is important to clearly separate claims that relate to mice from those that relate to humans. For this reason, the concept of probabilistic relevance conditional on a reference population or background context is introduced below (refer [1, p255-257] for a more detailed introduction):

(10)       p(B|X∩A) > p(B|X)          Positive probabilistic relevance or conditional correlation

(11)       p(B|X∩A) < p(B|X)          Negative probabilistic relevance or conditional correlation

(12)       p(B|X∩A) = p(B|X)          Probabilistic irrelevance or no conditional correlation

Positive probabilistic relevance (10) says that the probability of event B conditional on both events X and A occurring (X ∩ A) is greater than the probability of event B conditional on X alone. In this presentation of probabilistic relevance, X represents the reference population and A and B are the events for which cause and effect are being evaluated. The following example using (1) can be provided:

(13)       p(Myocardial infarction | normal humans ∩ the administration of statins) < p(Myocardial infarction | normal humans)

This sentence says that the probability of myocardial infarction is lower in normal humans that have been administered statins than it is in normal humans in general. As noted, Research Analysis has always included the reference population or background context because research is conducted in many different species, genetic types and disease models. Sadegh-Zadeh in his work notes that the notion of background context is of great importance when analyzing issues of causality and that “There are no such things as ‘causes’ isolated from the context where they are effective or not. The background context will therefore constitute an essential element of our concept of causality.” [1, p 256]
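Definitions (10)–(12) can be checked mechanically against cohort data. The records and counts below are invented purely to illustrate the shape of the calculation:

```python
# Each record: (in_reference_population_X, received_treatment_A, outcome_B).
# The counts are invented for illustration: 2% MI among treated subjects,
# 5% MI among untreated subjects, all within the reference population X.
cohort = (
    [(True, True, False)] * 98 + [(True, True, True)] * 2
    + [(True, False, False)] * 95 + [(True, False, True)] * 5
)

def p_b_given(records, condition):
    """Empirical conditional probability of B among records meeting condition."""
    selected = [b for (x, a, b) in records if condition(x, a)]
    return sum(selected) / len(selected)

p_b_x = p_b_given(cohort, lambda x, a: x)         # p(B|X)
p_b_xa = p_b_given(cohort, lambda x, a: x and a)  # p(B|X∩A)

# Negative probabilistic relevance, as in (11) and (13)
assert p_b_xa < p_b_x
```

Changing the reference population X here would change both conditional probabilities, which is exactly why the model requires the background context to be specified.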

Spurious correlations & Screening Off

To this point in the discussion, it has not been possible to introduce the concept of causation; instead the weaker concepts of relevance and correlation have been used. Where there is a non-zero relevance or correlation between two events A and B, as in (10,11), then A could be a potential cause of B, but “Correlation does not imply causation”. To define the concept of causation it is necessary first to define spurious correlation and the concept of screening off.

(14)       Screening off: X screens A off from B iff p(B|X∩A) = p(B|X)

This says that X screens A off from B if and only if A is, relative to the reference population X (or some other event or set of events), probabilistically irrelevant to B [1, p257]. We can take this concept a step further and define a spurious cause by incorporating it into the sentences (10-12), to assess whether there is an alternative event C that explains the probabilistic relevance of A to B.

(15)       Spurious cause: A is a spurious cause of B if there is a C such that p(B|X ∩ A ∩ C) = p(B|X ∩ C).

We can rephrase this and introduce the concept of time order as follows: in a reference population X, an event A is a spurious cause of an event B iff:

  1. A is a potential cause of B in X,
  2. There is an event C that precedes A, and
  3. C screens A off from B.

An example of a spurious cause can be provided as follows:

p(Death | Humans ∩ AIDS ∩ HIV) = p(Death | Humans ∩ HIV)

In this example, AIDS is the spurious cause: it is screened off by the earlier HIV infection. The time order of events is discussed further in the next section.
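Definition (14) amounts to an equality test between two conditional probabilities. A minimal sketch; the probability values are invented and only encode the equality stated in the HIV/AIDS example above:

```python
def screens_off(p_b_given_xac, p_b_given_xc, tol=1e-9):
    """C screens A off from B (relative to X) iff p(B|X∩A∩C) = p(B|X∩C),
    as in definition (14); used in the spurious-cause test (15)."""
    return abs(p_b_given_xac - p_b_given_xc) < tol

# Invented values encoding p(Death | Humans ∩ AIDS ∩ HIV) = p(Death | Humans ∩ HIV)
p_death_given_aids_and_hiv = 0.80
p_death_given_hiv = 0.80

# HIV screens AIDS off from Death, making AIDS a spurious cause in this example
assert screens_off(p_death_given_aids_and_hiv, p_death_given_hiv)
```

With empirical estimates rather than exact values, the tolerance would need to reflect sampling error rather than floating-point error.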

Dominant cause

In the previous example, AIDS could certainly be considered a cause of death in untreated individuals, yet it is screened off by HIV. Where both AIDS and HIV can be considered causes of death, the concept of the dominant cause can be introduced to provide a ranking between them.

(16) Dominant Cause:  At1 is a dominant cause of Bt2 in X iff there is no event Ct in X such that:

  1. t1 ≤ t < t2,
  2. p(Bt2|X ∩ At1 ∩ Ct) = p(Bt2|X ∩ Ct).

This definition says that a cause is dominant if no simultaneous or later event is able to screen it off from the effect [1, p275]. This can be demonstrated with the AIDS example:

  • A person contracts HIV in 2001
  • They present with AIDS in 2003
  • They Die in 2010

In this example there would exist the following probability relationship:

p(Death2010 | Humans ∩ HIV2001 ∩ AIDS2003) = p(Death2010 | Humans ∩ AIDS2003)

This relationship says that the combination of HIV with a Human and AIDS provides the same probability of death as the combination of AIDS with a Human. It can also be seen that 2001 ≤ 2003 < 2010: the HIV infection occurred prior to the AIDS diagnosis, which in turn preceded death. The following shows the causes reordered:

p(Death2010 | Humans ∩ AIDS2003 ∩ HIV2001) = p(Death2010 | Humans ∩ HIV2001)

This relationship says, in a similar way, that the combination of AIDS with a Human and HIV provides the same probability of death as the combination of HIV with a Human. Here the earlier event HIV2001 screens AIDS2003 off from Death2010, so by (15) AIDS2003 is a spurious cause of Death2010 and cannot be the dominant cause of death.
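The time-ordering condition in definition (16) can be sketched abstractly. Everything below, including the events and their probabilities, is invented; real use would require epidemiological estimates:

```python
def screens_off(p_b_given_ac, p_b_given_c, tol=1e-9):
    """C screens A off from B iff p(B|X∩A∩C) = p(B|X∩C), as in (14)."""
    return abs(p_b_given_ac - p_b_given_c) < tol

def is_dominant(t1, t2, candidate_screeners):
    """A at time t1 is a dominant cause of B at time t2 iff no event C at a
    time t with t1 <= t < t2 screens A off from B, as in definition (16).
    candidate_screeners: list of (t, p(B|X∩A∩C), p(B|X∩C)) tuples."""
    return not any(
        t1 <= t < t2 and screens_off(p_ac, p_c)
        for (t, p_ac, p_c) in candidate_screeners
    )

# Invented abstract example: an event C at t=5 screens A (t1=0) off from
# B (t2=10), so A is not the dominant cause of B.
assert not is_dominant(0, 10, [(5, 0.7, 0.7)])

# With no screening event inside the window [t1, t2), A remains dominant so far.
assert is_dominant(0, 10, [(5, 0.7, 0.4)])
```

As the next paragraph notes, "dominant so far" is the most such a check can establish: an untested earlier or intervening event could still screen the cause off.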

The concept of dominant cause provides a means for ruling out spurious causes and for keeping track of the cause that has not been ruled out to date. But in reality we will never be able to test all possible events as causes, so knowledge of the dominant cause of a disease will always be subject to future falsification.

This is further complicated by the fact that causal chains run off into the infinite past. Taking the example, it may be that the HIV infection was caused by unprotected sex. At the time 2001 in the example above, HIV may be the dominant cause, but if unprotected sex at an earlier time is considered then the HIV would be screened off by the unprotected sex. Looking at the causes of HIV infection, it can be seen that while unprotected sex may be a common cause of HIV it is not the only one: there is also sharing of needles, infusion with HIV-infected blood, etc. So there may be many causes of a disease given a broad background population like all humans, even though there may be only one cause for a specific person who contracts HIV. There can also be a common cause for the several symptoms of a disease.

Finally, it is rare that there is a single cause for an event. It is usually the case that several events contribute to any future event, and we will explore the concept of causal relevance below. Causation is a far more complex concept than most people realise. The concepts presented in this article, and more thoroughly by Sadegh-Zadeh [1], provide some valuable tools for assessing causation and making more thorough use of the concept.

Causal Relevance

The concept of causal relevance is a useful metric for answering the question: which event is causally more relevant to a particular disease? Causal relevance can be defined as:

(17) cr(A,B,X) = p(B|X∩A) – p(B|X)

This states that causal relevance is simply the difference between the probability of B given the background context X and the causal event A, and the probability of B given the background context X alone. An example would be:

In a given year:

p(Myocardial infarction | normal humans) = 1%

p(Myocardial infarction | normal humans ∩ smoking) = 2%

cr           = p(Myocardial infarction | normal humans ∩ smoking) – p(Myocardial infarction | normal humans)

= 0.02 – 0.01 = 0.01

The numbers are just estimates, but they suggest that while smoking may double the chance of myocardial infarction (MI) it does not have a high causal relevance within a one year period.
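The arithmetic above can be expressed as a one-line function. A minimal sketch using the same rough estimates as in the text:

```python
def causal_relevance(p_b_given_xa, p_b_given_x):
    """cr(A,B,X) = p(B|X∩A) - p(B|X), as in definition (17)."""
    return p_b_given_xa - p_b_given_x

# Rough estimates from the text: one-year MI risk in normal humans
p_mi = 0.01          # p(Myocardial infarction | normal humans)
p_mi_smoking = 0.02  # p(Myocardial infarction | normal humans ∩ smoking)

cr = causal_relevance(p_mi_smoking, p_mi)
assert abs(cr - 0.01) < 1e-12  # positive, but small, causal relevance
```

Note that the same doubling of relative risk would yield a much larger causal relevance in a higher-risk background population, which is one way the background context X matters.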

Some examples that demonstrate causal relevance [1, p277]:

  • Causal irrelevance amounts to cr(A,B,X) = 0 (no relevance) eg. the causal relevance of your healthcare number to myocardial infarction
  • Positive causal relevance is cr(A,B,X) > 0 (causing) eg. smoking to myocardial infarction
  • Negative causal relevance is cr(A,B,X) < 0 (discausing, preventing) eg. statins to myocardial infarction
  • Maximum positive causal relevance is cr(A,B,X) = 1 (maximum efficiency) eg. mechanically clamping your coronary artery to myocardial infarction
  • Maximum negative causal relevance is cr(A,B,X) = −1 (maximum prevention) eg. no example

Causal relevance as defined here is not a probability, if only because it ranges from −1 to +1. It is simply a quantitative function that provides a measure of the context-relative causal impact of events.

Causal relevance can also be used to compare the relative importance of different events to an outcome by comparing their causal relevance.

If cr(A1, B, X) > cr(A2, B, X), then A1 is causally more relevant to B in X than A2.

For example,

cr(smoking, myocardial infarction, normal humans) > cr(healthcare number, myocardial infarction, normal humans)

This says that smoking is a stronger cause of myocardial infarction than is your healthcare number.
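The comparison can be computed directly. The probabilities below are invented illustrative values consistent with the earlier smoking example:

```python
def causal_relevance(p_b_given_xa, p_b_given_x):
    """cr(A,B,X) = p(B|X∩A) - p(B|X), as in definition (17)."""
    return p_b_given_xa - p_b_given_x

# Invented one-year probabilities of myocardial infarction in normal humans
cr_smoking = causal_relevance(0.02, 0.01)            # smoking raises the risk
cr_healthcare_number = causal_relevance(0.01, 0.01)  # the number changes nothing

# Smoking is causally more relevant to myocardial infarction in this context
assert cr_smoking > cr_healthcare_number
```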

In later articles or versions of this article, we will explore how the concepts of causation and relative causation might be applied to the Research Analysis model and platform.


  1. Sadegh-Zadeh, Kazem. Handbook of analytic philosophy of medicine. Dordrecht: Springer, 2014.

Version 1.0, 19th March, 2017

Popperian Falsifiability is Only Theoretical: Evidence can never definitively reject a hypothesis

Popperian falsifiability [1] is a cornerstone of modern science and its philosophy. I will not question the importance of Popper’s work; it was a major source of inspiration for our own. Nor will I argue that there are weaknesses in the concept of falsifiability of scientific hypotheses: Popper made it clear in his work that the concept of falsifiability is theoretical and cannot be achieved in practice. His achievement was to demarcate empirical science, as the investigation of logically falsifiable statements, from metaphysics. Popper’s key thesis was proposed as follows:

“But I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience. These considerations suggest that not the verifiability but the falsifiability of a system is to be taken as a criterion of demarcation.* In other words: I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience.”

As an example of a logically falsifiable statement, take the following sentence from a review article on the Pathobiological Determinants of Atherosclerosis in Youth (PDAY) Study, which investigated the prevalence of atherosclerosis in 2876 subjects aged 15 to 34 years old [2].

(1) “All subjects in this study had lesions in the abdominal aorta, and all except 2 white men had lesions in the thoracic aorta.”

From this sentence we can construct an example of a scientific statement that is logically falsifiable:

(2) All humans between the ages of 15 and 34-years-old have atherosclerotic lesions in the abdominal aorta.

Or in a more logical form:

∀x α

where:

x = humans between the ages of 15 and 34-years-old,

α = x has atherosclerotic lesions in the abdominal aorta

Logically this statement (2) could be falsified by the example of one human aged 15 to 34-years-old who did not have any lesions in the abdominal aorta. On the other hand, the universal claim made by “all” makes this statement unverifiable as there is a potentially infinite number of humans.

If we go back to the sentence (1), we can see that the authors note that “all except 2” subjects had lesions in the thoracic aorta. This was not incorporated into the statement (2). But given that 2874 of 2876 subjects (99.9%) did have lesions in the thoracic aorta, is it reasonable to reject the claim (3) below?

(3) All humans between the ages of 15 and 34-years-old have atherosclerotic lesions in the thoracic aorta.

Certainly these two subjects appear to have falsified (3). If we accept the falsification of the claim in (3), then it appears that the most that can be claimed is (4)

(4) Almost all humans between the ages of 15 and 34-years-old have atherosclerotic lesions in the thoracic aorta.

Now (4) is no longer logically falsifiable and thus no longer scientific or empirical, as we can never be sure whether a subject without lesions in the thoracic aorta falsifies the claim or is one of those excluded by the hedging word “almost”. According to Popper, science involves the empirical investigation of statements like (2) and (3), but not those like (4).
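The asymmetry between the falsifiable claims (2)/(3) and the hedged claim (4) can be illustrated computationally. This is an illustrative sketch with invented subject records mirroring the structure of the study, not the PDAY data itself:

```python
# A universal claim ("all subjects have lesions") maps onto a check over all
# observations: a single counterexample falsifies it, but no finite sample
# can verify it for an open-ended population.
def falsified(claim_holds_for, observed_subjects):
    return any(not claim_holds_for(s) for s in observed_subjects)

# Invented records mirroring the counts in the text: 2874 subjects with
# thoracic lesions and 2 without.
subjects = [{"thoracic_lesions": True}] * 2874 + [{"thoracic_lesions": False}] * 2

# Claim (3) is logically falsified by the two counterexamples.
assert falsified(lambda s: s["thoracic_lesions"], subjects)

# Claim (4), "almost all ...", admits no such decision procedure: a subject
# without lesions might simply be one of those excluded by "almost", so no
# observation logically contradicts it.
```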

Popper was clear on the real world limitations and possible objections to his approach and discusses them explicitly:

“A third objection may seem more serious. It might be said that even if the asymmetry is admitted, it is still impossible, for various reasons, that any theoretical system should ever be conclusively falsified.”

… “I must admit the justice of this criticism; but I need not therefore withdraw my proposal to adopt falsifiability as a criterion of demarcation.” [1, p19-20]


“If falsifiability is to be at all applicable as a criterion of demarcation, then singular statements must be available which can serve as premisses in falsifying inferences. Our criterion therefore appears only to shift the problem — to lead us back from the question of the empirical character of theories to the question of the empirical character of singular statements.” [1, p21]

Popper is acknowledging that any evidential claim put forward to falsify a hypothesis is in itself a scientific hypothesis that, by Popper’s philosophy, cannot be verified, only theoretically falsified at best, leading us into an infinite regress. Popper’s philosophy offers a demarcation of empirical scientific statements from other metaphysical statements, but does not offer definitive rejection of hypotheses by falsification. Falsification is only theoretical. Experiments never definitively falsify a hypothesis.

A simple example

An example hypothesis that meets the requirement of Popperian falsifiability is as follows:

(1)         John Smith has familial hypercholesterolemia

Familial hypercholesterolemia is a disease that causes LDL cholesterol (“bad cholesterol”) to be very high. As the name indicates, the disease is genetic and is passed down through families. There are many genetic defects that can cause the disease, but for this example let us assume that there is only one genetic defect that causes it and that (1) denotes only the disease caused by that defect. Without this assumption the statement becomes unfalsifiable, as we could always argue that while John Smith has none of the known genetic defects, he may have an as yet unknown defect that causes the disease.

If we run a genetic test on John Smith, we could falsify (1) with the following finding:

(2) John Smith does not have the familial hypercholesterolemia genetic defect

However, this falsification can never be definitive as (2) is in itself an empirical statement. All diagnostic tests are exposed to uncertainty, and this is represented by their false positive or false negative rates. In this case, there is always a chance that John Smith’s diagnostic result was a false negative. That is, he may really have the genetic defect, but the test failed to identify it. No diagnostic test can definitively falsify a hypothesis. There is always some chance that the test itself is false.
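The residual uncertainty in (2) can be quantified with Bayes' theorem. A minimal sketch; the prior prevalence, false negative rate and assumption of perfect specificity below are invented for illustration:

```python
def p_defect_given_negative(prior, false_negative_rate, specificity=1.0):
    """Posterior probability of carrying the defect despite a negative test,
    by Bayes' theorem."""
    p_neg_given_defect = false_negative_rate
    p_neg_given_no_defect = specificity  # assume no false positives here
    p_neg = prior * p_neg_given_defect + (1 - prior) * p_neg_given_no_defect
    return prior * p_neg_given_defect / p_neg

# Invented numbers: prior of 1 in 250, 1% false negative rate
posterior = p_defect_given_negative(prior=1 / 250, false_negative_rate=0.01)

assert posterior > 0        # the negative test never definitively falsifies (1)
assert posterior < 1 / 250  # though it greatly lowers the probability
```

The posterior is small but strictly positive, which is the quantitative form of "falsification is only theoretical".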

Popper notes that empirical hypotheses “can never become ‘probable’: they can only be corroborated, in the sense that they can ‘prove their mettle’ under fire—the fire of our tests.” [1, p259]. This same concept applies symmetrically to apparently falsifying empirical evidence. Popper’s philosophy can’t even give us a probability of falsification. Popper ultimately only gives us a theoretical means for demarcating scientific from meta-physical statements.

Note that I am not a scholar of Popper and that this discussion is not complete. I believe that Popper addressed these points well in his work and you should refer to it for more detail.


  1. Popper, Karl.The logic of scientific discovery. Routledge, 2005.
  2. Strong, Jack P., et al. “Prevalence and extent of atherosclerosis in adolescents and young adults: implications for prevention from the Pathobiological Determinants of Atherosclerosis in Youth Study.” Jama 281.8 (1999): 727-735.

Version 2.0, 19th October, 2016

Representation of individual medical event claims

In the previous article, the logical representation of medical science claims was investigated. Scientific claims have the goal of compressing experimental evidence into rules or models that reliably represent the evidence and make good predictions about future events. In this article, a step is taken back and the representation of individual medical events is investigated: for example, the individual events that together make up the evidence used to arrive at a medical science claim.

Individual medical events are simpler to model and appear to be well modeled by the standard Davidsonian analysis discussed in the previous article (refer to that article for discussion of why this analysis is used). Below is an example discussed in the previous article:

(1) a.     Statins reduce myocardial infarction in normal humans

  1. ∃e (REDUCE(statin, myocardial infarction, e) & IN(e, normal human))
  2. “There is at least one case, such that statin administration reduced myocardial infarction, where the case was in a normal human”

This Davidsonian analysis of the statement (16a) was taken as a step in the investigation (provided above as (1)), but it was decided that the existential quantifier was not the correct analysis of the statement and that scientific claims like that in (1a) should be interpreted as universal quantification. If an individual participant in the clinical trial is considered, the following statement could be made:

(1) a.     Statins reduced myocardial infarction in John Smith

  1. ∃e (REDUCE(statin, myocardial infarction, e) & IN(e, John Smith))
  2. “There is at least one case, such that statin administration reduced myocardial infarction, where the case was in John Smith”

Here an important difference can be seen between the individual cases and the claims that can be made via statistical analysis of a large number of individual cases as a group. It may be that the statins did reduce the likelihood of a myocardial infarction in John Smith. However, given that the actual event of a myocardial infarction is an irregular occurrence and is dependent on a large number of uncertain factors (eg. lifestyle, high physical/stress events, etc), there is no way of being confident from the evidence of one individual that the statins had any effect. This is why clinical trials are required: so that outside factors can be controlled to some degree and the number of participants can be large enough that statistically we can have reasonable confidence that the medical scientific claim is valid.
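The group-level comparison a trial provides can be sketched as follows. The counts are invented and no significance test is applied; the point is only the shape of the comparison that individual cases cannot support:

```python
def event_rate(events, n):
    """Observed proportion of participants experiencing the event."""
    return events / n

# Invented two-arm trial counts: myocardial infarction events over the trial
statin_rate = event_rate(events=150, n=10_000)   # treatment arm
placebo_rate = event_rate(events=250, n=10_000)  # control arm

risk_difference = statin_rate - placebo_rate  # negative: fewer events on statins

assert statin_rate < placebo_rate  # group-level support for the claim
```

A real analysis would add a significance test and confidence interval; even then, as the text notes, chance can never be entirely ruled out.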

In the example above, the frequency of myocardial infarction is low in an individual and the result often catastrophic. For this reason, studies of drugs for diseases like heart disease generally need to be long term and inclusive of end points (eg. death). If a disease that involved continuous or frequent disease events is considered, then a sentence like that in (1a) may make sense in that the frequency of disease events prior to introduction of the drug can be compared to the frequency afterwards. Take for example antibiotic treatment for bacterial infection:

(2) a.    Antibiotics reduce bacterial infection in John Smith

  1. ∀e (REDUCE(antibiotics, bacterial infection, e) & IN(e, John Smith))
  2. “In all cases, antibiotics reduce bacterial infection, where the case is in John Smith.”

In this example, bacterial infection may occur in John Smith a large number of times during his life. To assess the validity of the claim in (2) there would need to be a number of bacterial infections where no antibiotics were administered and then a number of bacterial infections where antibiotics were administered, to allow for a comparison of the control events (no antibiotics) to the treatment events. Where the difference was significant, the treatment might be considered a success and the claim (2) might be considered true (uncertainty is discussed further below). However, here again we see the importance of a clinical trial to assess the efficacy of a drug. The claim in (2) is specific to “in John Smith” and cannot be extended to “normal humans” generally. While John Smith is a human, he also has a unique genome, diet, lifestyle and life history generally. The effectiveness of a drug in one human is certainly support for its effectiveness in other humans, but it is always possible that the drug only works with John Smith’s specific genetic profile, for example. There is also the possibility that the bacterial infection went away by chance at the same time John Smith began receiving the antibiotic, or that it resolved by the placebo effect. This last point highlights the fact that the reduction does not actually imply causation.

In the case of heart disease, the primary cause of myocardial infarction, there may be an indicator of disease other than myocardial infarction itself. Atherosclerosis describes the formation of lesions in the arteries. Atherosclerosis in the coronary arteries (the arteries that supply blood to the heart) is a primary indicator of heart disease and increases the chance of myocardial infarction. The atherosclerotic lesions in the arteries (lesions from here on) exist prior to diagnosis of heart disease and after the commencement of treatment, so in theory they could be measured at many time points before and after treatment with statins. If the lesions regressed after treatment, then this could be an indicator within one individual that statins reduce heart disease and, by extension, myocardial infarction. Unfortunately, lesion progression prior to diagnosis is rarely measured outside of specialised scientific studies, so there is no significant series of events prior to treatment. Secondly, lesions rarely regress significantly and the goal is usually to slow their progression. Assessing reduced progression would require the collection of far more time-series measurements, including measures of magnitude and not just the presence or absence of lesions. Unlike, say, a skin infection, where the infection is visible externally with accompanying pain, the progression of atherosclerotic lesions is painless and, given their location, difficult to assess with current technology. Finally, the evidence that lesion stage or size is positively correlated with myocardial infarction events is not definitive. For heart disease and many other diseases, the assessment of the performance of a drug is not possible within an individual, whether due to the lack of time-series data, the lack of a reliable indicator of disease or the requirement to base the assessment on the reduction of death in a cohort.

We have discussed the need for group-based research, such as clinical trials, for making statements about the correlation between drugs and diseases. Well-designed clinical trials should be able to substantially exclude the placebo effect and take into account the performance of drugs based on end points like death, but it should be noted that chance can still never be ruled out as a factor. In studies involving chemicals or cells the sample size can be made very large, but with humans it is difficult to achieve very large sample sizes and to control all important variables. However, the statin meta-analyses included over 90,000 participants through the consolidation of many large studies.

Version 1.0, 30th September, 2016

Representation of the meaning of scientific claims

Researchers in medical science work hard to express their views and findings in language which is unambiguous and consistent, but natural language is not well suited to the task, for example:

(1)    a. “Statin therapy can safely reduce the 5-year incidence of major coronary events, coronary revascularisation, and stroke by about one fifth per mmol/L reduction in LDL cholesterol, largely irrespective of the initial lipid profile or other presenting characteristics.” [2]

b. “Clinical trials in patients with and without coronary heart disease and with and without high cholesterol have demonstrated consistently that statins reduce the relative risk of major coronary events by ≈30% and produce a greater absolute benefit in patients with higher baseline risk.” [3]

c. “Statins can lower LDL cholesterol concentration by an average of 1.8 mmol/l which reduces the risk of IHD events by about 60% and stroke by 17%.” [4]

Are all of the scientific claims in (1) above expressing the same meaning? No, there are significant differences that affect the specific meaning of each sentence. If we assume that “IHD events” has the same meaning or denotation as “coronary events”, then can we say that all of the claims in (1) mean, at a high level, that “Statins reduce coronary events”? (see this claim in Research Analysis). These are the sorts of questions that it would be useful to be able to answer reliably using semantic methods and tools.

Focusing on the high level claim, consider the following:

(2)    a. Statins reduce coronary events

b. Coronary events are reduced by statins

Sentences (2a, b) have different written forms but the same truth condition. A way is needed to represent meaning that is unambiguous and consistent, and for this logic can be used.

Logic is a system for reasoning. Below some of the high level terms and elements of logic are briefly outlined:

  • Propositional logic primarily involves the analysis and combination of propositions.
  • Propositions: The term has broad use, but the following is appropriate for our purposes:
    • refers to the meaning of declarative sentences.
    • affirms or denies a predicate (is true or false).
  • Predicate: A statement that is true or false depending on its arguments.
  • Predicate logic: involves the analysis of the inner structure of simple propositions.

For example:

(2)    Statins are drugs

  • Predicate: are drugs
  • Argument: statins
  • This is a one-place predicate
  • It is true if statins are drugs, otherwise false.

Predicates can be represented using a notation based on the main word, without tense, without the copula be (for example “are” in this case) and without some prepositions. The entities (people or things) that the predicate relates to are its arguments. The standard notation uses upper case for the predicate and lower case for names. For the example above, the notation would be as follows:

(3)    DRUG(statins) = TRUE

In this case the name statins has been used as the argument for the proposition, but any name can be inserted into the proposition. The proposition will be true or false depending on the argument.
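This notation can be mirrored directly in code: a one-place predicate becomes a boolean function of its argument, true of some names and false of others. A minimal sketch, where the set of known drugs is invented for illustration:

```python
# A one-place predicate is a function from an argument to a truth value.
# The extension of DRUG (the set of things it is true of) is invented here.
KNOWN_DRUGS = {"statins", "antibiotics", "aspirin"}

def DRUG(x: str) -> bool:
    return x in KNOWN_DRUGS

print(DRUG("statins"))  # the proposition DRUG(statins); prints True
print(DRUG("water"))    # a false proposition; prints False
```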

Predicates can take more than one argument, for example:

(4)    Statins reduce coronary events

  • Predicate: reduce
  • Subject: statins
  • Object: coronary events
  • True if Statins do reduce coronary events, otherwise false.
  • Notation: REDUCE(statins, coronary events)

This is an example of a two-place predicate. In natural language, predicates can occur with three or possibly four arguments. Adicity is the number of arguments that a predicate takes, and each predicate has a fixed adicity. Some predicates from natural language can have different or extended meanings with different numbers of arguments, but each of these is considered in logic to be a unique predicate with a fixed number of arguments.
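A two-place predicate with fixed adicity can be sketched as a set of argument pairs (its extension), with a proposition being true exactly when the argument tuple belongs to that set. The pairs and the helper name below are illustrative only:

```python
# A two-place predicate modelled as the set of argument pairs it is true of.
# The pairs listed here are illustrative only.
REDUCE = {
    ("statins", "coronary events"),
    ("antibiotics", "bacterial infection"),
}

def holds(predicate: set, *args) -> bool:
    """A proposition is true iff the argument tuple is in the extension."""
    return tuple(args) in predicate

print(holds(REDUCE, "statins", "coronary events"))  # prints True
```

Fixed adicity is reflected in the fact that every tuple in the extension has the same length; a three-place REDUCE would be a different set of triples, i.e. a different predicate.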

In natural language, predicates are sometimes used with an argument missing. These are known as elliptical sentences. Elliptical sentences are regularly used in common language, e.g. “Give statins.” Give them to whom? From the context of the conversation it is assumed that the statins are to be given to the patient. From a syntactic point of view this sentence is incomplete, as it is missing the object noun phrase. The way to understand these sentences is to assume that there is an ellipsis (in this case the patient) that is filled in from the context of the situation to supply the required argument of the predicate.

In the case of “Give statins to”, it is obvious to a native English speaker that this sentence is incorrect. In the case of our high level claim (4), the sentence sounds fine because it is syntactically complete. But it could be asked: reduced compared to what? To assess the truth of the predicate, a comparison set is needed against which to assess whether there was a reduction. In science, and in phrases like (4), the comparison set is usually taken to be the pre-treatment population sample that is represented by the control group in the experiment. The control group is a group of entities (e.g. people, mice, cells) that are the same as the experimental group, but where the experimental condition has not been applied (e.g. the drug has not been administered). The term control group will be used to refer to the comparison set.

A review of the full sentences in (1), without detailed linguistic analysis, provides the following control group for each of the three sentences:

  1. “largely irrespective of the initial lipid profile or other presenting characteristics” suggests that the reduction would apply to any human receiving the treatment.
  2. “patients with and without coronary heart disease and with and without high cholesterol” again suggests that the reduction would apply to any human receiving the treatment.
  3. The sentence doesn’t provide a control group and the publication must be investigated further to find the context of the sentence. In the methods it is found that “We included all double blind trials, irrespective of participants’ age or disease. Participants in most trials were healthy with above average lipid concentrations.” While they note the bias towards above average lipid concentrations, they also note elsewhere that the results are adjusted to take into account variations in the participant groups. So it is reasonable to assume that at a high level they are also making the claim relative to the group of all humans, though in this case it would also be reasonable to take healthy humans with above average lipid concentrations as the control group.

Taking the control group as “normal humans” (we could have a second claim for hyperlipidemic humans), (4) is updated as follows:

(5)    Statins reduce coronary events in normal humans

  • Predicate: reduce
  • Subject: statins
  • Object 1: coronary events
  • Object 2: normal humans (control group)
  • True if Statins do reduce coronary events in normal humans, otherwise false.

Notation: REDUCE(statins, coronary events, normal humans)

Is the reduce predicate a two-place or three-place predicate? Syntactically, reduce with two arguments is sufficient. Scientists strive for the maximum generality possible in their theories, and reduce with two arguments may suggest that the relationship applies to all sets of things. But our example is clearly ridiculous when applied to machines or plants, which have no concept of coronary events. In the case of a plant, the description “coronary artery” fails to denote because the organ does not exist, and thus the claim would be false on Russell’s theory of descriptions [6] or meaningless on a classical analysis. So while the two-argument reduce predicate is syntactically correct and may have a meaning, the broad meaning is not implied by the sentences in (1) and is false or meaningless outside the set of animals with coronary arteries. I believe that to represent the sentences in (1), and to be a scientifically useful claim, the reduce predicate must be combined with a control group. In semantics, the identification of arguments and non-arguments is not straightforward and remains an unresolved topic [1].

There are some alternative means of linking the control group to the reduce predicate without making it an argument:

  1. Consider the control group as a noun modifier for the object of the reduce predicate.
  2. Consider the control group as an adverbial phrase attached to the reduce predicate that provides a location for the reduction.


Control group as noun modifier

If the control group is treated as a noun modifier for the object, then “coronary events in normal humans” is the object that is reduced and this would be represented by (5.1) below

(5.1) REDUCE(statins, coronary events in normal humans)

This does not appear to be obviously wrong, since on a simple reading coronary events do occur in normal humans. However, I do not believe that this representation is logically correct or that it correctly represents the claim in (1). The concept in (1) is that statins are given to normal humans and that this results in a reduction in coronary events. The predicate in (5.1) suggests that the reduction is seen in normal humans, but humans that have received statins are no longer “normal humans”, because statins alter their metabolism. It is only in these humans with a metabolism altered by statins that the reduction in coronary events is claimed to occur. Moreover, the predicate in (5.1) does not make clear that the statins, or the metabolic effects of statins, occur in the humans, because “in normal humans” only modifies the coronary events argument. The interpretation of the control group as a noun modifier of the object argument does not appear to be correct.

Control group as an adverbial phrase

I believe that the correct interpretation of the adjunct “in normal humans” is that the process or event of reduction occurs within the normal human. This is a word group that qualifies the main reduce predicate and is not a required argument of it. It is an example of an adverbial phrase that is attached to the verb, reduce, and provides a location for the event(s) or process. Here, the statin is taken by the patient, the effect of the statin occurs within the patient by altering their metabolism, and this leads to a reduction of coronary events. The location is within the body of any human included in the set of “normal humans”. I believe that this representation allows the interpretation that the process triggered by the statins, which alters the metabolism of normal humans and results in reduced coronary events, starts from a normal human, even though the result of the process will be an altered human with reduced coronary events.

Davidsonian Analysis

The semantic concepts introduced by Davidson in his 1967 paper “The Logical Form of Action Sentences” [8] can be used for the analysis of the reduce predicate and its associated adverbial phrases. Before analysing the sentences in (1), the Davidsonian approach will be reviewed with a simpler example:

“In 1976 we published two papers reporting the discovery and characterization of compactin, the first statin.” [5]

The following simpler sentence, which takes some information from elsewhere in the paper, will be used:

(6)      Akira Endo discovered compactin in 1976 in Japan

Before Davidson’s work, the standard logical analysis would have included the adverbials as arguments of the predicate [1].

(7)     DISCOVERED(Akira Endo, compactin, 1976, Japan)

Because a predicate has a fixed number of arguments, we can see that this analysis results in a number of DISCOVERED predicates with different numbers of arguments:

(8)     a. Akira Endo discovered compactin

DISCOVERED’(Akira Endo, compactin)

b. Akira Endo discovered compactin in 1976

DISCOVERED’’(Akira Endo, compactin, 1976)

The argument structure in (7) can be represented generally as (9):

(9)    DISCOVERED’’’(discoverer, discoveree, time, place)

There are problems with this analysis:

  1. The arguments of the predicate should be necessary to give it meaning. The discoveree and discoverer appear to be essential to the meaning of the DISCOVERED predicate. But the adverbial expressions are more loosely connected to the predicate.
  2. Action verbs can be modified by a variable number and type of adverbials in different combinations. The different combinations would express different predicates, which results in a very complex group of related predicates.

It does not seem right that this multitude of predicates corresponds to different verb meanings. Intuitively, there is one core predicate that appears in all of the sentences, with the adverbials specifying additional information.

Davidson pointed out that sentences like (7) and (8a, b) are related by entailment in a way which seems to require that the same predicate appears in all of them [1; this example paraphrases Kearns’ example in chapter 11, which should be referred to for a general introduction]. For example:

The entailments of (6) include:

(10)    Akira Endo discovered compactin in 1976

(11)    Akira Endo discovered compactin in Japan

And the entailment of both (10) and (11) is:

(12)    Akira Endo discovered compactin

Davidson observed that every entailment of this kind resembles entailments which ‘drop conjuncts’, in this case adverbial phrases. He proposed that the parts of the entailing sentence which can be dropped to produce the entailed sentence should be represented as logical conjuncts [1]. The central basic proposition can then be conjoined with the adverbials to provide a proposition that expresses the meaning of the whole sentence.

(13)    Akira Endo discovered compactin in 1976 in Japan

DISCOVERED(Akira Endo, compactin) & p & q

p expresses “in 1976”

q expresses “in Japan”

The next step is to link the propositions p and q to the central proposition. Davidson argued that the adverbials are referring to the action or event described by the central basic proposition eg. “The discovery event occurred in 1976”, “The discovery event occurred in Japan”. Davidson proposed that the event itself should be included as an additional argument and that the propositions should all refer to this event. Using our example:

(14)    a. Akira Endo discovered compactin in 1976 in Japan

b. ∃e (DISCOVERED(Akira Endo, compactin, e) & INYEAR(e, 1976) & INCOUNTRY(e, Japan))

c. “There was an event, which was the discovery of compactin by Akira Endo, and the discovery was in 1976 and the discovery was in Japan.”

The variable e is a restricted variable in logic and ranges over all events. The existential binding of the event variable in this proposition requires that there is at least one event for which the remainder of the proposition is true for the whole proposition to be true. The adverbials (eg. time, place, manner) are now expressed as logical conjuncts with each represented by its own predicate with the event as an argument. The result of the Davidsonian approach is just one core predicate for the action verb (in this case DISCOVERED), to which can be added as many or as few adverbials as needed by using separate predicates for each and conjoining them to the core action predicate.
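The Davidsonian structure of (14b), one core predicate plus adverbial conjuncts that all refer to the same event, can be sketched in code. The dictionary layout and function name below are my own, and dropping a conjunct corresponds to the “drop conjuncts” entailment discussed above:

```python
# Davidsonian sketch: one core predicate plus optional adverbial conjuncts,
# all referring to the same event variable e.
event = {
    "core": ("DISCOVERED", "Akira Endo", "compactin"),
    "adverbials": {"INYEAR": 1976, "INCOUNTRY": "Japan"},
}

def entails(full: dict, dropped: set) -> dict:
    """Dropping adverbial conjuncts yields an entailed proposition."""
    return {
        "core": full["core"],
        "adverbials": {k: v for k, v in full["adverbials"].items()
                       if k not in dropped},
    }

# (14) entails (10): drop the place conjunct, keep the time conjunct.
print(entails(event, {"INCOUNTRY"}))
```

Because the core predicate is the same object in every entailed proposition, only one DISCOVERED predicate is needed, however many adverbials are attached.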

Davidsonian Analysis of Medical Science Claims

Returning to the analysis of the medical science claim (5), Davidsonian analysis would produce the following:

(15)      a. Statins reduce coronary events in normal humans

b. ∃e (REDUCE(statin, coronary events, e) & IN(e, normal human))

The word “events” in “coronary events” is potentially confusing here: it should not be confused with the reduction event represented by e in the REDUCE predicate. “Coronary events” is the object of the predicate and refers to a category of adverse medical events related to the coronary arteries, for example a myocardial infarction. To avoid confusion, from here on “myocardial infarction”, the most common coronary event, will be used in place of coronary events.

(16)    a. Statins reduce myocardial infarction in normal humans

b. ∃e (REDUCE(statin, myocardial infarction, e) & IN(e, normal human))

c. “There is at least one event, such that statin administration reduced myocardial infarction, where the event was in a normal human”

Note that the plural has been removed from “statins” and from “normal humans”. This is because in the Davidsonian analysis the predicate REDUCE now applies to a single event e. In each case there is only one statin administered to one human (we’ll ignore here the case where multiple statins are administered).

The concept of an event in common language gives the impression that the statin administration reduced the myocardial infarction over a short period of time. The effects of a single statin administration on the metabolism of cholesterol in humans do occur over a short period of time, but this altered cholesterol metabolism needs to persist for a long period to result in a significant reduction in myocardial infarction. Heart disease patients are often administered statins for the remainder of their life. In this way the statin administration has its reduction effect on myocardial infarction via a long-term process rather than over the short period implied by the word event. Process may be a better word than event, and I would like to suggest “case” as an alternative that is better suited to the medical field. I believe that the concept of a case suggests that the time period and medical process will be appropriate to the specific disease treatment paradigm. I will continue to use the variable e for event, as it is standard in logic, but will use case and event interchangeably in the verbal descriptions.

(16)    d. “There is at least one case, such that statin administration reduced myocardial infarction, where the case was in a normal human”

While the research claims in (1) certainly do suggest that there is at least one event or case where statins reduced coronary events, the existential quantifier doesn’t give the full strength of the claims made. In the introduction to Davidsonian analysis above, the past tense of discover, discovered, was used to refer to Endo’s discovery. This signifies the historical nature of the discovery and implies that it isn’t expected to occur again. Rediscoveries can occur due to lost knowledge, but in modern science there is an assumption that there is a first discoverer and a first discovery of a scientific claim. In contrast, the claim in (15a) uses the present tense, reduce, which implies that the reduction will occur in all or many events. A primary goal of science is to identify models or rules that allow for the reliable compression of large amounts of evidential information about events in the world into a much more compact form that allows us to make good predictions about future events. To this end, scientific claims would ideally apply to all events, or at least most events, that fit the conditions of the claim. The nature of empirical knowledge is that we can’t ever know anything with certainty, but we seek to verify claims that have as broad a scope as possible. Based on this goal of science, I would interpret the claim in (16a) with the following logical proposition:

(17)    a. ∀e (REDUCE(statin, myocardial infarction, e) & IN(e, normal human))

b. “In all cases, statin administration reduces myocardial infarction, where the case is in a normal human.”

I believe that (17a) is the correct logical representation of the claim in (5) & (16a) and is a good representation of the core claim in the sentence of (1).
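Read operationally, the universal quantifier in (17a) amounts to checking that the conjuncts hold for every case in a collection of observed cases. A minimal sketch, with invented case records and field names of my own choosing:

```python
# An operational reading of the ∀e claim in (17a): all() over case records.
# The case records and field names are invented for illustration.
cases = [
    {"treatment": "statin", "in": "normal human", "mi_reduced": True},
    {"treatment": "statin", "in": "normal human", "mi_reduced": True},
]

def claim_holds(cases) -> bool:
    """True iff REDUCE(statin, MI, e) & IN(e, normal human) holds for every case."""
    return all(c["treatment"] == "statin"
               and c["in"] == "normal human"
               and c["mi_reduced"]
               for c in cases)

print(claim_holds(cases))  # prints True
```

A single case in which the reduction fails makes the whole claim false, which mirrors the Popperian falsifiability discussed later in the article.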

It was shown that considering the control group, “in normal humans”, as a noun modifier of myocardial infarction (or coronary events) resulted in the scope of the control group applying only to the myocardial infarction. Davidsonian analysis applies the control group to the whole event. This has the effect of placing the reduction event, the statins and the myocardial infarction in a normal human. I believe that this is the correct logical interpretation.

One of the primary reasons for considering the Davidsonian approach was to remove the need for additional arguments in the REDUCE predicate, and here this approach adds an additional argument. However, the benefit of the added event argument is that it can be used to introduce other adjuncts to the REDUCE predicate without the need for any arguments beyond three. For example: “reduced by how much?”. Each of the sentences in (1) gives a description of the size of the reduction. For (1b), the following summary sentence could be made: Statins reduce myocardial infarction in normal humans by 30%, with the logical proposition:

(18)    ∀e (REDUCE(statin, myocardial infarction, e) & IN(e, normal human) & BY(e, 30%))

This BY adjunct does seem to be optional, unlike the control group IN predicate, as without it the truth value of the statements (16) and (17) could be assessed by assuming that any reduction greater than zero is sufficient. I believe that some adjuncts are essential for a valid scientific meaning, while others may improve the specificity of the claim but are not essential to the core scientific claim. Interestingly, the addition of the BY predicate introduces a clear conflict between the claims in (1). For (1b) we have the claim in (18), and for (1c) we would have:

(19)    ∀e (REDUCE(statin, myocardial infarction, e) & IN(e, normal human) & BY(e, 60%))

The differences may be due to differences in the meaning of coronary events, as (1c) refers specifically to IHD events, though generally they are assumed to be the same concept. This clear identification of contradiction or conflict between claims is one of the key reasons for representing claims in this logical structure and one of the key goals of Research Analysis. The contradictions raise questions about the structure of the claims and the descriptions of medical objects and events, and I believe that these clearly defined questions are what drives science forward. Arguments happen constantly in medical science, but often they occur without the parties having a clear understanding of each other’s claims. The use of standardised language and formal descriptions of claims will improve the quality of these arguments and of science.
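Once claims share this formal structure, conflicts like that between (18) and (19) can be detected mechanically. A sketch, with a data layout and field names of my own choosing rather than the actual Research Analysis schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Claim:
    predicate: str              # e.g. "REDUCE"
    subject: str                # e.g. "statin"
    obj: str                    # e.g. "myocardial infarction"
    control: str                # the IN(e, ...) control-group adjunct
    by: Optional[float] = None  # the optional BY(e, ...) adjunct, in percent

def conflicting(a: Claim, b: Claim) -> bool:
    """Same core claim but incompatible BY magnitudes."""
    same_core = ((a.predicate, a.subject, a.obj, a.control)
                 == (b.predicate, b.subject, b.obj, b.control))
    return same_core and a.by is not None and b.by is not None and a.by != b.by

c18 = Claim("REDUCE", "statin", "myocardial infarction", "normal human", 30.0)
c19 = Claim("REDUCE", "statin", "myocardial infarction", "normal human", 60.0)
print(conflicting(c18, c19))  # prints True: (18) and (19) conflict
```

In practice the comparison of magnitudes would need tolerances and unit handling, but the point stands: the formal structure makes the contradiction computable, where the original sentences in (1) leave it implicit.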

Standard claims used in Research Analysis

The sentences in (1) use the predicate reduce as the relationship between the subject and object. Reduce is clearly just one of many predicates that could be used in medical science. Increase, as the opposite of a reduction, would be another obvious example. In Research Analysis, at the date of writing, three predicates have been proposed as a summary of all predicates. They are:

  1. Increases: The subject has the effect of increasing the object
  2. Decreases: The subject has the effect of decreasing the object
  3. Not Significant: There is no significant relationship between the subject and object. From a statistical point of view, the relationship is not present at a level that is sufficient to exceed the threshold of significance as defined by the authors.

Decrease is a synonym for reduce, as well as other verbs like lower, shrink, etc. In each case, the words may have slightly different usage in natural language, but they share a common concept. In Research Analysis, we use decrease to represent all of these verbal predicates. For the opposite concept we use increase to represent all verbs like raise, expand, etc. Finally, we use the term Not Significant to represent relationships where the subject has no significant effect on the object based on the statistical measure used by the authors of the scientific claim.
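The mapping from natural-language verbs to the standard effect terms can be sketched as a simple lookup. The synonym list below is illustrative, not the actual Research Analysis vocabulary, and Not Significant is asserted from statistical results rather than mapped from a verb:

```python
# Illustrative mapping of natural-language verbs onto the standard
# Research Analysis effect terms (not the platform's actual vocabulary).
STANDARD_EFFECT = {
    "reduce": "Decreases", "decrease": "Decreases",
    "lower": "Decreases", "shrink": "Decreases",
    "increase": "Increases", "raise": "Increases", "expand": "Increases",
}

def normalise(verb: str) -> str:
    """Map a verb to its standard effect term; reject unknown verbs."""
    try:
        return STANDARD_EFFECT[verb.lower()]
    except KeyError:
        raise ValueError(f"no standard term for verb: {verb!r}")

print(normalise("lower"))  # prints Decreases
```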

Using this model, we can represent (17) as:

(20)    a. ∀e (DECREASE(statin, myocardial infarction, e) & IN(e, normal human))

b. “In all cases, statin administration decreases myocardial infarction, where the case is in a normal human.”

This claim can be found in Research Analysis here where it is linked with the sentences and references in (1).

Causation and Confidence/Determinism

The use of “reduce” in the claims in (1) appears to suggest a relationship of causation between the statins and the reduction in coronary events. Most people would also associate verbs like increases and decreases as suggesting that there is a causal link between the subject and object of the sentence. Bertrand Russell in his paper, “On the notion of cause” [7], states “the word “cause” is so inextricably bound up with misleading associations as to make its complete extrusion from the philosophical vocabulary desirable”. This is a very strong view, but it is clear that causation is a complicated concept and that it should be used with care. The concept of causation and its formal representation will be reviewed in more detail in a later article.

Through the practical use of the Research Analysis model, examples have been found that do not suit a causal model. For example, there may be an association between increased c-reactive protein and myocardial infarction, however the researchers don’t believe that the c-reactive protein causes the myocardial infarction. Instead they believe that both are caused by some other common cause eg. late stage atherosclerosis. In these cases, the use of increase in a claim could be interpreted as “c-reactive protein is found to increase with myocardial infarction”. This use of increase does not imply causation. Increase and decrease in Research Analysis are intended to be used to represent scientific claims that propose causation as well as those that do not.

The formal representations of claims, as in (20a), and the claims in Research Analysis are declarative statements that don’t include any hedging. The claim “statins reduce myocardial infarction in normal humans” suggests that the reduction will occur in all normal humans, not that it will occur most of the time or 95% of the time. This is representative of the claims made in (1), where the core claim in (20a) is made without any hedging, though there is some uncertainty about the size of the effect. Is this type of declarative statement appropriate, given that modern science relies on statistical tools (eg. t-testing) that acknowledge we can never have complete certainty about any real world association? I believe that unhedged declarative statements are the appropriate representation of scientific claims, as Research Analysis seeks to represent the clearly stated core of the hypothesis being tested by the research, which becomes the claim found to be supported by the results of the scientific process. Modern science is largely based on the philosophy of Karl Popper [9], which requires a scientist to begin their work with a hypothesis that can, at least theoretically, be falsified by experimental findings. A hypothesis that involves any hedging cannot ever be falsified definitively, as the defender of the hypothesis can always claim that a negative result was only due to chance. The claims in Research Analysis aim to capture the core of these hypotheses: claims that could have been falsified but were found to be supported, and that remain open to having their mettle tested.

Through this investigation, it was identified that the terminology of “not significant” does not align with the declarative and unhedged statements that are intended to be collected in Research Analysis. For this reason, it will be replaced with “no relation” in the next version, as in “hair colour has no relation to myocardial infarction”. This language and concept removes the hedging that is implied by the concept of “not significant”. With this new language, a claim from another publication that found an increase or decrease relationship between the subject and object would provide evidence against, or falsify, the claim of “no relation”, whereas the old language leaves open the possibility of statistical luck.


Is the model presented in this article a perfect representation of the semantics of scientific claims in medicine? Of course not. We expect to improve the model over time as we learn from application and the input of peers. But we do believe that scientific claims collected using the existing model can provide value through clarification of meaning, more efficient search and tools that can compare and contrast claims.

The writing of this article raised a number of issues for us and we hope to explore them in future articles. They include:

  • Related to Davidsonian analysis:
    • Consider a neo-Davidsonian analysis of our medical science claim model.
    • Does Davidson’s event model extend to situations where the medical claim relates to an association that occurs only after a substantial time?
  • Investigation into the probabilistic representations of medical claims.
  • Research Analysis currently allows for the specification of an organ or organ cell model to be associated with the claim predicate. This would specify “in heart” for the claim in this article. Is this another adverbial phrase or a modifier to “in normal human”?
  • Are there other adverbial phrases or adjuncts that are essential to the core medical scientific claim?
  • Investigation of the different types of terms that can be used as subject and object in medical claims. For example:
    • Events: coronary events, myocardial infarction (blockage of the coronary arteries)
    • Disease process: atherosclerosis (development of lesions in the arteries)
    • Groups or sets of things vs individual things: eg. brain cells, coronary events.
  • An investigation into the validity of causation claims and their representation.
  • Application of the model to individual medical cases or small groups of cases that do not significantly show a general correlation.
  • The use of lambda calculus to represent claims, construct claims and extract claims from natural language into a formal structure in a systematic way. In this article we rephrased sentences before abstracting them into a logical form. Lambda calculus may provide tools that can leverage the syntactic structure of sentences to algorithmically extract formal claims from natural language.
  • Why is it relatively easy for scientists to make linguistic claims, but hard to define the same claims in a logical language?
  • Finally, we have discussed several ideas that are not cited. Some of these should have citations from work that we have already read but couldn’t identify at the time of writing. For others, we have not yet found appropriate works to support our approach, but we will keep searching.


  1. Kearns, Kate. Semantics. 2nd ed. Palgrave Modern Linguistics. England: Palgrave Macmillan, 2011.
  2. Cholesterol Treatment Trialists’ Collaborators. “Efficacy and safety of cholesterol-lowering treatment: prospective meta-analysis of data from 90 056 participants in 14 randomised trials of statins.” The Lancet 366.9493 (2005): 1267-1278.
  3. Maron, David J., Sergio Fazio, and MacRae F. Linton. “Current perspectives on statins.” Circulation 101.2 (2000): 207-213.
  4. Law, Malcolm R., Nicholas J. Wald, and A. R. Rudnicka. “Quantifying effect of statins on low density lipoprotein cholesterol, ischaemic heart disease, and stroke: systematic review and meta-analysis.” BMJ 326.7404 (2003): 1423.
  5. Endo, Akira. “A historical perspective on the discovery of statins.” Proceedings of the Japan Academy, Series B 86.5 (2010): 484-493.
  6. Russell, Bertrand. “On denoting.” Mind 14.56 (1905): 479-493.
  7. Russell, Bertrand. “On the notion of cause.” Proceedings of the Aristotelian Society. Vol. 13. Aristotelian Society, Wiley, 1912.
  8. Davidson, Donald. “The logical form of action sentences.” (1967).
  9. Popper, Karl. The Logic of Scientific Discovery. Routledge, 2005.

Version 1.3, 20th September, 2016

Previous versions can be provided on request.

Acquaintance and Knowledge About the World – Descriptions in Medical Science

  • Denotation involves the interpretation of denoting phrases such as: a cell, some cell, every cell, all cells, the zygote that developed into Bertrand Russell, the unfertilised ovum that developed into Bertrand Russell. The theory of descriptions, developed by Bertrand Russell beginning with his work On Denoting [1], provides a means of extracting the logical form of denoting phrases from natural language expressions.
  • The topic of denotation is important to Russell from a theory of knowledge point of view, that is, theory about what we can know about the world. Russell at the time saw two distinct types of knowledge:
    1. acquaintance: things that we can directly perceive.
    2. knowledge about: things with which we have no immediate acquaintance, but understand by description.
  • Descriptions of objects are denoted by phrases composed of words with whose meanings we are acquainted.

Acquaintance is a special type of knowledge about

  • Many texts on Russell’s theory of descriptions suggest that direct perception and the naming of objects by ostensive definition, by pointing, are unproblematic. This may have been Russell’s view too in his early writings.
  • However, in Russell’s later writings on human knowledge [2] he took a pragmatic empirical view that I would summarise as follows:
    • He accepts Hume’s position that there is no certain knowledge of the real world
    • However, he believes that there is a pragmatic and practical basis for believing in knowledge that has strong empirical support.
  • Acquaintance with the name of a thing is gained through a series of events where the name is associated with the physical perception of the thing.
  • Through this empirical acquaintance we link the name of the thing with the mental description of the object that we keep in our mind and use to recognise the object.
    • The name of the object is just the shorthand link to the mental description of the object we keep: the description that our brains use to interpret the inbound perceptual information from our senses (for example, information from our eyes delivered by our optic nerve to our brain).
    • This is essentially the same as the shorthand identifier used by a computer to link to a detailed description or model for recognising an object: a model that the computer uses to interpret inbound information from its sensory devices (for example, information from a camera delivered as binary data to the CPU).
    • Symbols are used to denote things. Usually we think of words, in any language, but the language could also be a computer language of binary bits physically held in memory, or our own memories stored in the structure of physical neural networks.
  • If acquaintance is taken to be empirical as discussed above, then really it is just another form of knowledge about things. Traditionally it is considered to be knowledge gained where there is no intermediate unnatural translation of the description from the “natural world” (light travelling to our eyes from the object) and our perception (the receipt of the light by our eyes). But even in this natural case, the simple name we give to a perceived thing is denoted by the description held in our minds of the object, the description used to recognise the object. This description of the object is an incomplete description and created through imperfect senses and thus empirical in nature.
    • Even knowledge of ourselves is empirical, in that it necessarily involves one part of the self perceiving another, and in the same way this must be incomplete.
  • Some examples of acquaintance being a special type of knowledge about:
    • Direct sense perception: Practical medical science provides an opportunity for physical acquaintance with physiological, biological and other medical objects. Names are associated with the physical objects by ostensive definition, pointing out, of objects by demonstrators and peers. As discussed, I argue that even here the objects are being denoted by descriptions in our minds that will be used to link future acquaintances with the name given to the object.
    • Magnified perception: When our practical science moves to the microscopic level the empirical nature of naming and denotation becomes clearer. Here we don’t have direct perception of the objects and must rely on the images provided by magnification devices. We treat the images as if they were simply the perceptions we would have if our eyes could achieve the same magnification, however this assumption is an empirical one.
    • Assay perception: At the molecular level, the empirical nature of perception becomes obvious. When we run an assay on a biological sample (eg. ELISA) and interpret the results, it is impossible to treat it as a direct perception of the molecular make-up of the sample. This is acknowledged by the use of statistical analysis to interpret the results and their quality.
  • I would argue that the direct sense perception of medical objects is, from a theory of knowledge point of view, no different to the perception of assay results. Both rely on the translation of sensory perception into a description/denotation/model in our brains that is then linked to a name for the object. The only difference is that assay perception involves additional external descriptions of the object, while direct sensory perception involves only the internal human descriptions of the object in our brains.
  • This equivalence can further be demonstrated by the following examples:
    • Instead of a physical practical experience with the object, you could instead learn the name for an object from a textbook, where the exact same image that you would have seen in an in-person medical practical is provided in the textbook. Certainly some of the quality of the acquaintance with the object will be lower (eg. loss of 3 dimensions, you can’t feel it), but it is not a completely different experience and you would gain similar acquaintance and knowledge about the object.
    • It may be that the image in the textbook was damaged, but there remains an extremely detailed description of the object in the book and you would gain a lower but again somewhat similar acquaintance and knowledge about the object.
    • Other similar examples would be a high quality replica wax model of the object, a diagram of the object that is drawn to highlight important attributes of the object (good ways of describing or recognising the object), etc
  • Perception, as it is used here, is simply a name for a type of knowledge about things in the world that doesn’t involve the interposition of external descriptions of the object between a human and the object in the real world. However, descriptions remain within the human, and that makes perception a form of knowledge about the object.
  • Thus all human knowledge is knowledge about the world and is thus subject to Russell’s theory of descriptions and denotation.

Knowledge about is empirical

  • Knowledge about the world, whether gained by acquaintance or through descriptions, is inevitably empirical. In the case of acquaintance, the descriptions that we build for objects in the world are strengthened and refined through the corroboration of our recurring observations of the object against the description we hold for it. In the case of knowledge gained through descriptions, the descriptions, to be useful, must have been developed through empirical means by the person or community who created them. It is possible that a description has been developed without an empirical foundation, but in that case we should not expect the description to be corroborated by our own observations of the world.
  • Most of our common sense knowledge about the world is established through our acquaintance with the world as we develop. We may not consciously think of this knowledge acquisition as a scientific process, but subconsciously our mind is gaining knowledge through the generation of hypotheses and the testing of these hypotheses in the course of our life.
  • Science and the scientific method is really just a formalisation of our natural approach to knowledge acquisition with a higher expectation on the required level of corroboration.
  • The goal of science is to gain high quality and reliable knowledge about specific things in the world, both objects and the processes that affect objects, and to be able to describe or denote these specific things in a highly reliable way.
  • Very good scientific descriptions are those that would allow a person with no direct acquaintance with a thing to identify and make good predictions about that thing in the world.
  • The quality of the descriptions in science is important. Descriptions that are needlessly complicated or rambling will make their conversion into knowledge about the world difficult for the reader. Ockham’s razor suggests that descriptions should be as simple as possible, but no simpler.
  • Another goal of science is to make descriptions as broadly applicable as possible where that can be done without excessive additional complexity. Our preference is to create efficient scientific descriptions of “all cells”, rather than just “some cells”.
  • In physics, many of the descriptions are provided in the form of mathematical equations. In medical science, on the other hand, the majority of descriptions are given in the form of literature. Certainly medicine uses special scientific terminology, but the terminology is presented in natural language. There are some niches of medical science where more formal languages are used: for instance, in biochemical pathway analysis.
  • Mathematics is just a type of description where the symbols and words are used in a highly formal way. High quality natural language can be somewhat formal, but only to a dramatically lower level than formal languages like mathematics and first order predicate logic.
  • Our goal is to develop and promote more formal languages for expressing medical science knowledge. We seek to extract medical science knowledge out of the existing natural language literature and into such formal languages. Ideally, one day medical science will be coded directly into such formal languages, and we believe that powerful and easy-to-use tools will be essential to achieving this goal.


  1. Russell, Bertrand. “On Denoting.” Mind 14.56 (1905): 479–493.
  2. Russell, Bertrand. Human Knowledge: Its Scope and Limits. New York: Simon & Schuster, 1948.

Version 1.0, 21st August, 2016

Semantics in Medical Science Literature – Introduction

This is the topic for a series of knowledge base articles that will provide an introduction to semantics with a focus on the medical science literature. We have found a number of introductory texts on semantics to be very valuable, but haven’t been able to find any that focus on science. Since these texts come from the field of linguistics, it makes sense that they focus on general human language. Some of the advanced concepts in semantics deal with the complexity of human conversation, where there is a large body of unspoken understanding and context between the speakers; colloquial language makes it even harder. Scientific language, on the other hand, tends to be more formal and declarative, but it has its own complexities. There may be introductory texts addressing the semantics of scientific language, but we haven’t been able to find them. As a result, we will likely attempt explanations that have been made before, and with more rigor. Where you know this is the case, please let us know and we will correct the text and include references. We felt that these articles would be a valuable learning tool for our users and for us.

Research Analysis was developed before we gained an understanding of formal semantic analysis methods and the semantic tools that have been developed. We expect the Research Analysis platform to evolve as a result of our better understanding of these semantic tools and techniques.

Version 1.0, 18th August, 2016

Critic Systems – Human-Computer Problem Solving Review

This is our first article on the topic of Human-Computer Problem Solving (HCPS) since we put it forward as a key area of focus for us in a recent article [link]. When searching for past research on the topic, we came across “critic systems” in a review article (Tianfield 2004). While this research was largely left in the 20th century, it aligns very well with our interests and includes many insightful concepts and strategies. We provide a brief review of the key concepts here.

The concept of critic systems was primarily developed by Gerhard Fischer and his research group at the University of Colorado, Boulder (see, for example, Fischer 1991). Fischer’s group applied the concept of critic systems broadly to human-computer interaction for ill-structured design and software coding applications, at a time when graphical interfaces were just emerging. Fischer noted that “The design and evaluation of the systems … led to an understanding of the theoretical aspects of critiquing”, which are presented in their publication The Role of Critiquing in Cooperative Problem Solving (Fischer 1991). Here we provide a high level overview of the concept of a critic system.

The role of critiquing in cooperative problem solving - Fischer 1991 - Figure 1

Figure 1: The critiquing approach. Figure 1 from Fischer 1991.

Figure 1 shows a high level view of a critiquing system. The key features of the system are as follows:

  • There are two agents, a computer and a user, working in cooperation.
  • The human contributes:
    • The problem solving goal or requirements: The goal specification can vary a lot depending on the type of critic system. Where the system is very specific to an application, the high level goal is built into the system and only requirements specific to the current problem need to be provided. In more general problem solving situations the user will need to provide a more detailed specification of the goal.
    • Domain expertise and general common sense about the world: This largely takes the form of the user’s long term memory and intuition. However, it could also include the user’s access to external memory in the form of books and other knowledge that is not available to the computer.
  • The Computer contributes:
    • A model of the user:
      • A model of the user helps the computer to formulate critiques that will be understandable and useful to the user. The model could include information about the expertise of the user to ensure that the critiques are provided at the right level, eg. student or expert. The user model can be built up through interaction with the user to personalise the system.
      • A model of the user’s problem solving goal: The computer needs to be able to capture the user’s goal in a formal way that allows it to analyse the problem.
    • Domain knowledge: The computer has a knowledge base of domain knowledge relevant to the problem area. The formalisation of the knowledge base will depend on the application type and area, but it can include both strict and probabilistic rules and constraints that the system can use to critique the proposed solution.
  • Problem Solving & Proposed Solution: The human’s primary role is to generate the initial solution and provide modified solutions after each round of feedback from the critic system. The human’s approach to problem solving will vary, but is assumed to be different to that of the computer and will likely leverage the human’s long term memory and expert intuition developed through their career. The human will need to work with the system to provide the solution in a form that the computer can understand, though substantial focus is put on making the interface as easy as possible to use eg. visual or natural language.
  • Critiquing and Critique:
    • The critiquing process begins with an evaluation of the user provided solution. Tianfield (2004) suggests that there are two approaches to evaluation:
      • Analysis based: the system analyses the solution against its knowledge base without developing its own solution.
      • Comparison based: the system develops its own solution from the requirements and compares the user’s solution to its own.
    • Tianfield (2004) notes that critiques are generally of two types:
      • statements of the detected defects, e.g., errors, risks, constraint violations, solution’s incompleteness, requirement’s vagueness, etc.
      • resolutions about the defects detected based on the first type.
    • The critiques provided can be classified for example by importance, positive/negative etc.
    • The critiques must be formulated in a way that maximises their value to the user. The system utilises the user model and user interface to achieve this.
  • The process is iterative, with the human taking on board the critique and updating the proposed solution until an acceptable solution is achieved.
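The iterative cycle described above can be sketched in code. This is a minimal, hypothetical rendering of the analysis-based critiquing loop, with all function names our own invention rather than anything from Fischer's or Tianfield's systems:

```python
from typing import Callable, List

# Hypothetical types: a solution is any object; a critique is a string.
Critique = str

def critiquing_loop(
    propose: Callable[[List[Critique]], object],       # human: initial/revised solution
    critique: Callable[[object], List[Critique]],      # computer: defects found
    accept: Callable[[object, List[Critique]], bool],  # human: good enough?
    max_rounds: int = 10,
) -> object:
    """Analysis-based critiquing: the computer never builds its own
    solution; it only evaluates the human's against its knowledge base."""
    critiques: List[Critique] = []
    solution = propose(critiques)
    for _ in range(max_rounds):
        critiques = critique(solution)
        if accept(solution, critiques):
            return solution  # the human decides when to stop
        solution = propose(critiques)  # human revises using the feedback
    return solution
```

Note that both the stopping decision and every revision stay with the human; the computer's role is confined to evaluation, which is what distinguishes a critic system from an expert system.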

A Simple Example – Spell Checking

I would like to put forward what might appear to be a trivial example – a spell checker for a word processor. I think it is a good example because, 1. everyone has had exposure to a spell checker and, 2. it is not a problem that has been solved by artificial intelligence. Human-computer collaboration is required to correct spelling. Below is a discussion of the example using the key points from above:

  • The human contributes:
    • The problem solving goal or requirements: The goal of correct spelling is an assumed goal, built into the word processor.
    • Their domain expertise: The user brings their language education and experience, along with innate language capabilities.
  • The Computer contributes:
    • A model of the user:
      • The specific language, eg. Australian English, possibly profession-specific dictionaries, a personal dictionary of words added by the user, etc.
      • Smartphone applications read your historical emails, personal notes, etc. to build a model of the words you regularly use. These systems attempt to guess the next word that you might use, and they are often right, but much of the time we still need to select the right word from a group of three or more.
    • Domain knowledge: dictionaries, grammatical rules, etc.
  • Problem Solving & Proposed Solution:
    • The human simply proposes their spelling for each word in the word processor.
  • Critiquing and Critique:
    • The spell checker analyses the text using the dictionary and grammatical rules.
    • The computer identifies possible incorrect spellings by underlining the word and proposes correct spellings when the word is right-clicked.
    • Note that the spell checker often doesn’t provide only one correct option. In many cases the computer is unable to automatically correct the spelling. Human language can be extremely complex and difficult to disambiguate eg. Will Will will the will to Will (my spell checker is suggesting I delete the second and third will right now).
  • Often the computer will offer a correct spelling of the word we intended in the first iteration or focus our attention on remembering the correct spelling. But sometimes we are too far off and we must have a second attempt at the word before the computer is able to guess the word we want. This is the second iteration of the critiquing cycle.
  • Finally, the user decides when the spelling in the document is satisfactory, and often this involves ignoring several of the computer’s critiques.
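A toy version of the spell-checking critic can be sketched with Python's standard difflib module. The dictionary here is a stand-in for the word processor's domain knowledge, and the function name is illustrative:

```python
import difflib

# A toy dictionary standing in for the word processor's domain knowledge.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def spell_critique(text, dictionary=DICTIONARY):
    """Return critiques as (word, suggestions) pairs. Like a real spell
    checker, the system flags defects and proposes resolutions, but the
    human chooses among them (or ignores the critique entirely)."""
    critiques = []
    for word in text.lower().split():
        if word not in dictionary:
            suggestions = difflib.get_close_matches(word, dictionary, n=3)
            critiques.append((word, suggestions))
    return critiques

print(spell_critique("the quikc brown fox"))  # [('quikc', ['quick'])]
```

As in the discussion above, the system cannot finish the job alone: it surfaces the defect and ranked resolutions, and the human completes the iteration by choosing or rejecting them.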

Key Advantages of Critic Systems

Fischer covers some aspects of cooperative problem solving of special interest (Fischer 1991, p 125); below is a brief summary:

  • Breakdowns in cooperative problem-solving systems are not as detrimental as in expert systems: collaborative systems are able to deal with misunderstandings, unexpected problems and changes of the human’s goals.
  • Background assumptions do not need to be fully articulated: collaborative systems are especially well suited to ill-structured and poorly defined problems. Humans can know the larger context, learn during the problem solving process and decide when to expand the search space.
  • Semiformal system architectures are appropriate: The computer system doesn’t need to be able to interpret all of the information that it has available and can rely on the human.
  • Delegation problem: Automating a task requires complete specification. Cooperative systems allow for incremental refinement and evolution of understanding of the task.
  • Humans enjoy “doing” and “deciding.”: Humans often enjoy the process and not just the product; they want to take an active part.

Thoughts on application to Research Analysis

At the time of this writing, the Research Analysis application is a knowledge base with knowledge capture and search tools. We use the application in a process similar to the critiquing approach, but at present this is a manual process. Many researchers would use a collection of databases, software tools and peer feedback in a similar manual process to solve hard problems in medical research. The challenge is to bring these manual processes together into a system that substantially increases the efficiency of the researcher without too high a barrier to adoption. Ideally, the system could be applied to many niche research areas by updating the domain knowledge, without needing to change the core software platform.

Research Analysis currently provides very basic critiquing-type functions. The researcher can enter a research claim using our semiformal language and the system will provide a list of matching claims that have been made by other researchers, with quotations and citations. By relaxing the specification of the claim (eg. any association, rather than a positive correlation), the system will provide a list of similar claims. These may include claims that contradict or question the significance of the researcher’s claim.
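The strict and relaxed searches just described can be sketched as a simple filter. The claims below and the tuple layout are illustrative only, not the platform's actual query interface:

```python
# Claims as (cause, effect, target, species) tuples; all entries illustrative.
KNOWLEDGE_BASE = [
    ("Statins", "decreases", "Myocardial Infarction", "Humans"),
    ("Statins", "increases", "Myocardial Infarction", "Humans"),
    ("Statins", "decreases", "Cholesterol", "Humans"),
]

def matching_claims(cause, target, species, effect=None):
    """Exact match when an effect is given; passing effect=None relaxes the
    search to any association, which may surface contradicting claims."""
    return [c for c in KNOWLEDGE_BASE
            if c[0] == cause and c[2] == target and c[3] == species
            and (effect is None or c[1] == effect)]

# Strict search: one supporting claim.
print(matching_claims("Statins", "Myocardial Infarction", "Humans", "decreases"))
# Relaxed search: also finds a claim that contradicts the researcher's.
print(matching_claims("Statins", "Myocardial Infarction", "Humans"))
```

The relaxed query is the critiquing step: it is exactly where a claim that conflicts with the researcher's own appears in the results.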

Future functionality for Research Analysis that fits the critiquing concept:

  • We are currently working on the capability of extracting claims directly from the abstracts or natural language of the researcher. This would make the system easier to use and would also increase the ability of the system to automatically populate the knowledge base.
  • The critiquing approach suggests that we could use an individual researcher’s historical publications to build a claim database that models the researcher’s knowledge and beliefs. This model could be used in the critiquing process to provide critiques that are in line with the researcher’s beliefs. This could be extended to include the researcher’s publication database, and further again by tracking the researcher’s position on each publication, eg. agree, disagree or neutral.
  • Currently the system requires the user to manually vary the search fields to identify related claims. We are working on a reporting type function that would provide the user with a list of claims related to their claim, including supporting claims, conflicting claims and similar claims. Claims at different levels of the physiological hierarchy (eg. artery to cardiovascular system), different physiological locations (eg. artery to liver) and different related species (eg. mice to mammal) could also be compared. The tool will use medical term synonyms and statement analysis to identify claims from other related fields that make similar claims, but with different terminology.
  • In the beautiful future, we plan to use a combination of the user’s input, the system’s domain knowledge and problem solving tools to present new hypotheses to the researcher through recombination of the existing knowledge base of claims. Generating medical hypotheses is a classic example of an ill-structured and poorly defined problem space. Often, important new hypotheses will require breaking accepted rules and rejecting historically accepted scientific claims. We believe that a platform like Research Analysis could be valuable in the systematic proposal of new hypotheses based on analysis of historical claims. While we hope these suggestions will be valuable, we have no doubt that human researchers will be critical in assessing these hypotheses using their broad domain knowledge and worldly common sense.
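One reporting function above, comparing claims at different levels of a physiological or taxonomic hierarchy, could be sketched as follows. The hierarchy fragments and function names here are hypothetical, chosen only to mirror the examples in the text (artery to cardiovascular system, mice to mammal):

```python
# Hypothetical fragments of physiological and taxonomic hierarchies.
PARENT = {
    "artery": "cardiovascular system",
    "heart": "cardiovascular system",
    "mice": "mammal",
    "humans": "mammal",
}

def ancestors(term, parent=PARENT):
    """Yield the term and each broader level above it."""
    while term is not None:
        yield term
        term = parent.get(term)

def related_levels(a, b):
    """Terms are related when they share any level of the hierarchy,
    e.g. 'artery' relates to 'heart' via 'cardiovascular system'."""
    return set(ancestors(a)) & set(ancestors(b))

print(related_levels("artery", "heart"))  # {'cardiovascular system'}
print(related_levels("mice", "humans"))   # {'mammal'}
```

A real implementation would walk a standard vocabulary such as the MeSH tree rather than a hand-written dictionary, but the comparison logic would be the same.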

Further Reading

  • Terveen provides a good overview of human-computer collaboration (Terveen 1995), and in particular a summary of critic systems based on the presentations at a symposium on the topic.
  • Miller provides an overview of expert critiquing systems in the field of practice-based medical consultation at the time (Miller 1986).


  1. Fischer, Gerhard, et al. “The role of critiquing in cooperative problem solving.” ACM Transactions on Information Systems (TOIS) 9.2 (1991): 123-151.
  2. Tianfield, Huaglory, and Ruwen Wang. “Critic Systems – Towards Human–Computer Collaborative Problem Solving.” Artificial Intelligence Review 22.4 (2004): 271-295.
  3. Miller, Perry L. “Expert critiquing systems.” Expert Critiquing Systems. Springer New York, 1986. 1-20.
  4. Terveen, Loren G. “Overview of human-computer collaboration.” Knowledge-Based Systems 8.2 (1995): 67-81.

Optimal Human-Computer Problem Solving (HCPS) for medical science

We have been exploring the use of computers to solve hard problems in medicine for several years as part of our Research Analysis project. Research Analysis was born from our need for a way to keep track of thousands of scientific claims that we extracted during the review of over a thousand papers in the cardiovascular field. During our literature review there would be moments of insight that generated new hypotheses. These insights were generated by the connection of concepts in the current article with other concepts stored in our long term memory. When documenting these ideas, we would search back through the earlier literature to confirm and cite the supporting concepts. Two problems came up again and again:

  1. It could take hours or even days of elapsed time to pinpoint the specific concept in our library of earlier articles.
  2. When we found the concept in the earlier article, it was often not quite as we remembered. We had linked the concept in the current article by analogy to the earlier article, but the analogy did not fit the facts accurately when we revisited them.


The goal of Research Analysis is to avoid these problems through the use of a standardised language for capturing claims, references to the specific supporting sentences within articles, and computer tools for capturing and recalling claims. Our ongoing work on Research Analysis led us to begin thinking and reading about a higher level question: what is the optimal way for humans and computers to work together to solve hard problems? We are of course not the first to ponder this question, but we were surprised to find how little formal research has been conducted on it. Researching this question and applying the findings to medical research problems has become an important focus for us.

Chess is a field where humans and computers have been working together for some time. An article by Garry Kasparov discusses the free-style chess competitions run in 2005, where the competitors could compete as teams with other players and/or computers.1 The surprising result was that the winner was “not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” We thought that Garry’s basic formulation of the variables was a good thinking tool.

Garry’s article proposes the following with some minor changes to the terms:

weak human + computer + strong process > strong human + computer + weak process > strong computer > strong human

The above is not thorough or formal and would be specific to the problem type, Chess in this case, but we feel that it provides a good overview and example of our current view on how hard problems in medicine should be approached. Our very high level view is:

optimal problem solving = human + computer + optimal process

There will of course be problem types where optimal problem solving will be achieved by machine only eg. complex arithmetic. There will be problems where optimal problem solving will be achieved by a human only eg. interpreting human emotions in a physical environment. However, we feel that most hard problems in the sciences would benefit from the combined efforts of humans and computers, making the identification of the strongest process critical to solving hard problems.

Today in medical science, computers play an important role in research. However, they tend to play niche number crunching roles like statistical and genetic analysis. Many medical science laboratories confine computers to the control of instruments and a little statistical analysis at the end of the experimental process. They are rarely involved in the hypothesis generation and complex problem solving phases of the research process and even less often in a systematic fashion. This could be represented as: strong human + little or no computer + little or no process. On the other hand, the field of Artificial Intelligence (AI) almost always seeks to develop software systems that are able to solve problems without human involvement. An impressive example of such an AI system is the Robot Scientist2 that fully automates the research process including hypothesis generation for a basic biological application. The AI programs in medical science could be represented as: little or no human + strong machine + little or no process (please note that when we use “process” in the formulas, we are specifically referring to the process for human and computer collaboration and not the scientific process or other processes).

While we have named our project Optimal Human-Computer Problem Solving, others in the field use the term machine in place of computer. The two terms have overlapping definitions and we are comfortable with both, but we have chosen the term computer because it is more closely related to software, and we believe that software will be central to human-computer problem solving. However, the analogy of the machine is a powerful one. The integration of human and machine in the production lines of the early 20th century led to a dramatic increase in human productivity and wealth. The Ford production line was the proverbial example, where Henry Ford claimed that any man off the street could be productive on the line with less than a couple of days of training.3 This is a great example of: human + machine + strong process. There were no fancy robots in the line; most of the steps in the production line involved basic tools, cutting and stamping machinery. But the organisation of the machinery and humans into a very focused, efficient and consistent process is what unleashed the productivity boost. Developing equivalently powerful processes for the integration of humans and computers for solving hard problems in medical science is our goal.

Our continuing research and application development will focus on the following areas:

  1. Explore the strengths and weaknesses of both humans and computers: Understanding these will highlight the best opportunities for humans and computers to collaborate.
  2. Algorithmic or process driven approaches to problem solving: Understanding the few examples of algorithmic approaches to problem solving may accelerate the development of strong processes for human-computer collaboration.
  3. Knowledge capture, management and analysis: We will continue our work with Research Analysis as we feel that efficient knowledge capture and manipulation will be critical to optimal problem solving.


We haven’t cited the work that has inspired us to date; instead, we will begin publishing brief articles here that discuss key points we found interesting. We also hope to continue releasing tools we develop through Research Analysis and future platforms.


  1. Kasparov, Garry. “The Chess Master and the Computer.” The New York Review of Books, 11 February 2010. Web. 10 July 2016.
  2. Sparkes, Andrew, et al. “Towards Robot Scientists for autonomous scientific discovery.” Automated Experimentation 1 (2010): 1.
  3. Ford, Henry, and Samuel Crowther. “My Life and Work. Garden City, New York, USA.” (1922).

Nano-publications – Review and Comparison to the Research Analysis model

Apart from the scientific knowledge management models of micropublications (Clark 2014) and the Biological Expression Language, the concept of nano-publications (Mons 2009, Groth 2010) is most aligned with the goals of Research Analysis and has provided valuable insights. In this article we provide an overview of the nano-publication model and discuss how it relates to the Research Analysis (RA) model.

The authors propose five steps required to create and adopt nano-publications (Mons 2009, Groth 2010):

  1. Terms to Concepts: This step requires that all terms in a research article are mapped to non-ambiguous identifiers. In nano-publications this is referred to as a Concept, where a Concept is the smallest, unambiguous unit of thought; a Concept is uniquely identifiable (Groth 2010). This is similar to, but more ambitious than, the Medical Subject Headings (MeSH) database. Using MeSH as an example, MeSH Headings are the equivalent of Concepts and the Entry Terms for each MeSH Heading are equivalent to the Terms or synonyms for each Concept. We agree with this general goal and strongly promote the use of standard language, which is why we promote the use of MeSH terms in Research Analysis. In Research Analysis users can use the MeSH Entry Term (synonym) they are most comfortable with, and the system then ensures that this is mapped to the main concept, or MeSH Heading. This allows users to work with the terminology that is most comfortable for them and their peers, rather than being forced to use an ideal concept. In a separate article, we discuss the challenges associated with defining an unambiguous unit of thought.
  2. Concepts to Statements: Here they propose that each smallest insight in the exact sciences is a ‘triple’ of three concepts, though conditions are required to put the insight in context. The triple takes the form subject > predicate > object; for example, cholesterol > increases > atherosclerosis. A Statement is a uniquely identifiable triple, which can be achieved through the assignment of a unique identifier to the triple by annotation. RA initially implemented a cause-effect model for statements, a special case of the triple where the predicate must be a cause-effect predicate. We currently offer only the predicates increases, decreases and not significant. We chose to initially restrict the options for triples to allow for the collection of a consistent database that would allow for analysis.
  3. Annotation of Statements with Context and Provenance: It is not enough to store statements just in the form of their basic components, three concepts in a specific sequence. A statement only ‘makes sense’ in a given context, and taking a statement out of a research publication strips it of this context. The context in a nano-publication is defined by another set of concepts. The annotation is achieved technically through a triple such that the subject of the triple is a statement. For the example above, the species should be specified: mice do not get atherosclerosis, even on a high cholesterol diet, but humans do. Provenance is also associated with Statements by annotation eg. author, source. Claims in RA by default require that the user provide organ/cell model, genetic model and species annotations that are relevant for each specific claim. RA automatically assigns unique identifiers to claims and requires that at least one supporting quotation is provided from a publication. The supporting quotations require that the PubMed ID (PMID) is provided. Additional conditions and context can be provided by users appending tags to the claim.
  4. Treating Richly Annotated Statements as Nano-Publications: treat these statements with conditional annotation as nano-publications, with proper attribution, so they can be cited and the authors can be credited. A nano-publication is a set of annotations that refer to the same statement and contains a minimum set of (community) agreed upon annotations (Groth 2010). This concept is similar to the claim model in RA: claims can be cited using their unique identifier, and viewing a claim provides details of all of the quoted statements and associated PMIDs that support the claim, along with any other context provided via tags.
  5. Removing Redundancy, Meta-analyzing Web-Statements: where statements are identical they would be removed to simplify the database, the goal being to reduce “undue repetition” and to help improve the identification of new statements. Groth et al. define S-Evidence: all the nano-publications that refer to the same statement (Groth 2010) and, as implied by the name, provide evidence for the statement. The original model for nano-publications focused more on the removal of redundancy, but the concept of S-Evidence shows more respect for the importance of replication and the potential for meta-analysis. In complex sciences like biology, the likelihood of a statement being true based on the evidence of one publication is surprisingly low. For example, Begley and Ellis investigated the poor reproducibility and re-usability of results in therapeutic development in the cancer field (Begley 2012). No single experiment, or for that matter any number of experiments, can fully demonstrate the truth of a statement. However, the collection of results that support a statement can, in a Popperian sense, provide some guidance as to the level to which a scientific statement has had its mettle tested. It can also allow for the bridging of knowledge between subfields where different terms for the same concepts are regularly used.
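The structure described in the steps above can be sketched as a minimal data model. This is our own illustration in Python with hypothetical field names; the actual nano-publication model is expressed in semantic web (RDF) schemas, not in code like this:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Statement:
    """A uniquely identifiable triple: subject > predicate > object."""
    subject: str     # a Concept, e.g. "Cholesterol"
    predicate: str   # e.g. "increases"
    object: str      # a Concept, e.g. "Atherosclerosis"

@dataclass
class NanoPublication:
    """A statement plus the context and provenance annotations that
    make it a citable unit (field names are illustrative only)."""
    statement: Statement
    context: dict = field(default_factory=dict)     # e.g. {"species": "Humans"}
    provenance: dict = field(default_factory=dict)  # e.g. {"author": "...", "pmid": "..."}

np1 = NanoPublication(
    statement=Statement("Cholesterol", "increases", "Atherosclerosis"),
    context={"species": "Humans"},
    provenance={"pmid": "12345678"},
)

# S-Evidence (Groth 2010): all nano-publications that refer to the same statement.
def s_evidence(nanopubs, statement):
    return [np for np in nanopubs if np.statement == statement]
```

Note how the context annotation carries the species condition that the bare triple lacks, and how S-Evidence falls out naturally as a grouping over identical statements.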

Nano-publication Model

Figure 0: The Nano-publication Model taken from Groth 2010.

The goal of nano-publications

The nano-publication authors propose the goal of having scientific authors structure their data in such a way that computers can understand it, and we support this goal. However, we feel it is likely that the formalisation of scientific knowledge may become a specialist task, like the coding of software design specifications into software code. It is not clear how the nano-publication authors see the knowledgebase of nano-publications being used. We strongly believe that, while knowledge coding may be a specialist activity, most researchers in the biological fields will use tools based on such knowledgebases to help direct their research by identifying gaps, conflicts and opportunities in the current research.

Main differences between the nano-publications model and the Research Analysis models

We have discussed some similarities and differences between the nano-publication and RA models above, but here we go into a little more detail.

At the Concept level:

  • The nano-publication model requests that authors use their Concept Wiki unique identifiers, which aggregates concept names from many databases.
  • RA feels that the task of aggregating names is too large an undertaking and unnecessary given the already available databases. Also, we prefer that scientists use names rather than IDs for concepts, as this is more intuitive and in line with our mission.
  • RA currently requests that all claims use NLM’s MeSH Terms or IDs for the names of the elements or concepts in a claim. Where terms are not in MeSH, the NCBI Protein and PubChem databases can be utilised. In rare cases new concept names can be added to RA.
  • RA uses these external name databases to identify synonyms and to ensure that scientists searching for a particular MeSH Term receive results for all of the MeSH terms under the corresponding MeSH Heading.
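The synonym handling described above can be sketched as a simple inverted lookup. The MeSH examples below are real (Humans and its Entry Terms; Statins is an Entry Term for the HMG-CoA reductase inhibitor heading), but the lookup code is our own illustration and not the RA implementation:

```python
# Map each MeSH Heading to its Entry Terms (synonyms).
MESH_SYNONYMS = {
    "Humans": ["Human", "Homo sapiens", "Man"],
    "Hydroxymethylglutaryl-CoA Reductase Inhibitors": ["Statins", "HMG-CoA Reductase Inhibitors"],
}

# Invert to a term -> heading lookup so that any synonym resolves to its heading.
TERM_TO_HEADING = {
    term.lower(): heading
    for heading, terms in MESH_SYNONYMS.items()
    for term in [heading] + terms
}

def resolve(term: str) -> str:
    """Resolve a user-entered term to its MeSH Heading, or raise if unknown."""
    try:
        return TERM_TO_HEADING[term.lower()]
    except KeyError:
        raise ValueError(f"'{term}' is not a recognised MeSH term")

# A search for any synonym returns results indexed under the one heading:
resolve("Homo sapiens")  # -> "Humans"
```

Because every Entry Term maps back to one heading, users can enter whichever synonym they prefer while searches still match across all of them.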

At the Statement level (Claims in RA):

  • RA uses the concept of claims in place of the concept of statements in nano-publications. Claims are simply a type of statement that we feel is more intuitive.
  • With regard to our natural language claims, we ask that scientists use the form of a declarative sentence that is as logically clear as possible. These statements will not be as logically clear as those in the nano-publication model and are likely to be more complex.
  • Our standard cause-effect claim model is quite similar to the nano-publication model. The primary cause-effect part of the claim is simply a special case version of the nano-publication triple eg. Treatment/Cause [subject] > Effect [predicate] > Disease/Molecule [object].
  • The other components of our cause-effect claim model are equivalent to attribution in the nano-publication model. We have 3 as standard:
    • (cause-effect claim, where it occurs in, Organ/Cell Model) AND
    • (cause-effect claim, where it occurs in, Genetic Model) AND
    • (cause-effect claim, where it occurs in, Species);
  • Note that the three conditions should be considered as conjunctions with the cause-effect claim. Conjunction is required because the conditions are related: the Genetic Model affects the Organ/Cell Model, and both are specific to the Species. A simple annotation of each to the cause-effect claim would be ambiguous. It is only when all of the conditions are true that the cause-effect claim is valid.
  • In RA, statements must be supported by at least one quoted statement from a publication. These supporting quotes could be treated as annotations in the nano-publication model, but a difference between the two models is that a specific quote can be used as support for several claims. Further, for each PMID there are often several quoted statements. Technically these components could simply be repeated as annotations for each statement, but the RA model uses a map or graph concept in which statements are supported by quotes and quotes are supported by PMIDs. Claims, quotes and PMIDs are all considered to be elements in the directed acyclic graph. The S-Evidence concept in the nano-publication model goes some way to bridging this difference by grouping together nano-publications with the same statement; however, S-Evidence creates trees rather than a graph. This level of the RA model is much more like the micropublication model than the simpler nano-publication model.
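The claim > quote > PMID structure described above can be sketched as a small directed acyclic graph in which one quote may support several claims. The class and method names below are illustrative, not RA's actual schema:

```python
from collections import defaultdict

class EvidenceGraph:
    """Directed acyclic graph: claims are supported by quotes, and
    quotes are supported by PMIDs. A quote may support many claims,
    which is why this is a graph rather than a tree."""

    def __init__(self):
        self.claim_quotes = defaultdict(set)  # claim id -> supporting quote ids
        self.quote_pmids = defaultdict(set)   # quote id -> source PMIDs

    def add_support(self, claim_id, quote_id, pmid):
        self.claim_quotes[claim_id].add(quote_id)
        self.quote_pmids[quote_id].add(pmid)

    def pmids_for_claim(self, claim_id):
        """All publications that (via quotes) support a claim."""
        return {p for q in self.claim_quotes[claim_id] for p in self.quote_pmids[q]}

g = EvidenceGraph()
# The same quote supports two different claims -- impossible in a tree.
g.add_support("claim-1", "quote-A", "PMID:111")
g.add_support("claim-2", "quote-A", "PMID:111")
g.add_support("claim-1", "quote-B", "PMID:222")
```

Storing quotes once and sharing them between claims avoids the repetition that per-statement annotation would require, which is the point of the graph structure.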

Research Analysis Knowledge Management Model Diagram

Figure 1. Research Analysis Knowledge Management Model

  • The RA model does have some simple annotation-like elements that associate information with the statement/claim, such as a unique statement identifier, the author of the statement/claim (who may or may not be the author of the publication that the quoted sentence came from), the time and date of creation, and tags that in effect provide flexible and unlimited simple annotations.

Both the nano-publication and micro-publication models have a substantial focus on the technical aspects of encoding the data in semantic web schemas. The reason for this focus is the importance of making the data available in an open and semantically rich format. We respect this goal, but have chosen to hide as much of this detail from our users as possible. While the biological research community has become computer savvy over the past decades, the majority of the community are not trained in any computer programming languages and would find these schemas very unpleasant. I have a computer science degree from before web technology was popular and I still find it very unpleasant. This is a little unfair, as these are articles in informatics journals and certainly we acknowledge that the micropublication authors have built user oriented applications.

One of the key goals discussed in the article on the Research Analysis Mission is that our focus is on making knowledge management tools available to normal biological and medical scientists in an easy to use and powerful way – not making the knowledge available to a few hard core geeks and their supercomputers. For this reason, none of the bare-bones schemas are visible to the users, and the frontend terminology is focused on usability rather than theoretical correctness. This is also the reason why we provide a number of standard models for capturing scientific claims in RA. These models will not be flexible enough for some, but for the rest they will be much easier and more straightforward to use. Adoption, and the actual acceleration of discovery by normal scientists, is our top priority.


Mons, B., & Velterop, J. (2009, October). Nano-Publication in the e-science era. In Workshop on Semantic Web Applications in Scientific Discourse (SWASD 2009).

Groth, P., Gibson, A., & Velterop, J. (2010). The anatomy of a nano-publication. Information Services and Use, 30(1-2), 51-56.

Clark, T., Ciccarese, P., & Goble, C. (2014). Micropublications: a semantic model for claims, evidence, arguments and annotations in biomedical communications. Journal of Biomedical Semantics, 5(1), 28.

Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531-533.