Critic Systems – Human-Computer Problem Solving Review

This is our first article on the topic of Human-Computer Problem Solving (HCPS) since we put it forward as a key area of focus in a recent article [link]. When searching for past research on the topic, we came across “critic systems” in a review article (Tianfield 2004). While this line of research was largely left behind in the 20th century, it aligns very well with our interests and includes many insightful concepts and strategies. We provide a brief review of the key concepts here.

The concept of critic systems was primarily developed by Gerhard Fischer and his research group at the University of Colorado, Boulder (e.g. Fischer 1991). Fischer’s group applied critic systems broadly within human-computer interaction, to ill-structured design and software coding applications, at a time when graphical interfaces were just emerging. Fischer noted that “The design and evaluation of the systems … led to an understanding of the theoretical aspects of critiquing”, which is presented in their publication The Role of Critiquing in Cooperative Problem Solving (Fischer 1991). Here we provide a high-level overview of the concept of a critic system.

Figure 1: The critiquing approach (reproduced from Fischer 1991, Figure 1).

Figure 1 shows a high-level view of a critiquing system. The key features of the system are as follows (a minimal code sketch of the loop follows this list):

  • There are two agents, a computer and a user, working in cooperation.
  • The human contributes:
    • The problem solving goal or requirements: The goal specification can vary a lot depending on the type of critic system. Where the system is specific to an application, the high-level goal is built into the system and only requirements specific to the current problem need to be provided. In more general problem solving situations the user will need to provide a more detailed specification of the goal.
    • Domain expertise and general common sense about the world: This is largely in the form of the user’s long-term memory and intuition. However, it could also include the user’s access to external memory in the form of books and other knowledge that is not available to the computer.
  • The Computer contributes:
    • A model of the user:
      • A model of the user helps the computer to formulate critiques that will be understandable and useful to the user. The model could include information about the expertise of the user to ensure that the critiques are pitched at the right level, e.g. student or expert. The user model can be built up through interaction with the user to personalise the system.
      • A model of the user’s problem solving goal: The computer needs to be able to capture the user’s goal in a formal way that allows it to analyse the problem.
    • Domain knowledge: The computer has a knowledge base of domain knowledge relevant to the problem area. The formalisation of the knowledge base will depend on the application type and area, but it can include both strict and probabilistic rules and constraints that the system uses to critique the proposed solution.
  • Problem Solving & Proposed Solution: The human’s primary role is to generate the initial solution and provide modified solutions after each round of feedback from the critic system. The human’s approach to problem solving will vary, but is assumed to be different to that of the computer and will likely leverage the human’s long term memory and expert intuition developed through their career. The human will need to work with the system to provide the solution in a form that the computer can understand, though substantial focus is put on making the interface as easy as possible to use eg. visual or natural language.
  • Critiquing and Critique:
    • The critiquing process begins with an evaluation of the user-provided solution. Tianfield (2004) suggests that there are two approaches to evaluation:
      • Analysis based: the system analyses the solution against its knowledge base without developing its own solution.
      • Comparison based: the system develops its own solution from the requirements and compares the user’s solution to its own.
    • Tianfield (2004) notes that critiques are generally of two types:
      • statements of the detected defects, e.g. errors, risks, constraint violations, incompleteness of the solution, vagueness of the requirements, etc.
      • proposed resolutions for the defects detected in the first type.
    • The critiques provided can be classified, for example by importance or as positive/negative.
    • The critiques must be formulated in a way that maximises their value to the user. The system uses the user model and user interface to achieve this.
  • The process is iterative, with the human taking the critique on board and updating the proposed solution until an acceptable solution is achieved.
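
To make the loop concrete, here is a minimal sketch of the critiquing cycle in Python. It is our own illustration, not Fischer’s implementation: the domain knowledge is reduced to a list of rule functions (an analysis-based evaluator), and the human’s contributions are represented by propose and accept callbacks.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Critique:
    severity: str  # e.g. "error" or "suggestion"
    message: str


# A rule inspects a proposed solution and returns a critique, or None.
Rule = Callable[[dict], Optional[Critique]]


@dataclass
class CriticSystem:
    rules: list[Rule]  # the domain knowledge base

    def critique(self, solution: dict) -> list[Critique]:
        # Analysis-based evaluation: check the solution against each rule
        # without constructing a competing solution.
        found = (rule(solution) for rule in self.rules)
        return [c for c in found if c is not None]


def cooperative_solve(system: CriticSystem,
                      propose: Callable[[list[Critique]], dict],
                      accept: Callable[[dict, list[Critique]], bool]) -> dict:
    """Iterate: the human proposes, the computer critiques, and the
    human decides when the solution is acceptable."""
    critiques: list[Critique] = []
    while True:
        solution = propose(critiques)          # human contribution
        critiques = system.critique(solution)  # computer contribution
        if accept(solution, critiques):        # human retains control
            return solution
```

A comparison-based evaluator would instead derive its own solution from the requirements and report the differences between the two.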

A Simple Example – Spell Checking

I would like to put forward what might appear to be a trivial example – a spell checker for a word processor. I think it is a good example because (1) everyone has had exposure to a spell checker and (2) it is not a problem that has been solved by artificial intelligence: human-computer collaboration is still required to correct spelling. Below is a discussion of the example using the key points from above (a toy code sketch follows the list):

  • The human contributes:
    • The problem solving goal or requirements: The goal of correct spelling is an assumed goal, built into the word processor.
    • Their domain expertise: The user brings their language education and experience, along with innate language capabilities.
  • The Computer contributes:
    • A model of the user:
      • the specific language, e.g. Australian English, possibly profession-specific dictionaries, a personal dictionary of words added by the user, etc.
      • Smartphone applications read your historical emails, personal notes, etc. to build a model of the words you regularly use. These systems attempt to guess the next word you might type, and they are often right, but most of the time we still need to select the right word from a group of three or more.
    • Domain knowledge: dictionaries, grammatical rules, etc.
  • Problem Solving & Proposed Solution:
    • The human simply proposes their spelling for each word in the word processor.
  • Critiquing and Critique:
    • The spell checker analyses the text using the dictionary and grammatical rules.
    • The computer flags possible incorrect spellings by underlining the word and proposes correct spellings when the word is right-clicked.
    • Note that the spell checker often doesn’t provide only one correct option, and in many cases the computer is unable to automatically correct the spelling. Human language can be extremely complex and difficult to disambiguate, e.g. “Will Will will the will to Will” (my spell checker is suggesting I delete the second and third “will” right now).
  • Often the computer will offer the correct spelling of the word we intended in the first iteration, or focus our attention enough that we remember the correct spelling. But sometimes we are too far off and must make a second attempt at the word before the computer can guess the word we want. This is the second iteration of the critiquing cycle.
  • Finally, the user decides when the spelling in the document is satisfactory, and often this involves ignoring several of the computer’s critiques.
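
As a toy illustration of the above, the sketch below critiques spelling against a small in-memory dictionary using Python’s difflib. The word lists are invented; a real checker would add grammatical rules and a much richer user model.

```python
import difflib

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}
PERSONAL_WORDS = {"researchanalysis"}  # user model: words the user has added


def critique_spelling(text: str) -> dict[str, list[str]]:
    """Map each unrecognised word to a ranked list of suggestions.
    The human makes the final choice (or ignores the critique)."""
    known = DICTIONARY | PERSONAL_WORDS
    critiques = {}
    for word in text.lower().split():
        if word not in known:
            # Several candidates may be offered, not a single "answer".
            critiques[word] = difflib.get_close_matches(word, DICTIONARY, n=3)
    return critiques


print(critique_spelling("the quik brown fxo"))
# -> {'quik': ['quick'], 'fxo': ['fox']}
```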

Key Advantages of Critic Systems

Fischer covers some aspects of cooperative problem solving that are of special interest (Fischer 1991, p. 125); below is a brief summary:

  • Breakdowns in cooperative problem-solving systems are not as detrimental as in expert systems: collaborative systems are able to deal with misunderstandings, unexpected problems and changes of the human’s goals.
  • Background assumptions do not need to be fully articulated: collaborative systems are especially well suited to ill-structured and poorly defined problems. Humans can know the larger context, learn during the problem solving process and decide when to expand the search space.
  • Semiformal system architectures are appropriate: The computer system doesn’t need to be able to interpret all of the information that it has available and can rely on the human.
  • Delegation problem: Automating a task requires complete specification. Cooperative systems allow for incremental refinement and evolution of understanding of the task.
  • Humans enjoy “doing” and “deciding”: humans often enjoy the process and not just the product; they want to take an active part.

Thoughts on application to Research Analysis

At the time of writing, the Research Analysis application is a knowledge base with knowledge capture and search tools. We use the application in a process similar to the critiquing approach, but this is a manual process at present. Many researchers use a collection of databases, software tools and peer feedback in a manual process similar to the critiquing approach to solve hard problems in medical research. The challenge is to bring these manual processes together into a system that substantially increases the efficiency of the researcher without too high a barrier to adoption. Ideally the system could be applied to many niche research areas by updating the domain knowledge, without needing to change the core software platform.

Research Analysis currently provides very basic critiquing-type functions. The researcher can enter a research claim using our semiformal language and the system will provide a list of matching claims that have been made by other researchers, with quotations and citations. By relaxing the specification of the claim (e.g. any association, rather than a positive correlation), the system will provide a list of similar claims. The similar claims may include claims that contradict or question the significance of the researcher’s claim. A sketch of this kind of relaxed matching is shown below.
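
The sketch below illustrates exact versus relaxed claim matching. The claim schema (subject, relation, object) and the relation names are our simplification for illustration, not the actual Research Analysis semiformal language, and the claims themselves are placeholders.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    subject: str   # e.g. a physiological entity
    relation: str  # e.g. "positive_correlation"
    object: str    # e.g. an outcome or second entity


# Relations that count as "any association" for a relaxed search.
ASSOCIATIONS = {"positive_correlation", "negative_correlation", "association"}


def matching_claims(query: Claim, knowledge_base: list[Claim],
                    relaxed: bool = False) -> list[Claim]:
    """Exact matches, or (relaxed) any claim asserting some association
    between the same subject and object."""
    hits = []
    for claim in knowledge_base:
        if claim.subject != query.subject or claim.object != query.object:
            continue
        if claim.relation == query.relation:
            hits.append(claim)
        elif relaxed and claim.relation in ASSOCIATIONS:
            hits.append(claim)  # may support, contradict or qualify the query
    return hits


# Hypothetical claims for illustration only.
kb = [Claim("compound X", "negative_correlation", "marker Y")]
query = Claim("compound X", "positive_correlation", "marker Y")
print(matching_claims(query, kb))                # [] - no exact match
print(matching_claims(query, kb, relaxed=True))  # the contradicting claim
```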

Future functionality for Research Analysis that fits the critiquing concept:

  • We are currently working on the capability of extracting claims directly from the abstracts or natural language of the researcher. This would make the system easier to use and would also increase the ability of the system to automatically populate the knowledge base.
  • The critiquing approach suggests that we could use an individual researcher’s historical publications to build a claim database providing a model of the researcher’s knowledge and beliefs. This model could be used in the critiquing process to provide critiques that are in line with the researcher’s beliefs. This could be extended to include the researcher’s publication database, and further again by tracking the researcher’s position on each publication, e.g. agree, disagree or neutral.
  • Currently the system requires the user to manually vary the search fields to identify related claims. We are working on a reporting-type function that would provide the user with a list of claims related to their claim, including supporting claims, conflicting claims and similar claims. Claims at different levels of the physiological hierarchy (e.g. artery to cardiovascular system), at different physiological locations (e.g. artery to liver) and in related species (e.g. mouse to mammal) could also be compared. The tool will use medical-term synonyms and statement analysis to identify claims from other related fields that make similar claims but with different terminology (see the sketch after this list).
  • In the beautiful future, we plan to use a combination of the user’s input, the system’s domain knowledge and problem-solving tools to present new hypotheses to the researcher through recombination of the existing knowledge base of claims. Generating medical hypotheses is a classic example of an ill-structured and poorly defined problem space. Important new hypotheses will often require the breaking of accepted rules and the rejection of historically accepted scientific claims. We believe that a platform like Research Analysis could be valuable in the systematic proposal of new hypotheses based on analysis of historical claims. While we hope these suggestions will be valuable, we have no doubt that human researchers will be critical in assessing these hypotheses using their broad domain knowledge and worldly common sense.
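
As a sketch of the hierarchy and synonym comparison mentioned above, the example below expands a query term into the set of terms a claim should be compared against. The data tables are invented for illustration; a real implementation would draw on medical ontologies and synonym databases.

```python
# Hypothetical physiological hierarchy: term -> parent term.
HIERARCHY = {
    "artery": "cardiovascular system",
    "vein": "cardiovascular system",
}

# Hypothetical synonym table spanning different fields' terminology.
SYNONYMS = {
    "myocardial infarction": {"heart attack"},
    "heart attack": {"myocardial infarction"},
}


def expand_term(term: str) -> set[str]:
    """All terms a claim about `term` should be compared against:
    the term itself, its synonyms, and its ancestors in the hierarchy."""
    related = {term} | SYNONYMS.get(term, set())
    parent = HIERARCHY.get(term)
    while parent is not None:  # walk up the physiological hierarchy
        related.add(parent)
        parent = HIERARCHY.get(parent)
    return related


print(expand_term("artery"))  # {'artery', 'cardiovascular system'}
```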

Further Reading

  • Terveen provides a good overview of human-computer collaboration (Terveen 1995) and, in particular, a summary of critic systems based on the presentations at a symposium on the topic.
  • Miller provides an overview of expert critiquing systems in the field of practice-based medical consultation at the time (Miller 1986).

References

  1. Fischer, Gerhard, et al. “The role of critiquing in cooperative problem solving.” ACM Transactions on Information Systems (TOIS) 9.2 (1991): 123-151.
  2. Tianfield, Huaglory, and Ruwen Wang. “Critic Systems – Towards Human-Computer Collaborative Problem Solving.” Artificial Intelligence Review 22.4 (2004): 271-295.
  3. Miller, Perry L. “Expert critiquing systems.” Expert Critiquing Systems. Springer New York, 1986. 1-20.
  4. Terveen, Loren G. “Overview of human-computer collaboration.” Knowledge-Based Systems 8.2 (1995): 67-81.

Optimal Human-Computer Problem Solving (HCPS) for medical science

We have been exploring the use of computers to solve hard problems in medicine for several years as part of our Research Analysis project (www.researchanalysis.com). Research Analysis was born from our need for a way to keep track of the thousands of scientific claims we extracted while reviewing over a thousand papers in the cardiovascular field. During our literature review there would be moments of insight that generated new hypotheses. These insights were generated by the connection of concepts in the current article with other concepts stored in our long-term memory. When documenting these ideas, we would search back through the earlier literature to confirm and cite the supporting concepts. Two problems came up again and again:

  1. It could take hours or even days of elapsed time to pinpoint the specific concept in our library of earlier articles.
  2. When we found the concept in the earlier article, it was often not quite as we remembered. We had linked the concept in the current article by analogy to the earlier article, but the analogy did not fit the facts accurately when we revisited them.


The goal of Research Analysis is to avoid these problems through the use of a standardised language for capturing claims, references to specific supporting sentences within articles, and computer tools for capturing and recalling claims. Our ongoing work on Research Analysis led us to begin thinking and reading about a higher-level question: what is the optimal way for humans and computers to work together to solve hard problems? We are of course not the first to ponder this question, but we were surprised to find how little formal research has been conducted on it. Researching this question and applying the findings to medical research problems has become an important focus for us.

Chess is a domain where humans and computers have been working together for some time. An article by Garry Kasparov discusses the freestyle chess competitions run in 2005, where the competitors could compete as teams with other players and/or computers [1]. The surprising result was that the winner was “not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” We thought Garry’s basic formulation of the variables was a good thinking tool.

Garry’s article proposes the following (with some minor changes to the terms on our part):

weak human + computer + strong process > strong human + computer + weak process > strong computer > strong human

The above is not thorough or formal, and would be specific to the problem type (chess in this case), but we feel it provides a good overview and an example of our current view on how hard problems in medicine should be approached. Our very high-level view is:

optimal problem solving = human + computer + optimal process

There will of course be problem types where optimal problem solving is achieved by the computer alone, e.g. complex arithmetic, and problems where it is achieved by a human alone, e.g. interpreting human emotions in a physical environment. However, we feel that most hard problems in the sciences would benefit from the combined efforts of humans and computers, making the identification of the strongest process critical to solving hard problems.

Today in medical science, computers play an important role in research. However, they tend to play niche number-crunching roles such as statistical and genetic analysis. Many medical science laboratories confine computers to the control of instruments and a little statistical analysis at the end of the experimental process. Computers are rarely involved in the hypothesis generation and complex problem solving phases of the research process, and even less often in a systematic fashion. This could be represented as: strong human + little or no computer + little or no process. On the other hand, the field of Artificial Intelligence (AI) almost always seeks to develop software systems that are able to solve problems without human involvement. An impressive example of such an AI system is the Robot Scientist [2], which fully automates the research process, including hypothesis generation, for a basic biological application. AI programs in medical science could be represented as: little or no human + strong computer + little or no process (please note that when we use “process” in these formulas, we are specifically referring to the process for human-computer collaboration, not the scientific process or other processes).

While we have named our project Optimal Human-Computer Problem Solving, others in the field use the term machine in place of computer. The two terms have overlapping definitions and we are comfortable with both, but we have chosen computer because it is more closely related to software, and we believe software will be central to human-computer problem solving. The analogy of the machine is nevertheless a powerful one. The integration of humans and machines in the production lines of the early 20th century led to a dramatic increase in human productivity and wealth. The Ford production line is the classic example: Henry Ford claimed that any man off the street could be productive on the line with less than a couple of days’ training [3]. This is a great example of: human + machine + strong process. There were no fancy robots on the line; most of the steps involved basic tools and cutting and stamping machinery. It was the organisation of the machinery and humans into a very focused, efficient and consistent process that unlocked the productivity boost. Developing equivalently powerful processes for integrating humans and computers to solve hard problems in medical science is our goal.

Our continuing research and application development will focus on the following areas:

  1. Explore the strengths and weaknesses of both humans and computers: Understanding these will highlight the best opportunities for humans and computers to collaborate.
  2. Algorithmic or process driven approaches to problem solving: Understanding the few examples of algorithmic approaches to problem solving may accelerate the development of strong processes for human-computer collaboration.
  3. Knowledge capture, management and analysis: We will continue our work with Research Analysis as we feel that efficient knowledge capture and manipulation will be critical to optimal problem solving.


We haven’t cited all of the work that has inspired us to date; instead we will begin publishing brief articles here that discuss the key points we found interesting. We also hope to continue releasing tools we develop through Research Analysis and future platforms.

References

  1. Kasparov, Garry. “The Chess Master and the Computer.” The New York Review of Books, 11 February 2010. Web. 10 July 2016. (http://www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/)
  2. Sparkes, Andrew, et al. “Towards Robot Scientists for autonomous scientific discovery.” Automated Experimentation 2 (2010): 1.
  3. Ford, Henry, and Samuel Crowther. My Life and Work. Garden City, New York, 1922.