Methods of cognitive psychology
[Originally published as: Funke, J. (1996). Methods of cognitive psychology. In E. Erdfelder, R. Mausfeld, T. Meiser, & G. Rudinger (Eds.), Handbuch Quantitative Methoden (pp. 515-528). Weinheim: Psychologie Verlags Union. The current version has been updated and enriched with links.]
Cognitive psychology, as a branch of general psychology, deals with processes in which an individual acquires, uses or modifies memory content. Such processes have traditionally been treated under the keywords perception, learning, memory, thinking, attention and language. The connecting link is the term "information", which refers to environmental stimuli and memory contents that are received or retrieved, stored, processed and/or changed by the person during such processes. In more recent work, the term knowledge is used instead of information (Mandl & Spada, 1988; Strube & Wender, 1993) in order to emphasize the complex and more active character of memory contents. The subject of cognitive psychology is therefore cognitive processes that elude direct observation, as well as cognitive structures that can only be hypostatized. This raises the question of how meaningful, verifiable statements can be made about the properties of these internal structures and processes: "Developing a theory in cognitive psychology is much like developing a model for the working of the engine of a strange new vehicle by driving the vehicle, being unable to open it up to inspect the engine itself" (Anderson, 1980/1988). The methods for examining this subject are therefore of particular importance; the problems arise less with data analysis than with data collection. Two central dependent variables in cognitive-psychological experiments are "performance" and "speed". Performance can consist, for example, in correctly answering a certain number of items in an intelligence test; the faster someone achieves this, the higher the performance is usually weighted. The ratio of performance per unit of time is one possible indicator of cognitive (processing) capacity (Vernon, 1987).
From these remarks it emerges that the central problem of cognitive-psychological methods lies in gaining access to ongoing information processing. The more complex the cognitive processes under study, the more difficult it becomes to formulate plausible process theories on the one hand, and to devise empirical tests of precisely those hypostatized processes on the other. The result is a research situation in which hypothetical process models in the form of simple or complicated flow charts are proposed for many processes, while their testability remains far below the level of theoretical elaboration.
In the following sections, some of the most important procedures are presented as examples. Following a classification by R. J. Sternberg (1977), the various methods can be subdivided into those for fast processes (seconds and milliseconds) and those for slow processes (minutes). The former include the measurement of reaction and decision times as well as eye movement measurement; the latter include verbal data (self-assessment, questioning, knowledge diagnostics) from participants (abbreviated Pbn, from the German "Probanden"). There are also a number of (indirect) methods for recording cognitive structures and long-term cognitive processes. Computer simulation must also be mentioned as a methodological approach. It should not be confused with the recently emerging research paradigm of presenting computer-simulated scenarios ("complex problem solving"; overview: Frensch & Funke, 1995); the latter is not a separate method but a special kind of stimulus material. Multinomial modeling will be described as a methodological approach that, in its degree of formalization, stands between purely verbal theory formation on the one hand and computer simulation on the other. Finally, the neurosciences have recently opened up access to cognitive processes by making brain activity visible; these methods are briefly addressed as well.
A separate article in this manual deals with the usual methods of psychophysics in the field of perceptual psychology (see Irtel, in this volume). Information integration theory, which makes certain assumptions about how several stimuli are combined into a judgment (cognitive algebra; e.g. combining the height and width of a rectangle into an impression of area), mainly uses analysis-of-variance techniques (see Anderson, 1982) to check assumptions of addition, multiplication or averaging. We shall not go into this here either.
In addition to the above procedures for data collection, all "standard" methods of psychology (e.g. observation, cf. Roskam, in this volume; experiment, cf. Bredenkamp, in this volume) can be used within the framework of cognitive psychology. With a few exceptions (e.g. multinomial modeling, see below), the familiar methods of data evaluation likewise do not differ in any specific way from those of other disciplines of psychology.
Measurement of reaction and decision-making times
Assumptions about ongoing information processing can be tested by measuring reaction and decision times (reaction time and choice reaction time; summarized here under the generic term response time, RT). If there is only one possible answer, one speaks of a reaction; in the case of several possible alternatives, of a decision.
The subtraction method, first proposed by Donders (1868), measures RT on two tasks that differ by exactly one hypothetical processing step (a stage); the measured difference should correspond to the duration of the additional process. The methods presented by Ashby (1982) can be used to check whether the underlying assumption of the additivity of individual RT components is fulfilled. The additive-factors method, proposed by S. Sternberg (1969), does not measure the duration of individual stages but only tests the independence (additivity) of the cognitive factors (components) that determine the reaction or decision and that jointly make up the RT. If comparisons of memory contents are to be made and the RT for one such comparison is known, then, provided the model of additive factors is valid, the RT should increase linearly with the number of comparisons to be carried out (= manipulation of a factor). The advantage of Sternberg's method lies in the assumption that experimental manipulations (e.g. the size of the item set from which a target is to be identified) each affect separate processing stages; in contrast to the subtraction method, one is no longer forced to insert or omit postulated processing stages. However, problems arise in applying the model of additive factors (distributional requirements of the ANOVA, power problems when testing the null hypothesis of additivity, etc.; cf. for more details Townsend & Ashby, 1983, p. 364 ff.).
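The logic of the subtraction method can be illustrated with a small simulation. The stage durations and the normal noise below are invented for illustration; the point is only that the mean RT difference between two tasks that differ by one stage estimates that stage's duration:

```python
import random

random.seed(1)

def simulate_rt(n, stage_means):
    """Simulate n response times (ms) as sums of independent stage
    durations, each drawn from a normal distribution (illustrative only)."""
    return [sum(random.gauss(mu, 20) for mu in stage_means)
            for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical stage means (ms): task B contains one extra 100-ms
# comparison stage that task A lacks.
task_a = simulate_rt(1000, [150, 250])
task_b = simulate_rt(1000, [150, 100, 250])

extra_stage = mean(task_b) - mean(task_a)
print(f"estimated duration of the inserted stage: {extra_stage:.0f} ms")
```

With enough trials the estimate converges on the 100 ms that was built into the simulation; whether real stages add up in this way is exactly the assumption the Ashby (1982) tests address.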
RT measurements are preferably used to test memory models. Assumptions about certain forms of organization of semantic memory suggest that a decision such as "a robin is a bird" should be made more quickly than a judgment about the statement "a robin is an animal"; here, the RT differences correspond to the different "distance" between the concepts in the semantic network. Based on this assumption, RT measurement has also been used as a method for determining prototypicality: short RTs are expected for typical exemplars of a category, longer RTs for atypical exemplars. However, even with well-defined concepts such as "even-numbered", different RTs have been found for different exemplars of the category.
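The predicted RT ordering follows directly from the number of links separating concepts in a hierarchical network. A minimal sketch, using an invented three-node IS-A hierarchy, counts the links that a Collins-and-Quillian-style model assumes must be traversed:

```python
from collections import deque

# Toy IS-A hierarchy (invented links, for illustration only)
network = {
    "robin":  ["bird"],
    "bird":   ["animal"],
    "animal": [],
}

def distance(concept, category):
    """Number of IS-A links between a concept and a category (BFS);
    returns None if the category is not reachable."""
    seen, queue = {concept}, deque([(concept, 0)])
    while queue:
        node, d = queue.popleft()
        if node == category:
            return d
        for parent in network.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, d + 1))
    return None

print(distance("robin", "bird"))    # 1 link: predicts a fast decision
print(distance("robin", "animal"))  # 2 links: predicts a slower decision
```

The model's prediction is simply that RT increases with this link count; the prototypicality findings mentioned above show where that simple picture breaks down.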
The RT measurement method also plays an important role in connection with questions about the type of internal representation of spatial objects. Studies on mental rotation have shown (cf. Shepard & Metzler, 1971) that when judging whether two geometrical figures can be converted into one another, the decision time depends linearly on the angle of rotation between the two bodies.
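This linear relationship is typically quantified by fitting a straight line to the RT data. The sketch below uses invented values in the spirit of Shepard and Metzler's findings; the slope estimates the additional decision time per degree of rotation:

```python
# Invented decision times: RT grows linearly with the angular
# disparity between the two figures.
angles = [0, 40, 80, 120, 160]       # degrees of rotation
rts = [1.0, 1.9, 2.7, 3.6, 4.4]      # decision times in seconds (invented)

n = len(angles)
mean_x = sum(angles) / n
mean_y = sum(rts) / n

# Ordinary least-squares slope and intercept
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, rts))
         / sum((x - mean_x) ** 2 for x in angles))
intercept = mean_y - slope * mean_x

print(f"about {slope * 1000:.0f} ms of extra decision time per degree")
```

The fitted slope is interpreted as the speed of the internal rotation process, and the intercept as the time for encoding and response stages that do not depend on the angle.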
Reaction time experiments can also be used to answer questions about the seriality or parallelism of cognitive processes. Meyer (1994), for example, applied the logic of the parallel-serial test paradigm developed by Townsend and Ashby (1983) to the processing of items in a concentration test. The results of Meyer's experiment clearly speak in favor of parallel item processing (although the subjects assess this differently).
Eye movement measurement
In his overview of the connections between cognitive processes and eye movements, Winterhoff (1980) comes to the conclusion that the acquisition of eye movement data is not the "royal road" to cognitive processes, but a "possible and important path". The premise of this methodological approach is that the execution of macro-movements (saccades) in particular is based on conscious control processes, i.e. represents a cognitive selection of the visual field. Already at the end of the 19th century it was observed that the eyes do not move continuously when reading a text, but in jumps. It is therefore hardly surprising that, with the advent of suitable aids for registering eye movements, experimental reading research was the first to make use of this approach. The number of fixations per line, the duration of the fixations and the number of regressions to earlier text passages serve as dependent variables; the independent variable is, for example, the difficulty of the presented text. Other studies (Rayner, 1975) have examined the number of characters that can be taken in while reading. It turned out that about ten characters to the left and right of the fixation point can be processed, i.e. an average of three words per fixation. More recently, eye movement measurement has also been used increasingly in problem-solving research. This additional data source makes it possible to determine which information was taken in at which point in time (in the course of the problem-solving process) and allows alternative interpretations of experimentally generated effects to be checked.
Whereas early apparatus for the detection of saccades imposed considerable strain on the Pbn as well as on the examiner (fixation of the head, biting on a bite board; laborious manual evaluation of extensive data material), modern computer-controlled on-line procedures, based roughly on the principle of corneal reflection, are non-reactive measuring methods of high accuracy. (Non-reactivity means that the measuring process as such has no influence on the measurement.) For recording micro-saccades, the electro-oculogram (EOG) is useful, which records potential fluctuations between the retina and cornea with high temporal resolution. The technical aspects of this type of data collection will not be discussed further here (see Young & Sheena, 1975).
Pupillometry is also of interest in this context. Beatty (1982), for example, demonstrates in his studies how cognitive load is reflected in a reliably measurable, short-term enlargement of pupil diameter.
Self-assessment (verbal data, surveys, knowledge diagnostics)
As in many areas of psychology, methods based on information provided by the Pbn themselves are very popular in cognitive research. A well-known approach to internal processes is the method of thinking aloud, in which a Pb is asked to verbalize the thoughts that occur while working on a task. Variants of this technique already existed at the end of the 19th century: introspection (with its special variant "experimental self-observation") and retrospection belonged to the repertoire of the various schools of that time, between which there was by no means agreement on the methodology to be used. Bühler (1908), as a proponent of self-observation, and Wundt (1908), as its opponent, for example, conducted a lively debate, the modern variant of which can be found in the work of Nisbett and Wilson (1977) on the one hand and Ericsson and Simon (1980, 1984) on the other.
Nisbett and Wilson (1977) take the position that higher cognitive processes are not accessible to introspection and that reports by subjects about causes, effects and cause-effect relationships are given only on the basis of implicit causal models. In cases where these causal models happen to correspond to the facts, one would wrongly conclude that the underlying processes were accessible to the Pbn. The counter-arguments put forward by Ericsson and Simon (1980) attempt to defuse this view by specifying conditions under which verbalizations do not influence cognitive processes. Their conclusion, that verbal data can be heuristically fruitful but that the impairments associated with their recording suggest the use of other, less reactive methods, therefore applies only to processes in which Pbn are instructed to direct their attention to processes that they would normally not heed. However, the counter-arguments of Ericsson and Simon (1980) do not protect against the objection that self-reports reflect only the implicit theory of one's own cognitive processes, not the processes themselves (see Gopnik, 1993; for a defense of first-person experience, see Searle, 1992).
One method for diagnosing structural characteristics of knowledge is the "object sorting test" (OST), in which a Pb is asked to sort pre-given or self-compiled concepts from a domain of reality into (possibly overlapping) groups according to freely selectable criteria. Indicators of "cognitive complexity" can be derived from the grouping data (cf. Hussy, 1977). Since only structural features are mapped, this method is of limited value for answering the question of how a subject area is individually represented. With the "Heidelberg structure-laying technique" (SLT), Scheele and Groeben (1984) developed a method in which the structure of a subject area is "reconstructed" in a dialogue between the Pb and the examiner (the authors speak of "subjective theories"). Because of the close interaction between Pb and examiner that the procedure prescribes, and because of the associated methodology of "communicative validation", this instrument does not provide satisfactory data either, especially since no quantification of particular aspects of a subjective theory is intended.
Knowledge diagnostics has proven important in the context of expert systems (cf. Hayes-Roth, Waterman & Lenat, 1983). Expert systems are attempts to transfer the structural and rule knowledge of experts to computer systems. In order to carry out this transfer, "knowledge engineers" try to map this knowledge as faithfully as possible. The procedures used for knowledge elicitation are still under development, but it is foreseeable that expert knowledge will have to be accessed in several ways (direct questioning, observation of actions, collection of example cases, etc.). From an application perspective, this area will surely be of great importance in the near future, since fifth-generation computers are supposed to be characterized by their stock of, and handling of, knowledge ("intelligent robots"). With regard to the efficiency of previous systems, however, skepticism is appropriate (cf. Streitberg, 1988).
A relatively new approach to knowledge diagnostics is the formalization of knowledge spaces (see Falmagne, Koppen, Villano, Doignon & Johannesen, 1990). Such a procedure describes the structure of questions and answers in a knowledge domain and thus represents a theory of possible knowledge states. A problem here is that the definition of the order structure appears rather arbitrary. Compared to psychometric approaches, which make the performance on certain questions dependent on postulated traits, this approach, with its reference to properties of the knowledge space, is an interesting knowledge-diagnostic alternative.
A methodology commonly used in the psychology of thinking for the identification of cognitive processes consists in the use of problem types that must be worked on sequentially: the solution is reached not after a single step (as with insight problems from the gestalt-psychological tradition) but only after a series of intermediate steps. A typical example of such a transformation problem is the "Tower of Hanoi". The Pb has three pins in front of them, with a certain number of discs of different diameters stacked on one of these pins. The task is to move the given arrangement of discs from the given pin to another pin while observing the following rules: (1) only one disc may be moved at a time, (2) the diameters of stacked discs must decrease towards the top, (3) as few moves as possible should be made. This type of task enforces sequential processing, from which information about the strategies used can be obtained (e.g. Sydow, 1970).
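The task's optimal solution path, against which a Pb's move protocol can be scored, follows from a simple recursion. A minimal sketch (the pin labels are arbitrary):

```python
def hanoi(n, source, target, spare, moves=None):
    """Minimal-move solution for the Tower of Hanoi: move n discs from
    source to target, using spare as the auxiliary pin."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the largest disc
        hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

moves = hanoi(3, "A", "C", "B")
print(len(moves), "moves:", moves)  # the optimum is 2**n - 1 = 7 moves
```

Comparing a Pb's actual move sequence against this optimal path (number of surplus moves, points of deviation) is one simple way to characterize the strategy used.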
Another method is to have the Pbn predict an initially unknown sequence of symbols and to examine possible processes of information reduction that exploit the redundancy of the symbol string (cf. Shannon, 1951). Here one can determine the effects of different amounts of information content on forecasting performance and thus illuminate capacity aspects of information processing. The normative model of information theory also provides a frame of reference against which human information reduction can be measured (see Hussy, 1975, for an example of this procedure).
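The redundancy of such a symbol string can be quantified with Shannon's entropy measure. The sketch below computes the zeroth-order entropy, which ignores sequential dependencies (Shannon's prediction method additionally exploits those); the example strings are invented:

```python
import math
from collections import Counter

def entropy(sequence):
    """Zeroth-order Shannon entropy (bits per symbol) of a sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy("AAAAAAAA"))  # fully redundant: 0.0 bits per symbol
print(entropy("ABABABAB"))  # two equiprobable symbols: 1.0 bit
print(entropy("ABCDABCD"))  # four equiprobable symbols: 2.0 bits
```

Sequences of low entropy are highly redundant and should therefore be easier to predict; comparing human forecasting performance against this normative baseline reveals how much of the available redundancy is actually used.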
Computer simulation of cognitive processes
Since computers became available in psychological laboratories, the simulation of cognitive processes has been among the special methods of cognitive psychology. The basic procedure consists in reproducing, as exactly as possible, the cognitive processes of a human individual working on a given problem with the help of a computer program ("cognitive modeling"). As Ueckert (1983) states, this is an attempt to make theoretical assumptions about the course of internal processes transparent by creating a simulation program. Such assumptions can, for example, be formulated in the form of production systems: rules whose condition part specifies conditions which, if fulfilled, trigger the actions named in the action part. Since the actions create new conditions, the system of rules can unfold in a process-like manner (cf. Anderson, 1993).
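The mechanics of such a production system can be sketched in a few lines. The rule contents below are invented toy material; the point is the cycle of matching condition parts against working memory and firing action parts, which in turn enable further rules:

```python
# Minimal forward-chaining production system: each rule is a pair
# (condition set, action set). A rule fires when its conditions are all
# in working memory and its actions are not yet all present.
rules = [
    ({"robin"},  {"bird"}),
    ({"bird"},   {"animal", "can_fly"}),
    ({"animal"}, {"living_thing"}),
]

def run(working_memory, rules):
    """Fire matching rules until working memory no longer changes."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition <= working_memory and not action <= working_memory:
                working_memory |= action  # fire: add the action's elements
                changed = True
    return working_memory

print(run({"robin"}, rules))  # the facts derivable from "robin"
```

Real architectures such as ACT-R add conflict resolution, variable binding and learning mechanisms on top of this basic match-fire cycle, but the process-like unfolding described above is already visible here.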
Gregg and Simon (1967) sought to demonstrate the following advantages of this methodology using the example of a simulation program for concept acquisition: (1) protection against inconsistencies, (2) clarification of otherwise implicit assumptions, (3) avoidance of overly flexible theories that fit every data set, (4) elimination of unverifiable theories, and (5) the need to specify encoding and representation. The initial euphoria about this method has now given way to sober skepticism. Neches (1982), referring to the arguments of Gregg and Simon (1967) just mentioned, draws attention to six problems: (1) the formal description of a model does not guarantee its comprehensibility (at least not until all details are given), (2) simplifying assumptions often have to be made to implement a program, (3) programs are often tailored to selected example cases, (4) the databases for the programs already contain structures that are not necessarily psychologically meaningful, (5) supplied subroutines contain non-numeric parameters and enable a program to handle arbitrary data, (6) programmers withhold data or procedures that cause difficulties for the program.
According to Neches (1982), the assumption that the computer simulation method would lead to clearer psychological models has not been confirmed. The importance of an executable program is also controversial, as a number of different models can deliver empirically identical results: "A computer simulation does not necessarily guarantee that a theory is more consistent or comprehensible. Nor does a program's successful performance guarantee that the theory is generalizable, or even that the causes for the success are those predicted by the theory." (Neches, 1982, p. 89).
In the field of artificial intelligence (AI), the computer metaphor (the image of the human being as an information-processing machine) is, of course, taken for granted (cf. Johnson-Laird, 1988). Charniak and McDermott (1985, p. 6) reduce the central assumption of AI research to the following formula: "What the brain does may be thought of at some level as a kind of computation." In contrast to computer simulation research, there is little concern here with whether other programs would also deliver similar or even identical results. What matters is that the simulation produces a performance analogous to the human ability. Designing a chess program from the point of view of AI research thus means achieving a performance comparable to that of a grandmaster. Which aids are used to this end (e.g. with regard to the available memory) is of less interest; in particular, it does not matter if resources or strategies are used that are certainly not found in humans. Nevertheless, some expect that this research strategy will also yield fruitful findings for cognitive psychology (but cf. Sharkey & Pfeifer, 1984).
While early computer simulations preferred production systems with clearly defined rule sets (cf. Anderson, 1993), connectionist modeling came to the fore in the 1980s (Hintzman, 1990; McClelland, 1988). Despite impressive performance, these models have been criticized for their lack of transparency, but also on grounds of principle (cf. Fodor & Pylyshyn, 1988; Levelt, 1991; McCloskey, 1991).
Multinomial modeling
Riefer and Batchelder (1988) present the method of multinomial modeling as a link between strict theoretical assumptions on the one hand (such as the cognitive architecture ACT-R of Anderson, 1993) and more empirically oriented data evaluations on the other (such as analysis of variance).
A core assumption of multinomial modeling is that a discrete, finite set of cognitive states can be assigned to specific observable behaviors. While the assignment of behaviors to cognitive processes or states is unambiguous, the converse assignment of cognitive states to behaviors is usually not, because most behaviors can arise from different cognitive processes. For example, a correct answer in a test can come about through knowledge (state 1) or through guessing (state 2). The parameters of an assumed model must then be estimated from the behavioral classes that are easy to determine empirically. Such models are often represented as "multinomial processing trees", in which the various possible combinations of the parameters are linked with the empirically observable behavior classes. The parameters contained in the models are estimated with maximum-likelihood methods and checked against the data. In addition to goodness-of-fit tests, model assumptions are also checked through experiments whose manipulations are aimed specifically at individual parameters of the model. If the outcomes conform to the hypotheses, such results are taken as confirmation of the model assumptions.
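The knowledge-or-guessing example corresponds to a minimal processing tree with two branches, for which the maximum-likelihood estimate has a closed form. A sketch with invented frequencies (real multinomial models involve more parameters and are estimated numerically):

```python
# Minimal one-high-threshold tree for the knowledge-or-guessing example:
# a correct answer arises through knowledge (probability k) or, failing
# that, through guessing among m alternatives. All numbers are invented.
m = 4                          # answer alternatives per item
g = 1 / m                      # success probability when guessing
n_correct, n_total = 130, 200  # hypothetical observed frequencies

p_correct = n_correct / n_total
# From P(correct) = k + (1 - k) * g, the maximum-likelihood estimate of
# the knowledge parameter k is obtained in closed form:
k_hat = (p_correct - g) / (1 - g)
print(f"estimated knowledge parameter k = {k_hat:.2f}")
```

The observed proportion correct (0.65 here) thus decomposes into a knowledge component and a guessing component; the targeted experimental manipulations mentioned above would, for example, be expected to move k while leaving g fixed.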
One of the strengths of this approach is the separation of different cognitive processes, for example by separating the contributions of storage and retrieval processes in memory recall (cf. Batchelder & Riefer, 1986). A problem is the required independence of the individual responses of a subject over several successive trials. In addition, with large samples even small model deviations may lead to premature rejection in the test of model validity. According to Riefer and Batchelder (1988, p. 325), in cases where a model fits the data descriptively but fails the model validity test, one should continue working with the failed model. Moreover, the "trees" suggest a sequential process that need not match the actual sequence of cognitive processes: the model validity test is sensitive only to the probability of a particular combination of cognitive states, not to the probability of a particular sequence of these states. Parameter estimation at the individual level is also not found in the published models. Despite these critical remarks, the method has so far proven to be a flexible instrument in the cognitive-psychological arsenal of methods (see Bredenkamp & Erdfelder, in press).
Dissociation techniques
Dissociation studies have been proposed and successfully used as a research method, especially in the neuropsychological context (McCarthy & Warrington, 1990). Here the relationship between performance on two different cognitive tasks is compared. If, for example, one finds that a person can control a complex system well but has little verbalizable knowledge of it (as in Berry & Broadbent, 1984), one speaks of a dissociation between the two performances (overview in Hintzman, 1990). If there are not only Pbn who can do X but not Y, but also Pbn who can do Y but not X, one speaks of a double dissociation. In neuropsychological studies in particular, double dissociations are taken to indicate that the performances X and Y draw on different cognitive processes.
The procedure of process dissociation was proposed in the more recent memory literature by Jacoby (1991) in order to overcome problems in comparing direct and indirect memory measures (cf. Richardson-Klavehn & Bjork, 1988). In this approach, automatic and controlled components of memory performance are measured by comparing opposing subtasks on one and the same item: Pbn are asked, for example, to complete a word stem with a word presented in a previous list or, if they do not remember one, with the first word that comes to mind (inclusion task); alternatively, they are asked to complete the stem with a word that was not presented before (exclusion task). Under controlled, correct remembering, old words are produced in the inclusion task, whereas only new words are produced in the exclusion task. Under uncontrolled, automatic remembering, no difference in performance between the two tasks is to be expected. For the comparative analysis of conscious and unconscious components of memory performance, this procedure appears clearly superior to alternative approaches.
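The logic of this comparison can be written as two equations and solved for the two memory components. A sketch with invented observed proportions:

```python
# Jacoby's (1991) process-dissociation equations: inclusion performance
# combines controlled (R) and automatic (A) memory, exclusion performance
# reflects automatic memory that escapes control:
#   P(inclusion) = R + (1 - R) * A
#   P(exclusion) = (1 - R) * A
# Solving: R = P(inclusion) - P(exclusion), A = P(exclusion) / (1 - R).
# The observed proportions below are invented for illustration.
p_inclusion, p_exclusion = 0.70, 0.20

R = p_inclusion - p_exclusion   # controlled (recollective) component
A = p_exclusion / (1 - R)       # automatic component
print(f"controlled R = {R:.2f}, automatic A = {A:.2f}")
```

If remembering were purely automatic (R = 0), the two observed proportions would coincide, which is exactly the "no difference" prediction stated above.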
Imaging procedures
In the field of cognitive neuroscience, technical advances in imaging procedures have led to an enormous upswing in recent years. These procedures make anatomical and functional aspects of the healthy as well as the diseased brain visible with unprecedented clarity. In addition to the now almost classic computed tomography (CT; static X-ray representation of tissue structures), positron emission tomography (PET; dynamic blood-flow measurement based on the concentration of rapidly decaying radioisotopes that must be injected before the task is performed), magnetic resonance tomography (NMR, nuclear magnetic resonance; now called MRI, magnetic resonance imaging; almost risk-free measurement of oxygen consumption via changes of nuclear magnetic resonance in a high-frequency magnetic field) and multi-channel magnetoencephalography (MEG; recording of the magnetic fields produced by brain activity) provide extremely expensive, but also highly informative, access to brain processes. When mapping cognitive functions with PET, for example, the procedure is analogous to Donders' subtraction method: the cerebral blood flow measurable with PET is recorded before and during a specific cognitive activity; the difference reveals the brain areas that are particularly active during that activity.
Posner and Raichle (1994) see in these methods, which operate at ever higher spatial and temporal resolution, a flood of information for cognitive scientists that cannot yet be fully classified, but that will certainly advance our understanding of the material foundations of cognitive processes.
Further reading
An introduction to the basic ideas of production systems, using the example of his ACT-R architecture, together with software for creating one's own models, is given by Anderson (1993). Charniak and McDermott (1985) offer an overview of various fields of application of artificial intelligence (vision, language processing, search methods, inductive and deductive processes, action planning, expert systems). An overview of various methods of data acquisition in the field of knowledge psychology can be found in Kluwe (1988), who treats three areas: (1) externalization of currently used knowledge, (2) communication of existing knowledge, (3) activation of specific knowledge structures through special stimulus material. The volume edited by Puff (1982) contains fourteen articles on detailed problems of measurement methods in memory psychology (reproduction, recognition, image memory, semantic memory, text comprehension, knowledge use). Rumelhart and McClelland (1986) as well as McClelland and Rumelhart (1986, 1989) present the basic ideas of connectionist models and at the same time provide easy-to-use software with which one's own networks can be constructed. Townsend and Ashby (1983) develop fundamental ideas for the stochastic modeling of elementary processes in their book; testing options for parallel and serial processing models are also dealt with. A still worthwhile volume with ten articles on basic and applied problems of RT measurement comes from Welford (1980).
References
Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco, CA: Freeman. (German translation: Kognitive Psychologie. Eine Einführung. Heidelberg: Spektrum der Wissenschaft Verlagsgesellschaft, 1988).
Anderson, J. R. (1993). Rules of the mind. Hillsdale, NJ: Erlbaum.
Anderson, N.H. (1982). Methods of information integration theory. New York: Academic Press.
Ashby, F. G. (1982). Testing the assumptions of exponential additive reaction time models. Memory & Cognition, 10, 125-134.
Batchelder, W. H. & Riefer, D. M. (1986). The statistical analysis of a model for storage and retrieval processes in human memory. British Journal of Mathematical and Statistical Psychology, 39, 129-149.
Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91, 276-292.
Berry, D. C. & Broadbent, D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. Quarterly Journal of Experimental Psychology, 36A, 209-231.
Bredenkamp, J. & Erdfelder, E. (in press). Methods of memory psychology. In D. Albert & K.-H. Stapf (Eds.), Memory (= Encyclopedia of Psychology, Subject Area C, Series 2, Volume 4). Göttingen: Hogrefe.
Bühler, K. (1908). Answer to the objections raised by W. Wundt against the method of self-observation of experimentally generated experiences. Archiv für die gesamte Psychologie, 12, 93-122.
Charniak, E. & McDermott, D. (1985). Introduction to artificial intelligence. Reading: Addison-Wesley.
Donders, F. C. (1868). Over de snelheid van psychische processen. Onderzoekingen gedaan in het Physiologisch Laboratorium der Utrechtsche Hoogeschool, Tweede reeks, II, 92-120. (English reprint in Acta Psychologica, 30, 1969, 412-431).
Ericsson, K.A. & Simon, H.A. (1980). Verbal reports as data. Psychological Review, 87, 215-251.
Ericsson, K.A. & Simon, H.A. (1984). Verbal reports as data. London: MIT Press.
Falmagne, J.-C., Koppen, M., Villano, M., Doignon, J.-P. & Johannesen, L. (1990). Introduction to knowledge spaces: How to build, test, and search them. Psychological Review, 97, 201-224.
Fodor, J. & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.
Frensch, P.A. & Funke, J. (Eds.) (1995). Complex problem solving: The European Perspective. Hillsdale, NJ: Erlbaum.
Gopnik, A. (1993). How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 1-14.
Gregg, L. W. & Simon, H. A. (1967). Process models and stochastic theories of simple concept formation. Journal of Mathematical Psychology, 4, 246-276.
Hayes-Roth, F., Waterman, D.A. & Lenat, D. B. (Eds.) (1983). Building expert systems. Reading: Addison-Wesley.
Hintzman, D.L. (1990). Human learning and memory: Connections and dissociations. Annual Review of Psychology, 41, 109-139.
Hussy, W. (1975). Information processing and human sequential predictive behavior. Acta Psychologica, 39, 351-367.
Hussy, W. (1977). A contribution to the operationalization and quantification of cognitive complexity. Archiv für Psychologie, 129, 288-301.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513-541.
Johnson-Laird, P.N. (1988). The computer and the mind. An introduction to cognitive science. Cambridge: Harvard University Press.
Kluwe, R. H. (1988). Methods of psychology for obtaining data on human knowledge. In H. Mandl & H. Spada (Eds.), Knowledge psychology (pp. 359-385). Munich: Psychologie Verlags Union.
Levelt, W. J. M. (1991). The connectionist fashion. Sprache & Kognition, 10, 61-72.
Mandl, H. & Spada, H. (Eds.) (1988). Knowledge psychology. Munich: Psychologie Verlags Union.
McCarthy, R.A. & Warrington, E.K. (1990). Cognitive neuropsychology. A clinical introduction. San Diego: Academic Press.
McClelland, J.L. (1988). Connectionist models and psychological evidence. Journal of Memory and Language, 27, 107-123.
McClelland, J. L. & Rumelhart, D. E. (Eds.) (1986). Parallel distributed processing. Explorations in the microstructure of cognition. Volume 2: Psychological and biological models. Cambridge: MIT Press.
McClelland, J. L. & Rumelhart, D. E. (1989). Explorations in parallel distributed processing. A handbook of models, programs, and exercises. Cambridge: MIT Press.
McCloskey, M. (1991). Networks and theories: The place of connectionism in cognitive science. Psychological Science, 2, 387-395.
Meyer, M. C. A. (1994). Serial versus parallel processing of visual information. An experimental study using the example of a concentration test. Bonn: Holos.
Neches, R. (1982). Simulation systems for cognitive psychology. Behavior Research Methods & Instrumentation, 14, 77-91.
Nisbett, R. E. & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.
Posner, M. I. & Raichle, M. E. (1994). Images of mind. New York: Freeman and Company.
Puff, C. R. (Ed.) (1982). Handbook of research methods in human memory and cognition. New York: Academic Press.
Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81.
Richardson-Klavehn, A. & Bjork, R. A. (1988). Measures of memory. Annual Review of Psychology, 39, 475-543.
Riefer, D. M. & Batchelder, W. H. (1988). Multinomial modeling and the measurement of cognitive processes. Psychological Review, 95, 318-339.
Rumelhart, D. E. & McClelland, J. L. (1986). Parallel distributed processing. Explorations in the microstructure of cognition. Volume 1: Foundations. Cambridge: MIT Press.
Scheele, B. & Groeben, N. (1984). The Heidelberg structure-laying technique (SLT). A dialogue-consensus method for the elicitation of subjective theories of medium range. Weinheim: Beltz.
Searle, J. R. (1992). The rediscovery of mind. Cambridge: MIT Press.
Shannon, C. E. (1951). Prediction and entropy of printed English. Bell System Technical Journal, 30, 50-64.
Sharkey, N. E. & Pfeifer, R. (1984). Uncomfortable bedfellows: Cognitive psychology and AI. In M. Yazdani & A. Narayanan (Eds.), Artificial intelligence: human effects (pp. 163-172). Chichester: Harwood.
Shepard, R. N. & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.
Sternberg, R.J. (1977). Intelligence, information processing and analogical reasoning: The componential analysis of human abilities. Hillsdale: Erlbaum.
Sternberg, S. (1969). The discovery of processing stages: Extensions of Donders' method. Acta Psychologica, 30, 276-315.
Streitberg, B. (1988). On the nonexistence of expert systems: Critical remarks on Artificial Intelligence in statistics. Statistical Software Newsletter, 14, 55-62.
Strube, G. & Wender, K.-F. (Eds.) (1993). The cognitive psychology of knowledge. Amsterdam: Elsevier.
Sydow, H. (1970). On the metric assessment of subjective problem states and their change in the thinking process I. Zeitschrift für Psychologie, 177, 145-198.
Townsend, J. T. & Ashby, F. G. (1983). The stochastic modeling of elementary psychological processes. Cambridge: Cambridge University Press.
Ueckert, H. (1983). Computer simulation. In J. Bredenkamp & H. Feger (Eds.), Hypothesis testing (= Encyclopedia of Psychology; Topic B, Series I, Volume 5, pp. 530-616). Göttingen: Hogrefe.
Vernon, P.A. (Ed.) (1987). Speed of information processing and intelligence. Norwood: Ablex.
Welford, A. T. (Ed.) (1980). Reaction times. New York: Academic Press.
Winterhoff, P. (1980). An overview of the connection between eye movements and linguistic-cognitive processes. Psychologische Rundschau, 31, 271-276.
Wundt, W. (1908). Critical gleanings on the questioning method. Archiv für die gesamte Psychologie, 12, 445-459.
Young, L. R. & Sheena, D. (1975). Survey of eye-movement recording methods. Behavior Research Methods & Instrumentation, 7, 397-429.