What is consciousness? We teach medical students that it is a state of awareness of self and environment that gives significance to stimuli from the internal and external environment. It rests on arousal/alertness and on the cognitive content of mental functions (sensation, emotion, and thought). But what about the so-called "minimally conscious state"? This can be a real challenge for a doctor. Although there are diagnostic criteria for this condition, demonstrating that a patient exhibits cognitively mediated behavior is complicated by the variety and complexity of possible behavioral responses. And how should we assess patients with aphasia, apraxia, or other impairments that could themselves underlie unresponsiveness?
Consider, for example, a patient with encephalitis who had previously been a player in a symphonic orchestra. He showed no signs of cognitively mediated behavior, but when listening to his favorite arias he made finger movements as if he were conducting the orchestra. However, how do we know that this was not myoclonus? (He presented myoclonic jerks due to his brain damage.)
There are too many unanswered questions in this direction…
One reason for this murky situation may be a prevailing confusion, first with regard to the precise characterization of patients' neurological conditions, which may refer to states quite different from one another (also in relation to prognosis), and, before that, with regard to the definition of the levels of consciousness in relation to different neurocognitive models.
Starting with Chalmers (1996), it has repeatedly been noticed in the psychological literature that the links between conscious experience (and its detailed properties at the different levels of consciousness) and the concepts newly introduced by the cognitive neurosciences are not accounted for in current models of brain and behaviour (Gray, 2004, p. 5). A recent attempt to disentangle the question by integrating nine neurocognitive views into the more general, already existing theoretical framework of the so-called "social/personality model", based on the distinction introduced by Duval and Wicklund (1972) between consciousness (attention focused outward, toward the environment) and self-awareness (attention focused inward, toward the self), could not avoid underlining that, although in consciousness states subjects interact with the environment without monitoring their activity (similarly to what happens in Block's notion of "phenomenal consciousness", 1995), "a minimal consciousness of self is required for the organism to move in, and interact with, the environment" (Morin, 2006, p. 359). This assertion implies that each motor response, which allows the organism-environment interaction to be assessed at a behavioural level, occurs in a state called "sensorimotor awareness" by Stuss et al. (2001), included in Morin's model at the Consciousness level together with the state named "minimal consciousness" by Zelazo (2004). If this is the case, one could wonder about the appropriateness of sensory stimulation programs for Vegetative State subjects in whom, despite the presence of a sleep-wakefulness cycle and some reflexes, it is impossible to detect goal-directed behavioral responses, thus confirming the correctness of the VS diagnosis according to a recent definition (Jennett, 2002).
A convincing answer to the problems you indicated may be found in the recent works published by Lancioni and colleagues (see, for example: Lancioni, G.E., Singh, N.N., O'Reilly, M., Sigafoos, J., De Tommaso, M., Megna, G., Bosco, A., Buonocunto, F., Sacco, V., & Chiapparino, C. (2009a). A learning assessment procedure to re-evaluate three persons with a diagnosis of post-coma vegetative state and pervasive motor impairment. Brain Injury, 23, 154-162), which is why I would like to intervene in this forum.
The learning procedure we use relies on microswitch technology to monitor a small response of the patient (e.g., small forehead skin movements) and to control the presentation of positive environmental stimuli contingent on the response. The aim is to determine the patient's ability to associate the response selected with the environmental stimuli, and thus to increase the frequency of such a response to obtain those stimuli. This increase (together with response declines in the absence of the stimuli and when the stimuli are non-contingent) may be considered a sign of discrimination between conditions and possibly a sign of learning (i.e., awareness of the links between responding and stimulation). These signs could also be considered compatible with a non-reflective state of basic consciousness (i.e., awareness of changes in the environment) and could be seen to represent an intermediate level between preconscious and conscious processing (see Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006).
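The contingency logic described above can be illustrated with a minimal simulation (this is a hypothetical sketch with made-up parameters, not the authors' actual protocol): responding is expected to increase when the monitored micro-response produces a preferred stimulus, and to stay near baseline when it does not.

```python
import random

random.seed(1)

def simulate(n_trials, contingent, p0=0.1, lr=0.05):
    """Operant-learning sketch: the probability of emitting the monitored
    response (e.g., a small forehead movement) rises each time it is
    followed by a preferred stimulus (contingent condition), and stays
    at baseline when responding produces nothing."""
    p = p0
    count = 0
    for _ in range(n_trials):
        if random.random() < p:       # micro-response detected this trial
            count += 1
            if contingent:
                p = min(0.9, p + lr * (1 - p))   # reinforcement strengthens responding
            else:
                p = max(p0, p - lr * p)          # no contingency: no strengthening
    return count

# Comparing response counts across conditions is the assessment:
# a clearly higher count under contingent stimulation is the
# candidate sign of discrimination and learning.
print(simulate(300, contingent=True))
print(simulate(300, contingent=False))
```

The diagnostic inference rests on the contrast between conditions, not on the absolute count, which is why the procedure also includes non-contingent and extinction phases.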
Since my background is in computer science rather than psychology or medicine, I won't even try to speculate on human or animal consciousness. As for the "consciousness" of artificial systems, I would argue that a fitting definition is probably the set of external stimuli the system is able to register, together with the system's corresponding (implemented) reactions to these stimuli. This seems comparable to sensorimotor awareness, with the important difference that an artificial system cannot create "new" responses: its reactions can only be to predefined conditions and will always be the same (or chosen from a given range of options).
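That definition can be made concrete with a toy sketch (the stimulus and response names are invented for illustration): the system's entire "awareness" is a fixed table, and anything outside it is simply not registered.

```python
# A hypothetical artificial system whose "consciousness" is exactly
# the set of stimuli it registers plus its predefined reactions.
REACTIONS = {
    "light_on": "turn_toward_light",
    "loud_noise": "withdraw",
    "touch": "orient_to_touch",
}

def react(stimulus):
    # A stimulus outside the predefined set produces no reaction at all:
    # the system cannot invent a new response for it.
    return REACTIONS.get(stimulus, None)

print(react("light_on"))     # a predefined reaction
print(react("novel_smell"))  # not registered: None
```

The point of the sketch is the closed-world property: every behavior the system will ever show is already enumerated before any stimulus arrives.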
Thanks: interdisciplinary confrontation is the salt of cognitive science discussion.
Let me present a question related to your assertion that "an artificial system cannot create 'new' responses".
Do you think this is true in the case of neural nets too? If so, please explain. I know this is not the opinion of computer scientists such as I. Aleksander, or of psychologists interested in cognitive robotics like D. Parisi and R. Manzotti.
Well, I'm no expert on neural networks, but as far as I am aware, they still have to be trained to generate specific responses to different inputs. So a neural network might not always come up with the responses one expects, and might find different ways to select the appropriate output, but those responses will still lie within the range of trained possibilities, meaning that the system lacks the "creativity" to create completely new responses.
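One narrow, uncontroversial version of this point can be shown with a toy one-neuron "network" (the weights are invented, standing in for a trained model): however novel the input, the output is confined to a range fixed in advance by the architecture.

```python
import math

# A single sigmoid unit with fixed (pretend-trained) weights.  Novel
# input combinations may produce surprising values, but every response
# is squashed into the open interval (0, 1): the response space itself
# was predefined, not created on the fly.
W = [0.8, -1.2, 0.5]   # hypothetical trained weights
B = 0.1                # hypothetical trained bias

def respond(inputs):
    z = sum(w * x for w, x in zip(W, inputs)) + B
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid output, always in (0, 1)

for novel in ([10, -3, 7], [-50, 0, 0], [0.1, 0.2, 0.3]):
    y = respond(novel)
    assert 0.0 < y < 1.0   # never a qualitatively "new" kind of response
```

This only captures the bounded-output sense of the claim; whether richer architectures can produce genuinely "new" behavior is exactly the open question being debated here.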
A possible way to do research on consciousness in the future is to penetrate deeply into the psychological structure of phenomena such as thinking, memory, emotion, and behavior, by means of psychological tests, interviews, and brain scans, both in patients with cognitive disturbances and in normal humans, and then to collect and integrate it all into a theory of consciousness. I guess such theories will be revised all the time, with each revision letting us penetrate deeper and deeper into our understanding of consciousness. But of course there is a really long way to go.
This is The Topic to discuss!
I am another computer scientist, finishing my PhD in Medical Image Analysis, in particular diffusion imaging of the brain. My biggest interest is the brain, intelligence, consciousness, what it is, who we are, and all of that.
Do I have answers? Not really...
I would like to suggest another book, 'On Intelligence' by Jeff Hawkins. It's an engineering kind of view on the subject. Its idea of how the brain works is very interesting: not really how artificial intelligence normally frames it, with a behavioral approach (that is, if it talks like a duck, then it's a duck). I'm still reading it, but it's totally recommended.
Next one: Antonio Damasio's.
Integrated, system-level, external and reflective awareness.
This means, basically, that the system perceives the world and itself, continuously integrating this information into a multiscale, unified model of reality running in the head, and computing present and future value for the agent.
Where is consciousness?
In the brain, globally, but especially in sense systems, limbic system and prefrontal cortex.
What are the differences between human, animal, and artificial systems' consciousness?
None in theory; many in practice, owing to the extreme differences in body and competence between them.