Augmented cognition: Science fact or science fiction?

We live in a time in which we are overwhelmed by information from multiple sources, such as the internet, television, and radio. We are usually unable to give our undivided attention to any one source; instead we give ‘continuous partial attention’ to all of them by constantly flitting between them. The limitations of cognitive processes, particularly attention and working memory, place a ceiling on the brain’s capacity to process and store information. It is these processes that some researchers aim to enhance through augmented cognition, an emerging field that uses computational technology to improve human performance by working around the bottlenecks in processes such as attention and memory.

Whereas brain-computer interfaces enable people to control various aspects of their environment, the goal of augmented cognition is to determine people’s cognitive state in order to enhance it. Augmented cognition has many potential military applications, and its proponents promise that it will also greatly improve productivity in the workplace. Hence, the Defense Advanced Research Projects Agency (DARPA) is conducting research in the area, and corporations such as Microsoft are also showing interest and funding research. The research takes a three-pronged approach, whereby advances in cognitive and neural science are combined with information technologies from industry and academia to develop technologies for enhancing human cognitive capabilities.


Before information can be processed in working memory, it must first enter through the senses, the windows through which the brain perceives its environment, and be attended to. By attention, we mean focusing on a particular aspect of the environment while ignoring others. Psychologists have yet to improve on the definition of the term provided by William James in the late 19th century:

…attention is…the taking possession by the mind in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought…It implies withdrawal from some things in order to deal effectively with others.

Being tied closely to perception, attention forms the basis of all other cognitive processes, and is perhaps the most extensively studied of all of them. With augmented cognition, researchers hope to optimize the allocation of attention, and to integrate multiple information sources so that the data may be used more efficiently.

Attention-enhancing devices could potentially be very useful if applied to situations in which people are required to make quick decisions within a demanding work environment. If they were to prove beneficial, such devices would need to be programmed to deal with the uncertainty that is often involved in the decision-making process.

One device that is currently being developed is the CogPit, a “smart” cockpit for the fighter aircraft of the future. The CogPit uses an electroencephalogram (EEG) to take readings of the brain’s electrical activity while a pilot uses the conventional controls of the craft. It is a closed-loop simulation: brain activity is monitored continuously, and specific patterns of brain waves (those associated with stress, for example) trigger the system into action.

By filtering out irrelevant information, the CogPit system could reduce the pilot’s stress levels, enabling them to focus their attention on the most important information. It could provide assistance or, if necessary, take complete control of the aircraft if the pilot is under excessive stress. Currently, the CogPit system is fully equipped with flight instruments, including a radar warning receiver which detects surface-to-air missiles, and a “targeting pod” which can locate, track and destroy targets. At the moment, however, it exists only as a simulation, and has not yet been put to use in a real aircraft. Furthermore, it remains unclear how accurately electroencephalography and similar techniques can determine a person’s stress levels or emotional state.
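The closed-loop idea described above can be sketched in a few lines of Python. Everything here is illustrative: the stress index, the threshold, and the two interface modes are assumptions made up for the sketch, not details of the actual CogPit system.

```python
import random

STRESS_THRESHOLD = 0.7  # illustrative cut-off, not a real CogPit parameter

def read_stress_index():
    """Stand-in for a stress measure derived from EEG band power.
    A real system would compute this from electrode signals; here we
    simulate it with a random value between 0 and 1."""
    return random.uniform(0.0, 1.0)

def cockpit_loop(steps):
    """Simplified monitor-and-adapt loop: on each tick, read the
    pilot's stress level and decide how much information to display."""
    actions = []
    for _ in range(steps):
        stress = read_stress_index()
        if stress > STRESS_THRESHOLD:
            # High stress: suppress low-priority alerts so the pilot
            # can attend to the most important information.
            actions.append("filter_low_priority")
        else:
            actions.append("show_all")
    return actions

print(cockpit_loop(5))
```

The essential point is the feedback loop: the display adapts to the measured cognitive state rather than remaining fixed, which is what distinguishes this design from a conventional cockpit.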


The mechanisms by which the brain generates memories are still elusive, although studies of brain-damaged patients have led cognitive psychologists to develop a number of theoretical frameworks. Within these frameworks, memory is usually sub-divided into a number of distinct but interrelated processes. An influential model was developed in the 1960s by Atkinson and Shiffrin, who thought in terms of short-term memory and long-term memory (hereafter referred to as STM and LTM, respectively). According to this model, all information must first pass through STM before being transferred to LTM. STM can store a limited amount of information for a few seconds. Exactly how the transfer from STM to LTM takes place is unclear, but Atkinson and Shiffrin proposed that information needs to be “rehearsed” in order to be consolidated and stored in long-term memory. As far as we know, the capacity of the human brain to store long-term memories is effectively unlimited.
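Atkinson and Shiffrin’s two-store scheme can be illustrated with a toy simulation. The capacity of seven items and the number of rehearsals needed for consolidation are illustrative choices; the model itself does not specify a rehearsal count.

```python
from collections import deque

STM_CAPACITY = 7         # the classic "seven, plus or minus two" limit
REHEARSALS_TO_STORE = 3  # assumed number of rehearsals before transfer to LTM

class ModalMemory:
    """Toy version of the Atkinson-Shiffrin modal model: items enter a
    limited-capacity short-term store, and reach the long-term store
    only through repeated rehearsal."""

    def __init__(self):
        self.stm = deque(maxlen=STM_CAPACITY)  # oldest items are displaced
        self.rehearsals = {}
        self.ltm = set()

    def perceive(self, item):
        # New information always enters via the short-term store.
        self.stm.append(item)

    def rehearse(self, item):
        # Only items still held in STM can be rehearsed; enough
        # rehearsal consolidates an item into LTM.
        if item in self.stm:
            self.rehearsals[item] = self.rehearsals.get(item, 0) + 1
            if self.rehearsals[item] >= REHEARSALS_TO_STORE:
                self.ltm.add(item)

mem = ModalMemory()
for letter in "ABCDEFGHIJ":   # ten items overflow a seven-item store
    mem.perceive(letter)
print(len(mem.stm))           # 7: the first three letters were displaced
for _ in range(3):
    mem.rehearse("J")
print("J" in mem.ltm)         # True: rehearsed enough to be consolidated
```

Displacement from the short-term store without rehearsal, as happens to the first three letters here, corresponds to the everyday experience of forgetting something seconds after hearing it.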

Atkinson and Shiffrin’s “modal” model of memory is supported by studies of patients with various kinds of amnesia. One famous amnesic is a patient known as H. M., who was described by Brenda Milner. In order to alleviate H. M.’s severe epileptic symptoms, surgeons performed bilateral excisions on his medial temporal lobes. The procedure involved the removal of parts of the hippocampus, and had major consequences. As a result of the surgery, H. M. became profoundly amnesic: he was unable to store any new memories, although those memories formed prior to the surgery remained intact. H. M. had unimpaired STM but severely disrupted LTM.

H. M.’s condition is known as anterograde amnesia (the inability to encode new memories). Patients with retrograde amnesia show the opposite pattern: they have no difficulty encoding new memories, but cannot remember those memories encoded before the onset of their amnesia. Together, anterograde and retrograde amnesics provide compelling evidence that memory can indeed be subdivided into distinct stores.

Experiments have shown that STM can store around seven, plus or minus two, pieces of information. The amount of information held within this limit can be increased by a simple process called “chunking”. As an example, try remembering the order of this string of 15 letters: OACBNHLDACBLCNB. Most people have great difficulty in carrying out this memory task. If, however, the same 15 letters are remembered as a series of well-known acronyms (ABC, BBC, CNN, DHL, AOL), they are far easier to recall.
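The arithmetic behind the chunking example above can be checked directly: the 15 raw letters are an anagram of the five acronyms, so chunking reorganises the information without changing it.

```python
letters = "OACBNHLDACBLCNB"  # 15 items: well beyond the 7 +/- 2 limit

# The same letters regrouped as five familiar acronyms
chunks = ["ABC", "BBC", "CNN", "DHL", "AOL"]

print(len(letters))  # 15 separate items to hold in STM
print(len(chunks))   # 5 chunks: comfortably within capacity

# Sanity check: chunking rearranges the letters but loses nothing
assert sorted(letters) == sorted("".join(chunks))
```

The memory load drops from 15 arbitrary items to 5 meaningful ones, which is why the acronym version is so much easier to recall.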

In the 1970s, Baddeley and Hitch proposed the term ‘working memory’ to describe the temporary storage and manipulation of information during the performance of a task or the solving of a problem. Baddeley further suggested the term ‘articulatory loop’ for the rapid verbal repetition of the information that is to be remembered. In principle, the articulatory loop is very similar to the process of rehearsal which Atkinson and Shiffrin suggested is needed for memories to pass from STM into LTM. In fact, the relationship between short-term memory and working memory is unclear. Some researchers consider them to be the same, while others believe them to be distinct but related processes.

Although there has been recent work on drugs that enhance working memory, research into how augmented cognition might do so is virtually non-existent. So how do proponents of augmented cognition think it will enhance human memory capacity? It will, according to Dylan Schmorrow, director of DARPA’s AugCog program:

…circumvent fundamental human limitations by engineering work environments that will make it easier for people to encode, store and retrieve the information presented to them [and] develop interfaces that are context-sensitive by presenting material in relation to the context in which it is encountered. This will be accomplished by embedding information in distinctive, image-able, and multi-sensory contexts, so as to provide memory hooks that naturally engage the human mind.

Skeptics say that augmented cognition is no more than science fiction. As we have seen, memory and, to some extent, attention, are abstract concepts. There is no general consensus on a definition of either term, let alone on how they work. Herein lie the limitations of augmented cognition: it is based on theoretical models of cognitive processes, and it is, therefore, difficult to imagine how one could enhance processes that are not fully understood.

DARPA-funded researchers are well aware of the limitations. This is how one group concluded the paper they presented at the 36th Hawaii International Conference on System Sciences in 2002:

Although our pilot experiment suggests there may be an advantage of the augmented approach in a specific situation, there is much to be done before we are ready to design routinely attention-enhancing tools to optimize human attention allocation and to incorporate uncertainty in real-time decision-making.

Nevertheless, Schmorrow says that “there have been some profound advances in the last 6 months”. The mission of the AugCog program he directs is:

…to extend, by an order of magnitude or more, the information management capacity of the human-computer warfighting integral by developing and demonstrating quantifiable enhancements to human performance in diverse, stressful, operational environments…[and to] empower one human’s ability to successfully accomplish the functions currently carried out by three or more individuals.

Schmorrow has a vision of a symbiosis between man and machine, resulting ultimately in:

decision forecasting tools which exploit human inquisitiveness…monitoring of decision-maker paths through context-rich knowledge space…continuous, autonomous reconciliation of computer behaviors to human mental models and decision-making needs…[and] system interfaces which help people remember.

Schmorrow believes that exploitation of the “inexorable progress in digital computation and storage [combined with a greater] understanding of human brain function”, makes the realization of these goals inevitable in the near future. According to the agency’s website, all this will be accomplished within 5 years.


Here’s a short film entitled The Future of Augmented Cognition, commissioned by DARPA and directed by Alexander Singer, who is probably best known for making episodes of television series such as The Fugitive, Hill Street Blues, Star Trek: The Next Generation and Star Trek: Deep Space Nine. The film is set in the year 2030, and takes place in a command centre which monitors cyberspace activity for threats to the global economy; it is a depiction of DARPA’s vision of how augmented cognition will in the future be used to integrate multiple sources of information.



4 thoughts on “Augmented cognition: Science fact or science fiction?”

  1. The scientific fact
    SEMICONDUCTORS WITH BRAIN – Microelectronics meets molecular and neurobiology
    Peter Fromherz
    Max Planck Institute for Biochemistry
    Department of Membrane and Neurophysics
    Martinsried / München Germany
    Electron Devices Meeting, 2001. IEDM Technical Digest. International, 2001, pp. 16.1.1–16.1.4
    Digital Object Identifier 10.1109/IEDM.2001.979510
    Summary: The electrical interfacing of nerve cells and semiconductor microstructures is considered. The coupling of electron-conducting silicon with ion-conducting neurons relies on a close contact of the chip and the cell membrane with its ion channels. Excitation of neuronal activity is achieved by capacitive interaction with the channels, and recording by the response of transistors to open channels. Integrated neuroelectronic systems are obtained by outgrowth of a neuronal net on silicon and by two-way interfacing of the neuronal and electronic components.
    For more information visit:

  2. Interesting post! I’m quite interested in such things. I’m much more familiar with the far-out ideas of Ray Kurzweil and other so-called Singularitarian types. The technology described here, though, doesn’t seem too hard to envision. 5 years though – that’s not very long!
    After-thought: For whom is DARPA commissioning these videos? They don’t really need any approval from the public to get their funding.

Comments are closed.