With knowledge of how something works comes the power to control it. As we gain better understanding of the links between brain and behaviour, so we will be able to modify that behaviour in far more powerful ways than Pavlov and Skinner could have dreamed.
We are at the dawn of the neurotechnological age. In the last decade, huge amounts of data have been collected in the fields of neuroimaging, psychopharmacology and genomics. Drugs which enhance cognitive functions (‘cogniceuticals’) and brain-machine interfaces are being developed. Neurotechnology is at the same time exciting and perilous, and now is the time to begin serious discussion of the implications of these emerging technologies for society.
As these technologies become more sophisticated, they will have far-reaching socio-economic, legal and ethical implications. The elucidation by neuroscientists of the biological bases of phenomena such as cognition and emotion, once believed to be outside the realm of science, will raise philosophical questions about the nature of the ‘self’ and what it means to be human. Likewise, the reduction of human behaviour to neurobiological mechanisms will make us question ideas such as human freedom and responsibility.
Neuroethics recently came into the public eye when two U.S. start-up companies, No Lie MRI and Cephos, announced that they would soon be offering brain scans to determine whether or not a person is lying. The data obtained by neuroimaging techniques under experimental conditions are less valid and more ambiguous than those obtained for diagnostic purposes. Neuroscientists are therefore skeptical about the claim that scanning can distinguish between lies and truth, and bioethicists are concerned about who might use the technology, and for what reasons. At present, then, it would be unwise, to say the least, to allow such tests to be admissible as evidence in the courtroom.
As neuroimaging techniques are refined and the data obtained by them under experimental conditions become more accurate, they will eventually probe deeper into the neural mechanisms underlying individuality. Imaging may, for example, soon be used to obtain information about an individual’s attitudes, biases and preferences, susceptibility to neurological and psychiatric disorders, and traits such as intelligence. Who should have access to this information, and how might it be used? One application is neuromarketing, in which researchers use imaging to try to determine consumers’ preferences for products. Their findings could, in principle, be used to make consumers more susceptible to advertising. The possibility of obtaining such detailed information, which could easily be used to discriminate between people (e.g. for purposes of employment or life insurance), raises important questions about the privacy of individuals. Would it be ethical to try to determine racial or class differences in intelligence? There is a danger that such information could lead us back to eugenics.
Pharmacological means of altering brain function have been with us for decades. Antidepressants, nutritional supplements, caffeine and ProPlus are but a few examples of the substances which are used routinely. Steven Rose talks of the neurogenetic-industrial complex, in which significant proportions of the population are prescribed psychotropic drugs. The World Health Organization has raised concerns about what it calls an epidemic of depression and attention deficit hyperactivity disorder (ADHD), and about the prescription of drugs for their treatment.
Approximately 10% of American high-school pupils take Ritalin (methylphenidate) to treat ADHD, and a similar proportion of all women in the U.S. take Prozac (fluoxetine) for depression. Patients believed to be vulnerable to depression are given Prozac as a prophylactic, and the drug is also prescribed to women for pre-menstrual tension. The illicit use of Ritalin by university students to increase their attention spans in the hope of improving academic performance is also on the rise; some estimates suggest that 16% of college and university students use Ritalin to aid their attention and concentration, making it the most widely used recreational drug among university students.
Thus, mood- and cognition-enhancing drugs have been commonplace for some time. We live in a society which would rather administer a psychological straitjacket than address its inherent problems. One would like to think that the profit margins of the big pharmaceutical companies are not a factor in the role of psychotropic drugs in our society; unfortunately, they are. Another important factor is a change in the way psychiatrists diagnose conditions such as ADHD and depression.
There have in recent years been huge efforts by neuroscientists and pharmaceutical companies to develop drugs which boost memory and improve attention. Drugs which repress unwanted memories may be available in the near future, while pharmacogenetics promises to tailor drugs to the individual. To what extent should neurocognitive enhancement be made available to individuals? What will be the long-term side-effects of cognition-enhancing drugs that become widely available? Would some people be pressured into enhancing their cognitive abilities by taking such drugs? What about the effects on those individuals who choose not to enhance their cognitive functions, or cannot afford to do so? These are just some of the issues that have to be borne in mind if the use of these drugs becomes prevalent.
A quick look at the demographics of the Western world reveals ageing populations. In the coming decades there will be an explosion in the number of people reaching retirement age, and a concomitant increase in the prevalence of neurodegenerative diseases. This will only entrench the neurogenetic-industrial complex, making individuals more dependent upon drugs to maintain their normal state of health.
Transcranial magnetic stimulation (TMS) and brain-machine interfaces (BMIs) are two recently developed non-pharmacological methods of altering brain function. Although it is unlikely that they will be as widely used as pharmacological methods, these techniques have advanced rapidly in the last few years and also pose major ethical problems. The most widely used BMI is the cochlear implant, which converts sound waves into electrical impulses that are transmitted to the auditory nerve, and from which nearly 100,000 people have benefited. Recent advances in bionanotechnology will make two-way communicating BMIs available in the near future.
BMIs constitute a fusion of the organic and technological worlds. Neurotechnology is thus moving the cyborg from the realm of science fiction into the real world. Some argue that the integration of electronic devices with the brain represents the next stage in human evolution, a stage which is being consciously driven in a particular direction. Others say that such a process would mean that we are no longer human, but ‘posthuman’ or ‘transhuman’. Not many would argue that it is unethical to use neural implants or neuroprosthetic devices for the restoration of impaired functions. A combination of drugs, BMIs and advanced neuroprosthetics may eventually be used to reverse the effects of ageing on the human body, and to increase life expectancy.
To what extent would it remain ethical to do this? Is it ethical to provide BMIs for the enhancement of the abilities of healthy individuals? U.S. government agencies such as DARPA are funnelling large amounts of money into researching and developing BMIs, primarily for use by soldiers on the battlefield. Might there be a day when everyone enlisting into the army will be required to have a BMI fitted? And what about the safety and long-term effects of such devices?
Ultimately, it is society that will decide whether or not to embrace these new technologies. As pointed out in a recent Nature editorial, “the neuroscience community…has been relatively slow to recognize its own responsibility to address potential abuses of knowledge.” It is time that this community started seriously debating the implications of its work for society, and contributing to the neuroethics debate which has, thus far, been initiated primarily by lawyers and journalists. Otherwise, governments may impose immature and ill-considered legislation, as the Bush administration did in the case of stem cell research. It is clear that a workable, robust framework of neuroethical guidelines is needed. When will such a framework be developed?