
Secrets of the Creative Brain

A leading neuroscientist who has spent decades studying creativity shares her research on where genius comes from, whether it is dependent on high IQ, and why it is so often accompanied by mental illness.

As a psychiatrist and neuroscientist who studies creativity, I’ve had the pleasure of working with many gifted and high-profile subjects over the years, but Kurt Vonnegut—dear, funny, eccentric, lovable, tormented Kurt Vonnegut—will always be one of my favorites. Kurt was a faculty member at the Iowa Writers’ Workshop in the 1960s, and participated in the first big study I did as a member of the university’s psychiatry department. I was examining the anecdotal link between creativity and mental illness, and Kurt was an excellent case study.

He was intermittently depressed, but that was only the beginning. His mother had suffered from depression and committed suicide on Mother’s Day, when Kurt was 21 and home on military leave during World War II. His son, Mark, was originally diagnosed with schizophrenia but may actually have bipolar disorder. (Mark, who is a practicing physician, recounts his experiences in two books, The Eden Express and Just Like Someone Without Mental Illness Only More So, in which he reveals that many family members struggled with psychiatric problems. “My mother, my cousins, and my sisters weren’t doing so great,” he writes. “We had eating disorders, co-dependency, outstanding warrants, drug and alcohol problems, dating and employment problems, and other ‘issues.’ ”)

While mental illness clearly runs in the Vonnegut family, so, I found, does creativity. Kurt’s father was a gifted architect, and his older brother Bernard was a talented physical chemist and inventor who possessed 28 patents. Mark is a writer, and both of Kurt’s daughters are visual artists. Kurt’s work, of course, needs no introduction.

For many of my subjects from that first study—all writers associated with the Iowa Writers’ Workshop—mental illness and creativity went hand in hand. This link is not surprising. The archetype of the mad genius dates back to at least classical times, when Aristotle noted, “Those who have been eminent in philosophy, politics, poetry, and the arts have all had tendencies toward melancholia.” This pattern is a recurring theme in Shakespeare’s plays, such as when Theseus, in A Midsummer Night’s Dream, observes, “The lunatic, the lover, and the poet / Are of imagination all compact.” John Dryden made a similar point in a heroic couplet: “Great wits are sure to madness near allied, / And thin partitions do their bounds divide.”

Compared with many of history’s creative luminaries, Vonnegut, who died of natural causes, got off relatively easy. Among those who ended up losing their battles with mental illness through suicide are Virginia Woolf, Ernest Hemingway, Vincent van Gogh, John Berryman, Hart Crane, Mark Rothko, Diane Arbus, Anne Sexton, and Arshile Gorky.

My interest in this pattern is rooted in my dual identities as a scientist and a literary scholar. In an early parallel with Sylvia Plath, a writer I admired, I studied literature at Radcliffe and then went to Oxford on a Fulbright scholarship; she studied literature at Smith and attended Cambridge on a Fulbright. Then our paths diverged, and she joined the tragic list above. My curiosity about our different outcomes has shaped my career. I earned a doctorate in literature in 1963 and joined the faculty of the University of Iowa to teach Renaissance literature. At the time, I was the first woman the university’s English department had ever hired into a tenure-track position, and so I was careful to publish under the gender-neutral name of N. J. C. Andreasen.

Not long after this, a book I’d written about the poet John Donne was accepted for publication by Princeton University Press. Instead of feeling elated, I felt almost ashamed and self-indulgent. Who would this book help? What if I channeled the effort and energy I’d invested in it into a career that might save people’s lives? Within a month, I made the decision to become a research scientist, perhaps a medical doctor. I entered the University of Iowa’s medical school, in a class that included only five other women, and began working with patients suffering from schizophrenia and mood disorders. I was drawn to psychiatry because at its core is the most interesting and complex organ in the human body: the brain.

I have spent much of my career focusing on the neuroscience of mental illness, but in recent decades I’ve also focused on what we might call the science of genius, trying to discern what combination of elements tends to produce particularly creative brains. What, in short, is the essence of creativity? Over the course of my life, I’ve kept coming back to two more-specific questions: What differences in nature and nurture can explain why some people suffer from mental illness and some do not? And why are so many of the world’s most creative minds among the most afflicted? My latest study, for which I’ve been scanning the brains of some of today’s most illustrious scientists, mathematicians, artists, and writers, has come closer to answering this second question than any other research to date.

The first attempted examinations of the connection between genius and insanity were largely anecdotal. In his 1891 book, The Man of Genius, Cesare Lombroso, an Italian physician, provided a gossipy and expansive account of traits associated with genius—left-handedness, celibacy, stammering, precocity, and, of course, neurosis and psychosis—and he linked them to many creative individuals, including Jean-Jacques Rousseau, Sir Isaac Newton, Arthur Schopenhauer, Jonathan Swift, Charles Darwin, Lord Byron, Charles Baudelaire, and Robert Schumann. Lombroso speculated on various causes of lunacy and genius, ranging from heredity to urbanization to climate to the phases of the moon. He proposed a close association between genius and degeneracy and argued that both are hereditary.

Francis Galton, a cousin of Charles Darwin, took a much more rigorous approach to the topic. In his 1869 book, Hereditary Genius, Galton used careful documentation—including detailed family trees showing the more than 20 eminent musicians among the Bachs, the three eminent writers among the Brontës, and so on—to demonstrate that genius appears to have a strong genetic component. He was also the first to explore in depth the relative contributions of nature and nurture to the development of genius.

As research methodology improved over time, the idea that genius might be hereditary gained support. For his 1904 Study of British Genius, the English physician Havelock Ellis twice reviewed the 66 volumes of The Dictionary of National Biography. In his first review, he identified individuals whose entries were three pages or longer. In his second review, he eliminated those who “displayed no high intellectual ability” and added those who had shorter entries but showed evidence of “intellectual ability of high order.” His final list consisted of 1,030 individuals, only 55 of whom were women. Much like Lombroso, he examined how heredity, general health, social class, and other factors may have contributed to his subjects’ intellectual distinction. Although Ellis’s approach was resourceful, his sample was limited, in that the subjects were relatively famous but not necessarily highly creative. He found that 8.2 percent of his overall sample of 1,030 suffered from melancholy and 4.2 percent from insanity. Because he was relying on historical data provided by the authors of The Dictionary of National Biography rather than direct contact, his numbers likely underestimated the prevalence of mental illness in his sample.

A more empirical approach can be found in the early-20th-century work of Lewis M. Terman, a Stanford psychologist whose multivolume Genetic Studies of Genius is one of the most legendary studies in American psychology. He used a longitudinal design—meaning he studied his subjects repeatedly over time—which was novel then, and the project eventually became the longest-running longitudinal study in the world. Terman himself had been a gifted child, and his interest in the study of genius derived from personal experience. (Within six months of starting school, at age 5, Terman was advanced to third grade—which was not seen at the time as a good thing; the prevailing belief was that precocity was abnormal and would produce problems in adulthood.) Terman also hoped to improve the measurement of “genius” and test Lombroso’s suggestion that it was associated with degeneracy.

In 1916, as a member of the psychology department at Stanford, Terman developed America’s first IQ test, drawing from a version developed by the French psychologist Alfred Binet. This test, known as the Stanford-Binet Intelligence Scales, contributed to the development of the Army Alpha, an exam the American military used during World War I to screen recruits, evaluate them for work assignments, and determine whether they were worthy of officer status.

Terman eventually used the Stanford-Binet test to select high-IQ students for his longitudinal study, which began in 1921. His long-term goal was to recruit at least 1,000 students from grades three through eight who represented the smartest 1 percent of the urban California population in that age group. The subjects had to have an IQ greater than 135, as measured by the Stanford-Binet test. The recruitment process was intensive: students were first nominated by teachers, then given group tests, and finally subjected to individual Stanford-Binet tests. After various enrichments—adding some of the subjects’ siblings, for example—the final sample consisted of 856 boys and 672 girls. One finding that emerged quickly was that being the youngest student in a grade was an excellent predictor of having a high IQ. (This is worth bearing in mind today, when parents sometimes choose to hold back their children precisely so they will not be the youngest in their grades.)

These children were initially evaluated in all sorts of ways. Researchers took their early developmental histories, documented their play interests, administered medical examinations—including 37 different anthropometric measurements—and recorded how many books they’d read during the past two months, as well as the number of books available in their homes (the latter number ranged from zero to 6,000, with a mean of 328). These gifted children were then reevaluated at regular intervals throughout their lives.

“The Termites,” as Terman’s subjects have come to be known, have debunked some stereotypes and introduced new paradoxes. For example, they were generally physically superior to a comparison group—taller, healthier, more athletic. Myopia (no surprise) was the only physical deficit. They were also more socially mature and generally better adjusted. And these positive patterns persisted as the children grew into adulthood. They tended to have happy marriages and high salaries. So much for the concept of “early ripe and early rotten,” a common assumption when Terman was growing up.

But despite the implications of the title Genetic Studies of Genius, the Termites’ high IQs did not predict high levels of creative achievement later in life. Only a few made significant creative contributions to society; none appear to have demonstrated extremely high creativity levels of the sort recognized by major awards, such as the Nobel Prize. (Interestingly, William Shockley, who was a 12-year-old Palo Alto resident in 1922, somehow failed to make the cut for the study, even though he would go on to share a Nobel Prize in physics for the invention of the transistor.) Thirty percent of the men and 33 percent of the women did not even graduate from college. A surprising number of subjects pursued humble occupations, such as semiskilled trades or clerical positions. As the study evolved over the years, the term gifted was substituted for genius. Although many people continue to equate intelligence with genius, a crucial conclusion from Terman’s study is that having a high IQ is not equivalent to being highly creative. Subsequent studies by other researchers have reinforced Terman’s conclusions, leading to what’s known as the threshold theory, which holds that above a certain level, intelligence doesn’t have much effect on creativity: most creative people are pretty smart, but they don’t have to be that smart, at least as measured by conventional intelligence tests. An IQ of 120, indicating that someone is very smart but not exceptionally so, is generally considered sufficient for creative genius.

But if high IQ does not indicate creative genius, then what does? And how can one identify creative people for a study?

One approach, which is sometimes referred to as the study of “little c,” is to develop quantitative assessments of creativity—a necessarily controversial task, given that it requires settling on what creativity actually is. The basic concept that has been used in the development of these tests is skill in “divergent thinking,” or the ability to come up with many responses to carefully selected questions or probes, as contrasted with “convergent thinking,” or the ability to come up with the correct answer to problems that have only one answer. For example, subjects might be asked, “How many uses can you think of for a brick?” A person skilled in divergent thinking might come up with many varied responses, such as building a wall; edging a garden; and serving as a bludgeoning weapon, a makeshift shot put, or a bookend. Like IQ tests, these exams can be administered to large groups of people. Assuming that creativity is a trait everyone has in varying amounts, those with the highest scores can be classified as exceptionally creative and selected for further study.
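For readers who like to see the mechanics, here is a minimal sketch, in Python, of how responses to such a divergent-thinking probe might be scored. The category labels and the simple originality rule are hypothetical stand-ins for illustration, not the scoring scheme of any standardized instrument.

```python
# Illustrative sketch only: scoring responses to a divergent-thinking probe
# such as "How many uses can you think of for a brick?" The categories and
# the originality rule below are hypothetical, not a standardized test.

from collections import Counter

def score_divergent_thinking(responses):
    """Score a list of (idea, category) pairs for fluency (how many ideas),
    flexibility (how many distinct kinds of idea), and a crude originality
    measure (share of ideas whose category no other idea uses)."""
    fluency = len(responses)
    categories = [category for _, category in responses]
    flexibility = len(set(categories))
    counts = Counter(categories)
    originality = sum(1 for c in categories if counts[c] == 1) / fluency
    return {"fluency": fluency, "flexibility": flexibility, "originality": originality}

brick_uses = [
    ("build a wall", "construction"),
    ("edge a garden", "landscaping"),
    ("bludgeoning weapon", "weapon"),
    ("makeshift shot put", "sport"),
    ("bookend", "household"),
]
print(score_divergent_thinking(brick_uses))
```

A real battery would also norm these scores against a large reference sample, which is what allows the highest scorers to be singled out for further study.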

While this approach is quantitative and relatively objective, its weakness is that certain assumptions must be accepted: that divergent thinking is the essence of creativity, that creativity can be measured using tests, and that high-scoring individuals are highly creative people. One might argue that some of humanity’s most creative achievements have been the result of convergent thinking—a process that led to Newton’s recognition of the physical formulae underlying gravity, and Einstein’s recognition that E = mc².

A second approach to defining creativity is the “duck test”: if it walks like a duck and quacks like a duck, it must be a duck. This approach usually involves selecting a group of people—writers, visual artists, musicians, inventors, business innovators, scientists—who have been recognized for some kind of creative achievement, usually through the awarding of major prizes (the Nobel, the Pulitzer, and so forth). Because this approach focuses on people whose widely recognized creativity sets them apart from the general population, it is sometimes referred to as the study of “big C.” The problem with this approach is its inherent subjectivity. What does it mean, for example, to have “created” something? Can creativity in the arts be equated with creativity in the sciences or in business, or should such groups be studied separately? For that matter, should science or business innovation be considered creative at all?

Although I recognize and respect the value of studying “little c,” I am an unashamed advocate of studying “big C.” I first used this approach in the mid-1970s and 1980s, when I conducted one of the first empirical studies of creativity and mental illness. Not long after I joined the psychiatry faculty of the Iowa College of Medicine, I ran into the chair of the department, a biologically oriented psychiatrist known for his salty language and male chauvinism. “Andreasen,” he told me, “you may be an M.D./Ph.D., but that Ph.D. of yours isn’t worth sh--, and it won’t count favorably toward your promotion.” I was proud of my literary background and believed that it made me a better clinician and a better scientist, so I decided to prove him wrong by using my background as an entry point to a scientific study of genius and insanity.

The University of Iowa is home to the Writers’ Workshop, the oldest and most famous creative-writing program in the United States (UNESCO has designated Iowa City as one of its seven “Cities of Literature,” along with the likes of Dublin and Edinburgh). Thanks to my time in the university’s English department, I was able to recruit study subjects from the workshop’s ranks of distinguished permanent and visiting faculty. Over the course of 15 years, I studied not only Kurt Vonnegut but Richard Yates, John Cheever, and 27 other well-known writers.

Going into the study, I keyed my hypotheses off the litany of famous people who I knew had personal or family histories of mental illness. James Joyce, for example, had a daughter who suffered from schizophrenia, and he himself had traits that placed him on the schizophrenia spectrum. (He was socially aloof and even cruel to those close to him, and his writing became progressively more detached from his audience and from reality, culminating in the near-psychotic neologisms and loose associations of Finnegans Wake.) Bertrand Russell, a philosopher whose work I admired, had multiple family members who suffered from schizophrenia. Einstein had a son with schizophrenia, and he himself displayed some of the social and interpersonal ineptitudes that can characterize the illness. Based on these clues, I hypothesized that my subjects would have an increased rate of schizophrenia in family members but that they themselves would be relatively well. I also hypothesized that creativity might run in families, based on prevailing views that the tendencies toward psychosis and toward having creative and original ideas were closely linked.

I began by designing a standard interview for my subjects, covering topics such as developmental, social, family, and psychiatric history, as well as work habits and approach to writing. Drawing on creativity studies done by the psychiatric epidemiologist Thomas McNeil, I evaluated creativity in family members by assigning those who had had very successful creative careers an A++ rating and those who had pursued creative interests or hobbies an A+.

My final challenge was selecting a control group. After entertaining the possibility of choosing a homogeneous group whose work is not usually considered creative, such as lawyers, I decided that it would be best to examine a more varied group of people from a mixture of professions, such as administrators, accountants, and social workers. I matched this control group with the writers according to age and educational level. By matching based on education, I hoped to match for IQ, which worked out well; both the test and the control groups had an average IQ of about 120. These results confirmed Terman’s findings that creative genius is not the same as high IQ. If having a very high IQ was not what made these writers creative, then what was?
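As a rough illustration of what matching on age and educational level involves, a simple nearest-neighbor pairing might look like the sketch below. The field names and figures are invented; the actual matching was done case by case rather than by any algorithm.

```python
# Illustrative sketch only: greedy nearest-neighbor matching of controls
# to subjects on age and years of education. Names and numbers are invented;
# the actual study matched cases by hand.

def match_controls(subjects, controls):
    """Pair each subject with the closest remaining control, measured by
    the sum of absolute differences in age and years of education."""
    available = list(controls)
    pairs = []
    for subject in subjects:
        best = min(
            available,
            key=lambda c: abs(c["age"] - subject["age"]) + abs(c["education"] - subject["education"]),
        )
        available.remove(best)
        pairs.append((subject, best))
    return pairs

writers = [{"id": "W1", "age": 45, "education": 18},
           {"id": "W2", "age": 38, "education": 20}]
candidates = [{"id": "C1", "age": 44, "education": 18},
              {"id": "C2", "age": 39, "education": 19},
              {"id": "C3", "age": 60, "education": 12}]

for writer, control in match_controls(writers, candidates):
    print(writer["id"], "matched with", control["id"])
```

The point of matching on education is that it serves as a rough proxy for IQ, which is why both groups ended up with similar average scores.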

As I began interviewing my subjects, I soon realized that I would not be confirming my schizophrenia hypothesis. If I had paid more attention to Sylvia Plath and Robert Lowell, who both suffered from what we today call mood disorder, and less to James Joyce and Bertrand Russell, I might have foreseen this. One after another, my writer subjects came to my office and spent three or four hours pouring out the stories of their struggles with mood disorder—mostly depression, but occasionally bipolar disorder. A full 80 percent of them had had some kind of mood disturbance at some time in their lives, compared with just 30 percent of the control group—only slightly less than an age-matched group in the general population. (At first I had been surprised that nearly all the writers I approached would so eagerly agree to participate in a study with a young and unknown assistant professor—but I quickly came to understand why they were so interested in talking to a psychiatrist.) The Vonneguts turned out to be representative of the writers’ families, in which both mood disorder and creativity were overrepresented—as with the Vonneguts, some of the creative relatives were writers, but others were dancers, visual artists, chemists, architects, or mathematicians. This is consistent with what some other studies have found. When the psychologist Kay Redfield Jamison looked at 47 famous writers and artists in Great Britain, she found that more than 38 percent had been treated for a mood disorder; the highest rates occurred among playwrights, and the second-highest among poets. When Joseph Schildkraut, a psychiatrist at Harvard Medical School, studied a group of 15 abstract-expressionist painters in the mid-20th century, he found that half of them had some form of mental illness, mostly depression or bipolar disorder; nearly half of these artists failed to live past age 60.
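To get a feel for how decisive a gap of 80 percent versus 30 percent is, here is a small sketch of a one-sided Fisher’s exact test computed from first principles. The group sizes of 30 apiece are my assumption for illustration: the study followed 30 writers, but the exact size of the control group is not given above.

```python
# Illustrative only: how unlikely would the writers' 80 percent rate of mood
# disturbance versus the controls' 30 percent be by chance? Group sizes of
# 30 apiece are assumed here for the sake of the example.

from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    the probability, under the null of no association, of a count in
    cell (1,1) at least as large as the observed a."""
    n = a + b + c + d
    row1 = a + b          # total writers
    col1 = a + c          # total affected across both groups
    def prob(x):          # hypergeometric probability of exactly x affected writers
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return sum(prob(x) for x in range(a, min(row1, col1) + 1))

# 24 of 30 writers (80%) vs. 9 of 30 controls (30%) with a mood disturbance
p = fisher_exact_one_sided(24, 6, 9, 21)
print(f"one-sided p = {p:.6f}")
```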

While my workshop study answered some questions, it raised others. Why does creativity run in families? What is it that gets transmitted? How much is due to nature and how much to nurture? Are writers especially prone to mood disorders because writing is an inherently lonely and introspective activity? What would I find if I studied a group of scientists instead?

These questions percolated in my mind in the weeks, months, and eventually years after the study. As I focused my research on the neurobiology of severe mental illnesses, including schizophrenia and mood disorders, studying the nature of creativity—important as the topic was and is—seemed less pressing than searching for ways to alleviate the suffering of patients stricken with these dreadful and potentially lethal brain disorders. During the 1980s, new neuroimaging techniques gave researchers the ability to study patients’ brains directly, an approach I began using to answer questions about how and why the structure and functional activity of the brain is disrupted in some people with serious mental illnesses.

As I spent more time with neuroimaging technology, I couldn’t help but wonder what we would find if we used it to look inside the heads of highly creative people. Would we see a little genie that doesn’t exist inside other people’s heads?

Today’s neuroimaging tools show brain structure with a precision approximating that of the examination of post-mortem tissue; this allows researchers to study all sorts of connections between brain measurements and personal characteristics. For example, we know that London taxi drivers, who must memorize maps of the city to earn a hackney’s license, have an enlarged hippocampus—a key memory region—as demonstrated in a magnetic-resonance-imaging, or MRI, study. (They know it, too: on a recent trip to London, I was proudly regaled with this information by several different taxi drivers.) Imaging studies of symphony-orchestra musicians have found them to possess an unusually large Broca’s area—a part of the brain in the left hemisphere that is associated with language—along with other discrepancies. Using another technique, functional magnetic resonance imaging (fMRI), we can watch how the brain behaves when engaged in thought.

Designing neuroimaging studies, however, is exceedingly tricky. Capturing human mental processes can be like capturing quicksilver. The brain has as many neurons as there are stars in the Milky Way, each connected to other neurons by billions of spines, which contain synapses that change continuously depending on what the neurons have recently learned. Capturing brain activity using imaging technology inevitably leads to oversimplifications, as sometimes evidenced by news reports that an investigator has found the location of something—love, guilt, decision making—in a single region of the brain.

And what are we even looking for when we search for evidence of “creativity” in the brain? Although we have a definition of creativity that many people accept—the ability to produce something that is novel or original and useful or adaptive—achieving that “something” is part of a complex process, one often depicted as an “aha” or “eureka” experience. This narrative is appealing—for example, “Newton developed the concept of gravity around 1666, when an apple fell on his head while he was meditating under an apple tree.” The truth is that by 1666, Newton had already spent many years teaching himself the mathematics of his time (Euclidean geometry, algebra, Cartesian coordinates) and inventing calculus so that he could measure planetary orbits and the area under a curve. He continued to work on his theory of gravity over the subsequent years, completing the effort only in 1687, when he published Philosophiæ Naturalis Principia Mathematica. In other words, Newton’s formulation of the concept of gravity took more than 20 years and included multiple components: preparation, incubation, inspiration—a version of the eureka experience—and production. Many forms of creativity, from writing a novel to discovering the structure of DNA, require this kind of ongoing, iterative process.

With functional magnetic resonance imaging, the best we can do is capture brain activity during brief moments in time while subjects are performing some task. For instance, observing brain activity while test subjects look at photographs of their relatives can help answer the question of which parts of the brain people use when they recognize familiar faces. Creativity, of course, cannot be distilled into a single mental process, and it cannot be captured in a snapshot—nor can people produce a creative insight or thought on demand. I spent many years thinking about how to design an imaging study that could identify the unique features of the creative brain.

Read more at The Atlantic.

(Image via Tashatuvango/Shutterstock.com)