Is there any truth to the benefits of Classical music?

Is there any truth to the commonly accepted notion that listening to classical music improves mental capacity? Somehow it has become accepted in modern society that classical musicians have larger frontal cortices, better reasoning powers and perhaps higher intelligence quotients. Over the last two decades or so this idea has fuelled a rise in the number of pregnant mothers listening to classical music – whether or not they like it – and in parents enrolling their children in music classes. The music of Mozart, in particular, has enjoyed a resurgence, as its classical form is deemed more logical and organised than the music of other periods, and hence better at triggering patterns of organisation in the brains of its listeners.

How did this idea about Classical music come about? In the 1990s scientists conducted a series of experiments in which one group of students was played one of Mozart’s piano sonatas before a spatial reasoning test, while another group sat in silence. The group that heard the music beforehand performed better on the task than the control group. The effect was temporary, however, lasting only about fifteen minutes; beyond that point the differences between the two groups’ results were minimal and statistically insignificant. The researchers also found that while the music primed individuals particularly for spatial and mathematical tasks, after an hour of listening to Classical music the effect on the brain was lost.

That piece of research was pounced on by the media and others and perpetuated to promote listening to Classical music. One governor of the state of Georgia even decreed that newborn babies be given a CD of Mozart’s works upon leaving the hospital. The Mozart Effect, to give it its common name, was written about in newspapers and magazines, and this spurred sales of Mozart recordings as well as the trend of mothers playing such music to their children in and out of the womb.

The most important question we need to ask is whether there is any truth in such research, and whether it can be corroborated.

We know that some forms of music have a soothing, calming effect on individuals. Playing the music to the students may simply have calmed them so that they were not nervous, allowing them to perform better on the task. However, relaxation need not take the form of Classical music. Any activity that promotes calm before a task – reading a light magazine, playing computer games, talking with a friend – could be said to have the same effect as the classical music that was played.

What if the students in the group had read a joke book or comic beforehand, been less worried about the test and scored better? It might have prompted a deluge of articles claiming “Reading Archie (or The Beano – insert your own title here) improves your IQ”.

Or if the students had been offered a protein drink beforehand, it would not be inconceivable that someone would latch on to that piece of research and declare that “Protein Drinks not just good for your body, but for your brain too”.

Mozart’s music has been said to embody the elements of classical music as we know it: organised formal structures, chords and harmonies moving through related keys, contrasting tunes and contrasts in volume all feature in his music. But the music of other composers has such features too. Imagine if the composer Josef Haydn had been the lucky beneficiary of the experiment and his music had been played instead. The sales of his music catalogue would have gone through the roof!

Subsequent researchers found that listening to music of any form produced improvements, and that the genre of music – whether rock or Classical – was irrelevant. But studies today still quote Mozart.

Is it ethical for the media to promote unsubstantiated research by reporting it without closer scrutiny? As we have seen in previous blog posts, the media reports on things without necessarily scrutinising the evidence, entrusts so-called experts to corroborate it, and fills column inches and air time with modal auxiliary verbs. Huh? In simple terms, it means that if there is a sniff of a link between A and B, the media reports that “A could cause B”. Never mind whether it does or not; there is always the disclaimer of the word “could”.

In this instance, students performed better on a spatial reasoning task after listening to Mozart; hence the headline “Mozart could improve mental powers”. Diluted over several retellings, you could get “According to XXX newspaper, Mozart improves brain power” before arriving at “Mozart improves brain power”. Unfortunately, this is when the headline gets pounced on by anyone who stands to profit from espousing this theme.

Who would profit from this? The Classical music world – performers, writers, musicians – can use this “research” to entice people into taking up lessons and buying CDs and magazines. Read any music teacher’s website and you may find them espousing the benefits of learning music; it is rare to find one that admits it takes a lot of effort.

The media will profit from such “research” because it means there is an untapped well of news to report and bleed dry in the quest to fill column inches and air time. News exclusives will be brought out, and so-called experts will also profit from appearing on news programmes, either monetarily or in the form of public exposure.

One must question the ethics of incorrect reporting. Unfortunately unsubstantiated research leads to more diluted misreporting, which can then form the basis of new research – research that uses these claims as the groundwork for investigation.

It is scary to think that all the medical research into the effect of music on health could be biased because of the so-called effect of classical music. Could musical activities such as learning the piano help reduce Parkinson’s disease? Could listening to the music of Beethoven reduce the incidence of Alzheimer’s disease? Could it all be wrong – have we all been sent down the wrong tunnel by an avalanche of hyped reporting?

It may be fair to say the human impulse is to buy first and consider later, because we are prone to regret. If we have missed an opportunity to improve the lives and abilities of our children, then we will be kicking ourselves silly forever with guilt.

So if you are still not convinced either way about whether classical music – either in the listening or the practice – really does have any effect, you could at least mitigate your guilt by exposing your child to piano music that has predictable patterns in the left hand, for example. Listening to structurally organised music such as that of the Baroque period may be useful, but it is also good to listen to Romantic music, because its greater range of expression arguably develops a child with more emotional subtlety and intelligence.

You may find that, ultimately, any truth in the research about Classical music and its mental benefits is not down to blind passive listening – sitting there while the music goes on around your children. It is the child’s inner drive to mentally organise the sounds that are heard, the attempt to make sense of what is going on in the background, that really triggers activity in the brain. It is this practised ability of the inner mind to organise musical sounds that leads to better performance in related mental tasks.

A smart person thought up the mental improvement products

The trail of human evolution is littered with gadgetry that has outlived its usefulness. We can add devices such as the fax machine, the Walkman, the MiniDisc player and the tape recorder to the list of machines which seemed clever at the time but have now become obsolete. Those of us of a certain age will remember newer additions such as the PocketPC, a palm-sized screen used with a stylus that tapped out letters on screen, and the HP Jornada, a slightly bigger, tablet-sized keyboard and phone. And who could forget the Nintendo Brain Training programmes for the DS and the Fitness programmes for the Wii?

Launched in 2005, Nintendo’s Brain Training programmes claimed to increase mental functioning. Nintendo’s premise was that the concentration required to solve a variety of puzzles – involving language, mathematics and reasoning – increased blood flow to the frontal cortex of the brain, which at least maintained brain functioning or helped improve it. After all, if the brain is like a muscle, exercising it by bombarding it with mental workouts would keep it active and healthy, right?

It is the idea of keeping the brain active that leads many to attempt their daily crossword or Sudoku. The latter in particular has seen a surge in interest over the past decade and is now a feature of newspaper back pages and magazines. There are even publications filled exclusively with Sudoku puzzles, and more complex versions where each traditional puzzle forms a square in a bigger three-by-three grid. If you thought doing a Sudoku puzzle was hard, imagine having to work on it in relation to eight others. It would be absolutely mind-boggling!

Is there any truth to the positive enhancements these objects or activities supposedly bring to human life? Nintendo’s claims about the Brain Training programmes were questioned by leading neuroscientists, who doubted the tenuous link between increased blood flow to the brain and the vaguely described positive effects on life. It is akin to making a blanket statement that chess grandmasters or academics are the happiest people around. Unfortunately it is yet another case of a company creating a product and then engineering the science around it.

Manufacturers of beauty products do it all the time. Whether it is skin care or facial products being flogged, you will find an aspirational theme within the first five seconds of the advertisement (“Look beautiful! Stay young!”) which is then followed by a pseudo-scientific claim, preferably involving percentages (sounds more authoritative) and a small sample size (easier to corroborate, or disclaim, depending on the need).

“Live young forever. XX skin lotion is carefully formulated to retain your natural moisture, so you look and feel twenty years younger. 86% of 173 women noticed a change in skin density after using it for three months.”

There you have it. The secret of beauty product advertising.

Unfortunately, if there was any display of mental acuity, it was by the marketing team of Nintendo. In pitching a product to adults, using the retention and improvement of mental agility as a plus point, they not only convinced adults to buy what was essentially a toy, but to buy one for their children as well. The DS alone has since sold over 90 million units worldwide, and when you take into account the cost of games and all that, you will have to concede that someone at Nintendo had the smarts to produce a tidy little earner.

(For those who were more concerned with retaining their physical functioning, the Nintendo Wii Fit programmes performed that function and filled in the gap in that market.)

The improvement of mental functioning is always a good basis for marketing a product, and you can find a whole plethora of products huddling behind it. Multivitamins, activity puzzles, recreational activities involving multi-tasking – all supposedly give the brain a workout, but more importantly they tap into the fear of missing out and of losing mental function, so that people buy not out of potential gain but out of fear of lost opportunity and potential regret.

The loss of mental function is seen most starkly in Alzheimer’s disease, for which there is currently no cure. With 30 million people worldwide suffering from it, this presents an endless river of opportunity for people researching the disease, as well as for people developing products to improve mental function in the hope that doing so can stave off the disease. As the Nintendo Brain Training developers realised, it is not so much evidence that these products work that makes people buy them – the evidence produced is often biased and not independent – but the fear of missing out and retrospective guilt that compels the purchase. Buy first, examine the evidence later, is the apparent dogma.

Unfortunately we are at the stage of modern society where it is not just the product that needs scrutiny, but whether the scrutiny itself needs scrutiny for evidence of bias, either in the form of financial ties or expected research outcomes.

Mental improvement is an area that product developers – whether the products be vitamins, books or applications – will continually target, because human beings will always seek to improve mental prowess, both in themselves and in their children, in the hope that somewhere down the line it offers an advantage, or prevents the mental degeneration associated with the aging process. And the compelling reason to buy lies somewhere at the intersection of being seduced by the aspirational ideals the product offers, the fear of missing out, and the assumption that the underlying evidence is empirical. The greatest mental sharpness has been displayed by the one who has understood the sales psychology of mental health improvement products and used it to his or her advantage.

Set aside time and space for your own mental health

Work places huge demands on modern living. It goes without saying that work demands have increased over the generations. Generations ago, for example, the concept of a traditional job for most people was a five-day working week. The song “9 to 5” by Dolly Parton more or less captured the essence of work at the time. (Unfortunately, it is still fairly often played, to the point that people in non-Western societies assume we still only work eight-hour days, five times a week, and spend our free time sunning ourselves on the beach.) Nowadays people have to work longer hours and travel further for work. The total time spent travelling and working each day could easily amount to twelve hours, and it is not as if the commute is down time – we still have to catch up on emails and admin, typing away busily on the laptop. We could easily spend sixty hours a week doing work-related things.

And the weekends? Forget the weekends. These days there is no distinction between a weekday and a weekend. Work has steadily grown its talons, and where an hourly-rated individual used to get 1.5 or two times the normal rate for working on a weekend, these days it is the same. Employers realise that in an economy with job shortages they can get away with offering lower rates and still not be short of takers.

The problem with all this is that we don’t really have much of a choice when it comes to establishing our work boundaries, or exercising our rights when we realise we are being pushed beyond them. We’re made to feel that in these times we are lucky to hold down a job, and that if we complain about its increasing demands, and about how managers try to force more work on us without increasing our pay, we might be told to take a hike – ending up in the more difficult situation of having no job, commitments to uphold and having to start out again. Plenty of people are trapped in jobs where they take on more and more as the years go by, and have every ounce of work and every free hour extracted from them for little pay. This places increasing mental demands on the individual, not just in coping with the work itself but in facing the possibility of being made redundant if he or she shows weakness by admitting an inability to cope any more. It is a no-win situation.

Is it a surprising statistic that mental health illness is on the rise? Hardly.

Nowadays people are working more to live and living to work more.

What can you do to preserve some semblance of mental health?

The first thing you can do for yourself is to establish boundaries within the home. Establish a space where work does not intrude. A good choice is often the bedroom, or at the very least a rule that you will not work on the bed. Ending up hunched over your laptop in bed will do you no good – keep at least some physical space for yourself.

Also try to set aside a time each day for yourself if possible. An hour each day is possibly unrealistic in the modern climate, but something like twenty minutes to half an hour would be a good idea. Use this time to wind down in your personal space doing something you enjoy that is different from work. You may think you cannot really afford that time, but it is important to disassociate yourself from work for the sake of your long-term health. Think of it as enforced rest. If it works better for you, take your enforced rest in the middle of the working day. You don’t necessarily have to be doing something; use it to rest or catch a power nap.

Every now and again, such as on a weekend, do something different from work. Do a yoga class, learn an instrument like the piano, or play a game of tennis. The possibilities for leisure are endless. But don’t bring your work approach to your leisure. Don’t start charting your tennis serve percentage, or do anything that makes your leisure activity appear like work in a different form. The only thing you must do with a businesslike approach is to meet this leisure appointment, so that your life does not revolve around a continuous stretch of work.

We can moan about it, but the nature of work will never revert to how it was in the past. Those of us who long for the good old days will only make our own lives miserable with wishful thinking. Those of us who insist on working five-day weeks will find the income insufficient to maintain modern living in the twenty-first century. We will all end up working longer and harder in the current economic climate, and even if times improve, employers are unlikely to go back to older forms of remuneration once workers have become accustomed and conditioned to working at a certain level, because it is more cost-effective to hire fewer employees who do more work than to have the same work done by more employees. Employees have to recognise that adapting to increasing workloads is a working life skill, and that taking steps to counter increasing pressures is an essential part of maintaining our own mental health and well-being.

Why mental health problems will never go away

Many people will experience mental health difficulties at some point in their lives. As people go through life the demands on them increase, and over a prolonged period these can cause difficulty and ill health. These problems can manifest themselves both in mental and physical ways.

What kind of demands do people experience? One of these can be work-related. People may experience the stress of looking for work, of having to work in jobs which do not test their skills, or of being in occupations which require skills that are seemingly difficult to develop. Another common theme among adults is the stress of working in a job which demands ever more of them but does not remunerate them accordingly. In other words, they have to work more for less and accept a gradual lowering of working conditions, yet they cannot leave and start afresh, because they have already invested so many working years in the job, and the demands of a mortgage to pay off and a young family to provide for mean they cannot start on a lower rung in a new occupation. Over a prolonged period, this can cause severe unhappiness.

Is it surprising that suicide disproportionately affects men in their thirties and forties? This is a period when work demands more of a man, the mortgage needs paying, and the family demands more of his time and energy. It is unsurprising that, having spent long periods in this sort of daily struggle, some men develop mental health problems which lead them to attempt suicide. But mental ill health does not just affect men. Among the issues some women have to deal with are the struggles of bringing up children, the work-life balance, the unfulfilled feeling of not utilising their skills, and isolation.

One of the ways mental ill health develops is when people are pushed too hard for too long. Put under these kinds of demands, the body shuts down as a self-preservation measure. But the demands on the person don’t just go away. You may want a break from work, but this may not be possible or practical. In fact, the lack of an escape when you are aware you need one may be an even greater trigger of mental illness, because it increases the feeling of being trapped.

When people go through periods of mental ill health, an enforced period of short-term rest will often allow them to reset their bearings so that they can continue at work, or return to work with some level of appropriate support. But this is only temporary.

With mental ill health problems, lifestyle adjustments need to be made for sufficient recovery.

Under the Equality Act (2010), your employer has a legal duty to make “reasonable adjustments” to your work.

Mental ill health sufferers could ask about working flexibly, job sharing, or a quiet room, a government report suggests.

In practice, however, this means more cost to the employer in making adjustments to accommodate the employee, and unless the employee is a valued one whom the employer would like to keep, the reality is often that they will be gradually phased out of the organisation.

In fact, when an employee attains a certain level of experience within an organisation, employers often ask more of them, because they know these employees are locked in to their jobs and have to accept the extra work grudgingly or risk losing their jobs – a risk they cannot take if they have dependents and financial commitments.
And you know the irony of it? The mental ill health sufferer already knows that. Which is why they don’t speak out for help in the first place.

If these employees complain, employers simply replace them with younger employees, who cost less and who are willing to take on more responsibilities just to have a job. Any responsibilities the redundant employee had simply get divided up among the remaining colleagues, who are in turn asked to take on more. They are next in line in the mental ill health queue.

And what if you are self employed? And have to work to support yourself and your dependents? The demands of the day to day are huge and don’t seem to go away.

You can see why mental health is perceived as a ticking time bomb. Organisations are not going to change to accommodate their employees, because of the cost; instead they keep pressing them to increase productivity without extra pay, knowing that they cannot say no, and when all the life and juice has been squeezed out of them, they can be chucked away and replaced with the next dispensable employee.

A ticking time bomb.

Media’s Marvellous Medicine

When it comes to our health, the media wields enormous influence over what we think. They tell us what’s good, what’s bad, what’s right and wrong, what we should and shouldn’t eat. When you think about it, that’s quite some responsibility. But do you really think that a sense of philanthropic duty is the driving force behind most of the health ‘news’ stories that you read? Who are we kidding? It’s all about sales, of course, and all too often that means the science plays second fiddle. Who wants boring old science getting in the way of a sensation-making headline?

When it comes to research – especially the parts we’re interested in, namely food, diet and nutrients – there’s a snag. The thing is, these matters are rarely, if ever, clear-cut. Let’s say there are findings from some new research that suggest a component of our diet is good for our health. Now academics and scientists are generally a pretty cautious bunch – they respect the limitations of their work and don’t stretch their conclusions beyond their actual findings. Not that you’ll think this when you hear about it in the media. News headlines are in your face and hard hitting. Fluffy uncertainties just won’t cut it. An attention-grabbing headline is mandatory; relevance to the research is optional. Throw in a few random quotes from experts – as the author Peter McWilliams stated, the problem with ‘experts’ is you can always find one ‘who will say something hopelessly hopeless about anything’ – and boom! You’ve got the formula for some seriously media-friendly scientific sex appeal, or as we prefer to call it, ‘textual garbage’. The reality is that a lot of the very good research into diet and health ends up lost in translation. Somewhere between its publication in a respected scientific journal and the moment it enters our brains via the media, the message gets a tweak here, a twist there and a dash of sensationalism thrown in for good measure, which leaves us floundering in a sea of half-truths and misinformation. Most of it should come with the warning: ‘does nothing like it says in the print’. Don’t get us wrong: we’re not just talking about newspapers and magazines here, the problem runs much deeper. Even the so-called nutrition ‘experts’, the health gurus who sell books by the millions, are implicated. We’re saturated in health misinformation.

Quite frankly, many of us are sick of this contagion of nutritional nonsense. So, before launching headlong into the rest of the book, take a step back and see how research is actually conducted, what it all means and what to watch out for when the media deliver their less-than-perfect messages. Get your head around these and you’ll probably be able to make more sense of nutritional research than most of our cherished health ‘gurus’.

Rule #1: Humans are different from cells in a test tube
At the very basic level, researchers use in-vitro testing, in which they isolate cells or tissues of interest and study them outside a living organism in a kind of ‘chemical soup’. This allows substances of interest (for example, a vitamin or a component of food) to be added to the soup to see what happens. So they might, for example, add vitamin C to some cancer cells and observe its effect. We’re stating the obvious now when we say that what happens here is NOT the same as what happens inside human beings. First, the substance is added directly to the cells, so they are often exposed to concentrations far higher than would normally be seen in the body. Second, humans are highly complex organisms, with intricately interwoven systems of almost infinite processes and reactions. What goes on within a few cells in a test tube or Petri dish is a far cry from what would happen in the body. This type of research is an important part of science, but scientists know its place in the pecking order – as an indispensable starting point of scientific research. It can give us valuable clues about how stuff works deep inside us, what we might call the mechanisms, before going on to be more rigorously tested in animals, and ultimately, humans. But that’s all it is, a starting point.

Rule #2: Humans are different from animals
The next logical step usually involves animal testing. Studying the effects of a dietary component in a living organism, not just a bunch of cells, is a big step closer to what might happen in humans. Mice are often used, due to convenience, consistency, a short lifespan, fast reproduction rates and a genome and biology closely shared with humans. In fact, some pretty amazing stuff has been shown in mice. We can manipulate a hormone and extend life by as much as 30%. We can increase muscle mass by 60% in two weeks. And we have shown that certain mice can even regrow damaged tissues and organs.

So, can we achieve all of that in humans? The answer is a big ‘no’ (unless you happen to believe the X-Men are real). Animal testing might be a move up from test tubes in the credibility ratings, but it’s still a long stretch from what happens in humans. You’d be pretty foolish to make a lot of wild claims based on animal studies alone.

To prove that, all we need to do is take a look at pharmaceutical drugs. Vast sums of money (we’re talking hundreds of millions) are spent trying to get a single drug to market. But the success rate is low. Of all the drugs that pass in-vitro and animal testing to make it into human testing, only 11% will prove to be safe and effective enough to hit the shelves. For cancer drugs the rate of success is only 5%. In 2003, the President of Research and Development at pharmaceutical giant Pfizer, John La Mattina, stated that ‘only one in 25 early candidates survives to become a prescribed medicine’. You don’t need to be a betting person to see these are seriously slim odds.

Strip it down and we can say that this sort of pre-clinical testing never, ever, constitutes evidence that a substance is safe and effective. These are research tools to try and find the best candidates to improve our health, which can then be rigorously tested for efficacy in humans. Alas, the media and our nutrition gurus don’t appear to care too much for this. Taking research carried out in labs and extrapolating the results to humans sounds like a lot more fun. In fact, it’s the very stuff of many a hard-hitting newspaper headline and bestselling health book. To put all of this into context, let’s take just one example of a classic media misinterpretation, and you’ll see what we mean.

Rule #3: Treat headlines with scepticism
Haven’t you heard? The humble curry is right up there in the oncology arsenal – a culinary delight capable of curing the big ‘C’. At least that’s what the papers have been telling us. ‘The Spice Of Life! Curry Fights Cancer’ decreed the New York Daily News. ‘How curry can help keep cancer at bay’ and ‘Curry is a “cure for cancer”’ reported the Daily Mail and The Sun in the UK. Could we be witnessing the medical breakthrough of the decade? Best we take a closer look at the actual science behind the headlines.

The spice turmeric, which gives some Indian dishes a distinctive yellow colour, contains relatively large quantities of curcumin, which has purported benefit in Alzheimer’s disease, infections, liver disease, inflammatory conditions and cancer. Impressive stuff. But there’s a hitch when it comes to curcumin. It has what is known as ‘poor bioavailability’. What that means is, even if you take large doses of curcumin, only tiny amounts of it get into your body, and what does get in is got rid of quickly. From a curry, the amount absorbed is so minuscule that it is not even detectable in the body.

So what were those sensational headlines all about? If you had the time to track down the academic papers being referred to, you would see it was all early stage research. Two of the articles were actually referring to in-vitro studies (basically, tipping some curcumin onto cancer cells in a dish and seeing what effect it had).

Suffice to say, this is hardly the same as what happens when you eat a curry. The other article referred to an animal study, where mice with breast cancer were given a diet containing curcumin. Even allowing for the obvious differences between mice and humans, surely that was better evidence? The mice ate curcumin-containing food and absorbed enough for it to have a beneficial effect on their cancer. Sounds promising, until we see the mice had a diet that was 2% curcumin by weight. With the average person eating just over 2kg of food a day, 2% is a hefty 40g of curcumin. Then there’s the issue that the curcumin content of the average curry/turmeric powder used in curry is a mere 2%. Now, whoever’s out there conjuring up a curry containing 2kg of curry powder, please don’t invite us over for dinner anytime soon.
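For the number-minded, here is a rough back-of-the-envelope check of that arithmetic, written as a small Python sketch. It uses only the figures quoted above (a diet that is 2% curcumin by weight, roughly 2kg of food eaten per day, and curry powder that is itself about 2% curcumin); the amounts are illustrative, not precise nutritional data.

```python
# Back-of-the-envelope check of the curcumin numbers quoted above.
daily_food_g = 2000              # ~2 kg of food eaten per day
diet_curcumin_fraction = 0.02    # diet that is 2% curcumin by weight, as in the mouse study

curcumin_needed_g = daily_food_g * diet_curcumin_fraction
print(f"Curcumin needed per day: {curcumin_needed_g:.0f} g")          # ~40 g

powder_curcumin_fraction = 0.02  # curry/turmeric powder is itself only ~2% curcumin
powder_needed_g = curcumin_needed_g / powder_curcumin_fraction
print(f"Curry powder needed per day: {powder_needed_g / 1000:.0f} kg")  # ~2 kg
```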

This isn’t a criticism of the science. Curcumin is a highly bio-active plant compound that could possibly be formulated into an effective medical treatment one day. This is exactly why these initial stages of research are being conducted. But take this basic stage science and start translating it into public health advice and you can easily come up with some far-fetched conclusions. Let us proffer our own equally absurd headline: ‘Curry is a Cause of Cancer’. Abiding by the same rules of reporting used by the media, we’ve taken the same type of in-vitro and animal-testing evidence and conjured up a completely different headline. We can do this because some studies of curcumin have found that it actually causes damage to our DNA, and in so doing could potentially induce cancer.

As well as this, concerns about diarrhoea, anaemia and interactions with drug-metabolizing enzymes have also been raised. You see how easy it is to pick the bits you want in order to make your headline? Unfortunately, the problem is much bigger than just curcumin. It could just as easily be resveratrol from red wine, omega-3 from flaxseeds, or any number of other components of foods you care to mention that make headline news.

It’s rare to pick up a newspaper or nutrition book without seeing some new ‘superfood’ or nutritional supplement being promoted on the basis of less than rigorous evidence. The net result of this shambles is that the real science gets sucked into the media vortex and spat out in a mishmash of dumbed-down soundbites, while the nutritional messages we really should be taking more seriously get lost in a kaleidoscope of pseudoscientific claptrap, peddled by a media with about as much authority to advise on health as the owner of the local pâtisserie.

Rule #4: Know the difference between association and causation
If nothing else, we hope we have shown that jumping to conclusions based on laboratory experiments is unscientific, and probably won’t benefit your long-term health. To acquire proof, we need to carry out research that involves actual humans, and this is where one of the greatest crimes against scientific research is committed in the name of a good story, or to sell a product.

A lot of nutritional research comes in the form of epidemiological studies. These involve looking at populations of people and observing how much disease they get and seeing if it can be linked to a risk factor (for example, smoking) or some protective factor (for example, eating fruit and veggies). And one of the most spectacular ways to manipulate the scientific literature is to blur the boundary between ‘association’ and ‘causation’. This might all sound very academic, but it’s actually pretty simple.

Confusing association with causation means you can easily arrive at the wrong conclusion. For example, a far higher percentage of visually impaired people have Labradors compared to the rest of the population, so you might jump to the conclusion that Labradors cause sight problems. Of course we know better, that if you are visually impaired then you will probably have a Labrador as a guide dog. To think otherwise is ridiculous.

But apply the same scenario to the complex human body and it is not always so transparent. Consequently, much of the debate about diet and nutrition is of the ‘chicken versus egg’ variety. Is a low or high amount of a nutrient a cause of a disease, a consequence of the disease, or simply irrelevant?

To try and limit this confusion, researchers often use what’s known as a cohort study. Say you’re interested in studying the effects of diet on cancer risk. You’d begin by taking a large population that are free of the disease at the outset and collect detailed data on their diet. You’d then follow this population over time, let’s say ten years, and see how many people were diagnosed with cancer during this period. You could then start to analyse the relationship between people’s diet and their risk of cancer, and ask a whole lot of interesting questions. Did people who ate a lot of fruit and veggies have less cancer? Did eating a lot of red meat increase cancer? What effect did drinking alcohol have on cancer risk? And so on.
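To make that concrete, here is a small, entirely hypothetical Python sketch of the kind of comparison such a cohort allows: two dietary groups followed for ten years, a count of diagnoses in each, and a crude relative risk. The group sizes and case counts are invented purely for illustration and are not taken from EPIC or any real study.

```python
# Toy cohort comparison with invented numbers: follow two dietary groups for
# ten years, count cancer diagnoses, and compare the incidence between groups.
high_veg = {"people": 50_000, "cases": 1_500}   # ate a lot of fruit and veg
low_veg  = {"people": 50_000, "cases": 2_100}   # ate little fruit and veg

risk_high = high_veg["cases"] / high_veg["people"]
risk_low  = low_veg["cases"] / low_veg["people"]

print(f"10-year risk, high intake: {risk_high:.1%}")   # 3.0%
print(f"10-year risk, low intake:  {risk_low:.1%}")    # 4.2%
print(f"Relative risk: {risk_high / risk_low:.2f}")    # ~0.71 - an association only
```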

The European Prospective Investigation into Cancer and Nutrition (EPIC), which we refer to often in this book, is an example of a powerfully designed cohort study, involving more than half a million people in ten countries. These studies are a gold mine of useful information because they help us piece together dietary factors that could influence our risk of disease.

But, however big and impressive these studies are, they’re still observational. As such they can only show us associations, they cannot prove causality. So if we’re not careful about the way we interpret this kind of research, we run the risk of drawing some whacky conclusions, just like we did with the Labradors. Let’s get back to some more news headlines, like this one we spotted: ‘Every hour per day watching TV increases risk of heart disease death by a fifth’.

When it comes to observational studies, you have to ask whether the association makes sense. Does it have ‘biological plausibility’? Are there harmful rays coming from the TV that damage our arteries, or is it that the more time we spend on the couch watching TV, the less time we spend being active and improving our heart health? The latter is true, of course, and there’s an ‘association’ between TV watching and heart disease, not ‘causation’.

So even with cohorts, the champions of the epidemiological studies, we can’t prove causation, and that’s all down to what’s called ‘confounding’. This means there could be another variable at play that causes the disease being studied, at the same time as being associated with the risk factor being investigated. In our example, it’s the lack of physical activity that increases heart disease and is also linked to watching more TV.

This issue of confounding variables is just about the biggest banana skin of the lot. Time and time again you’ll find nutritional advice promoted on the basis of the findings of observational studies, as though this type of research gives us stone cold facts. It doesn’t. Any scientist will tell you that. This type of research is extremely useful for generating hypotheses, but it can’t prove them.
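If it helps to see confounding in action, the following toy Python simulation (all numbers invented) builds a population in which physical inactivity raises heart disease risk and also makes heavy TV watching more likely, while TV itself has no effect. The crude comparison shows heavy viewers doing far worse; stratify by activity level and the gap vanishes.

```python
# Simulated confounding: inactivity drives both heavy TV watching and heart
# disease, so TV and disease look associated even though TV has no effect here.
import random

random.seed(1)
people = []
for _ in range(200_000):
    inactive = random.random() < 0.5                          # hidden confounder
    heavy_tv = random.random() < (0.8 if inactive else 0.2)   # inactivity -> more TV
    disease = random.random() < (0.12 if inactive else 0.04)  # inactivity -> disease
    people.append((heavy_tv, inactive, disease))

def rate(rows):
    return sum(d for _, _, d in rows) / len(rows) if rows else 0.0

# Crude comparison: heavy TV watchers look much worse off.
print("Crude:",
      f"heavy TV {rate([p for p in people if p[0]]):.1%},",
      f"light TV {rate([p for p in people if not p[0]]):.1%}")

# Stratified by the confounder: within each activity level, TV makes no difference.
for label, flag in [("inactive", True), ("active", False)]:
    stratum = [p for p in people if p[1] == flag]
    print(f"{label}:",
          f"heavy TV {rate([p for p in stratum if p[0]]):.1%},",
          f"light TV {rate([p for p in stratum if not p[0]]):.1%}")
```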

Rule #5: Be on the lookout for RCTs (randomized controlled trials)
An epidemiological study can only form a hypothesis, and when it offers up some encouraging findings, these then need to be tested in what’s known as an intervention, or clinical, trial before we can talk about causality. Intervention trials aim to test the hypothesis by taking a population that is as similar as possible, giving an intervention to a proportion of them over a period of time and observing how it influences the measured outcome.

An overview of mental health

Mental illness continues to be one of the most misunderstood, mythologised and controversial of issues. Described for as long as human beings have been able to record thoughts and behaviours, it is at once a medical, social and at times political issue. It can lead to detention against one’s will and has its very own Act of Parliament, and yet we really know very little about it.

Societies through the ages have responded to this mystery by locking up people whose sometimes bizarre behaviour was deemed dangerous, unsuitable or just plain scandalous. Only within the relatively recent past have the tall, thick walls of the asylum been dismantled and those who remained institutionalised and hidden allowed out into the community.

Little wonder then that mental health and mental disorder remain misunderstood by most, and frightening to many. Recent reports suggest that stigma is on the decline (Time to Change 2014) but progress has been slow. Despite the best efforts of soap scriptwriters, high-profile celebrities ‘coming clean’ about mental illness, and the work of mental health charities and support groups in demystifying diagnoses such as depression, we still see and hear many examples of discrimination and myth.

Given the sheer ubiquity of mental illness throughout the world, the stigma and mystery are surprising. The most recent national survey confirms the now well-known statistic that just under one in four English adults are experiencing a diagnosable mental disorder at any one time (McManus et al. 2009). Depression is identified by the World Health Organization as the world’s leading cause of years of life lost due to disability (WHO 2009).

Relatively few of those experiencing mental health problems will come to the attention of a GP, let alone a mental health professional. This is especially so in the developing world where initiatives to develop local mental health interventions are gaining considerable ground after generations of cultural stigma and ignorance (WHO 2009). But even in parts of the world where people have ready access to medical help, many suffer alone rather than face the apparent shame of experiencing mental health problems.

Perhaps part of our reluctance to accept mental illness lies with difficulties determining mental health. We are made aware of factors that determine positive mental health. Connecting with people, being active, learning new things, acts of altruism and being aware of oneself (NHS 2014) have been evidenced as ways of promoting our well-being, but mental order remains rather more loosely defined than mental disorder.

So what are the systems used to categorise and define mental illness? In the United Kingdom, mental health professionals often refer to an ICD-10 diagnosis to describe a patient’s condition. This is the World Health Organization’s (WHO) diagnostic manual, which lists all recognised (by WHO at least) diseases and disorders, including the category ‘mental and behavioural disorders’ (WHO 1992). The Diagnostic and Statistical Manual of Mental Disorders (better known as DSM-5) is more often used in the United States and elsewhere in the world (American Psychiatric Association 2013). These two manuals are intended to provide global standards for the recognition of mental health problems, both for day-to-day clinical practice and for clinical researchers, although the tools the latter group use to measure symptoms often vary from place to place, which can interfere with the ‘validity’ of results – in other words, the ability of one set of results to be compared with those from a different research team.

ICD-10 ‘mental and behavioural disorders’ lists 99 different types of mental health problem, each of which is further sub-divided into a variety of more precise diagnoses, ranging from the relatively common and well known (such as depression or schizophrenia) to more obscure diagnoses such as ‘specific developmental disorders of scholastic skills’.

The idea of using classification systems and labels to describe the highly complex vagaries of the human mind often meets with fierce resistance in mental health circles. The ‘medical model’ of psychiatry – diagnosis, prognosis and treatment – is essentially a means of applying the same scientific principles to the study and treatment of the mind as physical medicine applies to diseases of the body. An X-ray of the mind is impossible, a blood test will reveal nothing about how a person feels, and fitting a collection of psychiatric symptoms into a precise diagnostic category does not always yield a consistent result.

In psychiatry, symptoms often overlap with one another. For example, a person with obsessive compulsive disorder may believe that if they do not switch the lights on and off a certain number of times and in a particular order then a disaster will befall them. To most, this would appear a bizarre belief, to the extent that the inexperienced practitioner may label that person as ‘delusional’ or ‘psychotic’. Similarly, a person in the early stages of Alzheimer’s disease may often experience many of the ‘textbook’ features of clinical depression, such as low mood, poor motivation and disturbed sleep. In fact, given the tragic and predictable consequences of dementia it is unsurprising that sufferers often require treatment for depression, particularly while they retain the awareness to know that they are suffering from a degenerative condition with little or no improvement likely.

Psychiatry may often be a less-than-precise science, but the various diagnostic terms are commonplace in health and social care and have at least some descriptive power, although it is also important to remember that patients or clients may experience a complex array of feelings, experiences or ‘symptoms’ that may vary widely with the individual over time and from situation to situation.

Defining what is (or what is not) a mental health problem is really a matter of degrees. Nobody could be described as having ‘good’ mental health every minute of every day. Any football supporter will report the highs and lows encountered on an average Saturday afternoon, and can easily remember the euphoria of an important win or the despondency felt when their team is thrashed six-nil on a cold, wet Tuesday evening. But this could hardly be described as a ‘mental health problem’, and for all but the most ardent supporters their mood will have lifted within a short space of time.

However, the same person faced with redundancy, illness or the loss of a close family member might encounter something more akin to a ‘problem’. They may experience, for example, anger, low mood, tearfulness, sleep difficulties and loss of appetite. This is a quite normal reaction to stressful life events, although the nature and degree of reaction is of course dependent on a number of factors, such as the individual’s personality, the circumstances of the loss and the support available from those around them at the time. In most circumstances the bereaved person will recover after a period of time and will return to a normal way of life without the need for medical intervention of any kind. On the other hand, many people will experience mental health problems serious enough to warrant a visit to their GP.

The majority of people with mental health problems are successfully assessed and treated by GPs and other primary care professionals, such as counsellors. The Improving Access to Psychological Therapies (IAPT) programme is a now well-established approach to treating mental health problems in the community. GPs can make an IAPT referral for depressed and/or anxious patients who have debilitating mental health issues but who don’t require more specialised input from a psychiatrist or community mental health nurse. Most people receiving help for psychological problems will normally be able to carry on a reasonably normal lifestyle either during treatment or following a period of recovery. A small proportion of more severe mental health issues will necessitate referral to a Community Mental Health Team (CMHT), with a smaller still group of patients needing in-patient admission or detention under the Mental Health Act.

Mental health is a continuum at the far end of which lies what professionals refer to as severe and enduring mental illness. This is a poorly defined category, but can be said to include those who suffer from severely debilitating disorders that drastically reduce their quality of life and that may necessitate long-term support from family, carers, community care providers, supported housing agencies and charities. The severe and enduring mentally ill will usually have diagnoses of severe depression or psychotic illness, and will in most cases have some degree of contact with mental health professionals.

The problem with industry-funded drug trials

How much can we trust the results of clinical trials, especially ones that have been funded by companies with vested interests? This is the question we should continually ask ourselves, after the debacle of Seroxat.

The active ingredient of Seroxat is paroxetine. Medicines are known by two names: that of the active ingredient, which gives the medicine its scientific name, and the brand name. For example, the active ingredient ibuprofen is marketed as Nurofen, among other names. Companies that manufacture their own brand of a medicine may decide to market it as little more than their company name followed by the active ingredient – Tesco Paracetamol or Boots Ibuprofen, for example – in order to distinguish it from rival brands while aligning it with an already recognised scientific name, but without the associated costs of launching a new product brand.

Paroxetine is an anti-depressant and made its name as one of the few anti-depressants prescribed to children. However, it was withdrawn from such use after re-examination of the original scientific evidence found that the published results were misleading and had been misconstrued.

The prescription of medication to children is done with caution and monitoring, as there are various risks involved. Firstly, there is the danger that their bodies adapt to the medication and become resistant, necessitating either higher doses in adult life or a move on to stronger medication. In this case, rather than addressing the problem, the medication risks becoming the source of a lifelong dependence on medication. The second risk is that all medicines have side effects and can cause irreparable damage elsewhere in the body. For example, the use of aspirin in the elderly was found to damage the lining of the stomach.

Equally worrying is the effect of these drugs on the health of the mind. Some drugs, particularly those for mental health, are taken for their calming effect on the mind. The two main types of mental health drugs can be said to be anti-depressants and mood stabilisers, and while the aim of these drugs is to limit the brain’s overactivity, some have been found to trigger suicidal thoughts in users – ironically producing the very state they were meant to prevent.

Children are currently often prescribed adult medication at reduced doses, such as half strength, but the difficulty in setting the dosage is that it does not scale in a straight line. Should children under a certain age – say twelve, for example – be prescribed a dosage based on age? Or, if the most important factor is the body’s ability to absorb the drug, should we prescribe based on other factors such as body mass index?

So when Seroxat came on to the market, marketed as an anti-depressant for children, you could almost feel the relief of the parents of young sufferers. Here was a medical product backed by science and research, suitable for children, approved by the health authorities. Finally, a product young sufferers could take without too much worry, and one – having been tested with young children – that parents could reasonably surmise would be effective in managing their children’s mental health.

Except that paroxetine, marketed as Seroxat, was not what it claimed to be. It was withdrawn from use after scientists, on re-analysing the original data, found that the harmful effects, particularly on young people, had been under-reported. Furthermore, researchers claim that important details which could have affected the approval of its licence were not made public, because doing so might have meant years of research going down the drain.

When a medical product is launched, it is covered by a twenty-year patent, which gives its maker a monopoly on that medicine for the period. One might question why that is so; the patent protects the time and money the pharmaceutical company has invested in researching and marketing the product, and gives it a window to establish a sizeable market share as a reward for developing the medication.

Twenty years might seem like a long term for a patent, but because companies apply for it while the product is in the early stages of development – so that their research is not hijacked by a competing pharmaceutical company – they are often left with a period of ten years or less by the time the medical product has some semblance of its final form. The patent holder has that amount of time to apply for a licence and to market and sell the medication. After the original twenty years have elapsed, other companies can enter the fray and develop their own brands of the medicine. They, of course, do not need to spend as much on research, because much of it will already have been done, published and made accessible – enough to be reverse-engineered in a shorter space of time. Pharmaceutical companies are hence always engaged in a race against time. If a product hits a snag in trials, mass production is put on hold, and if the company is left with anything less than five years to market its product, that is usually not long enough to recoup the research costs. With anything less than three years, it might as well have done the research for the companies that follow, because it will not recover the costs of research and marketing. While not proven, it is believed that pharmaceutical companies therefore rush out products which have not been sufficiently tested, emphasising the positive trial results, and wait for corrective feedback from the market before issuing a revised version. It is not unlike computer applications nowadays which launch in beta form, relying on user feedback for improvement, before relaunching in an upgraded form. The difference is that software has no immediate implications for human health. Medication does.

Researchers who re-examined data from the clinical trial of the antidepressant paroxetine found reports of suicide attempts that had not been included in the original research paper. And because the makers of paroxetine, GlaxoSmithKline (GSK), had marketed it as a safe and effective antidepressant for children even though the evidence was to the contrary, GSK had to pay a record $3 billion in damages for making false claims.

In the original research trials, GSK claimed that paroxetine was an effective medication for treating adolescents with depression and it was generally well-tolerated by the body with no side effects. Subsequent analysis found little advantage from paroxetine and an increase in harm in its use, compared to placebo.

The whole issue highlights the difficulty of trusting medical trials whose data has not been independently accessed and reviewed.

The current stance on data is that pharmaceutical companies can select which clinical data they choose to release. Why is this so? We have already covered the reason. They have committed funds to research and are hence protective (and have a right to be) of the raw data generated, particularly when competitors are waiting in the wings to launch products using the same data.

Think of it this way: if you were a recording artist and hired a recording studio for two weeks, along with musicians to play for you and sound engineers to record your work, you might end those two weeks with a vast amount of recordings, which will be edited and from which your album will be created. Whatever has been recorded in the studio is yours, and you have the right to be protective of it so that someone else does not release music using your ideas or music similar to yours.

The problem is that when the pharmaceutical company initiating and funding the research is also the one that will eventually market the product first, with the clock ticking against it, it has a vested interest in the product’s success and is inherently biased towards finding positive outcomes that are advantageous to the product it creates.

Who would commit twenty years of time, research, marketing and finance to see a product fail?

The pharmaceutical company is also under pressure to find these outcomes quickly, and hence even the scientific tests may be geared towards producing pre-determined conclusions rather than opening the product up to further analysis and cross-examination, which would take up precious time and cause delay.

This creates a situation where only favourable data has been sought in the trials and only such data is made publicly available, leading to quick acceptance of the drug, a quick acquisition of a license and subsequently less delay heading into the marketing process.

The alternative is for independent review of the raw data, but this causes additional stresses on the time factor, and the security of the raw data cannot be guaranteed.

Despite the limitations of the current system, there are attempts to reform it. The AllTrials campaign is a pressure group seeking independent scrutiny of medical data and has the backing of medical organisations. The AllTrials group argues that all clinical trial data should be made available for independent scrutiny, in order to prevent issues like the misprescribing of paroxetine from recurring in the future.

The original study by GSK reported that in clinical trials 275 young people aged 12 to 18 with major depression were randomly allocated to either paroxetine, an older antidepressant drug called imipramine, or a placebo for eight weeks.

The researchers who reviewed the original 2001 study found that it seriously under-reported cases of suicidal or self-harming behaviour, and that several hundred pages of data were missing without clear reason. It is likely these did not look upon paroxetine favourably.

Data was also misconstrued. For example, the 2001 paper reported 265 adverse events for people taking paroxetine, while the clinical study report showed 338.

The review involved examining 77,000 pages of data made available by GSK – which, in hindsight, might have been 77,000 pages of unreliable data.

This study stands as a warning about how supposedly neutral scientific research papers may mislead readers by misrepresentation. The 2001 paper by GSK appears to have picked outcome measures to suit its results.

It subsequently came to light that the first draft of the paper was not actually written by the 22 academics named on it, but by a ghostwriter paid by GSK.

That $3 billion fine for GSK might even be seen as small in light of this. Certainly the reliability of industry-funded clinical trials, and how the process can be overhauled, is a question we need to consider for the future.