Media’s Marvellous Medicine

When it comes to our health, the media wields enormous influence over what we think. They tell us what’s good, what’s bad, what’s right and wrong, what we should and shouldn’t eat. When you think about it, that’s quite some responsibility. But do you really think that a sense of philanthropic duty is the driving force behind most of the health ‘news’ stories that you read? Who are we kidding? It’s all about sales, of course, and all too often that means the science plays second fiddle. Who wants boring old science getting in the way of a sensation-making headline?

When it comes to research – especially the parts we’re interested in, namely food, diet and nutrients – there’s a snag. The thing is, these matters are rarely, if ever, clear-cut. Let’s say there are findings from some new research that suggest a component of our diet is good for our health. Now academics and scientists are generally a pretty cautious bunch – they respect the limitations of their work and don’t stretch their conclusions beyond their actual findings. Not that you’ll think this when you hear about it in the media. News headlines are in your face and hard hitting. Fluffy uncertainties just won’t cut it. An attention-grabbing headline is mandatory; relevance to the research is optional. Throw in a few random quotes from experts – as the author Peter McWilliams stated, the problem with ‘experts’ is you can always find one ‘who will say something hopelessly hopeless about anything’ – and boom! You’ve got the formula for some seriously media-friendly scientific sex appeal, or as we prefer to call it, ‘textual garbage’. The reality is that a lot of the very good research into diet and health ends up lost in translation. Somewhere between its publication in a respected scientific journal and the moment it enters our brains via the media, the message gets a tweak here, a twist there and a dash of sensationalism thrown in for good measure, which leaves us floundering in a sea of half-truths and misinformation. Most of it should come with the warning: ‘does nothing like it says in the print’. Don’t get us wrong: we’re not just talking about newspapers and magazines here, the problem runs much deeper. Even the so-called nutrition ‘experts’, the health gurus who sell books by the millions, are implicated. We’re saturated in health misinformation.

Quite frankly, many of us are sick of this contagion of nutritional nonsense. So, before launching headlong into the rest of the book, take a step back and see how research is actually conducted, what it all means and what to watch out for when the media deliver their less-than-perfect messages. Get your head around these and you’ll probably be able to make more sense of nutritional research than most of our cherished health ‘gurus’.

Rule #1: Humans are different from cells in a test tube
At the very basic level, researchers use in-vitro testing, in which they isolate cells or tissues of interest and study them outside a living organism in a kind of ‘chemical soup’. This allows substances of interest (for example, a vitamin or a component of food) to be added to the soup to see what happens. So they might, for example, add vitamin C to some cancer cells and observe its effect. We’re stating the obvious now when we say that what happens here is NOT the same as what happens inside human beings. First, the substance is added directly to the cells, so they are often exposed to concentrations far higher than would normally be seen in the body. Second, humans are highly complex organisms, with intricately interwoven systems of almost infinite processes and reactions. What goes on within a few cells in a test tube or Petri dish is a far cry from what would happen in the body. This type of research is an important part of science, but scientists know its place in the pecking order – as an indispensable starting point of scientific research. It can give us valuable clues about how stuff works deep inside us, what we might call the mechanisms, before going on to be more rigorously tested in animals, and ultimately, humans. But that’s all it is, a starting point.

Rule #2: Humans are different from animals
The next logical step usually involves animal testing. Studying the effects of a dietary component in a living organism, not just a bunch of cells, is a big step closer to what might happen in humans. Mice are often used, due to convenience, consistency, a short lifespan, fast reproduction rates and a genome and biology closely shared with humans. In fact, some pretty amazing stuff has been shown in mice. We can manipulate a hormone and extend life by as much as 30%¹. We can increase muscle mass by 60% in two weeks. And we have shown that certain mice can even regrow damaged tissues and organs.

So, can we achieve all of that in humans? The answer is a big ‘no’ (unless you happen to believe the X-Men are real). Animal testing might be a move up from test tubes in the credibility ratings, but it’s still a long stretch from what happens in humans. You’d be pretty foolish to make a lot of wild claims based on animal studies alone.

To prove that, all we need to do is take a look at pharmaceutical drugs. Vast sums of money (we’re talking hundreds of millions) are spent trying to get a single drug to market. But the success rate is low. Of all the drugs that pass in-vitro and animal testing to make it into human testing, only 11% will prove to be safe and effective enough to hit the shelves⁵. For cancer drugs the rate of success is only 5%⁵. In 2003, the President of Research and Development at pharmaceutical giant Pfizer, John La Mattina, stated that ‘only one in 25 early candidates survives to become a prescribed medicine’. You don’t need to be a betting person to see these are seriously slim odds.

Strip it down and we can say that this sort of pre-clinical testing never, ever, constitutes evidence that a substance is safe and effective. These are research tools to try and find the best candidates to improve our health, which can then be rigorously tested for efficacy in humans. Alas, the media and our nutrition gurus don’t appear to care too much for this. Taking research carried out in labs and extrapolating the results to humans sounds like a lot more fun. In fact, it’s the very stuff of many a hard-hitting newspaper headline and bestselling health book. To put all of this into context, let’s take just one example of a classic media misinterpretation, and you’ll see what we mean.

Rule #3: Treat headlines with scepticism
Haven’t you heard? The humble curry is right up there in the oncology arsenal – a culinary delight capable of curing the big ‘C’. At least that’s what the papers have been telling us. ‘The Spice Of Life! Curry Fights Cancer’ decreed the New York Daily News. ‘How curry can help keep cancer at bay’ and ‘Curry is a “cure for cancer”’ reported the Daily Mail and The Sun in the UK. Could we be witnessing the medical breakthrough of the decade? Best we take a closer look at the actual science behind the headlines.

The spice turmeric, which gives some Indian dishes a distinctive yellow colour, contains relatively large quantities of curcumin, which has purported benefits in Alzheimer’s disease, infections, liver disease, inflammatory conditions and cancer. Impressive stuff. But there’s a hitch when it comes to curcumin. It has what is known as ‘poor bioavailability’. What that means is, even if you take large doses of curcumin, only tiny amounts of it get into your body, and what little does get in is cleared quickly. From a curry, the amount absorbed is so minuscule that it is not even detectable in the body.

So what were those sensational headlines all about? If you had the time to track down the academic papers being referred to, you would see it was all early stage research. Two of the articles were actually referring to in-vitro studies (basically, tipping some curcumin onto cancer cells in a dish and seeing what effect it had).

Suffice it to say, this is hardly the same as what happens when you eat a curry. The other article referred to an animal study, where mice with breast cancer were given a diet containing curcumin. Even allowing for the obvious differences between mice and humans, surely that was better evidence? The mice ate curcumin-containing food and absorbed enough for it to have a beneficial effect on their cancer. Sounds promising, until we see the mice had a diet that was 2% curcumin by weight. With the average person eating just over 2kg of food a day, 2% is a hefty 40g of curcumin. Then there’s the issue that the turmeric powder used in the average curry is itself only about 2% curcumin. Now, whoever’s out there conjuring up a curry containing 2kg of curry powder, please don’t invite us over for dinner anytime soon.
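
If you fancy checking the arithmetic yourself, here’s a rough back-of-the-envelope sketch in Python, using only the figures quoted above (and a deliberately crude like-for-like scaling by food weight):

```python
# Back-of-the-envelope check of the curry numbers quoted above.
mouse_diet_curcumin_fraction = 0.02   # the mice ate a diet that was 2% curcumin by weight
human_daily_food_kg = 2.0             # an average person eats just over 2 kg of food a day
curcumin_in_curry_powder = 0.02       # curry/turmeric powder is only about 2% curcumin

# Curcumin needed to match the mouse diet, scaling naively by food weight alone
curcumin_needed_kg = human_daily_food_kg * mouse_diet_curcumin_fraction      # 0.04 kg = 40 g

# Curry powder required to supply that much curcumin
curry_powder_needed_kg = curcumin_needed_kg / curcumin_in_curry_powder       # 2.0 kg

print(f"Curcumin needed per day: {curcumin_needed_kg * 1000:.0f} g")
print(f"Curry powder needed per day: {curry_powder_needed_kg:.1f} kg")
```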

This isn’t a criticism of the science. Curcumin is a highly bio-active plant compound that could possibly be formulated into an effective medical treatment one day. This is exactly why these initial stages of research are being conducted. But take this basic stage science and start translating it into public health advice and you can easily come up with some far-fetched conclusions. Let us proffer our own equally absurd headline: ‘Curry is a Cause of Cancer’. Abiding by the same rules of reporting used by the media, we’ve taken the same type of in-vitro and animal-testing evidence and conjured up a completely different headline. We can do this because some studies of curcumin have found that it actually causes damage to our DNA, and in so doing could potentially induce cancer.

As well as this, concerns about diarrhoea, anaemia and interactions with drug-metabolizing enzymes have also been raised. You see how easy it is to pick the bits you want in order to make your headline? Unfortunately, the problem is much bigger than just curcumin. It could just as easily be resveratrol from red wine, omega-3 from flaxseeds, or any number of other components of foods you care to mention that make headline news.

It’s rare to pick up a newspaper or nutrition book without seeing some new ‘superfood’ or nutritional supplement being promoted on the basis of less than rigorous evidence. The net result of this shambles is that the real science gets sucked into the media vortex and spat out in a mishmash of dumbed-down soundbites, while the nutritional messages we really should be taking more seriously get lost in a kaleidoscope of pseudoscientific claptrap, peddled by a media with about as much authority to advise on health as the owner of the local pâtisserie.

Rule #4: Know the difference between association and causation
If nothing else, we hope we have shown that jumping to conclusions based on laboratory experiments is unscientific, and probably won’t benefit your long-term health. To acquire proof, we need to carry out research that involves actual humans, and this is where one of the greatest crimes against scientific research is committed in the name of a good story, or to sell a product.

A lot of nutritional research comes in the form of epidemiological studies. These involve looking at populations of people and observing how much disease they get and seeing if it can be linked to a risk factor (for example, smoking) or some protective factor (for example, eating fruit and veggies). And one of the most spectacular ways to manipulate the scientific literature is to blur the boundary between ‘association’ and ‘causation’. This might all sound very academic, but it’s actually pretty simple.

Confusing association with causation means you can easily arrive at the wrong conclusion. For example, a far higher percentage of visually impaired people have Labradors compared to the rest of the population, so you might jump to the conclusion that Labradors cause sight problems. Of course we know better, that if you are visually impaired then you will probably have a Labrador as a guide dog. To think otherwise is ridiculous.

But apply the same scenario to the complex human body and it is not always so transparent. Consequently, much of the debate about diet and nutrition is of the ‘chicken versus egg’ variety. Is a low or high amount of a nutrient a cause of a disease, a consequence of the disease, or simply irrelevant?

To try and limit this confusion, researchers often use what’s known as a cohort study. Say you’re interested in studying the effects of diet on cancer risk. You’d begin by taking a large population that are free of the disease at the outset and collect detailed data on their diet. You’d then follow this population over time, let’s say ten years, and see how many people were diagnosed with cancer during this period. You could then start to analyse the relationship between people’s diet and their risk of cancer, and ask a whole lot of interesting questions. Did people who ate a lot of fruit and veggies have less cancer? Did eating a lot of red meat increase cancer? What effect did drinking alcohol have on cancer risk? And so on.
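
To make that concrete, here’s a minimal sketch of the kind of comparison a cohort study allows. The groups, sizes and case counts are invented purely for illustration; they’re not from any real study.

```python
# Toy sketch of a cohort comparison. Groups, sizes and case counts are invented.
cohort = {
    # diet group: (people followed for ten years, cancer diagnoses during follow-up)
    "high fruit & veg": (50_000, 1_500),
    "low fruit & veg":  (50_000, 2_100),
}

incidence = {group: cases / n for group, (n, cases) in cohort.items()}

for group, risk in incidence.items():
    print(f"{group}: ten-year incidence = {risk:.1%}")

# Relative risk: how much more (or less) common disease was in one group than the other.
# Note this shows an association only -- it cannot by itself prove causation.
relative_risk = incidence["high fruit & veg"] / incidence["low fruit & veg"]
print(f"Relative risk, high vs low intake: {relative_risk:.2f}")
```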

The European Prospective Investigation into Cancer and Nutrition (EPIC), which we refer to often in this book, is an example of a powerfully designed cohort study, involving more than half a million people in ten countries. These studies are a gold mine of useful information because they help us piece together dietary factors that could influence our risk of disease.

But, however big and impressive these studies are, they’re still observational. As such they can only show us associations; they cannot prove causality. So if we’re not careful about the way we interpret this kind of research, we run the risk of drawing some whacky conclusions, just like we did with the Labradors. Let’s get back to some more news headlines, like this one we spotted: ‘Every hour per day watching TV increases risk of heart disease death by a fifth’.

When it comes to observational studies, you have to ask whether the association makes sense. Does it have ‘biological plausibility’? Are there harmful rays coming from the TV that damage our arteries, or is it that the more time we spend on the couch watching TV, the less time we spend being active and improving our heart health? The latter is true, of course, and there’s an ‘association’ between TV watching and heart disease, not ‘causation’.

So even with cohorts, the champions of the epidemiological studies, we can’t prove causation, and that’s all down to what’s called ‘confounding’. This means there could be another variable at play that causes the disease being studied, at the same time as being associated with the risk factor being investigated. In our example, it’s the lack of physical activity that increases heart disease and is also linked to watching more TV.
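
To see how easily a confounder can conjure up an ‘association’, here’s a toy simulation of our TV example (in Python, with made-up numbers of our own): disease risk depends only on physical activity, yet the heavy TV watchers still appear to fare worse.

```python
# Toy simulation of confounding: TV time is associated with heart disease only
# because both are driven by low physical activity. All numbers are invented.
import random

random.seed(1)

def simulate_person():
    activity = random.random()                             # 0 = sedentary, 1 = very active
    tv_hours = 6 * (1 - activity) + random.gauss(0, 0.5)   # less active -> more TV
    disease = random.random() < 0.3 * (1 - activity)       # risk depends on activity only
    return tv_hours, disease

people = [simulate_person() for _ in range(100_000)]

heavy_tv = [disease for tv, disease in people if tv >= 3]
light_tv = [disease for tv, disease in people if tv < 3]

print(f"Disease rate with >=3h TV a day: {sum(heavy_tv) / len(heavy_tv):.1%}")
print(f"Disease rate with <3h TV a day:  {sum(light_tv) / len(light_tv):.1%}")
# TV plays no causal role in this simulation, yet the two rates differ markedly.
```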

This issue of confounding variables is just about the biggest banana skin of the lot. Time and time again you’ll find nutritional advice promoted on the basis of the findings of observational studies, as though this type of research gives us stone cold facts. It doesn’t. Any scientist will tell you that. This type of research is extremely useful for generating hypotheses, but it can’t prove them.

Rule #5: Be on the lookout for RCTs (randomized controlled trials)
An epidemiological study can only form a hypothesis, and when it offers up some encouraging findings, these then need to be tested in what’s known as an intervention, or clinical, trial before we can talk about causality. Intervention trials aim to test the hypothesis by taking a group of people who are as similar to each other as possible, testing an intervention on a proportion of them over a period of time and observing how it influences the measured outcome.
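
As a very rough sketch of the core idea (every participant, rate and outcome below is invented for illustration), randomisation is what does the heavy lifting:

```python
# Toy sketch of an intervention (clinical) trial: randomly allocate a similar
# population to intervention or control, then compare the measured outcome.
import random

random.seed(7)

participants = [f"P{i:03d}" for i in range(1, 401)]   # 400 hypothetical volunteers
random.shuffle(participants)                          # randomisation balances confounders on average

intervention = set(participants[:200])                # e.g. asked to follow the dietary change being tested
control = set(participants[200:])                     # e.g. asked to carry on as usual

# Pretend follow-up: whether each participant develops the condition being measured
develops_condition = {
    p: random.random() < (0.08 if p in intervention else 0.12)
    for p in participants
}

def event_rate(group):
    return sum(develops_condition[p] for p in group) / len(group)

print(f"Event rate, intervention group: {event_rate(intervention):.1%}")
print(f"Event rate, control group:      {event_rate(control):.1%}")
```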

The role of pharmacy in healthcare

Pharmacists are experts on the actions and uses of drugs, including their chemistry, their formulation into medicines and the ways in which they are used to manage diseases. The principal aim of the pharmacist is to use this expertise to improve patient care. Pharmacists are in close contact with patients and so have an important role both in assisting patients to make the best use of their prescribed medicines and in advising patients on the appropriate self-management of self-limiting and minor conditions. Increasingly, this latter aspect includes OTC prescribing of effective and potent treatments. Pharmacists are also in close working relationships with other members of the healthcare team – doctors, nurses, dentists and others – where they are able to give advice on a wide range of issues surrounding the use of medicines.

Pharmacists are employed in many different areas of practice. These include the traditional ones of hospital and community practice as well as more recently introduced advisory roles at health authority/ health board level and working directly with general practitioners as part of the core, practice-based primary healthcare team. Additionally, pharmacists are employed in the pharmaceutical industry and in academia.

Members of the general public are most likely to meet pharmacists in high street pharmacies or on a hospital ward. However, pharmacists also visit residential homes (see Ch. 49), make visits to patients’ own homes and are now involved in running chronic disease clinics in primary and secondary care. In addition, pharmacists will also be contributing to the care of patients through their dealings with other members of the healthcare team in the hospital and community setting.

Historically, pharmacists and general practitioners have a common ancestry as apothecaries. Apothecaries both dispensed medicines prescribed by physicians and recommended medicines for those members of the public unable to afford physicians’ fees. As the two professions of pharmacy and general practice emerged this remit split so that pharmacists became primarily responsible for the technical, dispensing aspects of this role. With the advent of the NHS in the UK in 1948, and the philosophy of free medical care at the point of delivery, the advisory function of the pharmacist further decreased. As a result, pharmacists spent more of their time in the dispensing of medicines – and derived an increased proportion of their income from it. At the same time, radical changes in the nature of dispensing itself, as described in the following paragraphs, occurred.

In the early years, many prescriptions were for extemporaneously prepared medicines, either following standard ‘recipes’ from formularies such as the British Pharmacopoeia (BP) or British Pharmaceutical Codex (BPC), or following individual recipes written by the prescriber (see Ch. 30). The situation was similar in hospital pharmacy, where most prescriptions were prepared on an individual basis. There was some small-scale manufacture of a range of commonly used items. In both situations, pharmacists required manipulative and time-consuming skills to produce the medicines. Thus a wide range of preparations was made, including liquids for internal and external use, ointments, creams, poultices, plasters, eye drops and ointments, injections and solid dosage forms such as pills, capsules and moulded tablets (see Chs 32–39). Scientific advances have greatly increased the effectiveness of drugs but have also rendered them more complex, potentially more toxic and requiring more sophisticated use than their predecessors. The pharmaceutical industry developed in tandem with these drug developments, contributing to further scientific advances and producing manufactured medical products. This had a number of advantages. For one thing, there was an increased reliability in the product, which could be subjected to suitable quality assessment and assurance. This led to improved formulations, modifications to drug availability and increased use of tablets, which offer greater convenience for the patient. Some doctors did not agree with the loss of flexibility in prescribing which resulted from having to use predetermined doses and combinations of materials. From the pharmacist’s point of view there was a reduction in the time spent in the routine extemporaneous production of medicines, which many saw as an advantage. Others saw it as a reduction in the mystique associated with the professional role of the pharmacist. There was also an erosion of the technical skill base of the pharmacist. A look through copies of the BPC in the 1950s, 1960s and 1970s will show the reduction in the number and diversity of formulations included in the Formulary section. That section has been omitted from the most recent editions. However, some extemporaneous dispensing is still required and pharmacists remain the only professionals trained in these skills.

The changing patterns of work of the pharmacist, in community pharmacy in particular, led to an uncertainty about the future role of the pharmacist and a general consensus that pharmacists were no longer being utilized to their full potential. If the pharmacist was not required to compound medicines or to give general advice on diseases, what was the pharmacist to do?

The need to review the future for pharmacy was first formally recognized in 1979 in a report on the NHS which had the remit to consider the best use and management of its financial and manpower resources. This was followed by a succession of key reports and papers, which repeatedly identified the need to exploit the pharmacist’s expertise and knowledge to better effect. Key among these reports was the Nuffield Report of 1986. This report, which included nearly 100 recommendations, led the way to many new initiatives, both by the profession and by the government, and laid the foundation for the recent developments in the practice of pharmacy, which are reflected in this book.

Radical change, as recommended in the Nuffield Report, does not necessarily happen quickly, particularly when regulations and statute are involved. In the 28 years since Nuffield was published, there have been several different agendas which have come together and between them facilitated the paradigm shift for pharmacy envisaged in the Nuffield Report. These agendas will be briefly described below. They have finally resulted in extensive professional change, articulated in the definitive statements about the role of pharmacy in the NHS plans for pharmacy in England (2000), Scotland (2001) and Wales (2002) and the subsequent new contractual frameworks for community pharmacy. In addition, other regulatory changes have occurred as part of government policy to increase convenient public access to a wider range of medicines on the NHS (see Ch. 4). These changes reflect general societal trends to deregulate the professions while having in place a framework to ensure safe practice and a recognition that the public are increasingly well informed through widespread access to the internet. For pharmacy, therefore, two routes for the supply of prescription only medicines (POM) have opened up. Until recently, POM medicines were only available on the prescription of a doctor or dentist, but as a result of the Crown Review in 1999, two significant changes emerged.

First, patient group directions (PGDs) were introduced in 2000. A PGD is a written direction for the supply, or supply and administration, of a POM to persons generally by named groups of professionals. So, for example, under a PGD, community pharmacists could supply a specific POM antibiotic to people with a confirmed diagnosis of infection, e.g. azithromycin for Chlamydia.

Second, prescribing rights for pharmacists, alongside nurses and some other healthcare professionals, have been introduced, initially as supplementary prescribers and more recently, as independent prescribers.

The council of the Royal Pharmaceutical Society of Great Britain (RPSGB) decided that it was necessary to allow all members to contribute to a radical appraisal of the profession, what it should be doing and how to achieve it. The ‘Pharmacy in a New Age’ consultation was launched in October 1995, with an invitation to all members to contribute their views to the council. These were combined into a subsequent document produced by the council in September 1996 called Pharmacy in a New Age: The New Horizon. This indicated that there was overwhelming agreement from pharmacists that the profession could not stand still.

The main output of this professional review was a commitment to take forward a more proactive, patient-centred clinical role for pharmacy using pharmacists’ skills and knowledge to best effect.

An overview of mental health

Mental illness continues to be one of the most misunderstood, mythologised and controversial of issues. Described for as long as human beings have been able to record thoughts and behaviours, it is at once a medical, social and at times political issue. It can lead to detention against one’s will and has its very own Act of Parliament, and yet we really know very little about it.

Societies through the ages have responded to this mystery by the locking up of people whose sometimes bizarre behaviour was deemed dangerous, unsuitable or just plain scandalous. Only within the relatively recent past have the tall, thick walls of the asylum been dismantled and those who remained institutionalised and hidden allowed out into the community.

Little wonder then that mental health and mental disorder remain misunderstood by most, and frightening to many. Recent reports suggest that stigma is on the decline (Time to Change 2014) but progress has been slow. Despite the best efforts of soap scriptwriters, high-profile celebrities ‘coming clean’ about mental illness, and the work of mental health charities and support groups in demystifying diagnoses such as depression, we still see and hear many examples of discrimination and myth.

Given the sheer ubiquity of mental illness throughout the world, the stigma and mystery are surprising. The most recent national survey confirms the now well-known statistic that just under one in four English adults is experiencing a diagnosable mental disorder at any one time (McManus et al. 2009). Depression is identified by the World Health Organization as the world’s leading cause of years of life lost due to disability (WHO 2009).

Relatively few of those experiencing mental health problems will come to the attention of a GP, let alone a mental health professional. This is especially so in the developing world where initiatives to develop local mental health interventions are gaining considerable ground after generations of cultural stigma and ignorance (WHO 2009). But even in parts of the world where people have ready access to medical help, many suffer alone rather than face the apparent shame of experiencing mental health problems.

Perhaps part of our reluctance to accept mental illness lies with difficulties determining mental health. We are made aware of factors that determine positive mental health. Connecting with people, being active, learning new things, acts of altruism and being aware of oneself (NHS 2014) have been evidenced as ways of promoting our well-being, but mental order remains rather more loosely defined than mental disorder.

So what are the systems used to categorise and define mental illness? In the United Kingdom, mental health professionals often refer to an ICD-10 diagnosis to refer to a patient’s condition. This is the World Health Organization’s (WHO) diagnostic manual, which lists all recognised (by WHO at least) diseases and disorders, including the category ‘mental and behavioural disorders’ (WHO 1992). The Diagnostic and Statistical Manual of Mental Disorders (better known as DSM-5) is more often used in the United States and elsewhere in the world (American Psychiatric Association 2013). These two sets of standards are intended to provide global standards for the recognition of mental health problems for both day-to-day clinical practice and clinical researchers, although the tools used by the latter group to measure symptoms often vary from place to place and can interfere with the ‘validity’ of results, or in other words the ability of one set of results to be compared with those from a different research team.

ICD-10 ‘mental and behavioural disorders’ lists 99 different types of mental health problem, each of which is further sub-divided into a variety of more precise diagnoses, ranging from the relatively common and well known (such as depression or schizophrenia) to more obscure diagnoses such as ‘specific developmental disorders of scholastic skills’.

The idea of using classification systems and labels to describe the highly complex vagaries of the human mind often meets with fierce resistance in mental health circles. The ‘medical model’ of psychiatry – diagnosis, prognosis and treatment – is essentially a means of applying the same scientific principles to the study and treatment of the mind as physical medicine applies to diseases of the body. An X-ray of the mind is impossible, a blood test will reveal nothing about how a person feels, and fitting a collection of psychiatric symptoms into a precise diagnostic category does not always yield a consistent result.

In psychiatry, symptoms often overlap with one another. For example, a person with obsessive compulsive disorder may believe that if they do not switch the lights on and off a certain number of times and in a particular order then a disaster will befall them. To most, this would appear a bizarre belief, to the extent that the inexperienced practitioner may label that person as ‘delusional’ or ‘psychotic’. Similarly, a person in the early stages of Alzheimer’s disease may often experience many of the ‘textbook’ features of clinical depression, such as low mood, poor motivation and disturbed sleep. In fact, given the tragic and predictable consequences of dementia it is unsurprising that sufferers often require treatment for depression, particularly while they retain the awareness to know that they are suffering from a degenerative condition with little or no improvement likely.

Psychiatry may often be a less-than-precise science, but the various diagnostic terms are commonplace in health and social care and have at least some descriptive power, although it is also important to remember that patients or clients may experience a complex array of feelings, experiences or ‘symptoms’ that may vary widely with the individual over time and from situation to situation.

Defining what is (or what is not) a mental health problem is really a matter of degrees. Nobody could be described as having ‘good’ mental health every minute of every day. Any football supporter will report the highs and lows encountered on an average Saturday afternoon, and can easily remember the euphoria of an important win or the despondency felt when their team is thrashed six-nil on a cold, wet Tuesday evening. But this could hardly be described as a ‘mental health problem’, and for all but the most ardent supporters their mood will have lifted within a short space of time.

However, the same person faced with redundancy, illness or the loss of a close family member might encounter something more akin to a ‘problem’. They may experience, for example, anger, low mood, tearfulness, sleep difficulties and loss of appetite. This is a quite normal reaction to stressful life events, although the nature and degree of reaction is of course dependent on a number of factors, such as the individual’s personality, the circumstances of the loss and the support available from those around them at the time. In most circumstances the bereaved person will recover after a period of time and will return to a normal way of life without the need for medical intervention of any kind. On the other hand, many people will experience mental health problems serious enough to warrant a visit to their GP.

The majority of people with mental health problems are successfully assessed and treated by GPs and other primary care professionals, such as counsellors. The Improving Access to Psychological Therapies (IAPT) programme is a now well-established approach to treating mental health problems in the community. GPs can make an IAPT referral for depressed and/or anxious patients who have debilitating mental health issues but who don’t require more specialised input from a psychiatrist or community mental health nurse. Most people receiving help for psychological problems will normally be able to carry on a reasonably normal lifestyle either during treatment or following a period of recovery. A small proportion of more severe mental health issues will necessitate referral to a Community Mental Health Team (CMHT), with a smaller still group of patients needing in-patient admission or detention under the Mental Health Act.

Mental health is a continuum at the far end of which lies what professionals refer to as severe and enduring mental illness. This is a poorly defined category, but can be said to include those who suffer from severely debilitating disorders that drastically reduce their quality of life and that may necessitate long-term support from family, carers, community care providers, supported housing agencies and charities. The severe and enduring mentally ill will usually have diagnoses of severe depression or psychotic illness, and will in most cases have some degree of contact with mental health professionals.

Why Asians are more prone to Type 2 diabetes than Westerners

Thirty-four-year-old Alan Phua is what you might describe as a typical Chinese man. He exercises three to five times a week in a country that places a high emphasis on healthy lifestyles. He also carefully watches what he eats and is strict about his diet.

Alan lives in Singapore. In addition to two and a half years of military service when they turn eighteen, citizens have annual reservist training for two weeks until they turn forty. Failing to meet targets for physical exercises such as chin-ups, standing broad jumps, sit-ups, shuttle runs and a 1.5-mile run means remedial physical training every few months until these standards are met. It is not all negative, though: meeting or exceeding these targets is rewarded with financial incentives. In other words, living in Singapore as a male means there is a strong push to get fit and stay fit.

The reasons for this are very clear. Singapore is a small country surrounded by two large neighbours, Malaysia and Indonesia. Its population of five million citizens means that, like Israel, it has to rely on a citizen reservist force should the threat of war ever loom. Most citizens seem to be of the mindset that war would never break out, since the country is so small that any military action would damage its infrastructure and paralyse it, and the military is in any case only a deterrent force. Even so, readiness for military action gives leverage in negotiations between nations. For example, if the countries disagree over the supply of water that Malaysia gives Singapore to refine, and the discussions escalate towards a military standoff, having a reservist army puts the country in a better negotiating position. But while many may claim that a war is hypothetical, there is a simpler reason for maintaining fitness. A fitter population means less stress on the healthcare system. Singapore has the kind of sustainable healthcare system that many countries are seeking to adopt.

Like many others in Singapore, Alan does not produce enough insulin. As a result, sugar accumulates in his bloodstream. The lack of insulin leads to other health issues, such as general fatigue, infections and wounds that fail to heal. However, all is not lost. Eating properly and keeping up a good level of exercise can prevent the blood glucose level from rising and developing into diabetes.

Local researchers from the country’s National University Hospital (NUH), working together with Janssen Pharmaceuticals, have discovered that the reason why Asians are more susceptible than Westerners to developing Type 2 diabetes is the inability of their bodies to produce high enough levels of insulin.

Even though the finding was based only on a small sample of 140 mostly Chinese participants, the data, if expanded and refined, could point the way to helping patients with diabetes manage it better, not just locally but across the region. Doctors believe that better dietary advice and a better selection of drugs would help patients to treat diabetes. The preliminary findings are part of the country’s largest diabetes study, launched last year. The five-year ongoing study has recruited around 1,300 participants and aims to eventually nearly double that.

The researchers did, however, note that the ethnic profile of the participants was fairly narrow, and that participants from a wider range of backgrounds will be needed before the results can be applied to the general population.

Currently, the statistics show that one in three Singaporeans is at risk of developing diabetes, and one out of every fourteen is already diabetic. Type 2 diabetes comes about because the pancreas produces insufficient insulin, or because the body is resistant to insulin.

A previous study found that 8 per cent of Chinese people with a Body Mass Index (BMI) of 23 have diabetes. A BMI of 23 is within the normal weight range for Caucasians, yet at that level Chinese people develop diabetes at four times the rate of their European counterparts. The researchers said this highlights the importance of avoiding too much high-glucose food, such as foods rich in simple carbohydrates, which include white rice and sugar.
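
For reference, BMI is simply weight in kilograms divided by the square of height in metres. Here is a quick sketch; the example weight and height are ours, not the study’s:

```python
# BMI = weight (kg) / height (m) squared -- the measure referred to above.
# The example weight and height are ours, chosen to land near the BMI-23 threshold.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(f"BMI: {bmi(66.5, 1.70):.1f}")   # a 1.70 m person weighing 66.5 kg -> about 23.0
```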

The findings could also lay the foundation for efforts to test whether therapies that target insulin secretion and the ability to make more insulin could be more effective in the local population, and lead to customised diabetes treatment.

What bearing does this have on us, and what action can we take? A good start would be to manage our diet and avoid eating high-glucose foods such as rice too often. Also, try adopting a more active lifestyle!

Dirty laundry a powerful magnet for bedbugs

Bedbugs are small insects that suck human blood for their sustenance. They hide around beds in small cracks and crevices. Their presence can be identified by small bugs or tiny white eggs in the crevices and joints of furniture and mattresses. You might also find mottled bedbug shells in these areas. A third sign is tiny black spots on the mattress, which are faecal matter, or red blood spots. And if you have itchy bites on your skin, that is a clear sign. Unfortunately, it is this fourth sign that usually gives people the impetus to check their living areas for bugs, rather than the need to maintain hygiene by changing sheets.

The incidence of bedbugs has increased globally, and one theory is that visitors to countries where hygiene levels are less stringent bring them back to their own country. Cheap travel, in terms of both rail tickets and air fares, has enabled people to visit far-flung places. But one thing that has not been so apparent is how the bed bugs are carried back. It had been thought that bugs are drawn to the presence of a human being – but surely they don’t piggyback on one across regions and continents?

The authors of a recent study have a new perspective on the matter. They believe that bed bugs are drawn to evidence of human presence, and not necessarily just to the presence of a human host. In places where hygiene is slightly lacking, the bugs collect in the dirty laundry of tourists and are then transported back to the tourists’ own homes, where they feed and multiply.

While this was an experimental study, the results are interesting because it had previously been thought that bed bugs prefer to be near sleeping people because they can sense blood.

The experiments leading to these results were conducted in two identical rooms.

Clothes which had been worn for three hours of daily human activity were taken from four volunteers. As a basis of comparison, clean clothes were also used. Both sets of clothes were placed into clean, cotton tote bags.

The rooms were identically set to 22 degrees Celsius, and the only difference was that one room had higher carbon dioxide levels than the other, to simulate the presence of a human being.

A sealed container with bed bugs in it was placed in each room for 48 hours. After twenty-four hours, when the carbon dioxide levels had settled, the bugs were released.

Four clothing bags were introduced into each room – two containing soiled laundry and two containing clean laundry – presented in a way that mimicked the placement of clean and soiled clothes in a hotel room.

After a further four days, the number of bedbugs and their locations were recorded. The experiment was repeated six times, and each experiment was preceded by a complete clean of the room with bleach.

The results in both rooms were similar, in that bed bugs gravitated towards the bags containing soiled clothes. The level of carbon dioxide was not a distinguishing factor in this instance, and the results suggested that traces of human odour were enough to attract bed bugs. The physical presence of a human being was not necessary.

The carbon dioxide did, however, influence behaviour, in that more bed bugs left the container in the room with the elevated carbon dioxide level.

In other words, the carbon dioxide levels in a room are enough to alert bed bugs to human presence, and traces of human odour in clothes are enough to attract them.

Why is this hypothesis useful to know? If you go to a place where the hygiene is suspect, then during the night when you are asleep the bed bugs know you are present, and if they do not bite you then, they may come out during the day and embed themselves in your dirty laundry. The researchers concluded that careful management of holiday clothing could help you avoid bringing home bedbugs.

The simple way of protecting yourself against these pesky hitchhikers could just be to keep dirty laundry in sealable bags, such as those with a zip lock, so they cannot access it. Whether or not it means they will turn their attention to you during your holiday is a different matter, but at least it means you will avoid bringing the unwanted bugs back into your own home.

The study was carried out by researchers from the University of Sheffield and was funded by the Department of Animal & Plant Sciences within the same university.

More research is of course needed. For example, if there were a pile of unwashed clothes while someone was sleeping in the room, would the bugs gravitate towards the human or towards the clothes? It is more likely that they would move towards the human, but that kind of theory is difficult to test without willing volunteers!

Also, did the bugs in the room only head for the unwashed clothes because of the absence of a human, or did the proximity of the clothes to the container lead them to behave the way they did? Other factors by which bed bugs may be drawn to where they settle are also not accounted for. Perhaps in the absence of a human being in the room, bed bugs head for the next best alternative, which is clothes with traces of human odour or skin cells, but with a human being in the room, they might rely on temperature differences to know where to zoom in. In other words, instead of detecting human presence using carbon dioxide, they may rely on the difference in temperature between the human body and its surroundings (the human body is at 36.9 degrees Celsius).

Carbon dioxide levels have been shown to influence mosquitoes and how they react but perhaps bed bugs rely on other cues.

There could be other factors that cannot be, or were not, recreated in the controlled environment of the experiment.

Ever wonder what it was like in past centuries? Did people have to deal with bed bugs if they lived in Baroque times?

Nobody knows, but one thing is for sure: getting rid of bed bugs is a bothersome business, so if you can prevent them getting into your home in the first place, all the better!

Where Will factors in mental health treatment

If medication is a physical stabiliser, is therapy a mental stabiliser?

If you’ve read the last few posts you might have come to the conclusion that, as far as mental health is concerned, the line of thinking in this blog is that an approach suitable for long-term and lasting treatment is part medication and part therapy. Medication initially works best for more serious cases, and milder forms of mental health illness may be treatable without prescription medication, but for the long term it is better to wean patients off the medication. Not simply because the use of medication over longer periods breeds addiction and dependency and causes changes to the body which may be harmful, but because, for the health service, it is an unsustainable form of treatment that continues to deplete the environment of its resources while contributing to climate change and extreme weather. It seems strange to have to mention climate change in a medical blog, but essentially this is what we can trace it back to.

Medicine, especially for serious mental health cases, is an effect-suppressant that minimises immediate symptoms while buying time for alternative therapies that promote long-term solutions to kick in. But there are those who question whether medication is even necessary at all. After all, the body does a pretty good job of healing itself when we get cuts. Those who subscribe to this view hold that, given time, the body does what it needs to prepare itself for survival and growth.

The only problem is that time is not always an available resource. Sometimes we need results in a short space of time, and do not have the luxury of watching the effects of mental illness dwindle away over years. Medication provides a higher level of immediacy to treatment. To some, medication is simply flooding the body with chemicals it could obtain or manufacture from within, but in a shorter span of time and at a higher concentration. It gives the body what it needs in an intensive period rather than over the longer span of time that the non-medication proponents advocate.

Some go further and suggest this no-medication approach can be extended to the therapy aspect of mental health treatment. They argue that therapy, counselling or any other cognitive method of treatment only serves to increase stresses rather than decrease them. While no one would ever advocate a completely non-medicated and non-therapy treatment for mental health illnesses, and the current thinking is a part-medication and part-therapy approach, there are those who might consider a non-medicated but supported therapy approach. Another variant of this is the medicated but no-therapy group. It is this last group which we will consider further.

On the face of it, it seems preposterous to even suggest it. If we have believed that mental health illnesses can only be treated in the long term with therapies such as counselling, then how is it even possible to consider a zero-therapy treatment group?

Proponents of the above idea hold that therapy causes stress rather than dealing with it on a long-term basis. What patients really need, it is argued, is mental space to dwell on their lives and reflect on how they are living; then, in order to make long-term changes, they have to find solutions within themselves and the will to apply them. Methods such as counselling and cognitive therapy already exist, but because the solutions are arrived at through meetings between the counsellor and patient, certain patients may view the changes they have to make as being dispensed by the counsellor, and see them as extrinsic factors. Hence the guidance may be less effective. However, if patients are given the time and space to reflect on what they need to do, having examined their situation in detail for themselves, they may be more effective in finding the will to put those actions into practice.

Take, for example, the caterpillar. Cocooned in security, it makes minute adjustments day by day to prepare itself for the life ahead. To the outsider it looks as if nothing is going on, but this could not be further from the truth. When it is about to break out and emerge as a butterfly, it has to struggle and somehow bridge the gap from where it is to where it must be. The final trials, as it tries to break out from the cocoon, actually help to strengthen and develop its wings permanently. Maturity is arrived at without any extrinsic factors. The caterpillar made it on its own. If someone had helped it, perhaps by thinking to widen the gap through which it must emerge, the lack of pressure and resistance would actually cause the emerging butterfly to have weaker wings and a poorer chance of long-term survival.

Those who point to a no-therapy solution claim that the guidance of the counsellor, psychotherapist or assisting care individual actually puts a timeframe on what could otherwise be an unhurried adaptive process for the mental health patient. A counsellor is paid, either by the mental health patient directly or by a health service. The presence of a counsellor may therefore impose a time limit by which progress must be made, because healthcare funds will run out, or because accountability demands that the patient make progress at a speed that may not be concordant with the natural run of things. The pressure to reach a certain mental stage by a certain time may only impose an additional, counter-productive burden.

A common factor in depression is dwelling on the gulf that exists between where one is and where one wants to be. Prolonged over-emphasis on the disconnect between these two disparate worlds is one of the reasons why individuals develop unhappiness and long-term depression. Yet the argument could be made that counselling and cognitive therapy, while aiming to bridge that gap, may not be effective in helping patients develop the skills and will to bridge it themselves in order to take their development forward. Often the development has to follow the patient’s natural timing and pace, and if this important cornerstone of counselling is disturbed, then the advice and guidance received from the counsellor will merely be more pieces of information dropping into the gulf and widening it further.

Some point to a period of reflective solitude as the necessary key to a long-term solution. The individual goes at a pace he is suited to, slowly adapting to the needs of his situation and developing the skills for long-term recovery. A self-monitoring form of silence and meditation is imposed. The theory behind this thinking could not be more different from traditional approaches. Where traditionally some form of intervention might be applied to, say, an individual lying in bed and unable to face the day ahead, either through the dispensing of advice such as “Man up! Toughen up!” or through visits to therapists, proponents of the reflective solitude theory view the process as the individual resting himself in preparation for the changes ahead, akin to the caterpillar. The belief is that the mere thought of an activity triggers physical processes in the motor nerves, so by resting, the individual is clearing his mind and soul and preparing his body before he can fill it with more useful purpose. It is not a major problem if the resting takes place over a period of weeks. But the belief is that ultimately the individual will feel compelled to make some changes to better his situation, and the will to do so will have been found.

To take the argument further, and possibly to an extreme, does therapy perform only the role of a distractor or mental substitute? While medication performs the function of a physical stabiliser, does therapy perform the role of a mental stabiliser, stabilising the mood swings and thoughts of the affected individual, before Will, binding these altogether, prompts the individual to leap across the gulf between “where I am” and “where I want to be”?

If you believe that real, long-lasting change can only come about when the mind and body are relatively stable, and that, given time, an individual possesses the inherent power to heal themselves of mental illness and free themselves from the shackles of the likes of depression, then you might make the case that therapy isn’t as important as it is made out to be. Is therapy really necessary in this case, and can it be replaced by recreational interests, for example, where parts of the brain that are latent come to the fore and override the parts of the brain that trigger mental illness?

It would be simplistic to draw a direct link between mental health and recreational interests or hobbies. Hobbies do not directly cure mental illnesses. But what they can possibly give an individual is a sense of achievement and empowerment, subtly developing the mindset and will that change can be attained. The subtle aspect of this development is important: it works indirectly, staying hidden until the affected individual one day takes stock of his development and can see measurable progress that could spur him on to make great strides in matters of more concern. If, for example, a mental health sufferer takes up a hobby, such as learning a musical instrument like the piano, the time and energy invested in it may draw excess energy and time away from unnecessary mental worry, resulting in a greater sense of overall well-being.