Airbnb-style recuperation for hospital patients

Would you welcome a stranger into your home? Would you have a spare room set aside for them? Perhaps not. But what if you were paid to do so? This is what some hospital bosses are considering to relieve overcrowding in hospital wards: patients would do their recuperating in private homes rather than in hospital. You offer a room if you have one available, and the hospital rents it from you for a patient. It is like an Airbnb for hospitals.

On the face of it, this seems like a good idea. Hospital overcrowding is lessened, homeowners get a bit of spare cash, the recuperating patient gets a bit of company … everyone’s happy. Patients staying out of hospitals means that the backlog of operations can be cleared more quickly, resulting in a more streamlined NHS that benefits every citizen.

This idea is being piloted by the startup CareRooms. “Hosts”, who do not necessarily need to have previous experience in healthcare, could earn £50 a night, up to a maximum of £1,000 a month, putting up local residents who are awaiting discharge from hospital. The pilot will start with 30 patients, and the hope is that it will expand.

Age UK claims that patients are being “marooned” in hospitals, taking up beds, while 2.2 million days are lost annually to delayed transfers of care.

The specifics, however, do not seem to hold up to scrutiny. Who is responsible for the overall welfare of the patient? Once a patient is transferred to this “care” home, responsibility for medical care is devolved to someone with only basic first-aid training.

Prospective hosts are also required to heat up three microwave meals a day and supply drinks. Unfortunately, the scheme opens up issues of safeguarding, governance and possible financial and emotional abuse of people at their most vulnerable.

The recuperating patients will “get access to a 24-hour call centre, tele-medical GP and promised GP consultation within four hours.”

The underlying question, though, is: would you want your loved ones to be put through this kind of care?

This is cost-cutting at its worst. The NHS is cutting costs, cutting ties and cutting responsibilities for those supposedly under its care. It would be a sad day if this kind of devolved-responsibility plan were approved.

Media’s Marvellous Medicine

When it comes to our health, the media wields enormous influence over what we think. They tell us what’s good, what’s bad, what’s right and wrong, what we should and shouldn’t eat. When you think about it, that’s quite some responsibility. But do you really think that a sense of philanthropic duty is the driving force behind most of the health ‘news’ stories that you read? Who are we kidding? It’s all about sales, of course, and all too often that means the science plays second fiddle. Who wants boring old science getting in the way of a sensation-making headline?

When it comes to research – especially the parts we’re interested in, namely food, diet and nutrients – there’s a snag. The thing is, these matters are rarely, if ever, clear-cut. Let’s say there are findings from some new research that suggest a component of our diet is good for our health. Now academics and scientists are generally a pretty cautious bunch – they respect the limitations of their work and don’t stretch their conclusions beyond their actual findings. Not that you’ll think this when you hear about it in the media. News headlines are in your face and hard hitting. Fluffy uncertainties just won’t cut it. An attention-grabbing headline is mandatory; relevance to the research is optional. Throw in a few random quotes from experts – as the author Peter McWilliams stated, the problem with ‘experts’ is you can always find one ‘who will say something hopelessly hopeless about anything’ – and boom! You’ve got the formula for some seriously media-friendly scientific sex appeal, or as we prefer to call it, ‘textual garbage’.

The reality is that a lot of the very good research into diet and health ends up lost in translation. Somewhere between its publication in a respected scientific journal and the moment it enters our brains via the media, the message gets a tweak here, a twist there and a dash of sensationalism thrown in for good measure, which leaves us floundering in a sea of half-truths and misinformation. Most of it should come with the warning: ‘does nothing like it says in the print’.

Don’t get us wrong: we’re not just talking about newspapers and magazines here; the problem runs much deeper. Even the so-called nutrition ‘experts’, the health gurus who sell books by the millions, are implicated. We’re saturated in health misinformation.

Quite frankly, many of us are sick of this contagion of nutritional nonsense. So, before launching headlong into the rest of the book, take a step back and see how research is actually conducted, what it all means and what to watch out for when the media deliver their less-than-perfect messages. Get your head around these and you’ll probably be able to make more sense of nutritional research than most of our cherished health ‘gurus’.

Rule #1: Humans are different from cells in a test tube
At the very basic level, researchers use in-vitro testing, in which they isolate cells or tissues of interest and study them outside a living organism in a kind of ‘chemical soup’. This allows substances of interest (for example, a vitamin or a component of food) to be added to the soup to see what happens. So they might, for example, add vitamin C to some cancer cells and observe its effect. We’re stating the obvious now when we say that what happens here is NOT the same as what happens inside human beings. First, the substance is added directly to the cells, so they are often exposed to concentrations far higher than would normally be seen in the body. Second, humans are highly complex organisms, with intricately interwoven systems of almost infinite processes and reactions. What goes on within a few cells in a test tube or Petri dish is a far cry from what would happen in the body. This type of research is an important part of science, but scientists know its place in the pecking order – as an indispensable starting point of scientific research. It can give us valuable clues about how stuff works deep inside us, what we might call the mechanisms, before going on to be more rigorously tested in animals, and ultimately, humans. But that’s all it is, a starting point.

Rule #2: Humans are different from animals
The next logical step usually involves animal testing. Studying the effects of a dietary component in a living organism, not just a bunch of cells, is a big step closer to what might happen in humans. Mice are often used, due to convenience, consistency, a short lifespan, fast reproduction rates and a genome and biology closely shared with humans. In fact, some pretty amazing stuff has been shown in mice. We can manipulate a hormone and extend life by as much as 30%. We can increase muscle mass by 60% in two weeks. And we have shown that certain mice can even regrow damaged tissues and organs.

So, can we achieve all of that in humans? The answer is a big ‘no’ (unless you happen to believe the X-Men are real). Animal testing might be a move up from test tubes in the credibility ratings, but it’s still a long stretch from what happens in humans. You’d be pretty foolish to make a lot of wild claims based on animal studies alone.

To prove that, all we need to do is take a look at pharmaceutical drugs. Vast sums of money (we’re talking hundreds of millions) are spent trying to get a single drug to market. But the success rate is low. Of all the drugs that pass in-vitro and animal testing to make it into human testing, only 11% will prove to be safe and effective enough to hit the shelves. For cancer drugs the rate of success is only 5%. In 2003, the President of Research and Development at pharmaceutical giant Pfizer, John La Mattina, stated that ‘only one in 25 early candidates survives to become a prescribed medicine’. You don’t need to be a betting person to see these are seriously slim odds.

Strip it down and we can say that this sort of pre-clinical testing never, ever, constitutes evidence that a substance is safe and effective. These are research tools to try and find the best candidates to improve our health, which can then be rigorously tested for efficacy in humans. Alas, the media and our nutrition gurus don’t appear to care too much for this. Taking research carried out in labs and extrapolating the results to humans sounds like a lot more fun. In fact, it’s the very stuff of many a hard-hitting newspaper headline and bestselling health book. To put all of this into context, let’s take just one example of a classic media misinterpretation, and you’ll see what we mean.

Rule #3: Treat headlines with scepticism
Haven’t you heard? The humble curry is right up there in the oncology arsenal – a culinary delight capable of curing the big ‘C’. At least that’s what the papers have been telling us. ‘The Spice Of Life! Curry Fights Cancer’ decreed the New York Daily News. ‘How curry can help keep cancer at bay’ and ‘Curry is a “cure for cancer”’ reported the Daily Mail and The Sun in the UK. Could we be witnessing the medical breakthrough of the decade? Best we take a closer look at the actual science behind the headlines.

The spice turmeric, which gives some Indian dishes a distinctive yellow colour, contains relatively large quantities of curcumin, which has purported benefits in Alzheimer’s disease, infections, liver disease, inflammatory conditions and cancer. Impressive stuff. But there’s a hitch when it comes to curcumin. It has what is known as ‘poor bioavailability’. What that means is, even if you take large doses of curcumin, only tiny amounts of it get into your body, and what does get in is got rid of quickly. From a curry, the amount absorbed is so minuscule that it is not even detectable in the body.

So what were those sensational headlines all about? If you had the time to track down the academic papers being referred to, you would see it was all early stage research. Two of the articles were actually referring to in-vitro studies (basically, tipping some curcumin onto cancer cells in a dish and seeing what effect it had).

Suffice it to say, this is hardly the same as what happens when you eat a curry. The other article referred to an animal study, where mice with breast cancer were given a diet containing curcumin. Even allowing for the obvious differences between mice and humans, surely that was better evidence? The mice ate curcumin-containing food and absorbed enough for it to have a beneficial effect on their cancer. Sounds promising, until we see the mice had a diet that was 2% curcumin by weight. With the average person eating just over 2kg of food a day, 2% is a hefty 40g of curcumin. Then there’s the issue that the curcumin content of the average curry/turmeric powder used in curry is a mere 2%. Now, whoever’s out there conjuring up a curry containing 2kg of curry powder, please don’t invite us over for dinner anytime soon.
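For the numerically inclined, that back-of-the-envelope arithmetic can be checked in a few lines of Python. The figures are the ones quoted above (a diet that is 2% curcumin by weight, just over 2kg of food eaten per day, and curry powder that is about 2% curcumin); this is a sketch of the calculation, nothing more.

```python
# Back-of-the-envelope check of the curcumin arithmetic in the text.
daily_food_g = 2000              # an average person eats just over 2 kg a day
diet_fraction = 0.02             # the mice ate a diet that was 2% curcumin

curcumin_needed_g = daily_food_g * diet_fraction    # 40 g of curcumin a day

powder_fraction = 0.02           # curry powder is ~2% curcumin by weight
powder_needed_g = curcumin_needed_g / powder_fraction

print(f"Curcumin needed per day: {curcumin_needed_g:.0f} g")  # 40 g
print(f"Curry powder required:   {powder_needed_g:.0f} g")    # 2000 g, i.e. 2 kg
```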

This isn’t a criticism of the science. Curcumin is a highly bio-active plant compound that could possibly be formulated into an effective medical treatment one day. This is exactly why these initial stages of research are being conducted. But take this basic-stage science and start translating it into public health advice and you can easily come up with some far-fetched conclusions. Let us proffer our own equally absurd headline: ‘Curry is a Cause of Cancer’. Abiding by the same rules of reporting used by the media, we’ve taken the same type of in-vitro and animal-testing evidence and conjured up a completely different headline. We can do this because some studies of curcumin have found that it actually causes damage to our DNA, and in so doing could potentially induce cancer.

As well as this, concerns about diarrhoea, anaemia and interactions with drug-metabolizing enzymes have also been raised. You see how easy it is to pick the bits you want in order to make your headline? Unfortunately, the problem is much bigger than just curcumin. It could just as easily be resveratrol from red wine, omega-3 from flaxseeds, or any number of other components of foods you care to mention that make headline news.

It’s rare to pick up a newspaper or nutrition book without seeing some new ‘superfood’ or nutritional supplement being promoted on the basis of less than rigorous evidence. The net result of this shambles is that the real science gets sucked into the media vortex and spat out in a mishmash of dumbed-down soundbites, while the nutritional messages we really should be taking more seriously get lost in a kaleidoscope of pseudoscientific claptrap, peddled by a media with about as much authority to advise on health as the owner of the local pâtisserie.

Rule #4: Know the difference between association and causation
If nothing else, we hope we have shown that jumping to conclusions based on laboratory experiments is unscientific, and probably won’t benefit your long-term health. To acquire proof, we need to carry out research that involves actual humans, and this is where one of the greatest crimes against scientific research is committed in the name of a good story, or to sell a product.

A lot of nutritional research comes in the form of epidemiological studies. These involve looking at populations of people and observing how much disease they get and seeing if it can be linked to a risk factor (for example, smoking) or some protective factor (for example, eating fruit and veggies). And one of the most spectacular ways to manipulate the scientific literature is to blur the boundary between ‘association’ and ‘causation’. This might all sound very academic, but it’s actually pretty simple.

Confusing association with causation means you can easily arrive at the wrong conclusion. For example, a far higher percentage of visually impaired people have Labradors compared to the rest of the population, so you might jump to the conclusion that Labradors cause sight problems. Of course we know better, that if you are visually impaired then you will probably have a Labrador as a guide dog. To think otherwise is ridiculous.

But apply the same scenario to the complex human body and it is not always so transparent. Consequently, much of the debate about diet and nutrition is of the ‘chicken versus egg’ variety. Is a low or high amount of a nutrient a cause of a disease, a consequence of the disease, or simply irrelevant?

To try and limit this confusion, researchers often use what’s known as a cohort study. Say you’re interested in studying the effects of diet on cancer risk. You’d begin by taking a large population that are free of the disease at the outset and collect detailed data on their diet. You’d then follow this population over time, let’s say ten years, and see how many people were diagnosed with cancer during this period. You could then start to analyse the relationship between people’s diet and their risk of cancer, and ask a whole lot of interesting questions. Did people who ate a lot of fruit and veggies have less cancer? Did eating a lot of red meat increase cancer? What effect did drinking alcohol have on cancer risk? And so on.
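To make that concrete, here is a minimal sketch, in Python and with entirely made-up numbers, of the kind of comparison a cohort study allows. The ‘relative risk’ it computes measures how strongly the exposure and the outcome are associated; as the discussion below explains, that is all it measures.

```python
# Hypothetical cohort: 10,000 people followed for ten years, grouped by
# self-reported fruit-and-veg intake. All figures are invented.
high_intake = {"n": 5000, "cancer_cases": 200}
low_intake  = {"n": 5000, "cancer_cases": 300}

risk_high = high_intake["cancer_cases"] / high_intake["n"]   # 4.0%
risk_low  = low_intake["cancer_cases"] / low_intake["n"]     # 6.0%

# A relative risk below 1 would suggest high intake is associated with
# less cancer -- an association, not proof of cause and effect.
relative_risk = risk_high / risk_low

print(f"Risk, high intake: {risk_high:.1%}")
print(f"Risk, low intake:  {risk_low:.1%}")
print(f"Relative risk:     {relative_risk:.2f}")             # ~0.67
```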

The European Prospective Investigation into Cancer and Nutrition (EPIC), which we refer to often in this book, is an example of a powerfully designed cohort study, involving more than half a million people in ten countries. These studies are a gold mine of useful information because they help us piece together dietary factors that could influence our risk of disease.

But, however big and impressive these studies are, they’re still observational. As such they can only show us associations; they cannot prove causality. So if we’re not careful about the way we interpret this kind of research, we run the risk of drawing some wacky conclusions, just like we did with the Labradors. Let’s get back to some more news headlines, like this one we spotted: ‘Every hour per day watching TV increases risk of heart disease death by a fifth’.

When it comes to observational studies, you have to ask whether the association makes sense. Does it have ‘biological plausibility’? Are there harmful rays coming from the TV that damage our arteries, or is it that the more time we spend on the couch watching TV, the less time we spend being active and improving our heart health? The latter is true, of course, and there’s an ‘association’ between TV watching and heart disease, not ‘causation’.

So even with cohorts, the champions of the epidemiological studies, we can’t prove causation, and that’s all down to what’s called ‘confounding’. This means there could be another variable at play that causes the disease being studied, at the same time as being associated with the risk factor being investigated. In our example, it’s the lack of physical activity that increases heart disease and is also linked to watching more TV.
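The TV example is easy to simulate. In the toy model below, every number is an assumption: heart disease depends only on physical inactivity, and inactivity also increases TV hours. TV watching plays no causal role at all, yet heavy watchers still show a higher disease rate – a confounded association, exactly as described above.

```python
import random

random.seed(0)
people = []
for _ in range(100_000):
    inactive = random.random() < 0.5                   # the confounder
    tv_hours = random.gauss(4 if inactive else 2, 1)   # inactivity -> more TV
    p_disease = 0.20 if inactive else 0.05             # inactivity -> disease
    disease = random.random() < p_disease              # TV has no causal role
    people.append((tv_hours, disease))

heavy = [d for tv, d in people if tv >= 3]
light = [d for tv, d in people if tv < 3]
print(f"Disease rate, heavy TV watchers: {sum(heavy) / len(heavy):.1%}")
print(f"Disease rate, light TV watchers: {sum(light) / len(light):.1%}")
# The rates differ even though disease never depended on TV hours.
```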

This issue of confounding variables is just about the biggest banana skin of the lot. Time and time again you’ll find nutritional advice promoted on the basis of the findings of observational studies, as though this type of research gives us stone cold facts. It doesn’t. Any scientist will tell you that. This type of research is extremely useful for generating hypotheses, but it can’t prove them.

Rule #5: Be on the lookout for RCTs (randomized controlled trials)
An epidemiological study can only form a hypothesis, and when it offers up some encouraging findings, these then need to be tested in what’s known as an intervention, or clinical, trial before we can talk about causality. Intervention trials aim to test the hypothesis by taking a population that are as similar to each other as possible, testing an intervention on a proportion of them over a period of time and observing how it influences your measured outcome.
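The crucial ingredient is the random allocation itself: assigning people to groups at random means that, on average, both known and unknown confounders are balanced between the arms, so a difference in outcomes can be attributed to the intervention. A minimal, purely illustrative sketch of that allocation step:

```python
import random

random.seed(42)
participants = [f"participant_{i}" for i in range(200)]  # placeholder IDs
random.shuffle(participants)

# Split the shuffled list into two equally sized arms.
midpoint = len(participants) // 2
intervention_arm = participants[:midpoint]
control_arm = participants[midpoint:]

print(len(intervention_arm), "allocated to the intervention")
print(len(control_arm), "allocated to the control (placebo)")
```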

What your breakfast reveals about media companies

Wordsmiths would tell you that the origins of the word “breakfast” lie in the words “break” and “fast”. Then again, you wouldn’t actually need an expert to tell you the combined word comes from its intention – to end the fasting period. What fast? Presumably in Roman days the fast represented the period from after sunset to sunrise, when people had to endure going without food in the cold of night, at a time when the thinking was “Eat as much as you can during the day, while you can”. The line of thinking about what to eat for breakfast certainly does vary from place to place.

Some believe that after a period of doing without food – okay, so a few hours every evening now, after a “Just Eat” gorge of Indian takeaway washed down with bottles of Kingfisher, can hardly be called a fast anymore – the body has to stock up on its resources. Enter the full English breakfast: sausages, bacon, eggs, tomatoes, beans (mustn’t forget your greens), black pudding – everything you wanted to eat during the day, presented to you literally on a plate, in case you miss the opportunity to eat later on.

In contrast, there are others who think that after an overnight period of doing without, the body cannot be forced into what is a gorge. Just as someone who is parched and dehydrated has to resist the natural urge to guzzle down water when presented with it, breakfast, some think, is only a primer for a heavy lunch. Hence the idea of a light continental croissant: a little way of appeasing the hungry body while regulating the intake of food, so the body is not lulled into a yo-yo pattern of starvation and gorging that is more typical of eating disorders.

Makes sense? Both points of view actually do, despite the conflict over whether or not to eat heavily first thing in the morning. But to further complicate the issue, a third group believes that since your body, when at rest, will require resources to draw on while you are asleep, it makes perfect sense to load up with a heavy meal as the last meal of the day. Start light, finish heavy. Viewed in that context, it makes sense too.

If there is any one consistent factor about diet, it is probably that the debate, ideas and media reports will continue into the future, and ideas will come and go and come back again. The fad for various diets has sold books, filled magazine columns and given the media lots to write about, which is great for the industry, because media is not a sector that relies on bringing you information that is necessarily correct; it is a sector that relies on attracting readership and human traffic in order to build up a reader base, which it leverages to companies to sell advertising. Advertising is what drives media, not the exposition or exploration of facts. Hence media companies will present information that they feel is of interest and will hook in readers. It doesn’t necessarily have to be substantiated, as long as there is a fellow source to mention, as if the facts had been corroborated by that source.

Where do research scientists fit into this grand scheme of things? There are various kinds of research scientists: ones who truly explore the world in order to further it, and others who conduct investigations so that the results may be latched on to by the media in reports. Ultimately it comes down to who is funding the work. Funded by a charity such as Cancer Research UK? The investigative research conducted by such scientists is likely to be subject to stringent validation. Funded by a pharmaceutical company? The data obtained by such research needs to be handled carefully in order that the outcomes are not flawed or biased towards any products the company is producing.

In other words, if a pharmaceutical company is working on producing a medical product that, for example, has seaweed as an active ingredient, then the research must not be conducted in a way that only shows the positive benefits of seaweed – research that only gives supposed scientific validation to a pre-determined result.

Bias is all too easy to spot when the links are direct, when a pharmaceutical company employs scientists. But what happens when the grand paymaster is the media company?

Hang on, I hear you say. Why would a media company, perhaps a newspaper, employ a group of scientists? And how could they get away with it?

The end product for a pharmaceutical company is a medical one. The end product for a newspaper is news, and the research scientists are there to provide it.

The group of scientists don’t necessarily need to be in permanent employment – occasional contract work suffices, filling the lull periods in the news. And the work that they do is not necessarily related to what is in the published article anyway. Tenuous links are exploited to maximise the draw of a headline.

This is how it works:

A shark is a fish. A whale is a fish. Your newspaper reports that there is the possibility that sharks could become whales.

And that’s it.

A media company – newspaper, magazine, channel, web agency – can hire research scientists to lend credibility to semi-extravagant claims.

As long as there is another attributable source, or somewhere to dismiss the evidence – easily done by mentioning “It is generally accepted that …” or “Common convention holds that …” before launching into the juicy bit – the bit that spins things out, through a long process by which the receiver, either reader or viewer, has hopefully forgotten what the gist of the argument was in the first place – everything can be passed off. In fact, it is a psychological trick: the receiver keeps following in the hope of being able to mentally order the great influx of information.

Ever watched a BBC drama series? After six episodes and numerous disjointed flashbacks, the final episode always seems a bit of a letdown, because you realise everything was obvious and the in-betweens were just filler to spin things out.

I digress. But returning to the point, media companies can hire research scientists on an occasional basis. Some may even go further and keep a scientist on full-time hire as a generator of scientific news.

A direct link between a media agency and a research scientist may sound implausible. But think of the UK’s Channel 4 programme Embarrassing Bodies, where a team of four doctors go around examining people, dispensing advice and running health experiments, in the format of an hour-long slot punctuated by two minutes of advertisements for every thirteen minutes of the programme.

If the media company does not want its links to be so obvious, it can dilute them progressively through the form of intermediary companies.

For example, ABC newspaper hires DEF company to manage its search engine optimisation campaign. DEF hires GHI creative media, who hire JKL, a freelance journalist who knows Dr MNO, who conducts research for hire. Eventually MNO’s “research” ends up in the ABC newspaper. If it proves to be highly controversial or toxic to some extent, ABC’s links to MNO are very, very easy to disavow.

So when the media recently reported that scientists say skipping the morning meal could be linked to poorer cardiovascular health, should we pay any heed to it?

The research findings revealed that, compared with those who had an energy-dense breakfast, those who missed the meal had a greater extent of the early stages of atherosclerosis – a buildup of fatty material inside the arteries.

But the link between skipping breakfast and cardiovascular health is tenuous at best, as the articles themselves admit.

“People who skip breakfast, not only do they eat late and in an odd fashion, but [they also] have a poor lifestyle,” said Valentin Fuster, co-author of the research and director of Mount Sinai Heart in New York and the Madrid-based cardiovascular research institute, the CNIC.

So a poorer lifestyle negatively impacts your health, and a poorer lifestyle causes you to miss breakfast. Sharks do become whales.

This supposed link between skipping breakfast and cardiovascular health was published in the Journal of the American College of Cardiology, and the research had partly been funded by the Spanish bank Santander. The health and diets of 4,052 middle-aged bank workers, both men and women, with no previous history of cardiovascular disease were compared.

You can bet that on another day when news is slow, someone will roll out an “Eating breakfast on the move harms your health” headline. It will have nothing to do with the way you move and eat; it will simply be because you have a stressful lifestyle that impacts your health and forces you to eat on the go. But it was a link and a headline, a “sell” or bait that drew you in to purchase a newspaper or magazine, watch a programme, or spend some dwell time on a site.

And that’s how media works.

Dirty laundry a powerful magnet for bedbugs

Bedbugs are small insects that suck human blood for their sustenance. They hide around beds in small cracks and crevices. Their presence can be identified by small bugs or tiny white eggs in the crevices and joints of furniture and mattresses. You might also find mottled bedbug shells in these areas. A third sign is the presence of tiny black spots on the mattress, which are faecal matter, or red blood spots. And if you have itchy bites on your skin, that is a clear sign. Unfortunately, it is the fourth sign that usually provides people with the impetus to check their living areas for bugs, rather than the need to maintain hygiene by changing sheets.

The incidence of bedbugs has increased globally, and one theory is that visitors to countries where hygiene standards are less stringent bring them back to their own country. Cheap travel, both by rail and by air, has enabled people to visit far-flung places. But one thing that has not been so apparent is how the bed bugs are carried back. It had been thought that bugs are drawn to the presence of a human being – but surely they don’t piggyback on one across regions and continents?

The authors of recent research into the matter have a new perspective. They believe that bugs are drawn to evidence of human presence, and not necessarily just to the presence of a human host. They believe that bed bugs, in places where hygiene is slightly lacking, collect in the dirty laundry of tourists and are then transported back to the tourists’ own homes, where they feed and multiply.

While this was an experimental study, the results are interesting because it had been previously thought that bed bugs prefer to be near sleeping people because they can sense blood.

The experiments leading to these results were conducted in two identical rooms.

Clothes which had been worn for three hours of daily human activity were taken from four volunteers. As a basis of comparison, clean clothes were also used. Both sets of clothes were placed into clean, cotton tote bags.

The rooms were identically set to 22 degrees Celsius, and the only difference was that one room had higher carbon dioxide levels than the other, to simulate the presence of a human being.

A sealed container with bed bugs inside was placed in each room for 48 hours. After twenty-four hours, when the carbon dioxide levels had settled, the bugs were released.

Four clothing bags were introduced into each room – two containing soiled laundry and two containing clean laundry – presented in a way that mimicked the placement of clean and soiled clothes in a hotel room.

After a further four days, the number of bedbugs and their locations were recorded. The experiment was repeated six times, and each run was preceded by a complete clean of the room with bleach.

The results in both rooms were similar: bed bugs gravitated towards the bags containing soiled clothes. The level of carbon dioxide was not a distinguishing factor in this instance, and the result suggested that traces of human odour were enough to attract bed bugs. The physical presence of a human being was not necessary.

The carbon dioxide did, however, influence behaviour, in that more bed bugs left the container in the room with elevated carbon dioxide.

In other words, the carbon dioxide levels in a room are enough to alert bed bugs to human presence, and traces of human odour in clothes are enough to attract them.

Why is this hypothesis useful to know? If you go to a place where the hygiene is suspect, then during the night while you are asleep the bed bugs know you are present, and if they do not bite you then, they may come out during the day and embed themselves in your dirty laundry. The researchers concluded that careful management of holiday clothing could help you avoid bringing home bedbugs.

The simple way of protecting yourself against these pesky hitchhikers could just be to keep dirty laundry in sealable bags, such as those with a zip lock, so they cannot access it. Whether or not it means they will turn their attention to you during your holiday is a different matter, but at least it means you will avoid bringing the unwanted bugs back into your own home.

The study was carried out by researchers from the University of Sheffield and was funded by the Department of Animal & Plant Sciences within the same university.

More research is of course needed. For example, if there were a pile of unwashed clothes while someone was sleeping in the room, would the bugs gravitate towards the human or towards the clothes? It is more likely that they would move towards the human, but that kind of theory is difficult to test without willing volunteers!

Also, did the bugs in the room only head for the unwashed clothes because of the absence of a human, or did the proximity of the clothes to the container influence them to act the way they did? Other factors by which bed bugs may be drawn to where they reside are also unaccounted for. Perhaps in the absence of a human being in the room, bed bugs head for the next best alternative – clothes with traces of human odour or skin cells – but with a human being in the room, bed bugs might rely on temperature differences to know where to zoom in. In other words, instead of detecting human presence using carbon dioxide, they might rely on the difference in temperature between the human body (at around 36.9 degrees Celsius) and its surroundings.

Carbon dioxide levels have been shown to influence how mosquitoes react, but perhaps bed bugs rely on other cues.

There could be other factors that could not be, or were not, recreated in the controlled environment of the experiment.

Ever wonder what it was like in past centuries? Did people have to deal with bed bugs if they lived in Baroque times?

Nobody knows but one thing is for sure. Getting rid of bed bugs is a bothersome business but if you can prevent them getting in your home in the first place, all the better!

What antibiotics in agriculture are really about

There is widespread concern over the use of antibiotics in the agricultural world and what its wider bearings are. The general consensus is that the use of antibiotics in agriculture needs to be minimised dramatically by farmers, as there are fears that drug-resistant bacteria could pass up the food chain through consumption and environmental contamination.

The concerns take many forms. Firstly, just as humans can develop resistance to medicines after prolonged use, there is the concern that long-term antibiotic use in agricultural settings may create antibiotic resistance in the animals and crops which receive these antibiotics. Secondly, even if these crops and animals do not themselves develop resistance, the prolonged consumption of the vegetables or the meat from these farm animals could breed resistance in the humans who consume them. There may also be other side effects we are as yet unaware of.

Antimicrobial drugs, which include antibiotics, antifungal and antiparasitic drugs, are commonly used in farming. They are used to prevent damage to crops, kill parasites, and keep livestock healthy. The long-term aim of antimicrobial drugs in the context of farming is to maximise crop production and livestock farming. A field of crops lost to infestation is months of work for nothing. A farmer with a field of cows suffering from disease has lost not just capital but production possibilities as well. As in the case of mad-cow disease in the 1990s, farmers who had their cows put down not only lost the money they had invested in buying and breeding those cows, but also the income from the sale of milk and beef.

And in many cases, the losses from a brief period of crop infestation or animal disease could significantly affect a farmer’s income, or make such a dent in their livelihood that it either forces them to take on additional debt to cover the losses, or proves so insurmountable that it forces them out of business.

There might be those who argue against the use of antibiotics, but the truth is that they are necessary. They are one form of insurance for a sector that has to combat various problems, including the uncertainties of weather. When, for example, your crops – your livelihood – are subject to the whims of weather, infestation, and perhaps human vandalism and theft, you have to take steps to minimise risks on all fronts. You cannot simply leave things to chance and hope for divine favour – that would merely be masking a lack of responsibility.

Pests and viruses do not restrict their infestation to selected fields. Left unchecked, they would merely spread from unprotected fields and livestock, and then infect further unprotected areas. Antibiotics are medical city walls that keep away marauding invaders, and prevent them from invading territories and conscripting the local population into their armies to do further damage.

Resistance to the antibiotics, antifungal and antiparasitic drugs used in agriculture is collectively known as antimicrobial resistance (AMR).

An independent body chaired by the British economist Jim O’Neill looked specifically at antibiotic use in the environment and agriculture. Among other things, this body examined the roles that regulation and financial measures such as taxation and subsidies could play in reducing the risks associated with the agricultural use of antimicrobials and environmental contamination.

The data from the report suggests the amount of antimicrobials used in food production internationally is at least the same as that in humans, and in some places is higher. For example, in the US more than 70% of antibiotics that are medically important for humans are used in animals.

What does that all mean? It means that drugs normally meant for humans are already used in animals. If human beings consume the meat of these animals over prolonged periods, their bodies can develop tolerance to the antibiotics that were used in the animals. If human beings later have a need for these antibiotics in their own medication, those medicines will have little or no effect. And as we have seen before, ineffective long-term medication may only create addiction to drugs and pain-relief medication.

The report reviewed 139 peer-reviewed research articles, of which 72% found evidence of a link between antibiotic consumption in animals and resistance in humans. That is enough impetus for policy makers to argue for a global reduction of antibiotics in food production to a more appropriate level.

But while the evidence suggests that we should reduce the usage of these antibiotics, antimicrobial usage is unfortunately likely to rise because of economic growth and increasing wealth and food consumption in the emerging world.

A considerable amount of antibiotics is used in healthy animals to prevent infection or speed up their growth. This is particularly the case in intensive farming, where animals are kept in confined conditions, and where an infection could easily spread between organisms. Further to this, some animals receive antibiotics so that natural limiters to size are killed off and their growth is accelerated. If you sell meat by weight, it makes sense to try to produce as big an animal as you can so that you can maximise your profits.

The report highlighted three main risks connected with the high levels of antimicrobial use in food production. The first was that drug-resistant strains could be transmitted through direct contact between humans and the animals on their farms, particularly in the case of farmers. Secondly, drug-resistant strains could be transmitted through contact during the preparation of the meat, or through its consumption. Thirdly, the excrement of the animals might contain both the drug-resistant strains and the antimicrobials themselves, and therefore pass them into the environment.

There was also concern raised about the possibility of contaminating the natural environment. For example, if factories that manufacture these antimicrobials do not dispose of by-products properly, these may pollute the natural environment such as water sources. Already we have seen that fish near waste-treatment plants, which treated urine tinged with chemicals from birth control pills, developed abnormal characteristics and behaviour.

The review made three key recommendations for global action to reduce the risks described. The first was that there should be a global target for reducing antibiotic use in food production, in livestock and fish, to a recognised and acceptable level. There were also recommendations that restrictions be placed on the use of antibiotics in animals that are heavily consumed by humans.

Currently there are no guidelines on the disposal of antimicrobial manufacturing waste into the environment, and the report urged the quick establishment of these so that pollution of the environment could be minimised and the disposal of by-products and active ingredients regulated.

The report also urged more monitoring of these problem areas, in line with agreed global targets, because legislation without a means of enforcement is useless.

Is it possible that the production of antimicrobials can be limited? One cannot help but be cynical. As long as we inhabit a world where sales drive rewards, it is inconceivable that farmers would slow down their production on their own initiative. We would definitely need legislation and some means of ensuring compliance.

But what form of legislation should we have? Should we focus on imposing penalties for non-compliance or incentives to encourage the reduced use of antimicrobials?

Some may argue that the latter is more effective in this case. If farmers are offered financial subsidies so that they receive more money for their meat, for example, they would be more inclined to reduce their usage of antimicrobials. But how would this be monitored? Could the meat for sale be tested to ensure the density of antimicrobials falls under established guidelines, for example, so that if the farmer has been relying on antibiotics to increase the size of livestock, he is compensated for the reduction in size arising from the reduction in antibiotics?

Unfortunately the difficulty is in reconciling the need, and the established economic system for growth, on the one hand, with sustainability on the other. How is farm produce sold? When you buy a bag of salad, a cut of meat, or a bottle of milk, all of this is sold by weight or volume. You may buy eggs in cartons of six, but they too are graded by size and weight. For the direct producer – the farmer – size, volume and growth are what bring about greater profits, although these profits may be barely above the threshold for subsistence. And after making allowances for damage due to weather, theft, low market demand and all the other variables that threaten an already low-profit industry, asking a farmer to reduce the use of antimicrobials is akin to asking him not to take measures to protect his livelihood. If the use of antimicrobials bothers you, then you have to compensate the farmer for not using them, by being willing to pay higher prices for farm products.

Why do organic or free range eggs cost twice the price for half the size? Aha!

While antimicrobials are also used on free range produce, and the case of organic farming is not entirely equivalent, the same issue applies: you are paying more for the process than the product, and in doing so the extra payment you make goes to the farmer for the farming practices you are seeking to promote.

A farmer can get more produce by rearing battery hens, but if you are concerned about animal welfare, you pay extra per animal for the farmer to rear it with more space, and hence more welfare for the animal. Your free range chicken costs more not because it is bigger, or necessarily healthier, but because it has been afforded more space, which you consider to be ethical. Farmers may switch to organic farming if there is enough demand for it, and for some this may even be preferable, because producing fewer hens that together fetch the same income as battery hens may, in the grand scheme of things, be seen by the farmer as a more favourable solution.

In trying to promote less use of antimicrobials, we have to make up the farmer’s perceived loss of earnings. So it is not incorrect to say that if we are concerned about the use of antimicrobials in agriculture, we have to pay more for our farm produce. Are you prepared to do that? For families with a high disposable income, the increase may represent only a small additional fraction. But for families on smaller incomes, the increase may be too steep to be feasible. In other words, while the need for a reduction in agricultural antibiotics is recognised, in practical terms it may remain an aspirational ideal except to those who can afford it.

Can people be convinced – even if the cost is high – that in the long term it is better for human health? If the continued use of antimicrobials means that human medication in the future may become less effective as resistance builds, should we, despite our reservations about the cost, make the leap towards maintaining a sustainable future? And if low-income families cannot afford to pay more in their weekly shop to get less, ridiculous as that might sound, should higher earners step in to fill the shortfall?

It is strange how the wider discussion about the use of antimicrobials in society leads to a discussion about income distribution and political sensitivities.

What has arisen in the course of that evaluation, however, is that expecting citizens alone to fund the production shortfall arising from a reduced use of antimicrobials, by paying more for their farm produce, is not going to work. While some can afford to, many cannot, and those who can may not necessarily want to pay for those who cannot. There are, however, other measures to reduce the use of antimicrobials.

Governments could introduce legislation to prevent environmental contamination by antimicrobial products and by-products, with harsh penalties for breaches. At the moment there are no rules in place, and it is increasingly important that such legislation is developed quickly.

Governments could also offer tax subsidies and support for farmers who continue to reduce antimicrobial usage. These could be introduced at the end of the first year, when farmers need the most support during the initial stages of conversion, then at thirty months, and then at further, longer-spaced intervals. Subsidies or incentives could follow an arithmetic progression, paid at the end of one year, two-and-a-half years, four-and-a-half years, seven years and so on, so there is a continued incentive to maintain reduced antimicrobial usage.
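To spell that schedule out: if the gap between payments starts at one year and grows by six months each time, the milestones fall at one, two-and-a-half, four-and-a-half and seven years, and so on. A small illustrative sketch in Python (the gap sizes are taken from the example above, not from any actual policy):

```python
def milestone_years(count, first_gap=1.0, step=0.5):
    """Milestones whose gaps grow arithmetically: 1, 1.5, 2, 2.5, ... years."""
    years, gap, total = [], first_gap, 0.0
    for _ in range(count):
        total += gap
        years.append(total)
        gap += step
    return years

print(milestone_years(5))   # [1.0, 2.5, 4.5, 7.0, 10.0]
```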

The only problem is, where would the money for these subsidies come from? If the government receives less tax from farm produce transactions because less has been sold, and it has also received less from antimicrobial companies in the form of tax, because it has made them limit their production, where will it make up the shortfall? Through an environment tax on its citizens?

Therein lies the problem.

The conundrum is this: the threat of antibiotic resistance in the future means we have to lower the level of antimicrobials we currently use. Yet if we do so, we are looking at reduced economic output. And as long as we have an economic system that is reliant on growth and increased production, asking to slow down production is economic suicide.

You may ask: “What if we re-evaluated our economic system, and created one based on sustainability?”

I am sorry to say it but that is wishful, idealistic thinking.

The problem with switching to a sustainability-based economy can be described as follows.

Imagine there is a children’s party. At this party there is a table with a gigantic bowl of sweets. The children who are first to arrive eagerly stuff their faces and pockets with sweets, and as the party progresses, the bowl gradually looks emptier and emptier. The parents present chastise their kids if they continue to head for the sweet bowl, remonstrating with them to leave some for the kids who have not yet arrived at the party. Some of these children, perhaps the older ones, might reduce their trips to the bowl and the number of sweets they take. But some children will continue to plunder the bowl and stuff their faces before it all runs out, recognising that the sweets are a dwindling resource and that if they want to eat them they had best take as many as they can.

And a third group, while recognising the sweets will soon run out, are equally keen to get hold of as many as they can – not to eat, but because they realise that when a latecomer arrives and finds there are no sweets left, the latecomer’s parents may offer incentives to trade, to appease the desperate child. “Charlie didn’t get many sweets because he was late. If you let Charlie have two of the sweets you already have, I’ll buy you an ice-cream later.” This third group recognises not just the impending scarcity, but contributes to it by stockpiling their own resources for later leverage. And they may even make the loudest noises about how everyone should stop taking sweets, only so that they can make the biggest grabs when no one is looking.

Who are the losers in this situation? The obvious ones are those who arrived late at the party. But the not-so-obvious losers are the ones from the first group, who amended their behaviour to ensure that there were still sweets left for the groups yet to come. In being principled and holding on to ideals, they became worse off materially, and their only consolation was the knowledge that they had made the effort to leave some sweets for the late group – whether or not the latecomers actually got any is another question. The sweets ran out eventually.

The problem with thinking about sustainable economic measures is that the first to attempt the switch on ethical or aspirational grounds will be among the ones who lose out, because subsequent groups will still make a grab for whatever is left. Some will grab to get as much of the remaining resource as they can, while others will grab so that when there is scarcity – and scarcity drives up prices – they have plenty of the resource to profit from. So while everyone is making the right noises about economic sustainability, everyone is also holding back, waiting for someone else to make the first move.

So this is what antibiotics in agriculture really tell you: too much can create problems later, through antibiotic resistance and improper disposal. We need to cut down on the use of antimicrobials. But reduced antimicrobials mean reduced output, and for that to work we must be prepared to pay higher prices for less produce, to compensate farmers so that they may still earn a living. The government can introduce penalties governing the disposal of antimicrobial-related products to limit the damage to the environment, alongside incentives to limit the use of antimicrobials. But it will have problems funding those incentives, because what it is proposing is economic slowdown in order to have an economy at all in later generations – and the current generations are too concerned with their own interests and survival, stealthily making a grab for the remnants after the first few leave the economic arena.

The problem with industry-funded drug trials

How much can we trust the results of clinical trials, especially ones that have been funded by companies with vested interests? This is the question we should continually ask ourselves, after the debacle of Seroxat.

The active ingredient of Seroxat is paroxetine. Medicines are known by two names: that of the active ingredient, which gives the drug its scientific name, and the brand name. For example, the active ingredient ibuprofen is marketed under the brand name Nurofen, among others. Companies that manufacture their own brand of a medicine may decide to market it as little more than their company name followed by the active ingredient – for example, Tesco Paracetamol or Boots Ibuprofen – in order to distinguish it from rival brands while aligning it with an already recognised scientific name, but without the associated costs of launching a new product brand.

Paroxetine is an antidepressant and made its name as one of the few antidepressants to be prescribed to children. However, it was withdrawn from use after re-examination of the original scientific evidence found that the results published in the original research were misleading and had been misconstrued.

The prescription of medications to children is done with caution and monitoring, as there are various risks involved. Firstly, there is the danger that their bodies adapt to the medication and become resistant, thereby necessitating either higher doses in adult life or a move on to stronger medication. In this instance there is the possibility that, rather than addressing the problem, the medication only becomes a source of lifelong dependence on medication. The second risk is that all medicines have side effects and can cause irreparable damage elsewhere in the body. For example, the use of aspirin in the elderly was found to damage the lining of the stomach.

Equally worrying is the effect of these drugs on the health of the mind. Some drugs, particularly those for mental health, are taken for their calming effect on the mind. The two main types of mental health drugs can be said to be antidepressants and mood stabilisers, and while the aim of these drugs is to limit the brain’s overactivity, some have been found to trigger suicidal thoughts in users instead, ironically producing the very outcome they were meant to prevent.

Children are currently often prescribed adult medication in smaller doses, such as half strength, but the difficulty in assessing the dosage is that it does not lend itself to being analysed on a straight-line graph. Should children under a certain age, say twelve, be prescribed a dosage based on age? Or, if the most important factor is the body’s capacity for absorption, should we prescribe based on other factors, such as body mass index?

So when Seroxat came onto the market, marketed as an antidepressant for children, you could almost feel the relief of the parents of young sufferers. A medical product, backed by science and research, suitable for children, approved by the health authorities. Finally, a medical product young sufferers could take without too much worry, and one – having been tested on young children – that parents could reasonably surmise would be effective in managing their children’s mental health.

Except that paroxetine, marketed as Seroxat, was not what it claimed to be. It was withdrawn from use after scientists found, upon re-analysing the original data, that its harmful effects, particularly on young people, had been under-reported. Furthermore, researchers claim that important details which could have affected the approval of its license were not made public, because disclosure might have meant years of research going down the drain.

When a medical product is launched, it is covered by a twenty-year patent, which means the manufacturer has a monopoly on that medicine for that period. One might question why that is so: it is to protect the time and money the pharmaceutical company has invested in researching and marketing the product, and to give it a period in which to establish a sizeable market share as a reward for developing the medication.

Twenty years for a patent might seem like a long time, but because companies apply for it while the product is in the early stages of development, so that the research is not hijacked by a competing pharmaceutical company, they are often left with a period of ten years or less by the time the medical product has some semblance of its final form. The patent holder has that amount of time to apply for a license and to market and sell the medication. After the original twenty years have elapsed, other companies can enter the fray and develop their own brands of the medicine. They, of course, do not need to spend as much money on research, because much of it will already have been done, published and made accessible – enough to be reverse-engineered in a shorter space of time. Pharmaceutical companies are hence always engaged in a race against time, and if a product hits a snag in trials, mass production is put on hold. If the company is left with anything less than five years to market its product, that is usually not long enough to recoup its research costs; and if it is left with anything less than three years, it might as well have done the research for the companies that follow, because it will not recover the costs of research and marketing.

While not proven, it is believed that pharmaceutical companies hence rush out products which have not been sufficiently tested, by emphasising the positive trial results, and wait for corrective feedback from the market before re-issuing a second version. It is not unlike computer applications nowadays, which launch in beta form, rely on user feedback for improvement, and then relaunch in an upgraded form. The difference is that software has no immediate implications for human health. Medication does.

Researchers who re-examined data from the clinical trial of the antidepressant paroxetine found reports of suicide attempts that had not been included in the original research paper. And because its maker, GlaxoSmithKline (GSK), had marketed paroxetine as a safe and effective antidepressant for children even though the evidence pointed the other way, GSK was made to pay a record $3 billion for making false claims.

In the original research trials, GSK claimed that paroxetine was an effective medication for treating adolescents with depression, and that it was generally well tolerated by the body with no side effects. Subsequent analysis found little advantage from paroxetine, and an increase in harm from its use, compared with placebo.

The whole issue highlights the difficulty of trusting medical trials whose data is not independently accessed and reviewed.

The current stance on data is that pharmaceutical companies can select which clinical data they release. Why is this so? We have already covered the reason: they have committed funds to research and are hence protective (and have a right to be) of the raw data generated, particularly when competitors are waiting in the wings to launch products using the same data.

If you were a recording artist who hired a recording studio for two weeks, with musicians to play for you and sound engineers to record your work, you might end those two weeks with a vast amount of recordings, to be edited down into your album. Whatever has been recorded in the studio is yours, and you have the right to be protective of it, so that someone else cannot release music using your ideas or anything similar to them.

The problem is that when the pharmaceutical company initiating and funding the research is also the one that will market the product first, with the clock ticking against it, it has a vested interest in the product’s success and is inherently biased towards finding positive outcomes that favour it.

Who would commit twenty years of time, research, marketing and finance to see a product fail?

The pharmaceutical company is also under pressure to find these outcomes quickly, so even the scientific tests may be geared towards pre-determined conclusions rather than towards ones that invite further analysis and cross-examination, which take up precious time and cause delay.

This creates a situation in which only favourable data is sought in the trials, and only such data is made publicly available, leading to quick acceptance of the drug, quick acquisition of a licence, and less delay heading into the marketing process.

The alternative is independent review of the raw data, but this adds further pressure on time, and the security of the raw data cannot be guaranteed.

Despite its limitations, there are attempts to reform the current system. The AllTrials campaign is a pressure group, backed by medical organisations, seeking independent scrutiny of medical data. It argues that all clinical trial data should be made available for independent scrutiny, to prevent episodes like the misprescribing of paroxetine from recurring in the future.

The original GSK study reported that, in clinical trials, 275 young people aged 12 to 18 with major depression were randomly allocated to paroxetine, an older antidepressant drug called imipramine, or a placebo for eight weeks.

The researchers who reviewed the original 2001 study found that it seriously under-reported cases of suicidal or self-harming behaviour, and that several hundred pages of data were missing without clear reason. It is likely these did not reflect favourably on paroxetine.

Data was also misrepresented. For example, the 2001 paper reported 265 adverse events for people taking paroxetine, while the clinical study report showed 338.
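Simple arithmetic on those two quoted figures shows the scale of the omission:

```python
# Adverse events for paroxetine, as quoted above.
reported_2001 = 265          # figure in the 2001 paper
clinical_study_report = 338  # figure in the clinical study report

omitted = clinical_study_report - reported_2001
print(f"{omitted} adverse events omitted "
      f"({omitted / clinical_study_report:.0%} of the documented total)")
# -> 73 adverse events omitted (22% of the documented total)
```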

The review involved examining 77,000 pages of data made available by GSK – which, in hindsight, might have been 77,000 pages of unreliable data.

This study stands as a warning of how supposedly neutral scientific research papers may mislead readers through misrepresentation. The 2001 paper by GSK appears to have picked outcome measures to suit its results.

It subsequently came to light that the first draft was not actually written by the 22 academics named on the paper, but by a ghostwriter paid by GSK.

In light of this, that $3 billion fine for GSK might be seen as small. Certainly the reliability of industry-funded clinical trials, and how the process can be overhauled, is a question we need to consider for the future.

How long-term medication harms us – and why nothing may be done about it

In looking at mental health, we have previously examined the idea that while medication offers short-term relief, long-term change is brought about through lasting measures such as cognitive therapy. We have also seen that medication is more effective in individuals with more severe forms of mental illness, while milder forms can be dealt with through non-medicative measures. We can summarise by saying that the role of medication is to offer immediate relief and, over the longer term, to stabilise the individual to a state where pressures or stressors no longer cause distress and can be lived with, while the root causes of the problems are examined.

The underlying causes are usually not medical; they can be extrinsic factors such as the working environment or lifestyle. Medication is insufficient to deal with these because it cannot act on them. The root of the problem is what patients on medication ultimately need to address. Unfortunately, patients taking prescription medicines often assume that if a certain drug has been prescribed to address a particular problem, then more of it, even within limits, can eventually resolve it. That is a mistaken assumption. Overdosing on medication does not address the root of the problem. It only lulls the body into a relaxed state, blinding us to our immediate surroundings, so while we feel calm, relaxed or “high”, the feeling is only temporary.

Medications and the prescription of medication are reactive, not proactive. They treat symptoms that have manifested, but do not treat the cause of the symptoms.

These views of medicine are not limited to mental health problems; they extend into the physical realm. Take eczema, for example. A doctor may prescribe creams containing hydrocortisone and paraffin to manage the itchy, red, flaring skin usually seen in eczema sufferers. However, these creams may only offer temporary relief: as soon as you stop using them, the eczema may return. Advocates of TCM, or traditional Chinese medicine, suggest that eczema results from an overactive liver, and that trapped “heat” in the body, seeking release, manifests as flared red patches on the skin. Barrier creams such as paraffin may actually be viewed as counterproductive, because they prevent the internal heat from escaping and make the eczema worse. Have you ever encountered anyone who, upon applying eczema cream, reported that it only worsened the itch? If you visit a TCM practitioner, you will probably be prescribed a menthol-based cream for external use, oral medicine for the eczema, and the advice that, to deal with its root cause, you must change your diet – specifically, not over-consuming foods such as fried food or chocolate, and avoiding alcohol and coffee.

It would be great if the immediate, short-term relief brought about by medication could be extended over long periods. If you were suffering from a serious illness such as severe depression, the difference you felt at the onset of medication would be very noticeable. However, medication is only a short-term stress suppressant, buying time for longer-term (usually non-medical) measures to take effect. It is not the intention of any prescriber – be it a GP or a pharmacist – that a patient stay on medication for a prolonged period. While such patients might be good for business, it is unethical to keep people unwell for the sake of a constant revenue stream: the health of the patient would then be secondary to the financial benefit he or she brings, which is against the ethics of the medical profession.

It is unwise to be on medication for long periods. First and foremost, the body adapts to the dosage, and in time the effects the medicine initially brought are diminished, to the point where either a higher dosage is required or the patient is switched to a new, more potent medicine. In both cases, if medication is seen as the cure, rather than as a way of buying immediate relief, the patient will simply keep taking it in the hope that one day it will completely cure his or her problems, and the potential for addiction to ever-higher dosages results. This is how addiction begins, and it is unfortunate when patients find that medication has not only dealt with their initial symptoms but layered them with a secondary problem of addiction, as commonly happens with painkillers.

Addiction is only one of the problems brought about by long-term medication. There is the possibility, too, that the body adapts to the new chemicals and is slowly malformed, with the negative impact going unnoticed until it reaches a tipping point and the consequences are made apparent by a catastrophic event. With smoking, for example, constant exposure to the chemicals damages and malforms the lungs, but people often only sit up and take corrective action once irreparable damage has set in and lung cancer has developed. Medication sits at the opposite end of the scale to smoking – it is taken from the outset to cure rather than to harm – but it has the same potential to change the human body when taken over prolonged periods.

But the changes are not necessarily experienced only by the patients on medication. Research scientists from the University of Exeter found, for example, that male fish of certain species were displaying female characteristics and behaviours, such as developing female organs, being less aggressive and even laying eggs. The fish had come into contact with chemicals in the water near waste-treatment plants; chemicals contained in birth-control pills, carried in urine flushed down the toilet, were cited as a particular source of contamination.

Long-term medication is also not a good idea for children. If hyperactive children are embarking on activities that require focus, such as school or piano lessons, prolonged medication may work against them. It may be better to treat the underlying causes first and teach the child management strategies, rather than merely treating the outwardly visible effects.

When it comes to mental health problems, the best approaches mix medication and therapy. Given that medication is meant to be short-term, it is important that the therapy be as effective as possible, so that patients can trust it to heal them fully rather than depending on medication. This applies more obviously to mental illness than to physical illness involving pain relief; nevertheless, even in the latter case, some have suggested therapies such as hypnosis and acupuncture as long-term substitutes for pain medication.

It is worth the NHS examining such therapies and studying the scientific evidence behind them, to glean insights that could be applied to other treatments, or to find more cost-effective, longer-lasting treatments that would help make the NHS a sustainable health service. The current model, with the state as a mere provider of medicines and advice to its citizens, cannot carry on: the cost of patient care will rise and drain its resources. It would be more cost-effective to spend resources encouraging citizens to take active responsibility for their own health, lessening the burden on the health service, than to have them merely look to it as a provider of medication.

There are also other reasons why the NHS has to prime itself for a move towards being a sustainable health service. It has to limit its carbon footprint in order to minimise the impact it has on the environment.

The impact of prescribing long-term medication can ultimately be traced through to the environment. The constituents of medication are either obtained from natural ingredients grown on land, or manufactured in factories which themselves commandeer land. Turning them into medication requires power and electricity, which either burn fossil fuels, producing fumes and greenhouse gases that drive global warming and extreme weather, or draw on renewables – wind farms that still take up land, or solar cells whose manufacture may itself have been unsustainable. Waste from the manufacturing process, and from the disposal of the medical product, enters landfill or pollutes natural resources.

Land is a limited resource. More specifically, land that can grow useful crops is a limited resource. So even if the current level of pharmaceutical manufacturing were to remain the same – by some freak balance in which the number of people newly prescribed medication equalled the number of deaths – the land, along with the space available for landfill, could never be replenished at that rate. It might not make an immediate difference to you, but every individual has a civic responsibility, as a global citizen, to preserve the earth and keep it habitable for future generations.

Essentially, we need to lower our dependency on medication to avoid this impact on the environment, so that future generations inherit a habitable one.

The problem lies in convincing pharmaceutical companies to embrace this thinking. These companies depend on sales; if sales fall, so do profits and the share price. Pharmaceutical companies are accountable to their shareholders, and need to raise their share prices and create growth. The moment they start thinking about sustainability, they are choosing to reduce their growth, and their share price would stagnate. Would you invest in a company with stagnant growth? Thought not. And if a company reports less profit, the government raises less revenue through tax and has to make up the shortfall somehow.

Being on long-term medication harms the body, among other things by creating changes in it and fostering dependency, and it ultimately has a significant bearing on the environment. The challenge is to wean ourselves off long-term medication, using it only in the short term while we address the root causes of our problems through therapy. On a wider scale, we need to create new business models, because the current ones depend on a sizeable number of people being unwell in order for the economy to function. Surely that last statement must raise incredulity – that in this day and age we are not trying to heal people, but to maintain a threshold of well and unwell people that is economically beneficial!