Dogs can sense fear – and seek release

What makes some people more susceptible to being bitten by dogs? A recent study suggests that dogs, whose sense of smell is far keener than ours, can smell fear in us. Perhaps this sense of fear triggers a fight-or-flight response in the dog that results in the human being bitten.

The Daily Telegraph reported that the best form of prevention against a dog bite could be to adopt a self-confident front, almost a swagger, to convince the dog of a confidence that overrides the inner sense of fear. However, this approach does not address how the dog might deal with a person who presents as confident while it senses the underlying fear. It is like meeting a person who smiles at you while you know they are lying. Which signal do you trust? You fall back on what psychologists call “type 2” thinking, which is more analytical and less immediately responsive – but do dogs have that kind of ability to think and fall back on?

The research was carried out by researchers from the University of Liverpool in the form of a survey, in a bid to understand why certain individuals seemed more likely than others to be bitten by dogs.

The results of the survey suggested that taking a nip from a four-legged friend was almost 2.5 times more common than the current official figure, which estimates that 7.4 in 1,000 people are bitten by a dog every year in the UK. The true figure is likely to be higher, because owners who are bitten by their own dogs are unlikely to report the incident for fear of having their dogs put down. Dog bites that happen within the family – where the dog belongs to a family member – are unlikely to be reported for the same reason.
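
Put in numbers, the survey’s estimate is a simple scaling of the official rate. A minimal sketch, using only the two figures reported above:

```python
# Scale the official dog-bite rate by the survey's estimated multiplier.
official_per_1000 = 7.4    # officially recorded bites per 1,000 people per year (UK)
survey_multiplier = 2.5    # survey suggests bites are ~2.5 times more common

estimated_per_1000 = official_per_1000 * survey_multiplier
print(f"official figure:       {official_per_1000:.1f} per 1,000 per year")
print(f"survey-based estimate: {estimated_per_1000:.1f} per 1,000 per year")
```

That works out at roughly 18.5 bites per 1,000 people a year – with most of the gap presumably hidden by unreported bites within households.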

The results also showed that nervous people, men, and owners of several dogs were more likely to be bitten.

This study was dependent on data from questionnaires. This sort of information collection is a good way to obtain responses quickly. However, its limitations include the difficulty of assessing behaviour in hindsight, from recollection alone. There is also the point raised earlier: owners may have amended their answers for fear of having their dogs taken away.

The current guidelines for dog bite prevention suggest the following:

Never leave a young child unsupervised with a dog – regardless of the type of dog and its previous behaviour.

This is of course a good point, especially with attack dogs or more aggressive breeds. Even if the child is known to the dog, there have been many cases where dogs left alone with toddlers have seized the chance and attacked them. It is almost as if the removal of an adult emboldens the dog into an attack it would not normally make, and being left alone with a young child heightens the fight-or-flight response within the dog.

Another guideline is to treat dogs with respect – don’t pet them when they’re eating or sleeping. Dogs dislike being disturbed when they are meeting their basic needs, and the disturbance can provoke aggressive responses that may surface later.

A third guideline is to avoid stroking or petting unfamiliar dogs – when greeting a dog for the first time, let it sniff you before petting it. A good idea is to actually converse with the owner first so that the dog has already established you are friendly.

This study was carried out by researchers from the University of Liverpool and was funded by the Medical Research Council Population Health Scientist Fellowship. While the media reporting of the study was fairly accurate, The Guardian pointed out that people’s emotional stability was self-rated. In other words, if respondents were asked to rate their feelings, this may not be an accurate assessment – one person’s level of anxiety may not be the same as another’s.

So can dogs actually sense fear and anxiety? And how does this explain the incidence of people being bitten by dogs? These questions are perhaps best answered in two parts.

The first is the level of aggression in the dog. This depends of course on its genetic makeup, but also on how it is treated. If its needs are met, its level of aggression is likely to be lower than if it were harassed or disturbed persistently, which can build up latent aggression.

The second is the dog’s sense of fear. If a dog is often emotionally provoked and an opportunity arises to release this tension, even in a moment of madness, the result may be a bite as an emotional release.

So can dogs sense fear? Possibly. Does this explain their tendency to bite? Well, dogs that are treated well and not genetically prone to attacking will be less prone to nipping. Dogs that are not attack dogs but are mistreated, or dogs that habitually have their attack responses nurtured, are more prone to biting when the opportunity presents itself in the form of a less defensive target.

Ibuprofen and the fertile imagination

There is an astounding variety of painkillers available for purchase in supermarkets, chemists and corner shops. Just take a look at the shelves of your nearest Tesco or Sainsbury’s. You will find various types of paracetamol, made by pharmaceutical companies as well as the supermarkets’ own in-house versions.

What is the difference between them and why are there so many varieties?

When a pharmaceutical company decides to develop a new drug, it is granted a twenty-year patent which covers the research into the product, testing, manufacturing and sales. The twenty-year period, a monopoly of sorts, rewards the company for the time invested in the research. In the course of that research, pharmaceutical companies must publish various forms of medical evidence and put it into the public domain, so that any evidence pointing to the contrary can be debated by both the medical community and the pharmaceutical world.

The problem, if we can call it that, is that business is a very competitive world, and if research is put out in the open without any form of intellectual protection, any manufacturer can pounce on the work someone else has taken the effort and trouble to do, and produce their own product off the back of it, saving themselves the time and the cost of the investment.

Imagine if a writer has taken the time to research a topic, organise his thoughts succinctly, and find a publisher. And when his book is published, someone else photocopies it, binds the copied pages and subsequently peddles it as their own.

Within that period of twenty years, a pharmaceutical company has to research, market and sell enough of the product to recoup its investment costs and make a profit. It is after the twenty years have expired that the other sharks enter the fray. This is where you get the supermarket brands of the product, which are cheaper because they don’t need to pay for the research.

What is the difference between brand names and generics? They essentially do the same thing. But if the original company has done a good job of making the product synonymous with its brand, you might think the branded version is better. If you take Nurofen for headaches, you might think it works better than Tesco ibuprofen, even though both contain the same active ingredient.

But pharmaceutical companies have to reinvent themselves and make varieties of the same product, otherwise they will lose their market share and eventually die out. If you realise that Nurofen is matched in ability by the cheaper Tesco ibuprofen, you will buy the latter – unless you are persuaded that Nurofen for Colds and Flu, or Nurofen for Muscle Pain, has been clinically formulated for that specific purpose.

So the shelves of supermarkets are stacked with different priced products with the same active ingredient, as well as different varieties of the same product.

Painkillers are a common medicine because there will always be a demand for pain management.

The availability of pain relief medicine means it is easy for the average individual to obtain it. Overdose may be a rarity, but the greater the availability, the greater the likelihood that individuals take more doses than they should.

What are the long term health impacts of taking ibuprofen for prolonged periods?

One problem is that the body adapts, reducing the drug’s effectiveness over the long term. In certain groups, such as the elderly, aspirin has also been shown to increase the risk of stomach bleeding.

A clinical trial seemed to suggest it may have an impact on testosterone production and hence affect fertility.

Test subjects were administered two 600mg doses of ibuprofen daily (1,200mg a day) for six weeks, much higher than the average dose. The sample was a small group of just 30: half received ibuprofen, while the others received a placebo. A larger group would have given more confidence in the results, but because a test of this nature essentially examines human tolerance of what amounts to toxicity, it would have been unethical to involve a large number of participants. The researchers found no impact on the testosterone already in the body, but ibuprofen appeared to slow down the production of new testosterone.

How did these reports end up in the media? The tabloids had a field day, and you would undoubtedly have found one with the usual wisecracks about balls and other man-related genitalia, along the lines of “Ibuprofen shrinks your balls” or “Ibuprofen smalls your balls”.

Maybe instead of Ibuprofen for colds or fast relief, we need Ibuprofen for Dummies.

One cigarette a day can cost a lot

According to the newspaper headlines of late, teenagers should be kept away from cigarette exposure because of this worrying statistic.

A survey of over 216,000 adults found that over 60% of them had been offered and tried a cigarette at some point, and of these, nearly 70% went on to become regular smokers. The conclusion drawn was that there is a strong link between trying a cigarette once, perhaps to be sociable, and going on to develop the habit.

This of course ended up in the newspapers with headlines such as “One cigarette is enough to get you hooked”. The Mail Online, Britain’s go-to newspaper for your important health news (and I’m being ironic here), went a step further, saying one puff from a cigarette was enough to get you hooked for life. Never mind if you took one draw of a cigarette, felt the nicotine reach your lungs, then coughed in revulsion at the bitter aftertaste and swore you would never try a cigarette again. The Mail Online bets you would return to the lure of the dark side, seduced by its nicotine offers.

I digress.

While we all know that any event, repeated many times, becomes a habit, the statistics in this case are a little dubious.

The study was conducted by Queen Mary University (nothing dubious in itself), but among the various concerns was what you might call the high conversion rate: nearly 70% of those who tried a cigarette once went on to smoke regularly.

I’m not sure why the 70% is worrying. In fact, I wonder why it is not 100%! Surely, if you asked a habitual smoker, “Have you smoked a cigarette before?”, the answer would be a resounding “Yes”!

Unless you have caught someone in the act of sneakily smoking his virgin cigarette. But he wouldn’t yet be a habitual smoker.

Let’s establish the facts of the matter again.

216,000 adults were surveyed.

130,000 of them (60% of the adults) had tried a cigarette before.

86,000 (40%) have never smoked before.

Of the 130,000 who had tried a cigarette before, 91,000 (70%) went on to become regular smokers.

39,000 (30%) of those who tried a cigarette before either did not go on to smoke at all or did not smoke regularly.

Another way of looking at the data would be as follows:

216,000 adults surveyed.

125,000 adults do not smoke regularly or at all. Some did try once in the past.

91,000 adults smoke regularly, and these people have obviously tried a cigarette before.

Suddenly the data doesn’t look sexy anymore.
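
The reframing above can be reproduced with a few lines of arithmetic, working only from the survey’s headline percentages (figures rounded to the nearest thousand in the text):

```python
total = 216_000                # adults surveyed
tried = total * 0.60           # ever tried a cigarette (~130,000)
never = total - tried          # never smoked (~86,000)
regular = tried * 0.70         # tried once and became regular smokers
not_regular = tried - regular  # tried but did not become regular smokers

# The alternative framing: how many of ALL respondents smoke regularly?
non_smokers = never + not_regular
print(f"regular smokers:     {regular:>9,.0f} ({regular / total:.0%} of all surveyed)")
print(f"not regular smokers: {non_smokers:>9,.0f} ({non_smokers / total:.0%} of all surveyed)")
```

So the headline “70% conversion” is equally a statement that well under half of everyone surveyed smokes regularly.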

The data came from an umbrella study, which means it was pooled rather than created from scratch through surveys. As previously examined, the final outcome is therefore also dependent on the integrity of the original sources.

Bias can also creep in because the data has not been directly obtained and inferences have been drawn.

For example, the influence of e-cigarettes and vaping on the results has not been scrutinised, because some of the data may predate them.

Before we leave it at this, here is another example of data bias:
216,000 adults were surveyed.

130,000 of them (60% of the adults) had tried a cigarette before.

86,000 (40%) have never smoked before.

We can conclude that 100% of the 86,000 who have never smoked a cigarette in the past have never smoked a cigarette.

You can see the absurdity more clearly when it’s spelt out in words rather than in numbers.

If research is costly, in terms of both money and time, then why is it wasted on studies like these?

One reason is that it keeps academics and researchers in their jobs: producing findings that are financially low-cost but can stave off the question of what they actually do, and what their purpose is.

This kind of research is the academic version of the newspaper filler article, one that columnists generate based on the littlest of information, in order to fill the papers with “news”, that actually mask the fact that they are there to sell advertising space. And in this, columnists and researchers are at times colluding for the same purpose. Vultures who tear at the carcass of a small rodent and then serve up the bits as a trussed up main meal.

Unethical? Who cares, it seems. Just mask the flawed process and don’t make it too obvious.

Your daily sausage roll may exact its revenge on you in good time

Ever wonder why people go on a vegetarian or a vegan diet? There are many reasons I can think of.

The most common one is that people are very much against animal cruelty. People who avoid eating animal-based products are against the farming of animals, because they are convinced that animals are treated inhumanely. For example, battery hens are kept in small cages at high densities. Imagine if you and your co-workers were put together in a small room, without any desks, and told to make the most of it. You’d all be up in arms about the way you were treated. The only difference between you and hens is that hens can’t protest about it.

The transition to a vegan diet is not just about the farming of animals, although this can be a factor too. Vegans are against the eating of animal meat because of the way farm animals are killed. Cows, pigs and chickens – the main farm animals killed to provide common English foods such as the English breakfast of sausages, bacon and eggs – are, in the opinion of vegans, inhumanely killed, despite the best of measures.

Do you know how a chicken is killed before it ends up deep-fried in breadcrumbs and served with your chips and bottle of cola? There are two main ways. The first is electrical. The birds are shackled to a conveyor belt by their legs, upside down. Needless to say, they don’t willingly walk to the machine and pick their positions. There is a lot of fluttering about, human exasperation and rough handling of the birds, which may result in broken bones – who cares, right? After all, the bird is going to be dead soon. The conveyor belt then carries the birds, upside down, into a water bath primed with an electric circuit. The moment a bird’s head touches the water, it is electrocuted to death.

The second method involves gassing. The birds are left in their transport crates and exposed to gas until they suffocate. This method is arguably more humane, supporters say, because the birds are not manhandled. But don’t be fooled into thinking the birds’ welfare is under consideration. It is simply a faster, less labour-intensive way of killing them: sling them in the box and gas them to death. No messing around trying to catch the flapping things, and avoiding the need to shackle them saves time too.

There is a third reason often quoted for going further and becoming vegan. Cows produce vast amounts of methane, and if everyone stopped eating beef it would be better for the environment. In this instance, it is not so much the animal’s welfare as the avoidance of the environmental pollution the animal produces.

There may soon be a fourth reason for avoiding meat. Processed meats – which have been preserved using methods such as salting, curing, smoking or adding preservatives – have been linked with cancer.

A study involving 262,195 UK women showed links between processed meat and breast cancer. Postmenopausal women who ate processed meat had a 9% higher chance of developing breast cancer than women who ate none. Those who consumed more than 9g of processed meat had a 21% higher chance than those who avoided it altogether.
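
Those percentages are relative risks: they scale a baseline rate rather than stating an absolute chance. A minimal sketch, using a purely hypothetical baseline of 12 in 100 (an illustrative figure, not one from the study):

```python
baseline = 0.12    # hypothetical baseline risk of breast cancer (illustrative only)
rr_some = 1.09     # relative risk: 9% higher (ate some processed meat)
rr_high = 1.21     # relative risk: 21% higher (heavier consumption)

print(f"no processed meat:   {baseline:.2%}")
print(f"some processed meat: {baseline * rr_some:.2%}")
print(f"heavier consumption: {baseline * rr_high:.2%}")
```

On that assumed baseline, a “21% higher chance” moves the absolute risk from 12.00% to about 14.52% – a real difference, but a much smaller gap than the headline figure suggests on its own.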

The study is significant because the sample size is large – not just 100 women, or some other negligible figure whose results might bias the findings, but over 250,000 women; more than enough to be taken seriously.

The women were all between the ages of 40 and 69 and free of cancer when they were recruited for the study before 2010. They were followed for a period of seven years and the results examined.

Processed meats are thought possibly to cause cancer because the methods involved in processing the meat may lead to the formation of cancer-causing compounds called carcinogens.

What is not so clear is whether it was the eating of processed meats in isolation that caused the development of cancer. There are other factors that should be taken into account, of course, such as alcohol, exercise, work stress, lifestyle factors and body mass index. Certain ethnicities may also be prone to developing cancer because of other dietary factors such as cooking with oil, ghee or lard.

The results also did not suggest that the findings would be equally applicable to men.

Nevertheless, it would be a good idea, if you are an older woman, to avoid eating processed meat every day. Consumption could instead be limited to once every other day, or to an occasional treat. Or cut out meat completely – a switch to a vegetarian or vegan diet would not only be good for your health; you would be considering the environment too.

A short history of non-medical prescribing

It had long been recognised that nurses spent a significant amount of time visiting general practitioner (GP) surgeries and/or waiting to see the doctor in order to get a prescription for their patients. Although this practice produced the desired result of a prescription being written, it was not an efficient use of either the nurses’ or the GPs’ time. Furthermore, it was an equally inefficient use of their skills, exacerbated by the fact that the nurse had usually themselves assessed and diagnosed the patient and decided on an appropriate treatment plan.

The situation was formally acknowledged in the Cumberlege Report (Department of Health and Social Security 1986), which initiated the call for nurse prescribing and recommended that community nurses should be able to prescribe from a limited list, or formulary. Progress was somewhat measured, but The Crown Report of 1989 (Department of Health (DH) 1989) considered the implications of nurse prescribing and recommended suitably qualified registered nurses (district nurses (DN) or health visitors (HV)) should be authorised to prescribe from a limited list, namely, the nurse prescribers’ formulary (NPF). Although a case for nurse prescribing had been established, progress relied on legislative changes to permit nurses to prescribe.

Progress continued to be cautious with the decision made to pilot nurse prescribing in eight demonstration sites in eight NHS regions. In 1999, The Crown Report II (DH 1999) reviewed more widely the prescribing, supply and administration of medicines and, in recognition of the success of the nurse prescribing pilots, recommended that prescribing rights be extended to include other groups of nurses and health professionals. By 2001, DNs and HVs had completed education programmes through which they gained V100 prescribing status, enabling them to prescribe from the NPF. The progress being made in prescribing reflected the reforms highlighted in The NHS Plan (DH 2000), which called for changes in the delivery of healthcare throughout the NHS, with nurses, pharmacists and allied health professionals being among those professionals vital to its success.

The publication of Investment and Reform for NHS Staff – Taking Forward the NHS Plan (DH 2001) stated clearly that working in new ways was essential to the successful delivery of the changes. One of these new ways of working was to give specified health professionals the authority to prescribe, building on the original proposals of The Crown Report (DH 1999). Indeed, The NHS Plan (DH 2000) endorsed this recommendation and envisaged that, by 2004, most nurses should be able to prescribe medicines (either independently or supplementary) or supply medicines under patient group directions (PGDs) (DH 2004). After consultation in 2000, on the potential to extend nurse prescribing, changes were made to the Health and Social Care Act 2001.

The then Health Minister, Lord Philip Hunt, provided detail when he announced that nurse prescribing was to include further groups of nurses. He also detailed that the NPF was to be extended to enable independent nurse prescribers to prescribe all general sales list and pharmacy medicines prescribable by doctors under the NHS. This was together with a list of prescription-only medicines (POMs) for specified medical conditions within the areas of minor illness, minor injury, health promotion and palliative care. In November 2002, proposals were announced by Lord Hunt, concerning ‘supplementary’ prescribing (DH 2002).

The proposals were to enable nurses and pharmacists to prescribe for chronic illness management using clinical management plans. The success of these developments prompted further regulation changes, enabling specified allied health professionals to train and qualify as supplementary prescribers (DH 2005). From May 2006, the nurse prescribers’ extended formulary was discontinued, and qualified nurse independent prescribers (formerly known as extended formulary nurse prescribers) were able to prescribe any licensed medicine for any medical condition within their competence, including some controlled drugs.

Further legislative changes allowed pharmacists to train as independent prescribers (DH 2006) with optometrists gaining independent prescribing rights in 2007. The momentum of non-medical prescribing continued, with 2009 seeing a scoping project of allied health professional prescribing, recommending the extension of prescribing to other professional groups within the allied health professions and the introduction of independent prescribing for existing allied health professional supplementary prescribing groups, particularly physiotherapists and podiatrists (DH 2009).

In 2013, legislative changes enabled independent prescribing for physiotherapists and podiatrists. As the benefits of non-medical prescribing are demonstrated in the everyday practice of different professional groups, the potential to expand this continues, with consultation currently under way to consider the potential for enabling other disciplines to prescribe.

The bigger issues that come with preventing hearing loss

Is there cause for optimism when it comes to preventing hearing loss? Certainly the latest research into this suggests that if positive effects experienced by mice could be transferred to humans and maintained for the long term, then hereditary hearing loss could be a thing of the past.

It is often assumed that hearing loss is simply down to old age. The commonly held view is that as people grow older, their muscles and body functions deteriorate with time to the point that function is impaired and eventually lost. But hearing loss is not necessarily down to age, although constant exposure to loud noise over time does cause reduced sensitivity to aural stimuli. Over half of hearing loss cases are actually due to faulty genetic mutations inherited from parents.

How do we hear? Hair cells in the part of the inner ear called the cochlea respond to vibrations, and these signals are sent to the brain to interpret. The brain processes the signals in terms of frequency, duration and timbre in order to translate them into sounds we know.

For example, if we hear a high frequency sound of short duration that is shrill, our brain interprets these characteristics and then runs through a database of audio sounds, an audio library in the brain, and may come up with the suggestion that it has come from a whistle and may signify a call for attention.

What happens when you have a genetic hearing loss gene? The hair cells of the inner ear do not grow back once lost, and consequently sound vibrations from external stimuli do not get passed on to the brain.

With progressive hearing loss too, the characteristics of sound also get distorted. We may hear sounds differently to how they are produced, thereby misinterpreting their meaning. Sounds of higher and lower frequency may be less audible too.

How does that cause a problem? Imagine an alarm. It is set on a high frequency so that it attracts attention. If your ability to hear high frequencies is gradually dulled then you may not be able to detect the sound of an alarm going off.

As hearing gradually deteriorates, the timbre of a sound changes. Sharper sounds become duller, and in the case of the alarm, you may hear it, but it may sound more muted and the brain may not be able to recognise that it is an alarm being heard.

Another problem with hearing loss is the loss of perception of volume. You may be crossing the road and a car might sound its horn if you suddenly encroach into its path. But if you cannot hear that the volume is loud, you may perceive it to be from a car far away and may not realise you are in danger.

The loss of these hairs in the inner ear is a cause of deafness in humans, particularly those for whom hearing loss is genetic. Humans suffering from hereditary hearing loss lose the hairs of the inner ear, which results in the difficulties mentioned above. But there is hope. In a research experiment, scientists successfully delayed the loss of the inner-ear hairs in mice, using a technique that edited away the genetic mutation that causes the loss of the hairs in the cochlea.

Mice were bred with the faulty gene that causes hearing loss. Then, using a technology known as CRISPR, the faulty gene was replaced with a healthy normal one. After about eight weeks, the hairs in the inner ears of the treated mice flourished, compared with similar mice which had not been treated. The gene-editing technique had removed the faulty gene that caused hearing loss. The treated mice were assessed for responsiveness to stimuli and showed positive gains.

We can be optimistic about the results, but it is important to stress the need for caution.

Firstly, the research was conducted on mice and not humans. It is important to state that certain experiments that have been successful in animals have not necessarily had similar success when tried on humans.

Secondly, while the benefits in mice were seen in eight weeks, it may take longer in humans, if at all successful.

Thirdly, we should remember that the experiment worked for the mice which had the genetic mutation that would eventually cause deafness. In other words, they had their hearing at birth but were susceptible to losing it. The technique prevented degeneration in hearing in mice but would not help mice that were deaf at birth from gaining hearing they never had.

Every piece of research carries ethical issues, and this one was no different. Firstly, there is the recurring question of whether animals should ever be used for research. Should mice be bred for the purposes of research? Are all the mice used? Are they accounted for? Is there someone from Health and Safety going around with a clipboard accounting for the mice? And what happens to the mice when the research has ceased? Are they put down, or released into the ecosystem? “Don’t be silly,” I hear you say, “it’s only mice.” That’s the problem. The devaluation of life, despite the fact that it belongs to another, is what eventually leads to a disregard for other life and human life in general. Would research scientists, in the quest for answers, eventually take to conducting research on beggars, those who sleep rough, or criminals? Would they experiment on orphans or unwanted babies?

The second issue, when it comes to genetics, is whether genetic experimentation furthers good or promotes misuse. The answer, I suppose, is that the knowledge empowers, but its use cannot easily be governed. The knowledge that a genetic mutation can be edited out is good news, perhaps, because it means disabilities or life-threatening diseases could be removed from the outset. But on the other hand, it may promote the rise of designer babies, where parents genetically select features such as blue eyes for their unborn child to enhance it from birth, and this would promote misuse in the medical community.

Will the use of what is probably best termed genetic surgery become more prominent in the future? One can only suppose so. Once procedures have become more widespread, it is reasonable to conclude that more such surgeons will become available, to cater for the rich and famous. It may be possible to delay the ageing process by genetic surgery, perhaps by removing the gene that causes skin to age, instead of using Botox and other external surgical procedures.

Would such genetic surgery ever be available on the NHS? For example, if a cancer gene were identified and could be genetically snipped out, would patients request this instead of tablets and other external surgical procedures? One way of looking at it is that the NHS is so cash-strapped that under QALY rules, where the cost of a procedure is weighed against the number of quality life years it adds, genetic surgery would be limited to the more serious illnesses, and certainly not those further down the rung. But for younger individuals suffering from serious illnesses such as depression, the cost of a one-off surgical procedure might be far outweighed by a lifetime’s cost of antidepressants and antipsychotics. If you could pinpoint a gene that causes a specific pain response, you might alter it to the point where you no longer need aspirin, too much of which causes bleeds. And if you could genetically locate what causes dementia in another person, would you not be considered unethical if you let the gene remain, thereby denying them the chance of a quality life in their latter years?
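
The QALY comparison described above is, at heart, simple arithmetic: total cost divided by quality life years gained. A minimal sketch – every figure below (procedure cost, drug cost, years) is hypothetical, chosen only to illustrate the trade-off:

```python
def cost_per_qaly(total_cost, qalys_gained):
    """Cost-effectiveness: total cost divided by quality-adjusted life years gained."""
    return total_cost / qalys_gained

# Hypothetical one-off genetic procedure vs decades of medication,
# both assumed to deliver the same 20 quality life years.
surgery = cost_per_qaly(total_cost=60_000, qalys_gained=20)
medication = cost_per_qaly(total_cost=40 * 2_500, qalys_gained=20)  # 40 years at £2,500/year

print(f"genetic procedure:   £{surgery:,.0f} per QALY")
print(f"lifelong medication: £{medication:,.0f} per QALY")
```

On these assumed numbers both fall well under the threshold of roughly £20,000 to £30,000 per QALY commonly cited for NICE decisions, but the one-off procedure comes out cheaper – which is the argument for it sketched above.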

Genetic editing may be a new technique for the moment, but if there is sufficient investment in infrastructure and the corpus of genetic surgery information widens, don’t be surprised if we start seeing more of it in the next century. Lifelong medication, with its costs and side effects, may come to be more expensive than genetic editing, which may prove not just more sustainable for the environment but more agreeable to the limited NHS budget.

Most of us won’t be around by then, of course. That is unless we’ve managed to remove the sickness and death genes.

Health umbrella reviews mask the real issues

You have to wonder why breakfast tea doesn’t get the same level of attention. Or whether in France the humble croissant is elevated to the same status. Or whether the banana could soon be the star of another media show. But unfortunately it is coffee that headlines tomorrow’s fish-and-chip papers.

“Drinking three or four cups of coffee a day could have benefits for your health”. As we have seen previously, this kind of headline bears the hallmarks of a media health report:

1) repackaging of common information, requiring little or no specialist examination;

2) use of a modal auxiliary verb (“could”) to conveniently justify or disclaim an attention-grabbing headline – which, by the way, is point number three.

Media health reports also incorporate:

4) a statistically small group of trial participants, whose results are then blown up as if they were representative of the seven billion people on the planet;

5) assumptions – a media report about health could simply rest on assumptions.

Why dwell on coffee? For starters, it is a commonly consumed drink, so any meaningful research would potentially have a bearing on millions of people. It is common media practice to focus on everyday foods and activities because of their relevance to daily life.

But if you examine this carefully, why not tea? Why not write about tea? While conspiracy theories may be slightly far-fetched, it is possible that – unless it is a speciality tea – coffee costs more, and any potential health benefits would lead people to spend more, generating more for the economy in the form of tax. Perhaps this is also why media writers don’t waste much ink on the potential life-saving benefits of bananas, even though they are widely consumed. The research isn’t going to drive people to buy bananas in bulk, and even if it did, the extra revenue from a low-priced item isn’t going to raise much extra tax.

Are there any notable similarities or differences in style across different countries? One wonders whether Parisian newspapers, on a regular basis, churn out headlines such as:

“Eating two or more croissants a day could reduce your chances of heart disease.”

“Pain aux raisins linked with dementia”.

The research in question was an umbrella review, examining whether further research should be undertaken into the effects of coffee, including its possible role in preventing liver cancer. An umbrella review means that no new research is undertaken; instead, existing research is examined and analysed to glean insights.

The first problem with umbrella reviews is that they are very generalised: no new research is done, and they are only brief analyses of existing work. This means that an umbrella review may arrive at a particular conclusion, but in no way should that be taken as the final conclusion.

In fact, the findings of an umbrella review are only a preliminary to more detailed investigation. If an umbrella review suggests that drinking coffee could prevent cancer, what it is really saying is that more research needs to be undertaken. The media, in turn, has an ethical responsibility not to report “Coffee prevents cancer”, because there are people who treat newspapers and television as their source of information and assume that anything released into the public domain must be true. Who could conceive that newspapers spend time and resources publishing trivial information, or that television is pure rubbish?

The second problem with umbrella reviews is that the outcomes are only as good as the original sources. If someone gave you a set of grainy photos and asked you to make a collage, the collage is only going to be as good as the grainy photos allow. If the original sources were not thorough or exact in their investigation, are any findings based on them merely a waste of time?

The third issue with umbrella reviews is that, under closer scrutiny, the overall picture can be distorted by over-focusing on small statistical variances; sometimes minute errors are magnified and lead one down the wrong path.

If you took a picture on your phone and blew it up to the size of a mural covering the side of your house, the picture would become very dotty, and you might see big patchy squares. But if you then started looking for those big patchy squares in the image on your phone, one has to wonder what the purpose of that would be.

The fourth is that, because umbrella reviews are a prelude to a more thorough investigation, their end results are slightly skewed from the outset. If an umbrella review is expected to provide a few avenues for later, time-consuming research, then it is fundamentally biased towards having to provide one. Why, in that case, have such reviews at all? Some may point out that the flaw in the system is that umbrella reviews are relied on by those in academia and research to warrant the continued longevity of their positions. In other words, if researchers had nothing to research, they might be out of a job, so they had best find something to stick their noses into.

Have you ever read the London newspaper Metro and come across some research news such as:

“Going to bed angry can wreck your sleep” (25 Sept 2017)

It is the sort of headline that makes you think “Why bother doing the research in the first place?”

It is likely that you have read a media report of an umbrella review.

What were the findings of the original coffee review?

Drinking coffee was consistently linked with a lower risk of death from all causes and from heart disease. The largest reduction in relative risk of premature death was seen in people consuming three cups a day, compared with non-coffee drinkers.

Now, when an umbrella review mentions that drinking coffee is linked with a lower risk of death, it is important to be clear about what that specifically means. What it is stating is that those who had a lower risk of death all happened to drink coffee. It might have nothing to do with the coffee itself. It might be that they took a break to slow down a fast-paced lifestyle, and the break itself gave them a lower risk of death. By that logic of association, tea could equally be linked with a lower risk of death.

Coffee was also associated with a lower risk of several cancers, including prostate, endometrial, skin and liver cancer, as well as type-2 diabetes, gallstones and gout, the researchers said. The greatest benefit was seen for liver conditions such as cirrhosis of the liver.

Again, to be clear, the above link means that those who were at lower risk of these cancers happened to drink coffee. It is not necessarily stating that the coffee had anything to do with it.

And coffee is such a commonly consumed drink, that it is easy to use it to draw links to anything.

If people who died from car accidents happened to drink coffee, an umbrella review might state that drinking coffee is linked with higher incidences of car accidents.
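The point about spurious links can be sketched with a toy simulation, using entirely made-up numbers and an assumed hidden confounder: if some third factor – say, a relaxed lifestyle – makes people both more likely to drink coffee and less likely to die early, then coffee and mortality will appear linked even though, in this model, coffee does nothing at all.

```python
import random

random.seed(0)

# Toy confounder simulation (all probabilities invented for illustration).
# "Relaxed" people are both more likely to drink coffee AND less likely
# to die early. Coffee itself has no effect in this model.
people = []
for _ in range(10_000):
    relaxed = random.random() < 0.5
    drinks_coffee = random.random() < (0.8 if relaxed else 0.3)
    dies_early = random.random() < (0.05 if relaxed else 0.15)
    people.append((drinks_coffee, dies_early))

def death_rate(coffee_flag):
    """Early-death rate within the coffee-drinking (or abstaining) group."""
    group = [died for drinks, died in people if drinks == coffee_flag]
    return sum(group) / len(group)

print(f"Early-death rate, coffee drinkers:     {death_rate(True):.3f}")
print(f"Early-death rate, non-coffee drinkers: {death_rate(False):.3f}")
# Coffee drinkers show a lower death rate purely because more of them
# happen to fall in the "relaxed" group.
```

A review that pooled observational data like this could honestly report that coffee is “linked with” lower mortality, while the coffee itself remains irrelevant.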

The findings were summarised by one health analyst:

“Does coffee prevent chronic disease and reduce mortality? We simply do not know. Should doctors recommend drinking coffee to prevent disease? Should people start drinking coffee for health reasons? The answer to both questions is ‘no’.”

We should perhaps add a third question: did the umbrella review produce any actionable findings, and should it have been undertaken in the first place?

Probably not.