Why health articles in newspapers should be retired

What is it that people look forward to? Most want time to pursue their interests and do the things they love. Some have managed to combine the two through the traditional interest-led approach: doing what they love, starting a blog about it, gaining a readership, and then selling advertising space, affiliate links and the other things associated with making money from a website. For others, the pull of the things they enjoy is compromised by the need to make a living, and so those interests are shelved and put off until retirement.

For most people, retirement is when they will finally have the time and money to indulge in the things they put off earlier. Some have combined blogging and retirement, making a living by blogging (and gaining a readership) about how they have retired early or intend to.

Retirement. Out of the rat race. All the time in the world. For most people, retirement is the time to look forward to.

A recent study, however, suggests that retirement is not all that wonderful. Despite being seen as the time of life when financial freedom has been achieved and time is flexible, retirement, it has been suggested, marks the onset of mental decline.

The Daily Telegraph, citing scientists, reported that retirement causes brain function to decline rapidly. It further cautioned that workers anticipating leisurely post-work years may need to reconsider their options because of this decline. Would you choose to stop work if it meant your mental faculties would suffer, leaving you with all the free time in the world but not the mental acuity to enjoy it?

Retired civil servants were found to have a decline in verbal memory function, the ability to recall spoken information such as words and names. Verbal memory was found to deteriorate 38% faster after an individual had retired than before. Other areas of cognitive function, however, such as the ability to think and formulate patterns, were unaffected.

Even though the decline in verbal memory function has some meaningful relevance, it must be made clear that the study says nothing about dementia or the likelihood of developing it. No links with dementia were drawn. Just because someone retires does not mean they are more likely to develop dementia.

The study involved over 3,000 adults, who were asked to recall words from a list of twenty after two minutes; the percentages were drawn from there. The small sample size (not of the adults, but of the word list) meant the percentage decline of post-retirement adults may have been exaggerated.

Look at this mathematically. From a list of twenty words, a non-retiree may recall ten; a retiree may recall six. That difference of four words is a decline of 40%.

Ask yourself – if you were given a list of twenty words, how many would you remember?

It is not surprising if retirees exhibit weaker verbal memory recall, because the skill is not really exercised post-retirement. What you don’t use, you lose. Nor should we be worried about the decline, because it is not a permanent mental state but a reversible one; in any case, the figure is inflated by the nature of the test. If a non-retiree remembers ten words and a retiree makes one mistake and remembers nine, that would already be promoted as a 10% reduction in mental ability.
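To see how coarse the measurement is, here is a minimal sketch (in Python, using the illustrative numbers above rather than the study’s raw data) of the percentages a twenty-word test can produce:

```python
# A minimal sketch of how coarse a 20-word recall test is.
# All numbers are illustrative, not the study's raw data.

def percent_decline(before: int, after: int) -> float:
    """Relative decline in words recalled, as a percentage."""
    return (before - after) / before * 100

print(percent_decline(10, 6))  # 40.0: the four-word gap in the example above
print(percent_decline(10, 9))  # 10.0: one forgotten word already reads as a
                               # double-digit drop in "mental ability"
```

Against a ten-word baseline, each word is worth ten percentage points, so the test simply cannot report a decline smaller than 10%.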

Furthermore, the decline is not necessarily due to the lack of work. There are many other contributing factors, such as diet, alcohol and lifestyle. Retirement is not necessarily the impetus behind mental decline; other factors may confound the analyses.

The research also did not involve people who had retired early, such as hedge fund managers who might have retired in their forties. You would struggle to believe that someone in their forties loses 38% of verbal memory recall.

Would a loss of 38% of verbal memory have an impact on quality of life? It is hard to tell whether the evidence supports this. But the results point to a simple fact: if you want to get better at verbal memory, practise your verbal memory skills. If you want to get better at anything, practise doing it.

Was this piece of news yet another attempt by the mainstream media to clog paper space with arguably useless information? You decide.

The real health concern behind energy drinks

Could your regular drink give away your age? Possibly. It is conceivable that your pick-me-up in the morning is a general indicator of age. Those who prefer nothing more than a coffee are more likely to be working adults in their mid-thirties or older. Those in the younger age brackets prefer to get their caffeine fix from energy drinks, the most popular among them being Red Bull, whose popularity has arguably been enhanced by how readily it mixes with other drinks. Why is there this disparity in preference? It has been suggested that the older generation are more conscious of the levels of sugar in energy drinks and their effects, and hence avoid them, while younger professionals, who perhaps lead a more active lifestyle that includes going to the gym, are more inclined to think they will somehow burn off the sugar over the course of the day, and that they need it, in addition to the caffeine, to power them through.

Research suggests this kind of thinking pervades the younger generation, right down to the teenage age group. In a bid to seem more mature, many adopt the habits of those they see around them. The image of a twenty-something on the way to work, energy drink in one hand, sling bag on the shoulder, possibly a cigarette in the other hand (whatever work may be: perhaps a singer-songwriter, or something with a socially glamorous title), is seemingly etched on the minds of youngsters as the picture of having made it. This, coupled with media images of celebrities on nights out with energy drinks in hand, enabling them to party the night through, has certainly promoted the rise of the energy drink among teenagers. It is arguable that energy drinks are the stepping stones from which the younger generation obtain their high before they progress to alcohol. Research has demonstrated that it is usually within three years of starting energy drinks that a young adult progresses to alcohol in search of newer buzzes.

There are the obvious problems of overconsumption of alcohol, and it is of increasing concern that copious amounts of energy drinks prime young people to reach for higher volumes of alcohol once they make the transition. Simply put, if a young person has habitually consumed three or four cans of Red Bull every day and then tries alcohol (usually the drink with the highest alcohol percentage, typically vodka, for the same reason of perceived social prestige), then the starting point appears to be three or four shots.

And one of the drinks that helps bridge the divide between energy drinks and alcohol?

Red Bull mixed with vodka.

Ever seen the videos of young adults knocking back shots of vodka or whisky like a fun game?

It seems that imprinted on the social subconscious is the idea that part of maturity and social status is the ability to knock back many shots of high-strength alcohol. This has implications for the health of the future generation.

But it is not just the alcohol time bomb that is worrying. Overconsumption of energy drinks causes tooth decay, and the high levels of caffeine produce side effects in the body right now.

A study of over 200 Canadian teenagers found that consumption of energy drinks caused sleeplessness and increased heart rate. Participants also reported other symptoms such as nausea and headaches.

The tabloids, in their usual way, exaggerated the links, claiming that energy drinks can cause heart attacks and trigger underlying stress-related conditions. In fact, only one in five hundred suffered seizures, and even those could not be traced directly to the energy drinks.

Energy drinks not only have implications for health, through the impact of sugar and caffeine; they are subtly dangerous because they blur the lines between non-alcoholic and alcoholic drinks, making the latter more trendy and accessible. In a way, they are similar to vaping. Both are supposedly healthier imitations of what they are meant to replace: vaping apparently has no significant effect on the body compared to smoking, and energy drinks are non-alcoholic ways of obtaining a high or rush.

The problem, however, is that once users have had their fill of these so-called healthier options, the options compel them to move on to the less healthy one. And when they embark on the more damaging lifestyle choice, whether alcohol or smoking, the patterns of dependency have already long been established.

So the danger of energy drinks is not so much that they cause sleeplessness and increased heart rates.

It is that they propel individuals towards alcohol dependency. The main research question that should be asked is: “Have you been tempted to try alcoholic drinks mixed with energy drinks such as Red Bull?”

Ibuprofen and the fertile imagination

There is an astounding variety of painkillers available for purchase in supermarkets, chemists and corner shops. Just take a look at the shelves of your nearest Tesco or Sainsbury’s. You will find various types of paracetamol, made both by pharmaceutical companies and as the supermarkets’ in-house versions.

What is the difference between them and why are there so many varieties?

When a pharmaceutical company decides to develop a new drug, it is given a twenty-year patent covering the research into the product, testing, manufacturing and sales. That twenty-year period, a monopoly of sorts, rewards the company for the time invested in the research. In the course of the research, pharmaceutical companies must publish various forms of medical evidence and put them into the public domain, so that any medical evidence pointing to the contrary can be debated by both the medical community and the pharmaceutical world.

The problem, if we can call it that, is that business is a very competitive world. If research were put out in the open without any form of intellectual protection, any manufacturer could pounce on the research undertaken by someone else who has taken the effort and trouble to do it, and produce their own product off the back of it, saving themselves the time and cost of the investment.

Imagine a writer who has taken the time to research a topic, organise his thoughts succinctly and find a publisher. When his book is published, someone else photocopies it, binds the copied pages and peddles it as their own.

Within that period of twenty years, a pharmaceutical company has to research, market and sell enough of the product to recoup the investment costs and make a profit. It is after the twenty years have expired that the other sharks enter the fray. This is where you get the supermarket brands of the product, which are cheaper because they don’t need to pay for the research.

What is the difference between brand names and generics? They do essentially the same thing. But if the original company has done a good job of making the product synonymous with its own brand, then you might think the branded version better. If you take Nurofen for headaches, you might think it better than Tesco ibuprofen, even though both contain the same active ingredient.

But pharmaceutical companies have to reinvent themselves and make varieties of the same product, otherwise they will lose their market share and eventually die out. If you realise that Nurofen is matched in ability by the cheaper Tesco ibuprofen, you will buy the latter, unless you are persuaded that Nurofen for colds and flu, or Nurofen for muscle pain, has something clinically formulated for that specific purpose.

So the shelves of supermarkets are stacked with differently priced products containing the same active ingredient, as well as different varieties of the same product.

Painkillers are a common medicine because there will always be a demand for pain management.

The availability of pain relief medicine means it is easy for the average individual to obtain it. There is the possibility of overdose, and while that may be a rarity, the greater likelihood is that wide availability means individuals take more doses than they should.

What are the long term health impacts of taking ibuprofen for prolonged periods?

One problem is that the body adapts, so the drug’s long-term effectiveness is reduced. In certain groups, such as the elderly, aspirin also increases the risk of stomach bleeding.

A clinical trial seemed to suggest it may affect testosterone production and hence fertility.

Test subjects were administered two 600mg doses of ibuprofen daily for six weeks, much higher than the average dose. The sample was small, a group of just 30, half of whom received ibuprofen while the others received a placebo. A larger subject group would have given more confidence in the results, but because a test of this nature examines human tolerance of what is essentially toxicity, it would have been unethical to involve a large group of participants. The researchers found no impact on testosterone already in the body, but ibuprofen, a relaxant of sorts, appeared to slow down the production of new testosterone.

How did these reports end up in the media? The tabloids had a field day, and you would undoubtedly have found one with the usual wisecracks about balls and other male genitalia, along the lines of “Ibuprofen shrinks your balls” or “Ibuprofen smalls your balls”.

Maybe instead of Ibuprofen for colds or fast relief, we need Ibuprofen for Dummies.

One cigarette a day can cost a lot

According to recent newspaper headlines, teenagers should be kept away from cigarette exposure because of one worrying statistic.

A survey of over 216,000 adults found that over 60% of them had been offered a cigarette and tried it at some point, and of these, nearly 70% went on to become regular smokers. The conclusion drawn was that there is a strong link between trying a cigarette once, perhaps to be sociable, and going on to develop a smoking habit.

This of course ended up in the newspapers under headlines such as “One cigarette is enough to get you hooked”. The Mail Online, Britain’s go-to newspaper for your important health news (and I’m being ironic here), went a step further, saying one puff of a cigarette was enough to get you hooked for life. Never mind if you had one draw, felt the nicotine reach your lungs, then coughed in revulsion at the bitter aftertaste and swore you would never try a cigarette again. The Mail Online bets you would return to the lure of the dark side, seduced by its nicotine offers.

I digress.

While we all know that any event, repeated many times, becomes a habit, the statistics in this case are a little dubious.

The study was conducted by Queen Mary University of London (nothing dubious in itself), but among the various concerns was what you might call the high conversion rate: nearly 70% of those who tried a cigarette once went on to smoke regularly as a habit.

I’m not sure why the 70% is worrying. In fact, I wonder why it is not 100%! Surely, if you asked a habitual smoker, “Have you smoked a cigarette before?”, the answer would be a resounding “Yes”!

Unless you have caught someone in the act of sneakily smoking his virgin cigarette. But he wouldn’t yet be a habitual smoker.

Let’s establish the facts of the matter again.

216,000 adults were surveyed.

130,000 of them (60% of the adults) had tried a cigarette before.

86,000 (40%) have never smoked before.

Of the 130,000 who had tried a cigarette before, roughly 91,000 (70%) went on to become regular smokers.

The remaining 39,000 (30%) of those who tried a cigarette either did not go on to smoke at all or did not smoke regularly.

Another way of looking at the data would be as follows:

216,000 adults surveyed.

125,000 adults do not smoke regularly or at all. Some did try once in the past.

91,000 adults smoke regularly, and these people have obviously tried a cigarette before.

Suddenly the data doesn’t look sexy anymore.
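If you want to check the arithmetic yourself, here is a quick sketch using the rounded figures quoted above. Everything in it is approximate and derived from the quoted percentages, not from the study’s raw data:

```python
# A sanity check of the survey arithmetic, using the rounded
# percentages quoted above. All figures are approximate.

surveyed = 216_000
tried = round(surveyed * 0.60)        # ever tried a cigarette: ~130,000
never_tried = surveyed - tried        # never tried: ~86,000
regular = round(tried * 0.70)         # became regular smokers: ~91,000
tried_not_regular = tried - regular   # tried, but never became regular: ~39,000

# Reframed: most of those surveyed do not smoke regularly at all.
non_smokers = never_tried + tried_not_regular
print(f"{regular:,} regular smokers vs {non_smokers:,} who do not smoke regularly")
```

The same numbers, grouped differently, turn “one cigarette hooks you” into “most people surveyed are not regular smokers”.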

The data came from an umbrella study, which means the data was pooled rather than created from scratch through surveys. As previously examined, the final outcome is therefore dependent on the integrity of the original sources.

Bias can also creep in because the data was not directly obtained and inferences have been drawn.

For example, the influence of e-cigarettes and vaping on the results has not been scrutinised, because some of the data may predate them.

Before we leave it at this, here is another example of data bias:

216,000 adults were surveyed.

130,000 of them (60% of the adults) had tried a cigarette before.

86,000 (40%) have never smoked before.

We can conclude that 100% of the 86,000 who have never smoked a cigarette in the past have never smoked a cigarette.

You can see the absurdity better when it’s spelt out in words rather than in numbers.

If research is costly, in terms of both money and time, then why is it wasted on studies like these?

One reason is that it keeps academics and researchers in their jobs: findings that are financially low-cost to produce can stave off questions about what they actually do and what their purpose is.

This kind of research is the academic version of the newspaper filler article, the kind columnists generate from the littlest of information in order to fill the papers with “news” that masks the fact that they are there to sell advertising space. And in this, columnists and researchers are at times colluding for the same purpose: vultures tearing at the carcass of a small rodent and serving up the bits as a trussed-up main meal.

Unethical? Who cares, it seems. Just mask the flawed process and don’t make it too obvious.

Your daily sausage roll may exact its revenge on you in good time

Ever wonder why people go on a vegetarian or a vegan diet? There are many reasons I can think of.

The most common is opposition to animal cruelty. People who avoid eating animal-based products are against the farming of animals because they are convinced that the animals are treated inhumanely. Battery hens, for example, are kept in small cages at high densities. Imagine if you and your co-workers were put together in a small room, without any desks, and told to make the most of it. You would all be up in arms about the way you were treated. The only difference between you and hens is that hens can’t protest about it.

The transition to a vegan diet is not just about not eating animals, although this can be a factor too. Vegans are against the eating of animal meat because of the way farm animals are killed. Cows, pigs and chickens, the main farm animals killed to provide common English foods such as the English breakfast of sausages, bacon and eggs, are, in the opinion of vegans, killed inhumanely, despite the best of measures.

Do you know how a chicken is killed before it ends up deep-fried in breadcrumbs and served with your chips and bottle of cola? There are two main methods. The first is electrical. The birds are shackled upside down to a conveyor belt by their legs. Needless to say, they don’t willingly walk to the machine and pick their positions; there is a lot of fluttering about, human exasperation and rough handling, which may result in broken bones (who cares, right? After all, the bird is going to be dead soon) before the conveyor belt brings the birds, upside down, into a water bath primed with an electric circuit. The moment a bird’s head touches the water, it is electrocuted to death.

The second method is gassing. Birds are transported in their crates and suffocated. This method is arguably more humane, supporters say, because the birds are not manhandled. But don’t be fooled into thinking the birds’ welfare is under consideration. It is simply a faster, less labour-intensive way of killing them: sling them in the box and gas them to death, with no messing around trying to catch the flapping things. Avoiding the need to shackle them also saves time.

There is a third reason often quoted for going further and becoming vegan. Cows produce vast amounts of methane, and if everyone stopped eating beef it would be better for the environment. In this instance, it is not so much the animal’s welfare as the avoidance of the environmental pollution the animal causes.

There may soon be a fourth reason for avoiding meat. Processed meats, which have been preserved using methods such as salting, curing, smoking or adding preservatives, have been linked with cancer.

A study involving 262,195 UK women showed a link between processed meat and breast cancer. Postmenopausal women who ate processed meat had a 9% higher chance of developing breast cancer than women who ate none. Those who consumed more than 9g of processed meat had a 21% higher chance than those who avoided it altogether.
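It is worth remembering that figures like 9% and 21% are relative increases, not absolute chances. Here is a rough sketch of how a relative risk translates into absolute numbers; the baseline rate below is an assumed, purely illustrative figure, not one taken from the study:

```python
# Relative risk vs absolute risk. The 9% and 21% figures are the
# reported relative increases; the baseline is ASSUMED for illustration.

baseline = 0.05  # hypothetical: 5 in 100 women develop breast cancer
                 # over the follow-up period (illustrative only)

for label, rel_increase in [("ate processed meat", 0.09),
                            ("ate more than 9g", 0.21)]:
    absolute = baseline * (1 + rel_increase)
    extra_per_1000 = (absolute - baseline) * 1000
    print(f"{label}: absolute risk {absolute:.4f}, "
          f"roughly {extra_per_1000:.1f} extra cases per 1,000 women")
```

On that assumed baseline, a 21% relative increase works out at around ten extra cases per 1,000 women, a figure that sounds rather less alarming than the headline percentage.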

The study is significant because the sample size is large: not just 100 women, or some negligible figure whose results might bias the findings, but over 250,000 women, more than enough to be taken seriously.

The women were all between the ages of 40 and 69 and free of cancer when they were recruited for the study before 2010. They were followed for a period of seven years and the results examined.

Processed meats are thought to cause cancer because the methods involved in processing the meat may lead to the formation of cancer-causing compounds called carcinogens.

What is not so clear is whether it was the eating of processed meats in isolation that caused the development of cancer. There are other factors that should be taken into account, of course, such as alcohol, exercise, work stress, lifestyle factors and body mass index. Certain ethnicities may also be prone to developing cancer because of other dietary factors such as cooking with oil, ghee or lard.

The results also did not suggest that the findings would be equally applicable to men.

Nevertheless, if you are an older woman, it would be a good idea to avoid eating processed meat every day. Consumption could be limited to once every other day, or to an occasional treat. Or cut out meat completely: a switch to a vegetarian or vegan diet would not only be good for your health, it would be considerate of the environment too.

A short history of non-medical prescribing

It had long been recognised that nurses spent a significant amount of time visiting general practitioner (GP) surgeries and/or waiting to see the doctor in order to get a prescription for their patients. Although this practice produced the desired result of a prescription being written, it was not an efficient use of either the nurses’ or the GPs’ time. Furthermore, it was an equally inefficient use of their skills, exacerbated by the fact that the nurse had usually themselves assessed and diagnosed the patient and decided on an appropriate treatment plan.

The situation was formally acknowledged in the Cumberlege Report (Department of Health and Social Security 1986), which initiated the call for nurse prescribing and recommended that community nurses should be able to prescribe from a limited list, or formulary. Progress was somewhat measured, but The Crown Report of 1989 (Department of Health (DH) 1989) considered the implications of nurse prescribing and recommended that suitably qualified registered nurses (district nurses (DN) or health visitors (HV)) should be authorised to prescribe from a limited list, namely the nurse prescribers’ formulary (NPF). Although a case for nurse prescribing had been established, progress relied on legislative changes to permit nurses to prescribe.

Progress continued to be cautious with the decision made to pilot nurse prescribing in eight demonstration sites in eight NHS regions. In 1999, The Crown Report II (DH 1999) reviewed more widely the prescribing, supply and administration of medicines and, in recognition of the success of the nurse prescribing pilots, recommended that prescribing rights be extended to include other groups of nurses and health professionals. By 2001, DNs and HVs had completed education programmes through which they gained V100 prescribing status, enabling them to prescribe from the NPF. The progress being made in prescribing reflected the reforms highlighted in The NHS Plan (DH 2000), which called for changes in the delivery of healthcare throughout the NHS, with nurses, pharmacists and allied health professionals being among those professionals vital to its success.

The publication of Investment and Reform for NHS Staff – Taking Forward the NHS Plan (DH 2001) stated clearly that working in new ways was essential to the successful delivery of the changes. One of these new ways of working was to give specified health professionals the authority to prescribe, building on the original proposals of The Crown Report (DH 1999). Indeed, The NHS Plan (DH 2000) endorsed this recommendation and envisaged that, by 2004, most nurses should be able to prescribe medicines (either independently or as supplementary prescribers) or supply medicines under patient group directions (PGDs) (DH 2004). After consultation in 2000 on the potential to extend nurse prescribing, changes were made to the Health and Social Care Act 2001.

The then Health Minister, Lord Philip Hunt, provided detail when he announced that nurse prescribing was to include further groups of nurses. He also detailed that the NPF was to be extended to enable independent nurse prescribers to prescribe all general sales list and pharmacy medicines prescribable by doctors under the NHS. This was together with a list of prescription-only medicines (POMs) for specified medical conditions within the areas of minor illness, minor injury, health promotion and palliative care. In November 2002, proposals were announced by Lord Hunt concerning ‘supplementary’ prescribing (DH 2002).

The proposals were to enable nurses and pharmacists to prescribe for chronic illness management using clinical management plans. The success of these developments prompted further regulation changes, enabling specified allied health professionals to train and qualify as supplementary prescribers (DH 2005). From May 2006, the nurse prescribers’ extended formulary was discontinued, and qualified nurse independent prescribers (formerly known as extended formulary nurse prescribers) were able to prescribe any licensed medicine for any medical condition within their competence, including some controlled drugs.

Further legislative changes allowed pharmacists to train as independent prescribers (DH 2006) with optometrists gaining independent prescribing rights in 2007. The momentum of non-medical prescribing continued, with 2009 seeing a scoping project of allied health professional prescribing, recommending the extension of prescribing to other professional groups within the allied health professions and the introduction of independent prescribing for existing allied health professional supplementary prescribing groups, particularly physiotherapists and podiatrists (DH 2009).

In 2013, legislative changes enabled independent prescribing for physiotherapists and podiatrists. As the benefits of non-medical prescribing are demonstrated in the everyday practice of different professional groups, the potential to expand this continues, with consultation currently under way to consider the potential for enabling other disciplines to prescribe.

The bigger issues that come with preventing hearing loss

Is there cause for optimism when it comes to preventing hearing loss? Certainly the latest research suggests that if the positive effects experienced by mice could be transferred to humans and maintained for the long term, then hereditary hearing loss could be a thing of the past.

It has long been assumed that hearing loss is down to old age. The commonly held view is that as people grow older, their muscles and body functions deteriorate with time, to the point that function is impaired and eventually lost. But hearing loss is not necessarily down to age, although there are cases where constant exposure to loud noise, over time, causes reduced sensitivity to aural stimuli. Over half of hearing loss cases are actually due to faulty genetic mutations inherited from parents.

How do we hear? Hair cells in the part of the inner ear called the cochlea respond to vibrations, and these signals are sent to the brain to interpret. The brain processes the signals in terms of frequency, duration and timbre in order to translate them into sounds we recognise.

For example, if we hear a shrill, high-frequency sound of short duration, our brain interprets these characteristics, runs through a database of sounds (an audio library in the brain) and may come up with the suggestion that the sound has come from a whistle and may signify a call for attention.

What happens when you have a hearing loss gene? The hairs of the inner ear do not grow back, and consequently sound vibrations from external stimuli do not get passed on to the brain.

With progressive hearing loss, the characteristics of sound also become distorted. We may hear sounds differently from how they are produced, thereby misinterpreting their meaning. Sounds of higher and lower frequencies may become less audible too.

How does that cause a problem? Imagine an alarm. It is set at a high frequency so that it attracts attention. If your ability to hear high frequencies is gradually dulled, you may not be able to detect the sound of an alarm going off.

As hearing gradually deteriorates, the timbre of a sound changes. Sharper sounds become duller; in the case of the alarm, you may hear it, but it may sound so muted that the brain fails to recognise it as an alarm.

Another problem with hearing loss is the loss of volume perception. You may be crossing the road when a car sounds its horn because you have suddenly encroached into its path. But if you cannot hear that the sound is loud, you may perceive it as coming from a car far away and not realise you are in danger.

The loss of the hairs in the inner ear is a cause of deafness in humans, particularly those for whom hearing loss is genetic. Humans suffering from hereditary hearing loss lose these hairs, with the resulting difficulties mentioned above. But there is hope. In a research experiment, scientists successfully delayed the loss of the inner-ear hairs in mice, using a technique that edited away the genetic mutation that causes the hairs in the cochlea to be lost.

Mice were bred with the faulty gene that causes hearing loss. Then, using a technology known as CRISPR, the faulty gene was replaced with a healthy, normal one. After about eight weeks, the hairs in the inner ears of the treated mice flourished, compared with similar mice that had not been treated: the editing had removed the faulty gene responsible for the hearing loss. The treated mice were assessed for responsiveness to stimuli and showed positive gains.

We can be optimistic about the results, but it is important to stress the need for caution.

Firstly, the research was conducted on mice, not humans. Experiments that have been successful in animals have not necessarily had similar success when tried on humans.

Secondly, while the benefits in mice were seen in eight weeks, it may take longer in humans, if at all successful.

Thirdly, we should remember that the experiment worked on mice with a genetic mutation that would eventually cause deafness. In other words, they had their hearing at birth but were susceptible to losing it. The technique prevented degeneration of hearing; it would not help mice that were deaf at birth to gain hearing they never had.

Every piece of research carries ethical issues, and this one was no different. Firstly, there is the recurring question of whether animals should ever be used for research. Should mice be bred for the purposes of research? Are all the mice used? Are they accounted for? Is there someone from Health and Safety going around with a clipboard, counting the mice? And what happens to them when the research has ceased? Are they put down, or released into the ecosystem? “Don’t be silly,” I hear you say, “it’s only mice.” That’s the problem. The devaluation of life, even when it belongs to another species, is what eventually leads to a disregard for other life and for human life in general. Would research scientists, in the quest for answers, eventually take to conducting research on beggars, rough sleepers or criminals? Would they experiment on orphans or unwanted babies?

The second issue, when it comes to genetics, is whether genetic experimentation furthers good or promotes misuse. The answer, I suppose, is that the knowledge empowers, but its use cannot be fully governed. The knowledge that a genetic mutation can be edited away is good news, perhaps, because it means disabilities or life-threatening diseases could be removed from the outset. On the other hand, it may promote the rise of designer babies, where parents genetically select features such as blue eyes to enhance their unborn child from birth, and this would promote misuse within the medical community.

Will the use of what is probably best termed genetic surgery be more prominent in the future? One can only suppose so. Once the procedures become more widespread, it is safe to conclude that more such surgeons will become available, catering for the rich and famous. It may become possible to delay the ageing process by genetic surgery, perhaps by removing the gene that causes skin to age, instead of using Botox and other external surgical procedures.

Would such genetic surgery ever be available on the NHS? For example, if the cancer gene were identified and could be genetically snipped out, would patients request this instead of tablets and other external surgical processes? One way of looking at it is that the NHS is so cash-strapped that under QALY rules, where the cost of a procedure is weighed against the number of quality life years it adds, genetic surgery would be limited to the more serious illnesses, and certainly not those further down the rung. But for younger individuals suffering from serious illnesses, such as depression, the cost of a one-off surgical procedure might be far outweighed by a lifetime’s cost of antidepressants, antipsychotics or antibiotics. If you could pinpoint a gene that causes a specific pain response, you might alter it to the point where you no longer need aspirin, too much of which causes bleeds. And if you could genetically locate what causes dementia in another person, would you not be considered unethical if you let the gene remain, thereby denying them the chance to live a quality life in their latter years?
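For illustration, the QALY-style comparison being described might look something like the sketch below. Every figure in it is an invented placeholder, not NHS data or a real price:

```python
# A back-of-the-envelope QALY comparison. Every number here is a
# hypothetical placeholder, not NHS data.

def cost_per_qaly(total_cost: float, qalys_gained: float) -> float:
    """Cost divided by the quality-adjusted life years gained."""
    return total_cost / qalys_gained

# Hypothetical one-off genetic procedure for a younger patient
surgery = cost_per_qaly(total_cost=30_000, qalys_gained=20)

# Hypothetical lifetime of medication: 40 years at 800 a year, with
# side effects assumed to reduce the quality-of-life gain
medication = cost_per_qaly(total_cost=40 * 800, qalys_gained=15)

print(f"Genetic surgery:     {surgery:,.0f} per QALY")    # 1,500
print(f"Lifelong medication: {medication:,.0f} per QALY") # about 2,133
```

On these invented numbers the one-off procedure comes out cheaper per quality life year; with different placeholders it could easily go the other way, which is precisely why the QALY calculation matters.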

Genetic editing may be a new technique for the moment, but if there is sufficient investment in infrastructure and the corpus of genetic surgery information widens, don’t be surprised if we start seeing more of it in the next century. The cost of genetic editing may be outweighed by the cost of lifelong medication and its side effects, and may prove to be not just more sustainable for the environment but more agreeable to the limited NHS budget.

Most of us won’t be around by then, of course. That is unless we’ve managed to remove the sickness and death genes.