A short history of non-medical prescribing

It had long been recognised that nurses spent a significant amount of time visiting general practitioner (GP) surgeries and/or waiting to see the doctor in order to obtain prescriptions for their patients. Although this practice produced the desired result of a prescription being written, it was not an efficient use of either the nurses’ or the GPs’ time. Furthermore, it was an equally inefficient use of their skills, exacerbated by the fact that the nurse had usually already assessed and diagnosed the patient and decided on an appropriate treatment plan.

The situation was formally acknowledged in the Cumberlege Report (Department of Health and Social Security 1986), which initiated the call for nurse prescribing and recommended that community nurses should be able to prescribe from a limited list, or formulary. Progress was somewhat measured, but The Crown Report of 1989 (Department of Health (DH) 1989) considered the implications of nurse prescribing and recommended that suitably qualified registered nurses (district nurses (DN) or health visitors (HV)) should be authorised to prescribe from a limited list, namely the nurse prescribers’ formulary (NPF). Although a case for nurse prescribing had been established, progress relied on legislative changes to permit nurses to prescribe.

Progress continued to be cautious with the decision made to pilot nurse prescribing in eight demonstration sites in eight NHS regions. In 1999, The Crown Report II (DH 1999) reviewed more widely the prescribing, supply and administration of medicines and, in recognition of the success of the nurse prescribing pilots, recommended that prescribing rights be extended to include other groups of nurses and health professionals. By 2001, DNs and HVs had completed education programmes through which they gained V100 prescribing status, enabling them to prescribe from the NPF. The progress being made in prescribing reflected the reforms highlighted in The NHS Plan (DH 2000), which called for changes in the delivery of healthcare throughout the NHS, with nurses, pharmacists and allied health professionals being among those professionals vital to its success.

The publication of Investment and Reform for NHS Staff – Taking Forward the NHS Plan (DH 2001) stated clearly that working in new ways was essential to the successful delivery of the changes. One of these new ways of working was to give specified health professionals the authority to prescribe, building on the original proposals of The Crown Report (DH 1999). Indeed, The NHS Plan (DH 2000) endorsed this recommendation and envisaged that, by 2004, most nurses should be able to prescribe medicines (either independently or as supplementary prescribers) or supply medicines under patient group directions (PGDs) (DH 2004). After consultation in 2000 on the potential to extend nurse prescribing, changes were made through the Health and Social Care Act 2001.

The then Health Minister, Lord Philip Hunt, provided detail when he announced that nurse prescribing was to include further groups of nurses. He also detailed that the NPF was to be extended to enable independent nurse prescribers to prescribe all general sales list and pharmacy medicines prescribable by doctors under the NHS. This was together with a list of prescription-only medicines (POMs) for specified medical conditions within the areas of minor illness, minor injury, health promotion and palliative care. In November 2002, proposals were announced by Lord Hunt concerning ‘supplementary’ prescribing (DH 2002).

The proposals were to enable nurses and pharmacists to prescribe for chronic illness management using clinical management plans. The success of these developments prompted further regulation changes, enabling specified allied health professionals to train and qualify as supplementary prescribers (DH 2005). From May 2006, the nurse prescribers’ extended formulary was discontinued, and qualified nurse independent prescribers (formerly known as extended formulary nurse prescribers) were able to prescribe any licensed medicine for any medical condition within their competence, including some controlled drugs.

Further legislative changes allowed pharmacists to train as independent prescribers (DH 2006) with optometrists gaining independent prescribing rights in 2007. The momentum of non-medical prescribing continued, with 2009 seeing a scoping project of allied health professional prescribing, recommending the extension of prescribing to other professional groups within the allied health professions and the introduction of independent prescribing for existing allied health professional supplementary prescribing groups, particularly physiotherapists and podiatrists (DH 2009).

In 2013, legislative changes enabled independent prescribing for physiotherapists and podiatrists. As the benefits of non-medical prescribing are demonstrated in the everyday practice of different professional groups, the potential to expand this continues, with consultation currently under way to consider the potential for enabling other disciplines to prescribe.

The bigger issues that come with preventing hearing loss

Is there cause for optimism when it comes to preventing hearing loss? Certainly the latest research into this suggests that if positive effects experienced by mice could be transferred to humans and maintained for the long term, then hereditary hearing loss could be a thing of the past.

It has often been assumed that hearing loss is simply a consequence of old age. The commonly held view is that as people grow older, their muscles and body functions deteriorate with time to the point that function is impaired and eventually lost. But hearing loss is not necessarily down to age, although there are cases where constant exposure to loud noise over time causes reduced sensitivity to aural stimuli. Over half of hearing loss cases are actually due to faulty genes inherited from parents.

How do we hear? Hair cells in the part of the inner ear called the cochlea respond to vibrations, and these signals are sent to the brain to interpret. The brain processes these signals in terms of frequency, duration and timbre in order to translate them into sounds we know.

For example, if we hear a high frequency sound of short duration that is shrill, our brain interprets these characteristics and then runs through a database of audio sounds, an audio library in the brain, and may come up with the suggestion that it has come from a whistle and may signify a call for attention.

What happens when you have a gene for hereditary hearing loss? The hair cells of the inner ear do not grow back, and consequently sound vibrations from external stimuli are not passed on to the brain.

With progressive hearing loss too, the characteristics of sound also get distorted. We may hear sounds differently to how they are produced, thereby misinterpreting their meaning. Sounds of higher and lower frequency may be less audible too.

How does that cause a problem? Imagine an alarm. It is set on a high frequency so that it attracts attention. If your ability to hear high frequencies is gradually dulled then you may not be able to detect the sound of an alarm going off.

As hearing gradually deteriorates, the timbre of a sound changes. Sharper sounds become duller, and in the case of the alarm, you may hear it, but it may sound more muted and the brain may not be able to recognise that it is an alarm being heard.

Another problem with hearing loss is the loss of perception of volume. You may be crossing the road and a car might sound its horn if you suddenly encroach into its path. But if you cannot hear that the volume is loud, you may perceive it to be from a car far away and may not realise you are in danger.

The loss of the hairs in the inner ear is a cause of deafness in humans, particularly those for whom hearing loss is genetic. Humans suffering from hereditary hearing loss lose the hairs of the inner ear, which results in the difficulties mentioned above. But there is hope. In a research experiment, scientists successfully delayed the loss of the inner-ear hairs in mice using a technique that edited away the genetic mutation that causes the loss of the hairs in the cochlea.

Mice were bred with the faulty gene that causes hearing loss. Then, using a technology known as CRISPR, the faulty gene was replaced with a healthy normal one. After about eight weeks, the hairs in the inner ears of the mice with a genetic predisposition to hearing loss flourished, compared to similar mice that had not been treated. The genetic editing technique had removed the faulty gene that caused hearing loss. The treated mice were assessed for responsiveness to stimuli and showed positive gains.

We could be optimistic about the results but it is important to stress the need to be cautious.

Firstly, the research was conducted on mice, not humans. Experiments that have been successful in animals have not necessarily had similar success when repeated in humans.

Secondly, while the benefits in mice were seen in eight weeks, it may take longer in humans, if at all successful.

Thirdly, we should remember that the experiment worked for mice which had the genetic mutation that would eventually cause deafness. In other words, they had their hearing at birth but were susceptible to losing it. The technique prevented degeneration in hearing in mice, but it would not help mice that were deaf at birth gain hearing they never had.

Every research project carries ethical issues, and this one was no different. Firstly, there is the recurring question of whether animals should ever be used for research. Should mice be bred for the purposes of research? Are all the mice used? Are they accounted for? Is there someone from Health and Safety going around with a clipboard accounting for the mice? And what happens to the mice when the research has ceased? Are they put down, or released into the ecosystem? “Don’t be silly,” I hear you say, “it’s only mice.” That’s the problem. The devaluation of life, despite the fact that it belongs to another, is what eventually leads to a disregard for other life and for human life in general. Would research scientists, in the quest for answers, eventually take to conducting research on beggars, those who sleep rough, or criminals? Would they experiment on orphans or unwanted babies?

The second ethical issue, when it comes to genetics, is whether genetic experimentation furthers good or promotes misuse. The answer, I suppose, is that the knowledge empowers, but its use cannot be fully governed. The knowledge that a genetic mutation can be edited out is good news, perhaps, because it means we could remove disabilities or life-threatening diseases at the onset. But this, on the other hand, may promote the rise of designer babies, where parents genetically select features such as blue eyes for their unborn child to enhance their features from birth, and this would promote misuse in the medical community.

Would the use of what is probably best termed genetic surgery become more prominent in the future? One can only suppose so. Once procedures become more widespread, it is reasonable to conclude that more such surgeons will become available, catering for the rich and famous. It may even be possible to delay the ageing process by genetic surgery, perhaps by removing the gene that causes skin to age, instead of using Botox and other external surgical procedures.

Would such genetic surgery ever be available on the NHS? For example, if the cancer gene were identified and could be genetically snipped out, would patients request this instead of tablets and other external surgical procedures? One way of looking at it is that the NHS is so cash-strapped that under QALY rules, where the cost of a procedure is weighed against the number of quality life years it adds, genetic surgery would be limited to the more serious illnesses, and certainly not those lower down the rung. But for younger individuals suffering from serious illnesses, such as depression, a lifetime’s cost of antidepressants, antipsychotics or other medication may far outweigh the cost of a one-off surgical procedure. If you could pinpoint a gene that causes a specific pain response, you might alter it to the point where you no longer need aspirin, too much of which causes bleeds. And if you could genetically locate what causes dementia in another person, would you not be considered unethical if you let the gene remain, thereby denying them the chance to live a quality life in their latter years?
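The QALY logic described above can be sketched as a simple calculation. Every figure below is hypothetical, chosen only to show the shape of the comparison; only the threshold range reflects the commonly cited NICE benchmark of roughly £20,000–£30,000 per QALY.

```python
# Illustrative QALY comparison: one-off procedure vs lifelong medication.
# All costs and QALY gains here are made-up numbers for illustration only.

def cost_per_qaly(total_cost, qalys_gained):
    """Cost-effectiveness ratio: pounds spent per quality-adjusted life year."""
    return total_cost / qalys_gained

# Hypothetical one-off genetic procedure: £40,000, adding 10 quality life years.
surgery = cost_per_qaly(40_000, 10)        # £4,000 per QALY

# Hypothetical lifelong medication: £1,200 a year for 40 years, adding 8 QALYs.
medication = cost_per_qaly(1_200 * 40, 8)  # £6,000 per QALY

# NICE has historically used a threshold of roughly £20,000-£30,000 per QALY;
# on these invented figures, both options would clear it, and the one-off
# procedure would be the more cost-effective of the two.
threshold = 20_000
print(surgery, medication, surgery < medication < threshold)
```

The point of the sketch is only that a large one-off cost can still win under QALY rules once it is spread over the quality life years it buys.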

Genetic editing may be a new technique for the moment, but if there is sufficient investment in infrastructure and the corpus of genetic surgery knowledge widens, don’t be surprised if we start seeing more of it in the next century. The cost of genetic editing may be outweighed by the cost of lifelong medication and its side effects, and it may prove to be not just more sustainable for the environment but more agreeable to the limited NHS budget.

Most of us won’t be around by then, of course. That is unless we’ve managed to remove the sickness and death genes.

Ethically spending a million pounds on useful research

Does offering financial incentives encourage mothers of newborns to breastfeed? While this may seem far-fetched, a study was actually implemented in parts of England to see if this would be the case.

More than 10,000 mothers across regions such as South Yorkshire, Derbyshire and north Nottinghamshire took part in the trial, in which mothers were given a hundred and twenty pounds if they breastfed their babies, and a further eighty pounds if they continued up to the point the babies were six months old. That is to say, mothers received two hundred pounds if their babies were breastfed up to the age of six months.

But why was this implemented in the first place? One reason was to see if financial incentives would help raise the rate of breastfeeding in the UK. In some parts of the UK, only one in eight babies is breastfed past eight weeks. Stopping breastfeeding early is linked to health problems later in the babies’ lives, and the study aimed to see if it would be possible to save a reported seventeen million pounds in annual hospital admissions and GP visits.

How were these women chosen? They were picked from areas reported to be low-income ones. There was a suggestion that in low-income areas mothers feel obliged to return to work quickly, making breastfeeding inconvenient, and that this is one reason mothers stop.

The financial incentive did result in a rise of six percentage points, from 32% to 38%. This meant that over six hundred more mothers out of the ten thousand breastfed their babies for up to six months rather than stopping at the eight-week mark.
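As a quick sanity check on the figures quoted above (a sketch, not part of the study itself), a six-percentage-point rise on 10,000 participants does indeed work out at six hundred extra mothers:

```python
# Check the trial arithmetic: 32% -> 38% on 10,000 participants.
participants = 10_000
at_baseline = participants * 32 // 100     # mothers breastfeeding to six months without vouchers
with_incentive = participants * 38 // 100  # mothers breastfeeding to six months with the incentive

extra_mothers = with_incentive - at_baseline
print(extra_mothers)  # 600
```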

Should we get excited about these results? Caution is to be exercised.

As a few leading academics noted, there was no way to verify a reported increase. The mothers’ word was taken at face value, but there was no way to confirm that a prolonged breastfeeding period actually took place. It is quite possible that some of these six hundred mothers merely reported they had breastfed for longer without actually doing so. If you lived in an income-deprived area and were offered two hundred pounds of shopping at a time when you needed it, without having to do much apart from saying “Yes, I breastfed”, wouldn’t you take the easy money?

It was suggested that if the results were trustworthy, in other words if mothers breastfed as they said they had, the scheme would help normalise breastfeeding in regions where it might cause embarrassment to the mother. Why might breastfeeding cause embarrassment? In some social situations it might be awkward to reveal normally covered parts of the body in public.

How much did the scheme cost? If we assume that 38% of the 10,000 mothers breastfed and claimed these financial vouchers, that is around 3,800 mothers each claiming two hundred pounds, a cost approaching eight hundred thousand pounds.
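Spelling that estimate out (using the article’s own figures, not numbers from the study report):

```python
# Rough cost of the voucher scheme on the figures quoted in this piece.
participants = 10_000
claim_rate = 38       # per cent of mothers reported as breastfeeding to six months
voucher_value = 200   # pounds: £120 up front plus £80 at the six-month point

claimants = participants * claim_rate // 100  # 3,800 mothers
total_cost = claimants * voucher_value
print(claimants, total_cost)  # 3800 760000
```

Strictly, 3,800 claims at £200 comes to £760,000, which the article rounds up to "eight hundred thousand pounds".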

Wow. Eight hundred thousand pounds of free shopping for an outcome that cannot be indisputably proven. Where does all the money come from?

The Medical Research Council was funded to the tune of up to seven hundred and fifty-five million pounds in 2016/17, of which nearly half was provided as grants to researchers. While that may sound like a lot of money, surely there should be more accountability in how it is used. Spending nearly a million pounds of it on a trial whose results cannot be verified is not a good use of money.

But perhaps the babies’ height, weight and other factors pertaining to breastfeeding could have been measured? For example, since we know that breastfeeding has benefits in certain areas, such as growth, the babies in the study could have been measured against babies who had not been breastfed, to see if there was any positive gain that could be correlated with breastfeeding over the six-month period.

Imagine if this had been a study about literacy. Imagine that mothers who read two stories to their child up to the age of four years would receive two hundred pounds. Surely, at the end of the period, the researchers would not merely go to the mothers and say “Did you read to your child? Yes? Here’s two hundred pounds.” They would try to assess the child, perhaps by means of a literacy test of some form, to see if any reading had actually taken place.

Otherwise it is just money down the drain for results which cannot be proven and cannot be relied on. In that case, what is the purpose of spending money on hearsay?

Did giving eight hundred thousand pounds encourage mothers in income-deprived areas to breastfeed for longer periods? Who knows? The only thing we can be sure of is that eight hundred thousand pounds made them say they did it.

Is there any truth about the benefits of Classical music?

Is there any truth to the commonly accepted notion that listening to classical music improves mental capacity? Somehow it has become accepted in modern society that classical musicians have larger frontal cortices, better mental reasoning powers and perhaps higher intelligence quotients. Over the last two decades or so this idea has fuelled a rise in the number of pregnant mothers listening to classical music – whether or not they like it – and parents enrolling their children in music classes. The music of Mozart, in particular, has enjoyed a resurgence, as its classical form is deemed more logical and organised than the music of other periods, supposedly triggering patterns of organisation in the brains of its listeners.

How did this idea about Classical music come about? In the 1990s scientists conducted a series of experiments in which one group of students was played one of Mozart’s piano sonatas before a spatial reasoning test, while another group sat in silence. The group that heard the music beforehand performed better on the task than the control group. But the effect was temporary and lasted only fifteen minutes, meaning that after the fifteen-minute mark the disparities between the results were minimal and statistically insignificant. The researchers also found that while the music primed individuals particularly for mathematical tasks, after an hour of listening to Classical music the effect on the brain was lost.

That piece of research was pounced upon by the media and seemingly perpetuated to promote the listening of Classical music. One governor of the state of Georgia even decreed that newborn babies be given a CD of Mozart’s works upon leaving hospital. The Mozart Effect, to give it its common name, was written about in newspapers and magazines, and this spurred sales of Mozart recordings as well as the trend of mothers playing such music to their children in and out of the womb.

The most important question we need to ask is whether there is any truth in such research, and whether it can be corroborated.

We know that some forms of music have a soothing, calming effect on individuals. Playing the music to the students may have calmed them so that they were not nervous, allowing them to perform better on the task. However, relaxation need not take the form of Classical music. Any activity that promotes calm before a task, whether reading a light magazine, playing computer games or talking with a friend, could be said to have the same effect as the classical music that was played.

What if the students in the group had read a joke book or comic beforehand, been less worried about the test, and scored better? It might have prompted a deluge of articles claiming “Reading Archie (or The Beano – insert your own title here) improves your IQ”.

Or if the students had been offered a protein drink beforehand, it is not inconceivable that someone would have latched on to that piece of research and declared that “Protein drinks are not just good for your body, but for your brain too”.

Mozart’s music has been said to embody the elements of classical music as we know it: organised formal structures, chords and harmonies moving through related keys, contrasting tunes, and contrasts in volume. But the music of other composers has such features too. Imagine if the composer Josef Haydn had been the lucky beneficiary of the experiment and his music had been played instead. The sales of his music catalogue would have gone through the roof!

Subsequent studies found that listening to music of almost any form produced improvements, and that the genre of music – whether rock or Classical – was irrelevant. But studies today still quote Mozart.

Is it ethical for the media to promote unsubstantiated research by reporting it without closer scrutiny? As we have seen in previous blog posts, the media reports on things without necessarily scrutinising the evidence, entrusts so-called experts to corroborate it, and fills column inches and air time with modal auxiliary verbs. Huh? In simple terms, it means that if there is a sniff of a link between A and B, the media reports that “A could cause B”. Never mind whether it does or not; there is always the disclaimer of the word “could”.

In this instance, students performed better on a spatial reasoning task after listening to Mozart; hence the headline “Mozart could improve mental powers”. Diluted over several retellings, you get “According to XXX newspaper, Mozart improves brain power” before arriving at “Mozart improves brain power”. Unfortunately, the headline is then pounced on by anyone who stands to profit from espousing this theme.

Who would profit from this? The Classical music world – performers, writers, musicians – can use this “research” to entice people into taking up lessons and buying CDs and magazines. Read any music teacher’s website and you may find them espousing the benefits of learning music; it is rare to find one that advises that it takes a lot of effort.

The media will profit from such “research” because it means there is an untapped well of news to report and bleed dry in the quest to fill column inches and air time. News exclusives will be brought out, and so-called experts will profit from appearing on news programmes, either monetarily or in the form of public exposure.

One must question the ethics of inaccurate reporting. Unfortunately, unsubstantiated research leads to ever more diluted misreporting, which can then form the basis of new research – research that uses these claims as the groundwork for investigation.

It is scary to think that all the medical research that has been done into the effects of music on health could be biased because of the so-called effect of classical music. Could musical activities such as learning the piano help reduce Parkinson’s disease? Could listening to the music of Beethoven reduce the incidence of Alzheimer’s disease? Could it all be wrong – have we all been sent down the wrong tunnel by an avalanche of hyped reporting?

It may be fair to say the human impulse is to buy first and consider later, because we are prone to regret. If we miss an opportunity to improve the lives and abilities of our children, we will kick ourselves forever with guilt.

So if you are still not convinced either way about whether classical music – either in the listening or the practice – really does have any effect, you could at least mitigate your guilt by exposing your child to piano music, for example music that has predictable patterns in the left hand. Listening to structurally organised music such as that of the Baroque period may be useful, but it is also good to listen to Romantic music, because its greater range of expression arguably develops a child with more emotional subtlety and intelligence.

You may find that, ultimately, any truth in the research about Classical music and its mental benefits is not due to blind passive listening, sitting there while the music goes on around your children. It is the child’s inner drive to mentally organise the sounds that are heard, the attempt to organise background sounds, that really triggers the mental activity in the brain. It is the practised ability of the inner mind to organise musical sounds that causes better performance in related mental tasks.

The quest for fitness may be detrimental to your long term mental state

We are often told how we should aim to have, and maintain, a healthy lifestyle. After all, being physically fit allows your body to function both in physical and mental aspects. Healthy body, healthy mind, right?

The only difficulty, if you can call it that, with exercise is that the first thing we would normally consider is running, but running is not for everyone. Moving forward for a certain distance or time has little meaning for some people, especially children.

The thing about running is that it has to have some appreciable meaning, so unless you derive some inner joy from measuring your progress using statistics, it is unlikely to hold your interest for the long term. A better form of exercise is through group sports, where the mental boredom of tracking fitness levels is negated in favour of the social dynamic.

Common group sports such as football have a large following in England. The football season, for example, lasts from August to May and provides a welcome distraction during the cold winter months. Football is also a simple game that can be improvised using other materials and played on all surfaces. No goalposts? Use bags or some other markers. No football? Use a tennis ball. It is often interesting to see children turn up at a field, establish the boundaries of play using trees and create goalposts using caps or other loose materials; these are often sufficient for the game, at least until there is an argument about whether the “ball” hit the post or went in the goal after it flies over a set of keys intended to represent the goalpost.

There is increasing concern about the link between dementia and football. The repeated pounding of the ball against the soft surface of the brain when the ball is headed can, over time, destroy cells and impair cell function. This is of particular concern in the case of children, whose brains and bodies are still developing. The issue has attracted significant interest since several members of England’s 1966 World Cup winning squad were found to have developed dementia in their later years. Some of them cannot even remember being there in 1966!

It is not just the impact of ball on head that is concerning, but also when the head is moved through a range of motion too quickly. Even though there is no external impact on the head, there is internal damage as the brain hits the sides of the skull that is supposed to protect it.

It is not just football we have to be concerned about. There is plenty of head and neck impact in rugby and American football. In American football, offensive and defensive linemen, who every forty seconds start a play by ramming into the player on the opposite side of the line, suffer repeated head injuries, and the list of dementia sufferers among them is growing continually. Some players have even sued the NFL for injuries suffered during the game.

Will the rules of football change so that heading the ball is banned? Don’t bet on it. That would change the fabric of the game so much as to ruin it. When the ball is swung in from a corner, what would you do if you couldn’t head it? The game will not change, but don’t rule out a consortium of players filing lawsuits in the future for work-related injuries. Perhaps, in the pursuit of fitness, it may be wiser to choose less impactful activities for the sake of long-term health.

Why mental health problems will never go away

Many people will experience mental health difficulties at some point in their lives. As people go through life the demands on them increase, and over a prolonged period these can cause difficulty and ill health. These problems can manifest themselves both in mental and physical ways.

What kind of demands do people experience? One of these can be work-related. People may experience the stresses of looking for work, having to work in jobs which do not test their skills, or being involved in occupations which require skills that are seemingly difficult to develop. Another common theme among adults is having to work in a job which increasingly demands more of them but does not remunerate them accordingly. In other words, they have to work more for less and accept the gradual lowering of working conditions. They are unable to change jobs because they have already invested so many working years, and the demands of a mortgage to pay off and a young family to provide for mean they cannot start on a lower rung in a new occupation. Over a prolonged period, this can cause severe unhappiness.

Is it surprising that suicide affects men in their thirties and forties? This is a period when work demands more of a man, the mortgage needs paying, and the family demands more of his time and energy. It is unsurprising that, having spent long periods in this sort of daily struggle, some men develop mental health problems which lead them to attempt suicide. But mental ill health does not just affect men. Among the issues some women have to deal with are the struggles of bringing up children, the work-life balance, the unfulfilled feeling of not utilising their skills, and isolation.

One of the ways ill health develops mentally is when people are pushed too hard for too long. Put under these kinds of demands, the body shuts down as a self-preservation measure. But the demands on the person don’t just go away. You may want a break from work, but this may not be possible or practical. In fact, the lack of an escape when you are aware you need one may be an even greater trigger of mental illness, because it increases the feeling of being trapped.

It is little wonder that when people go through periods of mental ill health, an enforced period of short-term rest allows them to reset their bearings and continue at work, or return to work with some level of appropriate support. But this is only temporary.

With mental ill health problems, lifestyle adjustments need to be made for sufficient recovery.

Under the Equality Act (2010), your employer has a legal duty to make “reasonable adjustments” to your work.

Mental ill health sufferers could ask about working flexibly, job sharing, or access to a quiet room, a government report suggests.

In practice, however, these adjustments cost the employer money, and unless the employee is a valued one whom the employer would like to keep, they will often be gradually phased out of the organisation.

In fact, when an employee attains a certain level of experience within an organisation, employers often ask more of them, knowing these employees are locked into their jobs and must accept the extra demands grudgingly or risk losing their jobs, which they cannot afford to do if they have dependents and financial commitments.
And the irony of it? The mental ill health sufferer already knows this. Which is why they do not speak out for help in the first place.

If these employees complain, employers simply replace them with younger employees, who cost less and are willing to take on more responsibilities just to have a job. Any responsibilities the redundant employee had are simply divided among the remaining colleagues, who in turn are asked to take on more. They are next in the mental ill health queue.

And what if you are self-employed, and have to work to support yourself and your dependents? The day-to-day demands are huge and do not seem to go away.

You can see why mental health is perceived as a ticking time bomb. Organisations are not going to change to accommodate their employees because of the cost; instead they keep pressing them to increase productivity without extra pay, knowing they cannot say no, and when all the life and juice has been squeezed out of them, they can be discarded and replaced with the next dispensable employee.

A ticking time bomb.

The financial considerations of investing in medicine and medical research

BBC News reports that a drug reducing the risk of HIV infection could result in cost savings of over £1bn over 80 years. Pre-exposure prophylaxis, or Prep, would reduce infections and hence lower long-term treatment costs for patients.

The catch? There is one. It’s the long term.

The cost of the treatment and prevention is such that providing it for the first twenty years, bundling together the cost of medical research and the production of the medicine, would result in a financial loss; parity would only be reached after roughly thirty to forty years, a period that is hard to pin down because it depends on what the drug will cost in the future.

Prep combines two anti-HIV drugs, emtricitabine and tenofovir. The medical trials behind it have concluded that it protects more than four in five men who have unprotected sex with men from HIV infection; the exact figure is close to 86%.

Prep can be used either on a daily basis, or on what has been termed a sexual event basis – using it for two days before, during and after periods of unprotected sex.

The research model analysed the potential impact of Prep and found that it could reduce infection rates by over a quarter. The cost of the treatment itself, compared with the cost of treating infection, would result in a saving of over one billion pounds over 80 years.

However, it does raise a few ethical questions. If the National Health Service is aiming to be a sustainable one, and one of the aims of sustainability is to empower citizens to take responsibility for their own health, shouldn't it be thinking less about how to balance the books and more about spending on education for prevention in the first place? The cost of providing Prep on the NHS would be £19.6 billion over 80 years, while the estimated savings from treatment would be £20.6 billion over the same period. Educating people not to have unprotected sex with those at risk of HIV would arguably result in a higher saving over a shorter period. Perhaps the NHS should consider ways of reducing cost more significantly, rather than latching on immediately to a cheaper prevention drug. If consumer behaviour is not going to change, symptoms will still surface, and the provision of Prep on the NHS may only encourage less self-regulation and awareness.
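The headline figures above can be put side by side in a few lines. This is illustrative arithmetic only, using the 80-year cost and savings estimates quoted in the text; the figures themselves come from the reported modelling, not from any calculation of my own.

```python
# Headline figures quoted above (80-year NHS modelling, as reported):
HORIZON_YEARS = 80
PREP_COST_BN = 19.6          # estimated cost of providing Prep, in £bn
TREATMENT_SAVINGS_BN = 20.6  # estimated savings on HIV treatment, in £bn

# Net position over the full horizon
net_saving_bn = TREATMENT_SAVINGS_BN - PREP_COST_BN

print(f"Net saving over {HORIZON_YEARS} years: £{net_saving_bn:.1f}bn")
```

The net figure of about £1bn matches the headline saving, but it also makes the article's point visible: the saving is barely 5% of the £19.6bn outlay, and it only materialises across the whole 80-year horizon, which is why break-even is decades away.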