Where dementia treatment meets your NEETs

A recent study has suggested that just ten minutes of social interaction is enough to mitigate the loss of quality of life in dementia sufferers.

A survey of care homes in south London, north London and Buckinghamshire found that dementia sufferers who had prolonged chats with care workers – the average amount of interaction, by comparison, is estimated to be as little as two minutes a day – fared better on measures of neuropsychiatric symptoms and agitation. The chats centred on areas of interest such as family, or the social interaction extended to activities like sport.

Dementia sufferers in care homes were divided into two groups: the first received conventional treatment, while the second received an hour of personal interaction over the week. Those in the second group showed more pronounced benefits.

The difficulty with social interaction in many care homes is that the activities are limited to ones such as bingo, where people are together but not really interacting, or that the interaction happens on a one-to-many level, leaving many sufferers disengaged, bored and in many respects more withdrawn. Interaction – if it can be called that – is very passive, measured by presence rather than participation. For example, sitting in a bingo hall doing “mental” activities such as bingo, or sitting with others to watch the soaps, occasionally piping up to say “What’s gawin on?”, is unlikely to do much for one’s mental faculties.

Dr Doug Brown, director of research at the Alzheimer’s Society, said: “This study shows that training to provide this type of individualised care, activities and social interactions can have a significant impact on the wellbeing of people living with dementia in care homes.

“It also shows that this kind of effective care can reduce costs, which the stretched social care system desperately needs.”

The problem is that while this interaction may be perceived as cost-saving because it relies less on medication, adding paid “conversers” to carers already on the minimum wage is actually more expensive. But it is a method that seems to work.

The unfortunate state of healthcare is that it is based not on what works but on what is cheapest. The baseline is not the quality of care; because better care would exceed a threshold the NHS cannot afford, cost takes priority.

Perhaps an effective method would be for NEETs – young people not in education, employment or training – to do such work. It would give dementia sufferers someone to talk to, the NEETs could learn something from observing life experience, and it would keep the government happy because the unemployment figures would go down. And with recent mental health studies suggesting that only one in five young people have someone to talk to when they are down, is it not conceivable that paying young people on the verge of depression through lack of employment a modest wage to talk with someone else might be an intangible way of reducing their likelihood of depression?

Getting the young unemployed to be befrienders in care homes – is that worth a thought?

Why health articles in newspapers should be retired

What is it that people look forward to? Most want time to pursue their interests and do the things they love. Some have managed to combine the two through the traditional interest-led approach: doing what they love, starting a blog, gaining a readership, then selling advertising space, affiliate marketing and the other things associated with making money from a website. For others, the pursuit of what they like is compromised by the need to make a living, and so it is shelved and put off until retirement.

For most people, retirement would be when they would be able to have the time and money to indulge in things they put off earlier. Some people have combined the starting of a blog and retirement, and made a living by blogging (and gaining a readership) about how they have or intend to retire early.

Retirement. Out of the rat race. All the time in the world. For most people, retirement is the time to look forward to.

A recent study, however, suggests that retirement is not all that wonderful. Despite its being seen as the time of life when financial freedom has been achieved and time is flexible, it has been suggested that mental decline sets in with retirement.

The Daily Telegraph reported, citing scientists, that retirement causes brain function to decline rapidly. It further cautioned that workers anticipating leisurely post-work years may need to reconsider their options because of this decline. Would you choose to stop work if it meant your mental faculties would suffer – all the free time in the world, but not the mental acuity?

Retired civil servants were found to have a decline in their verbal memory function – the ability to recall spoken information such as words and names. Verbal memory function deteriorated 38% faster after an individual had retired than before. Other areas of cognitive function, however, such as the ability to think and formulate patterns, were unaffected.

Even though the decline in verbal memory function is meaningful, it must be made clear that the study says nothing about dementia or the likelihood of developing it. No links with dementia were drawn. Just because someone retires does not mean they are more likely to develop dementia.

The study involved over 3,000 adults, who were asked to recall words from a list of twenty after two minutes; the percentages were derived from there. The small sample size – not of the adults, but of the word list – means the percentage decline in post-retirement adults may have been exaggerated.

Look at it mathematically. From a list of twenty words, a non-retiree may recall ten. A retiree may recall six. That difference of four words out of ten recalled is a percentage decline of 40%.
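
To make the point concrete, here is a minimal sketch in Python (purely illustrative – the study published nothing of the sort) showing how coarse a twenty-word scale is:

    # Each word on a 20-word list is worth 5 percentage points of raw score,
    # so small absolute differences produce dramatic-sounding relative declines.
    def relative_decline(words_before: int, words_after: int) -> float:
        """Percentage decline in recall, relative to the earlier score."""
        return (words_before - words_after) / words_before * 100

    print(relative_decline(10, 6))  # 40.0 - the worked example above
    print(relative_decline(10, 9))  # 10.0 - one forgotten word already reads as "10%"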

Ask yourself – if you were given a list of twenty words, how many would you remember?

It is not surprising if retirees exhibit lower verbal memory recall, because those skills are not really exercised post-retirement. What you don’t use, you lose. We should not be worried about the decline: it is not a permanent mental state but a reversible one, and in any case the figure is bloated by the nature of the test. If a non-retiree remembers ten words and a retiree makes one mistake and remembers nine, that is already promoted as a 10% reduction in mental ability.

Furthermore, the decline is not necessarily due to the lack of work. There are many other contributing factors, such as diet, alcohol and lifestyle. Retirement is not necessarily the impetus behind mental decline; other factors may confound the analyses.

The research did not involve people who had retired early – hedge fund managers, for example, who might have retired in their forties. But it is hard to believe that someone in their forties would lose 38% of their verbal memory recall.

Would a loss of 38% of verbal memory have an impact on quality of life? It is hard to tell whether there is evidence to support this. But the results point to a simple fact: if you want to get better at verbal memory, practise your verbal memory skills. If you want to get better at anything, practise doing it.

Was this piece of news yet another attempt by the mainstream media to clog paper space with arguably useless information? You decide.

The bigger issues that come with preventing hearing loss

Is there cause for optimism when it comes to preventing hearing loss? Certainly the latest research into this suggests that if positive effects experienced by mice could be transferred to humans and maintained for the long term, then hereditary hearing loss could be a thing of the past.

It has often been assumed that hearing loss is simply down to old age. The commonly held view is that as people grow older, their muscles and bodily functions deteriorate to the point where function is impaired and eventually lost. But hearing loss is not necessarily down to age, although constant exposure to loud noise over time does cause reduced sensitivity to aural stimuli. Over half of hearing loss cases are actually due to genetic mutations inherited from parents.

How do we hear? Hair cells in the part of the inner ear called the cochlea respond to vibrations, and these signals are sent to the brain to interpret. The brain processes the signals in terms of frequency, duration and timbre in order to translate them into sounds we know.

For example, if we hear a high-frequency sound of short duration that is shrill, our brain interprets these characteristics, runs through a database of audio sounds – an audio library in the brain – and may suggest that it has come from a whistle and may signify a call for attention.
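
As a loose analogy only – nothing in the research describes the brain this way, and the names below are invented for illustration – the lookup described above might be sketched in Python as:

    # Hypothetical "audio library": match a sound's characteristics against
    # known sources, as the text imagines the brain doing.
    sound_library = {
        ("high", "short", "shrill"): "whistle: a call for attention",
        ("low", "long", "dull"): "distant traffic",
    }

    def interpret(frequency: str, duration: str, timbre: str) -> str:
        # Fall back to "unknown sound" when no characteristics match.
        return sound_library.get((frequency, duration, timbre), "unknown sound")

    print(interpret("high", "short", "shrill"))  # whistle: a call for attention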

What happens when you carry a gene for hearing loss? The hairs in the inner ear do not grow back, and consequently sound vibrations from external stimuli are not passed on to the brain.

With progressive hearing loss, too, the characteristics of sound become distorted. We may hear sounds differently from how they are produced, and thereby misinterpret their meaning. Sounds of higher and lower frequency may become less audible too.

How does that cause a problem? Imagine an alarm. It is set on a high frequency so that it attracts attention. If your ability to hear high frequencies is gradually dulled then you may not be able to detect the sound of an alarm going off.

As hearing gradually deteriorates, the timbre of a sound changes. Sharper sounds become duller, and in the case of the alarm, you may hear it, but it may sound more muted and the brain may not be able to recognise that it is an alarm being heard.

Another problem with hearing loss is the loss of perception of volume. You may be crossing the road when a car sounds its horn because you have strayed into its path. But if you cannot perceive how loud the horn is, you may assume it comes from a car far away and not realise you are in danger.

The loss of these inner-ear hairs is a cause of deafness in humans, particularly those for whom hearing loss is genetic: sufferers of hereditary hearing loss lose the hairs of the inner ear, with the difficulties mentioned above as the result. But there is hope. In a research experiment, scientists successfully delayed the loss of the inner-ear hairs in mice using a technique that edited away the genetic mutation that causes the loss of the hairs in the cochlea.

Mice were bred with the faulty gene that causes hearing loss. Using a technology known as CRISPR, the faulty gene was replaced with a healthy, normal one. After about eight weeks, the hairs in the inner ears of the treated mice flourished compared with similar mice that had not been treated: the editing had removed the faulty gene responsible for hearing loss. The treated mice were assessed for responsiveness to stimuli and showed positive gains.

We can be optimistic about the results, but it is important to stress the need for caution.

Firstly, the research was conducted on mice, not humans. Experiments that have succeeded in animals have not necessarily had similar success when tried in humans.

Secondly, while the benefits in mice were seen within eight weeks, it may take longer in humans, if it works at all.

Thirdly, we should remember that the experiment worked for mice which had the genetic mutation that would eventually cause deafness. In other words, they had their hearing at birth but were susceptible to losing it. The technique prevented degeneration of hearing; it would not help mice deaf at birth to gain hearing they never had.

All research carries ethical issues, and this study was no different. The first is the recurring question of whether animals should ever be used for research. Should mice be bred for the purpose? Are all the mice used? Are they accounted for? Is there someone from Health and Safety going around with a clipboard counting the mice? And what happens to them when the research has ceased – are they put down, or released into the ecosystem? “Don’t be silly,” I hear you say, “it’s only mice.” That’s the problem. The devaluation of life, despite it belonging to another, is what eventually leads to a disregard for other life and for human life in general. Would research scientists, in the quest for answers, eventually take to conducting research on beggars, rough sleepers or criminals? Would they experiment on orphans or unwanted babies?

The second, when it comes to genetics, is whether genetic experimentation furthers good or promotes misuse. The answer, I suppose, is that the knowledge empowers, but its use cannot be governed. That genetic mutations can be edited away is good news, perhaps, because it means disabilities or life-threatening diseases could be removed at the outset. On the other hand, it may promote the rise of designer babies, where mothers genetically select features such as blue eyes to enhance their unborn child from birth – a clear invitation to misuse in the medical community.

Will what is probably best termed genetic surgery become more prominent in the future? One can only suppose so. Once procedures become more widespread, it is safe to conclude that more such surgeons will become available, catering for the rich and famous. It may become possible to delay the aging process by genetic surgery – removing the gene that causes skin to age, say, instead of using botox and other external surgical procedures.

Would such genetic surgery ever be available on the NHS? For example, if the cancer gene were identified and could be genetically snipped off, would patients request this instead of tablets and other external surgical processes? One way of looking at it is that the NHS is so cash-strapped that, under QALY rules, where the cost of a procedure is weighed against the number of quality life years it adds, genetic surgery would be limited to the more serious illnesses, and certainly not extended to those further down the rung. But for younger individuals suffering from serious illnesses such as depression, the one-off cost of a surgical procedure might be far outweighed by a lifetime’s cost of anti-depressants, anti-psychotics or antibiotics. If you could pinpoint a gene that causes a specific pain response, you might alter it to the point where you no longer need aspirin, too much of which causes bleeds. And if you could genetically locate what causes dementia in another person, would you not be considered unethical if you let the gene remain, denying them the chance to live a quality life in their latter years?

Genetic editing may be a new technique for the moment, but if there is sufficient investment in infrastructure and the corpus of genetic surgery information widens, don’t be surprised if we start seeing more of it in the next century. The cost of genetic editing may be outweighed by the cost of lifelong medication and its side effects, and may prove not just more sustainable for the environment but more agreeable to the limited NHS budget.

Most of us won’t be around by then, of course. That is unless we’ve managed to remove the sickness and death genes.

A smart person thought up the mental improvement products

The trail of human evolution is littered with gadgetry that has outlived its usefulness. We can add devices such as the fax machine, the Walkman, the MiniDisc player and the tape recorder to the list of machines which seemed clever at the time but have now become obsolete. Those of us of a certain age will remember newer additions such as the PocketPC, a palm-sized screen operated with a stylus that tapped out letters on screen, and the HP Jornada, a slightly bigger, tablet-sized keyboard and phone. And who could forget the Nintendo Brain Training programmes for the DS and the fitness programmes for the Wii?

Launched in 2005, Nintendo’s Brain Training programmes claimed to increase mental functioning. Nintendo’s premise was that the concentration required to solve a variety of puzzles – language, mathematical and reasoning – increased blood flow to the frontal cortex of the brain, which at the very least maintained brain functioning or helped improve it. After all, if the brain works like a muscle, exercising it by bombarding it with mental exercises would keep it active and healthy, right?

It is the idea of keeping the brain active that leads many to attempt their daily crossword or Sudoku. The latter in particular has seen a surge in interest over the past decade and is now a feature of newspaper back pages and magazines. There are even publications filled exclusively with Sudoku puzzles, and more complex versions where each traditional puzzle forms one square of a bigger three-by-three grid. If you thought doing a Sudoku puzzle was hard, imagine having to work on it in relation to eight others. It would be absolutely mind-boggling!

Is there any truth to the positive enhancements these objects or activities supposedly bring to human life? Nintendo’s claims for the Brain Training programmes were questioned by leading neuroscientists, who doubted the tenuous link between increased blood flow to the brain and the vaguely described positive effects on life. It is akin to a blanket statement that chess grandmasters or academics are the happiest people around. Unfortunately it is yet another case of a company creating a product and then engineering the science around it.

Manufacturers of beauty products do it all the time. Whether it is skin care or facial products being flogged, you will find an aspirational theme within the first five seconds of the advertisement (“Look beautiful! Stay young!”) which is then followed by a pseudo-scientific claim, preferably involving percentages (sounds more authoritative) and a small sample size (easier to corroborate, or disclaim, depending on the need).

“Live young forever. XX skin lotion is carefully formulated to retain your natural moisture, so you look and feel twenty years younger. 86% of 173 women noticed a change in skin density after using it for three months.”

There you have it. The secret of beauty product advertising.

Unfortunately, if there was any display of mental acuity, it was by Nintendo’s marketing team. In pitching a product to adults, with the retention and improvement of mental agility as the selling point, they convinced adults not only to buy what was essentially a toy, but to buy one for their children as well. The DS alone has since sold over 90 million units worldwide, and once you take into account the cost of games and the rest, you have to concede that someone at Nintendo had the smarts to produce a tidy little earner.

(For those who were more concerned with retaining their physical functioning, the Nintendo Wii Fit programmes performed that function and filled in the gap in that market.)

The improvement of mental functioning is always a good basis for marketing a product, and you can find a whole plethora of products huddling behind it. Multivitamins, activity puzzles, recreational activities involving multi-tasking – all supposedly give the brain a workout, but more importantly they tap into the human psyche’s fear of missing out and of losing mental function, making people buy not out of potential gain but out of fear of lost opportunity and potential regret.

The loss of mental function is seen most starkly in Alzheimer’s disease, for which there is currently no cure. With 30 million people worldwide suffering from it, this presents an endless river of opportunity for people researching the disease, as well as for people developing products to improve mental function in the hope of staving it off. As the Nintendo Brain Training developers realised, it is not so much whether these products work that makes people buy them – the evidence produced is biased, not independent – but the fear of missing out and retrospective guilt that compel the purchase. Buy first, examine the evidence later, is the apparent dogma.

Unfortunately we are at the stage of modern society where it is not just the product that needs scrutiny, but whether the scrutiny itself needs scrutiny for evidence of bias, either in the form of financial ties or expected research outcomes.

Mental improvement is an area that product developers – whether of vitamins, books or applications – will continually target, because human beings will always seek to improve mental prowess, in themselves and in their children, in the hope that somewhere down the line it offers an advantage or prevents the mental degeneration associated with aging. The compelling reason to buy lies somewhere at the meeting point of being seduced by the product’s aspirational ideals, the fear of missing out, and the assumption that the underlying evidence is empirical. The greatest mental sharpness has been displayed by the one who understood the sales psychology of mental improvement products and used it to his or her advantage.

Airbnb style recuperation for hospital patients

Would you welcome a stranger into your home? Would you set aside a spare room for them? Perhaps not. But what if you were paid to do so? This is what some hospital bosses are considering in order to relieve overcrowding in hospital wards: patients do their recuperating in private homes rather than in hospital. You offer a room if you have one available, and the hospital rents it from you for a patient. It is like an Airbnb for hospitals.

On the face of it, this seems like a good idea. Hospital overcrowding is lessened, homeowners get a bit of spare cash, the recuperating patient gets a bit of company … everyone’s happy. Patients staying out of hospital means the backlog of operations can be cleared more quickly, resulting in a better-streamlined NHS that benefits every citizen.

This idea is being piloted by the startup CareRooms. “Hosts”, who do not necessarily need previous experience in healthcare, could earn £50 a night, up to a maximum of £1,000 a month, putting up local residents who are awaiting discharge from hospital. The pilot will start with 30 patients, and the hope is that it will expand.

Age UK claims that patients are being “marooned” in hospitals, taking up beds, while 2.2 million days are lost annually to delayed transfers of care.

The specifics, however, do not seem to hold up to scrutiny. Who is responsible for the overall welfare of the patient? Once a patient is transferred to this “care” home, responsibility for medical care is devolved to someone with basic first-aid training.

Prospective hosts are also required to heat up three microwave meals a day and supply drinks. Unfortunately, the arrangement opens up issues of safeguarding, governance and the possible financial and emotional abuse of people at their most vulnerable.

The recuperating patients will “get access to a 24-hour call centre, tele-medical GP and promised GP consultation within four hours.”

The underlying question, though, is: would you want your loved ones to be put through this kind of care?

This is cost-cutting at its worst. The NHS is cutting costs, cutting ties and cutting responsibility for those supposedly under its care. It would be a sad day if this kind of devolved-responsibility plan were approved.

Beta blockers and their impact on heart attack sufferers

Recent research suggests that the prescription of beta blockers for heart attack patients may not have the benefit ascribed to them.

In the UK, the prescription of beta blockers is routine for patients who have had a heart attack. There are two categories of patients – those who have had a heart attack, and those who have had a heart attack with heart failure, the latter being the more severe. A heart attack involving heart failure is a complication in which the heart muscle has been damaged and proper function is compromised.

Beta blockers work by reducing the activity of the heart and lowering blood pressure. In essence, the pressure on the heart is lessened by reducing the demand on it.

Current guidelines recommend that the first group of patients are prescribed beta blockers, while for those in the second group, who have experienced heart failure, beta blockers are mandatory.

The research investigated the effect of beta blockers on the first group, for whom they are recommended but not compulsory. The findings suggested that 95% of patients in the first group did not experience a significantly longer lifespan: there was no statistical difference in death rates within a year large enough to attribute any positive impact to the beta blockers.

As the data involved tracking a very large sample size of 179,810 people, the results could be deemed to be fairly accurate.

So what are the ramifications of this research?

The first is that the vast majority of the first group of heart attack patients are being over-prescribed beta blockers. Beta blockers, while reducing the workload of the heart, can induce side effects such as drowsiness and fatigue as a result of lower blood pressure. Patients may be experiencing these burdens on their health unnecessarily.

The second issue is that over-prescription places an unnecessary burden on the NHS. Imagine a patient who has just had a heart operation. While he or she is recuperating in hospital, beta blockers are prescribed as part of the medication. Multiply that by over 100,000, and the result is an unnecessary annual cost to the NHS for drugs that are needless and have no impact.

Furthermore, the long-term use of drugs with no apparent benefit can hardly do the body any good.

The findings, however, say nothing about the impact of beta blockers on the second group of patients – those whose heart attack involved heart failure. Another outcome was the suggestion that treatment be more personalised, in order to locate and target the patients in the first group who would benefit from being prescribed beta blockers.

Beta-blockers are prescription-only medicines, commonly referred to as POMs, which means they cannot be obtained over the counter; they must be prescribed by a GP or pharmacist. They work by blocking the action of hormones such as adrenaline in order to reduce the activity of the heart.

Examples of commonly used beta-blockers include:

  • atenolol (Tenormin)
  • bisoprolol (Cardicor, Emcor)
  • carvedilol
  • metoprolol (Betaloc, Lopresor)
  • nebivolol (Nebilet)
  • propranolol (Inderal)

The generic name, which refers to the active ingredient, is given first; the brand name follows in parentheses.

There are many types of beta-blockers, and they may be used to treat conditions such as angina, heart failure, atrial fibrillation (irregular heartbeat), heart attack or high blood pressure. Those are the more common uses, but beta-blockers can also be used for migraine, an overactive thyroid (hyperthyroidism), anxiety, tremor or even glaucoma.

Beta-blockers, including beta-blocker eye drops, can interact with other medicines, altering the effects of one of them. Some of the more common medicines that interfere through interaction with beta-blockers include anti-arrhythmics (used to control irregular heartbeats), antihypertensives (medicines for lowering blood pressure), antipsychotics, and clonidine, which is commonly used to treat high blood pressure and migraine.

While the most common side-effects of beta-blockers are dizziness and tiredness, others can include blurred vision, cold hands and feet, and a slow heartbeat.

Less common side-effects may include sleep disturbance (insomnia), depression, impotence or loss of libido.

The majority of beta-blockers are taken once a day, with the exception of certain beta-blockers used during pregnancy and sotalol, which is administered two or three times a day. The NHS estimates the annual cost of sotalol per patient to be £77.09.

On the face of it, the results of the research are pretty straightforward. But are they almost too straightforward, warranting the question of why such research needed to be conducted in the first place? One cannot blame the cynics for questioning what outcome the research was meant to arrive at.

Let’s consider the matter in a different light. It is estimated that heart attack survivors have a higher risk of recurrent heart attacks or cardiac death: 10% of heart attack sufferers die within two years, and only 50% of initial survivors are still alive at ten years.

It is not unreasonable to surmise that those who survive an initial heart attack either die between the first and second year, or develop recurrent attacks that push them into the group for whom beta-blockers are compulsory.

Critics of the research point out that a fairer assessment of the effects of beta-blockers would have examined an extended period of two years rather than one. They also point out that the research should have focussed on how many heart attack sufferers without heart failure who did not use beta-blockers went on to develop recurrent heart attacks, or heart attacks involving heart failure, as this would be more indicative of the effectiveness of beta blockers.

So why did the findings choose to use the timeframe of a year?

The NHS makes baseline assessments of the cost-effectiveness of medicines and treatments on a scale of quality-adjusted life years, or QALYs. It weighs the cost of treatment against the number of years of significant benefit the patient gains from it. According to the NHS, a figure of £20,000 per QALY represents treatment that is value for money. In other words, if a treatment can extend and improve a patient’s life for a year, and costs under £20,000, it is worth it.

The NHS’s Regional Drug and Therapeutic Centre, based in Newcastle, gives the cost of beta blockers as between £10 and £512 annually, depending on the type of beta-blocker required. While this falls well within the QALY threshold of £20,000, the research finding that beta blockers have no significant impact on health within the first year allows the NHS to scrap funding for the treatment, because it supposedly offers no significant benefit. The research has focussed on a time period that cannot meaningfully examine the effectiveness of beta blockers.
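
To see how the QALY arithmetic plays out, here is a minimal sketch in Python (illustrative only – the figures are the ones quoted above, and this is not an official NHS calculator):

    # The quoted value-for-money ceiling: £20,000 per quality-adjusted life year.
    QALY_THRESHOLD_GBP = 20_000

    def is_value_for_money(treatment_cost_gbp: float, qalys_gained: float) -> bool:
        """True if the cost per QALY gained comes in under the quoted threshold."""
        return treatment_cost_gbp / qalys_gained <= QALY_THRESHOLD_GBP

    # Even the dearest beta blocker (£512 a year), credited with one full QALY,
    # passes comfortably.
    print(is_value_for_money(512, 1.0))    # True

    # But credit the drug with almost no benefit in its first year - as the
    # research findings imply - and the ratio blows up, so the treatment fails
    # the test however cheap it is.
    print(is_value_for_money(512, 0.001))  # False: £512,000 per QALY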

Cynics suggest that the research is merely an attempt to reframe the data regarding beta-blockers in order to minimise the cost of healthcare in an NHS which is lacking in resources.

Medical research, unfortunately, is often subservient to economics, and often appears to be carried out to arrive at a pre-planned conclusion. Was it not only a few years ago, when the economic crisis was looming and the government was looking to raise tax on alcohol, that we were told a glass of red wine a day had health benefits? Yet when the NHS struggled years later, overburdened by drunken citizens dialling the emergency services, the evidence peddled about red wine was to the contrary.

Risks of stomach bleeding increased by aspirin in over-75s

A recent study by researchers in Oxford in June this year suggested that taking aspirin daily may have led to higher incidences of bleeding in the over-75 age group.

BBC News reported that those in the over-75 age group taking daily aspirin as a precautionary measure after a stroke or heart attack were at a higher risk of stomach bleeds than had previously been thought.

The Oxford Vascular Study was carried out by researchers from the University of Oxford and funded by the Wellcome Trust, the Wolfson Foundation, the British Heart Foundation, the Dunhill Medical Trust, the National Institute for Health Research (NIHR) and the NIHR Oxford Biomedical Research Centre.

The study aimed to assess the bleeding risk for people taking aspirin for the secondary prevention of cardiovascular events. In other words, the people in the research had already had a stroke or heart attack and were taking aspirin to try to prevent another. The follow-up period was up to ten years, with the intention of seeing how many of them were admitted to hospital with bleeds.

Aspirin performs the function of a blood thinner and hence it is often given to people thought to be at risk from blood clots, which, if left unchecked, could trigger a heart attack or stroke. Unfortunately, a potential downside is that it can trigger bleeding in the digestive system or brain.

The researchers found that for under-75s taking aspirin, the annual risk of bleeding is around 1%. For those over 75, this risk roughly triples. More disturbing still, the bleeds were particularly associated with the stomach and upper digestive tract. A total of 405 bleeding events required medical attention during the follow-up period, and of these, 187 were major bleeds; 40% of bleeds were in the upper digestive tract. The risk of disabling or fatal bleeding of the upper digestive tract was ten times higher for over-75s than for younger adults.
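
Put in absolute terms, using only the figures quoted above (a minimal sketch, not data taken from the paper itself):

    # Converting the quoted relative risks into rough absolute annual risks.
    BASELINE_ANNUAL_BLEED_RISK = 0.01  # ~1% a year for under-75s on aspirin
    OVER_75_MULTIPLIER = 3             # the risk roughly triples over 75

    over_75_risk = BASELINE_ANNUAL_BLEED_RISK * OVER_75_MULTIPLIER
    print(f"{over_75_risk:.0%} a year")  # 3% a year

    # Over a ten-year follow-up, assuming (for illustration) that the
    # annual risk stays constant:
    ten_year_risk = 1 - (1 - over_75_risk) ** 10
    print(f"{ten_year_risk:.0%} over ten years")  # roughly 26%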

The findings of the research suggested that prescribing proton pump inhibitors (PPIs) could significantly reduce these risks in older adults. PPIs are drugs which help defend the lining of the stomach, minimising the risk of a bleed.

Currently, the prescription of PPIs together with aspirin is not routine for over-75s. While PPIs can considerably reduce the risk of digestive bleeding for regular aspirin users, there are concerns over their side effects, which can include nausea and constipation.

The findings, however, apply only to people taking regular aspirin for the secondary prevention of cardiovascular events. They cannot be directly applied to people taking it for primary prevention (that is, people with risk factors for cardiovascular disease who have not yet had an event such as a stroke or heart attack), or to people using aspirin for brief periods, for example to treat pain or fever.

But it must be stressed that no one should come off the pills abruptly, or without consulting their doctor, as doing so would create an immediate risk of heart attack.

Around 40 per cent of pensioners in the UK take aspirin daily. This figure is roughly evenly split between those who have already suffered a heart attack or stroke and those taking it as a precaution.

Prof Peter Rothwell, the lead author, from the University of Oxford, said aspirin was causing around 20,000 bleeds annually – and at least 3,000 deaths.

Prof Rothwell said: “We know clearly from trials and other research that aspirin is effective at preventing recurrent heart attacks and strokes. Twenty per cent of potential recurrent heart attacks and strokes are prevented by aspirin.

“Nevertheless, there are also about 3,000 excess bleeding deaths attributable to blood-thinners like aspirin across all age-groups,” he said, warning that the risk of serious bleeding is much higher among the over 75s.

“You would probably be advised to stop it in your late 60s or around 70 because at that point the risk of bleeding does start to take off – the risks may well outweigh the benefits,” he said.

Dr Tim Chico, consultant cardiologist at the University of Sheffield, said the risks of aspirin were often underestimated. “Although bleeding is a well-recognised side effect of aspirin, this drug is still seen by many people as harmless, perhaps because of how easily it can be bought over the counter,” he said. “Prescription of any drug is a balance between the benefits of the medication against its risks, and aspirin is no different.”