Balancing workplace success, aspiration and recognition

Research suggests that one of the stronger indicators of poor mental health is a low sense of self-worth. The average individual, according to BBC News, frequently evaluates himself or herself in comparison with others in order to arrive at some sort of assessment of worth. The New York Times bestseller Everybody Lies by Seth Stephens-Davidowitz claims that this is a kind of social data analysis: using a doppelganger, or an imagined self, we conduct a self-evaluation to establish a perceived worth.

If we find ourselves in an environment where everyone seems to be better off than we are – dressed in nicer clothes, driving nicer cars, giving the impression of the success we would like for ourselves – then the effect depends on the gulf between them and us. If it can be bridged, we are motivated to work hard and aspire towards that success, perhaps by aping the means and methods by which our models have achieved it. If the gulf is too great, we get discouraged, and the continual trigger of this disparity leaves us feeling slightly depressed and results in poor mental health.

In a workplace situation, envy and depression can develop when we evaluate ourselves against our co-workers. Some of it can be subconscious, some of it deliberate. The proximity of the daily grind makes it inevitable. Imagine we are working on a team project. Various members contribute, but one – perhaps the project manager, or someone on the same level as you who knows how to position themselves – takes the credit for the work and the accolades. We have all met someone like that, I’m sure. You can recognise these people by the way they talk; when there is work to be done, they say “We must…” and assume the team mantle, but when there is a sniff of credit to be had, their talk turns to “I” and they start mentioning what they feel they have contributed to the project. I once worked with someone who mentioned “I” twenty-five times in a thirty-minute meeting, yet was careful to refer to “we” when the allocation-of-work section of the meeting approached.

We all work with these kinds of people. Perhaps we subtly realise, too, that this is how things are: if we want to be promoted to greater things, it seems this is something we need to do from time to time. The problem with these kinds of methods is that they make us uncomfortable; we experience a disconnect, having to resort to a form of social positioning we dislike – and detest when we see it in others – or else get left behind as everyone around us becomes more upwardly mobile.

What can you do if you find yourself in such a situation? While reading about the drifters on the Piano Lessons N8 blog, I realised that perhaps the success of the band and its interchanging personnel meant that not everyone was going to be credited accordingly. Sometimes true worth is only correctly evaluated years after the success is over. Perhaps the resolution is to accept that, like many parts of life, there will always be contradictory aspects. We may not like self-promotion, but we may have to position ourselves from time to time to be seen to be doing something. Otherwise, if we wait for our work to be recognised, it may take too long for our liking, and the unease it causes us in the meantime might be just a little too much for us to accept.

Does exposure to violent scenes create violent teens?

Over recent decades, film technology has advanced so significantly that we are able to create more exciting and fast-paced action scenes using better special effects. One only needs to look back to the 1970s to see the difference. Take, for example, the film Battlestar Galactica. Spaceships battled it out among themselves, but you could tell that the laser beams of enemy ships and good guys alike were merely light reflected onto strings of model ships. Nowadays we have stunt doubles and pyrotechnics, and improvements in CGI mean it is possible to create a scene without it ever having physically taken place.

Action movies and action scenes draw crowds and revenue. After all, we go to the movies for some form of escapism – we wouldn’t go if the film showed something we were already experiencing in real life. In the last few decades, action movies have risen in number. They have always faced criticism about the level of violence they contain, and are often blamed for inciting anti-social behaviour, but is this accusation valid?

In the book Everybody Lies, Seth Stephens-Davidowitz, a data scientist and writer, makes the point that during the run of a violent movie at local theatres, especially on opening night, crime actually goes down. The evidence is that young men, who have a propensity for violence, are at the movies instead. Late-night showings see a proportionate decrease in violence and crime. Why is this so? The book suggests that movies are an outlet, a form of distraction, and that because a lot of crime is alcohol-fuelled – and cinemas and theatres don’t serve alcohol – they provide a form of aggression release that substitutes for crime.

But one should not get too eager about showing all the Kill Bill movies at the local cinema. There are many examples of life imitating art, with men seemingly hypnotised by what they had just seen on screen. A showing of the gang movie Colors was followed by a violent shooting. The movie New Jack City incited riots. And four days after the film Money Train was shown, men used lighter fluid to ignite a subway token booth, as if to see if it would really work. In the movie, the operator escaped. The real-life operator burned to death.

There is evidence from experiments that subjects exposed to a violent film show more anger and hostility afterwards, even if they do not imitate what they have seen.

We could say the same of alcohol. A night in may substitute for an anti-social night out: men and women who might otherwise go out looking for trouble may be kept from it by staying in and catching up over a glass of wine. But one could equally argue that the alcohol simply fuels crime outside of that immediate time frame.

Another useful area to examine is the effect of music in film. Does watching a film with “violent” music influence how we act in the aftermath? We know music shapes the mood of a film, but it would be useful to see how music – especially since it is so woven into the fabric of society – influences individuals.

Going to bed with your smartphone is not a good idea

Okay, let’s be clear. When I say going to bed with your smartphone, what I really mean is you have your smartphone on a table by your bedside.

Research has shown that the quality of sleep is affected for those who keep their smartphones within arm’s reach.

Why should this research not surprise us? Firstly, those of us who keep the phone nearby are more likely to respond to emails, alerts and vibrations, each of which signals that more information has come in for us to process. Going to sleep in such a mindset, with work lingering in the mind, interferes with our rest, especially when it happens night after night.

Secondly, the backlight from your smartphone can cause you to wake earlier than you intend to. While this may sound like good news for those of us who have problems waking up and keep having to hit the snooze button, perhaps we should consider that the reason we keep hitting the snooze button is that we have not slept well.

Imagine it is summer and it gets light earlier. Even if you sleep in a dark room, the light from your phone will hit your visual sensors and trick you into thinking it is later than it is. Even if you glance at the phone and realise it is only 5am (I say only because most people are still asleep then, though maybe you are one of the early risers), you will have difficulty going back to sleep, because your restful period has been disturbed, and this affects your body clock.

Do you notice how unseasonal temperatures affect wildlife? If you get a week of warmer weather in the winter, flowers and insects start to think that winter has passed and spring is here, and then emerge, only for the cold to hit again, leaving them vulnerable.

The smartphone provides unwanted stimulus in terms of light and sound. Even if it is fully muted and the screen is completely dark, its presence by the side of the bed means you can never fully switch off.

The solution, then, is to go low-tech. Get an alarm clock, or a watch, if you need a wake-up call. Leave your phone in a different room, such as the living room. Try to keep your bedroom sacrosanct, a place where work does not intrude. You will find it makes a difference to your restful periods.

Disconnect for a better quality of life

We live in a world that is more technologically advanced than our grandparents’ generation. For some, the generations in question are even closer to home. Those of us with parents in their late forties and fifties will almost certainly find that their version of their twenties was very different from ours. The difference can almost solely be put down to the impact technology has had on our world.

When computers were rolled out en masse, and the influence of technology was making its way into daily life, we were told that they would simplify life. Computers would do the drudge work that humans used to do, giving us more free time to explore leisure pursuits. At least, that was how it was sold to us.

Has that happened? Not really. The average citizen found himself needing to be more computer literate. As society became more dependent on things like email, mobile phones and computers, human beings found themselves needing to know how to work such devices and all their functions. Remember the days when the choice was simply between a digital camera and a traditional film one? Nowadays the choices have exploded exponentially. Of course, unless you are a purist, you would say digital cameras aren’t a bad thing. They aren’t. But making the transition to using them as part of daily life has only increased the mental burden of information we hold in our heads, and that is actually making us less productive. That, arguably, is one of the problems with technology. It has resulted in an explosion of information – an overload that overtaxes our mental processes and leaves us mentally fatigued and less able to focus on important issues.

Social media is another area. Touted as a way to renew links with people from your past, it now brings the need to catch up with the latest social gossip, to promote yourself, to keep on track with it all, to be in … all of which has a bearing on one’s mind and mental health. It is no wonder that some people report feeling depressed after scrolling through social media sites like Instagram, Twitter and Facebook.

Has technology enhanced our lives? It has certainly made it easier for companies to push work that used to be done by employees onto users. For example, had Wikipedia existed in the 1980s, it would have needed big offices and employees to research and type out the information in its databases. Now it encourages collaborative work – in short, it gets users to do the work for it.

The problem is that information is endless and cannot be fully captured, which runs counter to our innate need to grasp everything. We want to box it all, yet it cannot be boxed. Human civilisation generates unimaginable quantities of data every year, and trying to keep on top of it all is an insatiable pursuit that will leave us tired, fatigued, restless and depressed.

The solution? Disconnect. It would do you a (real) world of good. And if that is too drastic, try limiting the amount of screen time you have.

Where dementia treatment meets your NEETS

A recent study has suggested that just ten minutes of social interaction is enough to mitigate the loss of quality of life in dementia sufferers.

A survey of care homes in south London, north London and Buckinghamshire found that dementia sufferers who had chats with care workers for a prolonged period – the average amount of such interaction is otherwise estimated to be as little as two minutes a day – fared better on measures of neuropsychiatric symptoms and agitation. The chats were about areas of interest such as family, or the social interaction was extended to an activity such as sport.

Dementia sufferers in care homes were divided into two groups: the first received conventional treatment, while the second received an hour of personal interaction over the course of a week. Those in the second group showed the more prominent benefits.

The difficulty with social interaction in many care homes is that the activities are limited to ones such as bingo, where people are together but not really interacting, or the interaction happens on a one-to-many level, leaving many sufferers disengaged, bored and, in many respects, more withdrawn. Interaction – if it can be called that – is very passive, measured by presence rather than participation. For example, sitting together in a bingo hall doing “mental” activities such as bingo, or sitting with others to watch the soaps, occasionally piping up to ask “What’s gawin on?”, is unlikely to do much for one’s mental faculties.

Dr Doug Brown, director of research at the Alzheimer’s Society, said: “This study shows that training to provide this type of individualised care, activities and social interactions can have a significant impact on the wellbeing of people living with dementia in care homes.

“It also shows that this kind of effective care can reduce costs, which the stretched social care system desperately needs.”

The problem is that while this interaction may be perceived as cost-saving because it relies less on medication, employing paid carers – even on minimum wage – as dedicated “conversers” is actually more expensive. But it is a method that seems to work.

The unfortunate state of healthcare is that it is based not on what works but on what is cheapest. The baseline is not the quality of care; when a treatment would exceed a threshold the NHS cannot afford, cost takes priority.

Perhaps an effective method would be for NEETs – young people not in education, employment or training – to do such work. It would give dementia sufferers someone to talk to, the NEETs could learn something from observing life experience, and it would keep the government happy because unemployment figures would go down. And with recent mental health studies suggesting that only 1 in 5 young people have someone to talk to when they are down, is it not conceivable that paying young people who may be on the verge of depression through lack of employment a modest wage to talk with someone else might be an intangible way of reducing their likelihood of depression?

Getting the young unemployed to be befrienders in care homes – is that worth a thought?

Why health articles in newspapers should be retired

What is it that people look forward to? Most want time to pursue their interests and do the things they love. Some have managed to combine all this through the traditional interest-led approach: doing things they love, starting a blog, gaining a readership, and then selling advertising space, affiliate marketing and the other things associated with making money from a website. For others, the lure of the things they like is compromised by the need to make a living, and so it is shelved and put off until retirement.

For most people, retirement is when they will finally have the time and money to indulge in the things they put off earlier. Some have combined the two, starting a blog in retirement and making a living by blogging (and gaining a readership) about how they retired early or intend to.

Retirement. Out of the rat race. All the time in the world. For most people, retirement is the time to look forward to.

A recent study, however, suggests that retirement is not all that wonderful. Despite it being seen as the time of life when financial freedom has been achieved and time is flexible, it has been suggested that the onset of mental decline coincides with retirement.

The Daily Telegraph, citing scientists, reported that retirement causes brain function to decline rapidly. It further cautions that workers anticipating leisurely post-work years may need to reconsider their options because of this decline. Would you choose to stop work if it meant your mental faculties would suffer, leaving you with all the free time in the world but not the mental acuity?

Retired civil servants were found to have a decline in verbal memory function – the ability to recall spoken information such as words and names – which deteriorated 38% faster after an individual retired than before. Other areas of cognitive function, such as the ability to think and formulate patterns, were unaffected.

Even though the decline in verbal memory function is noteworthy, it must be made clear that the study does not suggest anything about dementia or the likelihood of developing it. No links were drawn with dementia. Just because someone retires does not mean they are more likely to develop it.

The study involved over 3,000 adults, who were asked to recall words from a list of twenty after two minutes; the percentages were derived from there. The small sample size – not of the adults, but of the word list – means the percentage decline among post-retirement adults may have been exaggerated.

Look at this mathematically. From a list of twenty words, a non-retiree may recall ten. A retiree may recall six. That difference of four words is a percentage decline of 40%.
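
To make that granularity point concrete, here is a minimal sketch of the arithmetic (the recall counts are illustrative only, not figures from the study):

```python
def relative_decline(before: int, after: int) -> float:
    """Percentage decline in recall, relative to the 'before' score."""
    return (before - after) / before * 100

# Illustrative recall counts from a twenty-word list (not the study's data):
print(relative_decline(10, 6))  # 40.0 -- four fewer words reads as a 40% decline
print(relative_decline(10, 9))  # 10.0 -- even a single word is already a 10% step
```

With only twenty words on the list, a single word is the smallest measurable unit, so any difference is amplified into a large-sounding percentage.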

Ask yourself – if you were given a list of twenty words, how many would you remember?

It is not surprising if retirees exhibit lower verbal memory recall, because those skills are not really exercised post-retirement. What you don’t use, you lose. We should not be too worried about the decline, because it is not a permanent mental state – it is reversible – and in any case the figure is bloated by the nature of the test. If a non-retiree remembers ten words and a retiree makes one mistake and remembers nine, that would already be reported as a 10% reduction in mental ability.

Furthermore, the decline is not necessarily due to the lack of work. There are many other contributing factors, such as diet, alcohol and lifestyle, that may confound the analyses; retirement is not necessarily the impetus behind mental decline.

The research did not involve people who had retired early – hedge fund managers, for example, might retire in their forties. And you would struggle to believe that someone in their forties would lose 38% of their verbal memory recall.

Would a loss of 38% of verbal memory have an impact on quality of life? It is hard to tell whether there is evidence to support this. But the results point to a simple fact: if you want to get better at verbal memory, practise your verbal memory skills. If you want to get better at anything, practise doing it.

Was this piece of news yet another attempt by the mainstream media to clog paper space with information that is arguably useless? You decide.

The bigger issues that come with preventing hearing loss

Is there cause for optimism when it comes to preventing hearing loss? Certainly the latest research into this suggests that if positive effects experienced by mice could be transferred to humans and maintained for the long term, then hereditary hearing loss could be a thing of the past.

It has long been assumed that hearing loss is simply down to old age. The commonly held view is that as people grow older, their muscles and bodily functions deteriorate with time to the point of being impaired and eventually lost. But hearing loss is not necessarily down to age, although there are cases where constant exposure to loud noise over time causes reduced sensitivity to aural stimuli. Over half of hearing loss cases are actually due to faulty genetic mutations inherited from parents.

How do we hear? The hair cells of the cochlea, a structure in the inner ear, respond to vibrations, and these signals are sent to the brain to interpret. The brain processes these signals in terms of frequency, duration and timbre in order to translate them into sounds we know.

For example, if we hear a high frequency sound of short duration that is shrill, our brain interprets these characteristics and then runs through a database of audio sounds, an audio library in the brain, and may come up with the suggestion that it has come from a whistle and may signify a call for attention.
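
As a toy analogy only – real auditory processing is far more complex than any lookup table – the “audio library” described above can be pictured as a mapping from sound characteristics to a best-guess label:

```python
# Purely illustrative analogy; not a model of real auditory processing.
AUDIO_LIBRARY = {
    ("high", "short", "shrill"): "whistle - possible call for attention",
    ("low", "long", "rumbling"): "thunder - possible storm approaching",
}

def interpret(frequency: str, duration: str, timbre: str) -> str:
    """Match a (frequency, duration, timbre) triple against the 'library'."""
    return AUDIO_LIBRARY.get((frequency, duration, timbre), "unrecognised sound")

print(interpret("high", "short", "shrill"))  # whistle - possible call for attention
```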

What happens when you carry a gene for hereditary hearing loss? The hairs of the inner ear do not grow back, and consequently sound vibrations from external stimuli are not passed on to the brain.

With progressive hearing loss, the characteristics of sound also become distorted. We may hear sounds differently from how they are produced, and so misinterpret their meaning. Sounds of higher and lower frequencies may become less audible too.

How does that cause a problem? Imagine an alarm. It is set on a high frequency so that it attracts attention. If your ability to hear high frequencies is gradually dulled then you may not be able to detect the sound of an alarm going off.

As hearing gradually deteriorates, the timbre of a sound changes. Sharper sounds become duller, and in the case of the alarm, you may hear it, but it may sound more muted and the brain may not be able to recognise that it is an alarm being heard.

Another problem with hearing loss is the loss of perception of volume. You may be crossing the road and a car might sound its horn if you suddenly encroach into its path. But if you cannot hear that the volume is loud, you may perceive it to be from a car far away and may not realise you are in danger.

The loss of the hairs in the inner ear is a cause of deafness in humans, particularly those for whom hearing loss is genetic. Humans suffering from hereditary hearing loss lose the hairs of the inner ear, which results in the difficulties mentioned above. But there is hope. In a research experiment, scientists successfully delayed the loss of the hairs in the inner ear in mice, using a technique that edited away the genetic mutation that causes the loss of the hairs in the cochlea.

Mice were bred with the faulty gene that causes hearing loss. Using a technology known as CRISPR, the faulty gene was replaced with a healthy, normal one. After about eight weeks, the hairs in the inner ears of treated mice with a genetic predisposition to hearing loss flourished compared with similar mice that had not been treated. The treated mice were then assessed for responsiveness to stimuli and showed positive gains.

We can be optimistic about the results, but it is important to remain cautious.

Firstly, the research was conducted on mice, not humans. Experiments that succeed in animals do not necessarily enjoy similar success when tried in humans.

Secondly, while the benefits in mice were seen in eight weeks, it may take longer in humans, if at all successful.

Thirdly, we should remember that the experiment worked for the mice which had the genetic mutation that would eventually cause deafness. In other words, they had their hearing at birth but were susceptible to losing it. The technique prevented degeneration in hearing in mice but would not help mice that were deaf at birth from gaining hearing they never had.

Every piece of research carries ethical issues, and this one was no different. The first is the recurring question of whether animals should ever be used for research. Should mice be bred for the purpose? Are all the mice used? Are they accounted for? Is someone from Health and Safety going around with a clipboard, counting the mice? And what happens to them when the research has ceased? Are they put down, or released into the ecosystem? “Don’t be silly,” I hear you say, “they’re only mice.” That’s the problem. The devaluation of life, despite it belonging to another, is what eventually leads to a disregard for other life, and human life in general. Would research scientists, in the quest for answers, eventually take to conducting research on beggars, those who sleep rough, or criminals? Would they experiment on orphans or unwanted babies?

The second, when it comes to genetics, is whether genetic experimentation furthers good or promotes misuse. The answer, I suppose, is that the knowledge empowers, but its use cannot be fully governed. The knowledge that a genetic mutation can be edited away is good news, perhaps, because it means we could remove disabilities or life-threatening diseases at the onset. But it may, on the other hand, encourage the rise of designer babies, where parents genetically select features such as blue eyes to enhance their unborn child from birth, and that would open the door to misuse within the medical community.

Will the use of what is probably best termed genetic surgery become more prominent in the future? One can only suppose so. Once procedures become more widespread, it is reasonable to conclude that more such surgeons will become available, catering for the rich and famous. It may become possible to delay the ageing process by genetic surgery, perhaps by removing the gene that causes skin to age, instead of using Botox and other external surgical procedures.

Would such genetic surgery ever be available on the NHS? For example, if the cancer gene were identified and could be genetically snipped away, would patients request this instead of tablets and other external surgical processes? One way of looking at it is that the NHS is so cash-strapped that under QALY rules, where the cost of a procedure is weighed against the number of quality life years it adds, genetic surgery would be limited to the more serious illnesses, and certainly not those further down the rung. But for younger individuals suffering from serious conditions such as depression, a lifetime’s cost of antidepressants, antipsychotics or antibiotics may far outweigh the one-off cost of a surgical procedure. If you could pinpoint a gene that causes a specific pain response, you might alter it to the point where you no longer need aspirin, too much of which causes bleeds. And if you could genetically locate what causes dementia in another person, would you not be considered unethical if you let the gene remain, thereby denying them the chance of a quality life in their latter years?
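
As a purely hypothetical illustration of the QALY trade-off above – the figures below are invented for the sketch and are not NHS or NICE numbers – the comparison comes down to cost per quality-adjusted life year:

```python
# Hypothetical figures invented for this sketch -- not NHS or NICE data.
def cost_per_qaly(total_cost: float, qalys_gained: float) -> float:
    """Cost-effectiveness: money spent per quality-adjusted life year gained."""
    return total_cost / qalys_gained

# A one-off genetic procedure for a young patient...
surgery = cost_per_qaly(total_cost=60_000, qalys_gained=30)
# ...versus decades of daily medication (assumed annual cost x 40 years).
medication = cost_per_qaly(total_cost=1_500 * 40, qalys_gained=25)

print(f"surgery:    £{surgery:,.0f} per QALY")     # £2,000 per QALY
print(f"medication: £{medication:,.0f} per QALY")  # £2,400 per QALY
```

On numbers like these, the one-off procedure works out more cost-effective over a lifetime, which is the argument for considering it in younger patients.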

Genetic editing may be a new technique for the moment, but if there is sufficient investment in infrastructure and the corpus of genetic surgery information widens, don’t be surprised if we start seeing more of it in the next century. The cost of lifelong medication and its side effects may come to outweigh the cost of genetic editing, which may prove not just more sustainable for the environment but more agreeable to the limited NHS budget.

Most of us won’t be around by then, of course. That is unless we’ve managed to remove the sickness and death genes.