The advancement of social media regulations and how they are protecting young people
Concerns surrounding social media have been around for almost as long as the platforms themselves. But they have reached new heights recently. The high-profile suicide of 14-year-old Molly Russell re-ignited calls for greater policing of social media companies after it was reported that her Instagram account contained distressing content on depression and suicide. Her father also partly blames Instagram (and its owner Facebook) for her death.
The Momo challenge has caused shockwaves on social media, with parents reporting that their children are terrified of the character. ‘Momo’, a distorted female face, sends messages encouraging young people to harm or even kill themselves. Indeed, the furore surrounding Momo grew loud enough that the original maker (who created Momo as a puppet) chose to ‘kill’ the character.
Yet, these cases are not isolated. Harmful behaviour on social media is occurring every day. As UK Digital Minister Margot James explains, “There is far too much bullying, abuse, misinformation as well as serious and organised crime online. For too long the response from many of the large platforms has fallen short.”
So, what are the social media platforms doing to regulate online behaviour?
How the social media giants are fighting back
Efforts to protect young people online depend on each platform. At the moment, there is a voluntary code of conduct that many platforms adhere to. This is usually re-assessed whenever a case like Molly Russell’s occurs. Following her death, Instagram banned graphic self-harm images, a move that some parties say was too little, too late. “It should never have taken the death of Molly Russell for Instagram to act,” NSPCC chief executive Peter Wanless stated in response to Instagram’s actions.
Sometimes, social media companies are only forced to address potentially harmful behaviour when it directly hits their bottom line. Video platform YouTube recently came under fire for allowing networks of paedophiles to form within the comment sections of its videos. When news of the paedophile scandal broke, brands such as Hasbro, Disney and Nestle swiftly stopped advertising on the platform. YouTube has now banned all commenting on any videos featuring children or young people.
A challenge to police
This highlights the huge challenge that social media companies face. Every platform is different, which leads to different online behaviours to police. Plus, as YouTube discovered, humans can be pretty innovative in their exploitation of online platforms.
There have been consistent calls from Governments, NGOs and the public for more policing of social media platforms. Current efforts aren’t enough. However, there isn’t a silver bullet solution that will immediately protect all young people online.
Facebook hires an army of content moderators across the globe to vet its content. But even this has been criticised. Firstly, there are the opaque and confusing content guidelines that moderators have to adhere to: rules that (rightly) ban videos of murder, but leave up posts stating that “Autistic people should be sterilised”. Additionally, the support and treatment of content moderators is under scrutiny, with many reporting PTSD-like symptoms after a few months on the job.
Naturally, if the social media giants are struggling to police their own platforms, what hope is there for Government bodies?
How Governments are responding
Government actions to protect young users online range from individual politicians calling for greater oversight on social media companies, to fines and codes of practices.
Social media companies additionally have the option to follow a set of child safety recommendations set out by the UK Government around a decade ago. Of course, these recommendations are now outdated and don’t take into account recent cases of harmful behaviour online. Some bodies, such as the NSPCC, want the Government to take it a step further by making social media platforms sign up to a mandatory code of practice. The NSPCC also wants fines issued for any breaches of the code.
Indeed, financial consequences for infringing child safety may be the only way to ensure effective policing of social media.
Room for improvement
Many efforts to-date have been ad-hoc and reactive to public outcry or events like Molly Russell’s death. It’s evident that much more can be done by social media companies and legislators to protect young people online.
To do this, Governments and social media companies need to work together. Legislators need to understand each platform inside-and-out, including its potential for harm and any internal limitations. Recognising this, some companies (including Facebook and Twitter) are reaching out to guide politicians on regulating online behaviour.
It’s clear that current ways of protecting young people aren’t working. Only time will tell what regulations will be developed to keep youngsters safe online. But they had better come soon – before another Momo leaps out of social media’s shadow.
Whether it’s hiring your next legal professional or you’re looking to take the next step in your career, we can help. Contact Zest Recruitment for experts in recruiting for your industry.