Julian Hayes and Greta Barkle discuss how social media platforms are used by fraudsters, the criminal sanctions already available to tackle the problem, and how regulators are joining forces to clean up the online space.
When President Donald Trump issued his now infamous tweet threatening military force against Black Lives Matter protesters, few would have predicted the ensuing vehement corporate backlash. But with household names including Coca-Cola, Starbucks, Ford and Diageo leading the exodus, hundreds of firms withdrew their advertising from Facebook in protest at its stance towards misinformation and hate speech. While the advertising boycott forced concessions from the social media giant, regulatory change was already underway across the world, including in the UK, where the Government has restated its commitment to rein in misinformation and other online harms. Despite pleas from financial institutions, politicians and law enforcement, however, economic harms will be excluded from forthcoming UK measures. Partially addressing this gap, the UK’s Competition and Markets Authority (‘CMA’) has now stepped into the hotly contested debate over social media regulation.
In response to a rising tide of illegal and harmful internet material, established models of online regulation are under review in many countries, and the ‘safe harbours’ enjoyed by tech companies against liability for material posted by others on their platforms are in the legislators’ sights. With potentially Europe-wide consequences, the European Commission too has launched a consultation on a proposed Digital Services Act, hinting heavily at increased accountability for online platform providers whose sites have become de facto public spaces in the online world.
In the UK, the Government is facing increasingly shrill calls to bring forward its long-awaited draft online harms legislation. The Minister of State for Digital and Culture has insisted that there is no delay to the proposals and that the Government’s ambition is to bring a Bill forward and hold pre-legislative scrutiny in the current Parliamentary session. Yet giving evidence to the Home Affairs Committee, the Minister defended the exclusion of economic harms from the scope of the 2019 Online Harms White Paper on the grounds that other work was underway addressing this problem and the Government wished to avoid creating a legislative behemoth. Nevertheless, the increasing prevalence of cyber-enabled economic crime (traditional crimes whose scale or reach is augmented by information communications technology) has led many, including the National Crime Agency and Action Fraud, to call for its inclusion within the scope of the harms from which online providers should be duty-bound to keep their users safe.
Cyber-enabled economic crime on social media takes various forms but often involves misinformation of some kind. It includes sports stars and celebrities surreptitiously buying armies of fake ‘bot’ followers on Twitter, LinkedIn and other platforms, falsely inflating their apparent worth as ‘influencers’ to secure more lucrative bookings and endorsement deals. It also includes shadowy, share-price shifting disinformation campaigns against corporate rivals using false social media accounts like that discovered by Facebook in February 2020 implicating several major South East Asian telecoms providers. More prosaically, cyber-enabled economic crime also takes in fake positive reviews online, covering everything from consumer goods to top-ranked restaurants.
With estimates suggesting £23 billion of UK consumer spending is influenced by online reviews, the potential to mislead is huge, with consumer group Which? last year attacking TripAdvisor for failing to do more to tackle “blatantly” phoney hotel reviews. In 2018, Italian courts jailed the owner of one company selling fake review packages to Italian hospitality businesses.
Specific UK consumer regulations already exist criminalising misleading commercial practices such as false or untruthful endorsement of traders or products and, subject to due diligence and innocent publication defences, those publishing such endorsements also risk prosecution. More generally, those involved in dishonestly making false online representations for profit could face fraud charges. Theoretically, at least, online platforms which become aware that they are hosting fraudulent reviews but take no action could fall under suspicion of assisting the perpetrators under the Serious Crime Act 2007. Importantly, the safe harbour immunity which they normally enjoy as “mere conduits” of such material would not protect against such allegations. However, establishing corporate liability in England & Wales is notoriously difficult and prosecutions for it are rare.
Against this background, the CMA has now entered the field as the consumer’s champion. Although at first blush an unlikely regulator of the digital world, in fact the increasing dominance of a small cohort of tech titans has long attracted competition authority attention both here and overseas. In June 2019, the CMA opened a consumer enforcement case into the communications sector following concerns over fake and misleading online reviews, and swiftly secured the agreement of Facebook and eBay to crack down on false material which had gone undetected on their platforms. The CMA has now followed up with a further investigation of whether major websites displaying online reviews are taking sufficient measures to protect consumers from fake and misleading write-ups.
To assist it, the CMA will be armed with new powers to seek ‘online interface orders’ from the courts to force website operators to remove online content, display warnings to consumers and disable or restrict access to online platforms. Online interface orders may be granted where there is a serious risk to collective consumer interests and there is no other means of stopping or preventing breach of specific legislation, including provisions targeting misleading advertising. Interim orders may be sought without notice where a breach is anticipated.
Cementing its credentials as a regulator in the digital sphere still further, on 1 July 2020 the CMA published its detailed findings after conducting a year-long market study of online platforms and digital advertising. The report calls for a new regulatory regime to promote online competition and announces the teaming-up of the CMA, the Information Commissioner and the future online harms regulator, Ofcom, to create a Digital Markets Taskforce to advise on how a new regulatory regime for digital markets might be designed.
Setting a punishing timetable, the new Taskforce aims to deliver its advice to the Government by the end of 2020. By that time, though, it may be jockeying for attention with a crowded field of priorities, including the end of the Brexit transition period and a predicted second wave of Covid-19 infections.
And with the Secretary of State for Digital, Culture, Media and Sport expressly looking to the £149 billion digital sector to create jobs and power the UK out of the coming recession, there are suspicions that the Government will be reluctant to toughen digital regulation if doing so jeopardises inward investment into the UK tech industry. Facing such countervailing economic and regulatory crosswinds, we will soon learn where the Government’s true priorities lie in the online sphere.
About the authors: