Our timeline is an overview of major upcoming digital regulation that is in the legislative process or coming into force soon. It is designed to help you keep track of relevant regulatory developments, understand their status and identify which pieces of legislation will actually affect your business.
Our international regulatory lawyers offer support for all stages of your business's digitalisation journey. We can help you prepare for, implement and comply with all aspects of digital regulation, whatever your sector. Explore our digital regulation expertise.
⭐ Recently added or amended content.
Last updated: 30 September 2024
The timeline is indicative only and not a complete overview of applicable law.
This directive aims to increase the accessibility of a range of everyday products and services including e-commerce, public transport, banking services, computers, TVs and e-books. It will take effect via national law.
The EU Accessibility Directive was enacted in 2019 and required implementing legislation to be issued at national level by the EU Member States by 28 June 2022. The national laws must come into force by 28 June 2025 at the latest.
Fully in force. Member State national enacting legislation required.
National implementing legislation can be tracked from this page.
Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services, available here.
The directive and related national implementing legislation will affect all businesses providing a range of "everyday" products and services, with a focus on digital technology. These include:
· computers and operating systems
· smartphones, other communication devices and phone services
· TV equipment related to digital television services and access to audio-visual media services (AVMS)
· ATMs and payment terminals (e.g. card payment machines in supermarkets) and consumer banking services more generally
· e-readers and e-books
· ticketing and check-in machines, electronic tickets and all sources of information (including websites and apps) for air, bus, rail and waterborne transport services
· e-commerce
· emergency calls to the European emergency number 112
There are various carve-outs for content and services available through websites and apps that do not have to be accessible, including historic and archived content, and third party content that is not funded, developed or under the control of the entity providing it.
There are also important carve-outs where compliance would fundamentally alter the nature of the product or service or would impose a disproportionate burden on the provider. Careful consideration of these tests is necessary ahead of product or service launches or wider compliance reviews. It should be noted that for many businesses there are notification requirements before these carve-outs will apply.
The overarching intention of the EU Accessibility Directive is to enable people with disabilities or impairments to access these products and services on an equal basis with others.
Rather than specifying technical requirements, the directive identifies the particular features of the relevant products and services that must be accessible (set out in Annex I of the directive). The intention is that this approach allows for flexibility and innovation in how these requirements are actually met in practice.
The accessibility requirements set out in Annex I depend on the product or service but include: providing information through more than one sensory channel (e.g. the option of an audio version of written text); ensuring content is presented in an understandable way; and formatting written content in sufficiently large fonts, with sufficient contrast and adjustable spacing. Products requiring fine motor control must offer alternative controls, and requirements for extensive reach or great strength should be avoided. The directive envisages the use of assistive technologies, as well as emphasising the need to respect privacy.
The directive sets out requirements on product manufacturers for technical documentation, self-assessment of compliance and a declaration of conformity with the accessibility requirements, as well as a CE marking to show compliance. Importers are required to place only compliant products on the market, and to check for compliance and the CE mark. Distributors must similarly check for compliance and not put non-compliant products on the market.
For services, compliance with the directive's accessibility requirements must be incorporated into how the services are designed and provided. Information about the service's accessibility compliance must be made available (and updated) for the lifetime of the service. Compliance must be reviewed and maintained, with disclosure obligations around non-compliance.
As noted, compliance is not required where it would fundamentally alter the basic nature of the product or service, or where it would create a disproportionate economic burden on the entity concerned. Where a business decides it falls within one of these carve-outs and so does not have to comply, it must document that assessment. In relation to services, the assessment must be renewed every 5 years, or when the service is altered, or when the national authority requests a reassessment.
The directive also requires Member States to set up national market surveillance authorities to oversee enforcement of the accessibility regime. Adequate enforcement powers must be given to the national authority, which should include provision for "effective, proportionate and dissuasive penalties", as well as remedial action to address non-compliance.
The need to implement national legislation and then allow time for businesses to achieve compliance (for which the directive allows three years) means that this 2019 legislation is taking some time to come into full effect. As noted, national implementation of this directive must take effect by 28 June 2025 at the latest. National implementing legislation can be tracked from this page.
Insight: The EU Accessibility Act – Time to start implementation projects now
Webinar recording: European Accessibility Act: An introduction (19 March 2024)
Webinar recording: Digital Inclusion: Risks of not making your digital products accessible (18 April 2023)
Enacted in June 2022, the Data Governance Act seeks to increase sharing of public sector data, regulates data marketplaces, and creates the concept of data altruism for trusted sharing.
The Data Governance Act applies from 24 September 2023, but businesses that were already providing data intermediary services on 23 June 2022 have until 24 September 2025 to comply with relevant data intermediary provisions.
Enacted. Mostly in force.
Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), available here.
The Data Governance Act (DGA) deals with three main areas of data sharing in the EU:
- Data sharing by public authorities, creating a framework to facilitate the sharing of such data when it is protected under confidentiality obligations, intellectual property or data protection laws and hence falls outside the scope of the Open Data Directive.
- Data-sharing through profit-making intermediaries, creating a new regulatory framework. These provisions apply to: (i) data exchange services or platforms, (ii) services that enable individuals to control the sharing of their personal data, and (iii) so-called "data cooperatives" that support their members in exercising their rights with respect to data. Web browsers, email services, cloud storage, analytics or data sharing software are excluded, as are services used in a closed group such as those ensuring the functioning of Internet of Things (IoT) devices or objects.
- "Data altruism" where data is shared by individuals without reward and via a not-for-profit organisation pursuing objectives in the general interest. A regulatory framework is created to ensure transparency and the rights of data subjects.
The DGA aims to create a new EU model for data sharing, with trusted frameworks and organisations to encourage sharing, but also regulates data-sharing organisations. Data governance in this context means rules, structures, processes and technical means to share and pool data.
The DGA seeks to address the problem that some public data cannot be made open as it is protected by confidentiality, intellectual property rights or data privacy rules. Requirements are imposed on EU Member States to optimise the availability of data while protecting confidentiality and privacy, for example using technical means such as anonymisation, pseudonymisation or accessing data in secure data rooms. Exclusive data sharing arrangements are prohibited except in limited public interest situations. Other requirements include a maximum delay before answering a data access request, and the creation of a single information point by each Member State. The Commission has created a searchable EU register of all information compiled by the national single information points.
The new data intermediary regulatory framework aims to grow private sector trust in opening up data by boosting the neutrality and transparency of data intermediaries. Data intermediaries are required to be neutral third parties, which do not monetise the data themselves by selling it to another business, or feeding it into their own products and services. Data intermediaries are required to act in the best interests of the data subjects. The DGA creates a full regulatory framework for data intermediation services, which will face the additional costs and burden of notification and compliance requirements. The EU Commission has, via an Implementing Regulation, introduced a logo for trusted "EU recognised data intermediary" organisations to differentiate recognised trusted services (i.e. those services that satisfy the compliance requirements of the DGA) from other services. The Commission has created a register of all data intermediation services providers in the EU.
Member States may choose to legislate for "dissuasive financial penalties" for non-compliance. Data intermediaries must notify the national competent authority of their services, which will monitor the intermediary's compliance. As noted, existing data intermediaries in operation on 23 June 2022 have been given until 24 September 2025 to bring their operations into compliance.
The DGA creates new legal structures and a regulatory framework for "data altruism". This will enable people and businesses to share their data voluntarily and without reward for a purpose in the public interest (for example, medical or environmental research). As with data intermediaries, the Commission has introduced a logo to differentiate a compliant "EU recognised data altruism organisation" from other services.
Data altruism organisations must be registered in the Commission's new EU public register of recognised data altruism organisations and the data altruism logo must be accompanied by a QR code with a link to that register. Data altruism organisations must be not-for-profit, pursuing stated objectives in the general interest and inviting data holders to contribute their data in pursuance of those objectives. They will not be able to use the pooled data for any other purposes. The DGA further requires independent functioning and functional separation from other activities, as well as requirements to safeguard transparency and data subjects' rights.
The relevant authorities in Member States are responsible for the registration of data altruism organisations and for monitoring compliance by data intermediation services providers.
In May 2024, the European Commission opened infringement procedures against 18 Member States that have either failed to designate the responsible authorities to implement the DGA, or failed to prove that such authorities are empowered to carry out their duties under the DGA. In July 2024, the Commission also opened infringement proceedings against Ireland.
Trust and Legal Certainty for the data-driven Economy? A look into the EU Data Governance Act
EU Data Governance Act | Creating European regulation for the data ecosystem
While the e-Commerce Directive remains the cornerstone of digital regulation, much has changed since its adoption over 20 years ago. The DSA builds on the e-Commerce Directive to address new challenges.
Most in-scope service providers have had to comply since 17 February 2024.
Provisions enabling the designation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) applied from 16 November 2022. The VLOPs and VLOSEs that were designated on 25 April 2023 had to comply by 25 August 2023.
Providers that are subsequently designated as VLOPs and VLOSEs by the European Commission will have four months from the Commission's notification of designation to comply with the extra obligations applicable to VLOPs and VLOSEs.
The list of designated VLOPs and VLOSEs can be viewed here.
In force.
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), available here.
The Digital Services Act (DSA) broadly applies to all online intermediaries, irrespective of their place of establishment, that offer digital services or products to users in the EU.
It sets out a layered approach to regulation with requirements increasing cumulatively depending on the classification of the service provider:
· all intermediary service providers (e.g. hosting, caching or mere conduit services) must comply with requirements that apply to all online intermediaries depending on their functionality;
· "online platforms" must comply with the obligations for intermediary service providers and an additional layer of obligations;
· "online platforms allowing consumers to conclude distance contracts with traders" (e.g. online marketplaces) must comply with the obligations for intermediary service providers and online platforms, plus an additional layer of obligations specific to their functionality; and
· VLOPs and VLOSEs are subject to all of the foregoing (subject to functionality) and further obligations that are the most onerous.
VLOPs and VLOSEs are online platforms and online search engines with more than 45 million average monthly active users in the EU which are designated as a VLOP or VLOSE by the Commission.
Micro and small businesses are exempt from the requirements for online platforms and online marketplaces, unless they qualify as a VLOP or VLOSE.
The DSA aims to update the "rules of the internet", with the key principle that what is illegal offline should also be illegal online.
The DSA both preserves and updates the intermediary liability position set out in the e-Commerce Directive (e.g. the hosting defence). However, the range of requirements fundamentally changes the liability of online services in the EU with strict obligations for the supervision of illegal content online, as well as new rules on transparency, user safety and advertising accountability. It also includes detailed rules on features that must be incorporated into online platforms and marketplaces or included in user terms and conditions.
The DSA preserves the liability defences under Articles 12-15 of the e-Commerce Directive (the "mere conduit" defence, the "caching" defence, the "hosting" defence and the "no obligation to monitor" provision). Clarity is added that these defences remain available even if the platform has voluntarily taken action to detect, identify and remove, or disable access to, illegal content.
However, notice and take down obligations on online platforms are strengthened. Platforms must put in place mechanisms for users to report illicit content (such as hate speech, terror propaganda, discrimination or counterfeit goods). Platforms must decide what to do about reported content in a timely, diligent and non-arbitrary manner, in order to continue to benefit from the hosting defence. Specified mechanisms to challenge content removal must be made available, including alternative dispute resolution. Online platforms must monitor for repeated submission of illegal content or unfounded complaints by particular users, and suspend their access.
Third party content monitors can apply for "trusted flagger" status. Platforms must prioritise illegal content reports from such individuals and bodies.
Over the summer of 2024, the EU Commission conducted a call for evidence to inform its upcoming guidelines on the protection of children online under the DSA. The guidelines will apply to all online platforms that are accessible to children, including those not directed at children but which still have child users due to a lack of age-assurance mechanisms. The Commission will use the input from the call for evidence to draft the guidelines, on which it will consult separately and which it plans to adopt before summer 2025.
General obligations apply in relation to user safety, transparency, controls and information. These include provisions relating to content recommendations and online terms, and obligations relating to the publication and communication of information on the average monthly active recipients of the service.
Annual reports are required from all intermediary service providers on their content moderation activity, with the required contents depending on the service provided. Hosting service providers must include information about illegal content notices and complaints received and action taken. The European Commission has consulted on the mandatory templates for transparency reporting under the DSA and is analysing the feedback. The Commission has also launched the DSA Transparency Database, a publicly accessible database of the "statements of reasons" that online platforms (with the exception of micro and small enterprises) must submit to the Commission setting out their reasons for content moderation decisions.
Websites must not be designed and organised in a way that deceives or manipulates users – so-called "dark patterns" – but must be designed in a user-friendly and age-appropriate way to make sure users can make informed decisions.
On 29 October 2024, the Commission launched a consultation on a draft delegated regulation on the rules for researchers to access platforms' data under the DSA. Article 40 allows "vetted researchers" to access VLOPs' and VLOSEs' data, subject to approval from a Digital Services Coordinator, for the purposes of evaluating systemic risks and mitigation measures. The consultation closes on 26 November 2024, and the rules are expected to be adopted in the first quarter of 2025.
The DSA introduces traceability provisions for online marketplaces. These include a "know-your-trader" obligation, requiring online marketplaces that allow B2C sales to conduct due diligence on traders prior to allowing them to use the platform.
Platforms must be designed to facilitate compliance with traders' legal obligations, such as e-commerce and product safety requirements. The illegal content takedown provisions facilitate the removal of illegal or counterfeit products, and platforms are obliged to contact consumers who have purchased those products.
Specific transparency obligations apply to online advertising. It must be clear to users whether a displayed item is an advertisement and from whom it originates, and users must be given meaningful information about the main parameters used to determine to whom an advertisement is displayed.
VLOPs and VLOSEs are designated by the Commission. The first tranche of designations (17 VLOPs and 2 VLOSEs) was made on 25 April 2023. Platforms and search engines need to report on their number of users at least every six months. Obligations on VLOPs and VLOSEs apply from four months after their designation as such.
An additional layer of obligations are imposed on VLOPs and VLOSEs. They must diligently monitor and mitigate systemic risks including their service being used for the dissemination of illegal content; the impact of their service on human rights; and the risk of their service negatively affecting civic discourse, electoral processes, public security, gender-based violence, the protection of public health and minors and individuals' physical and mental well-being. The European Commission's guidelines on mitigating against systemic risks in relation to electoral processes were published in the EU's Official Journal on 26 April 2024. The guidelines aim to support VLOPs and VLOSEs with their compliance obligations under Article 35 of the DSA (mitigation of risks) and with other obligations relevant to elections. As well as mitigation measures, the guidelines cover best practices before, during and after electoral events.
VLOPs and VLOSEs also have to implement strict processes for regular risk assessments, produce reports of the action they take on content moderation, and undergo external annual audits. They have to disclose the parameters of their recommender systems and provide at least one recommender system option that does not involve profiling. In addition, the EU Commission and the Member States can gain access to their algorithms.
On 22 February 2024, the delegated regulation on independent audits under the DSA also entered into force. Article 37 of the DSA requires VLOPs and VLOSEs to undergo annual independent audits to assess compliance with the DSA and any commitments made pursuant to codes of conduct and crisis protocols that have been adopted. The delegated act provides a framework to guide VLOPs and VLOSEs when preparing audits. It also provides mandatory templates for auditors to use when completing an audit report, as well as mandatory templates for the VLOPs and VLOSEs to use when completing their audit implementation reports.
The Commission has launched a DSA whistleblower tool which allows individuals with inside information to report harmful practices by VLOPs and VLOSEs that are potentially in breach of the DSA, anonymously if preferred.
Each Member State must designate one or more competent authorities as responsible for the application and enforcement of the DSA in its jurisdiction, and must designate one of these as its "Digital Services Coordinator" (DSC). The DSCs are responsible for enforcement of the DSA in their respective territories in respect of platforms that are not VLOPs or VLOSEs, and can work together. The DSCs make up a European Board for Digital Services, chaired by the European Commission, which acts as an independent advisory group on the consistent application and enforcement of the DSA. In April 2024, the Commission opened infringement procedures against six member states which either had not designated their DSC by the 17 February 2024 deadline (Estonia, Poland and Slovakia), or had not yet granted them full powers to carry out their duties under the DSA (Cyprus, Czechia and Portugal). In July 2024, the Commission opened infringement proceedings against six further member states (Belgium, Spain, Croatia, Luxembourg, Netherlands and Sweden).
Enforcement in relation to VLOPs and VLOSEs is exclusively reserved to the EU Commission. Fines of up to 6% of global annual turnover can be levied for breaches.
The European Commission has been, and continues to be, proactive in sending requests for information to various platforms concerning their compliance with their DSA obligations. It has also opened formal proceedings against some platforms following preliminary investigations and requests for information.
In May 2024, the European Commission and the UK's Ofcom signed an administrative arrangement to support their work in enforcing the new online safety regimes in both the EU and the UK under the DSA and the UK Online Safety Act 2023 respectively.
The Commission and the European Regulators Group for Audiovisual Media Services (ERGA), which brings together national media regulators under the Audiovisual Media Services Directive, have also agreed to work together on enforcement, focusing on VLOPs and VLOSEs. ERGA will act as a facilitator to gather information at national level and report on issues such as media pluralism, disinformation and the protection of children to help the Commission identify and assess the systemic risks in these areas. It is also expected to assist the DSCs.
17 February 2024: The Digital Services Act (DSA) is now fully applicable
EU gets one step closer to adopting the Digital Services Act
Rethinking regulation of data-driven digital platforms
EU's Digital Service Act to introduce first express ban on 'dark patterns'
Commission finds EU consumer law inadequate to protect consumers in the digital world
This new legislation is aimed at boosting competition in EU digital markets, with new rules aimed at online "gatekeepers"; that is, players that determine how other companies interact with the users of digital platforms.
The first group of gatekeepers designated under the DMA have had to comply with its obligations since 6 March 2024. Additional designations continue to be made by the Commission: obligations take effect six months from the date of designation.
Enacted. Fully in force.
Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act), available here.
The DMA is aimed at digital businesses offering a "core platform service", with significant scale and reach within the EU – these firms will be designated as "gatekeepers".
A "core platform service" includes: online intermediation services; online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communication services; operating systems; cloud computing services; and certain advertising services so far as they are provided by a provider of any of the core platform services.
A "gatekeeper" is a "core platform service provider" that has a significant impact on the internal market, whose platform enables business users to access customers and enjoys an entrenched and durable position (meaning at least three years). While there is some nuance in this designation, there will be a presumption that a provider will satisfy the designation test if it:
• generates an annual revenue of €7.5 billion or more in the last three years, or has an average market capitalisation of at least €75 billion in the last financial year in the EEA and provides core platform services in three member states; and
• has more than 45 million monthly EU end users or more than 10,000 yearly active EU business users a year, and
• these thresholds have both been met in each of the last three financial years.
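By way of illustration only, the quantitative presumption can be expressed as a simple check. The sketch below is our own simplification of the test as summarised above, with invented input names; actual designation also involves qualitative assessment by the Commission and the possibility of rebuttal by the provider.

    # Illustrative sketch only: a simplified version of the DMA's quantitative
    # gatekeeper presumption as summarised above. Input names are invented;
    # real designation also involves qualitative assessment and rebuttal.
    def meets_gatekeeper_presumption(
        annual_eu_turnover_eur,      # list: one figure per last three financial years
        avg_market_cap_eur,          # last financial year
        member_states_served,
        monthly_end_users,           # list: one figure per last three financial years
        yearly_business_users,       # list: one figure per last three financial years
    ):
        # Size limb: turnover in each of the last three years, or market cap,
        # plus the same core platform service in at least three Member States
        size = (
            all(t >= 7_500_000_000 for t in annual_eu_turnover_eur)
            or avg_market_cap_eur >= 75_000_000_000
        ) and member_states_served >= 3
        # User limbs: both user thresholds met in each of the last three years
        scale = all(
            end_users >= 45_000_000 and business_users >= 10_000
            for end_users, business_users in zip(monthly_end_users, yearly_business_users)
        )
        return size and scale

For example, a provider with three years of €8 billion EU turnover, services in five Member States and, in each of those years, 50 million end users and 12,000 business users would trigger the presumption.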
The companies designated by the Commission as gatekeepers to date are Alphabet, Amazon, Apple, Booking.com, ByteDance, Meta and Microsoft, in relation to the following core platform services:
• Social Networks: TikTok, Facebook, Instagram, LinkedIn;
• Online Intermediation Services: Google Maps, Google Play, Google Shopping, Amazon Marketplace, App Store, Meta Marketplace, Booking.com;
• Advertising: Google, Amazon, Meta;
• Number Independent Communication Services: WhatsApp, Messenger;
• Video Sharing: YouTube;
• Search: Google Search;
• Browser: Chrome, Safari;
• Operating System: Android, iOS, iPad OS and Windows PC.
Challenges have been lodged by gatekeepers in relation to some of these designations and the Commission has launched several investigations into non-compliance with the DMA by gatekeepers. The Commission intends to complete these investigations by 24 March 2025.
In addition, the Commission has opened a market investigation to further assess the rebuttal submitted by the online social networking service X, which argues that even if X is deemed to meet the quantitative thresholds it does not qualify as an important gateway between businesses and consumers. The Commission has until 13 October 2024 to complete this investigation.
The Commission has also signalled its intention to monitor the development of AI tools in the context of the DMA.
The DMA has two main limbs. The first is a set of rules that apply only to "gatekeepers", comprising a list of do's and don'ts for these providers. The rules are designed to address perceived competitive imbalances arising from the gatekeeper's position in the market.
Examples of obligations on gatekeepers include:
• Allowing third parties to inter-operate with the gatekeeper's own service in certain specific situations.
• Allowing business users to access the data that they generate in their use of the gatekeeper's platform.
• Providing advertisers and publishers free of charge with the tools and information necessary to carry out their own, independent verification of advertisements hosted by the gatekeeper.
• Allowing business users to promote their offer and conclude contracts with customers outside the gatekeeper's platform.
Examples of prohibited behaviour include:
• Self-preferencing – treating services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform.
• Preventing consumers from linking up to businesses outside their platforms.
• Preventing users from uninstalling any pre-installed software or app.
The second limb covers market investigation powers for the European Commission to conduct investigations to consider whether: a core platform provider should be designated as a gatekeeper; there has been systematic non-compliance; and services should be added to the list of core platform services.
The Commission also has powers to supplement the gatekeeper obligations under the DMA with additional obligations through further legislation, based on a market investigation. This is to ensure that the legislation maintains pace with digital markets as they evolve.
The Commission can also designate firms as "emerging" gatekeepers; that is, firms on the way to reaching the thresholds set out in the DMA for designation as a gatekeeper.
Failure to comply with the rules could lead to fines of up to 10% of global turnover, and, in the case of systematic infringements, structural remedies.
EU rules for "gatekeepers" coming in 2023 as Digital Markets Act is published
Part 1 creates a new regulatory scheme in relation to cybersecurity for connected consumer products. Secondary legislation sets out technical requirements. This affects products already on sale/available.
Secondary legislation giving effect to the connectable products part of the Act came into force on 29 April 2024.
Enacted.
UK Product Security and Telecommunications Infrastructure Act 2022, available here.
The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023, which give effect to the connectable products provisions, are available here.
Manufacturers of connectable products (Product Security) and telecoms network operators, infrastructure providers and site providers (Telecommunications Infrastructure). This affects products already on sale/available.
Part 1 of the Product Security and Telecommunications Infrastructure Act 2022 ('PSTIA 2022') sets out powers for UK Government Ministers to specify security requirements relating to relevant connectable products (products which are internet- or network-connectable).
On 14 September 2023 the Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023 were introduced under the PSTIA 2022. They came into force on 29 April 2024. The obligations imposed on manufacturers of connectable/digital products include:
· Restricting the use of default passwords;
· Providing information to consumers on how to report security issues; and
· Providing information on minimum periods for which devices will receive security updates.
In addition, businesses may face liability where cybersecurity vulnerabilities lead to the loss or corruption of consumer data.
The PSTIA also imposes obligations on manufacturers, importers and distributors not to make products available in the UK unless accompanied by a statement of compliance with applicable security conditions.
The PSTIA 2022 leaves room for further delegated legislation to introduce additional obligations, and clarifications of business responsibilities.
The Office for Product Safety and Standards (OPSS) has published guidance for businesses that need to comply with the new requirements. See the link to the February Regulatory Outlook for more.
Additional guidance was added on 23 April 2024, specifying that the product must be "accompanied" by its statement of compliance (SoC) and defining the SoC as a "document". However, the terms "document" and "accompany" are not clearly defined in the PSTIA 2022, so businesses must determine how they will comply with these requirements for their individual products.
The guidance also mentions that certain categories of products may be exempted from the regime. Amending regulations introducing these exemptions had been drafted and would exempt the following categories of products: motor vehicles; agricultural and forestry vehicles; and two- or three-wheel vehicles and quadricycles. However, these regulations did not make it through the recent 'wash-up' period ahead of the dissolution of Parliament, so it will be for the new government to decide whether to reintroduce them. In the meantime, those products remain in scope of the regime.
For more on the practical steps to comply with the new regime, please see the link to our Eating Compliance for Breakfast webinar.
Products | UK Regulatory Outlook February 2024 - Osborne Clarke | Osborne Clarke
This directive updates existing EU legislation on security of network and information systems (NIS 1), and requires organisations that provide essential or important services to strengthen their cyber security defences.
The Directive entered into force on 16 January 2023. All member states were obliged to transpose its measures into national law by 17 October 2024.
Fully in force. Member state national enacting legislation required.
National implementing legislation can be tracked from this page.
Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, available here.
The NIS 2 directive updates earlier EU legislation in this area. It significantly extends the list of sectors within the scope of the regime and provides more detail on the entities that are subject to its cybersecurity requirements. The scope of the directive is set out in detail, intended to remove the inconsistencies in scope between the implementing legislation enacted at Member State level for the NIS 1 directive. The directive applies to all businesses operating within the broad sectors listed in Annex I or II which qualify as a "medium" enterprise or larger (i.e. employing 50 or more people, or with annual turnover or a balance sheet total exceeding €10 million).
However, the carve-out for smaller businesses will not apply where they are in the sectors listed in Annex I or II and they are, more specifically:
· providers of public electronic communications networks or services;
· trust service providers;
· top-level domain name registries and domain name system service providers;
· a sole national provider of essential societal or economic services;
· a service provider with a significant impact on public safety, security, or health;
· a service provider which could generate significant systemic risk;
· a critical entity with importance at national or regional level; or
· a public administration entity.
The directive also applies to entities of all sizes listed under the Critical Entities Resilience directive and those that provide domain name registration services.
The directive does not apply to public administration entities that carry out activities in certain areas including national security, public security, defence, or law enforcement. Member states may also exempt specific entities in the above areas.
Annex I covers energy, transport, banking and financial market infrastructure, healthcare, water and sewage, digital infrastructure and certain business-to-business information and communications technology services.
Annex II includes postal and courier services, waste management, some types of manufacturing and digital platform providers for online market places, search engines and social networks.
The scope of this legislation is detailed and the above is a summary only.
The directive introduces a set of mandatory measures that each in-scope entity must address to prevent cybersecurity incidents. These include policies on risk analysis and information system security, cyber hygiene practices, cybersecurity training, and multi-factor and continuous authentication solutions.
To ensure that organisations are aware of their NIS2 obligations and acquire knowledge on cybersecurity risk prevention, management bodies are required to attend training sessions and to pass this information and knowledge on to their employees.
Member states must implement effective, proportionate and dissuasive sanctions. Fines for non-compliance can reach the greater of 2% of the organisation's worldwide annual turnover or €10 million for essential entities, and the greater of 1.4% of annual turnover or €7 million for important entities; a worked illustration follows below.
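As a worked illustration of these "whichever is greater" ceilings (a sketch based only on the figures summarised above, not on any national implementing law):

    # Illustrative only: maximum fine ceilings under NIS 2 as summarised above.
    def max_fine_eur(worldwide_annual_turnover_eur, essential):
        if essential:
            return max(0.02 * worldwide_annual_turnover_eur, 10_000_000)
        return max(0.014 * worldwide_annual_turnover_eur, 7_000_000)

    # Example: an essential entity with EUR 400m turnover faces a ceiling of
    # max(EUR 8m, EUR 10m) = EUR 10m; at EUR 600m turnover the 2% limb
    # (EUR 12m) becomes the ceiling.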
The new directive clarifies the scope of reporting obligations with more specific provisions regarding the reporting process, content and timelines. In particular, entities affected by an incident that has a significant impact on the provision of their services must submit an early warning to their computer security incident response team (CSIRT) or, where applicable, to the competent authority within 24 hours of becoming aware of the incident, followed by a fuller incident notification within 72 hours. A final report must be provided within a month of the incident notification.
One of the most novel aspects of this directive is the requirement for member states to promote the use of innovative technologies, including artificial intelligence, to improve the detection and prevention of cyber-attacks, which would allow for a more efficient and effective allocation of resources to combat these threats.
Member states were obliged to legislate for appropriate measures to implement the directive by 17 October 2024. National implementing legislation can be tracked from this page. As of 31 October 2024, only Belgium, Croatia, Hungary, Italy, Latvia and Lithuania have adopted national legislation. Organisations in those countries must therefore act swiftly to ensure compliance before the respective national laws come into force. Organisations in other member states should closely monitor implementation of national legislation and prepare in advance.
On 17 October 2024, the Commission adopted the first implementing regulation, which details the cyber security risk management measures and the criteria for when an incident will be considered significant under the directive. The implementing regulation is expected to be published in the Official Journal shortly and will enter into force 20 days later.
Insight: Cyber Security | UK Regulatory Outlook October 2024
Insight: Implementation deadline for NIS2 and new EU cybersecurity compliance regime draws nearer
Insight: What EU businesses need to know about NIS2 and cybersecurity compliance
This regulation contains sweeping reforms to the EU product safety regime, including new provisions and updates to address new technologies and online sales channels.
13 December 2024
Became law on 10 May 2023 but has an 18-month transition period until 13 December 2024.
Regulation (EU) 2023/988 of 10 May 2023 on general product safety, available here.
Anyone placing consumer products on the market in the EU.
The regulation contains sweeping reforms which will significantly change the way that both modern and traditional products are produced, supplied and monitored across the EU. It raises the requirements of safety and adds sophistication in terms of compliance obligations for all non-food products and the businesses involved in manufacturing and supplying them to end users.
The GPSR also adopts an improved definition of a "product" and includes new factors to take into account when assessing safety, so that the EU's product safety regime adequately addresses modern technologies.
The GPSR makes clear that connected devices are considered to be products within its scope and will be subject to the general safety requirement. In addition, when assessing whether a product is safe, economic operators will have to take into account the effect that other products might have on their product, its cybersecurity features, and any evolving, learning and predictive functionalities of the product.
In addition, the GPSR seeks to address the increasing digitalisation of supply chains, in particular the growth of e-commerce, and places a number of obligations on online marketplaces, such as:
· cooperation with market surveillance authorities if they detect a dangerous product on their platform, including directly contacting affected consumers who bought through their platform in the event of a product recall;
· ensuring there is a single point of contact in charge of product safety;
· requiring contact and traceability information to be displayed alongside listings and any relevant product identifiers, warnings or safety information;
· requiring steps to be taken to identify dangerous products made available on its online marketplace; and
· requiring non-compliant traders to be suspended from their platforms.
Market surveillance authorities will also be able to order online platforms to remove dangerous products from their platforms or to disable access to them.
To help you prepare, we have recently launched a dedicated microsite providing a series of helpful resources, including the Insights below:
· 10 things businesses can do now to prepare for compliance with the GPSR
· Navigating the EU's new GPSR: obligations are introduced for online marketplaces - Osborne Clarke
This Act introduces a new regulatory regime which imposes various statutory duties on online content-sharing platforms, to restrict both illegal content and content which, while not illegal, may cause harm to child users.
The Online Safety Act (Act) received Royal Assent on 26 October 2023, although much of the Act is not in effect yet as secondary legislation, codes of practice and regulatory guidance from Ofcom need to be put in place first. Following the general election in 2024, the new government indicated that it intended to conduct a review of the implementation of the Act and possibly of the Act itself, but nothing is yet confirmed.
In October 2024, Ofcom published an updated roadmap, setting out its progress in implementing the Act since it became law and presenting its plans for 2025. See our Insight for details.
Secondary legislation made under the Act
The Online Safety Act 2023 (Commencement No. 2) Regulations 2023 and the Online Safety Act 2023 (Commencement No. 3) Regulations 2024 have brought into force some of Ofcom's powers and certain new offences, but the main duties relating to protecting online users from harmful content are still not yet in effect.
The Online Safety (List of Overseas Regulators) Regulations 2024 have also been made, outlining overseas regulators with whom the UK online safety regulator, Ofcom, may co-operate, as set out in section 114 of the Act.
The Online Safety Act 2023 (Pre-existing Part 4B Services Assessment Start Day) Regulations 2024, which came into force on 22 May 2024, specify the "assessment start day" for Video-Sharing Platforms (VSPs) as 2 September 2024. VSPs will have to complete the assessments set out in the Act's sections 9 (illegal content risk assessment duties), 11 (children's risk assessment duties), 14 (assessment duties: user empowerment) and 35 (children's access assessments), starting from this date. VSPs have three months from the date that Ofcom publishes the relevant guidance to complete the assessments.
In September 2024, the government published the draft Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2024, which will make the offence of sharing intimate images without consent under the Sexual Offences Act 2003 a "priority offence" under the Act.
Enacted, partially in force.
Online Safety Act 2023, available here.
Businesses whose services facilitate the sharing of user generated content or user interactions, particularly social media, messaging services, search and online advertising. The Act regulates those offering services to UK users, regardless of the service provider's location.
The Act aims to protect children and to deal with illegal online content. It was initially proposed that measures on "legal but harmful" content would also be included but these have now been removed from the Act except in the case of certain harmful content hosted on services likely to be accessed by children. The Act seeks to strike a balance between ensuring safe online interactions for children and adult users, and freedom of speech.
Key provisions under the Act include:
· duties on regulated services to protect all users from illegal content;
· duties on regulated services to protect child users from certain types of legal but harmful content;
· various duties to empower users to control their exposure to harmful content;
· duties to protect free speech and fundamental rights;
· duties to protect consumers from fraudulent advertising;
· powers for Ofcom to require regulated services to use technology to proactively identify and remove content relating to terrorism or child abuse;
· new communications offences and other offences relating to online behaviour; and
· extensive investigatory and enforcement powers for Ofcom: services which breach their obligations under the Act can be liable for fines of up to £18 million or 10% of their global turnover (whichever is larger); companies (and in certain circumstances their senior managers) may also be criminally liable for breaches.
In order to ensure that the online safety regime is flexible and "future proof", a number of these provisions are to be dealt with in secondary legislation, and by codes of practice and guidance to be developed by the designated regulator, Ofcom.
Ofcom has devised a consultation process to inform its codes of practice and guidance, which comprises four major consultations together with various sub-consultations, on numerous aspects of the new regime. The process began in November 2023, with a first major consultation on draft codes of practice and guidance on illegal harms duties, which closed on 23 February 2024.
Ofcom then published a consultation on specific duties for service providers that display or publish pornographic content on their online services, which closed on 5 March 2024.
Ofcom has also published a call for evidence to inform its further consultation on draft codes of practice and guidance on the additional duties that will apply to "categorised services" under the Act. The Act introduces a system categorising some regulated online services, based on their key characteristics and whether they meet certain numerical thresholds, as category 1, 2A or 2B services. This call for evidence closed on 20 May 2024. Ofcom expects to publish the consultation in early 2025.
Ofcom has also published its advice to the government on setting the numerical thresholds for categorised services, which was sent to the Secretary of State on 29 February 2024. Based on this advice, the Secretary of State will set out the thresholds in secondary legislation and Ofcom will, once it has assessed the services against the final thresholds, publish a register of categorised services, as well as a list of emerging category 1 services. The Act is intended to create a "triple shield" of protection for users. Service providers must remove illegal content, remove content in breach of their own terms and conditions, and give adults more choice over what to engage with.
In May 2024, Ofcom launched its second major consultation on draft guidance and codes of practice on protecting children from legal, but harmful online content, such as pornography, content relating to suicide, self-harm and eating disorders, content that is abusive and is targeted at, or incites hatred against, people based on protected characteristics, bullying, and content containing serious violence. This consultation closed on 17 July 2024, and Ofcom expects to publish its final statement and documents in spring 2025.
At the end of May 2024, the government published guidance to Ofcom on determining the fees that will be payable by regulated services under the Act. The fees paid by regulated service providers whose "qualifying worldwide revenue" (QWR) meets or exceeds a certain revenue threshold and who are not otherwise exempt will fund the new online safety regime. The government will retain oversight of the regulatory costs of the regime by setting Ofcom's total budget cap.
Ofcom then launched a consultation in October 2024 on draft secondary legislation to define QWR, which will be used to calculate not just the fees payable by regulated services, but also the maximum penalty that Ofcom will be able to levy against providers for breaches of the Act. Ofcom is proposing to define QWR as the total revenue of a provider referable to the provision of regulated services anywhere in the world. Where the provider is found liable together with a group undertaking, the QWR will be calculated on the worldwide revenues of the entire group, whether or not they relate to regulated services. Ofcom proposes setting a £250 million revenue threshold for fees and exempting providers with UK referable revenue under £10 million (a simplified sketch of these proposals follows below). The consultation closes on 9 January 2025.
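For illustration only, Ofcom's proposed approach could be sketched as follows. This is our own simplification of the consultation proposals as summarised above, combined with the penalty maximum noted earlier; the figures may change before the final rules are made.

    # Illustrative sketch only: Ofcom's proposed QWR-based fee rules and the
    # Act's penalty maximum, as summarised above. Thresholds may change.
    FEE_THRESHOLD_GBP = 250_000_000        # proposed QWR threshold for fee liability
    UK_REVENUE_EXEMPTION_GBP = 10_000_000  # proposed UK referable revenue floor

    def liable_for_fees(qwr_gbp, uk_referable_revenue_gbp):
        return (qwr_gbp >= FEE_THRESHOLD_GBP
                and uk_referable_revenue_gbp >= UK_REVENUE_EXEMPTION_GBP)

    def max_penalty_gbp(qwr_gbp):
        # Fines are capped at the greater of GBP 18m or 10% of QWR
        return max(18_000_000, 0.10 * qwr_gbp)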
The fee regime is expected to be in place by the 2026/27 financial year. Until then, the government is funding Ofcom's initial costs. Additional fees will then be charged over an initial set period of consecutive years to recoup those set-up costs.
In July 2024, Ofcom published a consultation on its draft transparency reporting guidance, which is designed to help categorised services comply with the requirement to publish transparency reports. These reports must set out the information requested by Ofcom in transparency notices, which Ofcom must issue to every provider of a categorised service once a year. Ofcom also published a consultation on its information gathering powers under the Act.
In August 2024, Ofcom published a consultation on strengthening its draft illegal harms codes of practice and guidance to include animal cruelty and human torture, because these categories of content are not fully covered by the list of "priority offences" in the Act. The "priority offences", listed in Schedule 7 of the Act, relate to the most serious forms of illegal content. Regulated online platforms will not only have to take steps to prevent such content from appearing, but will also have to remove it when they become aware of it. By including animal cruelty and human torture in its codes and guidance, Ofcom aims to ensure that providers understand that they will also have to remove this type of content.
In September 2024, the Secretary of State for Science, Innovation and Technology, Peter Kyle, wrote to Ofcom asking how it plans to monitor and address the issue of "small but risky" online services. In its response, Ofcom confirmed its awareness of the issue and said that tackling these services is a "vital part" of its regulatory mission. The regulator confirmed that it has already developed plans to take early action against these services.
In October 2024, the UK and US governments signed a joint statement on online safety, calling for platforms to go "further and faster" to protect children online by taking "immediate action" and continually using the resources available to them to develop innovative solutions, while ensuring there are appropriate safeguards for user privacy and freedom of expression. See this Regulatory Outlook for more.
The UK Online Safety Act: Top 10 takeaways for online service providers
UK Online Safety Act: Ofcom launches its first consultation on illegal harms
UK's Online Safety Act is a seismic regulatory shift for service providers
Online Safety Act 2023: time to prepare as UK's Ofcom confirms start of illegal content duties
The last UK Conservative government did not plan new law around AI but issued high-level principles and guidance to steer how existing regulators should use their existing powers to deal with AI.
The consultation closed on 21 June 2023. The government's response was published on 6 February 2024.
In the policy formation stage.
"A pro-innovation approach to AI regulation" (March 2023) available here
Response to consultation (February 2024) available here
All businesses where their development, supply or use of AI falls within the scope of existing UK law and regulation.
Overview
The last UK Conservative government's white paper on artificial intelligence (AI) regulation set out the UK's policy approach to regulating AI. Essentially, the government confirmed its approach in the consultation response of February 2024. It did not plan new law in this area, but issued "high-level principles" to guide existing regulators in dealing with AI within the scope of their existing powers and jurisdiction. Its intention was that such an approach would be agile and proportionate.
The white paper's definition of AI identifies it by its key characteristics of being "adaptive" and "autonomous". There is no reference to particular types or classifications of AI (in contrast to the prohibited and high-risk categories of the EU's AI Act); instead, the definition focuses on machine learning and deep learning systems that are trained and can infer patterns across data. The white paper anticipates that regulators will expand on this loose definition with a more detailed interpretation in their area as needed.
The former Conservative government said that it would monitor and evaluate the emerging regulatory landscape around AI, identifying any gaps in regulatory coverage and supporting coordination and collaboration between the regulators.
Five overarching "high level principles" were set to shape regulatory policy and enforcement:
· safety, security and robustness;
· appropriate transparency and explainability;
· fairness;
· accountability and governance; and
· contestability and redress.
New "initial" guidance was published to support regulators, with expansions of the guidance planned for later in 2024.
The consultation response confirmed that these principles would not be put on a statutory footing unless that proved necessary in the future. Whether the new Labour government agrees remains to be seen.
Regulators' strategic approaches to AI
Various economic and sector regulators were directed by the former Conservative government to publish their AI strategy by 30 April 2024. The reports on their respective strategic approaches are available here. The length, level of detail and depth of content varies widely between the reports with some regulators (such as the Competition and Markets Authority) having already undertaken significant work in this field and others (such as Ofgem) apparently still in the early stages of building their capacity and understanding around AI.
We understand that these reports were intended to feed into a "gap analysis" by the government of the various regulatory frameworks that will need to deal with AI.
One area where there is a known regulatory gap is in relation to AI and the workforce. Although regulators such as the Equality and Human Rights Commission are often active in relation to workers' rights, there is no regulator overseeing employment law overall. By contrast, AI systems deployed in relation to aspects of the employment lifecycle, from recruitment to work allocation to employee appraisal, are listed as "high risk" under the EU's AI Act. Perhaps in recognition of that lacuna, the former UK Conservative government issued guidelines on the use of AI in recruitment (available here).
Other initiatives
The former government progressed its plan to create a central government function to support the regulation of AI by existing regulators, including ensuring coherence across the regulatory landscape, monitoring how the proposed approach develops in practice and economy-wide horizon-scanning for emerging trends and opportunities around AI. The Office for AI was folded into the relevant policy unit within the Department for Science, Innovation and Technology (DSIT). A minister with responsibility for AI was designated within each government department.
No action is currently planned in relation to addressing the acknowledged lack of clarity around liability in the AI supply chain.
The former government's consultation response discussed how the challenge of highly capable general purpose AI should be addressed but again elected not to legislate in the near term.
The AI and Digital Hub
The white paper confirmed that a sandbox would be created to support AI innovation, which was then launched as the AI and Digital Hub. This is a multi-regulator sandbox, coordinated by the Digital Regulation Cooperation Forum (DRCF) (comprising the Competition and Markets Authority, the Information Commissioner's Office, Ofcom and the Financial Conduct Authority). It offers free informal advice to businesses on how regulation (across the remits of the four DRCF regulators) applies to their innovative digital or AI projects and is initially a one-year pilot.
The DRCF planned to respond on eligibility within 5 working days and, for eligible queries, to provide a substantive answer within 8 weeks. Advice is not legally binding and carries no endorsement or certification of compliance.
What does the UK's white paper on AI propose and will it work?
EU delays compliance deadlines for the AI Act
What will the next UK government do about regulating and supporting AI?
What is the latest on the UK government's approach to regulating AI?
The Data Act creates a new framework for sharing data for the private and public sector, including creating new rights of access to IoT data and facilitating switching between cloud providers.
The legislation entered into force on 11 January 2024.
It will become fully applicable on 12 September 2025, except for:
• the obligation to design connected products and services so that data is accessible, which will be applicable to such items placed on the market after 12 September 2026; and
• the provisions on contractual terms and conditions in private sector data contracts, which will not be applicable until 12 September 2027 in relation to contracts concluded on or before 12 September 2025, provided that the contract in question is of indefinite duration or is due to expire at least 10 years from 11 January 2024.
In force. Not yet applicable.
Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act), available here.
The Data Act is wide-ranging legislation that will impact on a correspondingly wide range of businesses and individuals, including:
• Those manufacturing connected products which are placed on the market in the EU;
• Those providing related services (which make the product behave in a particular way) in respect of those products, as well as EU-based users of those products and services;
• Those making data available to data recipients in the EU, and the recipients;
• Public sector bodies and EU institutions wanting access to business data for exceptional need;
• Those providing cloud data processing services to EU customers; and
• Parties using smart contracts.
The Data Act is intended to boost the availability of data to support a vibrant data economy in the EU. It will have a significant impact on some businesses. It will operate to extend data regulation far beyond the current focus on personal data and data privacy. The intention is to open up access to data, particularly Internet of Things (IoT) and machine-generated data, which is often controlled by the entity that gathered it, and is inaccessible to the entity whose activities generated it. The provisions of the Data Act will be without prejudice to the General Data Protection Regulation regime, privacy rights and rights to confidentiality of communications, all of which must be complied with in acting on the requirements in the Data Act.
You can familiarise yourself with the European Commission's Data Act Explained document here and Frequently Asked Questions (FAQs) on the Data Act here.
Currently, contractual terms and conditions typically decide whether or not data collected by IoT systems, industrial systems and other connected devices can be accessed by the business or person whose activities have generated the data – the user. It is not unusual that the collected data is held by the device or service provider and is not accessible by the user.
The Data Act will create an obligation to design connected products and related services so that data that they collect is available to the user. Users will be entitled to access the data free of charge and potentially in real time. Data must either be directly accessible from the device or readily available without delay.
Users will, moreover, be able to pass the data to a third party (which may be the data holder’s competitor) or instruct the data holder to do so; data holders can only pass user data to a third party on the instructions of the relevant user. The Data Act will impose restrictions on how the third party can use the received data.
These provisions include measures designed to prevent valuable trade secrets from being revealed, and to ensure that these rights are not used to find out information about the offerings of a competitor. They also include measures to prevent unfair contractual terms and conditions being imposed on users by data holders. Data holders are able to impose a non-discriminatory, reasonable fee for providing access (which can include a margin) in business-to-business relationships. Where data is being provided to an SME, the fee must not exceed the costs incurred in making the data available.
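Purely as an illustration of how this fee cap operates (the function and parameter names are our own, and real cost and margin assessments are far more nuanced):

```python
def max_data_access_fee(costs_of_making_available: float,
                        reasonable_margin: float,
                        recipient_is_sme: bool) -> float:
    """Sketch of the Data Act fee cap: in B2B relationships a
    non-discriminatory, reasonable fee (which may include a margin) is
    permitted, but a fee charged to an SME must not exceed the costs
    incurred in making the data available. Illustrative only."""
    if recipient_is_sme:
        return costs_of_making_available  # cost-capped for SME recipients
    return costs_of_making_available + reasonable_margin
```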
In relation to contracts between businesses that contain data-related obligations more generally, the Data Act outlaws terms and conditions that deviate grossly from good commercial practice, contrary to good faith and fair dealing, if imposed unilaterally. Within that prohibition, it sets out various specifically blacklisted contractual terms, which will be unenforceable.
The Data Act makes provision for public sector bodies to gain access to private sector data where they can demonstrate an exceptional need to use the data to carry out statutory duties in the public interest. This might be to deal with a public emergency, or to fulfil a task required by law where there is no other way to obtain the data in question. These provisions do not apply to criminal or administrative investigations or enforcement.
The Data Act seeks to remove perceived obstacles to switching between cloud services providers, or from a cloud service to on-premise infrastructure, or from a single cloud provider to multi-provider services. This part of the Data Act includes provisions concerning the contractual terms dealing with switching or data portability, which must be possible without undue delay, with reasonable assistance, acting to maintain business continuity, and ensuring security, particularly data security. It provides for maximum time limits and detailed information requirements. Service providers are required, more generally, to make information readily available about switching and porting methods, as well as formats including standards and open interoperability specifications. The Data Act imposes a general duty of good faith on all parties involved to make switching effective, timely and to ensure continuity of service.
A further limb of the Data Act is to require cloud service providers to prevent international and non-EU governments from accessing or transferring non-personal data held in the EU, where it would be in breach of EU or member state law.
To ensure that data access rights are not undermined by technical compatibility problems, the Data Act creates obligations around interoperability for data and data-sharing mechanisms. It requires transparency around key aspects and features of datasets, data structures, and access mechanisms (such as application programming interfaces). Provision is made for standards to be developed in relation to these requirements.
The Data Act also imposes interoperability requirements between cloud services, again providing for open interoperability specifications and harmonised standards to support switching and portability, as well as parallel processing.
Finally, the Data Act covers smart contracts used for executing a data-sharing arrangement (but not smart contracts with other functions). It lays down requirements for robustness and access control, safe termination and interruption (a "kill switch"), data archiving and continuity, as well as measures to ensure consistency with the terms of the contract that the smart contract is executing. Provision is made for standards to be developed to meet these requirements. Smart contracts must be self-assessed for conformity and a declaration of EU conformity made.
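As a purely conceptual sketch of how these requirements fit together (written in Python rather than a smart contract language, with all names invented; the Data Act itself does not prescribe any particular implementation):

```python
class DataSharingSmartContract:
    """Toy model of the Data Act's essential requirements for smart
    contracts executing data-sharing agreements: access control, safe
    termination (a "kill switch") and data archiving/continuity."""

    def __init__(self, authorised_parties: set[str]):
        self.authorised = authorised_parties  # rigorous access control
        self.terminated = False
        self.archive: list[dict] = []         # archiving and continuity

    def execute_transfer(self, caller: str, record: dict) -> None:
        if self.terminated:
            raise RuntimeError("contract terminated - no further execution")
        if caller not in self.authorised:
            raise PermissionError("caller is not an authorised party")
        self.archive.append(record)           # keep an auditable record

    def kill_switch(self, caller: str) -> None:
        """Safe termination: halts further execution without losing the
        archived transaction data."""
        if caller not in self.authorised:
            raise PermissionError("only authorised parties may terminate")
        self.terminated = True
```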
The Data Act will be enforced at member state level by an appointed regulator. This could be an extension of jurisdiction for an existing regulator, or a new one could be created. Powers for member state regulators can include sanctions.
EU Data Act proposal: Commission plans comprehensive right to data access
What are the implications of the EU Data Act for smart contract operators?
This regulation introduces rules on digital operational resilience to tackle cyber risks and harmonise IT requirements for firms in the financial services sector, and those supplying relevant services to them.
DORA entered into force on 16 January 2023 and will apply from 17 January 2025.
In force. Not yet applicable.
Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector, available here.
DORA will apply to a broad range of financial entities regulated in the European Economic Area (EEA), including banks, payment and e-money institutions, investment firms, fund managers, and cryptoasset service providers.
The rules will also catch third-party providers of information and communications technology (ICT) services, including providers of cloud computing services, software, data analytics, and data centres, where the European authorities designate them as "critical".
Firms within the scope of DORA must be able to withstand, respond to and recover from ICT incidents. Important requirements for firms will include:
• Having internal governance and control frameworks that allow them to manage ICT risks effectively and prudently
• Having a robust and well-documented ICT risk management framework in place that allows them to address ICT risks quickly and comprehensively
• Reporting major ICT-related incidents to the relevant regulator
• Regularly carrying out digital operational resilience testing, including a range of assessments, methodologies, practices and tools
• Managing ICT third-party risk within their ICT risk management framework
As noted, DORA entered into force on 16 January 2023 and will apply from 17 January 2025.
Under DORA, the European Supervisory Authorities (the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA) and the European Securities and Markets Authority (ESMA), together the ESAs) will develop 13 Regulatory Technical Standards (RTS) and Implementing Technical Standards (ITS) setting out details and guidance on key provisions and requirements within DORA. Financial entities must be fully compliant with, and have implemented, these standards in their ICT systems by 17 January 2025.
On 17 January 2024, the ESAs published the first set of joint draft technical standards on:
• RTS on ICT risk management framework;
• RTS on the criteria for classifying ICT-related incidents;
• RTS on ICT services policies supporting the critical or important functions provided by third party ICT providers; and
• ITS setting out the templates to be maintained by financial entities in relation to contractual arrangements with third party service providers.
The first batch of final draft technical standards were adopted by the European Commission in February and March 2024. The Commission Delegated Regulations (CDRs) setting out the RTS above were published in the Official Journal of the EU in June 2024, and entered into force on 15 July 2024.
The second batch of draft technical standards were published on 17 July 2024:
• RTS and ITS on the content, format, timelines, and templates for incident reporting;
• RTS on the harmonisation of oversight activities;
• RTS specifying the criteria for determining the composition of the joint examination team;
• RTS on threat-led penetration testing;
• Guidelines on aggregated costs and losses from major ICT-related incidents; and
• Guidelines on cooperation of oversight activities between the ESAs and competent authorities.
The ESAs published the final joint report on the draft RTS on subcontracting ICT services supporting critical or important functions on 26 July 2024.
On 23 and 24 October 2024, the Commission adopted the RTS and ITS specifying the content and time limits for the notification of, and report on, major ICT-related incidents, and cyber threats, as well as the RTS on the harmonisation of conditions for oversight activities. These RTS and ITS will enter into force on the twentieth day following publication in the Official Journal.
On 15 October 2024, the ESAs published an opinion and suggested amendments to the draft ITS on registers of information in response to the Commission's rejection of the draft submitted. The ESAs urged the Commission to issue its final decision on the use of identifiers and to adopt the draft ITS swiftly, as the hold-up risks delaying the designation of critical ICT third-party service providers by the ESAs in 2025.
The implementing and delegated acts can be tracked from this page.
EU financial services firms face tougher cybersecurity rules in two years
This is an agreement between over 130 members of the OECD to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
1 January 2024 for certain elements of Pillar Two in the UK (the "Multinational top-up tax" and "Domestic top-up tax") and 1 January 2025 (for the "undertaxed profits rule"). Implementation of Pillar One is still awaited in the UK.
The provisions for the main implementation of Pillar Two are enacted (although the undertaxed profits rule will be introduced by a later Finance Bill with effect for accounting periods commencing on or after 31 December 2024). Implementation of Pillar One is still awaited.
See the provisions dealing with "Multinational top-up tax", "Domestic top-up tax" and "Undertaxed profits tax" in Parts 3 and 4 of Finance (No.2) Act 2023, available here.
All industries impacted by digitalisation, with certain exceptions (e.g. financial services), but focused on large multinational enterprises (MNEs).
Agreement has been reached with over 130 members of the OECD/G20 Inclusive Framework to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
Implementation of Pillar One is still some way off. It will be implemented by way of a multilateral convention (published in October 2023), and will not come into force until the multilateral convention has been ratified by a critical mass of countries. The introduction of Pillar One will be coordinated with the removal of all Digital Services Taxes and other relevant similar measures already implemented in jurisdictions.
The OECD framework for Pillar Two will operate on a country-by-country basis, with implementation dates varying between countries. The UK has enacted primary legislation in the Finance (No.2) Act 2023 to implement certain elements of Pillar Two in the UK from 1 January 2024 and the undertaxed profits rule will be introduced by a later Finance Bill with effect for accounting periods commencing on or after 31 December 2024.
The proposal is split into two pillars:
(i) Pillar One - this involves a partial reallocation of taxing rights over the profits of the largest and most profitable MNEs (those with revenues exceeding EUR20 billion and profitability greater than 10%) to the jurisdictions where consumers are located. So, it is about where they pay tax. It is hoped that this will resolve longstanding concerns that the international corporate tax framework has not kept pace with the digital economy and how highly digitalised businesses generate value from active interaction with their users. Under the proposal, 25% of an MNE's "residual profit" above 10% of revenue would be allocated to market jurisdictions using a revenue-based allocation key, recognising sustained and significant involvement in a market irrespective of physical local presence (a simplified worked example follows below).
(ii) Pillar Two - sets a new minimum corporate tax rate of 15% for large MNEs (those with a combined group turnover of more than EUR750 million) on global profits and provides a new set of rules, known as the global anti-base erosion (GloBE) rules, which expand jurisdictions' taxing rights so that group profits are taxed at the global minimum rate worldwide. If a jurisdiction does not adequately tax profits, then other jurisdictions, in particular shareholder jurisdictions or source jurisdictions, can seek to tax such profits.
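To make the Pillar One arithmetic concrete, here is a simplified sketch (the scope tests and the market-jurisdiction allocation key are heavily simplified, and the function name and figures are our own illustration):

```python
def pillar_one_amount_a(revenue_eur: float, profit_eur: float) -> float:
    """Simplified "Amount A" calculation: for in-scope MNEs (revenue above
    EUR 20 billion and profitability above 10%), 25% of residual profit
    (profit in excess of 10% of revenue) is reallocated to market
    jurisdictions. Illustrative only; the actual rules are far more
    detailed."""
    if revenue_eur <= 20e9 or profit_eur / revenue_eur <= 0.10:
        return 0.0  # outside the scope of Pillar One
    residual_profit = profit_eur - 0.10 * revenue_eur
    return 0.25 * residual_profit

# Example: revenue of EUR 50bn and profit of EUR 10bn (20% profitability)
# gives residual profit of EUR 5bn, of which 25% (EUR 1.25bn) is
# reallocated to market jurisdictions via the allocation key.
print(pillar_one_amount_a(50e9, 10e9) / 1e9)  # approximately 1.25
```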
These regulations introduce new standardised rules for collecting and reporting information about sellers and their income from digital platform activities to tax authorities, and for exchanging information between tax authorities.
1 January 2024 in the UK (with the first reports due by 31 January 2025)
Regulations enacted (and came into force on 1 January 2024)
The Platform Operators (Due Diligence and Reporting Requirements) Regulations 2023, available here.
All online platforms and those who sell through them.
The UK's implementing regulations include two categories of "Excluded Platform Operators" which are platform operators whose entire business model is such that the platform operator either:
· does not allow sellers to derive a profit from the consideration, or
· has no "reportable sellers".
In each case, the platform operator may only rely on this exemption if it gives notice to HMRC in the form and manner specified in directions given by HMRC.
These rules introduce standardised requirements for collecting and reporting relevant information about sellers of goods and relevant services – currently personal services (including the provision of transportation and delivery services) and the rental of immoveable property – and their income from digital platform activities to tax authorities, and for exchanging that information between tax authorities. Under the rules, platforms must (an illustrative sketch of these data points follows the list):
· collect certain details about their sellers, including information to accurately identify: who the seller is; where they are based; how much they have earned on the platform over an annual period
· verify the seller’s information to ensure it is accurate
· report the information, including the seller’s income, to the tax authority annually by 31 January
· provide that information to sellers, to help them complete their tax returns.
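Purely to visualise the data points involved, a reportable seller record might be sketched as follows (the field names are our own illustration, not terms taken from the regulations):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReportableSellerRecord:
    """Illustrative sketch of the information a platform operator must
    collect, verify and report annually by 31 January."""
    seller_name: str                         # who the seller is
    tax_identification_number: str           # used to identify the seller
    jurisdiction_of_residence: str           # where the seller is based
    annual_consideration: float              # income earned on the platform
    property_address: Optional[str] = None   # for rentals of immoveable property
```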
There are penalties for non-compliance. Tax authorities will exchange the information with the tax authority of the jurisdiction in which the seller is resident (or in which the rental property is located) to ensure local tax compliance.
This regulation creates a risk-based tiered regulatory regime for AI tools. The riskiest tools are banned outright, while high risk tools are subject to an onerous compliance regime. Additional provisions will apply to general-purpose AI.
The AIA entered into force on 1 August 2024.
The provisions of the AIA will be applicable progressively over the next few years:
• The prohibitions on specified categories of banned AI, as well as the general provisions dealing with scope and definitions, will be applicable after six months – 2 February 2025.
• The provisions on general-purpose AI, as well as the notification, governance, penalties and confidentiality provisions, will be applicable after 12 months – 2 August 2025.
• Most other provisions, including the rules for high risk models defined in Annex III, will be applicable after 2 years – 2 August 2026.
• Rules for high risk AI contained in systems or products that are already subject to EU product safety legislation, as listed in Annex I, (see below) will be applicable after 3 years – 2 August 2027.
Enacted. Provisions become applicable progressively as set out above.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available here.
The AIA will apply to all those who provide, deploy, import or distribute AI systems in the EU across all sectors. It will apply to non-EU providers who place AI systems on the market in the EU or put them into service in the EU, and also providers and deployers of AI systems located outside the EU, but where the output of the AI system is used in the EU.
As well as being in force in all EU member states, the AIA will in due course come into force in Norway, Iceland and Liechtenstein (as countries in the European Economic Area).
The AIA defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The AIA takes a risk-based approach, with the regulatory framework depending on both the level of risk created by an AI application and the nature of the regulated entity's role. AI deemed high risk is regulated more strictly, and those who create and distribute AI systems are regulated more strictly than organisations which deploy systems created by third parties. In addition, the AIA creates a regulatory framework for general-purpose AI.
"Risk" is assessed in the AIA by reference to both harm to health and safety, and harm to fundamental human rights and is a combination of the probability of occurrence of harm and the severity of that harm.
The tiers of AIA regulation are prohibited AI, high risk AI, transparency requirements for certain forms of AI, and unregulated AI. Two further but distinct tiers apply to general-purpose AI.
Some uses of AI are considered to pose such a significant risk to health and safety or fundamental rights that they should be banned. Banned applications include:
• AI systems that use subliminal, manipulative or deceptive techniques intended to distort someone's behaviour materially by impairing their ability to make an informed decision and so causing them to take a decision that they would not otherwise have taken, causing significant harm;
• AI that exploits vulnerabilities of age, disability, or social or economic circumstances in order to distort a person's behaviour, causing significant harm;
• social scoring based on behaviour or personal characteristics where the scoring results in detrimental treatment of the person in question in a social context unrelated to the context in which the scoring data was originally generated or collected, or which is unjustified or disproportionate;
• AI systems used to predict the likelihood of a person committing a criminal offence, based solely on profiling that person or an assessment of their personality traits;
• AI systems that create or expand facial recognition databases through untargeted web-scraping of the internet or CCTV footage;
• emotion inference systems used in the workplace or educational settings unless for medical or safety reasons;
• biometric categorisation around sensitive characteristics (including race, political views, trade union memberships, religious or philosophical beliefs, sex life or sexual orientation); and
• real time remote facial recognition systems used in publicly accessible spaces for law enforcement, with exceptions.
Some AI applications are considered to pose a potentially high risk to health and safety or to fundamental rights, but not to the point that they should be prohibited. These high risk systems fall into two broad categories.
Firstly, AI will be classified as "high risk" for AIA purposes where it is used in safety systems or in products that are already subject to EU product safety legislation, including transport, other vehicles or machinery, and products such as toys, personal protective equipment and medical devices (the Annex I high risk categories).
Secondly, there is a list of specified "high risk" AI systems (in Annex III to the AIA). The detail of this list includes:
• permitted remote biometric identification systems (excluding systems solely used to confirm someone's identity);
• AI used for biometric categorisation based on sensitive or protected attributes or characteristics;
• emotion recognition systems;
• AI systems used as safety components in critical physical and digital infrastructure;
• AI systems used in an educational or vocational context, including determining access to training institutions, learning evaluation systems, educational needs appraisal systems, and systems used to monitor behaviour during exams;
• workplace AI systems for recruitment, applications appraisal, awarding and terminating employment contracts, task allocation and performance appraisal;
• assessment of eligibility for public benefits and services;
• credit checks (excluding fraud prevention systems);
• systems used to assess risk and set prices for life and health insurance;
• systems for the despatch or prioritisation of emergency services; and
• various uses in law enforcement, immigration, asylum, judicial and democratic processes.
However, high risk regulation will not apply to AI falling in the Annex III categories where "it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making". This will be an issue for self-assessment. Where there is no such actual risk, the system will not have to comply with the AIA high risk regulatory regime.
The AIA imposes onerous compliance obligations on high risk AI systems, centred on a "continuous, iterative" risk management system for the full lifecycle of the AI system. Compliance will require meeting the AIA requirements for technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
In addition, the AIA will impose extensive obligations concerning the governance and curation of data used to train, validate and test high risk AI. The focus is on ensuring that such data sets are relevant, sufficiently representative and, to the best extent possible, reasonably free of errors and complete in view of the intended purposes. The data should have "appropriate statistical properties", including as regards the people in relation to whom the AI system will be used. Data sets must reflect, to the extent appropriate for their intended use, the "specific geographical, contextual, behavioural or functional setting" within which the AI system will be used.
The burden of high risk AI compliance will apply in different ways to businesses at different points in the AI supply chain – providers (developers and businesses who have had AI developed for them to put onto the market), product manufacturers whose products incorporate AI, importers, distributors and deployers.
Some lower risk AI, outside the high risk regime, will nevertheless be subject to obligations, mainly relating to transparency. The prime concern is that users must be made aware that they are interacting with an AI system, where this is not obvious from the circumstances and context. These provisions will apply to:
• chatbots and other systems that interact directly with individuals;
• systems producing synthetic audio, image, video or text content, which must be marked (in machine-readable format) as having been artificially generated or manipulated;
• emotion recognition systems;
• biometric categorisation systems;
• deep fake image/video/audio content that has been created or manipulated by AI (with exceptions for "evidently artistic, creative, satirical, fictional or analogous works"); and
• systems producing text intended to inform the public on matters of public interest.
For AI systems that do not fall within any of the above categories, the AIA provides for codes of conduct and encourages voluntary adherence to some of the regulatory obligations which apply to the high risk categories. Such codes may also cover wider issues such as sustainability, accessibility, stakeholder participation in development and diversity in system-design teams.
Since the AIA was first proposed in 2021, foundation AI systems have emerged as a significant slice of the AI ecosystem. They do not fit readily within the AIA tiers because they perform a specific function (translation, text generation, image generation, etc.) that can be put to many different applications with differing levels of risk.
The AIA creates two layers of regulation for "general-purpose AI" models: one universal set of obligations for general-purpose AI models and a set of additional obligations for general-purpose AI models with systemic risk.
General-purpose AI models include foundation models and some generative AI models. They are defined as models that display "significant generality" and are "capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market", and that "can be integrated into a variety of downstream systems or applications" (including where the model is trained with a large amount of data using self-supervision at scale), with an exception for AI models used for research, development or prototyping before they are placed on the market.
A general-purpose AI model will be classified as having "systemic risk" if it has "high-impact capabilities" (presumed where the cumulative amount of computation used to train it exceeds 10^25 floating point operations) or if the Commission designates it as such.
The largest, state-of-the-art AI models are expected to fall within this category.
The universal obligations for all general-purpose AI apply in addition to the core risk-based tiers of AIA regulation. Providers of all general-purpose AI models will be required to meet various transparency obligations (including information about the training data and in respect of copyright), which are intended to support downstream providers using the model in their AI system to comply with their own AIA obligations.
General-purpose AI models with systemic risk are subject to a second tier of obligations. These will include model evaluation and adversarial testing, assessment and mitigation of systemic risk, monitoring and reporting on serious incidents, adequate cybersecurity, and monitoring and reporting on energy consumption.
As regards enforcement, the AIA requires member states to appoint national level AI regulators which will be given extensive powers to investigate possible non-compliance, with powers to impose fines.
In addition, the "AI Office" has been created within the Commission to centralise monitoring of general-purpose AI models, and to lead on interaction with the scientific community, and in international discussions on AI. It will also support the Commission in developing guidance, standards, codes of practice, codes of conduct, and in relation to investigations, testing and enforcement of AI systems. The AI Office will play a key role in governance across the national AI regulatory bodies appointed in member states.
The AI Office is drawing up a general-purpose AI code of practice (Code). The Code is intended to facilitate the proper application of the AIA for general-purpose AI models. Based on a call for expressions of interest, the Commission has formed a Code of Practice Plenary which held its first meeting in September 2024. The first workshop on the Code took place on 23 October (see this Regulatory Outlook).
The AI Office also conducted a multi-stakeholder consultation on trustworthy general-purpose AI models to collect views on the topics to be covered by the Code and on related AI Office work.
The first draft Code is expected in November 2024, and the AI Office aims to publish the final version in April 2025.
Coordination and coherence of the AIA regime is the responsibility of the AI Board, which is separate to the AI Office and comprises representatives from member states.
Fines for non-compliance are tiered (a schematic sketch of this structure follows below):
• up to the higher of 7% of worldwide turnover or €35 million for breach of the prohibition provisions;
• up to the higher of 3% of worldwide turnover or €15 million for breach of other substantive provisions (including compliance with the high risk AI regime, the low risk transparency regime, and the general-purpose AI provisions); and
• up to the higher of 1% of worldwide turnover or €7.5 million for supplying incorrect, incomplete or misleading information to the relevant authorities.
The fining regime for SMEs is similar to the above, except that the maximum fine is whichever percentage or amount is the lower.
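Expressed as simple logic (a sketch only; the tier labels are our own shorthand and turnover is assumed to be in euros):

```python
def max_aia_fine(worldwide_turnover_eur: float,
                 breach_tier: str,
                 is_sme: bool) -> float:
    """Sketch of the AIA maximum-fine structure: the higher of a percentage
    of worldwide turnover or a fixed amount, except for SMEs, for which the
    lower of the two applies."""
    tiers = {
        "prohibited_practices": (0.07, 35e6),     # breach of the prohibitions
        "other_obligations": (0.03, 15e6),        # other substantive provisions
        "misleading_information": (0.01, 7.5e6),  # incorrect info to authorities
    }
    percentage, fixed_amount = tiers[breach_tier]
    candidates = (percentage * worldwide_turnover_eur, fixed_amount)
    return min(candidates) if is_sme else max(candidates)
```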
As noted above, deadlines for compliance will be staggered.
When will businesses have to comply with the EU's AI Act?
How will the new UK government resolve the conflict over AI development and IP rights?
This regulation will impose cybersecurity requirements for digital products and ancillary services throughout their lifecycle, requiring conformity assessments and a CE mark.
The text of the EU Cyber Resilience Act was formally adopted by the European Parliament on 12 March 2024 and given final approval by the Parliament on 17 September 2024. The Council of the EU adopted the Act on 10 October 2024.
It will next be published in the Official Journal and enter into force 20 days later. Once it is in force, manufacturers will have 36 months to prepare for compliance, meaning the main obligations are likely to apply from autumn 2027.
In the legislative process
Proposal for a Regulation on horizontal cybersecurity requirements for products with digital elements, available here.
The adopted text is available here.
Manufacturers of products with digital elements (software and hardware products).
The Act will introduce cybersecurity requirements for products with digital elements, which aim to protect consumers and businesses from products with inadequate security features. The Act will require manufacturers to ensure that the cybersecurity of their products conforms with minimum technical requirements, from the design and development phase and throughout the whole life cycle of the product. This could include carrying out mandatory security assessments.
Certain types of products with digital elements deemed safety critical will be subject to stricter conformity assessment procedures, reflecting the increased cybersecurity risks they present.
The Act also introduces transparency requirements, requiring manufacturers to disclose certain cybersecurity aspects to consumers.
The Act will apply to all products that are connected either directly or indirectly to another device or network, with some exceptions such as medical devices, aviation or cars.
When the Act becomes effective, software and products connected to the internet will be required to carry the CE mark to indicate that they comply with the applicable standards. Readers should note also that the European common criteria-based cybersecurity certification scheme, developed under the EU Cybersecurity Act, has been adopted and will apply on a voluntary basis to all information and communication technologies products within the EU.
UK and EU take steps to bolster product security regimes - Osborne Clarke
How is EU cybersecurity law affecting IoT product design? - Osborne Clarke
This Act creates new powers for the CMA to regulate digital market platforms with "significant market status", with individual reviews and bespoke codes of conduct. It also updates aspects of competition law.
The Act received Royal Assent on 24 May 2024. It is anticipated that the competition aspects of the Act will come into force in early 2025.
The majority of provisions in the Act will be brought into force by regulations. The CMA has already launched a consultation on guidance on the digital market competition regime, which closed on 12 July 2024.
Digital Markets, Competition and Consumers Act, text available here.
The new regime is intended to apply to businesses that hold substantial and entrenched power in digital markets. Firms that meet certain cumulative thresholds will be in scope:
• A UK nexus test (that is, the firm has a sufficient connection to the UK in a particular digital activity);
• A revenue test; and
• An activity test (that is, the firm conducts a digital activity, or digital activities, that fall within the scope of the regime).
The legislation will only apply to firms meeting those thresholds that have been designated by the Digital Markets Unit (DMU), a specialist unit within the Competition and Markets Authority (CMA), as having strategic market status (SMS) following an SMS investigation. The legislation will potentially impact any businesses operating in digital markets by changing the regulatory overlay of rights and obligations in these markets.
The UK's Digital Markets, Competition and Consumers Act will introduce new statutory powers for the DMU to regulate powerful digital firms. The DMU will remain an administrative unit within the CMA, rather than being designated as a separate regulator.
The Act concerns digital activities, defined as services provided over the internet or digital content, whether paid-for or free of charge. CMA jurisdiction will require a link to the UK.
A business will not be designated with SMS unless it meets the financial threshold of £1 billion in UK group turnover or £25 billion global group turnover in a relevant 12 month period.
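In schematic terms, the financial threshold alone can be expressed as follows (a sketch only: designation also requires the UK nexus and digital activity conditions, and a full SMS investigation):

```python
def meets_sms_financial_threshold(uk_group_turnover_gbp: float,
                                  global_group_turnover_gbp: float) -> bool:
    """Sketch of the DMCCA financial threshold: £1 billion UK group
    turnover or £25 billion global group turnover in a relevant
    12-month period."""
    return (uk_group_turnover_gbp >= 1e9
            or global_group_turnover_gbp >= 25e9)
```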
The DMU may designate a business as having SMS following an SMS investigation. The investigation will consider whether the business has substantial and entrenched market power, with a forward-looking analysis at least five years into the future. It will also assess whether the business has a position of strategic significance in relation to the digital activity in question, whether by its size or scale, the reliance by others on its services, or the influence that it can exert on others as a consequence of its position. Designations will last for five years and the designation process will include public consultation and liaison with other regulators.
The DMU may create a bespoke code of conduct for each SMS business with a view to ensuring fair dealing, open choices (which may require enhanced data portability and interoperability of services), and trust and transparency.
The CMA is given extensive powers of investigation and enforcement, including fines of up to 10% of global turnover and wide-ranging remedies. In addition to SMS investigations and enforcement, the CMA may also make a "pro-competition intervention" to deal with an adverse effect on competition in a digital market. It will have the same remedial powers as it currently enjoys following a market investigation, including the power to order divestments and break-ups.
The regulatory regime will:
• Allow the DMU to designate firms that have a substantial, entrenched and strategic position in certain digital activities with SMS.
• Make firms with SMS subject to a new code of conduct, designed to achieve the statutory objectives of fair trading, open choices and trust and transparency.
• Allow the DMU to use pro-competitive interventions. It is anticipated that the DMU will have powers to flexibly design pro-competitive interventions.
• Make changes to the existing merger control regime. This will make notification of mergers, acquisitions and the creation of joint ventures by SMS firms mandatory where the total consideration is at least £25 million and the transaction will result in the SMS firm crossing certain share/voting rights thresholds. The Act also introduces changes to the notification thresholds for all mergers, including a new "transaction value" threshold of £100 million. There is also a requirement to notify mergers where either party has a 33% or more share of supply and a UK turnover of £350 million. This is designed to capture "killer acquisitions" that would otherwise slip under the CMA's radar.
• The Act gives extraterritorial effect to the UK prohibition against anti-competitive agreements for agreements which are likely to have an immediate, substantial and foreseeable effect on trade within the UK, including agreements that are not implemented, or intended to be implemented, in the UK. Additionally, the Act gives the CMA power to issue fines to overseas companies for failure to comply with an information request.
• Give the CMA power to impose fines for breaches of competition orders and undertakings.
Are you ready for the UK Digital Markets Competition and Consumers Act?
This Act updates consumer law. It includes changes to rules for consumer subscriptions. It also strengthens the CMA's enforcement and fining powers.
The Act received Royal Assent on 24 May 2024. Secondary legislation and statutory guidance from the Competition and Markets Authority (CMA) are needed to bring the provisions into effect.
In September 2024, the government published a ministerial statement setting out an indicative timeline for the implementation of the DMCCA. According to the statement:
• The government expects to commence the consumer enforcement regime in Part 3 and the new unfair trading regulatory regime in Chapter 1 of Part 4 of the DMCCA in April 2025. New savings schemes rules will not commence before April 2025, and the timeline is subject to continuing engagement with consumers and industry.
• Reforms to subscriptions contracts will not commence before spring 2026 "at the earliest".
• The aim is to commence the competition and digital markets parts of the DMCCA in December 2024 or January 2025, with the relevant secondary legislation due to be laid before Parliament this autumn. The government expects the CMA to launch the first Strategic Market Status investigations shortly after the digital markets regime is brought into effect.
Enacted
Digital Markets, Competition and Consumers Act 2024, available here.
Broadly speaking, the Act will impact all consumer-related businesses since it not only applies to all consumer contracts but also in circumstances where consumers are targeted. The Act includes a significant overhaul of the laws relating to subscription contracts – as such, providers of subscription services are likely to be particularly impacted.
The Act repeals and restates the Consumer Protection from Unfair Trading Regulations 2008 as primary legislation. It aims to enhance consumer protections by strengthening enforcement (including by giving the Competition and Markets Authority (CMA) significant new powers and the ability to impose substantial fines) and introducing new consumer rights, e.g. to tackle "subscription traps" and the proliferation of fake online reviews.
Definitions
The Act amends some definitions, including "average consumer", "commercial practice" and "transactional decision", and expands the definition of "vulnerable consumers".
Commercial practices that are always considered unfair
The Act repeals and restates the existing unfair trading regulations. In addition to various minor changes, it amends and supplements the list of commercial practices that are always considered unfair, to reflect the fact that consumers and traders increasingly interact online (resulting in wider application).
To improve consumer transparency, the list of "blacklisted" commercial practices in Schedule 19 of the Act now also includes various activities relating to the submission, commission or publication of fake reviews.
Power to amend the list of unfair commercial practices
The Secretary of State also has the power to amend the Act in various ways including:
• adding, amending or removing "blacklisted" commercial practices;
• giving consumers additional rights of redress; and
• adding, amending or removing information that must be included in an invitation to purchase.
Subscription contracts
The Act gives new rights to consumers and imposes new obligations on providers in respect of subscription contracts. In summary, the requirements include:
• providing consumers with clearer pre-contractual information (the Act sets out the "key" and "full" information a business must give);
• offering a cooling-off period both when the customer enters into the contract as well as a new "renewal cooling-off period";
• strict time limits on providing consumers with reminders before (a) a free trial or low-cost introductory offer comes to an end and converts into a paid or more costly subscription; and (b) a contract auto-renews and commences a new term. Such reminders need to include the date and length of renewal, the current price and the price following renewal, and any notice period and other requirements for cancelling auto-renewal;
• ensuring consumers can easily exit contracts in a straightforward, cost-effective and timely way (ideally in one click, i.e. via a cancellation button);
• providing end-of-contract notice to consumers who have cancelled; and
• if the consumer signed up online, obtaining their express acknowledgement that the contract imposes an obligation to pay.
Drip-pricing
The Act provides that a trader must set out the total price of a product, including any mandatory fees, taxes and charges that apply, rather than drip-feeding in these amounts during the transaction process.
Enforcement
The Act significantly enhances the CMA’s role in enforcing consumer protection laws, giving the CMA the power to levy civil fines of up to £300,000 or 10% of annual global turnover (whichever is higher) for breach of all consumer law (not just as updated by the Act). The Act also allows the CMA to directly investigate suspected infringements and practices that may harm the collective interests of consumers in the UK. The CMA can also now issue enforcement notices without going to court first.
Enactment
Secondary legislation and statutory guidance from the CMA are needed before the DMCCA can become fully effective. Over August and September 2024, the CMA consulted on draft guidance and rules for the exercise of its new direct consumer enforcement powers. The government has also consulted on three draft regulations on determining turnover and on the meaning of control over an enterprise for the purposes of determining turnover-based penalties for non-compliance.
Are you ready for the UK Digital Markets Competition and Consumers Act?
New UK legislation envisages powerful digital markets regime and significant reform to consumer law
Overview video: https://youtu.be/HuHLFokwYO4
Video on dark patterns: https://youtu.be/FhNAJYT1Xxg
Video on subscription law changes: https://youtu.be/3D0yDatHHG0
This directive seeks to update product liability rules to include digital products (including software as a service (SaaS)/free standing software), services and platforms. It will require implementation at member state level.
The European Parliament formally adopted the new Product Liability Directive on 12 March 2024 with final approval given on 17 September 2024. The Council of the EU formally adopted the directive on 10 October 2024. The directive will next be published in the EU Official Journal and come into force 20 days later. Member states will be required to transpose the directive into national law within 24 months of the directive becoming law.
There is no further compliance period for businesses once the national implementing laws are in place, meaning that products will need to be compliant from autumn 2026.
In the legislative process
Proposal for a Directive on liability for defective products, available here.
The version that was adopted by the European Parliament on 12 March 2024 is available here.
Anyone placing products on the EU market.
This directive will provide easier access to compensation for consumers who suffer damage from defective products and includes amendments relating to the directive's scope, treatment of psychological damage, and allocation of liability regarding software manufacturers.
The update to the existing product liability framework, dating from 1985, will:
• Update definitions and scope to resolve inconsistencies and legal uncertainty, notably regarding the meaning of the term "product". The legislation also amends the scope of the potentially liable parties (to include companies that substantially modify products, providers of software and providers of digital services).
• Modernise liability rules for digital or connected products. The revised PLD will allow compensation for damages when connected products are made unsafe by software updates, when AI or digital services cause damage, when manufacturers fail to address cybersecurity vulnerabilities and where defective products cause a loss or corruption of data.
• Ensure that there is always a business in the EU (i.e. a manufacturer, importer or authorised representative) that can be held liable for a product, even if the product was not purchased in the EU.
• Change rules on disclosure of evidence and the burden of proof. Manufacturers will be required to disclose evidence where a defective product has caused damage, and in complex cases where it is considered excessively difficult for a claimant to prove that a product is defective, the burden of proof will be switched to the defendant entity.
• Allow consumers to seek compensation for non-material losses such as medically recognised damage to psychological health.
We have produced an infographic on product liability reform in the EU and UK that sets out some of the key changes and practical actions businesses should be considering. For example, given claimants' enhanced disclosure rights under the revised EU PLD, businesses should ensure disclosable materials are fit for purpose, including: design files; evidence of safety testing in the design phase; and policies on vigilance and corrective actions. Request a copy of the infographic here.
This regulation aims to create an EU-wide framework for trusted and secure digital identities that can be used across the EU, including control of personal data.
The regulation entered into force on 20 May 2024. Its provisions will come into effect 24 months after the adoption of an implementing regulation by the Commission, which the Commission is obliged to adopt by 21 November 2024. The wallet provisions are therefore expected to take effect by 21 November 2026, by which date Member States are required to make a digital identity wallet available to their citizens.
In force. Not yet applicable. An implementing regulation setting out full technical specifications is also required, which the Commission is obliged to issue by 21 November 2024.
Regulation (EU) 2024/1183 of the European Parliament and of the Council of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework, available here.
This legislation updates and amends the existing digital identity regime in the EU. Once in effect, it will affect all EU citizens, who will be able to have an EU digital identity wallet to prove their identity, to hold other documents such as qualification certificates, bank cards or tickets, and to access public and private services. Wallets will be available to prove the identity of natural and legal persons, and of natural persons representing a natural or legal person. The legislation will also impact on the providers of digital identity services, and be relevant to businesses to which the wallet is presented.
This new legislation amends the existing EU rules around digital identity (known as eIDAS) to create the European Digital Identity Framework. The concept is a digital wallet – for example, an app on a phone – that will stand as proof of the person's identity and can also be used to store and prove other "formal" attributes such as qualifications, certificates, a driving licence, medical prescriptions, and other variations on someone's status and entitlements.
Key points include:
• Wallets will not be compulsory and there will be no discrimination where someone does not have a wallet. They will not replace national ID cards.
• They will be issued at Member State level but will be valid across the EU due to the harmonised legal basis and standards for their issue. They can be issued by the Member State directly, under mandate from a Member State or independently of the Member State but formally recognised by it.
• Wallets will be free of charge. They will also carry the ability to apply an electronic signature, although Member States will be able to prevent free-of-charge use of the wallet for e-signatures for professional purposes.
• Electronic attestations of attributes issued by the body responsible for its authenticity will have the same legal status as an attestation from that body in paper format.
• The regulation includes provisions to ensure that there is a high degree of confidence in the proof of identity provided by the wallet. It includes detailed procedures to be followed in the event of a cybersecurity breach in relation to wallets or the systems that support them.
• The legislation is technology-neutral. The source code for application software components will be open source, although certain elements (not installed on user devices) may be kept confidential.
• The eIDAS framework will be based on a common technical architecture with common standards and harmonised security requirements, building on existing cybersecurity regulatory requirements. This will ensure interoperability of wallets across the EU and regardless of the issuer.
• Member States will be required to provide record matching services across borders.
• Wallet holders will have full control over their data via a "privacy dashboard" and will be able to use the wallet to prove, for example, their age without revealing wider information about their identity beyond what is needed (a conceptual sketch of this selective disclosure follows the list below). Wallet providers will not control the data contained in the wallet. Dashboards will contain a log of all transactions where the wallet has been used, and support data portability for the user. Entities that have provided attestations held in the wallet will not be able to track or monitor the wallet holder's activities without their express consent.
• Businesses designated as "very large online platforms" (VLOPs) under the EU's Digital Services Act will be obliged to accept the wallet as proof of identity for users accessing their services.
• Parties seeking to rely on eIDAS wallets for the provision of public or private services will need to register, with the Member State where they are established, their details and how they will be relying on wallets. They must reveal themselves to the wallet holder, although the holder can provide only a pseudonym where their actual identity is not needed for the service in question.
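As a purely conceptual illustration of that selective-disclosure idea (this is not the eIDAS technical architecture; the real wallet relies on cryptographic attestations rather than plaintext attributes, and all names here are invented):

```python
from datetime import date

class ConceptualWallet:
    """Toy model of selective disclosure: the wallet answers a yes/no
    question (e.g. "is the holder over 18?") without revealing the
    underlying attribute."""

    def __init__(self, date_of_birth: date):
        self._date_of_birth = date_of_birth  # never disclosed directly

    def prove_over(self, years: int, today: date) -> bool:
        dob = self._date_of_birth
        age = today.year - dob.year - (
            (today.month, today.day) < (dob.month, dob.day))
        return age >= years  # only the boolean answer is disclosed

# A relying party learns only True or False, not the date of birth:
wallet = ConceptualWallet(date(1990, 6, 15))
print(wallet.prove_over(18, date(2026, 1, 1)))  # True
```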
The legislation also establishes a legal framework for electronic signatures (which can be administered using an eIDAS wallet), electronic seals, electronic time stamps, electronic documents, electronic registered delivery services, certificate services for website authentication, electronic archiving, electronic attestation of attributes, electronic signature and seal creation devices, and electronic ledgers.
In September 2024, the European Commission requested the European Union Agency for Cybersecurity (ENISA) to provide support to member states for the certification of digital wallets, including the development of a candidate European cybersecurity certification scheme in accordance with the EU Cybersecurity Act. ENISA will do this by providing harmonised certification requirements which member states should adhere to when setting up their national certification schemes.
The European Commission's Q&A on this regulation is available here.
The Media Act introduces an overhaul of the UK legal framework for public service broadcasting, video on demand, streaming and internet radio, and the digital platforms and devices used to access those services.
The Media Act 2024 received Royal Assent on 24 May 2024. Most of the Act's provisions will be brought into force by secondary legislation, apart from Part 2 on prominence on television selection services, which came into force on the day on which the Act was passed.
Secondary legislation made under the Act
The first commencement regulations, The Media Act 2024 (Commencement No. 1) Regulations 2024, brought into force certain provisions of the Media Act on 23 and 26 August 2024.
The second commencement regulations, The Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force Part 5 and Section 19 of the Act on 17 October 2024.
Enacted.
Media Act 2024, available here.
Broadcasters (public service and commercial)
Video-on-demand and streaming service providers
TV selection services, such as smart TV, set-top-box, streaming stick and pay TV platform providers, as well as the associated software providers.
Voice-activated smart speakers and other radio selection services
Radio station operators
Sports rights holders
The primary aims of the Act are to: (i) promote UK public service broadcaster (PSB) content and national radio stations; (ii) protect UK audiences and stimulate local content production; (iii) redress the perceived imbalance between the influence of global streaming platforms and UK-based media services, overhauling the traditional scope of UK on-demand law by extending Ofcom's jurisdiction to non-UK services; and (iv) specifically regulate larger digital content selection platforms and smart devices as regards prominence of, and access to, internet-delivered TV and radio services.
Key reforms include:
· Digital prominence: The Act sets out a framework for PSB prominence on digital TV platforms.
· VoD regime overhaul: The Act creates a new category of regulated on-demand service, the "Tier 1" service. This covers either: (i) PSB services used to fulfil a public service remit, other than those operated by the BBC; or (ii) any other UK or non-UK VoD services designated (by name or by definition) in secondary legislation. The scope of this second limb is undefined, but the expectation is that it will catch services with a large UK audience that make available "TV-like" content, or smaller services if evidence of potential harm emerges. These services will be required to comply with stringent new content and accessibility obligations.
· New public service broadcaster (PSB) remit: The Act sets out a new, flexible remit for PSBs and allows them to contribute towards that remit via a range of audiovisual services, including digital VoD platforms. The Act also amends certain commissioning and broadcasting quota obligations.
· Listed events: The aim of the regime is to ensure that key live sporting events of national interest are widely available and free-to-air for UK audiences. The Act clarifies that only PSB linear services will qualify to bid for the rights to show listed events.
· Radio selection services: The Act recognises the growth in voice-activated devices and their potential power in determining listening choices. The largest of these platforms will be obliged to enable access to UK-licensed internet radio stations without charging any fee and without overlaying the stations with any third-party content, including ads. Station operators will also be free to select their preferred means of delivery, for example via a specific station-operated app or an aggregator.
On 26 February 2024, Ofcom published its roadmap for implementing the bill (as it then was), explaining its "high-level plan" for putting the provisions into practice once the bill became law. Ofcom cautioned that the dates outlined were indicative only. Its timetable assumed that the legislation would receive Royal Assent by the summer (which it did) and that the necessary secondary legislation would subsequently be laid before Parliament. Following the 2024 general election, the Labour government is proceeding with implementation, which will take place in phases over the next two years.
In July 2024, Ofcom published a call for evidence on the listed events regime under the Act, which closed on 26 September 2024.
On 15 August 2024, the government issued the Media Act 2024 (Commencement No. 1) Regulations 2024, bringing certain (mostly functional) provisions of the Act into force on 23 August 2024. The Regulations also brought into effect the framework for non-UK Tier 1 VoD services, but the details of the regime are still to be determined: Ofcom must first consult and the government will then produce further regulations. In addition, the Regulations partially brought into effect the new digital prominence regime, the details of which are likewise subject to consultation and further government regulations.
In September 2024, the government indicated to Ofcom its intention to begin considering Tier 1 regulation of appropriate on-demand services "as soon as is practically possible". The government asked Ofcom to prepare a report on the operation of the UK market for on-demand programme services and non-UK on-demand programme services, which the government is required to take into account.
The second commencement regulations, the Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force the following provisions of the Act on 17 October 2024: (i) Part 5 (Regulation of radio services); and (ii) Section 19 (amount of financial penalties: qualifying revenue), but only for the purposes of enabling Ofcom to carry out the necessary work to bring the section into force. The Regulations also contain transitional and saving provisions.
The Internet Television Equipment Regulations 2024, which come into force on 14 November 2024, set out the descriptions of devices that are considered to be "internet television equipment" for the purposes of the new prominence framework under the Act. The regulations name smart televisions and streaming devices as internet television equipment.
The government has also published a policy paper on its approach to deciding the categories of TV devices that will be considered "internet television equipment". It explains that smart TVs, set-top boxes and streaming sticks will qualify, but that smartphones, laptops, tablets, PCs and video games consoles will not, as watching TV on these devices is not their primary function.
Newer devices, such as home cinema projectors and portable lifestyle screens, internet-connected car touchscreens, virtual reality headsets, smart watches and in-home screens (such as smart fridges and smart speakers with built-in screens), will also be excluded for the same reason. However, the government intends to review the list in one year's time.
Media Matters Podcast (Media Act Series)
The UK Media Act 2024 takes first steps on Ofcom's roadmap to implementation
The European Media Freedom Act establishes a regulatory framework for media services in the EU, introducing measures to protect journalists, and strengthen media pluralism and editorial independence.
The Act entered into force on 7 May 2024.
It will generally apply from 8 August 2025 with some exceptions. For example, provisions in relation to the right of recipients of media services to plurality of editorially independent media content will apply from 8 November 2024.
Other provisions will apply from 8 February 2025: national regulatory authorities will assume their powers, and provisions establishing the new European Board for Media Services, amending the Audiovisual Media Services Directive (AVMSD) and concerning the editorial freedom and independence of media service providers will also apply from that date.
Enacted.
Regulation (EU) 2024/1083 of the European Parliament and of the Council of 11 April 2024 establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act), available here.
The European Media Freedom Act (EMFA) builds on the revised Audiovisual Media Services Directive and broadens its scope by bringing radio broadcasters and newspaper publishers into its remit.
EMFA applies to all "media service providers" (MSPs) targeting audiences in the EU, whether the provider is established in the EU or not. An MSP is an entity that provides a media service and has editorial responsibility for the content on that service. EMFA therefore applies to:
· television and radio broadcasters;
· video on-demand providers;
· audio podcast providers;
· newspaper publishers (online and offline); and
· video-sharing platforms, including Very Large Online Platforms (VLOPs) (as defined in the Digital Services Act), that exercise editorial control.
EMFA provides a legal framework setting out both rights and duties for MSPs and recipients of media services in the EU. Key points include:
· Recipients of media services in the EU have a right to receive a plurality of news and current affairs content that is produced with editorial freedom. Member States must respect this right.
· Member States must respect the editorial freedom and independence of MSPs and are banned from interfering in, or influencing in any way, the editorial policies and decisions made by MSPs.
· Journalists are protected from attempts to force them to disclose their sources or confidential communications, whether through sanctions, inspection, detention or surveillance. Surveillance measures will be allowed only where there is "an overriding reason of public interest", must be proportionate and must first be authorised by a judicial authority.
· Safeguards for public service media providers so that they can provide impartial and independent media services.
· Transparency requirements for MSPs: they must provide their legal names, information on the identity of the shareholders who may exercise influence over them, and the total annual amount of state advertising they receive. MSPs providing news and current affairs content must also take measures to guarantee the independence of editorial decisions.
· The establishment of a new, independent, European Board for Media Services (replacing the European Regulators Group for Audiovisual Media Services under the AVMSD) composed of representatives from national regulatory authorities (NRAs) and assisted by a Commission secretariat.
· A framework for co-operation amongst national regulatory authorities overseen by the Board.
· Provisions in relation to VLOPs, intended to protect freedom of expression. Except in relation to obligations under the Digital Services Act to protect minors and to assess and mitigate systemic risks, if a VLOP decides to remove or restrict access to an MSP's content on the grounds that the content is incompatible with the VLOP's terms and conditions, it must first provide the MSP with a statement of reasons for the proposed action. The MSP then has 24 hours to reply, although that period can be shortened in a crisis situation. If the VLOP still decides to remove or restrict access to the content, it must tell the MSP without undue delay. If the VLOP repeatedly restricts or removes media content without sufficient grounds, it must, at the MSP's request, engage in "meaningful and effective dialogue" with the MSP to find an "amicable solution" within a reasonable timeframe. The MSP has the right to request an opinion from the Board on the outcome of that dialogue, including recommendations to the VLOP, and the Board must then inform the Commission. The Board must also regularly organise structured dialogues between VLOPs and MSPs to discuss best practices for content moderation and to monitor actions taken to protect society from harmful content, including disinformation and foreign manipulation and interference. These provisions have been much debated: some of those concerned with fighting disinformation argue that they amount to a "media exemption" which will allow false and misleading content to proliferate online, while others fear the provisions could in fact increase restriction of MSP content, thereby harming freedom of the press.
· Obligations on Member States to establish a system, run by the NRAs, to assess "media market concentrations" that could have a significant impact on media pluralism and editorial independence. This is distinct from competition law assessments and rules on merger control. The Board is also obliged to provide an opinion (which the NRA must take "utmost account" of) and can, if no assessment takes place, draw up an opinion on its own initiative if it considers the concentration is likely to affect the functioning of the internal market for media services.
· Rights for users to be able to easily change the configuration, including default settings, of devices or user interfaces that control or manage access to media services so that they can customise the offering to suit their preferences. Manufacturers must include this functionality in their devices and interfaces by design.
· Requirements for providers of audience measurement systems to provide free of charge to MSPs, advertisers and authorised third parties, information on their methodologies and to have their systems audited annually.
· Rules on transparency and proportionality in relation to the allocation of public funds for state advertising and supply or service contracts. State advertising must be distributed broadly across MSPs.
This directive helps parties bring a claim in relation to harm caused by an AI tool, by making it easier to obtain information and by altering the burden of proof.
To be confirmed – see "Status" below.
This legislation was not finished by the time the European Parliament session ended ahead of the elections in June 2024. This overview reflects the legislation as it stood at that point. It is currently unclear whether it will be revived and enacted, or allowed to lapse, which will be a decision for the new Commission, to be appointed by the new European Parliament. This appointment is expected in autumn 2024.
In September 2024, the European Parliament's Think Tank published a complementary impact assessment on the Commission's proposal. The report, among other things, proposes to expand the scope of the directive into a "more comprehensive software liability instrument" which would cover not only AI, but all other types of software.
Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), available here.
The directive will impact on businesses and individuals bringing or defending private actions for non-contractual damages in relation to artificial intelligence (AI). It will (once implemented at national level) change civil litigation rules in EU Member States on access to evidence and the burden of proof for some AI damages claims.
The EU AI Liability Directive is intended to reduce perceived difficulties in claiming non-contractual damages for harm caused by AI.
The proposal sits alongside the EU AI Act and wider reforms to EU product liability legislation. Some AI systems will fall under the EU's product liability regime, a strict liability regime that does not require proof of fault. Where they do not, damages for harm are likely to take the form of a non-contractual claim that requires the claimant to prove the defendant's fault and a causation link between the fault and the harm suffered.
The complexities of AI systems can make it difficult to unravel why harm has been caused, whether there was fault, and whether the fault flowed through to the harm caused. Claimants often suffer from an asymmetry of information – the defendant knows more about what has happened than they do – which can be even greater in relation to claims about an AI system, particularly in civil law systems without common law-style disclosure obligations. The AI liability directive seeks to address these difficulties with two changes to standard civil litigation procedure in EU Member States for AI claims.
First, a new right of access to information is proposed for claimants seeking compensation in relation to an AI system classified as "high risk" under the AI Act. Under the proposals in the directive, the claimant must first make "all proportionate attempts" to obtain information from the defendant. If that is unsuccessful and if the claimant can show they have a plausible claim for damages, the claimant will be able to ask the court to order disclosure of "relevant evidence" from the defendant.
"Relevant evidence" is not currently defined, but could feasibly include some or all of the extensive compliance and technical documentation required for high-risk systems under the AI Act. It could also potentially extend to data sets used for training, validating or testing AI systems and even to the content of mandatory logs that AI providers must maintain for traceability purposes.
Moreover, the directive proposes that failure to comply with a disclosure order will trigger a rebuttable presumption of non-compliance with a duty of care by the defendant – aiding the claimant in proving fault.
Secondly, the directive proposes creating a "presumption of causality". "National courts shall presume … the causal link between the fault of the defendant and the output produced by the AI system" where the claimant shows that all of the following requirements are met:
· The defendant is in breach of a duty of care under national or EU law – the required "fault". The scope of what might constitute a relevant duty of care for these purposes is not defined, although for providers and users of high-risk AI systems, provisions of the AI Act that would constitute relevant breaches for liability purposes are clearly listed. As noted above, if the defendant has failed to comply with a disclosure order, the claimant benefits from a rebuttable presumption as to the defendant's fault; and
· It is reasonably likely from the circumstances in which the harm occurred that the defendant's fault has influenced the output or lack of output of the AI system in question. The burden of proof on the claimant is reduced from having to prove this point to the required standard (for example, that it is more likely than not), to having to show only that it is reasonably likely; and
· The output or lack of output of the AI system gave rise to the harm suffered.
The proposal is not as simple as a reversal of the burden of proof: the claimant still needs to demonstrate these three elements.
In relation to high-risk AI systems, the directive proposes that the defendant will be shielded from the presumption of causality where it can demonstrate that the claimant has access to sufficient evidence and expertise to prove the causal link.
For AI systems that are not high risk, the presumption will apply only where the court considers that it would be excessively difficult for the claimant to prove the causal link.
Finally, the proposal is that the presumption will apply to AI systems that are put into use by a non-professional user only if they have materially interfered with the conditions of operation of the AI system or where they were required and able to determine the conditions of operation of the AI system but failed to do so.
More generally, the presumption of causality will be rebuttable, shifting the burden of proof onto the defendant to show that there is no causal link between the fault and the AI output; for example, by showing that the AI system could not have caused the harm in question.
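For readers who find it helpful, the combined effect of the three conditions and the carve-outs described above can be expressed as a short, purely illustrative logic sketch. Nothing below is drawn from the directive's text; all names and the simplified structure are our own shorthand, and the sketch deliberately omits finer points such as the non-professional user provision.

```python
# Purely illustrative sketch of the proposed presumption of causality.
# The dataclass fields are our own shorthand, not statutory language.
from dataclasses import dataclass

@dataclass
class Claim:
    fault_shown: bool                     # breach of a duty of care, or presumed
                                          # after non-compliance with a disclosure order
    fault_likely_influenced_output: bool  # "reasonably likely" - a reduced standard
    output_caused_harm: bool              # the output (or lack of it) gave rise to harm
    high_risk_system: bool                # classification under the AI Act
    claimant_has_sufficient_evidence: bool     # high-risk carve-out for the defendant
    causation_excessively_hard_to_prove: bool  # gateway for non-high-risk systems

def presumption_of_causality_applies(c: Claim) -> bool:
    # The claimant must still demonstrate all three core elements.
    if not (c.fault_shown
            and c.fault_likely_influenced_output
            and c.output_caused_harm):
        return False
    if c.high_risk_system:
        # The defendant escapes the presumption by showing the claimant has
        # access to sufficient evidence and expertise to prove causation.
        return not c.claimant_has_sufficient_evidence
    # For other AI systems, the presumption applies only where proving the
    # causal link would be excessively difficult for the claimant.
    return c.causation_excessively_hard_to_prove

# Even where this returns True, the presumption remains rebuttable: the
# defendant can still show there is no causal link between the fault and
# the AI output.
```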
There is an intentional interplay between this directive and the EU's AI Act, linking non-compliance with the AI Act regulatory regime to increased exposure to damages actions.
Once enacted at EU level, the principles in the directive will need to be implemented at national level within a set period, known as the transposition period, which usually lasts two years. This two-stage approach reflects the extensive differences in detail between the civil litigation rules of different EU Member States: it enables each jurisdiction to enact harmonised changes, adapted as needed to fit its national civil litigation regime.
As currently drafted, the new rules would only apply to harm that occurs after the transposition period, without retrospective effect.
EU proposes new approach to liability for artificial intelligence systems
This directive concerns workers who source work through digital platforms. As well as clarifying employment rights, it addresses the fairness, transparency and accountability of algorithmic management of platform workers.
To be confirmed – see "Status" below.
This legislation was not finished by the time the European Parliament session ended ahead of the elections in June 2024. This overview reflects the legislation as it stood at that point. It is currently unclear whether it will be revived and enacted, or allowed to lapse, which will be a decision for the new Commission, to be appointed by the new European Parliament. If it is revived and enacted, its provisions will then require implementation through national level legislation, to be issued by member states.
Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work. The Commission's original 2021 proposal is available here.
The amended text that has been finalised at political level is available here.
The directive applies to all workers performing platform work within the EU (and therefore to their "employers"). The contractual relationship of the worker does not have to be with the recipient of the service but could be with the platform or with an intermediary.
It also applies to all digital labour platforms organising platform work to be performed in the EU, regardless of the place of establishment of the platform. The key point is where the work will be performed, not where the platform is based.
A "digital labour platform" is defined as meeting all of the following requirements:
• The service is provided at a distance via electronic means, such as a website or app;
• It is provided at the request of the recipient of the service;
• It involves the organisation of remunerated work, irrespective of whether the work is remote or in a specified location; and
• It involves the use of automated monitoring or decision-making systems (algorithmic management systems).
Platforms designed to enable the exploitation or sharing of assets (such as short-term rental accommodation), or to enable the non-commercial resale of goods, are expressly excluded from this definition. The requirement that the worker is remunerated means that platforms for organising the activities of volunteers are also excluded from scope.
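The sketch below is a purely illustrative way of capturing the cumulative nature of the definition and its express exclusions; none of the names come from the directive and all are our own shorthand for the requirements listed above.

```python
# Purely illustrative sketch of the "digital labour platform" definition.
# Field names are our own shorthand, not statutory language.
from dataclasses import dataclass

@dataclass
class Service:
    provided_at_distance_electronically: bool  # e.g. via a website or app
    at_request_of_recipient: bool
    organises_remunerated_work: bool           # remote or at a specified location
    uses_algorithmic_management: bool          # automated monitoring or decision-making
    asset_sharing_platform: bool               # e.g. short-term rental accommodation
    non_commercial_goods_resale: bool

def is_digital_labour_platform(s: Service) -> bool:
    # All four requirements must be met cumulatively...
    meets_requirements = (s.provided_at_distance_electronically
                          and s.at_request_of_recipient
                          and s.organises_remunerated_work
                          and s.uses_algorithmic_management)
    # ...and the express exclusions must not apply. Volunteer platforms
    # already fail the remuneration requirement above.
    return meets_requirements and not (s.asset_sharing_platform
                                       or s.non_commercial_goods_resale)
```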
NB: the focus of this resource is digital regulation so wider aspects of this legislation are out of scope. The directive includes provisions on the correct determination of a platform worker's employment status (including a rebuttable presumption of employment status unless the digital platform can prove that a worker is, in fact, genuinely self-employed), and on transparency around platform work including disclosure of data to relevant authorities.
Chapter III of the directive sets out provisions regarding the algorithmic management of platform workers, where functions that might historically have been taken by human managers are automated.
The directive focuses on automated monitoring systems and automated decision-making systems, both of which operate "through electronic means" and may collect data. Monitoring systems are those which are used for, or support, monitoring, supervising or evaluating someone's work. Decision-making systems are those which take decisions that "significantly affect persons performing platform work". This may include decisions concerning recruitment, allocation of work and working time, remuneration, access to training or promotion, and the worker's contractual status.
The directive seeks to address concerns that workers may not have information about what data is being collected about them, and how it is evaluated. The reasons behind decisions taken by the systems may not be clear, or explainable, and there may be limited or no avenues for challenge, redress or rectification. In relation to personal data, it makes supplemental provisions that will apply over and above the General Data Protection Regulation (GDPR) framework.
The directive requires Member States to outlaw the following functions from being performed by automated monitoring or decision-making systems:
• Processing personal data on the "emotional or psychological state" of the platform worker;
• Processing personal data in relation to private conversations;
• Collecting personal data when the platform worker is not working or making themselves available for platform work;
• Collecting personal data to predict or identify trade union activity by a platform worker;
• Processing personal data to infer protected characteristics about the worker (such as race, sexuality, religion or disabilities), or their migration status, physical or mental health or trade union membership; and
• Processing personal data in order to use biometric profiling to identify the worker by reference to a database of biometric information about individuals.
These provisions will apply from the start of the recruitment process. In addition, the directive provides that any decision to restrict, suspend or terminate the contractual relationship or account of a platform worker (or similarly detrimental decisions) must be taken by a human being.
The directive clarifies that automated processing of a platform worker's personal data for monitoring or decision-making will be high risk under the provisions of the GDPR and so will require a data protection impact assessment (DPIA). Digital labour platforms acting as data controllers must seek the views of the platform workers concerned and their representatives. DPIAs must be provided to worker representatives.
Transparency and redress
Digital labour platforms must disclose the use of algorithmic systems to platform workers, their representatives and relevant authorities. This should include information about:
• All the types of decisions that will be taken by the algorithmic systems on the platform – not just those with significant effect – which makes the obligation significantly more onerous than that under the GDPR;
• Whether monitoring systems are being used, what data they collect, what actions they monitor, supervise or evaluate, what the aim of these monitoring activities is and how the system achieves it, and who will receive the processed data from the system;
• Whether automated decision-making systems are being used, what decisions they will take or support, what data or parameters are relevant to those decisions and how the behaviour of the platform worker influences them;
• How decisions to restrict, suspend or terminate the account of a platform worker or to refuse payment will be taken, as well as how decisions will be taken on their contractual status.
The required information about algorithmic systems must be provided in writing, in clear and plain language. It must be provided on the worker's first day, or ahead of any changes, or on request. It must also be provided in relation to systems used for recruitment and selection to the person going through that process.
Platform workers will have a right to portability of their data from one digital labour platform to another.
The directive creates obligations around human oversight of algorithmic systems. These include a review at least every two years of the impact of the systems, including on working conditions and on equality of treatment of workers. Where a high risk of discrimination is identified, the system must be modified or no longer used.
Digital labour platforms must have sufficient human resources to implement oversight of the impact of individual decisions, with appropriately competent and trained staff able to override automated decisions. Such staff must be protected from dismissal or disciplinary measures for exercising these functions.
Platform workers will have a right to an oral or written explanation in clear and plain language of decisions taken or supported by an algorithmic system, with the option of discussing the decision with a human. The explanation must be in writing for certain decisions, including the restriction, suspension or termination of a platform worker's account, a refusal to pay the worker, or any decision regarding the worker's contractual status.
Workers will have a right to request a review of decisions taken by the algorithmic systems, to a written response within two weeks, to rectification of the decision if it infringes their rights, or to compensation if the error cannot be rectified.
The directive highlights that algorithmic management of workers can cause the intensification of work, which in turn can impact on safety and on the mental and physical health of platform workers. It requires digital labour platforms to evaluate such risks and take preventative and protective measures.
This new regulation will supplement the EU General Data Protection Regulation (GDPR) with harmonised procedures for the cross-border enforcement of the GDPR, in order to create consistency and improve cooperation and efficiency.
To be confirmed – see "Status" below.
This legislation was not finished by the time the European Parliament session ended ahead of the elections in June 2024. Soon after the elections, the Council of the EU agreed its negotiating position on the draft legislation, but it is unclear whether the new Commission, appointed by the new European Parliament, will in fact revive the legislation or let it lapse. If it is revived in its current form, tri-partite negotiations amongst the EU institutions will be able to begin.
Proposal for a Regulation of the European Parliament and of the Council laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679, available here.
This new regulation will impact all those involved in cross-border enforcement of the GDPR, including the national data protection authorities (DPAs), the parties under investigation and any parties making a complaint about cross-border GDPR compliance to a DPA.
The regulation adds new procedural rules for the cross-border enforcement mechanisms set out in Chapter VII of the GDPR. It does not create new enforcement mechanisms but clarifies how the existing GDPR framework should operate. Because GDPR enforcement is decentralised to Member State level, cross-border enforcement within the EU is relatively common – for example, where a data subject lodges a complaint about a data controller or processor located in a different Member State.
The European Commission's review of the application of the GDPR indicated that cross-border enforcement and dispute resolution under the GDPR mechanisms was being hampered by differences in administrative procedures and interpretation of the GDPR at national level. The regulation therefore aims to create a harmonised, common set of rules for cross-border GDPR matters.
The proposed regulation will create harmonised rules in relation to various aspects of cross-border enforcement of the GDPR:
· Complaints, including a prescribed form on which to submit a cross-border complaint under Article 77 GDPR, and a required procedure for full or partial rejection of a complaint. Where the complaint is adopted and an investigation follows, the regulation will create a harmonised set of rights for complainants to receive information and provide their views at various stages of the cross-border investigation and decision-making process. The proposed regulation also deals with amicable settlement of complaints, and issues such as translation.
· Rights of those under investigation, ensuring that the rights of defence are observed consistently by DPAs in different Member States. The new rules clarify the parties' right to be heard at specific stages of the investigation, as well as setting out requirements around the matter file to be maintained by the investigating DPA, the parties' rights of access to the file, and confidentiality.
· Co-operation and dispute resolution between DPAs, providing for clear mechanisms and procedures by which DPAs can share views and information at an early stage in a cross-border investigation, as well as requirements on the lead DPA to share its preliminary assessment of the case. The proposed regulation also includes mechanisms to resolve differences on key issues in the case at an early stage. Where disputes arise, provision is made for the European Data Protection Board (EDPB) to adopt an urgent binding decision on the disputed issue, so that the lead DPA can then progress the matter without further delay. The new rules will also include directions as to how DPAs should submit their reasoned objections on a disputed issue, and as to the role of all those involved – the lead DPA, other DPAs and the EDPB.
In agreeing its negotiating position, the Council of the EU is seeking amendments to: (i) ensure clearer timelines to speed up the cooperation process; (ii) provide the option of not applying the additional rules if the case is straightforward; and (iii) provide for an early resolution mechanism allowing DPAs to resolve a case before initiating the standard procedures for dealing with a cross-border complaint, e.g. where the organisation in question has addressed the infringement or an amicable settlement has been reached.
The new rules will not affect the provisions of the GDPR concerning the rights of data subjects, the obligations of data controllers or processors, or the lawful grounds for data processing. Nor will they affect enforcement where there is no cross-border aspect.
The Product Regulation and Metrology Bill proposes to recognise beneficial EU regulations, address new product risks like AI, and clarify supply chain responsibilities, in particular, for online marketplaces.
To be confirmed – see "Status" below.
The draft Product Regulation and Metrology Bill has been published and had its first reading in the House of Lords on 4 September 2024. Its second reading is scheduled for 8 October 2024.
The Product Regulation and Metrology Bill
Anyone placing products on the market in the UK.
In the King's Speech 2024, the government set out plans to introduce a Product Safety and Metrology Bill which would "preserve the UK's status as a global leader in product regulation, supporting businesses and protecting consumers". The King's Speech 2024 background briefing notes specifically recognised that the EU is reforming product safety regulation in line with technological developments through, for example, the new EU General Product Safety Regulation and the EU Product Liability Directive.
The draft bill, named the Product Regulation and Metrology Bill, was introduced to the House of Lords on 4 September 2024 when it had its first reading.
The aim of the bill is to make it easier for new product regulations to be introduced by the government.
A key aspect of the bill is that it allows regulations to be introduced for products placed on the UK market that are closely aligned with relevant EU laws, while at the same time allowing for necessary divergence – a move that is likely to be welcomed by businesses.
Other aspects of the bill include enhancing compliance and enforcement capabilities, including improved data sharing between regulators and market surveillance authorities; clarifying the responsibilities of those in the supply chain, including online marketplaces; and updating the legal metrology framework. The bill also provides for cost recovery mechanisms and emergency modifications.
Products excluded from the bill include: food, animal feedstuffs, plants, fruit and fungi, plant protection products, animal by-products, products of animal origin, aircraft, military equipment, medicines and medical devices.
The new bill will likely have a significant impact on product regulation in the UK and businesses will need to keep abreast of developments.
King's Speech 2024: background briefing notes
What did the UK King's Speech contain for business? - Osborne Clarke
Product Regulation and Metrology Bill [HL] - Parliamentary Bills - UK Parliament