Our timeline is an overview of major digital regulation that has come into effect in recent years, is in the legislative process, or is coming into effect soon. It is designed to help you keep track of relevant regulatory developments, understand their status and identify which pieces of legislation will actually affect your business.
Our international regulatory lawyers offer support for all stages of your business's digitalisation journey. We can help you prepare for, implement and comply with all aspects of digital regulation, whatever your sector. Explore our digital regulation expertise.
Last updated: 28 February 2025
The timeline is indicative only and not a complete overview of applicable law.
This Act seeks to increase sharing of public sector data, regulates data marketplaces, and creates the concept of data altruism for trusted sharing.
Next important deadline(s): 24 September 2025
The Data Governance Act applies from 24 September 2023, but businesses that were already providing data intermediary services on 23 June 2022 have until 24 September 2025 to comply with relevant data intermediary provisions.
Enacted. Mostly in force.
24 September 2025 (for businesses already providing data intermediary services on 23 June 2022).
Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), available here.
The Data Governance Act (DGA) deals with three main areas of data sharing in the EU:
- Data sharing by public authorities, creating a framework to facilitate the sharing of such data when it is protected under confidentiality obligations, intellectual property or data protection laws and hence falls outside the scope of the Open Data Directive.
- Data-sharing through profit-making intermediaries, creating a new regulatory framework. These provisions apply to: (i) data exchange services or platforms, (ii) services that enable individuals to control the sharing of their personal data, and (iii) so-called "data cooperatives" that support their members in exercising their rights with respect to data. Web browsers, email services, cloud storage, analytics or data sharing software are excluded, as are services used in a closed group such as those ensuring the functioning of Internet of Things (IoT) devices or objects.
- "Data altruism" where data is shared by individuals without reward and via a not-for-profit organisation pursuing objectives in the general interest. A regulatory framework is created to ensure transparency and the rights of data subjects.
The DGA aims to create a new EU model for data sharing, with trusted frameworks and organisations to encourage sharing, but also regulates data-sharing organisations. Data governance in this context means rules, structures, processes and technical means to share and pool data.
The DGA seeks to address the problem that some public data cannot be made open as it is protected by confidentiality, intellectual property rights or data privacy rules. Requirements are imposed on EU Member States to optimise the availability of data while protecting confidentiality and privacy, for example using technical means such as anonymisation, pseudonymisation or accessing data in secure data rooms. Exclusive data sharing arrangements are prohibited except in limited public interest situations. Other requirements include a maximum delay before answering a data access request, and the creation of a single information point by each Member State. The Commission has created a searchable EU register of all information compiled by the national single information points.
The new data intermediary regulatory framework aims to grow private sector trust in opening up data by boosting the neutrality and transparency of data intermediaries. Data intermediaries are required to be neutral third parties, which do not monetise the data themselves by selling it to another business, or feeding it into their own products and services. Data intermediaries are required to act in the best interests of the data subjects. The DGA creates a full regulatory framework for data intermediation services, which will face the additional costs and burden of notification and compliance requirements. The EU Commission has, via an Implementing Regulation, introduced a logo for trusted "EU recognised data intermediary" organisations to differentiate recognised trusted services (i.e. those services that satisfy the compliance requirements of the DGA) from other services. The Commission has created a register of all data intermediation services providers in the EU.
Member States may choose to legislate for "dissuasive financial penalties" for non-compliance. Data intermediaries must notify the national competent authority of their services, which will monitor compliance by the intermediary. As noted, existing data intermediaries in operation on 23 June 2022 have been given until 24 September 2025 to bring their operations into compliance.
The DGA creates new legal structures and a regulatory framework for "data altruism". This will enable people and businesses to share their data voluntarily and without reward for a purpose in the public interest (for example, medical or environmental research). As with data intermediaries, the Commission has introduced a logo to differentiate a compliant "EU recognised data altruism organisation" from other services.
Data altruism organisations must be registered in the Commission's new EU public register of recognised data altruism organisations and the data altruism logo must be accompanied by a QR code with a link to that register. Data altruism organisations must be not-for-profit, pursuing stated objectives in the general interest and inviting data holders to contribute their data in pursuance of those objectives. They will not be able to use the pooled data for any other purposes. The DGA further requires independent functioning and functional separation from other activities, as well as requirements to safeguard transparency and data subjects' rights.
The relevant authorities in Member States are responsible for the registration of data altruism organisations and for monitoring compliance by data intermediation services providers.
In May 2024, the European Commission opened infringement procedures against 18 Member States that have either failed to designate the responsible authorities to implement the DGA, or failed to prove that such authorities are empowered to carry out their duties under the DGA. In July 2024, the Commission also opened infringement proceedings against Ireland.
In December 2024, the Commission sent a reasoned opinion to ten Member States that had failed to comply with the requirements relating to responsible authorities. These Member States have two months to respond and take the necessary measures. Otherwise, the Commission may refer the cases to the Court of Justice of the EU.
The DSA builds on the e-Commerce Directive to address new challenges in the digital sphere.
Next important deadline(s): 1 July 2025
Most in-scope service providers will need to comply from 17 February 2024.
Provisions enabling the designation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) applied from 16 November 2022. The VLOPs and VLOSEs that were designated on 25 April 2023 had to comply by 25 August 2023.
Providers that are subsequently designated as VLOPs and VLOSEs by the European Commission will have four months from the Commission's notification of designation to comply with the extra obligations applicable to VLOPs and VLOSEs.
The list of designated VLOPs and VLOSEs can be viewed here.
In force.
1 July 2025 – the Code of Practice on Disinformation takes effect as a Digital Services Act Voluntary Code of Conduct
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), available here.
The Digital Services Act (DSA) broadly applies to all online intermediaries, irrespective of their place of establishment, that offer digital services or products to users in the EU.
It sets out a layered approach to regulation, with requirements increasing cumulatively depending on the classification of the service provider: intermediary services, hosting services, online platforms and, at the top of the stack, VLOPs and VLOSEs. A simplified sketch of this layering follows below.
VLOPs and VLOSEs are online platforms and online search engines with more than 45 million monthly active users in the EU that are designated as such by the Commission.
Micro and small businesses are exempt from the requirements for online platforms and online marketplaces, unless they qualify as a VLOP or VLOSE.
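The cumulative effect of this tiering can be illustrated with a short sketch. The tier names and obligation labels below are simplified assumptions for illustration only, not the statutory categories or a complete statement of the duties at each layer:

```python
# Illustrative sketch of the DSA's cumulative tiers. The category names and
# obligation labels are simplified assumptions, not statutory definitions.

DSA_TIERS = [
    ("intermediary service", ["transparency reporting", "points of contact"]),
    ("hosting service", ["notice-and-action mechanism", "statements of reasons"]),
    ("online platform", ["complaint handling", "trusted flaggers", "ad transparency"]),
    ("VLOP/VLOSE", ["systemic risk assessments", "independent audits", "data access"]),
]

def cumulative_obligations(category: str) -> list[str]:
    """Return the illustrative obligations that accumulate up to a category."""
    obligations: list[str] = []
    for tier, duties in DSA_TIERS:
        obligations.extend(duties)
        if tier == category:
            return obligations
    raise ValueError(f"unknown category: {category!r}")

# An online platform picks up the intermediary and hosting duties as well:
print(cumulative_obligations("online platform"))
```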
The DSA aims to update the "rules of the internet", with the key principle that what is illegal offline should also be illegal online.
The DSA both preserves and updates the intermediary liability position set out in the e-Commerce Directive (e.g. the hosting defence). However, the range of requirements fundamentally changes the liability of online services in the EU with strict obligations for the supervision of illegal content online, as well as new rules on transparency, user safety and advertising accountability. It also includes detailed rules on features that must be incorporated into online platforms and marketplaces or included in user terms and conditions.
The DSA preserves the liability defences under Articles 12-15 of the e-Commerce Directive (the "mere conduit" defence, the "caching" defence, the "hosting" defence and the "no obligation to monitor" provision). Clarity is added that these defences remain available even if the platform has voluntarily taken action to detect, identify and remove, or disable access to, illegal content.
However, notice and take down obligations on online platforms are strengthened. Platforms must put in place mechanisms for users to report illicit content (such as hate speech, terror propaganda, discrimination or counterfeit goods). Platforms must decide what to do about reported content in a timely, diligent and non-arbitrary manner, in order to continue to benefit from the hosting defence. Specified mechanisms to challenge content removal must be made available, including alternative dispute resolution. Online platforms must monitor for repeated submission of illegal content or unfounded complaints by particular users, and suspend their access.
Third party content monitors can apply for "trusted flagger" status. Platforms must prioritise illegal content reports from such individuals and bodies.
General obligations will apply in relation to user safety, transparency, controls and information.
Annual reports are required from all intermediary service providers on their content moderation activity, with the required contents depending on the service provided (see "Implementation progress" below). Hosting service providers must include information about illegal content notices and complaints received and the action taken. The general obligations also include provisions on content recommendations and online terms, and obligations to publish and communicate information on the average monthly active recipients of the service.
In September 2023, the Commission launched the DSA Transparency Database which is a publicly accessible database of "statements of reasons" that online platforms must submit to the Commission setting out their reasons for making content moderation decisions (with the exception of micro and small enterprises). The Commission has also published an open-source software package to facilitate analysis of data in the Transparency Database.
Under the DSA, websites must not be designed and organised in a way that deceives or manipulates users – so-called "dark patterns" – but must be designed in a user-friendly and age-appropriate way to make sure users can make informed decisions.
The DSA introduces traceability provisions for online marketplaces. These include a "know-your-trader" obligation, requiring online marketplaces that allow B2C sales to conduct due diligence on traders prior to allowing them to use the platform.
Platforms must be designed to facilitate compliance with traders' legal obligations such as e-commerce and product safety requirements. The illegal content take-down provisions are designed to facilitate the removal of illegal or counterfeit products, and platforms have obligations to contact consumers who have purchased those products.
Specific transparency obligations apply to online advertising. It must be clear to users whether a displayed item is an advertisement and from whom it originates, and users must be given meaningful information about the main parameters used to determine to whom an advertisement is displayed.
VLOPs and VLOSEs are designated by the Commission. The first tranche of designations (17 VLOPs and 2 VLOSEs) was made on 25 April 2023. Platforms and search engines must report their user numbers at least every six months. Obligations on VLOPs and VLOSEs apply from four months after their designation as such.
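To make the timing concrete, here is a minimal sketch of the four-month compliance deadline calculation; the month arithmetic (clamping to the last day of shorter months) is an assumption for illustration:

```python
# Minimal sketch: a newly designated VLOP/VLOSE must comply four months after
# the Commission notifies its designation. The clamping behaviour for shorter
# months is an illustrative assumption.
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# The first tranche was designated on 25 April 2023, giving a compliance
# date of 25 August 2023 (as noted above).
print(add_months(date(2023, 4, 25), 4))  # 2023-08-25
```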
An additional layer of obligations is imposed on VLOPs and VLOSEs. They must diligently monitor and mitigate systemic risks, including the use of their service to disseminate illegal content; the impact of their service on human rights; and the risk of their service negatively affecting civic discourse, electoral processes and public security, as well as risks relating to gender-based violence, the protection of public health and minors, and individuals' physical and mental well-being.
The European Commission's guidelines on mitigating systemic risks in relation to electoral processes were published in the EU's Official Journal on 26 April 2024. The guidelines aim to support VLOPs and VLOSEs with their compliance obligations under Article 35 of the DSA (mitigation of risks) and with other obligations relevant to elections. As well as mitigation measures, the guidelines cover best practices before, during and after electoral events.
VLOPs and VLOSEs must also implement strict processes for regular risk assessments, produce reports on their content moderation activity, and conduct independent annual audits assessing their compliance with the DSA and with any commitments made under codes of conduct and crisis protocols adopted by the Commission. They must also disclose the main parameters of their recommender systems and provide at least one recommender system that does not involve profiling. In addition, the EU Commission, as well as member states, can seek access to their algorithms.
The Commission has also launched a DSA whistleblower tool which allows individuals with inside information to report harmful practices by VLOPs and VLOSEs that are potentially in breach of the DSA, anonymously if preferred.
Each member state must designate one or more competent authorities as responsible for the application and enforcement of the DSA in its jurisdiction, and must designate one of these as its "Digital Services Coordinator" (DSC). The DSCs are responsible for enforcement of the DSA in their respective territories in respect of platforms that are not VLOPs or VLOSEs, and can work together. The DSCs make up the European Board for Digital Services, chaired by the European Commission, which acts as an independent advisory group on the application and enforcement of the DSA.
In April 2024, the Commission opened infringement procedures against six member states which either had not designated their DSC by the 17 February 2024 deadline (Estonia, Poland and Slovakia), or had not yet granted them full powers to carry out their duties under the DSA (Cyprus, Czechia and Portugal). In July 2024, the Commission opened infringement proceedings against six further member states (Belgium, Spain, Croatia, Luxembourg, the Netherlands and Sweden).
Enforcement in relation to VLOPs and VLOSEs is exclusively reserved to the EU Commission. Fines of up to 6% of global annual turnover can be levied for breaches.
The European Commission has been, and continues to be, proactive in sending requests for information to various platforms concerning their compliance with their DSA obligations. It has also opened formal proceedings against some platforms following preliminary investigations and requests for information.
In May 2024, the European Commission and the UK's Ofcom signed an administrative arrangement to support their work in enforcing the new online safety regimes in both the EU and the UK under the DSA and the UK Online Safety Act 2023 respectively.
The Commission and the European Regulators Group for Audiovisual Media Services (ERGA), which brings together national media regulators under the Audiovisual Media Services Directive, have also agreed to work together on enforcement, focussing on VLOPs and VLOSEs. ERGA will act as a facilitator to gather information at national level and report on issues such as media pluralism, disinformation and the protection of children to help the Commission identify and assess the systemic risks in these areas. It is also expected to assist the DSCs.
On 22 February 2024, the delegated regulation on independent audits entered into force. The delegated act provides a framework to guide VLOPs and VLOSEs when preparing their external independent audits. It also provides mandatory templates for auditors to use when completing an audit report, as well as mandatory templates for the VLOPs and VLOSEs to use when completing their audit implementation reports.
The first risk assessment reports and independent audit reports from designated VLOPs and VLOSEs were published in November 2024.
In summer 2024, the EU Commission conducted a call for evidence to inform its upcoming guidelines on the protection of children online under the DSA. The guidelines will apply to all online platforms that are accessible to children, including those not directed at children but which still have child users due to a lack of age-assurance mechanisms. The Commission will use the input from the call for evidence to draft the guidelines, on which it will consult separately and which it plans to adopt before summer 2025.
On 29 October 2024, the Commission launched a consultation on a draft delegated regulation on the rules for researchers to access platforms' data under the DSA. Article 40 allows "vetted researchers" to access VLOPs' and VLOSEs' data, subject to approval from a Digital Services Coordinator, for the purposes of evaluating systemic risks and mitigation measures. The consultation closed on 10 December 2024, and the rules are expected to be adopted in the first quarter of 2025.
In November 2024, the Commission adopted an implementing regulation on transparency reporting under the DSA. The regulation standardises templates and reporting periods for the transparency reports that providers of intermediary services have to publish in relation to their content moderation practices. VLOPs and VLOSEs must report twice a year, while other services report annually. Under the implementing regulation, providers must start collecting data in line with the templates from 1 July 2025, with the first harmonised reports due at the beginning of 2026. The Commission also plans to update the requirements for submitting statements of reasons to the DSA transparency database so that they align with the implementing regulation.
In January 2025, the Commission and the European Board for Digital Services endorsed the integration of the voluntary Code of Practice on Disinformation into the framework of the DSA. With integration, full adherence to the Code may be considered an appropriate risk mitigation measure for signatories designated as VLOPs and VLOSEs. The Code will therefore be a benchmark for determining DSA compliance. Conversion of the Code into a DSA voluntary code of conduct will take effect from 1 July 2025, making its commitments capable of audit from that date onwards.
17 February 2024: The Digital Services Act (DSA) is now fully applicable
EU gets one step closer to adopting the Digital Services Act
Rethinking regulation of data-driven digital platforms
EU's Digital Service Act to introduce first express ban on 'dark patterns'
Commission finds EU consumer law inadequate to protect consumers in the digital world
These regulations introduce new rules for digital platforms on collecting and reporting information about their sellers.
Next important deadline(s): None
1 January 2024 in the UK (with the first reports due by 31 January 2025)
Regulations enacted (and came into force on 1 January 2024)
None (full compliance already required and first reports made)
The Platform Operators (Due Diligence and Reporting Requirements) Regulations 2023, available here.
All online platforms and those who sell through them.
The UK's implementing regulations include two categories of "Excluded Platform Operators" which are platform operators whose entire business model is such that the platform operator either:
· does not allow sellers to derive a profit from the consideration, or
· has no "reportable sellers".
In each case, the platform operator may only rely on this exemption if it gives notice to HMRC in the form and manner specified in directions given by HMRC.
These regulations introduce standardised rules for collecting and reporting relevant information about sellers of goods and relevant services – currently personal services (including the provision of transportation and delivery services) and the rental of immoveable property – and their income from digital platform activities, and for exchanging that information between tax authorities. Under the rules, platforms must (see the illustrative sketch after this list):
· collect certain details about their sellers, including information to accurately identify: who the seller is; where they are based; how much they have earned on the platform over an annual period
· verify the seller’s information to ensure it is accurate
· report the information, including the seller’s income, to the tax authority annually by 31 January
· provide that information to sellers, to help them complete their tax returns.
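As a hedged illustration of the data points above, the following sketch models a seller record; the field names are assumptions for illustration, not the schema prescribed by the regulations or HMRC:

```python
# Illustrative sketch of the seller information a platform collects, verifies
# and reports. Field names are assumptions; the regulations and HMRC guidance
# prescribe the actual required data points.
from dataclasses import dataclass

@dataclass
class SellerRecord:
    name: str
    jurisdiction: str        # where the seller is based / resident
    platform_income: float   # earnings on the platform over the annual period
    verified: bool = False   # set once due diligence checks are complete

def ready_to_report(record: SellerRecord) -> bool:
    """Only verified records should be included in the 31 January report."""
    return record.verified and record.platform_income >= 0

seller = SellerRecord("Example Trader Ltd", "UK", 42_000.0, verified=True)
assert ready_to_report(seller)
```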
There are penalties for non-compliance. Tax authorities will exchange the information with the tax authority of the jurisdiction in which the seller is resident (or in which the rental property is located) to ensure local tax compliance.
Pillar Two is the second part of a two pillar solution to address the tax challenges arising from digitalisation.
Next important deadline(s): None
1 January 2024 for certain elements of Pillar Two in the UK (the "Multinational top-up tax" (MTT) and "Domestic top-up tax" (DTT)) and accounting periods beginning on or after 31 December 2024 for the "undertaxed profits rule" (UTPR).
The provisions for the main implementation of Pillar Two (MTT and DTT) are enacted although the implementation of the UTPR will be introduced by Finance Bill 2024-25 with effect for accounting periods beginning on or after 31 December 2024.
None (compliance already required for certain elements)
See the provisions dealing with MTT, DTT and UTPR in Parts 3 and 4 of Finance (No.2) Act 2023, available here and subsequent amendments in Finance Act 2024 available here.
See also Finance Bill 2024-25, available here, for amendments to the MTT and DTT legislation to implement the UTPR and other additional amendments to the MTT and DTT to introduce the transitional country by country reporting safe harbour anti-arbitrage rule.
All industries impacted by digitalisation with certain exceptions (e.g. financial services) but focussed on large multinational businesses (MNEs).
Agreement has been reached with over 130 members of the OECD/G20 Inclusive Framework to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
Pillar Two sets a new minimum corporate tax rate of 15% for large MNEs (those with a combined group turnover of more than EUR 750 million) on global profits, and provides a new set of rules, known as the global anti-base erosion (GloBE) rules, which expand jurisdictions' taxing rights so that the global minimum rate applies to group profits worldwide. If a jurisdiction does not adequately tax profits, then other jurisdictions, in particular shareholder or source jurisdictions, can seek to tax such profits.
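In highly simplified terms, the core GloBE mechanic can be sketched as follows; this ignores adjustments such as the substance-based income exclusion and safe harbours, and the figures are assumed for illustration:

```python
# Highly simplified sketch of the GloBE top-up idea: if a group's effective
# tax rate (ETR) in a jurisdiction falls below the 15% minimum, a top-up
# rate applies to its profits there. Real calculations involve adjustments
# (e.g. the substance-based income exclusion) that are ignored here.

MINIMUM_RATE = 0.15

def top_up_tax(covered_taxes: float, globe_income: float) -> float:
    """Return the illustrative top-up tax for a single jurisdiction."""
    if globe_income <= 0:
        return 0.0
    etr = covered_taxes / globe_income
    return max(0.0, MINIMUM_RATE - etr) * globe_income

# Assumed example: EUR 100m of profits taxed at an ETR of 9% leaves a 6%
# shortfall, i.e. roughly EUR 6m of top-up tax.
print(top_up_tax(covered_taxes=9_000_000, globe_income=100_000_000))
```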
The OECD framework for Pillar Two will operate on a country-by-country basis, with implementation dates varying between countries. The UK has enacted primary legislation in the Finance (No.2) Act 2023 to implement the MTT and DTT from 1 January 2024 (as further amended by the Finance Act 2024) and legislation to implement the UTPR will be introduced by the Finance Bill 2024-25 with effect for accounting periods beginning on or after 31 December 2024.
Part 1 introduces cybersecurity obligations for connected consumer products.
Next important deadline(s): None
Provisions relating to connectable products in Part 1 came into effect on 29 April 2024
Secondary legislation
The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023 (giving effect to the connectable products provisions) are available here.
The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) (Amendment) Regulations 2025 are available here.
In effect
None (full compliance already required)
UK Product Security and Telecommunications Infrastructure Act 2022, available here.
Manufacturers of connectable products (Product Security) and telecoms network operators, infrastructure providers and site providers (Telecommunications Infrastructure). This affects products already on sale/available.
Part 1 of The Product Security and Telecommunications Infrastructure Act 2022 (PSTIA) sets out powers for UK Government Ministers to specify security requirements relating to relevant connectable products (a product which is internet or network connectable).
The obligations imposed on manufacturers of connectable/digital products include the following (a simplified sketch follows after this list):
· Restricting the use of default passwords;
· Providing information to consumers on how to report security issues; and
· Providing information on minimum periods for which devices will receive security updates.
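These headline requirements can be illustrated with a short sketch; the field names and pass/fail logic are assumptions for illustration, not the statutory conditions in the 2023 Regulations:

```python
# Illustrative checks mirroring the three headline PSTIA security
# requirements. Field names and logic are simplified assumptions, not the
# statutory conditions in the 2023 Regulations.
from dataclasses import dataclass

COMMON_DEFAULTS = {"admin", "password", "12345678"}  # assumed example list

@dataclass
class ConnectableProduct:
    default_password: str | None      # None means a unique per-device password
    security_contact_published: bool  # how consumers can report security issues
    update_period_published: bool     # minimum security update period disclosed

def headline_requirements_met(p: ConnectableProduct) -> bool:
    password_ok = p.default_password is None or p.default_password not in COMMON_DEFAULTS
    return password_ok and p.security_contact_published and p.update_period_published

device = ConnectableProduct(None, True, True)
print(headline_requirements_met(device))  # True
```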
In addition, businesses may face liability where cybersecurity vulnerabilities lead to the loss or corruption of consumer data.
The PSTIA also imposes obligations on manufacturers, importers and distributors not to make products available in the UK unless accompanied by a statement of compliance with applicable security conditions.
The PSTIA leaves room for further delegated legislation to introduce additional obligations, and clarifications.
For more on the practical steps to comply with the new regime, please see the link to our Eating Compliance for Breakfast webinar below.
On 8 January 2024, the Office for Products Safety and Standards (OPSS) published guidance for businesses that need to comply with the requirements. See the link to the February Regulatory Outlook below for more.
Additional guidance was added on 23 April 2024 specifying that the product must be "accompanied" by its statement of compliance (SoC), and defining the SoC as a "document". However, the terms "document" and "accompany" are not clearly defined in the PSTIA, so businesses must determine how they will comply with these requirements for their individual products.
⭐ Amending regulations were made on 24 February 2025 and came into force on 25 February 2025. The changes exempt the following categories of products from the PSTIA: motor vehicles, agricultural and forestry vehicles, and two- or three-wheel vehicles and quadricycles. The amending regulations also clarify that where a manufacturer of relevant connectable products extends the minimum length of time for which security updates relating to such products will be provided, the new minimum length of time must be published as soon as is practicable.
This new regulation aims to boost competition in EU digital markets by regulating online "gatekeepers".
Next important deadline(s): H1 2025
The first group of gatekeepers designated under the DMA have had to comply with its obligations since 6 March 2024. Additional designations continue to be made by the Commission: obligations take effect six months from the date of designation.
In effect.
In 2024 the Commission launched reviews of compliance by Alphabet, Apple, Amazon and Meta. These proceedings are expected to conclude at various points in the first half of 2025.
Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act), available here.
The DMA is aimed at digital businesses offering a "core platform service", with significant scale and reach within the EU – these firms will be designated as "gatekeepers".
A "core platform service" includes: online intermediation services; online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communication services; operating systems; cloud computing services; and certain advertising services so far as they are provided by a provider of any of the core platform services.
A "gatekeeper" is a core platform service provider that has a significant impact on the internal market, whose platform serves as an important gateway for business users to reach end users, and which enjoys an entrenched and durable position. While there is some nuance in this designation, a provider will be presumed to satisfy the designation test if (a simplified sketch of this test follows the list):
• it generated annual EU revenue of €7.5 billion or more in each of the last three financial years, or had an average market capitalisation of at least €75 billion in the last financial year, and it provides the same core platform service in at least three member states; and
• it has more than 45 million monthly active EU end users and more than 10,000 yearly active EU business users; and
• it has met these user thresholds in each of the last three financial years.
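By way of rough illustration only, the sketch below encodes the quantitative presumption described above. The function, parameter names and figures are hypothetical; in practice, designation is a qualitative Commission decision and providers can submit rebuttal arguments.

```python
# Hypothetical sketch of the DMA's quantitative designation presumption as
# summarised above. All inputs are illustrative assumptions.

def presumed_gatekeeper(revenue_eur_last_3y: list[float],
                        avg_market_cap_eur: float,
                        monthly_end_users_last_3y: list[int],
                        yearly_business_users_last_3y: list[int],
                        member_states_served: int) -> bool:
    # Financial scale: revenue in each of the last three years, or market cap.
    financial_scale = (all(r >= 7.5e9 for r in revenue_eur_last_3y)
                       or avg_market_cap_eur >= 75e9)
    # The core platform service must be provided in at least three member states.
    eu_footprint = member_states_served >= 3
    # User thresholds must be met in each of the last three financial years.
    user_scale = (all(u > 45_000_000 for u in monthly_end_users_last_3y)
                  and all(b > 10_000 for b in yearly_business_users_last_3y))
    return financial_scale and eu_footprint and user_scale

# Example with illustrative figures only:
print(presumed_gatekeeper([8e9, 9e9, 10e9], 80e9,
                          [50_000_000] * 3, [12_000] * 3, 5))  # True
```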
The DMA has two main limbs. The first is a set of rules that apply only to designated gatekeepers and comprises a list of do's and don'ts for these providers. The rules are designed to address perceived competitive imbalances arising from the gatekeeper's position in the market.
Examples of obligations on gatekeepers include:
• Allowing third parties to inter-operate with the gatekeeper's own service in certain specific situations.
• Allowing business users to access the data that they generate in their use of the gatekeeper's platform.
• Providing advertisers and publishers, free of charge, with the tools and information necessary to carry out their own independent verification of advertisements hosted by the gatekeeper.
• Allowing business users to promote their offer and conclude contracts with customers outside the gatekeeper's platform.
Examples of prohibited behaviour include:
• Self-preferencing – treating services and products offered by the gatekeeper itself more favourably in ranking than similar services or products offered by third parties on the gatekeeper's platform.
• Preventing consumers from linking up to businesses outside their platforms.
• Preventing users from uninstalling any pre-installed software or app.
The second limb covers market investigation powers for the European Commission to conduct investigations to consider whether: a core platform provider should be designated as a gatekeeper; there has been systematic non-compliance; and services should be added to the list of core platform services.
The Commission also has powers to supplement the gatekeeper obligations under the DMA with additional obligations through further legislation, based on a market investigation. This is to ensure that the legislation maintains pace with digital markets as they evolve.
The Commission can also designate firms as "emerging" gatekeepers, that is, they are on the way to reaching the thresholds set out in the DMA for designation as a gatekeeper.
Failure to comply with the rules could lead to fines of up to 10% of global turnover, and, in the case of systematic infringements, structural remedies.
The companies so far designated by the Commission as gatekeepers are Alphabet, Amazon, Apple, Booking.com, ByteDance, Meta and Microsoft, in relation to the following core platform services:
• Social Networks: TikTok, Facebook, Instagram, LinkedIn;
• Online Intermediation Services: Google Maps, Google Play, Google Shopping, Amazon Marketplace, App Store, Meta Marketplace, Booking.com;
• Advertising: Google, Amazon, Meta;
• Number-Independent Interpersonal Communication Services: WhatsApp, Messenger;
• Video Sharing: YouTube;
• Search: Google Search;
• Browsers: Chrome, Safari;
• Operating Systems: Android, iOS, iPadOS and Windows PC OS.
Challenges have been lodged by some gatekeepers in relation to some of these designations. The Commission has also launched several investigations into non-compliance with the DMA by some gatekeepers. The Commission intends to complete these investigations during the first half of 2025.
In addition, the Commission conducted a market investigation to further assess rebuttal arguments submitted by the online social networking service X, which argued that even if X was deemed to meet the quantitative thresholds it did not qualify as an important gateway between businesses and consumers, meaning that its online social networking service should not be a designated gatekeeper. The Commission found in X's favour.
The Commission has also signalled its intention to monitor the development of AI tools in the context of the DMA.
On 23 January 2025, the DMA working group suggested that the Commission should consider whether to designate AI or cloud infrastructure services.
This directive updates NIS 1 and requires certain organisations to strengthen their cyber security defences.
Next important deadline(s): 2025
The directive entered into force on 16 January 2023. All member states were obliged to transpose its measures into national law by 17 October 2024.
Enacted and partially in effect. Member state national enacting legislation required.
National implementing legislation can be tracked from this page.
2025 (national implementing legislation expected; see implementation progress below)
Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, available here.
The NIS 2 directive (on security of network and information systems) updates earlier EU legislation in this area (NIS 1). It significantly extends the list of sectors within the scope of the directive and provides more detail on the entities that are subject to its cybersecurity requirements. The scope of the directive is set out in detail, intended to remove inconsistencies in scope between the implementing legislation enacted at member state level for the NIS 1 directive. The directive applies to all businesses operating within the broad sectors listed in Annex I or II which qualify as a "medium" enterprise or larger (i.e. employing 50 people or more, and with either annual turnover over €10 million or a balance sheet exceeding €10 million).
However, the carve-out for smaller businesses does not apply to certain entities in the sectors listed in Annex I or II regardless of their size, such as providers of public electronic communications networks or services, trust service providers, and entities that are the sole provider in a member state of a service essential to critical societal or economic activities.
The directive also applies to entities of all sizes listed under the Critical Entities Resilience directive and those that provide domain name registration services.
The directive does not apply to public administration entities that carry out activities in certain areas including national security, public security, defence, or law enforcement. Member states may also exempt specific entities in the above areas.
Annex I covers energy, transport, banking and financial market infrastructure, healthcare, water and sewage, digital infrastructure and certain business-to-business information and communications technology services.
Annex II includes postal and courier services, waste management, some types of manufacturing and digital platform providers for online marketplaces, search engines and social networks.
The scope of this legislation is detailed and the above is a summary only.
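Purely to illustrate the size threshold summarised above, the sketch below encodes the "medium enterprise or larger" test as stated; the function and figures are hypothetical, and scoping under the directive involves further criteria not shown here.

```python
# Hypothetical sketch of the "medium enterprise or larger" size test as
# summarised above (50+ staff, with turnover or balance sheet above EUR 10m).

def meets_size_threshold(employees: int,
                         annual_turnover_eur: float,
                         balance_sheet_eur: float) -> bool:
    return employees >= 50 and (annual_turnover_eur > 10_000_000
                                or balance_sheet_eur > 10_000_000)

# Example: a 120-person business with EUR 25m turnover meets the size test.
print(meets_size_threshold(120, 25_000_000, 8_000_000))  # True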
The directive introduces a set of mandatory measures that each in-scope entity must address to prevent cybersecurity incidents. These include policies on risk analysis and information system security, cyber hygiene practices, cybersecurity training, and multi-factor and continuous authentication solutions.
To ensure that organisations are aware of their NIS 2 obligations and acquire knowledge on cybersecurity risk prevention, management bodies are required to attend training sessions and to pass this information and knowledge on to their employees.
Member states must implement effective, proportionate and dissuasive sanctions. Fines for non-compliance can reach the greater of 2% of the organisation's worldwide annual turnover or €10 million for essential entities, and the greater of 1.4% of worldwide annual turnover or €7 million for important entities. For an essential entity with €2 billion in annual turnover, for example, the maximum fine would be €40 million.
The new directive clarifies the scope of reporting obligations with more specific provisions regarding the reporting process, content and timelines. In particular, entities affected by an incident that has a significant impact on the provision of their services must submit an initial assessment of the incident to their computer security incident response team (CSIRT) or, where applicable, to the competent authority within 24 hours of becoming aware of the incident. A final report must be provided within a month of the initial notification.
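To illustrate the timeline arithmetic only, the sketch below derives the two milestones described above from the moment an entity becomes aware of an incident; the one-month deadline is approximated as 30 days, and the function name is a hypothetical label.

```python
# Hypothetical sketch of the reporting milestones described above: an initial
# assessment within 24 hours of awareness, and a final report within a month
# (approximated here as 30 days) of the initial notification.
from datetime import datetime, timedelta

def reporting_milestones(aware_at: datetime) -> dict[str, datetime]:
    initial_due = aware_at + timedelta(hours=24)
    final_due = initial_due + timedelta(days=30)
    return {"initial_assessment_due": initial_due, "final_report_due": final_due}

# Example: awareness on 3 March 2025 at 09:00.
print(reporting_milestones(datetime(2025, 3, 3, 9, 0)))
```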
One of the most novel aspects of this directive is the requirement for member states to promote the use of innovative technologies, including artificial intelligence, to improve the detection and prevention of cyber-attacks, which would allow for a more efficient and effective allocation of resources to combat these threats.
Member states were obliged to legislate for appropriate measures to implement the directive by 17 October 2024. National implementing legislation can be tracked from this page. As of 28 February 2025, Belgium, Croatia, Greece, Hungary, Italy, Latvia, Lithuania, Romania and Slovakia have completed transposition of the directive into their national laws.
Organisations in those countries must therefore act swiftly to ensure compliance as the national laws take effect. In particular, they will have to consider whether they fall within the scope of the directive and therefore need to register with the national cyber security authority before the applicable national deadline.
On 4 December 2024, the Commission announced that it had opened infringement procedures against 23 member states (Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Cyprus, Latvia, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland and Sweden) for failing to meet the 17 October deadline. These member states have two months to respond to the letters of formal notice by completing the transposition process and notifying the Commission of the measures introduced.
⭐ As of 28 February 2025, the member states which have published draft transposition law and are expected to adopt the national implementing acts in 2025 are: Austria, Bulgaria, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Ireland, the Netherlands, Poland, Portugal, Slovenia and Sweden.
Although there is significant fragmentation in the implementation of the directive across the EU, it is important to be ready to comply in the remaining member states as they could adopt implementing acts at any point, with a short period between adoption and when the new changes become enforceable. Organisations should therefore closely monitor implementation of national legislation and prepare in advance of the relevant national laws entering into force by conducting risk assessments, developing incident response plans and enhancing their overall level of cyber security awareness.
On 17 October 2024, the Commission adopted the first implementing regulation, which details the cyber security risk management measures and the criteria for when an incident will be considered significant under the directive. The implementing regulation was published in the Official Journal on 7 November 2024 and entered into force 20 days later, on 27 November 2024. As a reminder, implementing regulations are adopted to ensure uniform implementation of the relevant EU legislation and take precedence over national legislation in the event of contradiction.
This regulation introduces sweeping reforms to the EU product safety regime.
Next important deadline(s): None
⭐ Applies to all new products placed on the market from 13 December 2024.
Became law on 10 May 2023 but had an 18-month transition period until 13 December 2024.
None (full compliance already required)
Regulation (EU) 2023/988 of 10 May 2023 on general product safety, available here.
Anyone placing consumer products on the market in the EU.
To help you navigate the changes introduced by the GPSR, see our dedicated microsite which provides you with a series of helpful resources, including these insights:
· 10 things businesses can do now to prepare for compliance with the GPSR
· Navigating the EU's new GPSR: obligations are introduced for online marketplaces
The regulation contains sweeping reforms which significantly change the way that both modern and traditional products are produced, supplied and monitored across the EU. It raises safety requirements and introduces more sophisticated compliance obligations for all non-food products and the businesses involved in manufacturing and supplying them to end users.
The GPSR also adopts an improved definition of a "product" and includes new factors to take into account when assessing safety, so that the EU's product safety regime adequately addresses modern technologies.
The GPSR makes clear that connected devices are considered to be products within its scope and will be subject to the general safety requirement. In addition, when assessing whether a product is safe, economic operators have to take into account the effect that other products might have on their product, its cybersecurity features, and any evolving, learning and predictive functionalities of the product.
In addition, the GPSR seeks to address the increasing digitalisation of supply chains, in particular the growth of e-commerce, and places a number of obligations on online marketplaces, such as:
· cooperating with market surveillance authorities if they detect a dangerous product on their platform, including directly contacting affected consumers who bought through the platform in the event of a product recall;
· ensuring there is a single point of contact in charge of product safety;
· displaying contact and traceability information alongside listings, together with any relevant product identifiers, warnings or safety information;
· taking steps to identify dangerous products made available on the marketplace; and
· suspending non-compliant traders from the platform.
Market surveillance authorities will also be able to order online platforms to remove dangerous products from their platforms or to disable access to them.
The last UK Conservative government did not plan new AI law but issued high-level principles and guidance for regulators. The new Labour government has said it plans to introduce AI safety legislation for "frontier" AI models.
The consultation closed on 21 June 2023. The then government's response was published on 6 February 2024.
In the policy formation stage
"A pro-innovation approach to AI regulation" (March 2023) available here
Response to consultation (February 2024) available here
All businesses where their development, supply or use of AI falls within the scope of existing UK law and regulation.
What does the UK's white paper on AI propose and will it work?
EU delays compliance deadlines for the AI Act
What will the next UK government do about regulating and supporting AI?
What is the latest on the UK government's approach to regulating AI?
Overview
The last UK Conservative government's white paper on artificial intelligence (AI) regulation set out the UK's policy approach to regulating AI. Essentially, the government confirmed its approach in the consultation response of February 2024. It did not plan new law in this area, but issued "high-level principles" to guide existing regulators in dealing with AI within the scope of their existing powers and jurisdiction. Its intention was that such an approach would be agile and proportionate.
The white paper's definition of AI identifies it by its key characteristics of being "adaptive" and "autonomous". There is no reference to particular types or classifications of AI (in contrast to the prohibited and high-risk categories of the EU's AI Act); instead, the definition focuses on machine learning and deep learning systems that are trained on data and can infer patterns across it. The white paper anticipates that regulators will expand on this loose definition with a more detailed interpretation in their area as needed.
The former Conservative government said that it would monitor and evaluate the emerging regulatory landscape around AI, identifying any gaps in regulatory coverage and supporting coordination and collaboration between the regulators.
Five overarching "high-level principles" were set to shape regulatory policy and enforcement:
• safety, security and robustness;
• appropriate transparency and explainability;
• fairness;
• accountability and governance; and
• contestability and redress.
New "initial" guidance was published to support regulators, with expansions of the guidance planned for later in 2024.
The consultation response confirmed that these principles would not be on a statutory footing, unless it proved necessary in the future. The new Labour government has stated that it will take a different approach, in particular by introducing AI safety legislation for 'frontier' AI models.
Regulators' strategic approaches to AI
Various economic and sector regulators were directed by the former Conservative government to publish their AI strategies by 30 April 2024. The reports on their respective strategic approaches are available here. The length, level of detail and depth of content vary widely between the reports, with some regulators (such as the Competition and Markets Authority) having already undertaken significant work in this field and others (such as Ofgem) apparently still in the early stages of building their capacity and understanding around AI.
We understand that these reports were intended to feed into a "gap analysis" by the government of the various regulatory frameworks that will need to deal with AI.
One area where there is a known regulatory gap is in relation to AI and the workforce. Although regulators such as the Equality and Human Rights Commission are often active in relation to workers' rights, there is no regulator overseeing employment law overall. By contrast, AI systems deployed in relation to aspects of the employment lifecycle from recruitment to work allocation to employee appraisal are listed as "high risk" under the EU's AI Act. Perhaps in recognition of that lacuna, the former UK Conservative government issued guidelines on the use of AI in recruitment (available here).
Other initiatives
The former government progressed its plan to create a central government function to support the regulation of AI by existing regulators, including ensuring coherence across the regulatory landscape, monitoring how the proposed approach develops in practice and economy-wide horizon-scanning for emerging trends and opportunities around AI. The Office for AI was folded into the relevant policy unit within the Department for Science, Innovation and Technology (DSIT). A minister with responsibility for AI was designated within each government department.
No action is currently planned to address the acknowledged lack of clarity around liability in the AI supply chain.
The former government's consultation response discussed how the challenge of highly capable general purpose AI should be addressed but again elected not to legislate in the near term.
The AI and Digital Hub
The white paper confirmed that a sandbox would be created to support AI innovation, which was then launched as the AI and Digital Hub. This is a multi-regulator sandbox coordinated by the Digital Regulation Cooperation Forum (DRCF) (led by the Competition and Markets Authority, the Information Commissioner's Office, Ofcom and the Financial Conduct Authority). It offers free informal advice to businesses on how regulation (across the remits of the four DRCF regulators) applies to their innovative digital or AI projects and is initially a one-year pilot.
The DRCF planned to respond on eligibility within 5 working days and, for eligible queries, to give an answer on the substance in 8 weeks. Advice is not legally binding and carries no endorsement or certification of compliance.
This Act overhauls the UK legal framework for public service broadcasting, video on demand, streaming and internet radio.
Next important deadline(s): April 2025, H2 2025
The Media Act 2024 (Act) received Royal Assent on 24 May 2024. Most of the Act's provisions will be brought into force by secondary legislation, apart from Part 2 on prominence on television selection services, which came into force on the day on which the Act was passed.
Secondary legislation made under the Act
The first commencement regulations, The Media Act 2024 (Commencement No. 1) Regulations 2024, brought into force certain provisions of the Media Act on 23 and 26 August 2024.
The second commencement regulations, The Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force Part 5 and Section 19 of the Act on 17 October 2024.
Enacted
April 2025 (Ofcom's report to the Secretary of State with recommendations on designation of "television selection services" (TSS) expected)
Second half of 2025 (decisions on designation of TSS expected)
Media Act 2024, available here.
Broadcasters (public service and commercial)
Video-on-demand and streaming service providers
TV selection services, such as smart TV, set-top-box, streaming stick and pay TV platform providers, as well as the associated software providers.
Voice-activated smart speakers and other radio selection services
Radio station operators
Sports rights holders
• Digital prominence: The Act sets out a framework for public service broadcaster (PSB) prominence on digital TV platforms.
• VoD regime overhaul: The Act creates a new category of regulated on-demand service, the "Tier 1" service. This covers either: (i) PSB services used to fulfil a public service remit, other than those operated by the BBC; or (ii) any other UK or non-UK VoD services designated (by name or by definition) in secondary legislation. The scope of this second limb is undefined, but the expectation is that it will catch services with a large UK audience that make available "TV-like" content, or smaller services if evidence emerges of potential harm. These services will be required to comply with stringent new content and accessibility obligations.
• New PSB remit: The Act sets out a new, flexible remit for PSBs and allows them to contribute towards that remit via a range of audiovisual services, including digital VoD platforms. The Act also amends certain commissioning and broadcasting quota obligations.
• Listed events: The aim of the regime is to ensure that key live sporting events of national interest are widely available and free-to-air for UK audiences. The Act clarifies that only PSB linear services will qualify to bid for the rights to show listed events.
• Radio selection services: The Act recognises the growth in voice-activated devices and their potential power in determining listening choices. The largest of these platforms will be obliged to enable access to UK-licensed internet radio stations without charging any fee and without overlaying the stations with any third-party content, including ads. The station operators will also be free to select their preferred means of delivery, for example via a specific station-operated app or aggregator.
The primary aims of the Act are summarised in the key changes outlined above.
On 26 February 2024, Ofcom published its roadmap for implementing the bill (as it then was), in which it explained its "high-level plan" for putting the provisions into practice once the bill became law. Ofcom cautioned that the dates outlined were indicative only. Its timetable assumed that the legislation would receive Royal Assent by the summer (which it did) and that the necessary secondary legislation would subsequently be laid before Parliament. Following the 2024 general election, the Labour government is proceeding with implementation, which will take place in phases over the next two years.
Listed events
In July 2024, Ofcom published a call for evidence on the listed events regime under the Act, which closed on 26 September 2024.
VoD services regime
The Media Act 2024 (Commencement No. 1) Regulations 2024 brought certain (mostly functional) provisions of the Act into force on 23 August 2024. The Regulations also brought into effect the non-UK Tier 1 VoD services framework, but the details of the regime are still to be determined as Ofcom must first consult and the government will then produce further regulations. In addition, the Regulations partially brought into effect the new digital prominence regime, the details of which are also subject to consultation and further regulations from the government.
In September 2024, the government indicated to Ofcom its intention to begin its consideration of Tier 1 regulation of appropriate on-demand services "as soon as is practically possible". The government asked Ofcom to prepare a report on the operation of the UK market for on-demand programme services and non-UK on-demand programme services, which the government is required to take into account.
Regulation of radio services
The second commencement regulations, the Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force the following provisions of the Act on 17 October 2024: (i) Part 5 (Regulation of radio services); and (ii) Section 19 (amount of financial penalties: qualifying revenue), but only for the purposes of enabling Ofcom to carry out the necessary work to bring the section into force. The Regulations also contain transitional and saving provisions.
⭐ Part 5 covers, among other things, local analogue commercial radio licences and their renewal. Previously, renewal was only available if the licence holder was also broadcasting a digital radio service on a "relevant" DAB multiplex. The Act introduces an additional, alternative statutory basis for licensees to apply for renewal if a "relevant" DAB multiplex is available but is not "suitable" for their needs. In December 2024, Ofcom launched a consultation on its approach to considering applications under this new renewal route. Essentially, Ofcom is proposing that a "relevant" local or small-scale multiplex should be considered "unsuitable" in relation to an analogue service only if there is a substantial difference in the size of their coverage areas. The consultation closed on 5 February 2025.
⭐ In February 2025, Ofcom published a consultation on its proposed principles and methods for providing the Secretary of State with recommendations for the designation of radio selection services (RSS) under the Act. The Act includes provisions to protect the availability of UK radio (that is, online radio streams) on connected audio devices. It brings certain voice-activated online services (radio selection services) that have been designated by the Secretary of State into regulation for the first time. Under the Act, Ofcom is obliged to provide the Secretary of State with a report making recommendations as to which RSS should be designated. First, however, the regulator must develop a set of principles and methods that it will follow when making the recommendations. The consultation sets out Ofcom's proposals for these principles and methods. It also seeks views on its emerging thinking on the appropriate sources of data for measuring the use of RSS and setting the threshold for recommendations.
The consultation is open until 18 March 2025.
Availability and prominence regime
The Internet Television Equipment Regulations 2024, which came into force on 14 November 2024, set out the descriptions of devices that are considered to be "internet television equipment" for the purposes of the new prominence framework under the Act. The regulations name smart televisions and streaming devices as internet television equipment.
The government has also published a policy paper on its approach to deciding the categories of TV devices that will be considered "internet television equipment". It explains that smart TVs, set-top boxes and streaming sticks will qualify, but that smartphones, laptops, tablets, PCs and video games consoles will not, as watching TV on these devices is not their primary function.
Newer devices, such as home cinema projectors and portable lifestyle screens, internet-connected car touchscreens, virtual reality headsets, smart watches and in-home screens (such as smart fridges and smart speakers with in-built screens), will also be excluded for the same reason. However, the government intends to review the list in one year's time.
Under the Act, Ofcom is obliged to provide a report to the Secretary of State with recommendations on the designation of "television selection services" (TSS) under a new online availability and prominence regime for PSBs' TV apps distributed on connected TV platforms. Under the regime, TSS that are designated by the Secretary of State will have to ensure that Ofcom-designated PSB TV apps are available, prominent and easily accessible on the platform. In December 2024, Ofcom launched a consultation on the principles and methods it will apply when preparing the report. The consultation closed on 5 February 2025. Ofcom plans to issue final statements by April 2025, and decisions on designation are expected in the second half of 2025.
This Act overhauls the UK legal framework for public service broadcasting, video on demand, streaming and internet radio.
Next important deadline(s): April 2025, H2 2025
The Media Act 2024 (Act) received Royal Assent on 24 May 2024. Most of the Act's provisions will be brought into force by secondary legislation, apart from Part 2 on prominence on television selection services, which came into force on the day on which the Act was passed.
Secondary legislation made under the Act
The first commencement regulations, The Media Act 2024 (Commencement No. 1) Regulations 2024, brought into force certain provisions of the Media Act on 23 and 26 August 2024.
The second commencement regulations, The Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force Part 5 and Section 19 of the Act on 17 October 2024.
Enacted
April 2025 (Ofcom's report to Secretary of State with recommendations on designation of "television selection services" (TSS)) expected
Second half of 2025 (decisions on designation of TSS expected)
Media Act 2024, available here.
Broadcasters (public service and commercial)
Video-on-demand and streaming service providers
TV selection services, such as smart TV, set-top-box, streaming stick and pay TV platform providers, as well as the associated software providers.
Voice-activated smart speakers and other radio selection services
Radio station operators
Sports rights holders
• Digital prominence: The Act sets out a framework for PSB prominence on digital TV platforms.
• VoD regime overhaul: The Act creates a new category of regulated on-demand service, the "Tier 1" service. This covers either: (i) PSB services used to fulfil a public service remit, other than those operated by the BBC; or (ii) any other UK or non-UK VoD services designated (by name or by definition) in secondary legislation. The scope of this second limb is undefined, but the expectation is that this will catch services with a large UK audience and which make available "TV-like" content or smaller services if evidence emerges of potential harm. These services will be required to comply with new stringent content and accessibility obligations.
• New public service broadcaster (PSB) remit: The Act sets out a new, flexible remit for PSBs and allows them to contribute towards that remit via a range of audiovisual services, including digital VoD platforms. The Act also amends certain commissioning and broadcasting quota obligations.
• Listed events: The aim of the regime is to ensure that key live sporting events of national interest are widely available and free-to-air for UK audiences. The Act clarifies that only PSB linear services will qualify to bid for the rights to show listed events.
• Radio selection services: The Act recognises the growth in voice-activated devices and their potential power in determining listening choices. The largest of these platforms will be obliged to enable access to UK-licensed internet radio stations without charging any fee and without those platforms overlaying the stations with any third party content, including ads. The station operators would also be freed to select their desired means of delivery, for example, via a specific station-operated app or aggregator.
The primary aims of the Act are to:
On 26 February 2024, Ofcom published its roadmap for implementing the bill (as it was then) in which it explained its "high-level plan" for putting the provisions into practice once the bill became law. Ofcom cautioned that the dates outlined were indicative only. Its timetable assumed that the legislation would receive Royal Assent by the summer (which it did) and the necessary secondary legislation subsequently laid before Parliament. Following the general election 2024, the Labour government is proceeding with implementation, which will take place in phases over the next two years.
Listed events
In July 2024, Ofcom published a call for evidence on the listed events regime under the Act, which closed on 26 September 2024.
VoD services regime
The Media Act 2024 (Commencement No 1) Regulations 2024 brought certain (mostly functional) provisions of the Act into force on 23 August 2024. The Regulations also brought into effect the non-UK Tier 1 VoD services framework, but the details of the regime are still to be determined as Ofcom must first consult and then the government will produce further regulations. In addition, the Regulations partially brought into effect the new digital prominence regime, but the details are also subject to consultation and further regulations from the government.
In September 2024, the government indicated to Ofcom its intention to begin its consideration of Tier 1 regulation of appropriate on-demand services "as soon as is practically possible". The government asked Ofcom to prepare a report on the operation of the UK market for on-demand programme services and non-UK on-demand programme services, which report the government is required to take into account.
Regulation of radio services
The second commencement regulations, the Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force the following provisions of the Act on 17 October 2024: (i) Part 5 (Regulation of radio services); and (ii) Section 19 (amount of financial penalties: qualifying revenue), but only for the purposes of enabling Ofcom to carry out the necessary work to bring the section into force. The Regulations also contain transitional and saving provisions.
⭐ Part 5 covers, among other things, local analogue commercial radio licences and their renewal. Previously, renewal was only available if the licence holder was also broadcasting a digital radio service on a "relevant" DAB multiplex. The Act introduces an additional, alternative statutory basis for licensees to apply for renewal where a "relevant" DAB multiplex is available but is not "suitable" for their needs. In December 2024, Ofcom launched a consultation on its approach to considering applications under this new renewal route. Essentially, Ofcom is proposing that a "relevant" local or small-scale multiplex should be considered "unsuitable" in relation to an analogue service only if there is a substantial difference in the size of their coverage areas. The consultation closed on 5 February 2025.
⭐ In February 2025, Ofcom published a consultation on its proposed principles and methods for providing the secretary of state with recommendations for the designation of radio selection services (RSS) under the Act. The Act includes provisions to protect the availability of UK radio, that is, online radio streams, on connected audio devices. It brings certain voice-activated online services (radio selection services) that are designated by the secretary of state into regulation for the first time. Under the Act, Ofcom is obliged to provide the secretary of state with a report making recommendations as to which RSS should be designated. First, however, the regulator must develop a set of principles and methods that it will follow when making those recommendations. The consultation sets out Ofcom's proposals for these principles and methods. It also seeks views on Ofcom's emerging thinking on the appropriate sources of data for measuring the use of RSS and on setting the threshold for recommendations.
The consultation is open until 18 March 2025.
Availability and prominence regime
The Internet Television Equipment Regulations 2024, which came into force on 14 November 2024, set out the descriptions of devices that are considered to be "internet television equipment" for the purposes of the new prominence framework under the Act. The regulations name smart televisions and streaming devices as internet television equipment.
The government has also published a policy paper on its approach to deciding the categories of TV devices that will be considered "internet television equipment". It explains that smart TVs, set-top boxes and streaming sticks will qualify, but that smartphones, laptops, tablets, PCs and video games consoles will not, as watching TV on these devices is not their primary function.
Newer devices, such as home cinema projectors and portable lifestyle screens, internet-connected car touchscreens, virtual reality headsets, smart watches and in-home screens (such as smart fridges and smart speakers with in-built screens), will also be excluded for the same reason. However, the government intends to review the list in one year's time.
Under the Act, Ofcom is obliged to provide a report to the secretary of state with recommendations on the designation of "television selection services" (TSS) under the new online availability and prominence regime for PSBs' TV apps distributed on connected TV platforms. Under the regime, TSS that are designated by the secretary of state will have to ensure that Ofcom-designated PSB TV apps are available, prominent and easily accessible on the platform. In December 2024, Ofcom launched a consultation on the principles and methods it will apply when preparing the report. The consultation closed on 5 February 2025. Ofcom plans to issue final statements by April 2025, and decisions on designation are expected in the second half of 2025.
This directive aims to increase the accessibility of both physical and digital products and services, focussing on digital.
Next important deadline(s): 28 June 2025
The EU Accessibility Directive (directive) was enacted in 2019 and required implementing legislation to be issued at national level by EU member states by 28 June 2022. The national laws must come into force by 28 June 2025 at the latest.
Enacted. Member state national enacting legislation required.
National implementing legislation can be tracked from this page.
28 June 2025 (if national laws are in place)
Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services, available here.
The directive and related national implementing legislation will impact all businesses providing a range of "everyday" products and services with a digital technology focus. This includes:
- computers, operating systems and smartphones;
- payment terminals and self-service terminals, such as ATMs, ticketing and check-in machines;
- e-readers and e-books;
- electronic communications services and services giving access to audiovisual media;
- digital elements of passenger transport services, such as websites, apps and e-ticketing;
- consumer banking services; and
- e-commerce services.
There are various carve-outs for content and services available through websites and apps that do not have to be accessible, including historic and archived content, and third party content that is not funded, developed or under the control of the entity providing it.
There are also important carve-outs where compliance would fundamentally alter the nature of the product or service or would impose a disproportionate burden on the provider. Careful consideration of these tests is necessary ahead of product or service launches or wider compliance reviews. It should be noted that for many businesses there are notification requirements before these carve-outs will apply.
The overarching intention of the directive is to enable people with disabilities or impairments to access these products and services on an equal basis with others.
Rather than specifying technical requirements, the directive identifies the particular features of the relevant products and services that must be accessible (set out in Annex I of the directive). The intention is that this approach allows for flexibility and innovation in how these requirements are actually met in practice.
The accessibility requirements set out in Annex I depend on the product or service but include, for example: providing information through more than one sensory channel (e.g. the option of an audio version of written text); ensuring content is presented in an understandable way; and formatting written content in sufficiently large fonts, with sufficient contrast and adjustable spacing. Products requiring fine motor control must offer alternative controls, and requirements for extensive reach or great strength should be avoided. The directive envisages the use of assistive technologies, as well as emphasising the need to respect privacy.
The directive sets out requirements on product manufacturers for technical documentation, self-assessment of compliance and a declaration of conformity with the accessibility requirements, as well as a CE marking to show compliance. Importers are required to place only compliant products on the market, and to check for compliance and the CE mark. Distributors must similarly check for compliance and not put non-compliant products on the market.
For services, compliance with the directive's accessibility requirements must be incorporated into how the services are designed and provided. Information about the service's accessibility compliance must be made available (and updated) for the lifetime of the service. Compliance must be reviewed and maintained, with disclosure obligations around non-compliance.
As noted, compliance is not required where it would fundamentally alter the basic nature of the product or service, or where it would create a disproportionate economic burden on the entity concerned. Where a business decides it falls within one of these carve-outs and so does not have to comply, it must document that assessment. In relation to services, the assessment must be renewed every 5 years, or when the service is altered, or when the national authority requests a reassessment.
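These renewal triggers are mechanical enough to state precisely. As a purely illustrative sketch (the function name and parameters are our own shorthand, not terms from the directive), the logic might look like this in Python:

```python
from datetime import date, timedelta

def reassessment_due(last_assessment: date,
                     service_altered: bool,
                     authority_requested: bool,
                     today: date) -> bool:
    """A documented carve-out assessment for a service must be renewed at
    least every five years, whenever the service is altered, or when the
    national authority requests a reassessment."""
    five_years = timedelta(days=5 * 365)
    return (service_altered
            or authority_requested
            or today - last_assessment >= five_years)

# Example: an unaltered service last assessed on 1 July 2020 falls due by mid-2025.
print(reassessment_due(date(2020, 7, 1), False, False, date(2025, 7, 1)))  # True
```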
The directive also requires member states to set up national market surveillance authorities to oversee enforcement of the accessibility regime. Adequate enforcement powers must be given to the national authority, which should include provision for "effective, proportionate and dissuasive penalties", as well as remedial action to address non-compliance.
The need to implement national legislation and then allow time for businesses to achieve compliance (for which the directive allows three years) means that this 2019 legislation is taking some time to come into full effect. As noted, national implementation of this directive must take effect by 28 June 2025 at the latest. National implementing legislation can be tracked from this page.
Insight: The EU Accessibility Act – Time to start implementation projects now
Webinar recording: European Accessibility Act: An introduction (19 March 2024)
Webinar recording: Digital Inclusion: Risks of not making your digital products accessible (18 April 2023)
This Act introduces statutory duties on online content-sharing platforms and search services.
Next important deadline(s): 16 March 2025, 17 March 2025, 16 April 2025
The Online Safety Act (Act) received Royal Assent on 26 October 2023, at which point certain (mostly functional) provisions came into effect, including those relating to Ofcom's duties to provide additional guidance and codes of practice. Many of the Act's main provisions require these documents from Ofcom and/or secondary legislation from the government to be put in place before they become effective.
16 December 2024 signalled the start of the three-month period for in-scope services to complete their illegal content risk assessments.
17 January 2025 marked the start of the duty of regulated providers of pornographic content to ensure, through the use of age verification/estimation tools, that children cannot encounter pornographic content.
16 March 2025 marks the date by which in-scope services must have completed their illegal content risk assessments.
17 March 2025 marks the date when in-scope services must comply with their duties in relation to illegal content in Part 3 of the Act.
16 April 2025 marks the date by which all in-scope services must complete their Child Access Assessments.
In October 2024, Ofcom published an updated roadmap, setting out its progress in implementing the Act since it became law and presenting its plans for 2025. See our Insight for details (and see "What is the process for implementation?" section below).
Secondary legislation under the Act
The Online Safety Act 2023 (Commencement No. 2) Regulations 2023 and the Online Safety Act 2023 (Commencement No. 3) Regulations 2024 brought into force some of Ofcom's powers, certain new offences and other minor provisions.
The Online Safety (List of Overseas Regulators) Regulations 2024 outline the overseas regulators with whom the UK online safety regulator, Ofcom, may co-operate, as set out in section 114 of the Act.
The Online Safety Act 2023 (Pre-existing Part 4B Services Assessment Start Day) Regulations 2024, which came into force on 22 May 2024, specify the "assessment start day" for Video-Sharing Platforms (VSPs) as 2 September 2024. VSPs will have to complete the assessments set out in the Act's sections 9 (illegal content risk assessment duties), 11 (children's risk assessment duties), 14 (assessment duties: user empowerment) and 35 (children's access assessments), starting from this date. VSPs have three months from the date that Ofcom publishes the relevant guidance to complete the assessments.
The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2024, which came into force on 10 December 2024, make the offence of sharing intimate images without consent under the Sexual Offences Act 2003 a "priority offence" under the Act.
The Online Safety Act 2023 (Commencement No 4) Regulations 2024 brought into force duties on regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools on 17 January 2025.
⭐ The Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 came into force on 27 February 2025. The regulations define the thresholds at and above which in-scope services become "categorised services" and subject to certain additional obligations under the Act.
Enacted and partially in effect.
16 March 2025 (complete illegal content risk assessment)
17 March 2025 (comply with illegal content duties)
16 April 2025 (complete child access assessment)
Online Safety Act 2023, available here.
Businesses whose services facilitate the sharing of user generated content or user interactions, particularly social media, messaging services, search and online advertising. The Act regulates those offering services to UK users, regardless of the service provider's location.
The Act aims to protect children and to deal with illegal online content. It was initially proposed that measures on "legal but harmful" content would also be included but these have now been removed from the Act except in the case of certain harmful content hosted on services likely to be accessed by children. The Act seeks to strike a balance between ensuring safe online interactions for children and adult users, and freedom of speech.
Key provisions under the Act include:
- duties on user-to-user and search services to assess the risk of illegal content and to prevent and remove it;
- duties to protect children from harmful content, including through age assurance;
- additional duties, such as transparency reporting, for "categorised services";
- duties on providers of pornographic content to prevent children from accessing it; and
- extensive information-gathering and enforcement powers for Ofcom.
In order to ensure that the online safety regime is flexible and "future proof", a number of these provisions are to be dealt with in secondary legislation, and by codes of practice and guidance to be developed by the designated regulator, Ofcom.
Ofcom has devised a consultation process to inform its codes of practice and guidance, which comprises four major consultations together with various sub-consultations, on numerous aspects of the new regime.
Illegal content
The process began in November 2023, with a first major consultation on draft codes of practice and guidance on illegal harms duties, which closed on 23 February 2024 (see our insight).
In August 2024, Ofcom published a consultation (which closed on 13 September 2024) to strengthen its draft illegal harms codes of practice and guidance to include animal cruelty and human torture, because these categories of content are not fully covered by the list of "priority offences" in the Act. The "priority offences", listed in Schedule 7 of the Act, relate to the most serious forms of illegal content. Regulated online platforms will not only have to take steps to prevent such content from appearing, but will also have to remove it when they become aware of it. By including animal cruelty and human torture in its codes and guidance, Ofcom aims to ensure that providers understand that they will also have to remove this type of content.
In December 2024, following consultation, Ofcom published its final illegal content codes of practice and guidance. This signalled the start of the compliance period for the new rules on conducting illegal content risk assessments. All in-scope services must complete their illegal content risk assessment, evaluating the risk of harm to users resulting from the presence of illegal content on their service, by 16 March 2025. Services will need to implement the recommended measures set out in these codes (or be using other effective measures to protect users from illegal content) by 17 March 2025.
⭐ To tackle the specific risks faced by women and girls online, Ofcom is required to publish additional, dedicated guidance on safety for women and girls. On 25 February 2025, Ofcom published draft guidance setting out how in-scope service providers can combat content and activities that disproportionately impact women and girls. The guidance describes nine actions providers can take and highlights good practice in this area. Ofcom is consulting on the draft until 23 May 2025 and expects to publish final guidance by the end of 2025.
Pornographic content
On 16 January 2025, Ofcom published its final form guidance on "highly effective age assurance and other Part 5 duties" relating to the duties of regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools. This duty came into effect on 17 January 2025.
Categorised services
Ofcom has also published a call for evidence to inform its further consultation on draft codes of practice and guidance on the additional duties that will apply to "categorised services" under the Act. The Act introduces a system categorising some regulated online services, based on their key characteristics and whether they meet certain numerical thresholds, as category 1, 2A or 2B services. This call for evidence closed on 20 May 2024. Ofcom expects to publish the consultation in early 2025.
⭐ The regulations defining the thresholds above which in-scope services become "categorised services" and subject to certain additional obligations under the Act came into force on 27 February 2025. Ofcom will, once it has assessed services against these final thresholds, publish a register of categorised services, as well as a list of emerging category 1 services. The Act is intended to create a "triple shield" of protection for users. Service providers must remove illegal content, remove content in breach of their own terms and conditions, and give adults more choice over what to engage with.
In July 2024, Ofcom published a consultation (which closed on 4 October 2024) on its draft transparency reporting guidance, which is designed to assist categorised services in complying with the requirement to publish transparency reports. These reports must set out the information requested by Ofcom in transparency notices, which Ofcom must issue to every provider of a categorised service once a year.
Children
On 16 January 2025, Ofcom published its Statement on Age Assurance and Children's Access, comprising final versions of its Part 3 guidance on highly effective age assurance and its guidance on children's access assessments (for the guidance on highly effective age assurance and other Part 5 duties, see "Pornographic content" above). This marked the start of compliance with the protection of children duties under the Act, firing the starting gun for the three-month period within which all in-scope services must complete their Child Access Assessments (i.e. by 16 April 2025).
Fees and penalties
At the end of May 2024, the government published guidance to Ofcom on determining the fees that will be payable by regulated services under the Act. The fees paid by regulated service providers whose "qualifying worldwide revenue" (QWR) meets or exceeds a certain revenue threshold and who are not otherwise exempt will fund the new online safety regime. The government will retain oversight of the regulatory costs of the regime by setting Ofcom's total budget cap.
In October 2024, Ofcom launched a consultation on draft secondary legislation to define QWR, a term that will be used to calculate not just the fees payable by regulated services, but also the maximum penalty that Ofcom will be able to levy against providers for breaches of the Act. Ofcom is proposing to define QWR as the total revenue of a provider referable to the provision of regulated services anywhere in the world. Where the provider is found liable together with a group undertaking, QWR will be calculated on the worldwide revenues of the entire group, whether or not they relate to regulated services. Ofcom proposes setting a £250m revenue threshold for fees and exempting providers with UK referable revenue under £10m. The consultation closed on 9 January 2025.
The fee regime is expected to be in place by the 2026/27 financial year. Until then, the government is funding Ofcom's initial costs. Additional fees will then be charged over an initial set period of consecutive years to recoup the set-up costs.
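By way of illustration only, the proposed fee test could be expressed as the following Python sketch. The function and variable names are our own, and the figures assume Ofcom's proposals are adopted as consulted on:

```python
def owes_ofcom_fees(qwr: float, uk_referable_revenue: float) -> bool:
    """Sketch of the proposed fee test: a provider pays fees if its
    qualifying worldwide revenue (QWR) meets the proposed 250m GBP
    threshold, unless its UK referable revenue falls below the
    proposed 10m GBP exemption."""
    FEE_THRESHOLD = 250_000_000  # proposed QWR threshold
    UK_EXEMPTION = 10_000_000    # proposed UK referable revenue exemption
    if uk_referable_revenue < UK_EXEMPTION:
        return False
    return qwr >= FEE_THRESHOLD

def group_qwr(group_member_revenues: list[float]) -> float:
    """Where a provider is found liable together with a group undertaking,
    QWR is assessed on the worldwide revenues of the entire group,
    whether or not they relate to regulated services."""
    return sum(group_member_revenues)
```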
Information notices
⭐ Under the Act, Ofcom has powers to require and obtain information from regulated services that it needs to fulfil its online safety duties. It will do this by issuing "information notices". On 26 February 2025, Ofcom published final guidance on its information gathering powers under the Act, how it plans to use them and the typical procedures it will follow. Failing to comply with an information notice may result in Ofcom taking enforcement action under the Act.
Enforcement
⭐ On 3 March 2025, Ofcom launched an enforcement programme to monitor whether services are meeting their illegal content risk assessment and record-keeping duties under the Act. As part of the programme, the regulator asked various providers to supply their illegal content risk assessments, to help it identify possible compliance concerns and monitor how the guidance is being applied. Ofcom expects the programme to run for at least 12 months, during which it may initiate formal investigations if it suspects that a provider is failing to meet its obligations under the Act.
Miscellaneous
In September 2024, the Secretary of State for Science, Innovation and Technology, Peter Kyle, wrote to Ofcom asking how it plans to monitor and address the issue of "small but risky" online services. In its response, Ofcom confirmed its awareness of the issue and said that tackling these services is a "vital part" of its regulatory mission. The regulator confirmed that it has already developed plans to take early action against these services.
In October 2024, the UK and US governments signed a joint statement on online safety, calling for platforms to go "further and faster" to protect children online by taking "immediate action" and continually using the resources available to them to develop innovative solutions, while ensuring there are appropriate safeguards for user privacy and freedom of expression. See this Regulatory Outlook for more.
In November 2024, Ofcom published an open letter to UK online service providers on how the Act will apply to generative AI and chatbots. Ofcom reminded providers that various AI tools and content relating to user-to-user services, search services and pornographic material will be in scope of the Act.
In November 2024, the government also published its draft Statement of Strategic Priorities for online safety, highlighting five priorities that Ofcom must have regard to when implementing the Act and that industry is expected to adhere to. See this Regulatory Outlook for details.
The UK Online Safety Act: Top 10 takeaways for online service providers
UK Online Safety Act: Ofcom launches its first consultation on illegal harms
UK's Online Safety Act is a seismic regulatory shift for service providers
Online Safety Act 2023: time to prepare as UK's Ofcom confirms start of illegal content duties
This Act updates consumer law, makes changes to rules for consumer subscriptions and strengthens the CMA's enforcement powers.
Next important deadline(s): 6 April 2025, April 2026
The Digital Markets, Competition and Consumers Act 2024 (Act) received Royal Assent on 24 May 2024. Secondary legislation and statutory guidance from the Competition and Markets Authority (CMA) are needed to bring the provisions into effect.
In September 2024, the government published a ministerial statement setting out an indicative timeline for the implementation of the Act. According to the statement, the digital markets and competition provisions were expected to commence around the turn of 2024/25 (they came into force on 1 January 2025), the consumer protection enforcement provisions in April 2025, and the new subscription contracts regime no earlier than spring 2026.
Secondary legislation under the Act
The Digital Markets, Competition and Consumers Act 2024 (Commencement No 1 and Savings and Transitional Provisions) Regulations 2024 brought into force on 1 January 2025 (to the extent not brought into force on Royal Assent) the digital markets and competition parts of the Act, as well as other miscellaneous provisions.
The Competition Act 1998 (Determination of Turnover for Penalties) Regulations 2024 set out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 1998 Act.
The Enterprise Act 2002 (Mergers and Market Investigations) (Determination of Control and Turnover for Penalties) Regulations 2024 set out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 2002 Act, and the circumstances in which a person is considered to have control over an enterprise.
The Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024 set out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of assessing whether an undertaking should be designated as having strategic market status (SMS) or to determine penalties for non-compliance relating to: (i) provisions on digital markets; (ii) the enforcement of consumer protection law; and (iii) the motor fuel regime under the Act.
Enacted
6 April 2025 (effective date for consumer protection enforcement powers)
April 2026 (reforms to subscription contracts regime expected to commence)
Digital Markets, Competition and Consumers Act 2024, available here.
Broadly speaking, the Act will impact all consumer-related businesses, since it applies not only to all consumer contracts but also in circumstances where consumers are targeted. The Act includes a significant overhaul of the laws relating to subscription contracts – as such, providers of subscription services are likely to be particularly impacted.
The Act repeals and restates the Consumer Protection from Unfair Trading Regulations 2008 as primary legislation. It aims to enhance consumer protections by strengthening enforcement (including by giving the CMA significant new powers and the ability to impose substantial fines) and introducing new consumer rights, e.g. to tackle "subscription traps" and the proliferation of fake online reviews.
Definitions
The Act amends some definitions, including "average consumer", "commercial practice" and "transactional decision", and expands the definition of "vulnerable consumers".
Commercial practices that are always considered unfair
The Act repeals and restates the existing Consumer Protection from Unfair Trading Regulations 2008 (CPRs). The provisions on unfair commercial practices (UCPs) are in Chapter 1 of Part 4 of the Act.
Under the Act, some UCPs, such as misleading and aggressive practices, are prohibited if they are likely to cause the average consumer to take a different transactional decision.
The Act also amends and supplements the list of commercial practices that are always considered unfair regardless of their impact on the average consumer's transactional decisions, to reflect the fact that consumers and traders increasingly interact online (resulting in wider application).
To improve consumer transparency, the list of "blacklisted" commercial practices in Schedule 19 of the Act now also includes various activities relating to the submission, commission or publication of fake reviews, and "drip pricing".
Power to amend the list of unfair commercial practices
The Secretary of State also has the power to amend the Act in various ways, including by updating the list of commercial practices that are always considered unfair.
Subscription contracts
The Act gives new rights to consumers and imposes new obligations on providers in respect of subscription contracts. In summary, these include requirements to give consumers specified pre-contract information, to send reminder notices before a contract renews or a trial period ends, to offer straightforward ways to cancel, and to provide cooling-off rights.
Drip-pricing
The Act provides that a trader must set out the total price of a product, including any mandatory fees, taxes and charges that apply, rather than drip-feeding in these amounts during the transaction process.
Enforcement
The Act significantly enhances the CMA's role in enforcing consumer protection laws, giving the CMA the power to levy civil fines of up to £300,000 or 10% of annual global turnover (whichever is higher) for breaches of all consumer law (not just as updated by the Act). The Act also allows the CMA to directly investigate suspected infringements and practices that may harm the collective interests of consumers in the UK, and to issue enforcement notices without first going to court.
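As a simple worked illustration of how that penalty cap operates (our arithmetic, not the CMA's), a minimal Python sketch:

```python
def max_cma_fine(annual_global_turnover: float) -> float:
    """Maximum civil fine under the Act: the higher of 300,000 GBP
    and 10% of annual global turnover."""
    return max(300_000.0, 0.10 * annual_global_turnover)

print(max_cma_fine(50_000_000))  # 5000000.0 -- the 10% limb applies
print(max_cma_fine(1_000_000))   # 300000.0 -- the fixed floor applies
```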
Secondary legislation and statutory guidance from the CMA are needed before the Act can become fully effective.
Consumer protection enforcement powers
Over August and September 2024, the CMA consulted on draft guidance and rules for the exercise of its new direct consumer protection enforcement powers. The CMA is due to publish its final statement on these in early 2025.
On 25 November 2024, the government made the Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024, which (among other things) set out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of determining penalties under the enforcement of consumer protection law regime.
The consumer penalties regime requires an assessment to be made as to when a person controls another person. The regulations also set out the circumstances in which a person is considered to have such control over another person.
The regulations in relation to consumer protection enforcement powers come into effect on 6 April 2025.
⭐ In December 2024, the CMA published draft guidance on its consumer protection role and powers, which explains the approach the regulator proposes to take to using both its civil and criminal enforcement powers and how it will work with partners. The CMA's consultation on the draft guidance closed on 22 January 2025.
Subscription regime
In November 2024, the government launched a consultation on implementation of the new subscription contracts regime under the Act. The consultation seeks feedback on proposed policies to inform the content of the secondary legislation that will implement the new regime and on particular issues likely to be covered in guidance, including reminder notices, cooling-off rights and refunds, and cancellation arrangements.
The deadline for responses was 10 February 2025.
Unfair commercial practices
⭐ The UCP provisions in the Act are expected to come into force in April 2025. In December 2024, the CMA published draft guidance, which explains key concepts, such as "average consumer" and "transactional decision", and sets out how the UCP provisions may apply in practice, with flowcharts and examples of situations that might amount to a breach. The CMA's consultation on the draft closed on 22 January 2025.
Since most of the rules on misleading advertising in the advertising codes derive from or are compatible with the CPRs, the Committee of Advertising Practice and the Broadcast Committee of Advertising Practice have consulted on amendments to align the codes with the Act. The consultation closed on 5 February 2025.
See Osborne Clarke's Insight for more details on these consultations.
How businesses can get set for the UK Digital Markets Competition and Consumers Act in 2025
DMCCA consultations launched on implementation of UK consumer protection regime
Overview video: https://youtu.be/HuHLFokwYO4
Video on dark patterns: https://youtu.be/FhNAJYT1Xxg
Video on subscription law changes: https://youtu.be/3D0yDatHHG0
This Act updates consumer law, makes changes to rules for consumer subscriptions and strengthens the CMA's enforcement powers.
Next important deadline(s): 6 April 2025, April 2026
The Digital Markets, Competition and Consumers Act 2024 (Act) received Royal Assent on 24 May 2024. Secondary legislation and statutory guidance from the Competition and Markets Authority (CMA) are needed to bring the provisions into effect.
In September 2024, the government published a ministerial statement setting out a vague timeline for the implementation of the Act. According to the statement:
Secondary legislation under the Act
The Digital Markets, Competition and Consumers Act 2024 (Commencement No 1 and Savings and Transitional Provisions) Regulations 2024 brought into force on 1 January 2025 (to the extent not brought into force on Royal Assent) the digital markets and competition parts of the Act, as well as other miscellaneous provisions.
The Competition Act 1998 (Determination of Turnover for Penalties) Regulations 2024 sets out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 1998 Act.
The Enterprise Act 2002 (Mergers and Market Investigations) (Determination of Control and Turnover for Penalties) Regulations 2024 sets out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 2002 Act, and the circumstances in which a person is considered to have control over an enterprise.
The Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024 sets out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of assessing whether an undertaking should be designated as having strategic market status (SMS) or to determine penalties for non-compliance relating to: (i) provisions on digital markets; (ii) the enforcement of consumer protection law; and (iii) the motor fuel regime under the Act.
Enacted
6 April 2025 (effective date for consumer protection enforcement powers)
April 2026 (reforms to subscription contracts regime expected to commence)
Digital Markets, Competition and Consumers Act 2024, available here.
Broadly speaking, the Act will impact all consumer-related businesses since it not only applies to all consumer contracts but also in circumstances where consumers are targeted. The Act includes a significant overhaul of the laws relating to subscription contracts – as such, providers of subscription services are likely to be particularly impacted.
The Act repeals and restates the consumer protection from unfair trading regulations as primary legislation. It aims to enhance consumer protections by strengthening enforcement (including by giving the CMA significant new powers and the ability to impose substantial fines) and introducing new consumer rights, e.g. to tackle "subscription traps" and the proliferation of fake online reviews.
Definitions
The Act amends some definitions, including “average consumer”, “commercial practice” and “transactional decision” and expands the definition of "vulnerable consumers".
Commercial practices that are always considered unfair
The Act repeals and reinstates the existing Consumer Protection for Unfair Trading Regulations 2008 (CPRs). The provisions on unfair commercial practices (UCPs) are in Chapter 1 of Part 4 of the Act.
Under the Act, some UCPs, such as misleading and aggressive practices, are prohibited if they are likely to cause the average consumer to take a different transactional decision.
The Act also amends and supplements the list of commercial practices that are always considered unfair regardless of their impact on the average consumer's transactional decisions, to reflect the fact that consumers and traders increasingly interact online (resulting in wider application).
To improve consumer transparency, the list of "blacklisted" commercial practices in Schedule 19 of the Act now also includes various activities relating to the submission, commission or publication of fake reviews, and "drip pricing".
Power to amend the list of unfair commercial practices
The Secretary of State also has the power to amend the Act in various ways including:
Subscription contracts
The Act gives new rights to consumers and imposes new obligations on providers in respect of subscription contracts. In summary, the proposals include:
Drip-pricing
The Act provides that a trader must set out the total price of a product, including any mandatory fees, taxes and charges that apply, rather than drip-feeding in these amounts during the transaction process.
Enactment
The Act significantly enhances the CMA’s role in enforcing consumer protection laws, giving the CMA the power to levy civil fines of up to £300,000 or 10% of annual global turnover (whichever is higher) for breach of all consumer law (not just as updated by the Act). The Act also allows the CMA to directly investigate suspected infringements and practices that may harm the collective interests of consumers in the UK. The CMA can also now issue enforcement notices without going to court first.
Secondary legislation and statutory guidance from the CMA are needed before the Act can become fully effective.
Consumer protection enforcement powers
Over August and September 2024, the CMA consulted on draft guidance and rules for the exercise of its new direct consumer protection enforcement powers. The CMA is due to publish its final statement on these in early 2025.
On 25 November 2024, the government made the Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024, which (among other things) set out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of determining penalties under the enforcement of consumer protection law regime.
The consumer penalties regime requires an assessment to be made as to when a person controls another person. The regulations also set out the circumstances in which a person is considered to have such control over another person.
The regulations in relation to consumer protection enforcement powers come into effect on 6 April 2025.
⭐ In December 2024, the CMA published draft guidance on its consumer protection role and powers, which explains the approach the regulator proposes to take to using both its civil and criminal enforcement powers and how it will work with partners. The CMA's consultation on the draft guidance closed on 22 January 2025.
Subscription regime
In November 2024, the government launched a consultation on implementation of the new subscription contracts regime under the Act. The consultation seeks feedback on proposed policies to inform the content of the secondary legislation that will implement the new regime and on particular issues likely to be covered in guidance. The consultation covers proposals for:
The deadline for responses was 10 February 2025.
Unfair commercial practices
⭐ The UCPs provisions in the Act are expected to come into force in April 2025. In December 2024, the CMA published draft guidance, which explains key concepts, such as "average consumer" and "transactional decision", and sets out how the UCPs provisions may apply in practice, with flowcharts and examples of situations that might amount to a breach. The CMA's consultation on the draft closed on 22 January 2025.
Since most of the rules on misleading advertising in the advertising codes derive from or are compatible with the CPRs, the Committee of Advertising Practice and the Broadcast Committee of Advertising Practice have consulted on amendments to align the codes with the DMCCA. The consultation closed on 5 February 2025.
See Osborne Clarke's Insight for more details on these consultations.
How businesses can get set for the UK Digital Markets Competition and Consumers Act in 2025
DMCCA consultations launched on implementation of UK consumer protection regime
Overview video: https://youtu.be/HuHLFokwYO4
Video on dark patterns: https://youtu.be/FhNAJYT1Xxg
Video on subscription law changes: https://youtu.be/3D0yDatHHG0
This regulation creates a risk-based tiered regulatory regime for AI tools.
Next important deadline(s): April 2025, 2 August 2025, 2 August 2026, 2 August 2027
The AIA entered into force on 1 August 2024.
The provisions of the AIA will be applicable progressively over the next few years.
Enacted. Provisions become effective progressively as set out below.
2 February 2025 (effective date for prohibitions)
April 2025 (AI Office aims to publish final version of code of practice)
2 August 2025 (effective date for general-purpose AI models)
2 August 2026 (effective date for most remaining provisions, including Annex III high-risk AI)
2 August 2027 (effective date for Annex I high-risk AI)
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available here.
The AIA will apply to all those who provide, deploy, import or distribute AI systems in the EU across all sectors. It will apply to non-EU providers who place AI systems on the market or put them into service in the EU, and also to providers and deployers of AI systems located outside the EU where the output of the AI system is used in the EU.
As well as being in force in all EU member states, the AIA will in due course come into force in Norway, Iceland and Liechtenstein (as countries in the European Economic Area).
The AIA defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The AIA takes a risk-based approach, with the regulatory framework depending on both the level of risk created by an AI application, and the nature of the regulated entity's role. AI deemed high risk is regulated more strictly, as are those who create and distribute AI systems, who are regulated more strictly than organisations which deploy systems created by third parties. In addition, the AIA creates a regulatory framework for general-purpose AI.
"Risk" is assessed in the AIA by reference to both harm to health and safety, and harm to fundamental human rights and is a combination of the probability of occurrence of harm and the severity of that harm.
The tiers of AIA regulation are prohibited AI, high risk AI, transparency requirements for certain forms of AI, and unregulated AI. Two further but distinct tiers apply to general-purpose AI.
Some uses of AI are considered to pose such a significant risk to health and safety or fundamental rights that they should be banned. Banned applications include:
• AI deploying subliminal, manipulative or deceptive techniques that are likely to cause significant harm.
• AI exploiting vulnerabilities arising from age, disability or social or economic situation.
• Social scoring leading to detrimental or disproportionate treatment.
• Predicting the risk of a person committing a criminal offence based solely on profiling or personality traits.
• Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
• Emotion recognition in the workplace and in education (except for medical or safety reasons).
• Biometric categorisation to infer sensitive characteristics such as race, political opinions or sexual orientation.
• "Real-time" remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
Some AI applications are considered to pose a potentially high risk to health and safety or to fundamental rights, but not to the point that they should be prohibited. These high risk systems fall into two broad categories.
Firstly, AI will be classified as "high risk" for AIA purposes where it is used in safety systems or in products that are already subject to EU product safety legislation, including transport, other vehicles or machinery, and products such as toys, personal protective equipment and medical devices (the Annex I high risk categories).
Secondly, there is a list of specified "high risk" AI systems (in Annex III to the AIA). The detail of this list includes:
• Biometric systems, including remote biometric identification, biometric categorisation and emotion recognition (where not prohibited).
• AI used as a safety component in critical infrastructure, such as energy, water and transport.
• AI used in education and vocational training, for example to determine access or evaluate learning outcomes.
• AI used in employment and workers management, including recruitment and decisions on promotion or termination.
• AI determining access to essential private and public services, such as credit scoring and risk assessment and pricing for life and health insurance.
• Certain uses in law enforcement, migration, asylum and border control.
• AI used in the administration of justice and democratic processes.
However, high risk regulation will not apply to AI falling in the Annex III categories where "it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making". This will be an issue for self-assessment. Where there is no such actual risk, the system will not have to comply with the AIA high risk regulatory regime.
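Purely as an illustration of that self-assessment gate (the structure and field names below are invented, not the Act's), the Annex III analysis reduces to the following sketch:

```python
# Illustrative sketch of the Annex III "significant risk" derogation
# described above. All names are hypothetical; a real assessment is
# fact-specific and must be documented.

from dataclasses import dataclass

@dataclass
class AnnexIIIAssessment:
    in_annex_iii: bool              # is the use case listed in Annex III?
    significant_risk_of_harm: bool  # risk to health, safety or rights?

def high_risk_under_annex_iii(a: AnnexIIIAssessment) -> bool:
    if not a.in_annex_iii:
        return False  # (the separate Annex I product route is not modelled)
    # Derogation: exempt where the provider documents that the system
    # poses no significant risk of harm, for example because it does
    # not materially influence the outcome of decision-making.
    return a.significant_risk_of_harm

print(high_risk_under_annex_iii(AnnexIIIAssessment(True, False)))  # False
```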
The AIA imposes onerous compliance obligations on high risk AI systems, centred on a "continuous, iterative" risk management system covering the full lifecycle of the AI system. Compliance will require meeting the AIA requirements for technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity.
In addition, the AIA will impose extensive obligations concerning the governance and curation of data used to train, validate and test high risk AI. The focus is on ensuring that such data sets are relevant, sufficiently representative and, to the best extent possible, reasonably free of errors and complete in view of the intended purposes. The data should have "appropriate statistical properties", including as regards the people in relation to whom the AI system will be used. Data sets must reflect, to the extent appropriate for their intended use, the "specific geographical, contextual, behavioural or functional setting" within which the AI system will be used.
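As a purely illustrative sketch of the kind of check this implies (the group labels, reference shares and tolerance are our assumptions; the AIA prescribes outcomes such as "sufficiently representative", not a formula), a provider might compare the composition of a training set against the population in the intended deployment setting:

```python
# Illustrative data-governance check: compare training-set group
# proportions with the expected deployment population. Groups,
# reference shares and the tolerance are hypothetical.

from collections import Counter

def representativeness_gaps(samples, reference_shares, tolerance=0.05):
    """Return groups whose share deviates from the reference share by
    more than `tolerance` (flagged for review, not automatic failure)."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

data = ([{"group": "18-34"}] * 700 + [{"group": "35-64"}] * 250
        + [{"group": "65+"}] * 50)
print(representativeness_gaps(data, {"18-34": 0.4, "35-64": 0.4, "65+": 0.2}))
```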
The burden of high risk AI compliance will apply in different ways to businesses at different points in the AI supply chain – providers (developers and businesses who have had AI developed for them to put onto the market), product manufacturers whose products incorporate AI, importers, distributors and deployers.
Some lower risk AI, outside the high risk regime, will nevertheless be subject to obligations, mainly relating to transparency. The prime concern is that users must be aware that they are interacting with an AI system, if it is not obvious from the circumstances and context. These provisions will apply to:
• AI systems intended to interact directly with people, such as chatbots.
• AI systems generating synthetic audio, image, video or text content, whose output must be marked as artificially generated, with "deep fakes" to be disclosed as such.
• Emotion recognition and biometric categorisation systems (where not prohibited), whose operation must be disclosed to the people exposed to them.
For AI systems that do not fall within any of the above categories, the AIA provides for codes of conduct and encourages voluntary adherence to some of the regulatory obligations which apply to the high risk categories. Codes of practice may also cover wider issues such as sustainability, accessibility, stakeholder participation in development and diversity in system-design teams.
Since the AIA was first proposed in 2021, foundation AI systems have emerged as a significant slice of the AI ecosystem. They do not fit readily within the AIA tiers because they perform a specific function (translation, text generation, image generation etc) that can be put to many different applications with differing levels of risk.
The AIA creates two layers of regulation for "general-purpose AI" models: one universal set of obligations for general-purpose AI models and a set of additional obligations for general-purpose AI models with systemic risk.
General-purpose AI includes foundation models and some generative AI models. A general-purpose AI model is defined as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications", with an exception for AI models used for research, development or prototyping before they are placed on the market.
A general-purpose AI model will be classified as having "systemic risk" if:
• it has "high-impact capabilities", evaluated using appropriate technical tools and methodologies, including indicators and benchmarks (with a presumption of high-impact capabilities where the cumulative compute used for training exceeds 10^25 floating point operations); or
• the Commission designates it as having equivalent capabilities or impact.
The largest, state of the art AI models are expected to fall within this category.
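For illustration (the model names and compute figures below are invented), the training-compute presumption can be expressed as a one-line check:

```python
# Hypothetical illustration of the 10^25 FLOP training-compute
# presumption for "systemic risk" general-purpose AI models.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Presumption only; classification can also follow a wider
    capability assessment or a Commission designation."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.8e25))  # True  (invented figure)
print(presumed_systemic_risk(2.0e24))  # False (invented figure)
```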
The universal obligations for all general-purpose AI apply in addition to the core risk-based tiers of AIA regulation. Providers of all general-purpose AI models will be required to meet various transparency obligations (including information about the training data and in respect of copyright), which are intended to support downstream providers using the model in their AI system to comply with their own AIA obligations.
General-purpose AI models with systemic risk are subject to a second tier of obligations. These will include model evaluation and adversarial testing, assessment and mitigation of systemic risk, monitoring and reporting on serious incidents, adequate cybersecurity, and monitoring and reporting on energy consumption.
As regards enforcement, the AIA requires member states to appoint national level AI regulators which will be given extensive powers to investigate possible non-compliance, with powers to impose fines.
In addition, the "AI Office" has been created within the Commission to centralise monitoring of general-purpose AI models, and to lead on interaction with the scientific community, and in international discussions on AI. It will also support the Commission in developing guidance, standards, codes of practice, codes of conduct, and in relation to investigations, testing and enforcement of AI systems. The AI Office will play a key role in governance across the national AI regulatory bodies appointed in member states.
Coordination and coherence of the AIA regime is the responsibility of the AI Board, which is separate to the AI Office and comprises representatives from member states.
Regulatory fines are in three tiers:
• up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for breaches of the prohibited AI practices;
• up to €15 million or 3% of total worldwide annual turnover (whichever is higher) for breaches of most other obligations, including those applying to high risk and general-purpose AI; and
• up to €7.5 million or 1% of total worldwide annual turnover (whichever is higher) for supplying incorrect, incomplete or misleading information to regulators.
The fining regime for SMEs is similar to the above, except that the maximum fine is whichever percentage or amount is the lower.
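A worked sketch of how the caps interact (the turnover figures are invented for illustration):

```python
# Illustrative calculation of the AIA fine caps described above.
# Tiers are (fixed cap in euros, share of worldwide annual turnover);
# turnover figures below are invented.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover: float, sme: bool = False) -> float:
    fixed_cap, pct = TIERS[tier]
    pct_cap = pct * turnover
    # Standard regime: whichever is higher; SMEs: whichever is lower.
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

print(max_fine("prohibited_practices", 2_000_000_000))          # 140000000.0
print(max_fine("prohibited_practices", 100_000_000, sme=True))  # 7000000.0
```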
As noted above, deadlines for compliance will be staggered.
Definition of AI
⭐ In February 2025, the Commission published its guidelines on the definition of "AI system", which simply describe the definition, apply the recital and give examples. The key is the ability of a system to act autonomously and infer how to generate output based on inputs. See this Regulatory Outlook.
Prohibited AI
⭐ On 2 February 2025, the AIA's restrictions on prohibited AI practices, as well as general AI literacy obligations, came into full effect. Following this, the Commission published its long-awaited guidelines on prohibited categories of AI. See this Regulatory Outlook for more details.
General-purpose AI code of practice
The AI Office is drawing up a general-purpose AI code of practice (Code). The Code is intended to facilitate the proper application of the AIA for general-purpose AI models. Based on a call for expressions of interest, the Commission has formed a Code of Practice Plenary, which held its first meeting in September 2024. The first workshop on the Code took place on 23 October 2024 (see this Regulatory Outlook). The AI Office also conducted a multi-stakeholder consultation on trustworthy general-purpose AI models to collect views on the topics to be covered by the Code and on related AI Office work.
The first draft Code was published in November 2024. The Commission stressed that this was very much a first draft, and that drafting will be an iterative process, with three further rounds of input and drafting envisaged.
The second draft, taking account of feedback received in response to the first draft, was published in December 2024.
⭐ The third draft was expected in the week commencing 17 February 2025. However, this interim deadline was extended at the last minute and it is now expected in March, accompanied by a survey to allow participants to give feedback.
The AI Office aims to publish the final version in April 2025. See a tentative timeline on the Code here.
This Act creates a new framework for sharing data and new rights of access to IoT data.
Next important deadline(s): 12 September 2025, 12 September 2026, 12 September 2027
The legislation entered into force on 11 January 2024.
It will become fully applicable on 12 September 2025, except for:
• the obligation to design connected products and services so that data is accessible, which will be applicable to such items placed on the market after 12 September 2026; and
• the provisions on contractual terms and conditions in private sector data contracts, which will not be applicable until 12 September 2027 in relation to contracts concluded on or before 12 September 2025, provided that the contract in question is of indefinite duration or is due to expire at least 10 years from 11 January 2024.
Enacted. Not yet in effect.
12 September 2025 (effective date for the majority of provisions)
12 September 2026 (effective date for design of connected products obligations)
12 September 2027 (effective date for obligations on contractual terms in private sector data contracts)
Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act), available here.
The Data Act is wide-ranging legislation that will impact a correspondingly wide range of businesses and individuals, including:
• Those manufacturing connected products which are placed on the market in the EU.
• Those providing related services (which make the product behave in a particular way) in respect of those products, as well as EU-based users of those products and services.
• Those making data available to data recipients in the EU, and the recipients.
• Public sector bodies and EU institutions wanting access to business data for exceptional need.
• Providers of cloud data processing services supplying those services to EU customers.
• Parties using smart contracts.
The Data Act is intended to boost the availability of data to support a vibrant data economy in the EU. It will have a significant impact on some businesses. It will operate to extend data regulation far beyond the current focus on personal data and data privacy. The intention is to open up access to data, particularly Internet of Things (IoT) and machine-generated data, which is often controlled by the entity that gathered it, and is inaccessible to the entity whose activities generated it. The provisions of the Data Act will be without prejudice to the General Data Protection Regulation regime, privacy rights and rights to confidentiality of communications, all of which must be complied with in acting on the requirements in the Data Act.
You can familiarise yourself with the European Commission's Data Act Explained document here and Frequently Asked Questions (FAQs) on the Data Act here.
Currently, contractual terms and conditions typically decide whether or not data collected by IoT systems, industrial systems and other connected devices can be accessed by the business or person whose activities have generated the data – the user. It is not unusual that the collected data is held by the device or service provider and is not accessible by the user.
The Data Act will create an obligation to design connected products and related services so that data that they collect is available to the user. Users will be entitled to access the data free of charge and potentially in real time. Data must either be directly accessible from the device or readily available without delay.
Users will, moreover, be able to pass the data to a third party (which may be the data holder’s competitor) or instruct the data holder to do so; data holders can only pass user data to a third party on the instructions of the relevant user. The Data Act will impose restrictions on how the third party can use the received data.
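A minimal sketch of what these access and sharing rights might look like in practice follows (the function names, entitlement checks and data model are entirely our own illustrative assumptions; the Data Act mandates outcomes, not a particular API design):

```python
# Illustrative sketch of Data Act-style rights: the user can obtain
# the data their device generates free of charge, and can instruct
# the data holder to pass it to a third party. All names are
# hypothetical; no particular interface is mandated.

DEVICE_DATA: dict[str, list] = {}   # device_id -> readings (holder's store)
SHARING_GRANTS: set[tuple] = set()  # (user_id, device_id, third_party)

def owns(user_id: str, device_id: str) -> bool:
    return True  # placeholder for real entitlement checks

def get_my_data(user_id: str, device_id: str) -> list:
    """User access: free of charge and without undue delay."""
    if not owns(user_id, device_id):
        raise PermissionError("only the user may request their data")
    return DEVICE_DATA.get(device_id, [])

def share_with_third_party(user_id: str, device_id: str, recipient: str):
    """The holder may pass data to a third party only on the user's
    instruction; onward use by the recipient is restricted."""
    if not owns(user_id, device_id):
        raise PermissionError("sharing requires the user's instruction")
    SHARING_GRANTS.add((user_id, device_id, recipient))
```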
These access and sharing provisions include measures designed to prevent valuable trade secrets from being revealed, and to ensure that the rights are not used to find out information about the offerings of a competitor. They also include measures to prevent unfair contractual terms and conditions being imposed on users by data holders. Data holders are able to impose a non-discriminatory, reasonable fee for providing access (which can include a margin) in business-to-business relationships. Where data is being provided to an SME, the fee must not exceed the costs incurred in making the data available.
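A worked illustration of the fee rules (the cost and margin figures are invented):

```python
# Illustrative Data Act access-fee calculation: B2B fees must be
# reasonable and non-discriminatory and may include a margin, but a
# fee charged to an SME must not exceed the cost of making the data
# available. Figures are invented for illustration.

def access_fee(cost_of_making_available: float,
               margin_rate: float,
               recipient_is_sme: bool) -> float:
    if recipient_is_sme:
        return cost_of_making_available           # cost recovery only
    return cost_of_making_available * (1 + margin_rate)

print(access_fee(1_000.0, 0.15, recipient_is_sme=False))  # 1150.0
print(access_fee(1_000.0, 0.15, recipient_is_sme=True))   # 1000.0
```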
In relation to contracts between businesses that contain data-related obligations more generally, the Data Act outlaws terms and conditions that deviate grossly from good commercial practice, contrary to good faith and fair dealing, if imposed unilaterally. Within that prohibition, it sets out various specifically blacklisted contractual terms, which will be unenforceable.
The Data Act makes provision for public sector bodies to gain access to private sector data where they can demonstrate an exceptional need to use the data to carry out statutory duties in the public interest. This might be to deal with a public emergency, or to fulfil a task required by law where there is no other way to obtain the data in question. These provisions do not apply to criminal or administrative investigations or enforcement.
The Data Act seeks to remove perceived obstacles to switching between cloud services providers, or from a cloud service to on-premise infrastructure, or from a single cloud provider to multi-provider services. This part of the Data Act includes provisions concerning the contractual terms dealing with switching or data portability, which must be possible without undue delay, with reasonable assistance, acting to maintain business continuity, and ensuring security, particularly data security. It provides for maximum time limits and detailed information requirements. Service providers are required, more generally, to make information readily available about switching and porting methods, as well as formats including standards and open interoperability specifications. The Data Act imposes a general duty of good faith on all parties involved to make switching effective, timely and to ensure continuity of service.
A further limb of the Data Act requires cloud service providers to prevent international transfer of, or non-EU governmental access to, non-personal data held in the EU where that transfer or access would be in breach of EU or member state law.
To ensure that data access rights are not undermined by technical compatibility problems, the Data Act creates obligations around interoperability for data and data-sharing mechanisms. It requires transparency around key aspects and features of datasets, data structures, and access mechanisms (such as application programming interfaces). Provision is made for standards to be developed in relation to these requirements.
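For illustration, the kind of machine-readable transparency this points towards might resemble the following descriptor (the field names and values are hypothetical; harmonised standards are still to be developed):

```python
# Hypothetical machine-readable dataset description of the kind the
# Data Act's interoperability provisions envisage. Field names and
# values are illustrative only.
import json

dataset_description = {
    "dataset": "plant-42-sensor-telemetry",            # invented name
    "structure": {"timestamp": "ISO 8601", "temp_c": "float"},
    "collection": {"frequency": "1/min", "quality": "raw, uncalibrated"},
    "access": {"mechanism": "REST API", "format": "JSON",
               "terms": "per the data-sharing agreement"},
}
print(json.dumps(dataset_description, indent=2))
```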
The Data Act also imposes interoperability requirements between cloud services, again providing for open interoperability specifications and harmonised standards to support switching and portability, as well as parallel processing.
Finally, the Data Act covers smart contracts used for executing a data-sharing arrangement (but not smart contracts with other functions). It lays down requirements for robustness and access control, safe termination and interruption (a "kill switch"), data archiving and continuity, as well as measures to ensure consistency with the terms of the contract that the smart contract is executing. Provision is made for standards to be developed to meet these requirements. Smart contracts must be self-assessed for conformity and a declaration of EU conformity made.
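As a rough sketch of those essential requirements (Python stands in for an on-chain language here, and all names are illustrative rather than prescribed):

```python
# Rough sketch of the Data Act's essential requirements for a
# data-sharing smart contract: access control, safe termination
# (a "kill switch") and data archiving/continuity. Illustrative only.

class DataSharingContract:
    def __init__(self, parties: set, terms: dict):
        self.parties = parties   # access control: named parties only
        self.terms = terms       # mirrors the underlying agreement
        self.active = True
        self.archive = []        # data archiving and continuity

    def execute_transfer(self, caller: str, record: dict):
        if not self.active:
            raise RuntimeError("contract has been terminated")
        if caller not in self.parties:
            raise PermissionError("access control: unknown party")
        self.archive.append(record)   # retained for auditability
        # ... deliver `record` in accordance with self.terms ...

    def kill_switch(self, caller: str):
        """Safe termination or interruption of ongoing execution."""
        if caller not in self.parties:
            raise PermissionError("only a party may terminate")
        self.active = False           # archive survives termination
```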
The Data Act will be enforced at member state level by an appointed regulator. This could be an extension of jurisdiction for an existing regulator, or a new one could be created. Powers for member state regulators can include sanctions.
This Act gives the CMA new powers to regulate firms in digital markets with "strategic market status" and updates aspects of competition law.
Next important deadline(s): January to September 2025
⭐ The Act received Royal Assent on 24 May 2024 and the competition and digital markets aspects came into effect on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024.
Enacted and partially in effect.
January to March 2025 (stage 1 of CMA's initial Strategic Market Status (SMS) investigations – evidence gathering and engagement with firms and other stakeholders).
April to June 2025 (stage 2 of CMA's initial Strategic Market Status (SMS) investigations – further evidence gathering and analysis followed by a consultation on the proposed decision).
July to September 2025 (stage 3 of CMA's initial Strategic Market Status (SMS) investigations – analysis of consultation responses and further evidence gathering).
Digital Markets, Competition and Consumers Act, text available here.
The new regime is intended to apply to businesses that hold substantial and entrenched power in digital markets. Firms meeting certain cumulative thresholds, described below, will be in scope.
The legislation will only apply to firms meeting those thresholds that have been designated by the Digital Markets Unit (DMU), a specialist unit within the Competition and Markets Authority (CMA), as having strategic market status (SMS) following an SMS investigation. The legislation will potentially impact any businesses operating in digital markets by changing the regulatory overlay of rights and obligations in these markets.
The Act introduces new statutory powers for the DMU to regulate powerful digital firms. The DMU remains an administrative unit within the CMA, rather than being designated as a separate regulator.
The Act concerns digital activities, defined as services provided over the internet or digital content, whether paid-for or free of charge. CMA jurisdiction will require a link to the UK.
A business will not be designated with SMS unless it meets the financial threshold of £1 billion in UK group turnover or £25 billion global group turnover in a relevant 12 month period.
The DMU may designate a business as having SMS following an SMS investigation. The investigation will consider whether the business has substantial and entrenched market power, with a forward-looking analysis at least five years into the future. It will also assess whether the business has a position of strategic significance in relation to the digital activity in question, whether by its size or scale, the reliance by others on its services, or the influence that it can exert on others as a consequence of its position. Designations will last for five years and the designation process will include public consultation and liaison with other regulators.
The DMU may create a bespoke code of conduct for each SMS business with a view to ensuring fair dealing, open choices (which may require enhanced data portability and interoperability of services), and trust and transparency.
The CMA is given extensive powers of investigation and enforcement, including fines of up to 10% of global turnover and wide-ranging remedies. In addition to SMS investigations and enforcement, the CMA may also make a "pro-competition intervention" to deal with an adverse effect on competition in a digital market. It will have the same powers as it currently enjoys to take remedial action as following a market investigation, including the power to order divestments and break-ups.
The regulatory regime
Most of the Act's provisions will be brought into force by secondary legislation.
In December 2024, the CMA published finalised guidance on the digital markets competition regime, following consultation.
The digital markets and competition elements of the Act came into force on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024.
The CMA has set out the steps it will take when conducting initial Strategic Market Status (SMS) investigations, mirroring legislative requirements for extensive consultation with third parties and potential SMS firms before making any decision. The steps are:
Stage 1 (January to March 2025): initial evidence gathering and engagement with potential SMS firms and other stakeholders.
Stage 2 (April to June 2025): further evidence gathering and analysis, followed by a consultation on the proposed SMS designation decision and any initial conduct requirements.
Stage 3 (July to September 2025): analysis of consultation responses and further evidence gathering.
In addition, the CMA will publish "roadmaps" alongside its consultations on the proposed SMS decisions in June (for search) and July (for mobile). It intends to do the same for all future SMS designations.
This regulation introduces rules for financial services firms to manage cyber risks and harmonise IT requirements.
Next important deadline(s): 5 March 2025, 12 March 2025
DORA was enacted on 16 January 2023 and came into effect on 17 January 2025.
In effect.
5 March 2025 (delegated regulations on RTS on harmonising oversight activities and on threat-led penetration testing)
12 March 2025 (delegated and implementing regulations on reporting of major ICT-related incidents and significant cyber threats)
Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector, available here.
DORA applies to a broad range of financial entities regulated in the European Economic Area (EEA), including banks, payment and e-money institutions, investment firms, fund managers, and cryptoasset service providers.
The rules also catch third-party service providers of information and communications technology (ICT), including providers of cloud computing services, software, data analytics and data centres, where the European authorities designate them as "critical".
DORA has extraterritorial reach: non-EU financial entities and ICT providers offering services to EU financial institutions will also need to comply with the regulation.
Firms within the scope of DORA must be able to withstand, respond to and recover from ICT incidents. Important requirements for firms include:
• Having internal governance and control frameworks that allow them to manage ICT risks effectively and prudently.
• Having a robust and well-documented ICT risk management framework in place that allows them to address ICT risks quickly and comprehensively.
• Reporting major ICT-related incidents to the relevant regulator (an illustrative sketch of the classification step behind this obligation follows this list).
• Regularly carrying out digital operational resilience testing, including a range of assessments, methodologies, practices and tools.
• Managing ICT third-party risk within their ICT risk management framework.
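Purely as an illustration of that classification step (the criteria names loosely follow the incident-classification RTS at a high level, but the thresholds and aggregation rule below are invented and are not the regulatory tests):

```python
# Illustrative sketch of deciding whether an ICT incident is "major"
# and so reportable under DORA. Thresholds and the aggregation rule
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Incident:
    clients_affected_share: float  # share of clients impacted
    downtime_hours: float          # duration of service disruption
    data_losses: bool              # e.g. loss of availability/integrity
    critical_services_hit: bool    # critical or important functions

def is_major(i: Incident) -> bool:
    criteria_met = sum([
        i.clients_affected_share > 0.10,  # invented threshold
        i.downtime_hours > 2,             # invented threshold
        i.data_losses,
        i.critical_services_hit,
    ])
    return criteria_met >= 2              # invented aggregation rule

print(is_major(Incident(0.25, 5.0, False, True)))  # True -> report
```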
Under DORA, the European Supervisory Authorities (the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA) and the European Securities and Markets Authority (ESMA), together the ESAs) will develop 13 Regulatory Technical Standards (RTS) and Implementing Technical Standards (ITS) setting out details and guidance on key provisions and requirements within DORA. Financial entities must be fully compliant with and implement these standards in their ICT systems by 17 January 2025.
The first batch of technical standards published by the ESAs were adopted by the Commission and brought into effect via implementing and delegated acts in February and March 2024 (see below).
The second batch of draft technical standards was published by the ESAs on 17 July 2024. Those still to be adopted by the Commission and brought into effect via implementing and delegated acts are:
• RTS and ITS on the content, format, timelines, and procedures for incident reporting;
• Guidelines on aggregated costs and losses from major ICT-related incidents; and
• Guidelines on cooperation of oversight activities between the ESAs and competent authorities.
To date, the Commission has brought the following technical standards into effect via implementing and delegated acts:
• Delegated regulation specifying oversight fees for critical ICT third-party service providers in the financial sector, which came into effect on 13 March 2024.
• Delegated regulation on further specifying the criteria for designation of ICT third-party service providers as critical for financial entities, which came into effect on 19 June 2024.
• Delegated regulation on oversight fees for critical ICT third-party service providers, which came into effect on 19 June 2024.
• Delegated regulation on RTS specifying the criteria for classification of ICT-related incidents, which came into effect on 15 July 2024.
• Delegated regulation on RTS specifying criteria regarding ICT risk management, which came into effect on 15 July 2024.
• Delegated regulation on RTS specifying ICT services policies supporting the critical or important functions provided by third party ICT providers, which came into effect on 15 July 2024.
• Implementing regulation on standard templates for the register of information, which came into effect on 22 December 2024.
• Delegated regulation on RTS on harmonising oversight activities, which will come into effect on 5 March 2025.
• Delegated regulation on RTS for threat-led penetration testing, which will come into effect on 5 March 2025.
• Delegated regulation on RTS which specify the content and time limits for the notification and reporting of major ICT-related incidents, and the content of notifications for significant cyber threats, which will come into effect on 12 March 2025.
• Implementing regulation on ITS setting out the standard forms, templates and procedures for financial entities to report a major ICT-related incident and to notify a significant cyber threat, which will come into effect on 12 March 2025.
• Delegated regulation on RTS specifying the criteria for determining the composition of the joint examination team. This has not yet been published in the Official Journal so is not yet in effect.
On 31 January 2025, the Commission rejected the draft RTS on subcontracting, which sets out the requirements and conditions for the use of subcontracted ICT services supporting critical or important functions under DORA. The Commission explained that it considers the requirements introduced by Article 5 on the “Conditions for subcontracting relating to the chain of ICT subcontractors providing a service supporting a critical or important function by the financial entity” as exceeding the legal mandate under DORA.
The ESAs have six weeks upon receipt of the Commission's rejection to amend and resubmit the draft RTS and remove Article 5 as proposed by the Commission. Should the ESAs fail to submit a draft RTS, however, the Commission may adopt the RTS with those amendments it considers relevant, or reject it.
The implementing and delegated acts can be tracked from this page.
This Act establishes a regulatory framework for media services in the EU, introducing measures to protect journalists, and strengthen media pluralism and editorial independence.
Next important deadline(s): 8 August 2025
The European Media Freedom Act (Act) became law on 7 May 2024.
It will generally apply from 8 August 2025 with some exceptions. For example, provisions in relation to the right of recipients of media services to plurality of editorially independent media content applied from 8 November 2024.
Further provisions began to apply from 8 February 2025, including those under which national regulatory authorities assumed their powers, provisions establishing the new European Board for Media Services, amendments to the Audiovisual Media Services Directive (AVMSD), and provisions on the editorial freedom and independence of media service providers.
On the horizon.
8 August 2025 (general effective date)
Regulation (EU) 2024/1083 of the European Parliament and of the Council of 11 April 2024 establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act), available here.
The Act builds on the revised Audiovisual Media Services Directive and broadens its scope by bringing radio broadcasters and newspaper publishers into its remit.
The Act applies to all "media service providers" (MSPs), meaning entities that provide a media service and have editorial responsibility for its content, where they target audiences in the EU, whether or not they are established in the EU.
The Act provides a legal framework setting out both rights and duties for MSPs and recipients of media services in the EU.
In January 2025, a working group, comprising MEPs from the European Parliament's Committee on Culture and Education, was established to oversee the implementation and enforcement of the Act.
This regulation creates an EU-wide framework for trusted and secure digital identities.
Next important deadline(s): Q1 2025, December 2026
The regulation entered into force on 20 May 2024 but comes into effect 24 months after the Commission adopts various implementing regulations.
The Commission adopted five implementing regulations on the technical specifications and certification of European digital identity (eID) wallets in November 2024, and they entered into force on 24 December 2024. These provisions will not, therefore, be effective until December 2026. By this date, member states are required to make eID wallets available to their citizens.
On the horizon.
Q1 2025 (Commission to adopt additional implementing regulations)
December 2026 (effective date for eID wallets)
Regulation (EU) 2024/1183 of the European Parliament and of the Council of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework, available here.
This legislation updates and amends the existing digital identity regime in the EU. Once in effect, it will affect all EU citizens, who will be able to have an eID wallet to prove their identity, to hold other documents such as qualification certificates, bank cards or tickets, and to access public and private services. Wallets will be available to prove the identity of natural and legal persons, and of natural persons representing a natural or legal person.
The legislation will also impact the providers of digital identity services, and be relevant to businesses to which the wallet is presented.
The new legislation amends the existing EU digital identity rules (the eIDAS Regulation) to establish the European Digital Identity Framework. The concept is a digital wallet, for example an app on a phone, that will stand as proof of the person's identity and can also be used to store and prove other "formal" attributes such as qualifications, certificates, a driving licence, medical prescriptions, and other aspects of someone's status and entitlements.
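As a purely illustrative sketch of that concept (the names and fields below are our own assumptions, not the technical specifications adopted in the implementing regulations), the wallet can be thought of as a holder of attested attributes that discloses only what a relying party requests:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Attestation:
    # An electronic attestation of an attribute; field names are invented
    # for illustration and do not follow the official specifications.
    attribute: str
    value: str
    issuer: str

@dataclass
class EIDWallet:
    holder: str
    attestations: List[Attestation] = field(default_factory=list)

    def present(self, requested: Set[str]) -> Dict[str, str]:
        # Disclose only the attributes a relying party has asked for;
        # everything else in the wallet stays private.
        return {a.attribute: a.value
                for a in self.attestations
                if a.attribute in requested}

# A relying party asks for proof of a driving entitlement only.
wallet = EIDWallet(holder="Alice Example")
wallet.attestations.append(Attestation("driving_licence", "Category B", "National licensing authority"))
wallet.attestations.append(Attestation("qualification", "MSc Physics", "University X"))
print(wallet.present({"driving_licence"}))  # {'driving_licence': 'Category B'}
```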
The legislation also establishes a legal framework for electronic signatures (which can be administered using an eID wallet), electronic seals, electronic time stamps, electronic documents, electronic registered delivery services, certificate services for website authentication, electronic archiving, electronic attestation of attributes, electronic signature and seal creation devices, and electronic ledgers.
In September 2024, the European Commission asked the European Union Agency for Cybersecurity (ENISA) to support member states on the certification of eID wallets, including the development of a candidate European cybersecurity certification scheme in accordance with the EU Cybersecurity Act. ENISA will do this by providing harmonised certification requirements, which member states should adhere to when setting up their national certification schemes.
In November 2024, the Commission adopted five implementing acts under the regulation on technical specifications and certification.
The Commission also published five new draft implementing acts for consultation, which closed on 2 January 2025.
The Commission plans to adopt these regulations during the first quarter of 2025.
The European Commission's Q&A on this regulation is available here.
European Digital Identity Wallet Website
This directive updates product liability rules to include digital products, services and platforms.
Next important deadline(s): 9 December 2026
⭐ The EU Product Liability Directive entered into law on 8 December 2024. Member States have until 9 December 2026 (24 months) to implement the directive into national law. There is no further implementation period, meaning that products placed on the EU market or put into service from 9 December 2026 will be subject to the new rules (provided national laws are in place). Those products already on the market or put into service before 9 December 2026 will remain subject to the former Product Liability Directive.
On the horizon. National implementing legislation awaited.
9 December 2026 (effective date)
The final text can be found here.
Anyone placing products on the EU market.
Once effective, this directive will provide easier access to compensation for consumers who suffer damage from defective products and includes amendments relating to the directive's scope, treatment of psychological damage, and allocation of liability regarding software manufacturers.
The new rules update the existing product liability framework, which dates from 1985.
We have produced an infographic on product liability reform in the EU and UK that sets out some of the key changes and practical actions businesses should be considering. For example, given claimants' enhanced disclosure rights under the revised EU PLD, businesses should ensure disclosable materials are fit for purpose, including design files, evidence of safety testing in the design phase, and policies on vigilance and corrective actions. Request a copy of the infographic here.
This directive clarifies the employment rights of workers who source work through digital platforms and covers fairness in the algorithmic management of workers.
Next important deadline(s): 2 December 2026
This directive entered into law on 1 December 2024. Member states have 24 months to transpose the directive into their national legislation (i.e. by 2 December 2026).
On the horizon.
2 December 2026 (effective date if national laws in place)
Directive of the European Parliament and of the Council on improving working conditions in platform work, available here.
The directive applies to all workers performing platform work within the EU (and therefore to their "employers"). The contractual relationship of the worker does not have to be with the recipient of the service but could be with the platform or with an intermediary.
It also applies to all digital labour platforms organising platform work to be performed in the EU, regardless of the place of establishment of the platform. The key point is where the work will be performed, not where the platform is based.
A "digital labour platform" is defined as meeting all of the following requirements:
Platforms that are designed to enable the exploitation or sharing of assets (such as short term rental accommodation) or to enable the non-commercial resale of goods are expressly excluded from this definition. The requirement that the worker is remunerated means that platforms for organising the activities of volunteers are also excluded from scope.
The directive requires member states to prohibit automated monitoring or decision-making systems from processing certain categories of platform workers' personal data, such as data on a worker's emotional or psychological state, data relating to private conversations, data collected while the worker is not performing or offering platform work, and data used to predict the exercise of fundamental rights such as freedom of association or collective bargaining.
These provisions will apply from the start of the recruitment process. In addition, the directive provides that any decision to restrict, suspend or terminate the contractual relationship or account of a platform worker (or similarly detrimental decisions) must be taken by a human being.
The directive clarifies that the automated processing of a platform worker's personal data for monitoring or decision-making will be high risk under the provisions of the GDPR and so will require a data protection impact assessment (DPIA). Digital labour platforms acting as data controllers must seek the views of the platform workers concerned and their representatives, and DPIAs must be provided to worker representatives.
Transparency and redress
Digital labour platforms must disclose the use of algorithmic systems to platform workers, their representatives and relevant authorities, including information about the categories of actions monitored or evaluated by those systems and the main parameters they take into account in taking decisions.
The required information about algorithmic systems must be provided in writing, in clear and plain language. It must be provided on the worker's first day, or ahead of any changes, or on request. It must also be provided in relation to systems used for recruitment and selection to the person going through that process.
Platform workers will have a right to portability of their data from one digital labour platform to another.
The directive creates obligations around human oversight of algorithmic systems. These include a review at least every two years of the impact of the systems, including on working conditions and on equality of treatment of workers. Where a high risk of discrimination is identified, the system must be modified or no longer used.
Digital labour platforms must have sufficient human resources to implement oversight of the impact of individual decisions, with appropriately competent and trained staff able to override automated decisions. Such staff must be protected from dismissal or disciplinary measures for exercising these functions.
Platform workers will have a right to an oral or written explanation in clear and plain language of decisions taken or supported by an algorithmic system, with the option of discussing the decision with a human. The explanation must be in writing for certain decisions, including the restriction, suspension or termination of a platform worker's account, a refusal to pay the worker, or any decision regarding the worker's contractual status.
Workers will have a right to request a review of decisions taken by the algorithmic systems, to a written response within two weeks, to rectification of the decision if it infringes their rights, or to compensation if the error cannot be rectified.
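By way of illustration only, the human-oversight and review obligations could be modelled in a platform's compliance logic along the following lines. This is a minimal sketch with invented names, not a statement of how any platform or the directive's implementing laws will operate:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Decision types the directive singles out as requiring a written
# explanation and a human decision-maker (an illustrative subset).
DETRIMENTAL_DECISIONS = {"restrict_account", "suspend_account",
                         "terminate_account", "refuse_payment"}

@dataclass
class AlgorithmicDecision:
    worker_id: str
    decision_type: str
    explanation: str                  # clear and plain language
    human_reviewer: Optional[str] = None

    def finalise(self) -> None:
        # Detrimental decisions must be taken by a human being,
        # not left to the automated system alone.
        if self.decision_type in DETRIMENTAL_DECISIONS and self.human_reviewer is None:
            raise ValueError("human review required before this decision takes effect")

def review_response_deadline(request_date: date) -> date:
    # A worker who requests a review of an algorithmic decision is
    # entitled to a written response within two weeks.
    return request_date + timedelta(weeks=2)

decision = AlgorithmicDecision("worker-42", "suspend_account",
                               "Repeated no-shows flagged by the scheduling system")
decision.human_reviewer = "ops.manager@example.com"
decision.finalise()  # passes only because a human has signed off
print(review_response_deadline(date(2025, 3, 3)))  # 2025-03-17
```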
The directive highlights that algorithmic management of workers can cause the intensification of work, which in turn can impact on safety and on the mental and physical health of platform workers. It requires digital labour platforms to evaluate such risks and take preventative and protective measures.
NB: the focus of this resource is digital regulation so wider aspects of this legislation are out of scope. The directive includes provisions on the correct determination of a platform worker's employment status (including a rebuttable presumption of employment status unless the digital platform can prove that a worker is, in fact, genuinely self-employed), and on transparency around platform work including disclosure of data to relevant authorities.
Chapter III of the directive sets out provisions regarding the algorithmic management of platform workers, where functions that might historically have been taken by human managers are automated.
The directive focuses on automated monitoring systems and automated decision-making systems, both of which operate "through electronic means" and may collect data. Monitoring systems are those which are used for, or support, monitoring, supervising or evaluating someone's work. Decision-making systems are those which take decisions that "significantly affect persons performing platform work". This may include decisions concerning recruitment, allocation of work and working time, remuneration, access to training or promotion, and their contractual status.
The directive seeks to address concerns that workers may not have information about what data is being collected about them, and how it is evaluated. The reasons behind decisions taken by the systems may not be clear, or explainable, and there may be limited or no avenues for challenge, redress or rectification. In relation to personal data, it makes supplemental provisions that will apply over and above the General Data Protection Regulation (GDPR) framework.
This regulation introduces cybersecurity requirements for digital products.
Next important deadline(s): 11 December 2027
The EU Cyber Resilience Act (CRA) entered into force on 10 December 2024. The CRA has a 36-month transition period, meaning it will be effective from 11 December 2027.
On the horizon.
11 December 2027 (effective date)
The final text can be found here.
Manufacturers of products with digital elements (software and hardware products).
The CRA will introduce cybersecurity requirements for products with digital elements, aiming to protect consumers and businesses from products with inadequate security features. It will require manufacturers to ensure that the cybersecurity of their products conforms with minimum technical requirements, from the design and development phase and throughout the whole life cycle of the product. This could include carrying out mandatory security assessments.
Certain types of products with digital elements deemed safety critical will be subject to stricter conformity assessment procedures, reflecting the increased cybersecurity risks they present.
The CRA also introduces transparency requirements, obliging manufacturers to disclose certain cybersecurity aspects to consumers.
The CRA will apply to all products that are connected either directly or indirectly to another device or network, with some exceptions such as medical devices, aviation or cars.
When the CRA becomes effective, software and products connected to the internet will be required to bear the CE marking to indicate that they comply with the applicable standards. Note also that the European common criteria-based cybersecurity certification scheme, developed under the EU Cybersecurity Act, has been adopted and will apply on a voluntary basis to information and communication technology products within the EU.
This directive helps parties bring a claim for damages for harm caused by an AI tool.
Next important deadline(s): None
This legislation was not finished by the time the European Parliament session ended ahead of the elections in June 2024. This overview reflects the legislation as it stood at that point.
On 13 November 2024, the European Parliament announced the list of legislative files which will resume or continue business in the new legislature, with the AI Liability Directive among them. ⭐ However, the European Commission's work programme for 2025, published in February 2025, states that the directive is to be withdrawn. The European Parliament and Council of the EU can now provide their views before the Commission makes a final decision.
In September 2024, the European Parliament's Think Tank published a complementary impact assessment on the Commission's proposal. The report, among other things, proposes to expand the scope of the directive into a "more comprehensive software liability instrument" which would cover not only AI, but all other types of software.
None (awaiting final decision from European Commission)
Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), available here.
The directive will impact businesses and individuals bringing or defending private actions for non-contractual damages in relation to artificial intelligence (AI). Once implemented at national level, it would change civil litigation rules in EU member states on access to evidence and the burden of proof for some AI damages claims.
The EU AI Liability Directive is intended to reduce perceived difficulties in claiming non-contractual damages for harm caused by AI.
The proposal sits alongside the EU AI Act and wider reforms to EU product liability legislation. Some AI systems will fall under the EU's product liability regime, a strict liability regime that does not require proof of fault. Where they do not, damages for harm are likely to take the form of a non-contractual claim that requires the claimant to prove the defendant's fault and a causation link between the fault and the harm suffered.
The complexities of AI systems can make it difficult to unravel why harm has been caused, whether there was fault, and whether the fault flowed through to the harm caused. Claimants often suffer from an asymmetry of information – the defendant knows more about what has happened than they do – which can be even greater in relation to claims about an AI system, particularly in civil law systems without common law-style disclosure obligations. The AI liability directive seeks to address these difficulties with two changes to standard civil litigation procedure in EU member states for AI claims.
First, a new right of access to information is proposed for claimants seeking compensation in relation to an AI system classified as "high risk" under the AI Act. Under the proposals in the directive, the claimant must first make "all proportionate attempts" to obtain information from the defendant. If that is unsuccessful and if the claimant can show they have a plausible claim for damages, the claimant will be able to ask the court to order disclosure of "relevant evidence" from the defendant.
"Relevant evidence" is not currently defined, but could feasibly include some or all of the extensive compliance and technical documentation required for high-risk systems under the AI Act. It could also potentially extend to data sets used for training, validating or testing AI systems and even to the content of mandatory logs that AI providers must maintain for traceability purposes.
Moreover, the directive proposes that failure to comply with a disclosure order will trigger a rebuttable presumption of non-compliance with a duty of care by the defendant – aiding the claimant in proving fault.
Secondly, the directive proposes creating a "presumption of causality". "National courts shall presume … the causal link between the fault of the defendant and the output produced by the AI system" where the claimant shows that all of the following requirements are met:
• the defendant was at fault through non-compliance with a duty of care under EU or national law;
• it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system (or the system's failure to produce an output); and
• that output (or failure to produce an output) gave rise to the damage.
The proposal is not as simple as a reversal of the burden of proof: the claimant still needs to demonstrate these three elements.
In relation to high-risk AI systems, the directive proposes that the defendant will be shielded from the presumption of causality where it can demonstrate that the claimant has access to sufficient evidence and expertise to prove the causal link.
For AI systems that are not high risk, the presumption will apply only where the court considers that it would be excessively difficult for the claimant to prove the causal link.
Finally, the proposal is that the presumption will apply to AI systems put into use by a non-professional user only if the user has materially interfered with the conditions of operation of the AI system, or was required and able to determine those conditions of operation but failed to do so.
More generally, the presumption of causality will be rebuttable, shifting the burden of proof onto the defendant to show that there is no causal link between the fault and the AI output; for example, by showing that the AI system could not have caused the harm in question.
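The structure of the proposed presumption, with its separate carve-outs for high-risk and non-high-risk systems, can be summarised schematically. The sketch below is our illustrative reading of the draft text, not a legal test: the parameter names are assumptions, and a court would weigh each of these questions on the evidence.

```python
def causality_presumed(fault_shown: bool,
                       fault_likely_influenced_output: bool,
                       output_caused_damage: bool,
                       high_risk_system: bool,
                       claimant_can_prove_link: bool,
                       proof_excessively_difficult: bool) -> bool:
    # The three cumulative conditions must all be satisfied before
    # any presumption of causality can arise.
    if not (fault_shown and fault_likely_influenced_output and output_caused_damage):
        return False
    if high_risk_system:
        # The defendant escapes the presumption by demonstrating that the
        # claimant has sufficient evidence and expertise to prove the link.
        return not claimant_can_prove_link
    # For systems that are not high risk, the presumption applies only
    # where proving the causal link would be excessively difficult.
    return proof_excessively_difficult

# Even where the presumption arises, it remains rebuttable: the defendant
# may still show there is no causal link between the fault and the output.
```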
There is an intentional interplay between this directive and the EU's AI Act, linking non-compliance with the AI Act regulatory regime to increased exposure to damages actions.
If enacted at EU level, the principles in the directive would then need to be implemented at national level within a transposition period, which usually lasts two years. This two-stage approach reflects the extensive differences between the civil litigation rules of EU member states: it enables each jurisdiction to enact harmonised changes, adapted as needed to fit its national civil litigation regime.
As currently drafted, the new rules would only apply to harm that occurs after the transposition period, without retrospective effect.
This is the first part of a two-pillar solution to address the tax challenges arising from digitalisation.
Next important deadline(s): None
Still awaited in the UK.
Not yet implemented in the UK.
None
-
All industries impacted by digitalisation with certain exceptions (e.g. financial services) but focussed on large multinational businesses (MNEs).
Agreement has been reached among over 130 members of the OECD/G20 Inclusive Framework to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
Pillar One involves a partial reallocation of taxing rights over the profits of the largest and most profitable MNEs (those with revenues exceeding EUR 20 billion and profitability greater than 10%) to the jurisdictions where consumers are located. So, it is about where they pay tax. It is hoped that this will resolve longstanding concerns that the international corporate tax framework has not kept pace with the digital economy and with how highly digitalised businesses generate value from active interaction with their users. Under the proposal, 25% of an MNE's "residual profit" (profit in excess of 10% of revenue) would be allocated to market jurisdictions according to an allocation key, recognising sustained and significant involvement in a market irrespective of physical local presence.
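To illustrate the arithmetic with invented figures (a sketch only, based on the percentages described above; the thresholds and the allocation key will ultimately be fixed by the multilateral convention):

```python
# A worked example of the Pillar One reallocation using invented figures.
revenue = 50e9                    # EUR 50bn global revenue (above the EUR 20bn threshold)
profit = 10e9                     # EUR 10bn profit, i.e. 20% profitability (above 10%)

routine_profit = 0.10 * revenue            # profit up to 10% of revenue: EUR 5bn
residual_profit = profit - routine_profit  # EUR 5bn of "residual profit"
amount_reallocated = 0.25 * residual_profit  # 25% reallocated to market jurisdictions

print(f"Profit reallocated to market jurisdictions: EUR {amount_reallocated / 1e9:.2f}bn")
# EUR 1.25bn, divided among market jurisdictions by the allocation key
```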
Implementation of Pillar One is still some way off. It will be implemented by way of a multilateral convention (published in October 2023), and will not come into force until the multilateral convention has been ratified by a critical mass of countries. The introduction of Pillar One will be coordinated with the removal of all Digital Services Taxes and other relevant similar measures already implemented in jurisdictions.
This bill proposes to recognise beneficial EU regulations, address new product risks like AI, and clarify supply chain responsibilities for online marketplaces.
Next important deadline(s): 5 March 2025
The draft Product Regulation and Metrology Bill completed the committee stage in the House of Lords on 11 December 2024. It began the report stage on 26 February 2025, with the second sitting scheduled for 5 March 2025.
5 March 2025 (second sitting of report stage in the House of Lords)
The Product Regulation and Metrology Bill
Anyone placing products on the market in the UK.
In the King's Speech 2024, the government set out plans to introduce a Product Safety and Metrology Bill which would "preserve the UK's status as a global leader in product regulation, supporting businesses and protecting consumers". The King's Speech 2024 background briefing notes specifically recognised that the EU is reforming product safety regulations in line with technological developments through, for example, the new EU General Product Safety Regulation and the EU Product Liability Directive.
The draft bill, named the Product Regulation and Metrology Bill, was introduced to the House of Lords on 4 September 2024 when it had its first reading.
The bill is intended to modernise the UK's management of product and metrology regulations.
A key aspect of the bill is that it allows for the introduction of regulations for products placed on the UK market that are closely aligned with relevant EU laws, while at the same time allowing for necessary divergence, a move that is likely to be welcomed by businesses.
Other aspects of the bill include enhancing compliance and enforcement capabilities, including improved data sharing between regulators and market surveillance authorities; clarifying the responsibilities of those in the supply chain, including online marketplaces; and updating the legal metrology framework. The bill also provides for cost recovery mechanisms and emergency modifications.
Products excluded from the bill include: food, feedstuffs, plants, fruit and fungi, plant protection products, animal by-products, products of animal origin, aircraft, military equipment, medicines and medical devices.
On 8 October 2024, the Product Regulation and Metrology Bill had its second reading in the House of Lords and was broadly welcomed, though concerns were raised over the bill lacking detail. The Lords commented that without seeing the draft statutory instruments which will provide this detail, there is little for them to scrutinise. Read more on the second reading in our October Regulatory Outlook.
The bill has been subject to a high level of scrutiny due to the low level of detail included in it. A number of amendments were proposed during the committee stage, covering areas such as the bill's jurisdictional scope, environmental considerations, the definitions of a product and of an online marketplace, enhancements to consumer safety, reviews of technical standards, and enforcement powers.
The bill is currently being read at report stage: the first sitting took place on 26 February 2025 and the second sitting is scheduled for 5 March 2025. The bill will then move to third reading, which is the final stage in the House of Lords. Once enacted, the bill will be reprinted to include all agreed amendments.
The bill will likely have a significant impact on product regulation in the UK and businesses will need to keep abreast of developments.
This bill proposes to recognise beneficial EU regulations, address new product risks like AI, and clarify supply chain responsibilities for online marketplaces.
Next important deadline(s): 5 March 2025
The draft Product Regulation and Metrology Bill completed the committee stage in the House of Lords on 11 December 2024. It began the report stage on 26 February 2025, with the second sitting scheduled for 5 March 2025.
5 March 2025 (second sitting of report stage in the House of Lords)
The Product Regulation and Metrology Bill
Anyone placing products on the market in the UK.
In the King's Speech 2024, the government set out plans to introduce a Product Safety and Metrology Bill which would preserve the UK’s status as a global leader in product regulation, supporting businesses and protecting consumers". The King's Speech 2024 background briefing notes specifically recognised that the EU is reforming product safety regulations in line with technological developments through, for example, the new EU General Product Safety Regulation and the EU Product Liability Directive.
The draft bill, named the Product Regulation and Metrology Bill, was introduced to the House of Lords on 4 September 2024 when it had its first reading.
The bill is intended to modernises the UK's management of product and metrology regulations.
A key aspect of the bill is that it allows for the introduction of regulations for products to be placed on the UK market that are closely aligned with relevant EU laws, while, at the same time allowing for necessary divergence, a move that is likely to be welcomed by businesses.
Other aspects of the bill include enhancing compliance and enforcement capabilities, including improved data sharing between regulators and market surveillance authorities; clarifying the responsibilities of those in the supply chain, including online market places; and updating the legal metrology framework. The bill also provides for cost recovery mechanisms and emergency modifications.
Products excluded from the bill include: food, feed stuff, plants, fruit and fungi, plant protection products, animal by-products, products of animal origin, aircrafts, military equipment, medicines and medical devices.
On 8 October 2024, the Product Regulation and Metrology Bill had its second reading in the House of Lords and was broadly welcomed, though concerns were raised over the bill lacking detail. The Lords commented that without seeing the draft statutory instruments which will provide this detail, there is little for them to scrutinise. Read more on the second reading in our October Regulatory Outlook.
The bill has been subject to a high level of scrutiny due to the low level of detail included in it. A number of amendments to the bill were proposed during the Committee Stage, including the bill's jurisdictional scope, environmental considerations, the definition of a product, the definition of an online marketplace, enhancements to consumer safety, reviews of technical standards, and enforcement powers.
The bill is currently at report stage: the first sitting took place on 26 February 2025 and the second sitting is scheduled for 5 March 2025. The bill will then be reprinted to include all agreed amendments and move to third reading, the final stage in the House of Lords.
The bill will likely have a significant impact on product regulation in the UK and businesses will need to keep abreast of developments.
This regulation will supplement the EU GDPR with harmonised procedures for the cross-border enforcement of the GDPR.
Next important deadline(s): None
The proposal was adopted by the European Commission in July 2023. The Parliament adopted its position on the Commission's proposal in April 2024. However, the legislation had not been finalised by the time the European Parliament session ended ahead of the elections in June 2024. Soon after the elections, the Council of the EU agreed its negotiating position on the draft legislation.
On 13 November 2024, the European Parliament announced the list of legislative files on which work will resume or continue in the new legislature, with the proposal for this regulation among them. The European Commission's Work Programme for 2025 also included the regulation in its list of pending proposals. Once the proposal is revived, trilogue negotiations between the EU institutions will be able to begin.
On the horizon.
None (awaiting revival in the new legislature)
Proposal for a Regulation of the European Parliament and of the Council laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679, available here.
This new regulation will impact all those involved in cross-border enforcement of the GDPR, including the national data protection authorities (DPAs), the parties under investigation and any parties making a complaint about cross-border GDPR compliance to a DPA.
The regulation adds new procedural rules for the cross-border enforcement mechanisms set out in Chapter VII of the GDPR. It does not introduce new enforcement procedures but clarifies how the existing GDPR framework operates. Because GDPR enforcement is decentralised to member state level, cross-border enforcement within the EU is relatively common, for example where a data subject lodges a complaint about a data controller or processor located in a different member state.
The European Commission's review of the application of the GDPR indicated that cross-border enforcement and dispute resolution under the GDPR mechanisms was being hampered by differences in administrative procedures and interpretation of the GDPR at national level. The regulation therefore aims to create a harmonised, common set of rules for cross-border GDPR matters.
The proposed regulation will create harmonised rules in relation to various aspects of cross-border enforcement of the GDPR.
In agreeing its negotiating position, the Council of the EU is seeking amendments to: (i) ensure clearer timelines to speed up the cooperation process; (ii) provide the option of not applying the additional rules where a case is straightforward; and (iii) provide for an early resolution mechanism allowing DPAs to resolve a case before initiating the standard procedures for dealing with a cross-border complaint, for example where the organisation in question has addressed the infringement or an amicable settlement has been reached.
The new rules will not affect the provisions of the GDPR concerning the rights of data subjects, the obligations of data controllers or processors, or the lawful grounds for data processing. They will also not affect enforcement where there is no cross-border aspect.