Our timeline is an overview of major digital regulation that has come into effect in recent years, is in the legislative process, or is coming into effect soon. It is designed to help you keep track of relevant regulatory developments, understand their status and identify which pieces of legislation will actually affect your business.
Our international regulatory lawyers offer support for all stages of your business's digitalisation journey. We can help you prepare for, implement and comply with all aspects of digital regulation, whatever your sector. Explore our digital regulation expertise.
Last updated: 31 July 2025
The timeline is indicative only and not a complete overview of applicable law.
This Act seeks to increase sharing of public sector data, regulates data marketplaces, and creates the concept of data altruism for trusted sharing.
Next important deadline(s)
25 July 2025
24 September 2025
The Data Governance Act applies from 24 September 2023, but businesses that were already providing data intermediary services on 23 June 2022 have until 24 September 2025 to comply with relevant data intermediary provisions.
Enacted. Mostly in effect.
Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), available here.
25 July 2025 (deadline for responses to the consultation on the European Commission's evaluation of the DGA)
24 September 2025 (for businesses already providing data intermediary services on 23 June 2022).
The Data Governance Act (DGA) deals with three main areas of data sharing in the EU: the re-use of protected public sector data, data intermediation services, and data altruism.
The DGA aims to create a new EU model for data sharing, with trusted frameworks and organisations to encourage sharing, but also regulates data-sharing organisations. Data governance in this context means rules, structures, processes and technical means to share and pool data.
The DGA seeks to address the problem that some public data cannot be made open as it is protected by confidentiality, intellectual property rights or data privacy rules. Requirements are imposed on EU member states to optimise the availability of data while protecting confidentiality and privacy, for example using technical means such as anonymisation, pseudonymisation or accessing data in secure data rooms. Exclusive data sharing arrangements are prohibited except in limited public interest situations. Other requirements include a maximum delay before answering a data access request, and the creation of a single information point by each member state. The European Commission has created a searchable EU register of all information compiled by the national single information points.
The new data intermediary regulatory framework aims to grow private sector trust in opening up data by boosting the neutrality and transparency of data intermediaries. Data intermediaries are required to be neutral third parties, which do not monetise the data themselves by selling it to another business, or feeding it into their own products and services. Data intermediaries are required to act in the best interests of the data subjects. The DGA creates a full regulatory framework for data intermediation services, which will face the additional costs and burden of notification and compliance requirements. The Commission has, via an Implementing Regulation, introduced a logo for trusted "EU recognised data intermediary" organisations to differentiate recognised trusted services (i.e. those services that satisfy the compliance requirements of the DGA) from other services. The Commission has created a register of all data intermediation services providers in the EU.
Member states may choose to legislate for "dissuasive financial penalties" for non-compliance. Data intermediaries must notify the national competent authority of their services, and that authority will monitor the intermediary's compliance. As noted, existing data intermediaries in operation on 23 June 2022 have until 24 September 2025 to bring their operations into compliance.
The DGA creates new legal structures and a regulatory framework for "data altruism". This will enable people and businesses to share their data voluntarily and without reward for a purpose in the public interest (for example, medical or environmental research). As with data intermediaries, the Commission has introduced a logo to differentiate a compliant "EU recognised data altruism organisation" from other services.
Data altruism organisations must be registered in the Commission's new EU public register of recognised data altruism organisations and the data altruism logo must be accompanied by a QR code with a link to that register. Data altruism organisations must be not-for-profit, pursuing stated objectives in the general interest and inviting data holders to contribute their data in pursuance of those objectives. They will not be able to use the pooled data for any other purposes. The DGA further requires independent functioning and functional separation from other activities, as well as requirements to safeguard transparency and data subjects' rights.
The relevant authorities in member states are responsible for the registration of data altruism organisations and for monitoring compliance by data intermediation services providers.
In May 2024, the Commission opened infringement procedures against 18 member states that have either failed to designate the responsible authorities to implement the DGA, or failed to prove that such authorities are empowered to carry out their duties under the DGA. In July 2024, the Commission also opened infringement proceedings against Ireland.
In December 2024, the Commission sent a reasoned opinion to ten member states that had failed to comply with the requirements relating to responsible authorities. These member states had two months to respond and take the necessary measures; otherwise, the Commission may refer the cases to the Court of Justice of the EU.
⭐ In July 2025, the Commission launched a series of consultations on the evaluation of three key EU data regulations, including the DGA. The consultation on the DGA looks at how well it supports trusted data sharing, including the role of data intermediaries, data altruism and safeguards for international data transfers. The deadline for responses is 25 July 2025.
The DSA builds on the e-Commerce Directive to address new challenges in the digital sphere.
Next important deadline(s)
2026
Most in-scope service providers had to comply from 17 February 2024.
Provisions enabling the designation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) applied from 16 November 2022. The VLOPs and VLOSEs that were designated on 25 April 2023 had to comply by 25 August 2023.
Providers that are subsequently designated as VLOPs and VLOSEs by the European Commission will have four months from the Commission's notification of designation to comply with the extra obligations applicable to VLOPs and VLOSEs.
The list of designated VLOPs and VLOSEs can be viewed here.
In effect.
Beginning of 2026 (first harmonised reports due under the DSA regulation on transparency reporting)
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), available here.
The Digital Services Act (DSA) broadly applies to all online intermediaries, irrespective of their place of establishment, that offer digital services or products to users in the EU.
It sets out a layered approach to regulation, with requirements increasing cumulatively across four tiers of service provider: intermediary services, hosting services, online platforms (including online marketplaces), and VLOPs and VLOSEs.
VLOPs and VLOSEs are online platforms and online search engines with more than 45 million average monthly active users in the EU that have been designated as such by the Commission.
Micro and small businesses are exempt from the requirements for online platforms and online marketplaces, unless they qualify as a VLOP or VLOSE.
The DSA aims to update the "rules of the internet", with the key principle that what is illegal offline should also be illegal online.
The DSA both preserves and updates the intermediary liability position set out in the e-Commerce Directive (e.g. the hosting defence). However, the range of requirements fundamentally changes the liability of online services in the EU with strict obligations for the supervision of illegal content online, as well as new rules on transparency, user safety and advertising accountability. It also includes detailed rules on features that must be incorporated into online platforms and marketplaces or included in user terms and conditions.
The DSA preserves the liability defences under Articles 12-15 of the e-Commerce Directive (the "mere conduit" defence, the "caching" defence, the "hosting" defence and the "no obligation to monitor" provision). Clarity is added that these defences remain available even if the platform has voluntarily taken action to detect, identify and remove, or disable access to, illegal content.
However, notice and take down obligations on online platforms are strengthened. Platforms must put in place mechanisms for users to report illicit content (such as hate speech, terror propaganda, discrimination or counterfeit goods). Platforms must decide what to do about reported content in a timely, diligent and non-arbitrary manner, in order to continue to benefit from the hosting defence. Specified mechanisms to challenge content removal must be made available, including alternative dispute resolution. Online platforms must monitor for repeated submission of illegal content or unfounded complaints by particular users, and suspend their access.
Third party content monitors can apply for "trusted flagger" status. Platforms must prioritise illegal content reports from such individuals and bodies.
General obligations apply in relation to user safety, transparency, controls and information.
Annual reports are required from all intermediary service providers on their content moderation activity, with the required contents depending on the service provided (see "Implementation progress" below). Hosting service providers must include information about illegal content notices and complaints received and action taken. Other general obligations include provisions relating to content recommendations and online terms, and obligations to publish and communicate information on the average monthly active recipients of the service.
Websites must not be designed and organised in a way that deceives or manipulates users – so-called "dark patterns" – but must be designed in a user-friendly and age-appropriate way to make sure users can make informed decisions.
In September 2023, the Commission launched the DSA Transparency Database, a publicly accessible database of the "statements of reasons" that online platforms (with the exception of micro and small enterprises) must submit to the Commission setting out their reasons for content moderation decisions. The Commission has also published an open-source software package to facilitate the analysis of data in the Transparency Database.
The DSA introduces traceability provisions for online marketplaces. These include a "know-your-trader" obligation, requiring online marketplaces that allow B2C sales to conduct due diligence on traders prior to allowing them to use the platform.
Platforms must be designed to facilitate compliance with traders' legal obligations, such as e-commerce and product safety requirements. The illegal content takedown provisions are intended to facilitate the removal of illegal or counterfeit products, and platforms must contact consumers who have purchased such products.
Specific transparency obligations apply to online advertising. It must be clear to users whether a displayed item is an advertisement and from whom it originates, and users must be given information about the main parameters used to determine to whom the advertisement is displayed.
VLOPs and VLOSEs are designated by the Commission. Platforms and search engines need to report on the number of users at least every six months.
An additional layer of obligations is imposed on VLOPs and VLOSEs. They must diligently monitor and mitigate systemic risks, including the use of their service to disseminate illegal content; negative effects on fundamental rights; and negative effects on civic discourse, electoral processes and public security, as well as risks relating to gender-based violence, the protection of public health and minors, and individuals' physical and mental well-being.
The Commission's guidelines on mitigating systemic risks in relation to electoral processes were published in the EU's Official Journal on 26 April 2024. The guidelines aim to support VLOPs and VLOSEs with their compliance obligations under Article 35 of the DSA (mitigation of risks) and with other obligations relevant to elections. As well as mitigation measures, the guidelines cover best practices before, during and after electoral events.
VLOPs and VLOSEs must also carry out regular risk assessments, report on their content moderation activity and conduct independent annual audits to assess their compliance with the DSA and with any commitments made under codes of conduct and crisis protocols adopted by the Commission. They must also disclose the main parameters of their recommender systems and offer at least one recommender system that does not involve profiling. In addition, the Commission, as well as member states, can seek access to their algorithms.
The Commission has also launched a DSA whistleblower tool which allows individuals with inside information to report harmful practices by VLOPs and VLOSEs that are potentially in breach of the DSA, anonymously if preferred.
Non-VLOPs and VLOSEs
Each member state must designate one or more competent authorities responsible for the application and enforcement of the DSA in its jurisdiction, and must designate one of these as its "Digital Services Coordinator" (DSC). The DSCs are responsible for enforcing the DSA in their respective territories in respect of platforms that are not VLOPs or VLOSEs, and can work together. The DSCs make up the European Board for Digital Services, chaired by the Commission, which acts as an independent advisory group on the enforcement of the DSA.
In 2024, the Commission opened infringement procedures against various member states which either had not designated their DSC by the 17 February 2024 deadline (initially Estonia, Poland and Slovakia) or had not yet granted their DSC full powers to carry out their duties under the DSA (initially Cyprus, Czechia and Portugal), with further proceedings opened in July 2024 against Belgium, Spain, Croatia, Luxembourg, the Netherlands and Sweden. In May 2025, the Commission referred five of these member states (Czechia, Spain, Cyprus, Poland and Portugal) to the Court of Justice of the EU for failing to take the necessary measures.
VLOPs and VLOSEs
Enforcement in relation to VLOPs and VLOSEs is exclusively reserved to the Commission. Fines of up to 6% of global annual turnover can be levied for breaches.
The European Commission has been, and continues to be, proactive in sending requests for information to various platforms concerning their compliance with their DSA obligations. It has also opened formal proceedings against some platforms following preliminary investigations and requests for information.
International cooperation
In May 2024, the Commission and the UK's Ofcom signed an administrative arrangement to support their work in enforcing the new online safety regimes in both the EU and the UK under the DSA and the UK Online Safety Act 2023 respectively.
On 22 February 2024, the delegated regulation on independent audits entered into effect. The delegated act provides a framework to guide VLOPs and VLOSEs when preparing their external independent audits. It also provides mandatory templates for auditors to use when completing an audit report, as well as mandatory templates for the VLOPs and VLOSEs to use when completing their audit implementation reports.
The first risk assessment reports and independent audit reports from designated VLOPs and VLOSEs were published in November 2024.
⭐ From 1 July 2025, the implementing regulation on transparency reporting under the DSA started to apply. The regulation standardises templates and reporting periods for the transparency reports that providers of intermediary services have to publish in relation to their content moderation practices. VLOPs and VLOSEs must report twice a year, while other services report annually. Under the implementing regulation, providers must start collecting data in line with the templates from 1 July 2025, with the first harmonised reports due at the beginning of 2026. The Commission also plans to update the requirements for submitting statements of reasons to the DSA transparency database so that they align with the implementing regulation.
⭐ On 1 July 2025, the voluntary Code of Practice on Disinformation was integrated into the DSA. The code, which has been signed by various advertisers, ad-tech companies, platforms and other relevant organisations, is a set of 44 commitments aimed at fighting disinformation online. It covers, among other things, transparency of political advertising, cutting financial incentives for disseminators of disinformation, measures to reduce the spread of disinformation, tools to empower users against disinformation, and tools to empower researchers and fact-checkers. With integration, compliance with the code can be used by enforcers as a benchmark for determining DSA compliance.
⭐ On 2 July 2025, the Commission adopted a delegated act on data access for researchers under the DSA. The delegated act explains how VLOPs and VLOSEs should share internal data that is not publicly available with qualified researchers. It sets out the legal and technical requirements for such access so that researchers can assess systemic risks and mitigation measures in the EU. It also establishes a new online DSA data access portal. The delegated act must be scrutinised by the European Parliament and Council of the EU within the next three months. It will enter into force on publication in the Official Journal of the EU.
⭐ On 14 July 2025, the Commission published its final guidelines on the protection of minors under Article 28 of the DSA. See this Regulatory Outlook for more information. The guidelines also recommend the use of age verification technologies to restrict access to adult content, pointing to EU digital identity wallets (when they become available) and to the Commission's blueprint for age verification apps, which sets a standard for such apps.
Digital regulation | UK Regulatory Outlook July 2025
17 February 2024: The Digital Services Act (DSA) is now fully applicable
Rethinking regulation of data-driven digital platforms
EU's Digital Service Act to introduce first express ban on 'dark patterns'
Commission finds EU consumer law inadequate to protect consumers in the digital world
The DSA builds on the e-Commerce Directive to address new challenges in the digital sphere.
Next important deadline(s)
2026
Most in-scope service providers had to comply from 17 February 2024.
Provisions enabling the designation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) applied from 16 November 2022. The VLOPs and VLOSEs that were designated on 25 April 2023 had to comply by 25 August 2023.
Providers that are subsequently designated as VLOPs and VLOSEs by the European Commission will have four months from the Commission's notification of designation to comply with the extra obligations applicable to VLOPs and VLOSEs.
The list of designated VLOPs and VLOSEs can be viewed here.
In effect.
Beginning of 2026 (first harmonised reports due under the DSA regulation on transparency reporting)
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), available here.
The Digital Services Act (DSA) broadly applies to all online intermediaries, irrespective of their place of establishment, that offer digital services or products to users in the EU.
It sets out a layered approach to regulation with requirements increasing cumulatively depending on the classification of the service provider:
VLOPs and VLOSEs are online platforms and online search engines with more than 45 million monthly active users and which are designated as a VLOP or VLOSE by the Commission.
Micro and small businesses are exempt from the requirements for online platforms and online marketplaces, unless they qualify as a VLOP or VLOSE.
The DSA aims to update the "rules of the internet", with the key principle that what is illegal offline should also be illegal online.
The DSA both preserves and updates the intermediary liability position set out in the e-Commerce Directive (e.g. the hosting defence). However, the range of requirements fundamentally changes the liability of online services in the EU with strict obligations for the supervision of illegal content online, as well as new rules on transparency, user safety and advertising accountability. It also includes detailed rules on features that must be incorporated into online platforms and marketplaces or included in user terms and conditions.
The DSA preserves the liability defences under Articles 12-15 of the e-Commerce Directive (the "mere conduit" defence, the "caching" defence, the "hosting" defence and the "no obligation to monitor" provision). Clarity is added that these defences remain available even if the platform has voluntarily taken action to detect, identify and remove, or disable access to, illegal content.
However, notice and take down obligations on online platforms are strengthened. Platforms must put in place mechanisms for users to report illicit content (such as hate speech, terror propaganda, discrimination or counterfeit goods). Platforms must decide what to do about reported content in a timely, diligent and non-arbitrary manner, in order to continue to benefit from the hosting defence. Specified mechanisms to challenge content removal must be made available, including alternative dispute resolution. Online platforms must monitor for repeated submission of illegal content or unfounded complaints by particular users, and suspend their access.
Third party content monitors can apply for "trusted flagger" status. Platforms must prioritise illegal content reports from such individuals and bodies.
General obligations apply in relation to user safety, transparency, controls and information.
Annual reports are required from all intermediary service providers on their content moderation activity, with the required contents depending on the service provided (see below "Implementation progress"). Hosting service providers must include information about illegal content notices and complaints received and action taken. These include provisions relating to content recommendations and online terms, and obligations relating to the publication and communication of information on the average monthly active recipients of the service.
Websites must not be designed and organised in away that deceives or manipulates users – so-called "dark patterns" – but must be designed in a user-friendly and age-appropriate way to make sure users can make informed decisions.
In September 2023, the Commission launched the DSA Transparency Database which is a publicly accessible database of "statements of reasons "that online platforms must submit to the Commission setting out their reasons for making content moderation decisions (with the exception of micro- and small enterprises). The Commission has also published an open-source software package to facilitate the analysis of data in the Transparency Database.
The DSA introduces traceability provisions for online marketplaces. These include a "know-your-trader" obligation, requiring online marketplaces that allow B2C sales to conduct due diligence on traders prior to allowing them to use the platform.
Platforms must be designed to facilitate compliance with traders' legal obligations such as e-commerce and product safety requirements. The illegal content take down provisions are to facilitate the removal of illegal or counterfeit products, and platforms have obligations to contact consumers who have purchased those products.
Specific transparency obligations apply to online advertising. It must be clear to users whether a displayed item is an advertisement and from whom it originates. Information about the main parameters used to decide to whom it will be displayed is essential in this context.
VLOPs and VLOSEs are designated by the Commission. Platforms and search engines need to report on the number of users at least every six months.
An additional layer of obligations are imposed on VLOPs and VLOSEs. They must diligently monitor and mitigate systemic risks including their service being used for the dissemination of illegal content; the impact of their service on human rights; and the risk of their service negatively affecting civic discourse, electoral processes, public security, gender-based violence, the protection of public health and minors and individuals' physical and mental well-being.
The Commission's guidelines on mitigating against systemic risks in relation to electoral processes were published in the EU's Official Journal on 26 April 2024. The guidelines aim to support VLOPs and VLOSEs with their compliance obligations under Article 35 of the DSA (mitigation of risks) and with other obligations relevant to elections. As well as mitigation measures, the guidelines cover best practices before, during and after electoral events.
VLOPs and VLOSEs must also implement processes for regular risk assessments, produce reports on their content moderation actions, and conduct independent annual audits to assess their compliance with the DSA and with any commitments made pursuant to codes of conduct and crisis protocols that the Commission has adopted. They must also disclose the parameters of their recommender systems and provide at least one recommender system that does not involve profiling. In addition, the Commission, as well as member states, can seek access to their algorithms.
The Commission has also launched a DSA whistleblower tool which allows individuals with inside information to report harmful practices by VLOPs and VLOSEs that are potentially in breach of the DSA, anonymously if preferred.
Platforms other than VLOPs and VLOSEs
Each member state must designate one or more competent authorities responsible for the application and enforcement of the DSA in its jurisdiction, and must designate one of these as its "Digital Services Coordinator" (DSC). The DSCs are responsible for enforcement of the DSA in their respective territories in respect of platforms that are not VLOPs or VLOSEs, and can work together. The DSCs make up the European Board for Digital Services, chaired by the Commission, which acts as an independent advisory group on the enforcement of the DSA.
The Commission opened infringement procedures against six member states which either did not designate their DSC by the 17 February 2024 deadline (Estonia, Poland and Slovakia) or had not yet granted their DSC full powers to carry out its duties under the DSA (Cyprus, Czechia and Portugal). In July 2024, the Commission opened infringement proceedings against six further member states (Belgium, Spain, Croatia, Luxembourg, Netherlands and Sweden). In May 2025, the Commission referred five of these member states (Czechia, Spain, Cyprus, Poland and Portugal) to the Court of Justice of the EU for failing to take the necessary measures.
VLOPs and VLOSEs
Enforcement in relation to VLOPs and VLOSEs is exclusively reserved to the Commission. Fines of up to 6% of global annual turnover can be levied for breaches.
The European Commission has been, and continues to be, proactive in sending requests for information to various platforms concerning their compliance with their DSA obligations. It has also opened formal proceedings against some platforms following preliminary investigations and requests for information.
International cooperation
In May 2024, the Commission and the UK's Ofcom signed an administrative arrangement to support their work in enforcing the new online safety regimes in both the EU and the UK under the DSA and the UK Online Safety Act 2023 respectively.
On 22 February 2024, the delegated regulation on independent audits entered into effect. The delegated act provides a framework to guide VLOPs and VLOSEs when preparing their external independent audits. It also provides mandatory templates for auditors to use when completing an audit report, as well as mandatory templates for the VLOPs and VLOSEs to use when completing their audit implementation reports.
The first risk assessment reports and independent audit reports from designated VLOPs and VLOSEs were published in November 2024.
⭐ From 1 July 2025, the implementing regulation on transparency reporting under the DSA started to apply. The regulation standardises templates and reporting periods for the transparency reports that providers of intermediary services have to publish in relation to their content moderation practices. VLOPs and VLOSEs must report twice a year, while other services report annually. Under the implementing regulation, providers must start collecting data in line with the templates from 1 July 2025, with the first harmonised reports due at the beginning of 2026. The Commission also plans to update the requirements for submitting statements of reasons to the DSA transparency database so that they align with the implementing regulation.
⭐ On 1 July 2025, the voluntary Code of Practice on Disinformation was integrated into the DSA. The code, which has been signed by various advertisers, ad-tech companies, platforms and other relevant organisations, is a set of 44 commitments aimed at fighting disinformation online. It covers, among other things, transparency of political advertising, cutting financial incentives for disseminators of disinformation, measures to reduce the spread of disinformation, tools to empower users against disinformation, and tools to empower researchers and fact-checkers. With integration, compliance with the code can be used by enforcers as a benchmark for determining DSA compliance.
⭐ On 2 July 2025, the Commission adopted a delegated act on data access for researchers under the DSA. The delegated act explains how VLOPs and VLOSEs should share internal data that is not publicly available with qualified researchers. It sets out the legal and technical requirements for such access so that researchers can assess systemic risks and mitigation measures in the EU. It also establishes a new online DSA data access portal. The delegated act must be scrutinised by the European Parliament and Council of the EU within the next three months. It will enter into force on publication in the Official Journal of the EU.
⭐ On 14 July 2025, the Commission published its final guidelines on the protection of minors under Article 28 of the DSA. See this Regulatory Outlook for more information. The guidelines also recommend the use of age verification technologies to restrict access to adult content, pointing to EU digital identity wallets (when they become available) and to the Commission's blueprint for age verification apps, which sets a standard for age verification apps.
Digital regulation | UK Regulatory Outlook July 2025
17 February 2024: The Digital Services Act (DSA) is now fully applicable
Rethinking regulation of data-driven digital platforms
EU's Digital Services Act to introduce first express ban on 'dark patterns'
Commission finds EU consumer law inadequate to protect consumers in the digital world
These regulations introduce new rules for digital platforms on collecting and reporting information about their sellers.
Next important deadline(s)
None
1 January 2024 in the UK (with the first reports due by 31 January 2025)
Regulations enacted (and came into force on 1 January 2024)
The Platform Operators (Due Diligence and Reporting Requirements) Regulations 2023, available here.
None (full compliance already required and first reports made)
All online platforms and those who sell through them.
The UK's implementing regulations include two categories of "Excluded Platform Operators": platform operators whose entire business model is such that they either:
In each case, the platform operator may only rely on this exemption if it gives notice to HMRC in the form and manner specified in directions given by HMRC.
These regulations introduce standardised rules for collecting relevant information about sellers of goods and relevant services – currently personal services (including the provision of transportation and delivery services) and the rental of immoveable property – and their income from digital platform activities, reporting that information to tax authorities, and exchanging it between tax authorities. Under the rules, platforms must:
There are penalties for non-compliance. Tax authorities will exchange the information with the tax authority of the jurisdiction in which the seller is resident (or in which the rental property is located) to ensure local tax compliance.
Pillar Two is the second part of a two pillar solution to address the tax challenges arising from the digitalised economy.
Next important deadline(s): None
1 January 2024 for the "Multinational top-up tax" (MTT) and "Domestic top-up tax" (DTT) and accounting periods beginning on or after 31 December 2024 for the "undertaxed profits rule" (UTPR).
In effect
None (compliance already required for certain elements)
See the provisions dealing with MTT, DTT and UTPR in Parts 3 and 4 of Finance (No.2) Act 2023, available here and subsequent amendments in Finance Act 2024 available here.
See also Finance Act 2025, available here, for amendments to the MTT and DTT legislation to implement the UTPR and other additional amendments to the MTT and DTT to introduce the transitional country by country reporting safe harbour anti-arbitrage rule.
All industries impacted by digitalisation with certain exceptions (e.g. financial services) but focussed on large multinational businesses (MNEs).
Agreement has been reached with over 130 members of the OECD/G20 Inclusive Framework to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
Pillar Two sets a new minimum corporate tax rate of 15% for large MNEs (those with a combined group turnover of more than EUR 750 million) on global profits, through a new set of rules known as the global anti-base erosion (GloBE) rules, which expand jurisdictions' taxing rights so that group profits are taxed at the global minimum rate worldwide. If a jurisdiction does not adequately tax profits, then other jurisdictions (in particular shareholder or source jurisdictions) can seek to tax such profits.
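As a rough illustration of the mechanism described above, the top-up charge for a single jurisdiction can be sketched as follows. This is a deliberate simplification with hypothetical figures: the statutory GloBE computation involves adjusted covered taxes, the substance-based income exclusion, safe harbours and ordering rules between the charging mechanisms, none of which are modelled here.

```python
# Simplified sketch of the GloBE top-up tax arithmetic (illustrative only).
MINIMUM_RATE = 0.15  # Pillar Two global minimum corporate tax rate

def top_up_tax(jurisdiction_profit: float, covered_taxes: float) -> float:
    """Return the indicative top-up tax for one jurisdiction.

    Computed as (15% minus the effective tax rate) applied to the
    jurisdiction's profit, ignoring the substance-based income
    exclusion and other adjustments in the actual GloBE rules.
    """
    if jurisdiction_profit <= 0:
        return 0.0
    etr = covered_taxes / jurisdiction_profit  # effective tax rate
    if etr >= MINIMUM_RATE:
        return 0.0  # profits already taxed at or above the minimum
    return (MINIMUM_RATE - etr) * jurisdiction_profit

# Hypothetical example: EUR 100m of profit taxed at an effective 9%
# rate leaves a 6 percentage-point shortfall, i.e. roughly EUR 6m of
# top-up tax collectable by another jurisdiction under the GloBE rules.
shortfall = top_up_tax(100_000_000.0, 9_000_000.0)
```

The sketch shows why a low-tax jurisdiction does not reduce the group's overall rate below 15%: the shortfall is simply collected elsewhere.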
The OECD framework for Pillar Two will operate on a country-by-country basis, with implementation dates varying between countries. The UK has enacted primary legislation in the Finance (No.2) Act 2023 to implement the MTT and DTT from 1 January 2024 (as further amended by the Finance Act 2024) and legislation to implement the UTPR has been introduced by the Finance Act 2025 with effect for accounting periods beginning on or after 31 December 2024.
Part 1 introduces cybersecurity obligations for connected consumer products.
Next important deadline(s)
None
Provisions relating to connectable products in Part 1 came into effect on 29 April 2024
Secondary legislation
The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023 (giving effect to the connectable products provisions) are available here.
The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) (Amendment) Regulations 2025 are available here.
In effect
UK Product Security and Telecommunications Infrastructure Act 2022, available here.
None (full compliance already required)
Manufacturers of connectable products (Product Security) and telecoms network operators, infrastructure providers and site providers (Telecommunications Infrastructure). This affects products already on sale/available.
Part 1 of The Product Security and Telecommunications Infrastructure Act 2022 (PSTIA) sets out powers for UK Government Ministers to specify security requirements relating to relevant connectable products (a product which is internet or network connectable).
The obligations imposed on manufacturers of connectable/digital products include:
In addition, businesses may face liability where cybersecurity vulnerabilities lead to the loss or corruption of consumer data.
The PSTIA also imposes obligations on manufacturers, importers and distributors to not make products available in the UK unless accompanied by a statement of compliance with applicable security conditions.
The PSTIA leaves room for further delegated legislation to introduce additional obligations, and clarifications.
For more on the practical steps to comply with the new regime, please see the link to our Eating Compliance for Breakfast webinar below.
On 8 January 2024, the Office for Products Safety and Standards (OPSS) published guidance for businesses that need to comply with the requirements. See the link to the February Regulatory Outlook below for more.
Additional guidance was added on 23 April 2024 specifying that the product's statement of compliance (SoC) must be "accompanied" by the product itself, and defining the SoC as a "document". However, the terms "document" and "accompany" are not clearly defined in the PSTIA and so businesses must determine how they will comply with these requirements for their individual products.
Amending regulations were made on 24 February 2025 and came into force on 25 February 2025. The changes exempt the following categories of products from the PSTIA: motor vehicles; agricultural and forestry vehicles; and two- or three-wheel vehicles and quadricycles. The amending regulations also clarify that where a manufacturer of relevant connectable products extends the minimum length of time for which security updates relating to such products will be provided, the new minimum length of time must be published as soon as is practicable. The government has updated its guidance in line with these changes.
This new regulation aims to boost competition in EU digital markets by regulating online "gatekeepers".
Next important deadline(s)
H2 2025
The first group of gatekeepers designated under the Digital Markets Act (DMA) have had to comply with its obligations since 6 March 2024. Additional designations continue to be made by the Commission: obligations take effect six months from the date of designation.
Enacted. Fully in effect.
Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act), available here.
Second half of 2025 (outcome of compliance proceedings in respect of some gatekeepers expected).
The DMA is aimed at digital businesses offering a "core platform service", with significant scale and reach within the EU – these firms will be designated as "gatekeepers".
A "core platform service" includes: online intermediation services; online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communication services; operating systems; cloud computing services; and certain advertising services so far as they are provided by a provider of any of the core platform services.
A "gatekeeper" is a "core platform service provider" that has a significant impact on the internal market, whose platform enables business users to access customers and enjoys an entrenched and durable position (meaning at least three years). While there is some nuance in this designation, there will be a presumption that a provider will satisfy the designation test if it:
The first limb is a set of rules that will apply only to "gatekeepers" and comprises a list of do's and don'ts for these providers. The rules are designed to address perceived competitive imbalances arising from the gatekeeper's position in the market.
Examples of obligations on gatekeepers include:
Examples of prohibited behaviour include:
The second limb covers market investigation powers for the European Commission to conduct investigations to consider whether: a core platform provider should be designated as a gatekeeper; there has been systematic non-compliance; and services should be added to the list of core platform services.
The Commission also has powers to supplement the gatekeeper obligations under the DMA, with additional obligations through further legislation, based on a market investigation. This is to ensure that the legislation maintains pace with digital markets as they evolve.
The Commission can also designate firms as "emerging" gatekeepers, that is, they are on the way to reaching the thresholds set out in the DMA for designation as a gatekeeper.
Failure to comply with the rules could lead to fines of up to 10% of global turnover, and, in the case of systematic infringements, structural remedies.
The companies so far designated by the Commission as gatekeepers are: Alphabet, Amazon, Apple, Booking.com, ByteDance, Meta and Microsoft, in relation to the following core platform services:
Challenges have been lodged by some gatekeepers in relation to some of these designations and in April 2025, the Commission decided that Meta's service, Facebook Marketplace, should no longer be designated as a gatekeeper since it had fewer than 10,000 business users in 2024.
In addition, the Commission conducted a market investigation to further assess rebuttal arguments submitted by the online social networking service X, which argued that even if X was deemed to meet the quantitative thresholds, it did not qualify as an important gateway between businesses and consumers, meaning that its online social networking service should not be a designated gatekeeper. The Commission found in X's favour.
The Commission has also launched several investigations into non-compliance with the DMA by some gatekeepers. This resulted in the first fines being issued under the DMA to Apple and Meta. Both tech companies are appealing these decisions. Other proceedings are ongoing and expected to conclude at various points in the second half of 2025.
The Commission has also signalled its intention to monitor the development of AI tools in the context of the DMA.
On 23 January 2025, the DMA working group suggested that the Commission should consider whether to designate AI or cloud infrastructure services.
After conducting technical compliance workshops in Spring 2025, in which third parties were able to provide their views on compliance by gatekeepers, the Commission began conducting a second round of workshops in June 2025, allowing gatekeepers to present updates on the measures employed over the past year to ensure compliance and enabling third parties to provide their feedback.
This regulation introduces sweeping reforms to the EU product safety regime.
Next important deadline(s)
None
Applies to all new products placed on the market from 13 December 2024.
Became law on 10 May 2023 but had an 18-month transition period until 13 December 2024.
Regulation (EU) 2023/988 of 10 May 2023 on general product safety, available here.
None (full compliance already required)
Anyone placing consumer products on the market in the EU.
The regulation contains sweeping reforms which significantly change the way that both modern and traditional products are produced, supplied and monitored across the EU. It raises safety requirements and imposes more sophisticated compliance obligations for all non-food products and the businesses involved in manufacturing and supplying them to end users.
The GPSR also adopts an improved definition for a "product" and includes new factors to take into account when assessing safety, so that the EU's product safety regime adequately addresses modern technologies.
The GPSR makes clear that connected devices are considered to be products within its scope and will be subject to the general safety requirement. In addition, when assessing whether a product is safe, economic operators have to take into account the effect that other products might have on their product, its cybersecurity features, and any evolving, learning and predictive functionalities of the product.
In addition, the GPSR seeks to address the increasing digitalisation of supply chains, in particular the growth of e-commerce, and places a number of obligations on online marketplaces, such as:
Market surveillance authorities will also be able to order online platforms to remove dangerous products from their platforms or to disable access to them.
To help you navigate the changes introduced by the GPSR, see our dedicated microsite which provides you with a series of helpful resources, including these insights:
This Act overhauls the UK legal framework for public service broadcasting, video on demand, streaming and internet radio.
Next important deadline(s)
16 September 2025
22 September 2025
H2 2025
Late 2025
The Media Act 2024 (Act) received Royal Assent on 24 May 2024. Most of the Act's provisions will be brought into force by secondary legislation, apart from Part 2 on prominence on television selection services, which came into force on the day on which the Act was passed.
Secondary legislation made under the Act
The first commencement regulations, The Media Act 2024 (Commencement No. 1) Regulations 2024, brought into force certain provisions of the Media Act on 23 and 26 August 2024.
The second commencement regulations, The Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force Part 5 and Section 19 of the Act on 17 October 2024.
Enacted – coming into effect in stages
Media Act 2024, available here.
16 September 2025 (consultation on TSS designation closes)
22 September 2025 (consultation on local news and information on analogue commercial radio closes)
Second half of 2025 (decisions on designation of TSS expected)
Late 2025 (Ofcom final statement on listed events regime expected)
Broadcasters (public service and commercial)
Video-on-demand (VoD) and streaming service providers
TV selection services, such as smart TV, set-top-box, streaming stick and pay TV platform providers, as well as the associated software providers
Voice-activated smart speakers and other radio selection services
Radio station operators
Sports rights holders
The primary aims of the Act are to:
On 26 February 2024, Ofcom published its roadmap for implementing the bill (as it was then) in which it explained its "high-level plan" for putting the provisions into practice once the bill became law. Ofcom cautioned that the dates outlined were indicative only. Its timetable assumed that the legislation would receive Royal Assent by the summer (which it did) and that the necessary secondary legislation would subsequently be laid before Parliament. Ofcom is proceeding with implementation, which is taking place in phases over the next two years.
In July 2024, Ofcom published a call for evidence on the listed events regime under the Act, which closed on 26 September 2024.
In June 2025, Ofcom published a consultation on changes Ofcom proposes making to its Code on listed events and on the definition of various terms used in the listed events regime. The deadline for responses was 8 August 2025. See this Regulatory Outlook for more. Ofcom expects to publish a final statement in late 2025 and the regime is expected to come into effect in 2026.
The Media Act 2024 (Commencement No 1) Regulations 2024 brought certain (mostly functional) provisions of the Act into force on 23 August 2024. The Regulations also brought into effect the non-UK Tier 1 VoD services framework, but the details of the regime are still to be determined as Ofcom must first consult and then the government will produce further regulations. In addition, the Regulations partially brought into effect the new digital prominence regime, but the details are also subject to consultation and further regulations from the government.
In September 2024, the government indicated to Ofcom its intention to begin its consideration of Tier 1 regulation of appropriate on-demand services "as soon as is practically possible". The government asked Ofcom to prepare a report on the operation of the UK market generally for on-demand programme services (ODPS) and non-UK ODPS, a report the government is required to take into account.
On 30 May 2025, Ofcom delivered the report to the secretary of state. Ofcom has not made this report publicly available due to restrictions in the Communications Act 2003 on sharing the information provided to Ofcom by providers of ODPS.
The second commencement regulations, the Media Act 2024 (Commencement No. 2 and Transitional and Saving Provisions) Regulations 2024 brought into force the following provisions of the Act on 17 October 2024: (i) Part 5 (Regulation of radio services); and (ii) Section 19 (amount of financial penalties: qualifying revenue), but only for the purposes of enabling Ofcom to carry out the necessary work to bring the section into force. The regulations also contain transitional and saving provisions.
Part 5 covers, among other things, local analogue commercial radio licences and their renewal. Previously, renewal was only available if the licence holder was also broadcasting a digital radio service on a "relevant" DAB multiplex. The Act introduces an additional, alternative statutory basis for licensees to apply for renewal if there is a "relevant" DAB multiplex available, but it is not "suitable" for their needs. In December 2024, Ofcom launched a consultation on its approach to considering applications under this new renewal route. Essentially, Ofcom is proposing that a "relevant" local or small-scale multiplex should be considered "unsuitable" in relation to an analogue service only if there is a substantial difference in the size of their coverage areas. The consultation closed on 5 February 2025.
Part 5 also seeks to secure the delivery of local news and information on analogue commercial radio by imposing new requirements for its consistent transmission. On 1 July 2025, Ofcom published a consultation on how it proposes to implement this new framework, including the specific conditions Ofcom proposes to include in licences. Ofcom also published draft guidance on how licensees might meet the requirements of the new licence conditions. The consultation closes on 22 September 2025.
Part 6 of the Act governs the regulation of radio selection services (RSS). Ofcom is obliged to provide the secretary of state with a report making recommendations as to which RSS should be designated. In February 2025, Ofcom consulted on the principles and methods it proposes applying when making these recommendations. It also sought views on its emerging thinking on the appropriate sources of data for measuring the use of RSS and setting the threshold for recommendations.
Ofcom published its final statement confirming its principles and methods in May 2025.
Internet television equipment: The Internet Television Equipment Regulations 2024, which came into force on 14 November 2024, set out the descriptions of devices that are considered to be "internet television equipment" for the purposes of the new prominence framework under the Act. The regulations name smart televisions and streaming devices as internet television equipment.
The government has also published a policy paper on its approach to deciding the categories of TV devices that will be considered "internet television equipment". It explains that smart TVs, set-top boxes and streaming sticks will qualify, but that smartphones, laptops, tablets, PCs and video game consoles will not, as watching TV on these devices is not their primary function. Newer devices, such as home cinema projectors and portable lifestyle screens, internet-connected car touchscreens, virtual reality headsets, smart watches and in-home screens (such as smart fridges and smart speakers with in-built screens), will also be excluded for the same reason. However, the government intends to review the list one year after full implementation of the new prominence regime.
Television selection services: Under the Act, Ofcom is also obliged to provide a report to the secretary of state with recommendations on the designation of "television selection services" (TSS), which are the connected TV platforms on which, under the new regime, Ofcom-designated PSB TV apps must be available, prominent and easily accessible. In December 2024, Ofcom launched a consultation on the principles and methods it proposes applying when preparing the report. The consultation closed on 5 February 2025. In April 2025, Ofcom published a final statement with no changes. It is expected that decisions on designation will be made in the second half of 2025.
⭐ Ofcom is now consulting on 14 TSS that it proposes designating. Following consultation, Ofcom will formally report its recommendations to the secretary of state who will make regulations to formalise the designations. The consultation is open until 16 September 2025.
⭐ Internet programme services: Under the new online availability and prominence regime, TSS designated by the secretary of state (see above) will have to ensure that PSB TV players (known as internet programme services or IPS) that have been designated by Ofcom, and their content, are available, prominent and easily accessible. Before Ofcom can consider applications for designation as IPS, it is required to publish a statement setting out how it will do this. In July 2025, following consultation and after having made some minor changes, it published the final statement. Ofcom intends to publish an application form for PSBs to request designation as IPS later in 2025.
In May 2025, Ofcom published a consultation on implementing changes to the PSB quota obligations under the Act (to allow PSBs to use their on-demand services to meet their productions quotas). Ofcom also consulted on draft guidance on which programmes will count towards the original productions quota, and on amending its guidance on regional productions. The consultation closed on 10 July 2025.
⭐ In July 2025, Ofcom also updated its guidance on the codes of practice that PSBs must put in place and follow when commissioning independent producers. The guidance will not take effect until the relevant provisions in the Act have been commenced through secondary legislation. Ofcom does not expect this to be before 1 January 2026.
⭐ The Act also requires licensed PSBs to set out how they intend to fulfil all their regulatory obligations, including identifying the contribution that each service (whether online or linear) will make, in statements of programme policy (SoPP). Channel 4 has additional media content duties that it sets out in a statement of media content policy. The BBC is subject to different requirements under the BBC Charter and is not therefore required to deliver a SoPP. In July 2025 Ofcom published updated guidance for PSBs on preparing these documents, following consultation. The guidance will take effect once the relevant sections of the Media Act are commenced through secondary legislation.
This directive aims to increase the accessibility of both physical and digital products and services, focussing on digital.
Next important deadline(s)
28 June 2030
The EU Accessibility Directive (directive) was enacted in 2019. National laws implementing the directive across the EU came into effect on 28 June 2025 for new websites and digital services. Existing websites and digital services have a five-year transitional period, until 28 June 2030, to comply.
In effect.
National implementing legislation can be tracked from this page.
Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services, available here.
28 June 2030 (end of transitional period for existing websites and digital services)
The directive and related national implementing legislation will affect all businesses providing a range of "everyday" products and services, focused on digital technology. This includes:
There are various carve-outs for content and services available through websites and apps that do not have to be accessible, including historic and archived content, and third party content that is not funded, developed or under the control of the entity providing it.
There are also important carve-outs where compliance would fundamentally alter the nature of the product or service or would impose a disproportionate burden on the provider. Careful consideration of these tests is necessary ahead of product or service launches or wider compliance reviews. It should be noted that for many businesses there are notification requirements before these carve-outs will apply.
The overarching intention of the directive is to enable people with disabilities or impairments to access these products and services on an equal basis with others.
Rather than specifying technical requirements, the directive identifies the particular features of the relevant products and services that must be accessible (set out in Annex I of the directive). The intention is that this approach allows for flexibility and innovation in how these requirements are actually met in practice.
The accessibility requirements set out in Annex I depend on the product or service but include: providing information through more than one sensory channel (e.g. the option of an audio version of written text); ensuring content is presented in an understandable way; and requirements for written content to be formatted in sufficiently large fonts, with sufficient contrast and adjustable spacing. Products requiring fine motor control must offer alternative controls, and requirements for extensive reach or great strength should be avoided. The directive envisages the use of assistive technologies, as well as emphasising the need to respect privacy.
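The directive leaves terms such as "sufficient contrast" undefined, and in practice they are often assessed against the WCAG 2.x contrast-ratio formula, an industry convention rather than a requirement of the directive itself. A minimal sketch of that kind of check, assuming WCAG is the chosen benchmark:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB colour given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        # Linearise the sRGB channel value per the WCAG formula.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours; WCAG AA expects >= 4.5 for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

A compliance review might run such a check across a site's colour palette, but whether a given ratio satisfies the directive remains a legal judgment, not purely a technical one.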
The directive sets out requirements on product manufacturers for technical documentation, self-assessment of compliance and a declaration of conformity with the accessibility requirements, as well as a CE marking to show compliance. Importers are required to place only compliant products on the market, and to check for compliance and the CE mark. Distributors must similarly check for compliance and not put non-compliant products on the market.
For services, compliance with the directive's accessibility requirements must be incorporated into how the services are designed and provided. Information about the service's accessibility compliance must be made available (and updated) for the lifetime of the service. Compliance must be reviewed and maintained, with disclosure obligations around non-compliance.
As noted, compliance is not required where it would fundamentally alter the basic nature of the product or service, or where it would create a disproportionate economic burden on the entity concerned. Where a business decides it falls within one of these carve-outs and so does not have to comply, it must document that assessment. In relation to services, the assessment must be renewed every five years, when the service is altered, or when the national authority requests a reassessment.
The directive also requires member states to set up national market surveillance authorities to oversee enforcement of the accessibility regime. Adequate enforcement powers must be given to the national authority, which should include provision for "effective, proportionate and dissuasive penalties", as well as remedial action to address non-compliance.
All member states have implemented the necessary legislation, meaning that the directive is in effect across the whole of the EU. New websites and digital services must comply now, but existing websites and digital services have until 28 June 2030 to comply. National implementing legislation can be tracked from this page.
The EU Accessibility Act – Two Months to Go!
EU Accessibility Act now in force – Is your business at risk?
Webinar recording: European Accessibility Act: An introduction (19 March 2024)
Webinar recording: Digital Inclusion: Risks of not making your digital products accessible (18 April 2023)
This Act makes provisions to enhance sharing and use of data by UK organisations, including changes to the UK GDPR.
Next important deadline
2025
The Data (Use and Access) Act 2025 (Act) received Royal Assent on 19 June 2025. The provisions of the Act will be brought into effect in stages. Many (including most of the GDPR changes) will come in on a future date to be decided by the government or are being implemented over time via secondary legislation. A few took effect as soon as the Act became law and some will take effect two months after Royal Assent, including those giving the Information Commissioner the power to require the production of documents.
Secondary legislation under the Act
The Data (Use and Access) Act 2025 (Commencement No 1) Regulations
Enacted and partially in effect.
Data (Use and Access) Act 2025 available here.
2025 and onwards (see "What is the process for implementation?" below)
Data subjects, data controllers and organisations in various sectors will be affected, including technology providers and businesses in energy, telecoms, infrastructure, health and social care and financial services, as well as scientific and other research teams.
The Act is a wide-ranging piece of legislation, much of it amending existing legislation, in particular the UK General Data Protection Regulation (GDPR), and setting up various frameworks to be fleshed out in secondary legislation. Some key provisions include:
In the areas of data protection law, the Act introduces many fairly small changes, including those set out below.
There is a softening of the current restrictions on automated decision making (ADM), for example by making explicit that the (partial) prohibition on ADM applies only to special category data and only where there is "no meaningful human involvement".
In theory, this gives organisations broader scope to use ADM, which may facilitate deploying AI systems for additional use cases. However, the changes are subtle and several conditions and restrictions still apply, including obligations to put in place appropriate safeguards for all significant wholly automated decisions based on personal data.
Definitions of certain types of research are added to the GDPR to refine the scope of those concepts.
In particular, the definition of "scientific research" arguably makes the concept slightly wider, in that it is deemed to include any research that "can reasonably be described as scientific" irrespective of the source of research funding, and whether or not it is commercial.
The definition of "statistical purposes" potentially slightly narrows the concept, in that it applies only where the data is used for statistical surveys or to produce statistical results.
There is a broader concept of what people can consent to their data being used for. The Act clarifies that, in appropriate cases, a person will be able to give consent to their data potentially being used for more than one type of scientific research, even if not all those research purposes are identifiable at the time they give that consent.
Overall, the changes are intended to benefit organisations that conduct research or use research results. While there is broadening of the scope of "scientific research", this is still tempered by safeguards and limitations.
The Act provides for the government to bring in standards to enable interoperability and sharing of health-related data, amending the Health and Social Care Act so that information standards can be published in relation to information technology and IT services. The government hopes this will facilitate the smooth flow of patient information across healthcare, giving staff quicker access to patient data.
IT providers whose services are used in connection with the provision of health care or adult social care must have regard to these standards. The regulator will be able to call out publicly those who fail to meet them.
IT suppliers for the health and care sectors will need to ensure that their systems meet common standards to enable data sharing across platforms. Such standards have yet to be set, but may include, for example, standards allowing for mandatory interoperability and APIs for services which process patients’ information.
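The standards themselves have yet to be set, but the kind of interoperability they envisage can be illustrated with a purely hypothetical sketch. The record below is loosely modelled on the widely used HL7 FHIR "Patient" resource; the field names and checks are illustrative assumptions, not requirements under the Act:

```python
import json

# Hypothetical illustration only: the Act's information standards are yet to
# be set. HL7 FHIR is used here as an example of the kind of widely adopted
# interoperability format such standards could draw on.

def make_patient_record(patient_id: str, family: str, given: str, birth_date: str) -> dict:
    """Build a minimal FHIR-style Patient resource as a plain dict."""
    return {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,  # ISO 8601: YYYY-MM-DD
    }

def is_interoperable(record: dict) -> bool:
    """Check the fields a receiving system would need in order to ingest the record."""
    return (
        record.get("resourceType") == "Patient"
        and bool(record.get("id"))
        and bool(record.get("name"))
        and bool(record.get("birthDate"))
    )

record = make_patient_record("p-0001", "Smith", "Alex", "1980-02-15")
print(json.dumps(record, indent=2))
print(is_interoperable(record))  # True
```

The point of a common format like this is that any conforming system can parse records produced by any other, which is the "data sharing across platforms" outcome the Act is aiming at.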
Under the Act, the existing Office for Digital Identities and Attributes will oversee a standards framework for online digital verification services.
Compliance with the standards framework will not be mandatory, but organisations that successfully apply for certification will be included on a publicly accessible register and will be entitled to display a "trust mark" to show they meet the standards.
The standards will include:
Organisations hoping to obtain certification will of course need to ensure that their data processing practices meet the requirements, which in some circumstances will go beyond their current UK GDPR compliance obligations.
The government will set up a "smart data" regime, under which separate smart data schemes can be created to address specific sector needs, such as in finance, energy and telecoms.
Smart data schemes will allow individuals to request that their data be shared directly with them, or with authorised and regulated third parties, and establish a supporting framework to ensure secure storage and transfers of this data.
The government hopes that these schemes can mirror the success of the open banking regime, enhancing consumer confidence in using trusted third-party services to provide, for example, personalised market comparisons and financial advice on cost savings, as well as "one-click" service switching to a new provider.
The National Underground Asset Register is a government digital service which provides instant access to a map of the underground pipes and cables for authorised users. It will be put on a statutory footing, rather than the current voluntary arrangement, mandating that owners of underground infrastructure, such as water companies or telecoms operators, register their underground assets. The idea is that companies will benefit from a richer, more comprehensive view of buried assets, enabling them to know exactly where any underground asset is located.
Failure to comply with the obligations may constitute a criminal offence and those in breach may be liable for damages to those suffering loss as a consequence of that failure.
The Act creates some exceptions to the current regime; for example, that user consent is not required for use of cookies/other tracking technologies in some online services where they are used solely to collect statistical data in order to make improvements to services or a website, or are used solely to improve the appearance or performance of a website, or adapt it to a user's preferences.
The exceptions are subject to various conditions, including around transparency, the right to object, and not using the collected data for purposes beyond the scope of the exceptions.
The Act limits the right for an individual to obtain copies of their personal data under the GDPR so that they are entitled only to the data that would be found in a "reasonable and proportionate" search. This largely codifies existing case law and guidance from the Information Commissioner's Office (ICO).
The Act provides that certain types of processing purposes will be more likely to count as "legitimate interests", including processing for the purposes of:
In theory, this makes it easier for organisations to use the legitimate interest ground as the basis for their processing in these areas. However, its impact is reduced in practice, because the changes mean only that those types of processing are more likely to be considered legitimate interests (they are not guaranteed to be); the recitals of the GDPR already referred to the processing of personal data for direct marketing purposes potentially being regarded as carried out for a legitimate interest; and many organisations will have already concluded that they were covered by the legitimate interest ground.
The Act also includes provisions allowing the government, in future, to introduce categories of processing which will be deemed to be legitimate interests purposes.
The Act includes a new mechanism to allow the secretary of state to introduce more classes of special category data (s. 74) by enacting secondary legislation. The restrictions on the processing of special category data are significantly heavier than those on other types of personal data.
The Online Safety Act 2023 will be amended to allow the government to bring in regulations which could oblige service providers to provide information to third-party researchers conducting research into online safety measures, including the ability to bring in criminal offences, fines and other sanctions for non-compliance.
The UK regulator, the ICO, will be abolished and its functions transferred to a new body, the Information Commission, with changes to structure and powers.
On 21 July 2025, the Data (Use and Access) Act 2025 (Commencement No 1) Regulations were made, bringing into force several sections of the Act. These provisions are largely technical, relating mainly to sections of the Act that empower the government to make secondary legislation.
However, the regulations also brought into effect s111 of the Act, which amends the Privacy and Electronic Communications Regulations (PECR) by extending the time periods for relevant service providers to notify a personal data breach under PECR to the regulator.
Also brought into effect are parts of s117 of the Act, which establishes the Information Commission, though substantive changes to the ICO/Information Commission are not expected until early 2026 and can only take place once the new Information Commission's board has been appointed in any event.
In July 2025, the Department for Science, Innovation and Technology (DSIT) published an outline of its tiered timetable for implementing the Act. In summary:
• Stage 1 – immediately after Royal Assent – includes the commencement of technical provisions, clarifying aspects of the legal framework and measures requiring the government to publish an impact assessment, a report and a progress update on its plans for AI and copyright reform.
• Stage 2 – three to four months after Royal Assent (i.e. mid-September to mid-October 2025) – includes the commencement of most of the measures on digital verification services and those on the retention of information by providers of internet services in connection with the death of a child.
• Stage 3 – approximately six months after Royal Assent (i.e. mid-December 2025) – includes the commencement of the main changes to data protection legislation plus the provisions on information standards for health and adult social care.
• Stage 4 – more than six months after Royal Assent (i.e. from mid-December 2025 onwards) – includes the rest (e.g. measures on the National Underground Asset Register and the electronic registration of births and deaths).
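The staged dates above are simple calendar arithmetic from the date of Royal Assent (19 June 2025). As a quick illustration, the windows can be derived as follows (the `add_months` helper is our own, not anything defined in the Act or DSIT's timetable):

```python
from datetime import date

ROYAL_ASSENT = date(2025, 6, 19)  # Data (Use and Access) Act 2025

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months, clamping the day if needed."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    # Clamp to the last valid day of the target month (e.g. 31 Jan + 1 month).
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue
    raise AssertionError("unreachable")

# Stage 2: three to four months after Royal Assent
print(add_months(ROYAL_ASSENT, 3), "to", add_months(ROYAL_ASSENT, 4))
# Stage 3: approximately six months after Royal Assent
print(add_months(ROYAL_ASSENT, 6))
```

Running this gives 19 September to 19 October 2025 for Stage 2 and 19 December 2025 for Stage 3, matching the "mid-September to mid-October" and "mid-December" descriptions above.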
This Act introduces statutory duties on online content-sharing platforms and search services.
Next important deadline(s)
10 September 2025
20 October 2025
End of 2025
The Online Safety Act (Act) received Royal Assent on 26 October 2023, at which point certain (mostly functional) provisions came into effect, including those relating to Ofcom's duties to provide additional guidance and codes of practice. Many of the Act's main provisions require these documents from Ofcom and/or secondary legislation from the government to be put in place before they become effective.
16 December 2024 signalled the start of the three-month period for in-scope services to complete their illegal content risk assessments.
17 January 2025 marked the start of the duty of regulated providers of pornographic content to ensure, through the use of age verification/estimation tools, that children cannot encounter pornographic content.
16 March 2025 marked the date by which in-scope services must have completed their illegal content risk assessments.
17 March 2025 marked the date when in-scope services must comply with their duties in relation to illegal content in Part 3 of the Act.
16 April 2025 marked the date by which all in-scope services had to complete their Child Access Assessments.
24 July 2025 marked the date by when in-scope services likely to be accessed by children needed to have completed and recorded a children's risk assessment to assess the risk of harm to children on their service from content harmful to children.
25 July 2025 marked the date by when service providers needed to have implemented the measures set out in the Protection of Children Codes of Practice (or similar).
Secondary legislation under the Act
The Online Safety Act 2023 (Commencement No. 2) Regulations 2023 and the Online Safety Act 2023 (Commencement No. 3) Regulations 2024 brought into force some of Ofcom's powers, certain new offences and other minor provisions.
The Online Safety (List of Overseas Regulators) Regulations 2024 outline the overseas regulators with whom the UK online safety regulator, Ofcom, may co-operate, as set out in section 114 of the Act.
The Online Safety Act 2023 (Pre-existing Part 4B Services Assessment Start Day) Regulations 2024, which came into force on 22 May 2024, specify the "assessment start day" for Video-Sharing Platforms (VSPs) as 2 September 2024. VSPs will have to complete the assessments set out in the Act's sections 9 (illegal content risk assessment duties), 11 (children's risk assessment duties), 14 (assessment duties: user empowerment) and 35 (children's access assessments), starting from this date. VSPs have three months from the date that Ofcom publishes the relevant guidance to complete the assessments.
The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2024, which came into force on 10 December 2024, make the offence of sharing intimate images without consent under the Sexual Offences Act 2003 a "priority offence" under the Act.
The Online Safety Act 2023 (Commencement No 4) Regulations 2024 brought into force duties on regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools on 17 January 2025.
The Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 came into force on 27 February 2025. The regulations define the thresholds at and above which in-scope services become "categorised services" and subject to certain additional obligations under the Act.
The Online Safety Act 2023 (Commencement No. 5) Regulations 2025 will bring into force (from 3 November 2025) various provisions in the Act relating to Child Sexual Exploitation and Abuse (CSEA) content online, including:
• duties on both UK and non-UK providers of regulated user-to-user services to report detected and unreported CSEA content present on the service to the National Crime Agency (NCA) – the Secretary of State is still to make regulations on the information that should be contained in these reports, their format and the time frames in which they should be sent to the NCA;
• section 69, which makes it a criminal offence to knowingly or recklessly provide materially false information when reporting CSEA content to the NCA; and
• provisions relating to Ofcom's enforcement powers as regards offences under the Act for failing to comply with provision of information requirements, including in respect of the CSEA reporting requirement.
⭐ The Online Safety Act 2023 (Commencement No. 6) Regulations 2025 brought s 210 of the Act into effect on 25 July 2025. Section 210 repeals the video-sharing platform rules in the Communications Act 2003 (CA). The regulations mark the end of the transition period, which commenced in January 2024, during which pre-existing video-sharing services were regulated under both the Act and the CA. From 25 July 2025, services are regulated wholly under the Act.
In effect (partially).
10 September 2025 (closure of Ofcom consultation on draft guidance for regulated service providers on calculating their qualifying worldwide revenue (QWR))
20 October 2025 (deadline for response to Ofcom consultation on further measures for codes of practice)
End of 2025 (final guidance on protecting women and girls online expected from Ofcom)
Online Safety Act 2023, available here.
Businesses whose services facilitate the sharing of user-generated content or user interactions, particularly social media, messaging services, search and online advertising. The Act regulates those offering services to UK users, regardless of the service provider's location.
The Act aims to protect children from harmful content and to deal with illegal online content. The Act seeks to strike a balance between ensuring safe online interactions for children and adult users, and freedom of speech.
Key provisions under the Act include:
In order to ensure that the online safety regime is flexible and "future proof", a number of these provisions are, or will be, dealt with in secondary legislation, and in codes of practice and guidance developed by the designated regulator, Ofcom.
When the Act received Royal Assent, Ofcom devised a consultation process to inform its codes of practice and guidance. This comprises four major consultations together with various sub-consultations, on numerous aspects of the new regime. Ofcom also published an implementation roadmap.
In October 2024, Ofcom published an updated roadmap, setting out its progress in implementing the Act since it became law and presenting its plans for 2025. See our Insight for details.
In June 2025, Ofcom published a further update on its implementation plans.
The process began in November 2023, with a first major consultation on draft codes of practice and guidance on illegal harms duties, which closed on 23 February 2024 (see our Insight).
In December 2024, Ofcom published its final form illegal content codes of practice and guidance. This signalled the start date for services to comply with the new rules on conducting illegal content risk assessments. All in-scope services had to have completed their illegal content risk assessment, to evaluate the risk of harms to users resulting from the presence of illegal content on their service, by 16 March 2025. Services needed to implement the recommended measures set out in these codes (or be using other effective measures to protect users from illegal content) by 17 March 2025.
In August 2024, Ofcom consulted on strengthening its draft illegal harms codes of practice and guidance to include animal cruelty and human torture due to these categories of content not being fully covered by the list of "priority offences" in the Act. The "priority offences", listed in Schedule 7 of the Act, relate to the most serious forms of illegal content. Regulated online platforms not only have to take steps to prevent such content from appearing, but also have to remove it when they become aware of it. By including animal cruelty and human torture in its codes and guidance, Ofcom aims to ensure that providers understand that they will also have to remove this type of content. Ofcom's statement on the consultation is pending.
Under the Act, Ofcom has specific powers to tackle terrorism and child sexual exploitation and abuse (CSEA) content available on user-to-user services. Ofcom can, by issuing a "technology notice", make a provider use a certain technology to tackle this content or it can require them to develop such technology. Any technology Ofcom requires a provider to use must be accredited by Ofcom or an appointed third party, against minimum standards of accuracy set by government, following advice from Ofcom. In December 2024, Ofcom consulted on its proposals for what the minimum standards of accuracy for accredited technologies could be, and on draft guidance on how it proposes to use its technology notice powers. Ofcom intends to submit its final advice on minimum standards to the government and publish final guidance by Spring 2026.
To tackle the specific risks faced by women and girls online, Ofcom is required to publish additional, dedicated guidance on safety for women and girls. On 25 February 2025, Ofcom published draft guidance, setting out how in-scope service providers can combat content and activities that disproportionately impact women and girls. The guidelines describe nine actions providers can take and highlight good practice in this area. Ofcom consulted on the draft earlier this year and expects to publish the final guidance by the end of 2025.
On 24 April 2025, Ofcom published a consultation on amending its illegal content codes of practice to expand the application of measures that allow children to block and mute user accounts (ICU J1 in the code of practice) and disable comments (ICU J2), to include providers of certain smaller user-to-user services that are likely to be accessed by children where they have relevant risks and functionalities. The consultation closed on 22 July 2025. Ofcom expects to publish the final guidance by the end of 2025.
On 30 June 2025, Ofcom published a consultation proposing additional safety measures for its illegal content and protection of children codes of practice. The new proposals provide more targeted safety measures to make online services safer by design. They include measures to: stop illegal content going viral; make more use of proactive technologies (such as "hash matching" for intimate image abuse content and terrorism content); protect children while livestreaming; and respond to a crisis. The consultation closes on 20 October 2025. Ofcom plans to publish updated codes of practice in summer 2026.
On 16 January 2025, Ofcom published its final form guidance on "highly effective age assurance and other Part 5 duties" relating to the duties of regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools. This duty came into effect on 17 January 2025.
On 24 April 2025, ahead of the 25 July deadline to implement mandatory age assurance requirements (see "Children" below) Ofcom wrote to hundreds of pornography services, setting out in detail what services need to do to comply with their duties under the Act and reminding them of the consequences of non-compliance.
On 16 January 2025, Ofcom published its Statement on Age Assurance and Children's Access, comprising final form versions of Part 3 guidance on highly effective age assurance, guidance on children's access assessments (and guidance on highly effective age assurance and other Part 5 duties see above under Pornographic content)). This marked the start of compliance with the protection of children duties under the Act by kicking off the three-month period within which all in-scope services had to complete their Child Access Assessments (i.e. by 16 April 2025).
On 24 April 2025, Ofcom published its protection of children statement under the Act, including the draft Protection of Children Codes of Practice for search services and for user-to-user services, and the final form Children's Risk Assessment Guidance and Children's Risk Profiles.
On 4 July 2025, Ofcom published the final versions of its Protection of Children Code for user-to-user services and Protection of Children Code for search services.
Starting from 24 April 2025, all in-scope services had until 24 July 2025 to complete and record a Children's Risk Assessment to assess the risk of harm to children on their service from content harmful to children. This assessment is an additional requirement to completing an illegal content risk assessment (the deadline for which was 16 March 2025).
From 25 July 2025, service providers need to have implemented the measures set out in the Protection of Children Codes of Practice (or be using similar effective measures to protect children and mitigate the risks to children identified in their risk assessment). Ofcom has said that it will give a further six month grace period, ending 24 February 2026, for services to implement these measures. However, this grace period will not stop Ofcom from taking action in the meantime against services that it deems are "deliberately" not engaging with child protection measures or are causing "egregious harm".
The Act introduces a system categorising some regulated online services as category 1, 2A or 2B services, based on their key characteristics and whether they meet certain numerical thresholds.
In March 2024, Ofcom published a call for evidence to inform its further consultation on draft codes of practice and guidance on the additional duties that will apply to "categorised services" under the Act. This call for evidence closed on 20 May 2024 and a statement from the regulator is pending.
The regulations defining the thresholds above which in-scope services become "categorised services" and subject to certain additional obligations under the Act came into force on 27 February 2025. Ofcom will, once it has assessed services against these final thresholds, publish a register of categorised services, as well as a list of emerging category 1 services.
⭐ In July 2025, Ofcom published a statement on transparency reporting and its final transparency reporting guidance for categorised services. The guidance explains when and how Ofcom will exercise its transparency powers, setting out the factors it will consider when deciding what information categorised services will be required to provide (as set out in transparency notices issued by Ofcom) in their transparency reports.
At the end of May 2024, the government published guidance to Ofcom on determining the fees that will be payable by regulated services under the Act. The fees will only be paid by providers whose "qualifying worldwide revenue" (QWR) meets or exceeds a certain revenue threshold and who are not otherwise exempt, and will fund Ofcom's regulation of the new online safety regime. The government will retain oversight of the regulatory costs of the regime by setting Ofcom's total budget cap.
In June 2025, Ofcom published its policy statement on implementation of the online safety fees and penalties regime under the Act following a consultation (see this Regulatory Outlook for more on the consultation. Ofcom has decided on: the definition of QWR to be used to assess the threshold at or above which providers will be required to pay fees, and the maximum penalty caps; QWR for penalty caps in the case of joint and several liability; QWR threshold advice to the secretary of state; exemptions from having to notify Ofcom on QWR and from paying fees; its approach to its Statement of Charging Principles, which will establish the principles that Ofcom will apply when setting fees, which must be in place before Ofcom can start charging; and the process for notifying Ofcom of liability to pay fees.
⭐ In July 2025, Ofcom published a second consultation to implement the fees and penalties regime, this time on guidance for providers of online services regulated under the Act on calculating their QWR in accordance with the Online Safety Act 2023 (Qualifying Worldwide Revenue) Regulations 2025. The consultation is open until 10 September 2025.
The fee regime is expected to be in place by 2026/27 financial year. Until then, the government is funding Ofcom's initial costs. Additional fees will then be charged over an initial set period of consecutive years to recoup the set-up costs.
Under the Act, Ofcom has powers to require and obtain information from regulated services that it needs to fulfil its online safety duties. It will do this by issuing "information notices". On 26 February 2025, Ofcom published final guidance on its information gathering powers under the Act, how it plans to use them and the typical procedures it will follow. Failing to comply with an information notice may result in Ofcom taking enforcement action under the Act.
In January 2025, Ofcom opened an enforcement programme into age assurance measures to protect children from encountering pornographic content, initially focusing on regulated providers' compliance with the Part 5 duties. In July 2025, Ofcom extended this programme to cover all platforms that allow users to share pornographic material, whether they are dedicated adult sites or other services that include pornography (Part 3 services).
On 3 March 2025, Ofcom launched an enforcement programme to monitor whether services are meeting their illegal content risk assessment and record keeping duties under the Act. As part of the programme, the regulator asked various providers to provide it with their illegal content risk assessments to assist it in identifying possible compliance concerns and to monitor how the guidance is being applied. Ofcom expects the programme to run for at least 12 months, during which period, it may initiate formal investigations if it suspects that a provider is failing to meet its obligations under the Act.
In March 2025, Ofcom also launched an enforcement programme targeting child sexual abuse material (CSAM) online. In recognition of the prevalence of such material, making human content moderation insufficient to deal with the issue on its own, Ofcom's illegal content codes of practice recommend that certain services use automated moderation technology, including perceptual hash-matching if the service is a file-sharing or file-storage service at high risk of hosting CSAM, to identify such content and swiftly remove it. For more, see this Regulatory Outlook.
⭐ In July 2025, Ofcom launched a fourth enforcement programme to monitor whether services are meeting their children's risk assessment duties under the Act.
⭐ In July 2025, Ofcom also launched a new enforcement programme to protect children from harmful content through the use of age assurance. It builds on work undertaken by Ofcom's "small but risky taskforce", which identified several services whose main purpose is to host or disseminate content harmful to children. Ofcom will engage with these services and assess the age assurance processes they are implementing.
"Small but risky" online services: In September 2024, the Secretary of State for Science, Innovation and Technology, Peter Kyle, wrote to Ofcom asking how it plans to monitor and address the issue of "small but risky" online services. In its response, Ofcom said that tackling these services is a "vital part" of its regulatory mission. The regulator confirmed that it has already developed plans to take early action against these services.
International cooperation: In October 2024, the UK and US governments signed a joint statement on online safety, calling for platforms to go "further and faster" to protect children online by taking "immediate action" and continually using the resources available to them to develop innovative solutions, while ensuring there are appropriate safeguards for user privacy and freedom of expression. See this Regulatory Outlook for more.
AI: In November 2024, Ofcom published an open letter to UK online service providers on how the Act will apply to generative AI and chatbots. Ofcom reminded providers that various AI tools and content relating to user-to-user services, search services and pornographic material, will be in scope of the Act.
Statement of Strategic Priorities for online safety: In May 2025, the government's Statement of Strategic Priorities for online safety was laid before Parliament. It highlights five priorities that Ofcom must have regard to when implementing the OSA and that industry is expected to adhere to. See this Regulatory Outlook for details.
Online Information Advisory Committee: Ofcom has established an Online Information Advisory Committee to advise the regulator on issues relating to disinformation and misinformation, as required by the Act.
"Super-complaints": In June 2025, the government published a response to its consultation on "super-complaints" under the Act. The super-complaints regime aims to ensure that eligible entitles can raise complaints with Ofcom about "systemic issues" relating to existing or emerging online harms (see this Regulatory Outlook for background). The government has also laid draft Online Safety Super-Complaints (Eligibility and Procedural Matters) Regulations 2025 before Parliament to define the eligibility criteria for entities to submit super-complaints and the procedural steps to establish the duties of complainants and of Ofcom when a super-complaint is submitted. The super-complaints regime is stated to come into force on 31 December 2025. Ofcom will also be consulting on draft guidance for the new regime.
⭐ Researchers' access: In July 2025, Ofcom published a report, following a call for evidence, examining how researchers could gain better access to information from regulated online services to support their work on online safety. The report, which the regulator is required by the Act to publish and submit to the secretary of state, also examines the current constraints on information sharing for research purposes and how access could be improved, setting out three policy options for the government to consider.
UK Online Safety Act: Ofcom publishes guidance on age assurance and children's access assessments
Online Safety Act 2023: time to prepare as UK's Ofcom confirms start of illegal content duties
UK's Online Safety Act is a seismic regulatory shift for service providers
UK Online Safety Act: Ofcom launches its first consultation on illegal harms
The UK Online Safety Act: Top 10 takeaways for online service providers
This Act introduces statutory duties on online content-sharing platforms and search services.
Next important deadline(s)
10 September 2025
20 October 2025
End of 2025
The Online Safety Act (Act) received Royal Assent on 26 October 2023, at which point certain (mostly functional) provisions came into effect, including those relating to Ofcom's duties to provide additional guidance and codes of practice. Many of the Act's main provisions require these documents from Ofcom and/or secondary legislation from the government to be put in place before they become effective.
16 December 2024 signalled the start of the three-month period for in-scope services to complete their illegal content risk assessments.
17 January 2025 marked the start of the duty of regulated providers of pornographic content to ensure, through the use of age verification/estimation tools, that children cannot encounter pornographic content.
16 March 2025 marked the date by which in-scope services must have completed their illegal content risk assessments.
17 March 2025 marked the date when in-scope services must comply with their duties in relation to illegal content in Part 3 of the Act.
16 April 2025 marked the date by which all in-scope services must have completed their Child Access Assessments.
24 July 2025 marked the date by which in-scope services likely to be accessed by children needed to have completed and recorded a children's risk assessment to assess the risk of harm to children on their service from content harmful to children.
25 July 2025 marked the date by which service providers needed to have implemented the measures set out in the Protection of Children Codes of Practice (or similar).
Secondary legislation under the Act
The Online Safety Act 2023 (Commencement No. 2) Regulations 2023 and the Online Safety Act 2023 (Commencement No. 3) Regulations 2024 brought into force some of Ofcom's powers, certain new offences and other minor provisions.
The Online Safety (List of Overseas Regulators) Regulations 2024 outline the overseas regulators with whom the UK online safety regulator, Ofcom, may co-operate, as set out in section 114 of the Act.
The Online Safety Act 2023 (Pre-existing Part 4B Services Assessment Start Day) Regulations 2024, which came into force on 22 May 2024, specify the "assessment start day" for Video-Sharing Platforms (VSPs) as 2 September 2024. VSPs will have to complete the assessments set out in the Act's sections 9 (illegal content risk assessment duties), 11 (children's risk assessment duties), 14 (assessment duties: user empowerment) and 35 (children's access assessments), starting from this date. VSPs have three months from the date that Ofcom publishes the relevant guidance to complete the assessments.
The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2024, which came into force on 10 December 2024, make the offence of sharing intimate images without consent under the Sexual Offences Act 2003 a "priority offence" under the Act.
The Online Safety Act 2023 (Commencement No 4) Regulations 2024 brought into force duties on regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools on 17 January 2025.
The Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 came into force on 27 February 2025. The regulations define the thresholds at and above which in-scope services become "categorised services" and subject to certain additional obligations under the Act.
The Online Safety Act 2023 (Commencement No. 5) Regulations 2025 will bring into force (from 3 November 2025) various provisions in the Act relating to Child Sexual Exploitation and Abuse (CSEA) content online, including: (i) duties on both UK and non-UK providers of regulated user-to-user services to report detected and unreported CSEA content present on the service to the National Crime Agency. The Secretary of State is still to make regulations on the information that should be contained in these reports, their format and the time frames in which they should be sent to the NCA; (ii) section 69, which makes it a criminal offence to knowingly or recklessly provide materially false information when reporting CSEA content to the NCA; and (iii) provisions relating to Ofcom's enforcement powers as regards offences under the OSA for failing to comply with provision of information requirements, including in respect of the CSEA reporting requirement.
⭐ The Online Safety Act 2023 (Commencement No. 6) Regulations 2025 brought s 210 of the Act into effect on 25 July 2025. Section 210 repeals the video-sharing platform rules in the Communications Act 2003 (CA). The regulations mark the end of the transition period, which commenced in January 2024, during which pre-existing video-sharing services were regulated under both the Act and the CA. From 25 July 2025, such services are regulated wholly under the Act.
In effect (partially).
10 September 2025 (closure of Ofcom consultation on draft guidance for regulated service providers on calculating their QWR)
20 October 2025 (deadline for response to Ofcom consultation on further measures for codes of practice)
End of 2025 (final guidance on protecting women and girls online expected from Ofcom)
Online Safety Act 2023, available here.
Businesses whose services facilitate the sharing of user generated content or user interactions, particularly social media, messaging services, search and online advertising. The Act regulates those offering services to UK users, regardless of the service provider's location.
The Act aims to protect children from harmful content and to deal with illegal online content. The Act seeks to strike a balance between ensuring safe online interactions for children and adult users, and freedom of speech.
Key provisions under the Act include:
In order to ensure that the online safety regime is flexible and "future proof", a number of these provisions are, or will be, dealt with in secondary legislation and in codes of practice and guidance developed by the designated regulator, Ofcom.
When the Act received Royal Assent, Ofcom devised a consultation process to inform its codes of practice and guidance. This comprises four major consultations together with various sub-consultations, on numerous aspects of the new regime. Ofcom also published an implementation roadmap.
In October 2024, Ofcom published an updated roadmap, setting out its progress in implementing the Act since it became law and presenting its plans for 2025. See our Insight for details.
In June 2025, Ofcom published a further update on its implementation plans.
The process began in November 2023, with a first major consultation on draft codes of practice and guidance on illegal harms duties, which closed on 23 February 2024 (see our insight).
In December 2024, Ofcom published its final form illegal content codes of practice and guidance. This signalled the start date for services to comply with the new rules on conducting illegal content risk assessments. All in-scope services had to have completed their illegal content risk assessment, to evaluate the risk of harms to users resulting from the presence of illegal content on their service, by 16 March 2025. Services needed to implement the recommended measures set out in these codes (or be using other effective measures to protect users from illegal content) by 17 March 2025.
In August 2024, Ofcom consulted on strengthening its draft illegal harms codes of practice and guidance to include animal cruelty and human torture due to these categories of content not being fully covered by the list of "priority offences" in the Act. The "priority offences", listed in Schedule 7 of the Act, relate to the most serious forms of illegal content. Regulated online platforms not only have to take steps to prevent such content from appearing, but also have to remove it when they become aware of it. By including animal cruelty and human torture in its codes and guidance, Ofcom aims to ensure that providers understand that they will also have to remove this type of content. Ofcom's statement on the consultation is pending.
Under the Act, Ofcom has specific powers to tackle terrorism and child sexual exploitation and abuse (CSEA) content available on user-to-user services. Ofcom can, by issuing a "technology notice", require a provider to use a certain technology to tackle this content, or require it to develop such technology. Any technology Ofcom requires a provider to use must be accredited by Ofcom or an appointed third party, against minimum standards of accuracy set by the government following advice from Ofcom. In December 2024, Ofcom consulted on its proposals for what the minimum standards of accuracy for accredited technologies could be, and on draft guidance on how it proposes to use its technology notice powers. Ofcom intends to submit its final advice on minimum standards to the government and publish final guidance by spring 2026.
To tackle the specific risks faced by women and girls online, Ofcom is required to publish additional, dedicated guidance on safety for women and girls. On 25 February 2025, Ofcom published draft guidance, setting out how in-scope service providers can combat content and activities that disproportionately impact women and girls. The guidelines describe nine actions providers can take and highlight good practice in this area. Ofcom consulted on the draft earlier this year and expects to publish the final guidance by the end of 2025.
On 24 April 2025, Ofcom published a consultation on amending its illegal content codes of practice to expand the application of measures that allow children to block and mute user accounts (ICU J1 in the code of practice) and disable comments (ICU J2), to include providers of certain smaller user-to-user services that are likely to be accessed by children where they have relevant risks and functionalities. The consultation closed on 22 July 2025. Ofcom expects to publish the final guidance by the end of 2025.
On 30 June 2025, Ofcom published a consultation proposing additional safety measures for its illegal content and protection of children codes of practice. The new proposals provide more targeted safety measures to make online services safer by design. They include measures to: stop illegal content going viral; make more use of proactive technologies (such as "hash matching" for intimate image abuse content and terrorism content); protect children while livestreaming; and respond to a crisis. The consultation closes on 20 October 2025. Ofcom plans to publish updated codes of practice in summer 2026.
On 16 January 2025, Ofcom published its final form guidance on "highly effective age assurance and other Part 5 duties" relating to the duties of regulated providers of pornographic content to prevent access by children to such content through the use of age verification/estimation tools. This duty came into effect on 17 January 2025.
On 24 April 2025, ahead of the 25 July 2025 deadline to implement mandatory age assurance requirements (see "Children" below), Ofcom wrote to hundreds of pornography services, setting out in detail what services need to do to comply with their duties under the Act and reminding them of the consequences of non-compliance.
On 16 January 2025, Ofcom published its Statement on Age Assurance and Children's Access, comprising final form versions of its Part 3 guidance on highly effective age assurance and its guidance on children's access assessments (the guidance on highly effective age assurance and other Part 5 duties is covered above under pornographic content). This marked the start of compliance with the protection of children duties under the Act by kicking off the three-month period within which all in-scope services had to complete their Child Access Assessments (i.e. by 16 April 2025).
On 24 April 2025, Ofcom published its protection of children statement under the Act, including the draft Protection of Children Codes of Practice for search services and for user-to-user services, and the final form Children's Risk Assessment Guidance and Children's Risk Profiles.
On 4 July 2025, Ofcom published the final versions of its Protection of Children Code for user-to-user services and Protection of Children Code for search services.
Starting from 24 April 2025, all in-scope services had until 24 July 2025 to complete and record a Children's Risk Assessment to assess the risk of harm to children on their service from content harmful to children. This assessment is required in addition to the illegal content risk assessment (the deadline for which was 16 March 2025).
From 25 July 2025, service providers need to have implemented the measures set out in the Protection of Children Codes of Practice (or be using similar effective measures to protect children and mitigate the risks to children identified in their risk assessment). Ofcom has said that it will give a further six-month grace period, ending 24 February 2026, for services to implement these measures. However, this grace period will not stop Ofcom from taking action in the meantime against services that it considers are "deliberately" not engaging with child protection measures or are causing "egregious harm".
The Act introduces a system categorising some regulated online services as category 1, 2A or 2B services, based on their key characteristics and whether they meet certain numerical thresholds.
In March 2024, Ofcom published a call for evidence to inform its further consultation on draft codes of practice and guidance on the additional duties that will apply to "categorised services" under the Act. This call for evidence closed on 20 May 2024 and a statement from the regulator is pending.
The regulations defining the thresholds above which in-scope services become "categorised services" and subject to certain additional obligations under the Act came into force on 27 February 2025. Ofcom will, once it has assessed services against these final thresholds, publish a register of categorised services, as well as a list of emerging category 1 services.
⭐ In July 2025, Ofcom published a statement on transparency reporting and its final transparency reporting guidance for categorised services. The guidance explains when and how Ofcom will exercise its transparency powers, setting out the factors it will consider when deciding what information categorised services will be required to provide (as set out in transparency notices issued by Ofcom) in their transparency reports.
At the end of May 2024, the government published guidance to Ofcom on determining the fees that will be payable by regulated services under the Act. The fees will only be paid by providers whose "qualifying worldwide revenue" (QWR) meets or exceeds a certain revenue threshold and who are not otherwise exempt, and will fund Ofcom's regulation of the new online safety regime. The government will retain oversight of the regulatory costs of the regime by setting Ofcom's total budget cap.
In June 2025, Ofcom published its policy statement on implementation of the online safety fees and penalties regime under the Act, following a consultation (see this Regulatory Outlook for more on the consultation). Ofcom has decided on: the definition of QWR to be used to assess the threshold at or above which providers will be required to pay fees, and the maximum penalty caps; QWR for penalty caps in the case of joint and several liability; QWR threshold advice to the secretary of state; exemptions from having to notify Ofcom of QWR and from paying fees; its approach to its Statement of Charging Principles, which will establish the principles that Ofcom will apply when setting fees and must be in place before Ofcom can start charging; and the process for notifying Ofcom of liability to pay fees.
⭐ In July 2025, Ofcom published a second consultation to implement the fees and penalties regime, this time on guidance for providers of online services regulated under the Act on calculating their QWR in accordance with the Online Safety Act 2023 (Qualifying Worldwide Revenue) Regulations 2025. The consultation is open until 10 September 2025.
The fee regime is expected to be in place by the 2026/27 financial year. Until then, the government is funding Ofcom's initial costs. Additional fees will then be charged over an initial set period of consecutive years to recoup the set-up costs.
Under the Act, Ofcom has powers to require and obtain information from regulated services that it needs to fulfil its online safety duties. It will do this by issuing "information notices". On 26 February 2025, Ofcom published final guidance on its information gathering powers under the Act, how it plans to use them and the typical procedures it will follow. Failing to comply with an information notice may result in Ofcom taking enforcement action under the Act.
In January 2025, Ofcom opened an enforcement programme into age assurance measures to protect children from encountering pornographic content, initially focusing on regulated providers' compliance with the Part 5 duties. In July 2025, Ofcom extended this programme to cover all platforms that allow users to share pornographic material, whether they are dedicated adult sites or other services that include pornography (Part 3 services).
On 3 March 2025, Ofcom launched an enforcement programme to monitor whether services are meeting their illegal content risk assessment and record keeping duties under the Act. As part of the programme, the regulator asked various providers to submit their illegal content risk assessments to assist it in identifying possible compliance concerns and to monitor how the guidance is being applied. Ofcom expects the programme to run for at least 12 months, during which it may initiate formal investigations if it suspects that a provider is failing to meet its obligations under the Act.
In March 2025, Ofcom also launched an enforcement programme targeting child sexual abuse material (CSAM) online. Given the prevalence of such material, which makes human content moderation insufficient on its own, Ofcom's illegal content codes of practice recommend that certain services use automated moderation technology (including perceptual hash-matching where the service is a file-sharing or file-storage service at high risk of hosting CSAM) to identify such content and swiftly remove it. For more, see this Regulatory Outlook.
⭐ In July 2025, Ofcom launched a fourth enforcement programme to monitor whether services are meeting their children's risk assessment duties under the Act.
⭐ In July 2025, Ofcom also launched a new enforcement programme to protect children from harmful content through the use of age assurance. It builds on work undertaken by Ofcom's "small but risky taskforce", which identified several services whose main purpose is to host or disseminate content harmful to children. Ofcom will engage with these services and assess the age assurance processes they are implementing.
"Small but risky" online services: In September 2024, the Secretary of State for Science, Innovation and Technology, Peter Kyle, wrote to Ofcom asking how it plans to monitor and address the issue of "small but risky" online services. In its response, Ofcom said that tackling these services is a "vital part" of its regulatory mission. The regulator confirmed that it has already developed plans to take early action against these services.
International cooperation: In October 2024, the UK and US governments signed a joint statement on online safety, calling for platforms to go "further and faster" to protect children online by taking "immediate action" and continually using the resources available to them to develop innovative solutions, while ensuring there are appropriate safeguards for user privacy and freedom of expression. See this Regulatory Outlook for more.
AI: In November 2024, Ofcom published an open letter to UK online service providers on how the Act will apply to generative AI and chatbots. Ofcom reminded providers that various AI tools, and content relating to user-to-user services, search services and pornographic material, will be in scope of the Act.
Statement of Strategic Priorities for online safety: In May 2025, the government's Statement of Strategic Priorities for online safety was laid before Parliament. It highlights five priorities that Ofcom must have regard to when implementing the OSA and that industry is expected to adhere to. See this Regulatory Outlook for details.
Online Information Advisory Committee: Ofcom has established an Online Information Advisory Committee to advise the regulator on issues relating to disinformation and misinformation, as required by the Act.
"Super-complaints": In June 2025, the government published a response to its consultation on "super-complaints" under the Act. The super-complaints regime aims to ensure that eligible entities can raise complaints with Ofcom about "systemic issues" relating to existing or emerging online harms (see this Regulatory Outlook for background). The government has also laid the draft Online Safety Super-Complaints (Eligibility and Procedural Matters) Regulations 2025 before Parliament, defining the eligibility criteria for entities to submit super-complaints and the procedural steps establishing the duties of complainants and of Ofcom when a super-complaint is submitted. The super-complaints regime is stated to come into force on 31 December 2025. Ofcom will also consult on draft guidance for the new regime.
⭐ Researchers' access: In July 2025, Ofcom published a report, following a call for evidence, examining how researchers could gain better access to information from regulated online services to support their work on online safety. The report, which the regulator is required by the Act to publish and submit to the secretary of state, also examines the current constraints on sharing information for research purposes and how access could be improved, setting out three policy options for the government to consider.
UK Online Safety Act: Ofcom publishes guidance on age assurance and children's access assessments
Online Safety Act 2023: time to prepare as UK's Ofcom confirms start of illegal content duties
UK's Online Safety Act is a seismic regulatory shift for service providers
UK Online Safety Act: Ofcom launches its first consultation on illegal harms
The UK Online Safety Act: Top 10 takeaways for online service providers
This Act recognises beneficial EU regulations, addresses new product risks like AI, and clarifies supply chain responsibilities for online marketplaces.
Next important deadline(s)
The Product Regulation and Metrology Act received Royal Assent on 21 July 2025.
The Product Regulation and Metrology Act, available here.
TBC
Anyone placing products on the market in the UK.
In the King's Speech 2024, the government set out plans to introduce a Product Safety and Metrology Bill which would "preserve the UK's status as a global leader in product regulation, supporting businesses and protecting consumers". The King's Speech 2024 background briefing notes specifically recognised that the EU is reforming product safety regulations in line with technological developments through, for example, the new EU General Product Safety Regulation and the EU Product Liability Directive.
Having had its amendments considered and approved by the Lords on 10 July 2025, the Product Regulation and Metrology Bill received Royal Assent on 21 July and subsequently became the Product Regulation and Metrology Act 2025.
The Act is intended to modernise the UK's management of product and metrology regulations, by allowing ministers to introduce secondary legislation (which will be subject to consultation and to the affirmative procedure, i.e. increased parliamentary scrutiny, in key areas), including in relation to the creation of criminal offences, regulating online marketplaces, and imposing duties on new supply chain actors.
A key aspect of the Act is that these regulations can be closely aligned with relevant EU laws if appropriate.
Other aspects of the Act include enhancing compliance and enforcement capabilities, including improved data sharing between regulators and market surveillance authorities; clarifying the responsibilities of those in the supply chain, including online marketplaces; and updating the legal metrology framework. The Act also provides for cost recovery mechanisms and emergency modifications.
Products excluded from the Act include: food, feedstuffs, plants, fruit and fungi, plant protection products, animal by-products, products of animal origin, aircraft, military equipment, medicines and medical devices.
On 4 June 2025, the bill completed its passage through the House of Commons and returned to the House of Lords for consideration of the Commons amendments.
The Act and subsequent reformed regulations will likely have a significant impact on product regulation in the UK and businesses need to keep abreast of developments.
This regulation creates a risk-based tiered regulatory regime for AI tools.
Next important deadline(s)
2 August 2026
2 August 2027
The AIA entered into force on 1 August 2024.
The provisions of the AIA will be applicable progressively over the next few years:
Enacted. Provisions become effective progressively as set out above.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available here.
2 August 2026 (effective date for high-risk AI models)
2 August 2027 (effective date for Annex 3 AI)
The AIA will apply to all those who provide, deploy, import or distribute AI systems in the EU, across all sectors. It will apply to non-EU providers who place AI systems on the market in the EU or put them into service in the EU, as well as to providers and deployers of AI systems located outside the EU where the output of the AI system is used in the EU.
As well as being in force in all EU member states, the AIA will in due course come into force in Norway, Iceland and Liechtenstein (as countries in the European Economic Area).
The AIA defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The AIA takes a risk-based approach, with the regulatory framework depending on both the level of risk created by an AI application, and the nature of the regulated entity's role. AI deemed high risk is regulated more strictly, as are those who create and distribute AI systems, who are regulated more strictly than organisations which deploy systems created by third parties. In addition, the AIA creates a regulatory framework for general-purpose AI.
"Risk" is assessed in the AIA by reference to both harm to health and safety, and harm to fundamental human rights and is a combination of the probability of occurrence of harm and the severity of that harm.
The tiers of AIA regulation are prohibited AI, high risk AI, transparency requirements for certain forms of AI, and unregulated AI. Two further, distinct tiers apply to general-purpose AI.
Some uses of AI are considered to pose such a significant risk to health and safety or fundamental rights that they should be banned. Banned applications include:
Some AI applications are considered to pose a potentially high risk to health and safety or to fundamental rights, but not to the point that they should be prohibited. These high risk systems fall into two broad categories.
Firstly, AI will be classified as "high risk" for AIA purposes where it is used in safety systems or in products that are already subject to EU product safety legislation, including transport, other vehicles or machinery, and products such as toys, personal protective equipment and medical devices (the Annex I high risk categories).
Secondly, there is a list of specified "high risk" AI systems (in Annex III to the AIA). The detail of this list includes:
However, high risk regulation will not apply to AI falling in the Annex III categories where "it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making". This will be an issue for self-assessment. Where there is no such actual risk, the system will not have to comply with the AIA high risk regulatory regime.
Onerous compliance obligations apply to high risk AI systems, centred on a "continuous, iterative" risk management system for the full lifecycle of the AI system. Compliance will require meeting the AIA requirements for technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity.
In addition, the AIA will impose extensive obligations concerning the governance and curation of data used to train, validate and test high risk AI. The focus is on ensuring that such data sets are relevant, sufficiently representative and, to the best extent possible, reasonably free of errors and complete in view of the intended purposes. The data should have "appropriate statistical properties", including as regards the people in relation to whom the AI system will be used. Data sets must reflect, to the extent appropriate for their intended use, the "specific geographical, contextual, behavioural or functional setting" within which the AI system will be used.
The burden of high risk AI compliance will apply in different ways to businesses at different points in the AI supply chain – providers (developers and businesses who have had AI developed for them to put onto the market), product manufacturers whose products incorporate AI, importers, distributors and deployers.
Some lower risk AI, outside the high risk regime, will nevertheless be subject to obligations, mainly relating to transparency. The prime concern is that users must be aware that they are interacting with an AI system, if it was not obvious from the circumstances and context. These provisions will apply to:
For AI systems that do not fall within any of the above categories, the AIA provides for codes of conduct and encourages voluntary adherence to some of the regulatory obligations which apply to the high risk categories. Codes of practice may also cover wider issues such as sustainability, accessibility, stakeholder participation in development and diversity in system-design teams.
Since the AIA was first proposed in 2021, foundation AI systems have emerged as a significant slice of the AI ecosystem. They do not fit readily within the AIA tiers because they perform a specific function (translation, text generation, image generation etc) that can be put to many different applications with differing levels of risk.
The AIA creates two layers of regulation for "general-purpose AI" models: one universal set of obligations for general-purpose AI models and a set of additional obligations for general-purpose AI models with systemic risk.
General-purpose AI models include foundation models and some generative AI models. They are defined as models which, "including where such an AI model is trained with a large amount of data using self-supervision at scale", display "significant generality", are "capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market" and "can be integrated into a variety of downstream systems or applications". AI models used for research, development or prototyping before they are placed on the market are excepted.
A general-purpose AI model will be classified as having "systemic risk" if:
The largest, state of the art AI models are expected to fall within this category.
The universal obligations for all general-purpose AI apply in addition to the core risk-based tiers of AIA regulation. Providers of all general-purpose AI models will be required to meet various transparency obligations (including information about the training data and in respect of copyright), which are intended to support downstream providers using the model in their AI system to comply with their own AIA obligations.
General-purpose AI models with systemic risk are subject to a second tier of obligations. These will include model evaluation and adversarial testing, assessment and mitigation of systemic risk, monitoring and reporting on serious incidents, adequate cybersecurity, and monitoring and reporting on energy consumption.
The AIA requires member states to appoint national level AI regulators which will be given extensive powers to investigate possible non-compliance, with powers to impose fines.
In addition, the "AI Office" has been created within the Commission to centralise monitoring of general-purpose AI models, to lead on interaction with the scientific community and to lead in international discussions on AI. It will also support the Commission in developing guidance, standards, codes of practice and codes of conduct, and in relation to investigations, testing and enforcement of AI systems. The AI Office will play a key role in governance across the national AI regulatory bodies appointed in member states.
Coordination and coherence of the AIA regime is the responsibility of the AI Board, which is separate to the AI Office and comprises representatives from member states.
Regulatory fines are in three tiers:
The fining regime for SMEs is similar to the above, except that the maximum fine is whichever percentage or amount is the lower.
As noted above, deadlines for compliance will be staggered.
In February 2025, the Commission published its guidelines on the definition of "AI system", which simply describe the definition, apply the recital and give examples. The key is the ability of a system to act autonomously and infer how to generate output based on inputs. See this Regulatory Outlook.
On 2 February 2025, the AIA's restrictions on prohibited AI practices, as well as general AI literacy obligations, came into full effect. Following this, the Commission published its long-awaited guidelines on prohibited categories of AI. See this Regulatory Outlook for more details.
⭐ General-purpose AI code of practice
On 10 July 2025, the final version of the general-purpose AI (GPAI) code of practice (Code) was published, shortly before 2 August 2025, when the relevant parts of the AIA come into effect. This followed an iterative drafting process, with three earlier drafts published before the final version.
Following this, the Commission and the European Artificial Intelligence Board completed their adequacy assessment and formally confirmed that adherence to the Code is an adequate voluntary tool for providers of GPAI models to demonstrate compliance with articles 53 and 55 of the AIA.
The Code has three chapters: (i) transparency; (ii) copyright; and (iii) safety and security (the last applying only to providers of GPAI models with "systemic risk"). The Code is voluntary, but adhering to it offers a more straightforward route to demonstrating compliance. Following the Code may also be beneficial because the authorities may be more likely to focus on monitoring a developer's adherence to the Code rather than on wider compliance issues.
⭐ Commission guidelines on GPAI
On 18 July 2025, the Commission published its guidance on the scope of application of the GPAI rules. The guidelines introduce technical criteria to help developers understand whether their AI models qualify as general-purpose and hence are subject to the AIA's additional obligations on this type of model. The guidelines state that a key "indicative criterion" is whether the compute needed to train the model exceeded 10²³ floating-point operations (FLOP) and whether the model can generate language.
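For illustration, the compute criterion can be checked with simple arithmetic. A widely used heuristic, not one prescribed by the guidelines, estimates training compute as roughly 6 × parameters × training tokens; the model size and token count below are hypothetical.

```python
# Rough check of a hypothetical model against the Commission's indicative
# 10^23 FLOP criterion for general-purpose AI models. The 6 * N * D
# approximation is a common heuristic, not part of the guidelines.

GPAI_THRESHOLD_FLOP = 1e23  # indicative criterion in the July 2025 guidelines

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOP per parameter per token."""
    return 6 * params * tokens

# Hypothetical model: 7 billion parameters trained on 2 trillion tokens.
flop = estimated_training_flop(7e9, 2e12)
print(f"{flop:.2e}")               # 8.40e+22
print(flop > GPAI_THRESHOLD_FLOP)  # False: below the indicative threshold
```

Note that the threshold is only indicative: a model near or below it could still be in scope on the generality and language-generation criteria, so this arithmetic is a first screen, not a legal conclusion.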
In June 2025, the Commission launched a consultation on the classification of high risk AI systems under the AIA. The consultation closed on 18 July 2025. Responses will feed into the guidelines on high risk AI that must be issued by 2 February 2026. See this Regulatory Outlook for details.
Artificial Intelligence | UK Regulatory Outlook July 2025
Artificial Intelligence | UK Regulatory Outlook June 2025
Artificial intelligence | UK Regulatory Outlook April 2025
When will businesses have to comply with the EU's AI Act?
How will the new UK government resolve the conflict over AI development and IP rights?
This Act creates a new framework for sharing data and new rights of access to IoT data.
Next important deadline(s)
⭐ 12 September 2025
12 September 2026
12 September 2027
The legislation entered into force on 11 January 2024.
It will become fully applicable on 12 September 2025, except for:
Enacted. Not yet in effect.
Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act), available here.
⭐12 September 2025 (effective date for the majority of provisions)
12 September 2026 (effective date for design of connected products obligations)
12 September 2027 (effective date for obligations on contractual terms in private sector data contracts)
The Data Act is wide-ranging legislation that will impact a correspondingly wide range of businesses and individuals, including:
The Data Act is intended to boost the availability of data to support a vibrant data economy in the EU. It will have a significant impact on some businesses. It will operate to extend data regulation far beyond the current focus on personal data and data privacy. The intention is to open up access to data, particularly Internet of Things (IoT) and machine-generated data, which is often controlled by the entity that gathered it, and is inaccessible to the entity whose activities generated it. The provisions of the Data Act will be without prejudice to the General Data Protection Regulation regime, privacy rights and rights to confidentiality of communications, all of which must be complied with in acting on the requirements in the Data Act.
You can familiarise yourself with the European Commission's "Data Act explained" document here and Frequently Asked Questions (FAQs) on the Data Act here.
Currently, contractual terms and conditions typically determine whether data collected by IoT systems, industrial systems and other connected devices can be accessed by the business or person whose activities generated the data – the user. It is not unusual for the collected data to be held by the device or service provider and to be inaccessible to the user.
The Data Act will create an obligation to design connected products and related services so that data that they collect is available to the user. Users will be entitled to access the data free of charge and potentially in real time. Data must either be directly accessible from the device or readily available without delay.
Users will, moreover, be able to pass the data to a third party (which may be the data holder’s competitor) or instruct the data holder to do so; data holders can only pass user data to a third party on the instructions of the relevant user. The Data Act will impose restrictions on how the third party can use the received data.
These provisions include measures designed to prevent valuable trade secrets from being revealed, and to ensure that these rights are not used to find out information about the offerings of a competitor. They also include measures to prevent unfair contractual terms and conditions being imposed on users by data holders. Data holders are able to impose a non-discriminatory, reasonable fee for providing access (which can include a margin) in business-to-business relationships. Where data is being provided to an SME, the fee must not exceed the costs incurred in making the data available.
In relation to contracts between businesses that contain data-related obligations more generally, the Data Act outlaws terms and conditions that deviate grossly from good commercial practice, contrary to good faith and fair dealing, if imposed unilaterally. Within that prohibition, it sets out various specifically blacklisted contractual terms, which will be unenforceable.
The Data Act makes provision for public sector bodies to gain access to private sector data where they can demonstrate an exceptional need to use the data to carry out statutory duties in the public interest. This might be to deal with a public emergency, or to fulfil a task required by law where there is no other way to obtain the data in question. These provisions do not apply to criminal or administrative investigations or enforcement.
The Data Act seeks to remove perceived obstacles to switching between cloud services providers, from a cloud service to on-premise infrastructure, or from a single cloud provider to multi-provider services. This part of the Data Act includes provisions on the contractual terms dealing with switching or data portability, which must be possible without undue delay, with reasonable assistance, while maintaining business continuity and ensuring security, particularly data security. It provides for maximum time limits and detailed information requirements. Service providers are required, more generally, to make information readily available about switching and porting methods, as well as formats, including standards and open interoperability specifications. The Data Act imposes a general duty of good faith on all parties involved to make switching effective and timely and to ensure continuity of service.
A further limb of the Data Act requires cloud service providers to prevent international and non-EU governmental access to, or transfer of, non-personal data held in the EU where this would breach EU or member state law.
To ensure that data access rights are not undermined by technical compatibility problems, the Data Act creates obligations around interoperability for data and data-sharing mechanisms. It requires transparency around key aspects and features of datasets, data structures, and access mechanisms (such as application programming interfaces). Provision is made for standards to be developed in relation to these requirements.
The Data Act also imposes interoperability requirements between cloud services, again providing for open interoperability specifications and harmonised standards to support switching and portability, as well as parallel processing.
Finally, the Data Act covers smart contracts used for executing a data-sharing arrangement (but not smart contracts with other functions). It lays down requirements for robustness and access control, safe termination and interruption (a "kill switch"), data archiving and continuity, as well as measures to ensure consistency with the terms of the contract that the smart contract is executing. Provision is made for standards to be developed to meet these requirements. Smart contracts must be self-assessed for conformity and a declaration of EU conformity made.
The Data Act will be enforced at member state level by an appointed regulator. This could be an extension of jurisdiction for an existing regulator, or a new one could be created. Powers for member state regulators can include sanctions.
This directive updates NIS 1 and requires certain organisations to strengthen their cyber security defences.
Next important deadline(s)
2025
The directive entered into force on 16 January 2023. All member states were obliged to transpose its measures into national law by 17 October 2024.
In effect (partially). Member state national enacting legislation required.
National implementing legislation can be tracked from this page.
Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, available here.
2025 (national implementing legislation may be brought in – see implementation process below)
The NIS 2 directive (on security of network and information systems) updates earlier EU legislation in this area (NIS 1). It significantly extends the list of sectors within its scope and provides more detail on the entities that are subject to its cybersecurity requirements.
The scope of the directive is set out in detail, intended to remove inconsistencies in scope between the implementing legislation enacted at member state level for the NIS 1 directive.
The directive applies to all businesses operating within the broad sectors listed in Annex I or II which qualify as a "medium" enterprise or larger (i.e. employing 50 or more people, or with an annual turnover or balance sheet total exceeding €10 million).
However, the carve-out for smaller businesses does not apply where they are in the sectors listed in Annex I or II, which are, more specifically:
The directive also applies to entities of all sizes listed under the Critical Entities Resilience directive and those that provide domain name registration services.
The directive does not apply to public administration entities that carry out activities in certain areas including national security, public security, defence, or law enforcement. Member states may also exempt specific entities in the above areas.
Annex I covers energy, transport, banking and financial market infrastructure, healthcare, water and sewage, digital infrastructure and certain business-to-business information and communications technology services.
Annex II includes postal and courier services, waste management, some types of manufacturing and digital platform providers for online market places, search engines and social networks.
The scope of this legislation is detailed and the above is a summary only.
The directive introduces a set of mandatory measures that each entity must address to prevent cybersecurity incidents. This includes policies on risk analysis and information system security, cyber hygiene practices, cybersecurity training, and multi-factor and continuous authentication solutions.
To ensure that organisations are aware of their NIS2 obligations and acquire knowledge on cybersecurity risk prevention, management bodies are required to attend training sessions and to pass this information and knowledge on to their employees.
Member states must implement effective, proportionate and dissuasive sanctions. Fines for non-compliance can reach €10 million or 2% of the organisation's worldwide annual turnover – whichever is greater – for essential entities, and €7 million or 1.4% of worldwide annual turnover – whichever is greater – for important entities.
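The fine ceilings described above amount to a simple "greater of" calculation, sketched below. This is illustrative only: actual penalties are set by national regulators within these maxima, and the function name is ours.

```python
# Illustrative maximum NIS 2 administrative fines, as summarised above.
# Real outcomes depend on national implementing law and regulator discretion.
def max_fine_eur(worldwide_annual_turnover_eur: float, essential: bool) -> float:
    """Return the ceiling for a fine: the greater of the fixed cap and the turnover-based cap."""
    if essential:
        return max(10_000_000, 0.02 * worldwide_annual_turnover_eur)
    return max(7_000_000, 0.014 * worldwide_annual_turnover_eur)

# An essential entity with €2bn worldwide turnover: 2% of turnover (€40m) exceeds €10m.
print(max_fine_eur(2_000_000_000, essential=True))
```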
The new directive clarifies the scope of reporting obligations with more specific provisions on the reporting process, content and timelines. In particular, entities affected by an incident that has a significant impact on the provision of their services must submit an early warning to their computer security incident response team (CSIRT) or, where applicable, to the competent authority within 24 hours of becoming aware of the incident, followed by a fuller incident notification within 72 hours. A final report must be provided within a month of the incident notification.
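The reporting clock can be sketched as follows. This is a simplification: "one month" is approximated as 30 days, and national implementing laws may refine the deadlines and their triggers.

```python
from datetime import datetime, timedelta

# Sketch of the NIS 2 incident-reporting clock described above (illustrative).
def reporting_deadlines(aware_at: datetime) -> dict:
    """Deadlines counted from when the entity becomes aware of the incident."""
    return {
        "initial_notification": aware_at + timedelta(hours=24),
        "final_report": aware_at + timedelta(days=30),  # "one month", approximated
    }

deadlines = reporting_deadlines(datetime(2025, 3, 1, 9, 0))
print(deadlines["initial_notification"])  # 2025-03-02 09:00:00
```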
One of the most novel aspects of this directive is the requirement for member states to promote the use of innovative technologies, including artificial intelligence, to improve the detection and prevention of cyber-attacks, which would allow for a more efficient and effective allocation of resources to combat these threats.
Member states were obliged to legislate for appropriate measures to implement the directive by 17 October 2024. National implementing legislation can be tracked from this page. ⭐ Belgium, Croatia, Cyprus, Czech Republic, Denmark, Finland, Greece, Hungary, Italy, Latvia, Lithuania, Malta, Romania, Slovakia and Ukraine have all completed transposition of the directive into their national laws.
Organisations in those countries must therefore act swiftly to ensure compliance before the respective national laws come into force. In particular, they will have to consider whether they fall under the scope of the directive and therefore need to register with the national cyber security authority before the applicable national deadline.
On 4 December 2024, the Commission announced that it had opened infringement procedures against 23 member states (Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Cyprus, Latvia, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland and Sweden) for failing to meet the 17 October deadline. These member states had two months to respond to the letters of formal notice by completing the transposition process and notifying the Commission of the measures introduced. Since then, Malta, Romania, Slovakia and Greece have fully implemented the new law.
On 7 May 2025, the Commission sent a reasoned opinion (a formal request to comply with EU law) to the remaining 19 member states (Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Spain, France, Cyprus, Latvia, Luxembourg, Hungary, Netherlands, Austria, Poland, Portugal, Slovenia, Finland and Sweden) for failing to transpose fully and notify the Commission of transposition of the directive. The member states had two months to respond and to complete transposition. Failure to comply by the deadline may lead the Commission to refer the cases to the Court of Justice of the European Union.
The first implementing regulation under the directive, which sets out the cyber security risk management measures and the criteria for when an incident will be considered significant under the directive, came into effect on 27 November 2024. Implementing regulations are adopted to ensure uniform implementation of the relevant EU legislation and take precedence over national legislation in the event of contradiction.
Although there is significant fragmentation in the implementation of the directive across the EU, it is important to be ready to comply in the remaining member states as they could adopt implementing acts at any point during 2025, with a short period between adoption and when the new changes become enforceable. Organisations should therefore closely monitor implementation of national legislation and prepare in advance of the relevant national laws entering into force by conducting risk assessments, developing incident response plans and enhancing their overall level of cyber security awareness.
UK Knowledge Collection: Cyber security, low-carbon infrastructure, and pensions reform
UK Cyber Security and Resilience Bill will be both similar and different to the EU's NIS2
Cyber Security | UK Regulatory Outlook November 2024
Implementation deadline for NIS2 and new EU cybersecurity compliance regime draws nearer
What EU businesses need to know about NIS2 and cybersecurity compliance
This Act gives the CMA new powers to regulate firms in digital markets with "strategic market status" and updates aspects of competition law.
Next important deadline(s): January to September 2025
The Act received Royal Assent on 24 May 2024 and the competition and digital markets aspects came into effect on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024.
Enacted and partially in effect.
April to June 2025 (stage two of CMA's initial Strategic Market Status (SMS) investigations – further evidence gathering and analysis followed by a consultation on the proposed decision).
July to September 2025 (stage three of CMA's initial Strategic Market Status (SMS) investigations – analysis of consultation responses and further evidence gathering).
Digital Markets, Competition and Consumers Act, text available here.
The new regime is intended to apply to businesses that hold substantial and entrenched power in digital markets. Firms that meet certain cumulative thresholds will be in scope:
The legislation will only apply to firms meeting those thresholds that have been designated by the Digital Markets Unit (DMU), a specialist unit within the Competition and Markets Authority (CMA), as having strategic market status (SMS) following an SMS investigation. The legislation will potentially impact any businesses operating in digital markets by changing the regulatory overlay of rights and obligations in these markets.
The Act introduces new statutory powers for the DMU to regulate powerful digital firms. The DMU remains an administrative unit within the CMA, rather than being designated as a separate regulator.
The Act concerns digital activities, defined as services provided over the internet or digital content, whether paid-for or free of charge. CMA jurisdiction will require a link to the UK.
A business will not be designated with SMS unless it meets the financial threshold of £1 billion in UK group turnover or £25 billion in global group turnover in a relevant 12-month period.
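The financial threshold can be expressed as a simple screen. This is illustrative only: designation also requires the substantive SMS assessment, and the function and constant names are ours, not terms from the Act.

```python
# Illustrative screen for the DMCCA financial threshold summarised above.
UK_TURNOVER_THRESHOLD_GBP = 1_000_000_000        # £1bn UK group turnover
GLOBAL_TURNOVER_THRESHOLD_GBP = 25_000_000_000   # £25bn global group turnover

def meets_financial_threshold(uk_group_turnover_gbp: float,
                              global_group_turnover_gbp: float) -> bool:
    """True if either the UK or the global group turnover limb is exceeded."""
    return (uk_group_turnover_gbp > UK_TURNOVER_THRESHOLD_GBP
            or global_group_turnover_gbp > GLOBAL_TURNOVER_THRESHOLD_GBP)

# £1.5bn UK group turnover alone clears the threshold:
print(meets_financial_threshold(1_500_000_000, 0))
```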
The DMU may designate a business as having SMS following an SMS investigation. The investigation will consider whether the business has substantial and entrenched market power, with a forward-looking analysis at least five years into the future. It will also assess whether the business has a position of strategic significance in relation to the digital activity in question, whether by its size or scale, the reliance by others on its services, or the influence that it can exert on others as a consequence of its position. Designations will last for five years and the designation process will include public consultation and liaison with other regulators.
The DMU may create a bespoke code of conduct for each SMS business with a view to ensuring fair dealing, open choices (which may require enhanced data portability and interoperability of services), and trust and transparency.
The CMA is given extensive powers of investigation and enforcement, including fines of up to 10% of global turnover and wide-ranging remedies. In addition to SMS investigations and enforcement, the CMA may also make a "pro-competition intervention" to deal with an adverse effect on competition in a digital market. It will have the same powers to take remedial action as it currently enjoys following a market investigation, including the power to order divestments and break-ups.
The regulatory regime:
Most of the Act's provisions will be brought into force by secondary legislation.
In December 2024, the CMA published finalised guidance on the digital markets competition regime, following consultation.
The digital markets and competition elements of the Act came into force on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024
The CMA has set out the steps it will take when conducting initial Strategic Market Status (SMS) investigations, mirroring legislative requirements for extensive consultation with third parties and potential SMS firms before making any decision. The steps are:
Stage 1 (January to March 2025): initial evidence gathering and engagement with potential SMS firms and other stakeholders.
Stage 2 (April to June 2025): further evidence gathering and analysis, followed by a consultation on the proposed SMS designation decision and any initial conduct requirements.
Stage 3 (July to September 2025): analysis of consultation responses and further evidence gathering.
The CMA also intends to publish "roadmaps" alongside its consultations on the proposed SMS decisions in June (for search), and July (for mobile). The CMA intends to do this for all future SMS designations.
Since the DMCCA came into effect, the CMA has launched the following investigations to decide whether certain firms should be designated with "strategic market status". These CMA investigations are as follows:
The CMA is no longer planning to launch any further investigations in 2025.
This Act gives the CMA new powers to regulate firms in digital markets with "strategic market status" and updates aspects of competition law.
Next important deadline(s): January to September 2025
The Act received Royal Assent on 24 May 2024 and the competition and digital markets aspects came into effect on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024.
Enacted and partially in effect.
April to June 2025 (stage two of CMA's initial Strategic Market Status (SMS) investigations – further evidence gathering and analysis followed by a consultation on the proposed decision).
July to September 2025 (stage three of CMA's initial Strategic Market Status (SMS) investigations – analysis of consultation responses and further evidence gathering).
Digital Markets, Competition and Consumers Act, text available here.
The new regime is intended to apply to businesses that hold substantial and entrenched power in digital markets. Firms that meet certain cumulative thresholds will be in scope:
The legislation will only apply to firms meeting those thresholds that have been designated by the Digital Markets Unit (DMU), a specialist unit within the Competition and Markets Authority (CMA), as having strategic market status (SMS) following an SMS investigation. The legislation will potentially impact any businesses operating in digital markets by changing the regulatory overlay of rights and obligations in these markets.
The Act introduces new statutory powers for the DMU to regulate powerful digital firms. The DMU remains an administrative unit within the CMA, rather than being designated as a separate regulator.
The Act concerns digital activities, defined as services provided over the internet or digital content, whether paid-for or free of charge. CMA jurisdiction will require a link to the UK.
A business will not be designated with SMS unless it meets the financial threshold of £1 billion in UK group turnover or £25 billion in global group turnover in a relevant 12-month period.
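The financial threshold is an either/or test: satisfying one limb is enough. Purely as an illustrative sketch (not legal advice, and with names of our own invention), the condition can be expressed as:

```python
# Illustrative sketch only (not legal advice): the DMCCA financial threshold
# for SMS designation as described above. Function and constant names are our own.

UK_TURNOVER_THRESHOLD_GBP = 1_000_000_000       # £1 billion UK group turnover
GLOBAL_TURNOVER_THRESHOLD_GBP = 25_000_000_000  # £25 billion global group turnover

def meets_financial_threshold(uk_turnover_gbp: float, global_turnover_gbp: float) -> bool:
    # The two limbs are alternatives: either one, in a relevant 12-month
    # period, satisfies the financial condition.
    return (uk_turnover_gbp >= UK_TURNOVER_THRESHOLD_GBP
            or global_turnover_gbp >= GLOBAL_TURNOVER_THRESHOLD_GBP)
```

Note that meeting the threshold is necessary but not sufficient: designation also requires the substantial and entrenched market power and strategic significance assessments made in an SMS investigation.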
The DMU may designate a business as having SMS following an SMS investigation. The investigation will consider whether the business has substantial and entrenched market power, with a forward-looking analysis at least five years into the future. It will also assess whether the business has a position of strategic significance in relation to the digital activity in question, whether by its size or scale, the reliance by others on its services, or the influence that it can exert on others as a consequence of its position. Designations will last for five years and the designation process will include public consultation and liaison with other regulators.
The DMU may create a bespoke code of conduct for each SMS business with a view to ensuring fair dealing, open choices (which may require enhanced data portability and interoperability of services), and trust and transparency.
The CMA is given extensive powers of investigation and enforcement, including fines of up to 10% of global turnover and wide-ranging remedies. In addition to SMS investigations and enforcement, the CMA may also make a "pro-competition intervention" to deal with an adverse effect on competition in a digital market. It will have the same powers as it currently enjoys to take remedial action as following a market investigation, including the power to order divestments and break-ups.
The regulatory regime:
Most of the Act's provisions will be brought into force by secondary legislation.
In December 2024, the CMA published finalised guidance on the digital markets competition regime, following consultation.
The digital markets and competition elements of the Act came into force on 1 January 2025 via the Digital Markets, Competition and Consumers Act 2024 (Commencement No. 1 and Savings and Transitional Provisions) Regulations 2024.
The CMA has set out the steps it will take when conducting initial Strategic Market Status (SMS) investigations, mirroring legislative requirements for extensive consultation with third parties and potential SMS firms before making any decision. The steps are:
Stage 1 (January to March 2025): initial evidence gathering and engagement with potential SMS firms and other stakeholders.
Stage 2 (April to June 2025): further evidence gathering and analysis, followed by a consultation on the proposed SMS designation decision and any initial conduct requirements.
Stage 3 (July to September 2025): analysis of consultation responses and further evidence gathering.
The CMA also intends to publish "roadmaps" alongside its consultations on the proposed SMS decisions in June (for search) and July (for mobile), and to do so for all future SMS designations.
Since the DMCCA came into effect, the CMA has launched the following investigations to decide whether certain firms should be designated with "strategic market status":
The CMA is no longer planning to launch any further investigations in 2025.
This Act updates consumer law, makes changes to rules for consumer subscriptions and strengthens the CMA's enforcement powers.
Next important deadline(s)
April 2026
The Digital Markets, Competition and Consumers Act 2024 (Act) received Royal Assent on 24 May 2024. Secondary legislation and statutory guidance from the Competition and Markets Authority (CMA) are needed to bring the provisions into effect.
The Digital Markets, Competition and Consumers Act 2024 (Commencement No 1 and Savings and Transitional Provisions) Regulations 2024 brought into force on 1 January 2025 (to the extent not brought into force on Royal Assent) the digital markets and competition parts of the Act, as well as other miscellaneous provisions.
The Competition Act 1998 (Determination of Turnover for Penalties) Regulations 2024 sets out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 1998 Act.
The Enterprise Act 2002 (Mergers and Market Investigations) (Determination of Control and Turnover for Penalties) Regulations 2024 sets out how the CMA will calculate turnover for the purposes of calculating penalties in relation to enforcement of the 2002 Act, and the circumstances in which a person is considered to have control over an enterprise.
The Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024 sets out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of assessing whether an undertaking should be designated as having strategic market status (SMS) or to determine penalties for non-compliance relating to: (i) provisions on digital markets; (ii) the enforcement of consumer protection law; and (iii) the motor fuel regime under the Act.
The Digital Markets, Competition and Consumers Act 2024 (Commencement No. 2) Regulations 2025 brought into force consumer enforcement powers under Part 3 and unfair commercial practices provisions under Chapter 1 of Part 4 of the Act on 6 April 2025.
The Digital Markets, Competition and Consumers Act 2024 (CMA Consumer Enforcement Rules) Regulations 2025 brought into force the CMA's direct consumer enforcement rules, set out in the schedule to the regulations, on 6 April 2025.
Enacted. Mostly in effect
Digital Markets, Competition and Consumers Act 2024, available here.
April 2026 (reforms to subscription contracts regime expected to commence)
Broadly speaking, the Act impacts all consumer-related businesses since it not only applies to all consumer contracts but also in circumstances where consumers are targeted. The Act includes a significant overhaul of the laws relating to subscription contracts – as such, providers of subscription services are likely to be particularly impacted.
The Act repeals and restates the Consumer Protection from Unfair Trading Regulations 2008 as primary legislation. It aims to enhance consumer protections by strengthening enforcement (including by giving the CMA significant new powers and the ability to impose substantial fines) and introducing new consumer rights, e.g. to tackle "subscription traps" and the proliferation of fake online reviews.
The Act amends some definitions, including “average consumer”, “commercial practice” and “transactional decision” and expands the definition of "vulnerable consumers".
The Act repeals and restates the existing Consumer Protection from Unfair Trading Regulations 2008 (CPRs). The provisions on unfair commercial practices (UCPs) are in Chapter 1 of Part 4 of the Act.
Under the Act, some UCPs, such as misleading and aggressive practices, are prohibited if they are likely to cause the average consumer to take a different transactional decision.
The Act also amends and supplements the list of commercial practices that are always considered unfair regardless of their impact on the average consumer's transactional decisions, to reflect the fact that consumers and traders increasingly interact online (resulting in wider application).
To improve consumer transparency, the list of "blacklisted" commercial practices in Schedule 19 of the Act now also includes various activities relating to the submission, commission or publication of fake reviews, and "drip pricing".
The Secretary of State also has the power to amend the Act in various ways including:
The Act gives new rights to consumers and imposes new obligations on providers in respect of subscription contracts. In summary, the proposals include:
The Act provides that a trader must set out the total price of a product, including any mandatory fees, taxes and charges that apply, rather than drip-feeding in these amounts during the transaction process.
The Act significantly enhances the CMA’s role in enforcing consumer protection laws, giving the CMA the power to levy civil fines of up to £300,000 or 10% of annual global turnover (whichever is higher) for breach of all consumer law (not just as updated by the Act). The Act also allows the CMA to directly investigate suspected infringements and practices that may harm the collective interests of consumers in the UK. The CMA can also now issue enforcement notices without going to court first.
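The fining cap described above is the higher of a fixed sum and a turnover-based sum. As an illustrative sketch only (not legal advice; the function name is our own), the ceiling works out as:

```python
# Illustrative sketch only (not legal advice): the statutory ceiling on a CMA
# civil fine for breach of consumer law under the Act, as described above.

def max_consumer_fine_gbp(annual_global_turnover_gbp: float) -> float:
    # The cap is whichever is higher: £300,000 or 10% of annual global turnover.
    return max(300_000.0, 0.10 * annual_global_turnover_gbp)
```

So for a business with £10 billion annual global turnover, the turnover-based limb governs and the ceiling is £1 billion; for smaller businesses the £300,000 floor applies.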
Secondary legislation and statutory guidance from the CMA are needed before the Act can become fully effective.
On 25 November 2024, the government made the Digital Markets, Competition and Consumers Act 2024 and Consumer Rights Act 2015 (Turnover and Control) Regulations 2024, which (among other things) set out how the CMA, civil courts and other enforcers will estimate or calculate turnover for the purposes of determining penalties under the enforcement of consumer protection law regime.
The consumer penalties regime requires an assessment to be made as to when a person controls another person. The regulations also set out the circumstances in which a person is considered to have such control over another person.
The regulations in relation to consumer protection enforcement powers came into effect on 6 April 2025.
The Digital Markets, Competition and Consumers Act 2024 (CMA Consumer Enforcement Rules) Regulations 2025 brought into force the CMA's direct consumer enforcement rules, set out in the schedule to the regulations, on 6 April 2025.
CMA guidance:
In November 2024, the government launched a consultation on implementation of the new subscription contracts regime under the Act (which closed on 10 February 2025). The consultation sought feedback on proposed policies to inform the content of the secondary legislation that will implement the new regime and on particular issues likely to be covered in guidance. See this Insight for more information.
The Digital Markets, Competition and Consumers Act 2024 (Commencement No. 2) Regulations 2025 brought into force the following parts of the Act on 6 April 2025:
CMA guidance:
Since most of the rules on misleading advertising in the advertising codes derive from or are compatible with the CPRs, the Committee of Advertising Practice (CAP) and the Broadcast Committee of Advertising Practice (BCAP) have amended them to align with the DMCCA.
The CMA, CAP and BCAP consulted on their guidance last year.
See Osborne Clarke's Insight for more details on these consultations.
The Digital Markets Competition and Consumers Act (DMCCA) hub
UK CMA provides further guidance on the 'drip pricing' provisions of the DMCCA
How businesses can get set for the UK Digital Markets Competition and Consumers Act in 2025
DMCCA consultations launched on implementation of UK consumer protection regime
Overview video: https://youtu.be/HuHLFokwYO4
Video on dark patterns: https://youtu.be/FhNAJYT1Xxg
Video on subscription law changes: https://youtu.be/3D0yDatHHG0
This regulation introduces rules for financial services firms to manage cyber risks and harmonise IT requirements.
Next important deadline(s)
None
DORA was enacted on 16 January 2023 and came into effect on 17 January 2025.
In effect
Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector, available here.
None
DORA applies to a broad range of financial entities regulated in the European Economic Area (EEA), including banks, payment and e-money institutions, investment firms, fund managers, and cryptoasset service providers.
The rules also catch third-party service providers of information and communications technology (ICT), including providers of cloud computing services, software, data analytics, and data centres, where the European authorities designate them as "critical".
DORA has extraterritorial reach, therefore non-EU financial entities and ICT providers offering services to EU financial institutions will also need to comply with the regulation.
Firms within the scope of DORA must be able to withstand, respond to and recover from ICT incidents. Key requirements for firms include:
Under DORA, the European Supervisory Authorities (the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA) and the European Securities and Markets Authority (ESMA), together the ESAs) were tasked with developing 13 Regulatory Technical Standards (RTS) and Implementing Technical Standards (ITS) setting out details and guidance on key provisions and requirements within DORA. Financial entities had to be fully compliant with these standards and implement them in their ICT systems by 17 January 2025.
The first batch of technical standards published by the ESAs were adopted by the Commission and brought into effect via implementing and delegated acts in 2024 (see below).
The second batch of draft technical standards were published by the ESAs on 17 July 2024.
The ESAs also published joint Guidelines on the cooperation of oversight activities between the ESAs and competent authorities on the same date. These applied from 17 January 2025.
To date, the Commission has brought the technical standards set out below into effect via implementing and delegated acts:
On 18 March 2025, the ESAs published joint guidelines on the estimation of aggregated annual costs and losses from major ICT-related incidents. The joint guidelines came into effect on 19 May 2025.
⭐ Following the coming into force of the delegated regulation on RTS on subcontracting, the legal framework of DORA is now complete. Organisations should focus on implementing their regulatory obligations to ensure compliance.
The implementing and delegated acts can be tracked from this page.
This Act establishes a regulatory framework for media services in the EU, introducing measures to protect journalists, and strengthen media pluralism and editorial independence.
Next important deadline(s)
8 August 2025
The European Media Freedom Act (Act) became law on 7 May 2024.
It generally applies from 8 August 2025 with some exceptions. For example, provisions in relation to the right of recipients of media services to plurality of editorially independent media content applied from 8 November 2024.
Other provisions began to apply from 8 February 2025, including those under which national regulatory authorities assumed their powers, the provisions relating to the establishment of the new European Board for Media Services, and amendments to the Audiovisual Media Services Directive (AVMSD). Provisions on the editorial freedom and independence of media service providers also began to apply from that date.
On the horizon.
Regulation (EU) 2024/1083 of the European Parliament and of the Council of 11 April 2024 establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act), available here.
8 August 2025 (general effective date)
The Act builds on the revised AVMSD and broadens its scope by bringing radio and newspaper publishers into its remit.
The Act applies to all "media service providers" (MSPs) (an entity providing a media service and who has editorial responsibility for the content on that service) that target audiences in the EU, whether the provider is established in the EU or not. It therefore applies to:
The Act provides a legal framework setting out both rights and duties for MSPs and recipients of media services in the EU. Key points include:
In January 2025, a working group, comprising MEPs from the European Parliament's Committee on Culture and Education, was established to oversee the implementation and enforcement of the Act.
In April 2025, the European Board for Media Services held its second plenary session, marking the end of the transition from its predecessor, the European Regulators Group for Audiovisual Media Services. The Media Board is now fully operational as an independent advisory body under the Act and says that it is ready to support the consistent and effective implementation of EU media law.
In June 2025, the Commission published a consultation on guidelines it will issue under Article 18 of the Act, designed to provide safeguards for MSPs in relation to the moderation of their content by very large online platforms (VLOPs). See more on these provisions above. The guidelines will help protect MSPs from unwarranted removal of their content from VLOPs and will assist VLOPs in implementing the specific safeguards. The consultation closed on 23 July 2025.
Introduces a total ban on online adverts for less healthy food and drink and a ban on such ads being shown on TV before 21:00
Next important deadline(s)
1 October 2025
5 January 2026
The Advertising (Less Healthy Food Definitions and Exemptions) Regulations 2024 give effect to advertising restrictions contained in the Health and Care Act 2022 (which amended the Communications Act 2003) and originally provided that the restrictions would come into force on 1 October 2025.
This has been amended by the Communications Act 2003 (Restrictions on the Advertising of Less Healthy Food) (Effective Date) (Amendment) Regulations 2025, which provide that the restrictions will now come into effect on 5 January 2026.
Enacted
The Advertising (Less Healthy Food Definitions and Exemptions) Regulations 2024, available here.
1 October 2025 (the date from which industry has agreed to comply)
5 January 2026 (the restrictions officially come into effect)
The restrictions will apply to businesses involved in the manufacture or sale of "less healthy" food or drink with 250 or more employees, including franchises and symbol groups, that pay to advertise less healthy food or drink products.
In a bid to raise "the healthiest generation of children ever", the regulations will bring in restrictions on the advertising of less healthy food or drink products, also referred to as products high in fat, salt or sugar (HFSS), on TV and online. The restrictions comprise a 21:00 watershed for TV advertising of "identifiable" less healthy food or drink products and a total ban on paid-for advertising of these products online.
The regulations will introduce:
The government has also confirmed how the new prohibitions and restrictions will apply to IPTV services:
The government has confirmed that "brand advertising" will be exempt from the restrictions (see "What is the process for implementation?" section below).
The government has also published guidance on the regulations, which sets out further details on which businesses and products will be caught by the new regulations. See this Insight for more details.
To implement the regulations, the Advertising Standards Authority (ASA) must prepare implementation rules and guidance on the restrictions.
In December 2023, the Committee of Advertising Practice (CAP) published a consultation on draft guidance to accompany the legislation and on the changes that will need to be made to the advertising codes to reflect the new restrictions. The consultation closed on 7 February 2024.
In an update on the consultation, published on 13 January 2025, CAP said that the consultation process had led to questions being raised over "brand advertising", that is, advertising by brands that sell HFSS products but that does not explicitly refer to or feature particular HFSS products, and whether those types of ads will be caught by the new restrictions. The proposed guidance suggested that ads featuring branding for a range of "less healthy" products would fall outside the restrictions, provided no specific HFSS product is "identifiable", but the legislation itself does not make specific reference to such brand advertising. Accordingly, CAP recognised that some parts of its original guidance proposal required revision.
To deal with these issues, CAP published revised guidance and a further consultation in February 2025. This further consultation closed on 18 March 2025. The outcome of the consultation and the resulting guidance are awaited. See this Regulatory Outlook for more information.
The government then confirmed, in May 2025, in a written statement, its intention to introduce secondary legislation that will explicitly exempt "brand advertising" from the restrictions in the regulations.
To allow time for consultation on the draft secondary legislation, the government also announced a delay to the formal date for the restrictions to come into force – from 1 October 2025 to 5 January 2026 – now fixed in the Communications Act 2003 (Restrictions on the Advertising of Less Healthy Food) (Effective Date) (Amendment) Regulations 2025.
This in turn led the ASA to ask the CAP to place on hold its ongoing consultation on implementation rules and guidance to allow for a proper assessment of the government's decision.
Despite the date when the restrictions come into effect being pushed back to January 2026, advertisers and broadcasters, with the support of online platforms and publishers, have committed to the government to comply with the restrictions as if they were still coming into force on 1 October 2025. From that date, therefore, ads for specific identifiable less healthy products are expected not to be shown on TV between 5:30am and 21:00, or at any time online.
This regulation creates an EU-wide framework for trusted and secure digital identities.
Next important deadline(s)
19 August 2025
December 2026
The regulation entered into force on 20 May 2024. The regulation will come into effect 24 months after the adoption of various implementing regulations by the European Commission.
The Commission adopted five implementing regulations on the technical specifications and certification of European digital identity (eID) wallets in November 2024, and they entered into force on 24 December 2024. These provisions will not, therefore, be effective until December 2026. By this date, member states are required to make eID wallets available to their citizens.
On the horizon.
Regulation (EU) 2024/1183 of the European Parliament and of the Council of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework, available here.
19 August 2025 (new implementing regulations on technical specifications and certification come into effect)
December 2026 (effective date for eID wallets)
This legislation updates and amends the existing digital identity regime in the EU. Once in effect, it will affect all EU citizens, who will be able to have an eID wallet to prove their identity, to hold other documents such as qualification certificates, bank cards or tickets, and to access public and private services. Wallets will be available to prove the identity of natural and legal persons, and of natural persons representing a natural or legal person.
The legislation will also impact the providers of digital identity services, and be relevant to businesses to which the wallet is presented.
The new legislation amends the existing EU rules on digital identity (known as eIDAS) to create the European Digital Identity Framework. The concept is for a digital wallet – for example an app on a phone – that will stand as proof of a person's identity and can also be used to store and prove other "formal" attributes such as qualifications, certificates, a driving licence, medical prescriptions, and other aspects of someone's status and entitlements.
Key points include:
The legislation also establishes a legal framework for electronic signatures (which can be administered using an eID wallet), electronic seals, electronic time stamps, electronic documents, electronic registered delivery services, certificate services for website authentication, electronic archiving, electronic attestation of attributes, electronic signature and seal creation devices, and electronic ledgers.
In September 2024, the Commission asked the European Union Agency for Cybersecurity (ENISA) to support member states on the certification of eID wallets, including the development of a candidate European cybersecurity certification scheme in accordance with the EU Cybersecurity Act. ENISA will do this by providing harmonised certification requirements, which member states should adhere to when setting up their national certification schemes.
In November 2024, the Commission adopted five implementing acts under the regulation on technical specifications and certification.
In May 2025, following consultation, the Commission adopted four new draft implementing acts, setting out:
In July 2025, the Commission adopted seven new implementing regulations, including on the specifications needed to issue electronic attestations of attributes. All the regulations are available here. They will come into effect on 19 August 2025.
The European Commission's Q&A on this regulation is available here.
This directive requires all businesses selling goods and services online to provide a "withdrawal button" on their interface so that consumers can easily cancel online contracts
Next important deadline(s)
19 December 2025
19 June 2026
The directive entered into force on 18 December 2023.
By 19 December 2025, member states must adopt national legislation to implement the directive. That legislation should come into effect on 19 June 2026.
This directive will repeal the Financial Services Distance Marketing Directive (2002/65/EC) on the distance marketing of consumer financial services with effect from 19 June 2026.
In force
Directive (EU) 2023/2673 of the European Parliament and of the Council of 22 November 2023 amending Directive 2011/83/EU as regards financial services contracts concluded at a distance and repealing Directive 2002/65/EC, available here.
19 December 2025 (the date by which member states should adopt national legislation to implement the directive)
19 June 2026 (the date by which national legislation should come into effect)
Any businesses concluding contracts with consumers online
Businesses concluding financial services contracts at a distance (e.g. online, by phone or by post)
The directive amends the Consumer Rights Directive (2011/83/EU) (CRD) to introduce:
• specific provisions for financial services contracts concluded at a distance; and
• clear rights for consumers to be able to cancel online contracts (not just relating to financial products) via a "withdrawal button".
For consumer contracts concluded online, traders must provide consumers with an opportunity to cancel and withdraw from the contract by the use of a withdrawal function on their website. This cancellation option should be in addition to other cancellation methods required by the CRD (i.e. by using the model withdrawal form or making any other unequivocal statement setting out a consumer's decision to withdraw from the contract). This applies to all traders providing products or services to consumers online, not just those marketing and selling financial services.
The withdrawal function must be:
• Labelled with the words "withdraw from contract here" or other similar wording in an easily legible way.
• Continuously available throughout the withdrawal period.
• Prominently displayed on the online interface and easily accessible to the consumer.
The withdrawal button should allow consumers to send an online cancellation statement to the trader stating their decision to withdraw from the contract. There must be functionality to allow the consumer to provide the following information:
• The consumer's name.
• Details of the contract they wish to withdraw from.
• Details of the electronic means by which confirmation of the cancellation should be sent to the consumer.
The consumer must also be able to submit the cancellation statement by clicking a confirmation button. This button must be easily legible, using the words "confirm withdrawal" or other similar wording.
Once the consumer activates the confirmation button, the trader must, without undue delay, send them an acknowledgement of receipt, including its content and the date and time of submission.
Information requirements
The directive provides specific information that the trader must provide to the consumer when concluding an online contract for consumer financial services. It should be provided "in good time" before the consumer is bound by a contract and in a clear and comprehensible manner. Such information includes, among other things, the trader's identity and contact details, the main characteristics of the service, a total price, the withdrawal period and the conditions for exercising the right to withdraw.
Right of withdrawal
The directive also sets out specific requirements relating to the right of withdrawal from financial services contracts:
• The consumer must have a period of 14 calendar days to cancel a contract without penalty and without having to give any reason. That period must be extended to 30 calendar days if a contract relates to personal pensions.
• This withdrawal period should begin from either: (a) the day the contract was concluded; or (b) the day on which the consumer received the terms and conditions and the information above, if that is later than the date in (a).
The directive sets out exceptions to the right of withdrawal, for example: for foreign exchange financial services, i.e. where the price depends on fluctuations in the financial market outside the trader's control, which might occur during the withdrawal period; or for travel and baggage insurance policies or similar short-term insurance policies of less than one month's duration.
Where the consumer decides to withdraw from a financial services contract, they can be required to pay for the service that has been provided to them under the contract, as long as it does not amount to a penalty. The trader must return all other sums paid by the consumer and the consumer must return any sums they have received from the trader, without undue delay and no later than within 30 calendar days of the date of withdrawal.
Online interface design
Member states must ensure that traders, when concluding online financial services contracts, do not design, organise or operate their online interfaces in a way that deceives or manipulates consumers or otherwise materially distorts or impairs their ability to make free and informed decisions. In particular, member states must adopt measures that address at least one of the following practices by traders:
• Giving more prominence to certain choices when asking consumers to make a decision.
• Repeatedly asking consumers to make a choice where that choice has already been made, especially by presenting pop-ups that interfere with the user experience.
• Making the procedure for terminating a service more difficult than subscribing to it.
This directive requires all businesses selling goods and services online to provide a "withdrawal button" on their interface so that consumers can easily cancel online contracts
Next important deadline: 19 December 2025; 19 June 2026
The directive entered into force on 18 December 2023.
By 19 December 2025 member states must adopt national legislation to implement the directive. National legislation should come into effect on 19 June 2026.
This directive will repeal the Financial Services Distance Marketing Directive (2002/65/EC) on the distance marketing of consumer financial services with effect from 19 June 2026.
In force
Directive (EU) 2023/2673 of the European Parliament and of the Council of 22 November 2023 amending Directive 2011/83/EU as regards financial services contracts concluded at a distance and repealing Directive 2002/65/EC, available here.
19 December 2025 (the date by which member states should adopt national legislation to implement the directive)
19 June 2026 (the date by which national legislation should come into effect)
Any businesses concluding contracts with consumers online
Businesses concluding financial services contracts at a distance (e.g. online, by phone or by post)
The directive amends the Consumer Rights Directive (2011/83/EU) (CRD) to introduce:
• specific provisions for financial services contracts concluded at a distance; and
• clear rights for consumers to be able to cancel online contracts (not just relating to financial products) via a "withdrawal button".
For consumer contracts concluded online, traders must provide consumers with an opportunity to cancel and withdraw from the contract by the use of a withdrawal function on their website. This cancellation option should be in addition to other cancellation methods required by the CRD (i.e. by using the model withdrawal form or making any other unequivocal statement setting out a consumer's decision to withdraw from the contract). This applies to all traders providing products or services to consumers online, not just those marketing and selling financial services.
The withdrawal function must be:
• Labelled with the words "withdraw from contract here" or other similar wording in an easily legible way.
• Continuously available throughout the withdrawal period.
• Prominently displayed on the online interface and easily accessible to the consumer.
The withdrawal button should allow consumers to send an online cancellation statement to the trader stating their decision to withdraw from the contract. There must be functionality to allow the consumer to provide the following information:
• The consumer's name.
• Details of the contract they wish to withdraw from.
• Details of the electronic means by which confirmation of the cancellation should be sent to the consumer.
The consumer must also be able to submit the cancellation statement by clicking a confirmation button. This button must be easily legible, using the words "confirm withdrawal" or other similar wording.
Once the consumer activates the confirmation button, the trader must, without undue delay, send them an acknowledgement of receipt, including the content of the statement and the date and time of its submission.
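The mechanics described above can be sketched in code. The sketch below is purely illustrative: the directive prescribes the information to be captured and echoed back, not any particular data model, and all names and types here are our own assumptions.

```typescript
// Illustrative model of the CRD withdrawal function; field names are assumptions.
interface WithdrawalStatement {
  consumerName: string;            // the consumer's name
  contractReference: string;       // details of the contract withdrawn from
  confirmationChannel: string;     // electronic means for sending the acknowledgement
}

interface AcknowledgementOfReceipt {
  statement: WithdrawalStatement;  // the content of the submitted statement
  submittedAt: Date;               // the date and time of submission
}

// Simulates the trader's handling of a click on the "confirm withdrawal" button:
// the acknowledgement must echo the statement's content and record the date and
// time of submission, and be sent without undue delay.
function confirmWithdrawal(statement: WithdrawalStatement): AcknowledgementOfReceipt {
  return { statement, submittedAt: new Date() };
}
```

In practice the acknowledgement would be dispatched via the channel the consumer specified (for example, by email); that delivery step is omitted here.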
Information requirements
The directive provides specific information that the trader must provide to the consumer when concluding an online contract for consumer financial services. It should be provided "in good time" before the consumer is bound by a contract and in a clear and comprehensible manner. Such information includes, among other things, the trader's identity and contact details, the main characteristics of the service, a total price, the withdrawal period and the conditions for exercising the right to withdraw.
Right of withdrawal
The directive also sets out specific requirements relating to the right of withdrawal from financial services contracts:
• The consumer must have a period of 14 calendar days to cancel a contract without penalty and without having to give any reason. That period must be extended to 30 calendar days if a contract relates to personal pensions.
• This withdrawal period should begin from either: (a) the day the contract was concluded; or (b) the day on which the consumer received the terms and conditions and the information above, if that is later than the date in (a).
The directive sets out exceptions to the right of withdrawal, for example: for foreign exchange financial services, i.e. where the price depends on fluctuations in the financial market outside the trader's control, which might occur during the withdrawal period; or for travel and baggage insurance policies or similar short-term insurance policies of less than one month's duration.
Where the consumer decides to withdraw from a financial services contract, they can be required to pay for the service that has been provided to them under the contract, as long as it does not amount to a penalty. The trader must return all other sums paid by the consumer and the consumer must return any sums they have received from the trader, without undue delay and no later than within 30 calendar days of the date of withdrawal.
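The withdrawal-period rules above reduce to a simple calculation: the period is 14 calendar days (30 for personal pension contracts) and runs from the later of the conclusion of the contract and the consumer's receipt of the required information. A minimal sketch, with the function name and signature our own assumptions:

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Illustrative calculation of the withdrawal deadline under the amended CRD.
// The period runs from the later of (a) the day the contract was concluded and
// (b) the day the consumer received the terms, conditions and pre-contractual
// information; it is 30 calendar days for personal pensions, otherwise 14.
function withdrawalDeadline(
  contractConcluded: Date,
  informationReceived: Date,
  isPersonalPension: boolean
): Date {
  const start =
    informationReceived > contractConcluded ? informationReceived : contractConcluded;
  const days = isPersonalPension ? 30 : 14;
  return new Date(start.getTime() + days * MS_PER_DAY);
}
```

Note the consequence for traders: delivering the pre-contractual information late delays the start of the withdrawal window.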
Online interface design
Member states must ensure that traders, when concluding online financial services contracts, do not design, organise or operate their online interfaces in a way that deceives or manipulates consumers or otherwise materially distorts or impairs their ability to make free and informed decisions. In particular, member states must adopt measures that address at least one of the following practices by traders:
• Giving more prominence to certain choices when asking consumers to make a decision.
• Repeatedly asking consumers to make a choice where that choice has already been made, especially by presenting pop-ups that interfere with the user experience.
• Making the procedure for terminating a service more difficult than subscribing to it.
This directive updates product liability rules to include digital products, services and platforms.
Next important deadline(s): 9 December 2026
The EU Product Liability Directive entered into law on 8 December 2024. Member States have until 9 December 2026 (24 months) to implement the directive into national law. There is no further implementation period, meaning that products placed on the EU market or put into service from 9 December 2026 will be subject to the new rules (provided national laws are in place). Those products already on the market or put into service before 9 December 2026 will remain subject to the former Product Liability Directive.
On the horizon. National implementing legislation awaited.
9 December 2026 (effective date)
The final text can be found here.
Anyone placing products on the EU market.
Once effective, this directive will provide easier access to compensation for consumers who suffer damage from defective products and includes amendments relating to the directive's scope, treatment of psychological damage, and allocation of liability regarding software manufacturers.
The update to the existing product liability framework, dating from 1985, will:
We have produced an infographic on product liability reform in the EU and UK that sets out some of the key changes and practical actions businesses should be considering. For example, with regard to claimants' enhanced disclosure rights under the revised EU PLD, businesses should ensure disclosable materials are fit for purpose, including design files, evidence of safety testing in the design phase, and policies on vigilance and corrective actions. Request a copy of the infographic here.
The upcoming Act will fill gaps in consumer law to address unfair commercial practices online, such as dark patterns, personalisation and addictive design.
Next important deadline
24 October 2025
On 17 July 2025, the Commission launched a call for evidence and consultation on the Digital Fairness Act (DFA), both of which close on 24 October 2025. The Commission expects to adopt the legislative proposal for the DFA in the third quarter of 2026.
On the horizon
Digital Fairness Act
24 October 2025 (deadline to respond to the European Commission's call for evidence and consultation on the Digital Fairness Act)
Businesses involved in the provision of digital services and digital content
E-commerce and retail businesses
Social media influencers
Online advertisers
App developers
The idea for the DFA was born out of the Commission's digital fairness "fitness check" of consumer law published in October 2024. The fitness check concluded that while current EU consumer protection laws are still relevant, they do not fully tackle the unique challenges and harmful practices that consumers face online. The findings identified various problematic online practices and revealed some gaps in consumer protection law and areas of legal uncertainty. See Osborne Clarke's Insight for more information on the fitness check findings.
Following the results of the fitness check, the Commission's President asked the Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection to develop a DFA.
The DFA will focus on addressing the gaps and areas of legal uncertainty identified as a result of the fitness check. It will complement existing legislation in this area, such as the Digital Services Act, the Digital Markets Act, the Artificial Intelligence Act, the Data Act and the Audiovisual Media Services Directive.
At this early stage, it is envisaged that the DFA will focus on:
• Dark patterns and other unfair techniques that pressurise, deceive and manipulate consumers online.
• Misleading marketing by social media influencers (for example, the lack of disclosure of commercial communications and the promotion of harmful products to their followers). It will also clarify the responsibilities of companies that collaborate with them.
• Addictive design of digital products, especially affecting minors, leading them to spend excessive time and money on online goods and services.
• Particular features of digital products, such as virtual currency and loot boxes in video games and in-app purchases, addressing functionalities that have a significant impact on minors.
• Problematic personalisation practices, such as personalisation of prices and advertising, including where consumer vulnerabilities are targeted.
• Unfair pricing practices (such as "drip pricing", "starting from" prices, percentage/value discounts that mislead consumers as to the nature of the promotion).
• Managing digital contracts (such as cancellations of subscriptions, auto-renewals and free trials, and the use of chatbots for customer service).
It is also intended that the DFA will provide an opportunity for simplification, reducing the administrative burden on companies and making enforcement easier.
The Commission is also developing the Consumer Agenda 2025-2030, which will cover some of the issues to be addressed by the DFA. See this Regulatory Outlook for more information.
On 17 July 2025, the Commission launched a call for evidence and consultation on the DFA, both of which close on 24 October 2025. A summary report of the public consultation will be published within eight weeks of the consultation closure. The Commission expects to adopt a legislative proposal in the third quarter of 2026.
For key themes of the public consultation at a glance, see this Osborne Clarke publication.
This directive clarifies the employment rights of workers who source work through digital platforms and covers fairness in the algorithmic management of workers.
Next important deadline(s)
2 December 2026
This directive entered into law on 1 December 2024. Member states have 24 months to transpose the directive into their national legislation (i.e. by 2 December 2026).
On the horizon.
Directive of the European Parliament and of the Council on improving working conditions in platform work, available here.
2 December 2026 (effective date if national laws in place)
The directive applies to all workers performing platform work within the EU (and therefore to their "employers"). The contractual relationship of the worker does not have to be with the recipient of the service but could be with the platform or with an intermediary.
It also applies to all digital labour platforms organising platform work to be performed in the EU, regardless of the place of establishment of the platform. The key point is where the work will be performed, not where the platform is based.
A "digital labour platform" is defined as meeting all of the following requirements:
Platforms that are designed to enable the exploitation or sharing of assets (such as short-term rental accommodation), or to enable the non-commercial resale of goods, are expressly excluded from this definition. The requirement that the worker is remunerated means that platforms organising the activities of volunteers are also excluded from scope.
NB: the focus of this resource is digital regulation so wider aspects of this legislation are out of scope. The directive includes provisions on the correct determination of a platform worker's employment status (including a rebuttable presumption of employment status unless the digital platform can prove that a worker is, in fact, genuinely self-employed), and on transparency around platform work including disclosure of data to relevant authorities.
Chapter III of the directive sets out provisions regarding the algorithmic management of platform workers, where functions that might historically have been taken by human managers are automated.
The directive focuses on automated monitoring systems and automated decision-making systems, both of which operate "through electronic means" and may collect data. Monitoring systems are those which are used for, or support, monitoring, supervising or evaluating someone's work. Decision-making systems are those which take decisions that "significantly affect persons performing platform work". This may include decisions concerning recruitment, allocation of work and working time, remuneration, access to training or promotion, and their contractual status.
The directive seeks to address concerns that workers may not have information about what data is being collected about them, and how it is evaluated. The reasons behind decisions taken by the systems may not be clear, or explainable, and there may be limited or no avenues for challenge, redress or rectification. In relation to personal data, it makes supplemental provisions that will apply over and above the General Data Protection Regulation (GDPR) framework.
The directive requires member states to outlaw the following functions from being performed by automated monitoring or decision-making systems:
These provisions will apply from the start of the recruitment process. In addition, the directive provides that any decision to restrict, suspend or terminate the contractual relationship or account of a platform worker (or similarly detrimental decisions) must be taken by a human being.
The directive clarifies that automated processing of a platform worker's personal data for monitoring or decision-making will be high risk under the provisions of the GDPR and so will require a data protection impact assessment (DPIA). Digital labour platforms acting as data controllers must seek the views of the platform workers concerned and their representatives. DPIAs must be provided to worker representatives.
Digital labour platforms must disclose the use of algorithmic systems to platform workers, their representatives and relevant authorities. This should include information about:
The required information about algorithmic systems must be provided in writing, in clear and plain language. It must be provided on the worker's first day, or ahead of any changes, or on request. It must also be provided in relation to systems used for recruitment and selection to the person going through that process.
Platform workers will have a right to portability of their data from one digital labour platform to another.
The directive creates obligations around human oversight of algorithmic systems. These include a review at least every two years of the impact of the systems, including on working conditions and on equality of treatment of workers. Where a high risk of discrimination is identified, the system must be modified or no longer used.
Digital labour platforms must have sufficient human resources to implement oversight of the impact of individual decisions, with appropriately competent and trained staff able to override automated decisions. Such staff must be protected from dismissal or disciplinary measures for exercising these functions.
Platform workers will have a right to an oral or written explanation in clear and plain language of decisions taken or supported by an algorithmic system, with the option of discussing the decision with a human. The explanation must be in writing for certain decisions, including the restriction, suspension or termination of a platform worker's account, a refusal to pay the worker, or any decision regarding the worker's contractual status.
Workers will have the right to request a review of decisions taken by the algorithmic systems, to receive a written response within two weeks, to have the decision rectified if it infringes their rights, or to receive compensation if the error cannot be rectified.
The directive highlights that algorithmic management of workers can cause the intensification of work, which in turn can impact on safety and on the mental and physical health of platform workers. It requires digital labour platforms to evaluate such risks and take preventative and protective measures.
This regulation introduces cybersecurity requirements for digital products.
Next important deadline(s)
11 December 2027
The EU Cyber Resilience Act (CRA) entered into force on 10 December 2024. The CRA has a 36-month transition period, meaning it will be effective from 11 December 2027.
On the horizon.
11 December 2027 (effective date)
The final text can be found here.
Manufacturers of products with digital elements (software and hardware products).
The CRA will introduce cybersecurity requirements for products with digital elements to protect consumers and businesses from products with inadequate security features. Manufacturers will have to ensure that the cybersecurity of their products conforms with minimum technical requirements, from the design and development phase and throughout the whole life cycle of the product. This could include carrying out mandatory security assessments.
Certain types of products with digital elements deemed safety critical will be subject to stricter conformity assessment procedures, reflecting the increased cybersecurity risks they present.
The CRA also introduces transparency requirements, requiring manufacturers to disclose certain cybersecurity aspects to consumers.
The CRA will apply to all products that are connected either directly or indirectly to another device or network, with some exceptions for products already covered by sectoral rules, such as medical devices, aviation products and cars.
When the CRA becomes effective, software and products connected to the internet will be required to carry the CE mark to indicate that they comply with the applicable standards. Readers should also note that the European common criteria-based cybersecurity certification scheme, developed under the EU Cybersecurity Act, has been adopted and will apply on a voluntary basis to all information and communication technology products within the EU.
Products | UK Regulatory Outlook May 2025 | Osborne Clarke
The EU Cyber Resilience Act, and why it matters for US companies | Osborne Clarke
UK and EU take steps to bolster product security regimes - Osborne Clarke
How is EU cybersecurity law affecting IoT product design? - Osborne Clarke
The current government has said that it plans to introduce AI safety legislation for "frontier" AI models.
None
In the policy formation stage
Businesses developing, supplying or using artificial intelligence (AI) systems.
The last UK Conservative government issued a consultation on its white paper on AI regulation, which set out the UK's policy approach to regulating AI at the time. The government confirmed that approach in its consultation response of February 2024: it did not plan new law in this area, but issued "high-level principles" to guide existing regulators in dealing with AI within the scope of their existing powers and jurisdiction.
See this Insight for more information on the white paper and this Insight for details of the government's response.
The current government has said that it intends to bring forward AI legislation focused on AI safety and, in particular, on the development of the most powerful cutting-edge AI models (often known as "frontier models"), including a requirement for developers to share these models for testing before they are publicly released.
It is anticipated that such legislation would be likely to put existing voluntary commitments signed by some of the leading AI companies onto a statutory footing, as well as related measures such as moving the UK's existing AI Security Institute (which would conduct model testing) onto an arm's length footing from the government.
The government hopes that the new law will "reduce regulatory uncertainty for AI developers, strengthen public trust and boost business confidence." It is intended to tie in with other government measures on AI, including proposed changes to copyright law and the AI Opportunities Action Plan (see below).
Timing is unclear: in autumn 2024, government sources indicated an intention to consult on the matter. In January 2025, the AI Opportunities Action Plan (see below) stressed the importance of acting quickly on this area of regulation. However, the consultation has yet to materialise, and no firm date has been given.
⭐In addition, in June 2025, it was widely reported in the media that the government has postponed plans to regulate until at least 2026.
The government-commissioned AI Opportunities Action Plan, led by Matt Clifford, was released on 13 January 2025. The report outlines 50 recommendations (all of which were accepted by the government) built upon core principles that advocate for the government to support innovators, invest in becoming a leading AI customer, attract global talent to establish companies in the UK, and leverage the UK's strengths and emerging catalytic areas.
The report highlights the crucial importance of data in underpinning AI development. It urges the government to act quickly to announce how it plans to regulate frontier AI models, and to urgently reform UK intellectual property law on text and data mining (see below).
See our Insight for more information.
In December 2024, the UK government published a consultation paper on changes and clarifications to copyright law, aimed at making the UK more attractive to the tech companies developing AI models, but balancing this with supporting the creative industries upon whose data many AI systems are trained.
The main proposal is an "opt-out" text and data mining (TDM) copyright exception allowing use of content for training AI systems unless the owner has expressly opted out of allowing that use. This new exception would apply only where the user already had lawful access to the content, such as via a paid-for subscription service, or where it had been made freely available online.
In the consultation, the government indicated that an "opt-out" TDM exception was its preferred option. However, it has since said that it is "open-minded" as to which reform option to pursue.
The government has said that it will introduce the TDM exception only if appropriate technology exists to allow effective exercise of the opt-out, and only if it is accompanied by transparency measures, possibly in the form of obligations on AI developers to disclose details of:
Another key proposal is the removal of copyright protection for computer-generated works. Other changes are floated, such as obligations to label so-called deepfakes (or "digital replicas") and other AI-generated or AI-manipulated content.
The consultation closed on 25 February 2025. It will be interesting to see which of the government's proposals are ultimately taken forward and how quickly any resulting changes to copyright law will be implemented.
For more information see our Insight.
⭐Following the Data (Use and Access) Act 2025 becoming law on 19 June 2025, the government has nine months to produce an economic impact assessment and report on some of the copyright policy options regarding AI training which were set out in the consultation, including proposals on technical measures, transparency, licensing, enforcement, and AI developed outside the UK. It must also produce an interim progress report at six months.
AI is a key priority for the Information Commissioner's Office (ICO) due to the technology's potential to pose high risks to individuals' privacy if it is not developed and used in a responsible way. The ICO already has guidance on AI and data protection and a suite of other AI-related resources.
In December 2024, the ICO published a response to input it received in the course of its five-part consultation series on generative AI. The regulator set out its analysis, views and current expectations on how specific areas of data protection law apply to generative AI systems. See our Insight for more information.
This directive would have helped parties bring a claim for damages for harm caused by an AI tool.
Next important deadline(s): None
⭐ The proposal was officially withdrawn by the European Commission in July 2025 (see more below).
Following the hiatus in the legislative process for this draft legislation, which was caused by the European Parliament elections in June 2024, the new Parliament announced in November 2024 the list of legislative files that would resume or continue in the new legislature, with the AI Liability Directive among them. However, the Commission's work programme for 2025, published in February 2025, stated that the directive was to be withdrawn. The European Parliament and Council of the EU then had six months to provide their views before the Commission made its final decision. In July 2025, the Commission officially withdrew the draft directive.
The overview of the draft legislation set out below is a summary of what the draft legislation provided at the time work ceased on it due to the June 2024 elections.
None
Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), available here.
The directive would have impacted businesses and individuals bringing or defending private actions for non-contractual damages in relation to artificial intelligence (AI). It would have (once implemented at national level) changed civil litigation rules in EU member states on access to evidence and the burden of proof for some AI damages claims.
The directive was intended to reduce perceived difficulties in claiming non-contractual damages for harm caused by AI.
The proposal sat alongside the EU AI Act and wider reforms to EU product liability legislation. Some AI systems would have fallen under the EU's product liability regime, a strict liability regime that does not require proof of fault. Where they did not, a damages claim would likely have taken the form of a non-contractual claim, requiring the claimant to prove the defendant's fault and a causal link between that fault and the harm suffered.
The complexities of AI systems can make it difficult to unravel why harm has been caused, whether there was fault, and whether the fault flowed through to the harm caused. Claimants often suffer from an asymmetry of information – the defendant knows more about what has happened than they do – which can be even greater in relation to claims about an AI system, particularly in civil law systems without common law-style disclosure obligations. The directive sought to address these difficulties with two changes to standard civil litigation procedure in EU member states for AI claims.
First, a new right of access to information was proposed for claimants seeking compensation in relation to an AI system classified as "high risk" under the AI Act. Under the proposals in the directive, the claimant would first have to make "all proportionate attempts" to obtain information from the defendant. If that was unsuccessful and if the claimant could show that they had a plausible claim for damages, the claimant would have been able to ask the court to order disclosure of "relevant evidence" from the defendant.
"Relevant evidence" was not defined, but could feasibly have included some or all of the extensive compliance and technical documentation required for high-risk systems under the AI Act. It could also potentially have extended to data sets used for training, validating or testing AI systems and even to the content of mandatory logs that AI providers must maintain under the AI Act for traceability purposes.
Moreover, the directive proposed that failure to comply with a disclosure order would trigger a rebuttable presumption of non-compliance with a duty of care by the defendant – aiding the claimant in proving fault.
Secondly, the directive proposed creating a "presumption of causality". "National courts shall presume … the causal link between the fault of the defendant and the output produced by the AI system" where the claimant showed that all of the following requirements were met:
The proposal was not as simple as a reversal of the burden of proof: the claimant would still have needed to demonstrate these three elements.
In relation to high-risk AI systems, the directive proposed that the defendant would be shielded from the presumption of causality where it could demonstrate that the claimant had access to sufficient evidence and expertise to prove the causal link.
For AI systems that were not high risk, the presumption would have applied only where the court considered that it would be excessively difficult for the claimant to prove the causal link.
Finally, the proposal was that the presumption would apply to AI systems put into use by a non-professional user only if the user had materially interfered with the conditions of operation of the AI system, or where the user was required and able to determine the conditions of operation of the AI system but had failed to do so.
More generally, the presumption of causality would have been rebuttable, shifting the burden of proof onto the defendant to show that there was no causal link between the fault and the AI output; for example, by showing that the AI system could not have caused the harm in question.
There was an intentional interplay between this directive and the EU's AI Act that would link non-compliance with the AI Act regulatory regime to increased exposure to damages actions.
Once enacted at EU level, the principles in the directive would then have needed to be implemented at national level, usually within a two-year transposition period. This two-stage approach reflected the extensive differences in detail between the civil litigation rules of different EU member states: it would have enabled each jurisdiction to enact the harmonised changes, adapted as needed to fit its national civil litigation regime.
As the legislation stood before the June 2024 European Parliament elections, the new rules would only have applied to harm that occurred after the transposition period, without retrospective effect.
It is interesting to note that in September 2024, the European Parliament's Think Tank published a complementary impact assessment on the Commission's proposal. The report, among other things, proposed to expand the scope of the directive into a "more comprehensive software liability instrument" to cover not only AI, but all other types of software.
Whether the proposed legislation will be revived at a later date, either in its current form, or as a "more comprehensive software liability instrument" remains to be seen.
The bill will expand the scope of existing cyber security legislation to more entities in the supply chain, strengthen regulators' powers, and mandate increased incident reporting, thereby improving the UK's cyber resilience.
Next important deadline
To be confirmed
The government announced the bill in the 2024 King's Speech on 17 July 2024. A draft has not yet been published or introduced to Parliament.
Cyber Security and Resilience bill: Cyber Security and Resilience Bill - GOV.UK (www.gov.uk)
The current Network and Information Systems (NIS) Regulations 2018 apply to organisations that are "operators of essential services" and "relevant digital service providers". They cover five sectors – transport, energy, water, health and digital infrastructure – as well as digital services, including online marketplaces, search engines and cloud computing services.
The intention behind the Cyber Security and Resilience bill is to extend the scope of the NIS regulations to include all critical infrastructure and essential digital services.
The bill will update the NIS regulations by:
Following a consultation in 2022, the previous UK government announced its intention to update the NIS Regulations 2018. However, the proposed reforms were not introduced into parliament by the time it was dissolved before the general election on 4 July 2024. It remains to be seen how similar any new draft bill will be to the former government's plans.
On 1 April 2025, the government published the Cyber Security and Resilience policy statement, setting out the measures to be included in the forthcoming bill:
The government has also set out new measures under consideration, additional to the commitments made in the King's Speech, to:
The policy statement states that the bill will, where appropriate, align with the approach taken in the EU NIS 2 Directive (2022/2555/EU). Businesses in scope of both the bill and NIS 2 may therefore be able to use compliance measures already being implemented for NIS 2 in order to comply with the requirements of the bill (when it is enacted and comes into effect).
Although the final version of the bill is yet to be introduced to Parliament, UK businesses can look to the contents of the policy statement to understand the government's intended direction of travel.
This is the first part of a two pillar solution to address the tax challenges arising from the digitalised economy.
Next important deadline(s): None
Still awaited in the UK.
Not yet implemented in the UK.
None
-
All industries impacted by digitalisation with certain exceptions (e.g. financial services) but focussed on large multinational businesses (MNEs).
Agreement has been reached with over 130 members of the OECD/G20 Inclusive Framework to implement a two-pillar solution to address the tax challenges arising from the digitalisation of the economy.
Pillar One involves a partial reallocation of taxing rights over the profits of the largest and most profitable MNEs (those with revenues exceeding EUR 20 billion and profitability greater than 10%) to the jurisdictions where consumers are located: in short, it is about where these businesses pay tax. It is hoped that this will resolve longstanding concerns that the international corporate tax framework has not kept pace with the digital economy and with how highly digitalised businesses generate value from active interaction with their users. Under the proposal, 25% of an MNE's "residual profit" (profit above 10% of revenue) would be allocated according to an allocation key designed to recognise sustained and significant involvement in a market, irrespective of physical local presence.
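The headline arithmetic can be sketched as follows. This is an illustrative calculation only, using the summary figures above (EUR 20 billion revenue and 10% profitability thresholds, 25% of residual profit reallocated); the actual scoping, sourcing and allocation rules in the multilateral convention are considerably more detailed, and the function name is hypothetical.

```python
def pillar_one_amount_a(revenue_eur_bn: float, profit_eur_bn: float) -> float:
    """Illustrative sketch of the Pillar One "Amount A" reallocation.

    Returns the portion of profit (in EUR bn) that would be reallocated
    to market jurisdictions under the headline figures described above.
    """
    # Scope test: revenue above EUR 20bn and profitability above 10%
    if revenue_eur_bn <= 20 or profit_eur_bn / revenue_eur_bn <= 0.10:
        return 0.0  # out of scope of Pillar One
    # "Residual profit" is profit in excess of 10% of revenue
    residual_profit = profit_eur_bn - 0.10 * revenue_eur_bn
    # 25% of residual profit is reallocated, per the allocation key
    return 0.25 * residual_profit

# Example: EUR 25bn revenue, EUR 5bn profit (a 20% margin).
# Residual profit = 5 - 2.5 = 2.5; 25% of that = 0.625 (EUR bn).
print(pillar_one_amount_a(25.0, 5.0))
```

How the reallocated amount is then split between individual market jurisdictions depends on the allocation key, which this sketch does not model.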
Implementation of Pillar One is still some way off. It will be implemented by way of a multilateral convention (published in October 2023), and will not come into force until the multilateral convention has been ratified by a critical mass of countries. The introduction of Pillar One will be coordinated with the removal of all Digital Services Taxes and other relevant similar measures already implemented in jurisdictions.
This regulation will supplement the EU GDPR with harmonised procedures for the cross-border enforcement of the GDPR.
Next important deadline(s)
None
The proposal was adopted by the European Commission in July 2023. The Parliament adopted its position on the Commission's proposal in April 2024. However, the legislation had not been finalised by the time the European Parliament session ended ahead of the June 2024 elections. Soon after the elections, the Council of the EU agreed its negotiating position on the draft legislation.
On 13 November 2024, the European Parliament announced the list of legislative files which will resume or continue business in the new legislature, with the proposal for this regulation among them. The Commission's work programme for 2025, published in February 2025, also included the regulation in its list of pending proposals.
⭐On 16 June 2025, the Council of the EU and the European Parliament reached a provisional deal on the text of the regulation. On 27 June 2025, the Council of the EU formally endorsed the text, which was then also formally agreed by the European Parliament's Civil Liberties, Justice and Home Affairs (LIBE) Committee. This paves the way for final adoption of the text by the European Parliament and the Council. The regulation will not apply, however, until 15 months after it enters into force.
On the horizon.
None (awaiting final adoption by the European Parliament and the Council of the EU)
Proposal for a Regulation of the European Parliament and of the Council laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679, available here.
This new regulation will impact all those involved in cross-border enforcement of the GDPR, including the national data protection authorities (DPAs), the parties under investigation and any parties making a complaint about cross-border GDPR compliance to a DPA.
The regulation adds procedural rules for the cross-border enforcement mechanisms set out in Chapter VII of the GDPR. It does not create new enforcement mechanisms, but clarifies how the existing GDPR framework operates in practice. Because GDPR enforcement is decentralised to member state level, cross-border enforcement within the EU is relatively common – for example where a data subject lodges a complaint about a data controller or processor located in a different member state.
The Commission's review of the application of the GDPR indicated that cross-border enforcement and dispute resolution under the GDPR mechanisms were being hampered by differences in administrative procedures and interpretation of the GDPR at national level. The regulation therefore aims to create a harmonised, common set of rules for cross-border GDPR matters.
The proposed regulation will create harmonised rules covering various aspects of cross-border enforcement of the GDPR.
The new rules will not impact the provisions of the GDPR concerning the rights of data subjects, the obligations of data controllers or processors, or on the lawful grounds for data processing. They will also not impact enforcement where there is no cross-border aspect.