Author: Philip Ryan recently defended his PhD – Bureaucracy Mapping: Inclusive Design for Institutional Navigation – at University College Dublin. His research interests include inclusive design, bureaucracy, sociology, technology, user experience, trust, privacy, and migration.
Regulating technology can be difficult, and the current explosion of generative Artificial Intelligence (AI) has left even the companies producing the technologies struggling to keep up. Healthcare applications powered by general-purpose Large Language Models (LLMs) increasingly promise to completely change the provision of care. In medicine, as in law, expecting one tool to be a solution for everything raises similar issues. AI as a panacea can be a poison as much as a cure, especially when it removes protection and agency from users and adds workarounds for necessary regulations. While the lack of service and capacity in healthcare could be addressed by products like “AI agents”, their implementation should not be allowed to escape the regulations already in place.
Should these technologies be subject to regulation such as the Medical Devices Directives when used for diagnosis? Regulation must engage with technologies that engage with health, no matter how their providers try to categorise themselves. When every major AI provider’s chatbot gives health-related advice, the fig leaf of “not for medical or diagnostic usage” cannot be allowed to frustrate legitimate attempts to regulate healthcare. Technologies powered by AI are part of a long tradition of promising a doctor in your pocket (Lupton and Jutel 2015). In 2025, the EU Artificial Intelligence Act (AI Act) is still taking effect against a world economy that is increasingly reliant on the unrealised promise of these technologies. Uncertainty about the Act’s interaction with healthcare still stands (Gilbert 2024), and the horizontal approach of the Act makes it “necessary to adopt further guidelines to address the unique needs of the healthcare sector” (Van Kolfschooten and Gross 2025).
Health information online is increasingly provided through AI summaries, which obfuscate the origin of the information, invent falsehoods through hallucinations, and push users towards action with confident-sounding statements. In replacing web search results, which had become the go-to information resource over the previous two decades, they create a dangerous new environment in which previously somewhat reliable sources have been replaced by superior-looking interactions (Gross et al. 2025). While attention has been paid to the ability of these LLMs to encourage self-harm (Yousif 2025), it will be harder to find obituaries attributable to AI’s good intentions and hallucinations, such as dangerous diets, as in a recently reported case of a man poisoning himself after replacing table salt (sodium chloride) with sodium bromide (Burgard 2025).
While precise healthcare regulations may be required, many of the general rules could be used to protect against the excesses of big tech. The Digital Omnibus will consider the General Data Protection Regulation (GDPR), the e-Privacy Directive, the Free Flow of Non-Personal Data Regulation, the NIS 2 Directive, the AI Act, the Data Act, the Data Governance Act (DGA), and the eIDAS Regulation (European Commission 2025); while not primarily concerned with healthcare, they all affect healthcare provision. In some cases, healthcare uses provide exceptions to the rules. How can clinicians provide care when they do not understand the ramifications of agreements between the user and the companies who own the technologies used for diagnosis, agreements that could follow patients through their entire lives? As new versions of healthcare become more reliant on these services, how can GDPR consent be meaningfully provided if the technology is so encompassing that there is no meaningful option for healthcare outside of the technology itself?
Correcting the Duck
The EUHealthGov panels at the 55th UACES conference in Liverpool gave me great insight into the vibrant research into EU health policy. As a panel participant quipped, health policies are often like a thrown duck, taking their own path after the initial toss. Iterative bases of regulation, such as consultation periods and full engagement with the implementation and standardisation phases, are extremely important when controlling the development of healthcare. AI is neither sustainable nor desirable when it attempts to turn patients into users.
The regression from health policy since the end of the COVID-19 pandemic feels like a missed opportunity, and the realities of an ever-ageing population and more complex healthcare requirements must not be overtaken by the false promises of innovative technologies. Visibility and public engagement with regulatory processes is difficult but vital. The importance of the patient role must be correctly identified and developed into something beneficial within the evolving spaces changed and developed by technologies. The infrastructure path dependency of AI cannot be allowed to decide what best practice is in healthcare. As these technologies become more all-consuming, healthcare cannot just be a pile of money and data to give AI more power.
The imagined future capacities of generative AI must be questioned, and the related benefits and dangers considered, as these systems are inserted into more and more vital services. For example, the increasing environmental effect of using resource-intensive AI in healthcare must also come into play. As per the concept of One Health, the health of the planet feeds into the health of its inhabitants. On-the-nose examples are emerging from the noise and air pollution caused by data centre developments (Tao and Gao 2025). Changes like the European Commission’s Digital Omnibus exercise could bring about deregulation and a weakening of the Union’s protections. While inclusivity may seem disconnected from regulatory structures, the complexity of any agreement must be assessed, and comprehension by those affected should have some level of consideration if regulatory processes are to be imposed. Simplification and coherence should serve the EU’s long-term strategic vision for a competitive, secure, and rights-based digital economy rather than short-sighted deregulatory moves in favour of technologies that may not work.
Innovation and other techno-optimistic concepts are not solely positive. As Cory Doctorow (2025) highlights, the degradation of services is strongly linked to the ability to avoid regulations by shifting an activity to an application. It’s legal because they did it “with an app”: Uber (an unregulated taxi company), Airbnb (unregulated holiday accommodation), and anything calling itself fintech (unregulated banking). Unregulated health is and will be as lucrative as it is harmful. Will Europe function as a fortress for its citizens, or a vassal to the new extractive practices of global corporations? While protections are designed into projects, deregulation and other forms of degradation could make initiatives like the European Health Data Space Regulation (EHDS) a pre-collection of sensitive information for legitimate or illegitimate actions. Information provided for a service years ago could be used in manners practically unimaginable at the time consent was given for the data to be stored, or sold on at the end of a business (Church and Smith 2025). In the new realities brought about by these technologies, regulations will have to answer more scenarios and, hopefully, protect ever more marginalised users/citizens.
While it is possible for technologies to simplify healthcare delivery, reducing the power of related laws should be approached with caution. The deregulation agenda of technology companies, which can treat many of the harms caused by their actions as externalities and justifiable costs, should be viewed with suspicion, especially in healthcare. The incursion of companies like the surveillance data broker Palantir offers them unprecedented access to healthcare information (Osborne 2024), highlighting the value of such assets and the protean nature of business interests. The EU’s ability and appetite to create and enforce digital policy and data protection rules is currently singular; adapting to more aggressive regulatory regimes, and the related race to the bottom, should be part of the discourse, especially around healthcare.
References
Burgard, B. (2025) ‘ChatGPT Advice Triggers Bromide Poisoning, Psychosis’, Medscape, 10 Jan, available: https://www.medscape.com/viewarticle/chatgpt-salt-advice-triggers-psychosis-bromide-poisoning-60-2025a1000qab [accessed 1 Nov 2025].
Church, S. and Smith, G. (2025) ‘23andMe sells gene-testing business to DNA drug maker Regeneron’, Los Angeles Times, 19 May, available: https://www.latimes.com/business/story/2025-05-19/23andme-sells-gene-testing-business-to-dna-drug-maker-regeneron [accessed 27 Aug 2025].
Doctorow, C. (2025) Enshittification: Why Everything Suddenly Got Worse and What to Do About It, London: Verso.
European Commission (2025) Digital Omnibus Regulation Proposal | Shaping Europe’s Digital Future [online], available: https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal [accessed 12 Nov 2025].
Gilbert, S. (2024) ‘The EU passes the AI Act and its implications for digital medicine are unclear’, npj Digital Medicine, 7(1), 135, available: https://doi.org/10.1038/s41746-024-01116-6.
Gross, N., Van Kolfschooten, H., and Beck, A. (2025) ‘Why the EU AI Act falls short on preserving what matters in health’, BMJ, available: https://doi.org/10.1136/bmj.r1332.
Lupton, D. and Jutel, A. (2015) ‘“It’s like having a physician in your pocket!” A critical analysis of self-diagnosis smartphone apps’, Social Science & Medicine (1982), 133, 128–135, available: https://doi.org/10.1016/j.socscimed.2015.04.004.
Osborne, R.M. (2024) ‘NHS England must cancel its contract with Palantir’, BMJ, 386, q1712, available: https://doi.org/10.1136/bmj.q1712.
Tao, Y. and Gao, P. (2025) ‘Global data center expansion and human health: A call for empirical research’, Eco-Environment & Health, 4(3), 100157, available: https://doi.org/10.1016/j.eehl.2025.100157.
Van Kolfschooten, H. and Gross, N. (2025) ‘Invisible prescribers: the risks of Google’s AI summaries’, Journal of Medical Ethics blog, available: https://blogs.bmj.com/medical-ethics/2025/11/12/invisible-prescribers-the-risks-of-googles-ai-summaries/ [accessed 12 Nov 2025].
Yousif, N. (2025) Parents of Teenager Who Took His Own Life Sue OpenAI [online], BBC, available: https://www.bbc.com/news/articles/cgerwp7rdlvo [accessed 1 Nov 2025].
The post Treating it with an App: AI Techno-optimism Against Regulations appeared first on Ideas on Europe.
Written by Marcin Szczepański.
A series of recent economic and geopolitical shocks have led to rising fragmentation of global trade, whereby countries tend to boost economic ties with those sharing similar political values, economic policies and security interests. While a broad retreat from globalisation is not taking place, there are some signs of reconfiguration of supply chains along geopolitical lines.
This is likely to have pronounced effects on the EU economy due to its openness and high level of integration into global value chains. The full consequences are unclear at this point and firms’ responses vary, but adapting to the changing trade environment leads to heightened costs, stronger regional flows of goods, and a priority for measures that could reduce uncertainty.
The EU’s policy focus is on de-risking supply chains, boosting their resilience and creating opportunities through access to global markets. Increasing domestic production and access to inputs as well as diversifying supplies is coupled with supporting multilateralism and targeted partnerships. Many experts, as well as the European Parliament, see the unrealised potential of the single market, easier access to finance, stimulating innovation and digitalisation, as ways forward.
Managing global trade fragmentation is a complex process full of risks and opportunities, which requires crosscutting policy action and a strategic approach. The EU is striving to find a balance between trade openness and the necessary economic security measures. Furthermore, while proposed and launched solutions require a medium to long-term time horizon to deliver, geopolitical developments often happen swiftly, further complicating matters.
Read the complete briefing on ‘EU supply chains in the era of trade fragmentation: Impacts, policies and current debate’ in the Think Tank pages of the European Parliament.