AI healthcare regulation across borders
AI is steadily reshaping healthcare globally, assisting in diagnostics, personalising treatments, and expanding access to medical services. While regulatory developments in the US, UK, and EU, such as the FDA’s January 2025 draft guidance on AI and the EU’s AI Act, have set prominent benchmarks, significant innovation and diverse regulatory strategies are also evolving rapidly across Asia, Africa, and South America.
Asia: Policy foundations to meet rapid development
Asia is a hotbed of AI-driven health innovation, from AI-assisted radiology to robotic surgery. China’s healthcare AI market reached nearly $3 billion in 2018, making it a regional leader, and AI tools like Beijing-based InferVision’s medical imaging system are deployed across Chinese hospitals for faster CT scan analysis. Other Asia-Pacific countries are also pioneering novel medtech solutions; for instance, Singapore startups have built AI wound-scanning devices that improve diagnostic accuracy and treatment precision.
With this in mind, several Asian governments are actively updating regulations to keep pace with health AI.
South Korea has introduced clear classification guidelines for AI-powered medical devices, now mandating rigorous clinical trials for AI diagnostic tools to ensure robust validation and patient safety. China's National Medical Products Administration (NMPA), among the earliest adopters, initially provided specific guidelines on deep learning applications for healthcare. It has since evolved from single-instance approvals towards continuous oversight of AI medical software throughout its lifecycle, recognising the iterative nature of AI technologies.
In tandem, Singapore’s Health Sciences Authority has instituted life-cycle frameworks (including post-market monitoring) to track AI performance in real-world clinical settings. Singapore’s Ministry of Health and Health Sciences Authority also jointly issued the Draft Guidelines for Safe Development and Implementation of AI in Healthcare (2021), covering best practices for healthcare AI. The guidelines emphasise ensuring AI systems are explainable, trained on high-quality data (even using synthetic data for development), and subject to risk controls throughout the software life cycle.
India launched its National AI Strategy, termed #AIForAll, in 2018, emphasising responsible and ethical AI deployment. By 2021, detailed principles for ethical AI and sector-specific guidelines, including for healthcare, were in place to support safe AI implementation. India's framework aims to align domestic innovation with global ethical standards, ensuring patient safety and accountability.
Across Asia, these policy measures aim to balance innovation with patient safety, often aligning with international standards (such as the IMDRF’s guidance on AI as medical software). The balance between encouraging innovation and safeguarding patient welfare remains a central regulatory theme across the region.
Africa: Preparing for future growth
Africa's approach to AI regulation in healthcare balances enthusiasm with pragmatism, focusing on laying essential foundations before rushing into advanced implementation. At a continental level, the African Union endorsed the Continental Artificial Intelligence Strategy in 2024, which sets a cohesive vision for ethical, responsible, and inclusive AI development across member states.
At least seven African countries have already formulated national AI strategies or policies, clearly identifying healthcare as a priority area. Mauritius launched its AI strategy in 2018, with particular emphasis on accountability, ethics, and leveraging AI in sectors such as healthcare. Rwanda, in its 2023 AI policy, ambitiously seeks to become Africa’s AI hub through substantial investments in skill-building, data infrastructure enhancement, and ethical AI leadership.
Egypt and Algeria have similarly articulated national AI plans, specifically highlighting healthcare alongside education and agriculture as critical sectors. South Africa's comprehensive National Artificial Intelligence Policy Framework explicitly identifies healthcare as a primary area for AI integration, laying out a detailed roadmap for implementation. This enabling environment has already shown results: Envisionit Deep AI, founded by Dr. Jaishree Naidoo, became the first African-founded company to receive FDA clearance for an AI medical device with its RADIFY® AI platform, which triages critical conditions like pneumothorax and pleural effusion. In wider regulatory news, in April this year the South African Health Products Regulatory Authority (SAHPRA) joined the Medical Device Single Audit Programme as an affiliate member, meaning it can use MDSAP certificates to evaluate a manufacturer's quality management system. Alongside this, SAHPRA is phasing in ISO 13485 certification as a prerequisite for approval of a Medical Device Establishment Licence, intending to ensure all medical device licence holders hold an ISO 13485:2016 certificate issued by a SAHPRA-recognised assessment body.
The Continental Artificial Intelligence Strategy itself takes an Africa-centric, development-focused approach to AI. It promotes ethical, responsible, and inclusive AI practices across member states and calls for unified national strategies – including in health – to strengthen cooperation and position Africa as a leader in responsible AI deployment.
These preparations underline Africa's clear intent to build robust regulatory and infrastructural foundations today to ensure the safe and effective scaling of AI solutions in healthcare tomorrow. However, these ambitions hinge on sustained investments in infrastructure, human resource development, and regional collaboration to ensure effective policy implementation.
South America: Prioritising equity in AI regulation
In South America, the regulatory discourse around AI governance prominently features social equity, democratic accountability, and human rights. Governments across the region are actively debating and developing AI governance frameworks, often informed by EU and other international standards yet distinctly tailored to their local social contexts.
Brazil, representing the region's largest market, leads these regulatory efforts with the comprehensive Bill No. 2,338/2023. This proposed legislation aims for extensive AI governance across all sectors, specifically targeting high-risk AI applications such as medical AI tools. The bill categorically prohibits AI applications perceived to pose "excessive or unacceptable risk" and mandates stringent controls for those classified as "high-risk". Furthermore, it proposes the establishment of a dedicated AI regulatory authority under Brazil’s Data Protection Agency, responsible for coordinating oversight and enforcing penalties for non-compliance.
Chile's legislative approach prohibits the use of AI systems that pose an unacceptable risk, and establishes specific rules on risk management, data governance, technical documentation, record-keeping systems, transparency mechanisms and human oversight for the use of high-risk AI systems. This approach aims to foster innovation while safeguarding democratic values, human rights, and transparency. Building on this, in 2024 Chile updated the policy and introduced a draft AI Law aimed at promoting human-centric AI innovation, following UNESCO recommendations.
Argentina, meanwhile, has been taking measured steps toward AI governance, releasing non-binding “Recommendations for Reliable AI” in 2023, with continued government support for introducing a regulatory framework with a risk-based approach similar to the EU's.
Across Latin America, ethical considerations consistently revolve around ensuring AI healthcare solutions are deployed equitably and transparently, preventing the amplification of existing inequalities or creation of new vulnerabilities within communities.
Prioritising safety, transparency and accountability
Despite international differences in emphasis and methods, several universal themes are consistently shared across regulatory frameworks. Ethical considerations, namely fairness, transparency, and accountability, are quite rightly prioritised everywhere. Patient safety is naturally a central concern, driving necessary adaptations to existing medical device regulations to handle AI's substantial risks and unique characteristics. Data privacy and protection are also recognised globally as critical pieces of robust AI governance, though implementation often lags behind rhetoric.
But here's the thing – there are unique challenges. Asia's tech scene is moving at lightning speed, often outpacing regulatory capacity, creating considerable governance gaps. Africa's main priority is building up infrastructure and human capacity to make sure policies can actually be implemented effectively. Latin America is dealing with significant healthcare inequalities, so they're focused on regulations that promote fair and socially beneficial AI integration, whilst preventing technological exploitation.
Measured optimism
Regulation remains decidedly a work in progress – an evolving journey with significant hurdles. Regulators worldwide are constantly adapting to technological advancements while working together internationally on common challenges like algorithmic bias, accountability deficits and making sure AI works effectively across diverse populations without exacerbating existing disparities.
At Hardian, we help companies navigate the shifting global regulatory AI landscape with clinical insight, scientific rigour and strategic clarity. If you are building healthcare AI that needs to reach real-world patients safely, effectively and compliantly, we are here to guide the way. Get in touch to find out how.
Hardian Health is a clinical digital consultancy focused on leveraging technology into healthcare markets through clinical strategy, scientific validation, regulation, health economics and intellectual property.