South Africa’s Draft National AI Policy was published for public comment on 10 April, marking a new phase of artificial intelligence deployment, risk and accountability in the country.
Much of the legal discussion around AI has historically assumed a supportive model, where AI is treated as a tool for informing human judgment, and accountability remains with the final decision-maker. Legal risk has focused on bias, accuracy, explainability and misuse of outputs.
Increasingly, systems are no longer confined to generating outputs for human consideration and are now making decisions and acting independently within operational environments. This development, described as agentic AI, represents a material shift in how legal risk and accountability are assessed.
With agentic AI, systems are designed to pursue objectives and determine how and when to act, with risk arising from the actions taken by the system itself. While incorrect AI output can be reviewed and corrected before harm occurs, an executed autonomous action can result in immediate legal and commercial consequences that cannot easily be reversed or that may have already caused harm.
Although it operates at a scale and speed beyond human norms, agentic AI can be considered a delegated decision-making authority. Organisations routinely delegate authority to employees and automated processes, subject to defined limitations, approvals and oversight.
Agentic AI fits within this legal framework, but autonomous systems act continuously, in high volumes, and without human judgment at the point of action. This affects how risk manifests in practice, including whether the delegation of authority to an autonomous system was reasonable, took account of the relevant risks, and was accompanied by appropriate constraints, monitoring and escalation mechanisms.
No relief in delegation of authority
The Companies Act 71 of 2008 introduces a further limitation on the concept of delegating authority to AI systems.
Corporate decisions to deploy a system, define its mandate and determine the scope of its autonomous operation remain board-level responsibilities. Directors retain non-delegable fiduciary duties in respect of those decisions throughout the life of the deployment, and these duties may be breached if AI programmes they oversee render genuine supervision impossible.
The burden of proof
The Electronic Communications and Transactions Act 25 of 2002 (Ecta) is the statutory framework that governs contracts concluded through automated means. However, it is confined to the contractual domain and must be considered alongside other legal frameworks.
Section 25(c) of Ecta applies to the communication of data messages generally as a default regime. For agentic AI, this means messages generated by a system programmed or configured by an organisation are attributed to that organisation, and the burden of proving a system failure rests on the organisation.
The misalignment of risk allocation in contracts
Where agentic systems cause a business to breach contractual obligations, or where those systems conclude contracts on the organisation's behalf, liability arises under contract law. However, many technology agreements were drafted on the assumption that systems operate deterministically and under close human supervision.
This can result in a misalignment between the autonomy granted to a system and the allocation of risk in warranties, indemnities, audit rights and limitations of liability. In third-party deployments, this misalignment is compounded by standard-form vendor terms that exclude liability for autonomous behaviour or limit recourse for downstream consequences.
The common law of agency reinforces the contractual exposure when considered together with Ecta. Organisations that deploy agentic AI systems act in the position of a principal, and the conduct of the system within its authorised scope is attributed to the organisation where this culminates in a transaction or a data message that appears authorised.
Estoppel (or apparent authority) is particularly significant: courts may treat weak governance or the tacit acceptance of AI-generated outputs as precluding an entity from denying that an action was authorised, even where no formal decision was ever made to authorise it.
Pierre Burger, 25 Feb 2026

Delictual liability for harm caused
Where third parties suffer harm through the autonomous conduct of an agentic AI system, delictual principles apply. Under the common law of delict, courts will consider foreseeability, causation, the reasonableness of the precautions taken and legal policy considerations.
The fact that an autonomous system was supplied or enabled by a third party is unlikely to interrupt the causal chain, where the deployer exercised control over configuration, permissions and use.
The common law doctrine of vicarious liability, when considered together with section 25 of Ecta, provides the most directly applicable framework for attributing delictual liability to a deploying organisation for harm caused by its agentic AI system. The relationship between principal and agent is one of the analogous categories that South African courts have recognised as capable of founding vicarious liability.
An organisation that deploys an agentic system has conferred upon it a defined mandate to pursue specified objectives, within configured parameters, on behalf of the organisation. Where the system causes harm while executing that mandate, the deploying organisation is very likely to be regarded as the appropriate bearer of liability.
The deployment of autonomous systems does not lower the applicable standard of care; rather, it may expand the range of harms considered foreseeable and the safeguards regarded as reasonable.
Data and consumer protection requirements
When agentic systems process personal information or make decisions affecting individuals, additional exposure may arise under the Protection of Personal Information Act 4 of 2013.
Section 71 prohibits subjecting a data subject to a decision that has legal consequences for them, where that decision is based solely on the automated processing of personal information intended to provide a profile of the individual. Exceptions apply where appropriate measures are in place, including affording the data subject an opportunity to make representations and providing information on the underlying logic of the automated processing.
In consumer-facing situations, section 61 of the Consumer Protection Act 68 of 2008 imposes strict liability for harm caused by unsafe goods, defects, hazards or inadequate warnings. The data messages of agentic systems, including instructions and system-executed actions, will be attributed to the deploying entity if the requirements of section 25 of Ecta are met.
Is risk mitigation possible?
AI contracting, AI-caused harm and directors' supervision of AI deployments are each governed by distinct legal frameworks, each demanding its own analysis, risk assessment and mitigation strategy.
Risk mitigation strategies include defining the scope of AI authority, implementing error-notification mechanisms and updating technology agreements to clearly define the ‘action space’ of any AI agent. Risk assessments, human review for consequential decisions and ongoing, comprehensive monitoring are also required.
Ultimately, businesses should approach agentic AI governance not only as a compliance hurdle but also as the strategic management of a series of risks that the law already attributes to them.