Major global banks are beginning to experiment with a new generation of artificial intelligence designed not just to flag rule violations, but to reason through market behaviour in real time.
According to reporting from Bloomberg, both Goldman Sachs and Deutsche Bank are exploring or testing “agentic” AI systems to enhance trade surveillance. The objective is straightforward: strengthen oversight of trading activity by deploying AI agents capable of analysing patterns dynamically rather than relying solely on static alerts.
Moving beyond rule-based monitoring
Traditional surveillance systems at large financial institutions operate on predefined logic. If a trade exceeds a certain threshold, deviates from a benchmark, or matches a known risk pattern, it triggers an alert. Compliance teams then investigate manually.
This model works—but it struggles with scale. Modern financial markets generate enormous volumes of data across asset classes, trading venues, and time zones. Static rules can create excessive false positives, while more subtle forms of misconduct may evade detection because they don’t match known templates.
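The threshold-style logic described above can be sketched in a few lines. This is a minimal illustration, not any bank's actual system: the `Trade` fields, rule names, and thresholds are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    trader_id: str
    notional: float          # trade size in dollars
    price: float
    benchmark_price: float

# Hypothetical static rules; real systems maintain hundreds of these.
NOTIONAL_LIMIT = 10_000_000        # flag trades above $10M notional
MAX_BENCHMARK_DEVIATION = 0.02     # flag prices more than 2% off benchmark

def rule_based_alerts(trade: Trade) -> list[str]:
    """Return the static rules this trade violates, one alert per rule."""
    alerts = []
    if trade.notional > NOTIONAL_LIMIT:
        alerts.append("notional_threshold")
    deviation = abs(trade.price - trade.benchmark_price) / trade.benchmark_price
    if deviation > MAX_BENCHMARK_DEVIATION:
        alerts.append("benchmark_deviation")
    return alerts
```

Each rule fires independently, which is exactly why alert volumes balloon: a single busy trading day can trip thousands of such checks, and anything that matches no rule passes silently.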
Agentic AI attempts to close that gap. Instead of checking transactions against a checklist, these systems evaluate multiple signals simultaneously—historical activity, timing patterns, behavioural anomalies, and contextual market data—to identify unusual combinations of events.
Importantly, they are not designed to replace compliance officers. Rather, they function as a more intelligent filter, elevating complex cases for human review.
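One way to picture the difference from single-rule alerting is a case score that aggregates several weak signals and escalates only unusual combinations for human review. The signal names, weights, and threshold below are invented for illustration; production models would learn these from data rather than hard-code them.

```python
# Hypothetical signals a surveillance model might weigh together.
SIGNAL_WEIGHTS = {
    "off_hours_trading": 0.2,         # timing anomaly
    "volume_spike": 0.3,              # deviation from the trader's own history
    "price_move_alignment": 0.4,      # trades consistently precede favourable moves
    "counterparty_concentration": 0.1,
}
ESCALATION_THRESHOLD = 0.6

def case_score(signals: dict[str, bool]) -> float:
    """Weighted sum of the signals that fired for this case."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def should_escalate(signals: dict[str, bool]) -> bool:
    """Send to a human reviewer only when the combination is unusual enough."""
    return case_score(signals) >= ESCALATION_THRESHOLD
```

A volume spike alone (0.3) stays below the bar, but a volume spike that also aligns with favourable price moves (0.7) crosses it: no single signal decides, the combination does, and a compliance officer still makes the final call.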
Deutsche Bank’s collaboration with Google Cloud
Deutsche Bank is reportedly working with Google Cloud to develop AI agents capable of monitoring trading activity at scale. The system analyses order and execution data streams and flags anomalies in near real time.
This initiative reflects a broader shift in enterprise AI strategy. Instead of limiting generative AI to customer-facing chat interfaces, banks are embedding advanced models into internal control functions. In the surveillance context, AI agents may evaluate relationships between trades, trader history, communications metadata, and prevailing market conditions—rather than examining individual transactions in isolation.
Human compliance professionals remain responsible for reviewing flagged activity and determining next steps. The AI’s role is to surface patterns that might otherwise go unnoticed.
Goldman Sachs’ expanding AI footprint
Goldman Sachs has also been investing heavily in AI across trading, risk management, and operations. Extending that strategy into compliance and surveillance appears to be a natural progression.
Agentic systems deployed in this context may operate with a degree of autonomy—deciding what data to examine next, identifying non-obvious behavioural signals, and escalating findings that do not neatly fit predefined rules.
For banks, the incentive is twofold. First, earlier detection of potential misconduct reduces regulatory exposure and reputational risk. Second, improving signal quality helps compliance teams manage overwhelming volumes of alerts without compromising oversight standards.
What “agentic AI” really means
The term “agentic AI” refers to systems capable of goal-directed action. Unlike prompt-based models that simply respond to queries, agentic systems can determine which datasets to analyse, compare signals across multiple domains, and take intermediate steps toward a defined objective.
In a trading environment, that could involve continuously monitoring order flows, price movements, and historical trader behaviour to assess whether current activity aligns with normal patterns.
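The goal-directed loop described above can be sketched as an agent that works through a prioritised list of checks, accumulating evidence and stopping as soon as it can clear or escalate a case. Everything here is an assumption made for the example: the tool names, the evidence scores, and the stopping thresholds stand in for whatever data sources and models a real deployment would use.

```python
from typing import Callable

def review_case(case: dict,
                tools: dict[str, Callable[[dict], float]],
                max_steps: int = 5,
                escalate_at: float = 0.7) -> str:
    """Run checks in priority order; each tool returns an evidence score.

    The loop stops early once accumulated suspicion crosses the threshold,
    so cheap checks can resolve easy cases without running every analysis.
    """
    suspicion = 0.0
    for step, (name, tool) in enumerate(tools.items()):
        if step >= max_steps:
            break
        suspicion += tool(case)   # evidence contributed by this check
        if suspicion >= escalate_at:
            return f"escalate_to_human (after {name})"
    return "no_action"

# Hypothetical checks: order-flow analysis, trader history, comms metadata.
tools = {
    "order_flow": lambda c: 0.4,
    "trader_history": lambda c: 0.4,
    "comms_metadata": lambda c: 0.1,
}
print(review_case({"trader_id": "T1"}, tools))
```

Note that even in this toy version the output is a recommendation string, not an action: the agent decides what to examine next, but the escalation hands the case to a person.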
Crucially, decision-making authority remains with humans. Financial institutions operate within strict regulatory frameworks, and accountability cannot be outsourced to algorithms. Agentic AI acts as an intelligent assistant—not an autonomous judge.
Compliance in the age of generative AI
Regulators in the US and Europe require firms to maintain robust monitoring systems to prevent market abuse and manipulation. While there is no mandate to adopt agentic AI specifically, institutions must demonstrate that their controls are effective.
Advanced AI architectures may help meet that obligation—provided they are explainable, auditable, and governed appropriately. Model transparency, bias mitigation, and secure data handling remain critical requirements.
This introduces a paradox: AI can improve oversight, but it also introduces new governance challenges. Banks must ensure that surveillance models themselves can withstand regulatory scrutiny.
A shift in how compliance teams operate
If agentic surveillance proves effective, it could reshape the daily workflow of compliance departments. Rather than sorting through thousands of straightforward alerts, teams may focus on evaluating nuanced, multi-factor cases identified by AI agents.
Human judgement will remain central. However, the distribution of effort may change—away from routine triage and toward higher-level analysis.
As markets grow faster and more complex, rule-based systems alone are becoming insufficient. The move by institutions like Goldman Sachs and Deutsche Bank signals that agentic AI is no longer confined to experimental labs. It is beginning to play a role in one of finance’s most sensitive and heavily regulated functions: market oversight.