Investment Manager’s Report: Summary

An extract from the full Investment Manager’s Report

The financial information set out below does not constitute the company's statutory accounts for the years ended 30 April 2025 or 30 April 2024 but is derived from those accounts. Statutory accounts for 2024 have been delivered to the registrar of companies, and those for 2025 will be delivered in due course. The auditor has reported on those accounts; their reports were (i) unqualified, (ii) did not include a reference to any matters to which the auditor drew attention by way of emphasis without qualifying their report and (iii) did not contain a statement under section 498 (2) or (3) of the Companies Act 2006.

The full Annual Report and Financial Statements for the Year Ending 30 April 2025 can be found here.

Market Review

Equity markets delivered modest gains during the Trust’s fiscal year to the end of April 2025 (FY25) following a strong FY24, although this belied significant geopolitical and market volatility. Global equity markets, as measured by the MSCI All Country World Net Total Return Index, returned +4.8% during the fiscal year, while the US (S&P 500 index) and Europe (DJ Euro Stoxx 600 index) returned +5% and +7.6% respectively. Economic growth remained firm, led by consumer spending, while labour markets showed only mild signs of softening. The inflation picture also continued to improve globally, including in the US, where headline Consumer Price Inflation (CPI) fell from 4.9% in April 2023 to 2.3% by April 2025, nearing the Federal Reserve’s (Fed) 2% goal. Progress on inflation changed the balance of risks for many central banks and shifted policy focus from managing the risk of higher/sticky inflation to supporting economic growth and labour markets. The Fed duly began its interest rate-cutting cycle with a 50 basis point (bps) cut at its September meeting, followed by 25bps cuts at the November and December meetings. The European Central Bank and Bank of England began their own rate cuts in June and August respectively.

Equity markets broadly trended higher through 2024 as economic growth surprised to the upside and major macroeconomic and political risks appeared to dissipate, supporting higher equity valuation multiples. 2024 US GDP (gross domestic product) growth ended the year at +2.9%, up from forecasts of just 1.2% at the start of the year. Performance for the calendar year was again dominated by the largest technology companies, with the ‘Magnificent Seven’ (Mag-7) returning +71% and continuing to benefit from positive earnings revisions and excitement about artificial intelligence (AI), accounting for almost 60% of the S&P 500’s 2024 return.

From the turn of the calendar year (the final third of the Trust’s fiscal year), markets were no longer led by changes in the Fed’s language and CPI components but buffeted by political developments. The election of Donald Trump as US President proved the defining event of the fiscal year as markets were forced to react to sweeping tariff policies, a flurry of Executive Orders and bilateral dealmaking. Equity markets initially took this political upheaval in their stride: Trump’s pro-growth, pro-business, low-tax agenda appeared to have ignited animal spirits, and the equity market upgraded its economic growth expectations. The nomination of Scott Bessent as Treasury Secretary and Elon Musk’s high-profile Department of Government Efficiency (DOGE) role led investors to be more sanguine about inflationary tariffs, expanding deficits and geopolitical instability. The decisive election outcome in the form of a Republican ‘clean sweep’ and stock market ‘Trump bump’ added fuel to the ‘US exceptionalism’ narrative: US equities saw c$141bn worth of inflows during the month following Trump’s election (the largest monthly inflows on record), cyclicals outperformed defensives, and the S&P 500 high beta factor reached the 99th percentile by early December.

Ben Rogoff

Partner, Technology

Alastair Unwin

Partner, Deputy Fund Manager

The reality of the Trump administration’s policy agenda and erratic modus operandi proved more challenging, and the S&P 500 soon gave back all its post-election gains. The market turned more defensive as investors digested trade uncertainties, DOGE disruption and even a potential shift in the geopolitical world order as Trump and Vice-President Vance raised significant questions about the future viability of NATO and the survival of Pax Americana (which succeeded in galvanising Europe – particularly Germany – into increasing defence spending). Growth and inflation concerns emerged as consumer and business confidence collapsed and policy uncertainty spiked to early Covid and global financial crisis (GFC) levels. Against this volatile backdrop, the arrival of DeepSeek’s low-cost AI model shocked the market and prompted a momentum unwind in small/mid-cap, long-duration and AI infrastructure stocks; without mega-cap technology/AI leadership, the market struggled.

Trump’s Liberation Day Executive Order on 2 April unleashed further volatility; indeed, April was the fifth most volatile month in 85 years. A baseline 10% tariff was set on imports from all countries from 5 April, with much higher ‘reciprocal tariffs’ imposed on around 60 ‘worst offenders’ from 9 April. The size and scope of the Liberation Day announcement surprised the market and appeared to confirm the administration’s commitment to reordering global trade policy and geopolitics. Equity markets experienced significant volatility in early April: the VIX (a measure of market volatility) closed above 50, and the S&P 500 registered some of the largest intraday swings in history amid record trading volumes, falling more than 20% from mid-February highs. The trade-weighted dollar weakened significantly, closing down more than 10% from January highs by mid-April.

Fortunately, the sharp correction in the bond and equity markets prompted a softening in trade tariff negotiations, which led to a rebound in the market. On 9 April – the deadline for reciprocal tariffs to go into effect and following unsettling moves in the bond market – Trump paused the higher reciprocal tariff rates for 90 days on all countries excluding China (where the cumulative tariff was increased to 125%) to provide an opportunity for countries to engage in trade talks. In the face of extremely bearish investor sentiment, the S&P 500 recovered more than 15% from its lows to close above its Liberation Day level within a month. The rebound included nine consecutive trading sessions of gains, the first time this had happened since November 2004. While most countries appeared to be negotiating, China announced counter-tariffs on US goods. This started a cycle of retaliation which resulted in a 145% tariff on Chinese imports to the US and Chinese restrictions on rare earth exports, which are critical to various high-tech industries. In early May, however, China and the US reached an agreement to lower tariffs to 10% and 30% respectively for 90 days, leading to a further move higher in markets following a solid Q1 earnings season.

Evolution of 2025 Growth Forecasts (Percent)

Source: IMF staff calculations.
Note: The x-axis shows the months the World Economic Outlook is published.
AEs = advanced economies; EMDEs = emerging market and developing economies.

US policy uncertainty has spiked to GFC/Covid levels

Source: Bloomberg, 11 March 2025


Technology Outlook

Earnings outlook

Increased spending on AI infrastructure meant 2024 proved one of the best years for IT spending since the pandemic, with growth of 7.7% exceeding earlier expectations (+6.8%) and well ahead of the 3.5% recorded in 2023. For 2025, worldwide IT spending is expected to accelerate further to +9.8% y/y. While data centre systems spending is expected to decelerate to +23.2% y/y from 39.4%, this still represents remarkable growth, driven by AI-optimised servers, where spending is forecast to exceed twice that spent on traditional servers next year. In addition, all other spending categories are expected to accelerate in 2025, led by software (+14.2%), devices (+10.4%) and IT services (+9%). While these forecasts may be subject to some tariff-related headwinds, 2025 was until recently expected to be the best year for IT spending since 2021, while 2024-25 may still represent the best back-to-back growth since 1995-96.

For 2025, the technology sector is expected to deliver revenue growth of 11.7%, while earnings are expected to increase by 18%, the highest of any US sector on both metrics. These forecasts are well in excess of anticipated S&P 500 market growth, where revenues and earnings are pegged at 4.9% and 9% respectively. The technology sector’s outperformance is expected to continue in 2026, with early forecasts for 10.6%/16.6% comfortably ahead of market expectations (6.2%/13.4%). While these forecasts may appear at odds with tariff-related developments, corporate earnings have thus far proved more resilient than feared. First-quarter earnings season has been supportive: at the time of writing, 74% of S&P 500 companies have beaten on earnings per share (EPS), with a median earnings surprise of 8.5%, while Q1 earnings growth is tracking at +12% versus the +6% consensus estimate at the start of the year. Tariff concerns have been flagged in virtually every earnings call, but the impacts have been largely contained so far. However, while macroeconomic conditions may create more significant crosscurrents, we believe technology fortunes this year will once again be determined by the path of AI progress.

Table 1. Worldwide IT Spending Forecast (Millions of U.S. Dollars)

Source: Gartner (January 2025)


Valuation

The forward P/E of the technology sector contracted modestly over the past year. Twelve months ago, valuations had rebounded to approximately 26x forward P/E, up from c24x at the end of FY23. This marked a full recovery from the post-pandemic compression, with valuations continuing to expand and reaching a peak of around 31x in the summer, before easing ahead of 2025. However, pronounced market weakness during 1Q25 caused a sharp correction, with valuations falling significantly before rebounding to 26x by fiscal year-end. Continued market strength post-period has driven valuations higher still, with technology stocks now trading at a forward P/E of 27.5x, above both the five-year (25.6x) and 10-year (21.7x) averages. This reflects elevated broader market valuations and the sustained momentum of AI as a central investment theme.

The relative P/E of the technology sector, having recovered to post-bubble highs (1.4x) in 2023, ended 2024 broadly flat. However, this stability was interrupted by the DeepSeek-led market selloff in 1Q25 which saw the sector’s premium compress to just 1.1x, its lowest relative level since the pandemic. The recent market recovery has helped lift this back to 1.35x. While this may suggest more limited near-term valuation upside, we believe that continued AI progress could support a structural re-rating of the sector, mirroring the upward valuation drift seen during the internet cycle of the mid-1990s.

S&P 500 Information Technology Sector Forward P/E Ratio

Source: Ned Davis, July 2025

S&P 500 Information Technology Sector Relative Forward P/E Ratio

Source: Ned Davis, July 2025


Mag-7 update

Of course, the valuation question remains significantly influenced by a select group of mega-cap technology stocks that, as well as substantially driving returns last year, also dominate indices. Despite this, many of our earlier Mag-7 observations remain unchanged – they are unique, non-fungible assets trading at extended, but not excessive, valuations. This reflects the fact that Mag-7 outperformance has largely tracked the group’s relative earnings progress, with valuation expansion playing a secondary role – recently, the Mag-7 accounted for 33.4% of the S&P 500’s market cap and 25.3% of its earnings, a similar ratio to this time last year, when these companies comprised about 29% and 22% of market cap and earnings respectively. Following recent weakness in several of the Mag-7, the group is, at the time of writing, trading at the lowest valuation premium to the S&P 493 since 2019.

However, this year we are more focused on the sustainability of the mega-cap group’s growth profile, rather than valuation. Earlier gains together with aggressive AI investment suggest future margin gains may become more difficult to deliver. At the same time, rising capital intensity has impacted free cashflow with estimates for this year at Alphabet, Amazon and Meta having fallen 20-25% year-to-date, according to Morgan Stanley.

Given the strong correlation between earnings revisions and recent Mag-7 performance, negative revisions are unlikely to be well received by the market, nor is the evolution to more capital-intensive business models likely to be straightforward. Investors may also interpret the direction of earnings revisions as indicative of whether AI-related spending is offensive or defensive, driven by the pursuit of new opportunities or aimed at protecting existing markets. As investors we cannot know the answer to this critical question (until it is too late) because companies never admit to being on the wrong side of technology change. However, new technologies often begin as complements and end as substitutes, which explains why previous technology cycles have rarely been kind to incumbents, with nearly 50% dropping out of the top ranks every decade.

The good news is that today’s market leaders are hyperaware of obsolescence risk, as reflected in their massive R&D investments. In 2023 alone, the top five tech companies spent $223bn on R&D, an amount 1.6x greater than total US venture capital (VC) spending. As such, we are not yet concerned about the near-term risk posed to Mag-7 by AI. Rather, we wonder if the negative reception to sharply higher hyperscale capex (from Alphabet, Amazon and Microsoft) signals the beginning of a new phase where these companies become less effective AI conduits. Of course, we will continue to evaluate each company on its individual merits and are willing to maintain large absolute weightings in these unique, category-defining assets. However, our null hypothesis has shifted from ‘half-full’ to ‘half-empty,’ as AI-driven risks to existing profit pools and the diminishing value of incumbency become more apparent. As a result, we have increased our relative underweight positioning in long-term holdings we find less compelling at current levels such as Alphabet, Apple and Microsoft.

Magnificent 7 Relative EPS vs S&P 500

Disruption ahead

The idea of previous winners becoming less effective conduits for AI appears to be already playing out within the software sector, evidenced by slowing industry growth, widening disparities in company performance and an increasingly uphill AI narrative battle. Earlier hopes that leading SaaS companies could monetise AI through premium-priced products have largely gone unrealised. Adobe struggled to drive the adoption of Firefly, a task complicated by rapid AI advancements elsewhere, such as Google’s remarkable video-generation model Veo2 as well as OpenAI’s Sora. Microsoft, despite its deep AI investments, has failed to show meaningful revenue acceleration from Copilot, even as Azure benefited from AI-driven workloads. Meanwhile, Workday recently lowered its medium-term revenue growth expectations, reinforcing broader concerns about industry deceleration.

Consumption-based software alternatives have fared little better – despite easing headwinds from cloud optimisation, growth has failed to reaccelerate. Weak execution, often symptomatic of a slowing growth environment, has further weighed on infrastructure stocks that were initially seen as better positioned to capture AI-driven workload growth. Additional negative developments include elevated executive turnover, further headcount reductions and limited strategic M&A beyond the industrial software subsector. Against this backdrop, the latest phase of post-pandemic pivot from growth to profitability (the private equity playbook) has gone unrewarded by a market increasingly concerned about terminal growth rates and obsolescence risk.

This concern appears well placed, as we believe AI represents a greater existential threat than an opportunity for many incumbent software providers – a view we outlined last year. Today, AI-assisted code generation is increasingly challenging the notion of ‘code as a barrier’ and every improvement in near zero-cost AI-written code further diminishes the standalone value of existing proprietary platforms. Looking ahead, AI is likely to automate many tasks currently performed by knowledge workers, reducing reliance on the very software tools designed to support them.

Limited strategic M&A

We believe potential disruption to pre-AI-vintage companies has played a large part in the dearth of strategic software M&A in recent years. Last year, deal value increased by 23% y/y (following a dire 2023) helped by private equity activity, which saw Everbridge, Instructure, Smartsheet and Zuora put out of their public market misery. There were also several strategic acquisitions, including IBM’s acquisition of HashiCorp, alongside a notable wave of consolidation in design and industrial software. Synopsys’ $35bn acquisition of Ansys was the largest deal of the year, while Emerson acquired AspenTech for $15bn and Siemens snapped up Altair for $10.3bn. Given NVIDIA’s aspirations in this domain including Omniverse – a 3D collaboration platform – and its newly introduced Cosmos for accelerating physical AI systems, these high-multiple exits in simulation software may soon look inspired.

Looking ahead, expectations are for a further recovery in M&A activity this year, bolstered by a more accommodative regulatory environment under the new administration and over $2trn in private equity and venture capital dry powder. AI could serve as an additional catalyst, with subscale public and private companies likely seeking stronger partners just as some well-capitalised large-cap companies look for acquisitions to offset slowing organic growth.


Source: FactSet, Jefferies. Note: Highlighted rows denote PE M&A.


Cloud update: darker clouds ahead?

As expected, the easing of cloud optimisation headwinds and a surge in AI-driven demand propelled revenue growth of over 20% among the three leading public cloud providers in 2024. AWS ended the year with an estimated 52% market share, down from 55% in 2023, as Microsoft Azure (now at 31%) captured most of these share gains, helped by its strategic relationship with OpenAI. Google Cloud maintained strong double-digit growth, holding a 13% share, though it remains a distant third. Meanwhile, Oracle Cloud Infrastructure (4%) has emerged as a fast-growing challenger, driven by competitively priced GPU offerings and its role in powering OpenAI’s model training.

All cloud platforms continue to benefit from AI-related demand. In Q4, Microsoft attributed 1,300bps of Azure’s +31% revenue growth to AI, up from 600bps of +28% Azure growth this time last year. In addition, Microsoft’s overall AI revenues exceeded a $13bn run rate in 4Q24. While Amazon does not quantify AWS’s AI-specific revenue, it called it “a multi-billion-dollar annualised revenue run-rate business”. Likewise, Google Cloud Platform (GCP) reported “very strong” AI demand. We continue to believe that public cloud will remain the default choice for compute and storage – Gartner estimates that 70-75% of new enterprise AI applications will be built and/or deployed primarily in cloud environments.

However, the primary challenge for the cloud incumbents is how to reaccelerate growth in a market already worth more than $320bn and where penetration has risen sharply. A recent Morgan Stanley CIO survey suggests that 42% of workloads were already in the public cloud in 4Q24, which is set to increase to 58% within three years. All things being equal, higher cloud penetration rates should equate to lower future growth and greater economic sensitivity. This may have been apparent in 4Q24 with all three public cloud vendors experiencing sequential deceleration and aggregate year-on-year growth falling to 20.7%, down from 22.2% in the previous quarter.
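To illustrate the penetration argument, the sketch below works through the implied arithmetic under a hypothetical assumption: total enterprise workloads are assumed to grow around 10% a year (a figure of our own choosing, not one from the survey). As penetration climbs towards saturation, each successive penetration gain contributes less to cloud growth.

```python
# A minimal sketch of why rising cloud penetration implies slower future growth.
# The 10% total-workload growth rate is a hypothetical assumption for illustration.

def implied_cloud_cagr(pen_start: float, pen_end: float, years: int,
                       workload_growth: float) -> float:
    """Cloud workload CAGR implied by a penetration shift plus underlying workload growth."""
    penetration_lift = (pen_end / pen_start) ** (1 / years)
    return penetration_lift * (1 + workload_growth) - 1

# Survey figures: 42% of workloads in the public cloud in 4Q24, 58% within three years.
print(f"{implied_cloud_cagr(0.42, 0.58, 3, 0.10):.1%}")  # ~22% implied cloud CAGR

# Repeating the exercise from a 58% starting point (towards, say, 74%) yields less,
# because the same absolute penetration gain is a smaller relative uplift.
print(f"{implied_cloud_cagr(0.58, 0.74, 3, 0.10):.1%}")  # ~19%
```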

AI to the rescue? Maybe

The hope is that cloud infrastructure and SaaS growth reaccelerate as enterprise AI adoption increases from just 3% of workloads today to an estimated 10% by 2027. This is one of the key debates for 2025 and beyond. However, history suggests that AI monetisation may prove less straightforward than many incumbents expect as others take the opportunity to challenge in adjacent markets, competing away the upside and potentially more. Early signs of substitution risk are already visible, with IT budgets increasingly favouring AI-related initiatives at the expense of traditional compute and storage. Likewise, cloud optimisation could prove a permanent feature, rather than a limited post-pandemic adjustment as AI excels at uncovering inefficiencies.


The shift to accelerated compute – the foundational architecture of AI – may also be ushering in a new era of competition for the public cloud giants. This could come in the form of hybrid compute, which may be better positioned than it was pre-AI, able to optimise data pipelines by running different workloads in the most suitable locations. Gartner predicts that 90% of organisations will adopt a hybrid cloud approach by 2027. At the same time, established hyperscalers will also have to contend with so-called neo-clouds – new industry entrants (often former crypto miners) offering low-cost GPU rentals. Their advantage lies in readily available power and preferential access to NVIDIA GPUs. Over the past year, $20bn has been invested across 25 neo-cloud providers, with CoreWeave leading the pack and doubling its data centre footprint. While the long-term viability of these neo-clouds remains uncertain, they are currently gaining share, pressuring GPU pricing and challenging industry assumptions, reinforcing the idea that Amazon is not the Walmart of cloud computing, but rather its Neiman Marcus.

In addition, there are other vast AI clusters being built outside traditional public cloud platforms. In October 2024, Meta CEO Mark Zuckerberg revealed that Llama 4 models were being trained on 100,000+ NVIDIA H100 GPUs, while xAI’s Colossus (used to train Grok 3) has 200,000 GPUs, making it the largest known AI compute cluster. Others have been built by TikTok owner ByteDance, while Tesla runs 35,000 H100 GPUs alongside its in-house Dojo supercomputer. While these clusters are for internal use (to train models) today, history says this could change; after all, AWS began as Amazon’s internal compute platform before it launched EC2 and S3 to external customers in 2006. Today, xAI uses Colossus to both train and run inference workloads for Grok. Other AI leaders are also becoming more self-sufficient, with many choosing to design their own silicon to reduce dependence on NVIDIA. At best, this may reduce their overall reliance on cloud providers. At worst, they might become direct competitors, scaling their infrastructure just as AWS did when it redefined the cloud industry.

The hyperscalers (and leading SaaS vendors) may also have to contend with future competition from AI labs such as OpenAI and Anthropic. Historically, OpenAI relied entirely on Microsoft Azure for its infrastructure. However, this relationship is evolving, as evident from the $500bn Stargate announcement in January 2025 that saw Microsoft transition from OpenAI’s exclusive infrastructure provider to a right of first refusal (RoFR) partner. This change likely reflects the differing priorities of a public company accountable to Shareholders and a private company aiming squarely for artificial general intelligence (AGI). OpenAI is also in flux, with CEO Sam Altman attempting to transition the company into a for-profit public benefit corporation (PBC) able to attract necessary investment. For now, Microsoft and OpenAI have reaffirmed their core partnership, which is set to remain in place through 2030. However, OpenAI has launched several applications that compete (or might compete) with Microsoft, including SearchGPT and Operator, an agentic offering. More recently, OpenAI hired the CEO of Instacart as its CEO of Applications, to oversee its efforts to develop and scale customer-facing products.


AI Cycle Update

Rapid adoption

Last year we argued that AI diffusion was likely to proceed rapidly, informed by the presence of essential AI building blocks – six billion smartphones, vast datasets and cloud infrastructure – and by historical adoption trends showing that implementation lags halved with each major general purpose technology (GPT): c80 years for steam, c40 years for electricity, and c20 years for ICT (Information and Communication Technologies). Today, it is clear that AI adoption is significantly outpacing historical trends, with OpenAI recently announcing 500 million weekly active users, up more than 100 million from February, and adding more than a million users in a single hour. Similarly, Meta revealed in January that its AI assistant (Meta AI) had reached 700 million MAU (Monthly Active Users). More recently, Microsoft processed over 100 trillion tokens in its most recent quarter, up 5x y/y, with a record 50 trillion tokens processed in March alone.

Although the pace of enterprise adoption has trailed consumer adoption, AI has become a strategic imperative. A recent McKinsey survey revealed that 72% of companies now actively use AI, up from the 50% observed consistently over the past six years. Echoing this, half the S&P 500 constituents referenced AI on their Q4 2024 earnings calls – marking an all-time high. CIO surveys also consistently reveal that AI is the highest IT spending priority for 2025, followed by cybersecurity and digital transformation, both of which are likely being pulled into the AI conversation. Meta’s open-source Llama model, along with its derivatives, has already been downloaded 650 million times, while corporate use cases continue to extend well beyond software copilots. Walmart recently announced it had used GenAI to create or improve over 850 million pieces of data in its product catalogue, work that would have required “nearly 100 times the current headcount to complete in the same amount of time”. Economist Erik Brynjolfsson (who expects AI to drive “at least 3%” average US productivity growth over the coming decade) believes we are “near the bottom of the productivity J-curve for AI”. If so, corporate AI adoption should accelerate before long, although many companies are likely to remain guarded about disclosing the specifics of their AI “secret sauce.”

Model Progress

AI models made significant gains during a frenetic 2024. Frontier models made continued progress, led by OpenAI’s GPT-4o, Google’s Gemini 2.0 and Meta’s Llama 3. While architectural advances and data curation improvements played a role, most of these gains came from post-training techniques and test-time scaling. Post-training model optimisation helped GPT-4o and Gemini 2.0 easily surpass previous benchmarks set by GPT-4 in code generation and multimodal understanding. GPT-4o also introduced a (remarkable) voice mode, enabling real-time, voice-based conversations, with the model also able to interpret non-verbal cues. Open-source models also continued to make strong progress, particularly in terms of cost efficiency, with Llama 3 said to have achieved performance comparable to GPT-4 at just 1/50th of the cost. While OpenAI’s GPT-5 was delayed, xAI released Grok 3 – the first Gen3 model (between 10²⁶ and 10²⁷ FLOPs of compute), an order of magnitude (OOM) greater than existing Gen2 models. Achieving the highest benchmark scores of any base model to date, Grok 3 suggests that pre-training scaling laws continue to hold for a new generation of AI.

Source: NBER Working Paper Series, February 2025


A new scaling vector: test-time compute

However, the most significant gains last year were generated beyond scaling pretrained models. In September, OpenAI released its o1 models. Unlike most LLMs (large language models), which are zero-shot (processing inputs and generating outputs rapidly, relying only on the knowledge learned during training), o1 introduced the world to reasoning models, which can generate internal chains of thought (CoT) at run-time. This enables the model to perform human-like multi-step reasoning; by breaking down complex tasks into manageable steps (‘thinking’ about the question), o1 significantly outperforms GPT-4o on most reasoning-heavy tasks and exceeds human PhD-level performance on a benchmark of physics, biology and chemistry problems.

Reasoning models perform predictably better the longer they are allowed to ‘think’ at test time (inference). As such, so-called test-time compute represents a powerful new approach for advancing AI capabilities, complementary to traditional ‘brute force’ model scaling. There has already been a rush of new reasoning models (including OpenAI’s o3, Anthropic’s Claude 3.7 and DeepSeek’s R1). In addition, both OpenAI and Google have introduced advanced reasoning capabilities (branded ‘Deep Research’) to their flagship consumer offerings. These give the models even longer to complete tasks, with OpenAI’s Deep Research mode taking between 5 and 30 minutes, depending on the complexity of the query.

Running out of benchmarks

Less than three years after the introduction of ChatGPT, OpenAI’s o3 can solve 25% of problems on the FrontierMath benchmark, where no other model had previously exceeded 2%. Even more remarkably, o3 achieved 76-88% on the ARC-AGI benchmark (built to measure progress toward AGI), compared to 5% for GPT-4o in early 2024. If “GPT-4 offered us a glimpse of the future”, reasoning models are surely early evidence of superhuman AI. They also represent a critical step towards agentic AI while accelerating the timeline towards AGI.

Source: One Useful Thing, 24 February 2025

OpenAI’s Deep Research significantly outperforms earlier models on the new Humanity’s Last Exam (HLE) benchmark

Source: OpenAI 'Introducing deep research', Feb 2, 2025, Deutsche Bank
*Model is not multi-modal, evaluated on text-only subset.
**With browsing and Python tools


Capex strength set to continue

Model progress, intense competition and AGI aspirations resulted in a remarkable year for capex, with the big four hyperscalers spending $226bn (+70% y/y) during 2024. Earlier concerns about a potential slowdown were lost in a blaze of upward capex revisions, with estimates for 2024 and 2025 rising 34% and 48% respectively during the year. This momentum continued into 2025 as each of the hyperscalers raised their expected capex budgets for the year during their Q4 reports.

Strong AI venture funding should also remain supportive for training and inference spending, with $110bn (+62% y/y) raised in 2024. In October, OpenAI’s $6.6bn raise took its valuation beyond that of any VC-backed technology company in history at the time of its IPO (initial public offering), while Anthropic raised an additional $4bn from Amazon last year. AI VC funding has accelerated into 2025, with AI companies raising $67bn in 1Q25 (+246% y/y), even though overall VC spending has only just recovered to 2021 levels.

The pursuit of Gen-4 models (GPT-6 and beyond) is expected to further drive AI capex as they are likely to require more than one million H100 equivalents, costing tens of billions of dollars. However, these mega-clusters are significantly more power hungry as they move from Gen-3 (100MW) to Gen-4 (1GW) scale. For reference, 1GW of power is equivalent to half the estimated output of the Hoover Dam or the amount required annually to supply 3.2 million UK homes. Current estimates suggest that by 2028, data centres could consume up to 12% of projected US electricity use. This power imperative explains why power-related stocks have been ‘pulled in’ to the AI trade as hyperscalers scramble to acquire DC sites with readily available power and sign long-term Power Purchase Agreements (PPAs). The totemic deal between Microsoft and Constellation Energy, signed in September, which will see the infamous Three Mile Island nuclear facility reopened, captured the zeitgeist perfectly.
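As a rough check of the household comparison, the sketch below assumes a typical UK household uses around 2,700 kWh of electricity a year (a common Ofgem-style benchmark, not a figure taken from the report) and converts 1GW of continuous draw into homes supplied.

```python
# A minimal check of the 1GW-to-UK-homes comparison. The ~2,700 kWh annual
# household consumption figure is an assumption for illustration.

hours_per_year = 24 * 365
annual_gwh = 1 * hours_per_year            # ~8,760 GWh from 1GW running all year
annual_kwh = annual_gwh * 1_000_000        # convert GWh to kWh

kwh_per_home = 2_700                       # assumed typical UK household usage
homes_supplied = annual_kwh / kwh_per_home
print(f"{homes_supplied / 1e6:.1f}m homes")  # ~3.2m, in line with the figure cited
```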

Capex trade, interrupted

However, capex-related stocks were severely challenged by the release of DeepSeek’s R1 model in January, as a small Chinese AI lab had seemingly closed the performance gap with US models at a fraction of the cost ($6m versus the $100m spent on GPT-4). This sent shockwaves through the technology market, wiping out $1trn of market capitalisation as investors questioned the sustainability and necessity of current AI capex.

While it may still be too soon to fully assess the implications of DeepSeek’s impressive innovations, the “just $6m” training costs have been widely debunked; reports indicate the company deployed tens of thousands of GPUs costing over $1bn. Likewise, cheap inference pricing is perhaps best viewed as another example of the ongoing, rapid decline in inference costs. As Anthropic CEO Dario Amodei noted, DeepSeek models are “roughly on the expected cost reduction curve that has always been factored into… calculations”. For instance, input token price declines between OpenAI’s o1-mini (September 2024) and o3-mini (January 2025) represent an annualised price reduction of approximately 75%. These price reductions are possible because of 2x cost improvements coming from new hardware, as well as 4-10x improvements from algorithmic progress per year. As such, collapsing inference costs have been described as a “hallmark of AI improvement”.
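To make the annualisation and the compounding of those drivers concrete, the sketch below uses hypothetical inputs (the 60% price ratio and four-month gap are illustrative, not OpenAI's actual list prices) and then combines the hardware and algorithmic factors cited above.

```python
# A minimal sketch, with hypothetical inputs, of how a price cut observed over a
# few months annualises, and how hardware and algorithmic gains compound.

def annualised_reduction(price_ratio: float, months: float) -> float:
    """Annualised price reduction implied by a price_ratio observed over `months`."""
    return 1 - price_ratio ** (12 / months)

# Hypothetical example: a successor model priced at 60% of its predecessor after
# a four-month gap implies roughly a 78% annualised price reduction.
print(f"{annualised_reduction(0.60, 4):.0%}")

# Combining the cited drivers (roughly 2x/year from hardware and 4-10x/year from
# algorithmic progress) implies an 8-20x annual cost decline, consistent with the
# roughly 10x per 12 months fall referenced later in the report.
hardware, algo_low, algo_high = 2, 4, 10
print(hardware * algo_low, hardware * algo_high)  # 8 20
```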

2025 model est. release schedule

Rapidly declining AI Inference costs

Indeed, collapsing inference costs have not prevented Microsoft growing its Azure AI revenues to a $13bn run rate, nor have they derailed OpenAI’s own revenue projections, which are reportedly now forecast at $13bn in 2025 rising to $125bn in 2029, up from expectations of $12bn/$100bn last autumn. This points to a volume explosion in token usage, with lower inference pricing likely driving significantly higher revenues via higher usage (more users, more use cases, more advanced models etc). Reasoning models consume significantly more tokens than traditional frontier models because they can only ‘think’ by generating tokens; deep research-type queries on OpenAI’s o3 are said to require 2,000x more compute than o1-preview.

Reasoning models are also the foundation of agentic AI, enabling multi-step problem-solving and autonomous decision-making without human intervention. Not only do AI agents likely require 50-100x more tokens than single-shot requests, but we also expect agentic AI to act as a force multiplier in the coming years, scaling far beyond human-driven usage and current comprehension. NVIDIA CEO Jensen Huang has suggested that inference demand could increase by a factor of one million, or even one billion.

In time, these projections may even prove conservative should more efficient AI lead to far higher usage. The idea that greater efficiency can paradoxically lead to increased rather than decreased overall consumption of a resource was first articulated by William Jevons in 1865. Jevons observed that improved efficiency in coal usage actually drove up coal demand instead of reducing it, by unlocking new markets that had not existed (were invisible) at previous, higher price points. History is littered with examples of the Jevons paradox, including the steel industry transformed by the Bessemer process, the transition from DC to AC electricity and, of course, Moore’s Law. In the immediate DeepSeek aftermath, Microsoft CEO Satya Nadella exclaimed: “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket”. While this came as little immediate comfort to AI infrastructure-related stocks, we expect “any published DeepSeek improvement (to) be copied by Western labs almost immediately”. As such, all future AI models should enjoy better performance at a lower cost, which is likely to accelerate both AI adoption and model progress.
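A stylised illustration of the Jevons dynamic, using a hypothetical demand response (usage scaling with price to the power of -1.3, a number chosen purely for illustration): when the price per token falls 10x, total inference spend can rise rather than fall.

```python
# A minimal, purely illustrative sketch of the Jevons paradox applied to inference.
# The price elasticity used here is hypothetical, not an estimate from the report.

def total_spend(price_per_token: float, base_usage: float = 1.0,
                elasticity: float = 1.3) -> float:
    usage = base_usage * price_per_token ** (-elasticity)  # assumed demand response
    return price_per_token * usage

before = total_spend(price_per_token=1.0)
after = total_spend(price_per_token=0.1)   # tokens become 10x cheaper
print(f"{after / before:.1f}x")            # ~2.0x higher total spend despite cheaper tokens
```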

Rapidly declining AI Inference costs

Source: Bain & Company


A less straightforward capex story

While most infrastructure-related stocks have rebounded strongly following the DeepSeek-related selloff, we remain bullish on the sustainability of AI capex growth. In part, this reflects the fact that, despite DeepSeek uncertainty, aggregate AI capex at the US hyperscalers accelerated in 1Q25, reaching $81bn (+71% y/y), while FY25 capex growth estimates increased to +44% y/y from +38% earlier.

That said, the advent of new scaling vectors means that the capex story has become more nuanced. Today, reasoning (or test-time compute) is “early on the scaling curve and therefore can make big gains quickly”. However, once it and other optimisations have been more fully exploited, we still expect the path to maximum capability will be to train the largest, most dense model feasible. This assumes scaling laws continue to hold, as they provide a high degree of predictability for the returns on incremental investments in the (costly) pre-training process.

For now, scaling laws appear intact. In November 2024, Jensen Huang said “foundation model pre-training scaling is intact and is continuing” while Sam Altman posted “there is no (scaling) wall”. However, scaling laws are likely to plateau naturally over time as the rate of AI model improvement follows an exponential decay. What this means is that the industry “will have to work harder over time to get further performance improvements”.

In today’s AI race, some of the contenders may decide that the diminishing returns and escalating costs are no longer justifiable, leading them to withdraw. This dynamic could explain the changing nature of the Microsoft/OpenAI relationship. Others may consider the performance of recent ‘fast follower’ models like DeepSeek and conclude the race is, in fact, over. Looking at the number of active models above 10²⁵ FLOPs suggests that the field has already significantly thinned.

However, and continuing with the parallel, it is well understood that marginal improvements in sport yield outsized gains, with fractions of a second separating champions from the rest of the field. In elite sprinting, every 0.01 second improvement is the result of months, if not years of optimisation. Usain Bolt’s world record 9.58 second 100 metre sprint in 2009 was only 1.6% faster than the record set by Asafa Powell in 2007, but that difference cemented his status as the fastest person in history. In endurance sports, the same principle applies; Eliud Kipchoge’s sub-two-hour marathon in 2019 required breakthroughs in shoe technology, drafting strategies and meticulous pacing. At the cutting edge of performance, the compounding effect of marginal gains determines greatness.


The biggest opportunity

While these factors (accuracy; emergent behaviour; multimodality) explain our continued excitement around training-related AI capex, the most significant driver of today’s AI spending remains the size of the prize. According to Bernstein, information workers represent 34% of the global labour force and contribute $20trn to GDP. A 20% productivity uplift could represent a $4trn opportunity and a potential $800bn in annual willingness to spend. If a 20% uplift appears optimistic, consider that McKinsey believes AI could automate 30-50% of tasks in about 60% of occupations by 2030. In the longer term, the opportunity is likely to be significantly greater should AI begin to substitute rather than augment human labour.
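The arithmetic behind the size of the prize can be made explicit. In the sketch below, the $20trn GDP contribution and 20% uplift come from the Bernstein figures above; the 20% value-capture share used to bridge from value created to the $800bn willingness-to-spend figure is our inferred assumption rather than a number stated in the report.

```python
# A minimal sketch of the Bernstein 'size of the prize' arithmetic. The 20%
# value-capture share is an inferred assumption, not a figure from the report.

information_worker_gdp = 20e12            # $20trn contributed to global GDP
productivity_uplift = 0.20                # 20% productivity improvement

value_created = information_worker_gdp * productivity_uplift
print(f"${value_created / 1e12:.0f}trn")  # $4trn opportunity

capture_share = 0.20                      # assumed share of value paid for AI tools
willingness_to_spend = value_created * capture_share
print(f"${willingness_to_spend / 1e9:.0f}bn")  # $800bn annual willingness to spend
```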

Much more than Moore

Unlocking this vast opportunity rests on continued advancement in model capabilities which, as outlined above, are progressing rapidly. As we have previously argued, humans struggle with non-linear change particularly when compounding over many years. The exponential scaling of semiconductors (as predicted by Moore’s Law) was driven by an improvement of 1-1.5 orders of magnitude (OOMs) per decade. In contrast, AI scaling has been progressing at one OOM per year or 5-6x faster than Moore’s Law. As a reminder, one OOM is a 10x difference, whereas 3 OOMs is equivalent to 1,000x. This exponential scaling is evident in the cost of AI, which – for a constant level of intelligence – has been declining by approximately 10× every 12 months, compared to Moore’s Law, where the cost of silicon per square inch historically fell by around 2× every 18 months. This explains why leading models today are said to be “running out of benchmarks” where their predecessors just a decade ago “could barely identify simple images of cats and dogs”. As one AI commentator argues, “we are racing through the OOMs, and it requires no esoteric beliefs, merely trend extrapolation of straight lines, to take the possibility of AGI…by 2027 extremely seriously”.
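The rate comparison in this paragraph can be checked directly. The sketch below converts both cost curves into orders of magnitude per year (the only assumption is reading "2x every 18 months" and "10x every 12 months" as smooth exponential rates) and recovers roughly the 5x gap cited.

```python
import math

# A minimal check of the scaling-rate comparison above: AI cost for a constant
# level of intelligence falling ~10x every 12 months versus silicon cost falling
# ~2x every 18 months under Moore's Law, both treated as smooth exponentials.

ai_ooms_per_year = math.log10(10)                 # 1.0 OOM per year
moore_ooms_per_year = math.log10(2) * (12 / 18)   # ~0.2 OOM per year

print(round(ai_ooms_per_year / moore_ooms_per_year, 1))  # ~5x faster, as cited

# One OOM per year compounds to 10 OOMs over a decade (a 10,000,000,000x change),
# versus the 1-1.5 OOMs (10-30x) per decade delivered by semiconductor scaling.
print(f"{10 ** (ai_ooms_per_year * 10):.0e}")     # 1e+10
```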

Base Scaleup of Effective Compute

AGI coming into view

When we first referenced AGI in last year’s Annual Report, we were careful to downplay the likely timeline of so-called ‘superintelligence’. Today, it feels increasingly possible that within a few years AI might be “able to understand, learn and apply knowledge across a range of cognitive tasks at a human-like level”. Sam Altman has said that “systems that start to point to AGI are coming into view” with superintelligence possible “in a few thousand days”. Elon Musk believes “AI will supersede the intelligence of any single human being by the end of 2025”. Perhaps more importantly, Musk has suggested that the “probability that AI exceeds the intelligence of all humans combined by 2030 is 100%”. Metaculus (a community-driven forecasting platform) anticipates the first general AI system by 2030, a year ahead of its forecast last year.

Agentic AI first

While there are still many dissenting voices around the AGI timeline, most AI commentators believe the next step on that journey is agentic AI with 2025 billed as the “year of agents”. Like AGI, agentic definitions vary, reflecting a spectrum of agentic capabilities not dissimilar to differing levels of autonomy in vehicles.

Agentic AI comprises compound AI systems that chain together multiple task-specific models, where the LLM decides the control flow of an application. The remarkable gains in reasoning models have paved the way for a new wave of AI agents designed to bridge the gap between LLM-based assistants (tools) and human agency. There has already been a flurry of agentic announcements from software companies such as Salesforce and ServiceNow. However, we are more focused on product previews such as OpenAI’s Operator – “an agent that can use its own browser to perform tasks for you” – and Google’s Project Mariner, an experimental AI agent that can “think multiple steps ahead”. Multiple Chinese AI labs have also launched agents, such as UI-TARS from ByteDance and Manus from Chinese startup Monica. Gartner predicts that by 2028, one-third of all GenAI interactions will use agents like these. Over time, these agents are likely to gain increasing autonomy, shifting decision-making authority away from the human in the loop toward the underlying LLM itself. At that point, they might more closely resemble the programs depicted in the movie Tron (1982), which independently operate and compete on behalf of their users, marking a significant evolution from today’s human-guided ‘copilot’ systems.
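As a purely illustrative sketch of the "LLM decides the control flow" pattern described above: the loop below is generic, while a stand-in decision function (mocked, since no real model API is implied by the report) chooses which task-specific tool runs at each step; the tool names are hypothetical.

```python
from typing import Callable, Dict, List, Tuple

# A minimal, purely illustrative sketch of a compound/agentic system in which the
# model, not hard-coded logic, decides the control flow. llm_decide is a mock
# stand-in for a reasoning model; the tools are hypothetical.

def search_web(query: str) -> str:
    return f"results for '{query}'"

def book_travel(details: str) -> str:
    return f"booked: {details}"

TOOLS: Dict[str, Callable[[str], str]] = {"search": search_web, "book": book_travel}

def llm_decide(goal: str, history: List[str]) -> Tuple[str, str]:
    """Mock of the model choosing the next tool and its input from context."""
    return ("search", goal) if not history else ("book", history[-1])

def run_agent(goal: str, max_steps: int = 2) -> List[str]:
    history: List[str] = []
    for _ in range(max_steps):                 # the loop is generic...
        tool, arg = llm_decide(goal, history)  # ...the model picks what happens next
        history.append(TOOLS[tool](arg))
    return history

print(run_agent("weekend trip to Edinburgh"))
```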

Today, agentic AI remains nascent, with hallucination (error) rates still incompatible with agency. However, Operator provides our first real glimpse into a world where AI is no longer a tool used by humans, but instead performs tasks previously done by humans. Today, basic AI agents are already creating Neon (serverless) databases at four times the rate of human developers. End users simply describe what they want to build and AI agents autonomously initiate database operations, manage data workflows and scale infrastructure effortlessly.


Technology/AI risks

Given its centrality to sector fortunes, the key risk posed to technology stocks relates to AI. The Trust’s significant exposure to AI means any setbacks to AI fundamentals or the investment narrative could be magnified in the portfolio. These risks may include a slowdown in the pace of AI model improvement (including a tapering of the ‘scaling laws’ observed so far), production challenges presented by the rapid development cadence of each generation of leading-edge semiconductors (as we saw with NVIDIA’s Blackwell delay) and other bottlenecks in scaling AI, such as sourcing sufficient power for data centres and ever-larger datasets to train models. Other AI risks include the advent of ‘cheaper’ models like those introduced by DeepSeek that challenge capital intensity and negatively impact hyperscaler capex. Disappointing AI adoption (undermining investor confidence) or very rapid adoption (provoking public or political backlash) could also present challenges, although neither is likely to derail the technology’s progress in the longer term. There is also the risk that, despite improvement, AI model hallucination rates remain incompatible with agentic AI, potentially delaying or preventing AGI.

Regulation also poses a significant threat to AI progress should it escalate sharply. While export controls aimed at slowing China’s AI progress may become more effective as scaling continues, additional restrictions could stifle innovation, while insufficient oversight could accelerate AI proliferation. Given that DeepSeek was heralded as AI’s ’Sputnik moment’, greater AI competition between the US and China could presage a new AI ‘space race’. The original Sputnik moment led to the creation of NASA in 1958, with US space spending soaring from 0.1% of GDP in 1958 to over 4.4% by 1966, culminating in the 1969 moon landing. A similar trajectory may now unfold in AI, as sovereign investments surge. However, AI competition, particularly if the industry continues to make rapid progress towards AGI, could increase the likelihood of Manhattan Project-type regulatory intervention. Yet this might simply slow US progress while shifting leadership to more permissive nations, rather than mitigating risks.

On a more prosaic level, regulation also presents a significant risk to the sector should behavioural remedies challenge the natural monopoly status of some of today’s mega-caps. We are hopeful the worst-case scenarios will be avoided given the critical role mega-cap US technology companies will play in counterbalancing the AI threat from China. Indeed, a further deterioration in US/Sino relations may present a greater risk and any escalation in tensions around Taiwan would likely put pressure on the semiconductor industry.

Other risks include tariffs which are impossible to fully assess other than at a very high level due to moving targets and the inherent lack of clarity (e.g. the semiconductor sector is still undergoing a Section 232 investigation). Even as these waypoints are reached, there is significant scope for exemptions and/or phased implementations given the need to deliver US AI supremacy. Valuation also remains a key risk, particularly following the absolute and relative rerating in the technology sector as well as the broader market. While we believe the rerating is appropriate given AI progress, it does leave valuations more exposed to disappointment, both within and beyond the technology sector. However, we remain dismissive of the notion that AI stocks are in a bubble, akin to the dot.com period in the late 1990s. While there are features of today’s market that rhyme with that earlier period, we do not believe investors are really considering trillion-dollar market opportunities, scaling laws and an accelerated path to AGI. Factors that would challenge this view include much higher valuations (technology traded above 2x the market multiple in 2000), a ‘hot’ IPO market dominated by immature AI companies and the application of new valuation metrics necessary to justify elevated valuations. None of these conditions exist today.


Concentration risk

For several years, we have consistently reminded Shareholders of the concentration risk embedded both within the Trust and in the market cap-weighted benchmark around which the portfolio is constructed. Following another period of pronounced large-cap outperformance, this risk remains elevated. At year-end, our three largest holdings – NVIDIA, Apple and Microsoft – accounted for approximately 23% of NAV and 31% of the benchmark. Our top five holdings, which also include Meta and Broadcom, represented around 34% of NAV and 50% of the benchmark.

As a large team with a growth-centric investment approach, we would be willing to move to materially underweight positions in the largest index constituents should we become concerned about their growth prospects or their positioning in an AI-first world, or if we believe there are more attractive risk/reward profiles elsewhere. That said, large-caps continue to dominate small-caps, and the strong performance of the Mag-7 during 2024 serves as a reminder of the opportunity cost associated with a premature move away from unique assets, many of which still capture the zeitgeist of this technology cycle.

However, as previously discussed, there may be some early evidence of AI disruption beginning to challenge the investment narratives at certain mega-caps, including Alphabet and Apple. We have held both positions for close to 20 years but have meaningfully reduced them over the past 12 months. We remain unafraid and prepared to materially underweight or exit large index constituents should we become concerned about their growth or return prospects, or AI positioning. We will continue to communicate our thoughts and positioning as they evolve, just as we did when we pivoted the portfolio towards AI. For now, Shareholders should expect lower equity exposures to these stocks (potentially augmented by call options to mitigate upside risk) and greater daily variance in terms of our relative performance.

Conversely, while the Trust can hold up to a full benchmark weight subject to a maximum limit of 15%, we remain unlikely to do so; we struggle with the idea that we are reducing risk by making the portfolio ever more concentrated. Instead, our intention remains to construct a diversified portfolio comprising the best of what the benchmark has to offer, plus a selection of growth technology companies which investors may lack the resources or expertise to discover, analyse and monitor for themselves. We continue to believe that a diversified portfolio of growth stocks and themes capable of outperformance and constructed to withstand investment setbacks, should deliver superior returns over the medium term, particularly on a risk-adjusted basis.
