The Technologized Investor
From Intelligent to Technologized
Welcome to the Network Effects Newsletter.
Over the past two years, I’ve worked closely with institutional investors to design and implement technology aimed at generating superior, repeatable alpha. This essay distills those lessons and outlines a model for what the next generation of institutional investors will look like.
This short essay is not about isolated AI use cases. It’s about how technology, particularly AI, can be embedded structurally to redefine what it means to be a technologized investor.
Let’s dive in.
Evolving Investment Landscape
In the asset management industry, results are measured in returns, which are a function of deployment scale and allocation decisions. Benjamin Graham's classic definition of the "intelligent investor" emphasized evaluating fundamental value drivers and buying assets when their intrinsic value exceeds market price.
For decades, the underlying process, from screening and diligence to underwriting and monitoring, has remained largely unchanged, relying on spreadsheets, human intuition, and static models.
Today, four structural forces are reshaping that model:
Explosion in data availability and sources
Intensified competition from new entrants and asset classes
Growth of multi-asset and hybrid strategies
Rise of foundational models and AI applications (e.g., AlphaSense, Hebbia, Kensho)
These shifts raise two essential questions for institutional leaders:
1. How do we build a sustainable edge?
2. How do we create operating leverage through data and AI?
Two Levers for Technological Progress
Nearly every major advance in investment technology stems from improvements to one or both of two capabilities: Inferential Depth and Resource Efficiency.
Inferential depth: how rich and intricate an analysis the organization can perform to generate fresh insights for decision-making
Resource efficiency: how productively organizational resources can be utilized, after accounting for the costs (e.g., time, money, oversight) of that utilization
Inferential Depth: Data Connectivity & Accessibility
Data is the foundation of organizational knowledge.
According to The Technologized Investor, poor data governance can cost asset owners up to 100 basis points annually in gross returns. Ironically, the most valuable data already exists inside firms, trapped in PDFs, spreadsheets, and individual memory.
Successfully deploying advanced knowledge management technology raises the total organizational value of a firm’s data and creates further value by making an investor’s resources more complementary overall.
Organizations should focus on digitizing, linking, and sharing their data while continuously broadening their scope to meet investors’ evolving needs. As they do, inferential depth grows nonlinearly, turning information from passive documentation into active intelligence that informs every part of the firm.
“You can’t build a world-class investment system if you fail to have world-class data.”
1. Digitalize
Digitization isn’t simply scanning documents. It’s about transforming tacit knowledge into machine-readable, queryable data:
Investment memos locked in SharePoint
Deal information buried in email threads
Meeting notes sitting in CRM records
Performance metrics updated quarterly in spreadsheets
Turning these artifacts into structured, versioned, and accessible data objects creates compounding informational equity through better knowledge management.
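As a rough illustration of what “structured, versioned, and accessible” can mean in practice, a digitized memo might be represented as a typed, queryable record rather than a static file. The fields, identifiers, and values below are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InvestmentMemo:
    """A hypothetical machine-readable representation of an investment memo."""
    memo_id: str                 # stable identifier, referenceable by other systems
    entity_id: str               # canonical portfolio-company identifier (see "Link")
    author: str
    created_on: date
    version: int                 # versioning supports auditability over time
    thesis: str                  # free-text thesis, available to downstream search/AI
    base_case_irr: float         # underwriting assumptions captured as structured fields
    key_risks: list[str] = field(default_factory=list)

memo = InvestmentMemo(
    memo_id="MEMO-2023-017",
    entity_id="ENT-ACME-HOLDCO",
    author="Deal Team A",
    created_on=date(2023, 6, 1),
    version=2,
    thesis="Consolidation play in regional logistics.",
    base_case_irr=0.22,
    key_risks=["customer concentration", "fuel cost inflation"],
)
```

Once memos exist in this form, they can be searched, versioned, and fed into the linking and learning systems described below, rather than rediscovered deal by deal.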
2. Link
Data value compounds when it connects. Today, a portfolio company’s name might differ across the CRM, accounting books, and subledgers, rendering analytics fragmented, making governance unreliable, and requiring repeated manual transformation. The complexity is amplified when dealing with entities that have parent-child relationships, funds with committed capital, and co-investment relationships spanning multiple asset classes.
Linking identifiers across systems establishes a single source of truth, enabling entity-level intelligence rather than tool-specific insight. For example, investors can easily search for and aggregate all overlapping positions a given fund holds across private equity and credit.
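A minimal sketch of what identifier linking enables, assuming a simple lookup table that maps system-specific names to a canonical entity ID (the system names, IDs, and figures are illustrative):

```python
from typing import Optional

# Minimal sketch: resolving system-specific names to one canonical entity ID.
# System names, IDs, and figures below are illustrative, not any vendor's schema.
ENTITY_MAP = {
    ("crm", "Acme Logistics Inc."): "ENT-ACME-HOLDCO",
    ("general_ledger", "ACME LOGISTICS INC"): "ENT-ACME-HOLDCO",
    ("subledger", "Acme Logistics (HoldCo)"): "ENT-ACME-HOLDCO",
}

def canonical_id(system: str, raw_name: str) -> Optional[str]:
    """Return the canonical entity ID for a system-specific name, if known."""
    return ENTITY_MAP.get((system, raw_name.strip()))

# With a shared identifier, positions from different books aggregate per entity.
positions = [
    {"system": "crm", "name": "Acme Logistics Inc.", "strategy": "private_equity", "nav": 42.0},
    {"system": "subledger", "name": "Acme Logistics (HoldCo)", "strategy": "private_credit", "nav": 15.5},
]

exposure_by_entity = {}
for position in positions:
    entity = canonical_id(position["system"], position["name"])
    if entity is not None:
        exposure_by_entity[entity] = exposure_by_entity.get(entity, 0.0) + position["nav"]

print(exposure_by_entity)  # {'ENT-ACME-HOLDCO': 57.5}
```

In production this lookup would typically be a master data service with fuzzy matching and human review, but the principle is the same: one ID per entity, referenced everywhere.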
Information asymmetry inside organizations slows decision-making: dozens of departments build their own dashboards of similar information and share low-quality data over email that is neither normalized nor interoperable with other data sets.
Real-time data-sharing frameworks, through system connections and shared dashboards with SDK access, create a unified view across departments. They enhance cross-departmental collaboration and create one living knowledge graph connecting fund-wide data, including capital usage, GP commitments, exposures, and portfolio health.
Modern investing extends beyond financials. Integrating alternative datasets, from macro indices and supply-chain trackers to sentiment data, prediction markets, social signals or blockchain activities, expands the dimensionality of insight and strengthens decision confidence.
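As a small, hypothetical example of that expanded dimensionality, an alternative dataset can be joined onto internal exposures once both share the canonical entity ID (the dataset, scores, and column names below are invented for illustration):

```python
import pandas as pd

# Hypothetical internal exposures, keyed by the canonical entity ID
exposures = pd.DataFrame({
    "entity_id": ["ENT-ACME-HOLDCO", "ENT-BETA-CO"],
    "nav_musd": [57.5, 23.0],
})

# Hypothetical alternative dataset: a supply-chain stress score per entity
alt_signal = pd.DataFrame({
    "entity_id": ["ENT-ACME-HOLDCO", "ENT-BETA-CO"],
    "supply_chain_stress": [0.8, 0.2],  # higher = more stressed
})

# Joining on the shared identifier adds a new analytical dimension to each holding
enriched = exposures.merge(alt_signal, on="entity_id", how="left")
print(enriched)
```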
Inferential Depth: AI Tools in Decision-Making
AI is also transforming how investors reason, test, and validate ideas, not by replacing human intuition, but by systematizing how insight is generated, challenged, and refined.
The process of hypothesis building, validation, and iteration often happens in silos: an analyst forms a view, tests a few sensitivities in Excel, debates it in committee, and archives the memo. Once capital is deployed, feedback loops close slowly, sometimes years later. AI collapses these silos and accelerates feedback loops. It introduces the ability to simulate, stress-test, and learn continuously, using past investments, live market data, and even unstructured information (such as news, transcripts, or policy updates) to refine conviction dynamically.
AI can transform the investment organization from one that periodically revisits its assumptions to one that continuously recalibrates them. Two key use cases include:
A. War Gaming / Scenario Generation
War gaming began in the Prussian military in the early 19th century as a board-based simulation used to train officers in strategic thinking. The concept was later popularized during the Second World War, when the US military used it as a core component of military planning.
(More details in this video by Johnny Harris)
A digital twin of a portfolio or strategy enables simulated decision environments leveraging generative AI, allowing CIOs and partners to test how macro shocks, regulatory changes, or market behaviours ripple through their holdings. These tools transform risk management from reactive to proactive, letting leaders “war-game” alternative realities before allocating capital.
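A minimal sketch of the underlying idea, using a toy portfolio “twin” with assumed factor sensitivities and randomly generated macro scenarios (the holdings, betas, and shock ranges are illustrative; a production system would be far richer):

```python
import random

# Minimal sketch of "war-gaming" over a simplified portfolio twin.
# Holdings, factor betas, and shock ranges are illustrative assumptions.
holdings = {
    "ENT-ACME-HOLDCO": {"nav": 57.5, "rate_beta": -0.6, "gdp_beta": 1.2},
    "ENT-BETA-CO":     {"nav": 23.0, "rate_beta": -0.2, "gdp_beta": 0.4},
}

def portfolio_impact(rate_shock: float, gdp_shock: float) -> float:
    """Approximate NAV change (in $mm) under a given macro scenario."""
    return sum(
        h["nav"] * (h["rate_beta"] * rate_shock + h["gdp_beta"] * gdp_shock)
        for h in holdings.values()
    )

# Generate many candidate scenarios and inspect the tail of outcomes.
random.seed(7)
outcomes = sorted(
    portfolio_impact(rate_shock=random.uniform(-0.02, 0.03),
                     gdp_shock=random.uniform(-0.04, 0.02))
    for _ in range(10_000)
)
print(f"5th-percentile NAV impact: {outcomes[len(outcomes) // 20]:.2f} $mm")
```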
B. Post-Mortem Analysis
In investing, rigorous underwriting doesn't always translate to superior outcomes, and the reverse can also be true. Many portfolio winners deviate meaningfully from initial assumptions, yet these critical observations are either not captured systematically or documented in an ad-hoc manner that limits organizational learning.
AI-driven validation tools can transform this process by analyzing historical financial models and assessing consistency between investment narratives and underlying numbers:
Automated ex-ante/ex-post analysis compares underwriting assumptions against outcomes.
Attribution frameworks decompose returns into constituent drivers.
Decision audit trails link performance to historical judgments, revealing which frameworks consistently generate alpha.
By continuously learning from past memos, models, and results, these systems can serve as institutional memory, a digital co-pilot that enhances investor judgment rather than replacing it.
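A minimal sketch of the ex-ante/ex-post comparison, assuming underwriting assumptions were captured as structured fields at the time of the deal (the metrics, values, and materiality threshold are illustrative):

```python
# Minimal sketch of an ex-ante / ex-post comparison for one realized deal.
# Field names, values, and the 15% materiality threshold are illustrative.
underwriting = {"revenue_cagr": 0.12, "exit_multiple": 9.0, "irr": 0.22}
realized     = {"revenue_cagr": 0.07, "exit_multiple": 11.5, "irr": 0.24}

def variance_report(ex_ante: dict, ex_post: dict, threshold: float = 0.15) -> list[str]:
    """Flag assumptions that deviated materially from realized outcomes."""
    flags = []
    for key, assumed in ex_ante.items():
        actual = ex_post[key]
        deviation = (actual - assumed) / abs(assumed)
        if abs(deviation) > threshold:
            flags.append(f"{key}: assumed {assumed:.2f}, realized {actual:.2f} ({deviation:+.0%})")
    return flags

for line in variance_report(underwriting, realized):
    print(line)
# In this toy example, the return was driven by multiple expansion rather than
# the underwritten revenue growth -- the kind of pattern worth recording systematically.
```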
Resource Efficiency
While inferential depth expands the quality of insight, resource efficiency determines how effectively an organization converts those insights into outcomes. Technology improves resource efficiency along two complementary lenses: general-purpose enhancements and tactical solutions.
(Figure. Source: Network Effects Analysis)
1. General Purpose Enhancements
General-purpose enhancements, including ChatGPT, Cursor, and Lovable, have become productivity multipliers, serving individuals in a low-friction, personalized, and flexible manner. From market research on a particular industry to vibe-coding dashboards, these tools help individuals get more done in less time and break through technical skill limitations when building new tools and dashboards.
Put more succinctly: through the use of AI technologies, organizations can help their heroes (investors) do their “Hero’s Work” (investing) better and automate the administrative, operational workload out of their lives.
Every workflow contains three types of work:
Hero’s Work – Core, high-value activities (e.g., investment thesis creation, portfolio design, risk calibration)
Administrative Work – Repetitive but necessary tasks (e.g., reconciliation, scheduling, reporting)
Work Not Done – Valuable tasks neglected due to time or resource limits (e.g., counterparty analysis, scenario modelling, continuous strategy updates)
Great software empowers the hero’s work, automates/eliminates the administrative work, and makes the “work not done” suddenly feasible.
A common channel for this use case is encouraging the adoption of enterprise AI applications such as ChatGPT and Cursor, which serve as productivity multipliers supporting individuals in a personalized, flexible manner.
In practice, analysts can use AI to perform deeper analyses incorporating more data sources and scenarios, enriching the depth of the analysis and the reasoning behind their investment recommendations.
2. Tactical AI Solutions
Alternatively, organizations are also developing targeted solutions that leverage LLMs to automate or streamline investment and business processes.
When thinking about designing and prioritizing AI solutions, a useful lens here is the concept of asymmetry of verification. Some tasks are much easier to verify than to solve. With useful reinforcement learning (RL) techniques entering the mainstream, this asymmetry becomes central.
Verifier’s rule: The ease of training AI to solve a task is proportional to how verifiable the task is. All tasks that are possible to solve and easy to verify will be solved by AI.
Tasks that have these properties are much more likely to be automated:
Objective truth: clear consensus on what is a good solution
Fast to verify: any given solution can be verified quickly
Scalable to verify: many can be verified simultaneously
Low noise: verification is tightly correlated with solution quality
Continuous reward: able to rank multiple solutions rather than a binary pass/fail
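One way to apply this lens when prioritizing is a simple scorecard that rates candidate tasks against these criteria; the tasks and 1-5 scores below are illustrative judgments, not data:

```python
# Minimal sketch: scoring candidate automation tasks on the verifiability
# criteria above. Tasks and 1-5 scores are illustrative judgments, not data.
CRITERIA = ["objective_truth", "fast_to_verify", "scalable_to_verify", "low_noise", "continuous_reward"]

candidates = {
    "cash_reconciliation":   {"objective_truth": 5, "fast_to_verify": 5, "scalable_to_verify": 5, "low_noise": 4, "continuous_reward": 3},
    "draft_investment_memo": {"objective_truth": 2, "fast_to_verify": 2, "scalable_to_verify": 3, "low_noise": 2, "continuous_reward": 2},
}

def verifiability_score(scores: dict) -> float:
    """Average score across criteria; higher means easier to verify, hence to automate."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

for task in sorted(candidates, key=lambda t: verifiability_score(candidates[t]), reverse=True):
    print(task, round(verifiability_score(candidates[task]), 1))
```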
For example, Carta has developed a cash reconciliation agent that compares a fund’s external bank feed to Carta’s internal general ledger (GL) daily.
The challenge they faced: in the case of an unreconciled transaction, ensuring the agent has the guidance to gather the right context, reason through the numerous paths a fund admin would follow, and either produce actionable next steps or complete the reconciliation autonomously. The process involves building a Lucid map that lays out the decision tree into blocks and pathways, decoupling the instructions from the rest of the process for observability and auditability.
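To make the core check concrete, here is a generic sketch of the matching step such a reconciliation performs. This is not Carta's implementation; the records, tolerances, and matching rule are illustrative:

```python
from datetime import date

# Generic sketch of a reconciliation check: match bank-feed transactions to
# general-ledger entries and surface exceptions for an agent or fund admin.
# Records, tolerances, and the matching rule are illustrative assumptions.
bank_feed = [
    {"date": date(2024, 3, 1), "amount": 250_000.00, "memo": "Capital call - LP 12"},
    {"date": date(2024, 3, 2), "amount": -18_400.50, "memo": "Legal fees"},
]
general_ledger = [
    {"date": date(2024, 3, 1), "amount": 250_000.00, "account": "Contributions receivable"},
]

def unreconciled(bank, gl, day_tolerance: int = 2):
    """Return bank transactions with no GL entry of equal amount within a date window."""
    exceptions = []
    for txn in bank:
        matched = any(
            entry["amount"] == txn["amount"]
            and abs((entry["date"] - txn["date"]).days) <= day_tolerance
            for entry in gl
        )
        if not matched:
            exceptions.append(txn)  # handed off with context for follow-up
    return exceptions

for txn in unreconciled(bank_feed, general_ledger):
    print("Unreconciled:", txn["date"], txn["amount"], txn["memo"])
```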
Need for Transformation – Slowly At First, Then All At Once
Technological progress and organizational transformation tend to follow a similar pattern, a long, gradual buildup followed by a sudden and dramatic tipping point.
The asset management industry is slowly approaching that threshold.
General-purpose AI tools are democratizing sophisticated analysis, with modern tooling including Mosaic, Hebbia, and Rogo. Specialized platforms are automating core investment workflows that previously required armies of analysts. Alternative data is expanding at exponential rates, rendering traditional information advantages obsolete. The performance gap between early adopters and laggards is widening rapidly.
Firms that embed technology structurally by investing in integrated data infrastructure, intelligent decision workflows, and continuous organizational learning are building compounding advantages that become exponentially harder to replicate over time. They are creating institutional knowledge graphs that deepen with every deal, AI systems that learn from every outcome, and operational leverage that scales without proportional headcount growth.
Conversely, firms that defer this transformation face mounting technical debt that constrains future optionality, talent attrition to more technologically sophisticated competitors, and erosion of their competitive moat as AI-native entrants enter the market unburdened by legacy systems, siloed data, and analog processes.
Winning Characteristics of Technologized Investment Organizations
1. Architectural Thinking Over Siloed Point Solutions
Many institutional investors’ current challenges stem from the siloed design of tools and applications built for department-specific needs, which leads to fragmentation and overlap. Technologized investors build holistic enterprise architectures where data flows seamlessly across the investment lifecycle, embedding AI workflows and institutional learning into a singular ecosystem.
This architectural approach creates scalable foundations that accommodate new capabilities without costly rewrites. Point solutions deliver short-term productivity but accumulate long-term technical debt. As AI systems grow more sophisticated, architectural thinking and orchestration have emerged as central design principles.
2. Data as a Strategic Asset with Robust Governance
Most investment organizations possess decades of proprietary data, including investment memos, return profiles, due diligence notes, and portfolio metrics, yet treat it as archived records rather than strategic assets.
Technologized investors implement systematic data strategies: (1) Digitalize unstructured information into machine-readable formats, (2) Link entities across systems with consistent identifiers, (3) Share through unified frameworks eliminating departmental silos, and (4) Expand by integrating alternative datasets that enhance analytical dimensionality. This operationalizes data governance through domain-driven ownership models (inspired by Data Mesh principles) where business teams own their data's quality and accessibility.
3. Institutionalized Learning Systems That Close Feedback Loops
Traditional investment organizations treat deals as discrete events. Post-investment, memos are archived and attention shifts forward. Learning remains anecdotal and siloed in individual experience.
Technologized investors build continuous learning systems, treating every investment as data, enriching an evolving knowledge model. This transforms institutional memory from tribal knowledge dependent on veteran retention into an organizational "brain" that leverages historical patterns to inform future decisions while surfacing new analytical perspectives.
4. Talent Models That Blend Investment Acumen with Technical Fluency
The division between investors who make decisions and technologists who build systems is dissolving. Technologized investors cultivate technically fluent professionals who leverage computational tools as integral to their conviction-building process.
This doesn't require every investor to become an engineer. It means developing comfort with data analysis beyond Excel, fluency in AI-assisted research workflows, and product thinking that enables effective collaboration with technical teams.
The most impactful investors in the coming decade will combine deep domain expertise with computational leverage, using AI to expand analytical capacity while maintaining the judgment, pattern recognition, and relationship capital that drive superior returns.
5. Distributed Innovation Across Functions, Not Confined to R&D
AI transformation fails when treated as a technology department implementation project or delegated to a "Strategy Office" operating without operational buy-in. Winning organizations embed AI experimentation as cross-functional initiatives spanning investment, operations, and technology teams.
For example, Carta uses its offsite, “Carta Business School”, to bring 30 emerging leaders across products, engineering, sales, and operations into six teams tackling real business problems. This ensures AI solutions address material workflow needs rather than theoretical use cases, driven by technology and operations teams.
Distributed adoption with leadership buy-in accelerates iteration, ensures solutions solve actual problems, and builds organizational fluency that survives turnover.