
The New Age of Agency

Sharath Devulapalli

While the era of AI empowers individuals, potentially enabling a solo operator to launch a billion-dollar business, the very forces driving this shift also make exercising trustworthy personal agency more challenging and rare in practice. It's not just about having the tools; it's about the increasing complexity of the world they create and the internal fortitude required to navigate it.


(a) Why Agency is Becoming Rarer and Harder

Standing out today requires more than just following a path; it demands taking the "road less traveled," which is inherently uncomfortable and fraught with struggle and misunderstanding. Personal growth, necessary to transcend existing worldviews and unlock higher forms of success, involves navigating periods of confusion and long learning curves. It requires embracing uncertainty, something many people struggle with.

A critical skill for making better decisions and achieving significant outcomes is "staring into the abyss": confronting uncomfortable truths and admitting when you are wrong. This is difficult, and people actively avoid it, locking themselves into suboptimal futures if they "flinch away." Success itself can make agency harder, as it can breed scepticism of criticism and surround you with people afraid to tell the truth, creating a "perfect storm for disaster."

Yet we are not seeing a uniform decline or rise.

Instead, agency is diverging: it is being exponentially amplified for a select few who master the new tools and psychological skills, while simultaneously becoming harder to cultivate for the majority navigating an increasingly opaque and overwhelming environment.

The future will not be equally distributed — it will reward those who can remain resilient and navigate complexity.


(b) Connecting to Invisible Forces

Several invisible forces contribute to this challenge:

Automation and Opaque Systems

Advanced AI systems are transforming the economy, but they are fundamentally opaque. Unlike traditional software, we do not understand at a precise level why they make confident choices or mistakes. These systems are "grown" more than "built," with internal mechanisms that are emergent and not designed for human legibility. This opacity is the root of many risks and makes it difficult to predict behaviours or use AI reliably in high-stakes settings. Simply using AI tools without understanding or reviewing the code they generate ("vibe coding" beyond low-stakes projects) is a risky approach.

While individual discernment remains crucial, it is essential to recognise that extreme system opacity may necessitate new forms of collective governance, interpretability standards, and institutional frameworks to protect agency at scale. Personal agency alone may not be enough when the systems themselves become unknowable.

This lack of transparency in powerful, pervasive tools makes truly informed and controlled action, core to agency, increasingly complex.

Skill Mismatch Trap

Even when individuals remain in oversight roles, a new trap emerges: the Skill Mismatch Trap. Without deliberate effort to build the new skills — strategic thinking, prompt engineering, critical review — individuals risk holding titles of responsibility while losing real agency beneath the surface. Automation can create a deceptive appearance of control, while hollowing out the deep competence necessary to exercise it meaningfully.

Fragility of Systems

The unpredictable, emergent behaviours of opaque AI introduce fragility into systems. Beyond technology, regulatory environments can be complex and uncertain, as exemplified by the "regulation-by-enforcement" approach in crypto, which drives innovation overseas and hinders individual agency even in areas where it could otherwise be easily exercised (e.g., through airdrops). Navigating complex, noisy markets, such as influencer marketing, requires moving beyond broad strokes to a data-driven precision that demands deep understanding and specific capabilities.

Permission Culture Collapse (and Emergent Gatekeepers)

While technologies like stablecoins offer permissionless financial rails, the decline of old gatekeepers can give rise to new complexities and uncertainties. Regulatory confusion acts as a different kind of barrier. Within organisations adopting AI, successfully leveraging the technology depends heavily on people's willingness to change and adopt new ways of working, a significant human factor that represents a hurdle to implementation, despite technological capability.

Standing out requires a "rejection of default": consciously recognising and rejecting prevalent cultural or systemic defaults is itself an exercise of active agency.


(c) What is Lost and Gained

When agency declines, individuals may settle for the sake of escaping uncertainty, stagnating or following suboptimal life trajectories by avoiding harsh truths. Relying blindly on opaque systems or external defaults can lead to unexpected negative consequences.

On a personal level, it means being stuck in less complex stages of development, unable to transcend limitations or craft a truly custom path.

When agency is reclaimed, particularly through uncomfortable acts like tolerating uncertainty, going "all in," confronting brutal truths, and navigating confusion, individuals unlock new levels of development, make better decisions, and achieve significant outcomes.

While personal agency is foundational, it also scales: communities, organisations, and societies that nurture collective agency — the shared ability to act intentionally and adaptively — can compound these effects, building resilient ecosystems that individuals alone cannot achieve.

Reclaiming agency in the face of technological advancement means focusing on the crucial human layer — the "epizone" where AI integrates with culture and society. This cultural layer, which moves more slowly than the technology itself, requires human adaptation and new skill sets ("AI whisperers"), and provides essential time to understand and manage the consequences of powerful AI.

Philosophically, reclaiming internal agency by learning how to handle the "indifferent" external circumstances is, for Stoics, the path to virtue, the only true good.


Architectural Thinking: The Future of Agency

As AI increasingly handles implementation details, the future of meaningful agency will shift to a higher plane: architectural thinking.

The ability to design systems, orchestrate diverse resources — both human and machine — and define the broader goals will become the new leverage point. Those who cultivate strategic, system-level vision will not merely survive the coming complexity — they will direct it.


(d) Philosophical yet Practical, Awe and Urgency

The landscape demands a deeper, almost philosophical approach to agency.

It's not merely about doing things quickly; it's about understanding one's perception and worldview ("memes," Spiral Dynamics) and having the wisdom, as the Stoics understood it, to distinguish what is truly within our control (our reactions and judgments) from what is not ("indifferents").

Exercising agency requires confronting our own biases and limitations, accepting the truth about ourselves — a skill essential for growth.

The awe stems from the sheer power and rapid advancement of AI, potentially leading to systems equivalent to a "country of geniuses in a data centre," and the recognition of the profound and complex journey of human consciousness itself.

The urgency stems from the race between AI and our ability to understand it, specifically in terms of interpretability. It is considered "basically unacceptable for humanity to be ignorant of how they work" before these systems become overwhelmingly powerful and central to society.

And yet, history offers a quiet counterpoint: while humanity often lags behind technological revolutions initially, we have repeatedly risen to meet complexity with adaptation, governance, and resilience. Agency, both personal and collective, has turned crises into eras of flourishing before — and can again.

The call to action is practical: invest in understanding AI, confront uncomfortable realities, tolerate uncertainty, and cultivate the complex, human-centric skills necessary to steer this future.


Open Questions for Further Discussion

Several critical questions remain open and warrant deeper exploration:

🧩 Bridging the Agency Divide

Given the potential for agency to diverge, amplifying for some while becoming harder for others, what societal or institutional measures are needed to ensure broader access to the tools and skills required for meaningful agency, rather than accepting this gap as inevitable?

🧠 Balancing Individual Resilience and Systemic Support

How much emphasis should be placed on individual psychological fortitude versus addressing the structural barriers (e.g., socioeconomic factors, educational access) that significantly impact one's capacity to exercise agency? Where does the responsibility lie?

🌍 Valuing Diverse Forms of Agency

Does the focus on navigating complexity and 'architectural thinking' risk overshadowing other vital forms of agency, such as community building, ethical stewardship, creative expression, or civic participation? How are these also affected?

🛠️ Shaping Technology vs. Adapting to It

Beyond adapting to increasingly complex and opaque systems, what role does collective agency play in proactively shaping AI development towards greater transparency, interpretability, and alignment with human values? Is 'navigating complexity' the only goal, or should 'reducing unnecessary complexity' be prioritised?