Artificial intelligence is no longer a distant promise discussed in laboratories or policy documents. It is embedded in everyday organizational decision-making, from approving loans and predicting patient outcomes to managing supply chains and assessing employee performance. Yet while AI systems grow more sophisticated, the question of who leads in AI-driven environments has become increasingly urgent.
A recent peer-reviewed study by Sahadat Hossain and colleagues offers timely insight into this challenge. Published in the Journal of Leadership & Organizational Studies, the article titled “Digital Leadership: Towards a Dynamic Managerial Capability Perspective of Artificial Intelligence Driven Leader Capabilities” examines how leadership itself must evolve when humans and intelligent machines collaborate at scale.
Conducted at the University of Wollongong, the research moves beyond popular claims that AI will replace leaders. Instead, it argues that leadership failure in the age of AI is more likely when decision makers lack the capabilities to understand, guide and ethically govern algorithmic systems.
Why AI has changed the meaning of leadership
AI adoption has accelerated across sectors, driven by promises of efficiency, predictive accuracy and competitive advantage. Financial services, healthcare, education, and government institutions increasingly rely on AI-based analytics and automation to handle complex tasks at speed. Yet this shift has also exposed organizations to new risks, including algorithmic bias, opaque decision-making, data privacy breaches and workforce displacement.
The study highlights a critical tension. AI systems excel at processing vast amounts of data, but they do not possess moral judgement, contextual understanding or accountability. Those responsibilities remain human. As a result, leadership in AI-driven organizations is no longer about delegating decisions to machines but about orchestrating human-machine collaboration in a way that balances performance with responsibility.
This framing places leadership capability at the centre of AI governance. Rather than viewing AI as a purely technical upgrade, the authors argue that AI fundamentally reshapes how leaders sense opportunities, make decisions, and manage uncertainty.
From static skills to dynamic managerial capabilities
To understand what effective leadership looks like in AI-intensive environments, the researchers adopt a dynamic managerial capability perspective. This theoretical framework focuses on how leaders reconfigure organizational resources in response to rapid change, rather than relying on fixed competencies or hierarchical authority.
Dynamic managerial capabilities emphasize three interconnected processes: sensing changes in the environment, seizing opportunities through strategic decisions, and reconfiguring organizational structures and resources. In AI-driven contexts, these processes become more complex as leaders must interpret algorithmic outputs, integrate automated systems into workflows, and anticipate unintended consequences.
Through in-depth interviews with senior executives in the financial services sector, the study identifies how these dynamic capabilities manifest specifically in AI-enabled leadership. The findings suggest that effective digital leadership depends on a distinct combination of technical, adaptive, and transformational capabilities.
Technical capability as the foundation of AI leadership
The first capability set identified in the study relates to technical competence. Importantly, this does not mean that leaders must become data scientists or software engineers. Instead, they must develop sufficient AI literacy to understand how intelligent systems work, what data they rely on, and where their limitations lie.
Leaders with strong technical capability are better positioned to evaluate AI proposals, invest in appropriate infrastructure and challenge flawed assumptions embedded in algorithms. They can also communicate more credibly with technical teams, regulators and stakeholders, reducing the risk of blind reliance on automated outputs.
The research shows that technical capability includes both technological awareness and informational access. Leaders who actively engage with AI-based analytics and information systems are better equipped to make informed decisions and respond quickly to emerging risks. In contrast, leaders who remain detached from AI processes often struggle to exercise oversight or accountability.
Decision-making in the age of intelligent systems
Beyond technical understanding, the study identifies adaptive decision-making as a central leadership challenge. AI increases the volume and speed of decisions organizations must make, but it does not eliminate the need for human judgment. In many cases, it amplifies it.
The researchers found that effective leaders use AI to inform decisions rather than replace them. They rely on data-driven insights while remaining attentive to context, ethical considerations, and long-term strategic goals. This approach contrasts with deterministic decision-making, where algorithmic recommendations are followed without critical evaluation.
Adaptive decision-making also requires leaders to assess the broader organizational impact of AI adoption. This includes evaluating effects on employment, skill requirements and organizational culture. Leaders who fail to consider these factors may achieve short-term efficiency gains at the cost of long-term resilience and trust.
When machines augment, rather than replace, leaders’ capabilities.
– Sahadat Hossain
Integrating AI into complex organizations
AI implementation rarely affects a single department. It reshapes processes across various functions, including operations, compliance, human resources, and customer engagement. The study highlights integration capability as a critical but often overlooked aspect of digital leadership.
Integration capability refers to a leader’s ability to coordinate AI initiatives across organizational boundaries. This includes facilitating cross-functional collaboration, clarifying responsibilities, and monitoring system performance over time. Without such coordination, AI projects risk becoming fragmented, costly, or misaligned with organizational strategy.
Interviewees in the study emphasized that integration failures often stem from leadership gaps rather than technical flaws. When leaders fail to actively guide AI adoption, automated systems may conflict with existing workflows or introduce unintended vulnerabilities. Effective integration, therefore, depends on continuous leadership engagement, not one-off implementation decisions.
Making sense of change and managing uncertainty
AI-driven transformation introduces uncertainty at multiple levels. Employees may fear job loss, managers may question the reliability of automated decisions, and organizations may struggle to predict how AI will reshape markets and regulations. The study identifies transformational capabilities as essential for navigating this uncertainty.
Sense-making capability allows leaders to articulate why AI is being adopted and how it aligns with organizational values and goals. By communicating both the short-term implications and the long-term vision, leaders can reduce anxiety and foster a shared understanding. This communicative role becomes especially important in environments where AI changes established power dynamics and work practices.
Reference
Hossain, S., Fernando, M., & Akter, S. (2025). Digital leadership: Towards a dynamic managerial capability perspective of artificial intelligence driven leader capabilities. Journal of Leadership & Organizational Studies, 32(2), 189–208. https://doi.org/10.1177/15480518251319624
