Adding AI as a support function has been an attractive move for business leaders and enterprises all over the world. The next step? Using AI as an active support instead of a passive one. Instead of feeding questions one by one to co-pilots and getting answers back, AI can become more proactive and take initiatives that make the work of human agents simpler and faster: in other words, agentic AI.
However, the biggest challenge we face today is the control (or lack thereof) in agentic AI. Most organizations are more comfortable with AI assisting people than with AI taking actions, even within limited, well-bounded scopes. A McKinsey & Company survey reveals that only 1 percent of leaders believe their AI deployments have reached maturity. Most of this doubt comes from the lack of a structure and framework that can convert the huge potential of agentic AI into something productive.
That’s why, in this article, we propose a solution that explains how enterprises can move from AI support to AI action in a controlled, human-led way. We also introduce a practical framework that CX and IT leaders can use to assess their readiness and decide where AI should act, where it should assist, and where it should wait.
Why Do We Need Agentic AI Now?
Over the years, most enterprise AI support has focused on assistance, such as:
- summarizing information requested by the user
- suggesting responses
- providing surface-level insights to help understand a problem better
- supporting employees in their daily work
This aligns with what we discussed in February: human-led, AI-supported CX. However, as these co-pilot capabilities mature, a natural next question arises inside enterprises: “If AI can suggest, can it also act?”
This question is gaining traction in the business world because the manual workforce is reaching its limits: workforce constraints are tightening and response speed is capped. At the same time, the need for automation is growing, driven by new-generation customer demands that the manual side alone cannot fulfil.
As Yuval Noah Harari often points out in his books when discussing complex systems, progress is less about what is possible and more about the order in which changes are introduced.
When systems evolve faster than the structures that govern them, power increases but control weakens. The same principle applies to AI in the enterprise. Capability without sequence creates fragility, not advantage.
Why Enterprises Hesitate with Agentic AI
When AI begins to act, even in small ways, enterprise concerns change fundamentally.
The core concerns we hear repeatedly are accountability, explainability, risk containment, and auditability. As the figure above shows, these concerns are especially strong in CX environments, where trust, customer impact, and regulatory expectations intersect.
Importantly, this hesitation is not about rejecting AI. It is about introducing autonomy without losing human ownership.
Graduated Path from AI Support to AI Action: A Solution to Your Problems
Enterprises do not need to choose between “no autonomy” and “full autonomy.”
The safer approach is a graduated model, where AI action increases only as maturity increases.
Levels of AI Action in CX
Level 1: Assist
Level 1 is where most enterprises are today. The AI mainly helps by summarizing, classifying, and providing surface-level insights to the user.
Level 2: Recommend with Approval
This level is the first step toward AI autonomy: the AI proposes a specific action, but human guidance remains essential, meaning every move the AI suggests is approved by a person before it is executed. Enterprises can typically reach this level within 12 months of structured investment.
Level 3: Act in Low-Risk Scenarios
Level 3 is where Cubastion currently operates. At this level, the AI executes predefined actions within strict rules and under human oversight. This requires well-defined processes, clean data, explicitly documented boundaries of acceptable action, and full audit trails.
Level 4: Autonomous under Guardrails
At this stage, AI acts independently within narrow, well-defined limits. Humans oversee policies, exceptions, and escalation. This level is appropriate for a small number of mature, low-variability processes, but it should not be the default ambition.
These levels make one thing clear: autonomy is not a switch; it is a progression.
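As an illustration only, the gating logic behind these four levels can be sketched as a simple dispatch function. The action names and policy tables below are hypothetical, not part of any Cubastion product:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    ASSIST = 1         # Level 1: AI summarizes and classifies only
    RECOMMEND = 2      # Level 2: AI proposes; a human must approve
    ACT_LOW_RISK = 3   # Level 3: AI executes predefined low-risk actions
    AUTONOMOUS = 4     # Level 4: AI acts within narrow guardrails

# Hypothetical policy tables: which actions are permitted at which level.
LOW_RISK_ACTIONS = {"resend_invoice", "update_shipping_eta"}
GUARDRAILED_ACTIONS = LOW_RISK_ACTIONS | {"issue_refund_under_limit"}

def dispatch(action: str, level: AutonomyLevel, human_approved: bool = False) -> str:
    """Decide how a proposed AI action is handled at a given autonomy level."""
    if level == AutonomyLevel.ASSIST:
        # The AI never acts; its output is advisory only.
        return "suggestion_only"
    if level == AutonomyLevel.RECOMMEND:
        # Every action waits for explicit human sign-off.
        return "executed" if human_approved else "awaiting_approval"
    if level == AutonomyLevel.ACT_LOW_RISK:
        # Only predefined low-risk actions run; everything else escalates.
        return "executed" if action in LOW_RISK_ACTIONS else "escalated_to_human"
    # Level 4: a wider but still explicit guardrail set; out-of-policy
    # actions still escalate to a human.
    return "executed" if action in GUARDRAILED_ACTIONS else "escalated_to_human"
```

The design point the sketch makes is that each level only widens an explicit allow-list; nothing the policy does not name ever executes without a person.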
Thus, the key question becomes: Which level is appropriate for us right now?
In our January article, The CIO’s Framework for Application Investment in the Age of AI, we outlined five dimensions enterprises should consider when making application investment decisions. Those dimensions remain valid, and they form the foundation for assessing AI readiness as well.
To evaluate readiness for AI action, we recommend extending the framework with one additional dimension focused explicitly on human control: can AI assist or act without removing human accountability?
Together, these six dimensions provide a practical way for CX and IT leaders to determine where AI can assist, where it can act, and where it should wait.
The CX + AI Action Readiness Self-Assessment
The CX + AI Action Readiness Self-Assessment lets you rate each dimension above from 1 to 4 for each CX process under consideration. Because the framework is applied at the process level (not the enterprise level), the same organisation will typically have processes at very different stages of maturity. After scoring, use the table below to determine where each process stands:
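As a minimal sketch of how such a process-level assessment might be operationalised, the dimension names below are placeholders, not the framework's official labels, and the conservative "weakest dimension wins" rule is one plausible scoring choice among several:

```python
# Hypothetical dimension names standing in for the framework's six dimensions.
DIMENSIONS = [
    "process_clarity",
    "data_readiness",
    "governance",
    "integration",
    "risk_tolerance",
    "human_control",
]

def recommended_level(scores: dict) -> int:
    """Map per-dimension ratings (1-4) for one CX process to a recommended
    autonomy level. Conservative rule: a process is only as ready as its
    weakest dimension, so the recommendation is the minimum rating."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    bad = [d for d, v in scores.items() if not 1 <= v <= 4]
    if bad:
        raise ValueError(f"ratings must be 1-4, got invalid: {sorted(bad)}")
    return min(scores.values())
```

Used this way, a process rated 3 everywhere except a 1 on human control would be recommended for Level 1 until that gap closes, which matches the article's point that autonomy is gated by the least mature foundation.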
What This Looks Like in Practice
This framework is a direct result of what we have observed over the years and of the methods we have tried and tested for our customers. Here are a few examples of successful Cubastion deliveries that show why this graduated approach is beneficial for your business.
SAP Commerce: Predicting Problems Before They Reach Customers
As we mentioned earlier, Cubastion has successfully operated at Level 3. In a large commerce environment, issues typically surfaced through customer complaints or system alerts, and the previous approach managed them reactively rather than proactively.
After a detailed study of the environment and its data confirmed stable, documented operational processes, trustworthy data, and well-defined human escalation paths, Cubastion introduced an AIOps layer that detected anomalies, predicted degradation patterns, and triggered predefined responses, while the operations team retained authority over resolution.
The result?
The new model predicted and detected issues in the commerce platform before they reached customers, reducing incident-handling effort, improving system reliability, and giving the operations team the confidence to manage complexity proactively without destabilising core systems.
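For illustration, the shape of such a guardrailed AIOps loop, anomaly detection plus predefined responses with human escalation, can be sketched as follows. The metric names, threshold, and response table are hypothetical, not Cubastion's actual implementation:

```python
from statistics import mean, stdev

def detect_anomaly(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` if it deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Hypothetical table of predefined responses: the explicit boundary of
# what the system may do on its own.
PREDEFINED_RESPONSES = {"error_rate": "restart_worker_pool"}

def handle_metric(name: str, history: list, latest: float):
    """Detect an anomaly and either trigger a predefined response or
    escalate to the operations team, which retains final authority."""
    if not detect_anomaly(history, latest):
        return ("ok", None)
    action = PREDEFINED_RESPONSES.get(name)
    if action:
        return ("auto_response", action)        # within predefined boundaries
    return ("escalate_to_operations", None)     # humans handle the rest
```

The key property, consistent with the Level 3 description above, is that the automated path exists only for anomalies with a predefined response; anything unanticipated routes to people.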
Generative AI: Turning Expert Knowledge into Team Capability
In a service environment where language barriers had created knowledge gaps between experts and the broader team, Cubastion introduced generative AI as a co-pilot assistance layer: contextual guidance, language support, and on-demand knowledge access. This solved the major problem of unevenly distributed knowledge.
The design was deliberate: AI assisted; it did not decide or take over. Core processes were documented, employees retained decision authority, and usage governance was in place before rollout. The result?
It reduced dependency on senior experts and led to faster onboarding. Service quality became more consistent and faster across languages and regions, without displacing the human relationships that enterprise CX depends on.
In both cases, the same pattern held: AI action was introduced only where process clarity, data readiness, and governance already existed. Neither engagement began with AI. Our main takeaway from these two situations: control is the foundation of autonomy.
Conclusion
The future of enterprise CX will involve AI that acts. That is not the question. The question is whether the governance structures that make AI action safe and trustworthy are built before or after deployment.
Most enterprise CX environments, when assessed honestly, contain a handful of processes already mature enough for Level 3 action, several where Level 2 recommendation is the right move, and a significant number that should remain at Level 1 until the foundations are stronger.
Cubastion’s position, informed by delivery across complex enterprise environments, is that autonomy must be earned. Not because AI capability is insufficient, but because the value of AI action depends entirely on the quality of the environment it operates in. A capable AI in an immature process does not produce good outcomes. A well-scoped AI in a mature process produces outcomes that are reliable, auditable, and trusted by the people it supports.
Enterprises that succeed with agentic AI will be those that treat control as the foundation of autonomy, not as an obstacle to it.