AI for CX: Step 2: Detailed Steps for Assessment Based on Key Pillars

Introduction 

When we set out on a new project, we always follow a set of rules. They might vary from enterprise to enterprise, but the intention is always the same: to complete the project on time and avoid waste. That is why it’s always necessary to have a “framework” in place. 

In the last article, Smarter CX Modernization Assessment Framework for Application, we explained why a CX assessment matters for an enterprise to grow and clarify its future objectives. However, learning about CX assessments and frameworks is not effective until we operationalize them. On paper, assessments look easy to produce, but the process demands rigor, consistency, and structured data collection to ensure the findings are credible and actionable. 

In this blog, we will dig deeper into the key pillars, shown below, that we discussed in the first article of our application assessment framework series: 

The process described here can be applied to a single application or an entire portfolio, providing repeatability across diverse technologies and business domains and supporting long-term modernization planning. 

CX Modernization Functional Assessment: Mapping Capabilities to Business Goals 

How does the functional assessment of your business work? Let’s go through these important steps to understand:  

  1. Stakeholder interviews: Talk directly to the people who use the applications daily to gather first-hand experience and feedback. This qualitative insight helps characterize key workflows and the improvements they need.  
  2. Mapping the application’s functional capabilities against documented business requirements, expected outcomes, and evolving needs: Any non-standard workarounds, redundant steps, manual interventions, or divergent workflows need to be documented. This reveals operational inefficiencies that a purely technical evaluation can miss.
  3. Scalability and redundancy evaluation: Evaluate the application’s ability to support new business models or market expansion. This is followed by identifying redundancies, i.e., applications that overlap in capabilities and may need merging. 
  4. Functional maturity score: Typically based on coverage, alignment with business processes, issue frequency, and adaptability to change, this score becomes a critical input for recommending modernization strategies in later stages. 
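As a sketch, the maturity score in step 4 could be computed as a weighted average of per-dimension ratings. The dimensions come from the list above; the weights and the 0-5 rating scale are illustrative assumptions, not a prescribed standard.

```python
def functional_maturity_score(ratings):
    """Combine per-dimension ratings (each 0-5) into a weighted 0-100 score."""
    weights = {                      # assumed weights, tune per engagement
        "coverage": 0.30,            # breadth of business-process support
        "business_alignment": 0.30,  # fit with documented requirements
        "issue_frequency": 0.20,     # inverted rating: fewer issues = higher
        "adaptability": 0.20,        # ease of accommodating change
    }
    weighted = sum(weights[d] * ratings[d] for d in weights)  # still on 0-5
    return round(weighted / 5 * 100, 1)                       # normalize to 0-100

print(functional_maturity_score(
    {"coverage": 4, "business_alignment": 3, "issue_frequency": 2, "adaptability": 3}
))  # 62.0
```

Recording the score this way keeps each application comparable across the portfolio, whatever weights an enterprise settles on.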


CX Modernization Technical Assessment: Architecture, Code, and Scalability Review
 

The technical pillar of the CX assessment focuses on evaluating your enterprise application’s foundational structure, its technological dependencies, and its overall maintainability.  

  1. Architectural review: Analyzing whether the design aligns with enterprise architecture principles, modularity expectations, and industry best practices. Key areas include architecture patterns (monolith vs. microservices), deployment topology, integration frameworks, and the degree of technical debt. 
  2. Code-level evaluation: Using static code analysis tools (e.g., SonarQube), teams examine code quality metrics such as maintainability, complexity, duplication, and vulnerability density. This is complemented by reviewing repository structure, branching strategies, documentation standards, and adherence to coding conventions. 
  3. Scalability capabilities assessment: Underlying infrastructure dependencies, database design, caching mechanisms, asynchronous processing models, and load-distribution patterns are evaluated. Technical teams also evaluate cloud compatibility, API governance, and version currency of frameworks and libraries. 
  4. Output: Technical risk score, maintainability index, architectural compliance rating, and a catalogue of refactoring or rearchitecting opportunities help establish whether the application can sustainably evolve or if its current design limits modernization potential. 
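For the maintainability index named in the output, one widely cited formulation (Oman and Hagemeister’s, rescaled to a 0-100 range as some tools do) can be sketched as follows; the inputs would come from static analysis of the codebase:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic maintainability index, rescaled to a 0-100 range.

    Inputs are per-module static-analysis metrics; higher output means
    easier to maintain.
    """
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    return max(0.0, mi * 100 / 171)

# Example: a module with Halstead volume 1000, complexity 10, 200 lines
print(round(maintainability_index(1000, 10, 200), 1))
```

Different tools vary the coefficients and scaling, so the absolute number matters less than comparing modules measured the same way.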


CX Modernization Performance Assessment: Load, Latency, and Stress Evaluation
 

Evaluating the performance of your applications means determining whether they can handle existing and future load expectations. If an application is not up to the mark, it will fail under load or fall below acceptable thresholds. We can evaluate your CX performance based on these three parameters:  

  1. Historical production telemetry: The team uses APM tools that monitor how the app has been performing over time in the real world. They look at things like: 
    a) How fast does it respond to users? 
    b) How hard is it working the server’s processor? 
    c) Is it using up too much memory over time? 
    d) How many transactions is it handling, and at what speed?
    This gives them a factual, data-backed starting point, so they know what “normal” looks like before they start testing.

  2. Structured performance tests: This includes:
    a) Load tests to assess behaviour under expected volumes  
    b) Stress tests to identify breaking thresholds 
    c) Soak tests to uncover long-duration issues such as memory leaks  
    d) Integration performance measured through API response times, third-party latency, and network bottlenecks 

  3. System tuning configurations: Once problems are identified, the team digs into the technical settings that control how efficiently the app runs. This includes: 
    a) Whether the database is set up to find information quickly (indexing) 
    b) Whether the app is storing frequently used data in fast-access memory rather than fetching it fresh every time (caching) 
    c) Whether the app is managing its connections to the database efficiently, rather than opening a new one every time it needs data (connection pooling) 
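A minimal sketch of the structured tests in step 2: a tiny load test that fires concurrent calls at a handler and reports latency percentiles. The `handler` stub stands in for a real HTTP request, and the volumes are illustrative; real assessments use dedicated tools (JMeter, k6, Gatling, etc.).

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(handler, requests_total=200, concurrency=20):
    """Fire `requests_total` calls at `handler` with `concurrency` workers
    and return approximate latency percentiles in milliseconds."""
    def timed_call(_):
        start = time.perf_counter()
        handler()                                    # stand-in for an HTTP request
        return (time.perf_counter() - start) * 1000  # latency in ms
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(requests_total)))
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],  # approximate p95
        "max_ms": latencies[-1],
    }

# Simulated 1 ms handler; a real test would call the application's endpoint.
print(run_load_test(lambda: time.sleep(0.001), requests_total=100, concurrency=10))
```

Stress and soak variants follow the same shape: ramp `concurrency` up until the percentiles degrade, or hold a steady load for hours while watching memory.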

The output is a comprehensive performance scorecard detailing bottlenecks, capacity risks, and optimization opportunities. This scorecard becomes a key factor in modernization decisions, especially when selecting between rehosting, replatforming, or rearchitecting strategies. 
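To make the caching check in step 3 concrete, here is a minimal sketch of a time-to-live cache that avoids re-fetching frequently used data on every call. The class and the `loader` callback are hypothetical illustrations, not a specific product’s API.

```python
import time

class TTLCache:
    """Keep loaded values in memory until a time-to-live expires."""

    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._store = {}            # key -> (value, loaded_at)

    def get_or_load(self, key, loader):
        hit = self._store.get(key)
        if hit and self.clock() - hit[1] < self.ttl:
            return hit[0]                         # fresh cache hit
        value = loader(key)                       # miss or expired: reload
        self._store[key] = (value, self.clock())
        return value

# Usage sketch: `fetch_from_db` is a hypothetical expensive lookup.
cache = TTLCache(ttl_seconds=30)
fetch_from_db = lambda key: {"id": key}           # stand-in for a real query
record = cache.get_or_load("customer:42", fetch_from_db)
```

During an assessment the question is whether such a layer exists at all, and whether its TTLs match how quickly the underlying data actually changes.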

CX Modernization Security Assessment: Threat Modelling and Vulnerability Analysis 

The security assessment is designed to evaluate the application’s exposure to internal and external threats, through a variety of checks and reviews. 

  1. Authentication and authorization mechanisms are reviewed, ensuring compliance with enterprise IAM standards such as MFA adoption, RBAC consistency, and SSO integration. 
  2. Vulnerability analysis: Conducted using SAST, DAST, and dependency scanning tools to identify code- and configuration-level security issues. This includes detecting outdated libraries, weak encryption protocols, insecure API endpoints, and missing input validation checks. Complementing this is a configuration review of infrastructure components such as firewalls, load balancers, certificates, and storage policies. 
  3. Threat modelling is then performed to map probable attack vectors, privilege escalation scenarios, data handling risks, and potential misconfigurations.

These tests enable the prioritization of vulnerabilities, remediation recommendations, and compliance alignment. 
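As a rough sketch of how that prioritization might work, the snippet below ranks scanner findings by CVSS score with a boost for internet-facing components. The field names and the +2.0 exposure boost are assumptions for illustration, not a standard formula.

```python
def prioritize(findings):
    """Sort findings so internet-facing, high-CVSS issues come first."""
    def risk(finding):
        # Assumed rule: exposed components get a flat severity boost.
        boost = 2.0 if finding["internet_facing"] else 0.0
        return finding["cvss"] + boost
    return sorted(findings, key=risk, reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.1, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True},
    {"id": "CVE-C", "cvss": 8.0, "internet_facing": True},
]
print([f["id"] for f in prioritize(findings)])  # ['CVE-C', 'CVE-B', 'CVE-A']
```

Real programs refine this with exploitability data and asset criticality, but the principle is the same: remediation order should reflect exposure, not just raw severity.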
 

User Experience and Accessibility Review in Applications  

The UX review focuses on how effectively users can interact with the application and complete key tasks.  

From the figure above, we can gather four key components of a better UX in a CX assessment.  

  1. It begins with mapping primary user journeys and evaluating usability aspects such as navigation flow, form design, responsiveness, and visual consistency. User interviews, feedback surveys, and usability test recordings provide qualitative insights into friction points and satisfaction levels. 
  2. Next, heuristics-based evaluation is performed to assess compliance with usability principles including learnability, efficiency, error tolerance, and clarity.  
  3. Accessibility evaluations follow, ensuring compliance with standards such as WCAG guidelines. This includes assessing keyboard navigation, screen reader compatibility, colour contrast ratios, and alternative text coverage. 
  4. The assessment also reviews cross-device experiences to ensure responsive design and feature parity. Interaction analytics (e.g., click heatmaps, drop-off points) are also analyzed to uncover behavioural trends. 

Outputs include a UX maturity score, accessibility compliance rating, and recommended UI/UX improvements.  
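One of the accessibility checks, colour contrast, is fully mechanical: WCAG 2.x defines contrast ratio from the relative luminance of the two colours, and level AA requires at least 4.5:1 for normal text. A direct transcription of that formula:

```python
def _channel(c8):
    """Linearize one 8-bit sRGB channel per the WCAG luminance definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (R, G, B) colour with 0-255 channels."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Running this across a design system’s text/background pairs turns the contrast portion of the accessibility rating into an automated check.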

Integration and Data Flow Assessment 

Most applications nowadays rely on a complex ecosystem of upstream and downstream systems. The application’s performance within, and impact on, this landscape is assessed as follows. 

  1. Mapping integrations: APIs, message queues, batch jobs, and external connectors are mapped. Teams document payload structures, data formats, transformation logic, and frequency of data exchanges. 
  2. Integration reliability evaluation: This is measured through monitoring logs, error rates, retry mechanisms, and failure-handling strategies. Latency patterns and throughput capabilities are also analyzed, especially for real-time integrations requiring consistent performance. 
  3. Data flow assessment: Master data dependencies, data lineage, duplication risks, and synchronization mechanisms are reviewed. Poor data quality or inconsistent data governance often emerges as a hidden risk impacting multiple applications. 
  4. Integration security evaluation: Token management, encryption standards, API gateways, and throttling policies are reviewed.  
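A common failure-handling strategy looked for in step 2 is retry with exponential backoff. A minimal sketch, with illustrative names and an injectable `sleep` for testability:

```python
import time

def with_retries(call, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Invoke `call`, retrying failures with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise                           # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Usage sketch: `call_partner_api` stands in for any flaky downstream request.
call_partner_api = lambda: "200 OK"
response = with_retries(call_partner_api)
```

In a review, the questions are whether delays actually grow, whether retries are capped, and whether non-transient errors (e.g. authentication failures) are being retried pointlessly.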


Risk Rating and Prioritization Method
 

Everything comes together when we consolidate the findings from the six pillars discussed above. This begins with creating a weighted scoring matrix where each pillar is assigned a relative weight based on business priorities. For example, customer-facing systems may assign higher weight to performance and UX, while internal systems may prioritize functionality and integration stability. 

Next, pillar-level scores are aggregated to produce an overall risk rating for each application: low, medium, high, or critical. This enables clear comparison across the portfolio. Applications with high technical debt, severe security vulnerabilities, or chronic performance issues naturally rise to the top of the modernization backlog. 
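The weighting and banding described above can be sketched as follows. The weights model a customer-facing profile per the example, and the risk bands are illustrative assumptions:

```python
# Assumed weights for a customer-facing profile: performance and UX
# weighted up, per the example above. Weights sum to 1.0.
PILLAR_WEIGHTS = {
    "functional": 0.15, "technical": 0.15, "performance": 0.25,
    "security": 0.20, "ux": 0.15, "integration": 0.10,
}

def overall_risk(pillar_scores):
    """Aggregate per-pillar risk (0 = healthy, 10 = critical) into a band."""
    score = sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_scores.items())
    for band, threshold in (("critical", 8), ("high", 6), ("medium", 3)):
        if score >= threshold:
            return band, round(score, 2)
    return "low", round(score, 2)

print(overall_risk({"functional": 2, "technical": 8, "performance": 7,
                    "security": 9, "ux": 4, "integration": 5}))  # ('high', 6.15)
```

Because the weights are explicit, the same matrix can be re-run with an internal-system profile and the portfolio re-ranked without re-assessing anything.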

The assessment also categorizes applications into modernization pathways (rehost, replatform, refactor, rearchitect, replace, or retire) based on their score patterns.  

Conclusion 

A pillar-based assessment approach brings structure, consistency, and analytical rigor to application evaluation. By following the detailed steps outlined across functionality, technology, performance, security, UX, and integration, organizations can create a comprehensive view of their application landscape. Cubastion can help you create this methodology and ensure that insights are measurable, evidence-driven, and aligned to business priorities, significantly improving the quality of modernization decisions.  

ANUBHAV MANGAL
PRINCIPAL CONSULTANT
