PwC Consulting: Driving Development Innovation with Generative AI, Including Jitera
| Item | Detail |
| --- | --- |
| Company name | PwC Consulting LLC |
| Industry | Consulting |
| Sector | Services |
| Company size | Approx. 12,700 (FY 2024) |
Interview Participants

| Organization | Participants |
| --- | --- |
| PwC Consulting Digital & AI Transformation (DAX) Team | N.H., Manager; S.O., Senior Associate |
| Jitera | Y.N., Head of Customer Success and Pre-sales (Japan) |
Balancing Speed, Quality, and Reproducibility in Enterprise Development
PwC Consulting’s technology division is exploring how to integrate and optimize multiple generative AI tools, including Jitera, to enhance enterprise development. By introducing these tools in phases and rigorously testing them, they have created workflows where each AI tool has a clearly defined role. This article highlights their approach to selecting, deploying, and operationalizing generative AI tools.
About PwC Consulting’s Technology Division and Generative AI
Can you tell us about your team?
N.H.: Our team is the engineering arm within the consulting firm, primarily responsible for development. We specialize in technology-driven solutions.
We support projects end-to-end, from PoC (proof of concept) to production, and standardize successful patterns into templates to scale them across projects. This approach ensures reproducibility and scalability, allowing advanced technologies like generative AI to create tangible business impact rather than remain experimental.
Our team brings together specialists in web, cloud, data engineering, scrum mastery, and project management, enabling us to tackle complex problems quickly and from multiple perspectives. When it comes to generative AI, team members with deep technical knowledge and broad business understanding accompany clients from conceptual stages to implementation-ready solutions.
What industries do you primarily serve?
N.H.: We provide consulting services across industries and company sizes, forming specialized consultant-engineer teams tailored to each client to deliver maximum value.
Why Generative AI Was Introduced
N.H.: Our key challenge was balancing speed, quality, and reproducibility. The major pain points were variable lead times caused by bespoke design and testing, fragmented learning because each project took a different approach, and the need to guarantee both speed and security.
In consulting-driven development, we are expected to rapidly deliver solutions that meet client needs. The traditional assumption that high quality requires long testing cycles made it difficult to meet promised timelines. Our goal was to establish a development process that compromises on none of quality, speed, or governance. That is why we explored generative AI, testing Jitera alongside two other automation tools to optimize both processes and workflows.
Generative AI Strategy and Tool Integration
How did you approach generative AI adoption?
N.H.: Security was our top priority. We selected tools that minimize the risk of information leaks so clients could confidently use them. Generative AI often lacks transparency in data usage, making it hard to explain to clients. We needed tools that could be used safely under client oversight, including on-premises models.
S.O.: Our selection process evaluates “challenge, impact, operations.” We score tools based on bottleneck reduction, template potential, integration with surrounding tools, and TCO (cost of adoption, operations, and learning). Small-scale PoCs are scored first, followed by phased deployment.
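The "challenge, impact, operations" scoring described above can be sketched as a simple weighted model. The criterion weights, scales, and candidate tool names below are hypothetical illustrations, not PwC's actual rubric:

```python
# Hypothetical weighted scoring for generative AI tool selection, following
# the criteria named above: bottleneck reduction, template potential,
# integration with surrounding tools, and TCO. Weights are assumptions.

CRITERIA_WEIGHTS = {
    "bottleneck_reduction": 0.35,
    "template_potential": 0.25,
    "tool_integration": 0.20,
    "tco": 0.20,  # lower adoption/operations/learning cost -> higher score
}

def score_tool(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 0-5 scale)."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Example: compare two hypothetical small-scale PoC candidates.
candidates = {
    "tool_a": {"bottleneck_reduction": 4, "template_potential": 3,
               "tool_integration": 4, "tco": 2},
    "tool_b": {"bottleneck_reduction": 3, "template_potential": 4,
               "tool_integration": 3, "tco": 4},
}

# Highest-scoring candidate proceeds to phased deployment.
ranked = sorted(candidates, key=lambda t: score_tool(candidates[t]),
                reverse=True)
```

A scored PoC like this makes the phased-deployment decision auditable: the same rubric can be re-applied when a tool or its pricing changes.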
How do you differentiate tool usage?
N.H.: We match tools to project phases. We have tested and implemented three tools: Jitera, a cloud-based E2E testing automation tool, and an AI coding assistant. Jitera focuses on unit testing during the design and implementation phases, while the other tools are chosen for integration or comprehensive testing contexts.
Positioning of Each Tool
N.H.: Jitera manages documentation, design, UI, and coding end-to-end. Its focus is on enhancing team reproducibility, not just individual productivity. It is an AI optimized for project management and collaborative development.
S.O.: The cloud-based E2E testing tool reduces regression testing across browsers and devices, enabling scenario creation and parallel execution with no code. It helps detect UI issues early and allows for efficient role allocation, with critical paths via E2E and others via contract or API testing.
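The split described above, critical paths via E2E and everything else via contract or API testing, relies on contract checks being cheap to write and run. A minimal sketch of such a check, with a hypothetical contract and payloads (not an actual PwC or Jitera artifact):

```python
# Hypothetical lightweight API contract test: instead of a full browser-based
# E2E run, assert that a response payload matches the agreed contract
# (field names and types). Contract and payloads are illustrative only.

CONTRACT = {"id": int, "name": str, "active": bool}

def check_contract(payload: dict, contract: dict) -> list[str]:
    """Return human-readable violations; an empty list means conformance."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

# Example: a conforming payload and a broken one.
ok = check_contract({"id": 1, "name": "demo", "active": True}, CONTRACT)
bad = check_contract({"id": "1", "name": "demo"}, CONTRACT)
```

Because these checks run in milliseconds, they can cover the long tail of endpoints while the slower cross-browser E2E suite stays focused on critical user paths.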
N.H.: The AI coding assistant embedded in the editor aids mid-to-senior engineers with code completion, suggestions, refactoring, and review, improving productivity. Jitera oversees project-level orchestration, while the coding assistant streamlines day-to-day tasks.
How Development Has Changed with Generative AI
N.H.: Previously, requirements, design, implementation, testing, release, and operations were linear, with documentation and testing done manually. Now, generative AI creates first drafts of user stories and architecture comparisons, letting humans focus on review.
With Jitera, application foundations and CRUD functionality are scaffolded quickly, domain logic is AI-assisted, and unit and contract tests are auto-generated. Critical E2E tests are prioritized, and CI/CD quality gates are automated, with humans intervening only for exceptions.
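An automated quality gate of the kind mentioned above can be sketched as a small script that fails the pipeline when thresholds are missed, so that humans are pulled in only on exceptions. The threshold names and values below are illustrative assumptions, not PwC's actual gate configuration:

```python
# Minimal sketch of a CI/CD quality gate, assuming test results have already
# been parsed into a summary dict (e.g. from coverage and JUnit reports).
# Thresholds are illustrative, not an actual PwC configuration.
import sys

THRESHOLDS = {
    "line_coverage": 0.80,       # minimum unit-test line coverage
    "tests_passed_ratio": 1.00,  # all generated unit/contract tests must pass
}

def evaluate_gate(report: dict[str, float]) -> list[str]:
    """Return the names of violated thresholds; empty means the gate passes."""
    return [name for name, minimum in THRESHOLDS.items()
            if report.get(name, 0.0) < minimum]

if __name__ == "__main__":
    # In CI this summary would come from the pipeline's test stage.
    report = {"line_coverage": 0.85, "tests_passed_ratio": 1.0}
    violations = evaluate_gate(report)
    if violations:
        # A failed gate blocks the release; this is the "exception"
        # that triggers human review.
        print("Quality gate failed:", ", ".join(violations))
        sys.exit(1)
    print("Quality gate passed")
```

Keeping the gate as explicit, versioned thresholds makes the human-in-the-loop boundary auditable: anything above the bar ships automatically, anything below it requires a reviewer.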
S.O.: AI also improves operations. Tasks like system monitoring, anomaly detection, and log analysis, previously manual, are now automated, reducing operational burden. AI suggests rapid responses, ensuring stability and quality. This approach boosts overall productivity and allows high-quality operation even with limited resources.
Y.N.: The combination and proper use of generative AI tools is critical, especially when client needs require mixed approaches. Jitera now partially integrates with MCP, expanding its applicability.
S.O.: For documentation, parallel project work often requires immediate handoffs. Jitera can generate design documents from source code, supporting a shift from “code first, doc later” to a more AI-driven approach.
Results Achieved
N.H.: Using Jitera, AI coding assistants, and E2E automation tools across projects, we achieved significant efficiency gains. Jitera reduced coding effort by approximately 30 percent, freeing time to reinvest in exception testing, refactoring, and quality improvements.
Challenges and Next Steps
N.H.: Evaluating the validity and accuracy of AI outputs is a new challenge, such as checking whether code follows standards or whether test cases are reasonable. For clients prioritizing speed, corrections may be deferred. For high-quality legacy systems being modernized, defining MECE (mutually exclusive, collectively exhaustive) test cases is critical.
Future Plans:
- Short-term: develop "project kick-off kits" integrating Jitera for templates, quality gates, and auto-documentation, to accelerate high-quality, repeatable project launches.
- Focus on requirements definition and output validation, turning generative AI from a simple assistant into a co-creator for conceptual support.
- Restructure enterprise design processes by using AI to model and structure requirements, even when clients lack a clear vision.
- Have AI support the process from requirements through testing and pre-release, freeing humans to evaluate appropriateness and validity.
Y.N.: We hope to continue supporting clients with Jitera in various forms.