CHALLENGE
Innovation teams frequently struggle to access relevant project history, understand client expectations, or retrieve critical decisions made in past engagements—especially under tight timelines. Our goal was to create a lightweight, scalable and human-centered solution that made historical project knowledge queryable and accessible to new team members, without requiring complex system integration.
EXPERTISE PROVIDED
Internal Innovation, Knowledge Systems Design, Prompt Engineering, Strategic Enablement
OUTCOME
Instead of building a new system from scratch, we unlocked strategic value from an existing tool by redesigning how it was used, shaping both the prompts and mental models around it. As a result, consultants can gain more autonomy to explore past projects, uncover client priorities, and benchmark success criteria without relying on others or manually digging through files. This approach not only enhanced onboarding speed and strategic alignment, but also positioned AI not as a black-box solution, but as a transparent, interpretable, and collaborative thinking partner.


Phase 1: Understanding the Need
We began by reviewing a pre-existing map of common friction points in project onboarding, previously identified by the International team. These included:
• Inability to retrieve past strategic decisions
• Fragmented access to project documentation
• Ambiguity around how success was defined in previous engagements
• Lack of a standardized structure for documenting projects, which hindered knowledge sharing and tool potential
We grouped these frictions into strategic question clusters that formed the foundation of our prompt design. This is our question map:
1. To understand past projects and success criteria:
• What can you tell me about the [project name]?
• What was the goal of the [project name]?
• What were the outcomes of the [project name]?
• What was considered a success picture for similar projects like [project name]?
2. To extract insights from client documentation:
• What questions do these documents help answer?
• Which documents contain the most strategic insights or decision points?
• Are there any recurring themes, patterns, or priorities across the documents?
• Are there contradictions or inconsistencies between them?
• Are there gaps I need to clarify with the client before moving forward?
• Can you map how these documents relate to each other?
3. To understand the client or account more broadly:
• What can you tell me about [client name]?
• What can you tell me about the [Client Area]?
• What are the projects related to [project type]?
4. To compare project documentation over time:
• What are the most significant changes introduced in [latest version]?
• Were any tools, processes, or frameworks deprecated or replaced?
• Does the latest version introduce new roles or responsibilities?
• How should I communicate these changes to stakeholders?
These questions became the foundation for what Gemini needed to “understand” and respond to—setting the stage for a prompt architecture that was as strategic as it was operational.
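To show how one of these clusters can be turned into a reusable, parameterized question library, here is a minimal sketch in Python. The cluster names, placeholders, and helper function are our own illustrative assumptions, not part of the original toolkit.

```python
# Minimal sketch: the question map as a parameterized question library.
# Cluster names, placeholders, and the helper below are illustrative
# assumptions, not the exact structure used in the project.

QUESTION_MAP = {
    "past_projects_and_success": [
        "What can you tell me about the {project_name}?",
        "What was the goal of the {project_name}?",
        "What were the outcomes of the {project_name}?",
        "What was considered a success picture for similar projects like {project_name}?",
    ],
    "client_context": [
        "What can you tell me about {client_name}?",
        "What are the projects related to {project_type}?",
    ],
}

def build_questions(cluster: str, **details: str) -> list[str]:
    """Fill a cluster's placeholders with the details of a real engagement."""
    return [question.format(**details) for question in QUESTION_MAP[cluster]]

# Example (hypothetical project name): questions ready to paste into Gemini.
for question in build_questions("past_projects_and_success", project_name="Atlas redesign project"):
    print(question)
```

Keeping the questions in one place like this makes them easy to version as the team learns which phrasings Gemini handles well.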
Phase 2: Prototyping and Prompt Design
This phase was defined by constraint and experimentation. Gemini did not initially follow instructions as expected. It struggled with ambiguous language, ignored structural cues, and sometimes pulled irrelevant data. Instead of treating these as tool limitations, we treated them as design challenges—and the prototype became a space for uncovering how the system interprets prompts.
1. AI does not default to precision
Gemini often needed very structured, directive input. We learned that a programming-like prompt language—imperative, explicit, and modular—worked best to guide its behavior. Vague or open-ended questions led to drift or hallucination.
2. Behavioral alignment required testing the model’s internal reasoning
We explored how Gemini explained its own “thinking,” and used this to refine the structure of prompts, improve clarity, and reduce hallucination risks.
3. Failures were the foundation of prompt architecture
Many of the most effective prompt structures came directly from failures. When Gemini misbehaved, we broke down why, rewrote the flow, and tested alternatives until the assistant responded with consistency.
4. Users need to co-reason with the AI
The design was never about automation. It was about enabling teams to engage with Gemini’s logic, verify insights, and use the tool as a collaborative thinking partner rather than a one-way answer machine.
These lessons informed a series of reusable prompt templates tailored to real project scenarios: success benchmarking, document comparison, strategic insight retrieval, alignment mapping, and gap analysis. Each was designed not just for function, but for interpretability, adaptability, modularity, and practical field use.
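As an illustration of the imperative, explicit, and modular style described above, here is a minimal sketch of what a document-comparison template could look like. The section labels, wording, and parameters are assumptions for illustration, not the exact templates we shipped.

```python
# Sketch of a modular, imperative prompt template for comparing two versions
# of project documentation. Wording and parameters are illustrative only.

DOC_COMPARISON_TEMPLATE = """\
ROLE: You are helping a consultant compare two versions of project documentation.

TASK:
1. List the most significant changes introduced in {latest_version} compared to {previous_version}.
2. Flag any tools, processes, or frameworks that were deprecated or replaced.
3. Note any new roles or responsibilities introduced in {latest_version}.

CONSTRAINTS:
- Use only information found in the two documents named above.
- If something is unclear or missing, say so explicitly instead of guessing.

OUTPUT FORMAT:
- A short bulleted list per task item, citing the document and section each point comes from.
"""

def build_comparison_prompt(previous_version: str, latest_version: str) -> str:
    """Fill the template so it can be pasted into Gemini as-is."""
    return DOC_COMPARISON_TEMPLATE.format(
        previous_version=previous_version,
        latest_version=latest_version,
    )

# Example with hypothetical document names.
print(build_comparison_prompt("Delivery Playbook v2", "Delivery Playbook v3"))
```

Because each section (role, task, constraints, output format) is separable, the same skeleton can be recombined for other use cases such as success benchmarking or gap analysis.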

Phase 3: Operationalization and Enablement
We created:
1. A step-by-step onboarding guide to get started with Gemini
2. A library of question prompts to explore project history and context
3. Troubleshooting paths for non-technical users
4. A proposal to standardize file naming and case tagging to improve Gemini performance over time (one possible convention is sketched below)
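To make the last deliverable concrete, here is a small sketch of what a naming convention and a lightweight check could look like. The pattern itself (client_project_doctype_YYYY-MM) is a hypothetical example, not the standard that was adopted.

```python
# Hypothetical naming convention: client_project_doctype_YYYY-MM.ext
# The pattern below is an illustrative assumption, not the adopted standard.
import re

FILENAME_PATTERN = re.compile(
    r"^(?P<client>[a-z0-9]+)_"                           # client or account tag
    r"(?P<project>[a-z0-9-]+)_"                          # project or case tag
    r"(?P<doctype>brief|decisions|outcomes|proposal)_"   # document type
    r"(?P<date>\d{4}-\d{2})"                             # year and month
    r"\.(pdf|docx|gdoc)$"
)

def follows_convention(filename: str) -> bool:
    """Return True if a file name matches the (hypothetical) convention."""
    return FILENAME_PATTERN.match(filename) is not None

print(follows_convention("acme_atlas-redesign_decisions_2023-05.pdf"))  # True
print(follows_convention("final_FINAL_v3.pdf"))                          # False
```

Consistent names and tags give both Gemini and the team reliable anchors for retrieving the right documents, which is why the proposal pairs naming rules with case tagging.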
Final thought: In this process, AI is no longer just a tool, but a strategic co-thinker!