AI-Augmented Design · Enterprise · 0→1

Infosys: AI-Augmented Learning Design.

Designing GenAI rubric engines and recommendation systems at enterprise scale for Imagine Learning, a K-12 EdTech platform serving students across the US.

Senior Product Designer · 2023 – Present · Infosys · Fortune Global 500 · 5+ Product Teams

Full case study available under NDA. Contact me directly for a confidential walkthrough.

Key Outcomes

70% · Assessment Time Reduction · vs. manual baseline, n=28 teachers
5+ · Product Teams · design system served
3 · AI Tools Shipped · in production
NDA · Full Details · available on request

Scope & NDA

This work is covered under a Non-Disclosure Agreement with Infosys and Imagine Learning. Key outcomes and high-level contributions are publicly shareable; design artifacts, full case study documentation, and detailed process notes can be shared in a confidential setting. Contact me directly to schedule a walkthrough.

01 — Outcomes

What We Shipped

70%

Reduction in assessment time

AI rubric engine vs. manual baseline (time-on-task study, n=28 teachers, pre/post). Specific data available under NDA.

5+

Product teams served

Unified design system components adopted across Imagine Learning's product portfolio.

3

AI tools shipped to production

Including GenAI rubric generators, adaptive recommendation engine UX, and feedback tools.

02 — Contributions

What I Did

01 — Designed UX for GenAI-powered rubric generation engines, reducing manual evaluation from hours to minutes

02 — Led design system expansion for Imagine Learning, adding AI-specific interaction patterns and states

03 — Created end-to-end flows for adaptive content recommendation systems

04 — Ran design sprints with cross-functional teams (ML engineers, curriculum designers, product managers)

05 — Established accessibility standards (WCAG 2.2) across all AI-facing features

06 — Delivered high-fidelity prototypes for executive stakeholder reviews

02.5 — Design Challenges

The Hard Problems

Three design tensions that defined the engagement, all shareable without violating the NDA.

AI Confidence ≠ User Trust

The rubric engine outputs confidence scores, but showing raw percentages caused teachers to either over-trust (80% feels like a guarantee) or dismiss (60% feels unreliable). Designed a 3-tier signal system (Verified / Suggested / Uncertain) that anchored decisions in pedagogical context, not probability.

Outcome

Teacher acceptance rate increased from 41% to 74% in A/B testing.
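To make the pattern concrete, a thin mapping layer like the sketch below could translate raw model confidence into the three tiers. This is an illustrative sketch only; the thresholds, cutoffs, and guidance copy are assumptions, not the shipped values.

```typescript
// Illustrative confidence-to-tier mapping.
// Thresholds and guidance copy are assumptions, not the shipped values.
type ConfidenceTier = "Verified" | "Suggested" | "Uncertain";

interface TierSignal {
  tier: ConfidenceTier;
  // Action-oriented copy anchors the decision in pedagogy, not probability.
  guidance: string;
}

function toTierSignal(confidence: number): TierSignal {
  // Hypothetical cutoffs; real values would come from calibration with teachers.
  if (confidence >= 0.85) {
    return { tier: "Verified", guidance: "Aligned with the objective. Review and approve." };
  }
  if (confidence >= 0.6) {
    return { tier: "Suggested", guidance: "Likely usable. Check the criteria wording first." };
  }
  return { tier: "Uncertain", guidance: "Needs your judgment. Edit before using." };
}
```

The design point is that the UI never surfaces the number itself; teachers only ever see the tier and what to do next.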

GenAI Latency UX

Rubric generation takes 3-8 seconds — far longer than users expect from 'AI'. A generic spinner caused drop-off. Designed a progressive reveal: skeleton rubric structure appears immediately, then cells populate sequentially, creating a perception of real-time generation.

Outcome

Drop-off during generation reduced by 55%.
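The mechanic can be sketched roughly as below, assuming the generated cells stream back from the model one at a time. The function names, grid size, and streaming interface are illustrative assumptions, not the production code.

```typescript
// Sketch of the progressive-reveal pattern; names and sizes are illustrative.
interface RubricCell {
  criterion: string; // e.g. "Evidence use"
  level: string;     // e.g. "Proficient"
  text: string;      // generated descriptor for this cell
}

async function revealRubric(
  cells: AsyncIterable<RubricCell>,                     // cells as the model emits them
  renderSkeleton: (rows: number, cols: number) => void, // paints the empty grid
  fillCell: (cell: RubricCell) => void,                 // populates one cell
): Promise<void> {
  // 1. Paint the full empty grid immediately, so the 3-8 second wait reads
  //    as "the rubric is taking shape", not "nothing is happening".
  renderSkeleton(4, 4);

  // 2. Populate cells sequentially as they arrive, which creates the
  //    perception of real-time generation described above.
  for await (const cell of cells) {
    fillCell(cell);
  }
}
```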

One System, Five Teams

Five product teams had divergent component needs — elementary literacy, middle-school math, special ed, teacher dashboards, admin tools. Designed a token layer with 4 semantic contexts (learner, educator, admin, assessment) that let the same components adapt across surfaces.

Outcome

Unified system with zero hard forks across 5 products.
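Structurally, the idea is a shared component set that reads a thin semantic layer keyed by context. The sketch below uses hypothetical token names and values; what matters is that the base components never fork, only the semantic layer varies.

```typescript
// Hypothetical token layer: four semantic contexts over one shared component set.
const contexts = ["learner", "educator", "admin", "assessment"] as const;
type Context = (typeof contexts)[number];

interface SemanticTokens {
  surface: string;                    // background token the component resolves
  accent: string;                     // primary action / emphasis token
  density: "comfortable" | "compact"; // spacing scale per audience
}

// Token names and values are illustrative, not the production palette.
const semanticTokens: Record<Context, SemanticTokens> = {
  learner:    { surface: "--surface-warm",    accent: "--accent-playful", density: "comfortable" },
  educator:   { surface: "--surface-neutral", accent: "--accent-primary", density: "comfortable" },
  admin:      { surface: "--surface-neutral", accent: "--accent-muted",   density: "compact" },
  assessment: { surface: "--surface-focus",   accent: "--accent-primary", density: "compact" },
};

// Every team consumes the same components and passes its context; components
// resolve tokens through this one function, so there is nothing to fork when
// a surface needs a different look.
function tokensFor(ctx: Context): SemanticTokens {
  return semanticTokens[ctx];
}
```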

03 — Work Samples

The Product We Built

High-fidelity screens from the AI tooling suite. Full documentation available under NDA.

imagine.infosys.com/rubric-generator
[Screenshot: learning objective input, generated rubric table with criteria and performance levels, and an AI suggestions panel]

AI Rubric Generator — generates assessment criteria from a learning objective in seconds, with contextual AI suggestions.

imagine.infosys.com/recommendations
[Screenshot: per-student progress tracking, AI-identified skill gaps, and adaptive content recommendations]

Adaptive Recommendation Engine — surfaces personalised content per student based on ML-identified skill gaps.

04 — Design Iterations

How the AI Confidence Signal Evolved

Three rounds of internal testing with teachers before the confidence signal shipped. Each round invalidated a prior assumption.

Round 1
Raw percentage (68% confident)
Finding

Teachers over-trusted scores above 80% as guarantees and dismissed scores below 60% entirely. Binary thinking, not probabilistic.

Decision

Drop percentages. Switch to qualitative signals.

Round 2
Traffic-light system (Red / Amber / Green)
Finding

Green triggered rubber-stamping — teachers stopped reading the rubric text. 'Green means approve' was too automatic.

Decision

Remove green. Reframe as signal, not verdict.

Round 3
3-tier signal (Verified / Suggested / Uncertain)
Finding

Teachers engaged differently at each tier — Verified rubrics were approved 2× faster; Uncertain triggered review. Acceptance rate: 41% → 74%.

Decision

Ship this version. Anchored in pedagogical action, not probability.

Want to see the full case study?

I'm happy to walk you through the work, process, and outcomes in a confidential setting.