AI Search as Digital Public Service: Lessons from Finnish Universities (Drupal + RAG in Production)

Session Room
Room 2 (Indigital)
Duration
40 min
Session track
Clients & Industry Experiences
Experience level
Intermediate

Universities don't compete on content; they compete on clarity. See how we shipped a trustworthy AI help portal for the University of Helsinki in three months, turning scattered Drupal ecosystems into sourced answers.

Prerequisites

To get the most from this session, attendees should have:

  • A basic understanding of Drupal concepts (content types, taxonomy, permissions, workflows)
  • Familiarity with site search fundamentals (indexing, relevance, analytics)
  • General awareness of what LLMs and RAG (retrieval-augmented generation) are (no deep AI background required)
  • Experience working with content-heavy organizations (universities/public sector/enterprise) is helpful but not mandatory
Outline

Wunder builds web services for multiple education-sector organizations in Finland. In this session we present a production case study: HelsinkiUni Help, a self-service portal for the University of Helsinki that provides a single search experience across multiple university sources and produces AI-generated answers grounded in cited content.

We’ll walk through the key decisions, trade-offs, and delivery tactics that enabled a fast timeline (~3 months) without sacrificing trustworthiness:

  • The problem in the education vertical: fragmented knowledge, multiple stakeholders, high expectations for accuracy and accessibility
  • Solution approach: RAG-based semantic search across multiple sources, paired with answer synthesis and conventional results
  • Trust and safety design: guardrails, “don’t answer when uncertain,” source visibility, and fallbacks
  • Quality engineering: relevance thresholds, adaptive filtering, evaluation and regression testing mindset
  • Operational excellence: automated content pipelines (crawl → process → embed → index), performance targets, and observability
  • Continuous improvement: search analytics and user feedback loops that reveal content gaps and service friction
  • What we'd do differently next time: governance, metadata strategy, feature flags and A/B rollouts, multilingual and persona-aware retrieval
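To make the "don't answer when uncertain" guardrail concrete, here is a minimal sketch of the pattern the outline describes: retrieval results carry relevance scores, answers are only synthesized when a score threshold is cleared, sources stay visible, and low-confidence queries fall back to conventional search. All names and the threshold value are illustrative assumptions, not the production implementation.

```python
from dataclasses import dataclass

# Assumed cutoff; in practice this is tuned per corpus and embedding model.
RELEVANCE_THRESHOLD = 0.75

@dataclass
class Chunk:
    """A retrieved passage from the vector index (hypothetical shape)."""
    text: str
    source_url: str
    score: float  # similarity score from semantic search

def answer_or_refuse(question: str, chunks: list[Chunk]) -> dict:
    """Apply the guardrail: synthesize an answer only when at least one
    retrieved chunk clears the relevance threshold; always expose sources."""
    relevant = [c for c in chunks if c.score >= RELEVANCE_THRESHOLD]
    if not relevant:
        # Safe default: refuse to generate and show conventional results.
        return {"answer": None, "fallback": "show_keyword_results", "sources": []}
    context = "\n".join(c.text for c in relevant)
    # In production the context would be passed to an LLM for synthesis;
    # here we just return it together with its citations.
    return {
        "answer": f"Based on {len(relevant)} cited source(s): {context}",
        "fallback": None,
        "sources": [c.source_url for c in relevant],
    }
```

The key design choice is that refusal is a first-class outcome with its own fallback path, rather than an error state.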
Learning Objectives

After the session, participants will be able to:

  • Explain when RAG is the right approach for institutional self-service (and when it isn’t)
  • Design a trustworthy AI answer UX (sources, fallbacks, refusal behavior) suitable for education/public-sector contexts
  • Apply practical quality guardrails for retrieval and generation (thresholds, cutoffs, “safe defaults”)
  • Set up evaluation and observability practices to prevent regressions and diagnose failure modes
  • Build an improvement loop using search analytics + user feedback to prioritize content fixes and product iterations
  • Take away a reusable delivery playbook for shipping an AI-assisted search/answer capability under real schedule constraints
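The evaluation and regression-testing objective above can be sketched as a golden-set check: a fixed list of real user questions with the sources they should retrieve, scored with recall@k on every change so retrieval regressions surface before users see them. Function and parameter names here are illustrative assumptions.

```python
from typing import Callable

def recall_at_k(expected: set[str], retrieved: list[str], k: int = 3) -> float:
    """Fraction of expected source URLs found in the top-k retrieved results."""
    if not expected:
        return 1.0
    return len(expected & set(retrieved[:k])) / len(expected)

def run_regression(search_fn: Callable[[str], list[str]],
                   golden: list[tuple[str, list[str]]],
                   min_recall: float = 0.8) -> list[tuple[str, float]]:
    """Run every golden question through the retriever and flag any
    question whose recall@k falls below the acceptance bar."""
    failures = []
    for question, expected_sources in golden:
        score = recall_at_k(set(expected_sources), search_fn(question))
        if score < min_recall:
            failures.append((question, score))
    return failures
```

Wired into CI, a non-empty failure list blocks the deploy; the same golden set doubles as a backlog signal, since questions that keep failing usually point at content gaps rather than retrieval bugs.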
