Graphic presenting the visual appearance of the AI chatbot implemented on the ProjektMagazin platform.

AI Document Chatbot for ProjektMagazin.de

We built an AI-powered document chatbot for ProjektMagazin that turns minutes of manual search into seconds of intelligent conversation, giving users instant access to thousands of project management resources.

Client: ProjektMagazin.de

Industry: Knowledge Management

Status: in production (client satisfied)

About ProjektMagazin

ProjektMagazin.de is a Drupal-based online platform that provides practical project management resources to professionals in German-speaking countries.

Powered by expert authors and proven methodologies, ProjektMagazin helps these professionals work more efficiently and deliver better results. Founded in 2000, it offers articles, ready-to-use templates and methods to improve project effectiveness.

However, as the knowledge base grew, the platform's users were spending too much time looking for information that was already in the system, leading to decreased productivity and frustration. The client wanted to address these challenges and make their extensive content library instantly accessible.

Droptica, drawing on its experience in AI development, partnered with ProjektMagazin to build an intelligent solution.
 

Challenge

ProjektMagazin.de needed an intelligent way for people to access their extensive knowledge base without manually searching through hundreds of documents and articles.

The client’s Drupal-based platform required a solution that would:

  • provide instant, accurate answers to user questions,
  • search across multiple content types and taxonomies,
  • keep information always up-to-date with their CMS,
  • scale efficiently without escalating costs.

Solution

We delivered a production-ready AI chatbot that revolutionized how ProjektMagazin's users access information. Our intelligent document search system understands natural language questions, searches through indexed content, grades document relevance, and generates accurate responses based on the most relevant information found.

The result is a system that turns minutes of manual search into seconds of intelligent conversation.

 

What we built

Graphic presenting the AI chatbot implemented on the ProjektMagazin client platform.

1. Intelligent Question Classification System

We implemented a smart routing system that automatically analyzes each incoming question to determine the best way to handle it.

  • When users ask simple conversational questions like "Why are you here?" or "What can you do?", our system recognizes these as generic queries and routes them directly to a lightweight response generator.
  • For document-specific questions, the system activates our full retrieval pipeline. 

This intelligent classification reduces API costs by preventing unnecessary searches through the document database for questions that don't need them.
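The routing idea can be sketched in a few lines. This is a minimal illustration, not the production classifier: `classify_question` stands in for the LLM-based classification call, and the keyword patterns are placeholders.

```python
# Hypothetical sketch of question routing; the pattern list and function
# names are illustrative, not the production implementation.

GENERIC_PATTERNS = ("why are you here", "what can you do", "who are you", "hello")

def classify_question(question: str) -> str:
    """Return 'generic' for small talk, 'document' for knowledge-base queries.
    In production an LLM classifier makes this decision; keyword matching
    here is only a stand-in."""
    q = question.lower().strip()
    return "generic" if any(p in q for p in GENERIC_PATTERNS) else "document"

def route(question: str) -> str:
    """Send the question to the cheap responder or the full retrieval pipeline."""
    if classify_question(question) == "generic":
        return "lightweight_responder"   # no vector search, no retrieval cost
    return "retrieval_pipeline"          # full RAG: retrieve -> grade -> answer
```

Generic questions never touch the vector store, which is where the cost savings come from.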

The result:

The client now achieves faster response times while keeping operational costs predictable and sustainable, even as usage scales.

Project manager using the AI chatbot on a platform with materials and articles on project management.

2. Advanced document indexing with rich metadata

The Droptica team built a comprehensive indexing system that connects directly to ProjektMagazin's Drupal database through JSON APIs. Unlike simple text extraction, our system captures rich metadata with every document:

  • Category and taxonomy assignments for precise filtering.
  • Author information and IDs for source attribution.
  • Custom properties specific to ProjektMagazin's content structure.

This metadata-rich approach enables the chatbot to provide not just accurate answers, but also proper context and source attribution. Users can see exactly which documents and categories their answers come from, building trust and transparency.
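Conceptually, each indexed chunk carries its Drupal metadata alongside the text. The field names below are illustrative, not ProjektMagazin's actual schema:

```python
# Hypothetical shape of one indexed chunk; field names are illustrative
# and do not reflect ProjektMagazin's real content model.

def build_index_record(node: dict, chunk_text: str) -> dict:
    """Attach Drupal metadata to a text chunk before it is embedded."""
    return {
        "text": chunk_text,
        "metadata": {
            "node_id": node["id"],
            "content_type": node["type"],             # e.g. "article"
            "categories": node.get("categories", []), # taxonomy terms for filtering
            "author_id": node.get("author_id"),       # source attribution
            "url": node.get("url"),                   # shown to users as a source
        },
    }

record = build_index_record(
    {"id": 42, "type": "article", "categories": ["Agile"],
     "author_id": 7, "url": "/agile-basics"},
    "Scrum defines three roles ...",
)
```

Because the metadata travels with every chunk, the chatbot can filter by taxonomy at query time and cite the source document in its answer.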

The result:

We designed the indexing architecture to be fully extensible. The client can easily add new content types—whether articles, taxonomy terms, or entirely new entity types—without requiring system-wide changes. Each content type has its dedicated endpoint and indexing pipeline, making expansion straightforward and maintainable.

Graphic showing the AI chatbot with a response and additional resource sources for the user.

3. Two-stage document grading for superior accuracy

We implemented an innovative two-stage retrieval process that dramatically improves answer quality:

  • Stage 1: Broad Retrieval 
    Our vector search retrieves approximately 20 candidate document chunks that might contain relevant information.
     
  • Stage 2: LLM-Powered Grading 
    Each candidate chunk is then evaluated by the LLM for actual relevance to the specific question. The system grades each chunk and selects only the top 12 most relevant pieces of information for response generation.

The result:

This approach ensures that users receive answers based on truly relevant content, not just semantically similar text. The client reported significant improvements in answer accuracy and user satisfaction after implementing document grading.

4. Real-time content synchronization 

We built a webhook-based system that keeps the chatbot's knowledge instantly current. When editors save or update content in Drupal, our system:

  1. Receives an immediate notification via webhook.
  2. Re-indexes only the changed content.
  3. Updates the vector database in real-time.

Additionally, we implemented a nightly synchronization job that catches any missed updates and ensures complete data consistency.
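The webhook flow can be reduced to this skeleton. The payload fields and helper names (`fetch_node`, `reindex`) are illustrative, and a dictionary stands in for the vector database:

```python
# Minimal sketch of the webhook flow; payload fields and helper names
# are hypothetical, and INDEX stands in for the vector database.

INDEX: dict[int, dict] = {}

def fetch_node(node_id: int) -> dict:
    """Placeholder for the Drupal JSON API call returning fresh content."""
    return {"id": node_id, "text": f"latest body of node {node_id}"}

def reindex(node: dict) -> None:
    """Re-embed and upsert only this node's chunks (stubbed as a dict write)."""
    INDEX[node["id"]] = node

def handle_webhook(payload: dict) -> None:
    """Called when Drupal notifies us that content was saved or updated."""
    node = fetch_node(payload["node_id"])  # fetch only the changed content
    reindex(node)                          # update the index in real time

handle_webhook({"node_id": 42, "event": "node.update"})
```

Re-indexing only the changed node keeps updates cheap, while the nightly full sync guarantees nothing is ever permanently missed.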

The result: 

Users always get answers based on the absolute latest information. When the marketing team publishes a new article or updates a policy document, the chatbot knows about it within seconds—not hours or days.

5. Strategic response caching

We implemented an intelligent caching layer that identifies frequently asked questions and stores their responses. When a cached question is asked again, the system returns the answer instantly without making expensive LLM API calls.
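The core of such a cache is question normalization, so trivially different phrasings hit the same entry. A minimal sketch, assuming an in-memory store (production systems would typically use Redis or similar):

```python
import hashlib

# Illustrative in-memory cache; the normalization rules are a simplified
# stand-in for the production logic.
CACHE: dict[str, str] = {}

def cache_key(question: str) -> str:
    """Normalize the question so trivial variations map to one entry."""
    normalized = " ".join(question.lower().split()).rstrip("?!. ")
    return hashlib.sha256(normalized.encode()).hexdigest()

def answer(question: str, generate) -> str:
    """Return a cached answer if present; otherwise call the expensive LLM."""
    key = cache_key(question)
    if key in CACHE:
        return CACHE[key]                 # cache hit: no API call
    response = CACHE[key] = generate(question)
    return response
```

"What is Scrum?" and "what is scrum" produce the same key, so the second user gets an instant answer at zero API cost.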

This optimization delivers multiple benefits: 

  • Near-instant response times for common questions.
  • Significant cost reduction by avoiding redundant API calls.
  • Reduced server load during peak usage times.
  • Improved user experience with consistently fast responses.

The result: 

The client achieves response times under 100 ms for cached queries while maintaining comprehensive coverage for new questions.

Project manager using the AI chatbot to search for documents and articles on the platform.

6. AI document chatbot’s built-in security measures

We implemented robust prompt injection protection to guard against malicious attempts to manipulate the chatbot. Our security measures include:

  • Input validation to detect and block injection attempts.
  • System prompt isolation to prevent unauthorized instruction changes.
  • Response filtering to ensure outputs stay within allowed boundaries.
  • Continuous monitoring to identify new attack patterns.
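The input-validation layer can be illustrated with a simple pattern check. These patterns are purely illustrative; a production system layers heuristics like this with system-prompt isolation and response filtering, as listed above:

```python
import re

# Illustrative detection patterns only -- real injection defenses combine
# heuristics with prompt isolation and output filtering.
INJECTION_PATTERNS = [
    r"ignore (all |previous |prior )?instructions",
    r"you are now",
    r"reveal .*system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """First-line heuristic check run before the question reaches the LLM."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Flagged inputs can be rejected outright or routed to a restricted response path instead of the full pipeline.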

The result:

Testing confirmed that the implemented security measures effectively blocked common prompt injection attempts and ensured chatbot responses remained within intended boundaries.

Graphic showing the AI chatbot, where you can provide feedback on the conversation.

7. Industry-standard tools for monitoring and enhancement

We built the chatbot using industry-standard tools and frameworks that provide robust monitoring capabilities and flexibility for continuous improvement. This approach gave us:

  • Complete observability through LangSmith for tracking all LLM interactions and costs.
  • User feedback collection built directly into the interface to gather quality ratings on chatbot responses.
  • Flexible architecture allowing easy integration of new features and capabilities.
  • Production-grade monitoring to identify issues and optimization opportunities.
  • Framework independence enabling us to adapt and evolve the system as AI technology advances.
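LangSmith observability, in particular, is switched on through environment variables rather than code changes. A sketch with placeholder values (the API key and project name below are illustrative):

```python
import os

# LangSmith tracing for LangChain applications is enabled via environment
# variables; the key and project name here are placeholders.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"   # placeholder
os.environ["LANGCHAIN_PROJECT"] = "projektmagazin-chatbot"   # illustrative name

# With these set, LangChain calls in the process are traced to LangSmith
# automatically -- latency, token usage, and cost per request.
```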

The result:

This tooling infrastructure ensures the chatbot can be continuously monitored, improved, and adapted to emerging best practices without being locked into any proprietary platform. User feedback on response quality provides valuable insights for ongoing system refinement.

Business impact and operational improvements

Time savings

Users who previously spent 10-15 minutes searching through documentation now get accurate answers in seconds. This translates to significant productivity gains across the organization.

Cost efficiency

Our intelligent caching and question routing system keeps API costs predictable and sustainable. The client operates the chatbot at a fraction of the cost of naive implementations.

Always current information

Real-time indexing means users never work with outdated information. When content changes, the chatbot knows immediately.

Scalability without complexity

The modular architecture allows ProjektMagazin to add new content types and expand functionality without system-wide rewrites.

The client reports high user satisfaction and has already identified visual UI enhancements as the next phase of improvements—a clear signal that the core functionality meets all their needs.

Technology choices

We selected our technology stack based on production reliability, scalability, and long-term maintainability.

AI & Machine Learning

We chose LangChain for its mature RAG capabilities and LangGraph for orchestrating complex decision trees. Despite some community criticism about framework complexity, these tools proved effective for building production-grade AI workflows. Their active development, strong community, and solid financial backing made them a safe choice for a mission-critical system. To ensure production reliability and continuous improvement, we integrated LangSmith for comprehensive monitoring and debugging.

Elasticsearch for vector storage

We selected Elasticsearch as our vector database because of its proven scalability, efficient similarity search, and excellent integration with the LangChain ecosystem. This choice allows ProjektMagazin.de to handle its current document collection while easily scaling to accommodate future growth.

Custom chat interface

Rather than adapting a pre-built solution, we developed a custom chat interface that integrates seamlessly with ProjektMagazin's Drupal platform. This gave us complete control over the user experience and eliminated the overhead and limitations of third-party frameworks. The interface features responsive design optimized for both desktop and mobile devices, ensuring users can access knowledge from any device.

Python for backend processing

We built the backend API in Python to leverage the rich ecosystem of AI/ML libraries and ensure optimal integration with LangChain and Elasticsearch.

Read more about the technical aspects of this project on our blog

In our articles, we describe how we ensured the chatbot’s knowledge stays up to date, how we selected the right tools and frameworks to build a stable RAG pipeline, and which techniques we used to reduce response times and improve the precision of generated answers.

Like this project? Build an AI Document Chatbot with us!

Book a free meeting to discuss your AI chatbot goals and requirements.

We'll contact you to explore how we can help make your content instantly accessible to users.