A web-based chat client for exploring Virtual Fly Brain (VFB) data and Drosophila neuroscience using a guardrailed LLM with tool calling, connected to the VFB MCP server via the OpenAI API.
- URL parameter support for initial queries and existing scene context (`?query=...&i=...&id=...`)
- Chat interface to explore Drosophila neuroanatomy, neural circuits, and research
- Access to VFB datasets, connectome data, and morphological analysis
- Display image thumbnails and construct 3D visualization scenes
- Generate URLs for VFB 3D browser with proper scene management
- Guardrailed responses covering VFB-related topics including papers, techniques, and methodologies
- Security: Advanced jailbreak detection to prevent attempts to bypass safety restrictions
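The URL parameters above can be parsed into initial chat state. A minimal sketch using the standard `URL` API; the `InitialState` shape and `parseInitialState` helper are hypothetical, not the client's actual code:

```typescript
// Hypothetical shape for the state derived from ?query=...&i=...&id=...
interface InitialState {
  query: string | null;        // initial question to send to the assistant
  templateAndImages: string[]; // "i": template ID followed by image IDs
  focusTerm: string | null;    // "id": the single focus term for the scene
}

function parseInitialState(url: string): InitialState {
  const params = new URL(url).searchParams;
  const i = params.get("i");
  return {
    query: params.get("query"),
    templateAndImages: i ? i.split(",") : [],
    focusTerm: params.get("id"),
  };
}
```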
The VFB Chat client includes comprehensive protection against common jailbreak attempts used to bypass LLM safety restrictions. The system automatically detects and blocks messages containing:
- Attempts to override or ignore system instructions
- Requests to enter "developer mode," "uncensored mode," or similar unrestricted states
- Role-playing as alternative AI personas (e.g., DAN, uncensored AI)
- Commands to modify system prompts or disregard rules
- Encoded or hidden prompts designed to circumvent filters
When a jailbreak attempt is detected, users receive a clear warning message and the request is blocked. This ensures the chat remains focused on Drosophila neuroscience and VFB-related topics.
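A detector in this spirit can be sketched as a pattern match over the incoming message. This is an illustrative sketch only; the patterns and the `isJailbreakAttempt` helper are assumptions, not the client's actual detection logic:

```typescript
// Illustrative patterns covering the categories listed above.
// A production filter would be broader and regularly updated.
const JAILBREAK_PATTERNS: RegExp[] = [
  /ignore (all |any )?(previous|prior|above) instructions/i,
  /\b(developer|uncensored|god) mode\b/i,
  /\bpretend (you are|to be) DAN\b/i,
  /(override|disregard|bypass) (the )?(system prompt|safety|rules|restrictions)/i,
];

function isJailbreakAttempt(message: string): boolean {
  return JAILBREAK_PATTERNS.some((p) => p.test(message));
}
```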
The VFB Chat client includes Google Analytics integration to monitor usage patterns and ensure quality control. All user queries and AI responses are tracked anonymously for:
- Usage monitoring and system performance analysis
- Quality control and improvement of responses
- Research into user interaction patterns with neuroscience data
Data Collected:
- Query text (truncated to 200 characters for privacy)
- Query and response lengths
- Processing duration
- Session identifiers (anonymous)
- Timestamps
Privacy Protection:
- Query text is truncated to prevent storage of long or sensitive content
- No personally identifiable information is collected
- Analytics data is used solely for quality control and system improvement
- A clear disclaimer is displayed at the bottom of the chat interface
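An anonymized event built from the fields listed above might look like the following. The event shape and `buildAnalyticsEvent` name are assumptions for illustration; only the 200-character truncation is stated in this document:

```typescript
interface AnalyticsEvent {
  queryText: string;     // truncated to 200 characters for privacy
  queryLength: number;
  responseLength: number;
  durationMs: number;
  sessionId: string;     // anonymous identifier, no PII
  timestamp: string;
}

function buildAnalyticsEvent(
  query: string,
  responseLength: number,
  durationMs: number,
  sessionId: string
): AnalyticsEvent {
  return {
    queryText: query.slice(0, 200), // truncate before storage
    queryLength: query.length,      // full length is still recorded
    responseLength,
    durationMs,
    sessionId,
    timestamp: new Date().toISOString(),
  };
}
```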
Please verify all information provided by the AI assistant:
- AI-generated responses may contain inaccuracies or outdated information
- Always cross-reference critical information with primary sources
- Use VFB links provided in responses to access authoritative data
- Report any concerns about response quality to the development team
Privacy and Security:
- Conversations may be monitored for quality control purposes
- No personally identifiable information should be shared in queries
- Confidential or sensitive research data should not be included in prompts
- The system is designed for educational and research purposes within Drosophila neuroscience
Responsible Use:
- Use this tool to enhance, not replace, your understanding of neuroscience concepts
- Cite appropriate sources when using information in research or publications
- Respect intellectual property and data usage rights of VFB and related resources
- Ensure Docker and Docker Compose are installed.
- Clone this repository.
- Set your OpenAI API key: `export OPENAI_API_KEY=your-key-here`
- Run `docker-compose up --build` to start the app.
- To use a different model, set the `OPENAI_MODEL` environment variable: `OPENAI_MODEL=gpt-4o docker-compose up --build`
For development without Docker:
- Create a `.env.local` file with your API configuration: `OPENAI_API_KEY=your-key-here`, `OPENAI_BASE_URL=https://api.openai.com/v1`, `OPENAI_MODEL=gpt-4o-mini`
- Run `npm install`
- Run `npm run dev`
The project includes a GitHub Actions workflow (.github/workflows/docker.yml) that automatically builds and pushes Docker images to Docker Hub on pushes and pull requests.
- Set up a Docker Hub repository: Create a repository named `vfbchat` under your Docker Hub account (e.g., `robbie1977/vfbchat`).
- Configure GitHub Secrets:
  - Go to your repository settings > Secrets and variables > Actions
  - Add `DOCKER_HUB_USER`: your Docker Hub username
  - Add `DOCKER_HUB_PASSWORD`: your Docker Hub password or access token
- The workflow will trigger on:
  - Pushes to any branch or tags starting with `v*`
  - Pull requests to `main`
- Images are built for `linux/amd64` and `linux/arm64` platforms and tagged appropriately.
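A workflow matching this description might look roughly like the sketch below. This is an illustrative guess at the shape of `.github/workflows/docker.yml`, using standard Docker actions; the repository's actual workflow may differ:

```yaml
# Illustrative sketch only — not the repository's actual workflow file.
name: Docker
on:
  push:
    branches: ["**"]
    tags: ["v*"]
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_HUB_USER }}
          password: ${{ secrets.DOCKER_HUB_PASSWORD }}
      - uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ secrets.DOCKER_HUB_USER }}/vfbchat:latest
```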
- Access the app at `http://localhost:3000`
- Without URL parameters, the chat starts with a welcome message and example queries
- Append URL parameters for initial setup, e.g., `http://localhost:3000?query=medulla&i=VFB_00101567&id=VFB_00102107`
- Chat with the assistant to explore VFB data
- Click "Open in VFB 3D Browser" to view the scene
- Model: Default is `gpt-4o-mini`, configurable via the `OPENAI_MODEL` env var. Any OpenAI-compatible model with tool calling support will work.
- API Endpoint: Default is `https://api.openai.com/v1`, configurable via `OPENAI_BASE_URL` for use with OpenAI-compatible proxies (e.g., ELM at Edinburgh).
- Guardrailing: Implemented via a system prompt that allows responses about Drosophila neuroscience, VFB data/tools, research papers, and methodologies, while using MCP tools for accurate information.
- MCP Integration: The LLM calls VFB MCP tools (`get_term_info`, `search_terms`, `run_query`) via the OpenAI tool calling API.
- Server URL: https://vfb3-mcp.virtualflybrain.org/
- Tools:
  - `get_term_info(id)`: Retrieves term details, including images keyed by template.
  - `search_terms(query)`: Searches for terms matching the query.
  - `run_query(id, query_type)`: Runs specific queries (e.g., PaintedDomains) on terms.
- Data Structure:
  - Terms have IDs like `VFB_00102107` or `FBbt_00003748`.
  - Images are associated with templates (e.g., `VFB_00101567` for JRC2018Unisex).
  - Thumbnails: `https://www.virtualflybrain.org/data/VFB/i/.../thumbnail.png`
- URL Construction for Scenes: `https://v2.virtualflybrain.org/org.geppetto.frontend/geppetto?id=<focus_term_id>&i=<template_id>,<image_id1>,<image_id2>`
  - `id`: Focus term (only one; the site shows its info).
  - `i`: Comma-separated list starting with the template ID, followed by image IDs.
- Limitations:
  - Images must be aligned to the same template to view together.
  - Only one term can be the focus per scene, but all term info is accessible in the chat.
  - Templates define the coordinate space.
- Docker: `docker-compose up --build`
- Local: `npm run dev` (requires `.env.local` with API credentials)
- Build: `npm run build`
- API: POST to `/api/chat` with `{ message, scene }`
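The scene-URL construction described above can be sketched as a small helper. The URL format is from this documentation; the `buildSceneUrl` function name is hypothetical:

```typescript
// Builds a VFB 3D browser URL: ?id=<focus term>&i=<template>,<image ids...>
// All IDs are plain VFB identifiers (e.g., VFB_00102107), so no URL encoding
// is needed and the commas in "i" stay literal.
function buildSceneUrl(
  focusTermId: string,
  templateId: string,
  imageIds: string[]
): string {
  const base = "https://v2.virtualflybrain.org/org.geppetto.frontend/geppetto";
  // "i" must start with the template ID; images must be aligned to that template.
  const i = [templateId, ...imageIds].join(",");
  return `${base}?id=${focusTermId}&i=${i}`;
}
```

For example, `buildSceneUrl("VFB_00102107", "VFB_00101567", ["VFB_00102107"])` reproduces a scene like the one in the usage example above.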
See LICENSE file.