NeuralPrint - See your brain react to anything

NeuralPrint is a computational neuroscience web application that predicts and visualizes how the human brain responds to multimedia stimuli in real time. Built on top of TRIBE v2, a research-grade brain encoding model, the platform accepts video, audio, or text input and generates a comprehensive neural activation profile showing which brain regions light up, when, and why.
The pipeline runs entirely on GPU-accelerated cloud infrastructure. When a user submits a stimulus, the backend extracts multimodal features: visual frames from video, spectral features from audio, and semantic embeddings from text. These features are then fed through a pretrained cortical encoding model that maps them to predicted blood-oxygen-level-dependent (BOLD) responses across more than 50 brain regions, organized into canonical networks such as the Visual, Auditory, Language, Default Mode, Salience, and Frontoparietal networks.
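The encoding step can be pictured as a learned readout from stimulus features to per-region response time courses. The sketch below is a minimal illustration of that idea, not the actual TRIBE v2 code: the region names, feature size, and the use of a simple linear readout with random weights are all assumptions made for this example.

```python
# Illustrative sketch of a cortical encoding readout (NOT the TRIBE v2 model).
# Region names, feature dimensionality, and random weights are placeholders.
import numpy as np

REGIONS = ["V1", "A1", "Broca", "PCC"]       # stand-ins for the 50+ real ROIs
N_FEATURES = 128                             # assumed combined multimodal feature size

rng = np.random.default_rng(0)
W = rng.normal(size=(N_FEATURES, len(REGIONS)))  # "pretrained" weights (here: random)

def predict_bold(features: np.ndarray) -> dict:
    """Map per-timestep stimulus features to predicted BOLD per region.

    features: (T, N_FEATURES) array, one row per stimulus timestep.
    Returns a mapping of region name -> (T,) predicted response.
    """
    bold = features @ W                      # (T, n_regions) linear readout
    return {name: bold[:, i] for i, name in enumerate(REGIONS)}

features = rng.normal(size=(10, N_FEATURES))  # e.g. 10 timesteps of features
responses = predict_bold(features)
```

A real encoding model would replace the random linear map with weights fit to fMRI recordings, but the input/output shape of the computation is the same.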
The frontend delivers an immersive analysis experience with four main views. The 3D Brain tab presents a split view: an interactive Three.js-rendered brain mesh on the left showing vertex-level heatmap activations and annotated region markers, alongside the original video or audio on the right, both synchronized to a shared timeline slider. The Regions tab provides a ranked breakdown of the most activated brain areas with activation percentages and network assignments. The Neural Map tab renders a graph-based network visualization showing how activated regions connect to each other functionally. The Analysis Timeline tab displays a server-generated temporal overview of brain activity.
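The vertex-level heatmap boils down to mapping each vertex's normalized activation onto a color ramp. The app does this on the Three.js side, but the mapping itself is language-agnostic; the sketch below shows the core idea in Python with an assumed simple cool-to-warm ramp (the real colormap may differ).

```python
# Hypothetical sketch of the heatmap colour mapping used for brain-mesh
# vertices; the actual app computes this in Three.js, and the blue-to-red
# ramp here is an assumption for illustration.
def activation_to_rgb(a: float) -> tuple:
    """Map a normalized activation in [0, 1] to an (r, g, b) colour, 0-255."""
    a = min(max(a, 0.0), 1.0)                     # clamp out-of-range predictions
    return (int(255 * a), 0, int(255 * (1 - a)))  # blue (low) -> red (high)

# One colour per mesh vertex, e.g. for three vertices at increasing activation:
vertex_colors = [activation_to_rgb(a) for a in (0.0, 0.5, 1.0)]
```

Applying such per-vertex colors to the mesh's geometry (rather than per-region flat colors) is what gives the smooth heatmap appearance.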
Each analysis produces a Neural Print Score that quantifies overall neural engagement; a cognitive signature identifying the dominant brain network; a modality split showing the relative contributions of visual, auditory, and language processing; highlighted regions with natural-language explanations of why they activated; quiet zones showing areas that remained suppressed; and AI-generated insights that summarize the cognitive profile in plain language.
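The modality split and cognitive signature can both be derived from the per-region predictions by aggregating activation per modality (or per network) and normalizing. The sketch below illustrates this; the region-to-modality assignments and activation values are invented for the example, not taken from the real model.

```python
# Sketch of deriving a modality split and dominant-network signature from
# per-region activations. Region names, modality assignments, and values
# are hypothetical examples, not NeuralPrint's actual data.
def modality_split(activations: dict, modality_of: dict) -> dict:
    """Return each modality's share of total activation (shares sum to 1)."""
    totals = {}
    for region, value in activations.items():
        m = modality_of[region]
        totals[m] = totals.get(m, 0.0) + value
    grand = sum(totals.values()) or 1.0          # avoid division by zero
    return {m: v / grand for m, v in totals.items()}

acts = {"V1": 0.6, "V4": 0.4, "A1": 0.5, "Broca": 0.5}
mods = {"V1": "visual", "V4": "visual", "A1": "auditory", "Broca": "language"}
split = modality_split(acts, mods)
dominant = max(split, key=split.get)             # the "cognitive signature"
```

The same aggregation run over network assignments instead of modalities yields the dominant-network signature.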
The application is built with Next.js 14 and TypeScript on the frontend, with Tailwind CSS for styling and Three.js with React Three Fiber for 3D brain visualization, plus a Python FastAPI backend running the TRIBE v2 encoding pipeline with Modal for serverless GPU execution. The brain model uses a high-resolution GLB mesh with vertex-level color mapping for smooth heatmap rendering and MNI-coordinate-based ROI annotations for region identification.
This project sits at the intersection of neuroscience, machine learning, and data visualization, demonstrating how advanced brain encoding research can be made accessible and interactive through thoughtful web application design.