Jolly Good integrates three assets: over 600 rare on-site VR data assets, "VRCHEL," the world's first VR analysis AI system that quantifies expert tacit knowledge, and Japan-US patents that lock in digital therapeutics and content generation. Together, these make us a platform operator building an exclusive competitive advantage that others cannot easily follow.

Exclusive hold on globally rare "business know-how and process data" that captures the tacit knowledge of the medical field


Practical medical and educational content library

360° video captures the entire clinical scene from multiple angles, enabling spatial awareness and situational judgment that conventional flat video cannot provide

Records the collaboration processes of multiple professionals, including doctors, nurses, and paramedics, making the tacit knowledge of team-based medicine visible

Unlike text or image data, real-world clinical process data is an area where Japan holds an advantage and which GAFA cannot easily collect

A unique data ecosystem built through long-term partnerships with medical institutions cannot easily be imitated by competitors
These data assets function not merely as video content but as "training data" that forms the foundation of AI learning. As data accumulates, AI analysis accuracy improves, forming a "data moat" that attracts more customers.
Composite analysis of VR video and user behavior data visualizes skills that are difficult to codify in manuals

Automatically recognizes spatial information, human movements, medical equipment placement, and more from 360° video
Transcribes conversation content and analyzes communication patterns and technical terminology usage
Visualizes attention allocation and decision-making processes from the user's gaze movements, fixation times, and head movements (see the sketch after this list)
Analyzes positional relationships, movement lines, and distance perception in 3D space to evaluate spatial cognitive abilities
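
VRCHEL's internal methods are not published, so the following is only a minimal Python sketch of the gaze-analysis idea above, assuming headset gaze logs in yaw/pitch degrees: a standard dispersion-threshold (I-DT) fixation detector plus dwell-time aggregation per region of interest. The names and thresholds (GazeSample, roi_of, max_dispersion) are illustrative assumptions, not Jolly Good's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class GazeSample:
    t: float      # timestamp in seconds
    yaw: float    # horizontal gaze angle in degrees within the 360° scene
    pitch: float  # vertical gaze angle in degrees

Fixation = Tuple[float, float, float, float]  # (start, end, mean_yaw, mean_pitch)

def detect_fixations(samples: List[GazeSample],
                     max_dispersion: float = 2.0,
                     min_duration: float = 0.15) -> List[Fixation]:
    """Dispersion-threshold (I-DT) fixation detection.

    Consecutive samples whose combined yaw+pitch spread stays within
    max_dispersion degrees for at least min_duration seconds are
    reported as a single fixation.
    """
    fixations: List[Fixation] = []
    window: List[GazeSample] = []
    for s in samples:
        window.append(s)
        yaws = [w.yaw for w in window]
        pitches = [w.pitch for w in window]
        spread = (max(yaws) - min(yaws)) + (max(pitches) - min(pitches))
        if spread > max_dispersion and len(window) > 1:
            # The new sample broke the dispersion limit: close the previous window.
            if window[-2].t - window[0].t >= min_duration:
                n = len(window) - 1
                fixations.append((window[0].t, window[-2].t,
                                  sum(yaws[:-1]) / n, sum(pitches[:-1]) / n))
            window = [s]
    return fixations

def attention_share(fixations: List[Fixation],
                    roi_of: Callable[[float, float], str]) -> Dict[str, float]:
    """Fraction of total fixation time spent on each labelled region of interest."""
    totals: Dict[str, float] = {}
    for start, end, yaw, pitch in fixations:
        label = roi_of(yaw, pitch)  # e.g. "patient", "vital monitor", "colleague"
        totals[label] = totals.get(label, 0.0) + (end - start)
    grand = sum(totals.values()) or 1.0
    return {label: dwell / grand for label, dwell in totals.items()}
```

Mapping fixation centroids through a scene-specific `roi_of` labeller is one simple way to turn raw headset logs into the kind of attention-allocation report described above.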

Records medical procedures and training scenes in 360° video format

VRCHEL automatically analyzes video and extracts spatial information, behavior patterns, and learning points

Generates personalized learning guides, evaluation reports, and training materials (a workflow sketch follows below)
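
As a rough, hypothetical illustration of the record → analyze → generate workflow above (the data structures and names are assumptions, not Jolly Good's system), the sketch below renders structured analysis output as a timestamped learning guide:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LearningPoint:
    timestamp: float  # seconds into the 360° recording
    actor: str        # e.g. "nurse", "attending physician"
    action: str       # recognized action, e.g. "prepares defibrillator"
    note: str         # why this moment matters to the learner

def generate_learning_guide(title: str, points: List[LearningPoint]) -> str:
    """Render analysis output as a timestamped, human-readable study guide."""
    lines = [f"Learning guide: {title}", "=" * (16 + len(title))]
    for p in sorted(points, key=lambda p: p.timestamp):
        minutes, seconds = divmod(int(p.timestamp), 60)
        lines.append(f"[{minutes:02d}:{seconds:02d}] {p.actor}: {p.action} ({p.note})")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_learning_guide(
        "Emergency room initial response",
        [LearningPoint(42, "nurse", "checks airway", "confirm the airway first"),
         LearningPoint(95, "physician", "orders an ECG", "rule out a cardiac cause early")],
    ))
```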
Effectiveness demonstrated through joint research with Harvard Medical School and other institutions
Achieves significant time reduction compared to conventional methods
The entire proprietary technology ecosystem is protected by basic patents in Japan and the US, reducing imitation risk by 80%

A system that automatically re-sequences video based on biometric information (heart rate, brain waves, etc.) to improve the patient's condition, functioning in effect as an AI specialist's automatic prescription system.
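
The patented logic is not public, so this is only a toy sketch under stated assumptions (hypothetical scene names and a single heart-rate threshold) of how a playback controller could re-sequence scenes when biometric readings indicate elevated stress:

```python
from collections import deque
from typing import Deque, List

CALMING_SCENES = ["forest_walk", "slow_breathing_guide"]
STANDARD_SEQUENCE = ["intro", "exposure_step_1", "exposure_step_2", "closing"]

def next_scene(planned: Deque[str], recent_bpm: List[float],
               calm_threshold: float = 100.0) -> str:
    """Choose the next scene, inserting a calming scene when heart rate is elevated."""
    if recent_bpm and sum(recent_bpm) / len(recent_bpm) > calm_threshold:
        # Average heart rate over the recent window is elevated:
        # interleave a calming scene before continuing the planned sequence.
        return CALMING_SCENES[0]
    return planned.popleft() if planned else "closing"

if __name__ == "__main__":
    planned = deque(STANDARD_SEQUENCE)
    for bpm_window in ([72, 75, 74], [110, 118, 121], [84, 80, 78]):
        print(next_scene(planned, list(bpm_window)))
    # prints: intro, forest_walk, exposure_step_1
```

A real system would presumably weigh richer signals (EEG features, symptom scores) and clinically validated rules; the sketch only shows the shape of the biometric feedback loop.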

A device that analyzes 360° video and automatically generates learning-point text, legally protecting VRCHEL's core technology.
Personalized Treatment Video System
DTx insurance coverage enables stable revenue through a per-patient monthly subscription. The depression-treatment VR, with its demonstrated 69% remission rate, can be offered exclusively.

AI Learning Guide Auto-Generation System
Standardizes rare medical techniques with VR×AI and sells them as educational content. Converts the library of over 600 VR titles into teaching materials at low cost and without limit.

A roadmap that combines rare data with an IP strategy to secure, over the medium to long term, an exclusive position that competitors cannot catch up to

Over 600 VR titles and their usage data
Advanced analysis by VRCHEL
Expansion of DTx and licensing business
Strengthening competitive advantage
Nationwide and overseas expansion of medical education VR

Teaching material business through AI learning guide auto-generation

Evolving VR from "just viewing" to "treating and installing abilities"
Conventional passive video experience
Active platform achieving therapeutic effects and skill acquisition

Experience research-proven effects in your organization. Feel free to contact us for detailed reports or demo consultations.
69.2% remission rate, with an average 11.7-point reduction in HAMD score
2.05x improvement in team collaboration understanding, 1.74x improvement in initial treatment understanding
Practical test scores: VR group 29 points vs. lecture group 25 points (p = 0.03)