Last updated: 2026-05-06

Japan's AI safety institute.
Category: Governance

AI safety writer covering alignment, governance, x-risk, and superintelligence.
Category: Blog
Tokyo-based deep-tech research lab developing a novel AI architecture for general reasoning grounded in category theory and graph-based knowledge representation. Pursues AGI via interpretable, alignment-friendly foundations rather than black-box neural networks.
Category: Conceptual research

Tokyo-based research lab working on AI safety. Themes include evolutionary approaches to AI development, multi-agent coordination, and interpretable AI systems.
Category: Empirical research

Tokyo-based AI safety community running regular benkyoukai (study group) events and reading groups.
Category: Training and education

APAC-wide technical AI safety training program delivering the 14-week ARENA curriculum through Saturday sessions; Tokyo is one of six cities hosting a 2026 cohort.
Category: Training and education

Japan's AI alignment research nonprofit. Conducts original research (Post-Singularity Symbiosis, machine ethics) and runs webinars, study groups, and outreach across academia and industry.
Category: Training and education, Conceptual research