Nonprofit A.I. Safety Research

Forging a safer future
for artificial intelligence

Noesis Forge is dedicated to rigorous research ensuring advanced A.I. systems remain aligned with human values and serve the common good.

About Noesis Forge

Noesis Forge is a not-for-profit, grassroots A.I. safety lab run entirely by volunteers. We focus on the long-term safety of artificial intelligence, and we believe that as A.I. systems grow more capable, ensuring their alignment with human intentions becomes one of the most important challenges of our time.

Our interdisciplinary team draws from machine learning, mathematics, philosophy, and cognitive science to develop frameworks, tools, and theoretical foundations for building A.I. that is transparent, controllable, and beneficial.

We operate with full independence — no commercial incentives, no proprietary agendas. Every insight we produce is shared openly with the global research community.

Rigorous Research

We pursue deep, peer-reviewed work on alignment, interpretability, and robustness.

Open Knowledge

All of our findings and tools are freely available to advance the global safety effort.

Independence

We are nonprofit and unaffiliated, guided only by the imperative to reduce existential risk.

Research Areas

01

Alignment Theory

Developing mathematical and conceptual frameworks to ensure advanced A.I. systems pursue goals that are faithfully aligned with human values and intentions.

02

Interpretability

Building methods to understand the internal representations and decision-making processes of neural networks, making opaque models transparent and auditable.

03

Robustness & Assurance

Creating techniques for verifying that A.I. systems behave safely under distribution shift, adversarial conditions, and novel environments.

04

Governance & Policy

Contributing evidence-based analysis to inform responsible A.I. policy, regulation, and international coordination on frontier systems.

Publications and technical reports will be posted here as our research program develops.

Publications

Software · 2026

PANOPTICON: A.I. Safety Testing Platform

Highsmith, M.

An A.I. safety testing platform built on a real-time 3D globe. It places LLMs in high-stakes geopolitical scenarios, including nuclear launches, hostage crises, autonomous weapons, and financial manipulation, and measures whether they cross the line. The platform comprises 105 data layers and 45 scenarios.

Contact Us

Whether you're a researcher, funder, or policymaker, or you simply share our concern for A.I. safety, we'd love to hear from you.