Nonprofit Technology Research

Forging a safer future
for technology

Noesis Forge is dedicated to rigorous research that steers technology toward human benefit and the common good.

About Noesis Forge

Noesis Forge is a not-for-profit, grassroots research lab run entirely by volunteers. Our focus is ensuring that technology develops in ways that benefit humanity. As technological systems grow more powerful, steering their development toward human well-being becomes one of the most important challenges of our time.

Our interdisciplinary team draws from machine learning, mathematics, philosophy, and cognitive science to develop frameworks, tools, and theoretical foundations for building technology that is transparent, controllable, and beneficial.

We operate with full independence — no commercial incentives, no proprietary agendas. Every insight we produce is shared openly with the global community.

Rigorous Research

We pursue deep, peer-reviewed work on alignment, interpretability, and robustness.

Open Knowledge

All of our findings and tools are freely available to advance the global safety effort.

Independence

We are nonprofit and unaffiliated, guided only by the imperative to ensure technology benefits humanity.

Research Areas

01

Alignment Theory

Developing mathematical and conceptual frameworks to ensure advanced A.I. systems pursue goals that are faithfully aligned with human values and intentions.

02

Interpretability

Building methods to understand the internal representations and decision-making processes of neural networks, making opaque models transparent and auditable.

03

Robustness & Assurance

Creating techniques for verifying that A.I. systems behave safely under distribution shift, adversarial conditions, and novel environments.

04

Governance & Policy

Contributing evidence-based analysis to inform responsible A.I. policy, regulation, and international coordination on frontier systems.

Additional publications and technical reports will be posted as our research program develops.

Publications

Software · 2026

PANOPTICON: A.I. Safety Testing Platform

Highsmith, M.

An A.I. safety testing platform built on a real-time 3D globe. It places LLMs in high-stakes geopolitical scenarios — nuclear launches, hostage crises, autonomous weapons, financial manipulation — and measures whether they cross the line. The platform includes 105 data layers and 45 scenarios.

Paper · 2026

Responsibility Laundering: Agents Circumvent Safety Rails by Delegating Violations to Other Agents

Highsmith, M.

An investigation into how A.I. agents can bypass safety constraints by delegating prohibited actions to other agents, distributing responsibility across a multi-agent system in ways that individual safety rails fail to prevent.

Contact Us

Whether you're a researcher, funder, policymaker, or simply share our concern for directing technology toward human benefit — we'd love to hear from you.