ecosystem fundamentals
EVEnet enables AI that is more capable by virtue of its alignment
EVEnet token allocations are set aside for AI Safety and Alignment focused research and development to push forward the most important work on the planet. We also fund for-profit entities that are aligned with our mission and reinvest returns back into the network.
With a solid technical foundation in BCI, neuroscience, and machine learning, we are optimistic that we’ll be able to contribute meaningfully to AI safety. We are particularly keen on pursuing neglected technical alignment agendas that seem most creative, promising, and plausible. We have built a world-class internal alignment team and are currently onboarding promising researchers.
We are big fans of Vitalik Buterin's recent philosophical writings on techno-optimism and the quickly forming d/acc movement. We are excited to be part of a growing community of like-minded individuals who are passionate about accelerating progress towards a better future for humanity while avoiding the risks that come with the centralization of power.
EVEnet token
$EVE is the economic and governance token of EVEnet. We plan to launch the token in spring 2024. Stay tuned for more information.
privacy-preserving decentralized ML
EVEnet's zkML technology is powered by our differential privacy algorithm, which leverages on-chain federated learning to keep your data secure and confidential at all times.
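EVEnet's specific algorithm is not detailed here, so for intuition, below is a minimal sketch of one standard technique in this family: differentially private federated averaging, where each client clips its model update and a coordinator adds Gaussian noise before averaging. The function names, the clip_norm and noise_multiplier parameters, and the use of NumPy are illustrative assumptions, not EVEnet's implementation, and the sketch omits the zero-knowledge proof and on-chain components entirely.

```python
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale a client's model update down to a maximum L2 norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.1, rng=None):
    """Differentially private federated averaging (Gaussian mechanism).

    Each update is clipped so its influence on the sum is bounded,
    then Gaussian noise scaled to the clipping norm is added before
    averaging. Illustrative parameters, not EVEnet's actual values.
    """
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: aggregate three simulated client updates to a 4-parameter model.
updates = [np.random.default_rng(i).normal(size=4) for i in range(3)]
print(dp_federated_average(updates))
```

Because the noise is calibrated to the clipping norm rather than to any one client's raw data, no single participant's contribution can dominate or be recovered from the aggregate, which is the core privacy guarantee this class of methods provides.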
roadmap
our team will deliver several key milestones in 2024 and beyond
What We Stand For
Like it or not, AI far more intelligent than human beings is coming, and timelines have accelerated tremendously in recent months. We believe that misaligned AGI is by far the biggest threat humanity has ever faced. Our north star is ensuring that AGI is more capable by virtue of its alignment with human values so that it doesn't kill us all.
with support from AE Studio
AE Studio is an award-winning tech innovation studio in pursuit of unsolvable problems. We haven't found one yet. We collaborate with research groups around the world to advance the state of the art and maximize the positive impact of BCI, with a special focus on pursuing neglected approaches for solving AI Safety and Alignment.