At its Redacted conference in Bangkok, Thailand, Near Protocol unveiled an ambitious plan to build the world’s largest open-source AI model. The model, at 1.4 trillion parameters, is expected to be roughly 3.5 times larger than Meta’s biggest Llama model. The initiative marks a key milestone in open-source AI development, combining scale with community participation.
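As a quick sanity check on the headline numbers, here is a back-of-the-envelope comparison. Note the 405-billion-parameter reference point is an assumption on my part (Meta’s largest released Llama 3.1 model); the announcement itself does not name a specific Llama variant.

```python
# Rough scale comparison. The 405B figure for Llama is an assumption
# (Meta's largest Llama 3.1 release), not stated in the announcement.
near_params = 1.4e12       # 1.4 trillion parameters (announced)
llama_params = 405e9       # Llama 3.1 405B (assumed reference point)

ratio = near_params / llama_params
print(f"{ratio:.1f}x")     # prints "3.5x", matching the claimed figure
```

The arithmetic lines up with the “3.5 times bigger” claim if the comparison is to the 405B Llama model.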
AI Development Powered by the Community
The model will be developed through competitive, crowdsourced research and contributions overseen by Near’s newly launched Near AI Research Hub. Contributors can join the project starting November 10, beginning with the training of a smaller, 500-million-parameter model. The project is designed to progress through seven successively larger and more sophisticated models, with only top contributors selected to work on the advanced stages, ensuring a high standard of development throughout.
Near Protocol intends to use encrypted Trusted Execution Environments to reward participants, an approach meant to provide not only equitable compensation but also privacy and security. The resulting models will be commercialized, allowing contributors to share in the project’s success.
Token-Based Funding and Long-term Sustainability
Training such a sophisticated AI model is expensive, with costs estimated at roughly $160 million. According to Illia Polosukhin, co-founder of Near Protocol, this sum, while significant, is achievable in the crypto sector. The plan calls for a token sale to raise the funds, with profits from the trained models flowing back to token holders, creating a self-sustaining financial loop that funds the development of increasingly powerful models.
Polosukhin stated that Near’s ability to undertake such an ambitious endeavor owes in part to the expertise of its founders. Polosukhin co-authored the seminal transformer research that paved the way for models such as ChatGPT, while co-founder Alex Skidanov previously worked at OpenAI during the development phase leading up to ChatGPT’s release in 2022. Skidanov, who now leads Near AI, acknowledged the task’s magnitude and complexity, as well as the technical obstacles it presents.
Addressing Technical Challenges in a Decentralized Manner
Building an AI model of this scale requires enormous computational power, ideally with tens of thousands of GPUs clustered together. However, Skidanov pointed out that existing distributed training methods have limitations, since they rely on high-speed interconnects between GPUs. Emerging research from companies such as DeepMind suggests that these technical challenges may be solved in the near future.
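To see why interconnect speed is the bottleneck, consider the data volume of a single gradient synchronization in plain data-parallel training. The sketch below is a rough estimate under assumptions that are mine, not from the talk: bf16 (2-byte) gradients, a ring all-reduce that moves roughly twice the gradient payload per worker, and commodity 100 Gb/s networking.

```python
# Back-of-the-envelope gradient-sync cost for data-parallel training
# of a 1.4T-parameter model. All assumptions below are illustrative,
# not from the article.
params = 1.4e12                  # model size from the announcement
bytes_per_param = 2              # bf16 gradients (assumed)
grad_bytes = params * bytes_per_param           # ~2.8 TB of gradients

ring_factor = 2                  # ring all-reduce moves ~2x the payload
traffic_per_worker = ring_factor * grad_bytes   # ~5.6 TB per sync step

bandwidth = 100e9 / 8            # 100 Gb/s link in bytes/s (assumed)
seconds_per_sync = traffic_per_worker / bandwidth
print(f"~{seconds_per_sync:.0f} s per gradient sync")  # prints "~448 s"
```

At hundreds of seconds per synchronization step over commodity networking, training would be hopelessly slow, which is why clustered GPUs with specialized high-speed interconnects are the norm, and why distributing training across ordinary internet links remains an open research problem.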
Near Protocol’s project embodies the view that decentralized AI is critical for preserving privacy and ensuring that no single party gains undue power over AI technology. This viewpoint was echoed by Edward Snowden, who spoke at the event. He cautioned that centralized control of AI could lead to a future ruled by surveillance and monopolistic power. “If AI is controlled by one company, we are effectively going to do whatever that company says,” Snowden warned, highlighting the need for decentralized AI in the future of technology and economics.
In summary, Near Protocol’s groundbreaking endeavor to create the world’s largest open-source AI model marks a significant advance in both the scale and the community-driven nature of AI development. By pairing a revenue model that pays contributors with a focus on privacy, Near aims to build a sustainable system that aligns with Web3’s core principles of decentralization and user empowerment.
According to Cointelegraph, “Near plans to build the world’s largest 1.4T parameter open-source AI model.”

I’m Voss Xolani, and I’m deeply passionate about exploring AI software and tools. From cutting-edge machine learning platforms to powerful automation systems, I’m always on the lookout for the latest innovations that push the boundaries of what AI can do. I love experimenting with new AI tools, discovering how they can improve efficiency and open up new possibilities. With a keen eye for software that’s shaping the future, I’m excited to share with you the tools that are transforming industries and everyday life.