Decentralise to Incentivise: The Quest for Unbiased, Explainable AI

Decentralised AI

Interview with Ala Shaabana, Co-founder of Opentensor Foundation 

Amid widespread concern regarding possible built-in bias, opacity, and misalignment in big-tech-dominated AI, one strategy that offers the potential for more balanced machine learning processes is to decentralise them. Ala Shaabana of the Opentensor Foundation explains. 

Hello, Mr Shaabana. It’s a pleasure to have you here. To begin, would you tell our readers a little bit about your background and personal journey in the field of AI, leading up to your role as a co-founder of Opentensor Foundation?

I started my journey in graduate school, where I obtained my PhD in applied artificial intelligence, focused on human-centric sensing, from McMaster University in 2017. I then joined VMware in Palo Alto, California, working on problems in distributed computing. In 2019, on an online AI research community, I met my current co-founder Jacob Steeves, who had been working on the problem of decentralised AI since 2016. We decided to build the Bittensor protocol together and co-founded the Opentensor Foundation.

From your perspective, what do you think are the key moral considerations driving the recent discussions within the AI industry, and how might they influence the path AI development takes? 

I believe that AI transparency and explainability, bias, and alignment are some of the most important considerations surrounding AI today. Transparency means ensuring that AI training processes and code are always made available to the general public for review and scrutiny. The Biden administration’s recent executive order is loosely aimed at solving this issue. However, open-source AI already solves it, indicating just how ill-informed governments are about these technologies and AI in general. AI bias centres on moral questions about fairness and equality, especially in critical applications like hiring, law enforcement, and loan approvals. As a result, there’s a growing emphasis on creating AI that is fair, unbiased, and inclusive, which may lead to more rigorous data handling and algorithmic accountability measures. Finally, the AI alignment problem refers to the challenge of ensuring that artificial intelligence systems’ goals, decisions, and actions are aligned with human values and intentions. This problem becomes increasingly critical as AI systems grow more complex and capable, particularly when they are given more autonomy in decision-making.

When it comes to decentralised AI and collective ownership, how does this philosophy align with your personal vision for the future of artificial intelligence? What concrete benefits do you think decentralisation brings to the AI landscape? 

AI transparency, bias, and alignment are central to my vision for Bittensor and the future of AI in general. Decentralisation ensures that all three are dealt with by the community at large, and not by a single company whose bottom line is ultimately its profits. When AI is placed in the hands of everyone, rather than one entity, it loses its bias, is forced to be transparent, and is aligned with everyone’s interests by default, because everyone contributes to it.

The conversation often revolves around the concentration of AI power in tech giants. In your view, how can decentralised AI models play a role in disrupting this concentration, and what challenges or opportunities does this dynamic present for the industry?

Decentralised AI models have the potential to disrupt the concentration of AI power in the hands of tech giants, offering both challenges and opportunities for the industry. Let’s start with the opportunities. Decentralised AI models can enable a wider range of stakeholders, including smaller companies, researchers, and individual developers, to contribute to and influence AI development. This democratisation can lead to more innovative and diverse AI solutions that reflect a broader range of needs and values. There’s also more potential for greater creativity and a wider array of AI applications. Finally, decentralised systems can be more resilient to failures and cyberattacks, as they don’t rely on a single centralised entity, which enhances the overall stability and security of the AI ecosystem. However, challenges remain that must be resolved first. For example, decentralising AI development could make it harder to coordinate efforts and maintain standards across different projects. Establishing effective regulatory frameworks that can adapt to the rapidly evolving and diverse landscape of decentralised AI is another significant challenge. Decentralised models may also struggle with resource allocation, as smaller entities might lack the computing power and data access that larger corporations possess.

To give us a clearer picture, could you walk us through a real-world example of how the Bittensor protocol operates and contributes to the broader goal of decentralised AI? 

The Bittensor protocol incentivises AI engineers and researchers to build state-of-the-art models that compete with one another to provide the best possible output to users. As an example, a user can build a state-of-the-art model and deploy it on Bittensor’s text language subnetwork. The model’s performance is judged by the validators on the system: validators reward a model with more Tao than its peers if it outperforms them. Users, on the other hand, submit their prompts to validators, which forward those prompts to the models on the network and return the best response. Thus, users always get the best response to their queries, and models are always rewarded in proportion to their performance.
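To make this flow concrete, here is a minimal, illustrative Python sketch of the incentive loop described above: validators score each model’s response to a prompt, split a reward emission in proportion to those scores, and route the user’s query to the best-scoring model. All of the names here (Miner, score_response, validate_and_reward, route_query) and the scoring logic are hypothetical simplifications for illustration only; this is not the Bittensor codebase or its API.

```python
import random
from dataclasses import dataclass


@dataclass
class Miner:
    """A hypothetical participant hosting a model on a subnetwork."""
    name: str
    quality: float  # stand-in for true model quality, unknown to validators

    def respond(self, prompt: str) -> str:
        # In reality this would run model inference; here we just label the output.
        return f"{self.name}'s answer to: {prompt!r}"


def score_response(miner: Miner) -> float:
    """A validator's noisy estimate of response quality (placeholder metric)."""
    return max(0.0, miner.quality + random.gauss(0, 0.05))


def validate_and_reward(miners: list[Miner], emission: float) -> dict[str, float]:
    """Score every miner and split the reward emission in proportion to the scores."""
    scores = {m.name: score_response(m) for m in miners}
    total = sum(scores.values()) or 1.0
    return {name: emission * s / total for name, s in scores.items()}


def route_query(miners: list[Miner], prompt: str) -> str:
    """Return the response from the highest-scoring miner for this prompt."""
    best = max(miners, key=score_response)
    return best.respond(prompt)


if __name__ == "__main__":
    miners = [Miner("model-a", 0.6), Miner("model-b", 0.8), Miner("model-c", 0.4)]
    prompt = "Summarise the Bittensor incentive mechanism."
    print(route_query(miners, prompt))
    print(validate_and_reward(miners, emission=1.0, miners=miners) if False else validate_and_reward(miners, 1.0))
```

Running the sketch prints the routed response and a per-model reward split; in the live network, validator evaluation is far richer than this toy scoring function, but the principle is the same: better-performing models earn a larger share of the emission.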

Reflecting on the recent success of Ritual in securing substantial funding, how do you interpret this within the broader industry trends? What implications does it carry for the future landscape of decentralised AI platforms? 

This is a positive indicator that many are starting to embrace the decentralised AI narrative and the idea of taking this power away from centralised tech giants. However, it is vitally important that these companies continue to embrace decentralisation alongside ethics, and not simply assume that a system is ethical just because it is decentralised.

Thinking about international collaborations in establishing ethical standards for decentralised AI, how do you see organisations like Opentensor Foundation contributing to and fostering such collaborative efforts? 

International collaborations in establishing ethical standards for decentralised AI are crucial, given the global nature of technology and its impacts. Organisations like the Opentensor Foundation can play a significant role in fostering these collaborative efforts in several ways. 

Developing Open Standards and Protocols: Organisations like the Opentensor Foundation can lead or contribute to the development of open standards and protocols for decentralised AI. By promoting a set of universally accepted standards, they can ensure interoperability, fairness, and ethical compliance across different systems and regions. 

Facilitating Global Dialogue and Consensus Building: These organisations can serve as platforms for dialogue between various stakeholders, including governments, tech companies, academia, and civil society. By bringing together diverse perspectives, they can help in building a global consensus on ethical AI practices. 

Research and Innovation in Ethical AI: Organisations dedicated to decentralised AI can invest in and support research focused on ethical considerations, bias mitigation, transparency, and accountability in AI systems. This research can provide valuable insights and guidelines for the development of ethical AI. 

Education and Awareness Campaigns: Educating AI developers, users, and the broader public about the importance of ethical AI is crucial. Organisations like Opentensor can organise workshops, seminars, and campaigns to raise awareness about ethical AI practices and the significance of decentralisation in promoting fairness and transparency. 

Policy Advocacy and Advisory Roles: By actively engaging in policy discussions and serving as advisors to governments and regulatory bodies, such organisations can influence the development of policies and regulations that govern decentralised AI, ensuring they align with ethical standards. 

Building Ethical AI Tools and Frameworks: Developing and providing open-source tools and frameworks that embody ethical AI principles can significantly aid developers in creating AI systems that are inherently ethical, transparent, and unbiased. 

Encouraging Community Involvement and Diversity: Promoting community involvement in AI development can ensure that AI systems are not only technically sound but also socially relevant and beneficial. Diverse community participation can help in understanding and addressing a wide range of ethical concerns. 

Partnerships and Collaborations: Forming strategic partnerships with other organisations, universities, and research institutions can amplify their efforts in promoting ethical standards. Collaboration can lead to pooling resources, sharing expertise, and aligning goals for a more significant impact.

Looking ahead, where do you personally see significant growth opportunities, both for yourself and the broader AI industry? Particularly, how do you envision these opportunities within the context of the ongoing moral discussions and the push for decentralised technologies? 

I believe the Opentensor Foundation is well positioned to be a central player in the decentralised AI movement, as more and more entities start to embrace decentralised AI as the ethical future of AI. It is inevitable that people will look to entities like the foundation for moral and ethical guidance, and in that case it becomes extremely important that the foundation lead by example, ensuring that all moral considerations are taken into account and that AI is aligned with the planet.

Executive Profile

Ala Shaabana

Ala Shaabana is a co-founder of the Opentensor Foundation, an organisation dedicated to the building and maintenance of the Bittensor protocol, a peer-to-peer machine-learning protocol that incentivises participants to train and operate machine learning models in a distributed manner. By interconnecting neural networks on the internet, Bittensor aims to create a global, distributed, and incentivised machine learning system. 

Educated at McMaster University in Canada, Ala holds a PhD in Computer Science with a dissertation focus on human-centric sensing: creating extremely small machine learning models that fit on IoT devices to detect signals coming from the human body. Prior to founding the Opentensor Foundation, Ala worked on distributed-computing research at VMware, where he led three projects on neural architecture search, language modelling, and load prediction in distributed environments. This work led him down the path of distributed AI and the question of how models across the internet might work together to accomplish tasks.
