
Artificial Intelligence: Devastation or Salvation?

Ethics around the use of artificial intelligence (AI) is a hot topic and an extremely important one at this pivotal moment in the technological transformation of our world. AI has the potential to transform society in ways that can significantly alleviate poverty and disease, democratize access to services and create more equitable societies where humans no longer need to perform dangerous or unpleasant tasks.

Yet, for this promising vision of the future to become reality, we need to ensure that the AI systems we create are unbiased and accurate, that we revisit our ideas of data ownership and value as a society, and that industry and policymakers shift the way they look at economic value and wealth distribution.

Augmenting rationality

Before we dig into these issues, it’s important to consider what AI really does and understand what its current limitations are. At its simplest, AI is about optimization. It helps humans to better understand how to perform certain tasks and can also be used to build systems that perform jobs that are difficult or arduous for humans. Cliff van der Linden, co-founder and Chief Scientist at Delphia, a company that enables individuals to derive economic value from their data, links this optimization to the concept of augmented rationality.

“Artificial intelligence can augment our capacity for rational thought,” van der Linden says. “That doesn’t mean to overtake our ability to think or to make decisions, it just means to help us reflect on those decisions and how they align with our preferences.”

In other words, AI allows machines to parse troves of information so that humans can focus their energies elsewhere — like better decision-making. Additionally, because humans lack the cognitive faculties to quickly synthesize all of the information we encounter, we often make decisions based on emotion or worldview, and our cognitive biases are frequently the cause of overly optimistic or fear-based judgments that lead to lacklustre or even detrimental results.

“Artificial intelligence gives us huge opportunities to try to address big problems like climate change, growing disparity in wealth, ageing, or systemic discrimination in society,” van der Linden continues. “There are people in all kinds of fields using machine learning methods to try and tackle these big fundamental problems.”

Promise or peril? What AI can and cannot do

Alongside this optimism, there is also a great deal of fear around AI and the damage it could do to our world. Some of the greatest concerns stem from a misunderstanding of what these systems can and cannot do. While there are many tasks that AI performs better than humans, virtually all of the work carried out by an AI model needs to be overseen by humans. AI can certainly help us be better decision-makers, but we are nowhere close to being able to hand responsibility over to algorithms or machines.

“People will say AI is good at judgment but we’re not there yet,” says Kathryn Hume, VP Strategy at integrate.ai, a startup whose AI-powered software platform drives revenue growth for businesses through collective consumer intelligence. “It’s good at recognition and classifications, and sometimes predictions,” she says. “What it can’t do is anything that involves analogical thinking between Option A and Option B, anything that involves synthetic thinking or judgment calls.”

Hume and other AI experts, like Deep Learning pioneer Yoshua Bengio, describe AI systems as more like idiot savants than Renaissance men. While AI systems are very good at automating narrow tasks that may look super-intelligent (like cancer diagnosis), these are typically the subparts of people’s jobs that require cognitive input of a very narrow type: Is it this or is it that? AI cannot extrapolate or make judgment calls — it can simply parse vast swaths of data and recognize patterns that can be used to draw conclusions or make predictions. In other words, the AI is more of an assistant to the humans using it than a replacement.

In the cancer diagnosis example, it is by using the collective intelligence of a history of people’s actions — taking the knowledge of thousands of doctors and radiologists and then finding the statistical average — that the AI can do what looks like human activity. This parsing of information aids the oncologist or radiologist in diagnosis by recognizing patterns (in this case, cancerous growth) from thousands of medical images that might take humans years to sift through. In this case, and many others, the AI system is trained on a very narrow task — recognizing images, patterns in speech or language — but it doesn’t actually understand the data it is analyzing.
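To make the “narrow task” point concrete, here is a minimal sketch of that kind of system in Python. It is purely illustrative, not any real diagnostic tool: the “scans” are synthetic feature vectors, and the classifier simply learns which feature patterns co-occurred with a label in past examples.

```python
# A minimal, purely illustrative sketch: a narrow binary classifier trained
# on synthetic stand-ins for features extracted from labelled medical images.
# This is not a real diagnostic system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "scans": 2,000 samples of 64 features each; label 1 = malignant.
X = rng.normal(size=(2000, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns one narrow statistical mapping from features to labels.
# It has no concept of cancer, patients or medicine beyond that mapping.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

However well such a model scores, it is still doing recognition, not judgment: swap in data from a slightly different scanner or population and nothing in the code would notice.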

Biased models, biased data, biased results

Because AI models are trained on data, it’s vital that companies and data scientists ensure that their data sets are accurate and unbiased, and that the algorithms they train on those data sets are not built on false or outdated suppositions.

“Machine learning systems in their default mode rely upon the assumption that the future can and should look like the past because they’re trained upon data,” Hume says. “That’s all fine and good in the realm of celestial mechanics where the sun is revolving around us. It’s not great in the realm of normative human relations because often, we don’t want the future to look like the past.”

In the case of businesses looking for new market opportunities, Hume notes that it’s often the group that you’re technically biased against that might be the best new target. “If you realize that you’re doing really badly in your machine learning model with making accurate predictions on African-Canadian women, what that means is that you’ve underserved them,” she says. In cases like these, identifying and correcting for biases can be good business practice.

In other industries, such as healthcare and autonomous vehicles, it’s not just bad practice to use biased or inaccurate data — it could be fatal. When moving from human drivers to artificially intelligent navigators, it’s vital for data scientists to understand what’s going into training models and where the patterns of discrimination occur in order to minimize accidents.

“If the error is uniformly distributed and there is a reduction in fatalities, then I think we can say that that model is behaving in an unbiased fashion,” van der Linden says, “But if we find that the distribution of error is concentrated — let’s say fatalities are higher in urban centers or among certain socioeconomic classes or communities based on the infrastructure or the lack of data — if that error is not randomly distributed then we have a problem.”
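In practice, the check van der Linden describes can start with simple disaggregation: compute the error rate for each subgroup and compare it to the overall rate. A minimal sketch, with entirely hypothetical group names and synthetic data:

```python
# Illustrative sketch of the bias check described above: is a model's error
# spread evenly across groups, or concentrated in some of them? All group
# names and error flags here are synthetic and hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

groups = rng.choice(["urban", "suburban", "rural"], size=10_000)
# Synthetic error flags: this imaginary model errs three times as often on
# the "urban" group, mimicking a concentrated error distribution.
error_prob = np.where(groups == "urban", 0.12, 0.04)
errors = rng.random(10_000) < error_prob

df = pd.DataFrame({"group": groups, "error": errors})
per_group = df.groupby("group")["error"].mean()
overall = df["error"].mean()
print(per_group)

# Flag any group whose error rate sits well above the overall rate; that
# concentration is the warning sign van der Linden describes.
flagged = per_group[per_group > 1.5 * overall]
if not flagged.empty:
    print("Error is concentrated in:", list(flagged.index))
```

A production audit would go further, with significance tests and intersectional groups, but even this crude comparison surfaces the kind of concentrated error the quote warns about.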

Fixing these issues of bias will involve a shift in mindset for both leadership and data scientists in AI companies. There need to be discussions between executives, legal teams and members of minority groups or other people with different mindsets, so they can potentially identify gaps in the data or biases in the models that the team may have missed otherwise. And if there are missing demographics in the data sets, then the onus is on the company to find a way to access and integrate this data into their models.

Whose data is it anyway?

Data has been called the new oil for good reason: it’s extremely valuable and needs to be refined to be useful. Where data differs greatly from oil, however, is that there appears to be no limit to the amount of data we can produce, it’s far less expensive to store, and it becomes more useful the more it’s used. At present, the Internet Giants have control of a disproportionate amount of the world’s data and are reaping the rewards. But what if there were incentives for these companies to be more open and collaborative? What if the data they are currently amassing were no longer theirs to use as they please? What if there were policies put in place that not only ensure the equitable redistribution of wealth but also open and fair access to data?

Both integrate.ai and Delphia, along with numerous other AI-focused companies and research groups, are looking at these questions of data sharing and at how to build systems that open doors for businesses that lack the Internet Giants’ access to data.

integrate.ai is working to build out a privacy-preserving learning exchange where two companies can benefit from one another’s data without having to directly share it. “The social purpose behind that is to help traditional enterprises share data so that they can compete against consumer Internet Giants,” Hume says. “This challenges standard assumptions that data needs to be hoarded and protected to provide value, and opens new possibilities for partnership and collaboration between large consumer businesses.”
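The article doesn’t detail how integrate.ai’s exchange works, but one common family of techniques for learning from data you never pool is federated learning: each party trains on its own records, and only model parameters cross the boundary. A simplified sketch, with all data and names hypothetical:

```python
# A simplified sketch of a privacy-preserving exchange using federated
# averaging. This is NOT a description of integrate.ai's actual system;
# it only illustrates learning from two datasets without pooling them.
import numpy as np

rng = np.random.default_rng(2)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a party's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each company's records stay on its own side of the boundary.
X_a, X_b = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
true_w = np.arange(5.0)                  # the shared signal both data sets hold
y_a, y_b = X_a @ true_w, X_b @ true_w

weights = np.zeros(5)
for _ in range(200):
    w_a = local_step(weights, X_a, y_a)  # computed at company A
    w_b = local_step(weights, X_b, y_b)  # computed at company B
    weights = (w_a + w_b) / 2            # only parameters are exchanged

print(np.round(weights, 2))  # recovers true_w; no raw rows were shared
```

Real systems layer on protections such as secure aggregation and differential privacy so that even the exchanged parameters leak little, but the core idea is the same: raw records never leave their owners.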

Delphia takes a different approach: one where the use and ownership of data become the keys to value. “We want to give people agency over their own data,” says van der Linden. “Canadians are terrible at exporting raw goods, getting them refined somewhere else and then paying ten times the price for the refined product. This happens with data too, where there are data marketplaces that sell your raw data and then it’s out there, companies exploit it and you get pennies on the dollar because it’s been commodified.”

Delphia’s approach is to act as a “data refinery.” While protecting its users’ data, the company generates derivative products whose returns flow back to the individuals. By reframing data as something the individual citizen has a right to be remunerated for, rather than something produced simply by existing in the digital and physical world but holding no value for its creator, it may be possible to generate income that offsets the jobs eliminated by AI.

“I hope that by showing that this model works, it will eventually just become commonsense to allow users to be part of the equation and to say: You have this data on me, but I have to consent if you’re using it, and if I’m going to consent, I’m going to see part of the returns and I’m going to have full visibility on how you use it,” van der Linden says.

Data rights and social contracts

So what will propel the shift in the ownership of and compensation for data?

Hume sees this as a return to older questions around social contract and rights. In a world where there are sensors everywhere and we’re constantly expanding our digital footprints, it’s important that more people start thinking about the political and economic status of our data. “I think it will be a consumer shift — a mindset shift,” she says. “As people start to get more educated as to how these technologies work, as consumers start to become more cognizant of the value of their own data, they will start to ask: Should we be remunerated for that? Can we be paid for our actions in the world?”

Van der Linden also believes that while there hasn’t yet been a massive wave of data rights activism, the prominence of the Facebook/Cambridge Analytica scandal and the advent of GDPR have led to a shift in the regulation of data and the way people think about it.

“Continuing to be aware, to be active, to be part of social movements that are demanding transparency of companies, to be agentic in terms of opting out or pulling out of companies that are not honouring this collaborative and mutually-beneficial and transparent relationship — these are all ways that I think these large companies are really feeling the earth shifting under them and they’re not quite sure where they’re going to land. What we want to make sure is that they don’t find their footing until they’ve created an ethical, mutually-reciprocal and transparent relationship with their users.”

Redistributing the rewards

In order to move toward a future where the benefits of AI are equitably distributed among all people, we need to shift the way society thinks about economic value. As technology is changing so quickly, van der Linden and Hume both believe that the onus is on industry and to some extent the academics who are at the forefront of these emergent technologies to think about more than bottom lines.

“I think this is really the philosophical and economic work we need to be doing right now to get to the ground of: What is this property? Is it human rights? What is the political and economic status of our data?” Hume says.

“That means working closely with regulators to ensure that they understand the technologies that are emerging and their implications,” van der Linden notes. “These actors can find an optimal path that shapes the future in ways that are both lucrative and profitable to the people who have invested in them and at the same time, are using these technologies in ways that improve the human condition as opposed to continuing to exploit certain segments of the population.”

As AI becomes as ubiquitous as mobile technologies, there’s no question that the ways in which we think about our data, the value it generates and the ways we can use it to optimize systems across industries need to shift. Mission-driven AI companies like Delphia, integrate.ai, Mindbridge, Swift Medical, BenchSci, XpertSea, Invivo AI and so many others are using the power of artificial intelligence to democratize, optimize and improve business, healthcare, biomedical research, aquaculture and a plethora of other industries.

AI holds the promise to do so much good in the world. By making sure that regular citizens and users have agency over their data and are aware of what artificially intelligent systems are and are not capable of, and by ensuring that industry and government move toward this age of technological transformation with a view to the collective good, we have the potential to make the world a better, more equitable place for everyone. It is up to all of us to help ensure that the best-case scenarios are the ones that play out.

**********

For more insights into ways technology is transforming our world, as well as fundraising advice, founder stories, ecosystem deep dives and industry trends, sign up for our newsletter and follow us on Twitter, LinkedIn and Facebook.
