Helper not hindrance: Why AI should work with us and not against us
- 30 April, 2018 00:01
It’s easy to overlook the fact that AI solutions, like people, exhibit intelligence. AI technologies can learn and act autonomously, and they are already managing our supply chains and approving our bank loans. AI is much more than a technological tool; it has grown to the point where it often has as much influence as the people putting it to use.
Research from Accenture’s Technology Vision 2018 report indicates that Australian attitudes towards and perceptions of AI are evolving, with 42 per cent of Australian executives believing AI will completely transform their industry in the next three years — nearly double the global figure of 23 per cent.
Therefore, as AI expands further into our society, the need to raise responsible AI becomes crucial. A brand risks damaging the trust of its customers and employees if its AI fails to uphold the values society expects of the company. Accenture sees deploying AI as more than just training it to perform a given task: it means ‘raising’ AI to hold the same values as a representative of the business and to act as a contributing member of society. This is what Accenture calls ‘Citizen AI’.
AI systems learn, they make autonomous decisions, and they have grown from a technological tool into something able to coordinate and collaborate with humans. According to this year’s Accenture Technology Vision report, four out of five executives (81 per cent) believe that within the next two years AI will work alongside humans in their organisations as a co-worker, collaborator and trusted advisor. If society approaches AI with an open mind, the technologies emerging from the field could profoundly transform society for the better.
An example of Citizen AI at work in society is Hello Cass, a domestic violence chatbot developed by Melbourne-based social enterprise Good Hood, designed to support people affected by family and sexual violence. Similarly, Accenture helped the United Nations High Commissioner for Refugees develop a biometric identity management system (BIMS) that verifies the identity of refugees. The AI-powered technology captures and stores fingerprints, iris data and facial images of individuals, providing undocumented refugees with their only personal identity record.
Creating the AI curriculum
If we want AI to uphold our values, we must provide it with high-quality data and information. Organisations with the best available data will create the most capable and responsible AI systems.
Furthermore, data scientists engaged by businesses must take care when selecting taxonomies and training data. Teaching AI is not just about supplying the technology with data, but about actively minimising bias in the data behind these AI applications. Leaders must also ensure the workforce is trained to work alongside AI and teach it how to act responsibly.
The imminent introduction of the European Union’s General Data Protection Regulation (GDPR) marks a shift in data regulation, giving individuals a ‘right to explanation’ for decisions made by AI and other algorithms. It is important that businesses are accountable for the process their AI uses to arrive at a decision.
Executives are aware of this expectation, with 88 per cent of respondents in Accenture’s latest Technology Vision report agreeing that it is important for employees and customers to understand the general principles their organisations use to make AI-based decisions.
Just as organisations are accountable for the decisions and actions of their employees, they are responsible for the decisions made by their AI solutions. In wider society, incidents such as the recent fatality during Uber’s self-driving car testing act as a reminder of how algorithms can fail. Companies using AI technology must therefore think carefully about apportioning responsibility and liability for its actions.

One of the key enablers of society adopting AI is building public trust between humans and AI. This is achieved by ensuring that businesses implement AI responsibly and ethically. They must keep human interests front of mind and ensure that their systems operate transparently and are free from bias and discrimination.
Companies must also embrace AI and upskill their workforce so that employees see the new technology as a helper rather than a hindrance. By treating AI in a way that recognises its impact on society, organisations can create a collaborative and powerful new member of the workforce.
It is an exciting time to implement AI, and leaders should take on the challenge of raising it in a way that acknowledges its new role and impact in society. By doing so successfully, they will build trust with consumers and employees — a crucial step in the integration of AI into society.
Peter Vakkas is Accenture’s Technology Lead for Australia and New Zealand