Recently, the extraordinary developments in artificial intelligence (AI) and robotics have received worldwide coverage in news and current affairs programs.
With the rapid advancements in AI and robotics, the world now faces a new age of technological revolution, one which experts say will profoundly alter the nature of work and leisure, reshape the structure of our economies and societies, and challenge human ethics and values. If that were not enough, we also face serious ethical issues, including the management and ownership of data, the development of autonomous weapons, and the use of such weapons by militaries, rogue states, and terrorists.
Yet talking to friends and acquaintances, I have been shocked by the general lack of awareness of these developments. Those who are aware of the issues are naturally anxious about their futures, yet they are too easily dubbed pessimists, even though such rapid and widespread change could lead to mass unemployment, which in turn could polarise the social order into the very poor and a financial elite. These changes may even herald the breakdown of the world's market-based economies.
So, are we ready for this new age? Can ordinary people, as well as businesses and governments, cope with the predicted scale of these changes? How do we readjust our lives from work to increased leisure, and who will pay for this? How much technology is too much? Are we pushing boundaries too far in creating autonomous robots and computers that learn from each other at an exponential rate? Are we creating a future which will threaten the very existence of the human species, or is that fear simply a natural response to the unknown? How do we measure these technological changes, apply standards and ethics, and set controls on them?
According to experts, all areas of employment will be affected, as the new wave of technological advancement impacts not just low-skilled work but higher-skilled work as well. The jobs most at risk are those which still entail repetitive tasks, and experts agree that tens of thousands of them will disappear.
This is nothing new, of course. Throughout human history, advances in technology have delivered huge benefits and improved the lives of individuals and communities alike, freeing humans from many menial, time-consuming, and physical tasks. From our earliest tool-making beginnings, humankind has improved and adapted tools and developed new technologies. Through these developments, we have transformed our cultures, our environments, the nature of work and leisure, and even our ways of thinking. Humans have always asked questions, experimented, and pushed boundaries.
By and large, technologies served humankind, but with the Industrial Revolution machines began replacing human beings. Even then, technological change occurred at a much slower rate, allowing people to adapt and new jobs to be created. Now, however, computers and AI are accelerating this rate of change. Tech companies around the world are developing AI and robots, promoting them with the promise that they will make our lives easier. Companies like Amazon, Costco, and the supermarket chains already use automated warehouses. Robot-assisted surgery has been featured in news stories. Drones are used in many areas, including leisure, science, military and civilian security, surveillance, and rescue. AI assistants such as Siri, Alexa, and Google Home are reducing human involvement in the home.
How will the business sector integrate these new technologies? Australia's retail businesses, including Myer, Harvey Norman, Coles, Woolworths, and the independent supermarkets, are preparing for the widely publicised arrival of Amazon in 2018. The impact is certain to be significant.
For many businesses, the human resource component (their workers) is one of the biggest costs. Consequently, any opportunity to reduce human input, and therefore costs, is highly attractive. Technology-related changes implemented in the latter half of the twentieth century and into the twenty-first have already reduced human input within many industries, and these industries continue to take advantage of technological advancements. They include agriculture, mining, and transport, as well as automotive and other manufacturing areas where robots have replaced humans on the assembly line.
Revolutions have occurred in the business sector too, with changes to payroll handling, the replacement of typing pools by computers, and electronic transfer systems such as email. We have transitioned from cash to credit cards, purchase goods and services online, and stream entertainment and news online.
But what does all this mean from a human perspective? What does this mean for employment and the nature of work which is connected so closely to our ideas of identity and value both as individuals and as societies? Does this threaten the core fabric of such values?
In the recent ABC program The AI Race, scientists sat down with a group of young people to explain how the development of autonomous robots would impact their jobs. The level of anxiety was apparent from the beginning, when a young paralegal, Christine Maibom, realised her job was under threat from Ailira, an AI taxation research application which reduces legal case and precedent research from hours to minutes. Much of the film explored the impact of rapid change on the nature of work, including the issue of massive unemployment.
So, if the impact promises to be so widespread, how will humans cope with reduced work opportunities?
During the screening of The AI Race, Toby Walsh, research group leader at NICTA and Professor of Artificial Intelligence at the University of NSW, Sydney, proposed that he could see a future where everyone was paid a nominal wage to survive, a universal basic income. Facebook's Mark Zuckerberg and Tesla CEO Elon Musk have also expressed this view. It sounds very much like a welfare payment, and once again, little more than the statement was explored in the film. How can such a proposal be considered a viable alternative, and more importantly, how would such a solution be sustained economically? For many, this would be a band-aid measure to shore up global economies. Logically, huge job losses would mean less government revenue. But with every country facing the same challenges of mass unemployment and revenue loss, how would these economies fund such huge welfare budgets?
In 2016 the Australian Federal Government announced a crackdown on welfare. Since then, reviews have been rolled out for people on Disability Pensions. Changes to the Aged Pension assets test have also occurred. Politicians tell us constantly that Australia cannot support its ageing population without changes to the welfare system. The pension age requirement has been raised, as has the retirement age. People are being asked to work beyond their sixties and into their seventies. This appears totally at odds with the utopian predictions of less work and more leisure.
If all these pressures are coming to bear now, how will governments afford a basic welfare payment for increased numbers of the population who are not working and not paying income tax? How can any model of this type be considered a rational solution?
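The scale problem can be made concrete with a back-of-envelope calculation. The figures below are rough, round assumptions for illustration only (approximately 19 million Australian adults, a payment near the 2017 tax-free threshold of $18,200, and total federal revenue of roughly $417 billion in 2016–17), not budget data:

```python
# Back-of-envelope only: rough, rounded 2017-era figures, assumed for illustration.
adults = 19_000_000                 # approx. Australian residents aged 18+
annual_payment = 18_200             # modest payment, near the 2017 tax-free threshold
federal_revenue = 417_000_000_000   # approx. total federal revenue, 2016-17

# Total cost of paying every adult, and its share of current revenue.
ubi_cost = adults * annual_payment
share_of_revenue = ubi_cost / federal_revenue

print(f"UBI cost: ${ubi_cost / 1e9:.0f}bn "
      f"= {share_of_revenue:.0%} of current federal revenue")
```

Even at this very modest payment level, the scheme would consume the bulk of existing federal revenue, before a single hospital, school, or road is funded, and that is while assuming revenue itself does not fall with mass unemployment.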
For the majority of us, work for pay dominates our lives, and is connected to identity and value as citizens who contribute in a positive and meaningful way to our societies. But a loss of work, and subsequent existence on a welfare payout, may only be the beginning.
What does this mean for work, superannuation, home ownership, the affordability of basic utilities, healthcare and lifestyles for the majority of people? What does this mean for economies in general if people can no longer afford to buy anything more than basic goods? Automated cars might be safer, and they might be cool, but who will be able to afford them? Who, on a basic universal wage, will be able to afford other discretionary goods, or take holidays?
And what are the future solutions for such massive social and economic upheavals? Will they include the axing of publicly funded schools? And since most tertiary education is privatised, will education become a luxury, or restricted to those deemed worthy of the investment? If all these new technologies change not only how we work but significantly reduce the number of people who do work, how will people cope psychologically?
At present, these are questions that are not being answered, even as government officials, scientists and other experts assure us that jobs will be created, particularly in creative thinking areas.
Another point of concern discussed during the ABC's program The AI Race is that data is now the key to the future. Around the collection, storage, and use of such data looms another very important debate: one of ethics.
Large companies collect huge amounts of data from various sources and use it to target customers. Cookies, phones, and everything else connected to the internet can track a person's whereabouts. Data collected from your internet browsing identifies your interests and determines the advertisements displayed on your Facebook page.
A recent article by journalist Harry Guinness explained how Facebook's algorithms work. Published in February 2017, it is already outdated: since then, Facebook has again updated and refined how its algorithms determine what content we see. My concern is how these algorithms affect our lives, directing us not only to particular news and entertainment content but potentially influencing or altering our opinions by manipulating what we view. Are we being conditioned? What are the implications for freedom?
Dr Cathy O'Neil is a mathematician turned campaigner and the author of Weapons of Math Destruction, which raises awareness of the ways in which data and algorithms are being used. She provides compelling evidence that algorithms are being deployed as weapons through the authority of the inscrutable, and that this poses a danger to democracy. As O'Neil points out, democracy functions because everyone understands the rules, and there are points of accountability built into the system. But data collection, analysis, and the writing of algorithms have no such transparency, nor are there checks on their authority. These algorithms are, according to O'Neil, being used for social control, and Facebook is one example. As Harry Guinness explains in his article on Facebook's algorithms:
Facebook has a ton of information on it, and Facebook doesn’t want to show you stories that don’t interest you. So every time you open Facebook, the algorithm looks at all the possible stories you could be shown. Everything that your friends and the pages you follow have posted since you last logged in is included. Each story is assessed individually and given a Relevancy Score; a measure of how likely Facebook thinks you are to spend time viewing it, like it, comment on it, or share it. This score is unique to you.
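The ranking step Guinness describes can be sketched as a simple score-and-sort loop. This is a hypothetical illustration only: the signal names and weights are invented for the example, not Facebook's actual (undisclosed) model, but the shape is the same: each candidate story gets a per-user Relevancy Score, and the feed shows the highest-scoring stories first.

```python
from dataclasses import dataclass

# Hypothetical per-signal weights -- invented for illustration,
# not Facebook's real (undisclosed) parameters.
WEIGHTS = {"view": 1.0, "like": 2.0, "comment": 4.0, "share": 8.0}

@dataclass
class Story:
    author: str
    # Predicted probability (0-1) that THIS user views/likes/comments/shares.
    signals: dict

def relevancy_score(story: Story) -> float:
    """Weighted sum of predicted engagement probabilities for one user."""
    return sum(WEIGHTS[k] * story.signals.get(k, 0.0) for k in WEIGHTS)

def rank_feed(stories: list) -> list:
    """Score every candidate story; show the highest-scoring first."""
    return sorted(stories, key=relevancy_score, reverse=True)

feed = rank_feed([
    Story("news page", {"view": 0.9, "like": 0.1}),
    Story("close friend", {"view": 0.8, "like": 0.6, "comment": 0.4, "share": 0.1}),
])
print([s.author for s in feed])  # the close friend's post ranks first
```

The point of the sketch is that the weights and predictions are invisible to the user: two people following identical pages can see entirely different feeds, with no way to inspect why.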
Other examples of these algorithms include education accountability modelling and the micro-targeting used by politicians to understand voter sentiment.
One point is very clear: data is king. To be more specific, the collection and ownership of data equals power. As Professor Mary-Anne Williams, Director of the Innovation and Enterprise Research Laboratory at UTS, suggests, those who own the data rule the world. So how did we get from internet surfing to social manipulation and control over the masses?
At this point, it is helpful to reflect on the early days of the internet and the communal values it assumed. As Jean-Noël Jeanneney pointed out in his book Google and the Myth of Universal Knowledge, "with the introduction of the Internet there was an intrinsic libertarian spirit amongst users, particularly academics." I myself remember the enthusiasm for knowledge sharing at my university library induction in 2005. That attitude has largely receded as business interests have come to dominate the internet and data collection has become a tool of power.
But perhaps the most disturbing questions facing the world today concern the ethics of military applications of AI. On Monday 21st August 2017, technology entrepreneur and Tesla CEO Elon Musk, together with 116 specialists, called on the UN to ban autonomous weapons: weapons able to "operate on their own without human intervention." The open letter, reported by news broadcasters across the world, also seeks a ban on the weapons race currently underway. The Guardian quoted a warning from an earlier letter to the UN from the founders of AI and robotics companies: "We do not have long to act. Once this Pandora's box is opened, it will be hard to close." In my opinion, we are already too late.
World War II is still within living memory of our elder citizens, and the development of photography and film enabled governments and journalists to document events as they unfolded. The Cold War was dominated by the fear of nuclear war between the two great powers of the time, America and the USSR. During the Gulf War and subsequent conflicts, the world watched footage of battles waged with weapons designed for surgical strikes guided by GPS.
Since then, many conflicts have occurred, and the world has witnessed several regimes perpetrate atrocities against their own civilians. Despite our knowledge and understanding of the consequences of such advanced weaponry, including nuclear weapons, the race continues, now with the added threat of AI-driven autonomous weapons.
The ongoing wars against religious extremism, and the associated terrorist attacks around the world, already make use of whatever technologies are available; in 2001, commercial planes were turned into weapons in the terrorist attack on the Twin Towers in New York. The internet has made it possible for information to spread rapidly, and a quick search will identify articles and video of AI and robotic developments, including some with obvious military potential. It is foolish, and even dangerous, to assume that terrorist groups or paranoid regimes such as North Korea won't attempt to harness AI and autonomous robots in their ideological wars.
Professor Williams, interviewed in the ABC's The AI Race, has described autonomous weapons as potentially being like velociraptors: agile and swift, "able to hunt humans with high precision sensors augmented with information from computer networks." Surely this scenario is chilling enough to raise serious ethical questions among even the most optimistic of tech fans, and it could prompt terrorist organisations to seek advanced robotic technology. Even airing such opinions and possible futures through the media and in film (easily accessible through social media platforms such as YouTube and Facebook) is reckless and dangerous, for films are visually powerful tools which can easily be manipulated. Furthermore, the calls for universal bans assume some shared moral system of values and ethics: that once banned, every government, regime, and group will cease all autonomous weapons programs. Simply banning the use of autonomous weapons, or killer robots as Elon Musk and others call them, won't prevent terrorist groups or rogue states from pursuing technological advances in this field.
Responsibility for advances in technologies used for conflict and mass destruction, such as nuclear weapons and now AI, lies with those who develop them, as well as with the businesses, institutions, and governments who fund this research and development. Abdicating responsibility by claiming that the uses to which these technologies are put are not the developers' concern is nonsense and unacceptable. This condition is known as habitus, or in layman's terms an "ivory tower" mentality: the isolation or remoteness of academics or scientists who have become removed from the real problems and consequences of everyday life. The more complex notion, habitus, incorporates culture as a contributing factor in this dislocation from reality.
One study of this was discussed at the 2004 conference Science and Technology in the Twentieth Century: Cultures of Innovation in Germany and the United States, in a session on the culture of science and the attitudes of scientists in Germany. There, Dr Ulrike Kissmann discussed Bourdieu's notion of "habitus" and concluded that nuclear scientists were dissociating themselves from society, abrogating any responsibility related to their research: 'They constructed the affiliation with their own professional community by projecting the military potential and its risks onto "the others." Thus, they expressed the historically grown habitus of scientists as working in a societal vacuum.'
The question of habitus, or ivory tower mentality, should be put to those in the science community who extol the benefits of proposed changes and dismiss the concerns of the general population, secure in the knowledge that their own jobs are safe. The public's natural anxiety is too easily criticised and dismissed by those with vested interests in the continuation of such research and development.
The advances in AI raise legitimate concerns for all citizens of this planet. At the same science and technology conference, Dominique Pestre ascribed changing attitudes to science in the 1970s 'to the growing privatization of knowledge on a global scale', where science had moved from benefitting society 'to a system in which a financial and market-oriented appropriation of scientific knowledge is now in the ascendant.'
Considered in the context of O’Neil’s concerns about secrecy, bias, and lack of accountability in data collection and algorithm applications, this adds weight to concerns for the future direction of democracy and the freedom and rights of citizens who value it.
As Elon Musk pointed out in the open letter to the UN, the world is already engaged in an AI race, and in some quarters the old justification of 'if we don't develop it, someone else will' still dominates. The ethics of developing such technologies may be questioned from time to time, but universities and tech companies continue to be funded and, to my knowledge, rarely account to the public for their research pathways. Thus, questions of moral and ethical responsibility are too easily pushed aside or subverted.
Finally, the rapid technological advancement in AI and robotics raises questions which, as yet, are not being answered by the so-called experts. It is not anxiety about change, or change per se, which is concerning, but the foreseeable disruption to societies, the profound ramifications of AI advancements, and the uses to which these advanced technologies will be put.
Technology is wonderful when it enhances our lives, not when it replaces human input to the extent that is being predicted. Surely it is time for the general public to fully engage with the discussions around AI advancements and ethics, and more importantly, for governments, institutions, and scientists to accept responsibility for the applications of technological advancements, and be accountable to the people of the world.
Sources used in the research for this opinion piece:
Australian Broadcasting Corporation, 2017, The AI Race – Documentary ABC, Australian Broadcasting Corporation, viewed 21st August 2017, <https://www.youtube.com/watch?v=gLeuCj0ZFo4>
Brynjolfsson, E & McAfee, A 2014, The Dawn of the Age of Artificial Intelligence: Reasons to Cheer the Rise of the Machines, The Atlantic, The Atlantic Monthly Group, viewed 8th August 2017, <https://www.theatlantic.com/business/archive/2014/02/the-dawn-of-the-age-of-artificial-intelligence/283730/>
Collins, English Dictionary, viewed 13th September 2017, <https://www.collinsdictionary.com/dictionary/english/ivory-tower>
Creighton, A & Maher, S 2016, One in Two Voters is Reliant on Public Purse, The Weekend Australian, News Corp, online, viewed 13th September 2017, <http://www.theaustralian.com.au/national-affairs/one-in-two-voters-is-fully-reliant-on-public-welfare/news-story/d0e4af64354d9e9d6fe81b99ef59cf9b>
C S Canada, 2017, Habitus, Harvard Online Journals, 2010, viewed 14th September 2017, <http://cscanada.org/web/> and, <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.670.9485&rep=rep1&type=pdf>
Dirican, C 2015, 'The Impacts of Robotics, Artificial Intelligence on Business and Economics', World Conference on Technology, Innovation and Entrepreneurship, Procedia – Social and Behavioral Sciences, vol. 195, 3rd July 2015, pp. 564–573, Elsevier ScienceDirect, open access, viewed 21st August 2017, https://doi.org/10.1016/j.sbspro.2015.06.134 <http://www.sciencedirect.com/science/article/pii/S1877042815036137>
Eckert, M & Trischler, H 2005, Science and Technology in the Twentieth Century: Cultures of Innovation in Germany and The United States, GHI BULLETIN No. 36 (Spring 2005) pp. 130-134 online, viewed 13th September 2017, <https://www.ghi-dc.org/fileadmin/user_upload/GHI_Washington/Publications/Bulletin36/36.130.pdf>
Encyclopaedia Britannica, 2017, The History of Technology, p. 5, viewed 8th August 2017, <https://www.britannica.com/technology/history-of-technology/Perceptions-of-technology#toc14917>
Gault, Montgomery, Muller, O’Gorman, Resser, Roland, Winefield, 2000, The Psychology of Work and Unemployment in Australia Today, The Australian Psychological Society Ltd, viewed 13th September 2017, <https://www.psychology.org.au/Assets/Files/work_position_paper.pdf>
Greenwald, T 2015, Does Artificial Intelligence Pose a Threat?, Wall Street Journal, Dow Jones Company, News Corp 2017, viewed 8th August 2017,
Guinness, H 2017, How Facebook's News Feed Sorting Algorithm Works, How-To Geek, blog post, 28th February 2017, viewed 21st August 2017, <https://www.howtogeek.com/290919/how-facebooks-news-feed-sorting-algorithm-works/>
Heymann, J, Stein, MA & Moreno, G 2014, Disability & Equity at Work, Oxford University Press, New York, online version, viewed 13th September 2017, <https://books.google.com.au/books?id=_6JNAgAAQBAJ&pg=PT63&lpg=PT63&dq=the+role+work+plays+in+an+individual%27s+contribution+to+society+and+notions+of+wellbeing&source=bl&ots=ToT-MbBa0G&sig=1AoD_M1xhR34QQ4-1aNHKR3Ef3M&hl=en&sa=X&ved=0ahUKEwj9l6vH96HWAhUJzbwKHU8pDD4Q6AEIXjAJ#v=onepage&q=the%20role%20work%20plays%20in%20an%20individual’s%20contribution%20to%20society%20and%20notions%20of%20wellbeing&f=false>
Jeanneney, J-N 2007, Google and the Myth of Universal Knowledge, University of Chicago Press, Chicago & London.
Morris, D Z 2017, Elon Musk and AI Experts Call for Total Ban on Robotic Weapons, Fortune Magazine, (online), 20th August 2017, viewed 21st August 2017, <http://fortune.com/2017/08/20/elon-musk-robotic-weapons/>
Morton R, 2016, Crackdown Throws Thousands Off Disability Pension, News Corp, The Weekend Australian, 13th July 2016, viewed 13th September 2017, <http://www.theaustralian.com.au/national-affairs/policy/crackdown-throws-thousands-off-disability-support-pension/news-story/c0097c07716302ada36f06a865e047db >
Nott, G 2017, Can Autonomous Killer Robots be Stopped?, Computerworld, 2017, 25th August 2017, viewed 29th August 2017, <https://www.computerworld.com.au/article/626460/can-autonomous-killer-robots-stopped/>
O’Neill, M 2017, ABC’s Lateline, Explainer: What is Artificial Intelligence, viewed 8th August, 2017, http://www.abc.net.au/news/2017-08-07/explainer-what-is-artificial-intelligence/8771632
Price, R 2016, Stephen Hawking: This Will be the Impact of Automation and AI on Jobs, World Economic Forum, article published in collaboration with Business Insider, 6th December 2016, viewed 18th September 2017, <https://www.weforum.org/agenda/2016/12/stephen-hawking-this-will-be-the-impact-of-automation-and-ai-on-jobs>
Slade, M 2010, Mental Illness and Well-being: The Central Importance of Positive Psychology and Recovery Approaches, BMC Health Service Res., PMC, US National Library of Medicine, National Institutes of Health, published online 26th January 2010, doi: 10.1186/1472-6963-10-26 viewed 13th September 2017, <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2835700/>
Stanford University, 2016, Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence, Report of the 2015 Study Panel, pdf, pp. 1–27, viewed 8th September 2017,
Tegmark, M, 2016, Benefits & Risks of Artificial Intelligence, Blogpost, Future of Life Institute, viewed 8th August 2017, <https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/>
The Economist, 2017, Automation and Anxiety: Will Smarter Machines Cause Mass Unemployment? Special report, 25th June 2016, viewed 18th September 2017, <https://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety>
The Guardian, 2017, Elon Musk Leads 116 Experts Calling for Outright Ban on Killer Robots, viewed 21st August 2017, <https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war>
United Nations, 2017, Will Robots Cause Mass Unemployment? Not Necessarily, but They Do Bring Other Threats, news item published online, 13th September 2017, viewed 13th September 2017, <https://www.un.org/development/desa/en/news/policy/will-robots-and-ai-cause-mass-unemployment-not-necessarily-but-they-do-bring-other-threats.html>
Williams-Grut, O 2016, Business Insider Australia, Robots are Coming: How AI Could Increase Unemployment and Inequality around the World, viewed, 8th August, 2017, <https://www.businessinsider.com.au/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2>
Yudkowsky, E 2008, Artificial Intelligence as a Positive and Negative Factor in Global Risk, Global Catastrophic Risks, edited by Nick Bostrom and Milan M Ćirković, pp. 308–345, New York: Oxford University Press. Online PDF version, viewed 8th August, 2017, <https://intelligence.org/files/AIPosNegFactor.pdf>