The dawn of AI
A new age in the development of technology is upon us. Artificial intelligence (AI), a discipline that combines principles of computer science, engineering, and statistics to empower computers to perform tasks that normally require human-level intelligence, such as speech recognition, visual perception, and decision-making, is poised to fundamentally change how we interact with the world around us. Since the idea of AI first began circulating in science fiction in the early 20th century, it has sparked tremendous interest from academics, industry experts, and the general public alike, all looking to gauge the feasibility of robots, flying cars, smart homes, and the like. But despite such broad interest, it was not until decades later that scientists and researchers began to turn AI from fictional imagery into real development.
In the 1950s, AI first began to take shape. In 1956, John McCarthy, an American computer and cognitive scientist, along with his colleague Marvin Minsky, organized a summer workshop at Dartmouth College where the field was given its name and where Logic Theorist, a program developed by Allen Newell and Herbert Simon that is widely considered the first AI program, was presented.
Further progress following this kick-off conference came gradually. Researchers at MIT and other top institutions continued to try to build a machine capable of human-level cognition. What they realized was that, to do so, a machine would have to store and process far more information than the hardware of the era allowed. That limited capacity was the constraint preventing many researchers from realizing their smart machines. Their best guide was the principle that came to be known as Moore’s Law, the observation that the number of transistors on a chip, and with it a computer’s ability to store and process information, would double roughly every two years. But no one knew exactly how many of those doublings it would take before computers were powerful enough to handle the vast amounts of information necessary to make these machines useful.
Despite this constraint, scientists and researchers continued to make progress in the field. Early work on neural networks, such as the perceptron, began in the late 1950s and 1960s, laying the groundwork for what would much later become deep learning, a subfield of machine learning. Decades on, researchers found that adding additional layers to these networks gave systems dramatically more representational power, overcoming the hurdles they had encountered when trying to perform very complicated computations. This positioned the field to handle the second major development: the advent of the internet and the exponential growth in the availability of data. But even with such growing excitement within the scientific and research community, many in the public sphere remained largely unaware of AI’s development.
It wasn’t until 1997 that many people outside of the academic community got to see first-hand the progress that had been made towards making the dream of an AI machine a reality. That year, IBM’s chess-playing computer Deep Blue took on world champion Garry Kasparov. In a shocking upset, Deep Blue defeated him, the first time a computer program had beaten a reigning world chess champion in a full match. More recently, DeepMind, an artificial intelligence company based in the United Kingdom, created a program, AlphaGo, that in 2016 defeated 18-time world champion Lee Sedol and then, in 2017, the #1-ranked Go player in the world, Ke Jie, at what is considered one of the oldest and most intellectually demanding board games in history. Celebrated for its highly strategic nature, Go admits on the order of 10^360 possible games, far more than there are atoms in the observable universe. After seeing a machine defeat the best the world has to offer, interest in the development of AI took off in both the public and private spheres.
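The scale of that figure can be sanity-checked with a quick back-of-the-envelope calculation, sketched here in Python using two commonly cited estimates (roughly 250 legal moves per turn and games of roughly 150 moves; both are approximations, not exact counts):

```python
import math

# Back-of-the-envelope check of the scale of Go, using commonly cited
# estimates: ~250 legal moves per turn, games lasting ~150 moves.
branching_factor = 250
game_length = 150

# Number of possible games ~ 250^150; compute its order of magnitude.
order_of_magnitude = game_length * math.log10(branching_factor)
print(f"~10^{order_of_magnitude:.0f} possible games")  # ~10^360

# For comparison, the observable universe is estimated to contain
# on the order of 10^80 atoms.
```

Even under much more conservative estimates, the count dwarfs anything a machine could enumerate by brute force, which is why AlphaGo’s win was such a milestone.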
AI in today’s world
In today’s landscape and going forward, AI is projected to be a major disruptor in how we fundamentally interact with the world around us. Just as past technological innovations (e.g., electricity, the combustion engine, the internet) dramatically transformed social and economic norms, so will AI fundamentally change how the world operates.
Kai-Fu Lee, former president of Google China and current chairman and CEO of Sinovation Ventures, describes four waves of AI that help break the behemoth of AI into consumable, understandable segments: internet AI, business AI, perception AI, and autonomous AI.
Internet AI, the type most familiar to many people, interacts with us as we conduct internet searches, scroll through social media, or browse a grocery store website for the best kind of toilet paper to buy. Search engines track our every movement, looking for ways to optimize our search process or suggest new products we’re likely to buy. Internet AI also scours the internet for deepfakes, tagging and reporting them for removal.

Business AI involves mining data in search of correlations between inputs and outputs, causes and effects, that might otherwise have gone overlooked by a human researcher. It’s used across industries to predict patient outcomes in healthcare, approve loans in finance, or predict machine failures in utilities.

Perception AI is about equipping machines with tools to help them perceive the world. Placing IoT devices throughout the environments in which we live, and connecting them through wireless networks or 5G, allows us to use machines as an extension of ourselves, granting us the ability to see and make sense of the world around us more comprehensively than before. Combined with business AI, industries like healthcare are using it to monitor patients throughout the day and develop personalized health plans based on data gathered about their fitness, diet, and other lifestyle habits. It can also be used in the public sector, for example to deploy smart devices at a campaign rally, monitoring the environment through cameras, gas sensors, and motion detectors to gauge the overall security posture.

The fourth wave, autonomous AI, is the most comprehensive form in that it combines the previous three in ways that allow machines to interact with the world autonomously. In the future, autonomous vehicles are set to become a normal part of how automobiles move, making streets safer, warehouses more productive, and transportation easier.
It’s exciting to think about a world where you can walk into a grocery store and immediately be presented with a synthesized list of options to choose from, gathered from your preferences and past shopping history, or where self-driving cars drive us to and from work, saving us the headache of missing an exit or maneuvering around slow drivers in the morning. But there is much to do before AI fulfills its true potential.
The ethical debates in the development of AI
What has been established so far is how the growing proliferation of AI will change society in very fundamental ways, and how that revolution will be fed and grown by the harnessing of more and more data. But another central component that must not be forgotten through this development, on both a macro and micro level, is the people who will be affected by the technology. AI is poised to challenge very fundamental aspects of society, and we have already seen instances of this. Major technology companies, both in the United States and abroad, have taken advantage of unsuspecting consumers by harvesting their data without permission, leaving those users exposed to their personal information falling into the hands of people who could use it to ruin everything they have worked to build. We have also seen AI challenge the role humans have played in the labor market for decades. Two things are certain when it comes to AI: we will need to protect our data from being misused, and we will need to fundamentally change our relationship with many of the tasks we have performed over the years. These two scenarios and their implications are examined below.
Consider China, which has gathered enormous amounts of data remarkably quickly. What granted it that ability was its historically lax consumer data privacy laws. Before 2018, there were few, if any, standard regulations in place protecting individuals and the data they produce through using apps, filling out health questionnaires, or submitting a job application. The generally accepted attitude in the country was best captured by Robin Li — founder of the Chinese megacompany Baidu, the Chinese equivalent of Google — who commented: “If [Chinese people] are able to trade privacy for convenience, for safety, for efficiency, in a lot of cases they’re willing to do that.” If that was the accepted status quo before, that comment seemed to be the straw that broke the camel’s back. That same year, the tech giant was sued (although the suit was ultimately dropped) for collecting user data without disclosing to customers that it was doing so. Since then, China has become one of the most proactive countries in the world, along with the countries of Europe, when it comes to protecting data privacy.
Still, Chinese infrastructure has benefited, and will continue to benefit, from the mass deployment of endpoints from which to collect data from suspecting and unsuspecting users alike. The ability of Chinese companies to place themselves ubiquitously throughout the lives of their customers grants them access to a wide array of data. A company through which individuals pay bills, order ride-sharing services, order food, schedule appointments, and buy groceries gains a treasure trove of data it can then feed into AI models to drive business insights. It is this fundamental difference in business models between Chinese and American companies that grants the former this advantage.
From this example, we see that when companies organize around their customers, they can generate direct insights into the factors driving both the market and their business. How will companies position themselves to generate the data needed to accelerate their AI production?
Many monotonous activities will be automated on a scale only envisioned in movies. And while that is a great opportunity, it also presents ethical issues that must be addressed before that future vision can ever be fully realized. How we resolve them will decide the future of the technology.
In a society where AI becomes the backbone, companies will treat data as the new currency from which business decisions are generated. But alongside that ever-greater focus on generating data, measures must be put in place to protect individual rights, to keep society from morphing into a dystopian novel where morality, empathy, and interpersonal decisions are replaced by algorithms, statistics, and formulas, and to ensure that as algorithms proliferate, they are constructed so that they work towards the betterment of society, moving us towards a brighter future and not backward in human development.
George Orwell’s classic novel 1984 depicts a future where the government controls all aspects of society through intrusive measure after intrusive measure, controlling every detail of our lives through an intricate network of surveillance, manipulation, and fear. The emergence of technology able to integrate itself so deeply into every part of our lives will fundamentally shift the balance of power into the hands of those who control the flow of data and information. We must remain vigilant in the face of such transformative change, remembering that although technology may have the power and capacity to perform a function, we cannot harness that power in a way that infringes on an individual’s right to self-determination, dignity, and liberty. That is the promise of living in an open society that values the power of the individual as much as it does the collective good, in theory if not always in practice. That characteristic of society cannot, and must not, change.
The changing nature of work
While the tension between AI and privacy is one society must work to resolve, another daunting product of the proliferation of AI is the way it will affect the relationship people have with labor and the tasks they perform on a day-to-day basis. AI, due to the level of disruption it is predicted to bring about, will end up displacing a large number of jobs people have traditionally performed. But which jobs? And what are we supposed to do instead?
AI will restructure economies around the world, driving opportunity for some while making social and economic mobility more difficult for others. Kai-Fu Lee discusses this topic in a very digestible way in his book AI Superpowers.
In short, AI will automate those tasks in the labor force which are highly redundant and mundane (e.g., stocking shelves, performing financial loan evaluations), drive a massive wedge between those who stand to benefit and those who will not, and risk creating a massive political upheaval as a new oligarchic class forms at the top of society.
It’s also worth addressing the uniqueness of AI as a revolutionary technology. Looking to the past and how technology has worked either for or against human progress, we see time and again that technology has been a net positive for society, increasing productivity and improving quality of life; cars, the cotton gin, and cell phones are examples. Although the cotton gin threatened to render thousands of jobs less valuable, it ended up being a net positive for society. There are, however, certain innovations that have carried more weight than others and caused more disruption than others. We have seen examples of these in the past: the steam engine, electricity, and information and communication technology (ICT). These special innovations are referred to as general-purpose technologies (GPTs). More than other technologies, they led to greater displacement as lower-skilled workers were replaced by machines that could perform their tasks more efficiently and at lower cost. Yet this reduction in operational costs eventually led to an overall increase in productivity, which produced a higher standard of living across the board. In the early 20th century, industrialization took a few specialized tasks and made them more efficient with the introduction of the machine. Instead of a handy craftsman you needed to buy your furniture from, a machine cut all the respective pieces, packaged and shipped them to the customer, and all the customer had to do was assemble them at home. The customer benefitted in that the product became much less expensive, as it required less labor-intensive work: machines were simply repeating a relatively simple set of commands they were programmed to carry out. Yes, some jobs were displaced in the process. But from a capitalistic standpoint, society benefitted far more from the Industrial Revolution than if it had never happened.
Yes, this revolution of the means of labor gave birth to the Marxist-socialist movement, which looked to redistribute the gains from this mass reorganization of labor back into the hands of the people and away from the top 1%, those who controlled the means of production and, at the end of the day, realized a far more disproportionate share of the wealth than did the working class.
Analogous to how those who controlled the means of production reaped the biggest rewards then, so will those in the AI economy who control the flow of data reap the rewards of digitalization, algorithms, and data science models.
The AI Revolution thus stands poised to disrupt in a similar way to the Industrial Revolution. Just as specialized tasks such as assembly were broken down, streamlined, and made more efficient, so will the AI Revolution transform those tasks that, as Lee identifies, “can be optimized using data, and do not require social interaction” (Lee 152).
But automation spans the whole spectrum of the workforce; both blue-collar and white-collar jobs will be affected. Justin Trudeau, the Canadian Prime Minister, has discussed how he sees the reaction to AI rolling out in the job market in three parts: 1) disruption, 2) transition, and 3) a brighter future.
Phase 1: Disruption
The disruption phase is what we have touched on already. It will be the first phase of the AI rollout, where tasks are automated and jobs are taken over, more and more, by machines and algorithms. Estimates of the percentage of jobs in the United States that could end up being taken over range from as low as 9% in some reports to as high as 47% in others. We have already felt this disruption at times. We receive phone calls these days not from individuals hoping to sell us an all-expenses-paid trip to Hawaii or urging us to take immediate action on an outstanding bill, but from robots. When we call to get our prescriptions filled or request information about our bank account, robots are now the gatekeepers, streamlining the process and pointing us to the right resource. They use natural language processing to interpret the prompts we give about the problem we are facing or the representative we need, and from there decide on the best resource to address our issue. Jobs like these, where companies have figured out it is easier and less costly to have a machine perform the task than a human, are the type of disruption we will continue to see in the market.
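As a rough illustration of what such a gatekeeper does, here is a minimal keyword-based sketch in Python; real systems rely on trained speech and language models, and every department name and keyword below is hypothetical:

```python
# A toy sketch of automated call routing. Production systems use trained
# NLP models; this keyword matcher only illustrates the flow. All
# department names and keywords here are hypothetical.

INTENTS = {
    "pharmacy": {"prescription", "refill", "medication"},
    "billing": {"bill", "balance", "payment", "charge"},
    "accounts": {"account", "password", "login"},
}

def route_call(utterance: str) -> str:
    """Route to the department whose keywords best match the caller's words."""
    words = set(utterance.lower().split())
    best, best_hits = "operator", 0  # default: fall back to a human operator
    for department, keywords in INTENTS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = department, hits
    return best

print(route_call("I need a refill on my prescription"))  # pharmacy
print(route_call("There is a strange charge on my bill"))  # billing
```

The fallback to a human operator is the important design choice: when the machine cannot match an intent confidently, a person takes over.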
Phase 2: Transition
The next phase will be to figure out how best to proceed without some of the traditional roles in the workforce we are used to seeing. With up to nearly half of the workforce potentially displaced by automation, such a transition from one way of life to another may take time. However, even as millions of old tasks are left obsolete, what the economy and workforce can count on is that AI will also open the door for millions of new tasks to be created. For example, new positions will open for people to attend to issues in these systems as they come up. Skills related to systems engineering, computer science, and data engineering, to name a few, are poised to see tremendous growth in the new AI economy. And with the growing number of educational resources being introduced to the market (e.g., free online courses, YouTube videos and tutorials), there will be opportunities to learn new skills outside of the traditional college and university setting.
Phase 3: A brighter future for work
In due time, this will lead to a new economy where, hopefully, the mundane jobs that companies automate open the door for new ones. It’s a daunting paradigm, but as Jeff Bezos, CEO of Amazon, commented regarding the future of jobs in the AI economy, “Humans like to do things and we like to be productive and we will figure out things to do and we will use these tools to make ourselves more powerful. What I predict is that jobs will get more engaging.” With preparation now on the part of government and private actors alike, a more productive world may unfold before our eyes. AI will lead to a healthcare system that benefits from more effective diagnostics, with AI able to make sense of numerical and pictorial information more accurately and reliably than its human creators. Doctors will be able to engage with patients more proactively, prompting them when they should come in for a visit instead of reacting to ailments once they become noticeable. Businesses will be empowered to conduct their operations more efficiently, using more data to gain insights into how to keep their clients engaged, attract new ones, and improve their product lines. Educational institutions will be able to attract prospective students with material based on their search history, improve the quality of their classes by using technology to gauge how well students are receiving class material, and create more extracurricular opportunities in the community and through internships by connecting students with community leaders and alumni who can provide clarity and direction towards their goals outside of the classroom.
Nutritionists will be able to develop customized meal plans and workout plans for their customers based on their level of daily activity, their current weight and their desired weight, as well as many other data points, all powered using AI algorithms to optimize their workouts and dietary routine. The possibilities for how AI can be applied to improve the quality of work are limitless! AI will change the way individuals and groups alike engage with each other and the world around them.
But just as we saw early in the 20th century, this revolution has the potential to make processes more efficient and improve standards of living. That can only happen, though, if we empower people to take advantage of the new realities of work. It will start, perhaps, at an early age in the development of the future labor force: training kids in the fundamentals of computer science, engineering, and related fields so that they can take advantage of a society where such principles are integrated into every part of life. If they truly want to benefit from this technology, they must understand how it works and where the opportunities for development, and for a future living alongside it, lie.
True, though, the elimination of mundane tasks will not, in the long run, necessarily equate to raised standards of living. We have seen, especially in the United States, that increased productivity does not translate to higher wages for everyone, and that the gap separating the top 1% of wage earners from the other 99% continues to widen. Some stand to benefit, either by controlling the redistribution of labor or by having the skills to see their utility increase, while others will be displaced or largely remain unaffected. Still, we can be more confident that the economy as a whole will become stronger as a result of the AI Revolution.
Considerations must be made for the AI-driven economy that is to come. We will have to decide the future we would like to see, not just for ourselves but for the generations to follow. A tension exists between optimizing for efficiency and safety, as in the case of autonomous vehicles and other perception AI use cases, and protecting individuals’ rights to keep their data from companies looking to harvest ever more of it for their models. Granted, based on the outlook presented in this exposé, we take the stance that individuals’ rights should take priority over the speedy development of autonomous vehicle technology or the more accurate training of facial recognition software built to locate criminal suspects at large. But considering the influence Western democratic principles such as the promotion of individual rights have had on the world over the past three hundred years, it is an interesting perspective to take, though certainly not the only viable one.
A more ethical future for AI-human symbiosis
How do we protect society from potential reverberations from a system where every move made is collected and stored in a data warehouse, every word spoken is transcribed, translated and stored, every purchase is tracked and monitored, every decision logged and recorded forever to be referenced? How do we control the rate of AI development so that as it is implemented, it is done so in such a way that harnesses the power of technology while empowering people instead of replacing them, improving their quality of life instead of diminishing it?
When it comes to mitigating the risks associated with the development of AI, much has been done at both the transnational and national levels. At the transnational level, two documents stand out: the EU General Data Protection Regulation (GDPR) and the Rome Call for AI Ethics.
The EU GDPR is a landmark successor to the 1995 EU Data Protection Directive. The GDPR protects EU citizens’ data by applying its standards around transparency, informed consent, and citizen agency to all governments and organizations that collect and process EU citizens’ data, regardless of whether they are based within the EU. It places a much greater burden on governments and businesses handling such data to disclose how they handle it, and requires that they ask for consent before collecting or using a person’s data. This increased transparency on the part of the data collector, together with the increased optionality on the part of the user, grants users more control over their information and insight into how companies intend to use their data and to what ends. The GDPR also gives EU citizens more rights regarding whom their data is shared with, what information they choose to share, and ultimately how that information is used.
More specifically, Article 22 of the 99-article piece of legislation speaks to the use of algorithmic decision-making on EU citizens. As we’ve seen, using algorithms to automate decision-making can lead to discrimination based on race, religion, gender, or any other category. The GDPR states that individuals “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (Article 22, Paragraph 1). The article also states that data controllers, the organizations or governments collecting the user’s data, must “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision” (Article 22, Paragraph 3).
The GDPR also stands out in its ability to hold entities both directly and tangentially affiliated with the EU accountable to its regulations. Regulators have the power to fine any entity, within the EU or around the world, found to be in violation of its standards for EU citizen data up to €20 million or 4% of annual global turnover, whichever is greater. It is a model, therefore, not just in the precedents it sets and the values it champions, but also in the soft-power legitimacy it garners across borders.
Rome Call for AI Ethics
The second document, a recent charter signed by IBM and Microsoft, the Rome Call for AI Ethics, lays out six principles to ensure the ethical development of AI: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.
One of the more complicated aspects of AI algorithms is that they enable a machine to come to decisions on its own. For example, the program developed by the team at DeepMind used the algorithms given to it, combined with its data inputs, to decide on its own where to move on the Go board. No human was assisting it; AlphaGo decided autonomously which move gave it the best chance of winning. Similarly, a program written not to play Go but to decide who gets a home loan and who does not also comes to its conclusions based on the algorithms it was given and the data it was fed. It runs the data through its algorithms to decide autonomously how to proceed. The output may be yes or no, and if yes, the amount that person should receive. Or, if the input was a loan of $10,000, it could instead predict how likely that individual, given a series of input variables, is to repay. Since a decision is reached based on vast amounts of training data, it is extremely difficult, if not impossible, to be completely transparent about how a particular conclusion was drawn. That is the beauty of AI as well as its curse: a machine can take in far more data than one person could ever process and draw inferences no human could draw alone. But this is not good enough. Someone must be able to understand the algorithms themselves and have a general understanding of how the machine would arrive at a given conclusion. Otherwise, all human agency is removed from the equation, and we become a society no longer in control of our own destiny, a reality few would ever accept.
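To make the loan example concrete, here is a minimal sketch, with made-up features and hand-picked weights, of how such a system might score an applicant and how reporting per-feature contributions can restore a measure of transparency:

```python
import math

# A toy sketch of an automated loan decision. The features and weights
# are made up for illustration; a real model would learn them from data.
WEIGHTS = {"income_k": 0.04, "credit_score": 0.01, "debt_k": -0.06}
BIAS = -8.0

def repayment_probability(applicant: dict) -> float:
    """Logistic score: estimated probability the applicant repays."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score -- one simple way to give a
    human reviewer insight into why the model leaned the way it did."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income_k": 60, "credit_score": 700, "debt_k": 20}
print(f"repayment probability: {repayment_probability(applicant):.2f}")
print("contributions:", explain(applicant))
```

A linear model like this is easy to explain; the difficulty the text describes arises with deep models whose internal weights defy any such simple decomposition.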
From this point of view, the data that goes into these models will clearly have a strong influence on the outputs any AI system produces. If the data is narrow and shallow in its representation, then many factors and considerations will not be included in the final decision. As the saying goes, a model is only as good as the data that goes into it. Therefore, to reach more equitable decisions that represent more than a small set of scenarios, two things should happen: 1) the data used should represent as many social, economic, and political groups as possible, and 2) those deciding what data is used should be reflective of those ultimately affected by the outcomes. For example, suppose a 25-year-old recent college graduate, with little to no credit history and only student loans on his record, applies for a loan to buy an apartment so he can relocate and start a new job. If the decision is made using data that does not adequately represent people like him, then any conclusion drawn about his ability to pay it off will be made using biased data.
As AI is rolled out, it should be done so responsibly. That means that, just like any other decision-making entity, it should be subject to oversight and accountability. For example, say an autonomous vehicle ends up hitting a pedestrian jaywalking across the street. Even though the algorithm was written to teach the car to avoid hitting humans, where do you place the blame when it makes a mistake? Do you blame the algorithm? The carmaker? The engineers or scientists who wrote the algorithm? Those who collected the data? There must be a consensus around how to assign responsibility and accountability in a world where decisions are automated and carried out by machines.
Closely related to the importance of inclusion is the value of impartiality and fairness. Systems must be implemented so that they treat everyone equally, which means removing bias. That begins with adequate inclusion measures ensuring all demographics are involved in creating the algorithms and assembling the training data, which helps keep bias from entering the system. But what about how the insights are applied? Take, for example, an AI system responsible for writing a headline for a newspaper article. Trained on biased data, the system might output, “Racist policy leads to reallocation of government funds.” A more impartial output would be something along the lines of, “Government set to defund after-school programs at inner-city schools.” The goal of AI should be to eliminate as much bias as possible.
Systems that are transparent, inclusive, accountable, and impartial should, naturally, also be consistent in their results and accuracy. When deploying AI systems, people need confidence that the consistent results produced during testing will be reproducible after the model is pushed to production. For example, we need confidence that the systems deployed in autonomous vehicles will not perform well one day and then sub-optimally the next.
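In practice, one small ingredient of this kind of consistency is determinism: runs that start from the same seed should produce identical results. The toy "model" below is an invented illustration of that idea, not a real training pipeline:

```python
# Hypothetical sketch: checking run-to-run reproducibility by
# fixing the random seed. The toy scoring function is illustrative only.
import random

def train_and_score(seed):
    rng = random.Random(seed)                      # seeded RNG -> deterministic run
    weights = [rng.uniform(-1, 1) for _ in range(10)]
    data = [(x, 0.5 * x) for x in range(10)]
    # "score": mean absolute error of a toy one-weight-per-point guess
    return sum(abs(w * x - y) for (x, y), w in zip(data, weights)) / len(data)

test_score = train_and_score(42)   # result from the "testing" environment
prod_score = train_and_score(42)   # result from the "production" environment
assert test_score == prod_score    # same seed -> identical result
```

Real systems add many more sources of nondeterminism (hardware, parallelism, data drift), so seed-fixing is only a first step, but it makes regressions between testing and production detectable.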
Security and Privacy
Lastly, AI systems should be protected by proper data governance and management practices to ensure that data pipelines are secured and sensitive information is not compromised at any point in the system. Customers need assurance that their data is anonymized, stored safely, and never usable by malicious actors to hurt them. In addition, as data moves between systems, adequate security measures must protect against hackers looking to steal information, whether at rest in a data center or in transit between endpoints and the data center. The EU and China have taken the lead among global actors with the EU GDPR and China's national data privacy standard, both of which took effect in 2018.
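One common building block for the anonymization mentioned above is pseudonymization: replacing raw identifiers with stable, non-reversible tokens before data leaves an endpoint. The sketch below uses a salted keyed hash; the salt value and its storage are assumptions outside the scope of this illustration:

```python
# Hypothetical sketch: salted-hash pseudonymization of identifiers,
# so records can be joined without exposing raw identities.
# SALT is a placeholder; real systems keep it in a secrets store.
import hashlib
import hmac

SALT = b"replace-with-secret-salt"

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for an identifier."""
    return hmac.new(SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymize("user@example.com")
assert pseudonymize("user@example.com") == token       # deterministic
assert pseudonymize("other@example.com") != token      # distinct users differ
```

Because the same input always maps to the same token, datasets can still be linked for analysis, while anyone without the salt cannot recover or guess-and-check the original identifiers.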
The principles laid out in the Rome Call for AI Ethics exemplify the standards the international community is advocating to promote a safe way forward in AI development. Though its principles are not original, it is a premier document serving to ensure a more equitable future for the AI Economy.
Both the EU GDPR and the Rome Call for AI Ethics have laid the foundation for international standards around protecting individuals in the new age of AI, with the primary objective of those standards making their way into domestic laws and regulations, a process already underway.
For example, private companies such as Apple and Twitter have announced that they will apply at least some of the principles laid out in the EU GDPR to their customers in other parts of the world. Though these companies have a strong presence in the EU, their footprint is global. The symbolic significance of companies of their stature adopting the principles of the EU GDPR across that footprint is not to be overstated.
Also, the United States, a country very mature in its AI capabilities, is stepping up its advocacy for laws protecting its people against the risks of AI. In May 2018, the U.S. House of Representatives formed a dedicated task force, the House Financial Services Committee AI Task Force, to continue investigating and monitoring developments in the field of AI. In February 2020, during the task force's fifth hearing, its chairman, Rep. Bill Foster (D., Ill.), proposed two initiatives that would provide for additional government regulation of AI algorithms. The first was auditing AI algorithms, applied either directly to the algorithms or to their outputs. The second was that companies using AI perform self-tests of their systems against certain benchmarks, the results of which would be sent to regulators for review. This is in line with the United States' broader effort to foster public trust in AI: supporting research and development while setting guidelines for how AI should and should not be used. The National Institute of Standards and Technology (NIST) has actively sought to lead the charge against machine learning used to maliciously profile people and against adversarial machine learning, which can be used to undermine the integrity of machine learning systems. In addition, it has promoted training employees in the skills needed for an AI economy and advocated for the reliability of AI.
However, despite these measures, the United States lags behind other developed countries when it comes to protecting citizen data.
Similarly, China passed its own version of the EU GDPR in the Personal Information Security Specification, commonly referred to as "the Standard."
Such efforts will have to continue until such norms and institutions, and the enforcement mechanisms around them, are properly in place. It will take time, and many legal and compliance battles, to add clarity around the ambiguities and grey areas that remain uncontested. For example, many argue that large social media and tech companies impose an undue burden on users through a sense of "forced consent," subjecting them to an all-or-nothing situation in which they either surrender control of their data or are barred from using the company's site. Such a practice would violate the EU GDPR and result in hefty fines.
Enforcement will continue to be a primary hurdle in the time it takes for such principles and regulations to be instilled in governments and private institutions across the world. As the EU's dealings with its trading partners show, outside countries, governments, and private companies will have an incentive to abide by these regulations if they want access to users in a given market. They will be encouraged to go along with these standards or risk losing valuable business, either by being barred from a market or by being labeled as entities that do not value their users' rights to privacy and self-determination.
It may be a while until such norms take full effect. It will take time to ensure that transparency and accountability are adopted in the handling of all user data and in the use of algorithms in decision making. It will take time to eliminate bias resulting from a lack of diversity in the data science and engineering community. Until these prerequisites are met, the world of AI will remain bootstrapped, waiting for norms and institutions to catch up to the rapid pace at which this technology and all its potential has developed.
There is no question that the world is approaching another point in time where technology is poised to fundamentally change how we interact with one another and with the world around us. With improvements in processing power pushing past the limits of Moore's Law, conceived in the 1960s, coupled with the production of up to 1,000⁷ bytes of data each day, a number that will only increase from this point, scientists are primed to harness data to make better decisions. By learning from the extraordinary amount of quantitative information produced, AI algorithms will be able to make inferences that could not have been deduced before. The more data is produced, the more insights can be inferred from it. The more insights can be inferred, the more efficient business operations can become and the more products can be tailored to customers' liking, driving revenue upward. Those companies with the right data environment, companies built around their data rather than forcing data into existing structures, will ultimately reap the benefits of the coming AI Economy, an economy predicted to be worth $15.7 trillion by 2030. (Lee, 169)
But with decision making transferred more and more from humans to machines, special consideration must be given to a future where lives are determined more by calculations and less by ethical, emotional, and social consciousness. Just because we will be able to automate up to 50% of jobs using AI algorithms, should we? And if we do, how can we do so without compromising our morality in the process? A new technological aristocracy is in the making, one in which those who control the flow of data from input to output will control the very underpinnings of society. Eerily reminiscent of the Industrial Revolution, economic and social forces will drive the wedge even further between the top 1% benefitting from the gains in productivity and the other 99% simply living amongst them. The effects of such inequality would be unprecedented. Each of these considerations must be thoughtfully engaged with in order to integrate AI in a way that is ultimately beneficial for human development.
Bartlett, Robert; Morse, Adair; Stanton, Richard; Wallace, Nancy. (2019, November). "Consumer-Lending Discrimination in the FinTech Era". University of California, Berkeley Haas School of Business. https://faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf
Foote, Keith D. (2019, March 26). “A Brief History of Machine Learning”. https://www.dataversity.net/a-brief-history-of-machine-learning/#
Frankish, Keith; Ramsey, William M. (Eds.). (2014). The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.
Gilli, Andrea; Pellegrino, Massimo; Kelly, Richard. (2019). “Intelligent Machines and the Importance of Ethics”. The Brain and the Processor: Unpacking the Challenges of the Human-Machine Interaction. NATO Defense College. https://www.jstor.org/stable/resrep19966.11
Heckman, Jory (2019, August 22). “NIST sets AI ground rules for agencies without ‘stifling innovation’”. Federal News Network. Accessed: April 27, 2020. https://federalnewsnetwork.com/artificial-intelligence/2019/08/nist-sets-ai-ground-rules-for-agencies-without-getting-over-prescriptive/
Herhold, Kristen. (2019, March 27). “How People View Facebook After the Cambridge Analytica Data Breach”. The Manifest. Accessed: April 4, 2020. https://themanifest.com/social-media/how-people-view-facebook-after-cambridge-analytica-data-breach
Lee, Kai-Fu. (2018). AI Superpowers: China, Silicon Valley and the New World Order. Houghton Mifflin Harcourt Publishing Company.
McCormick, John. (2020, February 13). "House Lawmakers Discuss How to Curb Bias in AI". The Wall Street Journal. https://www.wsj.com/articles/house-lawmakers-discuss-how-to-curb-bias-in-ai-11581589802?mod=article_inline
McCormick, John. (2020, February 28). "Vatican Advisory Group Issues Call for AI Ethics". The Wall Street Journal. https://www.wsj.com/articles/vatican-advisory-group-issues-call-for-ai-ethics-11582893000
Sacks, Samm. (2018). “New China Data Privacy Standard Looks More Far-Reaching than GDPR”. Center for Strategic and International Studies. https://www.csis.org/analysis/new-china-data-privacy-standard-looks-more-far-reaching-gdpr