AI Ethics Struggles With That Pervasive AI Optimization Mindset When It Comes To Devilishly Underplaying Or Outright Skirting Around Ethical AI Precepts Amidst Autonomous Systems

My code runs faster than your code.

My program takes up less memory than yours does.

The telltale undercurrent here is regarding optimization.

As I will in a moment be elucidating, a kind of optimization preoccupation is something that has repeatedly spurred a lot of handwringing about the tech industry all told. This same concern about being overly driven by optimization is also found in the realm of Artificial Intelligence (AI). The latest twist is that AI developers are often torn between abiding by AI Ethics and Ethical AI precepts and aiming for technologically optimized AI coding. For my ongoing and extensive coverage of AI Ethics, see the link here and the link here, just to name a few.

We will go on a journey herein that will examine how it is that AI developers are enticed into AI optimizations and how they are also pushed or at least drift away from Ethical AI considerations (if they even are aware of those valued AI Ethics matters). I seek to present crucial insights for those devising AI, and for those that are trying mightily to get Ethical AI into the heart and soul of AI development and AI promulgation. Indeed, this has vital ramifications for all of us, society included, due to the prevalence and rising adoption of AI throughout our daily lives.

Return to the comments at the opening as to lines of code and the amount of computer memory.

Those typical humblebrag showoff remarks arise whenever software developers get together or have a heady online conversation and opt to compare their cleverly devised computer programs. An underlying and recurring theme entails the revered significance of optimization. You see, the most important consideration is that my code is more optimized than your code. Either mine runs or executes faster than yours does, or mine uses less memory than yours does.

Other metrics of comparison are allowed too.

For example, I might insist that my code is written in fewer lines than yours. This implies that I was able to find optimizations to reduce the amount of code required, while still achieving the same degree of functionality. Once again, we are talking about optimization. The software engineers might not explicitly yell out the word “optimization”, but everyone knows that it is the key ingredient in the treasure hunt of writing wondrous code.

I want to emphasize that there is nothing inherently wrong with a desire for optimization. We can all be thankful for the role of optimization in the systems with which we interact. How many times have you gotten irked that a website took too long to load or that you were using an app that made you wait endlessly for it to calculate a vital number that you need to see? We all have.

To some extent, you can say that the website or the app wasn’t fully optimized. If it had been optimized, presumably your waiting time would have been a lot less. The program would have been coded in a manner to ensure a quicker response. That is the type of optimization that you would likely want to have eloquently performed by those devout in-the-weeds software developers.

Of course, part of the problem is that we need to ask what it is that is being optimized, along with questioning whether the optimization of one thing is going to potentially send something else out of whack. You cannot necessarily have your cake and eat it too, as it were.

Follow me on this logic.

A program is being written to try and figure out whether a loan applicant should be granted their loan request. Those wanting to get a loan will use the app by entering various personal data. After doing so, the app will use some secretive algorithm to ascertain mathematically whether the loan should be provided to the person.
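To make this concrete, here is a purely hypothetical and vastly simplified stand-in for such a secretive algorithm, sketched in Python (the factors, weights, and threshold are all invented for illustration and are not any real lender’s formula):

```python
# Hypothetical, oversimplified loan scorer -- not any real lender's algorithm.
def loan_decision(income, debt, credit_score, loan_amount):
    """Return True if the loan is granted, based on a crude illustrative score."""
    debt_to_income = debt / max(income, 1)
    score = (credit_score * 0.5) - (debt_to_income * 100) - (loan_amount / income) * 10
    return score > 280  # arbitrary cutoff chosen purely for illustration

print(loan_decision(income=80_000, debt=20_000, credit_score=700, loan_amount=250_000))
```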

As a software developer, suppose that I decide to optimize the code by focusing on the least number of lines of code required to perform this stipulated function. Maybe I am able to write this code in half the number of lines that someone else can do. I am proud of this result. I let my friends and acquaintances know that I did this coding by using some pretty darned ingenious coding trickery. Pat me on the back. Maybe I should get a prize or a trophy.

Unfortunately, in my impassioned focus on the number of lines of code, I neglected to consider how long the code takes to run. There is not necessarily any correlation between the size of the code (in terms of lines of code) and the runtime. A smaller-sized program might take as much time to run, or more, than one written with a lot more lines of code. I know this might seem counterintuitive at first glance, but it does make a lot of sense.

Let’s briefly explore this.

Imagine that I give you instructions on how to run around an Olympic running track. I tell you that all you need to do is put one foot in front of the other, as in the act of running, and follow the marked lines of the track. Thus, I’ve given you the “code” or set of instructions in just a handful of remarks or lines.

Voila, you are good to go.

A different coach comes along and starts to explain that when you reach the track curves, make sure to stay as close to the inner side of the curve as possible. Also, pace yourself by starting at a gradual run and save your big push of energy for the end. And so on.

Wait a second, I might say, this other coach is providing a lot more lines of code or instructions than I did. By my metric of optimizing for the fewest possible instructions, my instructions are surely “better” than those of this other coach. I assume that you can plainly see that though my instructions might be the shortest set, they aren’t likely to lend themselves to the most successful running of the track. If you follow the more embellished set of instructions, it seems likely that you will run the track in a faster time.

The gist is that a longer set of instructions or code might ultimately be faster or more expedient than a shorter set. I trust that explains what otherwise might have seemed counterintuitive. That being said, the same rule doesn’t hold all of the time and we need to realize that the opposite can also be true. If the coach had told you to hold your breath as you run or do a hoppity hop like a rabbit, I’d bet that this longer set of instructions is not going to help you win any footrace on the track.
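If you’d like to see that lesson in actual code, here is a tiny sketch of my own (not tied to any particular program): the two-line recursive version is the “shorter” program, yet the longer looping version finishes in a blink while the short one crawls.

```python
import time

def fib_short(n):
    # Barely two lines of logic, yet the runtime explodes exponentially.
    return n if n < 2 else fib_short(n - 1) + fib_short(n - 2)

def fib_longer(n):
    # More lines of code, yet it runs in linear time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for fn in (fib_longer, fib_short):
    start = time.perf_counter()
    fn(32)
    print(fn.__name__, "took", round(time.perf_counter() - start, 4), "seconds")
```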

Oftentimes the metrics used for the optimization of programming are aimed at technical merits rather than, shall we say, functionality merits.

The number of lines of code could be said to be a technical or technological factor. Same with the amount of memory space required for the code. Same too to some degree about the speed of the code, though this is certainly more arguable in that speed is something that those using the code are bound to directly experience.

One of the subfields of software development has to do with the user interface (UI) or what more modernly is known as the UX (user experience). This specialty tries to get developers to take into careful account the nature of how the app or system interacts with people. The hope is that rather than solely being preoccupied with the “internal” merits of the code and its technical wherewithal, the interface and how it interacts with people will get an equal footing.

Suppose that I am trying to write the least amount of code. Meanwhile, suppose that the interface is going to require a massive amount of code if I provide the “optimum” interface for this designated app. Which shall prevail? Do I give in and bloat my code? Or do I stick to my guns and make sure the code is one of brevity, even though this makes the interface suboptimal and much harder for people to use?

The odds are that many of those pure tech heads-down developers will go for the more technical or technologically revered metric and forsake the other, such as skirting around the interface for the benefit of achieving the least amount of code. This is a natural inclination. It tends to also be rewarded by their peers and gets more accolades in the halls of fellow software specialists.

I don’t want to seemingly be accusing all software developers of acting in this manner. That would be unfair. Many developers can see the forest for the trees. On the other hand, there are many that do not, and ergo they tend to focus somewhat myopically.

I also want to make sure we get onto the table another extremely crucial aspect. The leadership that is overseeing or guiding a development effort will have a tremendous influence on what the focus is for the developers. Pointing fingers only at the software engineers is an easy thing to do. I would suggest that this is done and overdone. The assumption is that the developers are somehow working in an utter vacuum and that no other influencing elements come to play. That is a rarity.

Allow me a moment to elaborate.

Several software developers are brought together in a tech startup that is devising an exciting new app that is being AI-embellished. The head of the startup is mainly concerned with getting the app into the marketplace at the soonest possible opportunity. Other competitors are racing to do the same. The first to the market will supposedly grab hold of the market. Those coming out afterward are claimed to become copycats and will have already lost the momentum that the early birds managed to capture.

This set of software developers is studiously and professionally conscientious about their coding. They want to make sure that the code runs well. Furthermore, they want the functionality as presented to the app users to be smooth and robust. Their inner core of professionalism has reached a point in their careers where they seek to optimize across a wide range of factors. No one factor alone is the key, and they realize that they need to take a macroscopic view as they craft the AI-related app.

Sounds great.

Enlightened developers. Willing to do a balanced job. They are seasoned enough to know that tough choices might need to be made. Overall, they are aiming to offset the usual optimization mindset as might be required to get a full-bodied AI-based app going.

Upon proceeding, the head of the startup realizes that the competition is apparently on the verge of getting their comparable apps into the marketplace. Hey, the head of the startup exclaims to the developers, we need to toss out something. This ship will sink if we don’t lighten the load.

Well, the developers look carefully at things and realize that they were going to be putting in place a lot of AI Ethics oriented guardrails into the app. This was intended to try and keep the app from veering into potentially untoward territory. If they leave those components out, the app could be “done” sooner, though it won’t have those Ethical AI elements included (I’ll be discussing with you momentarily what those Ethical AI components would be).

The head of the startup excitedly tells them to go ahead and omit the AI Ethics elements.

That’s the kind of “added stuff” that they can later on put into the app, the founder informs the software crew. No worries right now. Just get the app into the marketplace and they can all deal with any of this Ethical AI coding in a later version. The goal currently is to get the raw version 1.0 into the hands of users, while a future version 2.0 or version 3.0 can have the “niceties” such as those AI Ethics guardrails.

Based on the urging of the startup founder, the developers skip over the Ethical AI portions. They at least make note of what this will later consist of. The hope is that either they or someone else will eventually make sure that those parts get built and included.

Where shall we lay the blame in this instance of omitting the AI Ethics components?

The easiest finger-pointing would be at the developers. They messed up, one might say. In their crazed haste to get the app out the door, they neglected the Ethical AI elements. Shame on them! An outsider might criticize the developers as being shortsighted and wrongheaded in their development efforts.

Whoa, a retort goes, they were doing as they were told.

Let’s retrace what happened.

These developers had solidly in their minds the importance of including the AI Ethics portions. We need to give them due credit for that kind of foresight. Many developers would not have ever thought about it at all. Or some developers might realize belatedly, once the app is in the wild, that they should have done something from an Ethical AI perspective in the coding. Darn, they say to themselves, it just wasn’t on their minds at the time of initially constructing the app.

The developers were forced into skipping the AI Ethics components. The head of the startup directed them to do so. They are working for that person. This is the executive that calls the shots. Were they supposed to rebel against this vociferous command? They might lose their jobs. They might get a sour reputation if the head spreads the word in the developer community that they weren’t willing to do the job as prescribed. Etc.

You can mull this over.

Some would insist that the software developers had a greater duty to their professional mores. First, they should not have even suggested that the AI Ethics portions could be skipped. That was wrong to begin with. They should have insisted that those components are essential. Second, if they were told by the head of the startup to skirt around those portions, they should have refused to do so. In fact, if needed, they should have quit the company and stood on their principles.

That is quite a tall order for those developers.

Part of the difficulty too is whether they would have much of a leg to stand on.

What does that mean?

Well, there are researchers and others that believe that software developers should have to abide by a strict code of ethics. Though there is a generalized code of ethics available in this niche, there is no specific legal requirement for those to be followed per se. Unlike other areas of specialty such as say in certain areas of engineering or medicine, the field of software development is comparably a Wild West, some would critically proclaim. For my discussions on this, see the link here.

Software developers often find themselves between a rock and a hard place. They might be told to do something that they believe to be inappropriate, though not seemingly illegal, and they have to decide what action to take. When your job is at stake, this can be an agonizingly tough choice.

That being said, I do not want to paint a picture of all software developers being angels. Some will willingly cut corners. Some purposely cut corners. Some don’t even realize they are cutting corners and live in a blissful code-filled world all their own. A wide range exists.

The overarching concept is that much of the time a software development effort takes place within a larger context. You cannot exclusively look at just the software developers. What is the overall context? What is the role and influence of the leaders and managers? What other stakeholders have shaped the nature of the development? And so on.

I’ll add some more food for thought on this.

In the AI realm, there is a lot of attention on trying to devise the “best” AI that one can attain with today’s AI-building capabilities. An AI developer is likely to be thinking not only about whatever app they are building, but they are also typically desirous of pushing the boundaries of modern-day AI. In that sense, they might seek to optimize the AI parts of the app.

If the AI element of a loan granting program is the crucial optimizing factor for an AI software developer, they might be tempted to shortchange other portions of the program accordingly. They want the AI to attain some envisioned heightened performance characteristics. The rest of the app is not as important.

The interface is perhaps considered less important to them. The speed of the app is perhaps less important. Their primary focus is AI. If the app uses some nifty new breathtaking AI capability, one that they can tout to the AI community, this is the driving force for their efforts.

Again, I don’t want this to come across as though the AI software developer is somehow villainous or evil-minded and that they are preoccupied with the AI portion alone. Do not make this into a simplistic hero versus rogue anti-hero kind of portrayal.

There is a classic notion that comes to mind here. When you have a hammer, everything around you looks like a nail.

AI developers are probably going to be more inclined toward wanting to do the AI portions of the app than the other portions. You might of course have an entire team of software developers for which each has their own specialty. In that case, the AI developers are rightfully focusing on the AI since that is presumably why they are on the team.

Where the kicker comes to play is the role of AI Ethics.

Consider these important insights:

  • If an AI developer is unaware of Ethical AI precepts, they are presumably not going to be including those precepts in their AI optimization pursuits since they don’t even realize the need to do so.
  • If an AI developer is aware of Ethical AI precepts but not familiar with how to turn those into actual coding, they are presumably not going to include those precepts in their code due to the gap or hurdle to figuring out how to do so.
  • If an AI developer is aware of Ethical AI precepts and wants to include those capacities, and they know how to code it, they still might be steered away by limitations placed upon them by whoever is directing the software development all told.
  • If an AI developer is aware of Ethical AI precepts and knows how to code it, but they believe it to be a low priority or that it isn’t considered collegially valued, they might choose to omit or skirt around it.
  • If an AI developer is aware of Ethical AI precepts and knows how to code it, they might undertake a token inclusion to say that they did so, though they know in their heart that they gave it short shrift.

We can obviously devise more of those types of variations.

Before getting into some more meat and potatoes about the wild and woolly considerations underlying AI optimization, let’s establish some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated matter when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As the saying goes, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
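To make that flow tangible, here is a minimal sketch (my own illustration, using scikit-learn and made-up data, neither of which is specifically prescribed here): historical decisions go in, mathematical patterns come out, and those patterns are then applied to brand-new applicants.

```python
# A bare-bones illustration of computational pattern matching on hypothetical loan data.
from sklearn.linear_model import LogisticRegression

# Historical data: [annual_income_in_thousands, debt_to_income_ratio]
# paired with past human loan decisions (1 = granted, 0 = denied).
X_history = [[85, 0.20], [30, 0.55], [60, 0.30], [25, 0.60], [95, 0.15], [40, 0.50]]
y_history = [1, 0, 1, 0, 1, 0]

# The model mathematically mimics whatever patterns lurk in the historical decisions,
# including any untoward biases that the human decision-makers baked into them.
model = LogisticRegression().fit(X_history, y_history)

# New applicants are judged via the patterns found in the "old" data.
print(model.predict([[70, 0.25], [28, 0.58]]))
```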

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
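One way to at least begin probing for such buried biases is to compare outcomes across groups after the fact. Here is a minimal sketch of my own (one common rule of thumb, not a definitive or sufficient test) that applies the so-called four-fifths guideline to approval rates:

```python
# Compare approval rates across two groups; a ratio below ~0.8 is a classic warning sign.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower approval rate to the higher one."""
    rate_a = approval_rate(decisions_group_a)
    rate_b = approval_rate(decisions_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = approved, 0 = denied, as produced by an ML/DL model on held-out applicants (made-up data).
group_a = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% approved

print(disparate_impact_ratio(group_a, group_b))  # 0.375 -> well below the 0.8 threshold
```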

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in, biases-out, whereby biases insidiously get infused and end up submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s return to our focus on AI optimization mindsets.

First, we need to explore some rules of thumb about optimizers overall, especially those in the techie realm such as in AI.

When you are a high-tech optimizer, you tend to let the optimization mantra permeate all that you do. Each confronted problem becomes a mentally self-controlled matter of finding the one specific factor to optimize, and then moving heaven and earth to attain that particular optimization. Regrettably, the optimization zeal overpowers any other sensibility or logic that ought to arise during the problem-solving process.

Here are some of the disconcerting troubles that can arise:

  • Tends to gravitate to purely technical metrics for their optimization focus (being the easiest, most obvious, accepted by tradition, etc.)
  • Often fixates on a singular metric for optimization (all others being assumed less vital)
  • Is driven at times by peer convention and pronounced comparatives
  • Is typically trained or educated to treat the chosen metric as a keystone
  • Is unable or unwilling to incorporate, or unfamiliar with incorporating, multiple metrics at once
  • Fails to discern downsides and problematic results arising from the optimization mindset
  • Struggles immensely when having to deal with tradeoffs among multiple metrics and ergo stubbornly clings to the singular optimization aspiration

We next examine how this arises in the context of AI optimization and especially so regarding the crucial role of AI Ethics.

Suppose an AI developer is concentrating on getting an AI search technique to run in the least possible timeframe or take the shortest feasible path. This is assuredly a reasonable context for employing an optimization mindset. Nothing wrong so far with this desire.

But, while doing so, they are bound to find themselves overlooking any kind of AI Ethics related considerations. For example, the AI developer might discover that by using gender or race as a parameter in the data structure of relational data elements, they can dramatically speed up the AI search. To them, this is exciting news since it is a means to garner the AI optimization that they so profoundly wish to achieve.

The idea that race or gender might be highly questionable factors to be used in AI optimization is likely not given much weighty thought if any at all. The data is the data. The factors or parameters are the parameters. How those connect to real-world matters is somewhat swept aside. It is relatively easy to become so deeply immersed in your AI work that the data loses its external sensibility. The fact that race and gender are quite societally sensitive factors is just not at the top of mind.

Some have claimed that AI developers are intentionally apt to use questionable factors when seeking AI optimizations. Though this might occur, it seems doubtful that AI developers across the board explicitly take such a route. The more likely scenario consists of being in such hot pursuit of optimization that the meaning of the factors being used is neglected or not immediately grasped as problematic.

As mentioned earlier, the other angle is that in the case of machine learning or deep learning, the AI developer in a sense lets the algorithm choose whatever is computationally most conducive to optimization. The AI developer might become celebratory that the ML/DL has done so, though not realize that within the morass a mathematical reliance on (for example) gender or race has occurred.

From an AI Ethics precepts perspective, the AI developer ought to try and ferret out whether such a computational reliance has taken place. I realize that some will protest that this can be extremely hard to ferret out, but that doesn’t give one the clearance to not even try. This also then takes us to other Ethical AI considerations, such as transparency and interpretability, which I’ve discussed at the link here.
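As a rough illustration of the kind of ferreting out I'm alluding to, one could shuffle a suspect input column and watch how much the model's outputs move; a large shift hints that the model is leaning on that factor or a close proxy of it. This is merely a sketch under assumed conditions (a scikit-learn-style model exposing a predict method, with hypothetical variable names), not a substitute for rigorous interpretability tooling.

```python
import random

def reliance_score(model, rows, column_index, trials=20):
    """Fraction of predictions that flip when one input column is randomly shuffled."""
    baseline = list(model.predict(rows))
    flips = 0
    for _ in range(trials):
        shuffled = [list(row) for row in rows]
        values = [row[column_index] for row in shuffled]
        random.shuffle(values)
        for row, value in zip(shuffled, values):
            row[column_index] = value
        flips += sum(b != p for b, p in zip(baseline, model.predict(shuffled)))
    return flips / (trials * len(rows))

# Hypothetical usage: flag heavy reliance on column 2 (say, a gender or race encoding).
# if reliance_score(model, applicant_rows, column_index=2) > 0.1:
#     print("Warning: predictions depend strongly on a sensitive factor or its proxy")
```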

If AI Ethics is outside of the AI developer mindset about AI optimization, you can assuredly bet that AI Ethics will always remain second fiddle. The aim then is to get AI developers that are embracing AI optimization to enlarge their AI optimization worldview to include AI Ethics. We must get AI Ethics into the AI optimization rubric.

I share here my earnest recommendations on this, doing so as a set of affirmations, admonitions, and amplifications.

Here they are.

1) Affirmations:

  • AI Ethics is in fact wholly integral to AI optimization
  • AI optimization must fully encompass AI Ethics

2) Admonitions:

  • Do not allow AI Ethics to fall outside of AI optimization
  • Do not allow AI optimization to omit AI Ethics

3) Amplifications:

  • If you are doing AI optimization without AI Ethics, something has gone manifestly wrong
  • If you superficially include AI Ethics into AI optimization, something has gone manifestly wrong

AI developers that are hardcore skeptics or cynics are likely to argue that this attempt to “force fit” AI Ethics into the AI optimization arena is plainly mistaken and outright reflects some kind of softheaded belief that Ethical AI matters.

By and large, those AI developers are a train wreck waiting to happen, though they don’t know it.

At some point, they are going to commit some egregious AI Ethics transgression in their AI system. This in turn will potentially be hidden from view at first, yet sits there like a ticking timebomb. Eventually, the matter gets exposed. Consumers get financially or otherwise harmed via the lack of adherence to proper Ethical AI. The company that devised the AI gets dragged into court. Lawsuits go flying. Reputations get damaged. Firms go bankrupt. Criminal charges might arise. Other calamities ensue.

One supposes the AI developer might have moved on and ended up avoiding the traumas that their AI Ethics transgression produced. There is though a solid chance that they too will one way or another get caught up in the exposures once the matters get revealed.

AI developers are going to increasingly be expected to have sufficient AI Ethics awareness and familiarity under their belt. Indubitably, some will be dragged into the Ethical AI space as they wildly kick and scream to avoid it. Others will with open arms welcome the AI Ethics matters, especially since they likely wanted to include this all along (the difficulty was that their peers didn’t give it any weight, or the leaders and managers gave Ethical AI little attention, see my discussion at the link here).

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI.

Besides employing AI Ethics, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here and the link here.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI optimization mindsets, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Optimization Mindsets

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

Let’s sketch out a scenario that might leverage AI optimization considerations.

Contemplate the seemingly inconsequential matter of where self-driving cars will be roaming to pick up passengers. This seems like an abundantly innocuous topic.

At first, assume that AI self-driving cars will be roaming throughout entire towns. Anybody that wants to request a ride in a self-driving car has essentially an equal chance of hailing one. Gradually, the AI begins to primarily keep the self-driving cars roaming in just one section of town. This section is a greater money-maker and the AI has been programmed to try and maximize revenues as part of the usage in the community at large (this underscores the mindset underlying optimization, namely focusing on just one particular metric and neglecting other crucial factors in the process).

Community members in the impoverished parts of the town turn out to be less likely to be able to get a ride from a self-driving car. This is because the self-driving cars were farther away, roaming in the higher-revenue part of town. When a request comes in from a distant part of town, any other request from a closer location would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town is nearly impossible, exasperatingly so for those living in those now resource-starved areas.

Out goes the vaunted mobility-for-all dreams that self-driving cars are supposed to bring to life.

You could assert that the AI altogether landed on a form of statistical and computational bias, akin to a form of proxy discrimination (also often referred to as indirect discrimination). Realize that the AI wasn’t programmed to avoid those poorer neighborhoods. Let’s be absolutely clear about that in this instance. No, it was devised instead to merely optimize revenue, a seemingly acceptable goal, but this was done without the AI developers contemplating other potential ramifications. That optimization in turn unwittingly and inevitably led to an undesirable outcome.

Had they included AI Ethics considerations as part of their optimization mindset, they might have realized beforehand that they needed to craft the AI to cope with this kind of over-reliance on one metric alone, and thereby averted such dour results. For more on these types of issues that the widespread adoption of autonomous vehicles and self-driving cars are likely to incur, see my coverage at this link here, describing a Harvard-led study that I co-authored on these topics.
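To see the remedy in miniature, consider this sketch (entirely my own illustration with invented numbers, not how any actual fleet is dispatched): a revenue-only objective keeps sending cars downtown, while folding even a simple waiting-time term into the very same optimization pulls cars back toward the underserved areas.

```python
# Hypothetical pickup zones with made-up expected fares and current passenger wait times.
zones = {
    "downtown":  {"expected_fare": 24.0, "avg_wait_minutes": 3},
    "suburb":    {"expected_fare": 15.0, "avg_wait_minutes": 9},
    "outskirts": {"expected_fare": 11.0, "avg_wait_minutes": 27},
}

def revenue_only(zone):
    # The single-metric mindset: chase the highest expected fare, nothing else.
    return zone["expected_fare"]

def revenue_with_equity(zone, wait_weight=0.6):
    # Long waits raise a zone's priority, pulling cars back toward underserved areas.
    return zone["expected_fare"] + wait_weight * zone["avg_wait_minutes"]

for objective in (revenue_only, revenue_with_equity):
    best = max(zones, key=lambda name: objective(zones[name]))
    print(objective.__name__, "->", best)
```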

Conclusion

A longstanding piece of wisdom in the computer field is this: “If you optimize everything, you will always be unhappy.”

This remark was starkly stated by the esteemed computer scientist Donald Knuth long ago, wisely warning software developers to be wary of embracing optimization blinders when doing systems design and development. The thing is, techies are ingrained in optimization, and getting them to somehow break the optimization habit is nearly impossible to do. Anyone trying to get optimization myopia to summarily be wrung out of AI developers is going to face a hugely uphill battle.

We can try a different tactic.

Consider a famous adage that provides further insight.

Are you ready?

If you can’t beat them then you ought to join them.

By this, I mean to suggest that if we are to assume that optimization as a default mantra is going to nonetheless occur, no matter what intervention might be tried, we need to acquiesce rather than fight this overwhelmingly intrinsic urge.

In that case, let’s make sure that the optimization bubble contains whatever we also want to be included in the optimization myopia. You see, by aiming to get AI Ethics infused into the optimization mindset, those precepts will become part and parcel of what needs to be optimized. This will at least put on somewhat equal footing the Ethical AI factors underlying how AI is going to be devised. AI Ethics grandly becomes another element worthy of due consideration. It has been absorbed into the optimization mentality.

Let’s give it a go.

We might have to live with the proverb that a zebra can’t change its stripes, i.e., AI developers are in the main steeped in optimization mania. When it comes to AI, we can perhaps guide the attention of the zebra toward semi-naturally embracing the belief that any optimal AI is one that optimally incorporates AI Ethics.

Welcome and say hello to optimizing on and conjoining with the precepts of AI Ethics.
