Computers (and AI) Are Not Managers

Internal IBM document, 1979 (via Fabricio Teixeira)

I saw the quote pictured above, which goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document at IBM, the big computer player of that time. The reason given for that statement: a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way with certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

AI relies on data and patterns, but human management often involves unique situations with no clear precedent or data to draw on, and many of those decisions require creativity and empathy.

Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for them. Relying on AI alone raises questions about responsibility when things go wrong. Who is responsible - the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire an AI, though we could switch to a different one, and the AI itself can be revised to correct for whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question (realizing that AI might have a bias in favor of AI). Here is a summary of the possibilities given, with a small illustrative sketch after the list:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends.

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
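To make one of those possibilities concrete, here is a minimal sketch of the kind of spending-anomaly detection mentioned under Financial Decisions. It is my own illustration, not something produced by the chatbots; the budget category, figures, and threshold are invented for the example.

from statistics import mean, stdev

def flag_spending_anomalies(history, threshold=2.0):
    # Flag amounts more than `threshold` standard deviations from the mean.
    # `history` is a list of (label, amount) pairs; the data below is invented.
    amounts = [amount for _, amount in history]
    mu, sigma = mean(amounts), stdev(amounts)
    return [(label, amount) for label, amount in history
            if sigma > 0 and abs(amount - mu) / sigma > threshold]

# Hypothetical monthly spending in one budget category.
monthly_spend = [("Jan", 10200), ("Feb", 9800), ("Mar", 10500),
                 ("Apr", 10100), ("May", 43000), ("Jun", 10300)]

print(flag_spending_anomalies(monthly_spend))  # -> [('May', 43000)]
# A human manager still decides what, if anything, to do about the flagged month.

Even a toy example like this shows the division of labor the 1979 memo had in mind: the program surfaces the pattern, but the manager owns the decision.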

As I write this in March 2025, the news is full of stories of DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency." AI for management is an area that will be more and more in the news and will be a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have the feeling that the question of whether AI belongs in management may be a moot point by then.

AI Is Not Your Friend

Though artificial intelligence is not your friend, it should not be considered solely your enemy. Like many technologies, it has its positive and negative aspects and applications.

Joaquin Phoenix getting friendly with an AI operating system named Samantha (voiced by Scarlett Johansson) in the film HER

Amber MacArthur wrote, "AI is not your friend. Any friend that stops working when the power goes out is a machine." She is at least partially referring to the idea of people becoming friendly with AI in the way we saw in the film HER. That film premiered more than a decade ago, and what it depicted now looks not only possible but already happening in many ways.

Amber had a longer post on LinkedIn that she excerpted in her newsletter. Here are a few of her observations: 

  • "AI-based social media platforms are not free speech platforms. These platforms curate, amplify, promote, and - yes - demote. Think about it like yelling in the public town square, but depending on what you say, Elon Musk's army of agents is there to either put a hand over your mouth to quiet you down or give you a megaphone to pump you up."
  • Schools should not ignore or ban all AI applications. "AI training in schools should be a priority since AI skills in the workplace are a priority. Kids who grow up in an age when they are taught that AI is only a threat and not also a tool will be at a competitive disadvantage."
  • On the negative side - "AI warfare is the most frightening reality of our time." And it is already here and guaranteed to increase.
  • On the positive side - "AI healthcare is the most exciting opportunity of our time."

She knows that her list is not definitive and admits that it is "fluid, so if there is something you would like me to add, please let me know on my socials or via email so I can check it out."

Bias in AI

What is that smart thermostat doing with our data? Can a thermostat have a bias?

I read an article from Rutgers University-Camden that begins by saying that most people now realize that artificial intelligence has become increasingly embedded in everyday life. This has created concerns. One of the less-discussed concerns is bias in its programming. Here are some excerpts from the article.

Some AI tasks are so innocuous that users don't think about AI being involved. Your email uses it. Your online searches use it. That smart thermostat uses it. Are those uses frightening? But as its capabilities expand, so does its potential for wide-ranging impact.

Organizations and corporations have jumped on the opportunity presented by AI and Big Data to automate processes and increase the speed and accuracy of decisions large and small. Market research has found that 84 percent of C-suite executives believe they must leverage artificial intelligence to achieve their growth objectives. Three out of four believe that if they don't take advantage of AI in the next five years, they risk going out of business entirely.

Bias can cause artificial intelligence to make decisions that are systematically unfair to particular groups of people, and researchers have found this can cause real harm. Rutgers-Camden researcher Iman Dehzangi says that "Artificial intelligence and machine learning are poised to create valuable opportunities by automating or accelerating many different tasks and processes. One of the challenges, however, is to overcome potential pitfalls such as bias."

What does biased AI do? It can give consistently different outputs for certain groups compared to others. It can discriminate based on race, gender, biological sex, nationality, social class, or many other factors.

Of course, it is human beings who choose the data that algorithms use, and humans have biases whether they are conscious of them or not.

"Because machine learning is dependent upon data, if the data is biased or flawed, the patterns discerned by the program and the decisions made as a result will be biased, too," said Dehzangi, pointing to a common saying in the tech industry: "garbage in, garbage out." “There is not a successful business in operation today that is not using AI and machine learning,” said Dehzangi. Whether it is making financial investments in the stock market, facilitating the product development life cycle, or maximizing inventory management, forward-thinking businesses are leveraging this new technology to remain competitive and ahead of the curve. However, if they fail to account for bias in these emerging technologies, they could fall even further behind, remaining mired in the flawed data of the past. Research has revealed that if care is not taken in the design and implementation of AI systems, longstanding social biases can be embedded in the systems' logic.

Tay: A Cautionary Tale of AI

Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It "has had a great influence on how Microsoft is approaching AI," according to Satya Nadella, the CEO of Microsoft.

Tay caused almost immediate controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter - a dangerous proposition.

It was named "Tay" as an acronym for "thinking about you." It was said that it was similar to or based on Xiaoice, a similar Microsoft project in China that Ars Technica reported that it had "more than 40 million conversations apparently without major incident".

Interestingly, Tay was designed to mimic the language patterns of a 19-year-old American girl and presented as "The AI with zero chill."

It was quickly abused when Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages, so that the bot began releasing racist and sexually charged messages in response to other Twitter users.

One artificial intelligence researcher, Roman Yampolskiy, commented that Tay's misbehavior was understandable because it mimicked the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary.

It was popular in its short life. Within 16 hours of its release, Tay had tweeted more than 96,000 times. That is when Microsoft suspended the account for "adjustments." Microsoft confirmed that Tay had been taken offline, released an apology on its official blog, and said it would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Then on March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. Given its freedom, Tay released some drug-related tweets, then became stuck in a repetitive loop of tweeting "You are too fast, please take a rest" several times a second. The posts appeared in the feeds of more than 200,000 Twitter followers.

Tay has become a cautionary tale on the responsibilities of creators for their AI.

In December 2016, Microsoft released Tay's successor, a chatbot named Zo, an English version of Microsoft's other successful chatbots Xiaoice (China) and Rinna (Japan).