The Summer of Fake AI Novels

Some newspapers around the country, including the Chicago Sun-Times and at least one edition of The Philadelphia Inquirer, have published a syndicated summer book list that includes made-up books. Only five of the 15 titles on the list are real.

Ten of the books named on this reading list, attributed to Brit Bennett, Isabel Allende, Andy Weir, Taylor Jenkins Reid, Min Jin Lee, Rumaan Alam, Rebecca Makkai, Maggie O'Farrell, Percival Everett, and Delia Owens, do not exist. That doesn't mean the authors don't exist. For example, Percival Everett is a well-known author, and his novel James just won the Pulitzer Prize, but he never wrote a book called The Rainmakers, supposedly set in a "near-future American West where artificially induced rain has become a luxury commodity." Chilean American novelist Isabel Allende never wrote a book called Tidewater Dreams, which was described as the author's "first climate fiction novel."

Among the real titles: Ray Bradbury wrote the wonder-filled summer novel Dandelion Wine, Jess Walter wrote Beautiful Ruins, and Françoise Sagan wrote Bonjour Tristesse.

The list was part of licensed content provided by King Features, a unit of the publisher Hearst Newspapers, but the Sun-Times did not check it before publishing. So, who is most accountable for the error? Writer Marco Buscaglia has claimed responsibility. He says it was partly generated by AI and told NPR, "Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate, and I betrayed that trust. It's on me 100 percent."

The fake summer reading list is dated May 18, two months after the Chicago Sun-Times announced that 20% of its staff had accepted buyouts "as the paper's nonprofit owner, Chicago Public Media, deals with fiscal hardship."

Another question for educators is how, where, and why AI created this list. Where was it getting its information? How did it come up with detailed descriptions, such as a book set in a "near-future American West where artificially induced rain has become a luxury commodity," for books that do not exist? How and why did it "think" these were real books?

This sounds like a good AI lesson: have students investigate how AI operates and why it often creates "hallucinations" (plausible-sounding but false output).
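Below is a toy sketch of that lesson in Python. It is nothing like a real large language model internally, but it captures the core idea students need to see: a system trained only to produce statistically plausible word sequences will happily assemble fluent, title-shaped strings with no notion of whether the books exist. All of the training titles here are real; the generated ones may not be.

```python
# Toy "next-word" generator: learns which words follow which in a
# handful of real book titles, then samples new title-shaped strings.
# It optimizes for plausibility, not truth: the root of hallucination.
import random
from collections import defaultdict

real_titles = [
    "The Water Dancer",      # Ta-Nehisi Coates
    "Water for Elephants",   # Sara Gruen
    "The Night Circus",      # Erin Morgenstern
    "Circus of Wonders",     # Elizabeth Macneal
    "The Midnight Library",  # Matt Haig
]

# word -> list of words observed to follow it
next_words = defaultdict(list)
for title in real_titles:
    words = title.split()
    for a, b in zip(words, words[1:]):
        next_words[a].append(b)

def generate_title(max_len=5):
    """Sample one word at a time, loosely like next-token prediction."""
    word = "The"
    title = [word]
    for _ in range(max_len - 1):
        options = next_words.get(word)
        if not options:
            break
        word = random.choice(options)
        title.append(word)
    return " ".join(title)

for _ in range(5):
    # Some outputs reproduce real titles; the chain can just as easily
    # produce fabrications like "The Water for Elephants" or
    # "The Night Circus of Wonders".
    print(generate_title())
```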

Computers (and AI) Are Not Managers


Internal IBM document, 1979 (via Fabricio Teixeira)

I saw the quote pictured above, which goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document at IBM, the dominant computer company of that time. The reason given: a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way on certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

AI relies on data and patterns, but human management often involves unique situations with no clear precedents or data points, and many of those decisions require creativity and empathy.

Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for them. Relying on AI alone raises questions about responsibility when things go wrong. Who is responsible: the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire an AI, though we can switch to a different one, and revisions can be made to the AI itself to correct whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question, realizing that AI might have a bias in favor of AI. Here is a summary of the possibilities given:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends. (A toy sketch of this kind of anomaly detection appears after this list.)

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
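As a concrete example of the financial item above, here is a minimal sketch, with made-up numbers and an arbitrary threshold, of the kind of spending-anomaly check an AI system might run. The point is the division of labor: the software surfaces the outlier, and a human decides what it means.

```python
# Flag any month whose spend is more than two standard deviations from
# the mean. Real systems are far more sophisticated, but the shape of
# the decision support is the same. All figures are hypothetical.
from statistics import mean, stdev

monthly_spend = {
    "Jan": 41_200, "Feb": 39_800, "Mar": 40_500, "Apr": 42_100,
    "May": 40_900, "Jun": 73_400, "Jul": 41_700, "Aug": 40_200,
}

values = list(monthly_spend.values())
mu, sigma = mean(values), stdev(values)

for month, spend in monthly_spend.items():
    z = (spend - mu) / sigma
    if abs(z) > 2:
        # The AI flags the anomaly; a human investigates and decides.
        print(f"{month}: ${spend:,} looks anomalous (z = {z:.1f})")
```

Run on these numbers, only June gets flagged; every other month sits well within the normal range. Whether that spike is fraud, a one-time purchase, or a data-entry error is exactly the judgment call that stays with a person.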

As I write this in March 2025, the news is full of stories about DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency." AI for management will be more and more in the news and will be a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have a feeling that the question of whether AI belongs in management may be a moot point by then.

AI Agents


AI agents are a major focus for OpenAI, Google, and the other players. "AI agents" are software programs designed to perform specific tasks or solve problems by using artificial intelligence techniques. These agents can work autonomously or with minimal human intervention, and they're capable of learning from data, making decisions, and adapting to new situations.

Gartner suggests that agentic AI is the most important strategic technology for 2025 and beyond. The tech analyst predicts that, by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI, up from 0% in 2024. Does that excite or frighten you?

They can automate processes, analyze data, and interact with users or other systems to achieve specific goals. You probably already interact with them in assistant applications (Siri or Alexa), customer service chatbots, and recommendation systems (Netflix or Amazon). They may be less obvious when you use an autonomous vehicle or a financial trading system.

There are different types of AI agents, each with unique capabilities and purposes, so there are many categories into which we might place them. Here are some possible categorizations:

Reactive agents respond to specific stimuli and do not have a memory of past events. They work well in environments with clear, predictable rules.

Model-based agents have a memory and can learn from past experiences. They use this knowledge to predict future events and make decisions.

Goal-based agents are designed to achieve specific goals. They use planning and reasoning techniques to determine the best actions to take to reach their objectives.

Utility-based agents consider multiple factors and choose actions that maximize their overall utility or benefit. They can balance competing goals and make trade-offs.

Learning agents can improve their performance over time by learning from their experiences. They use techniques like machine learning to adapt to new situations and improve their decision-making abilities.
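To make the first two categories concrete, here is a minimal sketch of a reactive agent versus a model-based agent, using a hypothetical thermostat. The class names and the 20-degree setpoint are invented for illustration.

```python
# A reactive agent acts only on the current reading; a model-based agent
# keeps a memory of past readings and acts on the predicted trend.

class ReactiveThermostat:
    """Responds to the current temperature only; no memory."""
    def act(self, temp):
        return "heat on" if temp < 20.0 else "heat off"

class ModelBasedThermostat:
    """Remembers past readings and decides using the trend."""
    def __init__(self):
        self.history = []

    def act(self, temp):
        self.history.append(temp)
        trend = self.history[-1] - self.history[-2] if len(self.history) > 1 else 0.0
        predicted = temp + trend  # naive one-step forecast
        return "heat on" if predicted < 20.0 else "heat off"

reactive, model_based = ReactiveThermostat(), ModelBasedThermostat()
for temp in [22.0, 21.0, 20.2]:  # room is cooling but still above 20
    print(temp, "| reactive:", reactive.act(temp),
          "| model-based:", model_based.act(temp))
# At 20.2 degrees the reactive agent still does nothing, while the
# model-based agent turns the heat on because the trend predicts the
# room is about to drop below the setpoint.
```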

You could also categorize agents in other ways, for example in an educational context. For personalized learning, agents can adapt educational content to meet individual students' needs, learning styles, and pace. By analyzing data on students' performance and preferences, AI can recommend personalized learning paths and resources. In a related way, intelligent tutoring systems can provide one-on-one tutoring by offering explanations, feedback, and hints the way a human tutor might. They might even be able to create more inclusive learning environments by providing tools like speech-to-text, text-to-speech, and translation services, ensuring that all students have access to educational content. And by analyzing students' performance data, they could identify at-risk students and provide early interventions to help them succeed.
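Here is a tiny sketch of that at-risk idea, with an invented gradebook and invented thresholds: flag any student whose quiz average is low or whose scores are dropping fast, and leave the intervention itself to a human teacher.

```python
# Hypothetical gradebook; thresholds are arbitrary for illustration.
quiz_scores = {
    "Ana":   [88, 91, 85, 90],
    "Ben":   [75, 70, 62, 55],   # steady decline
    "Chloe": [58, 62, 60, 57],   # consistently low
}

AT_RISK_AVG = 65    # flag if the average falls below this
AT_RISK_DROP = 15   # flag if scores drop this much, first to last quiz

for student, scores in quiz_scores.items():
    avg = sum(scores) / len(scores)
    drop = scores[0] - scores[-1]
    if avg < AT_RISK_AVG or drop > AT_RISK_DROP:
        # The agent surfaces the pattern; the early intervention is human.
        print(f"{student}: average {avg:.0f}, drop {drop} -> check in")
```

With this data, Ben is flagged for the downward trend and Chloe for the low average, while Ana is left alone.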

AI agents can automate administrative tasks for faculty, such as grading, attendance tracking, and scheduling, freeing up educators' time to focus more on teaching and interacting with students.

Agents can "assist" in creating educational materials. I would hope faculty would closely monitor AI creation of tests, quizzes, lesson plans, and interactive simulations.

Though I see predictions of fully AI-powered virtual classrooms that can facilitate remote learning, I believe this is the most distant application, and probably the one that makes faculty most apprehensive.

Fear of Becoming Obsolete


The term FOBO, the fear of becoming obsolete, appeared in something I was reading recently. It is very much a workplace fear, generally connected to aging workers and anyone who worries about being replaced by technology.

Of course, AI is a large part of this fear, but it's not a new fear. Workers have always worried that they would become obsolete as they aged, especially if they did not have the skills that younger employees brought to the workplace. For at least two decades, we have heard predictions that robots would replace workers. That did happen, though not to the levels that were sometimes predicted. Artificial intelligence is less obvious as it makes inroads into our work and personal lives.

Employers and workers need to be better at recognizing the ways AI is already here and being used. Approximately four in ten Americans use Face ID to log into at least one app on their phone each day. That is about 136 million people. How many think about that as AI?

If you have an electric vehicle, AI-powered systems manage its energy output. In your gas-powered car, you very likely use an AI-powered GPS for navigation.

One survey I saw found that just 44 percent of the global workforce believe they interact with AI in their personal lives. But when people were asked about specific tools, the numbers told a different story:
GPS maps and navigation: 66 percent said yes.
Predictive product and entertainment suggestions (Netflix, Spotify): 50 percent.
Text editors and autocorrect: 47 percent.
Virtual home assistants (Alexa, Google Assistant): 46 percent.
Even chatbots like ChatGPT and Copilot, which are less hidden and more proactive for a user, had a 31 percent yes response.

Most of these are viewed as positive uses of AI, but not all of them are. One example in the not-so-positive category is AI's use in filling up newsfeeds. Each social media network (Facebook, Twitter, Instagram, et al.) has its own AI-powered algorithm, constantly customizing billions of users' feeds. You click a like button, or just pause on a post for more than a few seconds, and that information changes your feed accordingly. Plus, the algorithm is made to push certain things to users prompted not by your activity but by sponsors or owners. This aspect has been widely criticized since Elon Musk took over Twitter-X, but all the platforms do it to some degree.
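Here is a minimal sketch of that dynamic (every name, number, and weight is hypothetical): posts are scored by your learned topic affinity, and sponsored items get a flat boost that has nothing to do with your activity.

```python
# Toy feed ranking: engagement score plus a platform-set sponsor boost.
posts = [
    {"id": 1, "topic": "cooking",  "sponsored": False},
    {"id": 2, "topic": "politics", "sponsored": False},
    {"id": 3, "topic": "gadgets",  "sponsored": True},
]

# "Learned" from your likes and how long you paused on past posts.
your_affinity = {"cooking": 0.9, "politics": 0.4, "gadgets": 0.1}

SPONSOR_BOOST = 0.6  # set by the platform, not by your behavior

def score(post):
    s = your_affinity.get(post["topic"], 0.0)
    if post["sponsored"]:
        s += SPONSOR_BOOST
    return s

for post in sorted(posts, key=score, reverse=True):
    print(post["id"], post["topic"], round(score(post), 2))
# The sponsored gadgets post (0.7) outranks politics (0.4) despite your
# low interest in gadgets, purely because of the boost.
```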

Some common applications are both positive and negative. Take the use of artificial intelligence in airports all over the world, where it is being used to screen passengers passing through security checkpoints. At least 25 airports across the U.S., including Reagan National in Washington, D.C., and Los Angeles International Airport, have started using AI-driven facial recognition as part of a pilot project. Eventually, the Transportation Security Administration (TSA) plans to expand the ID verification technology to more than 400 airports. This can speed up your passage through security, which everyone would love to see, but what else is being done with that data, and will the algorithm flag people for the wrong reasons?

Do you want to push back on FOBO, particularly in the workplace? Some suggestions:
Continuous Learning: Stay curious and keep updating your skills. Whether it’s taking a course, attending workshops, or learning new technologies, continuous education is key.
Networking: Engage with your professional community. Networking can provide insights into industry trends and offer support and advice.
Adaptability: Embrace change and be open to new ideas. Flexibility can help you stay relevant.
Mindset Shift: Focus on your unique strengths and contributions. Everyone has something valuable to offer, and feeling obsolete often stems from undervaluing your skills.
Digital Detox: Sometimes, limiting your exposure to social media and other sources of comparison can reduce feelings of inadequacy.
Seek Feedback: Regularly seek feedback from peers, mentors, and colleagues to understand your areas of improvement and strengths.