Are We Any Closer To Quantum Computing?

quantum computer imagined

In 1981, American physicist and Nobel Laureate Richard Feynman gave a lecture at the Massachusetts Institute of Technology (MIT) in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. Quantum computing was born. The illustration here shows what one might have imagined it to be back in 1981 - a kind of science-fiction computer.

Quantum computing is a revolutionary area of computing that uses the principles of quantum mechanics to process information in fundamentally different ways than classical computers. In classical computing, information is processed using bits, which are binary and can represent either a 0 or a 1. In quantum computing, however, the fundamental units of information are called qubits. Qubits can exist in a state of 0, 1, or both simultaneously, thanks to a quantum property called superposition. This allows quantum computers to perform multiple calculations at once.
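
To make superposition a bit more concrete, here is a minimal sketch in Python using NumPy (no quantum hardware or quantum library assumed): a qubit's state is just a two-number vector, a Hadamard gate puts it into an equal mix of 0 and 1, and the squared amplitudes give the odds of reading each value.

```python
import numpy as np

# A qubit's state is a 2-component complex vector: [amplitude of 0, amplitude of 1].
zero = np.array([1, 0], dtype=complex)    # the classical-like state |0>

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ zero                     # amplitudes (1/sqrt(2), 1/sqrt(2))

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(superposed) ** 2
print(probs)                              # ~[0.5, 0.5]: equally likely to read 0 or 1
```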

I am not a physicist or computer engineer, so I don't want to go too deeply into that realm. Reading about this, I see the word "entanglement" and have some memory of Einstein referring to quantum entanglement as "spooky action at a distance." He was skeptical of it because it seemed to defy the principles of classical physics and his theory of relativity, but modern experiments have confirmed its existence and shown that it is a fundamental aspect of quantum mechanics. In quantum computing, entanglement creates strong correlations between qubits, even when they are far apart.

Entanglement enables quantum computers to solve certain types of complex problems much faster than classical computers by leveraging these interconnected qubits. Quantum computers are particularly well-suited to tasks involving massive datasets, optimization problems, simulations, and cryptography. However, they are still in their early stages of development and face challenges such as error rates, stability, and scalability.
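
To give a rough feel for those correlations, here is another small NumPy sketch (again purely illustrative, not real quantum hardware): a two-qubit Bell state whose simulated measurements only ever come out 00 or 11, so the two qubits always agree.

```python
import numpy as np

# Two-qubit states live in a 4-component vector ordered as |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # the Bell state (|00> + |11>)/sqrt(2)

probs = np.abs(bell) ** 2          # probabilities of the four measurement outcomes
rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(outcomes)                    # only "00" and "11" ever appear: the qubits are perfectly correlated
```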

In the same way that AI is already in your daily life - even if you don't notice or acknowledge it - quantum computing could be used in everyday activities. It could revolutionize drug discovery and personalized medicine by simulating molecular interactions at an unprecedented speed, leading to faster development of cures and treatments. By solving complex optimization and learning problems, quantum computers could significantly enhance AI's capabilities, leading to smarter assistants and systems.

In cryptography and cybersecurity, current encryption methods could be broken by quantum computers, but quantum technology could also enable quantum-safe encryption, making online transactions and communications more secure. There's good and bad in almost every discovery.

In logistics, from smarter traffic systems to more efficient delivery routes, quantum computing could optimize operations, reducing fuel consumption, travel times, and costs.

And quantum computing could improve energy solutions, financial modeling, material design, and many things we haven't even considered yet.

Of course, there are challenges. Qubits are highly sensitive to their environment. Even minor disturbances like temperature fluctuations, vibrations, or electromagnetic interference can cause qubits to lose their quantum state—a phenomenon called decoherence. Maintaining stability long enough to perform calculations is a key challenge. Many quantum computers require extremely low temperatures (close to absolute zero) to operate, as qubits need highly controlled environments. Building and maintaining these cryogenic systems is both expensive and challenging.

Small-scale quantum computers exist, but scaling up to thousands or millions of qubits is a monumental task and requires massive infrastructure, advanced error correction mechanisms, and custom hardware, making them cost-prohibitive for widespread adoption.

On the education side of this, quantum computing sits at the intersection of physics, engineering, computer science, and more. A lack of cross-disciplinary expertise will slow down progress in this field.

Computers (and AI) Are Not Managers


Internal IBM document, 1979 (via Fabricio Teixeira)

I saw the quote pictured above that goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document at the big computer player of that time, IBM. The reasoning behind that statement is that a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way with certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

AI relies on data and patterns, but human management often involves unique situations where there are no clear precedents or data points, and many decisions require creativity and empathy.

Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for these decisions. Relying on AI alone could raise questions about responsibility when things go wrong. Who is responsible - the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire AI, though we could change the AI we use, and revisions can be made to the AI itself to correct for whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question. (Realizing that AI might have a bias in favor of AI.) Here is a summary of the possibilities given:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending (see the sketch after this list), and provide investment recommendations based on market trends.

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
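
As one concrete illustration of the "detect anomalies in spending" item above, here is a deliberately simple Python sketch using a z-score; the spending figures and the threshold are invented, and a real system would be far more sophisticated.

```python
import statistics

# Hypothetical monthly spending figures (in dollars) for one budget line.
monthly_spend = [10200, 9800, 10100, 9950, 10300, 10050, 18700, 10150]

mean = statistics.mean(monthly_spend)
stdev = statistics.stdev(monthly_spend)

# Flag any month whose spend is more than 2 standard deviations from the mean.
for month, amount in enumerate(monthly_spend, start=1):
    z = (amount - mean) / stdev
    if abs(z) > 2:
        print(f"Month {month}: ${amount:,} looks anomalous (z = {z:.1f})")
```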

As I write this in March 2025, the news is full of stories of DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency."  AI for management is an area that will be more and more in the news and will be a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have the feeling that the question of whether or not AI belongs in management may be a moot point by then.

Ghost Students

Ghost students, as their name implies, aren’t real people. They are not spectral visions. Had you asked me earlier to define the term, I would have said it is a way to describe a student who is enrolled in a college or university but does not actively participate in classes or academic activities. However, these new ghosts are aliases or stolen identities used by scammers and the bots they deploy to get accepted to a college, but not for the purpose of attending classes or earning a degree. Why? What's the scam?

These students may not attend lectures, complete assignments, or engage in the regular responsibilities expected of them, yet they are still listed as part of the institution's enrollment. In some cases, ghost students may be enrolled for reasons such as maintaining financial aid, benefiting from certain privileges, or fulfilling scholarship requirements. Alternatively, the term can sometimes refer to students who may be technically registered but are not engaging with the academic community in a meaningful way.

But more recently, I have seen the definition of a ghost student include when a fraudster completes an online application to a college or university and then, once accepted, enrolls in classes. At that point, the fraudster behind the ghost student can use the fake identity to act like a regular student. He or she can access and abuse cloud storage provided by the institution, or use a college-provided VPN or .edu email address to perpetrate other scams. In the most serious cases, a ghost student’s new enrollment status may be used to apply for and receive thousands of dollars in financial aid.

Institutions targeted by these scams can face consequences ranging from minor inconveniences to significant financial burdens. Ghost students may disrupt campus operations by occupying spots meant for qualified applicants or prompting schools to add course sections for high-demand classes, only for those seats to go unused. Once the issue is identified, colleges must invest substantial time and effort into carefully reviewing applications and monitoring student activity, placing a heavy burden on admissions officers, faculty, IT teams, and other staff.

I read about an extreme example from California’s Pierce College, where enrollment dropped by almost 36 percent — from 7,658 students to 4,937 — after ghost students were purged from the rolls.

If ghost students secure financial aid, often through federal Pell grants, it diverts funds from legitimate applicants and taxpayers. Their presence also strains admissions and IT teams. Additionally, if granted email accounts and access to instructional technology platforms, ghost students can overwhelm data centers and pose serious security risks, increasing vulnerabilities for institutions already targeted by cybercriminals.

Making the application process more rigorous is the most direct way to limit the presence of ghost students. But for many institutions, especially two-year colleges, that approach is antithetical to the college’s mission and desire to offer easier access to higher education. In addition, with enrollment still a major concern for all types of institutions, anything that limits the pool of potential students is a nonstarter.

What Happened to the Internet of Things?

IoT uses

IoT applications

After writing here about how the Internet and websites are not forever, I started looking at some old posts that perhaps should be deleted or updated. With 2200+ posts here since 2006, that seems like an overwhelming and unprofitable use of my time. Plus, maybe an old post has some historical value. But I do know that there are older posts that have links to things that just don't exist on the Internet anymore.

The last post I wrote here labeled "Internet of Things" (IoT) was in June 2021. IoT was on Gartner's trends list in 2012, and my first post about IoT here was in 2009, so I thought an update was due.

When I wrote about this in 2014, there were around 10 billion connected devices. By 2024, the number had increased to over 30 billion devices, ranging from smart home gadgets (e.g., thermostats, speakers) to industrial machines and healthcare devices. Platforms like Amazon Alexa, Google Home, and Apple HomeKit provide hubs for connecting and controlling a range of IoT devices.

The past 10 years have seen the IoT landscape evolve from a collection of isolated devices to a more integrated, intelligent, and secure ecosystem. Advancements in connectivity, AI, edge computing, security, and standardization have made IoT more powerful, reliable, and accessible, with applications transforming industries, enhancing daily life, and reshaping how we interact with technology. The number of connected devices has skyrocketed, with billions of IoT devices now in use worldwide. This widespread connectivity has enabled smarter homes, cities, and industries.

IoT devices have become more user-friendly and accessible, with smart speakers, wearables, and home automation systems becoming commonplace in households. If you have a washing machine or dryer that reminds you via an app about its cycles, or a thermostat that knows when you are in a room or on vacation, then IoT is in your home, whether you use that term or not.

Surveying the topic online turned up a good number of things that have pushed IoT forward or that IoT has pushed forward. Most recently, I would say that the 3 big things that have pushed IoT forward are 5G and advanced connectivity, the rise of edge computing, and AI and machine learning integration:

Technological improvements, such as the rollout of 5G networks, have greatly increased the speed and reliability of IoT connections. This has allowed for real-time data processing and more efficient communication between devices.

Many IoT devices now incorporate edge computing and AI to process data locally, reducing the reliance on cloud-based servers. This allows faster decision-making, less latency, and improved security by limiting the amount of data transmitted. IoT devices have increasingly incorporated AI and machine learning for predictive analytics and automation. This shift has allowed for smarter decision-making and automation in various industries, such as manufacturing (predictive maintenance), healthcare (patient monitoring), and agriculture (smart farming).
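
As a toy illustration of that local, edge-style processing (the readings and threshold are invented), a device might summarize its own raw sensor data and send only a small payload to the cloud instead of streaming every sample:

```python
# A toy sketch of edge-style processing: summarize locally, send only what matters.
raw_readings = [21.2, 21.3, 21.1, 27.8, 21.4, 21.2]   # invented temperature samples (deg C)

ALERT_THRESHOLD = 25.0   # made-up limit for "something unusual happened"

summary = {
    "avg": round(sum(raw_readings) / len(raw_readings), 2),
    "max": max(raw_readings),
    "alerts": [r for r in raw_readings if r > ALERT_THRESHOLD],
}

# Instead of uploading all raw samples, the device sends one small summary message.
print("payload to cloud:", summary)
```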

The integration of big data and advanced analytics has enabled more sophisticated insights from IoT data. This has led to better decision-making, predictive maintenance, and personalized user experiences.

One reason why I have heard less about IoT (and written less about it) is that it has expanded beyond consumer devices to industrial applications. I discovered a new term, the Industrial Internet of Things (IIoT), which includes smart manufacturing, agriculture, healthcare, and transportation, improving efficiency and productivity.

There are also concerns that have emerged. As IoT devices proliferate, so have concerns about security. Advances in cybersecurity measures have been implemented to protect data and ensure the privacy of users. The IoT security landscape has seen new protocols and encryption standards being developed to protect against vulnerabilities, with an emphasis on device authentication and secure communication.

The rollout of 5G has enhanced IoT capabilities by providing faster, more reliable connections. This has enabled more efficient real-time data processing for smart cities, autonomous vehicles, and industrial IoT applications, which can now operate at a larger scale and with lower latency.

IoT devices are now able to use machine learning and AI to learn from user behavior and improve their performance. For example, smart thermostats can learn a household’s schedule and adjust settings automatically, while security cameras can differentiate between human and non-human motion.
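
Here is a deliberately simplified sketch of what "learning a household's schedule" might look like; the occupancy data is invented and real smart thermostats use far more sophisticated models, but the idea is the same: estimate how often each hour is occupied and heat the hours that usually are.

```python
from collections import defaultdict

# Invented history: (hour of day, was someone home?) observations over past days.
history = [(7, True), (8, True), (9, False), (12, False),
           (7, True), (8, False), (18, True), (18, True), (22, True)]

# Count, for each observed hour, how often the home was occupied.
counts = defaultdict(lambda: [0, 0])          # hour -> [occupied count, total observations]
for hour, occupied in history:
    counts[hour][0] += int(occupied)
    counts[hour][1] += 1

# Pre-heat the hours that were occupied most of the time.
schedule = {hour: ("heat" if occ / total > 0.5 else "eco")
            for hour, (occ, total) in counts.items()}
print(schedule)   # e.g. {7: 'heat', 8: 'eco', 9: 'eco', 12: 'eco', 18: 'heat', 22: 'heat'}
```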

Edge computing has allowed IoT devices to process data locally rather than relying solely on cloud-based servers. This reduces latency and bandwidth usage, making it especially beneficial for time-sensitive applications like healthcare monitoring, industrial automation, and smart grids.

Despite the growth, the IoT market faces challenges such as chipset supply constraints, economic uncertainties, and geopolitical conflicts.