Meta, Google, and Antitrust

I was working last week on a post about the early days of Facebook when the news hit that Meta CEO Mark Zuckerberg and former COO Sheryl Sandberg were testifying in an ongoing antitrust trial examining whether Meta monopolized the personal social networking market by acquiring Instagram and WhatsApp in 2012 and 2014. (see https://www.euronews.com/next/2025/04/17/metas-antitrust-trial...)

Then, I was diverted by another news story about a federal judge who ruled that Google violated antitrust laws by unlawfully dominating the online advertising market with its technology. The decision opens the door for U.S. prosecutors to seek a breakup of the $1.8 trillion tech giant's ad-tech business.

The court found that Google monopolized two key segments of the digital advertising ecosystem: tools used by publishers to manage ad space, and the platform that connects those publishers with advertisers. By tying the two products together, Google made it difficult for competitors to gain traction. A second hearing will determine what steps the company must take to restore competition.

The ruling follows a separate decision in August, in which another judge found that Google had illegally dominated the markets for online search and text advertising. Remedies in that case are still pending, though the government has proposed that Google divest its Chrome web browser.

Are We Any Closer To Quantum Computing?

quantum computer imagined

In 1981, American physicist and Nobel Laureate Richard Feynman gave a lecture at the Massachusetts Institute of Technology (MIT) in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. Quantum computing was born. The illustration here shows what one might have imagined it to be back in 1981 - a kind of science-fiction computer.

Quantum computing is a revolutionary area of computing that uses the principles of quantum mechanics to process information in fundamentally different ways than classical computers. In classical computing, information is processed using bits, which are binary and can represent either a 0 or a 1. In quantum computing, however, the fundamental units of information are called qubits. Qubits can exist in a state of 0, 1, or both simultaneously, thanks to a quantum property called superposition. Loosely speaking, this allows a quantum computer to explore many possible computations at once.
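For readers who want to see the math behind that, here is a minimal Python sketch, assuming NumPy and purely simulating the linear algebra on a classical machine (real quantum hardware works very differently): it represents a qubit as a two-amplitude state vector, puts it into an equal superposition with a Hadamard gate, and samples measurement outcomes.

```python
import numpy as np

# A qubit is a 2-element complex vector: amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)           # the state |0>

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                                 # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2                       # [0.5, 0.5]

# Simulate 1,000 measurements; each one collapses the qubit to 0 or 1.
rng = np.random.default_rng(seed=42)
samples = rng.choice([0, 1], size=1_000, p=probs)
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}")
print(f"measured 0 {np.sum(samples == 0)} times, 1 {np.sum(samples == 1)} times")
```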

I am not a physicist or computer engineer, so I don't want to go too deeply into that realm. Reading about this, I see the word "entanglement" and have some memory of Einstein referring to quantum entanglement as "spooky action at a distance." He was skeptical because it seemed to defy the principles of classical physics and his theory of relativity, but modern experiments have confirmed its existence and shown that it is a fundamental aspect of quantum mechanics. In quantum computing, entanglement creates strong correlations between qubits, even when they are far apart.

Entanglement enables quantum computers to solve certain types of complex problems much faster than classical computers by leveraging these interconnected qubits. Quantum computers are particularly well-suited to tasks involving massive datasets, optimization problems, simulations, and cryptography. However, they are still in their early stages of development and face challenges such as error rates, stability, and scalability.
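To show what those correlations look like, here is a companion sketch in the same spirit (again assuming NumPy and only simulating the math): it builds the Bell state, the simplest entangled two-qubit state, and samples joint measurements. The two qubits always agree; measuring one tells you the result of the other.

```python
import numpy as np

# A two-qubit state is a 4-element vector: amplitudes for |00>, |01>, |10>, |11>.
# The Bell state (|00> + |11>) / sqrt(2) is the textbook entangled state.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(bell) ** 2                        # [0.5, 0, 0, 0.5]

# Sample 1,000 joint measurements of both qubits.
rng = np.random.default_rng(seed=7)
outcomes = rng.choice(4, size=1_000, p=probs)
labels = ["00", "01", "10", "11"]
counts = {label: int(np.sum(outcomes == i)) for i, label in enumerate(labels)}
print(counts)   # only "00" and "11" occur: the outcomes are perfectly correlated
```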

In the same way that AI is already in your daily life - even if you don't notice or acknowledge it - quantum computing could be used in everyday activities. It could revolutionize drug discovery and personalized medicine by simulating molecular interactions at an unprecedented speed, leading to faster development of cures and treatments. By solving complex optimization and learning problems, quantum computers could significantly enhance AI's capabilities, leading to smarter assistants and systems.

In cryptography and cybersecurity, current encryption methods could be broken by quantum computers, but quantum techniques could also enable quantum-safe encryption, making online transactions and communications more secure. There's good and bad in almost every discovery.

From smarter traffic systems to more efficient delivery routes, quantum computing could optimize logistics, reducing fuel consumption, travel times, and costs.

Quantum computing could also improve energy solutions, financial modeling, material design, and many things we haven't even considered yet.

Of course, there are challenges. Qubits are highly sensitive to their environment. Even minor disturbances like temperature fluctuations, vibrations, or electromagnetic interference can cause qubits to lose their quantum state—a phenomenon called decoherence. Maintaining stability long enough to perform calculations is a key challenge. Many quantum computers require extremely low temperatures (close to absolute zero) to operate, as qubits need highly controlled environments. Building and maintaining these cryogenic systems is both expensive and challenging.

Small-scale quantum computers exist, but scaling up to thousands or millions of qubits is a monumental task and requires massive infrastructure, advanced error correction mechanisms, and custom hardware, making them cost-prohibitive for widespread adoption.

On the education side of this, quantum computing sits at the intersection of physics, engineering, computer science, and more. A lack of cross-disciplinary expertise will slow down progress in this field.

Computers (and AI) Are Not Managers


Internal IBM document, 1979 (via Fabricio Teixeira)

I saw the quote pictured above, which goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document from the big computer player of that time, IBM. The reasoning behind that statement is that a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way on certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

AI relies on data and patterns, but human management often involves unique situations with no clear precedents or data points, and many decisions require creativity and empathy.

Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for these decisions. Relying on AI alone could raise questions about responsibility when things go wrong. Who is responsible - the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire AI, though we could change the AI we use, and revisions can be made to the AI itself to correct for whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question. (Realizing that AI might have a bias in favor of AI.) Here is a summary of the possibilities given:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends. (A toy sketch of the anomaly-detection idea appears after this list.)

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
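To make one of these concrete, the anomaly detection mentioned under Financial Decisions can, at its simplest, mean flagging spending that sits far outside historical norms. Here is a minimal Python sketch of that idea; the department history, threshold, and test amounts are all made up for illustration, and a real system would be far more sophisticated.

```python
import numpy as np

# Hypothetical monthly spending history for one department (in dollars).
history = np.array([10_200, 9_800, 10_500, 9_900, 10_100, 10_300, 9_700])

mean, std = history.mean(), history.std()

def flag_anomaly(amount: float, threshold: float = 3.0) -> bool:
    """Flag an amount whose z-score (distance from the historical mean,
    measured in standard deviations) exceeds the threshold."""
    return abs(amount - mean) / std > threshold

for amount in (10_400, 18_750):
    status = "ANOMALY" if flag_anomaly(amount) else "ok"
    print(f"${amount:,}: {status}")   # $10,400: ok / $18,750: ANOMALY
```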

As I write this in March 2025, the news is full of stories of DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency."  AI for management is an area that will be more and more in the news and will be a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have the feeling that the question of whether or not AI belongs in management may be a moot point by then.

Ghost Students

Ghost students, as their name implies, aren’t real people. They are not spectral visions. Had you asked me earlier to define the term, I would have said it is a way to describe a student who is enrolled in a college or university but does not actively participate in classes or academic activities. However, these new ghosts are aliases or stolen identities used by scammers and the bots they deploy to get accepted to a college, but not for the purpose of attending classes or earning a degree. Why? What's the scam?

These students may not attend lectures, complete assignments, or engage in the regular responsibilities expected of them, yet they are still listed as part of the institution's enrollment. In some cases, ghost students may be enrolled for reasons such as maintaining financial aid, benefiting from certain privileges, or fulfilling scholarship requirements. Alternatively, the term can sometimes refer to students who may be technically registered but are not engaging with the academic community in a meaningful way.

But more recently, I have seen the definition of a ghost student include when a fraudster completes an online application to a college or university and then, once accepted, enrolls in classes. At that point, the fraudster behind the ghost student can use the fake identity to act like a regular student. He or she can access and abuse cloud storage provided by the institution, or use a college-provided VPN or .edu email address to perpetrate other scams. In the most serious cases, a ghost student’s new enrollment status may be used to apply for and receive thousands of dollars in financial aid.

Institutions targeted by these scams can face consequences ranging from minor inconveniences to significant financial burdens. Ghost students may disrupt campus operations by occupying spots meant for qualified applicants or prompting schools to add course sections for high-demand classes, only for those seats to go unused. Once the issue is identified, colleges must invest substantial time and effort into carefully reviewing applications and monitoring student activity, placing a heavy burden on admissions officers, faculty, IT teams, and other staff.

I read about an extreme example from California’s Pierce College, where enrollment dropped by almost 36 percent — from 7,658 students to 4,937 — after ghost students were purged from the rolls.

If ghost students secure financial aid, often through federal Pell grants, it diverts funds from legitimate applicants and taxpayers. Their presence also strains admissions and IT teams. Additionally, if granted email accounts and access to instructional technology platforms, ghost students can overwhelm data centers and pose serious security risks, increasing vulnerabilities for institutions already targeted by cybercriminals.

Making the application process more rigorous is the most direct way to limit the presence of ghost students. But for many institutions, especially two-year colleges, that approach is antithetical to the college’s mission and desire to offer easier access to higher education. In addition, with enrollment still a major concern for all types of institutions, anything that limits the pool of potential students is a nonstarter.