The Y2K38 Bug and the End of 32-bit Unix Time

Y2K, short for “Year 2000,” was a potential computer bug caused by how dates were stored in older software. To save memory, early computers used two-digit years, like “97” for 1997, which in the new millennium risked misreading “00” as 1900 instead of 2000, potentially disrupting systems that depended on accurate dates.
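To make the flaw concrete, here is a minimal sketch (illustrative only, not taken from any real legacy system) of how two-digit year storage misreads the 2000 rollover, along with "date windowing," one of the common remediation techniques actually used during Y2K fixes:

```python
# A hypothetical record that stores the year as two digits,
# as many pre-2000 systems did to save memory
stored_year = "00"  # entered in the year 2000

# Legacy logic that hard-codes the 1900s misreads 2000 as 1900
legacy_year = 1900 + int(stored_year)
print(legacy_year)  # 1900

# Date windowing, a common Y2K remediation: two-digit values below
# a chosen pivot (here 50, an assumed cutoff) are read as 20xx,
# the rest as 19xx
PIVOT = 50
yy = int(stored_year)
windowed_year = (2000 if yy < PIVOT else 1900) + yy
print(windowed_year)  # 2000
```

Windowing only postpones the ambiguity (here, until 2050), which is why many systems were instead migrated to four-digit years outright.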

Though public panic peaked in 1999, the Y2K issue had surfaced in technical literature as early as 1984. Long before it became a global concern, researchers were already flagging the two-digit date flaw: the 1984 book "Computers in Crisis" outlined how the year 2000 rollover could break financial, governmental, and technical systems if left unaddressed.

In the late 1990s, many feared this glitch could cause widespread failures in banking systems, power grids, transportation networks, and other critical infrastructure. The idea took hold of the public imagination, spawning doomsday predictions, a booming survivalist market, and a massive global push to audit and repair vulnerable systems before the deadline, work that cost an estimated $300 billion to $500 billion.

Because of the extensive preparations, Y2K passed without significant disruptions; however, its legacy endures. The crisis helped modernize global IT systems, accelerated the outsourcing of programming jobs, and exposed society’s dependence on digital infrastructure, prompting long-term shifts in cybersecurity and software maintenance.

The Year 2038 problem is the next potential computer time rollover bug. Many older systems store time as a signed 32-bit integer counting seconds since Jan. 1, 1970 (the Unix epoch). That counter hits its maximum value of 2,147,483,647 at 03:14:07 UTC on Jan. 19, 2038, then overflows to a negative number, sending clocks back to December 1901 and potentially crashing older software that depends on accurate dates. The Y2K38 bug is also known as the year 2038 problem, or the end of 32-bit Unix time.
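The rollover can be demonstrated with plain epoch arithmetic. The sketch below simulates the two's-complement wraparound of a signed 32-bit counter (pure illustration; it doesn't touch the platform's real time_t):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # 2,147,483,647 — largest signed 32-bit value

# The last moment representable in signed 32-bit Unix time
last = EPOCH + timedelta(seconds=INT32_MAX)
print(last)  # 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit counter wraps from 2**31 - 1
# to -2**31; a system interpreting that as seconds-since-epoch
# jumps back to 1901
wrapped = EPOCH + timedelta(seconds=-(2**31))
print(wrapped)  # 1901-12-13 20:45:52+00:00
```

Modern platforms sidestep this by widening time_t to 64 bits, which pushes the overflow roughly 292 billion years into the future; the 2038 risk lives mainly in legacy 32-bit systems, embedded devices, and old file formats.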


OpenAI and Broadcom and Ten Gigawatts

This week, OpenAI made news with its new browser, Atlas. With all their plans and a new cloud-based AI browser, they need to scale their computing power. Here is a news summary (AI-generated, of course):

OpenAI has announced a strategic multiyear partnership with semiconductor giant Broadcom to co-develop custom-built chips and infrastructure. The collaboration aims to deploy 10 gigawatts of specialized AI accelerators by the end of 2029—a staggering amount of compute capacity equivalent to the power consumption of approximately 8 million U.S. households. (Markets Insider)

This deal marks OpenAI’s first venture into designing its own in-house processors, with Broadcom tasked with developing and deploying the systems. The chips—known as AI accelerators—are optimized for parallel processing, enabling them to execute billions of operations simultaneously. These accelerators will be deployed across OpenAI’s facilities and partner data centers, using Broadcom’s Ethernet-based networking solutions to ensure scalability and efficiency. (Broadcom Inc)

OpenAI CEO Sam Altman emphasized the significance of the partnership: “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.” The Broadcom agreement is the latest in a series of megadeals OpenAI has struck in 2025 to secure the compute power needed for its rapidly expanding AI services. Earlier this year, OpenAI signed a $100 billion deal with Nvidia to deploy 10 gigawatts of Nvidia systems, beginning in the second half of 2026. (Reuters) In another major move, OpenAI partnered with AMD to deploy 6 gigawatts of chips and received a warrant for up to 160 million AMD shares, potentially making it one of AMD’s largest shareholders. (The Motley Fool)

These deals reflect a broader industry trend: major tech players are increasingly investing in custom silicon to reduce reliance on Nvidia’s dominant GPU offerings. Companies like Google, Amazon, and Microsoft have already begun developing their own AI chips, and OpenAI’s latest move places it firmly within this competitive landscape. While OpenAI has not disclosed how it plans to finance the Broadcom deal, analysts estimate that a single gigawatt-scale data center could cost between $50 billion and $60 billion. 

This latest partnership not only strengthens OpenAI’s technical capabilities but also signals a shift toward greater control over its hardware stack—an essential step as the race to develop next-generation AI systems accelerates.