The Australia Social Media Ban

Australia implemented a world-first nationwide ban on social media access for children under 16, effective December 10, 2025. The law, passed in November 2024 as the Online Safety Amendment (Social Media Minimum Age) Act, requires major platforms to take "reasonable steps" to prevent users under 16 from creating or maintaining accounts. Acceptable age-verification methods include behavioral inference (analyzing online activity), facial age estimation (e.g., via selfies), ID uploads, or linking bank details.

Millions of accounts are expected to be affected as companies such as TikTok, Instagram, Facebook, Snapchat, YouTube, and X face fines of up to A$49.5 million (about US$33 million) for serious or repeated noncompliance. The law places responsibility on companies rather than families, and platforms must demonstrate that they have taken “reasonable steps,” such as implementing age checks and removing suspected underage accounts.

The measure is cast as a child-protection and mental-health safeguard, citing research showing 96% of 10- to 15-year-olds use social media, with many encountering harmful content, grooming, or cyberbullying. Critics say the law is difficult to enforce, may push teens onto harder-to-monitor platforms, and may create privacy risks through the verification methods it requires.

Read the research from Australia that informed this ban:
https://www.esafety.gov.au/research/the-online-experiences-of-children-in-australia/report-digital-use-and-risk-among-children-aged-10-to-15

Other countries have taken similar steps, such as strict youth modes or time limits.
https://studyinternational.com/news/countries-social-media-ban-children/

Australia’s Nationwide Ban on Social Media for Children Under 16

Australia’s nationwide ban on social media use for children under 16 takes effect today, making it the first country to prohibit underage users from major platforms outright. It is a noble and probably necessary thing, but I cannot believe it is doable.

Millions of accounts are expected to be affected as companies such as TikTok, Instagram, Facebook, Snapchat, YouTube, and X face fines of up to A$49.5 million (about US$33 million) for serious or repeated noncompliance. However, the law places responsibility on companies rather than families, and platforms must demonstrate that they have taken “reasonable steps,” such as implementing age checks and removing suspected underage accounts.

I suspect the companies will say these things have already been put in place. (Have you noticed the increase in ads on TV and in your Instagram feed about their teen accounts?) And how will Australia monitor this? Critics say the law is difficult to enforce. It might push teens onto harder-to-monitor platforms. Enforcement may pose privacy risks. We know that many children who create accounts have already lied about their age. Can that be determined?

The research shows that 96% of children aged 10 to 15 had used social media, and a majority (94%) had used a communication platform to chat, message, call, or video call others. Anecdotally, many of them report encountering harmful content, grooming, or cyberbullying.

Computers (and AI) Are Not Managers


Internal IBM document, 1979 (via Fabricio Teixeira)

The quote pictured above goes back to 1979, when artificial intelligence wasn't part of the conversation. "A computer must never make a management decision," said an internal document at the big computer player of that time, IBM. The reason given for that statement: a computer can't be held accountable.

Is the same thing true concerning artificial intelligence 46 years later?

I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.

Why should AI be trusted only in a limited way with certain aspects of decision-making?

One reason that goes back at least 46 years is that it lacks "emotional intelligence." Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations — all things AI can't fully grasp or replicate.

AI relies on data and patterns, but human management often involves unique situations where there are no clear precedents or data points; many of those decisions require creativity and empathy instead.

Considering that 1979 statement: since management decisions can have far-reaching consequences, humans are ultimately accountable for them. Relying on AI alone could raise questions about responsibility when things go wrong. Who is responsible: the person who used the AI, the person who trained it, or the AI itself? Obviously, we can't reprimand or fire an AI, though we could change the AI we use, and the AI itself can be revised to correct for whatever went wrong.

AI systems can unintentionally inherit biases from the data they're trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.

Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.

So, can AI handle any management decisions in 2025? I asked several AI chatbots that question (realizing, of course, that AI might have a bias in favor of AI). Here is a summary of the possibilities they offered:

Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.

Hiring and Recruitment: AI tools can screen résumés, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.

Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.

Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends.

Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.

Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.

Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.
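As a toy illustration of one item above, the "detect anomalies in spending" idea under Financial Decisions can be as simple as flagging values that sit far from the average. The sketch below uses a basic z-score threshold; the function name, the sample figures, and the cutoff of two standard deviations are all illustrative assumptions, not a description of any real system.

```python
# Toy anomaly detector for a series of monthly spending figures.
# Flags any month whose spend is more than `threshold` standard
# deviations away from the mean of the whole series.

def flag_spending_anomalies(spend, threshold=2.0):
    """Return the indices of values that look anomalous."""
    n = len(spend)
    mean = sum(spend) / n
    variance = sum((x - mean) ** 2 for x in spend) / n
    std = variance ** 0.5
    if std == 0:  # all values identical: nothing to flag
        return []
    return [i for i, x in enumerate(spend) if abs(x - mean) / std > threshold]

# Five ordinary months followed by one large spike (index 5).
monthly_spend = [10_200, 9_800, 10_050, 9_900, 10_100, 28_500]
print(flag_spending_anomalies(monthly_spend))  # → [5]
```

Real systems layer far more on top (seasonality, vendor context, human review), which is exactly why the recommendation still ends up on a manager's desk rather than triggering action on its own.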

As I write this in March 2025, the news is full of stories of DOGE and Elon Musk's team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and "improve efficiency." AI for management is an area that will be more and more in the news and will be a controversial topic for years to come. I won't be around in another 46 years to write the next article about this, but I have the feeling that the question of whether or not AI belongs in management may be a moot point by then.

AI Is Not Your Friend

Though artificial intelligence is not your friend, it should not be considered solely your enemy. Like many technologies, it has its positive and negative aspects and applications.


Joaquin Phoenix getting friendly with an AI operating system named Samantha (voiced by Scarlett Johansson) in the film HER

Amber MacArthur wrote "AI is not your friend. Any friend that stops working when the power goes out is a machine." She is at least partially referring to the idea of people becoming friendly with AI in the way that we saw in the film HER. That film premiered more than a decade ago, and what it depicted now looks not only possible but already happening in many ways.

Amber had a longer post on LinkedIn that she excerpted in her newsletter. Here are a few of her observations: 

  • "AI-based social media platforms are not free speech platforms. These platforms curate, amplify, promote, and - yes - demote. Think about it like yelling in the public town square, but depending on what you say, Elon Musk's army of agents is there to either put a hand over your mouth to quiet you down or give you a megaphone to pump you up."
  • Schools should not ignore or ban all AI applications. "AI training in schools should be a priority since AI skills in the workplace are a priority. Kids who grow up in an age when they are taught that AI is only a threat and not also a tool will be at a competitive disadvantage."
  • On the negative side - "AI warfare is the most frightening reality of our time." And it is already here and guaranteed to increase.
  • On the positive side - "AI healthcare is the most exciting opportunity of our time."

She knows that her list is not definitive and admits that it is "fluid, so if there is something you would like me to add, please let me know on my socials or via email so I can check it out."