<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    
    <title>Serendipity35 - Ethics &amp; Morality in Tech</title>
    <link>https://serendipity35.net/</link>
    <description>Where Technology and Education Meet - since 2006</description>
    <dc:language>en</dc:language>
    <admin:errorReportsTo rdf:resource="mailto:serendipity35blog@gmail.com" />
    <generator>Serendipity 2.5.0 - http://www.s9y.org/</generator>
    <webMaster>aws@nexttroll.com</webMaster>
<pubDate>Sat, 28 Mar 2026 18:52:10 GMT</pubDate>

    <image>
    <url>https://serendipity35.net/templates/2k11/img/s9y_banner_small.png</url>
    <title>RSS: Serendipity35 - Ethics &amp; Morality in Tech - Where Technology and Education Meet - since 2006</title>
    <link>https://serendipity35.net/</link>
    <width>100</width>
    <height>21</height>
</image>

<item>
    <title>Your AI Is Not Free</title>
    <link>https://serendipity35.net/index.php?/archives/3921-Your-AI-Is-Not-Free.html</link>
            <category>AI, ML, Robots, VR, AR, XR, Metaverse</category>
            <category>Apps</category>
            <category>Data</category>
            <category>Ethics &amp; Morality in Tech</category>
            <category>Privacy, Security</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3921-Your-AI-Is-Not-Free.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3921</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3921</wfw:commentRss>
    

    <author>trkellers@cloudlocal.us (Tim Kellers)</author>
    <content:encoded>
    &lt;p&gt;&lt;!-- s9ymdb:2834 --&gt;&lt;img alt=&quot;AI man&quot; class=&quot;serendipity_image_center&quot; height=&quot;600&quot; loading=&quot;lazy&quot; src=&quot;https://serendipity35.net/uploads/ai_man_1.jpeg&quot; width=&quot;337&quot; /&gt;The phrase &lt;strong&gt;if an app is free, you are the product&lt;/strong&gt; means that when an app doesn&amp;rsquo;t charge you money, it usually makes money from you instead, mainly by collecting your data or selling your attention to advertisers.&lt;/p&gt;

&lt;p&gt;If that is true, then &lt;strong&gt;how is AI changing what that means?&lt;/strong&gt; It is a question that deserves several posts here to really answer.&lt;/p&gt;

&lt;p&gt;Your behavior, preferences, and time become what is being monetized. Your data becomes the product. Free apps often gather your demographics, browsing or in-app behavior, location, interests, and habits. This information is then used to target ads or sold to third parties.&lt;/p&gt;

&lt;p&gt;The addictive nature of app design keeps you scrolling, tapping, or watching so they can show you ads. You pay with time, not dollars. &amp;ldquo;Free&amp;rdquo; is a business model, not a gift.&lt;/p&gt;

&lt;p&gt;I will give these companies a nod that running an app costs money (servers, engineers, storage). If you are not paying, the company must earn revenue another way. Ad-free options are becoming more common as a premium. You have probably noticed that on apps and also on video streaming services. You thought that paying for Amazon Prime meant no ads on the videos. Wrong. Free is often an illusion.&lt;/p&gt;

&lt;p&gt;In the world of AI, the difference between free and paid tiers is more than a matter of convenience. It is also about identity and privacy.&lt;/p&gt;

&lt;p&gt;Privacy becomes the hidden cost. Data is currency. Companies track you across apps and devices, build detailed behavioral profiles, and use algorithms to influence what you see. This raises concerns about autonomy and consent.&lt;/p&gt;

&lt;p&gt;Is there no stopping them? As long as you agree to their terms, they have a lot of power. BUT you can read those terms and privacy settings more carefully. (They rely on the fact that many users don&amp;#39;t read the terms or adjust their settings at all.) Educate yourself and understand how digital ecosystems make money. You can choose paid or privacy-focused alternatives. And you can remove the app entirely from your life.&lt;/p&gt;

&lt;p&gt;I see comparisons of using AI to using social media platforms. I don&amp;#39;t think AI data is the same as social media data. Social media platforms monetize your attention. The longer you scroll, the more ads they can show. AI chatbots operate on a different axis. Your prompts aren&amp;rsquo;t just content; they&amp;rsquo;re training signals. They reveal how people think, what they struggle with, what they&amp;rsquo;re curious about, and how they phrase questions. Maybe it is anonymized (a good thing), but it is still valuable and often sensitive data.&lt;/p&gt;

&lt;p&gt;Alarmist articles will remind you that many free AI chatbots use your prompts, your corrections, and your uploaded files. They have that photo of your family that you let them enhance. What will they do with what you give them? I can&amp;#39;t answer that as of now, and certainly not for the future. I know that your conversation history is used to train or fine-tune future versions of the model. Hey, you are part of the product pipeline - but don&amp;#39;t expect to be paid for your contributions.&lt;/p&gt;

&lt;p&gt;I also concede that the business model matters and that different AI companies monetize differently. For example, Microsoft provides its own privacy commitments and policies, and those govern how your data is handled. For details, they always direct users to their Privacy Statement.&lt;/p&gt;

&lt;p&gt;Here are 4 business models currently out there:&lt;br /&gt;
Ad-supported = Your attention is monetized.&lt;br /&gt;
Freemium = Free tier gathers usage; paid tier subsidizes development.&lt;br /&gt;
Enterprise licensing = Your data may be isolated; the company earns from businesses.&lt;br /&gt;
Open source = The model is free; the company may sell hosting or support.&lt;/p&gt;

&lt;p&gt;So &amp;quot;&lt;strong&gt;if an app is free, you are the product&amp;quot;&lt;/strong&gt; still applies, but not always in the same way. When an AI tool is free, you&amp;rsquo;re not just the product &amp;mdash; you&amp;rsquo;re also the collaborator. You&amp;rsquo;re an unpaid teacher, tester, and a source of fuel for improvement.&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Mon, 30 Mar 2026 08:49:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3921-guid.html</guid>
    
</item>
<item>
    <title>Unplugging From Online Addiction</title>
    <link>https://serendipity35.net/index.php?/archives/3913-Unplugging-From-Online-Addiction.html</link>
            <category>Ethics &amp; Morality in Tech</category>
            <category>Media</category>
            <category>Privacy, Security</category>
            <category>Social Media</category>
            <category>The Disconnected</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3913-Unplugging-From-Online-Addiction.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3913</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3913</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;p&gt;&lt;!-- s9ymdb:2832 --&gt;&lt;img alt=&quot;online addiction&quot; class=&quot;serendipity_image_center&quot; height=&quot;327&quot; loading=&quot;lazy&quot; src=&quot;https://serendipity35.net/uploads/addiction_online.png&quot; width=&quot;600&quot; /&gt;This week, you probably saw headlines like &amp;quot;&lt;a href=&quot;https://www.theguardian.com/media/2026/mar/25/jury-verdict-us-first-social-media-addiction-trial-meta-youtube&quot; target=&quot;_blank&quot;&gt;Meta and YouTube designed addictive products that harmed young people,&lt;/a&gt;&amp;quot; as a jury in Los Angeles awarded the plaintiff damages of $6 million, with Meta to pay 70% and YouTube the remainder.&lt;/p&gt;

&lt;p&gt;We are all plugged in to the electronic web around us that is far larger than the World Wide Web. That feeling of being unable to unplug is incredibly common and results from a powerful combination of psychological triggers, clever product design, and the essential role technology plays in modern life. &amp;quot;Addiction&amp;quot; is a strong word in this context, but it is the operative word in these kinds of cases.&lt;/p&gt;

&lt;p&gt;Don&amp;#39;t feel like you are &amp;quot;weak&amp;quot; or&amp;#160;lack willpower if you find it difficult to disconnect. These systems are scientifically optimized to maximize your engagement.&amp;#160;The core reason for compulsive checking is a chemical reaction in your brain centered on dopamine, the neurotransmitter associated with pleasure, motivation, and reward-seeking.&amp;#160;&lt;/p&gt;

&lt;p&gt;Social media and even email platforms use the same psychological principle that makes slot machines and video games addictive. You don&amp;#39;t know when&amp;#160;the next &amp;quot;win&amp;quot; will appear. That could be a &amp;quot;like,&amp;quot; a validating comment, an&amp;#160;alert, or an email from someone &amp;quot;important.&amp;quot; Are any of these really important? Maybe - and that possibility mixed in with that famous Fear of Missing Out (FOMO) is powerful. It compels you to keep checking.&amp;#160;&lt;/p&gt;

&lt;p&gt;Designers know they need apps and websites to be addictive. I can list some of these techniques, and you can take them as things to be conscious of and avoid. (You could also use them, as a designer, to create an addictive app or website.) These features are intentionally engineered to make it easy to lose track of time and difficult to stop.&lt;/p&gt;

&lt;p&gt;One of those techniques is infinite scroll, which eliminates natural stopping cues (like the bottom of a page). The content just keeps loading, encouraging endless consumption.&lt;/p&gt;

&lt;p&gt;Push notifications hijack&amp;#160;your attention and create a sense of immediate urgency or curiosity, pulling you back into the app regardless of what you were doing.&lt;/p&gt;

&lt;p&gt;Autoplay on videos and content streams automatically transitions you to the next item, removing the moment you would have had to make a conscious choice to continue or stop.&lt;/p&gt;

&lt;p&gt;As I said earlier, many techniques used in gaming are used in the gamification of other apps. You might not think of things like streaks, badges, or LinkedIn profile completeness bars as addictive, but they create a feeling of required daily attendance to avoid losing progress or status.&lt;/p&gt;

&lt;p&gt;Most of these are psychological traps. FOMO and the social validation of likes, shares, and positive comments tap directly into our fundamental need for acceptance.&lt;/p&gt;

&lt;p&gt;Do you ever find yourself checking your phone while waiting in line, standing on the train, or during a commercial break? That instant, low-effort stimulation is a form of addiction.&lt;/p&gt;

&lt;p&gt;It&amp;#39;s true that technology is no longer optional. We need it&amp;#160;for much of our communication and work. We&amp;#160;crave&amp;#160;constant connectivity. Some jobs demand constant email and instant messaging availability. The&amp;#160;lines between work and personal time have been blurring for at least two decades. We need directions (maps), banking, tickets, appointments, and emergency communications from our&amp;#160;digital devices. That new reality seems to make&amp;#160;a complete disconnect feel irresponsible,&amp;#160;unsafe, and maybe impossible.&amp;#160;&amp;#160;&lt;/p&gt;

&lt;p&gt;But I don&amp;#39;t think it is&amp;#160;hopeless. The solution is not to throw away devices or turn off your cell service and WiFi or have more willpower. Advice from &amp;quot;experts&amp;quot; is to create friction between yourself and the addictive features. Only allow notifications for direct calls, texts, and genuinely critical applications. Designate specific times (like the first hour of the day, mealtimes, or the hour before bed) and locations (the bedroom, the dinner table) as completely device-free. Remove the most addictive social media apps from your phone, or move them off the home screen and turn off those badges and notification sounds that remind you that there are 3 new somethings on Instagram.&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Fri, 27 Mar 2026 08:24:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3913-guid.html</guid>
    
</item>
<item>
    <title>The Australia Social Media Ban</title>
    <link>https://serendipity35.net/index.php?/archives/3886-The-Australia-Social-Media-Ban.html</link>
            <category>Ethics &amp; Morality in Tech</category>
            <category>ISSUES</category>
            <category>Privacy, Security</category>
            <category>Social Media</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3886-The-Australia-Social-Media-Ban.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3886</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3886</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;p&gt;Australia implemented a world-first nationwide ban on social media access for children under 16, effective December 10, 2025. The law, passed in November 2024 under the Online Safety Amendment (Social Media Minimum Age) Bill, requires major platforms to take &amp;quot;reasonable steps&amp;quot; to prevent users under 16 from creating or maintaining accounts. This includes age verification methods like behavioral inference (analyzing online activity), facial age estimation (e.g., via selfies), ID uploads, or linking bank details.&lt;/p&gt;

&lt;p&gt;Millions of accounts are expected to be affected as companies, such as TikTok, Instagram, Facebook, Snapchat, YouTube, and X, face fines of up to $33M for serious or repeated noncompliance. The law places responsibility on companies rather than families, and platforms must demonstrate that they have taken &amp;ldquo;reasonable steps,&amp;rdquo; such as implementing age checks and removing suspected underage accounts.&lt;/p&gt;

&lt;p&gt;The measure is cast as a child-protection and mental health safeguard, citing research showing 96% of 10- to 15-year-olds use social media, with many encountering harmful content, grooming, or cyberbullying. Critics say the law is difficult to enforce. It may even push teens onto harder-to-monitor platforms. Another criticism is that it may pose&amp;#160;privacy risks.&lt;/p&gt;

&lt;p&gt;Read the research from Australia used to create this ban&lt;br /&gt;
&lt;a href=&quot;https://www.esafety.gov.au/research/the-online-experiences-of-children-in-australia/report-digital-use-and-risk-among-children-aged-10-to-15&quot; target=&quot;_blank&quot;&gt;https://www.esafety.gov.au/research/the-online-experiences-of-children-in-australia/report-digital-use-and-risk-among-children-aged-10-to-15&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other countries have taken similar steps, such as strict youth modes or time limits.&lt;br /&gt;
&lt;a href=&quot;https://studyinternational.com/news/countries-social-media-ban-children/&quot; target=&quot;_blank&quot;&gt;https://studyinternational.com/news/countries-social-media-ban-children/&lt;/a&gt;&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Thu, 11 Dec 2025 21:29:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3886-guid.html</guid>
    
</item>
<item>
    <title>Australia’s Nationwide Ban on Social Media for Children Under 16</title>
    <link>https://serendipity35.net/index.php?/archives/3885-Australias-NationwideBan-on-Social-Mediafor-Children-Under-16.html</link>
            <category>Ethics &amp; Morality in Tech</category>
            <category>ISSUES</category>
            <category>K-12</category>
            <category>Privacy, Security</category>
            <category>Social Media</category>
            <category>TRENDS</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3885-Australias-NationwideBan-on-Social-Mediafor-Children-Under-16.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3885</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3885</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;p&gt;Australia&amp;rsquo;s nationwide&amp;#160;ban on social media&amp;#160;use for children under 16 takes effect today, making it the first country to prohibit underage users from major platforms outright.&amp;#160;It is a noble and probably necessary thing, but I cannot believe it is doable.&lt;/p&gt;

&lt;p&gt;Millions of accounts are expected to be affected as companies, such as TikTok, Instagram, Facebook, Snapchat, YouTube, and X, face fines of up to $33M for serious or repeated noncompliance. However, the law places responsibility on companies rather than families, and platforms must demonstrate that they have taken &amp;ldquo;reasonable steps,&amp;rdquo; such as implementing age checks and removing suspected underage accounts.&lt;/p&gt;

&lt;p&gt;I suspect the companies will say these things have already been put in place. (Have you noticed the increase in ads on TV and in your Instagram feed about their teen accounts?) And how will Australia monitor this? Critics say the law is difficult to enforce. It might push teens onto harder-to-monitor platforms. Enforcement may&amp;#160;pose privacy risks. We know that many children who create accounts have already lied about their age. Can that be determined?&lt;/p&gt;

&lt;p&gt;The research shows 96% of children aged 10 to 15 had used social media, and a majority (94%) had used a communication platform to chat, message, call, or video call others. Anecdotally, many of them report encountering harmful content, grooming, or cyberbullying.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Australia may have been the first country to pass a social media ban bill to stop children under 16 from accessing the platforms, but 7 other countries have taken&amp;#160;similar steps, such as strict youth modes or time limits.&lt;br /&gt;
	&lt;a href=&quot;https://studyinternational.com/news/countries-social-media-ban-children/&quot; target=&quot;_blank&quot;&gt;https://studyinternational.com/news/countries-social-media-ban-children/&lt;/a&gt;&amp;#160;&amp;#160;&lt;/li&gt;
	&lt;li&gt;Read/download the Australian research report&lt;br /&gt;
	&lt;a href=&quot;https://www.esafety.gov.au/sites/default/files/2025-07/Digital-use-and-risk-Online-platform-engagement-10-to-15.pdf&quot; target=&quot;_blank&quot;&gt;https://www.esafety.gov.au/sites/default/files/2025-07/Digital-use-and-risk-Online-platform-engagement-10-to-15.pdf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
 
    </content:encoded>

    <pubDate>Fri, 17 Oct 2025 14:54:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3885-guid.html</guid>
    
</item>
<item>
    <title>Computers (and AI) Are Not Managers</title>
    <link>https://serendipity35.net/index.php?/archives/3844-Computers-and-AI-Are-Not-Managers.html</link>
            <category>AI, ML, Robots, VR, AR, XR, Metaverse</category>
            <category>Careers &amp; Work</category>
            <category>Ethics &amp; Morality in Tech</category>
            <category>ISSUES</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3844-Computers-and-AI-Are-Not-Managers.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3844</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3844</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;figure class=&quot;serendipity_imageComment_center&quot; style=&quot;width: 540px&quot;&gt;
&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;!-- s9ymdb:7191 --&gt;&lt;img alt=&quot;doc&quot; class=&quot;serendipity_image_center&quot; height=&quot;396&quot; loading=&quot;lazy&quot; src=&quot;https://serendipity35.net/uploads/computer_ibm.jpg&quot; width=&quot;540&quot; /&gt;&lt;/div&gt;

&lt;figcaption class=&quot;serendipity_imageComment_txt&quot;&gt;
&lt;p&gt;Internal IBM document, 1979 (via &lt;a href=&quot;https://fabricio.work/&quot; target=&quot;_blank&quot;&gt;Fabricio Teixeira&lt;/a&gt;)&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I saw the quote pictured above, which goes back to 1979, when artificial intelligence wasn&amp;#39;t part of the conversation. &amp;quot;A computer must never make a management decision,&amp;quot; said an internal document at the big computer player of that time, IBM. The reason: a computer can&amp;#39;t be held accountable.&lt;/p&gt;

&lt;p&gt;Is the same thing true concerning artificial intelligence 46 years later?&lt;/p&gt;

&lt;p&gt;I suspect that AI is currently being used by management to analyze data, identify trends, and even offer recommendations. But I sense there is still the feeling that it should complement, not replace, human leadership.&lt;/p&gt;

&lt;p&gt;Why should AI be trusted in a limited way on certain aspects of decision-making?&lt;/p&gt;

&lt;p&gt;One reason that goes back at least 46 years is that it lacks &amp;quot;emotional intelligence.&amp;quot; Emotional intelligence (EI or EQ) is about balancing emotions and reasoning to make thoughtful decisions, foster meaningful relationships, and navigate social complexities. Management decisions often require a deep understanding of human emotions, workplace dynamics, and ethical considerations &amp;mdash; all things AI can&amp;#39;t fully grasp or replicate.&lt;/p&gt;

&lt;p&gt;Because AI relies on data and patterns and human management often involves unique situations where there might not be clear precedents or data points, many decisions require creativity and empathy.&lt;/p&gt;

&lt;p&gt;Considering that 1979 statement, since management decisions can have far-reaching consequences, humans are ultimately accountable for these decisions. Relying on AI alone could raise questions about responsibility when things go wrong. Who is responsible - the person who used the AI, the person who trained it, or the AI itself? Obviously, we can&amp;#39;t reprimand or fire AI, though we could change the AI we use, and revisions can be made to the AI itself to correct for whatever went wrong.&lt;/p&gt;

&lt;p&gt;AI systems can unintentionally inherit biases from the data they&amp;#39;re trained on. Without proper oversight, this could lead to unfair or unethical decisions. Of course, bias is a part of human decisions and management too.&lt;/p&gt;

&lt;p&gt;Management at some levels involves setting long-term visions and values for an organization. This goes beyond the realm of pure logic and data, requiring imagination, purpose, and human judgment.&lt;/p&gt;

&lt;p&gt;So, can AI handle any management decisions in 2025? I asked several AI chatbots that question. (Realizing that AI might have a bias in favor of AI.) Here is a summary of the possibilities given:&lt;/p&gt;

&lt;p&gt;Resource Allocation: AI can optimize workflows, assign resources, and balance workloads based on performance metrics and project timelines.&lt;/p&gt;

&lt;p&gt;Hiring and Recruitment: AI tools can screen r&amp;eacute;sum&amp;eacute;s, rank candidates, and even conduct initial video interviews by analyzing speech patterns and keywords.&lt;/p&gt;

&lt;p&gt;Performance Analysis: By processing large datasets, AI can identify performance trends, suggest areas for improvement, and even predict future outcomes.&lt;/p&gt;

&lt;p&gt;Financial Decisions: AI systems can create accurate budget forecasts, detect anomalies in spending, and provide investment recommendations based on market trends.&lt;/p&gt;

&lt;p&gt;Inventory and Supply Chain: AI can track inventory levels, predict demand, and suggest restocking schedules to reduce waste and costs.&lt;/p&gt;

&lt;p&gt;Customer Management: AI chatbots and recommendation engines can handle customer queries, analyze satisfaction levels, and identify patterns in customer feedback.&lt;/p&gt;

&lt;p&gt;Risk Assessment: AI can evaluate risks associated with projects, contracts, or business decisions by analyzing historical data and current market conditions.&lt;/p&gt;

&lt;p&gt;As I write this in March 2025, the news is full of stories of DOGE and Elon Musk&amp;#39;s team using AI for things like reviewing email responses from employees, and wanting to use more AI to replace workers and &amp;quot;improve efficiency.&amp;quot;&amp;#160; AI for management is an area that will be more and more in the news and will be a controversial topic for years to come. I won&amp;#39;t be around in another 46 years to write the next article about this, but I have the feeling that the question of whether or not AI belongs in management may be a moot point by then.&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Tue, 18 Mar 2025 12:45:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3844-guid.html</guid>
    
</item>
<item>
    <title>AI Is Not Your Friend</title>
    <link>https://serendipity35.net/index.php?/archives/3830-AI-Is-Not-Your-Friend.html</link>
            <category>AI, ML, Robots, VR, AR, XR, Metaverse</category>
            <category>Ethics &amp; Morality in Tech</category>
            <category>Higher Education</category>
            <category>K-12</category>
            <category>Privacy, Security</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3830-AI-Is-Not-Your-Friend.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3830</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3830</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;p&gt;Though artificial intelligence is not your friend, it should not be solely considered your enemy. Like many technologies, it has its positive and negative aspects and applications.&lt;/p&gt;

&lt;figure class=&quot;serendipity_imageComment_center&quot; style=&quot;width: 400px&quot;&gt;
&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;!-- s9ymdb:7183 --&gt;&lt;img alt=&quot;still from HER&quot; class=&quot;serendipity_image_center&quot; loading=&quot;lazy&quot; src=&quot;https://ichef.bbci.co.uk/images/ic/1024xn/p01pqw85.jpg&quot; style=&quot;width: 400px;&quot; /&gt;&lt;/div&gt;

&lt;figcaption class=&quot;serendipity_imageComment_txt&quot;&gt;
&lt;p style=&quot;text-align: center;&quot;&gt;Joaquin Phoenix getting friendly with an AI operating system named Samantha (voiced by Scarlett Johansson) in the film HER&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Amber MacArthur wrote, &amp;quot;AI is not your friend. Any friend that stops working when the power goes out is a machine.&amp;quot; She is at least partially referring to the idea of people becoming friendly with AI in the way we saw in &lt;a href=&quot;https://amzn.to/4fneB2q&quot; target=&quot;_blank&quot;&gt;the film HER&lt;/a&gt;. That film premiered more than a decade ago, and what it depicted now looks not only possible but already happening in many ways.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://ambermac.com/&quot; target=&quot;_blank&quot;&gt;Amber&lt;/a&gt; had a &lt;a href=&quot;https://www.linkedin.com/posts/ambermac_ai-activity-7255569292241334273-SE8c/&quot; target=&quot;_blank&quot;&gt;longer post on LinkedIn&lt;/a&gt; that she excerpted in &lt;a href=&quot;https://ambermac.com/newsletter/&quot; target=&quot;_blank&quot;&gt;her newsletter&lt;/a&gt;. Here are a few of her observations:&amp;#160;&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;&amp;quot;AI-based social media platforms are not free speech platforms.&lt;/strong&gt; These platforms curate, amplify, promote, and - yes - demote. Think about it like yelling in the public town square, but depending on what you say, Elon Musk&amp;#39;s army of agents is there to either put a hand over your mouth to quiet you down or give you a megaphone to pump you up.&amp;quot;&lt;/li&gt;
	&lt;li&gt;Schools should not ignore or ban all AI applications. &amp;quot;&lt;strong&gt;AI training in schools should be a priority since AI skills in the workplace are a priority.&lt;/strong&gt; Kids who grow up in an age when they are taught that AI is only a threat and not also a tool will be at a competitive disadvantage.&amp;quot;&lt;/li&gt;
	&lt;li&gt;On the negative side - &lt;strong&gt;&amp;quot;AI warfare is the most frightening reality of our time.&amp;quot; &lt;/strong&gt;And it is already here and guaranteed to increase.&lt;/li&gt;
	&lt;li&gt;On the positive side - &lt;strong&gt;&amp;quot;AI healthcare is the most exciting opportunity of our time.&amp;quot;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;She knows that her list is not definitive and admits that it is &amp;quot;fluid, so if there is something you would like me to add, please &lt;a href=&quot;https://ambermac.com/contact/&quot; target=&quot;_blank&quot;&gt;let me know on my socials or via email&lt;/a&gt; so I can check it out.&amp;quot;&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Tue, 29 Oct 2024 11:25:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3830-guid.html</guid>
    
</item>
<item>
    <title>Bias in AI</title>
    <link>https://serendipity35.net/index.php?/archives/3824-Bias-in-AI.html</link>
            <category>AI, ML, Robots, VR, AR, XR, Metaverse</category>
            <category>Ethics &amp; Morality in Tech</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3824-Bias-in-AI.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3824</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3824</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;figure class=&quot;serendipity_imageComment_center&quot; style=&quot;width: 600px&quot;&gt;
&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;!-- s9ymdb:7179 --&gt;&lt;img alt=&quot;AI thermostat and couple&quot; class=&quot;serendipity_image_center&quot; height=&quot;600&quot; loading=&quot;lazy&quot; src=&quot;https://serendipity35.net/uploads/AI_smart_thermostat.jpg&quot; width=&quot;600&quot; /&gt;&lt;/div&gt;

&lt;figcaption class=&quot;serendipity_imageComment_txt&quot;&gt;What is that smart thermostat doing with our data? Can a thermostat have a bias?&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I read an &lt;a href=&quot;https://stories.camden.rutgers.edu/battling-bias-in-ai/index.html&quot; target=&quot;_blank&quot;&gt;article from Rutgers &lt;/a&gt;University-Camden that begins by saying that most people now realize that artificial intelligence has become increasingly embedded in everyday life. This has created concerns. One of those lesser-spoken-about concerns is around bias in its programming. Here are some excerpts from the article.&lt;/p&gt;

&lt;p&gt;Some AI tasks are so innocuous that users don&amp;#39;t think about AI being involved. Your email uses it. Your online searches use it. That smart thermostat uses it. Those uses hardly seem frightening, but as AI&amp;#39;s capabilities expand, so does its potential for wide-ranging impact.&lt;/p&gt;

&lt;p&gt;Organizations and corporations have jumped on the opportunity presented by AI and Big Data to automate processes and increase the speed and accuracy of decisions large and small. Market research has found that 84 percent of C-suite executives believe they must leverage artificial intelligence to achieve their growth objectives. Three out of four believe that if they don&amp;#39;t take advantage of AI in the next five years, they risk going out of business entirely.&lt;/p&gt;

&lt;p&gt;Bias can cause artificial intelligence to make decisions that are systematically unfair to particular groups of people, and researchers have found this can cause real harm. Rutgers&amp;ndash;Camden researcher Iman Dehzangi says that &amp;ldquo;Artificial intelligence and machine learning are poised to create valuable opportunities by automating or accelerating many different tasks and processes. One of the challenges, however, is to overcome potential pitfalls such as bias.&amp;rdquo;&amp;#160;&lt;/p&gt;

&lt;p&gt;What does biased AI do? It can give consistently different outputs for certain groups compared to others. It can discriminate based on race, gender, biological sex, nationality, social class, or many other factors.&lt;/p&gt;

&lt;p&gt;Of course, it is human beings who choose the data that algorithms use, and humans have biases whether they are conscious of them or not.&lt;/p&gt;

&lt;p&gt;&amp;quot;Because machine learning is dependent upon data, if the data is biased or flawed, the patterns discerned by the program and the decisions made as a result will be biased, too,&amp;quot; said Dehzangi, pointing to a common saying in the tech industry: &amp;quot;garbage in, garbage out.&amp;quot;&lt;/p&gt;

&lt;p&gt;&amp;ldquo;There is not a successful business in operation today that is not using AI and machine learning,&amp;rdquo; said Dehzangi. Whether it is making financial investments in the stock market, facilitating the product development life cycle, or maximizing inventory management, forward-thinking businesses are leveraging this new technology to remain competitive and ahead of the curve. However, if they fail to account for bias in these emerging technologies, they could fall even further behind, remaining mired in the flawed data of the past. Research has revealed that if care is not taken in the design and implementation of AI systems, longstanding social biases can be embedded in the systems&amp;#39; logic.&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Wed, 28 Aug 2024 09:28:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3824-guid.html</guid>
    
</item>
<item>
    <title>Tay: A Cautionary Tale of AI</title>
    <link>https://serendipity35.net/index.php?/archives/3812-Tay-A-Cautionary-Tale-of-AI.html</link>
            <category>AI, ML, Robots, VR, AR, XR, Metaverse</category>
            <category>Ethics &amp; Morality in Tech</category>
    
    <comments>https://serendipity35.net/index.php?/archives/3812-Tay-A-Cautionary-Tale-of-AI.html#comments</comments>
    <wfw:comment>https://serendipity35.net/wfwcomment.php?cid=3812</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://serendipity35.net/rss.php?version=2.0&amp;type=comments&amp;cid=3812</wfw:commentRss>
    

    <author>ronkowitz@gmail.com (Kenneth Ronkowitz)</author>
    <content:encoded>
    &lt;p&gt;&lt;!-- s9ymdb:7162 --&gt;&lt;img alt=&quot;chatbot and posts&quot; class=&quot;serendipity_image_center&quot; loading=&quot;lazy&quot; src=&quot;https://serendipity35.net/uploads/chatbot_with_posts.jpg&quot; style=&quot;width: 350px;&quot; /&gt;Tay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It &amp;quot;has had a great influence on how Microsoft is approaching AI,&amp;quot; according to Satya Nadella, the CEO of Microsoft.&lt;/p&gt;

&lt;p&gt;Tay caused almost immediate controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who &amp;quot;attacked&amp;quot; the service, since the bot made replies based on its interactions with people on Twitter - a dangerous proposition.&lt;/p&gt;

&lt;p&gt;It was named &amp;quot;Tay&amp;quot; as an acronym for &amp;quot;thinking about you.&amp;quot; It was said to be similar to, or based on, Xiaoice, a comparable Microsoft project in China which, Ars Technica reported, had &amp;quot;more than 40 million conversations apparently without major incident.&amp;quot;&lt;/p&gt;

&lt;p&gt;Interestingly, Tay was designed to mimic the language patterns of a 19-year-old American girl and presented as &amp;quot;The AI with zero chill.&amp;quot;&lt;/p&gt;

&lt;p&gt;It was quickly abused when Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages, so that the bot began releasing racist and sexually charged messages in response to other Twitter users.&lt;/p&gt;

&lt;p&gt;One artificial intelligence researcher, Roman Yampolskiy, commented that Tay&amp;#39;s misbehavior was understandable because it mimicked the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM&amp;#39;s Watson, which began to use profanity after reading entries from the website Urban Dictionary.&lt;/p&gt;

&lt;p&gt;It was popular in its short life. Within 16 hours of its release, Tay had tweeted more than 96,000 times. That is when Microsoft suspended the account for &amp;quot;adjustments.&amp;quot; Microsoft confirmed that Tay had been taken offline, released an apology on its official blog, and said it would &amp;quot;look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.&amp;quot;&lt;/p&gt;

&lt;p&gt;Then on March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. Given its freedom, Tay released some drug-related tweets, then became stuck in a repetitive loop of tweeting &amp;quot;You are too fast, please take a rest&amp;quot; several times a second. The posts appeared in the feeds of 200,000+ Twitter followers.&lt;/p&gt;

&lt;p&gt;Tay has become a cautionary tale on the responsibilities of creators for their AI.&lt;/p&gt;

&lt;p&gt;In December 2016, Microsoft released Tay&amp;#39;s successor, a chatbot named &lt;a href=&quot;https://en.wikipedia.org/wiki/Zo_(bot)&quot; target=&quot;_blank&quot;&gt;Zo&lt;/a&gt;, an English-language counterpart to Microsoft&amp;#39;s other chatbots, Xiaoice (China) and Rinna (Japan).&lt;/p&gt;
 
    </content:encoded>

    <pubDate>Wed, 29 May 2024 14:50:00 +0000</pubDate>
    <guid isPermaLink="false">https://serendipity35.net/index.php?/archives/3812-guid.html</guid>
    
</item>

</channel>
</rss>
