Fear of Becoming Obsolete

The term FOBO, the fear of becoming obsolete, appeared in something I was reading recently. It is very much a workplace fear, generally connected to aging workers and to anyone who fears being replaced by technology.

Of course, AI is a large part of this fear. It's not a new fear. Workers have always worried that they would be considered obsolete as they aged, especially if they did not have the skills that younger employees brought to the workplace. We have been hearing predictions for at least two decades that robots would replace workers. In fact, that did happen, though not to the levels that were sometimes predicted. Artificial intelligence is less obvious as it makes inroads into our work and home lives.

Employers and workers need to be better at recognizing the ways AI is already here and being used. Approximately four in ten Americans use Face ID to log into at least one app on their phone each day. That is about 136 million people. How many think about that as AI?

If you have an electric vehicle, AI-powered systems manage its energy use. In your gas-powered car, you very likely use an AI-powered GPS for navigation.

One survey I saw found that just 44 percent of the global workforce believe they interact with AI in their personal lives today. But when asked if they used GPS maps and navigation, 66 percent said yes. What about predictive product and entertainment suggestions, such as those in Netflix and Spotify? 50 percent said yes. Do you use text editors or autocorrect? A yes from 47 percent. And 46 percent use virtual home assistants, such as Alexa and Google Assistant. Even chatbots like ChatGPT and Copilot - which are less hidden and require more deliberate use - had a 31 percent yes response.

Most of these are viewed as positive uses of AI, but not all uses are, or at least some are viewed as somewhat negative. One example in the not-so-positive category is AI's use in filling newsfeeds. Each social media network - Facebook, Twitter, Instagram, et al. - has its own AI-powered algorithm, constantly customizing billions of users’ feeds. You click a like button, or just pause on a post for more than a few seconds, and that information changes your feed accordingly. The algorithm is also made to push certain things to users that were suggested not by your activity but by sponsors or owners. This aspect has been widely criticized since Elon Musk took over Twitter-X, but all the platforms do it to some degree.

Some common applications are both positive and negative. Take the use of artificial intelligence in airports all over the world, where it is being used to screen passengers passing through security checkpoints. At least 25 airports across the U.S., including Reagan National in Washington, D.C., and Los Angeles International Airport, have started using AI-driven facial recognition as part of a pilot project. Eventually, the Transportation Security Administration (TSA) plans to expand the ID verification technology to more than 400 airports. This can speed up your passage through security, which everyone would love to see, but what else is being done with that data, and will the algorithm flag people for the wrong reasons?

Do you want to push back on FOBO, particularly in the workplace? Some suggestions:
Continuous Learning: Stay curious and keep updating your skills. Whether it’s taking a course, attending workshops, or learning new technologies, continuous education is key.
Networking: Engage with your professional community. Networking can provide insights into industry trends and offer support and advice.
Adaptability: Embrace change and be open to new ideas. Flexibility can help you stay relevant.
Mindset Shift: Focus on your unique strengths and contributions. Everyone has something valuable to offer, and feeling obsolete often stems from undervaluing your skills.
Digital Detox: Sometimes, limiting your exposure to social media and other sources of comparison can reduce feelings of inadequacy.
Seek Feedback: Regularly seek feedback from peers, mentors, and colleagues to understand your areas of improvement and strengths.

Facebook at 21

I saw that today is the anniversary of the start of Facebook back in its undergraduate days of 2004. An old post on the now-defunct Writer's Almanac did a nice job of summarizing that early history, so I am using most of it here.

The social networking site Facebook was launched on February 4, 2004, by sophomore Mark Zuckerberg from his Harvard University dorm room (Suite H33 in Kirkland House). He was aided by three other 19-year-olds.

Zuckerberg was a smart, middle-class kid from Dobbs Ferry, New York, who started writing computer software when he was 12. In high school, he created a program called Synapse Media Player, was offered millions of dollars for the product, and received job offers from both Microsoft and AOL. But he passed on them in order to attend Harvard instead.

The program he created at Harvard was called Facemash. It displayed two student photos side by side and asked people to rank who was hotter, a format later duplicated in various "hot or not" games. In the site’s first four hours online, the photos were viewed 22,000 times. Harvard shut the site down a few days later, partly because it was so popular that it overwhelmed the university's server, but also because of privacy violations: Zuckerberg had acquired the photos for Facemash by hacking into Harvard’s photo directory.

A couple of months later, Zuckerberg began writing code for a site that would allow students to view each other’s photos and some basic personal information. This site, TheFacebook, was launched on this day in 2004 at www.thefacebook.com.

More than a thousand students signed up within 24 hours, and after a month, half of Harvard’s undergraduates had signed up. Zuckerberg was in trouble again, this time with three seniors who claimed that they had hired him to create a similar site and that he had stolen their idea. Several years later, they reached a multimillion-dollar settlement.

Linking to the Wayback Machine

Google Search has integrated a feature that links directly to the Wayback Machine, allowing users to access archived versions of webpages through search results.

The Wayback Machine is an online archive created by the Internet Archive, a non-profit organization. It allows users to access and view historical snapshots of web pages, dating back to the late 1990s. Essentially, it's like a digital time machine that lets you see how websites looked in the past. This can be useful for research, preserving digital history, or just satisfying curiosity.

By clicking the three dots next to a search result and selecting "More About This Page," users can view how a webpage appeared at different points in time. The collaboration enhances public access to web history, ensuring that digital records remain available for future generations.
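If you want to reach the same archives programmatically rather than through a search result, the Internet Archive also exposes a public "availability" endpoint. The short sketch below is not part of the Google Search feature; it simply asks that endpoint for the snapshot closest to a given date (the function name closest_snapshot is just illustrative).

```python
# A minimal sketch: query the Wayback Machine's public availability endpoint
# for the archived snapshot closest to a given YYYYMMDD timestamp.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp="20100101"):
    """Return the closest archived snapshot record for a URL, or None."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    endpoint = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(endpoint) as resp:
        data = json.load(resp)
    # The response nests the nearest capture under archived_snapshots/closest.
    return data.get("archived_snapshots", {}).get("closest")

# Example: see how a site looked around the turn of the millennium.
# print(closest_snapshot("nytimes.com", "20000101"))
```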

Source: https://blog.archive.org/2024/09/11/new-feature-alert-access-archived-webpages-directly-through-google-search/

An AI Chatbot Glossary

Even some less-tech people have been experimenting with chatbots now that they are embedded in Google Gemini, Apple, and Microsoft Copilot sites. A few of my less-tech friends have asked me what a term means concerning AI chatbots. Of course, they could easily ask a chatbot to define any chatbot terms, but it is useful to have a glossary.

I have had friends tell me that they have had some interesting "conversations" with machines. "They almost seem human," said one friend who has no idea what a Turing test is. That sounds like fun, but generative AI could also be worth $4.4 trillion annually to the global economy, according to the McKinsey Global Institute.

Besides the obviously popular AI tools, there are others like Anthropic's Claude, the Perplexity AI search tool and gadgets from Humane and Rabbit.

A glossary would start with very basic terms, such as "prompt," which is the question or instruction you enter into an AI chatbot to get a response. That might lead you to "prompt chaining," the ability of AI to use information from previous interactions to shape future responses.
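As a rough illustration of prompt chaining, the sketch below keeps resending the growing conversation history with each new prompt, so earlier turns can shape later replies. The ask_chatbot() function is a hypothetical stand-in for whatever chatbot API you actually use.

```python
# A minimal sketch of prompt chaining: every new prompt is sent together with
# the earlier exchanges, so the model can draw on that context when it replies.

def ask_chatbot(messages):
    """Hypothetical stand-in for a real chatbot call; here it just echoes."""
    return f"(model reply to: {messages[-1]['content']})"

def chat_with_memory(prompts):
    history = []  # grows with every exchange and is resent each time
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = ask_chatbot(history)  # the model sees all prior turns
        history.append({"role": "assistant", "content": reply})
    return history

# The second prompt only makes sense because the first is still in the history.
# chat_with_memory(["Suggest three titles for a FOBO blog post.",
#                   "Make the second one shorter."])
```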

What does it mean if a tool is "agentive"? That might be a system or model that exhibits agency, with the ability to autonomously pursue actions to achieve a goal. This is where we enter an area that scares some people. An agentive model can act without constant supervision. Consider autonomous car features, such as brakes that apply without the driver touching the pedal, or steering that pulls the car back between the lane lines.

Speaking of AI fears, we have "emergent behavior," which is when an AI model exhibits unintended abilities.

Most AI tools warn against assuming that the answer given is 100% correct. A "hallucination" is an incorrect response from an AI. Even the AI's creators don't entirely know why this happens.

"Weak AI, AKA "narrow AI" is focused on a particular task and can't learn beyond its skill set. As marvelous as image creating AI can be, it has just one task.

A test in which a model must complete a task without being given the requisite training data is called "zero-shot learning" - for example, an AI trained to identify cars being asked to recognize vans, pickup trucks, or tractor-trailers.

More terms and some reviews of chatbots are at cnet.com/tech/services-and-software/