Are We at Web 3.0 Yet?

The term “Web 2.0” was popularized by Tim O'Reilly and Dale Dougherty at the O'Reilly Media Web 2.0 Conference in late 2004. O'Reilly defined it not as a change in the technical framework of the Internet but as a shift in the design and use of websites. The shift was a move away from websites that offered a passive user experience toward ones that gave users a more active experience: the ability to interact and collaborate through social media dialogue and to act as creators of user-generated content.

When I wrote a piece here called "From Web 2.0 to Web 4.0" in December 2019, it was inspired by an article online about "Web 4.0" that made me wonder whether we had jumped over Web 3.0.

Web 2.0 websites allowed users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to viewing content passively. Web 2.0 examples include social networking or social media sites (Twitter, Facebook, Instagram, Tumblr, et al.), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video-sharing sites (YouTube, Vimeo), image-sharing sites (Pinterest, Flickr), some web apps, collaborative platforms, and mashups of multiple applications.

World Wide Web inventor Tim Berners-Lee questioned whether Web 2.0 was substantially different from the earlier Web technologies. He said that his original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write." Berners-Lee coined the term "semantic web" at the start of this century, and that idea has sometimes come to be called Web 3.0. Berners-Lee meant "semantic" to refer to a web of content whose meaning can be processed by machines. (archived version of his article)

Semantics refers to the philosophical study of meaning, but semantics also comes up in discussions about search technology. Google, Siri, and Alexa use semantic search technology. In that application, the idea is to answer user questions rather than merely search on a string of keywords. I can ask those applications a question like "What time is sunset tonight?" or "What is the zip code for Montclair, New Jersey?" though I could earlier have asked a search engine "zipcode Montclair NJ" and gotten an answer. Now, when I ask what time sunset is, the app knows where I am, so the answer is location-based.

In 2013, I wrote about Siri and the semantic web and said "We are not at the point where you can ask 'What would I like for dinner tonight?' and expect an answer." That might change as AI plays a larger role in search and other web operations. Semantic search is a data searching technique in which a search query aims not only to find keywords but also to determine the intent and contextual meaning of the words a person is using to search.
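To make that distinction concrete, here is a toy sketch of my own (the function names and data are invented for illustration, not how Google or Siri actually work) contrasting plain keyword matching with an intent- and context-aware answer:

```python
# Toy sketch: keyword search vs. "semantic" (intent + context) answering.
def keyword_search(query, pages):
    """Return pages that share at least one word with the query."""
    terms = set(query.lower().split())
    return [p for p in pages if terms & set(p.lower().split())]

def semantic_answer(query, context):
    """Guess the user's intent and answer using context (e.g., location)."""
    q = query.lower()
    if "sunset" in q:
        # A real assistant would infer location from the device, not a dict.
        return f"Sunset in {context['location']} is at {context['sunset']}"
    return None  # intent not recognized

pages = ["zip code Montclair NJ 07042", "sunrise and sunset times"]
print(keyword_search("zipcode Montclair NJ", pages))
print(semantic_answer("What time is sunset tonight?",
                      {"location": "Montclair, NJ", "sunset": "7:42 PM"}))
```

The keyword version just hunts down matching words; the "semantic" version answers the question the user actually asked, using context the user never typed.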

Will Your Instructional Designer Be AI?

Recently, I read an article about using artificial intelligence (AI) for the instructional design of courses. Initially, that frightened me. First, it might mean less work for instructional designers, a role I have held myself and one I have supervised as a department head. Second, it's hard for me to imagine AI making better decisions about pedagogy than a designer and faculty member.

Of course, using AI for that kind of design is probably limited (at least at first) to automating some tasks like uploading documents and updating calendars rather than creating lessons. Then again, I know that AI is being used to write articles for online and print publications, so it is certainly possible.

I read another article asking “Is Artificial Intelligence the Next Stepping Stone for Web Designers?” and, of course, my concerns are the same – lost jobs and bad design.

Certainly, we are already using AI in websites, particularly in e-commerce applications. But using AI to actually design a website is very different.

Some companies have started to use AI for web design. A user answers some questions to start a design: pick an industry or category (portfolio, restaurant, etc.), enter a business name, add a subtitle/slogan/brand, upload a logo, enter an address, hours of operation, and so on. The AI may offer you a choice of templates and then in a few clicks, the basics of the site are created.

This is an extension of the shift 20 years ago to template-driven web design. Now it is based on machine learning techniques, with human intervention at the initial stage, when users provide their information, and probably again after the site is created, to fine-tune it.

In my own instructional design work over the years, we have used templates for course shells. Standardizing the way courses look is a good thing in many ways. It makes it easier to do rapid development. That was certainly the situation in spring 2020 as schools scrambled to move all their face-to-face courses online. A standard look also makes it easier for students to move from course to course.

Though every course should not be the same, the structure and components can generally be the same. This is also useful if you are trying to have courses comply with standards such as Quality Matters or ADA accessibility standards.

I do a lot of web design these days, and many popular companies, such as Squarespace, are using AI and machine learning to get ordinary users started. Does design still require some human intervention? Absolutely. Does the human need to be a “designer”? Clearly, the goal is to allow anyone to do a good job of creating a website without a designer.

I think there is an overlap between web design and course design. Add AI to either and the process can be made more efficient. I also think that you need people involved. For web design, it's a client and a designer. For course design, it's a faculty member (or members) and an ID. In my own work, I still find many people need someone with experience and training to create the website, but they can oftentimes maintain it on their own if the updates are simple. For courses, most faculty need help to create but generally not only can "maintain" the course but have to, because the IDs can't always be there during a semester.

AI will change many industries and web and instructional design are certainly on the list of those industries. 

On the Road to Learning With a GPS

While I was driving in an unfamiliar neighborhood this week using my GPS, I started thinking about how great it would be if there were something like a GPS for learning.

Of course, there is adaptive learning and adaptive teaching. That is the idea of delivering a custom learning experience that addresses the unique needs of an individual. It does that by using just-in-time feedback, pathways, and a library of resources. This is not a one-size-fits-all learning experience.
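A minimal sketch of that idea, with names and thresholds I invented purely for illustration (real adaptive platforms use far richer learner models): the next resource is chosen from each learner's latest performance rather than from a fixed sequence.

```python
# Hypothetical adaptive pathway: pick the next resource from the learner's
# score instead of marching through a fixed, one-size-fits-all sequence.
def next_step(topic, score, library):
    if score < 0.6:
        return library[topic]["remediation"]   # just-in-time support
    if score < 0.9:
        return library[topic]["practice"]      # reinforce before moving on
    return library[topic]["next_topic"]        # ready to advance

library = {
    "fractions": {
        "remediation": "video: fraction basics",
        "practice": "worksheet: mixed fractions",
        "next_topic": "lesson: decimals",
    }
}
print(next_step("fractions", 0.5, library))   # struggling learner
print(next_step("fractions", 0.95, library))  # learner ready to advance
```

Two students in the same course get different pathways, which is the core of the "custom learning experience" described above.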

When I was studying education in college, we learned about creating a "roadmap" for learning. That was a long time ago when a paper roadmap was the way to travel. It was not adaptive. The user had to adapt. With the Internet came mapping websites. You put in a starting place and a destination and it finds a route. At first, there were no alternate routes, but when sites like Google Maps became available you could select alternatives. If you wanted to avoid a highway, you could drag the route around it.

Then came the GPS. We tend to call those devices "a GPS," but the Global Positioning System (GPS) is what makes the device work. It was developed to allow military and civil users to determine geographical locations accurately using satellites. Those devices had all those mapping features, plus they went with you in the car and, most importantly, they were adaptive. If you went down the wrong street or a road was blocked, they adapted your route.
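Under the hood, that kind of rerouting is, at its simplest, a shortest-path search run again with the blocked road removed. Here is a toy version (my own illustrative road network, not any navigation vendor's code) using Dijkstra-style search:

```python
import heapq

# Toy road network: rerouting = re-running shortest-path search after
# dropping the blocked road, much as a GPS device recalculates.
def shortest_path(graph, start, goal, blocked=frozenset()):
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if (node, nbr) not in blocked:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None  # no route available

roads = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
print(shortest_path(roads, "A", "D"))                        # best route, via B
print(shortest_path(roads, "A", "D", blocked={("A", "B")}))  # rerouted via C
```

When the A-to-B road is blocked, the same search simply finds the next-best route, which is exactly the adaptive behavior that made GPS devices feel so different from a paper map.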

When Google Maps, Apple Maps, Waze, and other apps became available on smartphones, the makers of GPS devices took a hit. Your phone knows where you are and where you want to go. It redirects you when needed. It gives immediate feedback on your progress and tells you your anticipated next step in advance.

Those first mapping programs weren't exactly what we would call artificial intelligence but today that is what drives mapping programs forward.

My driving notion of an AI/GPS for learning is here, though it's not quite a set-it-and-forget-it device yet. Several companies, such as Smart Sparrow, offer adaptive learning platforms. I know of a school using Pearson's program Aida Calculus, which connects multiple forms of AI to personalize learning. The program teaches students how to solve problems and gives real-world applications. Advanced AI algorithms have entered the education space.

Not every teacher or classroom has access to packaged programs for adaptive learning. In my pre-Internet teaching days, we called this approach individualized instruction, which also focuses on the needs of the individual student. It was a teacher-centered approach that tried to shift teaching toward specific, one-need-at-a-time targets.

Over the years, the terms individualized instruction, differentiated teaching, adaptive learning, and personalized learning have sometimes been used interchangeably. They are all related because they describe learning design that attempts to tailor instruction to the understanding, skills, and interests of an individual learner. Today, it is done through technology, but we can still use human intervention, curriculum design, pathways, and some blend of these.

https://elearningindustry.com/adaptive-learning-for-schools-colleges

https://www.edsurge.com/research/reports/adaptive-learning-close-up

Strong and Weak AI

Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say that it means human-like robots. You might get the answer that it is the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and generally are made for a specific task. When you ask a device to turn on a light or what time it is or to find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking" though the response might seem to be smart (as in many tasks on a smartphone). You are much more likely to encounter weak AI in your daily life.
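Weak AI in miniature might look like the sketch below, which is my own invented example, not any vendor's code: a lookup table of programmed responses with no "thinking" at all, just pattern matching on the request.

```python
# Narrow/weak AI sketch: programmed responses for specific tasks.
# There is no reasoning here, only a table mapping requests to handlers.
COMMANDS = {
    "turn on the light": lambda state: state.update(light=True) or "Light on",
    "what time is it": lambda state: "It is " + state["time"],
}

def assistant(request, state):
    handler = COMMANDS.get(request.lower().strip())
    return handler(state) if handler else "Sorry, I can't do that."

state = {"light": False, "time": "7:30 PM"}
print(assistant("Turn on the light", state))  # handled: a programmed task
print(assistant("Write me a poem", state))    # outside its narrow scope
```

Anything outside the table fails, which is exactly what makes this "narrow": the system can seem smart within its task and is helpless one step beyond it.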

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is “thinking” and "learning" but I would keep those terms in quotation marks. Those definitions of strong AI might also include some discussion of technology that learns and grows over time which brings us to machine learning (ML), which I would consider a subset of AI.

ML algorithms are becoming more sophisticated, and it might excite or frighten you as a user that they are getting to the point where they learn and act based on the data around them. In "unsupervised ML," the algorithm finds patterns in unlabeled data without being explicitly told what to look for. In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today, as the AI requires humans to set up the programming and supply the hardware and its power. I don't fear an AI takeover in the near future.
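As a concrete, stripped-down example of unsupervised learning, here is a bare-bones k-means clustering sketch (a classic unsupervised technique; the data and thresholds are invented for illustration). Nobody labels the data or programs the groups; the algorithm discovers them:

```python
import random

# Bare-bones k-means: an unsupervised algorithm that discovers groups
# in unlabeled data with no explicitly programmed rules for the groups.
def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # start from random guesses
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]  # two obvious groups, but unlabeled
print(kmeans(data, 2))  # the algorithm finds centers near 1.0 and 10.0
```

Scale that idea up from six numbers to millions of records, and you get the pattern-finding described below.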

But strong AI and ML can go through huge amounts of connected data and find useful patterns, including patterns and connections that a human would be unlikely to find. Recently, you may have heard of attempts to use AI to find a coronavirus vaccine. AI can do very tedious, data-heavy, and time-intensive tasks in a much shorter timeframe.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than letting the government know that same data. Consider the level of data we share online doing financial operations, or even just visiting sites, making purchases, and building a search history, and you'll find your level of paranoia rising. I may not know who is reading this article, but I suspect someone else knows and is more interested in knowing than I am.