Event-Based Internet

Event-based Internet is going to be something you will hear more about this year. Though I had heard the term used, the first real application of it that I experienced was a game. But don't think this is all about fun and games. Look online and you will find examples of event-based Internet biosurveillance and event-based Internet robot teleoperation systems and other very sophisticated uses, especially connected to the Internet of Things (IoT).

What did more than a million people do this past Sunday night at 9pm ET? They tuned in to HQ Trivia, a live game show, on their phones.

For a few generations that have become used to time-shifting their viewing, this real-time game is a switch. 

The HQ app has had early issues scaling to those big numbers, with game delays, video lag and times when the game simply had to be rebooted. But it already has at least one imitator, called "The Q," which looks almost identical in design, and imitation is supposed to be a form of flattery.

This 12-question trivia quiz has cash prizes. Usually, the prize is $2,000, but sometimes it jumps to $10K or $20K. But since multiple players survive all 12 questions and split the pot, individual prizes are often less than $25.

Still, I see the show's potential. (Is it actually a "show"?) The business model? Sponsors, commercial breaks, and product placement in the questions, answers and banter between questions.

The bigger trend here is that this is a return to TV "appointment viewing."  Advertisers like that and it only really occurs these days with sports, some news and award shows. (HQ pulled in its first audience of more than a million Sunday during the Golden Globe Awards, so...) 

And is there some education connection in all this?  Event-based Internet, like its TV equivalent, is engaging. Could it bring back "The Disconnected" learner?  

I found a NASA report on "Lessons Learned from Real-Time, Event-Based Internet Science Communications."  This report is focused on sharing science activities in real-time in order to involve and engage students and the public about science.

Event-based distributed systems are being used in areas such as enterprise management, information dissemination, finance, environmental monitoring and geo-spatial systems.
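In plain terms, "event-based" software reacts to events as they happen rather than polling on a schedule. A minimal sketch of the publish/subscribe pattern that underlies such systems follows; the class, event names and threshold are invented for illustration, not taken from any particular framework.

```python
from collections import defaultdict

class EventBus:
    """A toy in-process event bus. Real distributed systems do this
    over a network, typically through a message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Register a callback to run whenever this event type is published.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscriber as it happens.
        for handler in self._subscribers[event_type]:
            handler(payload)

# Example: an environmental-monitoring style alert.
bus = EventBus()
alerts = []
bus.subscribe("sensor.reading",
              lambda p: alerts.append(p) if p["value"] > 100 else None)
bus.publish("sensor.reading", {"sensor": "river-gauge-7", "value": 142})
print(alerts)  # the high reading was captured as an alert
```

Because subscribers only run when an event fires, the same model scales up to things like pushing a live trivia question to a million phones at once.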

Education has been "event-based" for hundreds of years. But learners have been time-shifting learning via distance education and especially via online learning for only a few decades. Event-based learning sounds a bit like hybrid or blended learning. But one difference is that learners are probably not going to tune in and be engaged with just a live lecture. Will it take a real event and maybe even gamification to get live learning? 

In all my years teaching online, I have never been able to have all of a course's students attend a "live" session, whether because of time zone differences, work schedules or perhaps content that just wasn't compelling enough.

What will "Event-based Learning" look like?

Does Education Have a 'Next Billion?'

"Next Billion" is a term you will find used in talking about the future of the internet. It refers not only to the exponential growth in connectivity in emerging markets, such as India, but also to the growth of next-level technology in more mature markets.

One thing that is evident is that the next billion internet users are much more likely to be using mobile phones than computers. Globally, half of all internet users got online in February 2017 using mobile devices. It is still a close race, with 45% accessing the web on laptops or desktop computers, but break out the numbers for emerging markets, like India, and mobile wins easily. In India and other countries that did not have wired infrastructure in place for Net connectivity, and did not have a population able to purchase computers, mobile and wireless are the only choice. Indians accessed the internet through their mobiles nearly 80% of the time.

This is also changing the way providers, carriers, phone manufacturers and related companies (such as Google/Alphabet) design their products.

For example, the emerging next billion tends not to type searches, emails, or even text messages. These newcomers avoid text, preferring voice activation and communicating with images. Part of this is due to unfamiliarity with the devices, and part is due to lower levels of education and literacy. They are using low-end smartphones (Android dominates) and cheap data plans along with the most intuitive apps that let them navigate easily.

What does this have to do with education?

My first thought is that even if your students are part of the "first billion" population, delivery of learning online needs to seriously address mobile use, and the user interfaces need to be intuitive and less text-based.

My second thought is that educational providers, especially post-secondary, need to be prepared for the next billion learners who will not be coming to them in the same ways, or with the same goals, or with the same devices. When I say "educational providers," I am thinking of much more than schools and universities.

No doubt some of this has already been taking place through online learning and especially with the rise of Massive Open Online Courses (MOOC) and Open Educational Resources (OER), but the pathways are not even well established for the first billion, and certainly not for the next billion.

The Augmented Reality of Pokémon Go

People have been searching for creatures and running down their phone batteries this month since Pokémon Go was released.
Is there any connection between this technology and education, Ken? Let's see.

First off, Pokémon Go is a smartphone game that uses your phone’s GPS and clock to detect where and when you are in the game and make Pokémon creatures appear around you on the screen. The objective is to go and catch them.
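As a rough sketch of the location check such a game might perform, the code below compares a player's GPS fix to a creature's spawn point and "shows" the creature only within some radius. The coordinates, radius and function names are hypothetical, not Niantic's actual logic.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6,371 km

def creature_visible(player, spawn, radius_m=70):
    # Only reveal the creature when the player is close enough.
    return haversine_m(*player, *spawn) <= radius_m

player = (40.7420, -74.1790)   # hypothetical player GPS fix
spawn = (40.7423, -74.1786)    # spawn point roughly 50 m away
print(creature_visible(player, spawn))  # prints True
```

The phone's clock enters the same way: a real implementation would also check the time of day before deciding which creatures to spawn.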

This combination of a game and the real world interacting is known as augmented reality (AR). AR is often confused with VR - virtual reality. VR creates a totally artificial environment, while augmented reality uses the existing environment and overlays new information on top of it.

The term augmented reality goes back to 1990 and a Boeing researcher, Thomas Caudell, who used it to describe the use of head-mounted displays by electricians assembling complicated wiring harnesses.

A commercial application of AR technology that most people have seen is the yellow "first down" line on televised football games, which, of course, is not on the actual field.

Google Glass and "heads-up" displays in car windshields are other consumer AR applications. There are many more uses of the technology in industries like healthcare, public safety, gas and oil, tourism and marketing.

Back to the game... My son played the card game and handheld video versions 20 years ago, so I had a bit of Pokémon education. I read that it is based on the hobby of bug catching, which is apparently popular in Japan, where the games originated. Like bug catching or birding, the goal is to capture bugs, spot birds, or catch virtual Pokémon creatures and add them to your life list. The first generation of Pokémon games began with 151 creatures and has expanded to 700+, but so far only the original 151 are available in the Pokémon Go app.

I have seen a number of news reports about people doing silly, distracted things while playing the game, along with more sinister tales of people being lured somewhere by a planted creature, or riding a bike or driving while playing. (The app has a feature that tries to stop you from using it while moving quickly, as in a car.)

Thinking about educational applications for the game itself doesn't yield anything for me. Although it does require you to explore your real-world environment, the objective is frivolous. So, what we should consider is the use of AR in education beyond the game, while appreciating that the gaming aspect of the app is what drives its appeal and should be used as a motivator for more educational uses.

The easiest use of AR in college classrooms is to make use of the apps already out there in industries. Students in an engineering major should certainly be comfortable with understanding and using AR from their field. In the illustration above, software (metaio Engineer) allows someone to see an overlay visualization of future facilities within the current environment. Another application is having work and maintenance instructions appear directly on a component when it is viewed.
Augmented reality can be a virtual world, even an MMO game. The past year we have heard more about virtual reality and VR headsets and goggles (like Oculus Rift), which are more immersive, but also more awkward to use. This immersiveness is an older concept, and some readers may recall the use of the term "telepresence."

Telepresence referred to a set of technologies that allowed a person to feel as if they were present, to give the appearance of being present, or to have some impact at a place other than their true location. Telerobotics does this, but more commonly it was the move from videotelephony to videoconferencing. Those applications have been around since the end of the last century, and we have come a good way forward from traditional videoconferencing to doing it with hand-held mobile devices, enabling collaboration independent of location.

In education, we experimented with these applications and with the software for MMOs, mirror worlds, augmented reality, lifelogging, and products like Second Life. Pokémon Go is Second Life but now there is no avatar to represent us. We are in the game and the game is the world around us, augmented as needed. The world of the game is the world.

Cognizant Computing in Your Pocket (or on your wrist)

Two years ago, I wrote about the prediction that your ever-smarter phone will be smarter than you by 2017. We are halfway there, and I still feel superior to my phone, though I admit that it remembers things that I can't seem to retain, like my appointments, phone numbers, birthdays and such.

The image I used on that post was a watch/phone from The Jetsons TV show, which today might make you think of the Apple Watch, connected to that ever-smarter phone.

But the idea of cognizant computing is more about a device having knowledge of or being aware of your personal experiences and using that in its calculations. Smartphones will soon be able to predict a consumer’s next move, their next purchase or interpret actions based on what it knows, according to Gartner, Inc.

This insight will be based on an individual’s data, gathered using cognizant computing — "the next step in personal cloud computing."

“Smartphones are becoming smarter, and will be smarter than you by 2017,” said Carolina Milanesi, Research Vice President at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague."

The device will gather contextual information from your calendar, its sensors, your location and all the personal data you allow it to gather. You may not even be aware of some of the data it is gathering. And that's what scares some people.

When your phone became less important for making phone calls and added apps, a camera, location services and sensors, the lines between utility, social, knowledge, entertainment and productivity got very blurry.

But does it have anything to do with learning?

Researchers at Pennsylvania State University have already announced plans to test the classroom usefulness of eight Apple Watches this summer.

Back in the 1980s, there was much talk about Artificial Intelligence (AI). Researchers were going to figure out how we (well, really how "experts") do what they do and reduce those tasks to a set of rules that a computer could follow. The computer could be that expert. The machine would be able to diagnose disease, translate languages, even figure out what we wanted but didn’t know we wanted. 

AI got lots of VC dollars thrown at it. But it was not much of a success.

Part of the (partial) failure can be attributed to a lack of computer processing power at the right price to accomplish those ambitious goals. The increase in power, drop in prices and the emergence of the cloud may have made the time for AI closer.

Still, I am not excited when I hear that this next phase will allow "services and advertising to be automatically tailored to consumer demands."

Gartner released a newer report on cognizant computing that continues the idea of it being among "the strongest forces in consumer-focused IT" in the next few years.

Mobile devices, mobile apps, wearables, networking, services and the cloud are going to change educational use too, though I don't think anyone has any clear predictions.

Does more data make things smarter? Sometimes.

Will the Internet of Things and big data converge with analytics and make things smarter? Yes.

Is smarter better? When I started in education 40 years ago, I would have quickly answered "yes," but my answer is less certain these days.


In 4 Years Your Phone Will Be Smarter Than You (and the rise of cognizant computing)

Your smartphone will be smarter than you by the year 2017. That is from an analysis by market research firm Gartner. It won't have much to do with hardware; it will come from the data and computational ability in the cloud. Phones will appear smarter than you, if you equate smarts with being able to recall information and make inferences. It was part of a discussion of smart devices at Gartner Symposium/ITxpo 2013, November 10-14 in Barcelona.

What made mobile phones smartphones was new tech and apps. Cameras, location services and sensors, tied into apps and social interactions, have been the biggest trend of the past 5 years. The easier things are already in place: scheduling, sending out reminders, letting you know what friends are doing or where they are, alerting you to things in your vicinity.

A newer trend is having phones that predict your next action based on personal data already gathered. This is called cognizant computing and many people see it as the next step in personal cloud computing.

Carolina Milanesi, research vice president at Gartner, says “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data.”
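A toy version of that wake-up rule might look like the sketch below. The data sources, function name and prep-time threshold are invented for illustration; a real assistant would pull commute and traffic figures from mapping and calendar services.

```python
from datetime import datetime, timedelta

def adjusted_alarm(meeting, usual_commute_min, traffic_delay_min, prep_min=45):
    """Move the alarm earlier when traffic adds delay to the usual commute."""
    commute = timedelta(minutes=usual_commute_min + traffic_delay_min)
    return meeting - commute - timedelta(minutes=prep_min)

meeting = datetime(2017, 3, 6, 9, 0)   # hypothetical 9:00 am meeting
print(adjusted_alarm(meeting, 30, 0))   # normal traffic: 7:45 alarm
print(adjusted_alarm(meeting, 30, 25))  # heavy traffic: 7:20 alarm
```

The hard part, of course, is not the arithmetic but gathering the contextual data reliably and getting the user's permission to act on it.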

Of course, allowing your phone to do these things is part of the equation. And not everyone is okay with granting permissions to apps, opening up their data and feeling confident in allowing apps and services to take control of aspects of their lives.

This idea of cognizant computing is said to occur in 4 phases. Those phases (according to Gartner) are sync me, see me, know me and be me.

Sync me is familiar to users and probably appreciated: store copies of digital assets and sync them across devices. So, my iPhone knows what my iPad knows and my cloud documents are on all my devices, including several laptops.

See me is here in its early stages and means devices can track history and context. The phone knows where I am now and where I have been.

Using the data from those two phases (which many of us have granted permissions for), phones can move to phases 3 and 4. That's when things get a bit scary for some people. When my phone "knows me," it can act proactively. Do I want to purchase something now, based on my earlier spending habits?

And, taking it a step further, how much do I want my device to "be me" and act on my behalf? It will pay my bills. It will send selected friends and relatives birthday greetings and pick out gifts. (After all, I have tied my wife's purchases to my account, and it knows where she likes to shop and what she likes to buy.)

Scary? Or are you happy to let that little package of power make your life "easier"?

I still haven't gotten my jetpack or flying car, but I might get some cousin of The Jetsons' Rosie that can slip into my pocket - and into my life - quite easily.