Integrated Reality

In a tablet or smartphone mobile game, what if you minimized or removed the need for a screen from the picture (pardon the pun)? Huh? How could you have a game without a screen? If you could do this, would the kinds of games it enables change the gaming landscape? (Spoiler alert: Yes.)

Let’s take a step back for a moment and talk about three of the many goals we’ll be exploring in this and upcoming blog posts:

  1. How can we allow participation with physical game components (like small pieces or toys) within a virtual game?
  2. How can we evolve the modes of social interaction during game play beyond the set that exists today?
  3. How can we design accessibility into games in ways that increase interaction, fun, learning, and even possible health benefits?


I posit that what we need is not Virtual Reality (to simplify a definition: stepping into a 100% visual and auditory simulation of reality—think headset and gloves) nor Augmented Reality (to simplify a definition: stepping into a visual and auditory simulation that allows one to see and interact with bits of the real world—think smart glasses). In both of these cases the tech is injecting something into our perception of reality.

Rather, I posit that we are on the cusp of something I have heard bandied about (but certainly not mainstream yet): Integrated Reality. We didn’t coin this term, but I would certainly like to shape what it means. In my current definition, Integrated Reality lets one interact with the real world as one traditionally has, using the patterns and movements of real-world object manipulation as they existed prior to technology. But the opposite of VR or AR occurs: real-world objects and gestures inject action and information into virtual worlds and games.

Versions of this concept have certainly been around for a while in some contexts (think telework: tele-surgery, or “robot” hands controlled by a human to do remote physical work, also “inject” action and information into remote locations, and those locations could be real or simulated). And yes, there are a few game demos relying on screens and 3D to mesh the virtual with the real. But as of yet there really haven’t been many low-cost, mass-consumer ways to take everyday objects and integrate them into virtual worlds by using them the same way they are already used in the real world.

One example where this has actually been done is Osmo. Using an innovative approach, Osmo uses the tablet’s camera plus recognition software to identify the physical game components that kids are manipulating in front of the tablet, and then involves those components in a virtual game. Fun and amazing stuff!
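To make that camera-based pattern a bit more concrete, here is a minimal sketch of the general idea: grab frames from the device camera, look for a known physical token, and turn each sighting into an event the virtual game can react to. To be clear, this is not Osmo’s actual pipeline; the OpenCV color detection and every name below are assumptions purely for illustration.

    # Hypothetical sketch (not Osmo's real pipeline): watch the device camera
    # for a brightly colored physical token and turn each sighting into an
    # event that a virtual game could consume. Uses OpenCV 4.x.
    import cv2

    def find_token(frame):
        """Return the (x, y) center of a red-ish token in the frame, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))  # red hue band
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    def run():
        cap = cv2.VideoCapture(0)  # the tablet's (or laptop's) camera
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            pos = find_token(frame)
            if pos is not None:
                # In a real game this event would update the virtual world.
                print("token_placed", pos)
        cap.release()

    if __name__ == "__main__":
        run()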

One current limitation of this camera-and-screen model is that the players all need to be in very close proximity to the tablet. Another is that the display is integral to the game—you must have access to the screen to play.

What if the players could be spread around a room and engage in the real-world movement that kids have engaged in for millennia? What if the toys and toy components the children use looked and felt the same as always, yet were aware of one another’s locations in that room? What if one or more of the players, using real toys, were located in different physical locations? And what if, by design, the mobile screen played only a small part (or no part at all) in the game?

How could all of that impact play? How could all of that impact accessibility to play, not just for people with visual challenges, but for people with social or developmental or even physical challenges?
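Purely as a thought experiment (and emphatically not a description of how Gazintu works), those “what ifs” might boil down to an event model like the one sketched below: the toys themselves publish position and gesture events into a shared game session, and a screen is just one optional subscriber alongside sound, lights, or haptics. Every name in this sketch is hypothetical.

    # Hypothetical sketch of a screen-optional model: physical toys publish
    # position/gesture events into a shared session, and feedback can come
    # from sound, lights, or haptics rather than a display. All names here
    # are invented for illustration; none describe Gazintu internals.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ToyEvent:
        toy_id: str   # which physical toy acted
        x: float      # estimated position in the room (meters)
        y: float
        gesture: str  # e.g. "shake", "tap", "place"

    class GameSession:
        def __init__(self) -> None:
            self.subscribers: List[Callable[[ToyEvent], None]] = []

        def subscribe(self, handler: Callable[[ToyEvent], None]) -> None:
            self.subscribers.append(handler)

        def publish(self, event: ToyEvent) -> None:
            for handler in self.subscribers:
                handler(event)

    def audio_feedback(event: ToyEvent) -> None:
        # A screen-free reaction: the toy (or a nearby speaker) responds.
        print(f"play sound: {event.toy_id} '{event.gesture}' at ({event.x}, {event.y})")

    session = GameSession()
    session.subscribe(audio_feedback)
    # A toy's sensor/radio stack would call publish(); here we fake one event.
    session.publish(ToyEvent(toy_id="red_car", x=1.2, y=0.4, gesture="tap"))

The point of the publish/subscribe shape is that nothing in it requires a display: audio cues, haptic pulses, or lights plug in exactly the same way a screen would, which is where the accessibility questions above start to get interesting.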

Interesting challenges. Interesting questions. And these are some of the things that Playrific Gazintu is beginning to address. Stay tuned to this blog for some cool discussions on these points.

Thoughts?

Thanks,

Gary

Coming soon: Using Gazintu for PT (physical therapy), OT (occupational therapy), and social-learning multiplayer games
