Disclaimer: This post contains speculation about what an upcoming product may or may not contain. There are no specific commitments made for features or dates, nor are there any peer-reviewed claims or recommendations being made (though we would love to partner with a team that can test with us during development!). Please do not infer that any specific therapeutic or educational recommendations are present in this post.
Imagine scenario 1:
One child is holding a plushie toy banana. A second is holding a plushie toy monkey (yes, Jane Goodall fans, the illustration has a tail, so it isn't a chimp!). Some devices off to the side say, out loud:
- Luke, raise your monkey higher than the banana.
- Prudence, make the banana spin around.
- Luke, Prudence, race the banana and monkey to the wall and back, let’s see who wins!
- If it is your turn next, move your toy in a circle. If not, hold your toy still.
Wouldn’t an app and toys like this be amazing? Children interacting with one another through gesture and movement, helped by mobile devices, without looking at the screens! Do you see some social lesson opportunities here? Teaching of spatial relationships? Possible ESL or other language lessons? Turn taking? And what if there were more than two children?
Imagine scenario 2:
A child is working on specific arm exercises: raising and lowering. The child takes his or her Gazintu, customized with cool personalized stickers. The therapist starts the app on a tablet and tells the child to slowly raise their Gazintu with their right hand as high in the air as they can. When the child reaches the target height, the Gazintu provides sensory feedback by clicking or buzzing.
- The app remembers how fast and how high the child went last time and can offer encouragement.
- Exercises could be coordinated as social activities or movement games to stave off boredom.
- No screens have to be viewed by the child, but they could be if necessary for the activity (for example, showing the child the correct form before asking them to do it).
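The target-height exercise could be sketched as a simple feedback loop. Everything below is hypothetical: there is no Gazintu API yet, so the function name, the list of height samples standing in for sensor readings, and the target value are all invented for illustration.

```python
def height_exercise(readings, target_height_m=1.2):
    """Scan a sequence of height samples (in meters) from a
    hypothetical motion sensor. When a sample reaches the target,
    the device would click or buzz; return (peak_height, index at
    which feedback fired, or None if the target was never reached)."""
    peak = 0.0
    for i, h in enumerate(readings):
        peak = max(peak, h)
        if h >= target_height_m:
            return peak, i  # the device would buzz here
    return peak, None  # target not reached; offer encouragement, retry
```

In a real device the early return is where the click or buzz would fire, and the recorded peak is what the "remembers how high the child did it last time" feature would store between sessions.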
Imagine scenario 3:
A group of children with extreme shyness or other social challenges are brought into a group language activity in the same room. Each child starts an app on their tablet, puts the tablet down on a table, and picks up their customized Gazintu:
- One of the tablets, coordinated in the same activity, begins to read a “mad libs” style story.
- “Michaela walked up to the house, but the door was closed. She _____-ed at the door.” Then, Michaela would make a knocking motion with her Gazintu and would get feedback.
- “Michaela walked into the house, and Brendon followed her.” Brendon would physically follow Michaela, and since the Gazintus would be aware of each other’s locations relative to one another, the app would know whether Brendon followed.
- “Raise the roof break!” All of a sudden, the tablet would tell everyone to “raise the roof,” and all of the kids would have to jump up and down with their Gazintus in the air!
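Under our assumptions, the "did Brendon follow?" check could reduce to comparing the two toys' relative positions over time. This is a hypothetical sketch: the (x, y) tuples stand in for whatever relative-location data the devices might eventually share, and the gap threshold is invented.

```python
import math

def followed(leader_path, follower_path, max_gap_m=1.5):
    """Hypothetical check: did the follower stay within max_gap_m
    of the leader at every sampled moment? Each path is a list of
    (x, y) positions in meters, sampled at the same instants."""
    for (lx, ly), (fx, fy) in zip(leader_path, follower_path):
        if math.hypot(lx - fx, ly - fy) > max_gap_m:
            return False  # the follower fell too far behind
    return True
```

The same position data would also cover scenario 1's race to the wall and back, which is part of why location awareness is on the capability list below.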
Lots of work ahead
While each of these scenarios (and extensions to them) is a target of Gazintu capability, only our development in the coming months, along with input and testing from dedicated partners and friends, will get us there. But there are some key things going on here that we hope to bring to fruition:
- Small, inexpensive devices (Gazintu) that can be used alone or inserted into small toys.
- Gazintu that have some level of knowledge of the physical location of other Gazintu in the room.
- Activities that are fun, tested, and relevant, and that don’t use the screen for interaction when it isn’t necessary.
- Oh, and some plain old fun games as “rewards” for kids, teachers, and therapists when it is time to just have plain old fun.
Wait, didn’t you use the term “integrated reality” in the title of this post?
Why yes, we did. And we taught you what it means without even telling you we did. In a nutshell:
- Virtual Reality is bringing artificial visual, auditory, and touch simulations to a person.
- Augmented Reality is adding the ability, through display-glasses for example, to see and interact with the “real world,” with virtual projections and objects appearing in the scene.
In both of these cases, the virtual world is brought to the user primarily, but not exclusively, through inputs and stimulation coming from the virtual world “toward” the person. (Yes, the person can interact with the world too. But the focus is on being “enveloped” in a virtual experience or adding a virtual visual experience to the real world.)
In the case of Integrated Reality as we are defining it, the primary goal is to integrate existing, common objects, manipulated with the same gestures and movements these objects have always been manipulated with, even without technology. Then, the gestures and movements of these objects impact a virtual experience like a game (an audio game, a video game, a therapy session, etc.). One key difference is that, for the most part, in Integrated Reality, the inputs and “data flow” go from a “real object” into a virtual experience.
Integrated Reality takes sophisticated sensors, imparts their power to everyday objects, and allows people to work with the objects they have always used in a virtual or gaming context.
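One way to picture that real-to-virtual data flow is as an event pipeline: gestures detected on a physical object are dispatched into a virtual activity that reacts to them. This is only a sketch of the idea, not the product's architecture; every class and name below is invented.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GestureEvent:
    """A gesture detected on a physical object (hypothetical)."""
    device_id: str
    gesture: str  # e.g. "raise", "spin", "knock"

class VirtualExperience:
    """Minimal sketch of the Integrated Reality flow: events travel
    from real objects *into* the virtual activity, which reacts."""

    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[GestureEvent], None]] = {}
        self.log: List[str] = []

    def on(self, gesture: str, handler: Callable[[GestureEvent], None]) -> None:
        """Register a reaction for one kind of physical gesture."""
        self.handlers[gesture] = handler

    def dispatch(self, event: GestureEvent) -> None:
        """Feed a real-world gesture into the virtual experience."""
        handler = self.handlers.get(event.gesture)
        if handler:
            handler(event)

# Usage: a "knock" gesture from a toy advances the story.
game = VirtualExperience()
game.on("knock", lambda e: game.log.append(f"{e.device_id} knocked"))
game.dispatch(GestureEvent("michaela-gazintu", "knock"))
```

Note that the virtual side only reacts; the flow starts at the physical object, which is the directional difference from VR and AR described above.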