Consider this a bit of science fiction, but don’t leave it at that, because we’re not that far from any of this really working out. The basic premise is this: imagine a whole new “layer” to our human/environment operating system such that we apply “context” to more things. I know it sounds abstract, so let me go into some examples for you to think about this in action.
Chris walks into the coffeeshop. His wristwatch glows blue when it detects a WiFi network. Software scans the available networks for "free, allied, or unknown," in that order. There are no free networks in range, so Chris's context engine sends a question to his wristwatch (only because Chris is likely still looking at that screen; the same message could go to his phone, tablet, etc.). Question: "Cost to connect: $3.99 for 2 hours. Yes/no?" Chris clicks "yes" on his watch and the context engine handles the payment.
The same transaction today takes a LOT more hassle, and yet the scenario above isn't undoable. I didn't use anything more than a program, a set of rules, and perhaps Bluetooth to accomplish it.
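That program-plus-rules could be as simple as a priority list. Here's a minimal sketch; the network records, the allied-network set, and the prompt wording are all hypothetical stand-ins:

```python
# Minimal sketch of the "free, allied, or unknown" network rule.
# All network data and the allied set are invented for illustration.

def pick_network(networks, allied):
    """Pick a network in priority order: free first, then allied, then unknown.

    Returns (network, question): question is None when no confirmation
    is needed (a free network), or a yes/no prompt for a paid one.
    """
    free = [n for n in networks if n["cost"] == 0]
    if free:
        return free[0], None  # connect silently, nothing to ask
    paid_allied = [n for n in networks if n["ssid"] in allied]
    candidates = paid_allied or networks
    if not candidates:
        return None, None  # nothing in range at all
    n = candidates[0]
    # No free network in range: the engine must ask before spending money.
    question = f"Cost to connect: ${n['cost']:.2f} for {n['hours']} hours. Yes/no?"
    return n, question

networks = [{"ssid": "CoffeeShopNet", "cost": 3.99, "hours": 2}]
chosen, prompt = pick_network(networks, allied={"HomeISP-Partner"})
```

A "yes" click would then hand `chosen` off to whatever payment routine the engine trusts.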
Chris’s context engine has 3 favorite orders for this particular coffeeshop chain. He clicks the second option on his phone, and waits to collect it, making smalltalk with the server. Meanwhile, the context engine has noticed that 5 friends within 5 minutes’ distance of the coffeeshop have revealed status and location information favorable for a visit. The engine offers up a “meetup” option, with checkboxes next to each person’s name. Chris selects 3 of the 5 and invites them by for coffee and a chat.
Why should I have to do all the ping/acknowledge to get that done? Couldn’t we have robust status databases that our devices could ping to determine all this? Sure!
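That status database could answer the meetup question with a single filter. A sketch, assuming friends publish a status and a travel time; the names, fields, and 5-minute threshold are all made up:

```python
# Sketch: filter a shared status database for meetup candidates.
# Friend records and the "available" status value are illustrative.

def meetup_candidates(friends, max_minutes=5):
    """Return the names of friends who are close by and open to an invite."""
    return [
        f["name"]
        for f in friends
        if f["minutes_away"] <= max_minutes and f["status"] == "available"
    ]

friends = [
    {"name": "Dana", "minutes_away": 3, "status": "available"},
    {"name": "Lee", "minutes_away": 4, "status": "busy"},
    {"name": "Sam", "minutes_away": 12, "status": "available"},
]
# The engine would show checkboxes for these names; Chris picks whom to invite.
nearby = meetup_candidates(friends)
```

The hard part isn't the query; it's getting everyone to publish status in a form a stranger's device can read, which is where the standards question at the end of this post comes in.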
Chris leaves the coffeeshop after talking with his friends a while, and heads to the bookstore down the street. The context engine reads the storefront’s RFID and launches an auto-wiki with Chris’s latest books of interest, recommendations his friends have blogged or tweeted lately, and comparison-shopping data gathered via API from 3 online stores. The context engine grabs a store map and highlights where the first 5 books Chris might choose are located, just in case.
There’s nothing stopping these transactions from happening. With more places having databases and RFID, why not publish locative storefront data? Why not make it easier for me? And auto-wikis? Makes sense if they can somehow be human-curated. I’m thinking of Mahalo, in this case, only a little more…spontaneous and flexible.
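The storefront step is really just a trigger plus a few data pulls stitched onto one page. A sketch, with every data source (interests, friend posts, the per-store price lookup) standing in for real APIs that don't exist yet:

```python
# Sketch of an RFID-triggered auto-wiki page. Every data source here
# (interest list, friend posts, price_lookup callable) is a hypothetical
# stand-in for the storefront and online-store APIs the post imagines.

def build_auto_wiki(store_id, interests, friend_posts, price_lookup):
    """Assemble a one-screen page for the store Chris just walked up to."""
    page = {"store": store_id, "sections": []}
    # Section 1: the first few books Chris has flagged as interesting.
    page["sections"].append(("Your interests", interests[:5]))
    # Section 2: anything friends have blogged/tweeted about this store.
    recs = [p for p in friend_posts if p["store_match"] == store_id]
    page["sections"].append(("Friends recommend", recs))
    # Section 3: comparison shopping; ask each online store for a price.
    prices = {title: price_lookup(title) for title in interests[:5]}
    page["sections"].append(("Price check", prices))
    return page
```

A human-curated layer (the Mahalo comparison) would just mean editors ranking or pruning these sections rather than the engine showing raw feeds.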
The Point of All This
We’re not using the power of data, and we’re certainly not using devices that are aware of what’s around us. We use GPS for directions, but stop there. We use lots of manual effort where we could have something that makes an “easy” layer on some of our more common transactions.
Without a lot of heavy lifting, but with some innovation, we could make things happen that might really change how we interact with the physical world, using data, networks, and context software (probably what we’d call “business intelligence”). And in that world, it is still the connectors who will be helpful to the process.
We’ll need strong security to protect this context data, especially if you consider some of the other neat ways it could be used (giving us real-time awareness of our spending patterns, releasing GPS location data, etc.). We need lightweight databases (LDAP?), and we need some kind of context rules engine. We need better APIs for our user interfaces, and wider adoption of standards to make passing data easier.
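That "context rules engine" could start as nothing fancier than a list of (condition, action) pairs evaluated against whatever the device currently senses. A toy sketch; the rule conditions, the context fields, and the action strings are all illustrative:

```python
# Toy context rules engine: each rule is a (condition, action) pair run
# against the current context dict. Every rule and field is illustrative.

class ContextEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action):
        """condition and action are both callables taking the context dict."""
        self.rules.append((condition, action))

    def tick(self, context):
        """Run every rule whose condition matches; return the actions fired."""
        fired = []
        for condition, action in self.rules:
            if condition(context):
                fired.append(action(context))
        return fired

engine = ContextEngine()
# Rule 1: no free WiFi in range -> ask before paying (the coffeeshop scene).
engine.add_rule(
    lambda ctx: ctx.get("wifi_free") is False,
    lambda ctx: f"ask: pay ${ctx['wifi_cost']} to connect?",
)
# Rule 2: enough friends nearby -> offer the meetup checkboxes.
engine.add_rule(
    lambda ctx: ctx.get("nearby_friends", 0) >= 3,
    lambda ctx: "offer: meetup checkboxes",
)
```

Every scenario in this post is just a different rule set plugged into the same loop, which is why the standards question matters more than the engine itself.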
What do you think?
What am I missing? What could you add? What scenarios can you see this empowering?