- Using sensors and existing data sources to acquire and update the model
- The precision and accuracy of that model relative to ground truth (i.e., the real world)
- Coming up with standard representations that can be shared across multiple apps
- Controlling how the world model is shared with others
- Mediating the conflict between high-fidelity models and privacy
The last two points are especially interesting, and point to a larger question for ubicomp. It seems that the more reliable and more fine-grained ubicomp world models become, the less inherent plausible deniability there is. Imagine if you could no longer tell white lies over the cell phone about where you were or what you were doing. In a perfect system, there is no place to hide.
Of course I'm pushing an extreme case, but here's another way of thinking about it. Perhaps we should build ubicomp systems to have some inherent level of ambiguity in them, as one way of managing the privacy issues that will inevitably arise.
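One concrete way to build that ambiguity in is spatial cloaking: snap every reported location to a coarse grid cell, so any report is consistent with many possible true positions. The sketch below is a minimal illustration of this idea, not a method from the original text; the function name `cloak_location` and the grid size are assumptions for the example.

```python
import math

def cloak_location(lat: float, lon: float, cell_deg: float = 0.01):
    """Snap a coordinate to the center of a coarse grid cell.

    Rounding to ~0.01 degrees (roughly 1 km at mid-latitudes) means a
    reported position is consistent with many true positions inside the
    cell, preserving some plausible deniability by design.
    Hypothetical helper for illustration only.
    """
    def snap(value: float) -> float:
        return (math.floor(value / cell_deg) + 0.5) * cell_deg
    return (snap(lat), snap(lon))

# Two nearby true positions cloak to the same reported cell,
# so an observer cannot distinguish them from the report alone.
a = cloak_location(47.65321, -122.30578)
b = cloak_location(47.65288, -122.30901)
```

The grid size becomes a policy knob: a finer grid favors fidelity, a coarser one favors privacy, making the trade-off explicit rather than accidental.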