Design
The Context Management components must provide a basic set of features common to most node communication infrastructures. As far as context is concerned, nodes can be providers of context information, consumers of it, or both. The context information flow between providers and consumers can be either "push" or "pull" based.
The "push" infrastructure must allow publishers to push context information towards interested consumers. For this purpose a subscription mechanism is implemented that delivers published context data to the consumers that have expressed interest in it.
The "pull" infrastructure must allow consumers to ask context providers directly for specific context information. If providers implement their own pull mechanisms (service callees - check their documentation), this interaction is covered by them, but in a manner that is out of the scope of context management. What context management supports is issuing context data requests in a generic manner: the desired data is requested from the context management system instead of directly from the specific provider that originated it. For this purpose the storage of context history and the maintenance of a model representing the current context are required. See the Context Storage section for details.
We decided that the format in which context information is exchanged between consumers and providers is Context Events. The "push" mechanism can forward data in this format without problems; it may not always be suitable for the "pull" mechanism, although following it is still recommended there. See the Context Storage section for descriptions of how context data can be exchanged in this manner.
Data itself is represented by means of ontologies, which follow an OWL-like approach: concepts are represented as classes, which inherit from other classes and have properties linking them to other concepts, and the properties are concepts themselves. There is an ontology for the Context domain that models the Context Event and the infrastructure themselves, plus the possibility to add further ontologies modelling other domains as they are needed.
The Context domain represents how the data is shared: the Context Events (and their metadata). Context is shared using Context Events in the form of S-p-O (reified) statements. Subject and Object can be any ontology concepts of the available domains, but must be linked through one of the Subject's properties, which acts as the predicate. This is what publishers push to the bus and subscribers get from it in the "push" feature required for Context Management.
There are other default domains required by management components at lower or equal layers, describing how data is represented in universAAL, such as a "metamodel" (for example, Service management has its own metamodel, with requests, profiles, and so on). These default domains, mandatory in all nodes, are called the "Upper Ontology", because all additional domains build on top of them. All of this is described in the ontologies documentation; here only the representation of context information is addressed.
To provide context information to the system (to the rest of the interested peers), such information must be properly built into a Context Event. The first and most important step is building the Subject-predicate-Object statement. It is possible to include more statements (more properties with more values) on both Subject and Object to describe them further, and pack these into the event, but only the statement linking Subject, predicate and Object is considered relevant, and only its transmission is fully assured.
This means that when building an event stating User-isInLocation-Kitchen, this statement is the only relevant information the provider wants to transmit, and the only one a subscriber can extract for certain. Further statements describing the User or the Kitchen (such as the user's name or the kitchen temperature) can be packed within the event, "embedded" in the Subject or Object, but the bus gives no guarantee that this information will be delivered, as that depends on other factors such as the serialization level.
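The shape of such an event can be sketched in plain Java. This is a toy model for illustration only, not the universAAL API: the class and field names (`SketchContextEvent`, `embedded`, the example URIs) are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy sketch of a Context Event as a reified S-p-O statement.
 * NOT the universAAL API; all names here are illustrative.
 */
class SketchContextEvent {
    final String subjectUri;   // e.g. an ontology instance of User
    final String predicateUri; // must be one of the Subject's properties
    final String objectUri;    // e.g. an ontology instance of Kitchen

    // Extra statements "embedded" in Subject/Object; their delivery is
    // NOT guaranteed by the bus (it depends on the serialization level).
    final Map<String, String> embedded = new HashMap<>();

    SketchContextEvent(String s, String p, String o) {
        this.subjectUri = s;
        this.predicateUri = p;
        this.objectUri = o;
    }
}

class EventDemo {
    public static void main(String[] args) {
        // The only statement whose transmission is assured:
        SketchContextEvent ev = new SketchContextEvent(
                "urn:ex:User#john", "urn:ex:isInLocation", "urn:ex:Kitchen#k1");
        // Optional descriptive statements, packed but not guaranteed:
        ev.embedded.put("urn:ex:User#john/name", "John");
        ev.embedded.put("urn:ex:Kitchen#k1/temperature", "21.5");
        System.out.println(ev.subjectUri + " " + ev.predicateUri + " " + ev.objectUri);
    }
}
```

A subscriber should therefore rely only on the main triple and treat anything in `embedded` as best-effort extra detail.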
Once the main statement S-p-O has been built, a set of metadata can be added to the Event, like the Timestamp, the Provider information, the confidence and temporal validity.
Regarding the Context Provider Information: it is an ontological description of the provider (the concepts of Provider and Provider Type are included as part of the Context model domain). It can be used as part of the subscription patterns (see below) or later in storage and querying, to discriminate between providers of information (provenance). The Timestamp defines the exact time at which the event was sent. The confidence determines how reliable the information is, and finally the temporal validity declares for how long the information is to be considered valid.
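The metadata listed above can be sketched as a small value class. Again this is an illustrative assumption, not the universAAL API; in particular the expiry rule (timestamp plus validity period) is one plausible reading of "temporal validity".

```java
import java.time.Instant;

/** Toy sketch of Context Event metadata (not the universAAL API). */
class SketchEventMetadata {
    Instant timestamp;    // exact time at which the event was sent
    String providerUri;   // ontological description of the provider (provenance)
    String providerType;  // e.g. "controller" or "reasoner" (illustrative values)
    double confidence;    // how reliable the information is, e.g. 0.0 .. 1.0
    long validityMillis;  // for how long the information is considered valid

    /** The information has expired once its temporal validity has elapsed. */
    boolean isExpired(Instant now) {
        return now.isAfter(timestamp.plusMillis(validityMillis));
    }
}
```

A consumer could use `isExpired` to discard stale events before acting on them.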
To receive Context Events, and only the Context Events a Subscriber is interested in, it must register with the publish/subscribe mechanism with a certain set of Event Patterns. Patterns describe restrictions over Events, specifying which Events match the Subscriber's interests. Any part of the event can be restricted, and the parts not restricted are treated as wildcards. Thus it is possible to receive only events with a certain type of subject, from a certain provider, or with a confidence greater than a certain value.
It is also possible to define restrictions on the Subject and Object over additional properties other than the predicate that links them, but the particularities discussed above must be taken into account. This is also an issue when receiving events: a Subscriber may try to extract such additional information, but there is no certainty that it has been included unless it was explicitly stated in the subscription patterns.
All of the Context metamodel described above (Context Events, Patterns, Metadata and so on) is implemented in the Context Bus, which takes care of the brokerage part of Context Management. This is because the Context metamodel is inherent to the operation of the brokerage (the bus). Information about the Context Bus, its API and how to use it can be found at https://github.com/universAAL/platform/wiki/RD-Managing-Context-Information.
All context information that has been validated and circulated through the system must be storable (that is, only context information made available to components other than the one that originated it is stored). This is useful for logging purposes but also for functional reasons, because it allows the composition of an overall "current context" that can be consulted. It is assumed that this context information has a timestamp associated with it, and that timestamp must be stored along with it in the storage component. The information is stored in the format of Context Events, as this is the original format for the transmission of context information and also allows merging into an overall context model.
Only well-constructed information is stored. This means that at some point prior to storage, some reasoning must be performed to check the conformance of the ontological representation of the context data. This reasoning is done by the storage component if it has not already been done by the components in charge of communicating context information.
No other reasoning is required in the storage component (aggregation, accuracy, confidence, conflict resolution...). For conflict resolution when conflicting information has been stored, the component provides enough query facilities to let querying components discern and filter between the conflicting pieces of information. For this purpose, accuracy, confidence and other metadata associated with context information that help in this matter must also be stored.
Summarizing: reasoning for ontology conformance is performed when storing (or before), and conflict resolution is performed when querying. This way all information, conflicting or not, is stored, but wrongly constructed information is not.
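One plausible query-time conflict resolution, sketched below, is to prefer the newest stored statement and break ties by confidence. This policy and the `StoredEvent` shape are assumptions for illustration; the actual strategy is left to the querying component, which is exactly why the metadata must be stored.

```java
import java.util.Comparator;
import java.util.List;

/** Toy sketch of a stored Context Event's relevant metadata. */
class StoredEvent {
    long timestamp;    // when the event was sent
    double confidence; // how reliable the information is
    String object;     // the conflicting value, e.g. a location
    StoredEvent(long t, double c, String o) { timestamp = t; confidence = c; object = o; }
}

/** Conflict resolution happens at query time, not at storage time. */
class ConflictResolver {
    /** Among conflicting stored values for the same Subject-predicate,
     *  prefer the newest timestamp, breaking ties by higher confidence. */
    static StoredEvent resolve(List<StoredEvent> conflicting) {
        return conflicting.stream()
                .max(Comparator.comparingLong((StoredEvent e) -> e.timestamp)
                        .thenComparingDouble(e -> e.confidence))
                .orElse(null);
    }
}
```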
The storage of the information in this way must enable querying components to ask for the current context information, that is, information that represents the current status of all context items in the environment. It is recommended, though, that querying for this current information be made as easy as possible for querying components (by publishing convenient services for this purpose). There should also be a facility allowing querying components to use SPARQL over the stored ontological context data, although this should be restricted to certain trusted components.
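A trusted component could then issue queries of roughly the following shape over the stored events. The prefix and property names (`ctx:hasSubject` and so on) are illustrative assumptions, not the actual universAAL context ontology; the query is shown as a Java constant merely so it has a home.

```java
/**
 * Illustrative SPARQL query over stored context events: fetch the most
 * recent location reported for a given user. All URIs and property names
 * are assumptions for this sketch, not the real context ontology.
 */
class CurrentLocationQuery {
    static final String QUERY =
            "PREFIX ctx: <http://example.org/context#>\n" +
            "SELECT ?location WHERE {\n" +
            "  ?event ctx:hasSubject   <http://example.org/User#john> ;\n" +
            "         ctx:hasPredicate ctx:isInLocation ;\n" +
            "         ctx:hasObject    ?location ;\n" +
            "         ctx:timestamp    ?t .\n" +
            "} ORDER BY DESC(?t) LIMIT 1";
}
```

Selecting the newest matching event is what turns a log of Context Events into an answer about the current context.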
Finally, because stored data may compromise privacy, it has to be stored encrypted.
There is one central Manager playing the role of Context Store, and that is the Context History Entrepot.
One of the requirements of universAAL is to provide a reasoning facility that applications can use in a generic manner. This led to the development of different Reasoners. A Reasoner is an architectural component that delivers aggregated/derived context information in terms of the context ontology. Normally it registers with the providers of context information in order to use context data from lower levels, produce context data at a higher level, and publish it through the appropriate channel. There are two kinds of Reasoners:
- Special-purpose reasoners, which act as "experts" for deriving a specific (set of) context information.
- General-purpose reasoners, which can derive high-level context information from low-level data based on a rule repository.
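The rule-driven derivation of a general-purpose reasoner can be sketched as follows. The rule shape (a function from a low-level S-p-O statement to a derived statement, or `null` when the rule does not fire) is an assumption for illustration, not the universAAL rule repository format.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/**
 * Toy sketch of a general-purpose reasoner: rules from a repository map
 * low-level statements to derived higher-level ones. Not the universAAL API.
 */
class SketchReasoner {
    // A rule derives a higher-level {s, p, o} statement, or returns null.
    final List<Function<String[], String[]>> ruleRepository = new ArrayList<>();

    /** Apply every rule to an incoming low-level S-p-O statement and
     *  collect the derived statements to be re-published on the bus. */
    List<String[]> derive(String[] lowLevel) {
        List<String[]> out = new ArrayList<>();
        for (Function<String[], String[]> rule : ruleRepository) {
            String[] derived = rule.apply(lowLevel);
            if (derived != null) out.add(derived);
        }
        return out;
    }
}

class ReasonerDemo {
    public static void main(String[] args) {
        SketchReasoner r = new SketchReasoner();
        // Hypothetical rule: being in the kitchen suggests cooking.
        r.ruleRepository.add(st ->
                st[1].equals("isInLocation") && st[2].equals("Kitchen")
                        ? new String[] { st[0], "performsActivity", "Cooking" }
                        : null);
        for (String[] d : r.derive(new String[] { "User", "isInLocation", "Kitchen" }))
            System.out.println(d[0] + " " + d[1] + " " + d[2]);
    }
}
```

A special-purpose reasoner would instead hard-code one such derivation as an "expert" component.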
The following reasoners have been implemented and are available to use:
Regarding the management of user and profile data: this is one of the most requested kinds of information in the execution of AAL services, as has been confirmed in the input projects, so it is of great help to have a component that manages and centralizes these requests. Without it, it would still be possible to query for that information using the query capabilities of the Context Management building block, because the user profile is a subset of the context information. But with such a profiling component it is easier for other components, applications and future developments to get and update the desired profile information without dealing with context management directly. It also eases the task of introducing and maintaining user data.
However, since the user model is now part of the context model, this feature can also be helpful for the rest of the "configurable" context information. It makes sense to group components together in one place if their purpose and structure are similar. The uSpace Manager Tools (from now on called the Profile Manager block, composed of various components; not to be mistaken for the uSpace Management of the Middleware) provide services that allow access to and manipulation of the various kinds of profiles stored within the Context Store, e.g. device profiles, user profiles, uSpace profiles, etc. Accordingly, the component provides an interface that allows the creation of various profiles and the manipulation and deletion of existing ones. Since all data accessed and provided by the Profile Manager is stored in the Context Store, the Profile Manager acts as a special-purpose interface on top of the Context Store, tailored towards clients that want to manipulate profiles.
The methods it must provide include "get" and "set" operations on all related user profile data, or at least the most basic and most used parameters. These "get" and "set" operations should also allow editing existing information, or creating it if it did not exist. The "edit" and "create" options can be independent methods in case this is not allowed by the initial "get" and "set". It must always be possible to update profiling data without the Profiling component, so this data must be updatable using the Context Management infrastructure alone.
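The "get"/"set" style described above can be sketched as a minimal service. The class and method names are illustrative assumptions; in universAAL the data would ultimately live in the Context Store, not in an in-memory map.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy sketch of the "get"/"set" interface a Profile Manager could expose.
 * Not the universAAL API; the in-memory map stands in for the Context Store.
 */
class SketchProfileService {
    private final Map<String, Map<String, Object>> store = new HashMap<>();

    /** Returns the stored value, or null if the parameter was never set. */
    Object get(String profileUri, String parameter) {
        Map<String, Object> profile = store.get(profileUri);
        return profile == null ? null : profile.get(parameter);
    }

    /** Edits an existing parameter, or creates it if it did not exist. */
    void set(String profileUri, String parameter, Object value) {
        store.computeIfAbsent(profileUri, k -> new HashMap<>()).put(parameter, value);
    }
}

class ProfileDemo {
    public static void main(String[] args) {
        SketchProfileService s = new SketchProfileService();
        s.set("urn:ex:User#john", "preferredLanguage", "en"); // create
        s.set("urn:ex:User#john", "preferredLanguage", "es"); // edit
        System.out.println(s.get("urn:ex:User#john", "preferredLanguage"));
    }
}
```

Note that "set" here covers both "edit" and "create"; a real design may split them into separate methods, as the text allows.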
The Profile Manager components provide an interface for manipulating profiles. A profile is a collection of information about some entity. For example, if the entity is an oven or a fridge, the profile could contain information such as the manufacturer's name, the time of installation, the optimal temperature, etc.
The Profile Manager is expected to be used concurrently by different clients. This may lead to inconsistencies and conflicts in the information that is to be stored and accessed. Inconsistencies and conflicts might be further complicated by the dynamic nature of the environment of a uSpace, where different Profile Managers might come and go over time; this, in turn, requires a synchronization process that has to resolve inconsistencies and conflicts between data stored in different Profile Managers. Since data inconsistencies and conflicts are a problem of all persistent data stored in the Context Store, this problem is handled within the Context Store directly. The Profile Manager, however, will provide transactional semantics for typical profile operations like creation, deletion and updates (which may be non-atomic in terms of Context Store operations). Therefore, the Profile Manager interface is inherently thread-safe.
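Why transactional semantics matter can be sketched in a few lines: a profile update that spans several Context Store operations must not expose its intermediate state to concurrent clients. The locking scheme and names below are illustrative assumptions, not the Profile Manager's actual implementation.

```java
/**
 * Toy sketch of transactional, thread-safe profile operations.
 * A "replace" spans two non-atomic store operations, so they are
 * serialized under one lock. Not the universAAL implementation.
 */
class SketchTransactionalProfiles {
    private final Object lock = new Object();
    private int version = 0;

    /** Delete-then-recreate as one transaction: no client can observe
     *  the intermediate state between the two store operations. */
    void replaceProfile(Runnable deleteOp, Runnable createOp) {
        synchronized (lock) {
            deleteOp.run();   // first Context Store operation
            createOp.run();   // second Context Store operation
            version++;        // becomes visible only after both completed
        }
    }

    int getVersion() {
        synchronized (lock) { return version; }
    }
}
```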
There are two Managers that handle user and environment information profiles. They work in the same way, but are specialized in their respective domains:
- The Profiling Server: It will interface with the Context Store to provide accessible services for handling user-related profiles.
- The uSpace Server: It will interface with the Context Store to provide accessible services for handling uSpace-related profiles.