The main functional outcomes are described in the sections below as well as in the presentation given at the final review meeting in February 2019:
The functional architecture can be regarded as one of the main outcomes of the project, as everything else builds on top of it. It provides the definition of an interoperable city platform capable of integrating services from different cultures. The architecture can be reused as a whole, or individual components of the architecture can be integrated into other Smart City platforms. The architecture is divided into layers as shown in the figure on the right:
- IoT Data & Ingestion Layer: This layer represents the context information acquired by our platform. It follows the NGSI interface, ensuring a unified representation of information. Our main component responsible for this task is the IoT Broker, which, thanks to the IDAS module, ingests the information into the platform.
- Virtual Entity Layer: Taking advantage of the NGSI interface, our Context Broker aggregates context information into the virtual entities that the services and applications placed at the upper layers will use.
- Semantic Data & Integration Layer: This layer allows a richer representation of information and the expression of relations among the stored data.
- Knowledge Layer: Finally, this layer contains the Machine Learning (ML) component, which implements different algorithms for different purposes, such as extracting information from social media. This layer contains the aggregated and processed information that is exploited by Smart City upper-layer services and applications.
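To make the unified representation mentioned above concrete, the following sketch builds an NGSI v2-style context entity of the kind exchanged between the ingestion layer and the Context Broker. The entity id, type and attribute names are illustrative assumptions, not taken from the project's deployments.

```python
# Sketch of an NGSI v2-style context entity (illustrative payload only;
# the entity id, type and attribute names below are assumptions).
import json

def make_entity(entity_id: str, entity_type: str, **attrs):
    """Build a minimal NGSI v2 entity: each attribute carries a value and a type."""
    entity = {"id": entity_id, "type": entity_type}
    for name, value in attrs.items():
        ngsi_type = "Number" if isinstance(value, (int, float)) else "Text"
        entity[name] = {"value": value, "type": ngsi_type}
    return entity

room = make_entity("urn:ngsi:Room:001", "Room", temperature=21.5, humidity=48)
print(json.dumps(room, indent=2))
```

Because every attribute is wrapped as a value/type pair, upper-layer services can consume heterogeneous device data through one uniform schema.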
The two transversal features are the Platform Management Federation and Operation Pillar and the Security & Privacy Pillar.
- Platform Management Federation and Operation Pillar: As its name indicates, this pillar comprises the components that simplify deployment of the platform. The most remarkable component in this pillar is FogFlow, which allows for dynamic deployment in both the edge and the cloud layers.
- Security & Privacy Pillar: Finally, this pillar contains the enablers that guarantee the secure and private exchange of information by employing security mechanisms such as authentication and authorization. Privacy is also supported by providing a mechanism that encrypts the data stored in the platform and allows only the legitimate consumers to decrypt it.
In addition, this architecture is based on open and standard protocols and mechanisms, such as the adoption of NGSI for exchanging information over HTTP or the use of the XACML framework as a baseline for our authorization scheme. Although an alternative and also viable approach for developing the CPaaS.io platform would have been a development based on closed and proprietary technologies, we followed the philosophy that our main outcome should not be one enclosed product containing all the necessary components and enablers, but rather an open and federated platform capable of embracing different cultures and instantiation approaches, as we demonstrated with our FIWARE-based European and u2-based Japanese instantiations of the platform. The most significant outcome therefore is the common architecture, which is instantiated by both sides of this project.
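The idea behind the XACML baseline can be sketched in a few lines: a request carrying subject, resource and action attributes is evaluated against policy rules that yield Permit or Deny, combined with a first-applicable algorithm. The rules and attribute names below are invented for illustration, not the project's actual policies or the XACML XML syntax.

```python
# Minimal sketch of XACML-style attribute-based authorization:
# rules and attribute names are illustrative stand-ins only.
PERMIT, DENY = "Permit", "Deny"

policy = [
    # Each rule: (condition over the request attributes, effect).
    (lambda r: r["role"] == "admin", PERMIT),
    (lambda r: r["role"] == "citizen" and r["action"] == "read", PERMIT),
]

def evaluate(request, rules, default=DENY):
    """First-applicable combining algorithm: effect of the first matching rule."""
    for condition, effect in rules:
        if condition(request):
            return effect
    return default

print(evaluate({"role": "citizen", "action": "read"}, policy))   # Permit
print(evaluate({"role": "citizen", "action": "write"}, policy))  # Deny
```

Keeping the decision logic in declarative rules, as XACML does, lets the platform change authorization behaviour without touching the services that enforce it.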
Integration and federation of the FIWARE-based and u2-based platform instantiations is an important feature for interoperability. We have demonstrated two ways in which this can be done: firstly, an API-based mechanism, and secondly, federation of personal information in Personal Data Stores.
We demonstrated the API-based federation using a smart building monitoring use case, as shown in the figure on the right. Sensor data (room temperature, humidity, etc.) of a building at the University of Tokyo, connected to the u2-based platform instance, can be shown in a dashboard on the FIWARE-based instance, and vice versa (building information from Murcia being shown on the Japanese dashboard).
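The core of such a federation is a mapping between the two platforms' data representations. The sketch below maps a u2-style sensor reading onto an NGSI v2-style entity so that a FIWARE-based dashboard could display it; all field names are assumptions for illustration, and the actual federation API differs in detail.

```python
# Hedged sketch of the dashboard federation idea: a u2-style reading is
# rewritten as an NGSI v2-style entity. Field names are illustrative only.
def u2_to_ngsi(reading: dict) -> dict:
    """Map a u2-style sensor reading onto a minimal NGSI v2 entity."""
    return {
        "id": f"urn:ngsi:Building:{reading['building']}",
        "type": "Building",
        reading["quantity"]: {"value": reading["value"], "type": "Number"},
        "source": {"value": "u2-platform", "type": "Text"},
    }

entity = u2_to_ngsi({"building": "UTokyo-Eng2", "quantity": "temperature", "value": 22.3})
print(entity["id"])
```

The same mapping run in the opposite direction would let Murcia's building data appear on the Japanese dashboard.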
Since it is a concrete development for integrating the FIWARE-based and u2-based platforms, the niche targeted by this software is concrete and easy to identify. By distributing this software in an open-source manner through GitHub, other software platforms and users facing the same challenge will be able to reuse and possibly adapt this code.
Furthermore, for Linked Data based data sources, a SPARQL proxy built on an index-assisted query processing engine is used. SPARQL queries can be transparently executed against multiple SPARQL endpoints without the need for explicit federation via the SPARQL SERVICE keyword. The SPARQL proxy analyses the SPARQL query, transparently rewrites it and sends the sub-queries to the appropriate SPARQL endpoints. NGSI-LD will make it much easier to provide FIWARE data as Linked Data, and thus platforms built on NGSI-LD can greatly benefit from this SPARQL federation approach.
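The index-assisted routing idea can be illustrated as follows: each triple pattern of a query is assigned, via a predicate index, to the endpoint that holds matching data, producing one sub-query per endpoint. The index contents and endpoint URLs below are invented for illustration, and a real engine would also handle joins across endpoints.

```python
# Illustrative sketch of index-assisted sub-query routing in a SPARQL proxy.
# Index entries and endpoint URLs are hypothetical examples.
from collections import defaultdict

# Predicate -> endpoint index (which endpoint holds triples with this predicate).
INDEX = {
    "ex:temperature": "http://fiware.example/sparql",
    "ex:locatedIn":   "http://u2.example/sparql",
}

def route(triple_patterns):
    """Group a query's triple patterns into per-endpoint sub-queries."""
    subqueries = defaultdict(list)
    for s, p, o in triple_patterns:
        endpoint = INDEX.get(p)
        if endpoint:
            subqueries[endpoint].append((s, p, o))
    return dict(subqueries)

query = [("?room", "ex:temperature", "?t"), ("?room", "ex:locatedIn", "?building")]
for endpoint, patterns in route(query).items():
    print(endpoint, patterns)
```

Because the routing is derived from the index, the client writes one ordinary SPARQL query and never needs the SERVICE keyword.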
Personal Data Store (PDS)
Perhaps one of the most relevant use cases for the application of security and privacy is that of Personal Data Stores (PDS). The concept of a PDS is novel in terms of personal information management, since it empowers users with tools that allow them to control the way their personal information is managed and commercialized by online services, as presented in Figure 6.
In addition to exposing an API for registering services and accessing the stored personal information, it also presents an intuitive GUI that allows users to easily control how each specific detail/attribute of their personal information is disclosed.
The capability of federating PDSs is a further added value of this software component, allowing for a multi-domain solution where the user is placed at the centre, deciding at every moment which information is exposed.
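The attribute-level disclosure described above can be sketched as a simple consent filter: the user's settings decide which attributes of their profile each registered service may read. Attribute and service names here are illustrative assumptions, not the PDS's actual data model.

```python
# Sketch of per-attribute disclosure control in a Personal Data Store.
# Profile attributes and service names are illustrative only.
def disclose(profile: dict, consent: dict, service: str) -> dict:
    """Return only the attributes the user has allowed this service to read."""
    allowed = consent.get(service, set())
    return {attr: value for attr, value in profile.items() if attr in allowed}

profile = {"name": "Alice", "email": "alice@example.org", "heart_rate": 72}
consent = {
    "smart-parking": {"name"},                    # only the name is disclosed
    "health-dashboard": {"name", "heart_rate"},   # health data for this service only
}

print(disclose(profile, consent, "smart-parking"))
print(disclose(profile, consent, "unknown-service"))
```

An unregistered service receives nothing by default, which keeps the user's decision, not the service's request, as the source of truth.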
Again, thanks to an open strategy and by attending events focused on customers and/or smart cities, this software, or at least the need for this sort of solution, can be promoted. Furthermore, the adoption of open and standard protocols ensures that interested parties can use and deploy it, or even evolve it into a richer solution. This sort of solution can be of interest at different levels (local, regional or national), since it paves the way for new solutions where the users make the decisions about how their personal information is managed.
Data Quality Ontology
Re-using data requires information about the quality of the data. This helps potential users of the data to understand the scope and quality of what they might be using. This is especially important when data comes from sources like IoT devices, which might or might not work correctly. Between the device and the data storage endpoint there are also many potential pitfalls that could prevent data from being sent or arriving properly, so that it does not get stored as planned by the device provider.
Within the CPaaS.io project, the data quality ontology SEDAQ was developed, which allows data providers to specify what users can expect from the data. This covers features such as the device the data was collected from, the time synchronisation method used for the timestamps, and the frequency at which the data is expected to be available. This not only gives an idea of what to expect but also provides a way to detect if something went wrong. The SEDAQ ontology is based on the following models and standards:
- Basic W3C standards used for ontology modelling: RDF, RDFS, OWL.
- Existing vocabularies either directly related to the domain of sensors or related to machine-to-machine communication (M2M):
- Semantic Sensor Network (SSN)
- DCAT for static data set description and data catalogues.
- OneM2M and M2M Ontology
- PROV-O for data provenance information.
- Existing schemas from schema.org
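The kind of statement SEDAQ enables can be sketched as plain subject-predicate-object triples covering the three features mentioned above: collecting device, time synchronisation method, and expected frequency. The `sedaq:` property names below are illustrative stand-ins, not the actual SEDAQ vocabulary terms.

```python
# Hedged sketch of SEDAQ-style data-quality metadata as simple triples.
# The sedaq: property names are hypothetical placeholders.
def describe_dataset(dataset, device, sync_method, frequency_s):
    """Emit quality-metadata triples for a sensor dataset."""
    return [
        (dataset, "sedaq:collectedBy", device),                     # producing device
        (dataset, "sedaq:timeSyncMethod", sync_method),             # timestamp synchronisation
        (dataset, "sedaq:expectedFrequency", f"PT{frequency_s}S"),  # expected interval (ISO 8601 duration)
    ]

triples = describe_dataset("ex:roomTempData", "ex:sensor42", "NTP", 60)
for t in triples:
    print(t)
```

A consumer comparing the declared expected frequency against actual arrival times has a direct way to detect that something went wrong in the delivery chain.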